Why didn't OpenAI MIT-license Jukebox the same way they did CLIP?

This page summarizes the projects mentioned and recommended in the original post on /r/OpenAI

  • jukebox

    Code for the paper "Jukebox: A Generative Model for Music"

  • I didn't even know about it until I heard Sam Altman casually mention it in an interview. I was expecting some basic tune generator, but this is so amazing! Yeah, the voices aren't clear and the audio is muffled, but look at how far image models have progressed; if the same amount of collaborative effort were applied here, the results could be amazing. ElevenLabs showed how good and clear AI-created voices can sound. The only reason I can think of is that the Jukebox code is released under a view-only license.

  • CLIP

    CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image

  • As many of you are probably aware, the CLIP research paper and its open-source code, released under the MIT license, underpin the entire generative image model industry. Let's not forget that only a little over two years ago this image was completely mind-blowing, and now it looks basic. People took the research and the released code, and it took off like wildfire; the sketch below shows just how little code it takes to use. But Jukebox, released a year earlier, never took off, even though it's just as revolutionary.
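
To show how low that barrier was, here is a minimal zero-shot classification sketch against the MIT-licensed openai/CLIP repo, following the usage pattern its README documents. The "ViT-B/32" model name is a real released checkpoint; the image filename and candidate captions are placeholders invented for this example.

    import torch
    import clip  # pip install git+https://github.com/openai/CLIP.git
    from PIL import Image

    # Load a pretrained CLIP model together with its image preprocessor
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    # Encode one image and a few candidate text snippets
    # ("photo.jpg" and the captions are placeholder values)
    image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
    text = clip.tokenize(["a guitar", "a piano", "a drum kit"]).to(device)

    with torch.no_grad():
        # CLIP scores image-text similarity; softmax turns the scores
        # into probabilities over the candidate snippets
        logits_per_image, logits_per_text = model(image, text)
        probs = logits_per_image.softmax(dim=-1).cpu().numpy()

    print(probs)  # the highest probability marks the most relevant snippet

No training and no fine-tuning, just a checkpoint download and a forward pass, which goes a long way toward explaining why permissively licensed code was enough for the idea to spread.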
