Why didn't OpenAI MIT-license Jukebox the same way they did CLIP?

This page summarizes the projects mentioned and recommended in the original post on reddit.com/r/OpenAI

  • jukebox

    Code for the paper "Jukebox: A Generative Model for Music"

    I didn't even know about it until I heard Sam Altman casually mention it in an interview. I was expecting some basic tune generator, but this is amazing! Yes, the voices aren't clear and the audio is muffled, but look at how far image models have progressed; if the same amount of collaborative effort were applied here, the results could be incredible. ElevenLabs has shown how good and clear AI-generated voices can sound. The only reason I can think of is that the Jukebox code is released under a restrictive non-commercial license rather than MIT.

  • CLIP

    CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image

    As many of you are probably aware, the CLIP research paper and its MIT-licensed open-source code underpin the entire generative image model industry. Let's not forget that it was only a little over two years ago that this image was completely mind-blowing, and now it looks basic. People took the research and the released code, and it took off like wildfire. But Jukebox, released a year earlier, never took off in the same way, even though it's just as revolutionary.
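
    To give a sense of how approachable the MIT-licensed release is, here is a minimal zero-shot classification sketch along the lines of the openai/CLIP README; the image path and the candidate labels are placeholder values, not anything from the original post:

        # Zero-shot image classification with OpenAI's MIT-licensed CLIP.
        # Install per the repo README: pip install ftfy regex tqdm
        #                              pip install git+https://github.com/openai/CLIP.git
        import torch
        import clip
        from PIL import Image

        device = "cuda" if torch.cuda.is_available() else "cpu"
        model, preprocess = clip.load("ViT-B/32", device=device)

        # "example.png" and the labels below are placeholder inputs.
        image = preprocess(Image.open("example.png")).unsqueeze(0).to(device)
        text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

        with torch.no_grad():
            # CLIP scores the image against each candidate caption;
            # softmax turns the similarity logits into probabilities.
            logits_per_image, logits_per_text = model(image, text)
            probs = logits_per_image.softmax(dim=-1).cpu().numpy()

        print("Label probabilities:", probs)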


