| | community-events | jukebox |
|---|---|---|
| Mentions | 8 | 129 |
| Stars | 379 | 7,580 |
| Growth | 2.1% | 0.5% |
| Activity | 7.2 | 0.0 |
| Last commit | 5 months ago | 1 day ago |
| Language | Jupyter Notebook | Python |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
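The exact weighting scheme behind the activity number isn't published here; as an illustration only, a recency-weighted score of this kind can be sketched with exponential decay over commit ages (the function name and the half-life parameter are made up for this sketch, not the site's actual formula):

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age / half_life), so recent commits count far more
    than old ones."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Three recent commits outscore three old ones.
recent = activity_score([1, 5, 10])     # ~2.66
stale = activity_score([300, 400, 500]) # ~0.001
print(recent > stale)  # True
```

Ranking projects by such a score and reporting the percentile would yield a relative number like the 9.0 ("top 10%") example above.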
community-events
-
Controlling Stable Diffusion with JAX & Diffusers using TPU v4
The best applications to come out of this sprint will receive prizes. You can find more information here. To get started, simply join huggingface.co/discord, take the 🧨 Diffusers role, head to #jax-diffusers-ideas to share your idea or join one of the teams, and fill out this form: https://forms.gle/t3M7aNPuLL9V1sfa9
-
JAX & Diffusers to Control Stable Diffusion (with TPUs ⚡️ )
It will start on the 17th of April. To join us, head to huggingface.co/join/discord and take the Diffusers role from #role-assignment. After this, simply fill out the form provided in this guide to get access to TPUs later. https://github.com/huggingface/community-events/tree/main/jax-controlnet-sprint
- “Control Stable Diffusion” Sprint kicks off with free TPU-v4 from Google
-
Free compute to train custom ControlNet by Hugging Face
Details and sign-up: https://github.com/huggingface/community-events/tree/main/jax-controlnet-sprint
-
How can I create a dataset to refine Whisper AI from old videos with subtitles?
For the training, I highly recommend checking out the Whisper Fine-Tuning Event. It has a Python script to train in one command, tons of tips, and even a walkthrough video.
- I am using OpenAI's Whisper transcription/translation model. I am wondering if I can improve its performance by optimizing the audio files somehow. What features of audio files should I look into to make the Whisper model perform better?
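One concrete property worth looking at: Whisper operates on 16 kHz mono audio, so converting inputs to clean 16 kHz mono PCM up front removes one source of variation. A minimal stdlib-only sketch (the function name is mine, and the naive linear-interpolation resampler is a stand-in for a proper resampling library):

```python
import struct
import wave

def to_mono_16k(in_path, out_path, target_rate=16000):
    """Downmix a 16-bit PCM WAV to mono and resample it to 16 kHz
    by linear interpolation (Whisper consumes 16 kHz mono audio)."""
    with wave.open(in_path, "rb") as w:
        n_ch, width, rate = w.getnchannels(), w.getsampwidth(), w.getframerate()
        assert width == 2, "sketch handles 16-bit PCM only"
        frames = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    # Downmix: average the interleaved channels of each frame.
    mono = [sum(samples[i:i + n_ch]) // n_ch
            for i in range(0, len(samples), n_ch)]
    # Naive resample: step through the source at the rate ratio
    # and linearly interpolate between neighboring samples.
    ratio = rate / target_rate
    out = []
    pos = 0.0
    while pos < len(mono) - 1:
        lo = int(pos)
        frac = pos - lo
        out.append(int(mono[lo] * (1 - frac) + mono[lo + 1] * frac))
        pos += ratio
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(target_rate)
        w.writeframes(struct.pack("<%dh" % len(out), *out))
```

A dedicated resampler (e.g. ffmpeg) will do this with better quality; the sketch just shows which properties matter: sample rate, channel count, and bit depth.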
-
[N] Gradio Blocks + Hugging Face event is starting this week. A hackathon type event from May 17th to May 31st with prizes in which we will create interactive web demos for state-of-the-art machine learning models
We are happy to invite you to the Gradio Blocks Party - a community event in which we will create interactive demos for state-of-the-art machine learning models. Demos are powerful because they allow anyone — not just ML engineers — to try out models in the browser, give feedback on predictions, and identify trustworthy models. The event will take place from May 17th to 31st. We will be organizing this event on GitHub and the Hugging Face Discord channel. Prizes will be given at the end of the event; see the Prizes section.
-
Dall-E 2
If you're interested in generative models, Hugging Face is putting on an event around generative models right now called the HugGAN sprint, where they're giving away free access to compute to train models like this.
You can join it by following the steps in the guide here: https://github.com/huggingface/community-events/tree/main/hu...
There will also be talks from awesome folks at EleutherAI, Google, and DeepMind.
jukebox
-
Open Source Libraries
openai/jukebox: Music Generation
- Will AI be able to create similar-sounding music based on input?
-
Best model for music generation?
https://github.com/openai/jukebox The demo code is there.
-
Why didn't OpenAI MIT license Jukebox the same way they did CLIP?
I didn't even know about it until I heard Sam Altman casually mention it in an interview. I was expecting some basic tune generator, but this is so amazing! Yeah, the voices are not clear and it's muffled, but look at how far image models have progressed; if you applied the same amount of collaborative effort here, the results could be amazing! ElevenLabs showed how good and clear AI-created voices can sound. The only reason I can think of is that the Jukebox code is under a view-only license.
-
[R] [N] Noise2Music - Diffusion models for generating high quality music audio from text prompts, by Google Research
OpenAI had this figured out 3 years ago: https://openai.com/blog/jukebox/. You could even define your own text. The model is open source too.
-
Is music next?
They've had Jukebox for a few years now, so I'm sure some new model will get released and explode overnight, like ChatGPT did.
-
Mongolian Gabba Goat Techno
That already exists
- OpenAI's ongoing success: and how they came to create the most advanced AI of 2023, ChatGPT.
-
Implementation of Google's MusicLM in PyTorch
This model is designed to output raw audio.
However, there are many models which do output MIDI. That's actually much simpler, and was already done a few years ago.
I thought OpenAI did this. But then, I might misremember, because their Jukebox actually also seems to produce raw audio (https://openai.com/blog/jukebox/).
However, MIDI generation is so easy that you even find it in some tutorials: https://www.tensorflow.org/tutorials/audio/music_generation
- FREE AI THINGS
What are some alternatives?
dalle-2-preview
lucid-sonic-dreams
dalle-mini - DALL·E Mini - Generate images from a text prompt
ultimatevocalremovergui - GUI for a Vocal Remover that uses Deep Neural Networks.
bevy_retro - Plugin pack for making 2D games with Bevy
spleeter - Deezer source separation library including pretrained models.
lm-human-preferences - Code for the paper Fine-Tuning Language Models from Human Preferences
music-demixing-challenge-starter-kit - Starter kit for getting started in the Music Demixing Challenge.
gpt-3 - GPT-3: Language Models are Few-Shot Learners
glide-text2im - GLIDE: a diffusion-based text-conditional image synthesis model
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models