make-a-video-pytorch vs jukebox

| | make-a-video-pytorch | jukebox |
|---|---|---|
| Mentions | 6 | 129 |
| Stars | 1,843 | 7,580 |
| Growth | - | 0.6% |
| Activity | 3.4 | 0.0 |
| Latest commit | 8 days ago | 8 days ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
make-a-video-pytorch
- How do I get this Python machine learning source code file to run?
- Imagic ( Google's Text-Based Image Editing ) implemented in Stable Diffusion
- An AI that generates videos from text! | Make-A-Video Explained
  ► Pytorch implementation: https://github.com/lucidrains/make-a-video-pytorch
- New text2video and img2video model from Meta - someone implement this with SD please
- Lucidrains / Make-a-Video-PyTorch
- Make-A-Video is a state-of-the-art AI system that generates videos from text
  Amazing. And lucidrains is on the case as well: https://github.com/lucidrains/make-a-video-pytorch
jukebox
- Open Source Libraries
  openai/jukebox: Music Generation
- Will AI be able to create similar sounding music based off input?
- Best model for music generation?
  https://github.com/openai/jukebox The demo code is there.
- Why didn't OpenAI MIT license Jukebox the same way they did CLIP?
  I didn't even know about it until I heard Sam Altman casually mention it in an interview. I was expecting some basic tune generator, but this is so amazing! Yes, the voices are not clear and it's muffled, but look at how far image models have progressed; if you applied the same amount of collaborative effort here, the results could be amazing! ElevenLabs showed how good and clear AI-created voices can sound. The only reason I can think of is that the Jukebox code is under a view-only license.
- [R] [N] Noise2Music - Diffusion models for generating high-quality music audio from text prompts, by Google Research
  OpenAI had this figured out 3 years ago: https://openai.com/blog/jukebox/. You could then even define your own text. The model is open source too.
- Is music next?
  They've had Jukebox for a few years now, so I'm sure some new model will get released and explode overnight, like ChatGPT did.
- Mongolian Gabba Goat Techno
  That already exists.
- OpenAI's continued success: and how they came to create the most advanced AI of 2023, ChatGPT.
- Implementation of Google's MusicLM in PyTorch
  This model is designed to output raw audio. However, there are many models that output MIDI instead. That's actually much simpler, and was done a few years ago. I thought OpenAI did this, but I might misremember, because their Jukebox also seems to produce raw audio (https://openai.com/blog/jukebox/). MIDI generation is so easy that you can even find it in tutorials: https://www.tensorflow.org/tutorials/audio/music_generation
- FREE AI THINGS
What are some alternatives?
NeROIC
lucid-sonic-dreams
video-diffusion-pytorch - Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to Video Generation - in Pytorch
ultimatevocalremovergui - GUI for a Vocal Remover that uses Deep Neural Networks.
Clip-Forge
spleeter - Deezer source separation library including pretrained models.
text2mesh - 3D mesh stylization driven by a text input in PyTorch
music-demixing-challenge-starter-kit - Starter kit for getting started in the Music Demixing Challenge.
DALLE2-video - Direct application of DALLE-2 to video synthesis, using factored space-time Unet and Transformers
dalle-mini - DALL·E Mini - Generate images from a text prompt
stable-diffusion
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models