| | RAVE | riffusion |
|---|---|---|
| Mentions | 1 | 13 |
| Stars | 213 | 3,191 |
| Growth | 5.6% | 2.2% |
| Activity | 7.2 | 2.0 |
| Latest commit | about 1 month ago | 24 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed, with recent commits weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
RAVE
-
RAVE has been released!
New preprint alert! Introducing RAVE - a zero-shot, lightweight, and fast framework for text-guided video editing that supports videos of any length using pretrained text-to-image diffusion models.
Project webpage: https://rave-video.github.io
ArXiv: https://arxiv.org/abs/2312.04524
More examples: https://rave-video.github.io/supp/supp.html
Code: https://github.com/rehg-lab/RAVE
Demo: https://github.com/rehg-lab/RAVE/blob/main/demo_notebook.ipynb
riffusion
-
You know what I REALLY want? Something like img2img but for sound/music.
Why don't you try Riffusion? https://github.com/riffusion/riffusion The general idea is to convert your audio waveform into an image (a spectrogram).
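The waveform-to-spectrogram step mentioned above can be sketched with a short-time Fourier transform in plain NumPy. The function name, window size, and 8-bit scaling here are illustrative choices, not Riffusion's actual parameters:

```python
import numpy as np

def waveform_to_spectrogram(wave, n_fft=512, hop=128):
    """Turn a mono waveform into a magnitude spectrogram 'image'."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(wave) - n_fft + 1, hop):
        # windowed FFT of each short frame; keep only magnitudes
        frames.append(np.abs(np.fft.rfft(wave[start:start + n_fft] * window)))
    spec = np.array(frames).T  # shape: (freq_bins, time_frames)
    # log-scale and normalize to 0-255 so it can be saved as a grayscale image
    img = np.log1p(spec)
    return (255 * img / img.max()).astype(np.uint8)

# one second of a 440 Hz tone sampled at 22,050 Hz
sr = 22050
t = np.arange(sr) / sr
img = waveform_to_spectrogram(np.sin(2 * np.pi * 440 * t))
```

The resulting array can be written out as a grayscale PNG, which is the kind of image a diffusion model is then trained on.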
- Interpolation between 2 seed images
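Interpolating between two seed images happens in the diffusion model's latent space, and a common way to blend two latents is spherical linear interpolation (slerp). A minimal sketch, assuming NumPy arrays as latents (the function and shapes are illustrative, not Riffusion's actual code):

```python
import numpy as np

def slerp(t, v0, v1):
    """Spherical linear interpolation between two latent tensors."""
    v0f, v1f = v0.ravel(), v1.ravel()
    dot = np.clip(
        np.dot(v0f, v1f) / (np.linalg.norm(v0f) * np.linalg.norm(v1f)),
        -1.0, 1.0,
    )
    theta = np.arccos(dot)
    if np.isclose(np.sin(theta), 0.0):
        return (1 - t) * v0 + t * v1  # (anti)parallel vectors: fall back to lerp
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# blend two hypothetical seed latents halfway
a = np.random.default_rng(0).standard_normal((4, 64, 64))
b = np.random.default_rng(1).standard_normal((4, 64, 64))
blend = slerp(0.5, a, b)
```

Sweeping `t` from 0 to 1 and decoding each blended latent yields the smooth audio transitions between two seeds.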
-
Just heard of "superdub", an AI music creator. I am searching for local models to use on my computer.
riffusion - uses Stable Diffusion to generate spectrograms as images and converts them into audio. There's an online demo you can try. Not sure how easy it is to make full-length songs, as each generated image is only about 5 seconds of audio; the demo does this to some extent.
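The "converts them into audio" step needs a phase estimate, because the generated image holds only magnitudes; Riffusion handles this with the Griffin-Lim algorithm. A minimal sketch using SciPy (the parameters are illustrative, not Riffusion's actual settings):

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_iter=32, nperseg=512, noverlap=256):
    """Recover a waveform from a magnitude-only spectrogram by
    iteratively re-estimating the missing phase (Griffin-Lim)."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))  # random initial phase
    for _ in range(n_iter):
        # invert with the current phase guess, then take the phase
        # of the re-analysed signal as the next guess
        _, wave = istft(mag * phase, nperseg=nperseg, noverlap=noverlap)
        _, _, spec = stft(wave, nperseg=nperseg, noverlap=noverlap)
        phase = np.exp(1j * np.angle(spec))
    _, wave = istft(mag * phase, nperseg=nperseg, noverlap=noverlap)
    return wave
```

Each iteration enforces the target magnitudes while keeping only the phase of the re-analysed signal, so the spectrogram of the output converges toward the generated image.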
-
Ask HN: What audio/sound-related OSS projects can I contribute to?
Stable diffusion for real-time music generation:
https://github.com/riffusion/riffusion
https://github.com/riffusion/riffusion-app
-
Choppy transition
I'm using the (vanilla?) Riffusion + app: https://github.com/riffusion/riffusion and https://github.com/riffusion/riffusion-app
-
LLWCHほこり - AIwave // more AI-generated vaporwave
Riffusion: https://github.com/riffusion/riffusion
-
Downloading songs?
There's a riffusion app you can run locally.
-
[P] Potential ML models for music generation that might run on CPU or low end GPU
Well, if you've got Stable Diffusion running, you should be able to run Riffusion.
- Riffusion v0.3.0 - Stable diffusion for music and audio
- Riffusion Release v0.3 – Stable Diffusion for audio
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
SBEMU - legacy sound blaster emulation for DOS
riffusion-inference - Stable diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion]
sd-webui-riffusion - Riffusion extension for AUTOMATIC1111's SD Web UI
DSP.jl - Filter design, periodograms, window functions, and other digital signal processing functionality
StableFusion - Transform text into images and images into new ones using AI. Our user-friendly web app, built with Diffusion, Python, and Streamlit, offers customizable outputs in various styles and formats
Dplug - Audio plugin framework. VST2/VST3/AU/AAX/LV2 for Linux/macOS/Windows.
ControlNet-for-Diffusers - Use ControlNet with any base model in diffusers 🔥
codal-core
bark - 🔊 Text-Prompted Generative Audio Model
StableDiffusionTelegram - a Telegram bot that allows generating images using the Stable Diffusion AI, in a much more comfortable and simple way.