codal-core vs riffusion

| | codal-core | riffusion |
|---|---|---|
| Mentions | 1 | 13 |
| Stars | 11 | 3,201 |
| Growth | - | 2.5% |
| Activity | 7.3 | 2.0 |
| Last commit | 29 days ago | 29 days ago |
| Language | C++ | Python |
| License | MIT License | MIT License |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
codal-core
-
Ask HN: What audio/sound-related OSS projects can I contribute to?
If you're into the embedded/edu/STEM/STEAM space at all, the micro:bit v2 audio codebase is being integrated more widely this year. Expanding our audio processing components could be a fun, bounded project if you want something smaller but reasonably high-impact.
See https://github.com/lancaster-university/codal-microbit-v2 for the ecosystem, or https://github.com/lancaster-university/codal-core/tree/mast... for the relevant section of the API.
If you're interested, prod me on GitHub (JohnVidler).
riffusion
-
You know what I REALLY want? Something like img2img but for sound/music.
Why don't you try Riffusion? https://github.com/riffusion/riffusion The general idea is to convert your audio waveform into an image (spectrogram).
- Interpolation between 2 seed images
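The "interpolation between 2 seed images" trick generally works by blending the seeds' latent vectors rather than their pixels, typically with spherical linear interpolation (slerp). A minimal numpy sketch of slerp; the function name and flat-vector representation are illustrative assumptions, not Riffusion's actual API:

```python
import numpy as np

def slerp(v0, v1, t):
    """Spherical linear interpolation between two latent vectors.

    Blends along the arc between v0 and v1 instead of the straight
    line, which tends to keep interpolated diffusion latents on the
    same "shell" of the latent distribution.
    """
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.isclose(theta, 0.0):
        # Nearly parallel vectors: fall back to plain linear blend.
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Blending two orthogonal unit "latents" halfway stays on the unit sphere.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
mid = slerp(e1, e2, 0.5)
```

Decoding a sequence of `slerp(a, b, t)` latents for `t` from 0 to 1 is what gives the smooth morph between two seed clips.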
-
Just heard of "superdub", an AI music creator. I am searching for LOCAL models to use on my computer.
riffusion - uses stable diffusion to generate spectrograms as images and converts them into audio. There's an online demo you can try. Not sure how easy it is to make full-length songs, as each generated image is about 5 seconds of audio. The demo sort of does it.
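The image-to-audio step above hinges on inverting a magnitude spectrogram, which carries no phase information. A minimal sketch of that round trip using SciPy's STFT plus a basic Griffin-Lim phase-recovery loop; this illustrates the general idea only and is not Riffusion's actual pipeline (parameter choices here are arbitrary assumptions):

```python
import numpy as np
from scipy.signal import stft, istft

def audio_to_spectrogram(y, sr, nperseg=512):
    """Magnitude spectrogram -- the 'image' side of the round trip."""
    _, _, Z = stft(y, fs=sr, nperseg=nperseg)
    return np.abs(Z)  # phase is discarded, just like in a rendered image

def griffin_lim(mag, sr, length, nperseg=512, n_iter=32):
    """Recover a waveform from magnitude alone by iteratively
    re-estimating a consistent phase (Griffin-Lim)."""
    phase = np.exp(2j * np.pi * np.random.rand(*mag.shape))
    for _ in range(n_iter):
        _, y = istft(mag * phase, fs=sr, nperseg=nperseg)
        y = y[:length]  # trim STFT padding to keep shapes stable
        _, _, Z = stft(y, fs=sr, nperseg=nperseg)
        phase = np.exp(1j * np.angle(Z))
    return y

np.random.seed(0)
sr = 22050
t = np.linspace(0.0, 1.0, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440.0 * t)  # 1 second of a 440 Hz tone

mag = audio_to_spectrogram(tone, sr)
recon = griffin_lim(mag, sr, length=len(tone))
```

Because phase must be re-estimated, the reconstruction is close but not identical to the original, which is one reason short generated clips can sound slightly smeared at the seams when stitched into longer songs.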
-
Ask HN: What audio/sound-related OSS projects can I contribute to?
Stable diffusion for real-time music generation:
https://github.com/riffusion/riffusion
https://github.com/riffusion/riffusion-app
-
Choppy transition
I'm using the (Vanilla?) Riffusion + App https://github.com/riffusion/riffusion https://github.com/riffusion/riffusion-app
-
LLWCHほこり - AIwave // more AI-generated vaporwave
Riffusion: https://github.com/riffusion/riffusion
-
Downloading songs?
There's a riffusion app you can run locally.
-
[P] Potential ML models for music generation that might run on CPU or low end GPU
Well, if you've got Stable Diffusion running, you should be able to run Riffusion.
- Riffusion v0.3.0 - Stable diffusion for music and audio
- Riffusion Release v0.3 – Stable Diffusion for audio
What are some alternatives?
riffusion-app - Stable diffusion for real-time music generation (web app)
SBEMU - legacy sound blaster emulation for DOS
FXcursion - Guitar processor prototype
codal-microbit-v2 - CODAL target for the micro:bit v2.x series of devices
riffusion-inference - Stable diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion]
DSP.jl - Filter design, periodograms, window functions, and other digital signal processing functionality
sd-webui-riffusion - Riffusion extension for AUTOMATIC1111's SD Web UI
faustideas - A central place for Faust GSoC proposals, todo list and new ideas
StableFusion - Transform text into images and images into new ones using AI. Our user-friendly web app, built with Diffusion, Python, and Streamlit, offers customizable outputs in various styles and formats