StableDiffusionTelegram vs riffusion-inference

| | StableDiffusionTelegram | riffusion-inference |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 122 | 714 |
| Growth | - | - |
| Activity | 10.0 | 10.0 |
| Latest commit | over 1 year ago | over 1 year ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
StableDiffusionTelegram
How do I make a Telegram bot like the anime AI bots, but using a fine-tuned model instead of 'qq web 2d'? As a newbie, please walk me through it. Right now I'm trying to build my own custom model, but I don't know how these Telegram/Discord bots work. Any source would be extremely helpful.
My general advice is to find an open-source project that already does what you want. (I found this one, but there may be others that are more suitable.) Then fork the project and make whatever changes you need for your own purposes.
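At the protocol level, a Telegram bot is just a process polling the Bot API over HTTPS: it calls `getUpdates`, runs your model on each incoming prompt, and replies. Below is a minimal stdlib-only sketch of that loop to show the moving parts; the image-generation step is a placeholder, and the token is whatever @BotFather issues you.

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.telegram.org"

def api_url(token: str, method: str) -> str:
    # Bot API endpoints all have the form https://api.telegram.org/bot<token>/<method>
    return f"{API_BASE}/bot{token}/{method}"

def get_updates(token: str, offset=None, timeout: int = 30) -> list:
    # Long-poll for new messages; `offset` acknowledges updates already processed.
    params = {"timeout": timeout}
    if offset is not None:
        params["offset"] = offset
    url = api_url(token, "getUpdates") + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url, timeout=timeout + 10) as resp:
        return json.load(resp)["result"]

def run_bot(token: str) -> None:
    offset = None
    while True:
        for update in get_updates(token, offset):
            offset = update["update_id"] + 1
            message = update.get("message") or {}
            prompt = message.get("text", "")
            chat_id = message.get("chat", {}).get("id")
            if not prompt or chat_id is None:
                continue
            # Placeholder: run your fine-tuned diffusion pipeline on `prompt`,
            # then upload the result with the sendPhoto method (multipart upload).
            print(f"would generate an image for chat {chat_id}: {prompt!r}")
```

Libraries such as python-telegram-bot or aiogram wrap this same API with handler abstractions, which is what most open-source bot projects build on.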
riffusion-inference
Looping/Interpolation
Riffusion Manipulation Tools
Just put the generated image into the inference server's seed_images folder, replacing one of the existing seed images: https://github.com/hmartiro/riffusion-inference/tree/main/seed_images
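That swap can be scripted. Here is a small stdlib-only sketch that copies a generated spectrogram image into the repo's seed_images folder; the function name and arguments are mine, not the project's.

```python
import shutil
from pathlib import Path

def install_seed_image(generated_png: str, repo_root: str, name: str) -> Path:
    # Copy a generated spectrogram image into the inference server's
    # seed_images folder so it can be used for looping/interpolation.
    dest_dir = Path(repo_root) / "seed_images"
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / f"{name}.png"
    shutil.copyfile(generated_png, dest)
    return dest
```

After copying, restart or re-point the inference server so it picks up the new seed image.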
Am I understanding this right?
I think so. This is the specific code they say they use to do the image-to-audio conversion.
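The image-to-audio step treats the image's pixel intensities as spectrogram magnitudes and reconstructs a waveform by recovering the missing phase with Griffin–Lim iteration. Below is a minimal NumPy sketch of that general technique; the window and hop sizes are illustrative, not the project's exact parameters.

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    # Short-time Fourier transform: windowed frames -> (freq_bins, n_frames).
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1).T

def istft(S, n_fft=512, hop=128):
    # Inverse STFT via windowed overlap-add with squared-window normalization.
    win = np.hanning(n_fft)
    frames = np.fft.irfft(S.T, axis=1) * win
    out = np.zeros(hop * (len(frames) - 1) + n_fft)
    norm = np.zeros_like(out)
    for i, frame in enumerate(frames):
        out[i * hop:i * hop + n_fft] += frame
        norm[i * hop:i * hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-8)

def griffin_lim(mag, n_iter=32, n_fft=512, hop=128):
    # Iteratively estimate a phase consistent with the given magnitudes:
    # start from random phase, resynthesize, re-analyze, keep only the phase.
    phase = np.exp(2j * np.pi * np.random.rand(*mag.shape))
    for _ in range(n_iter):
        audio = istft(mag * phase, n_fft, hop)
        S = stft(audio, n_fft, hop)
        phase = np.exp(1j * np.angle(S))
    return istft(mag * phase, n_fft, hop)
```

The linked riffusion code does the same job with torchaudio's spectrogram utilities rather than hand-rolled FFTs, plus the project's own pixel-to-magnitude scaling.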
Stable Diffusion fine-tuned to generate Music — Riffusion
I'm still reading, but it looks like they're doing some extra pre- and post-processing: https://github.com/hmartiro/riffusion-inference
Riffusion – Stable Diffusion fine-tuned to generate Music
Yes from https://huggingface.co/runwayml/stable-diffusion-v1-5. Our checkpoint works with automatic1111, and if you'd like to make an extension to decode to audio, it should be pretty straightforward: https://github.com/hmartiro/riffusion-inference/blob/main/ri...
What are some alternatives?
riffusion-manipulation - tools to manipulate audio with riffusion
sd-webui-riffusion - Riffusion extension for AUTOMATIC1111's SD Web UI
stable-diffusion-pytorch - Yet another PyTorch implementation of Stable Diffusion (probably easy to read)
riffusion-app - Stable diffusion for real-time music generation [Moved to: https://github.com/riffusion/riffusion-app]
Ckpt2Diff - This user-friendly wizard is used to convert a Stable Diffusion Model from CKPT format to Diffusers format.
spleeter - Deezer source separation library including pretrained models.
caption-upsampling - This repository implements the idea of "caption upsampling" from DALL-E 3 with Zephyr-7B and gathers results with SDXL.
riffusion - Stable diffusion for real-time music generation
stable-diffusion-docker - Run the official Stable Diffusion releases in a Docker container with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint.
audio-diffusion - Apply diffusion models using the new Hugging Face diffusers package to synthesize music instead of images.
onnx-web - web UI for GPU-accelerated ONNX pipelines like Stable Diffusion, even on Windows and AMD
collage-diffusion-ui - An open source, layer-based web interface for Collage Diffusion - use a familiar Photoshop-like interface and let the AI harmonize the details.