| | Stable-Diffusion-WebUI-TensorRT | sd-webui-deforum |
|---|---|---|
| Mentions | 5 | 17 |
| Stars | 1,806 | 2,593 |
| Growth | 2.9% | 1.5% |
| Activity | 4.6 | 8.0 |
| Last commit | 6 days ago | 3 days ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Stable-Diffusion-WebUI-TensorRT
- Nvidia's TensorRT before and after
- Custom SDXL Turbo model + TensorRT (1024x1024 image, 3 sec on an RTX 3060 12 GB)
Install the TensorRT plugin: TensorRT for A1111
- PSA - TensorRT works with turbo models for even faster speeds
- Stable Diffusion Gets a Major Boost with RTX Acceleration
Here's a direct link to the extension that's referenced in the page. https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT
- Nvidia TensorRT Extension for Stable Diffusion Web UI
sd-webui-deforum
- p5.js Visual Art Composer GPT - enter a prompt, get p5.js code output
I’m thinking about constructing a file based on this default: https://github.com/deforum-art/sd-webui-deforum/blob/automatic1111-webui/scripts/default_settings.txt
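The default settings file linked above can serve as a base to build on. A minimal sketch of that idea, assuming the settings file is JSON (check the linked default_settings.txt for the exact format; the keys `W`, `H`, and `max_frames` below are illustrative stand-ins, not confirmed Deforum field names):

```python
import json
from pathlib import Path

def make_settings(default_path: str, out_path: str, overrides: dict) -> dict:
    """Load a default settings file (assumed JSON), apply overrides, save a copy."""
    settings = json.loads(Path(default_path).read_text())
    settings.update(overrides)
    Path(out_path).write_text(json.dumps(settings, indent=2))
    return settings

# Demo with a tiny stand-in default file; the real defaults have many more keys.
Path("default_settings.txt").write_text(
    json.dumps({"W": 512, "H": 512, "max_frames": 120})
)
new = make_settings("default_settings.txt", "my_settings.txt", {"max_frames": 440})
print(new["max_frames"])  # 440
```

Keeping overrides in a small dict like this makes it easy to regenerate the full settings file whenever the upstream defaults change.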
- This is not an infinite zoom.
I asked Audiocraft to make me a "chill hip-hop beat", then used framesync.xyz to make keyframes for the A1111 Deforum extension. Unfortunately, I don't have the settings file anymore, but it was pretty much just a 26 s clip at 15 fps (440 frames) with a single prompt, "a surreal painting by Magritte", and the usual negative-prompt magic voodoo. Then, for every clip, I used the last frame of the previous clip as the init frame. I render at 512x512 and then use ESRGAN4x to upscale to 2048x2048.
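Deforum animates parameters like zoom and angle with keyframe schedule strings of the form `frame: (value)`, which is the format tools like framesync.xyz export. A small generic sketch (a helper for illustration, not part of the extension) that builds such a schedule and computes the frame count for a clip of a given length:

```python
def keyframe_schedule(points: dict[int, float]) -> str:
    """Format {frame: value} pairs as a Deforum-style schedule string."""
    return ", ".join(f"{frame}: ({value})" for frame, value in sorted(points.items()))

def total_frames(seconds: float, fps: int) -> int:
    """Number of frames needed for a clip of the given length."""
    return round(seconds * fps)

zoom = keyframe_schedule({0: 1.0, 60: 1.05, 120: 1.0})
print(zoom)                   # 0: (1.0), 60: (1.05), 120: (1.0)
print(total_frames(10, 15))   # 150
```

Generating the schedule programmatically makes it easy to sync keyframes to beats: compute the frame index for each beat timestamp and emit one `frame: (value)` pair per beat.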
- The Flashbulb - Passage D (unofficial experiment with AI video)
https://github.com/deforum-art/sd-webui-deforum - for video generation
- How can I replicate this kind of video using lyrics as prompts?
This kind of animation can be done with Deforum, a rather advanced add-on for Stable Diffusion. Be aware, though, that you will really have to work yourself into that one. Also, as a heads-up, you probably won't be able to just use existing lyrics as prompts; rather, create your different prompts based on the lyrics.
- Projects of AI tools for creating inbetween frames of 2D animations
- Viral AI Art Video on Instagram
The tools you need for that are Stable Diffusion (e.g., with the Automatic1111 UI), in which you install Deforum. The GitHub pages for both outline the installation process. Here's also a decent tutorial to get you started with Deforum, as it may seem a bit complex at first: Link
- Gunship posed the question "what happens after we die?" to image-generating artificial intelligence software... These are the results.
- Txt-to-Video with GEN-2 AI
There's also video-to-video with Deforum, and text-to-video with Deforum and ModelScope. I am not really that familiar with this, as I only started recently, so don't blindly believe this.
- Updating her look to something more pink... (Deforum Morph)
Created with SD 1.5, Automatic1111 UI, Deforum plugin. This was attempt number 10 or so of this prompt, with lots of small settings tweaks between.
- How do I load LoRA models after downloading from the Civitai browser
What are some alternatives?
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
sd-webui-text2video - Auto1111 extension implementing text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
ebsynth - Fast Example-based Image Synthesis and Style Transfer
textext - Re-editable LaTeX/Typst graphics for Inkscape
images-grid-comfy-plugin - A simple ComfyUI plugin for image grids (X/Y Plot)
stable-diffusion-webui-tensorrt
stable-diffusion-webui-normalmap-script - Normal Maps for Stable Diffusion WebUI
VideoCrafter - VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
stable-diffusion - A latent text-to-image diffusion model
Real-ESRGAN-ncnn-vulkan - NCNN implementation of Real-ESRGAN. Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration.
audiocraft - Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.