sd-webui-stablesr vs stable-diffusion-webui-vid2vid

| | sd-webui-stablesr | stable-diffusion-webui-vid2vid |
|---|---|---|
| Mentions | 4 | 2 |
| Stars | 968 | 39 |
| Growth | - | - |
| Activity | 5.2 | 10.0 |
| Latest commit | 6 months ago | about 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sd-webui-stablesr
-
Upscaling Realism of an Image
I've been interested in this sort of thing recently too. This is an outstanding upscaler that uses SD2.1 to add detail to the image: https://github.com/pkuliyi2015/sd-webui-stablesr - an A1111 extension that is hardly ever mentioned. The developers have SDXL-trained models listed on their todo/coming-soon list. Heavy on resources, but for me the results have been excellent. There are probably other better-known options (like a complicated ComfyUI workflow you might be able to just download).
-
Upscaling - not sure what's going wrong
- Try with StableSR, it can add lots of details (https://github.com/pkuliyi2015/sd-webui-stablesr) this is my upscaler method of choice.
-
Optimization tips for 4GB vram gpu?
Then I upscale to 4k using StableSR+Tiled Diffusion+Tiled VAE (https://github.com/pkuliyi2015/sd-webui-stablesr) (I used to use Ultimate SD Upscaler)
-
A Simple Comparison of 4 Latest Image Upscaling Strategy in Stable Diffusion WebUI
StableSR (for webui): https://github.com/pkuliyi2015/sd-webui-stablesr.git
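The StableSR + Tiled Diffusion + Tiled VAE combination mentioned above works on low-VRAM GPUs because the image is processed as overlapping tiles, so memory use scales with the tile size rather than the full 4k output. The following is a minimal sketch of that tiling idea only (coordinate boxes with overlap); the actual extensions additionally blend the tiles in latent space, which is not shown here, and the default sizes are illustrative assumptions.

```python
def tile_coords(width, height, tile=512, overlap=64):
    """Return (x0, y0, x1, y1) boxes that cover the image with overlap.

    Each box is at most `tile` pixels on a side, so a large image can be
    processed piece by piece within a fixed VRAM budget. Neighbouring
    boxes overlap by `overlap` pixels so their seams can be blended.
    """
    stride = tile - overlap

    def starts(size):
        # Offsets along one axis; the last tile is pinned to the edge
        # so the whole image is covered without going out of bounds.
        if size <= tile:
            return [0]
        s = list(range(0, size - tile, stride))
        s.append(size - tile)
        return s

    return [(x, y, min(x + tile, width), min(y + tile, height))
            for y in starts(height) for x in starts(width)]

# A 1024x512 image with 512px tiles needs 3 overlapping boxes:
for box in tile_coords(1024, 512):
    print(box)
```

Each box would then be diffused (or VAE-encoded/decoded) independently and the overlapping regions blended back together.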
stable-diffusion-webui-vid2vid
-
AI edit/pixelating of music video options?
You could use Stable Diffusion. If you use the A1111 webui, there are extensions for transforming video. If you wanted to transform only the faces, you could use ADetailer to automatically detect them in the video and inpaint them.
-
Question on 3D render animation
You could try Stable Diffusion. If you use the A1111 webui, you can use the stable-diffusion-webui-vid2vid extension to convert each frame with models and prompts of your choice. I think that if you could render depth or normal maps, you could also feed these as hints to ControlNets, which would improve your results. The problem with converting video like this is always consistency: the individual frames may look great, but there are often noticeable variations in details between them. You could search r/StableDiffusion for vid2vid to see examples of what people actually achieve.
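The frame-by-frame conversion described above can also be driven through the webui's HTTP API (available when the webui is launched with `--api`). This is a hedged sketch, not the vid2vid extension's own code: the payload field names follow the A1111 `/sdapi/v1/img2img` endpoint, while the prompt and parameter values are illustrative assumptions.

```python
import base64
import json

# Default local webui address; assumes the webui was started with --api.
API_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"

def build_img2img_payload(frame_bytes: bytes, prompt: str,
                          denoising_strength: float = 0.35) -> dict:
    """Build the JSON payload to convert one video frame with img2img.

    A low denoising_strength keeps each output close to its source
    frame, which helps with the frame-to-frame consistency problem.
    """
    return {
        # init_images takes base64-encoded image data.
        "init_images": [base64.b64encode(frame_bytes).decode("ascii")],
        "prompt": prompt,
        "denoising_strength": denoising_strength,
        "steps": 20,
        "cfg_scale": 7,
        "sampler_name": "Euler a",
    }

# Example: payload for a single (dummy) frame. Actually sending it would
# need an HTTP client such as `requests` and a running webui instance.
payload = build_img2img_payload(b"\x89PNG...", "pixel art style, music video")
print(json.dumps(payload)[:60])
```

You would loop this over frames extracted with ffmpeg and reassemble the results into a video, which is essentially what the vid2vid extension automates inside the webui.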
What are some alternatives?
stable-diffusion-webui-two-shot - Latent Couple extension (two shot diffusion port)
sd-webui-segment-everything - Segment Anything for Stable Diffusion Webui [Moved to: https://github.com/continue-revolution/sd-webui-segment-anything]
StableSR - Exploiting Diffusion Prior for Real-World Image Super-Resolution
multidiffusion-upscaler-for-automatic1111 - Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0
PromptHub - Prompt history and management for the Stable Diffusion AUTOMATIC1111 WebUI
sd-webui-image-sequence-toolkit - Extension for AUTOMATIC1111's WebUI
shift-attention - In stable diffusion, generate a sequence of images shifting attention in the prompt.
sd-webui-segment-anything - Segment Anything for Stable Diffusion WebUI
sd-webui-cloud-inference - Stable Diffusion (SDXL/Refiner) WebUI Cloud Inference Extension
sd-webui-lobe-theme - 🅰️ Lobe theme - The modern theme for stable diffusion webui, exquisite interface design, highly customizable UI, and efficiency boosting features.
stable-diffusion-webui - Stable Diffusion web UI