stable-diffusion-webui-vid2vid vs sd-webui-image-sequence-toolkit
| | stable-diffusion-webui-vid2vid | sd-webui-image-sequence-toolkit |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 39 | 552 |
| Growth | - | - |
| Activity | 10.0 | 10.0 |
| Latest commit | over 1 year ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-vid2vid
AI edit/pixelating of music video options?
You could use Stable Diffusion. If you use the A1111 webui, there are extensions for transforming video. If you wanted to transform only the faces, you could use ADetailer to detect them automatically in the video and inpaint them.
Question on 3D render animation
You could try Stable Diffusion. If you use the A1111 webui, you can use the stable-diffusion-webui-vid2vid extension to convert each frame with models and prompts of your choice. I think that if you could render depth or normal maps, you could also feed these as hints to ControlNets, which would improve your results. The problem with converting video like this is always consistency: the individual frames may look great, but there are often noticeable variations in details between them. You could search r/StableDiffusion for vid2vid to see examples of what people actually achieve.
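The extension handles the per-frame conversion inside the webui, but the same idea can be sketched against A1111's built-in REST API (available when the webui is started with the `--api` flag). The `/sdapi/v1/img2img` endpoint and its `init_images`, `prompt`, `seed`, and `denoising_strength` fields are the stock API; the server URL, prompt, and parameter values below are illustrative assumptions, not settings recommended by either extension. A fixed seed and a low denoising strength are a common way to reduce the frame-to-frame flicker mentioned above.

```python
import base64
import json
import urllib.request
from pathlib import Path

API_URL = "http://127.0.0.1:7860"  # default A1111 address (assumes --api flag)

def build_img2img_payload(frame_b64: str, prompt: str, seed: int = 42,
                          denoising_strength: float = 0.35) -> dict:
    """Build a request body for A1111's /sdapi/v1/img2img endpoint.

    A fixed seed and a low denoising strength keep each generated frame
    close to its source frame, which reduces frame-to-frame flicker.
    """
    return {
        "init_images": [frame_b64],       # one base64-encoded source frame
        "prompt": prompt,
        "negative_prompt": "blurry, low quality",
        "seed": seed,                     # fixed seed for consistency
        "denoising_strength": denoising_strength,
        "steps": 20,
        "cfg_scale": 7,
    }

def convert_frames(frame_dir: str, out_dir: str, prompt: str) -> None:
    """Send every extracted frame through img2img and save the results."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for frame in sorted(Path(frame_dir).glob("*.png")):
        frame_b64 = base64.b64encode(frame.read_bytes()).decode()
        payload = build_img2img_payload(frame_b64, prompt)
        req = urllib.request.Request(
            f"{API_URL}/sdapi/v1/img2img",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            result = json.load(resp)
        # The API returns generated images as base64 strings.
        (out / frame.name).write_bytes(base64.b64decode(result["images"][0]))
```

Frames would be extracted beforehand (e.g. `ffmpeg -i in.mp4 frames/%05d.png`) and reassembled into a video afterwards.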
sd-webui-image-sequence-toolkit
Alita Battle Angel 2019 Movie convert into Anime ( Test ControlNet v1.1.03 )
For some shots I used this version of multi-frame rendering: https://github.com/OedoSoldier/sd-webui-image-sequence-toolkit
Experiment AI Anime w/ C-Net 1.1 + GroundingDINO + SAM + MFR (workflow)
I've updated the multi-frame rendering extension (https://github.com/OedoSoldier/sd-webui-image-sequence-toolkit, credit original author: Xanthius), which now supports the ControlNet 1.1 inpaint model.
What are some alternatives?
sd-webui-segment-everything - Segment Anything for Stable Diffusion Webui [Moved to: https://github.com/continue-revolution/sd-webui-segment-anything]
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
sd-webui-stablesr - StableSR for Stable Diffusion WebUI - Ultra High-quality Image Upscaler
dddetailer - Detection Detailer hijack edition
multidiffusion-upscaler-for-automatic1111 - Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0
stable-diffusion-webui-dataset-tag-editor - Extension to edit dataset captions for SD web UI by AUTOMATIC1111
sd-webui-segment-anything - Segment Anything for Stable Diffusion WebUI
sd-webui-lobe-theme - 🅰️ Lobe theme - The modern theme for stable diffusion webui, exquisite interface design, highly customizable UI, and efficiency boosting features.
MetalDiffusion - Stable Diffusion for Intel and Apple Silicon Macs. Forked from @divamgupta's work
stable-diffusion-webui - Stable Diffusion web UI
a1111-scripts - Example scripts using the A1111 SD Webui API and other things.