IF-webui vs sd-webui-text2video

| | IF-webui | sd-webui-text2video |
|---|---|---|
| Mentions | 2 | 29 |
| Stars | 28 | 1,250 |
| Growth | - | 1.8% |
| Activity | 4.7 | 9.0 |
| Latest commit | about 1 year ago | 4 months ago |
| Language | Python | Python |
| License | Creative Commons Zero v1.0 Universal | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
IF-webui

- Fat heroes
- SDXL 🤝 RealisticVision3 working together

sd-webui-text2video

- Testing Zeroscope v2 Text-to-Video using vid2vid
- zeroscope_v2_XL: a new open source 1024x576 video model designed to take on Gen-2
- Fresh Pasta of Bel-Air
  Link to Txt2Video extension: https://github.com/kabachuha/sd-webui-text2video
- WELCOME TO OLLIVANDER'S. Overriding my usual bad footage (& voiceover). The head, hands & clothes were created separately in detail in Stable Diffusion using my temporal consistency technique and then merged back together. The background was also AI, animated using a created depth map.
- surfs up, poodles! text to video, ModelScope
  Thanks! I'm using a TouchDesigner setup + UI I've built that uses the API in https://github.com/kabachuha/sd-webui-text2video for AUTOMATIC1111.
- How to Text 2 video?
- First Open-Source 1024x576 Text To Video Model (potat1) is out!
- "Acid Rain" (ModelScope text2video / Zeroscope 320x) [4K]
What are some alternatives?
collage-diffusion-ui - An open source, layer-based web interface for Collage Diffusion - use a familiar Photoshop-like interface and let the AI harmonize the details.
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
AI-Image-PromptGenerator - A flexible UI script to help create and expand on prompts for generative AI art models, such as Stable Diffusion and MidJourney. Get inspired, and create.
ebsynth_utility - AUTOMATIC1111 UI extension for creating videos using img2img and ebsynth.
sd-webui-inpaint-anything - Inpaint Anything extension performs stable diffusion inpainting on a browser UI using masks from Segment Anything.
stable-diffusion-webui - Stable Diffusion web UI
onnx-web - web UI for GPU-accelerated ONNX pipelines like Stable Diffusion, even on Windows and AMD
sd-webui-modelscope-text2video - Auto1111 extension consisting of implementation of text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies [Moved to: https://github.com/deforum-art/sd-webui-text2video]
diffusion-browser - An easy way to view the images and metadata generated by Stable Diffusion's Automatic1111 WebUI
sd-webui-dragGAN-extension - Extension of Stable Diffusion webui for DragGAN
sd-webui-deforum - Deforum extension for AUTOMATIC1111's Stable Diffusion webui