stable-diffusion-webui-chatgpt-utilities vs sd-webui-text2video
| | stable-diffusion-webui-chatgpt-utilities | sd-webui-text2video |
|---|---|---|
| Mentions | 6 | 29 |
| Stars | 469 | 1,263 |
| Growth | - | 1.3% |
| Activity | 4.9 | 9.0 |
| Latest commit | about 1 year ago | 5 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-chatgpt-utilities
- Chrome Add-On to Create Stable Diffusion Prompts with ChatGPT
- ChatGPT inside A1111 + possibly get GPT-4 working if you are whitelisted
  Also, thank you to the extension developer. You can find it here: hallatore/stable-diffusion-webui-chatgpt-utilities: Enables use of ChatGPT directly from the UI (github.com)
- Prompt Guide v4.3 Updated
- Updated ChatGPT extension for AUTOMATIC1111 is out
  Link: https://github.com/hallatore/stable-diffusion-webui-chatgpt-utilities
- Impressive results using ChatGPT with stable diffusion
  The extension for AUTOMATIC1111 stable-diffusion-webui is available at https://github.com/hallatore/stable-diffusion-webui-chatgpt-utilities
- Added ChatGPT to Automatic1111
sd-webui-text2video
- Fat heroes
- SDXL 🤝 RealisticVision3 working together
- Testing Zeroscope v2 Text-to-Video using vid2vid
- zeroscope_v2_XL: a new open source 1024x576 video model designed to take on Gen-2
- Fresh Pasta of Bel-Air
  Link to the Txt2Video extension: https://github.com/kabachuha/sd-webui-text2video
- WELCOME TO OLLIVANDER'S. Overriding my usual bad footage (& voiceover). The head, hands & clothes were created separately in detail in Stable Diffusion using my temporal consistency technique and then merged back together. The background was also AI, animated using a generated depth map.
- surfs up, poodles! text to video, Modelscope
  Thanks! I'm using a TouchDesigner setup + UI I've built that uses the API in https://github.com/kabachuha/sd-webui-text2video for AUTOMATIC1111.
- How to Text 2 video?
- First Open-Source 1024x576 Text To Video Model (potat1) is out!
- "Acid Rain" (ModelScope text2video / Zeroscope 320x) [4K]
What are some alternatives?
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
SHARK - High Performance Machine Learning Distribution
ebsynth_utility - AUTOMATIC1111 UI extension for creating videos using img2img and ebsynth.
stable-diffusion-webui - Stable Diffusion web UI
sd-webui-modelscope-text2video - Auto1111 extension consisting of implementation of text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies [Moved to: https://github.com/deforum-art/sd-webui-text2video]
sd-webui-dragGAN-extension - extension of stable diffusion webui for dragGAN
sd-webui-deforum - Deforum extension for AUTOMATIC1111's Stable Diffusion webui
SD-WebUI-BatchCheckpointPrompt - Test a base prompt with different checkpoints and for the checkpoints specific prompt templates
batchlinks-webui - Download several Huggingface, MEGA, and CivitAI links at once. SD webui extension. For colab.
Text-To-Video-Finetuning - Finetune ModelScope's Text To Video model using Diffusers 🧨
stable-diffusion-webui-normalmap-script - Normal Maps for Stable Diffusion WebUI