| | sd-webui-deforum | stable-diffusion-webui-normalmap-script |
|---|---|---|
| Mentions | 17 | 2 |
| Stars | 2,574 | 71 |
| Growth | 2.4% | - |
| Activity | 8.8 | 3.2 |
| Latest commit | 5 days ago | 4 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sd-webui-deforum
- p5.js Visual Art Composer GPT - enter a prompt, get p5.js code output
I’m thinking about constructing a file based on this default: https://github.com/deforum-art/sd-webui-deforum/blob/automatic1111-webui/scripts/default_settings.txt
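In case it helps, here is a minimal sketch of what such a settings file can look like, written as Python that dumps JSON. The key names (animation_mode, max_frames, zoom, prompts, and so on) are assumptions based on typical Deforum settings files and may not match the current schema exactly; treat the linked default_settings.txt as the authoritative reference.

```python
# Hedged sketch: build a tiny Deforum-style settings file as JSON.
# Key names below are assumptions based on common Deforum settings;
# diff against default_settings.txt from the repo before relying on them.
import json

settings = {
    "W": 512,
    "H": 512,
    "seed": -1,                       # -1 = random seed
    "steps": 25,
    "animation_mode": "2D",
    "max_frames": 120,
    "fps": 15,
    "zoom": "0: (1.0), 60: (1.02)",   # keyframed schedule: "frame: (value)"
    "translation_x": "0: (0)",
    "translation_y": "0: (0)",
    "prompts": {
        "0": "a surreal painting by Magritte",
    },
}

with open("my_settings.txt", "w") as f:
    json.dump(settings, f, indent=4)
```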
- This is not an infinite zoom.
I asked Audiocraft to make me a "chill hip hop beat", and I used framesync.xyz to make keyframes for the A1111 Deforum extension. Unfortunately, I don't have the settings file anymore, but it was pretty much just a 26s clip at 15fps (440 frames) with a single prompt, "a surreal painting by Magritte", and the usual negative-prompt magic voodoo. Then, for every clip, I used the last frame of the previous clip as the init frame. I rendered at 512x512 and then used ESRGAN 4x to upscale to 2048x2048.
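For the "last frame of the previous clip as init frame" step, a small helper like the one below can grab that frame. This is just one way to do it, using OpenCV; the file names are placeholders.

```python
# Sketch: extract the final frame of the previous clip so it can be fed to
# Deforum as the init image for the next clip. File names are placeholders.
# Note: frame-accurate seeking can be unreliable with some codecs; reading
# the video sequentially is a slower but safer fallback.
import cv2

def last_frame(video_path: str, out_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_count - 1)  # seek to the last frame
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"Could not read the last frame of {video_path}")
    cv2.imwrite(out_path, frame)

last_frame("clip_01.mp4", "init_for_clip_02.png")
```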
- The Flashbulb - Passage D (unofficial experiment with AI video)
https://github.com/deforum-art/sd-webui-deforum - for video generation
- How can I replicate this kind of video using lyrics as prompts?
This kind of animation can be done with Deforum, a rather advanced add-on for Stable Diffusion. Be aware, though, that you will really have to work yourself into that one. Also, as a heads up, you probably won't be able to just use existing lyrics as prompts; rather, you'll need to create your different prompts based on the lyrics.
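To make that concrete, here is a rough sketch of turning timestamped lyric lines into a Deforum-style prompts dictionary keyed by frame number (frame = timestamp × fps). The lyric lines, fps value, and style suffix are made up purely for illustration.

```python
# Sketch: map timestamped lyric lines to Deforum-style prompt keyframes.
# Deforum keys prompts by frame number; here frame = timestamp_seconds * fps.
# The lyrics, FPS, and style suffix are illustrative placeholders.
FPS = 15

lyrics = [
    (0.0,  "city lights fading into the sea"),
    (8.5,  "we were ghosts in the morning rain"),
    (17.0, "nothing left but the afterglow"),
]

prompts = {
    str(int(t * FPS)): f"{line}, surreal painting, highly detailed"
    for t, line in lyrics
}
print(prompts)  # e.g. {"0": "...", "127": "...", "255": "..."}
```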
- Projects of AI tools for creating inbetween frames of 2D animations
- Viral AI Art Video on Instagram
The tools you need for that are Stable Diffusion (e.g., with the Automatic1111 UI) and, installed into it, Deforum. The GitHub pages for both outline the installation process. Here's also a decent tutorial to get you started with Deforum, as it may seem a bit complex at first: Link
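If the "Install from URL" button in the WebUI isn't an option, cloning the extension into the extensions folder by hand also works. The sketch below assumes a standard stable-diffusion-webui layout; adjust the path to your own install.

```python
# Sketch: manual install of the Deforum extension by cloning it into the
# WebUI's extensions folder. The webui path is an assumption about your setup.
import subprocess
from pathlib import Path

webui_dir = Path("stable-diffusion-webui")          # adjust to your install
ext_dir = webui_dir / "extensions" / "sd-webui-deforum"

if not ext_dir.exists():
    subprocess.run(
        ["git", "clone", "https://github.com/deforum-art/sd-webui-deforum", str(ext_dir)],
        check=True,
    )
# Restart the WebUI afterwards so the extension is picked up.
```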
- Gunship posed the question "what happens after we die?" to image-generating artificial intelligence software... These are the results.
- Txt-to-Video with GEN-2 AI
There's also video-to-video with Deforum, and text-to-video with Deforum and ModelScope. I am not really that familiar with this as I only started recently, so don't blindly believe this.
- Updating her look to something more pink... (Deforum Morph)
Created with SD 1.5, Automatic1111 UI, Deforum plugin. This was attempt number 10 or so of this prompt, with lots of small settings tweaks between.
- How do I load LoRA models after downloading them from the Civitai browser?
stable-diffusion-webui-normalmap-script
- Tools For AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
Stable Diffusion (2D image generation and animation):
- https://github.com/CompVis/stable-diffusion (Stable Diffusion V1)
- https://huggingface.co/CompVis/stable-diffusion (Stable Diffusion checkpoints 1.1-1.4)
- https://huggingface.co/runwayml/stable-diffusion-v1-5 (Stable Diffusion checkpoint 1.5)
- https://github.com/Stability-AI/stablediffusion (Stable Diffusion V2)
- https://huggingface.co/stabilityai/stable-diffusion-2-1/tree/main (Stable Diffusion checkpoint 2.1)

Stable Diffusion Automatic1111 WebUI and extensions:
- https://github.com/AUTOMATIC1111/stable-diffusion-webui (WebUI - easier to use)

PLEASE NOTE: MANY EXTENSIONS CAN BE INSTALLED FROM THE WEBUI BY CLICKING "AVAILABLE" OR "INSTALL FROM URL", BUT YOU MAY STILL NEED TO DOWNLOAD THE MODEL CHECKPOINTS! (See the download sketch after this list.)

- https://github.com/Mikubill/sd-webui-controlnet (ControlNet extension - use various models to control your image generation, useful for animation and temporal consistency)
- https://huggingface.co/lllyasviel/ControlNet/tree/main/models (ControlNet checkpoints - Canny, Normal, OpenPose, Depth, etc.)
- https://github.com/thygate/stable-diffusion-webui-depthmap-script (Depth Map extension - generate high-resolution depth maps and animated videos, or export to 3D modeling programs)
- https://github.com/graemeniedermayer/stable-diffusion-webui-normalmap-script (Normal Map extension - generate high-resolution normal maps for use in 3D programs)
- https://github.com/d8ahazard/sd_dreambooth_extension (DreamBooth extension - train your own objects, people, or styles into Stable Diffusion)
- https://github.com/deforum-art/sd-webui-deforum (Deforum - generate weird 2D animations)
- https://github.com/deforum-art/sd-webui-text2video (Deforum Text2Video - generate videos from text prompts using ModelScope or VideoCrafter)
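As a rough sketch of the "you may still need the checkpoints" step, the snippet below fetches one ControlNet model with huggingface_hub. The target folder is an assumption about a typical sd-webui-controlnet install; check that extension's README for where your setup actually expects models.

```python
# Hedged sketch: download a ControlNet checkpoint after installing the
# extension from the WebUI. The local_dir path is an assumed install layout.
from huggingface_hub import hf_hub_download

ckpt = hf_hub_download(
    repo_id="lllyasviel/ControlNet",
    filename="models/control_sd15_canny.pth",  # Canny model from the list above
    local_dir="stable-diffusion-webui/extensions/sd-webui-controlnet/models",
)
print("Downloaded to:", ckpt)
```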
- Relighting AI Art
These examples use my normal map extension, although it should work with thygate's depth map extension as well (super big thanks to thygate).
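For readers wondering how a normal map enables relighting at all, here is a rough, generic sketch of Lambertian shading from a normal map. The file names and the single directional light are illustrative assumptions, not the extension's actual pipeline.

```python
# Sketch of simple relighting from a normal map (Lambert's cosine law).
# File names and light direction are placeholders.
import numpy as np
from PIL import Image

albedo = np.asarray(Image.open("art.png").convert("RGB"), dtype=np.float32) / 255.0
normal_rgb = np.asarray(Image.open("art_normal.png").convert("RGB"), dtype=np.float32) / 255.0

# Decode normals from [0, 1] RGB back to unit vectors in [-1, 1].
normals = normal_rgb * 2.0 - 1.0
normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-8

# One directional light coming from the upper left, pointing toward the scene.
light_dir = np.array([-0.5, 0.5, 1.0], dtype=np.float32)
light_dir /= np.linalg.norm(light_dir)

# Diffuse term: clamp the normal/light dot product at zero, then shade the albedo.
diffuse = np.clip(normals @ light_dir, 0.0, 1.0)[..., None]
relit = np.clip(albedo * (0.2 + 0.8 * diffuse), 0.0, 1.0)  # 0.2 ambient floor

Image.fromarray((relit * 255).astype(np.uint8)).save("art_relit.png")
```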
What are some alternatives?
sd-webui-text2video - Auto1111 extension implementing text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
ebsynth - Fast Example-based Image Synthesis and Style Transfer
EasyMocap - Make human motion capture easier.
textext - Re-editable LaTeX/Typst graphics for Inkscape
TelegramGPT - a simple Python script implementing a Telegram AI chatbot using DALL-E
images-grid-comfy-plugin - A simple comfyUI plugin for images grid (X/Y Plot)
pifuhd - High-Resolution 3D Human Digitization from A Single Image.
stable-diffusion-webui-tensorrt
VideoCrafter - VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
Real-ESRGAN-ncnn-vulkan - NCNN implementation of Real-ESRGAN. Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration.