dream-textures
stable-diffusion
| | dream-textures | stable-diffusion |
|---|---|---|
| Mentions | 72 | 186 |
| Stars | 7,572 | 3,147 |
| Growth | - | - |
| Activity | 5.8 | 0.0 |
| Latest Commit | 15 days ago | 7 months ago |
| Language | Python | Jupyter Notebook |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dream-textures
- Donut done with Artificial Intelligence and Blender
- Tell HN: The next generation of videogames will be great with midjourney
- After Diffusion, an After Effects Extension Integrating the SD web UI seamlessly.
  I'm a long-time advanced AE user and would gladly give feedback on how I envision a good workflow, if you want. I recently got into Dream Textures for Blender, which I think is a great reference for the direction things could be heading. It's still not viable for consistent video, but I love how they expose multiple ControlNets and their weights as animatable, for example. I also suggested they expose animatable prompt weights, which the author now plans for a future release. I see you have such things planned for this plugin as well, so big thumbs up!
- Resources for artists interested in using StableDiffusion as a tool?
  Dream Textures (SD for Blender) - https://github.com/carson-katri/dream-textures
- Using AI for 3d Game art
- ControlNet fully integrated with Blender using nodes!
  Yes, and it can also automatically bake the texture onto the original UV map instead of the projected UVs. The guide is here: https://github.com/carson-katri/dream-textures/wiki/Texture-Projection
- Using DALL-E 2 to create brick and water textures in Unity.
- 3D animation attempt using Sketchup screenshots and ControlNet
- Blender 3.5
- Master AI Texture Projection for Blender 3
  Dream AI latest release: https://github.com/carson-katri/dream-textures/releases
stable-diffusion
- Possible to load Civitai models in basujindal optimizedSD fork?
  I am using this repo: https://github.com/basujindal/stable-diffusion and it works fine with e.g. this model: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original
- 40 min to render 2x 256x256 pictures...
  That includes this optimized version: https://github.com/basujindal/stable-diffusion
- [Stable Diffusion] Stuck at UNet: running in eps-prediction mode
- How to use safetensors locally (optimized-sd)?
  Ah, I wasn't aware of that. I use this version, which was very easy to set up and use via the CLI. (A .safetensors conversion sketch follows this list.)
- [Stable Diffusion] Stable Diffusion 1.4 - CUDA out-of-memory error
  Used the repo recommended in https://github.com/CompVis/stable-diffusion/issues/39 to use https://github.com/basujindal/stable-diffusion - same result.
- [Stable Diffusion] Help with CUDA out of memory
- [Stable Diffusion] How to create our own model?
- Help installing optimizedSD please. Thank you so much!
  As per the best solution I found, I downloaded this version (https://github.com/basujindal/stable-diffusion) and pasted the optimizedSD folder into the main (user>stable-diffusion-webui) folder as per the site instructions.
- Stable Diffusion Web UI: Using Optimized SD Post-Installation
  The repo says you can simply grab the optimizedSD folder and paste it into the installation path, which I did. However, I'm not sure how to call its functionality. Again, the reddit post says: "Remember to call the optimized python script python optimizedSD/optimized_txt2img.py instead of the standard scripts/txt2img." Though I'm not even sure where that script call is performed. Any ideas? Thanks in advance! (An example invocation follows this list.)
- [Stable Diffusion] Stable Diffusion 1.4 - CUDA out-of-memory error
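Two of the mentions above (loading Civitai models and using safetensors locally) come down to the same step: the basujindal fork, like upstream CompVis, loads plain .ckpt checkpoints, so a .safetensors download usually has to be repacked first. Below is a minimal, hedged sketch of that conversion; the file names are placeholders, it assumes the `safetensors` package is installed, and it only helps if the downloaded model actually matches the SD 1.x architecture the fork expects.

```python
# Sketch: repack a .safetensors checkpoint (e.g. downloaded from Civitai)
# into the .ckpt layout that CompVis-style scripts load. Paths are placeholders.
import torch
from safetensors.torch import load_file  # pip install safetensors

src = "downloaded_model.safetensors"  # placeholder: the file you downloaded
dst = "model.ckpt"                    # placeholder: output checkpoint

state_dict = load_file(src)                  # raw tensor dict from the safetensors file
torch.save({"state_dict": state_dict}, dst)  # wrap it the way CompVis-style loaders expect
print(f"wrote {dst} ({len(state_dict)} tensors)")
```

Where the fork then looks for the checkpoint is not shown here; the stock CompVis layout is models/ldm/stable-diffusion-v1/model.ckpt, but check the fork's README rather than taking that path as given.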
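For the "Using Optimized SD Post-Installation" question, the only call the quoted instructions require is the one shown above: run optimizedSD/optimized_txt2img.py instead of scripts/txt2img.py, from the directory that contains the optimizedSD folder so the relative path resolves. The flags below follow the upstream CompVis txt2img conventions and are an assumption; verify the exact set in the fork's README.

```
python optimizedSD/optimized_txt2img.py \
  --prompt "a photograph of an astronaut riding a horse" \
  --H 512 --W 512 --n_samples 1 --ddim_steps 50
```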
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
stable-diffusion
stable-diffusion-nvidia-docker - GPU-ready Dockerfile to run Stability.AI stable-diffusion model v2 with a simple web interface. Includes multi-GPU support.
diffusers-uncensored - Uncensored fork of diffusers
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
chaiNNer - A node-based image processing GUI aimed at making chaining image processing tasks easy and customizable. Born as an AI upscaling application, chaiNNer has grown into an extremely flexible and powerful programmatic image processing application.
DeepBump - Normal & height maps generation from single pictures
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
stable-diffusion