dream-textures vs DeepBump
| | dream-textures | DeepBump |
|---|---|---|
| Mentions | 72 | 5 |
| Stars | 7,572 | 934 |
| Growth | - | - |
| Activity | 5.8 | 2.7 |
| Latest commit | 12 days ago | about 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dream-textures
- Tell HN: The next generation of videogames will be great with midjourney
- After Diffusion, an After Effects extension integrating the SD web UI seamlessly
I'm a long-time advanced AE user and would gladly give feedback on how I envision a good workflow, if you want. I recently got into Dream Textures for Blender, which I think is a great reference for the direction things could be heading. It's still not viable for consistent video, but I love how they expose multiple ControlNets and their weights as animatable, for example. I also suggested they expose animatable prompt weights, which the author now plans for a future release. I see you have such things planned for this plugin as well, so big thumbs up!
- Resources for artists interested in using Stable Diffusion as a tool?
Dream Textures (SD for Blender) - https://github.com/carson-katri/dream-textures
- ControlNet fully integrated with Blender using nodes!
ControlNet is fully supported in the latest version of the Dream Textures add-on for Blender. You can get it from GitHub: https://github.com/carson-katri/dream-textures/releases/tag/0.2.0
It doesn’t use A1111 or venv; it’s all self-contained in the add-on, and really easy to set up. Here are the setup instructions: https://github.com/carson-katri/dream-textures/wiki/Setup
More likely there will be an A1111 backend in the future that would just call into their api: https://github.com/carson-katri/dream-textures/issues/604
We support inpainting and outpainting: https://github.com/carson-katri/dream-textures/wiki/Inpaint-and-Outpaint
Yes, and it can also automatically bake the texture onto the original UV map instead of the projected UVs. The guide is here: https://github.com/carson-katri/dream-textures/wiki/Texture-Projection
- Blender 3.5
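The possible A1111 backend mentioned above would call into the AUTOMATIC1111 web UI's HTTP API rather than bundling its own pipeline. A minimal sketch of such a call, assuming a web UI running locally and launched with the `--api` flag (the host, port, and default parameter values here are assumptions about a typical local setup):

```python
import base64
import json
from urllib import request

# Assumed local endpoint; the AUTOMATIC1111 web UI serves its txt2img
# route at /sdapi/v1/txt2img when started with the --api flag.
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"


def build_txt2img_payload(prompt, steps=20, width=512, height=512, seed=-1):
    """Build the JSON payload for a txt2img request."""
    return {
        "prompt": prompt,
        "steps": steps,
        "width": width,
        "height": height,
        "seed": seed,  # -1 asks the backend for a random seed
    }


def generate(prompt):
    """POST the payload and decode the first returned image (base64 PNG)."""
    data = json.dumps(build_txt2img_payload(prompt)).encode("utf-8")
    req = request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return base64.b64decode(body["images"][0])


# Example (requires a running web UI):
#   png_bytes = generate("seamless PBR stone wall texture")
#   open("texture.png", "wb").write(png_bytes)
```

A backend built this way stays thin: the add-on only constructs payloads and decodes images, while model management stays in the web UI.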
DeepBump
- Making a trailer for my book with midjourney. What do you think?
If you already have Photoshop, you can also generate a depth map in there by using the Depth Blur neural filter and enabling "output depth map only". But there are also lots of other free tools for creating depth maps, some of which you can use within Blender, like https://github.com/HugoTini/DeepBump
- AI Seamless Texture Generator Built-In to Blender
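DeepBump uses a neural network to infer normal maps from single photos. For contrast, the classical non-ML baseline derives a normal map from a height map by finite differences; a minimal NumPy sketch (the `strength` parameter and OpenGL-style color encoding are common conventions here, not DeepBump's actual pipeline):

```python
import numpy as np


def height_to_normal(height, strength=1.0):
    """Convert a 2D height map (H, W floats) to a tangent-space
    normal map (H, W, 3 floats in [0, 1], OpenGL-style encoding)."""
    # Finite-difference gradients; np.gradient returns (d/drow, d/dcol).
    dy, dx = np.gradient(height.astype(np.float64))
    # Surface normal of z = h(x, y) is proportional to (-dh/dx, -dh/dy, 1).
    nx = -dx * strength
    ny = -dy * strength
    nz = np.ones_like(height, dtype=np.float64)
    n = np.stack([nx, ny, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Remap components from [-1, 1] to the [0, 1] texture range.
    return n * 0.5 + 0.5
```

A flat height map yields the uniform "neutral blue" normal map (0.5, 0.5, 1.0), which is a quick sanity check for any height-to-normal conversion.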
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
stable-diffusion-nvidia-docker - GPU-ready Dockerfile to run Stability.AI stable-diffusion model v2 with a simple web interface. Includes multi-GPU support.
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
stable-diffusion
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
CLIP - CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
Blender-GPT - An all-in-one Blender assistant powered by GPT3/4 + Whisper integration
k-diffusion - Karras et al. (2022) diffusion models for PyTorch
chatgpt-raycast - ChatGPT raycast extension
Material-Map-Generator - Easily create AI generated Normal maps, Displacement maps, and Roughness maps.