RenderNode vs dream-textures

| | RenderNode | dream-textures |
|---|---|---|
| Mentions | 3 | 72 |
| Stars | 85 | 7,599 |
| Growth | - | - |
| Activity | 2.6 | 5.8 |
| Latest Commit | about 2 years ago | 10 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RenderNode
- RenderStackNode allows you to create node trees to batch render multiple scenes, view layers or cameras at once, using filenames that include date, time, render engine, camera or scene name and a whole lot more. This really should be core functionality.
  It's available on Blender Market if you wish to support it, and directly on GitHub if you wish to support it even more. 😉
- RenderStackNode allows you to create node trees to batch render multiple scenes, view layers or cameras at once, using filenames that include date, time, render engine, camera or scene name and a whole lot more. This really should be core functionality. https://github.com/atticus-lv/RenderStackNode
  I noticed "https://github.com/atticus-lv/RenderStackNode" in the title. Please do not put any references to social media or any links in the title – these things belong in the comments or post body for those interested (see rule 7). Instead, the title should be a concise description of what you are posting.
- My new favorite addon, RenderStackNode, allows you to visually create render batch lists and render multiple scenes, view layers or cameras at once, using output filenames that include date, time, render engine, camera name, scene name and a whole lot more. This really should be core functionality.
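The batch-render idea above can be illustrated with a small, self-contained sketch. This is not RenderStackNode's actual API; the `expand_tokens` helper and the `{date}`/`{time}`/`{scene}`/`{camera}`/`{engine}` token names are hypothetical, chosen only to show how a batch renderer might build per-scene, per-camera output paths from a filename template:

```python
from datetime import datetime

def expand_tokens(template, scene, camera, engine, when=None):
    """Replace {date}, {time}, {scene}, {camera} and {engine} tokens
    in a render-output filename template.

    Hypothetical sketch: token names and this helper are illustrative,
    not the add-on's real interface.
    """
    when = when or datetime.now()
    return template.format(
        date=when.strftime("%Y-%m-%d"),
        time=when.strftime("%H%M%S"),
        scene=scene,
        camera=camera,
        engine=engine,
    )

# One output path per scene/camera pair, as a batch render would generate.
template = "renders/{scene}/{camera}_{engine}_{date}.png"
print(expand_tokens(template, scene="Shot01", camera="CamA", engine="CYCLES",
                    when=datetime(2023, 5, 1, 12, 0, 0)))
# renders/Shot01/CamA_CYCLES_2023-05-01.png
```

In the real add-on these values would come from Blender's scene, camera and render-engine settings; the point is only that each node in the batch tree contributes one set of substitutions.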
dream-textures
- Donut done with Artificial Intelligence and Blender
- Tell HN: The next generation of videogames will be great with midjourney
- After Diffusion, an After Effects Extension Integrating the SD web UI seamlessly.
  I'm a long-time advanced AE user and would gladly give feedback on how I envision a nice workflow, if you want. I recently got into Dream Textures for Blender, which I think is a great reference for the direction things could be heading. It's still not viable for consistent video, but I love how they expose multiple ControlNets and their weights as animatable, for example. I also suggested exposing (animatable) prompt weights, which the author now plans for a future release. I see you have such things planned for this plugin as well, so big thumbs up!
- Resources for artists interested in using Stable Diffusion as a tool?
  Dream Textures (SD for Blender) - https://github.com/carson-katri/dream-textures
- Using AI for 3d Game art
- ControlNet fully integrated with Blender using nodes!
  Yes, and it can also automatically bake the texture onto the original UV map instead of the projected UVs. The guide is here: https://github.com/carson-katri/dream-textures/wiki/Texture-Projection
- Using DALL-E 2 to create brick and water textures in Unity.
- 3D animation attempt using Sketchup screenshots and ControlNet
- Blender 3.5
- Master AI Texture Projection for Blender 3
  Dream Textures latest release: https://github.com/carson-katri/dream-textures/releases
What are some alternatives?
- fSpy-Blender - Official fSpy importer for Blender
- stable-diffusion-webui - Stable Diffusion web UI
- stable-diffusion - This version of CompVis/stable-diffusion features an interactive command-line script that combines text2img and img2img functionality in a "dream bot" style interface, a WebGUI, and multiple features and other enhancements. [Moved to: https://github.com/invoke-ai/InvokeAI]
- stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
- stable-diffusion-nvidia-docker - GPU-ready Dockerfile to run Stability.AI stable-diffusion model v2 with a simple web interface. Includes multi-GPU support.
- DeepBump - Normal & height maps generation from single pictures
- stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
- stable-diffusion
- ComfyUI - The most powerful and modular stable diffusion GUI, API and backend with a graph/nodes interface.
- Blender-GPT - An all-in-one Blender assistant powered by GPT3/4 + Whisper integration
- CLIP - CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
- k-diffusion - Karras et al. (2022) diffusion models for PyTorch