stable-dreamfusion vs ControlNet-v1-1-nightly

| | stable-dreamfusion | ControlNet-v1-1-nightly |
| --- | --- | --- |
| Mentions | 41 | 31 |
| Stars | 7,813 | 4,314 |
| Growth | - | - |
| Activity | 7.2 | 8.4 |
| Latest Commit | 5 months ago | 6 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
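The site doesn't publish its exact formula, but a recency-weighted score along these lines (exponential decay over commit age, with a hypothetical half-life) captures the idea that recent commits count for more:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Recency-weighted commit activity: each commit contributes
    0.5 ** (age / half_life), so recent commits weigh more than old ones.
    The half-life and scaling are illustrative assumptions, not the
    site's actual formula."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A project with mostly recent commits scores higher than one with the
# same number of much older commits.
recent = activity_score([1, 3, 7, 10])
stale = activity_score([90, 120, 150, 180])
```

Any monotonically decaying weight would give the same ranking behavior; the exponential form just makes the "half-life" of a commit's influence explicit.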
stable-dreamfusion
-
When are we getting stable diffusion for 3d models or 3d scenes?
Who is working on it? I've seen a few other models that do this, like Stable-Dreamfusion.
-
Is it possible for me to approximate a depth map from a generated image and make a 3D model?
I haven't tried Stable-DreamFusion, but it might be able to take an input image along with a prompt?
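The depth-map route is straightforward to sketch: given an estimated depth image (e.g. from a monocular depth model), each pixel can be back-projected into 3D with a pinhole camera model. A minimal sketch, assuming hypothetical camera intrinsics (`fx`, `fy`, principal point at the image center):

```python
import numpy as np

def depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    """Back-project a depth map of shape (H, W) into an (H*W, 3) point
    cloud using a pinhole camera model. The intrinsics are illustrative
    assumptions; a real image needs estimated or calibrated values."""
    h, w = depth.shape
    cx = w / 2 if cx is None else cx
    cy = h / 2 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a constant-depth plane back-projects to points all at z = 2.
pts = depth_to_point_cloud(np.full((4, 4), 2.0))
```

The resulting point cloud can then be meshed (e.g. with Poisson reconstruction) to get a printable or renderable model, though a single view only recovers the visible surface.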
-
Meet ProlificDreamer: An AI Approach That Delivers High-Fidelity and Realistic 3D Content Using Variational Score Distillation (VSD)
Similar to Magic3D and DreamFusion / Stable-Dreamfusion, but this one looks a lot more vivid and detailed!
-
How would you all feel about 3D Stable Diffusion?
I've seen a few "text-to-3D" models that use Stable Diffusion. Zero-1-to-3 and Stable-DreamFusion appear to be capable of generating 3D models from text prompts.
-
Do any other software devs feel left behind by AI? I feel like I'm working on yesterday's tech
Ever heard of Stable Dreamfusion? It's an open-source text-to-3D mesh model.
-
Text-to-image-to-3D on 16GB GPU after stable-dreamfusion repo update
I followed the steps in this repo. They added my turtle as an example to the repo after their latest improvements. https://github.com/ashawkey/stable-dreamfusion
- I recreated Beat Saber in Unity following only ChatGPT. The code, the VFX, and even the 3D models were made by an AI. Full video in the first comment.
-
ControlNet v1.1 has been released
There is an (independent?) implementation here, released last week at version 0.1, but it already has 100 issues.
-
Would it be possible for SD to make 3D designs that I can later 3D print?
But there's stable-dreamfusion; it's still not great, but you could try it.
-
Game prototype using AI assisted graphics
Dreamfusion -- text-to-3D seems like it could be useful here: https://dreamfusion3d.github.io/ (once successfully open-sourced, see https://github.com/ashawkey/stable-dreamfusion)
Rigging also looks like it could have a decent AI/DNN solution: https://arxiv.org/pdf/2005.00559.pdf
ControlNet-v1-1-nightly
-
Making a ControlNet inpaint for sdxl
1- https://github.com/lllyasviel/ControlNet-v1-1-nightly/issues/89
-
AI Yearbook Photos Workflow with Stable Diffusion 1.5 Automatic1111
Install ControlNet and download the models you want to use (canny/depth/openpose should be enough for this): https://github.com/lllyasviel/ControlNet-v1-1-nightly
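Each of those models expects a preprocessed control image; the canny model, for example, takes an edge map of the input photo. A real workflow would use OpenCV's `cv2.Canny`, but a crude numpy stand-in (gradient-magnitude thresholding, not true Canny) shows the shape of that preprocessing step:

```python
import numpy as np

def edge_map(gray, threshold=0.25):
    """Crude edge detector: finite-difference gradient magnitude,
    thresholded to a binary 0/255 image. A simplified stand-in for
    cv2.Canny, which a real ControlNet-canny workflow would use."""
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:] = np.diff(gray, axis=1)
    gy[1:, :] = np.diff(gray, axis=0)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return (mag > threshold).astype(np.uint8) * 255

# A hard vertical boundary yields edges along the transition column.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = edge_map(img)
```

The binary edge image is what gets fed to the ControlNet model alongside the text prompt; depth and openpose models expect a depth map and a pose skeleton image, respectively.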
-
Can you downgrade ControlNet?
You can find the previous version on their git. If it's a version prior to v1.1, you'll probably have to search for the right branch in the new repo and download that.
- Could you help me with this problem?
- Controlnet v1.1 Lineart
- Request for current ControlNet information
-
AI conceptual massing iterations within a context image with input control sketch
Stable Diffusion: https://huggingface.co/runwayml/stable-diffusion-v1-5 with ControlNet extension: https://github.com/lllyasviel/ControlNet-v1-1-nightly running on Automatic1111 web UI: https://github.com/AUTOMATIC1111/stable-diffusion-webui
- Inpaint Anything (uses "Segment Anything") - Cool A1111 extension not (yet) on the in-app list
-
Architectural design using Stable Diffusion and ControlNet
Sure thing. After testing Midjourney a bit, I found that the quality of the images it produces is the best, but you have zero control over what is produced. The big breakthrough here is ControlNet, a Stable Diffusion extension that lets you control the initial noise based on image inputs (or at least that's my understanding). More on it here: https://github.com/lllyasviel/ControlNet-v1-1-nightly
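The mechanism behind that control, as described in the ControlNet paper, is a trainable copy of the model whose output is added back to the frozen model through "zero convolutions": 1x1 convolutions initialized to zero, so at the start of training the combined model behaves exactly like the original. A toy numpy sketch of the residual wiring (real ControlNet operates on U-Net feature maps, not this simplified linear stand-in):

```python
import numpy as np

class ZeroConvControl:
    """Toy ControlNet-style residual: y = frozen(x) + zero_conv(copy(control)).
    The zero-initialized weights mean the control branch contributes nothing
    at first, so the pretrained model's behavior is preserved."""

    def __init__(self, dim, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.frozen_w = rng.standard_normal((dim, dim))  # pretrained, locked
        self.copy_w = self.frozen_w.copy()               # trainable copy
        self.zero_w = np.zeros((dim, dim))               # "zero convolution"

    def __call__(self, x, control):
        base = x @ self.frozen_w
        branch = (control @ self.copy_w) @ self.zero_w
        return base + branch

net = ZeroConvControl(dim=4)
x = np.ones((1, 4))
control = np.ones((1, 4))
# Before any training, the control branch is silent and the output equals
# the frozen model's output alone.
```

As training updates `copy_w` and `zero_w`, the control branch gradually steers generation toward the conditioning image without ever destabilizing the frozen backbone.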
-
Setting Removed from ControlNET - "Skip img2img processing when using img2img initial image" - why?
It seems it was removed as a duplicate: https://github.com/lllyasviel/ControlNet-v1-1-nightly/issues/61
What are some alternatives?
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
sd-webui-controlnet - WebUI extension for ControlNet
dreamgaussian - Generative Gaussian Splatting for Efficient 3D Content Creation
ControlNet - Let us control diffusion models!
zero123plus - Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.
sd-webui-reactor - Fast and Simple Face Swap Extension for StableDiffusion WebUI (A1111 SD WebUI, SD WebUI Forge, SD.Next, Cagliostro)
ComfyUI_Noise - 6 nodes for ComfyUI that allow more control and flexibility over noise, e.g. for variations or "un-sampling"
ControlNet-v1-1-nightly-colab - controlnet v1.1 colab
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
style2paints - sketch + style = paints :art: (TOG2018/SIGGRAPH2018ASIA)
GET3D
sd-webui-inpaint-anything - Inpaint Anything extension performs stable diffusion inpainting on a browser UI using masks from Segment Anything.