sdgrid vs stable-dreamfusion

| | sdgrid | stable-dreamfusion |
|---|---|---|
| Mentions | 1 | 41 |
| Stars | 4 | 7,827 |
| Growth | - | - |
| Activity | 10.0 | 7.2 |
| Latest Commit | over 1 year ago | 5 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sdgrid
-
I generated over 4000 images showing 200+ prompt styles. Additional prompt ideas welcome!
More prompt ideas are very welcome. Ideally send me a PR on github (https://github.com/lacop/sdgrid/blob/main/inputs/styles.csv) but I can also go through comments here tomorrow and add more stuff.
stable-dreamfusion
-
When are we getting stable diffusion for 3d models or 3d scenes?
Who is working on it? I've seen a few other models that do this, like Stable-Dreamfusion.
-
Is it possible for me to approximate a depth map from a generated image and make a 3D model?
I haven't tried Stable-DreamFusion, but it might be able to take an input image along with a prompt?
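For the depth-map half of that idea, the core step is back-projecting each pixel's depth into 3D space using a pinhole camera model. Below is a minimal sketch (not from either repo; the function name and the intrinsics `fx`, `fy`, `cx`, `cy` are illustrative assumptions) of turning a depth map into a point cloud, which mesh-reconstruction tools can then consume:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map of shape (H, W) into an (H*W, 3)
    point cloud using the pinhole camera model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a flat surface 2 units in front of the camera.
depth = np.full((4, 4), 2.0)
pts = depth_to_points(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
```

In practice the depth map would come from a monocular depth estimator run on the generated image, and the real intrinsics are unknown for a generated image, so the scale and perspective are only approximate.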
-
Meet ProlificDreamer: An AI Approach That Delivers High-Fidelity and Realistic 3D Content Using Variational Score Distillation (VSD)
similar to Magic3D and Dreamfusion / Stable-Dreamfusion, but this one looks a lot more vivid and detailed!
-
How would you all feel about 3D Stable Diffusion?
I've seen a few "text-to-3D" models that use Stable Diffusion. Zero-1-to-3 and Stable-DreamFusion appear to be capable of generating 3D models from text prompts.
-
Do any other software devs feel left behind by AI? I feel like I'm working on yesterday's tech
Ever heard of Stable Dreamfusion? It's an open-source text-to-3D mesh model.
-
Text-to-image-to-3D on 16GB GPU after stable-dreamfusion repo update
I followed the steps in this repo. They added my turtle as an example to the repo after their latest improvements. https://github.com/ashawkey/stable-dreamfusion
- I recreated Beat Saber in Unity by following only ChatGPT. The code, the VFX, and even the 3D models are made by an AI. Full video in the first comment.
-
ControlNet v1.1 has been released
There is an (independent?) implementation here, released last week as 0.1, but it already has 100 issues.
-
Would it be possible for SD to make 3D designs that I can later 3D print?
But there's stable-dreamfusion; it's still not great, but you could try it.
-
Game prototype using AI assisted graphics
Dreamfusion -- text-to-3D seems like it could be useful here: https://dreamfusion3d.github.io/ (once successfully open-sourced, see https://github.com/ashawkey/stable-dreamfusion)
Rigging also looks like it could have a decent AI/DNN solution: https://arxiv.org/pdf/2005.00559.pdf
What are some alternatives?
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
dreamgaussian - Generative Gaussian Splatting for Efficient 3D Content Creation
zero123plus - Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.
ComfyUI_Noise - 6 nodes for ComfyUI that allow for more control and flexibility over noise, e.g. for variations or "un-sampling"
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
GET3D
stable-diffusion-webui - Stable Diffusion web UI
Longhand - Text corpora in virtual reality
style2paints - sketch + style = paints :art: (TOG2018/SIGGRAPH2018ASIA)
ControlNet - Let us control diffusion models!
text2mesh - 3D mesh stylization driven by a text input in PyTorch
magic3d-pytorch - Implementation of Magic3D, Text to 3D content synthesis, in Pytorch