stable-dreamfusion vs dreamgaussian

| | stable-dreamfusion | dreamgaussian |
|---|---|---|
| Mentions | 41 | 1 |
| Stars | 7,813 | 3,629 |
| Growth | - | - |
| Activity | 7.2 | 7.6 |
| Latest commit | 5 months ago | 4 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-dreamfusion
-
When are we getting stable diffusion for 3d models or 3d scenes?
Who is working on it? I've seen a few other models that do this, like Stable-Dreamfusion.
-
Is it possible for me to approximate a depth map from a generated image and make a 3D model?
I haven't tried Stable-DreamFusion, but it might be able to take an input image along with a prompt?
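The depth-map route mentioned above is workable even without a text-to-3D model: run a monocular depth estimator (e.g. MiDaS) on the generated image, then back-project each pixel through a pinhole-camera model to get a point cloud. A minimal sketch of the back-projection step, assuming a simple pinhole model with illustrative intrinsics (`fx`, `fy`, `cx`, `cy` are placeholders, not values from any specific tool):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into an (H*W, 3) point cloud
    using a pinhole-camera model: x = (u - cx) * z / fx, etc."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat 4x4 depth map, everything 2 units from the camera.
depth = np.full((4, 4), 2.0)
pts = depth_to_point_cloud(depth, fx=4.0, fy=4.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

The resulting points can be meshed (e.g. Poisson reconstruction) or printed as-is, though single-view depth only gives you the visible surface, not a closed 3D model.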
-
Meet ProlificDreamer: An AI Approach That Delivers High-Fidelity and Realistic 3D Content Using Variational Score Distillation (VSD)
similar to Magic3D and Dreamfusion / Stable-Dreamfusion, but this one looks a lot more vivid and detailed!
-
How would you all feel about 3D Stable Diffusion?
I've seen a few "text-to-3D" models that use Stable Diffusion. Zero-1-to-3 and Stable-DreamFusion appear to be capable of generating 3D models from text prompts.
-
Do any other software devs feel left behind by AI? I feel like I'm working on yesterday's tech
Ever heard of Stable Dreamfusion? It's an open-source text-to-3D mesh model.
-
Text-to-image-to-3D on 16GB GPU after stable-dreamfusion repo update
I followed the steps in this repo. They added my turtle as an example to the repo after their latest improvements. https://github.com/ashawkey/stable-dreamfusion
- I recreated Beat Saber in Unity only following Chat GPT. The code, the VFX and even the 3D models are made by an AI. Full video in the first comment.
-
ControlNet v1.1 has been released
There is an (independent?) implementation here, released last week at 0.1, but it already has 100 issues.
-
Would it be possible for SD to make 3D designs that I can later 3D print?
But there's stable-dreamfusion, still not good but you could try it.
-
Game prototype using AI assisted graphics
Dreamfusion -- text-to-3D seems like it could be useful here: https://dreamfusion3d.github.io/ (once successfully open-sourced, see https://github.com/ashawkey/stable-dreamfusion)
Rigging also looks like it could have a decent AI/DNN solution: https://arxiv.org/pdf/2005.00559.pdf
dreamgaussian
-
JavaScript Gaussian Splatting Library
This, mixed with the new text/image-to-3D models, brings us some really exciting possibilities.
https://github.com/gsgen3d/gsgen
https://github.com/dreamgaussian/dreamgaussian
What are some alternatives?
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
DreamCraft3D - [ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior
zero123plus - Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.
LGM - LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation.
ComfyUI_Noise - 6 nodes for ComfyUI that allows for more control and flexibility over noise to do e.g. variations or "un-sampling"
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
GET3D
stable-diffusion-webui - Stable Diffusion web UI
Longhand - Text corpora in virtual reality
style2paints - sketch + style = paints :art: (TOG2018/SIGGRAPH2018ASIA)
ControlNet - Let us control diffusion models!
text2mesh - 3D mesh stylization driven by a text input in PyTorch