| | stable-dreamfusion | instant-ngp |
|---|---|---|
| Mentions | 41 | 147 |
| Stars | 7,813 | 15,364 |
| Growth | - | 1.1% |
| Activity | 7.2 | 6.7 |
| Latest commit | 5 months ago | 14 days ago |
| Language | Python | Cuda |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-dreamfusion
-
When are we getting stable diffusion for 3d models or 3d scenes?
Who is working on it? I've seen a few other models that do this, like Stable-Dreamfusion.
-
Is it possible for me to approximate a depth map from a generated image and make a 3D model?
I haven't tried Stable-DreamFusion, but it might be able to take an input image along with a text prompt.
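The question above (depth map from a single image, then a 3D model) is usually answered in two steps: a monocular depth estimator (e.g. MiDaS) predicts per-pixel depth, and the depth map is back-projected into a point cloud. The estimator itself is out of scope here, but the back-projection step is just pinhole-camera geometry. A minimal sketch, assuming a pinhole model with hypothetical intrinsics `fx, fy, cx, cy` (the function name and parameters are illustrative, not from any particular library):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map of shape (H, W) into an (H*W, 3)
    point cloud using the pinhole model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy 2x2 depth map of constant depth 1, principal point at the center.
pts = depth_to_points(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

The resulting points could then be meshed (e.g. Poisson reconstruction) to get a printable model, though single-view depth only gives you the visible surface, not a complete object.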
-
Meet ProlificDreamer: An AI Approach That Delivers High-Fidelity and Realistic 3D Content Using Variational Score Distillation (VSD)
Similar to Magic3D and Dreamfusion / Stable-Dreamfusion, but this one looks a lot more vivid and detailed!
-
How would you all feel about 3D Stable Diffusion?
I've seen a few "text-to-3D" models that use Stable Diffusion. Zero-1-to-3 and Stable-DreamFusion appear to be capable of generating 3D models from text prompts.
-
Do any other software devs feel left behind by AI? I feel like I'm working on yesterday's tech
Ever heard of Stable Dreamfusion? Open-source text-to-3D mesh model.
-
Text-to-image-to-3D on 16GB GPU after stable-dreamfusion repo update
I followed the steps in this repo. They added my turtle as an example to the repo after their latest improvements. https://github.com/ashawkey/stable-dreamfusion
- I recreated Beat Saber in Unity only following Chat GPT. The code, the VFX and even the 3D models are made by an AI. Full video in the first comment.
-
ControlNet v1.1 has been released
There is an (independent?) implementation here, released last week at version 0.1, but it already has 100 issues.
-
Would it be possible for SD to make 3D designs that I can later 3D print?
But there's stable-dreamfusion, still not good but you could try it.
-
Game prototype using AI assisted graphics
Dreamfusion's text-to-3D seems like it could be useful here: https://dreamfusion3d.github.io/ (once successfully open-sourced, see https://github.com/ashawkey/stable-dreamfusion)
Rigging also looks like it could have a decent AI/DNN solution: https://arxiv.org/pdf/2005.00559.pdf
instant-ngp
- I want a 3d scanner...
-
Mind-blowing results (LORA/Checkpoint mix)
This is really cool! Could you now use something like this to turn the new images into a 3D model? Or even use OpenPose (ControlNet) to generate a bunch of images from different angles and use InstantNeRF to make a 3D model for free!
-
Scanning real-life environments to be viewed in VR >>> taking pictures. Simple process from video -> render, using instant-ngp
It is at this point that you should have Instant-NGP set up. The script for the COLMAP processing is in the repo, as well as the steps to perform it. My exact parameters were 3 fps and an aabb_scale of 16. It is pretty helpful to add the scripts directory to your PATH for easy access system-wide.
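A sketch of the preprocessing step described above, assuming the repo's `scripts/colmap2nerf.py` helper (flag names per the instant-ngp docs; the input filename is a placeholder):

```shell
# Extract frames from the capture video at 3 fps, run COLMAP on them,
# and write a transforms.json with an aabb_scale of 16 -- matching the
# "3 fps and 16 aabb" parameters mentioned in the post.
python scripts/colmap2nerf.py \
    --video_in scan.mp4 \
    --video_fps 3 \
    --run_colmap \
    --aabb_scale 16
```

The resulting transforms.json (plus the extracted images) is what the instant-ngp testbed loads for training.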
-
[D] NeRF, LeRF, Prolific Dreamer, Neuralangelo, and a lot of other cool NeRF research
[Project Page] https://nvlabs.github.io/instant-ngp/
-
Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
instant-ngp ([1]) from NVIDIA can render NeRF in VR in real-time, assuming a very good desktop video card. Note that instant-ngp is not as photo-realistic as Zip-NeRF. But it's still very good!
1. https://github.com/NVlabs/instant-ngp
- How about Ranger Green?
-
Roast my MC kit
Playing around with NeRF AI (https://github.com/NVlabs/instant-ngp) to create some 3D gear reveals. I think this is a fun way to show off a kit, what do you think?
- Has anyone tried to generate images from enough angles to feed Nvidia Nerf to make 3D models?
-
Instant NGP: how do I minimize noise and maximize quality? Tips welcome!
Not sure if it's the one you want, but --aabb_scale controls the size of the scene's bounding box. This page recommends trying a large value like 128 for some outdoor scenes: https://github.com/NVlabs/instant-ngp/blob/master/docs/nerf_dataset_tips.md
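Per the linked dataset tips, `aabb_scale` lives in the scene's transforms.json (written by the repo's colmap2nerf.py script). A sketch of where you would raise it for a large outdoor scene (the `camera_angle_x` value is illustrative, not from any real scene):

```json
{
  "aabb_scale": 128,
  "camera_angle_x": 1.2,
  "frames": []
}
```

Larger values let the network model geometry farther from the origin, at some cost in resolution inside the unit box.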
-
I NeRF'd the new Taco Bell on Rt. 40
I don't know about Luma Labs, but basically all NeRF projects these days are based on NVIDIA's Instant Neural Graphics Primitives (GitHub: instant-ngp). It uses COLMAP for SfM (a preprocessing step for the neural network) and runs pretty well on average GeForce cards. The fox example (50 photos) on their page literally takes 5 seconds to complete.
What are some alternatives?
dreamgaussian - Generative Gaussian Splatting for Efficient 3D Content Creation
awesome-NeRF - A curated list of awesome neural radiance fields papers
zero123plus - Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.
tiny-cuda-nn - Lightning fast C++/CUDA neural network framework
ComfyUI_Noise - 6 nodes for ComfyUI that allows for more control and flexibility over noise to do e.g. variations or "un-sampling"
nerf-pytorch - A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
TensoRF - [ECCV 2022] Tensorial Radiance Fields, a novel approach to model and reconstruct radiance fields
GET3D
colmap - COLMAP - Structure-from-Motion and Multi-View Stereo
stable-diffusion-webui - Stable Diffusion web UI
instant-meshes - Interactive field-aligned mesh generator