stable-diffusion vs instant-ngp
| | stable-diffusion | instant-ngp |
|---|---|---|
| Mentions | 111 | 147 |
| Stars | 1,749 | 15,329 |
| Growth | - | 2.2% |
| Activity | 10.0 | 6.7 |
| Latest commit | over 1 year ago | 9 days ago |
| Language | Jupyter Notebook | Cuda |
| License | GNU Affero General Public License v3.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion
- PSA: You can run your GPUs at 80% power and get the same rendering speeds while saving heat, fan noise, and electricity
Use or update this one: https://github.com/hlky/stable-diffusion. It has all the samplers, and if you want perfect faces, try k_euler_a.
- "a software developer after fixing a bug", by DALL-E 2
Try this one: https://github.com/hlky/stable-diffusion. You need at least a 1050 to run it, though.
- Which is the best fork out there?
- At the end of my rope on the hlky fork, can anyone recommend any alternative GUI forks I could switch to?
https://github.com/hlky/stable-diffusion/issues/153 - with 36 comments and tons of before-and-after comparisons, which are now deleted.
- CUDA memory error with hlky repo (4GB Nvidia) - any ideas?
I wanted to try the hlky version (https://github.com/hlky/stable-diffusion) because of the WebUI and the integration with upscaling models. It should also have an option to optimize for low VRAM. To avoid getting a green square, I have to add the parameters "--precision full --no-half". When I run a prompt, even with the smallest image size, I immediately get a CUDA memory error. Interestingly, without those parameters there isn't any memory error (but, of course, the result is a green square).
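For reference, a launch along the lines described in that post might look like this. The entry-point script name is an assumption (it varies between stable-diffusion forks), while `--precision full --no-half` are the flags quoted above:

```shell
# Sketch only: the webui script name differs between stable-diffusion forks.
# --precision full --no-half avoids green-square output on some cards, but
# roughly doubles VRAM use, which can trigger CUDA out-of-memory on 4 GB GPUs.
python scripts/webui.py --precision full --no-half
```

If full precision does not fit in 4 GB, the usual fallbacks are the fork's low-VRAM/optimized mode or generating at a smaller resolution.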
- Fallout 5: Toronto (created with AI)
Made using https://github.com/hlky/stable-diffusion
- Just released a Colab notebook that combines Craiyon+Stable Diffusion
Any chance to get this integrated into something like hlky's web ui?
- AI text-to-image: Moose and stave church with northern lights over the Norwegian flag in the background [OC] More details in the post
Linux guide here. I also run Linux, but I chose to set it up on my Windows box because the Nvidia card's drivers on Linux aren't very cooperative when it comes to adjusting the fans based on the card's sensors (so I have to set them manually).
- Using GFPGAN for only the eyes?
I'm seeing GFPGAN essentially remove all texture from faces, and I only want to use it on the eyes. Any thoughts on how to do this? I am using hlky/stable-diffusion now, but I have no issues running a different repo/fork if needed and using the command line.
- What's the best install of Stable Diffusion right now?
instant-ngp
- I want a 3d scanner...
- Mind-blowing results (LORA/Checkpoint mix)
This is really cool! Could you now use something like this to turn the new images into a 3D model? Or even use OpenPose (ControlNet) to generate a bunch of images from different angles and use Instant NeRF to make a 3D model for free!
- Scanning in real life environments to be viewed in VR >>> taking pictures. Simple process from video -> render, using instant-ngp
It is at this point that you should have Instant-NGP set up. The script for the COLMAP processing is in the repo, as well as the steps to perform it. My exact parameters were 3 fps and 16 aabb. It is pretty helpful to add the scripts directory to your PATH for easy access system-wide.
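With the parameters mentioned above (3 fps, aabb 16), the COLMAP preprocessing step might be invoked roughly like this. This is a sketch based on the colmap2nerf.py script shipped in the instant-ngp repo; the video file name is a placeholder:

```shell
# Extract frames from the capture video at 3 fps, run COLMAP on them, and
# write transforms.json with an axis-aligned bounding box scale of 16.
python scripts/colmap2nerf.py --video_in capture.mp4 --video_fps 3 \
    --run_colmap --aabb_scale 16
```

The resulting transforms.json is what the instant-ngp testbed loads as a scene.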
- [D] NeRF, LeRF, Prolific Dreamer, Neuralangelo, and a lot of other cool NeRF research
[Project Page] https://nvlabs.github.io/instant-ngp/
- Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
instant-ngp ([1]) from NVIDIA can render NeRF in VR in real-time, assuming a very good desktop video card. Note that instant-ngp is not as photo-realistic as Zip-NeRF. But it's still very good!
1. https://github.com/NVlabs/instant-ngp
- How about Ranger Green?
- Roast my MC kit
Playing around with NeRF (https://github.com/NVlabs/instant-ngp) to create some 3D gear reveals. I think this is a fun way to show off a kit - what do you think?
- Has anyone tried to generate images from enough angles to feed Nvidia Nerf to make 3D models?
- Instant NGP: how to minimize noise and maximize quality? Tips welcome!
Not sure if it's the one you want, but aabb_scale is a crop. This page recommends trying a large value of 128 for some outdoor scenes: https://github.com/NVlabs/instant-ngp/blob/master/docs/nerf_dataset_tips.md
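If a scene was already processed with a small bounding box, aabb_scale can be changed after the fact by editing transforms.json instead of re-running COLMAP. A minimal sketch, assuming the file layout that instant-ngp's colmap2nerf.py writes (the helper name is mine):

```python
import json

def set_aabb_scale(transforms_path, scale):
    """Rewrite the aabb_scale field of an instant-ngp transforms.json.

    aabb_scale bounds the volume the network models; the dataset tips page
    linked above suggests powers of two (up to 128) for large outdoor scenes.
    """
    if scale <= 0 or scale & (scale - 1):
        raise ValueError("aabb_scale should be a positive power of two")
    with open(transforms_path) as f:
        cfg = json.load(f)
    cfg["aabb_scale"] = scale
    with open(transforms_path, "w") as f:
        json.dump(cfg, f, indent=2)
    return cfg
```

For example, `set_aabb_scale("transforms.json", 128)` before relaunching the testbed on an outdoor capture.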
- I NeRF'd the new Taco Bell on Rt. 40
I don't know about lumalabs, but basically all NeRF projects these days are based on NVIDIA's Instant Neural Graphics Primitives (GitHub: instant-ngp). It uses COLMAP for SfM (a preprocessing step for the neural network) and runs pretty well on average GeForce cards. The fox example (50 photos) on their page literally takes 5 seconds to complete.
What are some alternatives?
diffusers-uncensored - Uncensored fork of diffusers
awesome-NeRF - A curated list of awesome neural radiance fields papers
stable-diffusion-krita-plugin
tiny-cuda-nn - Lightning fast C++/CUDA neural network framework
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
nerf-pytorch - A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.
stable_diffusion.openvino
TensoRF - [ECCV 2022] Tensorial Radiance Fields, a novel approach to model and reconstruct radiance fields
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]
colmap - Structure-from-Motion and Multi-View Stereo
stable-diffusion-webui - Stable Diffusion web UI
instant-meshes - Interactive field-aligned mesh generator