sd-webui-colab vs instant-ngp

| | sd-webui-colab | instant-ngp |
|---|---|---|
| Mentions | 14 | 147 |
| Stars | 512 | 15,388 |
| Growth | - | 1.3% |
| Activity | 6.8 | 6.7 |
| Last commit | over 1 year ago | 18 days ago |
| Language | Jupyter Notebook | Cuda |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
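The Growth column is plain month-over-month arithmetic on the star counts. A minimal sketch (the 15,190 prior-month figure is back-computed for illustration, not taken from this page):

```python
def mom_growth(stars_now: int, stars_a_month_ago: int) -> float:
    """Month-over-month star growth, as a percentage."""
    return (stars_now - stars_a_month_ago) / stars_a_month_ago * 100

# instant-ngp: 15,388 stars today vs an assumed ~15,190 a month ago
print(round(mom_growth(15388, 15190), 1))  # 1.3
```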
sd-webui-colab
- List of Stable Diffusion systems - Part 3
(Added Aug. 28, 2022) Colab notebook "Stable Diffusion WebUi - Altryne" by altryne. GitHub repo. Supports txt2img, img2img, and inpainting through a Gradio user interface. Uses the sd-webui repo.
- What's the best install of Stable Diffusion right now?
You can try the Colab version of the hlky repo, https://github.com/altryne/sd-webui-colab, which is easier to set up, or a one-cell Colab version, https://github.com/pinilpypinilpy/sd-webui-colab-simplified, where everything runs in one cell.
- Using DOS games as init images - part 2
- Testing Brazil in Stable Diffusion (the open-source version of DALL-E)
- Running Stable Diffusion on Your GPU with Less Than 10Gb of VRAM
For those without a GPU, or without a powerful enough one: you can start the hlky Stable Diffusion webui (yes, web UI) in Google Colab with this notebook[0].
It's simple and it works, using Colab for processing while giving you an ngrok-style URL to open the web UI in your browser.
I've been using it on the go when not at my PC and it's been working very well for me (after trying numerous other Colab-dedicated repos, trying to fix them, and failing).
[0]: https://github.com/altryne/sd-webui-colab
- ModuleNotFoundError: No module named 'frontend' error in Stable Diffusion Kaggle Notebook
The code is directly adapted from the Colab notebook repo based on the hlky GitHub repo. I really don't have much experience with coding and didn't change much of the code other than the paths specific to Kaggle.
- Run Stable Diffusion on Your M1 Mac’s GPU
- Anyone running stable diffusion webui on google colab pro+ account?
I'm running SD-webui on Google Colab (https://github.com/altryne/sd-webui-colab/) with a Colab Pro account and it's awesome, but it does crash when attempting larger images. Is anyone using a Google Colab Pro+ account and able to process larger images?
- sd-webui on google colab by Altryne and Hlky - Help saving to gdrive?
- Made a super simple Colab version of Stable diffusion
Based off of https://github.com/altryne/sd-webui-colab and https://github.com/hlky/stable-diffusion
instant-ngp
- I want a 3d scanner...
- Mind-blowing results (LORA/Checkpoint mix)
This is really cool! Could you now use something like this to turn the new images into a 3D model? Or even use OpenPose (ControlNet) to generate a bunch of images from different angles and use Instant NeRF to make a 3D model for free?
- Scanning in real life environments to be viewed in VR >>> taking pictures. Simple process from video -> render, using instant-ngp
At this point you should have Instant-NGP set up. The script for the COLMAP processing is in the repo, along with the steps to perform it. My exact parameters were 3 fps and an aabb_scale of 16. It is also helpful to add the scripts directory to your PATH for easy access system-wide.
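The video -> COLMAP step described above can be sketched as a single call to the repo's conversion script. Flag names follow instant-ngp's scripts/colmap2nerf.py and may differ between versions; capture.mp4 is a placeholder input.

```shell
# From the instant-ngp repo root: extract frames at 3 fps, run COLMAP,
# and write transforms.json with aabb_scale 16, as quoted above.
python scripts/colmap2nerf.py \
    --video_in capture.mp4 \
    --video_fps 3 \
    --run_colmap \
    --aabb_scale 16

# Optional: put the helper scripts on PATH for system-wide access.
export PATH="$PATH:$(pwd)/scripts"
```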
- [D] NeRF, LeRF, Prolific Dreamer, Neuralangelo, and a lot of other cool NeRF research
[Project Page] https://nvlabs.github.io/instant-ngp/
- Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
instant-ngp [1] from NVIDIA can render NeRF in VR in real time, assuming a very good desktop video card. Note that instant-ngp is not as photorealistic as Zip-NeRF, but it's still very good!
1. https://github.com/NVlabs/instant-ngp
- How about Ranger Green?
- Roast my MC kit
Playing around with NeRF (https://github.com/NVlabs/instant-ngp) to create some 3D gear reveals. I think this is a fun way to show off a kit; what do you think?
- Has anyone tried to generate images from enough angles to feed Nvidia Nerf to make 3D models?
- Instant NGP: how do I minimize noise and maximize quality? Tips welcome!
Not sure if it's the one you want, but aabb_scale sets the bounding-box crop. This page recommends trying a large value of 128 for some outdoor scenes: https://github.com/NVlabs/instant-ngp/blob/master/docs/nerf_dataset_tips.md
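As the tip above suggests, aabb_scale trades scene coverage for detail. A minimal sketch of the constraint (the power-of-two range 1 to 128 follows the instant-ngp dataset tips; the helper names here are hypothetical):

```python
def is_valid_aabb_scale(scale: int) -> bool:
    """True if scale is a power of two in [1, 128], the range the
    instant-ngp dataset tips recommend for aabb_scale."""
    return 1 <= scale <= 128 and (scale & (scale - 1)) == 0

def suggest_aabb_scale(scene_radius_units: float) -> int:
    """Smallest valid aabb_scale covering a scene of the given radius
    (in the dataset's normalized units), capped at 128."""
    scale = 1
    while scale < scene_radius_units and scale < 128:
        scale *= 2
    return scale

print(is_valid_aabb_scale(16))    # True
print(suggest_aabb_scale(100.0))  # 128, matching the outdoor-scene tip
```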
- I NeRF'd the new Taco Bell on Rt. 40
I don't know about lumalabs, but basically all NeRF projects these days are based on NVIDIA's Instant Neural Graphics Primitives (GitHub: instant-ngp). It uses COLMAP for SfM (a preprocessing step for the neural network) and runs pretty well on average GeForce cards. The fox example (50 photos) on their page literally takes 5 seconds to complete.
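For reference, the fox example mentioned above ships with the repo and can be launched directly in recent builds (older builds used ./build/testbed --scene instead; treat the exact binary name as version-dependent):

```shell
# Train and preview the bundled fox scene (~50 photos); on a recent
# GeForce card the preview converges within seconds.
./instant-ngp data/nerf/fox
```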
What are some alternatives?
stablediffusion-interpolation-tools
awesome-NeRF - A curated list of awesome neural radiance fields papers
stable_diffusion.openvino
tiny-cuda-nn - Lightning fast C++/CUDA neural network framework
stable-diffusion
nerf-pytorch - A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.
awesome-stable-diffusion - Curated list of awesome resources for the Stable Diffusion AI Model.
TensoRF - [ECCV 2022] Tensorial Radiance Fields, a novel approach to model and reconstruct radiance fields
stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI
colmap - COLMAP - Structure-from-Motion and Multi-View Stereo
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]
instant-meshes - Interactive field-aligned mesh generator