| | instruct-pix2pix | instant-ngp |
|---|---|---|
| Mentions | 21 | 147 |
| Stars | 5,989 | 15,418 |
| Growth | - | 1.4% |
| Activity | 0.0 | 6.4 |
| Last commit | 2 months ago | 29 days ago |
| Language | Python | Cuda |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
instruct-pix2pix
-
Stable Video Diffusion
My guess is you're thinking of InstructPix2Pix[1], with prompts like "make the sky green" or "replace the fruits with cake".
[1] https://github.com/timothybrooks/instruct-pix2pix
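The linked InstructPix2Pix paper steers edits like "make the sky green" with two classifier-free guidance scales, one for the input image and one for the text instruction. A minimal numpy sketch of that combination step (the function name is mine, and the "noise predictions" are random placeholders rather than real U-Net outputs):

```python
import numpy as np

def combine_guidance(e_uncond, e_img, e_full, s_img=1.5, s_txt=7.5):
    """Combine the three noise predictions used by InstructPix2Pix:
    e_uncond: eps(z, no image, no text)
    e_img:    eps(z, image,    no text)
    e_full:   eps(z, image,    text instruction)
    s_img / s_txt are the image- and text-guidance scales."""
    return (e_uncond
            + s_img * (e_img - e_uncond)
            + s_txt * (e_full - e_img))

# Placeholder arrays standing in for U-Net noise predictions.
rng = np.random.default_rng(0)
e_uncond, e_img, e_full = (rng.standard_normal((4, 8, 8)) for _ in range(3))
eps = combine_guidance(e_uncond, e_img, e_full)  # same shape as the inputs
```

Raising `s_txt` pushes the output toward the instruction; raising `s_img` keeps it closer to the source photo, which is why the method can make targeted edits without masks.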
-
AI image editors with “text to filter” function?
This comes from https://github.com/timothybrooks/instruct-pix2pix; there is also an extension to use it in the Automatic1111 Stable Diffusion web UI.
- [D] NeRF, LeRF, Prolific Dreamer, Neuralangelo, and a lot of other cool NeRF research
-
Was it SD that had the ability to edit a photo using prompts?
InstructPix2Pix
-
Alternate download location for instruct-pix2pix-00-22000.ckpt?
Is there another place I can download the model? I tried downloading the file using the instructions on this page:
-
Using our photoshop plugin for some cool image editing! :D
It comes from https://github.com/timothybrooks/instruct-pix2pix; you can try it out at https://huggingface.co/spaces/timbrooks/instruct-pix2pix
-
instruct pix2pix faces always come out messed up. The rest is really good. Any idea how to fix this?
Interesting, I've been running it using this: https://github.com/timothybrooks/instruct-pix2pix/blob/main/LICENSE
-
Everybody is always talking about AGI. I'm more curious about using the tools that we have now.
This is already done and it's already been implemented in the most popular web-ui for stable diffusion too. Granted the results aren't perfect yet.
-
gif2gif: Quick and easy webui extension for dropping animated GIFs into img2img
Select the script, drop in a GIF, use img2img as normal to process it. Supports quick non-ffmpeg interpolation, and works surprisingly well with InstructPix2Pix. Intended to be a fun no-nonsense GIF pipeline.
-
NMKD Stable Diffusion GUI 1.9.0 is out now, featuring InstructPix2Pix - Edit images simply by using instructions! Link and details in comments.
Github Issue - Closed
instant-ngp
- I want a 3d scanner...
-
Mind-blowing results (LORA/Checkpoint mix)
This is really cool! Could you now use something like this to turn the new images into a 3D model? Or even use OpenPose (ControlNet) to generate a bunch of images from different angles and use Instant NeRF to make a 3D model for free!
-
Scanning in real life environments to be viewed in VR >>> taking pictures. Simple process from video -> render, using instant-ngp
By this point you should have Instant-NGP set up. The script for the COLMAP processing is in the repo, along with the steps to perform it. My exact parameters were 3 fps and an aabb of 16. It is pretty helpful to add the scripts directory to your PATH for easy system-wide access.
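The video-to-render steps described above roughly correspond to the command sequence below, using the `colmap2nerf.py` script shipped in the instant-ngp repo. The file paths and output names are illustrative; the 3 fps and aabb 16 values come from the comment, and this assumes COLMAP and CUDA are installed:

```shell
# From inside an instant-ngp checkout (paths are illustrative).
# 1. Extract frames from the capture video and run COLMAP on them;
#    --video_fps 3 matches the "3 fps" parameter mentioned above.
python scripts/colmap2nerf.py \
    --video_in capture.mp4 --video_fps 3 \
    --run_colmap --aabb_scale 16 \
    --out data/myscene/transforms.json

# 2. Train and view the NeRF on the processed scene.
python scripts/run.py --scene data/myscene --gui
```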
-
[D] NeRF, LeRF, Prolific Dreamer, Neuralangelo, and a lot of other cool NeRF research
[Project Page] https://nvlabs.github.io/instant-ngp/
-
Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
instant-ngp ([1]) from NVIDIA can render NeRF in VR in real-time, assuming a very good desktop video card. Note that instant-ngp is not as photo-realistic as Zip-NeRF. But it's still very good!
1. https://github.com/NVlabs/instant-ngp
- How about Ranger Green?
-
Roast my MC kit
Playing around with NeRF AI (https://github.com/NVlabs/instant-ngp) to create some 3D gear reveals. I think this is a fun way to show off a kit; what do you think?
- Has anyone tried to generate images from enough angles to feed Nvidia Nerf to make 3D models?
-
Instant NGP: how do I minimize noise and maximize quality? Tips welcome!
Not sure if it's the one you want, but the aabb_scale parameter is a crop. This page recommends trying a large value of 128 for some outdoor scenes: https://github.com/NVlabs/instant-ngp/blob/master/docs/nerf_dataset_tips.md
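Per the linked tips page, `aabb_scale` lives in the scene's `transforms.json` and enlarges the axis-aligned bounding box the NeRF is trained in; power-of-two values up to 128 are suggested for unbounded outdoor scenes. A minimal fragment (other fields elided; the file path is illustrative):

```json
{
  "aabb_scale": 128,
  "frames": [
    { "file_path": "./images/0001.jpg", "transform_matrix": "..." }
  ]
}
```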
-
I NeRF'd the new Taco Bell on Rt. 40
I don't know about Luma Labs, but basically all NeRF projects these days are based on NVIDIA's Instant Neural Graphics Primitives (GitHub: instant-ngp). It utilizes COLMAP for SfM (a preprocessing step for the neural network) and runs pretty well on average GeForce cards. The fox example (50 photos) on their page literally takes 5 seconds to complete.
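The speed the comment describes comes largely from the paper's multiresolution hash encoding, which maps each grid point to a small trainable feature vector through a spatial hash instead of a dense grid. A toy numpy sketch of just the hash lookup (the table size and feature dimension are illustrative; real implementations hash at many resolutions and trilinearly interpolate between grid corners):

```python
import numpy as np

# Per-dimension primes from the instant-ngp spatial hash (Mueller et al. 2022).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(coords, table_size):
    """XOR-fold integer grid coordinates into a hash-table index.

    coords: (N, 3) integer grid coordinates
    table_size: number of entries T in the feature table (power of two)
    """
    c = coords.astype(np.uint64)
    h = c[:, 0] * PRIMES[0]
    h ^= c[:, 1] * PRIMES[1]
    h ^= c[:, 2] * PRIMES[2]
    return h % np.uint64(table_size)

# Look up (random, untrained) 2-dim features for a couple of grid points.
T = 2 ** 14
table = np.random.default_rng(0).standard_normal((T, 2))
idx = hash_coords(np.array([[0, 0, 0], [5, 1, 9]]), T)
features = table[idx]  # shape (2, 2)
```

Because the table is small and lookups are O(1), the heavy MLP of classic NeRF shrinks to a tiny network, which is what makes the fox example trainable in seconds on a consumer GeForce card.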
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
awesome-NeRF - A curated list of awesome neural radiance fields papers
stable-diffusion-webui-instruct-pix2pix - Extension for webui to run instruct-pix2pix
tiny-cuda-nn - Lightning fast C++/CUDA neural network framework
GFPGAN - GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.
nerf-pytorch - A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.
gif2gif - Automatic1111 Animated Image (input/output) Extension
TensoRF - [ECCV 2022] Tensorial Radiance Fields, a novel approach to model and reconstruct radiance fields
k-diffusion - Karras et al. (2022) diffusion models for PyTorch
colmap - COLMAP - Structure-from-Motion and Multi-View Stereo
prolificdreamer - Official code of ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation (NeurIPS 2023 Spotlight)
instant-meshes - Interactive field-aligned mesh generator