nvdiffrec
Official code for the CVPR 2022 (oral) paper "Extracting Triangular 3D Models, Materials, and Lighting From Images". (by NVlabs)
stable-diffusion-webui
Stable Diffusion web UI (by AUTOMATIC1111)
|  | nvdiffrec | stable-diffusion-webui |
|---|---|---|
| Mentions | 13 | 2,808 |
| Stars | 2,053 | 129,975 |
| Growth | 1.1% | - |
| Activity | 3.2 | 9.9 |
| Latest commit | 2 days ago | 5 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
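As a rough illustration of that kind of metric, here is a minimal sketch of a recency-weighted activity score. The 30-day half-life and the percentile-to-0..10 mapping are assumptions, not the site's actual formula:

```python
# Hedged sketch of a recency-weighted activity score: each commit contributes
# an exponentially decaying weight, and a project's score is its percentile
# rank among all tracked projects, scaled to 0..10. The half-life and the
# percentile mapping are assumptions, not the site's published formula.
from bisect import bisect_left

def raw_activity(commit_ages_days, half_life_days=30.0):
    """Sum of per-commit weights: a commit from today counts 1.0,
    one from half_life_days ago counts 0.5, and so on."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

def activity_score(project_raw, all_raw_scores):
    """Percentile rank of a raw score among all tracked projects,
    scaled to 0..10, so that 9.0 means 'top 10%'."""
    ranked = sorted(all_raw_scores)
    return 10.0 * bisect_left(ranked, project_raw) / len(ranked)

# Example: a project with commits 1, 3, and 40 days old
print(raw_activity([1, 3, 40]))  # ~2.31
```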
nvdiffrec
Posts with mentions or reviews of nvdiffrec.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-08-18.
- [D] Found top conference papers using test data for validation.
It depends on which area of CV research you're in. In NeRF view synthesis, it's pretty common to use test sets as validation sets; this has been done in several papers, including oral papers.
- 3D NeRF of a footstool
I think a paper called nerf2mesh came out recently, which I still have to evaluate (I haven't found time yet). There's also https://github.com/NVlabs/nvdiffrec/. And there's cool, easy-to-use research software like nerfstudio (at least compared to a lot of the raw code releases from research papers).
- Fitting the texture from an image to the corresponding 3D model
For your use case, why is your model devoid of texture? You can try 3D scanning your desired object so that it comes with texture. Either that or use Nvidia's MoMA here to get your object from images.
- WHAT IS THE PROBLEM ???? HELP ME PLZ!!
- Blender animation augmented with AI
- [R] BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale Scene Rendering
Have you seen this project: https://github.com/NVlabs/nvdiffrec (I haven't tried it)? Also, videos tend to have compression; if you can get images, you'll get higher-quality results with most photogrammetry software. Projects like Meshroom are probably better for this if you have high-quality pictures. There are also a few articles covering high-quality scans that can help.
- is NeRF photogrammetry? please don't call me old, but this technology, in my mind, does not fit the strict concept.
You can generate an accurate mesh from a NeRF: https://github.com/NVlabs/nvdiffrec, and measure from that.
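For context, once the mesh is exported (nvdiffrec writes a textured OBJ among its outputs), measuring it can be done with any mesh library. A minimal sketch using trimesh; the output path is an assumption, and NeRF reconstructions are in arbitrary scene units, so real-world measurements need a known reference length in the scene:

```python
# Minimal sketch: load a mesh exported by nvdiffrec and report its size.
# The path "out/mesh/mesh.obj" is an assumption; check your output directory.
import trimesh

mesh = trimesh.load("out/mesh/mesh.obj", force="mesh")
print("watertight:", mesh.is_watertight)
print("bounding-box extents (scene units):", mesh.extents)
print("surface area (scene units^2):", mesh.area)
```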
- NeRF export options and photogrammetry application question
NeRF specifically generates a radiance field, but there is research code for turning that into a mesh (https://github.com/NVlabs/nvdiffrec), though it's not easy to use yet.
- [D] nvdiffrec setup
Hi, I'm not sure if this is the right place, but I was looking into what the latest photo-to-model reconstruction from NVIDIA looks like, starting from here (the arXiv paper is linked there). There are a couple of neat examples, and after one dumb mistake, setup was pretty easy. However, the meshes are only converging very loosely when I use the examples from the paper.
- nvdiffrec tutorial?
Hi everyone! I'm not sure this is the right place to ask, but I've been drooling over these cool ML and deep-learning techniques showcased in videos. I was wondering if anyone could help me get something like nvdiffrec to work with my own sample. https://github.com/NVlabs/nvdiffrec
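For anyone starting from the same place: the repo drives training through train.py and a JSON config, and the README's examples use configs shipped with the repo. A minimal launcher sketch, assuming the repo is cloned locally and its dependencies are installed; the config name is taken from the repo's bundled examples:

```python
# Hedged sketch: launch nvdiffrec on one of its bundled example configs.
# Assumes the repo is cloned to ./nvdiffrec and its dependencies
# (PyTorch, nvdiffrast, etc.) are installed; for your own sample you would
# point --config at a JSON describing your dataset instead.
import subprocess

subprocess.run(
    ["python", "train.py", "--config", "configs/bob.json"],
    cwd="nvdiffrec",  # path to the cloned repo (assumption)
    check=True,
)
```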
stable-diffusion-webui
Posts with mentions or reviews of stable-diffusion-webui.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-02-27.
- Show HN: I made an app to use local AI as daily driver
* LLaVA model: I'll add more documentation. You are right, LLaVA can't generate images. I don't have immediate plans for image generation, but check out these projects for generating images locally (a quick scripting sketch follows the list):
- https://diffusionbee.com/
- https://github.com/comfyanonymous/ComfyUI
- https://github.com/AUTOMATIC1111/stable-diffusion-webui
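As a rough illustration of scripting the last of these: stable-diffusion-webui exposes an HTTP API when launched with the --api flag. A minimal sketch; the default port 7860 is an assumption about your local setup, and the prompt and step count are just example values:

```python
# Hedged sketch: one txt2img call against a locally running
# stable-diffusion-webui started with the --api flag.
import base64
import requests

payload = {
    "prompt": "a watercolor fox in a forest",
    "steps": 20,
    "width": 512,
    "height": 512,
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()
# Generated images come back base64-encoded in the "images" list.
with open("out.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```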
- AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I would love to have a native Stable Diffusion experience; my RX 580 takes 30s to generate a single image. But it does work after following https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki...
I got this up and running on my Windows machine in short order, and I don't even know what Stable Diffusion is.
But again, it would be nice to have first-class support for joining in the fun locally.
- Ask HN: What is the state of the art in AI photo enhancement?
In Auto1111, that just uses Image.blend. :)
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob...
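For reference, PIL's Image.blend is just a linear interpolation between two images; a minimal sketch (filenames are placeholders):

```python
# Image.blend computes out = im1 * (1 - alpha) + im2 * alpha, pixel-wise.
# Both images must have the same mode and size.
from PIL import Image

im1 = Image.open("original.png").convert("RGB")
im2 = Image.open("enhanced.png").convert("RGB")
Image.blend(im1, im2, 0.5).save("blended.png")
```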
- How To Increase Performance Time on MacOS
- Can anyone suggest an AI model that can help me enhance a poorly drawn logo?
I used SDXL in the automatic1111 webui for both images. Now that I think about it, the procedure I described is how I made this one, but the one that looks like an illustration was done in two steps: I used the canny ControlNet, as I said, for the outer part of the logo to preserve the shapes of the fonts, but I had to turn it off for the boot, giving SDXL leeway to add detail and make it look more like a boot.
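The poster did this through the webui's ControlNet extension; as a rough code equivalent, here is a hedged sketch using the diffusers library instead (a substitution, not the poster's setup). The Hugging Face model IDs are the public ones; every other parameter is illustrative:

```python
# Hedged sketch of the described workflow using diffusers rather than the
# webui: a canny edge map of the logo constrains SDXL so the font shapes
# survive. Prompt, thresholds, and scales are illustrative values only.
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from PIL import Image

# Edge map from the rough logo (thresholds are arbitrary).
logo = cv2.imread("logo.png")
edges = cv2.Canny(logo, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="clean vector logo of a leather boot, flat colors",
    image=control_image,
    controlnet_conditioning_scale=0.8,
    num_inference_steps=30,
).images[0]
result.save("logo_refined.png")
```

Lowering controlnet_conditioning_scale, or running a second pass without the ControlNet, mirrors the poster's trick of relaxing the edge constraint so the model can add detail.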
- Seeking out an experienced and empathetic coding buddy.
That said, please do learn to code, and don't get discouraged when somebody says to learn PyTorch or recommends a Jupyter notebook with no further information on how to translate that skill into images. I would highly recommend some short-term goals. Get your feet wet by taking apart the UIs. The Comfy API documentation is here and the A1111 API documentation is here; there is a difference in completeness (welcome to programming). Writing nodes or plugins is also a good way to jump into this world. Custom wildcard logic might be very attractive to you if you aren't the type who wants to deal with a nested file structure to simulate logic.
- can't get it working with an AMD gpu
- SD extension that allows for setting override
Possibly Unprompted? https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/8094
- Need to write an application to use Stable Diffusion on my desktop PC - which resource should I learn to use?
- 4090 Speed Decrease on each Generation/Iteration
version: v1.6.1 • python: 3.10.13 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2 • checkpoint: 6e8d4871f8