stable-diffusion-webui-depthmap-script
| | stable-diffusion-webui-depthmap-script | sd_dreambooth_extension |
|---|---|---|
| Mentions | 64 | 115 |
| Stars | 1,594 | 1,829 |
| Growth | - | - |
| Activity | 8.3 | 8.7 |
| Latest commit | 2 months ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-depthmap-script
-
PatchFusion is really impressive: high-resolution depth maps in 16-bit. I've been waiting for this. https://github.com/zhyever/PatchFusion
The guide on the GitHub page for the extension is OK: https://github.com/thygate/stable-diffusion-webui-depthmap-script
-
Extension not showing. Depthmap help 🙏
New to SD. I'm trying to get an extension to work (https://github.com/thygate/stable-diffusion-webui-depthmap-script), but unlike in the tutorials, the "Depth" tab doesn't show up after installation. Can anyone help locate the problem? Thanks!
-
Is anyone working on stereoscopic 3D SD? Is it even possible?
You can use this extension to generate stereoscopic images. I don't (yet) dabble in video, so I don't know what it'll do there. I've done a ton of stereo pics with it; my fascination sort of comes and goes. You can do cross-eyed or parallel view as well as red/cyan anaglyphs.
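For a concrete sense of the red/cyan mode mentioned above, here is a minimal compositing sketch, not the extension's actual code: the anaglyph takes its red channel from the left-eye view and green/blue from the right.

```python
# Minimal anaglyph compositing sketch; illustrative only, the
# extension's own implementation may differ.
import numpy as np

def anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """left, right: (H, W, 3) uint8 views for each eye."""
    out = right.copy()
    out[..., 0] = left[..., 0]  # take red from the left eye
    return out

# Dummy stereo pair just to show the call.
left = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
right = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(anaglyph(left, right).shape)  # (64, 64, 3)
```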
-
GUIDE: Ways to generate consistent environments for comics, novels, etc
Option 8. Use img2img of existing 360 HDRIs, extract their depth maps with the depth extension. Use that as a displacement map on a sphere in Blender, similarly to this, with the refurbished HDRI as an image texture, then take screenshots from a position close to the center of the sphere. You are limited to staying close to the center in order to avoid distortion, but now you have 360 degrees of consistent freedom for a particular scene. If you have 2 or more HDRIs of the same place, even better. You could also combine this with the 3D environments of the other options to use 360 renders as bases for the img2img.
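As a rough illustration of the displacement step in Option 8, here is a hedged Blender Python sketch. The file path, sphere resolution, and displacement strength are placeholders; the extension is only used beforehand to produce the depth map.

```python
# Hypothetical Blender (bpy) sketch: displace a sphere with an extracted
# depth map, then view the scene from near the sphere's center.
import bpy

# Add a dense UV sphere so the displacement has geometry to work with.
bpy.ops.mesh.primitive_uv_sphere_add(segments=128, ring_count=64, radius=10.0)
sphere = bpy.context.active_object

# Load the extracted depth map and register it as a displacement texture.
depth_img = bpy.data.images.load("/path/to/depthmap.png")  # placeholder path
tex = bpy.data.textures.new("DepthTex", type='IMAGE')
tex.image = depth_img

# Displace the sphere according to depth values.
mod = sphere.modifiers.new("DepthDisplace", type='DISPLACE')
mod.texture = tex
mod.texture_coords = 'UV'
mod.strength = -2.0  # sign and scale are scene-dependent; tune by eye

# Put the camera near the sphere's center, per the guide's advice.
cam = bpy.data.objects.new("CenterCam", bpy.data.cameras.new("CenterCam"))
bpy.context.collection.objects.link(cam)
cam.location = (0.0, 0.0, 0.0)
```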
-
Another Ai image to 3d
There's another AUTOMATIC1111 extension that lets you create the 3D file there as well, but it consumes a lot of VRAM: https://github.com/thygate/stable-diffusion-webui-depthmap-script
-
Get a 16-Bit Controlnet Depth
If you're using the A1111 webui, there is the depthmap2mask extension, which you can install from the Extensions tab. It adds a Depth tab that lets you create 16-bit depth maps, among many other things.
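If you want to consume those 16-bit depth maps outside the webui, a minimal sketch looks like this, assuming a 16-bit grayscale PNG and Pillow/NumPy; the file name is a placeholder.

```python
# Load a 16-bit depth PNG and normalize it to [0, 1].
# An 8-bit map would divide by 255 instead.
import numpy as np
from PIL import Image

img = Image.open("depth_16bit.png")        # mode 'I;16' for 16-bit grayscale
depth = np.asarray(img).astype(np.float32)
depth_norm = depth / 65535.0               # full 16-bit range
print(depth.shape, depth_norm.min(), depth_norm.max())
```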
-
180 VR - Blue Techno World - (Stable Diffusion + Deforum) stereo video
Actually, it is very easy to do. You need to install an extension for the Stable Diffusion webUI (https://stable-diffusion-art.com/install-windows/). This extension will generate the stereo view for you automatically. It's called Depth. (https://github.com/thygate/stable-diffusion-webui-depthmap-script)
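The core idea behind automatic stereo generation is a depth-dependent horizontal shift of pixels. Here is a naive sketch of that idea; the extension uses a more careful algorithm with gap filling.

```python
# Naive depth-based view synthesis: shift each pixel horizontally in
# proportion to its depth to fake the second eye. Illustrative only.
import numpy as np

def shift_view(image: np.ndarray, depth: np.ndarray, max_shift: int = 12):
    """image: (H, W, 3) uint8; depth: (H, W) in [0, 1], 1 = near."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    shifts = (depth * max_shift).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + shifts[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out  # holes left by the shift would need inpainting

rgb = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
d = np.random.rand(64, 64).astype(np.float32)
pair = np.concatenate([rgb, shift_view(rgb, d)], axis=1)  # parallel view
print(pair.shape)  # (64, 128, 3)
```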
- Is it possible for me to approximate a depth map from a generated image and make a 3D model?
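Yes; the usual route is monocular depth estimation. A minimal sketch using MiDaS via torch.hub (the model family this extension builds on), with a placeholder image path:

```python
# Approximate a depth map from a single generated image with MiDaS small.
import numpy as np
import torch
from PIL import Image

model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
model.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = np.asarray(Image.open("generated.png").convert("RGB"))  # placeholder
batch = transform(img)

with torch.no_grad():
    prediction = model(batch)

depth = prediction.squeeze().numpy()  # relative inverse depth, not metric
print(depth.shape, depth.min(), depth.max())
```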
- Thanks for loving our Star Wars video! We created a new one for Lord of the Rings. Enjoy this mid-journey to Middle-Earth.
-
Found this site through twitter that slightly animates images. Throwing Stable Diffusion generations into it is pretty awesome. Site in comments.
You can do this inside of a1111 as well with this extension https://github.com/thygate/stable-diffusion-webui-depthmap-script
sd_dreambooth_extension
- SDXL Training for Auto1111 is now Working on a 24GB Card
-
(Requesting Help)
I am trying to use Stable Diffusion via AUTOMATIC1111 with the Dreambooth extension.
-
it will be absolute madness when SDXL becomes the standard model and we start getting other models from it
When I first attempted SD training, I was very frustrated. It wasn't until I found this obscure forum thread on GitHub that I actually started producing great results with Dreambooth. Because I have such satisfactory results, I'm very reluctant to beat my brains against LoRA and its related training techniques. I gave up trying to train TI embeddings a long time ago, and I never figured out how to train or use hypernetworks. I've only been able to get good results with Dreambooth directly because of that thread I linked above. I make LoRAs by extracting them from Dreambooth-trained checkpoints, and I have no idea if I'm doing the extractions the right way or not.
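For what it's worth, the extraction idea is low-rank approximation of the weight delta between the tuned checkpoint and its base model. A hedged per-layer sketch follows; real extraction scripts (e.g. kohya's) add per-module bookkeeping on top of this.

```python
# Sketch of extracting a LoRA factor pair from one weight matrix via SVD.
import torch

def extract_lora(base_w: torch.Tensor, tuned_w: torch.Tensor, rank: int = 8):
    delta = tuned_w - base_w                      # what fine-tuning changed
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    lora_up = u[:, :rank] * s[:rank]              # (out_features, rank)
    lora_down = vh[:rank, :]                      # (rank, in_features)
    return lora_up, lora_down

# Toy check: the rank-8 product should approximate the delta.
base = torch.randn(320, 320)
tuned = base + 0.01 * torch.randn(320, 320)
up, down = extract_lora(base, tuned)
print((up @ down - (tuned - base)).abs().mean())
```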
-
"Exception training model: ' Some tensors share memory" with Dreambooth on Vladmatic
Getting the same with automatic1111 and sd_dreambooth extension. Check out more here in the issues log: https://github.com/d8ahazard/sd_dreambooth_extension/issues/1266
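The error comes from safetensors refusing to serialize tensors that alias the same storage. A minimal reproduction, and the usual workaround of cloning to break the aliasing:

```python
# Reproduce the "Some tensors share memory" error and fix it by cloning.
import torch
from safetensors.torch import save_file

weight = torch.randn(4, 4)
state = {"a": weight, "b": weight}  # "b" aliases "a"'s memory

try:
    save_file(state, "model.safetensors")
except RuntimeError as e:
    print(e)  # safetensors rejects aliased tensors

state["b"] = state["b"].clone()        # break the aliasing
save_file(state, "model.safetensors")  # now succeeds
```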
-
Yo, DreamBooth gatekeepers, SHARE YOUR HYPERPARAMETERS, please.
It's several months old and many things have changed, but the spreadsheet available through this thread on GitHub has been indispensable for me when I train Dreambooth models. I'm astounded no one talks about it; I bring it up all the time. The research presented there should be continued. I'd love to see similar research done for SD v2.1.
-
What is the BEST solution for hyper realistic person training?
The training rate is paramount. Read this GitHub thread.
-
How do you train your LoRAs, 1 Epoch or >1 Epoch (same # of steps)?
https://github.com/d8ahazard/sd_dreambooth_extension/discussions/547/ (an in-depth discussion of training principles)
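The trade-off in the question is mostly bookkeeping: the same total number of optimizer steps can be reached with one long epoch or several shorter ones; only checkpoint and shuffle boundaries move. A toy calculation with made-up numbers:

```python
# Illustrative steps-vs-epochs arithmetic; all numbers are made up.
total_steps = 1000  # fixed budget of optimizer steps
for epochs in (1, 2, 5, 10):
    steps_per_epoch = total_steps // epochs
    print(f"{epochs:>2} epoch(s) x {steps_per_epoch:>4} steps/epoch "
          f"= {epochs * steps_per_epoch} total")
```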
-
Struggling to install Dreambooth
sd_dreambooth_extension: https://github.com/d8ahazard/sd_dreambooth_extension.git (branch main, commit 926ae204, Fri Mar 31 15:12:45 2023, update status unknown)
- Attempting to train a LoRA with an RTX 2060 (6 GB VRAM); how should I go about this?
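On 6 GB, the usual levers are half precision, gradient checkpointing, and an 8-bit optimizer. A hedged sketch of just those toggles, using diffusers and bitsandbytes; the training loop is omitted, the base model name is an assumption, and real trainers typically keep fp32 master weights via accelerate.

```python
# Common low-VRAM toggles for fine-tuning an SD UNet; wiring is omitted.
import torch
from diffusers import UNet2DConditionModel
import bitsandbytes as bnb

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base model
    subfolder="unet",
    torch_dtype=torch.float16,         # halve weight/activation memory
).to("cuda")
unet.enable_gradient_checkpointing()   # trade compute for activation memory

# 8-bit optimizer states instead of fp32 Adam moments.
optimizer = bnb.optim.AdamW8bit(unet.parameters(), lr=1e-4)
```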
-
SD just released an open source version of their GUI called StableStudio
Also, the Dreambooth extension supports an API (https://github.com/d8ahazard/sd_dreambooth_extension/blob/main/scripts/api.py), so I'm not sure where you got that news :/
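For reference, calling such an extension API is plain HTTP against the local webui. The route and parameter below are placeholders, not the extension's verified schema; check scripts/api.py in the repo for the real endpoints.

```python
# Hedged sketch of hitting a webui extension endpoint over HTTP.
import requests

BASE = "http://127.0.0.1:7860"  # default local webui address

resp = requests.post(
    f"{BASE}/dreambooth/start_training",  # hypothetical route
    params={"model_name": "my_model"},    # hypothetical parameter
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```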
What are some alternatives?
MiDaS - Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
a1111-sd-zoe-depth - A1111 SD WebUI extension version of ZoeDepth
kohya_ss
multi-subject-render - Generate multiple complex subjects all at once!
kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
Thin-Plate-Spline-Motion-Model - [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
depthmap2mask - Create masks out of depthmaps in img2img
dreambooth-training-guide
point-e - Point cloud diffusion for 3D model synthesis
sd-scripts