depthmap2mask vs DPT

| | depthmap2mask | DPT |
|---|---|---|
| Mentions | 26 | 6 |
| Stars | 352 | 1,163 |
| Growth | - | - |
| Activity | 2.7 | 10.0 |
| Latest commit | about 1 year ago | over 1 year ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
depthmap2mask
- Jessica Rabbit | Toon integration test
- Is there a Chroma Key embedding anywhere?
-
StableDiffusion locally, what am i doing wrong ? what settings should i use ? i am using img2img and keep getting these messed up results
For changing the background, I suggest using depthmap2mask.
-
Using SD as a green screen?
Have you tried depthmap2mask?
-
Quick test of AI and Blender with camera projection.
Looks really good. Have you tried img2depth for the texturing? GitHub - Extraltodeus/depthmap2mask: Create masks out of depthmaps in img2img
-
Ideas for using SD to automatically enhance photographic portraits without completely distorting the face
Have you tried https://github.com/Extraltodeus/depthmap2mask ?
-
Deforum: FileNotFoundError: [Errno 2] No such file or directory:
No, and I don't need to. depthmap2mask works sloppily; I don't like it. It's much better to create the mask for "Inpainting" using image-editing software. Here you can see how it's done: https://www.youtube.com/watch?v=dnIYTGW1m8w
- flowdas-meta missing from PyPI, can't pip install launch? Impossible?
-
The transformation no one asked for
Sent to img2img and used Depth Aware img2img mask with the model set to `midas_v21_small` so that I would hopefully affect as little of the image as possible. (after seeing the pants morph, I think it might have been better to just use inpaint)
- Me waiting for A1111 Depth2img to officially support custom depth maps.
DPT
-
Having issue installing text to image
wget https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt -O "C:\Users\itsju\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Deforum Stable Diffusion\dpt_large-midas-2f21e586.pt" --tries=1 --no-check-certificate --progress=bar:force
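If `wget` isn't available (common on a stock Windows install), the same checkpoint download can be done in pure Python. This is a hypothetical helper, not part of any project above; `download_if_missing` is a name chosen here for illustration:

```python
import os
import urllib.request

def download_if_missing(url: str, dest_path: str) -> str:
    """Download url to dest_path unless the file already exists.

    Pure-Python stand-in for shelling out to wget, so it works even
    on machines without a wget binary on PATH.
    """
    # Create the target directory if needed (e.g. ...\Deforum Stable Diffusion\)
    os.makedirs(os.path.dirname(dest_path) or ".", exist_ok=True)
    if not os.path.exists(dest_path):
        urllib.request.urlretrieve(url, dest_path)
    return dest_path
```

Usage would look like `download_if_missing("https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", target_path)`, where `target_path` is the full checkpoint path your tool expects.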
-
File not found error
```python
import os
import shutil
import sys

# gitclone, wget, PROJECT_DIR and model_path are helpers/globals
# defined elsewhere in the Deforum notebook.
try:
    from midas.dpt_depth import DPTDepthModel
except ImportError:
    if not os.path.exists('MiDaS'):
        gitclone("https://github.com/isl-org/MiDaS.git")
        gitclone("https://github.com/bytedance/Next-ViT.git", f'{PROJECT_DIR}/externals/Next_ViT')
    if not os.path.exists('MiDaS/midas_utils.py'):
        shutil.move('MiDaS/utils.py', 'MiDaS/midas_utils.py')
    if not os.path.exists(f'{model_path}/dpt_large-midas-2f21e586.pt'):
        wget("https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", model_path)
    sys.path.append(f'{PROJECT_DIR}/MiDaS')
```
-
Is there a reason that the community is sleeping on the SD 2 DEPTH model and 4X UPSCALER?
Try downloading the MiDaS model manually from here: https://github.com/intel-isl/DPT/releases/download/1_0/dpt_hybrid-midas-501f0c75.pt It should go in the stable-diffusion-webui/models/midas folder. If that doesn't work, try the stable-diffusion-webui/midas_models folder instead.
- Dreams of Many Landscapes
-
Need help with Deforum SD. 3d Animation Error.
```
Saving animation frames to output\2022-10\Test16
Downloading dpt_large-midas-2f21e586.pt...
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
Cell In [16], line 550
    548 # dispatch to appropriate renderer
    549 if anim_args.animation_mode == '2D' or anim_args.animation_mode == '3D':
--> 550     render_animation(args, anim_args)
    551 elif anim_args.animation_mode == 'Video Input':
    552     render_input_video(args, anim_args)

Cell In [16], line 202, in render_animation(args, anim_args)
    200 if predict_depths:
    201     depth_model = DepthModel(device)
--> 202     depth_model.load_midas(models_path)
    203 if anim_args.midas_weight < 1.0:
    204     depth_model.load_adabins()

File ~\dsd\stable-diffusion\helpers\depth.py:38, in DepthModel.load_midas(self, models_path, half_precision)
     36 if not os.path.exists(os.path.join(models_path, 'dpt_large-midas-2f21e586.pt')):
     37     print("Downloading dpt_large-midas-2f21e586.pt...")
---> 38     wget("https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", models_path)
     40 self.midas_model = DPTDepthModel(
     41     path=f"{models_path}/dpt_large-midas-2f21e586.pt",
     42     backbone="vitl16_384",
     43     non_negative=True,
     44 )
     45 normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])

File ~\dsd\stable-diffusion\helpers\depth.py:16, in wget(url, outputdir)
     15 def wget(url, outputdir):
---> 16     print(subprocess.run(['wget', url, '-P', outputdir], stdout=subprocess.PIPE).stdout.decode('utf-8'))

File ~\miniconda3\envs\dsd\lib\subprocess.py:505, in run(input, capture_output, timeout, check, *popenargs, **kwargs)
    502 kwargs['stdout'] = PIPE
    503 kwargs['stderr'] = PIPE
--> 505 with Popen(*popenargs, **kwargs) as process:
    506     try:
    507         stdout, stderr = process.communicate(input, timeout=timeout)

File ~\miniconda3\envs\dsd\lib\subprocess.py:951, in Popen.__init__(self, args, bufsize, executable, stdin, stdout, stderr, ...)
    947 if self.text_mode:
    948     self.stderr = io.TextIOWrapper(self.stderr, encoding=encoding, errors=errors)
--> 951 self._execute_child(args, executable, preexec_fn, close_fds,
    952                     pass_fds, cwd, env, startupinfo, creationflags, shell,
    ...
    959                     start_new_session)

File ~\miniconda3\envs\dsd\lib\subprocess.py:1420, in Popen._execute_child(self, args, executable, ...)
   1418 # Start the process
   1419 try:
-> 1420     hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
   1421                                              None, None,  # no special security
   1422                                              int(not close_fds),
   1423                                              creationflags, env, cwd, startupinfo)

FileNotFoundError: [WinError 2] The system cannot find the file specified
```
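The last frame of the traceback is the giveaway: the helper shells out to an external `wget` binary, and `_winapi.CreateProcess` raises `[WinError 2]` because no such executable exists on the machine, before any download starts. One workaround is to replace the `wget` helper with a pure-Python version; this is a sketch, assuming the caller only needs the file saved into `outputdir` under its URL basename, which matches how `load_midas` later reads `dpt_large-midas-2f21e586.pt`:

```python
import os
import urllib.request

def wget(url: str, outputdir: str) -> str:
    """Drop-in replacement for the subprocess-based wget() helper:
    saves the file into outputdir under its URL basename using
    urllib, so no external wget binary is required."""
    os.makedirs(outputdir, exist_ok=True)
    out_path = os.path.join(outputdir, os.path.basename(url))
    urllib.request.urlretrieve(url, out_path)
    return out_path
```

Alternatively, download the checkpoint manually into the models folder first, so the `os.path.exists` check in `load_midas` skips the download entirely.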
-
Hiii Everyone, I made a local Deforum Stable Diffusion Ver for animation output
wget("https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", models_path)
What are some alternatives?
civitai - A repository of models, textual inversions, and more
stable-diffusion-webui - Stable Diffusion web UI
multi-subject-render - Generate multiple complex subjects all at once!
DeforumStableDiffusionLocal - Local version of Deforum Stable Diffusion, supports txt settings file input and animation features!
stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI
Next-ViT
stable-diffusion-webui - Stable Diffusion web UI
MiDaS - Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
3d-photo-inpainting - [CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting
Merge-Stable-Diffusion-models-without-distortion - Adaptation of the merging method described in the paper - Git Re-Basin: Merging Models modulo Permutation Symmetries (https://arxiv.org/abs/2209.04836) for Stable Diffusion
voltaML-fast-stable-diffusion - Beautiful and Easy to use Stable Diffusion WebUI