DPT
Dense Prediction Transformers [Moved to: https://github.com/isl-org/DPT] (by intel-isl)
DeforumStableDiffusionLocal
Local version of Deforum Stable Diffusion, supports txt settings file input and animation features! (by HelixNGC7293)
|  | DPT | DeforumStableDiffusionLocal |
|---|---|---|
| Mentions | 6 | 21 |
| Stars | 1,163 | 709 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Last commit | over 1 year ago | 12 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DPT
Posts with mentions or reviews of DPT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-29.
- Having issue installing text to image

```shell
wget https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt -O "C:\Users\itsju\AppData\Roaming\Visions of Chaos\Examples\MachineLearning\Text To Image\Deforum Stable Diffusion\dpt_large-midas-2f21e586.pt" --tries=1 --no-check-certificate --progress=bar:force
```
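The command above assumes GNU wget is installed, which a stock Windows machine usually lacks. As a hedged alternative (the `download_model` helper below is my own sketch, not part of either project), the same release asset can be fetched in pure Python:

```python
import os
import urllib.request

# Release asset from the DPT repo, as referenced in the post above.
MODEL_URL = ("https://github.com/intel-isl/DPT/releases/download/"
             "1_0/dpt_large-midas-2f21e586.pt")

def download_model(url, dest_path):
    """Download a checkpoint to dest_path, creating parent folders and
    skipping the download if the file is already present (the checkpoint
    is large, so re-downloads are worth avoiding)."""
    os.makedirs(os.path.dirname(dest_path), exist_ok=True)
    if not os.path.exists(dest_path):
        urllib.request.urlretrieve(url, dest_path)
    return dest_path
```

This sidesteps both the missing-binary problem and the quoting of paths with spaces, since no shell is involved.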
- File not found error

```python
try:
    from midas.dpt_depth import DPTDepthModel
except:
    if not os.path.exists('MiDaS'):
        gitclone("https://github.com/isl-org/MiDaS.git")
        gitclone("https://github.com/bytedance/Next-ViT.git", f'{PROJECT_DIR}/externals/Next_ViT')
    if not os.path.exists('MiDaS/midas_utils.py'):
        shutil.move('MiDaS/utils.py', 'MiDaS/midas_utils.py')
    if not os.path.exists(f'{model_path}/dpt_large-midas-2f21e586.pt'):
        wget("https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", model_path)
    sys.path.append(f'{PROJECT_DIR}/MiDaS')
```
- Is there a reason that the community is sleeping on the SD 2 DEPTH model and 4X UPSCALER?

Try downloading the MiDaS model manually from here: https://github.com/intel-isl/DPT/releases/download/1_0/dpt_hybrid-midas-501f0c75.pt
It should go in the stable-diffusion-webui/models/midas folder. If that doesn't work, try the stable-diffusion-webui/midas_models folder.
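Which of the two folders is used depends on the webui version. A small sketch (the folder names come from the post above; the helper itself is hypothetical) that picks whichever candidate already exists:

```python
import os

# Candidate locations mentioned in the post; which one applies
# depends on the stable-diffusion-webui version.
CANDIDATE_DIRS = [
    os.path.join("stable-diffusion-webui", "models", "midas"),
    os.path.join("stable-diffusion-webui", "midas_models"),
]

def pick_midas_dir(root="."):
    """Return the first candidate folder that exists under root,
    falling back to the first candidate (creating it) otherwise."""
    for d in CANDIDATE_DIRS:
        full = os.path.join(root, d)
        if os.path.isdir(full):
            return full
    fallback = os.path.join(root, CANDIDATE_DIRS[0])
    os.makedirs(fallback, exist_ok=True)
    return fallback
```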
- Dreams of Many Landscapes
- Need help with Deforum SD. 3d Animation Error.

```
Saving animation frames to output\2022-10\Test16
Downloading dpt_large-midas-2f21e586.pt...
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
Cell In [16], line 550
--> 550 render_animation(args, anim_args)
Cell In [16], line 202, in render_animation(args, anim_args)
--> 202 depth_model.load_midas(models_path)
File ~\dsd\stable-diffusion\helpers\depth.py:38, in DepthModel.load_midas(self, models_path, half_precision)
---> 38 wget("https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", models_path)
File ~\dsd\stable-diffusion\helpers\depth.py:16, in wget(url, outputdir)
---> 16 print(subprocess.run(['wget', url, '-P', outputdir], stdout=subprocess.PIPE).stdout.decode('utf-8'))
File ~\miniconda3\envs\dsd\lib\subprocess.py:505, in run(input, capture_output, timeout, check, *popenargs, **kwargs)
--> 505 with Popen(*popenargs, **kwargs) as process:
File ~\miniconda3\envs\dsd\lib\subprocess.py:1420, in Popen._execute_child(...)
-> 1420 hp, ht, pid, tid = _winapi.CreateProcess(executable, args, ...)
FileNotFoundError: [WinError 2] The system cannot find the file specified
```
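The last frame shows the real cause: `subprocess` is trying to launch a `wget` executable, and [WinError 2] means Windows cannot find one on PATH. A hedged workaround (my own sketch, not an official patch) is to swap the `wget()` helper in `helpers/depth.py` for a standard-library download:

```python
import os
import urllib.request

def wget(url, outputdir):
    """Drop-in replacement for the wget() helper in helpers/depth.py.

    The original shells out to a `wget` binary, which is usually absent
    on Windows and makes subprocess raise FileNotFoundError (WinError 2).
    This version downloads with the standard library instead, skipping
    files that already exist."""
    os.makedirs(outputdir, exist_ok=True)
    target = os.path.join(outputdir, url.rsplit('/', 1)[-1])
    if not os.path.exists(target):
        urllib.request.urlretrieve(url, target)
    return target
```

Installing GNU wget and adding it to PATH would also work; the replacement just removes the external dependency.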
- Hiii Everyone, I made a local Deforum Stable Diffusion Ver for animation output

```python
wget("https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", models_path)
```
DeforumStableDiffusionLocal
Posts with mentions or reviews of DeforumStableDiffusionLocal. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-17.
- Stable Diffusion makes trippy music videos
I'm working on a video explaining my workflow and will share all my code when that's ready - I wrote a bunch of custom python scripts to help me make these videos. I use Stable Diffusion's V2 model and another amazing open-source extension called "Deforum Diffusion" which bundles a depth model and some helpers for generating animations https://github.com/HelixNGC7293/DeforumStableDiffusionLocal All my scripts deal with Deforum/stable diffusion, FFMPEG, subtitling, import/export from Audacity and ways to retry/tweak prompts that don't work out well.
- Is there any prompt to video application?
Sorta but it's probably not what you are looking for. There's a plugin that will let you make animations from text prompts but it does not understand the video as a whole but only the current frame and the previous frame. What that means is you get a lot of videos that are just X morphing into Y instead of something with motion like a car driving across a road.
- Dark Side Of The Moon, Full Album, A.I. Visuals (Group Project) https://youtu.be/69VMx9Kd3tQ
HOW TO DO IT
- What are your thoughts on attempts to monetize model training and AI art community in general?
Nice, then you're probably looking for this
- AssertionError: currently only supporting "eps" and "x0"
I am trying to run https://github.com/HelixNGC7293/DeforumStableDiffusionLocal on google colab with https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.ckpt and v2-inference-v.yaml and I get:
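The v2-1_768-ema-pruned checkpoint (paired with v2-inference-v.yaml) is a v-prediction model, while the error suggests the repo's sampling code only handles epsilon- and x0-prediction. A minimal sketch of that kind of guard (names assumed; this is not the repo's actual code):

```python
def check_parameterization(parameterization):
    """Sketch of the guard behind the error in the post: samplers that
    only implement eps- and x0-prediction reject v-prediction models
    such as v2-1_768-ema-pruned."""
    assert parameterization in ("eps", "x0"), \
        'currently only supporting "eps" and "x0"'
    return parameterization
```

If that reading is right, using a 512-base v2 checkpoint with the non-v inference config should avoid the assertion, since those models predict eps.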
- Stand with the women of Iran
For anyone else wanting to make shit like this - it's a free AI program called Deforum Stable Diffusion
- Stable Diffusion links from around September 23, 2022 that I collected for further processing
- Deforum Stable Diffusion 3D animation NOOB question
Ok so I've read the HelixNG Git once more and apparently I had to download extra stuff like
- Animated Train Ride

You can also use Deforum which works really well and has a ton of options. I used it to make this video.
- Stable Diffusion links from around October 11, 2022 that I collected for further processing
What are some alternatives?
When comparing DPT and DeforumStableDiffusionLocal you can also consider the following projects:
depthmap2mask - Create masks out of depthmaps in img2img
stable-diffusion-krita-plugin
stable-diffusion-webui - Stable Diffusion web UI
Waifu2x-Extension-GUI - Video, Image and GIF upscale/enlarge(Super-Resolution) and Video frame interpolation. Achieved with Waifu2x, Real-ESRGAN, Real-CUGAN, RTX Video Super Resolution VSR, SRMD, RealSR, Anime4K, RIFE, IFRNet, CAIN, DAIN, and ACNet.
Next-ViT
MiDaS - Code for robust monocular depth estimation described in "Ranftl et. al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
Stable-diffusion-webui-video