MiDaS vs xformers

Compare MiDaS and xformers to see how they differ.

MiDaS

Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022" (by isl-org)

xformers

Hackable and optimized Transformers building blocks, supporting a composable construction. (by facebookresearch)

               MiDaS          xformers
Mentions       27             46
Stars          4,089          7,578
Growth         4.1%           6.5%
Activity       2.4            9.3
Latest commit  2 months ago   3 days ago
Language       Python         Python
License        MIT License    GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

MiDaS

Posts with mentions or reviews of MiDaS. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-25.
  • How to Estimate Depth from a Single Image
    8 projects | dev.to | 25 Apr 2024
    The checkpoint below uses MiDaS, which returns the inverse depth map, so we have to invert it back to get a comparable depth map.
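
    For reference, the inversion is a one-liner; a minimal sketch, assuming a NumPy inverse-depth array (note that MiDaS predicts relative inverse depth, so the result is depth only up to an unknown scale and shift):

    import numpy as np

    def inverse_depth_to_depth(inv_depth: np.ndarray, eps: float = 1e-6) -> np.ndarray:
        # MiDaS outputs relative inverse depth (large = near, small = far);
        # the reciprocal recovers a relative depth map, with eps guarding
        # against division by zero in far regions.
        return 1.0 / np.maximum(inv_depth, eps)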
  • Distance estimation from monocular vision using deep learning
    3 projects | /r/computervision | 13 Jun 2023
    Hi, I have made use of the KITTI dataset for this, and yes, it depends on objects of known sizes. Here I have defined the following classes: Car, Van, Truck, Pedestrian, Person_sitting, Cyclist, Tram, Misc, and DontCare, and the predictions are pretty accurate for those classes. Even if it's not the same class, it still recognizes the object, since I have made use of the COCO names dataset, which is used along with YOLO for object detection. There are several already-implemented projects that make use of deep learning models trained on 2D datasets to predict 3D distance; this was one of my inspirations for the project: https://blogs.nvidia.com/blog/2019/06/19/drive-labs-distance-to-object-detection/ Furthermore, there are well-documented and well-researched papers like DistYOLO or MiDaS that make use of deep learning for depth estimation.
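
    The known-size approach described above reduces to the pinhole-camera model: distance = focal_length * real_height / pixel_height. A minimal sketch, where estimate_distance_m is a hypothetical helper and KITTI's focal length of roughly 721 px is an assumption:

    def estimate_distance_m(focal_px: float, real_height_m: float, bbox_height_px: float) -> float:
        # Pinhole model: an object of known physical height appears smaller
        # in the image the farther away it is.
        return focal_px * real_height_m / bbox_height_px

    # A ~1.5 m tall car whose detected box is 60 px tall, at KITTI's ~721 px focal length:
    print(estimate_distance_m(721.0, 1.5, 60.0))  # ~18 m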
  • OMPR V0.6.10 update
    2 projects | /r/u_OMPR_App | 14 Mar 2023
    - Added an AI image depth generator: create your own depth map image at the click of a button. It uses the awesome MiDaS 3.1 (https://github.com/isl-org/MiDaS) as the backend with the "dpt_beit_large_512" model for the highest-quality depth maps. Video and GIF depth map generators are coming next, together with the Depth movie player feature.
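
    For context, running MiDaS yourself looks roughly like the snippet below, which follows the torch.hub usage pattern from the MiDaS README (shown with the DPT_Large entry point; the v3.1 release also ships larger checkpoints such as dpt_beit_large_512):

    import cv2
    import torch

    midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    midas.to(device).eval()

    transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
    transform = transforms.dpt_transform

    img = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        prediction = midas(transform(img).to(device))
        # Resize the low-resolution prediction back to the input resolution.
        depth = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=img.shape[:2],
            mode="bicubic",
            align_corners=False,
        ).squeeze().cpu().numpy()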
  • AI that converts a regular 2d image to stereoscopic
    1 project | /r/ArtificialInteligence | 9 Feb 2023
    It uses MiDaS. That extension may be the most accessible way to use it at home. IDK.
  • Idea: training on magiceye images
    1 project | /r/StableDiffusion | 5 Feb 2023
    Here's the project homepage https://github.com/isl-org/MiDaS
  • MiDaS v3_1 and DiscoDiffusion
    2 projects | /r/DiscoDiffusion | 27 Dec 2022
    The problem came up after MiDaS updated to version v3_1 on Dec 24th. Although the fix works fine, the new version introduces many changes, which for me produce slightly different results. I would like to be able to produce results like before, so I still clone the MiDaS repo but then reset it to the last commit before the December changes, which is 66882994a432727317267145dc3c2e47ec78c38a.
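
    To reproduce that setup, pin the clone to the quoted commit; a small sketch using git through subprocess:

    import subprocess

    # Clone MiDaS and check out the last pre-v3_1 commit cited above.
    PINNED_COMMIT = "66882994a432727317267145dc3c2e47ec78c38a"
    subprocess.run(["git", "clone", "https://github.com/isl-org/MiDaS.git"], check=True)
    subprocess.run(["git", "checkout", PINNED_COMMIT], cwd="MiDaS", check=True)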
  • File not found error
    3 projects | /r/DiscoDiffusion | 27 Dec 2022
    try:
        from midas.dpt_depth import DPTDepthModel
    except:
        # gitclone(), wget(), PROJECT_DIR and model_path are helpers/globals
        # defined earlier in the Disco Diffusion notebook; os, shutil and sys
        # are imported there as well.
        if not os.path.exists('MiDaS'):
            gitclone("https://github.com/isl-org/MiDaS.git")
            gitclone("https://github.com/bytedance/Next-ViT.git", f'{PROJECT_DIR}/externals/Next_ViT')
        if not os.path.exists('MiDaS/midas_utils.py'):
            shutil.move('MiDaS/utils.py', 'MiDaS/midas_utils.py')
        if not os.path.exists(f'{model_path}/dpt_large-midas-2f21e586.pt'):
            wget("https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", model_path)
        sys.path.append(f'{PROJECT_DIR}/MiDaS')
  • A quick demo to show how structurally coherent depth2img is compared to img2img using Automatic1111.
    2 projects | /r/StableDiffusion | 12 Dec 2022
    Cool. The repo for MiDaS is here: https://github.com/isl-org/MiDaS You can see that they partially trained the model on 3D movies. Here's a list of the movies that were used to train it. I wonder if they'll train a MiDaS v4.0, as things have moved on quite a bit since it was released in April 2021?
  • Boosting Monocular Depth repo
    3 projects | /r/computervision | 9 Dec 2022
    We present a stand-alone implementation of our Merging Operator. This new repo allows using any pair of monocular depth estimations in our double estimation. This includes using separate networks for base and high-res estimations, using networks not supported by this repo (such as Midas-v3), or using manually edited depth maps for artistic use. This will also be useful for scientists developing CNN-based MDE as a way to quickly apply double estimation to their own network. For more details please take a look here.
  • DepthViewer is now live on Steam :)
    3 projects | /r/virtualreality | 30 Nov 2022
    I'll make the feature to export only the depthmap .png file. If you need the depthmap .png right now you can use the MiDaS python script.
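
    Exporting the depth map is mostly a normalization step, since MiDaS output is relative; a minimal sketch with a hypothetical save_depth_png helper:

    import cv2
    import numpy as np

    def save_depth_png(depth: np.ndarray, path: str = "depth.png") -> None:
        # Min-max normalize to 0-255 for an 8-bit grayscale PNG; per-image
        # normalization is the usual choice for relative depth.
        d = (depth - depth.min()) / max(float(depth.max() - depth.min()), 1e-8)
        cv2.imwrite(path, (d * 255).astype(np.uint8))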

xformers

Posts with mentions or reviews of xformers. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-15.
  • Animediff error
    1 project | /r/StableDiffusion | 31 Oct 2023
    (venv) G:\A1111\Animediff\animatediff-cli-prompt-travel>animatediff generate -c config/prompts/01-ToonYou.json -W 256 -H 384 -L 128 -C 16
    WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
        PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.0.1+cpu)
        Python 3.10.11 (you have 3.10.6)
    Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
    Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details
    torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: 'Could not find module 'G:\A1111\Animediff\animatediff-cli-prompt-travel\venv\Lib\site-packages\torchvision\image.pyd' (or one of its dependencies). Try using the full path with constructor syntax.' If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning.
    15:07:25 INFO Using generation config: config\prompts\01-ToonYou.json
    [Rich traceback through animatediff\cli.py and animatediff\settings.py omitted]
    ValidationError: 1 validation error for ModelConfig
    prompt
        extra fields not permitted (type=value_error.extra)

    (venv) G:\A1111\Animediff\animatediff-cli-prompt-travel>animatediff generate -c config/prompts/prompt1.json -W 256 -H 384 -L 128 -C 16
    [same xFormers and torchvision warnings as above]
    15:08:30 INFO Using generation config: config\prompts\prompt1.json
    15:08:35 INFO is_v2=True
    INFO Using base model: runwayml\stable-diffusion-v1-5
    INFO Will save outputs to ./output\2023-10-30T15-08-35-epicrealism-epicrealism_naturalsinrc1vae
    INFO Loading tokenizer, text encoder, VAE and UNet...
    15:08:59 INFO Loaded 453.20928M-parameter motion module
    15:09:00 INFO Using scheduler "euler_a" (EulerAncestralDiscreteScheduler)
    INFO Loading weights from G:\A1111\Animediff\animatediff-cli-prompt-travel\data\models\sd\epicrealism_naturalSinRC1VAE.safetensors
    [Rich tracebacks through urllib3, requests and diffusers' from_single_file / download_from_original_stable_diffusion_ckpt omitted]
    TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
    ConnectTimeout: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml (Caused by ConnectTimeoutError(, 'Connection to raw.githubusercontent.com timed out. (connect timeout=None)'))
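
    The xFormers warning at the top of that log is the underlying problem: the installed wheel was built against PyTorch 2.1.0+cu121, while the environment has 2.0.1+cpu. A quick sanity check for such mismatches (xFormers also ships a python -m xformers.info diagnostic):

    import torch
    import xformers

    # These must line up with what the installed xFormers wheel was built
    # against, otherwise the C++/CUDA extensions fail to load as above.
    print("torch:   ", torch.__version__)    # e.g. 2.0.1+cpu
    print("xformers:", xformers.__version__)
    print("CUDA available:", torch.cuda.is_available())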
  • Colab | Errors when installing x-formers
    2 projects | /r/comfyui | 15 Oct 2023
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    fastai 2.7.12 requires torch<2.1,>=1.7, but you have torch 2.1.0+cu118 which is incompatible.
    torchaudio 2.0.2+cu118 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
    torchdata 0.6.1 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
    torchtext 0.15.2 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
    torchvision 0.15.2+cu118 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
    WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
        PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0+cu118)
        Python 3.10.13 (you have 3.10.12)
    Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
    Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details
    xformers version: 0.0.22.post3
  • FlashAttention-2, 2x faster than FlashAttention
    3 projects | news.ycombinator.com | 17 Jul 2023
    This enables V1. V2 has yet to be integrated into xformers; the team replied saying it should happen this week.

    See the relevant GitHub issue here: https://github.com/facebookresearch/xformers/issues/795
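
    For readers new to the library, the feature under discussion is exposed as xformers.ops.memory_efficient_attention, which dispatches to the fastest available backend (FlashAttention among them). A minimal sketch, assuming a CUDA device with fp16 support:

    import torch
    from xformers.ops import memory_efficient_attention

    # Inputs are (batch, seq_len, num_heads, head_dim).
    q = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)
    k = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)
    v = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)

    out = memory_efficient_attention(q, k, v)
    print(out.shape)  # torch.Size([1, 1024, 8, 64])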

  • Xformers issue
    1 project | /r/StableDiffusion | 13 Jul 2023
    My Xformers doesn't work, any help? See code: info ( Exception training model: 'Refer to https://github.com/facebookresearch/xformers for more information on how to install xformers'. )
  • Having xformer troubles
    1 project | /r/StableDiffusion | 6 Jul 2023
    ModuleNotFoundError: Refer to https://github.com/facebookresearch/xformers for more
  • Question: these 4 crappy picture have been generated with the same seed and settings. Why they keep coming mildly different?
    1 project | /r/StableDiffusion | 6 Jun 2023
    Xformers is a module that can be used with Stable Diffusion. It decreases the memory required to generate an image, as well as speeding things up. It works very well, but there are two problems with Xformers:
  • Stuck trying to update xformers
    1 project | /r/SDtechsupport | 15 May 2023
    WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
        PyTorch 1.13.1+cu117 with CUDA 1107 (you have 2.0.1+cu118)
        Python 3.10.9 (you have 3.10.7)
    Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
    Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details
    =================================================================================
    You are running xformers 0.0.16rc425. The program is tested to work with xformers 0.0.17.
    To reinstall the desired version, run with commandline flag --reinstall-xformers.
    Use --skip-version-check commandline argument to disable this check.
    =================================================================================
  • Question about updating Xformers for A1111
    1 project | /r/SDtechsupport | 29 Apr 2023
    # Your version of xformers is 0.0.16rc425.
    # xformers >= 0.0.17.dev is required to be available on the Dreambooth tab.
    # Torch 1 wheels of xformers >= 0.0.17.dev are no longer available on PyPI,
    # but you can manually download them by going to:
    #   https://github.com/facebookresearch/xformers/actions
    # Click on the most recent action tagged with a release (middle column).
    # Select a download based on your environment.
    # Unzip your download.
    # Activate your venv and install the wheel (from the A1111 project root):
    cd venv/Scripts
    activate
    pip install {REPLACE WITH PATH TO YOUR UNZIPPED .whl file}
    # Then restart your project.
  • Is there a Pygmalion roadmap?
    1 project | /r/PygmalionAI | 24 Apr 2023
    Further reading/resources:
      • RedPajama: https://www.together.xyz/blog/redpajama
      • xFormers: https://github.com/facebookresearch/xformers
      • Flash Attention: https://arxiv.org/abs/2205.14135
      • Sparsity [NEW!]: https://arxiv.org/abs/2304.07613
  • Slow/short replies?
    2 projects | /r/LocalLLaMA | 18 Apr 2023

What are some alternatives?

When comparing MiDaS and xformers you can also consider the following projects:

stable-diffusion-webui-depthmap-script - High Resolution Depth Maps for Stable Diffusion WebUI

flash-attention - Fast and memory-efficient exact attention

DenseDepth - High Quality Monocular Depth Estimation via Transfer Learning

stable-diffusion-webui - Stable Diffusion web UI

stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models

SHARK - High Performance Machine Learning Distribution

deeplearning4j-examples - Deeplearning4j Examples (DL4J, DL4J Spark, DataVec) [Moved to: https://github.com/deeplearning4j/deeplearning4j-examples]

Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion

DiverseDepth - The code and data of DiverseDepth

diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch

Insta-DM - Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency (AAAI 2021)

InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.