open_clip VS xformers

Compare open_clip and xformers to see how they differ.

xformers

Hackable and optimized Transformers building blocks, supporting a composable construction. (by facebookresearch)
                open_clip                                   xformers
Mentions        28                                          46
Stars           8,452                                       7,578
Stars growth    8.2%                                        6.5%
Activity        8.2                                         9.3
Last commit     17 days ago                                 3 days ago
Language        Jupyter Notebook                            Python
License         GNU General Public License v3.0 or later    GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

open_clip

Posts with mentions or reviews of open_clip. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-13.
  • A History of CLIP Model Training Data Advances
    8 projects | dev.to | 13 Mar 2024
    While OpenAI’s CLIP model has garnered a lot of attention, it is far from the only game in town—and far from the best! On the OpenCLIP leaderboard, for instance, the largest and most capable CLIP model from OpenAI ranks just 41st(!) in its average zero-shot accuracy across 38 datasets.
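    As a concrete illustration, a minimal sketch of browsing the checkpoints behind that leaderboard with open_clip's own API (the slice size is arbitrary):

      import open_clip

      # List the (architecture, pretrained tag) pairs open_clip ships; these are
      # the models the OpenCLIP leaderboard ranks.
      for model_name, pretrained_tag in open_clip.list_pretrained()[:10]:
          print(model_name, pretrained_tag)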
  • How to Build a Semantic Search Engine for Emojis
    6 projects | dev.to | 10 Jan 2024
    Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations or embeddings for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point.
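    A minimal sketch of that idea with open_clip (the model name, pretrained tag, and image path are illustrative picks; any CLIP-style checkpoint works the same way):

      import torch
      import open_clip
      from PIL import Image

      # Load a CLIP-style model plus its matching preprocessing and tokenizer.
      model, _, preprocess = open_clip.create_model_and_transforms(
          "ViT-B-32", pretrained="laion2b_s34b_b79k"
      )
      tokenizer = open_clip.get_tokenizer("ViT-B-32")

      image = preprocess(Image.open("photo.jpg")).unsqueeze(0)
      texts = tokenizer(["a photo of a dog", "a photo of a cat"])

      with torch.no_grad():
          image_features = model.encode_image(image)
          text_features = model.encode_text(texts)

      # Normalize so the dot product is cosine similarity: the matching caption
      # should score higher than the mismatched one.
      image_features = image_features / image_features.norm(dim=-1, keepdim=True)
      text_features = text_features / text_features.norm(dim=-1, keepdim=True)
      print(image_features @ text_features.T)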
  • Database of 16,000 Artists Used to Train Midjourney AI Goes Viral
    1 project | news.ycombinator.com | 7 Jan 2024
    It is a misconception that Adobe's models have not been trained on copyrighted work. Nobody should be repeating their marketing claims.

    Adobe has not shown how they train the text encoders in Firefly, or what images were used for the text-based conditioning (i.e. "text to image") part of their image generation model. They are almost certainly using CLIP or T5, which are trained on LAION2b, an image dataset with the very problems they are trying to address, C4 (a text dataset similarly encumbered) and similar.

    I welcome anyone who works at Adobe to simply answer this question of how they trained the text encoders for text conditioning and put it to rest. There is absolutely nothing sensitive about the issue, unless it exposes them in a lie.

    So no chance. I think it's a big fat lie. They'd have to have made some other scientific breakthrough, which they didn't.

    Using information from https://openai.com/research/clip and https://github.com/mlfoundations/open_clip, it's possible to investigate whether they could make a working text encoder using just their stock image dataset.

    It's certainly not impossible, but it's impracticable. On 248m images (roughly the size of Adobe Stock), CLIP reaches 37% zero-shot accuracy on ImageNet; on the 2000m images from LAION, it reaches 71-80%. And even with 2000m images, CLIP performs substantially worse than the approach Imagen uses for "text comprehension," which relies on essentially many billions more images and text tokens.

  • MetaCLIP – Meta AI Research
    6 projects | news.ycombinator.com | 26 Oct 2023
    https://github.com/mlfoundations/open_clip/blob/main/docs/op...
  • COMFYUI SDXL WORKFLOW INBOUND! Q&A NOW OPEN! (WIP EARLY ACCESS WORKFLOW INCLUDED!)
    8 projects | /r/StableDiffusion | 10 Jul 2023
    in the model card it says: pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
  • Is Nicholas Renotte a good guide for a person who knows nothing about ML?
    1 project | /r/learnmachinelearning | 27 Jun 2023
    also, if you describe your task a bit more, we might be able to direct you to a fairly out-of-the-box solution, e.g. you might be able to use one of the pretrained models supported by https://github.com/mlfoundations/open_clip without any additional training
  • Generate Image from Vector Embedding
    1 project | /r/StableDiffusion | 6 Jun 2023
    It says on the Stable Diffusion Github repo that it uses the “OpenCLIP-ViT/H” https://github.com/mlfoundations/open_clip model as a text encoder, and from my prior experience with CLIP, I have found that it is very easy to generate image and text embeddings (because CLIP is a multimodal model).
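    For reference, a sketch of embedding text with that ViT-H checkpoint through open_clip (the pretrained tag is assumed to be the LAION-2B one the repo lists for this architecture):

      import torch
      import open_clip

      # "OpenCLIP-ViT/H" corresponds to open_clip's ViT-H-14 architecture.
      model, _, _ = open_clip.create_model_and_transforms(
          "ViT-H-14", pretrained="laion2b_s32b_b79k"
      )
      tokenizer = open_clip.get_tokenizer("ViT-H-14")

      with torch.no_grad():
          embedding = model.encode_text(tokenizer(["a castle on a hill at sunset"]))
      print(embedding.shape)  # 1 x 1024 for ViT-H-14's text tower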
  • What's up in the Python community? – April 2023
    3 projects | news.ycombinator.com | 28 Apr 2023
    https://replicate.com/pharmapsychotic/clip-interrogator

    using:

    cfg.apply_low_vram_defaults()

    interrogate_fast()

    I tried lighter models like vit32/laion400 and others; all are very slow to load or use (model list: https://github.com/mlfoundations/open_clip)

    I'm desperately looking for something more modest and light.
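    For context, a hedged sketch of how the quoted calls fit together in clip-interrogator (the class names and model string follow its README; treat anything beyond the quoted fragments as an assumption):

      from PIL import Image
      from clip_interrogator import Config, Interrogator

      cfg = Config(clip_model_name="ViT-L-14/openai")
      cfg.apply_low_vram_defaults()  # the low-VRAM switch quoted above
      ci = Interrogator(cfg)
      print(ci.interrogate_fast(Image.open("image.jpg")))  # faster, less thorough mode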

  • Low accuracy on my CNN model.
    1 project | /r/MLQuestions | 13 Apr 2023
    A library that is very useful for this kind of application is timm. You may also find the feature representation provided by a CLIP model particularly powerful.
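    A minimal sketch of that suggestion with timm (the model name is an arbitrary example): creating the backbone with num_classes=0 makes it return pooled features instead of class logits.

      import timm
      import torch

      # num_classes=0 removes the classifier head, leaving a feature extractor.
      backbone = timm.create_model("resnet50", pretrained=True, num_classes=0)
      backbone.eval()

      with torch.no_grad():
          features = backbone(torch.randn(1, 3, 224, 224))
      print(features.shape)  # e.g. 1 x 2048 for resnet50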
  • Looking for OpenAI CLIP alternative
    1 project | /r/StableDiffusion | 21 Feb 2023

xformers

Posts with mentions or reviews of xformers. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-15.
  • Animediff error
    1 project | /r/StableDiffusion | 31 Oct 2023
    (venv) G:\A1111\Animediff\animatediff-cli-prompt-travel>animatediff generate -c config/prompts/01-ToonYou.json -W 256 -H 384 -L 128 -C 16
    WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
        PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.0.1+cpu)
        Python 3.10.11 (you have 3.10.6)
    Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
    Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details
    UserWarning: Failed to load image Python extension: 'Could not find module 'G:\A1111\Animediff\animatediff-cli-prompt-travel\venv\Lib\site-packages\torchvision\image.pyd' (or one of its dependencies).' If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning.
    15:07:25 INFO Using generation config: config\prompts\01-ToonYou.json
    [traceback: animatediff\cli.py:292 generate -> animatediff\settings.py:134 get_model_config -> pydantic]
    ValidationError: 1 validation error for ModelConfig
    prompt
        extra fields not permitted (type=value_error.extra)

    (venv) G:\A1111\Animediff\animatediff-cli-prompt-travel>animatediff generate -c config/prompts/prompt1.json -W 256 -H 384 -L 128 -C 16
    [same xFormers and torchvision warnings as above]
    15:08:30 INFO Using generation config: config\prompts\prompt1.json
    15:08:35 INFO is_v2=True
    INFO Using base model: runwayml\stable-diffusion-v1-5
    INFO Will save outputs to ./output\2023-10-30T15-08-35-epicrealism-epicrealism_naturalsinrc1vae
    INFO Checking motion module... Loading tokenizer, text encoder, VAE, UNet...
    15:08:59 INFO Loaded 453.20928M-parameter motion module
    15:09:00 INFO Using scheduler "euler_a" (EulerAncestralDiscreteScheduler)
    INFO Loading weights from G:\A1111\Animediff\animatediff-cli-prompt-travel\data\models\sd\epicrealism_naturalSinRC1VAE.safetensors
    [traceback: diffusers loaders.py from_single_file -> convert_from_ckpt.py:1234, which fetches the SD v1 config over the network]
    ConnectTimeout: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml (Caused by ConnectTimeoutError(, 'Connection to raw.githubusercontent.com timed out. (connect timeout=None)'))
    (venv) G:\A1111\Animediff\animatediff-cli-prompt-travel>
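    A hedged workaround sketch for the timeout in that trace: recent diffusers versions let from_single_file take a local original_config_file, so the SD v1 config can be downloaded once and reused offline (the kwarg availability and the paths here are assumptions for illustration):

      # Hypothetical sketch: supply the SD v1 inference config locally so
      # diffusers does not have to fetch it from raw.githubusercontent.com.
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_single_file(
          "data/models/sd/epicrealism_naturalSinRC1VAE.safetensors",
          original_config_file="v1-inference.yaml",  # previously downloaded copy
          local_files_only=True,
      )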
  • Colab | Errors when installing x-formers
    2 projects | /r/comfyui | 15 Oct 2023
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    fastai 2.7.12 requires torch<2.1,>=1.7, but you have torch 2.1.0+cu118 which is incompatible.
    torchaudio 2.0.2+cu118 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
    torchdata 0.6.1 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
    torchtext 0.15.2 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
    torchvision 0.15.2+cu118 requires torch==2.0.1, but you have torch 2.1.0+cu118 which is incompatible.
    WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
        PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0+cu118)
        Python 3.10.13 (you have 3.10.12)
    Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
    Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details
    xformers version: 0.0.22.post3
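    A quick diagnostic sketch for mismatches like the one quoted: the installed xformers wheel must have been built against the exact torch build present (xformers also ships `python -m xformers.info` for a fuller report).

      import torch

      # The two strings below must agree with what the xformers wheel was built
      # for; "+cu118" vs "+cu121" style mismatches trigger the warning above.
      print(torch.__version__)   # e.g. 2.1.0+cu118
      print(torch.version.cuda)  # CUDA version of this torch build, or None on CPU-only

      try:
          import xformers
          print(xformers.__version__)
      except ImportError as exc:
          print("xformers not importable:", exc)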
  • FlashAttention-2, 2x faster than FlashAttention
    3 projects | news.ycombinator.com | 17 Jul 2023
    This enables V1. V2 has yet to be integrated into xformers; the team replied saying it should happen this week.

    See the relevant Github issue here: https://github.com/facebookresearch/xformers/issues/795

  • Xformers issue
    1 project | /r/StableDiffusion | 13 Jul 2023
    My Xformers doesn't work, any help? See code/info: ( Exception training model: 'Refer to https://github.com/facebookresearch/xformers for more information on how to install xformers'. )
  • Having xformer troubles
    1 project | /r/StableDiffusion | 6 Jul 2023
    ModuleNotFoundError: Refer to https://github.com/facebookresearch/xformers for more
  • Question: these 4 crappy picture have been generated with the same seed and settings. Why they keep coming mildly different?
    1 project | /r/StableDiffusion | 6 Jun 2023
    Xformers is a module that can be used with Stable Diffusion. It decreases the memory required to generate an image and speeds generation up. It works very well, but there are two problems with Xformers:
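    The primitive behind those savings is xformers' memory-efficient attention; a minimal sketch (shapes are illustrative, CUDA and fp16 assumed):

      import torch
      from xformers.ops import memory_efficient_attention

      B, S, H, D = 2, 1024, 8, 64  # batch, sequence length, heads, head dim
      q = torch.randn(B, S, H, D, device="cuda", dtype=torch.float16)
      k = torch.randn(B, S, H, D, device="cuda", dtype=torch.float16)
      v = torch.randn(B, S, H, D, device="cuda", dtype=torch.float16)

      # Same math as softmax(q @ k^T / sqrt(D)) @ v, but computed in tiles so
      # the full S x S attention matrix is never materialized.
      out = memory_efficient_attention(q, k, v)
      print(out.shape)  # (2, 1024, 8, 64)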
  • Stuck trying to update xformers
    1 project | /r/SDtechsupport | 15 May 2023
    WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
        PyTorch 1.13.1+cu117 with CUDA 1107 (you have 2.0.1+cu118)
        Python 3.10.9 (you have 3.10.7)
    Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
    Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details
    =================================================================================
    You are running xformers 0.0.16rc425.
    The program is tested to work with xformers 0.0.17.
    To reinstall the desired version, run with commandline flag --reinstall-xformers.
    Use --skip-version-check commandline argument to disable this check.
    =================================================================================
  • Question about updating Xformers for A1111
    1 project | /r/SDtechsupport | 29 Apr 2023
    # Your version of xformers is 0.0.16rc425.
    # xformers >= 0.0.17.dev is required to be available on the Dreambooth tab.
    # Torch 1 wheels of xformers >= 0.0.17.dev are no longer available on PyPI,
    # but you can manually download them by going to:
    # https://github.com/facebookresearch/xformers/actions
    # Click on the most recent action tagged with a release (middle column).
    # Select a download based on your environment.
    # Unzip your download
    # Activate your venv and install the wheel: (from A1111 project root)
    cd venv/Scripts
    activate
    pip install {REPLACE WITH PATH TO YOUR UNZIPPED .whl file}
    # Then restart your project.
  • Is there a Pygmalion roadmap?
    1 project | /r/PygmalionAI | 24 Apr 2023
    Further reading/resources: RedPajama: https://www.together.xyz/blog/redpajama xFormers: https://github.com/facebookresearch/xformers Flash Attention: https://arxiv.org/abs/2205.14135 Sparsity [NEW!]: https://arxiv.org/abs/2304.07613
  • Slow/short replies?
    2 projects | /r/LocalLLaMA | 18 Apr 2023

What are some alternatives?

When comparing open_clip and xformers you can also consider the following projects:

CLIP - CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image

flash-attention - Fast and memory-efficient exact attention

DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch

stable-diffusion-webui - Stable Diffusion web UI

taming-transformers - Taming Transformers for High-Resolution Image Synthesis

SHARK - SHARK - High Performance Machine Learning Distribution

Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion

bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.

diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch

clip-retrieval - Easily compute clip embeddings and build a clip retrieval system with them

InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.