open_clip VS bitsandbytes

Compare open_clip vs bitsandbytes and see what their differences are.

                  open_clip                                    bitsandbytes
Mentions          27                                           61
Stars             8,452                                        5,389
Stars growth      8.2%                                         -
Activity          8.2                                          9.4
Latest commit     17 days ago                                  4 days ago
Language          Jupyter Notebook                             Python
License           GNU General Public License v3.0 or later     MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

open_clip

Posts with mentions or reviews of open_clip. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-13.
  • A History of CLIP Model Training Data Advances
    8 projects | dev.to | 13 Mar 2024
    While OpenAI’s CLIP model has garnered a lot of attention, it is far from the only game in town—and far from the best! On the OpenCLIP leaderboard, for instance, the largest and most capable CLIP model from OpenAI ranks just 41st(!) in its average zero-shot accuracy across 38 datasets.
  • How to Build a Semantic Search Engine for Emojis
    6 projects | dev.to | 10 Jan 2024
    Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations or embeddings for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point.
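    As a rough illustration of that idea, here is a minimal sketch (following the open_clip README) of embedding one image and two captions and scoring their similarity. The checkpoint name and the image path are placeholders, not anything specific to the post above:

    import torch
    from PIL import Image
    import open_clip

    # Placeholder checkpoint; any pairing from open_clip.list_pretrained() works.
    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-B-32", pretrained="laion2b_s34b_b79k"
    )
    tokenizer = open_clip.get_tokenizer("ViT-B-32")

    image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # placeholder image path
    text = tokenizer(["a photo of a cat", "a photo of a dog"])

    with torch.no_grad():
        image_features = model.encode_image(image)
        text_features = model.encode_text(text)
        # Normalize so the dot product is a cosine similarity.
        image_features /= image_features.norm(dim=-1, keepdim=True)
        text_features /= text_features.norm(dim=-1, keepdim=True)
        probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

    print(probs)  # higher probability for the caption that matches the image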
  • Database of 16,000 Artists Used to Train Midjourney AI Goes Viral
    1 project | news.ycombinator.com | 7 Jan 2024
    It is a misconception that Adobe's models have not been trained on copyrighted work. Nobody should be repeating their marketing claims.

    Adobe has not shown how they train the text encoders in Firefly, or what images were used for the text-based conditioning (i.e. "text to image") part of their image generation model. They are almost certainly using CLIP or T5, which are trained on LAION2b, an image dataset with the very problems they are trying to address, C4 (a text dataset similarly encumbered) and similar.

    I welcome anyone who works at Adobe to simply answer this question of how they trained the text encoders for text conditioning and put it to rest. There is absolutely nothing sensitive about the issue, unless it exposes them in a lie.

    So no chance. I think it's a big fat lie. They'd have to have made some other scientific breakthrough, which they didn't.

    Using information from https://openai.com/research/clip and https://github.com/mlfoundations/open_clip, it's possible to investigate how likely it is that they could train a working text encoder using just their stock image dataset.

    It's certainly not impossible, but it's impractical. Trained on 248M images (roughly the size of Adobe Stock), CLIP gets 37% on ImageNet, while on the 2,000M images from LAION it reaches 71-80%. And even with 2,000M images, CLIP performs substantially worse than the approach Imagen uses for "text comprehension," which relies on essentially many billions more images and text tokens.

  • MetaCLIP – Meta AI Research
    6 projects | news.ycombinator.com | 26 Oct 2023
    https://github.com/mlfoundations/open_clip/blob/main/docs/op...
  • COMFYUI SDXL WORKFLOW INBOUND! Q&A NOW OPEN! (WIP EARLY ACCESS WORKFLOW INCLUDED!)
    8 projects | /r/StableDiffusion | 10 Jul 2023
    In the model card it says: pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
  • Is Nicholas Renotte a good guide for a person who knows nothing about ML?
    1 project | /r/learnmachinelearning | 27 Jun 2023
    also, if you describe your task a bit more, we might be able to direct you to a fairly out-of-the-box solution, e.g. you might be able to use one of the pretrained models supported by https://github.com/mlfoundations/open_clip without any additional training
  • Generate Image from Vector Embedding
    1 project | /r/StableDiffusion | 6 Jun 2023
    It says on the Stable Diffusion Github repo that it uses the “OpenCLIP-ViT/H” https://github.com/mlfoundations/open_clip model as a text encoder, and from my prior experience with CLIP, I have found that it is very easy to generate image and text embeddings (because CLIP is a multimodal model).
  • What's up in the Python community? – April 2023
    3 projects | news.ycombinator.com | 28 Apr 2023
    https://replicate.com/pharmapsychotic/clip-interrogator

    using:

    cfg.apply_low_vram_defaults()

    interrogate_fast()

    I tried lighter models like vit32/laion400 and others, but all are very slow to load or use (model list: https://github.com/mlfoundations/open_clip)

    I'm desperately looking for something more modest and light.
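    For reference, a minimal sketch of the low-VRAM clip-interrogator path mentioned above, based on the clip-interrogator README; the CLIP backbone name and the image path are placeholders:

    from PIL import Image
    from clip_interrogator import Config, Interrogator

    config = Config(clip_model_name="ViT-L-14/openai")  # placeholder CLIP backbone
    config.apply_low_vram_defaults()                    # trade some quality for lower VRAM use
    ci = Interrogator(config)

    image = Image.open("example.jpg").convert("RGB")    # placeholder image path
    print(ci.interrogate_fast(image))                   # faster, less thorough than ci.interrogate()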

  • Low accuracy on my CNN model.
    1 project | /r/MLQuestions | 13 Apr 2023
    A library that is very useful for this kind of application is timm. You may also find the feature representation provided by a CLIP model particularly powerful.
  • Looking for OpenAI CLIP alternative
    1 project | /r/StableDiffusion | 21 Feb 2023

bitsandbytes

Posts with mentions or reviews of bitsandbytes. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-09.
  • French AI startup Mistral secures €2B valuation
    2 projects | news.ycombinator.com | 9 Dec 2023
    No. Without the inference code, the best we can have are guesses on its implementation, so the benchmark figures we can get could be quite wrong. It does seem better than Llama2-70B in my tests, which rely on the work done by Dmytro Dzhulgakov[0] and DiscoResearch[1].

    But the point of releasing on bittorrent is to see the effervescence in hobbyist research and early attempts at MoE quantization, which are already ongoing[2]. They are benefitting from the community.

    [0]: https://github.com/dzhulgakov/llama-mistral

    [1]: https://huggingface.co/DiscoResearch/mixtral-7b-8expert

    [2]: https://github.com/TimDettmers/bitsandbytes/tree/sparse_moe

  • Lora training with Kohya issue
    2 projects | /r/StableDiffusion | 6 Dec 2023
    CUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
  • FLaNK Stack Weekly for 30 Oct 2023
    24 projects | dev.to | 30 Oct 2023
  • A comprehensive guide to running Llama 2 locally
    19 projects | news.ycombinator.com | 25 Jul 2023
    While on the surface, a 192GB Mac Studio seems like a great deal (it's not much more than a 48GB A6000!), there are several reasons why this might not be a good idea:

    * I assume most people have never used llama.cpp Metal w/ large models. It will drop to CPU speeds whenever the context window is full: https://github.com/ggerganov/llama.cpp/issues/1730#issuecomm... - while sure this might be fixed in the future, it's been an issue since Metal support was added, and is a significant problem if you are actually trying to use it for inference. With 192GB of memory, you could probably run larger models w/o quantization, but I've never seen anyone post benchmarks of their experiences. Note that at that point, the limited memory bandwidth will be a big factor.

    * If you are planning on using Apple Silicon for ML/training, I'd also be wary. There are multi-year long open bugs in PyTorch[1], and most major LLM libs like deepspeed, bitsandbytes, etc don't have Apple Silicon support[2][3].

    You can see similar patterns w/ Stable Diffusion support [4][5] - support lagging by months, lots of problems and poor performance with inference, much less fine tuning. You can apply this to basically any ML application you want (srt, tts, video, etc)

    Macs are fine to poke around with, but if you actually plan to do more than run a small LLM and say "neat", especially for a business, recommending a Mac for anyone getting started w/ ML workloads is a bad take. (In general, for anyone getting started, unless you're just burning budget, renting cloud GPU is going to be the best cost/perf, although on-prem/local obviously has other advantages.)

    [1] https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3A...

    [2] https://github.com/microsoft/DeepSpeed/issues/1580

    [3] https://github.com/TimDettmers/bitsandbytes/issues/485

    [4] https://github.com/AUTOMATIC1111/stable-diffusion-webui/disc...

    [5] https://forums.macrumors.com/threads/ai-generated-art-stable...

  • 4-bit inference 4.2x faster than 16-bit
    1 project | news.ycombinator.com | 11 Jul 2023
    Release notes: https://github.com/TimDettmers/bitsandbytes/releases/tag/0.4...
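    As context for those release notes, a minimal sketch of 4-bit inference as it is commonly used through the transformers integration of bitsandbytes (not the raw kernels from the release itself); the model id is a placeholder and the exact flags depend on the installed transformers/bitsandbytes versions:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # NF4 4-bit weights with bf16 compute.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    model_id = "meta-llama/Llama-2-7b-hf"  # placeholder model id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb_config, device_map="auto"
    )

    inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(out[0], skip_special_tokens=True))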
  • Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0']
    1 project | /r/LocalLLaMA | 29 Jun 2023
    Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
    ================================================================================
    bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cpu.so
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
    CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
    ERROR: /usr/bin/python3: undefined symbol: cudaRuntimeGetVersion
    CUDA SETUP: libcudart.so path is None
    CUDA SETUP: Is seems that your cuda installation is not in your path. See https://github.com/TimDettmers/bitsandbytes/issues/85 for more information.
    CUDA SETUP: CUDA version lower than 11 are currently not supported for LLM.int8(). You will be only to use 8-bit optimizers and quantization routines!!
    CUDA SETUP: Highest compute capability among GPUs detected: 7.5
    CUDA SETUP: Detected CUDA version 00
    CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cpu.so...
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
      warn("The installed version of bitsandbytes was compiled without GPU support. "
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
      warn(msg)
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
      warn(msg)
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('http'), PosixPath('//172.28.0.1'), PosixPath('8013')}
      warn(msg)
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-t4-s-1b6gsytv7z9le --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true'), PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https')}
      warn(msg)
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
      warn(msg)
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('module'), PosixPath('//ipykernel.pylab.backend_inline')}
      warn(msg)
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
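    For comparison, when the CUDA setup does succeed, the 8-bit optimizers this log refers to are used as drop-in replacements for their torch counterparts. A minimal sketch with a toy model (requires a GPU-enabled bitsandbytes build, which is exactly what is missing in the log above):

    import torch
    import bitsandbytes as bnb

    model = torch.nn.Linear(512, 512).cuda()  # toy model for illustration only
    # 8-bit optimizer states in place of torch.optim.Adam.
    optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)

    x = torch.randn(16, 512, device="cuda")
    loss = model(x).pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()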
  • Having trouble using the multimodal tools.
    1 project | /r/oobaboogazz | 27 Jun 2023
    RuntimeError: CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs above to fix your environment! If you cannot find any issues and suspect a bug, please open an issue with detals about your environment: https://github.com/TimDettmers/bitsandbytes/issues
  • [TextGen WebUI] Service terminated error? (Screenshots in post)
    1 project | /r/Pygmalion_ai | 27 Jun 2023
  • Considering getting a Jetson AGX Orin.. anyone have experience with it?
    5 projects | /r/LocalLLaMA | 26 Jun 2023
  • How to disable the `bitsandbytes` intro message:
    1 project | /r/LocalLLaMA | 23 Jun 2023
    ===================================BUG REPORT===================================
    Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
    ================================================================================
    bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda121.so
    CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
    CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
    CUDA SETUP: Highest compute capability among GPUs detected: 8.9
    CUDA SETUP: Detected CUDA version 121
    CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda121.so...
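    One way to achieve what the post title asks is to set an environment variable before bitsandbytes is imported; this is a sketch under the assumption that the installed version supports the BITSANDBYTES_NOWELCOME variable, which is not stated in the post itself:

    import os

    # Assumption: the installed bitsandbytes release checks this variable to suppress the banner.
    os.environ["BITSANDBYTES_NOWELCOME"] = "1"

    import bitsandbytes as bnb  # imported only after the variable is set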

What are some alternatives?

When comparing open_clip and bitsandbytes you can also consider the following projects:

CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch

accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support

taming-transformers - Taming Transformers for High-Resolution Image Synthesis

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion

Dreambooth-Stable-Diffusion-cpu - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion

clip-retrieval - Easily compute clip embeddings and build a clip retrieval system with them

llama.cpp - LLM inference in C/C++

stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models

alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM