Chinese-CLIP vs autodistill-metaclip

| | Chinese-CLIP | autodistill-metaclip |
| --- | --- | --- |
| Mentions | 1 | 1 |
| Stars | 3,655 | 16 |
| Growth | 7.6% | - |
| Activity | 7.6 | 6.4 |
| Last commit | 5 months ago | 5 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Chinese-CLIP
Meet ‘Chinese CLIP,’ An Implementation of CLIP Pretrained on Large-Scale Chinese Datasets with Contrastive Learning
Chinese-CLIP is open-sourced at https://github.com/OFA-Sys/Chinese-CLIP, and we are working on applying it to more downstream tasks that require cross-modal alignment!
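For reference, here is a minimal zero-shot image/text matching sketch in the style of the repository's README. The cn_clip package, the load_from_name helper, and the model.get_similarity call are taken from that README, but treat the exact API and the "ViT-B-16" checkpoint name as assumptions rather than guarantees:

```python
# Sketch of zero-shot image/text matching with Chinese-CLIP.
# Assumes `pip install cn_clip` and the load_from_name API shown in the repo README.
import torch
from PIL import Image
import cn_clip.clip as clip
from cn_clip.clip import load_from_name

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = load_from_name("ViT-B-16", device=device, download_root="./")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
# Chinese text prompts ("a cat", "a dog", "a car"); clip.tokenize handles Chinese text
text = clip.tokenize(["一只猫", "一只狗", "一辆汽车"]).to(device)

with torch.no_grad():
    # get_similarity returns image-to-text and text-to-image logits
    logits_per_image, logits_per_text = model.get_similarity(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print(probs)  # probability of each Chinese caption matching the image
```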
autodistill-metaclip
MetaCLIP – Meta AI Research
I have been playing with MetaCLIP this afternoon and made https://github.com/autodistill/autodistill-metaclip as a pip-installable version. The Facebook repository has some guidance, but you have to pull the weights yourself, save them, etc.
My inference function (model.predict("image.png")) returns an sv.Classifications object that you can load into supervision for processing (e.g., get top k) [1].
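As a quick illustration, here is a hedged usage sketch. The predict call and the sv.Classifications return type come from the description above; the MetaCLIP class name, its CaptionOntology constructor argument, and supervision's get_top_k helper follow the usual autodistill/supervision patterns and are assumptions, not confirmed details:

```python
# Hedged sketch: predict() returning sv.Classifications is stated above;
# the MetaCLIP/CaptionOntology constructor follows the common autodistill
# base-model pattern and is an assumption.
from autodistill.detection import CaptionOntology
from autodistill_metaclip import MetaCLIP

# Map free-text prompts to the class names we want back
model = MetaCLIP(
    ontology=CaptionOntology({
        "a photo of a cat": "cat",
        "a photo of a dog": "dog",
    })
)

results = model.predict("image.png")  # -> sv.Classifications

# supervision's Classifications.get_top_k returns the k most confident
# (class_id, confidence) pairs, i.e. the "get top k" processing mentioned above
class_ids, confidences = results.get_top_k(k=1)
print(class_ids, confidences)
```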
The paper [2] notes the following in terms of performance:
> In Table 4, we observe that MetaCLIP outperforms OpenAI CLIP on ImageNet and average accuracy across 26 tasks, for 3 model scales. With 400 million training data points on ViT-B/32, MetaCLIP outperforms CLIP by +2.1% on ImageNet and by +1.6% on average. On ViT-B/16, MetaCLIP outperforms CLIP by +2.5% on ImageNet and by +1.5% on average. On ViT-L/14, MetaCLIP outperforms CLIP by +0.7% on ImageNet and by +1.4% on average across the 26 tasks.
[1] https://github.com/autodistill/autodistill-metaclip
[2] Demystifying CLIP Data (the MetaCLIP paper): https://arxiv.org/abs/2309.16671
What are some alternatives?
dream-creator - Quickly and easily create / train a custom DeepDream model
clip-interrogator - Image to prompt with BLIP and CLIP
deepsparse - Sparsity-aware deep learning inference runtime for CPUs
open_clip - An open source implementation of CLIP.
Queryable - Run OpenAI's CLIP model on iOS to search photos.
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
FARM - 🏡 Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering.
NumPyCLIP - Pure NumPy implementation of https://github.com/openai/CLIP
PyTorch_CIFAR10 - Pretrained TorchVision models on CIFAR10 dataset (with weights)
sam-clip - Use Grounding DINO, Segment Anything, and CLIP to label objects in images.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Text2LIVE - Official Pytorch Implementation for "Text2LIVE: Text-Driven Layered Image and Video Editing" (ECCV 2022 Oral)