| | clip-italian | PARSE-CLIP |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 172 | 3 |
| Growth | 1.2% | - |
| Activity | 2.0 | 0.0 |
| Latest commit | 12 months ago | about 2 years ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | - | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
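The exact activity formula used by the comparison site is not published, but the description above (a relative score in which recent commits weigh more than older ones) can be sketched with an exponentially decayed sum over commit dates. The `half_life_days` parameter below is a hypothetical tuning knob, not a documented value:

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, now=None, half_life_days=90.0):
    """Sum an exponentially decayed weight per commit, so recent
    commits contribute more to the score than older ones.
    half_life_days is an assumed parameter: a commit this old
    counts half as much as one made today."""
    now = now or datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score

# A project with three recent commits outscores one with three old
# commits, even though the raw commit counts are identical.
now = datetime(2023, 1, 1, tzinfo=timezone.utc)
recent = [now - timedelta(days=i) for i in (1, 5, 10)]
stale = [now - timedelta(days=i) for i in (400, 500, 600)]
```

Comparing `activity_score(recent, now=now)` against `activity_score(stale, now=now)` shows how equal commit counts can still yield very different activity numbers.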
clip-italian
- [N][P] We built the Italian CLIP 🇮🇹. The demo and paper links are coming soon! GitHub: https://github.com/clip-italian/clip-italian Twitter: https://twitter.com/peppeatta/status/1419593282682773507
PARSE-CLIP
- [P] PARSE-CLIP, A CLIP based project for combining images from multiple datasets to form a new dataset/class. GitHub link in the comments
- [Self Promo] : Combine Images from multiple datasets using CLIP.
- [Self Promo] : Merge Images from different datasets using CLIP.
- My project: Search and combine images from multiple datasets.
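The posts above describe combining images from several datasets into a new collection using CLIP. The repository's actual method is not shown here; a minimal sketch of the general idea, assuming precomputed CLIP embeddings and a hypothetical similarity threshold, is to keep every image (from any dataset) whose embedding is close enough to a query embedding:

```python
import numpy as np

def combine_by_query(query_emb, dataset_embs, threshold=0.25):
    """Merge images from multiple datasets into one collection:
    keep every image whose cosine similarity to the query embedding
    clears `threshold` (an assumed cutoff, not a documented value).

    query_emb:    (d,) CLIP embedding of a text or image query
    dataset_embs: {dataset_name: (n_images, d) array of embeddings}
    Returns (dataset_name, image_index, similarity) tuples,
    sorted by descending similarity.
    """
    q = query_emb / np.linalg.norm(query_emb)
    selected = []
    for name, embs in dataset_embs.items():
        # Normalize rows so the dot product equals cosine similarity.
        normed = embs / np.linalg.norm(embs, axis=1, keepdims=True)
        sims = normed @ q
        for idx in np.where(sims >= threshold)[0]:
            selected.append((name, int(idx), float(sims[idx])))
    return sorted(selected, key=lambda t: -t[2])

# Toy example: random vectors stand in for real CLIP outputs.
rng = np.random.default_rng(0)
query = rng.normal(size=64)
datasets = {"coco": rng.normal(size=(5, 64)),
            "flickr": rng.normal(size=(5, 64))}
# threshold=-1.0 keeps everything, since cosine similarity >= -1.
matches = combine_by_query(query, datasets, threshold=-1.0)
```

In practice the embeddings would come from a CLIP image/text encoder; the merging step itself is just thresholded cosine similarity over all source datasets.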
What are some alternatives?
Basic-UI-for-GPT-J-6B-with-low-vram - A repository to run GPT-J-6B on low-VRAM machines (4.2 GB minimum VRAM for a 2000-token context, 3.5 GB for a 1000-token context). Model loading takes 12 GB of free RAM.
fastpages - An easy-to-use blogging platform, with enhanced support for Jupyter Notebooks.
clip-retrieval - Easily compute CLIP embeddings and build a CLIP retrieval system with them
fastai - The fastai deep learning library
Transformer-MM-Explainability - [ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
TargetCLIP - [ECCV 2022] Official PyTorch implementation of the paper Image-Based CLIP-Guided Essence Transfer.
browser-ml-inference - Edge Inference in Browser with Transformer NLP model