UPop vs image-captioning

| | UPop | image-captioning |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 82 | 32 |
| Growth | - | - |
| Activity | 8.4 | 3.1 |
| Last Commit | 6 months ago | 9 months ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
What are some alternatives?
- Torch-Pruning - [CVPR 2023] Towards Any Structural Pruning; LLMs / SAM / Diffusion / Transformers / YOLOv8 / CNNs
- image-clustering - Easy image clustering tool.
- BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
- sparseml - Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
- marqo - Unified embedding generation and search engine. Also available on cloud - cloud.marqo.ai
- neural-compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
- model-optimization - A toolkit to optimize ML models for deployment with Keras and TensorFlow, including quantization and pruning.
- OFA - Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
- Coin-CLIP - Coin-CLIP: a model fine-tuned from CLIP on a large collection of coin images using contrastive learning. It enhances feature extraction for coins, boosting image search accuracy. The model combines a Vision Transformer (ViT) with CLIP's multimodal learning, optimized for numismatic applications.