CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet for a given image.
New repo with functioning training code for CLIP models: https://github.com/mlfoundations/open_clip/
OpenAI's CLIP models are vision models that learn by contrasting images with text. They can be used for classification, and for generating images when paired with other models such as VQGAN. Until now, OpenAI's GitHub repo provided only inference code, with no way to train the models.
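At inference time, CLIP-style classification scores each candidate caption by the cosine similarity between the image embedding and the text embedding, then takes a softmax over the (temperature-scaled) similarities. A minimal sketch of that scoring step, using small placeholder vectors instead of real CLIP encoder outputs (the embeddings and captions here are hypothetical):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Placeholder embeddings; a real CLIP model produces ~512-d vectors
# from its image and text encoders.
image_emb = [0.9, 0.1, 0.2]
text_embs = {
    "a photo of a dog": [0.8, 0.2, 0.1],
    "a photo of a cat": [0.1, 0.9, 0.3],
}

# Scale similarities by a temperature (CLIP learns this scale during
# training; 100.0 is the commonly cited value) and normalize.
logits = [100.0 * cosine(image_emb, e) for e in text_embs.values()]
probs = softmax(logits)
best = max(zip(text_embs, probs), key=lambda t: t[1])[0]
print(best)
```

With the toy vectors above, the image embedding is closest to the first caption, so that caption wins. Swapping the placeholder vectors for real encoder outputs (e.g. from the open_clip repo linked above) gives zero-shot classification.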