google-research
CLIP
| | google-research | CLIP |
|---|---|---|
| Mentions | 98 | 103 |
| Stars | 32,804 | 22,209 |
| Growth | 1.5% | 6.3% |
| Activity | 9.6 | 1.2 |
| Latest commit | 5 days ago | 17 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
google-research
- Show HN: Next-token prediction in JavaScript – build fast LLMs from scratch
People on here will be happy to hear that I do a similar thing; however, my sequence length is dynamic because I also use a second data structure. To use pretentious academic speak: I use a simple bigram LM (2-gram) for single next-word likelihood, and separately a trie that models all words and phrases (so, n-gram). I'm not sure how many total nodes there are because sentence lengths vary in the training data, but there are about 200,000 entry points (keys), so probably about 2-10 million total nodes in the default setup.
"Constructing 7-gram LM": They likely started with bigrams (what I use) which only tells you the next word based on 1 word given, and thought to increase accuracy by modeling out more words in a sequence, and eventually let the user (developer) pass in any amount they want to model (https://github.com/google-research/google-research/blob/5c87...). I thought of this too at first, but I actually got more accuracy (and speed) out of just keeping them as bigrams and making a totally separate structure that models out an n-gram of all phrases (e.g. could be a 24-token long sequence or 100+ tokens etc. I model it all) and if that phrase is found, then I just get the bigram assumption of the last token of the phrase. This works better when the training data is more diverse (for a very generic model), but theirs would probably outperform mine on accuracy when the training data has a lot of nearly identical sentences that only change wildly toward the end - I don't find this pattern in typical data though, maybe for certain coding and other tasks there are those patterns though. But because it's not dynamic and they make you provide that number, even a low number (any phrase longer than 2 words) - theirs will always have to do more lookup work than with simple bigrams and they're also limited by that fixed number as far as accuracy. I wonder how scalable that is - if I need to train on occasional ~100-word long sentences but also (and mostly) just ~3-word long sentences, I guess I set this to 100 and have a mostly "undefined" trie.
I also thought of the name "LMJS" (theirs is "jslm" :), but I went with simply "next-token-prediction" because that's what it ultimately does as a library. I don't know what theirs is really designed for other than proving a concept; most of their code files are actually comments and hypothetical scenarios.
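As a rough illustration, here is a minimal Python sketch of one plausible reading of that two-structure setup: a bigram count table plus a phrase trie, where a trie match gates the bigram prediction. The class and method names are illustrative, not the actual next-token-prediction API.

```python
from collections import defaultdict

class NextTokenModel:
    def __init__(self):
        # token -> {next token -> count}: the simple bigram LM
        self.bigrams = defaultdict(lambda: defaultdict(int))
        # nested dict keyed by token: the trie over whole phrases
        self.trie = {}

    def train(self, sentence):
        tokens = sentence.lower().split()
        for a, b in zip(tokens, tokens[1:]):
            self.bigrams[a][b] += 1
        node = self.trie
        for t in tokens:
            node = node.setdefault(t, {})

    def predict(self, prompt):
        tokens = prompt.lower().split()
        if not tokens:
            return None
        # Walk the trie with the whole prompt (a phrase match of any length).
        node = self.trie
        for t in tokens:
            node = node.get(t) if node is not None else None
        if node is None:
            return None  # the prompt isn't a known phrase
        # Phrase found: take the bigram assumption of its last token.
        candidates = self.bigrams.get(tokens[-1], {})
        return max(candidates, key=candidates.get) if candidates else None

model = NextTokenModel()
model.train("the beach at night is quiet")
print(model.predict("the beach at"))  # -> "night"
```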
I recently added a browser example showing simple autocomplete using my library: https://github.com/bennyschmidt/next-token-prediction/tree/m... (video)
And next I'm implementing 8-dimensional embeddings that are converted to normalized vectors between 0-1 to see if doing math on them does anything useful beyond similarity, right now they look like this:
[nextFrequency, prevalence, specificity, length, firstLetter, lastLetter, firstVowel, lastVowel]
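As a rough illustration of how such a vector might be computed, here is a hedged Python sketch; the comment does not define fields like prevalence or specificity, so those are left as caller-supplied values, and the letter features are one guess at a 0-1 mapping.

```python
VOWELS = "aeiou"

def letter01(c):
    # map a-z onto the range 0-1; anything else becomes 0
    return (ord(c) - ord("a")) / 25 if c.isalpha() else 0.0

def word_vector(word, next_freq=0.0, prevalence=0.0, specificity=0.0, max_len=20):
    w = word.lower()
    first_vowel = next((letter01(c) for c in w if c in VOWELS), 0.0)
    last_vowel = next((letter01(c) for c in reversed(w) if c in VOWELS), 0.0)
    return [
        next_freq,                 # how often this word follows its context
        prevalence,                # overall corpus frequency (assumed meaning)
        specificity,               # e.g. inverse document frequency (assumed)
        min(len(w) / max_len, 1),  # length, normalized and capped at 1
        letter01(w[0]),
        letter01(w[-1]),
        first_vowel,
        last_vowel,
    ]

print(word_vector("beach", next_freq=0.4, prevalence=0.7, specificity=0.2))
```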
- Google Research website is down
- Jpegli: A New JPEG Coding Library
The change was literally just made: https://github.com/google-research/google-research/commit/4a...
It appears this was in response to Hacker News comments.
- Multi-bitrate JPEG compression perceptual evaluation dataset 2023
- Vector Databases: A Technical Primer [pdf]
There are options such as Google's ScaNN that may let you go farther before needing to consider specialized databases.
https://github.com/google-research/google-research/blob/mast...
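For a sense of what that looks like in practice, here is a minimal sketch following the builder pattern from the ScaNN README; the dataset is random and the tree and scoring parameters are illustrative rather than tuned.

```python
import numpy as np
import scann  # pip install scann

# Build a toy corpus of unit-normalized vectors for dot-product search.
dataset = np.random.rand(100_000, 128).astype(np.float32)
dataset /= np.linalg.norm(dataset, axis=1, keepdims=True)

searcher = (
    scann.scann_ops_pybind.builder(dataset, 10, "dot_product")
    .tree(num_leaves=1000, num_leaves_to_search=100, training_sample_size=25000)
    .score_ah(2, anisotropic_quantization_threshold=0.2)  # asymmetric hashing
    .reorder(100)  # exact re-scoring of the top candidates
    .build()
)

query = dataset[0]
neighbors, distances = searcher.search(query)
print(neighbors[:5], distances[:5])
```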
- Labs.Google
I feel it was unnecessary to create this because https://research.google/ already exists. It just seems like they want to grab another URL with a "pure" domain name instead of subdirectories and the like.
- Smerf: Streamable Memory Efficient Radiance Fields
https://github.com/google-research/google-research/blob/mast...
- Shisa 7B: a new JA/EN bilingual model based on Mistral 7B
You could also try some dedicated translation models like https://huggingface.co/facebook/nllb-moe-54b (or https://github.com/google-research/google-research/tree/master/madlad_400 for something smaller) and see how they do.
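For a sense of what trying one of these involves, here is a minimal sketch using the small distilled NLLB checkpoint through Hugging Face transformers (the linked 54B MoE model exposes the same interface but needs far more memory; NLLB selects languages with FLORES-200 codes).

```python
from transformers import pipeline

# Hedged sketch: facebook/nllb-200-distilled-600M stands in for the much
# larger nllb-moe-54b so the example runs on ordinary hardware.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="jpn_Jpan",  # Japanese, FLORES-200 code
    tgt_lang="eng_Latn",  # English
)
print(translator("私はピザが大好きです。")[0]["translation_text"])
```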
- Translate to and from 400+ languages locally with MADLAD-400
Google released T5X checkpoints for MADLAD-400 a couple of months ago, but nobody could figure out how to run them. Turns out the vocabulary was wrong, but they uploaded the correct one last week.
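With the corrected vocabulary, the converted weights can be run through Hugging Face transformers. A minimal sketch, assuming the google/madlad400-3b-mt conversion, where a "<2xx>" prefix on the input selects the target language:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "google/madlad400-3b-mt"  # converted MADLAD-400 checkpoint on the Hub
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

# "<2en>" asks the model to translate into English.
inputs = tokenizer("<2en> 私はピザが大好きです。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```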
- Mastering ROUGE Matrix: Your Guide to Large Language Model Evaluation for Summarization with Examples
CLIP
- How to Cluster Images
We will also need two more libraries: OpenAI’s CLIP GitHub repo, enabling us to generate image features with the CLIP model, and the umap-learn library, which will let us apply a dimensionality reduction technique called Uniform Manifold Approximation and Projection (UMAP) to those features to visualize them in 2D:
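A minimal sketch of that pipeline, assuming the openai/CLIP package and umap-learn are installed; the image paths are placeholders.

```python
import clip   # pip install git+https://github.com/openai/CLIP.git
import torch
import umap   # pip install umap-learn
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

paths = ["img1.jpg", "img2.jpg", "img3.jpg"]  # your image collection
with torch.no_grad():
    batch = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)
    features = model.encode_image(batch).float().cpu().numpy()

# Reduce the CLIP features to 2D for visualization.
coords = umap.UMAP(n_components=2, metric="cosine").fit_transform(features)
print(coords)  # one (x, y) point per image, ready to plot
```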
- Show HN: Memories, FOSS Google Photos alternative built for high performance
The biggest missing feature in all of these self-hosted photo hosts is real search. Being able to search for things like "beach at night" is a time saver compared to browsing through hundreds or thousands of photos. There are trained neural networks out there like https://github.com/openai/CLIP which are quite good.
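A minimal sketch of that kind of search with CLIP: embed the photo library once, then rank photos by cosine similarity against the text query. Paths are placeholders.

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

paths = ["photo1.jpg", "photo2.jpg"]  # your photo library
with torch.no_grad():
    images = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)
    image_emb = model.encode_image(images)
    text_emb = model.encode_text(clip.tokenize(["beach at night"]).to(device))

# Normalize so the dot product is cosine similarity.
image_emb /= image_emb.norm(dim=-1, keepdim=True)
text_emb /= text_emb.norm(dim=-1, keepdim=True)
scores = (image_emb @ text_emb.T).squeeze(1)

for p, s in sorted(zip(paths, scores.tolist()), key=lambda x: -x[1]):
    print(f"{s:.3f}  {p}")
```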
- Zero-Shot Prediction Plugin for FiftyOne
In computer vision, this is known as zero-shot learning, or zero-shot prediction, because the goal is to generate predictions without explicitly being given any example predictions to learn from. With the advent of high-quality multimodal models like CLIP and foundation models like Segment Anything, it is now possible to generate remarkably good zero-shot predictions for a variety of computer vision tasks.
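For instance, zero-shot classification with CLIP needs nothing more than candidate labels phrased as captions; a minimal sketch with a placeholder image path and labels:

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    # CLIP scores the image against every caption in one forward pass.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).squeeze(0)

for label, p in zip(labels, probs.tolist()):
    print(f"{p:.2%}  {label}")
```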
- A History of CLIP Model Training Data Advances
- NLP Algorithms for Clustering AI Content Search Keywords
The first thing that comes to mind is CLIP: https://github.com/openai/CLIP
- How to Build a Semantic Search Engine for Emojis
Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations or embeddings for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point.
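A minimal sketch of the text-only flavor of that idea: embed short emoji descriptions with CLIP's text encoder and rank them against a free-form query. The emoji set and descriptions are illustrative.

```python
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

emojis = {"🏖️": "beach with umbrella", "🌙": "crescent moon", "🍕": "pizza slice"}
with torch.no_grad():
    desc_emb = model.encode_text(clip.tokenize(list(emojis.values())).to(device))
    query_emb = model.encode_text(clip.tokenize(["a night sky"]).to(device))

# Normalize so the dot product is cosine similarity.
desc_emb /= desc_emb.norm(dim=-1, keepdim=True)
query_emb /= query_emb.norm(dim=-1, keepdim=True)
scores = (desc_emb @ query_emb.T).squeeze(1)

best = max(zip(emojis, scores.tolist()), key=lambda x: x[1])
print(best)  # e.g. ("🌙", 0.8...)
```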
- COMFYUI SDXL WORKFLOW INBOUND! Q&A NOW OPEN! (WIP EARLY ACCESS WORKFLOW INCLUDED!)
In the model card it says: pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
- Stability Matrix v1.1.0 - Portable mode, Automatic updates, Revamped console, and more
Command: "C:\StabilityMatrix\Packages\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip --prefer-binary
- [D] LLM or model that does image -> prompt?
CLIP might work for your needs.
- Where can this be used? I have seen some tutorials to run DeepFloyd on Google Colab. Any way it can be done locally?
pip install deepfloyd_if==1.0.2rc0
pip install xformers==0.0.16
pip install git+https://github.com/openai/CLIP.git --no-deps
pip install huggingface_hub --upgrade
What are some alternatives?
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
open_clip - An open source implementation of CLIP.
fast-soft-sort - Fast Differentiable Sorting and Ranking
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
faiss - A library for efficient similarity search and clustering of dense vectors.
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
disco-diffusion
Milvus - A cloud-native vector database, storage for next generation AI applications
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
struct2depth - Models and examples built with TensorFlow
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation