hnswlib
CLIP
| | hnswlib | CLIP |
|---|---|---|
| Mentions | 12 | 103 |
| Stars | 4,000 | 22,051 |
| Growth | 3.5% | 5.6% |
| Activity | 6.6 | 1.2 |
| Latest commit | 11 days ago | 12 days ago |
| Language | C++ | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
hnswlib
-
Show HN: A fast HNSW implementation in Rust
How does this compare to hnswlib - is it faster? https://github.com/nmslib/hnswlib
-
Show HN: Moodflix – a movie recommendation engine based on your mood
Last week I released Moodflix (https://moodflix.streamlit.app), a movie recommendation engine that finds movies based on your mood.
Moodflix was built on top of a dataset of 10k movies from The Movie Database. I vectorised the films using Hugging Face's T5 model (https://huggingface.co/docs/transformers/model_doc/t5), embedding each film's plot synopsis, genres and languages. Then I indexed the vectors using hnswlib (https://github.com/nmslib/hnswlib). LLMs can understand a movie's plot pretty well and distill the similarities between a user's query (mood) and the movie's plot and genres.
I have gotten feedback from close friends about linking movies to review sites like IMDb or Rotten Tomatoes, linking to streaming sites, and adding movie posters. I would also love to hear from the community what you like, what you want to see, and what you think could be improved.
-
Hierarchical Navigable Small Worlds
Actually the "ef" is not epsilon. It is a parameter of the HNSW index: https://github.com/nmslib/hnswlib/blob/master/ALGO_PARAMS.md...
-
Vector Databases 101
If you want to go larger, you could still use a simple setup in conjunction with Faiss, Annoy or hnswlib.
-
[P] Compose a vector database
Many vector databases are using Hnswlib and that is a supported vector index alongside Faiss and Annoy.
-
Faiss: A library for efficient similarity search
hnswlib (https://github.com/nmslib/hnswlib) is a strong alternative to faiss that I have enjoyed using for multiple projects. It is simple and has great performance on CPU.
After working through several projects that utilized local hnswlib and different databases for text and vector persistence, I integrated hnswlib with sqlite to create an embedded vector search engine that can easily scale up to millions of embeddings. For self-hosted situations of under 10M embeddings and less than insane throughput I think this combo is hard to beat.
https://github.com/jiggy-ai/hnsqlite
-
Storing OpenAI embeddings in Postgres with pgvector
https://github.com/nmslib/hnswlib
Used it to index 40M text snippets in the legal domain. It allows incremental adding.
I love how it just works. You know, doesn’t ANNOY me or makes a FAISS. ;-)
-
Seeking advice on improving NLP search results
3000 texts doesn't sound like too many, so maybe a brute-force cosine calculation to find the most similar vector would work. If that's taking too much time, maybe look at KNN or ANN modules to speed up finding the most similar vector. I use hnswlib in KNN mode for this, and it sorts through about 350,000 vectors in about 30-50 msec.
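At 3000 vectors, the brute-force approach the comment suggests is a single normalized matrix-vector product; the embeddings below are random stand-ins with an assumed dimension:

```python
import numpy as np

rng = np.random.default_rng(0)
texts = rng.normal(size=(3000, 384)).astype("float32")  # stand-in embeddings
query = rng.normal(size=(384,)).astype("float32")

# Normalize once; cosine similarity is then just a dot product
normed = texts / np.linalg.norm(texts, axis=1, keepdims=True)
q = query / np.linalg.norm(query)
scores = normed @ q

top5 = np.argsort(-scores)[:5]
print(top5, scores[top5])
```

Only when this gets too slow (hundreds of thousands of vectors, tight latency budgets) is an ANN index like hnswlib worth the extra moving parts.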
-
How to Build a Semantic Search Engine in Rust
hnswlib is written in C++ and has Python bindings (you should be able to make your own for other languages).
https://github.com/nmslib/hnswlib
-
Anatomy of a txtai index
embeddings - The embeddings index file. This is an Approximate Nearest Neighbor (ANN) index with either Faiss (default), Hnswlib or Annoy, depending on the settings.
CLIP
-
How to Cluster Images
We will also need two more libraries: OpenAI’s CLIP GitHub repo, enabling us to generate image features with the CLIP model, and the umap-learn library, which will let us apply a dimensionality reduction technique called Uniform Manifold Approximation and Projection (UMAP) to those features to visualize them in 2D:
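The umap-learn library exposes a scikit-learn-style `fit_transform`. The sketch below uses random vectors in place of real CLIP image features and a dependency-free PCA projection as a stand-in for UMAP (the actual `umap.UMAP` call is shown in a comment); both produce the same shape of 2D output for plotting:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for CLIP image features (CLIP ViT-B/32 outputs 512-d vectors)
features = rng.normal(size=(500, 512)).astype("float32")

# With umap-learn installed the reduction would be:
#   import umap
#   coords = umap.UMAP(n_components=2).fit_transform(features)
# A plain PCA projection to 2D illustrates the same output shape:
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # (500, 2) points ready for a scatter plot

print(coords.shape)
```

Note that PCA is a linear stand-in only; UMAP is nonlinear and usually separates image clusters far better, which is why the post uses it.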
-
Show HN: Memories, FOSS Google Photos alternative built for high performance
The biggest missing feature in all these self-hosted photo hosts is real search. Being able to search for things like "beach at night" is a time saver compared to browsing through hundreds or thousands of photos. There are trained neural networks out there, like https://github.com/openai/CLIP, which are quite good.
-
Zero-Shot Prediction Plugin for FiftyOne
In computer vision, this is known as zero-shot learning, or zero-shot prediction, because the goal is to generate predictions without explicitly being given any example predictions to learn from. With the advent of high quality multimodal models like CLIP and foundation models like Segment Anything, it is now possible to generate remarkably good zero-shot predictions for a variety of computer vision tasks, including:
-
A History of CLIP Model Training Data Advances
(Github Repo | Most Popular Model | Paper | Project Page)
-
NLP Algorithms for Clustering AI Content Search Keywords
the first thing that comes to mind is CLIP: https://github.com/openai/CLIP
-
How to Build a Semantic Search Engine for Emojis
Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations or embeddings for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point.
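At inference time the contrastive objective described above reduces to cosine similarity between embeddings: the caption whose vector is closest to the image's vector wins. The sketch below uses random unit vectors in place of real CLIP outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 512  # CLIP ViT-B/32 embedding size

def unit(v):
    """L2-normalize along the last axis, as CLIP does before scoring."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-ins for model outputs: one image embedding, several caption embeddings
image_emb = unit(rng.normal(size=(dim,)))
caption_embs = unit(rng.normal(size=(4, dim)))

# CLIP-style scoring: cosine similarity of the image against every caption
scores = caption_embs @ image_emb
best = int(np.argmax(scores))
print(best, scores)
```

With a real model, `image_emb` and `caption_embs` would come from CLIP's image and text encoders; everything after that is exactly this dot product.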
-
COMFYUI SDXL WORKFLOW INBOUND! Q&A NOW OPEN! (WIP EARLY ACCESS WORKFLOW INCLUDED!)
in the model card it says: pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
-
Stability Matrix v1.1.0 - Portable mode, Automatic updates, Revamped console, and more
Command: "C:\StabilityMatrix\Packages\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip --prefer-binary
-
[D] LLM or model that does image -> prompt?
CLIP might work for your needs.
-
Where can this be used? I have seen some tutorials to run DeepFloyd on Google Colab. Any way it can be run locally?
pip install deepfloyd_if==1.0.2rc0
pip install xformers==0.0.16
pip install git+https://github.com/openai/CLIP.git --no-deps
pip install huggingface_hub --upgrade
What are some alternatives?
faiss - A library for efficient similarity search and clustering of dense vectors.
open_clip - An open source implementation of CLIP.
annoy - Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
qdrant - High-performance, massive-scale vector database for the next generation of AI. Also available in the cloud: https://cloud.qdrant.io/
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
awesome-vector-search - Collections of vector search related libraries, services and research papers
disco-diffusion
semantic-search-through-wikipedia-with-weaviate - Semantic search through a vectorized Wikipedia (SentenceBERT) with the Weaviate vector search engine
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation