hnswlib VS bert

Compare hnswlib vs bert and see how they differ.

hnswlib

Header-only C++/python library for fast approximate nearest neighbors (by nmslib)
                hnswlib             bert
Mentions        12                  49
Stars           4,015               37,036
Stars growth    1.5%                0.6%
Activity        6.2                 0.0
Latest commit   19 days ago         24 days ago
Language        C++                 Python
License         Apache License 2.0  Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

hnswlib

Posts with mentions or reviews of hnswlib. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-14.
  • Show HN: A fast HNSW implementation in Rust
    6 projects | news.ycombinator.com | 14 Mar 2024
    How does this compare to hnswlib - is it faster? https://github.com/nmslib/hnswlib
  • Show HN: Moodflix – a movie recommendation engine based on your mood
    1 project | news.ycombinator.com | 9 Nov 2023
    Last week I released Moodflix (https://moodflix.streamlit.app), a movie recommendation engine that finds movies based on your mood.

    Moodflix was created on top of a movie dataset of 10k movies from The Movie Database. I vectorised the films using Hugging Face's T5 model (https://huggingface.co/docs/transformers/model_doc/t5) using the film's plot synopsis, genres and languages. Then I indexed the vectors using hnswlib (https://github.com/nmslib/hnswlib). LLMs can understand a movie's plot pretty well and distill the similarities between a user's query (mood) and the movie's plot and genres.

    I have received feedback from close friends about linking movies to review sites like IMDB or Rotten Tomatoes, linking to sites where you can stream the movie, and adding movie posters. I would also love to hear from the community what things you like, what you want to see and what things you consider can be improved.
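
    A rough sketch of the index-and-query step described above, with random vectors standing in for the T5 embeddings (the dimensions and parameters here are illustrative assumptions, not Moodflix's actual settings):

    ```python
    import hnswlib
    import numpy as np

    # Stand-in for the T5 movie embeddings: 10k movies, 512-dim vectors.
    dim, n_movies = 512, 10_000
    rng = np.random.default_rng(42)
    movie_vecs = rng.standard_normal((n_movies, dim)).astype(np.float32)

    # Build the HNSW index over the movie vectors in cosine space.
    index = hnswlib.Index(space="cosine", dim=dim)
    index.init_index(max_elements=n_movies, ef_construction=200, M=16)
    index.add_items(movie_vecs, np.arange(n_movies))

    # Query: embed the user's mood the same way, then fetch the nearest movies.
    mood_vec = rng.standard_normal((1, dim)).astype(np.float32)
    labels, distances = index.knn_query(mood_vec, k=10)
    ```

    The `labels` array holds the ids of the ten closest movies, which would then be mapped back to titles in the dataset.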

  • Hierarchical Navigable Small Worlds
    2 projects | news.ycombinator.com | 10 Jul 2023
    Actually the "ef" is not epsilon. It is a parameter of the HNSW index: https://github.com/nmslib/hnswlib/blob/master/ALGO_PARAMS.md...
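
    A minimal sketch of how `ef` behaves in hnswlib's Python bindings (the dataset size and parameter values here are arbitrary): `ef_construction` controls the candidate-list size while building, and `ef` controls it at query time, trading recall for speed. `ef` must be at least `k`.

    ```python
    import hnswlib
    import numpy as np

    dim, n = 64, 5_000
    rng = np.random.default_rng(0)
    data = rng.standard_normal((n, dim)).astype(np.float32)

    index = hnswlib.Index(space="l2", dim=dim)
    # ef_construction: candidate-list size while building (build quality vs speed).
    index.init_index(max_elements=n, ef_construction=100, M=16)
    index.add_items(data)

    # Small ef: fast queries, lower recall.
    index.set_ef(10)
    labels_fast, _ = index.knn_query(data[:100], k=10)

    # Large ef: slower queries, higher recall.
    index.set_ef(200)
    labels_acc, _ = index.knn_query(data[:100], k=10)
    ```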
  • Vector Databases 101
    3 projects | /r/datascience | 25 Jun 2023
    If you want to go larger you could still use some simple setup in conjunction with faiss, annoy or hnsw.
  • [P] Compose a vector database
    2 projects | /r/MachineLearning | 13 May 2023
    Many vector databases are using Hnswlib and that is a supported vector index alongside Faiss and Annoy.
  • Faiss: A library for efficient similarity search
    14 projects | news.ycombinator.com | 30 Mar 2023
    hnswlib (https://github.com/nmslib/hnswlib) is a strong alternative to faiss that I have enjoyed using for multiple projects. It is simple and has great performance on CPU.

    After working through several projects that utilized local hnswlib and different databases for text and vector persistence, I integrated hnswlib with sqlite to create an embedded vector search engine that can easily scale up to millions of embeddings. For self-hosted situations of under 10M embeddings and less than insane throughput I think this combo is hard to beat.

    https://github.com/jiggy-ai/hnsqlite
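
    The hnswlib-plus-sqlite combination the commenter describes can be sketched with the standard library's sqlite3 module; the schema and the random stand-in vectors below are illustrative assumptions, not hnsqlite's actual design:

    ```python
    import sqlite3
    import hnswlib
    import numpy as np

    dim = 32
    texts = ["first snippet", "second snippet", "third snippet"]
    rng = np.random.default_rng(1)
    vecs = rng.standard_normal((len(texts), dim)).astype(np.float32)

    # sqlite holds the text; row ids double as vector ids in the index.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE snippets (id INTEGER PRIMARY KEY, body TEXT)")
    db.executemany("INSERT INTO snippets (id, body) VALUES (?, ?)",
                   list(enumerate(texts)))

    index = hnswlib.Index(space="cosine", dim=dim)
    index.init_index(max_elements=len(texts))
    index.add_items(vecs, np.arange(len(texts)))

    # Search the vector index, then join back to sqlite for the text.
    labels, _ = index.knn_query(vecs[0:1], k=2)
    hits = [db.execute("SELECT body FROM snippets WHERE id = ?",
                       (int(i),)).fetchone()[0] for i in labels[0]]
    ```

    In a persistent setup the index would be written out with `index.save_index(path)` alongside the sqlite file.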

  • Storing OpenAI embeddings in Postgres with pgvector
    9 projects | news.ycombinator.com | 6 Feb 2023
    https://github.com/nmslib/hnswlib

    Used it to index 40M text snippets in the legal domain. Allows incremental adding.

    I love how it just works. You know, doesn’t ANNOY me or makes a FAISS. ;-)
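
    The incremental adding mentioned here works through hnswlib's `resize_index`; a small sketch with arbitrary sizes:

    ```python
    import hnswlib
    import numpy as np

    dim = 128
    index = hnswlib.Index(space="cosine", dim=dim)
    index.init_index(max_elements=1_000, ef_construction=200, M=16)

    rng = np.random.default_rng(7)
    first = rng.standard_normal((1_000, dim)).astype(np.float32)
    index.add_items(first, np.arange(1_000))

    # Grow the index in place and append a second batch without rebuilding.
    second = rng.standard_normal((500, dim)).astype(np.float32)
    index.resize_index(1_500)
    index.add_items(second, np.arange(1_000, 1_500))
    ```

    After the second batch, `index.get_current_count()` reports 1,500 elements.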

  • Seeking advice on improving NLP search results
    4 projects | /r/LanguageTechnology | 22 Jan 2023
    3,000 texts doesn't sound like too many, so maybe a brute-force cosine calculation to find the most similar vector would work. If that's taking too much time, maybe look at KNN or ANN modules to speed up finding the most similar vector. I use hnswlib in KNN mode for this; it sorts through about 350,000 vectors in about 30-50 msec.
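
    The brute-force cosine approach suggested here fits in a few lines of NumPy (the corpus size and the 384-dim embedding width below are illustrative):

    ```python
    import numpy as np

    def top_k_cosine(query, corpus, k=5):
        """Exact nearest neighbours by cosine similarity, brute force."""
        q = query / np.linalg.norm(query)
        c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
        sims = c @ q                          # cosine similarity to every row
        top = np.argpartition(-sims, k)[:k]   # unordered top-k in O(n)
        order = top[np.argsort(-sims[top])]   # sort just those k
        return order, sims[order]

    rng = np.random.default_rng(3)
    corpus = rng.standard_normal((3_000, 384)).astype(np.float32)
    # A query near row 42 should return row 42 first.
    query = corpus[42] + 0.01 * rng.standard_normal(384).astype(np.float32)
    ids, sims = top_k_cosine(query, corpus, k=5)
    ```

    At a few thousand vectors this exact scan is typically fast enough that an ANN index is unnecessary.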
  • How to Build a Semantic Search Engine in Rust
    3 projects | news.ycombinator.com | 9 Nov 2022
    hnswlib is written in C++ and has Python bindings (you should be able to make your own for other languages).

    https://github.com/nmslib/hnswlib

  • Anatomy of a txtai index
    4 projects | dev.to | 2 Mar 2022
    embeddings - The embeddings index file. This is an Approximate Nearest Neighbor (ANN) index with either Faiss (default), Hnswlib or Annoy, depending on the settings.

bert

Posts with mentions or reviews of bert. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-10.
  • OpenAI – Application for US trademark "GPT" has failed
    1 project | news.ycombinator.com | 15 Feb 2024
    task-specific parameters, and is trained on the downstream tasks by simply fine-tuning all pre-trained parameters.

    [0] https://arxiv.org/abs/1810.04805

  • Integrate LLM Frameworks
    5 projects | dev.to | 10 Dec 2023
    The release of BERT in 2018 kicked off the language model revolution. The Transformers architecture succeeded RNNs and LSTMs to become the architecture of choice. Unbelievable progress was made in a number of areas: summarization, translation, text classification, entity classification and more. 2023 took things to another level with the rise of large language models (LLMs). Models with billions of parameters showed an amazing ability to generate coherent dialogue.
  • Embeddings: What they are and why they matter
    9 projects | news.ycombinator.com | 24 Oct 2023
    The general idea is that you have a particular task & dataset, and you optimize these vectors to maximize that task. So the properties of these vectors - what information is retained and what is left out during the 'compression' - are effectively determined by that task.

    In general, the core task for the various "LLM tools" involves prediction of a hidden word, trained on very large quantities of real text - thus also mirroring whatever structure (linguistic, syntactic, semantic, factual, social bias, etc) exists there.

    If you want to see how the sausage is made and look at the actual algorithms, then the key two approaches to read up on would probably be Mikolov's word2vec (https://arxiv.org/abs/1301.3781) with the CBOW (Continuous Bag of Words) and Continuous Skip-Gram models, which are based on relatively simple math optimization, and then the BERT (https://arxiv.org/abs/1810.04805) architecture, which does a conceptually similar thing but with a large neural network that can learn more from the same data. For both of them, you can either read the original papers or look up blog posts or videos that explain them; different people have different preferences on how readable academic papers are.
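
    The word2vec idea described above can be sketched in a few lines of NumPy: a toy CBOW model that averages context embeddings and optimizes them to predict the hidden word. The corpus, dimensions, and learning rate here are illustrative assumptions, not values from the original papers.

    ```python
    import numpy as np

    # Toy corpus and vocabulary.
    corpus = "the cat sat on the mat the dog sat on the rug".split()
    vocab = sorted(set(corpus))
    w2i = {w: i for i, w in enumerate(vocab)}
    V, D = len(vocab), 8          # vocabulary size, embedding dimension

    rng = np.random.default_rng(0)
    W_in = rng.normal(scale=0.1, size=(V, D))   # input (context) embeddings
    W_out = rng.normal(scale=0.1, size=(D, V))  # output (prediction) weights

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    lr, window = 0.1, 2
    for epoch in range(200):
        for pos, target in enumerate(corpus):
            # Context = word ids around the target, within the window.
            ctx = [w2i[corpus[j]]
                   for j in range(max(0, pos - window),
                                  min(len(corpus), pos + window + 1))
                   if j != pos]
            h = W_in[ctx].mean(axis=0)      # average the context embeddings
            probs = softmax(h @ W_out)      # predict the hidden (target) word
            grad = probs.copy()
            grad[w2i[target]] -= 1.0        # cross-entropy gradient w.r.t. logits
            W_out -= lr * np.outer(h, grad)
            W_in[ctx] -= lr * (W_out @ grad) / len(ctx)
    ```

    After training, rows of `W_in` are the learned word vectors; words in similar contexts end up close in this space, which is the property all the later embedding models build on.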

  • Ernie, China's ChatGPT, Cracks Under Pressure
    1 project | news.ycombinator.com | 7 Sep 2023
  • Ask HN: How to Break into AI Engineering
    2 projects | news.ycombinator.com | 22 Jun 2023
    Could you post a link to "the BERT paper"? I've read some, but would be interested reading anything that anyone considered definitive :) Is it this one? "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" :https://arxiv.org/abs/1810.04805
  • How to leverage the state-of-the-art NLP models in Rust
    3 projects | /r/infinilabs | 7 Jun 2023
    The Rust crate rust_bert is an implementation of the BERT language model (https://arxiv.org/abs/1810.04805, Devlin, Chang, Lee, Toutanova, 2018). The base model is implemented in the bert_model::BertModel struct. Several language model heads have also been implemented, including:
  • Notes on training BERT from scratch on an 8GB consumer GPU
    1 project | news.ycombinator.com | 2 Jun 2023
    The achievement of training a BERT model to 90% of the GLUE score on a single GPU in ~100 hours is indeed impressive. As for the original BERT pretraining run, the paper [1] mentions that the pretraining took 4 days on 16 TPU chips for the BERT-Base model and 4 days on 64 TPU chips for the BERT-Large model.

    Regarding the translation of these techniques to the pretraining phase for a GPT model, it is possible that some of the optimizations and techniques used for BERT could be applied to GPT as well. However, the specific architecture and training objectives of GPT might require different approaches or additional optimizations.

    As for the SOPHIA optimizer, it is designed to improve the training of deep learning models by adaptively adjusting the learning rate and momentum. According to the paper [2], SOPHIA has shown promising results in various deep learning tasks. It is possible that the SOPHIA optimizer could help improve the training of BERT and GPT models, but further research and experimentation would be needed to confirm its effectiveness in these specific cases.

    [1] https://arxiv.org/abs/1810.04805

  • List of AI-Models
    14 projects | /r/GPT_do_dah | 16 May 2023
  • Bert: Pre-Training of Deep Bidirectional Transformers for Language Understanding
    1 project | news.ycombinator.com | 18 Apr 2023
  • Google internally developed chatbots like ChatGPT years ago
    1 project | news.ycombinator.com | 8 Mar 2023

What are some alternatives?

When comparing hnswlib and bert you can also consider the following projects:

faiss - A library for efficient similarity search and clustering of dense vectors.

NLTK - NLTK Source

annoy - Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk

bert-sklearn - a sklearn wrapper for Google's BERT model

qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/

pysimilar - A python library for computing the similarity between two strings (text) based on cosine similarity

awesome-vector-search - Collections of vector search related libraries, service and research papers

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

semantic-search-through-wikipedia-with-weaviate - Semantic search through a vectorized Wikipedia (SentenceBERT) with the Weaviate vector search engine

PURE - [NAACL 2021] A Frustratingly Easy Approach for Entity and Relation Extraction https://arxiv.org/abs/2010.12812

txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows

NL_Parser_using_Spacy - NLP parser using NER and TDD