|  | hamt | ann-benchmarks |
|---|---|---|
| Mentions | 7 | 51 |
| Stars | 261 | 4,636 |
| Growth | - | - |
| Activity | 6.9 | 7.7 |
| Last commit | 3 months ago | 3 days ago |
| Language | C | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
hamt
- Visual Introduction to Hash-Array Mapped Tries (HAMTs)
This isn't a very good explanation. The Wikipedia article isn't great either. I like this description:
https://github.com/mkirchner/hamt#persistent-hash-array-mapp...
The name does tell you quite a bit about what these are:
* Hash - rather than directly using the keys to navigate the structure, the keys are hashed, and the hashes are used for navigation. This turns potentially long, poorly-distributed keys into short, well-distributed keys. However, that does mean you have to compute a hash on every access, and have to deal with hash collisions. The mkirchner implementation above calls collisions "hash exhaustion", and deals with them using some generational hashing scheme. I think I'd fall back to collision lists until that was conclusively proven to be too slow.
* Trie - the tree is navigated by indexing nodes using chunks of the (hash of the) key, rather than comparing the keys in the node
* Array mapped - sparse nodes are compressed, using a bitmap to indicate which logical slots are occupied, and then only storing those (a sketch of the lookup follows this list). The bitmaps live in the parent node, rather than in the node itself, I think? Presumably that helps with fetching.
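A minimal sketch of one step of that navigation, assuming a node layout of one bitmap plus a compact child array and 5-bit hash chunks (the struct and function names are made up for illustration; this is not the mkirchner/hamt API):

```c
#include <stdint.h>

/* Hypothetical node layout: one bitmap plus a compact array of children.
 * Bit i of the bitmap set => logical slot i (0..31) is occupied, and the
 * occupied slots are stored contiguously, in slot order. */
typedef struct hamt_node {
    uint32_t bitmap;
    struct hamt_node *slots[];   /* only the occupied slots are stored */
} hamt_node;

/* One navigation step: take the next 5-bit chunk of the hash, test the
 * bitmap, and map the logical slot to its physical index by counting
 * the set bits below it (popcount). */
static hamt_node *hamt_step(const hamt_node *node, uint32_t hash, int depth)
{
    uint32_t chunk = (hash >> (depth * 5)) & 0x1F;  /* 5-bit slice of the hash */
    uint32_t bit   = 1u << chunk;

    if (!(node->bitmap & bit))
        return NULL;                                /* slot empty: key not present */

    int idx = __builtin_popcount(node->bitmap & (bit - 1));
    return node->slots[idx];
}
```

The popcount over the masked bitmap is what "array mapped" buys: a node with k children stores only k pointers, yet finding the right child is still O(1).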
A HAMT contains a lot of small nodes. If every entry is a bitmap plus a pointer, then it's two words, and if we use five-bit chunks, then each node can have up to 32 entries, but I would imagine the majority are small, so a typical node might be 64 bytes. I worry that doing a malloc for each one would end up with a lot of overhead. Are HAMTs often implemented with some more custom memory management? Can you allocate a big block and then carve it up?
Could you do a slightly relaxed HAMT where nodes are not always fully compact, but sized to the smallest suitable power of two entries? That might let you use some sort of buddy allocation scheme. It would also let you insert and delete without having to reallocate the node. Although I suppose you can already do that by mapping a few empty slots.
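On the allocation question, one common answer is exactly "allocate a big block and then carve it up": a bump/arena allocator that hands out node-sized chunks from one large malloc and frees them all at once. A rough sketch, assuming whole-arena lifetime rather than per-node free (illustrative only; not how mkirchner/hamt manages memory):

```c
#include <stdint.h>
#include <stdlib.h>

/* Bump (arena) allocator: one big block, carved into small aligned chunks.
 * Individual nodes are never freed; the whole arena is released at once. */
typedef struct {
    uint8_t *base;
    size_t   used;
    size_t   cap;
} arena;

static int arena_init(arena *a, size_t cap)
{
    a->base = malloc(cap);
    a->used = 0;
    a->cap  = cap;
    return a->base ? 0 : -1;
}

static void *arena_alloc(arena *a, size_t n)
{
    size_t off = (a->used + 15) & ~(size_t)15;   /* keep 16-byte alignment */
    if (off + n > a->cap)
        return NULL;                             /* arena exhausted */
    a->used = off + n;
    return a->base + off;
}

static void arena_release(arena *a)
{
    free(a->base);
    a->base = NULL;
    a->used = a->cap = 0;
}
```

This reduces each node allocation to a pointer bump and keeps the small nodes densely packed; if individual deletes matter, a slab/pool allocator with per-size free lists (or the power-of-two sizing suggested above) is the natural refinement.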
- Show HN: A hash array-mapped trie implementation in C
- Ask HN: What are some 'cool' but obscure data structures you know about?
ann-benchmarks
- Using Your Vector Database as a JSON (Or Relational) Datastore
Off the top of my head, pgvector only supports two index types, and those run in memory only. It doesn't support GPU indexing or disk-based indexing, and there's no separation of queries and insertions.
Also, from the different people I've talked to, they struggle to scale past 100K-1M vectors.
You can also have a look yourself from a performance perspective: https://ann-benchmarks.com/
- ANN Benchmarks
- Approximate Nearest Neighbors Oh Yeah
https://ann-benchmarks.com/ is a good resource covering those libraries and much more.
- pgvector vs Pinecone: cost and performance
We utilized the ANN Benchmarks methodology, a standard for benchmarking vector databases. Our tests used the dbpedia dataset of 1,000,000 OpenAI embeddings (1536 dimensions) and the inner product distance metric for both Pinecone and pgvector.
- Vector database is not a separate database category
Data warehouses are columnar stores. They are very different from row-oriented databases like Postgres and MySQL: operations on columns, e.g. aggregations (the mean of a column), are very efficient.
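A toy illustration of the difference (illustrative only; real columnar engines add compression and vectorized execution on top): averaging one field of a row-oriented table drags every record through the cache, while the same aggregation over a columnar layout is a dense scan of one array.

```c
#include <stddef.h>

/* Row-oriented: each record's fields are stored together, so averaging one
 * column strides across the unrelated fields of every record. */
struct row { int id; double price; double qty; char pad[40]; };  /* ~64 bytes */

double mean_price_rows(const struct row *rows, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += rows[i].price;      /* pulls in a whole record per value */
    return n ? sum / n : 0.0;
}

/* Column-oriented: the price column is one contiguous array, so the same
 * aggregation is a cache- and SIMD-friendly sequential scan. */
double mean_price_column(const double *price, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += price[i];
    return n ? sum / n : 0.0;
}
```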
Most vector databases use one of a few different vector indexing libraries - FAISS, hnswlib, and scann (Google only) are popular. The newer vector DBs, like Weaviate, have introduced their own indexes, but I haven't seen any performance difference.
Reference: https://ann-benchmarks.com/
- How We Made PostgreSQL a Better Vector Database
(Blog author here). Thanks for the question. In this case the index for both DiskANN and pgvector HNSW is small enough to fit in memory on the machine (8GB RAM), so there's no need to touch the SSD. We plan to test on a config where the index size is larger than memory (we couldn't this time due to limitations in ANN benchmarks [0], the tool we use).
To your question about RAM usage, we provide a graph of index size. When enabling PQ, our new index is 10x smaller than pgvector HNSW. We don't have numbers for HNSWPQ in FAISS yet.
[0]: https://github.com/erikbern/ann-benchmarks/
- Do we think about vector dbs wrong?
- Vector Search with OpenAI Embeddings: Lucene Is All You Need
In terms of "All You Need" for Vector Search, ANN Benchmarks (https://ann-benchmarks.com/) is a good site to review when deciding what you need. As with anything complex, there often isn't a universal solution.
txtai (https://github.com/neuml/txtai) can build indexes with Faiss, Hnswlib and Annoy. All 3 libraries have been around at least 4 years and are mature. txtai also supports storing metadata in SQLite, DuckDB and the next release will support any JSON-capable database supported by SQLAlchemy (Postgres, MariaDB/MySQL, etc).
- Vector databases: analyzing the trade-offs
pg_vector doesn't perform well compared to other methods, at least according to ANN-Benchmarks (https://ann-benchmarks.com/).
txtai is more than just a vector database. It also has a built-in graph component for topic modeling that utilizes the vector index to autogenerate relationships. It can store metadata in SQLite/DuckDB, with support for other databases coming. It has support for running LLM prompts right with the data, similar to a stored procedure, through workflows. And it has built-in support for vectorizing data into embeddings.
For vector databases that simply store vectors, I agree that it's nothing more than just a different index type.
- Vector Dataset benchmark with 1536/768 dim data
The reason https://ann-benchmarks.com is so good is that we can see a plot of recall vs. latency. I can see you have some latency numbers in the leaderboard at the bottom, but it's very difficult to make a decision from those alone.
As a practitioner who works with vector databases every day, latency by itself is meaningless to me, because I need to know whether it's fast AND accurate, and what the tradeoff is! You can't have it both ways. So it would be helpful to show plots of this tradeoff, similar to ann-benchmarks.
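For reference, the "accurate" axis on those plots is usually recall@k: the fraction of the true k nearest neighbours (from exact search) that the approximate index actually returns. A minimal sketch of that measurement (illustrative; not the ann-benchmarks code):

```c
#include <stddef.h>

/* recall@k: how many of the ground-truth k nearest neighbour ids
 * (from exact search) appear among the k ids the ANN index returned. */
double recall_at_k(const int *truth, const int *result, size_t k)
{
    size_t hits = 0;
    for (size_t i = 0; i < k; i++) {
        for (size_t j = 0; j < k; j++) {
            if (result[i] == truth[j]) { hits++; break; }
        }
    }
    return k ? (double)hits / (double)k : 0.0;
}
```

ann-benchmarks sweeps each index's query-time parameters, measures recall and queries per second at every setting, and plots one point per setting; that curve is the recall-vs-latency tradeoff being asked for here.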
What are some alternatives?
AspNetCoreDiagnosticScenarios - This repository has examples of broken patterns in ASP.NET Core applications
pgvector - Open-source vector similarity search for Postgres
faiss - A library for efficient similarity search and clustering of dense vectors.
RVS_Generic_Swift_Toolbox - A Collection Of Various Swift Tools, Like Extensions and Utilities
Milvus - A cloud-native vector database, storage for next generation AI applications
multiversion-concurrency-control - Implementation of multiversion concurrency control, Raft, Left Right concurrency Hashmaps and a multi consumer multi producer Ringbuffer, concurrent and parallel load-balanced loops, parallel actors implementation in Main.java, Actor2.java and a parallel interpreter
tlsh
CPython - The Python programming language
vald - Vald. A Highly Scalable Distributed Vector Search Engine
pyroscope - Continuous Profiling Platform. Debug performance issues down to a single line of code [Moved to: https://github.com/grafana/pyroscope]
pgANN - Fast Approximate Nearest Neighbor (ANN) searches with a PostgreSQL database.