oterm
A text-based terminal client for Ollama (by ggozad).
vectordb
A minimal Python package for storing and retrieving text using chunking, embeddings, and vector search. (by kagisearch)
| | oterm | vectordb |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 622 | 552 |
| Growth | - | 5.1% |
| Activity | 9.2 | 7.6 |
| Latest commit | 10 days ago | 6 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
oterm
Posts with mentions or reviews of oterm. We have used some of these posts to build our list of alternatives and similar projects.
- term
Check it out here
vectordb
Posts with mentions or reviews of vectordb. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-11-26.
- VectorDB: Vector Database Built by Kagi Search
We needed a low-latency, on-premise solution that we could run on edge nodes (so it had to be lightweight), with sane defaults so that anyone on the team could use it on a whim in a second.
The result is this, and we constantly benchmark the performance of different embeddings to ensure the best defaults.
[1] https://github.com/kagisearch/vectordb#embeddings-performanc...
- Embeddings: What they are and why they matter
If you are looking for a lightweight, low-latency, fully local, end-to-end solution (chunking, embedding, storage, and vector search), try vectordb [1].
Just spent a day updating it with the latest benchmarks for text embedding models.
[1] https://github.com/kagisearch/vectordb
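The end-to-end flow those comments describe (chunking, embedding, storage, and vector search) can be sketched in plain Python. This is an illustrative toy, not vectordb's actual implementation: the hashed bag-of-words `embed()` stands in for the pretrained sentence-embedding models the library benchmarks, and `ToyVectorStore`, `chunk()`, and all other names here are made up for the example.

```python
import math
import re

def chunk(text, max_words=16):
    """Split text into fixed-size word chunks (a stand-in for real
    chunking strategies)."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def embed(text, dim=1024):
    """Toy hashed bag-of-words vector, L2-normalized -- NOT a real
    embedding model; real systems use pretrained neural embeddings."""
    vec = [0.0] * dim
    for token in re.findall(r"\w+", text.lower()):
        vec[hash(token) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class ToyVectorStore:
    """Minimal in-memory store: save chunks, search by cosine similarity."""

    def __init__(self):
        self.items = []  # list of (chunk_text, embedding) pairs

    def save(self, text):
        # Chunk the document and store one embedding per chunk.
        for piece in chunk(text):
            self.items.append((piece, embed(piece)))

    def search(self, query, top_n=1):
        # Vectors are unit-length, so the dot product is cosine similarity.
        q = embed(query)
        scored = [(sum(a * b for a, b in zip(q, e)), c)
                  for c, e in self.items]
        scored.sort(reverse=True)
        return [c for _, c in scored[:top_n]]

store = ToyVectorStore()
store.save("Ollama runs large language models locally on your machine")
store.save("Vector databases retrieve text by semantic similarity of embeddings")
print(store.search("semantic similarity search", top_n=1)[0])
```

Swapping `embed()` for a real sentence-embedding model is what turns this from keyword overlap into the semantic search the post describes.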