sqlite-vss vs llama.cpp

| | sqlite-vss | llama.cpp |
|---|---|---|
| Mentions | 17 | 774 |
| Stars | 1,487 | 57,463 |
| Growth | - | - |
| Activity | 8.0 | 10.0 |
| Last commit | about 2 months ago | 1 day ago |
| Language | C++ | C++ |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
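The recency weighting described above can be sketched as exponential decay over commit ages. This is a hypothetical formula for illustration only; the exact scoring function behind the activity number is not published here.

```python
from datetime import date, timedelta

def activity_score(commit_dates, today, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age_in_days / half_life_days), so recent commits count
    more than older ones. (Illustrative; not the site's real formula.)"""
    score = 0.0
    for d in commit_dates:
        age = (today - d).days
        score += 0.5 ** (age / half_life_days)
    return score

today = date(2024, 6, 1)
recent = [today - timedelta(days=n) for n in (1, 2, 3)]     # commits this week
old = [today - timedelta(days=n) for n in (300, 310, 320)]  # commits ~10 months ago
# Three recent commits far outweigh three old ones under this weighting.
print(activity_score(recent, today) > activity_score(old, today))
```

A ranking built on such a score would then be normalized into a relative 0-10 scale across all tracked projects.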
sqlite-vss
-
I'm writing a new vector search SQLite Extension
I guess this is an answer to the GitHub issue I opened against SQLite-vss a couple of months ago?
https://github.com/asg017/sqlite-vss/issues/124
-
Embeddings are a good starting point for the AI curious app developer
Perhaps sqlite-vss? It adds vector searches to sqlite.
https://github.com/asg017/sqlite-vss
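As a rough illustration of what such an extension provides, here is a brute-force version of vector search over embeddings stored as BLOBs in plain SQLite, using only the Python standard library. sqlite-vss replaces this linear scan with a Faiss-backed index queried from SQL; the table name and toy 2-dimensional vectors below are invented for the sketch.

```python
import math
import sqlite3
import struct

def pack(vec):
    """Serialize a float vector to a little-endian float32 BLOB."""
    return struct.pack(f"<{len(vec)}f", *vec)

def unpack(blob):
    n = len(blob) // 4
    return list(struct.unpack(f"<{n}f", blob))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(db, query, k=3):
    """Linear scan over every stored vector; this is the O(n) work
    that a vector index (e.g. Faiss via sqlite-vss) avoids."""
    rows = db.execute("SELECT id, embedding FROM docs").fetchall()
    scored = [(doc_id, cosine(query, unpack(blob))) for doc_id, blob in rows]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, embedding BLOB)")
vectors = {1: [1.0, 0.0], 2: [0.0, 1.0], 3: [0.7, 0.7]}
db.executemany("INSERT INTO docs VALUES (?, ?)",
               [(i, pack(v)) for i, v in vectors.items()])
print(nearest(db, [1.0, 0.1], k=2))  # ids 1 and 3 score highest
```

The BLOB-packing trick alone already makes SQLite a workable small-scale vector store; the extension adds the index so queries stay fast as the row count grows.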
-
How to Enhance Content with Semantify
Semantify uses sqlite-vss to store and query vector embeddings in a local SQLite database. It runs fast, precise vector searches over these embeddings to find and recommend relevant content, so readers are presented with articles that truly match their interests.
-
SQLite vs. Chroma: A Comparative Analysis for Managing Vector Embeddings
Whether you’re reaching for a well-known option like SQLite, enriched with the sqlite-vss extension, or exploring an open-source vector database like Chroma, selecting the right tool is paramount. This article compares these two choices, weighing the pros and cons of each to help you choose the right store for your project's vector embeddings.
-
Vector database is not a separate database category
Here is a SQLite extension that uses Faiss under the hood.
https://github.com/asg017/sqlite-vss
Not associated with the project, just love SQLite and find it very useful.
- SQLite-Vss: A SQLite Extension for Vector Search
-
Introduction to Vector Search and Embeddings
Vector Databases: As your data grows, efficiently searching through millions of vectors can become a challenge. Specialized vector databases like FAISS, Annoy, or Elasticsearch's vector search capabilities can be explored to manage and search through large-scale vector data. In addition, general-purpose databases have vector extensions: sqlite-vss for SQLite and pgvector for PostgreSQL can both be used to store and query vector embeddings.
-
The Problem with LangChain
I had a go at one of those a few months ago: https://datasette.io/plugins/datasette-faiss
Alex Garcia built a better one here as a SQLite extension: https://github.com/asg017/sqlite-vss
-
Every request, every microsecond: scalable machine learning at Cloudflare
Since the problem domain is that of anomaly detection from constructed request feature embeddings, I wonder if an ANN-search methodology using an embedded database (such as https://github.com/asg017/sqlite-vss or similar) was explored.
-
Disrupting the AI Scene with Open Source and Open Innovation
Until a couple of weeks ago, searching for "sqlite vector plugin" turned up no results. Then, two weeks ago, I found Alex's SQLite VSS plugin for SQLite. The library was an amazing piece of engineering from an "idea perspective". However, as I started playing around with it, I realised it was, in effect, like the "Titanic": beautiful and amazing, but destined to take on water and sink to the bottom of the ocean because of what we software engineers refer to as "memory leaks".
llama.cpp
- Xmake: A modern C/C++ build tool
-
Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
-
Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
-
Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
-
Embeddings are a good starting point for the AI curious app developer
I just did this recently for the local chat-with-PDF feature in https://recurse.chat. (It's a macOS app with a built-in llama.cpp server and a local vector database.)
Running an embedding server locally is pretty straightforward:
- Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
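Once a llama.cpp server binary is running with embeddings enabled, it can be queried over plain HTTP. A minimal sketch, assuming a server listening on localhost:8080 and exposing an /embedding endpoint; the endpoint name and response shape may differ across llama.cpp versions, so check your server's documentation.

```python
import json
import urllib.request

def build_embedding_request(text, host="http://localhost:8080"):
    """Build the URL and JSON body for the server's /embedding endpoint
    (assumed endpoint name; verify against your llama.cpp version)."""
    url = f"{host}/embedding"
    body = json.dumps({"content": text}).encode("utf-8")
    return url, body

def embed(text, host="http://localhost:8080"):
    """POST the text and return the embedding vector (a list of floats)."""
    url, body = build_embedding_request(text, host)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]  # assumed response key
```

A call like embed("hello world") would then return a vector suitable for storing in whatever local vector database the app uses.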
- Mixtral 8x22B
What are some alternatives?
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
chroma - the AI-native open-source embedding database
gpt4all - gpt4all: run open-source LLMs anywhere
pgvector-go - pgvector support for Go
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
milvus-lite - A lightweight version of Milvus wrapped with Python.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
typesense-instantsearch-semantic-search-demo - A demo that shows how to build a semantic search experience with Typesense's vector search feature and Instantsearch.js
ggml - Tensor library for machine learning
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM