| | chroma | llama.cpp |
|---|---|---|
| Mentions | 32 | 775 |
| Stars | 12,324 | 57,463 |
| Growth | 5.5% | - |
| Activity | 9.8 | 10.0 |
| Last commit | 7 days ago | 1 day ago |
| Language | Python | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
chroma
- Let’s build AI tools with the help of AI and TypeScript!
Package installer for Python (pip): we use this for installing Python-based packages such as Jupyter Lab, and we'll also use it to install other Python-based tools like the Chroma DB vector database.
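As a concrete sketch of that workflow (the PyPI package names `jupyterlab` and `chromadb` are assumptions; check each project's install docs):

```shell
# Install Jupyter Lab and the Chroma DB client with pip
pip install jupyterlab
pip install chromadb
```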
- Mixtral 8x22B
Optional: You can use SillyTavern[1] for a more "rich" chat experience
The above lets me chat, at least superficially, with my friend. It's nice for simple interactions and banter; I've found it to be a positive and reflective experience.
0. https://www.trychroma.com/
- 7 Vector Databases Every Developer Should Know!
Chroma DB is a newer entrant in the vector database arena, designed for handling high-dimensional embedding vectors. It's particularly useful for applications in semantic search, recommendation systems, and content discovery, where vector similarity plays a crucial role.
- AI Grant Traction in OSS Startups
- Qdrant, the Vector Search Database, raised $28M in a Series A round
- Vector Databases: A Technical Primer [pdf]
For Python I believe Chroma [1] can be used embedded.
For Go I recently started building chromem-go, inspired by the Chroma interface: https://github.com/philippgille/chromem-go
It's neither advanced nor for scale yet, but the RAG demo works.
[1] https://github.com/chroma-core/chroma
- Chroma – the open-source embedding database
- Show HN: Embeddings Solution for Personal Journal
The formatting is a bit off.
The web app is here: https://jumblejournal.org
The DB used is here: https://www.trychroma.com/
- SQLite vs. Chroma: A Comparative Analysis for Managing Vector Embeddings
Whether you’re considering a well-known option like SQLite, extended with the sqlite-vss extension, or an open-source vector database like Chroma, selecting the right tool is paramount. This article compares the two, walking through the pros and cons of each to help you choose the right tool for storing and querying vector embeddings in your project.
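To make the comparison concrete, here is a stdlib-only sketch of the core operation both tools provide: storing embeddings (here as JSON text in SQLite) and ranking them against a query by cosine similarity with a brute-force scan. The table layout and toy vectors are made up; sqlite-vss and Chroma do this with proper indexing rather than a full scan.

```python
import json
import math
import sqlite3

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE embeddings (id TEXT PRIMARY KEY, vec TEXT)")

# Toy "embeddings" -- in practice these come from an embedding model
docs = {"cat": [1.0, 0.0, 0.0], "dog": [0.9, 0.1, 0.0], "car": [0.0, 0.0, 1.0]}
con.executemany(
    "INSERT INTO embeddings VALUES (?, ?)",
    [(doc_id, json.dumps(vec)) for doc_id, vec in docs.items()],
)

# Brute-force nearest-neighbour query
query = [1.0, 0.05, 0.0]
rows = con.execute("SELECT id, vec FROM embeddings").fetchall()
ranked = sorted(rows, key=lambda r: cosine(json.loads(r[1]), query), reverse=True)
print(ranked[0][0])  # the most similar document id
```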
- How to use Chroma to store and query vector embeddings
Create a new directory for our example project. Then, at the root of the project directory, clone the Chroma repository into it:
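A sketch of those steps as shell commands (the directory name is made up; the clone URL is the chroma-core repository linked elsewhere on this page):

```shell
# Create the project directory and clone Chroma into its root
mkdir chroma-example && cd chroma-example
git clone https://github.com/chroma-core/chroma.git
```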
llama.cpp
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
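A worked example of those fractions (illustrative arithmetic only; the helper name is made up):

```python
# Illustrative arithmetic for the Metal allocation limits described above:
# roughly 1/2 of system RAM per process, 3/4 of RAM for the GPU overall.

def metal_limits_gb(ram_gb):
    per_process = ram_gb / 2      # maximum for a single process
    gpu_total = ram_gb * 3 / 4    # maximum allocated to the GPU overall
    return per_process, gpu_total

per_process, gpu_total = metal_limits_gb(64)
print(per_process, gpu_total)  # on a 64 GB machine: 32.0 48.0
```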
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
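Before diving into tooling, the core LoRA idea can be sketched in a few lines of plain Python: freeze the base weight matrix W and train only two small matrices A and B whose product is a low-rank update. This is an illustration of the math, not llama.cpp's or KitOps' implementation; all names and numbers are made up.

```python
# LoRA sketch: W_eff = W + (alpha / r) * B @ A, where A is r x d_in and
# B is d_out x r with r much smaller than both dimensions, so far fewer
# parameters are trained than a full d_out x d_in update.

def matmul(X, Y):
    # Plain list-of-lists matrix multiply
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

d_out, d_in, r, alpha = 3, 4, 1, 2.0
W = [[0.0] * d_in for _ in range(d_out)]   # frozen base weights (toy values)
A = [[1.0, 0.0, 0.0, 0.0]]                 # r x d_in, trained
B = [[1.0], [0.5], [0.0]]                  # d_out x r, trained

delta = matmul(B, A)                       # low-rank update, d_out x d_in
W_eff = [
    [w + (alpha / r) * d for w, d in zip(w_row, d_row)]
    for w_row, d_row in zip(W, delta)
]
```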
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
Well, https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
- Embeddings are a good starting point for the AI curious app developer
Have just done this recently for the local chat-with-PDF feature in https://recurse.chat (it's a macOS app with a built-in llama.cpp server and a local vector database).
Running an embedding server locally is pretty straightforward:
- Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
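Once the server binary is running with embeddings enabled, querying it is a plain HTTP POST. This sketch only builds the request without sending it; the /embedding endpoint path, port, and payload shape are assumptions based on the llama.cpp server example, so check the README of your release.

```python
import json
import urllib.request

# Hypothetical sketch: build a request for a locally running llama.cpp
# server's embedding endpoint. Endpoint and payload are assumptions.

def build_embedding_request(text, base_url="http://localhost:8080"):
    payload = json.dumps({"content": text}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/embedding",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_embedding_request("hello world")
# Sending it would be: urllib.request.urlopen(req).read()
```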
What are some alternatives?
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
faiss - A library for efficient similarity search and clustering of dense vectors.
gpt4all - gpt4all: run open-source LLMs anywhere
golang-ical - An ICS / iCal parser and serialiser for Golang.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
ggml - Tensor library for machine learning
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM