Chroma is an open-source embedding database designed to store and query vector embeddings efficiently, enhancing Large Language Models (LLMs) by providing relevant context to user inquiries. In this tutorial, I will explain how to use Chroma in persistent server mode using a custom embedding model within an example Python project. The companion code repository for this blog post is available on GitHub.
Create a new directory for the example project. Next, clone the Chroma repository into the root of that directory:
This will set up Chroma and run it as a server with uvicorn, exposing port 8000 outside the `net` Docker network. The command also mounts a persistent Docker volume for Chroma's database at `chroma/chroma`, relative to your project's root.
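A hypothetical compose fragment sketching the setup described above; the service name, build path, and network name are assumptions, and the `uvicorn chromadb.app:app` invocation mirrors how the Chroma repository's own Docker setup starts the server:

```yaml
# docker-compose.yml (sketch) - Chroma in persistent server mode
services:
  chroma:
    build: ./chroma                       # the cloned repository
    command: uvicorn chromadb.app:app --host 0.0.0.0 --port 8000
    ports:
      - "8000:8000"                       # reachable outside the `net` network
    volumes:
      - ./chroma/chroma:/chroma/chroma    # persistent database directory
    networks:
      - net

networks:
  net:
    driver: bridge
```

With this in place, `docker compose up -d` brings the server up, and data written to the mounted directory survives container restarts.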