How to use Chroma to store and query vector embeddings

This page summarizes the projects mentioned and recommended in the original post on dev.to

  • example-chroma-vector-embeddings

    Example project for using chroma to store and query vector embeddings

  • Chroma is an open-source embedding database designed to store and query vector embeddings efficiently, enhancing Large Language Models (LLMs) by providing relevant context to user inquiries. In this tutorial, I will explain how to use Chroma in persistent server mode using a custom embedding model within an example Python project. The companion code repository for this blog post is available on GitHub.

  • chroma

    the AI-native open-source embedding database

  • Create a new directory for our example project. Next, we need to clone the Chroma repository: at the root of your project directory, clone Chroma into it:
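The original post's snippet is not preserved on this page; the step would look something like the following, assuming the directory is named after the companion repository and Chroma's official GitHub repository is used:

```shell
# Create the project directory (name is illustrative) and enter it.
mkdir example-chroma-vector-embeddings
cd example-chroma-vector-embeddings

# Clone Chroma into the project root.
git clone https://github.com/chroma-core/chroma.git
```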

  • uvicorn

    An ASGI web server, for Python. 🦄
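The exact command from the post is not reproduced on this page; a sketch that matches the description below, assuming the official chromadb/chroma image, might look like:

```shell
# Create the docker network the server will join.
docker network create net

# Run Chroma as a server: publish port 8000 outside the "net"
# network and persist the database to ./chroma/chroma on the host.
docker run -d \
  --name chroma \
  --network net \
  -p 8000:8000 \
  -v "$(pwd)/chroma/chroma:/chroma/chroma" \
  chromadb/chroma
```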

  • This will set up Chroma and run it as a server with uvicorn, making port 8000 accessible outside the net docker network. The command also mounts a persistent docker volume for Chroma's database at chroma/chroma, relative to your project's root.
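With the server running, storing and querying embeddings from Python can be sketched as follows. This assumes the server above is reachable on localhost:8000; `ToyEmbeddingFunction` is a deterministic stand-in for whatever custom embedding model the post uses, and the collection name and documents are illustrative:

```python
import hashlib


class ToyEmbeddingFunction:
    """Stand-in for a custom embedding model (not for real use):
    hashes each text into a fixed-length 8-dimensional vector.
    Chroma calls this with a list of texts and expects one vector
    per text back."""

    def __call__(self, input):
        vectors = []
        for text in input:
            digest = hashlib.sha256(text.encode("utf-8")).digest()
            # Scale the first 8 digest bytes into [0.0, 1.0].
            vectors.append([b / 255.0 for b in digest[:8]])
        return vectors


def store_and_query(host="localhost", port=8000):
    """Connect to the Chroma server, add a document, and query it.

    Requires `pip install chromadb` and the server started above.
    """
    import chromadb

    client = chromadb.HttpClient(host=host, port=port)
    collection = client.get_or_create_collection(
        "docs", embedding_function=ToyEmbeddingFunction()
    )
    collection.add(ids=["1"], documents=["Chroma stores vector embeddings."])
    return collection.query(
        query_texts=["What does Chroma store?"], n_results=1
    )
```

Because the embedding function is registered on the collection, both `add` and `query` embed their texts with the same model, which is what makes the similarity search meaningful.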

