Using Llamafiles for Embeddings in Local RAG Applications

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • BinaryVectorDB

    Efficient vector database for hundreds of millions of embeddings.

  • This style of embedding could be quite lightweight/cheap/efficient: https://github.com/cohere-ai/BinaryVectorDB

  • ollama

    Get up and running with Llama 3, Mistral, Gemma, and other large language models.

  • Could you sidestep inference altogether? Just return the top N results by cosine similarity (or full-text search) and let the user find what they need?

    The models at https://ollama.com also work really well on most modern hardware.
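
The retrieval-only approach suggested above (skip generation, just rank documents by cosine similarity) can be sketched in a few lines. This is a minimal illustration with NumPy; the function name and shapes are illustrative, not taken from any of the projects listed:

```python
import numpy as np

def top_n(query_emb, doc_embs, n=5):
    """Return indices and scores of the n documents most similar to the query."""
    # Normalize so that a plain dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    sims = d @ q
    # Sort descending by similarity and keep the top n.
    idx = np.argsort(-sims)[:n]
    return idx, sims[idx]
```

A RAG UI could present these top-N chunks directly to the user instead of feeding them to an LLM, trading answer synthesis for zero inference cost.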

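As a rough illustration of why binary embeddings like those behind BinaryVectorDB are lightweight: quantizing each float dimension to its sign bit shrinks a float32 vector 32x, and similarity search reduces to XOR plus a popcount (Hamming distance). A hypothetical NumPy sketch, not BinaryVectorDB's actual API:

```python
import numpy as np

def binarize(embs):
    # Keep only the sign of each dimension and pack 8 dims per byte:
    # e.g. a 1024-d float32 vector (4 KB) becomes 128 bytes.
    return np.packbits(np.asarray(embs) > 0, axis=-1)

def hamming_top_n(query_bits, db_bits, n=5):
    # XOR flags the differing bits; unpack and count them per row.
    # Smaller Hamming distance = more similar.
    dists = np.unpackbits(np.bitwise_xor(db_bits, query_bits), axis=-1).sum(axis=1)
    return np.argsort(dists)[:n]
```

In practice, systems using this trick often re-rank the binary top candidates with full-precision embeddings to recover most of the lost accuracy.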
NOTE: The mention count for each project reflects mentions across common posts plus user-suggested alternatives, so a higher number indicates a more popular project.


Related posts

  • Show HN: MinimalChat Is a Full-Featured and Self-Contained Chat Application

    1 project | news.ycombinator.com | 14 Jun 2024
  • Introducing Semantic Kernel

    3 projects | dev.to | 14 Jun 2024
  • RAG with OLLAMA

    1 project | dev.to | 13 Jun 2024
  • Mathematical Optimization for Cargo Ships

    7 projects | news.ycombinator.com | 5 Jun 2024
  • Ollama 0.1.42

    2 projects | news.ycombinator.com | 8 Jun 2024