Raglite Alternatives
Similar projects and alternatives to raglite
- rag-postgres-openai-python: A RAG app to ask questions about rows in a database table. Deployable on Azure Container Apps with PostgreSQL Flexible Server.
- simple-pgvector-python: An abstraction with an API similar to Pinecone's, implemented in Python on top of pgvector.
- rag-with-amazon-postgresql-using-pgvector-and-sagemaker: A question-answering application built with Large Language Models (LLMs) and Amazon PostgreSQL using pgvector.
- ragflow: RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
- txtai: 💡 All-in-one open-source AI framework for semantic search, LLM orchestration and language model workflows
raglite discussion
raglite reviews and mentions
- Show HN: RAGLite – A Python package for the unhobbling of RAG
- 32k context length text embedding models
The name ‘late chunking’ is indeed something of a misnomer, in the sense that the technique does not partition documents into chunks. What it actually does is pool token embeddings (computed over a large context) into, say, sentence embeddings. The result is that your document is now represented as a sequence of sentence embeddings, each of which is informed by the other sentences in the document.
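A minimal sketch of that pooling step, assuming the whole document has already been encoded in a single pass by a long-context model (the token embeddings and sentence spans below are illustrative placeholders, not RAGLite's actual API):

```python
import numpy as np

# Hypothetical contextualized token embeddings for one document, produced
# by a single forward pass of a long-context embedding model: 12 tokens,
# 4 dimensions, each token already "aware" of the full document.
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(12, 4))

# Token spans of the document's sentences (assumed to come from a
# sentence splitter run on the same tokenization).
sentence_spans = [(0, 5), (5, 9), (9, 12)]

def late_chunk(token_embeddings, spans):
    """Pool contextualized token embeddings into one vector per sentence."""
    return np.stack([token_embeddings[a:b].mean(axis=0) for a, b in spans])

sentence_embeddings = late_chunk(token_embeddings, sentence_spans)
print(sentence_embeddings.shape)  # → (3, 4): one vector per sentence
```

The key point is the order of operations: the model attends over the full document first, and pooling happens afterwards, so each sentence embedding inherits document-level context for free.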
Then, you want to partition the document into chunks. Late chunking pairs really well with semantic chunking, because semantic chunking can use late chunking's improved sentence embeddings to find semantically more cohesive chunks. In fact, you can cast this as a binary integer programming problem and find the ‘best’ chunks this way. See RAGLite [1] for an implementation of both techniques, including the formulation of semantic chunking as an optimization problem.
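To make the idea concrete, here is a toy stand-in for that optimization: a small dynamic program that partitions sentences into chunks so as to maximize total intra-chunk cosine similarity, subject to a maximum chunk length. This is an illustrative simplification of the binary-integer-programming formulation mentioned above, not RAGLite's actual solver:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def best_chunks(sent_embs, max_len=3):
    """Partition sentences [0, n) into contiguous chunks of at most
    max_len sentences, maximizing the summed pairwise cosine similarity
    within each chunk (exhaustive DP, illustrative only)."""
    n = len(sent_embs)

    def chunk_score(i, j):
        # cohesion of chunk [i, j): sum of pairwise similarities
        return sum(cosine(sent_embs[a], sent_embs[b])
                   for a in range(i, j) for b in range(a + 1, j))

    best = [(-np.inf, None)] * (n + 1)
    best[0] = (0.0, None)
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            cand = best[i][0] + chunk_score(i, j)
            if cand > best[j][0]:
                best[j] = (cand, i)

    # backtrack the optimal chunk boundaries
    chunks, j = [], n
    while j > 0:
        i = best[j][1]
        chunks.append((i, j))
        j = i
    return chunks[::-1]

# Toy example: two similar sentences followed by two different but
# mutually similar ones; the optimum groups them accordingly.
embs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
print(best_chunks(embs, max_len=2))  # → [(0, 2), (2, 4)]
```

Because the sentence embeddings come from late chunking, the cohesion scores reflect document-level context rather than isolated sentence meaning, which is exactly why the two techniques compose so well.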
Finally, you have a sequence of document chunks, each represented as a multi-vector sequence of sentence embeddings. You could choose to pool these sentence embeddings into a single embedding vector per chunk. Or, you could leave the multi-vector chunk embeddings as-is and apply a more advanced querying technique like ColBERT's MaxSim [2].
[1] https://github.com/superlinear-ai/raglite
[2] https://huggingface.co/blog/fsommers/document-similarity-col...
Stats
superlinear-ai/raglite is an open-source project licensed under the Mozilla Public License 2.0, an OSI-approved license.
The primary programming language of raglite is Python.