Experimenting with LLM-Based Chunk Enhancement for Better RAG Results

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • vectorflow

    VectorFlow is a high-volume vector embedding pipeline that ingests raw data, transforms it into vectors, and writes it to a vector DB of your choice. (by dgarnitz)

  • Hey HN! While working on VectorFlow, an open source platform for building RAG data ingestion pipelines (repo: https://github.com/dgarnitz/vectorflow), I interviewed many people who told me they had no idea how to chunk their data. When debugging their RAG systems, they found that the top-k results often did not include the relevant chunks. To solve this, we built a tool that enhances the quality of a chunk by extracting relevant contextual information from the whole document, guided by a use case specified by the user, and then selectively appending portions of that extracted information to each chunk. It is only a proof of concept, but it already gives us better results on our internal RAG system.
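    A minimal sketch of the core idea (the chunk text and extracted items below are made up for illustration): append document-level context to a chunk so that embedding search can match queries whose vocabulary never appears in the raw text.

    ```python
    # Illustrative only: the chunk text and extracted items are invented.
    chunk = (
        "We propose generating a hypothetical document for the query, "
        "embedding it, and retrieving real documents near it."
    )
    extracted_context = ["top-k similarity search", "dense retrieval", "zero-shot"]

    # The enhanced chunk, not the raw one, is what gets embedded and stored.
    enhanced_chunk = chunk + "\nContext: " + ", ".join(extracted_context)
    print(enhanced_chunk)
    ```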

    *The Problem:*

    Our users tend to have many large documents, so we typically recommend either paragraph-based chunking or fixed-length chunking of 512 tokens, because sentence-level chunking spreads the information too thin and key pieces get missed in retrieval. But even with these larger chunks, the embedding similarity search can miss the correct chunks when they don’t contain the query’s wording. For example, the HyDE research paper never uses the phrase “top-k similarity search”, so if you run a RAG search asking about “the latest techniques in top-k similarity search” over a collection of academic papers related to RAG, the paper likely won’t show up.
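    For reference, here is a sketch of the 512-token chunking strategy mentioned above, using tiktoken for tokenization; the overlap parameter is our own assumption, not part of the original recommendation.

    ```python
    import tiktoken

    def chunk_by_tokens(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
        """Split text into fixed token-length chunks with a small overlap."""
        enc = tiktoken.get_encoding("cl100k_base")
        tokens = enc.encode(text)
        chunks = []
        for start in range(0, len(tokens), chunk_size - overlap):
            window = tokens[start:start + chunk_size]
            chunks.append(enc.decode(window))
            if start + chunk_size >= len(tokens):
                break
        return chunks
    ```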

    *Our Solution:*

    To solve this problem, we used GPT-4 to extract keywords, entities, labels, and themes from the whole document, and for each chunk we then append the five most relevant items to the end of it. With a long document, however, this extracts too much information for the model to decide effectively what belongs with each chunk. We found that passing in a use case for the search system, generating five potential questions from that use case, and using those questions to guide the extraction yielded more relevant results. We also append a document-summary chunk to the end of every list of chunks to help with high-level questions.
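    A sketch of that flow, assuming the OpenAI chat API; the exact prompts and helper names here are ours, not the ones used in the proof of concept:

    ```python
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def enhance(document: str, chunks: list[str], use_case: str) -> list[str]:
        # Generate five potential questions from the use case to focus extraction.
        questions = ask(
            f"Given this search use case: {use_case}\n"
            "Write five questions a user of the system might ask."
        )
        # Extract keywords, entities, labels, and themes relevant to those questions.
        items = ask(
            f"Questions:\n{questions}\n\nDocument:\n{document}\n\n"
            "List the keywords, entities, labels, and themes most relevant to "
            "answering these questions, one per line."
        )
        enhanced = []
        for chunk in chunks:
            # One narrow call per chunk: pick the five most relevant items.
            top_five = ask(
                f"Items:\n{items}\n\nChunk:\n{chunk}\n\n"
                "Choose the five items most relevant to this chunk, one per line."
            )
            enhanced.append(chunk + "\n" + top_five)
        # Append a document summary as its own chunk for high-level questions.
        enhanced.append(ask(f"Summarize this document:\n{document}"))
        return enhanced
    ```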

    Using our Chunk Enhancer, we can have GPT-4 add a phrase like “top-k similarity search” to the end of the relevant chunks from the HyDE research paper so that they get picked up during a search.

    *The Challenges We Faced:*

    Building a Chunk Enhancer is a harder problem than we originally anticipated. Just to build a proof of concept, we had to overcome several issues.

    Figuring out the right prompting techniques for this specific task was by far the hardest part. A prompt should avoid asking for multiple distinct lines of reasoning at once; if the prompt is too complicated, even advanced techniques like Chain of Thought and Tree of Thought do not help. We found that breaking the work into multiple model calls and giving very explicit instructions (e.g., “choose the top 5 best matches”) was most effective. The feedback loop in prompting is also different from conventional programming: you rely far more on gut feel than on directly actionable feedback.
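    To illustrate the contrast (these prompts are simplified examples, not our exact production prompts):

    ```python
    # One overloaded prompt -- several distinct reasoning tasks at once.
    # In our experience this degrades output quality, even with CoT/ToT.
    overloaded_prompt = (
        "Extract keywords, entities, labels, and themes from the document, "
        "rank them, decide which apply to each chunk, and rewrite every chunk."
    )

    # One explicit task per call, with a hard constraint on the output format.
    focused_prompt = (
        "Below is a list of extracted items and a single chunk. "
        "Choose the top 5 best matches for this chunk. "
        "Return exactly 5 items, one per line, and nothing else."
    )
    ```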

    Another major issue was the inconsistency of the results: we don’t get the desired outcome often enough to use this in production yet. We know prompting techniques like self-consistency can help resolve this, but it’s expensive.
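    A sketch of how self-consistency could apply here (this is our assumption about how we would wire it in, not something we ship): sample the same extraction prompt several times and keep the items a majority of samples agree on; each extra sample multiplies the API cost.

    ```python
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()

    def self_consistent_items(prompt: str, samples: int = 5, keep: int = 5) -> list[str]:
        votes = Counter()
        for _ in range(samples):
            resp = client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}],
                temperature=0.8,  # encourage diverse samples to vote across
            )
            for line in resp.choices[0].message.content.splitlines():
                if line.strip():
                    votes[line.strip()] += 1
        # Keep the items that the most samples agreed on.
        return [item for item, _ in votes.most_common(keep)]
    ```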

    To limit costs, we originally tried open source LLMs, but they are slow without GPUs and the smaller ones don’t have large enough context windows. GPT-3.5 Turbo did not work well either. Other issues we ran into were high latency, the 32K context window being too small for larger documents, and the degradation in output quality as you approach the context window limit.
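    One simple guard we can sketch (the reserved-token figure is an assumption) is checking a document’s token count against the window before sending it:

    ```python
    import tiktoken

    MAX_CONTEXT = 32_768   # gpt-4-32k context window
    RESERVED = 2_048       # headroom for instructions and the reply (assumption)

    def fits_in_context(document: str) -> bool:
        enc = tiktoken.get_encoding("cl100k_base")
        return len(enc.encode(document)) <= MAX_CONTEXT - RESERVED
    ```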

    *How You Can Help:*

    We would love feedback from the community: would a chunk enhancer be useful to you, and how would you solve some of the technical problems we encountered?

    To try out the chunk enhancer, check out this colab: https://colab.research.google.com/drive/1ZagHQ23ENSt0tkD1XuC...
