| | llmware | vectorflow |
|---|---|---|
| Mentions | 9 | 9 |
| Stars | 3,173 | 637 |
| Growth | 6.7% | - |
| Activity | 9.8 | 8.2 |
| Latest Commit | 6 days ago | 8 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llmware
- More Agents Is All You Need: LLMs performance scales with the number of agents
I couldn't agree more. You should check out LLMWare's SLIM agents (https://github.com/llmware-ai/llmware/tree/main/examples/SLI...). It focuses on pretty much exactly this: chaining multiple local LLMs together.
A really good topic that ties in with this is the need for deterministic sampling (I may have the terminology a bit off) depending on what the model is intended for. The LLMWare team did a good two-part video on this here as well (https://www.youtube.com/watch?v=7oMTGhSKuNY).
I think dedicated miniature LLMs are the way forward.
Disclaimer - Not affiliated with them in any way, just think it's a really cool project.
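For a flavor of what that chaining looks like, here is a minimal sketch assuming the LLMfx agent interface shown in llmware's SLIM examples (the tool names and exact API surface are taken on trust from those examples and may have changed):

```python
# Sketch: chaining several small, specialized SLIM models over one passage.
# Assumes `pip install llmware`; each tool below is a separate ~1B local model.
from llmware.agents import LLMfx

text = ("The quarterly report showed revenue of $12M, up 8% year-over-year, "
        "though management warned of supply-chain risks in Asia.")

agent = LLMfx()        # orchestrates multiple local SLIM models
agent.load_work(text)  # the text every tool in the chain will analyze

agent.load_tool("sentiment")
agent.load_tool("topics")
agent.load_tool("ner")

# Each call runs a different specialized model and returns structured output,
# e.g. {"sentiment": ["positive"]}, which is easy to feed into multi-step logic.
print(agent.sentiment())
print(agent.topics())
print(agent.ner())
```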
- FLaNK Stack Weekly 19 Feb 2024
- Show HN: LLMWare – Small Specialized Function Calling 1B LLMs for Multi-Step RAG
I've been building upon the LLMWare project - https://github.com/llmware-ai/llmware - for the past 3 months. The ability to run these models locally on standard consumer CPUs, along with the abstraction provided to chop and change between models and different processes, is really cool.
I think these SLIM models are the start of something powerful for automating internal business processes and enhancing the use case of LLMs. Still kinda blows my mind that this is all running on my 3900X and also runs on a bog standard Hetzner server with no GPU.
- Show HN: LLMWare – Integrated Solution for RAG in Finance and Legal
- Llmware.ai – AI Tools for Financial, Legal and Compliance
- Open Source Advent Fun Wraps Up!
16. LLMWare by Ai Bloks | Github | tutorial
- FLaNK Stack Weekly 16 October 2023
- Strategy for PDF data extraction and Display
vectorflow
- FLaNK Weekly 08 Jan 2024
- Open Source Advent Fun Wraps Up!
19. VectorFlow | Github | tutorial
- Experimenting with LLM-Based Chunk Enhancement for Better RAG Results
Hey HN! While working on VectorFlow, an open source platform for building RAG data ingestion pipelines (repo: https://github.com/dgarnitz/vectorflow), I interviewed many people who told me they had no idea how to chunk their data. When debugging their RAG system, they found that the top-k results often did not include the relevant chunks. To solve this, we created a tool that can enhance the quality of a chunk by extracting relevant contextual information from the whole document based on a use case specified by the user, then selectively adding relevant portions of the extracted information to each chunk. This is only a proof-of-concept, but we found that it still gives us better results on our internal RAG system.
*The Problem:*
Our users tend to have many large documents, so we typically recommend either paragraph-based chunking or token-length chunking with 512-token chunks, because sentence-level chunking spreads the information too thin and key pieces get missed in retrieval. But even with these larger chunks, the embedding similarity search can miss the correct ones because they don’t contain the right wording. For example, the HyDE research paper does not contain the phrase “top-k similarity search”, so if you are performing a RAG search asking about “the latest techniques in top-k similarity search” over a collection of academic papers related to RAG, its chunks likely won’t show up.
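For concreteness, token-length chunking of that kind is roughly the following (a minimal sketch using whitespace tokens; a real pipeline would count tokens with the embedding model's own tokenizer):

```python
def chunk_by_tokens(text, chunk_size=512, overlap=64):
    """Split text into fixed-size token windows with overlap.

    Whitespace splitting is a stand-in for a real tokenizer here;
    the overlap keeps sentences that straddle a boundary retrievable
    from at least one chunk.
    """
    tokens = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break
    return chunks
```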
*Our Solution:*
To solve this problem, we used GPT-4 to extract keywords, entities, labels, and themes from the whole document, and for each chunk we append the five most relevant extracted items to the end of it. For a long document, however, a naive extraction pulls out too much information for the model to decide effectively what belongs with each chunk. We found that passing in a use case for the search system, generating five potential questions based on that use case, and using those questions to guide the extraction yielded more relevant results. We also add a document-summary chunk to the end of every list of chunks to help with high-level questions.
Using our Chunk Enhancer, we can have GPT-4 add a phrase like “top-k similarity search” to the end of the relevant chunks from the HyDE research paper so that they get picked up during a search.
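A condensed sketch of that flow, assuming the openai Python client and GPT-4 (the prompts and the ask/enhance_chunks helpers are illustrative stand-ins, not VectorFlow's actual implementation):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt):
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def enhance_chunks(document, chunks, use_case):
    # 1. Turn the user's use case into five guiding questions.
    questions = ask(
        f"Generate five questions a user with this use case would ask: {use_case}"
    )
    # 2. Extract keywords, entities, labels, and themes, guided by the questions.
    extracted = ask(
        "Extract keywords, entities, labels, and themes from the document "
        f"that help answer these questions.\nQuestions: {questions}\n"
        f"Document: {document}"
    )
    # 3. For each chunk, append the five most relevant extracted items.
    enhanced = []
    for chunk in chunks:
        additions = ask(
            "Choose the top 5 best matches for this chunk from the items "
            f"below. Return them comma-separated.\nItems: {extracted}\n"
            f"Chunk: {chunk}"
        )
        enhanced.append(f"{chunk}\n[context: {additions}]")
    # 4. Add a document-summary chunk to help with high-level questions.
    enhanced.append(ask(f"Summarize this document:\n{document}"))
    return enhanced
```

Note how each numbered step is a separate, narrowly scoped model call rather than one overloaded prompt, which matches the prompting approach described under the challenges below.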
*The Challenges We Faced:*
Building a Chunk Enhancer is a harder problem than we originally anticipated. Just to build a proof of concept, we had to overcome several issues.
Figuring out the right prompting techniques for this specific task was by far the hardest part. The prompt should avoid asking for multiple distinct lines of reasoning; if the prompt is too complicated, even more advanced techniques like Chain of Thought and Tree of Thought do not help. We found that breaking things up into multiple model calls and giving very explicit instructions (e.g., “choose the top 5 best matches”) was most effective. The feedback loop from prompting is different from conventional programming: you are relying a lot more on gut feel than on directly actionable feedback.
Another major issue was the inconsistency of the results - we don’t get the desired outcome often enough to use this in production yet. We know prompting techniques like self-consistency can help resolve this, but it’s expensive.
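Self-consistency in this setting would mean sampling the same extraction several times at non-zero temperature and keeping only the items the samples agree on - a minimal sketch, where sample_fn is a hypothetical wrapper around one extraction call:

```python
from collections import Counter

def self_consistent_items(sample_fn, n=5, min_votes=3):
    """Run the same extraction n times and keep items that appear in at
    least min_votes samples (majority voting across samples).

    sample_fn: hypothetical callable returning a list of extracted items,
    sampled with temperature > 0 so repeated calls differ.
    """
    votes = Counter()
    for _ in range(n):
        votes.update(set(sample_fn()))  # one vote per item per sample
    return [item for item, count in votes.items() if count >= min_votes]
```

The cost concern is visible directly: every extraction now takes n model calls instead of one.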
To limit costs, we originally tried to use an open source LLM, but they are slow without GPUs and the smaller ones don’t have large enough context windows. GPT-3.5 Turbo did not work well either. Other issues we ran into were high latency, the 32K context window being too small for larger documents, and the degradation in performance as you approach the context window limit.
*How You Can Help:*
We would love to hear feedback from the community to see if a chunk enhancer is helpful to them and how to solve some of the technical problems we encountered.
To try out the chunk enhancer, check out this colab: https://colab.research.google.com/drive/1ZagHQ23ENSt0tkD1XuC...
- FLaNK Stack Weekly for 27 November 2023
- FLaNK Stack Weekly 16 October 2023
- Multi-Modal Vector Embeddings at Scale
Check out our Open Source repo - https://github.com/dgarnitz/vectorflow
- Good RAG implementation
Hey, you should try out VectorFlow - https://github.com/dgarnitz/vectorflow - it’s the only open source high-volume vector embedding pipeline out there. You can embed a few thousand files in minutes if you scale up the service. We also have a Discord and can help you get set up. Our product is fully compatible with LlamaIndex, which we recommend people use for search.
- Challenges with Image Embeddings at Scale
I have built an open source vector embedding pipeline, VectorFlow (https://github.com/dgarnitz/vectorflow), that supports image embeddings both for ingestion into a vector database and for similarity searches.
- Improving the performance of RAG over 10m+ documents
We are building VectorFlow, an open-source vector embedding pipeline, and want to know what other features we should build next after adding open-source Sentence Transformer embedding models. Check out our GitHub repo (https://github.com/dgarnitz/vectorflow) to install VectorFlow locally, or try it out in the playground (https://app.getvectorflow.com/).
What are some alternatives?
llm-client-sdk - SDK for using LLM
karapace - Karapace - Your Apache Kafka® essentials in one tool
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
retake - PostgreSQL for Search [Moved to: https://github.com/paradedb/paradedb]
inference - A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
kafka-manager - CMAK is a tool for managing Apache Kafka clusters
openstatus - 🏓 The open-source synthetic & real user monitoring platform 🏓
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
megabots - 🤖 State-of-the-art, production ready LLM apps made mega-easy, so you don't have to build them from scratch 🤯 Create a bot, now 🫵
Wails - Create beautiful applications using Go
SimplyRetrieve - Lightweight chat AI platform featuring custom knowledge, open-source LLMs, prompt-engineering, retrieval analysis. Highly customizable. For Retrieval-Centric & Retrieval-Augmented Generation.
chatgpt-comparison-detection - Human ChatGPT Comparison Corpus (HC3), Detectors, and more! 🔥