quivr vs trulens

| | quivr | trulens |
|---|---|---|
| Mentions | 22 | 14 |
| Stars | 32,917 | 1,629 |
| Growth | 7.7% | 7.9% |
| Activity | 9.9 | 9.8 |
| Latest commit | 1 day ago | 6 days ago |
| Language | TypeScript | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
quivr
- privateGPT VS quivr - a user suggested alternative
2 projects | 12 Jan 2024
- First 15 Open Source Advent projects
3. Quivr | GitHub | tutorial
- What's the catch with codecanyon?
- Went down the rabbit hole of 100% local RAG, it works but are there better options?
I used Ollama (with Mistral 7B) and Quivr to get a local RAG up and running, and it works fine, but I was surprised to find there are no easy, user-friendly ways to do it. Most other local LLM UIs don't implement this use case (I looked here), even though it is one of the most useful local LLM use cases I can think of: searching and summarizing information from sensitive or confidential documents.
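The retrieval half of a local RAG setup like the one described above is conceptually small: embed the document chunks, embed the query, return the nearest chunks, and hand them to the local model as context. Here is a toy sketch that uses bag-of-words cosine similarity as a stand-in for a real embedding model (an actual setup would get embeddings from a local model instead; the documents and function names here are made up for illustration):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Stand-in for a real embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "invoice for office supplies from march",
    "meeting notes about the quarterly budget",
    "draft contract with the new vendor",
]
print(retrieve("what was in the march invoice", docs))
# The retrieved chunks would then be prepended to the prompt sent to the
# local model (e.g. Mistral 7B running under Ollama).
```

The "no data leaves your device" property comes entirely from keeping both the embedding step and the generation step on local models; the retrieval logic itself is the same as in any hosted RAG stack.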
- FLaNK Stack Weekly for 21 August 2023
- Discord Is Not Documentation
In my opinion, LLM-based document search tools such as the open-source Quivr may be better suited for documentation search at startups.
A highly customized Quivr paired with one of the open-source LLMs may provide great semantic search over product documentation.
https://github.com/StanGirard/quivr
- Quivr
- I built an open source website that lets you upload large files such as academic PDFs or books and ask ChatGPT questions based on your custom knowledge base. So far, I've tried it with long ebooks like Plato's Republic, old letters, and random academic PDFs, and it works shockingly well.
Hey, thanks for creating this; I'll try it later if I have time. Meanwhile, have you tried any of the other second-brain apps, such as this one, and how do they compare? The one I mentioned was trending on GitHub, so I think it's decent (I've also been playing with it since last week or so). I've already starred your repo so I can come back later.
- Quivr – Your Second Brain, Empowered by Generative AI
- Quivr: Chatting with your own docs
trulens
- Why Vector Compression Matters
Retrieval using a single vector is called dense passage retrieval (DPR) because an entire passage (dozens to hundreds of tokens) is encoded as a single vector. ColBERT instead encodes one vector per token, where each vector is influenced by the surrounding context. This leads to meaningfully better results; for example, here's ColBERT running on Astra DB compared with DPR using openai-v3-small vectors, evaluated with TruLens on the Braintrust Coda Help Desk data set. ColBERT easily beats DPR on correctness, context relevance, and groundedness.
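The scoring difference between the two approaches can be sketched with toy vectors: DPR pools each text into one vector and takes a single dot product, while ColBERT's late interaction keeps a vector per token and, for each query token, sums its maximum similarity over the passage tokens (the MaxSim operator). A minimal sketch with made-up 2-d "embeddings" (not outputs of any real encoder):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dpr_score(query_vec, passage_vec):
    """DPR: each text is pooled to a single vector; score is one dot product."""
    return dot(query_vec, passage_vec)

def colbert_score(query_toks, passage_toks):
    """ColBERT late interaction (MaxSim): for each query token vector,
    take its max similarity over all passage token vectors, then sum."""
    return sum(max(dot(q, p) for p in passage_toks) for q in query_toks)

def mean_pool(toks):
    """Crude pooling of token vectors into one vector, for the DPR side."""
    return [sum(col) / len(toks) for col in zip(*toks)]

# Toy token embeddings: 2 query tokens, 3 passage tokens, 2 dims each.
query_toks = [[1.0, 0.0], [0.0, 1.0]]
passage_toks = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]

print(dpr_score(mean_pool(query_toks), mean_pool(passage_toks)))  # ~0.5
print(colbert_score(query_toks, passage_toks))                    # ~1.7
```

The point of the per-token formulation is that each query token gets matched against its best-aligned passage token, instead of everything being averaged away before scoring; the price is storing many vectors per passage, which is why the surrounding article is about vector compression.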
- FLaNK AI Weekly 18 March 2024
- First 15 Open Source Advent projects
12. TruLens by TruEra | GitHub | tutorial
- trulens VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- How are generative AI companies monitoring their systems in production?
3) Hallucination is probably the biggest problem we solve for. To do evals for hallucination, we typically see our users use a combination of groundedness (does the context support the LLM response?) and context relevance (is the retrieved context relevant to the query?). There are also a bunch more evals like the ones you mentioned (moderation models, sentiment, usefulness, etc.), and it's pretty easy to add custom evals.
Also, my hot take is that gpt-3.5 is good enough for evals, and sometimes better than gpt-4, if you give the LLM enough instructions on how to do the eval.
website: https://www.trulens.org/
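The two checks described above boil down to LLM-as-judge prompts: the query, retrieved context, and response are templated into a grading prompt that a judge model scores. The sketch below illustrates that pattern only; it is not TruLens's actual API, and the function names and prompt wording are made up:

```python
def context_relevance_prompt(query: str, context: str) -> str:
    """Ask a judge LLM whether the retrieved context is relevant to the query."""
    return (
        "Rate from 0 to 10 how relevant the CONTEXT is to the QUERY. "
        "Answer with only the number.\n"
        f"QUERY: {query}\n"
        f"CONTEXT: {context}"
    )

def groundedness_prompt(context: str, response: str) -> str:
    """Ask a judge LLM whether each claim in the response is supported by the context."""
    return (
        "Rate from 0 to 10 how well each claim in the RESPONSE is supported "
        "by the CONTEXT. Answer with only the number.\n"
        f"CONTEXT: {context}\n"
        f"RESPONSE: {response}"
    )

prompt = groundedness_prompt(
    "Quivr keeps your documents in a vector database.",
    "Quivr stores files in a vector DB.",
)
# `prompt` would then be sent to a judge model (e.g. gpt-3.5, per the hot
# take above) and the returned number logged as the groundedness score.
```

The quality of such evals depends heavily on how detailed the grading instructions are, which is exactly the point made above about gpt-3.5 being sufficient when given enough instruction.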
- FLaNK Stack Weekly 28 August 2023
- [P] TruLens-Eval is an open source project for eval & tracking LLM experiments.
The team at TruEra recently released an open source project for evaluation and tracking of LLM applications called TruLens-Eval. We've specifically targeted retrieval-augmented QA as a core use case, and so far we've seen it used for comparing different models and parameters, prompts, vector-db configurations, and query planning strategies. I'd love to get your feedback on it.
- [D] Hardest thing about building with LLMs?
- Stop Evaluating LLMs on Vibes
- OSS library for attribution and interpretation methods for deep nets
What are some alternatives?
localGPT - Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
langfuse - 🪢 Open source LLM engineering platform: Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
chart-gpt - AI tool to build charts based on text input
shapash - 🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
Flowise - Drag & drop UI to build your customized LLM flow
probability - Probabilistic reasoning and statistical analysis in TensorFlow
databerry - The no-code platform for building custom LLM Agents
LIME - Tutorial notebooks on explainable Machine Learning with LIME (Original work: https://arxiv.org/abs/1602.04938)
xTuring - Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
embedchain - Personalizing LLM Responses
vault-ai - OP Vault ChatGPT: Give ChatGPT long-term memory using the OP Stack (OpenAI + Pinecone Vector Database). Upload your own custom knowledge base files (PDF, txt, epub, etc) using a simple React frontend.
machine_learning_basics - Plain python implementations of basic machine learning algorithms