| | llm-api-starterkit | tonic_validate |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 86 | 207 |
| Growth | - | 21.7% |
| Activity | 5.2 | 9.5 |
| Latest commit | 10 months ago | 6 days ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
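The site doesn't publish its exact formula, but the idea of weighting recent commits more heavily can be sketched with a toy exponential-decay score (the half-life and weighting here are illustrative assumptions, not the site's actual method):

```python
from math import exp, log

def activity_score(commit_ages_days, half_life_days=90.0):
    """Toy recency-weighted activity score: each commit contributes a
    weight that decays exponentially with its age in days, so recent
    commits count more than older ones."""
    decay = log(2) / half_life_days  # weight halves every half_life_days
    return sum(exp(-decay * age) for age in commit_ages_days)

# A project with mostly recent commits scores higher than one with the
# same number of commits made long ago.
recent = activity_score([1, 3, 7, 14, 30])
stale = activity_score([200, 240, 300, 330, 360])
print(recent > stale)  # → True
```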
Posts mentioning tonic_validate:

- Validating the RAG Performance of Amazon Titan vs. Cohere Using Amazon Bedrock
I tried out Amazon Bedrock and used Tonic Validate to do a head-to-head comparison of very simple RAG systems built with the embedding and text models available in Amazon Bedrock. I compared Amazon Titan's embedding and text models to Cohere's in RAG systems that use Amazon Bedrock Knowledge Bases as the vector database and retrieval components.
The code for the comparison is in this Jupyter notebook: https://github.com/TonicAI/tonic_validate/blob/main/examples...
Let me know what you think, and share your experiences building RAG with Amazon Bedrock!
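The head-to-head setup described in the post can be sketched generically: two RAG pipelines answer the same question set, each answer is scored against a reference, and mean scores are compared. The pipeline callables and the token-overlap scorer below are placeholders for illustration, not the Bedrock or Tonic Validate APIs:

```python
from statistics import mean
from typing import Callable

def compare_rag_systems(
    questions: list[str],
    references: list[str],
    pipelines: dict[str, Callable[[str], str]],
    score: Callable[[str, str], float],
) -> dict[str, float]:
    """Run each pipeline over the same benchmark and return its mean score."""
    results = {}
    for name, answer_fn in pipelines.items():
        scores = [score(answer_fn(q), ref) for q, ref in zip(questions, references)]
        results[name] = mean(scores)
    return results

# Trivial scorer for illustration: Jaccard overlap of answer and reference tokens.
def overlap_score(answer: str, reference: str) -> float:
    a, r = set(answer.lower().split()), set(reference.lower().split())
    return len(a & r) / len(a | r) if a | r else 1.0

# Hypothetical stand-ins for RAG systems built on two model families.
pipelines = {
    "titan": lambda q: "answer from a Titan-backed RAG system",
    "cohere": lambda q: "answer from a Cohere-backed RAG system",
}
results = compare_rag_systems(
    ["What is RAG?"], ["Retrieval Augmented Generation"], pipelines, overlap_score
)
print(results)
```

In the actual notebook, each pipeline call would hit a Bedrock Knowledge Base and the scorer would be a Tonic Validate metric rather than token overlap.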
- Tonic.ai and LlamaIndex join forces to help developers build RAG systems
Tonic's RAG evaluation platform is Tonic Validate, which has open source RAG metrics https://github.com/TonicAI/tonic_validate, and a web app for tracking and monitoring RAG performance https://www.tonic.ai/validate.
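The intent behind one of those metrics, retrieval precision (what fraction of retrieved chunks are actually relevant to the question), can be illustrated with a plain keyword heuristic. The library itself uses LLM-assisted judging; this toy overlap check is not its implementation:

```python
def retrieval_precision(retrieved_chunks: list[str], question: str) -> float:
    """Toy retrieval-precision metric: a chunk counts as relevant if it
    shares at least one keyword (length > 3) with the question."""
    keywords = {w for w in question.lower().split() if len(w) > 3}
    if not retrieved_chunks:
        return 0.0
    relevant = sum(
        1 for chunk in retrieved_chunks
        if keywords & set(chunk.lower().split())
    )
    return relevant / len(retrieved_chunks)

chunks = [
    "Amazon Bedrock offers embedding models from several providers.",
    "Our cafeteria menu changes every Tuesday.",
]
# One of the two chunks shares keywords with the question.
print(retrieval_precision(chunks, "Which embedding models does Bedrock offer?"))  # → 0.5
```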
- Evaluating RAG Parameters Using Tvalmetrics
- Show HN: Tonic Validate Logging – an open-sourced SDK and convenient UI
Hey HN, Joe and Ethan from Tonic.ai here again. Alongside last week’s announcement of Tonic Validate Metrics (https://news.ycombinator.com/item?id=38012126), we’ve also released an open-source SDK for logging the performance of Retrieval Augmented Generation (RAG) applications during development, Tonic Validate Logging. Tonic Validate Logging is used to log your RAG responses to the Tonic Validate App. When RAG responses are logged, metrics are calculated on the responses using Tonic Validate Metrics.
We were working on a RAG-powered app to enable companies to talk to their free-text data safely when we ran into trouble tracking the performance of our models’ responses. So we built these solutions to help us out: Tonic Validate Metrics for benchmarking, and Tonic Validate Logging + the Tonic Validate UI to track performance improvements to help us choose the best system possible. Tonic Validate provides a simple and convenient UI that you can get for free at https://validate.tonic.ai/.
Two key benefits of using the Tonic Validate tools are (1) automatic logging and metrics calculation with just a few lines of code and (2) a simple, convenient UI to help visualize your experiments, iterations, and benchmarking results for your RAG applications.
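The log-then-score flow described above can be sketched generically: each logged RAG response is immediately run through every registered metric. The class and method names here are illustrative stand-ins, not the actual SDK's API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class RagResponseLogger:
    """Illustrative logger mimicking the log-then-calculate flow:
    every logged response is scored by each registered metric
    (in the real product, records would be uploaded to the UI)."""
    metrics: dict[str, Callable[[str, str], float]]
    records: list[dict] = field(default_factory=list)

    def log(self, question: str, answer: str, reference: str) -> dict:
        record = {
            "question": question,
            "answer": answer,
            "scores": {name: fn(answer, reference) for name, fn in self.metrics.items()},
        }
        self.records.append(record)
        return record

# Hypothetical metric: exact string match between answer and reference.
exact_match = lambda ans, ref: 1.0 if ans.strip() == ref.strip() else 0.0
logger = RagResponseLogger(metrics={"exact_match": exact_match})
rec = logger.log("What is the capital of France?", "Paris", "Paris")
print(rec["scores"])  # → {'exact_match': 1.0}
```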
Our hope is that these packages will become a useful part of the tooling layer behind the growing suite of LLM-powered applications and, more importantly, that they evolve and thrive with your contributions.
We’re excited to hear what you all think in the comments!
Read our docs here: https://docs.tonic.ai/validate/
Get the open-source Tonic Validate Metrics package at: https://github.com/TonicAI/tvalmetrics
Get the open-source Tonic Validate Logging SDK at: https://github.com/TonicAI/tvallogging
Sign up for the Tonic Validate UI here: https://validate.tonic.ai/
- Show HN: Tonic Validate Metrics – an open-source RAG evaluation metrics package
What are some alternatives?
Promptify - Prompt Engineering | Prompt Versioning | Use GPT or other prompt-based models to get structured output
llm-guard - The Security Toolkit for LLM Interactions
langcorn - ⛓️ Serving LangChain LLM apps and agents automagically with FastApi. LLMops
obsidian-copilot - 🤖 A prototype assistant for writing and thinking
lanarky - The web framework for building LLM microservices
canopy - Retrieval Augmented Generation (RAG) framework and context engine powered by Pinecone
deeplake - Database for AI. Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, & visualize any AI data. Stream data in real-time to PyTorch/TensorFlow. https://activeloop.ai
spelltest - AI-to-AI Testing | Simulation framework for LLM-based applications
aegis - Self-hardening firewall for large language models
LLMSurvey - The official GitHub page for the survey paper "A Survey of Large Language Models".
odin-slides - An advanced Python tool that lets you effortlessly draft customizable PowerPoint slides using the Generative Pre-trained Transformer (GPT) of your choice. Leveraging the capabilities of Large Language Models (LLMs), odin-slides turns even the lengthiest Word documents into well-organized presentations.