| | tonic_validate | odin-slides |
|---|---|---|
| Mentions | 6 | 4 |
| Stars | 216 | 97 |
| Growth | 7.9% | - |
| Activity | 9.5 | 7.8 |
| Last commit | 6 days ago | 3 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
tonic_validate
-
Validating the RAG Performance of Amazon Titan vs. Cohere Using Amazon Bedrock
I tried out Amazon Bedrock and used Tonic Validate to do a head-to-head comparison of very simple RAG systems built using the embedding and text models available in Amazon Bedrock. I compared Amazon Titan's embedding and text models to Cohere's embedding and text models in RAG systems that use Amazon Bedrock Knowledge Bases as the vector database and retrieval components of the system.
The code for the comparison is in this Jupyter notebook: https://github.com/TonicAI/tonic_validate/blob/main/examples...
Let me know what you think, and share your experiences building RAG with Amazon Bedrock!
-
Tonic.ai and LlamaIndex join forces to help developers build RAG systems
Tonic's RAG evaluation platform is Tonic Validate, which has open source RAG metrics https://github.com/TonicAI/tonic_validate, and a web app for tracking and monitoring RAG performance https://www.tonic.ai/validate.
- Evaluating RAG Parameters Using tvalmetrics
-
Show HN: Tonic Validate Logging – an open-sourced SDK and convenient UI
Hey HN, Joe and Ethan from Tonic.ai here again. Alongside last week’s announcement of Tonic Validate Metrics (https://news.ycombinator.com/item?id=38012126), we’ve also released an open-source SDK for logging the performance of Retrieval Augmented Generation (RAG) applications during development, Tonic Validate Logging. Tonic Validate Logging is used to log your RAG responses to the Tonic Validate App. When RAG responses are logged, metrics are calculated on the responses using Tonic Validate Metrics.
We were working on a RAG-powered app to enable companies to talk to their free-text data safely when we ran into trouble tracking the performance of our models’ responses. So we built these solutions to help us out: Tonic Validate Metrics for benchmarking, and Tonic Validate Logging + the Tonic Validate UI to track performance improvements to help us choose the best system possible. Tonic Validate provides a simple and convenient UI that you can get for free at https://validate.tonic.ai/.
Two key benefits of using the Tonic Validate tools are (1) automatic logging and metrics calculation with just a few lines of code and (2) a simple, convenient UI to help visualize your experiments, iterations, and benchmarking results for your RAG applications.
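The log-then-score loop described above can be sketched in a few lines. Note this is a minimal, self-contained illustration of the idea, not the Tonic Validate SDK's actual API: the metric here is a toy token-overlap F1 standing in for the real Tonic Validate Metrics, and all function and variable names (`token_f1`, `score_rag_run`, `rag_fn`) are hypothetical.

```python
def token_f1(reference: str, candidate: str) -> float:
    """Toy answer-similarity score: token-level F1 between two strings."""
    ref_tokens = reference.lower().split()
    cand_tokens = candidate.lower().split()
    if not ref_tokens or not cand_tokens:
        return 0.0
    overlap = len(set(ref_tokens) & set(cand_tokens))
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def score_rag_run(benchmark, rag_fn):
    """Run each benchmark question through a RAG system and log a score.

    benchmark: list of (question, reference_answer) pairs.
    rag_fn:    callable mapping a question string to a response string.
    """
    results = []
    for question, reference_answer in benchmark:
        response = rag_fn(question)  # your RAG system goes here
        results.append({
            "question": question,
            "response": response,
            "answer_similarity": token_f1(reference_answer, response),
        })
    return results

# Example with a stubbed-out RAG system:
benchmark = [("What does Tonic Validate do?", "It evaluates RAG systems")]
stub_rag = lambda q: "It evaluates RAG systems"
print(score_rag_run(benchmark, stub_rag)[0]["answer_similarity"])  # 1.0
```

In the hosted workflow, the per-response records produced by a loop like this are what get uploaded to the Tonic Validate App, where the metric values are visualized across experiments.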
Our hope is that these packages will become a useful part of the tooling behind the growing suite of LLM-powered applications and, more importantly, that the open-source packages evolve and thrive with your contributions.
We’re excited to hear what you all think in the comments!
Read our docs here: https://docs.tonic.ai/validate/
Get the open-source Tonic Validate Metrics package at: https://github.com/TonicAI/tvalmetrics
Get the open-source Tonic Validate Logging SDK at: https://github.com/TonicAI/tvallogging
Sign up for the Tonic Validate UI here: https://validate.tonic.ai/
- Show HN: Tonic Validate Metrics – an open-source RAG evaluation metrics package
odin-slides
- Show HN: GPT Fill-in-the-Blanks: A Placeholder PowerPlay for PowerPoint
- GitHub - leonid20000/odin-slides: This is an advanced Python tool that empowers you to effortlessly draft customizable PowerPoint slides using the Generative Pre-trained Transformer (GPT) of your choice.
- Open-Source Python Tool to Make Highly Customizable PowerPoint Slides Using GPT
What are some alternatives?
llm-guard - The Security Toolkit for LLM Interactions
Jieba - Chinese word segmentation (结巴中文分词)
obsidian-copilot - 🤖 A prototype assistant for writing and thinking
aegis - Self-hardening firewall for large language models
llm-api-starterkit - Beginner-friendly repository for launching your first LLM API with Python, LangChain and FastAPI, using local models or the OpenAI API.
PPspliT - A PowerPoint add-in that splits slides according to slideshow-time animation effects
canopy - Retrieval Augmented Generation (RAG) framework and context engine powered by Pinecone
pntl - Practical Natural Language Processing Tools for Humans, built on top of Senna Natural Language Processing (NLP) predictions: part-of-speech (POS) tagging, chunking (CHK), named-entity recognition (NER), semantic role labeling (SRL), and syntactic parsing (PSG) with skip-gram, all in Python, with more features planned. A link for downloading the Senna tool is provided on the website.
spelltest - AI-to-AI Testing | Simulation framework for LLM-based applications
pkuseg-python - The pkuseg toolkit for multi-domain Chinese word segmentation
LLMSurvey - The official GitHub page for the survey paper "A Survey of Large Language Models".
SnowNLP - Python library for processing Chinese text