| | agenta | ragas |
|---|---|---|
| Mentions | 9 | 10 |
| Stars | 865 | 4,874 |
| Growth | 8.5% | 17.7% |
| Activity | 10.0 | 9.6 |
| Latest commit | about 8 hours ago | 1 day ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
agenta
-
Top Open Source Prompt Engineering Guides & Tools
Agenta is an end-to-end LLMOps platform. It provides tools for prompt engineering and management, evaluation, human annotation, and deployment.
-
Ask HN: How are you testing your LLM applications?
I am biased, but I would use a platform and not roll your own solution. You will tend to underestimate the depth of capabilities needed for an eval framework.
Now for solutions, shameless plug here, we are building an open-source platform for experimenting and evaluating complex LLM apps (https://github.com/agenta-ai/agenta). We offer automatic evaluators as well as human annotation capabilities. Currently, we only provide testing before deployment, but we have plans to include post-production evaluations as well.
Other tools I would look at in the space are promptfoo (also open-source, more dev oriented), humanloop (one of the most feature-complete tools in the space, however more enterprise oriented / costly), and vellum (YC company, more focused towards product teams).
-
trulens VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
-
langfuse VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
-
langchain VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
-
Agenta: Open-Source Platform for LLM Prompt Engineering, Evaluation, and Deployment
Support Us: If you find this useful, please star us on GitHub: Agenta on GitHub
-
DeepEval – Unit Testing for LLMs
I'd add ours too, although we're trying to be an end-to-end one-stop platform.
https://github.com/agenta-ai/agenta
- Show HN: Knit – A Better LLM Playground
-
Patterns for Building LLM-Based Systems and Products
Great project! We're building an open-source platform for building robust LLM apps (https://github.com/Agenta-AI/agenta), we'd love to integrate your library into our evaluation!
ragas
-
Show HN: Ragas – the de facto open-source standard for evaluating RAG pipelines
congrats on launching! i think my continuing struggle with looking at Ragas as a company rather than an oss library is that the core of it is like 8 metrics (https://github.com/explodinggradients/ragas/tree/main/src/ra...) that are each 1-200 LOC. i can inline that easily in my app and retain full control, or model that in langchain or haystack or whatever.
why is Ragas a library and a company, rather than an overall "standard" or philosophy (eg like Heroku's 12 Factor Apps) that could maybe be more robust?
(just giving an opp to pitch some underappreciated benefits of using this library)
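For context on what those metrics look like in use, here is a minimal sketch of a ragas evaluation run. It assumes ragas' evaluate() entry point, the pre-built faithfulness and answer_relevancy metric objects, and an OpenAI key in the environment for the LLM judge; the sample question, contexts, and answer are hypothetical.

```python
# Minimal sketch, assuming ragas' evaluate() API and an OPENAI_API_KEY in the environment.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# Hypothetical outputs collected from a RAG pipeline
samples = {
    "question": ["What does the refund policy cover?"],
    "contexts": [["Refunds are available within 30 days of purchase."]],
    "answer": ["You can request a refund within 30 days of purchase."],
}

# Each metric is scored by an LLM judge over the question/contexts/answer triple
result = evaluate(Dataset.from_dict(samples), metrics=[faithfulness, answer_relevancy])
print(result)  # e.g. {'faithfulness': 1.0, 'answer_relevancy': 0.93}
```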
- FLaNK 04 March 2024
- FLaNK Stack 05 Feb 2024
-
SuperDuperDB - how to use it to talk to your documents locally using llama 7B or Mistral 7B?
Also, at some point you'll need to get serious about evaluation (trust me, you will). You may be interested in https://github.com/explodinggradients/ragas
- Ragas – Framework for RAG Evaluation
- Ragas: Open-source Evaluation framework for RAG pipelines
-
Building a customer support chatbot using GPT-3.5 and LlamaIndex
The problem becomes worse if you want to inspect outputs from not just one, but several different queries. Luckily, there are several free open-source packages such as ragas and DeepEval that can help evaluate your chatbot so you don't have to do it manually.
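For the DeepEval side of that workflow, a pytest-style check might look like the following sketch, assuming deepeval's LLMTestCase and assert_test API; the question, answer, and 0.7 threshold are illustrative.

```python
# Minimal sketch, assuming deepeval's pytest-style assert_test / LLMTestCase API.
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_support_answer():
    # Hypothetical chatbot exchange captured from the app under test
    test_case = LLMTestCase(
        input="How do I reset my password?",
        actual_output="Go to Settings > Account and click 'Reset password'.",
    )
    # An LLM judge scores relevancy; assert_test fails the test below the threshold
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```

A test like this can run under plain pytest, so it slots into the same suite as the rest of the chatbot's tests.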
-
Patterns for Building LLM-Based Systems and Products
We have built the RAGAS framework for this: https://github.com/explodinggradients/ragas
-
[R] All about evaluating Large language models
Hi u/thecuteturtle, I am building open-source projects for evaluating LLM-based applications. Check it out https://github.com/explodinggradients/ragas and if you like to collaborate let me know :)
What are some alternatives?
ChainForge - An open-source visual programming environment for battle-testing prompts to LLMs.
deepeval - The LLM Evaluation Framework
langfuse - Open source LLM engineering platform: Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. YC W23
chameleon-llm - Codes for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models".
OpenPipe - Turn expensive prompts into cheap fine-tuned models
Local-LLM-Langchain - Load local LLMs effortlessly in a Jupyter notebook for testing purposes alongside Langchain or other agents. Contains Oobabooga and KoboldAI versions of the langchain notebooks with examples.
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
FastLoRAChat - Instruct-tune LLaMA on consumer hardware with shareGPT data
SolidUI - one sentence generates any graph
text-generation-webui-colab - A colab gradio web UI for running Large Language Models
deepeval - Unit Testing For LLMs [Moved to: https://github.com/confident-ai/deepeval]
ReAct - [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models