rebuff vs agenta

| | rebuff | agenta |
|---|---|---|
| Mentions | 3 | 9 |
| Stars | 947 | 865 |
| Growth | 5.5% | 8.5% |
| Activity | 8.9 | 10.0 |
| Latest commit | about 2 months ago | about 8 hours ago |
| Language | TypeScript | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
- Top Open Source Prompt Engineering Guides & Tools
  Agenta is an end-to-end LLMOps platform. It provides tools for prompt engineering and management, evaluation, human annotation, and deployment.
- Ask HN: How are you testing your LLM applications?
  I am biased, but I would use a platform and not roll your own solution. You will tend to underestimate the depth of capabilities needed for an eval framework.
  Now for solutions, shameless plug here: we are building an open-source platform for experimenting with and evaluating complex LLM apps (https://github.com/agenta-ai/agenta). We offer automatic evaluators as well as human annotation capabilities (see the evaluator sketch after this list). Currently, we only provide testing before deployment, but we have plans to include post-production evaluations as well.
  Other tools I would look at in the space are promptfoo (also open-source, more dev-oriented), humanloop (one of the most feature-complete tools in the space, however more enterprise-oriented / costly), and vellum (YC company, more focused towards product teams).
- trulens VS agenta - a user suggested alternative
  2 projects | 22 Nov 2023
- langfuse VS agenta - a user suggested alternative
  2 projects | 22 Nov 2023
- langchain VS agenta - a user suggested alternative
  2 projects | 22 Nov 2023
- Agenta: Open-Source Platform for LLM Prompt Engineering, Evaluation, and Deployment
  Support Us: If you find this useful, please star us on GitHub: Agenta on GitHub
- DeepEval – Unit Testing for LLMs
  I'd add ours too, although we're trying to be an end-to-end one-stop platform (see the unit-test sketch after this list).
  https://github.com/agenta-ai/agenta
- Show HN: Knit – A Better LLM Playground
- Patterns for Building LLM-Based Systems and Products
  Great project! We're building an open-source platform for building robust LLM apps (https://github.com/Agenta-AI/agenta); we'd love to integrate your library into our evaluation!
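The Ask HN comment above argues that eval frameworks run deeper than they look, but the core loop every automatic evaluator shares is small enough to sketch. Below is a minimal harness in Python; it is not Agenta's or promptfoo's actual API, and `call_llm`, `exact_match`, and the test set are hypothetical stand-ins for your own app and data.

```python
# Minimal sketch of an automatic-evaluator loop (hypothetical; not Agenta's API).
from typing import Callable

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for your LLM app's entrypoint; replace with a real call.
    canned = {"What is 2 + 2?": "4"}
    return canned.get(prompt, "")

def exact_match(output: str, expected: str) -> bool:
    # Simplest possible automatic evaluator: normalized exact match.
    return output.strip().lower() == expected.strip().lower()

def run_eval(test_set: list[dict], app: Callable[[str], str]) -> float:
    # Run the app over every test case and return the pass rate.
    passed = sum(exact_match(app(c["input"]), c["expected"]) for c in test_set)
    return passed / len(test_set)

if __name__ == "__main__":
    cases = [{"input": "What is 2 + 2?", "expected": "4"}]
    print(f"pass rate: {run_eval(cases, call_llm):.0%}")  # pass rate: 100%
```

Real platforms layer model-graded evaluators, datasets, versioning, and human annotation on top of this loop, which is exactly the depth the comment warns you will underestimate.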
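Likewise, the "unit testing for LLMs" framing from the DeepEval thread boils down to asserting properties of model output rather than exact strings. A pytest-style sketch, assuming a hypothetical LLM-backed `summarize` function (DeepEval itself wraps the same idea in richer metrics such as relevancy and faithfulness):

```python
# Pytest-style sketch of a unit test for an LLM-backed function.
# `summarize` is a hypothetical stand-in, not DeepEval's actual API.

def summarize(text: str) -> str:
    # Hypothetical stand-in for an LLM call that summarizes its input.
    return "Agenta is an open-source LLMOps platform."

def test_summary_is_short_and_on_topic():
    # Assert properties of the output, not an exact string, since LLM
    # outputs are non-deterministic.
    out = summarize("Agenta is an end-to-end LLMOps platform for prompt "
                    "engineering, evaluation, and deployment.")
    assert "agenta" in out.lower()
    assert len(out.split()) <= 50
```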
What are some alternatives?
gateway - A Blazing Fast AI Gateway. Route to 100+ LLMs with 1 fast & friendly API.
ChainForge - An open-source visual programming environment for battle-testing prompts to LLMs.
llm.report - llm.report is an open-source logging and analytics platform for OpenAI: log your ChatGPT API requests, analyze costs, and improve your prompts.
langfuse - Open source LLM engineering platform: observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. YC W23
promptfoo - Test your prompts. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality. [Moved to: https://github.com/promptfoo/promptfoo]
OpenPipe - Turn expensive prompts into cheap fine-tuned models
Raycast-PromptLab - A Raycast extension for creating powerful, contextually-aware AI commands using placeholders, action scripts, selected files, and more.
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
sugarcane-ai - npm-like package ecosystem for Prompts
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
SolidUI - one sentence generates any graph