agenta vs ChainForge

| | agenta | ChainForge |
|---|---|---|
| Mentions | 9 | 14 |
| Stars | 865 | 2,015 |
| Growth | 8.5% | - |
| Activity | 10.0 | 8.9 |
| Latest commit | about 7 hours ago | 1 day ago |
| Language | Python | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
agenta
- Top Open Source Prompt Engineering Guides & Tools
Agenta is an end-to-end LLMOps platform. It provides tools for prompt engineering and management, evaluation, human annotation, and deployment.
- Ask HN: How are you testing your LLM applications?
I am biased, but I would use a platform and not roll your own solution. You will tend to underestimate the depth of capabilities needed for an eval framework.
Now for solutions, shameless plug here, we are building an open-source platform for experimenting and evaluating complex LLM apps (https://github.com/agenta-ai/agenta). We offer automatic evaluators as well as human annotation capabilities. Currently, we only provide testing before deployment, but we have plans to include post-production evaluations as well.
Other tools I would look at in the space are promptfoo (also open-source, more dev oriented), humanloop (one of the most feature-complete tools in the space, though enterprise oriented and costly), and vellum (a YC company, more focused on product teams).
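The "you will underestimate the depth" argument is easier to see against a minimal homegrown evaluator. The sketch below is hypothetical (it is not agenta's or promptfoo's API): it covers only the bare scoring loop, with none of the dataset management, human annotation, or regression tracking a platform adds.

```python
# Minimal sketch of a homegrown LLM eval harness (hypothetical names).
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    expected: str

def exact_match(output: str, expected: str) -> float:
    """Simplest automatic evaluator: 1.0 on a (case-insensitive) exact match."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def run_eval(cases: list[EvalCase],
             llm: Callable[[str], str],
             scorer: Callable[[str, str], float] = exact_match) -> float:
    """Run every case through the model and return the mean score."""
    scores = [scorer(llm(c.prompt), c.expected) for c in cases]
    return sum(scores) / len(scores)

# A stub lambda stands in for a real LLM call:
cases = [EvalCase("capital of France?", "Paris"),
         EvalCase("2+2?", "4")]
print(run_eval(cases, llm=lambda p: "Paris" if "France" in p else "5"))  # → 0.5
```

Even this toy version hints at the missing depth: fuzzy and model-graded scorers, per-case error reporting, and tracking scores across prompt versions all still have to be built.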
- trulens VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- langfuse VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- langchain VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- Agenta: Open-Source Platform for LLM Prompt Engineering, Evaluation, and Deployment
Support Us: If you find this useful, please star us on GitHub: Agenta on GitHub
- DeepEval - Unit Testing for LLMs
I'd add ours too, although we're trying to be an end-to-end one-stop platform.
https://github.com/agenta-ai/agenta
- Show HN: Knit - A Better LLM Playground
- Patterns for Building LLM-Based Systems and Products
Great project! We're building an open-source platform for building robust LLM apps (https://github.com/Agenta-AI/agenta), we'd love to integrate your library into our evaluation!
ChainForge
- ChainForge is an open-source visual prompt engineering programming environment
- AI for ChainForge Beta
- Anthropic Claude for Google Sheets
This seems like a Sheets implementation of something like ChainForge (https://github.com/ianarawjo/ChainForge). Curious that Anthropic is entering the LLMOps tooling space; this definitely comes as a surprise to me, as both OpenAI and HuggingFace seem to avoid building prompt engineering tooling themselves. Is this a business strategy of Anthropic's? An experiment? Regardless, it's very cool to see a company like them throw their hat into the LLMOps space beyond being a model provider. Interested to see what comes next.
- ChainForge, a visual programming environment for prompt engineering
- I asked 60 LLMs a set of 20 questions
ChainForge has similar functionality for comparing models: https://github.com/ianarawjo/ChainForge
LocalAI creates a GPT-compatible HTTP API for local LLMs: https://github.com/go-skynet/LocalAI
Is it necessary to have an HTTP API for each model in a comparative study?
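The appeal of a GPT-compatible API is precisely that you do not need a separate client per model: one request shape serves every model the server hosts. A rough sketch, assuming LocalAI's default base URL (`http://localhost:8080/v1`, per its docs) and placeholder model names:

```python
# Sketch: one OpenAI-style client for any GPT-compatible server (LocalAI, etc.).
import json
import urllib.request

def build_payload(model: str, prompt: str) -> dict:
    """OpenAI-style chat-completion body; the same shape works for any model."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def chat_completion(base_url: str, model: str, prompt: str) -> str:
    """POST the payload and pull the first choice's message content."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# One client, many local models (model names are placeholders):
# for m in ("llama-3-8b", "mistral-7b"):
#     print(m, chat_completion("http://localhost:8080/v1", m, "2+2?"))
```

So for a comparative study the per-model HTTP endpoint is a convenience, not a burden: swapping models is just changing the `model` string.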
- Show HN: Knit - A Better LLM Playground
- Show HN: ChainForge, a visual tool for prompt engineering and LLM evaluation
I think you should probably mention that its source is available! [0]
I don't personally have a need for this right now, but I can really see the use for the parameterised queries, as well as comparisons across models.
Thanks for your efforts!
0: https://github.com/ianarawjo/ChainForge
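The "parameterised queries across models" the commenter likes boil down to expanding a prompt template over a cross product of parameter values and model names. A rough illustration of the idea (not ChainForge's actual internals, which are a visual TypeScript app):

```python
# Sketch: expand one prompt template over parameters x models.
from itertools import product
from string import Template

template = Template("Translate '$word' into $language.")
params = {"word": ["cat", "dog"], "language": ["French", "German"]}
models = ["gpt-4", "claude-2"]

# Every (model, filled-in prompt) pair to be sent for comparison:
jobs = [(m, template.substitute(word=w, language=l))
        for m, (w, l) in product(models,
                                 product(params["word"], params["language"]))]

print(len(jobs))  # 2 models x 2 words x 2 languages → 8 prompts
```

Each pair would then be dispatched to the named model and the responses laid out side by side for comparison.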
- Continue multiple conversations simultaneously across multiple LLMs
- ChainForge now supports chat evaluation
- GPT-Prompt-Engineer
No problem! I guess I will make a plug myself -- we've been working on a similar 'prompt engineering' tool, ChainForge (https://github.com/ianarawjo/ChainForge). It's targeted towards slightly different users and use cases than promptfoo -- probably more geared towards early-stage, 'quick-and-dirty' prompting explorations of differences between prompts and models for less experienced programmers, versus the kind of continuous benchmarking and verification testing that promptfoo offers.
I particularly like promptfoo's support for CI, which I haven't seen anywhere else, and is very important for developers pushing prompts into production (esp since OpenAI keeps updating their models every few months...).
What are some alternatives?
langfuse - Open source LLM engineering platform: Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. YC W23
langflow - Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
OpenPipe - Turn expensive prompts into cheap fine-tuned models
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
SolidUI - one sentence generates any graph
DocGPT - DoctorGPT provides advanced LLM prompting for PDFs and webpages. [Moved to: https://github.com/FeatureBaseDB/DoctorGPT]
deepeval - Unit Testing For LLMs [Moved to: https://github.com/confident-ai/deepeval]
GodMode - AI Chat Browser: Fast, full webapp access to ChatGPT / Claude / Bard / Bing / Llama2! I use this 20 times a day.