| | langfuse | agenta |
|---|---|---|
| Mentions | 11 | 9 |
| Stars | 3,681 | 865 |
| Growth | 30.4% | 10.4% |
| Activity | 9.9 | 10.0 |
| Latest commit | 7 days ago | 6 days ago |
| Language | TypeScript | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
langfuse
- Top Open Source Prompt Engineering Guides & Tools🔧🏗️🚀
Langfuse is an open-source LLM engineering platform that helps teams collaboratively debug, analyze, and iterate on their LLM applications.
- Roast My Docs
- Show HN: Open-Source LLM Observability and Export to Grafana, Datadog etc.
Congrats on the Show! How’s this different from https://github.com/langfuse/langfuse? The exports seem really interesting.
- RAG observability in 2 lines of code with Llama Index & Langfuse
Thus, we started working on Langfuse.com (GitHub) to establish an open source LLM engineering platform with tightly integrated features for tracing, prompt management, and evaluation. In the beginning we just solved our own and our friends’ problems. Today we are at over 1000 projects which rely on Langfuse, and 2.3k stars on GitHub. You can either self-host Langfuse or use the cloud instance maintained by us.
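For context, the "2 lines" refer to registering Langfuse as LlamaIndex's global callback handler. A minimal sketch of that setup (import paths vary across llama_index versions; newer releases expose set_global_handler from llama_index.core and need the llama-index-callbacks-langfuse package, and the default LLM assumes an OpenAI key):

```python
import os

# Keys come from the Langfuse project settings; host defaults to Langfuse Cloud.
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"

# The advertised two lines: register Langfuse as the global callback handler
# so every retrieval, synthesis, and LLM call is traced automatically.
from llama_index.core import set_global_handler

set_global_handler("langfuse")

# Any pipeline built afterwards is traced without further changes, e.g.:
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())
print(index.as_query_engine().query("What do these documents cover?"))
```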
- langfuse VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- Ask HN: Who is hiring? (November 2023)
- We want to build a tool that is recommended here on HN: you can build a tool you would want to use yourself.
Please see more details here: https://langfuse.com/careers or reach out directly to me: [email protected]
[1] https://github.com/langfuse/langfuse
[2] https://create.t3.gg/
- How are generative AI companies monitoring their systems in production?
We struggled with this ourselves while building LLM-based products and then open-sourced our observability/monitoring tool [1]. Many use it to track RAG and agents in production, run custom evals on the production traces (focused on hallucination), and track how metrics are different across releases or customers. Feel free to dm if there is something specific you are looking to solve, happy to help.
[1] https://github.com/langfuse/langfuse
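As a rough illustration of the workflow described above (a sketch against the v2 Langfuse Python SDK's low-level client; method names differ in other SDK versions, and the names and values are placeholders):

```python
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from the
# environment if not passed explicitly.
langfuse = Langfuse()

# One trace per production request, with nested observations for each step.
trace = langfuse.trace(name="rag-query", user_id="user-123",
                       metadata={"release": "v1.4.0"})

span = trace.span(name="retrieval", input={"query": "refund policy?"})
span.end(output={"documents": ["doc-42", "doc-7"]})

generation = trace.generation(name="answer", model="gpt-4",
                              input=[{"role": "user", "content": "refund policy?"}])
generation.end(output="Refunds are accepted within 30 days.")

# Attach the result of a custom eval (e.g. a hallucination check) as a score,
# so it can be compared across releases or customers.
langfuse.score(trace_id=trace.id, name="hallucination", value=0.0)

langfuse.flush()  # events are batched and sent asynchronously; flush before exit
```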
- LLM Analytics 101 - How to Improve your LLM app
Visit us on Discord and GitHub to engage with our project.
- Ask HN: Any tools or frameworks to monitor the usage of OpenAI API keys?
Maybe try https://github.com/langfuse/langfuse
It was recently shared on HN
- Show HN: Langfuse – Open-source observability and analytics for LLM apps
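For the API-key-monitoring use case in that thread, Langfuse documents a drop-in wrapper around the OpenAI client that records every call with its token usage. A minimal sketch (assumes openai>=1.0 and OPENAI_API_KEY plus the LANGFUSE_* keys set in the environment):

```python
# Drop-in replacement: swapping this import is the only code change, and each
# completion is logged to Langfuse with model, token usage, and latency.
from langfuse.openai import openai  # instead of `import openai`

completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```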
agenta
- Top Open Source Prompt Engineering Guides & Tools🔧🏗️🚀
Agenta is an end-to-end LLMOps platform. It provides tools for prompt engineering and management, evaluation, human annotation, and deployment.
- Ask HN: How are you testing your LLM applications?
I am biased, but I would recommend using a platform rather than rolling your own solution. You will tend to underestimate the depth of capabilities needed for an eval framework.
Now for solutions, shameless plug here, we are building an open-source platform for experimenting and evaluating complex LLM apps (https://github.com/agenta-ai/agenta). We offer automatic evaluators as well as human annotation capabilities. Currently, we only provide testing before deployment, but we have plans to include post-production evaluations as well.
Other tools I would look at in the space are promptfoo (also open-source, more dev-oriented), humanloop (one of the most feature-complete tools in the space, however more enterprise-oriented / costly), and vellum (YC company, more focused towards product teams).
- trulens VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- langfuse VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- langchain VS agenta - a user suggested alternative
2 projects | 22 Nov 2023
- 🤖 Agenta: Open-Source Platform for LLM Prompt Engineering, Evaluation, and Deployment
⭐ Support Us: If you find this useful, please star us on GitHub: Agenta on GitHub
- DeepEval – Unit Testing for LLMs
I'd add ours too, although we're trying to be an end-to-end one-stop platform.
https://github.com/agenta-ai/agenta
- Show HN: Knit – A Better LLM Playground
- Patterns for Building LLM-Based Systems and Products
Great project! We're building an open-source platform for building robust LLM apps (https://github.com/Agenta-AI/agenta), we'd love to integrate your library into our evaluation!
What are some alternatives?
trulens - Evaluation and Tracking for LLM Experiments
ChainForge - An open-source visual programming environment for battle-testing prompts to LLMs.
llama_index - LlamaIndex is a data framework for your LLM applications
OpenPipe - Turn expensive prompts into cheap fine-tuned models
langchain - 🦜🔗 Build context-aware reasoning applications
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
opentelemetry-instrument-openai-py - OpenTelemetry instrumentation for the OpenAI Python library
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
examples - Your one-stop-shop to try Xata out. From packages to apps, whatever you need to get started.
SolidUI - one sentence generates any graph
clickhouse_knowledge_base - The Tinybird ClickHouse Knowledge Base
deepeval - Unit Testing For LLMs [Moved to: https://github.com/confident-ai/deepeval]