SolidUI vs agenta

| | SolidUI | agenta |
|---|---|---|
| Mentions | 20 | 9 |
| Stars | 533 | 847 |
| Growth | 3.9% | 8.5% |
| Activity | 9.5 | 10.0 |
| Latest Commit | 4 months ago | 6 days ago |
| Language | TypeScript | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
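For illustration, here is a minimal sketch of how such a recency-weighted, percentile-ranked score could be computed. The site's actual formula is not published, so the half-life constant and the 0-10 scaling below are assumptions:

```python
from datetime import datetime, timezone

HALF_LIFE_DAYS = 30  # assumed decay half-life; the site's real constant is unknown

def activity_score(commit_dates: list[datetime], now: datetime | None = None) -> float:
    """Raw score: each commit contributes a weight that halves every HALF_LIFE_DAYS."""
    now = now or datetime.now(timezone.utc)
    return sum(0.5 ** ((now - d).days / HALF_LIFE_DAYS) for d in commit_dates)

def relative_activity(score: float, all_scores: list[float]) -> float:
    """Map a raw score onto a 0-10 scale by percentile rank among tracked projects."""
    rank = sum(s <= score for s in all_scores) / len(all_scores)
    return round(10 * rank, 1)  # 9.0 or above => top 10% of tracked projects
```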
SolidUI
- SolidUI Architectural Adjustment
- Pre-Set Background Image Interface
- Design the page, hide the static legends and the legends data tab
- Pre-Set Background Image Feature
- Interface with GIS, 2D/3D AI automatic visualization generation
- SolidUI AI generates visualization, version 0.1.0 module division and source code explanation (CloudOrc/SolidUI Discussion #89)
- SolidUI AI-Generated Graphic Models v0.3.0 Proposal
- Version Update | SolidUI 0.2.0 Release
agenta
- Top Open Source Prompt Engineering Guides & Tools🔧🏗️🚀
  Agenta is an end-to-end LLMOps platform. It provides tools for prompt engineering and management, evaluation, human annotation, and deployment.
- Ask HN: How are you testing your LLM applications?
  I am biased, but I would use a platform rather than roll your own solution. You will tend to underestimate the depth of capabilities needed for an eval framework.
  Now for solutions, shameless plug here: we are building an open-source platform for experimenting with and evaluating complex LLM apps (https://github.com/agenta-ai/agenta). We offer automatic evaluators as well as human annotation capabilities (a minimal sketch of such an evaluator appears after this list). Currently, we only provide testing before deployment, but we have plans to include post-production evaluations as well.
  Other tools I would look at in the space are promptfoo (also open-source, more dev oriented), humanloop (one of the most feature-complete tools in the space, however more enterprise oriented / costly), and vellum (YC company, more focused towards product teams).
- trulens VS agenta - a user suggested alternative (2 projects | 22 Nov 2023)
- langfuse VS agenta - a user suggested alternative (2 projects | 22 Nov 2023)
- langchain VS agenta - a user suggested alternative (2 projects | 22 Nov 2023)
- 🤖 Agenta: Open-Source Platform for LLM Prompt Engineering, Evaluation, and Deployment
  ⭐ Support Us: If you find this useful, please star us on GitHub: Agenta on GitHub
- DeepEval – Unit Testing for LLMs
  I'd add ours too, although we're trying to be an end-to-end one-stop platform.
  https://github.com/agenta-ai/agenta
- Show HN: Knit – A Better LLM Playground
- Patterns for Building LLM-Based Systems and Products
  Great project! We're building an open-source platform for building robust LLM apps (https://github.com/Agenta-AI/agenta); we'd love to integrate your library into our evaluation!
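As background for the "automatic evaluators" mentioned in the Ask HN thread above, here is a minimal, tool-agnostic sketch of the kind of check such platforms automate. The `generate` callable and the tiny test set are hypothetical stand-ins, not agenta's actual API:

```python
from typing import Callable

# Hypothetical test set; a real platform would load cases from a managed dataset.
TEST_SET = [
    {"input": "What is the capital of France?", "expected": "Paris"},
    {"input": "2 + 2 =", "expected": "4"},
]

def exact_match(output: str, expected: str) -> bool:
    """Simplest automatic evaluator: case-insensitive exact match."""
    return output.strip().lower() == expected.strip().lower()

def evaluate(generate: Callable[[str], str]) -> float:
    """Run the app under test on every case and return the pass rate."""
    passed = sum(exact_match(generate(case["input"]), case["expected"]) for case in TEST_SET)
    return passed / len(TEST_SET)

# Usage: evaluate(my_llm_app)  # my_llm_app is any hypothetical prompt -> answer function
```

Real eval frameworks layer many more evaluator types on top of this pattern (similarity scoring, LLM-as-judge, human annotation), which is the depth the comment above warns is easy to underestimate.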
What are some alternatives?
IncognitoPilot - An AI code interpreter for sensitive data, powered by GPT-4 or Code Llama / Llama 2.
ChainForge - An open-source visual programming environment for battle-testing prompts to LLMs.
chat-to-your-database - Chat to your database with AI. An experimental app to test the abilities of LLMs to query SQL databases using natural language.
langfuse - 🪢 Open source LLM engineering platform: Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
superprompt - Prompt Development Environment for GPT
OpenPipe - Turn expensive prompts into cheap fine-tuned models
SolidUI-Website - SolidUI official website
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
YourVision - AI-powered image editor
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
superset - Apache Superset is a Data Visualization and Data Exploration Platform
deepeval - Unit Testing For LLMs [Moved to: https://github.com/confident-ai/deepeval]