|  | deepmark | LLM-eval-survey |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 98 | 1,240 |
| Growth | - | - |
| Activity | 8.9 | 9.2 |
| Latest commit | 6 months ago | 5 months ago |
| Language | PHP | - |
| License | GNU Affero General Public License v3.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
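The exact formula behind the activity number is not published; a minimal sketch of one plausible scheme, assuming exponential decay so that recent commits weigh more than older ones (the half-life parameter and the function name are illustrative assumptions, not the site's actual method):

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0, now=None):
    """Recency-weighted commit activity.

    Each commit contributes a weight that decays exponentially with its
    age, halving every `half_life_days` days. This is an illustrative
    assumption; the real weighting used by the tracker is not specified.
    """
    now = now or datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score

# A two-day-old commit contributes nearly its full weight,
# while a year-old commit contributes almost nothing.
now = datetime(2024, 1, 1, tzinfo=timezone.utc)
recent = activity_score([datetime(2023, 12, 30, tzinfo=timezone.utc)], now=now)
old = activity_score([datetime(2023, 1, 1, tzinfo=timezone.utc)], now=now)
```

Scores computed this way are only comparable relative to each other, which matches the description above of activity as a relative, percentile-style ranking across tracked projects.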
deepmark
Show HN: Deepmark AI - an LLM assessment tool for task-specific metrics on your data
Deepmark AI empowers organizations to make informed decisions when navigating the most important performance metrics of Large Language Models.
So what are you waiting for? Sign up for IngestAI today and take your customer support to the next level!
Follow our roadmap on GitHub: star us / watch us / fork us: https://github.com/IngestAI/deepmark
And join our community at:
LLM-eval-survey
What are some alternatives?
SciTS - A tool to benchmark time-series databases
awesome-semantic-segmentation - A curated list of awesome semantic segmentation resources
phoronix-test-suite - The Phoronix Test Suite: open-source, cross-platform automated testing/benchmarking software
awesome-refreshing-llms - EMNLP'23 survey: a curated list of papers and resources on refreshing large language models (LLMs) without expensive retraining
php-bard-api - A PHP package that returns responses from Google Bard through its API
PHPBench - A PHP benchmarking framework
go-benchmarks - Comprehensive and reproducible benchmarks for Go developers and architects
llm-client-sdk - An SDK for using LLMs