Deepeval Alternatives
Similar projects and alternatives to deepeval
- qdrant: Qdrant - High-performance, massive-scale Vector Database and Vector Search Engine for the next generation of AI. Also available in the cloud: https://cloud.qdrant.io/
- LocalAI: The free, open-source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many other model architectures. Features: text, audio, video and image generation, voice cloning, and distributed P2P inference.
- evals: Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
- litellm: Python SDK and proxy server (LLM gateway) to call 100+ LLM APIs in OpenAI format (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq).
- swirl-search: Swirl is an open-source search platform that uses AI to search multiple content and data sources simultaneously, return AI-ranked results, and summarize the answers using LLMs. It's a one-click, easy-to-use Retrieval-Augmented Generation (RAG) solution.
- pezzo: Open-source, developer-first LLMOps platform designed to streamline prompt design, version management, instant delivery, collaboration, troubleshooting, observability and more.
- LLMStack: No-code multi-agent framework to build LLM agents, workflows and applications with your data.
- FLaNK-Halifax: Community over Code, Apache NiFi, Apache Kafka, Apache Flink, Python, GTFS, Transit, Open Source, Open Data.
- super-gradients: Easily train or fine-tune SOTA computer vision models with one open-source training library. The home of Yolo-NAS.
- opencompass: OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama 3, Mistral, InternLM2, GPT-4, LLaMA 2, Qwen, GLM, Claude, etc.) over 100+ datasets.
deepeval reviews and mentions
- ContextCheck VS deepeval - a user suggested alternative (2 projects | 12 Nov 2024)
- Testing LLM Apps with Trace-based Tests
  This test makes a call to our API and performs some simple assertions to check that we received a proper output: valid JSON with a summary field in it. You could also add tests to check whether the text is relevant and catch low-accuracy responses (as the Python deepeval library does).
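For illustration, here is a minimal sketch (not from the article) of what such a test could look like with pytest and deepeval. The call_summarizer_api helper and the 0.7 threshold are hypothetical, and the relevancy metric needs an LLM judge (e.g. an OpenAI key) configured for deepeval.

```python
# Minimal sketch, assuming deepeval is installed and an LLM judge is configured.
import json

from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase


def call_summarizer_api(prompt: str) -> str:
    """Hypothetical stand-in for the HTTP call to the summarization API under test."""
    return json.dumps({"summary": "Trace-based tests assert on real API behavior."})


def test_summary_endpoint():
    prompt = "Summarize the article about trace-based testing."
    raw = call_summarizer_api(prompt)

    # Plain assertions: the response is valid JSON and contains a non-empty summary field.
    data = json.loads(raw)
    assert "summary" in data and data["summary"].strip()

    # LLM-based assertion: the summary is relevant to the input prompt.
    test_case = LLMTestCase(input=prompt, actual_output=data["summary"])
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])  # threshold is illustrative
```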
- Securing and enhancing LLM prompts & outputs: A guide using Amazon Bedrock and open-source solutions
  You can get started with DeepEval by simply downloading it from the GitHub repository. Once installed, you can create custom evaluation metrics, run test cases, and analyze results directly on your system. DeepEval provides flexibility, making it suitable for anyone looking to test LLMs without additional overhead or setup requirements.
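As a sketch of the "custom evaluation metrics" part, the following example subclasses deepeval's documented BaseMetric pattern. The LengthBudgetMetric name and the character budget are made up for this illustration, and the exact interface (e.g. the async variant) may differ between deepeval versions.

```python
# Hedged sketch of a custom deepeval metric (illustrative, not from the guide).
from deepeval.metrics import BaseMetric
from deepeval.test_case import LLMTestCase


class LengthBudgetMetric(BaseMetric):
    """Passes when the model output stays within a character budget (hypothetical metric)."""

    def __init__(self, max_chars: int = 500):
        self.threshold = max_chars  # deepeval convention: metrics expose a threshold
        self.max_chars = max_chars

    def measure(self, test_case: LLMTestCase) -> float:
        # Score 1.0 when the output fits the budget, 0.0 otherwise.
        output = test_case.actual_output or ""
        self.score = 1.0 if len(output) <= self.max_chars else 0.0
        self.success = self.score >= 1.0
        return self.score

    async def a_measure(self, test_case: LLMTestCase) -> float:
        # Async variant simply reuses the synchronous logic.
        return self.measure(test_case)

    def is_successful(self) -> bool:
        return self.success

    @property
    def __name__(self):
        return "Length Budget"
```

Such a metric can then be passed to assert_test alongside the built-in ones and run with pytest or the deepeval CLI (deepeval test run).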
- Unit Testing LLMs with DeepEval
  For the last year I have been working with different LLMs (OpenAI, Claude, PaLM, Gemini, etc.), and I have been impressed with their performance. With the rapid advancements in AI and the increasing complexity of LLMs, it has become crucial to have a reliable testing framework that can help us maintain the quality of our prompts and ensure the best possible outcomes for our users. Recently, I discovered DeepEval (https://github.com/confident-ai/deepeval), an LLM testing framework that has revolutionized the way we approach prompt quality assurance.
- Show HN: Ragas – the de facto open-source standard for evaluating RAG pipelines
  Check out this instead: https://github.com/confident-ai/deepeval
  It also has a native Ragas implementation but supports all models.
- Show HN: Times faster LLM evaluation with Bayesian optimization
  Fair question.
  Evaluation refers to the phase after training that checks whether the training went well. Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at evaluation. Evaluation can be slow (it might even be slower than training if you're fine-tuning on a small domain-specific subset)!
  So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation; however, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple computers, but none of them takes advantage of the fact that many evaluation queries might be similar; they all evaluate every given query. That's where this project might come in handy.
- Implemented 12+ LLM evaluation metrics so you don't have to
  A link to a Reddit post (with no discussion) that links to this repo:
  https://github.com/confident-ai/deepeval
  - Show HN: I implemented a range of evaluation metrics for LLMs that runs locally
- These 5 Open Source AI Startups are changing the AI Landscape
  Star DeepEval on GitHub and contribute to the advancement of LLM evaluation frameworks!
  - FLaNK Stack Weekly 06 Nov 2023
Stats
confident-ai/deepeval is an open-source project licensed under the Apache License 2.0, which is an OSI-approved license.
The primary programming language of deepeval is Python.