deepeval

The LLM Evaluation Framework (by confident-ai)

Deepeval Alternatives

Similar projects and alternatives to deepeval

  • qdrant

    Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/

  • openvino_notebooks

    79 deepeval VS openvino_notebooks

    📚 Jupyter notebook tutorials for OpenVINO™

  • LocalAI

    :robot: The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many other model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.

  • evals

    49 deepeval VS evals

    Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

  • litellm

    28 deepeval VS litellm

    Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)

  • pezzo

    16 deepeval VS pezzo

    🕹️ Open-source, developer-first LLMOps platform designed to streamline prompt design, version management, instant delivery, collaboration, troubleshooting, observability and more.

  • LLMStack

    No-code platform to build LLM Agents, workflows and applications with your data

  • ragas

    10 deepeval VS ragas

    Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines

  • openchat

    OpenChat: Advancing Open-source Language Models with Imperfect Data (by imoneoi)

  • chdb

    chDB is an embedded OLAP SQL Engine 🚀 powered by ClickHouse

  • FLaNK-Halifax

    Community over Code, Apache NiFi, Apache Kafka, Apache Flink, Python, GTFS, Transit, Open Source, Open Data

  • distil-whisper

    Distilled variant of Whisper for speech recognition. 6x faster, 50% smaller, within 1% word error rate.

  • super-gradients

    8 deepeval VS super-gradients

    Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.

  • CoC2023

    Community over Code, Apache NiFi, Apache Kafka, Apache Flink, Python, GTFS, Transit, Open Source, Open Data

  • trieve

    All-in-one infrastructure for building search, recommendations, and RAG. Trieve combines search language models with tools for tuning ranking and relevance.

  • tailspin

    🌀 A log file highlighter

  • opencompass

    OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama 3, Mistral, InternLM2, GPT-4, LLaMA 2, Qwen, GLM, Claude, etc.) over 100+ datasets.

NOTE: The mention counts on this list reflect mentions in common posts plus user-suggested alternatives. Hence, a higher number generally means a better deepeval alternative or a more similar project.

deepeval reviews and mentions

Posts with mentions or reviews of deepeval. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-21.
  • Unit Testing LLMs with DeepEval
    1 project | dev.to | 11 Apr 2024
    For the last year I have been working with different LLMs (OpenAI, Claude, PaLM, Gemini, etc.), and I have been impressed with their performance. With the rapid advancements in AI and the increasing complexity of LLMs, it has become crucial to have a reliable testing framework that can help us maintain the quality of our prompts and ensure the best possible outcomes for our users. Recently, I discovered DeepEval (https://github.com/confident-ai/deepeval), an LLM testing framework that has revolutionized the way we approach prompt quality assurance.
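    As a concrete illustration, here is a minimal sketch of the kind of pytest-style unit test DeepEval enables. The names below (LLMTestCase, AnswerRelevancyMetric, assert_test) follow the project's documentation from around this period and may differ between versions; the prompt/response pair is made up.

    ```python
    # Minimal DeepEval unit-test sketch; run with `pytest`.
    # API names follow DeepEval's docs circa 2024 and may have changed.
    from deepeval import assert_test
    from deepeval.metrics import AnswerRelevancyMetric
    from deepeval.test_case import LLMTestCase

    def test_answer_relevancy():
        # Hypothetical pair; in practice actual_output comes from your LLM app.
        test_case = LLMTestCase(
            input="What is your return policy?",
            actual_output="Items can be returned within 30 days of purchase.",
        )
        metric = AnswerRelevancyMetric(threshold=0.7)  # test fails below 0.7
        assert_test(test_case, [metric])
    ```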
  • Show HN: Ragas – the de facto open-source standard for evaluating RAG pipelines
    4 projects | news.ycombinator.com | 21 Mar 2024
    Checkout this instead: https://github.com/confident-ai/deepeval

    Also has a native ragas implementation, but supports all models.
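    As a hedged sketch of what that looks like, the snippet below points DeepEval's Ragas-style metric at a chosen judge model; the import path and parameter names (RagasMetric, model, retrieval_context) are taken from DeepEval's docs of this period and may differ by version.

    ```python
    # Sketch: DeepEval's Ragas metric with a pluggable judge model.
    # Names follow DeepEval's docs circa early 2024 and may have changed.
    from deepeval import evaluate
    from deepeval.metrics.ragas import RagasMetric
    from deepeval.test_case import LLMTestCase

    test_case = LLMTestCase(
        input="Who wrote 'Pride and Prejudice'?",
        actual_output="Jane Austen wrote 'Pride and Prejudice'.",
        expected_output="Jane Austen",
        retrieval_context=["Pride and Prejudice (1813) is a novel by Jane Austen."],
    )
    # `model` selects the judge LLM; any supported model string (or a custom
    # wrapper) can be used, which is the "supports all models" point above.
    metric = RagasMetric(threshold=0.5, model="gpt-3.5-turbo")
    evaluate([test_case], [metric])
    ```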

  • Show HN: Times faster LLM evaluation with Bayesian optimization
    6 projects | news.ycombinator.com | 13 Feb 2024
    Fair question.

    Evaluate refers to the phase after training to check if the training is good.

    Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at evaluation. Evaluation can be slow (it might even be slower than training if you're fine-tuning on a small, domain-specific subset)!

    So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation; however, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple computers, but none of them takes advantage of the fact that many evaluation queries might be similar, and all of them try to evaluate on every given query. That's where this project might come in handy.
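    To make the idea concrete, here is a hedged sketch (not the linked project's actual code) of the core trick: estimating the evaluation score from a subset of queries instead of scoring all of them. The real project selects queries with Bayesian optimization; uniform random sampling stands in for that here.

    ```python
    # Illustrative subset evaluation; the linked project picks informative
    # queries via Bayesian optimization rather than sampling at random.
    import random
    from typing import Callable, Sequence

    def estimate_eval_score(
        score_fn: Callable[[str], float],  # grades the model's answer to one query
        queries: Sequence[str],
        sample_size: int = 50,
        seed: int = 0,
    ) -> float:
        """Approximate a full evaluation by scoring only a sample of queries."""
        rng = random.Random(seed)
        subset = rng.sample(list(queries), min(sample_size, len(queries)))
        return sum(score_fn(q) for q in subset) / len(subset)

    # Hypothetical usage: score_fn would call the model and grade its output.
    # approx = estimate_eval_score(my_scorer, all_eval_queries)
    ```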

  • Implemented 12+ LLM evaluation metrics so you don't have to
    1 project | news.ycombinator.com | 13 Dec 2023
    A link to a Reddit post (with no discussion) that links to this repo:

    https://github.com/confident-ai/deepeval

  • Show HN: I implemented a range of evaluation metrics for LLMs that runs locally
    1 project | news.ycombinator.com | 11 Dec 2023
  • These 5 Open Source AI Startups are changing the AI Landscape
    7 projects | dev.to | 16 Nov 2023
    Star DeepEval on GitHub and contribute to the advancement of LLM evaluation frameworks! 🌟
  • FLaNK Stack Weekly 06 Nov 2023
    21 projects | dev.to | 6 Nov 2023
  • Why we replaced Pinecone with PGVector 😇
    1 project | dev.to | 2 Nov 2023
    Pinecone, the leading closed-source vector database provider, is known for being fast, scalable, and easy to use. Its blazing-fast vector search makes it a popular choice for large-scale RAG applications. Our initial infrastructure for Confident AI, the world's first open-source evaluation infrastructure for LLMs, utilized Pinecone to cluster LLM observability log data in production. However, after weeks of experimentation, we made the decision to replace it entirely with pgvector. Pinecone's simplistic design is deceptive due to several hidden complexities, particularly in integrating with existing data storage solutions. For example, it forces a complicated architecture, and its restrictive metadata storage capacity makes it troublesome for data-intensive workloads.
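    For a sense of what the pgvector side of such a migration looks like, here is a hedged sketch (not Confident AI's actual code; the table, dimensions, and connection string are hypothetical): vector search becomes plain SQL inside the Postgres instance that already holds the rest of the application's data.

    ```python
    # Hedged pgvector sketch, assuming psycopg 3 and the pgvector extension.
    import psycopg

    with psycopg.connect("postgresql://localhost/appdb") as conn:
        conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
        conn.execute(
            "CREATE TABLE IF NOT EXISTS llm_logs ("
            "id bigserial PRIMARY KEY, body text, embedding vector(3))"
        )
        conn.execute(
            "INSERT INTO llm_logs (body, embedding) VALUES (%s, %s::vector)",
            ("example log line", "[0.1, 0.2, 0.3]"),
        )
        # Nearest-neighbour search sits next to the relational data,
        # with no separate vector-database service to integrate.
        rows = conn.execute(
            "SELECT body FROM llm_logs ORDER BY embedding <-> %s::vector LIMIT 5",
            ("[0.1, 0.2, 0.3]",),
        ).fetchall()
    ```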
  • Show HN: Unit Testing for LLMs
    1 project | news.ycombinator.com | 26 Oct 2023
  • Show HN: DeepEval – Unit Testing for LLMs (Open Science)
    1 project | news.ycombinator.com | 5 Oct 2023

Stats

Basic deepeval repo stats
Mentions: 22
Stars: 1,769
Activity: 9.9
Last Commit: 2 days ago
