fiddler-auditor VS promptfoo

Compare fiddler-auditor and promptfoo to see how they differ.

fiddler-auditor

Fiddler Auditor is a tool to evaluate language models. (by fiddler-labs)

promptfoo

Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration. (by promptfoo)
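To give a concrete sense of how promptfoo is driven, below is a minimal, illustrative config of the kind its CLI evaluates. The top-level fields, provider string, and assertion type follow promptfoo's documented YAML format, but the prompt, variable, and test case here are invented purely for illustration.

```yaml
# promptfooconfig.yaml -- illustrative sketch, not taken from either project's repo
prompts:
  - "Summarize the following support ticket in one sentence: {{ticket}}"

providers:
  - openai:gpt-3.5-turbo   # any supported provider id can go here

tests:
  - vars:
      ticket: "My order arrived damaged and I would like a replacement."
    assert:
      # case-insensitive substring check on the model output
      - type: icontains
        value: replacement
```

Running `npx promptfoo@latest eval` against a file like this produces a pass/fail matrix per prompt, provider, and test case, which is what makes it usable as a regression gate in CI.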
                fiddler-auditor                             promptfoo
Mentions        2                                           21
Stars           148                                         3,100
Growth          4.1%                                        14.0%
Activity        8.1                                         9.9
Last commit     3 months ago                                4 days ago
Language        Python                                      TypeScript
License         GNU General Public License v3.0 or later    MIT License
Mentions - the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

fiddler-auditor

Posts with mentions or reviews of fiddler-auditor. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-09.
  • I asked 60 LLMs a set of 20 questions
    10 projects | news.ycombinator.com | 9 Sep 2023
    This is really cool!

    I've been using this auditor tool that some friends at Fiddler created: https://github.com/fiddler-labs/fiddler-auditor

    They went with a langchain interface for custom evals, which I really like. I'm curious to hear whether anyone has tried both of these. What's been your key takeaway from them?

  • I'm looking for good ways to audit the LLM projects I am working on right now.
    1 project | /r/LLM | 21 Jun 2023
    I have only found a handful of tools that work well. One of my favorites is the LLM Auditor by the data science team at Fiddler. It essentially multiplies your ability to run audits across multiple types of models and generate robustness reports.
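For contrast with promptfoo's declarative config, the LangChain-based custom-evaluation interface mentioned in the first post above looks roughly like the sketch below. This is a from-memory approximation of the pattern shown in the fiddler-auditor README: the class and method names (LLMEval, SimilarGeneration, evaluate_prompt_robustness) and their parameters should be treated as assumptions and checked against the current repo.

```python
# Rough sketch of fiddler-auditor's LangChain-based robustness check.
# Names and signatures approximate the README example and are not verified
# against the current API.
from langchain.llms import OpenAI
from sentence_transformers import SentenceTransformer

from auditor.evaluation.evaluate import LLMEval
from auditor.evaluation.expected_behavior import SimilarGeneration

# Any LangChain LLM wrapper can stand in as the model under test.
llm_under_test = OpenAI(model_name="text-davinci-003", temperature=0.0)

# Expected behavior: paraphrased prompts should yield semantically similar outputs.
similar_generation = SimilarGeneration(
    similarity_model=SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2"),
    similarity_threshold=0.75,
)

evaluation = LLMEval(
    llm=llm_under_test,
    expected_behavior=similar_generation,
)

# Perturbs the prompt, re-queries the model, and scores output similarity,
# producing the kind of robustness report the second post refers to.
report = evaluation.evaluate_prompt_robustness(
    prompt="Which country is Mount Everest located in?",
    pre_context="You are a concise, factual assistant.",
)
```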

promptfoo

Posts with mentions or reviews of promptfoo. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-09.

What are some alternatives?

When comparing fiddler-auditor and promptfoo, you can also consider the following projects:

GodMode - AI Chat Browser: Fast, Full webapp access to ChatGPT / Claude / Bard / Bing / Llama2! I use this 20 times a day.

shap-e - Generate 3D objects conditioned on text or images

ChainForge - An open-source visual programming environment for battle-testing prompts to LLMs.

prompt-engineering - Tips and tricks for working with Large Language Models like OpenAI's GPT-4.

bench - A tool for evaluating LLMs

WizardLM - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder and WizardMath

TheoremQA - The dataset and code for paper: TheoremQA: A Theorem-driven Question Answering dataset

chat-ui - Open source codebase powering the HuggingChat app

litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)

WizardVicunaLM - LLM that combines the principles of wizardLM and vicunaLM

evals - Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.