| | fiddler-auditor | promptfoo |
|---|---|---|
| Mentions | 2 | 21 |
| Stars | 148 | 3,100 |
| Growth | 4.1% | 14.0% |
| Activity | 8.1 | 9.9 |
| Latest commit | 3 months ago | 4 days ago |
| Language | Python | TypeScript |
| License | GNU General Public License v3.0 or later | MIT License |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed, with recent commits weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
fiddler-auditor
- I asked 60 LLMs a set of 20 questions
This is really cool!
I've been using this auditor tool that some friends at Fiddler created: https://github.com/fiddler-labs/fiddler-auditor
They went with a LangChain interface for custom evals, which I really like. I'm curious to hear from anyone who has tried both of these: what has been your key takeaway?
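For context, a custom-eval run with fiddler-auditor looks roughly like the sketch below. This is reconstructed from the project's README rather than from the comment above; the class and method names (`LLMEval`, `SimilarGeneration`, `evaluate_prompt_robustness`) and the model identifiers are assumptions that may differ across versions.

```python
# Rough sketch of a fiddler-auditor robustness check (names assumed from the
# README; verify against the version you install).
from langchain.llms import OpenAI
from sentence_transformers import SentenceTransformer

from auditor.evaluation.evaluate import LLMEval
from auditor.evaluation.expected_behavior import SimilarGeneration

# The LLM under test, wrapped in a LangChain interface.
llm = OpenAI(model_name="text-davinci-003", temperature=0.0)

# Expected behavior: paraphrased versions of the prompt should produce
# generations that stay semantically similar to the original response.
similarity_model = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")
expected = SimilarGeneration(
    similarity_model=similarity_model,
    similarity_threshold=0.75,
)

evaluator = LLMEval(llm=llm, expected_behavior=expected)

# Perturbs the prompt, runs each variant through the model, and returns a
# robustness report that can be rendered in a notebook.
report = evaluator.evaluate_prompt_robustness(
    prompt="Which popular drink has been scientifically proven to extend your life expectancy by many decades?",
    pre_context="Answer the following question in a concise manner.\n",
)
```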
- I'm looking for good ways to audit the LLM projects I'm working on right now. I've only found a handful of tools that work well. One of my favorites is the LLM Auditor by the data science team at Fiddler; it essentially multiplies your ability to run audits across multiple types of models and generate robustness reports.
promptfoo
- Google CodeGemma: Open Code Models Based on Gemma [pdf]
- AI Infrastructure Landscape
- Promptfoo – Testing and Evaluation for LLMs
- Show HN: Prompt-Engineering Tool: AI-to-AI Testing for LLM
Super interesting. We've been experimenting with [promptfoo](https://github.com/promptfoo/promptfoo) at my work, and this looks very similar.
- GitHub – promptfoo/promptfoo: Test your prompts
- I asked 60 LLMs a set of 20 questions
In case anyone's interested in running their own benchmark across many LLMs, I've built a generic harness for this at https://github.com/promptfoo/promptfoo.
I encourage people considering LLM applications to test the models on their _own data and examples_ rather than extrapolating general benchmarks.
Out of the box, this library supports OpenAI, Anthropic, Google, Llama and Code Llama, any model on Replicate, any model on Ollama, and more. As an example, I wrote up a benchmark comparing GPT model censorship with Llama models here: https://promptfoo.dev/docs/guides/llama2-uncensored-benchmar.... Hope this helps someone.
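To give a concrete picture of what such a harness looks like, here is a minimal promptfoo configuration sketch. The provider IDs, prompt, and assertions are illustrative placeholders, not taken from the benchmark linked above.

```yaml
# promptfooconfig.yaml -- minimal sketch; swap in the providers and models
# you actually have API keys or local weights for.
prompts:
  - "Answer the following question concisely: {{question}}"

providers:
  - openai:gpt-3.5-turbo   # hosted model via the OpenAI API
  - ollama:llama2          # local model served by Ollama

tests:
  - vars:
      question: "What is the capital of France?"
    assert:
      - type: icontains
        value: "Paris"
  - vars:
      question: "Summarize the plot of Hamlet in one sentence."
    assert:
      - type: llm-rubric
        value: "Mentions that Hamlet seeks revenge for his father's death"
```

Running `npx promptfoo@latest eval` evaluates every prompt/test pair against each provider, and `npx promptfoo@latest view` opens the side-by-side results matrix in the browser.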
- Ask HN: Prompt Manager for Developers
- DeepEval – Unit Testing for LLMs
- Show HN: Knit – A Better LLM Playground
- Show HN: CLI for testing and evaluating LLM outputs
What are some alternatives?
GodMode - AI Chat Browser: Fast, Full webapp access to ChatGPT / Claude / Bard / Bing / Llama2! I use this 20 times a day.
shap-e - Generate 3D objects conditioned on text or images
ChainForge - An open-source visual programming environment for battle-testing prompts to LLMs.
prompt-engineering - Tips and tricks for working with Large Language Models like OpenAI's GPT-4.
bench - A tool for evaluating LLMs
WizardLM - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder and WizardMath
TheoremQA - The dataset and code for paper: TheoremQA: A Theorem-driven Question Answering dataset
chat-ui - Open source codebase powering the HuggingChat app
litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
WizardVicunaLM - LLM that combines the principles of wizardLM and vicunaLM
evals - Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.