|  | fiddler-auditor | ChainForge |
| --- | --- | --- |
| Mentions | 2 | 14 |
| Stars | 148 | 2,055 |
| Growth | 4.1% | - |
| Activity | 8.1 | 8.9 |
| Latest Commit | 3 months ago | 8 days ago |
| Language | Python | TypeScript |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
fiddler-auditor
- I asked 60 LLMs a set of 20 questions
This is really cool!
I've been using this auditor tool that some friends at Fiddler created: https://github.com/fiddler-labs/fiddler-auditor
They went with a LangChain interface for custom evals, which I really like. I'm curious to hear if anyone has tried both of these tools. What's been your key takeaway from them?
- I'm looking for good ways to audit the LLM projects I'm working on right now.
I have only found a handful of tools that work well. One of my favorites is the LLM Auditor by the data science team at Fiddler. It essentially multiplies your ability to run audits across multiple types of models and generate robustness reports.
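For anyone curious what those audits look like in practice, here is a minimal sketch of a robustness check with fiddler-auditor's LangChain-based API. The class and method names (`LLMEval`, `SimilarGeneration`, `evaluate_prompt_robustness`) and their parameters are recalled from the project README and may have drifted, so treat this as an assumption-laden outline rather than verified usage:

```python
# Hedged sketch of a fiddler-auditor robustness audit; the names
# (LLMEval, SimilarGeneration, evaluate_prompt_robustness) are
# assumptions recalled from the README; verify against the repo.
from langchain.llms import OpenAI
from sentence_transformers import SentenceTransformer
from auditor.evaluation.expected_behavior import SimilarGeneration
from auditor.evaluation.evaluate import LLMEval

llm = OpenAI(model_name="text-davinci-003", temperature=0.0)

# Expected behavior: paraphrased prompts should produce semantically
# similar generations, scored by a sentence-transformer model.
similar_generation = SimilarGeneration(
    similarity_model=SentenceTransformer("all-MiniLM-L6-v2"),
    similarity_threshold=0.75,
)

audit = LLMEval(llm=llm, expected_behavior=similar_generation)

# Perturbs the prompt with paraphrases and reports where the answers
# drift, i.e. the "robustness report" mentioned above.
report = audit.evaluate_prompt_robustness(
    prompt="Which popes were born in Rome?",
    pre_context="Answer the question in a concise manner.",
)
```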
ChainForge
- ChainForge is an open-source visual prompt engineering programming environment
- AI for ChainForge Beta
- Anthropic Claude for Google Sheets
This seems like a Sheets implementation of something like ChainForge (https://github.com/ianarawjo/ChainForge). Curious that Anthropic is entering the LLMOps tooling space; this definitely comes as a surprise to me, as both OpenAI and Hugging Face seem to avoid building prompt engineering tooling themselves. Is this a business strategy of Anthropic's? An experiment? Regardless, it's very cool to see a company like them throw their hat into the LLMOps ring beyond being a model provider. Interested to see what comes next.
- ChainForge, a visual programming environment for prompt engineering
- I asked 60 LLMs a set of 20 questions
ChainForge has similar functionality for comparing responses across models: https://github.com/ianarawjo/ChainForge
LocalAI creates a GPT-compatible HTTP API for local LLMs: https://github.com/go-skynet/LocalAI
Is it necessary to have an HTTP API for each model in a comparative study?
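One answer to that question: since LocalAI exposes a single OpenAI-compatible endpoint, a comparative study needs only one HTTP API, not one per model. Below is a minimal sketch; the host, port, and model names are assumptions about a local setup:

```python
# Minimal sketch: query LocalAI through its OpenAI-compatible
# /v1/chat/completions endpoint. Host, port, and model names are
# assumptions; point them at your own LocalAI instance.
import requests

def ask(model: str, question: str) -> str:
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": model,
            "messages": [{"role": "user", "content": question}],
            "temperature": 0.2,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Same question, several local models: one API surface, no per-model glue.
for model in ["ggml-gpt4all-j", "llama-2-7b-chat"]:
    print(model, "->", ask(model, "What is prompt robustness?"))
```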
- Show HN: Knit – A Better LLM Playground
- Show HN: ChainForge, a visual tool for prompt engineering and LLM evaluation
I think you should probably mention that its source is available! [0]
I don't personally have a need for this right now, but I can really see the use for the parameterised queries, as well as comparisons across models.
Thanks for your efforts!
0: https://github.com/ianarawjo/ChainForge
- Continue multiple conversations simultaneously across multiple LLMs
- ChainForge now supports chat evaluation
- GPT-Prompt-Engineer
No problem! I guess I'll make a plug myself: we've been working on a similar 'prompt engineering' tool, ChainForge (https://github.com/ianarawjo/ChainForge). It's targeted towards slightly different users and use cases than promptfoo: probably more geared towards early-stage, 'quick-and-dirty' explorations of differences between prompts and models for less experienced programmers, versus the kind of continuous benchmarking and verification testing that promptfoo offers.
I particularly like promptfoo's support for CI, which I haven't seen anywhere else and which is very important for developers pushing prompts into production (especially since OpenAI keeps updating their models every few months...).
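To make the CI angle concrete: promptfoo drives its prompt-by-model matrix from a declarative config checked into the repo. A hedged sketch follows; the provider IDs and assertion types mirror promptfoo's documented scheme as I recall it, but may have changed, so check its docs:

```yaml
# promptfooconfig.yaml; an illustrative sketch, not a verified config.
prompts:
  - "Answer concisely: {{question}}"
  - "You are a careful assistant. {{question}}"
providers:
  - openai:gpt-3.5-turbo
  - openai:gpt-4
tests:
  - vars:
      question: "What does an LLM robustness audit check for?"
    assert:
      - type: contains
        value: "robust"
```

Running `npx promptfoo eval` against a file like this produces the pass/fail grid, which is what lets it slot into a CI pipeline.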
What are some alternatives?
GodMode - AI Chat Browser: Fast, Full webapp access to ChatGPT / Claude / Bard / Bing / Llama2! I use this 20 times a day.
langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
bench - A tool for evaluating LLMs
agenta - The all-in-one LLM developer platform: prompt management, evaluation, human feedback, and deployment all in one place.
TheoremQA - The dataset and code for paper: TheoremQA: A Theorem-driven Question Answering dataset
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
DocGPT - 💻📚💡 DoctorGPT provides advanced LLM prompting for PDFs and webpages. [Moved to: https://github.com/FeatureBaseDB/DoctorGPT]
flux - Graph-based LLM power tool for exploring many completions in parallel.