promptfoo vs ChainForge

| | promptfoo | ChainForge |
| --- | --- | --- |
| Mentions | 20 | 14 |
| Stars | 2,921 | 2,015 |
| Growth | 23.7% | - |
| Activity | 9.9 | 8.9 |
| Latest Commit | 5 days ago | 7 days ago |
| Language | TypeScript | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
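The exact weighting behind the activity score is not published. As a rough illustration only, the sketch below computes a recency-weighted score using exponential decay over commit age; the half-life and the decay curve are assumptions for demonstration, not the index's actual formula:

```ts
// Illustrative sketch: one plausible way to weight recent commits more
// heavily than old ones. The 30-day half-life is an arbitrary assumption.
function activityScore(commitAgesInDays: number[], halfLifeDays = 30): number {
  return commitAgesInDays
    .map((ageDays) => Math.pow(0.5, ageDays / halfLifeDays))
    .reduce((sum, weight) => sum + weight, 0);
}

// Four fresh commits outscore four stale ones, even though the counts match.
console.log(activityScore([1, 3, 7, 10]));       // ~3.6
console.log(activityScore([90, 150, 250, 400])); // ~0.16
```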
promptfoo
- Google CodeGemma: Open Code Models Based on Gemma [pdf]
- AI Infrastructure Landscape
- Promptfoo – Testing and Evaluation for LLMs
- Show HN: Prompt-Engineering Tool: AI-to-AI Testing for LLM
Super interesting. We've been experimenting with [promptfoo](https://github.com/promptfoo/promptfoo) at my work, and this looks very similar.
- GitHub – promptfoo/promptfoo: Test your prompts
- I asked 60 LLMs a set of 20 questions
In case anyone's interested in running their own benchmark across many LLMs, I've built a generic harness for this at https://github.com/promptfoo/promptfoo.
I encourage people considering LLM applications to test the models on their _own data and examples_ rather than extrapolating general benchmarks.
This library supports OpenAI, Anthropic, Google, Llama and Codellama, any model on Replicate, and any model on Ollama, etc. out of the box. As an example, I wrote up a benchmark comparing GPT model censorship with Llama models here: https://promptfoo.dev/docs/guides/llama2-uncensored-benchmar.... Hope this helps someone.
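To make that concrete, here is a minimal sketch using promptfoo's Node API. The provider IDs, model names, and assertion below are illustrative assumptions; check the promptfoo docs for the exact identifiers your installed version accepts:

```ts
import promptfoo from 'promptfoo';

// Minimal sketch of a side-by-side benchmark across two providers.
// Provider IDs and model names are assumptions, not verified against
// any particular promptfoo release.
const results = await promptfoo.evaluate({
  prompts: ['Answer in one sentence: {{question}}'],
  providers: ['openai:gpt-3.5-turbo', 'ollama:chat:llama2'],
  tests: [
    {
      vars: { question: 'What is the capital of France?' },
      // Mark the row as failed if the output never mentions Paris.
      assert: [{ type: 'contains', value: 'Paris' }],
    },
  ],
});

console.log(JSON.stringify(results.results, null, 2));
```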
- Ask HN: Prompt Manager for Developers
- DeepEval – Unit Testing for LLMs
- Show HN: Knit – A Better LLM Playground
- Show HN: CLI for testing and evaluating LLM outputs
ChainForge
- ChainForge is an open-source visual prompt engineering programming environment
- AI for ChainForge Beta
- Anthropic Claude for Google Sheets
This seems like a sheets implementation of something like ChainForge (https://github.com/ianarawjo/ChainForge). Curious that Anthropic is entering the LLMOps tooling space; this definitely comes as a surprise to me, as both OpenAI and HuggingFace seem to avoid building prompt engineering tooling themselves. Is this a business strategy of Anthropic's? An experiment? Regardless, it's very cool to see a company like them throw their hat into the LLMOps space beyond being a model provider. Interested to see what comes next.
- ChainForge, a visual programming environment for prompt engineering
- I asked 60 LLMs a set of 20 questions
ChainForge has similar functionality for comparing models: https://github.com/ianarawjo/ChainForge
LocalAI creates a GPT-compatible HTTP API for local LLMs: https://github.com/go-skynet/LocalAI
Is it necessary to have an HTTP API for each model in a comparative study?
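For context on what a GPT-compatible API buys you here: the official OpenAI client can be pointed at a local server by overriding its base URL, so the same comparison code can drive hosted and local models alike. A minimal sketch, assuming LocalAI's default port and a placeholder model name:

```ts
import OpenAI from 'openai';

// LocalAI speaks the OpenAI wire format, so the stock client works once
// the base URL points at the local server. Port 8080 is LocalAI's default;
// 'local-model' is a placeholder for whatever model you have loaded.
const client = new OpenAI({
  baseURL: 'http://localhost:8080/v1',
  apiKey: 'sk-local', // LocalAI ignores the key, but the client requires one
});

const completion = await client.chat.completions.create({
  model: 'local-model',
  messages: [{ role: 'user', content: 'Summarize promptfoo in one line.' }],
});

console.log(completion.choices[0].message.content);
```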
- Show HN: Knit – A Better LLM Playground
- Show HN: ChainForge, a visual tool for prompt engineering and LLM evaluation
I think you should probably mention that its source is available! [0]
I don't personally have a need for this right now, but I can really see the use for the parameterised queries, as well as comparisons across models.
Thanks for your efforts!
0: https://github.com/ianarawjo/ChainForge
- Continue multiple conversations simultaneously across multiple LLMs
- ChainForge now supports chat evaluation
- GPT-Prompt-Engineer
No problem! I guess I will make a plug myself: we've been working on a similar prompt-engineering tool, ChainForge (https://github.com/ianarawjo/ChainForge). It's targeted towards slightly different users and use cases than promptfoo, probably more geared towards early-stage, 'quick-and-dirty' prompting explorations of differences between prompts and models for less experienced programmers, versus the kind of continuous benchmarking and verification testing that promptfoo offers.
I particularly like promptfoo's support for CI, which I haven't seen anywhere else and which is very important for developers pushing prompts into production (especially since OpenAI keeps updating their models every few months...).
What are some alternatives?
shap-e - Generate 3D objects conditioned on text or images
langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
prompt-engineering - Tips and tricks for working with Large Language Models like OpenAI's GPT-4.
agenta - The all-in-one LLM developer platform: prompt management, evaluation, human feedback, and deployment all in one place.
WizardLM - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder and WizardMath
promptfoo - Test your prompts. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality. [Moved to: https://github.com/promptfoo/promptfoo]
chat-ui - Open source codebase powering the HuggingChat app
litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
DocGPT - 💻📚💡 DoctorGPT provides advanced LLM prompting for PDFs and webpages. [Moved to: https://github.com/FeatureBaseDB/DoctorGPT]
WizardVicunaLM - LLM that combines the principles of wizardLM and vicunaLM
GodMode - AI Chat Browser: Fast, Full webapp access to ChatGPT / Claude / Bard / Bing / Llama2! I use this 20 times a day.