promptfoo vs llama-chat

| | promptfoo | llama-chat |
|---|---|---|
| Mentions | 5 | 2 |
| Stars | 328 | 6 |
| Growth | - | - |
| Activity | 10.0 | 4.5 |
| Latest commit | 11 months ago | about 1 month ago |
| Language | TypeScript | TypeScript |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
promptfoo
-
Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
Jumping in because I'm a big believer in (1) local LLMs, and (2) evals specific to individual use cases.
[0] https://github.com/typpo/promptfoo
- Meta Llama 3
-
Launch HN: Talc AI (YC S23) – Test Sets for AI
Congrats on the launch!
I've been interested in automatic test set generation because I find that the chore of writing tests is one of the reasons people shy away from evals. I recently landed eval test set generation for promptfoo (https://github.com/typpo/promptfoo), but it's non-RAG, so it's simpler than your implementation.
I was also eyeballing this paper, https://arxiv.org/abs/2401.03038, which outlines a method for generating asserts from prompt version history that may also be useful for these eval tools.
-
GPT-Prompt-Engineer
Thanks for the promptfoo mention. For anyone else who might prefer deterministic, programmatic evaluation of LLM outputs, I've been building promptfoo: https://github.com/typpo/promptfoo
Example asserts include basic string checks, regex, is-json, cosine similarity, etc.
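As a sketch of how those assert types fit together, a minimal `promptfooconfig.yaml` might look like the following. The prompt, provider, and values are illustrative placeholders, not taken from the project:

```yaml
# Hypothetical promptfooconfig.yaml exercising the assert types mentioned above.
prompts:
  - "Reply with a JSON object describing {{topic}}"
providers:
  - openai:gpt-3.5-turbo   # any configured provider works here
tests:
  - vars:
      topic: "the moon"
    assert:
      - type: contains        # basic string check
        value: "moon"
      - type: regex           # regex match against the output
        value: "\\{.*\\}"
      - type: is-json         # output must parse as JSON
      - type: similar         # cosine similarity against a reference string
        value: "A JSON description of the moon"
        threshold: 0.75
```

Running `promptfoo eval` against a config like this scores each test deterministically, without needing an LLM grader.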
llama-chat
-
Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
Streaming is not a problem (it's just a simple flag: https://github.com/wiktor-k/llama-chat/blob/main/index.ts#L2...) but I've never used voice input.
The examples show image input though: https://github.com/ollama/ollama/blob/main/docs/api.md#reque...
Maybe you can file an issue here: https://github.com/ollama/ollama/issues
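For context, the streaming flag referenced above corresponds to Ollama's `stream` option on `/api/chat`, which emits one JSON object per line (NDJSON), each carrying a partial message. A minimal TypeScript sketch, assuming a local Ollama server on the default port; `collectStreamedContent` and `streamChat` are hypothetical helper names:

```typescript
// Sketch under stated assumptions: with "stream": true, Ollama's /api/chat
// returns newline-delimited JSON chunks whose field names follow the API docs
// linked above.

interface ChatChunk {
  message?: { role: string; content: string };
  done?: boolean;
}

// Pure helper (hypothetical name): stitch streamed partial messages back
// into one assistant reply.
function collectStreamedContent(ndjson: string): string {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => (JSON.parse(line) as ChatChunk).message?.content ?? "")
    .join("");
}

// Network sketch; assumes an Ollama server at localhost:11434, so it is
// defined here but not invoked.
async function streamChat(model: string, prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    body: JSON.stringify({
      model,
      stream: true, // the "simple flag" mentioned in the comment above
      messages: [{ role: "user", content: prompt }],
    }),
  });
  // Buffer the whole NDJSON response, then reassemble the assistant text.
  return collectStreamedContent(await res.text());
}
```

The same request body accepts an `images` array on a message for multimodal models, per the API examples linked above.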
What are some alternatives?
rebuff - LLM Prompt Injection Detector
cloudseeder - One-click install internet appliances that operate on your terms. Transform your home computer into a sovereign and secure cloud.
gpt-engineer - Specify what you want it to build, the AI asks for clarification, and then builds it.
llama-cpp-python - Python bindings for llama.cpp
ChainForge - An open-source visual programming environment for battle-testing prompts to LLMs.
TensorRT-LLM - An easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines containing state-of-the-art optimizations for efficient inference on NVIDIA GPUs, plus components for creating Python and C++ runtimes that execute those engines.
plandex - AI driven development in your terminal. Designed for large, real-world tasks.
mlx - MLX: An array framework for Apple silicon
shap-e - Generate 3D objects conditioned on text or images
ollama_local_rag
gateway - A Blazing Fast AI Gateway. Route to 200+ LLMs with 1 fast & friendly API.
llama3 - The official Meta Llama 3 GitHub site