perses VS promptbench

Compare perses vs promptbench and see what their differences are.

perses

The CNCF candidate for observability visualisation. Already supports Prometheus - more data sources to come! (by perses)
                perses              promptbench
Mentions        5                   4
Stars           523                 2,061
Growth          10.1%               8.6%
Activity        9.7                 9.2
Latest commit   5 days ago          1 day ago
Language        TypeScript          Python
License         Apache License 2.0  MIT License
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
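
As a rough illustration of what "recent commits have higher weight" can mean, here is a minimal sketch of a recency-weighted activity score. It is not the exact formula used for these rankings; the exponential decay, the 30-day half-life, and the percentile mapping mentioned in the comment are assumptions made only for the example.

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Toy recency-weighted activity score: each commit contributes a weight
    that decays exponentially with its age, so recent commits count for more."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for commit_date in commit_dates:
        age_days = (now - commit_date).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)  # weight halves every half_life_days
    return score

# Projects would then be ranked by this score and mapped onto a 0-10 scale by
# percentile, so a 9.x rating corresponds to a project near the top ~10% of
# the tracked population.
```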

perses

Posts with mentions or reviews of perses. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-31.

promptbench

Posts with mentions or reviews of promptbench. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-13.
  • Show HN: Times faster LLM evaluation with Bayesian optimization
    6 projects | news.ycombinator.com | 13 Feb 2024
    Fair question.

    Evaluation refers to the phase after training, where you check whether the training went well.

    Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at evaluation. Evaluation can be slow (it might even be slower than training if you're fine-tuning on a small, domain-specific subset)!

    So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation; however, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple machines, but none of them takes advantage of the fact that many evaluation queries may be similar: they all evaluate on every given query. That's where this project might come in handy.

  • FLaNK Weekly 31 December 2023
    25 projects | dev.to | 31 Dec 2023
  • FLaNK 25 December 2023
    33 projects | dev.to | 26 Dec 2023
  • Promptbench: A Unified Library for Evaluating and Understanding LLMs
    1 project | news.ycombinator.com | 25 Dec 2023
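
The comment quoted above contrasts frameworks that run the model on every evaluation query with approaches that exploit overlap between queries. The sketch below illustrates that idea in the simplest possible form; `model_fn`, the `(prompt, expected)` examples, and the exact-match scoring are placeholders for this example, not the API of promptbench or any of the linked projects.

```python
from functools import lru_cache

def evaluate(model_fn, examples):
    """Naive evaluation loop: call the (slow, expensive) model once per unique
    prompt, caching results so duplicated or repeated queries cost nothing."""

    @lru_cache(maxsize=None)
    def cached_model(prompt: str) -> str:
        return model_fn(prompt)

    correct = 0
    for prompt, expected in examples:
        prediction = cached_model(prompt)
        correct += int(prediction.strip() == expected)
    return correct / len(examples)

# A smarter evaluator could go further and query the model only on a
# representative subset of prompts (e.g. selected by Bayesian optimization,
# as the Show HN project proposes), extrapolating the metric instead of
# running the model on every example.
```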

What are some alternatives?

When comparing perses and promptbench you can also consider the following projects:

Mixin - Mixin is a trait/mixin and bytecode weaving framework for Java using ASM

awesome-gpt-prompt-engineering - A curated list of awesome resources, tools, and other shiny things for GPT prompt engineering.

OpenVoice - Instant voice cloning by MyShell.

osgameclones - Open Source Clones of Popular Games

bytecode-viewer - A Java 8+ Jar & Android APK Reverse Engineering Suite (Decompiler, Editor, Debugger & More)

opencompass - OpenCompass is an LLM evaluation platform, supporting a wide range of models (InternLM2, GPT-4, LLaMa2, Qwen, GLM, Claude, etc.) over 100+ datasets.

pixie - Instant Kubernetes-Native Application Observability

JavaOnRaspberryPi - Sources and scripts for the book "Getting started with Java on the Raspberry Pi"

Recaf - The modern Java bytecode editor

Zolver - Automatic jigsaw puzzle solver

Maker - Lightweight, full-featured, low-level dynamic Java class generator designed for ease of use.

FLiPStackWeekly - FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more...