| | EvalAI | llm-leaderboard |
|---|---|---|
| Mentions | 4 | 6 |
| Stars | 1,688 | 266 |
| Growth | 1.5% | - |
| Activity | 8.9 | 7.8 |
| Latest commit | 8 days ago | 8 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
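The page does not publish the exact weighting behind the activity number, so the following is only a rough sketch of how such a score could be computed: recent commits weighted with an exponential decay, and the raw score converted to a 0-10 percentile rating. The function names, the half-life parameter, and the decay form are illustrative assumptions, not the site's actual formula.

```python
from datetime import datetime, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Hypothetical recency-weighted commit count: each commit contributes
    2 ** (-age_in_days / half_life_days), so newer commits count more.
    (The decay form and half-life are assumptions, not the site's formula.)"""
    now = datetime.now(timezone.utc)
    return sum(2 ** (-(now - d).days / half_life_days) for d in commit_dates)

def activity_rating(score, all_scores):
    """Map a raw score to a 0-10 rating by percentile rank, so that a 9.0
    means the project is in the top 10% of all tracked projects."""
    rank = sum(s <= score for s in all_scores) / len(all_scores)
    return round(10 * rank, 1)
```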
EvalAI
-
Hacker News top posts: Jun 25, 2021
EvalAI: An Open-Source Alternative to Kaggle (16 comments)
-
EvalAI: An Open-Source Alternative to Kaggle
I agree that the comparison to Kaggle is a bit old and we have removed it (https://github.com/Cloud-CV/EvalAI/pull/3502). :-)
llm-leaderboard
-
Email Obfuscation Rendered Almost Ineffective Against ChatGPT
This is assuming you’re using a really big LLM behind a paid service. There are plenty of smaller open-source models. It’s not clear at what point a model stops being “large”, but when fine-tuned they are capable of matching the largest LLMs on narrow tasks.
Some of these open source models can even be run on your local machine. It’d be very inexpensive to run thousands of pages through it.
https://llm-leaderboard.streamlit.app/
-
Is the ChatGPT and Bing AI boom already over?
palm-2-l-instruct scores 0.909 on Winogrande few-shot.
https://github.com/LudwigStumpp/llm-leaderboard/blob/main/RE...
-
Meta is preparing to launch a new open source coding model, dubbed Code Llama, that may release as soon as next week
They said it "rivals OpenAI’s Codex model", which performs worse than starcoder-16b on HumanEval-Python (pass@1) according to https://github.com/LudwigStumpp/llm-leaderboard
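The HumanEval comparison above is reported as pass@1. For reference, pass@k is conventionally computed with the unbiased estimator from the HumanEval/Codex paper; the sketch below assumes n generated samples per problem, of which c pass the unit tests (the function name is illustrative).

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): probability that at
    least one of k samples drawn from n generated solutions (of which c
    pass the unit tests) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# pass@1 reduces to the fraction of correct samples:
# pass_at_k(n=200, c=30, k=1) == 30 / 200
```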
-
All Model Leaderboards (that I know)
-
GPT-3.5 and GPT-4 performance in Open LLM Leaderboard tests?
Yes, see this leaderboard for a comparison: https://llm-leaderboard.streamlit.app/
-
Sharing my comparison methodology for LLM models
So I've tried to use a basic matrix factorization method to estimate unknown benchmark scores for models based on the known benchmark scores. Basically, I assume each model has some intrinsic "quality" score, and each known benchmark is assumed to be a linear function of that quality score. This is similar to matrix factorization with only 1 latent factor (though the bias values have to be handled differently). Then I fit my parameters to the known benchmark scores from https://github.com/LudwigStumpp/llm-leaderboard and estimate the remaining benchmark scores.
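A minimal sketch of the approach described in that comment, assuming benchmark scores are held in a models-by-benchmarks matrix with NaN for unknown entries and roughly normalized to a common scale; the NumPy implementation, variable names, and gradient-descent fitting are my own illustration rather than the commenter's actual code.

```python
import numpy as np

def fill_scores(S, iters=5000, lr=1e-3):
    """Estimate missing benchmark scores (NaN entries of S, shaped
    [n_models, n_benchmarks], scores assumed roughly on a 0-1 scale)
    with a single-latent-factor model:
        S[m, b] ~= slope[b] * quality[m] + bias[b]
    fitted by gradient descent on the observed entries only.
    The model is only identified up to a rescaling of quality/slope."""
    mask = ~np.isnan(S)
    n_models, n_benchmarks = S.shape
    q = np.zeros(n_models)                              # per-model "quality"
    slope = np.ones(n_benchmarks)                       # per-benchmark scale
    bias = np.where(mask, S, 0.0).sum(0) / mask.sum(0)  # per-benchmark offset

    for _ in range(iters):
        pred = np.outer(q, slope) + bias
        err = np.where(mask, pred - S, 0.0)             # error on known cells only
        q_new = q - lr * (err * slope).sum(1)
        slope_new = slope - lr * (err * q[:, None]).sum(0)
        bias = bias - lr * err.sum(0)
        q, slope = q_new, slope_new

    return np.outer(q, slope) + bias                    # dense estimate incl. missing cells
```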
What are some alternatives?
GPBoost - Combining tree-boosting with Gaussian process and mixed effects models
llm-humaneval-benchmarks
StratosphereLinuxIPS - Slips, a free-software behavioral Python intrusion prevention system (IDS/IPS) that uses machine learning to detect malicious behaviors in network traffic. Stratosphere Laboratory, AIC, FEL, CVUT in Prague.
chain-of-thought-hub - Benchmarking large language models' complex reasoning ability with chain-of-thought prompting
evaluate - 🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
searchGPT - Grounded search engine (i.e. with source reference) based on LLM / ChatGPT / OpenAI API. It supports web search, file content search etc.
Lars-Ulrich-Challenge - Algorithmic and AI MIDI Drums Generator Implementation
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
metaflow - :rocket: Build and manage real-life ML, AI, and data science projects with ease!
unilm - Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
pycm - Multi-class confusion matrix library in Python
alpa - Training and serving large-scale neural networks with auto parallelization.