searchGPT vs llm-leaderboard

| | searchGPT | llm-leaderboard |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 570 | 270 |
| Growth | - | - |
| Activity | 7.2 | 7.8 |
| Latest commit | about 1 year ago | 9 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we are tracking.
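The exact formula behind the activity score isn't published; as a rough illustration of what such a recency weighting might look like, here is a minimal sketch assuming an exponential half-life decay. The function name, the 30-day half-life, and the dates are all hypothetical.

```python
from datetime import datetime, timezone

def activity_score(commit_dates, now, half_life_days=30.0):
    """Recency-weighted commit count: a commit made at `now` counts as 1.0,
    one made half_life_days earlier counts as 0.5, and so on.
    (Illustrative only; the site does not publish its actual formula.)"""
    return sum(
        0.5 ** ((now - d).days / half_life_days)
        for d in commit_dates
    )

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
# Three commits last week vs. three commits a year ago:
recent = [datetime(2024, 5, 28, tzinfo=timezone.utc)] * 3
old = [datetime(2023, 6, 1, tzinfo=timezone.utc)] * 3
print(activity_score(recent, now))  # ~2.74: recent commits count almost fully
print(activity_score(old, now))     # ~0.0006: year-old commits barely register
```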
Posts mentioning llm-leaderboard:

- Email Obfuscation Rendered Almost Ineffective Against ChatGPT
This assumes you're using a really big LLM behind a paid service. There are plenty of smaller open-source models; it's not clear at what point a model stops counting as "large," but when fine-tuned they can match the largest LLMs on narrow tasks.
Some of these open-source models can even be run on your local machine, so it would be very inexpensive to run thousands of pages through one.
https://llm-leaderboard.streamlit.app/
- Is the ChatGPT and Bing AI boom already over?
palm-2-l-instruct scores 0.909 on Winogrande few-shot.
https://github.com/LudwigStumpp/llm-leaderboard/blob/main/RE...
- Meta is preparing to launch a new open source coding model, dubbed Code Llama, that may release as soon as next week
They said it "rivals OpenAI’s Codex model", which performs worse than starcoder-16b on HumanEval-Python (pass@1), according to https://github.com/LudwigStumpp/llm-leaderboard
- All Model Leaderboards (that I know)
- GPT-3.5 and GPT-4 performance in Open LLM Leaderboard tests?
Yes, see this leaderboard for a comparison: https://llm-leaderboard.streamlit.app/
- Sharing my comparison methodology for LLM models
So I've tried a basic matrix factorization method to estimate unknown benchmark scores for models from the known ones. Basically, I assume each model has some intrinsic "quality" score, and each known benchmark is a linear function of that quality score. This is similar to matrix factorization with a single latent factor (though the bias values have to be handled differently). I then fit my parameters to the known benchmark scores from https://github.com/LudwigStumpp/llm-leaderboard and estimate the remaining ones, as in the sketch below.
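As a concrete illustration of that approach, here is a minimal sketch: one latent quality value per model, with each benchmark modeled as a linear function of that quality (a per-benchmark scale and bias), fit by gradient descent on the known entries only. The score matrix and its layout are made up for illustration; this is a sketch of the idea, not the poster's actual code.

```python
import numpy as np

# Toy score matrix: rows are models, columns are benchmarks,
# NaN marks an unknown score. (Values are made up for illustration.)
scores = np.array([
    [0.80, 0.65, np.nan],   # model A
    [0.75, np.nan, 0.55],   # model B
    [np.nan, 0.70, 0.60],   # model C
])

known = ~np.isnan(scores)
n_models, n_benchmarks = scores.shape

# One latent "quality" per model; each benchmark j is modeled as a
# linear function of quality: pred[i, j] = quality[i] * scale[j] + bias[j].
rng = np.random.default_rng(0)
quality = rng.normal(scale=0.1, size=n_models)
scale = np.ones(n_benchmarks)
bias = np.zeros(n_benchmarks)

lr = 0.05
for _ in range(5000):
    pred = np.outer(quality, scale) + bias
    err = np.where(known, pred - scores, 0.0)  # ignore unknown entries
    # Gradient descent on squared error over the known scores only.
    quality -= lr * (err * scale).sum(axis=1)
    scale -= lr * (err * quality[:, None]).sum(axis=0)
    bias -= lr * err.sum(axis=0)

# Fill the missing cells with the fitted predictions.
estimated = np.where(known, scores, np.outer(quality, scale) + bias)
print(np.round(estimated, 3))
```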
What are some alternatives?
chatGPT-cheatsheet - An ever-evolving introduction to ChatGPT, AI, and machine learning (including prompt examples and Python-built chatbots)
llm-humaneval-benchmarks
chatgpt-extractive-shortener - Shortens a paragraph of text with ChatGPT, using successive rounds of word-level extractive summarization.
chain-of-thought-hub - Benchmarking large language models' complex reasoning ability with chain-of-thought prompting
gpt4docstrings - Generating Python docstrings with OpenAI ChatGPT!!
EvalAI - Evaluating state of the art in AI
AutoLearn-GPT - ChatGPT learns automatically.
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment
aidoc - A simple CLI tool to generate documentation for your Python projects automatically.
unilm - Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
gmail-assist - Get control of your overflowing inbox using GPT-3 to classify your emails by importance.
alpa - Training and serving large-scale neural networks with auto parallelization.