| | alpa | llm-leaderboard |
|---|---|---|
| Mentions | 4 | 6 |
| Stars | 2,986 | 266 |
| Growth | 0.8% | - |
| Activity | 5.1 | 7.8 |
| Last commit | 5 months ago | 8 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
alpa
How to Train Large Models on Many GPUs?
- Alpa does training and serving with 175B parameter models https://github.com/alpa-projects/alpa
How much does it actually cost, in terms of compute power, for OpenAI to respond?
alpa.ai states "You will need at least 350GB GPU memory on your entire cluster to serve the OPT-175B model. For example, you can use 4 x AWS p3.16xlarge instances, which provide 4 (instance) x 8 (GPU/instance) x 16 (GB/GPU) = 512 GB memory."
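The quoted arithmetic checks out; here is a one-line sanity check, with the numbers taken directly from the quote above:

```python
# Memory math from the quote above: 4 p3.16xlarge instances for OPT-175B.
instances, gpus_per_instance, gb_per_gpu = 4, 8, 16
total_gb = instances * gpus_per_instance * gb_per_gpu
print(total_gb)         # 512 GB of cluster GPU memory
assert total_gb >= 350  # quoted minimum needed to serve OPT-175B
```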
- Alpa: Auto-parallelizing large model training and inference (by UC Berkeley)
Alpa: Automated Model-Parallel Deep Learning
GitHub code: https://github.com/alpa-projects/alpa
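For context, Alpa exposes its auto-parallelization as a decorator on a JAX step function. A minimal sketch on a toy linear model (the model, loss, and update rule are illustrative, not from the Alpa docs; real use also needs cluster setup, e.g. via alpa.init):

```python
# Sketch of Alpa's @alpa.parallelize on a toy JAX train step.
import alpa
import jax
import jax.numpy as jnp

@alpa.parallelize  # Alpa searches for an execution plan (sharding, pipelining)
def train_step(params, batch):
    def loss_fn(p):
        preds = batch["x"] @ p["w"] + p["b"]  # toy linear model
        return jnp.mean((preds - batch["y"]) ** 2)
    grads = jax.grad(loss_fn)(params)
    # Plain SGD update; Alpa decides how to distribute the computation.
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
```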
llm-leaderboard
Email Obfuscation Rendered Almost Ineffective Against ChatGPT
This assumes you’re using a really big LLM behind a paid service. There are plenty of smaller open-source models. It’s not clear at what point a model stops counting as “large,” but when fine-tuned they are capable of matching the largest LLMs on narrow tasks.
Some of these open-source models can even be run on your local machine. It’d be very inexpensive to run thousands of pages through one.
https://llm-leaderboard.streamlit.app/
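Running a small open model locally really is only a few lines with the Hugging Face transformers pipeline. A minimal sketch, with gpt2 standing in for whichever model you pick from the leaderboard:

```python
# A minimal sketch of local inference with transformers; "gpt2" is just a
# small illustrative model, not a recommendation from the leaderboard (a
# capable instruction-tuned model would be needed for real extraction).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
page_text = "Contact: j0hn [dot] d0e [at] example [dot] com"
out = generator(f"De-obfuscate the email address in: {page_text}\nAnswer:",
                max_new_tokens=20)
print(out[0]["generated_text"])
```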
Is the ChatGPT and Bing AI boom already over?
palm-2-l-instruct scores 0.909 on Winogrande few-shot.
https://github.com/LudwigStumpp/llm-leaderboard/blob/main/RE...
Meta is preparing to launch a new open source coding model, dubbed Code Llama, that may release as soon as next week
They said it "rivals OpenAI’s Codex model", which performs worse than starcoder-16b on HumanEval-Python (pass@1) according to https://github.com/LudwigStumpp/llm-leaderboard
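For reference, pass@1 numbers on HumanEval are typically computed with the unbiased pass@k estimator from the Codex paper (Chen et al., 2021):

```python
# pass@k = 1 - C(n - c, k) / C(n, k) per problem, averaged over problems,
# where n samples were drawn and c of them passed the unit tests.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=20, c=5, k=1))  # 0.25: expected pass rate with one sample
```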
- All Model Leaderboards (that I know)
GPT-3.5 and GPT-4 performance in Open LLM Leaderboard tests?
Yes, see this leaderboard for a comparison: https://llm-leaderboard.streamlit.app/
Sharing my comparison methodology for LLM models
So I've tried to use a basic matrix factorization method to estimate unknown benchmark scores for models based on the known benchmark scores. Basically, I assume each model has some intrinsic "quality" score, and each known benchmark is assumed to be a linear function of that quality score. This is similar to matrix factorization with only one latent factor (though the bias values have to be handled differently). Then I fit my parameters to the known benchmark scores from https://github.com/LudwigStumpp/llm-leaderboard and estimate the remaining benchmark scores.
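A minimal sketch of that idea, modeling each observed score as slope[benchmark] * quality[model] + bias[benchmark] and fitting by gradient descent over the observed entries (the toy numbers are illustrative, not real leaderboard scores):

```python
# Rank-1 matrix factorization with per-benchmark slope and bias.
import numpy as np

def fit_rank1(scores, mask, lr=1e-3, steps=20000, seed=0):
    n_models, n_benchmarks = scores.shape
    rng = np.random.default_rng(seed)
    quality = rng.normal(scale=0.1, size=n_models)  # latent model quality
    slope = np.ones(n_benchmarks)                   # per-benchmark scale
    bias = np.zeros(n_benchmarks)                   # per-benchmark offset
    for _ in range(steps):
        pred = np.outer(quality, slope) + bias
        err = np.where(mask, pred - scores, 0.0)    # only observed entries
        quality -= lr * (err @ slope)
        slope -= lr * (err.T @ quality)
        bias -= lr * err.sum(axis=0)
    return np.outer(quality, slope) + bias          # dense score estimates

# Toy data: 3 models x 3 benchmarks, one unknown score (NaN) to estimate.
scores = np.array([[0.80, 0.60, 0.70],
                   [0.90, 0.75, np.nan],
                   [0.60, 0.40, 0.50]])
mask = ~np.isnan(scores)
estimates = fit_rank1(np.nan_to_num(scores), mask)
print(f"estimated missing score: {estimates[1, 2]:.3f}")
```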
What are some alternatives?
transformers - 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
llm-humaneval-benchmarks
hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
chain-of-thought-hub - Benchmarking large language models' complex reasoning ability with chain-of-thought prompting
determined - Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.
EvalAI - :cloud: :rocket: :bar_chart: :chart_with_upwards_trend: Evaluating state of the art in AI
FedML - FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs on any GPU cloud or on-premise cluster. Built on this library, FEDML Nexus AI (https://fedml.ai) is your generative AI platform at scale.
searchGPT - Grounded search engine (i.e. with source reference) based on LLM / ChatGPT / OpenAI API. It supports web search, file content search etc.
awesome-tensor-compilers - A list of awesome compiler projects and papers for tensor computation and deep learning.
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
adaptdl - Resource-adaptive cluster scheduler for deep learning training.
unilm - Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities