| | alpa | PaddlePaddle |
|---|---|---|
| Mentions | 4 | 6 |
| Stars | 2,986 | 21,625 |
| Growth | 0.8% | 0.5% |
| Activity | 5.1 | 10.0 |
| Latest commit | 5 months ago | 4 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
alpa
-
How to Train Large Models on Many GPUs?
- Alpa does training and serving with 175B-parameter models https://github.com/alpa-projects/alpa
-
How much does it actually cost, in terms of computing power, for OpenAI to respond?
alpa.ai states: "You will need at least 350GB GPU memory on your entire cluster to serve the OPT-175B model. For example, you can use 4 x AWS p3.16xlarge instances, which provide 4 (instance) x 8 (GPU/instance) x 16 (GB/GPU) = 512 GB memory." (A back-of-envelope check of these figures follows at the end of this section.)
- Alpa: Auto-parallelizing large model training and inference (by UC Berkeley)
-
Alpa: Automated Model-Parallel Deep Learning
GitHub code: https://github.com/alpa-projects/alpa
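The 350 GB figure quoted above is consistent with the model size: OPT-175B has 175 billion parameters, and at 2 bytes per parameter in fp16 the weights alone occupy roughly 350 GB, before activations or caches. Here is a back-of-envelope sketch of that arithmetic (a hypothetical helper, not Alpa code; plain host code, so it builds with nvcc or any C++ compiler):

```cuda
// Hypothetical sanity check of the quoted memory figures (not Alpa code).
#include <cstdio>

int main() {
    const double params     = 175e9;  // OPT-175B parameter count
    const double bytes_fp16 = 2.0;    // bytes per parameter at half precision
    const double weights_gb = params * bytes_fp16 / 1e9;  // ~350 GB of weights

    // AWS p3.16xlarge: 8 V100 GPUs with 16 GB of HBM2 each.
    const int    instances  = 4;
    const double cluster_gb = instances * 8 * 16.0;       // 512 GB total

    printf("weights ~%.0f GB, cluster %.0f GB, headroom %.0f GB\n",
           weights_gb, cluster_gb, cluster_gb - weights_gb);
    return 0;
}
```

The ~162 GB of headroom over the raw weights is what serving actually consumes at runtime (activations, communication buffers, and so on), which is why the requirement is stated as "at least 350GB".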
PaddlePaddle
-
List of AI-Models
-
Ask HN: Are there any notable Chinese FLOSS projects?
PaddlePaddle?
https://github.com/PaddlePaddle/Paddle
Also, Baidu has quite a few OSS projects out there in general.
https://github.com/baidu
-
Volcano vs Yunikorn vs Knative
Volcano is a batch scheduler on top of kube-batch, targeting spark-operator, plain old MPI, PaddlePaddle, and Cromwell HPC workloads.
-
Baidu AI Researchers Introduce SE-MoE That Proposes Elastic MoE Training With 2D Prefetch And Fusion Communication Over Hierarchical Storage
- I have an issue only with __habs for the half datatype. Please help! (a minimal __habs usage sketch follows this list)
- Alternatives to Google Colab?
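For context on the __habs question above: __habs is the half-precision absolute-value intrinsic declared in CUDA's cuda_fp16.h, and it is callable from device code only, which is a common source of confusion. A minimal, hypothetical usage sketch (the kernel and variable names are ours, not from the original post):

```cuda
// Minimal __habs demo: half-precision absolute value from cuda_fp16.h.
// __habs is a device-only intrinsic; the host/device conversion helpers
// __float2half and __half2float are usable on both sides.
#include <cstdio>
#include <cuda_fp16.h>

__global__ void habs_kernel(const __half* in, __half* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = __habs(in[i]);  // |x| on half data
}

int main() {
    const int n = 4;
    const float vals[n] = {-1.5f, 2.0f, -0.25f, 0.0f};
    __half h_in[n], h_out[n];
    for (int i = 0; i < n; ++i) h_in[i] = __float2half(vals[i]);

    __half *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(__half));
    cudaMalloc(&d_out, n * sizeof(__half));
    cudaMemcpy(d_in, h_in, n * sizeof(__half), cudaMemcpyHostToDevice);

    habs_kernel<<<1, n>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(__half), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i)
        printf("|%g| = %g\n", vals[i], __half2float(h_out[i]));

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Note that half-precision arithmetic generally requires compute capability 5.3 or higher, so build with something like nvcc -arch=sm_53 (or newer).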
What are some alternatives?
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
tensorflow - An Open Source Machine Learning Framework for Everyone
hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
PyTorch-NLP - Basic Utilities for PyTorch Natural Language Processing (NLP)
determined - Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.
Keras - Deep Learning for humans
FedML - FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs on any GPU cloud or on-premise cluster. Built on this library, FEDML Nexus AI (https://fedml.ai) is your generative AI platform at scale.
MLflow - Open source platform for the machine learning lifecycle
awesome-tensor-compilers - A list of awesome compiler projects and papers for tensor computation and deep learning.
xgboost - Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
adaptdl - Resource-adaptive cluster scheduler for deep learning training.
gym - A toolkit for developing and comparing reinforcement learning algorithms.