Alpa Alternatives
Similar projects and alternatives to alpa
-
datasets
🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
-
determined
Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.
-
hivemind
Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
-
FedML
FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, enables running any AI job on any GPU cloud or on-premise cluster. Built on this library, FEDML Nexus AI (https://fedml.ai) is a generative AI platform at scale.
-
awesome-tensor-compilers
A list of awesome compiler projects and papers for tensor computation and deep learning.
-
InfluxDB
Power Real-Time Data Analytics at Scale. Get real-time insights from all types of time series data with InfluxDB. Ingest, query, and analyze billions of data points in real-time with unbounded cardinality.
-
PaddlePaddle
PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle core framework: high-performance single-machine and distributed training, and cross-platform deployment, for deep learning and machine learning)
alpa reviews and mentions
-
How to Train Large Models on Many GPUs?
- Alpa does training and serving with 175B parameter models https://github.com/alpa-projects/alpa
-
How much does it actually cost, in terms of compute power, for OpenAI to respond?
alpa.ai states "You will need at least 350GB GPU memory on your entire cluster to serve the OPT-175B model. For example, you can use 4 x AWS p3.16xlarge instances, which provide 4 (instance) x 8 (GPU/instance) x 16 (GB/GPU) = 512 GB memory."
- Alpa: Auto-parallelizing large model training and inference (by UC Berkeley)
-
Alpa: Automated Model-Parallel Deep Learning
GitHub code: https://github.com/alpa-projects/alpa
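The cluster-memory arithmetic in the OPT-175B quote above is easy to sanity-check. The sketch below is ours, not part of the Alpa docs; the 350 GB requirement and the p3.16xlarge figures (4 instances, 8 GPUs per instance, 16 GB per GPU) come from the quoted text, while the function name is an illustrative assumption.

```python
def cluster_gpu_memory_gb(instances: int, gpus_per_instance: int, gb_per_gpu: int) -> int:
    """Total GPU memory available across the cluster, in GB."""
    return instances * gpus_per_instance * gb_per_gpu

# Minimum cluster GPU memory needed to serve OPT-175B, per the quoted docs.
REQUIRED_GB = 350

# 4 x AWS p3.16xlarge: 8 GPUs per instance, 16 GB per GPU.
total = cluster_gpu_memory_gb(instances=4, gpus_per_instance=8, gb_per_gpu=16)
print(total)                 # 512
print(total >= REQUIRED_GB)  # True
```

So the suggested 4-instance p3.16xlarge cluster clears the 350 GB floor with 162 GB of headroom.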
Stats
alpa-projects/alpa is an open-source project licensed under the Apache License 2.0, which is an OSI-approved license.
The primary programming language of alpa is Python.