xla vs pytorch-lightning

| | xla | pytorch-lightning |
|---|---|---|
| Mentions | 8 | 9 |
| Stars | 2,296 | 26,952 |
| Growth | 1.7% | 1.3% |
| Activity | 9.9 | 9.9 |
| Latest commit | 5 days ago | 4 days ago |
| Language | C++ | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
xla
- Who uses Google TPUs for inference in production?
> The PyTorch/XLA Team at Google
Meanwhile, you have an issue from 5 years ago with zero support:
https://github.com/pytorch/xla/issues/202
- Google TPU v5p beats Nvidia H100
PyTorch has had an XLA backend for years. I don't know how performant it is though. https://pytorch.org/xla
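For reference, a minimal sketch of what using that backend looks like, assuming the separate torch_xla package is installed (e.g. on a TPU runtime):

```python
# Minimal sketch of running an op through the PyTorch/XLA backend.
# Assumes the torch_xla package is installed alongside PyTorch.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()              # default XLA device, e.g. a TPU core
x = torch.randn(4, 4, device=device)  # ops are recorded lazily as an XLA graph
y = (x @ x).sum()
xm.mark_step()                        # cut the graph so XLA compiles and runs it
print(y.item())
```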
- Why Did Google Brain Exist?
It's curtains for XLA, to be precise. And PyTorch officially supports an XLA backend nowadays too ([1]), which kind of puts JAX and PyTorch on the same foundation.
1. https://github.com/pytorch/xla
- Accelerating AI inference?
PyTorch supports other kinds of accelerators (e.g. FPGAs, and https://github.com/pytorch/glow), but unless you want to become an ML systems engineer and have money and time to throw away, or a business case to fund it, it is not worth it. In general, both PyTorch and TensorFlow have hardware abstractions that will compile down to device code (XLA, https://github.com/pytorch/xla, https://github.com/pytorch/glow). TPUs and GPUs have very different strengths, so getting top performance requires a lot of manual optimization. Considering the cost of training LLMs, it is time well spent.
- [D] Colab TPU low performance
While apparently TPUs can theoretically achieve great speedups, getting to the point where they beat a single GPU requires a lot of fiddling around and debugging. A specific setup is required to make it work properly. E.g., here it says that to exploit TPUs you might need a better CPU than the one in Colab just to keep the TPU busy. The tutorials I looked at oversimplified the whole matter; the same goes for pytorch-lightning, which implies that switching to TPU is as easy as changing a single parameter. Furthermore, none of the tutorials I saw (even after specifically searching for that) went into detail about why and how to set up a GCS bucket for data loading.
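The "single parameter" in question is roughly the Trainer's accelerator flag. A minimal sketch, assuming a recent pytorch-lightning version (older releases used a tpu_cores argument instead) and a LightningModule defined elsewhere:

```python
# Sketch of the one-flag TPU switch in PyTorch Lightning.
import pytorch_lightning as pl

# model = ...  # assumed: a LightningModule defined elsewhere
trainer = pl.Trainer(accelerator="tpu", devices=8)  # vs. accelerator="gpu"
# trainer.fit(model)  # everything else stays the same -- the claim the
#                     # comment above found oversimplified in practice
```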
- How to train large deep learning models as a startup
- Distributed Training Made Easy with PyTorch-Ignite
XLA on TPUs via pytorch/xla.
- [P] PyTorch for TensorFlow Users - A Minimal Diff
I don't know of any such trick except for using TensorFlow. In fact, I benchmarked PyTorch XLA vs TensorFlow and found that the former's performance was quite abysmal: PyTorch XLA is very slow on Google Colab. The developers' explanation, as I understood it, was that TF was using features not available to the PyTorch XLA developers, and that they therefore could not compete on performance. The situation may be different today; I don't really know.
pytorch-lightning
- SB-1047 will stifle open-source AI and decrease safety
It's very easy to get started, right in your Terminal, no fees! No credit card at all.
And there are cloud providers like https://replicate.com/ and https://lightning.ai/ that will let you use your LLM via an API key just like you did with OpenAI if you need that.
You don't need OpenAI - nobody does.
- Lightning AI Studios – A persistent GPU cloud environment
- How to get started with artificial intelligence?
https://see.stanford.edu/Course/CS229 https://lightning.ai/ https://www.youtube.com/watch?v=00s9ireCnCw&t=57s https://towardsdatascience.com/
- Best practice for saving logits/activation values of model in PyTorch Lightning
I've been wondering what the recommended method is for saving logits/activations with PyTorch Lightning. I've looked at Callbacks, Loggers, and ModelHooks, but none of their use cases seem to cover this kind of activity (even if I were to create my own custom variants of each utility). The utility of the ModelCheckpoint callback makes me feel like a custom Callback would be the way to go, but I'm not quite sure. This closed GitHub issue does address my issue to some extent.
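A minimal sketch of that custom-Callback route, assuming validation_step returns the logits and using hook names from the pytorch_lightning.Callback API (the output path is a placeholder):

```python
# Sketch: collect logits from each validation batch, save them at epoch end.
import torch
import pytorch_lightning as pl

class LogitSaver(pl.Callback):
    def __init__(self, out_path="val_logits.pt"):  # placeholder path
        self.out_path = out_path
        self.logits = []

    def on_validation_batch_end(self, trainer, pl_module, outputs,
                                batch, batch_idx, dataloader_idx=0):
        # assumes the LightningModule's validation_step returns the logits
        self.logits.append(outputs.detach().cpu())

    def on_validation_epoch_end(self, trainer, pl_module):
        torch.save(torch.cat(self.logits), self.out_path)
        self.logits.clear()

# usage: trainer = pl.Trainer(callbacks=[LogitSaver()])
```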
- New to ML, which is easier to learn - Tensorflow or PyTorch?
- PyTorch Lightning – DL framework to train, deploy, and ship AI fast
- We just released a complete open-source solution for accelerating Stable Diffusion pretraining and fine-tuning!
Our codebase for the diffusion models builds heavily on OpenAI's ADM codebase, lucidrains, Stable Diffusion, Lightning, and Hugging Face. Thanks for open-sourcing!
- An elegant and strong PyTorch Trainer
For lightweight use, pytorch-lightning is too heavy, and its source code is very difficult for beginners to read, at least for me.
- [D] Mixed Precision Training: Difference between BF16 and FP16
For the A100 GPU, theoretical performance is the same for FP16 and BF16, and both rely on the same number of bits, meaning memory use should be the same. However, since BF16 is quite newly added to PyTorch, performance still seems to depend on the underlying operators used (pytorch-lightning debugging is in progress here).
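For context, switching between the two is a one-line change with plain PyTorch autocast (recent pytorch-lightning versions expose the same choice via precision="bf16-mixed" vs precision="16-mixed"); a minimal sketch:

```python
# Sketch: BF16 vs FP16 autocast in plain PyTorch on a CUDA device.
# Both are 16-bit formats; realized throughput depends on the kernels used.
import torch

model = torch.nn.Linear(1024, 1024).cuda()
x = torch.randn(8, 1024, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.bfloat16):  # or torch.float16
    y = model(x)

print(y.dtype)  # torch.bfloat16
```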
What are some alternatives?
NCCL - Optimized primitives for collective multi-GPU communication
lnd - Lightning Network Daemon ⚡️
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
Eclair - A scala implementation of the Lightning Network.
why-ignite - Why should we use PyTorch-Ignite?
mmdetection - OpenMMLab Detection Toolbox and Benchmark
pocketsphinx - A small speech recognizer
composer - Supercharge Your Model Training
ignite - High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.
umbrel - A beautiful home server OS for self-hosting with an app store. Buy a pre-built Umbrel Home with umbrelOS, or install on a Raspberry Pi 4, Pi 5, any Ubuntu/Debian system, or a VPS.
ompi - Open MPI main development repository
Keras - Deep Learning for humans