xla vs Pytorch

| | xla | Pytorch |
|---|---|---|
| Mentions | 8 | 340 |
| Stars | 2,296 | 78,016 |
| Growth | 1.7% | 1.4% |
| Activity | 9.9 | 10.0 |
| Last commit | 5 days ago | 7 days ago |
| Language | C++ | Python |
| License | GNU General Public License v3.0 or later | BSD 3-Clause License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
xla
-
Who uses Google TPUs for inference in production?
> The PyTorch/XLA Team at Google
Meanwhile you have an issue from 5 years ago with 0 support
https://github.com/pytorch/xla/issues/202
-
Google TPU v5p beats Nvidia H100
PyTorch has had an XLA backend for years. I don't know how performant it is though. https://pytorch.org/xla
-
Why Did Google Brain Exist?
It's curtains for XLA, to be precise. And PyTorch officially supports the XLA backend nowadays too ([1]), which kind of puts JAX and PyTorch on the same foundation.
1. https://github.com/pytorch/xla
-
Accelerating AI inference?
Pytorch supports other kinds of accelerators (e.g. FPGAs, and https://github.com/pytorch/glow), but unless you want to become an ML systems engineer and have money and time to throw away, or a business case to fund it, it is not worth it. In general, both pytorch and tensorflow have hardware abstractions that will compile down to device code (XLA, https://github.com/pytorch/xla, https://github.com/pytorch/glow). TPUs and GPUs have very different strengths, so getting top performance requires a lot of manual optimizations. Considering the cost of training LLMs, it is time well spent.
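As a rough illustration of that abstraction, here is a minimal, hedged sketch of running a model through pytorch/xla's lazy-tensor backend (assuming the torch_xla package is installed, which requires an XLA-capable environment such as a TPU VM):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                 # the XLA device, e.g. a TPU core

model = torch.nn.Linear(10, 2).to(device)
x = torch.randn(8, 10, device=device)
loss = model(x).sum()
loss.backward()

# Tensors are lazy up to this point; mark_step() cuts the recorded graph
# and hands it to XLA to compile and execute on the device.
xm.mark_step()
```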
-
[D] Colab TPU low performance
While TPUs can apparently achieve great speedups in theory, getting to the point where they beat a single GPU requires a lot of fiddling around and debugging. A specific setup is required to make it work properly. E.g., here it says that to exploit TPUs you might need a better CPU than the one in Colab to keep the TPU busy. The tutorials I looked at oversimplified the whole matter; the same goes for pytorch-lightning, which implies that switching to TPU is as easy as changing a single parameter. Furthermore, none of the tutorials I saw (even after specifically searching for that) went into detail about why and how to set up a GCS bucket for data loading.
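For reference, a hedged sketch of the data-loading setup this comment alludes to, assuming torch_xla is installed: MpDeviceLoader prefetches batches from a host-side DataLoader onto the TPU, which is exactly where a weak Colab CPU becomes the bottleneck.

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl

device = xm.xla_device()
dataset = torch.utils.data.TensorDataset(torch.randn(1024, 10))
loader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=4)

# Wraps the CPU-side loader so batches are staged onto the TPU in the
# background; it also issues the per-batch graph step for you.
device_loader = pl.MpDeviceLoader(loader, device)

for (batch,) in device_loader:
    pass  # training step goes here
```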
-
How to train large deep learning models as a startup
-
Distributed Training Made Easy with PyTorch-Ignite
XLA on TPUs via pytorch/xla.
-
[P] PyTorch for TensorFlow Users - A Minimal Diff
I don't know of any such trick except for using TensorFlow. In fact, I benchmarked PyTorch XLA vs TensorFlow and found that the former's performance was quite abysmal: PyTorch XLA is very slow on Google Colab. The developers' explanation, as I understood it, was that TF was using features not available to the PyTorch XLA developers and that they therefore could not compete on performance. The situation may be different today, I don't know really.
Pytorch
-
Image classifier with a convolutional neural network (CNN)
PyTorch (https://pytorch.org/)
-
AI enthusiasm #9 - A multilingual chatbot📣🈸
torch is a package to manage tensors and dynamic neural networks in python (GitHub)
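A two-line illustration of that description (a generic sketch, nothing project-specific): tensors plus an autograd graph that is recorded dynamically as the operations run.

```python
import torch

x = torch.ones(3, requires_grad=True)
y = (x * x).sum()   # the graph is built on the fly as ops execute
y.backward()        # d(sum(x^2))/dx = 2x
print(x.grad)       # tensor([2., 2., 2.])
```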
-
Einsum in 40 Lines of Python
PyTorch also has some support for them, but it's quite incomplete and has so many issues that it's basically unusable. Its future development is also unclear. https://github.com/pytorch/pytorch/issues/60832
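The comment appears to be about PyTorch's prototype named tensors (the linked issue tracks their status); a minimal sketch of what does work today, with the caveat that the feature is experimental and emits a warning:

```python
import torch

t = torch.randn(4, 3, names=("batch", "channel"))
print(t.names)                 # ('batch', 'channel')
print(t.sum("channel").shape)  # torch.Size([4]); reduce by axis name
```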
-
Library for Machine learning and quantum computing
TensorFlow
-
My Favorite DevTools to Build AI/ML Applications!
TensorFlow, developed by Google, and PyTorch, developed by Facebook, are two of the most popular frameworks for building and training complex machine learning models. TensorFlow is known for its flexibility and robust scalability, making it suitable for both research prototypes and production deployments. PyTorch is praised for its ease of use, simplicity, and dynamic computational graph that allows for more intuitive coding of complex AI models. Both frameworks support a wide range of AI models, from simple linear regression to complex deep neural networks.
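A small sketch of the "dynamic computational graph" point: the forward pass is plain Python, so data-dependent control flow needs no special graph-mode constructs.

```python
import torch

class DynamicNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 8)

    def forward(self, x):
        # The loop count depends on the input itself; the autograd graph
        # is rebuilt on every call, so this just works.
        steps = int(x.abs().sum().item()) % 3 + 1
        for _ in range(steps):
            x = torch.relu(self.layer(x))
        return x

out = DynamicNet()(torch.randn(2, 8))
```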
-
penzai: JAX research toolkit for building, editing, and visualizing neural nets
> does PyTorch have a similar concept
of course https://github.com/pytorch/pytorch/blob/main/torch/utils/_py...
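A hedged sketch, assuming the truncated link points at PyTorch's private pytree utilities (a guess; the module path below is that assumption), which play the same role as JAX's pytrees: mapping a function over every leaf of a nested container.

```python
import torch
# Assumption: the linked file is torch/utils/_pytree.py (a private API,
# so it may change between releases).
from torch.utils._pytree import tree_map

params = {"w": torch.ones(2, 2), "layers": [torch.zeros(3), torch.ones(1)]}
doubled = tree_map(lambda leaf: leaf * 2, params)
print(doubled["layers"][1])  # tensor([2.])
```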
-
Tinygrad: Hacked 4090 driver to enable P2P
fyi should work on most 40xx[1]
[1] https://github.com/pytorch/pytorch/issues/119638#issuecommen...
-
The Elements of Differentiable Programming
Sure, right here: https://github.com/pytorch/pytorch/blob/main/torch/autograd/...
Here's the documentation: https://pytorch.org/tutorials/intermediate/forward_ad_usage....
> When an input, which we call “primal”, is associated with a “direction” tensor, which we call “tangent”, the resultant new tensor object is called a “dual tensor” for its connection to dual numbers[0].
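A runnable sketch of the dual-tensor API the quote describes (torch.autograd.forward_ad):

```python
import torch
import torch.autograd.forward_ad as fwAD

primal = torch.randn(3)
tangent = torch.randn(3)  # the "direction" tensor from the quote

with fwAD.dual_level():
    dual = fwAD.make_dual(primal, tangent)  # the "dual tensor"
    out = torch.sin(dual)
    value, jvp = fwAD.unpack_dual(out)      # jvp == cos(primal) * tangent
```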
-
Functions and operators for Dot and Matrix multiplication and Element-wise calculation in PyTorch
My post explains Dot, Matrix and Element-wise multiplication in PyTorch.
-
Dot vs Matrix vs Element-wise multiplication in PyTorch
In PyTorch with @, dot() or matmul():
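A short illustration of the three operations the post contrasts (a sketch of the standard calls, not the post's own code):

```python
import torch

a = torch.tensor([1., 2., 3.])
b = torch.tensor([4., 5., 6.])
A = torch.tensor([[1., 2.], [3., 4.]])
B = torch.tensor([[5., 6.], [7., 8.]])

torch.dot(a, b)  # dot product of 1-D tensors -> tensor(32.)
A @ B            # matrix multiplication, same as torch.matmul(A, B)
A * B            # element-wise (Hadamard) product, same as torch.mul(A, B)
```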
What are some alternatives?
NCCL - Optimized primitives for collective multi-GPU communication
Flux.jl - Relax! Flux is the ML library that doesn't make you tensor
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
mediapipe - Cross-platform, customizable ML solutions for live and streaming media.
why-ignite - Why should we use PyTorch-Ignite?
Apache Spark - A unified analytics engine for large-scale data processing
pocketsphinx - A small speech recognizer
flax - Flax is a neural network library for JAX that is designed for flexibility.
ignite - High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
ompi - Open MPI main development repository
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more