torchdynamo
A Python-level JIT compiler designed to make unmodified PyTorch programs faster. (by pytorch)
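As a sketch of the intended workflow (adapted from the TorchDynamo README's example; the backend here is a pass-through for illustration), TorchDynamo hooks the CPython frame evaluation API to capture FX graphs from unmodified code and hand them to a compiler function:

```python
from typing import List
import torch
import torchdynamo

def my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
    # Inspect the captured FX graph; a real backend would compile it.
    # Returning gm.forward simply falls back to eager execution.
    gm.graph.print_tabular()
    return gm.forward

@torchdynamo.optimize(my_compiler)
def fn(x, y):
    a = torch.cos(x)
    b = torch.sin(y)
    return a + b

fn(torch.randn(10), torch.randn(10))
```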
kernl
Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. (by ELS-RD)
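The "single line of code" refers to kernl's model optimization entry point. A minimal sketch, assuming the `optimize_model` API from the project's README, a recent NVIDIA GPU, and fp16 autocast at inference time:

```python
import torch
from transformers import AutoModel
from kernl.model_optimization import optimize_model

model = AutoModel.from_pretrained("bert-base-uncased").eval().cuda()
optimize_model(model)  # the advertised one-liner: swaps in Triton kernels

inputs = {
    "input_ids": torch.ones((1, 16), dtype=torch.long, device="cuda"),
    "attention_mask": torch.ones((1, 16), dtype=torch.long, device="cuda"),
}
with torch.inference_mode(), torch.cuda.amp.autocast():
    out = model(**inputs)
```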
| | torchdynamo | kernl |
|---|---|---|
| Mentions | 1 | 8 |
| Stars | 965 | 1,459 |
| Stars growth (month over month) | 1.2% | 0.9% |
| Activity | 3.5 | 1.5 |
| Latest commit | 17 days ago | 3 months ago |
| Language | Python | Jupyter Notebook |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
torchdynamo
Posts with mentions or reviews of torchdynamo. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-10-28.
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 1), what is the easiest way to speed up inference (assume only PyTorch and primarily GPU but also some CPU)? I have been using ONNX and Torchscript but there is a bit of a learning curve and sometimes it can be tricky to get the model to actually work. Is there anything else worth trying? I am enthused by things like TorchDynamo (although I have not tested it extensively) due to its apparent ease of use. I also saw the post yesterday about Kernl using (OpenAI) Triton kernels to speed up transformer models which also looks interesting. Are things like SageMaker Neo or NeuralMagic worth trying? My only reservation with some of these is they still seem to be pretty model/architecture specific. I am a little reluctant to put much time into these unless I know others have had some success first.
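Whichever route is taken (TorchScript, ONNX, TorchDynamo, or kernl), comparing them fairly needs a consistent timing harness. Below is a minimal, hypothetical helper for GPU inference; the warmup loop and `torch.cuda.synchronize` calls matter because CUDA launches are asynchronous and compiled paths often pay a one-time compilation cost on the first calls:

```python
import time
import torch

def benchmark(fn, *inputs, warmup=10, iters=50):
    """Return mean latency in seconds for fn(*inputs) on GPU."""
    with torch.inference_mode():
        for _ in range(warmup):       # absorb one-time compilation/caching costs
            fn(*inputs)
        torch.cuda.synchronize()      # CUDA is async: drain queued work first
        start = time.perf_counter()
        for _ in range(iters):
            fn(*inputs)
        torch.cuda.synchronize()      # wait for the timed work to finish
    return (time.perf_counter() - start) / iters
```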
kernl
Posts with mentions or reviews of kernl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-08.
- [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl
I periodically check kernl.ai to see whether the documentation and tutorial sections have been expanded. My advice is to put some real effort and focus into examples and tutorials; that is key for an optimization/acceleration library. 10x-ing the users of a library like this is much more likely to come from spending 10 out of every 100 developer hours writing tutorials than from redirecting 8 or 9 of those tutorial-writing hours to new features that only a small minority knows how to apply.
- [P] BetterTransformer: PyTorch-native free-lunch speedups for Transformer-based models
FlashAttention + quantization has, to the best of my knowledge, not yet been explored, but I think it would be a great engineering direction. I would not expect to see it natively in PyTorch's BetterTransformer any time soon, though. /u/pommedeterresautee & folks at ELS-RD did awesome work releasing kernl, where custom implementations (through OpenAI Triton) could perhaps easily live.
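For reference, BetterTransformer itself is a one-call transform. A minimal sketch using the Hugging Face Optimum wrapper, assuming a model architecture that BetterTransformer supports:

```python
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer

model = AutoModel.from_pretrained("bert-base-uncased")
# Swaps supported encoder layers for fused, PyTorch-native fastpath kernels
model = BetterTransformer.transform(model)
```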
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
Check https://github.com/ELS-RD/kernl/blob/main/src/kernl/optimizer/linear.py for an example.
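The linked file shows how kernl replaces `torch.nn.functional.linear` with a Triton-backed implementation. The sketch below is not kernl's kernel (see the linked source for that); it only illustrates the general Triton pattern of a jitted kernel plus a PyTorch-facing wrapper, using the canonical vector-add example:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                        # one program per block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                        # guard the ragged tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```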
- [P] Up to 12X faster GPU inference on Bert, T5 and other transformers with OpenAI Triton kernels
From https://github.com/ELS-RD/kernl/issues/141:
> Would it be possible to use kernl to speed up Stable Diffusion?
What are some alternatives?
When comparing torchdynamo and kernl you can also consider the following projects:
serve - Serve, optimize and scale PyTorch models in production
openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment