torchdynamo

A Python-level JIT compiler designed to make unmodified PyTorch programs faster. (by pytorch)
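TorchDynamo works by intercepting Python frames before they execute, capturing the tensor operations as a graph, and caching a compiled version guarded by properties of the inputs. The toy decorator below is a hypothetical, simplified illustration of that per-frame compile-and-guard idea in plain Python (no real bytecode rewriting or FX graph capture, and not the actual TorchDynamo API):

```python
from functools import wraps

def toy_dynamo(fn):
    """Conceptual sketch of TorchDynamo's caching strategy: 'compile'
    a function once per guard (here, the argument types) and reuse the
    cached version while the guard still holds. Illustration only."""
    cache = {}

    @wraps(fn)
    def wrapper(*args):
        guard = tuple(type(a) for a in args)  # recompile if input types change
        if guard not in cache:
            # the real TorchDynamo would capture an FX graph here and hand
            # it to a backend compiler; we just record a compile event
            cache[guard] = fn
            wrapper.compiles += 1
        return cache[guard](*args)

    wrapper.compiles = 0
    return wrapper

@toy_dynamo
def add(a, b):
    return a + b

add(1, 2)      # first (int, int) call: triggers a 'compile'
add(3, 4)      # same guard: cache hit, no recompile
add(1.0, 2.0)  # new guard (float, float): compiles again
print(add.compiles)  # → 2
```

The guard mechanism is what lets the real system run unmodified programs: when an assumption baked into a compiled graph stops holding, it simply recompiles rather than failing.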

Torchdynamo Alternatives

Similar projects and alternatives to torchdynamo

  • server

    The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)

  • deepsparse

    Sparsity-aware deep learning inference runtime for CPUs

  • BentoML

    The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!

  • serve

    Serve, optimize and scale PyTorch models in production (by pytorch)

  • kernl

    Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.

  • transformer-deploy

    Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀

  • openai-whisper-cpu

    Improving transcription performance of OpenAI Whisper for CPU based deployment

NOTE: The number of mentions indicates how often a project was mentioned in common posts, plus user-suggested alternatives. A higher count therefore suggests a more relevant torchdynamo alternative or a greater similarity.

torchdynamo reviews and mentions

Posts with mentions or reviews of torchdynamo. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-10-28.
  • [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
    8 projects | /r/MachineLearning | 28 Oct 2022
    For 1), what is the easiest way to speed up inference (assume only PyTorch and primarily GPU but also some CPU)? I have been using ONNX and TorchScript, but there is a bit of a learning curve and sometimes it can be tricky to get the model to actually work. Is there anything else worth trying? I am enthused by things like TorchDynamo (although I have not tested it extensively) due to its apparent ease of use. I also saw the post yesterday about Kernl using (OpenAI) Triton kernels to speed up transformer models, which also looks interesting. Are things like SageMaker Neo or NeuralMagic worth trying? My only reservation with some of these is they still seem to be pretty model/architecture specific. I am a little reluctant to put much time into these unless I know others have had some success first.

Stats

Basic torchdynamo repo stats
  • Mentions: 1
  • Stars: 963
  • Activity: 3.5
  • Last commit: 7 days ago

pytorch/torchdynamo is an open source project licensed under the BSD 3-clause "New" or "Revised" License, which is an OSI-approved license.

The primary programming language of torchdynamo is Python.

