Why TensorFlow for Python is dying a slow death

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • jax

    Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more

  • If you're familiar with the plumbing/porcelain API paradigm, JAX depends on TensorFlow plumbing (XLA) with a more ergonomic porcelain API.

    You might not see TensorFlow's plumbing much anymore if you're a new grad running experiments in a notebook, but the "porcelain API" is just the tip of the iceberg of modern machine learning.

    If you do any work on the JAX framework, you're frequently working with both the JAX and TensorFlow code repositories: https://github.com/google/jax/blob/main/WORKSPACE#L17
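    The "ergonomic porcelain" point above is easiest to see in code. A minimal sketch (the `loss` function and its arguments are illustrative, not from the original post) of JAX's composable transformations, where `jax.grad` and `jax.jit` wrap an ordinary Python function and the JIT step lowers it through XLA, the same compiler plumbing TensorFlow uses:

    ```python
    import jax
    import jax.numpy as jnp

    def loss(w, x):
        # An ordinary Python+NumPy-style function: sum of squared products.
        return jnp.sum((w * x) ** 2)

    # Compose transformations: differentiate w.r.t. the first argument,
    # then JIT-compile the result via XLA for CPU/GPU/TPU.
    grad_loss = jax.jit(jax.grad(loss))

    # d/dw [ (w*x)^2 ] = 2*w*x^2, so at w=2, x=3 the gradient is 36.
    print(grad_loss(2.0, 3.0))
    ```

    The appeal is that nothing about `loss` had to be written against a framework API; the transformations are applied from the outside.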

  • tinygrad

    You like pytorch? You like micrograd? You love tinygrad! ❤️ (discontinued; moved to https://github.com/tinygrad/tinygrad) (by geohot)

  • While PyTorch is obviously the future in the short term, it will be interesting to see how this space evolves.

    Before TensorFlow, people (myself included) were largely coding all of this stuff pretty manually, or with a zoo of incredibly clunky homemade libraries.

    TensorFlow and PyTorch made the whole situation far more accessible and sane: you can get a basic neural network working in a few lines of code. Magical.

    But it's still early days. George Hotz, author of tinygrad[0], a PyTorch "competitor", made a really insightful comment -- we will look back on PyTorch & friends like we look back on FORTRAN and COBOL. Yes, they were far better than assembly. But they are really clunky compared to what we have today.

    What will we have in 20 years?

    [0] https://github.com/geohot/tinygrad, https://tinygrad.org
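    To make the "few lines of code" claim above concrete, here is a minimal sketch in PyTorch (the architecture, data, and hyperparameters are arbitrary choices for illustration, not anything from the original post): a tiny classifier trained end to end with a standard loop.

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # A two-layer network and a plain SGD optimizer.
    model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    # Toy data: label is 1 when the two features sum to a positive number.
    x = torch.randn(256, 2)
    y = (x.sum(dim=1, keepdim=True) > 0).float()

    losses = []
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
        loss.backward()   # autograd computes all gradients
        opt.step()        # SGD update
        losses.append(loss.item())
    ```

    A decade ago each of those steps (forward pass, backpropagation, parameter update) was typically hand-written; here they are one line apiece.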

  • server

    The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)

  • "TensorFlow has the better deployment infrastructure"

    TensorFlow Serving is nice in that it's so tightly integrated with TensorFlow. As usual, that coupling cuts both ways: if the MLOps side of your solution uses TensorFlow Serving, you're essentially going to get "trapped" in the TensorFlow ecosystem.

    For PyTorch models (and just about anything else) I've been really enjoying NVIDIA Triton Server[0]. It does further entrench NVIDIA and CUDA in the space (although you can execute models CPU-only), but for a deployment today and for the foreseeable future you're almost certainly going to be using a CUDA stack anyway.

    Triton Server is very impressive and I'm always surprised to see how relatively niche it is.

    [0] - https://github.com/triton-inference-server/server
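    For a sense of what serving a PyTorch model on Triton looks like, here is a sketch of a model's `config.pbtxt` (the model name and tensor shapes are hypothetical; the `INPUT__0`/`OUTPUT__0` naming follows Triton's convention for TorchScript backends):

    ```protobuf
    name: "my_classifier"
    backend: "pytorch"
    max_batch_size: 8
    input [
      {
        name: "INPUT__0"
        data_type: TYPE_FP32
        dims: [ 3, 224, 224 ]
      }
    ]
    output [
      {
        name: "OUTPUT__0"
        data_type: TYPE_FP32
        dims: [ 1000 ]
      }
    ]
    ```

    The same server can host TensorFlow, ONNX, and TensorRT models side by side with analogous configs, which is what makes it a framework-neutral alternative to TensorFlow Serving.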


