If data science uses a lot of computational power, then why is Python the most used programming language?

This page summarizes the projects mentioned and recommended in the original post on reddit.com/r/learnmachinelearning

  • serving

    A flexible, high-performance serving system for machine learning models

You serve models via TensorFlow Serving (https://www.tensorflow.org/tfx/guide/serving), which is written entirely in C++ (https://github.com/tensorflow/serving/tree/master/tensorflow_serving/model_servers); there is no Python on the serving path or in the shipped product.
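
A minimal client-side sketch of that Python-free serving path, assuming a model exported to a TensorFlow Serving instance (the host `localhost:8501` and model name `my_model` below are placeholders, not from the original post): the client only assembles a JSON request; all inference runs inside the C++ server.

```python
import json

# TensorFlow Serving's REST predict endpoint has the form
#   POST http://<host>:8501/v1/models/<model_name>:predict
# Host and model name here are hypothetical.
url = "http://localhost:8501/v1/models/my_model:predict"

# The request body is plain JSON: a batch of input instances.
payload = json.dumps({"instances": [[1.0, 2.0, 3.0]]})

# A real client would now POST `payload` to `url` (e.g. with urllib);
# the C++ server runs the model and returns {"predictions": [...]}.
print(payload)
```

The point is that Python (or any language that can build JSON) is just a thin client here; the serving binary itself contains no Python.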

  • julia

    The Julia Programming Language

For example, Julia (https://julialang.org/) is arguably as high level as Python and is targeted at scientific computing and data science, but is faster than C++ at execution in some circumstances (Julia compiles on the fly to native code via LLVM, a common compiler backend that is also used in many C/C++ compilers and is the same backend NVIDIA uses in the CUDA compiler: https://developer.nvidia.com/cuda-llvm-compiler).
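
The interpreter overhead that Julia sidesteps is easy to see from Python itself. This is an illustrative stdlib-only micro-benchmark (not from the original post): the same arithmetic in a pure-Python loop versus CPython's C-implemented built-in `sum`, which is why numeric Python code delegates hot loops to compiled libraries.

```python
import timeit

N = 100_000
data = list(range(N))

def python_loop_sum(xs):
    # Every iteration goes through the interpreter: fetch, dispatch, add.
    total = 0
    for x in xs:
        total += x
    return total

# `sum` is implemented in C inside CPython, so its loop runs in native code.
t_loop = timeit.timeit(lambda: python_loop_sum(data), number=20)
t_builtin = timeit.timeit(lambda: sum(data), number=20)

assert python_loop_sum(data) == sum(data)
print(f"interpreted loop: {t_loop:.4f}s, C built-in: {t_builtin:.4f}s")
```

On a typical machine the C built-in is roughly an order of magnitude faster; NumPy, TensorFlow, and friends apply the same trick at scale, which is the usual answer to the question in the title.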

  • XLA.jl

    Julia on TPUs (by JuliaTPU)

Julia can also use the same infrastructure that Python/TensorFlow use to access the same hardware (e.g., Julia on TPUs: https://github.com/JuliaTPU/XLA.jl).

  • tensorflow

    An Open Source Machine Learning Framework for Everyone

For reference: in TensorFlow and JAX, for example, the computation gets compiled to the intermediate XLA format (https://www.tensorflow.org/xla), then passed to the XLA compiler (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/xla/service), to the new TFRT runtime (https://github.com/tensorflow/runtime/blob/master/documents/tfrt_host_runtime_design.md), or to compilers for more esoteric hardware (https://github.com/pytorch/glow).
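
That pipeline can be glimpsed directly from Python, assuming JAX is installed: `jax.jit` traces a function and hands the trace to the XLA compiler, and `jax.make_jaxpr` shows the intermediate representation that gets lowered to XLA (a sketch, not the only entry point into XLA).

```python
import jax
import jax.numpy as jnp

def f(x):
    # Elementwise math that XLA can fuse into a single compiled kernel.
    return jnp.sum(x * x + 1.0)

# jit traces `f` once, compiles the trace with XLA, and caches the result;
# subsequent calls with the same shapes run the compiled native code.
f_compiled = jax.jit(f)

x = jnp.arange(4.0)          # [0., 1., 2., 3.]
print(f_compiled(x))         # (0+1) + (1+1) + (4+1) + (9+1) = 18.0
print(jax.make_jaxpr(f)(x))  # the intermediate representation handed to XLA
```

Here again Python is only the front end: it describes the computation, and XLA-generated native code executes it.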

  • runtime

    A performant and modular runtime for TensorFlow (by tensorflow)

  • glow

    Compiler for Neural Network hardware accelerators (by pytorch)

