You serve models via TensorFlow Serving (https://www.tensorflow.org/tfx/guide/serving), which is written entirely in C++ (https://github.com/tensorflow/serving/tree/master/tensorflow_serving/model_servers): no Python on the serving path or in the shipped product.
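To make that concrete, here is a minimal sketch of hitting a TF Serving REST endpoint from a Python client; the model name (half_plus_two) and port (8501) are the values used in the TF Serving docs, so adjust them for your own deployment:

    # Minimal sketch: querying a TensorFlow Serving REST endpoint.
    # Assumes a model named "half_plus_two" is being served on
    # localhost:8501 (the setup from the TF Serving docs). The server
    # itself is pure C++; the client can be anything that speaks HTTP/JSON.
    import requests

    resp = requests.post(
        "http://localhost:8501/v1/models/half_plus_two:predict",
        json={"instances": [1.0, 2.0, 5.0]},
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. {"predictions": [2.5, 3.0, 4.5]}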
-
For example, Julia (https://julialang.org/) is arguably as high level as Python and is targeted at scientific computing and data science, but it is faster than C++ at execution in some circumstances (Julia compiles on the fly to native code via LLVM, a common compiler backend that is also used in many C/C++ compilers; it is the same compiler backend NVIDIA uses in the CUDA compiler: https://developer.nvidia.com/cuda-llvm-compiler).
-
Julia can also use the same infrastructure that Python/TensorFlow uses to access the same hardware (e.g. Julia on TPUs via XLA.jl: https://github.com/JuliaTPU/XLA.jl).
-
For reference: in TensorFlow and JAX, for example, the tensor computation gets compiled to the intermediate XLA format (https://www.tensorflow.org/xla), then passed to the XLA compiler (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/xla/service), to the new TFRT runtime (https://github.com/tensorflow/runtime/blob/master/documents/tfrt_host_runtime_design.md), or to more esoteric hardware backends (e.g. via Glow: https://github.com/pytorch/glow).
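As a rough sketch of that pipeline from the JAX side (the exact AOT API names, jit(...).lower(...) and .as_text(), may vary across JAX versions), you can inspect the intermediate form a jitted function hands to XLA before it gets compiled for the local backend:

    # Sketch: inspecting the XLA input that JAX lowers a function to.
    # The AOT interface (jit(...).lower(...), .as_text(), .compile())
    # is assumed here and may differ across JAX versions.
    import jax
    import jax.numpy as jnp

    def f(x):
        return jnp.sum(jnp.tanh(x) ** 2)

    x = jnp.ones((4, 4))
    lowered = jax.jit(f).lower(x)   # trace + lower to the XLA input format
    print(lowered.as_text())        # the intermediate form handed to XLA
    compiled = lowered.compile()    # XLA compiles it for the local backend
    print(compiled(x))              # run the compiled executable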