What is ROCm? ROCm is AMD's open-source software platform for GPU computing: its answer to Nvidia's CUDA stack, spanning drivers, compilers, and libraries.
AMD had been maintaining a fork of the TensorFlow library, but those changes have since been merged upstream, and there are little breadcrumbs you can see if you look for them. It should (in theory) be possible to run an almost-current TensorFlow with AMD support, and work on this dates back to at least 2018.
However, the binary PyPI packages are still distributed by AMD (and, as of writing, sit a point release behind the current upstream version), so it feels a bit like a second-class citizen right now.
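As a concrete illustration, installing the AMD-distributed build and checking that it sees the GPU looks roughly like this (the `tensorflow-rocm` package is AMD's PyPI distribution; the exact version you should pin depends on your ROCm release, so treat this as a sketch rather than a recipe):

```shell
# Install AMD's ROCm build of TensorFlow from PyPI
# (pin a version that matches your installed ROCm release).
pip install tensorflow-rocm

# Sanity check: on a working ROCm setup, the AMD GPU
# should appear in the list of physical GPU devices.
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

If the second command prints an empty list, the usual suspects are a kernel driver/ROCm version mismatch or an unsupported GPU, not TensorFlow itself.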
There are inference servers, frameworks, edge devices, pretrained models, and even an NN-specialized library (cuDNN) that makes it easier for framework authors to use Nvidia GPUs for acceleration.