jetson-inference
Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
> They provide some SSD-Mobilenet-v2 here: https://github.com/dusty-nv/jetson-inference
I was aware of that repository, but from a cursory look at it I had thought dusty was just converting models from PyTorch to TensorRT, like here[0, 1]. Am I missing something?
> I get 140 fps on a Xavier NX
That really is impressive. Holy shit.
[0]: https://github.com/dusty-nv/jetson-inference/blob/master/doc...
[1]: https://github.com/dusty-nv/jetson-inference/issues/896#issu...
-
TensorFlow does officially support ROCm. The ROCm port was started by AMD and later upstreamed.
https://github.com/tensorflow/tensorflow/tree/master/tensorf...
https://github.com/tensorflow/tensorflow/blob/master/tensorf...
It is true that it is not Google who distributes binaries compiled with ROCm support through PyPI (tensorflow and tensorflow-gpu are uploaded by Google, but tensorflow-rocm is uploaded by AMD). Is this what you meant by "not officially supporting"?
-
Have you tried using Enzyme[0] on Numba IR?
-
syntaxdot
Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.
What I like about PyTorch is that most of the functionality is also available through the C++ API, which has 'beta API stability', as they call it. So there are good bindings for some other languages as well. E.g., I have been using the Rust bindings in a larger project [1], and they have been awesome. A precursor to the project was implemented using TensorFlow, which was a world of pain.
Even things like mixed-precision training are fairly easy to do through the API.
-
> I'll also add a caveat that toolage for Jetson boards is extremely incomplete.
A hundred times this. I was about to write another rant here but I already did that[0] a while ago, so I'll save my breath this time. :)
Another fun fact regarding toolage: Today I discovered that many USB cameras work poorly on Jetsons (at least when using OpenCV), probably due to different drivers and/or the fact that OpenCV doesn't support ARM64 as well as it does x86_64. :(
> They supply you with a bunch of sorely outdated models for TensorRT like Inceptionv3 and SSD-MobileNetv2 and VGG-16.
They supply you with such models? That's news to me. AFAIK converting something like SSD-MobileNetv2 from TensorFlow to TensorRT still requires substantial manual work and magic, as this code[1] attests to. There are countless (countless!) posts on the Nvidia forums by people complaining that they're not able to convert their models.
[0]: https://news.ycombinator.com/item?id=26004235
[1]: https://github.com/jkjung-avt/tensorrt_demos/blob/master/ssd... (In fact, this is the only piece of code I've found on the entire internet that managed to successfully convert my SSD-MobileNetV2.)
-
> There's sadly no performant autodiff system for general purpose Python.
Like there is for general purpose Julia? (https://github.com/FluxML/Zygote.jl)
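For readers who haven't used one: an autodiff system computes exact derivatives by propagating derivative information through ordinary code, rather than by symbolic manipulation or finite differences. A toy forward-mode sketch in pure Python using dual numbers (my own illustration; it shows only the concept, nothing like the compiler-level performance Zygote or Enzyme aim for):

```python
from dataclasses import dataclass


@dataclass
class Dual:
    """A number carrying its value and its derivative w.r.t. one input."""
    val: float  # function value
    der: float  # derivative

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(float(other), 0.0)

    def __add__(self, other):
        other = self._coerce(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._coerce(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__


def derivative(f, x):
    """Evaluate df/dx at x by seeding the derivative slot with 1."""
    return f(Dual(x, 1.0)).der


# d/dx (x^2 + 3x) at x = 2 is 2*2 + 3 = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))  # -> 7.0
```

Zygote takes a very different route (source-to-source transformation of Julia IR, reverse mode) but the user-facing idea is the same: write plain code, get exact derivatives.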