tensorrt_demos VS Zygote.jl

Compare tensorrt_demos and Zygote.jl and see what their differences are.

              tensorrt_demos        Zygote.jl
Mentions      5                     9
Stars         1,720                 1,439
Growth        -                     0.4%
Activity      3.1                   8.1
Last commit   about 1 year ago      about 1 month ago
Language      Python                Julia
License       MIT License           GNU General Public License v3.0 or later
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

tensorrt_demos

Posts with mentions or reviews of tensorrt_demos. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-07-10.
  • lowering size of YOLOV4 detection model
    1 project | /r/computervision | 10 Jul 2022
    the tensorrt_demos GitHub repository
  • Jetson Nano: TensorFlow model. Possibly I should use PyTorch instead?
    2 projects | /r/pytorch | 4 Jun 2021
    https://github.com/NVIDIA-AI-IOT/torch2trt <- pretty straightforward https://github.com/jkjung-avt/tensorrt_demos <- this helped me a lot
  • PyTorch 1.8 release with AMD ROCm support
    8 projects | news.ycombinator.com | 4 Mar 2021
    > I'll also add a caveat that toolage for Jetson boards is extremely incomplete.

    A hundred times this. I was about to write another rant here but I already did that[0] a while ago, so I'll save my breath this time. :)

    Another fun fact regarding toolage: Today I discovered that many USB cameras work poorly on Jetsons (at least when using OpenCV), probably due to different drivers and/or the fact that OpenCV doesn't support ARM64 as well as it does x86_64. :(

    > They supply you with a bunch of sorely outdated models for TensorRT like Inceptionv3 and SSD-MobileNetv2 and VGG-16.

    They supply you with such models? That's news to me. AFAIK converting something like SSD-MobileNetv2 from TensorFlow to TensorRT still requires substantial manual work and magic, as this code[1] attests to. There are countless (countless!) posts on the Nvidia forums by people complaining that they're not able to convert their models.

    [0]: https://news.ycombinator.com/item?id=26004235

    [1]: https://github.com/jkjung-avt/tensorrt_demos/blob/master/ssd... (In fact, this is the only piece of code I've found on the entire internet that managed to successfully convert my SSD-MobileNetV2.)

  • I'm tired of this anti-Wayland horseshit
    16 projects | news.ycombinator.com | 2 Feb 2021
  • H.264 hardware acceleration for surveillance station performance
    1 project | /r/synology | 12 Jan 2021
    It was some work getting compiled on nano but I used this guy's work to get started. https://jkjung-avt.github.io/tensorrt-yolov4/ and https://github.com/jkjung-avt/tensorrt_demos

Zygote.jl

Posts with mentions or reviews of Zygote.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-22.
  • Yann Lecun: ML would have advanced if other lang had been adopted versus Python
    9 projects | news.ycombinator.com | 22 Feb 2023
    If you look at Julia open source projects you'll see that the projects tend to have a lot more contributors than the Python counterparts, even over smaller time periods. A package for defining statistical distributions has had 202 contributors (https://github.com/JuliaStats/Distributions.jl), etc. Julia Base even has had over 1,300 contributors (https://github.com/JuliaLang/julia) which is quite a lot for a core language, and that's mostly because the majority of the core is in Julia itself.

    This is one of the things noted quite a bit at this SIAM CSE conference: Julia development tends to have a lot more code reuse than other ecosystems like Python. For example, the various machine learning libraries like Flux.jl and Lux.jl share a lot of layer intrinsics in NNlib.jl (https://github.com/FluxML/NNlib.jl), the same GPU libraries (https://github.com/JuliaGPU/CUDA.jl), the same automatic differentiation library (https://github.com/FluxML/Zygote.jl), and of course the same JIT compiler (Julia itself). These two libraries are far enough apart that people say "Flux is to PyTorch as Lux is to JAX/flax", but while in the Python world those share almost zero code or implementation, in the Julia world they share >90% of the core internals but have different higher-level APIs.

    If one hasn't participated in this space, it's a bit hard to fathom how much code reuse goes on and how much that is influenced by the design of multiple dispatch. This is one of the reasons there is so much cohesion in the community: it doesn't matter if one person is an ecologist and the other is a financial engineer; you may both be contributing to the same library, like Distances.jl, just adding a distance function that is then used in thousands of places. In the Python ecosystem you tend to have a lot more "megapackages" (PyTorch, SciPy, etc.), where the barrier to entry is generally a lot higher (and sometimes requires wrangling the build systems, fun times). But in the Julia ecosystem a lot of core development happens in "small" but central libraries, like Distances.jl or Distributions.jl, which are simple enough for an undergrad to get productive in within a week but are then used everywhere (Distributions.jl, for example, is used by every statistics package and for defining prior distributions in Turing.jl's probabilistic programming language).
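    The "contribute one method to a shared generic function" pattern can be loosely mimicked in Python with `functools.singledispatch`. This is only an analogy under stated assumptions: Julia's multiple dispatch selects on the types of all arguments, while `singledispatch` only looks at the first, and all names below are hypothetical, not from any real library.

    ```python
    from functools import singledispatch

    # A shared generic "distance" entry point, analogous in spirit to a small
    # central package like Distances.jl.
    @singledispatch
    def distance(a, b):
        raise NotImplementedError(f"no distance method for {type(a).__name__}")

    # A downstream contributor registers a method for coordinate tuples
    # without touching the library's internals.
    @distance.register
    def _(a: tuple, b: tuple):
        # Euclidean distance between coordinate tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Another contributor, from a completely different field, adds strings.
    @distance.register
    def _(a: str, b: str):
        # Hamming distance between equal-length strings.
        return sum(c1 != c2 for c1, c2 in zip(a, b))

    print(distance((0, 0), (3, 4)))        # prints 5.0
    print(distance("karolin", "kathrin"))  # prints 3
    ```

    Every caller that goes through `distance` immediately benefits from both new methods, which is the code-reuse dynamic described above, just restricted to single-argument dispatch.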

  • How long till Julia could be the default language to learn ML?
    1 project | /r/learnmachinelearning | 13 Nov 2022
    I think Julia has a lot going for it. I feel like autograd is one of the bigger ones, given that it's basically a language feature (https://github.com/FluxML/Zygote.jl for reference). I think the ecosystem is a bit of an uphill battle, though.
  • Neural networks with automatic differentiation.
    3 projects | /r/Julia | 13 Apr 2021
    Also check out https://github.com/FluxML/Zygote.jl which is the AD engine
  • PyTorch 1.8 release with AMD ROCm support
    8 projects | news.ycombinator.com | 4 Mar 2021
    > There's sadly no performant autodiff system for general purpose Python.

    Like there is for general purpose Julia? (https://github.com/FluxML/Zygote.jl)

  • The KimKlone Microcomputer
    1 project | news.ycombinator.com | 1 Mar 2021
    Thanks again. Like you said it is fun to dream (ask the "Scheme Machine" guys sometime about how they would go about it now), but practically with technology like Julia's Zygote:

    https://github.com/FluxML/Zygote.jl

    the efficiency of autodiff might be similar to that of an opcode anyway.

    So, how did DEC do on the Alpha processor? I always heard good things about it--IIRC it was based on the VAX, but 64 bit. I learned PDP-11 assembler at RPI, during their college program for high school students in about 1984. We hand assembled code and really got to know the architecture.

  • FluxML/Zygote.jl -- v0.6.3 should implement a `jacobian` function but doesn't?
    1 project | /r/Julia | 23 Feb 2021
  • Did the makers of Zygote.jl use category theory to define their approach to computable autodiff?
    1 project | /r/Julia | 8 Feb 2021
    and make that computable. It seems like line 88 --> 90 of this file in Zygote does that: https://github.com/FluxML/Zygote.jl/blob/master/src/compiler/chainrules.jl
  • Study group: Structure and Interpretation of Classical Mechanics in Clojure
    1 project | /r/lisp | 6 Feb 2021
  • Ask HN: Show me your Half Baked project
    154 projects | news.ycombinator.com | 9 Jan 2021
    It's super powerful

    For example, Zygote.jl (https://github.com/FluxML/Zygote.jl) implements reverse-mode automatic differentiation by defining a function that is a generated transformation of the function being differentiated.
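    The core idea of reverse-mode AD can be sketched in a few lines of Python: record the chain of operations with their local derivatives, then sweep backward from the output accumulating gradients. This is a toy runtime-tracing illustration under stated assumptions, not how Zygote works internally (Zygote generates the backward pass as a compile-time source transformation), and every name here is made up.

    ```python
    # Minimal graph-based reverse-mode AD sketch (illustrative only).
    class Var:
        def __init__(self, value, grad_fns=()):
            self.value = value
            self.grad = 0.0
            self.grad_fns = grad_fns  # pairs of (parent, local derivative)

        def _wrap(self, other):
            return other if isinstance(other, Var) else Var(other)

        def __add__(self, other):
            other = self._wrap(other)
            return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

        def __mul__(self, other):
            other = self._wrap(other)
            return Var(self.value * other.value,
                       ((self, other.value), (other, self.value)))

    def gradient(f, x):
        """Evaluate f at x, then run the backward pass to get df/dx."""
        v = Var(x)
        out = f(v)
        # Topologically order the graph so each node's gradient is complete
        # before it is pushed to its parents.
        topo, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.grad_fns:
                    visit(parent)
                topo.append(node)
        visit(out)
        out.grad = 1.0
        for node in reversed(topo):
            for parent, local in node.grad_fns:
                parent.grad += node.grad * local
        return v.grad

    # d/dx (3x^2 + 2x + 1) = 6x + 2, which is 32 at x = 5.
    print(gradient(lambda x: x * x * 3 + x * 2 + 1, 5.0))  # prints 32.0
    ```

    A source-to-source system like Zygote emits this backward sweep as ordinary code at compile time instead of tracing a graph at runtime, which is where the performance the posts above mention comes from.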

What are some alternatives?

When comparing tensorrt_demos and Zygote.jl you can also consider the following projects:

YOLOX - YOLOX is a high-performance anchor-free YOLO, exceeding yolov3~v5 with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documentation: https://yolox.readthedocs.io/

Enzyme - High-performance automatic differentiation of LLVM and MLIR.

torch2trt - An easy-to-use PyTorch to TensorRT converter

ForwardDiff.jl - Forward Mode Automatic Differentiation for Julia

yolov4-custom-functions - A Wide Range of Custom Functions for YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny Implemented in TensorFlow, TFLite, and TensorRT.

Tullio.jl - ⅀

tensorflow-yolov4-tflite - YOLOv4, YOLOv4-tiny, YOLOv3, YOLOv3-tiny implemented in TensorFlow 2.3.1 and Android; converts YOLOv4 .weights to TensorFlow, TensorRT, and TFLite formats

TensorFlow.jl - A Julia wrapper for TensorFlow

jetson-inference - Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.

Flux.jl - Relax! Flux is the ML library that doesn't make you tensor

wayvnc - A VNC server for wlroots based Wayland compositors

InvertibleNetworks.jl - A Julia framework for invertible neural networks