tangent
Source-to-Source Debuggable Derivatives in Pure Python (by google)
kotlingrad
🧩 Shape-Safe Symbolic Differentiation with Algebraic Data Types (by breandan)
| | tangent | kotlingrad |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 2,280 | 508 |
| Growth | - | - |
| Activity | 10.0 | 3.8 |
| Latest commit | over 1 year ago | about 1 year ago |
| Language | Python | Kotlin |
| License | Apache License 2.0 | Apache License 2.0 |
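For a concrete sense of what "source-to-source" means here, below is a minimal usage sketch in the style of tangent's README; the exact signature and defaults of `grad()` are assumptions from the project's documentation, not a verified run.

```python
# Hedged sketch of tangent's documented grad() entry point; exact
# signatures and defaults are assumptions based on the README.
import tangent

def f(x):
    return x * x + 3.0 * x

# tangent.grad(f) generates a *new Python function* computing df/dx.
# Because the derivative exists as ordinary Python source, it can be
# read and stepped through in a debugger, unlike a tape or traced graph.
df = tangent.grad(f)
print(df(2.0))  # d/dx (x^2 + 3x) at x = 2 -> 7.0
```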
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
tangent
Posts with mentions or reviews of tangent. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-25.
- [D] How AD is implemented in JAX/Tensorflow/Pytorch?

  Thank you so much for the detailed explanation! This reminds me of tangent, an abandoned (?) SCT (source-code transformation) AD built by Google a couple of years ago. https://github.com/google/tangent
- Trade-Offs in Automatic Differentiation: TensorFlow, PyTorch, Jax, and Julia

  No, autograd acts similarly to PyTorch in that it builds a tape that it reverses, while PyTorch just comes with more optimized kernels (and kernels that act on GPUs). The AD that I was referencing was tangent (https://github.com/google/tangent). It was an interesting project, but it's hard to see who the audience is. Generating Python source code makes things harder to analyze, and you cannot JIT compile the generated code unless you could JIT compile Python. So you might as well first trace to a JIT-compilable sublanguage and do the actions there, which is precisely what Jax does. In theory tangent is a bit more general, and maybe you could mix it with Numba, but then it's hard to justify. If it's more general then it's not for the standard ML community, for the same reason as the Julia tools, but then it had better do better than the Julia tools in the specific niche they are targeting. Jax just makes much more sense for the people who were building it; it chose its niche very well.
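To make the "tape that it reverses" contrast concrete, here is a minimal sketch of tape-based reverse-mode AD in plain Python. All names are hypothetical; this is not autograd's or PyTorch's actual API, just the core mechanism the comment describes.

```python
# Minimal tape-based reverse-mode AD sketch; names are hypothetical,
# not autograd's or PyTorch's real API.
tape = []  # operations recorded in forward execution order

class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

def mul(a, b):
    out = Var(a.value * b.value)
    def backprop():  # how to push gradients back through this op
        a.grad += b.value * out.grad
        b.grad += a.value * out.grad
    tape.append(backprop)
    return out

def add(a, b):
    out = Var(a.value + b.value)
    def backprop():
        a.grad += out.grad
        b.grad += out.grad
    tape.append(backprop)
    return out

def backward(out):
    out.grad = 1.0
    for op in reversed(tape):  # replay the tape in reverse
        op()

x = Var(2.0)
y = add(mul(x, x), x)  # forward pass records each op on the tape
backward(y)            # reverse pass accumulates dy/dx = 2x + 1
print(x.grad)          # 5.0 at x = 2
```

tangent's source transformation instead emits the whole reverse pass as new Python source ahead of time, which is exactly why, as the comment notes, it is harder to JIT compile than a traced graph.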
kotlingrad
Posts with mentions or reviews of kotlingrad. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-12-25.
- Trade-Offs in Automatic Differentiation: TensorFlow, PyTorch, Jax, and Julia

  ...and that there is a mature library for autodiff: https://github.com/breandan/kotlingrad
- Show HN: Shape-Safe Symbolic Differentiation with Algebraic Data Types
- Kotlin∇: Type-safe Symbolic Differentiation for the JVM
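Kotlin∇ itself is a Kotlin library; purely to illustrate the "symbolic differentiation with algebraic data types" idea named in these titles, here is a minimal language-neutral sketch in Python (3.10+ for match/case). All names are hypothetical, and it does not attempt kotlingrad's compile-time shape safety.

```python
from dataclasses import dataclass

# Expression ADT: one case per constructor.
@dataclass(frozen=True)
class Const:
    value: float

@dataclass(frozen=True)
class X:  # the single free variable
    pass

@dataclass(frozen=True)
class Add:
    left: object
    right: object

@dataclass(frozen=True)
class Mul:
    left: object
    right: object

def d(e):
    """Symbolic derivative by structural pattern matching on the ADT."""
    match e:
        case Const(_):
            return Const(0.0)
        case X():
            return Const(1.0)
        case Add(l, r):
            return Add(d(l), d(r))
        case Mul(l, r):  # product rule
            return Add(Mul(d(l), r), Mul(l, d(r)))

def evaluate(e, x):
    match e:
        case Const(v):
            return v
        case X():
            return x
        case Add(l, r):
            return evaluate(l, x) + evaluate(r, x)
        case Mul(l, r):
            return evaluate(l, x) * evaluate(r, x)

expr = Add(Mul(X(), X()), Const(3.0))  # x^2 + 3
print(evaluate(d(expr), 2.0))          # d/dx = 2x -> 4.0
```

The appeal of the ADT encoding is that the differentiation rules are a total function over the constructors, so adding a new operation forces you to handle it everywhere; kotlingrad pushes this further by making shape mismatches a compile-time type error.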
What are some alternatives?
When comparing tangent and kotlingrad you can also consider the following projects:
autograd - Efficiently computes derivatives of numpy code.
lets-plot-kotlin - Grammar of Graphics for Kotlin
kmath - Kotlin mathematics extensions library
kinference - Running ONNX models in vanilla Kotlin
uiua - A stack-based array programming language
dex-lang - Research language for array processing in the Haskell/ML family
kotlindl - High-level Deep Learning Framework written in Kotlin and inspired by Keras