tangent
Source-to-Source Debuggable Derivatives in Pure Python (by google)
SmallPebble
Minimal deep learning library written from scratch in Python, using NumPy/CuPy. (by sradc)
| | tangent | SmallPebble |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 2,280 | 112 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Last commit | over 1 year ago | over 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tangent
Posts with mentions or reviews of tangent.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2021-12-25.
- [D] How AD is implemented in JAX/Tensorflow/Pytorch?
Thank you so much for the detailed explanation! This reminds me of tangent, an abandoned (?) SCT (source-code-transformation) AD built by Google a couple of years ago. https://github.com/google/tangent
- Trade-Offs in Automatic Differentiation: TensorFlow, PyTorch, Jax, and Julia
No, autograd acts similarly to PyTorch in that it builds a tape that it reverses, while PyTorch just comes with more optimized kernels (and kernels that act on GPUs). The AD that I was referencing was tangent (https://github.com/google/tangent). It was an interesting project, but it's hard to see who the audience is. Generating Python source code makes things harder to analyze, and you cannot JIT compile the generated code unless you can JIT compile Python. So you might as well first trace to a JIT-compilable sublanguage and do the actions there, which is precisely what Jax does. In theory tangent is a bit more general, and maybe you could mix it with Numba, but then it's hard to justify. If it's more general then it's not for the standard ML community, for the same reason as the Julia tools, but then it had better do better than the Julia tools in the specific niche they are targeting. Jax just makes much more sense for the people who were building it; it chose its niche very well.
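To make the source-to-source idea concrete, here is a hand-written sketch of what such a tool produces: the derivative of f(x) = x * sin(x) is itself ordinary Python source you can read, edit, and step through in a debugger. This is not tangent's actual output, just an illustration of the technique.

```python
import math

def f(x):
    return x * math.sin(x)

# Illustrative only: the kind of code a source-to-source reverse-mode AD
# tool emits for f(x) = x * sin(x). The generated derivative is plain,
# debuggable Python rather than a traced graph.
def df(x):
    # Forward pass: recompute the intermediates the original function used.
    t = math.sin(x)
    # Reverse pass: propagate the adjoint of the output (1.0) backwards.
    dy = 1.0
    dt = dy * x              # adjoint reaching sin(x) through the product
    dx = dy * t              # direct contribution of x to x * sin(x)
    dx += dt * math.cos(x)   # chain rule through sin(x)
    return dx

print(df(1.2))                               # reverse-mode result
print(math.sin(1.2) + 1.2 * math.cos(1.2))   # analytic check
```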
SmallPebble
Posts with mentions or reviews of SmallPebble.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-08-24.
- Fastest Autograd in the West
You can implement autograd as a library. Just take a look at this
https://github.com/sradc/SmallPebble
The first line of the description is:
> SmallPebble is a minimal automatic differentiation and deep learning library written from scratch in Python, using NumPy/CuPy.
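As a rough illustration of "autograd as a library" (a toy sketch, not SmallPebble's actual API), a few lines of Python suffice for scalar reverse-mode differentiation: each operation records its parents and local gradients, and backward() walks that record.

```python
# Minimal reverse-mode autodiff as a plain Python library (illustrative
# sketch only). Each Var stores (parent, local_gradient) pairs created by
# the ops that produced it; backward() accumulates gradients through them.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # tuples of (Var, d_self/d_parent)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        # Naive path-summing recursion: fine for a toy, inefficient for
        # large graphs with many shared nodes.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, y = Var(3.0), Var(4.0)
z = x * y + x            # z = x*y + x
z.backward()
print(x.grad, y.grad)    # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```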
- Compiling ML models to C for fun
Thanks for this. My approach to speeding up an autodiff system like this was to write it in terms of nd-arrays rather than scalars, using numpy/cupy [1]. But it's still slower than deep learning frameworks that compile / fuse operations. Wondering how it compares to the approach in this post. (Might try to benchmark at some point.)
[1] https://github.com/sradc/SmallPebble
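The nd-array approach mentioned above can be sketched as follows (an illustrative example, not SmallPebble's code): each op returns its value together with array-valued vector-Jacobian products, so the Python-level graph holds one node per array rather than one per scalar and the heavy lifting stays inside NumPy (or CuPy on GPU).

```python
import numpy as np

# Sketch of array-level reverse-mode autodiff: an op computes its value and
# returns VJP closures that map the upstream gradient to input gradients
# using whole-array operations.
def matmul(a, b):
    value = a @ b
    vjps = (lambda g: g @ b.T,   # d(loss)/da, given upstream gradient g
            lambda g: a.T @ g)   # d(loss)/db
    return value, vjps

a = np.random.randn(64, 128)
b = np.random.randn(128, 32)
out, (vjp_a, vjp_b) = matmul(a, b)
g = np.ones_like(out)                    # stand-in upstream gradient
print(vjp_a(g).shape, vjp_b(g).shape)    # (64, 128) (128, 32)
```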
- Understanding Automatic Differentiation in 30 lines of Python
- [P] SmallPebble - minimal(/toy) deep learning framework written from scratch in Python, using NumPy/CuPy. <700 loc.
Located here: https://github.com/sradc/SmallPebble
- Show HN: I wrote a minimal(/toy) deep learning library from scratch in Python
- SmallPebble – Minimal automatic differentiation implementation in Python, NumPy
What are some alternatives?
When comparing tangent and SmallPebble you can also consider the following projects:
autograd - Efficiently computes derivatives of numpy code.
MyGrad - Drop-in autodiff for NumPy.
chainer - A flexible framework of neural networks for deep learning
memoized_coduals - Shows that it is possible to implement reverse mode autodiff using a variation on the dual numbers called the codual numbers
Tensor-Puzzles - Solve puzzles. Improve your pytorch.
GPU-Puzzles - Solve puzzles. Learn CUDA.