AD-Rosetta-Stone vs autodidact

| | AD-Rosetta-Stone | autodidact |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 26 | 922 |
| Growth | - | - |
| Activity | 10.0 | 10.0 |
| Last commit | almost 6 years ago | almost 4 years ago |
| Language | Scala | Jupyter Notebook |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AD-Rosetta-Stone
Understanding Automatic Differentiation in 30 lines of Python
[1] https://github.com/qobi/AD-Rosetta-Stone/
Autodidax: Jax Core from Scratch (In Python)
I find the solutions from https://github.com/qobi/AD-Rosetta-Stone/ to be very helpful, particularly for representing forward and backward mode automatic differentiation using a functional approach.
I used this code as inspiration for a functional-only implementation (without references/pointers) in Mercury: https://github.com/mclements/mercury-ad
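The functional approach to forward-mode AD described here can be sketched with dual numbers: each value carries its derivative, and no mutation or tape is needed. This is an illustrative sketch, not code from AD-Rosetta-Stone; the names `Dual`, `d_add`, `d_mul`, and `derivative` are invented for the example.

```python
from typing import NamedTuple

class Dual(NamedTuple):
    primal: float   # f(x)
    tangent: float  # f'(x), carried alongside the value

def d_add(a: Dual, b: Dual) -> Dual:
    # sum rule: (f + g)' = f' + g'
    return Dual(a.primal + b.primal, a.tangent + b.tangent)

def d_mul(a: Dual, b: Dual) -> Dual:
    # product rule: (f * g)' = f'g + fg'
    return Dual(a.primal * b.primal,
                a.tangent * b.primal + a.primal * b.tangent)

def derivative(f, x: float) -> float:
    # seed the tangent with 1.0 to get df/dx
    return f(Dual(x, 1.0)).tangent

# d/dx (x*x + x) = 2x + 1, so at x = 3.0 this gives 7.0
print(derivative(lambda x: d_add(d_mul(x, x), x), 3.0))  # 7.0
```

Because every operation returns a fresh `Dual`, the whole scheme is purely functional, which is what makes it portable to languages like Mercury that discourage mutable state.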
autodidact
Autodidax: Jax Core from Scratch (In Python)
I'm sure there's a lot of good material around, but here are some links that are conceptually very close to the linked Autodidax.
There's [Autodidact](https://github.com/mattjj/autodidact), a predecessor to Autodidax, which was a simplified implementation of [the original Autograd](https://github.com/hips/autograd). It focuses on reverse-mode autodiff, not building an open-ended transformation system like Autodidax. It's also pretty close to the content in [these lecture slides](https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/slid...) and [this talk](http://videolectures.net/deeplearning2017_johnson_automatic_...). But the autodiff in Autodidax is more sophisticated and reflects clearer thinking. In particular, Autodidax shows how to implement forward- and reverse-modes using only one set of linearization rules (like in [this paper](https://arxiv.org/abs/2204.10923)).
Here's [an even smaller and more recent variant](https://gist.github.com/mattjj/52914908ac22d9ad57b76b685d19a...), a single ~100 line file for reverse-mode AD on top of NumPy, which was live-coded during a lecture. There's no explanatory material to go with it though.
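A small reverse-mode variant of the kind described above can be sketched in a few dozen lines: each intermediate records its parents and local gradients, and a backward pass sums contributions over all paths. This is a fresh sketch in that spirit, not the live-coded file itself; `Var` and `backward` are invented names.

```python
class Var:
    """A scalar value that remembers how it was computed."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local gradient)
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(out):
    # Propagate each path's contribution separately; grads from
    # different paths through the graph are summed at the leaves.
    stack = [(out, 1.0)]
    while stack:
        node, upstream = stack.pop()
        node.grad += upstream
        for parent, local in node.parents:
            stack.append((parent, local * upstream))

x = Var(3.0)
y = x * x + x        # y = x^2 + x
backward(y)
print(x.grad)        # dy/dx = 2x + 1 = 7.0
```

The full-sized variants add a proper topological ordering and NumPy array support on top of this same structure.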
What are some alternatives?
mercury-ad - Mercury library for automatic differentiation
autograd - Efficiently computes derivatives of numpy code.
Tensor-Puzzles - Solve puzzles. Improve your pytorch.
owl - Owl - OCaml Scientific Computing @ https://ocaml.xyz