mercury-ad vs autodidact

| | mercury-ad | autodidact |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 6 | 922 |
| Growth | - | - |
| Activity | 10.0 | 10.0 |
| Latest commit | over 1 year ago | almost 4 years ago |
| Language | Mercury | Jupyter Notebook |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mercury-ad
Understanding Automatic Differentiation in 30 lines of Python
I wrote a purely functional AD library in Mercury [0], which adapts a general approach from [1]. I believe that Owl provides a similar approach [2].
[0] https://github.com/mclements/mercury-ad
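To make the "purely functional" idea concrete, here is a minimal sketch of forward-mode AD with dual numbers in Python (illustrative names only, not the mercury-ad code): each value carries its derivative alongside it, and every primitive propagates both without any mutation.

```python
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class Dual:
    val: float  # primal value
    der: float  # derivative (tangent) carried alongside it

def add(a: Dual, b: Dual) -> Dual:
    return Dual(a.val + b.val, a.der + b.der)

def mul(a: Dual, b: Dual) -> Dual:
    # product rule: (f*g)' = f'*g + f*g'
    return Dual(a.val * b.val, a.der * b.val + a.val * b.der)

def sin(a: Dual) -> Dual:
    # chain rule through sin
    return Dual(math.sin(a.val), math.cos(a.val) * a.der)

def derivative(f, x: float) -> float:
    # seed the input with tangent 1.0 and read the tangent off the result
    return f(Dual(x, 1.0)).der

# d/dx [x * sin(x)] at x = 2.0  ->  sin(2) + 2*cos(2)
print(derivative(lambda x: mul(x, sin(x)), 2.0))
```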
Autodidax: Jax Core from Scratch (In Python)
I find the solutions from https://github.com/qobi/AD-Rosetta-Stone/ to be very helpful, particularly for representing forward and backward mode automatic differentiation using a functional approach.
I used this code as inspiration for a functional-only implementation (without references/pointers) in Mercury: https://github.com/mclements/mercury-ad
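As a rough illustration of that reference-free style (a Python sketch rather than the Mercury code; all names here are made up), reverse mode can be expressed with closures instead of a mutable tape: each combinator returns the value together with a pullback that maps an output cotangent to an input cotangent, and fan-out of the input is handled by summing the pulled-back contributions.

```python
import math

# A differentiable scalar computation maps x to (value, vjp), where vjp
# pulls an output cotangent back to a cotangent on x. The closures play
# the role of the tape; nothing is ever mutated.

def identity(x):
    return x, lambda g: g

def mul(f, h):
    def run(x):
        v1, vjp1 = f(x)
        v2, vjp2 = h(x)
        # product rule, pulled back; the input fans out into both
        # factors, so the two contributions are summed
        return v1 * v2, lambda g: vjp1(g * v2) + vjp2(g * v1)
    return run

def sin(f):
    def run(x):
        v, vjp = f(x)
        return math.sin(v), lambda g: vjp(g * math.cos(v))
    return run

def grad(f):
    def gradfun(x):
        _, vjp = f(x)
        return vjp(1.0)  # seed the output cotangent with 1.0
    return gradfun

# d/dx [x * sin(x)] at x = 2.0  ->  sin(2) + 2*cos(2)
f = mul(identity, sin(identity))
print(grad(f)(2.0))
```

This toy version recomputes shared subterms on every call; the cited implementations avoid that, but the pullback-composition idea is the same.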
autodidact
Autodidax: Jax Core from Scratch (In Python)
I'm sure there's a lot of good material around, but here are some links that are conceptually very close to the linked Autodidax.
There's [Autodidact](https://github.com/mattjj/autodidact), a predecessor to Autodidax, which was a simplified implementation of [the original Autograd](https://github.com/hips/autograd). It focuses on reverse-mode autodiff, not building an open-ended transformation system like Autodidax. It's also pretty close to the content in [these lecture slides](https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/slid...) and [this talk](http://videolectures.net/deeplearning2017_johnson_automatic_...). But the autodiff in Autodidax is more sophisticated and reflects clearer thinking. In particular, Autodidax shows how to implement forward- and reverse-modes using only one set of linearization rules (like in [this paper](https://arxiv.org/abs/2204.10923)).
Here's [an even smaller and more recent variant](https://gist.github.com/mattjj/52914908ac22d9ad57b76b685d19a...), a single ~100 line file for reverse-mode AD on top of NumPy, which was live-coded during a lecture. There's no explanatory material to go with it though.
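That gist isn't reproduced here, but the core of such a file really is small. The following is a hypothetical sketch of the same general shape, tape-style reverse mode over NumPy with only `+` and `*` implemented; it is not Autodidax and not the linked gist.

```python
import numpy as np

class Box:
    """Wraps a NumPy array and records how it was computed."""
    def __init__(self, value, parents=()):
        self.value = np.asarray(value, dtype=float)
        self.parents = parents  # (parent Box, local vjp) pairs

    # constants aren't handled in this sketch; every operand must be a Box
    def __add__(self, other):
        return Box(self.value + other.value,
                   ((self, lambda g: g), (other, lambda g: g)))

    def __mul__(self, other):
        return Box(self.value * other.value,
                   ((self, lambda g: g * other.value),
                    (other, lambda g: g * self.value)))

def topo_order(out):
    # reverse topological order: each node comes after every node
    # computed from it, so its cotangent is complete when we read it
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(out)
    return reversed(order)

def grad(f):
    def gradfun(x):
        x_box = Box(x)
        out = f(x_box)
        grads = {id(out): np.ones_like(out.value)}
        for node in topo_order(out):
            g = grads[id(node)]
            for parent, vjp in node.parents:
                grads[id(parent)] = grads.get(id(parent), 0.0) + vjp(g)
        return grads[id(x_box)]
    return gradfun

# d/dx [x*x + x] = 2x + 1  ->  7.0 at x = 3.0
print(grad(lambda x: x * x + x)(3.0))
```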
What are some alternatives?
GPU-Puzzles - Solve puzzles. Learn CUDA.
autograd - Efficiently computes derivatives of numpy code.
AD-Rosetta-Stone - Examples of Automatic Differentiation (AD) in many different languages and systems
owl - OCaml Scientific Computing @ https://ocaml.xyz