torchtyping
equinox
| | torchtyping | equinox |
|---|---|---|
| Mentions | 7 | 31 |
| Stars | 1,333 | 1,809 |
| Growth | - | - |
| Activity | 3.2 | 9.2 |
| Latest commit | 10 months ago | 8 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
torchtyping
-
[D] Have there been any attempts to create a programming language specifically for machine learning?
Not really an answer to your question, but there are Python packages that try to solve the problem of tensor shapes that you mentioned, e.g. https://github.com/patrick-kidger/torchtyping or https://github.com/deepmind/tensor_annotations
-
What's New in Python 3.11?
I disagree. I've had a serious attempt at array typing using variadic generics and I'm not impressed. Python's type system has numerous issues... and now they just apply to any "ArrayWithNDimensions" type as well as any "ArrayWith2Dimensions" type.
Variadic protocols don't exist; many operations like stacking are inexpressible; the syntax is awful and verbose; etc. etc.
I've written more about this here as part of my TorchTyping project: [0]
[0] https://github.com/patrick-kidger/torchtyping/issues/37#issu...
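To make the complaint concrete, here's a sketch of the PEP 646 variadic-generics approach being criticised. (`Array`, `matmul`, and `stack` are hypothetical stand-ins for illustration, not a real library's API.)

```python
from typing import Generic, TypeVar, TypeVarTuple, Unpack

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")
Shape = TypeVarTuple("Shape")

class Array(Generic[Unpack[Shape]]):
    """A hypothetical array type parameterised over its shape."""

def matmul(x: Array[A, B], y: Array[B, C]) -> Array[A, C]:
    """Expressible: each dimension maps one-to-one onto a type variable."""
    ...

def stack(xs: list[Array[Unpack[Shape]]]) -> Array[int, Unpack[Shape]]:
    """Only half-expressible: the new leading dimension must be typed as a
    bare int, because nothing in the type system relates it to len(xs)."""
    ...
```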
-
Can anyone point out the mistakes in my input layer or dimension?
also https://github.com/patrick-kidger/torchtyping
-
[D] Anyone using named tensors or a tensor annotation lib productively?
FWIW I'm the author of torchtyping so happy to answer any questions about that. :) I think people are using it!
-
[D] Ideal deep learning library
The one thing I really *really* wish got more attention was named tensors and the tensor type system. Tensor misalignment errors are a constant source of silently-failing bugs. While third-party libraries have attempted to fill this gap, it really needs better native support. In particular it seems like bad form to me for programmers to have to remember the specific alignment and broadcasting rules, and then have to apply them to an often poorly documented order of tensor indices. I'd really like to see something like tsalib's warp operator made part of the main library and generalized to arbitrary function application, like a named-tensor version of fold. But preferably using notation closer to that of torchtyping.
-
[P] torchtyping -- documentation + runtime type checking of tensor shapes (and dtypes, ...)
Yes it does work with numerical literals! It supports using integers to specify an absolute size, strings to specify names for dimensions that should all be consistently sized (and optionally also checks named tensors), "..." to indicate batch dimensions, and so on. See the full list here.
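A minimal sketch of what those annotations look like in practice (runtime checking goes through typeguard; `outer_product` is just an illustrative function, not part of the library):

```python
import torch
from torchtyping import TensorType, patch_typeguard
from typeguard import typechecked

patch_typeguard()  # hook torchtyping's checks into typeguard, once per program

@typechecked
def outer_product(
    x: TensorType["batch", "dim"],   # strings: named dims, checked for consistency
    y: TensorType["batch", "dim"],
) -> TensorType["batch", "dim", "dim"]:
    return x.unsqueeze(-1) * y.unsqueeze(-2)

outer_product(torch.rand(32, 4), torch.rand(32, 4))    # passes
# outer_product(torch.rand(32, 4), torch.rand(32, 5))  # raises: "dim" is inconsistent
```

Integer literals (e.g. `TensorType["batch", 3]`) and `...` for arbitrary batch dimensions compose the same way.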
equinox
-
Ask HN: What side projects landed you a job?
I wrote a JAX-based neural network library (Equinox [1]) and numerical differential equation solving library (Diffrax [2]).
At the time I was just exploring some new research ideas in numerics -- and frankly, procrastinating from writing up my PhD thesis!
But then one of the teams at Google started using them, so they offered me a job to keep developing them for their needs. Plus I'd get to work in biotech, which was a big interest of mine. This was a clear dream job offer, so I accepted.
Since then both have grown steadily in popularity (~2.6k GitHub stars) and now see pretty widespread use! I've since started writing several other JAX libraries and we now have a bit of an ecosystem going.
[1] https://github.com/patrick-kidger/equinox
-
[P] Optimistix, nonlinear optimisation in JAX+Equinox!
The elevator pitch is that Optimistix is really fast, especially to compile. It plays nicely with Optax for first-order gradient-based methods, and takes a lot of design inspiration from Equinox, representing the state of all its solvers as standard JAX PyTrees.
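A hedged sketch of what that looks like in use (exact keyword arguments may differ between Optimistix versions; `rosenbrock` is just a toy objective):

```python
import jax.numpy as jnp
import optax
import optimistix as optx

def rosenbrock(y, args):
    x1, x2 = y
    return (1.0 - x1) ** 2 + 100.0 * (x2 - x1**2) ** 2

# An Optax optimiser wrapped as an Optimistix solver. The solver's state is
# an ordinary JAX pytree, so it nests and jit-compiles like any other.
solver = optx.OptaxMinimiser(optax.adam(learning_rate=1e-2), rtol=1e-4, atol=1e-4)
sol = optx.minimise(rosenbrock, solver, jnp.array([0.0, 0.0]), max_steps=20_000)
print(sol.value)  # approximately [1., 1.]
```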
-
JAX – NumPy on the CPU, GPU, and TPU, with great automatic differentiation
If you like PyTorch then you might like Equinox, by the way. (https://github.com/patrick-kidger/equinox ; 1.4k GitHub stars now!)
- Equinox: Elegant easy-to-use neural networks in Jax
- Show HN: Equinox (1.3k stars), a JAX library for neural networks and sciML
-
Pytrees
You're thinking of `jax.closure_convert`. :)
(Although technically that works by tracing and extracting all constants from the jaxpr, rather than introspecting the function's closure cells -- it sounds like your trick is the latter.)
When you discuss dynamic allocation, I'm guessing you're mainly referring to not being able to backprop through `jax.lax.while_loop`. If so, you might find `equinox.internal.while_loop` interesting, which is an unbounded while loop that you can backprop through! The secret sauce is to use a treeverse-style checkpointing scheme.
https://github.com/patrick-kidger/equinox/blob/f95a8ba13fb35...
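For a feel of the difference, here is a sketch; note that `equinox.internal` is an unstable namespace, and the `max_steps`/`kind` arguments are assumed from the version linked above:

```python
import jax
import equinox as eqx

def halve_until_small(x):
    cond = lambda val: val > 1.0
    body = lambda val: val / 2.0
    # jax.lax.while_loop here would fail under jax.grad; the checkpointed
    # variant trades some recomputation for reverse-mode differentiability.
    return eqx.internal.while_loop(cond, body, x, max_steps=64, kind="checkpointed")

print(jax.grad(halve_until_small)(10.0))
```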
-
Writing Python like it’s Rust
I'm a big fan of using ABCs to declare interfaces -- so much so that I have an improved abc.ABCMeta that also handles abstract instance variables and abstract class variables: https://github.com/patrick-kidger/equinox/blob/main/equinox/_better_abstract.py
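The same machinery backs Equinox's public `AbstractVar`/`AbstractClassVar` annotations; a minimal sketch (the class names are illustrative):

```python
import equinox as eqx

class AbstractInterpolation(eqx.Module):
    # Declared but never assigned: every concrete subclass must supply it,
    # either as a field or as a property.
    order: eqx.AbstractVar[int]

class LinearInterpolation(AbstractInterpolation):
    order: int = 1

LinearInterpolation()       # fine: `order` is implemented
# AbstractInterpolation()   # raises: `order` is still abstract
```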
-
[D] JAX vs PyTorch in 2023
For my daily research, I use Equinox (https://github.com/patrick-kidger/equinox) as a DL library in JAX.
- [Machinelearning] [D] Current state of JAX vs PyTorch?
-
Training Deep Networks with Data Parallelism in Jax
It sounds like you're concerned about how downstream libraries tend to wrap JAX transformations to handle their own thing? (E.g. `haiku.grad`.)
If so, then allow me to make my usual advert here for Equinox:
https://github.com/patrick-kidger/equinox
This actually works with JAX's native transformations. (There's no `equinox.vmap` for example.)
On higher-order functions more generally, Equinox offers a way to control these quite carefully, by making ubiquitous use of callables that are also pytrees. E.g. a neural network is both a callable in that it has a forward pass, and a pytree in that it records its parameters in its tree structure.
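A minimal sketch of that idea (using `eqx.nn.MLP` purely as an example model):

```python
import jax
import jax.numpy as jnp
import equinox as eqx

# An eqx.Module is simultaneously a pytree (its parameters are its leaves)
# and a callable (its forward pass).
model = eqx.nn.MLP(in_size=2, out_size=1, width_size=8, depth=2,
                   key=jax.random.PRNGKey(0))

def loss(model, x, y):
    pred = jax.vmap(model)(x)  # plain jax.vmap -- no library-specific wrapper
    return jnp.mean((pred - y) ** 2)

x, y = jnp.ones((16, 2)), jnp.zeros((16, 1))
grads = eqx.filter_grad(loss)(model, x, y)  # gradients arrive as another pytree
```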
What are some alternatives?
jaxtyping - Type annotations and runtime checking for shape and dtype of JAX/NumPy/PyTorch/etc. arrays. https://docs.kidger.site/jaxtyping/
flax - Flax is a neural network library for JAX that is designed for flexibility.
tsalib - Tensor Shape Annotation Library (numpy, tensorflow, pytorch, ...)
dm-haiku - JAX-based neural network library
mypy - Optional static typing for Python
treex - A Pytree Module system for Deep Learning in JAX
functorch - functorch is JAX-like composable function transforms for PyTorch.
extending-jax - Extending JAX with custom C++ and CUDA code
tensor_annotations - Annotating tensor shapes using Python types
diffrax - Numerical differential equation solvers in JAX. Autodifferentiable and GPU-capable. https://docs.kidger.site/diffrax/
Roslyn - The Roslyn .NET compiler provides C# and Visual Basic languages with rich code analysis APIs.
elegy - A High Level API for Deep Learning in JAX