extending-jax
Extending JAX with custom C++ and CUDA code (by dfm)
mpi4jax
Zero-copy MPI communication of JAX arrays, for turbo-charged HPC applications in Python :zap: (by PhilipVinc)
| | extending-jax | mpi4jax |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 398 | 484 |
| Growth | 1.0% | 1.7% |
| Activity | 3.0 | 5.7 |
| Last commit | 11 months ago | 16 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Mentions - the total number of mentions of the project that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
extending-jax
Posts with mentions or reviews of extending-jax. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-02-15.
- [D] Should We Be Using JAX in 2022?
You can check out this or this for more info. I think it is safe to assume that it is less stable than PyTorch - some other commenters have spoken about running into trouble with XLA in certain corner cases, but I have not experienced this so I can't speak to it.
- Extending JAX with custom C++ and CUDA code
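The tutorial walks through exposing a compiled kernel to JAX by registering a custom primitive. Below is a minimal, Python-only sketch of the primitive-registration half of that pattern; the `mul_add` primitive and its NumPy fallback are illustrative stand-ins for a real C++/CUDA kernel, not the tutorial's own code, and import paths may vary across JAX versions (newer releases expose this machinery under `jax.extend.core`).

```python
import numpy as np
from jax import core
from jax.core import ShapedArray

# Hypothetical primitive standing in for a custom C++/CUDA kernel;
# in the tutorial, the impl dispatches to an XLA custom call instead.
mul_add_p = core.Primitive("mul_add")

def mul_add_impl(x, y, z):
    # Eager (op-by-op) fallback; a compiled extension would run here.
    return np.asarray(x) * np.asarray(y) + np.asarray(z)

def mul_add_abstract_eval(xs, ys, zs):
    # Tell JAX the output shape/dtype without running the kernel.
    assert xs.shape == ys.shape == zs.shape
    return ShapedArray(xs.shape, xs.dtype)

mul_add_p.def_impl(mul_add_impl)
mul_add_p.def_abstract_eval(mul_add_abstract_eval)

def mul_add(x, y, z):
    return mul_add_p.bind(x, y, z)

print(mul_add(2.0, 3.0, 4.0))  # 10.0
```

Calling `mul_add` eagerly runs the NumPy fallback; supporting `jax.jit` additionally requires registering a lowering for the compiled kernel, which is the bulk of what the tutorial covers.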
mpi4jax
Posts with mentions or reviews of mpi4jax. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-02-03.
- [D] Jax (or other libraries) when not using GPUs/TPUs but CPUs.
I've seen a couple of posts of folks using JAX for scientific computing (e.g. physics) workloads without much issue. The parallel primitives work just as well across multiple CPUs as they do on accelerators. If you're on a cluster, also worth looking into https://github.com/PhilipVinc/mpi4jax.
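A minimal sketch in the spirit of that suggestion is below: a jitted all-reduce across MPI ranks using mpi4jax. It assumes mpi4py and a working MPI installation; note that some mpi4jax versions return a `(result, token)` pair from communication calls, as written here, so adjust the unpacking for your version.

```python
from mpi4py import MPI
import jax
import jax.numpy as jnp
import mpi4jax

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

@jax.jit
def rank_sum(arr):
    # Each rank contributes its rank index, then all ranks sum.
    arr = arr + rank
    # Some mpi4jax versions return (result, token); unpack accordingly.
    arr_sum, _ = mpi4jax.allreduce(arr, op=MPI.SUM, comm=comm)
    return arr_sum

result = rank_sum(jnp.zeros((3, 3)))
print(f"rank {rank} sees sum {result[0, 0]}")
```

Launched with e.g. `mpirun -n 4 python script.py`, every rank should print the same total (0 + 1 + 2 + 3 = 6.0 for four ranks).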
What are some alternatives?
When comparing extending-jax and mpi4jax you can also consider the following projects:
- equinox - Elegant easy-to-use neural networks + scientific computing in JAX. https://docs.kidger.site/equinox/
- devito - DSL and compiler framework for automated finite-differences and stencil computation
- einops - Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)
- horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
- thinc - 🔮 A refreshing functional take on deep learning, compatible with your favorite libraries
- pyhpc-benchmarks - A suite of benchmarks for CPU and GPU performance of the most popular high-performance libraries for Python :rocket: