einops vs tiny-cuda-nn

| | einops | tiny-cuda-nn |
|---|---|---|
| Mentions | 19 | 9 |
| Stars | 7,942 | 3,418 |
| Growth | - | 2.4% |
| Activity | 7.4 | 5.9 |
| Latest commit | 7 days ago | about 1 month ago |
| Language | Python | C++ |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
einops
- Einsum in 40 Lines of Python
Not sure if the wrapper you’re talking about is your own custom code, but I really like using einops lately. It’s got similar axis-naming capabilities, and it dispatches to both numpy and pytorch.
http://einops.rocks/
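For a flavor of what that looks like, here is a minimal sketch (shapes chosen arbitrarily for illustration); the same `rearrange` call accepts either a NumPy array or a PyTorch tensor:

```python
import numpy as np
from einops import rearrange

x = np.zeros((2, 3, 4))               # e.g. (batch, height, width)
y = rearrange(x, 'b h w -> b (h w)')  # flatten the spatial axes by name
print(y.shape)                        # (2, 12)

# the identical call works on a torch.Tensor, too:
# import torch
# y = rearrange(torch.zeros(2, 3, 4), 'b h w -> b (h w)')
```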
- Einops: Flexible and powerful tensor operations for readable and reliable code
- Yorick is an interpreted programming language for scientific simulations
Thanks for the pointer. I can believe that a language that looks so different will find that different patterns and primitives are natural for it.
My experience from writing a lot of array-based code in NumPy/Matlab is that broadcasting absolutely has made it easier to write my code in those ecosystems. Axes of length 1 have often been in the right places already, or have been easy to insert. It's of course possible to create a big mess in any language; it seems likely that the NumPy code you saw could have been neater too.
In machine learning there can be many array dimensions floating around: batch-dims, sequence and/or channel-dims, weight matrices, and so on. It can be necessary to expand two or more dimensions, and/or line up dimensions quite carefully. Einops[1] has emerged from that community as a tool to succinctly express many operations that involve lots of array dimensions. You're likely to bump into more and more people who've used it, and again it seems there's some overlap with what Rank does. (And again, you'll see uses of Einops in the wild that are unnecessarily convoluted.)
[1] https://einops.rocks/ -- It works with all of the existing major array-based frameworks for Python (NumPy/PyTorch/Jax/etc), and the emerging array API standard for Python.
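As a small illustration of the contrast being drawn (array shapes made up for the example): NumPy lines dimensions up by inserting length-1 axes, while einops names every axis explicitly:

```python
import numpy as np
from einops import rearrange

scores = np.random.rand(8, 10, 64)   # (batch, sequence, channels)
gains = np.random.rand(64)           # per-channel scale factors

# NumPy broadcasting: length-1 axes inserted to line up dimensions
scaled = scores * gains[None, None, :]

# einops: the same kind of reshuffling, with every axis named
flat = rearrange(scores, 'batch seq chan -> batch (seq chan)')
print(scaled.shape, flat.shape)      # (8, 10, 64) (8, 640)
```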
- Torch equivalent to image_to_array (keras)
this is definitely what you're looking for: https://github.com/arogozhnikov/einops
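A rough sketch of what that replacement might look like (the file name is hypothetical, and einops is only doing the axis reordering here):

```python
import numpy as np
import torch
from PIL import Image
from einops import rearrange

# keras' img_to_array yields (H, W, C) floats; torch convention is (C, H, W)
img = np.asarray(Image.open('photo.jpg'), dtype=np.float32)  # hypothetical file
tensor = rearrange(torch.from_numpy(img), 'h w c -> c h w')
```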
- [D] Have there been any attempts to create a programming language specifically for machine learning?
Einops all the things! https://einops.rocks/
- Delimiter-First Code
- [D] Any independent researchers ever get published/into conferences?
It depends on what their main purposes are. I know some figures who have done amazing work in this field without it ever coming from publications, e.g. https://github.com/lucidrains, https://github.com/rwightman, and https://einops.rocks/
- [D] Anyone using named tensors or a tensor annotation lib productively?
On tsalib's warp: this is very similar to einops. I think it might even be slightly more general. However, I'm honestly not sure to what extent tsalib is still maintained, as it looks like the most recent commit was about two years ago. (Which is a shame.)
- A basic introduction to NumPy's einsum
Also see Einops: https://github.com/arogozhnikov/einops, which uses an einsum-like notation for various tensor operations used in deep learning.
https://einops.rocks/pytorch-examples.html shows how it can be used to implement various neural network architectures in a simpler manner.
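To make the kinship concrete, here is a small side-by-side sketch (arrays are placeholders): `np.einsum` contracts over named indices, and einops applies the same named-axis spirit to rearranging, e.g. the ViT-style patch extraction shown in the einops docs:

```python
import numpy as np
from einops import rearrange

a, b = np.ones((4, 5)), np.ones((5, 6))
c = np.einsum('ij,jk->ik', a, b)  # matrix product via named indices

# einops: split an image batch into flattened 8x8 patches
imgs = np.zeros((16, 32, 32, 3))
patches = rearrange(imgs, 'b (h p1) (w p2) c -> b (h w) (p1 p2 c)', p1=8, p2=8)
print(c.shape, patches.shape)     # (4, 6) (16, 16, 192)
```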
- Ask HN: What technologies greatly improve the efficiency of development?
This, combined with something like einops [1] (an intuitive reshaping library), can be a huge time saver.
[1] https://github.com/arogozhnikov/einops
tiny-cuda-nn
- [D] Have there been any attempts to create a programming language specifically for machine learning?
In the opposite direction from your question is a very interesting project, tiny-cuda-nn, implemented as close to the metal as possible and very fast: https://github.com/NVlabs/tiny-cuda-nn
- A CUDA-free instant NGP renderer written entirely in Python: supports real-time rendering and camera interaction, and consumes less than 1GB of VRAM
This repo implements only the rendering part of NGP, but it is simpler and has much less code than the originals (Instant-NGP and tiny-cuda-nn).
- Tiny CUDA Neural Networks: fast C++/CUDA neural network framework
- Making 3D holograms this weekend with the very “Instant” Neural Graphics Primitives by nvidia — made this volume from 100 photos taken with an old iPhone 7 Plus
- NVlabs/tiny-CUDA-nn: fast C++/CUDA neural network framework
- Small Neural networks in Julia 5x faster than PyTorch
...a C++ library with a CUDA backend. But these high-performance building blocks may only fully saturate the GPU if the data is large enough.
I haven't looked at implementing these things, but I imagine that if you have smaller networks and thus less data, the large building blocks may not be optimal. You may, for example, want to fuse some operations to reduce the latency of repeated memory access.
In the PyTorch world, there are approaches for small networks as well: there is https://github.com/NVlabs/tiny-cuda-nn - as far as I understand from the first link in the README, it makes clever use of CUDA shared memory, which can hold all the weights of a tiny network (but not larger ones).
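For reference, the project ships a PyTorch extension; the sketch below follows the configuration format from the tiny-cuda-nn README (treat the exact config keys as an assumption to be checked against the current README):

```python
import torch
import tinycudann as tcnn  # PyTorch bindings for tiny-cuda-nn

# "FullyFusedMLP" runs the whole forward pass in one CUDA kernel, keeping
# the weights of the (small) network resident in shared memory.
network = tcnn.Network(
    n_input_dims=3,
    n_output_dims=1,
    network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,        # the fused path targets small widths
        "n_hidden_layers": 2,
    },
)

x = torch.rand(1024, 3, device="cuda")
y = network(x)
```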
- [R] Instant Neural Graphics Primitives with a Multiresolution Hash Encoding (Training a NeRF takes 5 seconds!)
- Tiny CUDA Neural Networks
- Real-Time Neural Radiance Caching for Path Tracing
What are some alternatives?
extending-jax - Extending JAX with custom C++ and CUDA code
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
opt_einsum - ⚡️Optimizing einsum functions in NumPy, Tensorflow, Dask, and more with contraction order optimization.
blis - BLAS-like Library Instantiation Software Framework
kymatio - Wavelet scattering transforms in Python with GPU acceleration
diffrax - Numerical differential equation solvers in JAX. Autodifferentiable and GPU-capable. https://docs.kidger.site/diffrax/
d2l-en - Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.
juliaup - Julia installer and version multiplexer
data-science-ipython-notebooks - Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.
horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
RecursiveFactorization.jl