miniF2F vs tiny-cuda-nn

| | miniF2F | tiny-cuda-nn |
|---|---|---|
| Mentions | 4 | 9 |
| Stars | 256 | 3,418 |
| Growth | 2.7% | 2.4% |
| Activity | 0.0 | 5.9 |
| Last commit | 9 months ago | about 1 month ago |
| Language | Objective-C++ | C++ |
| License | - | GNU General Public License v3.0 or later |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars. Activity: a relative measure of how actively a project is being developed, with recent commits weighted more heavily than older ones. For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.
miniF2F
- [D] Have there been any attempts to create a programming language specifically for machine learning?
That said, you *can* write down a desired type and have a system generate the type annotations, or the code, needed to prove that your program satisfies it. There has been recent work on this in deep learning for theorem proving, such as work that uses GPT to prove theorems in Lean, a dependently typed programming language and theorem prover (see the Lean sketch after this list). A better approach, though, is to combine this with an actual tree search algorithm, allowing a more structured search over the space of proofs instead of trying to generate a full correct proof in one shot. HyperTree Proof Search does this, using a variant of AlphaZero to search and to fine-tune the neural net. Unfortunately it hasn't been open-sourced, and it's pretty compute-intensive, so we can't use this for actual type inference yet. But yes, there is active interest in this kind of thing, both as a proving ground for using RL on reasoning tasks and from mathematicians interested in theorem proving.
- [D] First Author Interview: AI & formal math (Formal Mathematics Statement Curriculum Learning)
- [D] OpenAI tackles Math - Formal Mathematics Statement Curriculum Learning (Paper Explained Video)
- MiniF2F
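To make the "type as specification" idea from the first quote concrete, here is a minimal Lean 4 sketch (the theorem name is illustrative): the statement is the desired type, and the proof term is the program a neural prover searches for.

```lean
-- The statement below is the "desired type"; the term after `:=` is the
-- proof/program that inhabits it. A neural prover searches for such terms.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b   -- discharged here by citing a core library lemma

-- Trivial statements can be proved by computation alone:
example : 2 + 2 = 4 := rfl
```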
tiny-cuda-nn
- [D] Have there been any attempts to create a programming language specifically for machine learning?
In the opposite direction from your question is a very interesting project, TinyNN, all implemented as close to the metal as possible and very fast: https://github.com/NVlabs/tiny-cuda-nn
- A CUDA-free Instant NGP renderer written entirely in Python: supports real-time rendering and camera interaction while consuming less than 1 GB of VRAM
This repo implements only the rendering part of Instant NGP, but it is simpler and contains far less code than the originals (Instant-NGP and tiny-cuda-nn).
- Tiny CUDA Neural Networks: fast C++/CUDA neural network framework
- Making 3D holograms this weekend with the very “Instant” Neural Graphics Primitives by nvidia — made this volume from 100 photos taken with an old iPhone 7 Plus
- NVlabs/tiny-CUDA-nn: fast C++/CUDA neural network framework
- Small Neural networks in Julia 5x faster than PyTorch
...a C++ library with a CUDA backend. But these high-performance building blocks may only saturate the GPU fully if the data is large enough.
I haven't looked at implementing these things, but I imagine that if you have smaller networks, and thus less data, the large building blocks may not be optimal. You may, for example, want to fuse some operations to reduce the latency of repeated memory accesses.
In the PyTorch world there are approaches for small networks as well; there is https://github.com/NVlabs/tiny-cuda-nn - as far as I understand from the first link in its README, it makes clever use of CUDA shared memory, which can hold all the weights of a tiny network (but not of larger ones). A CUDA sketch of this idea follows after this list.
- [R] Instant Neural Graphics Primitives with a Multiresolution Hash Encoding (Training a NeRF takes 5 seconds!)
- Tiny CUDA Neural Networks
- Real-Time Neural Radiance Caching for Path Tracing
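The shared-memory trick described in the last quote is the heart of tiny-cuda-nn's "fully fused" networks. Below is a minimal CUDA sketch of the idea, not the library's actual implementation; the kernel name, the 32-wide layer, the one-row-per-thread mapping, and the ReLU are illustrative assumptions. Each block stages the entire weight matrix in shared memory once, so the inner product loop never touches global memory for weights.

```cuda
#include <cuda_runtime.h>

constexpr int WIDTH = 32;  // assumed layer width; small enough that
                           // WIDTH*WIDTH weights (4 KB) fit in shared memory

__global__ void tiny_mlp_layer(const float* __restrict__ weights,  // WIDTH x WIDTH
                               const float* __restrict__ input,    // n x WIDTH
                               float* __restrict__ output,         // n x WIDTH
                               int n) {
    // Cooperatively stage the whole weight matrix in shared memory,
    // so each block reads it from global memory exactly once.
    __shared__ float w[WIDTH][WIDTH];
    for (int i = threadIdx.x; i < WIDTH * WIDTH; i += blockDim.x)
        w[i / WIDTH][i % WIDTH] = weights[i];
    __syncthreads();

    // One input row per thread; activations live in registers (or local
    // memory), so per row the only global traffic is one read and one write.
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n) return;

    float x[WIDTH];
    for (int j = 0; j < WIDTH; ++j) x[j] = input[row * WIDTH + j];

    for (int o = 0; o < WIDTH; ++o) {
        float acc = 0.f;
        for (int j = 0; j < WIDTH; ++j) acc += w[o][j] * x[j];
        output[row * WIDTH + o] = fmaxf(acc, 0.f);  // ReLU activation
    }
}

// Example launch: tiny_mlp_layer<<<(n + 127) / 128, 128>>>(d_w, d_in, d_out, n);
```

A fully fused network takes this further by chaining several such layers inside one kernel, avoiding a round trip to global memory between layers; the sketch shows only the single-layer building block.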
What are some alternatives?
tensor_annotations - Annotating tensor shapes using Python types
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
einops - Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)
blis - BLAS-like Library Instantiation Software Framework
torchtyping - Type annotations and dynamic checking for a tensor's shape, dtype, names, etc.
diffrax - Numerical differential equation solvers in JAX. Autodifferentiable and GPU-capable. https://docs.kidger.site/diffrax/
FL - FL language specification and reference implementations
juliaup - Julia installer and version multiplexer
dex-lang - Research language for array processing in the Haskell/ML family
jaxtyping - Type annotations and runtime checking for shape and dtype of JAX/NumPy/PyTorch/etc. arrays. https://docs.kidger.site/jaxtyping/
RecursiveFactorization.jl