juliaup vs tiny-cuda-nn

| | juliaup | tiny-cuda-nn |
|---|---|---|
| Mentions | 10 | 9 |
| Stars | 970 | 3,650 |
| Growth | 2.6% | 2.1% |
| Activity | 9.2 | 5.1 |
| Latest commit | 4 days ago | 16 days ago |
| Language | Rust | C++ |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
juliaup
- How to tell Quarto where the julia-1.8 kernel is?
There is juliaup, available in the Microsoft Store, for managing Julia versions: https://github.com/JuliaLang/juliaup
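For reference, juliaup is driven entirely from the command line. A minimal sketch of typical usage, based on the commands documented in the juliaup README (the 1.8 channel is just an example):

```sh
# Install a specific Julia channel alongside the default one
juliaup add 1.8

# Make it the default launched by the `julia` command
juliaup default 1.8

# List installed channels and update them
juliaup status
juliaup update

# Or launch a specific channel without changing the default
julia +1.8
```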
- Is it better to install Julia with its own executable or to install it in an Anaconda environment?
It's better to use the official binaries from https://julialang.org/downloads/. On Windows, the best option is the Julia app in the MS Store (https://github.com/JuliaLang/juliaup).
- Neovim can't find executable path for program
Thanks - fwiw, I currently use juliaup, which basically does the same thing. Unfortunately, the problem isn't that Julia isn't on the path (it is), it's that Neovim's exepath can't find it, for some reason. :)
- Julia 1.8 released
But it’s considered prerelease for Mac and Linux according to its git repository. So presumably it won’t become the default until it’s not experimental any more.
- Appropriate way to run two versions of Julia on Linux
- Simply create a k8s cluster in PhoenixNAP with Rancher in a few clicks …
GitHub - JuliaLang/juliaup: Julia installer and version multiplexer · GitHub - JuliaPluto/PlutoUI.jl · GitHub - fonsp/Pluto.jl: 🎈 Simple reactive notebooks for Julia
- I don't want to abandon Rust for Julia
Use the right tool for the job! I would never write a system utility or OS or user application or compiler in Julia, for example. juliaup is a Julia utility being written in Rust for this exact reason!
- Small Neural networks in Julia 5x faster than PyTorch
- What's the standard procedure for package requests?
And yes, I know about jill.py and have considered using it as well. It seems fairly straightforward to use. A similar project that I found interesting is juliaup. Still, I would definitely prefer just using zypper instead of relying on third-party tools. That's why I figured I might as well ask on here.
- Get first n elements from an array
tiny-cuda-nn
- [D] Have there been any attempts to create a programming language specifically for machine learning?
In the opposite direction from your question is a very interesting project, tiny-cuda-nn, implemented as close to the metal as possible and very fast: https://github.com/NVlabs/tiny-cuda-nn
- A CUDA-free instant NGP renderer written entirely in Python: supports real-time rendering and camera interaction and consumes less than 1GB of VRAM
This repo implements only the rendering part of NGP, but it is simpler and has far less code than the originals (Instant-NGP and tiny-cuda-nn).
- Tiny CUDA Neural Networks: fast C++/CUDA neural network framework
- Making 3D holograms this weekend with the very “Instant” Neural Graphics Primitives by nvidia — made this volume from 100 photos taken with an old iPhone 7 Plus
- NVlabs/tiny-CUDA-nn: fast C++/CUDA neural network framework
- Small Neural networks in Julia 5x faster than PyTorch
...a C++ library with a CUDA backend. But these high-performance building blocks may only saturate the GPU fully if the data is large enough.
I haven't looked at implementing these things, but I imagine if you have smaller networks and thus less data, the large building blocks may not be optimal. You may, for example, want to fuse some operations to reduce the latency of repeated memory access.
In the PyTorch world, there are approaches for small networks as well; there is https://github.com/NVlabs/tiny-cuda-nn - as far as I understand from the first link in the README, it makes clever use of CUDA shared memory, which can hold all the weights of a tiny network (but not larger ones).
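To give a sense of how tiny-cuda-nn is driven, here is a compressed sketch adapted from the usage sample in the project's README. The config values are illustrative, generate_training_batch is a placeholder you would supply, and signatures may have drifted since (recent versions also take an explicit CUDA stream argument):

```cpp
#include <tiny-cuda-nn/common.h>

using namespace tcnn;

// Placeholder: fill the GPU matrices with your own training batch.
void generate_training_batch(GPUMatrix<float>* inputs, GPUMatrix<float>* targets);

int main() {
	const uint32_t n_input_dims = 3, n_output_dims = 1;
	const uint32_t batch_size = 256; // must be a multiple of the batch-size granularity
	const int n_training_steps = 1000;

	// The whole model (encoding + network + optimizer + loss) is one JSON config.
	nlohmann::json config = {
		{"loss",      {{"otype", "L2"}}},
		{"optimizer", {{"otype", "Adam"}, {"learning_rate", 1e-3}}},
		{"encoding",  {{"otype", "HashGrid"}}},
		{"network",   {{"otype", "FullyFusedMLP"},
		               {"n_neurons", 64}, {"n_hidden_layers", 2}}},
	};
	auto model = create_from_config(n_input_dims, n_output_dims, config);

	// Training data lives in column-major GPU matrices.
	GPUMatrix<float> inputs(n_input_dims, batch_size);
	GPUMatrix<float> targets(n_output_dims, batch_size);

	for (int i = 0; i < n_training_steps; ++i) {
		generate_training_batch(&inputs, &targets); // <-- your code
		auto ctx = model.trainer->training_step(inputs, targets);
	}

	// Inference reuses the same GPU-matrix layout.
	GPUMatrix<float> prediction(n_output_dims, batch_size);
	model.network->inference(inputs, prediction);
}
```

The "FullyFusedMLP" network type is what the comment above alludes to: per the README, the whole forward/backward pass of a narrow MLP runs as a single CUDA kernel, with weights and activations kept in on-chip memory rather than round-tripping through global memory.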
- [R] Instant Neural Graphics Primitives with a Multiresolution Hash Encoding (Training a NeRF takes 5 seconds!)
- Tiny CUDA Neural Networks
- Real-Time Neural Radiance Caching for Path Tracing
What are some alternatives?
- diffrax - Numerical differential equation solvers in JAX. Autodifferentiable and GPU-capable. https://docs.kidger.site/diffrax/
- instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
- jill.py - A cross-platform installer for the Julia programming language
- blis - BLAS-like Library Instantiation Software Framework
- vectorflow
- jill - Command line installer of the Julia Language.
- RecursiveFactorization
- n - Node version management
- RecursiveFactorization.jl
- LeNetTorch - PyTorch implementation of LeNet for fitting MNIST for benchmarking.