| | CUDA.jl | Zygote.jl |
|---|---|---|
| Mentions | 15 | 9 |
| Stars | 1,133 | 1,439 |
| Growth | 1.1% | 0.4% |
| Activity | 9.5 | 8.1 |
| Latest commit | 7 days ago | about 1 month ago |
| Language | Julia | Julia |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
CUDA.jl
-
Ask HN: Best way to learn GPU programming?
It would also mean learning Julia, but you can write GPU kernels in Julia and then compile them for NVIDIA CUDA, AMD ROCm, or Intel oneAPI.
https://juliagpu.org/
I've written CUDA kernels and I knew nothing about it going in.
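For a taste of what that looks like, here is a minimal sketch of an element-wise addition kernel written with CUDA.jl (array sizes and launch configuration are illustrative):

```julia
using CUDA

# A plain Julia function that CUDA.jl compiles to a GPU kernel.
function vadd!(c, a, b)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

a = CUDA.rand(Float32, 1024)
b = CUDA.rand(Float32, 1024)
c = similar(a)

@cuda threads=256 blocks=4 vadd!(c, a, b)  # launch: 4 blocks of 256 threads
```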
- What's your main programming language?
-
How is Julia Performance with GPUs (for LLMs)?
See https://juliagpu.org/
-
Yann Lecun: ML would have advanced if other lang had been adopted versus Python
If you look at Julia open source projects you'll see that they tend to have a lot more contributors than their Python counterparts, even over shorter time periods. A package for defining statistical distributions has had 202 contributors (https://github.com/JuliaStats/Distributions.jl), and Julia Base has had over 1,300 contributors (https://github.com/JuliaLang/julia), which is quite a lot for a core language; that's mostly because the majority of the core is written in Julia itself.
This is one of the things that was noted quite a bit at the SIAM CSE conference: Julia development tends to involve a lot more code reuse than other ecosystems like Python. For example, the machine learning libraries Flux.jl and Lux.jl share layer intrinsics in NNlib.jl (https://github.com/FluxML/NNlib.jl), the same GPU libraries (https://github.com/JuliaGPU/CUDA.jl), the same automatic differentiation library (https://github.com/FluxML/Zygote.jl), and of course the same JIT compiler (Julia itself). These two libraries are far enough apart that people say "Flux is to PyTorch as Lux is to JAX/Flax", but while in the Python world those share almost no code or implementation, in the Julia world they share over 90% of the core internals while exposing different higher-level APIs.
If one hasn't participated in this space, it's a bit hard to fathom how much code reuse goes on and how that is influenced by the design of multiple dispatch. This is one of the reasons there is so much cohesion in the community: it doesn't matter whether one person is an ecologist and the other is a financial engineer; you may both be contributing to the same library, like Distances.jl, each adding a distance function that is then used in thousands of places. In the Python ecosystem you tend to have more "megapackages" (PyTorch, SciPy, etc.) where the barrier to entry is generally much higher (and sometimes requires wrangling the build systems, fun times). But in the Julia ecosystem a lot of core development happens in small but central libraries, like Distances.jl or Distributions.jl, which are simple enough for an undergrad to get productive in within a week but are then used everywhere (Distributions.jl, for example, is used in every statistics package, in definitions of prior distributions for Turing.jl's probabilistic programming language, and so on).
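As a hypothetical sketch of that dispatch pattern (illustrative types and functions, not the actual Distances.jl API): a contributor adds a single method, and every generic algorithm written against the abstract type picks it up automatically.

```julia
# Hypothetical sketch of code reuse via multiple dispatch.
abstract type Metric end

struct Euclidean <: Metric end
struct Manhattan <: Metric end   # contributed later, by someone else

# Each contributor adds one method...
dist(::Euclidean, a, b) = sqrt(sum(abs2, a .- b))
dist(::Manhattan, a, b) = sum(abs, a .- b)

# ...and every downstream function written against `Metric` gets it for free.
nearest(m::Metric, x, points) = argmin(p -> dist(m, x, p), points)

points = [[0.0, 0.0], [1.0, 1.0], [3.0, 4.0]]
nearest(Manhattan(), [0.9, 0.8], points)   # => [1.0, 1.0]
```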
-
C++ is making me depressed / CUDA question
If you just want to do some numerical code that requires linear algebra and GPU, your best bet would be Julia or Python+JAX.
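As a rough sketch of the Julia side (assuming CUDA.jl's array types), GPU arrays reuse the standard linear algebra syntax:

```julia
using CUDA, LinearAlgebra

A = CUDA.rand(Float32, 1024, 1024)   # matrix lives on the GPU
b = CUDA.rand(Float32, 1024)

x = A \ b            # dense solve, dispatched to GPU library routines
norm(A * x - b)      # residual, also computed on the GPU
```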
-
Almost trivial distributed parallelization of stencil-based GPU and CPU applications with…
GitHub - JuliaGPU/CUDA.jl: CUDA programming in Julia.
- Why Fortran is easy to learn
-
Generic GPU Kernels
Should have (2017) in the title.
Indeed, it's cool to program Julia directly on the GPU, and this has evolved further since then; see https://juliagpu.org/
-
Announcing The Rust CUDA Project; An ecosystem of crates and tools for writing and executing extremely fast GPU code fully in Rust
I'm excited to eventually see something like JuliaGPU with support for multiple backends.
-
[Media] 100% Rust path tracer running on CPU, GPU (CUDA), and OptiX (for denoising) using one of my upcoming projects. There is no C/C++ code at all, the program shares a single rust crate for the core raytracer and uses rust for the viewer and renderer.
That's really cool! Have you looked at CUDA.jl for the Julia language? Maybe you could take some ideas from there. I'm pretty sure it does the same thing you do here, and they support arbitrary code, with the limitations that you cannot allocate memory, I/O is disallowed, and badly typed (dynamic) code will not compile.
Zygote.jl
-
Yann Lecun: ML would have advanced if other lang had been adopted versus Python
(This is the same comment quoted above under CUDA.jl.)
-
How long till Julia could be the default language to learn ML?
I think Julia has a lot going for it. I feel like autograd is one of the bigger ones, given that it's basically a language feature (https://github.com/FluxML/Zygote.jl for reference). I think the ecosystem is a bit of an uphill battle, though.
-
Neural networks with automatic differentiation.
Also check out https://github.com/FluxML/Zygote.jl, which is the AD engine.
-
PyTorch 1.8 release with AMD ROCm support
> There's sadly no performant autodiff system for general purpose Python.
Like there is for general purpose Julia? (https://github.com/FluxML/Zygote.jl)
-
The KimKlone Microcomputer
Thanks again. Like you said, it is fun to dream (ask the "Scheme Machine" guys sometime about how they would go about it now), but practically, with technology like Julia's Zygote:
https://github.com/FluxML/Zygote.jl
the efficiency of autodiff might be similar to that of an opcode anyway.
So, how did DEC do on the Alpha processor? I always heard good things about it; IIRC it was based on the VAX, but 64-bit. I learned PDP-11 assembler at RPI, during their college program for high school students, in about 1984. We hand-assembled code and really got to know the architecture.
- FluxML/Zygote.jl -- v0.6.3 should implement a `jacobian` function but doesn't?
-
Did the makers of Zygote.jl use category theory to define their approach to computable autodiff?
…and make that computable. Lines 88-90 of this file in Zygote seem to do that: https://github.com/FluxML/Zygote.jl/blob/master/src/compiler/chainrules.jl
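For context, that file wires ChainRules-style rules (a primal result plus a pullback closure) into Zygote's compiler. A minimal sketch of such a rule, using the documented ChainRulesCore API (the function name here is illustrative):

```julia
using ChainRulesCore

mysquare(x::Real) = x^2

# An rrule returns the primal value and a pullback that propagates the
# output cotangent ȳ back to cotangents of the inputs.
function ChainRulesCore.rrule(::typeof(mysquare), x::Real)
    y = mysquare(x)
    mysquare_pullback(ȳ) = (NoTangent(), 2x * ȳ)
    return y, mysquare_pullback
end
```

Zygote picks such rules up automatically, so `Zygote.gradient(mysquare, 3.0)` returns `(6.0,)`.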
- Study group: Structure and Interpretation of Classical Mechanics in Clojure
-
Ask HN: Show me your Half Baked project
It's super powerful.
For example, Zygote.jl (https://github.com/FluxML/Zygote.jl) implements reverse-mode automatic differentiation by defining a function that is a generated transformation of the function being differentiated.
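A minimal usage sketch: `gradient` differentiates ordinary Julia code, with no tracing tape or special tensor types.

```julia
using Zygote

f(x) = 3x^2 + 2x + 1
gradient(f, 2.0)              # => (14.0,) since f'(x) = 6x + 2

# The source-to-source transform handles arbitrary code, including
# higher-order functions and control flow.
g(xs) = sum(sin, xs) / length(xs)
gradient(g, [0.0, 1.0, 2.0])  # => (cos.([0.0, 1.0, 2.0]) ./ 3,)
```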
What are some alternatives?
LoopVectorization.jl - Macro(s) for vectorizing loops.
Enzyme - High-performance automatic differentiation of LLVM and MLIR.
cunumeric - An Aspiring Drop-In Replacement for NumPy at Scale
ForwardDiff.jl - Forward Mode Automatic Differentiation for Julia
awesome-quant - A curated list of insanely awesome libraries, packages and resources for Quants (Quantitative Finance)
Tullio.jl - ⅀
cudf - cuDF - GPU DataFrame Library
TensorFlow.jl - A Julia wrapper for TensorFlow
Flux.jl - Relax! Flux is the ML library that doesn't make you tensor
GPUCompiler.jl - Reusable compiler infrastructure for Julia GPU backends.
InvertibleNetworks.jl - A Julia framework for invertible neural networks