Lux.jl
Petalisp
| | Lux.jl | Petalisp |
|---|---|---|
| Mentions | 4 | 17 |
| Stars | 429 | 424 |
| Growth | 7.9% | - |
| Activity | 9.5 | 8.5 |
| Last commit | 5 days ago | about 2 months ago |
| Language | Julia | Common Lisp |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Lux.jl
- Julia 1.10 Released
- [R] Easiest way to train RNN's in MATLAB or Julia?
There is also the lesser-known Lux.jl package: https://github.com/avik-pal/Lux.jl
- “Why I still recommend Julia”
Can you point to a concrete example of one that someone would run into when using the differential equation solvers with the default and recommended Enzyme AD for vector-Jacobian products? I'd be happy to look into it, but there do not currently seem to be any open correctness issues in the Enzyme issue tracker (3 issues are open, but they all appear to have been fixed, other than https://github.com/EnzymeAD/Enzyme.jl/issues/278, which is actually an activity analysis bug in LLVM). So please be more specific. The issue with Enzyme right now seems to be more about finding functional forms that compile: it throws compile-time errors when it cannot fully analyze the program or when the program has too much dynamic behavior (example: https://github.com/EnzymeAD/Enzyme.jl/issues/368).
Additional note: we recently did an overhaul of SciMLSensitivity (https://sensitivity.sciml.ai/dev/) and set up a system that amounts to 15 hours of direct unit tests doing a combinatoric check of arguments, plus 4 hours of downstream testing (https://github.com/SciML/SciMLSensitivity.jl/actions/runs/25...). What that identified is that any remaining issues that can arise are due to the implicit parameters mechanism in Zygote (Zygote.params). To counteract this upstream issue, we (a) try to never default to Zygote VJPs whenever we can avoid it (hence defaulting to Enzyme and ReverseDiff first, as previously mentioned), and (b) put in a mechanism for early error throwing if Zygote hits any not-implemented derivative case, with an explicit error message (https://github.com/SciML/SciMLSensitivity.jl/blob/v7.0.1/src...).

We have alerted the devs of the machine learning libraries, and from this there has been a lot of movement. In particular, a globals-free machine learning library, Lux.jl, was created with fully explicit parameters (https://lux.csail.mit.edu/dev/), and thus by design it cannot have this issue. In addition, the Flux.jl library itself is looking to do a redesign that eliminates implicit parameters (https://github.com/FluxML/Flux.jl/issues/1986). Which design will win out in the end is uncertain right now, but it's clear that, no matter what, the future designs of the deep learning libraries will fully cut out that part of Zygote.jl. Additionally, the other AD libraries (Enzyme and Diffractor, for example) do not have this "feature", so it's an issue that can only arise from a specific (not recommended) way of using Zygote (which now throws explicit error messages early and often if used anywhere near SciML, because I don't tolerate it).
So from this, SciML should be rather safe and if not, please share some details and I'd be happy to dig in.
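A minimal sketch of what that explicit-parameter style looks like in practice (assuming the standard Lux.jl + Zygote workflow; the model, data, and `loss` function here are illustrative, not from the thread):

```julia
using Lux, Random, Zygote

rng = Random.default_rng()

# Lux layers are immutable descriptions; they hold no parameters themselves.
model = Chain(Dense(2 => 16, tanh), Dense(16 => 1))

# Parameters and layer state come back as plain NamedTuples...
ps, st = Lux.setup(rng, model)

x = rand(rng, Float32, 2, 8)

# ...and are passed explicitly on every call.
y, st = model(x, ps, st)

# Gradients are taken with respect to the explicit `ps`, so nothing relies on
# Zygote's implicit Zygote.params mechanism.
loss(ps) = sum(abs2, first(model(x, ps, st)))
grads = Zygote.gradient(loss, ps)[1]
```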
- The Julia language has a number of correctness flaws
Lots of things are being rewritten. Remember we just released a new neural network library the other day, SimpleChains.jl, and showed that it gave about a 10x speed improvement over JAX's Equinox on modern CPUs with multithreading enabled (and 22x when AVX-512 is enabled) for smaller neural network and matrix-vector types of cases (https://julialang.org/blog/2022/04/simple-chains/). Then there's Lux.jl fixing some major issues of Flux.jl (https://github.com/avik-pal/Lux.jl). Pretty much everything is switching to Enzyme, which improves performance quite a bit over Zygote and allows for full mutation support (https://github.com/EnzymeAD/Enzyme.jl). So parts of an entire machine learning stack are already being released.
Right now we're in a bit of an uncomfortable spot where we have to use Zygote for a few things and then Enzyme for everything else, but the custom rules system is rather close and that's the piece that's needed to make the full transition.
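A rough sketch of what "full mutation support" means here (a toy in-place function differentiated in reverse mode with Enzyme.jl's `autodiff`/`Duplicated` pattern; exact signatures can differ between Enzyme.jl versions):

```julia
using Enzyme

# An in-place function: writes x.^2 into `out` and returns nothing.
function square!(out, x)
    for i in eachindex(x)
        out[i] = x[i]^2
    end
    return nothing
end

x    = [1.0, 2.0, 3.0]
dx   = zero(x)     # shadow of `x`; accumulates the gradient
out  = zeros(3)
dout = ones(3)     # seed: d(loss)/d(out) for loss = sum(out)

# Reverse-mode AD straight through the mutating code; each mutated buffer is
# paired with its shadow via Duplicated. The return value is inactive (Const).
autodiff(Reverse, square!, Const, Duplicated(out, dout), Duplicated(x, dx))

dx  # ≈ [2.0, 4.0, 6.0], i.e. 2 .* x
```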
Petalisp
- Petalisp: Elegant High Performance Computing
- Is there a tutorial for automatic differentiation with petalisp?
- Is there a language with lisp syntax but C semantics?
While not "as fast as C" (C is not the absolute pinnacle of performance), Common Lisp is incredibly fast compared to the majority of programming languages around today. There is even a huge amount of ongoing work being done to make it faster still. We are seeing many interesting projects that make better use of the hardware in your computer (e.g. https://github.com/marcoheisig/Petalisp).
- Common Lisp Implementations in 2023
I think the lisp-stat library is actually being developed. However, one numerical CL library that doesn't get enough mention and is under constant development is Petalisp, for HPC:
https://github.com/marcoheisig/Petalisp
- numericals - Performance of NumPy with the goodness of Common Lisp
However, if you have a Lisp library that puts those semantics to use, then you could get it to employ magicl/ext-blas and cl-bmas to speed it up. (Petalisp looks relevant, but I lack the background to compare it with APL.)
- New Lisp-Stat Release
> This means CL packages can be "done".
This is true if there is nothing functional that can be added to a package. However, it's very much not true for ML frameworks right now; new things are being added to the field all the time. Even in the package I linked, you have the necessary ingredients for any deep learning model: CUDA support and backpropagation. The other person mentioned convolution, which I think is pretty trivial to implement, but still, if you expect everything to be ready-made for you, then you should probably stick to TF and PyTorch. If you want to explore the cutting edge and push the boundaries, then I think Common Lisp is a good tool. As an aside, it might also be interesting to note that a Common Lisp package (Petalisp) is being used for high-performance computing by a German university:
https://github.com/marcoheisig/Petalisp
- The Julia language has a number of correctness flaws
- When a young programmer who has been using C for several years is convinced that C is the best possible programming language and that people who don't prefer it just haven't used it enough, what is the best argument for Lisp vs C, given that they're already convinced in favor of C?
One trick is that Common Lisp can generate and compile code at runtime, whereas static languages typically do not have a compiler available at runtime. This lets you make your own lazy person's JIT/staged compiler, which is useful if some part of the problem is not known at compile-time. Such an approach has been used at least for array munging, type munging and regular expression munging.
What are some alternatives?
Flux.jl - Relax! Flux is the ML library that doesn't make you tensor
awesome-cl - A curated list of awesome Common Lisp frameworks, libraries and other shiny stuff.
Enzyme - High-performance automatic differentiation of LLVM and MLIR.
JWM - Cross-platform window management and OS integration library for Java
julia - The Julia Programming Language
cl-cuda - Cl-cuda is a library to use NVIDIA CUDA in Common Lisp programs.
Enzyme.jl - Julia bindings for the Enzyme automatic differentiator
magicl - Matrix Algebra proGrams In Common Lisp.
StatsBase.jl - Basic statistics for Julia
lish - Lisp Shell
BetaML.jl - Beta Machine Learning Toolkit