Zygote.jl vs syntaxdot

| | Zygote.jl | syntaxdot |
|---|---|---|
| Mentions | 9 | 4 |
| Stars | 1,439 | 67 |
| Growth | 0.4% | - |
| Activity | 8.1 | 6.2 |
| Last commit | about 1 month ago | 6 months ago |
| Language | Julia | Rust |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
- Stars: the number of stars that a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
Zygote.jl
-
Yann LeCun: ML would have advanced if other lang had been adopted versus Python
If you look at Julia open source projects, you'll see that they tend to have a lot more contributors than their Python counterparts, even over shorter time periods. A package for defining statistical distributions has had 202 contributors (https://github.com/JuliaStats/Distributions.jl), etc. Even Julia Base has had over 1,300 contributors (https://github.com/JuliaLang/julia), which is quite a lot for a core language, and that's mostly because the majority of the core is written in Julia itself.
This is one of the things that was noted quite a bit at this SIAM CSE conference: Julia development tends to have a lot more code reuse than other ecosystems like Python. For example, the machine learning libraries Flux.jl and Lux.jl share a lot of layer intrinsics in NNlib.jl (https://github.com/FluxML/NNlib.jl), the same GPU libraries (https://github.com/JuliaGPU/CUDA.jl), the same automatic differentiation library (https://github.com/FluxML/Zygote.jl), and of course the same JIT compiler (Julia itself). These two libraries are far enough apart that people say "Flux is to PyTorch as Lux is to JAX/flax", but while in the Python world those frameworks share almost no code or implementation, in the Julia world Flux and Lux share >90% of their core internals while exposing different higher-level APIs.
If one hasn't participated in this space, it's a bit hard to fathom how much code reuse goes on and how much that is influenced by the design of multiple dispatch. This is one of the reasons there is so much cohesion in the community: it doesn't matter if one person is an ecologist and the other is a financial engineer; you may both be contributing to the same library, like Distances.jl, just adding a distance function that is then used in thousands of places. In the Python ecosystem you tend to have a lot more "megapackages" (PyTorch, SciPy, etc.) where the barrier to entry is generally a lot higher (and sometimes requires wrangling the build systems, fun times). But in the Julia ecosystem a lot of core development happens in "small" but central libraries, like Distances.jl or Distributions.jl, which are simple enough for an undergrad to get productive in within a week but are then used everywhere (Distributions.jl, for example, is used in every statistics package and for defining prior distributions in Turing.jl's probabilistic programming language).
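To make the dispatch point concrete, here is a minimal sketch of the extension pattern the comment describes; the metric itself (`CappedCityblock`) is hypothetical, but subtyping `Metric` and making the type callable follows Distances.jl's documented convention:

```julia
using Distances  # assumes the Distances.jl package is installed

# Hypothetical metric for illustration: coordinate-wise absolute
# differences, each capped at `c`, then summed. Because it subtypes
# `Metric`, any package that is generic over `Distances.Metric` can
# use it through multiple dispatch, with no changes on its side.
struct CappedCityblock <: Metric
    c::Float64
end

# Distances.jl metrics are callable on a pair of vectors.
(d::CappedCityblock)(a, b) = sum(min.(abs.(a .- b), d.c))

d = CappedCityblock(1.0)
println(d([0.0, 0.0], [0.5, 3.0]))  # 0.5 + 1.0 = 1.5
```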
-
How long till Julia could be the default language to learn ML?
I think Julia has a lot going for it. I feel like autograd is one of the bigger ones, given that it's basically a language feature (https://github.com/FluxML/Zygote.jl for reference). I think the ecosystem is a bit of an uphill battle though.
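For a sense of what "basically a language feature" means, a minimal Zygote sketch, assuming Zygote.jl is installed:

```julia
using Zygote

# Differentiate an ordinary Julia closure; no graph building or
# special tensor types required.
println(gradient(x -> 3x^2 + 2x, 5.0))  # (32.0,)
```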
-
Neural networks with automatic differentiation.
Also check out https://github.com/FluxML/Zygote.jl, which is the AD engine.
-
PyTorch 1.8 release with AMD ROCm support
> There's sadly no performant autodiff system for general purpose Python.
Like there is for general purpose Julia? (https://github.com/FluxML/Zygote.jl)
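A small sketch of what "general purpose" buys you; Zygote differentiates plain Julia control flow, not just a restricted tensor DSL:

```julia
using Zygote  # assumes the Zygote.jl package is installed

# A function with a loop and a branch; for x > 0 it reduces to
# f(x) = (1 + 2 + 3) * x^2 = 6x^2.
function f(x)
    s = 0.0
    for i in 1:3
        s += i * x^2
    end
    return x > 0 ? s : -s
end

println(gradient(f, 2.0))  # (24.0,) since d/dx 6x^2 = 12x
```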
-
The KimKlone Microcomputer
Thanks again. Like you said, it is fun to dream (ask the "Scheme Machine" guys sometime about how they would go about it now), but practically, with technology like Julia's Zygote:
https://github.com/FluxML/Zygote.jl
the efficiency of autodiff might be similar to that of an opcode anyway.
So, how did DEC do on the Alpha processor? I always heard good things about it; IIRC it was based on the VAX, but 64-bit. I learned PDP-11 assembler at RPI during their college program for high school students, in about 1984. We hand-assembled code and really got to know the architecture.
-
FluxML/Zygote.jl -- v0.6.3 should implement a `jacobian` function but doesn't?
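For reference, a `jacobian` function did land during Zygote's 0.6 series; a minimal usage sketch, assuming a release that provides it:

```julia
using Zygote  # assumes a Zygote 0.6.x release that exports `jacobian`

# `Zygote.jacobian` returns one Jacobian matrix per argument of `f`.
f(x) = [x[1]^2, x[1] * x[2]]
J, = Zygote.jacobian(f, [3.0, 2.0])
println(J)  # [6.0 0.0; 2.0 3.0]
```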
-
Did the makers of Zygote.jl use category theory to define their approach to computable autodiff?
…and make that computable. It seems like lines 88-90 of this file in Zygote do that: https://github.com/FluxML/Zygote.jl/blob/master/src/compiler/chainrules.jl
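For context on the mechanism those lines hook into: Zygote consumes `rrule` definitions from the ChainRules ecosystem, each of which pairs a primal result with a pullback closure. A minimal sketch with a hypothetical function, assuming a recent ChainRulesCore:

```julia
using ChainRulesCore, Zygote

mysquare(x) = x^2

# An rrule returns the primal value and a pullback mapping an output
# cotangent to input cotangents (the first slot is the tangent for
# the function object itself).
function ChainRulesCore.rrule(::typeof(mysquare), x)
    pullback(ȳ) = (NoTangent(), 2x * ȳ)
    return mysquare(x), pullback
end

println(Zygote.gradient(mysquare, 3.0))  # (6.0,)
```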
-
Study group: Structure and Interpretation of Classical Mechanics in Clojure
-
Ask HN: Show me your Half Baked project
It's super powerful.
For example, Zygote.jl (https://github.com/FluxML/Zygote.jl) implements reverse-mode automatic differentiation by defining a function that is a generated transformation of the function being differentiated.
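That transformation is visible through Zygote's `pullback` API; a minimal sketch, assuming Zygote.jl is installed:

```julia
using Zygote

# `pullback` returns the primal value together with the generated
# backward pass for the function being differentiated.
y, back = Zygote.pullback(sin, 1.0)
println(y)          # sin(1.0) ≈ 0.8414709848
println(back(1.0))  # (cos(1.0),) ≈ (0.5403023059,)
```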
syntaxdot
-
Candle: Torch Replacement in Rust
I am so happy about them releasing this. A few years ago I wrote a multi-task syntax annotator in Rust using Laurent Mazare's excellent tch-rs binding (it seems like he is also working on Candle):
https://github.com/tensordot/syntaxdot
However, the deployment story was always quite difficult. The PyTorch C++ API is not stable, so a particular version of tch-rs will only work with a particular PyTorch version. So, anyone wanting to use SyntaxDot always had to get exactly the right version of libtorch (and set some environment variables) to build the project.
The idea of making an abstraction over Torch and Rust ndarray (similar to Burn) crossed my mind several times, but there is only so much that I could do as a solo developer. So Candle would be a godsend if I were still working on this project.
Seeing Candle makes me want to port curated-transformers to Candle for fun:
https://github.com/explosion/curated-transformers
-
Ask HN: What is the job market like, for niche languages (Nim, crystal)?
The libraries are obviously not as good as Python's, but if you are willing to invest time, it's definitely doable. E.g., I made a multi-task transformer-based syntax annotator in Rust using the tch Torch binding:
https://github.com/tensordot/syntaxdot
In my current job, I do NLP with Python, Cython, and some C++. I don't think doing it in Rust was much more work. Once you are beyond the stage of implementing a small research project or toy model, most systems are going to contain a lot of custom, specialized code. You will have to do that work in any language.
-
PyTorch 1.8 release with AMD ROCm support
What I like about PyTorch is that most of the functionality is actually available through the C++ API as well, which has 'beta API stability' as they call it. So, there are good bindings for some other languages as well. E.g., I have been using the Rust bindings in a larger project [1], and they have been awesome. A precursor to the project was implemented using Tensorflow, which was a world of pain.
Even things like mixed-precision training are fairly easy to do through the API.
[1] https://github.com/tensordot/syntaxdot
-
SpaCy v3.0 Released (Python Natural Language Processing)
> Huggingface fills the need for task based prediction when you have a GPU.
With model distillation, it should be possible to annotate hundreds of sentences per second on a single CPU with a library like Huggingface Transformers.
For instance, one of my distilled Dutch multi-task syntax models (UD POS, language-specific POS, lemmatization, morphology, dependency parsing) annotates 316 sentences per second with 4 threads on a Ryzen 3700X. This distilled model has virtually no loss in accuracy, compared to the finetuned XLM-RoBERTa base model.
I don't use Huggingface Transformers but ported some of their implementations to Rust [1]; that should not make a big difference, since all the heavy lifting happens in C++ in libtorch anyway.
tl;dr: it is not true that transformers are only useful for GPU prediction. You can get high CPU prediction speeds with some tricks (distillation, length-based bucketing of batches, etc.).
[1] https://github.com/tensordot/syntaxdot/tree/main/syntaxdot-t...
What are some alternatives?
Enzyme - High-performance automatic differentiation of LLVM and MLIR.
laserembeddings - LASER multilingual sentence embeddings as a pip package
ForwardDiff.jl - Forward Mode Automatic Differentiation for Julia
duckling - Language, engine, and tooling for expressing, testing, and evaluating composable language rules on input strings.
Tullio.jl - ⅀
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
TensorFlow.jl - A Julia wrapper for TensorFlow
projects - 🪐 End-to-end NLP workflows from prototype to production
Flux.jl - Relax! Flux is the ML library that doesn't make you tensor
tensorflow - An Open Source Machine Learning Framework for Everyone
InvertibleNetworks.jl - A Julia framework for invertible neural networks
candle - Minimalist ML framework for Rust