Flux.jl vs Transformers.jl

| | Flux.jl | Transformers.jl |
|---|---|---|
| Mentions | 22 | 7 |
| Stars | 4,393 | 504 |
| Growth | 0.4% | - |
| Activity | 8.7 | 6.9 |
| Last commit | 3 days ago | 3 months ago |
| Language | Julia | Julia |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Flux.jl
- Julia 1.10 Released
- What Apple hardware do I need for CUDA-based deep learning tasks?
If you are really committed to running on Apple hardware, then take a look at TensorFlow for macOS. Another option is the Julia programming language, which has very basic Metal support at a CUDA-like level. FluxML would be the ML framework in Julia. I’m not sure either option will be painless or let you do everything you could do with an Nvidia GPU.
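For context, the Metal support mentioned here comes from the Metal.jl package in the JuliaGPU ecosystem; below is a minimal sketch of what "CUDA-like" array offloading looks like on an Apple GPU (the array sizes and operations are illustrative only):

```julia
using Metal

# Copy a CPU array to the Apple GPU; MtlArray is Metal.jl's analogue of CuArray.
a = MtlArray(rand(Float32, 1024))

# Broadcasts are compiled to Metal kernels and run on the GPU.
b = a .* 2f0 .+ 1f0

# Copy the result back to the CPU for inspection.
Array(b)
```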
- [D] ClosedAI license, open-source license which restricts only OpenAI, Microsoft, Google, and Meta from commercial use
Flux dominance!
- What would be your programming language of choice to implement a JIT compiler?
I’m no compiler expert, but check out Flux and Zygote: https://fluxml.ai/
- Any help or tips for Neural Networks on Computer Clusters
I would suggest you look into the Julia ecosystem instead of C++. Julia is almost identical to Python in terms of how you use it, but it's still very fast. You should look into the Flux.jl package for Julia.
- [D] Why are we stuck with Python for something that require so much speed and parallelism (neural networks)?
Give Julia a try: https://fluxml.ai
- Deep Learning With Flux: Loss Doesn't Converge
2) Flux treats softmax a little differently than most other activation functions, such as relu and sigmoid (see here for more details). When you pass an activation function into a layer like Dense(3, 32, relu), Flux expects that the function is broadcast over the layer's output. However, softmax cannot be broadcast, as it operates over vectors rather than scalars. This means that if you want to use softmax as the final activation in your model, you need to pass it into Chain() like so:
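The quoted answer cuts off before its code, so here is a minimal sketch of the pattern it describes (the layer sizes are illustrative, not taken from the original post):

```julia
using Flux

# relu is broadcast elementwise inside the Dense layer, while softmax is its
# own stage in the Chain because it operates on the whole output vector.
model = Chain(
    Dense(3, 32, relu),
    Dense(32, 5),
    softmax,
)

x = rand(Float32, 3)
model(x)   # a length-5 probability vector summing to 1
```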
- “Why I still recommend Julia”
Can you point to a concrete example of one that someone would run into when using the differential equation solvers with the default and recommended Enzyme AD for vector-Jacobian products? I'd be happy to look into it, but there do not currently seem to be any open correctness issues in the Enzyme issue tracker (3 issues are open, but they all seem to be fixed, other than https://github.com/EnzymeAD/Enzyme.jl/issues/278, which is actually an activity analysis bug in LLVM). So please be more specific. The issue with Enzyme right now seems to be more about finding functional forms that compile: it throws compile-time errors in the event that it cannot fully analyze the program or if it has too much dynamic behavior (example: https://github.com/EnzymeAD/Enzyme.jl/issues/368).
As an additional note, we recently did an overhaul of SciMLSensitivity (https://sensitivity.sciml.ai/dev/) and set up a system which amounts to 15 hours of direct unit tests doing a combinatoric check of arguments, with 4 hours of downstream testing (https://github.com/SciML/SciMLSensitivity.jl/actions/runs/25...). What that identified is that any remaining issues that can arise are due to the implicit parameters mechanism in Zygote (Zygote.params). To counteract this upstream issue, we (a) try never to default to Zygote VJPs whenever we can avoid it (hence defaulting to Enzyme and ReverseDiff first, as previously mentioned), and (b) put in a mechanism for early error throwing, with an explicit error message, if Zygote hits any not-implemented derivative case (https://github.com/SciML/SciMLSensitivity.jl/blob/v7.0.1/src...). We have alerted the devs of the machine learning libraries, and from this there has been a lot of movement. In particular, a globals-free machine learning library, Lux.jl, was created with fully explicit parameters (https://lux.csail.mit.edu/dev/), and thus by design it cannot have this issue. In addition, the Flux.jl library itself is looking to do a redesign that eliminates implicit parameters (https://github.com/FluxML/Flux.jl/issues/1986). Which design will win out in the end is uncertain right now, but it's clear that, whatever happens, the future designs of the deep learning libraries will fully cut out that part of Zygote.jl. Additionally, the other AD libraries (Enzyme and Diffractor, for example) do not have this "feature", so it's an issue that can only arise from a specific (not recommended) way of using Zygote (which now throws explicit error messages early and often if used anywhere near SciML, because I don't tolerate it).
So from this, SciML should be rather safe, and if not, please share some details and I'd be happy to dig in.
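For readers who have not seen the distinction being discussed, here is a minimal sketch of implicit (Zygote.params) versus explicit parameter handling in Flux; the toy model and loss are made up for illustration, and on very recent Flux versions the implicit style is deprecated or removed:

```julia
using Flux

m = Dense(2 => 1)
x, y = rand(Float32, 2), rand(Float32, 1)

# Implicit-parameter style: parameters are collected globally via Flux.params
# (Zygote.params under the hood). This is the mechanism being phased out.
gs_implicit = gradient(() -> Flux.Losses.mse(m(x), y), Flux.params(m))

# Explicit-parameter style: the model itself is the differentiated argument,
# which is what Lux.jl and the newer Flux designs standardize on.
gs_explicit = gradient(model -> Flux.Losses.mse(model(x), y), m)
```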
- Flux: The Elegant Machine Learning Stack
- Jax vs. Julia (Vs PyTorch)
> In his item #1, he links to https://discourse.julialang.org/t/loaderror-when-using-inter... The issue is actually a bug in Zygote, a Julia package for auto-differentiation, and is not directly related to the Julia codebase (or the Flux package) itself. Furthermore, the problematic code is working fine now, because DiffEqFlux has switched to Enzyme, which doesn't have that bug. He should first confirm whether the problem he is citing is actually a problem or not.
> Item #2, again another Zygote bug.
If Flux chose a buggy package as a dependency, that's on them, and users are well justified in steering clear of Flux if it has buggy dependencies. As of today, the Project.toml for both Flux and DiffEqFlux still lists Zygote as a dependency. Neither lists Enzyme.
https://github.com/FluxML/Flux.jl/blob/master/Project.toml
Transformers.jl
- Julia 1.10 Released
Flux is quite a nice lower level library:
https://github.com/FluxML/Flux.jl
On top of that, there are many higher-level libraries, such as Transformers.jl:
https://github.com/chengchingwen/Transformers.jl
- How is Julia Performance with GPUs (for LLMs)?
- Load a transformer model with julia
Check out Transformers.jl. It’s a library that implements transformer-based models in Julia using Flux.jl. They have support for some of the Hugging Face transformers.
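As a rough sketch of that Hugging Face integration, the package provides an hgf"" string macro for loading pretrained checkpoints; the model name and suffix syntax below are assumptions and may differ between Transformers.jl versions, so check its README:

```julia
using Transformers
using Transformers.HuggingFace

# Hypothetical example: load a tokenizer/text encoder and the matching
# pretrained weights from the Hugging Face hub via the hgf"" macro.
textenc = hgf"bert-base-uncased:tokenizer"
model   = hgf"bert-base-uncased:model"
```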
- Ask HN: Why hasn't the Deep Learning community embraced Julia yet?
https://github.com/chengchingwen/Transformers.jl, but I have not had any personal experience with it.
All of this is built by the community, and your mileage may vary.
In my rather biased opinion, the strength of Julia is that the various ML libraries can share implementations, whereas e.g. PyTorch and TensorFlow each contain their own separate NumPy derivatives. One could say that you can write an ML framework in Julia, instead of writing a DSL in Python as part of your C++ ML library. As an example, Julia has a GPU compiler, so you can write your own layer directly in Julia and integrate it into your pipeline (see the sketch below).
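To make the "write your own layer directly in Julia" point concrete, here is a minimal sketch of a custom Flux layer; the layer itself is invented for illustration, and older Flux versions use Flux.@functor instead of Flux.@layer:

```julia
using Flux

# A toy elementwise-gain layer: just a struct plus a call method.
struct Gain{T}
    w::T
end
Gain(n::Integer) = Gain(ones(Float32, n))

Flux.@layer Gain           # register the fields as trainable parameters
                           # (older Flux versions use Flux.@functor Gain)

(g::Gain)(x) = g.w .* x    # plain broadcasting, works on CPU and GPU arrays

model = Chain(Dense(4 => 4, relu), Gain(4))
model(rand(Float32, 4))
```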
- Help on Differentiable Programming
I think you might have some luck looking at a transformer implementation in Flux, e.g.: https://github.com/chengchingwen/Transformers.jl/tree/master/src/basic
- Fastai.jl: Fastai for Julia
Having tried fastai for a "serious" research project and helped (just a bit) with FastAI.jl development, here's my take:
> motivation behind this is unclear.
Julia currently has two main DL libraries: Flux, which is somewhere between PyTorch and (tf.)Keras abstraction-wise, and Knet, which is a little lower level (think just below PyTorch, around where MXNet Gluon sits). Frameworks like fastai, PyTorch Lightning and Keras demonstrate that there's a desire for higher-level, more batteries-included libraries. FastAI.jl is looking to fill that gap in Julia.
> Since FastAI.jl uses Flux, and not PyTorch, functionality has to be reimplemented. FastAI.jl has vision support but no text support yet.
This is correct. That said, FastAI.jl is not and does not plan to be a copy of the Python API (hence "inspired by"). One consequence of this is that integration with other libraries is much easier, e.g. https://github.com/chengchingwen/Transformers.jl for NLP tasks.
> What is the timeline for FastAI.jl to achieve parity?
- Julia Update: Adoption Keeps Climbing; Is It a Python Challenger?
If NLP primitives are all that's keeping you from testing the waters, have a look at https://github.com/chengchingwen/Transformers.jl.
What are some alternatives?
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
PackageCompiler.jl - Compile your Julia Package
Knet.jl - Koç University deep learning framework.
model-zoo - Please do not feed the models
tensorflow - An Open Source Machine Learning Framework for Everyone
DataLoaders.jl - A parallel iterator for large machine learning datasets that don't fit into memory inspired by PyTorch's `DataLoader` class.
Torch.jl - Sensible extensions for exposing torch in Julia.
Chain.jl - A Julia package for piping a value through a series of transformation expressions using a more convenient syntax than Julia's native piping functionality.
Lux.jl - Explicitly Parameterized Neural Networks in Julia
StatsPlots.jl - Statistical plotting recipes for Plots.jl
flax - Flax is a neural network library for JAX that is designed for flexibility.
org-mode - This is a MIRROR only, do not send PR.