dex-lang vs jax
| | dex-lang | jax |
|---|---|---|
| Mentions | 25 | 82 |
| Stars | 1,531 | 27,509 |
| Growth | 0.0% | 3.8% |
| Activity | 9.1 | 10.0 |
| Latest commit | 3 months ago | 1 day ago |
| Language | Haskell | Python |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dex-lang
- Thinking in an Array Language
A really nice approach to this I've seen recently is Google's research on [Dex](https://github.com/google-research/dex-lang).
- Function Composition in Programming Languages – Conor Hoekstra – CppNorth 2023 [video]
- [D] Have there been any attempts to create a programming language specifically for machine learning?
Dex
That said, there are some things that try to do this. Haskell has a port of Torch called Hasktorch that includes this kind of typed tensor shape, and calls the Z3 theorem prover on the backend to solve type inference. It can get away with this because of the LiquidHaskell compiler extension, which has refinement types capable of solving this kind of typing problem and is already pretty mature. Dex is a research language from Google that's based on Haskell and built to explore this kind of typechecking. Really you'd want to do this in Rust, because that's where the trade-off of speed and safety for convenience makes the most sense, but Rust is just barely on the edge of having a type system capable of this. You have to get really clever with the type system to make it work at all, and there's been no sustained push from any company to develop this into a mature solution. Hopefully something better comes along soon.
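For readers unfamiliar with typed tensor shapes, here is a toy Python sketch (an illustration written for this page, not code from Hasktorch or Dex) of the invariant those type systems verify at compile time rather than via runtime assertions:

```python
import numpy as np

def matmul_checked(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # What Hasktorch/Dex express in the types: the inner dimensions must
    # agree, and the output shape is fixed by the outer dimensions.
    m, k = x.shape
    k2, n = y.shape
    assert k == k2, f"inner dimensions differ: {k} vs {k2}"
    out = x @ y
    assert out.shape == (m, n)
    return out

matmul_checked(np.ones((3, 4)), np.ones((4, 5)))  # ok, result shape (3, 5)
```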
- [D] PyTorch 2.0 Announcement
Have you tried Dex? https://github.com/google-research/dex-lang It is in a relatively early stage, but it is exploring some interesting parts of the design space.
- Mangle, a programming language for deductive database programming
- Looking for languages that combine algebraic effects with parallel execution
I think [Dex](https://github.com/google-research/dex-lang) might be along the lines of what you're looking for, although its focus is on SIMD GPU-style parallelism rather than thread-level parallelism.
- “Why I still recommend Julia”
Dex proves indexing correctness, including inside loops, without a full dependent type system.
- Haskell for Artificial Intelligence?
In case you want to see one research direction that's combining practical machine learning and functional programming: one of the authors of JAX (and the main author of its predecessor, Autograd) is writing Dex (https://github.com/google-research/dex-lang), a functional language for array processing. The compiler itself is written in Haskell. JAX is one of the most popular libraries for doing a lot of machine learning these days, along with TensorFlow and PyTorch. You might also want to see the issue in the JAX repo about adding Haskell support, for some context: https://github.com/google/jax/issues/185
- [D] Ideal deep learning library
Probably the most well-developed options I know for this at the moment are Dex and Hasktorch.
jax
- The Elements of Differentiable Programming
The dual numbers exist just as surely as the real numbers and have been in use for well over 100 years:
https://en.m.wikipedia.org/wiki/Dual_number
PyTorch has had them for many years:
https://pytorch.org/docs/stable/generated/torch.autograd.for...
JAX implements them and uses them exactly as stated in this thread:
https://github.com/google/jax/discussions/10157#discussionco...
As you so eloquently stated, "you shouldn't be proclaiming things you don't actually know on a public forum," and doubly so when your claimed "corrections" are so demonstrably and totally incorrect.
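For the record, forward-mode differentiation with those dual numbers is one call in JAX; a minimal sketch (the function f is just an illustration):

```python
import jax

def f(x):
    return x ** 3 + 2.0 * x

# Forward-mode AD: jax.jvp carries a tangent through f like the epsilon
# part of a dual number, producing f(2.0) and f'(2.0) in one pass.
primal, tangent = jax.jvp(f, (2.0,), (1.0,))
print(primal)   # 12.0
print(tangent)  # f'(2) = 3 * 2**2 + 2 = 14.0
```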
- Julia GPU-based ODE solver 20x-100x faster than those in Jax and PyTorch
On your last point, as long as you jit the topmost level, it doesn't matter whether or not you have inner jitted functions. The end result should be the same.
Source: https://github.com/google/jax/discussions/5199#discussioncom...
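A minimal sketch of that point (the function names are illustrative): the inner jit is simply inlined when the outer function is traced, so jitting the topmost level yields one compiled XLA program either way.

```python
import jax
import jax.numpy as jnp

@jax.jit
def inner(x):
    return jnp.tanh(x) * 2.0

@jax.jit
def outer(x):
    # Tracing `outer` inlines `inner`; because the top level is jitted,
    # the whole computation compiles to a single XLA program.
    return inner(x) + 1.0

print(outer(jnp.arange(3.0)))
```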
- Apple releases MLX for Apple Silicon
The design of MLX is inspired by frameworks like NumPy, PyTorch, Jax, and ArrayFire.
- MatX: Efficient C++17 GPU numerical computing library with Python-like syntax
> Are they even comparing apples to apples to claim that they see these improvements over NumPy?
> While the code complexity and length are roughly the same, the MatX version shows a 2100x speedup over the NumPy version, and is over 4x faster than the CuPy version on the same GPU.
NumPy doesn't use the GPU by default unless you use something like JAX [1] to compile NumPy code to run on GPUs. I think a more honest comparison would mainly compare MatX against NumPy running on the same CPU, and focus the GPU comparison against CuPy.
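A minimal sketch of that point: the same NumPy-style code targets an accelerator just by going through jax.numpy and jax.jit, while plain NumPy always executes on the CPU (the normalize function here is just an illustration):

```python
import jax.numpy as jnp
from jax import jit

@jit  # compiled with XLA and dispatched to a GPU/TPU when one is present
def normalize(x):
    return (x - x.mean()) / x.std()

x = jnp.linspace(0.0, 1.0, 1_000_000)
print(normalize(x)[:3])
```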
- JAX – NumPy on the CPU, GPU, and TPU, with great automatic differentiation
Actually that never changed. The README has always had an example of differentiating through native Python control flow:
https://github.com/google/jax/commit/948a8db0adf233f333f3e5f...
The constraints on control flow expressions come from jax.jit (because Python control flow can't be staged out) and jax.vmap (because we can't take multiple branches of Python control flow, which we might need to do for different batch elements). But autodiff of Python-native control flow works fine!
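A minimal sketch in the spirit of the README example mentioned above (abs_cubed is an illustrative name):

```python
import jax

def abs_cubed(x):
    # Native Python control flow: fine under jax.grad, which traces with
    # concrete values. Under jax.jit or jax.vmap this branch would need
    # to be expressed with jax.lax.cond instead.
    if x > 0:
        return x ** 3
    else:
        return -(x ** 3)

print(jax.grad(abs_cubed)(2.0))   # 12.0
print(jax.grad(abs_cubed)(-2.0))  # -12.0
```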
Development seems not to have dropped at all, judging from the contributors page: https://github.com/google/jax/graphs/contributors
Don’t know about usage and uptake though.
You're right! Maybe we should revise that... I made https://github.com/google/jax/pull/17851, comments welcome!
- Julia and Mojo (Modular) Mandelbrot Benchmark
For a similar benchmark (also Mandelbrot) that took place in a JAX repo discussion: https://github.com/google/jax/discussions/11078#discussionco...
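For flavor, a minimal JAX Mandelbrot sketch (an illustration written for this page, not the code from that discussion):

```python
import jax.numpy as jnp
from jax import jit

def make_grid(width=600, height=400):
    x = jnp.linspace(-2.0, 1.0, width)
    y = jnp.linspace(-1.5, 1.5, height)
    return x[None, :] + 1j * y[:, None]  # complex plane, shape (height, width)

@jit
def mandelbrot(c):
    z = jnp.zeros_like(c)
    count = jnp.zeros(c.shape, dtype=jnp.int32)
    for _ in range(50):  # fixed trip count, unrolled at trace time
        z = z * z + c
        count = count + (jnp.abs(z) <= 2.0)  # still-bounded points keep counting
    return count

img = mandelbrot(make_grid())  # iteration count per pixel
```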
- Functional Programming 1
2. https://github.com/fantasyland/fantasy-land (A bit heavy on jargon)
Note: there is a Python version of Ramda available on PyPI, and there are a lot of FP tidbits inside JAX:
3. https://pypi.org/project/ramda/ (Worth making your own version if you want to learn, though)
4. For nested data, JAX tree_util is epic: https://jax.readthedocs.io/en/latest/jax.tree_util.html (see the tree_map sketch after this list) and also their curry implementation is funny: https://github.com/google/jax/blob/4ac2bdc2b1d71ec0010412a32...
Anyway, don’t put FP on a pedestal; the main thing is to focus on the core principles of avoiding external mutation and making helper functions. That doesn’t always work, because some languages like Rust don’t have legit support for currying (as of August 2023, afaik), but in those cases you can hack it with builder methods to an extent.
Finally, if you want to understand the middle of the midwit meme, check out this wiki article and connect the free monoid to the Kleene star (0 or more copies of your pattern) and the Kleene plus (1 or more copies of your pattern). Those also appear in regex, so it can help you remember the regex symbols. https://en.wikipedia.org/wiki/Free_monoid?wprov=sfti1
The simplest example might be {0}*, in which case 0 copies gives “” (the empty string is included because we use * rather than +), then “0”, “00”, and so on.
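As promised above, a minimal jax.tree_util.tree_map sketch (the params structure is made up for illustration):

```python
import jax
import jax.numpy as jnp

params = {"layer1": {"w": jnp.ones((2, 2)), "b": jnp.zeros(2)},
          "layer2": {"w": jnp.ones((1, 2)), "b": jnp.zeros(1)}}

# tree_map treats nested dicts/lists/tuples as a "pytree", applies the
# function to every array leaf, and returns the same nested structure.
scaled = jax.tree_util.tree_map(lambda p: 0.1 * p, params)
print(scaled["layer1"]["w"])
```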
- Codon: Python Compiler
What are some alternatives?
Numba - NumPy aware dynamic Python compiler using LLVM
functorch - functorch is JAX-like composable function transforms for PyTorch.
julia - The Julia Programming Language
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
Cython - The most widely used Python to C compiler
jax-windows-builder - A community supported Windows build for jax.
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
tensorflow - An Open Source Machine Learning Framework for Everyone
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️
futhark - :boom::computer::boom: A data-parallel functional programming language
brax - Massively parallel rigidbody physics simulation on accelerator hardware.
stan - Stan development repository. The master branch contains the current release. The develop branch contains the latest stable development. See the Developer Process Wiki for details.