tangent vs dex-lang
| | tangent | dex-lang |
|---|---|---|
| Mentions | 2 | 25 |
| Stars | 2,280 | 1,538 |
| Growth | - | 0.5% |
| Activity | 10.0 | 8.8 |
| Latest commit | over 1 year ago | 2 days ago |
| Language | Python | Haskell |
| License | Apache License 2.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
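The exact weighting behind the activity number isn't published. As a rough illustration of "recent commits have higher weight", here is a hypothetical recency-weighted score in Python; the exponential decay, the `half_life_days` parameter, and the percentile mapping are all assumptions for illustration, not the site's actual formula.

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, half_life_days=30.0):
    """Hypothetical recency-weighted commit score: each commit
    contributes 0.5 ** (age_in_days / half_life_days), so recent
    commits count more than older ones."""
    now = datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score

# Example: three commits; the one-day-old commit dominates the score.
now = datetime.now(timezone.utc)
print(activity_score([now - timedelta(days=d) for d in (1, 45, 400)]))

# Projects would then be ranked by score and mapped onto a 0-10
# percentile scale (e.g. 9.0 = top 10% of tracked projects).
```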
tangent
- [D] How AD is implemented in JAX/Tensorflow/Pytorch?
Thank you so much for the detailed explanation! This reminds me of tangent, an abandoned (?) source-code-transformation (SCT) AD tool built by Google a couple of years ago. https://github.com/google/tangent
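For context, basic tangent usage looked roughly like this; treat it as a sketch from memory of the archived project's README, and in particular the `verbose` flag for printing the generated derivative source is an assumption.

```python
import tangent  # pip install tangent (the project is archived)

def f(x):
    return x * x

# Tangent is source-to-source AD: it emits plain Python source
# for the derivative instead of recording a runtime tape.
df = tangent.grad(f, verbose=1)  # verbose=1 prints the generated source (assumed flag)
print(df(2.0))  # 4.0
```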
- Trade-Offs in Automatic Differentiation: TensorFlow, PyTorch, Jax, and Julia
No, autograd acts similarly to PyTorch in that it builds a tape that it reverses, while PyTorch just comes with more optimized kernels (and kernels that run on GPUs). The AD I was referencing was tangent (https://github.com/google/tangent). It was an interesting project, but it's hard to see who the audience is. Generating Python source code makes things harder to analyze, and you cannot JIT compile the generated code unless you can JIT compile Python. So you might as well first trace to a JIT-compilable sublanguage and do the transformations there, which is precisely what Jax does. In theory tangent is a bit more general, and maybe you could mix it with Numba, but then it's hard to justify. If it's more general, then it's not for the standard ML community, for the same reason as the Julia tools; but then it had better do better than the Julia tools in the specific niche they are targeting. Jax just makes much more sense for the people who were building it; it chose its niche very well.
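To make the "trace to a JIT-compilable sublanguage" point concrete, here is a minimal JAX sketch: both `grad` and `jit` trace the Python function into a jaxpr, JAX's restricted intermediate representation, and apply their transformations there rather than to Python source text.

```python
import jax
import jax.numpy as jnp

def f(x):
    # Plain Python + NumPy-style ops; JAX traces this into a jaxpr.
    return jnp.sum(jnp.sin(x) ** 2)

# Transformations compose on the traced representation,
# not on Python source code.
grad_f = jax.jit(jax.grad(f))

x = jnp.arange(3.0)
print(grad_f(x))             # derivative evaluated under the XLA JIT
print(jax.make_jaxpr(f)(x))  # the traced sublanguage program
```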
dex-lang
- Thinking in an Array Language
A really nice approach to this I've seen recently is Google's research on [Dex](https://github.com/google-research/dex-lang).
- Function Composition in Programming Languages – Conor Hoekstra – CppNorth 2023 [video]
- Dex Lang: Research language for array processing in the Haskell/ML family
- [D] Have there been any attempts to create a programming language specifically for machine learning?
Dex
- [D] PyTorch 2.0 Announcement
Have you tried Dex? https://github.com/google-research/dex-lang It is in a relatively early stage, but it is exploring some interesting parts of the design space.
- Mangle, a programming language for deductive database programming
- Looking for languages that combine algebraic effects with parallel execution
I think [Dex](https://github.com/google-research/dex-lang) might be along the lines of what you're looking for, although its focus is on SIMD GPU-style parallelism rather than thread-level parallelism.
- “Why I still recommend Julia”
Dex proves indexing correctness, including in loops, without a full dependent type system.
See: https://github.com/google-research/dex-lang/pull/969
- Haskell for Artificial Intelligence?
In case you want to see one research direction that's combining practical machine learning and functional programming: one of the authors of JAX (and the main author of its predecessor, Autograd) is writing Dex (https://github.com/google-research/dex-lang), a functional language for array processing. The compiler itself is written in Haskell. JAX is one of the most popular libraries for machine learning these days, along with TensorFlow and PyTorch. You might also want to see the issue in the JAX repo about adding Haskell support, for some context: https://github.com/google/jax/issues/185
What are some alternatives?
autograd - Efficiently computes derivatives of numpy code.
jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
futhark - :boom::computer::boom: A data-parallel functional programming language
julia - The Julia Programming Language
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
hasktorch - Tensors and neural networks in Haskell
CIPs
tutorials - PyTorch tutorials.
Co-dfns - High-performance, Reliable, and Parallel APL
Enzyme.jl - Julia bindings for the Enzyme automatic differentiator
FlexFlow - FlexFlow Serve: Low-Latency, High-Performance LLM Serving
tensor_annotations - Annotating tensor shapes using Python types