| | julia | jax |
|---|---|---|
| Mentions | 351 | 84 |
| Stars | 44,780 | 28,764 |
| Growth | 0.6% | 2.9% |
| Activity | 10.0 | 10.0 |
| Latest commit | 2 days ago | 2 days ago |
| Language | Julia | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
julia
-
Modern Python REPL in Emacs using VTerm
From my jolly Julia days I’m used to julia-vterm. This Emacs package runs a Julia REPL using a full terminal emulator (emacs-libvterm). So in the pursuit of a nice hack, I M-x replace-string’d the word julia with python and gave it a shot. Remarkably, the whole thing just worked without much tweaking, and you can enjoy the result by checking out the GitHub repo.
-
Top Paying Programming Technologies 2024
34. Julia - $74,963
-
Optimize sgemm on RISC-V platform
I don't believe there is any official documentation on this, but https://github.com/JuliaLang/julia/pull/49430 for example added prefetching to the marking phase of a GC which saw speedups on x86, but not on M1.
-
Dart 3.3
3. dispatch on all the arguments
the first solution is clean, but people really like dispatch.
the second makes calling functions in the function call syntax weird, because the first argument is privileged semantically but not syntactically.
the third makes calling functions in the method call syntax weird because the first argument is privileged syntactically but not semantically.
the closest things to this i can think of off the top of my head in remotely popular programming languages are: nim, lisp dialects, and julia.
nim navigates the dispatch conundrum by providing different ways to define free functions for different dispatch-ness. the tutorial gives a good overview: https://nim-lang.org/docs/tut2.html
lisps of course lack UFCS.
see here for a discussion on the lack of UFCS in julia: https://github.com/JuliaLang/julia/issues/31779
so to sum up the answer to the original question: because it's only obvious how to make it nice and tidy like you're wanting if you sacrifice function dispatch, which is ubiquitous for good reason!
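Option 3, dispatch on all the arguments, can be sketched in plain Python with a registry keyed on argument-type tuples, so the call site privileges no single argument. The `multimethod` decorator below is a hypothetical toy helper, not from any library, and Julia's built-in multiple dispatch is far more sophisticated (it resolves on runtime types with subtyping, not exact-type lookup):

```python
# Minimal multiple dispatch: a registry maps (name, argument-type tuple)
# pairs to implementations; no argument is semantically privileged.
_registry = {}

def multimethod(*types):
    """Register an implementation for an exact tuple of argument types."""
    def register(fn):
        _registry[(fn.__name__, types)] = fn
        def dispatch(*args):
            impl = _registry[(fn.__name__, tuple(type(a) for a in args))]
            return impl(*args)
        return dispatch
    return register

@multimethod(int, int)
def combine(a, b):
    return a + b

@multimethod(str, str)
def combine(a, b):
    return a + " " + b
```

Here `combine(1, 2)` and `combine("multiple", "dispatch")` pick different implementations from the same call syntax; the sketch does a dict lookup at call time, whereas Julia specializes at compile time.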
-
Julia 1.10 Highlights
https://github.com/JuliaLang/julia/blob/release-1.10/NEWS.md
-
Best Programming languages for Data Analysis📊
Visit official site: https://julialang.org/
-
Potential of the Julia programming language for high energy physics computing
No. It runs natively on ARM.
julia> versioninfo()
Julia Version 1.9.3
Commit bed2cd540a1 (2023-08-24 14:43 UTC)
Build Info:
  Official https://julialang.org/ release
-
Rust std:fs slower than Python
https://github.com/JuliaLang/julia/issues/51086#issuecomment...
So while this "fixes" the issue, it'll introduce a confusing time delay between you freeing the memory and you observing that in `htop`.
But according to https://jemalloc.net/jemalloc.3.html you can set `opt.muzzy_decay_ms = 0` to remove the delay.
Still, the musl author has some reservations against making `jemalloc` the default:
https://www.openwall.com/lists/musl/2018/04/23/2
> It's got serious bloat problems, problems with undermining ASLR, and is optimized pretty much only for being as fast as possible without caring how much memory you use.
With the above-mentioned tunables, this should be mitigated to some extent, but the general "theme" (focusing on e.g. performance vs memory usage) will likely still mean "it's a tradeoff" or "it's no tradeoff, but only if you set tunables to what you need".
-
Eleven strategies for making reproducible research the norm
I have asked about Julia's reproducibility story on the Guix mailing list in the past, and at the time Simon Tournier didn't think it was promising. I seem to recall Julia itself didn't have a reproducible build. All I know now is that the GitHub issue is still not closed.
https://github.com/JuliaLang/julia/issues/34753
-
Julia as a unifying end-to-end workflow language on the Frontier exascale system
I don't really know what kind of rebuttal you're looking for, but I will link my HN comments from when this was first posted for some thoughts: https://news.ycombinator.com/item?id=31396861#31398796. As I said in the linked post, I'm quite skeptical of the business of trying to assess the relative bugginess of programming in different systems, because that has strong dependencies on what you consider core vs packages and what exactly you're trying to do.
However, bugs in general suck and we've been thinking a fair bit about what additional tooling the language could provide to help people avoid the classes of bugs that Yuri encountered in the post.
The biggest class of problems in the blog post is that it's pretty clear that `@inbounds` (and I will extend this to `@assume_effects`, even though that wasn't around when Yuri wrote his post) is problematic, because it's too hard to write. My proposal for what to do instead is at https://github.com/JuliaLang/julia/pull/50641.
Another common theme is that while Julia is great at composition, it's not clear what's expected to work and what isn't, because the interfaces are informal and not checked. This is a hard design problem, because it's quite close to the reasons why Julia works well. My current thoughts on that are here: https://github.com/Keno/InterfaceSpecs.jl but there's other proposals also.
jax
-
cuDF – GPU DataFrame Library
-
Rebuilding TensorFlow 2.8.4 on Ubuntu 22.04 to patch vulnerabilities
I found a GitHub issue that seemed similar (missing ptxas) and saw a suggestion to install nvidia-cuda-toolkit. Alright: but that exploded the container size from 6.5 GB to 12.13 GB … unacceptable 😤 (Incidentally, this is too large for Cloud Shell to build on its limited persistent disk.)
-
The Elements of Differentiable Programming
The dual numbers exist just as surely as the real numbers and have been used for well over 100 years.
https://en.m.wikipedia.org/wiki/Dual_number
Pytorch has had them for many years.
https://pytorch.org/docs/stable/generated/torch.autograd.for...
JAX implements them and uses them exactly as stated in this thread.
https://github.com/google/jax/discussions/10157#discussionco...
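For the curious, forward-mode differentiation with dual numbers fits in a few lines of plain Python. This is a toy sketch of the mechanism, not JAX's actual implementation (which works on traced arrays and covers the full numpy API):

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0; eps carries the derivative."""
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 == 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x   # f'(x) = 6x + 2

out = f(Dual(4.0, 1.0))        # seed the tangent component with 1.0
# out.real == f(4) == 56.0, out.eps == f'(4) == 26.0
```

One pass through the unmodified function yields both the value and the exact derivative, which is what makes forward mode via dual numbers so appealing.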
As you so eloquently stated, "you shouldn't be proclaiming things you don't actually know on a public forum," and doubly so when your claimed "corrections" are so demonstrably and totally incorrect.
-
Julia GPU-based ODE solver 20x-100x faster than those in Jax and PyTorch
On your last point, as long as you jit the topmost level, it doesn't matter whether or not you have inner jitted functions. The end result should be the same.
Source: https://github.com/google/jax/discussions/5199#discussioncom...
-
Apple releases MLX for Apple Silicon
The design of MLX is inspired by frameworks like NumPy, PyTorch, Jax, and ArrayFire.
-
MLPerf training tests put Nvidia ahead, Intel close, and Google well behind
I'm still not totally sure what the issue is. Jax uses program transformations to compile programs to run on a variety of hardware, for example, using XLA for TPUs. It can also run cuda ops for Nvidia gpus without issue: https://jax.readthedocs.io/en/latest/installation.html
There is also support for custom cpp and cuda ops if that's what is needed: https://jax.readthedocs.io/en/latest/Custom_Operation_for_GP...
I haven't worked with float4, but can imagine that new numerical types would require some special handling. But I assume that's the case for any ml environment.
But really you probably mean fixed point 4bit integer types? Looks like that has had at least some work done in Jax: https://github.com/google/jax/issues/8566
-
MatX: Efficient C++17 GPU numerical computing library with Python-like syntax
> Are they even comparing apples to apples to claim that they see these improvements over NumPy?
> While the code complexity and length are roughly the same, the MatX version shows a 2100x [speedup] over the Numpy version, and over 4x faster than the CuPy version on the same GPU.
NumPy doesn't use the GPU by default unless you use something like Jax [1] to compile NumPy code to run on GPUs. I think a more honest comparison would run MatX on the same CPU as NumPy, and focus the GPU comparison against CuPy.
[1] https://github.com/google/jax
-
JAX – NumPy on the CPU, GPU, and TPU, with great automatic differentiation
Actually that never changed. The README has always had an example of differentiating through native Python control flow:
https://github.com/google/jax/commit/948a8db0adf233f333f3e5f...
The constraints on control flow expressions come from jax.jit (because Python control flow can't be staged out) and jax.vmap (because we can't take multiple branches of Python control flow, which we might need to do for different batch elements). But autodiff of Python-native control flow works fine!
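Why Python control flow can't be staged out is easy to see with a toy tracer. This is a sketch of the tracing idea, not JAX's actual machinery: the tracer stands in for a value it doesn't have yet, so arithmetic can be recorded into a program, but an `if` demands a concrete answer right now:

```python
class Tracer:
    """Abstract stand-in for a value: records ops instead of computing."""
    def __init__(self, expr):
        self.expr = expr

    def __mul__(self, other):
        return Tracer(f"mul({self.expr}, {other})")

    def __add__(self, other):
        return Tracer(f"add({self.expr}, {other})")

    def __gt__(self, other):
        # A comparison must produce a concrete boolean for Python's `if`,
        # but a traced value has none.
        raise TypeError("traced value has no concrete boolean")

def straight_line(x):
    return x * 2 + 1          # pure data flow: traces fine

def branchy(x):
    if x > 0:                 # needs a concrete value: fails under tracing
        return x
    return x * -1

trace = straight_line(Tracer("x"))   # trace.expr records the whole program

staging_error = None
try:
    branchy(Tracer("x"))
except TypeError as e:
    staging_error = e
```

Autodiff, by contrast, runs the function on concrete values, which is why grad of Python-native control flow works fine, as the comment above says.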
-
Julia and Mojo (Modular) Mandelbrot Benchmark
For a similar "benchmark" (also Mandelbrot) but took place in Jax repo discussion: https://github.com/google/jax/discussions/11078#discussionco...
-
Functional Programming 1
2. https://github.com/fantasyland/fantasy-land (A bit heavy on jargon)
Note there is a python version of Ramda available on pypi and there’s a lot of FP tidbits inside JAX:
3. https://pypi.org/project/ramda/ (Worth making your own version if you want to learn, though)
4. For nested data, JAX tree_util is epic: https://jax.readthedocs.io/en/latest/jax.tree_util.html and also their curry implementation is funny: https://github.com/google/jax/blob/4ac2bdc2b1d71ec0010412a32...
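The core of tree_util is a recursive map over nested containers. A minimal plain-Python version of the tree_map idea (ignoring JAX's registration of custom node types and its flatten/unflatten machinery) looks like:

```python
def tree_map(fn, tree):
    """Apply fn to every leaf of a nested dict/list/tuple structure."""
    if isinstance(tree, dict):
        return {k: tree_map(fn, v) for k, v in tree.items()}
    if isinstance(tree, (list, tuple)):
        return type(tree)(tree_map(fn, v) for v in tree)
    return fn(tree)  # anything else is a leaf

params = {"w": [1.0, 2.0], "b": (3.0,)}
doubled = tree_map(lambda x: x * 2, params)
# doubled == {"w": [2.0, 4.0], "b": (6.0,)}
```

The structure of the output mirrors the input exactly, which is what makes this pattern so handy for nested model parameters.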
Anyway don’t put FP on a pedestal; the main thing is to focus on the core principles of avoiding external mutation and making helper functions. It doesn’t always work, because some languages like Rust don’t have legit support for currying (afaik as of August 2023), but in those cases you can hack it with builder methods to an extent.
Finally, if you want to understand the middle of the midwit meme, check out this wiki article and connect the free monoid to the Kleene star (0 or more copies of your pattern) and Kleene plus (1 or more copies of your pattern). Those are also in regex so it can help you remember the regex symbols. https://en.wikipedia.org/wiki/Free_monoid?wprov=sfti1
The simplest example might be {0}^*: with zero copies you get “” (allowed because we use * rather than +), then “0”, “00”, and so on.
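The star-vs-plus distinction shows up directly in Python's re module: 0* matches the empty string (zero copies allowed), while 0+ does not:

```python
import re

# Kleene star: zero or more copies, so "" is in the language of 0*
assert re.fullmatch(r"0*", "") is not None
assert re.fullmatch(r"0*", "000") is not None

# Kleene plus: at least one copy, so "" is rejected
assert re.fullmatch(r"0+", "") is None
assert re.fullmatch(r"0+", "0") is not None
```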
What are some alternatives?
NetworkX - Network Analysis in Python
Numba - NumPy aware dynamic Python compiler using LLVM
Lua - Lua is a powerful, efficient, lightweight, embeddable scripting language. It supports procedural programming, object-oriented programming, functional programming, data-driven programming, and data description.
functorch - functorch is JAX-like composable function transforms for PyTorch.
rust-numpy - PyO3-based Rust bindings of the NumPy C-API
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
Cython - The most widely used Python to C compiler
F# - Please file issues or pull requests here: https://github.com/dotnet/fsharp
jax-windows-builder - A community supported Windows build for jax.
StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)
mesh-transformer-jax - Model parallel transformers in JAX and Haiku