| | rosettaboy | jax |
|---|---|---|
| Mentions | 11 | 82 |
| Stars | 465 | 28,004 |
| Growth | - | 1.8% |
| Activity | 8.6 | 10.0 |
| Last Commit | 26 days ago | 3 days ago |
| Language | C++ | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rosettaboy
-
When Zig Outshines Rust – Memory Efficient Enum Arrays
As somebody who has written the same gameboy emulator in C++, Rust, and Zig (as well as C, Go, Nim, PHP, and Python) - I have yet to find a place where language affected emulation correctness.
Gameboy audio is kind of a pain in the ass, at least compared to the CPU (which is fairly easy) and the GPU (which is easy to get "good enough" if you don't care about things like palette colours being swapped mid-scanline). Some languages take more or less code to do the same thing - e.g. languages which allow one block of memory to be interpreted in several different ways concurrently make the "interpret audio RAM as a bunch of registers" code much shorter, with less copying. But in my case at least, every one of my implementations has the same audio distortions, presumably because I'm misreading some part of the hardware spec :P
https://github.com/shish/rosettaboy/
(Also yes, the zig version is currently failing because every time I look at it the build system has had breaking changes...)
-
Ask HN: Why did Nim not catch-on like wild fire as Rust did?
Niceness is subjective, but Nim is just as valid an addition to that group. Nim compiles to C and has had an --os=standalone mode for like 10 years from its git history, and as mentioned else-thread (https://news.ycombinator.com/item?id=36506087) can be used for Linux kernel modules. Multiple people have written "stub OSes" in it (https://github.com/dom96/nimkernel & further along https://github.com/khaledh/axiom).
While it can use clang as a backend, Nim does not rely upon LLVM support like Zig or Rust (pre-gcc-rust working). Use on embedded devices is fairly popular: https://forum.nim-lang.org/search?q=embedded (or web search).
Latency-wise, for a time, video game programming was a perceived "adoption niche" or maybe "hook" for Nim and games often have stringent frame rendering deadlines. If you are interested in video games, you might appreciate https://github.com/shish/rosettaboy which covers all but Ada in your list with Nim being fastest (on one CPU/version/compiler/etc). Note, however, that cross-PL comparisons are often done by those with much "porting energy" but limited familiarity with any but a few of the PLs. A better way to view it is that "Nim responds well to optimization effort" (like C/Ada/C++/Rust/Zig).
- Finished building a working Game Boy Color emulator using React and WebAssembly 🎮🕹️
-
Ask HN: What have you created that deserves a second chance on HN?
https://github.com/shish/rosettaboy
The same gameboy emulator rewritten in C++, Go, Nim, PHP, Cython, Python, Rust, and Zig (and WIP typescript); mostly to teach myself the languages and to compare and contrast their idioms.
Also, when taken with a very large grain of salt, usable as a language benchmark (As with all benchmarks, there are lots of caveats - but as far as I’m aware this is unique in being “the same code in multiple languages” and “several thousand lines of code”):
$ ./utils/bench.py
- Zig 0.10.0 Release Notes
- Python 3.11 is much faster than 3.8
-
Writing a Game Boy Emulator in OCaml
Looks very polished, but major disappointment that it's not showcasing OCaml as part of RosettaBoy (https://github.com/shish/rosettaboy)
-
Which programming language or compiler is faster
I’m working on it :) https://github.com/shish/rosettaboy
(Ok it's 5-10k lines rather than a million, but it's non-trivial enough that the differences between languages are noticeable)
- RosettaBoy – the same Gameboy emulator in Rust, Python, and C++
jax
-
The Elements of Differentiable Programming
The dual numbers exist just as surely as the real numbers and have been in use for well over 100 years.
https://en.m.wikipedia.org/wiki/Dual_number
PyTorch has had them for many years.
https://pytorch.org/docs/stable/generated/torch.autograd.for...
JAX implements them and uses them exactly as stated in this thread.
https://github.com/google/jax/discussions/10157#discussionco...
As you so eloquently stated, "you shouldn't be proclaiming things you don't actually know on a public forum," and doubly so when your claimed "corrections" are so demonstrably and totally incorrect.
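The forward-mode trick the thread is describing can be sketched in plain Python (a toy illustration of the idea, not how JAX actually implements it): overload arithmetic on pairs of (value, derivative), and the "dual part" carries the exact derivative along with the computation.

```python
import math

class Dual:
    """Minimal dual number a + b*eps, where eps**2 == 0."""
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps; the eps**2 term vanishes
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

def dual_sin(x):
    # sin(a + b*eps) = sin(a) + cos(a)*b*eps, by Taylor expansion with eps**2 == 0
    return Dual(math.sin(x.real), math.cos(x.real) * x.eps)

# Differentiate f(x) = x**2 * sin(x) at x = 2 by seeding the dual part with 1
x = Dual(2.0, 1.0)
y = x * x * dual_sin(x)
# y.real is f(2); y.eps is exactly f'(2) = 2*2*sin(2) + 2**2*cos(2)
```

No limits, no finite-difference error: the derivative falls out of the algebra, which is why forward-mode systems like `jax.jvp` are built on exactly this structure.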
-
Julia GPU-based ODE solver 20x-100x faster than those in Jax and PyTorch
On your last point, as long as you jit the topmost level, it doesn't matter whether or not you have inner jitted functions. The end result should be the same.
Source: https://github.com/google/jax/discussions/5199#discussioncom...
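A minimal sketch of that point (function names are hypothetical): a jitted function called inside another jitted function is traced into the same compiled computation, so the values come out the same as the un-nested version.

```python
import jax
import jax.numpy as jnp

@jax.jit
def inner(x):
    return x * 2.0

@jax.jit
def outer(x):
    # While `outer` is being traced, the call to the already-jitted
    # `inner` is folded into the same XLA computation, so nesting jit
    # does not change the numerical result.
    return inner(x) + 1.0

x = jnp.arange(4.0)
result = outer(x)  # same values as computing x * 2.0 + 1.0 directly
```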
-
Apple releases MLX for Apple Silicon
The design of MLX is inspired by frameworks like NumPy, PyTorch, Jax, and ArrayFire.
-
MLPerf training tests put Nvidia ahead, Intel close, and Google well behind
I'm still not totally sure what the issue is. Jax uses program transformations to compile programs to run on a variety of hardware - for example, using XLA for TPUs. It can also run CUDA ops on Nvidia GPUs without issue: https://jax.readthedocs.io/en/latest/installation.html
There is also support for custom cpp and cuda ops if that's what is needed: https://jax.readthedocs.io/en/latest/Custom_Operation_for_GP...
I haven't worked with float4, but can imagine that new numerical types would require some special handling. But I assume that's the case for any ml environment.
But really you probably mean fixed-point 4-bit integer types? Looks like at least some work has been done on that in Jax: https://github.com/google/jax/issues/8566
-
MatX: Efficient C++17 GPU numerical computing library with Python-like syntax
> Are they even comparing apples to apples to claim that they see these improvements over NumPy?

> While the code complexity and length are roughly the same, the MatX version shows a 2100x over the Numpy version, and over 4x faster than the CuPy version on the same GPU.

NumPy doesn't use the GPU by default unless you use something like Jax [1] to compile NumPy code to run on GPUs. I think a more honest comparison would mainly compare MatX against NumPy running on the same CPU, and focus the GPU comparison against CuPy.
[1] https://github.com/google/jax
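To illustrate the point about baselines: jax.numpy is a near drop-in for NumPy, so the same code can serve as both the CPU reference and the compiled GPU/TPU candidate (a hypothetical softmax used purely as an example).

```python
import jax
import jax.numpy as jnp
import numpy as np

def softmax_np(x):
    # plain NumPy: always runs on the CPU
    e = np.exp(x - x.max())
    return e / e.sum()

@jax.jit
def softmax_jax(x):
    # identical NumPy-style code, but XLA-compiled; runs on
    # GPU/TPU when one is available, CPU otherwise
    e = jnp.exp(x - x.max())
    return e / e.sum()

x = np.linspace(-1.0, 1.0, 5)
# both versions produce the same values, making the comparison apples to apples
```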
-
JAX – NumPy on the CPU, GPU, and TPU, with great automatic differentiation
Actually that never changed. The README has always had an example of differentiating through native Python control flow:
https://github.com/google/jax/commit/948a8db0adf233f333f3e5f...
The constraints on control flow expressions come from jax.jit (because Python control flow can't be staged out) and jax.vmap (because we can't take multiple branches of Python control flow, which we might need to do for different batch elements). But autodiff of Python-native control flow works fine!
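A small sketch of that distinction (the function is hypothetical, in the style of the README example):

```python
import jax

def f(x):
    # Native Python control flow is fine under jax.grad, because grad
    # traces with a concrete value, so `x > 0` evaluates normally.
    if x > 0:
        return 3.0 * x ** 2
    else:
        return -x

g = jax.grad(f)(2.0)  # derivative of 3*x**2 at x = 2, i.e. 12.0
# By contrast, jax.jit(f)(2.0) would fail at `if x > 0`, because jit
# traces with an abstract value that can't be converted to a Python bool.
```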
-
Julia and Mojo (Modular) Mandelbrot Benchmark
For a similar "benchmark" (also Mandelbrot) but took place in Jax repo discussion: https://github.com/google/jax/discussions/11078#discussionco...
-
Functional Programming 1
2. https://github.com/fantasyland/fantasy-land (A bit heavy on jargon)
Note there is a python version of Ramda available on pypi and there’s a lot of FP tidbits inside JAX:
3. https://pypi.org/project/ramda/ (Worth making your own version if you want to learn, though)
4. For nested data, JAX tree_util is epic: https://jax.readthedocs.io/en/latest/jax.tree_util.html and also their curry implementation is funny: https://github.com/google/jax/blob/4ac2bdc2b1d71ec0010412a32...
Anyway, don't put FP on a pedestal; the main thing is to focus on the core principles of avoiding external mutation and making helper functions. It doesn't always work, because some languages like Rust don't have legit support for currying (afaik, as of August 2023), but in those cases you can hack it with builder methods to an extent.
Finally, if you want to understand the middle of the midwit meme, check out this wiki article and connect the free monoid to the Kleene star (0 or more copies of your pattern) and Kleene plus (1 or more copies of your pattern). Those are also in regex so it can help you remember the regex symbols. https://en.wikipedia.org/wiki/Free_monoid?wprov=sfti1
The simplest example might be {0}^*, the free monoid on a single symbol: it contains "" (because * allows zero copies), then "0", "00", "000", and so on.
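The tree_util module mentioned above is worth trying hands-on; `tree_map` applies a function to every leaf of an arbitrarily nested structure and rebuilds the same shape around the results:

```python
import jax

nested = {"a": [1, 2], "b": {"c": 3}}
# tree_map treats dicts/lists/tuples as "pytrees": it maps over the
# leaves (1, 2, 3) and reconstructs the identical nested structure
doubled = jax.tree_util.tree_map(lambda v: v * 2, nested)
# doubled == {"a": [2, 4], "b": {"c": 6}}
```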
-
Best Way to Learn JAX
Hello! I'm trying to learn JAX over the next couple of weeks. Ideally, I want to be comfortable with using it for projects after about 3 weeks to a month, although I understand that may not be realistic. I currently have experience with PyTorch and TensorFlow. How should I go about learning JAX? Is there a specific YouTube tutorial or online course I should use, or should I just use the tutorial on https://jax.readthedocs.io/? Any information, advice, or experience you can share would be much appreciated!
- Codon: Python Compiler
What are some alternatives?
procs - Unix process&system query&format lib&multi-command CLI in Nim
Numba - NumPy aware dynamic Python compiler using LLVM
shumai - Fast Differentiable Tensor Library in JavaScript and TypeScript with Bun + Flashlight
functorch - functorch is JAX-like composable function transforms for PyTorch.
Programming-Language-Benchmark
julia - The Julia Programming Language
axiom - A 64-bit kernel implemented in Nim
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
awesome-python-typing - Collection of awesome Python types, stubs, plugins, and tools to work with them.
Cython - The most widely used Python to C compiler
KaithemAutomation - Pure Python, GUI-focused home automation/consumer grade SCADA
jax-windows-builder - A community supported Windows build for jax.