jsource vs tinygrad
| | jsource | tinygrad |
|---|---|---|
| Mentions | 18 | 58 |
| Stars | 640 | 17,800 |
| Growth | 3.4% | - |
| Activity | 9.7 | 9.7 |
| Latest commit | 5 days ago | 10 months ago |
| Language | C | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
jsource
- Crafting Self-Evident Code with D
The one other example I know of that morphs the language to that extent, to the detriment of readability for C programmers, is the J interpreter [1][2]. But, once again, nobody (that I've read) claims it's good or clear C. (Good C for those who speak J, maybe; I wouldn't know.)
For a way to morph C syntax that does make things better, see libmill[3].
[1] https://code.jsoftware.com/wiki/Essays/Incunabulum
[2] https://github.com/jsoftware/jsource/tree/master/jsrc
[3] https://250bpm.com/blog:56/
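For readers who haven't seen the style being described, here is a minimal sketch in the spirit of the Incunabulum fragment [1]: one-letter typedefs and macros compress C until it reads more like APL. The macro names echo the published fragment; the sum function is an invented demo, not code from jsource.

```c
/* A sketch in the spirit of Whitney-style C; P, R, and DO mirror the
   published Incunabulum fragment, sum() is our own illustration. */
#include <stdio.h>
typedef long I;                                     /* the one integer type */
#define P printf
#define R return
#define DO(n,x) {I i=0,_n=(n);for(;i<_n;++i){x;}}   /* loop i from 0 to n-1 */

I sum(I*v,I n){I s=0;DO(n,s+=v[i])R s;}             /* fold + over v */

int main(void){I v[]={1,2,3,4};P("%ld\n",sum(v,4));R 0;}
```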
- Show HN: Gemini client in 100 lines of C
-
Can anyone identify what this code does?
Oh damn, Whitney C representation.
-
C is the most dysfunctional non-esolang on the planet, precisely because everyone insisted on it being "just simple pointers".
I develop J btw
- Want cleaner code? Use the rule of six
No, it was rhetorical, because it's obviously (to an APL-family programmer) not bad!
Your cultural prejudice is showing. There are good reasons APL is written the way it is, and this example is simply bringing those benefits to C by writing it in the dense APL style. There are other APL derivatives, like J[1], that are written the same way. These projects are well-maintained. They aren't collapsing under a load of technical debt. The style works. To them, it's clean code.
[1]: https://github.com/jsoftware/jsource
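As a purely illustrative contrast (neither snippet is from a real project), here is the same maximum scan written densely in the APL-ish idiom and again in conventional C; which one counts as "clean" depends on which tradition you read fluently.

```c
/* Illustrative only: one scan, two cultures. */
#include <stdio.h>
typedef long I;

I mx(I*v,I n){I m=*v;while(n--)m=*v>m?*v:m,++v;return m;}   /* dense */

/* the same scan, written the way most C style guides would ask for */
I max_conventional(const I *values, I count) {
    I max = values[0];
    for (I i = 1; i < count; ++i)
        if (values[i] > max) max = values[i];
    return max;
}

int main(void){I v[]={3,1,4,1,5,9,2,6};printf("%ld %ld\n",mx(v,8),max_conventional(v,8));return 0;}
```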
- Ask HN: Is this how anyone programs?
Recently, I wanted to write a simple piece of code in J, but immediately found a bug. I went ahead and fetched the source to see if I could fix it. But, hell no. I couldn't believe my eyes. Is this how someone programs, really? I just can't believe it didn't go through some kind of obfuscator.
Here are some samples, but almost anything in the repository is beyond me:
https://github.com/jsoftware/jsource/blob/master/jsrc/xo.c
- Jd
You can view the code, but it is not open source: https://github.com/jsoftware/jsource/blob/master/license.txt
-
Someone earlier linked to Arthur Whitney's style of coding in the comments. Can we discuss this further? I am disturbed by what I saw.
This is the same dense style used in J.
- Why does old C code often declare functions or global variables in the scope where they're used, rather than at the top of a source file or a header file?
All in all, this example doesn't seem too bad. It's clear what happens and it's easy to follow. If you want to see something remarkably terrible, check out Whitney style. It's used in APL/J/K-family interpreters. Keep in mind, financial institutions run that code.
- Ask HN: Examples of Unusual Code Formatting Styles?
tinygrad
- tinygrad: extreme simplicity, easiest framework to add new accelerators to
- GGML – AI at the Edge
Might be a silly question, but is GGML a similar/competing library to George Hotz's tinygrad [0]?
[0] https://github.com/geohot/tinygrad
- Render neural network into CUDA/HIP code
At first glance I thought it might be like tinygrad, but it looks like it has more ops than tinygrad, though most map to underlying hardware-provided ops?
I wonder how well tinygrad's approach will work out. Op fusion sounds easy: just walk a graph, pattern-match it, and lower to hardware-provided ops?
Anyway, if anyone wants to understand the philosophy behind tinygrad, this file is a great start: https://github.com/geohot/tinygrad/blob/master/docs/abstract...
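To make the graph-walking idea above concrete, here is a toy fusion pass in C; this is not tinygrad's actual code, and the op names and the linear chain are invented for illustration.

```c
/* Toy sketch of op fusion: walk a chain of ops, pattern-match MUL
   followed by ADD, and rewrite the pair as one fused MULADD op. */
#include <stdio.h>

typedef enum { MUL, ADD, MULADD, RELU } OpKind;

typedef struct Op {
    OpKind kind;
    struct Op *next;   /* a linear chain stands in for the compute graph */
} Op;

/* One rewrite pass: a MUL immediately followed by an ADD becomes a
   single MULADD node, i.e. the pair is lowered to one fused op. */
void fuse(Op *head) {
    for (Op *op = head; op && op->next; op = op->next) {
        if (op->kind == MUL && op->next->kind == ADD) {
            op->kind = MULADD;          /* rewrite in place        */
            op->next = op->next->next;  /* splice out the ADD node */
        }
    }
}

int main(void) {
    Op c = { RELU, NULL }, b = { ADD, &c }, a = { MUL, &b };
    fuse(&a);                           /* MUL,ADD,RELU -> MULADD,RELU */
    for (Op *op = &a; op; op = op->next)
        printf("%s\n", op->kind == MULADD ? "MULADD"
                     : op->kind == RELU   ? "RELU" : "?");
    return 0;
}
```

A real fuser works over a DAG rather than a chain, so the match would also have to check that the MUL's output has no other consumers before splicing it away.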
- llama.cpp now officially supports GPU acceleration.
There are currently at least 3 ways to run llama on M1 with GPU acceleration:
- mlc-llm (pre-built, only 1 model has been ported)
- tinygrad (very memory efficient, not that easy to integrate into other projects)
- llama-mps (original llama codebase + llama adapter support)
- George Hotz building an AMD competitor to Nvidia.
- George Hotz ROCm adventures
Hopefully we will now see full support for AMD hardware on https://github.com/geohot/tinygrad. You can read more about it at https://tinygrad.org/
- The Coming of Local LLMs
tinygrad
https://github.com/geohot/tinygrad/tree/master/accel/ane
But I have not tested it on Linux since Asahi has not yet added support.
llama.cpp runs at 18ms per token (7B) and 200ms per token (65B) without quantization.
- Everything we know about Apple's Neural Engine
- Everything we know about the Apple Neural Engine (ANE)
- How 'Open' Is OpenAI, Really?
What are some alternatives?
b-decoded - Arthur Whitney's B interpreter translated into a more traditional flavor of C
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
ancient-c-compilers - Very old C compilers
llama.cpp - LLM inference in C/C++
ZLib - A massively spiffy yet delicately unobtrusive compression library.
openpilot - openpilot is an open source driver assistance system. openpilot performs the functions of Automated Lane Centering and Adaptive Cruise Control for 250+ supported car makes and models.
kdb - Companion files to kdb+ and q
llama - Inference code for Llama models
boot - Build tooling for Clojure.
tensorflow_macos - TensorFlow for macOS 11.0+ accelerated using Apple's ML Compute framework.
data_jd - Jd
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ