mu
librope
| | mu | librope |
|---|---|---|
| Mentions | 29 | 4 |
| Stars | 1,342 | 265 |
| Growth | - | - |
| Activity | 4.3 | 0.0 |
| Last commit | 4 months ago | over 2 years ago |
| Language | Assembly | C |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mu
-
Damn Small Linux 2024
Depending on how minimal a distribution you want: a few years ago I had a way to take a single ELF binary created by my computing stack built up from machine code (https://github.com/akkartik/mu) and package it up with just a Linux kernel and syslinux (whatever _that_ is) to create a bootable disk image. I could then ship that image to a cloud server (https://akkartik.name/post/iso-on-linode, though I don't use Linode anymore) and run it on a VPS as a truly minimal webserver. If this seems at all relevant I'd be happy to answer questions or help out.
- Ask HN: Good Books on Philosophy of Engineering
-
x86-64 Assembly Language Programming with Ubuntu by Ed Jorgensen
This was the thinking behind my https://github.com/akkartik/mu
- Show HN: FocusedEdit, a classic Macintosh to web browser shared text editor
-
Plain Text. With Lines
Yes thank you, I was indeed alluding to https://github.com/akkartik/mu. Perhaps a more precise term would be "software stack".
-
Inferno: A small operating system for building cross-platform distributed systems
I built a computer with its own languages, and I consider it to be _less_ cognitive load when everything is in 1/2/3 languages. I don't have to worry that the next program whose sources I want to read will require "Go, Rust, C++, JS/TS, Python, Java, etc."
There are other metrics to consider besides your notions of cognitive load and productivity. Inferno predates most of the languages on your list. My computer (https://github.com/akkartik/mu) uses custom languages because I was able to design them to minimize total LoC, and to ensure the dependency graph has no cycles (unlike all of the conventional software stack, at least until https://www.gnu.org/software/mes connects up all the dots).
- Llisp: Lisp in Lisp
-
10 Years Against Division of Labor in Software
"Separation of concerns is a hard-won insight."
Absolutely. I'm arguing for separating concerns alone, without entangling them with considerations of people.
It's certainly reasonable to consider my projects toy. I consider them research:
* https://github.com/akkartik/mu
* https://github.com/akkartik/teliva
"The idea that projects should take source copies instead of library dependencies is just kind of nuts..."
The idea that projects should take copies seems about symmetric to me with taking pointers. Call by value vs call by reference. We just haven't had 50 years of tooling to support copies. Where would we be by now if we had devoted equal resources to both branches?
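The value-vs-reference analogy can be made concrete with a tiny C sketch (function names are mine, purely illustrative): call by value hands the callee a private copy, like vendoring a source copy of a dependency, while call by reference shares one object that can change under you, like depending on an upstream library.

```c
#include <assert.h>

/* Illustrative only: by-value mutation stays local to the copy,
 * by-reference mutation is visible to the caller. */
static int bump_copy(int x) {   /* call by value: mutates only its own copy */
    x += 1;
    return x;
}

static void bump_shared(int *x) { /* call by reference: mutates the caller's value */
    *x += 1;
}
```

A caller that passes `a` by value sees `a` unchanged afterward; a caller that passes `&a` sees it modified, which is the essence of the symmetry claimed above.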
"...at least for large libraries."
How are these large libraries going for ya? Log4j wasn't exactly a shining example of the human race at its best. We're trying to run before we can walk.
-
My self-hosting infrastructure, fully automated
I still believe :) I'm looking not for an economic argument but for a strategic one. I think[1] a self-hosted setup with minimal dependencies can be more resilient than a conventional one, whether with a vendor or self-hosted.
https://sandstorm.io got a lot right. I wish they'd paid more attention to upgrade burdens.
-
My 486 Server
I'm very interested in the network stack, having explored it for a while for https://github.com/akkartik/mu before giving up. What sort of network card do you support?
librope
- Show HN
-
The case against an alternative to C
Yep. A few years ago I implemented a skip-list-based rope library in C[1], and after learning Rust I eventually ported it over[2].
The Rust implementation was much less code than the C version. It generated bigger assembly, but ran 20% faster or so. (I don't know why it ran faster than the C version - this was before noalias analysis was turned on in the compiler.)
It's now about 3x faster than C, thanks to some clever layered data structures. I could implement those optimizations in C, but I find Rust easier to work with.
C has advantages, but performance is a bad reason to choose C over Rust. In my experience, the runtime bounds checks Rust adds are remarkably cheap, and they're more than offset by the extra optimizations the Rust compiler can do thanks to the extra knowledge it has about your program. If my experience is anything to go by, naively porting C programs to Rust would result in faster code a lot of the time.
And I find it easier to optimize Rust code than C code, thanks to generics and the (excellent) crates ecosystem. If I were optimizing for runtime speed, I'd pick Rust over C every time.
-
Why Is C Faster Than Java (2009)
> it's not clear if this will be a positive for native dev advocacy
I've rewritten a few things in Rust. Seems pretty positive to me, because you can mix some of the best optimizations and data structures you'd write in C with much better developer ergonomics.
A few years ago I wrote a rope library in C - a library for making very fast, arbitrary insert & delete operations in a large string. My C code was about as fast as I could make it at the time. But recently I took a stab at porting it to Rust to see if I could improve things. Long story short, the Rust version is another ~3x faster than the C version.
https://crates.io/crates/jumprope
(Vs in C: https://github.com/josephg/librope )
The competition absolutely isn't fair. In Rust, I managed to add an optimization that doesn't exist in the C code. I could add it in C, but it would have been really awkward to weave into an already very complex bit of C. In C I'm doing lots of complex memory management by hand, and I didn't want to add complexity for fear of introducing memory-corruption bugs. In Rust, the optimization was entirely safe code.
And as for other languages - I challenge anyone to even approach this level of performance in a non-native language. I'm processing ~30M edit operations per second.
But these sorts of performance results probably won't scale to a broader group of programmers. I've seen Rust code run slower than equivalent JavaScript code because the programmers, used to having a GC, just Box<>'ed everything - and all the heap allocations killed performance. If you naively port Python line-by-line to Rust, you can't expect to magically get 100x the performance.
It's like giving a top-of-the-line Porsche to an expert driver: they can absolutely drive faster. But I'm not an expert driver, so I'd probably crash the darn thing. I'd take a simple Toyota any day. I feel like Rust is the Porsche, and Python is the Toyota.
-
Rust is now overall faster than C in benchmarks
> I have no idea whether that matters or even easy to measure...
It is reasonably easy to measure, and the GP is about right. I've measured a crossover point of around a few hundred items too. (Though I'm sure it'll vary depending on use case and whatnot.)
I made a rope data structure a few years ago in C. It's a fancy string data structure which supports inserts and deletes of characters at arbitrary offsets (designed for text editors). The implementation uses a skip list (which performs similarly to a B-tree). Every node stores an array of characters. To insert or delete, we traverse the structure to find the node at the requested offset, then (usually) memmove a bunch of characters within that node.
Q: How large should that per-node array be? A small number puts more burden on the skip-list structure and the allocator, and incurs more cache misses. A large number is linearly slower because of all the time spent in memmove.
Benchmarking shows the ideal number is in the ballpark of 100-200, depending on the CPU and some specifics of the benchmark itself. Cache misses are extremely expensive: storing only a single character at each node (like the SGI C++ rope structure does) makes it run several times slower. (!!)
Code: https://github.com/josephg/librope
This is the constant to change if you want to experiment yourself:
https://github.com/josephg/librope/blob/81e1938e45561b0856d4...
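A minimal sketch of the per-node layout described above - hypothetical names and a made-up size constant, not librope's actual definitions (see the linked source for those). Each node holds a flat character array, and an in-node insert is one memmove plus one memcpy:

```c
#include <string.h>

/* Hypothetical sketch of a skip-list rope node; illustrative only. */
#define NODE_STR_SIZE 136   /* the tunable constant: ~100-200 tends to win */
#define MAX_HEIGHT 20

typedef struct rope_node {
    unsigned char str[NODE_STR_SIZE];   /* flat per-node character array */
    unsigned short len;                 /* bytes currently used in str */
    struct rope_node *next[MAX_HEIGHT]; /* skip-list forward pointers */
} rope_node;

/* Insert n bytes at `offset` within one node, shifting the tail right with
 * a single memmove. Caller must ensure len + n <= NODE_STR_SIZE; a real
 * implementation splits the node when it would overflow. */
static void node_insert(rope_node *node, size_t offset,
                        const char *s, size_t n) {
    memmove(node->str + offset + n, node->str + offset, node->len - offset);
    memcpy(node->str + offset, s, n);
    node->len += (unsigned short)n;
}
```

The size tradeoff lives entirely in `NODE_STR_SIZE`: shrink it and you pay in pointer chasing and allocator traffic; grow it and every edit pays for a longer memmove.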
In my opinion, hash tables, B-trees and the like in the standard library should probably use flat lists internally when the number of items in the collection is small. I'm surprised more libraries don't do that.
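The "flat list for small N" idea can be sketched in a few lines of C (names and the capacity are hypothetical): a tiny map that stores entries in a plain array and looks them up with a linear scan, which below a few hundred entries is often faster than hashing because the scan is branch-predictable and cache-friendly.

```c
#include <stddef.h>

/* Illustrative small-collection map: flat array + linear scan.
 * A real implementation would switch to a hash table past some threshold. */
#define SMALL_MAP_CAP 256

typedef struct { int key; int value; } entry;
typedef struct { entry items[SMALL_MAP_CAP]; int len; } small_map;

/* Insert or update; returns 0 if the map is full. */
static int small_map_put(small_map *m, int key, int value) {
    for (int i = 0; i < m->len; i++)
        if (m->items[i].key == key) { m->items[i].value = value; return 1; }
    if (m->len == SMALL_MAP_CAP) return 0;
    m->items[m->len++] = (entry){key, value};
    return 1;
}

/* Linear scan lookup; returns NULL if the key is absent. */
static const int *small_map_get(const small_map *m, int key) {
    for (int i = 0; i < m->len; i++)
        if (m->items[i].key == key) return &m->items[i].value;
    return NULL;
}
```

The crossover point mentioned above (a few hundred items) is exactly where the O(n) scan's constant-factor advantage stops paying for itself against an O(1) hash lookup.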
What are some alternatives?
cosmopolitan - build-once run-anywhere c library
c2rust - Migrate C code to Rust
collapseos - Bootstrap post-collapse technology
c3c - Compiler for the C3 language
mtpng - A parallelized PNG encoder in Rust
jumprope-rs
mirage - MirageOS is a library operating system that constructs unikernels
buffet - All-inclusive Buffer for C
teliva - Fork of Lua 5.1 to encourage end-user programming
proposal-explicit-resource-management - ECMAScript Explicit Resource Management
ZeroTier - A Smart Ethernet Switch for Earth
search-benchmark-game - Search engine benchmark (Tantivy, Lucene, PISA, ...)