| | cpu | hubris |
|---|---|---|
| Mentions | 3 | 35 |
| Stars | 240 | 3,038 |
| Growth | 1.7% | 1.2% |
| Activity | 8.2 | 9.4 |
| Latest commit | 30 days ago | 4 days ago |
| Language | Go | Rust |
| License | BSD 3-clause "New" or "Revised" License | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cpu
- U-root/CPU: CPU command in Go, inspired by the Plan 9 CPU command
-
SSH Tips and Tricks
What's the problem with FUSE?
Anyway, it needs a daemon on the remote system, but perhaps https://github.com/u-root/cpu is suitable. (I can't vouch for it, not having used it.)
The HTCondor batch system also provides something like that, but I don't know details, and it's probably not a separable component.
-
Computer Is a Distributed System
If you yearn for Plan 9 -- I'm not sure I do -- Minnich's current incarnation of the inspiration seems to be https://github.com/u-root/cpu
hubris
-
It has been [33] days since the last Hubris kernel bug
Finding bugs in the Hubris kernel is rare enough that we have a running joke about resetting the “days since last kernel bug” timer.
I decided to make this joke into an actual docs page; because HN has enjoyed posts about Hubris in the past [1], I figured this might be of interest!
Many of the individual bugs are terrifying dives into corner cases of an embedded OS. This one is particularly good reading: https://github.com/oxidecomputer/hubris/issues/1134
[1] https://news.ycombinator.com/item?id=29390751
-
My 71 TiB ZFS NAS After 10 Years and Zero Drive Failures
It’s moderately smart - there’s a PID loop with per-component target temperatures, so it’s trying not to do more work than necessary.
(source: I wrote it, and it’s all published at https://github.com/oxidecomputer/hubris/tree/master/task/the... )
We also worked with the fan vendor to get parts with a lower minimum RPM. The stock fans idle at about 5K RPM, and ours idle at 2K, which is already enough to keep the system cool under light loads.
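The control scheme described above, a PID loop with per-component target temperatures and a fan idle floor, can be sketched roughly like this (a minimal illustration of the idea, not the actual Hubris thermal task; all names and gains here are invented):

```rust
// Hypothetical PID-style fan control sketch. Each component has its own
// target temperature; the loop drives RPM only as hard as the hottest
// component's error demands, and never below the fans' idle floor.

struct Pid {
    kp: f32,
    ki: f32,
    integral: f32,
}

impl Pid {
    fn step(&mut self, error: f32, dt: f32) -> f32 {
        self.integral += error * dt;
        self.kp * error + self.ki * self.integral
    }
}

/// `temps` is a slice of (measured, target) pairs, one per component.
fn control_step(
    pid: &mut Pid,
    temps: &[(f32, f32)],
    dt: f32,
    min_rpm: f32,
    max_rpm: f32,
) -> f32 {
    // Worst-case error across all monitored components.
    let worst = temps
        .iter()
        .map(|(measured, target)| measured - target)
        .fold(f32::MIN, f32::max);
    // Clamp to the fans' usable range (e.g. a 2K RPM idle floor).
    (min_rpm + pid.step(worst, dt)).clamp(min_rpm, max_rpm)
}

fn main() {
    let mut pid = Pid { kp: 100.0, ki: 10.0, integral: 0.0 };
    // (measured, target) pairs for, say, CPU and DIMMs, both below target:
    // the controller stays at the idle floor rather than spinning up.
    let rpm = control_step(&mut pid, &[(62.0, 70.0), (48.0, 55.0)], 1.0, 2000.0, 5000.0);
    println!("{rpm}");
}
```

The lower idle floor mentioned in the comment matters here: the clamp means the loop can only save energy down to whatever minimum RPM the fan hardware supports.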
-
Framework won't be just a laptop company anymore
> The CPUs in Oxide racks are AMD, so, presumably AMD-based compute rather than ARM.
These don’t run Hubris though; based on the chips directory in the repo [0], they’re targeting a mix of NXP and ST parts, which are Arm, and the user isn’t likely to see them or care what firmware they’re running: they’re really pretty “boring”.
[0] : https://github.com/oxidecomputer/hubris/tree/020d014880382d8...
-
Who killed the network switch? A Hubris Bug Story
I wouldn't put this comment here. It's not just some detail of this function; it's an invariant of the field that all writers have to respect (maybe this is the only one now but still) and all readers can take advantage of. So I'd add it to the `TaskDesc::regions` docstring. [1]
[1] https://github.com/oxidecomputer/hubris/commit/b44e677fb39cd...
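The documentation style the comment advocates, recording an invariant on the field itself so every reader and writer sees it, looks something like this (an illustrative struct with a made-up invariant, not Hubris's actual `TaskDesc`):

```rust
// Illustrative sketch: a field-level docstring stating an invariant that
// all writers must preserve and all readers may rely on. The struct and
// the invariant here are hypothetical.

/// Descriptor for one task in the system.
pub struct TaskDesc {
    /// Memory regions this task may access, as (base, length) pairs.
    ///
    /// # Invariant
    ///
    /// Entries are sorted by base address. Any code constructing or
    /// mutating this field must preserve this ordering; readers (e.g.
    /// a binary search over regions) may depend on it.
    pub regions: Vec<(usize, usize)>,
}

fn main() {
    let t = TaskDesc {
        regions: vec![(0x0800_0000, 0x1000), (0x2000_0000, 0x4000)],
    };
    // A reader leaning on the sortedness invariant:
    assert!(t.regions.windows(2).all(|w| w[0].0 <= w[1].0));
    println!("regions: {}", t.regions.len());
}
```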
-
Oxide: The Cloud Computer
With respect to Hubris, the build badge was, it turns out, pointing to a stale workflow. (That is, the build was succeeding, but the build badge was busted.) This comment has been immortalized in the fix.[0]
With respect to Humility, I am going to resist the temptation of pointing out why one of those directories has a different nomenclature with respect to its delimiter -- and just leave it at this: if you really want to find some filthy code in Humility, you can do much, much better than that!
[0] https://github.com/oxidecomputer/hubris/commit/651a9546b20ce...
-
Barracuda Urges Replacing – Not Patching – Its Email Security Gateways
A lot of questions in there! Taking these in order:
1. We aren't making standalone servers: the Oxide compute sled comes in the Oxide rack. So we are not (and do not intend to be) a drop-in replacement for extant rack-mounted servers.
2. We have taken a fundamentally different approach to firmware, with a true root of trust that can attest to the service processor -- which can in turn attest to the system software. This prompts a lot of questions (e.g., who attests to the root of trust?), and there is a LOT to say about this; look for us to talk a lot more about this.
3. In stark contrast (sadly) to nearly everyone else in the server space, the firmware we are developing is entirely open source. More details on that can be found in Cliff Biffle's 2021 OSFC talk and the Hubris and Humility repos.[0][1][2]
4. Definitely not vaporware! We are in the process of shipping to our first customers; you can follow our progress in our Oxide and Friends podcast.[3]
[0] https://www.osfc.io/2021/talks/on-hubris-and-humility-develo...
[1] https://github.com/oxidecomputer/hubris
[2] https://github.com/oxidecomputer/humility
[3] https://oxide-and-friends.transistor.fm/
- Do you use Rust in your professional career?
-
Spotting and Avoiding Heap Fragmentation in Rust Applications
everywhere, for example in https://github.com/oxidecomputer/hubris/search?q=dyn
Is Box really allocating here? Is the "Rust By Example" text incomplete?
Then I had to stop learning Rust for other reasons, but this doubt really hit me at the time.
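To answer the doubt raised above: yes, `Box::new` does heap-allocate, including for trait objects. A small demonstration (my own sketch, not from the thread) of the difference between a boxed trait object and a borrowed one:

```rust
// `Box<dyn Trait>` owns a heap allocation; `&dyn Trait` merely borrows.
// Both are "fat" pointers: a data pointer plus a vtable pointer.
use std::fmt::Display;

fn boxed(x: u64) -> Box<dyn Display> {
    // Moves `x` into a fresh heap allocation and returns a fat pointer to it.
    Box::new(x)
}

fn borrowed(x: &u64) -> &dyn Display {
    // No allocation: just a fat reference to the caller's value.
    x
}

fn main() {
    let b = boxed(7);
    let v = 7u64;
    let r = borrowed(&v);
    // Both pointers are two machine words wide.
    println!(
        "{} {}",
        std::mem::size_of::<Box<dyn Display>>(),
        std::mem::size_of::<&dyn Display>()
    );
    println!("{b} {r}");
}
```

This is why `dyn` in a no-allocator kernel context typically appears behind references or `&'static` data rather than `Box`.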
-
What's the coolest thing you've done with Neovim?
I work on an embedded OS in Rust (Hubris) that has a very bespoke build system. As part of the build system, it has to set environment variables based on (1) the target device and (2) the specific "task"; this is an OS with task-level isolation, so tasks are compiled as individual Rust crates.
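The pattern described, a build system passing per-task configuration through environment variables that each task crate reads at compile time, can be sketched like this (the variable names below are invented for illustration, not Hubris's actual ones):

```rust
// Hypothetical sketch: a build system sets environment variables before
// invoking cargo, and the task crate bakes them in at compile time with
// `option_env!`. The variable names here are made up.

fn task_config() -> (&'static str, &'static str) {
    // `option_env!` is resolved when the crate is compiled, so each task
    // crate gets fixed values for its target board and task name.
    let board = option_env!("BUILD_TARGET_BOARD").unwrap_or("unknown-board");
    let task = option_env!("BUILD_TASK_NAME").unwrap_or("unknown-task");
    (board, task)
}

fn main() {
    let (board, task) = task_config();
    println!("building task `{task}` for board `{board}`");
}
```

Because the values are compile-time constants, each task binary is specialized to exactly one (board, task) pair, which fits the one-crate-per-task model described above.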
-
TCG TPM2.0 implementations vulnerable to memory corruption
Oxide Computer told some stories about the difficulty of bringing up a new motherboard, and mentioned a lot of gotcha details and hack solutions for managing their AMD chip.
They talked about their bring up sequence, boot chain verification on their motherboard, and designing / creating / verifying their hardware root of trust.
I heard mention of this on a podcast recently, trying to find the reference.
I'm pretty sure it was [S3]
- "Tales from the Bringup Lab" https://lnns.co/FBf5oLpyHK3
- or "More Tales from the Bringup Lab" https://lnns.co/LQur_ToJX9m
But I found again these interesting things worth sharing on that search. https://oxide.computer/blog/hubris-and-humility, https://github.com/oxidecomputer/hubris
Search 1 [S1], Trammell Hudson ep mentioning firmware (chromebook related iirc) https://lnns.co/pystdPm0QvG.
Search 2 [S2], Security, Cryptography, Whatever podcast episode mentioning Oxide and roots of trust or similar. https://lnns.co/VnyTvdhBiGC
Search links:
[S1]: https://www.listennotes.com/search/?q=oxide+tpm
[S2]: https://www.listennotes.com/search/?q=oxide%20and%20friends%...
[S3]: https://www.listennotes.com/search/?q=oxide%20and%20friends%...
What are some alternatives?
sha256-simd - Accelerate SHA256 computations in pure Go using AVX512, SHA Extensions for x86 and ARM64 for ARM. On AVX512 it provides an up to 8x improvement (over 3 GB/s per core). SHA Extensions give a performance boost of close to 4x over native.
tock - A secure embedded operating system for microcontrollers
openssh-portable - Portable OpenSSH
esp32 - Peripheral access crate for the ESP32
github-keygen - Easy creation of secure SSH configuration for your GitHub account(s)
meta-raspberrypi - Yocto/OE BSP layer for the Raspberry Pi boards
ssh-save-alias - Quickly create ssh aliases without manually editing ~/.ssh/config
l4v - seL4 specification and proofs
Mosh - Mobile Shell
esp32-hal - A hardware abstraction layer for the esp32 written in Rust.
ssh-tools - Making SSH more convenient
ferros - A Rust-based userland which also adds compile-time assurances to seL4 development.