hubris
icecream
| | hubris | icecream |
|---|---|---|
| Mentions | 33 | 16 |
| Stars | 2,790 | 1,553 |
| Growth | 6.5% | 1.6% |
| Activity | 9.4 | 0.0 |
| Latest commit | 6 days ago | 5 months ago |
| Language | Rust | C++ |
| License | Mozilla Public License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
hubris
-
Framework won't be just a laptop company anymore
> The CPUs in Oxide racks are AMD, so, presumably AMD-based compute rather than ARM.
These don’t run Hubris though; based on the chips directory in the repo [0], they’re targeting a mix of NXP and ST parts, which are Arm, and the user isn’t likely to see them or care what firmware they’re running: they’re really pretty “boring”.
[0] : https://github.com/oxidecomputer/hubris/tree/020d014880382d8...
-
Who killed the network switch? A Hubris Bug Story
I wouldn't put this comment here. It's not just some detail of this function; it's an invariant of the field that all writers have to respect (maybe this is the only one now but still) and all readers can take advantage of. So I'd add it to the `TaskDesc::regions` docstring. [1]
[1] https://github.com/oxidecomputer/hubris/commit/b44e677fb39cd...
-
Oxide: The Cloud Computer
With respect to Hubris, the build badge was, it turns out, pointing to a stale workflow. (That is, the build was succeeding, but the build badge was busted.) This comment has been immortalized in the fix.[0]
With respect to Humility, I am going to resist the temptation of pointing out why one of those directories has a different nomenclature with respect to its delimiter -- and just leave it at this: if you really want to find some filthy code in Humility, you can do much, much better than that!
[0] https://github.com/oxidecomputer/hubris/commit/651a9546b20ce...
-
Barracuda Urges Replacing – Not Patching – Its Email Security Gateways
A lot of questions in there! Taking these in order:
1. We aren't making standalone servers: the Oxide compute sled comes in the Oxide rack. So we are not (and do not intend to be) a drop-in replacement for extant rack-mounted servers.
2. We have taken a fundamentally different approach to firmware, with a true root of trust that can attest to the service processor -- which can in turn attest to the system software. This prompts a lot of questions (e.g., who attests to the root of trust?), and there is a LOT to say about this; look for us to talk a lot more about it.
3. In stark contrast (sadly) to nearly everyone else in the server space, the firmware we are developing is entirely open source. More details on that can be found in Cliff Biffle's 2021 OSFC talk and the Hubris and Humility repos.[0][1][2]
4. Definitely not vaporware! We are in the process of shipping to our first customers; you can follow our progress in our Oxide and Friends podcast.[3]
[0] https://www.osfc.io/2021/talks/on-hubris-and-humility-develo...
[1] https://github.com/oxidecomputer/hubris
[2] https://github.com/oxidecomputer/humility
[3] https://oxide-and-friends.transistor.fm/
- Do you use Rust in your professional career?
-
Spotting and Avoiding Heap Fragmentation in Rust Applications
everywhere, for example in https://github.com/oxidecomputer/hubris/search?q=dyn
Is Box really allocating here? Is the "Rust By Example" text incomplete?
Then I had to stop learning Rust for other reasons, but this doubt really hit me at the time.
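For what it's worth, the answer there is yes: `Box::new` does allocate. A minimal sketch of why `Box<dyn Trait>` in particular needs the heap (the trait and type names here are made up for illustration, not taken from the Hubris code or Rust By Example):

```rust
trait Greet {
    fn hi(&self) -> &'static str;
}

struct English;

impl Greet for English {
    fn hi(&self) -> &'static str { "hello" }
}

// With a trait object the concrete type (and thus its size) is erased,
// so the value must live behind a pointer. Box::new moves it to the
// heap; the returned fat pointer carries the data pointer plus a
// vtable pointer used to dispatch `hi` at runtime.
fn greet_boxed() -> Box<dyn Greet> {
    Box::new(English)
}

fn main() {
    let g = greet_boxed();
    println!("{}", g.hi()); // dynamic dispatch through the vtable
}
```

(A zero-sized type like `English` above is an edge case where the allocator may hand back a dangling-but-valid pointer, but the general point stands: `Box` means heap.)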
-
What's the coolest thing you've done with Neovim?
I work on an embedded OS in Rust (Hubris) that has a very bespoke build system. As part of the build, it has to set environment variables based on (1) the target device and (2) the specific "task"; this is an OS with task-level isolation, so tasks are compiled as individual Rust crates.
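The pattern described above can be sketched as a task crate reading settings the build system passed down as environment variables at compile time. The variable names `HUBRIS_TASK_NAME` and `HUBRIS_BOARD` are illustrative only; the real names live in Hubris's build system, not here:

```rust
// Hypothetical sketch: per-task configuration injected via environment
// variables at compile time. In the real setup you would likely use
// env!(), which fails the build when the variable is missing (the build
// system is the only supported entry point); option_env! lets this
// sketch compile and run standalone with fallback values.
fn task_banner() -> String {
    let task = option_env!("HUBRIS_TASK_NAME").unwrap_or("demo-task");
    let board = option_env!("HUBRIS_BOARD").unwrap_or("demo-board");
    format!("building task `{}` for board `{}`", task, board)
}

fn main() {
    println!("{}", task_banner());
}
```

Because `option_env!`/`env!` are expanded at compile time, each task crate bakes in its own settings, which is what makes per-task compilation of individual crates workable.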
-
TCG TPM2.0 implementations vulnerable to memory corruption
Oxide Computer told some stories about the difficulty of bringing up a new motherboard, and mentioned a lot of gotcha details and hack solutions for managing their AMD chip.
They talked about their bring up sequence, boot chain verification on their motherboard, and designing / creating / verifying their hardware root of trust.
I heard mention of this on a podcast recently, trying to find the reference.
I'm pretty sure it was [S3]
- "Tales from the Bringup Lab" https://lnns.co/FBf5oLpyHK3
- or "More Tales from the Bringup Lab" https://lnns.co/LQur_ToJX9m
But I found again these interesting things worth sharing on that search. https://oxide.computer/blog/hubris-and-humility, https://github.com/oxidecomputer/hubris
Search 1 [S1], Trammell Hudson ep mentioning firmware (chromebook related iirc) https://lnns.co/pystdPm0QvG.
Search 2 [S2], Security, Cryptography, Whatever podcast episode mentioning Oxide and roots of trust or similar. https://lnns.co/VnyTvdhBiGC
Search links:
[S1]: https://www.listennotes.com/search/?q=oxide+tpm
[S2]: https://www.listennotes.com/search/?q=oxide%20and%20friends%...
[S3]: https://www.listennotes.com/search/?q=oxide%20and%20friends%...
- Well-documented Embedded dev board for video, ethernet, usb, file IO, etc
-
OpenAI Used Kenyan Workers on Less Than $2 per Hour to Make ChatGPT Less Toxic
When we started the company, we knew it would be a three year build -- and indeed, our first product is in the final stages of development (i.e. EMC/safety certification). We have been very transparent about our progress along the way[0][1][2][3][4][5][6][7] -- and our software is essentially all open source, so you can follow along there as well.[8][9][10]
If you are asking "does anyone want a rack-scale computer?" the (short) answer is: yes, they do. The on-prem market has been woefully underserved -- and there are plenty of folks who are sick of Dell/HPE/VMware/Cisco, to say nothing of those who were born in the public cloud and are wondering if they should perhaps own some of their own compute rather than rent it all.
[0] https://oxide-and-friends.transistor.fm/episodes/holistic-bo...
[1] https://oxide-and-friends.transistor.fm/episodes/the-oxide-s...
[2] https://oxide-and-friends.transistor.fm/episodes/bringup-lab...
[3] https://oxide-and-friends.transistor.fm/episodes/more-tales-...
[4] https://oxide-and-friends.transistor.fm/episodes/another-lpc...
[5] https://oxide-and-friends.transistor.fm/episodes/the-pragmat...
[6] https://oxide-and-friends.transistor.fm/episodes/tales-from-...
[7] https://oxide-and-friends.transistor.fm/episodes/the-sidecar...
[8] https://github.com/oxidecomputer/omicron
[9] https://github.com/oxidecomputer/propolis
[10] https://github.com/oxidecomputer/hubris
icecream
- Icecream: Distributed compiler with a central scheduler to share build load
-
Distcc: A fast, free distributed C/C++ compiler
Related
https://github.com/icecc/icecream - another option that does what distcc does, but aimed at a somewhat different use case.
https://ccache.dev/ - a similar idea but provides caching of build outputs instead of distributing builds. You can use it together with distcc to achieve even better performance.
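The two tools combine cleanly because ccache can hand cache misses off to a prefix command. A minimal config sketch (`CCACHE_PREFIX` and `DISTCC_HOSTS` are the real ccache/distcc variables; the host names are made up):

```shell
# On a cache miss, ccache prepends "distcc" to the real compiler
# invocation, so misses compile remotely while hits stay local.
export CCACHE_PREFIX=distcc

# distcc's worker list; "buildbox1"/"buildbox2" are hypothetical hosts,
# with /8 capping the parallel jobs sent to each.
export DISTCC_HOSTS="localhost buildbox1/8 buildbox2/8"

# Route compiler calls through ccache (masquerade symlinks also work):
make CC="ccache gcc" CXX="ccache g++" -j16
```

With this arrangement ccache answers repeated builds instantly, and only genuinely new translation units hit the distcc farm.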
-
Do you use ccache to speed up compilation times
Of course! The github readme provides a lot of info - https://github.com/icecc/icecream
- GitHub - icecc/icecream: Distributed compiler with a central scheduler to share build load
-
Ccache – a fast C/C++ compiler cache
If you like distcc, did you ever give icecc a try?
https://github.com/icecc/icecream
I never had the time to set it up properly, but by the looks of it, it should be even better.
- People who use distributed builds, how do you handle many compilers?
- Fuchsia Workstation
-
Give local devices a way to connect to clients? - openvpn
I would like to have an icecc setup I can VPN into. It seems that with normal configs the clients can talk to the scheduler, but the scheduler can't connect to the clients, as it tries to connect to the device running the OpenVPN server rather than the one behind it. How could I make my OpenVPN clients appear almost as physical devices on the network, with unique IPs that local devices can connect to? Or, if that is unnecessary, how else could I solve this?
-
ccache 4.6 released
Glad to see a new release on this! I've read worrying news about the state of icecc, and the follow-up uncertain news on sccache, so I hope at least some part of the tooling is in good shape.
-
Best way to manage dependencies with c++?
I always wanted to try cmake-conan so I could let Conan grab all packages but have a neat CMake script in charge of what gets built when. Also, this would allow me to easily switch between CMake FetchContent and Conan packages that may or may not be stashed automatically on a local Artifactory server. Secondly, since all build requirements are then stashed on a server and binary-reproducible, you could consider adding icecream and ccache into the mix. (Try running a node on one of your build servers for massive speedups with icecream.) This does, however, require a reproducible build environment (by configure script), which Conan is again really good at.
What are some alternatives?
tock - A secure embedded operating system for microcontrollers
sccache - Sccache is a ccache-like tool. It is used as a compiler wrapper and avoids compilation when possible. Sccache has the capability to utilize caching in remote storage environments, including various cloud storage options, or alternatively, in local storage.
esp32 - Peripheral access crate for the ESP32
ccache - ccache – a fast compiler cache
meta-raspberrypi - Yocto/OE BSP layer for the Raspberry Pi boards
keppel - Regionally federated multi-tenant container image registry
esp32-hal - A hardware abstraction layer for the esp32 written in Rust.
compiler-benchmark - Benchmarks compilation speeds of different combinations of languages and compilers.
l4v - seL4 specification and proofs
gg - The Stanford Builder
ferros - A Rust-based userland which also adds compile-time assurances to seL4 development.
cmake-init-conan-example - cmake-init generated executable project with Conan integration