viper, which nicely handles env vars, can also watch for config changes: https://github.com/spf13/viper#watching-and-re-reading-confi...
If you're rotating creds or need to open/close the DB, this typically just adds another select case to your main function, where you also block on e.g. signal catching to cleanly shut down the app.
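For what it's worth, a minimal sketch of that pattern, assuming a config.yaml next to the binary (the file name and the credential-rotation comment are placeholders, not anything viper prescribes):

    package main

    import (
        "log"
        "os"
        "os/signal"
        "syscall"

        "github.com/fsnotify/fsnotify"
        "github.com/spf13/viper"
    )

    func main() {
        viper.SetConfigFile("config.yaml") // placeholder path
        if err := viper.ReadInConfig(); err != nil {
            log.Fatal(err)
        }

        // Forward viper's change notifications onto a channel we can select on.
        reload := make(chan fsnotify.Event, 1)
        viper.OnConfigChange(func(e fsnotify.Event) {
            select {
            case reload <- e:
            default: // coalesce bursts of change events
            }
        })
        viper.WatchConfig()

        // Block on signals in the same select for a clean shutdown.
        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

        for {
            select {
            case e := <-reload:
                log.Printf("config changed (%s)", e.Name)
                // e.g. rotate credentials or reopen the DB here
            case s := <-sigs:
                log.Printf("got %s, shutting down", s)
                return
            }
        }
    }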
Ten years on, was the C++11 memory model (which I've used) a success, compared to the Linux kernel memory model (which I haven't used)? I've heard that compilers can't remove dead atomic reads because they can synchronize in rare situations; that sequential consistency was defined in a broken way and later fixed in a standards revision; that memory_order_consume is impossible to correctly implement in a way that's actually more optimized than memory_order_acquire; and that the C++ memory model doesn't translate well to GPUs.
Is this better than the state of affairs prior to standardized atomics (which I haven't experienced)? Is it better than Go's approach of "defining enough of a memory model to guide programmers and compiler writers" (which I haven't used)? Or than informally defining a set of use patterns and writing optimizations around those patterns, rather than a formal model of what code and what optimizations are permitted (resulting in optimization passes that are only incorrect in combination, like global value numbering causing miscompilations [1])?
[1]: https://github.com/rust-lang/rust/issues/45839
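For concreteness, a minimal sketch of the Go approach mentioned above (the config type, address, and spin-wait are made up for illustration; atomic.Pointer needs Go 1.19+): publish a fully-initialized value with an atomic store, and the Go memory model guarantees that a load observing the store also observes everything sequenced before it.

    package main

    import (
        "fmt"
        "sync/atomic"
        "time"
    )

    type config struct{ addr string } // placeholder type

    var current atomic.Pointer[config]

    func main() {
        go func() {
            c := &config{addr: "localhost:8080"} // fully initialize first
            current.Store(c)                     // atomic store publishes it
        }()

        // A load that observes the store above also observes the
        // initialization, so c.addr is safe to read without locking.
        for {
            if c := current.Load(); c != nil {
                fmt.Println(c.addr)
                return
            }
            time.Sleep(time.Millisecond) // crude spin-wait, fine for a sketch
        }
    }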