rocketpool-research
turbo-geth
| | rocketpool-research | turbo-geth |
|---|---|---|
| Mentions | 12 | 58 |
| Stars | 39 | 2,938 |
| Growth | - | 7.3% |
| Activity | 6.3 | 9.9 |
| Latest commit | 23 days ago | 6 days ago |
| Language | - | Go |
| License | - | GNU Lesser General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rocketpool-research
- Forcing the use of MEV-boost is leading us to censorship
What's to stop the authorities from outlawing all relays except the censoring ones? When you are forced to use MEV-boost through the threat of penalties (up to 80% of your stake!), then even those of us who "fell" for the decentralised promise of RP will be forced to be complicit in censorship.
- Censorship and a potential fork (not powchain) after the Merge that may have serious consequences - summary
Unfortunately, RP is also starting to look more and more centralised. In the latest upgrade they introduce penalties that were not included in the original "dumb contract" between us, the validators, and RP. What's worse, they speak of forcing validators to use MEV-boost.
- Daily General Discussion - August 16, 2022
- Where does the second 16 eth come from if I create a full 32 eth minipool?
This surprised me a bit - I know this is how it works for a 16 eth node where it has to wait for another 16 eth to be matched and sent by the contract, but what about in my case? Am I still stuck waiting for the oDAO to confirm "everything looks right" (per here)? And if so, is there a way to see where I'm at on this queue? So is the contract holding the other 16 eth in the general pool and then will send it (basically "reserving" it for me)?
- Daily General Discussion - March 20, 2022
This is a better place to dive deeper into the research being done on this and other topics: https://github.com/rocket-pool/rocketpool-research
- Attention: All Node Operators (Extremely Important)
The minipool will wait for 12 hours while it gets vetted by the Oracle DAO to make sure it didn't abuse the Beacon Chain's withdrawal credentials exploit (this is known as the "scrub check")
- UPDATE: Launch Bug and Fix Timeline
We'll make sure to keep our awesome community updated as we roll out our solution to this unexpected, but very welcome, addition to the protocol. https://github.com/rocket-pool/rocketpool-research/blob/master/Reports/withdrawal-creds-exploit.md
- Daily General Discussion - August 28, 2021
More info on this can be found here: https://medium.com/rocket-pool/the-merge-0x02-mev-and-the-future-of-the-protocol-c7451337ec40 and here: https://github.com/rocket-pool/rocketpool-research
- Upgrading The Minipool Smart Contract Delegate
- Limiting ODAO Power & Rocket Pool Minipool Delegate Upgrade System
Link to repository: https://github.com/rocket-pool/rocketpool-research/blob/master/delegate-upgrades.md
turbo-geth
- AMD EPYC 7C13 Is a Surprisingly Cheap and Good CPU
To be clear, it was a CPU fault that doesn't occur at all when running e.g. stress-ng, but only (as far as I know) when running our particular production workload.
And only after several hours of running our production workload.
But then, once it's known to be provokable for a given machine, it's extremely reliable to trigger again — in that it seems to take the same number of executed instructions that utilize the faulty part of the die, since power-on. (I.e. if I run a workload that's 50% AES-NI and 50% something else, then it takes exactly twice as long to fault as if the workload was 100% AES-NI.)
And it isn't provoked any more quickly by having just provoked it with the last hard fault — i.e. there's no temporal locality to it. That would make both "environmental conditions" and "CPU is overheating/overvolting" much less likely as contributing factors.
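A workload of the kind described — all cores saturated with AES-NI work — can be approximated with Go's `crypto/aes`, which dispatches to the AES-NI instructions on amd64. This is a minimal, hypothetical stress loop for reproducing such a fault, not the commenter's actual production workload:

```go
package main

import (
	"crypto/aes"
	"fmt"
	"runtime"
	"sync"
)

// stressAES encrypts `blocks` consecutive 16-byte blocks with a fixed key,
// feeding each ciphertext back in as the next plaintext so the work forms a
// serial chain and can't be optimized away. On amd64, crypto/aes uses the
// hardware AES-NI instructions for Encrypt.
func stressAES(blocks int) [16]byte {
	block, err := aes.NewCipher(make([]byte, 16)) // all-zero key, illustrative only
	if err != nil {
		panic(err)
	}
	var buf [16]byte
	for i := 0; i < blocks; i++ {
		block.Encrypt(buf[:], buf[:])
	}
	return buf
}

func main() {
	// Saturate every core, mirroring the "100% AES-NI" workload mix that
	// made the fault appear after a fixed count of executed instructions.
	var wg sync.WaitGroup
	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			stressAES(1 << 22)
		}()
	}
	wg.Wait()
	fmt.Println("completed without fault")
}
```

Because the fault reportedly scales with the number of AES-NI instructions executed, raising the per-goroutine block count (or the run duration) should shorten time-to-fault proportionally on an affected die.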
> There have been enough of them in private hands for long enough that if there were widespread issues they would be well-known.
Our setup is likely a bit unusual. The machines that experienced the faults have every available PCIe lane (other than the few given to the NIC) dedicated to NVMe, where we've got the NVMe sticks striped together in software RAID0 (meaning that every disk read fans in as many almost-precisely-parallel PCIe packets contending for bus time to DMA their way back into the kernel BIO buffers). On top of this, we then have every core saturated with parallel CPU-bottlenecked activity, with a heavy focus on these AES-NI instructions, and a high level of rapid allocation/deallocation of multi-GB per-client working arenas, contending against a very large and very hot disk page cache.
I'll put it like this: some of these machines are "real-time OLAP" DB (Postgres) servers. And under load, our PG transactions sit in WAIT_LWLOCK waiting to start up, because they're actually contending over acquiring reader access to the global in-memory pg_locks table in order to write their per-table READ_SHARED locks there (in turn because they're dealing with wide joins across N tables in M schemas where each table has hundreds of partitions and the query is an aggregate so no constraint-exclusion can be used.) Imagine the TLB havoc going on, as those forked-off query workers fight for time.
It's to the point that if we don't either terminate our long-lived client connections (even when not idle), or restart our PG servers at least once a month, we actually see per-backend resource leaks that eventually cause PG to get OOMed!
The machines that aren't DB servers, meanwhile — but are still set up the same on an OS level — are blockchain nodes, running https://github.com/ledgerwatch/erigon, which likes to do its syncing work in big batches: download N blocks, then execute N blocks, then index N blocks. The part that reliably causes the faults is "hashing N blocks", for sufficiently large values of N that you only ever really hit during a backfill sync, not live sync.
In neither case would I expect many others to have hit on just the right combination of load to end up with the same problems.
(Which is why I don't really believe that whatever problem AMD might have seen, is related to this one. This seems more like a single-batch production error than anything, where OVH happened to acquire multiple CPUs from that single batch.)
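Erigon's staged sync, as described above, processes work in large sequential batches — download N blocks, then execute N, then index N — rather than interleaving per block. A toy version of that pattern follows; the stage names echo the description, but the logic is illustrative, not erigon's actual implementation:

```go
package main

import "fmt"

// stage is one step of a staged sync: it consumes the entire batch before
// the next stage begins, instead of handling each block end-to-end.
type stage struct {
	name string
	run  func(blocks []int)
}

// syncBatch runs every stage over the same batch, in order.
func syncBatch(blocks []int, stages []stage) {
	for _, s := range stages {
		fmt.Printf("stage %q: processing %d blocks\n", s.name, len(blocks))
		s.run(blocks)
	}
}

func main() {
	// A backfill-sized batch; live sync would use far smaller values of N,
	// which is why the batch-heavy hashing step only bites during backfill.
	blocks := make([]int, 10000)
	for i := range blocks {
		blocks[i] = i
	}
	executed := 0
	syncBatch(blocks, []stage{
		{"download", func(bs []int) {}},
		{"execute", func(bs []int) { executed = len(bs) }},
		{"hash state", func(bs []int) {}}, // the step cited as the fault trigger
		{"index", func(bs []int) {}},
	})
	fmt.Println("blocks executed:", executed)
}
```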
> It's possible that AMD didn't order enough capacity from TSMC to meet demand, and couldn't get more during the COVID supply chain issues.
Yes, but that doesn't explain why they weren't able to ramp up production at any point in the last four years. Even now, there are still likely some smaller hosts that would like to buy EPYC 7xxxs at more-affordable prices, if AMD would make them.
You need an additional factor to explain this lack of ramp-up post-COVID; and to explain why the cloud providers aren't still buying any 7xxxs (which they would normally do, to satisfy legacy clients who want to replicate their exact setup across more AZs/regions.) Server CPUs don't normally have 2-year purchase commitments. It's normally more like 6.
Sure, maybe Zen4c was super-marketable to the clouds' customers, so they negotiated with AMD to drop all their existing spend commitments on 7xxx parts purchases in favor of committing to 9xxx parts purchases. (But why would AMD agree to that, without anything the clouds could hold over their head? It would mean shutting down many of the 7xxx production lines early, translating to the CapEx for those production lines not getting paid off!)
- erigon sync log correct?
consensus client/execution client -> ERIGON v2.45.2 and NIMBUS v23.5.1
- Can anyone share updated Erigon Grafana dashboard json file?
This file (https://github.com/ledgerwatch/erigon/blob/devel/cmd/prometheus/dashboards/erigon.json) is outdated. Many panels are not working. Would appreciate if someone can share the json with all the useful panels.
- Syncing an erigon node
48 hours - currently on stage 7 - https://github.com/ledgerwatch/erigon/blob/devel/eth/stagedsync/README.md
- Ethereum's pending withdrawals total $1.34 billion after Shapella
https://github.com/ledgerwatch/erigon 696 contributors
- Daily General Discussion - April 12, 2023
Erigon 2.41.0 does come with Shanghai for mainnet though so as long as people are running 2.41.0 or 2.42.0 they will be all set.
- How Client Architecture applies to decentralization & security in Crypto
Erigon ***(~10.8% of all clients)*** — a fork of the Geth client (also in the Go programming language) that is focused on maximizing storage efficiency for archive nodes.
- Current known issues with Shapella clients
GitHub issue tracking it; likely to be fixed on Erigon's side
- Erigon v2.41.0 is out. Ready for the Shanghai upgrade on Ethereum mainnet
- Erigon v2.40.1 released
What are some alternatives?
defisaver-v3-contracts - All the contracts related to the Defi Saver ecosystem
besu - An enterprise-grade Java-based, Apache 2.0 licensed Ethereum client https://wiki.hyperledger.org/display/besu
smartnode-install - The install script for a Rocket Pool smart node.
protocols - A zkRollup DEX & Payment Protocol
rocketpool-js - A javascript library for interacting with the Rocket Pool network.
awesome-solidity - ⟠ A curated list of awesome Solidity resources, libraries, tools and more
rocketpool - Decentralised Ethereum Liquid Staking Protocol.
ethereum2-docker-compose - Run different kind of Ethereum 2 staking nodes with monitoring tools and own Ethereum 1 node out of the box!
docs.rocketpool.net - Rocket Pool Documentation & Guide Hub
go-ethereum - Go implementation of the Ethereum protocol
yearn-protocol - Yearn smart contracts