graph-node VS turbo-geth

Compare graph-node vs turbo-geth and see what their differences are.

graph-node

Graph Node indexes data from blockchains such as Ethereum and serves it over GraphQL (by graphprotocol)
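
Graph Node serves each deployed subgraph over GraphQL, so querying it is just an HTTP POST. The following is a minimal sketch, assuming a node running locally on the default query port 8000 and a hypothetical subgraph named example/my-subgraph that exposes a tokens entity; substitute whatever subgraph you actually have deployed:

```go
// Minimal sketch: querying a graph-node GraphQL endpoint from Go.
// The subgraph name and the `tokens` entity are hypothetical examples.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// graph-node serves deployed subgraphs under /subgraphs/name/<name>.
	url := "http://localhost:8000/subgraphs/name/example/my-subgraph" // hypothetical subgraph

	query := `{ tokens(first: 5) { id symbol } }` // hypothetical entity

	body, err := json.Marshal(map[string]string{"query": query})
	if err != nil {
		panic(err)
	}
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out["data"])
}
```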

turbo-geth

Ethereum implementation on the efficiency frontier (by ledgerwatch). Turbo-geth has since been renamed Erigon, which is why the mentions below link to the ledgerwatch/erigon repository.
                 graph-node            turbo-geth
Mentions         125                   58
Stars            2,780                 2,938
Growth           1.9%                  7.3%
Activity         9.8                   9.9
Latest commit    5 days ago            1 day ago
Language         Rust                  Go
License          Apache License 2.0    GNU Lesser General Public License v3.0 only
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
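
The exact activity formula isn't published here; the sketch below is only a hypothetical illustration of the stated idea that recent commits weigh more than older ones, using an invented exponential-decay constant:

```go
// Hypothetical sketch of a recency-weighted activity score. The real
// formula behind the comparison site's metric is not published; the
// decay constant here is invented purely for illustration.
package main

import (
	"fmt"
	"math"
)

// activityScore weights each commit by exp(-age/decayDays), so a commit
// from today counts ~1.0 and older commits count progressively less.
func activityScore(commitAgesDays []float64) float64 {
	const decayDays = 30.0 // invented decay constant
	score := 0.0
	for _, age := range commitAgesDays {
		score += math.Exp(-age / decayDays)
	}
	return score
}

func main() {
	recent := []float64{1, 2, 3, 5, 8}         // commits a few days old
	stale := []float64{90, 120, 150, 200, 250} // same count, months old
	fmt.Printf("recent project: %.2f\n", activityScore(recent))
	fmt.Printf("stale project:  %.2f\n", activityScore(stale))
}
```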

graph-node

Posts with mentions or reviews of graph-node. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-27.

turbo-geth

Posts with mentions or reviews of turbo-geth. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-17.
  • AMD EPYC 7C13 Is a Surprisingly Cheap and Good CPU
    1 project | news.ycombinator.com | 27 Mar 2024
    To be clear, it was a CPU fault that doesn't occur at all when running e.g. stress-ng, but only (as far as I know) when running our particular production workload.

    And only after several hours of running our production workload.

    But then, once it's known to be provokeable for a given machine, it's extremely reliable to trigger it again — in that it seems to take the same number of executed instructions that utilize the faulty part of the die, since power on. (I.e. if I run a workload that's 50% AES-NI and 50% something else, then it takes exactly twice as long to fault as if the workload was 100% AES-NI.)

    And it isn't provoked any more quickly by having just provoked it with the last hard-fault — i.e. there's no temporal locality to it. Which would make both "environmental conditions" and "CPU is overheating / overvolting" much less likely as contributing factors.

    > There have been enough of them in private hands for long enough that if there were widespread issues they would be well-known.

    Our setup is likely a bit unusual. These machines that experienced the faults have every available PCIe lane (other than the few given to the NIC) dedicated to NVMe; where we've got the NVMe sticks stuck together in software RAID0 (meaning that every disk read fans in as many almost-precisely-parallel PCIe packets contending for bus time to DMA their way back into the kernel BIO buffers.) On top of this, we then have every core saturated with parallel CPU-bottlenecked activity, with a heavy focus on these AES-NI instructions; and a high level of rapid allocation/deallocation of multi-GB per-client working arenas, contending against a very large and very hot disk page cache.

    I'll put it like this: some of these machines are "real-time OLAP" DB (Postgres) servers. And under load, our PG transactions sit in WAIT_LWLOCK waiting to start up, because they're actually contending over acquiring reader access to the global in-memory pg_locks table in order to write their per-table READ_SHARED locks there (in turn because they're dealing with wide joins across N tables in M schemas where each table has hundreds of partitions and the query is an aggregate so no constraint-exclusion can be used.) Imagine the TLB havoc going on, as those forked-off query workers fight for time.

    It's to the point that if we don't either terminate our long-lived client connections (even when not idle), or restart our PG servers at least once a month, we actually see per-backend resource leaks that eventually cause PG to get OOMed!

    The machines that aren't DB servers, meanwhile — but are still set up the same on an OS level — are blockchain nodes, running https://github.com/ledgerwatch/erigon, which likes to do its syncing work in big batches: download N blocks, then execute N blocks, then index N blocks. The part that reliably causes the faults is "hashing N blocks", for sufficiently large values of N that you only ever really hit during a backfill sync, not live sync. (A simplified sketch of this staged, batched pattern follows this post.)

    In neither case would I expect many others to have hit on just the right combination of load to end up with the same problems.

    (Which is why I don't really believe that whatever problem AMD might have seen, is related to this one. This seems more like a single-batch production error than anything, where OVH happened to acquire multiple CPUs from that single batch.)

    > It's possible that AMD didn't order enough capacity from TSMC to meet demand, and couldn't get more during the COVID supply chain issues.

    Yes, but that doesn't explain why they weren't able to ramp up production at any point in the last four years. Even now, there are still likely some smaller hosts that would like to buy EPYC 7xxxs at more-affordable prices, if AMD would make them.

    You need an additional factor to explain this lack of ramp-up post-COVID; and to explain why the cloud providers aren't still buying any 7xxxs (which they would normally do, to satisfy legacy clients who want to replicate their exact setup across more AZs/regions.) Server CPUs don't normally have 2-year purchase commitments; it's normally more like 6 years.

    Sure, maybe Zen4c was super-marketable to the clouds' customers, so they negotiated with AMD to drop all their existing spend commitments on 7xxx parts purchases in favor of committing to 9xxx parts purchases. (But why would AMD agree to that, without anything the clouds could hold over their head? It would mean shutting down many of the 7xxx production lines early, translating to the CapEx for those production lines not getting paid off!)

  • erigon sync log correct?
    2 projects | /r/ethstaker | 17 Jun 2023
    consensus client/execution client -> ERIGON v2.45.2 and NIMBUS v23.5.1
  • Can anyone share updated Erigon Grafana dashboard json file?
    1 project | /r/ethstaker | 2 Jun 2023
    This file (https://github.com/ledgerwatch/erigon/blob/devel/cmd/prometheus/dashboards/erigon.json) is outdated. Many panels are not working. Would appreciate if someone can share the json with all the useful panels.
  • Syncing an erigon node
    1 project | /r/ethstaker | 7 May 2023
    48 hours - currently on stage 7 - https://github.com/ledgerwatch/erigon/blob/devel/eth/stagedsync/README.md
  • Ethereum's pending withdrawals total $1.34 billion after Shapella
    10 projects | /r/CryptoCurrency | 13 Apr 2023
    https://github.com/ledgerwatch/erigon 696 contributors
  • Daily General Discussion - April 12, 2023
    5 projects | /r/ethfinance | 12 Apr 2023
    Erigon 2.41.0 does come with Shanghai for mainnet though so as long as people are running 2.41.0 or 2.42.0 they will be all set.
  • How Client Architecture applies to decentralization & security in Crypto
    3 projects | /r/u_AndreyDidovskiy | 2 Apr 2023
    — Erigon (~10.8% of all clients): a fork of the Geth client (also written in Go) that is focused on maximizing storage efficiency for archive nodes.
  • Current known issues with Shapella clients
    1 project | /r/ethstaker | 28 Mar 2023
    Github issue tracking it, likely to be fixed on Erigon side
  • Erigon v2.41.0 is out. Ready to Shanghai upgrade for Ethereum mainnet
    1 project | /r/ethstaker | 26 Mar 2023
  • Erigon v2.40.1 released
    1 project | /r/ethstaker | 7 Mar 2023

What are some alternatives?

When comparing graph-node and turbo-geth you can also consider the following projects:

hardhat - Hardhat is a development environment to compile, deploy, test, and debug your Ethereum software.

besu - An enterprise-grade Java-based, Apache 2.0 licensed Ethereum client https://wiki.hyperledger.org/display/besu

ipfs - Peer-to-peer hypermedia protocol

protocols - A zkRollup DEX & Payment Protocol

chainlink - node of the decentralized oracle network, bridging on and off-chain computation

awesome-solidity - ⟠ A curated list of awesome Solidity resources, libraries, tools and more

brownie - A Python-based development and testing framework for smart contracts targeting the Ethereum Virtual Machine.

ethereum2-docker-compose - Run different kinds of Ethereum 2 staking nodes with monitoring tools and your own Ethereum 1 node out of the box!

scaffold-eth - 🏗 forkable Ethereum dev stack focused on fast product iterations [Moved to: https://github.com/scaffold-eth/scaffold-eth]

go-ethereum - Official Go implementation of the Ethereum protocol

arwes - Futuristic Sci-Fi UI Web Framework.

yearn-protocol - Yearn smart contracts