It is what people have been referring to as "Web3", yes: https://ipfs.io/#how
-
Despite the downvotes, there is some substance to this: mutability is useful.
However, as implemented, all that is needed is for the user of p2psearch to refresh the page and for the browser to pick up the latest database. I imagine most users are not keeping a torrent search open 24/7, so this doesn't seem onerous.
It's probably a bit of a process for the host of the frontend to regularly update the database, prepare a new torrent, update the code [0], and rebuild the bundle, but this could be automated (a rough sketch of such a pipeline is below).
Regardless, it doesn't seem unreasonable from an end-user perspective, and I personally don't mind if my torrent search index is a few days behind.
[0] https://gitlab.com/boredcaveman/p2psearch/-/blob/main/src/Me...
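To make the "could be automated" part concrete, here is a minimal sketch of such a nightly pipeline. Every path, helper script, and the assumption that a single magnet URI lives in a config file are hypothetical, not p2psearch's actual layout:

    # Hypothetical nightly refresh for a p2psearch-style frontend.
    # Paths, script names, and the config-file convention are assumptions
    # for illustration, not the project's real structure.
    import re
    import subprocess
    from pathlib import Path

    REPO = Path("p2psearch")                  # local clone of the frontend
    CONFIG = REPO / "src" / "config.js"       # hypothetical file holding the magnet link

    def update_magnet(magnet: str) -> None:
        """Swap the magnet URI baked into the frontend for the new one."""
        text = CONFIG.read_text()
        CONFIG.write_text(re.sub(r"magnet:\?[^\"']+", magnet, text, count=1))

    def main() -> None:
        # 1. Rebuild the search database from whatever upstream dump feeds it.
        subprocess.run(["python", "scripts/build_db.py"], cwd=REPO, check=True)
        # 2. Create and seed a torrent of the new database; assume the helper
        #    prints the resulting magnet URI on stdout.
        magnet = subprocess.run(
            ["./scripts/seed_db.sh"], cwd=REPO, check=True,
            capture_output=True, text=True,
        ).stdout.strip()
        # 3. Point the frontend at the new torrent and rebuild the bundle.
        update_magnet(magnet)
        subprocess.run(["npm", "run", "build"], cwd=REPO, check=True)

    if __name__ == "__main__":
        main()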
-
ENS [https://ens.domains/] names can point to IPFS addresses. https://jamescarnley.eth (use https://jamescarnley.eth.link if your browser doesn't support IPFS) is my IPFS-powered website, which can't be taken down by anyone.
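For anyone curious how that resolution works, here is a rough sketch of looking up the contenthash record behind an ENS name, following EIP-137 (namehash/registry) and EIP-1577 (contenthash). It assumes web3.py v6 and a placeholder RPC endpoint; an IPFS-enabled browser fetches the resulting CID from the IPFS network, while the .eth.link gateway serves the same content over plain HTTPS:

    # Sketch of resolving an ENS name to its IPFS contenthash (web3.py v6 assumed).
    # The RPC URL is a placeholder; the registry address and minimal ABIs follow
    # EIP-137 and EIP-1577.
    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider("https://eth-rpc.example.com"))  # placeholder endpoint

    ENS_REGISTRY = Web3.to_checksum_address("0x00000000000c2e074ec69a0dfb2997ba6c7d2e1e")
    REGISTRY_ABI = [{"name": "resolver", "type": "function", "stateMutability": "view",
                     "inputs": [{"name": "node", "type": "bytes32"}],
                     "outputs": [{"name": "", "type": "address"}]}]
    RESOLVER_ABI = [{"name": "contenthash", "type": "function", "stateMutability": "view",
                     "inputs": [{"name": "node", "type": "bytes32"}],
                     "outputs": [{"name": "", "type": "bytes"}]}]

    def namehash(name: str) -> bytes:
        """EIP-137 namehash: fold each label, right to left, onto a zero node."""
        node = b"\x00" * 32
        for label in reversed(name.split(".")):
            node = Web3.keccak(node + Web3.keccak(text=label))
        return node

    def contenthash(name: str) -> bytes:
        node = namehash(name)
        registry = w3.eth.contract(address=ENS_REGISTRY, abi=REGISTRY_ABI)
        resolver_addr = registry.functions.resolver(node).call()
        resolver = w3.eth.contract(address=resolver_addr, abi=RESOLVER_ABI)
        return resolver.functions.contenthash(node).call()

    # The returned bytes are a multicodec-prefixed IPFS CID; an IPFS-enabled
    # browser fetches that CID directly, while *.eth.link proxies it over HTTP.
    print(contenthash("jamescarnley.eth").hex())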
-
I'm not talking about the consensus protocol of the blockchain itself, but about the p2p algorithms underlying it, e.g. using Kademlia for service discovery and message routing. I'm asking why a distributed system would choose something like Consul (which uses Raft, and requires a coordinator node) instead of running a decentralized protocol like Kademlia (which has no coordinator nodes) within their distributed single-tenant environment.
I did a bit more research last night, and discovered that Bitfinex actually does something like this internally (anyone know if this is up to date?) [0] — they built a service discovery mesh by storing arbitrary data on a DHT implementing BEP44 (using webtorrent/bittorrent-dht [1]).
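As a sketch of what BEP44 actually stores, here is the mutable-item signing and target derivation from that spec. The networking part (the actual DHT put) would go through a library such as the bittorrent-dht package referenced above; this is just Python with PyNaCl to illustrate the data format, and the payload is toy data:

    # BEP44 mutable item: the value is signed with an ed25519 key, and the DHT
    # target (the key the item is stored under) is SHA-1 of the public key.
    import hashlib
    from nacl.signing import SigningKey

    def signed_payload(v: bytes, seq: int) -> bytes:
        # BEP44 signs the bencoded "seq" and "v" fields (no-salt case shown),
        # e.g. b"3:seqi1e1:v12:Hello World!".
        return b"3:seqi%de1:v%d:%s" % (seq, len(v), v)

    signing_key = SigningKey.generate()            # the publisher's long-lived key
    public_key = bytes(signing_key.verify_key)     # 32 bytes, shared with readers

    value = b'{"service": "search", "hosts": ["10.0.0.5:4000"]}'  # arbitrary payload
    seq = 1                                        # bump on every update

    signature = signing_key.sign(signed_payload(value, seq)).signature  # 64-byte sig
    target = hashlib.sha1(public_key).digest()     # where the DHT stores the item

    # A BEP44 put sends {k: public_key, seq: seq, v: value, sig: signature} to the
    # nodes closest to `target`; readers look up `target` and verify `sig`.
    print(target.hex())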
This seems pretty cool to me, and IMO any modern distributed system should consider running decentralized protocols to benefit from their robustness properties. Deploying a node to a decentralized protocol requires no coordination or orchestration beyond simply joining the network. Scaling a service is as simple as joining another node to the network and announcing that it implements that service.
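To make the announce/lookup idea concrete, here is a toy in-memory model of that kind of service discovery: no networking, no k-buckets, just the XOR metric and the "store at the nodes closest to hash(service)" rule that real Kademlia implementations build on:

    # Toy model of DHT-based service discovery: a service announces itself under
    # key = SHA-1(service name), stored on the K nodes whose ids are XOR-closest
    # to that key; a lookup asks those same K nodes. Real Kademlia adds k-buckets,
    # iterative routing, replication, and expiry on top of this.
    import hashlib
    import random

    K = 3  # replication factor (k closest nodes hold each record)

    def sha1_int(data: bytes) -> int:
        return int.from_bytes(hashlib.sha1(data).digest(), "big")

    class Node:
        def __init__(self) -> None:
            self.node_id = random.getrandbits(160)
            self.store: dict[int, set[str]] = {}

    nodes = [Node() for _ in range(50)]          # "joining the network" is just this

    def closest(key: int) -> list[Node]:
        return sorted(nodes, key=lambda n: n.node_id ^ key)[:K]

    def announce(service: str, address: str) -> None:
        key = sha1_int(service.encode())
        for node in closest(key):
            node.store.setdefault(key, set()).add(address)

    def lookup(service: str) -> set[str]:
        key = sha1_int(service.encode())
        found: set[str] = set()
        for node in closest(key):
            found |= node.store.get(key, set())
        return found

    announce("search-api", "10.0.0.5:4000")
    announce("search-api", "10.0.0.6:4000")      # scaling out = announce another instance
    print(lookup("search-api"))                  # -> {'10.0.0.5:4000', '10.0.0.6:4000'}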
At first glance, this looks like a competitive advantage, because it decouples the operational and maintenance costs of the network from the size of the network.
So I'm wondering if there is a consistent tradeoff in exchange for this robustness — are decentralized applications more complex to implement but simpler to operate? Is the latency of decentralized protocols (e.g. the average number of hops to look up an item in a DHT) untenably higher than that of distributed protocols (e.g. one hop to get instructions from the coordinator, then one hop to look up the item in a distributed KV store)? Does a central coordinator eliminate some kind of principal-agent problem, resulting in e.g. more balanced usage of the hashing keyspace?
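On the latency question, the usual back-of-the-envelope comparison is roughly log2(N) routing hops for a Kademlia lookup versus a constant two hops via a coordinator; for cluster-sized N the gap is real but modest, e.g.:

    # Rough upper bound on Kademlia lookup hops: each hop halves the remaining XOR
    # distance, so ~log2(N) hops worst case (parallel queries and wide buckets
    # usually do better in practice), versus two flat hops in a coordinator design.
    import math

    for n in (100, 10_000, 1_000_000):
        print(f"N={n:>9,}  dht_hops<=~{math.ceil(math.log2(n)):2d}  coordinator_hops=2")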
Decentralization emerged because distributed solutions fail in untrusted environments — but this doesn't mean that decentralized solutions fail in trusted environments. So why not consider more decentralized protocols to scale internal systems?