Static torrent website with peer-to-peer queries over BitTorrent on 2M records

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • ipfs

    Peer-to-peer hypermedia protocol

  • It is what people have been referring to as "Web3", yes: https://ipfs.io/#how

  • p2psearch

    Despite the downvotes, there is some substance to this: mutability is useful.

    However, as implemented, all that is needed is for the user of p2psearch to refresh and for the browser to pick up the latest database. I imagine most users are not keeping torrent search open 24/7, so this doesn't seem onerous.

    It's probably a bit of a process for the host of the frontend to regularly update the database, prepare a new torrent, update the code [0], and rebuild the bundle, but this could be automated (a sketch of such automation follows this comment).

    Regardless, it doesn't seem so unreasonable from an end-user perspective, and I personally don't mind if my torrent search index is a few days behind.

    [0] https://gitlab.com/boredcaveman/p2psearch/-/blob/main/src/Me...
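    For illustration, here is a minimal sketch of what that automation could look like as a Node script. The real update flow lives in the file linked at [0]; the paths (db/index.sqlite), the INDEX_INFOHASH constant, and the npm run build step below are hypothetical stand-ins, not taken from the p2psearch repo. create-torrent and parse-torrent are the real WebTorrent packages.

    ```ts
    // Hypothetical refresh script: build a torrent for the updated database,
    // patch the new infohash into the frontend source, then rebuild the bundle.
    import { readFileSync, writeFileSync } from 'node:fs'
    import { execSync } from 'node:child_process'
    import { promisify } from 'node:util'
    import createTorrent from 'create-torrent'
    import parseTorrent from 'parse-torrent'

    // 1. Create a .torrent for the freshly updated database dump (hypothetical path)
    const torrentFile = await promisify(createTorrent)('db/index.sqlite')
    writeFileSync('db/index.sqlite.torrent', torrentFile)

    // 2. Extract the new infohash (parse-torrent is async in recent versions)
    const { infoHash } = await parseTorrent(torrentFile)

    // 3. Patch the hard-coded infohash in the frontend (hypothetical constant name)
    const src = readFileSync('src/config.ts', 'utf8')
    writeFileSync(
      'src/config.ts',
      src.replace(/INDEX_INFOHASH = '[0-9a-f]{40}'/, `INDEX_INFOHASH = '${infoHash}'`)
    )

    // 4. Rebuild the bundle; the host keeps seeding the new torrent separately
    execSync('npm run build', { stdio: 'inherit' })
    ```

    Run something like this on a cron schedule and the frontend always ships with a reasonably fresh infohash, while already-loaded clients keep working until they refresh.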

  • ens

    (Discontinued) Implementations for ENS core functionality: the registry, registrars, and public resolvers.

  • ENS [https://ens.domains/] names can point to IPFS addresses. https://jamescarnley.eth (use https://jamescarnley.eth.link if your browser doesn't support IPFS) is my IPFS-powered website that can't be taken down by anyone. A minimal resolution sketch follows.
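    Since the ENS-to-IPFS hop is the interesting part here, a minimal sketch of resolving a name's contenthash with ethers.js (a real library; the RPC endpoint below is an assumption):

    ```ts
    // Resolve an ENS name to the EIP-1577 contenthash record it points at,
    // e.g. an "ipfs://..." URI that a gateway or IPFS-aware browser can fetch.
    import { ethers } from 'ethers'

    const provider = new ethers.JsonRpcProvider('https://eth.llamarpc.com') // any mainnet RPC
    const resolver = await provider.getResolver('jamescarnley.eth')
    console.log(await resolver?.getContentHash()) // e.g. "ipfs://Qm..."
    ```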

  • grenache

    DHT based high-performance microservices framework, by Bitfinex

    I'm not talking about the consensus protocol of the blockchain itself, but about the p2p algorithms underlying it, e.g. using Kademlia for service discovery and message routing. I'm asking why a distributed system would choose something like Consul (which uses Raft, and requires a coordinator node) instead of running a decentralized protocol like Kademlia (which has no coordinator nodes) within its distributed single-tenant environment.

    I did a bit more research last night and discovered that Bitfinex actually does something like this internally (anyone know if this is up to date?) [0]: they built a service discovery mesh by storing arbitrary data on a DHT implementing BEP44, using webtorrent/bittorrent-dht [1] (a minimal sketch of the pattern follows this comment).

    This seems pretty cool to me, and IMO any modern distributed system should consider running decentralized protocols to benefit from their robustness properties. Deploying a node to a decentralized protocol requires no coordination or orchestration beyond simply joining the network. Scaling a service is as simple as joining a node to the network and announcing that it offers an implementation of that service.

    At first glance, this looks like a competitive advantage, because it decouples the operational and maintenance costs of the network from the size of the network.

    So I'm wondering if there is a consistent tradeoff in exchange for this robustness. Are decentralized applications more complex to implement but simpler to operate? Is the latency of decentralized protocols (e.g. the average number of hops to look up an item in a DHT) untenably higher than that of distributed protocols (e.g. one hop to get instructions from the coordinator, then one hop to look up the item in a distributed KV store)? Does a central coordinator eliminate some kind of principal-agent problem, resulting in e.g. a more balanced usage of the hashing keyspace?

    Decentralization emerged because distributed solutions fail in untrusted environments, but this doesn't mean that decentralized solutions fail in trusted environments. So why not consider more decentralized protocols to scale internal systems?

    [0] https://github.com/bitfinexcom/grenache

    [1] https://github.com/webtorrent/bittorrent-dht
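    For concreteness, here is a minimal sketch of that announce/lookup pattern on top of the real webtorrent/bittorrent-dht package [1]. The service name, ports, and the "sha1 of the service name as rendezvous key" convention are illustrative assumptions, not Grenache's actual scheme.

    ```ts
    // Decentralized service discovery on the BitTorrent DHT: a node announces
    // itself under a well-known key, and any peer can look up providers by that key.
    import { createHash } from 'node:crypto'
    import { createRequire } from 'node:module'
    const require = createRequire(import.meta.url)
    const DHT = require('bittorrent-dht') // real package; ships no type definitions

    // Hypothetical convention: 20-byte rendezvous key = sha1(service name)
    const serviceKey = (name: string): Buffer =>
      createHash('sha1').update(name).digest()

    const dht = new DHT()
    dht.listen(20001)
    dht.on('ready', () => {
      const key = serviceKey('users-rpc') // hypothetical service name

      // Provider side: announce that this host serves users-rpc on port 8080
      dht.announce(key, 8080, (err: Error | null) => {
        if (err) throw err

        // Consumer side: every peer announcing the same key shows up here
        dht.on('peer', (peer: { host: string; port: number }) => {
          console.log('users-rpc instance at %s:%d', peer.host, peer.port)
        })
        dht.lookup(key)
      })

      // BEP44 is the "arbitrary data" half: immutable put/get, keyed by sha1(value)
      const record = Buffer.from(JSON.stringify({ host: '10.0.0.5', port: 8080 }))
      dht.put({ v: record }, (err: Error | null, hash: Buffer) => {
        if (err) throw err
        dht.get(hash, (err2: Error | null, res: { v: Buffer }) => {
          if (!err2) console.log('stored record:', JSON.parse(res.v.toString()))
        })
      })
    })
    ```

    On the latency question raised above: a Kademlia lookup contacts O(log n) nodes, so even a 10,000-node network is roughly log₂ 10000 ≈ 13 hops in the worst case, versus a constant two via a coordinator; in practice parallel queries and routing-table caching bring the average well below that.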

  • bittorrent-dht

    🕸 Simple, robust, BitTorrent DHT implementation


NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives; hence, a higher number means a more popular project.
