What Is a Service Mesh?

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • mod_auth_openidc

    OpenID Certified™ OpenID Connect Relying Party implementation for Apache HTTP Server 2.x

  • More information: https://docs.docker.com/network/

    > Load balancing

    The above will also distribute traffic across however many instances you have running, on whichever nodes they happen to be scheduled. Throw in health checks (such as the container running curl against itself, to check that the API/web interface is available at startup as well as periodically during operation) so that no traffic gets routed to an instance before your application can actually receive requests, and you're good for the most part: https://docs.docker.com/engine/swarm/services/#publish-ports
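    For illustration, a minimal Compose/stack file sketch of this (the image name and the /health endpoint are placeholders, not anything from the original post) could look roughly like:

    ```yaml
    # Hypothetical stack file: 3 replicas behind the swarm routing mesh,
    # with a curl-based health check so unhealthy containers get no traffic.
    services:
      web:
        image: my-web-app:latest          # placeholder image
        ports:
          - "8080:80"                     # published via the routing mesh
        deploy:
          replicas: 3                     # requests are spread across replicas
        healthcheck:
          test: ["CMD", "curl", "-f", "http://localhost/health"]
          interval: 30s
          timeout: 5s
          retries: 3
          start_period: 15s               # grace period while the app boots
    ```

    Deployed with something like `docker stack deploy -c docker-compose.yml mystack`, Swarm only routes requests to tasks whose health check passes.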

    > TLS encryption

    Let's Encrypt, as well as your own custom certificates, is supported by most web servers out there rather easily; even Apache now has mod_md for automating this: https://httpd.apache.org/docs/trunk/mod/mod_md.html
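    As a rough sketch (assuming Apache 2.4.30+ with the relevant modules available; the domain names are placeholders), the mod_md side of it is only a few lines:

    ```apache
    # mod_md needs mod_watchdog; mod_ssl serves the obtained certificate.
    LoadModule watchdog_module modules/mod_watchdog.so
    LoadModule md_module modules/mod_md.so
    LoadModule ssl_module modules/mod_ssl.so

    MDomain example.com www.example.com
    MDCertificateAgreement accepted       # accept the CA's (Let's Encrypt) terms

    <VirtualHost *:443>
        ServerName example.com
        SSLEngine on                      # certificate is managed by mod_md
    </VirtualHost>
    ```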

    Also, if you want, you can encrypt the network traffic between the nodes as well and not worry about having to manage the internal certificates manually either: https://docs.docker.com/engine/swarm/networking/#customize-a...
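    The encrypted overlay itself is a one-liner (the network name is a placeholder):

    ```sh
    # Data-plane traffic between swarm nodes on this network is encrypted (IPsec).
    docker network create --driver overlay --opt encrypted backend
    ```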

    > Authentication and authorization

    Once again, web servers are pretty good at this: you can configure most forms of auth easily, and even the aforementioned Apache now has mod_auth_openidc, which supports OpenID Connect, so you can configure it as a Relying Party and not worry as much about letting your applications manage that themselves (given that if you have 5 different tech stacks running, you'd need 5 separate bits of configuration and libraries for that): https://github.com/zmartzone/mod_auth_openidc
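    A sketch of what that looks like with mod_auth_openidc (the provider URL, client credentials and paths are placeholders):

    ```apache
    LoadModule auth_openidc_module modules/mod_auth_openidc.so

    # Discover the provider's endpoints from its metadata document.
    OIDCProviderMetadataURL https://idp.example.com/.well-known/openid-configuration
    OIDCClientID my-client-id
    OIDCClientSecret my-client-secret
    OIDCRedirectURI https://app.example.com/oidc/redirect_uri
    OIDCCryptoPassphrase change-me-to-something-random

    # Everything below / now requires a logged-in user; claims are passed to
    # the backend as OIDC_CLAIM_* headers/environment variables.
    <Location "/">
        AuthType openid-connect
        Require valid-user
    </Location>
    ```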

    > Metrics aggregation, such as request throughput and response time, Distributed tracing

    This might be a little bit more tricky! The old Apache exposes its server status through a handler that you can configure (see a live example here: https://issues.apache.org/server-status ), thanks to mod_status: https://httpd.apache.org/docs/2.4/mod/mod_status.html and there's similar output for the ACME certificate status as well, which you can configure. The logs also contain metrics about the requests, which once again are configurable.
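    Enabling that status page is a small bit of configuration (the allowed IP range below is a placeholder), and the ?auto variant of the page is easy to scrape from an external metrics system:

    ```apache
    LoadModule status_module modules/mod_status.so

    ExtendedStatus On                     # include per-request details
    <Location "/server-status">
        SetHandler server-status
        Require ip 10.0.0.0/8             # keep it internal
    </Location>
    ```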

    Other web servers might give you more functionality in that regard (or you might shop around for Apache modules); Traefik, Caddy, as well as Nginx Proxy Manager might all be good choices, both when you're looking to hook up something external to aggregate the metrics with minimal work, or when you want a dashboard of some sort, for example: https://doc.traefik.io/traefik/operations/dashboard/
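    For Traefik specifically, the dashboard and Prometheus metrics are just static configuration; a minimal sketch (the "insecure" flag is only sensible for local testing):

    ```yaml
    api:
      dashboard: true
      insecure: true          # serves the dashboard on :8080 without auth
    metrics:
      prometheus: {}          # expose request metrics for scraping
    entryPoints:
      web:
        address: ":80"
    ```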

    > Rate limiting

    In Apache, it's a bit more troublesome (other servers do this better most of the time), depending on which approach you use, but something basic isn't too hard to set up: https://httpd.apache.org/docs/2.4/mod/mod_ratelimit.html
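    A basic sketch (the path and numbers are placeholders); note that mod_ratelimit caps bandwidth per response rather than requests per second, which is part of why Apache feels limited here:

    ```apache
    LoadModule ratelimit_module modules/mod_ratelimit.so

    <Location "/downloads">
        SetOutputFilter RATE_LIMIT
        SetEnv rate-limit 400             # ~400 KiB/s per response
        SetEnv rate-initial-burst 1024    # let the first ~1 MiB through at full speed
    </Location>
    ```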

    > Routing and traffic management, Traffic splitting, Request retries

    I'm grouping these together, because what people expect from this sort of functionality might vary a lot. You can get most of the basic stuff out of most web servers, which will be enough for the majority of the projects out there.

    Something like blue/green deployments, A/B testing or circuit breaking logic is possible with a bit more work, but here I'll concede that for the more advanced setups out there something like Istio and Kiali would be a better solution. Then again, those setups won't be the majority of the ones out there.
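    That said, a simple weighted split (a rough stand-in for canary-style traffic splitting, with placeholder backend addresses) is doable with plain mod_proxy_balancer:

    ```apache
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
    LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so

    # Roughly 90% of requests go to the stable backend, 10% to the canary.
    <Proxy "balancer://app">
        BalancerMember "http://10.0.0.10:8080" loadfactor=90
        BalancerMember "http://10.0.0.11:8080" loadfactor=10
        ProxySet lbmethod=byrequests
    </Proxy>

    ProxyPass        "/" "balancer://app/"
    ProxyPassReverse "/" "balancer://app/"
    ```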

    > Error handling

    It depends on what you want to do here, but custom error pages (or handlers), as well as routing logic or checking for the presence of resources, isn't too hard and has been done for years.
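    For example (the paths are placeholders):

    ```apache
    # Custom error pages for missing resources and backend outages.
    ErrorDocument 404 /errors/not-found.html
    ErrorDocument 503 /errors/maintenance.html

    # Serve a fallback document when the requested file doesn't exist
    # (handy for single-page apps).
    FallbackResource /index.html
    ```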

    But what's my point here? Should everyone abandon using Linkerd or Istio? Not at all! I'm just saying that even with lightweight technologies and simpler tech stacks, having an ingress as well as something that covers most of what a service mesh would (e.g. the aforementioned Docker overlay networking, or similar solutions) can be immensely useful.

    After putting Nginx in front of many of the services for projects at work, path rewriting and handling special rules for certain apps have become way easier, and certificate management is a breeze, since it can be done with Ansible against a single type of service. The same goes for things like client certs or OIDC (though admittedly, that's mostly on my homelab, with Apache).
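    The Nginx-in-front pattern is mostly just location blocks like this (addresses and paths are placeholders):

    ```nginx
    location /app1/ {
        proxy_pass http://10.0.0.20:8080/;   # trailing slash strips the /app1/ prefix
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    ```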

    Once you actually grow past that, or have your entire business built on Kubernetes, then feel free to adopt whatever solutions you deem necessary! But don't shy away from things like this even when you have <10 applications running in about as many containers (or when you have some horizontal scalability across your nodes).

