> automated DNS-based is that it doesn't work with all domain registrars
https://acme.sh automates dns-01 challenges with more than 100 registrars and DNS providers. And helpfully, you can move your registrations and nameservers between them.
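A sketch of what that looks like with acme.sh (assuming the Cloudflare hook, `dns_cf`; the domain, token, and file paths are placeholders, and each provider hook expects its own environment variables):

```shell
# Credentials for the DNS provider's API (Cloudflare shown; acme.sh saves them after first use)
export CF_Token="placeholder-api-token"

# Issue a certificate via the dns-01 challenge, including a wildcard
acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'

# Install the resulting cert where the web server expects it, and reload on renewal
acme.sh --install-cert -d example.com \
  --key-file /etc/ssl/example.com.key \
  --fullchain-file /etc/ssl/example.com.pem \
  --reloadcmd "systemctl reload nginx"
```

Because the challenge is answered in DNS rather than over HTTP, this works for wildcard certificates and for hosts that aren't reachable from the internet.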
-
> As a relatively young dev, the idea of a "web server" as a standalone binary that serves your application (vs a library that you use to write your own "server") feels strange.
In my eyes, the ideal setup is layered: you have an ingress that's essentially a load balancer, which also manages SSL/TLS certificates, enforces rate limits, perhaps handles some very basic logging, and can optionally do any URL rewriting you need. Personally, I think Caddy (https://caddyserver.com/) is lovely for this, while some people prefer Traefik (https://traefik.io/), though older packages like Nginx (https://nginx.org/en/) or even Apache (https://www.apache.org/) are good too, as long as the pattern itself is in place.
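As a sketch of that ingress layer, a minimal Caddyfile (hostname and upstream port are placeholders) terminates TLS with automatically obtained certificates and proxies to the application:

```
example.com {
	# Caddy obtains and renews a certificate for this hostname automatically
	reverse_proxy 127.0.0.1:8080

	# Basic access logging
	log
}
```

URL rewriting would slot in here as well (via Caddy's `rewrite` directive); rate limiting needs a plugin.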
Then you may add any middleware you need, such as a service mesh for service discovery or for internal SSL/TLS. Personally, Docker Swarm (https://docs.docker.com/engine/swarm/) overlay networks have always been enough for me here, though some people prefer other solutions, such as HashiCorp Consul (https://www.consul.io/), or something built for a platform you may already be using, like Linkerd (https://linkerd.io/) on Kubernetes.
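For the Swarm route, the setup is just a couple of commands (the network and image names are placeholders):

```shell
# On a Swarm manager: create an attachable overlay network;
# --opt encrypted additionally encrypts traffic between nodes
docker network create --driver overlay --attachable --opt encrypted app-net

# Services attached to the network resolve each other by service name
# via Swarm's built-in DNS, which covers basic service discovery
docker service create --name api --network app-net my-api-image:latest
```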
Finally, you have your actual application with its server. Personally, I think the web server should be embedded (for example, embedded Tomcat with Spring Boot) or simply be a library that's part of the application executable, as long as you can update it easily enough by rebuilding the application. Containers are good for this but aren't strictly necessary, since other forms of automation and packaging are sometimes enough.
I believe this because I've seen plenty of deployments where that just isn't the case:
- attempts to store certificates inside the application, with each application server having different requirements for the formats used, making management (and automation) of renewal a total nightmare
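The embedded-server idea from the last layer can be sketched in a few lines; here with Python's standard library (names are illustrative), where the "web server" is just a class packaged into the same executable as the application, and the ingress in front of it handles TLS:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """The application's HTTP handling, provided by a library, not a separate server."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging for this sketch

def serve_and_probe():
    # Bind an ephemeral port and serve from a background thread
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]

    # The same process can exercise its own endpoint
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
        body = resp.read().decode()
    server.shutdown()
    return body

if __name__ == "__main__":
    print(serve_and_probe())  # ok
```

Updating the server then means rebuilding and redeploying one artifact, which is exactly what containers (or any other packaging automation) make routine.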