This way, Caddy will buffer the request and give your new service 30 seconds to get online when you're deploying a new version.
Ideally, during deployment the new version should go live and pass health checks before Caddy starts using it (and the old container is killed). I've looked at https://github.com/Wowu/docker-rollout and https://github.com/lucaslorentz/caddy-docker-proxy but haven't had time to prioritize it yet.
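For reference, that retry window can be expressed with Caddy's load-balancing retry settings in a minimal Caddyfile sketch like this (the site name and upstream address are placeholders):

```
example.com {
	reverse_proxy app:8000 {
		# keep retrying the upstream for up to 30s while the new container starts
		lb_try_duration 30s
		lb_try_interval 250ms
	}
}
```

While `lb_try_duration` is in effect, Caddy holds the buffered request and keeps retrying the upstream instead of immediately returning a 502.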
I have a similar setup for my personal and project websites. Some similarities and differences:
* I use Linode VMs ($5/month).
* I too use Debian GNU/Linux.
* The initial configuration of the VM is coded as a shell script: https://github.com/susam/dotfiles/blob/main/linode.sh
* Project-specific or service-specific configuration is coded as individual Makefiles. An example: https://github.com/susam/susam.net/blob/main/Makefile
* The software is written in Common Lisp. For a personal website or blog, a static website is generated by a Common Lisp program. For an online service or web application, the service is written as a Common Lisp program that uses Hunchentoot to process HTTP requests and return HTTP responses.
* I use Nginx too. It serves the static files and also functions as a reverse proxy when backend services are involved. TLS termination is indeed an important benefit it offers; others include rate limiting requests, configuring an allowlist of HTTP headers to protect the backend service, etc.
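To illustrate that last point, a trimmed-down nginx server block in this spirit might look like the following sketch (the domain, paths, and backend port are made up, and a real config would also need the TLS `listen`/certificate directives):

```
# requests/sec limit keyed on client IP (shared 10 MB zone)
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    server_name example.org;
    root /var/www/example.org;          # static files generated by the Lisp program

    location / {
        try_files $uri $uri/ =404;      # serve the static site directly
    }

    location /app/ {
        limit_req zone=perip burst=20 nodelay;
        # forward only an allowlist of headers to the Hunchentoot backend
        proxy_pass_request_headers off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:4242;
    }
}
```

Turning off `proxy_pass_request_headers` and re-adding only the headers you trust is one way to implement the header allowlist mentioned above.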
The website with source: https://github.com/PetrKubes97/ts-neural-network
You can include encrypted secrets and deploy the key out of band. Our open source solution for this (cross-platform, cross-language): https://neosmart.net/blog/securestore-open-secrets-format/
Eg this is the rust version on GitHub: https://github.com/neosmart/securestore-rs/tree/master
A pretty similar setup with a bunch of differences:
1. All apps share a single PostgreSQL instance running on a separate server; each app connects with its own DB user.
2. I use a MinIO instance for file/media uploads and serving.
3. I mostly use nginx, but I'm transitioning new apps to Caddy because of its automatic Let's Encrypt integration and much smaller config for common cases.
4. I use a fab-classic (Fabric 1.x) script to deploy new versions: https://github.com/spapas/etsd/blob/master/fabfile.py
5. For backups I do a logical DB dump once per day via cron (using a script similar to this: https://spapas.github.io/2016/11/02/postgresql-backup/)
6. One memcached instance shared by all apps.
7. Each app gets its own Redis instance (if Redis is needed): https://gist.github.com/akhdaniel/04e4bb2df76ef534b0cb982c1d...
8. Use systemd for app control.
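For item 8, a minimal systemd unit along these lines works for most apps (the service name, user, paths, and the ExecStart command here are hypothetical):

```
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp web service
After=network.target postgresql.service

[Service]
User=myapp
WorkingDirectory=/srv/myapp
ExecStart=/srv/myapp/venv/bin/gunicorn myapp.wsgi:application --bind 127.0.0.1:8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now myapp` starts it and keeps it running across reboots, and `Restart=on-failure` gives you basic crash recovery for free.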
Since we're all sharing our favorite simplest solutions, I guess I'll throw https://fly.io out there. The DX is far from perfect right now so this answer is _ever-so-slightly theoretical_ but, if you know how to use Docker, `fly launch` is extremely hard to beat.
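For the curious, the happy path is roughly this (assuming `flyctl` is installed and the current directory contains a Dockerfile):

```
fly launch   # detects the Dockerfile, generates fly.toml, creates the app
fly deploy   # builds the image and rolls out a new version
```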