Ask HN: Solo-preneurs, how do you DevOps to save time?

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • flyctl

    Command line tools for fly.io services

  • https://fly.io

    - Easy Postgres clusters (but not managed)

  • sst

    Build modern full-stack applications on AWS

  • I'm starting to consider this myself and have been looking at https://serverless-stack.com/#guide as a way to prototype and build an MVP. The guide has quite a bit in it that I believe can be repurposed to that end, although it doesn't cover backups and the like; it's more about integrations with AWS services.

  • Dokku

    A docker-powered PaaS that helps you build and manage the lifecycle of applications

  • fieldbot-server

  • I've used this in the past https://github.com/piyiotisk/fieldbot-server/blob/master/.gi...

    Basically, building a Docker image and serving it using Docker Compose. Of course you'll need NGINX as a reverse proxy running on the server.

    If I do it again I'll use a service like Render. It's not worth managing this myself.
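    A minimal docker-compose sketch of that setup; the image, service names, and ports here are placeholders, not taken from the linked repo:

    ```yaml
    # App container behind an NGINX reverse proxy; names are illustrative.
    services:
      app:
        image: registry.example.com/fieldbot-server:latest
        expose:
          - "8080"
      nginx:
        image: nginx:alpine
        ports:
          - "80:80"
        volumes:
          # nginx.conf proxies port 80 to app:8080
          - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
        depends_on:
          - app
    ```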

  • golang-samples

    Sample apps and code written for Google Cloud in the Go programming language.

  • Choose a platform like App Engine. It auto-scales, supports Bigtable and OpenTelemetry, and is manageable from anywhere via Cloud Shell. You can run multiple instances and partition load between them. It even includes a free tier ;)

    https://github.com/GoogleCloudPlatform/golang-samples
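    For reference, a minimal App Engine `app.yaml` along these lines; the runtime version is illustrative, and scaling is automatic by default:

    ```yaml
    # Minimal App Engine standard-environment config for a Go service.
    runtime: go121
    automatic_scaling:
      max_instances: 2   # cap costs while still auto-scaling
    ```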

  • datastation

    App to easily query, script, and visualize data from every database, file, and API.

  • I'm building an open-core data IDE that runs as a desktop application or server. Since it's open source, all tests run on GitHub Actions for free. This includes basic e2e testing using Selenium on Windows, macOS, and Linux (e.g. [0]), as well as unit/integration tests.

    If it were a private repo I'd most likely still shell out for GitHub Actions or CircleCI. I'd also consider buying a chunky-enough mini PC for ~$500 and an older Mac mini and setting up runners on them.

    For the moment private runners aren't a problem. But soon I'll need to start integration-testing proprietary code paths, like querying Oracle or MS SQL Server. In that case I'll probably need to set up a dedicated box with all the right licenses so I can run CI jobs on it.

    [0] https://github.com/multiprocessio/datastation/blob/master/.g...

  • Grafana

    The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.

  • > CI

    GitHub Actions

    > deployments/rollbacks

    Docker. Scaleway offers a container registry that's ridiculously cheap[1]. Deployments are infrequent and executed manually.

    > DBs

    Again, Scaleway's managed RDS[2].

    Outside of these, we have set up Grafana + Loki cloud[3] for monitoring and alerting. They have a generous free plan. For easy product analytics that can be derived from the database, we run a self-hosted instance of Metabase[4].

    [1]: https://www.scaleway.com/en/container-registry/

    [2]: https://www.scaleway.com/en/database/

    [3]: https://grafana.com/

    [4]: https://www.metabase.com/

  • action-hosting-deploy

    Automatically deploy shareable previews for your Firebase Hosting sites

  • Lambdas and Firebase on the GCP stack for CRUD apps.

    One nice thing about Firebase: each PR deploys to its own preview channel[1].

    Downside: very JS-heavy. I write my lambdas in Python, though.

    [1] https://firebase.google.com/docs/hosting/github-integration
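    A hedged sketch of that per-PR preview deploy wired up with action-hosting-deploy; the secret and project names are placeholders:

    ```yaml
    name: preview
    on: pull_request
    jobs:
      preview:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Deploys each PR to its own Firebase Hosting preview channel.
          - uses: FirebaseExtended/action-hosting-deploy@v0
            with:
              repoToken: ${{ secrets.GITHUB_TOKEN }}
              firebaseServiceAccount: ${{ secrets.FIREBASE_SERVICE_ACCOUNT }}
              projectId: my-project   # placeholder
    ```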

  • nixpkgs

    Nix Packages collection & NixOS

  • I decided to take a few years off work to just build on what I'd like. Perhaps in a startup studio model, so I have a bias for having something that is easily reusable, and that uses tech someone else can pick up and run with easily. I'll probably be in the business of dev/infra tooling.

    Currently I'm going with a container image as the minimal deployable unit, put on top of a clean, up-to-date OS. For me that's created with a Dockerfile using Alpine image variants. In a way I could see someone's rsync as an OK equivalent, but if I went that route I'd use versioned, symlinked directories so I could easily roll back if necessary. Something like update-alternatives or UIUC Encap/Epkg: https://www.ks.uiuc.edu/Development/Computers/docs/sysadmin/.... Anyone remember that? I guess the modern version of Epkg with dependencies these days is https://docs.brew.sh/Homebrew-on-Linux. :-) Or maybe Nixpkgs: https://github.com/NixOS/nixpkgs?

    Deployment-wise, I've already done the Bash-script-writing thing to help a friend automate his deployment to an EC2 instance. For myself I was going to start using boto3, but just went ahead and learned Terraform instead. So now my scripts are just simple wrappers around Docker/Terraform that build, push, or deploy, and they work with AWS ECS Fargate or DigitalOcean Kubernetes.

    No CI/CD yet. DBs/backups I'll tackle next as I want to make sure I can install or failover to a new datacenter without much difficulty.

  • linuxbrew-core

    Discontinued 💀 Formerly the core formulae for the Homebrew package manager on Linux

  • mataroa

    Naked blogging platform

  • My web app is hosted on a server on Hetzner Cloud. I don't use Docker.

    For:

    * Database: PostgreSQL installed through apt in the same server: https://github.com/sirodoht/mataroa/blob/master/docs/server-...

    * Backups: MinIO-upload to an S3-compatible object storage: https://github.com/sirodoht/mataroa/blob/master/backup-datab...

    * CI: GitHub Actions + sr.ht builds: https://github.com/sirodoht/mataroa/blob/master/.github/work... + https://github.com/sirodoht/mataroa/blob/master/.build.yml

    * CD: (not exactly CD but...) ssh + git pull + uWSGI reload: https://github.com/sirodoht/mataroa/blob/master/deploy.sh

    * Rollbacks: git revert HEAD + ./deploy.sh

    * Architecture pattern: stick to the monolith; avoid deploying another service at all costs. E.g., we need to send multiple emails? Not Celery, since that would mean hosting Redis/RabbitMQ. We already have a database, so let's use that. We can also use Django management commands and cron: https://github.com/sirodoht/mataroa/blob/5bb46e05524d99c346c... + https://github.com/sirodoht/mataroa/blob/master/main/managem...
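    As a sketch, the cron half of that pattern might look like the line below; the schedule, paths, and management-command name are hypothetical, not taken from the linked files:

    ```
    # m h dom mon dow  command
    * * * * * cd /srv/mataroa && ./venv/bin/python manage.py process_notifications >> /var/log/mataroa-cron.log 2>&1
    ```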

  • rupy

    HTTP App. Server and JSON DB - Shared Parallel (Atomic) & Distributed

  • I made my own HTTP app. server and JSON database on top of that.

    The server accepts .jars with code (and files), so I can hot-deploy to the entire live cluster in real time while developing. My turnaround is about 1 second.

    The JSON database allows for schema-less simplicity, and it has all the features you need, like indexes and security (and then some, like being globally realtime-distributed while still performant), in 2,000 lines of code.

    I have zero pain developing the most scalable (and energy efficient) backend in the world, yet very few seem to care or use it: https://github.com/tinspin/rupy

    It has been proven on a real project with 5 years of uptime and 350,000 users: https://store.steampowered.com/app/486310/Meadow/

  • external-dns

    Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services

  • Via annotations on my ingresses, and of course with this: https://github.com/kubernetes-sigs/external-dns
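    A hedged sketch of that annotation pattern (hostname and service names are placeholders): external-dns watches Ingresses and creates the matching records in Route53, CloudDNS, etc.

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
      annotations:
        # external-dns reads this (or the rule host below) and creates the DNS record
        external-dns.alpha.kubernetes.io/hostname: app.example.com
    spec:
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web
                    port:
                      number: 80
    ```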

  • flux2

    Open and extensible continuous delivery solution for Kubernetes. Powered by GitOps Toolkit.

  • porter

    Kubernetes powered PaaS that runs in your own cloud.

  • dbmate

    🚀 A lightweight, framework-agnostic database migration tool.

  • I agree with most people here. Definitely keep it simple.

    With that said, I run a fairly resource-intensive operation, so I've invested a bit into DevOps to keep our costs down. My setup is currently on AWS, primarily using ECS, RDS, and ElastiCache. Infra is managed via Terraform.

    I felt ECS was a nice balance vs. K8s: it's much simpler to manage while still getting the benefit of maximizing resource utilization.

    For CI/deployment, I use GitHub Actions to build and push an image, then start a rolling refresh to update the containers to the new version. It was pretty easy to set up.

    On DBs, RDS handles all the backups and maintenance. For migrations, I use https://github.com/amacneil/dbmate.

    Happy to answer any other questions you have, as I've learned a lot through trial and error.
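    The dbmate migrations mentioned above are plain SQL files with up/down markers; a minimal example:

    ```sql
    -- migrate:up
    create table users (
      id bigserial primary key,
      email text not null unique
    );

    -- migrate:down
    drop table users;
    ```

    Applied with `dbmate up` and reverted with `dbmate rollback`.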

  • parsemail

    Hanami fork of https://github.com/DusanKasan/parsemail

  • - docker-compose to spin up everything. It's super nice. Again, the deployment is done with an `rsync`, then `docker-compose -f docker-compose-prod.yml up`

    Eventually, when deployments changed very frequently and I needed scaling/HA, I added Kubernetes. K8s is way easier to set up than you think, and it handles all the other stuff (load balancers, environment variables, etc.).

    And my deploy now becomes: `kubectl apply -f`

    One trick I used is `sed` or `envsubst` to replace the image hash.

    For backups, I again literally set up a cron job on an external server that `ssh`es into the database host and runs `pg_dump`.

    I also have a nice NFS server to centralize config and sync it back to our git repo.

    I used this whole setup to operate https://hanami.run, an email-forwarding service, for the first 3 months before I added Kubernetes.
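    The `sed` trick amounts to keeping a placeholder in the manifest and substituting the freshly built image tag at deploy time; a minimal sketch, where the file and placeholder names are hypothetical:

    ```shell
    # The manifest template holds a placeholder instead of a concrete tag.
    cat > /tmp/deployment.template.yml <<'EOF'
    image: registry.example.com/app:__IMAGE_TAG__
    EOF

    # Substitute the tag of the image just built; the result would then be
    # piped to `kubectl apply -f -`.
    TAG="sha-4f9c2d1"
    sed "s/__IMAGE_TAG__/${TAG}/" /tmp/deployment.template.yml > /tmp/deployment.yml
    cat /tmp/deployment.yml   # -> image: registry.example.com/app:sha-4f9c2d1
    ```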

  • laravel-backup

    A package to backup your Laravel app

  • In my case I'm dumping and zipping the entire database at the application level. It's as simple as adding a library [1], scheduling the job, and transferring the dump to AWS S3 (my main application is on DigitalOcean).

    [1] https://github.com/spatie/laravel-backup

  • core

    MetaCall: The ultimate polyglot programming experience. (by metacall)

  • I try to avoid any complicated tool and simplify my life with NoOps tools. Using Kubernetes or AWS from scratch is probably going to kill your startup.

    In my case, I have tried MetaCall: https://metacall.io

  • When it comes to managing your git repo's support for releases, you might like our alternative to git-flow, which we jokingly call "git-ebb":

    https://gitlab.com/northscaler-public/release-management

    It's a fairly low-tech set of shell scripts that implement a release management strategy that is based on one release branch per minor version. All it does is manage version strings, release commits, release branches & release tags. You can hook your CI/CD into it whenever you're ready for that.

    We've used it to great effect on many client projects.

    The workflow is pretty simple for a new release (assume a Node.js project in this example):

    0. Your main ("main", "master", "trunk", "dev") branch is where all new features go. Assume our next version is going to be "2.3.0", so the version in the main branch starts out at "2.3.0-pre.0". If you need dev prereleases, issue them any time you'd like with `./release nodejs pre`. This will bump the version to "2.3.0-pre.1", "2.3.0-pre.2", etc each time.

    1. Ceremony: decide that you're feature complete for your next release.

    2. Use the release script to cut a release candidate ("rc"), say, with `./release nodejs rc`. You'll end up with a new branch of the form v<major>.<minor>, so v2.3 in this example, and the version in that branch will be 2.3.0-rc.0. Appropriate git tags will also be created. The version in the main branch is bumped to 2.4.0-pre.0, for the next minor release.

    3. Test your release candidate, releasing more release candidates to your heart's content with `./release nodejs rc`. Meanwhile, developers can start working on new features off of the main branch.

    4. Ceremony: decide you're bug-free enough to perform a "generally available" (GA) release.

    5. Perform a GA release with `./release nodejs ga`. This will tag a release commit as "2.3.0", push the tag, then bump the version in the release branch (v2.3) to "2.3.1-rc.0".

    6. If you find a bug in production, fix it in the release branch, issue as many RCs as you need until it's fixed, then finally release your patch with `./release nodejs patch`. You'll get a release commit & tag "2.3.1", and the version will be bumped to "2.3.2-rc.0". Lastly, cherry pick, (often, literally "git cherry-pick -x ") the change(s) back to the main branch if the bug still applies there; 99% of the time, it will.

    7. Repeat ad nauseam.

    This lets you at least manage your versions, branches, and git tags in a sane, portable way that's low-tech enough for anyone to work on and understand. It's also got plenty of idiot-proofing so that it's hard to shoot yourself in the foot.

    Further, it's very customizable. After years of use across lots & lots of projects, we recommend using "dev" as your main branch name and as your main branch's prerelease suffix, and using "qa" as your release branch's prerelease suffix. The defaults are "pre" & "rc", and too many folks are using these scripts nowadays for us to change the defaults.
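    The prerelease bumps that `./release nodejs pre` and `./release nodejs rc` perform reduce to incrementing the numeric suffix after the prerelease tag; a hypothetical sketch of just that step, not the actual script:

    ```shell
    # Bump the trailing prerelease number: 2.3.0-pre.1 -> 2.3.0-pre.2.
    bump_pre() {
      local version="$1"
      local base="${version%.*}"   # e.g. 2.3.0-pre
      local n="${version##*.}"     # e.g. 1
      echo "${base}.$((n + 1))"
    }

    bump_pre "2.3.0-pre.0"   # -> 2.3.0-pre.1
    bump_pre "2.3.0-rc.4"    # -> 2.3.0-rc.5
    ```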

NOTE: The number of mentions on this list indicates mentions on common posts plus user suggested alternatives. Hence, a higher number means a more popular project.
