The author has also written a Go version of this [1].
1: https://github.com/danslimmon/donothing
> The problem happens when somebody "updates" that web server in-place.
Imagine this is 28-nginx: I would just create another script, 29-nginx-update, recording the update, even if it's only: echo "apt-get update && apt-get upgrade nginx" ; echo "make sure to fix variable $foo"
The next time I have to do that, I integrate it into 28-nginx and remove 29-nginx-update.
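Such a 29-nginx-update can start as little more than printed reminders. A minimal sketch (the package name and the $foo reminder are just the example above):

```shell
#!/bin/sh
# 29-nginx-update: record an in-place upgrade as its own numbered step.
# A half do-nothing script: it only prints the commands and reminders,
# which documents the change until it gets merged back into 28-nginx.
record_update() {
  echo "apt-get update && apt-get upgrade nginx"
  echo "REMINDER: fix variable \$foo after upgrading"
}
record_update
```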
> eventually when someone tries the whole checklist from the beginning, they'll find it's now broken; the steps aren't working as expected.
Maybe I don't understand the issue, but my scripts or text files are simple and meant to be used in sequence. If I hack the scripts, I make sure they still work as expected - and given my natural laziness, I only ever update scripts when deploying to a new server or VM, so I get immediate feedback if they stop working.
Still, sometimes something may not work as expected (ex: above, maybe $foo depends on the context?), but that only means I need to generalize the previous solution - and since script updates only happen in the context of a new deployment, everything is still fresh in my head.
To help me with that, I also use zfs snapshots at important steps, to be able to "observe" what the files looked like on the other server at a specific time. The snapshots conveniently share the same name (ex: etc@28-nginx), so comparing the files to create a script can easily be done with diff -Nur through .zfs/snapshot/, cf https://docs.oracle.com/cd/E19253-01/819-5461/gbiqe/index.ht...
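The comparison step can be wrapped in a tiny helper (helper name mine; it assumes snapshots on both machines were created with the step name, e.g. zfs snapshot tank/etc@28-nginx):

```shell
#!/bin/sh
# Diff the live files against a step-named snapshot, browsable read-only
# under the hidden .zfs/snapshot/ directory of the dataset's mountpoint.
snapdiff() {
  mnt=$1; snap=$2; rel=$3
  diff -Nur "$mnt/.zfs/snapshot/$snap/$rel" "$mnt/$rel"
}
# e.g.: snapdiff /etc 28-nginx nginx
```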
Between that and a sqlite database containing the full history of commands typed (including in which directory, and their return codes), I rarely have such issues.
Shameless plug for that bash history in sqlite: https://github.com/csdvrx/bash-timestamping-sqlite
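The idea can be sketched in a few lines of shell (schema and function name mine, not the linked project's):

```shell
#!/bin/sh
# Log each command with its directory and exit status into sqlite.
HISTDB="${HISTDB:-$HOME/.bash_history.db}"
sqlite3 "$HISTDB" 'CREATE TABLE IF NOT EXISTS hist
  (ts TEXT, dir TEXT, cmd TEXT, ret INTEGER);'
log_command() {
  # $1 = exit status, rest = the command line; single quotes doubled for SQL
  ret=$1; shift
  cmd=$(printf %s "$*" | sed "s/'/''/g")
  sqlite3 "$HISTDB" \
    "INSERT INTO hist VALUES (datetime('now'), '$PWD', '$cmd', $ret);"
}
# In bash, hook it from the prompt, e.g.:
# PROMPT_COMMAND='log_command $? "$(HISTTIMEFORMAT= history 1 | sed "s/^ *[0-9]* *//")"'
```

Querying is then plain SQL, e.g. sqlite3 ~/.bash_history.db 'SELECT * FROM hist WHERE ret != 0;' to list every command that failed.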
Braintree's Runbook seems to address exactly this case of "do-nothing scripting": https://github.com/braintree/runbook
The underlying concept would be akin to Business Process Modelling.
Using something like https://camunda.com [0], you can execute a business process model. Such a model can include sending emails, waiting for responses, human action, custom API requests, etc.
Camunda is not no-code, but it comes close to what you imagine. I suspect that when the existing API landscape of a company needs to be used for workflows, coding/integration is inevitable.
There are also truly no-code platforms like
This is basically why I ended up writing Runnable Plans, over at https://github.com/vatine/runnable-plans/. The main differences are: you define the plan in YAML (yeah, horrible, but better than hand-chasing a parser; that MAY come at a later date) instead of in the script; the plan has no inherent order (to allow for future parallel execution); it saves success/failure to allow for later restarts; and it can generate a GraphViz graph of the plan's dependency ordering.
But whatever works, works. Start somewhere - get it into a script, a plan, whatever. Then it becomes easier to identify the steps that can be turned over entirely to the machine.
Thank you for introducing me to the babushka-style "Check - Set - Check" methodology. I assume you meant this:
https://github.com/benhoskings/babushka
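The babushka model, roughly: run the check; if it fails, run the set step; then run the check again to confirm the step actually worked. A minimal sketch in shell (helper name mine, not babushka's DSL):

```shell
#!/bin/sh
# Check - Set - Check: only act when the check fails, then re-run the
# check to verify the action really established the desired state.
check_set_check() {
  check=$1; action=$2
  if eval "$check"; then
    return 0          # already in the desired state, nothing to do
  fi
  eval "$action"      # try to establish the state
  eval "$check"       # non-zero here means the action did not work
}
# e.g.: check_set_check 'command -v nginx >/dev/null' 'apt-get install -y nginx'
```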