Infrastructure Engineering — Deployment Strategies

This page summarizes the projects mentioned and recommended in the original post on dev.to

  • kubefed

    Kubernetes Cluster Federation (discontinued)

  • This is made possible by Kubernetes being a standard, portable platform across cloud providers, by the ability to manage infrastructure as code, by the ability to set up networking between clusters whenever needed with the help of multi-cluster service meshes, and by the ability to orchestrate deployments across clusters using Kubefed and Crossplane (a minimal multi-cluster sketch follows after this list).

  • spec

    Container Storage Interface (CSI) Specification. (by container-storage-interface)

  • But if none of these are an issue, then containers and an orchestration system like Kubernetes can always take care of workload portability, especially now that OCI is in place for containers and CSI, CNI, CRI and SMI cover storage, networking, runtime and service mesh respectively, creating a healthy standards-based ecosystem and enabling workload portability without lock-in. After all, for a workload to be truly portable, all of its underlying resources should be portable with no or very limited changes (see the storage sketch after this list).

  • OpenFaaS

    OpenFaaS - Serverless Functions Made Simple

  • Serverless: Serverless has long been seen as the final step toward elastic computing. And while it seems ambitious, it cannot completely replace containers, virtual machines or bare-metal deployments; it is better seen as a great complement to them all, given its significant limitations. When you want to go serverless, you have to take a few things into account. The cold/warm/hot start of a serverless function decides the latency of the response you are going to get. Every cloud provider also enforces an execution timeout, such as 15 minutes for AWS Lambda, 9 minutes for Google Cloud Functions and 10 minutes for Azure Functions, which makes serverless unsuitable for long-running jobs (a small timeout check is sketched after this list). In addition, there are restrictions on the programming languages you can use in your serverless function (unless you opt for a container-based deployment, which essentially makes it a container-based deployment 🤔). If you still want to use serverless for long-running jobs, you might have to reach out to your provider for dedicated/premium plans, or maintain your own serverless infrastructure within your Kubernetes cluster using something like Knative, OpenFaaS, Kubeless or similar and set your own limits.

  • crossplane

    The Cloud Native Control Plane

  • Thinking of hybrid deployments brings us to workload portability, because unless you have a portable workload, hybrid deployment strategies may not be feasible. This also means you have to reduce your dependence on proprietary services from your cloud providers as much as possible, since otherwise you might end up making cross-cloud or cross-region API calls whenever your other cloud provider or on-premise systems don't support the service. Sometimes you might even have to build abstractions within your applications, because the same kind of service often does not have the same API across cloud providers, which adds complexity, especially in hybrid architectures; or you might use something like Crossplane to enable this for you to some extent (an application-level abstraction is sketched after this list).
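
The multi-cluster idea in the Kubefed/Crossplane item above can be sketched with the official Kubernetes Python client: the same portable Deployment is applied to several kubeconfig contexts. This is only a minimal illustration of the pattern, not how KubeFed or Crossplane work internally; the context names "aws-cluster" and "gcp-cluster" and the nginx image are assumptions.

    # Apply one portable Deployment to several clusters, identified by kubeconfig
    # contexts. KubeFed/Crossplane automate this fan-out declaratively with CRDs;
    # the context names here are hypothetical.
    from kubernetes import client, config

    DEPLOYMENT = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    )

    for context in ["aws-cluster", "gcp-cluster"]:   # one entry per target cluster
        api = client.AppsV1Api(config.new_client_from_config(context=context))
        api.create_namespaced_deployment(namespace="default", body=DEPLOYMENT)
        print(f"deployed to {context}")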
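
For the CSI/storage point, here is a hedged sketch of a PersistentVolumeClaim expressed as a plain manifest dict and created with the Python client. It names only a StorageClass, and each cluster maps that class to its own CSI driver, so the claim itself stays portable; the class name "standard" and the requested size are assumptions.

    # A portable storage request: the claim only references a StorageClass
    # ("standard" is an assumed name); whichever CSI driver backs that class in a
    # given cluster satisfies it, so the same manifest works across providers.
    from kubernetes import client, config

    config.load_kube_config()  # uses the current kubeconfig context

    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "data"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": "standard",   # mapped to a CSI driver per cluster
            "resources": {"requests": {"storage": "10Gi"}},
        },
    }

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )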
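
The execution timeouts quoted in the serverless item translate directly into a small decision helper: if a job cannot finish comfortably within the provider's limit, it belongs in a container (or a self-hosted Knative/OpenFaaS setup with your own limits). The job names, durations and safety margin below are made-up examples.

    # Decide serverless vs. container based on the execution timeouts quoted above
    # (AWS Lambda 15 min, Google Cloud Functions 9 min, Azure Functions 10 min).
    TIMEOUT_MINUTES = {
        "aws_lambda": 15,
        "google_cloud_functions": 9,
        "azure_functions": 10,
    }

    def fits_serverless(provider: str, estimated_minutes: float,
                        safety_margin: float = 0.8) -> bool:
        """True if the job should finish well within the provider's timeout."""
        return estimated_minutes <= TIMEOUT_MINUTES[provider] * safety_margin

    for job, minutes in [("thumbnail-resize", 0.5), ("nightly-report", 45)]:
        target = ("serverless function" if fits_serverless("aws_lambda", minutes)
                  else "container / self-hosted FaaS with custom limits")
        print(f"{job}: ~{minutes} min -> {target}")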
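
One way to read "build abstractions within your applications" from the hybrid-deployment item is a thin interface over the equivalent service on each cloud. The sketch below does this for object storage with S3 (boto3) and GCS (google-cloud-storage) backends; the class names, bucket handling and report key are assumptions, and Crossplane addresses the same problem at the infrastructure layer instead.

    # A thin application-side abstraction over "the same kind of service" on two
    # clouds: object storage. The application codes against ObjectStore; choosing
    # S3 or GCS becomes a configuration detail. Names here are hypothetical.
    from abc import ABC, abstractmethod

    class ObjectStore(ABC):
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

    class S3Store(ObjectStore):
        def __init__(self, bucket: str):
            import boto3                       # AWS SDK
            self._client, self._bucket = boto3.client("s3"), bucket

        def put(self, key: str, data: bytes) -> None:
            self._client.put_object(Bucket=self._bucket, Key=key, Body=data)

    class GCSStore(ObjectStore):
        def __init__(self, bucket: str):
            from google.cloud import storage   # GCP SDK
            self._bucket = storage.Client().bucket(bucket)

        def put(self, key: str, data: bytes) -> None:
            self._bucket.blob(key).upload_from_string(data)

    def save_report(store: ObjectStore) -> None:
        store.put("reports/latest.txt", b"hello hybrid world")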
