Bare-Metal Kubernetes with K3s

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • metalk8s

    An opinionated Kubernetes distribution with a focus on long-term on-prem deployments

  • An 'easy' way to deploy a cluster could be using kubeadm. Then you'll need a CNI like Calico to get Pod networking up and running. However, you'll also want to install a bunch of other software on said cluster to monitor it, manage logs, and so on (a minimal sketch of the kubeadm route follows at the end of this entry).

    Given you're running on physical infrastructure, MetalK8s [1] could be of interest (full disclosure: I'm one of the leads of said project, which is fully open-source and used as part of our commercial enterprise storage products)

    [1] https://github.com/scality/metalk8s
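
    For illustration, here is a minimal sketch of the kubeadm route described above, written as a Python wrapper around the documented CLI commands. The Calico manifest URL and pod CIDR are assumptions; pin a version from the current Calico docs before relying on this.

        import subprocess

        def run(cmd: str) -> None:
            """Run a shell command and fail loudly so a broken step surfaces early."""
            print(f"+ {cmd}")
            subprocess.run(cmd, shell=True, check=True)

        # 1. Initialize the control plane. The pod CIDR matches Calico's
        #    default manifest; change both together if you change either.
        run("kubeadm init --pod-network-cidr=192.168.0.0/16")

        # 2. Install Calico as the CNI so Pod networking comes up.
        #    (Manifest URL is an assumption; check the Calico docs.)
        run("kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml")

        # 3. Workers join with the token that `kubeadm init` prints;
        #    that `kubeadm join ...` command is elided here.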

  • AutoSpotting

    Saves up to 90% of AWS EC2 costs by automating the use of spot instances on existing AutoScaling groups. Installs in minutes using CloudFormation or Terraform. Convenient to deploy at scale using StackSets. Uses tagging to avoid launch-configuration changes. Automated spot termination handling. Reliable fallback to on-demand instances. (The tag-based opt-in is sketched below.)
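
    AutoSpotting's opt-in is tag-driven: you tag an existing AutoScaling group and the tool handles the spot replacement from there. A hedged sketch with boto3 (the `spot-enabled` tag key reflects the project's documented default opt-in; the group name is hypothetical):

        import boto3

        autoscaling = boto3.client("autoscaling")

        # Opt an existing AutoScaling group into AutoSpotting purely by tag;
        # no launch-configuration changes are required.
        autoscaling.create_or_update_tags(
            Tags=[{
                "ResourceId": "my-web-asg",            # hypothetical group name
                "ResourceType": "auto-scaling-group",
                "Key": "spot-enabled",                 # AutoSpotting's default opt-in tag
                "Value": "true",
                "PropagateAtLaunch": False,
            }]
        )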

  • We scale up to about 100 machines. We use spot instances extensively. That configuration was actually tricky. It's been a couple of months now, and it works pretty OK.

    k3s is actually pretty simple to use now. The tricky part was integrating with https://github.com/kubernetes/cloud-provider-aws and https://github.com/DirectXMan12/k8s-prometheus-adapter

    The hardest part is getting it to work with spot instances. We use https://github.com/AutoSpotting/AutoSpotting to integrate with it (a sketch of the k3s install follows below).
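
    A hedged sketch of the k3s side of that setup, based on k3s's documented install script and server flags (the exact flag set needed by cloud-provider-aws is an assumption; consult both projects' docs):

        import subprocess

        # k3s bundles a stub cloud controller; disable it so
        # cloud-provider-aws can own node lifecycle instead, which is
        # what lets the cluster notice terminated spot instances.
        install_cmd = (
            "curl -sfL https://get.k3s.io | "
            'INSTALL_K3S_EXEC="--disable-cloud-controller '
            '--kubelet-arg=cloud-provider=external" sh -'
        )
        subprocess.run(install_cmd, shell=True, check=True)

        # cloud-provider-aws itself is then deployed onto the cluster
        # (e.g. via its Helm chart); that step is elided here.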

  • osv

    OSv, a new operating system for the cloud.

  • > Oracle used to offer an installation mode like this

    Oracle, and BEA before them, used to offer a JVM which ran on top of a thin custom OS designed only to host the JVM; you could call it a "unikernel". The product was called JRockit Virtual Edition (JRVE), or WebLogic Server Virtual Edition (WLS-VE) when used to run WebLogic; earlier, BEA had called it LiquidVM. The internal name for that thin custom OS was, in fact, "Bare Metal". It was similar in concept to https://github.com/cloudius-systems/osv but a completely different implementation.

    I think one thing which caused a problem for it is that a lot of customers want to deploy various management tools to their VMs (security auditing software, performance monitoring software, etc.), and when your VM runs a custom OS that becomes very difficult or impossible. So adopting this product could lead to the pain of having to ask for exceptions to policies requiring those tools, and then of defending the decision to adopt it against those who use those policies to argue against it. I think this is part of why the product was discontinued.

    Nowadays, Oracle offers "bare metal servers" [1] – which are just hypervisor-less servers, same as other cloud vendors do. Or similarly, "Oracle Database Appliance Bare Metal System" [2] – which just means not installing a hypervisor on your Oracle Database Appliance.

    So Oracle seems to have a history of using the phrase "bare metal" in both the senses being discussed here.

    [1] https://www.oracle.com/cloud/compute/bare-metal.html

    [2] https://docs.oracle.com/en/engineered-systems/oracle-databas...

  • kubernetes

    ArgoCD-based configuration for the OCF Kubernetes cluster (by ocf)

  • I’m working on this right now. My theory is that having every cluster object defined in Git (but with clever use of third-party Helm charts to reduce the maintenance burden) is the way to go.

    Our cluster configuration is public [1], and I’m almost done with a blog post going over all the different choices you can make with regard to the surrounding monitoring/etc. infrastructure on a Kubernetes cluster (see the sketch after this entry).

    [1] https://github.com/ocf/kubernetes
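
    For illustration, the core of that GitOps pattern is an Argo CD Application object pointing at the Git repo. A minimal sketch that emits one (the repo URL is [1] above; the path and destination namespace are hypothetical):

        import yaml  # pip install pyyaml

        # An Argo CD Application says: "sync everything under this Git
        # path into the cluster, and keep it that way".
        application = {
            "apiVersion": "argoproj.io/v1alpha1",
            "kind": "Application",
            "metadata": {"name": "cluster-config", "namespace": "argocd"},
            "spec": {
                "project": "default",
                "source": {
                    "repoURL": "https://github.com/ocf/kubernetes",
                    "targetRevision": "HEAD",
                    "path": "apps",  # hypothetical path within the repo
                },
                "destination": {
                    "server": "https://kubernetes.default.svc",
                    "namespace": "default",  # hypothetical
                },
                # Automated sync keeps Git as the single source of truth:
                # drift gets pruned and self-healed back to what's committed.
                "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
            },
        }

        print(yaml.safe_dump(application, sort_keys=False))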

NOTE: The number of mentions on this list reflects mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • Farewell to the Era of Cheap EC2 Spot Instances

    1 project | news.ycombinator.com | 5 May 2023
  • Fast-Terraform: Terraform Tutorial, How-To: Hands-on LABs, and AWS Hands-on Sample Usage Scenarios (Infrastructure As Code)

    2 projects | /r/hashicorp | 3 May 2023
  • Fast-Terraform: Terraform Tutorial, How-To: Hands-on LABs, and AWS Hands-on Sample Usage Scenarios (Infrastructure As Code)

    2 projects | /r/sre | 27 Apr 2023
  • Fast-Terraform: Terraform Tutorial, How-To: Hands-on LABs, and AWS Hands-on Sample Usage Scenarios (Infrastructure As Code) + Elastic Kubernetes Service

    2 projects | /r/kubernetes | 27 Apr 2023
  • Fast-Terraform: Terraform Tutorial, How-To: Hands-on LABs, and AWS Hands-on Sample Usage Scenarios (Infrastructure As Code)

    2 projects | /r/Cloud | 26 Apr 2023