otomi-core
cert-manager
| | otomi-core | cert-manager |
|---|---|---|
| Mentions | 75 | 101 |
| Stars | 2,139 | 11,457 |
| Growth | 1.5% | 1.7% |
| Activity | 9.6 | 9.8 |
| Last commit | 5 days ago | 4 days ago |
| Language | Mustache | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
otomi-core
- Otomi – Self-Hosted PaaS for Kubernetes
- Self-hosted Kubernetes-based Heroku alternative
- What is a self-hosted Kubernetes-based PaaS?
An example of a self-hosted Kubernetes-based PaaS is Otomi. Install Otomi on your Kubernetes cluster, compose your platform (by activating the required capabilities) and build, deploy and expose apps in just a couple of minutes. Heroku, but Kubernetes native and running on your own cluster.
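As a sketch of that install flow (the chart repository URL and the `--set` values here are assumptions based on the project's README, not verified against a live cluster):

```shell
# Add the Otomi chart repo and install onto an existing Kubernetes cluster
# (repo URL and values are assumptions; check the otomi-core README for the
# authoritative instructions)
helm repo add otomi https://otomi.github.io/otomi-core
helm repo update
helm install otomi otomi/otomi \
  --set cluster.name=my-cluster \
  --set cluster.provider=custom   # e.g. aws, azure, google, custom
```

Once the installer job finishes, the platform's web UI is exposed on the cluster and capabilities can be toggled from there.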
- GitHub - redkubes/otomi-core: Self-hosted PaaS for Kubernetes
- GitHub - redkubes/otomi-core: Self-hosted & Git-based PaaS for Kubernetes
- Add developer- and operations-centric tools, automation and self-service on top of Kubernetes
This video shows some of the new features of Otomi version 0.19.0 that will be released in Week 11 2023. Follow us on GitHub and be the first to try it out: https://github.com/redkubes/otomi-core
- Self-hosted PaaS? (No dokku pls)
Otomi
- Self-hosted DevOps Platform as a Service for Kubernetes
- Kubernetes is only a multi-node cluster kernel
Kubernetes is 'only' a multi-node cluster kernel. Some call it the Linux of the cloud.
And because K8s is only a kernel, there are now over 2,000 (open source) projects, each adding some extra functionality to it, be it for observability, security, or networking. But these projects don't really collaborate, and end users don't ask for maturity of individual projects; they want sets/stacks of projects that integrate well.
By now every company has created its own stack of applications and configurations for Kubernetes, all reinventing the wheel and often spending a shocking amount of money doing so.
So here is my take:
- Let's create a new category in the Cloud Native Computing Foundation (CNCF) landscape and call it Integrated Stacks for K8s
- To be accepted, a stack needs to provide an open integration framework for other projects to add/integrate their apps
- Just like a Linux distro, each stack is ideal for some specific use case(s)
- A stack can be installed in one run, contains integrated apps that work out of the box, and has a (web) UI that acts as a desktop environment, providing easy and secure access to all features. Call it a new user experience for Kubernetes
Wouldn't it be great to have a list of all Kubernetes stacks available that everyone can use (and contribute to)? Just like (in the Linux analogy) you can choose between Linux Mint, Fedora, or Ubuntu.
We already created the first: https://github.com/redkubes/otomi-core
cert-manager
- deploying a minio service to kubernetes
cert-manager
- Upgrading Hundreds of Kubernetes Clusters
The second one is a combination of tools: External DNS, cert-manager, and NGINX ingress. Using these as a stack, you can quickly deploy an application, making it available through a DNS with a TLS without much effort via simple annotations. When I first discovered External DNS, I was amazed at its quality.
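The "simple annotations" workflow described above can be sketched as a single Ingress manifest (hostname, issuer name, and service names below are hypothetical; `cert-manager.io/cluster-issuer` is a real cert-manager annotation, and External DNS picks up the `host` from the Ingress rules):

```yaml
# Hypothetical app Ingress combining NGINX ingress, cert-manager and External DNS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # cert-manager issues the TLS cert
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com        # External DNS creates the DNS record from this host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
  tls:
    - hosts:
        - app.example.com
      secretName: my-app-tls       # cert-manager stores the issued certificate here
```

With the three controllers running, applying this one manifest yields a routable DNS name serving the app over TLS.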
- Run WebAssembly on DigitalOcean Kubernetes with SpinKube - In 4 Easy Steps
On top of its core components, SpinKube depends on cert-manager. cert-manager is responsible for provisioning and managing the TLS certificates used by the admission webhook system of the Spin Operator. Let's install cert-manager and KWasm using the commands shown here:
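The original commands did not survive extraction; a typical install, assuming the upstream Helm chart repositories (the KWasm repo URL and release names are assumptions, check the SpinKube quickstart), looks like:

```shell
# Install cert-manager from the official jetstack Helm repo, including its CRDs
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# Install the KWasm operator (repo URL is an assumption based on the KWasm docs)
helm repo add kwasm http://kwasm.sh/kwasm-operator/
helm install kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm --create-namespace
```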
- Importing kubernetes manifests with terraform for cert-manager
```hcl
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
  }
}

# The reference to the current project (GCP here; use the equivalent on AWS)
data "google_client_config" "provider" {}

# The reference to the current cluster (GKE here; use the equivalent on EKS)
data "google_container_cluster" "my_cluster" {
  name     = var.cluster_name
  location = var.cluster_location
}

# Configure the kubectl provider to use those values for authentication
provider "kubectl" {
  host  = data.google_container_cluster.my_cluster.endpoint
  token = data.google_client_config.provider.access_token
  cluster_ca_certificate = base64decode(
    data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate
  )
}

# Download the multi-document manifest file
data "http" "cert_manager_crds" {
  url = "https://github.com/cert-manager/cert-manager/releases/download/v${var.cert_manager_version}/cert-manager.crds.yaml"
}

data "kubectl_file_documents" "cert_manager_crds" {
  content = data.http.cert_manager_crds.response_body

  lifecycle {
    precondition {
      condition     = 200 == data.http.cert_manager_crds.status_code
      error_message = "Status code invalid"
    }
  }
}

# Use for_each, or else this kubectl_manifest would only import
# the first manifest in the file
resource "kubectl_manifest" "cert_manager_crds" {
  for_each  = data.kubectl_file_documents.cert_manager_crds.manifests
  yaml_body = each.value
}
```
- An opinionated template for deploying a single k3s cluster with Ansible backed by Flux, SOPS, GitHub Actions, Renovate, Cilium, Cloudflare and more!
SSL certificates thanks to Cloudflare and cert-manager
- Deploy Rancher on AWS EKS using Terraform & Helm Charts
```shell
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/${CERT_MANAGER_VERSION}/cert-manager.crds.yaml
```
- Setup/Design internal PKI
Put the sub-CA inside HashiCorp Vault to be used for automatic signing by services like https://cert-manager.io/ inside our k8s clusters.
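cert-manager supports Vault-backed issuers natively; a minimal sketch of such an Issuer follows (the server URL, PKI mount path, role, and auth secret names are assumptions for illustration, not taken from the post):

```yaml
# Hypothetical cert-manager Issuer backed by a Vault-hosted intermediate (sub-)CA
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
  namespace: my-app
spec:
  vault:
    server: https://vault.internal.example.com:8200
    path: pki_int/sign/internal-services   # sign role on the intermediate PKI mount
    auth:
      kubernetes:                          # authenticate via Vault's Kubernetes auth
        mountPath: /v1/auth/kubernetes
        role: cert-manager
        secretRef:
          name: cert-manager-vault-token
          key: token
```

Certificate resources referencing this Issuer are then signed automatically by the sub-CA without the root key ever leaving Vault.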
- Task vs Make - Final Thoughts
```yaml
install-cert-manager:
  desc: Install cert-manager
  deps:
    - init-cluster
  cmds:
    - kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/{{.CERT_MANAGER_VERSION}}/cert-manager.yaml
    - echo "Waiting for cert-manager to be ready" && sleep 25
  status:
    - kubectl -n cert-manager get pods | grep Running | wc -l | grep -q 3
```
- Easy HTTPS for your private networks
I've been pretty frustrated with how private CAs are supported. Your private root CA can be maliciously used to MITM every domain on the Internet, even though you intend to use it for only a couple domain names. Most people forget to set Name Constraints when they create these and many helper tools lack support [1][2]. Worse, browser support for Name Constraints has been slow [3] and support isn't well tracked [4]. Public CAs give you certificate transparency and you can subscribe to events to detect mis-issuance. Some hosted private CAs like AWS's offer logs [5], but DIY setups don't.
Still, there are a lot of folks happily using private CAs; they aren't the target audience for this initial release.
[1] https://github.com/FiloSottile/mkcert/issues/302
[2] https://github.com/cert-manager/cert-manager/issues/3655
[3] https://alexsci.com/blog/name-non-constraint/
[4] https://github.com/Netflix/bettertls/issues/19
[5] https://docs.aws.amazon.com/privateca/latest/userguide/secur...
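Setting Name Constraints when creating a private root CA is straightforward with plain OpenSSL; a minimal sketch (the file and domain names below are hypothetical) restricts the CA so it can only issue certificates under one internal zone:

```shell
# ca.cnf -- minimal OpenSSL config sketch (hypothetical CN and domain)
cat > ca.cnf <<'EOF'
[req]
distinguished_name = dn
x509_extensions    = v3_ca
prompt             = no
[dn]
CN = Example Internal Root CA
[v3_ca]
basicConstraints = critical,CA:TRUE
keyUsage         = critical,keyCertSign,cRLSign
# Constrain the CA to a single internal zone so a leaked key
# cannot be used to MITM arbitrary Internet domains
nameConstraints  = critical,permitted;DNS:.internal.example.com
EOF

# Generate the constrained self-signed root (no passphrase, 10-year validity)
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -config ca.cnf
```

Marking the extension `critical` matters: a validator that does not understand Name Constraints must then reject the chain rather than silently ignore the restriction.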
- ☸️ Managed Kubernetes: Our dev is on AWS, our prod is on OVH
the Cert Manager
What are some alternatives?
k3os - Purpose-built OS for Kubernetes, fully managed by Kubernetes.
metallb - A network load-balancer implementation for Kubernetes using standard routing protocols
charts - TrueNAS SCALE Apps Catalogs & Charts
aws-load-balancer-controller - A Kubernetes controller for Elastic Load Balancers
k8s-gitops - GitOps principles to define kubernetes cluster state via code
Portainer - Making Docker and Kubernetes management easy.
quickstart - Quickstarts to provision Kubernetes with Otomi
awx-operator - An Ansible AWX operator for Kubernetes built with Operator SDK and Ansible. 🤖
helm-charts - Temporal Helm charts
k3s - Lightweight Kubernetes
ingress-nginx - Ingress-NGINX Controller for Kubernetes
oauth2-proxy - A reverse proxy that provides authentication with Google, Azure, OpenID Connect and many more identity providers.