| | k8s-device-plugin | cert-manager |
|---|---|---|
| Mentions | 11 | 101 |
| Stars | 2,411 | 11,486 |
| Growth | 3.7% | 1.1% |
| Activity | 9.6 | 9.8 |
| Latest commit | 8 days ago | 4 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
k8s-device-plugin
- Unlocking AI and ML Metal Performance with QBO Kubernetes Engine (QKE)
https://github.com/NVIDIA/k8s-device-plugin/issues/332#issue...
- Show HN: Nos – Open-Source to Maximize GPU Utilization in Kubernetes
- Time-Slicing GPUs with Karpenter
k8s-device-plugin
- Understanding Kubernetes Limits and Requests
This framework allows the use of external devices (e.g., NVIDIA GPUs, AMD GPUs, SR-IOV NICs) without modifying core Kubernetes components.
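For context, a device plugin surfaces hardware as an extended resource that pods request alongside CPU and memory. A minimal sketch of such a request, assuming the NVIDIA plugin is already running and advertising nvidia.com/gpu (the pod name and image are illustrative, not from the original post):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-example                                # hypothetical pod name
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # illustrative CUDA base image
      command: ["nvidia-smi"]                      # print visible GPUs and exit
      resources:
        limits:
          nvidia.com/gpu: 1                        # extended resource advertised by the device plugin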
- Nvidia GPU Plugin: Am I really limited to one pod per GPU?
Not talking about MIG. NVIDIA device plugin. https://github.com/NVIDIA/k8s-device-plugin
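By default the plugin advertises each physical GPU as one nvidia.com/gpu, so a GPU can be claimed by only one pod at a time; newer plugin releases support time-slicing to oversubscribe a GPU across pods. A sketch of the sharing config, assuming a recent plugin version (the replica count is arbitrary; see the repo's README for how to pass this config via Helm or --config-file):

version: v1
sharing:
  timeSlicing:
    resources:
      - name: nvidia.com/gpu
        replicas: 4    # each physical GPU is advertised as 4 schedulable GPUs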
- Nvidia Kubernetes plugin install option that does not require Helm?
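For what it's worth, the repo also ships a static DaemonSet manifest, so a Helm-free install can be a single command (the release tag here is an assumption; substitute a current one):

kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.0/nvidia-device-plugin.yml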
- What is the difference between nvidia device plugin and GPU operator?
GPU Operator / device plugin
- Share a GPU between pods on AWS EKS
If you ever tried to use GPU-based instances with AWS ECS, or on EKS using the default Nvidia plugin, you would know that it's not possible to make a task/pod share the same GPU on an instance. If you want to add more replicas to your service (for redundancy or load balancing), you would need one GPU for each replica.
- Looking for a sanity check on a project I'm working on at home, hoping you fine people can help - Raspberry Pi Kubernetes Cluster
Some notes on Plex/Emby/Kodi and transcoding. If you want true transcoding with GPU acceleration, you have to have an Nvidia GPU or be a k8s device plugin genius. The whole idea of mounting elastic devices in k8s is fairly new and rather complex. In the meantime, transcoding is best done on a beefy device with a proper CPU (e.g. an i7) or specifically an Nvidia GPU, because there are numerous pre-made plugins. I just run Plex and Emby on an old ATX gaming machine without GPU acceleration and it works totally fine. They were barely usable for just me when running on the RPis; wouldn't recommend it unless you can figure out how to mount the correct devices in the pod using a custom Raspberry Pi device plugin . . . lol good luck!
- Arm labs device manager: https://community.arm.com/developer/research/b/articles/posts/a-smarter-device-manager-for-kubernetes-on-the-edge
- Deis labs Akri device manager: https://github.com/deislabs/akri
- Nvidia GPU plugin: https://github.com/NVIDIA/k8s-device-plugin
cert-manager
- deploying a minio service to kubernetes
cert-manager
- Upgrading Hundreds of Kubernetes Clusters
The second one is a combination of tools: External DNS, cert-manager, and NGINX ingress. Using these as a stack, you can quickly deploy an application and make it available through a DNS name with a TLS certificate, without much effort, via simple annotations. When I first discovered External DNS, I was amazed at its quality.
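The "simple annotations" workflow looks roughly like the sketch below, assuming an nginx ingress class, a ClusterIssuer named letsencrypt-prod, and app.example.com as a placeholder hostname (none of these names come from the original post):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app                                                     # placeholder
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod            # cert-manager issues the certificate
    external-dns.alpha.kubernetes.io/hostname: app.example.com  # External DNS creates the record
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls                                       # cert-manager stores the cert here
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80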
- Run WebAssembly on DigitalOcean Kubernetes with SpinKube - In 4 Easy Steps
On top of its core components, SpinKube depends on cert-manager, which provisions and manages the TLS certificates used by the Spin Operator's admission webhook system. Let's install cert-manager and KWasm using the commands shown here:
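The commands themselves did not survive extraction; a typical Helm-based cert-manager install looks like the following (not necessarily the article's exact steps, and the KWasm step is omitted here):

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true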
- Importing kubernetes manifests with terraform for cert-manager
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
  }
}

# Reference to the current Google Cloud project (or the AWS equivalent)
data "google_client_config" "provider" {}

# Reference to the current cluster (or EKS)
data "google_container_cluster" "my_cluster" {
  name     = var.cluster_name
  location = var.cluster_location
}

# Configure the kubectl provider to authenticate with those values
provider "kubectl" {
  host                   = data.google_container_cluster.my_cluster.endpoint
  token                  = data.google_client_config.provider.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
}

# Download the multi-document manifest file
data "http" "cert_manager_crds" {
  url = "https://github.com/cert-manager/cert-manager/releases/download/v${var.cert_manager_version}/cert-manager.crds.yaml"
}

data "kubectl_file_documents" "cert_manager_crds" {
  content = data.http.cert_manager_crds.response_body

  lifecycle {
    precondition {
      condition     = 200 == data.http.cert_manager_crds.status_code
      error_message = "Status code invalid"
    }
  }
}

# Use for_each, or else this kubectl_manifest would only import the first manifest in the file
resource "kubectl_manifest" "cert_manager_crds" {
  for_each  = data.kubectl_file_documents.cert_manager_crds.manifests
  yaml_body = each.value
}
- An opinionated template for deploying a single k3s cluster with Ansible backed by Flux, SOPS, GitHub Actions, Renovate, Cilium, Cloudflare and more!
SSL certificates thanks to Cloudflare and cert-manager
- Deploy Rancher on AWS EKS using Terraform & Helm Charts
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/${CERT_MANAGER_VERSION}/cert-manager.crds.yaml
- Setup/Design internal PKI
Put the sub-CA inside HashiCorp Vault to be used for automatic signing by tools like https://cert-manager.io/ inside our k8s clusters.
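cert-manager supports exactly this pattern through its Vault issuer. A minimal sketch, assuming a Vault intermediate PKI mounted at pki_int, a signing role named example-role, and Kubernetes auth already configured (every name here is a placeholder):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-issuer                               # placeholder name
spec:
  vault:
    server: https://vault.example.internal:8200    # placeholder Vault address
    path: pki_int/sign/example-role                # sign endpoint of the sub-CA
    caBundle: "<base64-encoded Vault CA cert>"     # so cert-manager trusts Vault's TLS
    auth:
      kubernetes:
        role: cert-manager                         # placeholder Vault k8s-auth role
        mountPath: /v1/auth/kubernetes
        secretRef:
          name: cert-manager-vault-token           # placeholder token secret
          key: token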
- Task vs Make - Final Thoughts
install-cert-manager:
  desc: Install cert-manager
  deps:
    - init-cluster
  cmds:
    - kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/{{.CERT_MANAGER_VERSION}}/cert-manager.yaml
    - echo "Waiting for cert-manager to be ready" && sleep 25
  status:
    - kubectl -n cert-manager get pods | grep Running | wc -l | grep -q 3
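The sleep-and-grep readiness check works but is brittle (it breaks if the pod count ever changes); waiting on the deployments is steadier. A sketch, assuming the stock manifest's deployments all land in the cert-manager namespace:

kubectl wait --for=condition=Available deployment --all -n cert-manager --timeout=120s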
- Easy HTTPS for your private networks
I've been pretty frustrated with how private CAs are supported. Your private root CA can be maliciously used to MITM every domain on the Internet, even though you intend to use it for only a couple domain names. Most people forget to set Name Constraints when they create these and many helper tools lack support [1][2]. Worse, browser support for Name Constraints has been slow [3] and support isn't well tracked [4]. Public CAs give you certificate transparency and you can subscribe to events to detect mis-issuance. Some hosted private CAs like AWS's offer logs [5], but DIY setups don't.
Even still, there are a lot of folks happily using private CAs, they aren't the target audience for this initial release.
[1] https://github.com/FiloSottile/mkcert/issues/302
[2] https://github.com/cert-manager/cert-manager/issues/3655
[3] https://alexsci.com/blog/name-non-constraint/
[4] https://github.com/Netflix/bettertls/issues/19
[5] https://docs.aws.amazon.com/privateca/latest/userguide/secur...
- ☸️ Managed Kubernetes: Our dev is on AWS, our prod is on OVH
the Cert Manager
What are some alternatives?
kubevirt-gpu-device-plugin - NVIDIA k8s device plugin for Kubevirt
metallb - A network load-balancer implementation for Kubernetes using standard routing protocols
harvester - Open source hyperconverged infrastructure (HCI) software
aws-load-balancer-controller - A Kubernetes controller for Elastic Load Balancers
aws-eks-share-gpu - How to share the same GPU between pods on AWS EKS
Portainer - Making Docker and Kubernetes management easy.
aws-virtual-gpu-device-plugin - AWS virtual gpu device plugin provides capability to use smaller virtual gpus for your machine learning inference workloads
awx-operator - An Ansible AWX operator for Kubernetes built with Operator SDK and Ansible. 🤖
terraform-provider-kubernetes - Terraform Kubernetes provider
k3s - Lightweight Kubernetes
containers-roadmap - This is the public roadmap for AWS container services (ECS, ECR, Fargate, and EKS).
oauth2-proxy - A reverse proxy that provides authentication with Google, Azure, OpenID Connect and many more identity providers.