k8s-device-plugin
metallb
| | k8s-device-plugin | metallb |
|---|---|---|
| Mentions | 11 | 78 |
| Stars | 2,353 | 6,597 |
| Growth | 4.7% | 1.8% |
| Activity | 9.5 | 9.4 |
| Latest commit | 5 days ago | 7 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
k8s-device-plugin
- Unlocking AI and ML Metal Performance with QBO Kubernetes Engine (QKE) Post
- Nos – Open-Source to Maximize GPU Utilization in Kubernetes
- Show HN: Nos – Open-Source to Maximize GPU Utilization in Kubernetes
- Time-Slicing GPUs with Karpenter
- Understanding Kubernetes Limits and Requests
This framework allows the use of external devices (e.g., NVIDIA GPUs, AMD GPUs, SR-IOV NICs) without modifying core Kubernetes components.
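A device plugin surfaces each device type as an extended resource that pods request alongside CPU and memory. A minimal sketch of a pod requesting one GPU through the NVIDIA plugin's `nvidia.com/gpu` resource (the image name is just an illustrative CUDA base image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: cuda-container
    image: nvcr.io/nvidia/cuda:12.0.0-base-ubuntu22.04  # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 1  # extended resource advertised by the device plugin
```

The scheduler treats `nvidia.com/gpu` like any other countable resource, placing the pod only on nodes where the plugin has advertised free devices.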
- Nvidia GPU Plugin: Am I really limited to one pod per GPU?
Not talking about MIG. NVIDIA device plugin. https://github.com/NVIDIA/k8s-device-plugin
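Outside of MIG, recent versions of the NVIDIA device plugin can oversubscribe a GPU via time-slicing, so the answer is no longer strictly one pod per GPU. A hedged sketch of the plugin's sharing config (exact file layout per the plugin's documentation; `replicas: 4` makes each physical GPU advertise four allocatable `nvidia.com/gpu` slots, with no memory isolation between them):

```yaml
# Config file passed to the NVIDIA device plugin (e.g. via a ConfigMap).
version: v1
sharing:
  timeSlicing:
    resources:
    - name: nvidia.com/gpu
      replicas: 4  # each physical GPU is advertised 4 times
```

Note that time-sliced replicas share the GPU's memory and compute, so this suits bursty inference workloads better than training jobs.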
- Nvidia Kubernetes plugin install option that does not require Helm?
- What is the difference between nvidia device plugin and GPU operator?
GPU Operator Device plugin
- Share a GPU between pods on AWS EKS
If you have ever tried to use GPU-based instances with AWS ECS, or on EKS with the default Nvidia plugin, you will know it's not possible to make tasks/pods share the same GPU on an instance. If you want to add more replicas to your service (for redundancy or load balancing), you need one GPU for each replica.
- Looking for a sanity check on a project I'm working on at home, hoping you fine people can help - Raspberry Pi Kubernetes Cluster
Some notes on Plex/Emby/Kodi and transcoding. If you want true transcoding with GPU acceleration, you need an Nvidia GPU or to be a k8s device plugin genius. The whole idea of mounting elastic devices in k8s is fairly new and rather complex. In the meantime, transcoding is best done on a beefy device with a proper CPU (e.g. an i7) or specifically an Nvidia GPU, because there are numerous pre-made plugins. I just run Plex and Emby on an old ATX gaming machine without GPU acceleration and it works totally fine. They were barely usable for just me when running on the RPis; wouldn't recommend it unless you can figure out how to mount the correct devices in the pod using a custom Raspberry Pi device plugin . . . lol good luck!
- Arm labs device manager: https://community.arm.com/developer/research/b/articles/posts/a-smarter-device-manager-for-kubernetes-on-the-edge
- Deis labs Akri device manager: https://github.com/deislabs/akri
- Nvidia GPU plugin: https://github.com/NVIDIA/k8s-device-plugin
metallb
- Self hosted kubernetes
Hey guys, I want to share a guide I'm pretty proud of about setting up Kubernetes with https://kubespray.io/#/ and https://metallb.universe.tf/ so you can host it yourself. Most people spinning up Kubernetes either opt for k3s, get stuck choosing between all the options, or can't set up external IPs for their services; these tools eliminate those problems.
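For reference, MetalLB's layer-2 mode needs only two resources once installed (CRD-based configuration, MetalLB v0.13+); the pool name and address range below are placeholders for your own network:

```yaml
# An address pool MetalLB may hand out to LoadBalancer Services.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool        # placeholder name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250  # placeholder: a free range on your LAN
---
# Announce the pool's IPs via ARP/NDP from cluster nodes (layer-2 mode).
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
```

In layer-2 mode one node answers ARP for each service IP; if that node fails, MetalLB moves the announcement to another node.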
- Deploy web app in port 80 using kubernetes
- How to load balance highly available bare metal Kubernetes cluster control plane nodes?
Have a closer look at MetalLB.
- Trouble with RKE2 HA Setup: Part 2
To avoid that, you can use a combination of haproxy and keepalived, or an enterprise-grade load balancer like the ones from F5 or Citrix. Besides that, you can also work with https://kube-vip.io or https://metallb.universe.tf.
- Kubernetes and feeling defeated
Not sure if klipper is usable in a cluster with multiple nodes, as it binds to one port only. You may want to use MetalLB instead: https://metallb.universe.tf/
- Cool stuff to deploy for a project ideas
Then deploy MetalLB https://metallb.universe.tf/
- Load balance ingress for baremetal
- Own kubernetes cluster
What issue do you see with the load balancer? For self-hosted clusters, one can use MetalLB, for example, to get a single outward-facing IP that fails over to another node, keeping the same IP if a node dies.
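Once MetalLB is running with an address pool configured, exposing a workload is just an ordinary `LoadBalancer` Service; MetalLB assigns the external IP and moves its announcement to a surviving node on failure. A sketch (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative service name
spec:
  type: LoadBalancer   # MetalLB fills in status.loadBalancer.ingress
  selector:
    app: web           # must match your pod labels
  ports:
  - port: 80           # external port on the assigned IP
    targetPort: 8080   # illustrative container port
```

After applying, `kubectl get svc web` should show an `EXTERNAL-IP` drawn from the MetalLB pool instead of staying `<pending>` as it would without a load-balancer implementation.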
- PaperLB: A Kubernetes Network Load Balancer Implementation
Quoting from their docs:
- libvirt-k8s-provisioner - Ansible and Terraform to build a cluster from scratch in less than 10 minutes on KVM - Updated for 1.26
MetalLB to manage bare-metal LoadBalancer services - WIP - only L2 configuration can be set up via the playbook.
What are some alternatives?
kubevirt-gpu-device-plugin - NVIDIA k8s device plugin for Kubevirt
kube-vip - Kubernetes Control Plane Virtual IP and Load-Balancer
harvester - Open source hyperconverged infrastructure (HCI) software
calico - Cloud native networking and network security
aws-eks-share-gpu - How to share the same GPU between pods on AWS EKS
ingress-nginx - Ingress-NGINX Controller for Kubernetes
aws-virtual-gpu-device-plugin - AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for your machine learning inference workloads
external-dns - Configure external DNS servers (AWS Route53, Google CloudDNS and others) for Kubernetes Ingresses and Services
terraform-provider-kubernetes - Terraform Kubernetes provider
cert-manager - Automatically provision and manage TLS certificates in Kubernetes
containers-roadmap - This is the public roadmap for AWS container services (ECS, ECR, Fargate, and EKS).
rancher - Complete container management platform