k8s-gitops vs node-feature-discovery
| | k8s-gitops | node-feature-discovery |
|---|---|---|
| Mentions | 4 | 8 |
| Stars | 600 | 675 |
| Growth | - | 3.3% |
| Activity | 9.9 | 9.5 |
| Latest commit | 5 days ago | 6 days ago |
| Language | Shell | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
k8s-gitops
- How can Intel quick sync be exposed to a pod?
And then deploy Plex or other service to the node based on the label that is created — https://github.com/billimek/k8s-gitops/blob/master/default/plex/plex.yaml#L60-L62
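The scheduling pattern being described can be sketched as a nodeAffinity rule keyed on the NFD label. This is a minimal illustration, not a copy of the linked file; the deployment name, image, and resource request are placeholders (the `gpu.intel.com/i915` resource is what the Intel GPU device plugin advertises):

```yaml
# Sketch: schedule a pod only on nodes that NFD has labeled with the
# custom Intel GPU feature. Names and image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plex
spec:
  replicas: 1
  selector:
    matchLabels:
      app: plex
  template:
    metadata:
      labels:
        app: plex
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: feature.node.kubernetes.io/custom-intel-gpu
                    operator: In
                    values: ["true"]
      containers:
        - name: plex
          image: plexinc/pms-docker  # placeholder image
          resources:
            limits:
              gpu.intel.com/i915: 1  # resource exposed by the Intel GPU plugin
```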
No prob! So the NFD config needs to know the pci ids for what you’re looking to label. The values are here: https://github.com/billimek/k8s-gitops/blob/master/kube-system/node-feature-discovery/node-feature-discovery.yaml#L67-L71
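In older NFD releases the custom feature source was configured in `nfd-worker.conf` roughly like this. A hedged sketch: `8086` is Intel's PCI vendor id and `0300` is the display-controller class, but check the linked file for the exact values and the syntax of the NFD version you run:

```yaml
# nfd-worker.conf (custom feature source) -- illustrative sketch
sources:
  custom:
    - name: "intel-gpu"  # produces feature.node.kubernetes.io/custom-intel-gpu=true
      matchOn:
        - pciId:
            class: ["0300"]   # PCI class: display controller
            vendor: ["8086"]  # PCI vendor: Intel
```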
So this one https://github.com/billimek/k8s-gitops/blob/master/kube-system/intel-gpu_plugin/intel-gpu_plugin.yaml is a custom yaml that deploys the Intel plugin as a daemonset but only on nodes that have the label "feature.node.kubernetes.io/custom-intel-gpu".
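The effect can be sketched as a DaemonSet with a nodeSelector on that label. This is illustrative only; the names and image tag are placeholders, not copied from the linked file, and a real deployment of the Intel plugin also needs device mounts:

```yaml
# Sketch: run the Intel GPU plugin only on nodes carrying the NFD label.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: intel-gpu-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: intel-gpu-plugin
  template:
    metadata:
      labels:
        app: intel-gpu-plugin
    spec:
      nodeSelector:
        # only schedule where NFD applied the custom label
        feature.node.kubernetes.io/custom-intel-gpu: "true"
      containers:
        - name: intel-gpu-plugin
          image: intel/intel-gpu-plugin  # placeholder, pin a real tag
```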
node-feature-discovery
- Those running Kubernetes, what is in your core stack? And what "gem" can you not live without?
node-feature-discovery and descheduler
- How can Intel quick sync be exposed to a pod?
Take a look at the NFD repo, they have instructions for deploying without helm — https://github.com/kubernetes-sigs/node-feature-discovery
This one https://github.com/kubernetes-sigs/node-feature-discovery is an official method of node discovery that puts certain labels on nodes, but I can't see the label "feature.node.kubernetes.io/custom-intel-gpu" being created. Am I missing where that label comes from? Is that where the helm version from https://github.com/billimek/k8s-gitops/blob/master/kube-system/node-feature-discovery/node-feature-discovery.yaml comes in — does that one apply the label?
When deploying without the helm chart, you need to define that in the ConfigMap. Looks like it’s here: https://github.com/kubernetes-sigs/node-feature-discovery/blob/master/nfd-daemonset-combined.yaml.template#L137-L229
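Without the chart, the same custom rule lives in the nfd-worker ConfigMap. A rough sketch, assuming the key name and namespace used by the templates in the NFD repo (verify both against the version you deploy):

```yaml
# Sketch: nfd-worker ConfigMap carrying the custom rule when not using Helm.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nfd-worker-conf
  namespace: node-feature-discovery
data:
  nfd-worker.conf: |
    sources:
      custom:
        - name: "intel-gpu"
          matchOn:
            - pciId:
                vendor: ["8086"]  # Intel's PCI vendor id
```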
What are some alternatives?
truecharts - Community App Catalog for TrueNAS SCALE [Moved to: https://github.com/truecharts/charts]
intel-device-plugins-for-kubernetes - Collection of Intel device plugins for Kubernetes
flux2-kustomize-helm-example - A GitOps workflow example for multi-env deployments with Flux, Kustomize and Helm.
velero-plugin-for-aws - Plugins to support Velero on AWS
otomi-core - Self-hosted DevOps PaaS for Kubernetes
cloud-native-platform - Repo for "How to build your own cloud-native platform on IaaS clouds in 2021"
nfs-subdir-external-provisioner - Dynamic sub-dir volume provisioner on a remote NFS server.
cri-tools - CLI and validation tools for Kubelet Container Runtime Interface (CRI).
k3d-action - A GitHub Action to run lightweight ephemeral Kubernetes clusters during workflows. The fundamental advantage of this action is full customization of the embedded k3s clusters. In addition, it provides a private image registry and multi-cluster support.
Zenko - Zenko is the open source multi-cloud data controller: own and keep control of your data on any cloud.
tekton-pipeline-and-task-test - This Demo repository will deploy and configure a Tekton CI System. It uses GitHub Actions to validate the Tekton config on every commit.