hierarchical-namespaces
secrets-store-csi-driver
| | hierarchical-namespaces | secrets-store-csi-driver |
|---|---|---|
| Mentions | 8 | 22 |
| Stars | 581 | 1,174 |
| Growth | 7.4% | 2.1% |
| Activity | 6.6 | 8.5 |
| Last commit | 10 days ago | 5 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
hierarchical-namespaces
-
Efficient Cluster Management with Kubernetes’ Hierarchical Namespaces
HNC_VERSION=v1.1.0
HNC_VARIANT=default
kubectl apply -f https://github.com/kubernetes-sigs/hierarchical-namespaces/releases/download/${HNC_VERSION}/${HNC_VARIANT}.yaml
-
Amazon EC2 Enhances Defense in Depth with Default IMDSv2
Kubernetes has a lot of limitations from a multi-tenancy perspective.
It's functional, but I think it's not as polished as the rest of Kubernetes, which is why Kubernetes has a multi-tenancy SIG that spawned the Hierarchical Namespace Controller (https://github.com/kubernetes-sigs/hierarchical-namespaces) and virtual clusters (https://github.com/kubernetes-sigs/cluster-api-provider-nest...)
-
Automatically deploy objects after namespace creation
Kyverno's a great option. Depending on the use case, you might want to consider https://github.com/kubernetes-sigs/hierarchical-namespaces as well (disclaimer: I'm the original author); it's a good fit if groups of related namespaces need related objects.
-
Multitenancy with Hierarchical namespaces
❯ HNC_VERSION=v1.0.0
❯ kubectl apply -f https://github.com/kubernetes-sigs/hierarchical-namespaces/releases/download/${HNC_VERSION}/default.yaml
namespace/hnc-system created
customresourcedefinition.apiextensions.k8s.io/hierarchyconfigurations.hnc.x-k8s.io created
customresourcedefinition.apiextensions.k8s.io/hncconfigurations.hnc.x-k8s.io created
customresourcedefinition.apiextensions.k8s.io/subnamespaceanchors.hnc.x-k8s.io created
role.rbac.authorization.k8s.io/hnc-leader-election-role created
clusterrole.rbac.authorization.k8s.io/hnc-admin-role created
clusterrole.rbac.authorization.k8s.io/hnc-manager-role created
clusterrole.rbac.authorization.k8s.io/hnc-proxy-role created
rolebinding.rbac.authorization.k8s.io/hnc-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/hnc-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/hnc-proxy-rolebinding created
secret/hnc-webhook-server-cert created
service/hnc-controller-manager-metrics-service created
service/hnc-webhook-service created
deployment.apps/hnc-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/hnc-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/hnc-validating-webhook-configuration created
# Install helper plugin
❯ kubectl krew install hns
-
Is it anti-pattern to have multiple environments under a single namespace?
I would say it’s an anti-pattern, since using a single namespace for multiple environments will be a pain. Not sure what you mean by CRDs, though. There is an add-on that gives you namespace hierarchies, i.e. each team gets a namespace and can have sub-namespaces for environments. Check it out: https://github.com/kubernetes-sigs/hierarchical-namespaces
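The team-namespace-with-sub-namespaces pattern described above maps onto HNC's SubnamespaceAnchor resource. A minimal sketch, assuming HNC is installed and using hypothetical names (`team-a` as the parent namespace, `team-a-staging` as the environment sub-namespace):

```yaml
# Creating this anchor in the parent namespace makes HNC create the
# child namespace and propagate inherited objects (e.g. RoleBindings)
# into it. Equivalent to: kubectl hns create team-a-staging -n team-a
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: team-a-staging   # name of the sub-namespace to create
  namespace: team-a      # the parent (team) namespace
```

Deleting the anchor deletes the sub-namespace, so the parent team retains control of its environments without cluster-level permissions.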
-
Ask r/kubernetes: What are you working on this week?
Looking into the Hierarchical Namespace Controller to see if it can simplify our heavily multi-tenanted clusters. So far so good!
-
RBAC and limited namespace access
HNC is designed for these kinds of scenarios: https://github.com/kubernetes-sigs/hierarchical-namespaces
-
Introduction to Multi-Tenancy in Kubernetes
Project HNC
secrets-store-csi-driver
-
Check your secrets into Git [video]
I'm not a fan of this approach. I think the Secrets Store CSI Driver (https://secrets-store-csi-driver.sigs.k8s.io/) has a better approach.
-
EKS secrets - Bitnami sealed secrets or KMS?
Secret Store CSI Driver is what we're playing with now. Pretty excellent.
-
How does your company do secret management? AWS/GCP/Azure/Vault/CyberArk etc. thoughts?
If you deploy on k8s, keep your eye on https://secrets-store-csi-driver.sigs.k8s.io/
- K8s secret management
-
Secret Management in Kubernetes: Approaches, Tools, and Best Practices
Considering the major limitations of using Kubernetes Secrets, the Kubernetes community is developing several new approaches: SIG projects like the Secrets Store CSI Driver, solutions like the External Secrets Operator that work with third-party secret managers, and options to seal secrets through tools like Bitnami’s sealed-secrets. To skip the tools and move directly to best practices, click here.
-
Azure AKS/Container App can't access Key vault using managed identity
Just to clarify, the Secrets Store CSI Driver is from the CNCF, not Microsoft. The only Microsoft piece is the portion that integrates with Key Vault. https://secrets-store-csi-driver.sigs.k8s.io/
-
Vault Secrets in K8S, use CRD Injector ?
https://secrets-store-csi-driver.sigs.k8s.io/ and https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-secret-store-driver
-
Shhhh... Kubernetes Secrets Are Not Really Secret!
The Secrets Store CSI Driver is a native upstream Kubernetes driver that can be used to abstract where the secret is stored from the workload. If you want to use a cloud provider's secret manager without exposing the secrets as Kubernetes Secret objects, you can use the CSI Driver to mount secrets as volumes in your pods. This is a great option if you use a cloud provider to host your Kubernetes cluster. The driver supports many cloud providers and can be used with different secret managers.
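The mount-as-volume flow described above takes two pieces: a SecretProviderClass naming the backing secret manager, and a CSI volume in the pod spec referencing it. A minimal sketch, assuming the AWS provider is installed and pod identity is already configured; the names `app-secrets` and `my-api-key` are hypothetical:

```yaml
# Describes which external secrets to fetch and from which provider.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets
spec:
  provider: aws                  # other providers: azure, gcp, vault
  parameters:
    objects: |
      - objectName: "my-api-key"
        objectType: "secretsmanager"
---
# The pod mounts the secret as a file; no Kubernetes Secret object
# is created unless you opt in via secretObjects/syncSecret.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: app-secrets
```

The secret content then appears at `/mnt/secrets/my-api-key` inside the container, fetched at pod start rather than stored in etcd.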
-
SealedSecrets or external secret operator?
If you want security, they are both bad. Use something like your secret manager's API directly in your app, or https://secrets-store-csi-driver.sigs.k8s.io/; this keeps the actual secrets out of etcd and environment variables and gives you more security.
- Secrets Management on Kubernetes: How do you handle it?
What are some alternatives?
vcluster - vCluster - Create fully functional virtual Kubernetes clusters - Each vcluster runs inside a namespace of the underlying k8s cluster. It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.
kubernetes-external-secrets - Integrate external secret management systems with Kubernetes
capsule - Multi-tenancy and policy-based framework for Kubernetes.
argocd-vault-plugin - An Argo CD plugin to retrieve secrets from Secret Management tools and inject them into Kubernetes secrets
rbac-manager - A Kubernetes operator that simplifies the management of Role Bindings and Service Accounts.
secrets-store-csi-driver-provider-gcp - Google Secret Manager provider for the Secret Store CSI Driver.
namespace-configuration-operator - The namespace-configuration-operator helps keep configurations related to Users, Groups, and Namespaces aligned with one or more policies specified as CRs.
external-secrets - External Secrets Operator reads information from a third-party service like AWS Secrets Manager and automatically injects the values as Kubernetes Secrets.
cluster-api-provider-nested - Cluster API Provider for Nested Clusters
ingress-nginx - Ingress-NGINX Controller for Kubernetes
multi-tenancy - A working place for multi-tenancy related proposals and prototypes.
sealed-secrets - A Kubernetes controller and tool for one-way encrypted Secrets