| | kubelogin | stern |
|---|---|---|
| Mentions | 14 | 16 |
| Stars | 1,556 | 2,905 |
| Growth | - | 6.5% |
| Activity | 8.8 | 6.0 |
| Latest commit | 3 days ago | about 20 hours ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kubelogin
-
Giving Kyma a little spin ... a SpinKube
Authenticating with Kyma is (in my opinion) an unnecessary challenge, as it leverages the OIDC-login plugin for kubectl. You can find a description of the setup here. This works fine on a Mac but can give you some headaches on Windows and Linux machines, especially when combined with restrictive corporate environments. For Windows, I can only recommend installing krew via Chocolatey and then installing the OIDC plugin via kubectl krew install oidc-login. At least for me, that was the only way to get this working on Windows.
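Once the plugin is installed, the kubeconfig side of the setup looks roughly like this, per kubelogin's documented exec-plugin configuration (a sketch; the issuer URL and client ID below are placeholders for your own provider's values):

```yaml
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
      - oidc-login
      - get-token
      # placeholders: substitute your provider's issuer and client ID
      - --oidc-issuer-url=https://issuer.example.com
      - --oidc-client-id=YOUR_CLIENT_ID
```

With this in place, any kubectl command against the cluster triggers the browser-based OIDC login and caches the token.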
-
Windows auth with K8s on prem
It is sort of a roundabout way, but I sync Active Directory to a Keycloak realm, then use OIDC auth with kube-oidc-proxy (https://github.com/jetstack/kube-oidc-proxy) and kubelogin (https://github.com/int128/kubelogin) for OIDC-based auth to the api server.
-
Kubernetes in production.
Yes, I set up a cluster with no single points of failure. That means an HA setup for the external load balancer: I use HAProxy for my ELB, with 2 instances running VRRP + keepalived to provide HA for the ingress controller. The control plane is private, accessible only from localhost. I set up kube-oidc-proxy (https://github.com/jetstack/kube-oidc-proxy) to expose the API server with single sign-on on the ingress controller, and use the kubelogin plugin (https://github.com/int128/kubelogin) to provide OIDC support to kubectl. Keycloak handles OIDC/OAuth2/SAML and syncs to Active Directory, and groups in Active Directory control access to clusters. Devs each get their own namespace in the dev cluster, with mostly cluster-admin access to their namespace. Staging/prod clusters are locked down, with read-only access for devs. Thanks to the OIDC auth to the API server, when employees are onboarded and offboarded we only need to add or remove them from groups in Active Directory, and everything else syncs automatically.
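The per-namespace access described above maps naturally onto a RoleBinding whose subject is an OIDC group claim. A minimal sketch (the namespace, group name, and oidc: prefix are hypothetical and depend on your apiserver's --oidc-groups-prefix setting):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-alice-admin
  namespace: dev-alice          # hypothetical per-dev namespace
subjects:
- kind: Group
  name: oidc:k8s-dev-alice      # hypothetical group, synced from Active Directory via Keycloak
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                   # built-in ClusterRole, scoped to the namespace by this binding
  apiGroup: rbac.authorization.k8s.io
```

Because the binding targets a group rather than individual users, onboarding and offboarding stay entirely in Active Directory.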
-
Gitlab token exchange with keycloak to execute deployments with kubectl
I've successfully configured kube-apiserver to authenticate users through OIDC (https://github.com/int128/kubelogin), so all the users from my Keycloak realm can access the cluster with their credentials.
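Wiring the API server to an OIDC issuer such as Keycloak comes down to a handful of kube-apiserver flags, typically added to the static pod manifest. A sketch (the issuer URL, realm, and client ID are placeholders):

```yaml
# fragment of /etc/kubernetes/manifests/kube-apiserver.yaml, under spec.containers[0].command
- --oidc-issuer-url=https://keycloak.example.com/realms/myrealm   # placeholder issuer
- --oidc-client-id=kubernetes                                     # placeholder client ID
- --oidc-username-claim=preferred_username
- --oidc-username-prefix=oidc:
- --oidc-groups-claim=groups
- --oidc-groups-prefix=oidc:
```

The username/groups prefixes keep OIDC identities from colliding with built-in Kubernetes users and groups, and they must match whatever prefix your RBAC bindings use.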
-
Getting started with kubectl plugins
Link to GitHub Repository
-
Why are there so many OIDC SSO options for Kubernetes?
kubelogin (helper for Kubernetes' built-in OIDC support)
-
RBAC MANAGEMENT
I use the kubelogin plugin for kubectl (https://github.com/int128/kubelogin) along with kube-oidc-proxy (https://github.com/jetstack/kube-oidc-proxy), using Keycloak as my OIDC provider (https://www.keycloak.org) with LDAP synchronization to Active Directory.
-
Manage user authentication in on-prem cluster
Dex OAuth and kubelogin. We happen to use Google auth in our org, but Dex is pretty flexible. You only have to have a way to distribute server certificates. We then have documented script commands to pull certs and create kubectl config files. OpenUnison always looked interesting, but Dex has been good enough for our uses.
-
k8s dex authentications
With a working dex/OIDC configuration, you could use: https://github.com/int128/kubelogin
- A kubectl plugin for Kubernetes OpenID Connect (OIDC) authentication
stern
-
☸️ Kubernetes: From your docker-compose file to a cluster with Kompose
    deploy:
      stage: deploy
      image: alpine/k8s:1.29.1
      variables:
        NAMESPACE: $CI_COMMIT_REF_SLUG
      before_script:
        # init namespace
        - kubectl config use-context $KUBE_CONTEXT
        - kubectl create namespace $NAMESPACE || true
        # download tools
        - curl --show-error --silent --location https://github.com/stern/stern/releases/download/v1.22.0/stern_1.22.0_linux_amd64.tar.gz | tar zx --directory /usr/bin/ stern && chmod 755 /usr/bin/stern && stern --version
        - curl --show-error --silent --location https://github.com/kubernetes/kompose/releases/download/v1.32.0/kompose-linux-amd64 -o /usr/local/bin/kompose && chmod a+x /usr/local/bin/kompose && kompose version
        # show logs asynchronously; timeout to avoid hanging indefinitely when an error occurs in the script section
        - timeout 1200 stern -n $NAMESPACE "app-" --tail=0 --color=always & # in background, tail new logs of any (current and incoming) pod matching this regex
        - timeout 1200 kubectl -n $NAMESPACE get events --watch-only & # in background, tail new events
      script:
        # first delete CrashLoopBackOff pods, which pollute the logs
        - kubectl -n $NAMESPACE delete pod `kubectl -n $NAMESPACE get pods --selector app.kubernetes.io/component=$MODULE | awk '$3 == "CrashLoopBackOff" {print $1}'` || true
        # now deploy
        - kompose convert --out k8s/
        - kubectl apply -n $NAMESPACE -f k8s/
        - echo -e "\e[93;1mWaiting for the new app version to be fully operational...\e[0m"
        # wait for successful deployment
        - kubectl -n $NAMESPACE rollout status deploy/app-db
        - kubectl -n $NAMESPACE rollout status deploy/app-back
        - kubectl -n $NAMESPACE rollout status deploy/app-front
        # on any error before this line, the script will still wait for these background threads to complete, so the initial timeout is important; adding these commands to after_script does not help
        - pkill stern || true
        - pkill kubectl || true
      after_script:
        # show namespace content
        - kubectl config use-context $KUBE_CONTEXT
        - kubectl -n $NAMESPACE get deploy,service,ingress,pod
-
stern VS stern - a user-suggested alternative
2 projects | 11 Dec 2023
The old repo is dead
-
🦊 GitLab CI: 10+ Best Practices to Avoid Widespread Anti-patterns
    node-and-git:
      image: node:18.10-alpine
      before_script:
        - apk --no-cache add git

    kubectl-and-stern:
      image: alpine/k8s:1.22.13
      before_script:
        # install stern
        - curl --show-error --silent --location https://github.com/stern/stern/releases/download/v1.22.0/stern_1.22.0_linux_amd64.tar.gz | tar zx --directory /usr/bin/ stern && chmod 755 /usr/bin/stern

    playwright-and-kubectl:
      image: mcr.microsoft.com/playwright:v1.35.1-focal
      before_script:
        # install kubectl
        - curl --show-error --silent --location --remote-name https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubectl && chmod +x ./kubectl && mv ./kubectl /usr/local/bin/
-
K9s: A lazier way to manage Kubernetes Clusters
I'll add stern (https://github.com/stern/stern) to that - follow logs from multiple pods easily.
-
What k8s related tool you wish you knew earlier?
Multi pod and container log tailing for Kubernetes https://github.com/stern/stern
-
What's your "IDE" of choice nowadays?
-
How to Deploy and Scale Strapi on a Kubernetes Cluster 1/2
stern v1.22.0
-
Getting started with kubectl plugins
Link to GitHub Repository
-
Julia Evans: Tips for Analyzing Logs
If you are using Kubernetes, I highly recommend using https://github.com/stern/stern
-
What daily terminal based tools are you using for cluster management?
Stern: https://github.com/stern/stern for log streaming
What are some alternatives?
lens - Lens - The way the world runs Kubernetes
kubetail - Bash script to tail Kubernetes logs from multiple pods at the same time
pam-keycloak-oidc - PAM module connecting to Keycloak for user authentication using OpenID Connect/OAuth2, with MFA/2FA/TOTP support
awesome-k8s-resources - A curated list of awesome Kubernetes tools and resources.
kubectl-neat - Clean up Kubernetes yaml and json output to make it readable
kail - kubernetes log viewer
okta-k8s-oidc-terraform-example - An example repo showcasing setting up Okta OIDC using Terraform
cw - The best way to tail AWS CloudWatch Logs from your terminal
kubectl-kubesec - Security risk analysis for Kubernetes resources
openlens-node-pod-menu - Node and pod menus for OpenLens
ksniff - Kubectl plugin to ease sniffing on kubernetes pods using tcpdump and wireshark
saw - Fast, multi-purpose tool for AWS CloudWatch Logs