Is there any alternative to Lens desktop software?
9 projects | reddit.com/r/kubernetes | 20 Jan 2023
Intentional. As you called out, they're moving to a more premium model. https://github.com/lensapp/lens/issues/6823
Mirantis is up to more shenanigans with Lens, removes logs and shell. OpenLens affected as well.
Improve Extension loading capabilities #6749
I think this thread is a little less flame-y: https://github.com/lensapp/lens/issues/6819
WebAssembly: Docker Without Containers
9 projects | news.ycombinator.com | 21 Dec 2022
Hey, so I thought I remembered your username. This isn’t the first interaction we’ve had, or I’ve seen you have, that follows this similar pattern. In fact it’s the third example from you under this post!
It’s not a particularly pleasant experience to discuss anything with you, as after you make a particularly vapid and usually ice-cold take that is rebuffed, you seem to just try to make snarky replies rather than engage.
Understand that if you post your takes here they may be discussed and challenged, and if you don’t want this then I would refrain from initially commenting.
In response to your comment: They do. All Kubernetes resources are typed with JSON-schema definitions. Because of course they are, how else would Kubernetes validate anything? https://kubernetesjsonschema.dev/
Anyone who’s used k8s at all knows this, if only from the error messages. From this you get autocompletion and a wide ecosystem of gui configuration tools. I like lens (https://k8slens.dev/).
What do you guys use to manage/monitor multiple clusters?
4 projects | reddit.com/r/kubernetes | 14 Dec 2022
Lens is no longer free, but the upstream OpenLens is; see https://github.com/lensapp/lens
What daily terminal based tools are you using for cluster management?
19 projects | reddit.com/r/kubernetes | 5 Dec 2022
Surprised none of y'all mentioned Lens
The issue he’s complaining about has been fixed: https://github.com/lensapp/lens/issues/1588
The checklist: Monitoring for Economy
3 projects | dev.to | 28 Nov 2022
There are many ways to see if instances are underutilized: using open-source tools such as the k9s CLI or Lens (if measuring the utilization of VMs which are part of Kubernetes clusters), or the cloud provider's console to see the memory and compute consumption of the provisioned VMs.
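As a rough sketch of that kind of check, assuming the tabular output format of `kubectl top nodes` (the node names and the 20% threshold here are made up; the sample output is inlined so the sketch runs without a cluster):

```shell
# Hypothetical helper: flag nodes whose CPU utilisation is low, based on
# `kubectl top nodes` output. In practice you would pipe the real command
# into awk instead of this inlined sample.
kubectl_top_sample='NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node-a   250m         12%    2048Mi          26%
node-b   1800m        90%    6144Mi          77%'

# Skip the header row, strip the % sign from the CPU% column,
# and report anything under 20% as a candidate for downsizing.
echo "$kubectl_top_sample" |
  awk 'NR > 1 { sub(/%/, "", $3); if ($3 + 0 < 20) print $1 " looks underutilised: " $3 "% CPU" }'
```

The same idea works with `kubectl top pods -A` if you want per-workload rather than per-node numbers.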
Why Kubernetes Is So Complex
4 projects | dev.to | 17 Oct 2022
With some experience and a user interface like Lens, debugging becomes easier. And there are great monitoring solutions for production use. But this is still a big hurdle for beginners taking their first steps with Kubernetes.
Noob question: Rancher does not have persistent storage and creates several new volumes when I start it (how to avoid it)?
2 projects | reddit.com/r/rancher | 29 Jan 2023
Actually, you could refer to https://github.com/rancher/rancher/issues/37723. File the ticket with the title "[question]".
You could file this question at https://github.com/rancher/rancher/issues/; I think you could get some information from the Rancher team.
Terraform code for kubernetes on vsphere?
3 projects | reddit.com/r/devops | 30 Aug 2022
I don't know to what extent you plan to use Kubernetes in the future, but if the aim is several large production clusters, you should look into apps like Rancher: https://rancher.com
The Container Orchestrator Landscape
8 projects | news.ycombinator.com | 24 Aug 2022
This seems like a pretty well written overview!
As someone who rather liked Docker Swarm (and still likes it, running my homelab and private cloud stuff on it), it is a bit sad to see it winding down like this, even though there were attempts to capitalize on the nice set of simple functionality it brought to the table, like CapRover: https://caprover.com/
Even though there is still some nice software to manage installs of it, like Portainer: https://www.portainer.io/ (which also works for Kubernetes, like a smaller version of Rancher)
And that's even though its resource usage is far lower than that of almost any Kubernetes distro I've used (microk8s, K3s and K0s included), the Compose format is pretty much amazing for most smaller deployments, and Compose is still one of the better ways to run things locally in addition to Swarm for remote deployments (Skaffold or other local K8s cluster solutions just feel complex in comparison).
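The Compose format's compactness is easy to illustrate; a minimal sketch of a stack file that `docker stack deploy` would accept (the service name, image and ports are placeholders):

```yaml
version: "3.8"
services:
  web:
    image: nginx:alpine        # placeholder image
    ports:
      - "8080:80"
    deploy:                    # Swarm-only section; ignored by plain `docker compose up`
      replicas: 2
      restart_policy:
        condition: on-failure
```

The same file drives local development (`docker compose up`) and a Swarm deployment (`docker stack deploy -c stack.yml demo`), which is much of the appeal.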
And yet, that's probably not where the future lies. Kubernetes won. Well, Nomad is also pretty good, admittedly.
Though if you absolutely do need Kubernetes, personally I'd suggest that you look in the direction of Rancher for a simple UI to manage it, or at least drill down into the cluster state, without needing too much digging through a CLI: https://rancher.com/
Lots of folks actually like k9s as well, if you do like the TUI approach a bit more: https://k9scli.io/
But for the actual clusters, assuming that you ever want to self-host one, ideally as a turnkey solution: RKE is good, K0s is also promising, but personally I'd go with K3s: https://k3s.io/ It has been really stable on DEB distros and mostly works okay on RPM ones (if you cannot afford OpenShift or to wait for MicroShift). My only pet peeve is that the Traefik ingress is a little under-documented, e.g. how to configure common use cases like an SSL certificate, one with an intermediate certificate, maybe a wildcard, or perhaps just Let's Encrypt, and how to set defaults vs. defining them per domain.
For the folks with thicker wallets, though, I'd suggest just giving in and paying someone to run a cluster for you: that way you'll get something vaguely portable, make many aspects of running it someone else's problem, and be able to leverage the actual benefits of working with the container orchestrator.
> To extend its reach across multiple hosts, Docker introduced Swarm mode in 2016. This is actually the second product from Docker to bear the name "Swarm" — a product from 2014 implemented a completely different approach to running containers across multiple hosts, but it is no longer maintained. It was replaced by SwarmKit, which provides the underpinnings of the current version of Docker Swarm.
On an unrelated note, this, at least to me, feels like pretty bad naming and management of the whole initiative, though. Of course, if the features are there, it shouldn't be enough to scare anyone away from the project, but at the same time it could have been a bit simpler.
I want to provide some free support for community, how should I start?
2 projects | reddit.com/r/devops | 3 Aug 2022
But I think once you have a good understanding of K8s internals (components, how things work under the hood, etc.), you can use some tools to help you provision and maintain a K8s cluster more easily (look at https://rancher.com/ and alternatives).
Rancher monitoring v1 to v2 upgrade fails with "V1 should be disabled but the operator is still being deployed"
2 projects | reddit.com/r/rancher | 11 Jul 2022
Monitoring V1 should be disabled but the operator is still being deployed. Please file a bug with Rancher at https://github.com/rancher/rancher/issues/new.
Ask HN: What is your Kubernetes nightmare?
8 projects | news.ycombinator.com | 27 Jun 2022
Late to the party, but figured I'd share my own story (some details obviously changed, but hopefully the spirit of the experience remains).
Suppose that you work in an org that successfully ships software in a variety of ways - as regular packaged software that runs on an OS directly (e.g. a .jar that expects a certain JDK version in the VM), or maybe even uses containers sometimes, be it with Nomad, Swarm or something else.
And then a project comes along that needs Kubernetes, because someone else made that choice for you (in some orgs, it might be a requirement from the side of clients, others might want to be able to claim that their software runs on Kubernetes, in other cases some dev might be padding their CV before leaving) and now you need to deal with the consequences.
But here's the thing - if the organization doesn't have enough buy-in into Kubernetes, it's as if you're starting everything from 0, especially if paying some cloud vendor to give you a managed cluster isn't in the cards, be it because of data storage requirements (even for dev environments), other compliance reasons or even just corporate policy.
So, I might be given a single VM on a server, with 8 GB of RAM for launching 4 or so Java/.NET services, as that is a decent amount of resources for doing things the old way. But now, I need to fit a whole Kubernetes cluster in there, which in most configurations eats resources like there's no tomorrow. Oh, and the colleagues also don't have too much experience working with Kubernetes, so some sort of a helpful UI might be nice to have, except that the org uses RPM distros and there are no resources for an install of OpenShift on that VM.
But how much can I even do with that amount of resources, then? Well, I did manage to get K3s (a certified K8s distro by Rancher) up and running, though my hopes of connecting it with the actual Rancher tool (https://rancher.com/) to act as a good web UI didn't succeed. Mostly because of some weirdness with the cgroups support and Rancher running as a Docker container in many cases, which just kind of broke. I did get Portainer (https://www.portainer.io/) up and running instead, but back then I think there were certain problems with the UI, as it's still very much in active development and gradually receives lots of updates. I might have just gone with Kubernetes dashboard, but admittedly the whole login thing isn't quite as intuitive as the alternatives.
That said, everything kind of broke down for a bit when I needed to set up the ingress. What if you have a wildcard certificate along the lines of *.something.else.org.com and want it to be used for all of your apps? Back in the day, you'd just set up Nginx or Apache as your reverse proxy and let it worry about SSL/TLS termination, a duty which is now taken over by Kubernetes, except that by default K3s comes with Traefik as its ingress controller of choice and the documentation isn't exactly stellar.
So to get this sort of configuration up and running, I needed to think about a HelmChartConfig for Traefik, a ConfigMap which references the secrets, a TLSStore to contain them, as well as creating the actual tls-secrets themselves from the appropriate files on the file system. It still feels a bit odd, and would probably be an utter mess to get particular certificates working for some other paths, with Let's Encrypt for yet others. In short, what previously would have been those very same files living on the file system plus a few (dozen?) lines of reverse proxy configuration is now a distributed mess of abstractions and actions which certainly needs some getting used to.
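For the common "one wildcard certificate for everything" case, the moving parts boil down to something like the following sketch (all names are hypothetical; this assumes the Traefik v2 CRDs that ship with K3s, where the API group may be `traefik.containo.us` or `traefik.io` depending on the version):

```yaml
# 1. The wildcard certificate, created from files on disk, e.g.:
#    kubectl -n kube-system create secret tls wildcard-cert \
#      --cert=wildcard.crt --key=wildcard.key
# 2. A TLSStore telling Traefik to use that secret as the default certificate
#    for any route that doesn't specify its own:
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default              # Traefik only honours the store named "default"
  namespace: kube-system
spec:
  defaultCertificate:
    secretName: wildcard-cert
```

Per-domain certificates would then override this default via their own `tls.secretName` on the individual Ingress or IngressRoute objects.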
Oh, and Portainer sometimes just gets confused and fails to figure out how to properly set up the routes, though I do have to say that at least MetalLB does its job nicely.
And then? Well, we can't just ship manifests directly, we also need Helm charts! But of course, in addition to writing those and setting up the CI for packaging them, you also need something running to store them, as well as any Docker images that you want. In lieu of going through all of the red tape to set that up on shared infrastructure (which would need cleanup policies, access controls and lots of planning so things don't break for other parties using it), instead I crammed in an instance of Nexus/Artifactory/Harbor/... on that very same server, with the very same resource limits, with deadlines still looming over my head.
But that's not it, for software isn't developed in a vacuum. Throw in all of the regular issues with developing software, like not being 100% clear on each of the configuration values that the apps need (because developers are fallible, of course), changes to what they want to use, problems with DB initialization (of course, still needing an instance of PostgreSQL/MariaDB running on the very same server, which for whatever reason might get used as a shared DB) and so on.
In short, you take a process that already has pain points in most orgs and make it needlessly more complex. There are tangible benefits to using Kubernetes: once you find a setup that works (personally, Ubuntu LTS or a similar distro, full Rancher install, maybe K3s as the underlying cluster or RKE/K3s/k0s on separate nodes, with Nginx for ingress, or a 100% separately managed ingress), it's great, and the standardization is almost like a superpower (as long as you don't go crazy with CRDs). Yet you need to pay a certain cost up front.
What could be done to alleviate some of the pain points?
In short, I think that:
- expect to need a lot more resources than previously: always have a separate node for managing your cluster and put any sorts of tools on it as well (like Portainer/Rancher), but run your app workloads on other nodes (K3s or k0s can still be not too demanding with resources for the most part)
Don't Use Kubernetes, Yet
7 projects | news.ycombinator.com | 18 Jun 2022
A few years ago, I would have said no. Now, I'm cautiously optimistic about it.
Personally, I think that you can use something like Rancher (https://rancher.com/) or Portainer (https://www.portainer.io/) for easier management and/or dashboard functionality, to make the learning curve a bit more approachable. For example, you can create a deployment through the UI by following a wizard that also offers you configuration that you might want to use (e.g. resource limits) and then later retrieve the YAML manifest, should you wish to do that. They also make interacting with Helm charts (pre-made packages) more easy.
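The kind of manifest such a wizard would hand back is small enough to show; a minimal sketch of a Deployment with the resource limits mentioned above (app name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app             # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:alpine    # placeholder image
          resources:
            requests:            # what the scheduler reserves
              cpu: 100m
              memory: 128Mi
            limits:              # hard caps enforced at runtime
              cpu: 500m
              memory: 256Mi
```

Starting from a UI and then reading the generated YAML like this is a gentler way to learn the manifest structure than writing it from scratch.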
Furthermore, there are certified distributions which are not too resource-hungry, especially if you need to self-host clusters. For example, K3s (https://k3s.io/) and k0s (https://k0sproject.io/) are both production-ready up to a certain scale, don't consume a lot of memory, and are easy to set up and work with while being mostly OS-agnostic (DEB distros will always work best; RPM ones have challenges as soon as you look elsewhere than OpenShift, which is probably only good for enterprises).
If you can automate cluster setup with Ansible and treat the clusters as something you can easily re-deploy when you inevitably screw up (you might not, but better to plan for failure), you should be good! Even Helm charts have gotten pretty easy to write and deploy, and K8s works nicely with most CI/CD tools out there, given that kubectl lends itself pretty well to scripting.
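An Ansible-driven setup along those lines can be sketched as a small playbook; this assumes the official K3s install script and default file locations, and the host group name is made up:

```yaml
# Hypothetical playbook fragment: installs single-node K3s via the official
# install script, then pulls the kubeconfig back for local kubectl use.
- name: Install K3s (single-node sketch)
  hosts: k3s_server            # assumed inventory group
  become: true
  tasks:
    - name: Download the K3s install script
      ansible.builtin.get_url:
        url: https://get.k3s.io
        dest: /tmp/k3s-install.sh
        mode: "0755"

    - name: Run the installer idempotently
      ansible.builtin.command: /tmp/k3s-install.sh
      args:
        creates: /usr/local/bin/k3s   # skip if K3s is already installed

    - name: Fetch the kubeconfig for local use
      ansible.builtin.fetch:
        src: /etc/rancher/k3s/k3s.yaml
        dest: ./kubeconfig
        flat: true
```

Because the install is scripted end to end, tearing a broken cluster down and re-running the play is cheaper than debugging it in place, which is exactly the "plan for failure" posture described above.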
Building an Internal Kubernetes Platform
6 projects | dev.to | 16 Jun 2022
Alternatively, it is also possible to use a multi-cloud or hybrid-cloud approach, which combines several cloud providers or even public and private clouds. Special tools such as Rancher and OpenShift can be very useful to run this type of system.
Five Dex Alternatives for Kubernetes Authentication
6 projects | dev.to | 16 Jun 2022
Rancher provides a Rancher authentication proxy that allows user authentication from a central location. With this proxy, you can set the credential for authenticating users that want to access your Kubernetes clusters. You can create, view, update, or delete users through Rancher’s UI and API.
What are some alternatives?
podman - Podman: A tool for managing OCI containers and pods.
k9s - 🐶 Kubernetes CLI To Manage Your Clusters In Style!
microk8s - MicroK8s is a small, fast, single-package Kubernetes for developers, IoT and edge.
kubesphere - The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management ⎈ 🖥 ☁️
cluster-api - Home for Cluster API, a subproject of sig-cluster-lifecycle
lima - Linux virtual machines, typically on macOS, for running containerd
Portainer - Making Docker and Kubernetes management easy.
harvester - Open source hyperconverged infrastructure (HCI) software
kubespray - Deploy a Production Ready Kubernetes Cluster
kubelogin - kubectl plugin for Kubernetes OpenID Connect authentication (kubectl oidc-login)
octant - Highly extensible platform for developers to better understand the complexity of Kubernetes clusters.