rancher vs lens
| | rancher | lens |
|---|---|---|
| Mentions | 89 | 113 |
| Stars | 22,430 | 22,130 |
| Growth | 1.6% | 0.6% |
| Activity | 9.9 | 9.3 |
| Latest commit | 4 days ago | about 2 months ago |
| Language | Go | TypeScript |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rancher
-
OpenTF Announces Fork of Terraform
Did something happen to the Apache 2 rancher? https://github.com/rancher/rancher/blob/v2.7.5/LICENSE RKE2 is similarly Apache 2: https://github.com/rancher/rke2/blob/v1.26.7%2Brke2r1/LICENS...
- Trouble with RKE2 HA Setup: Part 2
-
An overview of single-purpose Linux distributions
I think I'm confusing Rancher "proper" and RancherOS, sorry.
-
Noob question: Rancher does not have persistent storage and creates several new volumes when I start it (how to avoid it)?
You could refer to https://github.com/rancher/rancher/issues/37723, or file a ticket with "[question]" in the title.
You could also file this question at https://github.com/rancher/rancher/issues/ ; I think you could get some information from the Rancher team there.
-
Terraform code for kubernetes on vsphere?
I don't know to what extent you plan to use Kubernetes in the future, but if the aim is several huge production clusters, you should look into apps like Rancher: https://rancher.com
-
The Container Orchestrator Landscape
This seems like a pretty well written overview!
As someone who rather liked Docker Swarm (and still likes it; I run my homelab and private cloud stuff on it), it is a bit sad to see it winding down like this, even though there were attempts to capitalize on the nice set of simple functionality it brought to the table, like CapRover: https://caprover.com/
There is still some nice software to manage installs of it, like Portainer: https://www.portainer.io/ (which also works for Kubernetes, like a smaller version of Rancher).
Its resource usage is far lower than that of almost any Kubernetes distro that I've used (microk8s, K3s and K0s included), the Compose format is pretty much amazing for most smaller deployments, and Compose remains one of the better ways to run things locally, in addition to Swarm for remote deployments (Skaffold or other local K8s cluster solutions just feel complex in comparison).
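For reference, the kind of Compose file the comment has in mind works both for a local `docker compose up` and for a Swarm `docker stack deploy`; the service and image names here are illustrative, not from the original post:

```yaml
# docker-compose.yml (illustrative sketch)
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:               # the deploy: section is honored by Swarm's `docker stack deploy`
      replicas: 2
      restart_policy:
        condition: on-failure
```

The same file serving both local development and remote Swarm deployment is a large part of why the format is praised for smaller setups.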
And yet, that's probably not where the future lies. Kubernetes won. Well, Nomad is also pretty good, admittedly.
Though if you absolutely do need Kubernetes, personally I'd suggest that you look in the direction of Rancher for a simple UI to manage it, or at least drill down into the cluster state, without needing too much digging through a CLI: https://rancher.com/
Lots of folks actually like k9s as well, if you do like the TUI approach a bit more: https://k9scli.io/
But for the actual clusters, assuming that you ever want to self-host one, ideally a turnkey solution: RKE is good and K0s is also promising, but personally I'd go with K3s: https://k3s.io/, which has been really stable on DEB distros and mostly works okay on RPM ones (if you cannot afford OpenShift or to wait for MicroShift). My only pet peeve is that the Traefik ingress is a little under-documented, e.g. how to configure common use cases: an SSL certificate, one with an intermediate certificate, maybe a wildcard, or perhaps just Let's Encrypt, and how to set defaults vs. defining them per domain.
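For context, the single-node K3s install the comment recommends is a one-liner per the K3s docs (it needs root and network access, so treat this as a sketch rather than something to paste blindly):

```shell
# install K3s as a systemd service; Traefik ships as the default ingress controller
curl -sfL https://get.k3s.io | sh -

# afterwards, the kubeconfig lives at /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get nodes
```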
For the folks with thicker wallets, though, I'd suggest just giving in and paying someone to run a cluster for you: that way you'll get something vaguely portable, make many aspects of running it someone else's problem, and be able to leverage the actual benefits of working with the container orchestrator.
> To extend its reach across multiple hosts, Docker introduced Swarm mode in 2016. This is actually the second product from Docker to bear the name "Swarm" — a product from 2014 implemented a completely different approach to running containers across multiple hosts, but it is no longer maintained. It was replaced by SwarmKit, which provides the underpinnings of the current version of Docker Swarm.
On an unrelated note, this, at least to me, feels like pretty bad naming and management of the whole initiative, though. Of course, if the features are there, it shouldn't be enough to scare anyone away from the project, but at the same time it could have been a bit simpler.
-
I want to provide some free support for community, how should I start?
But I think once you have a good understanding of K8S internal (components, how thing work underlying, etc.), you can use some tool to help you provision / maintain k8s cluster easier (look for https://rancher.com/ and alternatives).
-
Rancher monitoring v1 to v2 upgrade fails with "V1 should be disabled but the operator is still being deployed"
Monitoring V1 should be disabled but the operator is still being deployed. Please file a bug with Rancher at https://github.com/rancher/rancher/issues/new.
-
Ask HN: What is your Kubernetes nightmare?
Late to the party, but figured I'd share my own story (some details obviously changed, but hopefully the spirit of the experience remains).
Suppose that you work in an org that successfully ships software in a variety of ways - as regular packaged software that runs on an OS directly (e.g. a .jar that expects a certain JDK version in the VM), or maybe even uses containers sometimes, be it with Nomad, Swarm or something else.
And then a project comes along that needs Kubernetes, because someone else made that choice for you (in some orgs it might be a requirement from the clients' side, others might want to be able to claim that their software runs on Kubernetes, and in other cases some dev might be padding their CV before leaving) and now you need to deal with the consequences.
But here's the thing - if the organization doesn't have enough buy-in into Kubernetes, it's as if you're starting everything from 0, especially if paying some cloud vendor to give you a managed cluster isn't in the cards, be it because of data storage requirements (even for dev environments), other compliance reasons or even just corporate policy.
So, I might be given a single VM on a server, with 8 GB of RAM for launching 4 or so Java/.NET services, as that is a decent amount of resources for doing things the old way. But now, I need to fit a whole Kubernetes cluster in there, which in most configurations eats resources like there's no tomorrow. Oh, and the colleagues also don't have too much experience working with Kubernetes, so some sort of a helpful UI might be nice to have, except that the org uses RPM distros and there are no resources for an install of OpenShift on that VM.
But how much can I even do with that amount of resources, then? Well, I did manage to get K3s (a certified K8s distro by Rancher) up and running, though my hopes of connecting it with the actual Rancher tool (https://rancher.com/) to act as a good web UI didn't succeed, mostly because of some weirdness with the cgroups support and Rancher running as a Docker container in many cases, which just kind of broke. I did get Portainer (https://www.portainer.io/) up and running instead, but back then I think there were certain problems with the UI, as it's still very much in active development and gradually receives lots of updates. I might have just gone with the Kubernetes dashboard, but admittedly its whole login flow isn't quite as intuitive as the alternatives.
That said, everything kind of broke down for a bit when I needed to set up the ingress. What if you have a wildcard certificate along the lines of `*.something.else.org.com` and want it to be used for all of your apps? Back in the day, you'd just set up Nginx or Apache as your reverse proxy and let it worry about SSL/TLS termination. That duty is now taken over by Kubernetes, except that by default K3s comes with Traefik as its ingress controller of choice and the documentation isn't exactly stellar.
So for getting this sort of configuration up and running, I needed to think about a HelmChartConfig for Traefik, a ConfigMap which references the secrets, a TLSStore to contain them, as well as creating the actual tls-secrets themselves from the appropriate files on the file system. It still feels a bit odd, and it would probably be an utter mess to get particular certificates working for some paths while using Let's Encrypt for others. In short, what previously would have been those very same files living on the file system plus a few (dozen?) lines of reverse proxy configuration is now a distributed mess of abstractions and actions which certainly needs some getting used to.
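To make that sequence concrete, here is a minimal sketch of the wildcard-certificate part on K3s with Traefik v2. The secret name, namespace and file paths are placeholders; the `TLSStore` named `default` is Traefik's mechanism for setting a cluster-wide default certificate:

```yaml
# First, create the TLS secret from the cert/key files on disk, e.g.:
#   kubectl -n kube-system create secret tls wildcard-tls \
#     --cert=/path/to/fullchain.pem --key=/path/to/key.pem
#
# Then point Traefik's default TLS store at it:
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default          # "default" is special: Traefik uses it when no cert is set per-route
  namespace: kube-system
spec:
  defaultCertificate:
    secretName: wildcard-tls
```

With this in place, Ingress routes without an explicit certificate fall back to the wildcard, which roughly replaces the handful of `ssl_certificate` lines an Nginx reverse proxy would have needed.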
Oh, and Portainer sometimes just gets confused and fails to figure out how to properly setup the routes, though I do have to say that at least MetalLB does its job nicely.
And then? Well, we can't just ship manifests directly, we also need Helm charts! But of course, in addition to writing those and setting up the CI for packaging them, you also need something running to store them, as well as any Docker images that you want. In lieu of going through all of the red tape to set that up on shared infrastructure (which would need cleanup policies, access controls and lots of planning so things don't break for other parties using it), instead I crammed in an instance of Nexus/Artifactory/Harbor/... on that very same server, with the very same resource limits, with deadlines still looming over my head.
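For the chart side specifically, the package-and-publish loop the comment alludes to is short in itself; the chart name and registry URL below are placeholders, and pushing to an OCI registry (Harbor, Artifactory, etc.) requires Helm 3.8+:

```shell
# package the chart directory into a versioned .tgz
helm package ./mychart

# log in to the registry, then push the archive (placeholder URL)
helm registry login registry.example.com
helm push mychart-0.1.0.tgz oci://registry.example.com/charts
```

The commands are simple; the red tape the comment describes is everything around them: standing up the registry, access control and cleanup policies.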
But that's not it, for software isn't developed in a vacuum. Throw in all of the regular issues with developing software, like not being 100% clear on each of the configuration values that the apps need (because developers are fallible, of course), changes to what they want to use, problems with DB initialization (of course, still needing an instance of PostgreSQL/MariaDB running on the very same server, which for whatever reason might get used as a shared DB) and so on.
In short, you take a process that already has pain points in most orgs and make it needlessly more complex. There are tangible benefits to using Kubernetes: once you find a setup that works (personally, Ubuntu LTS or a similar distro, a full Rancher install, maybe K3s as the underlying cluster or RKE/K3s/k0s on separate nodes, with Nginx for ingress, or a 100% separately managed ingress), then it's great and the standardization is almost like a superpower (as long as you don't go crazy with CRDs). Yet you need to pay a certain cost up front.
What could be done to alleviate some of the pain points?
In short, I think that:
- expect to need a lot more resources than previously: always have a separate node for managing your cluster and put any sorts of tools on it as well (like Portainer/Rancher), but run your app workloads on other nodes (K3s or k0s can still be not too demanding with resources for the most part)
lens
-
Mirantis K8s Lens closed its source
Nice commit message on the removal, “first draft of new readme”
https://github.com/lensapp/lens/commit/e1fc8869a9e0033fb2266...
Stuff like this is why it gets really hard to trust open source projects backed by a single company rather than a foundation. It seems we've entered a spectrum where open source outside a foundation is shareware until it's relicensed as non-OSI source-visible or closed.
- The Hater's Guide to Kubernetes
-
The Inner Workings of Kubernetes Management Frontends — A Software Engineer’s Perspective
Lens
-
Introduction to Helm: Comparison to its less-scary cousin APT
Generally I felt as if I was diving in the deepest of waters without the correct equipment, and that was horrifying. Unfortunately for me, I had to dive even deeper before getting equipped with tools like ArgoCD and k8slens. I had to start working with... HELM.
-
Imagine the best Kubernetes Dashboard. What does it have?
Indeed you can, with several "paid" features removed, like log tailing and pod shells. They deliberately hobbled the product. If you want to use Lens, my advice is pay for the supported version.
-
observing logs from Kubernetes pods without headaches
Yes, I know there is Lens, but it does not allow me to see logs of multiple pods at the same time and, even more importantly, it is not friendly to ephemeral clusters; in my case, with the help of kind, I am recreating the whole cluster from scratch each time.
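For what it's worth, plain kubectl can already tail several pods at once via a label selector; the namespace and label below are examples, not from the original post:

```shell
# stream logs from every pod matching the label, prefixing each line with the pod name;
# --max-log-requests raises the default cap of 5 concurrent streams
kubectl logs -n myapp -l app=web --prefix --follow --max-log-requests=10
```

Tools like stern wrap the same idea with nicer output, but the selector-based form above needs nothing beyond kubectl itself, which suits throwaway kind clusters.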
- Lazydocker
-
Cloud Native Workflow for *Private* AI Apps
Let's wait a few seconds for the pods to become green. I am using Lens; it's awesome, btw.
-
Fastest way to set up a k8s environment?
You probably don't need Rancher unless you need a GUI or manage multiple clusters, Lens or k9s might be a better fit for your use case.
What are some alternatives?
podman - Podman: A tool for managing OCI containers and pods.
microk8s - MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
kubesphere - The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management ⎈ 🖥 ☁️
k9s - 🐶 Kubernetes CLI To Manage Your Clusters In Style!
cluster-api - Home for Cluster API, a subproject of sig-cluster-lifecycle
kubespray - Deploy a Production Ready Kubernetes Cluster
lima - Linux virtual machines, with a focus on running containers
Portainer - Making Docker and Kubernetes management easy.
kubelogin - kubectl plugin for Kubernetes OpenID Connect authentication (kubectl oidc-login)
harvester - Open source hyperconverged infrastructure (HCI) software
octant - Highly extensible platform for developers to better understand the complexity of Kubernetes clusters.