Portainer vs rancher

Compare Portainer vs rancher and see what their differences are.

                 Portainer        rancher
Mentions         262              76
Stars            24,326           20,504
Growth           2.5%             1.1%
Activity         5.9              9.9
Latest commit    4 days ago       about 23 hours ago
Language         Go               Go
License          zlib License     Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Portainer

Posts with mentions or reviews of Portainer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-01-29.
  • Ask HN: What is the best source to learn Docker in 2023?
    8 projects | news.ycombinator.com | 29 Jan 2023
    I'd say that going from Docker Compose to Docker Swarm is the first logical step, because Swarm is included in a Docker install and uses the same Compose format, just with more parameters, such as deployment constraints that let you pick which node hostname or tag a certain container should be scheduled on (a sketch of this follows this post): https://docs.docker.com/compose/compose-file/compose-file-v3... That said, you won't see a lot of Docker Swarm professionally anymore; that's just the way the job market is, despite it being completely sufficient for many smaller projects out there. I'm running it in prod successfully so far and it's great.

    Another reasonably lightweight alternative would be HashiCorp Nomad: it's free, simple to deploy, supports more than just container workloads, and its HCL format isn't too bad either, as long as you keep things simple: https://www.hashicorp.com/products/nomad That said, if you don't buy into the HashiStack too much, there won't be much benefit in learning HCL and translating the contents of the various example docker-compose.yml files that you see in repos out there, although their other tools, for example Consul (a service mesh), are nice. This is a nice but also a bit niche option.

    Lastly, there is Kubernetes. It's complicated, even more so when you get into solutions like Istio, typically eats up lots of resources, can be difficult to manage and debug, but does pretty much anything that you might need, as long as you have either enough people to administer it, or a wallet that's thick enough for you to pay one of the cloud vendors to do it for you. Personally, I'd look into the lightweight clusters at first, like k0s, MicroK8s, or perhaps the K3s project in particular: https://k3s.io/

    I'd also suggest that if you get this far, don't be afraid to look into options for dashboards and web-based UIs to make exploring things easier:

      - for Docker Swarm and Kubernetes there is Portainer: https://www.portainer.io/
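
To make the deployment constraints mentioned in the post above concrete, here is a minimal sketch of a Compose v3 file for Docker Swarm; the service name, image, node hostname and label are placeholders rather than anything from the original post.

```yaml
# Minimal docker-compose.yml sketch for Docker Swarm (Compose file format v3).
# Service name, image, node hostname and label are hypothetical placeholders.
version: "3.8"

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 2
      placement:
        constraints:
          # Pin the service to a specific node by hostname...
          - node.hostname == worker-1
          # ...or schedule only on nodes carrying a custom label ("tag"):
          # - node.labels.tier == frontend
```

The placement constraints only take effect when the stack is deployed to a Swarm, e.g. with docker stack deploy -c docker-compose.yml mystack; plain Compose ignores them.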
  • Is there a good example of an open source non-trivial (DB connection, authentication, authorization, data validation, tests, etc...) Go API?
    14 projects | reddit.com/r/golang | 25 Jan 2023
  • What are your top self hosted services that you are very satisfied with ?
    71 projects | reddit.com/r/selfhosted | 17 Jan 2023
    Portainer - Makes managing my homelab, gateway and (Pi0) DNS server extremely easy and fun. Traefik - Great companion for the above. For those who somehow don't know it yet: a simple, yet extremely powerful reverse proxy. Docker - Should be obvious, but I would feel bad if I didn't give it a shoutout. If you haven't heard of it - go and learn, please, it'll make your life beautiful.
  • Homepage for 2023
    14 projects | reddit.com/r/homedash | 16 Jan 2023
    Portainer - Web UI for managing Docker Containers
  • Docker 2.0 went from $11M to $135M in 2 years
    7 projects | news.ycombinator.com | 13 Jan 2023
    > Why there are needs to use docker GUIs?

    Because to some people GUIs are more approachable and in some cases objectively better (e.g. telling the state of things at a glance and using screen real estate efficiently, with graphs and whatnot), whereas the ways in which they're worse might not be dealbreakers (e.g. lack of automation, given that there can still be APIs or access to the underlying cluster anyway).

    For an example of this, see pieces of software that one can use to manage orchestrators:

    - Portainer: https://www.portainer.io/

    - Rancher: https://www.rancher.com/products/rancher

    Some orchestrators even include dashboards on their own:

    - Kubernetes dashboard: https://kubernetes.io/docs/tasks/access-application-cluster/...

    - Nomad web UI: https://developer.hashicorp.com/nomad/tutorials/web-ui

    And some of that applies to running regular containers and managing them locally: for many it can be useful to be able to just click around to discover more details about a container, as well as what's using storage and so on. Thankfully the CLIs of Docker and competing runtimes are pretty well structured as they are, but I guess it's just a different type of UX.

    At the end of the day, what works for you, or even what you find comfortable to use, might not be the case for someone else and vice versa. It's definitely nice to have that choice in the first place, and to know the various options out there.

  • My Raspberry Pi 4 Dashboard
    11 projects | reddit.com/r/selfhosted | 10 Jan 2023
    - Portainer
  • Docker, Tailscale and Caddy with HTTPS. A love story!
    3 projects | reddit.com/r/Tailscale | 7 Jan 2023
    Breaking it down a bit more: 'handle_path /docker/' means to handle calls to http://example.tailnet-def456.ts.net/docker/, and 'reverse_proxy / portainer:9000' means to reverse proxy those calls to "portainer" (that's the container name on the Docker network) on port 9000. That's where I have hosted my Docker manager (https://www.portainer.io/)
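
To make the container-name lookup in the post above concrete, here is a minimal, hypothetical docker-compose.yml sketch in which Caddy and Portainer share a Docker network, so a Caddyfile upstream of portainer:9000 resolves to the Portainer container; the service names, volumes and network name are assumptions, not the poster's actual setup.

```yaml
# Hypothetical sketch: Caddy and Portainer on a shared Docker network so the
# Caddyfile can use "portainer:9000" as an upstream. Names/volumes are examples.
version: "3.8"

services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # The Caddyfile with the handle_path/reverse_proxy directives quoted above.
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    networks:
      - proxy

  portainer:
    image: portainer/portainer-ce
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    networks:
      - proxy

networks:
  proxy:

volumes:
  portainer_data:
```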
  • Ask HN: What's on Your Home Server?
    52 projects | news.ycombinator.com | 5 Jan 2023
  • Anybody have a good dashboard tool recommendation?
    3 projects | reddit.com/r/HomeServer | 30 Dec 2022
    From purely an administration standpoint, I'd recommend Cockpit. For Docker, I'd also recommend Portainer. Maybe for Minecraft, try out Pterodactyl - I haven't used it myself, but I've heard good things about it.
  • Most used selfhosted services in 2022?
    103 projects | reddit.com/r/selfhosted | 27 Dec 2022
    Portainer - Web UI for managing Docker Containers

rancher

Posts with mentions or reviews of rancher. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-08-30.
  • Terraform code for kubernetes on vsphere?
    3 projects | reddit.com/r/devops | 30 Aug 2022
    I don't know to what extent you plan to use Kubernetes in the future, but if it is aimed at becoming several huge production clusters, you should look into apps like Rancher: https://rancher.com
  • The Container Orchestrator Landscape
    8 projects | news.ycombinator.com | 24 Aug 2022
    This seems like a pretty well written overview!

    As someone who rather liked Docker Swarm (and still likes it, running my homelab and private cloud stuff on it), it is a bit sad to see it winding down like this, even though there were attempts to capitalize on the nice set of simple functionality that it brought to the table, like CapRover: https://caprover.com/

    There is also still some nice software to manage installs of it, like Portainer: https://www.portainer.io/ (which also works for Kubernetes, like a smaller version of Rancher).

    The resource usage is far lower than that of almost any Kubernetes distro I've used (microk8s, K3s and k0s included), the Compose format is pretty much amazing for most smaller deployments, and Compose is still one of the better ways to run things locally, in addition to Swarm for remote deployments (Skaffold or other local K8s cluster solutions just feel complex in comparison).

    And yet, that's probably not where the future lies. Kubernetes won. Well, Nomad is also pretty good, admittedly.

    Though if you absolutely do need Kubernetes, personally I'd suggest looking in the direction of Rancher for a simple UI to manage it, or at least to drill down into the cluster state without too much digging through a CLI: https://rancher.com/

    Lots of folks actually like k9s as well, if you do like the TUI approach a bit more: https://k9scli.io/

    But for the actual clusters, assuming that you ever want to self-host one (ideally a turnkey solution): RKE is good, k0s is also promising, but personally I'd go with K3s: https://k3s.io/ It has been really stable on DEB distros and mostly works okay on RPM ones (if you cannot afford OpenShift or to wait for MicroShift). My only pet peeve is that the Traefik ingress is a little under-documented (e.g. how to configure common use cases, like an SSL certificate, one with an intermediate certificate, maybe a wildcard, or perhaps just using Let's Encrypt, and how to set defaults vs defining them per domain).

    For the folks with thicker wallets, though, I'd suggest just giving in and paying someone to run a cluster for you: that way you'll get something vaguely portable, will make many aspects of running it someone else's problem, and will be able to leverage the actual benefits of working with the container orchestrator.

    > To extend its reach across multiple hosts, Docker introduced Swarm mode in 2016. This is actually the second product from Docker to bear the name "Swarm" — a product from 2014 implemented a completely different approach to running containers across multiple hosts, but it is no longer maintained. It was replaced by SwarmKit, which provides the underpinnings of the current version of Docker Swarm.

    On an unrelated note, this, at least to me, feels like pretty bad naming and management of the whole initiative, though. Of course, if the features are there, it shouldn't be enough to scare anyone away from the project, but at the same time it could have been a bit simpler.

  • I want to provide some free support for community, how should I start?
    2 projects | reddit.com/r/devops | 3 Aug 2022
    But I think once you have a good understanding of K8s internals (components, how things work under the hood, etc.), you can use some tools to help you provision and maintain a k8s cluster more easily (look at https://rancher.com/ and alternatives).
  • Rancher monitoring v1 to v2 upgrade fails with "V1 should be disabled but the operator is still being deployed"
    2 projects | reddit.com/r/rancher | 11 Jul 2022
    Monitoring V1 should be disabled but the operator is still being deployed. Please file a bug with Rancher at https://github.com/rancher/rancher/issues/new.
  • Ask HN: What is your Kubernetes nightmare?
    8 projects | news.ycombinator.com | 27 Jun 2022
    Late to the party, but figured I'd share my own story (some details obviously changed, but hopefully the spirit of the experience remains).

    Suppose that you work in an org that successfully ships software in a variety of ways - as regular packaged software that runs on an OS directly (e.g. a .jar that expects a certain JDK version in the VM), or maybe even uses containers sometimes, be it with Nomad, Swarm or something else.

    And then a project comes along that needs Kubernetes, because someone else made that choice for you (in some orgs it might be a requirement from the side of clients, others might want to be able to claim that their software runs on Kubernetes, and in other cases some dev might be padding their CV before leaving) and now you need to deal with its consequences.

    But here's the thing - if the organization doesn't have enough buy-in into Kubernetes, it's as if you're starting everything from 0, especially if paying some cloud vendor to give you a managed cluster isn't in the cards, be it because of data storage requirements (even for dev environments), other compliance reasons or even just corporate policy.

    So, I might be given a single VM on a server, with 8 GB of RAM for launching 4 or so Java/.NET services, as that is a decent amount of resources for doing things the old way. But now, I need to fit a whole Kubernetes cluster in there, which in most configurations eats resources like there's no tomorrow. Oh, and the colleagues also don't have too much experience working with Kubernetes, so some sort of a helpful UI might be nice to have, except that the org uses RPM distros and there are no resources for an install of OpenShift on that VM.

    But how much can I even do with that amount of resources, then? Well, I did manage to get K3s (a certified K8s distro by Rancher) up and running, though my hopes of connecting it with the actual Rancher tool (https://rancher.com/) to act as a good web UI didn't succeed. Mostly because of some weirdness with the cgroups support and Rancher running as a Docker container in many cases, which just kind of broke. I did get Portainer (https://www.portainer.io/) up and running instead, but back then I think there were certain problems with the UI, as it's still very much in active development and gradually receives lots of updates. I might have just gone with Kubernetes dashboard, but admittedly the whole login thing isn't quite as intuitive as the alternatives.

    That said, everything kind of broke down for a bit as I needed to set up the ingress. What if you have a wildcard certificate along the lines of .something.else.org.com and want it to be used for all of your apps? Back in the day, you'd just set up Nginx or Apache as your reverse proxy and let it worry about SSL/TLS termination. A duty which is now taken over by Kubernetes, except that by default K3s comes with Traefik as its ingress controller of choice and the documentation isn't exactly stellar.

    So for getting this sort of configuration up and running, I needed to think about a HelmChartConfig for Traefik, a ConfigMap which references the secrets, a TLSStore to contain them, as well as creating the actual tls-secrets themselves from the appropriate files on the file system (a rough sketch of the TLSStore piece follows this post). It still feels a bit odd and would probably be an utter mess to get particular certificates up and running for some other paths, plus Let's Encrypt for yet others. In short, what previously would have been those very same files living on the file system and a few (dozen?) lines inside of the reverse proxy configuration is now a distributed mess of abstractions and actions which certainly needs some getting used to.

    Oh, and Portainer sometimes just gets confused and fails to figure out how to properly set up the routes, though I do have to say that at least MetalLB does its job nicely.

    And then? Well, we can't just ship manifests directly, we also need Helm charts! But of course, in addition to writing those and setting up the CI for packaging them, you also need something running to store them, as well as any Docker images that you want. In lieu of going through all of the red tape to set that up on shared infrastructure (which would need cleanup policies, access controls and lots of planning so things don't break for other parties using it), I crammed in an instance of Nexus/Artifactory/Harbor/... on that very same server, with the very same resource limits, with deadlines still looming over my head.

    But that's not it, for software isn't developed in a vacuum. Throw in all of the regular issues with developing software, like not being 100% clear on each of the configuration values that the apps need (because developers are fallible, of course), changes to what they want to use, problems with DB initialization (of course, still needing an instance of PostgreSQL/MariaDB running on the very same server, which for whatever reason might get used as a shared DB) and so on.

    In short, you take a process that already has pain points in most orgs and make it needlessly more complex. There are tangible benefits to using Kubernetes. Once you find a setup that works (personally: Ubuntu LTS or a similar distro, a full Rancher install, maybe K3s as the underlying cluster or RKE/K3s/k0s on separate nodes, with Nginx for ingress, or a 100% separately managed ingress), it's great and the standardization is almost like a superpower (as long as you don't go crazy with CRDs). Yet, you need to pay a certain cost up front.

    What could be done to alleviate some of the pain points?

    In short, I think that:

      - expect to need a lot more resources than previously: always have a separate node for managing your cluster and put any sort of tooling on it as well (like Portainer/Rancher), but run your app workloads on other nodes (K3s or k0s can still be fairly undemanding on resources for the most part)
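
As a rough sketch of the wildcard-certificate setup described in the post above (assuming the Traefik v2 CRDs that shipped with K3s around that time, and a hypothetical secret name; this is not the poster's actual config), the TLSStore piece could look something like this:

```yaml
# Rough sketch: make a wildcard certificate the default for the Traefik ingress
# bundled with K3s. The secret name is hypothetical; it would be created first
# from the certificate files on disk, e.g.:
#   kubectl -n kube-system create secret tls wildcard-tls \
#     --cert=fullchain.pem --key=privkey.pem
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: kube-system
spec:
  defaultCertificate:
    secretName: wildcard-tls
```

With a default certificate in place, ingresses without their own TLS section get served with the wildcard certificate, while individual Ingress or IngressRoute resources can still reference a different secret per domain.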
  • Don't Use Kubernetes, Yet
    7 projects | news.ycombinator.com | 18 Jun 2022
    A few years ago, I would have said no. Now, I'm cautiously optimistic about it.

    Personally, I think that you can use something like Rancher (https://rancher.com/) or Portainer (https://www.portainer.io/) for easier management and/or dashboard functionality, to make the learning curve a bit more approachable. For example, you can create a deployment through the UI by following a wizard that also offers configuration you might want to use (e.g. resource limits) and later retrieve the YAML manifest, should you wish to do that (a minimal example of such a manifest follows this post). They also make interacting with Helm charts (pre-made packages) easier.

    Furthermore, there are certified distributions which are not too resource hungry, especially if you need to self-host clusters. For example, K3s (https://k3s.io/) and k0s (https://k0sproject.io/) are both production ready up to a certain scale, don't consume a lot of memory, and are easy to set up and work with, whilst being mostly OS agnostic (DEB distros will always work best; RPM ones have challenges as soon as you look anywhere other than OpenShift, which is probably only good for enterprises).

    If you can automate cluster setup with Ansible and treat the clusters as something that you can easily re-deploy when you inevitably screw up (you might not do that, but better to plan for failure), you should be good! Even Helm charts have gotten pretty easy to write and deploy, and K8s works nicely with most CI/CD tools out there, given that kubectl lends itself pretty well to scripting.
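
For instance, the kind of manifest such a UI wizard might hand back after creating a deployment with resource limits could look roughly like the following; the names, image and numbers are placeholders, not output from any particular tool.

```yaml
# Hypothetical Deployment with resource requests/limits, similar in shape to
# what a wizard in Rancher or Portainer would generate. Names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:alpine
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```

Retrieving the generated manifest later is a one-liner, e.g. kubectl get deployment example-app -o yaml.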

  • Building an Internal Kubernetes Platform
    6 projects | dev.to | 16 Jun 2022
    Alternatively, it is also possible to use a multi-cloud or hybrid-cloud approach, which combines several cloud providers or even public and private clouds. Special tools such as Rancher and OpenShift can be very useful to run this type of system.
  • Five Dex Alternatives for Kubernetes Authentication
    6 projects | dev.to | 16 Jun 2022
    Rancher provides a Rancher authentication proxy that allows user authentication from a central location. With this proxy, you can set the credentials for authenticating users that want to access your Kubernetes clusters. You can create, view, update, or delete users through Rancher's UI and API.
  • I WANT TO LEARN. Roast me/ Humble me if need be. It's the only way.
    2 projects | reddit.com/r/HomeServer | 27 May 2022
    Not sure if it works on arm but I heard that rancher and harvester are a good combo for clustering k3s and kvm if you want to try both technologies. https://rancher.com/ https://harvesterhci.io/
  • Opinion about k3s
    2 projects | reddit.com/r/kubernetes | 24 May 2022
    - https://github.com/rancher/rancher/issues/16454

What are some alternatives?

When comparing Portainer and rancher you can also consider the following projects:

Yacht - A web interface for managing docker containers with an emphasis on templating to provide 1 click deployments. Think of it like a decentralized app store for servers that anyone can make packages for.

swarmpit - Lightweight mobile-friendly Docker Swarm management UI

podman - Podman: A tool for managing OCI containers and pods.

lens - Lens - The way the world runs Kubernetes

OpenMediaVault - openmediavault is the next generation network attached storage (NAS) solution based on Debian Linux. It contains services like SSH, (S)FTP, SMB/CIFS, DAAP media server, RSync, BitTorrent client and many more. Thanks to the modular design of the framework it can be enhanced via plugins. OpenMediaVault is primarily designed to be used in home environments or small home offices, but is not limited to those scenarios. It is a simple and easy to use out-of-the-box solution that will allow everyone to install and administrate a Network Attached Storage without deeper knowledge.

microk8s - MicroK8s is a small, fast, single-package Kubernetes for developers, IoT and edge.

podman-compose - a script to run docker-compose.yml using podman

kubesphere - The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management ⎈ 🖥 ☁️

cluster-api - Home for Cluster API, a subproject of sig-cluster-lifecycle

octoprint-docker - The dockerized snappy web interface for your 3D printer!

authelia - The Single Sign-On Multi-Factor portal for web apps