Kubernetes at Home with K3s
4 projects | news.ycombinator.com | 5 Dec 2021
That's a false statement as far as the technical aspects are concerned (Swarm is still usable and supported), but is a true statement when you look at the social aspects (Kubernetes won the container wars and now even Nomad is uncommon to run into).
Right now the company I'm at uses Swarm in a lot of places due to its simplicity (Compose file support) and low resource usage. Swarm hits the sweet spot for getting started with container orchestration without needing multiple people to wrangle the technical complexity of Kubernetes, or large VMs to absorb its resource usage, at least in on-prem environments.
In combination with Portainer (https://www.portainer.io/) it's perhaps one of the best ways to get things done when you expect everything to just work and aren't doing anything too advanced (think 10 servers rather than 100, which probably describes most of the deployments out there).
I actually wrote about some of its advantages in my blog post, "Docker Swarm over Kubernetes": https://blog.kronis.dev/articles/docker-swarm-over-kubernete...
That said, if there's a good option to replace Swarm, it's either HashiCorp Nomad (https://www.nomadproject.io/), which is a really nice platform, especially when coupled with Consul (https://www.consul.io/), as long as you can get past the weirdness of HCL, or K3s (https://k3s.io/), which gives you Kubernetes without the insane bloat and hardware usage.
I actually benchmarked K3s against Docker Swarm in similar app deployments: 1 leader server and 2 follower servers, running a Ruby on Rails app and an ingress while under load testing by K6 (https://k6.io/). I was attempting to see whether COVID contact tracing with GPS would be viable, as far as system load goes, in languages with a high abstraction level; here's more info about that: https://blog.kronis.dev/articles/covid-19-contact-tracing-wi...
Honestly, the results were pretty close. On the follower servers, the overhead of the orchestrator agents was a few percent (K3s being heavier, but a few dozen MB here or there not being too relevant). The bigger differences were in the leader components, where K3s was heavier almost by a factor of two, which isn't too much when you consider how lightweight Swarm is (there was a difference of a few hundred MB); CPU usage was reasonably close in both cases as well. Sadly, the text of the paper is in Latvian, so it's probably of no use to anyone, but I advise you to do your own benchmarks! Being a student, I couldn't afford many servers then, so it's probably a good idea to benchmark with more of them.
Of note, on those VPSes (4 GB of RAM, single core), full Kubernetes wouldn't even start, whereas at work, trying to get the resources for also running Rancher on top of a "full" Kubernetes cluster (e.g. RKE) can take needlessly long due to pushback from ops. Also, personally I find the Compose syntax far easier to deal with than the amalgamation that Kubernetes uses; Helm probably shouldn't even need to be a thing if the deployment descriptors weren't so bloated. Just look at this: https://docs.docker.com/compose/compose-file/compose-file-v3...
- Docker Swarm is pretty good when you're starting out with containers and is reasonably stable and easy to use
Designing large scale apps using micro services
2 projects | reddit.com/r/node | 16 Nov 2021
Check out Consul from HashiCorp. https://www.consul.io/
Nginx – The Architecture of Open Source Applications
5 projects | news.ycombinator.com | 2 Nov 2021
> As a relatively young dev, the idea of a "web server" as a standalone binary that serves your application (vs a library that you use to write your own "server") feels strange.
In my eyes, the ideal setup is a layered one: you have an ingress that's basically a load balancer, which also ensures that you have SSL/TLS certificates, enforces rate limits, perhaps does some very basic logging, or can optionally do any URL rewriting that you need. Personally, I think that Caddy (https://caddyserver.com/) is lovely for this, whereas some people prefer something like Traefik (https://traefik.io/), though the older packages like Nginx (https://nginx.org/en/) or even Apache (https://www.apache.org/) are good too, as long as the pattern itself is in place.
Then, you may additionally have any sort of middleware that you need, such as a service mesh for service discovery or for providing internal SSL/TLS. Personally, Docker Swarm (https://docs.docker.com/engine/swarm/) overlay networks have always been enough for me in this regard, though some people enjoy other solutions, such as HashiCorp Consul (https://www.consul.io/), or something intended for a platform you may already be using, like Linkerd (https://linkerd.io/) for Kubernetes.
Finally, you have your actual application with its server. Personally, I think the web server should be embedded (for example, embedded Tomcat with Spring Boot) or indeed just be a library that's part of the application executable, as long as you can update it easily enough by rebuilding the application. Containers are good for this, but aren't strictly necessary, since other forms of automation and packaging are sometimes enough too.
The reason I believe this is that I've seen plenty of deployments where that just isn't the case:
- attempts to store certificates within the application, with each application server having different requirements for the formats used, making management (and automation) of renewal a total nightmare
An Update on Our Outage
3 projects | news.ycombinator.com | 31 Oct 2021
Programming Microservices Communication With Istio
7 projects | dev.to | 28 Oct 2021
Service discovery — Traditionally provided by platforms like Netflix Eureka or Consul.
1 project | reddit.com/r/PrometheusMonitoring | 11 Sep 2021
For discovery outside of Kubernetes, you can use whatever your configuration management database is to generate the discovery configs, but you might want to look at Consul. The downside to using discovery scripts is the monolithic update lag. I used to have a medium-sized setup with Chef and Nagios; it took something like 5 minutes just to run one config cycle. As we transitioned to Prometheus we cut the cycle down to a couple of minutes, because we had smaller, targeted configs.
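To make the "generate the discovery configs" approach concrete, here's a minimal sketch of turning an inventory into a Prometheus file_sd target file. The inventory shape and field names are hypothetical stand-ins for whatever your CMDB or Chef node data actually returns; only the output format (a JSON list of `targets`/`labels` groups) follows Prometheus's file-based discovery convention.

```python
import json

# Hypothetical inventory, e.g. pulled from a CMDB or Chef node search.
inventory = [
    {"host": "web-01.internal", "port": 9100, "role": "web"},
    {"host": "web-02.internal", "port": 9100, "role": "web"},
    {"host": "db-01.internal", "port": 9100, "role": "db"},
]

def to_file_sd(nodes):
    """Group nodes by role into Prometheus file_sd target groups."""
    groups = {}
    for n in nodes:
        groups.setdefault(n["role"], []).append(f'{n["host"]}:{n["port"]}')
    return [
        {"targets": sorted(targets), "labels": {"role": role}}
        for role, targets in sorted(groups.items())
    ]

if __name__ == "__main__":
    # Prometheus watches this file and picks up changes without a reload,
    # which is what shrinks the config cycle compared to full Chef runs.
    with open("targets.json", "w") as f:
        json.dump(to_file_sd(inventory), f, indent=2)
```

Running a small generator like this per service, rather than one monolithic config build, is what cut the cycle time in the setup described above.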
HashiCorp Consul: What's the catch?
5 projects | reddit.com/r/devops | 4 Sep 2021
So, my tech lead has once more had the sweet whispers of HashiCorp blaring in his ear, and to my irritation has decreed that we will be prioritizing bringing Consul into our environment despite pretty much everything else we have being in various states of rotting popsicle sticks and scotch tape.
An Introduction to Microservices pt. 3
1 project | dev.to | 24 Aug 2021
Harbormaster: The anti-Kubernetes for your personal server
20 projects | news.ycombinator.com | 19 Aug 2021
> There is a gap in the market between VM-oriented simple deployments and Kubernetes-based setups.
In my experience, there are actually two platforms that do this pretty well.
First, there's Docker Swarm ( https://docs.docker.com/engine/swarm/ ) - it comes preinstalled with Docker and can handle single-machine deployments, clusters, and even multi-master setups. Furthermore, it just adds a few values to the Docker Compose YAML format ( https://docs.docker.com/compose/compose-file/compose-file-v3... ), so it's incredibly easy to launch containers with it. And there are lovely web interfaces, such as Portainer ( https://www.portainer.io/ ) or Swarmpit ( https://swarmpit.io/ ), for simpler management.
Secondly, there's also HashiCorp Nomad ( https://www.nomadproject.io/ ) - it's a single executable, which allows setups similar to Docker Swarm, integrates nicely with service meshes like Consul ( https://www.consul.io/ ), and also allows non-containerized deployments to be managed, such as Java applications and others ( https://www.nomadproject.io/docs/drivers ). The only serious downsides are having to use the HCL DSL ( https://github.com/hashicorp/hcl ) and the web UI being read-only in the last versions that I checked.
There are also some other tools, like CapRover ( https://caprover.com/ ), available, but many of those use Docker Swarm under the hood and I personally haven't used them. Of course, if you still want Kubernetes but implemented in a slightly simpler way, there's also the Rancher K3s project ( https://k3s.io/ ), which packages the core of Kubernetes into a smaller executable and uses SQLite by default for storage, if I recall correctly. I've used it briefly and the resource usage was indeed far more reasonable than that of full Kubernetes clusters (like RKE).
What Is a Service Mesh, and Why Is It Essential for Your Kubernetes Deployments?
2 projects | dev.to | 17 Aug 2021
With multiple services running, it’s hard to discover where they’re located. The dependencies between multiple services are not always easily found, and new services may be deployed with a new dependency on an older service. Those services can be deployed anywhere in the infrastructure, so what you need is a Service Discovery service. There are plenty available, such as Netflix Eureka or HashiCorp Consul.
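The core idea behind registries like Eureka or Consul can be sketched in a few lines: instances register themselves with a heartbeat, and lookups return only the instances whose heartbeat hasn't expired. This is a toy in-memory illustration of that contract, not any real client API; the class and service names are made up for the example.

```python
import time

class ServiceRegistry:
    """Toy sketch of a discovery service: services register instances
    with a TTL heartbeat, consumers look up only fresh instances."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._seen = {}  # (service, address) -> last heartbeat timestamp

    def register(self, service, address, now=None):
        # Re-registering acts as a heartbeat and renews the TTL.
        self._seen[(service, address)] = now if now is not None else time.monotonic()

    def lookup(self, service, now=None):
        now = now if now is not None else time.monotonic()
        return sorted(
            addr
            for (svc, addr), seen in self._seen.items()
            if svc == service and now - seen <= self.ttl
        )

registry = ServiceRegistry(ttl_seconds=30)
registry.register("orders", "10.0.0.5:8080", now=0)
registry.register("orders", "10.0.0.6:8080", now=0)
registry.register("orders", "10.0.0.6:8080", now=20)  # heartbeat renews TTL
healthy = registry.lookup("orders", now=40)  # 10.0.0.5 expired by now
```

Real systems add the hard parts this sketch skips: replication of the registry itself, active health checks instead of plain TTLs, and DNS or HTTP interfaces for the lookups.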
1 project | reddit.com/r/JavaOnTheEdge | 7 Nov 2021
11 projects | dev.to | 26 Oct 2021
“etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines.” etcd provides a way to store data across a distributed cluster of machines and makes sure the data is synchronized across all of them. You can find more information, as well as the source code, in the etcd GitHub repository.
Package Management Nightmare
2 projects | reddit.com/r/golang | 12 Oct 2021
They have an open issue for it with no apparent blockers, and a PR bumping the otel version, so it looks like it's moving.
Deploy a highly available etcd cluster using Docker
2 projects | dev.to | 8 Oct 2021
etcd is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. It gracefully handles leader elections during network partitions and can tolerate machine failure, even in the leader node.
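The fault tolerance described above comes from Raft's majority-quorum rule: a cluster of N members stays available as long as a majority can talk to each other, so N nodes tolerate (N - 1) // 2 failures, and a candidate needs a majority of votes to become leader. A small sketch of just that arithmetic (not the actual protocol, which also involves terms, logs, and timeouts):

```python
def quorum(cluster_size):
    """Minimum number of members that must agree (a strict majority)."""
    return cluster_size // 2 + 1

def tolerated_failures(cluster_size):
    """How many members can fail while the cluster stays available."""
    return cluster_size - quorum(cluster_size)

def wins_election(cluster_size, votes_received):
    """A candidate becomes leader only with a majority of votes,
    which guarantees at most one leader per term."""
    return votes_received >= quorum(cluster_size)

# Why etcd clusters are usually 3 or 5 nodes: 3 tolerates 1 failure,
# 5 tolerates 2, while 4 still only tolerates 1 (quorum is 3).
for n in (3, 4, 5):
    print(f"{n} nodes: quorum={quorum(n)}, tolerates={tolerated_failures(n)}")
```

This is also why even-sized clusters buy you nothing: adding a fourth node raises the quorum to 3 without raising the number of tolerated failures.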
What exactly is stacked etcd's role, and is it a separate entity? Is it stored on 3 different drives?
1 project | reddit.com/r/kubernetes | 3 Oct 2021
Need help with setting up kubebuilder locally
1 project | reddit.com/r/kubernetes | 11 Sep 2021
A Closer Look Into The World of Database Replication
1 project | dev.to | 15 Aug 2021
You can see the etcd implementation of Raft in here: https://github.com/etcd-io/etcd/tree/main/raft
Cannot connect to OpenShift cluster using oc tool and admin console.
1 project | reddit.com/r/openshift | 9 Jun 2021
I found this issue https://github.com/etcd-io/etcd/issues/11949
Can anyone give me a proper database backup solution?
2 projects | reddit.com/r/cscareerquestions | 23 May 2021
If you want to get into all the details, you can look at Raft and other consensus algorithms, or at the code of (relatively) simple implementations, like etcd.io.
Automatic Configuration Reloading in Java Applications on Kubernetes
3 projects | dev.to | 2 May 2021
If you need your configuration changes to be rolled out more immediately, there are other options as well. Rather than reading from a properties file, you could use a key-value store such as Consul, etcd, or AWS Systems Manager Parameter Store. While this gives you more direct control over configuration changes, it introduces new challenges. First, managing your configuration as code might require additional tooling, such as defining the values as Terraform resources. Additionally, your application will have to know how to speak to the configuration service, including a proper authentication mechanism.
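The pattern behind Consul watches and etcd revisions can be sketched without either service: every write to the store bumps a version, and the application re-reads its config only when the version has moved on. The in-memory `KVStore` below is a made-up stand-in for the real store; only the version-check pattern is the point.

```python
class KVStore:
    """Stand-in for Consul/etcd: every write bumps a version number,
    so clients can cheaply detect whether anything changed."""

    def __init__(self):
        self._data = {}
        self.version = 0

    def put(self, key, value):
        self._data[key] = value
        self.version += 1

    def get_all(self):
        return dict(self._data), self.version

class ReloadingConfig:
    """Re-reads config only when the store's version moved on,
    mimicking a Consul watch / etcd revision check on each poll."""

    def __init__(self, store):
        self.store = store
        self.config, self.seen_version = store.get_all()
        self.reloads = 0

    def poll(self):
        _, current = self.store.get_all()
        if current != self.seen_version:
            self.config, self.seen_version = self.store.get_all()
            self.reloads += 1
        return self.config

store = KVStore()
store.put("feature.flag", "off")
cfg = ReloadingConfig(store)
cfg.poll()                       # nothing changed, no reload happens
store.put("feature.flag", "on")  # operator flips the flag in the store
cfg.poll()                       # change detected, config reloaded
```

The real services push this further with blocking queries and watch streams, so the application reacts to a change without polling at all, but the version check is the same idea.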
What are some alternatives?
minio - High Performance, Kubernetes Native Object Storage
traefik - The Cloud Native Application Proxy
kubernetes - Production-Grade Container Scheduling and Management
Vault - A tool for secrets management, encryption as a service, and privileged access management
Caddy - Fast, multi-platform web server with automatic HTTPS
Apache ZooKeeper - Apache ZooKeeper
nsq - A realtime distributed messaging platform
Nomad - Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
vaultwarden - Unofficial Bitwarden compatible server written in Rust, formerly known as bitwarden_rs