consul
conduit
| | consul | conduit |
|---|---|---|
| Mentions | 57 | 33 |
| Stars | 27,752 | 10,330 |
| Growth | 0.6% | 1.1% |
| Activity | 9.9 | 9.9 |
| Latest commit | 7 days ago | 5 days ago |
| Language | Go | Go |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
consul
-
Deploy Secure Spring Boot Microservices on Amazon EKS Using Terraform and Kubernetes
The JHipster scaffolded sample application has a gateway application and two microservices. It uses Consul for service discovery and centralized configuration.
-
The Complete Microservices Guide
Service Discovery: Microservices need to discover and communicate with each other dynamically. Service discovery tools like etcd, Consul, or Kubernetes built-in service discovery mechanisms help locate and connect to microservices running on different nodes within the infrastructure.
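The core idea behind all of these tools can be sketched as a toy in-memory registry in Go. This is purely illustrative: the `Registry` type and its methods are made up here, not the API of etcd, Consul, or Kubernetes, and real systems add health checking, replication, and watch semantics on top.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Registry is a toy in-memory service registry mapping service names
// to lists of "host:port" instance addresses.
type Registry struct {
	mu        sync.RWMutex
	instances map[string][]string
}

func NewRegistry() *Registry {
	return &Registry{instances: make(map[string][]string)}
}

// Register adds an instance address under a service name.
func (r *Registry) Register(service, addr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.instances[service] = append(r.instances[service], addr)
}

// Lookup returns all known instances of a service.
func (r *Registry) Lookup(service string) ([]string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	addrs, ok := r.instances[service]
	if !ok {
		return nil, errors.New("service not found: " + service)
	}
	return addrs, nil
}

func main() {
	reg := NewRegistry()
	reg.Register("billing", "10.0.0.5:8080")
	reg.Register("billing", "10.0.0.6:8080")
	addrs, _ := reg.Lookup("billing")
	fmt.Println(addrs) // [10.0.0.5:8080 10.0.0.6:8080]
}
```

A production registry differs mainly in that registrations expire unless the instance keeps passing health checks, which is what lets clients avoid dead nodes.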
-
Replicating and Load Balancing Go Applications in Docker Containers with Consul and Fabio
After some research and testing, I landed on using Consul and Fabio as the demo infrastructure. Of course, there is a myriad of other options to accomplish this task, but because of the low configuration and ease of use, I was impressed with this pairing. Both projects are mature, well supported, and very flexible: just because you can run them with low configuration doesn't mean you have to. I wanted to keep this demo constrained, but the exercise did get me excited about exploring things further: circuit breakers, traffic splitting, and more complex service meshes.
-
Register OpenTelemetry with Consul
The goal is to be able to use Consul SD configurations to allow for retrieving scrape targets from consul. Is this possible? Can anyone provide an example? Thank you!!
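Yes, this is supported; Prometheus-style scrapers (including the OpenTelemetry Collector's Prometheus receiver) can pull targets from Consul. A minimal scrape job using Consul service discovery looks roughly like this; the server address and relabeling are illustrative:

```yaml
scrape_configs:
  - job_name: "consul-services"
    consul_sd_configs:
      - server: "localhost:8500"   # Consul agent HTTP address
        services: []               # empty list = discover all registered services
    relabel_configs:
      # Carry the Consul service name through as a "service" label
      - source_labels: [__meta_consul_service]
        target_label: service
```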
-
Fly.io outage, recently deployed apps down, no new deployments possible
https://github.com/hashicorp/consul/pull/12080 - this should be the Consul issue that brought down Roblox
-
Netdata release 1.38.0
The Consul collector is production ready! Consul by HashiCorp is a powerful and complex identity-based networking solution, which is not trivial to monitor. We were lucky to have the assistance of HashiCorp itself in this endeavor, which resulted in a monitoring solution of exceptional quality. Look for joint blog posts and announcements in the coming weeks!
-
Micro Frontends for Java Microservices
Changed the service discovery to Consul, since this is the default in JHipster 8.
- Website monitoring
-
I Know What You Shipped Last Summer
In another effort to standardize development and operations, Lob has just wrapped up our container orchestration migration from Convox to HashiCorp’s Nomad, led by Senior Platform Engineer Elijah Voigt. In this new ecosystem, one feature available to us is Consul Service Mesh (a feature of Consul, which is part of our Lob Nomad stack).
-
A tool for quickly creating web and microservice code
Service registry and discovery: etcd, Consul, Nacos
conduit
-
Optimal JMX Exposure Strategy for Kubernetes Multi-Node Architecture
Leverage a service mesh like Istio or Linkerd to manage communication between microservices within the Kubernetes cluster. These service meshes can be configured to intercept JMX traffic and enforce access control policies. Benefits:
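As a sketch of the access-control side in Istio, a port-based `AuthorizationPolicy` can deny JMX traffic except from a trusted namespace. The workload label, namespaces, and JMX port 9010 below are assumptions for illustration, not values from the original post:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: restrict-jmx
  namespace: prod              # assumed application namespace
spec:
  selector:
    matchLabels:
      app: my-java-app         # assumed workload label
  action: DENY
  rules:
    - to:
        - operation:
            ports: ["9010"]    # assumed JMX remote port
      from:
        - source:
            notNamespaces: ["ops"]  # deny everyone except the ops namespace
```

Because JMX over RMI is plain TCP rather than HTTP, only TCP-level attributes (ports, source principals, and namespaces) are usable in the rules.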
-
Linkerd no longer shipping open source, stable releases
Looks like the CNCF waved them through Graduation anyway; let's look at the policies from July 28, 2021, when they were deemed "Graduated".
All maintainers of the Linkerd project had @buoyant.io email addresses. [0] They do list 4 other members of a "Steering Committee", but Linkerd's GOVERNANCE.md gives all of the power to maintainers: [1]
> Ideally, all project decisions are resolved by maintainer consensus. If this is not possible, maintainers may call a vote. The voting process is a simple majority in which each maintainer receives one vote.
And CNCF Graduation policy says a project must "Have committers from at least two organizations" [2]. So it appears that the CNCF accepted the "Steering Committee" as an acceptable 2nd committer, even though the Governance policy still gave the maintainers all of the power.
I would like to know if the Steering Committee voted to remove stable releases from an unbiased position acting in the best interest of the project, or if they were simply ignored or not even advised on the decision.
I'm all for Buoyant doing what they need to do to make money and survive as a company. But at that point my opinion is that they should withdraw the project from the CNCF and stop pretending like the foundation has any influence on the project's governance.
[0] https://github.com/linkerd/linkerd2/blob/489ca1e3189b6a5289d...
-
Ultimate EKS Baseline Cluster: Part 1 - Provision EKS
From here, we can explore other developments and tutorials on Kubernetes, such as o11y or observability (PLG, ELK, ELF, TICK, Jaeger, Pyroscope), service mesh (Linkerd, Istio, NSM, Consul Connect, Cilium), and progressive delivery (ArgoCD, FluxCD, Spinnaker).
-
Istio moved to CNCF Graduation stage
https://linkerd.io/ is a much lighter-weight alternative, but you do still get some of the fancy things like mTLS without needing any manual configuration. Install it, label your namespaces, and let it do its thing!
-
Custom Authorization
Would it be possible to create a custom extension with code that authorizes traffic based on my custom access token?
-
API release strategies with API Gateway
Open source API Gateway (Apache APISIX and Traefik) and Service Mesh (Istio and Linkerd) solutions are capable of traffic splitting and implementing functionalities like Canary Release and Blue-Green deployment. With canary testing, you can make a critical examination of a new release of an API by selecting only a small portion of your user base. We will cover canary releases in the next section.
-
GKE with Consul Service Mesh
I have experimented with other service meshes and I was able to get up to speed quickly: Linkerd = 1 day, Istio = 3 days, NGINX Service Mesh = 5 days, but Consul Connect service mesh took at least 11 days to get off the ground. This is by far the most complex solution available.
-
How is a service mesh implemented on low level?
https://github.com/linkerd/linkerd2 (random example)
- Kubernetes operator written in Rust
-
What is a service mesh?
Out of the number of service mesh solutions that exist, the most popular open source ones are: Linkerd, Istio, and Consul. Here at Koyeb, we are using Kuma.
What are some alternatives?
etcd - Distributed reliable key-value store for the most critical data of a distributed system
Zone of Control - ⬡ Zone of Control is a hexagonal turn-based strategy game written in Rust. [DISCONTINUED]
Eureka - AWS Service registry for resilient mid-tier load balancing and failover.
Parallel
traefik - The Cloud Native Application Proxy
Fractalide - Reusable Reproducible Composable Software
Caddy - Fast and extensible multi-platform HTTP/1-2-3 web server with automatic HTTPS
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
Apache ZooKeeper - Apache ZooKeeper
istio - Connect, secure, control, and observe services.
kubernetes - Production-Grade Container Scheduling and Management