chaos-mesh
conduit
| | chaos-mesh | conduit |
|---|---|---|
| Mentions | 17 | 33 |
| Stars | 6,351 | 10,330 |
| Growth | 2.4% | 1.1% |
| Activity | 8.5 | 9.9 |
| Latest commit | 12 days ago | 4 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
chaos-mesh
-
Chaos Mesh
I've been messing around with Chaos Mesh recently (https://chaos-mesh.org/) and I'm wondering: is there any way I can define custom behaviour in one of my experiments? Specifically, I want to deploy a Pod with a certain image using an experiment.
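For context, Chaos Mesh experiments are declared as Kubernetes custom resources rather than arbitrary workloads. A minimal pod-kill experiment, as a sketch (the namespace and labels below are hypothetical placeholders), looks like:

```yaml
# Minimal PodChaos experiment: kills one pod matching the selector.
# Namespace and label values are hypothetical placeholders.
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: pod-kill-example
  namespace: chaos-testing
spec:
  action: pod-kill
  mode: one                # affect a single matching pod
  selector:
    namespaces:
      - my-app
    labelSelectors:
      app: my-service
```

Deploying a Pod with a custom image is outside the scope of a CRD like this; that would typically be done with a plain Deployment applied alongside the experiment.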
-
Building Resilience with Chaos Engineering and Litmus
Litmus, Gremlin, Chaos Mesh, and Chaos Monkey are all popular open-source tools used for chaos engineering. As we will be using AWS cloud infrastructure, we will also explore AWS Fault Injection Simulator (FIS). While they share the same goals of testing and improving the resilience of a system, there are some differences between them. Here are some comparisons:
-
rootly Vs firehydrant, any experience?
https://chaos-mesh.org/ (open source)
- Elon Musk is disconnecting random Twitter servers just to see what happens
-
Implement DevSecOps to Secure your CI/CD pipeline
Implement Chaos Mesh and Litmus chaos engineering framework to understand the behavior and stability of application in real-world use cases.
- Chaos-Mesh - A chaos engineering platform for Kubernetes.
-
Chaos Mesh for chaos engineering in Kubernetes
Here is our recent experience with Chaos Mesh for performing basic chaos engineering experiments on an application in Kubernetes.
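Basic experiments like the ones described are also expressed as custom resources. A hedged sketch of a network-delay experiment (target label and timings are illustrative assumptions):

```yaml
# NetworkChaos experiment: injects 100ms latency into all pods
# matching the selector for five minutes. Values are illustrative.
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: delay-example
  namespace: chaos-testing
spec:
  action: delay
  mode: all                # affect every matching pod
  selector:
    labelSelectors:
      app: web             # hypothetical target label
  delay:
    latency: "100ms"
    jitter: "10ms"
  duration: "5m"
```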
-
Database Mesh 2.0: Database Governance in a Cloud Native Environment
In March 2018, an article titled "Service Mesh is the broad trend, what about Database Mesh?" was published on InfoQ China and went viral in the technical community. In this article, Zhang Liang, the founder of Apache ShardingSphere, described the Database Mesh concept alongside the idea of Service Mesh. Four years later, the Database Mesh concept has been adopted by several companies together with their own tools and ecosystems. Today, in addition to Service Mesh, a variety of "X Mesh" concepts such as ChaosMesh, EventMesh, and IOMesh have emerged. Following four years of development, Database Mesh has also started a new chapter: Database Mesh 2.0.
-
Share your #ChaosMeshStory!
🐒 Chaos Mesh will turn 2 on 2021.12.31! We're grateful for every contribution that helped this project grow, and we’d like to hear your Chaos Mesh story!
-
help tips scripting pods creation for k8s cluster testing
So I came across this recently; I haven't used it myself, but it seems to fit your requirements: https://github.com/chaos-mesh/chaos-mesh
conduit
-
Optimal JMX Exposure Strategy for Kubernetes Multi-Node Architecture
Leverage a service mesh like Istio or Linkerd to manage communication between microservices within the Kubernetes cluster. These service meshes can be configured to intercept JMX traffic and enforce access control policies. Benefits:
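One way such an access-control policy might look in Istio is an AuthorizationPolicy restricting the JMX port to a specific client identity. This is a sketch under assumptions: the namespace, workload label, service account, and port below are hypothetical.

```yaml
# Istio AuthorizationPolicy: only the monitoring scraper's identity
# may reach the JMX port on the selected workload.
# All names and the port number are hypothetical placeholders.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: jmx-allow
  namespace: my-app
spec:
  selector:
    matchLabels:
      app: my-service
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/monitoring/sa/jmx-scraper"]
      to:
        - operation:
            ports: ["9010"]
```

Policies like this rely on mutual TLS being enabled in the mesh so that the client's SPIFFE identity can be verified.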
-
Linkerd no longer shipping open source, stable releases
Looks like the CNCF waved them through Graduation anyway; let's look at the policies from July 28, 2021, when they were deemed "Graduated".
All maintainers of the Linkerd project had @buoyant.io email addresses. [0] They do list 4 other members of a "Steering Committee", but Linkerd's GOVERNANCE.md gives all of the power to maintainers: [1]
> Ideally, all project decisions are resolved by maintainer consensus. If this is not possible, maintainers may call a vote. The voting process is a simple majority in which each maintainer receives one vote.
And CNCF Graduation policy says a project must "Have committers from at least two organizations" [2]. So it appears that the CNCF accepted the "Steering Committee" as an acceptable 2nd committer, even though the Governance policy still gave the maintainers all of the power.
I would like to know if the Steering Committee voted to remove stable releases from an un-biased position acting in the best interest of the project, or if they were simply ignored or not even advised on the decision.
I'm all for Buoyant doing what they need to do to make money and survive as a company. But at that point, my opinion is that they should withdraw the project from the CNCF and stop pretending the foundation has any influence on the project's governance.
[0] https://github.com/linkerd/linkerd2/blob/489ca1e3189b6a5289d...
-
Ultimate EKS Baseline Cluster: Part 1 - Provision EKS
From here, we can explore other developments and tutorials on Kubernetes, such as o11y or observability (PLG, ELK, ELF, TICK, Jaeger, Pyroscope), service mesh (Linkerd, Istio, NSM, Consul Connect, Cilium), and progressive delivery (ArgoCD, FluxCD, Spinnaker).
-
Istio moved to CNCF Graduation stage
https://linkerd.io/ is a much lighter-weight alternative, but you do still get some of the fancy things like mTLS without needing any manual configuration. Install it, label your namespaces, and let it do its thing!
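Concretely, opting a namespace into Linkerd's automatic proxy injection is done with an annotation on the namespace. A minimal sketch (the namespace name is a hypothetical placeholder):

```yaml
# Namespace opted into Linkerd's automatic proxy injection.
# "my-app" is a hypothetical placeholder.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    linkerd.io/inject: enabled
```

New pods created in this namespace then get the Linkerd sidecar proxy injected automatically, which is what provides mTLS between them.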
-
Custom Authorization
Would it be possible to create a custom extension with code that authorizes traffic based on my custom access token?
-
API release strategies with API Gateway
Open-source API gateway (Apache APISIX, Traefik) and service mesh (Istio, Linkerd) solutions are capable of splitting traffic and implementing patterns like canary releases and blue-green deployments. With canary testing, you can critically examine a new release of an API by exposing it to only a small portion of your user base. We will cover canary releases in the next section.
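As an illustration of weight-based traffic splitting, here is a hedged Istio VirtualService sketch; the service name, subset names, and weights are hypothetical, and the subsets would need a matching DestinationRule.

```yaml
# Istio VirtualService splitting traffic 90/10 between a stable
# and a canary subset. Names and weights are hypothetical.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-api
spec:
  hosts:
    - my-api
  http:
    - route:
        - destination:
            host: my-api
            subset: stable
          weight: 90      # 90% of traffic to the stable release
        - destination:
            host: my-api
            subset: canary
          weight: 10      # 10% canary slice
```

Shifting the canary to full rollout is then a matter of adjusting the weights and re-applying the resource.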
-
GKE with Consul Service Mesh
I have experimented with other service meshes and I was able to get up to speed quickly: Linkerd = 1 day, Istio = 3 days, NGINX Service Mesh = 5 days, but Consul Connect service mesh took at least 11 days to get off the ground. This is by far the most complex solution available.
-
How is a service mesh implemented on low level?
https://github.com/linkerd/linkerd2 (random example)
- Kubernetes operator written in rust
-
What is a service mesh?
Out of the number of service mesh solutions that exist, the most popular open source ones are: Linkerd, Istio, and Consul. Here at Koyeb, we are using Kuma.
What are some alternatives?
litmus - Litmus helps SREs and developers practice chaos engineering in a Cloud-native way. Chaos experiments are published at the ChaosHub (https://hub.litmuschaos.io). Community notes are at https://hackmd.io/a4Zu_sH4TZGeih-xCimi3Q
Zone of Control - ⬡ Zone of Control is a hexagonal turn-based strategy game written in Rust. [DISCONTINUED]
litmus - A fast Python HTTP server inspired by japronto, written in Rust.
Parallel
chaosmonkey - Chaos Monkey is a resiliency tool that helps applications tolerate random instance failures.
Fractalide - Reusable Reproducible Composable Software
postgres-operator - Postgres operator creates and manages PostgreSQL clusters running in Kubernetes
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
chaosblade-exec-jvm - Chaosblade executor for chaos experiments on Java applications
istio - Connect, secure, control, and observe services.
sandbox-operator - A Kubernetes operator for creating isolated environments
traefik - The Cloud Native Application Proxy