| | kube-green | argo |
|---|---|---|
| Mentions | 6 | 44 |
| Stars | 1,139 | 15,457 |
| Growth | 3.1% | 1.0% |
| Activity | 8.6 | 9.7 |
| Last commit | 12 days ago | 5 days ago |
| Language | Go | Go |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kube-green
-
Building a Sustainable Web: a practical exploration of Open Source tools and strategies
kube-green, a tool that helps you reduce the carbon footprint of your Kubernetes clusters
-
Would you shut down servers on holiday?
If you're using Kubernetes, look up [kube-green](https://github.com/kube-green/kube-green); I use it with great success (CPU usage drops noticeably, and with it power consumption)
- A K8s operator to reduce CO2 footprint of your clusters
- Show HN: Kube-green, K8s operator to reduce CO2 footprint of your clusters
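The behavior described in these posts (scaling workloads down outside working hours) is configured in kube-green through its SleepInfo custom resource. A minimal sketch, assuming a namespace named `dev`; the schedule values here are illustrative:

```yaml
apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: dev            # kube-green acts on the namespace the SleepInfo lives in
spec:
  weekdays: "1-5"           # Monday through Friday
  sleepAt: "20:00"          # scale Deployments to zero replicas in the evening
  wakeUpAt: "08:00"         # restore the original replica counts in the morning
  timeZone: "Europe/Rome"
```

Applied with `kubectl apply`, the operator then sleeps and wakes the namespace's workloads on that schedule, which is where the CPU and power savings come from.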
argo
-
Data on Kubernetes: Part 4 - Argo Workflows: Simplify parallel jobs: Container-native workflow engine for Kubernetes 🔮
Remember to meet the prerequisites, including the AWS CLI, kubectl, Terraform, and the Argo Workflows CLI.
-
StackStorm – IFTTT for Ops
Like Argo Workflows?
https://github.com/argoproj/argo-workflows
-
Creators of Argo CD Release New OSS Project Kargo for Next Gen Gitops
Dagger looks more comparable to Argo Workflows: https://argoproj.github.io/argo-workflows/ That's the first of the Argo projects, which can run multi-step workflows within containers on Kubernetes.
For what it's worth, my colleagues and I have had great luck with Argo Workflows and wrote up a blog post about some of its advantages a few years ago: https://www.interline.io/blog/scaling-openstreetmap-data-wor...
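For readers unfamiliar with Argo Workflows, the multi-step, container-per-task model these comments describe looks roughly like this minimal Workflow manifest (image and command are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-   # Argo appends a random suffix per run
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: busybox:1.36
        command: [echo]
        args: ["hello from Argo Workflows"]
```

Submitted with `argo submit` or `kubectl create -f`, each template runs as its own pod on the cluster, and multi-step pipelines are built by composing templates into steps or a DAG.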
-
Practical Tips for Refactoring Release CI using GitHub Actions
Despite other alternatives like CircleCI, Travis CI, GitLab CI, or even self-hosted options using open-source projects like Tekton or Argo Workflows, the reason for choosing GitHub Actions was straightforward: GitHub Actions, in conjunction with the GitHub ecosystem, offers a user-friendly experience and access to a rich software marketplace.
-
(Not) to Write a Pipeline
author seems to be describing the kind of patterns you might make with https://argoproj.github.io/argo-workflows/ . or see for example https://github.com/couler-proj/couler , which is an sdk for describing tasks that may be submitted to different workflow engines on the backend.
it's a little confusing to me that the author seems to object to "pipelines" and then equate them with messaging-queues. for me at least, "pipeline" vs "workflow-engine" vs "scheduler" are all basically synonyms in this context. those things may or may not be implemented with a message-queue for persistence, but the persistence layer itself is usually below the level of abstraction that $current_problem is really concerned with. like the author says, eventually you have to track state/timestamps/logs, but you get that from the beginning if you start with a workflow engine.
i agree with author that message-queues should not be a knee-jerk response to most problems because the LoE for edge-cases/observability/monitoring is huge. (maybe reach for a queue only if you may actually overwhelm whatever the "scheduler" can handle.) but don't build the scheduler from scratch either: use argowf, kubeflow, or a more opinionated framework like airflow, mlflow, databricks, AWS Lambda or step-functions. all/any of these should have config or an api that's robust enough to express rate-limit/retry stuff. almost any of these choices has better observability out-of-the-box than you can easily get from a queue. but most importantly, they provide idioms for handling failure that data-science folks and junior devs can work with. the right way to structure code is just much more clear, and things like structuring messages/events, subclassing workers, and repeating/retrying tasks are just harder to mess up.
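As one concrete instance of the rate-limit/retry configuration the comment alludes to, Argo Workflows lets a template declare retries with backoff instead of hand-rolling queue semantics. A sketch (names and values are illustrative):

```yaml
# Fragment of a Workflow template: retry a flaky step up to 3 times
# with exponential backoff.
templates:
  - name: flaky-step
    retryStrategy:
      limit: "3"              # at most 3 retries after the first failure
      retryPolicy: OnFailure  # retry only when the container exits non-zero
      backoff:
        duration: "10s"       # first retry after 10s
        factor: "2"           # then 20s, 40s, ...
    container:
      image: busybox:1.36
      command: [sh, -c, "exit 1"]   # placeholder failing command
```

The failure-handling idiom lives in declarative config, which is exactly the property the comment argues makes these engines easier for junior devs to use safely.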
-
what technologies are people using for job scheduling in/with k8s?
Argo Workflows + Argo Events
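For the job-scheduling use case, Argo Workflows also ships a CronWorkflow resource that plays the role of a Kubernetes CronJob but with full workflow semantics. A minimal sketch (schedule and image are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"        # every day at 02:00
  concurrencyPolicy: Forbid    # skip a run if the previous one is still going
  workflowSpec:
    entrypoint: main
    templates:
      - name: main
        container:
          image: busybox:1.36
          command: [echo]
          args: ["running nightly job"]
```

Argo Events can then be layered on top to trigger workflows from external events rather than only on a cron schedule.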
-
What are some good self-hosted CI/CD tools where pipeline steps run in docker containers?
Drone; or Tekton or Argo Workflows if you're on k8s
-
job scheduling for scientific computing on k8s?
Check out Argo Workflows.
- Orchestration poll
- What's the best way to inject a yaml file into an Argo workflow step?
What are some alternatives?
goldilocks - Get your resource requests "Just Right"
n8n - Fair-code workflow automation platform with native AI capabilities. Combine visual building with custom code, self-host or cloud, 400+ integrations.
Harbor - An open source trusted cloud native registry project that stores, signs, and scans content.
temporal - Temporal service
k8gb - A cloud native Kubernetes Global Balancer
StackStorm - StackStorm (aka "IFTTT for Ops") is event-driven automation for auto-remediation, incident responses, troubleshooting, deployments, and more for DevOps and SREs. Includes rules engine, workflow, 160 integration packs with 6000+ actions (see https://exchange.stackstorm.org) and ChatOps. Installer at https://docs.stackstorm.com/install/index.html