actions-runner-controller
home-ops
| | actions-runner-controller | home-ops |
|---|---|---|
| Mentions | 31 | 52 |
| Stars | 4,216 | 1,723 |
| Growth | 3.4% | - |
| Activity | 9.0 | 10.0 |
| Latest commit | 4 days ago | 2 days ago |
| Language | Go | Shell |
| License | Apache License 2.0 | Do What The F*ck You Want To Public License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
actions-runner-controller
-
Using Kaniko to Build and Publish container image with Github action on Github Self-hosted Runners
To set up the self-hosted runner, the Actions Runner Controller (ARC) and a runner scale set will be installed via Helm. This post uses Azure Kubernetes Service and the ARC that is officially maintained by GitHub. There is another ARC that is maintained by the community; you can follow the discussion where GitHub adopted the ARC project into a full GitHub product here
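The Helm-based install described above can be sketched roughly as follows. The OCI chart paths and `--set` keys follow GitHub's ARC quickstart, but the namespaces, release names, and the helper function names here are arbitrary choices for illustration:

```shell
# Sketch of installing GitHub's ARC (controller plus a runner scale set) via Helm.
# Namespaces and release names are placeholders; chart paths follow the quickstart.
ARC_NS="arc-systems"
RUNNERS_NS="arc-runners"
CONTROLLER_CHART="oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller"
RUNNERSET_CHART="oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set"

# Install the ARC controller into its own namespace
install_controller() {
  helm install arc --namespace "$ARC_NS" --create-namespace "$CONTROLLER_CHART"
}

# Install a runner scale set; $1 is the org/repo URL the runners should serve,
# $2 is a GitHub token (a GitHub App is the other supported auth option)
install_runner_set() {
  helm install arc-runner-set --namespace "$RUNNERS_NS" --create-namespace \
    --set githubConfigUrl="$1" \
    --set githubConfigSecret.github_token="$2" \
    "$RUNNERSET_CHART"
}
```

Once both releases are up, a workflow targets the runners with `runs-on: arc-runner-set` (the runner scale set's release name).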
-
Show HN: DimeRun v2 – Run GitHub Actions on AWS EC2
Before this we were using https://github.com/actions/actions-runner-controller, but that runs on K8s instead of VMs. So, along with the common limitations of running CI jobs in K8s/containers, it cannot have exactly the same environment as the official GitHub runners. Maintaining a K8s cluster was also very difficult.
-
Terraform module for scalable GitHub action runners on AWS
ARC is great for running GitHub Actions on Kubernetes:
https://github.com/actions/actions-runner-controller
-
Best CI/CD for AWS services?
Almost all of our CI/CD builds run on GitHub: Cypress tests, deployments via Terraform and Helm to over 25 environments, all backend tests, daily test runs, etc. Overall we were racking up a cost of almost 20k on GitHub. With ARC deployed and using spot instances, I think our total infrastructure costs went up about 4-5k even though we added more actions. If we switched back to their runners we'd probably be around 25k at this point.
-
Running helm from within network
What else needs to be moved to my Artifactory? The charts (https://github.com/actions/actions-runner-controller/tree/master/charts), and if so, as a packaged tar, the entire folder, or anything else? What should the above steps correspond to?
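For an internal registry, Helm's own answer to the "tar or entire folder" question is the packaged `.tgz`: `helm pull` downloads the chart as a tarball and `helm push` uploads that tarball to an OCI registry. A minimal sketch, assuming the community chart's published repo and a hypothetical internal Artifactory URL:

```shell
# Sketch: mirror the community ARC chart into an internal registry.
# The upstream repo URL is the community chart's published index; the
# internal OCI URL and repo name are placeholders.
CHART="actions-runner-controller"
UPSTREAM_REPO="https://actions-runner-controller.github.io/actions-runner-controller"
INTERNAL_OCI="oci://artifactory.example.internal/helm-local"

mirror_chart() {
  helm repo add arc "$UPSTREAM_REPO"
  helm pull arc/"$CHART"                    # downloads <chart>-<version>.tgz
  helm push "$CHART"-*.tgz "$INTERNAL_OCI"  # push the packaged .tgz, not the folder
}
```

`helm push` to an OCI registry requires Helm 3.8+; for a non-OCI Artifactory Helm repo, the `.tgz` can be uploaded via its regular artifact upload instead.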
-
Action-runner-controller & Enterprise Git
You need to use the steps in the repo instead of the steps on the docs if you're using enterprise server.
-
GitHub support for Actions Runner Controller (ARC) emerging in docs!
Honestly not a fan of the GitHub docs... I feel like the ones in the repo are much clearer and easier to understand/read.
-
How much work does it take to operate a self-hosted GitHub runners?
It's pretty easy to set up, honestly. Deploy this on your k8s cluster, https://github.com/actions/actions-runner-controller, plus a RunnerDeployment, and you're good to go.
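The RunnerDeployment mentioned here is the community ARC's custom resource. A minimal example, with the repository name and runner label as placeholders:

```yaml
# Minimal RunnerDeployment for the community ARC
# (actions.summerwind.dev API group); repository and labels are placeholders.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeployment
spec:
  replicas: 2
  template:
    spec:
      repository: my-org/my-repo   # or use `organization:` for org-wide runners
      labels:
        - self-hosted-k8s          # target with `runs-on: self-hosted-k8s`
```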
-
Self-Hosted runner on Kubernetes
Trying to use the Actions Runner Controller (https://github.com/actions/actions-runner-controller) to utilize self-hosted runners. I keep getting this error on the controller.
-
AKS cluster w/ GitHub App and Actions Runner Controller
I'm convinced one (or a combination) of things is happening here with regard to authentication. This GH Enterprise account is configured with SAML; I feel like that is a valid data point. I'm using https://github.com/actions/actions-runner-controller as a reference guide for what I should be doing. I suspect whoever is Owner of this organization has modified what I can do as a user. The steps in the doc where I actually install the application aren't available to me. When configuring the GitHub App I'm given two options. I select the option for "this account only", knowing the documentation says it is possible to use this GitHub App with a repo in the organization as long as I have admin privileges or I'm the owner.
home-ops
-
Ditching PaaS: Why I Went Back to Self-Hosting
These are great operational wins. Agreed very much that having autonomic (can fix itself) systems at your back is a massive game changer. De-crustifies the act of running things.
The other win is that there's a substantial cultural base behind this approach. Folks have been self-hosting for ages, but everyone has their own boutique setup, done their own way. A couple of tools and techniques could be shared, but mostly everyone took blank-slate configs, built their own system up, and added their own monitoring and operational scripts.
https://github.com/onedr0p/home-ops is a set of Helm scripts and other tools that is widely used, and there's a lot more like it. It's a huge build-out, using convention and a common platform to enable portable knowledge and sharing.
Self-hosting did not have intellectual scale-out at its back before Kubernetes came along. Docker and Ansible and others have been around, but there's never been remotely the success there is today in empowering users to set up and run complex services.
We really have clawed our way out of the server-hugging jungle and started building some villages. It's wonderful to see.
-
Homelab setup for Kubernetes training
Going through this repo: https://github.com/onedr0p/home-ops
- Selfhosted k8s for home server?
-
My recently deployed media apps in ArgoCD, migrating from Terraform.
Take a look at my open source GitOps repo managed by Flux here: https://github.com/onedr0p/home-ops
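In a Flux-managed GitOps repo of this kind, each app is typically declared as a HelmRelease that Flux reconciles from Git. A minimal sketch, with the chart, namespace, and source names as placeholders:

```yaml
# Minimal Flux HelmRelease in the style used by GitOps repos like home-ops;
# chart, namespace, and HelmRepository names are placeholders.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: example-app
  namespace: media
spec:
  interval: 30m            # how often Flux re-checks the release
  chart:
    spec:
      chart: example-app
      sourceRef:
        kind: HelmRepository
        name: example-charts
        namespace: flux-system
  values:
    replicaCount: 1
```

Committing a change to `values` is the whole deployment workflow: Flux notices the new revision and upgrades the release.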
- How do You manage Your docker containers configuration?
-
Self Hosted SaaS Alternatives
I'm fully on board with the general idea as a target.
Right now it's for early adopters. Hosting stuff is still a pain, but we are getting better at it: finding stable patterns, paving the path. Hint: it's not doing less, it's not simpler options; it's adopting and making our own industrial-scale tooling. https://github.com/onedr0p/home-ops is a great early and still strong demonstration. The up-front cost of learning is high, but there's the biggest ecosystem of support you can imagine, and once you recognize the patterns you can get into flow states and make stuff happen with extreme leverage, far beyond where humanity has ever been. Building the empowered individual is happening, and we're using stable, good patterns that mean the individual isn't so off on their own doing ops; they'll have a lot more accrued human experience at their back. Their running of services isn't as simple to understand from the start, but it goes much further and is much more mature and well supported in the long run.
- Deploying apache guacamole on k8s
-
My completely automated Homelab featuring Kubernetes
My Kubernetes cluster, deployments, and infrastructure provisioning are all available over here on GitHub.
-
Container Updating Strategies
For example: https://github.com/onedr0p/home-ops/pull/4528
-
Simple self-hosted S3-compatible
I'm running MinIO in my cluster with an NFS backend just fine. You can see my deployment of it here.
What are some alternatives?
helm-charts - Jenkins helm charts
kube-plex - Scalable Plex Media Server on Kubernetes -- dispatch transcode jobs as pods on your cluster!
turnstyle - 🎟️A GitHub Action for serializing workflow runs
cluster-template - A template for deploying a Kubernetes cluster with k3s or Talos
cache - Cache dependencies and build outputs in GitHub Actions
longhorn - Cloud-Native distributed storage built on and for Kubernetes
azure-pipelines-agent - Azure Pipelines Agent 🚀
gocast - GoCast is a tool for controlled BGP route announcements from a host
ghat - 🛕 Reuse GitHub Actions workflows across repositories
motioneye - A web frontend for the motion daemon.
actions-runner-
renovate-helm-releases - Creates Renovate annotations in Flux2 Helm Releases