pritunl-k8s-tf-do
argo
| | pritunl-k8s-tf-do | argo |
|---|---|---|
| Mentions | 11 | 43 |
| Stars | 23 | 14,282 |
| Growth | - | 1.5% |
| Activity | 3.6 | 9.8 |
| Latest commit | 6 months ago | 1 day ago |
| Language | HCL | Go |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pritunl-k8s-tf-do
-
Why migrate to GitHub from Jenkins?
I have a full example here of a working GHA pipeline that deploys Terraform infrastructure. This deploys Atlantis, which can then be used to deploy Pritunl VPN. It works almost perfectly, except that Helm YAML-encoded sensitive values are revealed on terraform destroy, so I simply don't keep anything sensitive encoded in YAML.
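One way to avoid the destroy-time leak described above is to pass secrets through the Helm provider's `set_sensitive` block instead of a values YAML, so Terraform redacts them in plan/apply/destroy output. A minimal sketch; the chart repo, value path, and variable name are hypothetical, not taken from the linked pipeline:

```hcl
# Non-sensitive settings can still come from a plain values file;
# secrets go through set_sensitive so they render as (sensitive value).
resource "helm_release" "pritunl" {
  name       = "pritunl"
  repository = "https://example.org/charts" # placeholder repo
  chart      = "pritunl"

  values = [file("${path.module}/values.yaml")]

  set_sensitive {
    name  = "mongodb.auth.rootPassword" # hypothetical value path
    value = var.mongodb_root_password
  }
}

variable "mongodb_root_password" {
  type      = string
  sensitive = true
}
```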
-
[FOR HIRE] Where are the high paying remote DevOps jobs that don't require LeetCode?
Hey now, I've already got a homelab that launches a k8s cluster and installs Pritunl VPN for coffee shop wifi. I do at least understand that tools need a valid use case before being applied, although admittedly installing it on k8s vs just using Nomad or something is more RDD than not for this one.
-
Start date pushed back multiple times at new contract gig. Looking to see what else is out there.
I'm looking for a company that is willing to judge me based on past experience and projects I've already completed. So ideally no LeetCode, and no more take-homes unless you want me to post them publicly. Ideally I'm looking to come in at $160K+ for mid-level or $200K+ for senior, depending on how much of a match there is between me and the position, and a sign-on bonus would be incredible. Hopefully I have enough of a reputation at this point that you've seen my posts and comments in /r/devops and already know what I'm capable of; if not, I'm happy to chat about previous projects in depth and go over what I've worked on. A code review as part of the interview process would be absolutely stellar. This is what I've been working on recently, as well as my "homelab", if you're looking for some specifics. DM or chat with your work email for an official resume.
-
So I've installed grafana, loki, and prometheus on the personal Kubernetes cluster via Terraform. Now what?
Already done, but good call on learning how to create conditionals. I will look into it!
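For readers wondering what "conditionals" means here: in Terraform the usual idiom is a boolean toggle driving `count` (or `for_each`). A minimal sketch with hypothetical names:

```hcl
variable "enable_loki" {
  type    = bool
  default = true
}

# count = 0 when the toggle is off, so the release simply isn't created.
resource "helm_release" "loki" {
  count = var.enable_loki ? 1 : 0

  name       = "loki"
  repository = "https://grafana.github.io/helm-charts"
  chart      = "loki"
}

# Downstream references must index the (possibly empty) resource list.
output "loki_release_name" {
  value = var.enable_loki ? helm_release.loki[0].name : null
}
```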
-
Ask r/kubernetes: What are you working on this week?
Playing around with grafana/loki/prometheus all via Terraform, GitHub Actions, and Atlantis in a public repo.
-
What's the best cloud provider for me to mess around in and learn k8s without accidentally getting charged a lot of dollar?
I set up a whole pipeline to install and configure Pritunl VPN on DigitalOcean and it only costs me like $60/month for a 3 node cluster.
-
I built an open source deployment pipeline of Pritunl to Digital Ocean using Github Actions and Atlantis. User-friendly, open source, VPN on Kubernetes at under $60/month!
https://github.com/autotune/pritunl-k8s-tf-do/blob/master/README.md is the repo. The README should answer any questions about how the pipeline works, but the end result is a Pritunl web GUI listening on port 80 with an ingress route for HTTPS, a Service load balancer that listens for VPN connections, and the ability to connect to said load balancer over the Pritunl VPN client. Note this is missing a few things; for one, you can only have a replica count of 1 in the deployment. I need to figure out how to add HA with the "enterprise" edition at $70/month extra (still relatively cheap for what you get!). But for personal use it should suffice. Also, I tried using an ingress for the VPN itself but couldn't get it working, so I stuck with the Service load balancer instead, which works fine. Any suggestions here would be appreciated!
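For context on the Service-type load balancer mentioned above, this is roughly what it looks like in Terraform. A sketch only: the UDP port (Pritunl's listen port is configurable per-server) and the `app=pritunl` selector are assumptions, not copied from the repo:

```hcl
# LoadBalancer Service exposing the VPN port directly, since ingress
# controllers generally handle HTTP(S), not raw UDP VPN traffic.
resource "kubernetes_service" "pritunl_vpn" {
  metadata {
    name = "pritunl-vpn"
  }

  spec {
    type = "LoadBalancer"

    selector = {
      app = "pritunl" # assumed pod label
    }

    port {
      name        = "vpn"
      protocol    = "UDP"
      port        = 1194 # assumed; set to your Pritunl server port
      target_port = 1194
    }
  }
}
```

This is also why an ingress didn't work for the VPN itself: most ingress implementations only route HTTP/HTTPS, so L4 UDP traffic needs a Service of type LoadBalancer (or NodePort).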
-
Any folks from the zerossl project lurking these forums? Your user signup page cert is expired.
All I know for sure is the one cert I was using with Let's Encrypt kept failing to renew. I just tried it with ZeroSSL since the signup page cert was finally renewed last night; people have generally been happy with them outside this little incident, and it seems to actually be working as expected. The helm release I am using is linked to via tf here and the ingress rules are here.
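For anyone trying the same switch with cert-manager: ZeroSSL's ACME endpoint requires External Account Binding (EAB) credentials, which you generate in their dashboard. A hedged sketch of the issuer, kept in HCL via `kubernetes_manifest`; the email, secret names, and ingress class are placeholders:

```hcl
resource "kubernetes_manifest" "zerossl_issuer" {
  manifest = {
    apiVersion = "cert-manager.io/v1"
    kind       = "ClusterIssuer"
    metadata = {
      name = "zerossl"
    }
    spec = {
      acme = {
        server = "https://acme.zerossl.com/v2/DV90"
        email  = "admin@example.com" # placeholder
        privateKeySecretRef = {
          name = "zerossl-account-key"
        }
        # EAB ties the ACME account to your ZeroSSL account.
        externalAccountBinding = {
          keyID = var.zerossl_eab_kid
          keySecretRef = {
            name = "zerossl-eab-hmac" # Secret holding the EAB HMAC key
            key  = "secret"
          }
        }
        solvers = [{
          http01 = {
            ingress = { class = "nginx" } # assumed ingress class
          }
        }]
      }
    }
  }
}

variable "zerossl_eab_kid" {
  type = string
}
```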
argo
-
StackStorm - IFTTT for Ops
Like Argo Workflows?
https://github.com/argoproj/argo-workflows
-
Creators of Argo CD Release New OSS Project Kargo for Next Gen Gitops
Dagger looks more comparable to Argo Workflows: https://argoproj.github.io/argo-workflows/ That's the first of the Argo projects, which can run multi-step workflows within containers on Kubernetes.
For what it's worth, my colleagues and I have had great luck with Argo Workflows and wrote up a blog post about some of its advantages a few years ago: https://www.interline.io/blog/scaling-openstreetmap-data-wor...
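To make "multi-step workflows within containers" concrete, here is a minimal two-step Workflow. It's expressed through Terraform's `kubernetes_manifest` resource to stay in HCL like the other examples on this page, though in practice the spec is usually written as YAML; names and namespace are illustrative:

```hcl
resource "kubernetes_manifest" "hello_workflow" {
  manifest = {
    apiVersion = "argoproj.io/v1alpha1"
    kind       = "Workflow"
    metadata = {
      name      = "hello-two-step"
      namespace = "argo" # assumed install namespace
    }
    spec = {
      entrypoint = "main"
      templates = [
        {
          name = "main"
          # Each inner list is a parallel group; groups run sequentially.
          steps = [
            [{ name = "step-1", template = "echo" }],
            [{ name = "step-2", template = "echo" }], # runs after step-1
          ]
        },
        {
          # Each step runs as its own container on the cluster.
          name = "echo"
          container = {
            image   = "alpine:3.19"
            command = ["echo", "hello from a workflow step"]
          }
        },
      ]
    }
  }
}
```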
-
Practical Tips for Refactoring Release CI using GitHub Actions
Despite other alternatives like Circle CI, Travis CI, GitLab CI or even self-hosted options using open-source projects like Tekton or Argo Workflow, the reason for choosing GitHub Actions was straightforward: GitHub Actions, in conjunction with the GitHub ecosystem, offers a user-friendly experience and access to a rich software marketplace.
-
(Not) to Write a Pipeline
author seems to be describing the kind of patterns you might make with https://argoproj.github.io/argo-workflows/ . or see for example https://github.com/couler-proj/couler , which is an sdk for describing tasks that may be submitted to different workflow engines on the backend.
it's a little confusing to me that the author seems to object to "pipelines" and then equate them with messaging-queues. for me at least, "pipeline" vs "workflow-engine" vs "scheduler" are all basically synonyms in this context. those things may or may not be implemented with a message-queue for persistence, but the persistence layer itself is usually below the level of abstraction that $current_problem is really concerned with. like the author says, eventually you have to track state/timestamps/logs, but you get that from the beginning if you start with a workflow engine.
i agree with author that message-queues should not be a knee-jerk response to most problems because the LoE for edge-cases/observability/monitoring is huge. (maybe reach for a queue only if you may actually overwhelm whatever the "scheduler" can handle.) but don't build the scheduler from scratch either.. use argowf, kubeflow, or a more opinionated framework like airflow, mlflow, databricks, AWS Lambda or step-functions. all/any of these should have config or api that's robust enough to express rate-limit/retry stuff. almost any of these choices has better observability out-of-the-box than you can easily get from a queue. but most importantly.. they provide idioms for handling failure that data-science folks and junior devs can work with. the right way to structure code is just much more clear and things like structuring messages/events, subclassing workers, repeating/retrying tasks, is just harder to mess up.
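As a concrete instance of those failure-handling idioms, Argo Workflows lets you declare retries and concurrency caps instead of hand-rolling queue redelivery logic. A sketch of the relevant spec fields, shown as HCL objects for consistency with the other examples here; all values and the task command are illustrative:

```hcl
locals {
  # Template-level retry policy: declarative retries with backoff.
  flaky_task_template = {
    name = "flaky-task"
    retryStrategy = {
      limit       = "3"
      retryPolicy = "OnFailure"
      backoff = {
        duration    = "30s"
        factor      = "2" # exponential: 30s, 60s, 120s
        maxDuration = "10m"
      }
    }
    container = {
      image   = "alpine:3.19"
      command = ["sh", "-c", "do-the-thing"] # hypothetical command
    }
  }

  # Workflow-level rate limiting: cap concurrently running pods.
  workflow_spec_extras = {
    parallelism = 5
  }
}
```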
-
what technologies are people using for job scheduling in/with k8s?
Argo Workflows + Argo Events
-
What are some good self-hosted CI/CD tools where pipeline steps run in docker containers?
Drone, or Tekton; Argo Workflows if you're on k8s
-
job scheduling for scientific computing on k8s?
Check out Argo Workflows.
-
Orchestration poll
-
What's the best way to inject a yaml file into an Argo workflow step?
-
Which build system do you use?
go-git has a lot of bugs and is not actively maintained. One bug even affects Argo Workflows and caused our data pipeline to fail unexpectedly (reference: https://github.com/argoproj/argo-workflows/issues/10091)
What are some alternatives?
locust - Write scalable load tests in plain Python
temporal - Temporal service
pritunl-client-electron - Pritunl OpenVPN client
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
k3s - Lightweight Kubernetes
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
beeswithmachineguns - A utility for arming (creating) many bees (micro EC2 instances) to attack (load test) targets (web applications).
flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
predator - A powerful open-source platform for load testing APIs.
StackStorm - StackStorm (aka "IFTTT for Ops") is event-driven automation for auto-remediation, incident responses, troubleshooting, deployments, and more for DevOps and SREs. Includes rules engine, workflow, 160 integration packs with 6000+ actions (see https://exchange.stackstorm.org) and ChatOps. Installer at https://docs.stackstorm.com/install/index.html
thanos - Highly available Prometheus setup with long term storage capabilities. A CNCF Incubating project.
n8n - Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.