zfs-localpv vs Concourse

| | zfs-localpv | Concourse |
|---|---|---|
| Mentions | 12 | 47 |
| Stars | 378 | 7,205 |
| Growth | 6.3% | 0.8% |
| Activity | 7.6 | 9.0 |
| Last commit | 10 days ago | 6 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zfs-localpv
-
ZFS 2.2.0 (RC): Block Cloning merged
I use it in Kubernetes via https://github.com/openebs/zfs-localpv
The PersistentVolume API is a nice way to divvy up a shared resource across different teams, and using ZFS for that gives us the snapshotting, deduplication, and compression for free. For our workloads, it benchmarked faster than XFS so it was a no-brainer.
- openebs/zfs-localpv: CSI Driver for dynamic provisioning of Persistent Local Volumes for Kubernetes using ZFS.
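The snapshotting, deduplication, and compression mentioned above are exposed through StorageClass parameters in zfs-localpv. A minimal sketch based on the project's README; the pool name `zfspv-pool`, the node name, and the parameter values are placeholders for illustration:

```yaml
# Hypothetical StorageClass for openebs/zfs-localpv.
# "poolname" must match an existing ZFS pool on the node;
# compression and dedup map to the corresponding ZFS dataset properties.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"
  fstype: "zfs"
  compression: "on"
  dedup: "on"
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
    - node-1
```

Because the volumes are node-local, the `allowedTopologies` block pins provisioning to nodes that actually have the ZFS pool.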
-
OpenEBS on MicroK8S on Hetzner
Over the last few months I experimented more and more with all the OpenEBS solutions that fit a small Kubernetes cluster, using MicroK8S and Hetzner Cloud for real-world experience.
- Openebs ?? Or equivalent
-
Network Storage on On-Prem Barebones Machine
I would investigate https://openebs.io/, https://portworx.com/, or https://longhorn.io/. If you are forced to, you can mount iSCSI on the kubelet and feed it to one of those solutions. Keep in mind most of the big players buy some sort of managed solution that you can point a CSI driver at, like Trident: https://netapp-trident.readthedocs.io
-
Ask HN: What are some fun projects to run on a home K8s cluster?
What are some cool projects to self-host on a home Raspberry Pi (64-bit) Kubernetes cluster (Helm charts)? arm64 support is a must. A lot of projects only build amd64 Docker containers, which don't run on my cluster.
I currently run:
- openebs (provides abstraction for using local k8s worker disks as PVC mounts when running on-prem) -- https://openebs.io/
-
Finally got around to doing that Ceph on ZFS experiment
I didn't set anything actually -- I need to look into whether OpenEBS ZFS LocalPV can facilitate passing ZVOL options (I don't think it can just yet). The only tuning I did on the storage class was the usual ZFS-level options.
-
My self-hosting infrastructure, fully automated
What do you use to provision Kubernetes persistent volumes on bare metal? I'm looking at OpenEBS (https://openebs.io/).
Also, when you bump the image tag in a git commit for a given helm chart, how does that get deployed? Is it automatic, or do you manually run helm upgrade commands?
-
Jinja2 not formatting my text correctly. Any advice?
```python
ListItem(
    'Kubernetes', 'https://kubernetes.io/',
    'Container Engines and Orchestration',
    """Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management.""",
),
ListItem(
    'Podman', 'https://podman.io/',
    'Container Engines and Orchestration',
    """Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images.""",
),
# Data Storage :: Block Storage
ListItem(
    'Amazon EBS', 'https://aws.amazon.com/ebs/',
    'Data Storage :: Block Storage',
    """Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, scalable, high-performance block-storage service designed for Amazon Elastic Compute Cloud (Amazon EC2).""",
),
ListItem(
    'OpenEBS', 'https://openebs.io/',
    'Data Storage :: Block Storage',
    """OpenESB is a Java-based open-source enterprise service bus. It allows you to integrate legacy systems, external and internal partners and new development in your Business Process.""",
),
# Data Storage :: Cluster Storage
ListItem(
    'Ceph', 'https://ceph.io/en/',
    'Data Storage :: Cluster Storage',
    """Ceph is an open-source software storage platform, implements object storage on a single distributed computer cluster, and provides 3-in-1 interfaces for object-, block- and file-level storage.""",
),
ListItem(
    'Hadoop Distributed File System', 'https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html',
    'Data Storage :: Cluster Storage',
    """The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware.""",
),
# Data Storage :: Object Storage
ListItem(
    'Amazon S3', 'https://aws.amazon.com/s3/',
    'Data Storage :: Object Storage',
    """Amazon S3 or Amazon Simple Storage Service is a service offered by Amazon Web Services that provides scalable object storage through a web service interface.""",
)
```
-
Building a "complete" cluster locally
Ideas from my Kubernetes experience:
- Cert-Manager is very popular and almost a must-have if you terminate SSL inside the cluster
- Backups using Velero
- A dashboard/UI is actually very helpful to quickly browse resources; client tools like k9s are fine too
- Secret management: Bitnami Sealed Secrets is the second big project in that space
- I would add Loki to aggregate logs
- Never heard of Ory. Usually I see [dex](https://dexidp.io/) or Keycloak used for authentication
- I like to run OpenEBS as in-cluster storage
- Istio isn't compatible with the upcoming Service Mesh Interface (I think), so the trend seems to be toward Linkerd
- Some operator to deploy your favorite database is also a nice learning exercise
Concourse
-
Elm 2023, a year in review
Ableton ⬩ Acima ⬩ ACKO ⬩ ActiveState ⬩ Adrima ⬩ AJR International ⬩ Alma ⬩ Astrosat ⬩ Ava ⬩ Avetta ⬩ Azara ⬩ Barmenia ⬩ Basiq ⬩ Beautiful Destinations ⬩ BEC Systems ⬩ Bekk ⬩ Bellroy ⬩ Bendyworks ⬩ Bernoulli Finance ⬩ Blue Fog Training ⬩ BravoTran ⬩ Brilliant ⬩ Budapest School ⬩ Buildr ⬩ Cachix ⬩ CalculoJuridico ⬩ CareRev ⬩ CARFAX ⬩ Caribou ⬩ carwow ⬩ CBANC ⬩ CircuitHub ⬩ CN Group CZ ⬩ CoinTracking ⬩ Concourse CI ⬩ Consensys ⬩ Cornell Tech ⬩ Corvus ⬩ Crowdstrike ⬩ Culture Amp ⬩ Day One ⬩ Deepgram ⬩ diesdas.digital ⬩ Dividat ⬩ Driebit ⬩ Drip ⬩ Emirates ⬩ eSpark ⬩ EXR ⬩ Featurespace ⬩ Field 33 ⬩ Fission ⬩ Flint ⬩ Folq ⬩ Ford ⬩ Forsikring ⬩ Foxhound Systems ⬩ Futurice ⬩ FörsäkringsGirot ⬩ Generative ⬩ Genesys ⬩ Geora ⬩ Gizra ⬩ GWI ⬩ HAMBS ⬩ Hatch ⬩ Hearken ⬩ hello RSE ⬩ HubTran ⬩ IBM ⬩ Idein ⬩ Illuminate ⬩ Improbable ⬩ Innovation through understanding ⬩ Insurello ⬩ iwantmyname ⬩ jambit ⬩ Jobvite ⬩ KOVnet ⬩ Kulkul ⬩ Logistically ⬩ Luko ⬩ Metronome Growth Systems ⬩ Microsoft ⬩ MidwayUSA ⬩ Mimo ⬩ Mind Gym ⬩ MindGym ⬩ Next DLP ⬩ NLX ⬩ Nomalab ⬩ Nomi ⬩ NoRedInk ⬩ Novabench ⬩ NZ Herald ⬩ Permutive ⬩ Phrase ⬩ PINATA ⬩ PinMeTo ⬩ Pivotal Tracker ⬩ PowerReviews ⬩ Practle ⬩ Prima ⬩ Rakuten ⬩ Roompact ⬩ SAVR ⬩ Scoville ⬩ Scrive ⬩ Scrivito ⬩ Serenytics ⬩ Smallbrooks ⬩ Snapview ⬩ SoPost ⬩ Splink ⬩ Spottt ⬩ Stax ⬩ Stowga ⬩ StructionSite ⬩ Studyplus For School ⬩ Symbaloo ⬩ Talend ⬩ Tallink & Silja Line ⬩ Test Double ⬩ thoughtbot ⬩ Travel Perk ⬩ TruQu ⬩ TWave ⬩ Tyler ⬩ Uncover ⬩ Unison ⬩ Veeva ⬩ Vendr ⬩ Verity ⬩ Vnator ⬩ Vy ⬩ W&W Interaction Solutions ⬩ Watermark ⬩ Webbhuset ⬩ Wejoinin ⬩ Zalora ⬩ ZEIT.IO ⬩ Zettle
- The worst thing about Jenkins is that it works
- Show HN: Togomak – declarative pipeline orchestrator based on HCL and Terraform
-
GitHub Actions could be so much better
> Why bother, when Dagger caches everything automatically?
The fear with needing to run `npm ci` (or better, `pnpm install`) before running Dagger is about how long that step takes. Sure, in the early days, trying out toy examples, when the only dependencies are from Dagger upstream, it takes very little time at all. But what happens when I start pulling more and more dependencies from the Node ecosystem to build the Dagger pipeline? Your documentation includes examples like pulling in `@google-cloud/run` as a dependency: https://docs.dagger.io/620941/github-google-cloud#step-3-cre... and similar for Azure: https://docs.dagger.io/620301/azure-pipelines-container-inst... . The more dependencies brought in, the longer `npm ci` is going to take on GitHub Actions. And it's pretty predictable that, in a complicated pipeline, the list of dependencies is going to get pretty big: at least a dependency per infrastructure provider we use, plus inevitably all the random Node dependencies that work their way into any Node project, like eslint, dotenv, prettier, testing dependencies... I think I have a reasonable fear that `npm ci` just for the Dagger pipeline will hit multiple minutes, and then developers who expect linting and similar short-run jobs to finish within 30 seconds are going to wonder why they're dealing with this overhead.
It's worth noting that one of Concourse's problems was that, even with webhooks set up for GitHub to notify Concourse to begin a build, Concourse's design required it to discard the contents of the webhook and query the GitHub API for the same information (whether there were new commits) before starting a pipeline and cloning the repository (see: https://github.com/concourse/concourse/issues/2240 ). And that was for a CI/CD system where, for all YAML's faults, one of its strengths is surely that it doesn't require running `npm ci`, with all its associated slowness. So please take it on faith that, if even a relatively small source of latency like that was felt in Concourse, the latency from running `npm ci` will certainly be felt, and Dagger's users (DevOps) will be put in an uncomfortable place where they need to defend the choice of Dagger from their users (developers) who go home and build a toy example on AlternateCI which runs what they need much faster.
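For context, the Concourse setup being described pairs a webhook with a long polling interval on the resource; Concourse still runs its own check against GitHub before a build starts. A sketch of such a resource definition, with the repository URI and token name made up for illustration:

```yaml
# Hypothetical Concourse pipeline fragment: GitHub pings the resource's
# webhook endpoint, but Concourse still performs its own "check" against
# the repository before any build sees the new commit.
resources:
- name: repo
  type: git
  check_every: 24h               # rely on the webhook rather than polling
  webhook_token: ((webhook-token))
  source:
    uri: https://github.com/example/repo.git
    branch: main
```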
> I will concede that Dagger’s clustering capabilities are not great yet
Herein lies my argument. It's not that I'm unconvinced that building pipelines in a general-purpose programming language is a better approach than YAML; it's that building pipelines is tightly coupled with the infrastructure that runs the pipelines. One aspect of that is scaling up compute to meet the requirements dictated by the pipeline. But another aspect is that `npm ci` should not be run before submitting the pipeline code to Dagger, but after submitting the pipeline code to Dagger. Dagger should be responsible for running `npm ci`, just like Concourse was responsible for doing all the interpolation of the `((var))` syntax (i.e. you didn't need to run some kind of templating before submitting the YAML to Concourse). If Dagger is responsible for running `npm ci` (really, `pnpm install`), then it can maintain its own local pnpm store / pipeline dependency caching, which would be much faster, and overcome any shortcomings in the caching system of GitHub Actions or whatever else is triggering it.
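The `((var))` interpolation mentioned above happens server-side, after the pipeline is submitted, so no templating step runs on the submitting machine. A sketch of how it appears in a pipeline (the job, task, and variable names here are made up):

```yaml
# Hypothetical Concourse job: ((image-tag)) is interpolated by Concourse
# itself (e.g. from a credential manager, or from values passed when
# setting the pipeline), not by any tool run before submission.
jobs:
- name: build
  plan:
  - task: compile
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: node
          tag: ((image-tag))
      run:
        path: npm
        args: ["run", "build"]
```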
-
We built the fastest CI in the world. It failed
> Imagine you live in a world where no part of the build has to repeat unless the changes actually impacted it. A world in which all builds happened with automatic parallelism. A world in which you could reproduce very reliably any part of the build on your laptop.
That sounds similar to https://concourse-ci.org/
I quite like it, but it never seemed to gain traction outside of Cloud Foundry.
-
Ask HN: What do you use to run background jobs?
I used Concourse[0] for a while. No real complaints, the visibility is nice but the functionality isn't anything new.
[0] https://concourse-ci.org/
-
How to host React/Next "Cheaply" with a global audience? (NGO needs help)
We run https://concourse-ci.org/ on our own hardware at our office. (as a side note, running your own hardware, you realise just how abysmally slow most cloud servers are.)
-
What are some good self-hosted CI/CD tools where pipeline steps run in docker containers?
Concourse: https://concourse-ci.org
- JSON vs XML
-
Cicada - Build CI pipelines using TypeScript
We use https://concourse-ci.org/ at the moment and have been reasonably happy with it; however, it currently only supports Linux containers, not Windows containers. (macOS doesn't have a containers primitive yet, unfortunately.)
What are some alternatives?
longhorn - Cloud-Native distributed storage built on and for Kubernetes
drone - Gitness is an Open Source developer platform with Source Control management, Continuous Integration and Continuous Delivery. [Moved to: https://github.com/harness/gitness]
democratic-csi - csi storage for container orchestration systems
GitlabCi
lvm-localpv - Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes that is integrated with a backend LVM2 data storage stack.
woodpecker - Woodpecker is a simple yet powerful CI/CD engine with great extensibility.
k3s - Lightweight Kubernetes
Jenkins - A static site for the Jenkins automation server
Mayastor - Dynamically provision Stateful Persistent Replicated Cluster-wide Fabric Volumes & Filesystems for Kubernetes that is provisioned from an optimized NVME SPDK backend data storage stack.
Jenkins - Jenkins automation server
rook - Storage Orchestration for Kubernetes
Buildbot - Python-based continuous integration testing framework; your pull requests are more than welcome!