dynamic-localpv-provisioner
Mayastor
| | dynamic-localpv-provisioner | Mayastor |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 126 | 636 |
| Growth | 9.5% | 5.2% |
| Activity | 5.9 | 9.3 |
| Last Commit | 11 days ago | 8 days ago |
| Language | Go | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dynamic-localpv-provisioner
-
Using Kaniko to Build and Publish container image with Github action on Github Self-hosted Runners
```yaml
# gha-runner-scale-set-value.yml
githubConfigUrl: "https://github.com/myorg/myrepo"
githubConfigSecret:
  github_token: "my-PAT"
## maxRunners is the max number of runners the autoscaling runner set will scale up to.
maxRunners: 5
## minRunners is the min number of idle runners. The target number of runners created will be
## calculated as a sum of minRunners and the number of jobs assigned to the scale set.
minRunners: 1
containerMode:
  type: "kubernetes"  ## type can be set to dind or kubernetes
  ## the following is required when containerMode.type=kubernetes
  kubernetesModeWorkVolumeClaim:
    accessModes: ["ReadWriteOnce"]
    # For local testing, use https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/quickstart.md
    # to provide dynamically provisioned volumes with storageClassName: openebs-hostpath
    storageClassName: "managed-csi"
    resources:
      requests:
        storage: 2Gi
template:
  spec:
    securityContext:
      ## needed to resolve permission issues with the mounted volume:
      ## https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/troubleshooting-actions-runner-controller-errors#error-access-to-the-path-homerunner_work_tool-is-denied
      fsGroup: 123
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        env:
          ## To allow jobs without a job container to run, set
          ## ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER to false on your runner
          ## container. This instructs the runner to disable this check.
          - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
            value: "false"
    volumes:
      - name: work
        ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes: ["ReadWriteOnce"]
              storageClassName: "managed-csi"
              resources:
                requests:
                  storage: 2Gi
```
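A values file like the one above is applied when installing the scale set via its Helm chart. A minimal sketch, assuming the release name `arc-runner-set` and namespace `arc-runners` (both arbitrary names, not from the original post):

```shell
# Install the runner scale set with the values file shown above.
# The OCI chart URL is the one published by GitHub's actions-runner-controller project.
helm install arc-runner-set \
  --namespace arc-runners --create-namespace \
  -f gha-runner-scale-set-value.yml \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
```

Note that the controller itself (`gha-runner-scale-set-controller`) must already be running in the cluster before installing a scale set.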
-
A local maximum on bare metal k8s storage? OpenEBS ZFS LocalPV + Rancher Longhorn
Ahhhh, so you're using OpenEBS LocalPV (openebs/dynamic-localpv-provisioner?). Yeah, I couldn't go with that since it didn't properly limit space, but if you're running the DBs then you have control of that stuff at a higher level (and it's probably good not to be too strict, so as not to hurt user workloads). XFS supposedly does the limiting now if you set up the underlying storage properly, but I couldn't get it to work.
-
Why OpenEBS 3.0 for Kubernetes and Storage?
OpenEBS Hostpath LocalPV (declared stable), the first and most widely used LocalPV, now supports enforcing XFS quotas and using a custom node label for node affinity (instead of the default 'kubernetes.io/hostname').
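Both features are configured through the StorageClass. A hedged sketch, based on the dynamic-localpv-provisioner docs; the class name, base path, grace percentages, and the `openebs.io/rack` label are illustrative assumptions, not values from the original post:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-xfs   # hypothetical name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local"
      # Enforce capacity limits via XFS project quotas; requires the
      # BasePath to sit on an XFS filesystem mounted with pquota/prjquota.
      - name: XFSQuota
        enabled: "true"
        data:
          softLimitGrace: "80%"
          hardLimitGrace: "85%"
      # Pin volumes by a custom node label instead of the default
      # kubernetes.io/hostname.
      - name: NodeAffinityLabels
        list:
          - "openebs.io/rack"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```

`WaitForFirstConsumer` matters here: the provisioner picks the node (and thus the affinity label value) from wherever the first consuming pod is scheduled.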
Mayastor
-
Open source cloud file system. POSIX, HDFS and S3 compatible
What I really want is a filesystem I can span across geographically remote nodes that's transparently compatible. I should just be able to chuck files into it from my NAS like any other. I think Mayastor [1] might get some of the way there?
[1] https://github.com/openebs/mayastor
-
Looking for distributed file system with native Windows client.
Since you're using NVMe drives, there is some Intel tech that has people making some outrageous claims about Mayastor. It's primarily used in OpenEBS for Kubernetes clusters, but from what I've seen, it looks possible to strip it down to the bare essentials to serve blocks.
-
My self-hosting infrastructure, fully automated from empty disk to operating services.
I use Longhorn for my setup; you can check out the config here. But Mayastor just released v1.0, so I'll try that.
-
Why OpenEBS 3.0 for Kubernetes and Storage?
Advances in OpenEBS 3.0 in the vertical dimension, including additional resilience with performance via Mayastor (beta), include:
- Mayastor – cloud-native declarative data plane written in Rust
-
Best Open-Source Distributed Parallel Storage Option for an AI/ML Cluster?
Have you tried OpenEBS? These two have replication/HA features. https://github.com/openebs/Mayastor https://github.com/openebs/cstor-operators
What are some alternatives?
dynamic-nfs-provisioner - Operator for dynamically provisioning an NFS server on any Kubernetes Persistent Volume. Also creates an NFS volume on the dynamically provisioned server for enabling Kubernetes RWX volumes.
cstor-operators - Collection of OpenEBS cStor Data Engine Operators
openebs - Most popular & widely deployed Open Source Container Native Storage platform for Stateful Persistent Applications on Kubernetes.
jiva-operator - Kubernetes Operator for managing Jiva Volumes via custom resource.
lvm-localpv - Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes, integrated with a backend LVM2 data storage stack.
rawfile-localpv - Dynamically deploy Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes, provisioned from a raw-device file loop-mounted on local hostpath storage.
device-localpv - CSI Driver for using Local Block Devices
zfs-localpv - Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes, integrated with a backend ZFS data storage stack.
sidero - Sidero Metal is a bare metal provisioning system with support for Kubernetes Cluster API.