terraform-provider-libvirt vs rancher

| | terraform-provider-libvirt | rancher |
|---|---|---|
| Mentions | 13 | 89 |
| Stars | 1,513 | 22,546 |
| Growth | - | 0.5% |
| Activity | 6.8 | 9.9 |
| Latest commit | 13 days ago | 3 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
terraform-provider-libvirt
- What do y'all use to provision KVM VM's?
-
libvirt-k8s-provisioner - Ansible and Terraform to build a cluster from scratch in less than 10 minutes on KVM - Updated for 1.26
libvirt-terraform-provider (based on https://github.com/dmacvicar/terraform-provider-libvirt)
- NixOS 22.11 “Raccoon” Released
-
libvirt-ocp4-provisioner - Provision an OCP 4.x.y cluster in minutes with Ansible, now with Single Node OCP support!
Hi guys! I wanted to share with you a tool that provisions a fully working OCP 4.x.y cluster in minutes, using Ansible for automation, libvirt as the virtualization provider, and Terraform for VM templating and creation: https://github.com/kubealex/libvirt-ocp4-provisioner

It takes care of all the infrastructure provisioning and the OCP machine provisioning, starting and completing the UPI installation of a cluster. (IPI is a work in progress ;) )

To give a quick overview, this project provisions a fully working stable OCP environment consisting of:
- Bastion machine provisioned with:
  - dnsmasq (with an SELinux module, compiled and activated)
  - DHCP based on dnsmasq
  - nginx (for ignition files and RHCOS PXE boot)
  - pxeboot
- Load-balancer machine provisioned with haproxy
- OCP bootstrap machine
- OCP master VM(s)
- OCP worker VM(s)

Since the latest release, it also supports installing SNO on a single host! It also prepares the host machine with the needed packages and configures:
- a dedicated libvirt network (fully customizable)
- a dedicated libvirt storage pool (fully customizable)
- Terraform
- libvirt-terraform-provider (compiled and initialized based on https://github.com/dmacvicar/terraform-provider-libvirt)

PXE is automatic, based on MAC binding for the different OCP node roles, so there is no need to choose anything from the boot menus: you can just run the playbook, grab a beer, and have your fully running OCP 4.9.latest stable cluster up and running. It has been tested on Fedora 3x and CentOS 7/8. Playing around with it is encouraged, and contributions to make it work on other OSes are more than welcome. Hope you enjoy it! Alex
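The "dedicated libvirt network" and "dedicated libvirt storage pool" steps described above can be sketched with the same Terraform provider. This is a hedged, minimal example; the names, domain, address range, and pool path are illustrative assumptions, not the project's actual values:

```hcl
# A NAT-mode libvirt network with its own DHCP and DNS,
# similar to the "fully customizable" network the post describes.
resource "libvirt_network" "ocp" {
  name      = "ocp-net"
  mode      = "nat"
  domain    = "ocp.local"
  addresses = ["192.168.100.0/24"]

  dhcp {
    enabled = true
  }

  dns {
    enabled = true
  }
}

# A directory-backed storage pool dedicated to the cluster's volumes.
resource "libvirt_pool" "ocp" {
  name = "ocp-pool"
  type = "dir"
  path = "/var/lib/libvirt/ocp-pool"
}
```

In the actual project these resources are created by the playbook's generated Terraform code; the sketch only shows the shape of the underlying provider resources.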
-
Need help on Terraform with KVM/Libvirt
I learned and got terraform to work with the KVM/Libvirt provider.
-
Automate creation of KVM VM and Installation of OS
I saw Terraform with the dmacvicar/terraform-provider-libvirt provider, but sadly I never really warmed to it. If someone could explain to me how to set up new images for every VM, I would be very happy; there are also more questions in the pipeline. Sadly, the "documentation" is not really that good. Maybe Terraform is also the wrong application for me. I'm a little lost, because I thought Terraform would be the big solution I want and need, but so far it hasn't been.
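The question above, giving every VM its own fresh image, is usually handled in this provider by downloading a base image once and cloning it into a per-VM volume via `base_volume_id`. A minimal, hedged sketch; the pool name, image URL, and VM sizing are assumptions:

```hcl
terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

# Base cloud image, downloaded once into the pool.
resource "libvirt_volume" "base" {
  name   = "debian-base.qcow2"
  pool   = "default"
  source = "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2"
  format = "qcow2"
}

# Each VM gets its own copy-on-write volume backed by the base image,
# so no VM ever writes to the shared image.
resource "libvirt_volume" "disk" {
  count          = 2
  name           = "vm-${count.index}.qcow2"
  pool           = "default"
  base_volume_id = libvirt_volume.base.id
}

resource "libvirt_domain" "vm" {
  count  = 2
  name   = "vm-${count.index}"
  memory = 2048
  vcpu   = 2

  disk {
    volume_id = libvirt_volume.disk[count.index].id
  }

  network_interface {
    network_name = "default"
  }
}
```

Raising `count` adds more VMs, each with its own independent disk cloned from the same base.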
-
Terraform Persistent Storage
It looks like there was an issue dealing with "attaching an existing disk" to a terraform created VM. That's here: https://github.com/dmacvicar/terraform-provider-libvirt/issues/688
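One workaround discussed around that issue is to attach the pre-existing disk by absolute path rather than as a Terraform-managed volume, so Terraform never tries to create or destroy it. A hedged sketch; the names, sizes, and paths are assumptions, and whether it fits depends on the exact scenario in the issue:

```hcl
# OS disk created and managed by Terraform as usual.
resource "libvirt_volume" "os" {
  name = "db-os.qcow2"
  pool = "default"
  size = 10 * 1024 * 1024 * 1024 # 10 GiB
}

resource "libvirt_domain" "db" {
  name   = "db-vm"
  memory = 4096
  vcpu   = 2

  disk {
    volume_id = libvirt_volume.os.id
  }

  # Pre-existing data disk referenced by path: Terraform attaches it
  # to the domain but does not manage the file's lifecycle.
  disk {
    file = "/var/lib/libvirt/images/existing-data.qcow2"
  }
}
```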
-
Those of you running a home cluster that is NOT comprised of RasPis, what hardware are you using?
Nice. I’m straight KVM as it’s a mirror of work (my Lab) and I’m using the terraform-provider-libvirt provider. 20 minutes to fully build a site. Pretty cool.
-
Provision a full functional cluster in less than 10 minutes! libvirt-k8s-provisioner
libvirt-terraform-provider (compiled and initialized based on https://github.com/dmacvicar/terraform-provider-libvirt)
- QEMU Version 6.0.0 Released
rancher
-
OpenTF Announces Fork of Terraform
Did something happen to the Apache 2 rancher? https://github.com/rancher/rancher/blob/v2.7.5/LICENSE RKE2 is similarly Apache 2: https://github.com/rancher/rke2/blob/v1.26.7%2Brke2r1/LICENS...
-
Kubernetes / Rancher 2, mongo-replicaset with Local Storage Volume deployment
I followed the four steps (A, B, C, D) below, but the first pod deployment never finishes. What's wrong with it? Logs and result screens are at the end. The detailed configuration can be found here.
- Trouble with RKE2 HA Setup: Part 2
-
Critical vulnerability (CVE-2023-22651) in Rancher 2.7.2 - Update to 2.7.3
CVE-2023-22651 is rated 9.9/10 : https://github.com/rancher/rancher/security/advisories/GHSA-6m9f-pj6w-w87g
-
What's your take if DevOps colleague always got new initiative / idea?
Depends. When I came into my last company I immediately noticed the lack of reproducible environments. Brought this up a few times and was met with some resistance because "we didn't have the capacity"... Until prod went down and it took us 23 hours to bring it back up due to spaghetti terraform.
-
Questions about Rancher Launched/imported AKS
For the latest releases of Rancher: https://github.com/rancher/rancher/releases

When is Rancher 2.7.1 going to be released? The Rancher support matrix for 2.7.1 shows k8s v1.24.6 as the highest supported version, and Azure will drop AKS v1.24 in a few months... Should this be a concern for us? What could happen if we create our cluster with Rancher on an unsupported k8s version, 1.25 for example?
- Rancher 2.7.2 just got released, including support for 1.25. I have, however, run unsupported versions before; unless there are major deprecations in the Kubernetes API, it is fine in my experience.

If we move to AKS imported clusters, and we add node pools and upgrade the cluster, will those changes be reflected in the Rancher platform?
- Yep!

If we face issues running an unsupported k8s version on Rancher-launched k8s clusters, is it possible to remove the cluster from Rancher, do the work we need, and then import it back into the platform?
- Yes, but be careful and test before doing it in prod. Off the top of my head: remove the cluster from Rancher (if imported); if Rancher created it, you might want to revoke Rancher's SA key for the cluster first (so it can't remove the cluster). Delete the cattle-system namespace, and any other cattle-* namespaces you don't want to keep. Then do your thing.

It looks like AKS is faster than Rancher regarding supported Kubernetes versions... We would like to know if Rancher will always keep pace with AKS on the removal of k8s version support and on new versions.
- In my experience, yes. (I've been using Rancher on all three clouds for 4 years now.)

What exactly are the big differences between imported AKS and Rancher-launched AKS? What should we look at, and what issues can we face when using one or the other?
- The main difference is that Rancher will not be able to upgrade the cluster for you. You will have to do that yourself.
-
rancher2_bootstrap.admin resource fail after Kubernetes v1.23.15
```hcl
variable "rancher" {
  type = object({
    namespace = string
    version   = string
    branch    = string
    chart_set = list(object({
      name  = string
      value = string
    }))
  })
  default = {
    namespace = "cattle-system"
    # There is a bug with destroying the cloud credentials in versions 2.6.9
    # through 2.7.1; it will be fixed in the next release, 2.7.2.
    # See https://github.com/rancher/rancher/issues/39300
    version = "2.7.0"
    branch  = "stable"
    chart_set = [
      {
        name  = "replicas"
        value = 3
      },
      {
        name  = "ingress.ingressClassName"
        value = "nginx-external"
      },
      {
        name  = "ingress.tls.source"
        value = "rancher"
      },
      # There is a bug with the uninstallation of Rancher due to the missing
      # priorityClassName of rancher-webhook; priorityClassName needs to be set.
      # See https://github.com/rancher/rancher/issues/40935
      {
        name  = "priorityClassName"
        value = "system-node-critical"
      }
    ]
  }
  description = "Rancher Helm chart properties."
}
```
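For context, a variable shaped like the one above is typically consumed by a `helm_release` resource with a dynamic `set` block. This is a hedged sketch of how that consumption might look; the release name and the assumption that `branch` selects the Rancher chart repository are illustrative:

```hcl
resource "helm_release" "rancher" {
  name       = "rancher"
  namespace  = var.rancher.namespace
  repository = "https://releases.rancher.com/server-charts/${var.rancher.branch}"
  chart      = "rancher"
  version    = var.rancher.version

  # Expand each entry of chart_set into a set {} block,
  # e.g. replicas=3, ingress.tls.source=rancher, ...
  dynamic "set" {
    for_each = var.rancher.chart_set
    content {
      name  = set.value.name
      value = set.value.value
    }
  }
}
```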
-
Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow
When I searched DuckDuckGo instead, the 12th link actually had the real answer. It's in this issue on Rancher's GitHub. It turns out the Rancher admin needs to be in all of the Keycloak groups they want to show up in the auto-populated picklist in Rancher. Being a Keycloak admin, and even creating the groups, isn't enough. Frustratingly, the "caveat" note the Rancher developer points to, which says this, only appears in the guide for setting up Keycloak with SAML, but apparently it also applies to OIDC.
-
How to enable TLS 1.3 protocol
Explicitly set TLS 1.3 in Rancher, though it could be a bug in Rancher: https://github.com/rancher/rancher/issues/35654
-
Rancher deployment, hanging on login and setup pages
Thanks. Yeah looks like this might work: https://github.com/rancher/rancher/releases/tag/v2.7.2-rc3
What are some alternatives?
UTM - Virtual machines for iOS and macOS
podman - Podman: A tool for managing OCI containers and pods.
terraform-provider-proxmox - Terraform provider plugin for proxmox
lens - Lens - The way the world runs Kubernetes
terraform-provider-rancher2 - Terraform Rancher2 provider
microk8s - MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
QEMU - Official QEMU mirror. Please see https://www.qemu.org/contribute/ for how to submit changes to QEMU. Pull Requests are ignored. Please only use release tarballs from the QEMU website.
kubesphere - The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management ⎈ 🖥 ☁️
libvirt-k8s-provisioner - Automate your k8s installation
cluster-api - Home for Cluster API, a subproject of sig-cluster-lifecycle
xemu - Original Xbox Emulator for Windows, macOS, and Linux (Active Development)
kubespray - Deploy a Production Ready Kubernetes Cluster