terraform-provider-akamai
terraform-provider-kubernetes
| | terraform-provider-akamai | terraform-provider-kubernetes |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 104 | 1,541 |
| Stars growth (month over month) | 1.9% | 0.5% |
| Activity | 9.4 | 9.0 |
| Latest commit | 10 days ago | 4 days ago |
| Language | Go | Go |
| License | Mozilla Public License 2.0 | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
terraform-provider-akamai
- slow Terraform tests
- Shifting Akamai to the left using Terraform
As a result, the edge hostname that is created is not managed via Terraform. Most of the edge hostname attributes hardly ever need to be changed, but for ip_behavior this can be a problem (GitHub issue).
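One way around this is to declare the edge hostname as its own resource so that ip_behavior is tracked in Terraform state. A minimal sketch, assuming the provider's akamai_edge_hostname resource; all IDs and hostnames below are placeholders, not values from the original post:

```hcl
# Hedged sketch: manage the edge hostname explicitly so ip_behavior
# is part of Terraform state. IDs and hostnames are placeholders.
resource "akamai_edge_hostname" "example" {
  contract_id   = "ctr_C-0N7RAC7"   # placeholder contract ID
  group_id      = "grp_12345"       # placeholder group ID
  product_id    = "prd_SPM"         # placeholder product ID
  edge_hostname = "www.example.com.edgesuite.net"
  ip_behavior   = "IPV6_COMPLIANCE" # the attribute that is hard to change after creation
}
```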
- Terraform a long Akamai SPF Text record
I think this is the correct code for where this is actually getting parsed for the provider: https://github.com/akamai/terraform-provider-akamai/blob/7db7bd296039e8c2d9e9d17c0e5a07d3c0a1b297/pkg/providers/dns/resource_akamai_dns_record.go#L2004
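DNS TXT records are limited to 255-character strings, so a long SPF value has to be split into multiple quoted chunks, which is what that parsing code handles. A hedged sketch of a split TXT record, assuming the akamai_dns_record attribute names from the provider docs and placeholder zone/include values:

```hcl
# Hedged sketch: a long SPF value split into <=255-character quoted
# strings. The zone, name, and SPF includes are placeholders.
resource "akamai_dns_record" "spf" {
  zone       = "example.com"
  name       = "example.com"
  recordtype = "TXT"
  ttl        = 3600
  target = [
    "\"v=spf1 include:_spf.example.net \"",
    "\"include:mail.example.org -all\"",
  ]
}
```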
terraform-provider-kubernetes
- Does the kubernetes provider behave differently than other providers?
Now, to be honest, I'm not entirely sure/confident how this works. When I've used this kind of setup, I had two separate workspaces: one for setting up EKS and one for setting up Kubernetes within EKS. I'd apply the EKS workspace first, then use its outputs for the Kubernetes workspace. You can see this pattern specifically outlined in this EKS/k8s example. The Kubernetes provider docs also explicitly warn against creating the cluster in the same module as the Kubernetes provider. So it appears this may work, but it isn't recommended.
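The two-workspace pattern described above can be sketched as follows, assuming the EKS workspace stores its state in S3 and exposes cluster_endpoint, cluster_ca, and cluster_token outputs (the bucket, key, and output names are assumptions, not from the original post):

```hcl
# Hedged sketch: read the EKS workspace's outputs via remote state,
# then configure the Kubernetes provider from them.
data "terraform_remote_state" "eks" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"           # placeholder bucket
    key    = "eks/terraform.tfstate" # placeholder key
    region = "us-east-1"
  }
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.eks.outputs.cluster_endpoint
  cluster_ca_certificate = base64decode(data.terraform_remote_state.eks.outputs.cluster_ca)
  token                  = data.terraform_remote_state.eks.outputs.cluster_token
}
```

Keeping the provider configuration in a separate workspace like this avoids the chicken-and-egg problem of configuring the Kubernetes provider against a cluster that does not exist yet.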
- Name for move from Terraform to Kubernetes Operators
It is a pretty important distinction. Terraform and Kubernetes are fundamentally different in how they work. If you ever try to manage Kubernetes state from Terraform, the differences become very obvious: https://github.com/hashicorp/terraform-provider-kubernetes/issues/1367
- terraform-kubernetes-provider how to create secret from file?
I'm using the terraform kubernetes-provider and I'd like to translate something like this kubectl command into TF:
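The kubectl command itself is not shown in the excerpt, but assuming it was something in the shape of `kubectl create secret generic ... --from-file=...`, the usual translation uses the provider's kubernetes_secret resource with file(); the secret name and file path below are placeholders:

```hcl
# Hedged sketch: the provider base64-encodes values given in `data`
# itself, so file() passes the plaintext contents. Names and paths
# are placeholders.
resource "kubernetes_secret" "example" {
  metadata {
    name = "my-secret"
  }
  data = {
    "creds.json" = file("${path.module}/creds.json")
  }
}
```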
- Share a GPU between pods on AWS EKS
After the resources are provisioned, you might want to run `terraform apply -refresh-only` to refresh your local state, as the creation of some resources changes the state of others within AWS. Also, state differences on `metadata.resource_version` of k8s resources almost always show up after an apply. This seems to be related to this issue.
- Kubernetes provider awfully trigger happy to delete entire state when it can't connect
You can open an issue here: https://github.com/hashicorp/terraform-provider-kubernetes/issues
- What are your experiences in using the Kubernetes and Helm Providers?
We want to do that, but this issue has been a huge blocker for us. You might not hit it unless you’re using AKS, though.
What are some alternatives?
terraform-provider-azurerm - Terraform provider for Azure Resource Manager
azure-service-operator - Azure Service Operator allows you to create Azure resources using kubectl
terraform-provider-libvirt - Terraform provider to provision infrastructure with Linux's KVM using libvirt
terrajet - Generate Crossplane Providers from any Terraform Provider
k8s-device-plugin - NVIDIA device plugin for Kubernetes
asdf-tflint - An asdf plugin for installing terraform-linters/tflint.
aws-virtual-gpu-device-plugin - AWS virtual GPU device plugin that provides the capability to use smaller virtual GPUs for your machine learning inference workloads
asdf-hashicorp - HashiCorp plugin for the asdf version manager
terraform-provider-ovirt - Terraform provider for oVirt 4.x
k2tf - Kubernetes YAML to Terraform HCL converter
terraform-provider-grafana - Terraform Grafana provider
aws-eks-share-gpu - How to share the same GPU between pods on AWS EKS