terraform-aws-eks
terragrunt-infrastructure-modules-example
| | terraform-aws-eks | terragrunt-infrastructure-modules-example |
|---|---|---|
| Mentions | 69 | 5 |
| Stars | 4,154 | 289 |
| Growth | 2.4% | 3.5% |
| Activity | 8.7 | 4.3 |
| Latest commit | 7 days ago | 7 days ago |
| Language | HCL | HCL |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
terraform-aws-eks
- Feat: Made it clear that we stand with Ukraine
- Need suggestions for managing eks terraform module
- What's everyone's favorite EKS Terraform module these days?
The Cloud Posse module was popular, but most people have moved to https://github.com/terraform-aws-modules/terraform-aws-eks; EKS Blueprints will also be moving to this module. Use EKS Blueprints v5.
- The Future of Terraform: ClickOps
That's a very simplistic view. Let's do a small thought exercise. Is this module not infrastructure?
- Failed to marshal state to json
I think there is an issue with the eks module: https://github.com/terraform-aws-modules/terraform-aws-eks
- ☸️ How to deploy a cost-efficient AWS/EKS Kubernetes cluster using Terraform in 2023
```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = var.cluster_name
  cluster_version = var.kubernetes_version

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  cluster_addons = {
    coredns = {
      most_recent = true
      timeouts = {
        create = "2m" # default 20m. Times out on first launch while being effectively created
      }
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
    aws-ebs-csi-driver = {
      most_recent = true
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  # Self managed node groups will not automatically create the aws-auth configmap so we need to
  create_aws_auth_configmap = true
  manage_aws_auth_configmap = true
  aws_auth_users            = var.aws_auth_users

  enable_irsa = true

  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
    egress_all = { # by default, only https urls can be reached from inside the cluster
      description      = "Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
  }

  self_managed_node_group_defaults = {
    # enable discovery of autoscaling groups by cluster-autoscaler
    autoscaling_group_tags = {
      "k8s.io/cluster-autoscaler/enabled" : true,
      "k8s.io/cluster-autoscaler/${var.cluster_name}" : "owned",
    }
    # from https://github.com/terraform-aws-modules/terraform-aws-eks/issues/2207#issuecomment-1220679414
    # to avoid "waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator"
    iam_role_additional_policies = {
      AmazonEBSCSIDriverPolicy = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
    }
  }

  # possible values : https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/node_groups.tf
  self_managed_node_groups = {
    default_node_group = {
      create = false
    }
    # fulltime-az-a = {
    #   name          = "fulltime-az-a"
    #   subnets       = [module.vpc.private_subnets[0]]
    #   instance_type = "t3.medium"
    #   desired_size  = 1
    #   bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=normal'"
    # }
    spot-az-a = {
      name       = "spot-az-a"
      subnet_ids = [module.vpc.private_subnets[0]] # only one subnet to simplify PV usage
      # availability_zones = ["${var.region}a"] # conflict with previous option. TODO try subnet_ids=null at creation (because at modification it fails)
      desired_size         = 2
      min_size             = 1
      max_size             = 10
      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"

      use_mixed_instances_policy = true
      mixed_instances_policy = {
        instances_distribution = {
          on_demand_base_capacity                  = 0
          on_demand_percentage_above_base_capacity = 0
          spot_allocation_strategy                 = "lowest-price" # "capacity-optimized" described here : https://aws.amazon.com/blogs/compute/introducing-the-capacity-optimized-allocation-strategy-for-amazon-ec2-spot-instances/
        }
        override = [
          { instance_type = "t3.xlarge", weighted_capacity = "1" },
          { instance_type = "t3a.xlarge", weighted_capacity = "1" },
        ]
      }
    }
  }

  tags = local.tags
}
```
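The EKS configuration above references a `module.vpc` that isn't shown in the post. A minimal sketch of what that companion module might look like, using the `terraform-aws-modules/vpc/aws` registry module; the CIDR ranges, AZ list, and NAT settings here are illustrative assumptions, not from the original:

```hcl
# Hypothetical companion VPC module for the EKS example above.
# CIDRs, AZs, and names are assumptions, not from the original post.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = var.cluster_name
  cidr = "10.0.0.0/16"

  azs             = ["${var.region}a", "${var.region}b", "${var.region}c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true # cost-efficient: one shared NAT gateway instead of one per AZ

  # Tags EKS uses to discover subnets for internal/external load balancers
  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }
  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }
}
```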
- How are most EKS clusters deployed?
If you want a somewhat viable setup, I'd go for terraform-aws-modules (Anton did an awesome job) and the aws-ia blueprints, especially the multi-tenant ones.
- I am stuck on learning how to provision K8s in AWS. Security groups? ALB? ACM? R53?
https://github.com/terraform-aws-modules/terraform-aws-eks
- Deal with external managed resources destruction
I tried using an explicit depends_on between my modules, but this practice is not recommended since it causes issues during planning.
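Instead of a module-level `depends_on`, the ordering can usually be expressed implicitly by referencing one module's outputs from the other, which avoids the planning issues mentioned above. A minimal sketch; all module, output, and variable names here are hypothetical:

```hcl
# Implicit dependency: consuming an output makes Terraform order the modules
# correctly at plan time, without a module-level depends_on.
# All names here are hypothetical.
module "network" {
  source = "./modules/network"
}

module "cluster" {
  source = "./modules/cluster"

  # Referencing module.network outputs creates the dependency edge,
  # and lets Terraform narrow the plan to what actually changed.
  vpc_id     = module.network.vpc_id
  subnet_ids = module.network.private_subnet_ids
}
```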
- How to Upgrade EKS Cluster and its Nodes via Terraform without disruption?
If you use https://github.com/terraform-aws-modules/terraform-aws-eks, it is designed to upgrade the components in the correct order when the cluster version is changed.
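With that module, the upgrade itself is just a version bump on one input; a minimal sketch (the cluster name and versions are assumptions for illustration):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "my-cluster" # hypothetical name
  cluster_version = "1.27"       # bumped e.g. from "1.26"; the module updates the
                                 # control plane before the managed node groups
  # ... remaining cluster configuration unchanged ...
}
```

After editing, a `terraform plan` should show only the version-related changes before you apply.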
terragrunt-infrastructure-modules-example
- How to structure Terraform with multi-env + multi-regions for TBD in monorepo
- Terraform - Standards/Development guidelines
- Best practice for structuring terraform repo for services in AWS multi account?
- Multi-account management
Hello TF gurus, we use Terragrunt in our team to manage our TF code and to keep the state config DRY. We started with a couple of AWS accounts and are now responsible for managing 10-12 AWS accounts. We use this folder structure: https://github.com/gruntwork-io/terragrunt-infrastructure-live-example https://github.com/gruntwork-io/terragrunt-infrastructure-modules-example
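The DRY state configuration from those example repos boils down to one root `terragrunt.hcl` that every child module includes. A minimal sketch, assuming an S3 backend; the bucket, region, and lock-table names are placeholders, not from the original:

```hcl
# Root terragrunt.hcl — backend config is generated once here and inherited by
# every child module, keeping state settings DRY across accounts.
# Bucket/table names are illustrative assumptions.
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    bucket         = "my-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}
```

Each child module (e.g. `prod/us-east-1/vpc/terragrunt.hcl`) then pulls this in with an `include` block pointing at `find_in_parent_folders()`, so per-account state paths fall out of the folder structure automatically.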
- multi-account management
What are some alternatives?
eksctl - The official CLI for Amazon EKS
terragrunt-infrastructure-live-example - A repo used to show examples file/folder structures you can use with Terragrunt and Terraform
terraform-aws-cloudwatch - Terraform module to create AWS Cloudwatch resources 🇺🇦
terraform-aws-secure-baseline - Terraform module to set up your AWS account with the secure baseline configuration based on CIS Amazon Web Services Foundations and AWS Foundational Security Best Practices.
terraform-aws-eks-blueprints - Configure and deploy complete EKS clusters.
terraform-best-practices - Terraform Best Practices for AWS users
eks-alb-istio-with-tls - This repository demonstrates how to configure end-to-end encryption on an EKS platform using a TLS certificate from AWS Certificate Manager, an AWS Application Load Balancer, and Istio as a service mesh.
typhoon - Minimal and free Kubernetes distribution with Terraform
terraform-aws-security-group - This Terraform module creates a set of Security Group and Security Group Rules resources in various combinations.
flux2-kustomize-helm-example - A GitOps workflow example for multi-env deployments with Flux, Kustomize and Helm.
eks-v17-v18-migrate - How to migrate from v17 to v18 of `terraform-aws-eks` module
terragrunt-atlantis-config - Generate Atlantis config for Terragrunt projects.