| | terragrunt-infrastructure-live-example | terraform-aws-eks |
|---|---|---|
| Mentions | 23 | 69 |
| Stars | 731 | 4,179 |
| Growth (stars, month over month) | 2.5% | 1.5% |
| Activity | 2.8 | 8.7 |
| Last commit | about 2 months ago | 20 days ago |
| Language | HCL | HCL |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
terragrunt-infrastructure-live-example
-
Terragrunt for Multi-Region/Multi-Account Deployments
In the official Terragrunt documentation there is a good article about how to set up a Terragrunt project and where to place modules. There is also a repository on GitHub providing an example project showing how the creators recommend setting up Terragrunt. I certainly recommend going through that repository, because it is a good reference point to start from. That said, I like to structure mine a little differently. My recommendation is to have a different AWS account for each environment. Getting a new account is usually easy to accomplish even in a corporate environment (your workplace is most likely using AWS Organizations to manage accounts). Having multiple accounts incurs no additional cost; we only pay for what we use.
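A minimal sketch of the per-account layout described above, with a separate AWS account per environment (all folder and file names here are hypothetical, loosely modeled on the example repository):

```
live/
├── terragrunt.hcl              # root config: remote state, provider generation
├── dev/                        # separate AWS account for dev
│   ├── account.hcl             # account ID, IAM role to assume
│   └── us-east-1/
│       ├── region.hcl
│       └── mysql/
│           └── terragrunt.hcl  # points at a versioned module, sets inputs
└── prod/                       # separate AWS account for prod
    ├── account.hcl
    └── us-east-1/
        └── mysql/
            └── terragrunt.hcl
```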
-
Seamless Cloud Infrastructure: Integrating Terragrunt and Terraform with AWS
NOTE: More information about the terragrunt.hcl file can be found in this example repository.
-
Manage multiple terraform environments in a single terraform workspace state file
Here's a pointer to an example repository with a Terragrunt monorepo (good, easy to manage), where each module called gets its own unique state file (good, smaller blast radius); the tradeoff is learning a new tool and paradigm: https://github.com/gruntwork-io/terragrunt-infrastructure-live-example.
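The one-state-file-per-module behaviour comes from the root Terragrunt configuration; a minimal sketch, assuming an S3 backend (the bucket and lock table names are hypothetical):

```hcl
# Hypothetical root terragrunt.hcl: every module that includes this file
# gets its own state object, keyed by its path within the repository.
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    bucket         = "my-company-terraform-state"                      # assumed bucket name
    key            = "${path_relative_to_include()}/terraform.tfstate" # unique key per module
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"                                 # assumed lock table name
  }
}
```

Because the state key is derived from `path_relative_to_include()`, adding a new module folder automatically gives it an isolated state file with no extra backend configuration.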
-
Migrate from terragrunt to terraform
I also highly recommend checking out how Terragrunt recommends structuring your repo, with even further details on this documentation page.
-
Conditionally set resource provider
https://github.com/gruntwork-io/terragrunt-infrastructure-live-example/blob/master/_envcommon/mysql.hcl https://terragrunt.gruntwork.io/docs/features/keep-your-terraform-code-dry/
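The `_envcommon` pattern linked above keeps the per-component configuration DRY by sharing one file across environments; a minimal sketch of an environment-level file that consumes it (paths and the input name are assumptions):

```hcl
# Hypothetical dev/us-east-1/mysql/terragrunt.hcl: pulls shared settings
# from _envcommon/mysql.hcl and overrides only what differs per environment.
include "root" {
  path = find_in_parent_folders()
}

include "envcommon" {
  path = "${dirname(find_in_parent_folders())}/_envcommon/mysql.hcl"
}

inputs = {
  instance_class = "db.t3.micro" # dev-sized override; assumed variable name
}
```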
-
Deploying globally (to all regions) on AWS with terragrunt
Did you have a look at this example? https://github.com/gruntwork-io/terragrunt-infrastructure-live-example/blob/master/terragrunt.hcl
-
How to structure Terraform with multi-env + multi-regions for TBD in monorepo
-
How do you segregate your dev and prod environments in the repository
Terragrunt! Using a scaffolding approach like this for inspiration https://github.com/gruntwork-io/terragrunt-infrastructure-live-example
-
Terraform / Terragrunt multi-environment - Pass in environment name
I would recommend taking a look at the example repo here: https://github.com/gruntwork-io/terragrunt-infrastructure-live-example. The layout you have doesn't look like a structure that would work well with terragrunt.
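With the recommended layout, a common way to pass the environment name is to read it from a per-environment file rather than threading it through as a variable; a minimal sketch (the `env.hcl` file and `environment` key are assumptions):

```hcl
# Hypothetical child terragrunt.hcl: derive the environment name from the
# directory layout (live/<env>/<region>/<module>) instead of passing it by hand.
locals {
  env_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))
  env      = local.env_vars.locals.environment
}

inputs = {
  environment = local.env # forwarded to the underlying Terraform module
}
```

Each environment folder then carries a tiny `env.hcl` (e.g. `locals { environment = "dev" }`), so modules never hard-code which environment they run in.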
-
How do you structure your terraform state?
A good example is the gruntwork-io/terragrunt-infrastructure-live-example repository, with further details on Terragrunt's own documentation page.
terraform-aws-eks
- Feat: Made it clear that we stand with Ukraine
- Need suggestions for managing eks terraform module
-
What's everyone's favorite EKS Terraform module these days?
The Cloud Posse module was popular, but most have moved to https://github.com/terraform-aws-modules/terraform-aws-eks; EKS Blueprints will also be moving to this module. Use EKS Blueprints v5.
-
The Future of Terraform: ClickOps
That's a very simplistic view. Let's do a small thought exercise. Is this module not infrastructure?
-
Failed to marshal state to json
I think there is an issue with the eks module: https://github.com/terraform-aws-modules/terraform-aws-eks
-
☸️ How to deploy a cost-efficient AWS/EKS Kubernetes cluster using Terraform in 2023
```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = var.cluster_name
  cluster_version = var.kubernetes_version

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  cluster_addons = {
    coredns = {
      most_recent = true
      timeouts = {
        create = "2m" # default 20m. Times out on first launch while being effectively created
      }
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
    aws-ebs-csi-driver = {
      most_recent = true
    }
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  # Self managed node groups will not automatically create the aws-auth configmap so we need to
  create_aws_auth_configmap = true
  manage_aws_auth_configmap = true
  aws_auth_users            = var.aws_auth_users

  enable_irsa = true

  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
    # by default, only https urls can be reached from inside the cluster
    egress_all = {
      description      = "Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
  }

  self_managed_node_group_defaults = {
    # enable discovery of autoscaling groups by cluster-autoscaler
    autoscaling_group_tags = {
      "k8s.io/cluster-autoscaler/enabled" : true,
      "k8s.io/cluster-autoscaler/${var.cluster_name}" : "owned",
    }
    # from https://github.com/terraform-aws-modules/terraform-aws-eks/issues/2207#issuecomment-1220679414
    # to avoid "waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator"
    iam_role_additional_policies = {
      AmazonEBSCSIDriverPolicy = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
    }
  }

  # possible values: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/node_groups.tf
  self_managed_node_groups = {
    default_node_group = {
      create = false
    }
    # fulltime-az-a = {
    #   name                 = "fulltime-az-a"
    #   subnets              = [module.vpc.private_subnets[0]]
    #   instance_type        = "t3.medium"
    #   desired_size         = 1
    #   bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=normal'"
    # }
    spot-az-a = {
      name       = "spot-az-a"
      subnet_ids = [module.vpc.private_subnets[0]] # only one subnet to simplify PV usage
      # availability_zones = ["${var.region}a"] # conflict with previous option. TODO try subnet_ids=null at creation (because at modification it fails)
      desired_size         = 2
      min_size             = 1
      max_size             = 10
      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"

      use_mixed_instances_policy = true
      mixed_instances_policy = {
        instances_distribution = {
          on_demand_base_capacity                  = 0
          on_demand_percentage_above_base_capacity = 0
          spot_allocation_strategy                 = "lowest-price" # "capacity-optimized" described here: https://aws.amazon.com/blogs/compute/introducing-the-capacity-optimized-allocation-strategy-for-amazon-ec2-spot-instances/
        }
        override = [
          { instance_type = "t3.xlarge", weighted_capacity = "1" },
          { instance_type = "t3a.xlarge", weighted_capacity = "1" },
        ]
      }
    }
  }

  tags = local.tags
}
```
-
How are most EKS clusters deployed?
If you want a somewhat viable setup, I'd go for terraform-aws-modules (Anton did an awesome job) and the aws-ia blueprints, especially the multi-tenant ones.
-
I am stuck on learning how to provision K8s in AWS. Security groups? ALB? ACM? R53?
https://github.com/terraform-aws-modules/terraform-aws-eks
-
Deal with external managed resources destruction
I tried using explicit depends_on between my modules, but this practice is not recommended since it causes issues during planning.
-
How to Upgrade EKS Cluster and its Nodes via Terraform without disruption?
If you use https://github.com/terraform-aws-modules/terraform-aws-eks it is designed to upgrade the components in the correct order when the cluster version is changed
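The upgrade amounts to changing a single input; a minimal sketch, assuming the terraform-aws-modules/eks module (the cluster name and version pin are hypothetical):

```hcl
# Minimal sketch: bumping cluster_version upgrades the control plane, and the
# module sequences the dependent components on the next apply.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0" # assumed module version pin

  cluster_name    = "example"
  cluster_version = "1.27" # change from e.g. "1.26", then run `terraform apply`

  # ...VPC, subnet, and node group configuration elided...
}
```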
What are some alternatives?
terragrunt-infrastructure-modules-example - A repo used to show examples file/folder structures you can use with Terragrunt and Terraform
terragrunt-atlantis-config - Generate Atlantis config for Terragrunt projects.
eksctl - The official CLI for Amazon EKS
atlantis - Terraform Pull Request Automation
terraform-aws-cloudwatch - Terraform module to create AWS Cloudwatch resources 🇺🇦
terraform-yaml-stack-config - Terraform module that loads an opinionated 'stack' configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote state outputs for Terraform and helmfile components.
terraform-aws-eks-blueprints - Configure and deploy complete EKS clusters.
modules.tf-demo - Real modules.tf demo (updated May 2021)
eks-alb-istio-with-tls - This repository demonstrates how to configure end-to-end encryption on EKS using a TLS certificate from AWS Certificate Manager, an AWS Application Load Balancer, and Istio as the service mesh.
terraform - Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
terraform-aws-security-group - This terraform module creates set of Security Group and Security Group Rules resources in various combinations.