TL;DR:
Clusters can be added or removed by adding or removing cluster modules.
The `kbst` CLI provides scaffolding to make writing the module boilerplate more convenient.
The `kbst` CLI does not make any changes to cloud or Kubernetes resources. It only changes local files, and you can see the changes with `git status`.
To have the changes take effect, you have to commit, push, merge and promote them following the GitOps flow.
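As an illustration, the local part of that flow looks like the sketch below; the branch name and commit message are just examples, and the merge and promote steps depend on your pipeline.

```shell
# the changes are local only, review them first
git status

# commit them on a feature branch and push it
git checkout -b add-eks-cluster
git add .
git commit -m "Add eks_gc0_eu-west-1 cluster"
git push --set-upstream origin add-eks-cluster

# then open a pull request, merge it and promote the change
# through your environments following your GitOps flow
```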
Adding a cluster adds two files to your repository. The first one calls the cluster module, and the second one configures the required providers.
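As a rough sketch of the second file, the providers configuration declares one aliased provider block per cluster, and the alias has to match the `providers` map in the module call. The exact file name and the attributes the Kubernetes provider is wired to vary by Kubestack version, so treat this as illustrative and prefer the file the `kbst` CLI scaffolds for you.

```hcl
# aliased AWS provider for this cluster, the alias matches
# the `providers` map in the cluster module call
provider "aws" {
  alias  = "eks_gc0_eu-west-1"
  region = "eu-west-1"
}

# aliased Kubernetes provider for this cluster, in the scaffolded
# file its credentials are wired to the cluster module's outputs
# (attribute names omitted here, they depend on the Kubestack version)
provider "kubernetes" {
  alias = "eks_gc0_eu-west-1"
}
```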
You can add clusters by writing the Terraform yourself, or you can use the `kbst` CLI to scaffold it.
The CLI will configure the base domain and environments of the new cluster to match your platform automatically.
Adding an EKS cluster requires specifying the name prefix and the region.
```shell
# kbst add cluster eks <name-prefix> <region>
kbst add cluster eks gc0 eu-west-1
```
The `kbst` CLI can also remove a cluster for you.
Removing a cluster will also remove all the cluster's node pools and services.
```shell
# list all platform components
kbst list

aks_gc0_westeurope
eks_gc0_eu-west-1
gke_gc0_europe-west1

# then remove the desired cluster by <name>
kbst remove eks_gc0_eu-west-1
```
Cluster module configuration is provider-specific. See the commented examples below for available attributes.
module "eks_gc0_eu-west-1" { providers = { aws = aws.eks_gc0_eu-west-1 kubernetes = kubernetes.eks_gc0_eu-west-1 }
source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=v0.18.1-beta.0"
configuration = { apps = { # prefix added to workspace and region for the cluster name (required) name_prefix = "gc0"
# part of the fully qualified domain name (required) base_domain = var.base_domain # AWS EC2 instance type to use for default node-pool (required) cluster_instance_type = "t3a.xlarge" # comma-separated list of availability zones to use for default node-pool (required) cluster_availability_zones = "eu-west-1a,eu-west-1b,eu-west-1c"
# desired, min and max number of nodes for the default node-pool (required) cluster_desired_capacity = 3 cluster_min_size = 3 cluster_max_size = 9
# desired Kubernetes version, set to upgrade, downgrades are not supported by EKS # (optional), defaults to latest at creation time cluster_version = "1.25.8"
# whether to encrypt root device volumes of worker nodes # (optional), defaults to true worker_root_device_encrypted = true
# size in GB for root device volumes of worker nodes # (optional), defaults to 20 worker_root_device_volume_size = 20
# sets `mapAccounts` attribute in the `aws-auth` configmap in the `kube-system` namespace # (optional), defaults to null cluster_aws_auth_map_accounts = <<-MAPACCOUNTS - "000000000000" MAPACCOUNTS
# sets `mapRoles` attribute in the `aws-auth` configmap in the `kube-system` namespace # (optional), defaults to null cluster_aws_auth_map_roles = <<-MAPROLES - rolearn: arn:aws:iam::000000000000:role/KubernetesAdmin username: kubernetes-admin groups: - system:masters MAPROLES
# sets `mapUsers` attribute in the `aws-auth` configmap in the `kube-system` namespace # (optional), defaults to null cluster_aws_auth_map_users = <<-MAPUSERS - userarn: arn:aws:iam::000000000000:user/Alice username: alice groups: - system:masters MAPUSERS
# comma separated list of cotrol-plane log types to enable # (optional), defaults to "api,audit,authenticator,controllerManager,scheduler" enabled_cluster_log_types = "audit,authenticator" }
ops = {} }}