TL;DR:
- Every Kubestack provisioned cluster comes with a default node-pool.
- The default node-pool is configured as part of the cluster module.
- Additional node-pools can be added or removed by adding or removing node-pool modules.
- The kbst CLI provides scaffolding to make writing the module boilerplate more convenient.
The kbst CLI does not make any changes to cloud or Kubernetes resources. It only changes local files, and you can see the changes with git status. To have the changes take effect, you have to commit, push, merge and promote them following the GitOps flow.
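For example, a minimal flow after scaffolding a change could look like the sketch below; the branch name and commit message are placeholders, and merging and promoting happen through your regular GitOps setup:

```shell
# review the local changes the CLI made
git status

# commit and push them on a branch, then merge and promote
# following your GitOps flow
git checkout -b add-node-pool-extra
git add .
git commit -m "Add extra node-pool to eks_gc0_eu-west-1"
git push origin add-node-pool-extra
```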
Adding a node-pool adds one file to your repository that calls the node-pool module. You can add node-pools by writing the Terraform yourself, or you can use the kbst CLI to scaffold it. The CLI will configure the node-pool module to use the correct provider and attach it to the correct cluster automatically.
Adding a node-pool to your EKS cluster requires specifying the cluster name and the pool name.
```shell
# kbst add node-pool eks <cluster-name> <pool-name>
kbst add node-pool eks eks_gc0_eu-west-1 extra
```
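A scaffolded node-pool module call will, at minimum, pin the cluster's provider alias, the module source and the cluster name. The minimal sketch below illustrates this wiring; the exact contents of the generated file may differ, and the fully commented examples further below list all available attributes:

```hcl
module "eks_gc0_eu-west-1_node_pool_extra" {
  providers = {
    # provider alias of the parent cluster
    aws = aws.eks_gc0_eu-west-1
  }

  source = "github.com/kbst/terraform-kubestack//aws/cluster/node-pool?ref=v0.18.1-beta.0"

  # attach the node-pool to the parent cluster by name
  cluster_name = module.eks_gc0_eu-west-1.current_metadata["name"]

  configuration = {
    apps = {
      # required attributes, see the commented examples below
      name             = "extra"
      instance_types   = "t3a.xlarge,t3a.large"
      desired_capacity = 3
      min_size         = 3
      max_size         = 9
    }

    ops = {}
  }
}
```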
The kbst CLI can also remove a node-pool for you. Keep in mind that removing an entire cluster also removes all of the cluster's node-pools and services.
```shell
# list all platform components
kbst list

aks_gc0_westeurope
eks_gc0_eu-west-1
gke_gc0_europe-west1

# then remove the desired node-pool by <name>
kbst remove eks_gc0_eu-west-1_node_pool_extra
```
Node-pool module configuration is provider specific. See the commented examples below for the available attributes.
module "eks_gc0_eu-west-1_node_pool_extra" { providers = { aws = aws.eks_gc0_eu-west-1 }
source = "github.com/kbst/terraform-kubestack//aws/cluster/node-pool?ref=v0.18.1-beta.0"
cluster_name = module.eks_gc0_eu-west-1.current_metadata["name"]
configuration = { apps = { # name of the node pool (required) name = "extra"
# list of string, comma seperated (required) instance_types = "t3a.xlarge,t3a.large"
# desired, min and max number of nodes (required) desired_capacity = 3 max_size = 9 min_size = 3
# list of string, comma seperated, of availability zones to use # (optional), defaults to cluster's zones availability_zones = "eu-west-1a,eu-west-1b,eu-west-1c"
# AMY type to use for nodes # (optional), default depends on instance type # AL2_x86_64, AL2_x86_64_GPU or AL2_ARM_64 ami_type = "AL2_ARM_64"
# list of string, comma seperated, of VPC subnet IDs to use for nodes # (optional), defaults to cluster's existing subnets vpc_subnet_ids = "subnet-01234567890abcdef,..."
# secondary CIDR to associate with the cluster's VPC # (optional), defaults to null vpc_secondary_cidr = "10.1.0.0/16" # newbits to pass to cidsubnet function for new subnets # (optional), defaults to null # https://developer.hashicorp.com/terraform/language/functions/cidrsubnet vpc_subnet_newbits = 2 # offset added to cidrsubnet's netnum parameter for new subnets # (optional), defaults to 1 vpc_subnet_number_offset = 0
# whether to map public IPs to nodes # (optional), defaults to true vpc_subnet_map_public_ip = false
# Kubernetes taints to add to nodes # (optional), defaults to [] taints = [ { key = "taint-key1" value = "taint-value1" effect = "NO_SCHEDULE" }, { key = "taint-key2" value = "taint-value2" effect = "PREFER_NO_SCHEDULE" } ]
# additional AWS tags to add to node pool resources # (optional), defaults to {} tags = { "k8s.io/cluster-autoscaler/node-template/label/nvidia.com/gpu" = true "k8s.io/cluster-autoscaler/node-template/taint/dedicated" = "nvidia.com/gpu=true" }
# additional Kubernetes labels to add to nodes # (optional), defaults to {} labels = { "example.com/example" = true } }
ops = {} }}
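Once the change has been applied through the GitOps flow, the labels and taints from the example above make it easy to find and inspect the new nodes. For instance, assuming the example.com/example label from the configuration above:

```shell
# list the nodes of the new pool by the Kubernetes label set above
kubectl get nodes -l example.com/example=true

# check the taints applied to one of those nodes
kubectl describe node <node-name> | grep -A 2 Taints
```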
module "eks_gc0_eu-west-1_node_pool_new_subnets" { providers = { aws = aws.eks_gc0_eu-west-1 }
source = "github.com/kbst/terraform-kubestack//aws/cluster/node-pool?ref=v0.18.1-beta.0"
cluster_name = module.eks_gc0_eu-west-1.current_metadata["name"]
configuration = { apps = { name = "new-subnets"
instance_types = "t3a.medium,t3a.small" desired_capacity = 1 min_size = 1 max_size = 3
availability_zones = "eu-west-1a,eu-west-1b,eu-west-1c"
# use the last three /18 subnets of the cluster's CIDR # https://www.davidc.net/sites/default/subnets/subnets.html?network=10.0.0.0&mask=16&division=23.ff4011 vpc_subnet_newbits = 2 }
ops = {} }}
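To make the subnet math concrete: assuming the cluster CIDR is 10.0.0.0/16, vpc_subnet_newbits = 2 splits it into four /18 blocks, and the default vpc_subnet_number_offset of 1 skips the first block, which leaves the last three for the new subnets. Terraform's cidrsubnet function shows the resulting ranges:

```hcl
# illustration only, not part of the module configuration
locals {
  new_pool_subnets = [
    cidrsubnet("10.0.0.0/16", 2, 1), # "10.0.64.0/18"
    cidrsubnet("10.0.0.0/16", 2, 2), # "10.0.128.0/18"
    cidrsubnet("10.0.0.0/16", 2, 3), # "10.0.192.0/18"
  ]
}
```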
module "eks_gc0_eu-west-1_node_pool_new_subnets_secondary_cidr" { providers = { aws = aws.eks_gc0_eu-west-1 }
source = "github.com/kbst/terraform-kubestack//aws/cluster/node-pool?ref=v0.18.1-beta.0"
cluster_name = module.eks_gc0_eu-west-1.current_metadata["name"]
configuration = { apps = { name = "new-subnets-secondary-cidr"
instance_types = "t3a.medium,t3a.small" desired_capacity = 1 min_size = 1 max_size = 3
availability_zones = "eu-west-1a,eu-west-1b,eu-west-1c"
# add a secondary CIDR to the VPC and create subnets for it # https://www.davidc.net/sites/default/subnets/subnets.html?network=10.1.0.0&mask=16&division=23.ff4011 vpc_secondary_cidr = "10.1.0.0/16"
# with 3 zones we need at least 3 subnets, use the first three /18 subnets vpc_subnet_newbits = 2 vpc_subnet_number_offset = 0 }
ops = {} }}
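The same arithmetic applies to the secondary CIDR. Since the 10.1.0.0/16 range presumably has no existing subnets to skip, vpc_subnet_number_offset = 0 starts at the first /18 block:

```hcl
# illustration only, not part of the module configuration
locals {
  secondary_cidr_subnets = [
    cidrsubnet("10.1.0.0/16", 2, 0), # "10.1.0.0/18"
    cidrsubnet("10.1.0.0/16", 2, 1), # "10.1.64.0/18"
    cidrsubnet("10.1.0.0/16", 2, 2), # "10.1.128.0/18"
  ]
}
```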