TL;DR:
Kubestack provides Terraform modules for the different platform components. There are three types of modules: cluster modules, node-pool modules and platform service modules.
Cluster modules provision managed Kubernetes clusters and their required infrastructure dependencies using the respective cloud's Terraform provider.
Node-Pool modules provision and attach Node-Pools to clusters provisioned by cluster modules.
Platform service modules provision Kubernetes resources on top of Kubernetes clusters using the Kubestack maintained Kustomization provider. There are two options to deploy platform services: catalog modules and custom manifest modules.
Ultimately, all Kubestack modules are just Terraform modules that implement standard Kubestack inputs and outputs. This provides a unified developer experience and makes for seamless integration between the different platform components.
The Kubestack-specific, inheritance-based configuration all modules implement is a key enabler of Kubestack's reliable GitOps automation.
Module identifiers and file names by convention reflect the cluster that the module provisions. Node-pool or service modules are prefixed with the cluster they belong to.
Cluster modules implement a unified naming scheme to support multi-cluster, multi-region and multi-cloud platform architectures. Names are unique per cloud provider, name prefix and region.
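For illustration, the identifiers used in the examples further below all follow this scheme:

# cluster module:   eks_gc0_eu-west-1                   (provider, name prefix, region)
# node-pool module: eks_gc0_eu-west-1_node_pool_extra   (cluster identifier plus node-pool name)
# service module:   eks_gc0_eu-west-1_service_nginx     (cluster identifier plus service name)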
Kubestack Terraform modules accept the following attributes:
source and version (required)
Source and version of the module as required by Terraform.
Cluster and node-pool modules are available from GitHub and include the version using the ?ref parameter in the source. Versions for cluster and node-pool modules are the framework version.
# source for a fictitious cluster module
source = "github.com/kbst/terraform-kubestack//example/cluster?ref=v0.1.0"
# source for a fictitious node-pool module
source = "github.com/kbst/terraform-kubestack//example/cluster/node-pool?ref=v0.1.0"
Platform service modules are available from the Kubestack registry and use source and version.
Versions for Kubestack platform service modules consist of the upstream version and a -kbst.x packaging suffix. The packaging suffix counter x is incremented for new module versions that do not change the upstream version.
Because Terraform treats the -kbst.x packaging suffix as a pre-release, version constraint operators like !=, >, >=, <, <= and ~> can not be used for platform service module versions; pin the exact version instead.
# source and version for a fictitious service module
source  = "kbst.xyz/catalog/example/kustomization"
version = "0.1.0-kbst.0"
configuration (required)
Map of per-environment configuration objects, following Kubestack's inheritance model. The configuration attributes are specific to the module type and, for cluster and node-pool modules, also specific to the cloud provider. Both platform service module flavors, catalog and custom manifest modules, support the same configuration attributes.
configuration = {
  apps = {
    # module specific configuration attributes
  }

  ops = {
    # inherits from apps
  }
}
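To make the inheritance explicit, here is a minimal sketch using the cluster_min_size and cluster_max_size attributes from the EKS cluster module example further below. Every attribute an environment does not set is inherited from the base key; attributes it does set override the inherited value.

configuration = {
  apps = {
    cluster_min_size = 3
    cluster_max_size = 9
  }

  ops = {
    # inherits cluster_max_size = 9 from apps
    # and only overrides cluster_min_size
    cluster_min_size = 1
  }
}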
configuration_base_key (optional)
Name of the key in the configuration map all others inherit from. The key must exist in the configuration map. Defaults to apps.
configuration_base_key = "apps-prod"configuration = { apps-prod = { # every environment inhertis from apps-prod # because configuration_base_key is set to apps-prod }
apps-stage = { # inherits from apps-prod }
ops = { # inherits from apps-prod }}
Kubestack modules are regular Terraform modules and are used the same way. The examples below show how to use cluster, node-pool and service modules.
The configuration attributes for cluster modules are cloud provider specific. Available attributes are documented as part of the cluster module configuration.
EKS requires the cluster module and an aws provider alias to configure the desired region. The aliased provider is then passed into the specific module using the providers argument.
provider "aws" { alias = "eks_gc0_eu-west-1"
region = "eu-west-1"}
module "eks_gc0_eu-west-1" { providers = { aws = aws.eks_gc0_eu-west-1 kubernetes = kubernetes.eks_gc0_eu-west-1 }
source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=v0.18.1-beta.0"
configuration = { apps = { base_domain = var.base_domain cluster_availability_zones = "eu-west-1a,eu-west-1b,eu-west-1c" cluster_desired_capacity = 3 cluster_instance_type = "t3a.xlarge" cluster_max_size = 9 cluster_min_size = 3 name_prefix = "gc0" } ops = {} }}
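Following this pattern, additional clusters only require another provider alias and module block. Below is a minimal sketch of a second EKS cluster with the same gc0 name prefix in eu-central-1; the region, availability zones and the matching kubernetes provider alias are illustrative assumptions.

provider "aws" {
  alias  = "eks_gc0_eu-central-1"
  region = "eu-central-1"
}

module "eks_gc0_eu-central-1" {
  providers = {
    # assumes a kubernetes provider alias defined the same way
    # as for the eu-west-1 cluster above
    aws        = aws.eks_gc0_eu-central-1
    kubernetes = kubernetes.eks_gc0_eu-central-1
  }

  source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=v0.18.1-beta.0"

  configuration = {
    apps = {
      base_domain                = var.base_domain
      cluster_availability_zones = "eu-central-1a,eu-central-1b,eu-central-1c"
      cluster_desired_capacity   = 3
      cluster_instance_type      = "t3a.xlarge"
      cluster_max_size           = 9
      cluster_min_size           = 3
      name_prefix                = "gc0"
    }

    ops = {}
  }
}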
The configuration attributes for node-pool modules are cloud provider specific. Available attributes are documented as part of the node-pool module configuration.
module "eks_gc0_eu-west-1_node_pool_extra" { providers = { aws = aws.eks_gc0_eu-west-1 }
source = "github.com/kbst/terraform-kubestack//aws/cluster/node-pool?ref=v0.18.1-beta.0"
cluster_name = module.eks_gc0_eu-west-1.current_metadata["name"]
configuration = { apps = { desired_capacity = 3 instance_types = "t3a.xlarge" max_size = 9 min_size = 3 name = "extra" } ops = {} }}
The configuration attributes for platform service modules are the same for all catalog modules and the custom-manifest module. Available attributes are documented as part of the platform service module configuration.
provider "kustomization" { alias = "eks_gc0_eu-west-1"
kubeconfig_raw = module.eks_gc0_eu-west-1.kubeconfig}
module "eks_gc0_eu-west-1_service_nginx" { providers = { kustomization = kustomization.eks_gc0_eu-west-1 }
source = "kbst.xyz/catalog/nginx/kustomization" version = "1.3.1-kbst.1"
configuration = { apps = {} ops = {} }}
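The custom manifest flavor is used the same way as catalog modules. Below is a minimal sketch; the custom-manifests source, the version and the resources attribute are assumptions and may differ for your framework version, so check the custom-manifests module documentation for the exact values.

module "eks_gc0_eu-west-1_service_custom" {
  providers = {
    kustomization = kustomization.eks_gc0_eu-west-1
  }

  # source and version are assumptions, check the catalog
  # documentation for the custom-manifests module
  source  = "kbst.xyz/catalog/custom-manifests/kustomization"
  version = "0.2.0-kbst.0"

  configuration = {
    apps = {
      # resources is assumed to point at your own manifests
      resources = [
        "${path.root}/manifests/example/namespace.yaml",
      ]
    }

    ops = {}
  }
}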