TL;DR: Kubestack provisions DNS zones per cluster and per environment, named after the `name_prefix`, `workspace`, `region` and `base_domain` variables. To make them resolvable, query the name servers of each zone from the Terraform state and add `NS` records for them to your base domain's DNS.
Exposing workloads on Kubernetes to the internet can be achieved in many ways. A common way, and the one the Kubestack framework provides by default, is to:

1. Set up DNS zones for the cluster's domain names
2. Run an ingress controller inside the cluster
3. Expose the ingress controller using a Kubernetes `LoadBalancer` type service

This guide explains how to set up the nameserver records for the DNS zones Kubestack provisions in your base domain's DNS. Requirements 2 and 3 are covered in the Nginx Ingress controller guide, which explains how to deploy the ingress controller inside the cluster and how to expose it using a Kubernetes `LoadBalancer` type service. The DNS setup proposed here works for other ingress controllers or the Istio ingress gateway, too.
If you prefer to provision your own DNS setup, you can disable the Kubestack provisioned zones by setting `disable_default_ingress = true` on the cluster module(s).
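As a sketch of what that could look like, assuming an Amazon EKS cluster module and the per-environment `configuration` map Kubestack cluster modules use. The module name, source ref and remaining attributes are placeholders to adapt to your configuration:

```hcl
# Hypothetical example: only disable_default_ingress is the setting
# in question, module name, source and other attributes are
# placeholders for your existing configuration.
module "eks_zero" {
  source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=..."

  configuration = {
    apps = {
      disable_default_ingress = true
      # ... your existing cluster configuration ...
    }
    ops = {}
  }
}
```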
Kubestack sets up DNS zones per cluster and per environment to ensure changes to the DNS can be previewed and validated like any other change.
The zones are based on the `name_prefix`, `workspace`, `region` and `base_domain` variables.
To make DNS names in those zones resolvable, we have to complete two steps:

1. Query the name servers of each provisioned zone
2. Set nameserver (`NS`) records in the `base_domain`'s DNS for each of them

As the first step, we need to list the workspaces, so that we know how many environments we have to query name servers for.
```shell
terraform workspace list
```
This command will return either `default`, `ops` and `apps`, or `default`, `ops`, `apps` and `apps-prod`, depending on how many infrastructure environments you configured for your platform stack. If you customized the environment names, you will see those names here instead.
With the environment names, you can now query the name servers for each cluster and environment. The DNS zones are provisioned on the same cloud provider as the cluster, so we use the Terraform state to query them. Repeat the following steps for every cluster in every environment. If you have clusters on more than one cloud provider, follow the instructions for each cloud provider.
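The per-workspace repetition can be sketched as a small helper that only prints the commands to run for each workspace. This is a hypothetical dry-run helper, not part of Kubestack; the `aws_route53_zone` pattern is the Amazon example, substitute the zone resource type of your cloud provider:

```shell
#!/bin/sh
# Print the per-workspace commands to run (dry-run sketch, does not
# execute terraform itself). Pass the workspace names as arguments,
# e.g. the output of `terraform workspace list` minus `default`.
print_ns_query_commands() {
  for ws in "$@"; do
    echo "terraform workspace select ${ws}"
    echo "terraform state list | grep aws_route53_zone"
  done
}

print_ns_query_commands ops apps
```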
```shell
# we use ops as the example here
# but you need to do this for every workspace except default
terraform workspace select ops
```
The example below uses Amazon EKS with Route 53; follow the equivalent instructions for each cluster's cloud provider, once per cluster and infrastructure environment.
For every cluster and environment, note down the `name_servers` from the output below. You need to add `NS` records for each of them in the following step.
```shell
terraform state list | grep aws_route53_zone
```

```
module.eks_kbst_eu-west-1.module.cluster.aws_route53_zone.current[0]
```
```shell
terraform state show module.eks_kbst_eu-west-1.module.cluster.aws_route53_zone.current[0]
```

```
# module.eks_kbst_eu-west-1.module.cluster.aws_route53_zone.current[0]:
resource "aws_route53_zone" "current" {
    # [...]
    name         = "kbst-ops-eu-west-1.aws.kubestack.example.com"
    name_servers = [
        "ns-1153.awsdns-16.org",
        "ns-1777.awsdns-30.co.uk",
        "ns-263.awsdns-32.com",
        "ns-916.awsdns-50.net",
    ]
    # [...]
}
```
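If you want just the name servers from that output, a small filter over the `name_servers` list works. A sketch, assuming the `terraform state show` output format shown above (the helper name is made up):

```shell
#!/bin/sh
# Extract the quoted entries of the name_servers list from
# `terraform state show` output read on stdin.
extract_name_servers() {
  sed -n '/name_servers/,/\]/p' | grep -o '"[^"]*"' | tr -d '"'
}

# Example with the output shown above,
# prints the four name servers, one per line:
extract_name_servers <<'EOF'
resource "aws_route53_zone" "current" {
    name         = "kbst-ops-eu-west-1.aws.kubestack.example.com"
    name_servers = [
        "ns-1153.awsdns-16.org",
        "ns-1777.awsdns-30.co.uk",
        "ns-263.awsdns-32.com",
        "ns-916.awsdns-50.net",
    ]
}
EOF
```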
The name servers we just queried now have to be added to the base domain's DNS as NS records for the cluster's fully qualified domain name (FQDN).
How to do this will differ slightly for every DNS provider. But in general you have to:

1. Create an `NS` record set in the base domain's zone for each cluster's fully qualified domain name
2. Add one `NS` record per name server you noted down for that cluster

The example zone file below shows the entries for two clusters in the ops and apps environments.
```
$ORIGIN kubestack.example.com.

; Amazon example for ops environment
kbst-ops-eu-west-1.aws    IN    NS    ns-1153.awsdns-16.org.
kbst-ops-eu-west-1.aws    IN    NS    ns-1777.awsdns-30.co.uk.
kbst-ops-eu-west-1.aws    IN    NS    ns-263.awsdns-32.com.
kbst-ops-eu-west-1.aws    IN    NS    ns-916.awsdns-50.net.

; Amazon example for apps environment
kbst-apps-eu-west-1.aws   IN    NS    ns-1252.awsdns-23.org.
kbst-apps-eu-west-1.aws   IN    NS    ns-1971.awsdns-48.co.uk.
kbst-apps-eu-west-1.aws   IN    NS    ns-252.awsdns-12.com.
kbst-apps-eu-west-1.aws   IN    NS    ns-186.awsdns-38.net.
```
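The record pattern is mechanical: one `NS` record per name server, all under the same FQDN relative to `$ORIGIN`. As a sketch, a hypothetical helper that prints the records (using the example values from above):

```shell
#!/bin/sh
# Print zone file NS records for a FQDN relative to $ORIGIN, one per
# name server. The trailing dot marks each name server as absolute.
print_ns_records() {
  fqdn="$1"; shift
  for ns in "$@"; do
    echo "${fqdn} IN NS ${ns}."
  done
}

# Example with the ops cluster values from above:
print_ns_records kbst-ops-eu-west-1.aws \
  ns-1153.awsdns-16.org \
  ns-1777.awsdns-30.co.uk \
  ns-263.awsdns-32.com \
  ns-916.awsdns-50.net
```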
The goal is for a DNS query for the `NS` records of each FQDN to return the respective name servers in the `ANSWER SECTION`, like below.
Here is an example query for a GKE cluster in the ops environment; for the Amazon examples above, query the respective `kbst-ops-eu-west-1.aws` FQDN instead.
```shell
dig NS kbst-ops-europe-west1.gcp.kubestack.example.com
```
The DNS query should return output similar to the one below. You want all four name servers to be returned.
```
[...]
;; ANSWER SECTION:
kbst-ops-europe-west1.gcp.kubestack.example.com. 21599 IN NS ns-cloud-d1.googledomains.com.
kbst-ops-europe-west1.gcp.kubestack.example.com. 21599 IN NS ns-cloud-d2.googledomains.com.
kbst-ops-europe-west1.gcp.kubestack.example.com. 21599 IN NS ns-cloud-d3.googledomains.com.
kbst-ops-europe-west1.gcp.kubestack.example.com. 21599 IN NS ns-cloud-d4.googledomains.com.
```
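To check the delegation non-interactively, for example in a script, you can count the `NS` records in the answer. A sketch of a hypothetical helper that filters dig output read from stdin; the sample fed in below is the answer section shown above:

```shell
#!/bin/sh
# Count the NS records for a FQDN in dig output read on stdin.
# In practice you would pipe `dig NS <fqdn>` into this helper.
count_ns_records() {
  awk -v fqdn="$1." '$1 == fqdn && $4 == "NS"' | wc -l
}

# Example with the answer section shown above; you want this to be 4:
count_ns_records kbst-ops-europe-west1.gcp.kubestack.example.com <<'EOF'
;; ANSWER SECTION:
kbst-ops-europe-west1.gcp.kubestack.example.com. 21599 IN NS ns-cloud-d1.googledomains.com.
kbst-ops-europe-west1.gcp.kubestack.example.com. 21599 IN NS ns-cloud-d2.googledomains.com.
kbst-ops-europe-west1.gcp.kubestack.example.com. 21599 IN NS ns-cloud-d3.googledomains.com.
kbst-ops-europe-west1.gcp.kubestack.example.com. 21599 IN NS ns-cloud-d4.googledomains.com.
EOF
```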
With the DNS setup in place, a common next step is to deploy an ingress controller to allow exposing services outside the Kubernetes cluster.
The DNS setup shown here can be used with any ingress controller and also with the Istio ingress gateway. Kubestack provides a guide for the Nginx ingress controller, which you can also use as an example of how to deploy other ingress controllers. Additionally, a guide on how to set up Cert-Manager to automate certificate provisioning with Let's Encrypt is available, too.