Provision Nginx Ingress to expose applications outside the cluster

TL;DR:

  • The Nginx ingress controller dynamically configures Nginx to proxy requests to Kubernetes services
  • This guide provisions the ingress controller and a cloud load balancer, and sets up DNS
  • Once completed, you can expose services outside the cluster by creating Ingress resources

Introduction

Nginx is one of the most widely used reverse proxies. The Nginx ingress controller integrates Nginx with Kubernetes and dynamically configures Nginx to proxy requests to applications on Kubernetes.

Following this guide you will:

  1. provision the Nginx ingress controller
  2. expose it outside the cluster using a cloud load balancer and
  3. configure the DNS zones provisioned by the Kubestack cluster modules to resolve to the load balancer and your ingress controller

This guide assumes that you have set up DNS following the DNS setup guide.

Before we can provision the Nginx ingress controller, we need a Kubestack repository. If you do not have a Kubestack repository yet, follow the Kubestack tutorial first. While the catalog modules can be used with any Terraform configuration, this guide assumes you have a Kubestack framework repository.

Nginx Ingress Installation

To install the Nginx module, run the following kbst CLI command in the root of your repository.

# add nginx service to every cluster
# append --cluster-name NAME
# to only add to a single cluster
kbst add service nginx

Patch load balancer IP/CNAME

The Nginx ingress upstream manifests include a service of type LoadBalancer to route traffic to the ingress pods. But for requests to reach the cloud load balancer that the Kubernetes control plane provisions, we have to make sure DNS resolves to the load balancer's IP or CNAME.
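
Conceptually, the upstream manifests contain a service like the one sketched below. This is a simplified illustration, not the actual upstream manifest; the name and namespace match what the module provisions, but the selector and ports are abbreviated.

# simplified sketch, not the full upstream manifest
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  # type LoadBalancer makes the control plane provision a cloud LB
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https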

Amazon (EKS)

For EKS, the provisioned load balancer will return a CNAME. We have to add this CNAME to the DNS zone that has been provisioned by the Kubestack cluster module. To do so, open the *_service_nginx.tf files for all EKS clusters and add a second module. The first module, added by kbst add service nginx, provisions the upstream manifests including the service of type LoadBalancer. We now add a second module to read the CNAME of the created ELB and add it to the DNS zone created by the cluster module.

module "eks_gc0_eu-west-1_service_nginx" {
providers = {
kustomization = kustomization.eks_gc0_eu-west-1
}
source = "kbst.xyz/catalog/nginx/kustomization"
version = "1.10.0-kbst.0"
# configuration_base_key = "apps-prod"
configuration = {
# apps-prod = {}
apps = {}
ops = {}
}
}
+ module "eks_gc0_eu-west-1_dns_zone" {
+ providers = {
+ aws = aws.eks_gc0_eu-west-1
+ kubernetes = kubernetes.eks_gc0_eu-west-1
+ }
+
+ # make sure to match your cluster module's version
+ source = "github.com/kbst/terraform-kubestack//aws/cluster/elb-dns?ref=v0.18.1-beta.0"
+
+ ingress_service_name = "ingress-nginx-controller"
+ ingress_service_namespace = "ingress-nginx"
+
+ metadata_fqdn = module.eks_gc0_eu-west-1.current_metadata["fqdn"]
+ }
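
Once the Nginx manifests have been applied, you can check that the ELB exists and read its CNAME from the service's status. This uses standard kubectl; the service name and namespace are the same ones the DNS module reads above.

# read the ELB CNAME from the service's status
kubectl --namespace ingress-nginx \
  get service ingress-nginx-controller \
  --output jsonpath='{.status.loadBalancer.ingress[0].hostname}'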

Reading the CNAME

For the data source in the DNS module to be able to read the CNAME from the service's status, the ELB created by the service of type LoadBalancer has to exist first. There are two ways to achieve this.

  • Either run terraform apply --target module.eks_gc0_eu-west-1_service_nginx to deploy the Nginx manifests, and then let the pipeline handle the rest.
  • Or, if you prefer to avoid the manual apply, split this change into two commits, and merge and promote them one after the other, as sketched below.
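
A sketch of the two-commit approach. The file name is hypothetical and depends on your cluster; both modules live in the same *_service_nginx.tf file, so the second commit amends that file with the DNS zone module.

# commit 1: the file with only the nginx service module
git add eks_gc0_eu-west-1_service_nginx.tf
git commit -m "Install nginx ingress"
# merge and promote commit 1, so the ELB gets created,
# then add the dns_zone module to the same file
# commit 2: the DNS zone module
git add eks_gc0_eu-west-1_service_nginx.tf
git commit -m "Add DNS for the nginx ingress ELB"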

A note about depends_on on the DNS zone module: generally, you could make the dependency explicit with depends_on = [module.eks_gc0_eu-west-1_service_nginx]. This works initially, but occasionally the data source will be pushed into the apply phase, and as a result the DNS entries will get recreated. To avoid this recreation, and the DNS entries being temporarily unresolvable as a result, this guide avoids setting depends_on and offers the two alternatives above.
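
For reference, this is what the explicit dependency on the DNS zone module would look like; it is shown here only so you can recognize and avoid the pattern.

module "eks_gc0_eu-west-1_dns_zone" {
  # explicit dependency, avoided in this guide because it can
  # push the data source into the apply phase and recreate
  # the DNS entries
  depends_on = [module.eks_gc0_eu-west-1_service_nginx]

  # ... rest of the module as shown above
}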

Apply Changes

As with every change, we now follow the GitOps process. First, commit and push to start the peer review, then merge when the plan looks good. After the changes have been validated in the internal environment, promote the changes to the external environment.

The full workflow is documented on the GitOps process page.

But here's a short summary for convenience:

# create a new feature branch
git checkout -b add-nginx-ingress
# add the changes and commit them
git add .
git commit -m "Install nginx ingress, cloud loadbalancer and DNS"
# push the changes to trigger the pipeline
git push origin add-nginx-ingress

Then follow the link in the output to create a new pull request. Review the pipeline run, and merge the pull request when everything is green.

Last but not least, promote the changes by setting a tag once you have validated them in ops.

# make sure you're on the merge commit
git checkout main
git pull
# then tag the commit
git tag apps-deploy-$(git rev-parse --short HEAD)
# finally push the tag, to trigger the pipeline to promote
git push origin apps-deploy-$(git rev-parse --short HEAD)
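
With the controller deployed and DNS in place, exposing a service outside the cluster is now a matter of creating an Ingress resource. Below is a minimal sketch; the service name, namespace and host are hypothetical, and the host needs a DNS record resolving to your ingress.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  namespace: apps
spec:
  # must match your controller's IngressClass
  ingressClassName: nginx
  rules:
    - host: example-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80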

Next Steps

With the Nginx ingress controller deployed, a common next step is to also deploy Cert-Manager and configure it to issue Let's Encrypt certificates.

The Cert-Manager and Let's Encrypt guide will walk you through setting this up.

If you haven't yet, make sure to set up DNS. It is required both for applications to be reachable from the internet and for Let's Encrypt to be able to issue certificates.