The Nginx ingress controller reads Kubernetes Ingress resources to configure Nginx and expose your services outside the cluster. It provides load balancing, SSL termination, name-based virtual hosting and more.
This Terraform module helps platform engineering teams provision Nginx Ingress on Kubernetes. It fully integrates the upstream Kubernetes resources into the Terraform plan/apply lifecycle and allows configuring Nginx Ingress using native Terraform syntax.
The Nginx Ingress module is continuously updated and tested when new upstream versions are released.
TL;DR:

- `kbst add service nginx` to add Nginx Ingress to your platform
- the kbst CLI scaffolds the Terraform module boilerplate for you

The kbst CLI helps you scaffold the Terraform code to provision Nginx Ingress on your platform. It takes care of calling the module once per cluster, and sets the correct `source` and latest `version` for the module. It also makes sure the module's `configuration` and `configuration_base_key` match your platform.
```shell
# add Nginx Ingress service to all platform clusters
kbst add service nginx

# or optionally only add Nginx Ingress to a single cluster
# 1. list existing platform modules
kbst list
aks_gc0_westeurope
eks_gc0_eu-west-1
gke_gc0_europe-west1

# 2. add Nginx Ingress to a single cluster
kbst add service nginx --cluster-name aks_gc0_westeurope
```
Scaffolding the boilerplate is convenient, but platform service modules are fully documented, standard Terraform modules. They can also be used standalone without the Kubestack framework.
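When used standalone, the module only needs a configured kustomization provider that can reach your cluster. The minimal sketch below shows one way to set that up; the provider alias and kubeconfig path are assumptions for illustration, not part of the module documentation.

```hcl
terraform {
  required_providers {
    kustomization = {
      source = "kbst/kustomization"
    }
  }
}

# alias and kubeconfig path are placeholders,
# adapt them to your cluster setup
provider "kustomization" {
  alias           = "example"
  kubeconfig_path = "~/.kube/config"
}
```

The aliased provider is then passed to the module via its `providers` block, as shown in the configuration example below.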
All Kubestack platform service modules support the same module attributes and configuration as every other Kubestack module. The module configuration is a Kustomization, set in the per-environment configuration map following Kubestack's inheritance model.
The example below shows some options to customize the resources provisioned by the Nginx Ingress module.
module "example_nginx" { providers = { kustomization = kustomization.example } source = "kbst.xyz/catalog/nginx/kustomization" version = "1.10.1-kbst.0" configuration = { apps = {+ # change the namespace of all resources+ namespace = var.example_nginx_namespace++ # or add an annotation+ common_annotations = {+ "terraform-workspace" = terraform.workspace+ }++ # use images to pull from an internal proxy+ # and avoid being rate limited+ images = [{+ # refers to the 'pod.spec.container.name' to modify the 'image' attribute of+ name = "container-name"+ + # customize the 'registry/name' part of the image+ new_name = "reg.example.com/nginx"+ }] } ops = {+ # scale down replicas in ops+ replicas = [{+ # refers to the 'metadata.name' of the resource to scale+ name = "example"+ + # sets the desired number of replicas+ count = 1+ }] } }}
In addition to the example attributes shown above, modules also support `secret_generator`, `config_map_generator`, `patches` and many other Kustomization attributes.
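As a rough sketch of what that can look like, the hypothetical snippet below adds a generated secret and a patch to the apps environment. The names, values and patch target are placeholders, and the exact attribute schema is covered by the configuration documentation referenced below.

```hcl
configuration = {
  apps = {
    # generate a Secret from literal key/value pairs
    # (name and literals are placeholders)
    secret_generator = [{
      name = "example-credentials"
      type = "Opaque"
      literals = [
        "username=example",
        "password=changeme"
      ]
    }]

    # patch a single attribute of one resource
    # (the target name is a placeholder)
    patches = [{
      patch = <<-EOF
        - op: replace
          path: /spec/replicas
          value: 2
      EOF

      target = {
        group   = "apps"
        version = "v1"
        kind    = "Deployment"
        name    = "example"
      }
    }]
  }

  ops = {}
}
```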
Full documentation on how to customize a module's Kubernetes resources is available in the platform service module configuration section of the framework documentation.
Kubestack is a framework for platform engineering teams.
The Nginx ingress controller is commonly installed as part of the platform components using Kubestack.
Then, applications deployed on the cluster can create Ingress resources to expose themselves outside of the cluster.
The steps below show how application teams can use the ingress controller and the DNS provided by the platform.
Traffic is commonly routed to the Nginx pods configured by the ingress controller via a cloud load balancer. Using the DNS zones the cluster modules provision, you can resolve a DNS name to the cloud load balancer. For step-by-step instructions on how to set this up, consult the DNS and Nginx ingress guides.
To configure how your service is exposed through Nginx ingress, use a Kubernetes built-in Ingress resource. Below is an example Ingress resource that routes HTTP requests based on the host header to a specific Service inside the cluster.
For more details about the configuration options, please refer to the official documentation.
To get started, put the example below into a file called `ingress.yaml` and add it to your application's manifests.
You can use a sub-domain of the cluster's FQDN or a custom domain to expose your application outside the cluster. The sub-domain option is convenient because it does not require any additional DNS setup. But for user-facing application environments, a custom domain is the more common option.
To be able to route requests to your app based on a custom domain, make sure the domain resolves to the cluster's cloud load balancer. The easiest way to achieve that is to create a CNAME record for your custom domain that points to the cluster's FQDN.
```
app.example.com CNAME gc0-apps-us-east1.gcp.kubestack.example.com
```
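To verify the record resolves as expected before relying on it, you can query it directly; this assumes the record has already propagated and that dig is available.

```shell
# should return the cluster's FQDN and, at the end of the chain, the load balancer address
dig +short app.example.com
```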
Then use the custom domain as the `host` in the Ingress manifest.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  namespace: example-app-prod
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-app
            port:
              number: 80
```
You must adapt the `name`, `namespace`, `host`, `service.name` and `service.port` above.
Finally, apply the application manifests, including the `ingress.yaml`, as usual.
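If your team applies manifests with kubectl, that can look like the sketch below; the manifest path is a placeholder and any other deployment tooling works just as well.

```shell
# the directory is a placeholder for your application's manifests
kubectl apply -f ./manifests/
```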
Nginx ingress exposes a number of Nginx configuration options and features, including redirects, authentication and more, that are not part of the Kubernetes Ingress definition. These additional parameters can be configured by setting annotations. Please refer to the Nginx ingress documentation for a list of available annotations.
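As an illustration, the snippet below adds two commonly used annotations to the example Ingress metadata; the specific annotations, values and secret name are assumptions here, so check the Nginx ingress documentation for the authoritative list and syntax.

```yaml
metadata:
  name: example-app
  namespace: example-app-prod
  annotations:
    # redirect plain HTTP requests to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # require basic auth backed by a Secret in the same namespace
    # (the secret name is a placeholder)
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: example-basic-auth
```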