NGINX Ingress with AWS NLB (deprecated in favor of the Gateway API)
Setting up a single NLB to handle multiple domains through NGINX is a "power-user" architecture. It is cost-effective, high-performance, and gives you full control over your routing. As of 2026, the Kubernetes community is shifting toward the Gateway API, but the NGINX + NLB approach remains an industry standard for production flexibility.

1. The High-Level Flow
To make this work, we use the AWS Load Balancer Controller to manage the NLB and the NGINX Ingress Controller to manage the internal routing.

- Route 53: Points app1.example.com and app2.example.com to the same NLB DNS name.
- NLB (AWS): Terminates TLS using your ACM certificate. It listens on port 443 and sends plain HTTP (port 80) to the NGINX pods.
- NGINX Ingress (K8s): Reads the Host header (e.g., app1.example.com) and routes the traffic to the correct Service.
- Apps: Your apps receive the traffic and respond.
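The Route 53 step above can be sketched with the AWS CLI. This is a hedged example, not the only way to do it: the hosted zone ID and the NLB DNS name are placeholders you would substitute, and an alias record works just as well as the CNAME shown here.

```bash
# Point app1.example.com at the NLB's DNS name (placeholder values).
# Repeat with "Name": "app2.example.com" for the second domain.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
    "Name":"app1.example.com","Type":"CNAME","TTL":300,
    "ResourceRecords":[{"Value":"my-nlb-1234.elb.us-east-1.amazonaws.com"}]}}]}'
```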
2. Step-by-Step Setup
Step A: The ACM Certificate
You need one certificate that covers both domains.

- Recommendation: Request a wildcard certificate in AWS Certificate Manager (ACM) for *.example.com.
- Why: This allows you to add app3, app4, etc., later without changing your infrastructure.
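If you prefer the CLI to the console, requesting the wildcard certificate looks roughly like this (DNS validation is a recommendation, not a requirement):

```bash
# Request a wildcard certificate covering app1/app2/appN.example.com.
# DNS validation lets ACM auto-renew as long as the validation CNAME stays in Route 53.
aws acm request-certificate \
  --domain-name "*.example.com" \
  --validation-method DNS
```

The command returns a CertificateArn, which is what you will reference in the NGINX values file later.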
Step B: Install AWS Load Balancer Controller
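A typical Helm installation is sketched below. The cluster name is a placeholder, and the IAM role and service account (IRSA) setup from the AWS documentation is assumed to be done already:

```bash
# Add the official AWS EKS charts repo and install the controller.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```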
This is the "bridge" between K8s and AWS. It watches your Services and creates the NLB.

Step C: Install NGINX Ingress with NLB + ACM
We tell NGINX to create a Service of type LoadBalancer. The magic is in the annotations.
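The values file isn't shown in the original, so here is a plausible nginx-values.yaml. The annotation keys come from the AWS Load Balancer Controller, but the certificate ARN is a placeholder you must replace with your own:

```yaml
controller:
  service:
    annotations:
      # Let the AWS Load Balancer Controller (not the legacy in-tree controller) provision an NLB
      service.beta.kubernetes.io/aws-load-balancer-type: "external"
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
      # Terminate TLS at the NLB on port 443 with your ACM wildcard cert (placeholder ARN)
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111122223333:certificate/REPLACE-ME"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # After TLS termination, forward the now-plain-HTTP 443 traffic to NGINX's http port
    targetPorts:
      https: http
```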
```bash
helm install ingress-nginx ingress-nginx/ingress-nginx -f nginx-values.yaml
```
3. Creating the Application Routes
Now that the "Highway" (NLB) and "Reception" (NGINX) are ready, you just define your apps.

App 1 Ingress
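A minimal Ingress for the first app might look like this; app1-service and its port are assumptions about your backend:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
spec:
  ingressClassName: nginx          # hand this Ingress to the NGINX controller
  rules:
    - host: app1.example.com       # matched against the incoming Host header
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-service # your app's ClusterIP Service
                port:
                  number: 80
```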
App 2 Ingress
You create a second file exactly like the one above, but change the host to app2.example.com and the service to app2-service. NGINX handles the split automatically!
4. Scale & Industry Use Cases
The Scale
- Throughput: A single NLB can handle millions of requests per second. Because it operates at Layer 4, it is faster and adds less latency than an ALB.
- Limits: NGINX is only limited by the CPU/Memory you give the pods. You can use HPA (Horizontal Pod Autoscaler) to add more NGINX pods as traffic grows.
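An HPA for the controller pods could be sketched like this. The Deployment name matches the default for a Helm release called ingress-nginx, so adjust it if yours differs:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ingress-nginx-controller   # default name from the Helm chart
  minReplicas: 2                     # keep at least two pods for availability
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # scale out when average CPU exceeds 70%
```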
Who uses what?
| Approach | Typical Users | Why? |
|---|---|---|
| ALB Ingress | Startups / Small Teams (e.g., Buffer, smaller SaaS) | Zero maintenance. AWS does all the heavy lifting. Good if you have < 50 services. |
| NLB + NGINX | Enterprise / High Scale (e.g., Airbnb, Reddit, Tinder) | Cost: 1 NLB for 500 apps is way cheaper than 500 ALBs. Speed: Lower latency for massive traffic. Control: They need custom logic (like Lua scripts or specialized headers) that ALB doesn’t support. |
Note for 2026: Ingress-NGINX (community version) is currently moving toward a maintenance-only mode. Many big companies are migrating from NGINX to Envoy Gateway or Traefik using the same NLB logic you just learned.
