1. High-Level Architecture: Every Way to Access a Pod
To access a pod publicly, you generally move from the “Outside World” to the “Inside Cluster.” Here are the common architectures:

A. The “Pure Service” Approach (Layer 4)
- Flow: User > NLB (Network Load Balancer) > K8s Service (Type: LoadBalancer) > Pod
- How it works: You set your Service type to `LoadBalancer`. AWS automatically provisions an NLB.
- Best for: Simple apps, non-HTTP traffic (TCP/UDP), or when you want the fastest possible performance.
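A minimal Service manifest for this approach could look like the sketch below. The app name, labels, and ports are placeholders; the annotations are how the AWS Load Balancer Controller is asked to provision an NLB (without them, the legacy in-tree cloud provider creates a Classic Load Balancer instead):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                # placeholder name
  annotations:
    # Ask the AWS Load Balancer Controller to provision an NLB
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer          # triggers load balancer provisioning
  selector:
    app: my-app               # must match the pod labels
  ports:
    - port: 80                # port the NLB listens on
      targetPort: 8080        # port the pod container listens on
      protocol: TCP
```

Once applied, `kubectl get svc my-app` shows the NLB's DNS name under EXTERNAL-IP, which you can point a Route 53 record at.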
B. The “In-Cluster Proxy” Approach (NGINX Ingress + NLB)
- Flow: Route 53 > NLB > NGINX Ingress Controller > K8s Service (ClusterIP) > Pod
- How it works: One NLB sits outside. It sends all traffic to NGINX pods inside your cluster. NGINX then decides which internal app gets the request based on the domain name or URL path.
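An Ingress object routed by the NGINX controller might look like this sketch. It assumes the ingress-nginx controller is installed with the ingress class name `nginx`; the host and Service name are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress           # placeholder name
spec:
  ingressClassName: nginx      # picked up by the NGINX Ingress Controller
  rules:
    - host: app1.com           # requests for this hostname...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-service   # ...go to this ClusterIP Service
                port:
                  number: 80
```

The NLB never sees these rules; it only forwards raw TCP to the NGINX pods, which evaluate the host and path.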
C. The “Cloud-Native” Approach (AWS ALB Ingress)
- Flow: Route 53 > ALB (Application Load Balancer) > K8s Service (NodePort/TargetGroupBinding) > Pod
- How it works: The AWS Load Balancer Controller manages an actual AWS ALB. The “intelligence” (routing) happens at the AWS infrastructure level, not inside a pod.
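The equivalent Ingress for this approach uses the `alb` ingress class, and the routing rules you declare are translated into ALB listener rules by the AWS Load Balancer Controller. A hedged sketch (host and Service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-alb-ingress       # placeholder name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing  # public ALB
    alb.ingress.kubernetes.io/target-type: ip          # route straight to pod IPs
spec:
  ingressClassName: alb        # handled by the AWS Load Balancer Controller
  rules:
    - host: app1.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-service
                port:
                  number: 80
```

Note the contrast with the NGINX approach: here, applying the manifest causes AWS to create or update a real ALB outside the cluster.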
D. The “Direct Entry” (NodePort) - Learning Only
- Flow: User > Worker Node IP:Port > Service > Pod
- How it works: You open a specific port (30000-32767) on your EC2 nodes.
- Why it exists: Mostly for local testing or when you are building your own custom load balancer.
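A NodePort Service sketch for completeness (names and ports are placeholders; if you omit `nodePort`, Kubernetes picks a free port from the range for you):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport        # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80                 # ClusterIP port inside the cluster
      targetPort: 8080         # pod container port
      nodePort: 30080          # must fall within 30000-32767
```

With this applied, any worker node's IP at port 30080 reaches the pods, assuming the node's security group allows the traffic.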
2. Comparing Your Two Specific Flows
| Feature | Route 53 > NLB > NGINX Ingress | Route 53 > ALB > ALB Ingress Controller |
|---|---|---|
| Routing Logic | Happens inside the cluster (in NGINX pods). | Happens outside the cluster (in AWS managed ALB). |
| AWS Integration | Minimal (NLB just passes raw traffic). | High (Native WAF, ACM certificates, Cognito). |
| Flexibility | Very High. Use NGINX snippets, complex regex, and custom headers easily. | Moderate. Limited by what AWS ALB rules support (max 100-200 rules). |
| Cost | Cheaper for many apps (1 NLB for everything). | Can be pricier if not using “Ingress Groups” (1 ALB per app). |
| SSL/TLS | Usually terminated by NGINX pods inside K8s. | Terminated at the ALB level (Managed by AWS). |
3. The “Single NLB + Multiple NGINX” Approach
Yes, you can (and usually should) use a single NLB to forward traffic to multiple applications. In this setup, you install the NGINX Ingress Controller once. It creates one NLB. You then create multiple `Ingress` objects (one for app1.com, one for app2.com).
Why use this approach?
- Cost Efficiency: You only pay for one AWS Load Balancer, regardless of how many apps you have.
- Centralized Management: You manage all your routing rules, SSL certificates (via Cert-Manager), and rate-limiting in one place (NGINX).
- Portability: This setup works almost the same on EKS, Azure (AKS), or even on-premises. You aren’t “locked in” to specific AWS ALB quirks.
- Advanced Features: NGINX supports things ALB doesn’t, like advanced URL rewriting, custom error pages, and fine-tuned authentication.
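The fan-out described above can be sketched as two independent Ingress objects that both reference the same controller class, and therefore share the same NLB (hostnames and Service names are placeholders):

```yaml
# Ingress for app1.com
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1
spec:
  ingressClassName: nginx      # same controller, same NLB
  rules:
    - host: app1.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-service
                port:
                  number: 80
---
# Ingress for app2.com — no new load balancer is created
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2
spec:
  ingressClassName: nginx
  rules:
    - host: app2.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-service
                port:
                  number: 80
```

In Route 53, both app1.com and app2.com point at the one NLB's DNS name; NGINX separates the traffic by the Host header.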
4. How Everything Fits: The Learning Guide
If you are learning EKS, think of the traffic flow like a filter becoming more specific:
- VPC / Subnet: The environment. Public subnets hold the Load Balancer; private subnets hold the Worker Nodes.
- Route 53: The “Phonebook.” Maps myapp.com to the Load Balancer’s DNS name.
- Load Balancer (ALB/NLB): The “Gatekeeper.” It’s the first point of contact for the internet.
- Ingress Controller: The “Receptionist.” It looks at the request and says, “Oh, you want the ‘Billing’ app? Go to that service.”
- Service: The “Internal Switch.” It finds which specific Pod (among many replicas) is healthy and ready to talk.
- Pod: The “Worker.” It actually runs your code (like NGINX or a Python API).
Which one should you choose first?
- If you want “Easy & AWS Managed”: Use the AWS ALB Ingress Controller.
- If you want “Maximum Control & Cost Savings”: Use NGINX Ingress + NLB.
