# Networking
Every pod in Kubernetes gets its own IP address. Networking ties pods together and exposes them to the outside world.
## The Networking Model

Kubernetes has three rules:
- Every pod can communicate with every other pod (no NAT).
- Nodes can communicate with all pods (and vice versa).
- The IP a pod sees for itself is the same IP others see for it.
This flat network means you don’t need to map ports between containers. A CNI plugin (Calico, Cilium, Flannel, etc.) implements these rules.
## Services

Pods are ephemeral — they come and go. A Service provides a stable address that routes traffic to a set of pods matched by a label selector.
### ClusterIP (Default)

Internal-only. Reachable from inside the cluster.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Other pods reach this service at `my-app:80` or `my-app.default.svc.cluster.local:80`.
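The selector matches pod labels, not pod names. As a sketch of where those labels come from (the Deployment name and image below are illustrative, not from this guide), a matching workload might look like:

```yaml
# Illustrative Deployment; the ClusterIP Service selects these pods via app: my-app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app              # must match the Service's selector
    spec:
      containers:
        - name: web
          image: registry.example.com/my-app:1.0  # illustrative image
          ports:
            - containerPort: 8080               # the Service's targetPort
```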
### NodePort

Exposes the service on a static port on every node’s IP. Useful for development or when you don’t have a cloud load balancer.

```yaml
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # accessible at <NodeIP>:30080
```

### LoadBalancer

Provisions an external load balancer (on cloud providers). Traffic from the internet reaches the LB, which forwards to the service.

```yaml
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
```

### Which service type should I choose?

Choose a service type by exposure level and protocol needs:
| Need | Recommended choice | Why |
|---|---|---|
| Internal service-to-service traffic | ClusterIP | Stable DNS inside cluster, no external exposure |
| Public HTTP/HTTPS app | Ingress/Gateway + ClusterIP backend | Centralized TLS/routing, fewer public entrypoints |
| Public non-HTTP (TCP/UDP) service | LoadBalancer | Direct external L4 access from cloud LB |
| Quick local/dev access | NodePort | Simple direct testing via <NodeIP>:nodePort |
For most new production setups:
- Create app services as ClusterIP.
- Expose external web traffic through Ingress (or Gateway API).
- Use LoadBalancer at the edge (often for the ingress controller), not per service unless required.
- Keep NodePort mainly for development, labs, or specific infrastructure constraints.
### Do I need both LoadBalancer and Ingress?

For HTTP/HTTPS workloads, often yes.
- Ingress defines Layer 7 routing rules (host/path/TLS behavior).
- A LoadBalancer Service usually exposes the Ingress controller externally.
Typical flow:
```
Internet -> Cloud LoadBalancer (Service type: LoadBalancer for the ingress controller) -> Ingress Controller -> ClusterIP Services -> Pods
```

Use a plain `type: LoadBalancer` Service (without Ingress) when you need simple direct exposure, especially for single-service or non-HTTP TCP/UDP use cases.
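The edge hop in that flow is usually a single LoadBalancer Service in front of the ingress controller. A sketch, assuming an NGINX Ingress Controller install (the namespace, name, and labels shown are typical but vary by distribution):

```yaml
# One external LB for the whole cluster, fronting the ingress controller pods
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # illustrative; real names depend on the install
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

App Services behind this stay ClusterIP; only the controller is exposed.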
## Ingress

An Ingress manages external HTTP/HTTPS access to services. It provides:
- Host-based routing — `api.example.com` goes to one service, `app.example.com` to another.
- Path-based routing — `/api` goes to the backend, `/` to the frontend.
- TLS termination — HTTPS at the edge.
For HTTP semantics (status codes, headers, timeouts at the edge) and TLS termination patterns, see HTTP for Operators. For certificate lifecycle on AWS (ACM) and pointers to cert-manager, see TLS and Certificates and Operators (cert-manager).
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80
```

An Ingress needs an Ingress Controller (e.g. NGINX Ingress Controller, Traefik) actually running in the cluster to work. For controller selection, TLS/mTLS patterns, and cert-manager wiring, see Ingress Controllers.
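The TLS termination mentioned above is configured with a `tls` section on the Ingress, referencing a certificate Secret. A minimal sketch, assuming a `kubernetes.io/tls` Secret named `app-example-tls` already exists in the namespace (the Secret name is illustrative):

```yaml
# Fragment to merge into the Ingress spec above
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-tls  # kubernetes.io/tls Secret with tls.crt and tls.key
```

With this in place the controller serves HTTPS for `app.example.com` and routes decrypted traffic to the backends.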
## DNS

Kubernetes runs an internal DNS service (CoreDNS). Every Service gets a DNS name:

- `<service-name>` — within the same namespace.
- `<service-name>.<namespace>.svc.cluster.local` — fully qualified.
Pods can resolve service names automatically; no hardcoded IPs needed.
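For example, a client container can reference a Service purely by its DNS name; a sketch (the container name, image, and env var are illustrative):

```yaml
# Pod spec fragment: the app reads BACKEND_URL instead of a hardcoded pod IP
containers:
  - name: client
    image: registry.example.com/client:1.0  # illustrative image
    env:
      - name: BACKEND_URL
        value: "http://my-app.default.svc.cluster.local:80"
```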
## Network Policies

By default, all pods can talk to all other pods. NetworkPolicies restrict traffic (like a firewall). They require a CNI plugin that supports them (e.g. Calico, Cilium).

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - port: 8080
```

This policy says: only pods labeled `app: frontend` can reach pods labeled `app: backend` on port 8080.
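NetworkPolicies are additive: once a pod is selected by any policy, only explicitly allowed traffic gets through. A common companion to an allow rule like the one above is a namespace-wide default deny, sketched here as the standard pattern:

```yaml
# Deny all inbound traffic to every pod in the namespace;
# allow rules in other policies then punch specific holes.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}        # empty selector = all pods in this namespace
  policyTypes:
    - Ingress            # Ingress declared with no rules => all inbound denied
```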
## AWS, DNS, Ingress, and Service Mesh

On AWS, `type: LoadBalancer` Services and Ingresses (with the AWS Load Balancer Controller) create NLB or ALB resources. See Elastic Load Balancing and Route 53 for public DNS in front of those endpoints.
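With the AWS Load Balancer Controller installed, an Ingress requests an ALB via its ingress class and annotations. A hedged sketch (the annotations are the controller's documented ones, but values depend on your VPC and subnet setup; hostnames and service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing  # public ALB
    alb.ingress.kubernetes.io/target-type: ip          # send traffic to pod IPs
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```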
For L7 routing, retries, and mTLS inside the cluster, Istio (and other meshes) sit alongside Ingress. See Istio.
Gateway API (Gateway, HTTPRoute) is an alternative to the classic Ingress API; ingress controllers and meshes increasingly support it — same ideas (listeners, routes, backends), different resources.
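To give a flavor of those resources, an HTTPRoute roughly equivalent to the `/api` Ingress rule earlier might look like this sketch (the Gateway name `my-gateway` is illustrative and assumed to exist with an HTTP listener):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: my-gateway     # the Gateway (listener) this route attaches to
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: backend    # ClusterIP Service, as with Ingress
          port: 80
```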
## Related

- Network troubleshooting flow — Order of operations from symptom to mesh.
- EKS — Control plane and worker patterns on AWS.