Saturday, February 8, 2025

NGINX Load Balancer



NGINX Load Balancer and Its Benefits

NGINX is a powerful open-source web server that also functions as a load balancer, reverse proxy, and API gateway. Using NGINX as a load balancer helps distribute traffic across multiple backend servers, ensuring high availability, scalability, and better performance.

1. Types of Load-Balancing in NGINX

1.1 Round Robin (Default)

  • Requests are distributed sequentially across backend servers.
  • Example configuration:
    # nginx

    upstream backend {
        server server1.example.com;
        server server2.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }

1.2 Least Connections

  • Sends requests to the server with the fewest active connections.
  • Useful when servers have different capacities.
  • Example:
    # nginx

    upstream backend {
        least_conn;
        server server1.example.com;
        server server2.example.com;
    }

1.3 IP Hash

  • Requests from the same client IP go to the same server.
  • Ensures session persistence.
  • Example:
    # nginx

    upstream backend {
        ip_hash;
        server server1.example.com;
        server server2.example.com;
    }

1.4 Weighted Load Balancing

  • Assigns different weights to servers based on their capacity.
  • Example:
    # nginx

    upstream backend {
        server server1.example.com weight=3;
        server server2.example.com weight=1;
    }

1.5 Active Health Checks

  • Actively probes backend servers so that requests are sent only to healthy ones.
  • Note: the health_check directive is available only in NGINX Plus; open-source NGINX supports passive checks via the max_fails and fail_timeout server parameters.
  • Example (NGINX Plus; health_check belongs in a location block):
    # nginx

    upstream backend {
        server server1.example.com;
        server server2.example.com;
    }

    server {
        location / {
            proxy_pass http://backend;
            health_check interval=5 fails=3 passes=2;
        }
    }

2. Benefits of NGINX Load Balancer

1. High Availability & Fault Tolerance

  • If one backend server goes down, traffic is rerouted to available servers.

2. Scalability

  • Allows horizontal scaling by adding more backend servers.

3. Improved Performance

  • Distributes traffic efficiently, preventing overloading of a single server.

4. Session Persistence (Sticky Sessions)

  • Ensures users remain connected to the same backend server when needed.

5. Security

  • Acts as a reverse proxy, protecting backend servers from direct exposure.
  • Can be configured with SSL termination for secure HTTPS traffic.

6. Monitoring & Logging

  • Provides detailed logs and metrics for performance analysis.
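The SSL termination mentioned above can be sketched as a minimal HTTPS server block. This is an illustrative example, not a complete configuration; the domain and certificate paths are placeholders:

# nginx

server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder paths; point these at your own certificate and key
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://backend;
    }
}

Backend servers then receive plain HTTP, so they never need to handle TLS themselves.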

3. Deploying NGINX Load Balancer on Linux

Step 1: Install NGINX

sudo apt update
sudo apt install nginx -y

Step 2: Configure Load Balancer

Edit the NGINX config file:

sudo nano /etc/nginx/nginx.conf

Example configuration:

# nginx

http {
    upstream backend_servers {
        least_conn;
        server 192.168.1.10;
        server 192.168.1.11;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend_servers;
        }
    }
}

Step 3: Test and Restart NGINX

sudo nginx -t
sudo systemctl restart nginx

Step 4: Verify Load Balancing

  • Access the NGINX Load Balancer using your browser:

    http://your-load-balancer-ip:80
  • Use curl to check responses:

    curl -v http://your-load-balancer-ip/

4. Advanced Features

  • SSL Termination: Secure traffic using Let's Encrypt or a commercial SSL certificate.
  • Rate Limiting: Prevent abuse and DoS attacks.
  • Gzip Compression: Reduce response sizes to speed up transfers.
  • Integration with Kubernetes: Use NGINX Ingress Controller for load balancing in K8s.
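The rate limiting and gzip features above can be sketched in one http block. The zone name, rate, and MIME types are example values, not recommendations:

# nginx

http {
    # Allow each client IP up to 10 requests/second, tracked in a 10 MB zone
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

    # Compress common text-based responses
    gzip on;
    gzip_types text/plain text/css application/json application/javascript;

    server {
        listen 80;

        location / {
            limit_req zone=per_ip burst=20;
            proxy_pass http://backend;
        }
    }
}

The burst parameter lets short spikes through by queueing up to 20 excess requests instead of rejecting them immediately.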

NGINX Ingress as a Layer 7 Load Balancer for Applications

Nginx Ingress is a Layer 7 load balancer that routes HTTP/HTTPS traffic to Kubernetes services based on rules. It acts as an entry point to expose applications running inside a Kubernetes cluster.

1. Why Use Nginx Ingress as a Load Balancer?

  • Application-Aware Routing: Handles URL-based, host-based, and path-based routing.
  • SSL Termination: Offloads SSL/TLS encryption to reduce workload on backend services.
  • Traffic Management: Implements rate limiting, request rewrites, and authentication.
  • High Availability & Scalability: Distributes traffic among backend pods efficiently.
  • Security Features: Supports WAF (Web Application Firewall) and DDoS protection.

2. Deploying Nginx Ingress Controller in Kubernetes

Step 1: Install Nginx Ingress Controller

You can install Nginx Ingress using Helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install my-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace

Alternatively, apply the official YAML manifest:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

3. Exposing Applications with Ingress Rules

Host-based routing and path-based routing:

When configuring NGINX Ingress in Kubernetes, you can use host-based routing or path-based routing to direct traffic to different backend services. Here’s how they differ:

1. Host-Based Routing

  • Routes traffic based on the hostname (domain name).
  • Different services are assigned to different hostnames (e.g., app1.example.com, app2.example.com).
  • Example:
    # yaml

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: host-based-ingress
    spec:
      rules:
        - host: app1.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: app1-service
                    port:
                      number: 80
        - host: app2.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: app2-service
                    port:
                      number: 80

2. Path-Based Routing

  • Routes traffic based on the URI path of the request.
  • A single hostname (domain) can serve multiple services based on different paths.
  • Example:
    # yaml

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: path-based-ingress
    spec:
      rules:
        - host: example.com
          http:
            paths:
              - path: /app1
                pathType: Prefix
                backend:
                  service:
                    name: app1-service
                    port:
                      number: 80
              - path: /app2
                pathType: Prefix
                backend:
                  service:
                    name: app2-service
                    port:
                      number: 80
  • Use case (path-based): Hosting multiple microservices under the same domain.
  • Use case (host-based): Serving different applications under different subdomains.

Key Differences

Feature               | Path-Based Routing                          | Host-Based Routing
Routing Method        | Based on request path (e.g., /app1, /app2)  | Based on hostname (e.g., app1.example.com, app2.example.com)
Domain Requirement    | Same domain, different paths                | Different domains or subdomains
Use Case              | Multiple services under a single domain     | Separate services per domain/subdomain
Example Request       | example.com/app1, example.com/app2          | app1.example.com, app2.example.com

Choosing Between Them:

  • Use path-based routing when multiple services share the same domain.
  • Use host-based routing when different services need their own domains or subdomains.

b. SSL/TLS Termination

To enable HTTPS, create a TLS Secret:

kubectl create secret tls my-tls-secret --cert=cert.pem --key=key.pem

Modify the Ingress resource to use TLS:

# yaml

spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: my-tls-secret

c. Load Balancing & Sticky Sessions

Enable session persistence (sticky sessions) with cookies:

# yaml

annotations:
  nginx.ingress.kubernetes.io/affinity: "cookie"
  nginx.ingress.kubernetes.io/session-cookie-name: "SESSION"

4. Monitoring and Logging

Monitor Nginx Ingress with Prometheus and Grafana:

  1. Enable Prometheus scraping by annotating the controller pods:
    # yaml

    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "10254"
  2. Visualize metrics in Grafana using the Nginx Ingress dashboard.

To view logs:

kubectl logs -n ingress-nginx deploy/my-nginx-ingress-nginx-controller

5. Best Practices

  • Use external DNS and certificates for HTTPS (e.g., Let's Encrypt + cert-manager).
  • Enable rate limiting to prevent abuse:
    # yaml

    nginx.ingress.kubernetes.io/limit-rps: "10"
  • Optimize request buffering for better performance:
    # yaml

    nginx.ingress.kubernetes.io/proxy-buffering: "on"
  • Enable a Web Application Firewall (WAF) for security.

Thoughts:

Nginx Ingress is a powerful Layer 7 load balancer for Kubernetes. It provides flexible routing, SSL termination, security features, and traffic management for modern applications.

Nginx Ingress is termed a Layer 7 Load Balancer because it operates at the Application Layer (Layer 7) of the OSI model. This means it makes traffic routing decisions based on HTTP/HTTPS headers, request paths, hostnames, cookies, and other application-specific data, rather than just IP addresses and ports (Layer 4 load balancing).

How Nginx Ingress Functions as a Layer 7 Load Balancer

1. HTTP/HTTPS-Based Routing

  • Routes requests based on hostnames (Host header), e.g., app.example.com.
  • Supports path-based routing, e.g., /api traffic goes to api-service, while /web traffic goes to web-service.
  • Can rewrite request paths before forwarding them to backend services.
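The path rewriting mentioned above can be sketched with the ingress-nginx rewrite-target annotation, which uses regex capture groups from the path. The service name and paths here are placeholders:

# yaml

metadata:
  annotations:
    # Forward /api/foo to the backend as /foo (the second capture group)
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-service
                port:
                  number: 80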

2. SSL/TLS Termination

  • Offloads SSL encryption from backend services.
  • Supports automatic certificate management with tools like cert-manager.

3. Content-Based Load Balancing

  • Distributes traffic based on request headers, cookies, or query parameters.
  • Implements sticky sessions (session affinity) using cookies.

4. Traffic Control and Rate Limiting

  • Limits the number of requests per second to prevent abuse or DDoS attacks.
  • Implements IP whitelisting/blacklisting for security.
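The traffic controls above map directly to ingress-nginx annotations; as a sketch, with the rate limit and CIDR ranges as example values:

# yaml

annotations:
  # Allow at most 10 requests per second per client IP
  nginx.ingress.kubernetes.io/limit-rps: "10"
  # Only accept traffic from these source ranges
  nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"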

5. Advanced Features like Web Application Firewall (WAF)

  • Protects against common web threats (e.g., SQL injection, XSS).
  • Can integrate with ModSecurity or Nginx App Protect for enhanced security.
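As a sketch of the ModSecurity integration mentioned above, ingress-nginx exposes it through per-Ingress annotations:

# yaml

annotations:
  # Enable the ModSecurity WAF for this Ingress
  nginx.ingress.kubernetes.io/enable-modsecurity: "true"
  # Load the OWASP Core Rule Set for common attack patterns
  nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"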

Key Differences Between Layer 4 and Layer 7 Load Balancing

Feature               | Layer 4 Load Balancer                                | Layer 7 Load Balancer (Nginx Ingress)
Protocols             | TCP, UDP                                             | HTTP, HTTPS
Routing Decision      | Based on IP and port                                 | Based on HTTP headers, hostnames, paths
SSL Termination       | Not supported                                        | Supported
Load Balancing Method | Round Robin, Least Connections                       | Path-based, Host-based, Cookie-based
Example               | Kubernetes Service LoadBalancer (e.g., AWS ELB, NLB) | Nginx Ingress Controller

twtech-Thoughts:

Nginx Ingress is a Layer 7 load balancer because it processes HTTP/HTTPS traffic and makes intelligent routing decisions based on hostnames, paths, headers, and cookies. This provides more flexibility, security, and traffic management capabilities compared to traditional Layer 4 load balancers.
