Nginx is one of the most common traffic entry points in production environments. Its core capabilities include reverse proxying, load balancing, cache optimization, and API rate limiting. This article distills six reusable configuration templates that solve service forwarding, session persistence, static acceleration, and anti-abuse protection. Keywords: Nginx, Reverse Proxy, Load Balancing.
This is a production-oriented technical specification snapshot
| Parameter | Details |
|---|---|
| Core Technology | Nginx |
| Primary Language | Nginx configuration syntax |
| Transport Protocols | HTTP, HTTPS, HTTP/2 |
| Traffic Handling Capabilities | Reverse Proxy, Round Robin, Weighted Routing, Session Persistence |
| Core Dependencies | upstream, proxy_pass, limit_req, SSL module |
These six templates cover the most common traffic governance scenarios at the gateway layer
The source material is highly focused: it does not explain Nginx internals. Instead, it provides production templates you can copy directly. It is well suited for backend developers, DevOps engineers, and platform teams that want a consistent gateway configuration baseline.
If you group these six capabilities by responsibility, they map to ingress forwarding, transport security, traffic distribution, session stickiness, static acceleration, and API protection. This is also the minimum gateway capability set that most small and mid-sized systems need before going live.
Basic reverse proxying reliably forwards public traffic to internal services
```nginx
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;                             # Forward requests to the local backend service
        proxy_set_header Host $host;                                  # Preserve the original Host header so the backend can identify the domain
        proxy_set_header X-Real-IP $remote_addr;                      # Pass the real client IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # Append proxy chain IP addresses
        proxy_set_header X-Forwarded-Proto $scheme;                   # Pass the original protocol: http/https
    }
}
```
This configuration creates a standard proxy entry point and ensures that the backend can retrieve accurate client source information.
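If the backend can stall, explicit timeouts keep slow upstream calls from piling up at the gateway. The source template does not set these, so the values below are assumptions you would tune per service:

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_connect_timeout 5s;    # Fail fast if the backend cannot accept a connection
    proxy_send_timeout    10s;   # Abort if the backend stops reading the request
    proxy_read_timeout    30s;   # Abort if the backend stops sending the response
}
```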
HTTPS redirection is the default security baseline in production
The HTTP-to-HTTPS 301 redirect should stay as simple as possible, and your TLS configuration should keep only modern protocol versions. The source material specifically emphasizes TLSv1.2 and TLSv1.3, which helps avoid the security risks introduced by legacy protocols.
```nginx
server {
    listen 80;
    server_name www.example.com;
    return 301 https://$host$request_uri;   # Force redirect to HTTPS
}

server {
    listen 443 ssl http2;
    server_name www.example.com;

    ssl_certificate     /etc/nginx/ssl/cert.pem;   # Certificate file
    ssl_certificate_key /etc/nginx/ssl/key.pem;    # Private key file
    ssl_protocols       TLSv1.2 TLSv1.3;           # Disable legacy TLS protocols
    ssl_ciphers         HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://127.0.0.1:3000;   # Forward to the application layer after HTTPS termination
    }
}
```
This configuration cleanly separates secure access from application forwarding.
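As an optional hardening step beyond the source template, HSTS tells browsers to skip the plain-HTTP hop entirely on repeat visits; the max-age value here is an assumption:

```nginx
# Inside the listen 443 server block (assumed one-year policy)
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```

Enable this only after confirming every subdomain is served over HTTPS, since browsers cache the policy for the full max-age window.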
The upstream mechanism distributes requests across multiple backend nodes
When you run more than one backend instance, you should use upstream to define the node pool in a single place. Use weight to control the traffic ratio, backup to define a fallback node, and proxy_next_upstream for failover behavior.
```nginx
upstream backend {
    server 10.0.0.1:8080 weight=3;   # Higher-performance node handles more traffic
    server 10.0.0.2:8080 weight=2;   # Secondary weighted node
    server 10.0.0.3:8080 backup;     # Used only when the primary nodes fail
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;                             # Forward to the upstream service group
        proxy_next_upstream error timeout http_502 http_503;   # Automatically switch nodes on failure
        proxy_next_upstream_tries 2;                           # At most 2 attempts in total (i.e., 1 retry)
    }
}
```
This configuration implements the most common weighted round-robin strategy with basic failover.
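Open-source Nginx also supports passive health checks on each upstream server through max_fails and fail_timeout; a sketch with assumed thresholds:

```nginx
upstream backend {
    server 10.0.0.1:8080 weight=3 max_fails=3 fail_timeout=30s;   # Mark node down after 3 failures within 30s
    server 10.0.0.2:8080 weight=2 max_fails=3 fail_timeout=30s;   # fail_timeout also sets how long the node stays down
    server 10.0.0.3:8080 backup;
}
```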
Session persistence scenarios are a good fit for the IP Hash strategy
If your application depends on local sessions, temporary cache, or user state affinity, the same user should reach the same backend node whenever possible. In this case, ip_hash is a simple and direct approach, although it works best in clusters with relatively stable node membership.
```nginx
upstream backend {
    ip_hash;                     # Distribute requests by hashing the client IP
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080 down;   # Temporarily offline node, excluded from traffic distribution
}
```
This configuration is suitable for monolithic applications or lightweight clusters that require basic session stickiness.
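If many clients share an egress IP (corporate NAT, mobile carriers) or node membership changes often, a consistent hash on a session key can be a gentler alternative to ip_hash. This is not part of the source templates, and the cookie name sessionid below is an assumption:

```nginx
upstream backend {
    hash $cookie_sessionid consistent;   # Consistent hashing minimizes remapping when nodes are added or removed
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}
```

Note that requests without the cookie all hash to the same node, so pick a key that is always present.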
Static asset caching directly reduces origin load and bandwidth consumption
Images, scripts, stylesheets, and font files usually have high reuse rates, which makes them ideal candidates for strong caching at the Nginx edge layer. The source material also highlights an important detail: cross-origin font loading requires CORS response headers.
```nginx
server {
    listen 80;
    server_name static.example.com;
    root /var/www/static;

    location ~* \.(jpg|jpeg|png|gif|ico|svg)$ {
        expires 30d;                                    # Cache images for 30 days
        add_header Cache-Control "public, immutable";
        access_log off;                                 # Disable access logs to reduce I/O
    }

    location ~* \.(css|js)$ {
        expires 7d;                                     # Cache frontend static assets for 7 days
        add_header Cache-Control "public";
    }

    location ~* \.(woff2|woff|ttf)$ {
        expires 365d;                                   # Font files are usually versioned and can be cached long term
        add_header Cache-Control "public, immutable";
        add_header Access-Control-Allow-Origin *;       # Allow cross-origin font loading
    }
}
```
The core value of this configuration is trading cache hits for throughput and filename versioning for controlled updates.
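A commonly paired step, not part of the six templates, is on-the-fly compression for text assets so the cached bytes are also smaller on the wire; a minimal gzip sketch with assumed values:

```nginx
gzip on;
gzip_types text/css application/javascript image/svg+xml;   # Text-based types worth compressing (text/html is always included)
gzip_min_length 1024;                                       # Skip tiny responses where compression overhead dominates
```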
API rate limiting is the final gate for anti-abuse and traffic spike protection
Rate limiting is not only for blocking malicious requests. It also protects backend thread pools, database connection pools, and downstream dependencies. rate defines the steady-state request rate, burst provides a short-term buffer, and nodelay determines whether burst requests pass immediately.
```nginx
http {
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;   # One rate-limit bucket per client IP

    # In a real nginx.conf the server block lives inside this same http context,
    # since limit_req_zone is only valid at the http level
    server {
        listen 80;
        server_name api.example.com;

        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;   # Allow up to 20 burst requests to pass immediately
            limit_req_status 429;                        # Return 429 when the limit is exceeded
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```
This configuration smooths traffic spikes before they reach the application layer and helps prevent backend overload during sudden bursts.
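Request-rate limiting pairs naturally with per-IP concurrency limiting, which caps slow or long-lived connections that a rate limit alone will not catch. A sketch using the companion limit_conn module, with assumed limits:

```nginx
http {
    limit_conn_zone $binary_remote_addr zone=addr_limit:10m;   # Track concurrent connections per client IP

    server {
        location /api/ {
            limit_conn addr_limit 10;   # At most 10 concurrent connections per IP
        }
    }
}
```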
These configurations usually need to work together in real systems
In actual deployments, teams rarely enable only one capability. A more common pattern is this: HTTPS termination handles secure ingress, upstream provides horizontal scaling, caching offloads static traffic, and limit_req protects APIs.
The recommended implementation order is to start with reverse proxying, then add HTTPS, then introduce load balancing, and finally layer in caching and rate limiting based on business needs. This sequence makes troubleshooting easier and also simplifies canary validation for each configuration layer.
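As an illustration only, here is a condensed sketch of how the layers compose in one configuration; hostnames, paths, and limits are assumptions carried over from the earlier templates:

```nginx
http {
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    upstream backend {
        server 10.0.0.1:8080 weight=3;
        server 10.0.0.2:8080 weight=2;
    }

    server {
        listen 443 ssl http2;                 # Secure ingress: terminate TLS here
        server_name www.example.com;
        ssl_certificate     /etc/nginx/ssl/cert.pem;
        ssl_certificate_key /etc/nginx/ssl/key.pem;
        ssl_protocols       TLSv1.2 TLSv1.3;

        location /static/ {                   # Cache layer: offload static traffic
            root /var/www;
            expires 30d;
            add_header Cache-Control "public";
        }

        location /api/ {                      # Protection layer: rate-limit the API
            limit_req zone=api_limit burst=20 nodelay;
            limit_req_status 429;
            proxy_pass http://backend;        # Scaling layer: weighted upstream pool
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```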
FAQ
Why must a reverse proxy set X-Real-IP and X-Forwarded-For?
Because the backend sees the proxy server address by default. Without these headers, application logs, risk control policies, audit systems, and rate-limiting logic may all become inaccurate.
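The same problem recurs when Nginx itself sits behind another load balancer: the standard realip module can then restore the true client address from those headers. The trusted CIDR below is an assumption:

```nginx
# ngx_http_realip_module: trust the front proxy and recover the client IP
set_real_ip_from 10.0.0.0/8;         # Address range of the trusted front proxy (assumed)
real_ip_header   X-Forwarded-For;    # Take the client IP from this header
real_ip_recursive on;                # Skip trusted addresses when walking the proxy chain
```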
When should I use weighted round robin, and when should I use IP Hash?
Use weighted round robin when backend nodes have different performance characteristics and you want to maximize overall throughput. Use IP Hash when the application depends on session stickiness and user state must stay on a fixed node.
If Nginx returns 429 for rate limiting, do I still need application-level rate limiting?
Yes. Nginx is best for coarse-grained ingress rate limiting. The application layer is better for fine-grained protection based on user identity, API endpoint, token, or business resource. You should use both together.
Core Summary: This article consolidates the six most common Nginx configuration patterns used in production environments: reverse proxying, HTTP-to-HTTPS redirection, weighted load balancing, IP Hash session persistence, static asset caching, and API rate limiting, with reusable configuration snippets and key parameter explanations.