NGINX Web Server

NGINX is a high-performance, open-source software for web serving, reverse proxying, caching, load balancing, media streaming, and more.

For more information about all its possible applications, see the official NGINX documentation.

Initialization

For information on how to use the Initialization parameter, refer to the Initialization - Bash script section of the documentation.

Default configuration

Default NGINX configuration files are located at:

  • /etc/nginx/nginx.conf

  • /etc/nginx/conf.d/default.conf

NGINX configuration files can be edited directly using the app's terminal interface. For configuration changes to take effect, first test the configuration and then reload NGINX:

$ nginx -t          # test the configuration for syntax errors
$ nginx -s reload   # reload workers with the new configuration

A custom nginx.conf file can be loaded before the job starts using the optional NGINX configuration parameter.

Remote connectivity options

There are two primary methods for establishing remote connections to an NGINX server running on UCloud.

HTTP

To enable external HTTP access, the NGINX server must be deployed with an accessible public URL. This setup ensures that users or services can interact with the server seamlessly over the web.

Important

The NGINX server listens on port 8080 by default. This port must remain unchanged for HTTP traffic to be routed correctly.

Streams

For scenarios that demand external TCP/UDP streaming capabilities, it is essential to assign a public IP address. This step guarantees the direct and reliable flow of data streams to and from the NGINX server.

Note

In scenarios involving TCP/UDP streams, the NGINX server's listening port can be customized to match the port associated with the public IP address, offering flexibility in managing various streaming requirements.

Key features

NGINX is a versatile tool that can be configured for various roles:

  • Web Serving: Efficiently serves static content, accelerating content delivery.

  • Reverse Proxy: Routes client requests to any number of backend servers.

  • Load Balancing: Distributes incoming traffic across multiple servers to enhance application scalability and reliability.

  • Database Streaming: Facilitates real-time data streaming from databases over TCP or UDP, making it possible to proxy and balance database connections or stream database changes to clients.

  • Content Caching: Reduces server load by caching responses to requests.

  • Security Controls: Offers various security features, including rate limiting and client request filtering.

Basic configuration examples are shown below. For advanced NGINX integration patterns, see the tutorials available here.

Application as a web server

NGINX excels at serving static content, providing high performance with minimal resource consumption.

Serving static content

# /etc/nginx/nginx.conf
worker_processes  auto;
error_log /dev/stdout info;
pid        /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    access_log /dev/stdout combined;
    server {
        listen 8080 so_keepalive=on;
        location / {
            root /var/www/html;
            index index.html index.htm;
        }
    }
}

This configuration serves static files from the /var/www/html directory.
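Static delivery can be sped up further with response compression and client-side caching. The fragment below is a sketch: the compressed file types and the cache lifetime are illustrative values, not part of the original configuration.

```nginx
http {
    # Compress common text-based responses before sending them
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;

    server {
        listen 8080 so_keepalive=on;
        location / {
            root /var/www/html;
            index index.html index.htm;
            # Let browsers cache static files for a while (illustrative value)
            expires 1h;
        }
    }
}
```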

Reverse proxy configuration

# /etc/nginx/nginx.conf
worker_processes  auto;
error_log /dev/stdout info;
pid       /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    access_log /dev/stdout combined;

    log_format logger-json escape=json
            '{"source": "nginx",
              "time": $msec,
              "resp_body_size": $body_bytes_sent,
              "host": "$http_host",
              "address": "$remote_addr",
              "request_length": $request_length,
              "method": "$request_method",
              "uri": "$request_uri",
              "status": $status,
              "user_agent": "$http_user_agent",
              "resp_time": $request_time,
              "upstream_addr": "$upstream_addr"}';

    server {
        listen 8080 so_keepalive=on;

        location /app {
            proxy_pass http://backendApp:port;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

This setup forwards requests for /app to the backend application server backendApp.
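If the backend application uses WebSockets, the proxied connection must be upgraded explicitly. A minimal sketch, reusing the backendApp placeholder from the example above:

```nginx
location /app {
    proxy_pass http://backendApp:port;
    # WebSockets require HTTP/1.1 and the Upgrade/Connection
    # handshake headers to be forwarded to the backend
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```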

Application as a load balancer

NGINX can distribute client requests or network load efficiently across multiple servers. This ensures application reliability and scalability.

Configuration example

# /etc/nginx/nginx.conf
worker_processes  auto;
error_log /dev/stdout info;
pid       /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    access_log /dev/stdout combined;
    server {
        listen 8080 so_keepalive=on;
        location / {
            proxy_pass http://backend;
        }
    }
    upstream backend {
        server backend1:port1;
        server backend2:port2;
    }
}

This configuration directs traffic to the group of servers defined in the upstream block, enabling simple round-robin load balancing.
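Beyond the default round-robin behavior, the upstream block supports other balancing strategies. The fragment below sketches a weighted least-connections setup; the third server and the weights are illustrative additions, not part of the original example.

```nginx
upstream backend {
    # Route each request to the server with the fewest active connections
    least_conn;
    server backend1:port1 weight=2;   # receives roughly twice the traffic
    server backend2:port2;
    server backend3:port3 backup;     # used only if the others are unavailable
}
```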

Application as a database stream

NGINX can be configured to act as a powerful intermediary for database streaming, facilitating real-time data flows from databases to clients. This capability is especially useful in scenarios where live data updates are crucial, such as in real-time analytics or live monitoring systems.

How it works

NGINX's stream module allows for the handling of TCP and UDP traffic, enabling it to proxy and load balance database connections just as it does for HTTP traffic. By leveraging the stream module, NGINX can listen for incoming database connections and forward them to one or more backend database servers. This setup not only adds a layer of abstraction and control between clients and databases but also enables the implementation of additional features like connection pooling, SSL/TLS encryption, and access control.

Configuration example

Below is a simplified example of how NGINX can be configured for database streaming. This example assumes that NGINX is placed in front of a database server to proxy incoming SQL connections.

# /etc/nginx/nginx.conf
worker_processes  auto;
error_log /dev/stdout info;
pid       /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    access_log /dev/stdout combined;
    server {
        listen 8080 so_keepalive=on;
        location = /health {
            auth_basic off;
            return 200;
        }
    }
}
stream {
    upstream database {
        server [database endpoint]:[database port];
    }
    server {
        listen [database port] so_keepalive=on;
        proxy_connect_timeout 60s;
        proxy_socket_keepalive on;
        proxy_pass database;
    }
}
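For resilience, the stream upstream can list several database servers with passive health-check parameters. The sketch below extends the example above; the backup endpoint placeholder and the failure thresholds are illustrative assumptions.

```nginx
stream {
    upstream database {
        # Mark the server as unavailable after 3 failed connections
        # within 30 seconds, then retry it after the same interval
        server [database endpoint]:[database port] max_fails=3 fail_timeout=30s;
        # Illustrative fallback endpoint, used only when the primary is down
        server [backup database endpoint]:[database port] backup;
    }
}
```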

Application as a content cache

NGINX can cache the content from a backend server and serve it to clients, reducing the load on the backend server.

Configuration example

# /etc/nginx/nginx.conf
worker_processes  auto;
error_log /dev/stdout info;
pid       /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    access_log /dev/stdout combined;
    proxy_cache_path /work/cache levels=1:2 keys_zone=my_cache:10m;
    server {
        listen 8080 so_keepalive=on;
        location / {
            proxy_cache my_cache;
            proxy_pass http://backend:port;
        }
    }
}

This configuration enables caching of content served by the backend server, enhancing content delivery speed.
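Cache behavior can be tuned with additional directives. The fragment below is a sketch: it caches successful responses for 10 minutes, serves stale entries if the backend fails, and exposes the cache status in a response header. The lifetimes are illustrative values, not from the original configuration.

```nginx
location / {
    proxy_cache my_cache;
    proxy_cache_valid 200 302 10m;        # keep successful responses for 10 minutes
    proxy_cache_valid 404 1m;             # cache not-found responses briefly
    proxy_cache_use_stale error timeout;  # serve stale entries if the backend fails
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://backend:port;
}
```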

Security controls

NGINX offers various security features, such as rate limiting and request filtering, to protect web applications.

Rate limiting

# /etc/nginx/nginx.conf
worker_processes  auto;
error_log /dev/stdout info;
pid       /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    access_log /dev/stdout combined;
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=5r/m;
    server {
        listen 8080 so_keepalive=on;
        location / {
            limit_req zone=mylimit burst=10;
        }
    }
}

This example limits each client to 5 requests per minute, with a burst allowance of 10 queued requests.
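Two common refinements, sketched below as assumptions rather than part of the original setup: nodelay forwards burst requests immediately instead of queuing them, and limit_req_status returns a more accurate status code to throttled clients.

```nginx
http {
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=5r/m;
    # Respond with 429 Too Many Requests instead of the default 503
    limit_req_status 429;
    server {
        listen 8080 so_keepalive=on;
        location / {
            # Allow bursts of up to 10 requests without queuing delay
            limit_req zone=mylimit burst=10 nodelay;
        }
    }
}
```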

Request filtering

# /etc/nginx/nginx.conf
worker_processes  auto;
error_log /dev/stdout info;
pid       /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    access_log /dev/stdout combined;
    server {
        listen 8080 so_keepalive=on;

        # Allow only GET and POST methods
        if ($request_method !~ ^(GET|POST)$) {
            return 405;
        }

        location / {
            proxy_pass http://backend:port;
            # Additional proxy settings...
        }
    }
}

This configuration blocks requests that are not GET or POST, returning a 405 "Method Not Allowed" error.
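Request filtering can also restrict client IP ranges and request body size. The fragment below is illustrative: the /admin path, the private subnet, and the 1 MB limit are assumptions, not values from the original configuration.

```nginx
server {
    listen 8080 so_keepalive=on;
    # Reject request bodies larger than 1 MB
    client_max_body_size 1m;
    location /admin {
        # Allow only a private subnet; deny everyone else
        allow 10.0.0.0/8;
        deny all;
        proxy_pass http://backend:port;
    }
}
```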