Nginx is a web server used by some of the world’s most innovative companies and largest enterprises. Igor Sysoev originally wrote it to solve the C10K problem: the challenge of optimizing network sockets to handle a large number of clients at the same time. The term was coined in 1999 to describe the difficulty existing web servers had in handling large numbers (the 10K) of concurrent connections (the C).
Nginx was open-sourced in 2004, and its event-driven, asynchronous architecture has fueled its growth ever since. It changed how servers operate in high-performance contexts and earned a reputation as one of the fastest web servers available.
What is NGINX?
Nginx is an open-source web server that can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache. It started as a web server designed for maximum performance, scalability, and stability, and later added high-performance load balancing, reverse proxying, and API gateway capabilities.
These additions have helped make NGINX one of the fastest web servers, consistently beating Apache and others in benchmarks measuring web server performance. As of this writing, it serves more than 300 million websites.
Why You Should Learn It
Nginx offers the complete package an enterprise web server needs: a single tool for serving and handling millions of requests of web traffic.
Nginx has grown to support many components of modern web engineering, including gRPC, WebSocket, HTTP/2, and streaming of multiple video formats (HDS, HLS, RTMP, and others).
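As a quick illustration, here is a minimal sketch of what enabling HTTP/2 and proxying a WebSocket connection can look like. It is not a complete setup; the certificate paths, domain, and backend address are placeholders you would replace with your own:

server {
    listen 443 ssl http2;                          # HTTPS with HTTP/2 enabled
    server_name example.com;                       # placeholder domain

    ssl_certificate     /etc/ssl/certs/example.com.crt;   # placeholder cert paths
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location /ws/ {
        proxy_pass http://127.0.0.1:3000;          # placeholder WebSocket backend
        proxy_http_version 1.1;                    # WebSocket requires HTTP/1.1 upstream
        proxy_set_header Upgrade $http_upgrade;    # forward the protocol upgrade
        proxy_set_header Connection "upgrade";
    }
}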
Nginx shines when building microservices because it was created to handle a high volume of connections efficiently; it is commonly used as a reverse proxy and load balancer to manage incoming traffic and distribute it across upstream servers.
Nginx is an all-in-one, software-only web server with modern components such as a load balancer, reverse proxy, and API gateway, designed for cloud-native architectures to accelerate IT infrastructure and application modernization.
It’s a multifunction tool that can act as a load balancer, reverse proxy, content cache, and web server, minimizing the amount of tooling and configuration your organization needs to maintain.
How Nginx Works
Here's a high-level overview of how Nginx works:
Accepting and Handling Requests: Nginx listens on specified network ports (e.g., port 80 for HTTP) and accepts incoming client requests. It uses an event-driven, asynchronous architecture to handle numerous connections concurrently with low resource usage.
Configuration: Nginx's behavior is controlled by a configuration file. It defines various aspects, such as server blocks, which specify the domains and IP addresses it should listen to, along with other settings like SSL certificates, caching, and load balancing.
Processing Requests: When a request arrives, Nginx determines how to handle it based on the configuration file. It can serve static content directly from the file system without invoking other application servers. It acts as a reverse proxy for dynamic content, forwarding the request to the appropriate backend server (e.g., an application server like Node.js, PHP-FPM, or Apache) based on predefined rules. A configuration sketch after this list ties several of these pieces together.
Load Balancing: Nginx can distribute client requests across multiple backend servers to balance the load and improve overall performance. It supports various load-balancing algorithms, such as round-robin, least connections, and IP hash. This allows Nginx to efficiently distribute incoming traffic among backend servers to prevent overloading and ensure optimal resource utilization.
Caching: Nginx includes built-in caching capabilities to store and serve frequently requested static content. It can cache responses from backend servers, reducing the load on those servers and improving response times for subsequent requests. Additionally, it supports various cache-control mechanisms, allowing fine-grained control over the caching behavior.
SSL/TLS Termination: Nginx can handle SSL/TLS encryption and decryption, relieving backend servers from the computational overhead of SSL/TLS termination. It can also enforce secure connections by redirecting HTTP requests to HTTPS, enhancing security for web applications.
High Availability: Nginx supports high availability configurations to ensure service continuity. It can be deployed in a clustered or load-balanced setup, where multiple Nginx instances work together to provide redundancy and failover protection. This helps prevent downtime in case of server failures or maintenance activities.
Logging and Monitoring: Nginx provides detailed logging capabilities, allowing administrators to monitor server performance, track requests, and troubleshoot issues. Additionally, it supports integration with various monitoring tools and can generate real-time metrics, enabling efficient performance monitoring and analysis.
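To make these steps concrete, here is a minimal sketch that combines several of them: an HTTP-to-HTTPS redirect, static file serving, and a reverse proxy to an application server. The domain, certificate paths, and backend address are placeholder assumptions:

server {
    listen 80;
    server_name example.com;                       # placeholder domain
    return 301 https://$host$request_uri;          # enforce secure connections
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;   # placeholder cert paths
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location /static/ {
        root /var/www/example.com;                 # serve static files from disk
    }

    location / {
        proxy_pass http://127.0.0.1:3000;          # placeholder application server
    }
}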
It's important to note that Nginx is highly customizable and can be extended through modules and plugins to add additional functionality. Its versatility, efficiency, and extensive feature set have made it a popular choice for serving web applications and handling high-traffic loads.
Nginx Components
Nginx is a web server with enhanced components for building high-performing, scalable web applications. Below, I will discuss some of these components and show how they make Nginx a faster, all-in-one web server package.
Web Server
As a web server, Nginx supports the features modern web development needs for scalable, high-performing applications: virtual server multi-tenancy, URI and response rewriting, variables, and error handling.
To configure the Nginx web server, you define which URLs it handles and how it processes HTTP requests for resources at those URLs. At a lower level, the configuration defines a set of virtual servers that control the processing of requests for particular domains or IP addresses.
In the coming sections, we will explore how to create and set up Nginx as a web server.
Load Balancer
Nginx can act as a load balancer to distribute HTTP traffic across groups of web or application servers. Load balancing is a technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault-tolerant configurations across multiple application instances.
Nginx can be used in different deployment scenarios as a very efficient HTTP, TCP, and UDP load balancer, as we will demonstrate in the coming section about setting up a load balancer with Nginx.
Reverse Proxy
You can also configure Nginx as a reverse proxy for HTTP and other protocols, with support for modifying request headers and fine-tuning responses.
Proxying is used to distribute the load among several servers, seamlessly show content from different websites, or pass requests for processing to application servers over protocols other than HTTP.
Configuring Nginx as a reverse proxy is very simple, and we will demonstrate it in the coming section, where we will discuss setting up Nginx reverse proxy.
Content Cache
You can cache static and dynamic content from proxied web and application servers to speed delivery to clients and reduce server load.
Content caching is a very important concept for scalable, high-performing applications, and Nginx handles it safely once enabled.
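As a rough sketch of what enabling proxy caching can look like (the paths, zone name, and backend address are placeholders, and the timings are arbitrary):

http {
    # Define an on-disk cache: 10 MB of keys, up to 1 GB of cached responses.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=content_cache:10m
                     max_size=1g inactive=60m;

    server {
        location / {
            proxy_cache content_cache;             # use the zone defined above
            proxy_cache_valid 200 302 10m;         # cache successful responses for 10 minutes
            proxy_cache_valid 404 1m;              # cache not-found responses briefly
            proxy_pass http://127.0.0.1:3000;      # placeholder backend
        }
    }
}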
API Gateway
Nginx can be used as an API Gateway to secure and orchestrate traffic between backend services and API consumers.
Ideally, the function of the API gateway is to provide a single, consistent entry point for multiple APIs regardless of implementation and deployment strategy.
In setting up the Nginx API gateway section below, we will explore how to manage existing APIs, monoliths, and applications undergoing partial microservice transition using Nginx as an API gateway.
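To give a flavor of the idea before that section, here is a minimal sketch of path-based routing, the core of an API gateway. The service names and ports are hypothetical:

http {
    upstream users_service  { server 127.0.0.1:8001; }   # hypothetical services
    upstream orders_service { server 127.0.0.1:8002; }

    server {
        listen 80;

        # One consistent entry point; each API prefix maps to its own backend.
        location /api/users/  { proxy_pass http://users_service; }
        location /api/orders/ { proxy_pass http://orders_service; }
    }
}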
Nginx vs Apache
Now that we have a clear understanding of Nginx and its many features beyond serving web pages, let’s compare it with other web servers to see the differences and the advantages you gain from using Nginx. The comparison is split into categories to give a comprehensive picture.
Security
Nginx
Nginx is known for its strong security features and reputation for being highly secure.
It is designed to handle many concurrent connections efficiently and has a low memory footprint, making it less prone to attacks.
Nginx has built-in security mechanisms such as rate limiting, access controls, and SSL/TLS encryption support (see the sketch below).
Since it can also be used as a reverse proxy server, it provides an additional layer of security by hiding the backend servers' details.
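For example, rate limiting and access controls take only a few directives. A minimal sketch, where the network range and backend address are assumptions:

http {
    # Track request rates per client IP: 10 MB of state, 10 requests/second.
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

    server {
        location /login/ {
            limit_req zone=per_ip burst=20 nodelay;   # absorb short bursts, then reject
            allow 192.168.0.0/24;                     # example trusted network
            deny  all;                                # block everyone else
            proxy_pass http://127.0.0.1:3000;         # placeholder backend
        }
    }
}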
Apache
Apache is also considered secure but has a larger attack surface than Nginx due to its modular architecture.
It has a long history and a large user base, so vulnerabilities are more likely to be discovered and patched quickly.
Apache provides various security modules and features, such as `mod_security` for web application firewall capabilities and `mod_ssl` for SSL/TLS support.
Speed/Performance
Nginx
Nginx is known for its excellent performance and scalability.
It uses an event-driven, asynchronous architecture to handle many simultaneous connections efficiently.
It has a small memory footprint and is optimized for handling static content and serving as a reverse proxy.
Nginx is highly efficient in CPU and memory usage, making it ideal for high-traffic websites and resource-constrained environments.
Apache
Apache has a more traditional multi-process, multi-threaded architecture.
It performs well for serving dynamic content and has extensive module support.
However, Apache's process-based model can lead to higher memory usage than Nginx, especially when handling many concurrent connections.
Apache's performance can be improved using additional modules, such as `mod_cache` for caching or `mod_proxy` for proxying requests to backend servers.
Handling Requests
Nginx
Nginx is known for its efficient and non-blocking event-driven architecture, making it highly capable of handling many concurrent requests.
It excels at serving static content and performing load balancing and reverse proxying tasks.
Nginx's handling of requests is optimized for speed and efficiency, with minimal resource consumption.
Apache
Apache uses a process-based model where a separate process or thread handles each request.
While this model allows Apache to handle complex scenarios and dynamic content effectively, it can be less efficient for handling many concurrent connections.
Apache's performance can be improved by fine-tuning its configuration, such as adjusting the number of processes or threads, but it may require more system resources than Nginx.
Platform Support
Nginx
Nginx is cross-platform and runs on various operating systems, including Linux, BSD variants, macOS, and Windows.
It is widely used on Linux distributions and has extensive support and documentation for different platforms.
Apache
Apache is cross-platform and supports various operating systems, including Linux, BSD variants, macOS, and Windows.
It has been the most popular web server for years and is bundled with most Linux distributions.
Apache has extensive documentation and a large community, making finding support for different platforms easier.
Modules
Nginx
Nginx has a modular architecture, but the number of available modules is smaller than Apache's.
It provides essential modules for core functionality, such as HTTP proxying, load balancing, and SSL/TLS support.
Additional modules can be added through third-party extensions, but the ecosystem is less vast than Apache's.
Apache
Apache has a rich ecosystem of modules, allowing users to extend its functionality in numerous ways.
It provides modules for various purposes, including authentication, caching, compression, scripting languages, and database connectivity.
The extensive module support makes Apache highly versatile, adaptable, and suitable for various use cases.
Configuration
Nginx
Nginx's configuration syntax is concise and straightforward, using a declarative approach.
It uses a hierarchical structure with directives grouped in blocks, making it easy to understand and maintain.
Nginx's configuration file is typically organized separately for different purposes (e.g., server blocks), providing better organization and manageability.
Apache
Apache's configuration syntax is more complex than Nginx's, relying on its own directive format in the main configuration files and in per-directory `.htaccess` files.
The configuration can become cluttered and harder to manage, especially in complex setups.
On the other hand, Apache's configuration offers more flexibility and fine-grained control, allowing users to customize almost every aspect of the server's behavior.
Both Nginx and Apache are powerful web servers with their strengths and areas of expertise.
Nginx excels in performance, scalability, and security, making it ideal for serving static content, handling many concurrent connections, and acting as a reverse proxy.
However, Apache has a larger module ecosystem, making it more versatile and suitable for complex dynamic content scenarios.
The choice between Nginx and Apache ultimately depends on the specific requirements and priorities of the project or application.
Setting up Nginx as a Web Server
In this section, we will learn how to set up Nginx on your local machine and use it to serve your web content. The setup steps differ by operating system; we will use Linux for this demonstration.
To follow this guide, you should have Ubuntu 18.04 or later installed on your machine; for Windows or macOS, consult the official Nginx installation documentation.
Step 1 — Installing Nginx
We will use the `apt` packaging system in Ubuntu to install Nginx since it is available in the default repository.
It’s always good practice to update the package index when installing new packages. You can do that using the commands below in your terminal:
sudo apt update
sudo apt install nginx
The first command updates the package index, and the second installs the Nginx package on your local system.
Together they install Nginx and its required dependencies, and you should see output like the screenshot below:
[Screenshot]
Step 2 — Adjusting Firewall Settings
Now that we have Nginx installed, we will adjust our firewall settings to allow access to the Nginx service. Nginx makes this process very easy, as it registers itself as a service with `ufw` during installation.
To allow access, use the following command to list all the application configurations that `ufw` knows how to work with:
sudo ufw app list
After entering the command, you should see the available profiles, as shown in the screenshot below:
[Screenshot]
The list displays three different profiles available for Nginx:

- Nginx Full: This profile opens both port `80` (normal, unencrypted web traffic) and port `443` (TLS/SSL encrypted traffic)
- Nginx HTTP: This profile opens only port `80` (normal, unencrypted web traffic)
- Nginx HTTPS: This profile opens only port `443` (TLS/SSL encrypted traffic)
On a production-ready web server, it is recommended to enable the most restrictive profile that still allows the traffic you need. If you serve traffic over both HTTPS and HTTP, that is `Nginx Full`; some servers enable `Nginx HTTPS` to allow only secured HTTPS traffic.
Since we are working on a local machine with no SSL set up yet, we will allow only HTTP traffic, which is the `Nginx HTTP` profile.
You can enable it using the command below:
sudo ufw allow 'Nginx HTTP'
Once you’re done, you can verify your firewall settings using the status command:
sudo ufw status
You should see the allowed HTTP traffic listed in the output, as shown below:
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
Now that you’ve added the appropriate firewall rule, you can check that your web server is running and able to serve content correctly.
Step 3 — Checking Your Web Server
Now that we have Nginx installed and the firewall configured, it’s important to check that the Nginx web server is running, even though Ubuntu should start it automatically during installation.
To check the status of our newly installed web server, type in the following command:
systemctl status nginx
This command checks the `systemd` init system to make sure the service is running; if it is running successfully, it will produce output like the screenshot below.
[screenshot]
Even when the command output shows that Nginx is running successfully, it’s worth rendering a simple page to see Nginx in action.
To do this, we will access the default Nginx landing page, which confirms that Nginx is working if it loads. There are different ways to do this, but we will start by retrieving our IP address and using it to preview the landing page.
Type the following commands to retrieve your default IP address:
ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
If the above command doesn’t work because a required package is missing, use this second command instead:
curl -4 icanhazip.com
This should give you an IP address. Next, visit that IP address in your web browser.
[Image]
If you see this page, Nginx is running successfully on your local machine. If not, you may need to reinstall Nginx or manage the service using the commands listed below:
To restart the Nginx server:
sudo systemctl restart nginx
To stop the Nginx server:
sudo systemctl stop nginx
To start the Nginx server:
sudo systemctl start nginx
To reload the Nginx server after making configuration changes, without stopping the server completely:
sudo systemctl reload nginx
To disable Nginx from auto-starting when you boot your local machine, use the following command:
sudo systemctl disable nginx
To enable Nginx to auto-start when you boot your local machine:
sudo systemctl enable nginx
Step 4 — Setting Up Server Blocks
Lastly, with a running Nginx web server, we can start creating and configuring multiple server blocks to host different domain names on a single server.
Nginx comes with a default server block that is enabled out of the box; it is what served the preview page above confirming that Nginx works properly.
We could keep using it, but since Nginx supports multiple server blocks, it is good practice to create dedicated ones.
A server block is a subset of Nginx’s configuration that defines a virtual server used to handle requests of a defined type. Administrators often configure multiple server blocks and decide which block should handle which connection based on the requested domain name, port, and IP address.
Here’s an example showing two server blocks:
server {
listen 192.168.2.10;
. . .
}
server {
listen 80;
server_name example.com;
. . .
}
As mentioned before, if you install Nginx on Ubuntu 18.x, it comes with a default server block serving documents out of `/var/www/html`, which works perfectly for a single site. However, to host multiple sites, we need to create multiple server blocks.
In this guide, we will create an example server block for `example.com`; it serves as an example and should be replaced with your actual domain on your local machine or production server.
Create the directory for `example.com` as follows, using the `-p` flag to create any necessary parent directories:
sudo mkdir -p /var/www/example.com/html
Next, assign ownership of the directory to your user with the `$USER` environment variable using the command below:
sudo chown -R $USER:$USER /var/www/example.com/html
The permissions of your web root should be correct if you haven’t modified your `umask` value; however, you can make sure by typing the following command:
sudo chmod -R 755 /var/www/example.com
Next, create a sample `index.html` page using `nano`, `vim`, or your favorite editor:
nano /var/www/example.com/html/index.html
Inside, add the following sample HTML:
<html>
<head>
<title>Welcome to example.com!</title>
</head>
<body>
<h1>Success! The example.com server block is working!</h1>
</body>
</html>
Save and close the file when you are finished. If you used `nano`, you can exit by pressing `CTRL + X`, then `Y` and `ENTER`.
In order for Nginx to serve this content, it’s necessary to create a server block with the correct directives. Instead of modifying the default configuration file directly, make a new one at `/etc/nginx/sites-available/example.com`:
sudo nano /etc/nginx/sites-available/example.com
Add the following configuration block, which is similar to the default, but updated for your new directory and domain name:
server {
listen 80;
listen [::]:80;
root /var/www/example.com/html;
index index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
try_files $uri $uri/ =404;
}
}
Notice that we’ve updated the `root` configuration to the new directory and the `server_name` to the domain name. Save and close the file when you are finished.
Next, enable the file by creating a symbolic link from it to the `sites-enabled` directory, which Nginx reads from during startup:
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
Two server blocks are now enabled and configured to respond to requests based on their `listen` and `server_name` directives:

- `example.com`: will respond to requests for `example.com` and `www.example.com`.
- `default`: will respond to any request on port `80` that does not match the other block.
Adding additional server names can sometimes cause hash bucket memory problems in Nginx. To avoid this, it is necessary to adjust a single value in the `/etc/nginx/nginx.conf` file.
Open the file:
sudo nano /etc/nginx/nginx.conf
Find the `server_names_hash_bucket_size` directive and remove the `#` symbol to uncomment the line:
...
http {
...
server_names_hash_bucket_size 64;
...
}
...
Save and close the file when you are finished, then test it using the following command:
sudo nginx -t
If there are no problems, then restart your server with the command below:
sudo systemctl restart nginx
Nginx should now be serving your domain name. You can test this by navigating to `http://example.com`, where you should see something like the following:
[screenshot]
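Note: if you are following along on a local machine, your browser will not resolve a placeholder domain like example.com to your own server on its own. A common workaround, assuming you just want to test locally, is to point the name at localhost in /etc/hosts:

echo "127.0.0.1 example.com www.example.com" | sudo tee -a /etc/hosts

Remember to remove that line once you are done testing, since it overrides the real example.com.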
Setting up Nginx Load Balancer
You can use Nginx as a load balancer to distribute HTTP traffic across web or application server groups. You can also use different load-balancing algorithms and advanced features like slow-start and session persistence.
Now that you know how to set up Nginx to work as a web server, in this lesson, we are going to learn how to set up Nginx as a load balancer.
Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization. It also lets you maximize throughput, reduce latency, and ensure fault-tolerant configurations.
Proxying HTTP Traffic
In Nginx, it is easy to set up load balancing.
To start using Nginx as a load balancer, you need to define a group of servers with the `upstream` directive inside the `http` context and route HTTP traffic to it.
Each server in the group is defined with the `server` directive, which takes the server’s address and optional configuration parameters. Note that this `server` directive is not to be confused with the `server` block used to define virtual servers.
Let’s look at a simple example: we’ll create a `backend` group to load balance HTTP traffic across five backend servers.
http {
upstream backend {
server backend1.example.com weight=5;
server backend2.example.com;
server backend3.example.com;
server backend4.example.com;
server backend5.example.com;
}
....
}
Inside the `http` context, we create a server group named `backend` using the `upstream` directive and add five backend server instances.
Lastly, to pass requests to the server group we’ve just created, we reference the group name in a `proxy_pass` directive inside a `server` block, still within the `http` context, as shown below:
server {
location / {
proxy_pass http://backend;
}
}
Here’s the full code snippet to create a load balancer for your web traffic.
http {
upstream backend {
server backend1.example.com weight=5;
server backend2.example.com;
server backend3.example.com;
server backend4.example.com;
server backend5.example.com;
}
server {
location / {
proxy_pass http://backend;
}
}
}
Next, let’s explore some configurations you can use when creating load balancers.
Choosing Load Balancing Methods
Load balancing methods are the algorithms or mechanisms used to efficiently distribute incoming requests or traffic among the servers in a pool. When building a highly scalable, high-performing application, efficient traffic distribution is necessary to ensure the high availability of web services and their fast, reliable delivery.
There are various load-balancing methods available, and each uses particular criteria to schedule incoming traffic. Nginx supports several of these methods, and we will explore each of them in this lesson.
Round robin
This is one of the most popular traffic-distribution algorithms. In this method, incoming requests are routed to the available servers in sequential order.
Here’s an example of using the Round Robin algorithm with Nginx load balancing.
http {
upstream backend {
# no load balancing method is specified for Round Robin
server backend1.example.com weight=5;
server backend2.example.com;
server backend3.example.com;
server backend4.example.com;
server backend5.example.com;
}
....
}
If no method is specified, Nginx defaults to the Round Robin algorithm.
Least Connections
In this method, a request is sent to the server with the least number of active connections, again with server weights taken into consideration.
We will discuss what server weights are and how to specify them shortly. Here’s an example of using the Least Connections algorithm with Nginx load balancing:
http {
upstream backend {
least_conn;
server backend1.example.com weight=5;
server backend2.example.com;
server backend3.example.com;
server backend4.example.com;
server backend5.example.com;
}
....
}
You enable the algorithm by naming it at the top of the `upstream` block, as shown in the example. Here, `backend1.example.com` has the highest weight, so it is treated as having the most capacity: the higher a server’s weight, the larger the share of requests it is expected to handle.
Source IP hash
In this method, the client’s IP address determines which server handles the request: the first three octets of the IPv4 address, or the entire IPv6 address, are used to calculate the hash value.
http {
upstream backend {
ip_hash;
server backend1.example.com weight=5;
server backend2.example.com;
server backend3.example.com;
server backend4.example.com;
server backend5.example.com;
}
....
}
This method guarantees that requests from the same address go to the same server, unless that server is unavailable.
Generic Hash
In this method, the server to which a request is sent is determined from a user‑defined key which can be a text string, variable, or combination. For instance, the key may be a paired source IP address and port, or a URI as in this example:
http {
upstream backend {
hash $request_uri consistent;
server backend1.example.com weight=5;
server backend2.example.com;
server backend3.example.com;
server backend4.example.com;
server backend5.example.com;
}
....
}
The `consistent` parameter enables ketama consistent-hash load balancing, so that requests are evenly distributed across the upstream servers based on the user-defined hashed key value.
Those are all the methods supported in open-source Nginx; more are available in Nginx Plus.
What are Nginx Server Weights?
In the context of Nginx load balancing, server weights are the relative weights assigned to the backend servers in a load-balancing configuration. Nginx is a popular web server and reverse proxy server that can be used as a load balancer to distribute incoming client requests across multiple backend servers.
Each backend server is assigned a weight, which represents its capacity or capability to handle requests. The weight determines the proportion of client requests that will be forwarded to each server. Servers with higher weights receive a larger share of the traffic, while servers with lower weights receive a smaller share.
Here's an example to illustrate how server weights work in Nginx:
http {
upstream backend {
server backend1.example.com weight=3;
server backend2.example.com weight=2;
server backend3.example.com weight=1;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
}
In this example, there are three backend servers defined: `backend1.example.com`, `backend2.example.com`, and `backend3.example.com`. The weights assigned to them are 3, 2, and 1, respectively. This means that `backend1.example.com` will receive approximately 3/6 (or 50%) of the requests, `backend2.example.com` approximately 2/6 (or 33.33%), and `backend3.example.com` approximately 1/6 (or 16.67%).
Server weights allow you to distribute the load based on the capacity or performance characteristics of your backend servers. By adjusting the weights, you can fine-tune the load-balancing behavior to meet your requirements, such as giving more traffic to more powerful servers or balancing the load evenly across all servers.
It's important to note that the actual distribution of requests may not be precisely proportional to the weights, as Nginx uses an algorithm called weighted round-robin to determine the server selection. However, over a large number of requests, the distribution should be close to the specified weight ratios.
What is Server Slow-Start?
Nginx Server Slow-Start, also known as the "slow-start" feature, is a mechanism in Nginx that allows backend servers to gradually handle an increasing amount of traffic during their startup phase. It helps prevent overwhelming the newly started servers with a sudden surge of requests.
When a backend server is added to an Nginx load balancing configuration or when it is considered healthy after being previously marked as down, Nginx employs the slow-start mechanism to gradually increase the traffic it forwards to that server. By slowly ramping up the request load, Nginx gives the server time to warm up and reach its optimal performance level before receiving the full amount of traffic.
Here's a simplified explanation of how server slow-start typically works:
Initial phase: When a server is added or restarted, the load balancer starts by directing only a small portion of the incoming traffic to that server. This allows the server to warm up and handle a manageable load initially.
Incremental increase: As time progresses, the load balancer gradually increases the traffic directed to the server in small increments. For example, it might double the traffic every few seconds or minutes, depending on the configuration.
Monitoring and adjustment: The load balancer continuously monitors the server's performance and response times during the slow-start phase. If the server starts to experience issues or its response times degrade significantly, the load balancer can adjust the traffic increase rate or even pause the slow-start process temporarily.
Full capacity: Once the slow-start period is complete and the server has proven its stability and performance, the load balancer allows it to handle a full load of incoming requests.
During the slow-start phase, Nginx uses a dynamic weight adjustment for the newly added or reactivated server. The weight starts from a lower value and gradually increases over time until it reaches the weight specified in the configuration. This weight adjustment affects the proportion of traffic sent to the server.
Here’s an example illustrating the slow-start configuration (note that the `slow_start` parameter is available only in the commercial NGINX Plus):
http {
upstream backend {
server backend1.example.com weight=10 slow_start=30s;
server backend2.example.com weight=10 slow_start=1m;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
}
In this example, two backend servers, `backend1.example.com` and `backend2.example.com`, are defined with a weight of 10 each. They also have different slow-start durations: 30 seconds for `backend1.example.com` and 1 minute for `backend2.example.com`.
During the slow-start period, Nginx will gradually increase the traffic sent to each server. It starts from a lower weight value (e.g., 1) and linearly scales it up over the specified duration. Once the slow-start period is over, the weight of the server reaches the configured value (e.g., 10), and it starts receiving the full share of traffic.
By using slow-start, Nginx helps avoid sudden spikes in traffic to newly added or reactivated servers, which can improve the stability and performance of the overall system. It gives the servers time to warm up, initialize connections, cache data, or perform any necessary initialization tasks before handling a high load.
Setting up Nginx Reverse Proxy
One of the powerful features of Nginx is that you can configure it to become a reverse proxy. A reverse proxy is a server that sits in front of web servers and forwards client requests to those servers. Reverse proxies are typically deployed to increase the security, performance, and reliability of a server.
To help you understand how a reverse proxy works and the benefits it brings, let’s first define what a proxy server is.
What is a proxy server?
A proxy server is a middleman that sits between a group of client machines and servers. When the client machines make requests for resources on the internet, the proxy server intercepts those requests and communicates with web servers on behalf of those clients. A proxy server goes by many names: forward proxy, proxy, or web proxy.
[Infographic]
What is a reverse proxy?
A reverse proxy is a server that sits in front of one or more web servers, intercepting requests from clients. This is the opposite of a forward proxy, where the proxy sits in front of the clients. With a reverse proxy, client requests are intercepted at the network edge by the reverse proxy server, which then sends requests to and receives responses from the origin server.
The difference between a forward proxy and a reverse proxy is important and can be summarized simply: a forward proxy sits in front of a client and ensures that no origin server ever communicates directly with that specific client, while a reverse proxy sits in front of an origin server and ensures that no client ever communicates directly with that origin server.
[Infographic]
Now that we understand proxy servers, reverse proxies, and the difference between them, let’s explore how to set up a reverse proxy in Nginx.
NGINX Reverse Proxy
To configure NGINX as a reverse proxy for various protocols, including HTTP, with support for manipulating request headers and fine-tuning response buffering, follow these steps.
When acting as a proxy, NGINX forwards client requests to a designated server, retrieves the corresponding response, and then sends that response back to the client.
You can proxy requests to an HTTP server, whether it's another NGINX server or any other server, using a designated protocol. To pass a request to an HTTP-proxied server, specify the `proxy_pass` directive inside a `location` block:
location /backend/v1 {
proxy_pass http://example.com/v1/;
}
With this configuration, all requests to Nginx whose URL starts with `/backend/v1` will be forwarded to the v1 application server at `http://example.com/v1/`. You can also specify a domain name or IP address, including a port number, as the `proxy_pass` argument.
Proxying to a load balancer
It is also possible to proxy requests to a load balancer with Nginx: create a load balancer as shown in the previous lesson and use it as the `proxy_pass` argument, as shown below:
location /backend/v2 {
proxy_pass http://backend;
}
With this configuration, all requests to Nginx whose URL starts with `/backend/v2` will be forwarded to one of the application servers listed in the `upstream` group named `backend`.
Passing Request Headers
By default, NGINX redefines two header fields in proxied requests: “Host” is set to the `$proxy_host` variable, and “Connection” is set to `close`. You can pass many other request headers to the proxied server beyond these defaults.
To do that, you use the `proxy_set_header` directive. The directive can be added in any applicable context of the configuration file, e.g. a `location` or `server` block, or even the `http` block. Here’s an example showing how to pass different headers in a `location` block:
location /backend/v1/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header M-Header "The-Value";
proxy_pass http://localhost:8000;
}
Nginx automatically removes all headers with an empty value; therefore, to prevent a header from being passed, set its value to an empty string:
location /backend/v1/ {
proxy_set_header Host $host;
proxy_set_header M-Header "";
proxy_pass http://localhost:8000;
}
Configuring Buffers
Nginx’s reverse proxy supports buffering, a very important mechanism that improves response performance and the speed of the reverse proxy.
NGINX buffers responses from proxied servers by default. A response is stored in the internal buffers and is not sent to the client until the whole response is received.
According to NGINX, buffering helps optimize performance with slow clients, which would otherwise waste proxied server time if responses were passed from NGINX to the client synchronously. Buffering allows the proxied server to process responses quickly, while NGINX stores them for as long as clients need to download them.
Buffering is enabled by default in NGINX and can be turned off or tuned for better performance.
You can turn NGINX buffering on and off using the `proxy_buffering` directive, and you can tune it with the following directives: `proxy_buffers` controls the size and number of buffers allocated for a request, and `proxy_buffer_size` sets the size of the buffer used for the first part of the response, which typically contains the headers.
Here’s an example:
location /backend/ {
proxy_buffering on;
proxy_buffers 16 4k;
proxy_buffer_size 2k;
proxy_pass http://localhost:8000;
}
In this example, `proxy_buffers` allocates 16 buffers of 4 KB each for the response, while `proxy_buffer_size` sets a 2 KB buffer for the first part of the response.
NGINX is a powerful web server, and we have only scratched the surface in this lesson; you can explore more in the official documentation.
We covered the server essentials you need to know: servers, types of servers, web servers, and NGINX itself. From here, you can learn more about the other types of web servers available.