
Nginx Performance Tuning Reference for Geeks

NGINX is well known as a high‑performance load balancer, cache, and web server, powering over 40% of the busiest websites in the world. For most use cases, default NGINX and Linux settings work well, but achieving optimal performance sometimes requires a bit of tuning. This blog post discusses some of the NGINX and Linux settings to consider when tuning a system.

You can tune almost any setting, but this post concentrates on the few settings for which tuning benefits the most users. There are settings that we recommend you change only if you have a deep understanding of NGINX and Linux, or as directed by our Support or Professional Services teams, and we don’t cover those here. The Professional Services team has worked with some of the world’s busiest websites to tune NGINX for the maximum level of performance and is available to work with you on getting the most out of your NGINX or NGINX Plus deployment.

Introduction

A basic understanding of the NGINX architecture and configuration concepts is assumed. This post does not attempt to duplicate the NGINX documentation, but provides an overview of the various options and links to the relevant documentation.

A good rule to follow when tuning is to change one setting at a time, and set it back to the default value if the change does not improve performance.

Tuning Your Linux Configuration

The settings in modern Linux kernels (2.6+) are suitable for most purposes but changing some of them can be beneficial. Check the kernel log for error messages indicating that a setting is too low, and adjust it as advised. Here we cover only those settings that are most likely to benefit from tuning under normal workloads. For details on adjusting these settings, please refer to your Linux documentation.

The Backlog Queue

The following settings relate to connections and how they are queued. If you have a high rate of incoming connections and you are getting uneven levels of performance (for example some connections appear to be stalling), then changing these settings can help.

  • net.core.somaxconn – The maximum number of connections that can be queued for acceptance by NGINX. The default is often very low, and that’s usually acceptable because NGINX accepts connections very quickly, but it can be worth increasing it if your website experiences heavy traffic. If error messages in the kernel log indicate that the value is too small, increase it until the errors stop.

Note: If you set this to a value greater than 512, change the backlog parameter of the NGINX listen directive to match (see the sketch after this list).

  • net.core.netdev_max_backlog – The number of packets that can be buffered by the network card before being handed off to the CPU. Increasing the value can improve performance on machines with a high amount of bandwidth. Check the kernel log for errors related to this setting and consult the network card documentation for advice on changing it.
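These kernel parameters are set with sysctl. A minimal sketch, using illustrative values rather than recommendations, together with the matching listen backlog in nginx:

# /etc/sysctl.d/99-nginx.conf — apply with: sysctl --system
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 8192

# nginx.conf — keep the listen backlog in step with somaxconn
listen 80 backlog=4096;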

File Descriptors

File descriptors are operating system resources used to represent connections and open files, among other things. NGINX can use up to two file descriptors per connection. For example, if NGINX is proxying, it generally uses one file descriptor for the client connection and another for the connection to the proxied server, though this ratio is much lower if HTTP keepalives are used. For a system serving a large number of connections, the following settings might need to be adjusted:

  • fs.file-max – The system‑wide limit for file descriptors
  • nofile – The user file descriptor limit, set in the /etc/security/limits.conf file
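A sketch of what these two settings might look like together, with illustrative values; worker_rlimit_nofile is the matching nginx directive that raises the limit for worker processes (the user name nginx is an assumption about how your workers run):

# /etc/sysctl.conf
fs.file-max = 2097152

# /etc/security/limits.conf
nginx soft nofile 65536
nginx hard nofile 65536

# nginx.conf
worker_rlimit_nofile 65536;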

Ephemeral Ports

When NGINX is acting as a proxy, each connection to an upstream server uses a temporary, or ephemeral, port. You might want to change this setting:

  • net.ipv4.ip_local_port_range – The start and end of the range of port values. If you see that you are running out of ports, increase the range. A common setting is ports 1024 to 65000.
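As a sketch, widening the range to the common values mentioned above:

# /etc/sysctl.conf
net.ipv4.ip_local_port_range = 1024 65000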

Compress content with gzip

First up is gzip compression. Compared to network bandwidth, CPU resources are cheap, so compressing text content is almost always worthwhile.

# Compress content
gzip on;

However, in many cases this alone is not enough, because by default nginx compresses only responses whose Content-Type is text/html. To add more content types to compress, use gzip_types.

# text/html is always compressed when gzip is on
gzip_types text/css text/javascript;

The settings above are sufficient for a simple website, but in some cases you may also want to add Content-Types such as application/json and application/javascript.

Also note that, unlike Apache’s mod_deflate, nginx’s gzip module does not append a Vary: Accept-Encoding header to compressed content unless you write gzip_vary on;.
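Putting the pieces of this section together, a gzip configuration might look like the following sketch; gzip_min_length is an additional standard directive that skips compressing tiny responses, and the types and threshold shown are illustrative:

gzip on;
gzip_types text/css text/javascript application/json application/javascript;
gzip_vary on;
# Skip responses too small for compression to pay off
gzip_min_length 1024;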

Deliver compressed content with the gzip_static module

Although we said above that CPU resources are cheap compared to network bandwidth, the CPU cost of gzip compression cannot be ignored as the volume of delivered content grows.

In such cases, gzip_static is effective. When gzip compression is enabled in the normal way (gzip on), nginx compresses content on every delivery; with gzip_static enabled, nginx instead delivers an already-compressed file (e.g. xxx.js.gz) as-is. Using this feature can therefore save a lot of CPU resources. If you also want to serve clients that do not support gzip compression, enable gunzip so that nginx unpacks the .gz file for them.

# If xxx.(css|js).gz exists, deliver it as-is
# If the client does not support gzip, unpack the .gz file before sending
location ~* \.(css|js)$ {
    gzip_static always;
    gunzip on;
}

Note that to use gzip_static and gunzip, you need the --with-http_gzip_static_module and --with-http_gunzip_module options when running nginx’s configure script.
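To check whether your nginx binary was built with these modules, you can inspect its configure arguments (nginx -V prints them to stderr):

nginx -V 2>&1 | grep -oE 'with-http_(gzip_static|gunzip)_module'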

Compress further with zopfli

zopfli is a deflate-compatible compression algorithm (and implementation), so its output works with gzip and zlib. Because it is deflate-compatible, content compressed with zopfli can be decompressed by any deflate decoder. zopfli achieves a higher compression ratio (by a few percent) than standard deflate, but its compression cost is much higher, so compressing content with zopfli on every request from nginx is not realistic.

However, if you are using gzip_static, the story is different. As mentioned, zopfli-compressed files can be decompressed like any other gzip file, so it is enough to pre-compress your .gz files with zopfli and deliver them with gzip_static.
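For example, assuming the zopfli command-line tool is installed, static files can be pre-compressed at deploy time (app.js is a hypothetical file; zopfli keeps the original and writes app.js.gz alongside it):

# More --i iterations = better compression but slower; run once at deploy time
zopfli --i15 app.js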

Adjust Worker Processes

Modern servers have multiple CPU cores, and nginx is built to take advantage of them by running several worker processes in parallel. The “worker_processes” directive controls how many worker processes nginx starts.

Open the file /etc/nginx/nginx.conf and set worker_processes. The value auto starts one worker per CPU core and is a good default for most deployments; alternatively, you can set an explicit count, for example to reserve cores for other services:

worker_processes 12;

Disable Access Logs

Nginx writes a log entry for every request, so on a busy server logging costs disk I/O as well as disk space. If you do not need access logs, you can disable them to improve performance and save disk space. Add the following parameter to the nginx configuration to disable access logs:

 access_log off;
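If you do need the access logs, buffering the writes is a middle ground that avoids one disk write per request. A sketch, with illustrative buffer and flush values:

# Write log entries in 32 KB batches, or at least every 5 seconds
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;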

tcp_nodelay & tcp_nopush

These directives operate at the TCP level and control how nginx’s writes are turned into packets. Here are some details about them.

1) tcp_nodelay: This directive sets the TCP_NODELAY socket option, which disables Nagle’s algorithm: small writes are sent immediately instead of being buffered in the hope of coalescing them into larger packets. You can set this parameter by adding the following line:

 tcp_nodelay on;

2) tcp_nopush: This directive makes your server send the HTTP response headers and the beginning of the file in one packet instead of in separate ones. This optimizes throughput and minimizes bandwidth consumption, which helps improve your website’s loading time. Note that it takes effect only while sendfile is in use, as shown in the sketch below.
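A typical combination looks like the following sketch; whether it helps depends on your workload:

# tcp_nopush takes effect only together with sendfile
sendfile on;
tcp_nopush on;
tcp_nodelay on;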

Cache on the client with expires

Saving network bandwidth by compressing content is good, but there is an even better way to save bandwidth: let the client cache content after the first access, so that the server can answer revalidation requests with a short 304 Not Modified response, or the client skips the request entirely until the cache expires.

Since we are delivering static content here, setting expires in nginx as follows should be enough:

expires 30d;

The expires directive accepts quite a few formats, which can be confusing, so please see the official documentation for details.
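For example, a long client-side cache for static assets might be scoped to a location block like this sketch (the extensions and lifetime are illustrative):

location ~* \.(css|js|png|jpg|gif|svg)$ {
    expires 30d;
    add_header Cache-Control "public";
}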

Cache file information of served content with open_file_cache

With open_file_cache, nginx can cache information about served files, such as open file descriptors, sizes, and modification times.

# Set the maximum number of cache entries and how long an unaccessed entry lives
open_file_cache max=100 inactive=10s;

It mainly has the effect of reducing the system call overhead associated with file open and close.
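The companion directives control revalidation and what gets cached; a sketch with illustrative values:

open_file_cache max=100 inactive=10s;
open_file_cache_valid 30s;    # revalidate cached entries every 30 seconds
open_file_cache_min_uses 2;   # keep an entry only if accessed at least twice within 'inactive'
open_file_cache_errors on;    # also cache file-not-found results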
