There are many possible real-life scenarios and not every optimization technique will suit yours, but I hope this will be a good starting place.

Also, you shouldn’t copy-paste these examples on faith that they will make your server fly 😃 You have to back your decisions with extensive tests and the help of a monitoring system (e.g. Grafana).

Cache static and dynamic content

Setting up a caching strategy for static and dynamic content can offload your server by avoiding repeated downloads of the same, rarely updated files. It will also make your site load faster for frequent visitors.

Example configuration:

location ~* ^.+\.(?:jpg|png|css|gif|jpeg|js|swf|m4v)$ {
    access_log off; log_not_found off;

    tcp_nodelay off;

    open_file_cache max=500 inactive=120s;
    open_file_cache_valid 45s;
    open_file_cache_min_uses 2;
    open_file_cache_errors off;

    expires max;
}

For additional performance gain, you may:

  • disable logging for static files,
  • disable the tcp_nodelay option - it is useful when sending lots of small files (ideally smaller than a single TCP packet, ~1.5 kB), but images are rather big files and sending the packets batched together performs better,
  • play with open_file_cache - it will take off some IO load,
  • add a long expires time.

Caching dynamic content is a harder case. Some articles are rarely updated and may stay in the cache forever, but other pages are pretty dynamic and shouldn’t be cached for long. Even if caching dynamic content sounds scary to you, it isn’t. So-called microcaching (caching for a short period of time, like 1s) is a great solution for the digg effect or slashdotting.

Let’s say your page gets ten views per second and you cache every page for 1s; then you will be able to serve 90% of the requests from cache, leaving precious CPU cycles for other tasks.
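
A minimal microcaching sketch (the cache zone name microcache, the cache path, and the backend address are assumptions for illustration):

proxy_cache_path /var/cache/nginx/microcache keys_zone=microcache:10m max_size=100m inactive=60s;

server {
    location / {
        proxy_cache microcache;
        proxy_cache_valid 200 1s;       # cache successful responses for 1 second
        proxy_cache_use_stale updating; # serve stale content while a refresh is in progress
        proxy_pass http://127.0.0.1:8080;
    }
}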

Compress data

On your pages you should use file types that are already efficiently compressed, like JPEG, PNG, or MP3. But all HTML, CSS, and JS can be compressed on the fly by the web server too; just enable options like these globally:

gzip on;
gzip_vary on;
gzip_disable "msie6";
gzip_comp_level 1;
gzip_proxied any;
gzip_buffers 16 8k;
gzip_min_length 50;
gzip_types text/plain text/css application/json application/x-javascript application/javascript text/javascript application/atom+xml application/xml application/xml+rss text/xml image/x-icon text/x-js application/xhtml+xml image/svg+xml;

You may also precompress these files more aggressively during the build/deploy process and use the gzip_static module to serve them without the additional overhead of on-the-fly compression, e.g.:

gzip_static on;

Then use a script like this to compress the files:

find /var/www -iname '*.js' -print0 | xargs -0 -I'{}' sh -c 'gzip -c9 "{}" > "{}.gz" && touch -r "{}" "{}.gz"'
find /var/www -iname '*.css' -print0 | xargs -0 -I'{}' sh -c 'gzip -c9 "{}" > "{}.gz" && touch -r "{}" "{}.gz"'

The compressed files have to have the same timestamp as the original (uncompressed) files to be used by Nginx - that is what touch -r takes care of above.

Optimize SSL/TLS

Newer, optimized versions of the HTTP protocol, like HTTP/2 and SPDY, require HTTPS (at least as implemented in browsers). This makes the high cost of establishing each new SSL/TLS connection a crucial target for further optimization.

There are a few steps required to improve SSL/TLS performance.

Enable SSL session caching

Use the ssl_session_cache directive to cache the parameters used when securing each new connection, e.g.:

ssl_session_cache builtin:1000 shared:SSL:10m;

Enable SSL session tickets

Session tickets store information about a specific SSL/TLS session, so a connection can be resumed without a new full handshake, e.g.:

ssl_session_tickets on;

Configure OCSP stapling for SSL

This lowers handshake time by caching SSL/TLS certificate revocation information. It is a per-site/certificate configuration, e.g.:

ssl_stapling on;
ssl_stapling_verify on;
ssl_certificate /etc/ssl/certs/my_site_cert.crt;
ssl_certificate_key /etc/ssl/private/my_site_key.key;
ssl_trusted_certificate /etc/ssl/certs/authority_cert.pem;

The ssl_trusted_certificate file has to point to the trusted certificate chain - the root plus intermediate certificates. This can be downloaded from your certificate provider’s site (sometimes you have to merge those files yourself).
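
For example, assuming the provider ships them as intermediate.pem and root.pem (illustrative names), merging is a simple concatenation:

cat intermediate.pem root.pem > /etc/ssl/certs/authority_cert.pem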

An extensive article on this topic can be found here: https://raymii.org/s/tutorials/OCSP_Stapling_on_nginx.html

Implement HTTP/2 or SPDY

If you have HTTPS configured, the only thing you have to do is add two options to the listen directive, e.g.:

listen 443 ssl http2; # currently http2 is preferred over spdy

# on SSL enabled vhost
ssl on;

You may also advertise to plain-HTTP clients that a newer protocol is available; to do so, send this header on HTTP connections:

add_header Alternate-Protocol 443:npn-spdy/3;

SPDY and HTTP/2 protocols use:

  • headers compression,
  • a single, multiplexed connection (carrying pieces of multiple requests and responses at the same time) rather than a separate connection for every piece of the web page.

After implementing SPDY or HTTP/2 you no longer need typical HTTP/1.1 optimizations like:

  • domain sharding,
  • resource (JS/CSS) merging,
  • image sprites.

Tune other Nginx performance options

Access logs

Disable access logs where you don’t need them, e.g. for static files. You may also use the buffer and flush options with the access_log directive, e.g.:

access_log /var/log/nginx/access.log buffer=1m flush=10s;

With buffer, Nginx will hold that much log data in memory before writing it to disk; flush tells Nginx how often it should write the gathered logs to disk regardless.

Proxy buffering

Turning proxy buffering on or off may impact the performance of your reverse proxy.

When buffering is disabled, Nginx passes the response synchronously to the client as soon as it receives it.

When buffering is enabled, Nginx stores the response in memory buffers (sized by the proxy_buffer_size and proxy_buffers options), and if the response doesn’t fit there it is written to a temporary file.

proxy_buffering on;
proxy_buffer_size 16k;
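
If you need finer control, the related proxy_buffers and proxy_max_temp_file_size directives tune the number and size of the memory buffers and the temporary-file fallback; the values below are just a sketch of a starting point:

proxy_buffers 8 16k;            # 8 buffers of 16k each for the response body
proxy_max_temp_file_size 1024m; # cap for the temporary file (0 disables disk buffering)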

Keepalive for client and upstream connections

Every new connection costs some time for the handshake and adds latency to requests. With keepalive, connections are reused without this overhead.

For client connections:

keepalive_timeout 120s;

For upstream connections:

upstream web_backend {
    server 127.0.0.1:80;
    server 10.0.0.2:80;

    keepalive 32;
}
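
Note that upstream keepalive only works when the proxied connection uses HTTP/1.1 and the Connection header is cleared, e.g.:

server {
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://web_backend;
    }
}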

Limit connections to some resources

Sometimes users/bots overload your service by querying it too fast. You may limit the number of allowed connections to protect your service in such cases, e.g.:

limit_conn_zone $binary_remote_addr zone=owncloud:1m;

server {
    # ...
    limit_conn owncloud 10;
    # ...
}
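
If the problem is the request rate rather than the number of concurrent connections, the related limit_req_zone/limit_req directives work analogously (the zone name and limits below are assumptions):

limit_req_zone $binary_remote_addr zone=req_per_ip:1m rate=10r/s;

server {
    # ...
    limit_req zone=req_per_ip burst=20;
    # ...
}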

Adjust worker count

Normally Nginx starts with only 1 worker process. You should adjust this variable to at least the number of CPUs; in the case of a quad-core CPU, use this in the main section:

worker_processes 4;
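
On Nginx 1.3.8+ you can also let Nginx detect the number of cores itself:

worker_processes auto; # sets the worker count to the number of available CPU cores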

Use socket sharding

Recent kernel and Nginx versions (at least 1.9.1) offer a new socket sharding feature (the SO_REUSEPORT socket option). It offloads the management of new connections to the kernel: each worker creates its own socket listener and the kernel assigns new connections to them as they become available.

listen 80 reuseport;

Thread pools

Thread pools are a solution for long, blocking IO operations that may otherwise block the whole Nginx event queue (e.g. when serving big files or using slow storage).

location / {
    root /storage;
    aio threads;
}

This will help a lot if you see many Nginx processes in the D (uninterruptible sleep) state with high IO wait times.
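
Since Nginx 1.7.11 you can also define a dedicated, named thread pool in the main context and reference it from aio; the pool name and sizes below are illustrative:

# in the main context
thread_pool storage_pool threads=32 max_queue=65536;

# then in the location
aio threads=storage_pool;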

Tune Linux for performance

Backlog queue

If you see connections on your system that appear to be stalling, you have to increase net.core.somaxconn. This kernel parameter describes the maximum number of backlogged sockets. The default is 128, so setting it to 1024 should be no big deal on any decent machine.

echo "net.core.somaxconn=1024" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
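
Remember that Nginx limits its own listen queue too, so raise the backlog parameter of the listen directive to match the new kernel limit:

listen 80 backlog=1024;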

File descriptors

If your system is serving a lot of connections, you may reach the system-wide open file descriptor limit. Nginx uses up to two descriptors for each connection. In that case you have to increase fs.file-max.

echo "sys.fs.fs_max=3191256" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
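
The per-process limit matters as well; you can raise it for the workers directly from the main section of the Nginx config (the values below are a sketch):

worker_rlimit_nofile 65535; # max open file descriptors per worker process

events {
    worker_connections 16384; # keep this well below worker_rlimit_nofile
}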

Ephemeral ports

When Nginx is used as a proxy, it opens a temporary (ephemeral) port for each connection to an upstream server. On busy proxy servers this will result in many connections in the TIME_WAIT state.
The solution is to increase the range of available ports by setting net.ipv4.ip_local_port_range. You may also benefit from lowering the net.ipv4.tcp_fin_timeout setting (connections will be released faster, but be careful with that).
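
E.g., following the same pattern as above (exact values depend on your workload):

echo "net.ipv4.ip_local_port_range=1024 65000" >> /etc/sysctl.conf
echo "net.ipv4.tcp_fin_timeout=15" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf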

Use reverse-proxy

This, together with the microcaching technique, is worth a separate article; I will add a link here when it is ready.
