
The Core HTTP Module in Nginx

  • 8 min read
  • 08 Jul 2011


Nginx 1 Web Server Implementation Cookbook



Over 100 recipes to master using the Nginx HTTP server and reverse proxy  

Setting up the number of worker processes correctly


Nginx, like other UNIX-based server software, works by spawning multiple processes and lets you configure various parameters around them. One of the most basic settings is the number of worker processes to spawn, and it is one of the first things you have to configure in Nginx.

How to do it...


This particular configuration can be found at the top of the sample configuration file nginx.conf:

user www www;
worker_processes 5;
error_log logs/error.log;
pid logs/nginx.pid;
worker_rlimit_nofile 8192;
events {
    worker_connections 4096;
}




In the preceding configuration, we can see how the various process settings fit together: first the UNIX user under which the worker processes run, then the number of worker processes Nginx should spawn, followed by the file locations where errors are logged and the PID (process ID) is saved.

How it works...


By default, worker_processes is set to 1. It is a crucial setting in a high-performance environment, as Nginx uses it for the following reasons:

  • It uses SMP, which lets you take advantage of multi-core and multi-processor systems for a definite performance gain.
  • Increasing the number of processes decreases latency when workers are blocked on disk I/O.
  • It limits the number of connections per process when any of the various supported event types are used; a worker process cannot have more connections than specified by the worker_connections directive.

There's more...


It is recommended that you set worker_processes to the number of cores available on your server. Once you know the values of worker_processes and worker_connections, you can easily calculate the maximum number of connections that Nginx can handle in the current setup.

Maximum clients = worker_processes * worker_connections
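Plugging in the values from the earlier sample nginx.conf gives a rough upper bound (real capacity also depends on operating-system limits such as the worker_rlimit_nofile file-descriptor ceiling):

```nginx
worker_processes 5;
events {
    worker_connections 4096;
}
# Maximum clients = 5 * 4096 = 20480
```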


 

Increasing the size of uploaded files


Usually, when you are running a site where users upload a lot of files, you will see that when someone uploads a file larger than 1MB, they get an Nginx error stating "413 Request Entity Too Large", as shown in the following screenshot. We will look at how Nginx can be configured to handle larger uploads.

[Image: browser showing the "413 Request Entity Too Large" error page]


How to do it...


This is controlled by one simple part of the Nginx configuration. You can simply paste this in the server part of the Nginx configuration:

client_max_body_size 100M; # M stands for megabytes


The preceding configuration will allow uploads of up to 100 megabytes; anything larger, and the client receives a 413. You can set this to any value that is less than the disk space available to Nginx, primarily because Nginx saves the uploaded file to a temporary location before forwarding it to the backend application.
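A minimal sketch of where the directive sits (the host name is illustrative; the directive is also valid at the http and location levels):

```nginx
server {
    listen 80;
    server_name www.example1.com;   # illustrative host name
    # allow request bodies (uploads) of up to 100 megabytes on this vhost
    client_max_body_size 100M;
}
```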

There's more...


Nginx also lets us control other factors related to file uploads on the web application, such as timeouts when the client has a slow connection. A slow client can keep one of your application threads busy and thus potentially slow down your application. This problem is seen on all heavy multimedia user-driven sites, where consumers upload all kinds of rich data such as images, documents, and videos, so it is sensible to set low timeouts.

client_body_timeout 60; # parameter in seconds
client_body_buffer_size 8k;
client_header_timeout 60; # parameter in seconds
client_header_buffer_size 1k;


Here, the first pair of settings controls the timeout and buffer size for reading the request body (the timeout applies when the body is not received in one read-step, that is, when the server is waiting and nothing arrives). Similarly, the second pair sets the timeout and buffer size for the HTTP header. The following table lists the various directives and limits you can set around client uploads.

[Table: client upload directives and their limits]


 

Using dynamic SSI for simple sites


With the advent of modern feature-full web servers, most have Server-Side Includes (SSI) built in. Nginx provides easy SSI support that lets you do pretty much all the basic web templating you need.

How to do it...


Let's take a simple example to understand what one can achieve with it.

  1. Add the following code to the nginx.conf file:

    server {
        .....
        location / {
            ssi on;
            root /var/www/www.example1.com;
        }
    }


    
    

  2. Add the following code to the index.html file:

    <html>
    <body>
    <!--# block name="header_default" -->
    the header testing
    <!--# endblock -->
    <!--# include file="header.html" stub="header_default" -->
    <!--# echo var="name" default="no" -->
    <!--# include file="footer.html"-->
    </body>
    </html>


    
    

  3. Add the following code to the header.html file:
    <h2>Simple header</h2>

  4. Add the following code to the footer.html file:
    <h2>Simple footer</h2>

How it works...


This is a simple example where we can see that you can include partials in a larger page and, in addition, define blocks within the page as well. The block command lets you create silent blocks that can be included later as stubs, while the include command can be used to pull in HTML partials from other files, or even URL endpoints. The echo command is used to output variables from within the Nginx context.
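Note that var="name" in the earlier snippet is never defined, so the echo falls back to its default of "no". A small sketch (the variable name and value are arbitrary) using the SSI set command to define it first:

```html
<!--# set var="name" value="Nginx SSI" -->
<!--# echo var="name" default="no" -->
<!-- the echo now renders "Nginx SSI" instead of the default -->
```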

There's more...


You can utilize this feature for all kinds of interesting setups where:

  • You are serving different blocks of HTML for different browser types
  • You want to optimize and speed up certain common blocks of the sites
  • You want to build a simple site with template inheritance without installing any other scripting language


 

Adding content before and after a particular page


Today, on most of the sites we visit, the webpage structure is formally divided into a set of boxes. Usually, all sites have a static header and a footer block. In the following page, for example, you can see the YUI builder generating the basic framework of such a page.

In such a scenario, Nginx has a really useful way of adding content before and after it serves a certain page. This will potentially allow you to separate the various blocks and optimize their performance individually, as well.

Let's have a look at an example page:

[Image: sample page content]


So here we want to insert the header block before the content, and then append the footer block:

[Image: the same page with the header block inserted before and the footer block appended after the content]


How to do it…


The sample configuration for this particular page would look like this:

server {
    listen 80;
    server_name www.example1.com;
    location / {
        add_before_body /red_block;
        add_after_body /blue_block;
        ...
    }
    location /red_block/ {
        ...
    }
    location /blue_block/ {
        ....
    }
}


This can act as a performance enhancer by allowing you to load, for example, browser-specific CSS in the header block only. There can also be cases where you want to introduce something into the header or the footer on short notice, without modifying your backend application. This provides an easy fix for those situations.
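The /red_block and /blue_block URIs must themselves resolve to content. One hedged way to do that (the directory and file names here are illustrative, not from the original recipe) is to serve small static HTML fragments:

```nginx
# hypothetical layout: header/footer fragments live in /var/www/blocks
location /red_block {
    alias /var/www/blocks/header.html;   # prepended before the body
}
location /blue_block {
    alias /var/www/blocks/footer.html;   # appended after the body
}
```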

This module is not installed by default; it is necessary to enable it when building Nginx:

./configure --with-http_addition_module


 

Enabling auto indexing of a directory


Nginx has an inbuilt auto-indexing module. Any request for a directory in which no index file is found will be routed to this module. It is similar to the directory listing that Apache displays.

How to do it...


The following is an example of one such Nginx directory listing. Auto indexing is pretty useful when you want to share some files over your local network. To enable it for a directory, place the following in the server section of the Nginx configuration file:

server {
    listen 80;
    server_name www.example1.com;
    location / {
        root /var/www/test;
        autoindex on;
    }
}


How it works...


This will simply enable auto indexing when the user types in http://www.example1.com. You can also control some other things in the listings in this way:

autoindex_exact_size off;


This will turn off the exact file size listing and show only rounded, estimated sizes. This can be useful when you are worried about file privacy issues.

autoindex_localtime on;


This will represent the timestamps on the files as your local server time (it is GMT by default):

[Image: sample auto-generated Nginx directory index]


This image displays a sample index auto-generated by Nginx using the preceding configuration. You can see the filenames, timestamp, and the file sizes as the three data columns.
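Putting the preceding directives together, a complete file-sharing virtual host might look like this sketch (host name and document root are illustrative):

```nginx
server {
    listen 80;
    server_name www.example1.com;
    location / {
        root /var/www/test;
        autoindex on;              # enable the directory listing
        autoindex_exact_size off;  # show rounded sizes instead of exact bytes
        autoindex_localtime on;    # timestamps in server-local time, not GMT
    }
}
```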