Nginx Load Balancing


You may come to a point where you need load balancing to keep up with demand and to add a bit more redundancy to your website.
This will be a brief guide to the available methods and how to implement them with Nginx.

Essentially the setup looks like this:

                    INTERNET http/https
                   +---------|--------+                
                   |                  |                
                   |   LOADBALANCER   |                
                   |                  |                           
       +-----------+---------+--------+---------+      
       |                     |                  |      
      http                 http                http      
       |                     |                  |      
+------v------+      +-------v------+    +------v-----+
|             |      |              |    |            |
|    WEB1     |      |     WEB2     |    |    WEB3    |
|             |      |              |    |            |
+-------------+      +--------------+    +------------+

The DNS for the website points to the load balancer, and the load balancer then handles incoming requests, sending them on to the backend web servers.

The Config


Load Balancer Config

On the server acting as the load balancer, add the following to /etc/nginx/nginx.conf inside the http {} block:

  real_ip_header X-Forwarded-For;
  set_real_ip_from 0.0.0.0/0;

  upstream backend {
    server web1.domain.com;
    server web2.domain.com;
    server web3.domain.com;
  }

The first two lines, real_ip_header X-Forwarded-For; and set_real_ip_from 0.0.0.0/0;, tell Nginx to trust the X-Forwarded-For header so that the visitor's real IP is preserved. We cover how that header is sent to the backend servers later on.
The upstream block defines the load balancer pool. Inside it you can add further directives to set the load-balancing method and per-server parameters for the backend servers.
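
As an illustration of those per-server parameters, the sketch below marks one server as a backup and limits how many failed attempts are tolerated before a backend is temporarily taken out of rotation. The values here are arbitrary examples, not recommendations:

  upstream backend {
    # take a server out of rotation for 30s after 3 failed attempts
    server web1.domain.com max_fails=3 fail_timeout=30s;
    server web2.domain.com max_fails=3 fail_timeout=30s;
    # only receives requests if the servers above are unavailable
    server web3.domain.com backup;
  }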

Load Balancing Methods

  • Round Robin
    This is the default load balancing method. Requests are distributed in turn across all backend servers, regardless of their current load or how many connections each one is handling. It requires no additional lines in the upstream block.
  • Least Connections (least_conn)
    Requests are sent to the server with the fewest active connections. You can combine this with server weights to make better use of backend resources. For example, if you have a lower-spec server in the pool, you would prefer to send most requests to the larger servers to reduce its load. This can be done like so:
  upstream backend  {
    least_conn;
    server web1.domain.com weight=1; #512mb ram server
    server web2.domain.com weight=2; #1gb ram server
    server web3.domain.com weight=5; #2gb ram server
  }

With these weights, roughly five out of every eight requests will be handed to web3, two to web2 and one to web1. The larger the weight, the more requests that server will handle.

  • IP Hashing (IP Based)
    Some web applications require a continuous session with the application. Using the ip_hash option, requests from the same visitor IP are always sent to the same server, keeping the session open with that backend (see the sketch after this list).

  • Hashing (User Defined)
    Using the generic hash method you can define session persistence based on a cookie, a particular URI, or another variable of your choosing. This is a more advanced form of session persistence that lets you define your own key (see the sketch after this list).

  • Session Persistence Using Cookies
    You can enable cookie-based session persistence with the sticky directive (part of the commercial NGINX Plus) inside the upstream block:

    sticky cookie srv_id expires=1h domain=.domain.com path=/;
    

    This will enable users to keep their session open with the same backend server.
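
For the IP hashing method described above, a minimal sketch of the upstream block (reusing the same backend servers) looks like this:

  upstream backend {
    ip_hash;
    server web1.domain.com;
    server web2.domain.com;
    server web3.domain.com;
  }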
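
For the user-defined hash method, one possible sketch keys the hash on the request URI; the optional consistent parameter is assumed here purely to reduce remapping when servers are added or removed:

  upstream backend {
    # requests for the same URI always land on the same backend
    hash $request_uri consistent;
    server web1.domain.com;
    server web2.domain.com;
    server web3.domain.com;
  }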

Further config

So, in order to pass requests to the backends, we make use of the proxy_pass directive in the default Nginx vhost on the load balancer:

server {
  listen 80;

  location / {
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_pass http://backend;
  }
}

This sets all the headers needed to ensure that the visitor's IP is passed to the backend hosts in the form of the X-Forwarded-For header.
The proxy_pass directive forwards requests to the servers defined in the upstream block named backend.
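
Since the diagram shows both HTTP and HTTPS arriving at the load balancer, and the vhost above already forwards X-Forwarded-Proto, you can also terminate SSL on the load balancer itself. A minimal sketch, assuming hypothetical certificate paths, could look like this:

server {
  listen 443 ssl;

  # hypothetical certificate paths - replace with your own
  ssl_certificate     /etc/nginx/ssl/domain.com.crt;
  ssl_certificate_key /etc/nginx/ssl/domain.com.key;

  location / {
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_pass http://backend;
  }
}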

Backend setup

On the backend there is nothing special that needs to be set up. You can keep whatever vhost config you already have; as long as the DNS points to the load balancer, anything hosted on the backends will be served to visitors through the load balancer.
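
For completeness, a backend vhost can be entirely ordinary. The sketch below assumes a hypothetical document root, plus an optional custom log format so the X-Forwarded-For header sent by the load balancer shows up in the backend access logs:

# in the http {} context: log the forwarded client IP alongside the proxy address
log_format proxied '$remote_addr - $http_x_forwarded_for [$time_local] "$request" $status';

server {
  listen 80;
  server_name domain.com;

  # hypothetical document root - the same content you would serve without a load balancer
  root /var/www/domain.com;
  index index.html;

  access_log /var/log/nginx/domain.com.access.log proxied;
}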

This was a very brief introduction to load balancing with Nginx. For a more in-depth guide, check out http://nginx.com/resources/admin-guide/load-balancer/