Thin + Nginx with Rails

Recently I’ve been playing around with Xen and different hosting solutions, and I was wondering about lightweight yet performant replacements for the usual Apache + mod_fcgi + dispatcher stack. I had toyed around with nginx before, together with Mongrel, with quite good success.

But it seems there is some serious competition for Mongrel coming along: the Thin webserver. It combines the best part of Mongrel, its HTTP parser, with an event-driven IO framework called EventMachine. The benchmarks look promising:

Rails Webservice Benchmark

It is clear that the event-driven IO approach is superior to the others when handling many concurrent requests.
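The core of that approach can be illustrated with plain Ruby (this is not EventMachine itself, just a sketch of the underlying idea): a single thread waits on many descriptors at once with IO.select and handles whichever one is ready, instead of dedicating a thread or process to each connection.

```ruby
require 'socket'

# Set up a few pipes as stand-ins for client connections.
readers = []
writers = []
3.times do
  r, w = IO.pipe
  readers << r
  writers << w
end

# "Clients" send their requests and close.
writers.each_with_index { |w, i| w.write("request #{i}"); w.close }

# One thread serves all of them: block until any descriptor is
# readable, handle it, repeat. No thread or process per connection.
served = []
until readers.empty?
  ready, = IO.select(readers)
  ready.each do |io|
    served << io.read   # in a real server: parse HTTP here
    readers.delete(io)
  end
end

puts served.sort.inspect
```

EventMachine wraps this same loop (via epoll/kqueue where available) behind a callback API, which is what lets Thin keep many slow connections open cheaply in a single Ruby process.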

Now why nginx when Thin already performs so well? For one thing, Thin is still at a quite early stage of production readiness. Also, when it comes to serving static files, the Ruby version does not even come close. Another factor is that nowadays most CPUs are multi-core, while Ruby is single-threaded: all concurrent requests end up being served by one CPU, so to use every core you have to run several server processes.

Here nginx comes into play: it is an excellent HTTP server, proxy, and load balancer. One could start several Rails servers with Mongrel (mongrel_cluster is an excellent tool for that), or many Thin servers. Right now I’m using the following rake task (from Stephen Celis) for starting Thin:

namespace :thin do
  namespace :cluster do
    desc 'Start thin cluster'
    task :start => :environment do
      Dir.chdir(RAILS_ROOT)
      port_range = RAILS_ENV == 'development' ? 3 : 8
      threads = []
      (ENV['SIZE'] ? ENV['SIZE'].to_i : 4).times do |i|
        threads << Thread.new do
          port = ENV['PORT'] ? ENV['PORT'].to_i + i : ("#{port_range}%03d" % i)
          str  = "thin start -d -p#{port} -Ptmp/pids/thin-#{port}.pid"
          str += " -e#{RAILS_ENV}"
          puts str
          puts "Starting server on port #{port}..."
          `#{str}`
        end
      end
      threads.each(&:join)
    end

    desc 'Stop all thin clusters'
    task :stop => :environment do
      Dir.chdir(RAILS_ROOT)
      threads = []
      Dir.new("#{RAILS_ROOT}/tmp/pids").each do |file|
        next unless file.starts_with?("thin-")
        threads << Thread.new do
          str = "thin stop -Ptmp/pids/#{file}"
          puts "Stopping server on port #{file[/\d+/]}..."
          `#{str}`
        end
      end
      threads.each(&:join)
    end
  end
end

With that you can start/stop many Thin instances easily:

# rake thin:cluster:start RAILS_ENV=production SIZE=3 PORT=8000
# rake thin:cluster:stop

Getting those instances into nginx is also easy; the following example layout works with the Ubuntu nginx package:

upstream thin {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

server {
        listen   80;
        server_name  localhost;
        access_log  /var/log/nginx/localhost.access.log;
        root /var/www/test/public;

        location / {
                proxy_set_header  X-Real-IP  $remote_addr;
                proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (-f $request_filename/index.html) {
                        rewrite (.*) $1/index.html break;
                }
                if (-f $request_filename.html) {
                        rewrite (.*) $1.html break;
                }
                 if (!-f $request_filename) {
                        proxy_pass http://thin;
                        break;
                }
        }
}
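After dropping this into the package’s config layout (on Ubuntu, a file under /etc/nginx/sites-available linked into sites-enabled; adjust paths for other setups), you can check the syntax and reload without dropping connections:

```shell
# Check the configuration for syntax errors first,
# then tell the running nginx to pick up the new config.
nginx -t
/etc/init.d/nginx reload
```

The reload sends the master process a HUP, so in-flight requests on the old workers finish while new workers start with the updated upstream list.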

To see if it is really working, I used ApacheBench against the same setup with a simple dynamic page, on a dual-CPU Pentium III-S 1.3 GHz machine:

# ab -n 1000 -c 10 http://10.1.4.99/foo/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.1.4.99 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Finished 1000 requests

Server Software:        nginx/0.5.26
Server Hostname:        10.1.4.99
Server Port:            80

Document Path:          /foo/
Document Length:        59 bytes

Concurrency Level:      10
Time taken for tests:   6.247127 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      505000 bytes
HTML transferred:       59000 bytes
Requests per second:    160.07 [#/sec] (mean)
Time per request:       62.471 [ms] (mean)
Time per request:       6.247 [ms] (mean, across all concurrent requests)
Transfer rate:          78.92 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.7      0       9
Processing:     7   61  72.9     35     453
Waiting:        0   61  72.9     34     453
Total:          7   61  72.9     35     453

Percentage of the requests served within a certain time (ms)
  50%     35
  66%     59
  75%     82
  80%     97
  90%    134
  95%    203
  98%    333
  99%    380
 100%    453 (longest request)

Running the same benchmark against one of the internal Thin servers directly gives the following result:

# ab -n 1000 -c 10 http://10.1.4.99:8000/foo/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.1.4.99 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Finished 1000 requests

Server Software:
Server Hostname:        10.1.4.99
Server Port:            8000

Document Path:          /foo/
Document Length:        59 bytes

Concurrency Level:      10
Time taken for tests:   7.880520 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      446000 bytes
HTML transferred:       59000 bytes
Requests per second:    126.90 [#/sec] (mean)
Time per request:       78.805 [ms] (mean)
Time per request:       7.881 [ms] (mean, across all concurrent requests)
Transfer rate:          55.20 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:    63   78  50.9     63     252
Waiting:       61   77  50.8     62     251
Total:         63   78  50.9     63     252

Percentage of the requests served within a certain time (ms)
  50%     63
  66%     63
  75%     64
  80%     64
  90%     65
  95%    251
  98%    251
  99%    252
 100%    252 (longest request)

As you can see, the nginx version scales quite a bit better with 10 concurrent users.

What is really missing now is some nice integration into startup scripts, so that many nginx/Thin installations can be started and stopped automatically at boot (instead of running rake tasks manually :))
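A minimal sketch of what such an init script could look like, building on the rake tasks above (APP_DIR and APP_USER are placeholders, and this follows Ubuntu’s /etc/init.d conventions; nginx itself is already handled by the package’s own init script):

```shell
#!/bin/sh
# /etc/init.d/thin-cluster -- sketch only, not a hardened init script.
# Assumes the thin:cluster rake tasks from above exist in APP_DIR.
APP_DIR=/var/www/test
APP_USER=www-data

case "$1" in
  start)
    echo "Starting thin cluster..."
    su -c "cd $APP_DIR && rake thin:cluster:start RAILS_ENV=production SIZE=3 PORT=8000" $APP_USER
    ;;
  stop)
    echo "Stopping thin cluster..."
    su -c "cd $APP_DIR && rake thin:cluster:stop" $APP_USER
    ;;
  restart)
    $0 stop
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
```

Hooked into the runlevels with `update-rc.d thin-cluster defaults`, this would bring the cluster up at boot and take it down cleanly at shutdown.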
