I wondered how many parallel thin instances you need to get optimal throughput. I often saw values from 3 to 10 servers in different configuration examples around the net, so it was time to get some real numbers.
To get a meaningful benchmark I took an old dual 1.3 GHz P3 system running the latest Ubuntu. I installed Rails 2.0.2 and a test project, then created a very simple controller that just passes a “hello world” string to the view. No database connections are made.
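The controller itself is trivial; a minimal sketch of what it might look like (the action name and instance variable are assumptions, the post only names the controller “foo”):

  # app/controllers/foo_controller.rb
  class FooController < ApplicationController
    def index
      # no database access, just hand a string to the view
      @greeting = "hello world"
    end
  end

  # app/views/foo/index.html.erb simply renders <%= @greeting %>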
The server is started with
rake thin:cluster:start RAILS_ENV=production SIZE=5 PORT=8000
You’ll need the latest thin 0.5.2 for this, as 0.5.1 has a serious bug that prevents it from running daemonized. The command above starts 5 thin instances listening on ports 8000-8004. To benchmark different numbers of thin instances, I modified the upstream entries in nginx:
# only one server used in this example
upstream thin {
  server 127.0.0.1:8000;
  #server 127.0.0.1:8001;
  #server 127.0.0.1:8002;
  #server 127.0.0.1:8003;
  #server 127.0.0.1:8004;
}
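For completeness, a minimal sketch of the server block that hands requests to that upstream (the post only shows the upstream definition, so the proxy headers here are an assumption):

  server {
    listen 80;
    server_name localhost;

    location / {
      # forward everything to the thin cluster defined above
      proxy_pass http://thin;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }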
The results are not really surprising and show that the nginx + thin combo scales well even up to 100 concurrent users. All tests were done with
ab -n 10000 -c 20 http://localhost/foo/
where the “foo” controller was the minimal one returning the “hello world” page.
The results can be seen in this graph:
As you can see, the best results for this dual-CPU machine came from running two instances of thin, matching the number of CPUs. What is interesting is that thin + nginx scales quite well over many concurrent requests.
Of course, this is an idealized test with minimal load time for each page call. In the next version I’ll put in a random delay before the template is rendered to simulate real-world loading times and slow operations (users uploading/downloading large files, etc.).
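A minimal sketch of how that delay could be added to the test controller (the sleep range is made up for illustration):

  class FooController < ApplicationController
    def index
      # simulate a slow operation: sleep between 0 and 500 ms
      sleep(rand * 0.5)
      @greeting = "hello world"
    end
  end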