Performance Benchmarking your CMS and Webserver


There is nothing worse than seeing a good site marred by poor performance.

Content management systems (CMS) such as Drupal, WordPress, Magento, and Joomla are database- and PHP-driven applications with a number of moving parts. If any one of those parts is poorly tuned, the entire site feels the consequences. Understanding the performance impact of server- and site-level changes is crucial to keeping your site running at its peak: enter benchmarking.

Sure, there are services and utilities (such as Yahoo! YSlow and Google Page Speed) that offer generic best-practice advice on site-level performance, and Webmaster Tools Site Speed shows how your site stacks up against the rest of the internet. What these tools do not provide is real-time, measurable feedback of the form "change x leads to average performance y."

A number of good open source load/performance benchmarking tools exist. I'm a bit biased towards ApacheBench (ab) because it's cross-platform, easy to use, and meets my needs. I won't cover installation in depth here, but on a Debian-based Linux distribution, apt-get install apache2-utils will get ab installed and ready to use. If you already have Apache installed, ab is typically included by default.
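For completeness, here is a minimal install-and-verify sketch (Debian/Ubuntu shown; on Red Hat-style systems ab typically ships in the httpd-tools package):

# Debian/Ubuntu
sudo apt-get install apache2-utils

# Red Hat/CentOS (package name may vary by release)
# sudo yum install httpd-tools

# Confirm ab is available and print its version
ab -V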

Rules for benchmarking:

  1. Don't run ab from the site's web server. That amounts to a localhost request, which bypasses the network and tells you little. Ideally, use a second server with a comparable internet connection.
  2. If running a before-and-after comparison, issue the exact same ab command each time.
  3. Test more than just your home page. (The script sketch near the end of this article automates rules 2 and 3.)
  4. Start small.

The basic command line syntax is:

ab -n [#] -c [#] [URL]

The -n switch sets the number of requests to perform; a figure in the low thousands is generally appropriate for a web application. The -c switch sets the concurrency, i.e. how many requests to make at the same time.
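Per rule 4 above, it pays to ramp up gradually rather than opening with a heavy run. An illustrative progression (the figures are arbitrary):

ab -n 100 -c 1 http://example.com/
ab -n 500 -c 5 http://example.com/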

An example (and a good starting point):

ab -n 1000 -c 10 http://example.com/

In English: make 1,000 requests to example.com, with 10 of them in flight at any given time (note: the trailing slash is required, since ab insists on a URL with a path).

Output:

Finished 1000 requests

Server Software:        nginx
Server Hostname:        techatitsbest.com
Server Port:            80

Document Path:          /
Document Length:        17011 bytes

Concurrency Level:      10
Time taken for tests:   3.084 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      17636170 bytes
HTML transferred:       17073360 bytes
Requests per second:    324.30 [#/sec] (mean)
Time per request:       30.836 [ms] (mean)
Time per request:       3.084 [ms] (mean, across all concurrent requests)
Transfer rate:          5585.05 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        5    6   5.2      5      40
Processing:    17   23  18.9     19     219
Waiting:        5    9  18.8      6     206
Total:         22   30  20.1     24     224

Percentage of the requests served within a certain time (ms)
  50%     24
  66%     25
  75%     27
  80%     34
  90%     41
  95%     44
  98%     79
  99%    173
 100%    224 (longest request)


The key to this output is that we get results simulating the kind of load our site typically encounters. The most telling figures are the response-time percentiles at the bottom. To translate: 50% of all requests were served within 24 ms, and 100% were served within 224 ms.
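To make rules 2 and 3 repeatable, wrap identical ab parameters in a small script and export each run's percentile data for later comparison; ab's -e flag writes a CSV listing, for each percentage of requests, the time within which they were served. A rough sketch, with placeholder host and paths rather than my actual setup:

#!/bin/sh
# Run identical ab parameters against several representative pages and
# save each run's percentile breakdown as CSV for before/after diffs.
# The host and paths are placeholders; substitute your own.
HOST="http://example.com"
REQUESTS=1000
CONCURRENCY=10

for path in / /about /node/123; do
    label=$(echo "$path" | tr '/' '_')   # filename-safe label
    ab -n "$REQUESTS" -c "$CONCURRENCY" -e "percentiles$label.csv" "$HOST$path"
done

Diffing the CSVs from two runs, say before and after enabling a cache, gives a much clearer picture than comparing means alone.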

The output I've posted is from a Drupal site (this site) running on a finely tuned server. Keeping ab in my toolbox has enabled me to evaluate performance changes and troubleshoot problem areas more effectively.[1]


  1. See Mobile Tools quadruples mean response time for a situation in which I was able to determine the exact cause of a major performance degradation.
