Notes on HTTP load testing w/ httperf

Install

$ sudo apt-get install httperf

Notes

I originally ran the tests on the same machine as the server, but the quick start guide at http://httperf.comlore.com/ says to run httperf on a separate machine, so I did that. I edited /etc/hosts on the client machine to point www.domain2.com at each of the 3 servers in turn. Here is info about our servers: Amazon EC2 Instance Types.
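For reference, the /etc/hosts entry on the client looked something like this (the IP address is a placeholder; it was swapped for the address of whichever server was being tested):

# /etc/hosts on the httperf client machine
# point www.domain2.com at the server under test; change the IP between runs
10.0.0.1    www.domain2.com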

Also, it is important to use the --hog option, which lets httperf use as many TCP ports as it needs; without it I eventually hit the error 99 described below.

Apache Wordpress site (EC2 High-CPU Medium)

I got 0.1 requests/sec for our Apache Wordpress homepage.

$ httperf --server=www.domain2.com --port=80 --uri=/ --rate=0.2 --num-conns=10
httperf --client=0/1 --server=www.domain2.com --port=80 --uri=/ --rate=0.2 --send-buffer=4096 --recv-buffer=16384 --num-conns=10 --num-calls=1
Maximum connect burst length: 2

Total: connections 10 requests 10 replies 10 test-duration 114.361 s

Connection rate: 0.1 conn/s (11436.1 ms/conn, <=9 concurrent connections)
Connection time [ms]: min 29207.4 avg 61777.7 max 76998.7 median 65479.5 stddev 14845.0
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 1.000

Request rate: 0.1 req/s (11436.1 ms/req)
Request size [B]: 63.0

Reply rate [replies/s]: min 0.0 avg 0.1 max 0.4 stddev 0.1 (14 samples)
Reply time [ms]: response 59290.6 transfer 2487.0
Reply size [B]: header 296.0 content 100947.0 footer 2.0 (total 101245.0)
Reply status: 1xx=0 2xx=10 3xx=0 4xx=0 5xx=0

CPU time [s]: user 0.65 system 1.18 (user 0.6% system 1.0% total 1.6%)
Net I/O: 8.7 KB/s (0.1*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

Varnish (EC2 Standard Large) serving cached Wordpress page from above

Our Varnish configuration seemed to max out at ~2200 requests/sec. We are using Varnish 2.0.6.

$ httperf --hog --server=www.domain2.com --port=80 --uri=/ --rate=2400 --num-conns=240000
httperf --hog --client=0/1 --server=www.domain2.com --port=80 --uri=/ --rate=2400 --send-buffer=4096 --recv-buffer=16384 --num-conns=240000 --num-calls=1
Maximum connect burst length: 151

Total: connections 232293 requests 232293 replies 232293 test-duration 100.200 s

Connection rate: 2318.3 conn/s (0.4 ms/conn, <=1022 concurrent connections)
Connection time [ms]: min 0.2 avg 227.7 max 10222.5 median 279.5 stddev 198.0
Connection time [ms]: connect 15.1
Connection length [replies/conn]: 1.000

Request rate: 2318.3 req/s (0.4 ms/req)
Request size [B]: 63.0

Reply rate [replies/s]: min 2103.8 avg 2312.4 max 2446.4 stddev 101.3 (20 samples)
Reply time [ms]: response 31.2 transfer 181.5
Reply size [B]: header 358.0 content 100944.0 footer 0.0 (total 101302.0)
Reply status: 1xx=0 2xx=232293 3xx=0 4xx=0 5xx=0

CPU time [s]: user 9.20 system 49.66 (user 9.2% system 49.6% total 58.7%)
Net I/O: 229487.0 KB/s (1880.0*10^6 bps)

Errors: total 7707 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 7707 addrunavail 0 ftab-full 0 other 0

Nginx (EC2 Standard Large) serving cached Wordpress page from above

Nginx got up to 3100 requests/sec; when I tried 3200 requests/sec, httperf gave the following error. We are using Nginx version 0.8.38.

httperf: connection failed with unexpected error 98
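Error 98 can be decoded the same way as error 99 in the section below, by grepping the errno headers:

$ grep 98 /usr/include/*/*errno*

On Linux the relevant line is EADDRINUSE ("Address already in use"):

/usr/include/asm-generic/errno.h:#define        EADDRINUSE      98      /* Address already in use */

In this case it most likely means the client was trying to bind local ports that were still stuck in TIME_WAIT from earlier connections.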

# httperf --hog --server=www.domain2.com --port=80 --uri=/ --num-conns=8000 --rate=800
httperf --hog --client=0/1 --server=www.domain2.com --port=80 --uri=/ --rate=800 --send-buffer=4096 --recv-buffer=16384 --num-conns=8000 --num-calls=1
Maximum connect burst length: 19

Total: connections 8000 requests 8000 replies 8000 test-duration 12.902 s

Connection rate: 620.1 conn/s (1.6 ms/conn, <=205 concurrent connections)
Connection time [ms]: min 9.2 avg 167.0 max 5271.4 median 47.5 stddev 568.7
Connection time [ms]: connect 111.6
Connection length [replies/conn]: 1.000

Request rate: 620.1 req/s (1.6 ms/req)
Request size [B]: 63.0

Reply rate [replies/s]: min 766.1 avg 786.6 max 807.0 stddev 28.9 (2 samples)
Reply time [ms]: response 6.2 transfer 49.1
Reply size [B]: header 251.0 content 100949.0 footer 2.0 (total 101202.0)
Reply status: 1xx=0 2xx=8000 3xx=0 4xx=0 5xx=0

CPU time [s]: user 0.60 system 9.19 (user 4.7% system 71.2% total 75.9%)
Net I/O: 61316.7 KB/s (502.3*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

httperf error 99 code

When testing Nginx with higher rates and num-conns, I started to get the following error:

httperf: connection failed with unexpected error 99

Following this Stack Overflow answer, I ran:

$ grep 99 /usr/include/*/*errno*

which resulted in:

/usr/include/asm-generic/errno.h:#define        EADDRNOTAVAIL   99      /* Cannot assign requested address */

Googling "EADDRNOTAVAIL" made me think I should check my Nginx error logs. Adding the --hog option got rid of error 99, but then I got error 98.
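Error 99 (EADDRNOTAVAIL) on the client side generally means httperf ran out of local ephemeral ports, which is why --hog (use as many TCP ports as necessary) makes it go away. A quick sketch for checking and widening the client's ephemeral port range on Linux (the values are just an example):

$ # show the range of local ports the kernel hands out to clients
$ sysctl net.ipv4.ip_local_port_range
$ # temporarily widen it (does not survive a reboot)
$ sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"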

Nginx "Too many open files" error

I found the following in my /var/log/nginx/error.log:

2010/05/27 15:56:46 [alert] 21247#0: accept() failed (24: Too many open files)

This requires changing a setting on the OS. Googling "nginx failed (24: Too many open files)" led me to a scalr-discuss thread that says to raise ulimit -n to 2-3 times the Nginx worker_connections setting. Here is a man page for ulimit.
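A minimal sketch of the fix, assuming worker_connections is set to 1024 in nginx.conf (3072 is 3x that, and the init script path is an assumption about how Nginx was installed):

$ # raise the open-file limit in the shell that (re)starts Nginx
$ ulimit -n 3072
$ sudo /etc/init.d/nginx restart

Alternatively, Nginx's worker_rlimit_nofile directive in nginx.conf raises the same limit per worker process without touching the shell's ulimit.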

httperf.parse_status_line: invalid status line error

If you get the following error while using the --wlog option, you may not be separating the URLs with ASCII NUL characters. See the "stupid format" section of http://schlinkify.org/post/19743846/how-to-replay-live-traffic-with-httperf and the httperf man page entry for the --wlog option.

httperf.parse_status_line: invalid status line `
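A sketch of producing the NUL-separated file from a plain newline-separated list of URIs and replaying it (the file names, rate, and connection count are made up for illustration):

$ # urls.txt contains one URI per line, e.g. / and /about/
$ tr '\n' '\0' < urls.txt > urls.wlog
$ httperf --hog --server=www.domain2.com --port=80 --wlog=n,urls.wlog --rate=100 --num-conns=1000

The n in --wlog=n,urls.wlog tells httperf not to wrap around to the beginning of the file when it runs out of URIs.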
