January 14, 2010

Python - Web Load Tester - Multiple Processes and Threads

Here is my latest HTTP load generator in Python (web performance test). You just give it a URL and some runtime parameters and it will hammer a resource with HTTP requests. You could easily adapt this to run against a web service or set of links. It is useful for quickly loading a web resource with synthetic transactions for performance testing or tuning purposes.

I have built lots of different load testing tools in Python in the past (see: Pylot), but they all suffered from a similar problem: their concurrency model was based on threads. Because of this threaded design, combined with Python's GIL implementation, my tools were unable to fully utilize multiple cores or processors.

Load generators shouldn't really suffer from processor contention because they are inherently IO-bound, not CPU-bound. However, if you add some client-side processing (response parsing, content verification, etc.) and SSL, you could quickly run into a situation where you need more CPU horsepower.

The addition of multiprocessing in Python 2.6 gives me a whole new set of ideas for distributing load over multiple OS processes. It allows me to sidestep the GIL limitation of a purely threaded concurrency model. So now I can spawn multiple processes (to scale across processors/cores), with each one spawning multiple threads (for non-blocking I/O). This combination of processes and threads forms the basis for a very scalable and powerful load generating tool.
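
The full script is linked below. As a minimal sketch of the processes-plus-threads pattern (my own illustration, not the linked script), assuming Python 2.6+ with urllib2, and leaving out the INTERVAL pacing and RAMPUP staggering for brevity, the idea looks something like this:

#!/usr/bin/env python
#  minimal sketch of the processes-plus-threads pattern
#  (illustrative only, not the linked script)

import multiprocessing
import threading
import time
import urllib2
from Queue import Empty

URL = 'http://www.example.com/foo?q=bar'
PROCESSES = 4
PROCESS_THREADS = 10
RUN_TIME = 60  # secs

def thread_worker(url, run_time, results):
    # each thread loops for the duration of the test,
    # timing one HTTP GET per iteration
    start = time.time()
    while time.time() - start < run_time:
        req_start = time.time()
        try:
            resp = urllib2.urlopen(url)
            resp.read()
            status = resp.getcode()
        except urllib2.HTTPError, e:
            status = e.code
        except urllib2.URLError:
            status = 0
        results.put((time.time() - start, time.time() - req_start, status))

def process_worker(url, run_time, results):
    # each process runs its own pool of threads for non-blocking I/O
    threads = [threading.Thread(target=thread_worker,
                                args=(url, run_time, results))
               for _ in range(PROCESS_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == '__main__':
    # a multiprocessing.Queue funnels timings from all processes back
    # to the parent, which writes results.csv
    results = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=process_worker,
                                     args=(URL, RUN_TIME, results))
             for _ in range(PROCESSES)]
    for p in procs:
        p.start()
    f = open('results.csv', 'w')
    # drain the queue while the children run so their puts never block
    while any(p.is_alive() for p in procs) or not results.empty():
        try:
            elapsed, latency, status = results.get(timeout=1)
            f.write('%.3f,%.3f,%d\n' % (elapsed, latency, status))
        except Empty:
            pass
    f.close()
    for p in procs:
        p.join()

Draining the results queue in the parent (rather than joining the children first) matters: a child process that still has queued items buffered can block on exit if nobody is reading them.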


The Script:
http_load_multiprocess_multithread.py

In the code, you can define the following constants:

URL = 'http://www.example.com/foo?q=bar'
PROCESSES = 4
PROCESS_THREADS = 10
INTERVAL = 2  # secs
RUN_TIME = 60  # secs
RAMPUP = 60  # secs

It is a single Python script with no dependencies. With the constants above, that works out to 4 processes x 10 threads = 40 concurrent request generators. I tested it on a dual quad-core system and it scaled nicely across all 8 cores. I hope to use something like this as the synthetic transaction engine at the core of a new load testing tool.

The output of the script is a single file named 'results.csv' with raw timing data in the following CSV format:

elapsed time, response time, HTTP status

It looks like this:

0.562,0.396,200
0.562,0.319,200
0.578,0.405,200
0.578,0.329,200
...

A scatter plot of the raw data (response time against elapsed time) gives a quick visual overview of the run.

To get more useful information, you need to do some post-processing on the results data. Here is a small script that will crunch some of the data from the results file: it breaks the data up into a time series and calculates throughput and average response time per interval.

http_load_multiprocess_multithread_results_parse.py
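
As a rough sketch of that idea (my own illustration, not the linked parser), assuming a 10-second slice width and the results.csv format shown above:

#!/usr/bin/env python
#  rough sketch of the post-processing idea (illustrative only;
#  the linked script is the real parser)

from collections import defaultdict

SLICE = 10.0  # secs per time slice (an assumed width)

buckets = defaultdict(list)
for line in open('results.csv'):
    elapsed, latency, status = line.strip().split(',')
    # group each sample by the time slice it landed in
    buckets[int(float(elapsed) // SLICE)].append(float(latency))

print 'slice,throughput (req/sec),avg response time (secs)'
for i in sorted(buckets):
    latencies = buckets[i]
    print '%d,%.2f,%.3f' % (i, len(latencies) / SLICE,
                            sum(latencies) / len(latencies))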

Once you have this derived data, you can graph it to get a more useful view of how the test and system performed.

January 13, 2010

Python - Command Line Progress Bar With Percentage and Elapsed Time Display

Here is a Python module that produces an ASCII command-line progress bar with percentage and elapsed time display.

code:
progress_bar.py

to use:

from progress_bar import ProgressBar

p = ProgressBar(60)

p.update_time(15)
print p

p.fill_char = '='
p.update_time(40)
print p

results:

[##########       25%                  ]  15s/60s
[=================67%=====             ]  40s/60s
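
The linked progress_bar.py is the real implementation. As a rough reconstruction of how such a bar can be rendered (my own sketch, with an assumed 38-character bar width), the trick is to overlay the percentage label at the center of the fill:

class ProgressBar(object):
    #  minimal reconstruction of the idea; the real module is linked above
    def __init__(self, duration, width=38):
        self.duration = duration
        self.width = width
        self.fill_char = '#'
        self.elapsed = 0

    def update_time(self, elapsed):
        self.elapsed = min(elapsed, self.duration)

    def __str__(self):
        frac = float(self.elapsed) / self.duration
        filled = int(round(frac * self.width))
        bar = list(self.fill_char * filled + ' ' * (self.width - filled))
        # overlay the percentage label at the center of the bar
        label = '%d%%' % int(round(frac * 100))
        start = (self.width - len(label)) // 2
        bar[start:start + len(label)] = list(label)
        return '[%s]  %ds/%ds' % (''.join(bar), self.elapsed, self.duration)

Printing the object re-renders the bar, so the caller just updates the elapsed time and prints again.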

January 10, 2010

Python - Lookup Google PageRank Score

Here is a Python module for getting the PageRank of a site from Google. It uses the Google Toolbar 3.0.x/4.0.x PageRank checksum algorithm.

The code is here:

pagerank.py

You can import it as a module and call its get_pagerank(url) function to look up the PageRank of a URL from your own Python code.

For example:

#!/usr/bin/env python

import pagerank

rank = pagerank.get_pagerank('http://www.google.com')
print rank

* note: use this only within Google's terms of service