August 1, 2010

Load Testing Web Services with Python and Multi-Mechanize

(originally posted at

I had to test and benchmark a SOAP web service recently, and figured I'd write up some instructions for load testing with Multi-Mechanize.

Multi-Mechanize is a performance and load testing framework that enables you to write plugins (in Python) to create virtual user scripts that run in parallel against your service.

Since Multi-Mechanize scripts are pure Python, you can use any Python module inside them. In the case of a web service that uses SOAP, you have a few options. You can use any 3rd party module, so something like Suds (a lightweight SOAP client library) would be a reasonable choice. However, I decided to stick with the Python standard library and build my scripts with urllib2. After all, SOAP is just slinging a bunch of XML back and forth over HTTP.

So, before getting into Multi-Mechanize and performance/load testing, let's start with a basic standalone script.

Here is a Python (2.x) script using urllib2 to interact with a SOAP web service over HTTP:

import urllib2

with open('soap.xml') as f:
    soap_body = f.read()
req = urllib2.Request(url='', data=soap_body)
req.add_header('Content-Type', 'text/xml')
resp = urllib2.urlopen(req)
content = resp.read()

(The script above assumes you have a file named 'soap.xml' in the local directory that contains your payload (SOAP XML message). It will send an HTTP POST request containing your payload in the body.)
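If you don't already have a payload handy, a bare-bones SOAP 1.1 envelope looks something like this. Note that the "ExampleRequest" element, its namespace, and the parameter are placeholders; you would substitute your service's actual operation and message contents:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- placeholder operation; replace with your service's request element -->
    <ExampleRequest xmlns="http://example.com/your-service">
      <param>value</param>
    </ExampleRequest>
  </soap:Body>
</soap:Envelope>
```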

After the initial script is created in Python, the next step is to convert it into a Multi-Mechanize script. To do this, you create a Transaction class with a run() method and add your code there.

As a Multi-Mechanize script, the same thing would be done like this:

import urllib2

class Transaction(object):
    def __init__(self):
        with open('soap.xml') as f:
            self.soap_body = f.read()

    def run(self):
        req = urllib2.Request(url='', data=self.soap_body)
        req.add_header('Content-Type', 'text/xml')
        resp = urllib2.urlopen(req)
        content = resp.read()
        assert ('Example SOAP Response' in content), 'Failed Content Verification'

(Notice I also added an assert statement to verify the content returned from the service.)
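One more Multi-Mechanize convention worth knowing: if your Transaction class defines a "self.custom_timers" dict, any timings you store in it get reported as their own named series in the results. Here is a sketch of how you might time just the request portion of a transaction; the "SOAP_Request" timer name is arbitrary, and the actual urllib2 call is elided so the sketch stands alone:

```python
import time

class Transaction(object):
    def __init__(self):
        # multi-mechanize reads this dict after each run() and records
        # each named timing as its own series in the results report
        self.custom_timers = {}

    def run(self):
        start = time.time()
        # ... issue the SOAP request here, as in the script above ...
        self.custom_timers['SOAP_Request'] = time.time() - start

# quick standalone check (multi-mechanize normally drives run() itself):
trans = Transaction()
trans.run()
print(sorted(trans.custom_timers.keys()))
```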

Now that you have a script created, it's time to configure Multi-Mechanize and run some tests.

You can download Multi-Mechanize from:
It requires Python 2.x (2.6 or 2.7), and if you want it to generate graphs, you must also install Matplotlib and its dependencies. See the FAQ for help.

Once you have Multi-Mechanize downloaded, unzip it and go to the "/projects" directory. Create a new project directory here; you can call it "soap_project". Inside this directory, you will need 2 things: a config file ("config.cfg") and a "test_scripts" directory (containing the script you previously created). Since the script looks for a data file named "soap.xml", make sure you have one created in the main multi-mechanize directory.

The directory and file layout will look like this:

multi-mechanize/
    soap.xml
    projects/
        soap_project/
            config.cfg
            test_scripts/
                (the script you created above)

(you can look at the included "default_project" for an example of how it should be set up.)

To begin with, you can use a simple config.cfg file like this:

run_time: 30
rampup: 0
console_logging: on
results_ts_interval: 10

threads: 1

This will just run a single thread of your virtual user script for 30 seconds; good enough for testing and getting things going.

To run a test, go to the topmost multi-mechanize directory and run:

$ python multi-mechanize.py soap_project

(You should see timer output in the console.)

Once you have things running well, you can turn off "console_logging" and increase the workload. It will take some adjustment and plenty of trials to get the load dialed in correctly for your service. You can also get a lot more sophisticated with your workload model and create multiple virtual user scripts doing different transactions. In this case, I'll keep it simple and just test one web service request type in isolation.

Once I had run plenty of iterations and gotten a feel for how the service responded and what its limits were, I settled on a config file like this:

run_time: 900
rampup: 900
console_logging: off
results_ts_interval: 90

threads: 30

(30 threads, increasing load over 15 mins)
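The ramp-up is linear (thread starts are spaced evenly across the rampup window), so you can sanity-check a workload on paper before running it. With the settings above:

```python
run_time = 900   # total test duration in seconds
rampup = 900     # window over which threads are started
threads = 30

# one new virtual user starts every rampup/threads seconds
spacing = rampup / float(threads)
print(spacing)

# number of sampling intervals that will appear in the results
results_ts_interval = 90
print(run_time // results_ts_interval)
```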

After each test run, a new results directory is created containing your test results. Look for "results.html" and open it in your browser to see the output report.

Have questions about Multi-Mechanize?
Post to the discussion group: