Here is my latest HTTP load generator in Python, for web performance testing. You give it a URL and some runtime parameters, and it hammers the resource with HTTP requests. You could easily adapt it to run against a web service or a set of links. It is useful for quickly loading a web resource with synthetic transactions for performance testing or tuning.
I have built lots of different load testing tools in Python in the past (see: Pylot), but they all suffered from the same problem: their concurrency model was based on threads. Because of this threaded design, combined with Python's GIL implementation, my tools were unable to fully utilize multiple cores or processors.
Load generators shouldn't really suffer from processor contention because they are inherently I/O-bound, not CPU-bound. However, once you add client-side processing (response parsing, content verification, etc.) and SSL, you can quickly run into a situation where you need more CPU horsepower.
The addition of multiprocessing in Python 2.6 gives me a whole new set of ideas for distributing load over multiple OS processes. It lets me sidestep the GIL limitation inherent in a purely threaded concurrency model. Now I can spawn multiple processes (to scale across processors/cores), with each one spawning multiple threads (for non-blocking I/O). This combination of processes and threads forms the basis for a very scalable and powerful load generating tool.
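To make that shape concrete, here is a minimal sketch of the process-of-threads fan-out with the HTTP work stubbed out. It illustrates the model, not the linked script itself, and the counts (4 processes, 10 threads each) are just example values:

import multiprocessing
import threading

def thread_main():
    # each thread would loop here, issuing HTTP requests until the
    # run time expires (request logic omitted for brevity)
    pass

def process_main(num_threads):
    # each process has its own interpreter and GIL, so its threads
    # only contend with siblings in the same process
    threads = [threading.Thread(target=thread_main)
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == '__main__':
    # one OS process per group of threads, to scale across cores
    procs = [multiprocessing.Process(target=process_main, args=(10,))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()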
The Script:
http_load_multiprocess_multithread.py
In the code, you can define the following constants:
URL = 'http://www.example.com/foo?q=bar'
PROCESSES = 4
PROCESS_THREADS = 10
INTERVAL = 2  # secs
RUN_TIME = 60  # secs
RAMPUP = 60  # secs
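The RAMPUP constant suggests that workers are started gradually rather than all at once. As a guess at the mechanism (this helper is hypothetical, not from the script), each worker can be assigned a staggered start delay so the full concurrency level is reached over RAMPUP seconds:

def start_delay(worker_index, total_workers, rampup):
    # hypothetical helper: worker 0 starts immediately; the last
    # worker starts just under 'rampup' seconds into the test
    return rampup * worker_index / float(total_workers)

# with PROCESSES * PROCESS_THREADS = 40 workers and RAMPUP = 60,
# worker indexes 0..39 map to delays from 0.0 to 58.5 secs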
It is a single Python script with no dependencies. I tested it on a dual quad-core system and it scaled nicely across all 8 cores. I hope to use something like this as a synthetic transaction engine at the core of a new load testing tool.
The output of the script is a single file named 'results.csv' with raw timing data in the following CSV format:
elapsed time, response time, http status
It looks like this:
0.562,0.396,200
0.562,0.319,200
0.578,0.405,200
0.578,0.329,200
...
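Each of those rows could come from a helper roughly like this one. This is an illustrative guess at the bookkeeping (the names and error handling are my own, and the real script must also funnel rows from many processes into the one file, which is omitted here):

import time
import urllib2  # Python 2.6-era stdlib, matching the post

def timed_request(url, test_start, out):
    # one synthetic transaction: elapsed time into the test, the
    # response latency, and the HTTP status, written as a CSV row
    req_start = time.time()
    try:
        resp = urllib2.urlopen(url)
        resp.read()
        status = resp.getcode()
    except urllib2.HTTPError as e:
        status = e.code  # HTTP errors still carry a status code
    latency = time.time() - req_start
    out.write('%.3f,%.3f,%d\n' % (req_start - test_start, latency, status))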
Here is what the raw data looks like as a scatter plot:
To get more useful information, you need to do some post-processing on the results data. Here is a small script that crunches the data from the results file: it breaks the data up into a time series and calculates throughput and average response time per interval.
http_load_multiprocess_multithread_results_parse.py
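For reference, here is a hedged sketch of the kind of crunching the parse script performs: bucket the rows from results.csv into fixed time intervals, then compute throughput and mean response time per bucket. The 10-second bucket width is an assumption for illustration, not a value from the post:

BUCKET = 10  # secs per interval (assumed value)

buckets = {}  # interval index -> list of response times
for line in open('results.csv'):
    elapsed, resp_time, status = line.strip().split(',')
    idx = int(float(elapsed) / BUCKET)
    buckets.setdefault(idx, []).append(float(resp_time))

print('interval,throughput (req/sec),avg response time (secs)')
for idx in sorted(buckets):
    times = buckets[idx]
    print('%d,%.2f,%.3f' % (idx + 1, len(times) / float(BUCKET),
                            sum(times) / len(times)))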
Once you have this derived data, you can graph it to get a more useful view of how the test and system performed.
1 comment:
"So now I can spawn multiple processes (to scale across processors/cores), with each one spawning multiple threads (for non-blocking i/o)"
Is there any reason why you would not use multiprocessing combined with the asyncore or select modules? I would think this would be a better way to avoid blocking. Have you tried this with any async / non-blocking web servers?