Tuesday, March 15, 2011

Performance Testing

One of my users recently asked me about the process of taking an application from build to live. The steps we take include:

*Functional Testing
*End-to-End Testing
*Performance Testing
*User Acceptance
*Cutover Planning, including training, communications, and the technical details of transitioning from one application to another.

I was asked to explain the difference between end-to-end testing and performance testing.

End-to-end testing is done to make sure the application code does what is expected in terms of function. For example, if you look up a patient result, is it presented accurately? End-to-end testing could theoretically be done by one person entering one type of transaction after another.
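In practice, these checks can be scripted so they are repeatable. Here's a minimal Python sketch of one such check; the REST endpoint, patient ID, and expected value are hypothetical stand-ins, not our actual system:

import requests  # third-party HTTP library

# Hypothetical test server and endpoint, for illustration only.
BASE_URL = "https://test-server.example.org"

def check_patient_result(patient_id, test_code, expected_value):
    """Look up a patient result and verify it is presented accurately."""
    response = requests.get(
        f"{BASE_URL}/patients/{patient_id}/results/{test_code}",
        timeout=10,
    )
    response.raise_for_status()  # fail if the lookup itself errors
    actual = response.json()["value"]
    assert actual == expected_value, f"expected {expected_value}, got {actual}"

check_patient_result("12345", "glucose", "98 mg/dL")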

Performance testing places the application under "load" to see if there are bottlenecks in the server, database, storage, or middleware. The purpose is to avoid slow performance after go-live.

Many times you can do performance tests using simulated input. Two typical software tools for doing this are HP's LoadRunner and Micro Focus's SilkPerformer.
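Those commercial tools add sophisticated scripting, scheduling, and reporting, but the core idea is simple: replay many concurrent requests and measure response times. A bare-bones Python sketch, with a placeholder URL:

import time
import statistics
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://test-server.example.org/login"  # placeholder target

def timed_request(_):
    """Issue one request and return its elapsed time in seconds."""
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

# Simulate 50 concurrent users issuing 500 requests in total.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(timed_request, range(500)))

print(f"median: {statistics.median(latencies):.3f}s  "
      f"p95: {sorted(latencies)[int(0.95 * len(latencies))]:.3f}s")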

Some vendors recommend manual load testing, i.e. putting all the staff on the new system for a day of work to see if infrastructure performance suffers.

Although manual testing is often the easiest thing to do, it may not find bottlenecks in transactional performance. Each transaction type and software module creates a different load on the infrastructure. Some transactions have minimal impact while others cause significant strain. Doing load testing right requires a representative mixture of transactions, including automated interfaces, data entry, reports, and others.
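One way to approximate that mixture is to draw each simulated transaction from a weighted distribution that mirrors production volumes. The transaction types and weights below are invented for illustration:

import random

# Hypothetical transaction mix, weighted to mirror production volumes.
TRANSACTION_MIX = {
    "result_lookup":  0.50,  # frequent, lightweight reads
    "order_entry":    0.25,  # moderate database writes
    "interface_feed": 0.15,  # automated inbound messages
    "report":         0.10,  # rare but expensive queries
}

def next_transaction():
    """Pick the next simulated transaction according to the mix."""
    kinds = list(TRANSACTION_MIX)
    weights = list(TRANSACTION_MIX.values())
    return random.choices(kinds, weights=weights, k=1)[0]

for _ in range(10):
    print(next_transaction())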

Our approach is generally a combination of manual and automated performance testing. We pre-load the databases with years of data. We use automated load testing tools to simulate heavy web site use. We run scripts that emulate interface activity. In the context of this real-world simulation, we then let the users exercise the software fully.
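The interface-emulation scripts amount to replaying captured messages against the test system at a realistic rate. Here's a minimal sketch, assuming a newline-delimited capture file and a hypothetical TCP listener (real interfaces typically speak engine-specific protocols):

import socket
import time

HOST, PORT = "test-interface.example.org", 6661  # hypothetical listener
MESSAGES_PER_SECOND = 5

def replay_messages(path):
    """Replay captured interface messages at a steady, realistic rate."""
    with open(path) as feed, socket.create_connection((HOST, PORT)) as conn:
        for line in feed:
            conn.sendall(line.encode())
            time.sleep(1.0 / MESSAGES_PER_SECOND)  # pace the feed

replay_messages("captured_messages.txt")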

Of course, even such comprehensive testing can miss software flaws, such as queries against unindexed database tables or processes that become a rate-limiting step in application performance. Thus, it's also important to have tools that diagnose problems if slowdowns occur after go-live (such as OPNET) and to have a strong working relationship with your software vendors so they can rapidly correct any flaws that appear once an application is in full production.
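Dedicated diagnostic suites do far more, but even a simple timing wrapper can flag candidates such as unindexed queries after go-live. A sketch, with an illustrative threshold and label:

import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.WARNING)
SLOW_THRESHOLD = 1.0  # seconds; tune to your service-level target

@contextmanager
def timed(label):
    """Log a warning when a wrapped operation exceeds the threshold."""
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start
    if elapsed > SLOW_THRESHOLD:
        logging.warning("%s took %.2fs; check for missing indexes", label, elapsed)

# Usage: wrap suspect operations in production code.
with timed("patient result lookup"):
    time.sleep(1.2)  # stand-in for the actual database call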
