Saturday, July 31, 2010

LoadRunner vs. Rational Performance Tester

Purpose
The purpose of this document is to compare IBM Rational's Performance Tester and Mercury's LoadRunner.
Performance Testing-Overview
Performance testing determines the responsiveness, throughput, reliability, or scalability of a system under a workload. It is commonly conducted to accomplish the following:

• Evaluate against performance criteria
• Compare systems to find which one performs better
• Find the source of performance problems
• Find throughput levels

Performance, Load, and Stress Testing
Performance tests most typically fall into one of the following three categories:

Performance testing – is testing to determine or validate the speed, scalability, and/or stability characteristics of the system under test. Performance is concerned with achieving response times, throughput, and resource utilization levels that meet the performance objectives for the application under test.
Load testing – a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations. These tests are designed to answer questions such as “How many?”, “How big?”, and “How much?”.
Stress testing – a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models, and load volumes beyond those anticipated during production operations. Stress tests may also include tests focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes when the product is subjected to other stressful conditions, such as limited memory, insufficient disk space, or server failure. These tests are designed to determine under what conditions an application will fail, how it will fail, and what indicators can be monitored to warn of an impending failure.

Why Do Performance Testing?
Some of the most common reasons for conducting performance testing can be summarized as follows:

1. To compare the current performance characteristics of the application with the performance characteristics that equate to end-user satisfaction when using the application.
2. To verify that the application exhibits the desired performance characteristics, within the budgeted constraints of resource utilization.

3. To analyze the behavior of the Web application at various load levels.

4. To identify bottlenecks in the Web application.

5. To determine the capacity of the application’s infrastructure and to determine the future resources required to deliver acceptable application performance.


6. To compare different system configurations to determine which one works best for both the application and the business.

There are risks associated with a lack of performance testing, or with performing it improperly. Some considerations are outlined below:


• Revenue losses to the competition due to scalability and stability issues.
• Loss of credibility that may affect the branding image of the company.

Load Testing Process



Load Testing Tools Evaluation Criteria
Scripting

Scripts represent recorded user actions issued by a web browser to a web application during a web session. They are created by passing HTTP/S traffic through a proxy server and encoding the recorded data; the resulting scripts can later be edited to create different scenarios.
Key Features
• Record and playback
• Ability to recognize web page components
(tables, links, drop-down menus, radio buttons)
• Data Functions
Ability to use data from a text file to fill in forms
• Language
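The data-function feature above can be illustrated with a minimal sketch: filling a recorded form request with rows from a text file. The request template, field names, and file contents here are hypothetical, not output from either tool.

```python
import csv
import io

# Hypothetical recorded login request; {username} and {password} are
# placeholders substituted per iteration (parameterization).
TEMPLATE = "POST /login HTTP/1.1\r\n\r\nuser={username}&pass={password}"

def data_driven_requests(data_file_text):
    """Yield one concrete request per data row, the way a load tool's
    data function fills in forms from a text file."""
    reader = csv.DictReader(io.StringIO(data_file_text))
    for row in reader:
        yield TEMPLATE.format(**row)

data = "username,password\nalice,s3cret\nbob,hunter2\n"
requests = list(data_driven_requests(data))
```

Each virtual user (or iteration) then sends a different concrete request, instead of replaying the single recorded value.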
Load Scenario Creation
Ability to define custom load scenarios, including the number of virtual users, the scripts being executed, the speed of the end-user connection, the browser type, and the ramp-up profile. In some instances, scenarios can be modified "on the fly" to create "what if" scenarios.
Key Features
• Virtual User Creation and support
• Weighting virtual users
• Adjust virtual user access speed
• Ability to combine scripts to create a scenario(s)
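A ramp-up profile of the kind described above can be sketched as a small function; the batch size and interval below are illustrative numbers, not defaults from either tool.

```python
def active_users(elapsed_s, total_users, batch_size, interval_s):
    """Virtual users active `elapsed_s` seconds into a scenario that
    starts `batch_size` users every `interval_s` seconds (a simple
    linear ramp-up profile)."""
    started = (elapsed_s // interval_s + 1) * batch_size
    return min(total_users, started)

# Ramp up to 100 users, 5 at a time, every 10 seconds.
load_at_start = active_users(0, 100, 5, 10)   # only the first batch
load_mid_ramp = active_users(95, 100, 5, 10)  # ten batches started
```

Plotting this function against time gives the stair-step ramp-up curve that scenario editors typically display.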
Load Test Feedback
This is the tool's ability to monitor and display the results of load test sessions in real time.
Key features
• Test error recovery
• Alert notification
• Feedback parameter coverage
Reporting
Performance data can be accumulated at varying levels of granularity, including profiles, scripts, individual pages, frames, and objects on pages. Reports may provide various graphs and data tables, and may also be able to export data to external programs, such as Excel, for further analysis.
Key Features
• Variety of reports
• Depth of reports: ability to drill down to a problem
• Ability to easily export test results
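The export criterion can be illustrated with a short sketch that turns raw per-page timings into a CSV summary an external program such as Excel could open. The page names and timings here are invented.

```python
import csv
import io
import statistics

# Hypothetical response times (in seconds) collected per page.
results = {"login": [0.8, 1.1, 0.9], "search": [2.0, 2.4, 1.9]}

def export_summary(results):
    """Write an avg/min/max row per page as CSV text, ready to be
    saved and opened in Excel for further analysis."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["page", "avg", "min", "max"])
    for page, times in sorted(results.items()):
        writer.writerow([page, round(statistics.mean(times), 3),
                         min(times), max(times)])
    return buf.getvalue()

summary_csv = export_summary(results)
```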
Other Criteria
Environment Support
How many environments does the tool support out of the box? Does it support the latest Java release, Oracle, PowerBuilder, wireless protocols, etc.? Most tools can interface with unsupported environments if the developers in that environment provide classes, DLLs, and so on that expose some of the application's details, but whether a developer will (or has time to) do so is another question. Ultimately this is one of the most important criteria for evaluating a tool: if the tool does not support our application, then it is worth nothing.

Ease of Use
We evaluated how easy each tool is to use. Since we were first-time users of all of these tools, we were in a good position to judge their ease of use. For this criterion, we looked at the user interface, out-of-the-box functions, debugging facilities, error messages, screen layout, help files, and user manuals.
Technical Support
Finally, we looked at the kind of support each tool's vendor provides. For this criterion, we looked at online support, how quickly service is available, whether the vendor provides software patches free of charge, whether a ticketing system is available for opening problem tickets, how soon a problem ticket is handled, and so on.
Integration
Tool integration is very important functionality. Most organizations have a variety of tools from different vendors in their software organization, so it is essential for a testing tool to integrate with other tools. We looked at how well each tool integrates with others. Does the tool allow us to run it from various test management suites? Can we raise a bug directly from the tool and feed the information gathered from our test logs into it? Does it integrate with products such as Word, Excel, or requirements management tools?
Cost
Our selection of any tool is constrained by the available budget. While cost may not be a significant functional criterion for evaluating a test tool, it can never be ignored.

LoadRunner

LoadRunner works by creating virtual users who take the place of real users operating client software, such as Internet Explorer, sending requests over HTTP to IIS or Apache web servers. Requests from many virtual user clients are generated by "Load Generators" in order to create a load on the various servers under test. These load generator agents are started and stopped by Mercury's "Controller" program. The Controller controls load test runs based on "Scenarios" invoking compiled "Scripts" and associated "Run-time Settings".

Scripts are crafted using Mercury's Virtual User Generator ("VuGen"). It generates C-language script code, to be executed by virtual users, by capturing network traffic between Internet application clients and servers. With Java clients, VuGen captures calls by hooking within the client JVM. During runs, the status of each machine is monitored by the Controller.

At the end of each run, the Controller combines its monitoring logs with logs obtained from load generators, and makes them available to the "Analysis" program, which can then create run result reports and graphs for Microsoft Word, Crystal Reports, or an HTML webpage browser.
Each HTML report page generated by Analysis includes a link to results in a text file which Microsoft Excel can open to perform additional analysis.

Errors during each run are stored in a database which can be read using Microsoft Access.

Rational Performance Tester

Rational Performance Tester (RPT) is a full load-testing and monitoring suite that uses a distributed model to load test systems. RPT is not limited to Web server load testing; specially written Java classes can be used to load test other types of systems. RPT is based on the IBM Rational Software Development Platform (Rational SDP), which is itself built on Eclipse. The name RPT refers to the graphical user interface (GUI). RPT is used in conjunction with the IBM Rational Agent Controller (RAC), a non-graphical application used in distributed environments to generate load. The RPT GUI runs on Windows or Linux. A RAC runs on the RPT machine and communicates with other RACs running on Linux, Windows, or z/OS platforms.

The scripts are compiled into Java classes and are sent to the other platforms that are running RAC. Any platform running RAC can then execute a test against the target system.


No programming is necessary to create, modify, or execute a load test. A load test is a graphical illustration of the Web pages that will be visited during execution. To record a test script, a SOCKS proxy captures the tester navigating, through a browser, the Web pages the test should cover, mimicking the actual use of the system by a real user. The captured Web pages can be viewed and modified through a browser-like window in the RPT test editor.


Test data can be varied during a test using a feature in RPT called a Data Pool.

RPT contains a Test Scheduler that allows different characteristics of a load test to be changed or enhanced, providing the ability to mirror the load that occurs on a running system. The scheduler includes settings such as run time, think time, statistics gathering, and overall user load.
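Think time, one of the scheduler settings mentioned above, is commonly randomized around the recorded value so that virtual users do not all pause identically. A minimal sketch follows; the 50% variation is an illustrative choice, not an RPT default.

```python
import random

def think_time(recorded_s, variation=0.5):
    """Return a randomized think time around the recorded value;
    e.g. recorded_s=4.0 with variation=0.5 yields 2.0-6.0 seconds."""
    return random.uniform(recorded_s * (1 - variation),
                          recorded_s * (1 + variation))

pause = think_time(4.0)  # a virtual user would sleep this long
```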

RPT generates performance and throughput reports in real time, enabling detection of performance problems at any time during a load test run. These reports provide multiple filtering and configuration options that can be set before, during, and after a test run. Additional reports are available at the conclusion of the test run to perform deeper analysis on items such as response time percentile distributions. Custom reports can be created and tailored to the needs of the tester, combining data on one report to display the information important to the tester. Reports can include bar, line, or pie charts.



Rational Performance Tester detailed architecture

Scripting
LoadRunner
The Virtual User Generator (VuGen) allows a user to record and/or script the test to be performed against the application under test, and enables the performance tester to play back and modify the script as needed. Such modifications may include parameterization (replacing recorded values with data drawn from a parameter list, i.e., data-driven testing), correlation, and error handling. During recording, VuGen records a tester's actions by routing data through a proxy. The type of proxy depends upon the protocol being used, and affects the form of the resulting script.
RPT
Rational Performance Tester uses its own custom scripting language. These scripts can then be customized and organized in a variety of ways to accurately reflect the habits of the various user profiles expected to use the application once it goes live. Using a variety of automated data correlation techniques, these tests can then be executed to reflect multiple, unique, concurrent actors, scaling to thousands or even tens of thousands of users. Manual correlation is done using regular expressions.
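Manual correlation with a regular expression, as mentioned above, amounts to extracting a dynamic value from one response and reusing it in the next request. A sketch follows; the response body, field name, and URLs are hypothetical.

```python
import re

# Hypothetical server response containing a dynamic session token that
# must be correlated into subsequent requests.
response_body = '<input type="hidden" name="sessionId" value="AB12CD34">'

def correlate(body, pattern=r'name="sessionId" value="([^"]+)"'):
    """Extract the dynamic value matched by the pattern's first group,
    or None if the response does not contain it."""
    match = re.search(pattern, body)
    return match.group(1) if match else None

token = correlate(response_body)
next_request = "GET /account?sessionId=%s HTTP/1.1" % token
```

Without this step, a replayed script would resend the stale recorded token and the server would reject the session.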




Load Scenario Creation

LoadRunner
The script generated by VuGen is run by the Controller. Each run is called a scenario and has preset settings. LoadRunner allows various machines to act as load generators. For example, to run a test of 100 users, we can use three or more machines with the Load Generator installed on them. The tester then provides the script and the name of each machine that will act as a load generator, along with the number of users to be run from that machine.
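Splitting the 100 users in the example above across load-generator machines is simple arithmetic; the sketch below spreads any remainder over the first machines (the machine names are made up).

```python
def distribute_users(total_users, generators):
    """Assign a user count to each load-generator machine, spreading
    any remainder over the first machines (100 over 3 -> 34/33/33)."""
    base, extra = divmod(total_users, len(generators))
    return {name: base + (1 if i < extra else 0)
            for i, name in enumerate(generators)}

plan = distribute_users(100, ["lg01", "lg02", "lg03"])
```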
RPT
No programming is necessary to create, modify, or execute a load test. A load test is a graphical illustration of the Web pages that will be visited during execution. To record a test script, a SOCKS proxy captures the tester navigating, through a browser, the Web pages the test should cover, mimicking the actual use of the system by a real user. The captured Web pages can be viewed and modified through a browser-like window in the RPT test editor. This window shows what the user would see when visiting a selected page, helping the tester make more informed decisions about how to modify load tests prior to execution.


Load Test Feedback

LoadRunner
LoadRunner uses Monitors to monitor the performance of individual components, but each monitor must be purchased separately from Mercury. Examples include the Oracle and WebSphere monitors. Once a scenario is set and the run is completed, the results of the scenario can be viewed via the Analysis tool.
RPT
During test execution, Rational Performance Tester graphically displays consolidated views of the average response times, over time, for each simulated user profile. Testers also have the option of drilling into the real-time conversation between any single user instance and the system under test to view the actual data passed back and forth, enabling rapid and early analysis of suspected response degradation.


Reporting

LoadRunner
The Analysis tool takes the completed scenario result and prepares the necessary graphs for the tester to view. Also, graphs can be merged to get a good picture of the performance. The tester can then make needed adjustments to the graph and prepare a LoadRunner report. The report, including all the necessary graphs, can be saved in several formats, including HTML and Microsoft Word format.

RPT
RPT generates performance and throughput reports in real time, enabling detection of performance problems at any time during a load test run. These reports provide multiple filtering and configuration options that can be set before, during, and after a test run. Additional reports are available at the conclusion of the test run to perform deeper analysis on items, such as response time percentile distributions. Custom reports can be created. A custom report can be tailored to the need of the tester. Data from diverse sources can be combined on one report to display information important to the tester. Reports can include bar, line, or pie charts.
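A response time percentile distribution of the sort these reports present can be computed with the nearest-rank method; a short sketch, with invented sample timings:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value at or below
    which at least p percent of the observations fall."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

# Hypothetical page response times in seconds.
times = [0.5, 0.7, 0.9, 1.0, 1.2, 1.4, 1.8, 2.0, 2.5, 4.0]
p50 = percentile(times, 50)  # median response time
p90 = percentile(times, 90)  # 90th-percentile response time
```

Percentiles are usually more informative than averages for load tests, since a single slow outlier can hide behind a healthy-looking mean.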



Environment Support
LoadRunner

It supports a wide range of enterprise environments, including Web Services, Ajax, J2EE, and .NET.

An early release, LoadRunner 4.0, was available for the following client/server platforms:
• Microsoft Windows: Windows 3.x, Windows NT, Windows 95
• UNIX: SunOS, Solaris, HP-UX, IBM AIX, NCR
More recent releases support current Windows and UNIX/Linux versions.
RPT
Creates, executes, and analyzes tests to validate the reliability of complex e-business applications, including Siebel, SAP®, SOA, and Citrix. Supports Windows, Linux, and z/OS as distributed controller agents.





Ease of Use
LoadRunner
LoadRunner now includes technology that reduces the script creation process to a few simple mouse clicks. LoadRunner Click and Script enables you to record scripts at the user-interface level. The scripts are succinct, visually intuitive, self-explanatory, and easy to maintain. Click and Script lowers the technical skill needed to perform load tests.
RPT
The automatic correlation in the scripts works most of the time, which is a big time saver.

The tool gives you the option of using a persistent cursor for your data tables. If a test case deleted a user from a database, for instance, you would not have to restore the database or re-add users between test runs; you could simply continue from the same point in your list of users.

Rational® Performance Tester for z/OS includes multiple easy-to-use features with powerful testing capabilities, and delivers both high-level and detailed test views.




Technical Support
LoadRunner
Mercury support is rated among the best of the testing tool vendors, and a developer can expect a reply from Mercury within 4 hours.
RPT
Though manuals, support handbooks, and the Rational help desk are available on the IBM website, the support is not up to the mark compared to Mercury's.


Integration
LoadRunner

LoadRunner can be integrated with Mercury's TestDirector, which is a test management tool.
RPT
Integrates with Tivoli® composite application management solutions to identify the source of production performance problems. It can also be integrated with other Rational products, such as ClearCase.

Summary

While the above comparison shows the features each tool offers in each functional area, you should use the demo version of each product to see which one meets your requirements. Based on your requirements, either of these tools could turn out to be the best for your needs. Here are some of the strengths and weaknesses of the tools to help you make your selection:
