Saturday, July 31, 2010

LoadRunner vs. Rational Performance Tester

Purpose
The purpose of this document is to compare Rational's Performance Tester and Mercury's LoadRunner.
Performance Testing-Overview
Performance testing is testing to determine the responsiveness, throughput, reliability or scalability of a system under a workload. Performance testing is commonly conducted to accomplish the following:

• Evaluate against performance criteria
• Compare systems to find which one performs better
• Find the source of performance problems
• Find throughput levels

Performance, Load, and Stress Testing
Performance tests most typically fall into one of the following three categories:

Performance testing – is testing to determine or validate the speed, scalability, and/or stability characteristics of the system under test. Performance is concerned with achieving response times, throughput, and resource utilization levels that meet the performance objectives for the application under test.
Load testing – a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations. These tests are designed to answer questions such as “How many?”, “How big?”, and “How much?”.
Stress testing – a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes beyond those anticipated during production operations. Stress tests may also evaluate the product under other stressful conditions, such as limited memory, insufficient disk space, or server failure. These tests are designed to determine under what conditions an application will fail, how it will fail, and what indicators can be monitored to warn of an impending failure.

Why Do Performance Testing?
Some of the most common reasons for conducting performance testing can be summarized as follows:

1. To compare the current performance characteristics of the application with the performance characteristics that equate to end-user satisfaction when using the application.
2. To verify that the application exhibits the desired performance characteristics, within the budgeted constraints of resource utilization.

3. To analyze the behavior of the Web application at various load levels.

4. To identify bottlenecks in the Web application.

5. To determine the capacity of the application’s infrastructure and to determine the future resources required to deliver acceptable application performance.


6. To compare different system configurations to determine which one works best for both the application and the business.

There are risks associated with a lack of performance testing or with improper performance testing. Some considerations are outlined below:

• Revenue losses to the competition due to scalability and stability issues.
• Loss of credibility that may damage the company's brand image.

Load Testing Process



Load Testing Tools Evaluation Criteria
Scripting

Scripts represent recorded user actions issued by a web browser to a web application during a web session. They are created by passing HTTP/S traffic through a proxy server and then encoding the recorded data into a script, which can later be edited to create different scenarios.
Key Features
• Record and playback
• Ability to recognize web page components (tables, links, drop-down menus, radio buttons)
• Data functions: ability to use data from a text file to fill in forms
• Scripting language
Load Scenario Creation
Ability to define custom load scenarios, including the number of virtual users, the scripts being executed, the speed of the end-user connection, the browser type, and the ramp-up profile. In some instances, scenarios can be modified "on the fly" to create "what if" scenarios; a simple ramp-up calculation is sketched after the list below.
Key Features
• Virtual user creation and support
• Weighting virtual users
• Adjusting virtual user access speed
• Ability to combine scripts into a scenario
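To make the ramp-up profile concrete, here is a minimal sketch in C (not taken from either tool) of how start times fall out of a linear ramp-up profile; the user counts and interval are invented for illustration:

    /* Hedged sketch (not from either tool's API): start times for virtual
       users under a linear ramp-up such as "start 5 users every 10 seconds
       until 100 users are running". All numbers are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        const int total_users    = 100; /* assumed target load        */
        const int users_per_step = 5;   /* assumed ramp-up batch size */
        const int step_seconds   = 10;  /* assumed batch interval     */

        for (int vu = 0; vu < total_users; vu++) {
            int start_time = (vu / users_per_step) * step_seconds;
            printf("VU %3d starts at t = %3d s\n", vu + 1, start_time);
        }
        return 0;
    }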
Load Test Feedback
The ability of the tool to monitor and display the results of load test sessions in real time.
Key Features
• Test error recovery
• Alert notification
• Feedback parameter coverage
Reporting
Performance data can be aggregated at varying levels of granularity, including profiles, scripts, individual pages, frames, and objects on pages. Reports may provide various graphs and data tables, and may also be able to export data to external programs, such as Excel, for further analysis.
Key Features
• Variety of reports
• Depth of reports (ability to drill down to a problem)
• Ability to easily export test results
Other Criteria
Environment Support
How many environments does the tool support out of the box? Does it support the latest Java release, Oracle, PowerBuilder, wireless protocols, and so on? Most tools can interface with unsupported environments if the developers in that environment provide classes, DLLs, and the like that expose some of the application's details, but whether a developer will (or has time to) do so is another question. Ultimately, this is one of the most important criteria for evaluating a tool: if the tool does not support our application, then it is worth nothing.

Ease of Use
We evaluated each tool on how easy it is to use. Since we were first-time users of all of these tools, we were in a very good position to judge their ease of use. We looked at the user interface, out-of-the-box functions, debugging facilities, error messages, screen layout, help files, and user manuals.
Technical Support
Finally, we looked at what kind of support each tool's vendor provides. For this criterion, we looked at online support, how quickly service is available, whether the vendor provides software patches free of charge, whether there is a ticketing system for opening problem tickets, how soon a problem ticket is handled, and so on.
Integration
Tool integration is very important functionality. Most organizations have a variety of tools from different vendors. It is essential for a testing tool to integrate with other tools, so we looked at how well each tool integrates with others. Does the tool allow us to run it from various test management suites? Can we raise a bug directly from the tool and feed the information gathered from our test logs into it? Does it integrate with products like Word, Excel, or requirements management tools?
Cost
Our selection of any tool is constrained by the available budget. While cost may not be a significant functional criterion for evaluating a test tool, we can never ignore it while evaluating a tool.

LoadRunner

LoadRunner works by creating virtual users who take the place of real users operating client software, such as Internet Explorer sending requests using the HTTP protocol to IIS or Apache web servers. Requests from many virtual user clients are generated by "Load Generators" in order to create a load on various servers under test. These load generator agents are started and stopped by Mercury's "Controller" program. The Controller controls load test runs based on "Scenarios" invoking compiled "Scripts" and associated "Run-time Settings".

Scripts are crafted using Mercury's Virtual User Generator ("VuGen"), which generates C-language script code to be executed by virtual users, by capturing network traffic between Internet application clients and servers. With Java clients, VuGen captures calls by hooking inside the client JVM. During runs, the status of each machine is monitored by the Controller.
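As an illustration, a recorded VuGen script typically has the shape sketched below. The URL and transaction names are invented for the example; web_url, lr_start_transaction, lr_end_transaction, and lr_think_time are standard LoadRunner functions.

    /* Sketch of a VuGen-recorded script; the URL and transaction names
       are hypothetical, the calls are standard LoadRunner APIs. */
    Action()
    {
        lr_start_transaction("home_page");

        web_url("home",
            "URL=http://www.example.com/",   /* hypothetical URL */
            "Resource=0",
            "RecContentType=text/html",
            "Mode=HTML",
            LAST);

        lr_end_transaction("home_page", LR_AUTO);

        lr_think_time(5);   /* pause 5 seconds, as a real user would */

        return 0;
    }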

At the end of each run, the Controller combines its monitoring logs with logs obtained from the load generators and makes them available to the "Analysis" program, which can then create run-result reports and graphs for Microsoft Word, Crystal Reports, or an HTML page viewable in a web browser.
Each HTML report page generated by Analysis includes a link to results in a text file which Microsoft Excel can open to perform additional analysis.

Errors during each run are stored in a database, which can be read using Microsoft Access.

Rational Performance Tester

Rational Performance Tester (RPT) is a full load-testing and monitoring suite that uses a distributed model to load test systems. RPT is not limited to Web server load testing; specially written Java classes can be used to load test other types of systems. RPT is based on the IBM Rational Software Development Platform (Rational SDP), which is itself built on Eclipse. "RPT" refers to the graphical user interface (GUI). RPT is used in conjunction with the IBM Rational Agent Controller (RAC), a non-graphical application used in distributed environments to generate load. The RPT GUI runs on Windows or Linux. A RAC runs on the RPT machine and communicates with other RACs running on Linux, Windows, or z/OS platforms.

The scripts are compiled into Java classes and are sent to the other platforms that are running RAC. Any platform running RAC can then execute a test against the target system.


There is no programming necessary to create, modify, or execute a load test. A load test is a graphical illustration of the Web pages that will be visited during execution. A SOCKS proxy is used to record a test script: the tester navigates, through a browser, the Web pages that the test should capture, mimicking the actual use of the system by a real user. The captured Web pages can be viewed and modified through a browser-like window in the RPT test editor.


Test data can be varied during a test using a feature in RPT called a Data Pool.

RPT contains a Test Scheduler that allows for different characteristics of a load test to be changed or enhanced. This feature provides the ability to mirror the load that occurs on a running system. The scheduler features items such as run time, think time, statistic gathering, and overall user load.

RPT generates performance and throughput reports in real time, enabling detection of performance problems at any time during a load test run. These reports provide multiple filtering and configuration options that can be set before, during, and after a test run. Additional reports are available at the conclusion of the test run to perform deeper analysis on items such as response time percentile distributions. Custom reports can be created and tailored to the needs of the tester, combining data on one report to display the information important to the tester. Reports can include bar, line, or pie charts.



Rational Performance Tester detailed architecture





Scripting
LoadRunner
The Virtual User Generator (VuGen) allows a user to record and/or script the test to be performed against the application under test, and enables the performance tester to play back and modify the script as needed. Such modifications may include parameterization (selecting data for data-driven testing), correlation, and error handling. During recording, VuGen records a tester's actions by routing data through a proxy. The type of proxy depends upon the protocol being used and affects the form of the resulting script.
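A rough sketch of what such modifications look like in a VuGen script follows; the boundaries, parameter names, and form fields are invented for illustration, while web_reg_save_param, web_url, and web_submit_data are standard LoadRunner calls.

    /* Sketch of correlation and parameterization in a VuGen script.
       Boundaries, parameter names, and form fields are hypothetical. */
    Action()
    {
        /* Correlation: capture a server-generated session ID from the
           next response so it can be replayed in later requests. */
        web_reg_save_param("SessionId",
            "LB=sessionid=",   /* assumed left boundary  */
            "RB=\"",           /* assumed right boundary */
            LAST);

        web_url("login_page",
            "URL=http://www.example.com/login",   /* hypothetical URL */
            "Mode=HTML",
            LAST);

        /* Parameterization: {username} comes from a data file configured
           in VuGen; {SessionId} was captured above. */
        web_submit_data("login",
            "Action=http://www.example.com/login?session={SessionId}",
            "Method=POST",
            ITEMDATA,
            "Name=user", "Value={username}", ENDITEM,
            LAST);

        return 0;
    }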
RPT
Rational Performance Tester uses its own custom scripting language. These scripts can then be customized and organized in a variety of ways to accurately reflect the habits of the various user profiles expected to use the application once it goes live. Using a variety of automated data correlation techniques, these tests can then be executed to reflect multiple, unique, concurrent actors, scaling to thousands or even tens of thousands of users. Manual correlation is done using regular expressions.
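As a rough illustration of what manual correlation with a regular expression involves, the C sketch below extracts a session token from a response body using POSIX regex. The pattern and sample response are invented, and this is plain C rather than RPT's own API; it only shows the kind of expression a tester would write.

    /* Sketch of regex-based correlation: pulling a session token out of
       a response body. Pattern and sample text are hypothetical; this
       uses POSIX regex, not RPT's actual correlation API. */
    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        const char *body = "...href=\"/app;jsessionid=A1B2C3D4\"...";
        regex_t re;
        regmatch_t m[2];   /* m[1] holds the captured group */

        if (regcomp(&re, "jsessionid=([A-Za-z0-9]+)", REG_EXTENDED) != 0)
            return 1;   /* pattern failed to compile */

        if (regexec(&re, body, 2, m, 0) == 0) {
            int len = (int)(m[1].rm_eo - m[1].rm_so);
            printf("captured token: %.*s\n", len, body + m[1].rm_so);
        }
        regfree(&re);
        return 0;
    }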




Load Scenario Creation

LoadRunner
The script generated by VuGen is run by the Controller. Each run is called a scenario and has preset settings. LoadRunner allows multiple machines to act as load generators: for example, to run a test of 100 users, we can use three or more machines with the Load Generator agent installed. The tester then provides the script, the name of each machine that will act as a load generator, and the number of users to run from that machine.
RPT
There is no programming necessary to create, modify, or execute a load test. A load test is a graphical illustration of the Web pages that will be visited during execution. A SOCKS proxy is used to record a test script: the tester navigates, through a browser, the Web pages that the test should capture, mimicking the actual use of the system by a real user. The captured Web pages can be viewed and modified through a browser-like window in the RPT test editor. This window shows what the user would see when visiting a selected page, helping the tester make more informed decisions about how to modify load tests prior to execution.


Load Test Feedback

LoadRunner
LoadRunner uses monitors to track the performance of individual components, but each monitor must be purchased separately from Mercury. Available monitors include Oracle monitors, WebSphere monitors, and so on. Once a scenario is set up and the run is completed, the results of the scenario can be viewed via the Analysis tool.
RPT
During test execution, Rational Performance Tester graphically displays consolidated views of the average response times - over time - for each simulated user profile. Testers also have the option of diving into the real-time conversation between any single user instance and the system under test to view the actual data passed back and forth, enabling rapid and early analysis of suspected response degradation.


Reporting

LoadRunner
The Analysis tool takes the completed scenario result and prepares the necessary graphs for the tester to view. Also, graphs can be merged to get a good picture of the performance. The tester can then make needed adjustments to the graph and prepare a LoadRunner report. The report, including all the necessary graphs, can be saved in several formats, including HTML and Microsoft Word format.

RPT
RPT generates performance and throughput reports in real time, enabling detection of performance problems at any time during a load test run. These reports provide multiple filtering and configuration options that can be set before, during, and after a test run. Additional reports are available at the conclusion of the test run to perform deeper analysis on items, such as response time percentile distributions. Custom reports can be created. A custom report can be tailored to the need of the tester. Data from diverse sources can be combined on one report to display information important to the tester. Reports can include bar, line, or pie charts.



Environment Support
LoadRunner

LoadRunner supports a wide range of enterprise environments, including Web Services, Ajax, J2EE, and .NET.

LoadRunner 4.0 is available for the following client/server platforms:
• Microsoft Windows: Windows 3.x, Windows NT, Windows 95
• UNIX: Sun OS, Solaris, HP-UX, IBM AIX, NCR
RPT
Creates, executes, and analyzes tests to validate the reliability of complex e-business applications, including Siebel, SAP®, SOA, and Citrix. Supports Windows, Linux, and z/OS as distributed controller agents.





Ease of Use
LoadRunner
LoadRunner now includes Click and Script technology that reduces the script-creation process to a few simple mouse clicks, enabling you to record scripts at the user-interface level. The scripts are succinct, visually intuitive, and self-explanatory, and they are easy to maintain. Click and Script also lowers the technical skill needed to perform load tests.
RPT
The automatic correlation in the script works most of the time. This is a big time saver.

The tool gives you the option of using a persistent cursor for your data tables. So if you had a test case that, for instance, deleted a user from a database, you would not have to restore the database or re-add users between test runs; you could just continue from the same point in your list of users.

Rational® Performance Tester for z/OS includes multiple easy-to-use features with powerful testing capabilities, and delivers both high-level and detailed test views.




Technical Support
LoadRunner
Mercury's support is rated among the best of the testing-tool vendors, and a developer can expect a reply from Mercury within 4 hours.
RPT
Though manuals, support handbooks, and a Rational help desk are available on the IBM website, the support is not up to the mark compared to Mercury's.


Integration
LoadRunner

LoadRunner can be integrated with Mercury's TestDirector, a test management tool.
RPT
Integrates with Tivoli® composite application management solutions to identify the source of production performance problems. It can also be integrated with other Rational products, such as ClearCase.

Summary

While the above comparison shows the features each tool offers in each functional area, we recommend that you try the demo version of each product to see which one meets your requirements. Based on your requirements, either of these tools could turn out to be the best for your needs. Here are some of the strengths and weaknesses of the tools to help you make your selection:

Tuesday, July 27, 2010

LoadRunner vs. SilkPerformer

Can anyone speak about the relative merits of these two tools? I am new to LoadRunner, but my company is merging with another that uses Segue's tools.

I know of one difference, i.e. that LoadRunner allows recording against UNIX and Windows RTE clients.

Thanks for any advice;
Jerry

PerformerUser (posted 07-22-1999):
SilkPerformer can also record traffic from a Unix/Mac machine. Since the web recorder is proxy based, it doesn't matter what kind of machine the client is running on.

As for relative merits, after a serious comparison of the products, we found that SilkPerformer actually simulates browsers correctly. It allows simulation of multiple connections correctly, and the modem simulation actually works. The script is also much smaller in size, and the tool scales very well with a large number of simulated users. You can simulate multiple protocols in a single script (HTTP, POP3, FTP, SMTP, LDAP, IIOP, TCP/IP, etc.), so you don't have to create different scripts for different protocols. The scripting language has a whole lot of specialized functions optimized for load testing. And finally, it doesn't crash frequently, which is a good thing in a quality assurance tool! Bottom line is: SilkPerformer is a much more serious tool. Hope that helps.

AdamA (posted 08-06-1999):
We have found through thorough evaluation that LoadRunner is much easier to use in a real world scenario. The Astra QuickTest tool makes recording business processes and then parameterizing the data very easy. LoadRunner does emulate modem speeds accurately and has the ability to manage multiple user types (inside and outside of a browser). I recommend you do a side-by-side comparison of the tools and ignore the sales crap - both sales forces said they were the only "serious testing tool". For our app we found the monitors that come with LoadRunner to be very powerful.

BE (posted 10-18-1999):
Just to complete AdamA's message: LoadRunner also has a very nice feature for testing web environments, "IP Spoofing". Each simulated user gets its own IP address. LoadRunner supplies an easy-to-use wizard to define as many IP addresses as you want for each injector machine. So LoadRunner can really stress all the components of the web architecture: firewalls, routers, load balancing... All of these rely on the IP address to identify a user. Other load testing tools simulate all users with the same IP address, which is not realistic and gives false results...

Hope it helps...

SEG (posted 12-02-1999):
If you are going with one-shot capture/replay, LoadRunner is just fine. Otherwise, Segue's Realizer and Test pieces are overwhelming improvements. The creation of an abstraction layer reduces man-hours by an order of magnitude for development systems or production systems that experience significant change. The Radar piece allows non-technical personnel to track QA failures with desktop replay. Expensive, yes. We paid $75,000. Worth every dime.

QA grinder (posted 12-02-1999):
I don't know about that "accurate modem" comment for LoadRunner. I heard secondhand that a Mercury rep admitted that their modem simulation was essentially bogus - grabbing the whole file and then calculating a waiting time based on size before getting the next file. This sounds like faking it to me...

JeffNyman (posted 12-28-1999):
I agree with most of the comments. LoadRunner is easier to use, but if you want more robust results and more scalability you should definitely go with SilkPerformer.

The modem times are much more accurate in SilkPerformer. LoadRunner seems to use some sort of algorithm to compute the times after the fact; it does not take long to deduce this on your own, because the times are either inconsistent or do not match up with what you are actually seeing.

SilkPerformer also gives many more runtime settings that have a direct bearing on the results rather than just glitz while running. There are many options I have tried with LoadRunner that do work - but offer inconsistent results. I have had much better luck with SilkPerformer in this regard. This is not to say that LoadRunner was necessarily incorrect - just that it was much more inconsistent than SilkPerformer.

Also: SilkPerformer's integration of functional tests is much smoother than LoadRunner's. I know that with the 5.0 version of LoadRunner, if you use a WinRunner script instead of an Astra script you lose some runtime setting options, such as iterations. There is also the case that there are firm separations in LoadRunner's VuGen between database users and Web users, and this makes for a lot of overhead. I have not had this problem with SilkPerformer because all scripts can be integrated.

As a caveat to all of the above: I am using WinRunner/LoadRunner 5.0 and SilkTest 5.0, SilkPerformer 3.0.

LoadUser (posted 12-30-1999):
Mercury now ships LoadRunner 6.0. This version has a new modem emulation that takes into consideration MANY modem issues, unlike Segue's bandwidth control.

JeffNyman (posted 12-30-1999):
I have not directly worked with LoadRunner 6.0 yet, so I cannot comment too much on it. However, from what I hear from Mercury and others who have used it, LoadRunner still does not handle burstiness and heavy-tailing with the use of a service demand law parameter - something that SilkPerformer does with ease. This really does not matter if you are working with traditional client/server, but for a Web-based solution it means everything.

I have also found that Mercury seems to equate the network transmission time with the network contention time in their internal logic. This is something that Segue does not (apparently) do and leads to much more accurate results in my opinion. I think a lot of this has to do with the fact that SilkPerformer uses a much more realistic internal setup for workload characterization than does LoadRunner.

Overall what I have found is that SilkPerformer is much more robust and much more reliant upon industry standard performance equations (knowingly or not) than is LoadRunner if you are doing a great deal of Web testing. For traditional client/server it is somewhat of a toss-up for me.


PerformerExpert (posted 01-03-2000):
Another thing not mentioned is that Performer lets you set bounds and then compare results against those bounds, which I think is the workload characterization being mentioned. LoadRunner 6.0 has nothing that comes close to this. The repository information in SilkPerformer is much better too -- much better than Mercury's BDE database setup.

ewm (posted 01-17-2000):
In comparing the two products, I was led to believe that SilkPerformer's virtual user model was more accurate than LoadRunner's. A rep told me that because of the libraries LR uses to connect to a web server, LR can only service 4 connections per 50 virtual users simulated. Their explanation was along the lines that for every 50 virtual users simulated using LR, only 4 would ever be concurrently connected. If running a load test with 100 users using LR, the web server would only ever experience a maximum of 8 connected users. Is this true? If it is, then how can the results gathered be validated?

- EWM

LR (posted 01-18-2000):
What do you think? Do you think anyone would buy LoadRunner if this were true?

TK (posted 02-02-2000):
Regarding all those who posted here about the modem emulation in Silk versus Mercury: Mercury just released a two-DLL patch for modem emulation. It still uses an after-the-fact algorithm (just decompile the DLLs to find out), but it is much closer to what Silk is able to do. Since Mercury felt it necessary to do this, it's obvious that much of what was said about algorithms computing the timings after the fact was true, since according to Mercury this "fix" gets rid of that. It doesn't completely, though --- if you check the timings and compare them with actual modem speeds, it's not nearly as close as Silk gets to the actual times.