Monday, August 23, 2010

What's New in LoadRunner 9.5!!!!

It’s time again for another version update of your favorite load testing product, LoadRunner. The latest release as of this writing is 9.5, and we (Loadtester Inc.) have been waiting for this version anxiously for a couple of reasons. I wanted to go through a few of the new features in this latest version.
LoadRunner now fully supports Windows Vista (SP1) for both the Controller and Generators (version 9.1 added Generator-only support for Vista). That is good news for those who have been holding out on updating their LoadRunner Controller. HP, may I make a suggestion? It might be good to make preparations for Windows 7 support now so it can be supported when it first comes out, instead of taking almost two years. I have talked to a lot of people who will skip Vista and move from XP to Windows 7. They are hoping that will be possible as soon as Windows 7 arrives as a general release. There are some "gotchas" to running on Vista, so make sure to read the limitations section in the README.HTML file on the LoadRunner install disk for more information. LoadRunner has also caught up with Microsoft's .NET Framework and now supports up to version 3.5. Now that .NET 3.5 SP1 is out, I am hoping it will be supported in the next service pack for LoadRunner. In light of all this progress, you should also note that as of this version, LoadRunner no longer supports Windows 2000.
One notable change with this version is the addition of an Agent for the RDP (Remote Desktop) protocol. The RDP protocol is beginning to evolve much like the Citrix protocol did in the early days of its use. The RDP Agent should allow recognition of objects instead of just x/y coordinates, and offer better synchronization. This is important, because we're already hearing of people using RDP instead of other protocols like RMI, DCOM, and Winsock Vusers, simply because it's easier to script against these types of applications and finish testing projects in a shorter amount of time. Terminal Server licensing is a small price to pay when it could take three months to develop a decent RMI script. You can expect a separate article/review of this feature, because I think this is going to be the new "catch-all" protocol when nothing else will work - should this agent provide the capabilities it promises. Other new areas of interest include support for Citrix Presentation Server 4.5, Oracle E-Business Suite R12, and RTMP (Real Time Messaging Protocol) support for the Flex protocol.
The new Protocol Advisor should be helpful for those who do not know how to use a network sniffer to figure out the transport protocol used by an application. You can record your application, and it will suggest protocols based on the information it gathered. It may be a good starting point, but any engineer using LoadRunner should have a good understanding of the underlying protocols an application uses, because there may be times when you need to dig deeper to find the right one, or the right combination. You can now export the test results from a script run in Vugen to HTML. This allows you to use the report in Quality Center to open up defects. It appears that HP has begun to integrate the Service Test product directly into LoadRunner. By adding the right license you can get to all the Service Test functionality, which allows you to do verification testing on headless (GUI-less) web services. For those of you wondering, Service Test fills that blind spot where QuickTest Pro leaves off in testing web services, especially when there is no GUI interface. As an added benefit of using Vugen as the interface to test headless web services, you can run load tests against them easily as well. Several versions ago you could not even have LoadRunner and Service Test on the same machine.
One of the main reasons I personally want to upgrade to 9.5 is that WAN Emulation has been brought back to LoadRunner. Yeah!!! For those of you old timers who can remember as far back as version 7.6, this is when Shunra became integrated into the LoadRunner product. A limited version of their WAN emulation software could be used on the Generators. It required an additional licensing purchase, but it was minimal when compared to the overall price of LoadRunner. When the license model for a Controller changed in version 8.1, WAN emulation was thrown in as part of the entire Controller package. Unfortunately, after HP acquired Mercury, the Shunra software sort of got lost in the shuffle somewhere and this functionality disappeared. Because Shunra is a third-party software company in their own right, they have continued to sell their VE Desktop software and their VE appliance (hardware) as stand-alone solutions or as an add-on to LoadRunner. However, the integration was limited. Last year, Loadtester became a partner with Shunra because we really believe their products offer visibility into a blind spot within application performance testing - the network. It also relieves you of having to install load generators remotely on your production network. With LoadRunner 9.5, the WAN emulation options, with even more control, are available in LoadRunner, but you will need to contact Shunra to get a license from them to use it. You will install the VE Desktop for HP Software (for LoadRunner or Performance Center, if that is how you roll) on the Controller. You then install the VE Desktop client on the Generators. With Performance Center, or optionally with LoadRunner, you'll install a VE Desktop server to store advanced network configurations in. This is very cool because it means users with other Shunra products (say, your developers using VE Desktop Professional, hint, hint) could share the same network settings as you.
Once you have VE Desktop for HP Software on the Controller, when you need to turn WAN emulation on, you do it through the options within LoadRunner, and it is seamlessly integrated. If you set up different WAN emulation "profiles" on different Generators, you will be able to filter information in the Analysis module to show the impact each one had, meaning you can filter by WAN Emulation (Emulated Location) profile. This is pretty cool if you think about it. It will tell you immediately the role your network plays in application performance. A couple of things to note: first, this is a whole new WAN emulation, so don't think it has anything to do with the WAN emulation of those older versions. Forget and move on. Secondly, you cannot set up WAN emulation on the Controller (like if you install a Generator on a Controller). But if you are trying to do that anyway, we would all make fun of you, as everyone except complete NUBES knows you NEVER PUT A GENERATOR AND CONTROLLER ON THE SAME MACHINE. Sheesh… :)
Some people have been asking for a more secure way for the Controller and Generators to communicate with each other. Nothing like passing a whole lot of user names and passwords to Generators in text files (your parameter files), right? There are a couple of new items in the listing of tools called "Host Security Setup" and "Host Security Manager". It's fairly simple, in that you create a security key and make sure all the Generators are synced up with it. You have to turn this feature on and choose to enforce channel communications. It is off by default. This will create a secure channel between your Controller and Generator. This should help ease the minds of those with Generators sitting outside their firewalls, and other highly secure environments. I am curious to see long term how much this level of security affects the performance of the LoadRunner components themselves and any test results, if at all.
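Conceptually, a shared host key works like message authentication: both ends hold the same secret, and any host without it cannot participate. Here is a minimal sketch of that idea in Python - the function names and flow are my own assumptions, not HP's actual implementation:

```python
import hashlib
import hmac

# Hypothetical sketch of shared-key host authentication. Both the Controller
# and every Generator are configured with the same secret key; each command
# carries an HMAC tag so a host without the key cannot inject commands.

SHARED_KEY = b"security-key-synced-to-all-generators"

def sign(message: bytes, key: bytes = SHARED_KEY) -> str:
    """Controller side: attach an authentication tag to an outgoing command."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Generator side: accept the command only if the tag checks out."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

command = b"start_vusers count=50"
tag = sign(command)
assert verify(command, tag)                    # synced Generator accepts
assert not verify(command, tag, b"wrong-key")  # un-synced host rejects
```

A real secure channel would also encrypt the traffic, but the key-sync step described above is what makes any of it possible.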
There is a new option in the Controller’s general options (under the Execution tab) called Post Collate Command, which allows you to run an executable command or batch file after results are collated.
HP continues to open up the components to an API so that there can be more control of the LoadRunner components programmatically, for those who need it. The new Analysis API will let you launch and process an Analysis session, but even more importantly, extract this information into a third-party tool to report test results any way you desire. I have always felt the Analysis engine was a powerful component of LoadRunner that helped prove out its value, and this extends it even more if you are willing to put in the time to code some stuff up to take advantage of it. There are some additional reports and exporting features in this version. Another enhancement is the support for SQL Server 2005. What year is it anyway? :) Hopefully SQL Server 2008 won't be far behind (perhaps another thing to put into SP1 for 9.5). More work has been done to improve processing time of test results and importing from external sources. I have not tested that out yet, but it is one of the first things on my list to do.
LoadRunner 9.5 represents a major update to the application and moves it closer to where we need it to be today. However, it still lacks features that we would like to see, such as better hooking for the .NET record/replay protocol and better support for Microsoft WPF and WCF applications in general. The "click-n-script" concept has good intentions, but still needs to mature. We find we are still having to compensate for the hooking engine not always capturing what it should. Specifically, AJAX C&S has issues with redirects and still requires some manual function creation to handle the forcing of some JavaScript execution. This makes the use of C&S rather pointless. However, it is nice to see progress being made with the RDP protocol, and Vista support for those who have been forced to migrate to it within their companies. This version appears to load a little faster, and seems a bit more stable (I haven't had a Vugen crash yet). Keeping my fingers crossed....

Saturday, July 31, 2010

LoadRunner vs. Rational Performance Tester

The purpose of this document is to compare Rational's Performance Tester and Mercury's LoadRunner.
Performance Testing-Overview
Performance testing is testing to determine the responsiveness, throughput, reliability or scalability of a system under a workload. Performance testing is commonly conducted to accomplish the following:

• Evaluate against performance criteria
• Compare systems to find which one performs better
• Find the source of performance problems
• Find throughput levels

Performance, Load, and Stress Testing
Performance tests most typically fall into one of the following three categories:

Performance testing – testing to determine or validate the speed, scalability, and/or stability characteristics of the system under test. Performance testing is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the application under test.
Load testing – a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations. These tests are designed to answer questions such as “How many?”, “How big?”, and “How much?”.
Stress testing – a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes beyond those anticipated during production operations. Stress tests may also focus on the product's behavior under other stressful conditions, such as limited memory, insufficient disk space, or server failure. These tests are designed to determine under what conditions an application will fail, how it will fail, and what indicators can be monitored to warn of an impending failure.

Why Do Performance Testing?
Some of the most common reasons for conducting performance testing can be summarized as follows:

1. To compare the current performance characteristics of the application with the performance characteristics that equate to end-user satisfaction when using the application.
2. To verify that the application exhibits the desired performance characteristics, within the budgeted constraints of resource utilization.

3. To analyze the behavior of the Web application at various load levels.

4. To identify bottlenecks in the Web application.

5. To determine the capacity of the application’s infrastructure and to determine the future resources required to deliver acceptable application performance.

6. To compare different system configurations to determine which one works best for both the application and the business.

There are risks associated with a lack of performance testing, or with improper performance testing. Some considerations are outlined below:

• Revenue losses to the competition due to scalability and stability issues.
• Loss of credibility that may affect the branding image of the company.

Load Testing Process

Load Testing Tools Evaluation Criteria

Scripts represent recorded user actions issued by a web browser to a web application during a web session. They are created by passing HTTP/S traffic through a proxy server, then encoding the recorded data, which can be edited later for use in creating different scenarios.
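The recording step described above can be sketched in a few lines: a proxy that sits between the browser and the web application sees each full request URL, and each one becomes a candidate script step. This toy proxy does not actually forward traffic (all names and URLs are illustrative); it only shows what the recorder captures:

```python
# Toy sketch of proxy-based script recording. A browser configured to use a
# proxy sends the absolute URL in the request line, so the proxy can log
# every step of the web session without parsing anything else.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

recorded_steps = []

class RecordingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # For proxied requests, self.path is the full target URL.
        recorded_steps.append(("GET", self.path))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")  # canned reply; a real proxy forwards upstream

    def log_message(self, *args):  # silence console noise
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), RecordingProxy)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate a browser session routed through the proxy.
proxy = {"http": f"http://127.0.0.1:{server.server_port}"}
opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxy))
opener.open("http://app.example/login").read()
server.shutdown()

print(recorded_steps)  # the captured step(s), ready to become script code
```

A real recorder would also capture headers and POST bodies, and then encode the whole session into editable script form.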
Key Features
• Record and Play back
• Ability to recognize web page components
(Tables, links, drop down menus, radio buttons)
• Data Functions
Ability to use data from a text file to fill in forms
• Language
Load Scenario Creation
Ability to define custom load scenarios including number of virtual users, the scripts being executed, the speed of end user connection and browser type and the ramp-up profile. In some instances, scenarios can be modified “on the fly” to create “what if” scenarios.
Key Features
• Virtual User Creation and support
• Weighting virtual users
• Adjust virtual user access speed
• Ability to combine scripts to create a scenario(s)
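A scenario definition boils down to plain data plus a ramp-up calculation. The field names below are my own invention, but the knobs (script, user count, ramp-up rate) are the ones every scenario editor exposes:

```python
# Sketch of a scenario group and its ramp-up profile.
from dataclasses import dataclass

@dataclass
class ScenarioGroup:
    script: str
    total_users: int
    start_delay_s: int    # when this group begins ramping
    ramp_users: int       # users started per interval
    ramp_interval_s: int  # seconds between batches

def active_users(group: ScenarioGroup, t: int) -> int:
    """How many of this group's virtual users are running at second t."""
    if t < group.start_delay_s:
        return 0
    intervals_done = (t - group.start_delay_s) // group.ramp_interval_s + 1
    return min(group.total_users, intervals_done * group.ramp_users)

g = ScenarioGroup(script="order_entry", total_users=100,
                  start_delay_s=0, ramp_users=10, ramp_interval_s=30)
assert active_users(g, 0) == 10     # first batch starts immediately
assert active_users(g, 30) == 20    # second batch after one interval
assert active_users(g, 300) == 100  # capped at the group's total
```

"On the fly" modification is then just changing these fields while the test runs.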
Load Test Feedback
This is the ability of the tool to monitor and display results of load test sessions in real time.
Key features
• Test error recovery
• Alert notification
• Feedback parameter coverage
Performance data can be accumulated at varying levels of granularity, including profiles, scripts, individual pages, frames, and objects on pages. Reports may provide various graphs and data tables, and may also be able to export data to external programs such as Excel for further analysis.
Key Features
• Variety of reports
• Depth of reports - ability to drill down to a problem
• Ability to easily export test results
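Aggregating raw samples at page level and exporting them for Excel is the core of this criterion. A small sketch (the sample data and column names are invented):

```python
# Aggregate raw response-time samples per page and export as CSV,
# the kind of data an analysis module hands off to Excel.
import csv
import io
import statistics
from collections import defaultdict

samples = [
    ("login.jsp", 0.8), ("login.jsp", 1.0),
    ("search.jsp", 1.4), ("search.jsp", 2.1), ("search.jsp", 1.6),
]

by_page = defaultdict(list)
for page, seconds in samples:
    by_page[page].append(seconds)

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["page", "hits", "avg_s", "max_s"])
for page, times in sorted(by_page.items()):
    writer.writerow([page, len(times),
                     round(statistics.mean(times), 2), max(times)])

print(out.getvalue())
```

The same grouping generalizes up (per script, per profile) or down (per object on a page).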
Other Criteria
Environment Support
How many environments does the tool support out of the box? Does it support the latest Java release, Oracle, PowerBuilder, wireless protocols, etc.? Most tools can interface with unsupported environments if the developers in that environment provide classes, DLLs, etc. that expose some of the application's details, but whether a developer will (or has time to) do so is another question. Ultimately this is one of the most important criteria for evaluating a tool. If the tool does not support our application, then it's worth nothing.

Ease of Use
We evaluated how easy each tool is to use. Since we were first-time users of all of these tools, we were in a very good position to judge how easy they are to pick up. For ease of use, we looked at the user interface, out-of-the-box functions, debugging facilities, error messages, screen layout, help files, and user manuals.
Technical Support
Finally, we looked at what kind of support each tool's vendor provides. For this criterion, we looked at online support, how quickly service is available, whether the vendor provides software patches free of charge, whether there is a ticketing system available for us to open problem tickets, how soon a problem ticket is handled, etc.
Tool Integration
Tool integration is a very important capability. Most organizations have a variety of tools from different vendors in their software organization. It's essential for a testing tool to integrate with other tools. We looked at how well each tool integrates with others. Does the tool allow us to run it from various test management suites? Can we raise a bug directly from the tool and feed the information gathered from our test logs into it? Does it integrate with products like Word, Excel, or requirements management tools?
Cost
Our selection of any tool is constrained by the available budget. While cost may not be a significant functional criterion for evaluating a test tool, we can never ignore it while evaluating one.


LoadRunner works by creating virtual users who take the place of real users operating client software, such as Internet Explorer sending requests using the HTTP protocol to IIS or Apache web servers. Requests from many virtual user clients are generated by "Load Generators" in order to create a load on various servers under test. These load generator agents are started and stopped by Mercury's "Controller" program. The Controller controls load test runs based on "Scenarios" invoking compiled "Scripts" and associated "Run-time Settings".

Scripts are crafted using Mercury's "Virtual User Generator" ("VuGen"). It generates C-language script code to be executed by virtual users, by capturing network traffic between Internet application clients and servers. With Java clients, VuGen captures calls by hooking within the client JVM. During runs, the status of each machine is monitored by the Controller.

At the end of each run, the Controller combines its monitoring logs with logs obtained from load generators, and makes them available to the "Analysis" program, which can then create run result reports and graphs for Microsoft Word, Crystal Reports, or an HTML webpage browser.
Each HTML report page generated by Analysis includes a link to results in a text file which Microsoft Excel can open to perform additional analysis.

Errors during each run are stored in a database which can be read using Microsoft Access.

Rational Performance Tester

Rational Performance Tester (RPT) is a full load and monitoring suite that uses a distributed model to load test systems. RPT is not limited to Web server load testing, but can include specially written Java classes that can be used to load test other types of systems. RPT is based on the IBM Rational Software Development Platform (Rational SDP), which itself is built on Eclipse. The name RPT refers to the graphical user interface (GUI) component. RPT is used in conjunction with the IBM Rational Agent Controller (RAC), a non-graphical application used in distributed environments to generate load. The RPT GUI runs on the Windows or Linux platform. A RAC runs on the RPT machine and communicates with other RACs running on Linux, Windows, or z/OS platforms.

The scripts are compiled into Java classes and are sent to the other platforms that are running RAC. Any platform running RAC can then execute a test against the target system.

There is no programming necessary to create, modify, or execute a load test. A load test is a graphical illustration of the Web pages that will be visited during execution. A SOCKS proxy is used to record a test script: you navigate, through a browser, the Web pages the test should capture, mimicking the actual use of the system by a user. The captured Web pages can be viewed and modified through a browser-like window in the RPT test editor.

Test data can be varied during a test using a feature in RPT called a Data Pool.
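A data pool boils down to each virtual user drawing the next row of test data, so no two iterations submit identical values. A sketch (the rows and field names are invented):

```python
# Sketch of a data pool with "wrap" access: when the rows run out,
# iteration starts again from the top.
import itertools

datapool_rows = [
    {"username": "user01", "password": "pw01"},
    {"username": "user02", "password": "pw02"},
    {"username": "user03", "password": "pw03"},
]

cursor = itertools.cycle(datapool_rows)

# Four iterations of a test drawing from a three-row pool:
first_four = [next(cursor)["username"] for _ in range(4)]
print(first_four)  # wraps back to user01 on the fourth draw
```

Real data pools add per-user partitioning and other access orders (sequential, random, unique), but the cursor idea is the same.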

RPT contains a Test Scheduler that allows for different characteristics of a load test to be changed or enhanced. This feature provides the ability to mirror the load that occurs on a running system. The scheduler features items such as run time, think time, statistic gathering, and overall user load.

RPT generates performance and throughput reports in real time, enabling detection of performance problems at any time during a load test run. These reports provide multiple filtering and configuration options that can be set before, during, and after a test run. Additional reports are available at the conclusion of the test run to perform deeper analysis on items such as response time percentile distributions. Custom reports can be created. A custom report can be tailored to the needs of the tester, combining data on one report to display the information important to the tester. Reports can include bar, line, or pie charts.

Rational Performance Tester detailed architecture

LoadRunner
The Virtual User Generator (VuGen) allows a user to record and/or script the test to be performed against the application under test, and enables the performance tester to playback and make modifications to the script as needed. Such modifications may include Parameterization (selecting data for keyword-driven testing), Correlation and Error handling. During recording, VuGen records a tester's actions by routing data through a proxy. The type of proxy depends upon the protocol being used, and affects the form of the resulting script.
Rational Performance Tester
uses its own custom scripting language. These scripts can then be customized and organized in a variety of ways to accurately reflect the habits of the various user profiles expected to use the application once it goes live. Using a variety of automated data correlation techniques, these tests can then be executed to reflect multiple, unique, concurrent actors - scaling to even thousands or tens of thousands of users. Manual correlation is done using regular expressions.
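Manual correlation with a regular expression means capturing a dynamic value from one response and substituting it into the next request. A minimal sketch (the HTML and parameter names are invented for illustration):

```python
# Correlate a dynamic session token: capture from response, reuse in the
# next request. This is the pattern behind manual correlation in any tool.
import re

login_response = """
<html><body>
<form action="/checkout">
  <input type="hidden" name="sessionid" value="A8F3-29CC-51B0">
</form>
</body></html>
"""

# Left and right boundaries around the dynamic value become the pattern.
match = re.search(r'name="sessionid" value="([^"]+)"', login_response)
assert match is not None, "correlation failed: token not found in response"
session_id = match.group(1)

# The captured value rides along on the next step of the script.
next_request = f"POST /checkout?sessionid={session_id} HTTP/1.1"
print(next_request)
```

Automated correlation does the same thing, just with the boundaries detected for you.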

Load Scenario Creation

LoadRunner
The script generated by VuGen is run by the Controller. Each run is called a scenario, with some preset settings. LoadRunner provides for the use of multiple machines to act as Load Generators. For example, to run a test of 100 users, we can use three or more machines with the Load Generator installed on them. The tester then provides the script and the name of each machine that is going to act as a load generator, along with the number of users who are going to run from that machine.
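Splitting a user population across generator machines is simple arithmetic; a sketch (machine names are invented):

```python
# Distribute a scenario's virtual users across load generator machines,
# spreading any remainder across the first few generators.
def assign_users(total_users: int, generators: list[str]) -> dict[str, int]:
    base, extra = divmod(total_users, len(generators))
    return {g: base + (1 if i < extra else 0)
            for i, g in enumerate(generators)}

plan = assign_users(100, ["lg01", "lg02", "lg03"])
print(plan)  # {'lg01': 34, 'lg02': 33, 'lg03': 33}
```

In practice you would also weight the split by each machine's capacity rather than dividing evenly.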
Rational Performance Tester
There is no programming necessary to create, modify, or execute a load test. A load test is a graphical illustration of the Web pages that will be visited during execution. A SOCKS proxy is used to record a test script: you navigate, through a browser, the Web pages the test should capture, mimicking the actual use of the system by a user. The captured Web pages can be viewed and modified through a browser-like window in the RPT test editor. This window shows what the user would see when visiting a selected page, helping to make more informed decisions about how to modify load tests prior to execution.

Load Test Feedback

LoadRunner
uses Monitors to monitor the performance of individual components, but each monitor must be purchased separately from Mercury. Some monitors include Oracle monitors, WebSphere monitors, etc. Once a scenario is set and the run is completed, the results of the scenario can be viewed via the Analysis tool.
Rational Performance Tester
During test execution, Rational Performance Tester graphically displays consolidated views of the average response times - over time - for each simulated user profile. Testers also have the option of diving into the real-time conversation between any single user instance and the system under test to view the actual data passed back and forth, enabling rapid and early analysis of suspected response degradation.


The Analysis tool takes the completed scenario result and prepares the necessary graphs for the tester to view. Also, graphs can be merged to get a good picture of the performance. The tester can then make needed adjustments to the graph and prepare a LoadRunner report. The report, including all the necessary graphs, can be saved in several formats, including HTML and Microsoft Word format.

RPT generates performance and throughput reports in real time, enabling detection of performance problems at any time during a load test run. These reports provide multiple filtering and configuration options that can be set before, during, and after a test run. Additional reports are available at the conclusion of the test run to perform deeper analysis on items, such as response time percentile distributions. Custom reports can be created. A custom report can be tailored to the need of the tester. Data from diverse sources can be combined on one report to display information important to the tester. Reports can include bar, line, or pie charts.

Environment Support

LoadRunner supports a wide range of enterprise environments, including Web Services, Ajax, J2EE, and .NET.

LoadRunner 4.0 is available for the following client/server platforms:
• Microsoft Windows: Windows 3.x, Windows NT, Windows 95
• UNIX: Sun OS, Solaris, HP-UX, IBM AIX, NCR
RPT creates, executes, and analyzes tests to validate the reliability of complex e-business applications, including Siebel, SAP®, SOA, and Citrix.
Supports Windows, Linux and z/OS as distributed controller agents.

Ease of Use
LoadRunner now includes game-changing technology that reduces the script creation process to a few simple mouse clicks. LoadRunner Click and Script enables you to record scripts at the user interface level.
The scripts are succinct, visually intuitive, and self-explanatory.
The scripts are easy to maintain.
Click and Script lowers the technical skills needed to perform load tests.
The automatic correlation in the script works most of the time. This is a big time saver.

The tool gives you the option of using a persistent cursor for your data tables. So if you had a test case that, for instance, deleted a user from a database, you would not have to restore the database or re-add users between test runs; you could just continue from the same point in your list of users.
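The persistent cursor idea is simple: save the cursor position to disk, so a later run resumes where the last one stopped. A sketch (the state-file name and format are my own invention):

```python
# Sketch of a persistent data-table cursor: the index survives between
# test runs by being written to a small state file after every draw.
import json
import os

class PersistentCursor:
    def __init__(self, rows: list, state_file: str):
        self.rows = rows
        self.state_file = state_file
        self.index = 0
        if os.path.exists(state_file):  # resume from a previous run
            with open(state_file) as f:
                self.index = json.load(f)["index"]

    def next(self):
        row = self.rows[self.index % len(self.rows)]
        self.index += 1
        with open(self.state_file, "w") as f:  # persist for the next run
            json.dump({"index": self.index}, f)
        return row
```

A run that consumed users 1 and 2 leaves the state file pointing at user 3, so no database restore or re-add is needed between runs.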

Rational® Performance Tester for z/OS includes multiple easy-to-use features with powerful testing capabilities, and delivers both high-level and detailed test views.

Technical Support
LoadRunner
Mercury support is rated to be the best among testing tool vendors, and the developer can expect a reply from Mercury within 4 hours.
RPT: Though manuals, support handbooks, and a Rational help desk are available on the IBM website, the support is not up to the mark compared to Mercury's.

Tool Integration

LoadRunner

LoadRunner can be integrated with Mercury's TestDirector, which is a test management tool.
Rational Performance Tester
RPT integrates with Tivoli® composite application management solutions to identify the source of production performance problems. It can also be integrated with other Rational products like ClearCase.


While the above comparison shows the features each tool provides in each functional area, we would like to say that you should use the demo version of each product to see which meets your requirements. Based on your requirements, any of these tools could turn out to be the best for your needs. Here are some of the strengths and weaknesses of the tools to help you make your selection:

Tuesday, July 27, 2010

LoadRunner vs. SilkPerformer

Can anyone speak about the relative merits of these two tools? I am new to LoadRunner, but my company is merging with another that uses Segue's tools.

I know of one difference, i.e. that LoadRunner allows recording against UNIX and Windows RTE clients.

Thanks for any advice;

PerformerUser, posted 07-22-1999 10:06 PM: SilkPerformer can also record traffic from a Unix/Mac machine. Since the web recorder is proxy based, it doesn't matter what kind of machine the client is running on.

As for relative merits, after a serious comparison of the products, we found that SilkPerformer actually simulates browsers correctly. It allows simulation of multiple connections correctly and the modem simulation actually works. The script is also much smaller in size and the tool scales very well with a large number of simulated users. You can simulate multiple protocols in a single script (HTTP, POP3, FTP, SMTP, LDAP, IIOP, TCP/IP, etc.) so you don't have to create different scripts for different protocols. The scripting language has a whole lot of specialized functions optimized for load testing. And finally, it doesn't crash frequently, which is a good thing in a quality assurance tool! Bottom line is: SilkPerformer is a much more serious tool. Hope that helps.

AdamA, posted 08-06-1999 08:01 AM: We have found through thorough evaluation that LoadRunner is much easier to use in a real-world scenario. The Astra QuickTest tool makes recording business processes and then parameterizing the data very easy. LoadRunner does emulate modem speeds accurately and has the ability to manage multiple user types (inside and outside of a browser). I recommend you do a side-by-side comparison of the tools and ignore the sales crap - both sales forces said they were the only "serious testing tool". For our app we found the Monitors that come with LoadRunner to be very powerful.

BE, posted 10-18-1999 07:18 PM: Just to complete AdamA's message, LoadRunner also has a very nice feature for testing web environments: "IPSpoofing".
Each simulated user gets its own IP address.
LoadRunner supplies an easy-to-use wizard to define as many IP addresses as you want for each injector machine.
So LoadRunner can really stress all the components of the web architecture:
firewalls, routers, load balancing...
All of these rely on the IP address to identify a user.
Other load testing tools will simulate all the users with the same IP address, which is not realistic and gives false results...

Hope it helps...
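The per-user IP idea boils down to binding each virtual user's socket to a different local source address before connecting, so the server and any load balancer in between see distinct client IPs. A minimal sketch, using only the loopback address since arbitrary source IPs must actually be configured on the machine:

```python
# Sketch of IP spoofing's core mechanism: bind() before connect() fixes
# the source IP of the socket's outgoing packets.
import socket

def socket_for_vuser(source_ip: str) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Port 0 lets the OS pick an ephemeral port; only the IP is pinned.
    s.bind((source_ip, 0))
    return s

# In a real setup each virtual user would get its own address, e.g. one
# of many aliases configured on the injector machine's interface.
s = socket_for_vuser("127.0.0.1")
print(s.getsockname())  # ('127.0.0.1', <ephemeral port>)
s.close()
```

The wizard mentioned above automates the part this sketch cannot: defining all those extra IP addresses on each injector machine.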

SEG, posted 12-02-1999 02:05 PM: If you are going with one-shot capture/replay, LoadRunner is just fine. Otherwise, Segue's Realizer and Test pieces are overwhelming improvements. The creation of an abstraction layer reduces man-hours by an order of magnitude for development systems or production systems that experience significant change. The Radar piece allows non-technical personnel to track QA failures with desktop replay. Expensive, yes. We paid $75,000. Worth every dime.

QA grinder, posted 12-02-1999 09:08 PM: I don't know about that "accurate modem" comment for LoadRunner. I heard secondhand that a Mercury rep admitted that their modem simulation was essentially bogus - grabbing the whole file and then calculating a waiting time based on size before getting the next file. This sounds like faking it to me...
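The naive scheme described - download at full speed, then sleep for the time the modem would have needed - can be sketched in a couple of lines (the numbers are illustrative):

```python
# Sketch of size-based modem "simulation": compute a wait time from the
# payload size and the nominal line rate, ignoring compression, protocol
# overhead, and latency - which is exactly why critics call it faking it.
def simulated_transfer_seconds(payload_bytes: int, modem_bps: int = 56_000) -> float:
    """Seconds a modem of the given line speed would nominally take."""
    return payload_bytes * 8 / modem_bps

wait = simulated_transfer_seconds(70_000)  # a 70 KB page over a 56k line
print(wait)  # 10.0
```

A faithful emulation would instead throttle the connection itself, so that timeouts, partial reads, and server-side connection behavior all happen as they would over a real modem.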

JeffNyman, posted 12-28-1999 02:24 PM: I agree with most of the comments. LoadRunner is easier to use, but if you want more robust results and more scalability you should definitely go with SilkPerformer.

The modem times are much more accurate in SilkPerformer. LoadRunner seems to use some sort of algorithm to compute the times after the fact; it does not take long to deduce this on your own, because the times are either inconsistent or do not match up with what you actually are seeing.

SilkPerformer also gives many more runtime settings that have a direct bearing on the results rather than just glitz while running. There are many options I have tried with LoadRunner that do work - but offer inconsistent results. I have had much better luck with SilkPerformer in this regard. This is not to say that LoadRunner was necessarily incorrect - just that it was much more inconsistent than SilkPerformer.

Also: SilkPerformer's integration of functional tests is much smoother than LoadRunner's. I know that with the 5.0 version of LoadRunner, if you use a WinRunner script instead of an Astra script you lose some runtime setting options such as iterations. LoadRunner's VuGen also draws a firm separation between database users and Web users, which adds a lot of overhead. I have not had this problem with SilkPerformer, because all scripts can be integrated.

As a caveat to all of the above: I am using WinRunner/LoadRunner 5.0 and SilkTest 5.0, SilkPerformer 3.0.

LoadUser (unregistered, posted 12-30-1999 09:39 AM): Mercury now ships LoadRunner 6.0. This version has a new modem emulation that takes MANY modem issues into consideration, unlike Segue's bandwidth control.

JeffNyman (unregistered, posted 12-30-1999 11:40 AM): I have not directly worked with LoadRunner 6.0 yet, so I cannot comment too much on it. However, from what I hear from Mercury and others who have used it, LoadRunner still does not handle burstiness and heavy-tailed traffic with the use of a service demand law parameter, something that SilkPerformer does with ease. This really does not matter if you are working with traditional client/server, but for a Web-based solution it means everything.

I have also found that Mercury seems to equate the network transmission time with the network contention time in their internal logic. This is something that Segue does not (apparently) do and leads to much more accurate results in my opinion. I think a lot of this has to do with the fact that SilkPerformer uses a much more realistic internal setup for workload characterization than does LoadRunner.

Overall what I have found is that SilkPerformer is much more robust and much more reliant upon industry standard performance equations (knowingly or not) than is LoadRunner if you are doing a great deal of Web testing. For traditional client/server it is somewhat of a toss-up for me.

PerformerExpert (unregistered, posted 01-03-2000 02:11 PM): Another thing not mentioned is that Performer lets you set bounds and then compare against those bounds, which I think is the workload characterization being mentioned. LoadRunner 6.0 has nothing that comes close to this. The repository information in SilkPerformer is much better too, much better than Mercury's BDE database setup.

ewm (unregistered, posted 01-17-2000 05:42 PM): In comparing the two products, I was led to believe that SilkPerformer's virtual user model was more accurate than LoadRunner's. A rep told me that because of the libraries LR uses to connect to a web server, LR can only service 4 connections per 50 virtual users simulated. Their explanation was that for every 50 virtual users simulated using LR, only 4 would ever be concurrently connected. If running a load test with 100 users using LR, the web server would only ever experience a maximum of 8 connected users. Is this true? If it is, then how can the results gathered be validated?


LR (unregistered, posted 01-18-2000 08:57 AM): What do you think? Do you think someone would buy LoadRunner if this were true?

TK (unregistered, posted 02-02-2000 01:51 PM): In regards to all those who posted here about the modem emulation difference between Silk and Mercury: Mercury just released a two-DLL patch for modem emulation. It still uses an after-the-fact algorithm (just decompile the DLLs to find out), but it is much closer to what Silk is able to do. Since Mercury felt it necessary to do this, it's obvious that much of what was said about algorithms computing the timings after the fact was true, since according to Mercury this "fix" gets rid of that. It doesn't completely, though: if you check the timings and compare them with actual modem speeds, it's not nearly as close as Silk gets to the actual times.

Saturday, April 3, 2010

pre-defined functions in LR

Points to note with web_url and web_link:

■web_url is not a context sensitive function, while web_link is a context sensitive function. Context sensitive functions describe your actions in terms of GUI objects (such as windows, lists, and buttons). Compare HTML vs. URL recording mode.
■If a web_url statement occurs before a context sensitive statement like web_link, it must successfully reach the server; otherwise your script will error out, because the context sensitive statement that follows has no page to work against.
■While recording, if you switch between actions, the first statement recorded in a given action will never be a context sensitive statement.
■The first argument of web_link, web_url, web_image, or in general any web_* function does not affect script replay. For example, suppose your web_link statement was recorded as:

web_link("Hi There",
    "Text=Hello, ABC",
    LAST);
Now, when you parameterize/correlate the first argument:

web_link("{Welcome to LearnLoadRunner}",
    "Text=Hello, ABC",
    LAST);

On executing the above script, you will not find the actual value of the parameter in the execution log; instead you will see the literal text {Welcome to LearnLoadRunner}. However, to display the correlated/parameterized value, you can use lr_eval_string to evaluate the parameter.