Tuesday, November 13, 2007

Testing Tools

Testing Without a Formal Test Plan

A formal test plan is a document that provides and records important information about a test project, for example:

project and quality assumptions
project background information
resources
schedule & timeline
entry and exit criteria
test milestones
tests to be performed
use cases and/or test cases
For a range of reasons -- both good and bad -- many software and web development projects don't budget enough time for complete and comprehensive testing. A quality test team must be able to test a product or system quickly and constructively in order to provide some value to the project. This essay describes how to test a web site or application in the absence of a detailed test plan and facing short or unreasonable deadlines.
Identify High-Level Functions First
High-level functions are those functions that are most important to the central purpose(s) of the site or application. A test plan would typically provide a breakdown of an application's functional groups as defined by the developers; for example, the functional groups of a commerce web site might be defined as shopping cart application, address book, registration/user information, order submission, search, and online customer service chat. If this site's purpose is to sell goods online, then you have a quick-and-dirty prioritization of:
shopping cart
registration/user information
order submission
address book
search
online customer service chat
I've prioritized these functions according to their significance to a user's ability to complete a transaction. I've ignored some of the lower-level functions for now, such as modifying shopping cart quantity and editing a saved address, because at the beginning of testing they are a little less important than the higher-level functions from a test point-of-view.
Your prioritization may differ from mine, but the point here is that time is critical, and in the absence of defined priorities in a test plan, you must test something now. You will make mistakes, and you will find yourself making changes once testing has started, but you need to determine your test direction as soon as possible.
Test Functions Before Display
Any web site should be tested for cross-browser and cross-platform compatibility -- this is a primary rule of web site quality assurance. However, wait on the compatibility testing until after the site can be verified to just plain work. Test the site's functionality using a browser/OS/platform that is expected to work correctly -- use what the designers and coders use to review their work.
Running through the site or application first with known-good client configurations lets testers focus on the way the site functions, and on the more important class of functional defects and problems, early in the test project. Spend time up front identifying and reporting those functional-level defects, and the developers will have more time to fix them effectively and to iteratively deliver new code levels to QA.
If your test team will not be able to exhaustively test a site or application -- and the premise of this essay is that your time is extremely short and you are testing without a formal plan -- you must first identify whether the damned thing can work, and then move on from there.
Concentrate on Ideal Path Actions First
Ideal paths are those actions and steps most likely to be performed by users. For example, on a typical commerce site, a user is likely to
identify an item of interest
add that item to the shopping cart
buy it online with a credit card
ship it to himself/herself
Now, this describes what the user would want to do, but many sites require a few more functions, so the user must go through some more steps, for example:
login to an existing registration account (if one exists)
register as a user if no account exists
provide billing & bill-to address information
provide ship-to address information
provide shipping & shipping method information
provide payment information
agree or decline to receive site emails and newsletters
Most sites offer (or force) an even wider range of actions on the user:
change product quantity in the shopping cart
remove product from shopping cart
edit user information (or ship-to information or bill-to information)
save default information (like default shipping preferences or credit card information)
All of these actions and steps may be important to some users some of the time (and to some developers and marketers all of the time), but the majority of users will not use every function every time. Focus on the ideal path and identify those factors most likely to be used in a majority of user interactions.
Assume a user who knows what s/he wants to do, and so is not going to choose the wrong action for the task at hand. Assume the user won't make common data entry and interface control errors. Assume the user will accept any default form selections -- this means that if a checkbox is checked, the user will leave it checked; if a radio button defaults to a meaningful selection, the user will let that ride. This doesn't mean that defaulted non-values -- such as a drop-down menu showing a "select one" instruction -- should be left as-is to force errors. The point here is to keep it simple and lowest-common-denominator and not to force errors. Test as though everything is right in the world, life is beautiful, and your project manager is Candide.
Once the ideal paths have been tested, focus on secondary paths involving the lower-level functions or actions and steps that are less frequent but still reasonable variations.
Forcing errors comes later, if you have time.
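To make this concrete, here is a minimal sketch of an ideal-path check written in Python. Everything in it -- the URLs, the form field names, and the expected confirmation text -- is an illustrative assumption about a hypothetical commerce site, not any real site's interface.

    # Minimal ideal-path check for a hypothetical commerce site.
    # All URLs, field names, and expected strings are assumptions.
    import requests

    BASE = "https://shop.example.com"

    def test_ideal_checkout_path():
        # A Session carries cookies across steps, the way a real browser would.
        session = requests.Session()

        # Step 1: the user finds an item of interest.
        resp = session.get(BASE + "/products/1234")
        assert resp.status_code == 200

        # Step 2: the user adds the item to the cart, accepting default options.
        resp = session.post(BASE + "/cart/add", data={"product_id": "1234", "qty": "1"})
        assert resp.status_code == 200

        # Step 3: the user checks out with known-good address and payment data --
        # no forced errors anywhere.
        resp = session.post(BASE + "/checkout", data={
            "name": "Pat Tester",
            "address": "1 Main St",
            "card_number": "4111111111111111",  # a standard test card number
            "expiry": "12/2030",
        })
        assert resp.status_code == 200
        assert "order confirmation" in resp.text.lower()

    if __name__ == "__main__":
        test_ideal_checkout_path()
        print("ideal path OK")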
Concentrate on Intrinsic Factors First
Intrinsic factors are those factors or characteristics that are part of the system or product being tested. An intrinsic factor is an internal factor. So, for a typical commerce site, the HTML page code that the browser uses to display the shopping cart pages is intrinsic to the site: change the page code and the site itself is changed. The code logic called by a submit button is intrinsic to the site.
Extrinsic factors are external to the site or application. Your crappy computer with only 8 megs of RAM is extrinsic to the site, so your home computer can crash without affecting the commerce site, and adding more memory to your computer doesn't mean a whit to the commerce site or its functioning.
Given a severe shortage of test time, focus first on factors intrinsic to the site:
does the site work?
do the functions work? (again with the functionality, because it is so basic)
do the links work?
are the files present and accounted for?
are the graphics MIME types correct? (I used to think that this couldn't be screwed up)
Once the intrinsic factors are squared away, then start on the extrinsic points:
cross-browser and cross-platform compatibility
clients with cookies disabled
clients with javascript disabled
monitor resolution
browser sizing
connection speed differences
The point here is that with myriad possible client configurations and user-defined environmental factors to think about, think first about those that relate to the product or application itself. When you run out of time, better to know that the system works rather than that all monitor resolutions safely render the main pages.
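As a small illustration of checking one intrinsic factor -- the graphics MIME types mentioned above -- here is a Python sketch; the image URLs are assumptions for a hypothetical site.

    # Do the site's graphics come back with image MIME types?
    # The URLs below are illustrative assumptions.
    import requests

    IMAGE_URLS = [
        "https://shop.example.com/images/logo.gif",
        "https://shop.example.com/images/cart-button.png",
    ]

    for url in IMAGE_URLS:
        resp = requests.head(url, allow_redirects=True)
        content_type = resp.headers.get("Content-Type", "")
        # An image served with a non-image MIME type is an intrinsic defect.
        ok = resp.status_code == 200 and content_type.startswith("image/")
        print(("OK  " if ok else "FAIL") + f" {resp.status_code} {content_type} {url}")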
Boundary Test From Reasonable to Extreme
You can't just verify that an application works correctly when all inputs and all actions are correct. People do make mistakes, so you must test error handling and error states. The systematic testing of error handling is called boundary testing (actually, boundary testing describes much more, but this is enough for this discussion).
During your pedal-to-the-floor, no-test-plan testing project, boundary testing refers to the testing of forms and data inputs, starting from known good values, and progressing through reasonable but invalid inputs all the way to known extreme and invalid values.
The logic for boundary testing forms is straightforward: start with known good and valid values because if the system chokes on that, it's not ready for testing. Move through expected bad values because if those fail, the system isn't ready for testing. Try reasonable and predictable mistakes because users are likely to make such mistakes -- we all screw up on forms eventually. Then start hammering on the form logic with extreme errors and crazy inputs in order to catch problems that might affect the site's functioning.
Good Values
Enter in data formatted as the interface requires. Include all required fields. Use valid and current information (what "valid and current" means will depend on the test system, so some systems will have a set of data points that are valid for the context of that test system). Do not try to cause errors.
Expected Bad Values
Some invalid data entries are intrinsic to the interface and concept domain. For example, any credit card information form will expect expired credit card dates -- and should trap for them. Every form that specifies some fields as required should trap for those fields being left blank. Every form that has drop-down menus that default to an instruction ("select one", etc.) should trap for that instruction. What about punctuation in name fields?
Reasonable and Predictable Mistakes
People will make some mistakes based on the design of the form, the implementation of the interface, or the interface's interpretation of the relevant concept domain(s). For example, people will inadvertently enter in trailing or leading spaces into form fields. People might enter a first and middle name into a first name form field ("Mary Jane").
Not a mistake, per se, but how does the form field handle case? Is the information case-sensitive? Or does the address form handle a PO address? Does the address form handle a business name?
Extreme Errors and Crazy Inputs
And finally, given time, try to kill the form by entering in extreme crap. Test the maximum size of inputs, test long strings of garbage, put numbers in text fields and text in numeric fields.
Everyone's favorite: enter in HTML code. Put your name in BLINK tags, enter in an IMG tag for a graphic from a competitor's site.
Enter in characters that have special meaning in a particular OS (I once crashed a server by using characters this way in a form field).
But remember, even if you kill the site with an extreme data input, the priority is handling errors that are more likely to occur. Use your time wisely and proceed from most likely to less likely.
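For a single field, that progression might look like the following Python sketch; the endpoint, the field names, and the expected behavior are illustrative assumptions, and the ordering runs from most likely to least likely, as argued above.

    # Boundary-test progression for a hypothetical "quantity" field,
    # ordered from known-good values to extreme garbage.
    import requests

    CART_ADD_URL = "https://shop.example.com/cart/add"  # assumed endpoint

    QUANTITY_CASES = [
        ("1",                 "good value"),
        ("99",                "good value, upper range"),
        ("",                  "expected bad: required field left blank"),
        ("0",                 "expected bad: zero quantity"),
        (" 2 ",               "reasonable mistake: leading/trailing spaces"),
        ("two",               "reasonable mistake: text in a numeric field"),
        ("-1",                "extreme: negative quantity"),
        ("9" * 500,           "extreme: absurdly long numeric string"),
        ("<blink>hi</blink>", "extreme: HTML where a number belongs"),
    ]

    for value, category in QUANTITY_CASES:
        resp = requests.post(CART_ADD_URL, data={"product_id": "1234", "qty": value})
        # Whatever the input, the form should never return a server error.
        verdict = "FAIL" if resp.status_code >= 500 else "ok"
        print(f"{verdict}  {resp.status_code}  {category}  qty={value[:30]!r}")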
Compatibility Test From Good to Bad
Once you get to cross-browser and cross-platform compatibility testing, follow the same philosophy of starting with the most important (as defined by prevalence among expected user base) or most common based on prior experience and working towards the less common and less important.
Do not make the assumption that because a site was designed for a previous version of a browser, OS, or platform it will also work on newer releases. Instead, make a list of the browsers and operating systems in order of popularity on the Internet in general, and then move those that are of special importance to your site (or your marketers and/or executives) to the top of the list.
The most important few configurations should be used for functional testing, then start looking for deviations in performance or behavior as you work down the list. When you run out of time, you want to have completed the more important configurations. You can always test those configurations that attract .01 percent of your user base after you launch.
The Drawbacks of This Testing Approach
Many projects are not mature and are not rational (at least from the point-of-view of the quality assurance team), and so the test team must scramble to test as effectively as possible within a very short time frame. I've spelled out how to test quickly without a structured test plan, and this method is much better than chaos and somewhat better than letting the developers tell you what and how to test.
This approach has definite quality implications:
Incomplete functional coverage -- this is no way to exercise all of the software's functions comprehensively.
No risk management -- this is no way to measure overall risk issues regarding code coverage and quality metrics. Effective quality assurance measures quality over time, starting from a known base of evaluation.
Too little emphasis on user tasks -- because testers will focus on ideal paths instead of real paths. With no time to prepare, ideal paths are defined according to best guesses or developer feedback rather than by careful consideration of how users will understand the system or how users understand real-world analogues to the application tasks. With no time to prepare, testers will be using a very restricted set of input data, rather than real data (from user activity logs, from logical scenarios, from careful consideration of the concept domain).
Difficulty reproducing -- because testers are making up the tests as they go along, reproducing the specific errors found can be difficult, but also reproducing the tests performed will be tough. This will cause problems when trying to measure quality over successive code cycles.
Project management may believe that this approach to testing is good enough -- because you can do some good testing by following this process, management may assume that full and structured testing, along with careful test preparation and test results analysis, isn't necessary. That misapprehension is a very bad sign for the continued quality of any product or web site.
Inefficient over the long term -- quality assurance involves a range of tasks and foci. Effective quality assurance programs expand their base of documentation on the product and on the testing process over time, increasing the coverage and granularity of tests over time. Great testing requires good test setup and preparation, but success with the kind of test-plan-less approach described in this essay may reinforce bad project and test methodologies. A continued pattern of quick-and-dirty testing like this is a sign that the product or application is unsustainable in the long run.

Basic Testcase Concepts

A testcase is simply a test with formal steps and instructions; testcases are valuable because they are repeatable, reproducible under the same environments, and easy to improve upon with feedback. A testcase is the difference between saying that something seems to be working okay and proving that a set of specific tasks are known to be working correctly.
Some tests are more straightforward than others. For example, say you need to verify that all the links in your web site work. There are several different approaches to checking this:
you can read your HTML code to see that all the link code is correct
you can run an HTML DTD validator to see that all of your HTML syntax is correct, which would imply that your links are correct
you can use your browser (or even multiple browsers) to check every link manually
you can use a link-checking program to check every link automatically
you can use a site maintenance program that will display graphically the relationships between pages on your site, including links good and bad
you could use all of these approaches to test for any possible failures or inconsistencies in the tests themselves
Verifying that your site's links are not broken is relatively unambiguous. You simply need to decide which one or more of these tests best suits your site structure, your test resources, and your need for granularity of results. You run the test, and you get your results showing any broken links.
Notice that you now have a list of broken links, not of incorrect links. If a link is valid syntactically, but points at the incorrect page, your link test won't catch the problem. My general point here is that you must understand what you are testing. A testcase is a series of explicit actions and examinations that identifies the "what".
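As an example of the automated approach, here is a minimal broken-link check in Python (using the common requests and BeautifulSoup libraries). Note what it can and cannot tell you: it finds links that are broken, not links that point at the wrong page.

    # Minimal broken-link check for a single page.
    import requests
    from urllib.parse import urljoin
    from bs4 import BeautifulSoup

    PAGE_URL = "https://www.philosophe.com/"  # page under test

    html = requests.get(PAGE_URL).text
    soup = BeautifulSoup(html, "html.parser")

    for anchor in soup.find_all("a", href=True):
        target = urljoin(PAGE_URL, anchor["href"])  # resolve relative links
        try:
            status = requests.head(target, allow_redirects=True, timeout=10).status_code
        except requests.RequestException as exc:
            status = "ERROR (" + exc.__class__.__name__ + ")"
        print(f"{status}  {target}")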
A testcase for checking links might specify that each link is tested for functionality, appropriateness, usability, style, consistency, etc. For example, a testcase for checking links on a typical page of a site might include these steps:
Link Test: for each link on the page, verify that
the link works (i.e., it is not broken)
the link points at the correct page
the link text effectively and unambiguously describes the target page
the link follows the approved style guide for this web site (for example, closing punctuation is or is not included in the link text, as per the style guide specification)
every instance of a link to the same target page is coded the same way
As you can see, this is a detailed testing of many aspects of the link, with the result that on completion of the test, you can say definitively what you know works. However, this is a simple example: testcases can run to hundreds of instructions, depending on the types of functionality being tested and the need for iterations of steps.
Defining Test and Testcase Parameters
A testcase should set up any special environment requirements the test may have, such as clearing the browser cache, enabling JavaScript support, or turning on the warnings for the dropping of cookies.
In addition to specific configuration instructions, testcases should also record browser types and versions, operating system, machine platforms, connection speeds -- in short, the testcase should record any parameter that would affect the reproducibility of the results or could aid in troubleshooting any defects found by testing. Or to state this a little differently, specify what platforms this testcase should be run against, record what platforms it is run against, and in the case of defects report the exact environment in which the defect was found.
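One way to honor this in practice is to attach the environment parameters to every recorded result. A sketch, with illustrative field names and values:

    # Record each test result together with the environment parameters
    # that affect reproducibility. Field names are illustrative.
    import json
    import platform
    from datetime import datetime, timezone

    def record_result(testcase_id, verdict, notes=""):
        record = {
            "testcase": testcase_id,
            "verdict": verdict,                    # "pass" or "fail"
            "notes": notes,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            # Environment parameters for reproducibility and troubleshooting:
            "browser": "Netscape Navigator 3.04",  # fixed by the testcase below
            "os": platform.platform(),
            "machine": platform.machine(),
            "connection": "LAN",                   # recorded by hand in this sketch
        }
        with open("results.jsonl", "a") as log:
            log.write(json.dumps(record) + "\n")

    record_result("philosophe-links-001", "pass")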
An Example Testcase
For a more realistic example of a testcase, I have built the following sample testcase for philosophe.com.
0. On a Windows 95 box, load Netscape Navigator 3.04. Clear browser caches, and clear history.
1. In the URL field, type in www.philosophe.com and hit the ENTER key.
2. On the home page, for each link:
   a. examine the link text
   b. click on the link, and verify that the link works
   c. verify that the new page is the correct page, as indicated by the link text
   d. verify that the little black arrow correctly indicates the current page
   e. click on the browser's BACK button
   f. verify that the link's color changed to the vlink color
3. Once all links on the home page have been tested, click on the first link in the left navigation column.
4. For this page, repeat steps 2a-2f.
5. Repeat steps 3 and 4 until every page has been tested.
As you can see, even the simplest of testcases can go into great detail; for a site that has a stable structure, this is a good thing.
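For what it's worth, the mechanical parts of this testcase -- the link checks of steps 2b and 2c in spirit, and the page-by-page repetition of steps 3 through 5 -- can be sketched as a crawl; the visual checks (the black arrow, the vlink color) still need human eyes or a browser-automation tool. A rough sketch:

    # Crawl every internal page and check each link on it.
    import requests
    from urllib.parse import urljoin, urlparse
    from bs4 import BeautifulSoup

    START = "https://www.philosophe.com/"
    site = urlparse(START).netloc
    seen, queue = set(), [START]

    while queue:
        page = queue.pop()
        if page in seen:
            continue
        seen.add(page)
        soup = BeautifulSoup(requests.get(page).text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            target = urljoin(page, anchor["href"])
            try:
                ok = requests.head(target, allow_redirects=True, timeout=10).status_code < 400
            except requests.RequestException:
                ok = False
            print(("ok  " if ok else "BAD ") + page + " -> " + target)
            # Steps 3-5: follow internal links so every page gets tested.
            if urlparse(target).netloc == site:
                queue.append(target)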
Some Issues to Consider About Testcases
The quality assurance process will find the defects uncovered by formal testcases, just as quality control does, but it will also find defects in usability or performance that wouldn't be caught by even the most detailed testcases. This is the major shortcoming of quality control testing: the very design of the testcases will tend to limit the scope of attention paid to how things come together with the code. Furthermore, with a web site that changes rapidly, intricate testcases will become outdated quickly, requiring a great deal of work just to keep them current. My point here is not that testcases are not useful, but that you must seek a balance between using scarce resources for creating testcases and using them for actual testing.

Software Testing Types

COMPATIBILITY TESTING. Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware platforms. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.
CONFORMANCE TESTING. Verifying implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.
FUNCTIONAL TESTING. Validating that an application or Web site conforms to its specifications and correctly performs all its required functions. This entails a series of tests which perform a feature-by-feature validation of behavior, using a wide range of normal and erroneous input data. This can involve testing of the product's user interface, APIs, database management, security, installation, networking, etc. Functional testing can be performed on an automated or manual basis using black box or white box methodologies.
LOAD TESTING. Load testing is a generic term covering Performance Testing and Stress Testing.
PERFORMANCE TESTING. Performance testing can be applied to understand your application or WWW site's scalability, or to benchmark the performance in an environment of third party products such as servers and middleware for potential purchase. This sort of testing is particularly useful to identify performance bottlenecks in high use applications. Performance testing generally involves an automated test suite as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.
REGRESSION TESTING. Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new release of a product or Web site. Such testing ensures reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process. Though regression testing can be performed manually an automated test suite is often used to reduce the time and resources needed to perform the required testing.
SMOKE TESTING. A quick-and-dirty test that the major functions of a piece of software work without bothering with finer details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
STRESS TESTING. Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. A graceful degradation under load leading to non-catastrophic failure is the desired result. Often Stress Testing is performed using the same process as Performance Testing but employing a very high level of simulated load.
UNIT TESTING. Functional and reliability testing in an Engineering environment. Producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.
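As an illustration of the unit level, here is a minimal test written with Python's built-in unittest framework; the cart-total function and its expected values are assumptions invented for the example.

    # A unit test for a hypothetical cart-total function, exercised
    # prior to system integration.
    import unittest

    def cart_total(prices, tax_rate):
        # Return the taxed total for a list of item prices.
        return round(sum(prices) * (1 + tax_rate), 2)

    class CartTotalTest(unittest.TestCase):
        def test_typical_order(self):
            self.assertEqual(cart_total([10.00, 5.50], 0.08), 16.74)

        def test_empty_cart(self):
            self.assertEqual(cart_total([], 0.08), 0.0)

    if __name__ == "__main__":
        unittest.main()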

Manual testing

Manual testing is the oldest and most rigorous type of software testing. Manual testing requires a tester to perform manual test operations on the test software without the help of test automation. Manual testing is a laborious activity that requires the tester to possess a certain set of qualities: to be patient, observant, speculative, creative, innovative, open-minded, resourceful, unopinionated, and skillful.

As a tester, it is always advisable to use both manual white-box and black-box testing techniques on the test software. Manual testing helps discover and record any software bugs or discrepancies related to the functionality of the product.

Manual testing can be partially replaced by test automation: it is possible to record and play back manual steps and to write automated test scripts using test automation tools. However, test automation tools only help execute test scripts written primarily to exercise a particular specification and functionality; they lack the ability to make decisions and to record unscripted discrepancies during program execution. It is recommended to perform manual testing of the entire product at least a couple of times before deciding to automate its more mundane test activities.

Manual testing helps discover defects in the areas of usability testing and GUI testing. While performing manual tests, the software application can be validated against the various standards defined for effective and efficient usage and accessibility. For example, the standard location of the OK button on a screen is on the left, and that of the CANCEL button on the right. During manual testing you might discover that on some screen it is not. This is a new defect related to the usability of the screen. In addition, there could be many cases where the GUI is not displayed correctly even though the basic functionality of the program is correct. Such bugs are not detectable using test automation tools.

Repetitive manual testing can be difficult to perform on large software applications or on applications having very large dataset coverage. This drawback is compensated for by manual black-box testing techniques such as equivalence partitioning and boundary value analysis, with which vast dataset specifications can be divided and converted into a more manageable and achievable set of test suites.
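Here is a small Python sketch of those two techniques for a hypothetical age field that accepts 18 through 120: one representative value per equivalence class, plus the values at and around each boundary.

    # Equivalence partitioning and boundary value analysis for a
    # hypothetical age field with an assumed valid range of 18-120.
    VALID_RANGE = (18, 120)

    def is_accepted(age):
        # Stand-in for the real form validation under test.
        return VALID_RANGE[0] <= age <= VALID_RANGE[1]

    # One representative value per equivalence class.
    partitions = {
        "below valid range": 17,
        "within valid range": 50,
        "above valid range": 121,
    }

    # The values at and around each boundary.
    boundaries = [17, 18, 19, 119, 120, 121]

    for name, value in partitions.items():
        print(f"partition {name}: {value} -> " + ("accepted" if is_accepted(value) else "rejected"))
    for value in boundaries:
        print(f"boundary: {value} -> " + ("accepted" if is_accepted(value) else "rejected"))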

A manual tester would typically perform the following steps for manual testing:

  1. Understand the functionality of the program
  2. Prepare a test environment
  3. Execute test case(s) manually
  4. Verify the actual result
  5. Record the result as Pass or Fail
  6. Make a summary report of the Pass and Fail test cases
  7. Publish the report
  8. Record any new defects uncovered during the test case execution
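Steps 5 through 7 amount to tallying verdicts into a summary report. A minimal Python sketch, with illustrative test names:

    # Tally pass/fail verdicts into a short summary report.
    from collections import Counter

    results = {
        "login with valid account": "pass",
        "add item to cart": "pass",
        "checkout with expired card": "fail",
    }

    summary = Counter(results.values())
    print(f"{summary['pass']} passed, {summary['fail']} failed of {len(results)} cases")
    for name, verdict in results.items():
        if verdict == "fail":
            print("  FAIL: " + name)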

There is no complete substitute for manual testing. Manual testing is crucial for testing software applications more thoroughly. Test automation has become a necessity mainly due to shorter deadlines for performing test activities, such as regression testing, performance testing, and load testing.

Software testing

Software testing is the process used to measure the quality of developed computer software. Usually, quality is constrained to such topics as correctness, completeness, security, but can also include more technical requirements as described under the ISO standard ISO 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behaviour of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.

Today, software has grown in complexity and size. The software product developed by a developer is built according to the System Requirement Specification. Every software product has a target audience. For example, video game software has a completely different audience from banking software. Therefore, when an organization invests large sums in making a software product, it must ensure that the product is acceptable to the end users, its target audience. This is where software testing comes into play. Software testing is not merely finding defects or bugs in the software; it is the completely dedicated discipline of evaluating the quality of the software.

There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following routine procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are operations the tester attempts to execute with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to that of review or inspection, the word testing is also used to connote the dynamic analysis of the product—putting the product through its paces. Sometimes one therefore refers to reviews, walkthroughs or inspections as "static testing", whereas actually running the program with a given set of test cases in a given development stage is often referred to as "dynamic testing", to emphasize the fact that formal review processes form part of the overall testing scope.

QTP interview questions


  1. What are the Features & Benefits of Quick Test Pro (QTP 8.0)? - Operates stand-alone, or integrated into Mercury Business Process Testing and Mercury Quality Center. Introduces next-generation zero-configuration Keyword Driven testing technology in Quick Test Professional 8.0 allowing for fast test creation, easier maintenance, and more powerful data-driving capability. Identifies objects with Unique Smart Object Recognition, even if they change from build to build, enabling reliable unattended script execution. Collapses test documentation and test creation to a single step with Auto-documentation technology. Enables thorough validation of applications through a full complement of checkpoints.
  2. How to handle the exceptions using recovery scenario manager in QTP? - There are four trigger events during which a recovery scenario should be activated: a pop-up window appears in an opened application during the test run; a property of an object changes its state or value; a step in the test does not run successfully; an open application fails during the test run. These triggers are considered exceptions. You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three steps: 1. Triggered Events 2. Recovery Steps 3. Post-Recovery Test Run
  3. What is the use of Text output value in QTP? - Output values enable you to view the values that the application takes during run time. When parameterized, the values change for each iteration. Thus by creating output values, we can capture the values that the application takes for each run and output them to the data table.
  4. How to use the Object Spy in QTP 8.0 version? - There are two ways to spy on objects in QTP: 1) through the File toolbar: click the last toolbar button (an icon showing a person with a hat); 2) through the Object Repository dialog: click the Object Spy button. In the Object Spy dialog, click the button showing the hand symbol. The pointer then changes into a hand symbol, and we point it at the object to spy on its state. If the object is not visible, or the window is minimized, hold the Ctrl button, activate the required window, and then release the Ctrl button.
  5. How is run-time data (parameterization) handled in QTP? - You can enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations, without programming, to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files.
  6. What is Keyword View and Expert View in QTP? - With Quick Test's Keyword Driven approach, test automation experts have full access to the underlying test and object properties, via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View. Advanced testers can view and edit their tests in the Expert View, which reveals the underlying industry-standard VBScript that Quick Test Professional automatically generates. Any changes made in the Expert View are automatically synchronized with the Keyword View.
  7. Explain about the Test Fusion Report of QTP? - Once a tester has run a test, a Test Fusion report displays all aspects of the test run: a high-level results overview, an expandable Tree View of the test specifying exactly where application failures occurred, the test data used, application screen shots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining Test Fusion reports with Quick Test Professional, you can share reports across an entire QA and development team.
  8. Which environments does QTP support? - Quick Test Professional supports functional testing of all enterprise environments, including Windows, Web, .NET, Java/J2EE, SAP, Siebel, Oracle, PeopleSoft, Visual Basic, ActiveX, mainframe terminal emulators, and Web services.
  9. What is QTP? - Quick Test is a graphical interface record-playback automation tool. It is able to work with any web, Java, or Windows client application. Quick Test enables you to test standard web objects and ActiveX controls. In addition to these environments, Quick Test Professional also enables you to test Java applets and applications and multimedia objects on applications, as well as standard Windows applications, Visual Basic 6 applications, and .NET framework applications.
  10. Explain QTP Testing process? - Quick Test testing process consists of 6 main phases:
  11. Create your test plan - Prior to automating there should be a detailed description of the test including the exact steps to follow, data to be input, and all items to be verified by the test. The verification information should include both data validations and existence or state verifications of objects in the application.
  12. Recording a session on your application - As you navigate through your application, Quick Test graphically displays each step you perform in the form of a collapsible icon-based test tree. A step is any user action that causes or makes a change in your site, such as clicking a link or image, or entering data in a form.
  13. Enhancing your test - Inserting checkpoints into your test lets you search for a specific value of a page, object or text string, which helps you identify whether or not your application is functioning correctly. NOTE: Checkpoints can be added to a test as you record it or after the fact via the Active Screen. It is much easier and faster to add the checkpoints during the recording process. Broadening the scope of your test by replacing fixed values with parameters lets you check how your application performs the same operations with multiple sets of data. Adding logic and conditional statements to your test enables you to add sophisticated checks to your test.
  14. Debugging your test - If changes were made to the script, you need to debug it to check that it operates smoothly and without interruption.
  15. Running your test on a new version of your application - You run a test to check the behavior of your application. While running, Quick Test connects to your application and performs each step in your test.
  16. Analyzing the test results - You examine the test results to pinpoint defects in your application.
  17. Reporting defects - As you encounter failures in the application when analyzing test results, you will create defect reports in Defect Reporting Tool.
  18. Explain the QTP Tool interface. - It contains the following key elements: Title bar, displaying the name of the currently open test; Menu bar, displaying menus of Quick Test commands; File toolbar, containing buttons to assist you in managing tests; Test toolbar, containing buttons used while creating and maintaining tests; Debug toolbar, containing buttons used while debugging tests (note: the Debug toolbar is not displayed when you open Quick Test for the first time; you can display it by choosing View - Toolbars - Debug); Action toolbar, containing buttons and a list of actions, enabling you to view the details of an individual action or the entire test flow (note: the Action toolbar is not displayed when you open Quick Test for the first time; you can display it by choosing View - Toolbars - Action; if you insert a reusable or external action in a test, the Action toolbar is displayed automatically); Test pane, containing two tabs to view your test - the Tree View and the Expert View; Test Details pane, containing the Active Screen; Data Table, containing two tabs, Global and Action, to assist you in parameterizing your test; Debug Viewer pane, containing three tabs to assist you in debugging your test - Watch Expressions, Variables, and Command (the Debug Viewer pane can be opened only when a test run pauses at a breakpoint); Status bar, displaying the status of the test.
  19. How does QTP recognize Objects in AUT? - Quick Test stores the definitions for application objects in a file called the Object Repository. As you record your test, Quick Test will add an entry for each item you interact with. Each Object Repository entry will be identified by a logical name (determined automatically by Quick Test), and will contain a set of properties (type, name, etc) that uniquely identify each object. Each line in the Quick Test script will contain a reference to the object that you interacted with, a call to the appropriate method (set, click, check) and any parameters for that method (such as the value for a call to the set method). The references to objects in the script will all be identified by the logical name, rather than any physical, descriptive properties.
  20. What are the types of Object Repositories in QTP? - Quick Test has two types of object repositories for storing object information: shared object repositories and action object repositories. You can choose which type of object repository you want to use as the default type for new tests, and you can change the default as necessary for each new test. The object repository per-action mode is the default setting. In this mode, Quick Test automatically creates an object repository file for each action in your test so that you can create and run tests without creating, choosing, or modifying object repository files. However, if you do modify values in an action object repository, your changes do not have any effect on other actions. Therefore, if the same test object exists in more than one action and you modify an object’s property values in one action, you may need to make the same change in every action (and any test) containing the object.
  21. Explain the checkpoints in QTP? - A checkpoint verifies that expected information is displayed in an application while the test is running. You can add eight types of checkpoints to your test for standard web objects using QTP. A page checkpoint checks the characteristics of an application. A text checkpoint checks that a text string is displayed in the appropriate place in an application. An object checkpoint (Standard) checks the values of an object in an application. An image checkpoint checks the values of an image in an application. A table checkpoint checks information within a table in an application. An accessibility checkpoint checks the web page for Section 508 compliance. An XML checkpoint checks the contents of individual XML data files or XML documents that are part of your Web application. A database checkpoint checks the contents of databases accessed by your web site.
  22. In how many ways can we add checkpoints to an application using QTP? - We can add checkpoints while recording the application, or we can add them after recording is completed using the Active Screen. (Note: for the second method, the Active Screen must have been enabled while recording.)
  23. How does QTP identify objects in the application? - QTP identifies the object in the application by Logical Name and Class.
  24. What is Parameterizing Tests? - When you test your application, you may want to check how it performs the same operations with multiple sets of data. For example, suppose you want to check how your application responds to ten separate sets of data. You could record ten separate tests, each with its own set of data. Alternatively, you can create a parameterized test that runs ten times: each time the test runs, it uses a different set of data. (A generic sketch of this data-driven idea follows this list.)
  25. What is test object model in QTP? - The test object model is a large set of object types or classes that Quick Test uses to represent the objects in your application. Each test object class has a list of properties that can uniquely identify objects of that class and a set of relevant methods that Quick Test can record for it. A test object is an object that Quick Test creates in the test or component to represent the actual object in your application. Quick Test stores information about the object that will help it identify and check the object during the run session.
  26. What is Object Spy in QTP? - Using the Object Spy, you can view the properties of any object in an open application. You use the Object Spy pointer to point to an object. The Object Spy displays the selected object’s hierarchy tree and its properties and values in the Properties tab of the Object Spy dialog box.
  27. What is the difference between an image checkpoint and a bitmap checkpoint? - Image checkpoints enable you to check the properties of a Web image. You can check an area of a Web page or application as a bitmap. While creating a test or component, you specify the area you want to check by selecting an object. You can check an entire object or any area within an object. Quick Test captures the specified object as a bitmap, and inserts a checkpoint in the test or component. You can also choose to save only the selected area of the object with your test or component in order to save disk space. For example, suppose you have a Web site that can display a map of a city the user specifies. The map has control keys for zooming. You can record the new map that is displayed after one click on the control key that zooms in the map. Using the bitmap checkpoint, you can check that the map zooms in correctly. You can create bitmap checkpoints for all supported testing environments (as long as the appropriate add-ins are loaded). Note: the results of bitmap checkpoints may be affected by factors such as operating system, screen resolution, and color settings.
  28. How many ways we can parameterize data in QTP? - There are four types of parameters: Test, action or component parameters enable you to use values passed from your test or component, or values from other actions in your test. Data Table parameters enable you to create a data-driven test (or action) that runs several times using the data you supply. In each repetition, or iteration, Quick Test uses a different value from the Data Table. Environment variable parameters enable you to use variable values from other sources during the run session. These may be values you supply, or values that Quick Test generates for you based on conditions and options you choose. Random number parameters enable you to insert random numbers as values in your test or component. For example, to check how your application handles small and large ticket orders, you can have Quick Test generate a random number and insert it in a number of tickets edit field.
  29. How do u do batch testing in WR & is it possible to do in QTP, if so explain? - Batch Testing in WR is nothing but running the whole test set by selecting Run Test set from the Execution Grid. The same is possible with QTP also. If our test cases are automated then by selecting Run Test set all the test scripts can be executed. In this process the Scripts get executed one by one by keeping all the remaining scripts in Waiting mode.
  30. If I give you some thousand tests to execute in 2 days, what do you do? - Ad hoc testing is done. It covers at least the basic functionalities to verify that the system is working fine.
  31. What does it mean when a checkpoint is in red color? What do you do? - A red color indicates failure. Here we analyze the cause of the failure: whether it is a script issue, an environment issue, or an application issue.
  32. What is the file extension of the code file & object repository file in QTP? - The code file extension is .vbs and the object repository file extension is .tsr
  33. Explain the concept of object repository & how QTP recognizes objects? - Object Repository: displays a tree of all objects in the current component or in the current action or entire test (depending on the object repository mode you selected). We can view or modify the test object description of any test object in the repository, or add new objects to the repository. Quick Test learns the default property values and determines in which test object class the object fits. If that is not enough, it adds assistive properties, one by one, to the description until it has compiled a unique description. If no assistive properties are available, then it adds a special ordinal identifier, such as the object's location on the page or in the source code.
  34. What are the properties you would use for identifying a browser & page when using descriptive programming? - Name would be another property apart from title that we can use.
  35. Give me an example where you have used a COM interface in your QTP project? - A COM interface appears in the scenario of a front end and a back end. For example, if you are using Oracle as the back end and VB (or any language) as the front end, then for better compatibility we go through an interface, of which COM is one. CreateObject creates a handle to an instance of the specified object so that the program can use the methods on that object. It is used for implementing Automation (as defined by Microsoft).
  36. Explain in brief the QTP Automation Object Model. - Essentially all configuration and run functionality provided via the Quick Test interface is in some way represented in the Quick Test automation object model via objects, methods, and properties. Although a one-to-one comparison cannot always be made, most dialog boxes in Quick Test have a corresponding automation object, most options in dialog boxes can be set and/or retrieved using the corresponding object property, and most menu commands and other operations have corresponding automation methods. You can use the objects, methods, and properties exposed by the Quick Test automation object model, along with standard programming elements such as loops and conditional statements, to design your program.
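QTP drives parameterization from its Data Table and generates VBScript; purely as a tool-neutral sketch of the same data-driven idea (not QTP itself), the following Python fragment runs one login check per data row. The login function and the data are illustrative assumptions.

    # A data-driven test: each row of the "data table" is one iteration.
    DATA_TABLE = [
        {"username": "valid_user", "password": "right_pw", "expected": "accepted"},
        {"username": "valid_user", "password": "wrong_pw", "expected": "rejected"},
        {"username": "",           "password": "right_pw", "expected": "rejected"},
    ]

    def attempt_login(username, password):
        # Stand-in for driving the application under test.
        ok = username == "valid_user" and password == "right_pw"
        return "accepted" if ok else "rejected"

    for row in DATA_TABLE:
        actual = attempt_login(row["username"], row["password"])
        verdict = "pass" if actual == row["expected"] else "FAIL"
        print(f"{verdict}: login({row['username']!r}, {row['password']!r}) -> {actual}")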

LoadRunner interview questions
  1. What is load testing? - Load testing is to test whether the application works fine with the load that results from a large number of simultaneous users and transactions, and to determine whether it can handle peak usage periods.
  2. What is Performance testing? - Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi user environment to determine the effect of multiple transactions on the timing of a single transaction.
  3. Did you use LoadRunner? What version? - Yes. Version 7.2.
  4. Explain the Load testing process? -
    Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish load-testing objectives.
    Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
    Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us.
    Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
    Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
    Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.
  5. When do you do load and performance testing? - We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system? does it crash? will it work with different software applications and platforms? can it hold so many hundreds and thousands of users? This is when we do load and performance testing.
  6. What are the components of LoadRunner? - The components of LoadRunner are The Virtual User Generator, Controller, and the Agent process, LoadRunner Analysis and Monitoring, LoadRunner Books Online.
  7. What Component of LoadRunner would you use to record a Script? - The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.
  8. What Component of LoadRunner would you use to play Back the script in multi user mode? - The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.
  9. What is a rendezvous point? - You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
  10. What is a scenario? - A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
  11. Explain the recording mode for web Vuser script? - We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.
  12. Why do you create parameters? - Parameters are like script variables. They are used to vary input to the server and to emulate real users. Different sets of data are sent to the server each time the script is run. Parameters better simulate the usage model for more accurate testing from the Controller: one script can emulate many different users on the system.
  13. What is correlation? Explain the difference between automatic correlation and manual correlation? - Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific. Here values are replaced by data which are created by these rules. In manual correlation, we scan for the value we want to correlate and use Create Correlation to correlate it.
  14. How do you find out where correlation is required? Give a few examples from your projects? - Two ways: first, we can scan for correlations and see the list of values which can be correlated; from this we can pick a value to be correlated. Second, we can record two scripts and compare them; we can look at the difference file to see the values which need to be correlated. In my project, there was a unique id developed for each customer. It was nothing but the Insurance Number; it was generated automatically, it was sequential, and this value was unique. I had to correlate this value in order to avoid errors while running my script. I did this using scan for correlation.
  15. Where do you set automatic correlation options? - Automatic correlation from the web point of view can be set in the recording options, correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for the correlation. Automatic correlation for a database can be done by using the show output window, scanning for correlation, picking the correlated query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we just create a correlation for the value and specify how the value is to be created.
  16. What is a function to capture dynamic values in the web Vuser script? - The web_reg_save_param function saves dynamic data information to a parameter.
  17. When do you disable logging in Virtual User Generator? When do you choose standard and extended logs? - Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log Option: when you select Standard log, it creates a standard log of functions and messages sent during script execution to use for debugging; disable this option for large load testing scenarios (when you copy a script to a scenario, logging is automatically disabled). Extended Log Option: select Extended log to create an extended log, including warnings and other messages; disable this option for large load testing scenarios (when you copy a script to a scenario, logging is automatically disabled). We can specify which additional information should be added to the extended log using the Extended Log options.
  18. How do you debug a LoadRunner script? - VuGen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within your script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.
  19. How do you write user-defined functions in LR? Give me a few functions you wrote in your previous project? - Before we create user-defined functions, we need to create the external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we assign the user-defined function as a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). Examples of user-defined functions are as follows: GetVersion, GetCurrentTime, and GetPltform are some of the user-defined functions used in my earlier project.
  20. What are the changes you can make in run-time settings? - The run-time settings that we make are: a) Pacing - it has the iteration count. b) Log - under this we have Disable logging, Standard log, and Extended log. c) Think Time - in think time we have two options: Ignore think time and Replay think time. d) General - under the General tab we can set the Vusers to run as a process or as multithreading, and whether each step is a transaction.
  21. Where do you set iterations for Vuser testing? - We set iterations in the Run-Time Settings of VuGen: open Run-Time Settings, select the Pacing tab, and set the number of iterations.
  22. How do you perform functional testing under load? - Functionality under load can be tested by running several Vusers concurrently. By increasing the number of Vusers, we can determine how much load the server can sustain while still functioning correctly.
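    One common way to keep verifying functionality while the load runs is to register a content check and time the step as a transaction, roughly like this (the URL and the expected page text are placeholders):

      web_reg_find("Text=Welcome", LAST);   // fail the step if the expected text is missing
      lr_start_transaction("login");
      web_url("login", "URL=http://example.com/login", LAST);
      lr_end_transaction("login", LR_AUTO);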
  23. What is ramp up? How do you set it? - This option is used to gradually increase the number of Vusers, and hence the load, on the server. An initial value is set, along with a wait time between intervals. To set ramp up, go to the Scenario Scheduling Options.
  24. What is the advantage of running a Vuser as a thread? - VuGen provides the facility to use multithreading, which enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each and every Vuser, taking up a large amount of memory and limiting the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100); each thread shares the memory of the parent driver program, enabling more Vusers to be run per generator.
  25. If you want to stop the execution of your script on error, how do you do that? - The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section, and end the run. This function is useful when you need to abort a script as the result of a specific error condition. When a script is ended this way, the Vuser is assigned the status "Stopped". For this to take effect, we first have to uncheck the "Continue on error" option in the Run-Time Settings.
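    A minimal sketch, assuming a hypothetical parameter {orderCount} captured by an earlier step:

      if (atoi(lr_eval_string("{orderCount}")) == 0) {
          lr_error_message("No orders returned; aborting this Vuser");
          lr_abort();   // runs vuser_end, then stops the Vuser with status "Stopped"
      }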
  26. What is the relation between response time and throughput? - The Throughput graph shows the amount of data, in bytes, that the Vusers received from the server each second. When we compare this with the transaction response time, we notice that as throughput decreases, response time also decreases; likewise, the peak throughput and the highest response time occur at approximately the same time.
  27. Explain the configuration of your systems. - The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, and so on. This client configuration should be considered together with the overall system configuration (the network infrastructure, the web server, the database server, and any other components of the larger system) so as to achieve the load testing objectives.
  28. How do you identify performance bottlenecks? - Performance bottlenecks can be detected by using monitors: application server monitors, web server monitors, database server monitors, and network monitors. They help find the trouble spots in our scenario that cause increased response time. The measurements made are usually response time, throughput, hits per second, network delay graphs, and so on.
  29. If the web server, database, and network are all fine, where could the problem be? - The problem could be in the system itself, in the application server, or in the code written for the application.
  30. How did you find web server related issues? - Using the web resource monitors, we can measure the performance of the web server. With these monitors we can analyze the throughput on the web server, the number of hits per second during the scenario, the number of HTTP responses per second, and the number of pages downloaded per second.
  31. How did you find database related issues? - By running the database monitor, and with the help of the Data Resource graph, we can find database related issues. For example, you can specify the resource you want to measure before running the Controller, and then see the database related issues in the graph.
  32. Explain all the web recording options?
  33. What is the difference between an overlay graph and a correlate graph? - Overlay graph: it overlays the contents of two graphs that share a common x-axis. The left y-axis of the merged graph shows the current graph's values, and the right y-axis shows the values of the graph that was merged in. Correlate graph: it plots the y-axes of two graphs against each other. The active graph's y-axis becomes the x-axis of the merged graph, and the y-axis of the graph that was merged becomes the merged graph's y-axis.
  34. How did you plan the load? What are the criteria? - A load test is planned to decide the number of users, the kinds of machines to use, and where they will be run from. It is based on two important documents: the task distribution diagram and the transaction profile. The task distribution diagram gives us the number of users for a particular transaction and the timing of the load; the peak usage and off-peak usage are decided from this diagram. The transaction profile gives us the transaction names and their priority levels with regard to the scenario we are designing.
  35. What does the vuser_init action contain? - The vuser_init action contains the procedures to log in to a server.
  36. What does the vuser_end action contain? - The vuser_end section contains the log-off procedures.
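    Roughly, the two sections of the script skeleton look like this (the URLs are placeholders; in VuGen these sections live in separate files):

      vuser_init()
      {
          web_url("login", "URL=http://example.com/login", LAST);   // runs once, before the iterations
          return 0;
      }

      vuser_end()
      {
          web_url("logout", "URL=http://example.com/logout", LAST); // runs once, after the last iteration
          return 0;
      }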
  37. What is think time? How do you change the threshold? - Think time is the time a real user waits between actions. For example, when a user receives data from a server, the user may wait several seconds to review the data before responding; this delay is the think time. Changing the threshold: the threshold level is the level below which a recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording Options of VuGen.
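    In the recorded script, a user delay above the threshold shows up as a call such as:

      lr_think_time(8);   // pause 8 seconds, as the recorded user did

    A pause below the threshold (under 5 seconds by default) is simply not emitted into the script.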
  38. What is the difference between the standard log and the extended log? - The standard log sends a subset of the functions and messages issued during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. It is mainly used during debugging, when we want information about parameter substitution, data returned by the server, or an advanced trace.
  39. Explain the following functions. - lr_debug_message: sends a debug message to the output log when the specified message class is set. lr_output_message: sends notifications to the Controller Output window and the Vuser log file. lr_error_message: sends an error message to the LoadRunner Output window. lrd_stmt: associates a character string (usually a SQL statement) with a cursor; this function sets a SQL statement to be processed. lrd_fetch: fetches the next row from the result set.
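    For instance, the message functions are used like this (the message text and the {pUser} parameter are illustrative):

      lr_output_message("Starting iteration for user %s", lr_eval_string("{pUser}"));
      lr_error_message("Login failed for user %s", lr_eval_string("{pUser}"));
      lr_debug_message(LR_MSG_CLASS_EXTENDED_LOG, "Reached checkout step");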
  40. Throughput - If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.
  41. Types of goals in a goal-oriented scenario - LoadRunner provides five types of goals in a goal-oriented scenario:
    • the number of concurrent Vusers
    • the number of hits per second
    • the number of transactions per second
    • the number of pages per minute
    • the transaction response time that you want your scenario to reach
  42. Analysis scenario (bottlenecks): When the Running Vusers graph is correlated with the response time graph, you can see that as the number of Vusers increases, the average response time of the check itinerary transaction increases very gradually; in other words, the average response time steadily increases as the load increases. At 56 Vusers there is a sudden, sharp increase in the average response time. We say that the test broke the server; that point gives the mean time between failures (MTBF). The response time clearly began to degrade when more than 56 Vusers were running simultaneously.
  43. What is correlation? Explain the difference between automatic correlation and manual correlation. - Correlation is used to obtain data that are unique to each run of the script or that are generated by nested queries. Correlation provides the value to avoid errors arising from duplicate values and also optimizes the code (avoiding nested queries). Automatic correlation is where we set rules for correlation; it can be application-server specific, and values are replaced with data created by these rules. In manual correlation, the script is scanned for the value we want to correlate, and create correlation is used to correlate it.