Thursday, December 30, 2010

Difference between Change Related Software Testing like Confirmation Testing & Regression Testing

During software testing we encounter failures. In most cases the underlying defects are corrected and the corrected test object is handed back to the testers. At this point we iterate in the test process and return to test execution, to perform confirmation testing and regression testing.

Confirmation testing and regression testing are important activities in test execution. They can appear in all the test levels from component testing to (one hopes rarely) acceptance testing and even during maintenance of a product in operation.

These two types of change-related software testing activities have one thing in common: they are executed after defect correction. Apart from that, they have very different goals.



In the figure above the test object with a defect is shown to the left. The defect has been discovered during the software testing activity. The defect has subsequently been corrected and we have got the new test object back again for testing; this is the one to the right.

What we must do now are confirmation testing and regression testing of the corrected test object.

Confirmation Testing:
Confirmation testing is the first to be performed after defect correction. It is done to ensure that the defect has indeed been successfully removed. The test that originally unveiled the defect by causing a failure is executed again and this time it should pass without problems. This is illustrated by the dark rectangle in the place where the defect was.

Regression Testing:

Regression testing may and should then be performed.

Regression testing is the repetition of tests that have already been performed without problems, to confirm that no new defects have been introduced or uncovered by the change. In other words, it ensures that the object under test has not regressed.

The following example shows a case of regression:

A fault in a document was corrected using "replace all" to change the word "Author" to the word "Speaker". This had an unintended effect in one of the paragraphs:

"… If you are providing the Presentation as part of your duties with your company or another company, please let me know and have a Speakerized representative of the company also sign this Agreement."

The amount of regression testing can vary from a complete rerun of all the test procedures that have already passed, to, well, in reality, no regression testing at all. The amount depends on issues such as:

1) The risk involved

2) The architecture of the system or product

3) The nature of the defect that has been corrected

The amount of regression testing we choose to do must be justified in accordance with the strategy for the test.

Regression testing should be performed whenever something in or around the object under test has changed. Fault correction is an obvious reason. There can also be other, more external or environmental, changes that should cause us to consider regression testing.

An example of an environment change could be the installation of a new version of the underlying database administration system or operating system. Experience shows that such updates may have the strangest effects on systems or products previously running without problems.
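To make the confirmation/regression distinction concrete in an automated suite, here is a minimal, hypothetical pytest-style sketch, loosely following the "Author"/"Speaker" example above. The replace_all function and its tests are invented purely for illustration: the confirmation test re-runs the exact test that originally exposed the failure, while the regression tests re-run previously passing tests to check that the fix has not broken anything else.

import re

def replace_all(text: str, old: str, new: str) -> str:
    # Corrected implementation: replace whole words only, so that words
    # merely containing the target (e.g. "authorized") are left alone.
    return re.sub(rf"\b{re.escape(old)}\b", new, text)

# Confirmation test: re-run the test that originally failed, to confirm
# that the defect has indeed been removed.
def test_confirmation_replace_whole_words_only():
    text = "the Author signs; an authorized representative also signs"
    assert replace_all(text, "Author", "Speaker") == \
        "the Speaker signs; an authorized representative also signs"

# Regression tests: previously passing tests, re-run to confirm the fix
# has not broken behaviour that already worked.
def test_regression_simple_replacement():
    assert replace_all("Author", "Author", "Speaker") == "Speaker"

def test_regression_no_match_leaves_text_unchanged():
    assert replace_all("no match here", "Author", "Speaker") == "no match here"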

Wednesday, December 22, 2010

Escaped Defects Found

Definition

An escaped defect is a defect that was not found by, or that escaped from, the quality assurance team. Typically, these issues are found by end users after the released version has been made available to them. The Escaped Defects Found metric counts the number of new escaped defects found over a period of time (day, week, month).
Calculation
To be able to calculate this metric, it is important that your defect tracking system tracks:

* affected version, the version of the software in which the defect was found
* release date, the date when the version was released

The calculation process (a minimal code sketch follows this list):

* Find all versions that have already been released.
* For each version, find all defects that affect that version.
* If a defect's creation date is after the version's release date, the defect is an escaped defect.
* Count all those escaped defects.
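As an illustration only, here is a minimal Python sketch of that calculation using hypothetical in-memory records; the field names (affected_version, created) are invented, and a real defect tracker would expose this data through its own API or export.

from datetime import date

# Hypothetical data; a real defect tracker would supply these records.
releases = {"1.0": date(2010, 9, 1), "1.1": date(2010, 11, 1)}
defects = [
    {"id": 1, "affected_version": "1.0", "created": date(2010, 8, 20)},  # found before release
    {"id": 2, "affected_version": "1.0", "created": date(2010, 9, 15)},  # escaped
    {"id": 3, "affected_version": "1.1", "created": date(2010, 11, 5)},  # escaped
]

def escaped_defects(defects, releases):
    """Return defects reported after their affected version was released."""
    return [
        d for d in defects
        if d["affected_version"] in releases
        and d["created"] > releases[d["affected_version"]]
    ]

print(len(escaped_defects(defects, releases)))  # -> 2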

Dimensions
It should be possible to monitor Escaped Defects Found in the following dimensions:

* Affected version, to monitor the metric value for any released version.
* Project/Product, to aggregate Escaped Defects Found over all released versions of the project or product.

Usage
I like this metric because tracking it sends the correct message to the development team: functionality that is released should be of good quality. Fewer escaped defects almost always means the QA team has done a good job.

Another important reason to keep the number of escaped defects low is that fixing a single escaped software defect can take from one week to several weeks of effort, since you have to include the time to isolate, repair, check out, retest, reconfigure and redistribute. Fixing a defect inside the development iteration is always much cheaper.
Presentation
The best presentation is a simple bar chart, where each bar shows the number of escaped defects found per day/week/month/quarter/year. Depending on your requirements you may want to create this chart for the whole company or for each product. Also, to compare the quality of different versions it can make sense to show a bar chart with the current number of escaped defects for each version of the product.

Monday, December 13, 2010

Maximizing QA Performance

An important element of any IT strategy is to ensure the deployment of defect free systems. Among other benefits, this helps significantly minimize the total cost of ownership (TCO) of applications. However, organizations quickly discover that despite their best intentions and efforts, their QA team has hit a ceiling from a defect leakage standpoint. It seems as if an invisible hurdle is preventing the QA team from achieving its true potential - deploying defect free systems. "So what are we not seeing in this picture?"

While tool based automation or customized test automation is definitely beneficial in lowering defects, it is an option which requires significant investments. Hence organizations, with their budgets stretched thin, are constantly on the lookout for inexpensive options. Maximizing the performance of existing QA teams is often the 'low hanging fruit'. In an attempt to squeeze the maximum out of their QA teams, organizations often overlook a few extraneous factors that play havoc with the team's testing effectiveness. Without an eye for these factors, they can go unnoticed and continue to bleed the organization.

Based on our extensive and varied experience in test engagements, we have identified a few factors which we believe can help QA teams unlock their true potential. Before we start, we would like to quote an old saying: "To improve the condition we need to first start measuring it". In line with this quote, we recommend organizations begin by collecting metrics that indicate the current health of their QA programs. Most companies miss this basic but critical point in improving processes. Once we have measured the current state of the QA programs, we are all set to further analyze the shortcomings and maximize the performance of QA teams.
How many releases does the application go through in a year?

This provides a good indication of whether sufficient time is being allocated for every QA iteration. Across engagements we have seen clients plan from as many as 12 releases in a year to as few as 3 in a year. It is essential for IT teams to plan the number of releases in combination with business teams, such that only releases which are necessitated by business needs are included. Allowing the QA team to 'huddle and regroup' before the next iteration is extremely important, and the time needed for QA iterations must not be viewed solely as test script execution time. Root cause analysis, test script updates based on requirements at the beginning, and UAT result analysis at the end of the test cycle are crucial steps in the QA process too and demand equal attention. Through these steps the QA team is able to refine its processes and prevent a recurrence of defects. Frequent releases compromise the effectiveness of these crucial steps and curtail the feedback loop, limiting the improvement in defect leakage. Hence, every application should have an optimum number of releases in a year and this should not be exceeded.

The best way to avoid having more than the optimal number of releases in a year is to educate the business team on the pros and cons of increasing or decreasing the release frequency. It is absolutely essential that the business and QA teams agree upon a plan that takes into consideration the resources available to the QA team.
How many 'code drops' are allowed per release?

Often, frequent changes to the requirements result in increased instances of 'code drops' within a release. At times the frequency of code drops is not even planned, making it almost impossible for QA teams to test effectively before a fresh code drop occurs. The solution lies in defining requirements freeze dates, after which no new change requests should be accommodated.
Is there a large variation in the size of every release?

The frequency of a release by itself will not set off any alarms; it needs to be considered in conjunction with the size of the release. The size of the release directly impacts the number of enhancements to be made to test scripts, some of them permanent, which need to be diligently incorporated into the QA plan. Further, releases which modify the base or common functionalities of a system need to be viewed as large releases even if their enhancement time is not large. This is because such releases have sweeping implications for the system, and an improper assessment of the size of the release from a QA standpoint can lead to heightened levels of defect leakage. Furthermore, variations in the size of releases lead to variations in the size of the QA team, which makes effective knowledge management a huge challenge. Hence we strongly recommend that business and IT teams plan uniformly sized releases as much as possible. This can be done by prioritizing and limiting the change requests (CRs), which will help in reducing defect leakage rates and in better management of the QA team.
How process oriented is the development team and how early is the QA team engaged in the SDLC?

While defect detection during the testing phase is the onus of the QA team, the team in charge of development also plays a significant role in the performance of the QA team. Clear sign-offs during the development process and waterfall execution of lifecycles help provide the QA team with unambiguous points of entry into the development process. Further, even though the role of the QA team starts much later in the SDLC, the involvement of QA teams in the requirements phase can make them far more effective. The distinct advantage gained by having a better view of the system enhancements to be built allows the QA team to plan its tasks more effectively.
Are you providing the QA team with an interference free environment?

QA teams need environments, viz. independent servers, database instances, test data, etc., that demand investments on a continuous basis from the organization. However, organizations are faced with the twin dilemma of reducing costs and simultaneously ensuring that sufficient investments in IT infrastructure have been made. In the bargain, QA teams are not always provided with an independent environment for testing. Dedicated test environments ensure that the QA execution plans are not upset and that testing proceeds without interference from external factors.
Have you staffed the QA team with the right onsite-offshore leverage?

In a highly globalized world where offshore teams are a given, the onsite-offshore ratio for a QA team determines the efficiency of the QA processes. Maintaining too low an onsite presence creates bottlenecks; consequently, the reviews of the enhancement documents (or change requests), test script vetting with the UAT team, etc. suffer. At the same time, a big onsite team may not always help realise the cost benefits of an offshore model either. Hence it is recommended that QA teams begin with 25-30% onsite staff and gradually target aggressive onsite ratios as low as 15-20%.
How well do you manage knowledge erosion?

This is especially important for large and complex applications. QA teams often struggle because of the knowledge erosion associated with planned or unplanned attrition in the team. In teams where the staffing size fluctuates, either due to varying release sizes or due to attrition, it is important to retain a core group of experts as the base team. An extended team, built around this base group, can then be engaged or released depending on the demands of the project. Of course, while such a model would help manage knowledge erosion to a large extent, it would need to be bolstered further by strong Knowledge Management (KM) practices.
Finally, while the above mentioned factors can help improve QA performance, it is important to track the QA program using objective metrics. Some of the metrics which we believe need to be tracked closely are:

a) Defect Leakage: This measures the capability of the QA team in detecting defects/bugs. It is important to track this number accurately, assigning specific reasons for each defect. All issues reported during QA may not necessarily be defects. Categorizing the defects as "Valid Defects", "Not a Defect", "Duplicate", "Changes in Specifications" and "Recommendations" helps identify the true performance of the QA team. The unsaid part, of course, is that there needs to be a robust defect tracking system already in place for formal recording, tracking and subsequent analysis.

b) Release frequency: This indicates the number of releases that an application is subjected to in a normal year.

c) Code drops per release: Refers to the number of builds during a release.

d) Schedule/Effort Adherence: Measures how well the QA team is performing vis-à-vis the planned milestones and bandwidth of resources.

e) 'Not-a-bug' percentage: This indicates the number of defects logged by the QA team that are not defects, as a percentage of the total reported defects. This metric is very important since it is an indication of the unproductive effort expended by the development team in investigating non-defects. It is also an indication of the need to update the documentation or improve test scripts. (A small illustrative calculation of defect leakage and 'not-a-bug' percentage follows below.)
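To make these two metrics concrete, here is a minimal Python sketch with invented counts. The definitions assumed here (defect leakage as the share of valid defects found only after release, and 'not-a-bug' percentage as the share of QA reports that turn out not to be defects) are common ones, but adapt them to however your organization defines the metrics.

# Hypothetical counts for a single release; replace with real tracker data.
issues_logged_by_qa = 135          # everything the QA team reported
not_a_defect_reports = 15          # classified as "Not a Defect", "Duplicate", etc.
valid_defects_found_in_qa = issues_logged_by_qa - not_a_defect_reports   # 120
valid_defects_found_after_release = 8                                    # escaped defects

# Defect leakage: share of valid defects that escaped to production.
defect_leakage_pct = 100.0 * valid_defects_found_after_release / (
    valid_defects_found_in_qa + valid_defects_found_after_release)

# 'Not-a-bug' percentage: share of QA reports that turned out not to be defects.
not_a_bug_pct = 100.0 * not_a_defect_reports / issues_logged_by_qa

print(f"Defect leakage:      {defect_leakage_pct:.1f}%")   # -> 6.2%
print(f"'Not-a-bug' percent: {not_a_bug_pct:.1f}%")        # -> 11.1%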
The environmental factors discussed in this document, while hidden from the immediate field of view of the team, greatly influence the outcome of the QA efforts. Hence they have to be objectively assessed and tracked to understand the influence they exercise on QA performance. By learning from them and adopting appropriate measures, QA teams can be helped to realize their true performance potential.

Monday, December 6, 2010

Concept testing


It is important to make a distinction between the different types of testing applied at different stages of the development process. This helps the development team to understand the purpose of each test and consider how data is to be captured.

Description
Different testing methods will have different objectives, approaches and types of modelling. Four general types of testing are described in more detail:

Exploratory tests
Assessment tests
Validation tests
Comparison tests
ISO 9000 tests are also briefly summarised.


Exploratory tests
Exploratory tests are carried out early in the development process, during the fuzzy front end, when the problem is still being defined and potential solutions are being considered, preferably once the development team has a good understanding of the user profile and customer needs. The objective of the exploratory test is to examine and explore the potential of preliminary design concepts and answer some basic questions, including:

What do the users think about using the concept?
Does the basic functionality have value to the user? Is the user interface appropriate and operable?
How does the user feel about the concept?
Are our assumptions about customer requirements correct?
Have we misunderstood any requirements?
This type of early analysis of concepts is potentially the most critical of all types of prototyping and evaluation, for if the development is based on faulty assumptions or misunderstanding about the needs of the users, then problems are almost inevitable later on. Data collection will tend to be qualitative based on observation, interview and discussion with the target audience. Ideally, the customer should be asked to use the product without training or prompting, to assess the intuitiveness of controls and instructions. Some quantitative measures may be appropriate, such as time to perform tasks, number of failures or errors.


Assessment tests
While the exploratory test aims to explore the appropriateness of a number of potentially competing solutions, the assessment test digs into more detail with a preferred solution at a slightly later stage of development. The main aim of an assessment test is to ensure that assumptions remain relevant and that more detailed and specific design choices are appropriate. The assessment test will tend to focus on the usability or level of functionality offered and, in some cases, may be appropriate for evaluating early levels of performance. Assuming that the right concept has been chosen, the assessment test aims to ensure that it has been implemented effectively and to answer more detailed questions, such as:

Is the concept usable?
Does the concept satisfy all user needs?
How does the user use the product and could it be more effective?
How will it be assembled and tested and could this be achieved in a better way?
Can the user complete all tasks as intended?
Assessment testing typically requires more complex or detailed models than the exploratory test. A combination of analytical models, simulations and working mock ups (not necessarily with final appearance or full tooling) will be used.

The evaluation process is likely to be relatively informal, including both internal and external stakeholders. Data will typically be qualitative and based on observation, discussion and structured interview. The study should aim to understand why users respond to the concept in the way that they do.

Validation tests
The validation test is normally conducted late in the development process to ensure that all of the product design goals have been met. This may include usability, performance, reliability, maintainability, assembly methods and robustness. Validation tests normally aim to evaluate actual functionality and performance, as is expected in the production version, and so activities should be performed in full and not simply walked through.

It is probable that the validation test is the first opportunity to evaluate all of the component elements of the product together, although elements may have been tested individually already. Thus, the product should be as near to representing the final item as possible, including packaging, documentation and production processes. Also included within validation tests will be any formal evaluation required for certification, safety or legislative purposes. Compared to an assessment test, there is a much greater emphasis on experimental rigour and consistency. It may be preferable for evaluation to be carried out independently from the design team, but with team input on developing standards and measurement criteria.

Data from a validation test is likely to be quantitative, based on measurement of performance. Normally, this is carried out against some benchmark of expected performance. Usability issues may be scored in terms of speed, accuracy or rate of use, but should always be quantified. Issues such as desirability may be measured in terms of preference or user ranking. Data should also be formally recorded, with any failures to comply with expected performance logged and appropriate corrective action determined.

Comparison tests
A comparison test may be performed at any stage of the design process, to compare a concept, product or product element against some alternative. This alternative could be an existing solution, a competitive offering or an alternative design solution. Comparison testing could include the capturing of both performance and preference data for each solution. The comparison test is used to establish a preference, determine superiority or understand the advantages and disadvantages of different designs.

ISO 9000 tests
ISO 9000 defines a number of test activities:

· Design review
A design review is a set of activities whose purpose is to evaluate how well the results of a design will meet all quality requirements. During the course of this review, problems must be identified and necessary actions proposed.

· Design verification
Design verification is a process whose purpose is to examine design and development outputs and to use objective evidence to confirm that outputs meet design and development input requirements.

· Design validation
Design validation is a process whose purpose is to examine resulting products and to use objective evidence to confirm that these products meet user needs.

Wednesday, December 1, 2010

Difference between Latent and Masked Defect.

A latent bug is a bug that has existed in a system, uncovered or unidentified, for a period of time. The bug may exist in one or more versions of the software and might only be identified after its release.

Such problems do not cause damage now, but wait to reveal themselves at a later time.

A masked defect hides another defect, which is therefore not detected at a given point in time. In other words, there is an existing defect that prevents another defect from being triggered or reproduced.
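A tiny, contrived Python sketch makes the masking effect concrete (the function and both of its bugs are invented purely for illustration): the first defect fails every valid call early, so the second defect is never exercised and stays hidden until the first one is fixed.

def discount_price(price, percent):
    """Return price reduced by the given discount percentage."""
    # Defect 1 (masking defect): the guard is inverted, so every *valid*
    # call is rejected before the calculation below ever runs.
    if 0 <= percent <= 100:            # should be: if not (0 <= percent <= 100):
        raise ValueError("percent out of range")

    # Defect 2 (masked defect): the discount is added instead of subtracted,
    # but this cannot be observed while Defect 1 rejects every valid call.
    return price + price * percent / 100   # should be: price - price * percent / 100

# While Defect 1 exists, discount_price(100, 10) raises ValueError, so the
# wrong arithmetic in Defect 2 stays hidden until Defect 1 is fixed.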

Tuesday, November 9, 2010

How to test software requirements specification (SRS)?

Did you know that most of the bugs in software are due to incomplete or inaccurate functional requirements? The software code, no matter how well it's written, can't do anything useful if there are ambiguities in the requirements.

It’s better to catch the requirement ambiguities and fix them in early development life cycle. Cost of fixing the bug after completion of development or product release is too high. So it’s important to have requirement analysis and catch these incorrect requirements before design specifications and project implementation phases of SDLC.

How to measure functional software requirement specification (SRS) documents?
Well, we need to define some standard tests to measure the requirements. Once each requirement is passed through these tests you can evaluate and freeze the functional requirements.

Let’s take an example. You are working on a web based application. Requirement is as follows:
“Web application should be able to serve the user queries as early as possible”

How will you freeze the requirement in this case?
What will be your requirement satisfaction criteria? To get the answer, ask the stakeholders this question: how much response time is OK for you?
If they say they will accept a response within 2 seconds, then this is your requirement measure. Freeze this requirement and carry out the same procedure for the next requirement.
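Once the requirement is measurable it also becomes directly testable. As an illustration only, here is a hypothetical pytest-style check that turns the frozen "response within 2 seconds" requirement into an automated test; the URL and the threshold are placeholders to be replaced with your own values.

import time
import urllib.request

RESPONSE_TIME_LIMIT_SECONDS = 2.0              # the value frozen with the stakeholders
APP_URL = "http://example.com/search?q=test"   # placeholder URL for illustration

def test_query_responds_within_agreed_limit():
    # Time a single request and compare it against the agreed limit.
    start = time.monotonic()
    with urllib.request.urlopen(APP_URL, timeout=10) as response:
        response.read()
    elapsed = time.monotonic() - start
    assert elapsed <= RESPONSE_TIME_LIMIT_SECONDS, (
        f"Response took {elapsed:.2f}s; the agreed requirement allows "
        f"{RESPONSE_TIME_LIMIT_SECONDS}s"
    )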

We just learned how to measure requirements and freeze them for the design, implementation and testing phases.

Now let’s take other example. I was working on a web based project. Client (stakeholders) specified the project requirements for initial phase of the project development. My manager circulated all the requirements in the team for review. When we started discussion on these requirements, we were just shocked! Everyone was having his or her own conception about the requirements. We found lot of ambiguities in the ‘terms’ specified in requirement documents, which later on sent to client for review/clarification.

The client had used many ambiguous terms with several possible meanings, making it difficult to analyze the exact intent. The next version of the requirement doc from the client was clear enough to freeze for the design phase.

From this example we learned “Requirements should be clear and consistent”

The next criterion for testing the requirements specification is "Discover missing requirements".

Many times project designers don't get a clear idea about specific modules and simply assume some requirements during the design phase. No requirement should be based on assumptions. Requirements should be complete, covering each and every aspect of the system under development.

Specifications should state both types of requirements, i.e. what the system should do and what it should not.

Generally I use my own method to uncover unspecified requirements. When I read the software requirements specification (SRS) document, I note down my own understanding of the requirements that are specified, plus the other requirements the SRS document is supposed to cover. This helps me ask questions about the unspecified requirements and make them clearer.

To check the requirements for completeness, divide them into three sections: 'must implement' requirements, requirements that are not specified but are 'assumed', and a third type, 'imagined' requirements. Check that all three types of requirements are addressed before the software design phase.

Check whether the requirements are related to the project goal.
Sometimes stakeholders have their own expertise, which they expect to see reflected in the system under development, without considering whether that requirement is relevant to the project in hand. Make sure to identify such requirements. Try to keep irrelevant requirements out of the first phase of the project development cycle. If that is not possible, ask the stakeholders: why do you want to implement this specific requirement? This will describe the particular requirement in detail, making it easier to design the system with future scope in mind.

But how do you decide whether a requirement is relevant or not?
Simple answer: set the project goal and ask this question: will not implementing this requirement cause any problem in achieving our specified goal? If not, then it is an irrelevant requirement. Ask the stakeholders whether they really want to implement these types of requirements.

In short, the requirements specification (SRS) doc should address the following:
Project functionality (what should be done and what should not)
Software and hardware interfaces and the user interface
System correctness, security and performance criteria
Implementation issues (risks), if any

Conclusion:
I have covered all aspects of requirement measurement. To be specific about requirements, I will summarize requirement testing in one sentence:
“Requirements should be clear and specific with no uncertainty, requirements should be measurable in terms of specific values, requirements should be testable having some evaluation criteria for each requirement, and requirements should be complete, without any contradictions”

Testing should start at the requirements phase to avoid further requirement related bugs. Communicate as much as possible with your stakeholders to clarify all the requirements before starting project design and implementation.

Do you have any experience testing software requirements?

Tuesday, November 2, 2010

Test Case Point Analysis

1. INTRODUCTION

Effective software project estimation is one of the most challenging and important activities in the software project life-cycle. On-time project delivery cannot be achieved without an accurate and reliable estimate. Estimates play a vital role in all phases of the software development life-cycle.
There are many popular estimation models in vogue today, but none of them determines the effort needed for the separate phases of the SDLC. Organizations specializing in niche areas, such as providing only testing services, need an estimation model that can accurately determine the size of the Application Under Test (AUT) and, in turn, the effort needed to test it. Today's e-business applications impose new challenges for software testing. Some of the commonly known challenges are complex application scenarios, extensive third party integration, crunched testing life cycles and increased security concerns. These factors inherently make testing of e-business applications far more complex and critical than conventional software testing. In no other projects is performance testing more critical than in web-based projects. Underestimating a testing project leads to under-staffing it (resulting in staff burnout), under-scoping the quality assurance effort (running the risk of low quality deliverables), and setting too short a schedule (resulting in loss of credibility as deadlines are missed). Traditionally, estimation of testing effort has been more of a ballpark percentage of the rest of the development life cycle phases. This approach to estimation is more prone to errors and carries a bigger risk of delaying launch deadlines.

TCP analysis is an approach for producing an accurate estimate for functional testing projects. This approach emphasizes the key testing factors that determine the complexity of the entire testing cycle and gives us a way of translating test creation effort into test execution effort, which is very useful for regression testing estimation. In other words, Test Case Points are a way of representing the effort involved in testing projects.

2. TCP METHODOLOGY

As stated above, TCP analysis generates test effort estimates for separate testing activities. This is essential because testing projects fall under four different models, though in practice most testing projects are a combination of the four execution models stated below.

Model I – Test Case Generation
This includes designing well-defined test cases. The deliverables here are only the test cases.

Model II – Automated Script Generation
This execution model includes only automating the test cases using an automated test tool. The deliverables include the tool generated scripts.

Model III – Manual Test Execution
This execution model only involves executing the test cases already designed and reporting the defects.

Model IV – Automated Test Execution
This phase includes the execution of the automated scripts and reporting the defects.

TCP Analysis uses a 7-step process consisting of the following stages:

1. Identify Use Cases
2. Identify Test Cases
3. Determine TCP for Test Case Generation
4. Determine TCP for Automation
5. Determine TCP for Manual Execution
6. Determine TCP for Automated Execution
7. Determine Total TCP

Given below is an overview of the different phases as applied to the four project execution models. Each phase is explained in detail in subsequent sections.

Determine TCP for Test Case Generation

To determine the TCP for Test Case Generation, first determine the complexity of the test cases. Some test cases may be more complex due to the inherent functionality being tested. The complexity will be an indicator of the number of TCPs for the test case.

Calculate test case generation complexity based on the factors given below.

Test case generation complexity factors (with sample weights):

1. Number of steps in the test case – assuming that test cases are atomic and test only one condition, the number of steps is an indicator of the complexity. Weight: 2
2. Interface with other test cases – this usually involves calling other test cases from this use case, e.g. Address Book lookup from Compose Mail. Weight: 1
3. Establishing a baseline result – e.g. testing an EMI calculator would involve validating the formulae used; this would typically be complex. Weight: 3

Note: The above given weights are just sample weights and should not be mistaken for benchmarks.
For the number of steps, determine the weight based on the following table:

Number of steps    Weight
< 5                1
5-10               2
> 10               3

The standard weights range from 0 to 3:
0 -> Not Applicable
1 -> Simple
2 -> Average
3 -> Complex

Calculate the sum of the above weights and determine the complexity of each test case.
Calculate the Rough Test Case Points for test generation by using the table below.

Test Case Type    Complexity Weight    Test Case Points
Simple            < 9                  6
Average           10-16                8
Complex           > 16                 12

Calculate the Rough Test Case Points using the formula below:

Rough TCP-G = (# of Simple Test Cases x 6) + (# of Average Test Cases x 8) + (# of Complex Test Cases x 12)
The TCP estimates above might be affected by certain application level parameters. For example, if the AUT is a vertical portal (e.g. insurance), then creating test cases would involve an understanding of the insurance business. This might increase the effort required. To factor in the impact of such parameters, we increase the TCP by an Adjustment Factor. Some of the application level parameters that might have an influence on the TCP are listed in the table below:
Sl. No.   Factor                                                      Adjusted Weight
1         Design Complexity                                           0.10
2         Integration with other devices such as WAP enabled devices  0.10
3         Multi-lingual Support                                       0.05
          Total Factor                                                0.25

The weights in the table are only indicative and will depend on the application for which the estimation is being carried out.

Adjustment Factor = 1 + Total Factor = 1 + 0.25 = 1.25

Each of these factors is scored based on its influence on the system being counted. The resulting score increases the unadjusted Test Case Point count. This calculation provides us with the Test Case Point count for generation:

TCP-G = Rough TCP-G x Adjustment Factor
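As an illustration of the arithmetic only, here is a minimal Python sketch of the TCP-G calculation, using invented test case counts and the sample adjustment factor from the table above.

# Sample points per complexity class, as defined in the tables above.
POINTS = {"simple": 6, "average": 8, "complex": 12}

def rough_tcp(counts):
    """counts: mapping of complexity class -> number of test cases."""
    return sum(POINTS[cls] * n for cls, n in counts.items())

# Hypothetical project: 40 simple, 25 average and 10 complex test cases.
counts = {"simple": 40, "average": 25, "complex": 10}

rough_tcp_g = rough_tcp(counts)                 # 40*6 + 25*8 + 10*12 = 560
adjustment_factor = 1 + (0.10 + 0.10 + 0.05)    # sample application-level factors
tcp_g = rough_tcp_g * adjustment_factor

print(rough_tcp_g, adjustment_factor, tcp_g)    # -> 560 1.25 700.0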
2.1 Determine TCP for Automation

From the list of test cases derived from Section 2.2, identify the test cases that are good candidates for automation. Some test cases take very little effort when executed manually and are not worth automating. On the other hand, some test cases cannot be automated because the test tool does not support the feature being tested. Certain cases are straightforward and quite easy to automate, whereas cases involving dynamic data are difficult to automate. Next, determine the test case automation complexity based on the factors given below.
Test case automation complexity factors (with sample weights):

1. Interface with other test cases – this usually involves calling other test cases from this use case, e.g. Address Book lookup from Compose Mail. Weight: 2
2. Interface with external application – interaction with an application that is outside the system boundary, e.g. credit card validation through an independent gateway. Weight: 0
3. External application validation – testing of third party applications to validate the use case, e.g. checking Word or PDF reports generated by the system. Weight: 0
4. Data driven – usually helpful for testing the use case with positive and negative data, e.g. user registration. Weight: 2
5. Links – the number of links to be tested for broken/orphaned links; typically, dynamic lists such as catalog items are rated as complex. Weight: 1
6. Numerical computations – validation of arithmetical calculations, e.g. testing an EMI calculator would involve validating the formulae used; this would typically be complex. Weight: 1
7. Check points – modifying the test scripts to check for validations, e.g. page titles, buttons, labels, null values, max characters, numeric fields, etc. Weight: 1
8. Database check points – cross checking database values for integrity, e.g. checking the database after user registration for proper values. Weight: 0
9. UI components – usually applets, ActiveX controls, etc. Weight: 0

Note: The above given weights are just sample weights and should not be mistaken for benchmarks.

The standard weights range from 0 to 3:
0 -> Not Applicable
1 -> Simple
2 -> Average
3 -> Complex

Calculate the sum of the above weights and determine the complexity of each test case.
From the complexity, calculate the Test Case Points for test automation by using the table below.

Test Case Type    Complexity Weight    Test Case Points
Simple            < 9                  6
Average           10-16                8
Complex           > 16                 12

TCP-A (Single Scenario) = (# of Simple Test Cases x 6) + (# of Average Test Cases x 8) + (# of Complex Test Cases x 12)
E-biz applications need to be tested on various configurations because they can be accessed from anywhere and from an uncontrolled environment. A scenario is basically a combination of a browser, operating system and hardware. Sometimes, the application by its nature of usage might demand testing on different scenarios. The TCPs identified above need to be modified to factor in the impact of multiple scenarios.

The extra effort needed can be obtained by answering the following two questions:

* Can the scripts generated for a single scenario be run on multiple scenarios?
* Do the scripts generated for a single scenario need to be re-generated because the automation tool does not support a scenario?
To arrive at the additional Test Case Points, we identify the test cases (from section 3.2) that need to be re-generated for every additional scenario. The TCP-A (single scenario) is added for every scenario for which regeneration is required. For example, assume that a test case checks for an applet condition on Internet Explorer 5.0 and Windows NT, and its TCP-A comes out to be 6. Further, the test tool does not support the same script for Netscape Navigator 4.7 on Solaris, so the script needs to be regenerated. In this case, the total TCP-A comes out to be 12 for the 2 different scripts.
2.2 Determine TCP for Manual Execution

To determine the TCPs for Manual Execution, first calculate the manual test case execution complexity based on the factors given below.
Manual execution complexity factors (with sample weights):

1. Pre-conditions – this usually involves setting up the test data, as well as the steps needed before starting the execution. E.g. to test whether the printing facility is working, the pre-conditions would include opening the application, inserting a record into the database, generating a report and then checking the printing facility. Weight: 2
2. Steps in the test case – if the steps themselves are complex, the manual execution effort will increase; please refer to the table below to determine the complexity factor. Weight: 1
3. Verification – the verification itself might be complex; for example, checking the actual result might involve many steps. Say a test case needs the server logs to be checked; that might require opening the log analyzer and verifying the statistics. Weight: 2

Note: The above given weights are just sample weights and should not be mistaken for benchmarks.

Number of steps    Weight
< 5                1
5-10               2
> 10               3

The standard weights range from 0 to 3:
0 -> Not Applicable
1 -> Simple
2 -> Average
3 -> Complex

Calculate the sum of the above weights and determine the complexity of each test case. Then calculate the Test Case Points for manual execution by using the table below.
Test Case Type    Complexity Weight    Test Case Points
Simple            < 9                  6
Average           10-16                8
Complex           > 16                 12

TCP-ME = (# of Simple Test Cases x 6) + (# of Average Test Cases x 8) + (# of Complex Test Cases x 12)

Test cases need to be manually executed in all the scenarios. To arrive at the additional Test Case Points, the TCP-ME (single scenario) is added for every scenario for which manual execution is required.
2.3 Determine TCP for Automated Execution

To determine the TCP for Automated Execution, calculate the automation test case execution complexity based on the factors given below.

Automated execution complexity factors (with sample weights):

1. Pre-conditions – this usually involves setting up the test data, as well as the steps needed before starting the execution. E.g. to test whether the printing facility is working, the pre-conditions would include opening the application, inserting a record into the database, generating a report and then checking the printing facility. Weight: 2

Note: The above given weights are just sample weights and should not be mistaken for benchmarks.

The standard weights range from 0 to 3:
0 -> Not Applicable
1 -> Simple
2 -> Average
3 -> Complex

Calculate the sum of the above weights and determine the complexity of each test case. Then calculate the Test Case Points for automated execution by using the table below.

Test Case Type    Complexity Weight    Test Case Points
Simple            < 9                  6
Average           10-16                8
Complex           > 16                 12

TCP-AE = (# of Simple Test Cases x 6) + (# of Average Test Cases x 8) + (# of Complex Test Cases x 12)
2.4 Calculate Total TCP

The Total TCP is computed by summing up the individual TCPs for Test Case Generation, Automation, Manual Execution and Automated Execution.
TCP-T = TCP-G + TCP-A + TCP-ME + TCP-AE
The total TCP is indicative of the size of the testing project.
3. EFFORT CALCULATION

To translate the Test Case Points into the total person-months involved, estimate, based on your prior experience, the number of test case points that can be delivered per person-month, and divide the total TCP by that figure.
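A minimal sketch of the final roll-up, using invented TCP values and an assumed productivity figure (test case points delivered per person-month), purely to illustrate the arithmetic:

# Hypothetical TCP values for each activity, from the earlier calculations.
tcp_g  = 700.0   # test case generation
tcp_a  = 320.0   # automation
tcp_me = 450.0   # manual execution
tcp_ae = 180.0   # automated execution

tcp_total = tcp_g + tcp_a + tcp_me + tcp_ae          # 1650.0

# Assumed productivity, taken from an organization's own historical data.
tcp_per_person_month = 400.0

effort_person_months = tcp_total / tcp_per_person_month
print(f"Total TCP: {tcp_total}, effort: {effort_person_months:.1f} person-months")
# -> Total TCP: 1650.0, effort: 4.1 person-months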

Thursday, October 28, 2010

What can be done if requirements are changing continuously?

A common problem and a major headache.

Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.
It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.
If the code is well-commented and well-documented this makes changes easier for the developers.
Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted.
Balance the effort put into setting up automated testing with the expected effort required to re-do them to deal with changes.
Try to design some flexibility into automated test scripts.
Focus initial automated testing on application aspects that are most likely to remain unchanged.
Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans)

Wednesday, October 27, 2010

DMAIC

DMAIC is an acronym for this five-step improvement process: Define, Measure, Analyze, Improve, Control.

* Define: Define the problem, the process, and the project goals. In Six Sigma it is imperative that the problem is specifically defined. Alluding to slowing business is a poorly defined problem. Instead, the problem should be clearly established in quantitative terms. So a good Six Sigma problem definition would be a 35% decrease of net sales in the past two consecutive quarters.
* Measure: Measure and collect data that will determine the factors that have influence over the outcome of the process or procedure.
* Analyze: The data is analyzed using statistical tools to assess whether the problem is real (and solvable) or random, which makes it unsolvable within the Six Sigma framework.
* Improve: If the problem is real, the Six Sigma team identifies solutions to improve the process based on the data analysis.
* Control: Control planning, including data collection and control mechanisms, is required to ensure that the solutions can be sustained to deliver peak performance and early deviations from the target do not materialize into process defects.

DMAIC is the method utilized for existing business processes, while DMADV, which stands for Define, Measure, Analyze, Design, Verify, is used to create new processes or designs. Both are inspired by Dr. W. Edwards Deming, who is considered the father of modern quality control. Dr. Deming's Plan-Do-Check-Act Cycle, also known as the Deming or Shewhart cycle, laid the groundwork for DMAIC as a statistical and scientific method of business process improvement.

DMAIC uses the Six Sigma methodology to improve an existing business process and its profitability by identifying defect opportunities. These are places in a process, procedure, or service where defects can occur. By measuring defects per million opportunities (DPMO), Six Sigma team members can eliminate errors and accurately determine quality, which they then use as a parameter in determining a solution to a problem.
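For reference, DPMO itself is a simple ratio. The sketch below shows the calculation with invented inspection numbers (a Six Sigma level process corresponds to roughly 3.4 DPMO):

# Hypothetical inspection data.
units_inspected = 5_000
opportunities_per_unit = 10      # distinct ways each unit could be defective
defects_found = 37

dpmo = defects_found / (units_inspected * opportunities_per_unit) * 1_000_000
print(f"DPMO: {dpmo:.0f}")       # -> DPMO: 740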

Monday, October 25, 2010

Defects Find Rate

Definition

The Defects Find Rate metric counts the number of new defects found in the software over a period of time (day, week, month).

Calculation
To be able to calculate this metric, it is important that your defect tracking system tracks:

* affected version, the version of the software in which the defect was found
* defect creation date, the date when the defect was reported

The calculation process (a minimal code sketch follows this list):

* For each version of the product, find all defects that affect that version. If you don't track the version in which a defect was found, you can look at all defects for the product.
* Group the defects by creation date and, depending on the period you want to evaluate, count the number of defects created in each day, week or month.
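Here is a minimal Python sketch of that grouping, using hypothetical defect records; the created field and the weekly grouping are illustrative, and a real tracker would supply the data through its API or an export.

from collections import Counter
from datetime import date

# Hypothetical defect records for one product version.
defects = [
    {"id": 1, "created": date(2010, 10, 4)},
    {"id": 2, "created": date(2010, 10, 6)},
    {"id": 3, "created": date(2010, 10, 20)},
]

# Group by ISO week number to get a weekly find rate.
weekly_find_rate = Counter(d["created"].isocalendar()[1] for d in defects)
for week, count in sorted(weekly_find_rate.items()):
    print(f"week {week}: {count} new defect(s)")
# -> week 40: 2 new defect(s)
#    week 42: 1 new defect(s)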

Dimensions
It should be possible to monitor the Defects Find Rate in the following dimensions:

* Affected version, to monitor the metric value for any released version.
* Project/Product, to aggregate the Defects Find Rate over all released versions of the project or product.

Usage
Defect Find Rate gives a product owner an idea of how fast new defects are found in the product and therefore how much it can influence the development and release plans.
Presentation
The best presentation is a simple bar chart, where each bar shows the number of new defects found per day/week/month/quarter/year. Depending on your requirements you may want to create this chart for the whole company or for each product.

Friday, October 22, 2010

Defect Removal Efficiency

Definition
The defect removal efficiency (DRE) gives a measure of the development team's ability to remove defects prior to release. It is calculated as the ratio of defects resolved to the total number of defects found. It is typically measured prior to and at the moment of release.
Calculation
To be able to calculate this metric, it is important that your defect tracking system tracks:

* affected version, the version of the software in which the defect was found
* release date, the date when the version was released

DRE = number of defects resolved by the development team / total number of defects found at the moment of measurement
Dimensions
It should be possible to monitor Defect Removal Efficiency in the following dimensions:

* Affected version
* Priority

Presentation
DRE is typically measured at the moment of version release, so the best visualization is simply to show the current value of DRE as a number.
Example
Suppose that 100 defects were found during the QA/testing stage and 84 defects had been resolved by the development team at the moment of measurement. The DRE would be calculated as 84 divided by 100 = 84%.

Tuesday, October 19, 2010

Soak Tests (Also Known as Endurance Testing)

Soak testing is running a system at high levels of load for prolonged periods of time. A soak test would normally execute several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.



Also, it is possible that a system may ‘stop’ working after a certain number of transactions have been processed due to memory leaks or other defects. Soak tests provide an opportunity to identify such defects, whereas load tests and stress tests may not find such problems due to their relatively short duration.



The above diagram shows activity for a certain type of site. Each login results in an average session of 12 minutes duration, with an average of eight business transactions per session.

A soak test would run for as long as possible, given the limitations of the testing situation. For example, weekends are often an opportune time for a soak test. Soak testing for this application would be at a level of 550 logins per hour, using typical activity for each login.

The average number of logins in this example is 4,384 per day, but it would only take 8 hours at 550 per hour to run an entire day's activity through the system.

By starting a 60 hour soak test on Friday evening at 6 pm (to finish at 6 am on Monday morning), 33,000 logins would be put through the system, representing 7½ days of activity. Only with such a test will it be possible to observe any degradation of performance under controlled conditions.
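The sizing arithmetic above is easy to reproduce; here is a small sketch using the figures from this example:

logins_per_hour = 550
logins_per_day = 4384            # average daily activity for this site
soak_duration_hours = 60         # Friday 6 pm to Monday 6 am

total_logins = logins_per_hour * soak_duration_hours     # 33,000 logins
days_of_activity = total_logins / logins_per_day          # ~7.5 days of activity
hours_for_one_day = logins_per_day / logins_per_hour      # ~8 hours per day's load

print(total_logins, round(days_of_activity, 1), round(hours_for_one_day, 1))
# -> 33000 7.5 8.0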

Some typical problems identified during soak tests are listed below:

* Serious memory leaks that would eventually result in a memory crisis.
* Failure to close connections between tiers of a multi-tiered system under some circumstances, which could stall some or all modules of the system.
* Failure to close database cursors under some conditions, which would eventually result in the entire system stalling.
* Gradual degradation of response time of some functions as internal data structures become less efficient during a long test.

Apart from monitoring response time, it is also important to measure CPU usage and available memory. If a server process needs to be available for the application to operate, it is often worthwhile to record its memory usage at the start and end of a soak test. It is also important to monitor the internal memory usage of facilities such as Java Virtual Machines, if applicable.
Long Session Soak Testing

When an application is used for long periods of time each day, the above approach should be modified, because the soak test driver is not logins and transactions per day, but the number of transactions per active user for each user each day.

This type of situation occurs in internal systems, such as ERP and CRM systems, where users login and stay logged in for many hours, executing a number of business transactions during that time. A soak test for such a system should emulate multiple days of activity in a compacted time-frame rather than just pump multiple days worth of transactions through the system.

Long session soak tests should run with realistic user concurrency, but the focus should be on the number of transactions processed. VUGen scripts used in long session soak testing may need to be more sophisticated than short session scripts, as they must be capable of running a long series of business transactions over a prolonged period of time.
Test Duration

The duration of most soak tests is often determined by the available time in the test lab. There are many applications, however, that require extremely long soak tests. Any application that must run uninterrupted for extended periods of time may need a soak test to cover all of the activity for a period of time that is agreed to by the stakeholders, such as a month. Most systems have a regular maintenance window, and the time between such windows is usually a key driver for determining the scope of a soak test.

A classic example of a system that requires extensive soak testing is an air traffic control system. A soak test for such a system may have a multi-week or even multi-month duration.

Thursday, September 9, 2010

iMacros: Measuring Web Response Time

Introduction

It's hard to imagine life without the internet in the present age of communication. The internet is used for just about anything, from getting a recipe to finding an address or researching a topic. With the increase in the number of users on the internet, it has become imperative that a website responds quickly to user requests; recent research indicates that if a website takes more than 2-3 seconds to respond, users start deeming it slow, even if it has a lot of graphic content to render. Hence measuring the response times of your website has become a critical performance test, even if the site is launched only on an intranet.

Measuring response times of websites can be done in the following contexts:

* Checking the response times with different system configurations of the web server on which the website is hosted.
* Comparing the response times of different builds of the website.
* Noting the impact of adding graphical content to the website on its response times.
* Noting the impact of adding the same graphical content in different formats (e.g. split images instead of a single big image).
* Comparing the query performance for a web request exercising the database with different query implementations.

This article discusses a Firefox add-on that helps users find the response times of a website. It also details how Python scripts can be used to automate the same.
iMacros Overview

iMacros, a Firefox add-on, is a fairly simple record and playback tool that helps the user in the following ways:

* Downloading and uploading files from the internet
* Automatically filling web forms
* Performance testing of websites via the browser (single-user)
* Reading web pages and importing the data into CSV files

More on iMacros here.

This article concentrates on web based performance testing and details how one of the features of the iMacros add-on can be used to find the response time of a website.
Tool Specifications

Add-on version:    6.6.5.0
Firefox support:   Firefox 3.0 - 3.6
OS version:        Win 2K and above
Language:          English, French, German, Russian
Using iMacros

To get up and running with iMacros:

* On the Firefox browser, go to Tools → Add-ons
* Under the Get Add-ons tab, search for iMacros
* Click on 'Install Now' to install the add-on
* Firefox restarts, and you can then open the add-on via View → Sidebar → iOpus iMacros

iMacros is now ready for use.

On the left-hand side of your home page you would see an iMacros pane as below:

[Screenshot: the iMacros pane (imacro.jpg)]
Configuration

As mentioned earlier, iMacros is a record and playback tool. Let's check out the pane in detail to see how tests can be recorded and re-run.

The pane that we saw above contains three main tabs:
Play

* Play - This option starts replaying the recorded tests for the user.
* Pause - While the recorded tests are replayed, this option can be used to pause the test.
* Stop - While the recorded tests are replayed, this option can be used to stop the test.
* Repeat Macro - This option is used to repeat the recorded tests N times. 'Current' shows the current iteration of the tests and 'Max' the maximum number of iterations the user wants to re-run the recorded test.
* Play (Loop) - Enables the user to re-play the recorded tests for N iterations, as specified under the 'Repeat Macro' option.

Rec

*
Record - By clicking this user can start recording the tests.
*
Save - This option would the user to save the tests recorded. This will pop-up a Save File window that asks for 'Name' denoting the name of test and 'Create in' denoting the where the user wants to store the recorded test. It gets stored as .iim file and is also known as a Macro.
*
Load - Will open up a 'Select file to load' pop-up that suggests that the user can load the Macro (earlier recorded test) for re-run.
*
Stop - User can stop recording of the tests using this option.
*
Click Mode/Auto - This option enables how the links are recorded. Links can be recorded by:
1.
Using complete HTM tag.
2.
Using X/Y position of the link.
3.
User can choose either of them OR can go for 'Auto' mode where the Add-on uses the correct configuration for itself.
*
Save Page As - This option can be used to save the current page as .htm in the desired location.
*
Take Screenshot - Option for taking the screenshot of the current page. File is stored in a user defined location as .png.
*
Del. Cache&Cookies - This option would delete the cache and cookies of the firefox browser at the global level.
*
Wait during Play - Enables the user to insert a delay at a certain page while re-running the test. A 'Wait Parameter' dialog pops up and asks the user to enter the delay in seconds.

Edit

*
Edit Macro - Opens the current Macro (#Current.iim) so that it can be edited.
*
Share Macro - Macros can be shared by:
o
Copying the macro as a link that can be pasted into a browser for direct use.
o
Sending the macro using the email application on your system (e.g. Outlook).
o
Adding it to a social bookmarking service (e.g. Blogger).

*
Refresh Macro List - Refreshes all the Macros under 'Favorites' tab of the iMacros pane.
*
Options - Enables the user to allow certain domains to run shared macros. Domains not listed here will not be allowed to run a shared macro.
*
Help - Takes the user to the wiki page of iMacros.

Recording a Macro

A macro can be recorded by performing the following steps:

*
On the iMacros pane go to 'Rec' tab.
*
Make sure you are on the web page from which you want to start the test.
*
Press the 'Record' button to start recording the test.
*
Perform all the browsing events that you need to test. iMacros would be recording the actions that you perform.
*
Click on 'Stop' to stop recording the tests.
*
'Save' can be used to save the Macros. (default path: \My Documents\iMacros\Macros).

Case Study: Measuring Response Times

Let us take an example. Here's a macro that I recorded for this case study.

VERSION BUILD=6600217 RECORDER=FX
TAB T=1
URL GOTO=http://www.google.co.in/
TAG POS=1 TYPE=INPUT:TEXT FORM=NAME:f ATTR=NAME:q CONTENT=ChetanAppTimer
TAG POS=1 TYPE=INPUT:SUBMIT FORM=NAME:f ATTR=NAME:btnG&&VALUE:GoogleSearch
TAG POS=1 TYPE=A ATTR=TXT:TestingPerspective-RahulVerma'sWebsiteonSoftwareTesting

This is what the above macro does:

*
Launches “www.google.com”
*
Inputs the string “Chetan AppTimer” and clicks on the Search button
*
Clicks on the “Testing Perspective” link found in the Google search results.

Note: The above macro works only if the above search result appears on the first page (which was the case at the time this macro was recorded).
Measuring Response Time

This script can now be tweaked to measure the response times. It can be achieved by using a simple feature in iMacros called STOPWATCH. Here's how we use it.

VERSION BUILD=6600217 RECORDER=FX
TAB T=1

SET !FILESTOPWATCH C:\logs.csv

STOPWATCH ID=Opening_Google
URL GOTO=http://www.google.co.in/
STOPWATCH ID=Opening_Google

TAG POS=1 TYPE=INPUT:TEXT FORM=NAME:f ATTR=NAME:q CONTENT=ChetanAppTimer

STOPWATCH ID=Google_Search
TAG POS=1 TYPE=INPUT:SUBMIT FORM=NAME:f ATTR=NAME:btnG&&VALUE:GoogleSearch
STOPWATCH ID=Google_Search

STOPWATCH ID=Opening_TestingPerspective
TAG POS=1 TYPE=A ATTR=TXT:TestingPerspective-RahulVerma'sWebsiteonSoftwareTesting
STOPWATCH ID=Opening_TestingPerspective

In this Macro we have added certain statements:

*
SET !FILESTOPWATCH C:\logs.csv – This indicates that we have set a STOPWATCH on some of the operations in our recorded test and that the measurements made by the add-on are logged to the “C:\logs.csv” file.
*
STOPWATCH ID=Opening_Google – This indicates that we have added a STOPWATCH around opening www.google.com. (“Opening_Google” can be thought of as a transaction name, as in some commercial performance testing tools.)

By enabling the STOPWATCH option for all operations in our earlier recorded script, we can easily get the response times for each operation. In the modified code, we can get the response times of:

*
Opening www.google.com.
*
Searching time for “Chetan AppTimer” on www.google.com.
*
Opening of www.testingperspective.com/doku.php/collabration/chetan/apptimer1.

The macro can be run in 'Play (Loop)' mode to perform N iterations.
Results and Data Analysis

We are logging the measured response times in the C:\logs.csv file. Below is a snapshot of the log file. The test was run four times; the log file shows all three measurements with timestamps.

Screenshot: the logs.csv log file (logs.jpg)

After getting the measurements for four runs, the user can average all the measurements in the same category, such as “Opening_Google”, and get a close approximation of the response time of www.google.com.

Similarly, the user can run the test N times and get the response times of websites or sub-sections of a website. Here N denotes the number of iterations the user wants to run the test for, depending on the performance criteria set by the tester or the performance testing team.

The usual statistical equivalence criteria should be employed while averaging the response times; a small Python sketch of this averaging step is shown below.
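The sketch assumes the stopwatch ID and the measured time in seconds are the last two columns of each row in C:\logs.csv; the exact column layout can vary between iMacros versions, so adjust the indices to match your own log file.

import csv
from collections import defaultdict

def average_response_times(log_path=r"C:\logs.csv"):
    """Average the logged measurements per stopwatch ID (transaction)."""
    samples = defaultdict(list)
    with open(log_path, newline="") as log_file:
        for row in csv.reader(log_file):
            if len(row) < 2:
                continue
            # Assumed layout: stopwatch ID in the second-to-last column,
            # measured seconds in the last column.
            stopwatch_id = row[-2].strip()
            try:
                seconds = float(row[-1])
            except ValueError:
                continue  # skip header or malformed rows
            samples[stopwatch_id].append(seconds)
    return {name: sum(values) / len(values) for name, values in samples.items()}

if __name__ == "__main__":
    for name, avg in average_response_times().items():
        print(f"{name}: {avg:.3f} s")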
Application in Performance Testing

1.
As a Probing Client: A probing client replaces a human tester who checks and records the response time of the website while the actual multi-user performance test is in progress. iMacros can be a very effective way of doing this job in a consistent manner.
2.
One can build a multi-threaded solution on top of iMacros using a language of choice and later process the collected response-time data to see the application behavior under a certain user load. This can be an easy way of testing small-scale websites, and it is especially helpful when one is not conversant with elaborate performance testing tools or only has access to tools that lack a good record/playback feature (HTTPS traffic recording is commonly missing in free tools); see the sketch below.
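The following minimal Python sketch simulates several concurrent users by launching parallel imacros.exe processes in silent mode. The install path and macro name are placeholders, and whether multiple silent instances can actually run side by side depends on your iMacros edition, so treat it purely as a rough illustration.

import subprocess
import threading

IMACROS_EXE = r"C:\Program Files\iMacros\imacros.exe"  # placeholder install path
MACRO = r"\Sample.iim"                                 # placeholder macro name

def virtual_user(user_id, iterations):
    # One simulated user: replay the macro silently for the given number of iterations.
    cmd = [IMACROS_EXE, "-macro", MACRO, "-loop", str(iterations), "-silent"]
    result = subprocess.run(cmd)
    print(f"User {user_id} finished with exit code {result.returncode}")

def run_load(users=5, iterations=10):
    threads = [threading.Thread(target=virtual_user, args=(i, iterations))
               for i in range(1, users + 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    run_load()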

Automation Outlook

Once the tests are recorded you would want to run them with every test cycle. You can easily automate this process in any of the following ways:
Using iMacros.exe

imacros.exe is a freely available tool and can be downloaded here. You could use imacros.exe from the command line. Here are some of the commands that can be used to automate the tests:

Commands

*
Runs the macro from a desired location on the hard drive.

imacros.exe -macro \Sample.iim

*
Runs the macro in a loop with N iterations.

imacros.exe -macro \Sample.iim -loop N

*
Runs the macro with N iterations in background without actually opening up the browser instance.

imacros.exe -macro \Sample.iim -loop N -silent

*
Runs the macro with parameters.

imacros.exe -macro \Sample.iim -var_myvar

Results of the test

After running the commands on the command line, you can check the results of the tests by inspecting the environment variable %ERRORLEVEL%.

Success: %ERRORLEVEL% = 1
Failure: %ERRORLEVEL% != 1 (Value is negative)
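If you drive imacros.exe from a script instead of a batch file, the same status is visible as the process exit code. Here is a small Python sketch, assuming imacros.exe is on the PATH and using the success value of 1 described above.

import subprocess

# Run the macro once; the exit code is what a batch file would see in %ERRORLEVEL%.
result = subprocess.run(["imacros.exe", "-macro", r"\Sample.iim"])

if result.returncode == 1:
    print("Macro completed successfully")
else:
    print(f"Macro failed with code {result.returncode}")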

Running the automation

*
Create a batch file.
*
Enter all the commands for all the macros that you want to run.
*
Double-click the batch script to start running the automation.
*
Alternatively, you could schedule it with the DOS 'AT' command.

Using firefox application

Here is a simple example that shows how the macro can be run using firefox.exe.

"c:\program files\mozilla firefox\firefox.exe" http://run.imacros.net/?m=Sample.iim

Using Python Script

Here's a small Python script that can automate the tests.

import win32com.client   # requires pywin32 and the iMacros Scripting Interface

def automateIMacros(macros_file_path):
    w = win32com.client.Dispatch("imacros")
    w.iimInit("", 1)              # start a new browser instance
    w.iimPlay(macros_file_path)   # replay the recorded .iim macro

if __name__ == '__main__':
    automateIMacros("")           # pass the path to your recorded .iim macro here


Automation scripts for Perl, PowerShell and VB can be found on the iMacros wiki.
Known Issues and Limitations

*
The tool itself provides response times for a single user. You would have to build a multi-user environment on top of the tool by employing a language of choice (Python/Perl/VB etc.)
*
Mainly available for Firefox, though an extension for IE is available now as well.

Cubical Decoration


UI team with Green Ganesh
Rajni performing pooja for Maestro success :)
Warli paintings done by team




Theme: Ganesha Eco Fest


This is certainly a model to create awareness among people and lead them to buy and worship eco-sensitive, green Ganesha idols for Ganesh Chaturthi 2010.

Friday, September 3, 2010

Interesting facts about India.

History

India is the world's largest, oldest, continuous civilization
Although modern images of India often show poverty and lack of development, India was the richest country on earth until the time of British invasion in the early 17th Century. Christopher Columbus was attracted by India's wealth.
India never invaded any country in her last 10000 years of history.
India is the world's largest democracy.
The four religions born in India, Hinduism, Buddhism, Jainism, and Sikhism, are followed by 25% of the world's population
Chess (Shataranja or AshtaPada) was invented in India.
Varanasi, also known as Benares, was called "the ancient city" when Lord Buddha visited it in 500 B.C.E, and is the oldest, continuously inhabited city in the world today.
The art of Navigation was born in the river Sindh 6000 years ago. The very word Navigation is derived from the Sanskrit word NAVGATIH. The word navy is also derived from Sanskrit 'Nou'.
Medicine

Sushruta is the father of surgery. 2600 years ago he and health scientists of his time conducted complicated surgeries like cesareans, cataract, artificial limbs, fractures, urinary stones and even plastic surgery and brain surgery. Usage of anesthesia was well known in ancient India. Over 125 surgical equipment were used. Deep knowledge of anatomy, physiology, etiology, embryology, digestion, metabolism, genetics and immunity is also found in many texts.
Ayurveda is the earliest school of medicine known to humans. Charaka, the father of medicine consolidated Ayurveda 2500 years ago. Today Ayurveda is fast regaining its rightful place in our civilization.
Math

The value of "pi" was first calculated by Budhayana, and he explained the concept of what is known as the Pythagorean Theorem. He discovered this in the 6th century long before the European mathematician.

India invented the Number System. Zero was invented by Aryabhatta.
Bhaskaracharya calculated the time taken by the earth to orbit the sun hundreds of years before the astronomer Smart. Time taken by earth to orbit the sun: (5th century) 365.258756484 days.
Academic

The World's first university was established in Takshashila in 700 BCE. More than 10,500 students from all over the world studied more than 60 subjects. The University of Nalanda built in the 4th century BCE was one of the greatest achievements of ancient India in the field of education.
Grammar constitutes one of India's greatest contributions to Western philology. Panini, the Sanskrit grammarian, who lived between 750 and 500 BCE, was the first to compose formal grammar through his Astadhyai.

Thursday, August 26, 2010

SOFTWARE TESTERS ROCK


TESTERS ROCK!

A research survey, Personality Traits in Software Engineering, assessed the major personality traits of software testers and developers. Here's how the tester scores break down:

Tester Scores

Neuroticism: Low
Extraversion: Medium
Conscientiousness: Medium
Openness To Experience: High
Cognitive Capability: High
Agreeableness: High

Tuesday, August 3, 2010

Internet Explorer 7 shortcut keys

Tabs shortcut keys:
To do the following Press this
Open links in a new tab in the background Ctrl+Click
Open links in a new tab in the foreground Ctrl+Shift+Click
Open a new tab in the foreground Ctrl+T
Switch between tabs Ctrl+Tab / Ctrl+Shift+Tab
Close current tab (or current window when there are no open tabs) Ctrl+W
Open a new tab in the foreground from the address bar Alt+Enter
Switch to the n'th tab Ctrl+n (n can be 1-8)
Switch to the last tab Ctrl+9
Close other tabs Ctrl+Alt+F4
Open quick tabs Ctrl+Q

Zoom shortcut keys:
To do the following Press this
Increase zoom (+ 10%) Ctrl+(+)
Decrease zoom (-10%) Ctrl+(-)
Original size (100% zoom)* Ctrl+0

Search shortcut keys:
To do the following Press this
Go to the Toolbar Search Box Ctrl+E
Open your search query in a new tab Alt+Enter
Bring down the search provider menu Ctrl+Down Arrow
Favorites Center shortcut keys:
To do the following Press this
Open Favorites Center to your favorites Ctrl+I
Open Favorites Center to your history Ctrl+H
Open Favorites Center to your feeds Ctrl+J
Along with these keyboard hotkeys, there are a few helpful shortcuts for mouse users as well.
To do the following with a mouse Press this
Open a link in a background tab Middle mouse button
Close a tab Middle mouse button on the tab
Open a new tab Double click on empty tab band space
Zoom the page in/out 10% Ctrl+Mouse wheel Up/Down
Internet Explorer 5.5 and 6.0 shortcut keys:
To view and explore Web pages with shortcut keys:
To do the following Press this
Open your favorites in a folder window Shift+Click on the "Organize Favorites" menu item
Change the text size Ctrl+Mouse wheel Up/Down
In the History or Favorites boxes, open multiple folders CTRL+click
Open the History box CTRL+H
Open the Favorites box CTRL+I
Open the Search box CTRL+E
Activate a selected link ENTER
Print the current page or active frame CTRL+P
Save the current page CTRL+S
Close the current window CTRL+W
Open a new window CTRL+N
Go to a new location CTRL+O or CTRL+L
Display Internet Explorer Help or to display context Help about an item in a dialog box F1
Toggle between full-screen and other views in the browser F11
Move forward through the items on a Web page, the Address box, or the Links box TAB
Move through the items on a Web page, the Address box, or the Links box SHIFT+TAB
Go to your Home page ALT+HOME
Go to the next page ALT+RIGHT ARROW
Go to the previous page ALT+LEFT ARROW or BACKSPACE
Display a shortcut menu for a link SHIFT+F10
Move forward between frames CTRL+TAB or F6
Move back between frames SHIFT+CTRL+TAB
Scroll toward the beginning of a document UP ARROW
Scroll toward the end of a document DOWN ARROW
Scroll toward the beginning of a document in larger increments PAGE UP
Scroll toward the end of a document in larger increments PAGE DOWN
Move to the beginning of a document HOME
Move to the end of a document END
Find on this page CTRL+F
Refresh the current Web page F5 or CTRL+R
Refresh the current Web page, even if the time stamp for the Web version and your locally stored version are the same CTRL+F5
Stop downloading a page ESC

To Print Preview Web pages with shortcut keys:
To do the following Press this
Close Print Preview ALT+C
Display a list of zoom percentages ALT+Z
Zoom in ALT+PLUS
Zoom out ALT+MINUS
Display the last page to be printed ALT+END
Display the next page to be printed ALT+RIGHT ARROW
Type the number of the page that you want displayed ALT+A
Display the previous page to be printed ALT+LEFT ARROW
Display the first page to be printed ALT+HOME
Change paper, headers and footers, orientation, and margins for this page ALT+U
Set printing options and print the page ALT+P
To use the Address box with shortcut keys:
To do the following Press this
Move back through the list of AutoComplete matches DOWN ARROW
Move forward through the list of AutoComplete matches UP ARROW
Add "www." to the beginning and ".com" to the end of the text that you type in the Address box CTRL+ENTER
When in the Address box, move the cursor right to the next logical break in the address (period or slash) CTRL+RIGHT ARROW
When in the Address box, move the cursor left to the next logical break in the address (period or slash) CTRL+LEFT ARROW
Display a list of addresses that you have typed F4

To work with Favorites by using shortcut keys:
To do the following Press this
Move selected item down in the Favorites list in the Organize Favorites dialog box ALT+DOWN ARROW
Move selected item up in the Favorites list in the Organize Favorites dialog box ALT+UP ARROW
Add the current page to your favorites CTRL+D
Open the Organize Favorites dialog box CTRL+B
To edit with shortcut keys:
To do the following Press this
Remove the selected items and copy them to the Clipboard CTRL+X
Select all items on the current Web page CTRL+A
Insert the contents of the Clipboard at the selected location CTRL+V

Wednesday, July 21, 2010

What Is Requirements-Based Testing?

Gary E. Mogyorodi, Bloodworth Integrated Technology, Inc.

This article provides an overview of the requirements-based testing (RBT) process. RBT comprises two phases: ambiguity reviews and cause-effect graphing. An ambiguity review is a technique for identifying ambiguities in functional[1] requirements to improve the quality of those requirements. Cause-effect graphing is a test-case design technique that derives the minimum number of test cases to cover 100 percent of the functional requirements. The intended audience for this article is project managers, development managers, developers, test managers, and test practitioners who are interested in understanding RBT and how it can be applied to their organization.

The requirements-based testing (RBT) process comprises two phases: ambiguity reviews and cause-effect graphing. An ambiguity review is a technique for identifying ambiguities in functional[1] requirements to improve the quality of those requirements. Cause-effect graphing is a test-case design technique that derives the minimum number of test cases to cover 100 percent of the functional requirements.

Testing can be divided into the following seven activities:

1. Define Test Completion Criteria. The test effort has specific, quantifiable goals. Testing is completed only when the goals have been reached (e.g., testing is complete when the tests that address 100 percent functional coverage of the system all have executed successfully).
2. Design Test Cases. Logical test cases are defined by four characteristics: the initial state of the system prior to executing the test, the data, the inputs, and the expected results.
3. Build Test Cases. There are two parts needed to build test cases from logical test cases: creating the necessary data, and building the components to support testing (e.g., build the navigation to get to the portion of the program being tested).
4. Execute Tests. Execute the test-case steps against the system being tested and document the results.
5. Verify Test Results. Testers are responsible for verifying two different types of test results: Are the results as expected? Do the test cases meet the test completion criteria?
6. Verify Test Coverage. Track the amount of functional coverage achieved by the successful execution of each test.
7. Manage the Test Library. The test manager maintains the relationships between the test cases and the programs being tested. The test manager keeps track of what tests have or have not been executed, and whether the executed tests have passed or failed.

Activities one, two, and six are addressed by RBT. The remaining four activities are addressed by test management tools that track the status of test executions.

The RBT process stabilizes the application interface definition early because the requirements for the user interface become well defined and are written in an unambiguous and testable manner. This allows the use of capture/playback tools sooner in the software development life cycle.
Relative Cost to Fix an Error

The cost of fixing an error is lowest in the first phase of software development (i.e., requirements). This is because there are very few deliverables at the beginning of a project to correct if an error is found. As the project moves into subsequent phases of software development, the cost of fixing an error rises dramatically since there are more deliverables affected by the correction of each error. At the requirements phase the cost ratio to fix errors is one to one; at coding it is 10 to one; at production it is from 40 to 1,000 to one.

A study by James Martin showed that the root cause of 56 percent of all bugs identified in projects is errors introduced in the requirements phase. Of the bugs rooted in requirements, roughly half were due to poorly written, ambiguous, unclear, and incorrect requirements. The remaining half was due to requirements that were completely omitted (see Figure 1).



Figure 1: Distribution of Bugs
Why Good Requirements Are Critical

A study by the Standish Group in 2000 showed that American companies spent $84 billion for cancelled software projects. Another $192 billion was spent on software projects that significantly exceeded their time and budget estimates. The Standish Group and other studies show there are three top reasons why software projects fail:

* Requirements and specifications are incomplete.
* Requirements and specifications change too often.
* There is a lack of user input (to requirements).

The RBT process addresses each of these issues:

* It begins at the first phase of software development where the correction of errors is the least costly.
* It begins at the requirements phase where the largest portion of bugs have their root cause.
* It addresses improving the quality of requirements: Inadequate requirements often are the reason for failing projects.

A Good Test Process

The characteristics of a good test process are as follows:

* Testing must be timely. Testing begins when requirements are first drafted; it must be integrated throughout the software development life cycle. In this way, testing is not perceived as a bottleneck operation. Test early, test often.
* Testing must be effective. The approach to test-case design must have rigor to it. Testing should not rely on individual skills and experiences. Instead, it should be based on a repeatable test process that produces the same test cases for a given situation, regardless of the tester involved. The test-case design approach must provide high functional coverage of the requirements.
* Testing must be efficient. Testing activities must be heavily automated to allow them to be executed quickly. The test-case design approach should produce the minimum number of test cases to reduce the amount of time needed to execute tests, and to reduce the amount of time needed to manage the tests.
* Testing must be manageable. The test process must provide sufficient metrics to quantitatively identify the status of testing at any time. The results of the test effort must be predictable (i.e., the outcome each time a test is successfully executed must be the same).

Standard Software Development Life Cycle

There are many software development methodologies. Each has its own characteristics and approaches, but most software development methodologies share the following six aspects:

* Requirements. There is a description of what has to be delivered.
* Design. There is a description of how the requirements will be delivered.
* Code. The system is constructed from the requirements and the design.
* Test. The behavior of the code is compared to the expected behavior described by the requirements.
* Write user manuals/write training materials. Documentation is created to support the delivered system.
* International translations. Code is often executed in different countries with different languages; the initial system must be translated into the native language of the target country.

In many software development methodologies, testing does not begin until after code is constructed. If a defect is found after coding, there is a good deal of scrap and rework to correct the code, and possibly the design, test cases, and requirements as well. Defects must be tested out of the system rather than being avoided in the first place. Testing often is a bottleneck activity. See Figure 2 for a graphical representation of a standard development life cycle.



Figure 2: Standard Development Life Cycle
Life Cycle With Testable Requirements

In a software development life cycle with testable requirements and integrated testing, the RBT process is integrated throughout the entire software development life cycle. As soon as requirements are complete, they are tested. As soon as the design is complete, the requirements are walked through the design to ensure that they can be met by the design. As soon as the code is constructed and reviewed, it is tested as usual. But because testing begins at the requirements phase, many defects are avoided instead of being tested out of the code.

This is a less costly and more timely approach. User manuals and training materials can be developed sooner. The entire software development life cycle is compressed. Testing is performed in parallel with development instead of all at the end, so testing is no longer a bottleneck. There are fewer surprises when the code is delivered (see Figure 3).



Figure 3: Life Cycle With Testable Requirements and Integrated Testing
The RBT Methodology

The RBT methodology is a 12-step process. Each of these steps is described below.

1. Validate requirements against objectives. Compare the objectives, which describe why the project is being initiated, to the requirements, which describe what is to be delivered. The objectives define the success criteria for the project. If the what does not match the why, then the objectives cannot be met, and the project will not succeed. If any of the requirements do not achieve the objectives, then they do not belong in the project scope.
2. Apply use cases against requirements. Some organizations document their requirements with use cases. A use case is a task-oriented users' view of the system. The individual requirements, taken together, must be capable of satisfying any use-case scenarios; otherwise, the requirements are incomplete.
3. Perform an initial ambiguity review. An ambiguity review is a technique for identifying and eliminating ambiguous words, phrases, and constructs. It is not a review of the content of the requirements. The ambiguity review produces a higher-quality set of requirements for review by the rest of the project team.
4. Perform domain expert reviews. The domain experts review the requirements for correctness and completeness.
5. Create cause-effect graph. The requirements are translated into a cause-effect graph, which provides the following benefits:
* It resolves any problems with aliases (i.e., using different terms for the same cause or effect).
* It clarifies the precedence rules among the requirements (i.e., what causes are required to satisfy what effects).
* It clarifies implicit information, making it explicit and understandable to all members of the project team.
* It begins the process of integration testing. The code modules eventually must integrate with each other. If the requirements that describe these modules cannot integrate, then the code modules cannot be expected to integrate. The cause-effect graph shows the integration of the causes and effects.
6. Logical consistency checks performed and test cases designed. A tool identifies any logic errors in the cause-effect graph. The output from the tool is a set of test cases that are 100 percent equivalent to the functionality in the requirements.
7. Review of test cases by requirements authors. The designed test cases are reviewed by the requirements authors. If there is a problem with a test case, the requirements associated with the test case can be corrected and the test cases redesigned.
8. Validate test cases with the users/domain experts. If there is a problem with the test case, the requirements associated with it can be corrected and the test case redesigned. The users/domain experts obtain a better understanding of what the deliverable system will be like. From a Capability Maturity Model Integration (CMMI) perspective, you are validating that you are building the right system.
9. Review of test cases by developers. The test cases are also reviewed by the developers. By doing so, the developers understand what they are going to be tested on, and obtain a better understanding of what they are to deliver so they can deliver for success.
10. Use test cases in design review. The test cases restate the requirements as a series of causes and effects. As a result, the test cases can be used to validate that the design is robust enough to satisfy the requirements. If the design cannot meet the requirements, then either the requirements are infeasible or the design needs rework.
11. Use test cases in code review. Each code module must deliver a portion of the requirements. The test cases can be used to validate that each code module delivers what is expected.
12. Verify code against the test cases derived from requirements. The final step is to build test cases from the logical test cases that have been designed by adding data and navigation to them, and executing them against the code to compare the actual behavior to the expected behavior. Once all of the test cases execute successfully against the code, then it can be said that 100 percent of the functionality has been verified and the code is ready to be delivered into production. From a CMMI perspective, you have verified that you are building the system right.

An Ambiguity Review

Here is a sample of a requirement written in first draft. It is not testable because it contains ambiguities.

ATMs shall send an alert to the information technology (IT) department when the ATM has been tampered with. In the event that the ATM is opened without the key and security code, the ATM will alert the IT department immediately so the appropriate action can be taken.

After performing an ambiguity review of the requirements, the following ambiguities are identified:

* What type of alert does the ATM issue to the IT department?
* What is the definition of tampered with?
* Is tampered with the same as "in the event that the ATM is opened without the key and security code?"
* What happens if the key is used and an invalid security code is entered?
* What is the alert text?
* What is the appropriate action?

The requirements are revised so that the ambiguities are eliminated. The requirements are now testable.

ATMs shall send a tamper alert to the IT department when the ATM has been tampered with, i.e., opened without the key and the valid security code.

Case 1: (1) If the service operator enters the key into the ATM, then the following message displays on the ATM console: "Please enter the valid security code." (2) If the service operator enters the valid security code, then the ATM opens.

Case 2: After entering the key in the ATM, if the service operator enters an incorrect security code, then (1) the following message displays on the ATM console: "Security Code invalid. Please reenter." (2) The service operator now has three tries to enter the valid security code. If a valid security code is entered in less than or equal to three tries, then the ATM is opened. Each time an invalid security code is entered, the following message is displayed on the ATM console: "Security code invalid. Please re-enter."

Case 3: If a valid security code has not been entered by the third try, then (1) the following message displays on the ATM console: "Security code invalid. The IT department will be notified." (2) The ATM alerts the IT department immediately.

Case 4: In the event that the ATM is opened without the key and the valid security code, then the ATM sends a tamper alert to the IT department immediately.
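To show how the rewritten requirement translates directly into executable checks, here is a small illustrative Python sketch of Cases 1 through 4. The function name, return values, and the sample valid code are invented for the example; only the console messages and the three-try rule come from the requirement above.

def atm_open_attempt(key_used, codes_entered, valid_code="1234"):
    # Illustrative model of Cases 1-4. Returns (atm_opened, it_alerted, console_messages).
    messages = []
    if not key_used:
        # Case 4: opened without the key and the valid security code -> tamper alert.
        return False, True, messages
    messages.append("Please enter the valid security code.")  # Case 1 (1)
    for attempt, code in enumerate(codes_entered[:3], start=1):
        if code == valid_code:
            return True, False, messages                      # Case 1 (2) / Case 2 (2)
        if attempt < 3:
            messages.append("Security code invalid. Please re-enter.")  # Case 2 (1)
        else:
            messages.append("Security code invalid. The IT department will be notified.")  # Case 3 (1)
            return False, True, messages                      # Case 3 (2)
    return False, False, messages  # fewer than three attempts made; ATM stays closed

# Test cases derived straight from the requirement:
assert atm_open_attempt(True, ["1234"])[0] is True            # valid code on first try
assert atm_open_attempt(True, ["0000", "1234"])[0] is True    # valid code on second try
assert atm_open_attempt(True, ["0000", "1111", "2222"])[1] is True  # IT alerted after third failure
assert atm_open_attempt(False, [])[1] is True                 # Case 4: forced open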

A Cause-Effect Graphing Example

Consider a check-debit function whose inputs are new balance and account type, which is either postal or counter, and whose output is one of four possible values:

* Process debit and send out letter.
* Process debit only.
* Suspend account and send out letter.
* Send out letter only.

The function has the following requirements and is testable:

* If there are sufficient funds available in the account to be in credit, or the new balance would be within the authorized overdraft limit, then process the debit.
* If the new balance is below the authorized overdraft limit, then do not process the debit, and if the account type is postal, then suspend the account.
* If a) the transaction has an account type of postal or b) the account type is counter and there are insufficient funds available in the account to be in credit, then send out letter.

The causes for the function are as follows:

* C1 - New balance is in credit.
* C2 - New balance is in overdraft, but within the authorized overdraft limit.
* C3 - Account type is postal.

The effects for the function are as follows:

* E1 - Process the debit.
* E2 - Suspend the account.
* E3 - Send out letter.

A cause-effect graph shows the relationships between the conditions (causes) and the actions (effects) in a notation similar to that used by designers of hardware logic circuits. The check-debit requirements are modeled by the cause-effect graph shown in Figure 4. C1 and C2 cannot be true at the same time.



Figure 4: Cause-Effect Graph

The cause-effect graph is converted into a decision table. Each column of the decision table is a rule. The table comprises two parts. In the top part, each rule is tabulated against the causes. A T indicates that the cause must be TRUE for the rule to apply and an F indicates that the condition must be FALSE for the rule to apply. In the bottom part, each rule is tabulated against the effects. A T indicates that the effect will be performed; an F indicates that the effect will not be performed; an asterisk (*) indicates that the combination of conditions is infeasible and so no effects are defined for the rule. The check-debit function has the decision table shown in Table 1.



Table 1: Decision Table

Only test cases one through five in Table 1 are required to provide 100 percent functional coverage. Rule No. 6 does not provide any new functional coverage that has not already been provided by the other five rules, so a test case is not required for rule No. 6. No test cases are generated for rule Nos. 7 and 8 because they describe infeasible conditions since C1 and C2 cannot be true at the same time. The final set of test cases with sample-data values is described in Table 2.
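Before moving on to Table 2, the decision-table logic can be cross-checked mechanically. The short Python sketch below (an illustration, not part of the original article) enumerates every combination of the three causes, skips the infeasible ones where C1 and C2 are both true, and evaluates the effects exactly as stated in the three requirements, reproducing the six feasible rules of Table 1.

from itertools import product

def tf(value):
    return "T" if value else "F"

def effects(c1, c2, c3):
    # Evaluate the check-debit effects from the three requirements.
    e1 = c1 or c2                        # E1: process the debit
    e2 = (not c1 and not c2) and c3      # E2: suspend the account
    e3 = c3 or (not c3 and not c1)       # E3: send out letter
    return e1, e2, e3

print("C1 C2 C3 | E1 E2 E3")
for c1, c2, c3 in product([True, False], repeat=3):
    if c1 and c2:
        continue  # infeasible: the balance cannot be in credit and in overdraft at once
    e1, e2, e3 = effects(c1, c2, c3)
    print(f" {tf(c1)}  {tf(c2)}  {tf(c3)} |  {tf(e1)}  {tf(e2)}  {tf(e3)}")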



Table 2: Test Cases
Real-Life Problem Test Cases

With a real-life problem, there are usually far more than three inputs (causes). As an example, in one application where RBT was applied, there were 37 inputs. This allowed a maximum of 2**37, or 137,438,953,472 possible test cases. RBT resolved the problem with 22 test cases that provided 100 percent functional coverage.

Consider the following thought experiment: Put 137,438,953,450 red balls in a giant barrel. Add 22 green balls to the barrel and mix well. Turn out the lights. Pull out 22 balls. What is the probability that you have selected all 22 of the green balls? If this does not seem likely to you, try again. Return the balls and pull out 1,000 balls. What is the probability that you now have selected all 22 of the green balls? If this still does not seem likely to you, try again. Return the balls and pull out 1,000,000 balls. What is the probability that you now have selected all 22 of the green balls? This is what gut-feel testing really is.
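To put numbers on the thought experiment, the probability of drawing all 22 green balls in a random sample can be computed directly. Here is a quick sketch using the hypergeometric formula; the printed figures are illustrative of just how unlikely random selection is to hit every useful test case.

N = 2 ** 37   # 137,438,953,472 balls in the barrel (all possible test cases)
GREEN = 22    # the test cases that actually provide coverage

def prob_all_green(total, green, draws):
    # Hypergeometric: P(a random sample of `draws` balls contains every green ball).
    p = 1.0
    for i in range(green):
        p *= (draws - i) / (total - i)
    return p

for draws in (22, 1_000, 1_000_000):
    print(f"{draws:>9} draws: P(all 22 green) = {prob_all_green(N, GREEN, draws):.2e}")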

For most complex problems it is impossible to manually derive the right combination of test cases that covers 100 percent of the functionality. The right combination of test cases is made up of individual test cases, and each covers at least one type of error that none of the other test cases covers. Taken together, the test cases cover 100 percent of the functionality. Any more test cases would be redundant because they would not catch an error that is already covered by an existing test case.

Gut-feel testing often focuses only on the normal processing flow. Another name for this is the go path. Gut-feel testing often creates too many (redundant) test cases for the go path. Gut-feel testing also often does not adequately cover all the combinations of error conditions and exceptions, i.e., the processing off the go path. As a result, gut-feel testing suffers when it comes to functional coverage.
Summary

In summary, the RBT methodology delivers maximum coverage with the minimum number of test cases. This translates into 100 percent functional coverage and approximately 70 percent to 90 percent code coverage. RBT also provides quantitative test progress metrics within the 12 steps of the RBT methodology, ensuring that testing is adequately provided and is no longer a bottleneck. Logical test cases are designed and become the basis for highly portable capture/playback test scripts.
Note

1. A functional requirement specifies what the system must be able to do in terms that are meaningful to its users. A nonfunctional requirement specifies an aspect of the system other than its capacity to do things. Examples of nonfunctional requirements include those relating to performance, reliability, serviceability, availability, usability, portability, maintainability, and extendibility.