Thursday, December 30, 2010

Difference between Change-Related Software Testing: Confirmation Testing & Regression Testing

During software testing we encounter failures. In most cases the underlying defects are corrected and the corrected test object is handed back to the testers. This is the point where we iterate in the test process and return to test execution: we go back to perform confirmation testing and regression testing.

Confirmation testing and regression testing are important activities in test execution. They can appear in all the test levels from component testing to (one hopes rarely) acceptance testing and even during maintenance of a product in operation.

These two types of change-related software testing activities have one thing in common: they are executed after defect correction. Apart from that, they have very different goals.



In the figure above, the test object with a defect is shown on the left. The defect has been discovered during the software testing activity. It has subsequently been corrected and we have received the corrected test object back for testing; this is the one on the right.

What we must do now is confirmation testing and regression testing of the corrected test object.

Confirmation Testing:
Confirmation testing is the first to be performed after defect correction. It is done to ensure that the defect has indeed been successfully removed. The test that originally unveiled the defect by causing a failure is executed again and this time it should pass without problems. This is illustrated by the dark rectangle in the place where the defect was.

Regression Testing:

Regression testing may and should then be performed.

Regression testing is the repetition of tests that have already been performed without problems, to confirm that no new defects have been introduced or uncovered by the change. In other words, it ensures that the object under test has not regressed.
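
To make the distinction concrete, here is a minimal pytest-style sketch; the function under test and the test names are hypothetical, chosen only to illustrate the two activities.

from decimal import Decimal

# Hypothetical function under test, after the defect has been corrected.
def apply_discount(price: Decimal, percent: Decimal) -> Decimal:
    return price - price * percent / Decimal(100)

def test_confirmation_discount_applied():
    # Confirmation test: the test case that originally revealed the
    # defect is executed again and must now pass.
    assert apply_discount(Decimal("100"), Decimal("10")) == Decimal("90")

def test_regression_zero_discount_unchanged():
    # Regression test: a test that already passed before the correction
    # is repeated to confirm the fix has not broken existing behaviour.
    assert apply_discount(Decimal("100"), Decimal("0")) == Decimal("100")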

The following example shows a case of regression:

A fault in a document was corrected using "replace all" to change the word "Author" to the word "Speaker", which had an unintended effect in one of the paragraphs (the snippet after the quote reproduces the effect):

"… If you are providing the Presentation as part of your duties with your company or another company, please let me know and have a Speakerized representative of the company also sign this Agreement."

The amount of regression testing can vary from a complete rerun of all the test procedures that have already passed, to, well, in reality, no regression testing at all. The amount depends on issues such as:

1) The risk involved

2) The architecture of the system or product

3) The nature of the defect that has been corrected

The amount of regression testing we choose to do must be justified in accordance with the strategy for the test.

Regression testing should be performed whenever something in or around the object under test has changed. Fault correction is an obvious reason, but there can also be other, more external or environmental changes that should make us consider regression testing.

An example of an environment change could be the installation of a new version of the underlying database administration system or operating system. Experience shows that such updates may have the strangest effects on systems or products previously running without problems.

Wednesday, December 22, 2010

Escaped Defects Found

Definition

An escaped defect is a defect that was not found by, or one that escaped from, the quality assurance team. Typically, these issues are found by end users after the released version has been made available to them. The Escaped Defects Found metric counts the number of new escaped defects found over a period of time (day, week, month).
Calculation
To be able to calculate this metric, it is important that your defect tracking system tracks:

* Affected version: the version of the software in which the defect was found.
* Release date: the date when that version was released.

The calculation process (a minimal code sketch follows the list):

* Find all versions that have already been released.
* For each version, find all defects that affect that version.
* If a defect's creation date is after the version's release date, that defect is an escaped defect.
* Count all those escaped defects.
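
A minimal sketch of this calculation, assuming the defect records have been exported from the tracking system into plain Python objects; all names, versions and dates below are illustrative.

from dataclasses import dataclass
from datetime import date

@dataclass
class Defect:
    affected_version: str
    created: date

# Release dates of versions that have already shipped (illustrative data).
release_dates = {"1.0": date(2010, 6, 1), "1.1": date(2010, 9, 15)}

defects = [
    Defect("1.0", date(2010, 5, 20)),  # created before release: not escaped
    Defect("1.0", date(2010, 7, 3)),   # created after release: escaped
    Defect("1.1", date(2010, 10, 1)),  # created after release: escaped
]

escaped = [
    d for d in defects
    if d.affected_version in release_dates
    and d.created > release_dates[d.affected_version]
]
print(len(escaped))  # Escaped Defects Found: 2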

Dimensions
It should be possible to monitor Escaped Defects Found along the following dimensions (a small sketch follows the list):

* Affected version: to monitor the metric value for any released version.
* Project/product: to aggregate Escaped Defects Found over all released versions of the project or product.
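
Reusing the escaped list from the calculation sketch above, slicing by these dimensions takes only a line or two (again purely illustrative):

from collections import Counter

per_version = Counter(d.affected_version for d in escaped)
print(per_version)                 # per affected version, e.g. Counter({'1.0': 1, '1.1': 1})
print(sum(per_version.values()))   # aggregated over the whole product: 2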

Usage
I like this metric because tracking it sends the right message to the development team: functionality that is released should be of good quality. Fewer escaped defects almost always means the QA team is doing a good job.

Another important reason to keep the number of escaped defects low is that fixing a single escaped software defect can take from one week to several weeks of effort, since you have to include the time to isolate, repair, check out, retest, reconfigure and redistribute. Fixing a defect inside the development iteration is always much cheaper.
Presentation
The best presentation is a simple bar chart, where each bar shows the number of escaped defects found per day/week/month/quarter/year. Depending on your requirements, you may want to create this chart for the whole company or for each product. Also, to compare the quality of different versions, it can make sense to show a bar chart with the current escaped defect counts for each version of the product.
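
A minimal matplotlib sketch of such a bar chart; the monthly counts are made up purely for illustration.

import matplotlib.pyplot as plt

months = ["Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
escaped_per_month = [4, 2, 5, 1, 3, 2]   # illustrative values

plt.bar(months, escaped_per_month)
plt.title("Escaped Defects Found per month")
plt.xlabel("Month")
plt.ylabel("Escaped defects")
plt.show()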

Monday, December 13, 2010

Maximizing QA Performance

An important element of any IT strategy is to ensure deployment of defect free systems. Among other benefits, this helps minimize significantly the total cost of ownership (TCO) of applications. However, organizations quickly discover that despite their best intentions and efforts, their QA team has hit a ceiling from a defect leakage standpoint. It seems as if an invisible hurdle is preventing the QA team from achieving its true potential - deploying defect free systems. "So what are we not seeing in this picture?"

While tool based automation or customized test automation is definitely beneficial in lowering defects, it is an option which requires significant investments. Hence organizations, with their budgets stretched thin, are constantly on the lookout for inexpensive options. Maximizing the performance of existing QA teams is often the 'low hanging fruit'. In an attempt to squeeze the maximum out of the QA teams, organizations often overlook a few extraneous factors that play havoc with the team's testing effectiveness. Without an eye for these factors, they can go unnoticed and continue to bleed the organization.

Based on our extensive and varied experience in test engagements, we have identified a few factors which we believe can help QA teams unlock their true potential. Of course, before we start we would like to quote an old saying: "To improve the condition we need to first start measuring it." In line with this quote, we recommend organizations begin by collecting metrics that indicate the current health of their QA programs. Most companies miss this basic, but critical, point in improving processes. Once we have measured the current state of the QA programs, we are all set to further analyze the shortcomings and maximize the QA performance of teams:
How many releases does the application go through in a year?
This provides a good indication of whether sufficient time is being allocated for every QA iteration. Across engagements we have seen clients plan from as many as 12 releases in a year to as few as 3 in a year. It is essential for IT teams to plan the number of releases in combination with business teams, such that only releases which are necessitated by business needs are included. Allowing the QA team to 'huddle and regroup' before the next iteration is extremely important, and the time needed for QA iterations must not be viewed solely as test script execution time. Root cause analysis, test script updates based on requirements at the beginning, and UAT result analysis at the end of the test cycle are crucial steps in the QA process too and demand equal attention. Through these steps the QA team is able to refine its processes and prevent a recurrence of defects. Frequent releases compromise the effectiveness of these crucial steps and curtail the feedback loop, limiting the improvement in defect leakage. Hence, every application should have an optimum number of releases in a year and this should not be exceeded.

The best way to avoid having more than the optimal number of releases in a year is to educate the business team on the pros and cons of increasing or decreasing the release frequency. It is absolutely essential that the business and QA teams agree upon a plan keeping in consideration the resources available to the QA team.
How many 'code drops' are allowed per release?
Often, frequent changes to the requirements result in increased instances of 'code drops' within a release. At times the frequency of code drops is not even planned, making it almost impossible for QA teams to test effectively before a fresh code drop occurs. The solution lies in defining requirements freeze dates, post which no new change requests should be accommodated.
Is there a large variation in the size of every release?
The frequency of a release by itself will not set off any alarms; it needs to be considered in conjunction with the size of the release. The size of the release directly impacts the number of enhancements to be made to test scripts, some of them permanent, which need to be diligently incorporated into the QA plan. Further, releases which modify the base or common functionalities of a system need to be viewed as large releases even if their enhancement time is not large. This is because such releases have sweeping implications on the system, and improper assessment of the size of the release from a QA standpoint can lead to heightened levels of defect leakage. Further, variations in the size of the release lead to variations in the size of the QA team, which makes effective knowledge management a huge challenge. Hence we strongly recommend that business and IT teams plan uniformly sized releases as much as possible. This can be done by prioritizing and limiting the change requests (CRs), which will help in reducing defect leakage rates and better management of the QA team.
How process oriented is the development team and how early is the QA team engaged in the SDLC?
While defect detection during the testing phase is the onus of the QA team, the team in charge of development also plays a significant role in the performance of the QA team. Clear sign-offs during the development process and waterfall execution of lifecycles help provide the QA team with unambiguous points of entry into the development process. Further, even though the role of the QA team starts much later in the SDLC, the involvement of QA teams in the requirements phase can make them far more effective. The distinct advantage gained by having a better view of the system enhancements to be built allows the QA team to plan its tasks more effectively.
Are you providing the QA team with an interference free environment?
QA teams need environments (independent servers, database instances, test data, etc.) that demand investments on a continuous basis from the organization. However, organizations are faced with the twin dilemma of reducing costs and simultaneously ensuring that sufficient investments in IT infrastructure have been made. In the bargain, QA teams are not always provided with an independent environment for testing. Dedicated test environments ensure that QA execution plans are not upset and testing proceeds without interference from external factors.
Have you staffed the QA team with the right onsite-offshore leverage?
In a highly globalized world where offshore teams are a given, the onsite-offshore ratio for a QA team determines the efficiency of the QA processes. Maintaining a low onsite presence creates bottlenecks; consequently, the reviews of the enhancement documents (or change requests), test script vetting with the UAT team, etc. suffer. At the same time, a big onsite team may not always help realise the cost benefits of an offshore model either. Hence it is recommended that QA teams begin with 25-30% onsite staff and gradually target aggressive onsite ratios as low as 15-20%.
How well do you manage knowledge erosion?
This is especially important for large and complex applications. QA teams often struggle because of the knowledge erosion associated with planned or unplanned attrition in the team. In teams where staffing size fluctuates, either due to varying sizes of releases or due to attrition, it is important to retain a core group of experts as the base team. An extended team, built around this base group, can then be engaged or released depending on the demands of the project. Of course, while such a model would help manage knowledge erosion to a large extent, it would need to be bolstered further by strong Knowledge Management (KM) practices.
Finally, while the above mentioned factors can help improve QA performance, it is important to track the QA program using objective metrics. Some of the metrics which we believe need to be tracked closely are:
a) Defect Leakage: This measures the capability of the QA team in detecting defects/bugs. It is important to track this number accurately, assigning specific reasons for each defect. All issues reported during QA may not necessarily be defects. Categorizing the defects as "Valid Defects", "Not a Defect", "Duplicate", "Changes in Specifications" and "Recommendations" helps identify the true performance of the QA team. The unsaid part, of course, is that there needs to be a robust defect tracking system already in place for formal recording, tracking and subsequent analysis.

b) Release frequency: This indicates the number of releases that an application is subjected to in a normal year.

c) Code drops per release: Refers to the number of builds during a release.

d) Schedule/Effort Adherence: Measures how well the QA team is performing vis-à-vis the planned milestones and bandwidth of resources.

e) 'Not-a-bug' percentage: This indicates the number of issues logged by the QA team that are not defects, as a percentage of the total reported defects. This metric is very important since it is an indication of the unproductive effort expended by the development team in investigating a non-defect. It is also an indication of the need to update the documentation or improve test scripts. (A minimal calculation sketch for two of these metrics follows this list.)
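
As a small illustration, Defect Leakage and the 'Not-a-bug' percentage can be computed from raw counts as follows; the formulas shown are one common formulation and every number is assumed rather than taken from a real project.

defects_found_in_qa = 120          # valid defects caught during QA
defects_found_in_production = 8    # escaped (leaked) defects
issues_reported_by_qa = 150        # everything QA logged
not_a_defect = 18                  # issues later classified as "Not a Defect"

# Defect leakage: share of all valid defects that escaped to production.
defect_leakage = 100.0 * defects_found_in_production / (
    defects_found_in_qa + defects_found_in_production)

# 'Not-a-bug' percentage: non-defects as a share of all reported issues.
not_a_bug_pct = 100.0 * not_a_defect / issues_reported_by_qa

print(f"Defect leakage:  {defect_leakage:.1f}%")   # 6.2%
print(f"Not-a-bug rate:  {not_a_bug_pct:.1f}%")    # 12.0%
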
The environmental factors discussed in this document, while hidden from the immediate field of view of the team, greatly influence the outcome of the QA efforts. Hence they have to be objectively assessed and tracked to understand the influence they exercise on QA performance. By learning from them and adopting appropriate measures, QA teams can be helped to realize their true performance potential.

Monday, December 6, 2010

Concept testing


It is important to make a distinction between the different types of testing applied at different stages of the development process. This helps the development team to understand the purpose of each test and consider how data is to be captured.

Description
Different testing methods will have different objectives, approaches and types of modelling. Four general types of testing are described in more detail:

Exploratory tests
Assessment tests
Validation tests
Comparison tests
ISO 9000 tests are also briefly summarised.


Exploratory tests
Carried out early in the development process during the fuzzy front end, when the problem is still being defined and potential solutions are being considered, preferably once the development team has a good understanding of the user profile and customer needs. The objective of the exploratory test is to examine and explore the potential of preliminary design concepts and answer some basic questions, including:

What do the users think about using the concept?
Does the basic functionality have value to the user?
Is the user interface appropriate and operable?
How does the user feel about the concept?
Are our assumptions about customer requirements correct?
Have we misunderstood any requirements?
This type of early analysis of concepts is potentially the most critical of all types of prototyping and evaluation, for if the development is based on faulty assumptions or misunderstanding about the needs of the users, then problems are almost inevitable later on. Data collection will tend to be qualitative based on observation, interview and discussion with the target audience. Ideally, the customer should be asked to use the product without training or prompting, to assess the intuitiveness of controls and instructions. Some quantitative measures may be appropriate, such as time to perform tasks, number of failures or errors.


Assessment tests
While the exploratory test aims to explore the appropriateness of a number of potentially competing solutions, the assessment test digs into more detail with a preferred solution at a slightly later stage of development. The main aim of an assessment test is to ensure that assumptions remain relevant and that more detailed and specific design choices are appropriate. The assessment test will tend to focus on the usability or level of functionality offered and, in some cases, may be appropriate for evaluating early levels of performance. Assuming that the right concept has been chosen, the assessment test aims to ensure that it has been implemented effectively and answer more detailed questions, such as:

Is the concept usable?
Does the concept satisfy all user needs?
How does the user use the product and could it be more effective?
How will it be assembled and tested and could this be achieved in a better way?
Can the user complete all tasks as intended?
Assessment testing typically requires more complex or detailed models than the exploratory test. A combination of analytical models, simulations and working mock ups (not necessarily with final appearance or full tooling) will be used.

The evaluation process is likely to be relatively informal, including both internal and external stakeholders. Data will typically be qualitative and based on observation, discussion and structured interviews. The study should aim to understand why users respond in the way that they do to the concept.

Validation tests
The validation test is normally conducted late in the development process to ensure that all of the product design goals have been met. This may include usability, performance, reliability, maintainability, assembly methods and robustness. Validation tests normally aim to evaluate actual functionality and performance, as is expected in the production version, and so activities should be performed in full and not simply walked through.

It is probable that the validation test is the first opportunity to evaluate all of the component elements of the product together, although elements may have been tested individually already. Thus, the product should be as near to representing the final item as possible, including packaging, documentation and production processes. Also included within validation tests will be any formal evaluation required for certification, safety or legislative purposes. Compared to an assessment test, there is a much greater emphasis on experimental rigour and consistency. It may be preferable for evaluation to be carried out independently from the design team, but with team input on developing standards and measurement criteria.

Data from a validation test is likely to be quantitative, based on measurement of performance. Normally, this is carried out against some benchmark of expected performance. Usability issues may be scored in terms of speed, accuracy or rate of use, but should always be quantified. Issues such as desirability may be measured in terms of preference or user ranking. Data should also be formally recorded, with any failures to comply with expected performance logged and appropriate corrective action determined.

Comparison tests
A comparison test may be performed at any stage of the design process, to compare a concept, product or product element against some alternative. This alternative could be an existing solution, a competitive offering or an alternative design solution. Comparison testing could include the capturing of both performance and preference data for each solution. The comparison test is used to establish a preference, determine superiority or understand the advantages and disadvantages of different designs.

ISO 9000 tests
ISO 9000 defines a number of test activities:

· Design review
A design review is a set of activities whose purpose is to evaluate how well the results of a design will meet all quality requirements. During the course of this review, problems must be identified and necessary actions proposed.

· Design verification
Design verification is a process whose purpose is to examine design and development outputs and to use objective evidence to confirm that outputs meet design and development input requirements.

· Design validation
Design validation is a process whose purpose is to examine resulting products and to use objective evidence to confirm that these products meet user needs.

Wednesday, December 1, 2010

Difference between Latent and Masked Defect.

A latent defect is an existing defect that remains uncovered or unidentified in a system for a period of time. It may be present in one or more versions of the software and might only be identified after release.

Such problems do not cause any damage at present, but wait to reveal themselves at a later time.

A masked defect is an existing defect that has not been detected at a given point in time because another defect hides it: the masking defect prevents the code containing the masked defect from being executed or from producing a visible failure, so the masked defect is only revealed once the masking defect is fixed.
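
A tiny, purely hypothetical Python example of a masked defect: the bug in parse_amount never surfaces because an earlier bug in load_record prevents that code from ever being reached.

def load_record(line: str) -> dict:
    fields = line.split(";")               # Defect 1: the data is actually comma-separated
    if len(fields) != 2:
        raise ValueError("bad record")     # every call fails here while Defect 1 exists
    return {"name": fields[0], "amount": fields[1]}

def parse_amount(record: dict) -> int:
    return int(record["amount"])           # Defect 2: crashes on values like "12.50", but stays masked

# As long as Defect 1 is present, no test ever reaches parse_amount, so
# Defect 2 remains hidden. Fixing Defect 1 (split on ",") unmasks Defect 2.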