Monday, December 5, 2011

Be Creative: Bug-Hunting Tips

Be Creative: When the obvious bugs have already been reported, you have to be creative with your test ideas. Think of scenarios that have not yet been tried. This is where first understanding the duplicate bugs saves you time. It may sound hard, but time spent understanding the application and the scope of the test cycle is time well spent. This creativity not only helps you in this particular test cycle, but will prove useful in other situations as well. So the point you need to remember is this: don’t just look for simple bugs.

Go through the other reports that have been logged, try to reproduce the bugs, and see if you can dig deeper for more serious ones. Taking some time to read other testers’ reports does help generate quite a few new ideas.

Be Patient: This is one of the key attributes for a tester – not just for those working on uTest projects. Keep your cool, understand the application, and try to come up with test ideas that may yield some fresh insight. Because you are working with testers from across the globe, it is possible that a test idea you just thought of has already been tried by another tester, but that should not make you impatient.

Be an Explorer: Don’t just look for the simple bugs that need little exploration, although if you find them, that is all well and good. Explore the application by targeting specific parts of it. When you start exploring the application, you will begin to uncover the deeper issues instead of just the bugs on the surface.

While exploring the application, make sure that you are still very much in control of what you are trying to do. At times exploration leads you to a part of the application that you are not yet supposed to test, so be careful! Remember to read the scope of the test cycle before you do anything.

Be Agile: It is good to be agile, but it’s important not to lose your focus. Make sure that you are taking proper notes, capturing screenshots or videos, or even using tools like Session Tester.

So, if you are ready to put all of the above into practice, you are certainly going to have a good time working on test cycles. Not to mention that it also helps you plan and split your tasks as you move further along.

So, the next time you accept an invite and see bugs already logged, you’ll know what to do, right?

Friday, October 7, 2011

Tech Legend Inspirational Legacy: Steve Jobs Quotes

“Death is very likely the single best invention of Life. It is Life’s change agent. It clears out the old to make way for the new.”

As the world mourns the passing of Steve Jobs, the media reminds us of all the amazing products he launched, from the personal computer to the iPod and iPhone. Beyond his innovative qualities and accomplishments, the legend of the digital age will also be remembered as a great showman and an inspiring speaker. Here are some of his best-known quotes, many of them from his famous 2005 Stanford commencement speech (video).
--------------------------------------------------------------------
http://www.youtube.com/watch?v=UF8uR6Z6KLc&feature=player_embedded#!
--------------------------------------------------------------------

On death. “Remembering that I’ll be dead soon is the most important tool I’ve ever encountered to help me make the big choices in life. Because almost everything — all external expectations, all pride, all fear of embarrassment or failure — these things just fall away in the face of death, leaving only what is truly important. Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose.” (Periscope Post)

“No one wants to die. Even people who want to go to heaven don’t want to die to get there. And yet death is the destination we all share. No one has ever escaped it. And that is as it should be, because Death is very likely the single best invention of Life. It is Life’s change agent. It clears out the old to make way for the new.” (The Wall Street Journal)

On life. “Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle. As with all matters of the heart, you’ll know when you find it. And, like any great relationship, it just gets better and better as the years roll on. So keep looking until you find it. Don’t settle.” (News One)

“You have to trust in something - your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.” (The Hindu)

“Your time is limited, so don’t waste it living someone else’s life.” (The Hindu)

On money. “Being the richest man in the cemetery doesn’t matter to me. ... Going to bed at night saying we’ve done something wonderful, ... that’s what matters to me.”

"I’m the only person I know that’s lost a quarter of a billion dollars in one year…. It’s very character-building." (Exception Mag)

On business. "My model for business is The Beatles: They were four guys that kept each other's negative tendencies in check; they balanced each other. And the total was greater than the sum of the parts. Great things in business are not done by one person, they are done by a team of people." (Exception Mag)

On computers. “It takes these very simple-minded instructions – ‘Go fetch a number, add it to this number, put the result there, perceive if it’s greater than this other number’ – but executes them at a rate of, let’s say, 1,000,000 per second. At 1,000,000 per second, the results appear to be magic.” (The Wall Street Journal)

On design. “Look at the design of a lot of consumer products — they’re really complicated surfaces. We tried to make something much more holistic and simple. When you first start off trying to solve a problem, the first solutions you come up with are very complex, and most people stop there. But if you keep going, and live with the problem and peel more layers of the onion off, you can often times arrive at some very elegant and simple solutions. Most people just don’t put in the time or energy to get there. We believe that customers are smart, and want objects which are well thought through.” (The Wall Street Journal)

Thursday, September 29, 2011

Change management: Agile adoption with knowledge, attitude and action

“Employee communication has become a competitive advantage for companies,” says communication specialist Terry McKenzie, who once served as a senior director of global employee communications and communities at Sun Microsystems. While at Sun, McKenzie introduced a communication goal tool referred to as the KAA model using knowledge, attitude and action to facilitate organizational change. In this tip, we’ll take a look at how the KAA model works and how it might be applied to organizations that are transitioning to an Agile software development environment.

Using the KAA model to help in facilitating change

The basic principles of KAA are straightforward. For each of the three areas – knowledge, attitude and action – you look at your current state and your desired end state. You then put together a strategy that will help you reach your end state in each goal area. McKenzie emphasizes that just telling employees about a change is not enough. You must actively communicate with the employees by discussing the change, getting their input and following up on their questions or suggestions.

Seven questions McKenzie suggests to ask about change are:

1. What is it and why is it needed?

2. What does success look like?

3. What will the results be, and how will they impact me?

4. How will the change be supported?

5. How are concerns being handled?

6. How will it be rolled out?

7. What will you do to make it stick?


Let’s take a look at how this tool might be applied in organizations that are adopting a new Agile methodology, such as Scrum.

Knowledge of Scrum

You’ll start by assessing the existing knowledge of Scrum within your organization. You will want to take a look at the skill sets and experience level of your project team members. Your desired end state may be that you would like all the project team members educated in the use of Scrum. You may want to go beyond just the team members, however, and make sure that even those who aren’t directly on the Scrum team receive some amount of training so that everyone is speaking a common language.

Put together a plan of the level of knowledge of Scrum that you think will be necessary and then put together a training plan that will help you reach your knowledge goals. Perhaps it will include outside training or Agile coaches. In the tip, Adopting Agile: Eight traction tips to make Agile development stick, we find that Howard Deiner stresses the importance of training the entire organization, saying that a foundation is important for everyone. Other Agile luminaries such as Scrum coach Jean Tabaka and Scott Ambler also speak frequently of the importance of education and learning from experienced mentors when it comes to an Agile transition.

Attitude about Agile adoption

When it comes to attitude, again, we start by assessing the current feelings of the people affected by the change. Team members may be very excited about an Agile adoption or they may be resistant. “Some people take to certain changes very easily and other people don’t. And if they don’t, and you want that change to happen, you have to figure out, ‘What is it that keeps them from changing?’ Almost invariably they’re worried about losing something they value. So what is that thing that they value? The best way is to ask them,” says Agile expert George Dinwiddie in an interview about cultural change at the 2011 Agile Development Practices West conference.

In the tip, Real world Agile: Gaining internal acceptance of Agile methodologies, we hear from Scrum Masters who have struggled with the task of gaining buy-in from people, both managers and team members, who are not ready to jump on the Agile bandwagon. Matt Weir found that those who were once the biggest resistors ended up becoming his biggest allies in the effort to gain acceptance. “The skeptics were the ones who were asking the right questions and came up with some really great ideas,” he says.

Action in Agile adoption

Along with the knowledge and the attitude, you have to evaluate the actions that are taking place regarding your Agile transition. Is your entire organization adopting Agile, or are you starting small and transitioning one project at a time? Where are you now and where do you want to be? What will your roadmap look like?

Though it may be tempting to take the ‘big bang’ approach, some Agile consultants recommend against this. “Friction between those who want a new way to do things, and folks who prefer the old way, is the cause of a fair amount of conflict, strife, and failure in Scrum adoption,” says Matt Heusser in his tip, Large-scale Agile: Making the transition to Scrum. When it comes to Agile transition, he recommends not doing it all at once, “but instead incremental, done in a way that respects people and gives them options.”

Conclusion

Regardless of whether your entire organization has moved to Agile or you are starting slowly with a few teams, you need to continually evaluate how the team is doing by using retrospectives. The Agile methodologies include processes allowing for continual improvement, so take the time after each iteration to evaluate the team’s competencies in terms of knowledge, attitude and action. How close is the team to reaching desired end states?

Though we used the example of Agile adoption to step through use of the KAA model, this model can be applied to any type of change. Take the time to have conversations with people that will be affected by change. Listen to their concerns and work with them to address those concerns. If you keep an eye on knowledge, attitude and action, communicating effectively and openly about your goals in each area, you’ll gain the support needed for effective change.

Monday, September 26, 2011

What are the main activities in Scrum?

The sprint itself is the main activity of a Scrum project. Scrum is an iterative and incremental process and so the project is split into a series of consecutive sprints. Each is timeboxed, usually to between one week and a calendar month. A recent survey found that the most common sprint length is two weeks. During this time the team does everything to take a small set of features from idea to coded and tested functionality.

The first activity of each sprint is a sprint planning meeting. During this meeting the product owner and team talk about the highest-priority items on the product backlog. Team members figure out how many items they can commit to and then create a sprint backlog, which is a list of the tasks to perform during the sprint.

On each day of the sprint, a daily scrum meeting is attended by all team members, including the ScrumMaster and the product owner. This meeting is timeboxed to no more than fifteen minutes. During that time, team members share what they worked on the prior day, will work on today, and identify any impediments to progress. Daily scrums serve to synchronize the work of team members as they discuss the work of the sprint.

At the end of a sprint, the team conducts a sprint review. During the sprint review, the team demonstrates the functionality added during the sprint. The goal of this meeting is to get feedback from the product owner or any users or other stakeholders who have been invited to the review. This feedback may result in changes to the freshly delivered functionality. But it may just as likely result in revising or adding items to the product backlog.

Another activity performed at the end of each sprint is the sprint retrospective. The whole team participates in this meeting, including the ScrumMaster and product owner. The meeting is an opportunity to reflect on the sprint that is ending and identify opportunities to improve in the new sprint.

Introduction to Scrum - An Agile Process

As a brief introduction, Scrum is an agile process for software development. With Scrum, projects progress via a series of iterations called sprints. Each sprint is typically 2-4 weeks long. While an agile approach can be used for managing any project, Scrum is ideally suited for projects with rapidly changing or highly emergent requirements such as we find in software development.


What is Scrum?

Scrum is an agile approach to software development. Rather than a full process or methodology, it is a framework. So instead of providing complete, detailed descriptions of how everything is to be done on the project, much is left up to the software development team. This is done because the team will know best how to solve the problem it is presented with. This is why, for example, a sprint planning meeting is described in terms of the desired outcome (a commitment to a set of features to be developed in the next sprint) instead of a set of Entry criteria, Task definitions, Validation criteria, and Exit criteria (ETVX) as would be provided in most methodologies.

Scrum relies on a self-organizing, cross-functional team. The scrum team is self-organizing in that there is no overall team leader who decides which person will do which task or how a problem will be solved. Those are issues that are decided by the team as a whole. The team is cross-functional so that everyone necessary to take a feature from idea to implementation is involved.

These agile development teams are supported by two specific individuals: a ScrumMaster and a product owner. The ScrumMaster can be thought of as a coach for the team, helping team members use the Scrum framework to perform at their highest level. The product owner represents the business, customers or users and guides the team toward building the right product.

Scrum projects make progress in a series of sprints, which are timeboxed iterations no more than a month long. At the start of a sprint, team members commit to delivering some number of features that were listed on the project's product backlog. At the end of the sprint, these features are done--they are coded, tested, and integrated into the evolving product or system. At the end of the sprint a sprint review is conducted during which the team demonstrates the new functionality to the product owner and other interested stakeholders who provide feedback that could influence the next sprint.

Thursday, September 22, 2011

Logic and Software Testing

Summary: Formal logic is what runs computers, but it is only a part of the logic used by a software tester. In this installment of his ongoing series on philosophy and software testing, Rick Scott explains.


Finally, we come around to a branch of philosophy in this series that most people will immediately associate with software. Logic is what runs computers, right? After all, they are logical machines. However, the software we are testing often seems to behave in a way that is anything but logical. So, what exactly is logic, and how is it relevant to software testers?

What Is Logic?

We bandy the word “logic” around a great deal, but our mental definition of it tends to be a bit amorphous. While we have a good general idea of what logic is, we tend to conflate it with “thinking” and “reason.”

Thinking is, broadly, any mental process. Reason is a kind of thinking—specifically, a mental process of arriving at an answer to a question or of deciding amongst choices. Logic tries to enumerate rules for reasoning—rules that allow us to reason in an orderly manner and help to ensure our conclusions are sound. As such, logic is invaluable for ensuring that we make robust decisions, that we are systematic in our consideration of difficult issues, and that we can perceive the flaws in erroneous arguments before they mislead us.

Formal Logic

Formal logic (also known as “mathematical logic”) is the flavor of logic that comes to mind when we speak about computers. It’s based on “propositions” or “Boolean variables” that can be either true or false. These are combined with logical connectives, such as AND (true when all constituent propositions are true) and OR (true when any constituent proposition is true). This, along with conditionals (IF ... THEN), is the foundation of how computers “make choices” or “reason” at both the hardware and software levels. For example:

IF the user is logged in,
AND the user has the correct permissions,
THEN show the user the configuration page.

IF this exception is uncatchable,
OR we haven't provided a way to handle it,
THEN crash.

Testers make use of formal logic in ways beyond understanding the machines we work with, particularly when it comes to testing strategy. Whether we explicitly say so or not, we often plan our test steps and allocate our time using formal logic: IF performance is slow AND it’s slow in browsers other than Internet Explorer 6, THEN I'll spend more time on performance testing; otherwise, I'll devote more time to inspecting the new UI. In those blessed (albeit infrequent) scenarios where we can enumerate all the possible inputs and outputs of a test scenario, we can use truth tables to make sure we don't miss anything and sometimes tools such as Boolean algebra or Karnaugh maps to separate the inputs that should affect the outcome from the ones that shouldn't.
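The truth-table idea can be sketched in a few lines of code. The following Python example enumerates every combination of three Boolean test conditions for the configuration-page rule described earlier; the condition names are illustrative, not taken from any real system:

```python
from itertools import product

def show_config_page(logged_in, has_permission, account_locked):
    # IF logged in AND has the correct permissions AND the account
    # is not locked, THEN show the configuration page
    return logged_in and has_permission and not account_locked

# Enumerate the full truth table so no input combination is missed
table = []
for logged_in, has_permission, account_locked in product([False, True], repeat=3):
    table.append((logged_in, has_permission, account_locked,
                  show_config_page(logged_in, has_permission, account_locked)))

for row in table:
    print(row)
```

With three Boolean inputs there are exactly eight rows to check, and only one of them should show the page – a quick way to confirm that no test case has been forgotten.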

Informal Logic

As software professionals, half of our job is interfacing with technology and half of it is interfacing with people. While it is laudable to have found an important bug buried deep in an application’s dark recesses, it does no good for the users of the software if you can’t convince management or the development team that it is worthwhile to fix it.

Informal logic (sometimes called "persuasive logic") is how we form arguments and attempt to reason with each other in our everyday lives. As opposed to the mathematical structure of formal logic, it deals with argument and reasoning in natural language. An argument consists of one or more premises, a line of reasoning, and a conclusion reached thereby. Informal logic outlines what constitutes a sound premise and what constitutes valid reasoning so that the conclusions we reach are both justified and defensible.

As one of the core skills involved in bug advocacy, informal logic is invaluable to testers. It lets you argue why something is in fact a bug and why that bug is important enough to fix. In a broader context, informal logic is the tool that lets you advocate for any of the myriad ideas and initiatives that you come up with. It may be clear to you that the test team needs more resources or that one feature should be prioritized over another, but neither course of action is likely to happen unless you can present a convincing argument for it to someone in management.

Logical Fallacies: Learning by Counterexample

While studying the attributes of a sound argument seems like the most straightforward way to get a grasp of informal logic, learning about common logical fallacies is a more accessible route to take. The language used to describe them is colourful (as opposed to the rather dry theory often associated with formal logic), and you can usually come up with instances of their occurring in your life or workplace.

To illustrate, let’s see what specious arguments we can muster to rally support for a statement that is patently false. I’m going to make the claim that pigs can fly. When someone points out that they can’t, I can try to discredit him or her personally: “Do you have a degree in biology? And, didn’t you just tell a lie the other day?” This is called ad hominem (Latin, “to the man”). I’m attacking the person presenting the argument instead of refuting the argument’s substance. Alternatively, I can reply with the following: “Why do you think animals can't fly? Birds fly all the time.” This is a straw man. I’ve misrepresented my opponent’s argument, then attacked that misrepresentation instead of the actual argument.

When it comes to affirming my own dubious position, I have many options:

* Argumentum ad populum (“appeal to the people”)—“Ten thousand people believe that pigs can fly, so surely they can.” This is claiming that something is true because many people think it is.
* Argumentum ad verecundiam (“appeal to authority”)—“Professor Bloggins says pigs can fly.”
* Argumentum ad consequentiam (“appeal to the consequences”)—“A world where pigs can fly would be an awesome place, so I'll believe that they can.”
* Argumentum ad baculum (“appeal to force” or, literally, “appeal to the stick”)—“If you don’t agree that pigs can fly, I’ll punch you in the nose.”
* Argumentum ad nauseam—Finally, ridiculous though my case may be, if I continue pressing it long enough, everyone else will get sick of arguing with me.

Setting aside the arguments of questionable relevance we’ve made in favor of flying pigs, the varied types of faulty reasoning also constitute fallacies of their own.

Offering premises that give no support to one’s conclusion is called a non sequitur (“It does not follow”). “His dissertation was excellent, since he served doughnuts at the seminar where he presented it.” Doughnuts are good in and of themselves, but they have nothing to do with whether the dissertation was good or poor.

Affirming the consequent is the fallacy that arises from the mistaken belief that “if X, then Y” also means “if Y, then X.” “When it snows, it’s cold outside. It’s cold outside, therefore it must be snowing.”
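This fallacy can even be checked mechanically. The short Python sketch below enumerates all truth assignments and finds the case where “if X then Y” holds but “if Y then X” does not:

```python
from itertools import product

def implies(p, q):
    # Material implication: "if p then q" is false only when p is true and q is false
    return (not p) or q

# Find assignments where X -> Y holds but Y -> X fails
counterexamples = [(x, y) for x, y in product([False, True], repeat=2)
                   if implies(x, y) and not implies(y, x)]
print(counterexamples)  # [(False, True)] -- e.g. it's cold, but not snowing
```

The single counterexample (X false, Y true) is exactly the “cold but not snowing” day that refutes the argument.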

Finally, though begging the question is often taken to mean “raising the question” nowadays, it refers to a circular argument in which the conclusion appears as one of the premises. In other words, one assumes at the outset what one is supposed to be proving. “Bob is always right.” Why? “Because Bob says so, and he’s never wrong.”

Knowledge of logical fallacies is a powerful tool for both improving your own reasoning and examining the arguments of others. Many of them seem ridiculous or absurd on their faces, but people actually do formulate and fall for arguments that use them! There are many more named logical fallacies. If this whets your appetite, Wikipedia’s list makes a good jumping-off point, and an introductory text on informal logic makes an even better one.

Conclusion

Testers need to reason with both computers and other people, and logic is at the heart of both. Whether analyzing the flow of a program, making a case for fixing an involved bug, or evaluating next quarter’s development plan, a solid grounding in logic will serve testers well.

Tuesday, September 20, 2011

Difference between Functional Requirement and Non- Functional Requirement

A Functional Requirement specifies what the system or application SHOULD DO, whereas a Non-Functional Requirement specifies how the system or application SHOULD BE.

Some Functional Requirements are,

Authentication
Business Rules
Historical Data
Legal and Regulatory Requirements
External Interfaces

Some Non-Functional Requirements are,

Performance
Reliability
Security
Recovery
Data Integrity
Usability

Different Types of Severity

1. User Interface Defects - Low
2. Boundary Related Defects - Medium
3. Error Handling Defects - Medium
4. Calculation Defects - High
5. Interpreting Data Defects - High
6. Hardware Failures & Problems - High
7. Compatibility and Intersystem Defects - High
8. Control Flow Defects - High
9. Load Conditions (Memory Leakages under testing) - High
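In an automated triage setup, a default mapping like the one above might be sketched as a simple lookup table. This is purely illustrative; the category names and levels mirror the list, and any real project should adapt them to its own policy:

```python
# Default severity by defect category, mirroring the list above
DEFAULT_SEVERITY = {
    "user interface": "Low",
    "boundary related": "Medium",
    "error handling": "Medium",
    "calculation": "High",
    "interpreting data": "High",
    "hardware failure": "High",
    "compatibility": "High",
    "control flow": "High",
    "load conditions": "High",
}

def suggest_severity(category):
    # Fall back to Medium when the category is unknown
    return DEFAULT_SEVERITY.get(category.lower(), "Medium")

print(suggest_severity("Calculation"))  # High
```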

Friday, September 9, 2011

BMR

BMR. Basal Metabolic Rate measures the minimum calories necessary to sustain life in a resting individual. Calories are burned by bodily processes such as respiration, the pumping of blood around the body and maintenance of body temperature. BMR can be responsible for burning up to 70% of a person’s total calories expended.
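One widely used way to estimate BMR is the Mifflin-St Jeor equation. The sketch below implements it; the formula itself is standard, but the sample figures are only illustrative:

```python
def bmr_mifflin_st_jeor(weight_kg, height_cm, age_years, sex):
    # Mifflin-St Jeor: 10*weight + 6.25*height - 5*age, plus 5 for men
    # or minus 161 for women, giving kcal/day at rest
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_years
    return base + (5 if sex == "male" else -161)

print(bmr_mifflin_st_jeor(70, 175, 30, "male"))  # 1648.75 kcal/day
```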

Thursday, September 8, 2011

Ghrelin

Ghrelin. Ghrelin is a hormone that makes people hungry, slows metabolism and decreases the body's ability to burn fat. Research shows that Ghrelin levels in the blood spike before meals and drop afterward. First identified by researchers in 1999, this hormone continues to be the focus of efforts to further explain eating variations and perhaps correct eating disorders.

Tuesday, August 2, 2011

Cloud testing: The cloud and your testing practice

The impact of the cloud on testing practices has grown with the cloud’s growing presence in the IT space. Testing practices are now dealing with several aspects of the cloud simultaneously -- three aspects of the cloud that directly impact our testing practices are: using the cloud to create scalable testing environments, non-functional testing of cloud-based solutions and functional testing of cloud-based solutions. In short, the cloud can be used:

* As a testing enabler
* For non-functional testing
* For functional testing (unit, integration, system, and regression)

While these are clearly distinct aspects of the cloud space and the discipline of testing, there are relationships between these aspects of the cloud that are being ignored or “glossed over” by both vendors and proponents of cloud-based computing -- specifically the non-functional risks around security/integrity and performance.

The cloud as a testing enabler

The cloud provides the opportunity to create scalable testing environments that can be easily ramped up or down given the immediate needs of your testing organization. Whether this type of scalable solution is an appropriate fit for your organization is dependent on the idle time of your development and testing infrastructure throughout the year. If most of your infrastructure is in use, or your infrastructure is inadequate, then a straight investment comparison of own versus rent/lease should be possible. On the other hand, if much of your development and test infrastructure remains idle throughout the year, then leveraging a cloud-based solution may alleviate your overall infrastructure cost.
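The own-versus-rent comparison can be reduced to a rough break-even calculation. The sketch below is purely illustrative; the cost figures are hypothetical assumptions, not vendor pricing:

```python
def annual_cost_owned(hardware_cost, lifetime_years, yearly_opex):
    # Amortize the purchase over its useful life, plus power/admin overhead
    return hardware_cost / lifetime_years + yearly_opex

def annual_cost_cloud(hourly_rate, hours_used_per_year):
    # Pay only for the hours the test environment is actually running
    return hourly_rate * hours_used_per_year

# Hypothetical figures: a $12,000 server over 4 years vs. a $1.50/hour instance
owned = annual_cost_owned(12_000, 4, 2_000)    # 5000.0 per year
mostly_idle = annual_cost_cloud(1.50, 800)     # 1200.0 (used ~10% of the year)
heavily_used = annual_cost_cloud(1.50, 7_000)  # 10500.0 (near-constant use)
print(owned, mostly_idle, heavily_used)
```

With these made-up numbers, renting wins when the environment sits idle most of the year and owning wins under heavy utilization, which matches the guidance above.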

There are other factors that must be considered before moving testing assets to a cloud-based solution. These factors do not involve the capacity of cloud-based testing solutions -- the capacity is certainly there, but instead the security and integrity of these solutions. The security of cloud-based solutions still remains problematic -- with major security incidents happening on regular basis. If your production environment has not moved to the cloud and you have not created obfuscated test data, then you are exposing your organization to significant security risk by moving testing to the cloud. The question to ask is, “Am I exposing my organization to additional risks by moving to a cloud-enabled testing solution?” If the answer is “yes,” then a proven risk mitigation plan needs to be put in place before moving testing assets to the cloud.


Non-functional testing of cloud-based solutions

As your IT organization and infrastructure moves into the cloud, non-functional testing becomes critical. Recent experience has shown us that most of the risks associated with the cloud come from non-functional requirements not being met or often not even being articulated and therefore not being tested or supported. The areas of proven vulnerability are performance, security, and disaster recovery/management -- recent examples being (April/May 2011):

* Amazon’s “glitch” (the last week of April) that caused numerous hosted websites to crash or run very slowly.
* Sony of Japan revealed that hackers accessed 100 million PlayStation accounts, including names, addresses, passwords, and possibly credit card details.
* Amazon Web Services (AWS) data centers in its US-East-1 region in Virginia went down, leaving many of its customers with no service and no service alternatives.

From a disaster recovery/management perspective, you need to ensure the cloud provider has put a plan in place so that interruptions in its service will be addressed. This is somewhat problematic, since the key cloud providers have not yet fully addressed this issue -- witness Amazon’s “glitch” in late April. Your testing organization still needs to identify the risk and test any recovery procedures presented by the cloud provider.

From a performance testing perspective, the testing organization can apply traditional tools and techniques while ensuring the infrastructure of the cloud-based solution closely resembles (or is) production. There are additional factors that will need to be addressed, the most critical being:

* Addressing the loads that will be applied by other clients/customers of the cloud provider.
* Addressing the loads that will be applied against the internet providers (example: Cyber Monday).
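When sizing such loads, one common back-of-the-envelope tool is Little’s Law: concurrent users ≈ throughput × average response time. A minimal sketch, using hypothetical traffic numbers:

```python
def concurrent_users(requests_per_second, avg_response_seconds):
    # Little's Law: N = X * R (users in the system = throughput * residence time)
    return requests_per_second * avg_response_seconds

# Hypothetical Cyber Monday peak: 500 req/s at a 2-second average response time
print(concurrent_users(500, 2.0))  # 1000.0 simulated users needed
```

A figure like this gives the test organization a starting point for how many virtual users a cloud-hosted load test must simulate.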

From a security testing perspective, the testing practice will need to become much more aggressive than many test organizations have been in the past. Your business and your data now reside on a third party that has not yet presented a sound security solution -- with all transactions travelling over the Internet. The testing organization will have to address the security of the application presentation layer, the service layer, the data layer, and now the architecture/infrastructure to ensure security requirements have been met. In the past, security risks have often been mitigated by the nature of the architecture -- in-house applications on a secure network with little direct contact with the “world.” Now your applications will exist in the cloud, or cyberspace, with all the benefits and risks that provides.

Functional testing of cloud-based solutions

The test processes and technologies used to perform functional testing against cloud-based applications are not significantly different than traditional in-house applications. Adjustments do need to be made for the non-functional aspects of the application space that relate to security and data integrity. If testing involves production data then appropriate security and data integrity processes and procedures need to be in place and validated before functional testing begins.

The cloud and your testing practice

Awareness of the non-functional risks around the cloud is critical to successfully testing (functional and non-functional) or leveraging (test labs) the cloud. The responsibility for testing the non-functional aspects of a cloud-based application may reside within the testing practice or the infrastructure/security team. As a rule of thumb, if you are dealing with a public cloud or a provider-supplied cloud, then testing responsibility should stay within the testing practice. If you are dealing with a private cloud, then testing responsibility could be handled by the infrastructure/security team -- at least from a performance and security perspective. In either case, all parts of the IT organization need to work together to mitigate the risks presented by cloud-based solutions to the overall enterprise.

Tuesday, June 28, 2011

Monday, June 6, 2011

7 basic tips for testing multi-lingual web sites

Reposting Again.

These days a number of web sites are deployed in multiple languages. As companies perform more and more business in other countries, the number of such global multi-lingual web applications will continue to increase.

Testing web sites supporting multiple languages has its own fair share of challenges. In this article, I will share seven tips with you that will enable you to test multi-lingual browser-based applications thoroughly:

Tip # 1 - Prepare and use the required test environment

If a web site is hosted in English and Japanese languages, it is not enough to simply change the default browser language and perform identical tests in both the languages. Depending on its implementation, a web site may figure out the correct language for its interface from the browser language setting, the regional and language settings of the machine, a configuration in the web application or other factors. Therefore, in order to perform a realistic test, it is imperative that the web site be tested from two machines - one with the English operating system and one with the Japanese operating system. You might want to keep the default settings on each machine since many users do not change the default settings on their machines.
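To see why the browser setting alone may not be decisive, consider the server side: a site that honors the browser language typically inspects the Accept-Language request header. The sketch below is a simplified, hypothetical version of such logic (it ignores q-values entirely); nothing here is specific to any real site.

```python
def pick_language(accept_language, supported=("en", "ja"), default="en"):
    """Choose a UI language from an Accept-Language header.
    Simplified: ignores q-values and takes the first supported primary tag."""
    for part in accept_language.split(","):
        tag = part.split(";")[0].strip().lower()   # e.g. "ja-JP"
        primary = tag.split("-")[0]                # e.g. "ja"
        if primary in supported:
            return primary
    return default

print(pick_language("ja-JP,ja;q=0.9,en;q=0.8"))  # -> ja
print(pick_language("fr-FR,fr;q=0.9"))           # -> en (falls back)
```

Since the real site may instead read the OS regional settings or an application configuration, testing from machines with different operating-system languages remains the realistic check.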

Tip # 2 - Acquire correct translations

A native speaker of the language, belonging to the same region as the users, is usually the best resource to provide translations that are accurate in both meaning as well as context. If such a person is not available to provide you the translations of the text, you might have to depend on automated web translations available on web sites like wordreference.com and dictionary.com. It is a good idea to compare automated translations from multiple sources before using them in the test.

Tip # 3 - Get really comfortable with the application

Since you might not know the languages supported by the web site, it is always a good idea for you to be very conversant with the functionality of the web site. Execute the test cases in the English version of the site a number of times. This will help you find your way easily within the other language version. Otherwise, you might have to keep the English version of the site open in another browser in order to figure out how to proceed in the other language version (and this could slow you down).

Tip # 4 - Start with testing the labels

You could start testing the other language version of the web site by first looking at all the labels. Labels are the more static items in the web site. English labels are usually short and translated labels tend to expand. It is important to spot any issues related to label truncation, overlay on/ under other controls, incorrect word wrapping etc. It is even more important to compare the labels with their translations in the other language.
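A label check of this kind can be partly scripted. The sketch below assumes a hypothetical table of expected translations and a per-control width limit; real limits would come from the UI design.

```python
# Hypothetical expected translations for two labels
translations = {"Submit": "送信", "Cancel": "キャンセル"}

def check_label(english, translated, max_width=12):
    """Flag labels that are missing, left in English, or likely to truncate."""
    issues = []
    if not translated:
        issues.append("missing")
    elif translated == english:
        issues.append("untranslated")
    if translated and len(translated) > max_width:
        issues.append("possible truncation")
    return issues

for en, other in translations.items():
    print(en, check_label(en, other))  # both labels pass with no issues
```

Visual issues such as overlay and word wrapping still need a human eye; a script like this only narrows the field.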

Tip # 5 - Move on to the other controls

Next, you could move on to checking the other controls for correct translations and any user interface issues. It is important that the web site provide correct error messages in the other language, so the test should include generating all the error messages. Usually, for any text that is not translated, three possibilities exist: the text will be missing, its English equivalent will be present, or you will see junk characters in its place.
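These three failure modes can be screened for automatically. The helper below is a minimal sketch: it treats the Unicode replacement character (U+FFFD) as "junk" and a string identical to a known English string as an untranslated fallback. Both heuristics are assumptions for illustration, not a complete mojibake detector.

```python
def classify_text(text, english_strings):
    """Classify a localized UI string against the three failure modes."""
    if not text:
        return "missing"
    if "\ufffd" in text:           # U+FFFD replacement char: junk/mojibake
        return "junk characters"
    if text in english_strings:    # untranslated English fallback
        return "english fallback"
    return "ok"

print(classify_text("キャンセル", {"Cancel"}))  # -> ok
print(classify_text("Cancel", {"Cancel"}))      # -> english fallback
```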

Tip # 6 - Do test the data

Usually, multi-lingual web sites store data in the UTF-8 Unicode encoding, in which data in different languages can be easily represented. To check the character encoding of your web site in Mozilla Firefox, go to View -> Character Encoding; in Internet Explorer, go to View -> Encoding. Make sure to check the input data: it should be possible to enter data in the other language into the web site. The data displayed by the web site should be correct, and the output data should be compared with its translation.
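Beyond the browser menu, the declared encoding can be checked programmatically from the HTTP Content-Type header, and test data can be round-tripped through UTF-8. A minimal sketch (the header string here is illustrative):

```python
def declared_charset(content_type):
    """Pull the charset out of an HTTP Content-Type header value.
    Defaults to utf-8 when no charset is declared."""
    for part in content_type.split(";"):
        part = part.strip()
        if part.lower().startswith("charset="):
            return part.split("=", 1)[1].strip().lower()
    return "utf-8"

print(declared_charset("text/html; charset=UTF-8"))  # -> utf-8

# Round-trip a Japanese test string through UTF-8: it must survive intact
sample = "こんにちは、データベース"
assert sample.encode("utf-8").decode("utf-8") == sample
```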

Tip # 7 - Be aware of cultural issues

A challenge in testing multi-lingual web sites is that each language might be meant for users from a particular culture. Many things such as preferred (and not preferred) colors, text direction (this can be left to right, right to left or top to bottom), format of salutations and addresses, measures, currency etc. are different in different cultures. Not only should the other language version of the web site provide correct translations, other elements of the user interface e.g. text direction, currency symbol, date format etc. should also be correct.
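Date formats are a good concrete example of such cultural differences. The patterns below are hypothetical per-locale expectations hard-coded for illustration; a real test would take them from the product's localization data (e.g. CLDR).

```python
from datetime import date

# Hypothetical expected date patterns per locale (illustrative only)
DATE_PATTERNS = {"en-US": "%m/%d/%Y", "ja-JP": "%Y/%m/%d", "de-DE": "%d.%m.%Y"}

def expected_date(locale_tag, d):
    """Render the date the way a user of this locale would expect to see it."""
    return d.strftime(DATE_PATTERNS[locale_tag])

d = date(2011, 6, 6)
print(expected_date("en-US", d))  # -> 06/06/2011
print(expected_date("ja-JP", d))  # -> 2011/06/06
```

The test then compares what the web site actually displays against these expected renderings, locale by locale.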

As you might have gathered from the tips given above, using the correct test environment and acquiring correct translations is critical in performing a successful test of other language versions of a web site.

It would be interesting to know your experience on testing multi-language web sites.

HTML5


The Web Is Reborn

The last decade expanded what we could do online, but the Web’s basic programming couldn’t keep up. That threatened to fracture the world’s greatest innovation engine—until a small group of Web rivals joined forces to save it.

Mozilla, Opera, and Apple announced that they were forming a new body (the WHATWG) to take up the work on HTML that was being abandoned by the World Wide Web Consortium (W3C).

The important point is that HTML5 has been developed by companies that actually have to answer to their customers. Their work has made for the biggest overhaul the programming of the Web has ever received.

What will HTML5 change?

Structure

* New tags in the coding of sites will help them better organize the information they present to search engines' automated indexers. That could make search results more relevant for everyone.

Video and audio

* Video in the browser is tough today because it relies on plug-ins. HTML5 introduces a new video tag; YouTube has demonstrated the site running entirely on the video tag, with no Flash required.

* The language has tags for video and audio, which should dramatically streamline the way the Web handles multimedia: it will be as easy for a Web developer to incorporate a film clip or a song as it is to place text and images.

* HTML5 will clean Web content up: multimedia elements will no longer require complex code and an add-on program such as Flash. This should make Web browsers faster and more efficient.


Drag and drop

Dragging and dropping has been the standard way to move files around a computer desktop. Now HTML5 is bringing it to the web. You could quickly upload a new photo of yourself to a social network.

Database and Application Cache

* HTML5 has capabilities that could enable people to use Web pages even when they're not connected to the Internet. When you have connectivity again, you'd find that the website “takes care of synching it up.”
* Still in development is a feature that enables a browser to store a large amount of data; the new specifications recommend that the amount be five megabytes per Web domain, or 1,000 times more than is currently possible.

NEW LIFE

* HTML5 can't fix the Web overnight. There's still a long way to go. For example, while the browser makers are in agreement on most things, they continue to argue about which video standards to support.
* It might also take some time for Web developers to put the technology to its most significant uses; first they'll want to be sure that enough people are using Web browsers that can fully handle HTML5.
* Scribd (a document-sharing website) spent six months rebuilding its site. Its engineers stopped using Flash to display documents, even though that meant converting tens of millions of files to HTML5.
  * After the renewal, Scribd's pages looked crisper; no longer did it seem as if users had to view the files through a lens.
  * Readers began sticking around three times longer.
  * Scribd's renovation also made the site usable in the browser of an iPad, where it has the smoothness and light feel of an app.



Monday, May 2, 2011

Browser Testing Checklist

Browser Testing Presentation

Bayer Alert PPT

Bayer AE Automate Ppt

Monday, April 4, 2011

Test Vector - Are You Testing in the Right Direction?

Funny Story
A bicyclist pulls alongside a jogger running down the road and asks the jogger
"how fast are you running?" The jogger looks at the GPS training device on his
wrist and replies "7.25 miles per hour." The bicyclist then asks the jogger "where
are you going?" The jogger replies "I'm going this way as fast as I can."
The bicyclist rides away saying to herself "I wonder where 'this way' is and when
the jogger knows he has reached it."
Introduction
Seasoned software testing veterans understand that validating requirements is a
critical success factor for software testing. Too often, organizations focus on the
volume of test cases executed and/or the number of defects recorded by testing
in a given test cycle. Defect slippage to production is used to determine the
effectiveness of testing.

Managers focus on test case “volume” metrics to fix problems, erroneously
assuming that test case coverage has been established through requirements
traceability. The volume of test cases, defects found, and defects not caught in
testing are examined. With all of these measures in place there is an assumption
that test cases are mapped to requirements. Testing management and project
management assume that there are test cases for each requirement and the test
cases actually validate those requirements. With such a large volume of test
cases we must have covered everything, right?

In all these instances, the volume of something other than requirements
validated is examined in an attempt to mitigate project distress. A test case-centric
or defect-centric view of test planning is not focusing on the right success factors.
This whitepaper introduces a new perspective: test planning with
traceability to create a requirements-centric view of testing and
reporting results, with an accompanying vocabulary.

Concepts from Physics and Software Testing
Engineers use mathematics and physics, among other sciences, to apply
established principles in designing practical solutions to technical problems.
Software testers can view the process surrounding requirements testing the
same way an engineer views mathematics and physics. By applying analogies
for their test processes, test planning, and test execution activities to concepts in
physics, testers can control the various components in the testing process and
create positive outcomes.

Concept: Test Direction
Test Direction describes testing the “right things” for a given test cycle. In
software testing these “things” are requirements. Requirements validation should
be the goal of all software testing. If you are not testing to validate the
requirements of the system then you are testing in the “wrong direction.”
Here are a few examples of statements of Test Direction:
Test Direction = High priority requirements
or
Test Direction = Critical severity defects
or
Test Direction = Critical severity defects against high priority requirements

There can be many test directions. Test Direction accommodates both new code
and defect fixes. The “right direction” in this case is a plan to validate
requirements. Test Direction is a requirements-centric statement of what shall be
tested during a given test cycle.
Sample Test Direction measures:
• High priority requirements
• Medium priority requirements
• High priority requirements for new code, medium priority requirements for
regression testing
• High and medium priority requirements for new code
The “right direction” also applies to testing cycles where defects fixes are
involved. Defect severity and requirement priority set the Test Direction.

Sample Test Direction measures for test cycles with defect fixes:
• Critical severity defects against high priority requirements
• Critical severity defects against medium priority requirements
• High severity defects against high priority requirements

• High severity defects against medium priority requirements
Ideally, defects are mapped to the test cases that discovered them and those test
cases are in turn mapped to requirements.

Concept: Test Magnitude
In mathematics, magnitude is the size of an object. Test Magnitude describes the
number (size) of test cases (object) that have been written to test a software
system.
Test Magnitude = Number of test cases
Sample Test Magnitude measures:
• Number of total test cases to test critical severity defects
• Number of manual test cases to test high severity defects
• Number of automated test cases to test critical severity defects
• Number of multi-use test cases to test high severity defects

Concept: Test Vector
A vector is an object with both direction and magnitude. Test Vector describes
the magnitude and direction of a testing effort. That is to say, Test Vector
describes the number of test cases executed (Test Magnitude) that have direct
links to requirements (Test Direction) to be validated.
Typical Test Vector measures:
• Number of total test cases (Test Magnitude) to test [number] of high
priority requirements (Test Direction)
• Number of manual test cases (Test Magnitude) to test [number] of high
priority requirements (Test Direction)
• Number of automated test cases (Test Magnitude) to test [number] of high
priority requirements (Test Direction)
• Number of test cases (Test Magnitude) to test the [specified] severity
defect fixes mapped to high priority requirements (Test Direction)
Test Vector answers the question “how many test cases do I need to execute in
order to validate all of the requirements and defect fixes?” Test Vector is always
stated in terms of requirements since Test Direction is part of the definition of
Test Vector.

In this paper’s perspective, defects equate to requirements since a defect should
be mapped to a test case which is in turn mapped to a requirement.
Test Sizing and Planning with Test Vector
Test Vector can assist with sizing and planning a test cycle. Starting with a list of
requirements and/or defects to be validated the requirements trace matrix will
provide linkage to the test cases that need to be executed. This answers the
question “how many test cases do I need to execute in order to validate all of the
requirements and defect fixes?”
Risk-based testing can also be described by the Test Vector. By reviewing
requirement priorities and defect severity and then deciding the appropriate level
of testing for the test cycle, a suitable Test Vector can be determined.

In both planning activities the Test Magnitude component of the Test Vector will
provide the number of test cases that needs to be executed in order to validate
the requirements test management has selected for a given test cycle.
Requirements Centric Discussion
The term “Test Vector” is modeled after the malware term “threat
vector.” A threat vector is the method by which malicious code propagates and infects a
computer. Test Vector is the method by which testing propagates via test cases and
validates requirements in a given test cycle. This gives all stakeholders a
requirements-centric vocabulary. Test Vector describes the test plan, test
execution, and test results in terms of requirements being validated.

Test plans should be created to validate software requirements. Test results
should be reported in terms of requirements tested, requirements passed
(validated), requirements failed (business risk), and requirements not tested
(business risk).

Back to the Funny Story
In the funny story at the beginning of this article the bicyclist is a seasoned
veteran test professional. The jogger is a typical test manager in a busy testing
department who is under pressure from the PMO to produce results quickly. The
GPS training device is test case execution reports detailing the number of test
cases and daily rate of test cases executed. "This way" is what the test manager
is doing with testing - executing all of the test cases. It's implied that everyone
knows that "this way" is north, south, east, west, or some intermediate direction
although nobody actually has a compass to validate direction. 7.25 mph is the
speed of execution shown on the test case execution reports.
What is missing is a direction (which test cases) and destination (validating which
requirements).

Vectoring in the Right Direction
Using the Test Vector vocabulary to describe testing in terms of requirements
validated takes testing in the right direction. Focusing on the volume of test
cases executed, number of defects found, and the number of defects in
production puts the spotlight on test cases, without regard to the requirements
the software is designed to satisfy. Un-vectored testing, that is to say testing as
many test cases as you can as fast as the team can execute them, is potentially
a waste of scarce resources, time, and money.

Alternatively, the Test Vector vocabulary establishes that requirements validation
should be the primary concern for all stakeholders.
What is the Test Vector for this test cycle? We're executing 473 (magnitude) test
cases to validate all 326 high priority business requirements and 37 critical
defects that map to high priority business requirements (direction).

Conclusion
Test Vector is a statement of the requirements and/or defect fixes that will be
validated for a given test cycle. These requirements and defect fixes map to the
test cases that will validate whether a requirement has been met or the defect
traced to a requirement has been repaired. Using a requirements trace matrix
we can determine the number of test cases needed to accomplish testing for the
test cycle.

By using a vocabulary to describe test planning, test execution, and test results
in terms of requirements validated, stakeholders focus on software functionality
that is important to the project instead of the volume and speed of test cases run
and the volume of defects discovered.

Test Vector changes the conversation on test planning and results reporting
while creating positive software delivery outcomes.

Thursday, March 3, 2011

A Review of Error Messages


Error messages, if they're posted at all, should convey helpful information and advice--not only for the user, but also for tech support and maintenance programmers. Here are a few things to think about when coding your error-handling routines and designing your error messages.
Error Message Basics

Error messages are displayed by programs in response to unusual or exceptional conditions that can't be rectified by the program itself. A well-written program should post very few error messages indeed; instead, absolutely whenever possible, the program should cope with the problem gracefully and continue without bothering the customer. By this yardstick, of course, most programs are poorly written.

For the purposes of this discussion, there are two classes of poorly written programs. First, there is the program that can't remedy things on its own, or that needs so much hand-holding that it bothers its customers unnecessarily. Second, and the focus of this discussion, is the kind of program that encounters some real problem, but confuses or offends the customer by providing an inadequate error message.

Of course, the best error message is no error message at all. In the case where something has gone awry, a program should do everything within its power to remedy the situation at hand. For example, a program should never post a dialog saying that a file cannot be found unless the program has actually bothered to look for it. At a minimum, a program (that is to say, a programmer) should search all local hard drives for the missing file. If the program finds the file in an inappropriate place, the program should either update its own records to point to the file, or make a copy of the file in an appropriate place. There should be no need to disturb the customer in either case.

If your program has to post an error message, don't waste the customer's time either before or after the error condition is detected. For example, an installation program should not begin copying files unless it is certain that the files will fit onto the destination disk. A simple set of calculations can determine whether there is adequate disk space, but most programs don't even bother with this basic check. Just as bad, installation programs frequently refuse to proceed, even when already-existing files are going to be overwritten.
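The disk-space calculation really is simple. A sketch in Python (the 10% safety margin is an assumption, added to cover filesystem overhead):

```python
import shutil

def enough_space(dest_path, required_bytes, safety_margin=1.1):
    """Check free space at the destination before starting a copy.
    The margin pads for metadata and cluster-size overhead."""
    free = shutil.disk_usage(dest_path).free
    return free >= required_bytes * safety_margin

# An installer would refuse to start copying unless this returns True
print(enough_space(".", 1024))  # -> True on any machine with 1 KB+ free
```

Running this check once, before the first file is copied, is all it takes to avoid the half-finished-install failure described above.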

Don't depend on the operating system to handle things properly. Amazingly, after almost twenty years in the field, the DOS COPY and XCOPY commands don't bother to check for disk space before the copy starts; instead, they begin copying blindly and hope that the destination disk doesn't fill up before the operation is complete. Windows is no better; like DOS, it stupidly fails to check for sufficient disk space before performing a file copy. Worse, if you are copying a set of files, Windows will stop the process on the first error, will refuse to continue, and will forget your selection.

When you write code, anticipate the error conditions and code around them. Try to fulfill the user's goal to the greatest degree possible, and don't view error conditions as catastrophic unless they are. Remember the program's state at the time that the error occurred, and permit the user to restore that state easily. Always write functions that return status codes, and return a unique error code for each error condition. At the point the status code is returned, there is typically quite a bit of information available that you can relay to people who are going to need to identify and fix the problem. On the other hand, remember that your program's internal errors are not the customer's concern, so don't overload or intimidate the customer. Make it clear that some information is for the customer to act upon, but that other information is there only to help the person that is helping her.
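One way to follow the unique-status-code advice is an enumeration with one distinct code per failure condition. The codes and function below are hypothetical examples, not from any particular program:

```python
from enum import Enum, unique

@unique
class Status(Enum):
    OK = 0
    FILE_NOT_FOUND = 101    # one distinct code per failure condition
    PRINTER_OFFLINE = 102
    DISK_FULL = 103

def load_settings(path, files):
    """Return a status code instead of a bare True/False."""
    if path not in files:
        return Status.FILE_NOT_FOUND
    return Status.OK

status = load_settings("settings.ini", files={"data.db"})
if status is not Status.OK:
    # The numeric code gives support and maintenance an exact starting point
    print(f"Error {status.value}: {status.name}")
```

The customer sees a short, readable message; the code in it tells the maintenance programmer precisely which condition fired.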
What Does a Good Error Message Look Like?

A well-constructed error message:

* should identify the program that is posting the error message
* should alert the customer to the specific problem
* should provide some specific indication as to how the problem may be solved
* should suggest where the customer may obtain further help
* should provide extra information to the person who is helping the customer
* should not suggest an action that will fail to solve the problem and thus waste the customer's time
* should not contain information that is unhelpful, redundant, incomplete, or inaccurate
* should provide an identifying code to distinguish it from other, similar messages

A Good Example

One of the best error messages I have ever seen went something like this:

This was an error message from an applicant tracking system (called "Applicant Tracking System") that was designed for a personnel agency by an independent consultant in 1988. The message looked almost, but not quite exactly, as I've rendered it above. A significant difference is that the original message did not have a Windows look and feel, because this message came from a DOS program. I mention this because the author provided this detailed message even in the days of the 640K memory limit. The customers of this system were not experienced with computers, but even if they had been experts, the message would have been helpful.

Let's look at this error message and compare it with the list of requirements above:

* This error message clearly identifies the program from which it is coming. The title bar gets extra points because it identifies the type of error.
* The message says that the program has lost communication with the printer. The message does not say that the program "is unable to print", nor does it say "LPT1: Error", nor some equally vague text relayed from the operating system. Most operating systems provide notoriously terse—and usually poor—error messages. This message is in terms that the customer can understand.
* The message scores top marks for giving the customer constructive steps that are within his power to perform, regardless of his skill level or experience. The program does not offer a vague guess as to what the problem might be. The steps are ordered from simplest to most complicated, and they're also ordered in terms of probability. Part of this is due to luck—the most common problems are not always the easiest to solve.
* The program does not offer a foolish suggestion to the customer that is likely to waste his time. ("Try restarting the application", or worse, "Try re-installing the application".)
* The error message is carefully worded. Each item in the message is worth checking. Nothing is restated pointlessly. There is no attempt to blame another application for the problem. The message is accurate and helpful.
* Best of all, there is specific tech support information right in the message, for the customer, the technician, and the developer. If there is a defect in the code, the error message suggests clearly to the programmer where in the program the error can be found, and the type of error involved. As an added plus, there's the name of—and an invitation from—a real person. Apart from the pleasant feeling that the customer gets from dealing with a person, rather than a corporation, the programmer's name suggests pride in the work.

Ten Rotten Error Messages

Now, by contrast, here are some examples of the very worst kinds of error messages. You'll see that my examples are all from Microsoft software. Microsoft is not the only company that releases software with lousy user interfaces, but it certainly seems to have perfected the art of the irritating error dialog.

Duh. This message states something that is entirely obvious, and fails to state anything at all that is helpful. There is nothing here to remedy the customer's problem or to help him through it. There is no information that would help even an imaginative tech support person to work through some possible solutions with the customer. The developer responsible for maintaining this code--typically not the person who wrote the original program--is not offered even a hint of what the problem is, or the error code returned by the called function. If more than one error condition posts this dialog, there's no way to tell which one caused the problem.

I have no comment on this message.

I have no comment on this message either. Although somehow this looks a little less severe than the last one.



You know more than you're saying, don't you? And by the way-- restarting Outlook will help how, exactly?



Which applications? How will it be incompatible? Why didn't you fix the problem? Thank God it doesn't seem to be incompatible with non-existent applications.



"May" again. Is a component busy or missing, or is it neither? If a component is involved, which component? Is it busy? Or is it missing? And what is a component anyway? A file? If so, could we have the file name please?



Really? Which action? What should I do to fix the problem?



Nope, I don't. I want you to find it.



Still won't look for it, eh? In fact, I've forgotten the context in which I got this message, and so I've forgotten which application is involved. However, I do remember that it was unclear to me even at the time which application needed to be reinstalled.



Why Are Error Messages So Poor, and How Can They Be Improved?

Our systems for teaching programming almost never discuss error messages, or even error handling. How many programming books emphasize the importance of checking return codes from operating system or library functions, and handling errors gracefully? How many source code examples show even minimal error checking or commenting? How many programming books discuss even the most basic user interface issues, such as how to construct a useful error message?

Let's start with what is displayed to the world outside your program. Error messages are often less than helpful or useful because they're written by people who have an intimate knowledge of the program. Those people often fail to recognize that the program will be run by other people, who don't have that knowledge. Thus it is important that you consider the customer's plight carefully when writing error-handling routines; that you involve someone other than yourself with the design and testing of the program; and that you provide each and every error message to someone else for review. The reviewer should not be an expert in the program. Your messages should be detailed and polite. They should not offend or frustrate the customer.

Write and test your program so that it will have to display as few error messages as possible. If your programming language provides debug-build validity checking like the C ASSERT macro, use it; if you have to hand-roll validity checking yourself, do it. Walk through code in the debugger. Include features in the release version of the program, such as log files or verbose modes, to help with troubleshooting. Each condition in the program that has a chance of failure should return a distinct error code, and should display this code as part of the error message. The error code will not only help to narrow a problem down, but is also a good internationalization strategy; error codes will form a useful cross-check when the program is translated. Comment each status code as thoroughly as you can to make life easier for the maintenance programmer and for documenters, and use the header file to help define a table of error status codes for technical support. Make sure that there is a mechanism to identify missing files, registry entries, and the like. Create error handling classes and functions to supply consistent, well-formatted error messages--and reuse them consistently. Use code review and walkthroughs with other developers and quality assurance to make sure that your program is readable, consistent, maintainable, and free of defects. Provide testers with tools or a test program that will allow them to view all of the error messages displayed by your program.

Façade programming is a useful construction strategy. As the program is being constructed, write skeletons of each function. Until you have the internals of the function coded, simply have the function do nothing and return a positive return code. Define the return codes as symbols—constants or enumerated values. Later, when you begin to flesh out the function (and as you check return values at each stage), define distinct symbolic codes for each type of error.
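A façade skeleton with symbolic return codes might look like this (the names and codes are invented for illustration):

```python
from enum import Enum

class Rc(Enum):
    OK = 0
    PARSE_ERROR = 201       # distinct symbolic codes, defined up front
    VALIDATION_ERROR = 202

def import_records(path):
    """Façade skeleton: does nothing yet, but already returns a symbolic code.
    Later, flesh out the body and return PARSE_ERROR / VALIDATION_ERROR
    for the specific failure conditions."""
    return Rc.OK

# Callers can be written and tested against the codes before the body exists
assert import_records("orders.csv") is Rc.OK
```

Because the codes are symbols rather than magic numbers, fleshing out the function later never silently changes the meaning of a return value.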

Programming is, of course, more complicated than ever. There are more technologies, more languages, and more different disciplines to master this week than there were last week. Developers are pressured to design too little, and to code too quickly. Each step of the development process is squeezed so that products can be released as quickly as possible. However, neither programmers nor managers should kid themselves; other parts of the company are not likely to take responsibility for a program that is sent to testing (or worse, to customers) laden with obvious defects and opaque error messages. Developers and development managers must therefore learn to include design and debugging time in planning estimates, and must argue effectively for more time and more help, especially in areas that don't require coding, such as user interaction design.

It's rational to assume that help won't arrive immediately, so walk a mile in the customer's shoes and program defensively. When you're constructing an error message, the important thing to remember is that your message must convey useful information. Useful information saves time. Remember that the message will be read not only by the customer. The message must also be interpreted by the tech support person who handles the call, the quality assurance analyst who helps to track down the problem, and the maintenance programmer who is charged with fixing the problem in the code. Each person in this process represents a cost to your company. What's more, while the error-handling routine need be written only once, the support path is typically followed many times--tens, or hundreds, or thousands of times. Form alliances with technical support, testing, and documentation; ask questions, do the math, and put dollar amounts on what it costs to solve (or sandbag) a problem after the product has been released. Don't forget future lost sales in your calculations. If senior management at your company wants to rush the product to market without leaving you time to code proper error handling, remind management politely of the cost of such a policy.

Thursday, January 6, 2011

UI Team at Rappa

Monday, January 3, 2011

Auditing Software Testing Process

Introduction:

To ensure transparency and reliability of IT systems, it may be necessary to audit the Software Development Processes, including the most important aspect – the Software Testing Process.
Auditing is an important activity in organizations. In the context of testing, it helps us ensure that the Testing processes are followed as defined.

Types of Testing Process Audits

There can be various reasons to conduct Audits, and each audit aims to achieve certain definite goals. Based on those goals, we can classify audits as follows:

Audit to verify compliance: In this type of audit the prime motivation is to judge whether the process complies with a standard. In these scenarios, the actual testing process is compared with the documented process. For example, ISO standards require us to define our Software Testing process; the audit will try to verify whether we actually conducted the testing as documented.

Audit for process improvement/problem solving:
In this type of audit the motivation is to trace the various steps in the process and try to weed out process problems. For instance, suppose it is observed that too many software defects escaped detection even though the testing process was apparently followed. The audit is then done as a preliminary step to collect facts and analyze them.

Audit for Root Cause Analysis
In this type of audit the motivation is to find the root cause of a specific problem. For example, customers discovered a serious problem with the software, so we retrace our testing steps to find out what went wrong in this specific case.

Internal Audits
Internal audits are typically initiated from within the organization.

External Audits
External audits are initiated and conducted by external agencies.

Why Audit Software Testing Process?

Auditing the Test Process helps management understand whether the process is being followed as specified. Typically, a testing audit may be done for one or more of the following reasons:
• To ensure continued reliability and integrity of the process
• To verify compliance with standards (ISO, CMM, etc.)
• To solve process related problems
• To find the root cause of a specific problem
• To detect or prevent Fraud
• To improve the Testing process

The Testing process may also be audited if the Software Product is mission-critical, such as software used in Medical Life Support Systems.
This is done to ensure there are no loopholes or bugs in the system.

How to Audit

Typically the Audit of the Testing Process will include the following steps:

• Reviewing the Testing process as documented in the Quality Manual
This helps the auditor understand the process as defined.

• Reviewing the deliverable documents at each step

• Documents reviewed include:
  – Test Strategy
  – Test Plans
  – Test Cases
  – Test Logs
  – Defects Tracked
  – Test Coverage Matrix
  – any other relevant records

Each of the above documents provides a certain level of traceability, showing that the process was followed and the necessary steps were taken.

• Interviewing the Project Team at various levels – PM, Coordinator, Tester
Interviewing the Project Team members gives an understanding of the thought process prevalent among those conducting the Testing Process.
This can provide valuable insights over and above what was actually documented.

ISACA – www.isaca.org – provides guidelines and standards for auditing Information Systems and the Software Development Lifecycle.

CISA stands for Certified Information Systems Auditor

Similarly, independent agencies may verify the Test Processes and SDLC to ensure compliance with FDA (Food and Drug Administration) regulations.

What can be audited?

Whether the test process deliverables exist as specified

The only thing that can really be verified in an audit is that the process deliverables exist. The process deliverables are taken as proof that the necessary steps were taken to do the testing. For example, if Test Logs exist, we assume that testing was done and that the Test Logs were created as a result of actual tests executed.
A separate exercise may be initiated to verify the authenticity of the Test Logs or other test deliverables.

Whether test cases created covered all requirements/use cases

This analysis reveals whether the test coverage was sufficient. It indicates whether the testing team did its best to provide an adequate amount of testing.
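As an illustration of the kind of cross-check an auditor (or a small tool run on the auditor's behalf) might perform, this C sketch compares requirement IDs against the requirements that the test cases claim to cover; the IDs and names are invented for illustration:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical traceability data: all requirement IDs, and the
   requirements the existing test cases claim to cover. */
static const char *requirements[]     = { "REQ-1", "REQ-2", "REQ-3" };
static const char *covered_by_tests[] = { "REQ-1", "REQ-3" };

/* Returns the number of requirements with no covering test case,
   printing each coverage gap an auditor would flag. */
int count_uncovered(void)
{
    int missing = 0;
    for (size_t r = 0; r < sizeof requirements / sizeof *requirements; r++) {
        int found = 0;
        for (size_t c = 0; c < sizeof covered_by_tests / sizeof *covered_by_tests; c++)
            if (strcmp(requirements[r], covered_by_tests[c]) == 0)
                found = 1;
        if (!found) {
            printf("Coverage gap: no test case covers %s\n", requirements[r]);
            missing++;
        }
    }
    return missing;
}
```

In practice the same cross-check would be run against the Test Coverage Matrix rather than hard-coded arrays.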

Whether all Defects were fixed

The status of all the defects logged is checked to verify that each was fixed and the fix verified.

Whether there are any known bugs in the software released

Sometimes not all the defects are fixed, and the software may be released with known problems. Test Logs would indicate the actual results and provide evidence of any bugs being present.

Whether the levels of testing were effective enough

If defects pass through the various levels of testing undetected, it may reflect poorly on the effectiveness of the testing process.

* Number of defects (Defect Leaks) that went undetected in each phase
* Number of iterations of testing at each level
* Time taken to test each module/component
* Versions of source code actually tested

This data may be used for process improvement.

The Test Logs and Defect Logs may indicate (if the information was captured) the actual versions of code/components tested. This information may be valuable in root cause analysis.

Summary

In this article we reviewed the process of auditing the Software Testing Process and some of the reasons why auditing is done

Glass Box Testing

Glass box testing covers software testing approaches that examine the program structure and derive test data from the program logic. Structural testing is sometimes referred to as clear-box testing, on the grounds that a white box is actually opaque and does not really permit visibility into the code, whereas a clear box does.

Synonyms:

1) White Box Testing

2) Structural Testing

3) Clear Box Testing

4) Open Box Testing

Types of Glass Box Testing:

1) Static and Dynamic Analysis: Static analysis techniques do not necessitate the execution of the software; dynamic analysis is what is generally considered "testing", i.e., it involves running the system.

2) Statement Coverage: Testing performed where every statement is executed at least once.

3) Branch Coverage: Running a series of tests to ensure that all branches are tested at least once.

4) Path Coverage: Testing all paths.

5) All-Definition-Use-Path Coverage: All paths between the definition of a variable and the use of that definition are identified and tested.
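A toy C function makes the difference between statement coverage and branch coverage concrete; `abs_val` is invented purely for illustration:

```c
#include <assert.h>

/* The single call abs_val(-5) executes every statement (100%
   statement coverage), yet the "condition false" branch is never
   taken. Branch coverage also requires a non-negative input, such
   as abs_val(3), which skips the body of the if. */
static int abs_val(int x)
{
    if (x < 0)      /* true branch: x = -5; false branch: x = 3 */
        x = -x;
    return x;
}
```

Path coverage generalizes this further: with several independent branches, every combination of branch outcomes must be exercised, which is why exhaustive path testing is usually impractical.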

Advantages:

1) Forces the test developer to reason carefully about the implementation

2) Approximates the partitioning done by execution equivalence

3) Reveals errors in "hidden" code

4) Beneficial side effects

5) Optimizations


Disadvantages:


1) Expensive

2) Misses cases omitted from the code (it tests what the code does, not what it should do)