Wednesday, July 4, 2012

Requirements review in Agile: Ensuring consistency and spotting defects

Agile methodology adoption applies to requirements reviews and techniques just as it does to the development process as a whole. However, companies switching over to Agile tend to hang on to the full-fledged requirements document for regulatory or documentation reasons, so Agile teams need to perform a thorough requirements review before the document is broken down into user stories. While Agile teams frequently come up with new ideas, there is a need for consistency in how software testers perform their reviews – what details they need, how they can recognize gaps in the code or logic, and how they can ensure all the requirements are testable. Testing teams spot more defects when they spend time performing requirements reviews in a consistent manner. In this article, we look at requirements review techniques, examining how to determine testability and how to spot missing connections.

What is a testable requirement?

How does the tester determine if a requirement is truly testable? Testable means that as a tester you are able to verify the results: there must be something to verify. Whether it's a result, a database value, a calculation or a form, the tester must be able to generate it and see it. When reviewing requirements, a good rule of thumb is that the story must contain the essential elements of who, what, where, why, when and how.

For example, is this requirement testable? "The Pony2000 Client shall perform the same as the Pony1900 client." No, it's not – unless the requirements document details the functionality of the Pony2000 compared to the Pony1900 client so the tester can create a test for each functional piece. We are given the "what," but there is no who, where, when or how, so the meaning of "perform the same" is not defined. A requirement of this type is probably meant to cover the functionality exhaustively, but without some description of how exactly the Pony2000 client "performs," it is worthless. There is nothing to verify.

Is this example testable? "The Prescriber's dose unit (or converted dose unit if applicable) should be communicated to caregivers (CareProduct A and CareProduct E). When a dose is converted for extremely small or large units, that conversion will also be available for use in other applications." Let's break it down. Who = the Prescriber's dose unit. What = it's communicated to caregivers or two specific integrated applications. When = when the dose is large or small, the conversion is available to other applications. Why, how and where are missing. If we take the who – what is the Prescriber's dose unit? A better definition is necessary; what this actually means is the physician's chosen dose unit. Now, on what? Where? How? Why? Finally, what do they mean by "if applicable"? Stating what is applicable is the point of the requirements document.

What about this example: "When a medication has the non-pharmacy prompt set to either 'CareProduct A' or 'CareProduct A and CareProduct B', Pony2012 will automatically append '(NonPharm)' to the end of that medication's name displayed in the lower left order window. This display will persist during the ordering process, but will NOT be visible as part of the medication's name once the order is confirmed." Yes, it is testable. Why? We have:

Who = a medication with the non-pharmacy prompt set to CareProduct A, or to CareProduct A and CareProduct B.
What = configuration that appends "(NonPharm)" to the end of the medication name in the order display window.
How = an automatic display that persists during the session and prior to order completion.
Where = in standard medication entry, before the order is completed and moves to the lower left order window.
Why = to display a visual indicator when a non-pharmacy medication is ordered.
When = when a medication is defined as non-pharmacy.

Your test team can verify several results with this one requirement. It is lengthy and includes multiple components, but it gives the information necessary to test. As a test team, your first and most important action is verifying that the requirements are testable. Untestable requirements leave a wide and ambiguous opening for defects.

Missing connections – looking for the Grand Canyon

Once your test team has testable requirements, the next task is to review for missing connections. The test team reviews for logic or functional workflows where gaps exist. Is there a spot where an integration point exists with another application? Is the data available for transmission? If a customer places an order, do they get a confirmation number? A receipt? Are the numbers unique, and does the receipt actually print to paper? If it prints, is it legible and in a format that makes it easily readable? Testing teams know these small details are often overlooked by development, but not by customers.

Consider your communication points. In healthcare, messaging systems pass information between related systems in a form readable by other health systems. Similarly, a financial application may rely on data being present to confirm an account identity. Is the receiving application aware of it? Did anything change that would alter the passage of the data? Confirming that the communication and integration points are covered is critical.

Once those are complete, scan for places where data is not written to a database table. For example, in electronic health record software every single action on a patient record is recorded, every time, and those records are retained for auditing, historical and reporting purposes. The same goes for most financial transactions, large or small: they all must be recorded in a database table. A good thing to check is exactly which columns the data is written to. Review the database table – is anything missing? Perhaps a date field has incorrect formatting or doesn't accept a valid number of characters? Database records are generally ripe harvesting locations for defects not covered in requirements.

Conclusion

Your test team will enhance the value it provides when it reviews requirements documents consistently. Reviewing requirements for testability and for gaps in processing are useful methods that provide that consistency. Your team will consistently provide value and enhance your application's quality and customer acceptance by finding defects before they are coded in.

Tuesday, April 3, 2012

Contextual inquiry techniques in requirements gathering

Gathering software requirements is one of the most challenging tasks software development managers face. One way to overcome these challenges is to observe or experience the processes that the application will be written to address, using a technique called contextual inquiry. We explore this technique and show how it can be used to overcome some of the typical issues encountered when gathering requirements.

The history of contextual inquiry

Contextual inquiry originated in the mid-80s as a response to companies getting frustrated with requirements gathering efforts in software development. Users were frustrated with software that was delivered but didn't address their real business problems. Software developers were frustrated that the users couldn't articulate what they needed well enough in the beginning, but then complained when the software was delivered. The bottom line was that the business problem wasn't solved by the software delivered.

Understanding what users do in their jobs, and deriving or augmenting requirements from that understanding, is the key concept behind contextual inquiry. Users may not be able to articulate precisely what they need in requirements gathering exercises; what users want may be very different from what they actually need. Observing users in their natural work setting, and even performing their work for a brief period of time, helps ensure that the requirements you gather reflect what they actually need to do their jobs. This is the "contextual" in contextual inquiry.

Contextual inquiry was pioneered by Karen Holtzblatt and Hugh Beyer in the '80s and '90s as a requirements gathering methodology, growing out of their software engineering experiences at Digital Equipment Corporation (DEC) and beyond. They subsequently extended these techniques into a design methodology called Contextual Design. The emergence of UML (Unified Modeling Language) also provided a convenient way of capturing the results of contextual inquiry and similar techniques.

Typical problems with requirements gathering

Users may know what they want, not what they need – Users tend to express requirements in terms of how they do things currently, the as-is business processes. But these may not be the optimal way to do their jobs. Requirements gathered this way may institutionalize the existing state, including its inefficiencies. This is called paving the cow path, evoking an informal path in the meadow created by meandering cows.
Business users may not reflect customer requirements – Business users may express requirements only in terms of their work processes. This may or may not be in the best interest of a company’s prospects and customers. Requirements need to strike a balance between making it easy internally vs. making it easy for customers.
Users may not know what's possible with technology – Users tend to express requirements in terms of their own knowledge of technology. Information technology evolves so fast that tools used even five years ago may already be obsolete. New computing platforms like smartphones and tablets are replacing laptops and desktops; a decade ago it was minicomputers, and a decade before that, mainframe computers. Requirements need to look to the future and plan for this rapid evolution in technology.
Users may not have visibility into requirements for upstream and downstream business processes – Requirements gathering should be in a holistic context, with allowances made for upstream and downstream business processes. Eliciting requirements from only members of a certain division or department may reflect only their requirements and have no context with respect to inputs from upstream business processes and outputs to downstream ones.

The contextual inquiry process

The contextual inquiry process consists of the following broad steps:

Gathering background holistic information – Requirements gathering needs to incorporate the long-term goals of the company and not just current work practices. The first step of contextual inquiry is therefore the collection of background information such as business strategy, product development plans and a general idea of the requirements that will be needed in the future. This ensures that long-term goals are accommodated along with daily work practices.
Gathering communication flow and process sequences – During contextual inquiry, it pays to actually perform the job functions of the users for a short period to get a sense of how communications and processes flow. Doing this with upstream and downstream business processes also gives a good idea of who feeds information into the current process and which processes its outputs feed into. This overcomes the isolated requirements gathering problem and provides a good contextual grounding. Individual, anecdotal work practices must not be mistaken for general, across-the-board ones; interviewing multiple users is necessary to guard against this.
Gathering artifacts and physical work space details – Physical artifacts like forms, screenshots of existing systems and process flow diagrams, if they exist, are to be gathered and included in the requirements exercise. Getting a sense of the physical work space in which users perform their work is important, since it can point to some special requirements that may not have been gathered otherwise. For example, if users work from home or remotely in addition to the office, a browser-based interface may need to be part of requirements.

Benefits of contextual inquiry

Here are some of the benefits of contextual inquiry as a requirements gathering tool. You will be better able to:

Get requirements that balance users with business goals and customers.
Get requirements that enable agreement between IT and business.
Support the way users want to work.
Capture hidden, future requirements and thus enable better planning for the future.
Get to what users need, not just what they want.

Conclusion

Yogi Berra said, “You can observe a lot by just watching!” Nothing captures contextual inquiry better than this quote. Contextual inquiry may help IT professionals understand what a business user is trying to accomplish by getting the context in addition to the requirements. Analysts accomplish this by spending time at the work place, doing the jobs the users do for some amount of time. The overall business, as well as the work context, enables IT to have a longer term holistic view, leading to higher quality requirements.

Tuesday, February 7, 2012

Difference Between http & https Part 2

Summary:
Circulating email advises web users to take note of the differences between "http" and "https" in web addresses to ensure that they only provide sensitive personal and financial information on secure websites (Full commentary below).

Status:
True

Example:(Submitted, January 2009)
Subject: FW: Difference between http & https (no joke)

Don't know how many are aware of this difference, but worth sending to any that do not...... What is the difference between http and https

FIRST, MANY PEOPLE ARE UNAWARE OF
**The main difference between http:// and https:// is It's all about keeping you secure** HTTP stands for Hyper Text Transport Protocol,

Which is just a fancy way of saying it's a protocol (a language, in a manner of speaking) For information to be passed back and forth between web servers and clients. The important thing is the letter S which makes the difference between HTTP and HTTPS.

The S (big surprise) stands for "Secure". If you visit a website or webpage, and look at the address in the web browser, it will likely begin with the following: http://.

This means that the website is talking to your browser using the regular 'unsecure' language. In other words, it is possible for someone to "eavesdrop" on your computer's conversation with the website. If you fill out a form on the website, someone might see the information you send to that site.

This is why you never ever enter your credit card number in an http website! But if the web address begins with https://, that basically means your computer is talking to the website in a secure code that no one can eavesdrop on.

You understand why this is so important, right?

If a website ever asks you to enter your credit card information, you should automatically look to see if the web address begins with https://.

If it doesn't, there's no way you're going to enter sensitive information like a credit card number.

PASS IT ON (You may save someone a lot of grief).


Commentary:
This email forward offers some timely advice that may help many Internet users avoid compromising their security online. The message outlines in plain English the difference between the http and https protocols. It explains why it is important to ensure that a web page is using the secure https protocol before providing financial information such as credit card numbers.

Http Protocol
The information provided in the email is correct and well worth heeding. Hypertext Transfer Protocol (http) is a system that allows the transmitting and receiving of information across the Internet. Http allows information, such as the text you are reading right now, to be accessed from the server by your web browser. While http allows for the quick and easy transmission of information it is not secure and it is possible for a third party to "listen in" to the "conversation" between servers and clients.

For many purposes, such as a website article that is open and available to everyone, this lack of security is of no importance. However, if a website needs to collect private information such as credit card numbers, then a more secure protocol is an important prerequisite. For example, when purchasing a product or service online or using Internet banking, it is vital that the exchange of information between clients and servers cannot be easily harvested by third parties. Thus, the https (secure http) protocol was developed to allow the authorisation of users and secure transactions.

So, as the message states, if you are required to provide sensitive personal or financial information on a web page, always ensure that the web address starts with https, not just http. Knowing the difference between http and https can certainly help web users keep their information secure. For example, if a web page that should be secure, such as an Internet banking login page, uses http rather than https in its address, it may well be a "look-alike" phishing site designed to steal financial information. A genuine financial institution's website would NEVER use the unsecure http protocol on any page that requires customers to provide personal or financial information.

Unfortunately however, even if a site address does display https, it might still be a bogus phishing web page. Internet criminals can sometimes use clever spoofing techniques to make a fake web page appear to be using the https protocol. Thus, other methods of avoiding phishing scams should also be used.

Note:
Most modern browsers also display a "lock" icon in the status bar or, possibly, in the address field, when a secure https website is being accessed. Generally, you can click on the lock icon to display more information about the secure website.

HTTP vs. HTTPS: What's the difference?

HTTPS (Hypertext Transfer Protocol over Secure Sockets Layer) is a secured communication protocol between a web browser and a web server. You can think of it as secured HTTP (the "S" in HTTPS stands for secure). It encrypts any communication that a user sends to a web server and decrypts it at the server side; similarly, it encrypts anything the web server sends back and decrypts it at the browser side. In this way, the HTTPS protocol provides a secure sublayer under HTTP.
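To make that secure sublayer a little more concrete, the short Python sketch below (using only the standard ssl and socket modules, with a placeholder hostname) opens the kind of encrypted channel a browser negotiates before any HTTP request is sent, and prints what the server presents.

# Minimal sketch of the TLS/SSL layer that puts the "S" in HTTPS.
# The hostname is only a placeholder; any HTTPS site would do.
import socket
import ssl

host = "www.example.com"
context = ssl.create_default_context()  # verifies the server's certificate

with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("Protocol:", tls_sock.version())   # e.g. TLSv1.2 or TLSv1.3
        print("Cipher:", tls_sock.cipher())
        # This certificate is what the browser checks before showing the lock icon.
        print("Certificate subject:", tls_sock.getpeercert().get("subject"))

Plain http skips this negotiation entirely, which is exactly why information typed into an http form can be read in transit.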

So if https is more secure, why do websites use http at all?

One reason is that https costs more. Another is that it slows the website down, since every communication a user sends or receives must be encrypted and decrypted.

You can place all websites into three categories:

Least Security – These websites use http throughout. Most internet forums probably fall into this category; because these are open discussion forums, secure access is generally not required.

Medium Security – These websites use https when you sign in (when you enter your ID and password) and use http once you are logged in. Google and Yahoo are examples of such sites. MSN (or Hotmail) gives you the option of either protocol: you can choose the 'Use enhanced security' option for https or the 'Use standard security' option for http.

Highest security – These websites use https throughout. Most financial institutions fall into this category. Try logging in to your bank or credit card company's website and you will see the https protocol used throughout.

Tip – Unless you trust the provider, think twice before you enter your password on an http website.

Sunday, January 29, 2012

Effective Test Automation in an Agile Environment



Five Common Mistakes and Their Solutions

The dynamically changing IT industry brings new objectives and new perspectives for automated testing in areas that have come to life over the past decade, such as cloud-based and SaaS applications, e-commerce, and so on. The last five years have seen immense growth in the number of agile and Scrum projects. The IT market has also changed significantly, not only with various new tools—including Selenium 2, Watir WebDriver, BrowserMob, and Robot Framework—but with approaches that have changed completely as well. For example, more focus has been placed on cloud-based test automation solutions for both performance testing and functional testing, and cloud-based testing of web applications is now replacing "classical" local deployments of testing tools.

Even though there are a vast number of benefits to automated testing, test automation can often fail. Typical mistakes include selecting the wrong automation tool, using the tool incorrectly, or choosing the wrong time for test creation. It is also worth paying special attention to the test automation framework and to the proper division of work between the test automation and manual testing teams. The "Test Cases Selection" section of this article highlights several reasons why certain test cases should not be automated. Let's take a closer look at the five most common mistakes of test automation in agile and their possible solutions.

1. Wrong Tool Selection
Even though a popular tool may contain a commendably rich feature set and its price may be affordable, the tool can have hidden problems that are not obvious at first glance, for example insufficient support for the product or a lack of reliability. This occurs with both commercial and open source tools.

Solution
When selecting a commercial test automation tool for a specific project, it is not enough to consider only the tool's features and price; it's best to analyze feedback and recommendations from people who have successfully used the tool on real projects. When selecting an open source tool, the first thing to consider is community support, because these tools are supported only by their community, not by a vendor. The chances of correcting issues that arise with the tool are much higher if the community is strong. Looking at the number of posts in forums and blogs throughout the web is a good way to assess the actual size of the community. A couple of good examples include stackoverflow.com, answers.launchpad.net, www.qaautomation.net, and many other test automation forums and blogs, which can be found by entering the name of the given tool into your search engine.

In order to understand whether a test automation tool was selected properly, you should begin with answering a few questions:

Is your tool compatible with the application environment, technologies, and interfaces?
What is the cost of your chosen test automation tool?
How easy is it to write, execute, and maintain test scripts?
Is it possible to extend the tool with additional components?
How fast can a person learn the scripting language used by the tool?
Is your vendor ready to resolve tool-related issues? Is the community support strong enough?
How reliable is your test automation tool?

Answering these questions will provide a clear picture of the situation, and may help you to decide whether the advantages of this tool's usage outweigh the possible disadvantages.

2. Starting at the Wrong Time
It is a common mistake to begin test automation development too early, because the benefits almost never justify the effort lost to reworking automation scripts while the functionality of the application keeps changing up to the end of the iteration. This is a particularly serious issue for GUI (graphical user interface) test automation, because GUI automation scripts are much more likely to be broken by development than any other type of automated test, such as unit tests, performance tests, and API tests. Unfortunately, even after finishing the design phase you may still not know all the necessary technical details of the implementation, because the chosen design could be realized in a number of different ways, and for GUI tests the technical details of the implementation always matter. Starting automation early may therefore mean spending repeated, meaningless effort on redeveloping the automated tests.

Solution
During the development phase, members of the Quality Assurance (QA) team should spend more time creating detailed manual test cases suitable for test automation. If the manual test cases are detailed enough, they can be automated successfully after the given feature is complete. Of course, it's not a bad idea to write automated tests earlier, but only when you are completely confident that further development within the current iteration will not break your new tests.
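As a rough illustration of what "detailed enough to automate later" can look like, here is a hedged sketch that turns a manual test case into an automated one using Selenium's Python bindings; the URL, locators and expected text are invented placeholders, not details from a real project.

# Hypothetical manual test case ("log in with valid credentials and verify
# the welcome banner") automated only after the feature is complete.
# All URLs and locators below are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_valid_login_shows_welcome_banner():
    driver = webdriver.Firefox()
    try:
        # Step 1 of the manual case: open the login page.
        driver.get("https://app.example.com/login")
        # Step 2: enter a known-good user name and password, then submit.
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        # Expected result from the manual case: the welcome banner is shown.
        assert "Welcome" in driver.find_element(By.ID, "welcome-banner").text
    finally:
        driver.quit()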

3. Framework
Do you know what's wrong with the traditional agile workflow? It does not seem to encourage test automation framework development tasks, because they carry no story points. But it's no secret that good, effective test automation requires both tools and a framework. Even if you have already spent several thousand dollars on a test automation tool, you still need a framework to be developed by your test automation engineers. The test automation framework should always be considered, and its development should never be underestimated. How does this fit into the agile process? Pretty easily, actually; it's not as incompatible as it may seem.

Solution
How much time do you need to develop a test automation framework? In most cases it takes no longer than two weeks, which equals the usual agile iteration. Thus, the solution is to develop the test automation framework in the very first iteration. You may wonder whether that means the product will remain untested, but that is not the case, because it can still be tested manually during that period. Some increase in workload on manual testers is probably unavoidable, but there is not much testing to be done during the first iteration anyway, since developers are more focused on back-end development, which is usually covered by unit tests, so the process balances itself. The very first iteration then looks like this: start by analyzing requirements and designing the test automation framework during the design phase; then develop, debug, and test it until the end of the iteration.
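As one possible shape for that first-iteration framework, here is a minimal sketch of a thin wrapper that tests would call instead of raw WebDriver; the class and method names are purely illustrative, assuming Selenium's Python bindings.

# Illustrative skeleton of a first-iteration test automation framework:
# a thin wrapper so individual tests never call raw WebDriver directly.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class Browser:
    """Shared helpers that the whole test suite builds on."""

    def __init__(self, base_url, timeout=10):
        self.driver = webdriver.Firefox()
        self.base_url = base_url
        self.timeout = timeout

    def open(self, path="/"):
        self.driver.get(self.base_url + path)

    def click(self, css_selector):
        # Every click waits for the element first, which keeps tests stable.
        WebDriverWait(self.driver, self.timeout).until(
            EC.element_to_be_clickable((By.CSS_SELECTOR, css_selector))
        ).click()

    def text_of(self, css_selector):
        return self.driver.find_element(By.CSS_SELECTOR, css_selector).text

    def quit(self):
        self.driver.quit()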

4. Test Cases Selection
How do we select test cases for automation? That's an interesting question, and the grounds for another common mistake: trying to automate all test cases. "Automate them all" is hardly an answer if you are focused on quality and efficiency. Following this principle leads to useless effort and money spent on test automation without bringing any real value to the product.

Solution
There are certain cases where it is better to automate and others where it makes little sense to do so, and identifying which is which should always come first. You should automate when a test case is executed frequently enough and takes time to run manually, when a test will run with different sets of data, or when a test case needs to be run on many different platforms and system configurations.
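The "different sets of data" case is usually the clearest win. A hedged sketch using pytest parametrization is shown below; the discount rule it exercises is invented purely for illustration.

# Hypothetical data-driven test: one automated case run against many input
# sets, which would be tedious and error-prone to repeat manually.
import pytest

def discount(order_total):
    """Toy stand-in for the application logic under test."""
    if order_total >= 100:
        return 0.10
    if order_total >= 50:
        return 0.05
    return 0.0

@pytest.mark.parametrize(
    "order_total, expected",
    [(10, 0.0), (49.99, 0.0), (50, 0.05), (99.99, 0.05), (100, 0.10), (250, 0.10)],
)
def test_discount_rules(order_total, expected):
    assert discount(order_total) == expected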

On the other hand, test automation cannot be used for usability testing, and it does not pay off in the following instances: when the functionality of the application changes frequently; when the expenditure on test automation tools and the support of existing tests is too high; and when test automation does not provide enough advantage compared to manual testing.

5. Test Automation vs. Manual Testing
A lack of coordination between your automated testing and manual testing subteams is another common mistake. It can lead to excessive testing effort and poor-quality software. Why does this happen so often? In most cases, manual testing teams do not have enough technical skill to review automated test cases, so they prefer to hand off this responsibility to the automated testing team. This causes a different set of problems, including:

The test automation scripts are not testing what they should, and in the worst case scenario, they are testing something that is not even close to the requirements.
To make a test pass, test automation engineers may change test automation scripts to skip certain verifications.
Automated tests can become unsynchronized with the manual test cases.
Some parts of the application under test receive double coverage, while others are not covered at all.

Solution
In order to avoid these problems, it’s best to keep your whole QA team centralized and solid. The automated testing subteam should obtain the requirements from the same place as the manual testing subteam. The same list of test cases should be kept and supported for both subteams. Automated test cases should be presented in a format that is easy to understand for non-technical staff. There are many ways to achieve this, including using human-readable scenarios, keyword-driven frameworks, or just keeping the code clean while providing sufficient comments.
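One simple way to keep automated cases readable for non-technical staff is to mirror the manual test's wording in the test name and step comments. The sketch below is a hypothetical example; the App class is a fake stand-in for a real application driver.

# Illustrative readable test: each step echoes the manual test case, so a
# non-technical reviewer can follow what is being verified.
class App:
    def __init__(self):
        self.user = None
        self.error = ""

    def login_as(self, profile):
        self.user = profile

    def checkout(self):
        if self.user == "shopper_with_expired_card":
            self.error = "Your card has expired."

    def visible_error(self):
        return self.error

def test_rejected_payment_shows_error_message():
    app = App()
    # Given a shopper whose credit card has expired
    app.login_as("shopper_with_expired_card")
    # When they try to pay for the items in their cart
    app.checkout()
    # Then the payment is rejected with a clear, human-readable message
    assert "card has expired" in app.visible_error()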

Conclusion
I have listed only the most common mistakes that can reduce the efficiency of test automation on your project and lead to poor quality. It's wise to pay close attention to test automation activities and to consider them an integral part of your project's quality assurance process. If you take test automation seriously, you will be able to avoid most of the mistakes mentioned above.

Tuesday, January 17, 2012

Agile testing quadrants: Guiding managers and teams in test strategies


Many software teams struggle with “fitting testing in” to the development lifecycle. For software managers and teams new to Agile development, the idea of planning and executing all the testing activities within short iterations and release cycles is daunting. I’m often asked questions such as: “When should we do performance testing? And who is going to do it? There aren’t any performance testing specialists on our cross-functional development team.” Substitute “user acceptance testing,” “exploratory testing,” “security testing,” or “usability testing” for “performance” – every Agile development organization faces similar challenges.

In my experience, a testing taxonomy such as the Agile testing quadrants (Figure 1) is a highly effective tool to help answer these questions.

Figure 1. The Agile testing quadrants

How the quadrants work

The quadrants originated with Brian Marick's original posts on his Agile testing matrix. With his permission, Janet Gregory and I adapted this into the Agile testing quadrants, which form the heart of our Agile Testing book. The quadrants represent the many different purposes for different types of testing.

Tests in the left-hand quadrants help the team know what code to write, and know when they are done writing it. Tests on the right-hand side help the team learn more about the code they have written, and this learning often translates into new user stories and tests that feed back into the left-hand quadrants. Business stakeholders define the quality criteria for the top two quadrants, while the bottom two quadrants relate more to internal quality criteria.

The word “team” here includes both the customer and development teams. We need the business experts to provide examples that we turn into tests to drive development. We also need them to evaluate the resulting software as it is delivered in small increments, and give us fast feedback so we can make course corrections as we go.

The clouds at the quadrant corners denote whether tests in that quadrant generally require automation, manual testing or specialized tools. "Manual" doesn't mean no specialized skills are required; exploratory testing, for instance, is a largely manual process but requires a high degree of expertise.

When to do which tests

The quadrant numbering system does not imply any order. You don't work through the quadrants from Q1 to Q4 in a Waterfall style. Janet Gregory and I simply chose an arbitrary numbering so that, in our book and when we are talking about the quadrants, we can say "Q1" instead of "technology-facing tests that support the team."

Most projects would start with Q2 tests, because those are where you get the examples that turn into specifications and tests that drive coding, along with prototypes and the like. However, I have worked on projects where we started out with performance testing (which is in Q4) on a spike of the architecture, because that was the most important criterion for the feature. If your customers are uncertain about their requirements, you might even do a spike and start with exploratory testing (Q3).

Q3 and Q4 testing pretty much requires that some code be written and deployable, but most teams iterate through the quadrants rapidly, working in small increments: write a test for a small chunk of a feature, write the code, and once the test is passing, perhaps automate more tests for it, do exploratory testing on it, do security or load testing on it, and so on; then add the next small chunk and go through the whole process again.

How to use the quadrants

When you get the team together to plan a release or theme, go through each quadrant and identify which types of testing will be needed. Maybe this project doesn’t require usability testing, but reliability testing is critical. Talk with your customers about quality criteria. What absolutely has to work?

Next, figure out if the team (or teams) have people with the right skills to accomplish all the different types of testing, and if they already have the necessary hardware, software, data and test environments. If not, brainstorm ways to get what is needed in a timely manner.

Here are some examples. Does the team already have appropriate data for testing? If not, you may need a user story to obtain or create test data, or perhaps a business expert will arrange to provide it. If load testing is critical, but nobody on the team has experience with load testing, you could budget in time for programmers to experiment with developing a load testing harness, schedule time with a load testing expert from a different team within the company, or plan to contract with a load testing specialist. Identifying these issues early gives you time to find creative solutions.

If the team has decided to try a new practice such as using business-facing tests to drive development (known as acceptance test-driven development (ATDD) or specification by example (SBE)), plan extra time for them to get up to speed with the practice. In the case of ATDD, they need time to experiment with different approaches and to create or adopt a testing framework. If nobody on the development team has experience with exploratory testing, you may want to plan training for that. The goal is to avoid suddenly getting stuck mid-cycle due to lack of a particular testing skill, tool or piece of infrastructure.
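For teams that do adopt ATDD or specification by example, the business-facing examples themselves can stay close to the customer's wording even in plain test code. The sketch below is one hedged possibility; the free-shipping rule is invented for illustration.

# Hypothetical business-facing examples turned into executable tests.
# The rule "orders of $75 or more ship free" is invented for illustration.
def shipping_cost(order_total):
    """Stand-in for the production rule these examples would drive."""
    return 0.0 if order_total >= 75 else 7.95

def test_orders_of_75_dollars_or_more_ship_free():
    assert shipping_cost(75.00) == 0.0
    assert shipping_cost(120.00) == 0.0

def test_orders_under_75_dollars_pay_standard_shipping():
    assert shipping_cost(74.99) == 7.95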

Repeat this process as you plan each iteration. For each user story, think through the testing requirements in each quadrant. How will tests be automated at various levels for a given story? When you start your planning by thinking about testing, you are likely to come up with technical implementations that make automation easier. If you lack some key testing component you may decide to postpone a theme or user story until that component can be obtained.

For example, my team had to rewrite the software to produce account statements for our financial services system into our new architecture. The legacy code had no automated regression tests, and the requirements were highly complex. We ran a huge risk of making mistakes that weren’t caught in testing, and mistakes on account statements are disastrous for the business. We decided to spend an entire sprint developing automated regression tests for the existing account statements, which required some creative experimentation. Armed with this safety net, we then developed the new code, confident that we had mitigated the risks.

It’s a tool, not a rule

Adapt and enhance the quadrants to suit your own needs. There are no hard and fast rules about which tests belong in which quadrant. Michael Hüttermann added "outside-in, barrier-free, collaborative" to the middle of the quadrants in his book, Agile ALM. Mike Cohn's Test Automation Pyramid is a good complement to help teams focus their automation efforts for maximum return on investment. Pascal Dufour integrates risk analysis with the quadrants to decide what level of detail is needed in the specifications.

The quadrants are simply a taxonomy that helps teams plan their testing. Using the quadrants helps teams make sure they have all the people and resources they need to accomplish it. Think through them as you do your release, theme and iteration planning, so your whole team starts out by thinking about testing first. Focus on the purpose of each testing activity, and the quality you are building into your software.

Tuesday, January 10, 2012

Automation testing: Seven tips for functional test design

Automated functional tests, or user interface (UI) tests, have a reputation for being hard to maintain and for not being powerful enough to actually find bugs. However, in most cases the reasons for this are not the fault of the test tools or the test frameworks, but can be traced to poor design of the individual tests themselves.

Here are seven functional test design tips to make UI tests more maintainable and more powerful.

Don't just click, check afterward

Many automated test tools include a feature that allows a set of actions to be recorded automatically and then played back. While such record/playback features are sometimes handy when creating tests, the results of pure record/playback actions tend to be poor tests. In particular, record/playback tests do not check the state of the application after manipulating elements in the application.

Clicking, typing, selecting, and other such functions all change the state of the application in some way. Good tests will check for proper results after manipulating elements in the application. If the automated test follows a link, have the test check that the resulting page is correct. If the test generates a report, have the test check that the content of the report is correct.

Wait, don't pause

Often an application will take some time before results are available for the test to check. This is particularly common with AJAX calls in Web browsers. It is tempting to simply have the test pause or sleep for some number of seconds before checking such a result, but pausing or sleeping is poor practice. If the application takes too long to return, then the test will generate a false failure. If the application returns more quickly, then the test is wasting time while it could be moving on.

Instead of pausing or sleeping, have the test wait for a particular aspect of the application to appear. Not only does this make the test less prone to false failures, it also makes for a more powerful test, since waiting for a specific element to appear is itself a check that the application actually produced it.
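For example, Selenium's WebDriver bindings provide explicit waits that block until a condition is met or a timeout expires. The Python sketch below uses a placeholder URL and locators; the idea carries over to most automation tools.

# Waiting for a condition instead of sleeping for a fixed number of seconds.
# URL and locators are placeholders for this sketch.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("https://app.example.com/reports")
driver.find_element(By.ID, "generate-report").click()

# Bad: time.sleep(10) wastes time, and fails anyway if the report takes 11 seconds.
# Better: wait up to 30 seconds, but continue the moment the result appears.
report = WebDriverWait(driver, 30).until(
    EC.visibility_of_element_located((By.ID, "report-table"))
)
assert "Quarterly totals" in report.text  # placeholder expected content
driver.quit()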

Use discrete locators, not indexes

It is often tempting to have a test do something like "click the third link on the page" or "select the fifth element in the list." Instead of manipulating aspects of the application according to index, though, it is worth the effort to find or create unique identifiers for such elements.

If the order of the links changes, or the order of the list changes, the test will go down an unexpected path, and maintaining such unpredictable tests is quite difficult.
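Here is a short sketch of the difference, using Selenium's Python bindings with placeholder URL and locators:

# Index-based versus identifier-based locators (URL and ids are placeholders).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://app.example.com/invoices")

# Fragile: "click the third link on the page" breaks whenever link order changes.
#   driver.find_elements(By.TAG_NAME, "a")[2].click()

# Robust: locate the element by a unique, meaningful identifier instead.
driver.find_element(By.ID, "view-invoice-link").click()

driver.quit()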

Check sort order with regular expressions

It is often important to the user that aspects of the application appear in the correct order. Whether columns in tables or elements in a list, or text on the page itself, it is often important that automated tests check for the correct order of things.

Say there is a set of things that should appear in order, called "one," "two," and "three." Tests can check the order of things using regular expressions. Here is an example using a simplified kind of regular expression called a "glob" that is available in Selenium and other automated test tools:

| getText | glob:one*two*three |

| click | sort_thing |

| getText | glob:three*two*one |

The first step of this test checks that the text "one" is followed by the text "two" followed by the text "three." The "*" character indicates that the test will allow any characters at all between "one" and "two" and "three." The second step of the test clicks something that causes "one" "two" and "three" to be sorted in reverse order, then the third step of the test checks that the sorting was actually successful.
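The same check can be expressed with an ordinary regular expression in whatever language drives the tests. Here is a rough Python equivalent of the glob pattern above:

# Rough Python equivalent of the glob check: "one" then "two" then "three",
# with anything allowed in between. re.DOTALL lets ".*" span line breaks.
import re

page_text = "one ... two ... three"  # in a real test this comes from the page
assert re.search(r"one.*two.*three", page_text, re.DOTALL)

page_text_after_sort = "three ... two ... one"
assert re.search(r"three.*two.*one", page_text_after_sort, re.DOTALL)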

Don't repeat yourself

As noted above, waiting for an element in the application to appear is a good practice. It is often the case that once the element appears, the test will want to manipulate that element, for instance by clicking. It is good practice to abstract common actions into their own methods or modules, then to call those actions from the tests as required. Here is an example of abstracting the wait-for-and-click action in the syntax of Fitnesse and Selenium:

!| scenario | Wait for and click | elementLocator |

| waitForElementPresent | @elementLocator |

| click | @elementLocator |

So from a test itself we need only write:

| open | www.foo.com |

| Wait for and click | link=Welcome to Foo! |

While this example saves only one line of typing, if 'Wait for and click' is performed hundreds or thousands of times in a test suite, that is a significant improvement in maintenance and readability. Other examples of actions to be abstracted to their own modules might be logging in, selecting all the elements of a list, checking for a set of errors, etc.
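In code-driven suites the same abstraction is simply a small helper function. Here is a hedged sketch using Selenium's Python bindings, reusing the www.foo.com example from the table above:

# "Wait for and click" as a reusable helper instead of repeated boilerplate.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_and_click(driver, locator, timeout=10):
    """Wait until the element is present and clickable, then click it."""
    WebDriverWait(driver, timeout).until(
        EC.element_to_be_clickable(locator)
    ).click()

driver = webdriver.Firefox()
driver.get("https://www.foo.com")
wait_for_and_click(driver, (By.LINK_TEXT, "Welcome to Foo!"))
driver.quit()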

Don't use conditionals

Sometimes test environments can behave unpredictably. In such cases it is tempting to use a conditional in a test to say, for example, "if this element exists, click it, if it does not exist, do something else." There are a number of problems with this approach. One problem is similar to the problem caused by using indexes instead of specific locators: if the application being tested changes, the automated test could go down completely unpredicted and unknown paths, causing false failures (or worse, false successes) and making maintenance difficult. Another problem is that one branch of the conditional statement could (erroneously) disappear altogether, and the test would never show that a bug had been introduced.

Use Javascript to create reusable random data

Finally, below is a particular example of a good practice for certain kinds of test data specifically using Selenium and Fitnesse. In this example, the test needs to enter a Social Security Number that is unique, then check that that SSN has in fact been entered into the application:

| type; | ssn | javascript{String(Math.floor(Math.random()*900000000)+100000000)} |

| $SSN= | getValue | ssn |

| click | link=Save |

| type; | search | $SSN |

| GET SEARCH RESULTS CONTAINING THE SSN |

Selenium will evaluate Javascript in-line. The first line of this test types a random nine-digit number into a field whose id value is "ssn"; the number is generated on the fly by evaluating the Javascript argument to the type action. The second line uses a feature of Fitnesse to store that nine-digit number from the "ssn" field in a variable called "$SSN". The test then types that same nine-digit number into a field whose id value is "search." This is an elegant and useful way to handle, within the test itself, test data that is required to be unique. The same approach should be available in any reasonable test tool or framework.
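Outside of Fitnesse, the same pattern is easy to reproduce directly in test code: generate the unique value once, remember it, and reuse it for the later search. A hedged Python sketch with placeholder URL and locators:

# Generating a unique nine-digit value in the test itself, then reusing it.
# URL and locators are placeholders.
import random
from selenium import webdriver
from selenium.webdriver.common.by import By

ssn = str(random.randint(100000000, 999999999))  # always nine digits

driver = webdriver.Firefox()
driver.get("https://app.example.com/patients/new")
driver.find_element(By.ID, "ssn").send_keys(ssn)
driver.find_element(By.LINK_TEXT, "Save").click()

driver.find_element(By.ID, "search").send_keys(ssn)
driver.find_element(By.ID, "search-button").click()
results = driver.find_element(By.ID, "search-results").text
assert ssn in results
driver.quit()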

Good design for good testing

These are just a few examples to help make your automated tests both powerful and maintainable. Many other examples exist, and each automated test tool or framework will have good design practices unique to the tool as well.

The biggest complaints about automated functional tests are that they are hard to maintain, that they are not powerful, and that they don't find bugs. But well-designed tests are not difficult to maintain; they are powerful, in that they check the state of the application under test for aspects of function important to the user and to the application itself; and well-designed automated tests absolutely do find bugs.

Testers: Put on Your End-user Hat

Article by Paul Fratellone

Summary:

The more you know about the end-user, the more effective you will be as a tester. Here are some tips for adding value by thinking like your customer.

One of the biggest criticisms of testers and QA organizations is that they do not understand the business or the end-user. If you believe this to be true of your team, it sends a clear message that you do not have a value-adding team of testing professionals. The more you know about the ultimate customer or end-user, the more effective a risk-based tester you will become.

When I led QA teams in the past, I made "knowing your customer" a major performance criterion for my staff. To ensure this, I arranged field trips with business development to customer sites so the testing team could see how and why the end-users actually used the system or application. Upon returning from these field trips, the QA team started to modify how it approached end-to-end and user acceptance tests. It was so beneficial that the number of critical end-user defects dropped by more than 20 percent in a very short period of time.

This result inspired me to continue my learning. I took the product management certification course from Pragmatic Marketing and was certified in pragmatic product management in December 2009. From the course, I learned how understanding the following questions will increase the effectiveness of tests and testing teams (note: It is your responsibility to ensure you are adding value to the delivery of the product):

What problem or problems will this upgrade, enhancement, or new feature solve? This is the value proposition.
For whom do we solve that problem? This is the target market.
How will we measure success? This is the business result. What metrics will be needed to validate success has been attained?
What alternatives are out there? What is the competition doing? Is it a "blue ocean industry" or a "red ocean industry"?
Why are we best suited to pursue this? What is our differentiator?
Why now, and when does it need to be delivered? This is the market window or window of opportunity.
How will we deploy this? What will be our deployment strategy?
What is the preliminary estimated cost and benefit? Will there be a return on investment, customer satisfaction increase, or cost avoidance?

If you understand these high-level questions, you will ensure a higher level of end-user quality by designing and executing tests from an end-user's perspective. By defining, quantifying, and weighing the many quality dimensions as perceived by your end-users, you will be able to approach testing in a very efficient and effective manner. Knowing what the user wants and needs to do with the system enables a proactive mindset regarding requirements and feature reviews, acceptable behaviors, operational inconsistencies, interactions, and interoperability.

I have found the user manual to be a great source of knowledge for a test team. Granted, a newly developed application is devoid of a manual, as the manual gets developed along with the application. But, during my independent consulting years, I relied heavily on these manuals to gain an operational business perspective. Be careful, though, as they can be dated and may become stale depending upon how much the end-user relies upon them.

This perspective naturally leads to an understanding of where the potential risks are to the business:

What are the most common and critical areas of the functionality from the user’s point of view?
How accepting should the system be of incorrect input?
Is the description complete enough to proceed to design, implement, and test the requirement processes?
What is an acceptable response time for the users? Are there stated performance requirements?
Which problems and risks may be associated with these requirements?
Are there limitations in the software or hardware?
Which omissions are assumed directly or indirectly?

Regardless of the delivery methodology you are using (iterative, agile, waterfall, etc.), the above will be applicable. A test strategy should be unaffected by the delivery method; it should be a relative constant, setting the goals, objectives, and approach. Your test strategy should articulate test governance and the other standard operating procedures the delivery team will follow, including but not limited to configuration, release, source code, defect, and test case management. Any test tools and test execution tools that will be deployed in the delivery of the application or system should also be identified. Clearly articulate expectations for how the team will measure success and what defines "done"; without this, how will the team know when good is good enough? The document should also include a contingency plan for any regression that might take place: what severity and quantity of defects will be deemed acceptable, and how will defects be handled in the remaining deployments? Additionally, a test strategy needs to state what test metrics will be used and how they will be reported, communicated, and escalated when results show negative trends against the acceptance criteria.

The test plan should document the techniques and methods that will be employed to validate the system under test, and it should detail the estimates of the test cycles in relation to the delivery plan. A test estimation algorithm and the estimates derived from it are best guesses at this point, so be sure these assumptions are reviewed with the delivery team.

The same holds true for the development team. If the project is to be delivered in iterations, then naturally the team will jointly develop the estimated costs and duration. It is important to highlight to the delivery team the estimated defect rate for the delivery: as this rate is approached or exceeded, the impact on remaining deployments and regression can cause not only the timeline to slip but also costs to escalate. Feedback from these defect trends should trigger a re-estimation of the remaining iterations or sprints, thereby increasing accuracy and confidence. The methods and techniques documented in the plan will support the estimation of costs and duration; examples of these techniques include requirements-based, combinatorial, model-based, and scenario-based testing. Each technique has unique attributes associated with the various levels of structural and functional testing. Test estimates will be challenged, so teams need to stay focused. Should all business-critical features and functions have more than 90 percent test case permutation coverage across the various combinations of valid input, error conditions, and user profiles? Do all the users view the feature set similarly, and do they agree about what is critical to the operation of their needs and business?

Deciding at what level to stop testing is difficult, but there is the ever-present law of diminishing returns. Review the approach and risks with the business and product owners, and gain their insight into where there could be excessive testing and what is an acceptable level of risk. As a value-adding testing team, you must quantify the costs by articulating the number of test cases and how long it will take to deliver the quality the end-user is expecting. Understand where the greatest risks to the business exist—features frequently used by most or all customers, financial impacts from failure or errors, feature complexity, defect density, and historical data on problem areas or feature sets.

When it all comes together, the team can circle back to the quality dimensions highlighted earlier in this article. Reliability, usability, and accuracy will manifest in the number of test cases and the techniques used to deliver the level of quality the end-user expects and the business owner must plan to pay for. Complete transparency enables the team to make sound business decisions and to settle on appropriate levels of risk and tradeoffs when plans are not being met. The cost-risk-benefit equation of quality will guide the team as it makes adjustments to content, time, and cost. There should be no surprises when the team faces the tough decisions that always arise during the deployment of software.

In the delivery of software, testers can wear many hats. Teams that are able to think like the end-user will make a significant contribution in ensuring that the test team is adding value and focused on meeting the client’s expectations.