Opinion

Is there a place for QA Testing in Scrum?

Calcey

With the emergence of the Agile software development zeitgeist at the onset of the 21st century, there was an upheaval in how professional competencies were demarcated within the software engineering industry. The established “project roles” and “professional practice groups” such as Development, QA Testing, Project Management and Business Analysis were shaken up, with a general tendency towards de-specialization. The software developer was re-packaged as an “all-rounder”, expected to perform well in every department. Project Managers were narrowed into “Scrum Masters” with a smaller window of responsibility than the PMs of yore, who handled everything from elucidating business requirements to billing clients. The “management” effort was decentralized and distributed throughout a cross-functional team. Many intelligent folk in the industry welcomed this change, as it made developers better aware of the overall business requirements by placing them in direct contact with the client.

One notable early trend in agile product development teams was an aversion to having dedicated “testers” – after all, why would one need them if one wrote one’s unit tests and tested one’s releases constantly in a continuous integration environment? For some years, agile development startups shunned hiring specialized human testers, on the basis that the developers would “perfect” the functionality purely through awareness of business needs and through end-user feedback from the client. The possibility that there is such a thing as “end-user competency” – a quality that does not always accompany programming competency – was completely ignored.
As with any other proposition in the scientific management of work, empirical evidence shapes engineering process. Today, many agile development teams are back to recruiting dedicated testers to perform manual regression testing and a host of other mission-critical tasks. I’d like to detail four important tasks that our dedicated team of testers at Calcey performs, and explain why a realigned tester role is a valuable addition to software development.

1. Usability Testing
Testing the usability of user interfaces – i.e. how the requisite functionality is translated into a user-friendly experience – is the first stage in a project’s lifecycle where Calcey testers get involved. A product owner (or a developer) may quickly wireframe the functionality he or she needs and pass it on to the development team, but this first cut can benefit immensely from usability testing. The test team prints out the wireframes, places them before an “ordinary user” (a tester who is not familiar with the product) and observes how he or she tries to interact with the wireframes to achieve an objective stated upfront. The questions raised and the time taken to achieve the objective are noted, and the user’s eye and hand movements are observed. Thereafter, the wireframe is modified to improve the user experience.

Sometimes an experienced tester doesn’t actually need to carry out a formal usability test; he or she can simply draw upon past knowledge of good practices to redefine a user experience and make it a better one. We have found usability testing input especially useful when developing different user interfaces that deliver the same functionality across multiple platforms, such as the web, iPhone and iPad.

2. Regression Testing
The beauty of Scrum is that it allows a QA team to function alongside the dev team, working more or less in parallel. What we discovered is that the Sprint time-box must accommodate a testing and bug-fixing period if one is to avoid a bug pile-up. For example, if a single Sprint is three weeks long, two weeks are allocated for development and one week is left for testing and bug fixing the Sprint demo release. During the initial two weeks of the Sprint there is no regression testing, but the tester(s) can prepare for the upcoming release by drawing up simplified test cases on a spreadsheet. They can also continue testing the previous Sprint release, or engage in other critical testing activities such as performance testing or test automation (see below).

There is no hard-and-fast rule, but we find that in our “parallel development and testing” setup the bandwidth required for dedicated test resources is modest – in our experience, roughly one dedicated tester for every four developers. The critical success factor is that the tester plays an end-user role, looking upon the whole system as though he or she would have to work with the evolving product for years to come, without worrying about engineering complexity.

3. Performance Testing
Performance testing is a much-discussed and often over-complicated activity. There are two generic types of performance tests that we set up and conduct during product development initiatives at Calcey. The first is the performance test proper: we set up a reasonable transactional load on the user interface under test and record the response times. For example, how long does it take to log in to the system via the login screen and land on the home page when five users log in at once? We match our results against the performance expectations for the system provided by the client, or against observed industry norms for different devices and environments. A page change on a native iPad app would be expected to happen within a second, for example, whereas a parameterized search result on a web page could be expected to take 3 to 5 seconds over the Internet.
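To make the first type concrete, here is a minimal sketch of such a test written in C#, since our test scripts are typically written in that language. The login URL, credentials and the choice of five concurrent users are hypothetical placeholders; a real test would use the actual endpoint and the load profile agreed with the client.

    // Minimal sketch: fire a fixed number of concurrent login requests
    // and record each response time. Endpoint and credentials are hypothetical.
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class LoginLoadTest
    {
        static async Task Main()
        {
            const int concurrentUsers = 5;
            using var client = new HttpClient();

            var tasks = Enumerable.Range(1, concurrentUsers).Select(async user =>
            {
                var payload = new FormUrlEncodedContent(new[]
                {
                    new KeyValuePair<string, string>("username", $"testuser{user}"),
                    new KeyValuePair<string, string>("password", "secret")
                });

                var watch = Stopwatch.StartNew();
                // Hypothetical login endpoint of the system under test.
                var response = await client.PostAsync("https://example.com/login", payload);
                watch.Stop();
                return (user, millis: watch.ElapsedMilliseconds, status: response.StatusCode);
            });

            foreach (var result in await Task.WhenAll(tasks))
            {
                Console.WriteLine($"User {result.user}: {result.status} in {result.millis} ms");
            }
        }
    }

The recorded timings are then compared against the client’s expectations or the industry norms mentioned above.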

The second type of test we do is a scalability test. Here we gradually scale up the transactional load on a user interface’s functionality in a ramp fashion, measuring the response times at each increase in load. We run such a test on benchmarked hardware and identify the breaking point of the system – the load at which response times grow without bound or the application crashes. Evaluating the results of a scalability test is slightly more complex, as we have to factor in the design of the system and its dependency on hardware bandwidth.
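Along the same lines, a ramp-style scalability test can be sketched as below, again against a hypothetical endpoint: the concurrency level is doubled at each step, and the first step that produces failures or timeouts is treated as the breaking point.

    // Minimal sketch of a ramp-style scalability test: step up the load,
    // record the average response time per step, stop at the breaking point.
    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ScalabilityTest
    {
        static async Task Main()
        {
            using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(30) };

            // Ramp: 10, 20, 40, ... concurrent requests per step.
            for (int load = 10; load <= 320; load *= 2)
            {
                var timings = await Task.WhenAll(Enumerable.Range(0, load).Select(async _ =>
                {
                    var watch = Stopwatch.StartNew();
                    try
                    {
                        // Hypothetical search endpoint of the system under test.
                        var response = await client.GetAsync("https://example.com/search?q=shoes");
                        watch.Stop();
                        return response.IsSuccessStatusCode ? watch.ElapsedMilliseconds : -1;
                    }
                    catch (Exception) // timeouts or connection failures count as failures
                    {
                        return -1L;
                    }
                }));

                var failures = timings.Count(t => t < 0);
                var succeeded = timings.Where(t => t >= 0).ToArray();
                var average = succeeded.Length > 0 ? succeeded.Average() : double.NaN;
                Console.WriteLine($"Load {load}: avg {average:F0} ms, {failures} failures");

                if (failures > 0) break; // treat the first failing step as the breaking point
            }
        }
    }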

In both of the above cases, the results are fed back to the development team for profiling and for implementing performance tweaks to the system. There are several automation tools we use for setting up performance tests, the most common being Apache JMeter for web apps and Apple’s Instruments for performance and behavior analysis of iOS apps.

4. Test Automation
Another important QA activity we engage in is the maintenance of automated regression test suites for web apps of significant complexity. We write Selenium test scripts in a general-purpose language such as C# to perform the basic operations of the system – for example logging in, searching for products and adding them to a shopping cart, in the case of an ecommerce system (see the sketch below). An automated test suite complements unit tests; as most developers know, there are situations where it is not feasible to write unit tests, but it is very easy to “click through” and verify existing behavior via a Selenium web test. These automated regression tests are a living artifact, and need to be updated as the product requirements evolve. They quickly flag breaks in old functionality caused by new releases, and thus save the testers time when deciding whether to accept or reject a build. Writing test scripts also gives the QA testing team a chance to dig into simple code and keep their logical reasoning abilities sharp.
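The sketch below illustrates what such a script can look like, using the Selenium WebDriver and NUnit packages from C#. The site URL, element IDs and test account are hypothetical placeholders for an ecommerce system under test.

    // Minimal sketch of an automated regression check with Selenium WebDriver and NUnit.
    // The site URL, element locators and credentials are hypothetical placeholders.
    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestFixture]
    public class ShoppingCartSmokeTest
    {
        private IWebDriver _driver;

        [SetUp]
        public void StartBrowser() => _driver = new ChromeDriver();

        [Test]
        public void LoginSearchAndAddToCart()
        {
            // Log in with a hypothetical test account.
            _driver.Navigate().GoToUrl("https://shop.example.com/login");
            _driver.FindElement(By.Id("username")).SendKeys("qa.tester");
            _driver.FindElement(By.Id("password")).SendKeys("secret");
            _driver.FindElement(By.Id("login-button")).Click();

            // Search for a product and add the first result to the cart.
            _driver.FindElement(By.Id("search-box")).SendKeys("running shoes" + Keys.Enter);
            _driver.FindElement(By.CssSelector(".product-card .add-to-cart")).Click();

            // Verify the cart badge reflects the added item.
            var cartCount = _driver.FindElement(By.Id("cart-count")).Text;
            Assert.That(cartCount, Is.EqualTo("1"));
        }

        [TearDown]
        public void StopBrowser() => _driver.Quit();
    }

A handful of such scripts, run against every build, is usually enough to catch regressions in the core user journeys before a tester spends any manual effort on the release.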

The diagram below summarizes the QA process we follow at Calcey.

In our experience at Calcey, we find the “third eye” of the tester invaluable in producing quality, bug-free software (the first and second eyes being those of the client and the developer). The tester also acts as a bridge between the developer and the client, challenging both parties to achieve an optimal balance between usability and engineering cost.