NZTester

The Quarterly Magazine for the New Zealand Software Testing Community and Supporters ISSUE 6 FEB - APR 2014 COMPLIMENTARY

In this issue: Interview with Neil Gray, Equinox IT One Year On...with Garth Hamilton Using HP QC/ALM for UAT What Type of Tester Are You? WeTest Auckland Review Testing @ Fiserv Why Do We Test? Reporting v Debugging Planit Testing Survey 2013 Review Great Bugs, Pinheads & more…

NZTester Magazine Editor: Geoff Horne [email protected] [email protected] ph. 021 634 900 P O Box 48-018 Blockhouse Bay Auckland 0600 New Zealand www.nztester.co.nz

Advertising Enquiries: [email protected]

Disclaimer: Articles and advertisements contained in NZTester Magazine are published in good faith and although provided by people who are experts in their fields, NZTester makes no guarantees or representations of any kind concerning the accuracy or suitability of the information contained within or the suitability of products and services advertised for any and all specific applications and uses. All such information is provided “as is” and with specific disclaimer of any warranties of merchantability, fitness for purpose, title and/or non-infringement. The opinions and writings of all authors and contributors to NZTester are merely an expression of the author’s own thoughts, knowledge or information that they have gathered for publication. NZTester does not endorse such authors, does not necessarily agree with opinions and views expressed, nor represents that writings are accurate or suitable for any purpose whatsoever. As a reader of this magazine you disclaim and hold NZTester, its employees and agents and Geoff Horne, its owner, editor and publisher, harmless of all content contained in this magazine as to its warranty of merchantability, fitness for purpose, title and/or non-infringement. No part of this magazine may be reproduced in whole or in part without the express written permission of the publisher. © Copyright 2014 - NZ Tester Magazine, all rights reserved.


IN THIS ISSUE…

The Journal For New Zealand Test Professionals

Click on title...

5 Interview With Neil Gray, Performance Testing Practice Manager, Equinox
9 One Year On…. Garth Hamilton, Assurity Consulting
12 Testing @ Fiserv
15 Tester Types: What Type of Tester Are You? Nele Nikovic, TradeMe
19 Why Do We Test? Andrew Robins, Tait Communications
21 Pitfall Problems in Testing: Reporting versus Debugging. Richard Boustead, Statistics NZ
22 Mobile Testing: A WeTest Auckland Workshop
24 PlanIT Testing Index 2013 Review
26 Using HP QC/ALM to Manage UAT. Chris Williams, Telecom

REGULAR FEATURES

4 NZTester Magazine Announcements
NZTester & OZTester Magazine Back Issues
14 Testing Events
18 The Pinheads
28 Great Bugs We Have Known & Loved

Finally got it out on the wire so….welcome to the first issue of NZTester Magazine for 2014. We trust everyone managed to negotiate the Christmas, New Year and holiday periods intact!

In this issue we have new insights from Andrew Robins, Nele Nikovic and Richard Boustead, and welcome to our writers’ club for the first time Chris Williams of Telecom. We also continue our testing company feature interviews with Neil Gray, Performance Engineering Practice Manager at Equinox, and hear again from Garth Hamilton of Assurity as part of our One Year On feature.

We also revisit the PlanIT Test Survey that was reviewed in NZTester2, with an update taken from the 2013 survey. Some of the changes between the two make interesting reading; you can find this on page 24.

We expect this year to be busy on the conference front as well. We will be attending StarEast and ANZTB in May and TCANZ in October, while already lining up Ignite, StarWest and the inaugural Australasian Let’s Test conference in Sydney this coming September.

We’re also looking forward to seeing USTester Magazine leaving the tarmac this year, probably April time. While this is not yet a guarantee, we hope that we can get our stuff together in time to make it happen before StarEast in early May. Any US-based readers of NZTester are hereby officially encouraged to submit articles!

With three magazines now in the stable, we’ve decided to stagger releases from here on in, so while this one sneaks into February, OZTester will go out mid-March with USTester mid-April. The next NZTester will be in May, OZTester in June and so on. Hopefully this means we can keep on top of the production workload! Anyway, happy reading and as always, we do appreciate any feedback you may have around our publications.


NZTester Announcements

CORRESPONDENTS WANTED!

It’s jolly hard to keep up with everything going on in testing these days, especially with three magazines now on the go! We’re looking for voluntary correspondents to help us:

• Provide testing news from the region
• Solicit articles of interest
• Help run the local Meetups
• Write an article or two per year

We’re currently looking for correspondents in Wellington, Christchurch and any of the regional centres in NZ. Email me here. – Ed.

Calling all PROGRAMME TEST MANAGERS

There’s now a LinkedIn group for all test professionals operating at Programme Test Management level (or at least aspiring to it). Click on the title above.

Subscribe to the magazines free at www.nztester.co.nz


This issue’s interview is:

Neil Gray, Practice Director, Performance Intelligence, Equinox IT, Wellington

Our interview this issue is with Neil Gray of Equinox IT. Neil and I go way back to a company called Software 9000 in the mid-nineties, which was acquired by Rational Software and then later by IBM.

NZTester: Can you please describe Equinox IT?

Equinox IT delivers software development, consulting and training services. My part is the Performance Intelligence Practice within our software development area. We deliver software performance services to our clients focusing on software stability, hardware capacity and end user response time – in that order of priority.

Equinox IT has over 60 staff based in Wellington and Auckland, but we undertake engagements throughout New Zealand and internationally.

The thoughts and ideas below come from a performance testing perspective, although some may apply to other areas of testing.

NZTester: What products and services does Equinox IT offer?

Equinox IT has the capability to solve tough business problems. Software performance is a business problem hidden under layers of technical and architectural complexity. We refer to our practice as Performance Intelligence for good reason – we draw on both performance testing expertise and business intelligence techniques to troubleshoot performance issues. This allows us to visualise test metrics in a way where we can communicate more effectively with business and technical stakeholders, making it faster to find and fix problems.

The other difference I see is that team members expect to learn and improve every time we take on a new client problem. We will ask ‘how can we do this faster?’ or ‘how can we do the same for less cost to the client?’ We share that knowledge within the team and with our clients. The motivation to improve and share is instinctive.

NZTester: What do you think makes a Test Manager or Analyst come to work for Equinox IT?

It is the opportunity to develop as a professional and as a person by working in a team of smart people, on interesting projects, with appreciative clients. We talk about our sweet spot assignment being one where the client, the consultant and the company (Equinox IT) all benefit. We work hard to hit this sweet spot on as many assignments as possible.

NZTester: Where do you believe the challenges for NZ testing companies lie?

As is often the case with testing, many projects engage performance testing too late in the process. We become the ambulance at the bottom of the cliff, working under intense time pressures, trying to band-aid a problem that never would have happened if we had been involved earlier. Architecting performance into software from the start is clearly a smarter and much more cost-effective approach, and Equinox IT has a lot to contribute in this area.

Software architectures are also getting more complex, with numerous areas of integration that span a variety of infrastructures. Often there are multiple vendors involved, providing the solution, hosting, network and managed services. These vendors have contractual obligations and don’t want to be seen as the party responsible for problems. Getting to the bottom of complex performance issues is becoming harder, both because of technical complexity and relationship complexity. Equinox IT has had to become quite skilled at getting everyone in a room to work through the root cause of a complex performance issue.

NZTester: Where do you believe NZ’s approach to testing is going well?

Following a number of publicised government and private sector issues where functional, security or performance testing had not uncovered all the issues, there is an appetite for doing testing better at the moment. NZ companies are responding to this and it was pleasing to see a home-grown testing provider land a major long-term contract with a large government department recently. We see pockets of innovation everywhere and we are involved in some exciting innovation ourselves.

One thing that is going well for us is ‘unnatural collaboration’ – doing things with companies who on the surface wouldn’t naturally seem to be a good fit. I’m working on a project with one of our, on paper, competitors. We actually have different strengths in different areas and get on brilliantly. We also partner on performance testing with a company who in other areas of Equinox IT would be considered a competitor. But for performance testing the partnership makes sense.

NZTester: Where do you believe NZ’s approach could improve?

Early engagement for testing and performance testing activities. Projects also need to set realistic plans and budgets that accommodate the inherent uncertainty and risk involved in complex software projects. While there may be many agendas and many constraints, project teams, as well as the multiple third parties involved, need to work better towards a shared goal, ensuring open and honest communication.

NZTester: Do you believe that overall the standard of testing in NZ is improving?

I started a testing company in 1992 and since then there is certainly a lot more testing being done. The standard of testing is improving, for sure, but the improvements are offset by the increasing complexity, greater public scrutiny and ever-growing reliance on technology in our society. In other words the bar is higher, so testing does need to continuously improve to maintain the same standard of deliverable.

NZTester: Where do you believe the next initiatives in testing lie? What’s coming next? In NZ? Internationally?

I think there will be a new generation of tools to meet the technical and project demands – faster test script development, reduced test cycle times, deeper analysis of metrics. Do more for less, faster and sooner.

NZTester: Do you have a testing horror story to share?

I was recently talking with a CIO who said his organisation (referred to as the “client” below) had just completed a performance testing project which had some significant challenges.

Very early on in the project, the software performance testing provider had set very unrealistic expectations with the client as to what they could achieve in a limited time and assured the client all the problems would be fixed before go-live. This despite the fact the non-functional requirements were ambiguous, incomplete and in some cases untestable. However, some long time prior to engaging the performance testing provider, the programme management and architects had agreed on the schedule and cost of the performance testing, so that is what the performance testing provider signed up to. This was not a problem as the performance testing provider excelled in the performance testing tool they recommended to the client. Commercially, the application vendor had agreed to the non-functional requirements as they had done many of these projects before, but the client had recently engaged a new infrastructure provider who was accustomed to working strictly to the service level agreements, despite none of them referring to system stability, capacity or response times. The client was relaxed about the lack of detailed resource monitoring in place as they had rock-solid SLAs, so performance was now the infrastructure provider’s issue and not theirs.

The performance testing was late and five times the price estimated. The performance testing provider was delayed getting access to the environment due to the infrastructure provider’s processes. The test tool didn’t work with the rich user interface designed by the application vendor. The performance testing provider blamed the application vendor, citing ‘best practices’.

Test scripting had to be bodged and only worked on a single version of the application. The scripts hard-coded data values to speed up scripting. New releases took several weeks to rescript. The tests eventually were executed, but resources weren’t monitored at a degree of granularity that helped diagnose root cause, so the application vendor got blamed for poor performance.

Despite the workload model being wrong (they didn’t have time to do it properly because of all the other issues), the scripts only half working and the results hard to interpret and communicate in a timely manner, the system went live and there were no performance issues until about the fourth week, when the system failed. Now everyone blamed each other, but by then everyone involved had used the success as a case study on their website. Everyone in the process had an alibi, which was small comfort to the application users, whose jobs critically depended on the software to deliver services to their customers.

Of course I made all that up, but I wonder how many of you thought I was describing your project? All I am saying is this stuff is hard to do well and it involves a lot of people with different capabilities and commercial imperatives. Doing it well requires a pragmatic, collaborative prioritisation of risks and open, honest communication between all parties.

Editor’s comments: Thanks for making time to write for us Neil. I’m sure that if the old SQA Robot could do what today’s tools can do, we’d have been laughing all the way to….well, somewhere - Ed.

Neil Gray, with Richard Leeke (NZTester5 writer) pondering their next performance testing challenge.


Webinar: Manual Testing 2x Faster in Word & Excel
Register Now: http://www.autom8.co.nz/webinars/
Date: Tuesday, March 11th
Time: 10:30am - 11:30am NZDT

85% of all software tests are still being performed manually. Is there a way to significantly improve manual testing and be able to work 2x faster? This free webinar is designed to change the way you perform manual testing. "Manual Testing 2x Faster in Word & Excel" focuses on new ways to reduce your manual test effort. Join this webinar to learn how to:

• Build, execute and analyse software tests directly in Word and Excel
• Make UAT, Exploratory, SAP and CRM manual testing much easier
• Test websites and applications with greater flexibility
• Improve the efficiency of manual testing and test 2x faster
• Integrate with popular test management systems

Ask questions and take back valuable tips and tricks to use in your everyday manual testing. Space is limited so sign up today. Register Now: http://www.autom8.co.nz/webinars/ See you soon. - Aaron Athfield, Founder and Chief Manual Tester Guy nz.linkedin.com/pub/aaron-athfield/75/811/626 www.autom8.co.nz


One Year On... by Garth Hamilton, CEO Assurity Consulting Limited, Wellington

Garth was our industry interviewee in NZTester2 in January 2013. He gives us his thoughts on what has changed in testing since then - Ed.

It’s always nice at the start of a New Year to be able to reflect on a year of positives. For me, this is both personal and professional. I look back on 2013 as yet another year where both awareness and perceptions of professional testing and the role it plays have grown and matured. I think this has been good for everyone involved in the New Zealand IT industry.

As you probably know, when I wrote my last article a year ago, there was a lot of negative publicity around Novopay. I ended that article by reflecting with sadness on the habit of people to instinctively blame testing when production errors occur.

In the case of Novopay, the Ministerial Inquiry highlighted the positive role that testing played in a project that suffered systemic errors over a number of years. For many project sponsors, there remains a perception that testing is a cost and not a benefit to a project. Sometimes, it is only when a project fails publicly that the realities of a project failure are hammered home to many executives. Novopay will go down as one of those projects. Good will come out of it. Bad habits will be addressed and everyone will benefit.

Ultimately, our role as testers is to help project teams to build quality in and not to try to test quality in. Changing how projects are run and the processes that are followed – from requirements through to development and into release – became our company focus two years ago. Last year, we supported that focus by launching a new Education service offering training courses run by our own practitioners.

We offer courses for business analysts, developers, project owners, business users and, of course, testers. Each course is designed to help those people learn how to improve the software development process to achieve better outcomes.

The two most recent courses we’ve released are Agile Testing and Lean Testing, two different but very complementary methods to improve your approach to building quality into products.

I see Lean Testing approaches being more widely adopted in future because they can add value regardless of whether the project approach follows an Agile or waterfall methodology. Lean Testing focuses on reducing waste in the testing process through targeted test automation that complements investigative testing. The demand to lower costs will see an increased adoption of both Lean and Agile Testing approaches over the next couple of years. Adopting better approaches is extremely important to the New Zealand testing community.

As I look back on the year, the big trend that worries me is the growing influx of offshore companies looking at New Zealand as a destination for growth. Testing is seen by these companies as an easy target market. Ultimately, the aim of these companies is to move testing work offshore on the basis that it can be done cheaper. It commoditises testing.

I strongly disagree with the proposition that testing is a cost and can be delivered cheaper offshore.

Testing is a value-add service and in almost every project for our clients we can demonstrate that. We help companies build quality in by working closely with the business, the business analysts, the developers and operations staff. By introducing better and leaner approaches, we can do more to improve the economics of projects than simply charging a lower hourly rate.

Why is this important? Besides the obvious benefits to the projects we work on, there are wider economic and social benefits to consider. As the father of three university-age children, I believe we will have failed both the current, as well as the next generation, if we do not build a large, competitive IT industry based here in New Zealand.

So while we still need more skilled migrants in testing, we also need to keep the work here in New Zealand. And to do that, we need to focus on enlightening even more minds, doing things even better and proving the value-add of our services. That’s Assurity’s aim in 2014. And in a year’s time this will be the positive I would most like to reflect upon.

Editor’s comments:

Thanks Garth, I certainly agree with your sentiments on commoditising testing. The market is seeing a number of organisations now merely ’flicking testing over the fence’ and not necessarily due to cost factors. The greatest appeal seems to be to “make it someone else’s problem” and choose someone big, then ‘if it fails, we can sue’. Unfortunately this approach fails to recognise that losing control of testing risks the integrity of the whole SDLC, thus greatly increasing the chances of another publicly visible failure. Some of the other misconceptions seem to be, yes, the perceived cost reductions; that vendor resources are more experienced and skilled than those that can be acquired internally; the expectation of always getting the best people for the job as opposed to the vendor filling from ‘the bench’ first and by skills/experience second; the expectation that a vendor won’t move their senior personnel around the client base; and customers who see their role as solely to ‘police the contract’. Potential customers might also watch out for the ‘turnover factor’ as some vendors work their permanent staff as project contractors - the point being that if someone wanted to be a project contractor, that person would be a project contractor and not a permanent employee - Ed.

Wanna Get Published? Our formula for selecting articles for publishing: Good + Relevant = We’ll Print It (well, digitally-speaking anyway). Good = one or more of: thought-provoking, well-articulated, challenging, experience-based, technical skill-based, different perspective to mainstream, unique…. Relevant = one or more of: emerging trends, new technology/methodology, controversial (within reason), beyond the basics (eg. testing is good, defects are bad)….


Testing @ Fiserv
by NZTester Staff Writer

The Friday before Auckland Anniversary Weekend saw YT (yours truly, for the uninitiated) make his way downtown to a rather nondescript albeit relatively new building in Britomart which houses the financial services technology company Fiserv. I was surprised to find 270-odd staff members spread across two floors of modern, inspiring surroundings with much activity as folks moved around the workspace at speed – the proverbial hive of activity!

I was meeting with Brian Brewer and Michael Matson, both Fiserv QA Managers across the Product Development and Professional Services groups respectively. Brian has been with the company for 6 months and Michael a year, and even in their respectively short tours of duty, both have seen the company continue to move forward with its software development and associated quality assurance initiatives.

Fiserv in Auckland began life as Mobile Commerce back in 2002 and morphed into its own abbreviation, M-Com, a few years later. Acquired by US-based financial services technology company Fiserv in 2010, M-Com became a main mobile development arm within Fiserv based on its work on the various popular mobile platforms including iOS and Android. Most of the banks that use the mobile banking and payment solutions from Fiserv are based in the US; however, the company has a worldwide client base and does number Australian/New Zealand companies amongst its clients.

As per above, there are two distinct work streams related to the development and delivery of mobile solutions within Fiserv: i) development of the core products including those on a SaaS platform (Product Development Team), and ii) repackaging, configuration and customisation of the core products to meet specific client requirements (Professional Services Team). The Product Development Team adopted an Agile approach to software development some 18 months ago, and while Professional Services still follows a more mixed Waterfall/Agile approach, currently across both groups there are some 20+ project/scrum teams, each comprising 5-6 developers and 2-4 testers and operating on two-week sprints.

Through its Mobiliti mobile banking and payment platform, which is available in both licensed and ASP versions, Fiserv currently supports mobile financial services for nearly 1,800 financial institutions and millions of consumers in North America, Asia, the Pacific, the Middle East and Europe.

In addition to Brian and Michael, Fiserv engages Test Leads who sit orthogonally across multiple scrum teams, ensuring that all sprints are populated with appropriate testing personnel and that the various testing standards and processes are adhered to. The full testing contingent at Fiserv is now 60-strong and both managers are actively hiring senior testing professionals to cater for the increasing workloads across all teams. It is indeed interesting to note that of 270 employees at the Fiserv Auckland location, 60 are testers (22%) – and that number is growing!

Fiserv operates a predominantly permanent staffing model with very few contractors. The main driver is not so much cost but the cultural integrity and retention of intellectual property. In the testing space, senior staff members are encouraged to coach and mentor the more junior team members, thus providing a definitive education programme. This practice extends to the development of training material and the running of training courses, and whilst attendees are obviously the main beneficiaries of this exercise, seniors are also developing and deploying leadership skills along the way; therefore these are enhanced at all levels. Testers are also asked to achieve ISTQB Foundation Certification or above.


At present, most testing is performed on a scripted basis; however, both Brian and Michael are quick to point out that they are using more exploratory testing techniques and looking to see where these approaches will improve testing outcomes. In addition, much of the development is performed using Behaviour Driven Development (BDD) techniques, so this combination will be quite complementary.

Fiserv is also a pragmatic user of test tools where benefits can be easily achieved. Microsoft TFS 2013 is being deployed for developing and maintaining test libraries, requirements, scripts and defects, and the company finds this toolset ideal for its culture and environment. Fiserv has been a long-time adopter of test automation for regression testing, with Selenium WebDriver the primary automation platform, extended by Appium and Experitest for mobile automation, and both Brian and Michael are always on the lookout for any tool that can enhance testing productivity. Brian’s team is currently targeting around 40% of regression testing to be automated in the Product space, with much of it being developed within each of the sprints and built upon as each future sprint executes.

Taking this approach to its logical conclusion will see a complete set of automated test sets available by product for regression testing. It is hoped that this in turn will reduce test cycle time and consequently overall time to market. The ultimate aim, as described by Brian and Michael, is to provide software releases with zero defects. At present, major releases are performed every six months and it is hoped that a quarterly target can be achieved within the next few iterations. While it might appear to be a lofty one for any software development company to achieve, there is no doubt that the possibility is becoming more and more realistic as time and technology moves on. Whether it’s 100% achievable or not we’ll have to see; however, my time at Fiserv indicates to me that the company is certainly moving in the right direction.
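Fiserv’s actual test suites are not public, so the following is only a minimal sketch of the kind of Selenium WebDriver regression check described above, written in Python; the URL, element locators and credentials are invented for illustration, and the same WebDriver pattern is what Appium extends to native mobile apps.

```python
# Minimal illustrative sketch - not Fiserv's code. The URL, element IDs and
# credentials below are assumptions made up for the example.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def test_login_shows_account_summary():
    driver = webdriver.Chrome()  # any WebDriver-backed browser will do
    try:
        driver.get("https://banking.example.com/login")  # hypothetical test URL
        driver.find_element(By.ID, "username").send_keys("regression-user")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()

        # An explicit wait keeps the check stable when test environments are slow.
        summary = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.CSS_SELECTOR, ".account-summary"))
        )
        assert "Available balance" in summary.text
    finally:
        driver.quit()
```

Small checks like this, added sprint by sprint and run on every build, are the sort of thing that accumulates into the per-product regression packs the two managers describe.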

In summary, Fiserv has bitten the bullet in a number of areas by moving to more outcome-based software development and testing approaches and has definitive plans in place to complete the picture. This should read as good news for its client base, as users can look forward to an even-better, ever-increasing quality of deliverable.

From back: Abhimanyu Duhan, Sarthak Gupta; Next Row: Steven Wang, Ben Clapp, Piyush Dungrani, Ali Al-Sarraf, Peter-Kyle Jackson, Akshay Sud, Dhermesh Patel, Yassir Malik, Uldis Ziverts, Tobias Francis, Nisha Angra, Michael Matson, Helan Amota, Divya Vasudevan; Next Row: Aeryan Silpedes, Hafiz Vegdani, Feroze Jaffar, Anurag Dial, Eldhose Varghese, Sumit Poddar, Nelson Birones, Reuben Pereira, Vivian Gan, Bashura DeAlwis, Philip Chan, Svitlana Mykhailova, Maria Shcherbakova, Ocean Wu; Next Row: Gurumoorthy Paulraj, Vivek Langer, Jason Sun, Achaiah Burra, Anna Pandagani, Nichola Ransley, Avanthi Jayamanna, Lucy Lui; Front Row: Jigger Fantonial, Brian Brewer, Nicole Wilcox, Ashley-Kate Whelan, Shilpi Mathur, Esmee Suson, Rakshika Mahala


Testing Events

If you have an event you’d like to promote on this page, email [email protected]


Tester Types: What Type of Tester Are You?
By Nele Nikovic

So, what type of tester are you? Well, I find myself asking that same question! Recently, I attended a WeTest Meetup in Auckland, which I recommend to anyone wanting to know more about how we test in New Zealand. It is structured in a way that allows participants to talk and share their experiences. It may not mean that you will completely agree or completely disagree with what is said in the Meetup, but you will certainly learn that how you test in your company has an alternative somewhere else. Hopefully, it will make you think, doubt and question current practices.

I have always tried my best to be very open-minded and not be ‘by the book’, as each project is different, each team has a different skill-set when intercompared, and each company has its own culture with foundations that were either initiated at the start of its journey (startup), or built and maintained over time, or both. It’s an approach of mine that I can try and explain by going slightly off topic, philosophically, with a quote from a very famous British social critic, Bertrand Russell, who says: “The fundamental difference between the liberal and the illiberal outlook is that the former regards all questions as open to discussion and all opinions as open to a greater or lesser measure of doubt, while the latter holds in advance that certain opinions are absolutely unquestionable, and that no argument against them must be allowed to be heard.” [1]

This is not just factual in our test industry but it is evidently applicable to many things we do in life. Even though the essence of, e.g., test processes may be the same in two similar companies / teams, you are still not guaranteed that the same solution can be applied to both companies / teams. If need be that you try and experiment or pilot-test a new tool, process, team structure, etc., so be it. At the end, isn’t it better to build something empirically to suit your needs and achieve the common goal, than to be stubborn and follow something that is outdated and produces mediocre results at best? A tough question, I believe, but also a very hard one to prove and sell.

I will try to simplify the above paragraph with my personal experience of various physical concepts I came across in my testing career. Though my experience in testing is not vast, the concepts observed are large enough to grab my attention and write about them in the hope it is thought-provoking material.

To correlate the concepts mentioned: in the Meetups, the topic of Test Job Titles has been discussed. And not only in a way of reaffirming what a particular job title stands for; it also created confusion among many as to what a tester’s responsibilities are for certain roles. So, what is the difference between a Test Analyst, QA Analyst and Test Engineer? Could it be that the terms are completely arbitrary, depending on which recruitment agency you stumble upon? I would really like to hear recruiters approach this topic and perhaps standardise the roles, as it would help many of us. However, it’s likely that the employers are the ones driving this naming convention. Google Inc. made an interesting comparison of how they have classified their test roles. See http://googletesting.blogspot.co.nz/2007/03/difference-between-qa-qc-and-test.html for more on that.

So, what are the actual, official definitions?

Test Analyst [2]: Identifies items to be evaluated by the test effort, defines the appropriate tests required and any associated Test Data, gathers and manages this Test Data and evaluates the outcome of each test cycle.

QA Analyst [3]: Is responsible for maintaining software quality within an organization. Such individuals develop and use stringent testing methods. These professionals are focused on providing the confidence that quality requirements will be fulfilled. [4]

Software / Test Engineer [5][6]: Designs and develops high quality test plans and test cases. The Software Engineer works closely with Development and Program Management teams to provide feedback on design (product and technical) and user scenarios. The Test Engineer drives improvements in unit testing coverage, develops test suites, expands the automated testing harness, automates end-to-end tests, and validates metrics and reporting accuracy.

In order not to enlarge the confusion aspect, I have deliberately excluded Software QC (Quality Control), as QC differs from QA, in my opinion, to say the least. QA testing is process orientated, as opposed to QC which is predominantly product orientated. QC may appear to be correlated to the description of the Test Analyst role, as mentioned above, but its prime focus is to verify that the product does what it is supposed to, unlike QA which ensures the product meets the needs of customers. [7]

Now, the above descriptions are mostly the official definitions we tend to use widely but, in my opinion, not wisely. The testing industry is constantly evolving, and with the IT industry being as young and innovative as it is, can we really rely on something that was written 5+ years ago? I believe it is a must that we update our skillsets constantly, as the market drives this change more often than we think.

The job titles used for the people you usually report to, QA / Test Leads and QA / Test Managers, are somewhat easier to explain. Test Managers have dealt with Test Strategies, wider team focus, team budget and reviews. Test Leads also have their share of these responsibilities but are more hands-on with testing. I am certain there is a lot more to add here but I do not feel as competent in trying to describe and distinguish these two in more detail. I would, however, be interested to hear our Editor’s (Geoff Horne) opinion.

Once you have a title attached to your name, the next thing you will stumble upon is the testing terminology, which has a great tendency to differ from what has been described in the ISTQB / ISEB course. What struck me first was the name used for Test Scripts, which by definition is “A document specifying a sequence of actions for the execution of a test. Also known as manual test script” (ISTQB).

If I could form a hierarchy, I hope it would not be too incorrect to say that it is the [Master] Test Plan first, which consists of many Test Suites / Levels that have numerous Test Scripts, with Test Cases within (Test Plan -> Test Suite / Level -> Test Script -> Test Case). By definition, a Test Plan is a record of the test planning process [8], which, if serving as a Master Plan, is done during the strategy meeting(s) and usually by Test Leads / Managers. This is especially evident with large companies, mainly those using Waterfall as a methodology standard.

Trade Me, my current employer, uses a Test Plan simply as a Word or Excel document for test coverage of a particular area “infected”. Perhaps a similar path has also been used with HP QC software (Test Plan -> Test Cases). There is no further propagation of work but the test execution, which follows the moment a fix or a so-called branch is in RTT state (Ready to Test). Recently, we at Trade Me have introduced Test Case Management software which uses Test Suite as the highest hierarchical set of Test Cases [9]. There is no mention of Test Script anywhere, whereas that is the term I was familiar with as a junior tester (prior to Trade Me). For those with automation experience, Test Script could mean something slightly similar to what you have used in the past, but then again, it is still distant (used in e.g. TestComplete) [10].

Regardless of the methodology used (e.g. Waterfall, Kanban, Agile, etc.), I found that companies use the terms they are comfortable with. I am in favour of such flexibility; even though it creates confusion initially, it is only a matter of time before one gets used to it. Perhaps it is the Contractors who are best to speak to about this as they are the ones who constantly face synonymous terminologies.

To conclude, the testing industry is improving its reputability by continually expanding on the test types, becoming a mainstream activity in the SDLC. It continues to spread its tentacles, especially with the resurgence of Agile. The dev-test ratio is becoming smaller and “fairer” (from e.g. 4:1, to an ideal 2.7:1, and even down to 2:1), project segments are a lot more flexible than before (estimation, design input, etc.), our feedback is valued more than ever, we play a vital role in production release, and the job of a tester is therefore gaining the credibility that it deserves. Compared to a decade ago, many independent software testing services are emerging, filling the gaps and offering sustainability for corporations of various sizes. The various terminologies and job titles are just a small example of how large our industry is. To a non-IT individual, it is no longer possible to explain in a few words what it is that we do. Regardless, it is this flexibility and general open-mindedness to change processes, teams, titles, etc., that is helping the growth of our domain.

Nebojsa Nele Nikovic is a Test Analyst at Trade Me in Auckland. He can be contacted at [email protected]

Footnotes:
1. Freedom and the Colleges (article in The American Mercury, 1940)
2. http://sce.uhcl.edu/helm/rationalunifiedprocess/process/workers/wk_tstanl.htm
3. http://www.pcmag.com/encyclopedia/term/50006/qa-analyst
4. http://www.rbcs-us.com/media/glossary/
5. http://www.microsoft-careers.com/job/Hyderabad-Software-Test-Engineer-%28SDET%29II-Job/27536400/
6. http://jobsearchtech.about.com/od/careersintechnology/p/SWTest.htm
7. http://googletesting.blogspot.co.nz/2007/03/difference-between-qa-qc-and-test.html
8. http://www.istqb.org/downloads/finish/20/101.html
9. http://www.gurock.com/testrail/
10. http://smartbear.com/products/qa-tools/automated-testing-tools/test-case-automation-with-scripts/

Editor’s comments:

Thanks Nele, to take you up on your opinion invite, I think you raise some interesting points. While this may be, in the eyes of some folk, a Priority 3 testing subject and maybe not one to lose any sleep over, any issue around terminology which has the potential for miscommunication deserves a mention and perhaps discussion. Regarding titles: I’ve tended not to put so much emphasis on a title; rather, I’m more interested in what the person actually does. A Test Analyst designation can mean anything from a UAT Test Analyst with a few months’ experience testing on a package implementation to a Systems Test Analyst with countless years testing at infrastructure level. Yes, we could put clarifying terms, eg. “senior”, “applications”, “technology”, around the words; however these can make the situation even more confusing. Other ‘modern’ practices such as putting a pseudo-job description on a business card IMHO only cheapen the role and perhaps lead to recipients wondering why. I’m not sure whether there is a useful solution to this - only that in most scenarios, we have to make sure that all parties are clear on precisely what the expectations are around any task, position etc. I would hope that this approach would win through in most situations, hence the need for glossaries and bibliographies in all documentation and certainly for probing questions in verbal situations. - Ed.

Have you seen the interview in NZTester?

OMG, please tell me it’s not you!

www.parasoft.com

The Pinheads Early one morning in the PMO.... There’s no way we can finish testing on time!

Can I hire some more testers? No! You just need to take more ownership.

OK...can we reduce the number of modules we test then?

No!

So.... I’m taking ownership of the failure?

You catch on fast.


Why Do We Test? By Andrew Robins

Individuals will be able to point to a range of reasons and motivations (real and imagined) for why they test. For myself this list of reasons would include:

• Because I affiliate with a community that is built around testing
• I enjoy testing, puzzles and other types of intellectual challenge
• I get plenty of variety in my work as a tester
• I work with excellent people, who I respect. I am committed to these people and I want to continue to work with them
• Etc.

None of this actually matters though, except to me as an individual. Taking a wider view, the obvious answer to the question “Why do we test?” is “We test because people pay us to test”.

Is this a useful answer to the question? How about if we restate the answer more fully: “We test because people who need information that is hard to get in other ways pay us to provide a professional testing service that gives them good quality information.”

This is the statement I want to examine further, because this need is the reason the testing profession exists in its current form.

We test because people need information

The information that testers provide should be a vital part of the picture that is being painted for the decision makers on any project. As testers we need to be constantly asking ourselves the question “What is the most useful piece of information that I could be presenting, right now?”, and be chasing after that data.

If your testing is not providing information that people need, then why are you doing it?

Information is hard to get in other ways

Testing is expensive, so if people can get the information that you are providing in other ways, then they will. So if a task is easy, automate it and get your testers doing the hard, interesting stuff.

We are paid to provide a professional service

The word professional implies professional standards and a professional code of ethics. As a member of the Association for Software Testing, I have signed up to a professional code of ethics which I adhere to. I would encourage other testers to do the same.

Going one step further, we should act as if there were accepted professional standards for the work that we do (even though there are not), and I would argue that these standards should be of much higher quality than most industry sectors currently accept.

The testing service ethic

I embrace a service ethic personally, and I encourage the testers in my teams to do the same. We are here to solve problems for people. Sometimes that means identifying problems that people don’t know that they have.

Embracing a service ethic means that testers will more often than not be seen as part of the solution to the problems that they have identified, rather than being identified with the problem itself.

Testing provides good quality information

We should always respect the quality of the information that we provide. Sometimes project stakeholders will request information that, when seen out of context, could be misleading or misused. This can be a thorny issue for testers, and we will not always be in a position to influence outcomes, or the behaviour of others. But one thing that we can do is make sure that the information that we provide is as complete as it needs to be to meet the criteria of being “good quality”.

So if we get asked for potentially misleading data, we make sure we provide it in such a way that a full and accurate picture gets painted. We are responsible for the quality of the information that we provide. That is part of why we get paid.

The big “why”

We testers do an important job. The products and solutions and services that we work on are vital parts of modern life, and in some cases lives can depend on them. We have a responsibility to do our jobs well and to build a profession that earns and keeps the respect of those who depend upon it. That is another reason why we test.

Andrew Robins is the Test Manager at Tait Radio Communication in Christchurch. He can be reached at [email protected]

Pitfall Problems In Testing: Reporting versus Debugging by Richard Boustead

It’s a quiet Wednesday morning when Thomas the Tester logs a bug appearing in a payments program and fires it off to Dan the Developer. After about an hour, Dan fires back an e-mail:

Hey Tom. Good spotting on that issue with the Payment. However, I can’t quite nail it down on my end. Could I get you to check if it appears when you go through the Account screen as well? - Thanks, Dan.

Thomas is on good terms with Dan, so he has no problem doing the check. He dutifully reruns the test through the Account screen, and then for good measure does it through another component.

Hi Dan. Yep, it’s in the Account screen and the New Payment as well. - Thomas.

It’s getting late in the afternoon when Thomas gets a reply.

Hi Tom. Can you see if it’s showing up in Edit Payment as well? - Thanks, Dan.

Thomas does so, and after the test is re-run, he packs up and rushes to catch the bus home. As he’s sitting there reflecting on his day, he initially feels satisfied with the work he’s done, until it slowly dawns on him that he only performed five or six tests out of the suite of twenty he was meant to do.

This is a common scenario in many testing environments. Testing is discovering and flagging new defects, and debugging is figuring out a defect you already know about. Both are worthwhile activities, but only one is contributing to testing the software. The other is a result of testing.

As Testers, our job is to report issues with the software. Thomas has been doing this, correct? Well, no. Thomas in our example located a defect, and then ran the same tests several times to rediscover the same defect. The developer’s repeated requests for more details were not really an appropriate use of the Tester’s time and effort. They were debugging tasks, part of the developer’s sphere of responsibility.

So what can testers do in this situation? It just seems right to help out, doesn’t it? We’re all on the same side after all.

• Clear steps: Have labelled, clear steps to locate the defect. Give the developer as much information as you can in your initial report to allow them to locate the area in one go.
• Clear roles: At the beginning of a testing cycle, or the start of the project, make it clear that a developer needs to debug and a tester needs to test. Sometimes a reminder is needed.
• Don’t be a roadblock: Sometimes attempting to avoid the problem can be a problem in itself. In the end, we are on the same side, so infighting between armies will cause more harm than good. For the big bugs, get whatever is needed, done.

Richard Boustead is a Test Analyst at Statistics New Zealand in Wellington. He can be contacted at [email protected]

Mobile Testing: A WeTest Auckland Workshop with Morris Nye, Pushpay. Review by NZTester Staff Writer

When I first saw an iPhone in early 2008 I wondered whether this cool-looking piece of equipment would be the start of something new. I had used Symbian-based Nokia smartphones up until that time and when I next upgraded, I still went with another Nokia, considering the iPhone, at version 2, too immature a product for me. Yes, there was a small bunch of applications (now known simply as apps), however nothing that I would consider dropping my trusty Nokia for.

Fast forward to 2014, only 6 years later, and I’m onto my third iPhone, having finally kissed goodbye to my Nokias some years earlier. In that time, Symbian has all but disappeared and gone the same way as Palm Pilots and, more recently, BlackBerrys. In their place, a new player entered the fray in the form of Android, which now occupies space on every smartphone other than the iPhone and a few Windows-based devices.

Most software developers now develop their product(s) predominantly on these two platforms, albeit across a multitude of different operating system and browser versions, thus creating a nightmare for us humble testers. So it was with some intrigue that I decided to drop in to the latest WeTest meetup in Auckland where Morris Nye, the Principal Software Tester at Pushpay, presented an overview of testing in the mobile space.

Pushpay is an Auckland-based startup that has been in existence for three years, and currently operates in New Zealand, Australia and the U.S.A. Pushpay is a primarily mobile platform that allows users to make payments via smartphone to pre-selected organisations and businesses in under 10 seconds. Its primary selling features are its security, speed, simplicity and convenience. In order to maintain these key points, the app must be quick, reliable and easy to use; the achievement of which is no mean feat when one considers the tiny platform(s) it is deployed upon.

Morris led us through the different testing approaches, which, as he mentioned, follow the standard testing processes we all know and love. However, there were a few raised eyebrows in the room when he mentioned that Pushpay still has to support Android v2.2 (a mobile OS from 2010 that is already considered legacy, akin to Windows 95). In addition, Morris stated that testing in the mobile space is virtually impossible without some form of automation. With all the different combinations of operating systems etc., the ability to write automated tests that work on a number of devices has to be a huge time saver, any compatibility issues notwithstanding.

The Meetup was run along the lines of LAWST (the Los Altos Workshop on Software Testing), a peer conference format which appears to be quite a favoured approach these days, so consequently the queue of queries around automation was quite steady once the official presentation was over. The last time I went to one of these, someone swiped my Interrupt card (can’t fathom why).


Pushpay uses open source automation tools, primarily Calabash, and Morris has found that for it to be of the greatest benefit, the tool has to execute tests on the actual device as opposed to a simulator running on a desktop (which risks integrity and missed compatibility issues). Other challenges Morris has encountered with automation on a mobile device arise when the operating system launches a popup window, or even just the soft keyboard. These tend to cause havoc for automation as they run as a separate process, often interrupting the connection to the app under test.
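Calabash itself is driven from Cucumber feature files with Ruby step definitions, which are not reproduced here. Purely as a hedged Python analogue of the two points above (executing against a real attached handset rather than a desktop simulator, and surviving an OS popup that interrupts the session), here is a sketch using the Appium Python client of that era; the device serial, package name and locators are assumptions invented for the example.

```python
# Illustrative sketch only - not Pushpay's code, and not Calabash (whose tests
# are Cucumber/Ruby). Device serial, app package and locators are assumptions.
from appium import webdriver
from selenium.common.exceptions import NoSuchElementException

caps = {
    "platformName": "Android",
    "deviceName": "Android handset",
    "udid": "4df1c2260b3d23f1",            # hypothetical serial from `adb devices`, i.e. a real device, not an emulator
    "appPackage": "com.example.payments",  # hypothetical app under test
    "appActivity": ".MainActivity",
}

driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)
try:
    # System dialogs and the soft keyboard run as separate processes and can
    # steal focus mid-test, so dismiss a stray dialog defensively if present.
    try:
        driver.find_element_by_id("android:id/button1").click()
    except NoSuchElementException:
        pass

    driver.find_element_by_accessibility_id("Pay now").click()
    assert "Payment sent" in driver.page_source
finally:
    driver.quit()
```

Run against several attached handsets in turn, the same script gives a rough sense of how one automated check can be stretched across the device and OS spread Morris describes.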

As more and more companies embark upon mobile development projects, the need for innovation, both around automated tools and otherwise, will only grow. Consider the following statistics (courtesy of a survey conducted by SmartBear Software across 1,000+ developers, testers and consumers in October through December 2013):

• Nearly 30% of developers building apps are building mobile apps.
• Over 50% of respondents who are currently building mobile apps have entered the space within the past two years.
• 84% of those who are not currently building mobile apps plan to enter the space in the near future.
• 30% of companies plan to develop 5-20+ new apps in 2014.
• 40% of consumers will download 5-20+ apps in a single month.
• In addition to building and deploying many new applications, 51% of respondents also plan to release daily, weekly, or monthly updates for existing mobile apps in the same 12 month timeframe.
• Nearly 50% of consumers would delete a mobile app if they encountered a single bug.
• 61% of respondents are currently using 3 or more different quality processes when building mobile apps. 33% of respondents are using as many as 4-6 quality processes. The most common quality processes include manual testing, automated testing, API testing and load testing.
• Producing quality products is the #1 greatest challenge for succeeding in the mobile space.

In summary, thanks to Morris for an interesting presentation. There is no doubt in my mind that this area of development will continue to grow substantially and all testers would do well to climb on board the wagon as soon as practical, especially those with automation skills. I think we’ll also see new tools from some of the major automation players (TestComplete from SmartBear is already supported on Android with iOS in beta test) to add to the raft of offerings from more established specialist vendors eg. SOASTA and Perfecto Mobile etc. The WeTest Auckland and Wellington groups run monthly meetups in each centre. See the Events page for more details.


Planit Testing Index 2013 Review
By NZTester Staff Writer

As a precursor to reading this article, I suggest a visit to www.planit.net.au to download this year’s Planit Testing Index Executive Summary - Ed.

I didn’t make it to the Planit Testing Index roadshow this time around. Not that I didn’t want to; events conspired against me and I never seemed to be in a place long enough to get myself along. However, as always, Planit has made the documentation available to all, so I have relished the opportunity of working my way through and determining how things (may) have changed from the 2012 survey.

I’ve noticed that in most specialist industry surveys conducted on an annual basis, the percentages tend to change no more than plus or minus 5 percentage points from year to year. In other words things stay pretty much the same; slightly up one year, slightly down the next et al. So the first statistical difference that caught my attention was that the number of responses from New Zealand was up from 9% in 2012 to 21%. In fact there were more respondents from New Zealand than New South Wales, so it looks like the Kiwis have taken the encouragement made in NZTester2 to heart and given this year’s survey a fair crack of the whip.

Another number that caught my eye, although only a 3% increase on last year, was the number of respondents rating Testing as a “Critical Element in Producing Reliable Software”, up from 48% in 2012 to 51%, ie. just over half. I shouldn’t cheer too much though, as I would have thought that this one was a no-brainer, warranting a much higher assessment than just over half. Does this mean that 49% of respondents do not see Testing as a “Critical Element in Producing Reliable Software”? The mind boggles….as mine is so apt to do.

Last year’s eyebrow-raiser of Desired (62%) versus Actual (18%) ratings for Testing starting during the Requirements phase was similar this year: 60% and 15% respectively (I now have both eyebrows raised). However, when reading on, we see that the main reason for project failure is still “Poor or Changing Business Requirements” at a whopping 70% (up from 68% last year, which in turn was up 9% on the year before), so maybe not such a surprise after all.

So onto the whole Requirements matter; has anything improved on last year? If you’ve read NZTester2, you will have noted that 28% of respondents reported feeling positive about their company’s Requirements Definitions, with another 44% feeling OK (= 28% not OK at all). A huge 97% conceded that their company could benefit from improving Requirements Definition and 67% believed it was suboptimal at the time. Unfortunately 2013 has seen no improvement and things have indeed worsened: just 23% positive, 39% OK and 38% for that other category – a 10% increase on 2012! In addition, 99% of respondents now feel that their company could benefit from improving requirements definition and 71% believe it suboptimal. So there you have it….disappointing to say the least. Testers: we must yell louder!


The Project Methodologies category yielded an interesting statistic; last year respondents reported a breakdown of Waterfall 36%, Agile 29% and VModel 24% (ignoring the Other or No Methodology responses). We queried at the time whether respondents fully understood the differences between Waterfall and V-Model and wondered whether the categories had been used interchangeably. This year the breakdown is Waterfall 33%, Agile 33% and V-Model 22% showing a small but distinct increase for Agile (4%). It should be remembered that the “Agile” umbrella includes a number of different methods eg. XP, SCRUM, RAD, BDD/TDD/FDD/TLADD etc although I think most folk tend to think of development Sprints as synonymous with the term “Agile”.

Project Outcomes showed a significant increase for the "On Time/On Budget" category, up to 52% from 39% last year; good news! The breakdown of this category by project method also makes interesting reading: Agile 52%, Waterfall 49% and V-Model 55% (same caveat on the last two as per above). It is encouraging to see the increase here as it reverses the downward trend from the last three surveys.

The Testing Investment section still shows expected increases in spending around Structured Test Processes, Testing Tools and Testing Training as the three main areas for 2014, although it's interesting to note that plans to Engage Contract Testing Professionals rose from a 19% increase last year to 29%, with only 25% expecting a decrease; good news for the contract market! Utilisation of Performance Testing stayed very much the same as for 2012, which is a little surprising given the increased awareness of Performance Testing and Engineering. The specialist Performance Testing companies that I have contact with report that they are rushed off their feet, especially in the telecommunications and banking sectors where emphasis on designing and developing IT systems for performance, as opposed to purely for functionality, appears to be growing.

On the Software Testing Tools front, HP continues to rule across all three categories: Test Management, Test Automation and Performance Testing. Interestingly enough though, in each category the next most popular tool is not from another generalist vendor but a specialist tool, ie. Atlassian Jira, Selenium and Apache JMeter respectively. Other traditional vendors eg. Rational (IBM), Microsoft, Tricentis and SmartBear notch single figure usages only, with a few eg. MicroFocus (Silk), Telerik and Fitnesse not rating a mention. I also wonder whether Jira users might also be including NZ's own EnterpriseTester in their figures.

Finally, I always find the Project Conditions section an amusing one. Last year I homed in on Project Estimation, as I've always found that when estimates prove to be too light, it's testing that cops it at the southern end of the project lifecycle. In other words, the architects, business analysts and developers have spent all the money, so sorry, testing has to be done on and with a postage stamp! In 2012, those who rated Estimation for Budget and Timeline as Poor or Very Poor were 27% and 31% respectively. This year it's up to 33% and 39%, so oops, no improvement there! If we add in Realistic Expectations and our old favourite, Requirements Definition (neither of which I mentioned last year) at 34% each for 2012 and 39% and 38% respectively for 2013, this makes these four categories, which out of the 10 assessed are the most applicable to testing (in my humble opinion), the lowest rated categories of all! Same as last year, gulp! Please excuse me being so negative and cynical, maybe it's the tester in me!

In summary, while some of the categories this year are worthy of further optimism, eg. project success rates, New Zealand participation in the survey et al, those that are applicable specifically to testing do seem to be creeping westward (and no, not to Western Australia). Will we ever see a day where we're all satisfied with requirements, estimations, expectations etc? No, probably not, and it's possibly quite naïve to think that we will. However, that said, it does mean that we have to continue to i) find better, faster and more innovative ways to test and ii) keep the flag aloft around these areas, and then just maybe we might start to see a swing east again. Until next year….

Using HP QC/ALM Test Management Tool To Its Best Advantage When Managing UAT

By Chris Williams

In this article, I share my viewpoint and experience using HP QC/ALM to manage User Acceptance Testing (UAT) with business users. I will discuss the management of the processes and how easily these can be changed and maintained using this Test Management tool. I will also share some of the practices for managing UAT that I have accumulated over the many years I have worked with this tool.

HP QC/ALM is a single platform which creates an information bridge, regardless of function or location. It drives collaboration among all teams and quality functions: business analysts, project managers and testers alike. Working from one test management platform standardises requirements definition and management, release and build management, test planning, scheduling, and defect tracking and reporting, all with complete traceability.

I see the benefits of using HP QC/ALM for managing UAT with business users as follows:

• Plan and track projects and releases from a single dashboard for predictability.
• Manage and create traceability between requirements, tests, defects, code changes and build management system tasks.
• Allow the UAT testing team to provision and deploy a test lab themselves in a hybrid delivery environment.
• Support Development and Operations by bringing development, testing and operations teams closer together.
• Prioritise testing based on business risk.
• Access testing assets anytime, anywhere.
• Schedule and execute tests automatically.
• Analyse readiness with integrated graphs and reports.
• Manage defects and trace back to tests and requirements.

HP QC/ALM's power lies in its ability, first, to store and classify tests and their results and, second, to coordinate the efforts of business users. Testers enter test cases into HP QC/ALM, together with the expected results. Actual results are entered when the tests are executed. If defects are discovered, they are entered into HP QC/ALM, together with any supporting material. All of these activities are made instantly available to the other members of the UAT testing team.

Using HP QC/ALM greatly reduces the potential for errors compared to, say, storing test scripts in Word documents and tracking them in spreadsheets. With HP QC/ALM it is less likely that test scripts will be duplicated, results lost, results miscommunicated, tests omitted and defects overlooked.

HP QC/ALM is a large product with many features. Some of the most important for the business user are described here:

Test Storage and Organization

Test Cases are stored in HP QC/ALM as a manual script with a series of steps. Attached to each step is an expected result. Tests are grouped logically into test sets, and a hierarchy of test sets makes up the full collection of tests for an application. When a test is executed, UAT testers check in HP QC/ALM whether the actual result matches the expected one. If it does, the test step passes; if not, the test step fails. A test passes when all its steps have passed, a test set passes when all its tests have passed, and testing is complete when all the test sets have passed. This logical grouping of tests enables HP QC/ALM to manage a great number of tests and yet allows the test manager to zero in on areas where tests are failing.
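The roll-up just described is simple enough to sketch. The snippet below is purely illustrative Python; it is not part of HP QC/ALM and every name in it is invented. It only shows the pass/fail logic a test manager relies on when zeroing in on failing areas: a step passes when the actual result matches the expected one, a test passes when all of its steps pass, and a test set passes when all of its tests pass.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    description: str
    expected: str
    actual: str = ""

    def passed(self) -> bool:
        # A step passes when the recorded actual result matches the expected one.
        return self.actual == self.expected

@dataclass
class Test:
    name: str
    steps: List[Step] = field(default_factory=list)

    def passed(self) -> bool:
        # A test passes only when every one of its steps has passed.
        return all(step.passed() for step in self.steps)

@dataclass
class TestSet:
    name: str
    tests: List[Test] = field(default_factory=list)

    def passed(self) -> bool:
        # A test set passes only when every test in it has passed.
        return all(test.passed() for test in self.tests)

# Hypothetical UAT cycle: one passing test and one failing test.
login = Test("Login", [Step("Enter valid credentials", "Home page shown", "Home page shown")])
payment = Test("Make payment", [Step("Submit payment", "Receipt displayed", "Error 500 returned")])
uat_cycle = TestSet("Release UAT", [login, payment])

print(uat_cycle.passed())                                    # False: one test failed
print([t.name for t in uat_cycle.tests if not t.passed()])   # ['Make payment'] - where to zero in
```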


HP QC/ALM's multi-user nature allows business users to work simultaneously on the same set of test data. Testers can create scripts in parallel and see each other's scripts. The execution of scripts can be assigned to particular testers, and the results recorded by one member of the team are instantly visible to other team members.

Defect Management

When a tester encounters a defect in the application under test, it can be logged in HP QC/ALM. The tester may describe the defect in words and attach any files that may be relevant. HP QC/ALM will then automatically e-mail the details of the defect to other members of the team. They may in turn add further comments, update the defect's status ("new", "fixed", "closed" etc) or assign responsibility for the defect to a team member. Eventually the defect will be fixed and the test that gave rise to the defect run again. If the test passes, the defect can be closed in HP QC/ALM.

Many aspects of the defect tracking process can be configured to suit the project on which it is being used. These include the testers who may log defects and change their status, the testers to whom copies of the defect are e-mailed, and the different possible status codes for a defect.
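For teams that also want to drive this workflow from scripts, ALM exposes a REST interface alongside the client. The example below is a rough sketch only: the server URL, domain, project, credentials and field values are hypothetical, the required defect fields depend entirely on how the project has been customised, and the endpoint paths can differ between ALM versions. In day-to-day UAT the business users would simply raise the defect through the ALM client as described above.

```python
import requests

# Illustrative sketch only: server, domain, project, credentials and field
# values below are hypothetical, and required defect fields depend on the
# project's customisation. Endpoint paths follow the ALM REST API (ALM 11+)
# and may vary by version.

ALM = "https://alm.example.com/qcbin"
DOMAIN, PROJECT = "DEFAULT", "UAT_PROJECT"

session = requests.Session()

# 1. Authenticate with Basic auth; ALM answers with an LWSSO session cookie.
session.get(ALM + "/authentication-point/authenticate",
            auth=("uat_tester", "secret")).raise_for_status()

# 2. Later ALM versions also expect an explicit site session to be opened.
session.post(ALM + "/rest/site-session")

# 3. Log the defect as an ALM "defect" entity (XML payload).
defect_xml = """<Entity Type="defect">
  <Fields>
    <Field Name="name"><Value>Payment screen returns error 500 in UAT</Value></Field>
    <Field Name="description"><Value>Submitting a payment fails at the confirmation step.</Value></Field>
    <Field Name="severity"><Value>2-Medium</Value></Field>
    <Field Name="detected-by"><Value>uat_tester</Value></Field>
    <Field Name="creation-time"><Value>2014-02-06</Value></Field>
  </Fields>
</Entity>"""

resp = session.post(ALM + "/rest/domains/{}/projects/{}/defects".format(DOMAIN, PROJECT),
                    data=defect_xml.encode("utf-8"),
                    headers={"Content-Type": "application/xml"})
resp.raise_for_status()
print("Defect logged, HTTP", resp.status_code)
```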

Accelerate Manual Testing

To conclude this article I want to talk about HP QC/ALM integration with other tools to accelerate manual testing. The greatest benefit to UAT testers is the acceleration of manual testing with tools like Panaya, which is used for ERP SAP testing. Panaya enables business users to quickly and accurately test all the scenarios that are relevant to their projects in a very cost-effective manner. Panaya automatically captures and documents affected business processes while the business users navigate the applications involved. The benefits of using a tool like Panaya to accelerate your manual testing are as follows:

• Creates an HP ALM Test Plan.
• Generates test scripts in a dual format: machine-readable executable test scripts that accelerate manual testing, and human-readable detailed Test Plans for knowledge retention.
• Stores and manages the entire test catalogue within an HP ALM Test Plan.
• Accelerates manual testing by enabling HP ALM business users to execute the Panaya test scripts in an accelerated mode, from within HP ALM.
• Self-adapts test scripts so they are always up-to-date with scope.

Chris Williams currently works as a Senior Test Specialist (HP Testing Tools SME) for Telecom (NZ), which involves administration (Site Administration, Project Administration), customisation (Workflow Customisation, Template Standardisation) and managing the ongoing maintenance of the HP tools (ALM, Performance Centre, Unified Functional Testing), as well as giving guidance on Best Practices and running Training Courses in the use of these tools.

He can be contacted at [email protected]



Great Bugs We Have Known and Loved (Deja Bugs!)

By Richard Boustead

In a team environment, assisting other testers on their tasks is not an unusual requirement; if anything, it is to be expected. So when I entered the meeting room for that first meeting, I was startled to see that my departing colleague was grinning in a most disconcerting way.

I went through the introductions around the table, took the handouts and asked a few questions. Everything seemed pretty set though, apart from the minor issue of being nearly three months behind schedule. The issue there was that the software was for a scanner, and the hardware (the scanners themselves) had not yet shown up. After three months. I rolled with it, even though I was beginning to suspect what my now absent colleague had been grinning about.

The scanners arrived, which is where the second wrinkle showed up. Software was loaded, the sample machine was set up, and my list of scripts was lined up and waiting. Just like any other project at any other time. Our first run-through of Scanner V1.0 brought up some small, low-severity bugs. Nothing especially major. Bugs were identified, logged, evaluated and escalated to the vendor.

Scanner V1.1 arrived and all was well. Identified faults were fixed, performance had improved and there was a host of additional new bugs to process. I mentioned to the project manager that despite being three months late in starting, I really couldn't see anything that would delay testing too much. I should have known better than to taunt Murphy. Scanner V1.2 arrived. The new bugs were fixed, and there were some new ones to deal with. At revision 3, the number should have been shrinking, but it was a complex bit of software. We began logging the new set, and then sat back with a terrible feeling of deja vu. These bugs had already been logged three weeks ago, and they were marked as closed…. At this point, the other tester I was working on the project with noted that those were identical to the bugs in V1.0. Investigation soon proved him right. The cycle continued through V1.3 and V1.4. Each time, we were picking up previously fixed bugs and having to re-open them. We hit the end of scheduled testing still with 140 bugs outstanding, and a good two thirds of them were the recurring phantoms.

While idly speculating among ourselves as to the cause and bouncing crazy ideas around, my colleague suddenly blurted out, "They're giving us fixes based on the original, not the latest version!" The BA overheard and called the Project Manager, presenting our overheard speculation as The Truth. The Project Manager called the Senior Manager; the Senior Manager called the Vendor, and the Vendor? The Vendor admitted to giving us fix versions based on the original release of V1.0, not the latest build. Huh.

It wasn't too long later that I was moved off the project and back to my regularly scheduled BAU releases. As my replacement came into the meeting room, I was careful to give him a great big grin.

The lesson I learnt from this escapade? Bugs aren't always in the program. They can be in the process as well. Also, deja vu tends to mean someone didn't change something in the Matrix.


And now it's your turn…

If you would like to be involved with and/or contribute to future NZTester issues, you're formally invited to submit your proposals to me at [email protected]. Articles should be a minimum of ½ A4 page at Cambria 11pt font and a maximum of 2 A4 pages for the real enthusiasts. If you wish to use names of people and/or organisations outside of your own, you will need to ensure that you have permission to do so. Articles may be product reviews, success stories, testing how-to's, conference papers or merely some thought-provoking ideas that you might wish to put out there. You don't have to be a great writer as we have our own staff writer who is always available to assist.

Please remember to provide your email address, which will be published with your article, along with any photos you might like to include (a headshot photo of yourself should be provided with each article selected for publishing). As NZTester is a free magazine, there will be no financial compensation for any submission and the editor reserves the sole right to select what is published and what is not. Please also be aware that your article will be proofread and amendments possibly made for readability. And while we all believe in free speech I'm sure, it goes without saying that any defamatory or inflammatory comments directed towards an organisation or individual are not acceptable and will either be deleted from the article or the whole submission rejected for publication.

Feedback

NZTester is open to suggestions of any type; indeed, feedback is encouraged. If you feel so inclined to tell us how much you enjoyed (or otherwise) this issue, we will publish both praise and criticism, as long as the latter is constructive. Email me on [email protected] and please advise in your email if you specifically do not want your comments published in the next issue, otherwise we will assume that you're OK with this.

Advertisers in this issue: WorX / Autom8, Assurity Consulting, IntegrationQA, Catch Software / EnterpriseTester, ZephyrQUALITY.

Sign Up to Receive NZTester

Finally, if you would like to receive your own copy of NZTester completely free, even though we're still real low tech right now, there are two easy ways: 1) go to www.nztester.co.nz, or 2) simply click here. Ed.
