The June-July 2012 issue of TEST Magazine
Page 1: TEST Magazine - June-July 2012

Inside: Test Automation | Software security | Performance testing

Derk-Jan de Grood asks is it time to say goodbye to a separate testing phase?

The end of the road for the Test Phase?

Visit TEST online at www.testmagazine.co.uk

Volume 4: Issue 3: June 2012

Innovation for Software Quality


Page 2: TEST Magazine - June-July 2012
Page 3: TEST Magazine - June-July 2012

Leader


“Just too difficult to use”

According to Guardian Government Computing, the Department of Health has confirmed that HealthSpace, the NHS patients' personal health records organiser, will close by March next year.

It follows a speech by Dr Charles Gutteridge, national clinical director for informatics at the Department of Health, in which he said that although he has used HealthSpace to communicate with patients, he did not think it was a technology that would ever take off.

“It is too difficult to make an account; it is too difficult to log on; it is just too difficult,” he complained. “I don't think I am hiding anything if I say to you that we will not continue with HealthSpace. We will close it down over the next year or so.”

He went on to say that the Health Department needs to create a new portal where patients can find their summary care records and view them personally. One can only guess at how much this has all cost.

What this example does highlight is something that should be obvious to all in the software testing business: not only does the software have to function in the way specified, without glitches, bugs or crashes, it also has to be useable – or even user-friendly, to use a term that seems somewhat out of fashion. It is a massive shame the creators of HealthSpace didn't consider the average user when they designed the interface; or perhaps the blame lies elsewhere and usability wasn't made a priority in the original specification.

The UK National Health Service has been the nemesis of many a massive IT project. It seems to harbour a toxic combination of ever-changing funding and management approaches combined with the shifting landscape of British politics. Let us hope they sort it out sooner rather than later.

To more positive matters... Circulated with this issue is our Agile Strategy supplement, in which we seek to make the business case for the Agile method. One of the key aims of TEST is to prove the business value of testing as an industry, and the time seemed right to do something a little more in-depth on the Agile approach. We have a range of opinions from a selection of sources in both the vendor and user communities. I hope you find it useful.

Until next time...

Matt Bailey, Editor


Editor: Matthew Bailey, [email protected], Tel: +44 (0)203 056 4599

To advertise contact: Grant [email protected], Tel: +44 (0)203 056 4598

Production & Design: Toni Barrington [email protected] Cook [email protected]

Editorial & Advertising Enquiries: 31 Media Ltd, Unit 8a, Nice Business Park, Sylvan Grove, London SE15 1PD

Tel: +44 (0) 870 863 6930, Fax: +44 (0) 870 085 8837, Email: [email protected], Web: www.testmagazine.co.uk

Printed by Pensord, Tram Road, Pontllanfraith, Blackwood, NP12 2YA

© 2012 31 Media Limited. All rights reserved.

TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available.

Opinions expressed in this journal do not necessarily reflect those of the editor or TEST Magazine or its publisher, 31 Media Limited.

ISSN 2040-0160

Page 4: TEST Magazine - June-July 2012

At Original Software, we have listened to market frustrations and want you to share in our visionary approach for managing the quality of your applications. We understand that the need to respond faster to changing business requirements means you have to adapt the way you work when you're delivering business-critical applications.

Our solution suite aids business agility and provides an integrated approach to solving your software delivery process and management challenges.

Join the Revolution. Don't let your legacy application quality systems hamper your business agility.

Find out why leading companies are switching to Original Software by visiting: www.origsoft.com/business_agility


Page 5: TEST Magazine - June-July 2012


Contents

1 Leader column – When software is “Just too difficult to use”.

4 News

6 The end of the road for the test phase? – There is much debate about how testing will be organised in the near future, and Derk-Jan de Grood has noticed growing uneasiness in the testing ranks. Where are we going? What should we do? And is the test phase still relevant?

12 Test matching: Rethinking test automation – How do we enable testers to work effectively within the new business-driven development environment? Huw Price finds out.

16 TEST Profile – Global service – Financial services is a complex, global industry with massive IT demands. TEST speaks to Giuseppe Tozzo, specialist tester at Deloitte LLP in London, about how software quality is crucial at the company.

20 Traditional automation is dead – Stephen Readman takes us on a journey from traditional testing to a place where software specifications are automated upfront and testers have a shift in focus.

24 Specification by example: How validating requirements helps users, developers and testers – Paul Gerrard and Susan Windsor offer a simple approach to creating business stories from requirements that not only validates requirements but also creates examples of features that can feed the 'Behaviour-Driven Development' approach and also be used as system or acceptance test cases.

28 Training corner – Agile and the hare – This issue Angelina Samaroo looks at Agile, exploding some myths and taking a detour down memory lane.

31 Design for TEST – Testing and growth – Mike Holcombe suggests that one way to help boost economic growth is to put some effort into trying to explain to SMEs how they can make their testing better.

32 TEST Supplier Profile – Software security – With a rapid increase in adoption of security test automation, especially in industries working on machine-to-machine solutions, consumer devices, and critical infrastructure protection, Codenomicon is finding success in providing software security. The company's founder and CTO, Ari Takanen, speaks to TEST...

36 Surviving the perfect storm – While I'm sure we're all looking forward to the Olympics, many are asking if the sheer amount of network and mobile data traffic will test the IT infrastructure to destruction. Mike Howse says, if it's done properly, performance testing will de-risk your application both when you launch it and when the crowds of users come to visit.

40 The Agile imperative – It is important that software design and development agencies understand that agility is the easiest way to meet the demands of the client. Digital marketing guru Toby Mason explains why Agile is imperative.

41 TEST Directory

48 Last Word – The end of the Death March – Dave Whalen ponders the end of the software testing 'death march'.


Page 6: TEST Magazine - June-July 2012


Why IT should forget about development

The results of a global survey of nearly 1,000 IT professionals focused on IT trends and outlooks pertaining to application development practices have uncovered the level of success respondents feel their companies have with various application lifecycle management (ALM) processes, including demand management, requirements management, application development and release management. The conclusion: IT should forget about development, because its core processes are not an issue.

Even with the dramatic influx of new technologies into the application development process – such as cloud and mobile computing – IT organisations are largely successful with development. However, it is the 'bookend' processes of requirements management and release management that tell a different story. Survey respondents consistently scored these two processes lowest (the scale was a one-to-four rating, with four being the best). In fact, respondents cite deploying applications on time without issues as the biggest impediment to overall ALM success.

Most software expenditure takes place in the operations and post-deployment phase, so getting release management right early on is critical. The rise of DevOps to improve the transition from development to production has gained popularity in recent years, not only because of Agile development but also because of the proliferation of production environments. A key finding from the survey indicates IT organisations should focus more on DevOps to bridge the development and operations divide.

The survey highlighted that IT struggles with delivering quality requirements: According to market research, 33 percent of development costs are wasted because of poor requirement practices. Yet managing requirements had the lowest overall score in the global survey. IT organisations need to invest more in requirements processes, not more tools, to help everyone share requirements across a complex technology and organisational landscape.

Staff are happier with BYOD

Firms that allow staff to bring their own computer devices (BYOD) to use at work are more attractive employers and enjoy better workplace morale, according to a new survey of 300 small and medium sized businesses by cloud computing company Nasstar.

According to the survey, almost three quarters of company bosses said that allowing staff to use their own smart phone, laptop or tablet in the workplace would position their firm as a ‘flexible and attractive employer’.

Around two thirds of SME chiefs already allow their staff to use their own devices for work purposes. The same number said they had written policies in place for staff wishing to use their own devices at work.

Fifty-eight percent felt that letting staff use their own devices at work had led to increased output, better workplace efficiency and happier staff.

Cary Cooper, professor of organisational psychology and health at Lancaster University Management School, comments: “As this research shows, most people these days generally like the option of using their own computer devices at work. For employers it's better to be flexible to their employees' needs rather than imposing ways of working that go against the preferences of their workforces.”

Sixty percent of respondents felt that they had saved some money on IT training and hardware by letting staff use their own devices. A clear majority (70 percent) felt that with the rise of tablets and smart phones, it was ‘inevitable’ that in the future all staff would demand the flexibility to use their own devices in tandem with those provided by their employer.


Vulnerabilities found in storage devices

Consumers have been warned about the poor stability of network-attached storage (NAS) units. Based on Codenomicon's robustness test results using smart model-based fuzzing tools, all of the tested units failed in multiple critical communication protocols.

“It is alarming that not a single one of the devices could clear the tests,” says Ari Takanen, CTO of Codenomicon. “Similar protocols are used not only in consumer NAS units, but also in enterprise-level data storage devices. The information security as it stands is poor.”

Embedded devices are everywhere. Connected to the Internet, they use protocols to transfer data. Bugs in software implementing those protocols make the devices vulnerable, and potentially exploitable. Researchers at Codenomicon Labs have been testing embedded devices. However, Codenomicon will not disclose any details of the vulnerabilities publicly in order to protect the users of those devices.

This research is part of a series of publications on testing embedded devices used by home consumers. Codenomicon Labs took five different consumer-grade NAS units from well-known manufacturers. Lab researchers fuzzed them thoroughly, and all of them failed.

Network-attached storage units have become increasingly ubiquitous due to the convenient, simple and efficient way they share and store data, often requiring nothing more than connectivity to a LAN or the Internet. These features, however, expose them to a multitude of threats that menace any networked device.

Page 7: TEST Magazine - June-July 2012


News

Report says open source code quality is on par with proprietary

Coverity's latest Scan Open Source Integrity Report (Scan), the largest public-private sector research project focused on open source software integrity, has revealed that open source code quality is on a par with that of proprietary code.

In 2011, open source projects in Coverity Scan were upgraded to the Coverity 5 development testing platform analysis engine to accommodate significant advances in the maturity of static analysis technology over the past five years – in particular, the ability to find more new and existing types of defects in software code. The 2011 Scan report details the analysis of Scan's most active open source projects, totalling over 37 million lines of open source software code. In addition, the report details the results of over 300 million lines of proprietary software code from a sample of anonymous Coverity users.

“The line between open source and proprietary software will continue to blur over time as open source is further cemented in the modern software supply chain,” commented Zack Samocha, Coverity Scan Project Director. “Our goal with Scan is to enable more open source projects to adopt development testing as part of their workflow for ongoing quality improvement, as well as further the adoption of open source by providing broader visibility into its quality.”


New-look software testing training

SQS Software Quality Systems, a leading specialist in software quality and training, says it has launched a new-look software testing training portfolio offering a host of additional courses and novel training processes for beginners and advanced software testing and quality assurance professionals.

Being rolled out across the UK and Eire, the new portfolio consists of a wide range of international courses arranged around twelve key testing roles. There are three levels of course available for each key role: core training to strengthen existing skills, additional training to continue development, and skills top-ups for personalised training.

SQS training & conferences director, Steve Dennis, comments: “Whether you are responsible for the quality policy in your company, you have a leadership role in large test projects of complex software systems or are tasked with ensuring the delivery of a software system within budget, time and quality constraints, our new-look 2012 training portfolio has over 70 courses, to suit all levels.

“The investment in software testing training can save time and costs while also helping to preserve a company's reputation. Our courses support all major certification training programmes, and some are specially developed to meet accredited bodies' requirements. Our experienced and accredited trainers also help trainees to understand how to apply new knowledge to real projects.”


Taking the Championship

Micro Focus, a provider of enterprise application modernisation, testing and management solutions, has announced that it has been named a 'champion' in Info-Tech Research Group's Agile ALM Vendor Landscape and Software Test Management Vendor Landscape reports. Through a comprehensive review of companies in both areas, the reports are designed to offer an independent perspective on the range of test and agile development products available in the market.

In its evaluation, Info-Tech Research Group acknowledged Micro Focus' Borland Application Lifecycle Management (ALM) suite as a premium offering and its Silk Central Test Manager solution for its comprehensive testing and requirements management features. The company was also recognised for its global solution support, including a far-reaching reseller network, in both categories.

“Micro Focus strives to provide customers with best-in-class software solutions that offer the most flexibility possible. This enables customers to leverage their existing infrastructure without prohibitive overhead costs,” said Steve Biondi, president of North American sales and field operations for Micro Focus. “We've always believed that the strength of our solutions distinguishes us from the competition. This recognition by Info-Tech Research Group for our Borland suite and Silk Central Test Manager only underscores our conviction to continue to innovate for our customers.”

Page 8: TEST Magazine - June-July 2012


The end of the road for the test phase?

There is much debate about how testing will be organised in the near future. The testing profession has evolved from the very first time developers started testing, through separate test phases and independence, to collaborative testing. However, in recent years Derk-Jan de Grood has noticed growing uneasiness in the testing ranks. Where are we going? What should we do? And is the test phase still relevant?

Page 9: TEST Magazine - June-July 2012


The waterfall method, which is still being used by many organisations, prescribes an extensive final test phase. Another significant feature of waterfall is that the software development process is organised in several sequential periods, or test phases, in which a dedicated group performs specialised tests. In this context, the term 'test phase' can be defined as a group of jointly executed and controlled test activities. The testing within each phase may be more or less structured and formal, but the test activities within a test phase are bound by common objectives and the same focus or system boundaries. Leading testers like James Whittaker and Gojko Adzic have repeatedly stated in various presentations and articles that such testing is no longer viable. But first, a few reasons why the test phase is dead...

Afterthoughts (it is a day after the fair)
There is little value in a quality assessment that comes too late. Within the waterfall method, testing often continues until just before the deadline. Due to the high workload, the test report is written afterwards, when the system is already in production. What is the value of remarks and comments at this stage? What should the project do with bugs that can no longer be fixed, since the deployment is already a fact?

Even when testing is done at early stages, such as with the system test, results come in too late. The fair is already over. The programmers have done their work and want to start something new, but must wait until the testers make their statement about the quality, often accompanied by a litany of bugs.

Customer experience is a key performance indicator (KPI) that is gaining popularity with our stakeholders. Organisations put the customer rating at the centre of their dashboard. Although bugs are a threat to customer experience, the perceived value of a full bug tracking system depreciates quickly.

The aim is not to demonstrate the differences with the specification; it is to have a satisfied customer. Agile development aims for 'right first time' and therefore has a strong focus on early detection and quick resolution of errors. The user is involved in the development, and cooperation is more important than following a formal specification. This reduces the need for an independent quality judgment at a later stage.

The development cycle is shortening
The life of software is becoming shorter due to rapid innovation. Consequently, we should develop our software faster too. Kent Beck provided a clear prognosis at the USENIX Technical Conference.

In the coming years the deployment cycle will decrease from an average of once a quarter to releases on a daily or hourly basis. For testing this has two direct consequences. First, tests must be performed quickly. You cannot test for one month when the software is due to be released next week. Secondly, the phasing of activities gets blurred. Testing is done continuously and by everyone. There is no longer room for a separate testing phase.

A shift to operational assurance
For many organisations the test phase is still important, but in Agile organisations there is a shift in emphasis. We see that testing is an activity that is conducted by many parties: developers within the sprint, business architects during design, and real users that perform beta tests.


Page 10: TEST Magazine - June-July 2012


Quality attributes like usability, durability and security are increasingly important and get attention throughout the project. This is in contrast to the traditional test phase, in which a group of independent testers, urged to speed by a fast-approaching deadline, execute their functional tests.

The above description shows a clear shift from a formal testing phase (especially at the end of the development process) to a continuous process involving many disciplines. Linda Hayes indicates that there is a shift from quality assurance to operational assurance. In this view, the quality assessment no longer takes central place; the support of the operational process does. On the basis of the above arguments, it is clear that one of the 'victims' of this shift is the separate testing phase.

Note that the above arguments question the health of the separate test phase, and argue it to be a dying dinosaur. Between the lines you can read that testing as a discipline is far from dead. It will be organised differently and other disciplines are getting involved. Although things are changing for sure, the above arguments are only one side of the coin. Are there arguments that plead for a separate test phase at the end of the cycle? Yes, there are!

Arguments for a test phase at the end of the cycle
In the following paragraphs I will share some arguments that plead for a separate test phase.

Although it is desirable to maximise early testing as much as possible, not all tests can be done upfront. Often it is just not possible. Unit and system tests only check the quality up to a certain level. Due to appification and an increase in system couplings, the system chains get longer.

Using adequate simulations and working with trusted components, a lot of errors can be resolved before integration, but these measures will never replace a true integration test. Experience teaches us that when two systems interact for the first time, unforeseen problems often arise. A testing phase at the end prevents these problems from occurring in operation.

The supplier has other interests
Wherever development work is being outsourced, organisational boundaries arise. On either side of this boundary, parties have their own interests. And they might be different and exclusive.

Page 11: TEST Magazine - June-July 2012

For political reasons or due to geographic spread, it is difficult for the accepting party to have real insight into the activities of the supplier; therefore control and checking by the acceptor is a necessity.

Preferably this is done during the project and in cooperation with the supplier, but formal acceptance means that there should be a critical examination once the goods are delivered. Although the weight of this activity may vary based upon the trust one has in the supplier and the risk involved, it pleads for at least a small testing phase.

Politics rule
Apart from the question of acceptance, when organising testing one has to have an eye for the role that politics plays in the organisation. Increasingly, organisations are expected to meet compliance standards like Basel, SOX and SEPA (just to name a few). This forces compliance testing, and makes demands on the formality of the testing activities. Besides, it is often desirable to have a shared responsibility and create wide commitment. Both can be achieved by involving stakeholders and management in testing. Such a test phase therefore has a political purpose.

As mentioned, quality attributes such as security, performance, durability and user-friendliness become increasingly important. Experience shows that these specialised tests are often best organised separately. Usability, performance and especially reliability testing seldom fit within a two-week sprint. If you organise beta testing, a longer run will lead to greater coverage – more reasons for these tests to be organised in a – you guessed it – separate test phase.

Legacy
Almost all organisations have to deal with legacy. Agile development, continuous integration and testing are all well and good, but bear in mind that not all software is suitable for this mode of development. In particular, legacy systems can often best be adapted in a traditional development way.

According to Kent Beck, various types of systems require different testing approaches. It may therefore be effective to choose different test approaches that coexist within the same organisation. Besides legacy systems, there are also legacy organisations.


Page 12: TEST Magazine - June-July 2012



In these organisations, the technique does not determine what is possible, but the culture and available knowledge does. Agile development requires the right expertise and mindset. Not every organisation is ready for this.

In life-critical systems the above arguments apply even more strongly. If lives depend on it, the organisation is bound to tackle as many problems as possible by fully integrating testing into development, and to have the necessary objective test moments. Thus, the two opinions merge into one another and coexist side by side.

Best of both worlds
We have seen that there are arguments for and against separate test phases within software development. I do not think there is value in forced decisions for or against. We should not cling to the known test phases just because we are familiar with them. Neither is it desirable to throw away the old approaches. Current developments will lead to many changes. It is important for testers to track these developments and to consider what their consequences are for the testing profession and the way we do our work.

In my view, the job gets more colourful, versatile and challenging. We get new tools and options. Test strategists will have to think about the contribution they want to make to the organisation and what objectives we pursue with our activities. On this basis we can make choices.

Let old and new ideas come together. This will result in testers that sit beside developers to reduce and rapidly detect errors. It will also lead to testers working in separately organised test phases, whenever this is more efficient.

I can think of situations where, for example, all activities related to a certain risk group are combined in a dedicated test phase. Proper business alignment dictates that the output of our test activities is closely related to the information needs of the business. Regardless of the moment in time that particular test activities are performed, it can be rewarding to organise them separately. This holds for all testing activities that contribute to the same insights and information. By doing so, the test coordinator becomes the person who, on behalf of the business, ensures intelligence and comfort for one or more key focus areas.

The test phase is far from dead, but it will increasingly be defined and organised in a different manner. I think that's just fine, as long as we keep aligned with the needs of the organisation, continually challenge ourselves to deliver maximum added value and contribute to operational excellence.

Derk-Jan de Grood
Valori
www.valori.nl

About the author
Derk-Jan de Grood works for Valori and is an experienced test manager who advises companies on how to set up their test projects. He has published several successful books and is a regular speaker at national and international conferences. This article is closely related to the keynote he will give at the EXPO:QA conference in Madrid. Derk-Jan is passionate about his profession, gives lectures at various Dutch universities and provides training sessions for fellow professionals in the trade.

References
[Hayes, 2010] Linda Hayes, "You're Either On The Train or On the Tracks: Radical Realities Shaping Our Future", keynote STARWEST 2010.
[Beck, 2010] Kent Beck, "Software G Forces: The Effects of Acceleration".
[Whittaker, 2011] James Whittaker, "Pursuing Quality? You Won't Get There By Testing", keynote EuroSTAR 2011.
[Adzic, 2011] Gojko Adzic, "Death To The Testing Phase", keynote EuroSTAR 2011.
[JuBoCo, 2012] Bepaal je koers, Toekomst en Trends in Testen, ISBN 9789491065347, Dutch test society (TestNet) Anniversary Book.

Page 13: TEST Magazine - June-July 2012

Codenomicon's Fuzz-o-Matic automates your application fuzz testing! Fuzz anywhere, early in the development process, get only relevant results and remediate on time. Access your test results through your browser, anywhere.

FUZZ IT LIKE YOU MEAN IT!

Get actual, repeatable test results

Save time and money by avoiding false positives! Find unknown vulnerabilities before hackers do. Not everyone has the staff, time, or budget to effectively use penetration testers and white hat security auditors. For executable but untested software, Fuzz-o-Matic gives longer lead time to remedy bugs before release.

Identify bugs easily

Fuzz-o-Matic gives you working samples that caused issues during testing. Each crash case also includes a programmer-friendly trace of the crash. Identification of duplicate crashes is effortless. In addition to plain old crashes, you also get samples that caused excessive CPU or memory consumption. After remediation, a new version can be tested to verify the fixes.

Verify your applications

Test your builds with Fuzz-o-Matic. The world's best software and hardware companies have Software Development Lifecycle (SDLC) processes that identify fuzzing as a necessity to pre-emptively thwart vulnerabilities. These days simple functional testing done as Quality Assurance is not enough. Improve your application resilience by finding and remediating bugs that cause product crashes.

www.codenomicon.com/fuzzomatic


Page 14: TEST Magazine - June-July 2012


Test matching: Rethinking test automation

How do we enable testers to work effectively within the new business-driven development environment? Huw Price finds out.

Page 15: TEST Magazine - June-July 2012



Increasingly, companies are viewing IT as a business-critical operation; one that is expected to improve software quality while also aligning itself with the business drive to reduce costs. This realignment is noticeable throughout the industry. In fact, the 'highest priority' of Agile software development is 'to satisfy the customer through early and continuous delivery of valuable software'.

In short, IT is now expected to cost-effectively deliver software which meets customer requirements without significant variation or defects, in short iterations. Therefore, testers play an important role in this transition, acting as gatekeepers of quality. So, how do we enable testers to work effectively within this new business-driven development?

The manual-automation debate
As much as 40 percent of an entire application development lifecycle is spent on testing, which means that test teams need to adopt an approach which allows them to react to the demands of the business. This means using efficient practices to improve software quality. Often, this discussion is contained entirely within the manual vs. automation debate, which revolves around cost analysis of man vs. tool. This is a moot point. Manual testing is time-consuming, labour-intensive and prone to human error. However, in some cases, it is not only the preferable method but also the necessary one.

Automation is not intended to replace manual testing but, rather, to create efficiencies by removing the bottlenecks created by unnecessary manual tests. Bottlenecks, as you might expect, can considerably slow your test cycles and limit the number of tests you can perform within business-imposed time constraints. This reduces the coverage of your testing, making it difficult to ensure that the required functionality works as expected. If you can automate the execution of the most repetitive or time-consuming of these tests – for example, regression tests, load and performance tests – you can remove the unacceptable choice between delaying implementation and compromising quality.
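To make this concrete, here is a minimal sketch of the kind of repetitive, deterministic regression check that is a natural automation candidate. It assumes a pytest-style runner; the calculate_vat function and its test values are invented for the example.

    # Hypothetical regression check: deterministic, repetitive verification
    # that is cheap to re-run on every build once automated.
    import pytest

    def calculate_vat(net_amount: float, rate: float = 0.20) -> float:
        # Stand-in for the production function under test.
        return round(net_amount * (1 + rate), 2)

    @pytest.mark.parametrize("net, expected", [
        (100.00, 120.00),   # standard rate
        (0.00, 0.00),       # boundary: zero amount
        (19.99, 23.99),     # rounding behaviour
    ])
    def test_vat_regression(net, expected):
        # A failure here flags a regression immediately, without tying up
        # a manual tester in repetitive re-checking.
        assert calculate_vat(net) == expected

Run on every build, a suite like this frees the manual tester for the tests that genuinely need human judgment.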

Overcoming perceptions
Automated testing tools are expensive; the market leaders can cost six or seven figures. This is a considerable investment, and one which, in many cases, has failed to reap expected returns. Often, this can be attributed to unrealistic expectations of what automation should achieve.

By nature, automated test tools are labour-saving devices, just like washing machines. It is unrealistic to expect everything to come out in the wash if there are no clothes and soap, or the cycle programme has not been defined. However, taking five minutes to put in the clothes, soap and programme allows you to perform other tasks while the machine performs an operation that used to take humans hours of manual labour.

If we apply this back to testing, the benefits are numerous. Automation frees up testers for more manual tests, minimises the risk of human error through repetition, allows out-of-hours executions, and speeds up the testing cycle. The washing machine example also gets us to the crux of the issue – data! Testers and tools can only test that the data they are provisioned with satisfies the defined requirements. Just as putting colours and whites together may cause a wash to 'fail', so the wrong data will lead to the failure of tests. So, how do we ensure that we are testing the right data, in the correct state, at the right time?

Rethinking data provisioning
Generally, once the requirements of a project have been established, requests will be received to go and find the appropriate data to test them. This operation is often performed manually, meaning that someone has to understand what data is required, know where it is stored and then go and find it. This is a process which can take weeks, adding further time constraints upon testers, and it can't guarantee that the data provided is fit for purpose – or, more importantly, that it will be fit for purpose the next time the test runs.

Another reason that the data may not be appropriate is that searches tend to be geared towards finding specific keys (IDs). In modern systems with numerous cross-stream and upstream dependencies, can you guarantee that the data is available in the same state, or that it has not been modified since the requirements were set?

Page 16: TEST Magazine - June-July 2012



Unavailability or modifications can cause substantial delays and rework for both data and test teams, and may not be discovered until after a failed test is reported.

It is also likely that the attributes required to run the test are stored in disparate systems; for example, in order to test an employee holiday system, you may need data from HR and payroll systems to satisfy the test. This adds a further level of complexity. In short, manually provisioning data carries many of the same risks as manual testing: delays, substantial rework and human error, all of which compromise quality. It is time to rethink the way in which we provision our data to ensure the efficiency and quality of our testing cycles.

What is test matching?
Test matching utilises the concept of matching automated tests to back-end data to improve the efficiency and success of your test automation. The first step is to identify current data in the correct state for the tests to run without failure; the data is mined using powerful data mining tools. The second step is then to provision the high-quality data required to run your automated tests.

Defining test criteria, not individual keys
Instead of creating tests fed with spreadsheets containing specific keys (account IDs, purchase order numbers etc.) against which to run your tests, use data criteria to make your tests work every time. Based on these criteria, you can then use powerful data mining tools to extract the data from your multiple back-end systems.

Often, individual testers will have achieved this themselves by removing key IDs from scripts and writing some SQL. However, this tends to be ad hoc and not standardised across the entire team. Using standardised criteria helps to ensure that the data is always in the right state to run your tests. This automated technique also allows you to quickly mine data from a number of disparate systems – something which is difficult to do manually – making your data provisioning considerably more stable than data taken from production.
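As a rough illustration of criteria-driven provisioning, the sketch below uses Python with an in-memory SQLite database (the table, columns and values are invented for the example): the script selects whichever row currently satisfies the test's data criterion, instead of hard-coding an employee ID.

    # Hypothetical sketch: find test data by criteria, not by fixed keys.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (id INTEGER, entitlement INTEGER, taken INTEGER)")
    conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                     [(1, 25, 5), (2, 25, 25), (3, 30, 0)])

    # Criterion: an employee with remaining holiday entitlement.
    # Any matching row will do, so the test no longer breaks when a
    # particular hard-coded ID is modified or disappears.
    row = conn.execute(
        "SELECT id FROM employees WHERE entitlement - taken > 0 LIMIT 1"
    ).fetchone()
    print(f"Running holiday-booking test against employee {row[0]}")

The point is the shape of the query, not the tooling: any row satisfying the criterion keeps the test valid, which is what makes the approach repeatable.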

Once the data has been mined, it will be assembled into data cubes, allowing you to look at the data criteria for a specific key in one place. For example, each employee ID would have a series of attributes, such as leave entitlement, holiday taken, staff position etc. The end result is a richer, higher-quality understanding of the data needed to drive more efficient, rigorous automated tests. Once we have mined the data, it's time to match the attributes to your tests.

Matching your tests
Each test you run will have certain conditions, usually based on operators, which match the functionality that needs to be tested. By creating a control table for your tests, it is possible to define these conditions and match each test to the appropriate data. Indeed, it is also possible to match multiple tests to a single key if, for instance, you wish to perform a number of tests on a specific employee ID.

By prioritising the rarest sets of criteria, you can ensure that your tests have the richest data for maximum coverage. However, perhaps the biggest benefit to running a test match prior to your automated testing is that it allows you to detect which tests will fail and why before you run the tests. For example, if there was no data in your back-end systems to meet a requirement, you could synthetically create the data needed to fulfil those criteria before the tests run. This significantly reduces the risk of rework and delays in your test cycles, whilst allowing you to perform more rigorous tests.
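A rough sketch of such a control table in Python (the test names, conditions and data-cube rows are invented) shows how each test can be matched to suitable data – or flagged as unmatchable – before anything is executed:

    # Hypothetical control table: each test declares the data condition
    # it needs, expressed as a predicate over a data-cube row.
    control_table = {
        "test_book_holiday":   lambda e: e["entitlement"] - e["taken"] > 0,
        "test_reject_booking": lambda e: e["entitlement"] - e["taken"] == 0,
        "test_carry_over":     lambda e: e["entitlement"] > 30,
    }

    employees = [
        {"id": 1, "entitlement": 25, "taken": 5},
        {"id": 2, "entitlement": 25, "taken": 25},
    ]

    # Match every test to a row before execution; an unmatched test is a
    # failure we can predict and fix (e.g. by synthesising data) up front.
    for test_name, condition in control_table.items():
        match = next((e for e in employees if condition(e)), None)
        if match is not None:
            print(f"{test_name}: matched employee {match['id']}")
        else:
            print(f"{test_name}: no suitable data - synthesise it before the run")

In this toy run, test_carry_over finds no matching row, illustrating exactly the kind of predictable failure that test matching surfaces before the test cycle starts.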

Feeding scripts into your tool
Traditionally, test automation scripts read data from spreadsheets. However, this is a cumbersome process, with testers needing to create, refresh, match and track data in spreadsheets. By automating the test matching and provisioning processes, these operations can be moved into a single tool's 'test data repository'. This allows you to standardise your test scripts before feeding them into your automation tool. As such, these test automation scripts can use their native database connectivity methods – for example, ADODB connections in VBScript – both to read test data from your repository and to write your tests back to the repository.
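The article mentions VBScript's ADODB connections; as a language-neutral sketch of the same idea (using Python and SQLite purely for illustration – the table and test names are invented), a script can pull its input from a central test data repository rather than a spreadsheet, and write its outcome back:

    # Hypothetical sketch: a test script reads provisioned data from a
    # shared 'test data repository' table instead of a spreadsheet.
    import sqlite3

    repo = sqlite3.connect(":memory:")  # stands in for the central repository
    repo.execute("CREATE TABLE test_data (test_name TEXT, employee_id INTEGER, result TEXT)")
    repo.execute("INSERT INTO test_data VALUES ('test_book_holiday', 1, NULL)")

    # Read the provisioned input for this script...
    (employee_id,) = repo.execute(
        "SELECT employee_id FROM test_data WHERE test_name = 'test_book_holiday'"
    ).fetchone()

    # ...run the automated test (elided here), then record the outcome
    # centrally so data and results stay tracked in one place.
    repo.execute("UPDATE test_data SET result = 'pass' WHERE test_name = 'test_book_holiday'")
    repo.commit()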

In order to ensure quality, automated testing should be an important part of any testing strategy. However, to maximise the benefits of test automation we need to change the way we think about data. Adopting test matching into your approach to data provisioning can significantly improve the efficiency and success of your automated tests, enabling testing to align itself to the business and deliver quality software on time.

Huw Price
Managing director
Grid-Tools
www.grid-tools.com

Page 17: TEST Magazine - June-July 2012

- A powerful and easy to use load test tool
- Developed and supported by testing experts
- Flexible licensing and managed service options

Don't leave anything to chance. Forecast tests the performance, reliability and scalability of your business critical IT systems. As well as excellent support, Facilita offers a wide portfolio of professional services including consultancy, training and mentoring.

Powerful multi-protocol testing software


Facilita Software Development Limited. Tel: +44 (0)1260 298 109 | email: [email protected] | www.facilita.com

Page 18: TEST Magazine - June-July 2012


Global service

Financial services is a complex, global industry with massive IT demands. TEST speaks to Giuseppe Tozzo, specialist tester at Deloitte LLP in London, about how software quality is crucial at the company.

Deloitte LLP is the brand under which tens of thousands of professionals in independent firms throughout the world collaborate to provide integrated services that include audit, tax, consulting and corporate finance.

Giuseppe Tozzo joined Deloitte LLP last year as a specialist in risk and testing services, but his experience in software goes back to the late '80s, when his applied biology background led him to a role as a support programmer at Shared Medical Systems. This developed into a range of test manager roles, culminating most recently in a four-year stint as a testing consultant at Acutest Ltd.

Tozzo originally hails from Cardiff and is a keen cyclist and photographer. He also studies the history of technology and has published books on the subject, including Collectable Technology (Carlton Books) and Retro Electro (Universal Books). Both came out in 2005.

TEST: Please explain what your organisation is and what it does. How does the testing function fit into the organisation as a whole?

Giuseppe Tozzo: Deloitte is a global business advisory firm. We offer integrated services to businesses, including audit, tax, consulting and corporate finance. Our testing services provide a broad range of consultancy services to clients.


Page 19: TEST Magazine - June-July 2012


Introducing new software is complex and costly. It has to meet both functional and performance needs, integrate with legacy operating environments, and be delivered within set timescales and budgets. Therefore an effective software testing process is critical.

Our testing team helps clients achieve their projects’ objectives by managing the testing process, or by assuring those processes are both effective and pragmatically followed.

Our solutions and methodologies are based on good practices, including BSI and IEEE, but can be tailored to suit any project. Our testing team guides clients through a structured testing review, helping them avoid common pitfalls in the testing process. We encourage early testing where practicable, showing clients how it's possible to deliver sooner if they embrace testing sooner in their programmes.

TEST: What are the particular challenges faced by the software testing staff in your organisation?

GT: The challenges our personnel come across vary. We have the same challenges any group of testers has. The testing world is changing rapidly, so balancing improving our people's skills and knowledge while maintaining service delivery to our clients is always a challenge. The need to apply the right techniques and solutions is always a pressure that challenges us, but we cope well with this through peer support and good use of central information repositories.

TEST: Who are your main customers, today and in the future?

GT: Our clients are major firms, many of whom are in the FTSE 100, whose reach extends to most areas of life, not just in the UK but around the world. Through consistently good work for these clients and building strong relationships with them, we hope to continue working with them in the future – as well as gaining new, similarly eminent clients.

TEST: How do you see the role of the software tester changing and developing in the current economic climate? What are the challenges and the opportunities?

GT: The role of the software test professional is developing into one that spans the whole project lifecycle. Firms with a mature approach to change now recognise that testing towards the end of the project lifecycle is too late. Increasingly, testing is being done at the start of programmes, where it can deliver high value by preventing defects.


Page 20: TEST Magazine - June-July 2012


The best test professionals have either experience or an appreciation of project and programme management, as well as the challenges faced by the different parties involved in a complex project. We recognise this and invest considerable resource in improving our consultants' skills and knowledge. While the economic climate is difficult at the moment, it has little impact on the demand leading businesses have for support. A fluctuating economy drives change, whichever direction it is going in, and change requires risk management and testing services.

The opportunities for high-quality testing consultancy and services are enormous. Clients are increasingly aware that excellent testing implemented as early as practicable ensures better quality and improves their ability to make informed business decisions.

TEST: What testing methodologies do you use, eg Agile, Waterfall, etc?

GT: We have people with experience in all the major testing methodologies.

TEST: What current issues, regulations, legislation and technologies are having an impact on your business, and which, if any, will be important in the future?

GT: As far as regulatory changes are concerned, we've been very busy on Solvency II and Basel III testing activities and consultancy.

The pace of mobile phone technology is creating opportunities. I remember that it was only a handful of years ago that WAP made its appearance, but now we are all walking around with fully functional computers and media centres in our pockets. Add the cloud to this and we have a positively explosive mix of technology that will take us who knows where in the next five years? We watch this very closely and constantly adapt to provide the most up-to-date service.

TEST: What software testing products and tools have had a positive impact on your operations? How have they helped?

GT: Working with such a diverse range of clients I come across numerous tools. These include the obvious well-known test management and automation choices, but I'm seeing increasing use of open source testing tools combined with the innovative use of custom test framework management solutions. I am very keen to see tool selection based on what is right for the business and the job at hand, rather than jumping for the tool that everyone else is using. At the highest level, testing is the same challenge for all organisations, but not all organisations are the same.

Tools have helped me most when it comes to producing meaningful reports for senior decision makers. The challenge here is always how to present complex testing-related information in a concise form that supports informed decision making. For senior executives it’s not a matter of knowing how much testing has been done, but what the exposure to risk may be and what confidence they have in making a decision to put a product into the production domain.

TEST: Giuseppe Tozzo, thank you very much.


Giuseppe Tozzo
Senior manager
Deloitte LLP
www.deloitte.com

Page 21: TEST Magazine - June-July 2012

Industry-leading Cloud, CEP, SOA and BPM test automation
Putting you and the testing team in control

Since 1996, Green Hat, an IBM company, has been helping organisations around the world test smarter. Our industry-leading solutions help you overcome the unique challenges of testing complex integrated systems such as multiple dependencies, the absence of a GUI, or systems unavailable for testing. Discover issues earlier, deploy in confidence quicker, turn recordings into regression tests in under five minutes and avoid relying on development teams coding to keep your testing cycles on track.

Support for 70+ systems including:
Web Services • TIBCO • webMethods • SAP • Oracle • IBM • EDI • HL7 • JMS • SOAP • SWIFT • FIX • XML

GH Tester ensures integrated systems go into production faster:

• Easy end-to-end continuous integration testing
• Single suite for functional and performance needs
• Development cycles and costs down by 50%

GH VIE (Virtual Integration Environment) delivers advanced virtualized applications without coding:

• Personal testing environments easily created
• Subset databases for quick testing and compliance
• Quickly and easily extensible for non-standard systems

Every testing team has its own unique challenges. Visit www.greenhat.com to find out how we can help you and arrange a demonstration tailored to your particular requirements. Discover why our customers say, “It feels like GH Tester was written specifically for us”.

Page 22: TEST Magazine - June-July 2012


Traditional automation is dead

Stephen Readman takes us on a journey from traditional testing to a place where software specifications are automated upfront and testers have a shift in focus...

Traditional automation is dead. No, really, it is. Gone are the days when we focussed on automating a manual regression pack just because it had become time-consuming to execute.

I'll let you in on a little secret: when test automation engineers start the task of automating your regression pack, they spend a long time trying to understand the tests and how they could be automated. They often disregard parts of the manual tests, quite simply because the manual tests were designed by a manual tester to be executed by a manual tester, and they weren't thinking about ease of automation at the time they wrote the manual test scripts.

Once automated, do you throw away the original manual tests? Why not? Generally this is because the automated regression pack is not tester- or business-facing/readable. You can rarely give automated test scripts to a new user, tester or business analyst as a means to learn about the system. They cannot easily learn about a system from the automation code alone. They have to read the manual test too, but who has the time to keep the manual tests and their automated counterparts in sync? If you could write the manual tests as structured specifications and apply the automation to those specifications, you wouldn't have two things to keep in sync.
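To make that idea concrete, here is a toy sketch – every name in it is invented for illustration – of a specification kept as business-readable text and executed directly. Real teams would reach for an established tool such as Cucumber or FitNesse rather than a hand-rolled runner, but the principle is the same: one artefact, read by people and run by machines.

import re

# The specification is the single artefact: business-readable text that the
# tiny runner below also executes. All names here are invented for the sketch.
SPEC = """
Given a basket containing 2 items
When I remove 1 item
Then the basket contains 1 item
"""

STEPS = []  # (compiled pattern, function) pairs registered by @step

def step(pattern):
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register

@step(r"Given a basket containing (\d+) items?")
def create_basket(ctx, n):
    ctx["items"] = int(n)

@step(r"When I remove (\d+) items?")
def remove_items(ctx, n):
    ctx["items"] -= int(n)

@step(r"Then the basket contains (\d+) items?")
def check_items(ctx, n):
    assert ctx["items"] == int(n), f"expected {n}, got {ctx['items']}"

def run(spec):
    ctx = {}
    for line in filter(None, (l.strip() for l in spec.splitlines())):
        for pattern, func in STEPS:
            match = pattern.fullmatch(line)
            if match:
                func(ctx, *match.groups())
                break
        else:
            raise ValueError(f"no step matches: {line!r}")

if __name__ == "__main__":
    run(SPEC)
    print("specification holds")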

When you've finished reading this article I hope you will see a new way to take a manual regression test pack and automate it.

Checking vs. testing: please differentiate between them
We keep muddling checking and testing up. In highly scripted approaches, both automated and manual, we forget about the skill of testing. Testers find defects and do far more than any automation can. People think and contextualise – tools can't.

Automated checks are powerful: they give us the ability to regularly determine whether the software system does what it is supposed to do – the basics of what the business asked for and what the technical teams implemented.
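As a minimal illustration (the business rule here is made up), a check is just a scripted, binary verification that a machine can repeat on every build:

import unittest

def delivery_charge(order_total):
    # Hypothetical business rule: orders of 50 or more ship free.
    return 0 if order_total >= 50 else 5

class DeliveryChargeChecks(unittest.TestCase):
    # Each check verifies one agreed fact - it cannot explore, judge or
    # contextualise; it can only confirm or refute what we defined as correct.
    def test_small_orders_pay_for_delivery(self):
        self.assertEqual(delivery_charge(49), 5)

    def test_large_orders_ship_free(self):
        self.assertEqual(delivery_charge(50), 0)

if __name__ == "__main__":
    unittest.main()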

Why ask people to do what computers can easily do? Even if people can stay awake during scripted manual testing, do you think they'll do the best job? It would be much better to have them do session-based exploratory testing; the combination of this with good automated checking drives the quality of delivered systems higher.

Michael Bolton from Developsense wrote a great blog article on ‘Checking vs. Testing’ at: http://www.developsense.com/blog/2009/08/testing-vs-checking/

Be agile – and I don't mean Agile methodologies
How often, as manual testers, as human beings, are we asked to double-check that something still works? This becomes regression testing and, amongst other things, is detrimental to the team's agility. In 'Implementing Lean Software Development', Mary Poppendieck asks: "How long would it take your organisation to deploy a change that involves just one single line of code? Do you do this on a repeatable, reliable basis?"

People have become so hung up on manual regression packs that the packs grow cumbersome, and then we look to automate them. Oh, and they're easy to count – so our managers become focussed on the number of tests created, executed, passed and failed.

The Three Amigos – test, dev and the business
It doesn't have to be fixed at three people, but you get the idea. Having the right people collaborating on the software specification is powerful. The fact that they collaborate (as opposed to reviewing each other's work in mini-iterations) ensures that the right software is being developed. It also ensures that all three parties do not produce a specification for something where the idea is incomplete or woolly.

This conversation is powerful; it becomes even more powerful if the output is a specification which can be automated easily. There's an interesting thought: automated specifications!

Testers have already tried to get testing involved earlier in the software lifecycle, as testers who do requirements testing. Few testers, though, are invited to actually work with the business and developers on the requirements themselves.

Automate the blockers
We have all worked with complex systems that have multiple associated services and systems wired into them. We should try to focus on testing the system under test, as opposed to the entire system and its supporting sub-systems. Some of the blockers for high levels of test automation are environment provisioning, data, and the supporting systems being a moving target.

Through smart use of stubs, environment automation and clear scope we can achieve high levels of automation. This is going to require a team effort where, for example, the testers contribute to stub design by sharing the variety of test cases and expected responses that the stub may have to deal with. Ultimately the whole team benefits and becomes much more agile when dealing with change.
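A stub can be remarkably small. The sketch below – paths and canned responses invented for illustration – stands in for a downstream service over HTTP, so the system under test can be exercised without the real dependency:

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses, keyed by path. In practice the testers supply these
# pairs, since they know the cases the stub must cover.
CANNED = {
    "/customer/42": {"status": "active", "credit_limit": 5000},
    "/customer/99": {"error": "not found"},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path, {"error": "unstubbed path"})
        payload = json.dumps(body).encode("utf-8")
        self.send_response(200 if "error" not in body else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep test output quiet

def start_stub(port=8099):
    server = HTTPServer(("localhost", port), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    stub = start_stub()
    # The system under test would now be pointed at http://localhost:8099
    # instead of the real downstream service.
    input("Stub running on port 8099; press Enter to stop.")
    stub.shutdown()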

Let’s face it, many of the “automation blockers” above are also “scripted manual testing blockers” too, testers do spend too much time getting the system and data into a state where their manual tests can be executed.

Living documentation
Tests, and in fact testers, have always been seen as holding the keys to a wealth of information relating to the software system, its nuances and complexities. There are a variety of tools and techniques available to developers to make the code they write 'self-documenting' – this is great. Since the specifications are business-readable and have key examples included, why not take those specifications and present them back as living documentation? After all, they do describe what the software does and doesn't do.

Conclusion of my learning experience
I'm a technical tester – I've been involved in various forms of test automation from day one, from specific tools to aid manual testers to full-blown automated testing. I specialised in performance testing for over three years and then started looking at Continuous Integration (CI) approaches, where test automation had its part to play as a continuous testing component of CI.



It’s bothered me for a long time that the testing fraternity rarely have the right skill set to do test automation. Where the testers lack coding skills, the developers lack testing skills – herein lies the problem. I worked hard to implement test automation frameworks that abstracted the test automation (coding) from the intention of the tests (skill of testing).

I was lucky to meet Michael Bolton, of Developsense, at a Scottish Testing Group event last year. Through discussion with Michael, the whole 'testing vs checking' thing finally came together for me. The crux of what I perceived as the 'test automation problem' was actually the 'checking automation problem' and how to ensure testers remain key players.

Even though I now understood that higher quality checking is my focus, I am still convinced that a tester could use their testing skills earlier using the checking approach. Part of the skill of testing could be used to produce higher quality checks. Why bottle up that knowledge and skill and reserve it for testing a product release, when it could be used to make sure the product was sound in the first place?

Latterly a colleague mentioned meeting Gojko Adzic at a conference, and later I attended one of his Specification by Example courses hosted at Edinburgh University. His book, Specification by Example, is one of the best IT books I've read recently – finally a well-rounded, pragmatic approach, which Adzic calls "Specification by Example" (SBE). You could consider Specification by Example a hybrid of BDD (Behaviour-Driven Development) and TDD (Test-Driven Development), but it's really neither, and Adzic dedicates a section of the book to explaining why.

The idea is to focus on writing software specifications in a structured way, adding key examples – basic but fundamental test cases – and then using tools which allow the specification, with its key examples, to become an automated specification.

From the tester’s point of view it’s important to note that we would never do Tester Driven Development, it’s also important to note that if a tester was to contribute to the SBE process early, he is in fact providing tester guidance upfront before the mistakes are made.

One more thing...
I have another requirement: all of the approaches and techniques for efficient upfront automated specifications are reusable in the context of regression test automation.

Now I had the correct methods for the tester to apply the skill of testing during the specification phase: they contribute to the quality of the specification. With the right tools the specification becomes an executable specification. The whole method is business-driven and produces business-readable specifications.

Move your software’s lifecycle forward in time and into live and then maintenance cycles. What happens when you want to implement a quick fix or change? What happens if your manual test suite takes days or even weeks to run? What happens if the people familiar with the test suites and the system have been redeployed on to new exciting projects? The risks begin to stack up!

When considering the benefits of any approach to software development and testing, think ahead and consider its usefulness through the entire lifecycle of the software.

Stephen Readman
Senior test consultant
Sopra Group UK
www.sopragroup.co.uk


Paul Gerrard and Susan Windsor offer a simple approach to creating business stories from requirements that not only validates the requirements but also creates examples of features that can feed the Behaviour-Driven Development approach and be used as system or acceptance test cases.

Specification by example: How validating requirements helps users, developers and testers

The success of Agile reminds us that teams are at their most effective when they are focused on a business goal and when communication and feedback are excellent. Business stories are simple, compact 'examples' of the behaviour of software. When placed at the centre of software delivery, these stories close the communication gap between users, developers and testers.

The flow of knowledge from requirements through business stories towards automated tests is commonly called Specification by Example or SBE. SBE creates both ‘living documentation’ and ‘executable specifications’. Both analysis and testing skills are required to create good stories so there is a real opportunity for testers to move upstream and closer to the business by using the Business Story Method.

An example of a business story
You can see a fuller explanation of the structure of business stories in [1]. We'll just show a simple example of a story here. Consider, for example, the following requirement:

"The System will maintain a stock level, representing the count of physical units in stock for all stock items. When transfers to or from stock are made, the stock level will be adjusted accordingly."

From this simple requirement, we envisage at least one feature which we'll call 'stock level'. A single data-driven scenario with a table of example data is also presented. It's clear that a scenario is simply a test case in business language. They don't have to be data-driven of course.

Feature: Stock Level
  As a stock manager
  I want to maintain stock records
  So that stock levels are always accurate

Scenario: adjust stock level
  Given a current stock level <current> for an item
  When I add <newitems> to stock
  Then the new stock level should be <new>
  And display message <message>
  But stock level >= 0 at all times

Examples:
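(The table of example data did not survive this layout; the rows below are our own illustrative values, not the authors' originals. Note how the third row forces a decision about what 'stock level >= 0 at all times' should mean.)

current | newitems | new | message
10      | 5        | 15  | Stock level updated
10      | -10      | 0   | Stock level updated
5       | -6       | 5   | Rejected: stock cannot fall below zero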

Language and ambiguity
The English language has around a quarter of a million words in it, of which perhaps 20 percent are no longer in use [2]. If distinct senses are counted, the number may be three quarters of a million definitions. The scope for ambiguity in natural language is boundless, and the scope for errors in software based on requirements written in natural language is also boundless.

In our experience, one of the easiest ways to highlight misunderstandings in projects is to ask for a definition of words, terms or concepts that are used. The chances that our colleagues have exactly the same understanding as us are remarkably low [3].


Testing or (we prefer to say) validating requirements as a distinct activity not only finds faults in requirements but improves them. Giving this activity a name and securing time to perform it means all bets are off and anything in a requirement can be challenged. Of course, this activity needs to happen as early as possible to prevent downstream problems.

Story-driven requirements validation
In this approach, the technique involves taking a requirement and identifying the feature(s) it describes. For each feature, a story summary and a series of scenarios are created, and these are used to feed back examples to stakeholders. In a very crude way, you could regard the walkthrough of scenarios and examples as a 'paper-based' unit or component test of each feature.

The scenarios are limited in scope to a single feature but taken together, a set of stories validates the overall consistency and completeness of a requirement with respect to the feature(s) it describes.

Creating stories and scenarios – DeFOSPAM
DeFOSPAM is the mnemonic we use to summarise the seven steps used to create a set of stories and scenarios for a requirement that allow us to comprehensively validate our understanding of that requirement: Definitions – Features – Outcomes – Scenarios – Prediction – Ambiguity – Missing.

The number of features and stories created for a requirement is obviously dependent on the scope of that requirement. A 100-word requirement might describe a single system feature, and a few scenarios might be sufficient.

A requirement that spans several pages of text might describe multiple features and require many stories and tens of scenarios to fully describe. We recommend you try and keep it simple by splitting complex requirements.

D – Definitions: If agreement on terminology or feature descriptions cannot be attained, perhaps this is a sign that stakeholders do not actually agree on other things? These could be the goals of the business, the methods or processes to be used by the business, the outputs of the project or the system features required. A simple terminology check may expose symptoms of serious flaws in the foundation of the project itself. How powerful is that?

Identify your sources of definitions. These could be an agreed language dictionary, source texts (books, standards etc.) and a company glossary. You will almost certainly need to update these as you proceed with the analysis.

On first sight of a requirement text, underline the nouns and verbs and check that these refer to agreed terminology or that a definition of those terms is required.

What do the nouns and verbs actually mean? Highlight the source of definitions used. Note where definitions are absent or conflict.

Where a term is defined, ask stakeholders – is this the correct, agreed definition? Call these ‘verified terms’.

Propose definitions where no known definition exists. Mark them as ‘not verified by the business’. Provide a list of unverified terms to your stakeholders for them to refine and agree.

When you start the process of creating a glossary, progress will be slow. But as terms are defined and agreed, progress will accelerate rapidly. A glossary can be much more than a simple list of definitions. It's really important to view the glossary as a way of making requirements both more consistent and compact – it's not just an administrative chore.

A definition can sometimes describe a complex business concept. Quite often in requirements documents, there is huge scope for misinterpretation of these concepts, and explanations of various facets of these concepts appear scattered throughout requirements documents. A good glossary makes for more compact requirements.

Glossary entries don’t have to be ‘just’ definitions of terminology. In some circumstances, business rules can be codified and defined in the glossary. A simple business rule could be the validation rule for a piece of business data, for example a product code. But it could be something much more complex, such as the rule for processing invoices and posting entries into a sales ledger.

Glossary entries that describe business rules might refer to features identified elsewhere in the requirements. The glossary (and an index of usage of glossary entries) can therefore provide a cross-reference of where a rule and its associated system feature are used.

F – Features – One story per feature: A feature is something the proposed system needs to do for its user; it helps the user to meet a goal or supports a critical step towards that goal.

When visualising what is required of a system, users naturally think of features. Their thoughts traverse some kind of workflow where they use different features at each step in the workflow. ‘… I’ll use the search screen to find my book, then I’ll add it to the shopping cart and then I’ll confirm the order and pay’.

Each of the phrases – 'search screen', 'shopping cart', 'confirm the order' and 'pay' – sounds like a different feature. Each could be implemented as a page on a website perhaps, and often features are eventually implemented as screen transactions. But features could also be processes that the system undertakes without human intervention or unseen by the user. Examples would be periodic reports, automated notifications sent via email, and postings to ledgers triggered by, but not seen by, user activity.

Things to look for:
• Named features – the users and analysts might have already decided what features they wish to see in the system. Examples could be 'Order entry', 'Search Screen', 'Status Report'.

• Phrases like, ‘the system will {verb} {object}’. Common verbs are capture, add, update, delete, process, authorise and so on. Object could be any entity the system manages or processes for example, customer, order, product, person, invoice and so on. Features are often named after these verb-object phrases.

• Does the requirement describe a single large or complex feature or can distinct sub-features be identified? Obviously larger requirements are likely to have several features in scope.

• Are the features you identify the same features used in a different context or are they actually distinct? For example, a feature used to create addresses might be used in several places such as adding people, organisations and customers.

O – Outcomes – One scenario per outcome: More than anything else, a requirement should identify and describe outcomes. An outcome is the required behaviour of the system when one or more situations or scenarios are encountered. We identify each outcome by looking for requirements statements that usually have two positive forms:

Active statements that suggest: '…the system will…'
Passive statements that suggest: '…valid values will…' or '…invalid values will be rejected…' and so on.

Active statements tend to focus on behaviours that process data, complete transactions successfully and have positive outcomes. Passive statements tend mostly to deal with data or state information in the system.

There is also a negative form of requirement. In this case, the requirement might state, ‘…the system will not…’. What will the system not do?

You might list the outcomes that you can identify and use that list as a starting point for scenarios. Obviously, each unique outcome must be triggered by a different scenario. You know that there must be at least one scenario per outcome.

There are several types of outcome, of which some are observable but some are not.

Outputs might refer to web pages being displayed, query results being shown or printed, messages being shown or hard copy reports being produced. Outputs refer to behaviour that is directly observable through the user interface and result in human-readable content that is visible or available on some storage format or media (disk files or paper printouts).

Outcomes often relate to changes of state of the system or data in it (for example, updates in a database). Often, these outcomes are not observable through the user interface but can be exposed by looking into the database or system logs perhaps. Sometimes outcomes are messages or commands sent to other features, sub-systems or systems across technical interfaces.

Often an outcome that is not observable is accompanied by a message or display that informs the user what has happened. Bear in mind that it is possible for an outcome or output to be 'nothing'. Literally nothing happens. A typical example here would be the system's reaction to a hacking attempt or selection of a disabled menu option/feature.

Things to look out for:
• Words (usually verbs) associated with actions or consequences. Words like capture, update, add, delete, create, calculate, measure, count, save and so on.

• Words (verbs and nouns) associated with output, results or presentation of information. Words like print, display, message, warning, notify and advise.

S – Scenarios – One scenario per requirements decision: We need to capture scenarios for each decision or combination of decisions that we can associate with a feature.

The most common or main success scenario might be called the normal case, the straight-through or plain-vanilla scenario. Other scenarios represent exceptions and variations. This concept maps directly with the use cases and extensions idea.

Scenarios might be split into those which the system deals with and processes, and those which the system rejects because of invalid or unacceptable data or particular circumstances that do not allow the feature to perform its normal function. These might be referred to as negative, error, input validation or exception condition cases.

Things to look out for:
• Phrases starting with (or including) the words 'if', 'or', 'when', 'else', 'either', 'alternatively'.

• Look for statements of choice where alternatives are set out.

• Where a scenario in the requirement describes numeric values and ranges, what scenarios (normal, extreme, edge and exceptional) should the feature be able to deal with?

P – Prediction: Each distinct scenario in a requirement, setting out a situation that the feature must deal with, should also describe the required outcome associated with that scenario. The required outcome completes the definition of a scenario-behaviour statement. In some cases, the outcome is stated in the same sentence as the scenario. Sometimes a table of outcomes is presented, and the scenarios that trigger each outcome are presented in the same table.

A perfect requirement enables the reader to predict the behaviour of the system’s features in all circumstances. The rules defined in the requirements, because they generalise, should cover all of the circumstances (scenarios) that the feature must deal with. The outcome for each scenario will be predictable.

When you consider the outcomes identified in the Outcomes stage, you might find it difficult to identify the conditions that cause them. Sometimes outcomes are assumed, or a default outcome may be stated but not associated with scenarios in the requirements text. These 'hanging' outcomes might be important but might never be implemented by a developer – unless, that is, you focus explicitly on finding them.

Things to look out for:
• Are all outcomes for the scenarios you have identified predictable from the text?

• If you cannot predict an outcome, try inventing your own outcomes – perhaps a realistic one and perhaps an absurd one – and keep a note of these. The purpose of this is to force the stakeholder to make a choice and to provide clarification.

A – Ambiguity: The Definitions phase is intended to combat the use of ambiguous or undefined terminology. The other major area to be addressed is ambiguity in the language used to describe outcomes.

Ambiguity strikes in two places. Scenarios identified from different parts of the requirements appear to be identical but have different or undefined outcomes. Or two scenarios appear to have the same outcomes, but perhaps should have different outcomes to be sensible.

In general, the easiest way to highlight these problems to stakeholders is to present the scenario/outcome combinations as you see them and point out their inconsistency or duplication.

Look out for:


• Different scenarios that appear to have identical outcomes but where common sense says they should differ.

• Identical scenarios that have different outcomes.

M – Missing: If we have gone through all the previous steps and tabulated all of our glossary definitions, features, scenarios and corresponding outcomes, we perform a simple set of checks as follows:

Are all terms, in particular nouns and verbs, defined in the glossary?

Are there any features missing from our list that should be described in the requirements? For example, we have create, read and update features, but no delete feature.

Are there scenarios missing? We may have some, but not all, combinations of conditions identified in our table.

Do we need more scenarios to adequately cover the functionality of a feature?

Are outcomes for all of our scenarios present and correct?

Are there any outcomes that are not on our list that we think should be?

What next?
The requirements and stories are now trusted to be a reliable basis for development and testing. The requirements provide a general description of the required system behaviour, and the stories and scenarios identify the required features and concrete examples of their use. The stories in particular can be used by developers as the source content for Behaviour-Driven Development, and by testers to create good system or acceptance test cases.

Overall, the DeFOSPAM approach enables the requirements and stories to be trusted as ‘living documentation’ and even ‘executable specifications’.

And what if requirements change? Stories and scenarios provide the lingua franca of users, developers and testers. Changes in requirements can be mapped directly to stories and scenarios, and thereby, directly to developers and/or system tests. The ‘living documentation’ ideal is achieved.

Susan Windsor
Principal
Gerrard Consulting
www.gerrardconsulting.com

Complete test data management suite

Synthetic test data creation • Data masking/obfuscation • Test data repository • Data subsetting • Data profiling and coverage • Data design • Test matching • SOA testing virtualization • Version control

> IMPROVE EFFICIENCY AND QUALITY OF TEST CYCLES

> REDUCE DEVELOPMENT AND TESTING COSTS

> COMPLY WITH DATA PROTECTION LAWS

> CREATE STABLE TESTING ENVIRONMENT FOR SOA

> DELIVER AGILE PROJECTS FASTER

FOR A FREE DEMO CONTACT: [email protected] | Tel: +44 (0) 1865 884600 | www.grid-tools.com

Grid-Tools are the leading test data management vendor internationally, offering holistic, end-to-end test data solutions for traditional and Agile development and testing environments, including those in the Cloud and Virtualized Services. Our innovative solutions enable companies to provision high quality, compliant test data that is 'fit for purpose'.


Paul Gerrard
Principal
Gerrard Consulting
www.gerrardconsulting.com

References
1. The Business Story Pocketbook, Paul Gerrard and Susan Windsor, businessstorymethod.com.
2. http://oxforddictionaries.com/words/how-many-words-are-there-in-the-english-language.
3. The Fateful Process of Mr. A Talking to Mr. B, Wendell Johnson, Harvard Business Review – On Human Relations, 1979.


Agile and the hare
This issue Angelina Samaroo looks at Agile, exploding some myths and taking a detour down memory lane...

In the supplement distributed with this issue we celebrate Agile. OK, perhaps that's stretching the idea a little. So, of course, is the hype around this new way of working. To dispel a few myths (as always, as I've heard them):

1. Agile is new – no. The term Agile in this context was introduced relatively recently; the way of working is as old as time. Test as you go and eat at regular intervals. If you're cooking a meal, you taste as you cook. No point trying to taste the chicken before it hits the heat – it will put you off chicken for life and may well end it. Bacteria, like requirements, are no good in the raw.

2. Agile is a methodology in its own right – no. It is a set of good practices, applicable when developing software, following the V-lane or swimming in your own soup.

3. Scrum is the same as Agile – no. Scrum describes one way of working when developing software following the Agile wave. Apparently everyone likes rugby. At least six nations do.

The Agile Alliance has a manifesto. Now for me, the term manifesto generally applies to some political agenda: vote for what I say, not what I do. Of course, if you're in the international newspaper business with millions of readers daily, then what you say is what you do. So for them, there is no need for a manifesto. If we complain then we just don't understand the needs of the common man. If we wield such influence, why can't we be above the common law, just this once?

Back to our world – not free from politics, never free from politics. The manifesto is simple, and says that it's good to talk, and it's good to deliver what the customers want, when they want it. Life changes, and don't we know it. I plan to go out without an umbrella each day, and each day I get wet, or so it seems. Following that plan, to my Mum, is bad; to me it's essential to my status as one of life's optimists. There can and must be sun, and it will be today.

To support the Agile manifesto, there is a set of Agile principles. Again, all good. Get your head out of the paperwork; work together towards the common goal and make sure you know your stuff, so you can play your part fully.

A little at a time
Quite how and when Agile became the antonym of getting-the-requirements-right-before-you-code-anything is a mystery to me. Not trying to define fully the whole system up front I understand. We may not have the mental agility to do this credibly. We may well have not researched our competitive environment well enough to get ahead of it. We may well misjudge our customers' real needs. We may not fully understand the technical implications of what we're trying to achieve. We would very much like time for these things to become evident and real to us.



Global IT Services and Consultancy Provider with the expertise and resources to meet and exceed your expectations every time

PARTNER WITH A TEAM OF EXPERTS

www.hexaware.com
Global locations: United States | Canada | Mexico | United Kingdom | Germany | India | Singapore | Japan | France | Australia | Netherlands | Scandinavia | Dubai
4th Floor, Cornwall House, 55-57 High Street, Slough, Berkshire SL1 1DZ, UK. Telephone: +44 (0) 1753 217160, Fax: +44 (0) 1752 217161


So, just a little at a time please. And please can you explain as they did long ago – in story form. I have no idea where the term user-story came from (and yes, I guess I could Google it), but I prefer the term plain-(insert your language here)-description. The term user-story suggests that there is a user who knows how the system should work. For me, if the system being developed is technically complex and safety critical, then never mind the users, bring me the egg-heads. Bring me the documentation. As a software engineer, I don't need it to be just so, I just need it to be so – as described. If it is a similar system, but as an Xbox offering, then bring in the D&D brigade – they know best. Way out of my league. User-stories make sense here.

Think about the design
However, creating code without thinking about some kind of design I do not understand. There cannot be valuable code without some promulgation of a valuable requirement. Back to that chocolate-spewing ATM – it's never done that, so presumably the programmers know that they deal in money, not cocoa.

Surely we at least need to think through what we're trying to achieve before we commit to code? Note that one of the Agile principles says that 'technical excellence and good design contribute to agility'. It does not say just give it your best shot. It says be excellent at it. In other words, study it; practise it until you know it. Then you can be Agile.

Design a little; build a little; test a little. Not: talk a lot, build a little, and then test a lot. The earlier you find a defect, the cheaper it is to fix there and then. That principle has not changed.

Delivering valuable working software is what Agile says. It does not say deliver to the customer and let them see if it adds value.

The self-organising team
Another Agile principle is the idea of the self-organising team being best suited to technical excellence. This appears to have been interpreted as people working under their own steam, without management. My interpretation is that you do not ring your manager to tell them that you will be late for work; you ring your peer worker so they can take over any pieces of work on the critical path to the current delivery. When planning your summer holiday you arrange with your team how things will be done while you're away, and not just get your line manager to sign off your holiday card and leave them to sort out your work detail for you. Self-organised, self-motivated, group-rewarded. The win:win.

As an example, I worked in the Defence sector. To get the job I had to qualify first as an engineer. To hold the job I had to practise what I had been taught. I had to gain the required knowledge and understanding first. The team spent many happy hours learning about the system: analysing it, designing it, coding it, testing it, showing it off to the customer. The customer said yes – each and every time. No, we weren't perfect, but we figured we'd try to be. It was good enough. In today's (convenient) interpretation, we were in a deep, dark, inefficient, unfriendly team – in the dreaded V – complete with useless paperwork up to our necks and happy in our silos. No one told us that a silo was for storing people, not wheat, as we had been led to believe at school. What price lack of education, I wonder?

So, the day arrived when we could read at leisure no more. Software needed to be delivered quickly. The team didn't crumble, of course; it got itself organised. What did they need and when by? This was the question each and every morning. We moved to shift work. The morning shift finished at 3pm, but we stayed 'till five anyway. The evening shift started at 11, but we came in at nine. Why? Because we were all friends; we were all captive to the cause; we all knew not just what had to be done, but how to do it. If we could not, then we knew a man who could. When the time came to sprint we could. We had already strengthened our legs by running countless marathons. We had already done many high-jumps. We knew how to jump long. So when we had to be quick, it was business as usual, with a dash of urgency.

We did not email documents for sign-off; we walked them around – the documentation still had to be there – we still needed to engineer solutions. The programmers gave us code to test once it was compiled. We had already been taught how to test at component integration level, so we could shift easily into the right position.

We did not need to be told what to do, we just walked the walk. However, when to do things was a key concern. The system was far too complex for individual heroes. The system required certain functionality to be integrated in a specific order to deliver value in chunks. My boss at the time taught me a lesson in management that I would never forget. He took this release planning task upon himself. Every day we came in to a list of tasks to be completed during that day, in the order given, from design to code to test. Ah, to be properly managed, not bossed around, but led, to success, to commendations, to letters of congratulations, to puffing your chest out and walking tall, to that smile I will take to my grave. Thank you Mr. Wright – a fitting name even if you don’t know how to spell it.

The final words to the Londoners and Her Majesty – 2012 is finally here. If you've got tickets, wave our flag. If not, best leave the Jubilee line to Her Majesty's celebrations – as useful to her as it will be to you.

Angelina Samaroo
Managing director
Pinta Education
www.pintaed.com


Testing and growth
Mike Holcombe suggests that one way to help boost economic growth is to put some effort into trying to explain to SMEs how they can make their testing better.

We hear a lot in the media about the need to generate growth in our economy. Few concrete ideas seem to be emerging from the Government – maybe industry is on its own and needs to come up with practical measures itself.

A popular myth is that technology-based 'start-ups' will provide the answer, but the success rate of these is quite low (around 35 percent in the IT world). There are a very few spectacular, headline-grabbing successes, but these alone will not do the job.

The CBI has identified that medium-sized businesses (MSBs, 50-500 employees) form a much lower proportion of the UK business landscape – half the proportion to be found in the USA, Germany etc. We need to turn successful SMEs into MSBs to generate growth quickly.

Within the area of IT and software development, SMEs often struggle because of problems with delivering software and services on time (and budget). Part of this is because they don't really know how to test their systems properly, and this leads to all sorts of problems after delivery, which diverts resources away from the next set of projects.

There has been a clear example of this recently where the delivery of a system to a large corporate client has led to many problems, almost all caused by inadequate testing at different stages of the process. I know the executives who commissioned the software are very unhappy about the quality of what they got in contrast to their previous experiences with large corporate vendors.

Bearing in mind the CBI strategy of trying to make SMEs into MSBs we should put some effort into trying to explain to SMEs how they can make their testing better. The trouble is that they may not be able to afford consultants or expensive courses on testing – not just in terms of money but also with respect to the availability of staff. This needs to change and priorities need to be rebooted.

Before we can make progress we need to convince companies that they have a problem with testing. It would be interesting to know how much effort is put into testing and review in most SMEs. If the figure is less than 50 percent then there may be a problem – the large players spend a lot more effort and time than that.

The trouble is that this message needs to get to CEOs and finance directors rather than the software professionals. Until that happens we will always have a problem.

Mike Holcombe
Founder and director
epiGenesys Ltd
www.epigenesys.co.uk



Software security
With a rapid increase in the adoption of security test automation, especially in industries working on machine-to-machine solutions, consumer devices, and critical infrastructure protection, Codenomicon is finding success in providing software security. The company's founder and CTO speaks to TEST...

Ari Takanen is a founder and the CTO of Codenomicon, the company he spun out of the successful PROTOS test tools research of the Oulu University Secure Programming Group in his native Finland. The Codenomicon DEFENSICS platform his team created remains a key tool to quickly find quality, resiliency and security flaws within a broad array of applications. Thousands of developers and security analysts across the telecommunications, networking, manufacturing, financial services and defence industries use the tools to reduce costly reputation, quality and compliance risks.

TEST: What are the origins of the company; how did it start and develop; how has it grown and how is it structured?
Ari Takanen: Codenomicon was founded in 2001 in Finland, as a spin-off from Oulu University's Secure Programming Group (OUSPG) and the PROTOS project it conducted with VTT, a research organisation in Finland. All the founders came from these two organisations.

From the beginning it was clear that operating at a national level was not an option, and we needed to target an international market. Codenomicon was a 'born global' company; everything was aimed at international success. As an example of that, we created a so-called CESSNA plan – the name was an acronym of the six major global network equipment companies we wanted to gain as our customers. Within a few years we had five of them as paying customers. Today we work with basically all the big-name vendors in communication software and network devices.

During the ten years of Codenomicon's existence the business has been growing steadily, averaging about a 50 percent annual growth rate. Growth in personnel was a bit more moderate, and we quickly became a profitable company. Today we are about 100 people around the world. Our global headquarters are in Oulu, Finland, and regional headquarters are in California, USA, and Singapore. While all research and development is done in Finland, various support, sales and management roles are distributed around the world. For example, our CEO is based in our USA office, close to our biggest customers. This is because we want to be where our customers are. We have always been a customer-driven company.

TEST: What range of products and services does the company offer?
AT: Codenomicon offers tools and solutions for proactive security. We want to help our customers discover their own problems, be it a vulnerability in software or a misconfigured network element. Software security is critical for all companies developing or depending on software.

We offer two different solutions for 'knowing the unknown': proactive testing, and situation awareness. For testers, these solutions allow you to perform all steps of security testing, from attack vector analysis up to automated tests and remediation of found vulnerabilities.


Codenomicon Defensics is a proactive testing solution for finding and mitigating both known and unknown vulnerabilities in software even before deployment, improving application and network security. At the core of the solution is fuzz testing, a method where invalid inputs are fed to the system under test to expose vulnerabilities.
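The principle behind fuzzing can be sketched in a few lines of Python. The protocol and parser below are invented stand-ins, not Defensics itself: take a valid input, corrupt it at random, and treat anything other than a controlled rejection as a robustness bug.

import random

def mutate(data: bytes) -> bytes:
    # Randomly corrupt an otherwise valid message: flip, insert or drop bytes.
    buf = bytearray(data)
    for _ in range(random.randint(1, 8)):
        pos = random.randrange(len(buf)) if buf else 0
        op = random.random()
        if op < 0.5 and buf:
            buf[pos] = random.randrange(256)          # flip a byte
        elif op < 0.8:
            buf.insert(pos, random.randrange(256))    # inject a junk byte
        elif buf:
            del buf[pos]                              # drop a byte
    return bytes(buf)

def parse_message(data: bytes) -> None:
    # Stand-in for the system under test: a toy length-prefixed protocol.
    if not data.startswith(b"MSG|"):
        raise ValueError("bad header")                # graceful rejection: fine
    length = data[4]                                  # sloppy: trusts the length byte
    payload = data[5:5 + length]
    checksum = data[5 + length]                       # IndexError when the length lies
    if checksum != sum(payload) % 256:
        raise ValueError("bad checksum")

PAYLOAD = b"hello tester"
VALID = b"MSG|" + bytes([len(PAYLOAD)]) + PAYLOAD + bytes([sum(PAYLOAD) % 256])

if __name__ == "__main__":
    random.seed(2012)                                 # reproducible run
    parse_message(VALID)                              # sanity check: valid input passes
    for i in range(10_000):
        case = mutate(VALID)
        try:
            parse_message(case)
        except ValueError:
            pass                                      # controlled rejection is acceptable
        except Exception as crash:                    # anything else is a robustness bug
            print(f"case {i}: {type(crash).__name__} on {case!r}")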

Codenomicon's situational awareness tools help collect, filter and visualise information in real time from terabytes of data. They help visualise complex systems and communication interfaces for attack surface analysis, distinguishing significant data from irrelevant pieces of information.

The latest addition to our tools and solutions portfolio is Fuzz-o-Matic, a cloud-based testing-as-a-service platform for security testing. Fuzz-o-Matic customers just upload their software to the cloud over a secure connection, and download actionable, repeatable test results.

TEST: Does the company have any specialisations within the software testing industry?
AT: We are specialised in robustness and reliability testing using a technique called fuzz testing, or fuzzing, and we also conduct security assessments and advise on how to integrate security testing into the development lifecycle. We have millions of model-based, off-the-shelf test cases for over 200 protocols, and a general-purpose traffic capture fuzzer for testing proprietary protocols. Tools are available as licensed software, or as a personalised service and cloud-based testing-as-a-service solution.

TEST: Who are the company's main customers today and in the future?
AT: We work with over 200 companies that want to secure their software or services. Our customers are both builders and buyers of software and communication devices. Builders, such as network equipment vendors and global software houses, include companies such as Alcatel-Lucent, Cisco Systems, Microsoft, Motorola, Google, NSN, Huawei and Oracle. Buyers of software do acceptance testing of critical systems, and include governments, finance, telecommunications operators and service providers – for example AT&T, Verizon and T-Systems. Currently we see a rapid increase in the adoption of security test automation, especially in industries working on machine-to-machine solutions, consumer devices, and critical infrastructure protection.

TEST: What is your view of the current state of the testing industry and how will the recent global economic turbulence affect it? What are the challenges and the opportunities?
AT: Economic uncertainty in the world does not seem to affect information security. While companies are more careful with their investments in general, they still see the value of secure, robust and reliable software. Also, investment in security testing early on is cheaper than fixing the bugs and releasing patches after deployment, not to mention the cost of a denial-of-service situation. When money is tight and competition is hard, companies cannot afford risks related to product security and service downtime.


TEST: What are the future plans for the business?
AT: Codenomicon plans to maintain its focus on customer-driven development. We develop security test automation solutions for all industries that need them. Our customers are the main source of direction for future product roadmaps and for selecting the most critical features for test automation. Currently these seem to be focused on usability, test coverage and integration with third-party test automation frameworks.

TEST: How do you see the future of the software testing sector developing in the medium and longer terms?
AT: Each year since our inception in 2001 we have seen new companies embrace the main principles of product security: better coverage for security testing, and better automation and integration. I believe that true test automation, where tests are automatically designed, generated and executed, will take more of a foothold in a market which currently just seems to degrade towards more ad-hoc manual tests. When QA budgets are tight, the correct solution is not to reduce the amount of testing but to automate it.

TEST: Is there anything else you would like to add?
AT: When planning your tests, focus on return on investment (ROI) and risk assessment. The most expensive bugs are those that result in security mistakes. If the system crashes, loses money through denial-of-service attacks, or leaks confidential customer data, then you are in real trouble. Prioritise your tests.

TEST: Ari Takanen, thank you very much.


Ari Takanen
www.codenomicon.com



Surviving the perfect storm
While I'm sure we're all looking forward to the Olympics, many are asking if the sheer amount of network and mobile data traffic will test the IT infrastructure to destruction. Mike Howse says, if it's done properly, performance testing will de-risk your application both when you launch it and when the crowds of users come to visit.

You would have to live on Mars not to know that the 2012 Olympics is being held in London next month. Much has been written on the complex preparation for the Games – from the demands on immigration staff at airports through to transport around London and the emergency services – but what will severely test the IT infrastructure and those responsible for it will be the sheer volume of traffic. Sport-hungry fans will be desperate to have quick and easy access not only to results information but also to the wealth of historic data surrounding each athlete, race, match and event.

"So what, this is no different from any other Olympics," you may say. But it is, and the difference is that over 50 percent of those demanding access to information will do so on a mobile device. During the past four years we have seen the appearance of the tablet and the dramatic increase in smartphone proliferation throughout the world. Ask any organisation trying to deliver information to a mobile device and they will tell you of the complexities involved – from the non-stop technology upgrades with new operating systems and hardware to the 'supply' of bandwidth in certain countries of the world. The permutations are endless, and ensuring that seamless and reliable delivery of information behaves like a utility – it just works, reliable and fast – is a challenge to application developers.

How does an application evolve?
When a specification is given to developers, it will usually spell out x, y and z functionality and have some graphics to make the end product look good. It is implicit that the application scales – but usually no figure is given for the maximum number of concurrent users – and that 'good' response times will be experienced anywhere in the world. Inevitably the specification the development starts off with and the end product produced will be very different – specification 'creep' will have occurred as input is received for additional functionality, look and feel, graphics etc. Finally, at launch of the website, there will be social media links and probably advertising links as well.


We have painted a picture where there are 1,001 moving parts – server hardware, operating systems, freeware, routers, content management system, cache and others – and we expect the application to work seamlessly at lightning speed. Of course functional testing will have been done in the development phase but it is Web Operations that are typically handed the ‘finished’ article with a smile and a friendly comment of ‘yours’. Without scalability testing a disaster is likely to happen.

While we are familiar with the phrase 'if we build it they will come', we know that it is true regarding the physical staging of the 302 events across 26 sports with 10,000+ athletes – the raw parameters for London 2012. We can predict the numbers of spectators who will attend, and we know of the ensuing ticket rush. With an online application, we really don't know how many will come, but we can sort of predict the most newsworthy events. Apologies to the sport of Archery, but there really is going to be more news about the men's 100 metre heats, let alone the finals.

So here we have the perfect storm – the biggest show on earth, the most newsworthy show on earth, followed by sports-mad enthusiasts – and an unpredictable number of people around the world desperate for information. We know the numbers are big, but we don't know just how big.

Why do websites break down?
As we have seen, the response time experienced by an end-user is a result of the interaction of multiple pieces of hardware and software, in theory working in harmony. Given that the end-user is the ultimate judge of whether a service is good or bad, and we all have our own idea of what is a good response time and what is a bad one, we are dealing here with what a user 'perceives' to be a good response time. Perception is based on what they expect, what happened last time, whether they used Google in the recent past (who spend a lot of time and effort making their response times really fast) and maybe other subliminal factors.

The major reason websites break down is that the back-end database gets overloaded and just can't cope with the demands put on it. There could of course be other reasons as well, but typically when traffic spikes occur, the service gradually degrades until it just freezes. Users in the middle of accessing data remain 'locked in', while others trying to enter the system will typically see a browser request failure after, say, 30 seconds. Traditional hosting environments, whether in-house or external, have fixed architectural environments that grow organically over time. That strategy was probably OK in the past, but in the dynamic environment in which mobile access is needed, massive extra capacity, possibly for just a very short time, is very difficult to accommodate.

Doesn't the Cloud solve all my problems?
Increasingly, organisations are seriously looking at the Cloud as a solution for all sorts of IT challenges. Whether it is a financial move from capex to opex for accounting reasons, or the nature of the business means there is a real need to scale architectural resources up and down 'on demand', Europe is fast on the heels of the United States in Cloud adoption.

If you can predict when extra resources in terms of web/app/databases are needed that is great. So the expected surge in users logging in to take advantage, say, of a special offer advertised on TV, can be managed with a bit of forward planning. But if that surge is unplanned, it can take 10+ minutes for additional resources to be spun up in the Cloud – this is a very long time in the world

Page 40: TEST Magazine - June-July 2012

TEST | June 2012 www.testmagazine.co.uk

Organisations who demand the highest possible performance make sure they understand the performance dynamics of their application or service. That understanding comes from performance testing to give insight into how performance is affected by a variety of different traffic patterns. In the same way that a car manufacturer will tune an engine to perform in a multitude of situations, so we must ensure that no stone is unturned to make sure we deliver a consistently good service.

38 | Performance testing

What is really needed is a way to have instant access to extra resources – ideally triggered by the number of users and/or the response times currently being experienced.
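To make that concrete, here is a minimal sketch in Java of such a trigger – the threshold values and method name are illustrative assumptions, not any particular product’s API:

```java
// Sketch of a scale-up trigger of the kind described above: fire when either
// the live user count or the observed response time crosses a threshold.
// Both thresholds are invented for illustration.
public class ScaleTrigger {
    private static final int USER_LIMIT = 900;          // e.g. 90% of the tested maximum
    private static final long RESPONSE_LIMIT_MS = 2000; // slowest acceptable average response

    /** True when extra capacity should be brought online. */
    public boolean shouldScaleUp(int currentUsers, long avgResponseMs) {
        return currentUsers > USER_LIMIT || avgResponseMs > RESPONSE_LIMIT_MS;
    }
}
```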

Application performance dynamics
“If you can’t measure it, you can’t manage it.” Yes, I know it is an old management saying that goes way back – but it is equally applicable in today’s market for understanding the performance dynamics of an application or web service.

Performance testing is fundamental to providing a scalable and reliable service and should mimic what real users do from their geographic location. Web operations staff need to understand the maximum number of concurrent users the application can handle while maintaining good response times and application stability, and how the application deals with ‘spike’ traffic. A good performance test will highlight the location of bottlenecks. Once identified, these can be removed to tune for peak performance. This can be an iterative process: removing bottlenecks, re-testing and so on.
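A commercial tool will replay whole business processes from many geographic locations; purely to illustrate the core idea – many concurrent users, each timing its own requests – here is a minimal sketch in Java. The target URL and user count are invented:

```java
// A minimal load-test sketch: a fixed number of concurrent 'virtual users'
// each times one request, so we can see how response times behave under load.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class MiniLoadTest {
    public static void main(String[] args) throws Exception {
        int virtualUsers = 50;                               // concurrent users to simulate
        URI target = URI.create("https://example.com/");     // hypothetical system under test
        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
        List<Future<Long>> timings = new ArrayList<>();

        for (int i = 0; i < virtualUsers; i++) {
            timings.add(pool.submit(() -> {
                long start = System.nanoTime();
                HttpRequest req = HttpRequest.newBuilder(target).GET().build();
                client.send(req, HttpResponse.BodyHandlers.discarding());
                return (System.nanoTime() - start) / 1_000_000; // elapsed millis
            }));
        }
        long worst = 0, total = 0;
        for (Future<Long> f : timings) {
            long ms = f.get();
            worst = Math.max(worst, ms);
            total += ms;
        }
        pool.shutdown();
        System.out.printf("avg %d ms, worst %d ms over %d users%n",
                total / virtualUsers, worst, virtualUsers);
    }
}
```

Rerunning the same measurement at increasing user counts is what reveals the point at which response times start to deteriorate.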

We talked earlier about the necessity to protect the back-end database. If we have done some performance testing and know, once we have tuned the application, what our limits are in terms of concurrent users, we know that a safe load would be, say, 90 percent of that maximum. Beyond that we know the service will deteriorate and eventually everything will grind to a halt.

At that 90 percent loading, a sensible way to cope with additional traffic is to divert it to a ‘holding area’. At an airport, when there is no space for a plane to land it goes into a holding pattern until a docking bay becomes available. Similarly, when the number of users goes beyond our 90 percent threshold, this triggers the next requests to go into a holding pattern until system resources become available. While this is going on, users in the queue can be sent a suitable message apologising for the delay in service.
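Sketched in Java below, the same holding-pattern idea as a simple admission gate: a semaphore sized at 90 percent of the measured maximum admits requests, and anyone who cannot get a slot promptly receives the apology message instead of piling onto the database. The capacity figure and wording are illustrative:

```java
// Admission gate sketch: protect the back-end by capping concurrent work
// at 90% of the maximum established by performance testing.
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class AdmissionGate {
    private static final int MEASURED_MAX = 1000;          // from performance testing
    private final Semaphore slots =
            new Semaphore((int) (MEASURED_MAX * 0.9));     // 90% safe-load threshold

    /** Runs the request if a slot is free; otherwise returns a holding response. */
    public String handle(Runnable request) throws InterruptedException {
        // Wait briefly for a slot instead of hammering the database.
        if (!slots.tryAcquire(5, TimeUnit.SECONDS)) {
            return "We are busy - you are in a queue, please bear with us.";
        }
        try {
            request.run();                                  // the real work
            return "OK";
        } finally {
            slots.release();                                // free the slot
        }
    }
}
```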

The key thing in the scenario described above is that the service remains up and running. Why? Because we have understood the dynamics of the application and, crucially, we know the maximum number of users that can be accommodated with good response times. We have made sure that the back-end database is protected and that the service stays available – albeit degraded – while the surge in traffic is dealt with.

Best practice for peak performance
Organisations that demand the highest possible performance make sure they understand the performance dynamics of their application or service. That understanding comes from performance testing, which gives insight into how performance is affected by a variety of different traffic patterns. In the same way that a car manufacturer will tune an engine to perform in a multitude of situations, so we must ensure that no stone is left unturned in making sure we deliver a consistently good service.

Good performance testing should give you the analytic data to understand the trigger points beyond which further servers need to become active, and whether doubling the number of servers actually doubles the number of users that can be handled with good response times. In other words, you can answer the question ‘what resources do I need to scale properly?’ Homing in on where the bottlenecks to performance reside, and tuning those components in a way that is optimised alongside other components, is fundamental for smooth operation.
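As a worked example of the ‘does doubling the servers double the capacity?’ question, the numbers below are invented, but they are the kind a test run produces:

```java
// Back-of-envelope check of whether doubling servers doubles capacity.
public class ScalingCheck {
    public static void main(String[] args) {
        int usersOn4Servers = 10_000;   // max users with good response times (measured)
        int usersOn8Servers = 16_500;   // measured again after doubling the servers
        double efficiency = (double) usersOn8Servers / (2.0 * usersOn4Servers);
        // 0.83 here: doubling hardware bought only ~83% of the hoped-for capacity,
        // which usually points at a shared bottleneck such as the database.
        System.out.printf("scaling efficiency: %.2f%n", efficiency);
    }
}
```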

If it’s done properly, performance testing will de-risk your application both when you launch it and when the crowds of users come to visit – and of course you will avoid brand damage.

Managing the extraordinary
This article has used the backdrop of the Olympics as one where the demands on IT will be stretched to breaking point and beyond. With a little planning, extraordinary traffic levels can be managed effectively. Understanding the limits of an application is crucial, and performance testing needs to be part of the DNA of the Web Ops department – and of the quality processes as well. Users who see fast response times will be satisfied users – and will come back time and time again. Loyalty to a website or service is only as good as the quality of service – ie, the response time experienced.

Test hard and test often, be prepared for the perfect storm, and actively monitor the quality of service. In short: be prepared for Lady Gaga to tweet about your site to her 24 million followers!

Mike Howse
VP International, Apica
www.apicasystem.com


Toby Mason
Director, Totally Communications
totallycommunications.co.uk


The Agile imperative
It is important that software design and development agencies understand that agility is the easiest way to meet the demands of the client. Digital marketing guru Toby Mason explains why Agile is imperative.

It has in the past been acceptable to design for a client a ‘flashy’, ‘pretty’ website which they would be relatively happy with.

However, with clients becoming more web savvy and understanding the essence of what a good website is and what it can do for their business, development processes have never been more important to the success of an agency.

The Agile methodology is an approach to development project management, typically used for software development. It helps us to respond to the unpredictability of building software for clients.

In the past you had to assume that every requirement of a project could be identified and agreed before any design or coding had even taken place, specifying to your developers every piece of software that needed to be developed before anything was up and running. With this process it is easy to see why there would be errors and mistakes, why budgets would be exceeded and why projects overran.

The times of gracelessness
When an Agile methodology is not introduced into a design or development process, a development team only has one chance to get each facet of a project right. In an Agile model, each and every aspect of the development (design, for example) is frequently revisited throughout the lifecycle of the project.

When a team stops to re-evaluate the direction of a project at regular intervals, there’s always time to steer the project back in the right direction if it starts to fall off brief.

Working outside of an Agile method also means that each phase of a project needs to be completed before the next one can begin, so developers need to gather all the project information right at the start before they can create the architecture of the site, and then design the site before the coding can take place. This type of approach has little communication, if any, between the different groups that complete each phase of the work.

You could quite often get to the end of a project and find that the team has not communicated at all throughout the process. The result is that the team has built the software asked for in the initial brief but, now the goal posts have moved, the software no longer fits the business plan, making the development obsolete. Not only have you lost time and money creating software that is useless, you now have an unhappy client. This is easily solvable by changing the way in which we work.

Flexibility enables positive outcomes
With the Agile methodology, you don’t decide upfront everything that will take place throughout the project.

Instead you take it in incremental steps, setting out the design and development at stages throughout the project and re-evaluating the project as you move through it, thus meeting your client’s demands on time, on budget and to spec.

With this approach, it’s easy to see why and how the Agile methodology developed and entered into our world.

The Agile methodology, simply put, allows you to assess each stage of the development at regular intervals throughout the project. A project’s aims and objectives at the start may be very different to those at varying stages throughout the development. This process allows you to inspect and adapt each level of development, which in turn reduces both development costs and time to market.

Implementing regular catch-ups with your developers and your client gives you the opportunity to fine-tune your progress and make sure that the end result is what you need. Ultimately, with the Agile methodology you are protecting the work that you are doing, making sure that it ends up in the world for everyone to see and that your time isn’t wasted on a project that just gets shelved – an attractive option for clients and developers alike.


Facilita has created the Forecast™ product suite, which is used across multiple business sectors to performance test applications, websites and IT infrastructures of all sizes and complexity. With class-leading software and unbeatable support and services, Facilita will help you ensure that your IT systems are reliable, scalable and tuned for optimal performance.

Forecast™ is proven, effective and innovative
A sound investment: Choosing the optimal load testing tool is crucial as the risks and costs associated with inadequate testing are enormous. Load testing is challenging and without the right tool and vendor support it will consume expensive resources and still leave a high risk of disastrous system failure.

Forecast has been created to meet the challenges of load testing now and in the future. The core of the product is tried and trusted, incorporates more than a decade of experience, and is designed to evolve in step with advances in technology.

Realistic load testing: Forecast tests the reliability, performance and scalability of IT systems by realistically simulating from one to many thousands of users executing a mix of business processes using individually configurable test data.

Comprehensive technology support: Forecast provides one of the widest ranges of protocol support of any load testing tool.

1. Forecast Web thoroughly tests web-based applications and web services, identifies system bottlenecks, improves application quality and optimises network and server infrastructures. Forecast Web supports a comprehensive and growing list of protocols, standards and data formats including HTTP/HTTPS, SOAP, XML, JSON and Ajax.

2. Forecast Java is a powerful and technically advanced solution for load testing Java applications. It targets any non-GUI client-side Java API with support for all Java remoting technologies including RMI, IIOP, CORBA and Web Services.

3. Forecast Citrix simulates multiple Citrix clients and validates the Citrix environment for scalability and reliability, in addition to the performance of the published applications. This non-intrusive approach provides very accurate client performance measurements, unlike server-based solutions.

4. Forecast .NET simulates multiple concurrent users of applications with client-side .NET technology.

5. Forecast WinDriver is a unique solution for performance testing Windows applications that are impossible or uneconomical to test using other methods, or where user experience timings are required. WinDriver automates the client user interface and can control from one to many hundreds of concurrent client instances or desktops.

6. Forecast can generate intelligent load at the IP socket level (TCP or UDP) to test systems with proprietary messaging protocols, and also supports the OSI protocol stack.

Powerful yet easy to use: Testers like using Forecast because of its power and flexibility. Creating working tests is made easy with Forecast's application recording and script generation features and the ability to rapidly compose complex test scenarios with a few mouse clicks.

Supports Waterfall and Agile (and everything in between): Forecast has the features demanded by QA teams like automatic test script creation, test data management, real-time monitoring and comprehensive charting and reporting.

Forecast is successfully deployed in Agile "Test Driven Development" (TDD) environments and integrates with automated test (continuous build) infrastructures. The functionality of Forecast is fully programmable and test scripts are written in standard languages (Java, C# and C++). Forecast provides the flexibility of Open Source alternatives along with comprehensive technical support and the features of a high-end commercial tool.

Monitoring: Forecast integrates with leading solutions such as dynaTrace to provide enhanced server monitoring and diagnostics during testing.

Forecast Virtual user technology can also be deployed to generate synthetic transactions within a production monitoring solution. Facilita now offers a lightweight monitoring dashboard in addition to integration with comprehensive enterprise APM solutions.

Flexible licensing: Our philosophy is to provide maximum value and to avoid hidden costs. Licenses can be bought on a perpetual or subscription basis and short-term project licensing is also available with a “stop-the-clock” option.

Services
Supporting our users

In addition to comprehensive support and training, Facilita offers mentoring by experienced consultants either to ‘jump start’ a project or to cultivate advanced testing techniques.

Testing services

Facilita can supplement test teams or supply fully managed testing services, including Cloud based solutions.

Facilita
Tel: +44 (0) 1260 298109 | Email: [email protected] | Web: www.facilita.com

Facilita
Facilita load testing solutions deliver results




For over 20 years Parasoft has been studying how to efficiently create quality computer code. Our solutions leverage this research to deliver automated quality assurance as a continuous process throughout the SDLC. This promotes strong code foundations, solid functional components, and robust business processes. Whether you are delivering Service-Oriented Architectures (SOA), evolving legacy systems, or improving quality processes – draw on our expertise and award-winning products to increase productivity and the quality of your business applications.

Parasoft's full-lifecycle quality platform ensures secure, reliable, compliant business processes. It was built from the ground up to prevent errors involving the integrated components – as well as reduce the complexity of testing in today's distributed, heterogeneous environments.

What we do
Parasoft's SOA solution allows you to discover and augment expectations around design/development policy and test case creation. These defined policies are automatically enforced, allowing your development team to prevent errors instead of finding and fixing them later in the cycle. This significantly increases team productivity and consistency.

End-to-end testing: Continuously validate all critical aspects of complex transactions which may extend through web interfaces, backend services, ESBs, databases, and everything in between.

Advanced web app testing: Guide the team in developing robust, noiseless regression tests for rich and highly-dynamic browser-based applications.

Application behavior virtualisation: Automatically emulate the behavior of services, then deploy them across multiple environments – streamlining collaborative development and testing activities. Services can be emulated from functional tests or actual runtime environment data.

Load/performance testing: Verify application performance and functionality under heavy load. Existing end-to-end functional tests are leveraged for load testing, removing the barrier to comprehensive and continuous performance monitoring.

Specialised platform support: Access and execute tests against a variety of platforms (AmberPoint, HP, IBM, Microsoft, Oracle/BEA, Progress Sonic, Software AG/webMethods, TIBCO).

Security testing: Prevent security vulnerabilities through penetration testing and execution of complex authentication, encryption, and access control test scenarios.

Trace code execution: Provide seamless integration between SOA layers by identifying, isolating, and replaying actions in a multi-layered system.

Continuous regression testing: Validate that business processes continuously meet expectations across multiple layers of heterogeneous systems. This reduces the risk of change and enables rapid and agile responses to business demands.

Multi-layer verification: Ensure that all aspects of the application meet uniform expectations around security, reliability, performance, and maintainability.

Policy enforcement: Provide governance and policy-validation for composite applications in BPM, SOA, and cloud environments to ensure interoperability and consistency across all SOA layers.

Please contact us to arrange either a one-to-one briefing session or a free evaluation.

Web: www.parasoft.com Email: [email protected] Tel: +44 (0) 208 263 6005

Parasoft
Improving productivity by delivering quality as a continuous process


United Kingdom, Ireland, and Benelux: Seapine Software Ltd. Building 3, Chiswick Park, 566 Chiswick High Road, Chiswick, London, W4 5YA, UK | Phone: +44 (0) 208-899-6775 | Email: [email protected] | Web: www.seapine.com

Americas (Corporate Headquarters): Seapine Software, Inc. 5412 Courseview Drive, Suite 200, Mason, Ohio 45040, USA | Phone: 513-754-1655

With over 8,500 customers worldwide, Seapine Software Inc is a recognised, award-winning, leading provider of quality-centric application lifecycle management (ALM) solutions. With headquarters in Cincinnati, Ohio and offices in London, Melbourne, and Munich, Seapine is uniquely positioned to directly provide sales, support, and services around the world.

Built on flexible architectures using open standards, Seapine Software’s cross-platform ALM tools support industry best practices, integrate into all popular development environments, and run on Microsoft Windows, Linux, Sun Solaris, and Apple Macintosh platforms. Seapine Software’s integrated software development and testing tools streamline your development and QA processes – improving quality, and saving you significant time and money.

TestTrack RM
TestTrack RM centralises requirements management, enabling all stakeholders to stay informed of new requirements, participate in the review process, and understand the impact of changes on their deliverables. Easy to install, use, and maintain, TestTrack RM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Whether as a standalone tool or part of Seapine’s integrated ALM solution, TestTrack RM helps teams keep development projects on track by facilitating collaboration, automating traceability, and satisfying compliance needs.

TestTrack Pro
TestTrack Pro is a powerful, configurable, and easy to use issue management solution that tracks and manages defects, feature requests, change requests, and other work items. Its timesaving communication and reporting features keep team members informed and on schedule. TestTrack Pro supports MS SQL Server, Oracle, and other ODBC databases, and its open interface is easy to integrate into your development and customer support processes.

TestTrack TCM
TestTrack TCM, a highly scalable, cross-platform test case management solution, manages all areas of the software testing process including test case creation, scheduling, execution, measurement, and reporting. Easy to install, use, and maintain, TestTrack TCM features comprehensive workflow and process automation, easy customisability, advanced filters and reports, and role-based security. Reporting and graphing tools, along with user-definable data filters, allow you to easily measure the progress and quality of your testing effort.

QA Wizard Pro
QA Wizard Pro completely automates the functional and regression testing of Web, Windows, and Java applications, helping quality assurance teams increase test coverage. Featuring a next-generation scripting language, QA Wizard Pro includes advanced object searching, smart matching, a global application repository, data-driven testing support, validation checkpoints, and built-in debugging. QA Wizard Pro can be used to test popular languages and technologies like C#, VB.NET, C++, Win32, Qt, AJAX, ActiveX, JavaScript, HTML, Delphi, Java, and Infragistics Windows Forms controls.

Surround SCM
Surround SCM, Seapine’s cross-platform software configuration management solution, controls access to source files and other development assets, and tracks changes over time. All data is stored in industry-standard relational database management systems for greater security, scalability, data management, and reporting. Surround SCM’s change automation, caching proxy server, labels, and virtual branching tools streamline parallel development and provide complete control over the software change process.

Seapine Software™


For more information, please visit www.microfocus.com/solutions/softwarequality

Deliver better software, faster
Software quality that matches requirements and testing to business needs. Making sure that business software delivers precisely what is needed, when it is needed, is central to business success. Getting it right first time hinges on properly defined and managed requirements, the right testing and managing change. Get these right and you can expect significant returns: costs are reduced, productivity increases, time to market is greatly improved and customer satisfaction soars.

The Borland software quality solutions from Micro Focus help software development organizations develop and deliver better applications through closer alignment to business, improved quality and faster, stronger delivery processes – independent of language or platform.

Combining Requirements Definition and Management, Testing and Software Change Management tools, Micro Focus offers an integrated software quality approach that is positioned in the leadership quadrant of Gartner Inc’s Magic Quadrant.

The Borland Solutions from Micro Focus are both platform and language agnostic – so whatever your preferred development environment you can benefit from world class tools to define and manage requirements, test your applications early in the lifecycle, and manage software configuration and change.

Requirements
Defining and managing requirements is the bedrock for application development and enhancement. Micro Focus uniquely combines requirements definition, visualization, and management into a single '3-Dimensional' solution, giving managers, analysts and developers precise detail for engineering their software. By cutting ambiguity, the direction of development and QA teams is clear, strengthening business outcomes.

For one company this delivered an ROI in 6-8 months, a 20% increase in project success rates, a 30% increase in productivity and a 25% increase in asset re-use.

Using Micro Focus tools to define and manage requirements helps your teams:

• Collaborate, using pictures to build mindshare, drive a common vision and share responsibility with role-based review and simulations.

• Reduce waste by finding and removing errors earlier in the lifecycle, eliminating ambiguity and streamlining communication.

• Improve quality by taking the business need into account when defining the test plan.

Caliber® is an enterprise software requirements definition and management suite that facilitates collaboration, impact analysis and communication, enabling software teams to deliver key project milestones with greater speed and accuracy.

Software Change Management
StarTeam® is a fully integrated, cost-effective software change and configuration management tool. Designed for both centralized and geographically distributed software development environments, it delivers:

• A single source of key information for distributed teams

• Streamlined collaboration through a unified view of code and change requests

• Industry leading scalability combined with low total cost of ownership

Testing
Automating the entire quality process, from inception through to software delivery, ensures that tests are planned early and synchronize with business goals even as requirements and realities change. Leaving quality assurance to the end of the lifecycle is expensive and wastes improvement opportunities.

Micro Focus delivers a better approach: Highly automated quality tooling built around visual interfaces and reusability. Tests can be run frequently, earlier in the development lifecycle to catch and eliminate defects rapidly.

From functional testing to cloud-based performance testing, Micro Focus tools help you spot and correct defects rapidly across the application portfolio, even for Web 2.0 applications.

Micro Focus testing solutions help you:

• Align testing with a clear, shared understanding of business goals focusing test resources where they deliver most value

• Increase control through greater visibility over all quality activities

• Improve productivity by catching and driving out defects faster

Silk is a comprehensive automated software quality management solution suite which enables users to rapidly create test automation, ensuring continuous validation of quality throughout the development lifecycle. Users can move away from manual-testing-dominated software lifecycles to ones where automated tests continually test software for quality and improve time to market.

Take testing to the cloud
Users can test and diagnose Internet-facing applications under immense global peak loads on the cloud without having to manage complex infrastructures.

Among other benefits, SilkPerformer® CloudBurst gives development and quality teams:

• Simulation of peak demand loads through onsite and cloud-based resources for scalable, powerful and cost effective peak load testing

• Web 2.0 client emulation to test even today’s rich internet applications effectively

Micro Focus, a member of the FTSE 250, provides innovative software that enables companies to dramatically improve the business value of their enterprise applications. Micro Focus Enterprise Application Modernization, Testing and Management software enables customers’ business applications to respond rapidly to market changes and embrace modern architectures with reduced cost and risk.

Micro Focus


With a world-class record of innovation, Original Software offers a solution focused completely on the goal of effective software quality management. By embracing the full spectrum of Application Quality Management (AQM) across a wide range of applications and environments, we partner with customers and help make quality a business imperative. Our solutions include a quality management platform, manual testing, test automation and test data management software, all delivered with the control of business risk, cost, time and resources in mind. Our test automation solution is particularly suited to testing in an agile environment.

Setting new standards for application quality
Managers responsible for quality must be able to implement processes and technology that will support their important business objectives in a pragmatic and achievable way, and without negatively impacting current projects.

These core needs are what inspired Original Software to innovate and provide practical solutions for Application Quality Management (AQM) and Automated Software Quality (ASQ). We have helped customers achieve real successes by implementing an effective ‘application quality eco-system’ that delivers greater business agility, faster time to market, reduced risk, decreased costs, increased productivity and an early return on investment.

Our success has been built on a solution suite that provides a dynamic approach to quality management and automation, empowering all stakeholders in the quality process, as well as uniquely addressing all layers of the application stack. Automation has been achieved without creating a dependency on specialised skills and by minimising ongoing maintenance burdens.

An innovative approach
Innovation is in the DNA at Original Software. Our intuitive solution suite directly tackles application quality issues and helps you achieve the ultimate goal of application excellence.

Empowering all stakeholders
The design of the solution helps customers build an ‘application quality eco-system’ that extends beyond just the QA team, reaching all the relevant stakeholders within the business. Our technology enables everyone involved in the delivery of IT projects to participate in the quality process – from the business analyst to the business user and from the developer to the tester. Management executives are fully empowered by having instant visibility of projects underway.

Quality that is truly code-free
We have observed the script maintenance and exclusivity problems caused by code-driven automation solutions and have built a solution suite that requires no programming skills. This empowers all users to define and execute their tests without the need to use any kind of code, freeing them from the automation-specialist bottleneck. Not only is our technology easy to use, but quality processes are accelerated, allowing for faster delivery of business-critical projects.

Top to bottom quality
Quality needs to be addressed at all layers of the business application. We give you the ability to check every element of an application – from the visual layer, through to the underlying service processes and messages, as well as into the database.

Addressing test data issues
Data drives the quality process and as such cannot be ignored. We enable the building and management of a compact test environment from production data quickly and in a data-privacy-compliant manner, avoiding legal and security risks. We can also manage the state of that data, so that it is synchronised with test scripts, enabling swift recovery and shortening test cycles.

A holistic approach to quality
Our integrated solution suite is uniquely positioned to address all the quality needs of an application, regardless of the development methodology used. Being methodology neutral, we can help in Agile, Waterfall or any other project type. We provide the ability to unite all aspects of the software quality lifecycle. Our solution helps manage the requirements, design, build, test planning and control, test execution, test environment and deployment of business applications from one central point that gives everyone involved a unified view of project status and avoids the release of an application that is not ready for use.

Helping businesses around the world
Our innovative approach to solving real pain-points in the Application Quality Life Cycle has been recognised by leading multinational customers and industry analysts alike. In a 2011 report, Ovum stated:

“While other companies have diversified into other test types and sometimes outside testing completely, Original Software has stuck more firmly to a value proposition almost solely around unsolved challenges in functional test automation. It has filled out some yawning gaps and attempted to make test automation more accessible to non-technical testers.”

More than 400 organisations operating in over 30 countries use our solutions and we are proud of partnerships with the likes of Coca-Cola, Unilever, HSBC, Barclays Bank, FedEx, Pfizer, DHL, HMV and many others.

Original Software
Delivering quality through innovation

www.origsoft.com | Email: [email protected] | Tel: +44 (0)1256 338 666 | Fax: +44 (0)1256 338 678
Grove House, Chineham Court, Basingstoke, Hampshire, RG24 8AG


The Green Hat difference
In one software suite, Green Hat automates the validation, visualisation and virtualisation of unit, functional, regression, system, simulation, performance and integration testing, as well as performance monitoring. Green Hat offers code-free and adaptable testing from the User Interface (UI) through to back-end services and databases. Reducing testing time from weeks to minutes, Green Hat customers enjoy rapid payback on their investment.

Green Hat’s testing suite supports quality assurance across the whole lifecycle, and different development methodologies including Agile and test-driven approaches. Industry vertical solutions using protocols like SWIFT, FIX, IATA or HL7 are all simply handled. Unique pre-built quality policies enable governance, and the re-use of test assets promotes high efficiency. Customers experience value quickly through the high usability of Green Hat’s software.

Focusing on minimising manual and repetitive activities, Green Hat works with other application lifecycle management (ALM) technologies to provide customers with value-add solutions that slot into their Agile testing, continuous testing, upgrade assurance, governance and policy compliance. Enterprises invested in HP and IBM Rational products can simply extend their test and change management processes to the complex test environments managed by Green Hat and get full integration.

Green Hat provides the broadest set of testing capabilities for enterprises with a strategic investment in legacy integration, SOA, BPM, cloud and other component-based environments, reducing the risk and cost associated with defects in processes and applications. The Green Hat difference includes:

• Purpose built end-to-end integration testing of complex events, business processes and composite applications. Organisations benefit by having UI testing combined with SOA, BPM and cloud testing in one integrated suite.

• Unrivalled insight into the side-effect impacts of changes made to composite applications and processes, enabling a comprehensive approach to testing that eliminates defects early in the lifecycle.

• Virtualisation for missing or incomplete components to enable system testing at all stages of development. Organisations benefit through being unhindered by unavailable systems or costly access to third party systems, licences or hardware. Green Hat pioneered ‘stubbing’, and organisations benefit by having virtualisation as an integrated function, rather than a separate product.

• Scaling out these environments, test automations and virtualisations into the cloud, with seamless integration between Green Hat’s products and leading cloud providers, freeing you from the constraints of real hardware without the administrative overhead.

• ‘Out-of-the-box’ deep integration with all major SOA, enterprise service bus (ESB) platforms, BPM runtime environments, governance products, and application lifecycle management (ALM) products.

• ‘Out-of-the-box’ support for over 70 technologies and platforms, as well as transport protocols for industry vertical solutions. Also provided is an application programming interface (API) for testing custom protocols, and integration with UDDI registries/repositories.

• Helping organisations at an early stage of project or integration deployment to build an appropriate testing methodology as part of a wider SOA project methodology.

Corporate overview
Since 1996, Green Hat has constantly delivered innovation in test automation. With offices that span North America, Europe and Asia/Pacific, Green Hat’s mission is to simplify the complexity associated with testing, and make processes more efficient. Green Hat delivers the market leading combined, integrated suite for automated, end-to-end testing of the legacy integration, Service Oriented Architecture (SOA), Business Process Management (BPM) and emerging cloud technologies that run Agile enterprises.

Green Hat partners with global technology companies including HP, IBM, Oracle, SAP, Software AG, and TIBCO to deliver unrivalled breadth and depth of platform support for highly integrated test automation. Green Hat also works closely with the horizontal and vertical practices of global system integrators including Accenture, Atos Origin, CapGemini, Cognizant, CSC, Fujitsu, Infosys, Logica, Sapient, Tata Consulting and Wipro, as well as a significant number of regional and country-specific specialists. Strong partner relationships help deliver on customer initiatives, including testing centres of excellence. Supporting the whole development lifecycle and enabling early and continuous testing, Green Hat’s unique test automation software increases organisational agility, improves process efficiency, assures quality, lowers costs and mitigates risk.

Helping enterprises globally
Green Hat is proud to have hundreds of global enterprises as customers, and this number does not include the consulting organisations who are party to many of these installations with their own staff or outsourcing arrangements. Green Hat customers enjoy global support and cite outstanding responsiveness to their current and future requirements. Green Hat’s customers span industry sectors including financial services, telecommunications, retail, transportation, healthcare, government, and energy.

Green Hat

[email protected] www.greenhat.com


T-Plan has since 1990 supplied best-of-breed solutions for testing. The T-Plan method and tools allow both the business unit manager and the IT manager to: manage costs, reduce business risk and regulate the process.

By providing order, structure and visibility throughout the development lifecycle, from planning to execution, acceleration of the "time to market" for business solutions can be delivered. The T-Plan Product Suite allows you to manage every aspect of the Testing Process, providing a consistent and structured approach to testing at the project and corporate level.

What we do
Test Management:

The T-Plan Professional product is modular in design, clearly differentiating between the Analysis, Design, Management and Monitoring of the Test Assets. It helps answer questions such as:

• What coverage back to requirements has been achieved in our testing so far?

• What requirement successes have we achieved so far?

• Can I prove that the system is really tested?

• If we go live now, what are the associated Business Risks?

Incident Management:

Errors or queries found during the Test Execution can also be logged and tracked throughout the Testing Lifecycle in the T-Plan Incident Manager.

“We wanted an integrated test management process; T-Plan was very flexible and excellent value for money.”

Francesca Kay, Test Manager, Virgin Mobile

Test Automation:

Cross-platform independent (Java) test automation is also integrated into the test suite package via T-Plan Robot, therefore creating a full testing solution.

T-Plan Robot Enterprise is the most flexible and universal black box test automation tool on the market. Providing a human-like approach to software testing of the user interface, and uniquely built on Java, Robot performs well in situations where other tools may fail.

• Platform independence (Java). T-Plan Robot runs on, and automates, all major systems, such as Windows, Mac, Linux, Unix, Solaris, and mobile platforms such as Android, iPhone, Windows Mobile, Windows CE and Symbian.

• Test almost ANY system. As automation runs at the GUI level, via the use of VNC, the tool can automate any application. E.g. Java, C++/C#, .NET, HTML (web/browser), mobile, command line interfaces; also applications usually considered impossible to automate, like Flash/Flex etc.


T-Plan


Dave Whalen is burying the testing death march...

The end of the Death March

If you have been around testing long enough you have probably participated in the Testing Death March.

Testing is usually planned for the end of the development cycle. Development is complete and the code is thrown over the fence like raw meat to the ravenous testers who are snarling to get their teeth into it.

If it's a typical project, development is late and the schedule is inflexible so the only place left to cut time is in test. All of the code is tested and if any bugs are found, they are resolved, a new build is deployed, and we start over... and over.

Is there a way to make it stop? Absolutely: by introducing continuous integration. Under our continuous integration model, we are constantly producing new builds as the code is updated. The developer writes or fixes the code and immediately applies it to the build server. I can deploy it to my test server and immediately test it.

We don't save up new code and bug fixes for a scheduled weekly build or a build-on-demand. I don't have to contact the SCM person to deploy a build for me. I control it. So every day, I have access to the most currently available code. In less than a day we can get a bug fix deployed and retested. It's awesome!

We have even expanded our automated tests to include an automated ‘Smoke Test’. This is a subset of the regression test. It can run in under 15 minutes. The major difference between the smoke test and the regression test is that the smoke test consists of only positive, or ‘happy path’ tests. We automatically run the smoke test with every new build. The regression test, on the other hand, consists of every test we have ever written – both positive and negative tests. I automatically run the regression test after each nightly build.
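The article doesn’t name a unit-test framework, so purely as an illustration of keeping a fast smoke subset inside the full regression library, here is how the split might look using JUnit 5 tags (the test names and bodies are invented):

```java
// Illustrative only: one way to separate a fast 'smoke' subset from the
// full regression library, using JUnit 5 tags.
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class CheckoutTests {

    @Test
    @Tag("smoke") // positive, happy-path test: runs with every new build
    void customerCanCheckOutWithAValidCard() {
        // ...drive the happy path and assert the order succeeds...
    }

    @Test // negative test: part of the nightly regression run only
    void checkoutIsRejectedWhenTheCardHasExpired() {
        // ...assert the failure is reported gracefully...
    }
}
```

A build server could then be configured to run only the tagged smoke subset on every build and the full suite each night.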

The real benefit is that we are constantly testing – every day. I have a daily regression test that I update constantly. The size of the regression test has increased over the life of the project, as new functionality is added, from a handful of tests to a library approaching 200. With every new build, I run the complete suite of regression tests, so I know immediately if new or updated code has any effect on existing functionality. If the regression test fails, everyone – the entire team – is dedicated to fixing it until we get a clean run. As an added bonus, the person responsible for breaking the test has to buy donuts for the team. Not great for my waistline, but to be fair, sometimes (not often) I may have a bad test which results in an invalid error. In that case I’m more than happy to make a trip to the donut shop.

From a test perspective, to have a successful Continuous Integration model you have to have an automated test tool. It can be a full-time job running a manual regression test every day – and totally boring. With an automated regression test, I basically just push a button and let the test run. It runs in the background while I move on to more pressing matters, like buying the donuts. An hour later, I check the results and hopefully continue writing new tests. Once the new tests have run successfully, I add them to the regression suite.

When we were testing the business tier we needed a tool that would allow us access to the underlying code – we had no user interface (UI) at first. As the project progressed, we needed a tool that allowed us to drive testing through the UI, and it also had to allow us to automate testing. We eventually selected Fitnesse as our middle-tier test tool and Selenium for our UI tests. There are no perfect tools. Each has good and not-so-good features.

Neither of the tools we selected was exactly user friendly, but with a little help from the development team we were soon able to build some pretty robust tests. As a result I have become a pretty good Java developer (shhh... don’t tell anyone). Just because we moved from the middle tier to the UI didn’t mean we threw out our suite of Fitnesse regression tests. We still run them constantly.
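For flavour, a UI check of the kind Selenium enables might look like the sketch below, written in Java; the URL and expected title are invented for illustration:

```java
// A minimal Selenium WebDriver check; the page and title are hypothetical.
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginPageSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver(); // drives a real browser
        try {
            driver.get("https://app.example.com/login"); // hypothetical page
            if (!driver.getTitle().contains("Login")) {
                throw new AssertionError("Unexpected page title: " + driver.getTitle());
            }
            System.out.println("Login page smoke test passed");
        } finally {
            driver.quit(); // always release the browser session
        }
    }
}
```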

As the project came to an end, the death march began looming on the horizon. The project manager asked me how much time we needed to test once development was complete. He was shocked when I told him: none. As news of my answer circulated, shock waves rippled throughout the company all the way to the top. My phone rang off the hook. “What? You don’t need time to test?” I’m surprised I wasn’t sent for a drug test.

I don’t need time to test because I’ve been testing the system every day since day one. What more would you like me to do? They just didn’t feel comfortable with that, so to appease them I agreed to run the entire suite of tests while they watched. I said if it ran clean they owed us lunch; if it failed, we would buy them lunch – and not McDonalds either!

I kicked off the automated regression test and an hour later it was clean and green. They didn’t believe it, so they called in the development manager from another project and I ran it again. Once again, clean. More people were called – developers this time – and they pored through the test code looking for something, anything, that would invalidate the results. They found nothing. In fact, they mentioned that the test was more comprehensive than any they had ever seen.

I insisted they take the entire team for lunch: testers, developers, analysts, everyone. After all, it was a team success. That was the best steak I've ever eaten!

So I'm sure the question on everyone's mind is: am I now an Agile fan? Will I convert? Did I drink the Agile Kool-Aid? After all, I'm the guy that wrote the ‘I Hate Agile’ cover story for this very publication. The answer is (drum roll) Yes! Well, a qualified yes.

Dave Whalen
President and senior software entomologist, Whalen Technologies
softwareentomologist.wordpress.com



[Back-issue covers: Volume 3, Issue 6, December 2011 – ‘Testing centres of excellence’ (Inside: Static analysis | Data obfuscation | Testing tools); Volume 4, Issue 1, February 2012 – ‘Testing in New Zealand: the number-8 fencing wire approach’ (Inside: Test automation | Outsourcing | Data-driven testing); Volume 4, Issue 2, April 2012 – ‘Cloud testing for an Agile world’ (Inside: Automation tools | Model-based testing | Testing certification)]

For exclusive news, features, opinion, comment, directory, digital archive and much more visit

www.testmagazine.co.uk

Subscribe to TEST free!

Published by 31 Media Ltd

www.31media.co.uk

Telephone: +44 (0) 870 863 6930

Facsimile: +44 (0) 870 085 8837

Email: [email protected]

Website: www.31media.co.uk

INNOVATION FOR SOFTWARE QUALITY


Visit us at TestExpo, the UK’s premier software testing event to hear how you can meet both business requirements and quality expectations with Borland solutions. Register now at www.testexpo.co.uk

© 2012 Micro Focus Limited. All rights reserved. MICRO FOCUS, the Micro Focus logo, among others, are trademarks or registered trademarks of Micro Focus Limited or its subsidiaries or affiliated companies in the United Kingdom, United States and other countries. All other marks are the property of their respective owners.

Deliver it right, deliver it better and deliver it faster
Gather, refine and organize requirements – align what you develop with what your users need. Accelerate reliable, efficient and scalable testing to deliver higher quality software. Continuously improve the software you deliver – track code changes, defects and everything important in collaborative software delivery. Give your users the experience they expect with better design, control and delivery.

Work the way you want. The way your team wants. The way your users want – with Borland solutions.

Create, develop and deliver better software faster with Borland.