
Software Testing and Technical FAQs

- Ravi S


Software QA/Testing Technical FAQs

Are you a Software QA engineer or Software tester? Need to update your software QA/testing

knowledge or need to prepare for a job interview? Check out this collection of Software

QA/Testing Technical FAQs ...

Software Quality Assurance

(1) A planned and systematic pattern of all actions necessary to provide adequate confidence

that an item or product conforms to established technical requirements.

(2) A set of activities designed to evaluate the process by which products are developed or

manufactured.

What's the difference between a client/server application and a Web application?

A client/server application is any architecture in which one server application and one or many client applications are involved, like a mail server and MS Outlook Express. A Web application is a kind of client/server application that is hosted on a web server and accessed over an intranet or the internet. There are many differences between testing the two, more than can be covered in one post, but you can look into the data flow, the communication, and server-side variables such as session state and security.

Software Quality Assurance Activities

Application of Technical Methods (Employing proper methods and tools for developing

software)

Conduct of Formal Technical Review (FTR)

Testing of Software

Enforcement of Standards (Customer imposed standards or management imposed

standards)

Control of Change (Assess the need for change, document the change)

Measurement (Software Metrics to measure the quality, quantifiable)

Record Keeping and Reporting (documentation, reviews, change control, etc., i.e. the

benefits of docs).

What's the difference between STATIC TESTING and DYNAMIC TESTING?


Answer1:

Dynamic testing: requires the program to be executed.

Static testing: does not involve program execution.

In dynamic testing, the program is run on some test cases and the results of the program's performance are examined to check whether the program operated as expected.

Static testing covers compiler tasks such as syntax and type checking, as well as symbolic execution, program proving, data flow analysis and control flow analysis.

Answer2:

Static Testing: verification performed without executing the system code.

Dynamic Testing: verification and validation performed by executing the system code.
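To make the distinction concrete, here is a minimal Python sketch (the toy `add` function and all names are illustrative only): the static check parses the source without running it, while the dynamic check actually executes the code against a test case.

```python
import ast
import unittest

SOURCE = "def add(a, b):\n    return a + b\n"

# Static check: analyse the source without executing it. Parsing catches
# syntax errors; real static tools (linters, type checkers) go further.
tree = ast.parse(SOURCE)  # raises SyntaxError if the code is malformed

# Dynamic check: execute the code and examine its behaviour on a test case.
namespace = {}
exec(SOURCE, namespace)

class TestAdd(unittest.TestCase):
    def test_add_small_ints(self):
        self.assertEqual(namespace["add"](2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```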

Software Testing

Software testing is a critical component of the software engineering process. It is an element of

software quality assurance and can be described as a process of running a program in such a

manner as to uncover any errors. This process, while seen by some as tedious, tiresome and

unnecessary, plays a vital role in software development.

Testing involves operation of a system or application under controlled conditions and evaluating

the results (e.g., 'if the user is in interface A of the application while using hardware B, and does

C, then D should happen'). The controlled conditions should include both normal and abnormal

conditions. Testing should intentionally attempt to make things go wrong to determine if things

happen when they shouldn't or things don't happen when they should. It is oriented to

'detection'.

Organizations vary considerably in how they assign responsibility for QA and testing.

Sometimes they're the combined responsibility of one group or individual. Also common are

project teams that include a mix of testers and developers who work closely together, with

overall QA processes monitored by project managers. It will depend on what best fits an

organization's size and business structure.

What's the difference between QA and testing?

The quality assurance process is a process for providing adequate assurance that the software products and processes in the product life cycle conform to their specified requirements and adhere to their established plans.

The purpose of Software Quality Assurance is to provide management with appropriate visibility into the process being used by the software project and of the products being built.


What black box testing types can you tell me about?

Black box testing is functional testing, not based on any knowledge of internal software design

or code.

Black box testing is based on requirements and functionality. Functional testing is also a black-

box type of testing geared to functional requirements of an application.

System testing, acceptance testing, closed box testing, and integration testing are also black box types of testing.

What is software testing methodology?

One software testing methodology is the use of a three-step process of...

1. Creating a test strategy;

2. Creating a test plan/design; and

3. Executing tests. This methodology can be used and molded to your organization's needs.

Rob Davis believes that using this methodology is important in the development and ongoing

maintenance of his clients' applications.

What’s the difference between QA and testing?

TESTING means “Quality Control”; and

QUALITY CONTROL measures the quality of a product; while

QUALITY ASSURANCE measures the quality of processes used to create a quality product.

Why Testing CANNOT Ensure Quality

Testing in itself cannot ensure the quality of software. All testing can do is give you a certain

level of assurance (confidence) in the software. On its own, the only thing that testing proves is

that under specific controlled conditions, the software functioned as expected by the test cases

executed.

How to find all the Bugs during first round of Testing?

Answer1:

I understand the problems you are facing. I was involved with a web-based HR system that was

encountering the same problems. What I ended up doing was going back over a few release

cycles and analyzing the types of defects found and when (in the release cycle including the

various testing cycles) they were found. I started to notice a distinct trend in certain areas.

For each defect type, I started looking into the possibility if it could have been caught in the prior

phase (lots of things were being found in the Systems test phase that should have been caught


earlier). If so, why wasn't it caught? Could it have been caught even earlier (say via a peer

review)? If so, why not? This led me to start examining the various processes and found a

definite problem with peer reviews (not very thorough IF they were even being done) and with

the testing process (not rigorous enough). We worked with the customer and folks doing the

testing to start educating them and improving the processes. The result was the number of

defects found in the latter test stages (System test for example) were cut by over half! It was

getting harder to find problems with the product as they were discovering them earlier in the

process -- saving time & money!

Answer2:

There could be several reasons for not catching a showstopper in the first or second build/rev. A found defect could either functionally or psychologically mask a second or third defect. Functionally, the thread or path to the second defect could have been broken or rerouted to another path; psychologically, the tester who found the first defect knows the app must go back and be rewritten, so he/she proceeds halfheartedly and misses the second one. I've seen both cases. It is difficult to keep testing a known defective app. The testers seem to lose interest, knowing that whatever effort they put in to test it will have to be redone on the next iteration. This will test your mettle as a lead to get them to follow through and maintain a professional attitude.

Answer3:

The best way is to prevent bugs in the first place. Also testing doesn't fix or prevent bugs. It just

provides information. Applying this information to your situation is the important part.

The other thing that you may be encountering is that testing tends to be exploratory in nature.

You have stated that these are existing bugs, but not stated whether tests already existed for

these bugs.

Bugs in early cycles inhibit exploration. Additionally, a tester's understanding of the application

and its relationships and interactions will improve with time and thus more 'interesting' bugs tend

to be found in later iterations as testers expand their exploration (i.e. think of new tests).

No matter how much time you have to read through the documents and inspect artefacts,

seeing the actual application is going to trigger new thoughts, and thus introduce previously

unthought-of tests. Exposure to the application will trigger new thoughts as well; thus the longer

your testing goes, the more new tests (and potential bugs) are going to be found. Iterative

development is a good way to counter this, as testers get to see something physical earlier, but

this issue will always exist to some degree as the passing of time, and exploration of the

application allow new tests to be thought of at inconvenient moments.

Is regression testing performed manually?

The answer to this question depends on the initial testing approach. If the initial testing

approach was manual testing, then the regression testing is usually performed manually.


Conversely, if the initial testing approach was automated testing, then the regression testing is

usually performed by automated testing.
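As a rough sketch of what an automated regression check can look like, the following pytest-style example (the `report` function and the baseline file name are hypothetical) compares the current output of a function against a previously baselined output:

```python
from pathlib import Path

BASELINE = Path("report.baseline.txt")  # hypothetical baselined output file

def report(data):
    """Hypothetical function under regression test."""
    return "\n".join(f"{k}: {v}" for k, v in sorted(data.items()))

def test_report_matches_baseline():
    current = report({"passed": 10, "failed": 2})
    if not BASELINE.exists():
        # First run: capture the baseline (normally done before testing starts).
        BASELINE.write_text(current)
    # Regression check: any unintended change in output fails the test.
    assert current == BASELINE.read_text()
```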

How do you choose which defects to fix among 1,000,000 defects? (It would take too many resources to fix them all.)

Answer1:

Are you the programmer who has to fix them, the project manager who has to supervise the

programmers, the change control team that decides which areas are too high risk to impact, the

stakeholder-user whose organization pays for the damage caused by the defects or the tester?

The tester does not choose which defects to fix.

The tester helps ensure that the people who do choose, make a well-informed choice.

Testers should provide data to indicate the *severity* of bugs, but the project manager or the

development team do the prioritization.

When I say "indicate the severity", I don't just mean writing S3 on a piece of paper. Test groups

often do follow-up tests to assess how serious a failure is and how broad the range of failure-triggering conditions is.

Priority depends on a wide range of factors, including code-change risk, difficulty/time to

complete the change, which stakeholders are affected by the bug, the other commitments being

handled by the person most knowledgeable about fixing a certain bug, etc. Many of these

factors are not within the knowledge of most test groups.

Answer2:

As testers we don't fix the defects, but we surely can prioritize them once they are detected. In our org we assign a severity level to each defect depending upon its influence on other parts of the product. If a defect doesn't allow you to go ahead and test the product, it is a critical one, so it has to be fixed ASAP. We have 5 levels:

1-critical

2-High

3-Medium

4-Low

5-Cosmetic

Dev can group all the critical ones and take them to fix before any other defect.
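A minimal sketch of that grouping, assuming the five severity levels above (the defect records here are made up for illustration):

```python
from collections import defaultdict

LEVELS = {1: "Critical", 2: "High", 3: "Medium", 4: "Low", 5: "Cosmetic"}

# Hypothetical defect records: (defect id, severity level).
defects = [("BUG-101", 3), ("BUG-102", 1), ("BUG-103", 5), ("BUG-104", 1)]

by_severity = defaultdict(list)
for bug_id, severity in defects:
    by_severity[severity].append(bug_id)

# Fix order: critical defects first, cosmetic ones last.
for level in sorted(by_severity):
    print(f"{LEVELS[level]}: {', '.join(by_severity[level])}")
```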

Answer3:

Priority/Severity   P1   P2   P3
S1
S2
S3

Generally the defects are classified in the grid shown above, with priority columns P1-P3 and severity rows S1-S3. Every organization / software has some target for fixing the bugs.

Example -

P1S1 -> 90% of the bugs reported should be fixed.

P3S3 -> 5% of the bugs reported may be fixed. The rest are taken up in later service packs or versions.

Thus the organization should decide its target and act accordingly.

Basically, bug-free software is not possible.

Answer4:

Ideally, the customer should assign priorities to their requirements. They tend to resist this. On a large, multi-year project I just completed, I would often (in the absence of customer guidelines) rely on my knowledge of the application and the potential downstream impacts on the modeled business process to prioritize defects.

If the customer doesn't, then I feel the test organization should, based on risk or other similar considerations.

What is Software “Quality”?

Quality software is reasonably bug-free, delivered on time and within budget, meets

requirements and/or expectations, and is maintainable.

However, quality is a subjective term. It will depend on who the ‘customer’ is and their overall

influence in the scheme of things. A wide-angle view of the ‘customers’ of a software

development project might include end-users, customer acceptance testers, customer contract

officers, customer management, the development organisation’s

management/accountants/testers/salespeople, future software maintenance engineers,

stockholders, magazine reviewers, etc. Each type of ‘customer’ will have their own view on

‘quality’ - the accounting department might define quality in terms of profits while an end-user

might define quality as user-friendly and bug-free.

What is retesting?

Answer1:

Retesting is usually equated with regression testing (see above), but it is different in that it follows a specific fix--such as a bug fix--and is very narrow in focus (as opposed to testing the entire

application again in a regression test). A product should never be released after any change has

been applied to the code, with only retesting of the bug fix, and without a regression test.

Answer2:


1. Re-testing is the testing for a specific bug after it has been fixed.(one given by your

definition).

2. Re-testing can be one which is done for a bug which was raised by QA but could not be

found or confirmed by Development and has been rejected. So QA does a re-test to make sure

the bug still exists and again assigns it back to them.

When the entire project has been tested and the client has some doubts about the quality of the testing, re-testing can be called for. It can also be testing the same application again for better quality.

Answer3:

Regression Testing is the selective retesting of a system that has been modified to ensure that

any bugs have been fixed and that no other previously working functions have failed as a result

of the reparations and that newly added features have not created problems with previous

versions of the software. It is also referred to as verification testing.

It is important to determine whether, in a given set of circumstances, a particular series of tests has failed. The supplier may want to submit the software for re-testing. The contract should deal with the parameters for retests, including: (1) will test programs which are doomed to failure be allowed to finish early, or must they be completed in their entirety? (2) when can, or must, the supplier submit his software for retesting? and (3) how many times can the supplier fail tests and submit software for retesting -- is this based on time spent, or the number of attempts? A well-drawn contract will grant the customer options in the event of failure of

acceptance tests, and these options may vary depending on how many attempts the supplier

has made to achieve acceptance.

So the conclusion is that retesting is more or less regression testing; more appropriately, retesting is a part of regression testing.

Answer4:

Re-testing is simply executing the test plan another time. The client may request a re-test for

any reason - most likely is that the testers did not properly execute the scripts, poor

documentation of test results, or the client may not be comfortable with the results.

I've performed re-tests when the developer inserted unauthorized code changes, or did not

document changes.

Regression testing is the execution of test cases "not impacted" by the specific project. I am

currently working on testing of a system with poor system documentation (and no user

documentation) so our regression testing must be extensive.

Answer5:

* QA gets a bug fix, and has to verify that the bug is fixed. You might want to check a few things on a "gut feel" basis and still call it retesting, but not the entire function / module / product.

* Development refuses a bug on the basis of it being "non-reproducible"; then retesting, preferably in the presence of the developer, is needed.


How to establish QA Process in an organization?

1. CURRENT SITUATION

The first thing you should do is to put what you currently do on a piece of paper in some sort of a flowchart diagram. This will allow you to analyze what is currently being done.

2.DEVELOPMENT PROCESS STAGE

Once you have the "big picture", you have to be aware of the current status of your

development project or projects. The processes you select will vary depending if you are in early

stages of developing a new application (i.e.: developing a version 1.0), or maintaining an

existing application (i.e.: working on release 6.7.1).

3. PRIORITIES

The next thing you need to do is identify the priorities of your project, for example:

- Compliance with industry standards

- Validation of new functionality (new GUIs, etc.)

- Security

- Capacity Planning

(See "Effective Methods for Software Testing" for more info.) Make a list of the priorities, and then assign them values of (H)igh, (M)edium and (L)ow.

4. TESTING TYPES

Once you are aware of the priorities, focus on the High first, then Medium, and finally evaluate

whether the Low ones need immediate attention.

Based on this, you need to select those Testing Types that will provide coverage for your

priorities. Example of testing types:

- Functional Testing

- Integration Testing

- System Testing

- System-to-System Testing (for testing interfaces)

- Regression Testing

- Load Testing

- Performance Testing

- Stress Testing

Etc.

5. WRITE A TEST PLAN

Once you have determined your needs, the simplest way to document and implement your

process is to elaborate a "Test Plan" for every effort that you are engaged into (i.e.: for every

release).

For this you can use generic Test Plan templates available in the web that will help you

brainstorm and define the scope of your testing:

- Scope of Testing (defects, functionality, and what will be and will not be tested).

- Testing Types (Functional, Regression, etc).

- Responsible people

- Requirements traceability matrix (match test cases with requirements to ensure coverage)

- Defect tracking


- Test Cases

DURING AND POST-TESTING ACTIVITIES

Make sure you keep track of the completion of your testing activities, the defects found, and that

you comply with an exit criteria prior to moving to the next stage in testing (i.e. User Acceptance

Testing, then Production Release).

Make sure you have a mechanism for:

- Reporting

- Test tracking

What is software testing?

1) Software testing is a process that identifies the correctness, completeness, and quality of

software. Actually, testing cannot establish the correctness of software. It can find defects, but

cannot prove there are no defects.

2) It is a systematic analysis of the software to see whether it has performed to specified

requirements. What software testing does is to uncover errors however it does not tell us that

errors are still not present.

Any recommendation for estimating how many bugs the customer will find before gold release?

Answer1:

If you take the total number of bugs in the application and subtract the number of bugs you

found, the difference will be the maximum number of bugs the customer can find.

Seriously, I doubt you will find any sort of calculations or formula that can answer your question

with much accuracy. If you could reference a previous application release, it might give you a rough idea. The best thing to do is ensure your test coverage is as good as you can make it, then

hope you've found the ones the customer might find.

Remember Software testing is Risk Management!

Answer2:

For doing estimation :

1.) Find out the coverage during testing of your software and then estimate, keeping in mind the 80-20 principle.

2.) You can also look at the depth of your test cases, e.g. how much unit-level testing and how much life-cycle testing you have performed (most of the bugs from customers come from real lifecycle use of the software).

3.) You can also refer to the defect density from earlier releases of the same product line.

By doing these evaluations you can find out the probability of bugs at an approximately optimum estimation.


Answer3:

You can look at the mapping of customer issues from a previous release (if you have the same product line) to the current release; this is the best way of estimating for the gold release or migration of any product. Secondly, up to the gold release most of the issues come from various combinations of installation testing, like cross-platform, i18n issues, customization, upgrade and migration.

So, these can be taken as parameters, and the estimation can then be completed.

When the build comes to the QA team, what are the parameters to be taken for

consideration to reject the build upfront without committing for testing ?

Answer1:

Agree with R&D a set of tests that if one fails you can reject the build. I usually have some build

verification tests that just make sure the build is stable and the major functionality is working.

Then if one test fails you can reject the build.
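A build verification (smoke) suite can be as small as a handful of pytest checks; this sketch uses stand-in functions, since the real entry points depend on the product (all names here are hypothetical):

```python
def launch_app():
    """Stand-in for starting the newly delivered build."""
    return {"status": "up", "version": "1.2.3"}

def login(user, password):
    """Stand-in for one piece of the build's core functionality."""
    return user == "demo" and password == "demo"

def test_app_launches():
    # If the build cannot even start, reject it without further testing.
    assert launch_app()["status"] == "up"

def test_core_login_works():
    # One check per major function; failure of any of these rejects the build.
    assert login("demo", "demo")
```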

Answer2:

The only way to legitimately reject a build is if the entrance criteria have not been met. That

means that the entrance criteria to the test phase have been defined and agreed upon up front.

This should be standard for all builds for all products. Entrance criteria could include:

- Turn-over documentation is complete

- All unit testing has been successfully completed and U/T cases are documented in turn-over

- All expected software components have been turned-over (staged)

- All walkthroughs and inspections are complete

- Change requests have been updated to correct status

- Configuration Management and build information is provided, and correct, in turn-over

The only way we could really reject a build without any testing, would be a failure of the turn-

over procedure. There may, but shouldn't be, politics involved. The only way the test phase can

proceed is for the test team to have all components required to perform successful testing. You

will have to define entrance (and exit) criteria for each phase of the SDLC. This is an effort to be

taken together by the whole development team. Developments entrance criteria would include

signed requirements, HLD doc, etc. Having these criteria pre-established sets everyone up for success.

Answer3:

The primary reason to reject a build is that it is untestable, or if the testing would be considered

invalid.

For example, suppose someone gave you a "bad build" in which several of the wrong files had

been loaded. Once you know it contains the wrong versions, most groups think there is no point


continuing testing of that build.

Every reason for rejecting a build beyond this is reached by agreement. For example, if you set

a build verification test and the program fails it, the agreement in your company might be to

reject the program from testing. Some BVTs are designed to include relatively few tests, and

those of core functionality. Failure of any of these tests might reflect fundamental instability.

However, several test groups include a lot of additional tests, and failure of these might not be

grounds for rejecting a build.

In some companies, there are firm entry criteria to testing. Many companies pay lipservice to

entry criteria but start testing the code whether the entry criteria are met or not. Neither of these

is right or wrong--it's the culture of the company. Be sure of your corporate culture before

rejecting a build.

Answer4:

Generally a company would have set some sort of minimum goals/criteria that a build needs to satisfy - if it satisfies them it can be accepted, else it has to be rejected.

For example:

- Nil high-priority bugs

- At most 2 medium-priority bugs

- Sanity test (minimum acceptance) and basic acceptance should pass

- The reasons for the new build - say a change for a specific case - should pass

- No inability to proceed or non-testability, or anything else related to the new build or the product

If the above criteria aren't met, the build could be rejected.

What is software testing?

Software testing is more than just error detection;

Testing software is operating the software under controlled conditions, to (1) verify that it

behaves “as specified”; (2) to detect errors, and (3) to validate that what has been specified is

what the user actually wanted.

Verification is the checking or testing of items, including software, for conformance and

consistency by evaluating the results against pre-specified requirements. [Verification: Are we

building the system right?]

Error Detection: Testing should intentionally attempt to make things go wrong to determine if

things happen when they shouldn’t or things don’t happen when they should.

Validation looks at the system correctness – i.e. is the process of checking that what has been

specified is what the user actually wanted. [Validation: Are we building the right system?]

In other words, validation checks to see if we are building what the customer wants/needs, and

verification checks to see if we are building that system correctly. Both verification and validation

are necessary, but different components of any testing activity.

The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process


of analysing a software item to detect the differences between existing and required conditions

(that is defects/errors/bugs) and to evaluate the features of the software item.

What is the testing lifecycle?

There is no standard, but it consists of:

Test Planning (Test Strategy, Test Plan(s), Test Bed Creation)

Test Development (Test Procedures, Test Scenarios, Test Cases)

Test Execution

Result Analysis (compare Expected to Actual results)

Defect Tracking

Reporting

How to validate data?

I assume that you are doing ETL (extract, transform, load) and cleaning. If my assumption is correct, then:

1. you are building a data warehouse / doing data mining;

2. you are asking the right question in the wrong place.

What is quality?

Quality software is software that is reasonably bug-free, delivered on time and within budget,

meets requirements and expectations and is maintainable. However, quality is a subjective

term. Quality depends on who the customer is and their overall influence in the scheme of

things. Customers of a software development project include end-users, customer acceptance

test engineers, testers, customer contract officers, customer management, the development

organization's management, test engineers, testers, salespeople, software engineers,

stockholders and accountants. Each type of customer will have his or her own slant on quality.

The accounting department might define quality in terms of profits, while an end-user might

define quality as user friendly and bug free.

What is Benchmark?

How it is linked with SDLC (Software Development Life Cycle)?

or SDLC and Benchmark are two unrelated things.?

What are the components of a Benchmark?

In Software Testing where Benchmark fits in?

A Benchmark is a standard to measure against. If you benchmark an application, all future

application changes will be tested and compared against the benchmarked application.
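For example, a performance benchmark can be a stored timing that future builds are compared against; this sketch (the operation, baseline figure and tolerance are all made up) uses Python's standard timeit module:

```python
import timeit

BASELINE_SECONDS = 0.50  # hypothetical benchmarked time for this operation
TOLERANCE = 1.10         # allow a 10% regression before flagging

def operation():
    """Stand-in for the application operation that was benchmarked."""
    return sum(i * i for i in range(10_000))

elapsed = timeit.timeit(operation, number=100)
print(f"elapsed: {elapsed:.3f}s (baseline {BASELINE_SECONDS:.2f}s)")
if elapsed > BASELINE_SECONDS * TOLERANCE:
    print("Performance has regressed against the benchmark.")
```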


Which of the following statements about generating test cases is false?

1. Test cases may contain multiple valid conditions

2. Test cases may contain multiple invalid conditions

3. Test cases may contain both valid and invalid conditions

4. Test cases may contain more than 1 step.

5. test cases should contain Expected results.

Answer1:

All the conditions mentioned are valid, and not a single condition can be stated as false.

Here I think "condition" means the input type or situation (some may call it valid or invalid, positive or negative).

Also, a single test case can contain both input types, and then the final result can be verified (it obviously should not bring the required result, as one of the input conditions is invalid, when the test case is executed); this usually happens while writing scenario-based test cases.

For example, consider a web-based registration form in which the input data type for some fields is positive and for some fields negative (in a scenario-based test case). The screen can be tested by generating various scenarios and combinations. The final result can be verified against the actual result, and the registration should not be carried out successfully (as one or some input types are invalid) when this test case is executed.

The writing of test cases also depends upon the number of descriptive fields the tester has in the test case template. The more elaborate the test case template, the easier it is to write test cases and generate scenarios. So writing test cases depends entirely on the in-depth thinking of the tester, and there are no predefined or hard-coded norms for writing test cases.

This is according to my understanding of testing and test case writing knowledge (for many applications, I have written many positive and negative conditions in a single test case and verified different scenarios by generating such test cases).

Answer2:

The answer to this question will be 3: test cases may contain both valid and invalid conditions, since there is no restriction on a test case having multiple steps or more than one valid or invalid condition. But at the unit level, a test case - whether a feature, unit-level or end-to-end test case - cannot contain both valid and invalid conditions, because then the concept of a test case for a single result would be diluted and hence have no meaning.

What is “Quality Assurance”?

“Quality Assurance” measures the quality of processes used to create a quality product.

Software Quality Assurance (‘SQA’ or ‘QA’) is the process of monitoring and improving all


activities associated with software development, from requirements gathering, design and

reviews to coding, testing and implementation.

It involves the entire software development process - monitoring and improving the process,

making sure that any agreed-upon standards and procedures are followed, and ensuring that

problems are found and dealt with, at the earliest possible stage. Unlike testing, which is mainly

a ‘detection’ process, QA is ‘preventative’ in that it aims to ensure quality in the methods &

processes – and therefore reduce the prevalence of errors in the software.

Organisations vary considerably in how they assign responsibility for QA and testing.

Sometimes they’re the combined responsibility of one group or individual. Also common are

project teams that include a mix of testers and developers who work closely together, with

overall QA processes monitored by project managers or quality managers.

Quality Assurance and Software Development

Quality Assurance and development of a product are parallel activities. Complete QA includes

reviews of the development methods and standards, reviews of all the documentation (not just

for standardisation but for verification and clarity of the contents also). Overall Quality

Assurance processes also include code validation.

A note about quality assurance: The role of quality assurance is a superset of testing. Its

mission is to help minimise the risk of project failure. QA people aim to understand the causes

of project failure (which includes software errors as an aspect) and help the team prevent,

detect, and correct the problems. Often test teams are referred to as QA Teams, perhaps

acknowledging that testers should consider broader QA issues as well as testing.

Which things to consider to test a mobile application through black box technique?

Answer1:

Not sure how your device/server is to operate, so mold these ideas to fit your app. Some

highlights are:

Range testing: Ensure that you can reconnect when leaving and returning back into range.

Port/IP/firewall testing - change ports and IPs to ensure that you can connect and disconnect; modify the firewall to shut off the connection.

Multiple devices - make sure that a user receives his messages with other devices connected to the same IP/port. Your app should have a method to determine which device/user sent the message and reply only to it; this should be in the message string sent and received. Unless you

have conferencing capabilities within the application.

Cycle the power of the server and watch the mobile unit reconnect automatically.

Have the mobile unit send a message and then power off the unit; when powering back on and reconnecting, ensure that the message is returned to the mobile unit.


Answer2:

It is not clearly mentioned which area of the mobile application you are testing. Whether it is a simple SMS application or a WAP application, you need to specify more details. If you are working with WAP, then you can download simulators from the net and start testing over them.

What is the general testing process?

The general testing process is the creation of a test strategy (which sometimes includes the

creation of test cases), creation of a test plan/design (which usually includes test cases and test

procedures) and the execution of tests. Test data are inputs that have been devised to test the system.

Test cases are input and output specifications plus a statement of the function under test. Test data can be generated automatically (simulated) or real (live).

The stages in the testing process are as follows:

1. Unit testing: (Code Oriented)

Individual components are tested to ensure that they operate correctly. Each component is

tested independently, without other system components.

2. Module testing:

A module is a collection of dependent components such as an object class, an abstract data

type or some looser collection of procedures and functions. A module encapsulates related

components so it can be tested without other system modules.

3. Sub-system testing: (Integration Testing) (Design Oriented)

This phase involves testing collections of modules, which have been integrated into sub-

systems. Sub-systems may be independently designed and implemented. The most common problems that arise in large software systems are sub-system interface mismatches. The

sub-system test process should therefore concentrate on the detection of interface errors by

rigorously exercising these interfaces.

4. System testing:

The sub-systems are integrated to make up the entire system. The testing process is concerned

with finding errors that result from unanticipated interactions between sub-systems and system

components. It is also concerned with validating that the system meets its functional and non-

functional requirements.

5. Acceptance testing:

This is the final stage in the testing process before the system is accepted for operational use.

The system is tested with data supplied by the system client rather than simulated test data.

Acceptance testing may reveal errors and omissions in the system requirements definition (user-oriented) because real data exercises the system in different ways from the test data.

Acceptance testing may also reveal requirement problems where the system facilities do not

really meet the user's needs (functional) or the system performance (non-functional) is

unacceptable.

Acceptance testing is sometimes called alpha testing. Bespoke systems are developed for a

single client. The alpha testing process continues until the system developer and the client

agrees that the delivered system is an acceptable implementation of the system requirements.

When a system is to be marketed as a software product, a testing process called beta testing is

often used.

Beta testing involves delivering a system to a number of potential customers who agree to use

that system. They report problems to the system developers. This exposes the product to real

use and detects errors that may not have been anticipated by the system builders. After this

feedback, the system is modified and either released for further beta testing or for general sale.

What's normal practices of the QA specialists with perspective of software?

These are the normal practices of the QA specialists with perspective of software

[note: these are all QC activities, not QA activities.]

1 - Design review meetings with the system analyst, and if possible being part of requirement gathering

2 - Analysing the requirements and the design, and tracing the design with respect to the requirements

3 - Test planning

4 - Test case identification using different techniques (with respect to web-based applications and desktop applications)

5 - Test case writing (this part is to be assigned to the testing engineers)

6 - Test case execution (this part is to be assigned to the testing engineers)

7 - Bug reporting (this part is to be assigned to the testing engineers)

8 - Bug review and analysis, so that future bugs can be prevented by designing some standards

from low-level to high level (Testing in Stages)

Except for small programs, systems should not be tested as a single unit. Large systems are

built out of sub-systems, which are built out of modules that are composed of procedures and

functions. The testing process should therefore proceed in stages where testing is carried out

incrementally in conjunction with system implementation.

The most widely used testing process consists of five stages, moving from verification (process oriented, using white box testing techniques, i.e. tests that are derived from knowledge of the program's structure and implementation) toward validation (product oriented, using black box testing techniques, i.e. tests that are derived from the program specification):

Component testing

- Unit Testing

- Module Testing

Integrated testing

- Sub-system Testing

- System Testing

User testing

- Acceptance Testing

However, as defects are discovered at any one stage, they require program modifications to

correct them and this may require other stages in the testing process to be repeated.

Errors in program components, say may come to light at a later stage of the testing process.

The process is therefore an iterative one with information being fed back from later stages to

earlier parts of the process.

How to test and to get the difference between two images which is in the same window?

Answer1:

How are you doing your comparison? If you are doing it manually, then you should be able to

see any major differences. If you are using an automated tool, then there is usually a

comparison facility in the tool to do that.

Answer2:

Jasper Software is an open-source utility which can be compiled into C++ and has an imgcmp function which compares JPEG files in very good detail, as long as they have the same dimensions and number of components.

Answer3:

Rational has a comparison tool that may be used. I'm sure Mercury has the same tool.

Answer4:

The key question is whether we need a bit-by-bit exact comparison, which the current tools are

good at, or an equivalency comparison. What differences between these images are not

differences? Near-match comparison has been the subject of a lot of research in printer testing,

including an M.Sc. thesis at Florida Tech. It's a tough problem.
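One simple middle ground between bit-by-bit and equivalency comparison is a per-pixel tolerance; here is a rough sketch using the Pillow imaging library (the tolerance value is an arbitrary assumption, and real equivalency testing is much subtler, as noted above):

```python
from PIL import Image, ImageChops  # Pillow

def images_equivalent(path_a, path_b, tolerance=10):
    """Near-match comparison: same dimensions and every channel difference
    within `tolerance` (tolerance=0 gives a bit-by-bit exact comparison)."""
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB")
    if a.size != b.size:
        return False
    diff = ImageChops.difference(a, b)
    # getextrema() returns a (min, max) pair per channel; take the largest max.
    return max(hi for _, hi in diff.getextrema()) <= tolerance
```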

Testing Strategies


Strategy is a general approach to testing rather than a method of devising particular system or component tests.

Different strategies may be adopted depending on the type of system to be tested and the

development process used. The testing strategies are

Top-Down Testing

Bottom - Up Testing

Thread Testing

Stress Testing

Back-to-Back Testing

1. Top-down testing

Where testing starts with the most abstract component and works downwards.

2. Bottom-up testing

Where testing starts with the fundamental components and works upwards.

3. Thread testing

Which is used for systems with multiple processes where the processing of a transaction

threads its way through these processes.

4. Stress testing

Which relies on stressing the system by going beyond its specified limits and hence testing how

well the system can cope with over-load situations.

5. Back-to-back testing

Which is used when multiple versions of a system are available. The systems are tested together and their outputs are compared.

6. Performance testing

This is used to test the run-time performance of software.

7. Security testing.

This attempts to verify that protection mechanisms built into system will protect it from improper

penetration.

8. Recovery testing.

This forces software to fail in a variety ways and verifies that recovery is properly performed.

Large systems are usually tested using a mixture of these strategies rather than any single

approach. Different strategies may be needed for different parts of the system and at different

stages in the testing process.


Whatever testing strategy is adopted, it is always sensible to adopt an incremental approach to

sub-system and system testing. Rather than integrate all components into a system and then

start testing, the system should be tested incrementally. Each increment should be tested

before the next increment is added to the system. This process should continue until all

modules have been incorporated into the system.

When a module is introduced at some stage in this process, tests which were previously unsuccessful may now detect defects. These defects are probably due to interactions with the new module. The source of the problem is localized to some extent, thus simplifying defect location and repair.

Debugging

Brute force, backtracking, cause elimination.

Unit Testing (Coding): focuses on each module and whether it works properly. Makes heavy use of white box testing.

Integration Testing (Design): centered on making sure that each module works with another module; comprised of two kinds, top-down and bottom-up integration. Or: focuses on the design and construction of the software architecture. Makes heavy use of black box testing. (Either answer is acceptable.)

Validation Testing (Analysis): ensuring conformity with requirements.

Systems Testing (Systems Engineering): making sure that the software product works with the external environment, e.g., the computer system and other software products.

Driver and Stubs

Driver: dummy main program

Stub: dummy sub-program

This is because the modules are not yet stand-alone programs; therefore drivers and/or stubs have to be developed to test each unit.
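A minimal sketch of a driver and a stub in Python (all names are illustrative): the stub stands in for a collaborator that is not built yet, and the driver plays the role of the missing main program that feeds the unit its test inputs.

```python
def fetch_customer_tier_stub(customer_id):
    """Stub: dummy sub-program returning a canned answer."""
    return "gold" if customer_id == 42 else "standard"

def compute_discount(customer_id, amount, fetch_tier):
    """Unit under test; depends on a lower-level component via fetch_tier."""
    rate = 0.10 if fetch_tier(customer_id) == "gold" else 0.0
    return amount * (1 - rate)

def driver():
    """Driver: dummy main program that exercises the unit with test inputs."""
    assert compute_discount(42, 100.0, fetch_customer_tier_stub) == 90.0
    assert compute_discount(7, 100.0, fetch_customer_tier_stub) == 100.0
    print("driver: all unit checks passed")

if __name__ == "__main__":
    driver()
```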

When do we prepare a Test Plan?

[Should a Test Plan always be prepared for every new version or release of the product?]

For four or five features at once, a single plan is fine. Write new test cases rather than new test

plans. Write test plans for two very different purposes. Sometimes the test plan is a product;

sometimes it's a tool.

What is boundary value analysis?

Boundary value analysis is a technique for test data selection. A test engineer chooses values

that lie along data extremes. Boundary values include maximum, minimum, just inside

boundaries, just outside boundaries, typical values, and error values. The expectation is that, if

a system works correctly for these extreme or special values, then it will work correctly for all

values in between. An effective way to test code is to exercise it at its natural boundaries.

Boundary Value Analysis is a method of testing that complements equivalence partitioning. In

this case, data input as well as data output are tested. The rationale behind BVA is that the

errors typically occur at the boundaries of the data. The boundaries refer to the upper limit and

the lower limit of a range of values or more commonly known as the "edges" of the boundary.
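A small pytest sketch of boundary value analysis for a hypothetical rule that valid ages run from 18 to 65 inclusive; the chosen values cover the boundaries, just inside, just outside, a typical value, and an error value:

```python
import pytest

def accept_age(age):
    """Hypothetical function under test: valid ages are 18..65 inclusive."""
    return 18 <= age <= 65

@pytest.mark.parametrize("age,expected", [
    (17, False),  # just outside the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just inside the lower boundary
    (40, True),   # typical value
    (64, True),   # just inside the upper boundary
    (65, True),   # upper boundary
    (66, False),  # just outside the upper boundary
    (-1, False),  # error value
])
def test_age_boundaries(age, expected):
    assert accept_age(age) is expected
```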

Describe methods to determine if you are testing an application too much?

Answer1:

While testing, you need to keep in mind the following two things always:

-- Percentage of requirements coverage

-- Number of bugs present + rate of fall of bugs

Firstly, there may be a case where the requirements are covered quite adequately but the number of bugs does not fall. This indicates over-testing.

Secondly, there may be a case where those parts of the application are also being tested which are not affected by a CHANGE or BUG FIX. This is again a case of over-testing.

Third is the case you have suggested, with a slight modification, i.e. the bug rate has sufficiently dropped off but testing is still being done at the SAME levels as before.

Methods to determine if an application is being over-tested are--

1. Comparison of 'rate of drop in number of bugs' & 'effort invested in testing' (with all requirements having been met). That is, if the bug rate is falling (as generally happens in all applications) but the effort invested in man-hours does not fall, this implies over-testing.

2. Comparison of 'achievement of the bug rate threshold' & 'effort invested in testing' (with all requirements having been met). That is, if the bug rate has already reached the value agreed upon with the business and testing effort is still being invested with no or little reduction.

3. Verifying that the 'impact analysis' for 'change requests' has been done properly and is being implemented correctly. That is, checking that only the components of the AUT impacted by the new change are being tested, and that no other component is being tested unnecessarily. If unaffected components are being tested, this implies over-testing.

Answer2:

If the bug find rate has dropped off considerably, the test group should shift its testing strategy.

One of the key problems with heavy reliance on regression testing is that the bug find rate drops

off even though there are plenty of bugs not yet found. To find new bugs, you have to run new

tests.

Every test technique is stronger for some types of bugs and weaker for others. Many test

groups use only a few techniques. In our consulting, James Bach and I repeatedly worked with

companies that relied on only one or two main techniques.

When one technique, any one test technique, yields few bugs, shifting to new technique(s) is

likely to expose new problems.

At some point, you can use a measure that is only partially statistical -- if your bug find rate is

low AND you can't think of any new testing approaches that look promising, THEN you are at

the limit of your effectiveness and you should ship the product. That still doesn't mean that the

application is overtested. It just means that YOU'RE not going to find many new bugs.

Answer3:

The best way is to monitor the test defects over a period of time.

Refer to William Perry's book, where he has mentioned the concepts of 'under test' and 'over test'; in fact the data can be plotted to see the criteria.

Yes, one of the criteria is to monitor the defect rate and see if it is almost zero. A second method would be using test coverage, when it reaches 100% (or 100% requirement coverage).
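One way to plot that criterion is to track bugs found per hour of testing effort across cycles; in this sketch the per-cycle numbers are invented for illustration:

```python
# Hypothetical per-cycle data: (bugs found, tester hours invested).
cycles = [(120, 300), (80, 300), (25, 300), (4, 290), (3, 295)]

for i, (bugs, hours) in enumerate(cycles, start=1):
    print(f"cycle {i}: {bugs / hours:.3f} bugs/hour")
    # A find rate near zero while effort stays flat is one signal of
    # over-testing -- or of needing to switch to new test techniques.
```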

Procedural Software Testing Issues

Software testing in the traditional sense can miss a large number of errors if used alone. That is

why processes like Software Inspections and Software Quality Assurance (SQA) have been

developed. However, even testing all by itself is very time consuming and very costly. It also ties

up resources that could be used otherwise. When combined with inspections and/or SQA or

when formalized, it also becomes a project of its own requiring analysis, design and

implementation and supportive communications infrastructure. With it interpersonal problems

arise and need managing. On the other hand, when testing is conducted by the developers, it

will most likely be very subjective. Another problem is that developers are trained to avoid

errors. As a result they may conduct tests that prove the product is working as intended (i.e.

proving there are no errors) instead of creating test cases that tend to uncover as many errors

as possible.


How do I start with testing?

Think twice (or maybe more) before you choose a career. Are you interested in it, or do you just want to jump on the bandwagon?

Prerequisite

You can join a software development company as a tester if you can convince the interviewer that:

1. You have a knack for breaking software

2. You are aware of basic quality concepts and believe in them

3. You want to pursue testing as a career and not just try it

OO Software Testing Issues

A common way of testing OO software is testing-by-poking-around (Binder, 1995). In this case the

developer's goal is to show that the product can do something useful without crashing. Attempts

are made to "break" the product. If and when it breaks, the errors are fixed and the product is

then deemed "tested".

Testing-by-poking-around method of testing OO software is, in my opinion, as unsuccessful as

random testing of procedural code or design. It leaves the finding of errors up to chance.

Another common problem in OO testing is the idea that since a superclass has been tested, any

subclasses inheriting from it don't need to be.

This is not true because by defining a subclass we define a new context for the inherited

attributes. Because of interaction between objects, we have to design test cases to test each

new context and re-test the superclass as well to ensure proper working order of those objects.
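A short Python illustration of why an inherited method must be re-tested in the subclass's context (the account classes are invented for the example): the superclass's test expectations no longer hold once the subclass redefines the rules.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class OverdraftAccount(Account):
    def withdraw(self, amount):
        # New context: an overdraft of up to 100 is allowed.
        if amount > self.balance + 100:
            raise ValueError("overdraft limit exceeded")
        self.balance -= amount

# A withdrawal that a superclass test expects to fail is valid here, so the
# inherited interface needs its own test cases in the subclass's context.
acct = OverdraftAccount(50)
acct.withdraw(120)
assert acct.balance == -70
```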

Yet another misconception in OO is that if you do proper analysis and design (using the class

interface or specification), you don't need to test or you can just perform black-box testing only.

However, function tests only try the "normal" paths or states of the class. In order to test the

other paths or states, we need code instrumentation. Also it is often difficult to exercise

exception and error handling without examination of the source code.

What is the purpose of black box testing?

Answer1:

The main purpose of BB testing is to validate that the application works as the user will be operating it, and in the environments of their systems; this is how you do system testing and integration testing.

Skip it and you may lose time and money, but you may also lose quality and eventually customers!

Answer2:

"What is the purpose of black box testing?"

Black-box testing checks that the user interface and user inputs and outputs all work correctly.


Part of this is that error handling must work correctly. It's used in functional and system testing.

"We do everything in white box testing: - we check each module's function in the unit testing"

Who is "we"? Are you programmers or quality assurance testers? Usually, unit testing is done

by programmers, and white-box testing would be how they'd do it.

"- once unit test result is ok, means that modules work correctly (according to the requirement

documemts)"

Not quite. It means that on a stand-alone basis, each module is okay. White box testing only

tests the internal structure of the program, the code paths. Functional testing is needed to test

how the individual components work together, and this is best done from an external

perspective, meaning by using the software the way an end user would, without reference to the

code (which is what black-box testing is).

"if we doing testing again in black box will we lose time and money?"

No, the opposite: You'll lose money from having to repair errors you didn't catch with the white-

box testing if you don't do some black-box testing. It's far more expensive to fix errors after

release than to test for them and fix them early on.

But again, who is "we"? The black box testers should not be the people who did the

programming; they should be the QA team -- also some end users for the usability testing.

Now that I've said that, good programmers will run some basic black-box tests before handing

the application to QA for testing. This isn't a substitute for having QA do the tests, but it's a lot

quicker for the programmer to find and fix an error right away than to have to go through the

whole process of reporting a bug, then fixing and releasing a new build, then retesting.

How do you create a test plan/design?

Test scenarios and/or cases are prepared by reviewing functional requirements of the release

and preparing logical groups of functions that can be further broken into test procedures. Test

procedures define test conditions, data to be used for testing and expected results, including

database updates, file outputs, report results. Generally speaking...

* Test cases and scenarios are designed to represent both typical and unusual situations that

may occur in the application.

* Test engineers define unit test requirements and unit test cases. Test engineers also execute

unit test cases.

* It is the test team that, with assistance of developers and clients, develops test cases and

scenarios for integration and system testing.

* Test scenarios are executed through the use of test procedures or scripts.

* Test procedures or scripts define a series of steps necessary to perform one or more test

scenarios.

* Test procedures or scripts include the specific data that will be used for testing the process or

transaction.

* Test procedures or scripts may cover multiple test scenarios.

* Test scripts are mapped back to the requirements and traceability matrices are used to ensure


each test is within scope.

* Test data is captured and baselined prior to testing. This data serves as the foundation for

unit and system testing and used to exercise system functionality in a controlled environment.

* Some output data is also base-lined for future comparison. Base-lined data is used to support

future application maintenance via regression testing.

* A pretest meeting is held to assess the readiness of the application and the environment and

data to be tested. A test readiness document is created to indicate the status of the entrance

criteria of the release.

Inputs for this process:

* Approved Test Strategy Document.

* Test tools, or automated test tools, if applicable.

* Previously developed scripts, if applicable.

* Test documentation problems uncovered as a result of testing.

* A good understanding of software complexity and module path coverage, derived from general

and detailed design documents, e.g. software design document, source code, and software

complexity data.

Outputs for this process:

* Approved documents of test scenarios, test cases, test conditions, and test data.

* Reports of software design issues, given to software developers for correction.
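As a sketch of the traceability idea mentioned above, a matrix can be as simple as a mapping from requirements to the test cases that cover them, with uncovered requirements flagged (all IDs here are hypothetical):

```python
requirements = ["REQ-1", "REQ-2", "REQ-3"]

# Which requirements each test case traces back to.
test_cases = {
    "TC-01": ["REQ-1"],
    "TC-02": ["REQ-1", "REQ-2"],
    "TC-03": [],  # traces to no requirement: possibly out of scope
}

# Build the matrix: requirement -> covering test cases.
matrix = {req: [tc for tc, reqs in test_cases.items() if req in reqs]
          for req in requirements}

for req, tcs in matrix.items():
    print(f"{req}: {', '.join(tcs) if tcs else 'NOT COVERED'}")
```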

What is the purpose of a test plan?

Reason number 1: We create a test plan because preparing it helps us to think through the

efforts needed to validate the acceptability of a software product.

Reason number 2: We create a test plan because it can and will help people outside the test

group to understand the why and how of product validation.

Reason number 3: We create a test plan because, in regulated environments, we have to have

a written test plan.

Reason number 4: We create a test plan because the general testing process includes the

creation of a test plan.

Reason number 5: We create a test plan because we want a document that describes the

objectives, scope, approach and focus of the software testing effort.

Reason number 6: We create a test plan because it includes test cases, conditions, the test

environment, a list of related tasks, pass/fail criteria, and risk assessment.

Reason number 7: We create test plan because one of the outputs for creating a test strategy is

an approved and signed off test plan document.

Reason number 8: We create a test plan because the software testing methodology is a three-step process, and one of the steps is the creation of a test plan.

Reason number 9: We create a test plan because we want an opportunity to review the test

plan with the project team.

Reason number 10: We create a test plan document because test plans should be documented,

so that they are repeatable.

Can we prepare Test Plan without SRS?

It is not always mandatory that you should have the SRS document to prepare a test plan. This kind of document hierarchy is maintained to uphold organizational standards and to have a clear understanding of things.

Yes, you can prepare a test plan directly without the SRS, when the requirements are clear with your clients and when your URD (User Requirement Document) is supportive enough to clarify the issues. Though we don't have the SRS, clients will be giving some information; the SRS mainly contains product information. But we will not know the testing effort if we don't have the SRS: the SRS specifies how many cycles we are testing, which platforms we are testing on, etc.

Actually there won't be any harm in doing so, because ultimately you will send your test plan document to your client, and only after getting approval from him do you start testing.

(Note: The SRS is the document which you get in the analysis phase of your software development. The test plan is the document which contains the details of the product in terms of test strategy, scope of testing, types of tests to be conducted, risk management, mention of the automation tool, the bug tracking tool, etc.)

How do test plan templates look like?

The test plan document template helps to generate test plan documents that describe the

objectives, scope, approach and focus of a software testing effort. Test document templates are

often in the form of documents that are divided into sections and subsections. One example of a

template is a 4-section document where section 1 is the description of the "Test Objective",

section 2 is the description of the "Scope of Testing", section 3 is the description of the "Test

Approach", and section 4 is the "Focus of the Testing Effort".

All documents should be written to a certain standard and template. Standards and templates

maintain document uniformity. They also help in learning where information is located, making it

easier for a user to find what they want. With standards and templates, information will not be

accidentally omitted from a document. Once Rob Davis has learned and reviewed your

standards and templates, he will use them. He will also recommend improvements and/or

additions.

A software project test plan is a document that describes the objectives, scope, approach and

focus of a software testing effort. The process of preparing a test plan is a useful way to think

through the efforts needed to validate the acceptability of a software product. The completed

document will help people outside the test group understand the why and how of product

validation.

How to test desktop systems?

You will likely have to use a programming or scripting language to interact with the service

directly. You will have more control over the raw information that way.

You will have to determine what the service is supposed to do and how it is supposed to interact

with other applications and services. A data dictionary likely exists. It may not be called that

however. What this document does is explain what commands the service will respond to and

what sort of data should be sent. You will have to use this document to do your testing. Get

close to the person or people who created the document or the service and expect them to keep

you in the loop when changes take place (it doesn't help anyone if you report a defect and it's

really only reflecting an expected change in the operation of the service).

Desktop applications are generally designed to run and quit. You have to be concerned with

memory leaks and system usage.

How do you create a test strategy?

The test strategy is a formal description of how a software product will be tested. A test strategy

is developed for all levels of testing, as required. The test team analyzes the requirements,

writes the test strategy and reviews the plan with the project team. The test plan may include

test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk

assessment.

Inputs for this process:

* A description of the required hardware and software components, including test tools. This

information comes from the test environment, including test tool data.

* A description of roles and responsibilities of the resources required for the test and schedule

constraints. This information comes from man-hours and schedules.

* Testing methodology. This is based on known standards.

* Functional and technical requirements of the application. This information comes from

requirements, change request, technical and functional design documents.

* Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:

* An approved and signed off test strategy document, test plan, including test cases.

* Testing issues requiring resolution. Usually this requires additional negotiation at the project

management level.

How do you estimate testing effort?

Time estimation method for the testing process (note: the following method is based on a use-case-driven specification).

Step 1: Count the number of use cases (NUC) of the system.

Step 2: Set the average number of test cases per use case (ATTC) as per the test plan.

Step 3: Estimate the total number of test cases (NTC): total number of test cases = number of use cases x average test cases per use case.

Step 4: Set the average execution time (AET) per test case (ideally 15 minutes, depending on your system).

Step 5: Calculate the total execution time (TET): TET = total number of test cases x AET.

Step 6: Calculate the test case creation time (TCCT); usually we take 1.5 times TET: TCCT = 1.5 x TET.

Step 7: Calculate the retest case execution time (RTCE), for retesting; usually we take 0.5 times TET: RTCE = 0.5 x TET.

Step 8: Set the report generation time (RGT); usually we take 0.2 times TET: RGT = 0.2 x TET.

Step 9: Set the test environment setup time (TEST); this also depends on the test plan.

Step 10: Total estimated time = TET + TCCT + RTCE + RGT + TEST + some buffer.

Example:

Total number of use cases (NUC): 227
Average test cases per use case (ATTC): 10
Estimated test cases (NTC): 227 x 10 = 2270
Total execution time (TET): 2270 / 4 = 567.5 hr (here 4 is the number of test cases executed per hour, i.e. each test case takes 15 minutes to execute)
Test case creation time (TCCT): 1.5 x 567.5 = 851.25 hr
Retest case execution time (RTCE): 0.5 x 567.5 = 283.75 hr
Report generation time (RGT): 0.2 x 567.5 = 113.5 hr
Test environment setup time (TEST): 20 hr
-------------------
Total: 1836 hrs + buffer
-------------------
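A minimal Python sketch of this calculation, with the example's numbers plugged in (the function name is an illustrative assumption; the buffer is left to the caller, as in step 10):

    def estimate_testing_effort(num_use_cases, test_cases_per_use_case,
                                cases_per_hour, env_setup_hours):
        """Use-case-driven estimate following steps 1-10 above (in hours)."""
        ntc = num_use_cases * test_cases_per_use_case  # step 3
        tet = ntc / cases_per_hour                     # step 5
        tcct = 1.5 * tet                               # step 6
        rtce = 0.5 * tet                               # step 7
        rgt = 0.2 * tet                                # step 8
        return tet + tcct + rtce + rgt + env_setup_hours  # step 10, before buffer

    # Numbers from the example: 227 use cases, 10 test cases each,
    # 4 test cases executed per hour, 20 hours of environment setup.
    print(estimate_testing_effort(227, 10, 4, 20))  # 1836.0, plus buffer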

What is the purpose of test strategy?

Reason number 1: The number one reason for writing a test strategy document is to have a

signed, sealed, and delivered, FDA (or FAA) approved document, where the document includes

a written testing methodology, test plan, and test cases.

Reason number 2: Having a test strategy does satisfy one important step in the software testing

process.

Reason number 3: The test strategy document tells us how the software product will be tested.

Reason number 4: The creation of a test strategy document presents an opportunity to review the test plan with the project team.

Reason number 5: The test strategy document describes the roles, responsibilities, and the

resources required for the test and schedule constraints.

Reason number 6: When we create a test strategy document, we have to put into writing any

testing issues requiring resolution (and usually this means additional negotiation at the project

management level).

Reason number 7: The test strategy is decided first, before lower level decisions are made on

the test plan, test design, and other testing issues.

What's a Quality Approach document? What should the contents be?

Answer1:

You should start by thinking about your company's business type, and accordingly define the different processes for your organization (e.g. procurement, CM, etc.). Then think over the different metrics you will be calculating for each process, and define them with formulas, the kind of analysis you will be doing, and when a red flag should be raised. Decide on your audit policies, frequencies, etc. Think about a change control board in case any process needs modification.

Answer2:

By defining the process I mean the structured collection of practices that describe the characteristics of the work and its quality. Writing the process means creating a system within which everyone will work; the benefits of it are a common language and a shared vision across the organization, and it provides a framework for prioritizing actions.

From an implementation point of view, you first need to break the complete life cycle of your product into different meaningful steps and set goals for each phase. You can create document templates which everyone shall follow, define the dependencies among different groups for each project, and define the risks for each project and the mitigation plan for each risk.

You can read the CMMI model and customize it as per your organization's goals. For a start-up company, in my personal opinion, it is better to define and reach the Level 3 process first and then go for Level 5.

What does a test strategy document contain?

The test strategy document contains test cases, conditions, the test environment, a list of

related tasks, pass/fail criteria and risk assessment. The test strategy document is a formal

description of how a software product will be tested. What is the test strategy document

developed for? It is developed for all levels of testing, as required. How is it written, and who writes it? It is the test team that analyzes the requirements, writes the test strategy, and reviews

the plan with the project team.

Why should QA not report to development?

Based on research from the Quality Assurance Institute, the percentage of quality groups in each reporting location is noted below.

50% report to the senior IT manager. This is the best positioning, because it gives the quality manager immediate access to the IT manager to discuss and promote quality issues; when the quality manager reports elsewhere, quality issues may not be raised to the appropriate level or receive the necessary action.

25% report to the manager of systems/programming.

15% report to the manager of operations.

10% are outside the IT function.

Which of the following statements about regression testing are true?

1---Regression testing must consist of a fixed set of tests to create a base line

2---Regression tests should be used to detect defects in new feature

3---Regression testing can be run on every build

4--- Regression testing should be targeted areas of high risk and known code change

5---Regression testing when automated, is highly effective in preventing defects.

Answer1:

1---Regression testing must consist of a fixed set of tests to create a base line

Don't think it is true as a "must" -- it

depends on whether your regression testing style involves repeating identical tests or redoing

testing in previously tested areas with similar tests or tests that address the same risks. For

example, some people do regression testing with tests whose specific parameters are

determined randomly. They broaden the set of values they test while achieving essentially the

same testing. Second example--some regression test suites include random stringing together

of test cases (they include load testing and duration testing in their regression series, reporting

their results as part of the assessment of each build). Depending on your theory of the _point_

of regression testing, these may or may not be entirely valid regression tests.

2---Regression tests should be used to detect defects in new feature

How do you create new regression tests? Should you design new tests as standalone, or should

you develop a strategy in which the tests you use for bug-hunting are designed to be reusable

as regression tests? If the latter, and I have certainly heard some skilled testers argue that the

latter approach worked well in their situation, then #2 is sometimes true.

3---Regression testing can be run on every build

This is true, though it might be silly and a big waste of time.

4--- Regression testing should be targeted areas of high risk and known code change

Hmmm, there's an area of computer science called program slicing and one of the objectives of

this class of work is to figure out how to restrict the regression test suite to a smaller number of

tests, which test only those things that might have been impacted by a change. Bob Glass has

criticized the results of some of this work, but if #4 is false, some Ph.D.'s and big research

grants should be retracted.

5---Regression testing when automated, is highly effective in preventing defects.

Unit-level automated regression testing is highly effective in preventing defects--read up on test-

driven development.

Answer2:

Let me explain why I think 2 & 5 are false

2---Regression tests should be used to detect defects in new feature

Since regression tests only address existing features and functionality, they can't find defects in new features. They can only find where existing features and functionality have been broken by changes.

5---Regression testing when automated, is highly effective in preventing defects.

Since no tests prevent defects, they only find them, it's impossible to prevent defects with a

regression test. I will add, however, that if a developer can use an automated regression test to

test their own code before submitting it to the code repository (say in the form of a series of unit

tests coupled to a library, etc.) then you could in some way prevent defects with a regression

test.

I also don't like 1- and 4. 1- since a regression test suite grows as the product does. Therefore

the tests are not fixed. 4- because a regression test tests the whole application, not just a

targeted area. In the past, I have used the concept of test depth (level 1 being the basic regression tests--higher numbers reflect additional functionality) so you could run a level one regression on the whole program but do level three on the transport layer "because we've updated the library".

An automated set of tests would be the most likely way to make 3- a possibility. It is unlikely that

with daily builds, as many companies run their build process, that anything short of an

automated regression test suite would be able to be run daily with any efficacy. If the builds were weekly, then a manual regression test would be feasible.

Answer3:

As per the definition of regression testing and actual work practice, if you have to answer this question, then options 3 & 4 are the best choices among all. The reasoning behind this is:

3---Regression testing can be run on every build. This is a normal phenomenon if the build comes on a weekly basis or it is an RC build. Since nothing is mentioned about a daily build, only that it can be run on every build, this can be correct.

4---Regression testing should be targeted at areas of high risk and known code change. This is also true in most situations. It is not universally true, but in certain conditions where there is a code change, only the related modules are tested in the regression automation rather than the whole code.

5 is not true because in regression we normally detect defects, not prevent them.

How do you execute tests?

Execution of tests is completed by following the test documents in a methodical manner. As

each test procedure is performed, an entry is recorded in a test execution log to note the

execution of the procedure and whether or not the test procedure uncovered any defects.

Checkpoint meetings are held throughout the execution phase. Checkpoint meetings are held

daily, if required, to address and discuss testing issues, status and activities.

* The output from the execution of test procedures is known as test results. Test results are

evaluated by test engineers to determine whether the expected results have been obtained. All

discrepancies/anomalies are logged and discussed with the software team lead, hardware test

lead, programmers, software engineers and documented for further investigation and resolution.

Every company has a different process for logging and reporting bugs/defects uncovered during

testing.

* Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a

test summary report. The severity of a problem, found during system testing, is defined in

accordance to the customer's risk assessment and recorded in their selected tracking tool.

* Proposed fixes are delivered to the testing environment, based on the severity of the problem.

Fixes are regression tested and flawless fixes are migrated to a new baseline. Following

completion of the test, members of the test team prepare a summary report. The summary

report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead.

* After a particular level of testing has been certified, it is the responsibility of the Configuration

Manager to coordinate the migration of the release software components to the next test level,

as documented in the Configuration Management Plan. The software is only migrated to the

production environment after the Project Manager's formal acceptance.

* The test team reviews test document problems identified during testing, and updates

documents where appropriate.

Inputs for this process:

* Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.

* Test tools, including automated test tools, if applicable.

* Developed scripts.

* Changes to the design, i.e. Change Request Documents.

* Test data.

* Availability of the test team and project team.

* General and Detailed Design Documents, i.e. Requirements Document, Software Design

Document.

* Software that has been migrated to the test environment, i.e. unit tested code, via the

Configuration/Build Manager.

* Test Readiness Document.

* Document Updates.

Outputs for this process:

* Log and summary of the test results. Usually this is part of the Test Report. This needs to be

approved and signed-off with revised testing deliverables.

* Changes to the code, also known as test fixes.

* Test document problems uncovered as a result of testing. Examples are Requirements

document and Design Document problems.

* Reports on software design issues, given to software developers for correction. Examples are

bug reports on code issues.

* Formal record of test incidents, usually part of problem tracking.

* Base-lined package, also known as tested source and object code, ready for migration to the

next level.

What is a requirements test matrix?

The requirements test matrix is a project management tool for tracking and managing testing

efforts, based on requirements, throughout the project's life cycle.

The requirements test matrix is a table, where requirement descriptions are put in the rows of

the table, and the descriptions of testing efforts are put in the column headers of the same

table.

The requirements test matrix is similar to the requirements traceability matrix, which is a

representation of user requirements aligned against system functionality. The requirements

traceability matrix ensures that all user requirements are addressed by the system integration

team and implemented in the system integration effort.

The requirements test matrix is a representation of user requirements aligned against system

testing. Similarly to the requirements traceability matrix, the requirements test matrix ensures

that all user requirements are addressed by the system test team and implemented in the

system testing effort.

Can you give me a requirements test matrix template?

For a requirements test matrix template, you want to visualize a simple, basic table that you

create for cross-referencing purposes.

Step 1: Find out how many requirements you have.

Step 2: Find out how many test cases you have.

Step 3: Based on these numbers, create a basic table. If you have a list of 90 requirements and

360 test cases, you want to create a table of 91 rows and 361 columns.

Step 4: Focus on the first column of your table. One by one, copy all your 90 requirement

numbers, and paste them into rows 2 through 91 of the table.

Step 5: Now switch your attention to the first row of the table. One by one, copy all your 360

test case numbers, and paste them into columns 2 through 361 of the table.

Step 6: Examine each of your 360 test cases, and, one by one, determine which of the 90

requirements they satisfy. If, for the sake of this example, test case number 64 satisfies

requirement number 12, then put a large "X" into cell 13-65 of your table... and then you have it;

you have just created a requirements test matrix template that you can use for cross-referencing

purposes.
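As a rough illustration, the table from the steps above can be generated programmatically. Here is a minimal Python sketch (the 90/360 sizes and the single coverage pair, test case 64 satisfying requirement 12, come from the example; the file name is arbitrary):

    import csv

    num_reqs, num_cases = 90, 360
    coverage = {(12, 64)}  # (requirement, test case) pairs known to be covered

    # First row holds the test case numbers; first column holds the
    # requirement numbers, giving the 91 x 361 table described above.
    header = ["REQ/TC"] + [f"TC-{c}" for c in range(1, num_cases + 1)]
    rows = [header]
    for r in range(1, num_reqs + 1):
        rows.append([f"REQ-{r}"] +
                    ["X" if (r, c) in coverage else ""
                     for c in range(1, num_cases + 1)])

    with open("requirements_test_matrix.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)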

What metrics are used for bug tracking?

Metrics that can be used for bug tracking include the following: the total number of bugs, the total number of bugs that have been fixed, the number of new bugs per week, and the number of fixes per week. Bug tracking metrics can be used to determine when to stop testing, for example, when the bug rate falls below a certain level. You can learn to use defect tracking software.
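A minimal sketch of how such metrics and a stopping rule might be computed from a bug log (the record layout and the threshold of 3 new bugs per week are illustrative assumptions, not from any particular defect tracking tool):

    from collections import Counter
    from datetime import date

    # Each record: (date reported, fixed yet?). Illustrative data only.
    bugs = [
        (date(2024, 1, 1), True), (date(2024, 1, 3), True),
        (date(2024, 1, 9), False), (date(2024, 1, 10), True),
    ]

    total = len(bugs)
    fixed = sum(1 for _, is_fixed in bugs if is_fixed)
    new_per_week = Counter(d.isocalendar()[1] for d, _ in bugs)  # new bugs per ISO week

    # Example stopping rule: stop when the latest week's new-bug count
    # falls below a chosen threshold.
    latest_week = max(new_per_week)
    stop_testing = new_per_week[latest_week] < 3

    print(total, fixed, dict(new_per_week), stop_testing)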

In the QA team, everyone talks about process. What exactly are they talking about? Are there different types of processes?

Answer1:

When you talk about "process" you are generally talking about the actions used to accomplish a

task.

Here's an example: How do you solve a jigsaw puzzle?

You start with a box full of oddly shaped pieces. In your mind you come up with a strategy for

matching two pieces together (or no strategy at all and simply grab random pieces until you find

a match), and continue on until the puzzle is completed.

If you were to describe the *way* that you go about solving the puzzle you would be describing

the process.

Some follow-up questions you might think about include things like:

- How much time did it take you to solve the puzzle?

- Do you know of any skills, tricks or practices that might help you solve the puzzle quicker?

- What if you try to solve the puzzle with someone else? Does that help you go faster, or

slower? (why or why not?) Can you have *too* many people on this one task?

- To answer your second question, I'll ask *you* the question: Are there different ways that

people can solve a jigsaw puzzle?

There are many interesting process-related questions, ideas and theories in Quality Assurance.

Generally the identification of workplace processes leads to questions of improvement in

efficiency and productivity. The motivation behind that is to try and make the processes as

efficient as possible so as to incur the least amount of time and expense, while providing a

general sense of repeatability, visibility and predictability in the way tasks are performed and

completed.

The idea behind this is generally good, but the execution is often flawed. That is what makes

QA so interesting. You see, when you work with people and processes, it is very different than

working with the processes performed by machines. Some people in QA forget that distinction

and often become disillusioned with the whole thing.

If you always remember to approach processes in the workplace with a people-centric view, you

should do fine.

Answer2:

There is:

* Waterfall

* Spiral

* Rapid prototype

* Clean room

* Agile (XP, Scrum, ...)

What metrics are used for test report generation?

Metrics that can be used for test report generation include...

McCabe metrics: cyclomatic complexity metric (v(G)), actual complexity metric (AC), module

design complexity metric (iv(G)), essential complexity metric (ev(G)), pathological complexity

metric (pv(G)), design complexity metric (S0), integration complexity metric (S1), object

integration complexity metric (OS1), global data complexity metric (gdv(G)), data complexity

metric (DV), tested data complexity metric (TDV), data reference metric (DR), tested data

reference metric (TDR), maintenance severity metric (maint_severity), data reference severity

metric (DR_severity), data complexity severity metric (DV_severity), global data severity metric

(gdv_severity).

McCabe object-oriented software metrics: encapsulation percent public data (PCTPUB), access

to public data (PUBDATA), polymorphism percent of unoverloaded calls (PCTCALL), number of

roots (ROOTCNT), fan-in (FANIN), quality maximum v(G) (MAXV), maximum ev(G) (MAXEV),

and hierarchy quality (QUAL).

Other object-oriented software metrics: depth (DEPTH), lack of cohesion of methods (LOCM), number of children (NOC), response for a class (RFC), and weighted methods per class (WMC).

Halstead software metrics: program length, program volume, program level and program

difficulty, intelligent content, programming effort, error estimate, and programming time.

Line count software metrics: lines of code, lines of comment, lines of mixed code and

comments, and lines left blank.

What is quality plan?

Answer1:

The test plan is the document created before starting the testing process. It includes the types of testing that will be performed, the high-level scope of the project, the environmental requirements of the testing process, which automated testing tools will be used (if available), and the schedule of each test, i.e. when it will start and end.

Answer2:

You should not only understand what a quality plan is, but you should understand why you're making it. I don't believe that "because I was told to do so" is a good enough reason. If the person who told you to create it can't tell you 1) what it is, and 2) how to create it, I don't think that they actually know why it's needed. That breaks the primary rule of all plans used in testing: we write quality plans for two very different purposes. Sometimes the quality plan is a product; sometimes it's a tool. It's too easy, but also too expensive, to confuse these goals.

If it's not being used as a tool, don't waste your time (and your company's money) doing this.

What are the five dimensions of the Risks?

Schedule: Unrealistic schedules, exclusion of certain activities when chalking out a schedule

etc. could be deterrents to project delivery on time. An unstable communication link can be considered a probable risk if testing is carried out from a remote location.

Client: Ambiguous requirements definition, clarifications on issues not being readily available,

frequent changes to the requirements etc. could cause chaos during project execution.

Human Resources: Non-availability of sufficient resources with the skill level expected on the project; attrition of resources - appropriate training schedules must be planned for resources to keep the knowledge level at par with the resources quitting. Underestimating the training effort may have an impact on the project delivery.

System Resources: Non-availability of, or delay in procuring, critical computer resources (hardware, software tools, or software licenses) will have an adverse impact.

Quality: Compound factors like lack of resources along with a tight delivery schedule and

frequent changes to requirements will have an impact on the quality of the product tested.

What is good code?

Good code is code that works, is free of bugs, and is readable and maintainable. Organizations

usually have coding standards all developers should adhere to, but every programmer and

software engineer has different ideas about what is best and what are too many or too few

rules. We need to keep in mind that excessive use of rules can stifle both productivity and

creativity. Peer reviews and code analysis tools can be used to check for problems and enforce

standards.

Why is back-end testing required if we are going to check the front end? Why do we need to do unit testing if all the features are being tested in system testing? What extra things are tested in unit testing which cannot be tested in system testing?

Answer1:

Assume that you're thinking client-server or web. If you test the application on the front end only

you can see if the data was stored and retrieved correctly. You can't see if the servers are in an error state or not. Many server processes are monitored by another process; if they crash, they are restarted. You can't see that without looking at it.

The data may not be stored correctly either but the front end may have cached data lying

around and it will use that instead. The least you should be doing is verifying the data as stored

in the database.

It is easier to test data being transferred on the boundaries and see the results of those

transactions when you can set the data in a driver.

Answer2:

Back-end testing: basically the requirement for this testing depends on your project. Say your project is a ticket booking system: on the front end you will be provided with an interface where you can book the ticket by giving the appropriate details (like the place to go, the time when you want to go, etc.). It will have a data storage system (a database, a spreadsheet, etc.), which is the back end for storing the details entered by the user. After submitting the details, you might be given a correct acknowledgement, but in the back end the details might not be updated correctly in the database because of wrongly developed logic. That would cause a major problem.

Regarding unit-level testing and system testing: unit-level testing is for the basic checks, i.e. whether the application works with the basic requirements. This will be done by developers before delivering to QA. In system testing, in addition to the unit checks, you will be performing all the checks (all possible integrated checks which are required). Basically this will be carried out by a tester.

Answer3:

Ever heard about the divide and conquer tactic? It is the same method applied in back-end and front-end testing.

A good back-end test will help minimize the burden of the front-end test. Another point is that you can test the back end while developing the front end, so true parallelism can be achieved.

Back-end testing has another problem which must be addressed before the front end can use it: concurrency. Building a scenario to test concurrency is a formidable task. A complex thing is hard to test; creating such scenarios will make you unsure which tests you have already done and which you haven't. What we need is an effective method to test our application, and the simplest method I know is divide and conquer.

Answer4:

A wide range of errors are hard to see if you don't see the code. For example, there are many

optimizations in programs that treat special cases. If you don't see the special case, you don't

test the optimization. Also, a substantial portion of most programs is error handling. Most

programmers anticipate more errors than most testers.

Programmers find and fix the vast majority of their own bugs. This is cheaper, because there is

no communication overhead, faster because there is no delay from tester-reporter to

programmer, and more effective because the programmer is likely to fix what she finds, and she

is likely to know the cause of the problems she sees. Also, the rapid feedback gives the

programmer information about the weaknesses in her programming that can help her write

better code.

Many tests -- most boundary tests -- are done at the system level primarily because we don't

trust that they were done at the unit level. They are wasteful and tedious at the system level. I'd

rather see them properly done and properly automated in a suite of programmer tests.

What is the difference between verification and validation?

Verification takes place before validation, and not vice versa.

Verification evaluates documents, plans, code, requirements, and specifications. Validation, on

the other hand, evaluates the product itself.

The inputs of verification are checklists, issues lists, walkthroughs and inspection meetings,

reviews and meetings. The input of validation, on the other hand, is the actual testing of an

actual product.

The output of verification is a nearly perfect set of documents, plans, specifications, and

requirements document. The output of validation, on the other hand, is a nearly perfect, actual

product.

What is the difference between efficient and effective?

"Efficient" means having a high ratio of output to input; which means working or producing with

a minimum of waste. For example, "An efficient engine saves gas." Or, "An efficient test

engineer saves time".

"Effective", on the other hand, means producing or capable of producing an intended result, or

having a striking effect. For example, "For rapid long-distance transportation, the jet engine is

more effective than a witch's broomstick". Or, "For developing software test procedures,

engineers specializing in software testing are more effective than engineers who are

generalists".

How effective can we implement six sigma principles in a very large software services

organization?

Answer1:

For an effective way of implementing six sigma, there are quite a few things one needs:

1. Management buy-in
2. A dedicated team, both drivers as well as adopters
3. Training
4. Culture building - if you have a pro-process culture, life is easy
5. Sustained effort over a period towards transforming people, thoughts and actions

Personally, technical content is never a challenge, but adoption is a challenge.

Answer2:

"Six sigma" is a combination of process recommendations and mathematical model. The name

"six sigma" reflects the notion of reducing variation so much that errors -- events out of

tolerance -- are six standard deviations from a desired mean. The mathematics are at the core

of the process implementation.

The problem is that software is not hardware. Software defects are designed in, not the result of

manufacturing variation.

The other side of six sigma is the drive for continuous improvement. You don't need the six

sigma math for this and the concept has been around long before the six sigma movement.

To improve anything, you need some type of indicator of its current state and a way to tell that it

is improved. Plus determination to improve it. Management support helps.

Answer3:

There are different methodologies adopted in six sigma. However, it is commonly referenced from the variance-based approach. If you look at six sigma from that angle, for software services, the measurement system fundamentally has to be reliable - the industry has not reached the maturity level of the manufacturing industry, where it fits to a T. The differences between the software and hardware/manufacturing industries are slightly difficult to address.

There are some areas where you can adopt six sigma in its full statistical form (e.g. in-process error rate, productivity improvements, etc.); some areas are difficult. The narrower the problem area is, the better it gets, even in software services, at adopting the statistical method.

There are methodologies that have a bundle of tools, along with statistical techniques, that are used on the full SDLC.

A generic observation is that six sigma helps if we look for a proper fit of the methodology to the purpose. Else doubts creep in.

What stage of bug fixing is the most cost effective?

Bug prevention techniques (i.e. inspections, peer design reviews, and walk-throughs) are more

cost effective than bug detection.

What is Defect Life Cycle.?

Answer1:

Defect life cycle is the sequence of stages a defect goes through after it is identified:

New (when the defect is identified)
Accepted (when the development team and QA team accept that it's a bug)
In Progress (when a person is working to resolve the defect)
Resolved (once the defect is resolved)
Completed (signed off by someone who can take up the responsibility, e.g. a team lead)
Closed/Reopened (retested by the test engineer, who updates the status of the bug)

Answer2:

Defect Life Cycle is nothing but the various phases a Bug undergoes after it is raised or

reported.

A general Interview answer can be given as:

1. New or Opened

2. Assigned

3. Fixed

4. Tested

5. Closed.
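As a rough sketch, a general life cycle like the one in Answer2 can be modeled as a small state machine in Python. The state names and transitions below simply mirror the lists above (real bug trackers differ), with Reopened modeling a fix that fails retesting:

    from enum import Enum

    class BugState(Enum):
        NEW = 1        # new or opened
        ASSIGNED = 2
        FIXED = 3
        TESTED = 4
        CLOSED = 5
        REOPENED = 6

    # Allowed transitions between states.
    TRANSITIONS = {
        BugState.NEW: {BugState.ASSIGNED},
        BugState.ASSIGNED: {BugState.FIXED},
        BugState.FIXED: {BugState.TESTED},
        BugState.TESTED: {BugState.CLOSED, BugState.REOPENED},
        BugState.REOPENED: {BugState.ASSIGNED},
        BugState.CLOSED: set(),
    }

    def move(current, nxt):
        if nxt not in TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
        return nxt

    # A bug that is fixed, fails retesting once, and is finally closed.
    state = BugState.NEW
    for nxt in [BugState.ASSIGNED, BugState.FIXED, BugState.TESTED,
                BugState.REOPENED, BugState.ASSIGNED, BugState.FIXED,
                BugState.TESTED, BugState.CLOSED]:
        state = move(state, nxt)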

What is the difference between a software bug and software defect?

"Software bug" is nonspecific; it means an inexplicable defect, error, flaw, mistake, failure, fault,

or unwanted behavior of a computer program. Other terms, e.g. "software defect", or "software

failure", are more specific.

While the word "bug" has been a part of engineering jargon for many-many decades; many-

many decades ago even Thomas Edison, the great inventor, wrote about a "bug" - today there

are many who believe the word "bug" is a reference to insects that caused malfunctions in early

electromechanical computers.

In software testing, the difference between "bug" and "defect" is small, and also depends on the

end client. For some clients, bug and defect are synonymous, while others believe bugs are

subsets of defects.

Difference number one: In bug reports, the defects are easier to describe.

Difference number two: In my bug reports, it is easier to write descriptions as to how to replicate

defects. In other words, defects tend to require only brief explanations.

Commonality number one: We, software test engineers, discover both bugs and defects, before

bugs and defects damage the reputation of our company.

Commonality number two: We, software QA engineers, use the software much like real users

would, to find both bugs and defects, to find ways to replicate both bugs and defects, to submit

bug reports to the developers, and to provide feedback to the developers, i.e. tell them if they've

achieved the desired level of quality.

Commonality number three: We, software QA engineers, do not differentiate between bugs and

defects. In our reports, we include both bugs and defects that are the results of software testing.

Are developers smarter than tester? Any suggestion about the future prospects and

technicality involvedin the testing job?

Answer1:

QA & Testing are thankless jobs. In a software development company developer is a core

person. As you are a fresh graduate, it would be good for you to work as a developer. From

development you can always move to testing or QA or other admin/support tasks. But from

Testing or QA it is little difficult to go back to development, though not impossible(as u are BE

comp)

Seeing the job market, it is not possible for each & every fresher to get into development. But

you can keep searching for it.

Some big company's have seperate Verifiction & Validation groups where only testing projects

are executed. Those teams have TLs, PLs who are testing experts. They earn good salary

same as development people.

In technical projects the testing team does lot of technical work. You can do certifications to

improve your technical skills & market value.

Page 42: Software testing q as   collection by ravi

It all depends on your way of handling things & interpersonal, communication and leadership

skills. If it is difficult for you to get a job in developement or you really like testing, just go ahead.

Try to achieve excellence as a testing professional. You will never have a job problem .Also you

will always get onsite opportunities too!! Yuo might have to struggle for initial few years like all

other freshers.

Answer2:

QA and Testing are thankless only in some companies.

Testing is part of development. Rather than distinguish between testing and development, distinguish between testing and programming.

Programming is also thankless in some companies.

Not suggesting that anyone should or should not go into testing. It depends on your skills and

interests. Some people are better at programming and worse at testing, some better at testing

and worse at programming, some are not suited for either role. You should decide what you are

good at and what fascinates you. What type of work would make you WANT to stay at work for

60-80 hours a week for a few years because it is so interesting?

Suggesting that there are excellent testing jobs out there, but there are bad ones too (in testing

and in programming, both).

Have not seen any certification in software testing that improves the technical skill of anyone.

Apparently, testing certification improves a tester's market value in some markets.

Most companies mean testing when they say "QA". Or they mean Testing plus Metrics, where

the metrics tasks are low-skill data collection and basic data analysis rather than thinking up and

justifying measurement systems appropriate to the questions at hand. In terms of skill, salary,

intellectual challenge and value to the company, testing+metrics is the same as testing. Some

companies see QA more strategically, and hire more senior people into their groups. Here is a

hint--if you can get a job in a group called QA with less than 5 years of experience, it's a testing

group or something equivalent to it.

Answer3:

Nothing is considered a great or a mean job. As long as you like and love what you do, everything in it seems interesting.

I started as a developer and slowly moved to testing. I find testing to be more challenging and interesting. I have a solid 6 years of testing experience alone, and there are many senior people in my team who are professional testers.

Answer4:

Testing is low-skill work in many companies.

Scripted testing of the kind pushed by ISEB, ISTQB, and the other certifiers is low skill, low

prestige, offers little return value to the company that pays for it, and is often pushed to offsite

contracting firms because it isn't worth doing in-house. In many cases, it is just a process of

"going through the motions" -- pretending to do testing (and spending a lot of money in the

Page 43: Software testing q as   collection by ravi

pretense) but without really looking for any important information and without creating any

artifacts that will be useful to the project team.

The only reason to take a job doing this kind of work is to get paid for it. Doing it for too long is

bad for your career.

There are much higher-skill ways to do testing. Some of them involve partial automation (writing

or using programs to help you investigate the program more effectively), but automation tools

are just tools. They are often used just as mind-numbingly and valuelessly as scripted manual

testing. When you're offered this kind of position, try to find out how much judgment you will

have to exercise in the analysis of the product under test and the ways that it provides value to

the users and other stakeholders, in the design of tests to check that value and to check for

other threats to value (security failures, performance failures, usability failures, etc.)--and how

much this position will help you develop your judgment. If you will become a more skilled and

more creative investigator who has a better collection of tools to investigate with, that might be

interesting. If not, you will be marking time (making money but learning little) while the rest of

the technical world learns new ideas and skills.

What's the difference between priority and severity?

The word "priority" is associated with scheduling, and the word "severity" is associated with

standards. "Priority" means something is afforded or deserves prior attention; a precedence

established by urgency or order of or importance.

Severity is the state or quality of being severe; severe implies adherence to rigorous standards

or high principles and often suggests harshness; severe is marked by or requires strict

adherence to rigorous standards or high principles. For example, a severe code of behavior.

The words priority and severity do come up in bug tracking. A variety of commercial, problem-

tracking / management software tools are available. These tools, with the detailed input of

software test engineers, give the team complete information so developers can understand the

bug, get an idea of its severity, reproduce it and fix it. The fixes are based on project priorities

and severity of bugs. The severity of a problem is defined in accordance with the end client's risk assessment, and recorded in their selected tracking tool. Buggy software can severely affect schedules, which, in turn, can lead to a reassessment and renegotiation of priorities.

How to test a web based application that has recently been modified to give support for

Double Byte Character Sets?

Answer1:

You should apply black box testing techniques (boundary value analysis, equivalence partitioning).

Answer2:

The Japanese and other East Asian customers are very particular about the look and feel of the UI, so please make sure there is no truncation anywhere.

One major difference between Japanese and English is that there is no concept of spaces between words in Japanese. Line breaks in English usually happen wherever there is a space; in Japanese this leads to a lot of problems with the wrapping of text, and if you have a table with a defined column length, you might see text appearing vertically.

On the functionality side:

1. Check the date format and number format (they should be in the native locale).
2. Check that your system accepts 2-byte numerals and characters.
3. If any field has a boundary value of 100 characters, the field should accept the same number of 2-byte characters as well.
4. The application should work on a native (Chinese, Japanese, Korean) OS as well as on an English OS with the language pack installed.

Writing a high-level test plan for 2-byte support will require some knowledge of the application and its architecture.
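Point 3 above, the 2-byte boundary check, can be illustrated with a minimal Python sketch (validate_field is a hypothetical stand-in for the application's own length validation, not a real API):

    def validate_field(value, max_chars=100):
        # A correct validator counts characters, not bytes.
        return len(value) <= max_chars

    ascii_input = "A" * 100       # 100 single-byte characters
    dbcs_input = "\u3042" * 100   # 100 Japanese characters (multi-byte when encoded)

    # Both inputs are exactly 100 characters, so both should be accepted.
    assert validate_field(ascii_input)
    assert validate_field(dbcs_input)

    # A buggy validator that counted UTF-8 bytes would reject the Japanese
    # input, since its encoded length is 300 bytes.
    assert len(dbcs_input.encode("utf-8")) == 300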

What is the difference between software fault and software failure?

Software failure occurs when the software does not do what the user expects to see. Software

fault, on the other hand, is a hidden programming error.

A software fault becomes a software failure only when the exact computation conditions are

met, and the faulty portion of the code is executed on the CPU. This can occur during normal

usage. Or, when the software is ported to a different hardware platform. Or, when the software

is ported to a different compiler. Or, when the software gets extended.

Before creating test cases to "break the system", a few principles have to be observed:

Testing should be based on user requirements. This is in order to uncover any defects that

might cause the program or system to fail to meet the client's requirements.

Testing time and resources are limited. Avoid redundant tests.

It is impossible to test everything. Exhaustive tests of all possible scenarios are impossible,

simply because of the many different variables affecting the system and the number of paths a

program flow might take.

Use effective resources to test. This represents use of the most suitable tools, procedures and

individuals to conduct the tests. The test team should use tools that they are confident and

familiar with. Testing procedures should be clearly defined. Testing personnel may be a

technical group of people independent of the developers.

Test planning should be done early. This is because test planning can begin independently of

coding and as soon as the client requirements are set.

Testing should begin at the module level. The focus of testing should be concentrated on the smallest

programming units first and then expand to other parts of the system.

We look at software testing in the traditional (procedural) sense and then describe some testing strategies and methods used in the object-oriented environment. We also introduce some issues with software testing in both environments.

Would like to know about black box testing techniques like Boundary Value Analysis and Equivalence Partitioning - during which phases of testing are they used? If possible, explain with examples.

Answer1:

Boundary Value Analysis and Equivalence Partitioning can be used in unit or component testing, and are generally used in system testing.

For example, you have a module designed to work out the tax to be paid: an employee has £4000 of salary tax free, the next £1500 is taxed at 10%, the next £28000 is taxed at 22%, and any further amount is taxed at 40%.

You must define test cases that exercise valid and invalid equivalence classes:

Any value up to 4000 is tax free
Any value between 4001 and 5500 is taxed at 10%
Any value between 5501 and 33500 is taxed at 22%
Any value bigger than 33500 is taxed at 40%

And the boundary values are: 4000, 4001, 5500, 5501, 33500, 33501

Answer2:

Boundary value analysis and equivalence partitioning are used to prepare positive and negative test cases.

Equivalence partitioning: if you want to validate a text box which accepts values between 2000 and 10000, then the test case input is partitioned in the following way:

1. <2000
2. >=2000 and <=10000
3. >10000

Boundary value analysis means checking the input values on the boundaries. In the above case, it can be checked whether the input value is on the boundary, above the boundary, or below the boundary.
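To tie the two answers together, here is a minimal Python sketch of the tax example with boundary value test cases on each band edge (compute_tax is an illustrative implementation of the bands described above, not code from any real payroll system):

    def compute_tax(salary):
        """£4000 tax free, next £1500 at 10%, next £28000 at 22%, rest at 40%."""
        tax = 0.0
        if salary > 33500:
            tax += (salary - 33500) * 0.40
            salary = 33500
        if salary > 5500:
            tax += (salary - 5500) * 0.22
            salary = 5500
        if salary > 4000:
            tax += (salary - 4000) * 0.10
        return tax

    # Boundary value tests: one case on each side of every band edge.
    assert round(compute_tax(4000), 2) == 0.00      # top of the tax-free band
    assert round(compute_tax(4001), 2) == 0.10      # first pound taxed at 10%
    assert round(compute_tax(5500), 2) == 150.00    # top of the 10% band
    assert round(compute_tax(5501), 2) == 150.22    # first pound taxed at 22%
    assert round(compute_tax(33500), 2) == 6310.00  # top of the 22% band
    assert round(compute_tax(33501), 2) == 6310.40  # first pound taxed at 40%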

Test Case Design

Test cases should be designed in such a way as to uncover quickly and easily as many errors

as possible. They should "exercise" the program by using and producing inputs and outputs that

are both correct and incorrect. Variables should be tested using all possible values (for small

ranges) or typical and out-of-bound values (for larger ranges). They should also be tested using

valid and invalid types and conditions. Arithmetical and logical comparisons should be examined

as well, again using both correct and incorrect parameters. The objective is to test all modules

and then the whole system as completely as possible using a reasonably wide range of

conditions.

How to use methods/techniques to test the bandwidth usage of a client/server

application?

Bandwidth Utilization:

Basically, in the client-server model you will be most concerned about bandwidth usage if your application is a web-based one. It is surely a concern when throughput and data transfer come into the picture.

I suggest you use Radview's WebLoad as the load and stress testing tool for this. A demo version is available; you can record the scenarios of a normal user over variable connection speeds and then run them for hours to learn about bandwidth utilisation, throughput, data transfer rate, hits per second, etc. There is a huge list of parameters which can be tested over any number of combinations.

How do test case templates look like?

Software test case templates are blank documents that describe inputs, actions, or events, and

their expected results, in order to determine if a feature of an application is working correctly.

Test case templates contain all particulars of test cases. For example, one test case template is

in the form of a 6-column table, where column 1 is the "test case ID number", column 2 is the

"test case name", column 3 is the "test objective", column 4 is the "test conditions/setup",

column 5 is the "input data requirements/steps", and column 6 is the "expected results".

All documents should be written to a certain standard and template. Why? Because standards

and templates do help to maintain document uniformity. Also because they help you to learn

where information is located, making it easier for users to find what they want. Also because,

with standards and templates, information is not accidentally omitted from documents.
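As a rough sketch, the 6-column template described above could be generated as an empty, pre-numbered table. Here is a minimal Python example (the column names come from the example above; the file name and row count are arbitrary):

    import csv

    HEADERS = ["Test Case ID", "Test Case Name", "Test Objective",
               "Test Conditions/Setup", "Input Data Requirements/Steps",
               "Expected Results"]

    def write_template(path, rows=10):
        # Write an empty test case template with pre-numbered ID cells.
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(HEADERS)
            for i in range(1, rows + 1):
                writer.writerow([f"TC-{i:03d}", "", "", "", "", ""])

    write_template("test_case_template.csv")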

How to insert a checkpoint for an image to check the enable property in QTP?

Answer1:

As you are saying that all the images are push buttons, you can check the enabled or disabled property. If you are not able to find that property, then go to the object repository for that object and click on add/remove to add the available properties to that object. Let me know if that works. If you take it as an image, then you need to check the visible or invisible property, which might also help, as there are no enable or disable properties for the image object.

Answer2:

The image checkpoint does not have any property to verify enable/disable. One thing you need to check is:

* Find out from the developer whether he is showing different images for activating/deactivating, i.e. a greyed-out image. That is the only way a developer can show deactivate/activate if he is using an "image". Else he might be using a button displaying an image.

* If it is a button displayed with an image, you would need to use the object properties as a checkpoint.

How do you write test cases?

When I write test cases, I concentrate on one requirement at a time. Then, based on that one

requirement, I come up with several real life scenarios that are likely to occur in the use of the

application by an end user.

When I write test cases, I describe the inputs, action, or event, and their expected results, in

order to determine if a feature of an application is working correctly. To make the test case

complete, I also add particulars e.g. test case identifiers, test case names, objectives, test

conditions (or setups), input data requirements (or steps), and expected results.

Additionally, if I have a choice, I like writing test cases as early as possible in the development

life cycle. Why? Because, as a side benefit of writing test cases, many times I am able to find

problems in the requirements or design of an application. And, because the process of

developing test cases makes me completely think through the operation of the application.

Differences between System Testing and User Acceptance Testing?

Answer1:

System testing: the process of testing an integrated system to verify that it meets specified requirements. Acceptance testing: formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or another authorized entity to determine whether or not to accept the system.

First, I don't classify incidents or defects by the phase of the software development process or of testing; I prefer to classify them by their type, e.g. requirements, features and functionality, structural bugs, data, integration, etc. The value of categorising faults is that it helps us to focus our testing effort where it is most important, and we should have distinct test activities that address the problems of poor requirements, structure, etc.

You don't do user acceptance testing only because the software is delivered! Take care with the concepts of testing!

Answer2:

In my company we do not perform user acceptance testing, our clients do. Once our system

testing is done (and other validation activities are finished) the software is ready to ship.

Therefore any bug found in user acceptance testing would be issued a tracking number and

taken care of in the next release. It would not be counted as a part of the system test.

Answer3:

This is what I feel user acceptance testing is; I hope you find it useful. Definition:

User acceptance testing is formal testing conducted to determine whether a software system satisfies its acceptance criteria, and to enable the buyer to determine whether to accept the system.

Objective:

User Acceptance testing is designed to determine whether the software is fit for the user to use.

It also determines whether the software fits into the user's business processes and meets his/her needs.

Entry Criteria:

End of development process and after the software has passed all the tests to determine

whether it meets all the predetermined functionality, performance and other quality criteria.

Exit Criteria:

After verifying that the delivered documents are adequate and consistent with the executable system, and that the software system meets all the requirements of the customer.

Deliverables:

User Acceptance Test Plan

User Acceptance Test Cases

User guides/docs

User Acceptance Test Reports

Answer4:

System Testing: Done by QA at the development end. It is done after integration is complete and all integration P1/P2/P3 bugs are fixed; the code is frozen and no more code changes are taken. Then all the requirements are tested and all the integration bug fixes are verified.

UAT: Done by QA (trained to act like end users). All the requirements are tested, and the whole system is verified and validated.

What is the difference between a test plan and a test scenario?

Difference number 1: A test plan is a document that describes the scope, approach, resources,

and schedule of intended testing activities, while a test scenario is a document that describes

both typical and atypical situations that may occur in the use of an application.

Difference number 2: Test plans define the scope, approach, resources, and schedule of the intended testing activities, while test scenarios define test conditions, data to be used for testing, and expected results, including database updates, file outputs, and report results.

Difference number 3: A test plan is a description of the scope, approach, resources, and

schedule of intended testing activities, while a test scenario is a description of test cases that

ensure that a business process flow, applicable to the customer, is tested from end to end.

Can you give me an example on reliability testing?

For example, our products are defibrillators. From direct contact with customers during the

requirements gathering phase, our sales team learns that a large hospital wants to purchase

defibrillators with the assurance that 99 out of every 100 shocks will be delivered properly.

In this example, the fact that our defibrillator can run for 250 hours without any failure is irrelevant to these customers. In order to test for reliability

we need to translate terminology that is meaningful to the customers into equivalent delivery

units, such as the number of shocks. Therefore we describe the customer needs in a

quantifiable manner, using the customer’s terminology. For example, our quantified reliability

testing goal becomes as follows: Our defibrillator will be considered sufficiently reliable if 10 (or

fewer) failures occur from 1,000 shocks.

Then, for example, we use a test / analyze / fix technique, and couple reliability testing with the

removal of errors. When we identify a failed delivery of a shock, we send the software back to

the developers, for repair. The developers build a new version of the software, and then we

deliver another 1,000 shocks (into dummy resistor loads). We track failure intensity (i.e. failures

per 1,000 shocks) in order to guide our reliability testing, and to determine the feasibility of the

software release, and to determine whether the software meets our customers' reliability

requirements.
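In code, the quantified goal above reduces to a small calculation. A minimal sketch, with an illustrative (not real) failure count:

def failure_intensity(failures, shocks, per=1000):
    # Failures per 'per' delivery units (here, shocks).
    return failures / shocks * per

MAX_FAILURES_PER_1000 = 10          # the quantified reliability goal

observed_failures = 7               # hypothetical count from one test cycle
intensity = failure_intensity(observed_failures, shocks=1000)
print(f"failure intensity: {intensity:.1f} per 1,000 shocks")
print("reliable enough" if intensity <= MAX_FAILURES_PER_1000
      else "send the software back to the developers")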

Need a function to find all the positions?

Ex: a string "abcd, efgh,ight".

Want to break this string into fields wherever the delimiter (here, a comma) is found.

Answer1:

And return the delimited fields as a list of strings? Sounds like a Perl split function. This could be built with one of your own containing:

[ ] // knocked this together in a few minutes; I am sure there is a much more efficient way of doing things
[ ] // but this is a cobbling together of several built-in functions

[-] LIST OF STRING Split(STRING sDelim, STRING sData)

[ ] LIST OF STRING lsReturn

[ ] STRING sSegment

[-] while MatchStr("*{sDelim}*", sData)

[ ] sSegment = GetField(sData, sDelim, 1)


[ ] ListAppend(lsReturn, Trim(sSegment))

[ ] //crude chunking:

[ ] sSegment += sDelim // use the caller's delimiter rather than a hard-coded comma

[ ] sData = GetField(sData, sSegment, 2)

[-] if Len(sData) > 0

[ ] ListAppend(lsReturn, Trim(sData))

[ ] return lsReturn

Answer2:

You could use something like this... I hope I am understanding the problem.

[+] testcase T1()

[ ] string sTest = "hello, there I am happy"

[ ] string sTest1 = (GetField (sTest, ",", 2))

[ ] Print(sTest1)

[ ]

[ ] This Prints "there I am happy"

[ ] GetField(sTest, ",", 1) would print "hello", etc.

Answer3:

Below is a function which returns all fields (a list of STRING).

[+] LIST OF STRING ConvertToList (STRING sStr, STRING sDelim)

[ ] INTEGER iIndex= 1

[ ] LIST OF STRING lsStr

[ ] STRING sToken = GetField (sStr, sDelim, iIndex)

[ ]

[+] if (iIndex == 1 && sToken == "")

[ ] iIndex = iIndex + 1

[ ] sToken = GetField (sStr, sDelim, iIndex)

[ ]

[+] while (sToken != "")

[ ] ListAppend (lsStr, sToken)

[ ] iIndex = iIndex+1

[ ] sToken = GetField (sStr, sDelim, iIndex)

[ ] return lsStr

What is the difference between monkey testing and smoke testing?

Difference number 1: Monkey testing is random testing, while smoke testing is nonrandom testing. Smoke testing deliberately exercises the entire system from end to end, with the goal of exposing any major problems.


Difference number 2: Monkey testing is performed by automated testing tools, while smoke

testing is usually performed manually.

Difference number 3: Monkey testing is performed by "monkeys", while smoke testing is

performed by skilled testers.

Difference number 4: "Smart monkeys" are valuable for load and stress testing, but not very

valuable for smoke testing, because they are too expensive for smoke testing.

Difference number 5: "Dumb monkeys" are inexpensive to develop and are able to do some basic testing; but, if we used them for smoke testing, they would find few bugs during smoke testing. (A minimal sketch of a "dumb monkey" follows this list.)

Difference number 6: Monkey testing is not a thorough testing, but smoke testing is thorough

enough that, if the build passes, one can assume that the program is stable enough to be tested

more thoroughly.

Difference number 7: Monkey testing either does not evolve, or evolves very slowly. Smoke

testing, on the other hand, evolves as the system evolves from something simple to something

more thorough.

Difference number 8: Monkey testing takes "six monkeys" and a "million years" to run. Smoke

testing, on the other hand, takes much less time to run, i.e. from a few seconds to a couple of

hours.
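Since "dumb monkeys" come up in difference number 5, here is a minimal sketch of one in Python: it feeds random, unplanned input to a function under test and checks only that nothing crashes. The function under test, format_name(), is hypothetical.

import random
import string

def format_name(first, last):
    # Hypothetical function under test.
    return f"{last.strip()}, {first.strip()}".title()

def random_text(max_len=20):
    # Unplanned input: random printable characters of random length.
    return "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, max_len)))

for attempt in range(1000):
    try:
        format_name(random_text(), random_text())
    except Exception as err:
        print(f"monkey found a crash on run {attempt}: {err!r}")
        break
else:
    print("no crashes in 1,000 random runs (which proves very little)")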

Is it a good thing to share test cases with customers?

That's generally a good thing, but the question is why do they want to see them?

Potential problems are that they may be considering changing outsourcing firms and want to

use the test cases elsewhere. If that can be prevented, please do so.

Another problem is that they want to micro manage your testing efforts. It's one thing for them to audit your work to prove to themselves that you're doing a good job; it's an entirely different matter if they intend to tell you that you don't have enough test coverage on the activity of module foo and far too much coverage on module bar, and to please correct it.

Another issue may be that they are seeking litigation and they need proof that you were

negligent in some area of testing.

It's never a bad thing to have your customer wanting to be involved, unless you're a large

company and this is a small (in terms of sales) customer.

What are your concerns about this? Can you give more information on your situation and the

customer's?

Tell me about daily builds and smoke tests.

The idea is to build the product every day, and test it every day. The software development

process at Microsoft and many other software companies requires daily builds and smoke tests.

According to their process, every day, every single file has to be compiled, linked, and

combined into an executable program; and then the program has to be "smoke tested".

Smoke testing is a relatively simple check to see whether the product "smokes" when it runs.


Please note that you should add revisions to the build only when it makes sense to do so. You should establish a build group and build daily; set your own standard for what constitutes "breaking the build"; create a penalty for breaking the build; and check for broken builds every day.

In addition to the daily builds, you should smoke test the builds, and smoke test them daily. You should make the smoke test evolve as the system evolves. You should build and smoke test daily, even when the project is under pressure.

Think about the many benefits of this process! The process of daily builds and smoke tests

minimizes the integration risk, reduces the risk of low quality, supports easier defect diagnosis,

improves morale, enforces discipline, and keeps pressure cooker projects on track. If you build

and smoke test DAILY, success will come, even when you're working on large projects!
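As a minimal sketch (not the Microsoft process itself), a daily driver can run a build command and a smoke-test script and treat any non-zero exit code as a broken build or failed smoke test. The commands "make build" and "smoke_test.py" are hypothetical; substitute your own.

import datetime
import subprocess
import sys

def run(cmd):
    # Run a shell command and return its exit code.
    print(f"[{datetime.datetime.now():%Y-%m-%d %H:%M}] running: {cmd}")
    return subprocess.run(cmd, shell=True).returncode

def daily_build_and_smoke():
    if run("make build") != 0:            # hypothetical build command
        sys.exit("BUILD BROKEN - notify the build group")
    if run("python smoke_test.py") != 0:  # hypothetical smoke-test script
        sys.exit("SMOKE TEST FAILED - the build is not usable")
    print("build OK, smoke test OK")

if __name__ == "__main__":
    daily_build_and_smoke()   # schedule this once a day, e.g. from cron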

How to Read data from the Telnet session?

Declaration:

[+] window DialogBox Putty

[ ] tag "* - PuTTY"

[ ]

[ ] // Capture the screen contents and return as a list of strings

[+] LIST OF STRING getScreenContents()

[ ]

[ ] LIST OF STRING ClipboardContents

[ ]

[ ] // open the system menu and select copy all to clipboard menu command

[ ] this.TypeKeys("<ALT-SPACE>o")

[ ]

[ ] // get the clipboard contents

[ ]

[ ] ClipboardContents = Clipboard.getText()

[ ] return ClipboardContents

I then created a function that searches the screen contents for the required data to validate. This works fine for me; here it is to study. I hope it may help.

[-] void CheckOutPut(STRING sErrorMessage)
[ ] // lsScreenContents and sBatchSuccess are assumed to be declared elsewhere in the script
[ ] Putty.setActive ()

[ ]

[ ] // Capture screen contents

[ ] lsScreenContents = Putty.GetScreenContents ()

[ ] Sleep(1)

[ ] // Trim Screen Contents

[ ] lsScreenContents = TrimScreenContents (lsScreenContents)


[ ] Sleep(1)

[-] if (sBatchSuccess == "Yes")

[-] if (ListFind (lsScreenContents, "BUILD FAILED"))

[ ] LogError("Process should not have failed.")

[-] if (ListFind (lsScreenContents, "BUILD SUCCESSFUL"))

[ ] Print("Successful")

[ ] break

[ ] // Check to see if launcher has finished

[-] else

[-] if (ListFind (lsScreenContents, "BUILD FAILED") == 0)

[ ] LogError("Error should have failed.")

[ ] break

[-] else

[ ] // Check for Date Conversion Error

[-] if (ListFind (lsScreenContents, sErrorMessage) == 0)

[ ] LogError ("Error handle")

[ ] Print("Expected - {sErrorMessage}")

[ ] ListPrint(lsScreenContents)

[ ] break

[-] else

[ ] break

[ ]

[ ] // Fragment of a switch on kPlatform (the opening of the switch is not shown):
[ ] // raise an exception if kPlatform is not equal to windows or putty
[+] default
[ ] raise 1, "Unable to run console: - Please specify setting"

[ ]

What is the difference between system testing and integration testing?

"System testing" is a high level testing, and "integration testing" is a lower level testing.

Integration testing is completed first, not the system testing. In other words, upon completion of

integration testing, system testing is started, and not vice versa.

For integration testing, test cases are developed with the express purpose of exercising the

interfaces between the components. For system testing, the complete system is configured in a

controlled environment, and test cases are developed to simulate real life scenarios that occur

in a simulated real life test environment.

The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. The purpose of system testing is to validate an

application's accuracy and completeness in performing the functions as designed, and to test all

functions of the system that are required in real life.


How to trace fixed bug in test case?

Answer1:

The fixed defects can be tracked in the defect tracking tool. I think it is out of scope of a test

case to maintain this.

The defect tracking tool should indicate that the problem has been fixed, and the associated test

case now has a passing result.

If and when you report test results for this test cycle, you should provide this sort of information;

i.e., test failed, problem report written, problem fixed, test passed, etc...

Answer2:

We use Jira (like Bugzilla) to manage our test cases as well as our bugs. When a test discovers a bug, you link the two, marking the test as "in work" and "waiting for bug X". Now, when the developer resolves the bug and you retest it, you see the link to the test case and retest/close it.

What is the difference between performance testing and load testing?

Load testing is a blanket term that is used in many different ways across the professional

software testing community. The term, load testing, is often used synonymously with stress

testing, performance testing, reliability testing, and volume testing. Load testing generally stops

short of stress testing. During stress testing, the load is so great that errors are the expected

results, though there is gray area in between stress testing and load testing.

After the migration is done, how do you test the application (the front end hasn't changed, just the database)?

Answer1:

You can concentrate on those test cases which involve DB transactions like insert, update, delete, etc.

Answer2:

Focus on the database tests, but it's important to analyze the differences between the two

schemas. You can't just focus on the front end. Also, be careful to look for shortcuts that the

DBAs may be taking with the schema.
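As a minimal sketch of the database-focused checks above, the following compares per-table row counts between the old and new databases. The connection objects and table names are hypothetical; any DB-API driver (pyodbc, psycopg2, etc.) follows the same pattern.

def row_counts(conn, tables):
    # Count rows in each table using the given DB-API connection.
    cur = conn.cursor()
    counts = {}
    for table in tables:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        counts[table] = cur.fetchone()[0]
    cur.close()
    return counts

def compare_counts(old_conn, new_conn, tables):
    # Flag any table whose row count changed during the migration.
    old, new = row_counts(old_conn, tables), row_counts(new_conn, tables)
    for table in tables:
        status = "OK" if old[table] == new[table] else "MISMATCH"
        print(f"{table}: old={old[table]} new={new[table]} {status}")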

What is the difference between reliability testing and load testing?


The term, reliability testing, is often used synonymously with load testing. Load testing is a

blanket term that is used in many different ways across the professional software testing

community. Load testing generally stops short of stress testing. During stress testing, the load is

so great that errors are the expected results, though there is gray area in between stress testing

and load testing.

Some general guidelines on what to test for web based applications.

1. Navigation: Users move to and from pages, click on links, click on images (thumbnails), etc. Navigation in a WebSite should be quick and error free.

2. Server Response. How fast the WebSite host responds influences whether a user (i.e.

someone on the browser) moves on or gives up.

3. Interaction & Feedback. For passive, content-only sites the only real quality issue is

availability. For a WebSite that interacts with the user, the big factor is how fast and how reliable

that interaction is.

4. Concurrent Users. Do multiple users interact on a WebSite? Can they get in each other's way? While WebSites often resemble client/server structures, with multiple users at multiple locations a WebSite can be much different, and much more complex, than classic client/server applications.

5. Browser Independence. Tests should be realistic, but not dependent on a particular browser.

6. No Buffering, Caching. Local caching and buffering -- often a way to improve apparent

performance -- should be disabled so that timed experiments are a true measure of the Browser

response time.

7. Fonts and Preferences. Most browsers support a wide range of fonts and presentation

preferences

8. Object Mode. Edit fields, push buttons, radio buttons, check boxes, etc. All should be

treatable in object mode, i.e. independent of the fonts and preferences.

9. Page Consistency. Is the entire page identical with a prior version? Are key parts of the text

the same or different?

10. Table, Form Consistency. Are all of the parts of a table or form present? Correctly laid out?

Can you confirm that selected texts are in the "right place"?

11. Page Relationships. Are all of the links on a page the same as they were before? Are there new or missing links? Are there any broken links? (See the link-checker sketch after this list.)

12. Performance Consistency, Response Times. Is the response time for a user action the

same as it was (within a range)?

13. Image File Size. File size should be closely examined when selecting or creating images for

your site. This is particularly important when your site is directed to an audience that may not

enjoy the high-bandwidth and fast connection speeds available

14. Avoid the use of HTML "frames". The problems with frames-based site designs are well documented, including: the inability to bookmark subcategories of the site, difficulty in printing frame cell content, and disabling of the Web browser's "back" button as a navigation aid.

15. Security. Ensure data is encrypted before transferring sensitive information, wherever

required. Test user authentication thoroughly. Ensure all backdoors and test logins are disabled

before going live with the web application.

16. Sessions. Ensure session validity is maintained throughout a web transaction, e.g. when filling in a web form that spans several pages. Forms should retain information when using the 'back' button wherever required for user convenience. At the same time, forms need to be reset wherever security is an issue, such as password fields.

17. Error handling. Web navigation should be quick and error free. However, sometimes errors cannot be avoided. It is a good idea to have a standard error page that handles all errors; this is cleaner than displaying the bare 404 page. After displaying the error page, users can be automatically redirected to the home page or any other relevant page. At the same time, the error can be logged and a message sent to notify the admin.
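For point 11, a minimal broken-link checker sketch using only the standard library; it assumes the page's link URLs have already been extracted, and the example URLs are hypothetical.

from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def check_links(urls):
    # Return (url, reason) pairs for links that fail to respond.
    broken = []
    for url in urls:
        try:
            # HEAD keeps the check cheap; some servers require GET instead.
            with urlopen(Request(url, method="HEAD"), timeout=10):
                pass
        except (HTTPError, URLError) as err:
            broken.append((url, str(err)))
    return broken

for url, reason in check_links(["https://example.com/",
                                "https://example.com/missing"]):
    print(f"BROKEN: {url} ({reason})")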

What is the difference between volume testing and load testing?

The term, volume testing, is often used synonymously with load testing. Load testing is a

blanket term that is used in many different ways across the professional software testing

community.

What types of testing can you tell me about?

Each of the following represents a different type of testing: black box testing, white box testing,

unit testing, incremental testing, integration testing, functional testing, system testing, end-to-

end testing, sanity testing, regression testing, acceptance testing, load testing, performance

testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility

testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha

testing, beta testing, and mutation testing.

Which test cannot be automated? The acceptance test plan is prepared from what? Which is the test case design methodology? Does the test plan contain the bug tracking and reporting procedures?

1: Which test cannot be automated?

a. Performance testing

b. Regression

c. User interface

d. None

2: The acceptance test plan is prepared from?

a. SRS

b. HLD

c. DDD

d. ULD/URD

3: Which is the test case design methodology?

a. WBT

b. Regression

c. None

4: When will you start testing?

a. After coding

b. Requirements gathering

5: Does the test plan contain the bug tracking and reporting procedures?

6: Compatibility testing uses

a. All S/W components

b. All H/W components

c. All networking components

d. A & B

e. A, B, C

7: Which software model is easy and most effective when compared to others?

a. Waterfall

b. Iterative waterfall

c. Spiral

d. Prototyping

8: Which is not system-level testing?

a. System testing

b. Performance testing

c. Installation testing

d. None

9: Why do you need testers?

a. They think from the client's point of view

b. They are better than developers

c. They think from the process point of view

d. None


10: What is Quality Assurance?

a. Process

b. System

c. Business

d. All

Ans: 1. Performance testing

2. SRS

3. Don't know

4. b. After requirements gathering

5. Yes, the test plan will definitely define the procedure for reporting bugs.

6. e. A, B, C

7. b. Iterative waterfall

8. d. None

9. The options given are not appropriate.

10. a. Process

What testing roles are standard on most testing projects?

A: Depending on the organization, the following roles are more or less standard on most testing

projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System

Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test

Configuration Manager.

Depending on the project, one person may wear more than one hat. For instance, Test

Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test

Configuration Manager.

What is ACH and NACHA?

ACH - Automated Clearing House. A nationwide electronic funds transfer network which enables participating financial institutions to distribute electronic credit and debit entries to bank accounts and to settle such entries.

NACHA - National Automated Clearing House Association:

A membership organization that provides marketing and education assistance and establishes

the rules, standards and procedures that enable Financial Institutions to exchange ACH

payments on a national basis.

What is the role of documentation in QA?


Documentation plays a critical role in QA. QA practices should be documented, so that they are

repeatable. Specifications, designs, business rules, inspection reports, configurations, code

changes, test plans, test cases, bug reports, and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents, and for determining which document will have a particular piece of information. Use documentation change management, if possible.

Is there a way to automate XML file comparison?

Use diff called from a scripting language and output the results to a file;

or use KDiff3;

or use BBEdit on a Mac.
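One way to script this, as a minimal sketch: canonicalize both files so attribute order and insignificant whitespace do not show up as differences, then diff the results. xml.etree.ElementTree.canonicalize() needs Python 3.8+, and the file names are hypothetical.

import difflib
import xml.etree.ElementTree as ET

def compare_xml(path_a, path_b):
    # C14N normalizes attribute order; strip_text ignores whitespace-only text.
    canon_a = ET.canonicalize(from_file=path_a, strip_text=True)
    canon_b = ET.canonicalize(from_file=path_b, strip_text=True)
    return "\n".join(difflib.unified_diff(
        canon_a.splitlines(), canon_b.splitlines(),
        fromfile=path_a, tofile=path_b, lineterm=""))

# Empty output means the two files are equivalent:
# print(compare_xml("expected.xml", "actual.xml"))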

How do you introduce a new software QA process?

It depends on the size of the organization and the risks involved. For large organizations with

high-risk projects, a serious management buy-in is required and a formalized QA process is

necessary. For medium size organizations with lower risk projects, management and

organizational buy-in and a slower, step-by-step process is required. Generally speaking, QA

processes should be balanced with productivity, in order to keep any bureaucracy from getting

out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot

depends on team leads and managers, feedback to developers and good communication is

essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirements processes, where the goal is requirements that are clear, complete and testable.

What are the requirements of a bug tracking tool?

1. Should maintain version history

2. File attachment

3. Controlled access (user levels)

4. Bug history

5. Reports and metrics

6. Track bugs and code changes

7. Communicate with teammates

8. Submit and review patches

9. Manage quality assurance (QA)

10. Flexible workflow management, defect and change tracking across the application life cycle

Bug tracking tools:

Rational ClearQuest -- expensive

Bugzilla -- free and better!!

Mantis -- open source

When do you choose automated testing?

For larger projects, or ongoing long-term projects, automated testing can be valuable. But for

small projects, the time needed to learn and implement the automated testing tools is usually

not worthwhile. Automated testing tools sometimes do not make testing easier. One problem

with automated testing tools is that if there are continual changes to the product being tested,

the recordings have to be changed so often, that it becomes a very time-consuming task to

continuously update the scripts. Another problem with such tools is that the interpretation of the

results (screens, data, logs, etc.) can be a time-consuming task.

What is a coverage matrix? Or, what is a traceability matrix?

Answer1:

It is a mapping of one baselined object to another. For testers, the most common documents to be linked in this manner are a requirements document and the written test cases for that document.

In order to facilitate this, testers can add an extra column to their test cases listing the

requirement being tested.

The requirements matrix is usually stored in a spreadsheet. It contains the test ids down the left side and the requirement ids across the top. For each test, you place a mark in the cell under the heading of each requirement it is designed to test. The goal is to find out which requirements are under-tested, and which are either over-tested or so large that too many tests have to be written to test them adequately.

Answer2:

The traceability matrix means mapping of all the work products (various design docs, testing

docs) to requirements.
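A minimal sketch of that spreadsheet as a data structure, with hypothetical test and requirement ids: a mark in a cell becomes membership in a set, and the under-/over-tested requirements fall out of a count.

coverage = {                       # test ids down the left side
    "TC-001": {"REQ-1", "REQ-2"},  # requirements each test is marked against
    "TC-002": {"REQ-2"},
    "TC-003": {"REQ-2", "REQ-3"},
}
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# How many tests touch each requirement.
tests_per_req = {req: sum(req in tested for tested in coverage.values())
                 for req in requirements}

print("under-tested:",
      sorted(r for r, n in tests_per_req.items() if n == 0))      # ['REQ-4']
print("possibly over-tested:",
      sorted(r for r, n in tests_per_req.items() if n >= 3))      # ['REQ-2']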

Is white box testing different from unit testing?

Unit testing is one element of white box testing.

Do automated testing tools make testing easier?

Yes and no.

For larger projects, or ongoing long-term projects, they can be valuable. But for small projects,


the time needed to learn and implement them is usually not worthwhile.

A common type of automated tool is the record/playback type. For example, a test engineer

clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and

has an automated testing tool record and log the results. The recording is typically in the form of

text, based on a scripting language that the testing tool can interpret.

If a change is made (e.g. new buttons are added, or some underlying code in the application is

changed), the application is then re-tested by just playing back the recorded actions and

compared to the logged results in order to check effects of the change.

One problem with such tools is that if there are continual changes to the product being tested,

the recordings have to be changed so often that it becomes a very time-consuming task to

continuously update the scripts.

Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that

can be a time-consuming task.

How to write a Software Requirements Specification (SRS) document for a Grade Card System?

The SRS document is very important: it states what the project is going to do and what it assumes in advance. Below is some idea about it. The SRS document should include the following points:

1. Project aim.

2. Project objectives.

3. Project scope

4. Process to be followed.

5. Project Deliverables- it includes documents to be submitted and other plans or project

prototypes.

6. Requirements in short.

How can I schedule the different test cases in a (.t) test script so that all the test cases it contains run one after another?

A small query: there are a number of (.t) script files, each containing a number of test cases, and I need to call a user-defined method in all the (.t) script files.

Problem: how to do that.

Second: if one test case runs successfully, can I put in a condition that if it is successful, go to test case 2, else go to test case 3?

Third: how can I schedule the different test cases in a (.t) test script so that all the test cases it contains run one after another?

Answer for the problem of how to do that: just take an instance of your class and call the method through that instance.

Answer for 2nd and 3rd queries:

.t file

=======

[-] testcase tc1() appstate none

[ ] Print("This is tc1")

[-] testcase tc2() appstate none

[ ] Print("This is tc2")

[-] testcase tc3() appstate none

[ ] Print("This is tc3")

call your test cases under main function as below.

[-] main()

[ ]

[-] tc1()

[-] if GetTestsPassedCount () != 0 // execute testcases tc2 and tc3 only when testcase tc1 passed

[ ] tc2()

[ ] tc3()

Why are there so many software bugs?

Generally speaking, there are bugs in software because of unclear requirements, software

complexity, programming errors, changes in requirements, errors made in bug tracking, time

pressure, poorly documented code and/or bugs in tools used in software development.

* There are unclear software requirements because there is miscommunication as to what the

software should or shouldn't do.

* Software complexity. All of the following contribute to the exponential growth in software and

system complexity: Windows interfaces, client-server and distributed applications, data

communications, enormous relational databases and the sheer size of applications.

* Programming errors occur because programmers and software engineers, like everyone else,

can make mistakes.

* As to changing requirements, in some fast-changing business environments, continuously

modified requirements are a fact of life. Sometimes customers do not understand the effects of

changes, or understand them but request them anyway. And the changes require redesign of

the software, rescheduling of resources, and some of the work already completed has to be redone or discarded; hardware requirements can be affected, too.

* Bug tracking can result in errors, because the complexity of keeping track of changes can itself introduce errors.

* Time pressures can cause problems, because scheduling of software projects is not easy and

it often requires a lot of guesswork and when deadlines loom and the crunch comes, mistakes

will be made.

* Code documentation is tough to maintain and it is also tough to modify code that is poorly

documented. The result is bugs. Sometimes there is no incentive for programmers and software

engineers to document their code and write clearly documented, understandable code.

Sometimes developers get kudos for quickly turning out code, or programmers and software

engineers feel they cannot have job security if everyone can understand the code they write, or

they believe if the code was hard to write, it should be hard to read.

* Software development tools, including visual tools, class libraries, compilers, scripting tools,

can introduce their own bugs. Other times the tools are poorly documented, which can create

additional bugs.

What are Test Cases, Test Suites, Test Scripts, and Test Scenarios (or Scenaria)?

A test case is usually a single step, and its expected result, along with various additional pieces

of information. It can occasionally be a series of steps but with one expected result or expected

outcome. The optional fields are a test case ID, test step or order of execution number, related

requirement(s), depth, test category, author, and check boxes for whether the test is

automatable and has been automated. Larger test cases may also contain prerequisite states or

steps, and descriptions. A test case should also contain a place for the actual result. These

steps can be stored in a word processor document, spreadsheet, database or other common

repository. In a database system, you may also be able to see past test results and who

generated the results and the system configuration used to generate those results. These past

results would usually be stored in a separate table.

The most common term for a collection of test cases is a test suite. The test suite often also

contains more detailed instructions or goals for each collection of test cases. It definitely

contains a section where the tester identifies the system configuration used during testing. A

group of test cases may also contain prerequisite states or steps, and descriptions of the

following tests.

Collections of test cases are sometimes incorrectly termed a test plan. They may also be called

a test script, or even a test scenario.

A test plan is the approach that will be used to test the system, not the individual tests.

Most companies that use automated testing will call the code that is used their test scripts.

A scenario test is a test based on a hypothetical story used to help a person think through a

complex problem or system. They can be as simple as a diagram for a testing environment or

they could be a description written in prose. The ideal scenario test has five key characteristics.

It is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate. They

are usually different from test cases in that test cases are single steps and scenarios cover a

number of steps. Test suites and scenarios can be used in concert for complete system tests.


See: An Introduction to Scenario Testing

Scenario testing is similar to, but not the same as session-based testing, which is more closely

related to exploratory testing, but the two concepts can be used in conjunction.


See Session-Based Test Management

What does a Test Plan include?

A Test Plan gives detailed testing information, including

Scope of testing

Schedule

Test Deliverables

Release Criteria

Risks and Contingencies

Give me five common problems that occur during software development.

Poorly written requirements, unrealistic schedules, inadequate testing, adding new features

after development is underway and poor communication.

1. Requirements are poorly written when requirements are unclear, incomplete, too general, or

not testable; therefore there will be problems.

2. The schedule is unrealistic if too much work is crammed in too little time.

3. Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.

4. It's extremely common that new features are added after development is underway.

5. Miscommunication either means the developers don't know what is needed, or customers

have unrealistic expectations and therefore problems are guaranteed.

What is SRS and BRS . and what is the difference between them?

Answer1:

SRS - Software Requirements Specification BRS - Business Requirements Specification

Answer2:

BRS - Business Requirements Specification.

This document has to come from the client, stating the need for a particular module or project. It basically tells you why a particular request is needed; reasons have to be given. It is mostly a layperson's document, and it has to be approved by the Project Manager.

SRS - Software Requirements Specification.

This follows the BRS after its approval, etc. It gives detailed functional information about the project: requirements, use cases, references, etc., and how each module works in detail.

Your SRS cannot start without a BRS and an approval of the same.

What should be done after a bug is found?

When a bug is found, it needs to be communicated and assigned to developers that can fix it.

After the problem is resolved, fixes should be re-tested. Additionally, determinations should be

made regarding requirements, software, hardware, safety impact, etc., for regression testing to

check the fixes didn't create other problems elsewhere. If a problem-tracking system is in place,

it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of

software test engineers, will give the team complete information so developers can understand

the bug, get an idea of its severity, reproduce it and fix it.

Give some examples of Low Severity and Low Priority Bugs ...

Give some examples of

Low Severity and Low Priority Bugs

High Severity and Low Priority Bugs

Low Severity and High Priority Bugs

High Severity and High Priority Bugs ?

Answer1:

First know about severity and priority; then it's easy to decide low, medium or high.

Priority - business oriented.

Severity - the effect of the bug on the functionality.

1. For example, there is a cosmetic error in the client's name and you found this bug at the time of delivery; the severity of this bug is low, but the priority is high because it affects the business.

2. If you find a major crash in the functionality of the application, but the crash lies in a module which is not part of the deliverables, then the priority is low and the severity is high.

Answer2:

Priority - how soon your business side needs a fix. (Tip: The engineering side never decides

priority.)

Severity - how bad the bug bites. (Tip: Only engineers decide severity.)

For a high priority, low severity example, suppose your program has an easter egg (a secret


feature) showing a compromising photo of your boss. Schedule this bug to be removed

immediately.

Low priority, high severity example: A long chain of events leads to a crash that risks the main data file. Because the chain of events is longer than customers will probably reproduce, keep an eye on this one while fixing higher priority things.

Testers should report bugs, the business side should understand them and set their priorities.

Then testers and engineers should capture the bugs with automated tests before killing them.

This reduces the odds they come back, and generally reduces "churn", which is bug fixes

causing new bugs.

Answer3:

Priority is how important it is to the customer, and whether the customer is going to find it. Severity is how bad it is if the customer does find it.

High Priority low severity

I have a text editor and every 3 minutes it rings a bell (it is also noted that the editor does an

auto-save every 3 minutes). This is going to drive the customer insane. They want it fixed

ASAP; i.e. high priority. The impact is minimal. They can turn off the audio when using the

editor. There are workarounds. Should be easy for the developer to find the code and fix it.

Low Priority High severity

If I press CTRL-Q-SHIFT-T, only in that order, and then eject a floppy diskette from the drive, it formats my hard drive. It is low priority because it is unlikely a customer is going to be affected by it. It is high severity because, if a customer did find it, the results would be horrific.

High Priority High severity

If I open the Save As dialog and save the file with the same name the Save dialog would have used, it saves a zero-byte file and all the data is lost. Many customers will select Save As and then decide to overwrite the original document instead. They will NOT cancel the Save As and select Save instead; they will just use Save As and pick the same file name as the one they opened. So the likelihood of this happening is high, therefore high priority. It will cause the customer to lose data, which is costly; therefore high severity.

Low Priority low severity

If I hold the key combination LEFT_CTRL+LEFT_ALT+RIGHT_ALT+RIGHT_CTRL+F1+F12 for

3 minutes it will display cryptic debug information used by the programmer during development.

It is highly unlikely a customer will find this, so it is low priority. Even if they do find it, it might result in a call to customer service asking what the information means. Telling the customer it is debug code left behind (they didn't want to remove it because doing so would have added risk and delayed the release of the program) is safer than removing it and potentially breaking something else.

Answer4:

High Priority low severity

Spelling the name of the company president wrong

Low Priority High severity

Year end processing breaks ('cause its 6 more months 'till year end)


High Priority High severity

Application won't start

Low Priority low severity

Spelling error in documentation; occasionally the screen is slightly misdrawn, requiring a screen refresh.

Give me five solutions to problems that occur during software development.

Solid requirements, realistic schedules, adequate testing, firm requirements and good

communication.

1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and

testable. All players should agree to requirements. Use prototypes to help nail down

requirements.

2. Have schedules that are realistic. Allow adequate time for planning, design, testing, bug

fixing, re-testing, changes and documentation. Personnel should be able to complete the project

without burning out.

3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for

sufficient time for both testing and bug fixing.

4. Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend

design against changes and additions, once development has begun and be prepared to

explain consequences. If changes are necessary, ensure they're adequately reflected in related

schedule changes. Use prototypes early on so customers' expectations are clarified and

customers can see what to expect; this will minimize changes later on.

5. Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, and change-management tools. Ensure documentation is available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork and cooperation.

What is risk analysis? What does it have to do with Severity and Priority?

Risk analysis is a method to determine how much risk is involved in something. In testing, it can

be used to determine when to test something or whether to test something at all. Items with

higher risk values should be tested early and often. Items with lower risk value can be tested

later, or under some circumstances if time runs out, not at all. It can also be used with defects.

Severity tells us how bad a defect is: "how much damage can it cause?". Priority tells us how

soon it is desired to fix the defect: "should we fix this and if so, by when?".

Companies usually use numeric values to calculate both values. The number of values will

change from place to place. I assume a five-point scale but a three-point scale is commonly

used. Using a defect as an example, Major would be Severity1 and Trivial would be Severity5. A

Priority1 would imply that it needs to be fixed immediately and a Priority5 means that it can wait

until everything else is done. You can add or multiply the two digits together (there is only a


small difference in the outcome) and the results become the risk value. You use the event's risk

value to determine how you should address the problem. The lower values must be addressed

before the middle values, and the higher values can wait the longest.

Defect 12345

Foo displays an error message with incorrect path separators when the optional showpath

switch is applied

Sev5

Pri5

Risk value (addition method) 10

Defect 13579

Module Bar causes system crash using dereferenced handle

Sev1

Pri1

Risk value (addition method) 2

Defect 13579 will usually be addressed before 12345.
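A minimal sketch of the addition method applied to the two defects above: compute severity + priority and address the lowest totals first (on this scale, 1 is most severe / most urgent).

def risk_value(severity, priority):
    # Addition method; multiplying gives almost the same ordering.
    return severity + priority

defects = [
    ("12345", "incorrect path separators in error message", 5, 5),
    ("13579", "system crash using dereferenced handle", 1, 1),
]

# Lower risk values are handled first: 13579 (value 2) before 12345 (value 10).
for defect_id, title, sev, pri in sorted(defects,
                                         key=lambda d: risk_value(d[2], d[3])):
    print(defect_id, f"risk={risk_value(sev, pri)}", title)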

Another method for Risk Assessment is based on a military standard, MIL-STD-882. It

describes the risk of failure for military hardware. The main area of interest is section A.4.4.3

and its children where they indicate the Assessment of mishap risk. They use a four-point

severity rating: Catastrophic; Critical; Marginal; Negligible. They then use a five-point probability

rating: Frequent; Probable; Occasional; Remote; Improbable. Then rather than using a

mathematical calculation to determine a risk level, they use a predefined chart. It is this chart

that is novel as it groups risks together rather than giving them discrete values. If you want a

copy of the current version, search for MIL-STD-882D using Yahoo! or Google.

Is it logical to describe different expected results for one action? ...

In a complicated system with a lot of user profiles having different rights, should you write different test cases for each profile, or write one test case describing the expected results according to the user's rights? Is it logical to describe different expected results for one action?

Answer1:

You will have to write one test case describing the results of various kinds of users. You could

write a tabular data form.

For each action you would create a table

First column: user type

Second column: expected result

This avoids the issue of writing a series of test cases where 90% of the information is the same


and 10% is different. It makes maintaining the tests easier as well.

And the best way to test your application is to use an automated tool to do it.

Answer2:

Think of things in terms of use cases. Treat it like a completely different system for each user

role, and create your own suite of cases for each role.
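A minimal sketch of Answer1's table written as a pytest parametrized test: one test case, one row per user type. The permission check can_delete_report() is hypothetical.

import pytest

def can_delete_report(role):
    # Stand-in for the real application call.
    return role in {"admin", "manager"}

@pytest.mark.parametrize("role, expected", [
    ("admin",   True),    # first column: user type
    ("manager", True),    # second column: expected result
    ("clerk",   False),
    ("guest",   False),
])
def test_delete_report_permission(role, expected):
    assert can_delete_report(role) == expected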

What if the software is so buggy it can't be tested at all?

In this situation the best bet is to have test engineers go through the process of reporting

whatever bugs or problems initially show up, with the focus being on critical bugs.

Since this type of problem can severely affect schedules and indicates deeper problems in the

software development process, such as insufficient unit testing, insufficient integration testing,

poor design, improper build or release procedures, managers should be notified and provided

with some documentation as evidence of the problem.

What is API Testing?

An API (Application Programming Interface) is a collection of software functions and

procedures, called API calls, that can be executed by other software applications. Application

developers write code that links to existing APIs to make use of their functionality. This link is

seamless and end-users of the application are generally unaware of using a separately

developed API.

During testing, a test harness (an application that links to the API and methodically exercises its functionality) is constructed to simulate the use of the API by end-user applications. The

interesting problems for testers are:

1. Ensuring that the test harness varies parameters of the API calls in ways that verify

functionality and expose failures. This includes assigning common parameter values as well as

exploring boundary conditions.

2. Generating interesting parameter value combinations for calls with two or more parameters.

3. Determining the context under which an API call is made. This might include setting external

environment conditions (files, peripheral devices, and so forth) and also internal stored data that

affect the API.

4. Sequencing API calls to vary the order in which the functionality is exercised and to make the

API produce useful results from successive calls.
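A minimal harness sketch for points 1 and 2: vary a call's parameter over common values and boundary conditions, and check each outcome. The API under test, parse_percentage(), is hypothetical.

def parse_percentage(value):
    # Hypothetical API call under test.
    if not 0 <= value <= 100:
        raise ValueError("out of range")
    return value / 100.0

# Common values plus boundaries (just inside and just outside the range).
cases = [(-1, ValueError), (0, 0.0), (1, 0.01), (50, 0.5),
         (99, 0.99), (100, 1.0), (101, ValueError)]

for value, expected in cases:
    try:
        result = parse_percentage(value)
        outcome = "PASS" if result == expected else f"FAIL (got {result})"
    except ValueError:
        outcome = "PASS" if expected is ValueError else "FAIL (unexpected error)"
    print(f"parse_percentage({value}) -> {outcome}")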

What if there isn't enough time for thorough testing?

Since it's rarely possible to test every possible aspect of an application, every possible

combination of events, every dependency, or everything that could go wrong, risk analysis is

appropriate to most software development projects.

Use risk analysis to determine where testing should be focused. This requires judgment skills,


common sense and experience. The checklist should include answers to the following

questions:

* Which functionality is most important to the project's intended purpose?

* Which functionality is most visible to the user?

* Which functionality has the largest safety impact?

* Which functionality has the largest financial impact on users?

* Which aspects of the application are most important to the customer?

* Which aspects of the application can be tested early in the development cycle?

* Which parts of the code are most complex and thus most subject to errors?

* Which parts of the application were developed in rush or panic mode?

* Which aspects of similar/related previous projects caused problems?

* Which aspects of similar/related previous projects had large maintenance expenses?

* Which parts of the requirements and design are unclear or poorly thought out?

* What do the developers think are the highest-risk aspects of the application?

* What kinds of problems would cause the worst publicity?

* What kinds of problems would cause the most customer service complaints?

* What kinds of tests could easily cover multiple functionalities?

* Which tests will have the best high-risk-coverage to time-required ratio?

How to test a module (web based, developed in .NET) which loads data from a list (a text file) into a database (SQL Server)?

It would touch approximately 10 different tables, depending on the data in the list.

The job is to verify that the data which is supposed to get loaded gets loaded correctly. The list might contain 60 million records. Any suggestions?

* Compare the record counts before and after the load and match them with the expected data load.

* Sample records should be taken to ensure data integrity.

* Include test cases where the loaded data is functionally visible through the application. For example, if the load adds new users to the system, then logging in with the new user credentials should work, etc.

Finally, with the tools available in the market, you can be innovative: use functional automation tools like WinRunner with DB checkpoints added, and write SQL to do the back-end testing. It is the test scenario (test case) details that determine which tools/techniques you should narrow in on.
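A minimal sketch of the count-and-sample suggestions above, assuming a hypothetical "users" table loaded from the list file and a DB-API connection to the SQL Server database (e.g. via pyodbc); the qmark parameter style and the "login" column name are assumptions.

import random

def expected_count(list_path):
    # Number of non-blank lines in the source list file.
    with open(list_path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

def verify_load(conn, list_path, table="users"):
    cur = conn.cursor()
    # 1. Record counts: source list vs. loaded table.
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    loaded = cur.fetchone()[0]
    expected = expected_count(list_path)
    print(f"expected {expected}, loaded {loaded}:",
          "OK" if expected == loaded else "MISMATCH")
    # 2. Data integrity: spot-check a random sample of source records.
    with open(list_path, encoding="utf-8") as f:
        keys = [line.strip() for line in f if line.strip()]
    for key in random.sample(keys, k=min(5, len(keys))):
        cur.execute(f"SELECT COUNT(*) FROM {table} WHERE login = ?", (key,))
        print(key, "present" if cur.fetchone()[0] else "MISSING")
    cur.close()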

What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. However, if extensive testing is

still not justified, risk analysis is again needed and the considerations listed under "What if there


isn't enough time for thorough testing?" do apply. The test engineer then should do "ad hoc"

testing, or write up a limited test plan based on the risk analysis.

Is the testing an art of thinking ?

Answer1:

Think like someone who wants to break the application, like a hacker finding the weaknesses in the system.

Answer2:

Think like a tester: think negative rather than positive, because testers always try to break the application, e.g. by putting in negative values.

Answer3:

How testers think is:

- Testers are "negative" thinkers

- Testers complain

- Testers like to break things

- Testers take a special thrill in delivering bad news

The authors introduce an alternate view:

- Testers don't complain, they offer evidence

- Testers don't like to break things, they like to dispel the illusion that things work

- Testers don't take a special thrill in delivering bad news, they enjoy freeing their clients from

false belief.

They go on to explain how testers should think:

- Deriving inferences

- Technically

- Creatively

- Critically

- Practically

- Attempting to answer questions

- Exploring, thinking

- Using logic

Answer4:

Testers are destroyers for a creative purpose. Always keep one thing in mind: "CREATIVE DESTRUCTION IS WHAT WE WANT TO ACHIEVE".

One thing to add: the real skill of testers should be brought to bear only after the smooth flow of the application is assured, i.e., after the application passes the positive tests. If the application does not pass even the positive testing, then the testing strategy is undermined. After all, competition is appreciated when both sides are equally strong. So, before bringing the real quality of testers into the act, one should ensure that the application has passed the positive testing.

What is the role of test engineers?

We, test engineers, speed up the work of your development staff, and reduce the risk of your

company's legal liability. We give your company the evidence that the software is correct and

operates properly. We also improve your problem tracking and reporting. We maximize the

value of your software, and the value of the devices that use it. We also assure the successful

launch of your product by discovering bugs and design flaws, before users get discouraged,

before shareholders lose their cool, and before your employees get bogged down. We help the

work of your software development staff, so your development team can devote its time to build

up your product. We also promote continual improvement. We provide documentation required

by FDA, FAA, other regulatory agencies, and your customers. We save your company money

by discovering defects EARLY in the design process, before failures occur in production, or in

the field. We save the reputation of your company by discovering bugs and design flaws, before

bugs and design flaws damage the reputation of your company.

What is the role of a QA engineer?

The QA engineer's role is as follows: We, QA engineers, use the system much like real users

would, find all the bugs, find ways to replicate the bugs, submit bug reports to the developers,

and provide feedback to the developers, i.e. tell them if they've achieved the desired level of

quality.

What are the responsibilities of a QA engineer?

Let's say, an engineer is hired for a small software company's QA role, and there is no QA

team. Should he take responsibility to set up a QA infrastructure/process, testing and quality of

the entire product? No, because taking this responsibility is a classic trap that QA people get

caught in. Why? Because we QA engineers cannot assure quality. And because QA

departments cannot create quality.

What we CAN do is to detect lack of quality, and prevent low-quality products from going out the

door. What is the solution? We need to drop the QA label, and tell the developers that they are

responsible for the quality of their own work. The problem is, sometimes, as soon as the

developers learn that there is a test department, they will slack off on their testing. We need to

offer to help with quality assessment, only.


The system runs at Intranet environment and it has a security system.....

The system runs in an Intranet environment and it has a security system. The security system's architecture is designed on a User and Role system. There is only one system role, the System Admin role, and a user can also create as many roles as he needs. A role is attached to a user, and a user can log in to the system if he has a role. A role mainly instantiates the permissions of resources in that role. The system has about 100 system-defined resources, and there may be some user-defined resources also.

So, in this environment, how do you develop a test plan and test script for testing the security system?

Assume that roles are generated by combining logical options (can edit this section, can only

generate reports here, can not access this).

Start by writing down the different activities that each role can access. Then write down the different levels for each activity.

Now create a pair-wise combination of them. I won't explain pair-wise testing as you can Google

for it and get better answers there.

Use pair-wise testing to create special roles that are used in testing. If you know that there are

certain default roles, make sure to use them.

Then generate a list of tasks that can be performed on the system (don't concern yourself with

roles at this point).

Write each of these tasks down and put them into a database (If you have no other option, use

MySQL and OpenOffice to create your shared database).

Then create another table that contains your roles. Create a third table that takes the index

values of the first table and the index values of the second table (the intersections) and there

you can determine if the scenario can be tested or not using that role. (this can also be done in

a spreadsheet with scenarios on the left side and roles across the top).

Then run the tests that can be run.
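As a hedged sketch of the role/task intersection table described above (the roles, permissions, and tasks below are hypothetical, not the system's actual resources):

# Minimal sketch (hypothetical data): build the role x task matrix that
# marks which scenarios are testable under which role.
from itertools import product

# Roles map permission names to whether the role grants them.
roles = {
    "SystemAdmin": {"edit": True, "reports": True, "restricted": True},
    "Editor": {"edit": True, "reports": False, "restricted": False},
    "Reporter": {"edit": False, "reports": True, "restricted": False},
}

# Tasks map to the permission each one requires.
tasks = {
    "edit_section": "edit",
    "generate_report": "reports",
    "access_restricted_area": "restricted",
}

# The intersection table: (task, role) -> can this scenario be tested?
matrix = {
    (task, role): perms[needed]
    for (task, needed), (role, perms) in product(tasks.items(), roles.items())
}

for (task, role), testable in sorted(matrix.items()):
    print(f"{task:24} {role:12} {'testable' if testable else 'not testable'}")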

What is the ratio of developers and testers?

The ratio of developers and testers is not a fixed one, but depends on what phase of the

software development life cycle the project is in. When a product is first conceived, organized,

and developed, this ratio tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In

sharp contrast, when the product is near the end of the software development life cycle, just

before alpha testing begins, this ratio tends to be 1:1, or even 1:2, in favor of testers.

What is the difference between the V-model and the waterfall model?

The V-model is used for project-based work, where the spec is not frozen; the development and QA processes go in parallel.

The waterfall model is used for product-based projects, where the spec is defined and frozen at the start. Once development completes the coding, testers start testing the product.

Which of these roles are the best and most popular?

In testing, Tester roles tend to be the most popular. The less popular roles include the roles of

System Administrator, Test/QA Team Lead, and Test/QA Managers.

For Reliability, Usability and Testability: explain why you would test for these factors.

Reliability:

- Extent to which a program can be expected to perform its intended function with required

precision.

- This testing would be performed if the application has a characteristic that affects human lives

or if it is a Real time application.

Usability:

- Effort required in learning, operating, preparing input & interpreting output of a program.

- This testing would be performed if the application has a characteristic that involves a lot of

human interaction with the application.

Testability:

- Effort required in testing a program to ensure it performs its intended function.

- This testing would be performed if the application has a characteristic that affects human lives.

What other roles are in testing?

Depending on the organization, the following roles are more or less standard on most testing

projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers, System

Administrators, Database Administrators, Technical Analysts, Test Build Managers, and Test

Configuration Managers.

Depending on the project, one person can, and often does, wear more than one hat. For instance, we

Test Engineers often wear the hat of Technical Analyst, Test Build Manager and Test

Configuration Manager as well.

Whats the difference between ISO vs CMM ?

Answer1:

CMM is much oriented towards software engineering process improvements and never speaks of customer satisfaction, whereas ISO 9001:2000 speaks of process improvements generic to all organisations and also speaks of customer satisfaction.

Answer2:

FYI, there are 3 popular ISO standards that are commonly used for software projects: ISO/IEC 12207, ISO/IEC 15504, and ISO 9001 (part of the ISO 9000 family). For CMM, the latest version is 1.1; however, it is already considered a legacy standard, to be replaced by CMMI, whose latest version is also 1.1. For further information re CMM/I, visit the following:

http://www.sei.cmu.edu/cmm/

http://www.sei.cmu.edu/cmmi/

To build and release the build to the QA. Does any body knowing in detail about this

profile?

Build Release engineer,

The nature of the job is to retrieve the source from the configuration management system, create a build on the build machine, take a copy of the files you moved to the build machine, and install them on the QA servers.

The main task when you install on the QA servers is to be careful about connection properties, whether all applications are extracted properly, and whether the QA server has all the supported software.

What makes a good test engineer?

Good test engineers have a "test to break" attitude. We, good test engineers, take the point of

view of the customer, have a strong desire for quality and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous software development

experience is also helpful as it provides a deeper understanding of the software development

process, gives the test engineer an appreciation for the developers' point of view and reduces

the learning curve in automated test tool programming.

What's the role of CMM Levels in Testing?

What's the difference between the 5 levels?

Which level is most commonly used in testing?

Answer1:

SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S.

Defense Department to help improve software development processes.

CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model

Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine

effectiveness in delivering quality software. It is geared to large organizations such as large U.S.

Defense Department contractors. However, many of the QA processes involved are appropriate

to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI

ratings by undergoing assessments by qualified auditors.

Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to

successfully complete projects. Few if any processes in place; successes may not be

repeatable.

Level 2 - software project tracking, requirements management, realistic planning, and

configuration management processes are in place; successful practices can be repeated.

Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project performance is

predictable, and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was Software Quality Assurance.

Answer2:

The whole essence of CMM or CMMI is to produce quality software. It targets the whole

organizational practices (or processes), which are believed to be the best across industries. For

further understanding of SEI CMMI visit http://www.sei.cmu.edu/cmmi.

What is the role of CMMI Level in Testing?

Please understand that testing is just a part or subset of CMMI. Testing is addressed in a particular Process Area. If my memory serves me correctly, it is the VER or Verification process area, and it is sometimes also addressed in VAL, the Validation process area. It could also be the other way around.

Each Process Area has its own level to be driven to level 5. This is true for the Continuous Representation of CMMI version 1.1. I am not sure about the Staged Representation of the same version. Please refer to the website above for more details.

What is the difference between the levels of CMMI?

This was already answered in the same thread by Priya. I would like to add that there is an

additional level for the Continuous Representation which is called Level 0 (zero) --> Incomplete.

Which level is most commonly used in Testing?

I would say all levels would deal with testing. But again this is true for VAL and VER Process

Areas.

For further readings, try searching google using CMMI+tutorials or Testing+CMMI. Most of the

documents about CMMI are free and available on the Web.

Answer3:

Level 1. Initial The organization is characterized by an ad hoc set of activities. The processes

aren't defined and success depends on individual effort and heroics.

Level 2. Repeatable At this level, basic project management processes are established to track cost, schedule, and functionality. The discipline is available to repeat earlier successes on similar projects.

Level 3. Defined All processes are documented for both management and engineering

activities, and standards are defined.

Level 4. Managed Detailed measures of each process are defined and product quality data is

routinely collected. Both process and products are quantitatively understood and controlled.

Level 5. Optimizing Continuous process improvement is enabled by quantitative feedback from

the process and from piloting innovative ideas and technologies.


What are Internationalization, Localization, Globalization, and Multilingualization

Testing?

Internationalization and localization are a means of adapting software for non-native

environments, especially other nations and cultures. Internationalization is often abbreviated as

I18N (or i18n or I18n), where the number 18 refers to the number of letters omitted.

"Localization" is often abbreviated l10n in the same manner. Both are sometimes collectively

termed globalization (g11n). Also seen in some circles, but less commonly, are "p13n" for personalization and "r3h" for reach, as in the reach of a website across countries and markets.

L10N should support two languages or character codes simultaneously, usually English (ASCII)

and another specific one. Since each programmer has his or her own mother tongue, there are

numerous L10N patches and L10N programs written to satisfy his or her own need. L10N is

preparing a feature or system for use in a local market, e.g., Russia, Japan, Québec. Usually a

market has a distinct language, customs and regulations. At the very least, user interface

elements are translated into the local language.

I18N is also sometimes used interchangeably with G11N when speaking broadly of the economic and cultural effects of an increasingly interconnected world. In software terms, usage of the term I18N has become rare; the term globalization (G11N) is preferred, mostly because of corporate globalization, where many companies and products find themselves in many countries

worldwide.

G11N is a multi-step process to prepare a feature or system for use in multiple markets, or at

least so that it can easily be localized. It is most commonly taken to refer to the addition of a

framework for multiple language support. This implies that the application is capable of inputting and displaying non-western character sets. These activities, which include software localization and technical document translation, result in user interfaces, on-line help systems, and documentation that are adapted to the cultural, linguistic, and technical requirements of specific international markets. This has given rise to increasing requirements for localization (L10N) of products and services.

M17N (multilingualization) model is to support many languages at the same time. For example,

Mule (MULtilingual Enhancement to GNU Emacs) can handle a text file which contains multiple

languages - for example, a paper on differences between Korean and Chinese whose main text

is written in Finnish. GNU Emacs 20 and XEmacs now include Mule. Note that the M17N model

can only be applied in character-related instances. For example, it is nonsense to display a

message like 'file not found' in many languages at the same time. Unicode and UTF-8 are technologies which can be used for this model. Ideally, viewing a website in English and viewing the same site in French should show no functionality differences and no runtime errors. Check for incorrect translations, misspelled words and wrong symbols for the particular language chosen by the user. The language conversion should be consistent throughout the application. Use of shared variables can cause serious bugs, for example when two users select the same page or content to view but choose different languages, yet the page is rendered in the previous user's language (see the sketch below).
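A minimal sketch of that shared-variable bug, with hypothetical names; the fix is to carry the locale with each request instead of keeping it in shared state:

translations = {"en": {"greeting": "Hello"}, "fr": {"greeting": "Bonjour"}}

current_locale = "en"  # BUG: one locale shared across all users

def set_locale(locale):
    global current_locale
    current_locale = locale

def render_greeting_buggy():
    return translations[current_locale]["greeting"]

def render_greeting_fixed(locale):
    # Pass the locale per request instead of sharing state.
    return translations[locale]["greeting"]

set_locale("fr")                    # user A's request chooses French
set_locale("en")                    # user B's request arrives, chooses English
print(render_greeting_buggy())      # user A's page now renders "Hello" -- wrong
print(render_greeting_fixed("fr"))  # "Bonjour" -- correct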

What makes a good QA engineer?

The same qualities a good test engineer has are useful for a QA engineer. Additionally, good QA engineers understand the entire software development process and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are important.

Difference between Verification and Validation?

Answer1:

The ISO would say that Verification is a process of determining whether or not the products of a

given phase of the software development cycle meets the implementation steps and can be

traced to the incoming objectives established during the previous phase. The techniques for

verification are testing, inspection and reviewing.

Validation is a process of evaluating software at the end of the software development process to

ensure compliance with software requirements. The techniques for validation are testing,

inspection and reviewing.

Answer2:

Validation: Determination of the correctness of the products with respect to the user needs and requirements.

Verification: Determination of the correctness of the product with respect to the test conditions/requirements imposed at the start.

Answer3:

The difference between V & V:

Verification:
- Ensures that the system complies with the organization's standards & processes.
- Relies on non-executable methods of analyzing various artifacts.
- Answers the question "Did we build the system right?"
- E.g. check sheets, traceability matrix.
- Includes requirement reviews, design reviews, code walkthroughs, code inspections, test reviews, independent static analyzers, confirmation in which a 3rd party attests to the document, and desk checking.
- Most effective; it has been proven that 65% of defects can be discovered here.
- Can be used throughout the SDLC.

Validation:
- Physically ensures that the system operates according to plan.
- Executes the system functions through a series of tests that can be observed & evaluated.
- Answers the question "Did we build the right system?"
- Uses functional or structural testing techniques to catch defects.
- Includes unit testing, coverage analysis, black box techniques, integration testing, system testing & user acceptance testing.
- Effective, but not as effective as verification, for removing defects; it has been proven that 30% of defects can be discovered here.

Looking for a tool which can do bulk data inserts into various tables in the test database, and which works with DB2, SQL Server and Oracle.

Answer1:

First, copy the existing data to an Excel file using the DTS import/export wizard in SQL Server 2000, i.e. export the contents of the table to an Excel file. In Excel, change the integrity constraints; for example, if the table has one primary key column, you can change the values of the primary key using Excel's linear fill option. Then save it.

Now import the data from this Excel sheet back into the table.

Answer2:

Using Perl and their DBI modules. You will also need DBD modules for the specific databases

that you want to test with. In theory, you should be able to re-use the scripts and just change

DBD connections or possibly create handles to all three RDBMSs simultaneously. Ruby and

Python have similar facilities.

You will just have to have access to the data files somewhere and then you can then read the

data and insert the data into the database using the correct insert statements.

There are other tools, but since they cost money to purchase I have never bothered to

investigate them.

Scripting is the most powerful (and cheapest) way to do it. A preferred method is to use Python and its ODBC module. This way you can use the same code and just change the data source for whichever DB you're connecting to. Also, you could potentially have the script generate random data if you don't have any source data to begin with.

You need to have the proper ODBC client drivers installed on the box you're running the script from. There's also a PyPerl distribution that will let you use the Perl DBI module with Python. It's really up to personal preference what you're comfortable scripting in.
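As a hedged sketch of that ODBC approach (the pyodbc package and configured DSNs are assumed; table and file names are hypothetical):

# Bulk-insert rows from a CSV into any ODBC-reachable database
# (DB2, SQL Server, Oracle) by swapping only the connection string.
# Test-environment use only: the SQL is built from a trusted CSV header.
import csv
import pyodbc

def bulk_insert(conn_str: str, table: str, csv_path: str) -> None:
    with pyodbc.connect(conn_str) as conn, open(csv_path, newline="") as fh:
        reader = csv.reader(fh)
        header = next(reader)                  # first row = column names
        placeholders = ", ".join("?" * len(header))
        sql = f"INSERT INTO {table} ({', '.join(header)}) VALUES ({placeholders})"
        cur = conn.cursor()
        cur.fast_executemany = True            # speeds up bulk inserts
        cur.executemany(sql, list(reader))
        conn.commit()

# Same script, three databases -- only the DSN changes:
# bulk_insert("DSN=testdb2", "CUSTOMERS", "customers.csv")
# bulk_insert("DSN=testmssql", "CUSTOMERS", "customers.csv")
# bulk_insert("DSN=testora", "CUSTOMERS", "customers.csv")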

What makes a good QA/Test Manager?

QA/Test Managers are familiar with the software development process; able to maintain

enthusiasm of their team and promote a positive atmosphere; able to promote teamwork to

increase productivity; able to promote cooperation between Software and Test/QA Engineers,

have the people skills needed to promote improvements in QA processes, have the ability to

withstand pressures and say *no* to other managers when quality is insufficient or QA

processes are not being adhered to; able to communicate with technical and non-technical

people; as well as able to run meetings and keep them focused.

Need to shut down network connectivity mid-transaction. How to do this programmatically via the Windows interface?

From the command line, IPCONFIG /RELEASE should do it. Or do it the old-fashioned way: remove the cable from your machine. If you are using a wireless connection, it is better to use ipconfig.
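If the disconnection must be triggered from a test script rather than by hand, a minimal sketch (assuming a Windows machine, Python, and sufficient privileges; the transaction hooks are hypothetical):

# Drop and later restore connectivity mid-transaction from a test script.
import subprocess

def drop_network():
    subprocess.run(["ipconfig", "/release"], check=True)

def restore_network():
    subprocess.run(["ipconfig", "/renew"], check=True)

# start_transaction()   # hypothetical application call
# drop_network()        # sever connectivity mid-transaction
# ...verify the application's error handling, then restore_network()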

What can be done if requirements are changing continuously?

Work with management early on to understand how requirements might change, so that

alternate test plans and strategies can be worked out in advance. It is helpful if the application's

initial design allows for some adaptability, so that later changes do not require redoing the

application from scratch. Additionally, try to...

* Ensure the code is well commented and well documented; this makes changes easier for the

developers.

* Use rapid prototyping whenever possible; this will help customers feel sure of their

requirements and minimize changes.

* In the project's initial schedule, allow for some extra time commensurate with probable changes.

* Move new requirements to a 'Phase 2' version of an application and use the original

requirements for the 'Phase 1' version.

* Negotiate to allow only easily implemented new requirements into the project; move more

difficult, new requirements into future versions of the application.

* Ensure customers and management understand scheduling impacts, inherent risks and costs

of significant requirements changes. Then let management or the customers decide if the

changes are warranted; after all, that's their job.

* Balance the effort put into setting up automated testing with the expected effort required to

redo them to deal with changes.

* Design some flexibility into automated test scripts;

* Focus initial automated testing on application aspects that are most likely to remain

unchanged;

* Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing

needs;

* Design some flexibility into test cases; this is not easily done; the best bet is to minimize the

detail in the test cases, or set up only higher-level generic-type test plans;

* Focus less on detailed test plans and test cases and more on ad-hoc testing with an

understanding of the added risk this entails.

What if the application has functionality that wasn't in the requirements?

It may take serious effort to determine if an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.

If not removed, design information will be needed to determine added testing needs or

regression testing needs. Management should be made aware of any significant added risks as

a result of the unexpected functionality. If the functionality only affects minor areas, such as small improvements in the user interface, it may not be a significant risk.

How to write test cases for a telephone?

Answer1:

Test cases for a telephone:

Test the "functionality" of the telephone:

1. Test for presence of dial tone.

2. Dial a local number and check that the receiver phone (dialled no.) rings.

3. Dial any STD number and check that intended phone number rings.

4. Dial the number of "under test" phone and check that it rings.

5. When ringing, pick it up and check that ringing stops.

6. When talking - then there should be no noise or disturbance.

7. Check that "redial" works properly.

8. Check STD lock facility works.

9. Check speed dialing facility.

10. Check for call waiting facility.

11. Check that only the caller can disconnect the call.

12. If "telephone Under test" is engaged with any caller and at this time if a third caller attempts

to call the "telephone under test" then call between two other parties should not get

disconnected.

13. If "telephone Under test" is engaged with any caller and at this time if a third caller attempts

to call the "telephone under test" then third caller will listen to engage tone or message from

exchange.

14. Check for volume(increase or decrease) of the handset.

15. Keep the handset off the base unit and attempt to call the "telephone under test"; it should not ring.

16. Check for call transfer facility.

Test the telephone itself:

1. Check for extreme temperatures (hot and cold)
2. Check for different atmospheric conditions (humidity etc.)
3. Check for extreme power conditions

4. Check for button durability

5. Check for body strength

etc...

Answer2:

My company designs and builds phone system software, so I am very familiar with phone testing. You could be dealing with an IVR system that has menu-driven logic, or you could be dealing with an auto-attendant with directory features. The basic idea is that you need to be able to define your expected results and record your actual results. The medium is different, but the same basic concepts apply. In some ways the phone is easier because it can be a more linear process than, say, a web system.

How do you know when to stop testing?

This can be difficult to determine. Many modern software applications are so complex and run in

such an interdependent environment, that complete testing can never be done. Common factors

in deciding when to stop are...

* Deadlines, e.g. release deadlines, testing deadlines;

* Test cases completed with certain percentage passed;

* Test budget has been depleted;

* Coverage of code, functionality, or requirements reaches a specified point;

* Bug rate falls below a certain level; or

* Beta or alpha testing period ends.

What will be tested on a static web page?

1. Testing all links are working properly.

There are link checker programs that can help you verify if your links are broken

2. Test GUI design.

3. Test spelling and grammar for contents.

4. Test page fonts are consistent.

Again, depending on the page, this may not be essential, but you can suggest to the designer to use a cascading style sheet to easily maintain a consistent style across pages.

5.Title bar message testing.

6.Status bar message testing.

7.Scroll bars presence at page.

8. Browser compatibility (IE and Netscape)

IE and Firefox. Ironically, Netscape 8 now has two modes that allow you to switch between the Gecko render engine used in Firefox and the internal IE render engine that ships with every Windows OS. It's very cool and it can save you a lot of time.

9. Changing browser options of IE from Tools --> Internet Options --> Advanced tab.
10. Changing the font and the font size for the browser.
11. Changing any privacy option from Tools --> Internet Options.
12. Check that the images are present.
13. Conformance to W3C standards WRT tags.

That's a pretty big topic, but I can touch on it. Every HTML document should tell the browser

about the DTD that it was built using. Things like

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">

Each DTD version has different standards. Some allow frames, others don't, etc. You will have to

learn what the DTD is supposed to use and what it's not supposed to use. Only the best web

designers have the various DTDs memorised. Fortunately the W3C has made a page that will

validate your pages for you at http://validator.w3.org/

After your page passes through that you will get a report that lists errors and info. While most

render engines will gloss over the errors and display the page "correctly", it may cause

problems further down the road when editing the page. You can discuss these things with your

web designer.
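As a hedged sketch of item 1 above (link checking), assuming Python with the requests and beautifulsoup4 packages; the URL is a placeholder:

# Crawl a single static page and report broken links.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def check_links(page_url: str) -> None:
    html = requests.get(page_url, timeout=10).text
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        link = urljoin(page_url, a["href"])   # resolve relative links
        try:
            status = requests.head(link, timeout=10, allow_redirects=True).status_code
        except requests.RequestException:
            status = None                     # connection-level failure
        if status is None or status >= 400:
            print(f"BROKEN: {link} ({status})")

# check_links("http://example.com/index.html")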

How can software QA processes be implemented without stifling productivity?

Implement QA processes slowly over time. Use consensus to reach agreement on processes

and adjust and experiment as an organization grows and matures. Productivity will be improved

instead of stifled. Problem prevention will lessen the need for problem detection. Panics and

burnout will decrease and there will be improved focus and less wasted effort.

At the same time, attempts should be made to keep processes simple and efficient, minimize

paperwork, promote computer-based processes and automated tracking and reporting,

minimize time required in meetings and promote training as part of the QA process.

However, no one, especially talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and

development will be needed, but less time will be required for late-night bug fixing and calming

of irate customers.

How to solve this issue - when developers blame testers for reporting bugs that are not reproducible on their machines?

Avoid these disputes by taking screenshots and attaching them in the bug tracking tool. It seems that, since the environment was not the cause, the "Steps to Reproduce" portion of the bug report was lacking clarity. Screenshots along the way are a great way to prove a point, especially when you are dealing with something that is reproducible.

As test engineers we understand the functionality and, to an extent, the architecture of the software. Hence, we can say that some bugs are related to each other and some are not. So we can introduce a column/field for a related bug ID in our bug reporting format (whatever it is, a tool or an Excel sheet). This will actually be helpful to the development community too when fixing the bugs.

Actually, the development environment should be the same as the testing environment so that this issue does not arise. Make sure that the environment is the same before getting a build. Before testing, briefly go through the internal release note. While reporting a defect, mention proper test data, steps, etc., so that next time you can reproduce it.

Why do we recommend that we test during the design phase?

Because testing during the design phase can prevent defects later on. We recommend verifying

three things...

1. Verify the design is good, efficient, compact, testable and maintainable.

2. Verify the design meets the requirements and is complete (specifies all relationships between

modules, how to pass data, what happens in exceptional circumstances, starting state of each

module and how to guarantee the state of each module).

3. Verify the design incorporates enough memory, I/O devices and quick enough runtime for the

final product.


How to test an application in Flash?

Manually testing Flash animations is as simple as making sure that the objects do what they're supposed to do. It is mostly manual because Flash isn't really a programming language; most developers consider it to be a toy, so the big automation companies won't offer plug-ins for Flash objects.

If the Flash application is an E-learning application:

1. Know the hardware configuration, because if the animation contains heavy images or movie files it runs slowly, which is an error; every image and movie should be lightweight, as far as quality allows.
2. File naming conventions
3. Flash detection
4. Objects should do what they are supposed to do
5. Etc.

If the Flash application is a web application:

1. File size should be light, because most users don't have a high-speed connection; load testing is required.
2. Quality of texts, images and movies.

How will test estimation (in terms of schedule, cost, and resources required) be done during development of the test plan?

Reads on the topic:

Factors that Influence Test Estimation

What kind of automated software is used to test a Web-based application with a .NET (ASP.NET and C#... also SQL Server) framework?

Answer1:

Mercury makes some decent products. Quick Test Pro can be used for a lot of your

requirements... It can be costly and mind-numbing at times though.

Answer2:

Selenium is a test tool for web applications. Selenium tests run directly in a browser, just as real

users do. And they run in Internet Explorer, Mozilla and Firefox on Windows, Linux, and

Macintosh. No other test tool covers such a wide array of platforms.

* Browser compatibility testing. Test your application to see if it works correctly on different browsers and operating systems. The same script can run on any Selenium platform.

* System functional testing. Create regression tests to verify application functionality and user

acceptance.

Answer3:

Ruby is becoming a preferred standard for testing

Perl is also used a great deal

What kind of automated testing tool should Small company use?

Automation is not designed for small test teams. It does not make your testing more efficient, it just makes it faster. When you hit problems (and you likely will) it will take a lot of time to fix them.

The early days are often the only time that a good automated process can be put in place. After your company starts growing, and product releases, service packs, etc. start to pile up on the horizon, your team will not have the time to automate, because automation will come after everything else needed to keep revenue coming and customers happy.

Automation makes you faster, a nice outcome that will become vital later. The trick is in controlling the inputs of that process to get a good bang for the buck. The standard QA tools cause a lot of trouble and more often than not end up collecting dust on the shelves. Just throwing people and money at the problem does not work, and is only available to bigger companies.

Reads on the topic: How do I know when to Automate?

These articles will help you decide if you're even ready to automate.

When Should a Test Be Automated?

Brian Marick

Software Test Automation and the Product Life Cycle: Implementing

software test in the product life cycle

Dave Kelly

What if organization is growing so fast that fixed QA processes are impossible?

This is a common problem in the software industry, especially in new technology areas. There is

no easy solution in this situation, other than...

* Hire good people

* Ruthlessly prioritize quality issues and maintain focus on the customer;

* Everyone in the organization should be clear on what quality means to the customer.

Can we do performance testing manually?

Yes, you can do performance testing manually. For this you should open many active sessions of the application and test it out. It also depends on what type of performance test you want to do. In general, however, you can judge the number of active sessions, the number of DB connections open, the number of threads running (taking a Java-based web application as an example), and the amount of CPU time and memory being used, by having a performance viewer. You can use IBM Tivoli Performance Viewer, which is also available as a trial version. Usually the test is done by deploying the application on the server, accessing the application from multiple client machines, and making multiple threads run. The performance viewer should of course be installed on the server. A crude scripted version of the same idea is sketched below.
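A minimal scripted version of this manual probe, assuming Python with the requests package; the URL and client count are placeholders:

# N concurrent clients hit one URL and report status and response time.
import threading
import time
import requests

URL = "http://testserver/app"
N_CLIENTS = 20

def one_client(results, i):
    start = time.time()
    resp = requests.get(URL, timeout=30)
    results[i] = (resp.status_code, time.time() - start)

results = [None] * N_CLIENTS
threads = [threading.Thread(target=one_client, args=(results, i))
           for i in range(N_CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
for status, seconds in results:
    print(status, f"{seconds:.2f}s")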

What are the SDLC models - waterfall and V-model?

Answer1:

Large-scale companies first followed the waterfall method; nowadays most companies are following the V-model.

V-model means the testing involvement starts from the design stage itself & continues till system test:

Phase - Testing
Requirements - review
Design - review
TR - TUT

Then the testing phases start. So testing like this makes a perfect V, which is why we call it the V-model. To see the flow of activities in the V-model, please look at: Test Process in the V-model

Answer2:

The waterfall model is the general concept behind all the models, and most project-based companies use the V-model. Testing will be involved from the requirements phase till the User Acceptance Test.

What's server side testing?

It's testing the applications and daemons that run on a server.

Server Side testing can involve testing of Servlets and Controllers.

How is a particular test team formed?

Putting together a test team involves the following:

1 - get an understanding of the application being tested

2 - understand the underlying technologies

3 - understand the roadmap (future plans) for the product

4 - understand the budgetary limitations you are working under

Points 1 and 2 are pretty obvious. Point 3 has more to do with future planning (they might be moving from client/server to a webapp, so don't go recruiting lots of client/server specialists - a bad example, but you get the drift). Point 4 is important as it will determine not just the number of testers, but the skill level of the testers you can afford to employ, the training required, etc.

Need a template for preparing the Test Environment.

Answer1:

A test environment can be as simple or as complex as can be, but it *must* be separate from the development environment. In an ideal world, you'd have a DEVelopment environment, a TEST environment, an ACCeptance environment and a partitioned PRODuction environment.

The DEV environment no one in QA touches, and the TEST environment no one in development touches. The ACCeptance environment is for acceptance testing by end-users and administrators, performance/stress/load testing and so on, and should mirror the PRODuction environment. The PRODuction environment should be a live/'hot swap' configuration: the release is deployed to 'hot swap', tested by the administrators, given final acceptance testing, and then 'hot swapped' to live.

Answer2:

TEST ENVIRONMENT:

Setup of a test environment will require:

- Hardware

- Operating systems

- Software that needs to be tested

- Other required software like tools (And people who can use them)

- Data configurations

- Interfaces to other systems, communications

- Documentation like user manuals/reference documents/configuration guides/installation guides

Setting up a dedicated Test Environment is expensive and the following needs to be considered:

- To create an internal Test Environment or to outsource
- To follow any External (IEEE, ISO etc.) or Internal company standards
- The initial set-up & running costs
- How long will the Test Environment be required?
- How production-like does it need to be? If the environment does not mirror production, then differences between the test and production systems and their impact on test validity must be determined.
- Can you support the environment, either technically or within the building infrastructure?
- Could any existing setup for other projects in the company be re-used?
- Could the setup be used for other projects within the company?
- Day to day management
- Procedures for controlling change (Configuration management)
- Data loading and security requirements

What is the exact difference between functional and non functional testing?

Functional testing means we validate the functionality of the application against the functional requirements document; we test the functionality of the application only. Non-functional testing means we do not test the functionality of the application: system testing, load testing, stress testing, performance testing, etc. come under non-functional testing.

How is testing affected by object-oriented designs?

A well-engineered object-oriented design can make it easier to trace from code to internal

design to functional design to requirements. While there will be little effect on black box testing

(where an understanding of the internal design of the application is unnecessary), white-box

testing can be oriented to the application's objects. If the application was well designed this can

simplify test design.

Standards and templates - what is supposed to be in a document?

All documents should be written to a certain standard and template. Standards and templates

maintain document uniformity. It also helps in learning where information is located, making it

easier for a user to find what they want. Lastly, with standards and templates, information will

not be accidentally omitted from a document.

Need some QA advice: * Improve management commitment to Quality Assurance; * Improve the internalization of Quality Assurance as a process

The first part of the battle is already won: product quality is suffering due to financial and business drivers outside of your control. In other words, there is awareness (or a means of obtaining awareness) that although products are shipping, quality is suffering, and if this is allowed to continue, your product will be bogged down in patches, bad customer experience and perception. Ultimately your product will fail. This is a very powerful argument with which to convince your management that the situation cannot be allowed to continue. The expression "once you have them by the balls, their hearts and minds will follow" is particularly true here.

So step 1: recognize there is a problem and highlight the consequences. Awareness is key!

Step 2: Define your role and get clarity about your objectives and understand the expectations

your managers have of your department.

QA is not just testing. Testing is not just QA. Are you expected to test, or are you expected to

manage quality? Depending on the answer you can focus on the areas that the business places

as top priority and fill the areas where you are lacking expertise.

Example 1: they want you to be a Quality Assurance manager. Now we're in the realms of ISO, CMM etc.: coding standards, documentation standards, process flows, organisational structure, communication standards, release management procedures, process improvement and maintenance, change management procedures, review cycle management, requirement management lifecycles etc. In other words, you are responsible for the processes that assure quality. How that is monitored and controlled is quality control.

Example 2: they want you to test the software. Now we're talking Test methodologies, test

plans, test team organisation and so on.

By clarifying your role you not only get an idea of your areas of responsibility, you are also empowered to bring about change, to introduce quality and turn the ship around! Make sure you have the means to carry out your job (budget, resources, a senior management sponsor to help you through the hard times). If you don't, then you will fight a losing battle.

Step 3: Now that you know your role, evangelise! Let everyone know what you are going to do and what the end-results are going to be. Include everyone (yes, quality is everyone's business) and be aware that change meets with resistance. Check out the Patterson-Conner change lifecycle model for an example and structure of your change activities and the problems you will face.

Step 4: Depending on your role, implement your procedures. The business will decide what is more important. If it wants to ship product, focus on release management/service delivery/service management to make sure that customers get their products quickly and get the support they need. If it wants quality, focus more on standards and development/QA lifecycles.

I don't have a lot of money. How can I become a good tester?

If you don't have a lot of money, but want to become a good tester, the cheapest or free

education is sometimes provided on the job, by an employer, while you're getting paid to do a

job that requires the use of WinRunner and many other software testing tools.

What's fuzz testing ?

Fuzz testing is a software testing technique. The basic idea is to attach the inputs of a program

to a source of random data. If the program fails (for example, by crashing, or by failing built-in

code assertions), then there are defects to correct.

The great advantage of fuzz testing is that the test design is extremely simple, and free of

preconceptions about system behavior.
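A minimal sketch of the idea in Python, using the standard json parser as a stand-in for the program under test; any exception other than a clean rejection of malformed input would be a finding:

# Feed random bytes to a parser and watch for anything other than a
# clean rejection of malformed input.
import json
import random

def fuzz(iterations=1000, max_len=64):
    for i in range(iterations):
        raw = bytes(random.randrange(256) for _ in range(random.randrange(1, max_len)))
        try:
            json.loads(raw)           # program under test
        except ValueError:
            pass                      # expected: malformed input rejected
        except Exception as exc:      # crash/assertion-style failure: a defect
            print(f"iteration {i}: unexpected {type(exc).__name__} on {raw!r}")

fuzz()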

Which of these tools should I learn?

Learn ALL you can! Learn all the tools that you are able to master! Ideally, this will include some

of the most popular software tools, i.e. LabView, LoadRunner, Rational Tools, WinRunner,

SilkTest etc.

What differentiates a Test Plan and a Test Strategy?

A test plan is developed by the test lead with the help of other team members. A test plan contains the objective, test strategy, test scope, resources, schedule, deliverables, and risks and dependencies. The test strategy is part of the test plan. The test strategy is derived based on the application type; it specifies what types of tests are required and how the tests are going to be performed on the application.

What are some of the software configuration management tools?

Software configuration management tools include Rational ClearCase, DOORS, PVCS, CVS;

and there are many others.

Rational ClearCase is a popular software tool for revision control of source code. Made by

Rational Software.

DOORS, or "Dynamic Object Oriented Requirements System", is a requirements version control

software tool.

CVS, or "Concurrent Version System", is another popular, open source version control tool. It

keeps track of changes in documents associated with software projects. It enables several,

often distant, developers to work together on the same source code.

PVCS is a document version control tool, a competitor of SCCS. SCCS is an original UNIX

program, based on "diff". Diff is a UNIX command that compares contents of two files.

How to write Nunit test cases

What Is NUnit?

NUnit is a unit-testing framework for all .Net languages. Initially ported from JUnit, the current

production release, version 2.2, is the fourth major release of this xUnit based unit testing tool

for Microsoft .NET. It is written entirely in C# and has been completely redesigned to take

advantage of many .NET language features, for example custom attributes and other reflection

related capabilities. NUnit brings xUnit to all .NET languages.

A: Read the following:

How to write Nunit test cases
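NUnit tests are written in a .NET language; as a hedged illustration of the xUnit pattern NUnit implements, the same structure is shown below in Python's built-in unittest module (the Stack class under test is hypothetical):

import unittest

class Stack:
    def __init__(self):                # minimal class under test
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

class StackTests(unittest.TestCase):   # plays the role of [TestFixture]
    def setUp(self):                   # plays the role of [SetUp]
        self.stack = Stack()

    def test_push_then_pop(self):      # plays the role of [Test]
        self.stack.push(42)
        self.assertEqual(self.stack.pop(), 42)   # like Assert.AreEqual

    def test_pop_empty_raises(self):
        with self.assertRaises(IndexError):
            self.stack.pop()

if __name__ == "__main__":
    unittest.main()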

What is documentation change management?

Documentation change management is part of configuration management (CM). CM covers the

tools and processes used to control, coordinate and track code, requirements, documentation,

problems, change requests, designs, tools, compilers, libraries, patches, changes made to them

and who makes the changes.

How can I test without requirements?

By Anuj Magazine

Testing Without Requirements: A Practical Approach

What is the difference between user documentation and user manual?

When a distinction is made between those who operate and use a computer system for its

intended purpose, a separate user documentation and user manual is created. Operators get

user documentation, and users get user manuals.

Why was bug X or bug Y NOT caught during testing?

Why was the defect allowed to be introduced into the code? Why don't they have better code reviews? Why don't the developers understand the product better? Why are the requirements not fully understood by these people?

The real issue here is that they are passing quality off to the testing team and it's not our job to

make the product a quality one--it's the responsibility of everyone in the company including the

receptionist.

It is not the job of testing to be responsible for assuring quality and it is not the purpose of

testing to find bugs.

Classic Testing Mistakes

Do they rely on strange configurations: ones you could never hope to reproduce? Is it

reasonable that your testers should have "caught" these defects? If it is, don't make any

excuses.

Alternately, if it's really the requirements, how can the developers make the right product and

the testers don't understand what the developers are making? There is communication about

what needs to be done, and the developers seem to be getting that communication, why can't

your testers? We know the reason: the developers didn't get the communication right--that's

why there was a defect. So you can point out the communication as well.

When there is a requirements document, testers have a tendency to only test the main path, or

they'll only run one test case per requirement, when there clearly should be many tests to catch

all boundaries and failures. Testers do need to be able to think about what they are doing, and it

is very possible that the testers themselves are at fault. Don't be afraid to hold them

accountable for being lazy.

The main cause of the problem is not enough testing time allocated:

NO time for doc reviews;

Little time for test design and creation;

Little time for test execution.

How do you conduct peer reviews?

The peer review, sometimes called PDR, is a formal meeting, more formalized than a walkthrough, and typically consists of 3-10 people including the test lead, task lead (the author of whatever is being reviewed) and a facilitator (to make notes). The subject of the PDR is typically

a code block, release, or feature, or document. The purpose of the PDR is to find problems and

see what is missing, not to fix anything. The result of the meeting is documented in a written

report. Attendees should prepare for PDRs by reading through documents, before the meeting

starts; most problems are found during this preparation.

Why is the PDR great? Because it is a cost-effective method of ensuring quality, because bug

prevention is more cost effective than bug detection.

500 Internal Server Error problem while doing load testing using Microsoft Web

Application Stress Tool ....

When doing load testing using WAS (the Microsoft Web Application Stress Tool), we get a "500 Internal Server Error" for most of the "POST" queries. The log file showed the following data:

"GET /imse/Global/images/Default/arrow.gif 500"

"GET /imse/client/Template/images/Default/arrow.gif 500"

"GET /imse/client/Template/images/Default/Plus1.gif 500"

What could be the reason for this?

The problem is that the response has not come back; the session will have timed out. This could be because the application has taken too much memory, which might be caused by multiple threads running with each thread taking much CPU time. Please check the server where the build is deployed for a heap dump; the garbage collector may have created Java heap and core dumps in the application folder.

Try increasing the number of DB connections on the server; this might solve the problem. Also increase the final heap size. These might solve the problems.

How do you check the security of an application?

To check the security of an application, one can use security/penetration testing.

Security/penetration testing is testing how well a system is protected against unauthorized

internal, or external access, or willful damage. This type of testing usually requires sophisticated

testing techniques.

How do I write a Test Case?

Read the following:

What Is a Good Test Case?

Cem Kaner

How to Write Better Test Cases

Dianne Runnels

Reducing Test Case Documentation Time

Ranjit Shewale

Testing in the Dark: A pragmatic approach to overcoming undocumented requirements

By Johanna Rothman/Brian Lawrence

Session-Based Test Management: A strategy for structuring exploratory testing

By Jonathan Bach

Adventures in Session-Based Testing

By James Lyndsay

An Introduction to Scenario Testing

By Cem Kaner

How to estimate product test hours for new releases?

Answer1:

Your main task is to convince your company of the

- value of structured testing and the benefits it brings to the end product

- the risks of not testing properly: high maintenance, lots of bugs found in production (and these generally found by your customers!), loss of market reputation ("another crap product from xyz company").

Another approach might be to consider starting your test processes earlier (I am guessing from your message that you are following some kind of waterfall method) - it's a sort of 'design a little, build a little, test a little, design a little...' approach.

Answer2:

Tell the folks making decisions to read user feedback. No time for testing = angry users who want their money back, or worse, angry clients who suddenly hire a team of lawyers. I warned all the stakeholders early on and then sent user feedback emails up the chain. Users can be brutal and they tell the truth, with comments like "YOU SUCK!!" It may also convince them to get more support people instead of increasing testing.

Answer3:

The ratios:

3/1 Developers to QA (industry)

3/2 Developers to QA (Microsoft)

There is also a really good article called "A Better Bug Trap" published by The Economist in

2004, which is pretty telling: according to NIST 80% of a software project belongs to testing and

debugging.

There is also the classic book called "Mythical Man Month". There are a couple of pertinent

passages there:

1) Back when the book was written, the percentage quoted by NIST was 50%, which means

that software development has become less efficient over the last 20 years or so.

2) There is a 30% chance that a change in any line of code will break something downstream.

3) There is another article published by McKinsey Quarterly called "What high tech can learn

from slow-growth industries".

When testing the password field, what is your focus?

When testing the password field, one needs to focus on encryption; one needs to verify that the

passwords are encrypted.
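One hedged way to check this at the storage layer is to log in with a known password and confirm the stored value is not that plaintext string; the table, column, and credentials below are hypothetical, and sqlite3 is used only for illustration:

# Confirm the stored credential is not the plaintext password.
import sqlite3

KNOWN_PASSWORD = "secret123"

conn = sqlite3.connect("app.db")                  # hypothetical test DB
row = conn.execute(
    "SELECT password FROM users WHERE username = ?", ("qa_user",)
).fetchone()
assert row is not None, "test user missing"
assert row[0] != KNOWN_PASSWORD, "password stored in plaintext!"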

What are the metrics to be collected as part of testing?

A metric is a measurement. While it's easy to count things that are easy to count, and to divide

the counts by other things that you count, it's harder to decide what the counts and ratios mean.

What are we trying to measure, and what model and what evidence lead us to believe that the

counts measure the attribute we claim to be trying to measure?

Without clear thinking about validity, the numbers are more of a dangerous distraction than a

useful tool. Rather than blindly using a well-known metric, decide what goal you are trying to

achieve and find out how best to measure to achieve that goal.

Not everything that can be counted counts, and not everything that counts can be counted.

Questions you must ask before starting to use metrics are:

* Who will use these metrics?

* What behaviour are you trying to promote with these metrics?

* What information is important to know across the project?

* What requires increased visibility or transparency?

Please see Software Engineering Metrics

What is your view of software QA/testing?

Software QA/testing is easy, if requirements are solid, clear, complete, detailed, cohesive,

attainable and testable, and if schedules are realistic, and if there is good communication in the

group.

Software QA/testing is a piece of cake, if project schedules are realistic, if adequate time is

allowed for planning, design, testing, bug fixing, re-testing, changes, and documentation.

Software QA/testing is easy, if testing is started early on, if fixes or changes are re-tested, and

sufficient time is planned for both testing and bug fixing.

Software QA/testing is easy, if new features are avoided, if one sticks to initial requirements as

much as possible.

How to test a web application for security?

Answer1:

Two of the most common security vulnerabilities that are often overlooked by developers are session and cookie management. Check out Google for possible hacks regarding the two items, and develop test scenarios from the knowledge you find on the web.

Another test would be to concentrate on the login page and logout. In some cases the back button can be a security problem, especially if the previous screen/page has sensitive data that could easily be modified if the back button is used.

Lastly, test the user roles properly, making sure that each specific role only sees what s/he is intended to see.

Answer2:

You can test one more scenario for security:

1. Log into the application.
2. Then copy the URL.
3. Click the Logout button.
4. Now paste this URL into the browser's address bar, or access the URL of the application from History, after logging out. (A sketch automating this check follows.)

Also do not forget to check the timeout setting for the application:

1. Log into the app.
2. Leave the browser idle for some time.
3. Then check whether the user session gets expired or not.
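A minimal sketch automating the logout check above with Python's requests package; the URLs, form fields, and expected status codes are assumptions about the application under test:

# After logout, the captured URL must no longer serve the protected page.
import requests

session = requests.Session()
session.post("http://testserver/login",
             data={"user": "qa", "password": "secret"})

protected = "http://testserver/account"
assert session.get(protected).status_code == 200   # logged in: accessible

session.get("http://testserver/logout")
resp = session.get(protected, allow_redirects=False)
assert resp.status_code in (302, 401, 403), "session survived logout!"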

Who writes the user acceptance testing test cases?

Whoever is assigned the task. It is usually best done by a third party hired by the client to determine that their product works as contracted; however, it can be done by someone on the testing team, product management, or even a technical writer. UATs are usually positive tests.

A2:

1. Get requirements.
2. Design the HLD and LLD documents (high- and low-level design docs); this is the way we will implement the client's requirements in the software.

From 1, get the UAT test cases. From 2, get the system test cases. It's not a subset: each and every organisation has its own way of designing the test cases, sometimes from different sources. UAT is usually designed by the test team itself.

What Is UAT, And Why Do It?

What is automatic recovery testing?

Say you are writing a series of transactions to a database and halfway through the power fails,

or the comms link drops. You now have a database with incomplete or corrupt data.

Once the link is back up, or the application is restarted, how does the database (or application) restore itself, and how is the data integrity verified? If there are automatic processes that do this (e.g. rollback procedures coded into the application), test whether they work correctly.
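
The rollback case can be sketched in a few lines. Below is a minimal, hypothetical illustration using Python's built-in sqlite3 module and an in-memory database: half of a two-row transfer is written, a crash is simulated before the commit, and the test then verifies that no partial data survived. The table and the recovery step (a rollback) are assumptions standing in for the application's real recovery logic.

import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway database for the sketch
conn.execute("CREATE TABLE transfers (id INTEGER, amount REAL)")
conn.commit()

try:
    conn.execute("INSERT INTO transfers VALUES (1, 100.0)")   # debit row
    raise ConnectionError("simulated power/comms failure")    # crash mid-transaction
    # The matching credit row would be inserted here, but is never reached:
    # conn.execute("INSERT INTO transfers VALUES (2, -100.0)")
except ConnectionError:
    conn.rollback()   # what the application's recovery logic should achieve

# Integrity check: either the whole transfer exists (2 rows) or none of it (0).
rows = conn.execute("SELECT COUNT(*) FROM transfers").fetchone()[0]
assert rows in (0, 2), "partial transaction left in the database"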

What's the difference between Alpha, Beta and User Acceptance testing?

The focus in this question is somewhat wrong. You don't do Alpha testing, you do testing

against the Alpha cycle of the software. The Alpha cycle is during the development phase. The

product has many defects and is not suitable for users in a production environment to be using.

Once the Show-Stopper, Critical and most Major defects have been resolved, and once the

majority of planned functionality has been added to the product, a Beta release can occur. It is

best to have someone coordinate the beta testers rather than just throw the software out to the

general public--this way you can keep track of the defects generated by beta users in the field.

User Acceptance testing occurs when you have to deliver your product to a customer based on contractual obligations. The User Acceptance test is usually written by the customer, or an agent on their behalf. It is designed to verify, usually only with positive test cases, that the product is as described in the contract.

How can I be a good tester?

We, good testers, take the customers' point of view. We are also tactful and diplomatic. We

have a "test to break" attitude, a strong desire for quality, an attention to detail, and good

communication skills, both oral and written. Previous software development experience is also

helpful, as it provides a deeper understanding of the software development process.

How to do load testing for a web-based application?

1. Record a scenario of your web-based application in QTP.

2. Make 100 copies of that scenario and run the test (the scenario runs 100 times).

3. This places the corresponding load on the application server.

4. The basic logic of running the copy 100 times is to create the same situation as if 100 users were working, as in the sketch below.
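
Without a commercial tool, the same idea can be sketched in Python: replay one scenario step concurrently for many virtual users and time each run. The target URL, the user count, and the single-request "scenario" are illustrative assumptions; a real scenario would replay the full recorded sequence.

import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/"   # hypothetical application under test
USERS = 100                    # number of simulated concurrent users

def scenario(user_id):
    # One step of the recorded scenario; a real test replays every step.
    start = time.time()
    response = requests.get(URL)
    return response.status_code, time.time() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(scenario, range(USERS)))

times = [elapsed for _, elapsed in results]
print(f"{USERS} users: avg {sum(times)/len(times):.3f}s, max {max(times):.3f}s")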

What is GUI testing? What elements will we cover in GUI testing?

In GUI testing we need to cover the customer's requirements; if none are specified, validate the following:

1. Font sizes, colors, spellings (labels), etc.

2. Every application should follow the standard Microsoft UI rules, for example:

2.1 Labels should be in initial caps (i.e. every label should start with a capital letter); you can observe that in Windows applications every label starts with a capital.

2.2 An OK or Cancel button should exist.

2.3 Controls should not overlap.

2.4 Controls should be aligned properly (left-side alignment is mandatory, but the right side is optional).

2.5 Controls should be visible.

2.6 Shortcut keys should be provided.

2.7 A system menu should exist (i.e. if you press the Alt key + Space bar, a menu appears at the top-left corner).

2.8 Mouse pointer events.

Colors, label names, tab order, alignment, graphs, and navigation of the software are all covered in GUI testing.

What does the column "steps to reproduce" mean in bug tracking?

Answer1:

Well, steps to reproduce are just that: what are the steps you need to take to reproduce the

stated problem.

The steps to reproduce (STR) must be as clear as possible, preferably with screenshots and/or

test data. The steps should also be definite (so no 'maybe', 'it sometimes works if you do this'

type statements).

In test projects, try to keep the STR down to a maximum of five steps; this makes the problem easy and clear to communicate to the developers, and hence easy to reproduce and resolve.

Answer2:

Ideally, once you identify a bug, you should determine the least number of steps required to reproduce it. This helps your developer reproduce the bug easily in his development environment.

When is a process repeatable?

A process is repeatable when we use detailed and well-written processes and procedures; this

way we ensure the correct steps are being executed. This also facilitates a successful

completion of the task, and ensures the process is repeatable.

A process is repeatable, whenever we have the necessary processes in place, in order to

repeat earlier successes on projects with similar applications. A process is repeatable, if we use

detailed and well-written processes and procedures. A process is repeatable, if we ensure that

the correct steps are executed.

When the correct steps are executed, we facilitate a successful completion of the task.

Documentation is critical. A software process is repeatable, if there are requirements

management, project planning, project tracking, subcontract management, QA, and

configuration management.

Both QA processes and practices should be documented, so that they are repeatable.

Specifications, designs, business rules, inspection reports, configurations, code changes, test

plans, test cases, bug reports, user manuals should all be documented, so that they are

repeatable.

Document files should be well organized. There should be a system for easily finding and

obtaining documents, and determining what document has a particular piece of information. We

should use documentation change management, if possible.

If you don't have a requirements specification, how will you go about testing the application?

Answer1:

If there is no requirements specification and testing is required, then smoke testing or gorilla testing is the best option; in this way we can understand the functionality and the bugs of the application.

Answer2:

As a rule of thumb, never test or sign off on undocumented applications (applications without complete functional specifications). It's quite similar to swimming in unknown waters: you never know what you could encounter. In the case of software testing, it's not what you will encounter, but what you will not encounter. There is a very high possibility that you could completely miss some functionality or, even worse, misunderstand it.

Software testing is closely associated with the program management team or the requirements analysis team rather than the development team. When you test an application without knowledge of the requirements, you only see what the developer wants you to see, not what the customer wants to see. And customers / end users are our prime audience.

In the case of missing requirements, you would try something called 'focused exploratory testing': identifying every piece of the application and its functionality, and gradually digging deeper.

Smoke testing and gorilla (monkey) testing are different types of testing, and their purposes are very different.

Smoke testing (or sanity testing) is used only to certify builds and is no measure of quality. It only ensures that there are no blocking issues in the build, so that the build can undergo a test pass.

Gorilla testing or monkey testing (the gorilla being the smarter of the monkey kind) is all about ad-hoc testing. You would probably try hitting the 'ENTER' key 100 times, or try a 'SUBMIT' followed by 'CANCEL' followed by 'SUBMIT' again.

The idea of 'Exploratory Testing' is to identify the functionality of the application along with

Testing the same.

How can I start my career in automated testing?

To start your career in automated testing:

1. Read all you can, and that includes reading product descriptions, pamphlets, manuals, books,

information on the Internet, and whatever information you can lay your hands on.

2. Get some hands-on experience using automated testing tools, e.g. WinRunner and many other automated testing tools.

What should you test in a BANKING DOMAIN application?

You would like to test:

Banking Workflows

Data Integrity issues

Security and access issues

Recovery testing

All the above need to be tested in the expected banking environment (hardware, LAN, operating systems, domain configurations, databases).

Collaboration between dev and testing

Unit testing is entirely the responsibility of the developer. A tester is not in as knowledgeable a position to write meaningful unit tests as the developer who built the feature. I would push back hard against a development team that tried to hand this off. A feature is not 'done' (as in 'ready for test') until:

- the requirements are met (be they specifications or use case)

- the code has all been checked into revision control

- it has been verified that the newly checked in code does not break the compile/existing tests

- a comprehensive suite of feature specific unit tests has been created and integrated into the

build process

- there are no TODO's (or similar watermarks) left in the new code

- all FIXME's (or similar watermarks) have bug numbers assigned to them in the new code

Is there any common testing framework, or are there testing best practices, for distributed systems? For example, for a distributed database management system?

Consider a distributed database management system based on MySQL. It has three components:

1. A JDBC driver providing services for users' applications, including distributed transaction management, load balancing, a query processor, table ID management, etc.

2. A master process, which manages the global distributed transaction IDs, load balancing, the load-balancing strategy, etc.

3. An agent running on the same box as MySQL, which gathers the MySQL server's load statistics.

AN OPERATIONAL ENVIRONMENT FOR TESTING DISTRIBUTED SOFTWARE

Distributed applications have traditionally been designed as systems whose data and

processing capabilities reside on multiple platforms, each performing an assigned function

within a known and controlled framework contained in the enterprise. Even if the testing tools

were capable of debugging all types of software components, most do not provide a single

monitoring view that can span multiple platforms. Therefore, developers must jump between

several testing/monitoring sessions across the distributed platforms and interpret the cross–

platform gap as best they can. That is, of course, assuming that comparable monitoring tools

exist for all the required platforms in the first place. This is particularly difficult when one server

platform is the mainframe as generally the more sophisticated mainframe testing tools do not

have comparable PC– or Unix–based counterparts. Therefore, testing distributed applications is

exponentially more difficult than testing standalone applications.

To overcome this problem, we present an operational environment for testing distributed

applications based on the Java Development Kit (JDK) as shown in Figure 1, allowing testers to

track the flow of messages and data across and within the disparate platforms. The primary goal

of this operational environment is an attempt to provide a coherent, seamless environment that

can serve as a single platform for testing distributed applications. The hardware platform of the

testbed at the lowest level in Figure 1, is a network of SUN workstations running the Solaris 2.x

operating system, which often plays a part in distributed and client–server systems. The

widespread use of PCs has also prompted an ongoing effort to port the environment to the

PC/Windows platform. On the top of the hardware platform is Java Development Kit. It consists

of the Java programming language core functionality, the Java Application Programming

Interface (API) with multiple package sets and the essential tools such as Remote Method

Invocations (RMI), Java DataBase Connectivity (JDBC) and Beans for creating Java

applications. On top of this platform is the SITE, which provides automated support for the testing

process, including modeling, specification, statistical analysis, test data generation, test results

inspection and test path tracing. At the top of this environment are the distributed applications.

These can use or bypass any of the facilities and services in this operational environment. This

environment receives commands from the users (testers) and produces the test reports back.

Why is that my company requires a PDR?

Your company requires a PDR, because your company wants to be the owner of the very best

possible design and documentation. Your company requires a PDR, because when you

organize a PDR, you invite, assemble and encourage the company's best experts to voice their

concerns as to what should or should not go into your design and documentation, and why.

Please don't be negative. Please do not assume your company is finding fault with your work, or

distrusting you in any way. Remember, PDRs are not about you, but about design and

documentation. There is a 90+ percent probability that your company wants you, likes you, and trusts you, because you're a specialist, and because your company hired you after a long and careful

selection process.

Your company requires a PDR, because PDRs are useful and constructive. Just about

everyone - even corporate chief executive officers (CEOs) - attends PDRs from time to time.

When a corporate CEO attends a PDR, he has to listen for "feedback" from shareholders. When

a CEO attends a PDR, the meeting is called the "annual shareholders' meeting".

A list of ten good things about PDRs!

Number 1: PDRs are easy, because all your meeting attendees are your co-workers and

friends.

Number 2: PDRs do produce results. With the help of your meeting attendees, PDRs help you

produce better designs and better documents than the ones you could come up with, without

the help of your meeting attendees.

Number 3: Preparation for PDRs helps a lot, but, in the worst case, if you had no time to read

every page of every document, it's still OK for you to show up at the PDR.

Number 4: It's technical expertise that counts the most, but many times you can influence your

group just as much, or even more so, if you're dominant or have good acting skills.

Number 5: PDRs are easy, because, even at the best and biggest companies, you can

dominate the meeting by being either very negative, or very bright and wise.

Number 6: It is easy to deliver gentle suggestions and constructive criticism. The brightest and

wisest meeting attendees are usually gentle on you; they deliver gentle suggestions that are

constructive, not destructive.

Number 7: You get many, many chances to express your ideas, every time a meeting attendee

asks you to justify why you wrote what you wrote.

Number 8: PDRs are effective, because there is no need to wait for anything or anyone;

because the attendees make decisions quickly (as to what errors are in your document). There

is no confusion either, because all the group's recommendations are clearly written down for

you by the PDR's facilitator.

Number 9: Your work goes faster, because the group itself is an independent decision making

authority. Your work gets done faster, because the group's decisions are subject to neither

oversight nor supervision.

Number 10: At PDRs, your meeting attendees are the very best experts anyone can find, and

they work for you, for FREE!

What is the best way to simulate the real behavior of a web based system?

It may seem obvious, but the best way to simulate the real behavior of a web-based system is to simulate actual user behavior, and the way to do this is from an actual browser with test functionality built inside.

The key to achieving the kind of test accuracy that eValid provides is to understand that it's the

eValid browser that is doing the actual navigating and processing. And it is the eValid

browser that is taking the actual performance timing measurements.

eValid employs IE-equivalent multi-threaded HTTP/S processing and uses IE-equivalent page

rendering. While there is some overhead with injecting actions into the browser, it is very, very

low. eValid's timers resolve to 1.0 msec and this precision is usually enough to produce very

meaningful performance testing results.

How can I shift my focus and area of work from QC to QA?

Number one: Focus on your strengths, skills, and abilities! Realize that there are MANY

similarities between Quality Control and Quality Assurance! Realize you have MANY

transferable skills!

Number two: Make a plan! Develop a belief that getting a job in QA is easy! HR professionals

cannot tell the difference between quality control and quality assurance! HR professionals tend

to respond to keywords (i.e. QC and QA), without knowing the exact meaning of those

keywords!

Number three: Make it a reality! Invest your time! Get some hands-on experience! Do some QA

work! Do any QA work, even if, for a few months, you get paid a little less than usual! Your

goals, beliefs, enthusiasm, and action will make a huge difference in your life!

Number four: Read all you can, and that includes reading product pamphlets, manuals, books,

information on the Internet, and whatever information you can lay your hands on!

How to use LoadRunner for testing a web-based application?

Use exactly the data that a user of your site will enter once the e-commerce website is live.

Check your concept by implementing a simple test case, e.g.:

logon - some info - logoff

Stress your site with this simple script and n parallel virtual users (n = 1, 10, 100, 1000, 10000), then create some more complex tests and repeat.

No special server setup is needed in order to use LoadRunner against an e-commerce website. From the server's point of view, it is just as if many real users were stressing your site.

What techniques and tools can enable me to migrate from QC to QA?

Technique number one: Mental preparation. Understand and believe what you want is not

unusual at all! Develop a belief in yourself! Start believing what you want is attainable! You can

change your career! Every year, millions of men and women change their careers successfully!

Number two: Make a plan! Develop a belief that getting a job in QA is easy! HR professionals

cannot tell the difference between quality control and quality assurance! HR professionals tend

to respond to keywords (i.e. QC and QA), without knowing the exact meaning of those

keywords!

Number three: Make it a reality! Invest your time! Get some hands-on experience! Do some QA

work! Do any QA work, even if, for a few months, you get paid a little less than usual! Your

goals, beliefs, enthusiasm, and action will make a huge difference in your life!

Number four: Read all you can, and that includes reading product pamphlets, manuals, books,

information on the Internet, and whatever information you can lay your hands on!

What is the BEST WAY to write test cases?

Answer1:

1) List the use cases (taken from business cases) from the functional specs. For each use case, write a test case, and categorize the test cases into sanity tests, functionality, GUI, performance, etc. Then for each test case, write its workflow.

2) For a GUI application, make a list of all GUI controls. For each control, write test cases covering the control's UI, its functionality (its impact on the whole application), negative testing (for incorrect inputs), performance, etc.

Answer2:

1. Generate Sunny day scenarios based on use cases and/or requirements.

2. Generate Rainy Day (negative, boundary, etc.) tests that correspond to the previously defined

Sunny Day scenarios.

3. Based on past experience and a knowledge of the product, generate tests for anything that

might have been missed in steps one and two above. These tests need not correspond to any

documented requirements or use cases. It's generally not possible to test every facet of the

design, but with a little work and forethought you can test the high risk areas or high impact

features.

What is the difference between build and release?

Builds and releases are similar, because both builds and releases are end products of software

development processes. Builds and releases are similar, because both builds and releases help

developers and QA teams to deliver reliable software.

A build is a version of software, typically one that is still in testing. A version number is usually given to a released product, but sometimes a build number is used instead.

Difference number one: "Build" refers to software that is still in testing, but "release" refers to

software that is usually no longer in testing.

Difference number two: "Builds" occur more frequently; "releases" occur less frequently.

Difference number three: "Versions" are based on "builds", and not vice versa. Builds (or a

series of builds) are generated first, as often as one build per every morning (depending on the

company), and then every release is based on a build (or several builds), i.e. the accumulated

code of several builds.

How to test for memory leakage manually?

Answer1:

There are tools to check this. Compuware DevPartner can help you test your application for memory leaks if the application is complex. Also, the tool you select depends on the OS on which you need to check for memory leaks.

Answer2:

Tools are more effective for this: they watch to see when memory is allocated and not freed. You can use various tools manually to see if the same happens; you just won't be able to find the exact points where it happens.

In Windows you would use Task Manager or Process Explorer (freeware from Sysinternals), switch to the process view, and watch the memory used. Record the baseline memory usage (BL). Run an action once and record the memory usage (BLU). Perform the same actions repeatedly, and if the memory usage has not returned to at least BLU, you have a memory leak. The trick is to wait for the computer to clean up after the transactions have finished; this should take a few seconds.
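
The baseline technique above can also be scripted. Here is a minimal sketch in Python using the third-party "psutil" package; the action() function is a hypothetical placeholder for the operation suspected of leaking.

import gc
import os

import psutil

process = psutil.Process(os.getpid())

def action():
    # Placeholder for the operation being tested for leaks.
    return len([object() for _ in range(10_000)])

def rss_mb():
    gc.collect()   # give the runtime a chance to clean up first
    return process.memory_info().rss / (1024 * 1024)

action()               # warm-up run
baseline = rss_mb()    # "BLU": usage recorded after one action

for _ in range(100):   # perform the same action repeatedly
    action()

after = rss_mb()
print(f"baseline {baseline:.1f} MB, after 100 runs {after:.1f} MB")
# A figure that keeps growing and never returns near the baseline suggests
# a leak; a stable figure suggests memory is being reclaimed correctly.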

What is the difference between version and release?

Both version and release indicate particular points in the software development life cycle, or in

the life cycle of a document. Both terms, version and release, are similar, i.e. pretty much the

same thing, but there are minor differences between them.

Minor difference number 1: Version means a variation of an earlier or original type. For

example, you might say, "I've downloaded the latest version of XYZ software from the Internet.

The version number of this software is _____"

Minor difference number 2: Release is the act or instance of issuing something for publication,

use, or distribution. Release means something thus released. For example, "Microsoft has just

released their brand new gaming software known as _______"

How to write test cases for a login screen?

The format for the test cases could be something like this:

1. Test cases for the GUI.

2. Positive (+ve) test cases for login.

3. Negative (-ve) test cases for login.

In the negative scenario we should use boundary value analysis and equivalence classes to create test cases, plus cross-site scripting and SQL injection attempts; SQL injection is especially high-risk for login pages (see the sketch below).

(An example test case: enter special characters for the username, and the application should display a message that the username may contain only the characters a-z and 0-9.)
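
As an illustration, the positive and negative cases above can be organized as a parameterized pytest module. The login() function below is a hypothetical stand-in for the real application call; the data rows sketch equivalence classes, boundary values, and injection attempts.

import pytest

def login(username, password):
    # Hypothetical stand-in: a real test would drive the application itself.
    valid = {"alice": "S3cret!"}
    return valid.get(username) == password

@pytest.mark.parametrize("username,password,expected", [
    ("alice", "S3cret!", True),                       # positive case
    ("alice", "wrong", False),                        # wrong password
    ("", "", False),                                  # boundary: empty fields
    ("a" * 256, "x", False),                          # boundary: over-long username
    ("alice'; DROP TABLE users;--", "x", False),      # SQL injection attempt
    ("<script>alert(1)</script>", "x", False),        # cross-site scripting probe
])
def test_login(username, password, expected):
    assert login(username, password) is expected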

What is the checklist for credit card testing?

In credit card testing the following validations are considered:

1) Testing the 4-DBC (four-digit batch code, present on the right corner of the credit card) for uniqueness

2) The message formats in which the data is sent

3) LUHN testing (a reference implementation of the check follows)

4) Network response

5) Terminal validations
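
For item 3, the LUHN check validates a card number's check digit. A minimal reference implementation in Python, handy for generating valid and invalid test data, might look like this (the sample number is a Luhn-valid dummy value, not a real card):

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    # Double every second digit from the right; subtract 9 when the result > 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

assert luhn_valid("4539 1488 0343 6467")       # Luhn-valid dummy number
assert not luhn_valid("4539 1488 0343 6468")   # last digit corrupted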

How do you test data integrity?

Data integrity is tested by the following tests:

Verify that you can create, modify, and delete any data in tables.

Verify that sets of radio buttons represent fixed sets of values.

Verify that a blank value can be retrieved from the database.

Verify that, when a particular set of data is saved to the database, each value gets saved fully,

and the truncation of strings and rounding of numeric values do not occur.

Verify that the default values are saved in the database, if the user input is not specified.

Verify compatibility with old data, old hardware, versions of operating systems, and interfaces

with other software.

Why do we perform data integrity testing? Because we want to verify the completeness,

soundness, and wholeness of the stored data. Testing should be performed on a regular basis,

because important data could, can, and will change over time.
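
A minimal sketch of one such round-trip check, using Python's built-in sqlite3 module: save boundary values, read them back, and verify that nothing was truncated or rounded. The table layout is an assumption for illustration; a real test would run against the application's own schema.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, balance REAL)")

name = "A" * 255        # boundary-length string that must not be truncated
balance = 1234567.89    # numeric value that must not be rounded

conn.execute("INSERT INTO customers VALUES (?, ?)", (name, balance))
stored_name, stored_balance = conn.execute(
    "SELECT name, balance FROM customers").fetchone()

assert stored_name == name, "string was truncated on save"
assert stored_balance == balance, "numeric value was rounded on save"

# Blank-value check: a NULL must be storable and retrievable.
conn.execute("INSERT INTO customers VALUES (NULL, NULL)")
assert conn.execute(
    "SELECT 1 FROM customers WHERE name IS NULL").fetchone() is not None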

What is the difference between data validity and data integrity?

Difference number one: Data validity is about the correctness and reasonableness of data, while

data integrity is about the completeness, soundness, and wholeness of the data that also

complies with the intention of the creators of the data.

Difference number two: Data validity errors are more common, and data integrity errors are less

common.

Difference number three: Errors in data validity are caused by human beings - usually data entry

personnel - who enter, for example, 13/25/2010, by mistake, while errors in data integrity are

caused by bugs in computer programs that, for example, cause the overwriting of some of the

data in the database, when somebody attempts to retrieve a blank value from the database.
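
As a small illustration of the data validity side, a reasonableness check such as the following sketch (using Python's standard datetime module) is what catches an entry like 13/25/2010:

from datetime import datetime

def valid_date(text: str) -> bool:
    try:
        datetime.strptime(text, "%m/%d/%Y")   # US-style month/day/year
        return True
    except ValueError:
        return False

assert valid_date("12/25/2010")
assert not valid_date("13/25/2010")   # month 13 fails the reasonableness check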

What is the difference between static and dynamic testing?

Difference number 1: Static testing is about prevention, dynamic testing is about cure.

Difference number 2: The static tools offer greater marginal benefits.

Difference number 3: Static testing is many times more cost-effective than dynamic testing.

Difference number 4: Static testing beats dynamic testing by a wide margin.

Difference number 5: Static testing is more effective!

Difference number 6: Static testing gives you comprehensive diagnostics for your code.

Difference number 7: Static testing achieves 100% statement coverage in a relatively short time,

while dynamic testing often achieves less than 50% statement coverage, because

dynamic testing finds bugs only in parts of the code that are actually executed.

Difference number 8: Dynamic testing usually takes longer than static testing. Dynamic testing

may involve running several test cases, each of which may take longer than compilation.

Difference number 9: Dynamic testing finds fewer bugs than static testing.

Difference number 10: Static testing can be done before compilation, while dynamic testing can

take place only after compilation and linking.

Difference number 11: Static testing can find all of the followings that dynamic testing cannot

find: syntax errors, code that is hard to maintain, code that is hard to test, code that does not

conform to coding standards, and ANSI violations.

What testing tools should we use?

Ideally, you should use both static and dynamic testing tools. To maximize software reliability,

you should use both static and dynamic techniques, supported by appropriate static and

dynamic testing tools.

Reason number 1: Static and dynamic testing are complementary. Static and dynamic testing

find different classes of bugs. Some bugs are detectable only by static testing, some only by

dynamic.

Reason number 2: Dynamic testing does detect some errors that static testing misses. To

eliminate as many errors as possible, both static and dynamic testing should be used.

Reason number 3: All this static testing (i.e. testing for syntax errors, testing for code that is

hard to maintain, testing for code that is hard to test, testing for code that does not conform to

coding standards, and testing for ANSI violations) takes place before compilation.

Reason number 4: Static testing takes roughly as long as compilation and checks every

statement you have written.

Why should I use static testing techniques?

There are several reasons why one should use static testing techniques.

Reason number 1: One should use static testing techniques because static testing is a bargain,

compared to dynamic testing.

Reason number 2: Static testing is up to 100 times more effective. Even in selective testing,

static testing may be up to 10 times more effective. The most pessimistic estimates suggest a

factor of 4.

Reason number 3: Since static testing is faster and achieves 100% coverage, the unit cost of

detecting these bugs by static testing is many times lower than detecting bugs by dynamic

testing.

Reason number 4: About half of the bugs, detectable by dynamic testing, can be detected

earlier by static testing.

Reason number 5: If one uses neither static nor dynamic test tools, the static tools offer greater

marginal benefits.

Reason number 6: If an urgent deadline looms on the horizon, the use of dynamic testing tools

can be omitted, but tool-supported static testing should never be omitted.

What is the definiton of top down design?

Top down design progresses from simple design to detailed design. Top down design solves

problems by breaking them down into smaller, easier to solve subproblems. Top down design

creates solutions to these smaller problems, and then tests them using test drivers. In other

words, top down design starts the design process with the main module or system, then

progresses down to lower level modules and subsystems. To put it differently, top down design

looks at the whole system, and then explodes it into subsystems, or smaller parts. A systems

engineer or systems analyst determines what the top level objectives are, and how they can be

met. He then divides the system into subsystems, i.e. breaks the whole system into logical,

manageable-size modules, and deals with them individually.

What is the future of software QA/testing?

In software QA/testing, employers increasingly want us to have a combination of technical,

business, and personal skills. By technical skills they mean skills in IT, quantitative analysis,

data modeling, and technical writing. By business skills they mean skills in strategy and

business writing. By personal skills they mean personal communication, leadership, teamwork,

and problem-solving skills. We, employees, on the other hand, want increasingly more

autonomy, better lifestyle, increasingly more employee oriented company culture, and better

geographic location. We continue to enjoy relatively good job security and, depending on the

business cycle, many job opportunities. We realize our skills are important, and have strong

incentives to upgrade our skills, although sometimes lack the information on how to do so.

Educational institutions increasingly ensure that we are exposed to real-life situations and

problems, but high turnover rates and a rapid pace of change in the IT industry often act as

strong disincentives for employers to invest in our skills, especially non-company specific skills.

Employers continue to establish closer links with educational institutions, both through in-house

education programs and human resources. The share of IT workers with IT degrees keeps

increasing. Certification continues to help employers quickly identify those of us with the latest

skills. During boom times, smaller and younger companies continue to be the most attractive to

us, especially those that offer stock options and performance bonuses in order to retain and

attract those of us who are the most skilled. High turnover rates continue to be the norm,

especially during economic boom. Software QA/testing continues to be outsourced to offshore

locations. Software QA/testing continues to be performed by mostly men, but the share of

women keeps increasing.

How can I be effective and efficient, when I'm testing e-commerce web sites?

When you're doing black box testing of an e-commerce web site, you're most efficient and

effective when you're testing the site's visual appeal, content, and home page. When you want

to be effective and efficient, you need to verify that the site is well planned; verify that the site is

customer-friendly; verify that the choices of colors are attractive; verify that the choices of fonts

are attractive; verify that the site's audio is customer friendly; verify that the site's video is

attractive; verify that the choice of graphics is attractive; verify that every page of the site is

displayed properly on all the popular browsers; verify the authenticity of facts; ensure the site

provides reliable and consistent information; test the site for appearance; test the site for

grammatical and spelling errors; test the site for visual appeal, choice of browsers, consistency

of font size, download time, broken links, missing links, incorrect links, and browser

compatibility; test each toolbar, each menu item, every window, every field prompt, every pop-

up text, and every error message; test every page of the site for left and right justifications,

every shortcut key, each control, each push button, every radio button, and each item on every

drop-down menu; test each list box, and each help menu item. Also check, if the command

buttons are grayed out when they're not in use.

What is the difference between top down and bottom up design?

Top down design proceeds from the abstract entity to get to the concrete design. Bottom up

design proceeds from the concrete design to get to the abstract entity.

Top down design is most often used in designing brand new systems, while bottom up design is

sometimes used when one is reverse engineering a design; i.e. when one is trying to figure out

what somebody else designed in an existing system.

Bottom up design begins the design with the lowest level modules or subsystems, and

progresses upward to the main program, module, or subsystem. With bottom up design, a

structure chart is necessary to determine the order of execution, and the development of drivers

is necessary to complete the bottom up approach.

Top down design, on the other hand, begins the design with the main or top-level module, and

progresses downward to the lowest level modules or subsystems.

Real life sometimes is a combination of top down design and bottom up design. For instance,

data modeling sessions tend to be iterative, bouncing back and forth between top down and

bottom up modes, as the need arises.

Give me one test case that catches all the bugs!

On the negative side, if there was a "magic bullet", i.e. the one test case that was able to catch

ALL the bugs, or at least the most important bugs, it'd be a challenge to find it, because test

cases depend on requirements; requirements depend on what customers need; and customers

have great many different needs that keep changing. As software systems are changing and

getting increasingly complex, it is increasingly more challenging to write test cases.

On the positive side, there are ways to create "minimal test cases" which can greatly simplify

the test steps to be executed. But, writing such test cases is time consuming, and project

deadlines often prevent us from going that route. Often the lack of enough time for testing is the

reason for bugs to occur in the field.

However, even with ample time to catch the "most important bugs", bugs still surface with

amazing spontaneity. The fundamental challenge is, developers do not seem to know how to

avoid providing the many opportunities for bugs to hide, and testers do not seem to know where

the bugs are hiding.

What testing approaches can you tell me about?

Each of the followings represents a different testing approach: black box testing, white box

testing, unit testing, incremental testing, integration testing, functional testing, system testing,

end-to-end testing, sanity testing, regression testing, acceptance testing, load testing,

performance testing, usability testing, install/uninstall testing, recovery testing, security testing,

compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison

testing, alpha testing, beta testing, and mutation testing.

Can you give me five common problems?

Poorly written requirements, unrealistic schedules, inadequate testing, adding new features

after development is underway and poor communication.

Requirements are poorly written when they're unclear, incomplete, too general, or not testable;

therefore there will be problems.

The schedule is unrealistic if too much work is crammed in too little time.

Software testing is inadequate if no one knows whether or not the software is any good until

customers complain or the system crashes.

It's extremely common that new features are added after development is underway.

Miscommunication either means the developers don't know what is needed, or customers have

unrealistic expectations and therefore problems are guaranteed.

Can you give me five common solutions?

Solid requirements, realistic schedules, adequate testing, firm requirements, and good

communication.

Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable.

All players should agree to requirements. Use prototypes to help nail down requirements.

Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing,

re-testing, changes and documentation. Personnel should be able to complete the project

without burning out.

Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for

sufficient time for both testing and bug fixing.

Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend

design against changes and additions, once development has begun and be prepared to

explain consequences.

If changes are necessary, ensure they're adequately reflected in related schedule changes. Use

prototypes early on so customers' expectations are clarified and customers can see what to

expect; this will minimize changes later on.

Communicate. Require walkthroughs and inspections when appropriate; make extensive use of

e-mail, networked bug-tracking tools, tools of change management. Ensure documentation is

available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork

and cooperation.

What if the application has functionality that wasn't in the requirements?

It can take a serious effort to determine if an application has significant unexpected or hidden

functionality, which can indicate deeper problems in the software development process.

If the functionality isn't necessary to the purpose of the application, it should be removed, as it

may have unknown impacts or dependencies that were not taken into account by the designer

or the customer.

If not removed, design information will be needed to determine added testing needs or

regression testing needs. Management should be made aware of any significant added risks as

a result of the unexpected functionality. If the unexpected functionality only affects minor areas, e.g. small improvements in the user interface, then it may not be a significant risk.

How can software QA processes be implemented without stifling productivity?

When you implement software QA processes without stifling productivity, you want to implement them slowly over time. You want to use consensus to reach agreement on processes, and adjust and experiment as the organization grows and matures.

Productivity will be improved instead of stifled. Problem prevention will lessen the need for

problem detection. Panics and burnout will decrease, and there will be improved focus, and less

wasted effort.

At the same time, attempts should be made to keep processes simple and efficient, minimize

paperwork, promote computer-based processes and automated tracking and reporting,

minimize time required in meetings and promote training as part of the QA process.

However, no one, especially not the talented technical types, likes bureaucracy, and in the short

run things may slow down a bit. A typical scenario would be that more days of planning and

development will be needed, but less time will be required for late-night bug fixing and calming

of irate customers.

Should I take a course in manual testing?

Yes, you want to consider taking a course in manual testing. Why? Because learning how to

perform manual testing is an important part of one's education. Unless you have a significant

personal reason for not taking a course, you do not want to skip an important part of an

academic program.

To learn to use WinRunner, should I sign up for a course at a nearby educational

institution?

Free, or inexpensive, education is often provided on the job, by an employer, while one is

getting paid to do a job that requires the use of WinRunner and many other software testing

tools.

In lieu of a job, it is often a good idea to sign up for courses at nearby educational institutes.

Classes, especially non-degree courses in community colleges, tend to be inexpensive.

Test Specifications

The test case specifications should be developed from the test plan and are the second phase

of the test development life cycle. The test specification should explain "how" to implement the

test cases described in the test plan.

Test Specification Items

Each test specification should contain the following items:

Case No.: The test case number should be a three-part identifier of the form c.s.t, where c is the chapter number, s is the section number, and t is the test case number.

Title: is the title of the test.

ProgName: is the program name containing the test.

Author: is the person who wrote the test specification.

Date: is the date of the last revision to the test case.

Background: (Objectives, Assumptions, References, Success Criteria): Describes in words how

to conduct the test.

Expected Error(s): Describes any errors expected

Reference(s): Lists the reference documentation used to design the specification.

Data: (Tx Data, Predicted Rx Data): Describes the data flows between the Implementation

Under Test (IUT) and the test engine.

Script: (Pseudo Code for Coding Tests): Pseudo code (or real code) used to conduct the test.

Example Test Specification

Test Specification

Case No. 7.6.3 Title: Invalid Sequence Number (TC)

ProgName: UTEP221 Author: B.C.G. Date: 07/06/2000

Background: (Objectives, Assumptions, References, Success Criteria)

Validate that the IUT will reject a normal flow PIU with a transmission header that has an invalid

sequence number.

Expected Sense Code: $2001, Sequence Number Error

Reference - SNA Format and Protocols Appendix G/p. 380

Data: (Tx Data, Predicted Rx Data)

IUT

<-------- DATA FIS, OIC, DR1 SNF=20

<-------- DATA LIS, SNF=20

--------> -RSP $2001

Script: (Pseudo Code for Coding Tests)

SEND_PIU FIS, OIC, DR1, DRI SNF=20

SEND_PIU LIS, SNF=20

R_RSP $2001

Formal Technical Review

Formal technical reviews include walkthroughs, inspections, round-robin reviews and other small-group technical assessments of software. A formal technical review is a planned and controlled meeting attended by the analysts, programmers and other people involved in the software development. Its purposes and benefits are:

To uncover errors in logic, function or implementation in any representation of the software

To verify that the software under review meets the requirements

To ensure that the software has been represented according to predefined standards

To achieve software that is developed in a uniform manner

To make the project more manageable

Early discovery of software defects, so that errors are substantially reduced in the development and maintenance phases

Serves as a training ground, enabling junior members to observe the different approaches in the software development phases (gives them a helicopter view of what others are doing when developing the software)

Allows for continuity and backup of the project, because a number of people become familiar with parts of the software that they might not otherwise have seen

Creates greater cohesion between different developers

Reluctance of implementing Software Quality Assurance

Managers are reluctant to incur the extra upfront cost

Such upfront costs are not budgeted in software development; therefore management may be unprepared to fork out the money.

Avoiding red tape (bureaucracy)

Red tape means extra administrative activities that need to be performed, as SQA involves a lot of paperwork. New procedures to determine that software quality is correctly implemented need to be developed, followed through and verified by external auditing bodies. These requirements involve a lot of administrative paperwork.

Benefits of Software Quality Assurance to the organization

Higher reliability will result in greater customer satisfaction: as software development is

essentially a business transaction between a customer and developer, customers will naturally

tend to patronize the services of the developer again if they are satisfied with the product.

Overall life-cycle cost of the software is reduced.

Software quality assurance is performed to ensure that software conforms to certain requirements and standards, so the maintenance cost of the software is gradually reduced, as the software requires less modification after SQA. Maintenance refers to the correction and modification of errors that may be discovered only after implementation of the program. Hence, proper SQA procedures identify more errors before the software gets released, resulting in an overall reduction of the life-cycle cost.

Constraints of Software Quality Assurance

Difficult to institute in small organizations, where the resources available to perform the necessary activities are not present. A smaller organization tends not to have the required resources, like manpower and capital, to assist in the process of SQA.

Cost not budgeted

In addition, SQA requires the expenditure of dollars that are not otherwise explicitly budgeted to

software engineering and software quality. The implementation of SQA involves immediate

upfront costs, and the benefits of SQA tend to be more long-term than short-term. Hence, some

organizations may be less willing to include the cost of implementing SQA into their budget.

SOFTWARE TESTING METRICS

In general, testers must rely on metrics collected in the analysis, design and coding stages of development in order to design, develop and conduct the necessary tests. These generally serve as indicators of the overall testing effort needed. High-level design metrics can also help predict the complexities associated with integration testing and the need for specialized testing software (e.g. stubs and drivers). Cyclomatic complexity may identify modules that will require extensive testing, as those with high cyclomatic complexity are more likely to be error-prone.

Metrics collected from testing, on the other hand, usually comprise the number and type of errors, failures, bugs and defects found. These can then serve as measures used to calculate the further testing effort required. They can also be used as a management tool to determine the extent of the project's success or failure and the correctness of the design. In any case these should be collected, examined and stored for future needs.
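
As a rough illustration of the cyclomatic complexity idea, the following Python sketch estimates the metric for a piece of code by counting branch points in its syntax tree and adding one. It is a simplified approximation (real metrics tools handle more constructs and count per function), and the sample function is hypothetical.

import ast

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    # Count common decision points; each adds one independent path.
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While,
                          ast.ExceptHandler, ast.BoolOp, ast.IfExp))
        for node in ast.walk(tree))
    return decisions + 1

code = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))   # 3: two branch points, plus one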

OBJECT ORIENTED TESTING METRICS

Testing metrics can be grouped into two categories: encapsulation and inheritance.

Encapsulation

Lack of cohesion in methods (LCOM) - The higher the value of LCOM, the more states have to

be tested.

Percent public and protected (PAP) - This number indicates the percentage of class attributes

that are public and thus the likelihood of side effects among classes.

Public access to data members (PAD) - This metric shows the number of classes that access other classes' attributes, and thus violations of encapsulation.

Inheritance

Number of root classes (NOR) - A count of distinct class hierarchies.

Fan in (FIN) - FIN > 1 is an indication of multiple inheritance and should be avoided.

Number of children (NOC) and depth of the inheritance tree (DIT) - For each subclass, its superclass has to be re-tested. The above metrics (and others) are different from those used in traditional software testing; however, the metrics collected from testing should be the same (i.e. number and type of errors, performance metrics, etc.).
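
To make two of the inheritance metrics concrete, here is a minimal Python sketch that computes DIT and NOC (and flags FIN) by introspection over a toy class hierarchy. The classes are hypothetical, and with multiple inheritance the MRO-based depth is only an approximation of tree depth.

import inspect

class Account: pass             # a root class (NOR counts hierarchies like this)
class Savings(Account): pass
class Checking(Account): pass
class Premium(Savings): pass

def depth_of_inheritance(cls) -> int:
    # DIT: classes between cls and object; equals tree depth for single inheritance.
    return len(inspect.getmro(cls)) - 2

def number_of_children(cls) -> int:
    # NOC: number of immediate subclasses.
    return len(cls.__subclasses__())

def fan_in(cls) -> int:
    # FIN: number of direct superclasses; > 1 signals multiple inheritance.
    return len(cls.__bases__)

print(depth_of_inheritance(Premium))   # 2: Premium -> Savings -> Account
print(number_of_children(Account))     # 2: Savings and Checking
print(fan_in(Premium))                 # 1: single inheritance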