Wednesday, August 17, 2016

Software Testing

Involves executing an implementation of the software with test data and examining the
outputs of the software and its operational behavior to check that it is performing as
required. Testing is a dynamic technique because it works with an executable
version of the system.


However, static techniques can only check the correspondence between a program and its
specification (verification). They cannot demonstrate that the software is operationally useful
nor can they check non-functional characteristics of the software such as performance and
reliability.

Although software inspections are now widely used, program testing is still the predominant
verification and validation technique. Testing involves exercising the program using data like
the real data processed by the program. The existence of program defects or inadequacies is
inferred by examining the outputs.

There are two distinct types of testing that may be used at different stages in the software process:

Defect Testing:

Is intended to find inconsistencies between a program and its specification. These
inconsistencies are due to program faults or defects. The tests are designed to reveal the
presence of errors in the system rather than to simulate its operational use.

Statistical Testing:

Is used to test the software's performance and reliability and to check how it works under
operational conditions. Tests are designed to reflect actual user inputs and their
frequency. After running the tests, an estimate of the operational reliability of the system
can be made by counting the number of observed system failures.
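As a rough illustration of the idea, the Python sketch below draws test inputs according to an assumed operational profile, runs the system under test, counts observed failures, and derives a reliability estimate. The parse_quantity function and the profile weights are invented purely for the example:

import random

# Hypothetical system under test: parses a user-supplied quantity string.
def parse_quantity(text):
    value = int(text)
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

# Assumed operational profile: input classes weighted by how often
# real users are expected to produce them.
OPERATIONAL_PROFILE = [
    (0.80, lambda: str(random.randint(0, 100))),        # typical valid input
    (0.15, lambda: str(random.randint(100, 10**6))),    # large but valid
    (0.05, lambda: random.choice(["", "abc", "-5"])),   # malformed input
]

def draw_input():
    r, cumulative = random.random(), 0.0
    for weight, generator in OPERATIONAL_PROFILE:
        cumulative += weight
        if r <= cumulative:
            return generator()
    return OPERATIONAL_PROFILE[-1][1]()

def estimate_reliability(runs=10_000):
    failures = 0
    for _ in range(runs):
        text = draw_input()
        try:
            parse_quantity(text)
        except ValueError:
            pass            # rejecting bad input is correct behavior
        except Exception:
            failures += 1   # anything else counts as an observed system failure
    return 1 - failures / runs

if __name__ == "__main__":
    print(f"Estimated operational reliability: {estimate_reliability():.4f}")

Here the estimate comes out at 1.0 because the toy function has no remaining defects; against a real system, the failure count over a realistic profile is what gives the reliability figure its meaning.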

The ultimate goal of the verification and validation process is to establish confidence that the
software system is fit for purpose. This does not mean that the program must be completely
free of defects. Rather it means that the system must be good enough for its intended use.
The level of required confidence depends on the system's purpose, the expectations of the
system users and the current marketing environment for the system.

Verification and validation is a process that establishes the existence of defects in a software system.

Debugging is a process that locates and corrects these defects.

After a defect in the program has been discovered, it must be corrected and the system
should be re-validated. This may involve re-inspecting the program or repeating the previous
test runs (regression testing).

Regression testing is performed to check that the changes made to the program have not
introduced any new errors into the system. In principle, during regression testing all tests
should be repeated after every defect repair; in practice this is too expensive.
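A common way to keep this affordable is to maintain an automated suite in which every repaired defect gets its own test, so the whole suite can be rerun cheaply after each change. The Python sketch below is a minimal illustration; the average function and the defect it refers to are invented for the example:

import unittest

# Hypothetical function that once contained a defect: it crashed on an
# empty list. The repair is pinned down by a dedicated test below.
def average(values):
    if not values:          # defect fix: previously raised ZeroDivisionError
        return 0.0
    return sum(values) / len(values)

class RegressionSuite(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(average([2, 4, 6]), 4.0)

    def test_empty_list_defect(self):
        # Added when the (hypothetical) defect was repaired; rerunning it
        # after every change checks the fault has not been reintroduced.
        self.assertEqual(average([]), 0.0)

if __name__ == "__main__":
    unittest.main()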

Black box and White box testing

Black-box and white-box are test design methods. Black-box test design treats the system
as a "black-box", so it doesn't explicitly use knowledge of the internal structure. Black-box
test design is usually described as focusing on testing functional requirements. Synonyms
for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test
design allows one to peek inside the "box", and it focuses specifically on using internal
knowledge of the software to guide the selection of test data. Synonyms for white-box
include: structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the
terms "behavioral" and "structural". Behavioral test design is slightly different from black-box
test design because the use of internal knowledge isn't strictly forbidden, but it's still
discouraged. In practice, it hasn't proven useful to use a single test design method. One has
to use a mixture of different methods so that they aren't hindered by the limitations of a
particular one. Some call this "gray-box" or "translucent-box" test design, but others wish
we'd stop talking about boxes altogether.
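The contrast is easiest to see on a concrete function. In the Python sketch below (the grade function and its specification are invented for illustration), the black-box tests are derived only from the stated specification, while the white-box tests are chosen after reading the code, targeting its branch boundary and its guard clause:

import unittest

# Hypothetical function under test. Assumed specification:
# 0-49 -> "fail", 50-100 -> "pass"; anything else is invalid.
def grade(score):
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

class BlackBoxTests(unittest.TestCase):
    # Derived purely from the specification, without reading the code.
    def test_passing_score(self):
        self.assertEqual(grade(75), "pass")

    def test_failing_score(self):
        self.assertEqual(grade(30), "fail")

class WhiteBoxTests(unittest.TestCase):
    # Chosen after inspecting the code: exercise the boundary of the
    # `score >= 50` branch and the out-of-range guard.
    def test_branch_boundary(self):
        self.assertEqual(grade(49), "fail")
        self.assertEqual(grade(50), "pass")

    def test_range_guard(self):
        with self.assertRaises(ValueError):
            grade(101)

if __name__ == "__main__":
    unittest.main()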

It is important to understand that these methods are used during the test design phase, and
their influence is hard to see in the tests once they're implemented. Note that any level of
testing (unit testing, system testing, etc.) can use any test design methods. Unit testing is
usually associated with structural test design, but this is because testers usually don't have
well-defined requirements at the unit level to validate.

Unit Testing

In computer programming, a unit test is a procedure used to validate that a particular
module of source code is working properly. The idea behind unit tests is to write test cases
for all functions and methods so that whenever a change causes a regression, it can be
quickly identified and fixed. Ideally each test case is separate from the others; constructs
such as mock objects can assist in separating unit tests. This type of testing is mostly done
by the developers and not by end-users.
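A minimal Python illustration of this isolation, using unittest with a mock standing in for a real collaborator (the order_total function and the tax_service dependency are invented for the example):

import unittest
from unittest.mock import Mock

# Hypothetical unit under test: computes an order total using a
# separately tested tax service. The service is mocked so this test
# stays isolated from other units.
def order_total(prices, tax_service):
    subtotal = sum(prices)
    return subtotal + tax_service.tax_for(subtotal)

class OrderTotalTest(unittest.TestCase):
    def test_total_includes_tax(self):
        tax_service = Mock()
        tax_service.tax_for.return_value = 1.50   # canned answer, no real dependency
        self.assertEqual(order_total([10.0, 5.0], tax_service), 16.50)
        tax_service.tax_for.assert_called_once_with(15.0)

if __name__ == "__main__":
    unittest.main()

Because the tax service is mocked, a failure here points at order_total itself, not at the collaborator.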

The goal of unit testing is to isolate each part of the program and show that the individual
parts are correct. Unit testing provides a strict, written contract that the piece of code must
satisfy. As a result, it affords several benefits.

Unit testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves. Therefore, it will not catch integration errors, performance problems or any other system-wide issues.

Integration Testing

Integration testing is a logical extension of unit testing. In its simplest form, two units that
have already been tested are combined into a component and the interface between them
is tested. A component, in this sense, refers to an integrated aggregate of more than one
unit. In a realistic scenario, many units are combined into components, which are in turn
aggregated into even larger parts of the program. The idea is to test combinations of pieces
and eventually expand the process to test your modules with those of other groups.
Eventually all the modules making up a process are tested together. Beyond that, if the
program is composed of more than one process, they should be tested in pairs rather than all
at once.

Integration testing identifies problems that occur when units are combined. By using a test
plan that requires you to test each unit and ensure the viability of each before combining
units, you know that any errors discovered when combining units are likely related to the
interface between units. This method reduces the number of possibilities to a far simpler
level of analysis.
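A small Python sketch of the simplest case, two already-tested units combined and the interface between them exercised (parse_record and top_scorer are invented for illustration):

import unittest

# Two hypothetical units, each assumed to have passed its own unit tests.
def parse_record(line):
    """Unit A: turns 'name,score' into a (name, score) tuple."""
    name, score = line.split(",")
    return name.strip(), int(score)

def top_scorer(records):
    """Unit B: picks the (name, score) pair with the highest score."""
    return max(records, key=lambda pair: pair[1])

class ParserRankerIntegrationTest(unittest.TestCase):
    # Exercises the interface between the units: the tuples produced by
    # parse_record must be in exactly the shape top_scorer expects.
    def test_parsed_records_flow_into_ranker(self):
        lines = ["alice, 82", "bob, 91", "carol, 78"]
        records = [parse_record(line) for line in lines]
        self.assertEqual(top_scorer(records), ("bob", 91))

if __name__ == "__main__":
    unittest.main()

Since each unit has already passed its own tests, a failure here is likely to lie in the interface between them rather than inside either unit.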


System Testing

Validation testing:
is a concern which overlaps with integration testing. Ensuring that the application fulfills its
specification is a major criterion for the construction of an integration test. Validation testing
also overlaps to a large extent with system testing, where the application is tested with
respect to its typical working environment. Consequently for many processes no clear
division between validation and system testing can be made. Specific tests which can be
performed in either or both stages include the following.

Regression testing
Where this version of the software is tested with the automated test harnesses used with
previous versions to ensure that the required features of the previous version are still working
in the new version.


Recovery testing
Where the software is deliberately interrupted in a number of ways, for example taking its
hard disk offline or even turning the computer off, to ensure that the appropriate techniques
for restoring any lost data will function.

Security testing
Where unauthorized attempts are made to operate the software, or parts of it. It might
also include attempts to obtain access to the data, or to harm the software installation or even the
system software. As with all types of security, it is recognized that someone sufficiently
determined will be able to obtain unauthorized access, and the best that can be achieved is to
make this process as difficult as possible.

Stress testing
Where abnormal demands are made upon the software by increasing the rate at which it is
asked to accept data, or the rate at which it is asked to produce information. More complex
tests may attempt to create very large data sets or cause the software to make excessive
demands on the operating system.
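A stress test can be as simple as a driver that feeds a component ever larger volumes of data and watches for degradation. The Python sketch below uses an invented EventBuffer component and made-up volume parameters:

import time

# Hypothetical component under stress: an in-memory event buffer.
class EventBuffer:
    def __init__(self):
        self.events = []

    def accept(self, event):
        self.events.append(event)

# Minimal stress driver: feed the buffer at steadily increasing volumes
# and record how long each batch takes, watching for degradation.
def stress(buffer_factory, start=1_000, factor=10, rounds=4):
    for i in range(rounds):
        n = start * factor ** i
        buf = buffer_factory()
        began = time.perf_counter()
        for event in range(n):
            buf.accept(event)
        elapsed = time.perf_counter() - began
        print(f"{n:>10} events accepted in {elapsed:.3f}s")

if __name__ == "__main__":
    stress(EventBuffer)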

Performance testing
Where the performance requirements, if any, are checked. These may include the size of the
software when installed, the amount of main memory and/or secondary storage it requires,
the demands made of the operating system when running within normal limits, and the
response time.
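Where a response-time requirement exists, it can be checked by an ordinary test that times the operation, as in the Python sketch below. The lookup function and the 100 ms budget are assumed for illustration; real performance tests would control the environment far more carefully, since wall-clock assertions are sensitive to the machine they run on:

import time
import unittest

# Hypothetical operation covered by an assumed response-time
# requirement: 10,000 lookups must complete within 100 ms.
def lookup(table, key):
    return table.get(key)

class ResponseTimeTest(unittest.TestCase):
    def test_lookup_meets_response_time_requirement(self):
        table = {i: i * i for i in range(10_000)}
        began = time.perf_counter()
        for key in range(10_000):
            lookup(table, key)
        elapsed = time.perf_counter() - began
        self.assertLess(elapsed, 0.1, "response-time requirement exceeded")

if __name__ == "__main__":
    unittest.main()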

Usability testing
The process of usability measurement was introduced in the previous chapter. Even if
usability prototypes have been tested whilst the application was constructed, a validation test
of the finished product will always be required.

Alpha and beta testing
This is where the software is released to the actual end users. An initial release, the alpha
release, might be made to selected users who would be expected to report bugs and other
detailed observations back to the production team. Once the application has passed through
the alpha phase, a beta release, possibly incorporating changes necessitated by the alpha
phase, can be made to a larger, more representative set of users, before the final release is
made to all users.

During software verification and validation defects in the program are discovered and the
program must then be modified to correct these defects. This debugging process is normally
integrated with other verification and validation activities. However, testing and debugging are
different processes that do not have to be integrated.
