Tuesday, December 4, 2007

Software Testing

Jignesh Patel (Quality Assurance Executive)
Software testing is the process used to measure the quality of developed computer software. Usually, quality is constrained to topics such as correctness, completeness, and security, but it can also include more technical requirements, as described under the ISO standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison of the state and behavior of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.

Structure of test case

Formal, written test cases consist of three main parts with subsections:
Ø Information contains general information about the test case.
o Identifier is a unique identifier of the test case for further reference, for example when describing a found defect.
o Test case owner/creator is the name of the tester or test designer who created the test or is responsible for its development.
o Version of the current test case definition.
o Name of the test case should be a human-oriented title that makes it quick to understand the test case's purpose and scope.
o Identifier of the requirement that is covered by the test case. This could also be the identifier of a use case or a functional specification item.
o Purpose contains a short description of the test's purpose and the functionality it checks.
o Dependencies lists any other test cases or preconditions that this test case relies on.

Ø Test case activity
o Testing environment/configuration contains information about the hardware or software configuration that must be in place while executing the test case.
o Initialization describes actions that must be performed before test case execution starts; for example, opening some file.
o Finalization describes actions to be done after the test case is performed; for example, if the test case crashes the database, the tester should restore it before other test cases are run.
o Actions lists the steps to be done, one by one, to complete the test.
o Input data description.
Ø Results
o Expected results contains a description of what the tester should see after all test steps have been completed.
o Actual results contains a brief description of what the tester actually saw after the test steps were completed. This is often replaced with a simple pass/fail.
o Pass/Fail. Quite often, if a test case fails, a reference to the defect involved should be listed in this column.
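As a rough sketch, the structure above can also be captured in code. The record below mirrors the sections just described; the field names are invented for illustration and do not come from any particular test management tool.

from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: fields mirror the test case sections above.
@dataclass
class TestCase:
    # Information
    identifier: str              # unique id, e.g. "TC-042"
    owner: str                   # tester or test designer
    version: str
    name: str                    # human-oriented title
    requirement_id: str          # requirement / use case covered
    purpose: str
    dependencies: List[str] = field(default_factory=list)
    # Test case activity
    environment: str = ""
    initialization: List[str] = field(default_factory=list)
    finalization: List[str] = field(default_factory=list)
    actions: List[str] = field(default_factory=list)
    input_data: str = ""
    # Results
    expected_result: str = ""
    actual_result: str = ""
    status: str = "Not Run"      # Pass / Fail, plus defect reference on failure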

White box, black box, and grey box testing

White box and black box testing are terms used to describe the point of view a test engineer takes when designing test cases. Black box testing treats the software as a black box, without any understanding of how the internals behave. Thus, the tester inputs data and only sees the output from the test object. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) is the same as the expected value specified in the test case.
White box testing, however, is when the tester has access to the internal data structures, code, and algorithms. For this reason, unit testing and debugging can be classified as white-box testing, and it usually requires writing code, or at a minimum stepping through it, and thus requires more skill than black-box testing. If the software under test is an interface or API of any sort, white-box testing is almost always required.
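To make the distinction concrete, here is a small hypothetical example: the same function is tested first as a black box (known inputs and expected outputs only) and then as a white box (deliberately exercising a guard clause the tester knows exists in the code). The function and its internals are invented for illustration.

import unittest

def apply_discount(price, rate):
    # Hypothetical function under test, invented for this example.
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class BlackBoxTest(unittest.TestCase):
    # Uses only inputs and expected outputs; no knowledge of internals.
    def test_known_input_output_pair(self):
        self.assertEqual(apply_discount(100.0, 0.25), 75.0)

class WhiteBoxTest(unittest.TestCase):
    # Written with knowledge of the code: the guard clause on rate
    # is exercised deliberately because we know it exists.
    def test_rate_guard_clause(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)

if __name__ == "__main__":
    unittest.main()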
In recent years the term grey box testing has come into common usage. This involves having access to internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box, level. Manipulating input data and formatting output do not qualify as grey-box because the input and output are clearly outside of the black box we are calling the software under test. This is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test.
Grey box testing could be used in the context of testing a client-server environment when the tester has control over the input, inspects the value in a SQL database and the output value, and then compares all three (the input, the SQL value, and the output) to determine whether the data was corrupted during database insertion or retrieval.
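A minimal sketch of that client-server check, with an in-memory SQLite database standing in for the real server; the functions and schema are invented for illustration.

import sqlite3
import unittest

def save_user(conn, name):
    # Stand-in for the application's insertion path.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

def load_user(conn, name):
    # Stand-in for the application's retrieval path.
    row = conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None

class GreyBoxTest(unittest.TestCase):
    def test_input_db_value_and_output_match(self):
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT)")
        given = "alice"
        save_user(conn, given)
        # Grey-box step: inspect the stored value directly in SQL.
        stored = conn.execute("SELECT name FROM users").fetchone()[0]
        returned = load_user(conn, given)
        # Compare all three: the input, the SQL value, and the output.
        self.assertEqual(given, stored)
        self.assertEqual(stored, returned)

if __name__ == "__main__":
    unittest.main()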

Levels of testing


o Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors (a short unit and integration test sketch follows this list).
o Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.
o Functional testing tests at any level (class, module, interface, or system) for proper functionality as defined in the specification.
o System testing tests a completely integrated system to verify that it meets its requirements.
o System integration testing verifies that a system is integrated to any external or third party systems defined in the system requirements.
o Acceptance testing can be conducted by the end-user, customer, or client to validate whether or not to accept the product. Acceptance testing may be performed as part of the hand-off process between any two phases of development.
o Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing. The term alpha implies that the software is functionally complete; after this point, development goes into bug-fix mode only and no new features are added.
o Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.
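As referenced in the list above, here is a short sketch of the unit and integration levels using Python's unittest; the two "units" and the code that wires them together are invented for illustration.

import unittest

# Two hypothetical units, plus the function that integrates them.
def parse_amount(text):
    return float(text.strip())

def add_tax(amount, rate=0.1):
    return amount * (1 + rate)

def total_from_text(text):
    # Integration point: the output of one unit feeds the other.
    return add_tax(parse_amount(text))

class UnitTests(unittest.TestCase):
    # Each unit is verified in isolation against its detailed design.
    def test_parse_amount(self):
        self.assertEqual(parse_amount(" 10.0 "), 10.0)

    def test_add_tax(self):
        self.assertAlmostEqual(add_tax(10.0), 11.0)

class IntegrationTest(unittest.TestCase):
    # Exercises the interface between the two units working together.
    def test_total_from_text(self):
        self.assertAlmostEqual(total_from_text("10.0"), 11.0)

if __name__ == "__main__":
    unittest.main()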

Manual Testing Tips

Web Testing
When testing websites, the following scenarios should be considered.
  • Functionality
  • Performance
  • Usability
  • Server side interface
  • Client side compatibility
  • Security

Functionality:
When testing the functionality of a web site, the following should be tested.

  • Links
    o Internal links
    o External links
    o Mail links
    o Broken links (a small link-checking sketch follows this list)
  • Forms
    o Field validation
    o Functional chart
    o Error messages for wrong input
    o Optional and mandatory fields
  • Database
    o Testing will be done on database integrity.
  • Cookies
    o Testing will be done on the client system side, on the temporary internet files.
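Checks such as the broken-links item above can be partially automated. The sketch below fetches a page, extracts absolute anchor links, and reports those that fail to resolve; the example URL and the simple approach are placeholders, not a complete crawler.

import urllib.request
import urllib.error
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    # Collects absolute href attributes from <a> tags on a page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)

def find_broken_links(page_url):
    # Fetch the page, try each link, and report the ones that fail.
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    extractor = LinkExtractor()
    extractor.feed(html)
    broken = []
    for link in extractor.links:
        try:
            urllib.request.urlopen(link, timeout=10)
        except urllib.error.URLError:
            broken.append(link)
    return broken

if __name__ == "__main__":
    # The URL is a placeholder for the page under test.
    for link in find_broken_links("http://example.com/"):
        print("Broken:", link)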

Performance:
Performance testing can be applied to understand the web site's scalability, or to benchmark performance in an environment of third-party products, such as servers and middleware being considered for purchase.
Connection speed:

o Test over various network connections, such as dial-up, ISDN, etc.

Load:

o What is the number of users per unit of time?
o Check for peak loads and how the system behaves under them.
o Check large amounts of data being accessed by a user (a small load sketch follows the Stress list below).

Stress:

o Apply continuous load.
o Monitor the performance of memory, CPU, file handling, etc.
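As referenced under Load above, here is a very small load sketch using only the Python standard library; the target URL, simulated user count, and timeout are placeholders to adjust for the site under test.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"   # placeholder target
USERS = 20                    # simulated concurrent users

def timed_request(_):
    # One simulated user: fetch the page and record the elapsed time.
    start = time.perf_counter()
    urllib.request.urlopen(URL, timeout=30).read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        times = list(pool.map(timed_request, range(USERS)))
    print(f"{USERS} users: min {min(times):.2f}s, max {max(times):.2f}s, "
          f"avg {sum(times)/len(times):.2f}s")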

Usability:

Usability testing is the process by which the human-computer interaction characteristics of a system are measured, and weaknesses are identified for correction.
Usability can be defined as the degree to which a given piece of software assists the person sitting at the keyboard to accomplish a task, as opposed to becoming an additional impediment to such accomplishment. The broad goal of usable systems is often assessed using several criteria:
  • Ease of learning
  • Navigation
  • Subjective user satisfaction
  • General appearance

Server side interface:

In web testing, the server-side interface should be tested.
This is done by verifying that communication is done properly.
Compatibility of the server with software, hardware, network, and database should be tested.
Client-side compatibility is also tested on various platforms, using various browsers, etc.

Security:

The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them.
The following types of testing are described in this section (a small network-scanning sketch follows the list):

  • Network Scanning
  • Vulnerability Scanning
  • Password Cracking
  • Log Review
  • Integrity Checkers
  • Virus Detection

Performance Testing

Performance testing is a rigorous evaluation of a working system under realistic conditions, intended to identify performance problems and to compare measures such as response time and throughput with requirements.
The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing.

To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough so that this process can proceed smoothly.
A clearly defined set of expectations is essential for meaningful performance testing.
For example, for a Web application, you need to know at least two things:

  • expected load in terms of concurrent users or HTTP connections
  • acceptable response time
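Once those two numbers are pinned down, they can be turned into an executable check. The sketch below times a handful of requests and asserts that the worst case stays within an assumed acceptable response time; the URL, sample count, and threshold are placeholders, and a real test would drive the expected concurrent load as well.

import time
import unittest
import urllib.request

URL = "http://example.com/"    # placeholder application under test
ACCEPTABLE_SECONDS = 2.0       # assumed requirement; adjust per spec
SAMPLES = 5

class ResponseTimeTest(unittest.TestCase):
    # Turns the "acceptable response time" expectation into a check
    # that can be rerun as a regression test against the baseline.
    def test_response_time_within_requirement(self):
        worst = 0.0
        for _ in range(SAMPLES):
            start = time.perf_counter()
            urllib.request.urlopen(URL, timeout=30).read()
            worst = max(worst, time.perf_counter() - start)
        self.assertLessEqual(worst, ACCEPTABLE_SECONDS)

if __name__ == "__main__":
    unittest.main()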

Load testing:

Load testing is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing.
Examples of volume testing:

  • testing a word processor by editing a very large document
  • testing a printer by sending it a very large job
  • testing a mail server with thousands of users' mailboxes

Examples of longevity/endurance testing:

  • testing a client-server application by running the client in a loop against the server over an extended period of time

Goals of load testing:

  • expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc.
  • ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.
Although performance testing and load testing can seem similar, their goals are different. On one hand, performance testing uses load testing techniques and tools for measurement and benchmarking purposes, at various load levels; on the other hand, load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly.

Stress testing:
Stress testing is a form of testing that is used to determine the stability of a given system or entity. It is designed to test the software in abnormal situations. Stress testing attempts to find the limits at which the system will fail through an abnormal quantity or frequency of inputs. Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing).
The main purpose behind this madness is to make sure that the system fails and recovers gracefully -- this quality is known as recoverability.
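Recoverability can be probed in miniature. The toy sketch below overwhelms a bounded work queue on purpose, confirms the overload fails fast rather than crashing, and then checks that the component accepts work again; the queue-backed service is an invented stand-in for a real system resource.

import queue

class TinyService:
    # Toy stand-in for a system with a bounded resource.
    def __init__(self, capacity=10):
        self.jobs = queue.Queue(maxsize=capacity)

    def submit(self, job):
        # Fails fast instead of blocking when the queue is full.
        self.jobs.put_nowait(job)

    def drain(self):
        while not self.jobs.empty():
            self.jobs.get_nowait()

if __name__ == "__main__":
    service = TinyService(capacity=10)
    rejected = 0
    # Stress phase: overwhelm the bounded resource on purpose.
    for i in range(100):
        try:
            service.submit(i)
        except queue.Full:
            rejected += 1
    print(f"rejected {rejected} jobs under overload")  # graceful failure
    # Recovery phase: the service should accept work again.
    service.drain()
    service.submit("normal job")
    print("accepted work after recovery")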