Thursday, March 16, 2023

Unit I: Basics of Software Testing


Inspection and Testing; what is testing?; testing objectives; terms: fault, failure, error, fault masking, test, test case; fundamental test process: test planning, test specification, test execution, test records, test completion; prioritizing the tests; psychology of testing; difference between QA and Testing.





What Does Inspection and Test Plan (ITP) Mean?

An inspection and test plan, or inspection test plan, is a document or series of documents used for quality assurance purposes. An inspection and test plan outlines how the quality of a particular object will be ensured both at its beginning and throughout its service life.


Corrosionpedia Explains Inspection and Test Plan (ITP)

An inspection and test plan will highlight many different testing and inspection tasks. For instance, destructive testing of materials may be done during the initial construction phase of the project to ensure that the materials have adequate properties and that the employees working with the materials have the required skill level for the project. Once the service life has begun, nondestructive methods are primarily used.

The inspection methods include ultrasonic testing, radiographic testing, visual inspection, pH testing and many others. The timing and frequency of these inspections are determined in the inspection and test plan. The plan will also set up intervals of auditing to make sure that the necessary inspections have been performed and are sufficiently documented.

An inspection and test plan is used in a variety of industries, especially the construction industry. Bridges, buildings and roads must be periodically inspected and tested to ensure that they are safe for their users or inhabitants. Another sector that uses inspection and test plans frequently is the oil and gas industry. Pipelines, for example, can be subject to corrosion because of the contents they carry. An inspection and test plan will dictate how often inspections will be conducted to prevent a pipeline failure. Although less common, service industries can also undergo inspection and test plans; these are usually more audit-focused and rely less on mechanical inspection methods. These audits are aimed at making sure the service operation is achieving its quality goals.


What is testing?

Testing is the practice of making objective judgments regarding the extent to which the system (device) meets, exceeds, or fails to meet stated objectives.

What is the purpose of testing?

There are two fundamental purposes of testing: verifying procurement specifications and managing risk. First, testing is about verifying that what was specified is what was delivered: it verifies that the product (system) meets the functional, performance, design, and implementation requirements identified in the procurement specifications. Second, testing is about managing risk for both the acquiring agency and the system’s vendor/developer/integrator. The testing program is used to identify when the work has been “completed” so that the contract can be closed, the vendor paid, and the system shifted by the agency into the warranty and maintenance phase of the project.

Why is testing important?

A good testing program is a tool for both the agency and the integrator/supplier; it typically identifies the end of the “development” phase of the project, establishes the criteria for project acceptance, and establishes the start of the warranty period.


What methods are used to conduct testing? There are five basic verification methods, as outlined below.

  • Inspection - Inspection is verification by physical and visual examination of the item, reviewing descriptive documentation, and comparing the appropriate characteristics with the referenced standards to determine compliance with the requirements.

  • Certificate of Compliance - A Certificate of Compliance is a means of verifying compliance for items that are standard products. Signed certificates from vendors state that the purchased items meet procurement specifications, standards, and other requirements as defined in the purchase order. Records of tests performed to verify specifications are retained by the vendor as evidence that the requirements were met and are made available by the vendor for purchaser review.

  • Analysis - Analysis is verification by evaluation or simulation using mathematical representations, charts, graphs, circuit diagrams, calculations, or data reduction. This includes analysis of algorithms independent of computer implementation, analytical conclusions drawn from test data, and extension of test-produced data to untested conditions.

  • Demonstration - Demonstration is functional verification that a specification requirement is met by observing the qualitative results of an operation or exercise performed under specific conditions. This includes content and accuracy of displays, comparison of system outputs with independently derived test cases, and system recovery from induced failure conditions.

  • Test (Formal) - Formal testing is verification that a specification requirement has been met by measuring, recording, or evaluating qualitative and quantitative data obtained during controlled exercises under all appropriate conditions using real and/or simulated stimuli. This includes verification of system performance, system functionality, and correct data distribution.


Software testing has different goals and objectives. The major objectives of software testing are as follows:

  • Finding defects introduced by the programmer while developing the software.

  • Gaining confidence in, and providing information about, the level of quality.

  • Preventing defects.

  • Making sure the end result meets the business and user requirements.

  • Ensuring that the product satisfies the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).

  • Gaining the confidence of customers by providing them with a quality product.

Software testing helps validate the software application or product against business and user requirements. It is very important to have good test coverage in order to test the software application completely and make sure that it is performing well and as per the specifications.

While determining test coverage, the test cases should be designed to maximize the chances of finding errors or bugs; the test cases should be very effective. This objective can be measured by the number of defects reported per test case: the higher the number of defects reported, the more effective the test cases.
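As a rough sketch of this effectiveness metric (the function name and the example numbers are illustrative assumptions, not from any standard):

```python
def defect_detection_effectiveness(defects_reported, test_cases_executed):
    """Defects reported per executed test case: a rough measure
    of how effective the test cases are at finding problems."""
    if test_cases_executed == 0:
        raise ValueError("no test cases executed")
    return defects_reported / test_cases_executed

# Example: 30 defects found by 120 executed test cases.
ratio = defect_detection_effectiveness(30, 120)
print(f"{ratio:.2f} defects per test case")  # -> 0.25 defects per test case
```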

Once the delivery is made to the end users or customers, they should be able to operate the product without complaints. To make this happen, the tester should know how the customers are going to use the product, and should write test scenarios and design test cases accordingly. This helps greatly in fulfilling the customers' requirements.

Software testing makes sure that testing is done properly and hence that the system is ready for use. Good coverage means that testing covers the various areas: functionality of the application; compatibility of the application with the OS, hardware, and different types of browsers; performance testing to test the performance of the application; and load testing to make sure the system is reliable and does not crash or run into blocking issues. It also confirms that the application can be deployed easily to the machine without any resistance. Hence the application is easy to install, learn, and use.



Difference between Bug, Defect, Error, Fault & Failure

In this section, we discuss the difference between bug, defect, error, fault, and failure, since all of these terms are used whenever a system or application acts abnormally.

Sometimes we call it an error, sometimes a bug or a defect, and so on. In software testing, many new test engineers are confused about these terminologies.

Generally, we use these terms in the Software Development Life Cycle (SDLC) based on its phases, but there is often conflict in how they are used.

In other words, in software testing the terms bug, defect, error, fault, and failure come up every day.

But to a beginner or someone inexperienced in this field, all these terminologies may seem like synonyms. It is essential to understand each of these terms independently when the software does not work as expected.

What is a bug?

In software testing, a bug is the informal name for a defect, which means that the software or application is not working as per the requirement. A coding error that causes a program to break down is known as a bug. Test engineers use the term bug.

If a QA (Quality Analyst) detects a bug, they can reproduce it and record it with the help of a bug report template.

What is a Defect?

When the application is not working as per the requirement, it is known as a defect. A defect is specified as the deviation between the actual and expected result of the application or software.

In other words, we can say that an issue identified by the programmer inside the code is called a defect.

What is Error?

A problem in the code leads to an error: a mistake can occur because the developer misunderstood the requirement, or because the requirement was not defined correctly. Developers use the term error.


What is Fault?

A fault may occur in software when code for fault tolerance has not been added, making the application misbehave.

A fault may happen in a program because of the following reasons:

  • Lack of resources

  • An invalid step

  • Inappropriate data definition

What is Failure?

Many defects lead to the software's failure. A failure is a fatal issue in the software/application, or in one of its modules, that makes the system unresponsive or broken.

In other words, we can say that if an end-user detects an issue in the product, then that particular issue is called a failure.

One defect might lead to one failure or to several failures.

For example, in a banking application, if the Amount Transfer module is not working and the Submit button does nothing when an end user tries to transfer money, this is a failure.

(Figure: the flow of the above terminologies, illustrating bug vs. defect vs. error vs. fault vs. failure.)



What is fundamental test process in software testing?


Testing is a process rather than a single activity. This process starts with test planning, then designing test cases, preparing for execution, and evaluating status until test closure. So, we can divide the activities within the fundamental test process into the following basic steps:

1) Planning and Control

2) Analysis and Design

3) Implementation and Execution

4) Evaluating exit criteria and Reporting

5) Test Closure activities

1)    Planning and Control:

Test planning has following major tasks:

i.  To determine the scope and risks and identify the objectives of testing.

ii. To determine the test approach.

iii. To implement the test policy and/or the test strategy. (Test strategy is an outline that describes the testing portion of the software development cycle. It is created to inform PM, testers and developers about some key issues of the testing process. This includes the testing objectives, method of testing, total time and resources required for the project and the testing environments.).

iv. To determine the required test resources like people, test environments, PCs, etc.

v. To schedule test analysis and design tasks, test implementation, execution and evaluation.

vi. To determine the exit criteria, we need to set criteria such as coverage criteria. (Coverage criteria specify the percentage of statements in the software that must be executed during testing. This helps us track whether we are completing test activities correctly. They show us which tasks and checks we must complete for a particular level of testing before we can say that testing is finished.)
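As a sketch, a statement-coverage exit check might look like the following (the 80% threshold and function names are illustrative assumptions; real projects agree on their own thresholds):

```python
def statement_coverage(executed_statements, total_statements):
    """Percentage of statements exercised by the tests."""
    return 100.0 * executed_statements / total_statements

def exit_criterion_met(executed, total, threshold=80.0):
    """True once coverage reaches the agreed threshold."""
    return statement_coverage(executed, total) >= threshold

print(exit_criterion_met(850, 1000))  # 85% coverage -> True
print(exit_criterion_met(700, 1000))  # 70% coverage -> False
```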

 Test control has the following major tasks:

i.  To measure and analyze the results of reviews and testing.

ii.  To monitor and document progress, test coverage and exit criteria.

iii.  To provide information on testing.

iv.  To initiate corrective actions.

v.   To make decisions.

2)  Analysis and Design:

Test analysis and Test Design has the following major tasks:

i.   To review the test basis. (The test basis is the information we need in order to start the test analysis and create our own test cases. Basically, it is the documentation on which test cases are based, such as requirements, design specifications, product risk analysis, architecture, and interfaces. We can use the test basis documents to understand what the system should do once built.)

ii.   To identify test conditions.

iii.  To design the tests.

iv.  To evaluate testability of the requirements and system.

v.  To design the test environment set-up and identify any required infrastructure and tools.

3)  Implementation and Execution:

During test implementation and execution, we turn the test conditions into test cases and procedures and other testware, such as scripts for automation, the test environment, and any other test infrastructure. (A test case is a set of conditions under which a tester determines whether an application is working correctly or not.)

(Testware is a term for all the artifacts that serve in combination for testing software, such as scripts, the test environment, and any other test infrastructure, kept for later reuse.)

Test implementation has the following major task:

i.  To develop and prioritize our test cases using test design techniques, and to create test data for those tests. (In order to test a software application, you need to enter some data to exercise most of its features. Any such specifically identified data used in tests is known as test data.)

We also write some instructions for carrying out the tests which is known as test procedures.

We may also need to automate some tests using test harness and automated tests scripts. (A test harness is a collection of software and test data for testing a program unit by running it under different conditions and monitoring its behavior and outputs.)

ii. To create test suites from the test cases for efficient test execution.

(Test suite is a collection of test cases that are used to test a software program   to show that it has some specified set of behaviours. A test suite often contains detailed instructions and information for each collection of test cases on the system configuration to be used during testing. Test suites are used to group similar test cases together.)

iii. To implement and verify the environment.
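The test-case and test-suite notions above can be sketched with Python's unittest module (the add() function and its expected values are illustrative, not from the source):

```python
import unittest

# A hypothetical unit under test.
def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    """Two test cases for add(): each is a set of conditions
    under which we decide whether the unit works correctly."""
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

def suite():
    """Group similar test cases into a suite for efficient execution."""
    s = unittest.TestSuite()
    s.addTest(AddTests("test_positive_numbers"))
    s.addTest(AddTests("test_negative_numbers"))
    return s

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite())
```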

Test execution has the following major task:

i.  To execute test suites and individual test cases following the test procedures.

ii. To re-execute the tests that previously failed in order to confirm a fix. This is known as confirmation testing or re-testing.

iii. To log the outcome of test execution and record the identities and versions of the software under test. The test log is used for the audit trail. (A test log records which test cases were executed, in what order, who executed them, and the status of each test case (pass/fail).)

iv. To compare actual results with expected results.

v. Where there are differences between actual and expected results, to report the discrepancies as incidents.
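A test log for the audit trail can be sketched as a simple list of records (the field names and IDs here are illustrative assumptions):

```python
from datetime import datetime, timezone

test_log = []

def log_execution(test_case_id, version, tester, status):
    """Record one test-case execution: who ran what, against
    which software version, and with what result."""
    test_log.append({
        "test_case": test_case_id,
        "software_version": version,
        "tester": tester,
        "status": status,  # "pass" or "fail"
        "executed_at": datetime.now(timezone.utc).isoformat(),
    })

log_execution("TC-001", "1.4.2", "asha", "pass")
log_execution("TC-002", "1.4.2", "ravi", "fail")

# Failed cases become candidates for incident reports.
failed = [e["test_case"] for e in test_log if e["status"] == "fail"]
print(failed)  # -> ['TC-002']
```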

4)  Evaluating Exit criteria and Reporting:

Based on the risk assessment of the project, we set criteria for each test level against which we measure whether “enough testing” has been done. These criteria vary from project to project and are known as exit criteria.

Exit criteria come into the picture when:

— The maximum number of test cases has been executed with a certain pass percentage.

— The bug rate falls below a certain level.

— The deadlines have been reached.
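These conditions can be sketched as one combined check (the thresholds and parameter names are illustrative assumptions; each project sets its own):

```python
def exit_criteria_met(executed, total, passed, open_bug_rate,
                      min_execution=0.95, min_pass=0.90, max_bug_rate=0.02):
    """Check the example exit conditions: enough cases executed,
    enough of them passing, and the bug rate below a ceiling."""
    execution_ratio = executed / total
    pass_ratio = passed / executed
    return (execution_ratio >= min_execution
            and pass_ratio >= min_pass
            and open_bug_rate <= max_bug_rate)

# 98 of 100 cases executed, 95 passed, 1% open bug rate -> criteria met
print(exit_criteria_met(executed=98, total=100, passed=95, open_bug_rate=0.01))
```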

Evaluating exit criteria has the following major tasks:

i.  To check the test logs against the exit criteria specified in test planning.

ii.  To assess if more tests are needed or if the exit criteria specified should be changed.

iii.  To write a test summary report for stakeholders.

5)  Test Closure activities:

Test closure activities are performed when the software is delivered. Testing can also be closed for other reasons, such as:

  • When all the information needed for the testing has been gathered.

  • When a project is cancelled.

  • When some target is achieved.

  • When a maintenance release or update is done.

Test closure activities have the following major tasks:

i.  To check which planned deliverables are actually delivered and to ensure that all incident reports have been resolved.

ii. To finalize and archive testware such as scripts, test environments, etc. for later reuse.

iii. To handover the testware to the maintenance organization. They will give support to the software.

iv. To evaluate how the testing went and learn lessons for future releases and projects.



Test Plan

A Test Plan is a detailed document that describes the test strategy, objectives, schedule, estimation, deliverables, and resources required to perform testing for a software product. Test Plan helps us determine the effort needed to validate the quality of the application under test. The test plan serves as a blueprint to conduct software testing activities as a defined process, which is minutely monitored and controlled by the test manager.

As per the ISTQB definition: “A test plan is a document describing the scope, approach, resources, and schedule of intended test activities.”

Let’s start with the following Test Plan example/scenario: in a meeting, you want to discuss the Test Plan with the team members, but they are not interested.



2.2 Test specifications

A test specification is a specification of which test suites and test cases to run and which to skip. A test specification can also group several test cases into conf cases with init and cleanup functions (see section about configuration cases below). In a test there can be test specifications on three different levels:

The top level is a test specification file which roughly specifies what to test for a whole application. The test specification in such a file is encapsulated in a topcase command.

Then there is a test specification for each test suite, specifying which test cases to run within the suite. The test specification for a test suite is returned from the all(suite) function in the test suite module.

And finally there can be a test specification per test case, specifying sub test cases to run. The test specification for a test case is returned from the specification clause of the test case.

When a test starts, the total test specification is built in a tree fashion, starting from the top level test specification.

The following are the valid elements of a test specification. The specification can be one of these elements or a list with any combination of the elements:

{Mod, Case}

This specifies the test case Mod:Case/1

{dir, Dir}

This specifies all modules *_SUITE in the directory Dir

{dir, Dir, Pattern}

This specifies all modules Pattern* in the directory Dir

{conf, Init, TestSpec, Fin}

This is a configuration case. In a test specification file, Init and Fin must be {Mod,Func}. Inside a module they can also be just Func. See the section named Configuration Cases below for more information about this.

{make, Init, TestSpec, Fin}

This is a special version of a conf case which is only used by the test server framework ts. Init and Fin are make and unmake functions for a data directory. TestSpec is the test specification for the test suite owning the data directory in question. If the make function fails, all tests in the test suite are skipped. The difference between this "make case" and a normal conf case is that for the make case, Init and Fin are given with arguments ({Mod,Func,Args}), and that they are executed on the controller node (i.e. not on target).

Case

This can only be used inside a module, i.e. not a test specification file. It specifies the test case CurrentModule:Case.


Test execution is the process of executing the code and comparing the expected and actual results. Following factors need to be considered for a test execution process −

  • Based on risk, select a subset of the test suite to be executed for this cycle.

  • Assign the test cases in each test suite to testers for execution.

  • Execute tests, report bugs, and capture test status continuously.

  • Resolve blocking issues as they arise.

  • Report status, adjust assignments, and reconsider plans and priorities daily.

  • Report test cycle findings and status.

The following points need to be considered for Test Execution.

  • In this phase, the QA team performs actual validation of the AUT based on the prepared test cases and compares the stepwise results with the expected results.

  • The entry criteria for this phase are completion of the Test Plan and the Test Case Development phase; the test data should also be ready.

  • Validation of the test environment setup through smoke testing is always recommended before officially entering test execution.

  • The exit criteria require successful validation of all test cases; defects should be closed or deferred; and the test case execution and defect summary reports should be ready.

Activities for Test Execution

The objective of this phase is real-time validation of the AUT before moving on to production/release. To sign off from this phase, the QA team performs different types of testing to ensure product quality. Along with this, defect reporting and retesting are also crucial activities in this phase. Following are the important activities of this phase −

System Integration Testing

The real validation of product / AUT starts here. System Integration Testing (SIT) is a black box testing technique that evaluates the system's compliance against specified requirements/ test cases prepared.

System Integration Testing is usually performed on a subset of the system. SIT can be performed with minimal use of testing tools; the interactions exchanged are verified, and the behavior of each data field within each individual layer is also investigated. After integration, there are three main states of data flow −

  • Data state within the integration layer

  • Data state within the database layer

  • Data state within the Application layer

Note − In SIT testing, the main objective of the QA team is to find as many defects as possible to ensure quality.

Defect Reporting

A software bug arises when the expected result does not match the actual result. It can be an error, flaw, failure, or fault in a computer program. Most bugs arise from mistakes and errors made by developers or architects.

While performing SIT testing, the QA team finds these types of defects and these need to be reported to the concerned team members. The members take further action and fix the defects. Another advantage of reporting is it eases the tracking of the status of defects. There are many popular tools like ALM, QC, JIRA, Version One, Bugzilla that support defect reporting and tracking.

Defect reporting is the process of finding defects in the application under test or product, through testing or by recording feedback from customers, and making new versions of the product that fix the defects based on that feedback.

Defect tracking is also an important process in software engineering, as complex and business-critical systems have hundreds of defects. One of the most challenging factors is managing, evaluating, and prioritizing these defects. The number of defects multiplies over time, and to manage them effectively, a defect tracking system is used to make the job easier.

Defect Mapping

Once a defect is reported and logged, it should be mapped to the concerned failed/blocked test cases and the corresponding requirements in the Requirement Traceability Matrix. This mapping is done by the defect reporter. It helps to produce a proper defect report and analyze the impact on the product. Once the test cases and requirements are mapped to the defect, stakeholders can analyze and decide whether to fix or defer the defect based on priority and severity.
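A minimal sketch of such a defect-to-traceability mapping (all IDs and field names are illustrative assumptions):

```python
# Map each defect to the failed/blocked test cases and the
# requirements it affects, as in a Requirement Traceability Matrix.
defect_map = {
    "DEF-101": {"test_cases": ["TC-014", "TC-015"], "requirements": ["REQ-7"]},
    "DEF-102": {"test_cases": ["TC-020"], "requirements": ["REQ-3", "REQ-9"]},
}

def requirements_impacted(defect_id):
    """Requirements stakeholders must review before a fix/defer decision."""
    return defect_map[defect_id]["requirements"]

print(requirements_impacted("DEF-102"))  # -> ['REQ-3', 'REQ-9']
```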

Re-testing

Re-testing is executing a previously failed test against AUT to check whether the problem is resolved. After a defect has been fixed, re-testing is performed to check the scenario under the same environmental conditions.

During re-testing, testers look for granular details at the changed area of functionality, whereas regression testing covers all the main functions to ensure that no functionalities are broken due to this change.

Regression Testing

Once all defects are in closed, deferred, or rejected status, and none of the test cases are in progress/failed/no-run status, it can be said that system integration testing is complete with respect to the test cases and requirements. However, one round of quick testing is required to ensure that no functionality is broken due to code changes/defect fixes.

Regression testing is a black box testing technique that consists of re-executing those tests that are impacted by code changes. These tests should be executed as often as possible throughout the software development life cycle.

Types of Regression Tests

  • Final Regression Tests − A "final regression testing" is performed to validate the build that has not undergone a change for a period of time. This build is deployed or shipped to customers.

  • Regression Tests − A normal regression testing is performed to verify if the build has NOT broken any other parts of the application by the recent code changes for defect fixing or for enhancement.
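Selecting which tests to re-execute after a code change can be sketched as follows (the test-to-module mapping is an illustrative assumption; real projects derive it from coverage or traceability data):

```python
# Map each test case to the modules it exercises (illustrative IDs).
tests_to_modules = {
    "TC-01": {"login", "session"},
    "TC-02": {"payments"},
    "TC-03": {"payments", "reports"},
    "TC-04": {"profile"},
}

def regression_subset(changed_modules):
    """Select the tests that touch any changed module."""
    return sorted(tc for tc, mods in tests_to_modules.items()
                  if mods & changed_modules)

print(regression_subset({"payments"}))  # -> ['TC-02', 'TC-03']
```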

What is Test Completion Criterion?

A check against the test exit criteria is essential before we claim that testing is complete. Before putting an end to the test process, the product quality is measured against the test completion criteria.

The exit criterion is connected to the test coverage, the test case design technique adopted, and the risk level of the product, and it varies from one test level to another.

Test Completion Criteria - Examples:

  • Specified coverage has been achieved.

  • No Showstoppers or critical defects.

  • There are very few known medium or low-priority defects that don't affect the usage of the product.

Test Completion Criteria - Significance:

  • If the exit criteria have not been met, testing cannot be stopped.

  • The exit criteria have to be revised, or the testing time extended, based on the quality of the product.

  • Any changes to the test completion criterion must be documented and signed off by the stakeholders.

  • The testware can be released upon successful completion of exit criteria.

The quality of software can be assured through the software testing process of the Software Development Life Cycle (SDLC). Smarter software testing can help deliver a product that is more reliable and defect-free, meeting the business requirements and stakeholders’ expectations. This is also why testing takes significant time and resources, making the process expensive. Because limited time is left when the product reaches the testing stage, it has become more important to prioritize test cases, especially during regression testing, in order to improve the efficiency of software testing.

Regression testing is a type of software testing which checks that changes, updates, or improvements made in the code base of an application do not impact the existing functionality of the software. It is responsible for the overall stability and functionality of the existing features.

During this Regression testing of software, the Test Case Prioritization or TCP comes into play. TCP is one of the approaches to Regression testing apart from the Test Suite Minimization (TSM) and Test Case Selection (TCS).

What is Test Case Prioritization (TCP)?

Test Case Prioritization (TCP), as the name suggests, is the process of prioritizing the test cases in a test suite on the basis of different factors, which could be anything from code coverage and functionality to risk, critical modules, features, etc.

It provides an approach to execute the most significant test cases first, according to some measure, and thereby produce the desired outcome, such as revealing faults earlier and giving testers faster feedback.
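A minimal sketch of prioritization by a combined score (the weights, fields, and scores are illustrative assumptions; real projects define their own measures):

```python
# Each test case carries illustrative risk and coverage scores (0-10).
test_cases = [
    {"id": "TC-01", "risk": 2, "coverage": 5},
    {"id": "TC-02", "risk": 9, "coverage": 7},
    {"id": "TC-03", "risk": 6, "coverage": 9},
]

def priority(tc, w_risk=0.7, w_cov=0.3):
    """Weighted score: riskier, broader tests should run first."""
    return w_risk * tc["risk"] + w_cov * tc["coverage"]

ordered = sorted(test_cases, key=priority, reverse=True)
print([tc["id"] for tc in ordered])  # -> ['TC-02', 'TC-03', 'TC-01']
```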

Why is Test Case Prioritization important?

  • Testing is one of the most critically important phases of the SDLC which consumes significant resources in terms of cost, effort, and time.

  • The difficult part of testing is the risk management, test planning, cost value, and being analytical about which test to run for your specific project.

  • Running all the test cases in a test suite can require a large amount of effort and thereby, increase the regression testing cost.

  • As the size of the software grows, the test suite also grows bigger and thus, requires more effort to maintain the test suite.

  • For complex applications, it is impossible and impractical to exhaustively test each and every scenario.

  • Prioritizing test cases based on perceived risks and customer needs can efficiently reduce the number of test cases required for testing an application.

  • Prioritizing test cases also helps in meeting the project milestones along with meeting customer requirements and expectations.

  • Early detection of bugs can also be achieved.

Psychology of Testing


Psychological Testing in Software Testing

Software development, including software testing, involves human beings. Therefore, human psychology has an important effect on software testing.


In software testing, psychology plays an extremely important role. It is one of those factors that stay behind the scene, but has a great impact on the end result. Categorized into three sections, the psychology of testing enables smooth testing as well as makes the process hassle-free. It is mainly dependent on the mindset of the developers and testers, as well as the quality of communication between them. Moreover, the psychology of testing improves mutual understanding among team members and helps them work towards a common goal.

The three sections of the psychology of testing are:

  • The mindset of Developers and Testers.

  • Communication in a Constructive Manner.

  • Test Independence.

The mindset of Developers and Testers

The software development life cycle is a combination of various activities performed by different individuals using their expertise and knowledge. It is no secret that successful software development requires people with different skills and mindsets.

Developers synthesize code. They build up things, putting pieces together and figuring out fun and unique ways of combining those distinct little bits to do wonderful and amazing things.

But Testers are all about analysis. Once it has all been put together, the tester likes to take it apart again, piece by piece, this time looking for those little corners, edges, and absurdities that hide in those weird and strange interactions that come from those new and amazing ways of putting pieces together.

Testing and reviewing an application is different from analyzing and developing it. While testing or reviewing a product, testers mainly look for defects or failures in the product. When building or developing an application, we work positively to solve problems during the development process and to make the product match the user specification.


As an example,

Identifying defects during static testing (such as a requirements review or a user story refinement session), or identifying failures during dynamic test execution, may be perceived as criticism of the product and of its author.

So the developer or the analyst may have problems with the tester, thinking that the tester is criticizing them.

There is an element of human psychology called confirmation bias, which means that most people find it difficult to accept information that disagrees with currently held beliefs.

For example, since developers expect their code to be correct, they have a confirmation bias that makes it difficult to accept that the code is incorrect.

In addition to confirmation bias, other cognitive biases may make it difficult for people to understand or accept information produced by testing.





