Fundamental Testing Theory

In testing, unlike physics or mathematics, there are no rigid definitions that become absolutely incorrect the moment they are paraphrased. It is therefore more important to understand the processes and approaches. In this article, we will go over the main definitions of testing theory.

Let's move on to the basic concepts.



Software testing is checking the conformity between the actual and expected behavior of a program, carried out on a finite set of tests chosen in a certain way.



The purpose of testing is to check that the software meets the requirements, to build confidence in the quality of the software, and to find obvious errors before the users of the program find them.



Why is software testing done?



  • To verify compliance with the requirements.
  • To detect problems early in development, when they are cheaper to fix, and thus prevent the cost of the product from rising.
  • To discover use cases that were not foreseen during development, and to see the product from the user's perspective.
  • To provide information about the current quality of the product, i.e. to give stakeholders what they need to make informed decisions.




Seven principles of testing:

  • Principle 1 - Testing shows the presence of defects (Testing shows presence of defects).

    Testing can show that defects are present, but it cannot prove that there are none. Testing reduces the probability that undiscovered defects remain in the software, but finding no defects is not proof of correctness.
  • Principle 2 - Exhaustive testing is impossible (Exhaustive testing is impossible).

    It is impossible to test all combinations of inputs and preconditions, except in trivial cases. Instead of exhaustive testing, risk analysis and prioritization are used to focus the effort.
  • Principle 3 - Early testing (Early testing).

    Testing activities should start as early as possible in the software development life cycle: the earlier a defect is found, the cheaper it is to fix.
  • Principle 4 - Defect clustering (Defects clustering).

    A small number of modules usually contains most of the defects (the Pareto principle: roughly 80% of defects are concentrated in 20% of modules).
  • Principle 5 - Pesticide paradox (Pesticide paradox).

    If the same set of tests is run over and over, it eventually stops finding new defects. Test cases must be regularly reviewed and updated.
  • Principle 6 - Testing is context dependent (Testing is context dependent).

    Testing is done differently in different contexts: for example, safety-critical software is tested differently from an e-commerce site, and a mobile game differently from a banking application.
  • Principle 7 - Absence-of-errors fallacy (Absence-of-errors fallacy).

    The absence of found defects during testing does not always mean that the product is ready for release. The system must also be user-friendly and meet the user's expectations and needs.


Quality Assurance (QA) and Quality Control (QC) - these terms are often used as if they were interchangeable, but there is a difference between quality assurance and quality control, even though in practice the processes do overlap.



QC (Quality Control) - product quality control: the analysis of test results and of the quality of new versions of the released product.



Quality control tasks include:



  • checking that the software is ready for release;
  • checking that the project meets the requirements and the agreed level of quality.


QA (Quality Assurance) - product quality assurance: the study of opportunities to change and improve the development process and the communication within the team, where testing is only one aspect of quality assurance.



Quality assurance objectives include:



  • verification of technical characteristics and software requirements;
  • risk assessment;
  • scheduling tasks to improve product quality;
  • preparation of documentation, test environment and data;
  • testing;
  • analysis of test results, as well as drawing up reports and other documents.





Verification and validation are two concepts closely related to testing and quality assurance processes. Unfortunately, they are often confused, although the differences between them are quite significant.



Verification is the process of evaluating a system to determine whether the results of the current development phase satisfy the conditions formulated at the start of that phase.



Validation is determining whether the developed software meets the users' expectations and needs, and their requirements for the system.



Put simply, verification answers the question "Are we building the product right?", while validation answers "Are we building the right product?". For example, a feature may be implemented exactly as the specification describes (verification passes) and still turn out to be inconvenient or useless for the actual user (validation fails).







The documentation that is used on software development projects can be roughly divided into two groups:



  1. Project documentation includes everything related to the project as a whole.
  2. Product documentation is a separately allocated part of the project documentation that relates directly to the application or system being developed.


Testing stages:



  1. Product analysis
  2. Working with requirements
  3. Developing a test strategy and planning quality control procedures
  4. Creation of test documentation
  5. Prototype testing
  6. Basic testing
  7. Stabilization
  8. Operation


Software development stages are the phases that software development teams go through before the program becomes available to a wide range of users.



The software product goes through the following stages:



  1. analysis of project requirements;
  2. design;
  3. implementation;
  4. product testing;
  5. deployment and support.


Requirements



Requirements are a specification (description) of what needs to be implemented.

Requirements describe what needs to be implemented without detailing the technical side of the solution.



Requirement attributes:



  1. Correctness - an accurate description of the functionality being developed.
  2. Verifiability - the requirement is worded so that an unambiguous verdict can be reached on whether everything has been done in accordance with it.
  3. Completeness - the requirement contains all the information necessary to implement the functionality.
  4. Unambiguity - the requirement allows only one interpretation.
  5. Consistency - the requirement must not contradict itself or other requirements.
  6. Priority - each requirement has a priority (a relative measure of its importance). It helps plan the order in which requirements are implemented.
  7. Atomicity - the requirement cannot be split into smaller ones without losing meaning.
  8. Modifiability - the requirement can be changed, and the changes are easy to make and track.
  9. Traceability - each requirement has a unique identifier, so it can be traced through design and test artifacts.


Defect (bug) - deviation of the actual result from the expected one.



Bug report - a document that describes a flaw in a component or system that can cause the component or system to fail to perform a required function.



Defect report attributes:



  1. Unique identifier (ID) - assigned automatically by the system when the bug report is created.
  2. Summary (short description) - the essence of the defect, briefly formulated by the rule "What? Where? When?".
  3. Description - a broader description of the essence of the defect (filled in optionally).
  4. Steps to Reproduce - the steps by which the situation that led to the defect can be reproduced. Ideally written so that anyone can follow them.
  5. Actual result - what we actually got after following the steps: what happened, where, and when.
  6. Expected result - what we expected to get according to the requirements: what should have happened, where, and when.
  7. Attachments - files that help reproduce or understand the defect: screenshots, videos, logs.
  8. Severity - the degree of impact of the defect on the system.
  9. Priority - how quickly the defect should be fixed.
  10. Status - the current state of the defect. Defect statuses can differ between bug tracking systems.
  11. Environment - the environment on which the bug was reproduced.
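

To make these attributes concrete, here is a minimal sketch of how a bug report could be modeled in code. It is only an illustration: the field set mirrors the list above, and all concrete values (the BUG-101 identifier, the login scenario, the build number) are invented.

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """Minimal model of a bug report; fields mirror the attributes above."""
    id: str                        # unique identifier, normally assigned by the tracker
    summary: str                   # "What? Where? When?" in one line
    description: str               # broader description of the defect
    steps_to_reproduce: list[str]
    actual_result: str
    expected_result: str
    severity: str                  # S1..S5
    priority: str                  # P1..P3
    status: str = "New"
    environment: str = ""
    attachments: list[str] = field(default_factory=list)

# A hypothetical report; every value here is invented for illustration.
report = BugReport(
    id="BUG-101",
    summary="Login button unresponsive on the sign-in page after entering valid credentials",
    description="Clicking 'Log in' does nothing; no request is sent.",
    steps_to_reproduce=[
        "Open the sign-in page",
        "Enter a valid email and password",
        "Click the 'Log in' button",
    ],
    actual_result="Nothing happens; the user stays on the sign-in page.",
    expected_result="The user is redirected to their dashboard.",
    severity="S2 - Critical",
    priority="P1 - High",
    environment="Chrome 120, Windows 11, build 1.4.2",
)
print(report.summary)
```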


Bug life cycle



[Screenshot: bug life cycle diagram]

A typical flow (exact statuses vary by tracker): New → Open/Assigned → Fixed → Ready for verification → Verified → Closed, with branches such as Reopened, Rejected, Duplicate, or Deferred.



Severity vs Priority



Severity shows the degree of damage the existence of a defect causes to the project. Severity is set by the tester.



Defect Severity Grading:



  • Blocking (S1 - Blocker)

    A blocking error that renders the application inoperative: further work with the system under test or its key functions becomes impossible, and testing of a significant part of the functionality is unavailable.
  • Critical (S2 - Critical)

    A critical error: key business logic works incorrectly, there is a security hole, or some part of the system is inoperative (for example, the server goes down temporarily), with no workaround (no way to achieve the result through other entry points). Fixing the problem is necessary for further testing.
  • Major (S3 - Major)

    A significant error: part of the core business logic works incorrectly, but the error is not critical or there is a workaround that lets the function under test be used through other entry points. This also covers defects with high visibility - ones that immediately catch the user's eye, even though they do not affect business logic.
  • Minor (S4 - Minor)

    A minor error that does not disturb the business logic of the tested part of the application; usually a noticeable problem in the GUI.
  • Trivial (S5 - Trivial)

    Almost always a defect in the GUI - a typo in the text, a mismatched font or shade, etc. - or a poorly reproducible error that does not concern business logic, a problem with third-party libraries or services, or a problem that has no effect on the overall quality of the product.


The priority indicates how quickly the defect should be fixed. Priority is set by the manager, team lead, or customer.



Defect priority gradation (Priority):



  • P1 (High)

    A critical error for the project. Must be fixed as soon as possible.
  • P2 (Medium)

    Not a critical error for the project, but it must be resolved.
  • P3 (Low)

    The error does not require an urgent fix. It can be corrected when the team has time.


Types of tasks in a task tracker:



  • Epic - a large task that is decomposed into smaller ones (for example, into stories).
  • Requirement - a task describing a requirement that has to be implemented.
  • Story (user story) - a small piece of functionality that can be completed within one sprint.
  • Task - a work item for a developer or another team member describing what needs to be done.
  • Sub-task - a part of a task or story that breaks it into smaller steps.
  • Bug - a defect: a deviation of the actual result from the expected one.




Environments:

  • Development environment (Development Env) - the environment where developers write code, debug it, and check their changes.
  • Test environment (Test Env) - the environment where testers verify the product (functional testing, smoke tests, etc.).
  • Integration environment (Integration Env) - the environment where the application is checked together with the other systems and services it integrates with.
  • Pre-production environment (Preprod Env) - a copy of the production environment kept as close to it as possible; used for final checks before release.
  • Production environment (Production Env) - the environment where the product actually runs and real users work with it.
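

Test tooling usually needs to know which environment it is pointed at. Below is a minimal sketch of per-environment configuration; the URLs, database names, and the APP_ENV variable are all assumptions invented for illustration.

```python
import os

# Hypothetical settings per environment; all values are invented for illustration.
ENVIRONMENTS = {
    "dev":         {"base_url": "https://dev.example.com",     "db": "dev_db"},
    "test":        {"base_url": "https://test.example.com",    "db": "test_db"},
    "integration": {"base_url": "https://int.example.com",     "db": "int_db"},
    "preprod":     {"base_url": "https://preprod.example.com", "db": "preprod_db"},
    "prod":        {"base_url": "https://example.com",         "db": "prod_db"},
}

# Pick the environment from a variable so the same test suite can run anywhere.
current = ENVIRONMENTS[os.environ.get("APP_ENV", "test")]
print(current["base_url"])
```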




Software release stages:

  • Pre-Alpha: an early version of the product that is still under active development; it may exist only as a prototype.
  • Alpha: a build for internal testing, usually checked by the company's own testers and employees.
  • Beta: a build offered to a limited group of external users so they can try the product and report feedback.
  • Release Candidate (RC): a version that is a candidate for release: all planned functionality is implemented and no critical defects remain.
  • Release: the final version distributed to end users.




A type of testing is a set of activities aimed at testing the specified characteristics of a system or its part, based on specific goals.






  1. Classification by whether the program code is executed:

    • Static testing - testing carried out without running the code, to check almost any development artifact: program code components, requirements, system and functional specifications, design documents, and the architecture of software systems and their components.
    • Dynamic testing - testing carried out on a running system; it cannot be performed without executing the program code.


  2. Classification by code access and architecture:

    • White box testing - the internal structure, design, and implementation of the system are known to the tester.
    • Gray box testing - a combination of the White Box and Black Box approaches: the tester knows the internal structure of the program only partially.
    • Black box testing - testing without knowledge of the internal structure (the device) of the system, relying only on its external interfaces. Also called behavioral testing.


  3. Classification by the level of testing:

    • Unit (component) testing - checks individual modules (units) of the application in isolation; usually performed by developers.
    • Integration testing - checks the interaction between several modules or components.
    • System testing - checks the application as a whole: whether the integrated system, in its environment, meets the requirements.
    • Acceptance testing - formal testing that checks whether the system meets the needs and requirements of the user and customer, in order to decide whether to accept it.


  4. Classification by the degree of automation:

    • Manual testing - testing is performed by a person who executes test scenarios by hand, without automation tools.
    • Automated testing - test scenarios are executed by software tools (test scripts), and the results are checked automatically.

  5. Classification by the level of functional testing:

    • Smoke test - checks the most important, critical functionality in order to make sure the build is stable enough for further testing.
    • Critical path test - checks the functionality that typical users rely on in their everyday activities.
    • Extended test - checks all of the functionality declared in the requirements, including the least important features.
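

One common way to separate these depths in an automated suite is with pytest markers, so that a quick smoke subset can be run on every build. A minimal sketch follows; the marker names and the placeholder tests are assumptions, and custom markers would normally be registered in pytest.ini to avoid warnings.

```python
import pytest

@pytest.mark.smoke
def test_application_starts():
    # Smoke: the most basic "is the patient alive?" check.
    assert 2 + 2 == 4  # stands in for "the app responds at all"

@pytest.mark.critical_path
def test_user_can_log_in():
    # Critical path: functionality typical users rely on daily.
    ...

@pytest.mark.extended
def test_rarely_used_export_format():
    # Extended: everything declared in the requirements, even minor features.
    ...
```

With the markers registered, `pytest -m smoke` runs only the smoke subset, while a plain `pytest` runs everything.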


  6. Classification by the positivity of the scenario:

    • Positive testing - uses only valid data and the scenarios the application expects; it checks that the system does what it should in normal use.
    • Negative testing - uses invalid data and unexpected actions; it checks that the system handles incorrect input gracefully and does not end up in an unintended state.
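

To illustrate the difference, here is a small pytest sketch around a hypothetical `parse_age` function (both the function and its rules are invented): the positive test feeds valid data, the negative tests feed invalid data and expect a clean rejection.

```python
import pytest

def parse_age(value: str) -> int:
    """Hypothetical function under test: parses a non-negative human age."""
    age = int(value)          # raises ValueError for non-numeric input
    if age < 0 or age > 150:
        raise ValueError("age out of range")
    return age

def test_parse_age_positive():
    # Positive test: valid data, the expected everyday scenario.
    assert parse_age("42") == 42

@pytest.mark.parametrize("bad", ["", "abc", "-5", "900"])
def test_parse_age_negative(bad):
    # Negative test: invalid data must be rejected, not silently accepted.
    with pytest.raises(ValueError):
        parse_age(bad)
```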


  7. Classification by the purpose of testing:

    • Functional testing - checks that the implemented functionality meets the functional requirements: it verifies what the system does.
    • Non-functional testing - checks how the system works rather than what it does. The main non-functional types are listed below.



      1. Performance testing - determines how fast the system or its part works under a certain load.
      2. Load testing - checks the behavior of the system under the expected (normal) load.
      3. Scalability testing - checks that the system can scale together with growth in users, data, or transactions without unacceptable degradation.
      4. Volume testing - checks the behavior of the system when it processes large volumes of data.
      5. Stress testing - checks the system under loads beyond the expected maximum (stress conditions) and its ability to recover afterwards.
      6. Installation testing - checks that the software installs, updates, and uninstalls correctly.
      7. GUI/UI testing - checks the graphical user interface against the requirements.
      8. Usability testing - checks how convenient, understandable, and attractive the application is for its users.
      9. Localization testing - checks the adaptation of the software to a particular language and region: translations, formats, and so on.
      10. Security testing - checks how well the system is protected from attacks, unauthorized access to data, hacking, viruses, and other threats.
      11. Reliability testing - checks that the system works stably over a long period of time under a given load.
      12. Regression testing - re-runs previously passed tests after changes to the code, to make sure the changes have not broken existing functionality.
      13. Re-testing / confirmation testing - re-runs the tests that previously found a defect, in order to confirm that the defect has been fixed.
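

As a toy illustration of the performance idea only: one can time an operation and assert that it fits a budget. Real performance and load testing is done with dedicated tools (for example JMeter or Locust), and the 200 ms budget and the stand-in operation below are invented.

```python
import time

def operation_under_test():
    # Stand-in for real work, e.g. an API call or a database query.
    sum(range(100_000))

start = time.perf_counter()
operation_under_test()
elapsed = time.perf_counter() - start

# Invented budget: fail if the operation takes longer than 200 ms.
assert elapsed < 0.2, f"too slow: {elapsed:.3f}s"
print(f"elapsed: {elapsed * 1000:.1f} ms")
```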




Test design is the stage of software testing at which test cases are designed and created.



Test Design Techniques



The author of A Practitioner's Guide to Software Test Design, Lee Copeland, highlights the following test design techniques:



  1. Equivalence partitioning - a black box technique in which the functionality (often a range of possible input values) is divided into groups of values that are equivalent in terms of their effect on the system, and a representative of each group is tested.
  2. Boundary value testing - a technique that tests the product's behavior at the extreme (boundary) values of the input data. (A sketch of these first two techniques follows this list.)
  3. Pairwise testing - a technique in which test cases are built to cover every possible pair of input parameter values, since most defects are triggered by the interaction of at most two factors.
  4. State-transition testing - a technique based on a model of the states of the system and the transitions between them.
  5. Decision table testing - a technique in which test cases are derived from a table that maps combinations of conditions to the resulting actions of the system.
  6. Domain analysis testing - a technique based on partitioning the range of input values into subranges (domains) and picking test values from each; it builds on equivalence partitioning and boundary value analysis and allows several variables to be covered together.
  7. Use case testing - a technique based on use cases; a use case describes a scenario of interaction between two or more participants (usually a user and the system).
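

To ground the first two techniques, here is a hedged pytest sketch for a hypothetical discount rule (the rule and all numbers are invented): assume orders below 1000 get no discount, orders from 1000 to 4999 get 5%, and orders of 5000 and above get 10%. Equivalence partitioning picks one representative per class; boundary value analysis adds the values at the edges of each class.

```python
import pytest

def discount(total: int) -> float:
    """Hypothetical function under test (the rule is invented for illustration)."""
    if total < 0:
        raise ValueError("total cannot be negative")
    if total < 1000:
        return 0.0
    if total < 5000:
        return 0.05
    return 0.10

# Equivalence partitioning: one representative per class of inputs.
@pytest.mark.parametrize("total,expected", [(500, 0.0), (2500, 0.05), (7000, 0.10)])
def test_discount_classes(total, expected):
    assert discount(total) == expected

# Boundary values: test right at and around each class boundary.
@pytest.mark.parametrize("total,expected", [
    (0, 0.0), (999, 0.0), (1000, 0.05),
    (4999, 0.05), (5000, 0.10),
])
def test_discount_boundaries(total, expected):
    assert discount(total) == expected

def test_discount_rejects_negative():
    with pytest.raises(ValueError):
        discount(-1)
```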


Testing methods






White box testing is a software testing method that assumes that the internal structure / device / implementation of the system is known to the tester.



According to ISTQB, white box testing is:



  • testing based on an analysis of the internal structure of a component or system;
  • test design based on the white box technique: a procedure for writing or selecting test cases based on an analysis of the internal structure of a system or component.

Why "white box"? Because to the tester the program under test is a transparent box whose contents are fully visible.


Gray box testing is a software testing method that involves a combination of White Box and Black Box approaches. That is, we only partially know the internal structure of the program.



Black box testing - also known as specification-based testing or behavior testing - is a testing technique that relies solely on the external interfaces of the system under test.



According to ISTQB, black box testing is:



  • testing, both functional and non-functional, not involving knowledge of the internal structure of a component or system;
  • test design based on the black box technique is a procedure for writing or selecting test cases based on the analysis of a functional or non-functional specification of a component or system without knowing its internal structure.
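

To sketch the difference in code: a black-box test uses only the specified input/output contract, while a white-box test is designed from the internal branches of the implementation. The `normalize_phone` function and its two branches are invented for illustration.

```python
import pytest

def normalize_phone(raw: str) -> str:
    """Hypothetical function: normalize a 10-digit phone number (invented)."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) == 11 and digits.startswith("1"):  # branch A: strip country code
        digits = digits[1:]
    if len(digits) != 10:                             # branch B: reject anything else
        raise ValueError("expected 10 digits")
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

def test_black_box():
    # Black box: only the specified input/output contract is exercised.
    assert normalize_phone("415-555-0100") == "(415) 555-0100"

def test_white_box_covers_both_branches():
    # White box: cases chosen to execute each internal branch (A and B).
    assert normalize_phone("1 415 555 0100") == "(415) 555-0100"  # branch A
    with pytest.raises(ValueError):
        normalize_phone("12345")                                  # branch B
```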


Test Documentation



A test plan is a document that describes the entire scope of testing work: from a description of the object of testing, the strategy, the schedule, and the criteria for starting and ending testing, to the equipment, special knowledge, and risk assessment required in the process.



The test plan should answer the following questions:



  • What needs to be tested?
  • How will testing be done?
  • When will testing take place?
  • What are the criteria for starting testing?
  • What are the criteria for ending testing?


The main points of the test plan:



  1. Test plan identifier;
  2. Introduction;
  3. Test items;
  4. Features to be tested;
  5. Features not to be tested;
  6. Approach;
  7. Item pass/fail criteria;
  8. Suspension criteria and resumption requirements;
  9. Test deliverables;
  10. Testing tasks;
  11. Environmental needs;
  12. Responsibilities;
  13. Staffing and training needs;
  14. Schedule;
  15. Risks and contingencies;
  16. Approvals.


A checklist is a document that describes what should be tested. A checklist can have very different levels of detail.



Most often, a checklist contains only the actions to perform, without expected results; it is a less formalized document than a test case.



A test case is an artifact that describes a set of steps, specific conditions and parameters required to test the implementation of a function under test or a part of it.



Test case attributes:



  • Preconditions - a list of actions that bring the system to a state suitable for the main check, or a list of conditions whose fulfillment indicates that the system is in such a state.
  • Steps - a list of actions that move the system from one state to another, so as to obtain a result against which the implementation can be judged to meet the requirements.
  • Expected result - what we should actually get after the steps, according to the requirements.
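

The attributes of a test case map naturally onto the structure of an automated test. A minimal self-contained sketch follows; the login flow, the `log_in` helper, and the stored credentials are all invented.

```python
from dataclasses import dataclass

# --- Hypothetical system under test (invented for illustration) ---
REGISTERED = {"user@example.com": "correct-horse"}

@dataclass
class Session:
    is_authenticated: bool

def log_in(email: str, password: str) -> Session:
    return Session(is_authenticated=REGISTERED.get(email) == password)

# --- The test case: its attributes become the test's structure ---
def test_successful_login():
    # Precondition: a registered user exists (set up above).
    email, password = "user@example.com", "correct-horse"

    # Steps: perform the actions under test.
    session = log_in(email, password)

    # Expected result: the user is authenticated.
    assert session.is_authenticated
```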


Summary



Try to understand the definitions, not memorize them. If you want to learn more about testing, you can read the QA Bible. And if you have a question, you can always ask us in the @qa_chillout telegram channel.


