Software Testing Fundamentals

Software needs to be tested before going live to verify that it meets usage requirements while minimizing bugs and defects, thereby ensuring software quality.

Classification#

By Development Stage#

  1. Unit Testing: Testing whether program code modules (such as functions) work correctly, usually performed by the developers themselves.
  2. Integration Testing: Also known as joint debugging. Testing the modules that have passed unit testing together to verify whether their interfaces work correctly (ensuring no issues when working together).
  3. System Testing: Testing the entire system as a whole to verify that both its functional and non-functional requirements are met (including compatibility testing, performance testing, security testing, etc.). It is a comprehensive validation, usually executed based on internal documentation.
  4. Acceptance Testing: Testing conducted by actual users or clients to determine whether the system meets business requirements. It is formally divided into internal testing (Alpha testing) and public testing (Beta testing), and is typically conducted as a public beta so that as many project defects as possible are discovered before release, better meeting user expectations and reducing risks after going live.

Unit testing is an internal test of a module; integration testing is a test between multiple modules.
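
To make the distinction concrete, here is a minimal sketch of a unit test in Python (assuming pytest as the test runner); the add function and its tests are hypothetical and not part of any specific project.

```python
# test_math_utils.py -- hypothetical example, run with `pytest`

def add(a: int, b: int) -> int:
    """The module under test: a single, self-contained function."""
    return a + b

def test_add_returns_sum():
    # A unit test exercises one module (here, one function) in isolation.
    assert add(2, 3) == 5

def test_add_handles_negative_numbers():
    assert add(-1, 1) == 0
```

Integration testing would then combine this module with the others that call it and check that their interfaces work correctly together.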

By Code Visibility#

  1. Black Box Testing: Source code is not visible. UI functionality is visible. It does not concern implementation details, only input and output. Mainly verifies the system's functionality and performance from the user's perspective (e.g., functional, compatibility, and user experience testing).
  2. White Box Testing: Testers fully understand the internal structure and implementation details, directly testing the source code. Mainly performed by developers. Generally used in unit testing.
  3. Gray Box Testing: A compromise between black box testing and white box testing. Testing starts from the user's perspective but also has some understanding of the internal implementation logic. Mainly used in integration testing for interface testing, etc.

Evaluation: Software Quality Model#

Software that meets the standard on all eight quality dimensions is considered excellent software.

  1. Functionality: Mainly considers whether the functions provided by the software meet user needs. Includes: the number and completeness of functions, whether functions are correctly implemented (e.g., correctness of login functionality and error handling), and error handling situations (e.g., prompt information when entering an incorrect password).
  2. Performance: Mainly considers the software's performance under high load. Includes: how many requests the server can handle per second, whether the existing hardware configuration meets requirements (e.g., CPU, memory), and optimizations to meet expected user numbers (e.g., 200,000 online users).
  3. Compatibility: Mainly considers the software's performance in different software and hardware environments. Includes browser compatibility (e.g., the five major browsers: Chrome, Firefox, Safari, IE, Edge), operating system compatibility (e.g., Windows 7/8/10/11, macOS, Linux), mobile device compatibility (e.g., resolution, brand, operating system version, network type), and compatibility with other applications (e.g., Alipay, WeChat).
  4. Usability: Mainly considers whether the software is easy to use. Includes: simplicity of the interface, user-friendliness (e.g., smooth operation, clear text), and aesthetics (e.g., interface design).
  5. Reliability: Mainly considers the software's stability and fault-free operation under various conditions. Includes checking for unresponsiveness (e.g., no response during login), lag (e.g., lag during gameplay), and crashes.
  6. Security: Mainly considers the software's security during data transmission and storage. Includes data transmission encryption (e.g., password transmission during login) and data storage encryption (e.g., sensitive information in the database).
  7. Maintainability: Mainly considers the convenience of software maintenance and updates in the later stages. Includes code cleanliness and comments, code modularity and separation, and labeling and organization of networks and hardware.
  8. Portability: Mainly considers the software's ability to migrate across different platforms and environments. Includes the convenience of data migration (e.g., data migration during server upgrades) and installation and operation of software on different hardware.

Functionality, performance, compatibility, usability, and security are the top priorities and are usually the required testing content.

Testing Process#

  1. Requirement Review: At the start of the project, the product, development, and testing teams jointly review the requirement documents to ensure a consistent understanding of the requirements and identify core functions and important needs.
  2. Test Planning: Based on the results of the requirement review, a detailed test plan is written. It covers which functions need to be tested, who will perform the tests, how the tests will be conducted (which testing strategies to adopt, such as compatibility and performance testing), and the testing schedule.
  3. Test Case Design: Based on the test plan, specific test cases are designed. Each test case includes test steps, input data, and expected results. Test cases need to cover all requirements and functional points to ensure comprehensiveness and repeatability of testing.
  4. Test Case Execution: Actual testing is conducted according to the designed test cases, and actual results are recorded to verify whether the software's functionality and performance meet the requirements. Defects are also discovered and recorded.
  5. Defect Management: Manage and track discovered defects, returning them to developers until they are fixed. Ensure that all discovered defects are handled and verified in a timely manner.
  6. Test Report: Summarize the testing process and results, generating a test report. The report should reflect test coverage, defect statistics and analysis, testing conclusions, and recommendations to provide an overall testing situation for the project. It serves as a basis for decision-makers to evaluate whether the software can be released.

Writing Test Cases#

Purpose of writing test cases:

  • Prevent missed tests: Ensure all requirement points are covered to avoid omissions.
  • Standardization: Provide clear test steps and expected results to ensure consistency and repeatability in the testing process.

Format#

  1. Case ID
    Format: Project Abbreviation_Module Abbreviation_Number (e.g., ProjectName_ModuleName_001)
    Purpose: Uniquely identify each test case for easy management and tracking.
  2. Title
    Format: Expected Result + Function Description
    Purpose: Concisely describe the testing objective of the case for quick understanding by reviewers.
  3. Project/Module
    Content: Indicate which project or module the case belongs to.
    Purpose: Clarify the testing scope for easier classification and management.
  4. Priority
    Level: P0 (highest) to P4 (lowest)
    Purpose: Determine the execution order of test cases; core functions (most frequently used by users) are usually set as P0.
  5. Preconditions
    Content: Conditions that must be met before executing the case.
    Purpose: Ensure the testing environment and state are correct to avoid test failures due to environmental issues.
  6. Steps
    Content: Detailed test steps, including specific operations for each step.
    Purpose: Provide clear operational guidelines to ensure testers can follow the same steps during testing.
  7. Test Data
    Content: Specific data used during the testing process.
    Purpose: Ensure consistency and accuracy of test data for easier problem reproduction.
  8. Expected Result
    Content: The result that should be achieved after executing the test.
    Purpose: Compare actual results with expected results to determine whether the test passes.
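
Purely as an illustration, a test case written in this format could be captured as structured data; all field values below are hypothetical.

```python
# A hypothetical login test case recorded in the format described above.
test_case = {
    "case_id": "Mall_Login_001",   # Project Abbreviation_Module Abbreviation_Number
    "title": "Login succeeds with a valid username and password",  # Expected Result + Function Description
    "module": "Mall / Login",
    "priority": "P0",              # core function, executed first
    "preconditions": "The account is registered and the user is logged out",
    "steps": [
        "Open the login page",
        "Enter the username",
        "Enter the password",
        "Click the login button",
    ],
    "test_data": {"username": "admin", "password": "123456"},
    "expected_result": "The user is logged in and redirected to the home page",
}
```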

Test Case Design#

Equivalence Class Partitioning#

Equivalence class partitioning is a black box testing technique that divides input data into equivalence classes and selects representative data from each class for testing. This reduces the number of cases and improves testing efficiency.

Equivalence Class Classification

  • Valid Equivalence Class: A set of data that meets the requirements.
  • Invalid Equivalence Class: A set of data that does not meet the requirements.

How to Partition Equivalence Classes

  1. Clarify Requirements: Understand the requirements and determine the basis for partitioning (e.g., gender, age, etc.).
  2. Partition Equivalence Classes: Divide data into valid and invalid equivalence classes based on requirements.
  3. Extract Data: Select representative data from each equivalence class.
  4. Write Test Cases: Write test cases based on the extracted data.

Example 1
Assume there is a requirement to validate the legality of a QQ account, which requires the QQ account to be a natural number of 6 to 10 digits. Here’s how to use equivalence class partitioning to design test cases:

  • Valid Equivalence Class:
    8-digit natural number (e.g., 12345678) [Length validation]
  • Invalid Equivalence Class:
    Natural numbers less than 6 digits (e.g., 123) [Length validation]
    Natural numbers greater than 10 digits (e.g., 12345678901) [Length validation]
    8-digit non-natural numbers (e.g., 1234567a) [Type validation]
    QQ number is empty [Type validation]
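
A minimal sketch of how these equivalence classes could become executable test cases, assuming pytest and a hypothetical is_valid_qq validator:

```python
import pytest

def is_valid_qq(qq: str) -> bool:
    """Hypothetical validator: a QQ account is a natural number of 6 to 10 digits."""
    return qq.isdigit() and 6 <= len(qq) <= 10

# One representative value is taken from each equivalence class.
@pytest.mark.parametrize("qq, expected", [
    ("12345678", True),      # valid: 8-digit natural number   [length]
    ("123", False),          # invalid: fewer than 6 digits    [length]
    ("12345678901", False),  # invalid: more than 10 digits    [length]
    ("1234567a", False),     # invalid: contains a non-digit   [type]
    ("", False),             # invalid: empty input            [type]
])
def test_qq_equivalence_classes(qq, expected):
    assert is_valid_qq(qq) == expected
```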

Boundary Value Analysis#

Boundary value analysis supplements equivalence class partitioning by testing data at and around the boundaries of an input range, where defects are most likely to occur. It needs to consider:

  • On Point: Data that is exactly equal to the boundary value.
  • Off Point: Data that is closest to the boundary value, including just above and just below the boundary value.
  • In Point: Data within the interval, usually taking the middle value.

Example 2
Assume there is a requirement to determine whether a number is less than -99 or greater than 99, and if so, give an error prompt.

  • On Point: -99, 99
  • Off Point: -100, -98, 100, 98
  • In Point: 0 (or other values within the interval, such as 50)
    Then design test cases accordingly.
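
For example, under the assumption of a hypothetical is_within_range check that accepts values from -99 to 99 and reports an error otherwise, the on, off, and in points map directly to test data (again using pytest):

```python
import pytest

def is_within_range(n: int) -> bool:
    """Hypothetical check: numbers below -99 or above 99 trigger an error prompt."""
    return -99 <= n <= 99

@pytest.mark.parametrize("n, expected", [
    (-99, True), (99, True),      # on points: exactly on the boundaries
    (-100, False), (-98, True),   # off points around the lower boundary
    (100, False), (98, True),     # off points around the upper boundary
    (0, True),                    # in point: a value inside the interval
])
def test_boundary_values(n, expected):
    assert is_within_range(n) == expected
```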

Decision Table Method#

The decision table is a black box testing technique that lists multiple conditions and their combinations along with corresponding action results in tabular form, helping to design test cases. [Solves condition dependency issues]

Key Terms

  • Condition Stubs: Lists all conditions.
  • Action Stubs: Lists all possible actions or results.
  • Condition Entries: Specific values for each condition (usually "Yes" or "No").
  • Action Entries: Action results determined by the combination of condition entries.

Steps

  1. Clarify Requirements: Understand the conditions and results in the requirements.
  2. Draw the Decision Table:
    • Fill in condition stubs and action stubs.
    • Fill in condition entries based on the condition stubs.
    • Fill in action entries based on the combinations of condition entries.
  3. Write Test Cases: Generate test cases based on the decision table.

Example
Assume there is a requirement involving two conditions: whether the user is in arrears and whether the user's phone is turned off. The specific rules are as follows:

If the user is in arrears and the phone is turned off, then calling is not allowed.
If the user is in arrears but the phone is not turned off, then calling is allowed.
If the user is not in arrears and the phone is turned off, then calling is allowed.
If the user is not in arrears and the phone is not turned off, then calling is allowed.

| Is the user in arrears (Condition Stub) | Is the phone turned off (Condition Stub) | Is calling allowed (Action Stub) |
| --- | --- | --- |
| Yes | Yes | Not allowed |
| Yes | No | Allowed |
| No | Yes | Allowed |
| No | No | Allowed |

Then design cases based on the decision table.
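
As a sketch, the four rules of this decision table could drive test cases through a hypothetical call_allowed function (one test per rule, using pytest):

```python
import pytest

def call_allowed(in_arrears: bool, phone_off: bool) -> bool:
    """Hypothetical rule: calling is blocked only when the user is in arrears AND the phone is off."""
    return not (in_arrears and phone_off)

# One test case per rule (column) of the decision table.
@pytest.mark.parametrize("in_arrears, phone_off, expected", [
    (True,  True,  False),   # in arrears + phone off -> not allowed
    (True,  False, True),    # in arrears + phone on  -> allowed
    (False, True,  True),    # no arrears + phone off -> allowed
    (False, False, True),    # no arrears + phone on  -> allowed
])
def test_decision_table_rules(in_arrears, phone_off, expected):
    assert call_allowed(in_arrears, phone_off) == expected
```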

Scenario Method#

Design test cases by simulating the actual process of users using the software. This method is usually based on business process diagrams or use cases to ensure that the entire business process can run smoothly.

Purpose

  • Cover business processes: Ensure all key business processes can be executed correctly.
  • Improve user experience: Ensure the software meets actual usage needs from the user's perspective.

Steps

  1. Clarify Requirements: Understand the specific requirements of the business process.
  2. Draw a Flowchart: Include start, end, decision, and processing steps.
  3. Write Test Cases: Generate test cases based on the flowchart to ensure coverage of all business paths.

Example
Assume there is a simple login system where users need to enter a username and password for verification. The specific rules are as follows:

If the username is admin and the password is 123456, then login is successful.
Otherwise, login fails.

First, draw the flowchart. (Omitted)

Then write test cases.
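
A minimal sketch of those test cases in code, with a hypothetical login function implementing the rules above; each path through the flowchart (successful login, failed login) becomes at least one case:

```python
import pytest

def login(username: str, password: str) -> str:
    """Hypothetical implementation of the login rules above."""
    if username == "admin" and password == "123456":
        return "login successful"
    return "login failed"

# Each business path through the flowchart becomes a scenario test.
@pytest.mark.parametrize("username, password, expected", [
    ("admin", "123456", "login successful"),  # happy path
    ("admin", "wrong",  "login failed"),      # wrong password
    ("guest", "123456", "login failed"),      # wrong username
])
def test_login_scenarios(username, password, expected):
    assert login(username, password) == expected
```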

Error Guessing Method#

The error guessing method is a way to design test cases based on experience and intuition. It relies on the tester's experience and understanding of similar projects to guess potential issues in the system.

Purpose

  • Quickly discover defects: Quickly identify potential issues through experience.
  • Supplement other methods: Serve as a supplement to other testing methods to ensure broader test coverage.

Main Idea

  • List potential issues: Based on experience, list potential issues that may arise.
  • Analyze causes of issues: Analyze each potential issue to identify possible causes.
  • Discover defects: Design test cases based on the analysis results to verify and discover defects.

Usage Scenarios

  • Tight deadlines, heavy workloads: When project timelines are tight and tasks are heavy, comprehensive detailed testing cannot be performed.
  • Later stages of the project: When all planned test cases have been executed and known defects have been fixed, but there is still some time before going live, the error guessing method can be used for final retesting.

Example
Assume an e-commerce project is about to go live, all planned test cases have been executed, and known defects have been fixed. With only a few hours left before going live, the error guessing method can be used for final checks.

Testing Steps

  1. Review experience: Recall issues encountered in similar projects in the past.
  2. List potential issues:
    a) Possible exceptions during user login (e.g., the limit on wrong password attempts, verification code expiration, etc.).
    b) Boundary conditions in the shopping cart function (e.g., response after adding a large number of products).
    c) Network interruptions or payment failures during the payment process.
  3. Design test cases: Design test cases based on the above list.
  4. Execute tests: Test according to the designed test cases to verify whether these issues exist.
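
As an illustration, the guessed issues above could be turned into a quick checklist of test stubs to be fleshed out against the real system; the names below are hypothetical placeholders, not part of any actual project.

```python
import pytest

def test_login_locks_after_repeated_wrong_passwords():
    # Guess: the limit on wrong password attempts may not be enforced.
    pytest.skip("placeholder: call the real login API with repeated bad passwords")

def test_cart_handles_large_number_of_items():
    # Guess: adding a very large number of products may slow down or break the cart.
    pytest.skip("placeholder: add many items to the cart and check the response")

def test_payment_survives_network_interruption():
    # Guess: a dropped connection during payment may leave the order in a bad state.
    pytest.skip("placeholder: simulate a network failure mid-payment")
```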

Software Defect Management#

Software defects refer to any problems that arise during the use of the software: not just code errors, but also functional deficiencies, performance issues, usability problems, and so on.

Causes

  1. Requirement Stage: Unclear or ambiguous requirement descriptions.
  2. Design Stage: Design errors or unreasonable designs.
  3. Coding Stage: Errors during code writing (e.g., logical errors, syntax errors, or other programming errors).
  4. Runtime Stage: For example, compatibility issues or other environmental problems (e.g., the program crashes on a specific system version).

Evaluation Criteria

  • Failure to implement functions explicitly required in the requirement specification: If the software does not implement functions explicitly required by the client or product documentation, it is considered a defect. For example, if the contract specifies ten functions but only eight are delivered, even if those eight functions are flawless, it is still a defect.
  • Occurrence of errors that should not appear as specified in the requirement specification: This includes logical errors in functionality, such as calculation errors or unexpected behavior. For example, if it is required that 1+1=3, but the software implements 1+1=2; or if it is required to log in only with an ID number, but the software allows login with a phone number.
  • Exceeding the scope specified in the requirement specification: Even additional functions, if they are unsolicited and may have negative impacts, will be considered defects. For example, if e-commerce software provides an unsolicited function allowing users to return goods at will, when there should actually be a strict approval process.
  • Failure to implement requirements that are not explicitly stated in the requirement specification but should be implemented: Refers to implicit functions, which, although not explicitly stated in the requirement document, should be implemented based on common sense and user experience. For example, some non-functional requirements in expected results, such as user-friendliness and interaction experience.
  • Software that is difficult to understand, not easy to use, or runs slowly: From the perspective of testers' expertise, if the software is difficult to understand, not easy to use, or runs slowly, it is also considered a defect. Testers need to consider the software's usability and user experience from the user's perspective.

Defect Management Process

  1. Discover defects: Identify issues that do not meet the above standards during testing.
  2. Record defects: Use tools (such as JIRA, Bugzilla, etc.) or Excel to record defects, including defect descriptions, reproduction steps, severity, etc.
  3. Assign defects: Assign defects to the corresponding developers for fixing.
  4. Fix defects: Developers fix defects and submit the repaired version.
  5. Verify defects: Testers verify the repaired version to confirm whether the defects have been resolved.
  6. Close defects: If the defects are resolved, close them; otherwise, reassign and continue fixing.

Defect Lifecycle

  1. Injected defects: Defects introduced during the requirement, design, or coding stages due to various reasons (e.g., unclear requirements, design errors, coding errors).
  2. Discovered defects: Testers discover and record defects through test case execution.
  3. Classification and prioritization: Classify and prioritize defects based on their severity and impact.
  4. Assignment and fixing: Assign defects to corresponding developers for fixing.
  5. Verify defects: Testers verify the repaired defects to confirm whether they have been resolved.
  6. Close defects: If defects are resolved, close them; otherwise, reassign and continue fixing.
  7. Regression testing: After fixing defects, conduct regression testing to ensure that the newly fixed code has not introduced new defects.

Tools
Defect management tools: JIRA, Bugzilla, Mantis, TestRail, etc.
Spreadsheets: Excel can also be used for simple defect tracking.

Any issues at any stage will affect the overall quality of the project (the barrel effect), so each stage needs to be strictly controlled.

Defect Reporting and Submission#

How to Describe Software Defects

  1. Defect Title: Clearly and concisely describe the core issue of the defect. For example: "Password input box does not display asterisks when logging in."
  2. Preconditions: Describe the conditions that must be met before reproducing the defect. For example: "User has registered an account and is in a logged-out state."
  3. Reproduction Steps: List the specific steps to reproduce the defect in detail. For example: "1. Open the login page; 2. Enter username; 3. Enter password; 4. Click the login button."
  4. Expected Result: Describe the result that should be achieved after executing the above steps. For example: "User successfully logs into the system."
  5. Actual Result: Describe the actual result obtained after executing the above steps. For example: "User cannot log in, and the page prompts 'Password format error.'"
  6. Attachments (optional): Provide additional information that helps understand and reproduce the defect, such as screenshots, log files, etc. For example: Attach a screenshot of the login page and the error log.

Defect Submission
Defects are generally submitted through a defect management tool, and the report usually also needs to include the following information.

  • Defect Number: Each defect has a unique number for easy tracking and management.
  • Severity: Usually divided into S1 (very severe), S2 (severe), S3 (general), S4 (minor), etc. For example: Issues in core functional modules are usually S1 or S2, while issues in secondary functional modules may be S3 or S4.
  • Priority: Usually divided into P1 (urgent), P2 (high), P3 (medium), P4 (low), etc. For example: P1 level defects require resolution within 24 hours, while P2 level defects need to be resolved before release.
  • Defect Type: Includes code errors, compatibility issues, design defects, performance issues, etc. For example: Selecting "code error" indicates a programming issue, while "compatibility issue" indicates problems in different environments.
  • Defect Status: Common statuses include New, Open, Closed, Deferred, etc. For example: New indicates a defect just submitted, Open indicates a defect currently being processed, and Closed indicates a defect that has been fixed.

Defect Types

  • Functional Errors: The software's functionality does not work as expected. For example: A button does not respond when clicked.
  • UI Errors: Issues with the user interface, such as incorrect layout or incomplete image display. For example: The logo on the login page is not fully displayed.
  • Compatibility Issues: The software behaves inconsistently across different operating systems or browsers. For example: Works fine in Chrome but not in Firefox.
  • Data Issues: Problems related to the database, such as data loss or inconsistency. For example: User data is not correctly stored in the database.
  • Usability Issues: User experience problems with the software, such as complex operations or non-intuitive navigation. For example: Users find it difficult to locate the entry point for a certain function.
  • Suggestive Issues: Suggestions for improving software functionality or interface. For example: Suggest adding a night mode.
  • Architectural Design Defects: Issues in software architecture design that may lead to overall performance or stability problems. For example: Improper configuration of the database connection pool leading to performance bottlenecks.

Defect Management Tools
DevOps tools such as ZenTao can be used for this purpose.

Such a tool integrates product management, project management, and quality management, clearly defining the roles and responsibilities of product, development, testing, and project management.

The term DevOps comes from the combination of Development and Operations, emphasizing the communication and collaboration between software developers and operations personnel. It aims to make software building, testing, and releasing faster, more frequent, and more reliable through automated processes.
