“A great tester is not the one who finds the most bugs, but the one who ensures the right bugs get fixed.” – Cem Kaner.

Testing isn’t simply finding bugs; it is understanding how things work, thinking like a user, and making sure everything runs smoothly. It’s about digging deep, challenging assumptions, and catching issues before they become real problems. A great tester doesn’t just execute steps: they’re curious, observant, and one step ahead, whether verifying edge cases, ensuring a smooth user experience, or simply checking that code behaves as expected. A whole industry is focused on keeping software reliable, intuitive, and, most importantly, trustworthy.

Whether you’re just getting started with testing or you’re a seasoned QA guru, this cheat sheet will help guide you through all aspects of manual testing. So let’s get started.

What is Manual Testing?

Manual testing is a process where test cases are executed manually, without any automation tools. It requires domain knowledge, analytical skills, and sound test execution techniques. In manual testing, a human tester plays the role of an end user and interacts with the software to check whether it functions as expected.

Key Aspects of Manual Testing

  • No Automation Involved: Testers execute test cases without using automation tools.
  • Human Judgment: Testers use intuition, domain knowledge, and real-world scenarios.
  • Defect Identification: The primary goal is to find bugs, inconsistencies, and missing features.
  • User Perspective Testing: Makes sure that the software meets customer expectations.

What Does a Tester Do?

A manual tester is responsible for maintaining software quality through various activities.

Software Development Life Cycle

The Software Development Life Cycle (SDLC) defines the process of planning, developing, testing, and deploying software.

Phases of SDLC

  • Requirement Gathering: Understanding business and user needs.
  • Planning: Defining scope, resources, and risk management.
  • Design: Creating system architecture and UI/UX design.
  • Development: Writing code for the application.
  • Testing: Validating the application manually and through automation.
  • Maintenance: Updating and improving the software post-release.

Testing is crucial in SDLC to maintain quality, reliability, and performance.

Comparing SDLC Methodologies

There are various Software Development Life Cycle (SDLC) models, each structuring the development process differently. The most common ones are compared below.

Waterfall Model
A structured, sequential approach in which each phase must be finished before the next begins.
Pros:
  • Easy to manage and document.
  • Well-organized, with clear requirements.
  • Best for small, well-defined projects.
Cons:
  • Inflexible; changes are difficult to accommodate.
  • Defect detection becomes increasingly expensive because testing is performed late in the development lifecycle.
  • Not ideal for complex or evolving projects.

Iterative Model
Software is developed and improved through repeated cycles.
Pros:
  • Detects risks early and allows continuous improvement.
  • More flexible than Waterfall.
  • Suitable for complex projects.
Cons:
  • Requires more resources and time.
  • Needs frequent customer involvement.
  • Managing multiple iterations can be complex.

Spiral Model
Combines iterative development with risk analysis across repeated phases (planning, risk evaluation, design, and assessment).
Pros:
  • Ideal for high-risk projects.
  • Focuses on risk analysis and user input.
  • Allows iterative development.
Cons:
  • Expensive, because of the extensive risk analysis.
  • Requires skilled expertise.
  • Not suitable for small projects.

Agile Model
A flexible, iterative approach in which development is broken into small cycles (sprints).
Pros:
  • Highly adaptable to changing requirements.
  • Encourages collaboration and continuous feedback.
  • Faster delivery of functional software.
Cons:
  • Requires frequent customer involvement.
  • Needs skilled, self-organizing teams.
  • Can be challenging to manage in large teams.

RAD (Rapid Application Development)
A rapid development approach (compared to Waterfall) that focuses on prototyping and user feedback.
Pros:
  • Quick delivery of functional software.
  • Early feedback leads to high customer satisfaction.
  • Encourages collaboration.
Cons:
  • Needs constant user participation.
  • Not a good fit for large projects.
  • Depends heavily on highly skilled developers.

DevOps
Integrates development and operations teams for continuous development, testing, and deployment.
Pros:
  • Faster releases through automated testing and deployment.
  • Improves collaboration between teams.
  • Enhances software reliability and quality.
Cons:
  • Requires a cultural shift and tool integration.
  • Needs skilled professionals.
  • Can be complex to implement in legacy systems.

Testing Methods

There are three main methods used in software testing; the sketch after the list contrasts the first two:

Black Box Testing
  • Focuses on software functionality without looking at code.
  • Testers validate inputs and outputs without knowing the internal implementation.
  • Examples: Functional Testing, UI Testing, User Acceptance Testing (UAT).
White Box Testing
  • Involves testing the internal structure of an application.
  • Requires programming knowledge to analyze logic and code execution.
  • Examples: Unit Testing, Code Coverage Testing, etc.
Grey Box Testing
  • A combination of Black Box and White Box Testing.
  • Testers have partial knowledge of internal workings.
  • Used to validate security and data flow across modules.
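
To make the black box/white box distinction concrete, here is a minimal PyTest-style sketch. The discount function and its rules are hypothetical, invented purely for illustration; the point is the difference in what each test is allowed to know.

```python
import pytest

# Hypothetical function under test; the discount rules are invented
# purely to illustrate the two testing methods.
def apply_discount(price: float, is_member: bool) -> float:
    """Members get 10% off; subtotals over 100 get a further 5% off."""
    if is_member:
        price *= 0.90
    if price > 100:
        price *= 0.95
    return price

# Black box: validates inputs against expected outputs only, with no
# reference to the branches inside apply_discount.
def test_member_discount_black_box():
    assert apply_discount(200.0, is_member=True) == pytest.approx(171.0)

# White box: written with knowledge of the internal logic, deliberately
# exercising the path where neither discount branch fires.
def test_no_discount_path_white_box():
    assert apply_discount(50.0, is_member=False) == pytest.approx(50.0)
```

A grey box tester sits between the two: they might know the 100-unit threshold exists without seeing the code, and probe around it.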

Essential Manual Testing Concepts

Test Case: A structured set of inputs, execution conditions, and expected results designed to verify a specific feature or functionality of the application.
Test Scenario: A high-level description of a test objective that covers multiple test cases to validate a particular workflow or user interaction.
Defect/Bug: An error, flaw, or deviation in the software that causes it to behave unexpectedly or incorrectly compared to the specified requirements.
Test Plan: A comprehensive document outlining the testing strategy, scope, objectives, resources, schedule, and responsibilities for a testing cycle.
Test Environment: The configured hardware, software, network, and other tools required to execute test cases in a controlled setting.
Severity vs. Priority: Severity represents the impact of a defect on system functionality, while priority determines the urgency with which it should be fixed. The two are independent, as the example below illustrates.
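
As a quick illustration of how severity and priority can diverge, here is a hypothetical defect record; the field names and the severity/priority scales are invented for the example, not taken from any specific tracker:

```python
from dataclasses import dataclass

# A hypothetical defect record; field names and the severity/priority
# scales are invented for illustration.
@dataclass
class Defect:
    defect_id: str
    summary: str
    severity: str  # impact on the system: "critical", "major", "minor"
    priority: str  # urgency of the fix: "P1" (fix now) .. "P3" (can wait)

# High severity, low priority: a crash in a rarely used legacy report.
legacy_crash = Defect("D-101", "Crash when exporting legacy report",
                      severity="critical", priority="P3")

# Low severity, high priority: a brand-name typo on the landing page.
landing_typo = Defect("D-102", "Misspelled brand name on landing page",
                      severity="minor", priority="P1")
```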

Levels of Testing

Software testing is conducted at different levels to ensure quality.

Unit Testing
  • Performed by developers at the code level, before components are integrated.
  • Tests individual functions, methods, or modules in isolation.
  • Helps catch bugs at an early stage and improves code quality.
  • Usually automated with frameworks such as JUnit (Java), NUnit (.NET), or PyTest (Python); a minimal sketch follows below.
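
A minimal PyTest unit test might look like this. The function under test is an invented example; a real suite would import it from the application code instead of defining it inline:

```python
# Hypothetical function under test (a real suite would import it).
def order_total(quantities, unit_price):
    """Return the total for an order of equal-priced items."""
    return sum(quantities) * unit_price

def test_order_total():
    assert order_total([1, 2, 3], 10) == 60

def test_order_total_empty_order():
    # Edge case caught early at the unit level: an empty order is zero.
    assert order_total([], 10) == 0
```

Running `pytest` in the project directory discovers and executes both tests automatically.
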
Integration Testing
  • Tests how modules, services, or APIs interact, to ensure they work together as expected.
  • Detects communication failures between components, such as data flow errors.
  • Includes techniques like Top-Down, Bottom-Up, and Big Bang integration.
  • Common tools include testRigor and Postman; a code-level sketch follows below.
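
At the code level, an API integration check might look like the following sketch. The base URL and the /users endpoint are assumptions standing in for a real service under test:

```python
import requests

# Assumed base URL and endpoint; substitute your service under test.
BASE_URL = "https://api.example.com"

def test_create_then_fetch_user():
    # The creation module and the lookup module must agree on the data
    # they exchange; that hand-off is what integration testing targets.
    created = requests.post(f"{BASE_URL}/users", json={"name": "Ada"})
    assert created.status_code == 201

    user_id = created.json()["id"]
    fetched = requests.get(f"{BASE_URL}/users/{user_id}")
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "Ada"
```
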
System Testing
  • Evaluates the complete, integrated software system as a whole.
  • Verifies that end-to-end functionality meets business and technical requirements.
  • Encompasses functional, performance, security, and usability testing.
  • Tests are executed in an environment that closely resembles production.
User Acceptance Testing (UAT)
  • End-user or stakeholder-driven tests that validate real-world business needs.
  • Ensures the software satisfies the customer before deployment.
  • The final phase before release to production, catching any last-minute defects.

Types of Testing

Manual testing consists of various testing types, each designed to validate different aspects of software functionality, performance, and security.

Functional Testing: Verifies that the application behaves as expected by validating its features against specified requirements.
Non-Functional Testing: Evaluates aspects like performance, security, and usability to ensure the software meets quality standards beyond functionality.
Smoke Testing: A quick, high-level test to verify that the critical functionalities of the application work before proceeding with more detailed testing.
Sanity Testing: A focused check on specific functionalities or recent fixes to confirm they work correctly, without running a full regression suite.
Regression Testing: Ensures that new changes, bug fixes, or enhancements do not negatively impact previously working functionality.
Exploratory Testing: Performed without predefined test cases, relying on tester intuition and experience to uncover defects.
Ad-hoc Testing: An unstructured approach in which testers freely explore the application to find unexpected issues.
Performance Testing: Assesses the application’s speed, responsiveness, and stability under different conditions to identify bottlenecks.
Security Testing: Examines the system for vulnerabilities, unauthorized access, and data breach risks to verify that security measures are robust.
Accessibility Testing: Verifies that the application is usable by people with disabilities, following standards like WCAG for inclusivity.

Test Case Design Techniques

Equivalence Partitioning: Divides input data into valid and invalid partitions so that fewer test cases still give maximum coverage.
Boundary Value Analysis: Focuses on testing input values at the extreme edges (minimum, maximum, and just outside the limits) to detect boundary-related defects; a combined sketch of this and equivalence partitioning follows the list.
Decision Table Testing: Uses a tabular format to represent input conditions and their corresponding expected outputs, ensuring all combinations are covered.
State Transition Testing: Evaluates system behavior by testing valid and invalid state changes, verifying that transitions between states work correctly.
Use Case Testing: Validates functionality against real-world user interactions and workflows to ensure the application meets business requirements.
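
The two techniques are often combined. The sketch below applies both to a hypothetical rule (invented for the example) that ages 18 to 60 inclusive are valid for registration:

```python
import pytest

# Hypothetical rule under test: ages 18-60 (inclusive) are valid.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Equivalence partitioning picks one representative per partition;
# boundary value analysis adds the exact edges and their neighbours.
@pytest.mark.parametrize("age, expected", [
    (5, False),    # invalid partition: well below the range
    (17, False),   # just below the lower boundary
    (18, True),    # lower boundary
    (35, True),    # representative of the valid partition
    (60, True),    # upper boundary
    (61, False),   # just above the upper boundary
    (99, False),   # invalid partition: well above the range
])
def test_is_valid_age(age, expected):
    assert is_valid_age(age) == expected
```

Seven cases cover both techniques; without them, exhaustive testing of every age would be wasteful, and random sampling might miss the boundaries entirely.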

Software Testing Life Cycle

Requirement Analysis: Understanding the project requirements and defining the test scope so that all functionalities are covered.
Test Planning: Creating the test strategy, defining test objectives, estimating resources, and scheduling test activities.
Test Case Development: Writing detailed test cases, preparing test data, and defining expected outcomes based on requirements.
Test Environment Setup: Preparing the necessary hardware, software, network configurations, and access permissions to execute tests.
Test Execution: Running test cases, comparing actual vs. expected results, and logging defects for resolution.
Test Closure: Finalizing documentation, analyzing test metrics, preparing closure reports, and capturing lessons learned for future improvements.

Test Metrics

Testing metrics are quantitative measures used to assess the efficiency, effectiveness, and quality of the software testing process. They provide insights into test coverage, defect density, and overall test performance.

Test Execution Metrics

Test Case Execution Percentage
  • Measures the progress of test execution.
  • (Number of Test Cases Executed / Total Test Cases) x 100
Test Case Pass Rate
  • Indicates the percentage of successful test cases.
  • (Number of Passed Test Cases / Total Executed Test Cases) x 100
Test Case Failure Rate
  • Helps identify defect-prone areas.
  • (Number of Failed Test Cases / Total Executed Test Cases) x 100
Test Coverage
  • Assesses how well the requirements are tested.
  • (Number of Requirements Covered by Test Cases / Total Requirements) x 100
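
To make these formulas concrete, here is a small worked example using made-up counts for one test cycle:

```python
# Made-up counts for one test cycle.
total_test_cases = 200
executed = 180
passed = 162
requirements_total = 50
requirements_covered = 47

execution_pct = executed / total_test_cases * 100            # 90.0
pass_rate = passed / executed * 100                          # 90.0
failure_rate = (executed - passed) / executed * 100          # 10.0
coverage = requirements_covered / requirements_total * 100   # 94.0

print(f"Executed: {execution_pct:.1f}% | Pass: {pass_rate:.1f}% | "
      f"Fail: {failure_rate:.1f}% | Coverage: {coverage:.1f}%")
```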

Defect Metrics

Defect Density
  • Determines the number of defects per unit of code (e.g., per 1,000 lines of code).
  • Total Number of Defects / Size of the Software Module
Defect Leakage
  • Indicates the percentage of defects that slipped past earlier test phases and were only caught during UAT.
  • (Defects Found in UAT / Total Defects Found in Testing) x 100
Defect Severity Index (DSI)
  • Helps prioritize defect resolution based on impact.
  • (Σ (Severity Level x Number of Defects)) / Total Defects
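
A worked example of the defect formulas, again with invented numbers; the 1-to-4 severity scale used for the DSI is an assumption, since teams weight severities differently:

```python
# Made-up defect counts for one release.
defects_total = 48
module_size_kloc = 12                      # 12,000 lines of code
defect_density = defects_total / module_size_kloc            # 4.0 per KLOC

defects_found_in_uat = 6
defect_leakage = defects_found_in_uat / defects_total * 100  # 12.5%

# Severity weights: 4 = critical .. 1 = low (the scale is an assumption).
defects_by_severity = {4: 2, 3: 6, 2: 25, 1: 15}             # counts sum to 48
dsi = sum(level * n for level, n in defects_by_severity.items()) / defects_total

print(f"Density: {defect_density:.1f}/KLOC | Leakage: {defect_leakage:.1f}% "
      f"| DSI: {dsi:.2f}")                                   # DSI ≈ 1.90
```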

Test Efficiency and Productivity Metrics

Test Efficiency
  • Measures how many of the system’s total defects the testing process found.
  • (Number of Defects Found by Testing / Total Defects in the System) x 100
Tester Productivity
  • Evaluates how efficiently testers execute test cases.
  • (Number of Test Cases Executed / Testing Hours Spent)

Test Artifacts

Test artifacts are essential documents created during different phases of the Software Testing Life Cycle (STLC) to maintain proper planning, execution, tracking, and reporting of testing activities.

Test Plan: A document outlining the test strategy, scope, objectives, schedule, resources, and risks. Created during Test Planning.
Test Cases: Detailed, step-by-step test scripts specifying inputs, expected results, and pass/fail criteria. Created during Test Case Development.
Test Scenarios: High-level test descriptions covering multiple test cases based on requirements. Created during Test Case Development.
Test Data: Input data used to execute test cases, including valid, invalid, and boundary conditions. Created during Test Case Development.
Requirement Traceability Matrix (RTM): A mapping document linking test cases to their corresponding requirements to ensure full coverage. Created during Test Case Development.
Test Environment Setup Document: Specifies the hardware, software, network configurations, and access credentials required for testing. Created during Test Environment Setup.
Test Execution Report: A daily or weekly summary of executed test cases, pass/fail status, and any blockers. Created during Test Execution.
Defect Report: A detailed log of identified defects, including steps to reproduce, severity, priority, and status. Created during Test Execution.
Bug Report: Captures information about a specific defect, including actual vs. expected results and supporting screenshots/logs. Created during Test Execution.
Test Summary Report: A final document summarizing the testing process, execution results, defect status, and overall quality assessment. Created during Test Closure.
Lessons Learned Document: A post-testing review capturing challenges, best practices, and recommendations for future projects. Created during Test Closure.

Conclusion

Manual testing is an indispensable part of software quality assurance. Mastering it requires understanding the different testing types, writing effective test cases, and applying best practices. Combining manual and automated testing ensures robust software quality.