“A great tester is not the one who finds the most bugs, but the one who ensures the right bugs get fixed.” – Cem Kaner.
Testing isn’t simply finding bugs; it is understanding how things work, thinking like a user, and making sure that everything runs smoothly. It’s about digging deep, challenging assumptions, and catching issues before they become real problems. A great tester doesn’t just execute steps: they are curious, observant, and one step ahead, whether verifying edge cases, ensuring a smooth user experience, or simply checking that code behaves as expected. A whole industry is focused on keeping software reliable, intuitive, and, most importantly, trustworthy.
Whether you’re just getting started with testing or you’re a seasoned QA guru, this cheat sheet will help guide you through all aspects of manual testing. So let’s get started.
What is Manual Testing?
Manual testing is a process where test cases are executed manually, without any automation tools. It requires domain knowledge, analytical skills, and sound test execution techniques. In manual testing, a human tester plays the role of an end user and interacts with the software to check whether it functions as expected.
Key Aspects of Manual Testing
- No Automation Involved: Testers execute test cases without using automation tools.
- Human Judgment: Testers use intuition, domain knowledge, and real-world scenarios.
- Defect Identification: The primary goal is to find bugs, inconsistencies, and missing features.
- User Perspective Testing: Makes sure that the software meets customer expectations.
What Does a Tester Do?
A manual tester is responsible for maintaining software quality through activities such as analyzing requirements, designing and executing test cases, logging defects, and verifying fixes.
Software Development Life Cycle
The Software Development Life Cycle (SDLC) defines the process of planning, developing, testing, and deploying software.
Phases of SDLC
- Requirement Gathering: Understanding business and user needs.
- Planning: Defining scope, resources, and risk management.
- Design: Creating system architecture and UI/UX design.
- Development: Writing code for the application.
- Testing: Validating the application manually and through automation.
- Maintenance: Updating and improving the software post-release.
Testing is crucial in SDLC to maintain quality, reliability, and performance.
Comparing SDLC Methodologies
There are various Software Development Life Cycle (SDLC) models that structure the software development process, each with its own trade-offs.
SDLC Model | Description | Pros | Cons |
---|---|---|---|
Waterfall Model | A structured, sequential approach in which each phase must be finished before the next begins. | Simple to understand and manage; well-defined deliverables per phase. | Inflexible to changing requirements; testing happens late in the cycle. |
Iterative Model | Software is developed and improved through repeated cycles. | Working software is delivered early; defects are found sooner. | Can consume more resources; scope may creep across iterations. |
Spiral Model | Merges iterative and risk-based methods over many development stages (planning, risk evaluation, design, and assessment). | Strong focus on risk management; suits large, complex projects. | Costly and complex; requires risk-analysis expertise. |
Agile Model | A flexible and iterative approach where development is broken into small cycles (sprints). | Fast feedback; adapts well to changing requirements. | Less predictable scope and timelines; lighter documentation. |
RAD (Rapid Application Development) | A quicker approach than waterfall that focuses on prototyping and user feedback. | Rapid delivery; continuous user involvement. | Needs skilled teams; poor fit for systems that cannot be modularized. |
DevOps | Integrates development and operations teams for continuous development, testing, and deployment. | Frequent, reliable releases; strong automation culture. | Significant investment in tooling and cultural change. |
Testing Methods
There are three main methods used in software testing:
Testing Method | Description |
---|---|
Black Box Testing | Tests functionality without any knowledge of the internal code; the tester works purely from inputs and expected outputs. |
White Box Testing | Tests the internal structure, logic, and code paths of the application; requires programming knowledge. |
Grey Box Testing | Combines both approaches: the tester has partial knowledge of the internals while still testing from the user's perspective. |
Essential Manual Testing Concepts
Concept | Description |
---|---|
Test Case | A structured set of inputs, execution conditions, and expected results designed to verify a specific feature or functionality of the application. |
Test Scenario | A high-level description of a test objective that covers multiple test cases to validate a particular workflow or user interaction. |
Defect/Bug | An error, flaw, or deviation in the software that causes it to behave unexpectedly or incorrectly compared to the specified requirements. |
Test Plan | A comprehensive document outlining the testing strategy, scope, objectives, resources, schedule, and responsibilities for a testing cycle. |
Test Environment | The configured hardware, software, network, and other tools required to execute test cases in a controlled setting. |
Severity vs. Priority | Severity represents the impact of a defect on system functionality; priority determines the urgency with which it should be fixed. A crash in a rarely used report is high severity but may be low priority, while a typo on the home page is low severity but high priority. |
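For instance, a minimal written test case for a hypothetical login form might look like the following (the ID, steps, and values are purely illustrative):

Field | Example |
---|---|
Test Case ID | TC_LOGIN_001 |
Title | Verify login with valid credentials |
Preconditions | A registered user account exists |
Steps | 1. Open the login page. 2. Enter a valid username and password. 3. Click "Sign In". |
Expected Result | The user lands on the dashboard and sees a welcome message. |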
Levels of Testing
Software testing is conducted at different levels to ensure quality.
Testing Level | Description |
---|---|
Unit Testing | Verifies individual components or functions in isolation, usually performed by developers. |
Integration Testing | Verifies that modules or components work together correctly once combined. |
System Testing | Validates the complete, integrated application against the specified requirements. |
User Acceptance Testing (UAT) | Performed by end users or clients to confirm the software meets business needs before release. |
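To illustrate the lowest level, here is a minimal sketch of a unit test using Python's built-in unittest module; the apply_discount function and its values are assumptions made for the example:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: applies a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        # A 20% discount on 50.00 should yield 40.00.
        self.assertEqual(apply_discount(50.00, 20), 40.00)

    def test_invalid_percent_rejected(self):
        # Percentages outside 0-100 are invalid input.
        with self.assertRaises(ValueError):
            apply_discount(50.00, 150)

if __name__ == "__main__":
    unittest.main()
```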
Types of Testing
Manual testing consists of various testing types, each designed to validate different aspects of software functionality, performance, and security.
Functional Testing | Tests that the application behaves as expected by validating its features against specified requirements. |
---|---|
Non-Functional Testing | Evaluates aspects like performance, security, and usability to make sure the software meets quality standards beyond just functionality. |
Smoke Testing | A quick, high-level test to verify that the critical functionalities of the application work before proceeding with more detailed testing. |
Sanity Testing | A focused check on specific functionalities or recent fixes to confirm they work correctly without conducting a full regression test. |
Regression Testing | Makes sure that new changes, bug fixes, or enhancements do not negatively impact previously working functionality. |
Exploratory Testing | Performed without predefined test cases, relying on tester intuition and experience to uncover defects. |
Ad-hoc Testing | An unstructured testing approach where testers randomly explore the application to find unexpected issues. |
Performance Testing | Assesses the application’s speed, responsiveness, and stability under different conditions to identify performance bottlenecks. |
Security Testing | Examines the system for vulnerabilities, unauthorized access, and potential data breaches to verify that security measures are robust. |
Accessibility Testing | Verifies that the application is usable by individuals with disabilities, following standards like WCAG for inclusivity. |
Test Case Design Techniques
Technique | Description |
---|---|
Equivalence Partitioning | Divides input data into valid and invalid partitions so that one representative value per partition minimizes the number of test cases while preserving coverage. |
Boundary Value Analysis | Focuses on testing input values at the extreme edges (minimum, maximum, and just outside the limits) to detect boundary-related defects. |
Decision Table Testing | Uses a tabular format to map combinations of input conditions to their expected outputs, making sure that all rule combinations are tested. |
State Transition Testing | Evaluates the system's behavior by exercising valid and invalid state changes to verify that transitions between states work correctly. |
Use Case Testing | Validates software functionality based on real-world user interactions and workflows to confirm the application meets business requirements. |
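To make the first two techniques concrete, here is a short Python sketch that derives test values for a hypothetical age field accepting 18 to 60 (the validator and its limits are assumptions for illustration):

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical validator: accepts ages from 18 to 60 inclusive."""
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
partitions = {
    "below range (invalid)": 10,
    "within range (valid)": 35,
    "above range (invalid)": 75,
}
for label, value in partitions.items():
    print(f"{label}: is_valid_age({value}) -> {is_valid_age(value)}")

# Boundary value analysis: values at and just outside each edge.
for value in [17, 18, 19, 59, 60, 61]:
    print(f"boundary {value}: is_valid_age({value}) -> {is_valid_age(value)}")
```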
Software Testing Life Cycle
Phase | Description |
---|---|
Requirement Analysis | Involves understanding the project requirements and defining the test scope to ensure all functionalities are covered. |
Test Planning | Focuses on creating the test strategy, defining test objectives, estimating resources and scheduling test activities. |
Test Case Development | Involves writing detailed test cases, preparing test data, and defining expected outcomes based on requirements. |
Test Environment Setup | Prepares the necessary hardware, software, network configurations, and access permissions to execute tests. |
Test Execution | Running test cases, comparing actual vs. expected results, and logging defects for resolution. |
Test Closure | Finalizing documentation, analyzing test metrics, preparing closure reports, and discussing lessons learned for future improvements. |
Test Metrics
Testing metrics are quantitative measures used to assess the efficiency, effectiveness, and quality of the software testing process. They provide insights into test coverage, defect density, and overall test performance.
Test Execution Metrics
Metric | Formula |
---|---|
Test Case Execution Percentage | (Number of test cases executed / Total number of test cases) × 100 |
Test Case Pass Rate | (Number of test cases passed / Number of test cases executed) × 100 |
Test Case Failure Rate | (Number of test cases failed / Number of test cases executed) × 100 |
Test Coverage | (Number of requirements covered by tests / Total number of requirements) × 100 |
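A minimal sketch of how these formulas play out, with made-up counts for a single test cycle:

```python
# Illustrative counts for one test cycle (all values are invented).
total_cases, executed, passed, failed = 200, 180, 162, 18
requirements_total, requirements_covered = 50, 47

execution_pct = executed / total_cases * 100                 # 90.0
pass_rate = passed / executed * 100                          # 90.0
failure_rate = failed / executed * 100                       # 10.0
coverage = requirements_covered / requirements_total * 100   # 94.0

print(f"Execution: {execution_pct:.1f}%  Pass: {pass_rate:.1f}%  "
      f"Fail: {failure_rate:.1f}%  Coverage: {coverage:.1f}%")
```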
Defect Metrics
Metric | Formula |
---|---|
Defect Density | Total number of defects / Size of the module (e.g., per KLOC) |
Defect Leakage | (Defects found after release / Defects found during testing) × 100 |
Defect Severity Index (DSI) | Σ (severity weight × number of defects at that severity) / Total number of defects |
Test Efficiency and Productivity Metrics
Metric | Formula |
---|---|
Test Efficiency | (Defects found during testing / Total defects found, including post-release) × 100 |
Test Productivity | Number of test cases executed / Effort spent (e.g., per person-day) |
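Putting the defect and efficiency formulas together in one short sketch (all counts and severity weights below are invented for the example):

```python
# Illustrative defect data for a 12.5 KLOC module.
defects_in_testing, defects_after_release = 45, 5
kloc = 12.5
severity_counts = {4: 3, 3: 10, 2: 20, 1: 12}  # weight -> count (4 = critical)

defect_density = (defects_in_testing + defects_after_release) / kloc
defect_leakage = defects_after_release / defects_in_testing * 100
total_defects = sum(severity_counts.values())
dsi = sum(w * n for w, n in severity_counts.items()) / total_defects
test_efficiency = defects_in_testing / (defects_in_testing + defects_after_release) * 100

print(f"Density: {defect_density:.2f}/KLOC  Leakage: {defect_leakage:.1f}%  "
      f"DSI: {dsi:.2f}  Efficiency: {test_efficiency:.1f}%")
```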
Test Artifacts
Test artifacts are essential documents created during different phases of the Software Testing Life Cycle (STLC) to support proper planning, execution, tracking, and reporting of testing activities.
Test Artifact | Description | Created During |
---|---|---|
Test Plan | A document outlining the test strategy, scope, objectives, schedule, resources, and risks. | Test Planning |
Test Cases | Detailed step-by-step test scripts specifying inputs, expected results, and pass/fail criteria. | Test Case Development |
Test Scenarios | High-level test descriptions covering multiple test cases based on requirements. | Test Case Development |
Test Data | Input data used to execute test cases, including valid, invalid, and boundary conditions. | Test Case Development |
Requirement Traceability Matrix (RTM) | A mapping document linking test cases to their corresponding requirements to ensure full coverage. | Test Case Development |
Test Environment Setup Document | Specifies hardware, software, network configurations, and access credentials required for testing. | Test Environment Setup |
Test Execution Report | A daily/weekly summary of executed test cases, pass/fail status, and any blockers. | Test Execution |
Defect Report | A detailed log of identified defects, including steps to reproduce, severity, priority, and status. | Test Execution |
Bug Report | Captures information about a specific defect, including actual vs. expected results and supporting screenshots/logs. | Test Execution |
Test Summary Report | A final document summarizing the testing process, test execution results, defect status, and overall quality assessment. | Test Closure |
Lessons Learned Document | A post-testing review capturing challenges, best practices, and recommendations for future projects. | Test Closure |
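As an illustration, a defect report for a hypothetical issue might be filled in like this (every detail below is invented):

Field | Example |
---|---|
Defect ID | BUG-1042 |
Title | Login button unresponsive after a failed password attempt |
Environment | Chrome 124 / Windows 11 / build 2.3.1 |
Steps to Reproduce | 1. Open the login page. 2. Enter a valid username with a wrong password and click "Sign In". 3. Correct the password and click "Sign In" again. |
Expected Result | The user logs in with the corrected credentials. |
Actual Result | The button stays disabled until the page is reloaded. |
Severity / Priority / Status | Major / High / Open |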
Conclusion
Manual testing is an indispensable part of software quality assurance. Mastering it requires understanding testing types, writing effective test cases, and applying best practices. Combining manual and automated testing ensures robust software quality.