Software Testing: Introduction to Testing
In 2002, a study commissioned by the US Department of Commerce’s National Institute of Standards and Technology concluded that software errors cost the US economy about $59 billion annually.
Testing is one of the most crucial parts of software development. You definitely don’t want your customers to be the first to report a bug.
Software testing allows you to discover and fix bugs before they reach production. There are different ways to test your applications: manually or with automation. In this article, we are concerned with manual testing.
Testing your application can be very challenging. Even though it’s impossible to catch every defect with your test suites, testing gives you confidence, to a large extent, that your application works as expected. At the very least, you can sleep well at night.
Many organizations just want to push out new product features as soon as possible. Have you worked somewhere your boss says you need to ship a feature within the next few hours, on a timeline you know is not enough to get it ready and error-free? There are many bosses like that.
This puts you in a fix and pressures you into writing crappy, hacky code that is neither testable nor predictable.
You are not alone; I’ve been in this situation as well. Don’t be like these bosses. Be different, and make testing a part of your software development life cycle, as it should be.
In this article, we’ll learn how to test our applications manually and effectively. In the next article, we’ll talk about how to automate your testing: the techniques, the tools, and how to get started.
So, let’s get started with manual testing.
Table of contents
- The objective of testing
- Types of testing
- Test coverage
- Requirements Traceability Matrix
The objective of testing
The objective of testing is first to find defects, and then to prevent defects from reaching production.
The information you get from testing helps you perform a proper risk assessment, which in turn helps you deliver software that is fault-tolerant, maintainable, and scalable.
Although different testing techniques have different objectives, the primary one is to find defects and fix them as soon as possible, since a defect becomes far more expensive to fix once it is found in production.
Types of Testing
Below are a few types of testing techniques that software teams often use:
Functional Testing types include:
- Unit Testing
- Integration Testing
- System Testing
- Sanity Testing
- Smoke Testing
- Interface Testing
- Regression Testing
- Beta/Acceptance Testing
Non-functional Testing types include:
- Performance Testing
- Load Testing
- Stress Testing
- Volume Testing
- Security Testing
- Compatibility Testing
- Install Testing
- Recovery Testing
- Reliability Testing
- Usability Testing
- Compliance Testing
- Localization Testing
Test coverage
Test coverage is a metric that measures how well your tests exercise the functional requirements, user requirements, system specs, and so on.
For example, if you are testing a login functionality: how many test cases do you have for it? Did you only test the case where the user enters correct details? What happens if the user inputs some kind of unexpected data? What happens if the login fails? Test coverage allows you to measure how much of this ground your tests cover.
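To make the login scenarios above concrete, here is a minimal sketch using Python’s built-in unittest module. The `login` function is hypothetical, invented purely for illustration; each test case covers one of the scenarios listed above.

```python
import unittest

def login(username, password):
    """Hypothetical login function, used only to illustrate test coverage."""
    users = {"ada": "s3cretpass"}
    if not username or not password:
        raise ValueError("username and password are required")
    return users.get(username) == password

class LoginTests(unittest.TestCase):
    def test_correct_details(self):
        # The happy path: correct username and password
        self.assertTrue(login("ada", "s3cretpass"))

    def test_failed_login(self):
        # Wrong password and unknown user should both fail
        self.assertFalse(login("ada", "wrong"))
        self.assertFalse(login("bob", "s3cretpass"))

    def test_unexpected_input(self):
        # Blank input is rejected rather than silently accepted
        with self.assertRaises(ValueError):
            login("", "x")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Each extra scenario you cover (empty input, wrong password, unknown user) raises your coverage of the login requirement.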
Requirements Traceability Matrix
A requirements traceability matrix helps you answer the question “What have we tested?” It gives you quantifiable data on what has been tested and what hasn’t, what was successful, and what failed and needs to be fixed.
It is essentially a tool for validating, reviewing, and auditing system requirements against the current behavior of the system. You will find the traceability matrix used by the validation team to ensure that requirements are not lost during the validation process, and by auditors to review the validation documentation.
How do you design a traceability matrix, you ask?
Let’s design one.
We’ll start by creating test cases. Most teams use spreadsheets to create templates that can be re-used for different tests.
Assuming our application requires a user to be registered before accessing our services, we can test for:
- Validation errors, e.g. fields can not be blank, passwords are at least 8 characters long, etc. These will form the test cases for our registration validation test suite.
- Successful registration
- Failed registration
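The validation test cases above can be sketched in code as well. The `validate_registration` helper below is hypothetical, assumed only for this example; it returns a list of error messages, with an empty list meaning the form is valid.

```python
def validate_registration(form):
    """Hypothetical validator: returns a list of error messages (empty = valid)."""
    errors = []
    for field in ("username", "email", "password"):
        if not form.get(field):
            errors.append(f"{field} can not be blank")
    if form.get("password") and len(form["password"]) < 8:
        errors.append("password must be at least 8 characters")
    return errors

# Test cases for the registration validation test suite
assert "username can not be blank" in validate_registration({})
assert "password must be at least 8 characters" in validate_registration(
    {"username": "ada", "email": "ada@example.com", "password": "short"}
)
# Successful registration: no validation errors
assert validate_registration(
    {"username": "ada", "email": "ada@example.com", "password": "longenough"}
) == []
```

Each assert corresponds to one row you would later record in the traceability matrix.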
So, our sample traceability matrix will look like this:
[Requirement Traceability Matrix Example](https://www.notion.so/b5e849370fb04a8caa60f5773dcfc7f2)
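If a spreadsheet feels too manual, the same matrix can be kept as structured data and queried. This is only a sketch: the requirement and test-case IDs below are made up for illustration, mirroring the registration test cases above.

```python
# Each row links one requirement to the test cases covering it and their status.
traceability_matrix = [
    {"req_id": "REQ-001", "requirement": "Fields can not be blank",
     "test_cases": ["TC-001"], "status": "Pass"},
    {"req_id": "REQ-002", "requirement": "Password is at least 8 characters",
     "test_cases": ["TC-002"], "status": "Fail"},
    {"req_id": "REQ-003", "requirement": "User can register successfully",
     "test_cases": ["TC-003", "TC-004"], "status": "Pass"},
]

def needs_attention(matrix):
    """Answer 'what failed or has no tests yet?' from the matrix."""
    return [row["req_id"] for row in matrix
            if row["status"] != "Pass" or not row["test_cases"]]

print(needs_attention(traceability_matrix))  # prints ['REQ-002']
```

A query like `needs_attention` is exactly the “what isn’t working” question the matrix exists to answer.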
You can expand on this to cover as many test cases as possible. Each time a test run is done, you can look at this table to see what isn’t working, find the root cause, and fix it before pushing to production.
This process can be really tedious, since the tester has to click around the app to exercise all these functionalities. It makes sense to automate some of them. We’ll look at how you can automate this with unit testing, integration testing, and end-to-end testing later in this series.
Originally published at Ezesunday.com