The software testing discipline has evolved considerably over the past 7-8 years. The number of automated testing tools has multiplied, and the range of testing methods has expanded as well. If 8-10 years ago our clients were quite reluctant to have testers on their teams ("it's in the developers' job description to do testing" used to be the most common argument), today dedicated testers are the new norm.
And for good reason: the sooner you identify a defect, the easier, faster, and cheaper it is to fix. The cost of fixing a defect found during testing can be 15 times higher than fixing one found during the design phase.
Not to mention the potential damage to the brand if a bug makes its way into the released software product. Therefore, software testing is without a doubt a fundamental part of any software project.
The real challenge with software testing is striking the right balance between the effort you put into testing and the value you get from it.
You can do white box testing, black box testing, manual or automated, functional and non-functional testing - the combination of possibilities is overwhelming.
But testing takes time and comes at a price. And so understanding the business need and the quality standards each product must meet is what drives how we conduct testing and development.
For example, testing an MVP is different from testing a product in a heavily regulated industry such as pharmaceuticals. The goal of an MVP is to validate an idea and get feedback from the market. In that context, you want to launch the MVP as fast as possible and will spend less time testing (even if that means "living" with a few known issues). For a software product in the pharmaceutical industry, however, it's a totally different story: there are specific requirements and quality metrics that must be met (security standards, performance, etc.), so thorough testing makes sense.
Before we dive in, it's worth mentioning that for us, quality assurance means more than just the quality of the code; it extends across every stage of the software development process. Also, staying true to Agile principles, we believe the entire software engineering team is responsible for the overall quality assurance of the software product.
So without further ado, let’s go into how we approach testing at Wirtek and what our process looks like.
Every time we onboard a new client, the first mandatory step is understanding the quality standards the product has to meet. These standards differ by industry vertical: an HR platform, for example, doesn't need to comply with the same regulations as a medical or pharmaceutical application, so how we test each of them will also differ.
Therefore we meet with the client to talk about these standards and specific requirements. And we put in place a testing process that goes hand in hand with development.
We document these quality standards by creating acceptance tests together with the client. Every feature or user story we build needs to pass a series of acceptance tests before being released to production. These acceptance tests can define how a certain functionality should work, acceptable load times, data security requirements, or how the feature integrates with another system such as an API or a payment system.
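To make this concrete, here is a minimal sketch of what an automated acceptance test might look like in Python with pytest. The endpoint, payload, and two-second load-time budget are invented for illustration; the real criteria always come from the client.

```python
import time

import requests  # third-party HTTP client (pip install requests)

BASE_URL = "https://staging.example.com"  # hypothetical test environment


def test_checkout_confirms_valid_order():
    """Acceptance criterion: submitting a valid cart returns a confirmed order."""
    response = requests.post(
        f"{BASE_URL}/api/checkout",
        json={"cart_id": "demo-cart", "payment_method": "card"},
        timeout=10,
    )
    assert response.status_code == 200
    assert response.json()["status"] == "confirmed"


def test_checkout_meets_load_time_budget():
    """Acceptance criterion: checkout responds within the agreed 2-second budget."""
    start = time.monotonic()
    requests.post(
        f"{BASE_URL}/api/checkout",
        json={"cart_id": "demo-cart", "payment_method": "card"},
        timeout=10,
    )
    assert time.monotonic() - start < 2.0
```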
In the early stages of a collaboration, one important task is studying the existing documentation.
When we deal with an existing product, we need to understand how it works from an end-user perspective. We also try to understand the client's processes so that we can tailor our testing process to best fit their needs and context. Exploratory testing, where testers investigate the product hands-on rather than following scripted test cases, plays an important role at this stage.
The added value of exploratory testing for us is that it enables us to assess and give feedback on how the product performs from a usability standpoint and how easy or difficult it is to learn to use.
The test environment should closely mimic the end-users’ environment in terms of hardware, software and network configurations, operating system settings, and other characteristics.
For example, for a mobile application, we need to make sure it works the same on all supported versions of a mobile operating system.
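One way to express this in an automated suite is to parameterize the same test over every supported configuration. In the pytest sketch below, the version list and the `launch_app` helper are assumptions made for illustration; a real suite would drive emulators or a device farm.

```python
import pytest

# Illustrative list of OS versions the product is expected to support.
SUPPORTED_ANDROID_VERSIONS = ["12", "13", "14"]


def launch_app(os_version: str) -> dict:
    """Hypothetical helper: boot the app on an emulator running os_version
    and report basic UI state. Stubbed out here for the sake of the sketch."""
    return {"os_version": os_version, "home_screen_visible": True}


# The same check runs once per supported OS version.
@pytest.mark.parametrize("os_version", SUPPORTED_ANDROID_VERSIONS)
def test_app_launches_on_supported_versions(os_version):
    state = launch_app(os_version)
    assert state["home_screen_visible"]
```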
We typically work in an Agile environment, with two- to three-week sprints and a sprint backlog of user stories to build.
Right after a feature is developed, testers work with the development team to make sure the acceptance tests pass. Testers and developers work in close cooperation, almost in symbiosis as we like to say, because the goal is to spot bugs early and fix them then and there. It's easier and faster to fix an error right after developing a feature than later on.
Developers themselves are responsible for testing the features they develop, and most of the time their code is covered by unit tests (white box testing).
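As a reminder of what that looks like in practice, here is a minimal unit test sketch; the `apply_discount` function is made up for the example and is not code from any client project.

```python
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_reduces_price():
    assert apply_discount(100.0, 20) == 80.0


def test_apply_discount_rejects_invalid_percentage():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```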
At the end of the sprint, our goal is to ensure we have a version of the product that is ready to be published, so we need to make sure the entire product is tested.
Testers will use different testing techniques, depending on the quality standards of the software product. In some instances, our testers will execute smoke tests, running a small subset of all test cases, to ensure that the most important functionalities will work in production.
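In a pytest-based suite, for example, one common way to carve out such a subset is with a custom marker; the marker name and the tests below are illustrative only.

```python
import pytest

# The marker must be registered, e.g. in pytest.ini:
#   [pytest]
#   markers = smoke: critical-path tests run on every build


@pytest.mark.smoke
def test_user_can_log_in():
    ...  # critical path: part of the smoke subset


def test_profile_avatar_upload():
    ...  # lower-risk feature: runs only in the full suite
```

Running `pytest -m smoke` then executes only the marked critical-path tests, while plain `pytest` runs everything.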
In other instances, we will run regression tests to make sure that a newly developed feature or changes made during that sprint haven't broken existing functionality elsewhere in the application.
Running the entire test suite would take significant time and effort, so we need to be strategic about how we run regression tests. We focus on identifying the functionalities potentially impacted by the sprint's changes and run a set of tests that covers them.
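A simple way to picture that selection step: keep a mapping from code modules to the features that depend on them, and derive the regression scope from the sprint's changes. The module and feature names below are invented for illustration; in practice, this knowledge lives with the team and the codebase.

```python
# Hypothetical mapping from code modules to potentially impacted features.
IMPACT_MAP = {
    "payments": {"checkout", "refunds", "invoicing"},
    "auth": {"login", "checkout", "account_settings"},
}


def select_regression_targets(changed_modules: list[str]) -> set[str]:
    """Collect every feature potentially impacted by this sprint's changes."""
    targets: set[str] = set()
    for module in changed_modules:
        targets |= IMPACT_MAP.get(module, set())
    return targets


# Example: this sprint only touched the auth module.
print(select_regression_targets(["auth"]))
# e.g. {'login', 'checkout', 'account_settings'} (set order may vary)
```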
Before a major release, we usually run full regression tests to ensure we can make a safe deployment into production.
We view our testing process as ever-evolving and focus on how we can continue to make it better. The Sprint Retrospective meeting held at the end of each development sprint is a good occasion to look at how the testing went, what types of bugs were reported, and determine how testing can be improved.
In these retrospectives, we explore specific areas and questions about how we test and where we can improve.
At Wirtek, our dedicated teams are functional teams, with one tester for every four or five software developers.
For us, testing represents a bridge between the client's product owner and the software developers. Our testers need to know the product well: the business requirements behind the software and how end-users will actually interact with it. That's why we try to establish a Proxy Product Owner role within our dedicated teams, in charge of gathering product know-how that can help developers whenever they have questions, easing the burden on the client's Product Owner.
Software testing is a fundamental component of the software development life cycle.
At the same time, it is a complex and resource-intensive process, so looking at testing through the lens of the utility and business value it can create enables you to stay efficient and agile.
Because we develop new products or add new functionality to existing software products, and because our clients have unique contexts and priorities, we strive to adapt our testing process to each client to best meet their needs.
Our end goal is to find the right balance between how much we test and the value we can provide through testing, and a balance between speed and how much we document what we test.