How to make continuous testing work for your business
A deeper understanding of performance engineering for a better development model
May 31, 2021 | By James Pulley and Brian Copeland
A quick Google search on continuous testing will turn up many definitions and explanations built around “testing early, testing often and testing everywhere.” And they’re not wrong—implementing loops of automated tests as part of your development process is crucial for finding and fixing issues as quickly as possible. At a time when organisations are competing on user experience, a system that can’t perform, even in the most minor of ways, is set up for failure.
Consumer expectations across digital experiences are rising to almost unreachable heights. Waiting or loading time is no longer acceptable—everything must be instantaneous, and that’s the bare minimum. From media and entertainment (e.g., streaming a movie) to retail and ecommerce (e.g., checking out a shopping cart), businesses risk losing customers, and hurting the bottom line, if the user experience isn’t flawless. The role of continuous testing in DevOps has never been more important for adapting to user feedback and environmental shifts.
Then why is it that organisations struggle with incorporating this approach into their processes?
Continuous testing requires a cultural shift
Continuous testing is more than just plugging in automation or testing tools—you need a deep understanding of the different kinds of testing and what approach your organisation needs in order to perform.
Most organisations struggle to differentiate between load testing and stress testing, and end up focusing on the former rather than the latter. In the simplest terms, load testing focuses on true end-user performance: it loads the system with an expected volume of traffic to find and mitigate bottlenecks. Stress testing, by contrast, overloads a system as much as possible to find any potential breaking points.
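To make the distinction concrete, here is a minimal sketch of a load test in Python: it drives a callable at a fixed, expected level of concurrency and reports latency percentiles. The `fake_service` function is a hypothetical stand-in for a real service call; the harness itself is illustrative, not a replacement for a purpose-built load testing tool.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(operation, users=50, requests_per_user=20):
    """Drive `operation` at an expected, steady concurrency level and
    report latency percentiles -- the shape of a load test."""
    latencies = []

    def one_user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            operation()
            latencies.append(time.perf_counter() - start)

    # One worker per simulated user; the `with` block waits for all of them.
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(one_user)

    latencies.sort()
    return {
        "median_s": statistics.median(latencies),
        "p95_s": latencies[int(len(latencies) * 0.95) - 1],
    }

# Hypothetical stand-in for a real service call.
def fake_service():
    time.sleep(0.001)

print(load_test(fake_service, users=10, requests_per_user=5))
```

The key property is that the user count stays at the expected level; the output answers “how does the system behave under normal demand,” not “where does it break.”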
Understanding the value of implementing stress testing early and often throughout the development life cycle is critical in finding those breaking points at the lowest possible level in the microservice or piece of code, before stacking them all together into an end-to-end system.
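A stress test of a single component, by contrast, keeps raising the pressure until something gives. The sketch below, under assumed names, doubles concurrency against one callable until it either raises an error or exceeds a latency budget; `contended_component` is a hypothetical stand-in for a microservice with a serialized critical section, so its latency degrades as concurrency grows.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def timed(operation):
    """Run `operation` once and return its wall-clock duration."""
    start = time.perf_counter()
    operation()
    return time.perf_counter() - start

def find_breaking_point(operation, max_users=64, latency_budget_s=0.05):
    """Double concurrency until `operation` raises or blows the latency
    budget -- a minimal stress test of one component in isolation."""
    users = 1
    while users <= max_users:
        with ThreadPoolExecutor(max_workers=users) as pool:
            try:
                worst = max(pool.map(lambda _: timed(operation), range(users)))
            except Exception:
                return users  # the component itself failed at this level
        if worst > latency_budget_s:
            return users  # latency budget exceeded: breaking point found
        users *= 2
    return None  # no breaking point found up to max_users

# Hypothetical component whose critical section is serialized by a lock,
# so worst-case latency grows with the number of concurrent callers.
lock = threading.Lock()

def contended_component():
    with lock:
        time.sleep(0.02)

print("breaking point (users):", find_breaking_point(contended_component))
```

Run against one microservice at a time, this kind of probe surfaces the breaking point at the lowest possible level, long before the pieces are stacked into an end-to-end system.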
Performance testing vs. performance validation
In most environments, performance engineering hasn’t been fully built into the software development life cycle yet. Performance testing itself is not very common, and when it is practiced, it looks more like performance validation: testing performed when the system is ready to go live, to check end-user performance. Waiting until the very end of the development life cycle to run a performance validation test means it’s too late, and very difficult, to uncover root causes. Then, in the likely case that an issue does arise, you’re trying to reconstruct how the system was supposed to perform, leading to tension between development and architecture, because the assumptions they used to build their solutions differ from the recovered requirements.
In reality, 80% of all performance issues can be found and solved with a single user, but you need to ask questions in a structured way about response time and how resources are being managed. You can ask those questions as part of a pipelined process early and often, rather than waiting for multiuser performance testing before asking the first question about application performance. It is an axiom that if an application is not performant for a single user, it cannot be performant for many. The corollary is that we need to ask the single-user performance questions first.
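Those single-user questions can be asked by a pipeline step as simply as this sketch, which times one call and measures its peak memory allocation against explicit budgets. The budgets, the `single_user_check` helper and the `build_response` stand-in are all assumptions for illustration; the point is that the check runs on every build, not at go-live.

```python
import time
import tracemalloc

def single_user_check(operation, time_budget_s=0.2, memory_budget_kb=4096):
    """Ask the single-user questions early: how long did one call take,
    and how much memory did it allocate? Fails fast when a budget is
    exceeded, making it suitable as a CI pipeline step."""
    tracemalloc.start()
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    report = {"elapsed_s": elapsed, "peak_kb": peak / 1024}
    assert elapsed <= time_budget_s, f"time budget exceeded: {report}"
    assert peak / 1024 <= memory_budget_kb, f"memory budget exceeded: {report}"
    return report

# Hypothetical stand-in for the code path under test.
def build_response():
    return sorted(range(50_000))

print(single_user_check(build_response))
```

A failing budget here points at a specific commit and a specific code path, which is far cheaper to diagnose than the same regression surfacing in a multiuser test weeks later.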
As industries move into concepts like Agile and DevOps, it becomes even harder to build in a continuous testing mindset—but we must. We need to build continuous testing into our user stories and ask performance questions so that developers understand that performance is a characteristic of the code. It can’t just function correctly; it has to perform and function correctly.
James Pulley is the practice manager of performance engineering and testing for TEKsystems. He has spent the last 20 years helping customers with software application performance and scalability as a performance tester and engineer.
Brian Copeland is a solutions and sales enablement director for TEKsystems. With over 30 years of software development and leadership experience, he specialises in solution architecture and continuous testing.