TEKsystems introduced an enhanced testing automation framework to allow a marketing software-as-a-service client to increase its testing capacity, efficiency and velocity.
The client delivers software-as-a-service (SaaS) solutions to help marketers reach customers through multichannel communications platforms including email, marketing automation, mobile, social media and websites. Founded in 2000, the client provides marketing support services to Fortune 500 enterprises and small businesses around the globe. The client has partnered with TEKsystems since June 2012.
In the applications and digital services world, quality assurance (QA) and testing is a fact of life; without testing, software products and services simply will not work as promised.
To maintain rapid software release schedules, QA and testing must be conducted quickly and efficiently. Testing also needs to be comprehensive, as post-release glitches can incur financial penalties and damage a business's reputation. Many enterprises incorporate automated testing to keep up with intense testing demands. However, automation is not a silver bullet in itself; it has to be carefully implemented to achieve optimal results.
Building a robust, automated testing program requires developing a comprehensive framework for test design and building the right tools and processes to extend its usefulness. But this initial investment can pay great dividends: it makes testing faster, less labor-intensive and more thorough. Most importantly, building the right testing infrastructure will allow automation to scale up to meet a nearly infinite number of testing demands, thereby enabling rapid software development cycles. Many companies turn to QA and testing specialists to create an enhanced automation framework built for growth and velocity.
The client is a marketing SaaS company experiencing tremendous growth. Their core service is a cloud-based platform for clients to conduct multichannel marketing campaigns through email, text, social media, websites and more. In addition, the client rents its back-end infrastructure to other large companies. The breadth and complexity of the client’s offerings, combined with the scale of data it hosts, demands continual software advances to keep stride with the evolving technologies it supports.
Understanding the need to keep software development rolling smoothly and to the highest quality standards, the client invested in developing a QA automation program. However, their ability to scale automation stalled at a ceiling of about 15 percent of all needed tests.
The client’s four-person automation team was spread thin, and their specialized knowledge of the client’s testing environments was consumed just keeping existing tests running rather than designing a strategic testing platform. This created a risky situation for the client: losing any of the four engineers would create a serious knowledge gap and could delay software releases.
Because the automation team was struggling just to keep up with testing demands, it could not build capacity. There was no time to institutionalize their knowledge, train new team members or engage in long-term strategy building. The problems affected the broader 60-person QA team as well; every new software release demanded massive overtime to complete testing on time.
A conversation with a TEKsystems QA and Testing Services expert helped shed some light on the client’s dilemma. TEKsystems’ QA and Testing practice had partnered with the client the previous year to run manual tests of their software, and the client remembered our consultative approach and technical capabilities. Based on the success of that engagement, the client requested that we talk with their QA practice leader about possible ways to extend our partnership, and the topic of automation surfaced.
Our team suggested bringing an entirely new approach to the client’s automation efforts, one that could help them greatly increase their testing capacity and velocity. Immediately recognizing that our solution could help overcome their scaling challenges, they engaged TEKsystems to conduct an analysis of their automation program and pilot our recommended approach.
TEKsystems proposed a four-part solution to the client’s automated testing needs that would incorporate an integrated people/process/technology strategy.
Stage 1: Review and Strategy
Our first task would be to analyze the current state of the client’s testing automation program. We would explore root causes of the automation ceiling, select a set of targets appropriate for piloting our proposed framework and determine a deployment strategy. We would also provide a technology demonstration to help the client’s QA leaders conceptualize the complex project.
Stage 2: Pilot
We would build a pilot around one of the client’s testing targets to validate the viability of our proposed approach and measure results. Building upon free open-source software, we would develop a user-friendly interface for nontechnical testers to administer automated tests and train them in its use. Finally, we would document the pilot project’s framework, architecture and workflows to enable the client’s permanent QA team to replicate the results.
Stage 3: Pilot Review
In the pilot review, we would analyze the success of our approach, extrapolate enterprise-wide metrics and help the client build a business case for expanding the enhanced automation model.
Stage 4: Transition
In the transition stage, we would prepare the client’s team to adopt the new testing strategy and flesh out the expansion project plan. We would also conduct training for an expanded group of testers to learn to use the software and understand the framework.
TEKsystems successfully instituted a new strategy and processes that enabled the client to increase its automation threshold from 15 percent to 80 percent while reducing the workload of the testing team. The pilot project cut the time required of each testing engineer by 30 percent per month, and of each manual tester by one to two weeks, for a 45 percent gain in efficiency. We accomplished this by determining the exact nature of the client’s problem and building the right people/process/technology strategy to address it.
Our initial review of the client’s automation program determined that quick, organic growth had created a scalability problem—a common scenario at fast-growing technology companies. As their testing needs increased, the QA team reacted by working harder and employing work-arounds to solve problems that arose. It was time to take a fresh look at their testing processes and automation strategy.
On the people front, we determined that work was not being efficiently allocated between technical and nontechnical resources. As a first step in resolving that problem, we created a template to define roles and tasks based on skill levels. Our plan called for sorting tasks into either test design or technological execution, and then assigning the work to the appropriate skill sets.
Overhauling the technology strategy presented a larger challenge. The client’s highly skilled automated testing engineers were writing code scripts for every testing stage. While writing scripts is a quick method to start testing immediately, it becomes problematic when various stages of software development create a demand for thousands of new tests. Using non-modular, non-abstracted code means that only highly skilled resources can perform the automated tests. Worse, any change or software bug fix—even something as simple as a tweak to the login screen—triggers a whole new round of testing, forcing testers to rewrite thousands of scripts.
We addressed these deep challenges by re-envisioning a testing framework consisting of modular functional workflow blocks organized within a system map. Tests now consist of modules strung together instead of completely new scripts, and organized around a keyword template that corresponds to application components. With this framework, engineers do not have to write all new tests for each software iteration but can reuse previously created ones.
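To illustrate the idea, the sketch below shows a minimal keyword-driven framework of the kind described above. The keywords, workflow blocks and test steps here are hypothetical examples, not the client's actual system: each keyword maps to a small reusable block, and a test is just a sequence of keyword rows, so a change to one screen means updating one block rather than rewriting every script that touches it.

```python
# Minimal sketch of a keyword-driven, modular test framework (illustrative;
# keyword names and workflow blocks are hypothetical, not the client's system).

def login(ctx, user, password):
    """Reusable workflow block: authenticate and stash a session in context."""
    ctx["session"] = f"session-for-{user}"  # stand-in for real login logic
    return True

def open_campaign(ctx, name):
    """Reusable workflow block: navigate to a campaign."""
    ctx["campaign"] = name
    return True

def send_test_email(ctx, recipient):
    """Reusable workflow block: requires a live session and a valid address."""
    return ctx.get("session") is not None and "@" in recipient

# The keyword template: each entry corresponds to one application component.
KEYWORDS = {
    "login": login,
    "open_campaign": open_campaign,
    "send_test_email": send_test_email,
}

def run_test(steps):
    """Execute a test defined as (keyword, args) rows; stop on first failure."""
    ctx = {}
    for keyword, args in steps:
        if not KEYWORDS[keyword](ctx, *args):
            return False, keyword
    return True, None

# A test is data, not code: modules strung together instead of a new script.
smoke_test = [
    ("login", ("qa_user", "secret")),
    ("open_campaign", ("spring_promo",)),
    ("send_test_email", ("tester@example.com",)),
]

ok, failed_at = run_test(smoke_test)
print(ok, failed_at)  # prints "True None"
```

Because tests are plain data, previously created blocks can be recombined for each software iteration instead of being rewritten.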
To extend the usefulness of the new framework, we created a graphical user interface (GUI) to allow lower-skilled manual testers to run many of the automated tests. The difference between the client’s old testing platform and the new user-friendly GUI is akin to moving from typing commands at a command line to using a windowed operating system, which requires no specialized knowledge. Empowering lower-skilled QA team members to run automated tests will free up the highly skilled automation team to focus on strategic test design.
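The "no code required" layer can be sketched as follows. In this hypothetical example (the tab-separated format and keyword names are assumptions, not the client's actual tooling), a manual tester edits a plain-text table of steps, and the tool parses and validates it before anything runs, so typos surface as friendly errors rather than failed scripts.

```python
# Illustrative sketch of a tester-facing input layer: manual testers edit a
# plain table of steps; the tool validates keywords before any test executes.
# The format and keyword set are hypothetical examples.

KNOWN_KEYWORDS = {"login", "open_campaign", "send_test_email"}

def parse_test_table(text):
    """Turn tab-separated rows ('keyword<TAB>arg1<TAB>arg2') into step tuples,
    rejecting unknown keywords so errors surface before any test runs."""
    steps = []
    for line_no, line in enumerate(text.strip().splitlines(), start=1):
        keyword, *args = line.split("\t")
        if keyword not in KNOWN_KEYWORDS:
            raise ValueError(f"line {line_no}: unknown keyword {keyword!r}")
        steps.append((keyword, tuple(args)))
    return steps

# What a nontechnical tester would author (here via a GUI form or spreadsheet):
table = "login\tqa_user\tsecret\nopen_campaign\tspring_promo"
print(parse_test_table(table))
```

A GUI front end would simply generate and submit such tables, keeping the scripting entirely out of the tester's hands.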
The pilot project was successful, helping reduce both testing times and the demands on engineers. We provided all the necessary ingredients for the client to extend the automation framework across its entire organization. Finally, we provided a scenario calculator for the client to easily determine return on investment (ROI) in extending the program; a conservative estimate of ROI for automating all tests based on their current workload is $5.1 million. The success of the project and metrics collected provided ammunition for the client’s business case for making an enterprise-wide investment in enhanced automation.