Core Processes –
The Automated Test Development Process

When you’re developing automated tests, there are some fundamental processes that make a significant difference to the level of success you can expect to achieve. Implementing and perfecting the processes we describe below could turn a failing automated test development project into a flourishing one.

You can think of a process as a recipe for the meal you want to cook; the ingredients being the systems you need to have in place, the techniques you need to apply and the tools you'll need to use. If you use the wrong techniques, you might just succeed, if you're lucky. Using proven techniques, however, will improve your chances of success significantly. The same goes for the tools you choose. You could go off the beaten track and choose a unique tool that appears to be better for your specific recipe; but remember, the popular tools are popular for a reason. Therefore, use the tried and tested ingredients.

The three processes that have the greatest impact on the effectiveness of your automated testing projects are:

  1. Source code control
  2. Tracking development progress
  3. Continuous execution

In this article, we walk through the best practices for implementing these processes and the systems that support them. We start with source code control and explain why it is so important. Then we see why tracking your progress and making it visible is key to your success. Finally, we’ll talk about why automated test execution is vital to the development process. With development and execution working in harmony, your automated testing conveyor belt can be scaled up significantly.

Source Code Control

When you come from a manual testing background, it’s easy to think that in a small automation project source code control doesn’t matter. The fact is that source code control underpins everything. While it is certainly useful for manual test cases, in automation projects version control becomes critical. Here are some of the benefits of using source code management tools for your test development:

First, it gives you a fixed reference point. Everyone in the team understands that if they need the latest copy there is just one place to go: no arguments, no misunderstandings. There is just one place to maintain the master copy of all your automated test cases.

Second, your continuous test execution system will reference that same source when you are automatically running your tests as part of your CI/CD system. Your CI tool will pull your automated test code out of the source code repository and deploy that code to the environment where the tests are executed.

It's a simple two-step process: you commit your code to the source code repository during the development phase, and the automated execution system fetches that code during the execution phase.
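
To make this concrete, here is a minimal sketch of the execution-phase half, assuming a Git repository and pytest-based tests (the repository URL and paths are placeholders; any version control system and test runner would fit the same pattern):

    # Sketch of the execution phase: fetch the committed tests, then run them.
    # REPO_URL and WORKSPACE are placeholders for your own setup.
    import subprocess

    REPO_URL = "https://example.com/your-org/automated-tests.git"
    WORKSPACE = "/tmp/test-workspace"

    # Fetch the latest committed tests from the single master copy.
    subprocess.run(["git", "clone", "--depth", "1", REPO_URL, WORKSPACE], check=True)

    # Execute the tests in the target environment and capture the outcome.
    result = subprocess.run(["pytest", WORKSPACE], capture_output=True, text=True)
    print("PASSED" if result.returncode == 0 else "FAILED")

In practice your CI tool performs both steps for you; the sketch just shows how little machinery the pattern needs.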

The greatest benefit of using source code control, however, is the stability and consistency it brings to your code base. It provides a bedrock on which you can build out other capabilities, such as continuous execution, that will help you streamline, enhance and perfect your automation development process.

It’s easy to dismiss source code control when your automation projects are small and perhaps being run by just one person. Yet it remains important, not just for the discipline of operating with good practice, but to make the entire test process easier to build.

Tracking Development Progress

Agile projects are tracked with Scrum or Kanban boards. Waterfall projects are tracked with project plans. Whichever method you use, every task associated with your software development project needs to be tracked in some way. 

Tracking provides visibility to the whole team on the progress of the project. You should treat your automated test development as a project in the same way. The trouble is, many teams define the creation of the automated tests as some nebulous task on the project plan or scrum board. The task of designing tests never gets the attention that it requires.

When you start to plan tasks in more detail, you engender a degree of focus on the automation project that builds momentum and leads to results. This can be done by adding the automation tasks to the overall software development project; however, we prefer creating a separate scrum project with its own backlog of tasks, sprints and burn down charts. Add to that a dedicated retrospective, and you have a powerful system for tracking your test automation projects.

It makes a huge difference to make this a dedicated project in its own right. This brings a degree of focus to the process of developing and deploying automated tests. Putting all the automation tasks on a backlog ensures that everything is captured in one place.

The act of agreeing what goes into a sprint as part of a sprint planning meeting helps focus everyone on developing the automated tests that will deliver the best return. Sitting down and looking at the burn down chart retrospectively creates awareness of what’s been achieved. It also shows everyone what’s possible with the resources available for automated testing. And this, in turn, helps to build the business case for allocating more resources if product owners are pushing for greater automated test coverage.

The procedure for setting this up is fairly straightforward. Follow the same methodology as for agile projects:

  • Set up the project in a tool like Jira
  • Create the task backlog
  • Define the sprint period
  • Hold your first sprint planning meeting
  • Move tasks off the backlog and into the sprint
  • Work on the sprint tasks during the sprint
  • Check off the completed sprint tasks

Ensure the tests developed in the sprint are released in a stable state once finished – a clear definition of ‘done’ is key here.
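
If you want to script parts of this setup, most trackers expose an API for it. As a hedged sketch, here is backlog-task creation via Jira's REST API; the base URL, project key and credentials below are placeholder assumptions, not a prescription:

    # Sketch: creating an automation task on the backlog via Jira's REST API.
    # JIRA_URL, AUTH and the project key "AUTO" are placeholder assumptions.
    import requests

    JIRA_URL = "https://jira.example.com"
    AUTH = ("automation.bot", "api-token")

    def create_backlog_task(summary: str, description: str) -> str:
        """Create a Task issue on the automation project's backlog; return its key."""
        payload = {
            "fields": {
                "project": {"key": "AUTO"},
                "summary": summary,
                "description": description,
                "issuetype": {"name": "Task"},
            }
        }
        response = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                                 json=payload, auth=AUTH)
        response.raise_for_status()
        return response.json()["key"]

    print(create_backlog_task("Automate the login regression test",
                              "Scope: valid and invalid credentials; environment: QA"))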

The critical step here is moving tasks off the backlog and into the sprint, and two parts of it deserve special care. The first is a clear definition of 'done': it should cover the scope of the test, data requirements, account requirements, test environment requirements, and so on. The second is defining what 'stability' means for the test. Releasing a test that only passes 10% of the time because it's been poorly implemented only serves to undermine the credibility of the automated testing project. A good definition of 'done' should look something like this:

We’ll run this test 25 times in each of our Dev, Test and UAT environments over a one-week period. If the test passes (correctly) on the last 10 runs, then we’ll consider it complete and include it in our formal regression pack that is run against every new release.
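
A definition like that is easy to check mechanically. Below is a minimal sketch, assuming your run history is available per environment as a chronological list of pass/fail results (the data shape is an assumption; your test management tool will dictate the real one):

    # Sketch: checking the 'done' criterion above against recorded run history.
    # Assumed data shape: environment name -> chronological list of pass/fail booleans.

    REQUIRED_RUNS = 25         # minimum runs per environment
    REQUIRED_TAIL_PASSES = 10  # the last N runs must all pass

    def is_done(history: dict[str, list[bool]]) -> bool:
        """Return True if the test meets the stability gate in every environment."""
        for env in ("Dev", "Test", "UAT"):
            runs = history.get(env, [])
            if len(runs) < REQUIRED_RUNS:
                return False
            if not all(runs[-REQUIRED_TAIL_PASSES:]):
                return False
        return True

    history = {"Dev": [True] * 25, "Test": [True] * 25, "UAT": [False] + [True] * 24}
    print(is_done(history))  # True: UAT's one early failure is outside the last 10 runs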

That covers what to do. Next, we need to understand how to do it. If you’re already using Jira to track your projects, then you have three options for managing your automated test development:

The first is to incorporate your automation tasks into your existing software development project. This is an easy way to get started, and you're likely to get swept along into doing and tracking everything in the right way. If your scrum projects are already being managed well, then this is likely to rub off on your automation project, too.

Second is to start a separate project in Jira dedicated to your test automation project. The advantage of this approach is that there’s nowhere to hide, because everything on the new scrum project will be related to the automation project. If you’re not achieving much, then it will show. It will also be easier to explain why this is the case and, if there are good reasons, easier to argue the case for more resources. And when you’re achieving lots in each sprint, it will show, too. This will help you justify future automation projects.

The third way, without Jira, is the good old flip chart and post-it notes approach. This is my preferred approach, though it does depend on the team being small and co-located. If you have just one, two or three people in your team, then the good old 'stand up around a flip chart each day, moving post-it notes from left to right' routine has always seemed far more motivating to me. It is a bit more difficult to create a burn down chart at the end of the sprint, though. However, if you're committing tests to a source code control system, or have a test management tool in place, it can be simple to create other kinds of system reports to show your progress. Burn down or no burn down, this approach makes the whole setup more visible (even to those outside of your team), and somehow feels more rewarding.

Whichever method you choose to track the progress of your automated test development, the goal is to build the momentum of creating and adding tests to your regression pack regularly. When you’re doing this, it’s surprising how quickly everything starts to grow. Before you know it, you’ll find that developing automated tests has become a habit.

Continuous Execution

The goal of Continuous Integration (CI) and Continuous Delivery (CD) is to make software development work like a conveyor belt. Automated test development can use the same processes, methods and tools as other development projects to reap the same benefits.

If you use the same building blocks and follow the same workflow, this is actually pretty simple to do. Development teams have been laying the groundwork for this approach for years, so we can simply borrow from the same toolbox, follow the same instruction manuals, and apply the same principles. There’s no need to reinvent the wheel and go through the same process of trial and error.

Starting with a foundation of source code control puts you in a good position to build a solid CI solution. Without source code control in place, you might as well forget it. It needs to be embedded in your team’s workflow, even if you’re a team of one. Only if you are totally committed to using the source code control solution you have in place will it serve its purpose as a fixed reference point for everyone.

Your continuous automated test execution solution should be set up to support the processes described below.

It is critical that the tests you deploy are working and stable. Failing or unstable tests will destroy confidence in the system you've built. Automating the testing of your tests will ensure that no unstable test slips through unnoticed. The process can work something like this:

  • Develop and test your test locally
  • Check your test into your source code control system
  • CI tool detects the check-in and deploys the test to a test environment
  • CI tool executes the test and captures the test result
  • CI tool reports the result directly to the automation engineer
  • Automation engineer modifies the test, and the process repeats

Having the process of testing your own automated tests on autopilot saves time, provides quick feedback, and builds stability and reliability into your regression packs right from the start.
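
In a real setup, the 'detects the check-in' step is handled by your CI tool through webhooks or SCM polling. Purely to illustrate the mechanism, here is a hedged sketch of the loop, assuming a local Git clone of the test repository and pytest as the runner:

    # Sketch: polling a Git repository for new check-ins and running the tests.
    # A CI tool (Jenkins, GitLab CI, etc.) does this for you in practice; the
    # script only illustrates the detect -> execute -> report loop.
    import subprocess
    import time

    REPO_DIR = "/tmp/test-workspace"  # assumed local clone of the test repo

    def head_commit() -> str:
        return subprocess.run(["git", "-C", REPO_DIR, "rev-parse", "HEAD"],
                              capture_output=True, text=True, check=True).stdout.strip()

    last_seen = head_commit()
    while True:
        subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
        current = head_commit()
        if current != last_seen:
            # New check-in detected: execute the tests and report the result.
            result = subprocess.run(["pytest", REPO_DIR], capture_output=True, text=True)
            print(f"Commit {current[:8]}: {'PASSED' if result.returncode == 0 else 'FAILED'}")
            last_seen = current
        time.sleep(60)  # poll once a minute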

It’s essential that you isolate these runs from your official, stable, regression pack. You do not want unfinished tests failing and skewing the test results of your formal release. If you are using the same source code control system for both, you could tag incomplete or untested tests so they don’t get scooped up and reported on.
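
With pytest, for example, that tagging can be done with a custom marker; the marker name `wip` below is our own choice, not a built-in:

    # Sketch: keeping unfinished tests out of the formal regression run using
    # a custom pytest marker. The name 'wip' (work in progress) is our choice;
    # register it in pytest.ini (markers = wip: unfinished test) to avoid warnings.
    import pytest

    @pytest.mark.wip  # not yet stable: exclude from the formal regression pack
    def test_new_checkout_flow():
        ...

    def test_login():  # stable: included in every formal run
        assert True

The formal run then excludes the tagged tests with `pytest -m "not wip"`, while the engineer's own feedback pipeline can still run everything.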

One of the main tenets of the agile domain is fast feedback. You don’t get fast feedback following an application deployment if you have to wait for someone to manually start the automated test execution run. The moment the application is deployed and running in the QA environment, your run needs to start. This can happen in a number of ways.

i. You could chain your deployment jobs in your CI tool so that when the build and deployment jobs finish, the test run is triggered. You will need to implement a check to make sure the deployment is completed successfully before the full run starts.

ii. You could set up a monitoring job in your CI tool that looks for new releases of the application under test. This could be based on a version number embedded somewhere in the front end of the application, or perhaps an API call that returns the version number of the application. When a change in version number is detected, the test run is automatically triggered.
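
As a sketch of the second option, assuming the application exposes a version endpoint and the run is a Jenkins job with remote triggering enabled (all the URLs, the job name and the token below are placeholders):

    # Sketch: polling an application version endpoint and triggering a test run
    # when the version changes. URLs, job name and token are placeholders.
    import time
    import requests

    VERSION_URL = "https://qa.example.com/api/version"   # assumed version endpoint
    TRIGGER_URL = "https://jenkins.example.com/job/regression-run/build"
    TOKEN = "build-token"  # placeholder remote-trigger token

    last_version = None
    while True:
        version = requests.get(VERSION_URL, timeout=10).text.strip()
        if last_version is not None and version != last_version:
            # New release detected in QA: kick off the automated test run.
            requests.post(TRIGGER_URL, params={"token": TOKEN})
            print(f"Triggered regression run for version {version}")
        last_version = version
        time.sleep(60)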

The key here is that everything is set up to run as soon and as fast as possible in order to deliver the results while the interested parties are still focused on the piece of code that’s just been released.

There’s no point running all of this if you don’t provide accessible, accurate and easy-to-consume reports. Pay careful attention to how the relevant people are notified about new reports. In an age of email overload, an email notification is likely to get lost in the noise. Set up an RSS feed connected to a company chat system (like Slack) or employ a desktop notification widget. There are many ways to set up the notification mechanism. The goal is to make notifications as visible as possible.
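
For example, if your team uses Slack, an incoming webhook makes the notification a one-liner; the webhook URL and the report details below are placeholders:

    # Sketch: pushing a report notification into Slack via an incoming webhook.
    # The webhook URL and report link are placeholders.
    import requests

    WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

    def notify(report_url: str, passed: int, failed: int) -> None:
        text = f"Regression run finished: {passed} passed, {failed} failed. Report: {report_url}"
        requests.post(WEBHOOK_URL, json={"text": text}, timeout=10).raise_for_status()

    notify("https://jenkins.example.com/job/regression-run/42/", passed=118, failed=2)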

Once people are aware that the reports are ready, you need to be absolutely certain that what's displayed is accurate. Incorrect test statuses, missing tests, and tests that simply fail to run will destroy all confidence in your automated testing system. Once lost, that confidence is very difficult to regain. It is better to start with a small number of reliable automated tests than with lots of tests where half of them are worthless. Remember that your reports should be tested, too, to ensure that they are conveying correct information.

Once you have the notification mechanism in place and you’re confident of the reliability of the data, it all comes down to how you report that data. Pages of incomprehensible data don’t help anyone. A few targeted, meaningful charts with a drill-down capability are usually best. Keep your reporting short and to the point.

Then, along with a few nice pie charts and bar charts, provide a way for users to quickly get to the underlying data. There’s no point in providing a reporting capability if it’s next to impossible to work out what the cause of a test failure is. You need to strike a balance between providing an easy-to-consume, high-level overview and providing detailed data that allows people to find the root cause of any issues.

A typical dashboard might include:

  • A pie chart showing the total number of tests run, segmented by Passed, Failed and Skipped results
  • A bar chart showing the total number of tests run and the application area they were run against, also segmented by Passed, Failed and Skipped results
  • A bar chart showing the breakdown and results from the different types of tests (Unit, API and UI tests)

An example of drill-down functionality would be clicking on the failed-test bar chart and being shown a list of those tests. Each of the failed tests on the list could be hyperlinked to the test logs. If the tests are run from Jenkins, you can link directly to the console logs and test run reports in Jenkins. This approach makes it quick and easy to get to the details.

One of the most difficult aspects of reporting is how you collate your test results. You could display all tests run in the past 24 hours. However, if you selectively ran different tests over that period of time, this list won’t be helpful. You could show the last complete test run, but then how would you combine results from Unit, API and UI runs? And what do you do if your team is making releases of the application so frequently that it’s impossible to complete a full test run against one release? In this situation, you need a way to collate test results from multiple releases.

The best way to approach this is to construct a list of all the tests that have been run, then filter them by time period or a range of application releases, and then collate the results from this filtered list of tests. When a test has been run multiple times, you only need to display the latest result. Where a test hasn't been run, the record should be marked as 'untested'. The ability to collate test execution results effectively and present an accurate picture of the state of the application is difficult to achieve, but very important. The key to fitting everything together is a good, well-configured test management tool.
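
A minimal sketch of that collation logic, assuming each run record carries a test name, a timestamp and a result (the record shape is an assumption; a test management tool would hold the real data):

    # Sketch: collating results so each test shows only its latest outcome, and
    # known tests with no run in the window show as 'untested'.
    from datetime import datetime

    def collate(runs, all_tests, since):
        """runs: iterable of (test_name, timestamp, result) tuples."""
        latest = {}
        for name, when, result in runs:
            if when >= since and (name not in latest or when > latest[name][0]):
                latest[name] = (when, result)
        return {name: latest[name][1] if name in latest else "untested"
                for name in all_tests}

    runs = [
        ("test_login", datetime(2023, 5, 1, 9, 0), "failed"),
        ("test_login", datetime(2023, 5, 2, 9, 0), "passed"),   # latest result wins
        ("test_search", datetime(2023, 5, 1, 9, 0), "passed"),
    ]
    print(collate(runs, ["test_login", "test_search", "test_checkout"],
                  since=datetime(2023, 5, 1)))
    # {'test_login': 'passed', 'test_search': 'passed', 'test_checkout': 'untested'}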

Summary

In this article, we’ve looked at three processes that are essential to developing and perfecting the automated test development process: Source Code Control, Tracking Development Progress and Continuous Execution. Although there are other areas of the overall infrastructure we could discuss, we’ve focused on the development process. Sometimes it is helpful to focus on just one aspect of automated testing in order to take it to the next level.

We saw that source code control is important, not only for team collaboration, maintaining previous revisions of code, and tracking changes, but it also provides the bedrock for building a continuous test execution system. It’s like the conveyor belt in your factory. You still have to write the tests and put them on the conveyor belt, but then you simply wait for them to run and check that they’re doing the job you set them to do. A good test automation version control system is fundamental to developing high quality, stable and reliable automated tests. 

The second part we discussed was tracking the progress of your automated test development project. One of our main goals is to create a significant number of high-quality test cases. 

The best way to achieve this is to get into the habit of writing them on a regular basis. Managing this with an agile process that demonstrates progress with burn down charts helps to maintain focus through routinely prioritising which tests to automate. It is essential to set this up separately from the main scrum project or your automation tasks may get buried beneath the application development tasks and forgotten about.

Thirdly, we looked at continuous execution. Whilst moving slightly away from the area of development, this is a key area of support for the development process. If you have a team of developers creating automation code that just sits there waiting for someone to deploy it, run it and report on it, you'll end up with a list of manual tasks that will become a bottleneck for the whole automation project. Get the automated deployment and execution in place, and let the results from those runs come to you. Make sure you're driving the process, so that the process isn't driving you.

In short, focus on automating the development process as much as you focus on automating the tests. This is the only way your automation system will scale to become a valuable part of the application development process, and of your organisation as a whole.