It’s hard to be a software developer these days without talking about writing automated tests. Gone are the days of throwing software builds over the proverbial wall to a team of testers that sit and click pages all day. No longer do developers sling code around like wizards to make something work without ever checking that it does. They are in charge of writing tests that verify their code works as it should.
But one must ask: what is accomplished by having developers write automated tests? What’s the point? It can be easy to accept the status quo, but it’s important to understand just what benefits you and your team are getting when you as developers commit to writing automated tests.
A Quick History
For many years within software engineering, there was a big divide between developers and testers. Developers simply wrote code and handed a build of the software to a QA (quality assurance) team whose job was to verify and validate the software behaved as expected. This concept is sometimes called a “handoff to QA.” Others call it “throwing it over the wall.”
This process was good in many ways. Testers focused on verifying software, and a great tester was someone who could craft unique scenarios or use cases the development team had never even considered. It also kept developers from going rogue and deploying code they thought was good enough without it being adequately tested (a control known as separation of duties).
However, it was slow. Even with decent tools, the handoff process could take a while (which build was it again? Are you sure your code made it into that build anyhow? Is there any new configuration?). Additionally, software builds might be sent to QA with lots of bugs that aren’t hard to fix but require a lot of time from the QA team to test, track, and file tickets for each one.
Over time, the testing teams began to write software of their own: automated test suites for validating a software build. These tools saved precious time, since once a tester wrote a test, it continued to verify the same functionality on future builds (more on this later). As these automated tools grew more popular, they steadily improved. Automated tools for fuzzing, load testing, and traditional black-box testing all exist as a result.
But there were still some limitations.
- Most tests were still black-box tests. The software behavior is evaluated by inputs and expected outputs without any context as to how that behavior is established.
- Software builds were still being delivered to QA with lots of simple bugs. Most of these bugs were issues like default values, spelling mistakes, or even simple mathematical errors. All trivial to fix, but they slipped through the proverbial cracks.
- Handoffs were still slow. Managing the correct version of software builds, installing them into test harnesses, etc. all have overhead.
- Animosity. In some cases, testers and developers had difficulty working together. Testers accused developers of shipping terrible code; developers were adamant that the test cases being created described impossible situations; and so on.
These factors led to developers writing tests themselves before ever sending a build to the QA team. These tests focused more on white-box testing than black-box testing, which gave way to more robust assertions about the internals of the software itself. Developers also built tools to run these tests in an automated way.
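To make the distinction concrete, here is a minimal sketch in Python (the `apply_discount` function and its discount codes are hypothetical): a black-box test exercises only inputs and outputs, while a white-box test leans on knowledge of the internals.

```python
def apply_discount(subtotal: float, code: str) -> float:
    """Return the subtotal after applying a discount code."""
    discounts = {"SAVE10": 0.10, "SAVE20": 0.20}
    rate = discounts.get(code, 0.0)  # unknown codes apply no discount
    return round(subtotal * (1 - rate), 2)

# Black-box style: inputs and expected outputs, no knowledge of internals.
assert apply_discount(100.0, "SAVE10") == 90.0

# White-box style: the test relies on knowing the internals -- here, that
# an unknown code silently falls back to a zero rate rather than raising.
assert apply_discount(100.0, "BOGUS") == 100.0
```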
So with all this context and history, what are the advantages of developers writing automated tests?
One core benefit is speed. Automated test suites run faster than a team of testers manually “poking” a system and evaluating the output. This helps you verify your software faster, which means your time-to-ship is shorter. Shipping quickly is crucial for any software solution, so reducing time-to-ship is vital.
Ironically, the most significant “speed up” outside of the tests themselves is that the testing hand-off between teams is eliminated (at least to a degree). When integrated with a CI/CD process, there isn’t an email telling a team to download the build, install it, set up their tests, etc. Tests just run. Teams get notified if they fail or pass. Easy — well, it’s easy after the hard work to set it up properly 🙂.
Similarly, all the things that happen after a build has been tested can also experience speedups. Automated tests can act as a catalyst for automating and standardizing things like your version numbers, or even force you to develop a repeatable deployment process. All of this comes back to reducing the time-to-ship metric from earlier — progress in the right direction.
Write Once, Run Many Times
Another benefit is that once an automated test is written, it can continue to run essentially “for free.” Free in the sense that once you have written a test that verifies some aspect of the system, it remains a valid test until the system’s behavior changes.
This kind of test is known as a regression test. These are tests that continue to verify that new development efforts don’t break existing behaviors, something crucially important! Businesses can’t just break existing behaviors to introduce a new feature; they would lose too many customers.
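As a sketch of the idea, assuming a hypothetical `slugify` helper: the test below was written once when the feature shipped, and every future build reruns it to confirm that new work hasn’t broken the existing behavior.

```python
def slugify(title: str) -> str:
    """Turn a page title into a URL-friendly slug."""
    return "-".join(title.lower().split())

def test_slugify_preserves_existing_behavior():
    # Written once when the feature shipped; now it runs on every
    # build, guarding this behavior against future changes.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Leading and trailing  ") == "leading-and-trailing"

test_slugify_preserves_existing_behavior()
```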
Of course, this brings up an additional concern: test suites are more code to maintain. As such, you don’t get the benefits of automated tests 100% free. As any software engineer knows, maintainability is paramount for a software product. It’s difficult to continue building and iterating on software if you can’t maintain it. It will cripple your ability to respond to the needs of your users.
In the same way, unmaintainable tests can slow development down just as much as they can speed it up. Unreliable tests, tests that take hours to run, inability to change a test due to a change in system behavior, etc. are all detrimental to your tests’ ability to verify your software.
Therefore, you must be vigilant about writing maintainable and efficient tests. An excellent rule of thumb here is: treat your testing code as production code! Six months from now, when a new feature rolls around, you will be thankful you held your tests to the same standards of high-quality code, because they will be easy to understand and change.
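A small sketch of what “tests as production code” can mean in practice: duplicated setup is pulled into one shared factory (the `make_user` and `can_log_in` helpers are hypothetical), so a change to the user model touches one place instead of every test.

```python
def make_user(name="alice", active=True):
    # Shared factory: if the user model changes, update this one
    # helper instead of dozens of tests.
    return {"name": name, "active": active}

def can_log_in(user):
    return user["active"]

def test_active_user_can_log_in():
    assert can_log_in(make_user())

def test_inactive_user_cannot_log_in():
    assert not can_log_in(make_user(active=False))

# Run the tests directly here; a runner like pytest or nose would
# discover these test_* functions automatically.
test_active_user_can_log_in()
test_inactive_user_cannot_log_in()
```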
Catch Bugs Sooner in the Development Lifecycle
One of the important things to realize about old-school manual black-box testing is that it hurts a team’s ability to release because bugs are caught late in the game. The later a bug is detected, the more expensive it becomes to fix. When a testing team found a bug, they had to write it up formally, file it, get the development team to look at it, prioritize it, and on and on.
But what if a developer catches a bug while writing code? The cost is cheap! The developer can fix the bug right then and there without having to involve other team members or incur the overhead of filing a new bug.
Automating tests helps this happen. With a framework like JUnit, nose, or whatever tool fits your stack, developers can write tests for their code immediately and run them with ease. Like most things in life, the more you remove the barriers to doing something, the more likely you are to do it. Testing is the same way.
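For instance, a quick unit test written alongside the code can catch the kind of simple mathematical slip mentioned earlier. This sketch uses Python’s built-in unittest module; the `monthly_price` function is hypothetical.

```python
import unittest

def monthly_price(annual_price: float) -> float:
    # Dividing by 10 instead of 12 here is exactly the kind of trivial
    # bug a unit test catches the moment the code is written.
    return round(annual_price / 12, 2)

class PricingTests(unittest.TestCase):
    def test_monthly_price_is_one_twelfth_of_annual(self):
        self.assertEqual(monthly_price(120.0), 10.0)
        self.assertAlmostEqual(monthly_price(99.0), 8.25)

# Run the suite programmatically; normally a test runner or CI job does this.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PricingTests)
unittest.TextTestRunner(verbosity=0).run(suite)
```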
Specialized Test Teams Can Still Be Worth It
Don’t let me give you the impression that specialized testers or having manual system/acceptance tests are irrelevant. They most certainly are not! Many software teams rely on having additional controls and checks maintained by separate teams to verify software builds.
In my experience, this additional verification is essential when dealing with a diverse set of users. This is even more true when the average user is nothing like the average developer on the development team. If there is anything we can learn from software security research, it’s never to trust input from users. Therefore, having a separate QA team craft intentionally nefarious inputs can be worth it to verify your software handles them appropriately.
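As a sketch of that idea, assuming a hypothetical `parse_age` function that validates user-supplied input: tester-style cases deliberately probe hostile or malformed input and assert that it is rejected.

```python
def parse_age(raw: str) -> int:
    """Parse a user-supplied age, rejecting anything suspicious."""
    value = raw.strip()
    if not value.isdigit():  # also rejects "", "-1", and injection strings
        raise ValueError("age must be a non-negative integer")
    age = int(value)
    if age > 150:
        raise ValueError("age is implausibly large")
    return age

# Adversarial cases a QA tester might craft: each must be rejected.
for bad in ["-1", "1e9", "'; DROP TABLE users;--", "9999", ""]:
    try:
        parse_age(bad)
        assert False, f"expected rejection of {bad!r}"
    except ValueError:
        pass

# Well-formed input still parses.
assert parse_age(" 42 ") == 42
```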
That is just one example, though. There are tons more, I’m sure.
One last thing to note here is that QA teams can still contribute tests to an automated suite that integrates into the same build pipeline as the software itself. Just as a developer might write a test and include it in every build, so might a tester, and both test suites need to pass for a green build.
This is useful because it continues to remove one of the biggest barriers between teams that we saw earlier: the handoff process. By continuing to use integration tools to cut down on handoff time, we continue to reduce our time to ship.
To sum it all up: having developers write automated tests themselves, rather than always relying on a separate team, will likely shorten time-to-ship because tests run faster, bugs are caught earlier, and handoff time is reduced. Even with an independent QA team, leveraging CI/CD tools can keep handoffs small while keeping separation-of-duties controls in place where needed.