At Enable, we carry out rigorous testing to ensure that we achieve quality throughout our applications. We use manual testing, particularly at the later stages of development, to observe the functional behavior and full user experience. However, automated tests also play a key role when building a complex application, from verifying that behavior meets the specification to reducing costs and bugs throughout an application's life.
While automated testing is crucial to ensuring quality in any software product, it is often harder to appreciate immediately from the outside. Provided the software is meeting the current specification, why should our clients care how we got there?
In the short term adding automated tests can sometimes even slow down development, as code for the tests must be written. You might ask yourself: is this really necessary?
This article aims to describe automated testing, focusing on the test driven development (TDD) methodology, and discuss why we follow this approach for our development work.
This can initially sound strange, so let's give a more concrete example. Suppose we are given a simple requirement for our new application -- the ability to register and manage users. Writing the code that enables this will involve a number of different steps. At a minimum this would look something like:

- a page in the user interface with a form for entering the new user's details
- an endpoint that listens for sign-up requests from that form
- a service that applies the rules for registering a user
- code that saves the new user to the database
Once we are done, we will want to test our changes. So, we fire up the application and navigate to our new page. We fill out a form to sign up a new user and click submit. The progress spinner turns and the page refreshes. But, oh no, something has gone wrong! Our new user is nowhere to be seen. So, what happened? Why didn't this work and how can we guard against bugs like this arising in the future when new code is added, and complexity grows?
The code for each of these steps will typically be encapsulated into a grouping called a class. When designed well, these groupings or classes will be loosely coupled and have a clear, single responsibility. Somewhere along the line one of these classes is not doing its job correctly. But with the manual test we've just performed, it can be hard to see where things went wrong. The user interface doesn't give us this information.
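As a rough sketch, the classes behind a sign-up flow might be broken down like this. The names and structure here are purely illustrative, not a description of any actual codebase:

```python
class UserRepository:
    """Persists users. In production this would talk to the database;
    here an in-memory dictionary stands in for it."""

    def __init__(self):
        self._users = {}

    def save(self, username):
        self._users[username] = {"username": username}

    def find(self, username):
        return self._users.get(username)


class UserService:
    """Business rules: validates and registers new users."""

    def __init__(self, repository):
        self._repository = repository

    def register(self, username):
        if not username:
            raise ValueError("username is required")
        if self._repository.find(username) is not None:
            raise ValueError("user already exists")
        self._repository.save(username)


class SignUpEndpoint:
    """Listens for sign-up requests and passes them on to the service."""

    def __init__(self, service):
        self._service = service

    def handle(self, request):
        self._service.register(request.get("username"))
        return {"status": 201}
```

Each class depends only on the layer below it and does one job, which is exactly what makes each piece easy to exercise on its own.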
With automated tests, we would write code that exercises each of these classes in isolation. When all the test cases for a particular class pass it gives us confidence that this part of the system is working as expected. This testing code is stored, along with the application source code, and can then be run at any point in the future. Assuming we've written our tests well it then becomes much easier to monitor application behavior and maintain it over time.
Going back to our example, we run our automated tests. We find a failure that immediately catches our eye. One of the endpoints -- the one responsible for listening to user sign up requests and passing these on to the required database service -- is not working properly. It looks like the request is being picked up, but the service is not being called. No wonder our new user is missing. We can quickly remedy this and observe the expected behavior. Great!
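A unit test can pinpoint exactly this kind of failure. As an illustration only (the class and method names are hypothetical, and Python's unittest.mock stands in for whatever mocking tool a project uses), a test like this fails at the precise point where the endpoint forgets to call the service, rather than leaving us to infer the problem from the user interface:

```python
from unittest.mock import Mock


class SignUpEndpoint:
    """Hypothetical endpoint: receives a sign-up request and
    delegates it to the user service."""

    def __init__(self, service):
        self._service = service

    def handle(self, request):
        # If this call were accidentally removed, the request would be
        # "picked up" but the user would never be registered -- the bug
        # from our example.
        self._service.register(request["username"])
        return {"status": 201}


def test_endpoint_calls_service():
    service = Mock()
    endpoint = SignUpEndpoint(service)

    endpoint.handle({"username": "alice"})

    # This assertion is what points straight at the failure: if the
    # endpoint never calls the service, the test fails here.
    service.register.assert_called_once_with("alice")


test_endpoint_calls_service()
```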
The TDD methodology states that when writing code, we should write tests first.
These tests will cover the specified requirements for the feature being added. These requirements may be top level user requirements or smaller requirements identified when breaking down the overall design goal into isolated steps. Initially these tests will fail -- we have not written the code yet to actually do anything, we've just defined what's required.
Next comes the step where we actually write the application code. We proceed just far enough to make the tests pass. We may then choose to refactor, to tidy up our implementation, being sure to check that tests are still passing after any changes are made.
We typically then iterate this process: writing failing tests to define requirements, writing application code which then makes the tests pass, and then refactoring and tidying our implementation. This process is often termed ‘red-green-refactor’.
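To make the cycle concrete, here is a tiny, self-contained example. The feature and the discount rule are invented for illustration; the point is the order of the steps:

```python
# Red: write the test first. With no implementation yet, running this
# test fails -- it simply records the requirement in code.
def test_orders_over_100_get_ten_percent_off():
    assert price_with_discount(200) == 180
    assert price_with_discount(50) == 50


# Green: write just enough code to make the test pass.
def price_with_discount(amount):
    # Hypothetical rule: orders over 100 receive 10% off.
    if amount > 100:
        return amount * 0.9
    return amount


test_orders_over_100_get_ten_percent_off()

# Refactor: with a passing test in place, we can now safely tidy the
# implementation, re-running the test after every change.
```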
If we follow this process, we will be left with a robust application that does exactly what we want and no more. We can immediately verify this behavior, now and in the future, by running through our automated test suite. This gives confidence that things are working as we intended and greatly reduces the chance of unexpected regressions when maintaining and further developing the application.
Here are some of the key benefits of the TDD methodology. While some of these apply to automated testing in general, most are only fully realized when a TDD approach is followed.
Writing tests at the design stage serves an important role. It translates a product specification into something concrete in code -- something a computer can reason about. It forces the developer to focus and fully define the intended behavior. After writing the implementation, it allows us to measure robustly how well we meet this specification and what more is required.
The complexity of an application can quickly grow as features are added, the code is refactored, and requirements shift. We need to be confident when making changes that existing behavior is maintained. TDD allows us to quickly and automatically verify this, simply by running through our test suite.
With TDD, feedback is fast. Even for a large application, we can run through our test suite in just a few minutes. Manually testing the full behavior of a large application would take days.
When errors occur, we can also identify the source faster. A test failing for a particular class points immediately to the source of the problem. With a manual test we will only observe the effects of errors in the user interface. Errors often occur much deeper within the application, making them hard to diagnose through a manual test.
By breaking down our requirements into small, testable units we typically reach a superior design. Testable code must be loosely coupled, and classes are forced to take on clear, single responsibilities. In technical terms, testable code tends to follow the SOLID principles -- a set of software design principles that help developers build better solutions.
Code developed through TDD also strikes the right balance between simplicity and complexity: it does exactly what is specified, no more and no less.
When we write code using a TDD approach it is much more likely to be easy to develop and extend. Testable code is typically structured in such a way that it is easier to refactor, classes are simpler to adapt or extend and dependency between classes is lower.
Yes, we can monitor and address bugs that arise via our test suite. But testable code goes beyond that -- it reduces the chances of making errors and increases the chances of writing good code right now.
Consider the alternative: imagine code that does not have any clear sense of responsibility. It meets the current specification, but internally it is a collection of interdependent, monolithic classes -- a tangle of side effects and noise. When adding a new feature to this code, a developer will have great difficulty determining what changes are required and where they should go. It becomes very hard to reason about how your changes will impact the existing system, and development grinds to a standstill. In the long term, taking a TDD approach can really pay off.
Code that can be easily maintained and further developed reduces costs. Substantial costs build up when regressions occur or when development is slowed down by poor design. As time goes on and complexity grows this typically only gets worse. Getting things right from the start with TDD is a proven way to combat this.
With all this in mind, here are a few things that we do that help us get the most from the TDD approach.
To follow a true TDD approach, tests must be used right from the start; tests are always harder to add retrospectively. By starting early and testing first, we get the full design benefits of TDD and maximize our chances of maintaining good coverage.
It's unrealistic for us to achieve 100% test coverage. It's also not even desirable. Practicing TDD is only part of Enable's development strategy, and it's important to keep things in balance. Given limited resources, it is necessary to focus our efforts when testing.
What will benefit most from testing? What will provide the most value?
Calculations are features that benefit greatly from automated testing: they're hard to test manually and often have significant impact when something goes wrong. An animation, on the other hand, is hard to test automatically (but easy to test manually) and has a less significant impact if errors occur.
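A calculation like this invented, tiered-rebate example (the tiers and rates here are made up for illustration) shows why: checking every edge case by hand through a user interface is slow and error-prone, while an automated test verifies them all in milliseconds:

```python
def tiered_rebate(spend):
    """Illustrative calculation: 2% rebate on spend up to 10,000,
    plus 5% on any spend above that threshold."""
    base = min(spend, 10_000) * 0.02
    extra = max(spend - 10_000, 0) * 0.05
    return base + extra


# Edge cases that would be tedious to verify manually:
assert tiered_rebate(0) == 0          # no spend, no rebate
assert tiered_rebate(10_000) == 200   # exactly at the threshold
assert tiered_rebate(12_000) == 300   # 200 base + 5% of the 2,000 above
```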
When a bug occurs, we will always try to add a test case along with a fix to guard against future regressions.
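For instance (a hypothetical bug and fix, sketched to show the pattern), suppose usernames with surrounding whitespace were creating duplicate accounts. The fix ships together with a test that reproduces the original report:

```python
def normalize_username(raw):
    # The .strip() here is the fix for a hypothetical bug report:
    # usernames entered with surrounding whitespace were treated as
    # distinct, creating duplicate accounts.
    return raw.strip().lower()


def test_regression_whitespace_in_username():
    # Reproduces the original bug report, so the same regression
    # can never silently reappear in a future change.
    assert normalize_username("  Alice ") == "alice"


test_regression_whitespace_in_username()
```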
Automated testing is just one part of the overall picture. It is meant to substantially complement -- but never replace -- manual testing. Yes, automated tests can significantly reduce our time spent running manual tests (while providing a host of other benefits), but we still ensure that every new feature and change is thoroughly manually tested before it is delivered.
Automated testing, and TDD in particular, has a lot to offer when developing an application. TDD provides real benefits and ensures a well-structured application that continues to be fit for purpose over time. Most importantly, TDD gives us freedom to focus on what we love -- writing code and building your applications -- and not on expensive and laborious hours of manual testing and debugging!