Strategies for Effective Acceptance Testing – Part I

David Farley

Automated testing is at the heart of any good Continuous Delivery process, and I see automated Acceptance Testing as one of the foundations of any effective testing strategy.

In my book ‘Continuous Delivery’, we defined Acceptance Testing as asserting that the code ‘did what the business wanted it to do’. The distinction that we made is between that and unit-test-based TDD, which is really focused on asserting that the code does what the developer thinks it should. Both of these perspectives are important, and they complement one another, but for the rest of this post I want to talk about Acceptance Testing.
Good Acceptance Tests are hard to get right, but there are a few tricks that make it easier. The problem with acceptance tests is that they are big and complex: we want them to assert that the code behaves as we expect, in life-like circumstances. Often organizations have some experience of trying to write tests a bit like Acceptance Tests and have seen them go horribly wrong. The key to getting such tests right is, as usual, good software design. We should work hard to abstract our test implementation appropriately and separate the various concerns involved in creating it. When we do this, we tend to side-step many of the tricky problems that teams commonly face.

Domain Specific Languages for Testing

My preferred approach to Acceptance Testing is to create a simple Domain Specific Language (DSL) in which to define my test cases; this approach helps to deliver the characteristics of a good Acceptance Test that I describe at the end of this post. This is mostly about good software design. It is important to separate the concerns of the technical means of interacting with the System Under Test (SUT) from the use of the test case as an executable specification of the desired behaviour of the SUT. I want the DSL to hide all details of how the tests talk to the SUT, so that anyone who understands the problem domain can understand what the test is asserting. To put it in more technical language, the idea of the DSL is to reduce the coupling between the test case and the SUT.
I generally create my testing DSL as something called an ‘internal DSL’: that is, a DSL that is hosted by some other programming language or technology. I have done this myself using Java, Python and FitNesse, and have seen it done in lots of other languages. The advantage of this approach is that developers can use the language, tools and infrastructure that they are familiar with to develop and maintain the tests. For example, all of my Java DSL test cases ran with the JUnit test runner, and my Python versions with the Nose test runner.
People are often wary of the idea of a DSL; it sounds like a complex, expensive thing to create, but it really is not. It is much more about the design approach, the application of a good separation of concerns, than about complex technology.
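To show quite how little machinery is needed, here is a minimal sketch, in Java, of what the heart of such a testing DSL might look like. The names (TradingDriver, TradingDsl) and the ‘name: value’ parsing are my illustrative assumptions for this post, not the actual code from the project whose test appears below:

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: the real project's code is not shown here.
// The test case talks to the DSL; everything protocol-specific lives
// behind the driver interface.
interface TradingDriver
{
    void placeOrder(Map<String, String> params);
    Map<String, String> waitForExecutionReport();
}

class TradingDsl
{
    private final TradingDriver driver;

    TradingDsl(TradingDriver driver)
    {
        this.driver = driver; // e.g. a FIX, web-UI or binary-API driver
    }

    void placeOrder(String... args)
    {
        driver.placeOrder(parse(args));
    }

    void waitForExecutionReport(String... expectations)
    {
        Map<String, String> report = driver.waitForExecutionReport();
        for (Map.Entry<String, String> expected : parse(expectations).entrySet())
        {
            String actual = report.get(expected.getKey());
            if (!expected.getValue().equals(actual))
            {
                throw new AssertionError(expected.getKey() + ": expected '"
                        + expected.getValue() + "' but was '" + actual + "'");
            }
        }
    }

    // Turn "name: value" strings into a map. A bare argument such as
    // "instrument" is assumed here to be a name with no explicit value.
    private static Map<String, String> parse(String... args)
    {
        Map<String, String> params = new HashMap<>();
        for (String arg : args)
        {
            int colon = arg.indexOf(':');
            if (colon < 0)
            {
                params.put(arg.trim(), "");
            }
            else
            {
                params.put(arg.substring(0, colon).trim(),
                           arg.substring(colon + 1).trim());
            }
        }
        return params;
    }
}

The important design point is that the test case only ever talks to the DSL, and the DSL only ever talks to the driver interface; nothing technology-specific can leak into the test case itself.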
Here is an example of a real acceptance test from one project that I worked on:

@Channel(fixApi, dealTicket, publicApi)
@Test
public void shouldSuccessfullyPlaceAnImmediateOrCancelBuyMarketOrder()
{
    trading.placeOrder("instrument",
                       "side: buy",
                       "price: 123.45",
                       "quantity: 4",
                       "goodUntil: Immediate");

    trading.waitForExecutionReport("executionType: Fill",
                                   "orderStatus: Filled",
                                   "side: buy",
                                   "quantity: 4",
                                   "matched: 4",
                                   "remaining: 0",
                                   "executionPrice: 123.45",
                                   "executionQuantity: 4");
}

The annotation at the top defines the channels for which this test is valid. That means that this single test case could run successfully via a FIX (Financial Information eXchange protocol) interface, a Web UI (the dealTicket) and a custom binary API. This works because there is nothing in the test case itself, apart from the channel specification, that is technology-specific. This example was written in Java, so the tests ran under JUnit.
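The project's real @Channel mechanism used its own test runner, which I won't reproduce here, but as a rough sketch of the idea, stock JUnit 4 parameterization can run one test case once per channel. DriverFactory is a hypothetical stand-in for whatever maps a channel name to a concrete TradingDriver from the earlier sketch:

import java.util.Arrays;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Illustrative sketch: one way to run the same DSL-based test case
// over each channel, using a stock JUnit 4 parameterized runner.
@RunWith(Parameterized.class)
public class PlaceOrderAcceptanceTest
{
    @Parameters(name = "{0}")
    public static Iterable<Object[]> channels()
    {
        return Arrays.asList(new Object[] {"fixApi"},
                             new Object[] {"dealTicket"},
                             new Object[] {"publicApi"});
    }

    private final TradingDsl trading;

    public PlaceOrderAcceptanceTest(String channel)
    {
        // DriverFactory is hypothetical: something that maps a channel
        // name to a concrete TradingDriver implementation.
        this.trading = new TradingDsl(DriverFactory.forChannel(channel));
    }

    @Test
    public void shouldSuccessfullyPlaceAnImmediateOrCancelBuyMarketOrder()
    {
        trading.placeOrder("instrument", "side: buy", "price: 123.45",
                           "quantity: 4", "goodUntil: Immediate");
        trading.waitForExecutionReport("executionType: Fill", "orderStatus: Filled",
                                       "side: buy", "quantity: 4", "matched: 4",
                                       "remaining: 0", "executionPrice: 123.45",
                                       "executionQuantity: 4");
    }
}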

What is a good Acceptance Test?

So what are the characteristics of a good Acceptance Test?

  • Relevance – A good Acceptance Test should assert behaviour from the perspective of some user of the SUT.
  • Reliability/Repeatability – The test should give consistent, repeatable results.
  • Isolation – The test should work in isolation and not depend on, or be affected by, the results of other tests or other systems.
  • Ease of Development – We want to create lots of tests, so they should be as easy as possible to write.
  • Ease of Maintenance – When we change code that breaks tests, we want to home in on the problem and fix it quickly.

All of these characteristics are supported and enhanced by a good DSL; the sketch below shows one way in which a DSL can help with isolation, for example.
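Because the DSL sits between the test case and the SUT, it can translate the friendly names used in test cases into names that are unique to each test, so that tests never collide even when they share a running system. Here is a minimal sketch of that idea in Java; the Names class is my illustrative invention, not code from the project:

import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: give every test its own unique version of the
// names it uses, so that tests cannot interfere with one another.
class Names
{
    private static final AtomicLong COUNTER = new AtomicLong();

    // A test that says "instrument" actually creates and trades, say,
    // "instrument-42" in the SUT; another test's "instrument" becomes
    // "instrument-43", so the two can never collide.
    static String unique(String alias)
    {
        return alias + "-" + COUNTER.incrementAndGet();
    }
}

If the DSL applies a translation like this on every call, test authors keep writing simple, meaningful names while gaining isolation for free.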

