
Composable Tests


The Test Desiderata lists 12 desirable properties of tests, two of which are:

  • Isolation—the result of running one test should be completely independent of the results of other tests.

  • Composition—um… tests should run together? Isn’t that the same thing as isolation?

No, and here’s why (I finally got an example—examples are always the hardest part.)

Isolation

If a test runs by first setting up its own test fixture, creating from scratch all the data it will use as input, then that test is guaranteed to be isolated. It doesn’t matter what order you run the tests; the results will be exactly the same. (This is the same property as referential transparency in functional programming.)

Isolation is encouraged in the xUnit testing frameworks (at least most of them) by creating a new instance of a test object for every test & running the setUp() function before running the test. (Some frameworks, notably NUnit, reuse test instances, opening the door to breaking isolation.)
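A minimal sketch of that fresh-fixture mechanism in Python's unittest (the `Counter` class is a hypothetical stand-in for an object under test, not something from the article): each test method gets a brand-new `TestCase` instance, and `setUp()` rebuilds the fixture before every test, so no test can see another test's leftovers.

```python
import unittest

class Counter:
    """Hypothetical object under test."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value


class CounterTest(unittest.TestCase):
    def setUp(self):
        # Runs before every test: the fixture is rebuilt from scratch.
        self.counter = Counter()

    def test_first_increment(self):
        self.assertEqual(self.counter.increment(), 1)

    def test_unaffected_by_other_tests(self):
        # Still 1, no matter which other tests ran first or in what
        # order -- this test never sees a shared, mutated Counter.
        self.assertEqual(self.counter.increment(), 1)
```

Run either test alone or both together, forwards or backwards; the results are identical, which is exactly the referential-transparency property.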

Composition

Say we have a suite of isolated tests & we run them all together. The suite’s success should give us confidence (be predictive in Desiderata terms), even though each individual test on its own isn’t comprehensive.

Example—say we have a test:

test1()
object := new Whatever()
actual := object.doSomething()
assertEquals(expected, actual)

We get that working so we want to implement the next bit of functionality. We copy, paste, & extend:

test2()
object := new Whatever()
actual := object.doSomething()
assertEquals(expected, actual)
actual2 := object.nowSomethingElse()
assertEquals(expected2, actual2)

I have seen tests like this that have been copied, pasted, & extended 6 or 7 times. That last test is pretty hard to read.

Notice that test2 can’t pass if test1 fails. All non-compliant programs caught by test1 will also be caught by test2. We have at least 3 options that preserve the same coverage, the same predictability:

  • Leave both tests.

  • Delete test1.

  • Simplify test2.

Pruning

From a purely aesthetic standpoint (& don’t discount aesthetics), leaving both tests as is offends my sensibilities. They are redundant! Something must be wrong.

Deleting test1 loses us another property from the Test Desiderata—tests should be specific. That’s the property of tests where, when one fails, you know exactly where the problem is.

Which leads to my preferred solution—composition. I trim test2 to avoid the purely redundant parts:

test2()
object := new Whatever()
object.doSomething()
actual := object.nowSomethingElse()
assertEquals(expected, actual)
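To show the composed pair actually running, here is a sketch in Python's unittest. The body of `Whatever` is invented to make the pseudocode executable (the article never defines `doSomething()` or `nowSomethingElse()`); the point is the shape of the two tests, not the behavior.

```python
import unittest

class Whatever:
    """Hypothetical implementation, assumed for the sake of a runnable example."""
    def doSomething(self):
        self._did_something = True
        return "done"

    def nowSomethingElse(self):
        # Assumed to depend on doSomething() having run first.
        assert getattr(self, "_did_something", False)
        return "else done"


class ComposedTests(unittest.TestCase):
    def test1(self):
        # Specific: pins down doSomething() on its own, so a failure
        # here points straight at doSomething().
        obj = Whatever()
        actual = obj.doSomething()
        self.assertEqual("done", actual)

    def test2(self):
        # Trimmed: doSomething() is only a setup step on the way to
        # nowSomethingElse(); the redundant assertion is gone because
        # test1 already covers it.
        obj = Whatever()
        obj.doSomething()
        actual = obj.nowSomethingElse()
        self.assertEqual("else done", actual)
```

Together the two tests catch everything the copy-paste-extend version caught, but each one stays short and specific.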

The composition of test1 ...

Read full article on Software Design: Tidy First? →