Using a test suite for automation

Have you ever needed to automate a task that has to run on an iOS simulator and not known quite how to make it work unsupervised? As an example - imagine we need to take some JSON files, run each one through an app and capture the generated UI so that the screenshots can be uploaded for further processing.

At a high level we want an executable that can take a directory full of input, run a task inside the simulator and place the output of all of the tasks into a different directory. We’ll start by looking at building something to run our task inside a simulator and then see how to get data in/out of the process.

Running our task

To run our code inside the simulator we can make use of a new test suite with a single test method. The test method will enumerate files found at the input directory path, execute some code on the simulator and store the results in a different directory. Here’s a basic implementation of the above:

import XCTest

class Processor: XCTestCase {
    func testExample() throws {
        let (inputDirectory, outputDirectory) = try readDirectoryURLs()
        let fileManager = FileManager.default
        try fileManager.createDirectory(at: outputDirectory, withIntermediateDirectories: true)
        try fileManager.contentsOfDirectory(at: inputDirectory, includingPropertiesForKeys: nil).forEach {
            // Run the real work for each input file and write the result out
            try eval($0).write(to: outputDirectory.appendingPathComponent($0.lastPathComponent))
        }
    }

    private func readDirectoryURLs() throws -> (input: URL, output: URL) {
        // The paths are written into the test bundle by the wrapper script
        func read(_ path: String) throws -> URL {
            URL(fileURLWithPath: try String(contentsOf: Bundle(for: Processor.self).bundleURL.appendingPathComponent("\(path)-directory-path")).trimmingCharacters(in: .whitespacesAndNewlines))
        }
        return try (read("input"), read("output"))
    }
}

The interesting things to note above are:

  • The input/output directory paths need to be written to files called input-directory-path and output-directory-path inside the test bundle
  • The eval function is a function that can read the contents of a file and return a result that we can write to the output directory - this is where all of the real work would happen
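
The eval function is left abstract on purpose. As a minimal stand-in (the JSON handling here is an illustrative assumption, not the post’s actual task), eval could parse the input file and return the bytes to write out; in the screenshot scenario this is where the UI would be rendered and captured:

```swift
import Foundation

// Hypothetical stand-in for eval(_:). The real implementation would render
// UI from the JSON and snapshot it; here we just normalise the JSON so the
// shape of the pipeline is visible end to end.
func eval(_ inputURL: URL) throws -> Data {
    let object = try JSONSerialization.jsonObject(with: Data(contentsOf: inputURL))
    return try JSONSerialization.data(
        withJSONObject: object,
        options: [.prettyPrinted, .sortedKeys]
    )
}
```

Whatever Data the function returns gets written to the output directory under the same file name, so swapping in a PNG-producing implementation requires no changes elsewhere.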

There are plenty of things that could be done to customise the above for individual use cases but it’s enough for this post.

How do we set up the input-directory-path and output-directory-path files inside the test bundle?

Wrapping the task

In order to inject the relevant paths we need to ensure that our test suite is built and then run as two separate steps. This gives us a chance to build the project, inject our file paths and then actually execute the test.

A Ruby script to do this would look something like the following:

#!/usr/bin/env ruby

unless ARGV.count == 2
  exit 1
end

input_directory, output_directory = *ARGV

def xcode_build mode
  `xcodebuild #{mode} -scheme Processor -destination 'platform=iOS Simulator,name=iPhone 12,OS=14.5' -derivedDataPath ./.build`
end

xcode_build "build-for-testing"

Dir[".build/**/"].each do |path|
  bundle = "#{path}/PlugIns/ProcessorTests.xctest"
  next unless File.directory? bundle

  write = -> name, contents do
    File.open("#{bundle}/#{name}-directory-path", 'w') do |file|
      file.puts contents
    end
  end
  write["input", input_directory]
  write["output", output_directory]
end

xcode_build "test-without-building"

This script is doing the following:

  • Basic input validation to ensure that both an input and output path have been provided
  • Run xcodebuild with the action of build-for-testing to ensure that the test suite is built and not run
  • Write the input-directory-path and output-directory-path files into the test bundle
  • Run xcodebuild with the action of test-without-building to execute the test suite

With all of these pieces in place, and assuming we saved this script as run-processor, we can execute it like this:

./run-processor /path/to/input /path/to/output


We have a pretty bare bones implementation that should demonstrate the general idea and leave plenty of scope for expansion and experimentation.

The hidden cost of `@testable`

If a Swift module is compiled with testing enabled it allows us to import that module using the @testable attribute to relax the visibility of our code. Classes and their members that are marked as internal or public behave as open, which allows subclassing and overriding in tests. Other API marked as internal becomes visible to the test target as though it were public.

This is certainly useful when it’s required but it is often reached for too eagerly, without taking into account some of the issues it can lead to. I’m going to look at a few potential design issues that could come from using @testable. This post is not saying that if you use @testable bad things will happen, but it’s worth keeping in mind some of the design trade-offs you are making.

All the issues I’m going to discuss share a common thread that revolves around my understanding of public API, so it’s worth clarifying what I mean when referring to public API.

Public API

When an API is marked as public in code that is going to be shared it represents a commitment from the author. The commitment to the consumers of the code is that public APIs will be stable and supported, and that their behaviour will not change unless some change management process is followed. It is therefore beneficial for code authors to keep the surface area of their public APIs as small as possible and to hide as much implementation from end users as they can. This setup gives the author the freedom to rework the internals as much as they like; as long as the observable public API remains unchanged, downstream users won’t bat an eyelid.

With that explained let’s look at some of the issues:

Overly specified code

Adding tests around code makes the code harder to change because we are locking in the behaviour or at the very least our current understanding of the behaviour. This is great for public APIs because we already discussed that public APIs should be stable. This rigidity is not so good for our non public implementation details that we want to be easier to change.

I’m sure we’ve all had this internal dialogue with ourselves at some point:

I only wrote these tests last week, why is it hindering my refactoring rather than helping?

This is normally a sign that we got carried away and are testing the implementation details rather than the overall behaviour. @testable makes this problem much easier to run into. There have been plenty of times I’ve hit code visibility issues in my tests and instinctively reached for @testable import instead of opting to mark the API I want to test as public. The issue is, once the big ol’ @testable switch has been flipped it’s much easier to overly specify your code and write tests at the wrong level.

There are of course exceptions but I’d try to selectively mark things as public and prefer to only test those APIs. This does not mean that the code is any less tested, it’s just that the code is being exercised indirectly. If there is code that is not exercised when going through the public API then it’s probably just dead code that needs removing.
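
To make this concrete, here is a tiny sketch (the function and its rules are hypothetical): slug(for:) is the public API we commit to and test, while the lowercasing and separator handling are details exercised only indirectly.

```swift
// Hypothetical example: test the public surface, not the internals.
public func slug(for title: String) -> String {
    // These transformation details are free to change so long as the
    // observable behaviour of slug(for:) stays the same.
    title
        .lowercased()
        .split(whereSeparator: { !$0.isLetter && !$0.isNumber })
        .joined(separator: "-")
}

// A test would assert only on the public behaviour, e.g.
// XCTAssertEqual(slug(for: "Hello, World!"), "hello-world")
```

If the splitting logic were later rewritten with a regular expression, no test would need to change, because nothing was pinned to the implementation.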

Loosens documentation and forces extremes

Something that @testable takes away from us is the documentation that we get when we mark an API as public. As the default visibility is internal, all code you write within a single module is, unless otherwise stated, visible everywhere in that module. This makes it really hard to differentiate which code should be stable and which code should be flexible.

This documenting of stable API is forced upon us, in a good way, when using multiple modules, because without marking API as public we won’t be able to see any code from the imported modules. Unfortunately this is probably not the common case, as many people come to Swift for app development where working within a single module is the norm.

To resolve this we can go to extremes and mark all implementation details as private but this then removes our ability to use the escape hatch of @testable. As a reminder this post is not saying @testable is bad as there are many times where you genuinely might get value from testing implementation details that you don’t want to be public.

But I use TDD

Using a TDD approach is not a panacea; when teamed with @testable it can actually make it easier to fall into these design traps. I’ve seen people TDD some code and come up with good solutions but then fall at the last hurdle. It’s easy to forget that tests are not the artifact we care about producing; they just help create working software. The last step that is often missed is to ask the question

Are these tests at the right level?

Keep in mind that tests make things less flexible, and the things we want to be stable are the public APIs. We should therefore see if we can restate any tests that are aimed at implementation details as tests of the public API.

I’ve been bitten by this many times, where I’ve TDD’d some code and then returned some time later to find it requires a lot of rework of the tests to get things moving. This can often cause so much friction that I’ll just opt to leave the code to rot and incur more debt.

Different compilation

For @testable import to work your Swift module needs to be compiled differently, with testability enabled. I’m assuming it’s entirely safe, as much smarter people than me decided it would be a good addition, but I can’t help feeling most uses are unnecessary. By lightly sprinkling code with public you get the benefits of a smaller public API surface area, better documentation of intent and the compiler not having to do any special work.


There are no hard rules and context is key when making decisions. As a starting point I’d mark things as public rather than use @testable as this forces you to consider how stable you want this API to be. Also I’d use the visibility modifiers on code even when working within a single module to signal intent to future readers about whether the code should remain stable or is free to change.
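
That single-module intent-signalling can be sketched like this (the type and its API are made up for illustration): the public surface is the stable, tested API, while the private helper remains free to change.

```swift
import Foundation

// Hypothetical example of signalling intent within one module:
// the public members are the stable, tested surface; the private
// helper is an implementation detail that is free to change.
public struct PriceFormatter {
    public init() {}

    public func format(pence: Int) -> String {
        "£" + poundsString(from: pence)
    }

    private func poundsString(from pence: Int) -> String {
        String(format: "%d.%02d", pence / 100, pence % 100)
    }
}
```

Tests would target format(pence:) only; if poundsString(from:) is rewritten, no test needs to change.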

Basically - don’t be afraid to use @testable just don’t make it the default tool.

Swift unimplemented()

Getting into a flow state where your tools get out of the way and you are able to express your thoughts quickly is a great feeling. This flow can be easily interrupted when things like code completion, syntax highlighting or other IDE support features start to fail. A function that I find really helps me when I want to get into a good flow and stay there is unimplemented(). The aim is to keep the compiler happy whilst I cut corners and avoid getting bogged down in unimportant details.


func unimplemented<T>(message: String = "", file: StaticString = #file, line: UInt = #line) -> T {
    fatalError("unimplemented: \(message)", file: file, line: line)
}


Let’s rewind and go through building the above function step by step as there is a lot in those 3 lines of code that can be applied to other APIs we build.

As an example let’s pretend we have a World structure that contains some globally configured services:

struct World {
    var analytics: Analytics
    var authentication: Authentication
    var networkClient: NetworkClient
}

Each one of these services could be quite complicated to construct, but for our unit tests we only need the parts under test. We could create all the instances but this might be awkward/time-consuming and it also makes the tests less self-documenting, as we are building more than required.

If we had a networkClient that we were testing then the simplest way to get this to compile without providing an Analytics instance and an Authentication instance would be like this:

var Current = World(
    analytics: fatalError() as! Analytics,
    authentication: fatalError() as! Authentication,
    networkClient: networkClient
)

The above isn’t great as the compiler will raise a great big yellow warning on each of the lines containing a `fatalError() as!` cast, because the cast will always fail.

Attempt 2

Having big compiler warnings breaks my flow and forces me to go and concentrate on details that are unimportant (this is a complete blocker if you treat warnings as errors). The next attempt would be to drop the cast so that the compiler doesn’t complain. To achieve this we need to wrap the call to fatalError in an immediately evaluated closure:

var Current = World(
    analytics: { fatalError() }(),
    authentication: { fatalError() }(),
    networkClient: networkClient
)

The compiler warning is gone but there are a few issues:

  • Immediately evaluated closures aren’t the most common thing so people might not remember this trick off the top of their head
  • There’s a lot of line noise to type with curly braces and parens
  • The error you get from a fatalError won’t be very descriptive, which makes this technique awkward to use as a TODO list

Attempt 3

Functions are familiar and allow us to give this concept a name. I think a well named function should solve all 3 of the above complaints:

func unimplemented<T>(message: String = "") -> T {
    fatalError("unimplemented: \(message)")
}

With the above our call site now looks like this:

var Current = World(
    analytics: unimplemented(),
    authentication: unimplemented(),
    networkClient: networkClient
)

We’ve now got a descriptive function that acts as good documentation that two of these dependencies are not needed for our work. Having a default message of unimplemented: might not seem very useful, but it gives more indication that this is something we need to implement and not a condition that we never expected to happen (another common use case for fatalError). Giving this concept a name also means we have a term we can search for throughout the codebase or logs.

In order for this version to work we’ve had to use a generic placeholder for the return type. This allows us to leverage type inference to just throw a call to our function in to plug a hole and keep the compiler happy.

Attempt 4

This last version is much nicer than where we started but it actually drops some usability that we got for free in Attempt 2. With the latest version of the code, if we actually invoke this function at runtime Xcode will indicate that the unimplemented function itself is to blame. It might not be too hard to track back to the call site if you have the debugger attached, but if not this doesn’t give you much to work with. With the immediately evaluated closures Xcode would highlight the actual line where the fatalError was.

Not to worry as fatalError also accepts file and line arguments. We simply collect these values and pass them along. To achieve this we use the literal expression values provided by #file and #line and add them as default arguments:

func unimplemented<T>(message: String = "", file: StaticString = #file, line: UInt = #line) -> T {
    fatalError("unimplemented: \(message)", file: file, line: line)
}


I find it really important to take a step back and examine what helps me improve my workflow. Often taking the time to work through problems helps to stimulate new ideas, find new tools or just highlight bad practices that are slowing me down.