Testing Boundaries in Swift

Testing is a fairly common practice, but there are plenty of rough edges that catch people out. One area that causes trouble is testing across module boundaries. In this post I’m going to walk through the evolution of an API and explain how to approach testing at each step, which often requires evolving the API slightly. Understanding how to test different scenarios is important because it empowers us to craft the boundaries we want without having to compromise on testing or aesthetics.

There are a few key approaches that I’m going to cover:

  • Subclassing a concrete type and substituting a test double
  • Depending on an interface (protocol) instead of a concrete type
  • Hiding concrete details behind a factory function
  • Type erasure for protocols with associated types


Problem Outline

As always here’s a contrived example - I’ve got two modules:

Main Application

The main application has a PersonRepository that uses a TransientStore (see Storage Module) as its local cache.

final class PersonRepository {
    let cache: TransientStore

    init(cache: TransientStore = .init()) {
        self.cache = cache
    }

    func fetch(id: String) -> Person? {
        return cache.get(key: id).flatMap { try? JSONDecoder().decode(Person.self, from: $0) }
    }

    func store(id: String, person: Person) {
        cache.set(key: id, value: try? JSONEncoder().encode(person))
    }
}

Usage of this repository within the main application would look like:

let repo = PersonRepository()
repo.store(id: "12345", person: .init(name: "Elliot"))
print(repo.fetch(id: "12345")) //=> Optional(Person(name: "Elliot"))
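
The Person type itself isn’t shown here; a minimal definition consistent with how it’s used above and in the tests later (my assumption) would be:

struct Person: Codable, Equatable {
    let name: String
}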

Storage Module

The Storage module contains a TransientStore which is a type that provides a simple Key/Value store. Here’s the public interface:

public class TransientStore {
    public init()
    public func get(key: String) -> Data?
    public func set(key: String, value: Data?)
}

The relationship between these types is PersonRepository --> TransientStore, which is to say that the PersonRepository has a strong dependency on TransientStore and knows the type by name.
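
Only the public interface matters for this post, but to have a mental picture here’s one way TransientStore might be implemented internally (a dictionary-backed sketch, my assumption rather than the module’s actual code):

import Foundation

public class TransientStore {
    // Simple in-memory backing store; setting a key to nil removes it.
    private var storage: [String: Data] = [:]

    public init() {}

    public func get(key: String) -> Data? {
        return storage[key]
    }

    public func set(key: String, value: Data?) {
        storage[key] = value
    }
}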


What do we want to test?

Before we dive into analysing the current structure, I want to highlight exactly what I feel is important to test for the purposes of this blog post.

From within my main application I want to test the collaboration between PersonRepository and TransientStore - this is the collaboration across the module boundary. In more concrete terms I want to be able to write tests like:

  • If I call PersonRepository.fetch(id:) it should:
    • Invoke TransientStore.get(key:) with the id value that was passed to the original function
    • If data is returned it should attempt to JSON decode it
  • If I call PersonRepository.store(id:person:) it should:
    • Attempt to JSON encode the person passed to the original function
    • Invoke TransientStore.set(key:value:) with the id from the original function and the encoded person

The above are the high-level collaborations; in reality there would be many permutations of these tests to validate the unhappy paths, like invalid input.

What I am not interested in for the sake of this blog is testing the behaviour of TransientStore. In a real project I would expect that TransientStore is well tested to ensure that it honours the public contract that it provides.


Subclass and Substitute

With this first iteration I can test this collaboration by subclassing TransientStore and overriding its various functions to create a test double. Here’s an implementation of this test double:

class TransientStoreMock: TransientStore {
    var onFetchCalled: (String) -> Data? = { _ in nil }
    var onStoreCalled: (String, Data?) -> Void = { _, _ in }

    override func get(key: String) -> Data? {
        return onFetchCalled(key)
    }

    override func set(key: String, value: Data?) {
        onStoreCalled(key, value)
    }
}

To show how this would be used - here are the two test cases I mentioned above:

final class PersonRepositoryTests: XCTestCase {
    var transientStoreMock: TransientStoreMock!
    var sut: PersonRepository!

    override func setUp() {
        super.setUp()
        transientStoreMock = TransientStoreMock()
        sut                = PersonRepository(cache: transientStoreMock)
    }

    override func tearDown() {
        transientStoreMock = nil
        sut                = nil
        super.tearDown()
    }

    func testFetch_collaboratesWithTransientStore() {
        let expectedID = "12345"

        transientStoreMock.onFetchCalled = {
            XCTAssertEqual(expectedID, $0)
            return self.encodedPerson
        }

        XCTAssertEqual(.fake(), sut.fetch(id: expectedID))
    }

    func testStore_collaboratesWithTransientStore() {
        let expectedID = "12345"
        var wasCalled = false

        transientStoreMock.onStoreCalled = {
            XCTAssertEqual(expectedID, $0)
            XCTAssertEqual(self.encodedPerson, $1)
            wasCalled = true
        }

        sut.store(id: expectedID, person: .fake())

        XCTAssertTrue(wasCalled)
    }

    var encodedPerson: Data {
        // Happy to use a force try here because if this fails there is something really wrong.
        return try! JSONEncoder().encode(Person.fake())
    }
}

extension Person {
    static func fake(name: String = "Elliot") -> Person {
        return .init(name: name)
    }
}

This works but there are a few things I’m not keen on:

  • To actually make this work I need to update TransientStore to be open so that it can be subclassed from outside its module (see the sketch after this list). This is not a great change to be making just to enable tests. The mere addition of the open access control modifier may suggest to an API user that this type is intended to be subclassed.

  • This only works for class types so we need a different solution for structs and enums.

  • There is a burden on me as a test writer to know what to override in our TransientStore subclass. If I don’t override the right things then my tests will not be isolated and could cause all kinds of side effects.
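
For clarity, here’s roughly what that open change to the Storage module’s interface looks like (a sketch of the change described in the first bullet):

- public class TransientStore {
+ open class TransientStore {
      public init()
-     public func get(key: String) -> Data?
+     open func get(key: String) -> Data?
-     public func set(key: String, value: Data?)
+     open func set(key: String, value: Data?)
  }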

Before moving on… if the above technique fits your needs and you don’t share my concerns then by all means use it - there really is no right and wrong if stuff works for your requirements.


Use an Interface

We can resolve all 3 of the issues above by making PersonRepository depend on an interface that we’ll call Store and then making TransientStore conform to the same interface. This has the effect of inverting the direction of the dependency. Doing this would give us the following (notice how the arrows all point away from the concrete details):

PersonRepository --> Store (protocol) <-- TransientStore

Let’s take a look at the changes required to get this working. We’ll update the Storage module first:

+ public protocol Store {
+     func get(key: String) -> Data?
+     func set(key: String, value: Data?)
+ }

- public class TransientStore {
+ public class TransientStore: Store {
      public init()
      public func get(key: String) -> Data?
      public func set(key: String, value: Data?)
  }

Above I’ve added Store as a protocol. TransientStore is almost identical to our first implementation except that we can drop the open modifier (which was only there to enable subclassing) and we conform to Store.

With this change in place we can update the PersonRepository to the following:

  final class PersonRepository {
-     let cache: TransientStore
+     let cache: Store

-     init(cache: TransientStore = .init()) {
+     init(cache: Store = TransientStore()) {
          self.cache = cache
      }

      func fetch(id: String) -> Person? {
          return cache.get(key: id).flatMap { try? JSONDecoder().decode(Person.self, from: $0) }
      }

      func store(id: String, person: Person) {
          cache.set(key: id, value: try? JSONEncoder().encode(person))
      }
  }

The only difference here is that all references to TransientStore have been replaced with Store except for the default argument instantiation in the initialiser.

With this the body of the tests can remain identical but we need to update the test double to conform to a protocol rather than subclassing:

- class TransientStoreMock: TransientStore {
+ class StoreMock: Store {
      var onFetchCalled: (String) -> Data? = { _ in nil }
      var onStoreCalled: (String, Data?) -> Void = { _, _ in }

      func get(key: String) -> Data? {
          return onFetchCalled(key)
      }

      func set(key: String, value: Data?) {
          onStoreCalled(key, value)
      }
  }

As promised this resolves all 3 issues mentioned above and it didn’t really require many changes. The first two are resolved because we have removed the inheritance aspect. The third issue is resolved because if we modify the protocol to add a new requirement then our tests will no longer compile. This gets me to my happy place where I am doing compiler driven development, which means I just fix all the things the compiler complains about.


I do have a gripe with the above solution: although we’ve hidden some details behind a protocol, I still had to reference the TransientStore type by name within PersonRepository, which highlights that TransientStore is still publicly visible. If we look at the public header for our Storage module again we can see that it leaks implementation details:

public protocol Store {
    func get(key: String) -> Data?
    func set(key: String, value: Data?)
}

public class TransientStore : Store {
    public init()
    public func get(key: String) -> Data?
    public func set(key: String, value: Data?)
}

As a consumer of the module I might assume that it would be sensible to use TransientStore directly as it’s freely provided in the public API.


Hiding Details

We can resolve the above issue by hiding the concrete TransientStore type entirely. The way to do this is to provide a factory function that creates a TransientStore without exposing the TransientStore type in its signature. We can then set everything on TransientStore to internal visibility:

+ public func makeTransientStore() -> Store {
+     return TransientStore()
+ }

- public class TransientStore: Store {
-     public init()
-     public func get(key: String) -> Data?
-     public func set(key: String, value: Data?)
- }
+ class TransientStore: Store {
+     init()
+     func get(key: String) -> Data?
+     func set(key: String, value: Data?)
+ }

It may not seem like we did much there apart from changing some visibility, but the end result is that the public interface for the Storage module is now much simpler:

  public protocol Store {
      func get(key: String) -> Data?
      func set(key: String, value: Data?)
  }

+ public func makeTransientStore() -> Store

- public class TransientStore: Store {
-     public init()
-     public func get(key: String) -> Data?
-     public func set(key: String, value: Data?)
- }

As you can see there is no mention of the actual TransientStore type. The function name does include the word, but that’s just a label - the type itself is not being leaked.
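
There is one knock-on change in the main application: PersonRepository can no longer name TransientStore in its default argument, so the initialiser switches over to the factory function (you’ll see this as the starting point of the diffs in the next section):

-     init(cache: Store = TransientStore()) {
+     init(cache: Store = makeTransientStore()) {
          self.cache = cache
      }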

At this point we have a nice seam that allows us to provide alternate Store implementations into our code base, whether that be in tests or in production code.


Type Erasure

Type erasure can be pretty daunting, but it’s really useful once you know when to reach for it. I don’t think I’ll ever use it often enough to remember how to do it without googling - maybe I’ll end up back on this post in the not too distant future.

Continuing with our example above we might wonder if we can make our API more generic and use any Hashable type as the key. To achieve this in Swift we need to add an associatedtype to the Store protocol and use the new type where we were previously hardcoding the String type:

  public protocol Store {
+     associatedtype Key: Hashable

-     func get(key: String) -> Data?
-     func set(key: String, value: Data?)
+     func get(key: Key) -> Data?
+     func set(key: Key, value: Data?)
  }

Updating the TransientStore to conform to this interface requires that we make the class generic:

- class TransientStore: Store {
-     func get(key: String) -> Data?
-     func set(key: String, value: Data?)
- }
+ class TransientStore<Key: Hashable>: Store {
+     func get(key: Key) -> Data?
+     func set(key: Key, value: Data?)
+ }
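
Continuing the hypothetical dictionary-backed sketch from earlier, a generic TransientStore might look like this (still just a sketch of the module’s internals):

class TransientStore<Key: Hashable>: Store {
    // The backing dictionary is now keyed by whatever Hashable type the caller chooses.
    private var storage: [Key: Data] = [:]

    init() {}

    func get(key: Key) -> Data? {
        return storage[key]
    }

    func set(key: Key, value: Data?) {
        storage[key] = value
    }
}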

The changes so far are valid, but the compiler starts getting very unhappy with our factory function for creating a TransientStore:

public func makeTransientStore() -> Store { //=> Protocol 'Store' can only be used as a generic constraint because it has Self or associated type requirements
    return TransientStore() //=> Return expression of type 'TransientStore<_>' does not conform to 'Store'
}

This isn’t going to work because the associatedtype means that we can’t use Store in the following places:

  • As the return type of this function.
  • As the type of cache variable in PersonRepository.

We have two options to get around this restriction.

1) Forget about the interface approach and go back to using the concrete type directly - just like in the problem statement.
2) Create a type eraser that acts as a wrapper over our concrete types.

As you can tell from the less-than-positive wording of option 1, I’m not going to go that route in this post. Again, if this is the right solution for your code base then go ahead and use it.


The mechanics of what we will do are:

A) Create a concrete type which follows the naming convention of adding Any to the beginning of our type e.g. AnyStore.
B) The AnyStore will be generic over the key’s type where Key: Hashable.
C) When instantiating an AnyStore<Key> you will need to provide an instance to wrap, which will need to conform to Store.
D) Replace references to Store within function return types or variable declarations with our new AnyStore<Key> type.


Let’s start with the type eraser (steps A - C):

 1 public class AnyStore<Key: Hashable>: Store {
 2     let _get: (Key) -> Data?
 3     let _set: (Key, Data?) -> Void
 4 
 5     public init<Concrete: Store>(_ store: Concrete) where Concrete.Key == Key {
 6         _get = store.get(key:)
 7         _set = store.set(key:value:)
 8     }
 9 
10     public func get(key: Key) -> Data? {
11         return _get(key)
12     }
13 
14     public func set(key: Key, value: Data?) {
15         _set(key, value)
16     }
17 }

Line 1 is defining our new type and stating that it’s generic over a Key type that must be Hashable.

Lines 5-8 are where most of the heavy lifting is done. We take in another concrete type that conforms to Store and grab its functions, placing them into the stored closures on lines 2-3. This means we can implement the Store requirements get(key:) and set(key:value:) by delegating to the functions that we captured.

With this in place we move on to updating every place where Store was mentioned as a return type or a variable’s type and change it to use our new type eraser (step D).

- public func makeTransientStore() -> Store {
-     return TransientStore()
- }
+ public func makeTransientStore<Key: Hashable>() -> AnyStore<Key> {
+     return AnyStore(TransientStore())
+ }

  final class PersonRepository {
-     let cache: Store
+     let cache: AnyStore<String>

-     init(cache: Store = makeTransientStore()) {
+     init(cache: AnyStore<String> = makeTransientStore()) {
          self.cache = cache
      }

      func fetch(id: String) -> Person? {
          return cache.get(key: id).flatMap { try? JSONDecoder().decode(Person.self, from: $0) }
      }

      func store(id: String, person: Person) {
          cache.set(key: id, value: try? JSONEncoder().encode(person))
      }
  }

There were surprisingly few changes required to get this to work.
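
One change the diffs above don’t show is in the tests: the StoreMock from earlier still conforms to Store (with Key inferred as String), but the repository now needs it wrapped in the eraser. A sketch of the updated setUp, assuming the mock property has been renamed to storeMock:

override func setUp() {
    super.setUp()
    storeMock = StoreMock()
    // Wrap the mock in the type eraser before handing it to the system under test.
    sut       = PersonRepository(cache: AnyStore(storeMock))
}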


What did we just do?

Let’s look at the public interface for the Storage module:

  public protocol Store {
-     func get(key: String) -> Data?
-     func set(key: String, value: Data?)

+     associatedtype Key: Hashable

+     func get(key: Key) -> Data?
+     func set(key: Key, value: Data?)
  }

+ public class AnyStore<Key: Hashable>: Store {
+     public init<Concrete: Store>(_ store: Concrete) where Key == Concrete.Key
+     public func get(key: Key) -> Data?
+     public func set(key: Key, value: Data?)
+ }

- public func makeTransientStore() -> Store
+ public func makeTransientStore<Key: Hashable>() -> AnyStore<Key>

We’ve had to expose a new concrete type AnyStore in order to accommodate the fact that we wanted Store to be generic. Exposing a new concrete type may seem at odds with the idea of relying on abstractions over concretions but I tend to think of this kind of type erasure as a fairly abstract wrapper that exists solely to hide concrete implementations.


Expanding our Type Erasure

To really ground our understanding let’s make our Store abstraction more powerful and make it work for any value that is Codable instead of just working with Data. The current method of working with Data directly pushes complexity onto the clients of our Store API as they have to handle marshalling to and from Data.

First let’s see how this change will actually simplify our API usage:

  final class PersonRepository {
-     let cache: AnyStore<String>
+     let cache: AnyStore<String, Person>

-     init(cache: AnyStore<String> = makeTransientStore()) {
+     init(cache: AnyStore<String, Person> = makeTransientStore()) {
          self.cache = cache
      }

      func fetch(id: String) -> Person? {
-         return cache.get(key: id).flatMap { try? JSONDecoder().decode(Person.self, from: $0) }
+         return cache.get(key: id)
      }

      func store(id: String, person: Person) {
-         cache.set(key: id, value: try? JSONEncoder().encode(person))
+         cache.set(key: id, value: person)
      }
  }

To make the above work, here are the modifications required to add the new generic to the Store protocol and feed it through our AnyStore type eraser:

- public class AnyStore<Key: Hashable>: Store {
-     public init<Concrete: Store>(_ store: Concrete) where Key == Concrete.Key
-     public func get(key: Key) -> Data?
-     public func set(key: Key, value: Data?)
- }
+ public class AnyStore<Key: Hashable, Value: Codable>: Store {
+     public init<Concrete: Store>(_ store: Concrete) where Key == Concrete.Key, Value == Concrete.Value
+     public func get(key: Key) -> Value?
+     public func set(key: Key, value: Value?)
+ }

  public protocol Store {
      associatedtype Key: Hashable
+     associatedtype Value: Codable

-     func get(key: Key) -> Data?
-     func set(key: Key, value: Data?)
+     func get(key: Key) -> Value?
+     func set(key: Key, value: Value?)
  }

- public func makeTransientStore<Key: Hashable>() -> AnyStore<Key>
+ public func makeTransientStore<Key: Hashable, Value: Codable>() -> AnyStore<Key, Value>
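
The body of the updated eraser follows the same pattern as before; here’s a sketch of the full implementation (my reconstruction, matching the public interface above):

public class AnyStore<Key: Hashable, Value: Codable>: Store {
    // Captured closures from the wrapped concrete store.
    let _get: (Key) -> Value?
    let _set: (Key, Value?) -> Void

    public init<Concrete: Store>(_ store: Concrete) where Concrete.Key == Key, Concrete.Value == Value {
        _get = store.get(key:)
        _set = store.set(key:value:)
    }

    public func get(key: Key) -> Value? {
        return _get(key)
    }

    public func set(key: Key, value: Value?) {
        _set(key, value)
    }
}

The JSON encoding and decoding that used to live in PersonRepository would presumably move inside the concrete TransientStore, becoming an implementation detail hidden behind the interface.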

Conclusion

That was a lot to go through and it got pretty difficult at the end. I covered a few different methods for testing that have various tradeoffs but are all useful for helping to test across boundaries to ensure that objects are collaborating correctly.

Hopefully the above demonstrates some of the techniques that can be used to design clean boundaries without compromising just because we couldn’t figure out a way to test things.

Leaning on the Compiler

A real strength of statically compiled languages is that you always have a really clever friend (the compiler) looking over your shoulder checking for various types of errors. Just like real friendships you’ll have times where you seem to disagree on really stupid things but at the end of the day you know they have your back. Here are some tips on ways that you can lean on your friend to really take advantage of their expertise.

Disclaimer: everything below is a suggestion and your mileage may vary. As with anything in software there are no silver bullets.


Avoid force unwrapping (even when you know it’s safe)

I’ve seen code that does a nil check to guard against an optional but then continues to use force unwraps for all subsequent accesses. Consider the following:

1 guard self.shape != nil else {
2   return
3 }
4 
5 update(with: self.shape!)

For the sake of simplicity let’s assume that shape is defined as let so it can’t possibly go nil after we have checked it.

If I saw this code I would suggest that the author rephrase it like this:

1 guard let shape = self.shape else {
2   return
3 }
4 
5 update(with: shape)

The change itself is trivial and it may be difficult to see all the advantages. The biggest win for me is that the compiler is now primed to have my back during future refactorings.


What do I mean by “the compiler is now primed to have my back”? Let’s first look at some of the problems with the first listing. The biggest maintenance headache is that we have added a line of code that cannot be safely relocated, and lines of code get relocated for many different reasons:

1) Copy and Pasta

A developer may copy and paste the working line of code (line 5) without the guard statement and place it somewhere else. Because this line has a force unwrap the compiler won’t force us to explicitly handle a nil check. In a worst case scenario we may not exercise the case where shape is nil very often, which would result in runtime crashes on a rarely reached path.

2) Refactoring

Refactoring is a dangerous game unless you have really good test coverage (quality coverage covering all branches, not just a high number from executing everything). Imagine someone removed lines 1-3 in an attempt to tidy things up - we’d be back to the point of crashes that we may or may not reproduce. This seems a little laughable with the example above but it would be really easily done if the guard statement was more complicated to begin with and we were careless with the tidy-up.

3) Bad Merge

There is always a chance in a multi-author system that people’s work may not merge as cleanly as we would like, which could result in the guard being taken out.


How is the second listing better?

I’m glad you asked. With the second listing all three scenarios above are simply non-existent. If I take the line update(with: shape) and place it anywhere that does not have an unwrapped shape in scope then the compiler will shout. This shouting happens at build time, so instead of spending potentially hours tracking down a crash I get an immediate red flag telling me I need to handle the possibility that this reference could be nil.


Avoid non-exhaustive switch statements (even when it’s painful)

Switch statements need to be exhaustive in order to compile but I often see code that does not exhaustively list out cases, instead opting to use default. Consider the following:

enum Shape {
  case circle
  case square
  case triangle
}

func hasFourSides(_ shape: Shape) -> Bool {
  switch shape {
  case .square: return true
  default:      return false
  }
}

I would argue that the function would be better phrased as:

func hasFourSides(_ shape: Shape) -> Bool {
  switch shape {
  case .square:            return true
  case .circle, .triangle: return false
  }
}

Like the first example it may seem like this is a trivial change with no immediate advantage, but it’s the future maintenance that I think is important here. With the second listing the compiler will be ready to shout at us if we expand our shapes to include a .rectangle case - this will force us to revisit the function and provide the right answer. In the first listing the compiler will not notice any issues and, worse, the code will now incorrectly report that a .rectangle does not have four sides. I would argue that this is a trickier bug than a crash because it’s non-fatal and relies on us checking the logic correctly either in an automated test or via manual testing.
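
To make that concrete, here’s a sketch of that future change (the rectangle case is hypothetical):

enum Shape {
  case circle
  case square
  case triangle
  case rectangle // newly added shape
}

// The exhaustive switch no longer compiles until we make a decision about rectangles:
func hasFourSides(_ shape: Shape) -> Bool {
  switch shape {
  case .square, .rectangle: return true
  case .circle, .triangle:  return false
  }
}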


Create real types instead of leaning on primitives

Creating real types gives the compiler even more scope to help you. Consider the following:

struct User {
  let id: String
  let bestFriendID: String
}

func bestFriendLookup(id: String) -> User {
  ...
}

With the above API it’s actually impossible to tell without looking at some documentation or viewing the source whether you should pass a User.id or a User.bestFriendID to the bestFriendLookup(id:) function.

If we were using more specific types instead of String for the id, the function might look more like this:

func bestFriendLookup(id: BestFriendID) -> User {
  ...
}

Note that I mean a real type, e.g. struct BestFriendID { ... }, not just a typealias, which would provide no extra safety.
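
Concretely, the wrapper can be as small as this (a minimal sketch; the property name is my own choice):

struct BestFriendID: Hashable {
  // Wraps the underlying string so it can't be confused with other String-based IDs.
  let rawValue: String
}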

I’m not going to build out a full solution here because you may as well check out the README of the https://github.com/pointfreeco/swift-tagged repo for how you can easily use real types to solve this problem.


Avoid Any

There are absolutely times that we have to use Any but I’m willing to bet that most of the times I encounter it in code there could have been a way to rephrase things to keep the type knowledge.

A common example, that I have definitely done myself, occurs when crossing a module boundary. If I write a new module that I call into from my main app, I may want to pass a type declared in the main app to the module as userData: Any. In this instance the new module has to take Any because I don’t want it to know anything about the main app.

This userData: Any is another potential bug just waiting to be encountered because the compiler can’t validate that I didn’t get my types mixed up. The fix for this is to make the module type generic. The best example of this is the collection types in the standard library - they don’t have an interface that works with Any; instead they are generic over an element type.
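
As a sketch of the difference (both types below are hypothetical, not from any real library):

// Untyped: the module compiles, but every consumer has to cast userData back at runtime.
final class EventTracker {
  func track(event: String, userData: Any) { /* ... */ }
}

// Generic: the module still knows nothing about the main app's types,
// but the compiler checks every call site.
final class TypedEventTracker<UserData> {
  func track(event: String, userData: UserData) { /* ... */ }
}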


Conclusion

I’ve outlined a few cases where leaning on the compiler can eliminate potential future maintenance issues. There is no one-size-fits-all solution, so I encourage people to keep sweating the details - maybe some of the above suggestions will be appropriate for your code bases. The compiler isn’t going to solve all of our coding problems but it can certainly help reduce pain if we code in a compiler-aware way - I like to ask the question “can I rephrase this code so that it won’t compile if the surrounding context is removed?”.

Hands on Generics in Swift

Generics are a powerful language feature that we use daily when using the Swift standard library without really thinking about it. Things get tricky when people first start to write their own APIs with generics; there is often confusion about what/why/when/how they should be used. This post runs through a worked example of writing a generic function, explaining the benefits as we go.


Problem Outline

Let’s imagine that we want a function that will take an array of shapes and remove any where the area is less than 100. The function should be able to handle multiple different shape types - here are two example types:

struct Rectangle: Equatable {
    let width: Double
    let height: Double

    func area() -> Double {
        return width * height
    }
}

struct Square: Equatable {
    let length: Double

    func area() -> Double {
        return length * length
    }
}

First Attempt

There are plenty of ways to tackle this problem so let’s just pick one to begin. Without generics we might try writing a function that ignores types by working with Any.

func filterSmallShapes(_ shapes: [Any]) -> [Any]

To write the implementation we need to cast to the correct type, call the area function and compare it against 100.

func filterSmallShapes(_ shapes: [Any]) -> [Any] {
    return shapes.filter {
        if let square = $0 as? Square {
            return square.area() > 100
        } else if let rectangle = $0 as? Rectangle {
            return rectangle.area() > 100
        } else {
            fatalError("Unhandled shape")
        }
    }
}

This implementation has some design flaws:

1) It can crash at run time if we use it on any type that is not a Square or Rectangle.

filterSmallShapes([ Circle(radius: 10) ]) // This will crash as we have no implementation for `Circle`

2) The size predicate logic is duplicated twice.

This is not great because it means we’ll need to update multiple places in our code base if the core business rules change.

3) The function will keep getting bigger for every type we support.
4) We get an array of Any as the output, which means we’ll probably need to cast this output to a more useful type later. Here’s a test that shows this last point in action:

1 func testFilterSmallShapes_removesSquaresWithAnAreaOfLessThan100() {
2     let squares: [Square] = [ .init(length: 100), .init(length: 10) ]
3 
4     XCTAssertEqual(
5       [ .init(length: 100) ],
6       filterSmallShapes(squares).map { $0 as! Square }
7     )
8 }

On line 6 above we have to cast back to a more specific type in order to do anything useful, in this case a simple equality check. This might not seem too bad but we must remember that this cast happens at runtime, which means that we put more pressure on our testing to ensure that we are exercising all possible scenarios in our code.


Second Attempt

Let’s introduce a protocol so that we don’t need to cast for each shape type. Doing this will resolve issues 1, 2 and 3.

protocol Sizeable {
    func area() -> Double
}

extension Rectangle: Sizeable {}
extension Square: Sizeable {}

func filterSmallShapes(_ shapes: [Sizeable]) -> [Sizeable] {
    return shapes.filter { $0.area() > 100 }
}

This implementation is a big improvement, but we now return [Sizeable] as the output, which is just as unhelpful as [Any] from the first attempt and will still require a runtime cast.
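
To make that drawback concrete, the assertion from the earlier test is essentially unchanged (a sketch reusing the same data):

let squares: [Square] = [ .init(length: 100), .init(length: 10) ]

// The cast back to [Square] is still needed before we can assert equality.
XCTAssertEqual(
  [ .init(length: 100) ],
  filterSmallShapes(squares).map { $0 as! Square }
)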


Third Attempt

To resolve all the issues we have encountered so far, we might decide to give up on a single function and simply duplicate the code for each type, keeping everything type safe:

func filterSmallShapes(_ shapes: [Rectangle]) -> [Rectangle] {
    return shapes.filter { $0.area() > 100 }
}

func filterSmallShapes(_ shapes: [Square]) -> [Square] {
    return shapes.filter { $0.area() > 100 }
}

Our test from earlier now becomes really simple without the type cast:

func testFilterSmallShapes_removesSquaresWithAnAreaOfLessThan100() {
    let squares: [Square] = [ .init(length: 100), .init(length: 10) ]

    XCTAssertEqual([ .init(length: 100) ], filterSmallShapes(squares))
}

This all works but we have reintroduced a couple of issues from our first attempt:

1) The size predicate logic is duplicated twice.
2) The function will keep being duplicated for every type we support.


The Generic Approach

This approach is a combination of the above attempts. The idea is that we’ll ask Swift to generate the various versions of our function (like in attempt 3) by providing a generic function that it can use as a blueprint. I’ll show the implementation and then explain the new bits:

func filterSmallShapes<Shape: Sizeable>(_ shapes: [Shape]) -> [Shape] {
    return shapes.filter { $0.area() > 100 }
}

The function body is identical to attempt two; the real change is in the function signature. We’ve introduced a “placeholder” type between <> that we have called Shape. This placeholder has a constraint placed upon it: it has to be a type that conforms to Sizeable, which is indicated by writing Sizeable after the :.

Our test is identical to the one written in attempt three - it’s as if we have just duplicated the function.
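
To show the single blueprint covering both shape types (values chosen purely for illustration):

let rectangles: [Rectangle] = [ .init(width: 20, height: 20), .init(width: 2, height: 2) ]
let squares: [Square]       = [ .init(length: 100), .init(length: 10) ]

print(filterSmallShapes(rectangles)) //=> [Rectangle(width: 20.0, height: 20.0)]
print(filterSmallShapes(squares))    //=> [Square(length: 100.0)]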


To understand how this all works I like to imagine the following mental model:

  • The compiler sees that I am calling the function with a type of Square.
  • The compiler will check that Square conforms to Sizeable.
    • If it does not then it will cause a compiler error.
  • The compiler will generate a specialised copy of the function where mentions of Shape are replaced with Square (sketched below).
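
Conceptually, that specialised copy looks just like the hand-written version from attempt three (this is a mental model only, not literal compiler output):

// What the compiler conceptually generates for a call with [Square]:
func filterSmallShapes(_ shapes: [Square]) -> [Square] {
    return shapes.filter { $0.area() > 100 }
}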

I have no idea about the technical implementation of this from the compiler’s point of view but externally as a language user this model works well for me.


Conclusion

Writing your first functions/types that use generics can seem a little daunting, but the steep learning curve is worth it when things start to click and you see the possible use cases as well as understand when it’s not appropriate to use generics. In the example above we end up in a position where we have no duplication of our core business logic (the area threshold check) and we have kept compile-time type safety. I think analysing a few versions of the function helps with understanding the benefits/disadvantages of our decisions and makes us more aware of the tradeoffs we are making when designing our APIs.