We all write retain cycles from time to time regardless of experience. Retain cycles are not always obvious to spot and can result in hours of debugging. There are of course great tools like the memory graph debugger available but debugging retain cycles can still be a painful and time consuming task.
The key thing to fixing retain cycles is detecting them. This post looks at some code you can incorporate into your unit tests to help with the discovery of retain cycles.
The basic idea
The ownership rules are fairly simple with ARC. An object will only be kept alive for as long as there is at least one strong reference to it.
```swift
class NoisyDeinit {
    deinit {
        print("I'm in deinit")
    }
}

var example: NoisyDeinit? = .init()

// Setting `example` (the only reference to the `NoisyDeinit` instance) to `nil`
// causes the instance to be deallocated and its `deinit` will be invoked.
example = nil
```
Equally, we know that a weak reference to an object will be nil'd out when the last strong reference is released.
```swift
var example: NoisyDeinit? = .init()
weak var weakReference = example

assert(weakReference != nil)

// Setting `example` (the only reference to the `NoisyDeinit` instance) to `nil`
// also causes `weakReference` to be set to `nil`.
example = nil

assert(weakReference == nil)
```
Knowing the above we can write our tests in such a way that we have both a strong and weak reference to our object under test. After we have finished exercising our object we can set the strong reference to nil and then verify that the weak reference is also nil. If the weak reference is not nil at this point then we have to figure out what is causing the object to stay alive (this could be a retain cycle).
Let's see how this would look. Starting from a unit test without cycle checking, we need three additions:
A new weak var to hold the object whose lifecycle we want to verify
nil'ing out the strong reference once we have finished exercising the object
The assertion that the weak variable does become nil
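As a sketch, with the hypothetical `ViewModel` standing in for whatever object is under test, the three additions look like this:

```swift
import XCTest

// A stand-in for the object whose lifecycle we want to verify.
final class ViewModel {
    func performWork() {}
}

final class ViewModelTests: XCTestCase {
    func testDoesNotLeak() {
        var viewModel: ViewModel? = ViewModel()
        weak var weakViewModel = viewModel // 1. the new weak var

        viewModel?.performWork() // exercise the object as usual

        viewModel = nil // 2. nil out the strong reference
        XCTAssertNil(weakViewModel, "Expected the object to deallocate. Retain cycle?") // 3. the assertion
    }
}
```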
Can we simplify this?
Adding 3 lines per object is a little tedious and error prone. For example, you may accidentally forget any one of these steps and the validation will no longer work.
We can write a couple of helper functions that we can add as an extension on XCTestCase that allow us to get this down to just one line per object whose lifecycle we want to validate.
First let’s add a function that allows us to validate that an object is correctly deallocated after we execute an arbitrary block of caller provided code. This will be useful for scenarios where you have an instance property that is holding onto your object.
```swift
extension XCTestCase {
    func assertNil(_ subject: AnyObject?, after: @escaping () -> Void, file: StaticString = #file, line: UInt = #line) {
        guard let value = subject else {
            return XCTFail("Argument must not be nil", file: file, line: line)
        }

        addTeardownBlock { [weak value] in
            after()
            XCTAssert(value == nil, "Expected subject to be nil after test! Retain cycle?", file: file, line: line)
        }
    }
}
```
The guard performs a little bit of validation. It's programmer error to set the assertion up when the object is already nil so we guard against that scenario
addTeardownBlock enqueues a closure to be invoked after the test has been run
The [weak value] capture list is where our weak reference to our object is created
Calling after() is where we execute our arbitrary closure
The XCTAssert is where we verify that our weak reference has been nil'd out
When using our helper function, our unit test above becomes:
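A sketch of that usage, assuming the assertNil(_:after:) helper above and a hypothetical `ViewModel` held by an instance property:

```swift
final class ViewModelTests: XCTestCase {
    // An instance property that keeps the object alive during the test.
    private var viewModel: ViewModel?

    func testDoesNotLeak() {
        viewModel = ViewModel()
        viewModel?.performWork()

        // One line: release the property in teardown, then assert deallocation.
        assertNil(viewModel) { [weak self] in self?.viewModel = nil }
    }
}
```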
In scenarios where we don’t have an instance property holding onto our object we can provide a simpler function. We’ll write it so that it just calls through to our helper above:
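A sketch of that call-through - it simply reuses the helper above with an empty after block:

```swift
extension XCTestCase {
    // For objects with no instance property keeping them alive there is
    // nothing to release in teardown, so `after` can be a no-op.
    func assertNil(_ subject: AnyObject?, file: StaticString = #file, line: UInt = #line) {
        assertNil(subject, after: {}, file: file, line: line)
    }
}
```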
The above works because if there is nothing holding onto our subject outside the scope of the test function it should be naturally cleaned up by the fact that the only strong reference has gone out of scope. This allows for an even simpler test:
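With the simpler overload, a test might look something like this (again with a hypothetical `ViewModel`):

```swift
final class ViewModelTests: XCTestCase {
    func testDoesNotLeak() {
        let viewModel = ViewModel() // the only strong reference lives in this scope
        viewModel.performWork()
        assertNil(viewModel) // verified in teardown, once this scope has gone
    }
}
```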
The two helper functions above make for a simple API that should help detect those painful retain cycles before they become a real problem. The code is simple enough to understand and doesn't require modifying existing tests too heavily (no subclassing, etc.).
For usage I am tempted in my own projects to litter this cycle checking throughout most tests and not make contrived tests that just create an object and check it gets deallocated. By putting this logic on most tests I can get a level of comfort that I am not creating retain cycles whilst exercising the various functions of an object.
The Swift Package Manager (SPM) is perfect for writing quick tools and you can even bring along your existing code from your production apps. The trick is realising that you can symlink a folder into the SPM project, which means with some work you can create a command line tool that wraps parts of your production code.
Why would you want to do this?
It's very project dependent, but a common use case would be creating support/debugging/CI validation tools. For example, a lot of apps work with remote data - in order to carry out its function the app will need to convert the remote data into custom types and use business rules to do useful things with this data. There are multiple failure points in this flow that will manifest as either an app crash or incorrect app behaviour. The usual way to debug this is to fire up the app with the debugger attached and start exploring; this is where it would be nice to have tools to help explore problems and potentially prevent them.
Caveats
You cannot use code that imports UIKit, which means that this technique is only going to work for Foundation-based code. This sounds limiting, but ideally business logic and data manipulation code shouldn't know about UIKit.
Having dependencies makes this technique harder. You can still get this to work but it will require more configuration in Package.swift.
How do you do it?
This depends on how your project is structured. I've got an example project here. This project is a small iOS app that displays a list of blog posts (don't look at the iOS project itself; it's not really important for this). The blog posts come from a fake JSON feed that doesn't have a particularly nice structure, so the app needs to do custom decoding. In order to keep this light I'm going to build the simplest wrapper possible - it will:
Read from standard in
Use the production parsing code
Print the decoded results or an error
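In code, the wrapper's main.swift might look something like the sketch below. The `Feed`/`Post` types here are stand-ins for the production parsing code (in the real project they would come from the symlinked production sources):

```swift
import Foundation

// Stand-ins for the production types; their shape mirrors the example feed
// (a top-level "posts" array whose entries need a "title" and "tags").
struct Post: Decodable {
    let title: String
    let tags: [String]
}

struct Feed: Decodable {
    let posts: [Post]
}

// Decode the data and describe either the result or the error.
func decodeFeed(_ data: Data) -> String {
    do {
        return "\(try JSONDecoder().decode(Feed.self, from: data))"
    } catch {
        return "\(error)"
    }
}

// 1. Read from standard in, 2. run the parsing code, 3. print the outcome.
let input = FileHandle.standardInput.readDataToEndOfFile()
print(decodeFeed(input))
```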
You can go wild and add a lot more features to this but this simple tool will give us really quick feedback on whether some JSON will be accepted by the production code or show any errors that might occur, all without firing up a simulator.
The basic structure of this example project looks like this:
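Something along these lines - the names are illustrative, and Sources/MyAppKit is the symlink into the app's production code:

```
.
├── MyApp.xcodeproj
├── MyApp
│   └── Code                       (the production parsing code)
├── Package.swift
└── Sources
    ├── MyAppKit -> ../MyApp/Code  (symlink)
    └── web-api
        └── main.swift
```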
Now I need to update the Package.swift file to create a new target for this code and to add a dependency so that the web-api executable can utilise the production code.
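A sketch of what that Package.swift could look like - the package and library target names are illustrative, only web-api comes from the example above:

```swift
// swift-tools-version:5.5
import PackageDescription

let package = Package(
    name: "web-api",
    targets: [
        // The production code, reached via the symlinked folder in Sources.
        .target(name: "MyAppKit"),
        // The command line tool, which depends on the production code.
        .executableTarget(name: "web-api", dependencies: ["MyAppKit"]),
    ]
)
```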
Here’s an example of the error messages we get with invalid JSON:
```
$ echo '{}' | swift run web-api
keyNotFound(CodingKeys(stringValue: "posts", intValue: nil), Swift.DecodingError.Context(codingPath: [], debugDescription: "No value associated with key CodingKeys(stringValue: \"posts\", intValue: nil) (\"posts\").", underlyingError: nil))

$ echo '{ "posts" : [ { } ] }' | swift run web-api
keyNotFound(CodingKeys(stringValue: "title", intValue: nil), Swift.DecodingError.Context(codingPath: [CodingKeys(stringValue: "posts", intValue: nil), _JSONKey(stringValue: "Index 0", intValue: 0)], debugDescription: "No value associated with key CodingKeys(stringValue: \"title\", intValue: nil) (\"title\").", underlyingError: nil))

$ echo '{ "posts" : [ { "title" : "Some post" } ] }' | swift run web-api
keyNotFound(CodingKeys(stringValue: "tags", intValue: nil), Swift.DecodingError.Context(codingPath: [CodingKeys(stringValue: "posts", intValue: nil), _JSONKey(stringValue: "Index 0", intValue: 0)], debugDescription: "No value associated with key CodingKeys(stringValue: \"tags\", intValue: nil) (\"tags\").", underlyingError: nil))
```
The first example is erroring as there is no key posts
The second example is erroring because a post does not have a title key
The third example is erroring because a post does not have a tags key
In real life I would be piping in the output of curling a live/staging endpooint, not hand-crafting JSON.
This is really cool because I can see that the production code does not parse some of these examples and I get the error messages that explain why. If I didn’t have this tool I would need to run the app manually and figure out a way to get the different JSON payloads to run through the parsing logic.
Conclusion
This post covers the basic technique of using SPM to create tools using your production code. You can really run with this and create some beautiful workflows like:
Add the tool as a step in the CI pipeline of the web-api to ensure no deployments could take place that break the mobile clients.
Expand the tool to also apply the business rules (from the production code) to see if errors are introduced at the level of the feed, parsing or business rules.
I've started using the idea in my own projects and I'm excited about how it's going to help me and potentially other members of my team.
I've had conversations where people don't see the benefit of using functions like map over hand rolling a for loop with an accumulator, with the main argument being that everyone understands a for loop, so "why add complexity?". Whilst I agree that most developers could rattle off a for loop in their sleep, that doesn't mean we should use the lower level constructs that carry more risk. I compare it to a light switch in my house: I fully understand how to connect two wires to turn a light on, but I'm not about to enter a dark room and start twiddling with wires when I can just use a switch.
The other trend I have picked up on during code reviews and discussions is that people are happy to use higher order functions like map, forEach and reduce but because they’ve not come from a functional programming background they can often misuse these functions. To make the most of these functions and to avoid surprises for future you and your team mates there are some rules that should be obeyed (some functional languages enforce these rules). The misuse I see is generally around putting side effects where they don’t belong.
I think the best way to get people on board is to start off with examples of basic iteration and progressively change the iteration style whilst analysing the potential areas for improvement with each code listing. Hopefully by the end we’ll have a shared understanding of what is good/bad and be able to identify the tradeoffs in each iteration style.
Basic iteration
The most basic iteration in most languages is some variation of for (int i = 0; i < end; i++) { ... }. This exact form is not available in Swift, but we can approximate it with the following
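A sketch of that approximation - the collection here is just for illustration:

```swift
let collection = ["red", "green", "blue"]

// An approximation of `for (int i = 0; i < end; i++)` in Swift.
var index = 0
while index < collection.count {
    print(collection[index]) // the work: a side effect per element
    index += 1               // easy to forget, or to increment too much
}
```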
There are a lot of things about this loop that could be considered bad:
There is mutable state with the index variable.
Mutable state is more difficult to reason about than immutable state and things become easier if we can remove it. I’ve found that in longer code listings mutable state can be especially tricky as you have to read more just to keep track of the mutation. Diff tools can add to the difficulty as they may decide to collapse a long listing, meaning that you need to unfold all the code before you can sanity check that mutation is happening where you expect.
We are in control of the iteration.
There are plenty of opportunities to mess this up - here are some:
Forget to increment the index
Increment the index too much
Write the condition with <= and get an out of bounds
Mess the condition up completely by adding more complexity - e.g. index < collection.count || someThingThatEvaluatesToTrue
This only works for zero based indices. ArraySlice is not guaranteed to be zero based as it’s a view on the original collection.
.count could change between loops.
It’s less efficient than newer forms of iteration as items are fetched one at a time.
It’s not very well scoped.
The code block that is being evaluated during the loop causes side effects in the surrounding scope, in this case the side effect is the mutation of index.
Whilst this iteration works you can see from the list above that there is plenty of room for improvement for making this safer and reducing complexity.
Safer indexing
We can make an improvement on the above by getting the indices from the collection itself
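A sketch of that improvement, including a slice to show it working with non-zero-based indices:

```swift
let collection = ["red", "green", "blue"]

// `indices` comes from the collection itself, so there is no manual
// index bookkeeping and no assumption that indexing starts at zero.
for index in collection.indices {
    print(collection[index])
}

// The same loop is correct for an `ArraySlice` whose indices start at 1.
let slice = collection[1...]
for index in slice.indices {
    print(slice[index])
}
```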
We’ve removed the mutable index variable.
Remember mutability is evil.
We’ve removed the side effect of mutating index.
We still have the side effect caused by calling print but that’s the core of the algorithm so that’s fine.
We are no longer in control of the iteration.
We are still in control of accessing elements at specific indices but this is an improvement.
We are no longer calculating indices as they are given to us by the collection, which means this will work with non zero indexed collections like ArraySlice instances.
Remove indexing
Most of my complaints so far have been related to indexing - both calculating the correct index and then performing the access. We can improve this situation by using for/in syntax.
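For example:

```swift
let collection = ["red", "green", "blue"]

// No indices at all: the loop hands us each element directly.
for element in collection {
    print(element)
}
```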
We’ve removed indexing/manual control of the iteration.
Iteration can be more efficient. For example, in Objective-C this would use NSFastEnumeration under the hood, which requests batches of objects instead of fetching items one by one.
The previous disadvantages are now mostly gone. To continue evaluating the tradeoffs we need to step our thinking up a level to see the improvements that can be made. Some bad things about the above are:
There is no easy way to intuit the intent of the loop without studying the whole body.
For example am I iterating to:
build up a new collection?
reduce a collection down to a single value?
purely cause side effects?
It’s not composable.
The iteration is using a code block not a closure. This means I can’t reuse the block of code like I can with a closure.
forEach
To handle all the issues raised so far we can jump to using forEach
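A sketch of the forEach version - note that the block is now a value, so it can be reused:

```swift
let collection = ["red", "green", "blue"]

// The block is a closure value, so it can be shared between call sites.
let printElement: (String) -> Void = { print($0) }

collection.forEach(printElement)
collection.reversed().forEach(printElement)
```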
There are a few nice advantages with this implementation:
We are no longer concerned with the act of iterating.
We simply provide a block and forEach will ensure that it is invoked for each element in the collection.
Improved composability.
As forEach takes a block we can also reuse the same block in multiple places, which allows us to build our programs from smaller pieces.
We removed boilerplate.
This is a big win, as iteration can be considered boilerplate, which we all know is tiresome and error prone to write. In addition, it's not DRY to have the same structures all over the place, so it makes sense to codify patterns.
The function signature hints at the iteration's intent.
When you look at this function signature
```swift
func forEach(_ body: (Element) throws -> Void) rethrows
```
you can see that there is no return value - this is a big hint that this function is all about side effects. If a function does not return a result then for it to add any value it needs to cause side effects. For clarification: in all the examples above, the side effect has been printing to stdout.
map
We get the same benefits (listed above) when we use map. The difference here is that the function signature reveals a different intent:
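The Sequence version is declared along these lines:

```swift
func map<T>(_ transform: (Element) throws -> T) rethrows -> [T]
```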
This function does have a return type, which suggests this function is all about creating something and not having side effects. The function signature also shows that we need to provide a function that transforms Element to T, which provides the full picture - this function will create a new collection by using the provided function to map values. In terms of inferring that this function should not have side effects you can derive this from a few different principles:
Single Responsibility Principle.
When applied at the function level it really is about only doing one thing. In this case that one thing is transforming each element to build a new output.
Query/Command separation.
In an ideal world a function should either be a query or a command. A query is what we have here - we invoke the function and expect a result back. A command would be where we don’t want a result but we want to do something. (There are occasions where you will have both a query/command in one function but it’s best to try and separate these things).
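Here is a side-by-side sketch of the two styles, where the transform just stringifies each number:

```swift
let items = [1, 2, 3]

// Classic iteration: a mutable accumulator, appended to inside the loop.
var results = [String]()
for item in items {
    results.append("\(item)")
}
print(results)

// map: an expression, short enough to inline at the call site.
print(items.map { "\($0)" })
```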
You can see that classic iteration introduces a few issues again:
Mutable state. results is declared as var so that it can be mutated in each run of the loop. In this listing we can see all the mutation, but in a longer code listing you might have to check that mutation isn't happening in other places. results also remains mutable after the loop has concluded - so even if you perform the correct mutation inside the loop you are not in the clear: you need to verify that no mutation happens after the loop either.
Efficiency.
In the example above map is more efficient as it will likely preallocate a new array with the correct size to take all the transformed elements. My naive implementation of classic iteration is not that clever so the array could potentially need to allocate storage multiple times.
Side effects leaving the scope of the iteration block.
In order to mutate results, the block inside the loop has to mutate a value outside its own scope. Ideally we want to keep the scope of mutation as small as possible.
It’s wordier.
We read more than we write so it’s important that our ideas are communicated succinctly.
I can’t easily infer intent.
With map I know the function is returning a new collection by applying some transform. With the classic loop I am relying on reading the whole listing and recognising the pattern of collecting transformed items. Recognising common patterns requires practice and experience so may be harder for newer developers.
It’s nearly all boilerplate.
In the classic iteration example nearly all of the code is boilerplate, which is obfuscating my really interesting algorithm that stringifies each item.
It’s not an expression.
I’ve deliberately made the map super short and inlined it inside the print. That’s the beautiful thing about map compared to classic iteration - map is an expression whereas the for/in loop is control flow, which changes the places where they are allowed to be used.
Wrap up
The issues I have seen in code reviews are that people are happy to use forEach, map and other higher order functions, but they then muddy the water by not following the intent of the functions: introducing side effects in functions that should be side effect free, or adding an accumulator to a function that should be purely about side effects. I've even seen multiple scenarios where people use a map and also have an accumulator for collecting different bits; there is almost always a more appropriate API that would prevent this misuse.
In order to take advantage of the higher level functions we should make sure that we follow the rules around side effects and single responsibility. The aim should be to use these functions to clarify intent, the intent becomes confusing if you use a map and also cause side effects or if you use forEach whilst modifying an accumulator. If we stick to the rules then it makes the task of reading code simpler as the functions behave in known ways and we don’t need to become experts at recognising patterns.
This post is in no way advocating that you go and change every iteration to use higher order functions but I do hope that you’ll be able to know the tradeoffs and decide when it is appropriate to use each tool. My real aim with this post is to allow people who have read it to have a deeper appreciation of the tradeoffs around different iteration techniques and to allow for a shared base knowledge to build ideas upon.