Quick Tip: Test functions with DI

Testing your code's collaborators is really important, but how do you test functions like UIImage(named:) or NSLocalizedString(_:tableName:bundle:value:comment:)?


What are we testing?

We are not interested in testing whether methods like UIImage(named:) actually work, as that's Apple's job, but we should verify that we invoke them with the correct arguments.

Take the following example

class Images {
    
    func jumpSprite(atIndex index: Int) -> UIImage? {
        return UIImage(named: "jump_\(String(format: "%03d", index))")
    }

}

The above is a simple call to UIImage(named:) with some simple logic to build the image name based on the passed index. The thing that we need to test here is the image name construction logic: we would like our tests to verify that calling the function with an input of 1 invokes UIImage(named: "jump_001").


Dependency Injection to the rescue

To create a seam that allows testing the collaborator we can make UIImage(named:) injectable. Our production code can continue to use UIImage(named:) but our tests can use a different function that allows us to capture and verify the input.

Start by making it injectable with a sensible default

class Images {
    
    var loadImage: (String) -> UIImage? = UIImage.init(named:)
    
    func jumpSprite(atIndex index: Int) -> UIImage? {
        return loadImage("jump_\(String(format: "%03d", index))")
    }
    
}

In the above we made a couple of changes

  • Added a new variable of type (String) -> UIImage? that holds our image loading function (Swift function values don't carry argument labels, so the named: label is dropped)
  • Invoked loadImage(_:) instead of directly invoking UIImage(named:)

End by adding some tests

Now that we have done the scaffolding we can add tests that verify that the correct arguments are provided when loading images

class ImagesTests: XCTestCase {
    
    func testJumpSpriteIsInvokedWithTheCorrectArguments() {
        let images = Images()
        
        var captured: String?
        
        images.loadImage = { name in
            captured = name
            return nil
        }
        
        _ = images.jumpSprite(atIndex: 321)
        
        XCTAssertEqual("jump_321", captured)
    }
    
}

Conclusion

The fact that functions are first-class types in Swift means that it is super simple to use dependency injection to enable testing. It's always important to test our code's collaborators to ensure that we are calling their contracts correctly and with the arguments that we expect to send.

Pro Tip: Playback Speed

Videos are a great way to consume information, but they can sometimes drag their feet getting to the important subject matter. When reading a blog or a book you can skim over the things you already know; this is difficult with video. I generally watch programming podcasts, tutorials and conference talks at 2x and slow down when I need more time to digest the content.

Native video

I use iTunes to subscribe to a few great screencast series like RubyTapas and NSScreencast. The content is often great, but it is aimed at a broad audience, so some things I can happily skim over. I don't know if there is a better podcast player I should be looking at, but I generally just open the episode in QuickTime, which allows 2x playback.

YouTube

There is too much great content to call out, but Confreaks is one of my favourite channels for interesting talks. To get 2x playback on YouTube you need to ensure that you are using the HTML5 player.


The other obvious benefit of watching things at 2x is that they take half the time to watch. This is awesome, especially if you follow series like Handmade Hero where the videos can easily run one to two hours. It may not be possible to watch a whole video at 2x when it is dense with information, but at least you can budget your time more wisely by skimming the less important details.

ximber

I've created a simple tool called ximber that hopefully helps with making xibs just a little bit nicer to work with.


The why

A painful problem when working with layout constraints in xibs and storyboards is that they are almost impossible to revisit at a later date for quick changes. The problem stems from the fact that the constraints you create have terrible names, and there can be a large number of them even for relatively simple interfaces. If it's been a long time since you set an interface up it is often easier to remove all the constraints and start again, because clicking between them and building up a mental stack of what each one does is just plain exhausting.


The first step (manual)

Some people know that you can help yourself out by adding labels* to each constraint to give yourself a prompt about what the constraint does. This could be as simple as using VFL to state that a constraint sets a view called myView's height to 100 (V:[myView(==100)]). This is exactly what I started doing, and I found it a great help: constraints were much easier to find and work with in Interface Builder, especially in a team environment where you may not have set the constraint up yourself. A pretty major flaw in this approach is that you are essentially adding a comment to the xib, and as we all know, comments will try their hardest to fall out of sync with the actual code.


The next step (automate)

The aim of ximber is to automate this process. The idea is simple

  • Parse the xib's XML
  • Grab any constraints
  • Try to generate a meaningful label for each constraint (this is slightly awkward)
  • Write the xib back out

More problems

Believe it or not generating a label like this: H:[view]-[view][view]-[view] is not very useful. This will happen if you have poorly named views:

poor naming

What would be really good is if each view had a decent name as well. A simple idea would be to manually add a label to each view, which you could argue is just good housekeeping. ximber aims to help with some of the legwork by giving the views in your xibs reasonable names. Obviously it's not going to guess what to call things - instead it examines the IBOutlet connections in the xib. If a view is connected to an outlet, ximber adds a user label with the name of the property. It will only add a user label if there isn't one already, so it won't clobber any manually added labels.

With nicely named view outlets and constraints you end up with the following:

Before

Image Alt

After

Image Alt


Before

Image Alt

After

Image Alt


This is a personal tool and as such I've not considered any decent way of making it easy to install :/ If you grab the project from the repo you can build and put the product somewhere in your path.

This was a weekend project and as such may not be the fully finished article - any feedback, pull requests or bugs would be greatly appreciated.


* Adding labels is as simple as

  • Select an outlet
  • Hit return

OR

  • Select an outlet
  • Wait a short delay
  • Select the outlet again

OR

  • Select an outlet
  • Go to the identity inspector
  • Edit the label in the Document section

What did I just compile?

In most projects you'll reach a point where you want to run slightly different code depending on the build configuration you are using. You may want logging only in DEBUG, or different API keys for services between ad hoc and store builds. The process is fairly straightforward and uninteresting, but I'm going to point out how the pieces fit together, plus a couple of ways to test your setup so you can be confident that the correct code will go to the app store and you don't end up with egg on your face.

Let's say we want to remove all occurrences of NSLog() in our project for any build configuration that is not DEBUG - simple enough.

Prefix.pch

#ifndef DEBUG
#  undef  NSLog
#  define NSLog(...)
#endif

#ifndef?

The first line, #ifndef DEBUG, is a preprocessor directive that checks that the token DEBUG has not been defined in this file or any included files. If DEBUG has been defined then the preprocessor removes the next two lines before compilation. If DEBUG has not been defined then these lines are compiled and have the effect of replacing all calls to NSLog() with nothing. To check that a token has been defined we use the similar looking #ifdef.

So hopefully the above was nothing new or interesting, but you may be wondering: where and why is DEBUG defined, as I don't do it manually?


Where's my define?

This is where the Xcode templates help us out a little. Jump to the build settings of a project and search for 'preprocessor macros' and you'll see something like this:

Preprocessor Macros

As you can see from the above screenshot the Xcode default project settings have the define DEBUG=1 declared but only for the debug configuration.

There is nothing special about how this is done that prevents you from adding your own defines. For example you may want to augment this so you can check other configurations. In the screenshot below I add RELEASE=1 for release builds and ADHOC=1 for a new configuration made for ad hocs.

Preprocessor Macros


Location, location, location

So where would you put this code to remove NSLog()? It needs to be available everywhere to ensure that it affects every NSLog. A common place is to just slap it in the {Project}-Prefix.pch file. This file will have been automatically created for you when starting with one of the Xcode templates.

I used the term "just slap it in the {Project}-Prefix.pch" as this is the simplest way to get code included everywhere in your project, but it's not pretty. If you have too much code that you want to use everywhere your .pch will start to look like a dumping ground and it's probably time to move this stuff into a better named file and include that in the .pch. Yes I know this is just hiding the code in another file but it certainly seems like a good compromise of convenience and tidiness, the alternative being to #import your new file everywhere you want to use your new code.

Slight detour

If you want to rename the .pch you have to also ensure that you update the build settings to reference the new name. The quickest way to find the setting is to use the filter on the build settings tab of the project and look for "prefix header":

Image


Rocky waters

The .pch suffix stands for precompiled header, and the file is precompiled to speed up your build times. This means that Xcode can get a little moody if anything changes in any of the imported files - for that reason it's best to only #import stable things (things that don't change often) into the .pch. If you get into a situation where your project builds but Xcode spits out warnings related to the files that are #import'd into your .pch, you have to do a little dance to get Xcode working again.

I generally follow these steps - I build at each point and if the warnings go away you are done, if not keep going through the list

  1. Do a full clean (⌘⌥⇧K)
  2. Comment out the offending import, build, then uncomment the import, build
  3. Do step 1 again and also delete derived data
  4. Close Xcode
  5. If you got here you are having a bad day :(

Test

Great so you are using different build configurations to remove or add code but how do you get the confidence to know that you are not about to release the wrong code to the app store?

Ad hoc

You should, without fail, make an ad hoc build using the configuration that you will use when submitting to the store. To do this go to Product > Scheme > Edit Scheme (⌘<), switch to Archive and ensure that the Build Configuration is set to your app store configuration (most likely Release).

Release config

Now just test your app and make sure that it behaves as you expected.

Preprocess

If you are like me and want to go full belt and braces, you'll want to see the code that will actually be run (seeing is believing). To do this, edit the Run settings: go to Product > Scheme > Edit Scheme (⌘<), switch to Run and ensure that the Build Configuration is set to your app store configuration (most likely Release).

Run config

Now, with the file that contains the conditional open, go to Product > Perform Action > Preprocess "MyFile.m".

Preprocess

Xcode will make itself busy and then spit out the preprocessed file, which you can now search to ensure that the correct code was included/excluded. ProTip: your code will be way way way down the file, right near the bottom.

Wrap up

I'm not going to lie this was not a very interesting post but it's important to know how this stuff works to make sure that you are not accidentally including bad code in production builds.

Believe it or not there were a couple of useful/interesting things in the above post:

  • We looked at two build settings: Preprocessor Macros and Prefix Header
  • We learned the Xcode dance for when the .pch gets itself in knots
  • We tested our code to be confident that the #ifdefs are working as advertised

I think the ways of testing are the most important take away as it's never a good idea to trust the code you just copied and pasted from StackOverflow. Your project may be configured differently and it would be foolish to trust a computer to "just work".

How does it work - Bit shifting & masking

tl;dr

I explore how bitwise manipulations allow you to store the red, green, blue and alpha components of a colour in a single integer.


Today I'm going to step through a helper method that allows you to create a UIColor instance from the hex colour values that designers love to give you. The category method in question looks like this:

+ (UIColor *)pas_colorWithRGBAHex:(u_int32_t)rgba;

When I first saw an implementation of this many years ago I remember staring at the source and feeling my eyes glaze over. Let's break this apart and demystify what's really going on.


u_int32_t

One of the questions you might ask when looking at this method is why be so specific with the type of the parameter? How do we know it needs to be a u_int32_t and not something else like u_int64_t or int? To answer that we'll look at some sample input provided by a designer, in the form of a lovely green colour.

#01A902FF /* a lovely green */

This is actually 4 pairs of hexadecimal numbers, reading from left to right we have:

  • The Red component is #01
  • The Green component is #A9
  • The Blue component is #02
  • The Alpha component is #FF

In base 10, our normal counting system, this would be 1, 169, 2, 255 respectively.

Each hexadecimal component covers the range 00..FF, which is equivalent to 0..255. To be able to represent all the numbers from 0..255 in binary we would need 8 bits.

bits

A bit can only represent 2^1 values as it can either be 1 or 0. By looking at groups of bits we can increase the number of values that can be represented. For example a 4 bit number can be configured in 2^4 (2 x 2 x 2 x 2) different combinations, which means we can use it to represent 16 unique values.

To represent 0..255 we would need 8 bits, as 2^8 (2 x 2 x 2 x 2 x 2 x 2 x 2 x 2) equals 256.

So knowing that we have to represent 4 colour components that each require 8 bits we can finally understand why we chose u_int32_t as the type of the argument (4 x 8 = 32).


rgba

The next logical question is how do we extract the four 8 bit values from a single integer?

Let's start by looking at how our hexadecimal #01A902FF (green) value actually looks under the hood when represented by a u_int32_t. Keep in mind that a u_int32_t is just a collection of 32 bits that can each have a value of 1 or 0:

1101010010000001011111111 // binary solo (Flight of the Conchords reference); leading zeros omitted

Let's do that again but with some spacing and some markers to show which bits represent our RGBA components.

+--  R  --+--  G  --+--  B  --+--  A  --+
|         |         |         |         |
 0000 0001 1010 1001 0000 0010 1111 1111

When a bit is set to 1 it means that the unit the bit corresponds to should be included in the total. The units are powers of 2, with the exponent increasing by 1 each time, starting from the right with 2^0. Here's a few examples using just the first 8 bits:

| 2^7 | 2^6 | 2^5 | 2^4 | 2^3 | 2^2 | 2^1 | 2^0 |
   1     1     1     1     1     1     1     1    = 255 = (128  + 64 + 32 + 16 + 8 + 4 + 2 + 1)
   0     1     0     0     0     0     0     0    = 64  = (64)
   0     0     1     0     1     0     1     0    = 42  = (32 + 8 + 2)

With that slight detour out of the way we can crack on with extracting the Alpha component, which is the easiest to work with. Currently our u_int32_t is capable of representing the values 0..4294967295, but we only want to extract an 8 bit value with a range of 0..255. We need to somehow look at just the 8 bits that we care about - this can be achieved with bit masking.


&

The bitwise AND, which is represented by the single ampersand character (&), compares two values bit by bit: a bit is only set to 1 in the resulting value if it is 1 in both the left and right operands. (This is distinct from the logical AND, &&, which compares whole truth values.)

15 is represented by the binary 0b1111. So 15 & 15 would equal 15 and look like this:

0b1111   /*
0b1111 &  * All the bits are on in both values so all
------    * of the bits in the resulting value are on
0b1111    */

0 is represented by the binary 0b0000. So 0 & 15 would equal 0 and look like this:

0b0000   /*
0b1111 &  * There are no bits that are on in both values
------    * so the resulting value has no bits on
0b0000    */

10 is represented by the binary 0b1010 and 12 looks like 0b1100. So 10 & 12 would result in 8 and look like this:

0b1010   /*
0b1100 &  * Only the 8 bit is on in both values so only
------    * the 8 bit is on in the result
0b1000    */

This may not seem very helpful, but it is the key to treating our 32 bit value as if it were only 8 bits. To get our Alpha component we need to turn off the 24 bits that we don't care about and ensure that the first 8 bits keep their current values.

To achieve this we use a bit pattern that turns on only the first 8 bits and then bitwise AND it with our RGBA value:

rgba & #000000FF

#000000FF is hex for 255, which is the first 8 bits turned on (0b1111 1111) - here is how the above looks at the bit level:

+--  R  --+--  G  --+--  B  --+--  A  --+
|         |         |         |         |
 0000 0001 1010 1001 0000 0010 1111 1111
 0000 0000 0000 0000 0000 0000 1111 1111  &
 ---------------------------------------
 0000 0000 0000 0000 0000 0000 1111 1111

Great, this has masked out the last 24 bits of our u_int32_t and has given us the result of 255 for the Alpha component.

Hopefully this makes sense and we are ready to try and extract more components. Let's jump to the Green value for a challenge. With our new technique we may naively assume that we just want to mask out all of the data that we don't want. We could do that with the following:

rgba & #00FF0000

#00FF0000 is the same as 0b0000 0000 1111 1111 0000 0000 0000 0000 - at the bit level we get:

+--  R  --+--  G  --+--  B  --+--  A  --+
|         |         |         |         |
 0000 0001 1010 1001 0000 0010 1111 1111
 0000 0000 1111 1111 0000 0000 0000 0000  &
 ---------------------------------------
 0000 0000 1010 1001 0000 0000 0000 0000 

This has indeed masked out the data we don't care about, but the resulting value is 11075584, which is well beyond the 0..255 range we were looking for. This is because the bits that are set to 1 now represent 65536 + 524288 + 2097152 + 8388608.

What we need to do is shift the data to the right so that the bits representing the Green component move from their current place, in the middle, all the way over to the right so that they are in the first 8 bits of our variable. If the Green bit pattern (1010 1001) occurred in the first 8 bits we would get 1 + 8 + 32 + 128 = 169.

We move the values to the right with the right shift operator.


>>

We want to shift the 8 bits that represent the Green component all the way to the right. There are 16 bits (Blue and Alpha) to the right of the Green component, so we need to shift our number 16 places to the right - simple

rgba >> 16

As the bits are shifted to the right they fall off the right hand side, and new values are added on the left to pad the newly created space. What the padding values are depends on the variable's type and a few other things (for an unsigned type like ours, the padding is zeros). After doing rgba >> 16 our bits now look like this

+------  new  ------+--  R  --+--  G  --+
|                   |         |         |
 0000 0000 0000 0000 0000 0000 1010 1001

This has gotten the bits of the Green component into a position where we can just mask them off like we did with the Alpha component.


If we follow this logic for each component we end up with the final demystified implementation of:

+ (UIColor *)pas_colorWithRGBAHex:(u_int32_t)rgba
{
    int r = 0xFF & (rgba >> 24);
    int g = 0xFF & (rgba >> 16);
    int b = 0xFF & (rgba >> 8);
    int a = 0xFF & (rgba);

    return [UIColor colorWithRed:r / 255.0f
                           green:g / 255.0f
                            blue:b / 255.0f
                           alpha:a / 255.0f];
}

For each component we follow the pattern of:

  • Shift the bits in the rgba variable to the right until the data we are interested in is in the first 8 bits
  • Mask off the first 8 bits to ensure that any other bits are ignored

This category method requires the hex value to be defined in the form of 2 hexadecimal numbers for each colour component - you could easily make different methods to handle different formats now you know how it all works. This category method would be used like this:

UIColor *greenColor = [UIColor pas_colorWithRGBAHex:0x01A902FF];

Wrap up

This post only looks at the right shift (>>) and bitwise AND (&) operators, but hopefully it allows you to begin to picture in your mind's eye how bits are stored and can be manipulated.

Essentially under the hood the u_int32_t variable is just held as a sequence of bits. We are exploiting this structure to encode more than one piece of information in a single variable. We then have to use various bitwise operations to extract the meaning that we encoded into the integer.