Make nice tools

I spend a lot of time thinking about developer experience at all levels, from what makes a helpful error message deep in a framework to how we make people’s dev environments easy to spin up. In this post I’ll walk through how a local development tool I wrote evolved over 4.5 years and what that taught me about designing developer-facing tools.

This isn’t a post about Docker, SwiftUI or Compose. It’s about what I learned building a developer tool that people actually used and what I got wrong along the way.


The Problem

For my day job I work in the native apps team at Autotrader on the iOS/platform side of things. As a team we don’t only own the Android/iOS code bases; we also own a few backend services that power the apps, and the number of those services has grown over time. This means that if you want to make changes to a backend service and test it locally in a simulator you need to spin up all the related services locally.

As a simplified example, imagine this setup:

+-----+     +-----------+     +-----------+
| iOS |-----| Service A |-----| Service B |
+-----+     +-----------+     +-----------+

To run this locally there are many ways I can configure things. One scenario might be that I want to make changes to Service B, which means I still need to spin up Service A to allow the communication.

In the beginning this was all done manually - you’d load up the projects for Service A and Service B in IntelliJ and run them both locally. This worked but it pushed both cognitive and operational complexity onto individual developers, which is exactly where DX debt hurts most.


First Solution

Some people will have been screaming “use docker compose” and you’d be right: that’s what I did. All of our services were already containerised so there was nothing to change there. I just needed to write the docker-compose.yml to configure everything, which I did, and it worked fine.

There were a few issues that made me uncomfortable about stopping here:

  • Not everyone is comfortable on the command line so this can be intimidating
  • Docker is possibly not a tech a lot of Android/iOS devs delve into
  • The UI isn’t great

To elaborate on that last point the UI for interacting with this new setup would be one of the following:

docker compose up
docker compose up service-a
docker compose up service-b

Not everyone knows about reverse history search in bash (ctrl + r) so scraping around for these commands or pressing up a million times at the prompt isn’t the best experience. There’s also an issue with discoverability as you’d need to know the correct spelling of each service you want to run.

It seemed pretty clear that some kind of simple UI would really reduce the barrier to entry for using this tool.


Put Some UI on it

I’d dabbled in macOS apps before and honestly hadn’t enjoyed it much, as all my previous experience was in UIKit. Luckily SwiftUI exists, so I thought I’d use this as a learning experience and make a menu bar app that essentially wraps docker compose.

This is what that first version looked like

simple first version

Yup, it’s not going to win any design awards, but the power of this abstraction can’t be overstated:

  • We are completely abstracting away docker from the mobile app devs
  • All available services are visible and just a button tap away from being started/stopped
  • We show useful debug information like what ports things are running on
  • We show the running status of each service and detect if it’s running via docker or as a .jar (read: most likely in IntelliJ)
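Under the hood the app is still just invoking the same docker compose commands shown earlier. As a rough sketch (written in Kotlin for consistency with the later sections, though the original app was Swift; the function name and the detach flag are illustrative assumptions), the command for a service might be assembled like this:

```kotlin
// Hypothetical sketch: build the docker compose invocation the app
// runs behind the scenes. Passing no service starts everything,
// mirroring a plain `docker compose up`.
fun composeCommand(service: String? = null, detach: Boolean = true): List<String> {
    val cmd = mutableListOf("docker", "compose", "up")
    if (detach) cmd += "-d"      // detach so the UI isn't blocked on log output
    service?.let { cmd += it }   // omit the service name to start the whole stack
    return cmd
}
```

The point of the abstraction is that users only ever see a button; the spelling of service names and the shape of these commands stay an implementation detail.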

This wasn’t a quick task and I ended up burning a lot of midnight oil because it was such an interesting problem. There were lots of gotchas in handling subprocesses and their environments, and in coordinating lots of state events to the UI.

I was pretty happy with this step until I realised my first mistake…


How do I install it?

The first lesson was that building a tool isn’t good enough; you need to make it accessible. I’m not sure if it’s common knowledge, but developers tend to be very energy efficient (read: lazy), and having a multi-step install is just a recipe for support pain.

It wasn’t long before I added an install script so people could simply run a curl | bash and enjoy a nicely installed app without manually:

  • Finding the repo
  • Navigating to releases
  • Downloading the binary
  • Appeasing Gatekeeper to get it out of quarantine
  • Finally being able to launch the app

Obviously whenever I shared installation instructions I did the good citizen thing and issued a disclaimer that people shouldn’t blindly trust me and pipe my remotely hosted bash script into an interpreter without reading it first.


Success and Growing Pains

The tool, in the form of a macOS menu bar app written in SwiftUI, had a good four-year run. New services were added, bugs were fixed, and it ended up looking more like this

second version

One fun lesson that you can see played out in the screenshot is that I added {{commit-sha}}, which is populated on a release build with the git sha. This was because people have a tendency not to update things, especially when it involves manually running a curl | bash. Although adding some version identifier was helpful for support, it was just a plaster; the actual requirement I wish I’d noticed earlier is that this tool really needed to update itself automatically.
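For illustration, the stamping can be as simple as a placeholder substitution at release time. This helper is hypothetical, not the tool’s actual build step:

```kotlin
// Hypothetical build-time stamping: replace the {{commit-sha}}
// placeholder in a source template with the short git sha of the
// release commit, so support can tell which build someone is running.
fun stampVersion(template: String, sha: String): String =
    template.replace("{{commit-sha}}", sha.take(7))
```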

As you can see from the growth in services, the tool was popular enough to be updated multiple times, but not popular enough for people to get past the perceived steep learning curve required to contribute. The project was mostly maintained by me, and in all honesty I’d made some questionable architectural decisions early on; the Combine-heavy state management made the learning curve steeper than it needed to be.

Another detail I’d started to notice as the tool grew is that a long list of services is only useful if the user knows how they relate. In an evolving estate where many teams contribute, it may not be obvious which services you need to run to be able to work. You could spin up all the services, but that’s sometimes overkill and hard on our poor little CPUs.

The project needed a change, but I needed some inspiration…


A Reimagining

So here I was thinking:

  • This codebase is a pain to maintain
  • I want to better visualise how services hang together
  • I want more people to be able to contribute
  • I want the tool to autoupdate

My colleague (who doesn’t like being named) was working on a Compose Desktop app to help debug our apps. As they’d done all the hard work of getting a project scaffolded and off the ground I decided to see how hard it would be to port the SwiftUI tool to Compose, update the UI and incorporate it into this debug app.

This was actually a perfect opportunity to rethink architectural choices as it was an entirely different language, and although I’d consider myself proficient in Kotlin I’d never done Compose, so it would be a fun experience. Doing the work in this codebase also opened up the contributor pool considerably, as now any Android dev could contribute, and conveniently all of our iOS devs are solid Kotlin devs already. At this point, the limiting factor wasn’t UX polish - it was who felt capable of contributing.

After some fun learning and porting all the process management over I ended up with something like this

current version

Personally I think I nailed the visualisation requirement: you can now see at a glance which services you’d need running to access different parts of the estate. The red connecting lines do actually go black to show that the apps can access services, but I hastily made changes to get the anonymised screenshot and broke that detection.

The other main thing in this version was the autoupdate ability. I leant on having more contributors and got a colleague to write the autoupdate logic, and it works great. We even held back giving anyone access to the tool until autoupdate was available, as we just didn’t want to deal with the support.

I was feeling pretty good about this developer experience…


Arrgghh SSO

Then a new requirement arrived: services needed to fetch tokens to communicate with our preprod environment. Actually this wasn’t new; I’d just been avoiding doing anything about it for a few years, but it was being rolled out more broadly so I couldn’t ignore it anymore.

As with most things I put off, the programming part didn’t actually end up being super complicated. Essentially, when a service requires an auth token we need to invoke an external helper tool that does all the SSO magic and then inject the returned token into the relevant service when it is started. The more complicated part was figuring out how the UI would educate users that they might need to request permission for each service, and then signpost how and where they do that.

Once a user is set up they just press the play button like normal and everything is automatic. If the user isn’t configured then they get an ⚠️ on the service, which tells them how to resolve the misconfiguration.
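A minimal sketch of that start flow, stubbing out the SSO helper invocation; the AUTH_TOKEN variable name and the warning text are assumptions for illustration:

```kotlin
// Illustrative sketch of the start flow: either inject a token into
// the service's environment, or surface a warning telling the user
// how to get set up. The helper tool itself is stubbed via fetchToken.
data class StartPlan(val env: Map<String, String>, val warning: String?)

fun planStart(requiresAuth: Boolean, fetchToken: () -> String?): StartPlan {
    if (!requiresAuth) return StartPlan(emptyMap(), null)
    val token = fetchToken()  // would shell out to the external SSO helper
        ?: return StartPlan(emptyMap(), "⚠️ Request access for this service, then sign in and retry")
    return StartPlan(mapOf("AUTH_TOKEN" to token), null)
}
```

Keeping the decision in one small pure function like this also makes the “do we show the ⚠️?” logic trivially testable.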


Make sure you are listening

Another lesson I completely missed was tuning into the frequency of support requests. Behind the scenes, to get all of the networking set up correctly so that docker can communicate with localhost seamlessly, people need to add an entry to /etc/hosts. This was constantly missed but quickly resolved with a trip to the docs. Unfortunately no one knew where the docs were, so I’d just be dishing out the link constantly.

Instead of redirecting people to the docs and then helping them through the steps, I added a “Doctor” command that diagnoses common misconfigurations/issues and gives remediation steps.

doctor
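One such Doctor check might look like the sketch below. The required hostname is taken as a parameter here because the real entry is specific to our setup and isn’t named in this post:

```kotlin
// Sketch of a Doctor check: scan /etc/hosts content for a required,
// non-commented entry and emit a remediation step if it's missing.
fun checkHostsEntry(hostsFile: String, requiredHost: String): String =
    if (hostsFile.lineSequence().any { line ->
            !line.trimStart().startsWith("#") && requiredHost in line
        })
        "OK: $requiredHost is mapped in /etc/hosts"
    else
        "FAIL: add an entry for $requiredHost to /etc/hosts (see the docs)"
```

The win over a docs link is that the remediation step appears exactly when and where the problem is detected.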


Remove friction and they will come

I think there are a few good victory stories about getting more contributors. Some that come to mind:

A massive eye saver from one of the iOS devs was adding dark mode. A pretty impressive feat considering he’d never done Compose, and he didn’t just tweak colours: he went the whole hog and made various components much more pleasing on the eyes.

dark mode

Another capability added by a few iOS devs was the ability to install the latest development build into your simulator and launch it. This is a massive productivity boost, especially for web developers, who no longer need to learn how to get the project running or how to interact with Xcode. There is still a requirement to have Xcode installed, but after that no Xcode knowledge is needed.

One of the QA engineers added hot reloading to make the development experience on the tool that improves developer experience better.


Building a platform

Another observation we made was that we now have a platform for building tools that automatically update and are installed by many people. This has led to me sitting with another QA engineer to help them port their command line tool for finding all kinds of difficult-to-find adverts that people need for testing/development.

I think in total we spent about an hour getting something working together, and since then he’s been a one-man feature factory improving things.


Wrap up

I really think developer experience is important. Making and evolving tools is a nice way to show your colleagues that you value their time and hear their frustrations. It’s also been highly rewarding seeing other people get involved and seeing people interested in building a DX culture. I have no metrics on how much time this tool has saved others, but I personally use it multiple times a day and the time I’ve saved alone has more than paid back the personal investment I’ve put into this.

From Runtime Explosions to Compiler Checked Simplicity

When I’m solving problems I rarely move in a straight line. I tend to circle the goal, trying a handful of bad or awkward ideas before something clicks. With experience those loops get shorter, but they never really go away.

This post is about one of those loops: a small problem in some of our KSP generated code. I’ll walk through a few iterations of the solution and end with a much simpler approach that let the compiler do the hard work instead of us.


The problem

We have some KSP (Kotlin Symbol Processor) code that generates helper functions for routing within our application. Inside the generated functions we were blindly taking in arguments and attempting to serialize them with kotlinx.serialization e.g.

fun routeToCheckout(cart: Cart) {
    val encodedCart = Json.encodeToString(cart)
    ...
}

The issue here is that if Cart is not annotated with @Serializable then this code will explode at runtime, which is less than ideal.


Solutions

Your friend and mine (your LLM of choice) suggested explicitly taking a serializer at the call site. This would force the caller to guarantee that a serializer exists.

In practice though, it felt wrong. Requiring consumers of this generated API to manually thread serializers through their code makes the API harder to use and leaks an implementation detail that callers shouldn’t need to care about.

The other suggestion from the LLM was to use a reified inline function but that was not an option based on the code setup.


Attempt 1

Using the suggestions from the LLM, I thought maybe we could just blindly generate the serializer call into the body of the function; the compiler would then reject our code if the serializer didn’t exist, e.g.

fun routeToCheckout(cart: Cart) {
    val encodedCart = Json.encodeToString(Cart.serializer(), cart)
    ...
}

This works: the compiler will now error if Cart.serializer() does not exist, which is much better than a runtime exception. Granted, this does have a less-than-ideal failure mode, as the consumer of this KSP processor could end up with their code failing to compile and being pointed at generated code. Whilst not great, I was happy with the compromise, and we can also generate a comment to help steer anyone who does end up here, e.g.

fun routeToCheckout(cart: Cart) {
    /*
     * If you find yourself here with a compilation error, ensure that the relevant
     * type has a serializer defined.
     */
    val encodedCart = Json.encodeToString(Cart.serializer(), cart)
    ...
}

Never smooth sailing

I applied this latest code to a larger code base and it immediately flagged the code that caused the original issue that prompted this investigation in the first place, which was reassuring. It also highlighted a few other call sites that would exhibit the same breakage, but luckily those code paths weren’t executed.

More annoyingly, the new code revealed that we sometimes pass more complex types that use Kotlin collections, e.g. List<String> or Map<String, CustomType>. Obviously I didn’t see the wood for the trees and started “making it work”, but it got real ugly real quick with all the potential nesting. The serializers in the above cases would be

ListSerializer(String.serializer())
MapSerializer(String.serializer(), CustomType.serializer())

To handle this I started writing a recursive function that would keep traversing the types, building up these serializers and special-casing the various Kotlin collection types.
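To show the shape this was taking, here’s a sketch of that recursive approach over a toy type model (real KSP symbols omitted; only List and Map are special-cased, whereas the real thing would need every collection type):

```kotlin
// Toy model of a resolved type: a name plus its type arguments.
data class Type(val name: String, val args: List<Type> = emptyList())

// Recursively build the nested serializer expression for a type,
// special-casing the Kotlin collection types.
fun serializerExpr(type: Type): String = when (type.name) {
    "List" -> "ListSerializer(${serializerExpr(type.args[0])})"
    "Map" -> "MapSerializer(${serializerExpr(type.args[0])}, ${serializerExpr(type.args[1])})"
    else -> "${type.name}.serializer()"
}
```

Every extra collection type and nesting level adds another case here, which is exactly why it started to feel like the wrong direction.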


Could we do it simpler?

Luckily at this point I’d already asked my colleague JackMack for his thoughts, which were: could we do it simpler? The key insight he’d had after hearing me ramble on about my current progress was that we don’t actually need to reproduce the exact serializer to pass to Json.encodeToString; the root of the problem is that we want to prove that each type mentioned has a serializer.

The new idea was to simply list out all the types outside of the Json.encodeToString function and let the compiler just do its thing. So essentially for the Map<String, CustomType> example the target is something like

fun routeToPage(customTypes: Map<String, CustomType>) {
    /*
     * If you find yourself here with a compilation error, ensure that the relevant
     * type has a serializer defined.
     */
    CustomType.serializer()

    val encodedCustomTypes = Json.encodeToString(customTypes)
    ...
}

We don’t need to worry about checking that String.serializer() or MapSerializer exist, because we know the library provides those.


Getting our prompt on

Once the target shape was clear, the remaining work became much more mechanical. We needed a way to walk the types involved in a function signature, including any type parameters, and extract the set of domain types we cared about.

This was ideal for a quick human/LLM pairing session.
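A sketch of that walk, again over a toy type model in place of KSP’s actual symbol API; the set of built-in names here is a stand-in for whatever the real processor treats as library-provided:

```kotlin
// Toy model of a resolved type: a name plus its type arguments.
data class TypeNode(val name: String, val args: List<TypeNode> = emptyList())

// Stand-in for types whose serializers the library already provides.
val builtIns = setOf("String", "Int", "Long", "Boolean", "List", "Map", "Set")

// Walk the type tree and collect the domain types whose serializer()
// calls we'd emit as compile-time proofs.
fun domainTypes(type: TypeNode): Set<String> =
    (if (type.name in builtIns) emptySet() else setOf(type.name)) +
        type.args.flatMap { domainTypes(it) }
```

Compared to the recursive serializer-building attempt, there is no nesting to mirror: the output is just a flat set of names, one serializer() line each.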


Conclusion

This problem went through several iterations, each one technically “working” but increasingly complex. The turning point came from stepping back and questioning what we were really trying to achieve.

We didn’t need to construct the correct serializer. We didn’t need to mirror kotlinx.serialization’s internal rules. We just needed proof at compile time that the types flowing through our generated APIs were @Serializable.

By narrowing the problem to that single requirement, the solution became smaller, clearer and more robust. It also produced better failures: early, explicit and enforced by the compiler rather than discovered at runtime.

It’s a useful reminder that when a solution starts to grow unwieldy, the answer is often not more code, but a better question.

The best part is that this is now another pattern my brain can store away and recognise more quickly in the future, reducing the number of iterations when I hit similar problems again.

Kotlin Gotchas: Why Your ?.let Sometimes Fails to Compile

Kotlin’s let is a great tool for writing expressive code, but I’ve noticed it can introduce subtle fragility when used the wrong way - especially alongside certain compiler features. Let’s start with a question - should this code compile?

Types

class Example {
    val title: String? = null
}

data class Id(val value: String)

Usage

val example = Example()

example.title?.let {
    Id(example.title)
}

At first glance, this should compile - and it does. But we can make small, seemingly harmless tweaks that suddenly break it.


Failing to Compile

The first breaking change would be to make title a computed property

  class Example {
-     val title: String? = null
+     val title: String? get() = null
  }

With this change, we get the following error:

Smart cast to ‘String’ is impossible, because ‘title’ is a property that has an open or custom getter.

This error rather cunningly hints at another change that causes the same failure.

-  class Example {
-     val title: String? = null
+  open class Example {
+     open val title: String? = null
  }

One final change I can think of that breaks for the same underlying reason but is achieved in a different way is to declare Example in a different module from the usage code. This gives the error

Smart cast to ‘String’ is impossible, because ‘title’ is a public API property declared in different module.

So what’s going on here?


Failing to Smart Cast

We’ve seen Smart cast mentioned in both errors, but what does that mean? A smart cast is when the Kotlin compiler automatically treats a variable as non-null, or as a more specific type, after it’s been checked - but only if it can guarantee the value won’t change in the meantime.

In the original working code, the compiler can see that Example.title is declared as val and cannot be reassigned. So inside the scope of the ?.let the compiler is able to prove that the value cannot change.

All these breaking changes are just different ways of preventing the compiler from making that guarantee.
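Besides using the lambda argument, another common workaround is to snapshot the property into a local val. A local val can’t change underneath you, so the compiler can smart cast it even with the open variant (the Configured subclass below is just for demonstration):

```kotlin
// The "broken" variant: open property, so no smart cast on direct access.
open class Example {
    open val title: String? = null
}

// Demonstration subclass that actually supplies a title.
class Configured(override val title: String?) : Example()

data class Id(val value: String)

fun idFor(example: Example): Id? {
    val title = example.title              // snapshot into a local val
    return if (title != null) Id(title) else null  // smart cast succeeds
}
```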

There’s another subtle language feature that allows us to write this code without realising the mistake.


Ignoring Closure Arguments

Kotlin allows you to silently ignore closure arguments. This contrasts with languages like Swift, which require you to mark explicitly when you’re ignoring one; the Swift compiler would force us to make this change

- example.title?.let {
+ example.title?.let { _ ->
      Id(example.title)
  }

Thinking about this actually points to the root issue: we should use the argument passed to the closure rather than accessing example.title again.


Recommendation

Now that we understand why, the fix should be clearer. Instead of relying on a smart cast and ignoring the lambda argument, we should just use the closure argument directly. This means our call site changes like this:

  example.title?.let {
-     Id(example.title)
+     Id(it)
  }

Personally, I would go a step further and favour the more concise call site:

- example.title?.let {
-     Id(it)
- }
+ example.title?.let(::Id)

This is an example of point-free style.


Using Point Free Style

Not everyone is a fan of method references because :: looks odd and scary at first, but once you get used to it, it’s really handy for writing super concise code.

It’s subtle but the second line here packs a punch.

example.title?.let { Id(it) }
example.title?.let(::Id)

In essence we are taking the data from let and plumbing it straight into the Id constructor. In the first line we are doing this manually by providing a closure and then invoking the Id constructor. In the second line we are just composing these functions together.

It’s subtle, especially in such a short example. With the first line, I have to mentally parse the closure, verify how it is used, and ensure it isn’t modified before being passed to Id. With the second line I don’t have to do any of that - I just know that the let will feed the value straight into Id.


Conclusion

Keeping in mind that the original code listing worked, we could arguably not worry about any of this. I personally think the final code is simpler and more descriptive of the intent, but that can be debated. The original code also has the issue that unrelated changes can start breaking things, which is something I think we should always try to avoid. This is definitely one of those changes I would suggest in a pull request but feel guilty about not explaining thoroughly; this post gives me extended notes I can point to when explaining the change.