Hacking with Ktor
02 Feb 2026
Every now and then when I’m out walking my dog I nerd snipe myself and start thinking of interesting little programming challenges to toy around with. This post explores using Ktor to build a tiny reverse proxy that forwards Snowplow events and pushes notifications over WebSockets so a debug UI can react in real time.
Background
The app I work on uses Snowplow for event tracking. When you send events, the payload has to adhere to strict JSON schemas. If an event has the right structure then everything is good, but if you get the structure wrong then validation kicks in and the event is lost to a bad queue.
Snowplow provides a debug collector that you can run locally and it will give you good information about whether the events you are sending are valid or not. The issue I was toying with is that the app I want to use to observe the debug collector doesn’t know when new data is available in order to update its UI.
The service structure looks something like this:
+-----+   sends events   +-----------+   needs to update   +-----------+
| iOS |----------------->| Collector |<--------------------| Debug GUI |
+-----+                  +-----------+                     +-----------+
The Debug GUI is a small local tool we plan to write to inspect what the collector has received, but it currently has no way of knowing when new data arrives.
For the sake of my experiment I had the limitation that I cannot change the code in iOS or the Collector.
It’s not cheating that the iOS client already has the ability to change the location of the Collector, so I can point traffic where I want.
A couple of less fun solutions that would work would be:
- Manual - add a refresh button to the Debug GUI
- Noisy - add polling to the Debug GUI
I opted to explore the third option noted at the top of the post: create a reverse proxy that sits in the middle, observes events as they occur, and pushes them to the Debug GUI.
That looks something like this:
+-----+   sends events   +-------+   forwards    +-----------+
| iOS |----------------->| Proxy |-------------->| Collector |
+-----+                  +-------+               +-----------+
                             |
                             |  sends messages   +-----------+
                             '------------------>| Debug GUI |
                                                 +-----------+
With a rough structure to aim for I started experimenting.
Creating a basic proxy
I’m not aiming for production readiness, as this is only a local debugging tool, but I started with a new empty project and manually added all the Ktor bits I would need. I knew I’d need both the client and server libraries for Ktor, and since I’d be using WebSockets I looked ahead at the docs and noticed they suggest Netty for WebSocket support. The basic configuration looks like this:
gradle/libs.versions.toml
[versions]
ktor = "3.4.0"

[libraries]
ktor-client-cio = { module = "io.ktor:ktor-client-cio", version.ref = "ktor" }
ktor-client-core = { module = "io.ktor:ktor-client-core", version.ref = "ktor" }
ktor-server-core = { module = "io.ktor:ktor-server-core", version.ref = "ktor" }
ktor-server-netty = { module = "io.ktor:ktor-server-netty", version.ref = "ktor" }

[bundles]
ktor-client = [
    "ktor-client-cio",
    "ktor-client-core",
]
ktor-server = [
    "ktor-server-core",
    "ktor-server-netty",
]
build.gradle.kts
...
dependencies {
implementation(libs.bundles.ktor.client)
implementation(libs.bundles.ktor.server)
}
...
With all the dependencies in place I can implement a main function that starts a server that receives requests on port 9090 and forwards them to port 9091 before returning the result.
src/main/kotlin/Main.kt
fun main() {
    embeddedServer(Netty, port = 9090, host = "0.0.0.0") {
        val client = HttpClient(CIO)
        routing {
            route("{...}") {
                handle {
                    val path = call.request.uri
                    val response = client.request("http://localhost:9091$path") {
                        method = call.request.httpMethod
                        call.request.headers.forEach { key, values ->
                            values.forEach { header(key, it) }
                        }
                        setBody(call.receiveChannel())
                    }
                    call.respondBytes(
                        bytes = response.bodyAsBytes(),
                        status = response.status,
                        contentType = response.contentType()
                    )
                }
            }
        }
    }.start(wait = true)
}
The code isn’t super exciting but the high level idea is to copy the path, headers and body and construct an identical request to the service I am wrapping. The results of calling the wrapped service are then just sent back to the original caller. Because I forward the request body as a channel, large payloads can stream through without being buffered in memory.
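One thing the snippet glosses over is that copying every header verbatim can bite a proxy, since hop-by-hop headers (and things like Host and Content-Length) describe a single connection rather than the request itself. It worked fine here, but a more careful version might filter them out before forwarding. A hypothetical sketch - `forwardableHeaders` and the header set are my own names, not part of the code above:

```kotlin
// Headers a proxy generally should not copy verbatim: the hop-by-hop set
// from RFC 7230 section 6.1, plus Host and Content-Length, which the
// forwarding client should derive from the new request it builds.
val nonForwardable = setOf(
    "Connection", "Keep-Alive", "Proxy-Authenticate", "Proxy-Authorization",
    "TE", "Trailer", "Transfer-Encoding", "Upgrade", "Host", "Content-Length"
)

// Filters a header map down to the entries that are safe to forward,
// comparing names case-insensitively as HTTP requires.
fun forwardableHeaders(headers: Map<String, List<String>>): Map<String, List<String>> =
    headers.filterKeys { name -> nonForwardable.none { it.equals(name, ignoreCase = true) } }
```

In the proxy this would wrap the `call.request.headers` loop, so only the filtered entries get copied onto the outgoing request.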
After firing this up and changing my iOS client to point from port 9091 to 9090 I could see everything was working as if I’d done nothing. This was a bit worrying as I wasn’t sure I’d actually done anything so I placed a few breakpoints to confirm my code was actually being executed.
The proxy works, but it doesn’t observe anything yet, so next I added a WebSocket endpoint that interested tools can subscribe to.
Adding WebSockets
The general idea now is to keep the existing proxying logic, but instead of just returning the result I’m also going to expose a WebSocket endpoint that I publish new data to. Publishing means parsing the data coming from the collector, so my dependency shopping list now includes WebSockets and JSON serialization.
gradle/libs.versions.toml
[libraries]
ktor-client-cio = { module = "io.ktor:ktor-client-cio", version.ref = "ktor" }
+ ktor-client-content-negotiation = { module = "io.ktor:ktor-client-content-negotiation", version.ref = "ktor" }
ktor-client-core = { module = "io.ktor:ktor-client-core", version.ref = "ktor" }
+ ktor-client-serialization = { module = "io.ktor:ktor-serialization-kotlinx-json", version.ref = "ktor" }
ktor-server-core = { module = "io.ktor:ktor-server-core", version.ref = "ktor" }
+ ktor-server-content-negotiation = { module = "io.ktor:ktor-server-content-negotiation", version.ref = "ktor" }
ktor-server-netty = { module = "io.ktor:ktor-server-netty", version.ref = "ktor" }
+ ktor-server-websockets = { module = "io.ktor:ktor-server-websockets", version.ref = "ktor" }
+ [plugins]
+ kotlinx-serialization = { id = "org.jetbrains.kotlin.plugin.serialization", version = "2.3.0" }
[bundles]
ktor-client = [
    "ktor-client-cio",
+   "ktor-client-content-negotiation",
    "ktor-client-core",
+   "ktor-client-serialization",
]
ktor-server = [
    "ktor-server-core",
+   "ktor-server-content-negotiation",
    "ktor-server-netty",
+   "ktor-server-websockets",
]
build.gradle.kts
plugins {
kotlin("jvm") version "2.2.21"
+ alias(libs.plugins.kotlinx.serialization)
}
...
With all the dependencies installed I looked at the docs for guidance and came up with this, just to verify the general setup:
src/main/kotlin/Main.kt
suspend fun main() {
    embeddedServer(Netty, port = 9090, host = "0.0.0.0") {
        install(WebSockets)
        val subscribers = Collections.synchronizedList<WebSocketServerSession>(mutableListOf())
        routing {
            webSocket("/ws") {
                println("Web socket connected")
                subscribers += this
                repeat(10) {
                    send("Message $it")
                    delay(1.seconds)
                }
                try {
                    for (frame in incoming) {
                        // ignore client messages
                    }
                } finally {
                    subscribers -= this
                }
                close(CloseReason(CloseReason.Codes.NORMAL, "All done"))
            }
        }
    }.start(wait = true)
}
When this server is run and I connect to the WebSocket (I used websocat for testing) the server prints that a connection was made and then sends Message 0 through Message 9 with a one-second delay between each message.
This works nicely and the socket is kept open by the infinite read loop that waits for input and just throws it away on repeat.
Now I know I can do the proxying and establish WebSockets it’s time to stitch things together.
Consume and emit events
The real collector doesn’t actually return any useful data when we inspect the traffic, so in reality I would need to make a further network request to fetch data. For the sake of this example, let’s pretend the collector returns the data directly in the response body rather than requiring a follow-up request.
I started by setting up the JSON parsing side of things.
The reason for parsing the JSON is that the payload from the collector contains a lot of stuff that the Debug GUI doesn’t need, so it makes sense to return a filtered view.
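The `Item` type used below never appears in the post, so here is a hypothetical sketch of what such a filtered model could look like. The field names are invented for illustration and are not taken from the real Snowplow payload:

```kotlin
import kotlinx.serialization.Serializable

// Invented filtered view of a collector event: just the fields a Debug GUI
// might care about. Combined with ignoreUnknownKeys = true, everything else
// in the collector's response is dropped during decoding.
@Serializable
data class Item(
    val schema: String,               // the schema the event claims to conform to
    val isValid: Boolean,             // whether validation passed
    val errors: List<String>? = null, // validation messages, when present
)
```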
val json = Json {
    ignoreUnknownKeys = true
    explicitNulls = false
}

embeddedServer(Netty, port = 9090, host = "0.0.0.0") {
    install(WebSockets) {
        contentConverter = KotlinxWebsocketSerializationConverter(json)
    }
    install(ServerContentNegotiation) {
        json(json)
    }
    val client = HttpClient(CIO) {
        install(ContentNegotiation) {
            json(json)
        }
    }
    ...
}
The above code creates a more lenient Json instance that will ignore unknown keys as I’m not going to reconstruct the shape of the entire response.
With this created it’s wired into the ContentNegotiation plugins for the client/server.
Annoyingly because both the client and server call the plugin ContentNegotiation you need to fully qualify to have them both in one project - I opted for aliasing on the import.
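For reference, the alias looks something like this (assuming Ktor 3’s package layout, which is worth double-checking against the version you’re on):

```kotlin
// The client-side plugin keeps its name; the server-side one is renamed at
// the import site so both can live in the same file.
import io.ktor.client.plugins.contentnegotiation.ContentNegotiation
import io.ktor.server.plugins.contentnegotiation.ContentNegotiation as ServerContentNegotiation
```

With that in place, `install(ContentNegotiation)` configures the client and `install(ServerContentNegotiation)` configures the server.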
Next I updated the routing to wire things together
routing {
    webSocket("/ws") {
        println("Web socket connected")
        subscribers += this
        try {
            for (frame in incoming) {
                // ignore client messages
            }
        } finally {
            subscribers -= this
        }
    }
    route("{...}") {
        handle {
            val path = call.request.uri
            val response = client.request("http://localhost:9091$path") {
                method = call.request.httpMethod
                call.request.headers.forEach { key, values ->
                    values.forEach { header(key, it) }
                }
                setBody(call.receiveChannel())
            }
            val bodyText = response.bodyAsText()
            val items = json.decodeFromString<List<Item>>(bodyText)
            val targets = synchronized(subscribers) { subscribers.toList() }
            for (subscriber in targets) {
                subscriber.sendSerialized(items)
            }
            call.respondBytes(
                bytes = bodyText.toByteArray(),
                status = response.status,
                contentType = response.contentType()
            )
        }
    }
}
With this in place I spun everything up and connected to the endpoint with websocat and started sending events and then… Nothing, absolutely nothing was happening.
There was nothing in the logs so I slapped a break point in and stepped through things.
It turns out an exception was being thrown on the val items = json.decodeFromString<List<Item>>(bodyText) line, but because there was no logging configured the error was just swallowed 🤦🏼‍♂️.
As it happens I’d messed up the structure of my @Serializable types but it was an annoying lesson that logging was not installed.
Having to drop down to stepping through the code line by line was tedious, so the obvious fix was to install CallLogging and an SLF4J backend.
In my case that was
ktor-server-call-logging = { module = "io.ktor:ktor-server-call-logging", version.ref = "ktor" }
logback = { module = "ch.qos.logback:logback-classic", version.ref = "logback" }
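Wiring the plugin in is then a couple of lines in the server module. A sketch, assuming Ktor 3’s `calllogging` package name (the package moved between major versions, so check the docs for the version you’re on):

```kotlin
import io.ktor.server.plugins.calllogging.CallLogging
import org.slf4j.event.Level

// Inside the embeddedServer { } block, alongside the other install calls:
install(CallLogging) {
    level = Level.INFO // log each handled call; failures stop being silent
}
```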
With the corrected data structures I spun it up again and everything worked perfectly.
Wrap up
In the end I had a tiny Ktor service that transparently proxied Snowplow traffic and broadcast filtered events over WebSockets to any connected UI.
This was actually a fun exploration and I reckon I’ll be able to take the learnings forward. The main takeaway was that I should just try stuff - in the past I’ve wanted to do similar things with wrapping services but assumed it would be too tricky. The whole thing took a couple of hours of experimenting before it all clicked, which reminds me that the things I put off are never as bad as they seem once I just start.