Mock-Free, Compile-Time Checked Dependency Injection with Reader

Brandon Kase at Swift Summit 2017


Video transcript:

Hey, what's up? So, we're going to talk about...yeah, there we go. Reader, alright? With this long subtitle. I'm so happy that a few of the earlier speakers today actually covered a little bit of what I needed to cover in the introduction, so now I can sort of whiz past that. But I have to start with some code, and this is just a cache.

It's a cache that associates keys with values: get is going to look up a key in the file system and try to load it, and if that fails it'll hit the network and fall back to a default. We're going to use shared-instance singletons everywhere. Then set takes a key and a value and does some other stuff with singletons. And if we have that, we can build a little program that uses it. This is kind of a toy function that doesn't really make sense in real life, but it's useful to illustrate the problem.
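
A minimal sketch of the kind of code being described; Disk and Network here are hypothetical in-memory stand-ins for the real file-system and networking singletons:

    final class Disk {
        static let shared = Disk()
        private var storage: [String: Int] = [:]
        func read(key: String) -> Int? { storage[key] }
        func write(key: String, value: Int) { storage[key] = value }
    }

    final class Network {
        static let shared = Network()
        func fetch(key: String) -> Int? { nil }            // pretend the network missed
        func push(key: String, value: Int) { /* no-op for the sketch */ }
    }

    final class Cache {
        static let shared = Cache()

        func get(key: String) -> Int {
            // Try disk first, fall back to the network, then to a default.
            Disk.shared.read(key: key) ?? Network.shared.fetch(key: key) ?? 0
        }

        func set(key: String, value: Int) {
            Disk.shared.write(key: key, value: value)
            Network.shared.push(key: key, value: value)
        }
    }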

This increment function takes a key, gets the value associated with that key from the cache, and then sets it back after incrementing it. There's this computed property increment on the value. Right? This is code that maybe some of you have seen in some iOS apps, but it's kind of bad, I think. It's bad because it's untestable. I'm using that word because I hope it resonates with people, but being untestable has consequences. It means that it's hard to reuse this code. It means that it's hard to compose it. And that's not good. Right? The big reason that it's untestable is that we're not thinking carefully about our dependencies, and specifically it's the singletons. Right?
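
Roughly, the increment example looks like this, building on the Cache sketch above:

    extension Int {
        var increment: Int { self + 1 }
    }

    func increment(key: String) {
        // Read through the shared singleton, bump the value, write it back.
        let value = Cache.shared.get(key: key)
        Cache.shared.set(key: key, value: value.increment)
    }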

I think if we go through the definition of what a singleton is, we can kind of see why that is. A singleton is a single instance that has mutable state or does some side effects and is accessible from anywhere. That's very familiar, right? Because a global variable is mutable state that's accessible from everywhere. Singletons are globals. We know that using global variables everywhere makes our code bad. Using singletons everywhere will make our code bad. So, we want to fix it, and if the problem is that we're not thinking carefully about our dependencies, then the solution is to think carefully about our dependencies, and dependency injection is about explicitly providing dependencies. Okay?


The simplest way to implement dependency injection is to just require that your dependencies get passed into the constructor of your struct or class. In this example, the cache depends on disk and network. We can make these protocols, and then in production we implement the protocols one way, in testing we implement them a different way, and then in our methods on the cache, we can use disk and network freely. Okay?
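
A sketch of that constructor-injected version; the protocol and class names here are illustrative, not the exact code from the slides:

    protocol DiskProtocol {
        func read(key: String) -> Int?
        func write(key: String, value: Int)
    }

    protocol NetworkProtocol {
        func fetch(key: String) -> Int?
    }

    final class InjectedCache {
        private let disk: DiskProtocol
        private let network: NetworkProtocol

        // Dependencies must be provided up front; no shared singletons in sight.
        init(disk: DiskProtocol, network: NetworkProtocol) {
            self.disk = disk
            self.network = network
        }

        func get(key: String) -> Int {
            disk.read(key: key) ?? network.fetch(key: key) ?? 0
        }

        func set(key: String, value: Int) {
            disk.write(key: key, value: value)
        }
    }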

But there's a problem with this, and the problem is that it doesn't really scale. You basically introduce tons of noise into your program when you do this, because you have to pass the parameters everywhere. If at any point you don't pass your dependencies explicitly, then they become implicit, and your code becomes impossible to test again. So this is not great, right? People have noticed this is not great and have made these dependency injection frameworks, and they all kind of look similar. I'm just going to speak generically here; the idea is that we build up a dependency graph at runtime, before our app actually starts doing things.

So we start this graph, and we can register dependencies, and then way later in our actual business logic, we just use a dependency, kind of like accessing something from a dictionary. We pass the type as the key and we get out a value. Under the hood we've explained how to wire the dependencies so that this makes sense. The idea is that in testing you can provide mock dependencies, or whatever, right? This is a step up, but there's a problem here, and specifically it's the force unwrap. Okay? This is what the documentation tells you to do. The reason you're supposed to do this force unwrapping is that you know you've provided the dependency. Right? You know. Most of the time it'll probably be safe, but sometimes it won't. Right?
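
In the spirit of those frameworks, here is a tiny hypothetical container (not any particular library's API) that shows where the force unwrap creeps in; FakeNetwork is an assumed test double:

    final class Container {
        private var factories: [ObjectIdentifier: () -> Any] = [:]

        func register<T>(_ type: T.Type, _ factory: @escaping () -> T) {
            factories[ObjectIdentifier(type)] = factory
        }

        func resolve<T>(_ type: T.Type) -> T? {
            guard let factory = factories[ObjectIdentifier(type)] else { return nil }
            return factory() as? T
        }
    }

    struct FakeNetwork: NetworkProtocol {
        func fetch(key: String) -> Int? { nil }
    }

    // Wiring happens at runtime, before the app starts doing things.
    let container = Container()
    container.register(NetworkProtocol.self) { FakeNetwork() }

    // Way later, in business logic, the docs tell you to force unwrap.
    let network = container.resolve(NetworkProtocol.self)!   // crashes if nothing was registered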

To me this cries out as a design flaw. If I'm passing a dependency, then give it to me non-optionally, and if I don't pass a dependency, don't compile. Okay? This is sort of bad. We need to change how we think about things, and for this talk I'm not pitching a particular library or framework. If you recall, yesterday a couple of people mentioned this: adding another dependency has tons of problems. Maybe you can write it yourself in your app, but what I want to show you is that the solution to everything is types.

So...let's...yeah! Clap.


If we look back at our first dependency injection example here, this is typesafe. I cannot construct a cache without providing dependencies, which is great, but it doesn't scale. It's too much boilerplate. So what we're going to do is take this and massage it, and massage it, and massage it until it's super, super usable. Alright?

Our cache takes disk and network, and we're storing them as properties so we can use them in get and set, but this is kind of the same as just taking those dependencies as parameters to the individual operations, and then we can make those operations static. We can lift them out of the class, or we can make them static methods. And then we can do the same thing. Right? We can use disk and network because we have them in scope and we're passing them explicitly. Of course, this is just more boilerplate of passing stuff around.
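
As a sketch, the operations pulled out as statics with the dependencies passed explicitly (reusing the protocols from the sketch above):

    enum CacheOperations {
        static func get(key: String, disk: DiskProtocol, network: NetworkProtocol) -> Int {
            disk.read(key: key) ?? network.fetch(key: key) ?? 0
        }

        // Every call site now has to thread the dependencies through by hand.
        static func set(key: String, value: Int, disk: DiskProtocol, network: NetworkProtocol) {
            disk.write(key: key, value: value)
        }
    }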

We can help a little bit by grouping them together in a struct. We're going to call the configuration of our dependencies a config. So we have our CacheConfig; this holds the dependencies for our cache. Then we're going to move the configuration parameter to the end of the parameter list, and now we're going to do a currying operation.
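
A sketch of that grouping, with the configuration moved to the end of the parameter list:

    struct CacheConfig {
        let disk: DiskProtocol
        let network: NetworkProtocol
    }

    // Dependencies grouped into one value and passed last.
    func get(key: String, config: CacheConfig) -> Int {
        config.disk.read(key: key) ?? config.network.fetch(key: key) ?? 0
    }

    func set(key: String, value: Int, config: CacheConfig) {
        config.disk.write(key: key, value: value)
    }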

Some of you may not be familiar with this, so I'm going to sort of go through it slowly. This is the uncurried version of get. We take a key, we take the configuration of our dependencies. Somehow we use those dependencies and the key to produce a value. I'm just using a method here, to keep it simple. And then we can access a value by invoking our get function as long as we provide a key and the dependencies. Right? So, when we curry get, we now use this a little bit differently. Right? We provide a key and we get back a function and that function takes the cache configuration, and then we get the value out.

The implementation is similar; we just return this function here. Okay? We can still get a value by passing a key and a configuration. The syntax is a little bit different because we're immediately calling a function that's being returned, but you can use it in the same way. But now we can pass just a key and get back a function, and way later, at the end of our program, we can pass a configuration for production, or we can pass a configuration for testing. Maybe you see where this is going. But if we look at this type signature now, we can still do everything that we could do before. Right?
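
The curried form might look like this; productionConfig and testConfig in the comments are assumed to be CacheConfig values built elsewhere:

    // Curried get: provide the key now, the dependencies later.
    func get(key: String) -> (CacheConfig) -> Int {
        return { config in
            config.disk.read(key: key) ?? config.network.fetch(key: key) ?? 0
        }
    }

    // Immediately calling the returned function works like before:
    //     let value = get(key: "counter")(productionConfig)
    //
    // Or pass just the key now, and a configuration way later:
    //     let lookup = get(key: "counter")        // pure: no side effects yet
    //     let real   = lookup(productionConfig)   // effects run here
    //     let fake   = lookup(testConfig)         // same description, test dependencies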

If we just take a second and think about what the operations mean now, I think it's useful: get now takes a key, and what it gives you back is not a value. Right? It gives you back a description of a way to produce a value, with some holes in it. When you provide the dependencies, you're filling in the holes and evaluating all the side effects and the mutations and whatever you need to do to get the value out.

The same thing for set. You give a key and a value and you get back a description of some way to produce a value with some holes in it, and you can later fill in those holes: one way for testing, one way for production. What's nice about this is that when you pass just the first set of parameters, the result is pure. Passing those parameters doesn't run any side effects. It's not until the configuration is passed that the side effects run. So, this is kind of nice.


We're going to do one more transformation. We're going to wrap that last function in a struct, and we're going to call it Reader. That's the title. Yeah. So Reader is generic on two type parameters, Deps and Data. It literally just wraps a function from Deps to Data, and we're going to call that function run. Alright? We can rewrite get and set again, and now it's a bit more readable. We know that get takes a key and gives us back a description of a way to produce a value when provided a cache configuration. Okay? So now that we have a struct, we can start doing things with it. We can put methods on it. We can give Reader a story, or something. Right. We can map. I'm so happy Daniel talked about this yesterday. If we have a function from Data to NewData and we want to lift it into the context of Reader, that is, we have a description of a way to produce Data and we want a description of a way to eventually create NewData, then we use map. I'm just going to breeze through this.
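
A minimal Reader along those lines, with get and set rewritten against it (reusing CacheConfig from the sketch above):

    struct Reader<Deps, Data> {
        let run: (Deps) -> Data

        // map: transform the eventual result without providing dependencies yet.
        func map<NewData>(_ transform: @escaping (Data) -> NewData) -> Reader<Deps, NewData> {
            Reader<Deps, NewData> { deps in transform(self.run(deps)) }
        }
    }

    func get(key: String) -> Reader<CacheConfig, Int> {
        Reader { config in
            config.disk.read(key: key) ?? config.network.fetch(key: key) ?? 0
        }
    }

    func set(key: String, value: Int) -> Reader<CacheConfig, Void> {
        Reader { config in config.disk.write(key: key, value: value) }
    }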

It's a one-liner, or one or two depending on how you count. You can read that if you want. We don't have time to go over it in depth, but mapping exists. So, we can build programs now. That's something useful. Our increment function takes a key, gets the value associated with that key from our cache, increments it, and puts it back. Alright, the first thing we have to do here is change the type signature. It's not clear that increment depends on anything; it's implicit, but if we say that it returns a Reader, now it's explicit. Now we need to produce a description that will eventually yield Void when the holes are filled in. We need to first get some value out. That gives us a Reader. Then we want to increment it, and then we need to set it back in the cache. That gives us a Reader. And we need to combine them and get a single Reader out. Okay?

We want to combine them in sequence, and the way you do that is flatMap. We have a Reader, with Deps and Data as the two generic type parameters in scope here. We're going to implement flatMap, which is generic on some NewData, and it takes a transformation from Data to a Reader of NewData. The way to think about this is to remember that this is a method on the first Reader. The first Reader describes a way to produce a Data, and then this transformation says, "Okay, I can take that Data and then decide what I want to do next, and produce a NewData." We're going to group that together in a single Reader, and that's what we return.
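
flatMap, sketched as described: run the first Reader, hand its result to the next step, and give both steps the same dependencies:

    extension Reader {
        func flatMap<NewData>(
            _ transform: @escaping (Data) -> Reader<Deps, NewData>
        ) -> Reader<Deps, NewData> {
            Reader<Deps, NewData> { deps in
                // Run the first description, then run whatever the transform decides on next.
                transform(self.run(deps)).run(deps)
            }
        }
    }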

The implementation is, again, one line. It is pretty hard to understand just from looking at the slide, and I encourage you to go and play with this after the talk. I have a link to some code examples, so I'm going to stick it here. But don't worry if you don't understand it. Now we can hook up increment. What we do is first get from our cache with a key; that gives us a description with a hole in it, and whenever that hole is filled in, we'll get out a value. We can increment that value and set it back in the cache. That's kind of how you read this stuff.
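
increment, wired up with get, flatMap, and set (reusing the Int.increment property from the earlier sketch):

    func increment(key: String) -> Reader<CacheConfig, Void> {
        get(key: key).flatMap { value in
            set(key: key, value: value.increment)
        }
    }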

So, that's good, right? We can provide our dependencies separately: our production dependencies, our test dependencies. It's super composable. If we want to increment twice and we have a way to increment once, then we increment once, flatMap, increment again. This composability is really crazy, alright, it's like Readers all the way down. In the context of Reader, it means that at the lowest level, when you're wrapping your side effect, you can unit test it. At the entire application level, if you build your program from Readers all the way up, you can test it. You can test at any level in between. It's all pure, it's all reusable, it's all composable. It's ridiculously amazing.
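
For example, incrementing twice is just sequencing increment with itself:

    func incrementTwice(key: String) -> Reader<CacheConfig, Void> {
        increment(key: key).flatMap { _ in increment(key: key) }
    }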


One more example here. We can increment n times. This can just be a little recursive function: in the base case, we return just Void, and we don't depend on anything. We can use Reader.pure to bring that into the context of dependencies. I'll explain that on the next slide. In the recursive case, we increment, and then recursively call this function. So we increment, increment, increment, increment, stop. Right? This pure operation is super useful, so I want to just go over it quickly.
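
The recursive version, as a sketch; it uses Reader.pure, which is defined in the next sketch:

    func increment(key: String, times n: Int) -> Reader<CacheConfig, Void> {
        // Base case: do nothing and depend on nothing, just pretend to need dependencies.
        guard n > 0 else { return Reader<CacheConfig, Void>.pure(()) }
        // Recursive case: increment once, then do the remaining n - 1.
        return increment(key: key).flatMap { _ in increment(key: key, times: n - 1) }
    }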

Reader. Right? Deps to Data. We're going to write this pure function, a static function that takes a piece of data that doesn't depend on anything, and it says, "Hey, I'm going to pretend that I depend on some Deps." Okay? To do that, all you have to do is ignore the dependencies. Right? The implementation constructs a new Reader, and the function that the Reader wraps just ignores its dependencies and returns the value. Alright?
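
pure, roughly as described:

    extension Reader {
        // Wrap a plain value so it "pretends" to depend on Deps.
        static func pure(_ data: Data) -> Reader<Deps, Data> {
            Reader { _ in data }   // ignore the dependencies, return the value
        }
    }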

There's something else you might want to do. If you have some little subprogram that depends on that triangle, and another one that depends on a circle, then you might want to combine those. For example, if we had a way to get the key associated with today, that would depend on some way of accessing the current time. That gives us back a key, and we have our increment function that we already talked about. I want to be able to implement an incrementTodaysKey function that gets today's key and afterwards uses that key to increment. The problem is I don't know what to return. What is the type of this composite operation?

Let's talk about that. The type is both. If I am an operation that over here depends on A, and over here depends on B, then the composite depends on both. So, we can do that. We can create a struct that has both the time configuration and the cache configuration inside of it. Then we can implement incrementTodaysKey, and that will return a Reader that takes the time and the cache and gives us back a Void. To do that...the TV went blank...alright. To do that, we can get today's key. This gives us back a Reader of just TimeConfig. Okay? We can use this local method to say how we're going to extract the little part of the dependency I care about from this group of more dependencies.

Here we have both time and cache, and we're projecting out time. Afterwards, we get a key out and we can increment it. We need to say, "Okay, increment only works on caches. Here's how you can access the cache configuration from both the cache and time configurations." This local function is at the end of the slide deck; if we have time, we can get to it. But the really, really cool thing about dealing with dependencies this way is that you can handle more effects all at the same time. What am I talking about?
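
A hedged sketch of the whole combination, building on the earlier sketches; TimeConfig's shape, the combined AppConfig struct, the key scheme, and this particular local signature are assumptions made for illustration:

    import Foundation

    struct TimeConfig {
        let now: () -> Date
    }

    struct AppConfig {
        let time: TimeConfig
        let cache: CacheConfig
    }

    extension Reader {
        // local: adapt a Reader that needs a small environment so it runs in a bigger one,
        // by explaining how to project the small part out of the big one.
        func local<BiggerDeps>(_ project: @escaping (BiggerDeps) -> Deps) -> Reader<BiggerDeps, Data> {
            Reader<BiggerDeps, Data> { bigger in self.run(project(bigger)) }
        }
    }

    func todaysKey() -> Reader<TimeConfig, String> {
        Reader { config in
            // Hypothetical key scheme: one key per calendar day.
            "counter-\(Int(config.now().timeIntervalSince1970 / 86_400))"
        }
    }

    func incrementTodaysKey() -> Reader<AppConfig, Void> {
        todaysKey()
            .local { (app: AppConfig) in app.time }           // project out the time part
            .flatMap { key in
                increment(key: key)
                    .local { (app: AppConfig) in app.cache }  // project out the cache part
            }
    }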

We can view depending on something as an effect. Not a side effect, just an effect in our system. Failure is an effect, which you can model with an Optional. Failure with an error is an effect, which you model with a Result. You can model asynchrony, you can model non-determinism, and these are things that are important in our programs. For example, our cache get operation probably doesn't return a default value in real life. It's going to return an optional value. So, naively, you may just make a Reader that takes a CacheConfig and returns an optional value.

The problem is, we have all sorts of combinations of functions that do different effects in our program. Like the first one here: get takes a key, it can fail, and it needs a dependency. The next one, cacheInfo, always succeeds, and maybe it's a way to get metadata out of our cache; we can always ask the cache for its metadata. Then maybe there's a way to annotate a key with extra metadata, and that can fail, but it doesn't depend on anything. Right? We have all these different functions that are using different effects, and what's interesting about this approach to dependency management is that we can capture all the effects at once.

We can move the Optional up and push it into the top-level type. We're going to get an OptionReader instead of a Reader. An OptionReader is just a wrapper around a Reader that returns an optional. We can implement pure, and map, and flatMap, and each is just delegating to the underlying type, the underlying value. But it's also useful to implement these extra lifting operations. If I have data that can fail but doesn't depend on anything, I want to be able to lift it into the context of failure and dependencies. And the same thing: the implementation is just a single line.

The same thing with a Reader that always succeeds: I want to lift it into the context of failure and dependencies, and the implementation, again, is one line. We're just mapping over the underlying Reader and wrapping the result up in an optional.
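
A sketch of OptionReader along those lines; the two lifting operations are given explicit argument labels here for clarity:

    struct OptionReader<Deps, Data> {
        let reader: Reader<Deps, Data?>

        static func pure(_ data: Data) -> OptionReader<Deps, Data> {
            OptionReader(reader: Reader { _ in data })
        }

        func map<NewData>(_ transform: @escaping (Data) -> NewData) -> OptionReader<Deps, NewData> {
            // Delegate to the underlying Reader, mapping inside the optional.
            OptionReader<Deps, NewData>(reader: reader.map { $0.map(transform) })
        }

        func flatMap<NewData>(
            _ transform: @escaping (Data) -> OptionReader<Deps, NewData>
        ) -> OptionReader<Deps, NewData> {
            OptionReader<Deps, NewData>(reader: Reader { deps in
                // Run the first step; if it produced a value, run the next step with the same deps.
                self.reader.run(deps).flatMap { transform($0).reader.run(deps) }
            })
        }

        // Lift a value that can fail but depends on nothing.
        static func lift(optional data: Data?) -> OptionReader<Deps, Data> {
            OptionReader(reader: Reader { _ in data })
        }

        // Lift a Reader that always succeeds into the failure context.
        static func lift(reader: Reader<Deps, Data>) -> OptionReader<Deps, Data> {
            OptionReader(reader: reader.map { Optional($0) })
        }
    }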

The important thing is...here's how you use it. Right? We're going to make a typealias that's really short, because we're going to use this thing all over the place. An O of a Value is an OptionReader that depends on the cache configuration and returns a Value. Then we're going to implement this loadWithMetadata function that takes a key and returns an O of Value. Okay? The first thing we do is get the metadata out with cacheInfo and lift it into our context of failure and dependencies. Then, when the hole is filled in, we will get out the cache info. We can use that info, in a way that's not shown here, to add metadata to our key. That is just an optional. We lift that into the context of Reader and failure, and then we can flatMap over get, and get is already in the right context, so we don't have to do anything.
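
Putting it together as a sketch: the CacheInfo type, the cacheInfo and addMetadata bodies, and the failable variant of get here are hypothetical stand-ins for the operations described, not the slide code:

    typealias O<Value> = OptionReader<CacheConfig, Value>

    struct CacheInfo {
        let namespace: String
    }

    // Always succeeds, needs dependencies.
    func cacheInfo() -> Reader<CacheConfig, CacheInfo> {
        Reader { _ in CacheInfo(namespace: "v1") }
    }

    // Can fail, needs no dependencies.
    func addMetadata(key: String, info: CacheInfo) -> String? {
        key.isEmpty ? nil : "\(info.namespace)/\(key)"
    }

    // Can fail and needs dependencies: no default value this time.
    func get(key: String) -> O<Int> {
        O<Int>(reader: Reader { config in
            config.disk.read(key: key) ?? config.network.fetch(key: key)
        })
    }

    func loadWithMetadata(key: String) -> O<Int> {
        O<CacheInfo>.lift(reader: cacheInfo())                       // lift "always succeeds"
            .flatMap { info in O<String>.lift(optional: addMetadata(key: key, info: info)) }
            .flatMap { annotatedKey in get(key: annotatedKey) }      // already in the right context
    }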

Maybe if you squint, you can kind of see how you can build up your program in this way. Right? What's cool is that with this stack, you can combine more than one effect at a time. More than two, I guess. I've used this with futures as well, like a FutureReader. When you map, you map over the Reader and the future at the same time, and it's really, really, really nice.

This concept is actually called the Reader monad. Yeah, the M word. Sorry. But, yeah. You don't need to understand monads to use Reader. You may not totally understand what's going on just by looking at the slides, so I encourage you to play with it, but you totally don't have to understand monads to use Reader. There is a constant amount of learning that you have to do, that everyone on your team has to do, in order to get up to speed and get comfortable using this thing. But you get this return later on where you can unit test everything, everywhere, at any level. It's all pure. It's all composable. You can mix and match things. It's just amazing.

I think you can use the same argument that Ryan used yesterday: if you pay the down payment, you get tons of returns. You know? But again, you don't need to understand monads. Don't be afraid of the word...don't be afraid of any words. Words are fun. Yeah. There are a few other typesafe dependency injection methods I want to touch on. There's a thing called the cake pattern. It's called cake because there are layers, and layers, and layers of protocols. You can dig into that later. There's a thing called a free monad, which is where you make your operations part of a data structure: you make them an enum and then later you interpret the enum. So you can look at that later as well.

Here is a link to a GitHub repository that is littered with comments, in a good way, I think, and has examples of cake, free, and Reader with the combining of effects. I really encourage people who are interested to dig into that, because you can move more slowly through it.

Just to recap: we write code wrong when we use singletons without really thinking about our dependencies. It seems like a lot of us in this community don't do that anymore, which is great, because singletons are global variables. Dependency injection is the way you solve this problem: we think about our dependencies, and then nice things happen. But dependency injection libraries themselves are mostly not typesafe, and that, to me, is a design flaw. So we need some new ideas. Right? Reader provides typesafe dependency injection just by wrapping a function.

It's like what Brandon was saying earlier: everything is a function. It's seriously true. Everything is a function, and because it's so composable, you can unit test at any layer. All the way down, all the way up, anywhere in between. What's cool is that Reader combines with other effects, so you can simultaneously map over failure, running things in the background, dependencies, non-determinism, whatever combination you need for your application.

Finally, you can check out Cake and Free for more ideas because you should be inspired to do whatever makes sense for you. Thanks.