Cooperative Path Finding
Matt Comi at Playgrounds Conference 2017
Hey. I'll be discussing Cooperative Path Finding, one small element involved in making games.
My name's Matt, and as you've been told, I'm an independent iOS game developer from Perth, and my little tiny company is called ‘Big Bucket Games’. It was founded in 2009 and is nowadays made up of three guys. Neven and Cabel are my partners and they're based in Portland, Oregon. That's a little way away, but we've managed to make it work this far. We've released three games. The Incident was our first, in 2009, and if you haven't played it, it's sort of an endless runner, except vertical: objects fall from the sky and you have to dodge them and climb onto them. Space Age is a real-time strategy/adventure game mashup, which is how I like to describe it. And Stage Hand, most recently, is a platformer where you control the world instead of the character.
Path Finding in Space Age
Now, I'll be discussing pathfinding as it pertains to Space Age. More specifically: Space Age is styled as retro-futuristic sci-fi, a kind of 1950s view of what the future was going to look like. It's sort of a zoomed-in real-time strategy with elements of adventure games, like dialog and inventory, but scoped into little missions. Missions take place on maps, and a map is simply a bunch of tiles. These gridlines delineate those tiles, and the red tiles are the ones that are impassable, or obstructed.
Now, you don't have direct manipulation of the characters. Instead you control those characters, or units, by tapping on them and then tapping where you want them to go, so that looks like this. The pathfinding, the ability to get from that point to the x, takes place the moment you tap. So before he starts moving, he's already determined where he's going to go. That's the first part of a pathfinding system: the actual pathfinding itself. Now, you can imagine a naïve version of how a pathfinder might work: start at the origin, where he was originally standing, and expand outward, exploring every neighboring tile. Is that tile the destination? You're done. Is it not? Explore its neighboring tiles, and so on, until you eventually find the destination. And if you keep a link back from the destination tile through each of its previous neighbors, you'll get back to the origin and you'll have a path, but in reverse. Flip that, and you're done.
Guide the algorithm with a heuristic
Now, that very strange thing that I just described is either a breadth-first or a depth-first search, depending on how you implement it. Those types of algorithms are referred to as "unguided", meaning they don't have any information to get them from where they are to where they want to be. To guide them, you need what's called a "heuristic": a best guess that helps the pathfinding algorithm get to the destination. More specifically, the heuristic is the approximate cost of moving from any tile to the destination. Space Age uses the ‘Manhattan Distance’ heuristic, which is distance measured at right angles. So here's an example of what I'm talking about: the Manhattan Distance from the blue guy to the computer is four, because there are four tiles separating them.
Meanwhile, to that rock, it's two. And then to that switch over there, it's again four.
Despite the fact that it passes through that wall, the heuristic doesn't know anything about that wall, because if it did, we wouldn't need a pathfinder; we'd be done. The Manhattan Distance approximation is convenient because it's cheaper than a Euclidean distance, an actual straight-line distance: to calculate that, you need a square root, and Space Age is a few years old now, and on old devices you save on square roots as often as you can.
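The talk's code is in Swift; as a reference sketch, here's the Manhattan Distance heuristic in Python (the function name is my own):

```python
def manhattan_distance(a, b):
    """Distance measured at right angles between two (x, y) grid points.

    This is only an approximate cost to the destination: it knows nothing
    about walls, and it avoids the square root a Euclidean distance needs.
    """
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
```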
Now, the algorithm that uses the heuristic is the A* algorithm. It's very similar to what I originally described, the idea of visiting each tile's neighbors, exploring those neighbors and so on, except that the exploration is guided by that heuristic, so it will favor tiles that head towards the destination. Now, the A* algorithm is made up of these four concepts.
I'll describe each of those.
Now, the Open Set, as it says, is the tiles that are on the fringe of the explored path. So if you imagine exploring a path to mean expanding outwards from the origin, then what you can visualize is a clump of tiles. The perimeter of that clump is the Open Set.
The Closed Set is everything else in that clump.
G Score & F Score
Now for every tile or node in that path that's being explored, there is a G Score and an F Score that's being calculated.
The G Score is the cost from the origin to this tile.
The F Score is a combination of that (G Score) plus the heuristic.
These things will make sense hopefully.
To implement this algorithm, you need more or less these five classes.
A Tile, in the simple naïve case, is just something that's either obstructed or not.
A Map is just a 2D array of tiles.
A PointI is just a CGPoint, but for integers.
And for the purposes of the pathfinder, we're gonna need a couple of useful functions, equality and neighbors, which is handy because it saves having to do double for loops inside the algorithm itself.
A Node is what ties it together.
A Node knows the position, the F Score, the G Score and the parent. It ties back to the parent of a tile, or point on the map. I'm being vague in what I'm referring to them as because the pathfinder really doesn't care whether you've got a 2D map or a 3D space or anything. It just needs to know how to navigate through a collection of things, and in this case, that just so happens to be a 2D map. But anyway, if we have a Node, and a Node has a pointer to its parent, then what we do to trace back the path is just continually call node.parent.parent.parent until we get all the way from our tile back to the origin. The initializer is straightforward.
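A minimal Python sketch of that Node and the parent walk (the trace helper's name is my own, not from the talk):

```python
class Node:
    """A visited position plus its scores; `parent` points back toward the origin."""
    def __init__(self, position, g_score=0.0, f_score=0.0, parent=None):
        self.position = position  # e.g. an (x, y) tuple standing in for PointI
        self.g_score = g_score    # cost from the origin to this node
        self.f_score = f_score    # g score plus the heuristic
        self.parent = parent      # the node we came from; None at the origin

def trace_path(node):
    """Walk node.parent.parent... back to the origin, then flip the result."""
    path = []
    while node is not None:
        path.append(node.position)
        node = node.parent
    path.reverse()
    return path
```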
With Set, I won't go into too much detail. This is just the protocol; it's basically an array with a limited interface. We need to be able to insert nodes into our open and closed sets, remove from them, find whether there's a node at a particular position, and grab out the one with the lowest F Score. Yeah.
Now, this is where the interesting stuff happens:
This, as a good delegate should, is what powers the pathfinder. We need to know: is the position valid? Is it the destination? What is the cost of moving between two tiles? Those two tiles always need to be neighboring; I couldn't come up with a better interface to insist that they're neighboring without confusing the parameters. And then the heuristic cost.
So there are a few of these in Space Age, and this is the most intuitive one:
It's the one that lets you get to a point from another point. It's initialized with a map and a destination, and a valid position is simply one that is within the bounds of the map and is unobstructed. The next one should be clear enough. Now, this one is fairly intuitive: moving horizontally or vertically between two neighboring tiles costs 1, and there's a slightly higher cost in moving diagonally, because the diagonal distance is longer. I say 1.5 for the purposes of clarity when I show an example later, but really that number should be the square root of two, because, if you can imagine it, the diagonal is the hypotenuse of a triangle whose sides are one and one.
Anyway, and then the Manhattan Distance is what we're using for our heuristic.
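Sketched in Python, a move-to delegate along those lines might look like this (the class and method names are adapted from the talk's Swift, not verbatim, and the grid representation is an assumption):

```python
class MoveToPathFinderDelegate:
    """Drives the pathfinder toward one fixed destination tile."""
    def __init__(self, grid, destination):
        self.grid = grid                # grid[y][x] is True when the tile is obstructed
        self.destination = destination  # an (x, y) tuple

    def is_valid_position(self, p):
        """Within the bounds of the map, and unobstructed."""
        x, y = p
        return 0 <= y < len(self.grid) and 0 <= x < len(self.grid[0]) and not self.grid[y][x]

    def is_destination(self, p):
        return p == self.destination

    def cost(self, a, b):
        """Cost between two *neighboring* tiles: 1 straight, more for diagonal.

        The talk uses 1.5 for clarity; geometrically it should be sqrt(2).
        """
        return 1.0 if a[0] == b[0] or a[1] == b[1] else 1.5

    def heuristic_cost(self, p):
        """Manhattan Distance to the destination."""
        return abs(p[0] - self.destination[0]) + abs(p[1] - self.destination[1])
```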
The below slide is here for completeness. I won't go through it.
A very, very brief summary, and I will try not to just read it off the screen, but that's what I'm gonna do, actually, as I'm gonna stuff it up if I don't. Realistically, this is here so that if you want to look at the slides later, there's the algorithm; you can go off and make yourself an A* pathfinder.
But what it's really doing is: it starts at the origin, adds a node for that origin tile to the open set, and then, in a loop, it takes the node with the lowest f score, closes it (which is to say, moves it from the open set to the closed set), explores the neighbors of that node, putting them into the open set and calculating their f and g scores, and it continues to do this until it finds the destination tile.
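That loop, as a self-contained runnable Python sketch (the Node and neighbor helpers are repeated here so it stands alone, and a dict keyed by position stands in for the talk's Set protocol):

```python
class Node:
    """A visited position, its scores, and a pointer back toward the origin."""
    def __init__(self, position, g_score, f_score, parent=None):
        self.position, self.g_score, self.f_score, self.parent = position, g_score, f_score, parent

def neighbors(p):
    """The eight tiles surrounding p."""
    x, y = p
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def find_path(origin, delegate):
    """A* search; the delegate supplies validity, the goal test, and costs."""
    open_set = {origin: Node(origin, 0.0, delegate.heuristic_cost(origin))}
    closed = set()
    while open_set:
        node = min(open_set.values(), key=lambda n: n.f_score)  # lowest f score...
        del open_set[node.position]
        closed.add(node.position)                                # ...gets closed
        if delegate.is_destination(node.position):
            path = []                                            # follow the parents back...
            while node is not None:
                path.append(node.position)
                node = node.parent
            return path[::-1]                                    # ...and flip
        for p in neighbors(node.position):
            if not delegate.is_valid_position(p) or p in closed:
                continue
            g = node.g_score + delegate.cost(node.position, p)
            known = open_set.get(p)
            if known is None or g < known.g_score:               # new, or a better route
                open_set[p] = Node(p, g, g + delegate.heuristic_cost(p), node)
    return None  # no route to the destination
```

Any object with is_valid_position, is_destination, cost and heuristic_cost can drive it, which is the delegate idea from the talk.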
Alright, here is an example:
So he wants to get to the x. So we'll say that what's happened is the user has just clicked on that x. This is what's happening under the hood.
The green represents a tile that is in the open set. The f score is 4 because there are four tiles separating that tile and the destination, and the g score is 0 because it hasn't moved yet; to get from the origin tile to this tile, there's no cost.
So now this is the part where we get into the loop, which is to say, "find the lowest f score tile". It's that one, obviously. Close it.
It's now in the closed set, so it's red. Those black arrows represent the parent pointer of each node; they point from the node back to the node it came from. Okay, now I'll describe this one on the right here, because I can't go through all of them and still fit in 25 minutes. I'll start with the g score: it's 1 because the movement cost is one; we have to move horizontally from the origin. And the f score is 4 because the heuristic is 3, three tiles separating that tile from the destination, plus the g score of one, equals four. Hope that made sense. The f score, again, is the heuristic cost plus the g cost. I'm gonna keep switching between saying cost and score and I'm not gonna forgive myself, so, all right. So now we have to find the lowest f score, close that, and explore its neighbors. It's that one;
…and now we've moved on to exploring its neighbors.
So you can see it's being guided in the direction of the destination, not knowing, of course, that it's about to hit a wall, but regardless, this is the best it can do. Now, if we explore all of these neighbors, nothing happens, because there are no new neighbors to explore.
It does actually explore the node that it came from, and it can sometimes explore another node that's in the open set and replace the arrow, if it discovers that getting from this tile to that green tile is a better option. But this is really sort of an edge case that I've never actually seen an example of, so forget I said anything. All right, we need to find the next lowest f score. Now, there are three of them, and it is deterministic which one it's gonna pick, but I can never remember which one it'll be. But I can cheat.
Okay, so it's gonna be the bottom one, and that's simply because of the sort of first-in, first-out nature of how it works; I think I'm doing all the Xs first and all the Ys second, but I don't remember. So anyway, there.
And again, we have that same trick of; we've explored the neighbors, we've calculated the f scores and the g scores, and we have to find the next lowest, it's gonna be 6 again.
And once more.
And now we've explored all of the sixes, so now we have to find the next best. That one at the top-right is a 7.5. There is another 7.5, but it turns out it's gonna be that one.
And so on and so on until eventually, we're led to the destination.
Now that we've got this, it's just a matter of following those arrows back to the origin and flipping the array that you've got from collecting those nodes, and you can create a path:
And then this is how that looks. (Animation is shown on screen). Woah, seems slightly faster than I remember. Anyway, cool. There you go.
So that's how A* algorithm works. Now, that's only part of the picture.
It's the more complicated part of the picture, and I'm glad that it's behind us now, because if I was going to lose anybody, it was gonna be there. But this is the fun stuff. So that's A* in its entirety: in the parlance of Swift and Objective-C, it's simply using a delegate to control a graph search. But to go beyond that and have intelligent-looking units, you need to go a bit further.
Coordination & Cooperation
So I favor dynamic sorts of techniques, which is to say, things that occur during the movement along that path, versus things that happen up front. That's because the other units on the map will change, and that will affect the validity of the path that was originally determined. Hopefully that makes a little more sense after I explain the first one, which is: Avoidance.
So despite what I just said about the favoring dynamic techniques, this one actually is static, but it's the only one.
It's something that affects the pathfinder: what I'm doing is adjusting isValidPosition so that a tile not only has to be within the bounds and unobstructed, it also has to be a tile that either isn't occupied by a unit, or is occupied by a unit that isn't attacking. Which is one way of saying an attacking unit is as good as an obstruction.
As before, horizontal or vertical movement costs 1 and diagonal costs 1.5, but if the tile is occupied, add 6. Now, that number six is just plucked out of thin air; the higher the number, the more likely the unit is to detour, because it will favor walking several steps rather than bumping somebody out of the way.
Now to implement these changes, here are a couple of structures we're gonna need.
We're gonna need to define Unit and UnitState, simply so that we can know whether the unit occupying a tile is attacking or not.
A unit has a position, a path and a state, and now that we have a unit, we can update Tile to include that.
So is there an occupying unit and who is it?
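A Python sketch of those avoidance changes together; only the two methods that change are shown, and the state name, penalty constant, and occupants mapping are my stand-ins for the talk's Unit/UnitState and Tile:

```python
ATTACKING = "attacking"
OCCUPIED_PENALTY = 6.0  # plucked out of thin air; higher values favor longer detours

class AvoidingDelegate:
    """The avoidance tweaks to the move-to delegate."""
    def __init__(self, grid, occupants, destination):
        self.grid = grid            # grid[y][x] is True when the tile is obstructed
        self.occupants = occupants  # position -> occupying unit's state
        self.destination = destination

    def is_valid_position(self, p):
        """In bounds, unobstructed, and not held by an attacking unit."""
        x, y = p
        if not (0 <= y < len(self.grid) and 0 <= x < len(self.grid[0])) or self.grid[y][x]:
            return False
        # An attacking unit is as good as an obstruction.
        return self.occupants.get(p) != ATTACKING

    def cost(self, a, b):
        """Base movement cost, plus a penalty for bumping an occupying unit."""
        base = 1.0 if a[0] == b[0] or a[1] == b[1] else 1.5
        return base + (OCCUPIED_PENALTY if b in self.occupants else 0.0)
```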
And here is how those changes look. (Animation is shown on screen). So there's no need to detour, he just walks right around that guy. Now in this case, there's a short detour that he might have to take, so he takes it, rather than bumping. Now in this case, there's a long detour. So he favors bumping. Same detour length, but two guys. He takes the detour instead of bumping two guys.
And that sort of appearance of intelligence comes simply from the rules the pathfinder uses to evaluate cost. I just realized that terrible scrunching noise was the footsteps in this video.
Okay. Stalling. This is where the unit will pause to wait for another unit to get out of the way, hopeful that it eventually will. Stalling means that two units can't stand on the same tile.
So Doc, the blue guy there, he can't stand in Arch's tile until she steps out of it. She looks like she's about to, so he'll just be there for a few seconds.
In this case:
Two units also shouldn't be allowed to stand that close to each other; despite the fact that they're both in their own tile, that looks ridiculous. Stalling ensures that neither of those two things can happen.
And here's stalling in practice. (Animation is shown on screen). And there you go. So they both evaluated their paths up front, neither of them considered each other, and yet when they came to that same point in the middle there, one waited for the other, and so everything was fine. No upfront knowledge was needed to let that happen.
So multiple units are able to move together at once in a group. This is how that works. Let's say all of those guys want to walk to that x over there.
First thing that needs to be done is they need to be sorted, because once they're selected, they basically just exist in their array in whatever order they were added to the map in the first place.
The sorting helps with evaluating this bounding rectangle, because it means, for example, that the very last item is the bottom-right corner, and it helps with some other shortcuts like that.
Once we have that, we figure out what I call the minimum bounding rect. It's basically these guys, top-left justified. Like that.
I removed the background because this isn't actually happening on the map; it's just a way of calculating what their final formation is going to be. And now that we know what that's going to be, those are the dimensions of it. Okay. The next trick is to figure out who is going to be standing on the x when the group moves. Who that's going to be is the person who's closest.
That guy and now they actually do the moving. Click. (Animation is shown on screen). And you can see they're huddling into that formation that I showed you a second ago.
And there it is, so that's group movement.
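The formation step above could be sketched in Python like this (the helper names are mine): top-left-justify the units' positions to get the formation shape, and pick the closest unit for the tapped tile.

```python
def formation_offsets(positions):
    """Top-left justify the units' positions: the group's final formation shape."""
    min_x = min(x for x, _ in positions)
    min_y = min(y for _, y in positions)
    return [(x - min_x, y - min_y) for x, y in positions]

def closest_unit(positions, target):
    """The unit that stands on the tapped tile is the one nearest it (Manhattan)."""
    return min(positions, key=lambda p: abs(p[0] - target[0]) + abs(p[1] - target[1]))
```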
Now again, the units aren't going to consider each other's paths when calculating their path to the destination, but if they bump into each other, they'll stall. In that example, there was no stalling necessary, but in this example, there will be.
They all want to get to the other side of this structure. So again, they'll all move independently, but when they encounter somebody else that's in their way, they'll deal with it, and here's how that looks. (Animation is shown on screen). Nice and tidy. Okay. Finally, timing is going well.
Alright, last bit is deadlocks. A deadlock is basically when stalling doesn't work. Stalling is simply, as I've said, waiting for somebody else to get out of the way, but sometimes they can wait on each other.
So in this simple deadlock, orange is waiting on blue, blue's waiting on orange, there's nowhere else to go, and they'll just wait forever. Simple deadlock.
Here, A's waiting on B, but B's not waiting on A; he's waiting on C, and C is waiting on A. So there's a loop. Now, to detect a deadlock, each unit has a stalled unit property, set to the unit that it's waiting on to move out of the way. So A's stalled unit is B, B's stalled unit is C, and C's is A. If you start from A and continue to walk the stalled unit property until you find either nil or the unit you started from, then finding the starting unit again means there's a loop. And if there's a loop, ditch the path you're following and follow a different one instead. Which is to say: abandon the path formed by the move-to pathfinder delegate, and instead use a path constructed by the move-away pathfinder delegate.
But don't ditch the old destination, stash the old destination away because once we've moved away, we want to go back to where we were planning to go in the first place.
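That stalled-unit walk, sketched in Python. One assumption on my part: tracking every visited unit, rather than only checking for the starting unit as the talk describes, also guards against a wait cycle that the starting unit feeds into but isn't part of.

```python
class Unit:
    """Just enough of a unit for deadlock detection."""
    def __init__(self, name):
        self.name = name
        self.stalled_unit = None  # the unit this one is waiting on, or None

def is_deadlocked(unit):
    """True if following stalled-unit pointers from `unit` ever revisits a unit."""
    seen = set()
    current = unit
    while current is not None:
        if id(current) in seen:
            return True       # a loop: everyone in it will wait forever
        seen.add(id(current))
        current = current.stalled_unit
    return False              # the chain ends in a unit that isn't waiting
```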
So the MoveAwayPathFinderDelegate has similarities to the move-to pathfinder delegate.
So you initialize it with the map and the otherUnits.
isValidPosition is just like the other one. isDestination is very different: there's no one destination, it's just any position that isn't along any other unit's path. Which also means there is no heuristic cost; we're only concerned with the g cost, the cost of actually moving from where you are, so the heuristic is always 0.
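A Python sketch of the move-away delegate under those rules (names adapted from the talk; other_paths is assumed to be a list of the other units' paths):

```python
class MoveAwayPathFinderDelegate:
    """Finds any tile that isn't along another unit's path."""
    def __init__(self, grid, other_paths):
        self.grid = grid  # grid[y][x] is True when the tile is obstructed
        # Every position claimed by somebody else's path.
        self.blocked_positions = set().union(*other_paths)

    def is_valid_position(self, p):
        x, y = p
        return 0 <= y < len(self.grid) and 0 <= x < len(self.grid[0]) and not self.grid[y][x]

    def is_destination(self, p):
        """Any position off the other units' paths will do."""
        return p not in self.blocked_positions

    def cost(self, a, b):
        return 1.0 if a[0] == b[0] or a[1] == b[1] else 1.5

    def heuristic_cost(self, p):
        return 0.0  # no single destination to guess toward
```

With a zero heuristic, A* degrades into a uniform-cost, Dijkstra-style flood outward, which is what you want for "the nearest free tile".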
And here is how that looks:
So what we have is a narrow corridor; there's no getting around somebody. The green guy wants to move to where the orange guy's standing, and the orange guy vice versa. So they're gonna meet in the corridor, they're gonna deadlock, and the orange guy is gonna abandon his path, do his move-away pathfinding, then move back again. Here's how that looks. (Animation is shown on screen). Pretty cool. Alright.
So those are, I hope, the most interesting parts of pathfinding, and I've put all of that code up in a playground that you can grab from there. So thank you very much.
If you enjoyed this talk, you can find more info: