Sequential Bus Theory and why it matters to GMs
I’m writing this article on the day that the idea occurred to me, but I’ve held it back until an opportune gap appeared in the publishing schedule.

Image provided by FreeImages.com / Michael Zacharzewski
I was waiting for the bus today (well, on the day that I wrote this), and that got me to thinking. More on that a little later.
If you live anywhere near the end of a long bus route, you will probably be familiar with the fact that the buses are almost never on time. If you ever have occasion to venture into the heart of a city with multiple public bus routes, you will also have seen the phenomenon of one bus following another along the exact same numbered route. Both of these are aspects of Sequential Bus Theory.
About 10 to 15 years ago, there was an interesting article in Discover magazine, which at the time I bought religiously every month (at least until rising prices and the cost of living put it out of my reach and broke the habit, but that's another story). The article reported on the analysis of a math or physics professor (I forget which) who was waiting for the bus and filled the time by analyzing the maths of what happens when you have two buses traveling along the same route.
I found the fact that there was math to explain the observed phenomenon mildly interesting, but what struck me more than anything else was the surprise the article seemed to evince at the professor's findings. After all, it seemed to me, simple logic and some rather obvious assumptions made the results inevitable.
The Logic Of Bus Schedules
If you were a bus scheduler, how would you determine when the bus was supposed to arrive? You would assume an average speed of travel that would be a function of the speed limit and the number of times the bus had to stop and start. You would factor in the average waiting time at each red light along the way, and you would allow a bit of a fudge factor for variables. In order to allow for the number of times the bus had to slow, stop, and start again at bus stops, you would determine the average passenger numbers and average length of trip. You would then apply a statistical analysis that would tell you that some points along the way – where the bus route intersected shopping centers and railway stations, for example – would be cluster points where a great many of the passengers who had accumulated en route got off, and more passengers than usual got on, only to disperse, a few at a time, at subsequent stops.
Taking all this into account, you would prognosticate how long it should take the bus to reach each stop along the way, and publish your schedule accordingly.
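To make that arithmetic concrete, here's a minimal sketch in Python; every figure in it (segment times, dwell time per passenger, the fudge factor) is invented for illustration rather than drawn from any real timetable:

```python
# Naive timetable arithmetic. Every quantity here is an invented average.
SEGMENT_MINUTES = 2.5        # average driving time between consecutive stops
LIGHT_DELAY_MINUTES = 0.4    # average wait at red lights, per segment
DWELL_PER_PASSENGER = 0.1    # minutes added per passenger boarding or alighting
AVG_PASSENGERS_PER_STOP = 3  # average boardings plus alightings at each stop
FUDGE_FACTOR = 1.05          # 5% padding to cover "variables"

def scheduled_arrival(stop_index):
    """Minutes after departure at which the timetable says the bus reaches a stop."""
    driving = stop_index * (SEGMENT_MINUTES + LIGHT_DELAY_MINUTES)
    dwelling = stop_index * AVG_PASSENGERS_PER_STOP * DWELL_PER_PASSENGER
    return (driving + dwelling) * FUDGE_FACTOR

if __name__ == "__main__":
    for stop in (5, 10, 20, 40):
        print(f"Stop {stop}: scheduled at {scheduled_arrival(stop):.1f} minutes")
```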
And it would never be right.
The reality of Bus Schedules
Statistically, for any given bus route, there is an average number of passengers getting on and an average number getting off, and over the whole route the two totals have to be equal (because by the final stop, everyone has to get off the bus). Now, consider the effect of just a single passenger more or less using that bus at a handful of stops. Each such additional passenger takes time to get on and time to get off, and increases the likelihood that the bus will have to stop at all at any given bus stop. The inevitable result is that each extra passenger delays the bus just a little, and those little delays accumulate over the entire trip.
It doesn’t have to be an extra passenger. It might be getting a run of red lights – it happens – or having to wait for an extra vehicle to make a turn at a busy intersection, or any of a dozen other things.
Time, once lost, is inordinately hard to make up. At first, the "fudge factor" would mask the deficit in arrival time; but over the years, bus schedulers, mindful of an ever-less-tolerant customer base, would eat away at that fudge factor in pursuit of greater accuracy in the timetables. In theory, each such cut makes the timetable more precise; in practice, it achieves very little.
The Exponential Timetable Catastrophe
But that's not all. Passengers are not static things that simply exist at bus stops until the bus arrives; they are a dynamic phenomenon, always arriving from somewhere. The average number waiting at a stop is simply the average rate at which they reach the stop multiplied by the interval since the last bus. So, if a bus is delayed by anything, there is more time for passengers to reach each stop, ensuring that the bus is delayed still further, which in turn gives still more passengers time to reach the stops ahead.
And there are so many factors that could cause that initial delay that the result seems inevitable – the bus will always be late by the time it gets close to the end of its scheduled route.
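If you want to see that feedback loop in action, here's a toy simulation; the arrival rate and dwell time are invented numbers, and the only thing that really matters is the feedback term, where every minute of accumulated delay means more passengers waiting and hence more dwell time:

```python
import random

ARRIVAL_RATE = 0.3          # passengers reaching each stop per minute (invented figure)
DWELL_PER_PASSENGER = 0.1   # minutes of boarding time per passenger (also invented)
STOPS = 40

def simulate_delay(seed=0):
    """Toy model: the timetable already budgets for the expected passengers, so only
    the passengers who turned up because the bus was late (or who missed it because
    it was early) change the dwell time relative to the schedule."""
    random.seed(seed)
    delay = 0.0                                        # minutes behind (+) or ahead (-) of schedule
    for _ in range(STOPS):
        extra_waiting = ARRIVAL_RATE * delay           # extra (or missing) passengers at this stop
        delay += extra_waiting * DWELL_PER_PASSENGER   # lateness feeds on itself
        delay += random.uniform(-0.5, 0.5)             # traffic luck, good or bad
    return delay

if __name__ == "__main__":
    finals = [round(simulate_delay(seed), 1) for seed in range(10)]
    print("Final delay in minutes (negative = early) for ten trips:", finals)
```

Run it a few times and you almost never see a trip finish close to zero; the drift always snowballs one way or the other.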
Flawed Compensation and Good Luck
If you were a clever scheduler, you might assume that there will be some sort of delay somewhere in the journey and have the bus set off a little earlier than the strict law of averages dictates. Even without this, some buses will inevitably have a good trip instead of a bad one: a few late-running passengers who miss the bus, a run of green lights, traffic that conveniently gets out of the way. Each such event puts the bus a little further ahead of schedule, giving fewer passengers time to reach the stop before it arrives and requiring fewer stops, and once again we have an exponential effect, this time in the other direction.
Again, the result seems inevitable; taking into account our earlier finding, the bus will always be either early or late at any given stop, and only the amount will vary.
Sequential Buses
Let’s expand our simple universe to describe two buses running along the same route. The first bus to arrive at a stop picks up the passengers who are waiting, obviously. So, if the first bus is late, what happens to the second bus?
Well, the bus that was late picks up passengers who (theoretically, according to the timetable) should have had to wait for the next one to arrive. That makes it progressively later and later. Meanwhile, the second bus, which departed some interval after the first, finds fewer passengers to pick up, simply because those passengers are now riding on the first bus; so it runs progressively further and further ahead of schedule.
Inevitably, either the second bus waits for an extra period somewhere to get back on schedule (irritating the passengers on board, who just want to reach their destination ASAP), or it catches up to the first bus. The next time the first bus stops and the second has no-one wanting to board or alight, it will overtake, becoming the first bus in line. But that simply means that at the next stop, where more people than usual are waiting because the route is running late, the new lead bus has to stop for them to get on board, and (unless someone wants to alight) the bus now behind will cruise back past it. The buses begin playing hopscotch, passing each other time and time again, until they reach their destinations.
What if the first bus in line is the one that gets all the good luck? Well, that means that it will be early, and continue to get earlier, and so close in on the bus that left before it, while the second bus will find more passengers waiting than average, will be delayed, and will grow progressively and exponentially later, falling back toward the bus scheduled to depart after it. Same result, in other words.
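Extending the earlier toy model to two buses shows the bunching directly; again, all the numbers are invented, and the key assumption is that the second bus only meets passengers who arrived after the first bus left:

```python
import random

ARRIVAL_RATE = 0.3          # passengers per minute (invented)
DWELL_PER_PASSENGER = 0.1   # minutes per boarding passenger (invented)
HEADWAY = 15.0              # scheduled minutes separating the two buses
STOPS = 60

def simulate_pair(seed=0):
    """Return the stop at which bus 2 catches bus 1, or None if it never does."""
    random.seed(seed)
    d1 = d2 = 0.0                      # each bus's delay against its own timetable
    for stop in range(1, STOPS + 1):
        d1 += ARRIVAL_RATE * d1 * DWELL_PER_PASSENGER + random.uniform(-0.4, 0.6)
        # Bus 2 only meets passengers who arrived after bus 1 left, so its extra
        # dwell depends on how its delay compares with bus 1's delay.
        d2 += ARRIVAL_RATE * (d2 - d1) * DWELL_PER_PASSENGER + random.uniform(-0.4, 0.6)
        if HEADWAY + d2 - d1 <= 0:     # the actual gap between the buses has closed
            return stop
    return None

if __name__ == "__main__":
    print("Stop at which bus 2 catches bus 1 (None = never):",
          [simulate_pair(seed) for seed in range(12)])
```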
The Law of Averages
It's fairly unusual for everything to go one way or the other, and that's the saving grace for our bus scheduler, who would otherwise be at his wits' end about now. It's like tossing a coin a great many times in succession: every specific sequence has an equal probability of occurring, whether it be HHHHH or TTTTT or HTHHT, but there are so many more sequences of "mixed" heads and tails that the extreme outcomes (all heads or all tails) are unlikely. The longer the run of coin tosses, the more unlikely they become.
In terms of our buses, the longer the route, the more chance there is for something to either speed or delay the bus, but the less likely it is to be consistently one thing or the other. There is a certain level of resilience to the timetable as a result, one that tends to push the bus back toward being 'on time'. However, past a certain threshold, this effect will be overcome by the exponential nature of either delay or advance, and the longer the route, the greater the chance that at some point, that will occur.
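You can check that intuition with a few lines of Python; the 10% tolerance band is an arbitrary choice of mine, but the trend is what matters:

```python
from math import comb

def prob_nearly_balanced(n, tolerance=0.1):
    """Probability that heads land within +/- tolerance of 50% over n fair tosses."""
    lo = int(n * (0.5 - tolerance))
    hi = int(n * (0.5 + tolerance))
    return sum(comb(n, k) for k in range(lo, hi + 1)) / 2 ** n

if __name__ == "__main__":
    for n in (10, 50, 200):
        print(f"{n} tosses: P(all heads) = {1 / 2 ** n:.2e}, "
              f"P(within 10% of even) = {prob_nearly_balanced(n):.2f}")
```

The longer the run, the more vanishingly rare the extremes become and the more the results crowd around the average.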
The Length Of Route Criticality
So the shorter the bus route, the more accurate the timetable will be. Obvious, right? But shorter bus routes are inefficient, because there is a turnaround time and a period of inactivity at the end of each run, while the driver waits for the next scheduled departure time. Cost-effectiveness promotes longer bus routes, minimizing this dead-time in proportion to the period of time in which the bus is performing its function of conveying passengers.
Governments in democracies hate wasting money; it makes them too easy a target for the opposition. It tends to lose you government. That's one reason why promising to make the buses run on time is always a popular election platform; not only does it target 'government waste' and imply reduced demand for taxation (leaving more money in the government's pocket for other services, or more in the pockets of taxpayers, or some combination of both), but it implies a promise of making life more convenient. It's all gravy, in other words, so long as you actually deliver.
The length of route is critical to efficiency of operation and accuracy of timetables, but in opposite directions. That means that somewhere in the middle, there is going to be an optimum point of balance between both – and that shifting the priority this way or that just a little bit makes it easy to appear to achieve such a promise.
It seems obvious to me (others might disagree) that the optimum balance lies at that combination of frequency of service and route length (usually another compromise) at which an above-average level of delay, or worse, is just enough to trigger "hopscotching"; in other words, where there is a 50-50 likelihood on any given trip of an inaccuracy catastrophic enough to overcome that "stable threshold". And that means that any given bus has roughly a 50% chance of running early and a 50% chance of running late, and that at regular intervals enough metaphoric coin tosses will have been made for one bus to get too many heads or tails in a row, and you will get hopscotching.
You tend to see hopscotching more often in city centers simply because they are, by definition, hubs for public transport; with more bus routes coming together there, there are more chances to happen across the phenomenon.
What has this got to do with Gaming?
Excellent question. Gaming is full of discrete events that accumulate towards specific targets: the expected length of a combat (based on how many hits you think the PCs will land and how much damage those hits will do relative to a hit point target), or the accumulation of XP, for example.
If you think of each XP handout as the delay caused by passengers getting on or off a bus, then the inaccuracy of the timetable at the end of the route is analogous to how far off your prediction of reaching any specific total will be, like the XP needed to gain a level. What's more, gaining enough XP to level up earlier than expected can force the GM to increase the level of threat required to challenge the party, which only increases subsequent XP awards; so this, too, is an exponential relationship.
Unfortunately, there is no analogous threshold caused by the law of averages, because the inputs – the parameters that define the amount of XP earned – are not random, and hence not governed by the law of averages. Instead, they are functions of character levels, and part of the problem.
That’s why many campaigns seem to spin out of control, especially at higher levels (analogous to longer bus routes).
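Here's a toy illustration of that runaway; none of these numbers correspond to any edition's actual XP or CR tables, but the shape of the result is the point, namely that an early windfall never gets reabsorbed and the pace of advancement keeps accelerating:

```python
# Toy numbers only: this is not any edition's actual XP or CR progression.
XP_PER_LEVEL = 1000     # flat cost of each level, for simplicity
BASE_AWARD = 250        # XP per session when threats match a 1st-level party
AWARD_GROWTH = 0.15     # each extra party level inflates awards by 15%

def level_after(sessions, head_start=0.0):
    """Party level after a number of sessions, with an optional early XP windfall.
    The GM pitches threats at the party's current level, so awards scale with it."""
    xp, level = head_start, 1
    for _ in range(sessions):
        xp += BASE_AWARD * (1 + AWARD_GROWTH) ** (level - 1)
        while xp >= XP_PER_LEVEL:
            xp -= XP_PER_LEVEL
            level += 1
    return level

if __name__ == "__main__":
    for sessions in (5, 10, 15, 20):
        print(f"{sessions} sessions: on track -> level {level_after(sessions)}, "
              f"after an early 500 XP windfall -> level {level_after(sessions, 500)}")
```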
Matters aren’t helped by the opposite phenomenon; if characters are advancing too slowly, GMs have a need to compensate in order to maintain player satisfaction. It’s almost impossible to get the scale of such compensation right; instead, they keep compensating until they achieve an inadvertent exponential boom.
Some GMs then try to compensate in the other direction by throwing wildly dangerous encounters at the characters, and that's how Monty Haul syndrome starts: over-the-top encounters and wildly improbable rewards, which fuel the need for even more over-the-top encounters earning still vaster rewards. Every Monty Haul campaign could, if analyzed sufficiently closely, be traced back to a single instance of throwing too harsh an encounter at the characters (and compensating with extra rewards), or of giving away too large an award in a single encounter that put someone over the top.
Solving the problem
So, we need some law-of-averages method of solving the problem. It was while waiting for the bus today that I thought of one (told you I'd get back to that). It means turning some of my accepted practices and past advice on their heads, and reinventing the way I handle one specific aspect of my campaigns in future.
The mechanism that I’ve come up with is the wandering monster.
Not just any wandering monster, mind; and that’s where the break with past practice (and recommendations) comes into play.
You see, I used to roll completely randomly for wandering monsters; at best, these were based on an ecological pattern, as described in the Creating ecology-based random encounters series, and Random Encounter Tables – my old-school way. Within the assigned parameters of what could be there, it was whatever the dice came up with, and it could be anything from too easy to too hard to just right in terms of encounter difficulty.
Well, that’s just not going to cut it anymore. That way lies inherent Monty Haul or sudden death, either directly or as a result of compensating for a weak encounter that wasn’t worth the playing time.
Instead, what’s needed is a probability table that determines the difficulty of the encounter relative not to how powerful the characters are at the time, but how powerful they should be.
Let's say encounter difficulty is measured in ELs relative to character levels. What's needed is a random roll that defines and constrains the level of drift away from that average, giving an actual encounter EL, plus encounter tables whose entries correspond to the range of ELs that might transpire: a small subset of the whole, a part of the ecology. Everything else becomes either something that you describe in narrative (no XP) as inconsequential, or something that dismisses the PCs as inconsequential (no XP) because they aren't a big enough threat: "The spiders flee as you approach." "The dragon passing overhead fixes you with a baleful glare before effortlessly ascending to 10,000 feet and proceeding on its way."
Ideally, this should run along the lines of 1:2:4:2:1 or 1:2:4:8:4:2:1, i.e. the greatest probability should center on the average, and the maximum deviation from that should be whatever is considered tolerable. Personally, I think that ±3 is too great a spread, so I would recommend the first of those. Those ratios sum to 10, and a 2d6 roll, grouped as shown below, keeps the weight centered on the average:
| 2 | 3-4 | 5-9 | 10-11 | 12 |
|---|---|---|---|---|
| Target EL -2 | Target EL -1 | Target EL | Target EL +1 | Target EL +2 |
Or, you could decide that you want to bias the results even more strongly toward the average by using 3d6 (and the familiar bell-curve):
| 3 | 4-6 | 7-14 | 15-17 | 18 |
|---|---|---|---|---|
| Target EL -2 | Target EL -1 | Target EL | Target EL +1 | Target EL +2 |
or,
| 3 | 4-7 | 8-13 | 14-17 | 18 |
|---|---|---|---|---|
| Target EL -2 | Target EL -1 | Target EL | Target EL +1 | Target EL +2 |
which gives a more diffuse result about the center while still keeping the extremes quite unlikely and giving greater weight to the average.
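As a sketch of how the roll could be implemented (at the table, in a spreadsheet, or in a script), here are the 2d6 grouping and the first of the 3d6 groupings as a Python function; the function names are mine:

```python
import random

def roll_el_drift(dice=2):
    """Roll the encounter-difficulty drift from the tables above.
    Returns an integer offset from the target EL, between -2 and +2."""
    total = sum(random.randint(1, 6) for _ in range(dice))
    if dice == 2:                       # 2d6 grouping: 2 | 3-4 | 5-9 | 10-11 | 12
        bands = [(2, -2), (4, -1), (9, 0), (11, +1), (12, +2)]
    else:                               # 3d6 grouping: 3 | 4-6 | 7-14 | 15-17 | 18
        bands = [(3, -2), (6, -1), (14, 0), (17, +1), (18, +2)]
    for upper, drift in bands:
        if total <= upper:
            return drift

def random_encounter_el(target_el, dice=2):
    """EL of the wandering-monster encounter, pegged to what the party *should* be."""
    return target_el + roll_el_drift(dice)

if __name__ == "__main__":
    random.seed(1)
    print([random_encounter_el(8) for _ in range(10)])
```

The looser second 3d6 grouping would just need its own band boundaries substituted in.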
Of course, none of this will work if it's in addition to the rewards-for-achievement built into the adventure; that's just a free pass to Monty-Haulism. No, the assumption has to be made and built into the adventure that there will be N random encounters worth an average of X experience points, and that expectation has to be incorporated into the estimates of what EL the characters should represent, and should therefore encounter.
The greater the proportion of XP that is awarded in this fashion, the stronger the “Law Of Averages” effect – and the more latitude you have to make the major villain a little nastier if it looks like it will be too easy to be satisfying.
This works as a leavening agent because if the PCs are short of where they should be, they will earn more XP from the random encounter; and if they are ahead, they will earn less.
Why have a random adjustment at all?
Another excellent question; the "virtual reader" inside my head, to whom I write, is firing on all cylinders today! If there is no variation, players will quickly work out that every encounter is pitched at some set target; they will rule out being surprised by a foe who is stronger than they expect, and will become more aggressive accordingly. A range of 5 ELs can make a major difference.
Further encouraging diversity of result
I would even go further and say that a subsequent encounter (which one is determined by a d3 roll) should receive the opposite modifier to the one rolled, just to ensure that a run of random rolls can't drift consistently toward one extreme. This preserves the random variation while trending the overall result toward the average.
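Reusing the roll_el_drift function from the sketch above, here's one possible reading of that rule; I'm assuming the d3 decides which of the next one to three encounters receives the mirrored modifier, which is a judgment call on my part:

```python
import random

def drift_sequence(count, dice=2):
    """One reading of the mirroring rule: after each non-zero drift, a d3 decides
    which of the next one to three encounters takes the opposite modifier."""
    drifts = []
    forced = {}                                  # encounter index -> forced drift value
    for i in range(count):
        if i in forced:
            drift = forced.pop(i)
        else:
            drift = roll_el_drift(dice)          # reuses the function from the sketch above
            if drift != 0:
                target = i + random.randint(1, 3)
                forced.setdefault(target, -drift)
        drifts.append(drift)
    return drifts
```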
This system may not completely eliminate the problem of exponential growth or shortfalls in XP totals, but it will impose a buffer similar to that experienced by the “bus” example.
Other applications
As implied when I first began discussing the relevance of Sequential Bus Theory to RPGs, the examples above are just the tip of a very large iceberg. Here are three others to consider:
Loot
Where is it written that loot should always be commensurate with the threat? So long as it averages out correctly, why shouldn’t some encounters yield more than expected and some less? Why not use a second roll on the same table (any of the three offered above) to determine the treasure yield? In fact, why not deliberately under-pay on the random encounters so that you have margin to be more generous on the important ones?
Magic Items
As I was thinking of the above, another thought came to me – a whole new approach to determining what magic items would be handed out in treasures.
Instead of specifying that a given encounter yields X items, why not make magic a fixed percentage of the overall reward and hold off handing anything over until the accumulated amount in a player's pool equals the value of the item you want to give them? That means less random junk handed out (with its potentially game-unbalancing escalation in PC capabilities) and more deliberate placement. You can even ask the player to define what magical goodie they would like next; how long it takes to show up then depends on its value. You could even state that treasure above a certain value per item gets added to the booty of the final encounter, even if it was earned earlier.
Once again, that means that the booty handed out conforms not to the success the characters achieved, but to the success they should have achieved, derailing the Monty Haul train for good.
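A rough sketch of how that pool might be tracked, assuming a flat percentage of each treasure award feeds it and that items come off the player's wish-list in the order they were requested; the percentage and the item values are illustrative only:

```python
MAGIC_SHARE = 0.25   # fraction of each treasure award diverted to the magic-item pool (arbitrary)

class MagicItemPool:
    """Track deferred magic-item value for one player, releasing a requested
    item only once enough reward has accumulated to 'pay' for it."""

    def __init__(self):
        self.balance = 0.0
        self.wishlist = []            # (item name, gp value), in the order requested

    def request(self, name, value):
        self.wishlist.append((name, value))

    def add_reward(self, treasure_value):
        """Divert a share of a treasure award into the pool and hand over
        any wished-for items the balance now covers."""
        self.balance += treasure_value * MAGIC_SHARE
        granted = []
        while self.wishlist and self.balance >= self.wishlist[0][1]:
            name, value = self.wishlist.pop(0)
            self.balance -= value
            granted.append(name)
        return granted

if __name__ == "__main__":
    pool = MagicItemPool()
    pool.request("+1 longsword", 2315)            # illustrative value only
    for loot in (1200, 3000, 2600, 4100):
        print(pool.add_reward(loot) or "nothing yet")
```

In play you'd run one pool per player (or one per party, if treasure is shared), and the GM still decides exactly when and how the item actually appears in the game world.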
Getting back to Combat
You could think of the damage inflicted by a blow as the number of passengers boarding a bus, the expected number of blows of average result needed to defeat a foe as the number of stops, and the total damage that has to be inflicted as the travel time. This actually runs in reverse to the bus analogy (more gets you there sooner, not slower), but that’s OK – the principles stand.
There are all sorts of biases to combat outcomes that are built into the D&D/Pathfinder game systems, and they can interact in a number of ways. Players recognize this and try to maximize the compounding and cascading effects in an intelligent manner in character creation and evolution through experience levels. Min-Maxers take this intelligent manner to an extreme that is only barely within the spirit of the rules. The GM, on the other hand, has very limited capability in terms of response; there aren’t many mechanisms that alter the number of hit points that a creature has, and most methods of altering the rate at which damage is inflicted will simply fall into the hands of the PCs after a successful combat. Instead, they often fall back on trying to match the PCs at their own game, enhancing attacking capabilities through the combination of feats, stats, and equipment/abilities – but many of those, too, will end up in PC hands at the end of the day.
Consequence & Solutions
The result is that characters are often far more effective at dishing out damage per hit die of character than their enemies are at doing so per EL. While solving the experience and treasure traps will mitigate this somewhat, the potential still exists for confluences of combat-effectiveness-enhancements.
What is needed to finish the job is some mechanism for ensuring that the averages are respected, despite enhancements. This removes much of the cause of what is often described as “game imbalance” by trending the effect of results toward the average.
Now, it’s not fair for a good character design to be penalized to the extent of the player getting no reward for his efforts in design, so what is also needed is a mechanism by which they can be rewarded for combat effectiveness without cutting short the battle. I’ve run encounters in which one character of exceptional prowess took down the encounter before any of the other characters could act, or needed to act; the optimization of design was such that they weren’t even on the same planet in terms of effectiveness. I’ve also seen the same thing done with mages – a fireball spell wiping out the enemy before anyone else even got to act.
One Answer and its flaws
I've also known at least one GM who solved the latter problem by ruling that additional dice of arcane attacks did not stack; each extra die instead added +1 to the difficulty of the saving throw and, thereafter, could add +1 to the damage inflicted by the single die that was rolled. So a 12d6 fireball did 1d6 points of damage with a -11 penalty to the save; if the character failed the save, the amount by which he failed, up to that full 11, was added to the damage. The most a 12-die fireball could do to a single character was therefore 17 points. This tracked reasonably well with what a fighter of equivalent level could achieve with a typical weapon and appropriate enhancements from magic and feats, so on the whole it worked reasonably well; and that, I think, masks the fundamental flaw with the approach, which is that it penalized characters for intelligent design.
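For concreteness, here's that house rule as I understood it, reduced to a few lines of Python; the base save DC is a stand-in of my own, and I've ignored complications like evasion or taking half damage on a successful save:

```python
import random

def house_rule_fireball(dice, save_bonus, base_dc=14):
    """The house rule described above: extra dice don't add damage directly; each die
    beyond the first raises the save DC by 1, and the margin by which the save is
    failed (capped at the number of extra dice) becomes bonus damage."""
    extra = dice - 1
    dc = base_dc + extra                      # a 12d6 fireball raises the DC by 11
    roll = random.randint(1, 20) + save_bonus
    damage = random.randint(1, 6)             # damage always starts from a single d6
    if roll < dc:
        damage += min(dc - roll, extra)       # failed-by margin, capped at the extra dice
    return damage

if __name__ == "__main__":
    random.seed(3)
    print([house_rule_fireball(12, save_bonus=5) for _ in range(8)])
```

The cap falls straight out of the rule: 6 from the single die plus 11 from the extra dice gives the 17-point maximum quoted above.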
A better solution
I have an alternative: above the average expected damage per blow, damage accumulates in a pool. That pool is then translated, according to the size of the target's Hit Dice, into a number of "full dice equivalents", which are applied as a penalty to the above-average damage the target can dish out in return. That means that additional combat capability ultimately translates not into a quicker kill, but into greater control of the battlefield. Or perhaps it could be applied, at least in part, to ensuring that the target's defenses were weakened for the attacker's next round of attacks, letting them "get on a roll".
A caveat
Changing something as central as the combat resolution system is not something that can, or should, be done lightly. I’m certainly not going to be rushing out to implement this change in any of my campaigns; it requires a lot of thought and more than a little simulation and number-crunching before I’d do that. Nor am I, therefore, advocating it to anyone else out there – simply putting it forward as food for thought.
The accumulation of small bites
In conclusion, then, anything that accumulates towards a threshold or target of any sort in a game can be viewed with fresh eyes through the lessons of Sequential Bus Theory. Phenomena that can be easily identified but not easily analyzed become more clearly understood, and therefore more controllable by the GM. That takes some of the anarchy out of a game, leaving the GM with more room to reward appropriately when merited, and giving players more control over the future fortunes and development of their characters.
Greater control over those aspects of the game that have an inherent trend to go out of control is always a good thing, so that’s food for thought indeed.
And that’s what I thought about while waiting for my bus to arrive. It was late…
December 4th, 2015 at 3:33 am
As everyone has come to expect from you, another Superb article
December 4th, 2015 at 4:35 am
Thank you, Gazza, but I can’t take all the credit. Much of the first half was derived from memories of an article in Discover magazine that I read about a decade ago. It made such a strong impression that I didn’t even need to refer to the original article when writing my own version :)