Campaign Mastery helps tabletop RPG GMs knock their players' socks off through tips, how-to articles, and GMing tricks that build memorable campaigns from start to finish.

None So Blind – Character Blind Spots


With the conclusion of the Zener Gate campaign, I’ve been thinking about what comes next. In fact, it’s fair to say that it’s been somewhere on my mind for most of 2022, if not always front-and-center.

About six months ago, I decided that I would resurrect the Warcry campaign, even though it would need some revision because one of the players passed away – the event that actually led to the campaign being put on hold, six or seven years ago.

This will be something of a reboot, and I was all set to spend this post describing the processes and thinking that I was using to carry out this reboot; but as I set (digital) pen to paper, another thought began crowding it out – both insistently and repeatedly. After three unsuccessful attempts to get my thoughts back on track, I have yielded to the demands of my subconscious, which obviously thinks that I’m onto something.

To Every Character, a blind spot

Every character has limits to their breadth of experience, at least when they first enter play. There are parts of a complete society that would simply never have been experienced by that character, by virtue of who they are.

    Example 1: Fur-person

    Take, for example, a character who is naturally covered in fur, and whose people do not naturally wear clothing. This character would almost certainly be utterly unaware of fashion, and of the way clothing is used to signal particular social activities or functions – wedding dresses, a judge’s robes (and wig, in some locations), and the like.

    Example 2: Mer-man / -maid

    A not dissimilar range of options comes to mind for a mer-person. But there is something that would be unique to such a character – sound behaves so differently underwater than it does in air that ‘music’ would be perceived entirely differently, and they would be completely ignorant of the many things that particular music or musical styles can represent in our society. Indeed, much of what we call music would be unrecognizable and completely without appeal to such a character.

    Who amongst us would fail to recognize a wedding march immediately? Or a woman in a white dress with a long train and a veil of lace?

    Example 3: Synthetic person

    This is a more difficult character to work with because the specifics of the time period of origin would make a huge difference. In anything reasonably modern, the internet would provide a rich but shallow source of information; in many cases, the specifics of any social interaction would be revealed readily with little or no explanation of why we do things a certain way, and where that why was also available, context and symbolic meaning would be missing.

    When someone holds out their hand, it might be recognized as a gesture of greeting, but such a character would not immediately know that the hands are supposed to be clasped in a particular way and then moved up and down once or twice. They are just as likely to put forth their own hand and simply move it up and down – a literal interpretation of the term “hand shake”.

    Example 4: A D&D Cleric

    It can be assumed that a child who became a Cleric was given religious instruction from an early age, and that this crowded out other subjects of study as the years progressed. Most people would not have received any formal education at all, instead learning a skill through experience with a master, such as blacksmithing.

    They would thus have traded expertise in many other areas for expertise in one (religion / theology / religious ceremonies and practices / prayers). They might have some limited exposure to some things outside of this frame, but would be especially limited in knowledge of anything that wasn’t traditionally explained to younger children, such as the realities of war and romance.

Overcoming the blind spot

I’ve known relatively few players who would not accept such blind spots as a logical part of such a character. But I have also known many players who would see them as a flaw or weakness within the character, and therefore something that should be overcome as quickly and as completely as possible.

There have even been a few who made a point of setting the wheels in motion for such self-improvement in one game session and who then tried to argue that the deficiency was gone in the next.

It’s not that simple, or shouldn’t be. Superficial rote learning of the most common human practices might be possible in a relatively short space of time, but the all-important social context – the unwritten assumptions and associations that society attaches to that particular subject, and how they interconnect – would take a lot longer to absorb.

As they described it in Star Trek: The Next Generation (and I am paraphrasing), there is a world of difference between memorizing the rules of poker and hand probabilities and the actual experience of playing the game, with the inherent personality interactions that are included – bluffing, learning how to read an opponent, strategies built around deliberate deceptions and detecting same – none of that would come out of such a rulebook. Then throw in all the unwritten rules, traditions, and expressions of table etiquette that can only be learned by experiencing them in a group that already knows them.

Ultimately, these players are missing a bet; these blind spots are not character flaws to be rectified as quickly as possible, they are tools for characterization that should be exploited.

A piecemeal approach

Instead of a single act of rectification, overcoming a blind spot should consist of dozens of actions and misinterpretations and outright social faux pas.

It’s reasonable to assume that upon being confronted with a particular manifestation of a blind spot, a character would seek to rectify that specific ignorance – probably starting with a conversation between PCs. Depending on the depth of understanding that ensues, that specific ignorance might thereafter be disregarded or downplayed.

Over time, the character’s ignorance of the subject in question would reduce, but there would still be the occasional manifestation of the blind spot.

A planned approach

An even better approach would be for the player to provide a list of the ways the blind spot might impact the character, a series of plot seeds for subplots involving their character. The sequence in which these appeared would be up to the GM, and even whether or not some of them appeared, so that he can tailor their inclusion to fit the adventure at hand.

This entire approach can be taken one step further and presented as a series of episodes that, in combination, tell the story of how the character overcomes his blind spot. By making these extremely episodic and relatively brief, they can be dropped into any plot where there’s room.

Of course, the GM is then free to take these general plot ideas and twist them mercilessly (so long as the point of the plot seed is not sacrificed in the process), so the player is no less in the dark than he would have been, and still has to roleplay any encounter or situation that arises, just as he would if he had not provided the GM with plot material to feature his character.

Nor does this exclude the GM coming up with his own mini-plots to explore other, completely unrelated, aspects of the character and/or his backstory. The planned approach to a blind spot is just one source of plot material for the GM to exploit amongst many.

Personal Story Arcs

The wise GM will take this philosophy one step further, and take the time to discuss the character with the player (before he enters play, if possible): where the player sees the character heading over time, what he wants the character to have an opportunity to do, and so on.

In every campaign that I run, with the exception of those in which it is not necessary (like Zener Gate) or that are deliberately self-contained (like my Dr Who campaigns), I adopt this approach. In the Zenith-3 and Adventurer’s Club campaigns in particular, I’m at pains to detail where characters are and what they are doing when a new adventure begins, essentially roleplaying the characters’ personal lives until the main plot thrusts itself upon them – and those plots frequently start as one PC’s personal plotline and mushroom to involve the other PCs.

These are plot arcs or personal story arcs, and long-time readers will know that I have been championing the concept for a great many years, now. The concept of character blind spots as plot-fodder is just another variation on the general concept. But it’s a good one.

A relatively shortish article once again, because I still don’t have a functioning internet connection. But every passing day brings the hour of reconnection – whenever it comes – another hour closer.

UPDATE October 27:

My internet connection has finally been restored, just in time to publish this article! Now, to get caught up on everything!


A Little Yesterday On The Side


This weekend was the big finish to the Zener Gate campaign (exactly on schedule). Guest starring the Governator and James Cameron and the Mythbusters duo, it involved the PCs trying to convince Xi Jinping that the Chinese temporal agency was attempting to replace him with a perfect duplicate in order to abort the program – before it sent certain anonymous communiques to James Comey that gave Donald Trump just enough of a wedge against Hillary Clinton that the 2016 Elections turned out very differently – all while they were becoming ghosts because a later action by that same organization had successfully assassinated the PCs long before they had even been recruited by the Zener Gate program!

For obvious reasons, then, time travel and altered history adventures have been on my mind for the last month or so, and so I thought that I would write about them today. If you’re not into that type of adventure, if it doesn’t fit your campaign, don’t fret; there are some general lessons that can be drawn from the topic that will apply outside of this context.

If all goes according to plan, this post will also be accompanied by a review of a Kickstarter or two that might appeal, but that’s looking extremely doubtful at the moment.

Sometime in the early hours of Saturday morning, someone physically tore my internet and telephone connection out of the ground.

The earliest that the connection can be restored is Monday afternoon – and I’m not sure that this will leave me enough time to prep that additional content for inclusion.

In fact, if there’s any headache with the restoration process, I won’t even be able to post this article on schedule, but that’s a less likely outcome. I hope.

If this gets published on the 17th or 18th of October, all is well; if not, expect to see an update on the situation, and its impact over the next couple of weeks, at the end of the post!

1. Fundamental Premise

The basic premise at the heart of most time travel / alternate history plotlines is that someone has changed history; the PCs discover this and have to change it back or otherwise prevent the Villains from changing it in the first place.

    1a. Variations

    A number of variations are possible on this theme.

    • Reversing the assumed temporal arrow can be fun. This means that someone from the past has traveled forward into the PCs’ time or beyond to stage some sort of intervention which will impact the far future (relatively speaking). The PCs get wind of this and have to prevent it.
    • Setting your alternate history on an alien world can be a useful variation to have up your sleeve. All you then need to do is get the PCs to intersect it on their travels. Of course, having such a parallel world arise accidentally is beyond improbable – there needs to be a very well-informed operation behind its creation, and their motivations have to be bulletproof. It’s a lot of trouble to go to just to fake out the PCs or play head games…
    • Alternate timelines are a relatively safe answer – you just need a way to get the PCs there (perhaps against their wills) and some challenge to be overcome before they are able to return. See the Star Trek episode “Mirror, Mirror” and a number of sequels in Deep Space Nine.

2. The Implications

The PCs need to have some means of recognizing that history has changed, and some way of tracking the change to its “source event”. That usually means that they have some sort of immunity or protection from being affected by the change.

I cannot stress enough how important this is to get right; the credibility of the whole adventure rests on it.

But there are a number of variations possible, which I’ll look at in section 5.

3. The Mechanism

I think that the place to start is always with the mechanism that is used by the Villain(s) to cause the change in history. How were they able to alter the past?

The answer – and there are many possible contenders – will define the Immunity mechanism.

4. Granting Immunity

In some campaigns, immunity to such things comes with the territory, either as a general principle (“Temporal Shielding”) or as some sort of protection against the mechanism in general (“We’re shielded against external Magic”).

When neither of these is the case, the “Protection” has to derive from some accidental circumstance that is unique to the PCs at the key moment. Have them blasted out of their natural space-time or something.

As a general rule, deciding on the mechanism tells you everything you need to know about how “immunity” is to be granted.

Depending on the circumstances and the mechanism chosen, you may need to have your temporal theory nailed down, hard. For example:

A temporal change results in some object traveling interstellar distances changing its destination or vector. As the change propagates forward down the timeline, it is instantaneously somewhere else from where it was at the moment the Change intersects it. Anyone with ‘immunity’ from the change on board will perceive this change in location occurring faster than the speed of light.

Unless the speed of light limit still applies, in which case the wave of change will propagate more slowly as it expands from the Event that has changed history.

I’m not going to go into huge amounts of detail on this aspect of the situation; this article is all about the creation of a time travel / altered history plotline. If you want more information on this sort of thing, see my earlier series on the “Physics” of Time travel, Time Travel in RPGs.

5. Immunity Variations

There are a few variations that are worth considering when it comes to immunity. Which of them, if any, apply to a specific situation will depend on the intersection between Temporal Mechanics and the mechanism by which history has been changed.

    5a. Temporary Immunity

    Having some sort of a deadline before the “immunity” runs out can be lots of fun. This is particularly likely in situations in which some external force is furnishing the protection. It creates a deadline after which the PCs will be irrevocably reintegrated with the changed timeline.

    5b. Restricted Immunity

    It can be a lot harder to arrange for ALL the PCs to have “Protection”. Having a situation in which only ONE PC is protected, and has to convince his Integrated companions from the altered timeline to act, can also be fun. This basically involves determining how the unprotected PCs will have changed because of the Temporal Intervention (either directly or indirectly) and letting those players use a “variation” of their usual characters.

    This can involve a lot more prep work, so it’s not a perfect solution. And you need players who are capable of handling such curve-balls with some level of aplomb. In fiction, it’s much easier.

    5c. Without Immunity

    I’ve only seen this worked a few times. The PCs discover that the world that they are used to is the result of Temporal Meddling, and their own personal histories as they know them are also affected. They HAVE no immunity – but they have the opportunity to bootstrap themselves and their world back to the way things were supposed to be.

    Ironically, under this scenario, those causing such changes to history may be naturally protected from them, depending on how you are working the temporal physics. So this can be a great way to reboot a campaign or start a new one, and obviates the usual campaign briefing!

6. Assigning A Target

The mechanism and the motivation of the Villain will identify the Target Event that is to be altered. It’s a lot more work to try and rationalize these things after the fact; it’s far better to have at least a general outline of how things changed and what the resulting dominoes were.

One of my favorite things to do is to have Changes as a result of a Temporal Intervention be largely and wildly unpredictable, no matter how obvious the outcomes might have appeared when planning the change. There will almost always be some factor that can’t be taken into account that will… complicate… the flow of events.

Any decision that was “inspired” or “made on a whim” is particularly vulnerable to such chaos. Knocking over a domino might lead to the change in history desired, but with unexpected repercussions; or those repercussions might completely undermine the desired change.

Having the villain show up on the PCs’ doorstep to announce “I did something, it only made things worse, I need your help to undo my mistake” is a different way of propelling the PCs into such an adventure!

7. Assigning An Enemy

If you know the Motivation, you have at least an inkling as to who’s responsible. Again, it’s a lot easier to work from the Desired change in history to a motivation to a villain identity than it is to work things in the other direction, even though the logic as presented to the PCs will almost certainly run “Villain to motivation to desired change”.

8. Discovery

How are the PCs who have immunity going to recognize that history has changed? How will the change manifest itself? What is the plot hook, and how can one or more PCs be persuaded to swallow it whole?

A lot of GMs (and some writers) give only superficial attention to this, and it shows. It’s not completely accurate, but I advise acting as though the credibility of the whole adventure rests on this. It needs to be compelling, believable, and seamlessly integrated into the normal course of events – nothing out of the ordinary at all.

If the conferring of “Immunity” is not the start of the adventure, this is.

9. Detective Work

How are the protected characters going to back-trace the falling dominoes to discover the instant of change? Again, plausibility needs to be absolute, but you also need to make both the process and the results interesting, even though not everyone is necessarily going to participate – and that can be challenging at the best of times.

10. Motivating Counter-intervention

One of the worst problems that a GM can encounter when running such an adventure is the PC or PCs who respond “I like the way things have changed, let’s leave them this way”. That is the source of the advice offered in section 6, which I now reiterate – no matter how appealing and stable the changes to history may appear to be, they should always rapidly spin out of control.

These negative impacts don’t have to be large and overt; they can be relatively small and targeted. Think of the wishes granted by a Monkey’s Paw; you may return a beloved child or spouse to life, at the cost of the lives of one or more parents and a criminal conviction of the character leading to a divorce and the loss of custody. Just because the child / spouse is still alive in this revised history does not mean that the character can be part of the life of the Intervener, and vice-versa.

No matter how positive the change might appear to be, superficially, there should always be some severely negative aspects to it, which will act to motivate the PCs to oppose the intervention that has changed history. It might be that the resulting world is a happy one for the Villain who has changed history, because he doesn’t care that the rest of the world has gone to hell in a hand-basket so long as all is rosy in his little bubble.

Some GMs may feel that this proposed “rule” should not apply when it’s a PC who is changing history (and there can be a case made in this respect when that’s the whole point of the campaign); but except in such cases, the GM should think long and hard before giving players that much control over the campaign world.

11. Counter-intervention

The second-worst thing that can happen is for the players to say the equivalent of “it’s too big / too complicated, I don’t know what to do”. While the specifics of a counter-intervention may not be obvious, the general strokes of ‘What Needs To Be Done’ should always be clear to even superficial analysis of the situation.

If more information or specificity is needed before such counter-intervention can be properly targeted, then where the PC(s) have to go, and what they have to do, in order to gain the required intelligence should be as clear as possible to the players.

This is more challenging to the GM than it might seem, because there is very little to challenge the players in a genuinely follow-your-nose path to a solution; it gets very old, very fast.

What’s more, most solutions of this type are extremely short on character interactions – where’s the opportunity to roleplay? In fact, this can be a problem with this type of adventure in general!

The best resolution to such a problem is for the GM to be proactive in incorporating opportunities for roleplay into their adventure design in the first instance.

The last time this type of adventure came up in the Zenith-3 campaign, for example, the agency for counter-action had the PCs taking the place of their alternate-world selves, who were fully integrated into the divergent timeline (and horrible people, to boot). They needed the resources available to these alternate-world versions of themselves in order to solve the problem and set history back on its rightful course, and that meant interacting with various subordinates and superiors.

And, in concluding this section, let me again reiterate – while the ultimate solution might not be apparent in all its specifics, a general description should always be possible, and the next step towards such specifics should always be patently clear.

12. Counter-Intervention Variations

There are four variations on the basic counter-intervention model, and the GM should employ them to create variety in the adventure.

    12a. Target Yesterday

    This is the default – the PCs ‘go to’ the scene of the historical changes and undo them. While this is the most obvious approach, it’s also easily mishandled; perhaps the most common failure is insufficient prep. The environment and population of the world around the change in history needs to be sufficiently detailed that the GM can adopt the roles of the various NPCs in a completely convincing manner.

    All too often – and I’ve been guilty of this myself – the GM will have the attitude of knowing ‘generally’ who these NPCs are, and confident of being able to improv whatever is needed; but there are many more moving parts to this type of adventure than is usually the case. The result is that such efforts are almost always inadequate. Even a single line of description – names and personas – is better than nothing.

    The flaw in the ‘Target Yesterday’ Basic model

    The problem is that the GM is fully aware that all such prep is disposable, intended to be thrown away at the end of the adventure, and so there is a constant temptation to do the absolute minimum. At the same time, this type of adventure is essentially the creation of a new campaign world, however temporary, and so the prep demands are far higher than is normally the case; these two facts are clearly at odds with each other.

    Solving the flaw

    The best solution is to find a way to recycle or perpetuate the value of the prep into the future. For example, just as the PCs are “protected” from the change, so some of their enemies who have been overcome or bypassed in order to counter the Temporal Incursion might also be “protected” and seeking to revert the corrected timeline; they carry the adventure prep with them as character background.

    This also achieves another important outcome: all too often, this sort of adventure ends in the entire premise being overcome, history being restored or whatever. Aside from the PCs’ memories – and possibly not even there – no lingering impacts remain at the end; whatever caused the adventure to occur in the first place has been undone and nothing remains. To all intents and purposes, the adventure might as well not have happened.

    Taking what would be a disposable adventure and giving it some long-term impact within the campaign, however limited or subtle, makes the adventure itself important. This is also an opportunity to correct anything that isn’t quite right in the campaign background, revise anything that didn’t quite work the way you intended it to; subtle changes to characters and character backgrounds are only reasonable as a consequence of an imperfect solution to the problem posed to the PCs and actually enhance the plausibility of the adventure and campaign.

    12b. Target Tomorrow

    One variation is to target the ’embarkation point’ of whoever changed history – stop them from doing so by targeting the enemy’s circumstances before they even commit the deed.

    It’s implicit in time-changing adventures that there be some connection from the consequences of the ‘true’ history to the changes made in the past. This variation subverts that association to create a different adventure.

    You can further distance the adventure from the predictable cookie-cutter form by having the intervention that is to be undone occur sometime in the PCs’ futures, too. This makes the entire adventure an embodiment of a “Plan B” (see section 15, below), as though the PCs that were contemporary or post-contemporary to the change in history have already tried to reverse the changes to history and failed. These versions of the PCs, of course, have no memory of that, because it lies in a future that will never come into existence – if they are successful!

    Yet another variation is to have someone from the PCs’ “now” wreaking recurring havoc in a peaceful future – a future that reaches out to the PCs to act as counter-agents to the future-villain.

    12c. Target Interception

    Perhaps the most difficult variation to implement is the one in which the counter-intervention targets neither end of the loop in time (embarkation point or arrival point), but instead seeks to intervene somewhere in between the two.

    The reason is that it can be exceptionally hard to target a time-traveler “in transit” in any plausible way. But this changes the environment in which the adventure takes place, and that in turn makes the adventure all about that new environment – perhaps the most interesting way to introduce such an expanded cosmology, by making it immediately relevant and demonstrating that relevance.

    12d. Domino Theory

    A variation on a variation? Why not?

    A previous attempt at a time-travel campaign that I ran, some years back (using an early version of the Sixes System, as it happens) had as a dictum that once an Intervention was made, it could never be undone; all you could do was introduce some new timeline that corrected the effect of the changed history. For example, the Enemies might have prevented the death of a key figure in a car accident by diverting the vehicle that was supposed to cause the accident; the solution might be to cut the brake lines of the car being driven by the Key Figure Who Is Supposed To Die so that even without the other vehicle, the NPC still dies in a car accident.

    That campaign concept was predicated on this variation being the only valid one. It also meant that once you detected a change to history, that change was permanent, that domino was always going to fall; you could never prevent it from happening, so that event was always there to be detected. So every adventure had a lingering effect on campaign continuity, including those by NPC groups!

    This is, of course, a variation on the proposal offered up in 12c. It involves very different temporal mechanics, arguably more plausible ones but definitely more complicated. That can be both a good thing and a bad thing – they are going to be more original and less cookie-cutter, but they will be harder for players to wrap their heads around at the same time.

    Domino Theory Advice

    If time-travel is to be a central or frequently-recurring part of the campaign, it can bear such detailed scrutiny and still be relevant; if not, then a more accessible alternative might be a better option, under the circumstances.

    Whatever you decide in this respect can have extremely durable consequences for the campaign, so don’t make this choice frivolously or capriciously. Make sure that you understand and can accept the consequences, implications, and ramifications of whatever you choose.
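    The append-only ‘domino’ model above can be sketched in a few lines of code. This is purely my illustration, not anything from the Sixes System; the class and its numeric “effect” values are hypothetical stand-ins for whatever bookkeeping you actually use.

```python
# A toy illustration of the append-only 'domino' model: interventions
# can only be appended, never removed; a 'fix' is a new intervention
# that compensates for an earlier one's effect. Every domino stays in
# the record, so every change remains detectable forever.

class Timeline:
    def __init__(self):
        self.interventions = []  # permanent record, in the order made

    def intervene(self, description, effect):
        # effect is a hypothetical signed measure of historical drift
        self.interventions.append((description, effect))

    def net_effect(self):
        # Later dominoes can cancel earlier effects, but nothing is
        # ever erased from the record.
        return sum(effect for _, effect in self.interventions)

history = Timeline()
history.intervene("Enemy diverts the other car; key figure survives", +1)
history.intervene("PCs cut the brake lines; key figure dies anyway", -1)

print(len(history.interventions))  # both changes remain on the record
print(history.net_effect())        # but the net outcome is restored
```

    The point of the sketch is the asymmetry: `net_effect()` can return to zero, but `interventions` only ever grows – which is exactly why, under this model, every adventure leaves a permanent mark on campaign continuity.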

13. Logical Timelines

It’s very easy to tie yourself up in time-travel knots. These bind you implicitly and seem absolutely secure – until they come unraveled at the worst possible moment. I avoid this problem by explicitly tracking the logical timelines from the point of view of all the different participating characters in this sort of adventure.

(Truth be told, I recommend doing this anyway, even if it’s not for a time-travel / alternate history adventure).

Everything that a character does, or is supposed to do (in the course of an adventure) should make complete sense from their perspective, knowing what they know at the time.

In an ordinary adventure, this is relatively straightforward, simply a matter of bearing in mind what different characters know and presume at the point of any decision being made (but sometimes it can get overlooked, anyway).

In a time travel / alternate history adventure, in which effects can and do precede causes (from some points of view), it becomes absolutely critical.

My Process

As you outline the course of events within the adventure, you will be writing that adventure / plotline from the perspective of the PCs and their players. This is the simplest and most elegant solution to the plotting problems of such an adventure, or so many people seem to think.

The ‘knots’ come into existence when this leads you to have an NPC – be it the Enemy, or whoever is providing the PCs with “Protection” from historical changes, or past versions of the PCs that never actually appear in-game, or whoever – act on knowledge that they do not have at the time.

I prefer to plot the ‘Intervention’ from the point of view of the NPC committing the act of intervention, then any external source of information who is recruiting the PCs to stage a counter-intervention (if any) from the point of view of that source, and then actually writing the adventure and the proposed resolution to the events from the perspective of the PCs. And then it’s back to the NPC actor’s perspective for any reactions or responses to what the PCs are expected to (possibly) do – just to keep all the continuities straight.
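The per-perspective tracking described above can even be mechanized. The sketch below is my own illustration, not part of the article’s process: every event name and field in it is a hypothetical example. Each event records who acts, the order in which that character subjectively experiences it, the facts the action depends on, and the facts it reveals to them; a ‘knot’ is any action taken on knowledge the actor doesn’t yet have.

```python
# Check each character's subjective timeline for continuity 'knots':
# actions that depend on facts the actor has not yet learned.

def find_knots(events):
    """Return (actor, action, missing facts) for every event where a
    character acts on knowledge they don't yet have, from their own
    point of view."""
    knots = []
    known = {}  # character -> set of facts known so far
    # Sort by each character's subjective order, not calendar time:
    # in a time-travel plot the two can and do disagree.
    for ev in sorted(events, key=lambda e: (e["actor"], e["subjective_order"])):
        have = known.setdefault(ev["actor"], set())
        missing = set(ev["requires"]) - have
        if missing:
            knots.append((ev["actor"], ev["action"], missing))
        have.update(ev["reveals"])
    return knots

events = [
    {"actor": "Villain", "subjective_order": 1, "action": "steal time machine",
     "requires": [], "reveals": {"machine works"}},
    {"actor": "Villain", "subjective_order": 2, "action": "alter 1916",
     "requires": {"machine works"}, "reveals": set()},
    {"actor": "PC", "subjective_order": 1, "action": "counter-intervene in 1916",
     "requires": {"source event located"}, "reveals": set()},
]

for actor, action, missing in find_knots(events):
    print(f"{actor} does '{action}' without knowing: {missing}")
```

Here the checker flags the PC’s counter-intervention because nothing in the PC’s own timeline has yet revealed where the source event is – exactly the kind of knot that needs a Detective Work scene (section 9) inserted before it.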

14. Outcomes – Success, Failure, and points in between

The GM creating this sort of plotline should always have a clear idea of what the next step for the PCs is supposed to be, all the way through to a resolution of the adventure – and should strive to make that ‘next step’ abundantly obvious, even if multiple alternatives are to be presented or possible, and this should continue all the way through to the possible outcomes of the adventure.

In general, these outcomes come in three basic flavors.

    14a. Success

    What does success look like, and what are its ramifications? What’s the ideal outcome and what compromises may need to be made in order to succeed in dealing with the adventure?

    As indicated earlier, it’s all too easy to have the resolution be ‘it never happened’, but that throws away a lot of prep work, which in turn discourages the GM from doing that prep to an adequate standard. That’s “sub-optimal”, to use some Neo-militaristic jargon. And it’s never really the case, anyway; even in such an outcome, the players are forever-after aware that a “Time War” can happen.

    Even if the effort itself is automatically condemned to failure by the Temporal Mechanics of the campaign, you can’t assume that every NPC will always know and respect this reality; there will always be those who think they have a loophole, or can create one, or who are simply ignorant or overconfident.

    Time travel always opens up a can of worms – if you are prepared for that and willing to accept it, that’s fine. But don’t delude yourself into thinking that the PCs can be ‘walled off’ from the expanded reality around them – they can’t. Use it once, and Time Travel will always be a part of the game universe.

    14b. Failure

    Equally, you need to know what failure looks like, and be prepared to live with the consequences. I’ll speak further to this point in section 15, below, but the bottom line remains.

    If you aren’t prepared to accept the consequences of failure on the part of the PCs who are attempting to correct the course of history, then you are setting yourself up for one (or both) of two possible problems:

    • 1. Plot railroading, in which you manipulate events to orchestrate the outcome that is most desirable from a campaign perspective; and/or
    • 2. Making the PCs fifth wheels to a deus-ex-machina that solves the problem for them.

    Neither of these is an acceptable resolution, so it follows that you need to have a third option prepped and ready to go – just in case.

    14c. Mixed Results

    But my favorite choice is to avoid either of these extremes. The PCs may be 99% successful, but there remains just a little divergence from established history. Or they may be 99% unsuccessful, but with room left for hope.

    Not only does this feel more realistic, but it means that the adventure will have lingering consequences. They may not manifest often, and certainly shouldn’t rub the players’ noses in the outcome, but they will still be there every now and then.

    This creates an opportunity to rejig any campaign elements that have grown stale, to wallpaper over any continuity cracks and plot holes, and – in general – to revise anything that is either not working or has come to the end of its useful life within the campaign.

    In fact, that can often be the whole point of running such an adventure – its metagame repercussions.

    Of course, this is not a card that can be played frequently or even regularly. That’s part of the challenge of a campaign that’s explicitly about time-travel. Once a year may be too often, even if you are playing almost every week.

15. Have a Plan ‘B’

Granting the possibility that the PCs will fail – making room within the adventure for that to happen – generally implies the existence of a ‘Plan B’, a way for the PCs to snatch victory from the jaws of defeat (perhaps after a taste of the defeat).

If you permit a deus-ex-machina to provide the PCs with a second chance, this is easy to do. But that’s bad writing in the literary world, and not much better in the RPG sphere.

Plan B’s don’t happen by accident. They need to be carefully constructed and implicitly placed within the continuity of events while being completely hidden from player awareness until needed.

That usually means that they need to be subtle, and sophisticated, and very carefully prepared.

    Sidebar: An example

    One of my favorite plots of this type deliberately made it almost impossible for the players to succeed in stopping the Event – but included, as an inobvious inevitability, the seeds of the Villain’s defeat at the hands of parallel-world versions of the PCs. The flaw in the villain’s plans was a small one, essentially unnoticeable until it became a crashing reality to first the players and then the PCs – by undoing an unwanted victory on the part of the PCs, he also undid a subsequent victory that was required in order for him to have time-travel capabilities in the first place.

    History, in that game universe, abhors a paradox; the consequence was that his intervention would be undone so that he could gain the ability to intervene again, with history oscillating back-and-forth repeatedly until some extremely low-probability coincidence arose by random chance to give the PCs one final shot at stabilizing the situation.

    The actual adventure proceeded from that low-probability event; the PCs affected (and their players) were completely unaware of the failed attempt to intervene. It was only when they found a way to send information from one temporal ‘loop’ into the next that they could start to make progress, bootstrapping themselves out of the paradox – but with consequences, to wit a Dalek invasion of Korea – or maybe it was Thailand, I’m not sure anymore.

The key point is that this enabled the PCs to try various things that would fail, but to learn from those mistakes until they finally found a solution that “ticked all the boxes”.

Those with a lot of sci-fi in their personal backgrounds might recognize this as the basic premise of a “Star Trek: The Next Generation” plotline, but I also threw in some ideas from the original Terminator trilogy and some bits from Doctor Who, to make the whole thing more original.

16. Reflecting A Changed Reality

Something that I’ve only done twice, but that worked quite well both times, is to reflect the changed reality by using a completely different game system. On the first of these occasions, I went so far as to parachute in a ‘guest GM’ (while sticking around to act as a continuity advisor).

Picture the scene: I run the adventure up to the point where the character (who was ‘protected’ from the change by subsequent events) becomes aware of reality running like water around him. I then rise from my seat and the guest GM (who has been lurking somewhere nearby unobtrusively) sits down and hands the player a version of his PC that has been ‘translated’ into a completely different game system, while I pull up a chair to one side. Without explanation, the new GM then starts describing what the PC sees around him…

That’s A Wrap

Having reached the end of the article, I now have an update. The Broadband technician has come, and gone, and made arrangements for a proper repair to my telephone and internet – but it will still be two or three days before it’s working again. The intent at the moment is therefore to post this on Thursday – either using my own (restored) connection, or by means of an internet cafe. Either way, I won’t get to look at those Kickstarters that I mentioned at the head of this post – they will have to wait until next week.

I guess I’m fortunate in that I didn’t need to do a lot of research for this article, that I could pretty much just type (having already drafted the sequence of sections). Hopefully, the seams (and the interruptions) don’t show too badly.

It should be observed that this was going to be the subject of choice, anyway. It’s just a useful coincidence that it was on the agenda when circumstances permitted no other option!

UPDATE October 20:

So, still no internet connection (or telephone) and the latest word is that it will be restored ‘on or before’ November 7th. I’ll do my best to post regularly, but all schedules are shot to hell and no commitments are certain. That includes promised Kickstarter reviews (apologies to the creators and publishers affected), and means that interaction between myself and the November 2022 Blog Carnival will be disrupted, at least at first (I should still have enough time to prep and publish an anchor post, though it might not have the depth of content I originally intended).

As soon as the connection is restored, which could be as soon as tomorrow (but probably won’t be), things will start getting back to normal, but if the original internet issues also return, even that might be a protracted process.

It’s unfortunate, but there’s nothing I can do about it. Sorry, folks!


Uncoupling DnD’s Heisenberg Compensators 2


Hopefully, my internet connection is now fixed. It’s been functioning perfectly since Friday when a technician attended the hardware connection – at least, I assume they did; I was notified that they were on their way, and then notified some time later that the call was completed, without ever seeing them or being informed about what work they had done. Such a state of ignorance does nothing to restore confidence, but so far, so good.

In the first half of this article, I showed how D&D and Pathfinder were hostages to the connection between the “Magic” level of an object and its “Magical Combat Plus”. Disconnecting the direct link between these concepts creates undreamed-of flexibility for the creation of unique magic items in a campaign.

In the final section of part one, I looked at a system for ‘fusing’ two magical objects of like kind together to create an item of greater capacity and capability. But there’s a better, simpler way – and, ironically, it depends on a partial restoration of the link between the “Magical Rating” and the “Magical Combat Plus”….

Unused Capacity, Revisited

Let us start with this: Every item needs to have at least one point of Magical Rating that is not assigned to a power, ability, activation, or whatever, in order for it to be capable of ‘fusing’ with another one, as stated in part one.

Things become a lot simpler if we simply assume that it is so, and turn this requirement around completely, to say “Every item can be fused to another of its kind, assuming their power levels are not too far removed, save those in which the magic has been fixed or locked.”

Locking a magic item removes the capacity for further enhancement, or for the item to be used to enhance another, but it also makes the magic item just a smidgen more powerful or useful. The details vary, but the magic powers contained can be a little stronger, or a little easier to activate, or functional over a slightly greater range, or a little harder to resist – they are, in some particular fashion, slightly better.

It can be assumed that any ‘unlocked’ magic item therefore must have a Magical Rating that is one higher than the number required to contain the enchantments actually placed within the item, but this is not taken into account in determining the cost of the item; since one capability (further enhancement) is being replaced by another (slightly better enchantment), the locked item has exactly the same value as an unlocked item.

For all intents and purposes, the ‘extra’ point of Magical Capacity might as well not exist; it’s an unnecessary complication to take into account. Treat it as being a theoretical reality that can be ignored in all practical senses.

Once you do that, you can simplify and abstract the process of fusing magic items considerably, by disconnecting the process from the Magical Rating and reconnecting it to the Magical Combat Plus!

Forging Of Magic Items

All items made by sentient hands or will have the potential to be enchanted, but not all such objects are created equal. Some materials are better suited to some forms of enchantment, some types of object and shapes of object are better suited to this particular form of enchantment or that, and the craftsmanship of the maker also has a big bearing on the innate capacity for enchantment of an object.

Magic items are, therefore, forged just like any other, at least initially. And that’s true of everything from sculpted bowls to sharpened blades.

There are three factors to any item, and they add up to the potential Enchantment Capacity:

  • Rarity / Purity / Perfection of materials
  • Skill Achieved / Craftsmanship
  • Suitability of shape and materials to Specific Enchantment

These values do not correlate directly with any other numeric variable – so a better skill roll result by the craftsman will yield an object with greater innate capacity for enchantment, but not by any specific numeric amount.

Using more expensive materials will also increase the enchantment potential, but doubling the value of materials will not add X to that potential, or double it from Y to 2×Y.

Enchanting An Object

To enchant an object, a mage or other spellcasting class must cast a spell into the item without incorporating the usual trigger phrase / word / gesture that would activate the spell. This embeds the magical power of the spell into the item. Each trigger that will activate the power must then be added to the ‘suspended’ magic within. In practical terms, this is the sum of:

  • Spell Level (as modified by any included Metamagics);
  • Plus the base Spell Level (i.e. UNmodified by Metamagics);
  • Plus the total level adjustments of all included Metamagics, regardless of whether they increase or decrease the effective Spell Level;
  • Plus 3 for every Activation Point, etc. (refer to part one of this article);
  • Plus one.

The total is the target of an appropriate skill roll – it could be Spellcraft, or whatever. This should be interpreted according to your system’s Game Mechanics – it’s either a DC, or the amount by which your roll needs to be below your skill level, or whatever.

This successful roll embeds the triggers into the item that allow it to complete the Spell effect that has been suspended within the item under construction.
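For readers who like to see the arithmetic laid out, here is a minimal sketch of the casting-target sum described above. The function and parameter names (`enchant_target`, `metamagic_adjustments`, `activation_points`) are my own, not official terminology:

```python
# A hedged sketch of the skill-roll target for embedding triggers,
# summing the five bullet points above. Names are my invention.

def enchant_target(base_spell_level, metamagic_adjustments, activation_points):
    """base_spell_level      - the spell's UNmodified level
    metamagic_adjustments    - list of level adjustments, e.g. [+2, -1]
    activation_points        - total Activation Points (see part one)"""
    modified_level = base_spell_level + sum(metamagic_adjustments)
    return (
        modified_level                                 # spell level as modified
        + base_spell_level                             # plus the base spell level
        + sum(abs(a) for a in metamagic_adjustments)   # adjustments count whether + or -
        + 3 * activation_points                        # plus 3 per Activation Point
        + 1                                            # plus one
    )

# A 3rd-level spell with a +2 and a -1 Metamagic and two Activation Points:
# modified level 4, base 3, adjustments 3, activations 6, plus 1.
print(enchant_target(3, [2, -1], 2))  # -> 17
```

Note that because Metamagic adjustments are counted by magnitude, a level-reducing Metamagic still makes the embedding harder, exactly as the third bullet point requires.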

Enchantment Time Required

The process is time consuming.

  • 3 hours per spell level for the basic spell;
  • Plus one hour per Magical Plus used by Triggers, etc;
  • Minus ten minutes per rank of ability in the skill being used for the roll previously described;
  • Plus-or-minus 5 minutes for the actual roll (minus if low is good, plus if high is good, according to your game mechanics).

Only then will the caster learn whether or not the enchantment has been wholly successful, partially successful (spell suspended but activations failed, twisting the spell effect into a curse of some kind), or completely unsuccessful. On a critical failure, the entire object may be reduced to slag, i.e. destroyed.
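The time calculation above can be sketched the same way; again, the names here (`enchantment_minutes` and its parameters) are mine, and the plus-or-minus five minutes is passed in directly since which direction applies depends on your game mechanics:

```python
# A hedged sketch of the enchantment-time bullet points, in minutes.
# Function and parameter names are my own invention.

def enchantment_minutes(spell_level, trigger_pluses, skill_ranks, roll_adjust):
    """roll_adjust is +5 or -5 minutes depending on the actual roll and
    whether low or high is good in your system's mechanics."""
    minutes = 180 * spell_level      # 3 hours per spell level
    minutes += 60 * trigger_pluses   # plus 1 hour per Magical Plus used by triggers
    minutes -= 10 * skill_ranks      # minus 10 minutes per rank in the skill
    minutes += roll_adjust           # plus-or-minus 5 minutes for the roll
    return max(minutes, 0)           # assume the total can't go negative

# 3rd-level spell, 4 pluses of triggers, 8 skill ranks, a favorable roll:
print(enchantment_minutes(3, 4, 8, -5))  # -> 695 minutes, a bit under 12 hours
```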

Interrupting an Enchantment

It is possible to suspend the process temporarily – one day per caster level, minus 1/2 a day per effective spell level – and then resume it. It’s even possible to exceed this limit; simply increase the target difficulty by 1 and restart the clock.

It’s entirely possible to discover an object with a spell that was embedded within, centuries earlier, but then interrupted, with an accumulated penalty in the thousands, and attempt to complete it.

Most mages are unwilling to make such an attempt, however, because each such difficulty increase also does a point of damage to the mage making the attempt – and there aren’t many mages who can cope with 1000+ hit points worth of damage.

Liches and other high-level Undead often have great magical tools at their disposal because Necromancers are adept at palming this damage off to someone else (potentially several someone elses), sacrificing them to complete an object. Obsessive Cults can also sacrifice members (who go willingly) to achieve such enchantments as sane individuals would never dream.

Enhanced Spell Repertoire

Note that there are also magic spells that only function when cast into objects, and these account for any effects that may be found in magic items that do not correspond with the spells available to any particular character class. That’s how “Combat Pluses” get added to an item, for example. These are not usually listed as spells in any canonical list because they have only that one purpose. Simply regard the total combat plus (counting attack and damage separately) as one more than the spell level of the ‘spell equivalent’ and away you go.

Similarly, every mage has a series of ‘unlisted spells’ to apply sensory triggers – basic sight is a 0th-level spell, +1 for every +3 ‘perception check’ increase. At 18 (3d6) or 21 (d20), there is no longer a need to roll a perception check for the item to ‘see’ an activation trigger, and it functions perfectly. Similarly, you can include hearing so that an object will be aware of a spoken Activation Word. This was mentioned in Part One as “Embedding a sense” into the object.
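As a quick aid, the sensory-trigger progression above reduces to a one-liner; `sense_trigger_level` is a name of my own devising, and I’m assuming the bonus climbs in the +3 steps described:

```python
# A small sketch of the 'unlisted' sensory-trigger spell levels above:
# basic sight is 0th level, +1 level per +3 to the embedded perception check.
# The function name is mine, not the article's.

def sense_trigger_level(perception_bonus):
    """Spell level of an embedded sense with the given perception bonus."""
    if perception_bonus < 0:
        raise ValueError("perception bonus cannot be negative")
    return perception_bonus // 3  # +0 -> 0th, +3 -> 1st, +6 -> 2nd, ...

print(sense_trigger_level(0))   # -> 0: basic sight
print(sense_trigger_level(9))   # -> 3: a +9 perception sense is 3rd level
```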

Enchantment Potential

Of course, it’s a little embarrassing if you spend fifty hours slaving over a +3/+3 Holy Avenger only to find that the unenchanted object doesn’t have sufficient capacity for the spell(s) required.

A simple ‘Detect Magic’ – and a skilled interpretation of the results – is all it takes to estimate, within a point or two, the total Enchantment Capacity of an object. Some even suggest that this is the true purpose of the spell, and the fact that it makes already-enchanted objects detectable – the purpose for which it is commonly used – is merely a happy side-benefit.

Exceeding The Bounds

It’s even possible to enchant an object with more magic than can properly be bound into it (if you got your estimated Enchantment Potential wrong, for example) – the Enchantment Process will take whatever additional Potential it needs from the enchanter’s life force, permanently consuming their hit points.

Exceeding an estimate by just a point or two is painful but rarely debilitating. Deliberately exceeding an estimate by a hundred points or more is usually permanently lethal (at best, crippling) – but, fanatics….

The resulting magic item is inherently and permanently locked, obviously.

Reforging

Okay, so that’s the basic process. It’s also possible to reforge a non-locked magic item – changing the trigger mechanism or basic spell effect. This is known as Reforging the item.

You simply cast the spell that is already in the item, into the item, matching perfectly any Metamagics embedded within, while at the same time, casting the new spell and embedding the new trigger into the item.

Sounds easy enough, doesn’t it?

The difficulty comes with the casting check described earlier. Not only do you have to match the casting difficulty of the original spell and trigger, but also of your replacement spell and trigger, and any shortfall is experienced as hit point damage. Even if the spell trigger is to remain exactly the same, you still have to cover the original trigger and the new one; it doesn’t matter that they are both the same.

So it’s actually at least twice as difficult as enchanting an object from scratch. The time requirements are also stacked, so this is not something that can be done in the field.

And, reforging for a third time? Add the difficulty of all the previous spells to the difficulty of the new spell – at least tripling the original difficulty.
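The stacking described above is just a running sum, which can be sketched as follows (the name `reforge_target` and the sample target of 15 are my own illustrative assumptions):

```python
# A hedged sketch of how reforging difficulty stacks: each reforge must
# cover the casting target of every spell already embedded in the item,
# plus the replacement's own target. Names and numbers are mine.

def reforge_target(previous_targets, new_target):
    """previous_targets: the casting targets of the original enchantment
    and of every prior reforging; new_target: the replacement's target."""
    return sum(previous_targets) + new_target

original = 15                                 # the original spell's target
print(reforge_target([original], 15))         # first reforge  -> 30 (doubled)
print(reforge_target([original, 15], 15))     # second reforge -> 45 (tripled)
```

With equal targets throughout, this reproduces the article’s “at least twice as difficult” and “at least tripling” escalation.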

But, there is a simpler way…

Fusing of two magic items together

Merging one magic item (the base item) with one of equal or lesser enchantment (the donor item) undoes any magic in the donor item and transfers the resulting unused potential into the base item, as was explained in Part One.

This is relatively quick – where Reforging an item reads ‘hours’, Fusion reads ‘minutes’; where you might read ‘days’, read ‘hours’ (but it’s still not something to be done in the field).

In most other respects, it’s the same as taking one magic item of the Potential of the end item and enchanting it. Same skill rolls required, same damage if you get it wrong, and so on.

The fusion of two magic items preserves the existing magic of the base item and adds the potential required for it to be enhanced at the same time, either adding a new magical power (with associated activation, etc) or improving the one that’s already there.

And, since one of the most common powers embedded into magic items is a Combat Plus, ‘improving the one that’s there’ is very often the whole point. And that’s why returning the ‘rules’ of Fusing objects together to the foundation of the Combat Plus or equivalent makes a lot of sense.

Degrees of similarity

Since the magic of the donor object is unraveled by the process, that doesn’t matter too much – a Speed item can become a Flametongue item, no problem. But the basic shape has to be similar (both longswords or chain mail or whatever).

So long as they have the same description in game mechanics disregarding any magical enhancement, they are similar enough for fusion. But if one is made of Mythril, or Shadowsteel, or Jade, or whatever, so must the other one be.

There is a side-effect of the fusion process that should be noted, however: the same basic equation (Materials + Craftsmanship + Suitability) remains as valid regarding the composite item as it was to the constituents. At least one of these, possibly more, will therefore have to improve markedly as a result of the fusion process. This can mean that exotic new decoration becomes etched onto a blade, or that the material the blade is made from is transformed into something else, or that the hilt changes color – but there is a visible consequence to the blending of two items.

With high magic as a status symbol, that would make the process doubly attractive to certain people.

But there are others who like to fly beneath the radar. It is quite possible to embed into a magic item a low-level illusion that it is a worthless or poor representative of its kind – a wooden +1 dagger, for example. But when the command word is uttered, it becomes a +5/+5 Dancing Blade….

Degrees of Magical Similarity

The Enchantment Potentials must also be similar. They don’t have to be identical, though. The more closely matched they are, the more effective and efficient the fusion process.

This is summed up by two rules:
 

  1. Plus N and Plus N fuse to create an object of Plus N+2.
     
  2. Plus N-1 and Plus N fuse to create an object of Plus N+1.

 
…but it’s usually more convenient to rearrange the second one to read:
 

  1. Plus N and Plus N+1 fuse to create an object of Plus N+2.

 
– you just have to remember that it’s the magic of the higher-plus item that is preserved in the initial state of the fused item.

All you then have to do is determine the Enchantment Potential that corresponds with the new item, and you’re ready to enhance / further enchant it.
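The two rules above can be captured in a few lines. This is just a sketch under my own naming (`fuse`), treating pluses as plain integers:

```python
# A minimal sketch of the two fusion rules above. The function name is mine.

def fuse(a, b):
    """Combine two pluses; returns the fused plus, or None if the two
    items are too far apart (more than one step of difference) to fuse."""
    if a == b:
        return a + 2           # Plus N and Plus N fuse into Plus N+2
    if abs(a - b) == 1:
        return max(a, b) + 1   # Plus N and Plus N+1 fuse into one more
                               # than the higher, i.e. Plus N+2 from the lower
    return None

print(fuse(1, 1))  # -> 3
print(fuse(1, 2))  # -> 3
print(fuse(2, 4))  # -> None: too far apart to fuse
```

Note that both rules produce the same result from a +1, which is why the rearranged form of the second rule is so convenient.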

Surprising complexity

These two very simple rules combine to result in surprisingly complex behavior. When working to combine multiple objects together, there can be multiple pathways – some far more efficient than others.

Obviously, fusing matched pairs is inherently the most efficient method – the higher-plus member of a mismatched pair costs more, so the result is relatively less cost-effective (by some margin) than that of a matched pair.

  • +1 and +1 make a +3 for a cost of two +1 items.
  • +1 and +2 also make a +3, but the difference in cost between a +2 and a +1 is significant, and reduces the cost-effectiveness of the resulting +3 item.

But things become more complicated when you are hunting down those items in the wild rather than simply commissioning them. It’s still worth fusing a +1 and +2 item together to make a +3 when you already have both the ingredient items.

I’ve spent a lot of time analyzing the resulting enhancement patterns in order to spell them out for you, the GM – but players should be told only the basic rules above, and left to deduce the smarter upgrade strategies for themselves.

Symbology

To make these patterns transparent to the reader, once again, I need to expand the nomenclature.

    2×+2 should be read as two items, each of +2.

This enables the representation of the fusion process in a simpler, more abstract, manner that’s easier to comprehend.

For example, if I have:

  • +0, +0, +1, +2, +2, +3

items, all suited for fusion, this would be written

  • 2×+0, +1, 2×+2, +3.

Sequential Fusion

Putting a string of fusions together in the most efficient way possible can be quite complex. After a lot of study, I’ve found that it’s easiest to work the process out in steadily-progressing values of +N.

You may be tempted to leap ahead because the path seems so obvious, but it’s easy to make a mistake.

Fusion Sequence

Let’s take those items and see what can be made of them, because it permits me to demonstrate the way that I will depict the fusion process.

  1. Starting point: 2×+0, +1, 2×+2, +3.
  2. +0 & +0 make +0+2 = +2, i.e. 2×+0 = +2.
  3. +1 and that +2 make +3.
  4. The two +2s that we already had make +4.
  5. The +3 that we made and the +3 that we already had fuse to make +3+2 = +5.
  6. The +4 and the +5 combine to make +6.

So all of those together can be combined to make a single +6 item.

Which One’s The Base Item?

To determine the base item (and there may be multiple choices), we need to track back through the sequence, looking for the thread that binds the higher-plus items together.

  • +5 is higher than +4, so the +5 contains the base item of the +6.
  • That means that either the +3 that we already had, or the +3 that we made, are or contain the base item.
  • If it’s the +3 that we made, then ANY of the +2 items, including the one that we made, could be the base item.
  • If it’s the +2 that we made, then either of the +0 items could be the base item.

So the potential candidates are either of the +0s, either of the +2s, or the +3 that we already had. The way that we configure the fusion chain, and the choices of the artificer constructing that fusion chain, determine which. Only the +1 can definitely be excluded from the list.

Same items, an alternative fusion chain

As I said, multiple items lead to multiple ways they can be combined.

  1. Starting point: 2×+0, +1, 2×+2, +3.
  2. Set aside one of the +0 items.
  3. +0 & +1 make +2.
  4. With three +2s, one of them has to be set aside for a moment because we have nothing to pair it with. The other two make +4.
  5. Take the +2 that we set aside and combine it with the +3 that we already had to get a second +4.
  6. The two +4s combine to make the +6.
  7. And we still have that +0 left over!

Okay, so a +0 item isn’t going to be worth very much. But the same fusion chain applies if we add +2 to every plus shown:

  1. Starting point: 2×+2, +3, 2×+4, +5.
  2. Set aside one of the +2 items.
  3. +2 & +3 make +4.
  4. With three +4s, one of them has to be set aside for a moment because we have nothing to pair it with. The other two make +6.
  5. Take the +4 that we set aside and combine it with the +5 that we already had to get a second +6.
  6. The two +6s combine to make the final +8.
  7. And we still have the +2 left over.

While the cost of a +2 item would pale next to that of a +6, it’s not insignificant.

This permits the definition of a useful general principle: if two fusion chains produce the same end item, the one that leaves the more valuable leftovers is the more efficient process.
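When a chain gets complicated, it can be worth checking your pathway by brute force. The sketch below is my own checker, not part of the article’s rules: it tries every possible fusion order and reports the best reachable end-state, ranking by highest final plus first and then by the value of the leftovers, in line with the efficiency principle above.

```python
# A brute-force fusion-chain checker (my own sketch). It explores every
# pairing order and returns the best reachable end-state as a tuple of
# pluses in descending order, so tuple comparison ranks by highest plus
# first, then by the most valuable leftovers.

from functools import lru_cache
from itertools import combinations

def fuse(a, b):
    """The two fusion rules: equal pluses gain 2; adjacent pluses become
    one more than the higher; anything further apart cannot fuse."""
    if a == b:
        return a + 2
    if abs(a - b) == 1:
        return max(a, b) + 1
    return None

@lru_cache(maxsize=None)
def best_outcome(items):
    """items: a sorted tuple of pluses; returns the best end-state."""
    best = tuple(sorted(items, reverse=True))   # option: fuse nothing more
    for i, j in combinations(range(len(items)), 2):
        fused = fuse(items[i], items[j])
        if fused is None:
            continue
        rest = [v for k, v in enumerate(items) if k not in (i, j)]
        rest.append(fused)
        outcome = best_outcome(tuple(sorted(rest)))
        if outcome > best:
            best = outcome
    return best

# The 3, 4, 4, 5 set: a +7 with a +4 left over beats a bare +7.
print(best_outcome((3, 4, 4, 5)))            # -> (7, 4)
# The worked 2x+0, +1, 2x+2, +3 set tops out at a +6.
print(best_outcome((0, 0, 1, 2, 2, 3))[0])   # -> 6
```

This is exponential in the number of items, so it’s only practical for the handfuls of items a party actually carries – but for those, it settles any argument about which chain is best.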

Parity

When you first start exploring a fusion chain, your overwhelming focus is on the plus values and trying to create pairs, because they are clearly more efficient.

After a while, though, you may start to become aware of the effects of Parity.

  • Even numbers of both +N and +N+1 are good.
  • Odd numbers of both +N and +N+1 are okay.
  • Mixed odd and even counts are worst.

While these principles are not entirely incorrect (and hence are expanded upon below), they can also be misleading – and unless you pay very close attention (something you aren’t likely to do if you perceive the above to be the height of wisdom), you may not notice for a long time.

Evens and Evens

Having even numbers of both +N and +N+1 items yields a very simple strategy; both sets naturally break apart into perfectly matched pairs, fusing together in the most efficient process possible.

  • 2×+N = +(N+2)
  • 2×+(N+1) = +(N+3)

As you can see, this forms a pair of natural progressions that alternate, with N → N+2 → N+4 → N+6, and so on, on one side of the ladder and N+1 → N+3 → N+5 → N+7, etc, on the other.

The temptation is to deal with each side of the ladder in sequence, ignoring the odd-valued +N‘s while working on the even ones, and vice-versa.

That’s a great way to reach a dead end.

You are far more likely to put together a logical and efficient fusion sequence – one that doesn’t ignore the second rule describing the possible steps of such a sequence – if you do them in order of increasing plus.

Odds and Odds

You’ll find that as you do more of these, you will start to find notational shortcuts, and these are bound to slip into this presentation – so I’m not going to try and stop them. They are still saying the same things, just not being as formal about them.

Odds and Odds are almost as easy to work with as Evens and Evens, but it’s more easily explained with a quick demonstration. Simply pair up everything that matches, and then combine the leftovers; with both numbers of items being odd, it’s inevitable that you will have one of each in the sequence.

  • 1, 1, 1, 2, 2, 2, 2, 2 is formally written 3×+1, 5×+2.
  • One pair of +1s becomes a +3, leaving one +1 left over.
  • Two pairs of +2s become one pair of +4s, with one +2 left over.
  • The leftover +1 and the leftover +2 combine to make an extra +3.
  • So the result of these two steps up the ladder is 2×+3 and 2×+4.
  • Another way of writing this process down might be by putting brackets around each pair: (1, 1), (1, 2), (2, 2), (2, 2).

Evens and Odds

Things get more interesting when you have an even number of +N items and an odd number of +N+1 items.

2, 2, 3 can be grouped in one of two ways: (2, 2), 3 or 2, (2, 3).

The first option produces (4), 3, while the second yields 2, (4).

+2 and +4 cannot fuse – they are too far apart; +3 and +4 can become a +5.

It doesn’t matter what +N you use, the same principle will apply. Nor does it matter how many items you have, so long as the count of +N items is even and the count of +N+1 items is odd.

Let’s look at a couple of more complex situations to prove the point:

2, 2, 3, 4, and then 2, 2, 3, 4, 4.

2, 2, 3, 4 first:

  • Option 1: (2, 2), 3, 4 → 3, 4, 4;
  • Option 2: 2, (2, 3), 4 → 2, 4, 4.
  • Our rule about the efficiency of remainders clearly states that having a +3 left over is better than having a +2 [which should be pretty obvious, anyway]. So option one, matching the pairs, is clearly the preferred answer.
  • Or is it? Option 1 produces two choices for the next step in the fusion chain, while Option 2 only permits one (because +2 + +4 is invalid):
    • Option 1A: (3, 4), 4 → 4, (5); (4, 5) → 6.
    • Option 1B: 3, (4, 4) → 3, (6).
    • Option 2: 2, (4, 4) → 2, (6).

    Hmm, so option 1A combines everything into a single +6, Option 1B combines everything into a +6 with a +3 left over, Option 2 combines everything into a +6 with a mere +2 remaining. Option 1A is clearly the least efficient, option 1B is the most efficient, and Option 2 is somewhere in between.

That confirms the principle of pair matching having priority, at least in this case. But if we have another +4 in the mix, is that still the case?

2, 2, 3, 4, 4:

  • Option 1: (2, 2), 3, 4, 4 → 3, (4), 4, 4;
  • Option 2: 2, (2, 3), 4, 4 → 2, (4), 4, 4.
  • So we still have the same choice between a +3 left over or a +2 remainder.
  • Except that the +3 and the (+4) can then combine to make a +5, the two other +4s can make a +6, and +5 and +6 then make a +7. Option 2, by contrast, ends stuck at +2, +4, +6 – and a +7 is clearly better than a +6. This is a clear example of the “Odds & Odds” rule given above.

So the rule for Evens and Odds is always to pair the Evens.

Odds and Evens

Interestingly, reversing the parities does NOT yield the same result.

Anticlimax up front: After careful comparison of the alternatives, I have found that a useful rule of thumb is that it is always better in the long run to break apart a matched pair in order to form a better matched pair.

You may not have noticed it, but I’ve already demonstrated Odds and Evens – this is the difference between Option 1A and Option 1B in the 2, 2, 3, 4 example above. And it says to match the pairs and leave the unmatched odd N as a leftover.

Except that this doesn’t always work. Consider 3, 4, 4, 5:

  • Option 1: 3, (4, 4), 5 → 3, 5, (6) → 3, 7. Looks okay, doesn’t it?
  • Option 2: (3, 4), 4, 5 → 4, (5), 5 → 4, 7. What?

As I said in my anticlimax, it’s always better to break a matched pair (in this case, the pair of +4’s) to achieve a better matched pair (in this case, the pair of +5’s). And +4 is clearly a better remainder than +3.

But this only works because the number of existing N+2’s is odd (one +5), and we are making it even (two +5’s).

If we add another +5 to mix, the results are completely different:

3, 4, 4, 5, 5:

  • Option 1: 3, (4, 4), (5, 5) → 3, (6), (7) → 3, (8);
  • Option 2: (3, 4), 4, 5, 5 → 4, (5), 5, 5; then (4, 5), (5, 5) → (6), (7) → (8);

Both paths lead to a +8 item, but the path that was wrong without that extra +5 also leaves a +3 unused – which can be sold off, or kept to become part of another upgrade chain to improve that +8.

In other words, +3 is clearly better than nothing!

So the rule is: make the better matching pair, even if you have to break a matched pair to do it.

That 3, 4, 4, 5 pattern is so common that it was recognized even before the general analysis of odds and evens, and was initially considered an exception to the general rule that I was then using.

Even now, I sometimes need to work through an entire fusion chain to verify the right answers.
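If you’d rather let a computer grind through the fusion chains, here’s a quick brute-force checker. This is a sketch, not anything official: the fusion rule it encodes – two items whose pluses are equal or differ by one fuse into min + 2 – is inferred from the worked examples in this article, and the ranking (biggest finished item first, most valuable leftovers as tie-breaker) is my reading of the efficiency-of-remainders rule.

```python
from itertools import combinations

def fuse(a, b):
    """Fusion rule inferred from the worked examples: two items can
    fuse only if their pluses are equal or differ by one, and the
    result is always min(a, b) + 2 -- e.g. (2,2)->4, (2,3)->4, (5,5)->7."""
    return min(a, b) + 2 if abs(a - b) <= 1 else None

def reachable(items):
    """All states reachable from the starting collection by any
    sequence of fusions; each state is a sorted tuple of pluses."""
    items = tuple(sorted(items))
    states = {items}
    for i, j in combinations(range(len(items)), 2):
        fused = fuse(items[i], items[j])
        if fused is not None:
            rest = [v for k, v in enumerate(items) if k not in (i, j)]
            states |= reachable(rest + [fused])
    return states

def best(items):
    """Rank outcomes as the article does: biggest finished item first,
    then the most valuable leftovers as the tie-breaker."""
    return max(reachable(items), key=lambda s: (max(s), sorted(s, reverse=True)))

print(best([2, 2, 3, 4]))     # (3, 6) -- pair the evens, +3 left over
print(best([2, 2, 3, 4, 4]))  # (7,)   -- the extra +4 changes the answer
print(best([3, 4, 4, 5]))     # (4, 7) -- break the 4s to pair the 5s
```

Exhaustive search is overkill for the handfuls of items a party will actually hold, but it settles any argument about which pairing rule wins in a particular case.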

A complex example

1, 2, 2, 3, 4

Path one:

  • (1+2) → 3;
  • 2 + (3) → 4;
  • 3 + (4) → 5;
  • 4 + (5) → 6.
  • End result: +6, nothing remaining.

Path two – Ignore the +1:

  • (2+2) → 4;
  • Ignore the +3;
  • (4+4) → 6.
  • End result: +6, with +1 and +3 remaining.

Path three – Ignore one of the +2’s:

  • (1+2) → 3;
  • (3+3) → 5.
  • (4+5) → 6.
  • End result: +6, with +2 remaining.

Path four – Ignore the +3:

  • (1+2) → 3;
  • (2+(3)) → 4.
  • (4+4) → 6.
  • End result: +6, with +3 remaining.

Path five – Ignore the +4:

  • (1+2) → 3;
  • Ignore the other +2;
  • (3+(3)) → 5.
  • End result: +5, with +2 and +4 remaining.

Path 2 is clearly the best path, followed by Path 4. Path 5 is clearly the worst, followed by Path 1.

Analysis, 1, 2, 2, 3, 4:
  • 1, 2, 2 = odd & even, so the right choice depends on the count of +3’s. In this case there’s one, so it looks worth breaking the +2 matched pair to create a matched pair of +3’s, yielding a +5.
  • OR IS IT? Not breaking them creates a matched pair of +4’s, yielding a +6.
  • Because the +4’s are the better pair, that controls the pathway. Ignore the +1 and the +3; they are red herrings.
An even more complex example

I was thinking about tossing 3, 4, 4, 4, 4, 5, 5, 6, 8 at you, but decided not to. Hint: ignore the +3; pair the +5’s into a +7 and join the +6 to that to create a +8; those are the leftovers. Everything else makes a +10.

Scope For Nuance

Over the course of this two-part article, this approach has yielded:

  • 3 scales of magic;
  • 3 ways of pricing;
  • Multiple activation choices;
  • Multiple enhancement capacities;
  • Multiple variations on the same basic item;
  • Greater flexibility by creating extra space as combat plus increases;
  • and two systems for fusing weaker magic objects together to enhance one of them.

But it’s not quite finished yet!

I thought that I would throw one more curve-ball at you: the person implanting the (suspended) spell and the one creating the trigger do not have to be of the same character class, using the same kind of ‘magic’. It’s perfectly acceptable to mix and match – clerical magic with Druidic magic with ranger magic, or whatever you want. You can even have one spell that modifies the output of another, as though you had two different spell-casters co-operating with each other.


Uncoupling DnD’s Heisenberg Compensators


My internet connection is still fraught. It will sometimes work for hours, and then not be available for days. Which makes this article fraught with potential problems. I’ll do my best – but it’s worth noting that less than an hour after last week’s post, the internet crashed and stayed down for about seven hours. If that had started just an hour earlier, the post could not have been published at all.

At something close to the last possible moment, I’ve decided to split this article into two, because if I didn’t, the second half would completely overshadow the first half.

“Uncoupling the Heisenberg Compensators” is some of my favorite technobabble from Star Trek: The Next Generation, because both the character using it and the audience know that it’s technobabble created specifically to deceive the villain of the episode. Hence, it’s perfectly fine for it to mean absolutely nothing; in fact, it doesn’t ever pretend to have any meaning whatsoever aside from that deception.

And yet, like all good technobabble, it readily hints at an implied significance while never stating anything provable outright. It sounds a little more scientific and technical and technological than “crossing the streams”, for example.

“Heisenberg”, of course, implies some relationship between the fictitious technology and the Heisenberg Uncertainty Principle, which places limits on how much we can know through direct observation. Since the technobabble supposedly relates to the teleportation technology of the show, and one of the Heisenberg limitations refers to the position of subatomic particles, this all seems to hold together.

In exactly the same way, linking the combat “plus” of a weapon to its magical bonus seems to make perfect sense, at least on the surface. But interesting consequences can result if you uncouple these two concepts, replacing the one-to-one identity relationship between them with a far looser, indirect relationship.

The formal existing relationship

To the best of my knowledge, neither D&D nor Pathfinder ever states the equality outright; they simply assume it to be the case, and use the terms interchangeably – if they even distinguish between them at all.

Once you become aware that the two things don’t have to possess such an equality, once you uncouple the two concepts, the game systems stop being hostages to this most fundamental of game mechanical assumptions.

What do I mean? Each magical +1 to arms or armor represents a step up an escalating power scale – either a geometric one or an exponential one, depending on who you ask. This numeric quantity is used to index the power level of the magical device, as well as being a direct input into the relevant game mechanics – armor class in the case of armor, attack bonus in the case of a weapon.

So ubiquitous is this approach that the same indexing is often used (unofficially) to describe relative magical power in entirely unrelated pieces of arcane hardware.

The assumed equality immediately saddles the game mechanics with three problems:
 

  1. The increase from +4 to +5 is the same as the increase from +1 to +2 – or from ‘plus-nothing’ to +1, for that matter. The more things you have to cram into that space, the smaller grows the capacity for nuance, for making this +4 item different to that one.
     
  2. The increase has to be reflected in a very steep progression in price and rarity. Quite often, this then has to be reflected in the capabilities of the object, needed to justify that price – once again restricting the capacity for differentiation from one object to the next.
     
  3. Consistency across several objects becomes a problem that is most easily solved with a cookie-cutter approach, again squeezing life and flavor out of the magical items emplaced. This makes the game mechanics simpler to learn and use, but further squeezes the life and individuality out of the objects.

 
None of this is good news. It certainly adds impetus to the idea of separating the two concepts from each other.

This is more easily said than done; but after quite a long time with the question at the back of my mind, I think I’ve cracked the problem. That solution is the subject of today’s article.

Section 1: A sliding scale of magic

Look, if this discussion is going to make sense, I need to lay out some ground rules for nomenclature before I do anything else. So, for the rest of this article:

  • “+n” in italics will refer to the combat value of the magic item, the traditional interpretation.
  • “+n”, not in italics, will refer to the “magical plus” of the weapon, which in turn is used to determine value, construction cost and difficulty, relative power level, etc.

Okay, so the basic concept is that each successive combat +n requires +n more steps on the magic scale than the plus before it (times a multiplier x, on top of a one-off base). Sounds simple enough, doesn’t it? But it’s a fundamental conceptual shift:

  • +0 = + base
  • +1 = + (+1) × x + base
  • +2 = + (+2 +1) × x + base = + (3 × x) + base
  • +3 = + (+3 +2 +1) × x + base = + (6 × x) + base
  • +4 = + (+4 +3 +2 +1) × x + base = + (10 × x) + base
  • +5 = + (+5 +4 +3 +2 +1) × x + base = + (15 × x) + base
  • +6 = + (+6 +5 +4 +3 +2 +1) × x + base = + (21 × x) + base
  • +7 = + (+7 +6 +5 +4 +3 +2 +1) × x + base = + (28 × x) + base
  • +8 = + (+8 +7 +6 +5 +4 +3 +2 +1) × x + base = + (36 × x) + base
  • +9 = + (+9 +8 +7 +6 +5 +4 +3 +2 +1) × x + base = + (45 × x) + base
  • +10 = + (+10 +9 +8 +7 +6 +5 +4 +3 +2 +1) × x + base = + (55 × x) + base

You’ll see why this is a useful reconstruction a little later on. Most people would also assume that x = 1 and base = 0, but it ain’t necessarily so. In fact, I recommend x=2 and base=3 for reasons that will become clear a little later.
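Since 1 + 2 + … + n is just the triangular number n(n+1)/2, the whole table above collapses into a one-line formula. A minimal sketch (the function name is mine, not the article’s terminology):

```python
def magic_plus(n, x=2, base=3):
    """Magic-scale value of a combat +n under the primary option.
    Each combat plus +k costs k steps more than the one before, so
    the total is the triangular number n(n+1)/2, times x, plus base."""
    return n * (n + 1) // 2 * x + base

# Reproduces the table above with the recommended x=2, base=3:
for n in range(11):
    print(f"+{n} = +{magic_plus(n)}")
```

Swapping in x=1, base=0 recovers the "most people would assume" scale, so the same function covers any choice of constants.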

Variation #1

Instead of +n contributing +n×x to the magic item, it contributes +(n+1)×x. Sounds like a small change, doesn’t it? But it accumulates to an amount of some significance.

  • +0 = + 1 × x + base
  • +1 = + (+1+1) × x + base = + (2 × x) + base
  • +2 = + (+2 +1 +2) × x + base = + (5 × x) + base
  • +3 = + (+3 +1 +5) × x + base = + (9 × x) + base
  • +4 = + (+4 +1 +9) × x + base = + (14 × x) + base
  • +5 = + (+5 +1 +14) × x + base = + (20 × x) + base
  • +6 = + (+6 +1 +20) × x + base = + (27 × x) + base
  • +7 = + (+7 +1 +27) × x + base = + (35 × x) + base
  • +8 = + (+8 +1 +35) × x + base = + (44 × x) + base
  • +9 = + (+9 +1 +44) × x + base = + (54 × x) + base
  • +10 = + (+10 +1 +54) × x + base = + (65 × x) + base

Once again, my recommendation is x=2 and base=3.

Variation #2

Instead of +n contributing +n×x or +(n+1)×x to the magic item, it contributes +(n+1)×x for the first two +n levels, then +(n+3)×x for the next two levels, then +(n+5)×x for the two after that, and so on.

  • +0 = + 1 × x + base
  • +1 = + (+1+1) × x + base = + (2 × x) + base
  • +2 = + (+2 +1 +2) × x + base = + (5 × x) + base
  • +3 = + (+3 +3 +5) × x + base = + (11 × x) + base
  • +4 = + (+4 +3 +11) × x + base = + (18 × x) + base
  • +5 = + (+5 +5 +18) × x + base = + (28 × x) + base
  • +6 = + (+6 +5 +28) × x + base = + (39 × x) + base
  • +7 = + (+7 +7 +39) × x + base = + (53 × x) + base
  • +8 = + (+8 +7 +53) × x + base = + (68 × x) + base
  • +9 = + (+9 +9 +68) × x + base = + (86 × x) + base
  • +10 = + (+10 +9 +86) × x + base = + (105 × x) + base

Clearly, this breaks the gap between +(#) and +(#+1) into smaller, more numerous pieces. But by varying the rate of increase, it also increases power levels within a magic item in a non-linear fashion.

My recommendations for x and base remain unchanged.

As you can see, these widen the gap – the number of magical pluses – between combat-plusses from one to many steps, with the separation from one combat plus to the next widening as the combat effectiveness, or its equivalent valuation, rises. It takes more to go from a +4 to a +5 than it does to go from +3 to +4.

The rest of this article will assume that the primary option has been chosen, with the recommended values, i. e.

  • +0 = + 3
  • +1 = + (+1) × 2 + 3 = + 5
  • +2 = + (+2 +1) × 2 + 3 = + (3 × 2) + 3 = + 9
  • +3 = + (+3 +2 +1) × 2 + 3 = + (6 × 2) + 3 = + 15
  • +4 = + (+4 +3 +2 +1) × 2 + 3 = + (10 × 2) + 3 = + 23
  • +5 = + (+5 +4 +3 +2 +1) × 2 + 3 = + (15 × 2) + 3 = + 33
  • +6 = + (+6 +5 +4 +3 +2 +1) × 2 + 3 = + (21 × 2) + 3 = + 45
  • +7 = + (+7 +6 +5 +4 +3 +2 +1) × 2 + 3 = + (28 × 2) + 3 = + 59
  • +8 = + (+8 +7 +6 +5 +4 +3 +2 +1) × 2 + 3 = + (36 × 2) + 3 = + 75
  • +9 = + (+9 +8 +7 +6 +5 +4 +3 +2 +1) × 2 + 3 = + (45 × 2) + 3 = + 93
  • +10 = + (+10 +9 +8 +7 +6 +5 +4 +3 +2 +1) × 2 + 3 = + (55 × 2) + 3 = + 113

…but I will still try to mention the consequences of choosing differently as I go along (though I may stop once I think I’ve gotten the point across).

Section 2: Fixed Price increases

If you have a wider and increasing gap between combat plusses – an increase in the number of intervals between one and the next – and a price that increases geometrically according to the number of magical plusses rather than combat plusses, then you need a much smaller increase per step to achieve significant but predictable growth in value / cost.

As things stand, a fairly aggressive exponential increase is needed to reflect the rarity and increasing (and compounding) value of each combat plus, so all +2 weapons look and cost the same (relative to the base price of the weapon).

The value that I am recommending for each magical plus might seem like a complicated one – × cube-root of 2, or × 1.259921 – but it’s one that I think will generate reasonable values. There is lots of room for variation, though, so you can pick a value that feels right to you.

Let’s translate the results:

  • +0 = base × 1.259921^3 = × 2
  • +1 = base × 1.259921^5 = × 3.17
  • +2 = base × 1.259921^9 = × 8
  • +3 = base × 1.259921^15 = × 32
  • +4 = base × 1.259921^23 = × 203.19
  • +5 = base × 1.259921^33 = × 2048
  • +6 = base × 1.259921^45 = × 32 767.94
  • +7 = base × 1.259921^59 = × 832 253.38
  • +8 = base × 1.259921^75 = × 33 554 332.34
  • +9 = base × 1.259921^93 = × 2 147 475 739
  • +10 = base × 1.259921^113 = × 218 171 697 630.15

This scale of increase works for Pathfinder, where items can have up to +10 – though it does go a little off the chart at the end. That’s because the list above slightly misstates the principle: each level actually defines the maximum of a range:

  • +0 = base × 1.259921^3 = × 0 – 2
  • +1 = base × 1.259921^5 = × 2 – 3.17
  • +2 = base × 1.259921^9 = × 3.17 – 8
  • +3 = base × 1.259921^15 = × 8 – 32
  • +4 = base × 1.259921^23 = × 32 – 203.19
  • +5 = base × 1.259921^33 = × 203.19 – 2048
  • +6 = base × 1.259921^45 = × 2048 – 32 767.94
  • +7 = base × 1.259921^59 = × 32 767.94 – 832 253.38
  • +8 = base × 1.259921^75 = × 832 253.38 – 33 554 332.34
  • +9 = base × 1.259921^93 = × 33 554 332.34 – 2 147 475 739
  • +10 = base × 1.259921^113 = × 2 147 475 739 – 218 171 697 630.15

For D&D, where items very rarely go above +5, you might want to use a larger value. Or a choice with more steps – one of the alternatives offered in section 1. The trick is always balancing the size of increases at the lower end of the scale with those at the higher end.
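The two halves of the system compose mechanically: a price multiplier is just the chosen per-magical-plus factor raised to the item’s total magical plus. A sketch assuming the primary Section 1 scale (x=2, base=3) and the recommended cube-root-of-2 factor; the function names are mine:

```python
def magic_plus(n, x=2, base=3):
    # Triangular-number magic scale from Section 1.
    return n * (n + 1) // 2 * x + base

def price_multiplier(n, factor=2 ** (1 / 3)):
    """Value multiplier for a combat +n item: the per-magical-plus
    factor raised to the item's total magical plus."""
    return factor ** magic_plus(n)

# Regenerates the base pricing table; pass factor=10 ** 0.25,
# 5 ** 0.5, etc., to reproduce the variations that follow.
for n in range(11):
    print(f"+{n}: x {price_multiplier(n):,.2f}")
```

Because the factor is a free parameter, tuning the curve for D&D versus Pathfinder is a one-argument change rather than a new table.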

Variation #1: × 4th root of 10

The square root of 10 is 3.1622777, and the square root of that is 1.778279.

  • +0 = base × 1.778279^3 = × 5.62
  • +1 = base × 1.778279^5 = × 17.78
  • +2 = base × 1.778279^9 = × 177.83
  • +3 = base × 1.778279^15 = × 5623.39
  • +4 = base × 1.778279^23 = × 562 338.34
  • +5 = base × 1.778279^33 = × 177 826 588
  • +6 = base × 1.778279^45 = × 1.7783 × 10^11
  • +7 = base × 1.778279^59 = × 5.623 × 10^14
  • +8 = base × 1.778279^75 = × 5.623 × 10^18
  • +9 = base × 1.778279^93 = × 1.778 × 10^23
  • +10 = base × 1.778279^113 = × 1.778 × 10^28

The utter ridiculousness of the results at +6 and above makes this more suited to D&D.

Variation #2: × root 5

A simple alternative is to use the square root of 5, or 2.236068. Note that this will produce an even steeper growth curve.

  • +0 = base × 2.236068^3 = × 11.18
  • +1 = base × 2.236068^5 = × 55.9
  • +2 = base × 2.236068^9 = × 1397.54
  • +3 = base × 2.236068^15 = × 174 692.84
  • +4 = base × 2.236068^23 = × 109 183 032
  • +5 = base × 2.236068^33 = × 3.412 × 10^11
  • +6 = base × 2.236068^45 = × 5.33 × 10^15
  • +7 = base × 2.236068^59 = × 4.17 × 10^20
  • +8 = base × 2.236068^75 = × 1.63 × 10^26
  • +9 = base × 2.236068^93 = × 3.18 × 10^32
  • +10 = base × 2.236068^113 = × 3.10 × 10^39
Variation #3: × ½ of root 10

This variation has a steeper curve still, but ameliorates that with lower values at lower levels thanks to the “½ of”. Root 10 = 3.1622777, and half of that is 1.58113885. The result is something that is somewhere in between the base version and the first variation.

  • +0 = base × 1.58113885^3 = × 3.95
  • +1 = base × 1.58113885^5 = × 9.88
  • +2 = base × 1.58113885^9 = × 61.76
  • +3 = base × 1.58113885^15 = × 965.05
  • +4 = base × 1.58113885^23 = × 37 697.3
  • +5 = base × 1.58113885^33 = × 3.681 × 10^6
  • +6 = base × 1.58113885^45 = × 8.989 × 10^8
  • +7 = base × 1.58113885^59 = × 5.4857 × 10^11
  • +8 = base × 1.58113885^75 = × 8.3705 × 10^14
  • +9 = base × 1.58113885^93 = × 3.19 × 10^18
  • +10 = base × 1.58113885^113 = × 3.05 × 10^22

This would probably be my preferred choice for D&D, with some slight tweaking / rounding:

  • +0 = base × 4
  • +1 = base × 10
  • +2 = base × 60
  • +3 = base × 1000
  • +4 = base × 40 000
  • +5 = base × 4 × 10^7
Variation #4: × 1.25, 1.5, or 2

Some GMs and players might prefer a simpler solution – none of this “square root” malarkey!

At 1.25:

  • +0 = base × 1.25^3 = × 1.95
  • +1 = base × 1.25^5 = × 3.05
  • +2 = base × 1.25^9 = × 7.45
  • +3 = base × 1.25^15 = × 28.42
  • +4 = base × 1.25^23 = × 169.41
  • +5 = base × 1.25^33 = × 1577.72
  • +6 = base × 1.25^45 = × 22 958.87
  • +7 = base × 1.25^59 = × 5.22 × 10^5
  • +8 = base × 1.25^75 = × 1.85 × 10^7
  • +9 = base × 1.25^93 = × 1.03 × 10^9
  • +10 = base × 1.25^113 = × 8.93 × 10^10

At 1.5:

  • +0 = base × 1.5^3 = × 3.38
  • +1 = base × 1.5^5 = × 7.59
  • +2 = base × 1.5^9 = × 38.44
  • +3 = base × 1.5^15 = × 437.89
  • +4 = base × 1.5^23 = × 11 222.74
  • +5 = base × 1.5^33 = × 647 159.82
  • +6 = base × 1.5^45 = × 8.4 × 10^7
  • +7 = base × 1.5^59 = × 2.45 × 10^10
  • +8 = base × 1.5^75 = × 1.61 × 10^13
  • +9 = base × 1.5^93 = × 2.38 × 10^16
  • +10 = base × 1.5^113 = × 7.91 × 10^19

(Once again, this gives reasonable numbers for +0 to +5, not so much for what happens after that).

At 2:

  • +0 = base × 2^3 = × 8
  • +1 = base × 2^5 = × 32
  • +2 = base × 2^9 = × 512
  • +3 = base × 2^15 = × 32 768
  • +4 = base × 2^23 = × 8.39 × 10^6
  • +5 = base × 2^33 = × 8.59 × 10^9
  • +6 = base × 2^45 = × 3.52 × 10^13
  • +7 = base × 2^59 = × 5.76 × 10^17
  • +8 = base × 2^75 = × 3.78 × 10^22
  • +9 = base × 2^93 = × 9.9 × 10^27
  • +10 = base × 2^113 = × 1.04 × 10^34

This looks reasonable up to +3 but then gets a bit extreme for my tastes.

Variation #5: A progressive sliding scale

Under this proposal, the exponential increase in value is partially compensated for by reducing the increase that applies as magical plus increases. 2.26, 2.06, 1.86, 1.66, 1.46, 1.26, 1.06, 1.04, 1.02, 1.018, 1.016, 1.014, 1.012… I trust you can see the pattern. But this is intended to be a progressive scale – the new multiplier only applies to exponential increases not already factored in at the previous plus.

  • +0 = base × 2.26^3 = × 11.54
  • +1 = base × 11.54 × 2.06^(5-3) = 11.54 × 2.06^2 = 11.54 × 4.24 = × 48.98
  • +2 = base × 48.98 × 1.86^(9-5) = 48.98 × 1.86^4 = 48.98 × 11.97 = × 586.29
  • +3 = base × 586.29 × 1.66^(15-9) = 586.29 × 1.66^6 = 586.29 × 20.92 = × 12 265.2
  • +4 = base × 12 265.2 × 1.46^(23-15) = 12 265.2 × 1.46^8 = 12 265.2 × 20.65 = × 253 276
  • +5 = base × 253 276 × 1.26^(33-23) = 253 276 × 1.26^10 = 253 276 × 10.09 = × 2 560 000
  • +6 = base × 2.56 × 10^6 × 1.06^(45-33) = 2.56 × 10^6 × 1.06^12 = 2.56 × 10^6 × 2.01 = × 5.14 × 10^6
  • +7 = base × 5.14 × 10^6 × 1.04^(59-45) = 5.14 × 10^6 × 1.04^14 = 5.14 × 10^6 × 1.73 = × 8.9 × 10^6
  • +8 = base × 8.9 × 10^6 × 1.02^(75-59) = 8.9 × 10^6 × 1.02^16 = 8.9 × 10^6 × 1.38 = × 1.23 × 10^7
  • +9 = base × 1.23 × 10^7 × 1.018^(93-75) = 1.23 × 10^7 × 1.018^18 = 1.23 × 10^7 × 1.38 = × 1.7 × 10^7
  • +10 = base × 1.7 × 10^7 × 1.016^(113-93) = 1.7 × 10^7 × 1.016^20 = 1.7 × 10^7 × 1.37 = × 2.33 × 10^7

The above uses a fairly even drop in the multiplier until a change of order of magnitude, and a similar pattern thereafter. Starting at the +6 level, though, the increase from one plus to the next starts to get a little small, so perhaps a different pattern should take hold at that point. Remember, consistency of maths might be nice, but we want results that feel right. This is intended as proof of concept and a demonstration of technique, not a definitive decision.
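The bookkeeping behind a progressive scale – each new exponent base applying only to the magical-plus steps not already factored in at the previous plus – is easy to fumble by hand. It can be sketched as a running product; the function name is mine, and the inputs below are the first six multipliers and the magic-scale values (3, 5, 9, 15, 23, 33) used in the example above:

```python
def progressive_multipliers(bases, scale):
    """Cumulative value multipliers for a progressive sliding scale.
    `bases` holds one exponent base per combat plus; `scale` holds the
    magic-scale value at each combat plus. Each base is raised only to
    the number of NEW magical-plus steps introduced by that plus."""
    total = 1.0
    prev = 0
    out = []
    for b, s in zip(bases, scale):
        total *= b ** (s - prev)  # apply b only to the fresh steps
        prev = s
        out.append(total)
    return out

bases = [2.26, 2.06, 1.86, 1.66, 1.46, 1.26]
scale = [3, 5, 9, 15, 23, 33]
for n, m in enumerate(progressive_multipliers(bases, scale)):
    print(f"+{n}: x {m:,.2f}")
```

Extending the `bases` list (and the matching scale values) reproduces any of the tail-end patterns discussed below, including a fixed floor value, without re-deriving each row by hand.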

Perhaps, then, the pattern should be 2.26, 2.06, 1.86, 1.66, 1.46, 1.26, 1.16, 1.11, 1.085, 1.075, 1.065, 1.055, 1.045…

At first, this won’t differ from what we’ve already got, but at higher plus values, it should make a profound difference.

  • +0 = base × 2.26^3 = × 11.54
  • +1 = base × 11.54 × 2.06^(5-3) = 11.54 × 2.06^2 = 11.54 × 4.24 = × 48.98
  • +2 = base × 48.98 × 1.86^(9-5) = 48.98 × 1.86^4 = 48.98 × 11.97 = × 586.29
  • +3 = base × 586.29 × 1.66^(15-9) = 586.29 × 1.66^6 = 586.29 × 20.92 = × 12 265.2
  • +4 = base × 12 265.2 × 1.46^(23-15) = 12 265.2 × 1.46^8 = 12 265.2 × 20.65 = × 253 276
  • +5 = base × 253 276 × 1.26^(33-23) = 253 276 × 1.26^10 = 253 276 × 10.09 = × 2 560 000
  • +6 = base × 2.56 × 10^6 × 1.16^(45-33) = 2.56 × 10^6 × 1.16^12 = 2.56 × 10^6 × 5.94 = × 1.52 × 10^7
  • +7 = base × 1.52 × 10^7 × 1.11^(59-45) = 1.52 × 10^7 × 1.11^14 = 1.52 × 10^7 × 4.31 = × 6.55 × 10^7
  • +8 = base × 6.55 × 10^7 × 1.085^(75-59) = 6.55 × 10^7 × 1.085^16 = 6.55 × 10^7 × 3.69 = × 2.42 × 10^8
  • +9 = base × 2.42 × 10^8 × 1.075^(93-75) = 2.42 × 10^8 × 1.075^18 = 2.42 × 10^8 × 3.68 = × 8.9 × 10^8
  • +10 = base × 8.9 × 10^8 × 1.065^(113-93) = 8.9 × 10^8 × 1.065^20 = 8.9 × 10^8 × 3.52 = × 3.13 × 10^9

Another alternative would be to specify a “floor value” below which the base of the exponent cannot drop, i.e. a minimum result on the series. Below, I demonstrate the effect that it has if the +6 value is the minimum result:

  • +0 = base × 2.26^3 = × 11.54
  • +1 = base × 11.54 × 2.06^(5-3) = 11.54 × 2.06^2 = 11.54 × 4.24 = × 48.98
  • +2 = base × 48.98 × 1.86^(9-5) = 48.98 × 1.86^4 = 48.98 × 11.97 = × 586.29
  • +3 = base × 586.29 × 1.66^(15-9) = 586.29 × 1.66^6 = 586.29 × 20.92 = × 12 265.2
  • +4 = base × 12 265.2 × 1.46^(23-15) = 12 265.2 × 1.46^8 = 12 265.2 × 20.65 = × 253 276
  • +5 = base × 253 276 × 1.26^(33-23) = 253 276 × 1.26^10 = 253 276 × 10.09 = × 2 560 000
  • +6 = base × 2.56 × 10^6 × 1.16^(45-33) = 2.56 × 10^6 × 1.16^12 = 2.56 × 10^6 × 5.94 = × 1.52 × 10^7
  • +7 = base × 1.52 × 10^7 × 1.16^(59-45) = 1.52 × 10^7 × 1.16^14 = 1.52 × 10^7 × 7.99 = × 1.215 × 10^8
  • +8 = base × 1.215 × 10^8 × 1.16^(75-59) = 1.215 × 10^8 × 1.16^16 = 1.215 × 10^8 × 10.75 = × 1.3 × 10^9
  • +9 = base × 1.3 × 10^9 × 1.16^(93-75) = 1.3 × 10^9 × 1.16^18 = 1.3 × 10^9 × 14.46 = × 1.88 × 10^10
  • +10 = base × 1.88 × 10^10 × 1.16^(113-93) = 1.88 × 10^10 × 1.16^20 = 1.88 × 10^10 × 19.46 = × 3.66 × 10^11

As you can see, once the floor takes effect, the multiplier for each plus starts to increase again – but because the base of the exponent is very close to 1, it does so relatively slowly.

There are innumerable other patterns. Rather than the fixed minimum, you might decide that slowly increasing the multiplier was appropriate for values of +7 or more. I’ll forego offering yet another example as I have to move on.

Ultimately, what all of these variations are doing is altering the interpreted significance of each of the increases in magical plus. That’s an important concept (hence my taking so much time and trouble to demonstrate it) – because, if you can control the number of steps in the interval (Section 1) AND the significance of each step, however symbolically (Section 2) then you have almost total control over what a given plus actually means.

A radical but simple example combining everything discussed so far

Each plus up to +6 adds 5 to the magical plus of the item, except the first two, which add 6; above +6, each adds 4. A +0 magical object has a base value of 8 (again, you will understand why in a little while). Magical pluses increase in value progressively, using the following series: 1.24, 1.8, 1.65, 1.5, 1.4, 1.55, 1.7, 1.95, 2, 2.1, 2.2, 2.3, 2.4…

Number of steps per plus:
  • +0 = + 8
  • +1 = + 6 + 8 = + 14
  • +2 = + 6 + 14 = + 20
  • +3 = + 5 + 20 = + 25
  • +4 = + 5 + 25 = + 30
  • +5 = + 5 + 30 = + 35
  • +6 = + 5 + 35 = + 40
  • +7 = + 4 + 40 = + 44
  • +8 = + 4 + 44 = + 48
  • +9 = + 4 + 48 = + 52
  • +10 = + 4 + 52 = +56

This simulates a situation in which it grows progressively harder to increase the plus of an object or weapon. If the sequence were permitted to continue, it would probably be +3, +3, +3, +3, +2, +2, +2, +2, +1, +1, +1, +1 for an absolute maximum of +22 – but I am very deliberately not going there.
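The steps-per-plus table above is just a running total of the chosen increments, which makes it trivial to regenerate (or re-tune) mechanically – a quick sketch:

```python
from itertools import accumulate

# The combined example's steps per plus: a base value of 8, then
# increments of 6, 6, 5, 5, 5, 5, 4, 4, 4, 4 as each successive
# enchantment gets progressively harder.
increments = [6, 6, 5, 5, 5, 5, 4, 4, 4, 4]
steps = list(accumulate(increments, initial=8))
for n, s in enumerate(steps):
    print(f"+{n} = +{s}")
```

Appending the speculative +3/+2/+1 tail to `increments` extends the table toward that theoretical +22 cap without any manual arithmetic.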

Valuation per plus:
  • +0 = base × 1.24^8 = × 5.59
  • +1 = base × 5.59 × 1.8^(14-8) = 5.59 × 1.8^6 = 5.59 × 34.01 = 190.11
  • +2 = base × 190.11 × 1.65^(20-14) = 190.11 × 1.65^6 = 190.11 × 20.18 = × 3836.3
  • +3 = base × 3836.3 × 1.5^(25-20) = 3836.3 × 1.5^5 = 3836.3 × 7.59 = × 29 131.88
  • +4 = base × 29 131.88 × 1.4^(30-25) = 29 131.88 × 1.4^5 = 29 131.88 × 5.38 = × 156 678
  • +5 = base × 156 678 × 1.55^(35-30) = 156 678 × 1.55^5 = 156 678 × 8.95 = × 1.402 × 10^6
  • +6 = base × 1.402 × 10^6 × 1.7^(40-35) = 1.402 × 10^6 × 1.7^5 = 1.402 × 10^6 × 14.2 = × 1.99 × 10^7
  • +7 = base × 1.99 × 10^7 × 1.95^(44-40) = 1.99 × 10^7 × 1.95^4 = 1.99 × 10^7 × 14.46 = × 2.88 × 10^8
  • +8 = base × 2.88 × 10^8 × 2^(48-44) = 2.88 × 10^8 × 2^4 = 2.88 × 10^8 × 16 = × 4.6 × 10^9
  • +9 = base × 4.6 × 10^9 × 2.1^(52-48) = 4.6 × 10^9 × 2.1^4 = 4.6 × 10^9 × 19.45 = 8.95 × 10^10
  • +10 = base × 8.95 × 10^10 × 2.2^(56-52) = 8.95 × 10^10 × 2.2^4 = 8.95 × 10^10 × 23.43 = 2.1 × 10^12

Section 3: Effect Rating

So, if the plus of an enchanted object is no longer connected directly to the plus of that object, what is it connected to? What justifies a value multiplier of (taking the base example from section 2) × 2048 for a +5 item?

The answer is the total magical Effect of the object. A +5 object (defined in Section 1’s designated example as +33 on the magic scale) consists of the designated magical plus (applied to both attack and damage values, in the case of a weapon) plus everything else that the object can do.

That is to say, Effect Rating = Power Rating + Utility + Thresholds + Activations + Links to previous effects, all in combination, for each additional power in the item.

But first, a little housekeeping:

It’s always struck me as a little odd (not to say inequitable) for an armor’s plus-rating to only affect Armor Class while a weapon’s plus-rating adds to both attack (“to-hit” if you’re old-school) AND damage. Especially since enchanted armor tends to cost a great deal more than a weapon.

There are several ways of addressing this inequality.
 

  • You could rule that an armor’s plus-rating also added to saving throws. That was one of my earliest solutions to the dilemma.
     
  • If you thought that was being a little too generous, you could restrict the benefit to one chosen and appropriate save type – Reflex Saves for armors of speed or lightness, FORT saves for armors of special resilience, and so on.
     
  • You could rule that an armor’s plus-rating also added to the wearer’s hit points.
     
  • If you thought that was a little too generous, you could restrict that benefit to those character levels at which a character gains a Feat (3.x & Pathfinder), or the equivalent. Thus Fighters might get the benefit every 2nd level, while Mages might get it every 5th. Or anything you like in between.
     
  • Or, you could attack the inequity from the other side, by decoupling the attack bonus from the weapon damage bonus. A “+3 +1” weapon would have a total plus-rating of 4, consisting of +3 to attack and +1 to damage. Of course, this means that a traditional weapon would suddenly have double the magical plus-rating that you thought it had, but that’s a small price to pay.

 
The solution that you choose to use is up to you. You can even employ multiple variations on the theme at the same time, so long as the equity balance is restored.

I would have no problem with a +7 rated suit of armor that gave +3 to AC, +2 to Hit Points, and +2 to Reflex Saves – in a game where damage bonus and attack bonus were decoupled.

Okay, so where was I? Oh, yes: So for each additional ability conferred by or contained within an object, the Magical plus ‘consumed’ by that ability consists of the total of Power Rating + Utility + Thresholds + Activations + Links to previous effects.

It should now be clear why the various proposals in Section 1 offered a potential magical plus-rating for an object with a plus-rating of zero – it’s so that everyday objects without the equivalent of a plus-rating could still be enchanted to carry a permanent magical effect.

The higher the “base” rating in section 1, the stronger the magical effect that can be added to an object without incurring the equivalent of a plus-rating, and the more that an enchanted object – ANY enchanted object – is worth, as shown in Section 2.

The reason for doing all this is about to become clear, but it’s worth spelling it out explicitly: in a word, Flexibility. Not all +4 maces need to be exactly the same, and a +4 mace can be completely different to a +4 longsword. In fact, almost unlimited flexibility in design is achieved by the act of the Decoupling.

So, let’s put some meat on the bones – five types of Magical plus were listed; let’s define and discuss them.

Section 3a: Power Rating

Most power ratings are simple – it’s either spell level or spell-level equivalent, or it’s plus-rating.

There will be exceptions (there are always exceptions). But this should provide ample standards to permit the evaluation of any ability, especially if Metamagics are taken into account (speaking of which, there are some original metamagics that greatly enhance the flexibility of spells on offer in Broadening Magical Horizons: Some Feats from Fumanor and Shards Of Divinity. Using them and the standard Metamagics, you can customize any given spell to any effective Power Rating that is desirable).

The Power Rating of a plus enhancement is the value of the plus enhancement. Damage and Attack bonuses may count as separate plus values. However, such plus is considered an innate part of the item and as such is ‘always on’ for free.

If there is, nevertheless, some activation (see Section 3c below), subtract the cost of Always On (of the relevant sub-type) from the cost of that activation to get the adjustment to the resulting Effect Rating for the plus enhancement.

    For example, if the ‘always on’ type is the bog-standard version that most of us think of immediately, that is a +5 value that is ‘built in’ to a magical plus. If there is nevertheless an activation of value +3, say, then the Power Rating of the plus WITH the activation is adjusted by +3 - 5 = -2. So a +4 item of this type would have a Power Rating of 4 + 3 - 5 = 2.
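In other words, the activation’s value simply replaces the built-in ‘always on’ value. As a sketch of that arithmetic (the function and parameter names are mine, not terms from the article):

```python
def power_rating(plus, activation_value=None, always_on_value=5):
    """Section 3a adjustment: a plus enhancement is 'always on' for
    free, so imposing an activation anyway offsets the rating by the
    activation's value minus the built-in always-on cost."""
    rating = plus
    if activation_value is not None:
        rating += activation_value - always_on_value
    return rating

print(power_rating(4))                      # 4: no activation, always on
print(power_rating(4, activation_value=3))  # 2, matching the example above
```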

Section 3b: Utility

This encompasses two disparate factors, each of which needs to be considered separately.

Enhancement of something that a character can be expected to be capable of anyway confers a -1 modifier to Effect Rating (but Effect Ratings can never be less than 1). The alternative is to confer on a character an ability that they cannot reasonably be expected to possess (even if some individuals do possess it); this increases the Effect Rating of the ability by 1.

Secondly, there is a Contextual Appraisal. In any given game world, some abilities will be more generally useful than others; those abilities should attract a +1 Effect Rating, since the ability delivers an enhanced benefit there. Other abilities may be less generally useful, and come with a -1 Effect Rating. For example, in a world in which Undead are a major factor, abilities to Turn or enhance the Turning of Undead are obviously going to be more valuable. In a swamp world, or simply a swampy environment, fire magic can be either more useful or less (depending on whether or not conditions hamper the effectiveness of such magic), and so on.

Section 3c: Thresholds

Requiring a minimum score in some numeric capability in order to use an ability or effect is called establishing a Threshold.
 

  • If the Threshold is easy for the likely users of a magic item to achieve, that is worth a +2 Effect Level.
     
  • If the Threshold is reasonably commonly achievable, perhaps at higher levels, that is worth a +1 Effect Level.
     
  • If the Threshold is only achievable for characters of higher levels, that is worth a +0 Effect Level.
     
  • If the Threshold is very difficult for characters to achieve, even at higher levels, that is worth a -1 Effect Level (but there is still a net minimum Effect Level of 1).
     
  • Finally, if the Threshold is likely to only be achievable through the use of additional magic, either spells, potions, or magic items, that is worth either -2, -1, or +0 Effect Level;
     

    • -2 if the magic is likely to be very hard to obtain (even if a specific character already has it, because that’s something that you can’t assume to be universally true);
       
    • -1 if the magic is going to be uncommon, but not unusually rare, to obtain (same caveat as above);
       
    • +0 otherwise.
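
The Threshold cases above amount to a lookup table; here’s one way to encode it (the key names are mine, chosen only for this illustration):

```python
def threshold_modifier(difficulty, magic_rarity=None):
    """Section 3c: contribution of a Threshold to the Effect Level.
    magic_rarity only matters when the Threshold is likely reachable
    only through additional magic (spells, potions, or magic items)."""
    table = {
        "easy": +2,                     # easy for likely users to achieve
        "common_at_higher_levels": +1,  # reasonably commonly achievable
        "higher_levels_only": 0,        # achievable only at higher levels
        "very_difficult": -1,           # very difficult even at high levels
    }
    if difficulty in table:
        return table[difficulty]
    if difficulty == "needs_magic":
        # -2 if that magic is very hard to obtain, -1 if merely uncommon, else +0
        return {"very_hard_to_obtain": -2, "uncommon": -1}.get(magic_rarity, 0)
    raise ValueError(f"unknown difficulty: {difficulty}")
```
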
Section 3d: Activations

If you have to do something to activate or trigger the magic, the difficulty / inconvenience of doing so under normal conditions also impacts on the Effect Level. This consideration excludes any Threshold requirement (you only get one bite at the cherry).
 

  • ‘Always On’ effects are a +5 Effect Value, which increases to +6 Effect Value if the character doesn’t have to be in physical possession of the object in order to gain the benefit of the effect – if it can be on a nearby shelf, for example. If the object can be even more remote from the wielder, that may be worth a +7 or even a +8 Effect Value.
     
  • If the power / effect is activated “At Will”, that is worth a +4 Effect Value, which increases to +5 if the object only needs to be in close proximity, to +6 if the object only needs to be within earshot, or to +7 if the object only needs to be visible to the wielder. Some GMs may permit the latter to be activatable through Scrying, others will not, and some will regard that as worth an extra +1 to the Effect Value.
     
  • If the power / effect is activated by a command phrase or word, that is worth a +3 Effect Value, which increases to +4 if the object doesn’t have to be within earshot (but still has to be commanded by a specific voice), and to +5 if anyone using the right word/phrase can activate the power/effect.

  • If the power / effect requires a specific Skill roll to activate, the plus to the effect value is dependent on how difficult the challenge target is to achieve:
     

    • If the target is very difficult to achieve, the Effect Value of the Activation is +1.
       
    • If the target is moderately difficult, the Effect Value of the Activation is +2.
       
    • If the target is reasonably easy, the Effect Value of the Activation is +3.
       
    • If the target is very easy to achieve, the Effect Value of the Activation is +4.
       
  • In addition, if the skill is relatively rare or unusual, the GM may add -1 to the Effect Value of a skill-based activation, whereas if it is fairly ubiquitous, the GM may add +1 to the Effect Value.
     
  • Finally, if the ability is automatically triggered by some other circumstance, but the owner has to be within a reasonable range, that is worth an effective +2 Effect Level. If the owner does not need to be present, that is worth +3 Effect Level. If the owner can specify what the triggering condition is and provide some appropriate sensory capability, that is worth +4 Effect Level; if the object comes with any required sensory capability already included, that is worth +5 Effect Level.

In general, the more easily the power can be activated, the higher the Effect Level that it reflects. Note that the activation “cost” may require the creator of a magic item to restrict its Power Rating or otherwise compromise it in order to compensate for a high Activation contribution.

Another way to look at it: the more powerful a magical effect is, the more it needs to be restricted in its Activation in order to be accommodated in a magic item of relatively affordable magical plus or equivalent.

Section 3e: Multi-effects

If an item already contains an effect of a similar nature to an ability or effect being added, the second ability or effect is reduced in Effect Level by 1.

Multiple such effects are often ‘bundled together’, in sequence from least expensive to most expensive (in terms of Effect Level, disregarding such discounting).

However, these bonuses grow progressively harder to qualify for.
 

  • One related ability is enough for a 1 discount.
     
  • Three (=1+2) are needed to qualify for a 2 discount on the fourth and subsequent related abilities. Note that the second and third will still qualify for a 1 discount.
     
  • Six (=1+2+3) are needed to qualify for a 3-discount on the seventh and subsequent related abilities. Some of those six will qualify for a 1 discount, some for a 2 discount.
     
  • Ten (=1+2+3+4) are needed to qualify for a discount of 4, and so on.
     

The inclusion of unrelated effects or abilities has the opposite effect.
 

  • One unrelated ability earns a +1 cost to all abilities, including this one.
     
  • Three (=1+2) unrelated abilities earn a +2 cost to all abilities.
     
  • Six (=1+2+3) increase the cost of all abilities by +3 each.
     
  • Ten (=1+2+3+4) increase the cost of each ability by +4, and so on.
     

    An example might be needed to make this clear.

    Let’s say that a magic item has 4 fire-related abilities / powers and one that is not considered by the GM to be directly fire-related.
     

    • The cheapest fire-related ability costs its normal Effect Level, +1 for the unrelated ability.
    • The second-cheapest fire-related ability costs its normal Effect Level, -1 for the related ability that precedes it, +1 for the unrelated ability.
    • The third-cheapest fire-related ability costs its normal Effect Level, -1 for the two preceding related abilities (still short of the three needed for a 2 discount), +1 for the unrelated ability.
    • The most expensive fire-related ability costs its normal Effect Level, -2 for the three preceding related abilities, +1 for the unrelated ability.

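
The discount and surcharge tiers follow the triangular numbers (1, 3, 6, 10, …), so the whole worked example can be checked mechanically. A sketch (my own encoding; it assumes unrelated abilities take only the surcharge, and applies the global minimum Effect Level of 1 as a safety net):

```python
def tri_rank(n):
    """Largest m with m*(m+1)/2 <= n, i.e. which discount/surcharge tier
    a count of n abilities has reached (thresholds at 1, 3, 6, 10, ...)."""
    m = 0
    while (m + 1) * (m + 2) // 2 <= n:
        m += 1
    return m

def item_costs(related_levels, unrelated_levels):
    """Effect Level costs under Section 3e. Related abilities are bundled
    cheapest-first; each is discounted by the tier reached by the count of
    related abilities before it. Every ability pays the surcharge tier set
    by the number of unrelated abilities present."""
    surcharge = tri_rank(len(unrelated_levels))
    costs = []
    for i, level in enumerate(sorted(related_levels)):
        costs.append(max(1, level - tri_rank(i) + surcharge))
    for level in sorted(unrelated_levels):
        costs.append(max(1, level + surcharge))
    return costs
```

With four fire abilities at (hypothetical) base Effect Levels 3, 4, 5, and 8, plus one unrelated ability at 2, this yields costs of 4, 4, 5, 7, and 3 – matching the -0 / -1 / -1 / -2 discounts and the universal +1 described above.
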
Section 3f: Activation / Triggers for multiple effects

If the same trigger activates more than one ability, it costs +1 for each additional ability that it activates, but needs only to be paid for once. This further encourages the consistent theming of magical devices.

If the trigger is to be separate, even if of the same kind (two different command words, for example), both have to be paid separately.

Flexibility eats into power level; consistency does not.

Section 4: Unused Capacity

There are two actions that an owner may wish to perform with a magic item that they possess, and both require at least one magical plus of unused capacity. These are “Refining an object” and “Enchanting an object”.

Magical items without any unused capacity are considered fixed (sometimes labeled ‘locked’); they cannot be Refined or further Enchanted.

Refining an object

Refining an object increases its unused capacity. It does so by leeching the capacity of an equal or lesser object. Like pulling on a thread, this causes the object being leeched to ‘unravel’; it becomes a worthless lump of waste material. But its magical plusses are added to the capacity of the object being refined (less any unused capacity it may already have).

I’ll have a lot more to say about this in part two of this article.

For now, let’s start by defining some additional nomenclature – in particular, some symbology to describe this process:

    +a → +b ≈ +(a+b)

would serve to represent it, where

    +a defines the object being leeched;
    +b defines the object being refined;
    → describes the process;
    ≈ (approximately equals) connects the process to the outcome; and
    +(a+b) describes the approximate outcome of the process.

So let’s look at a couple of examples:

    +5 → +5 ≈ +(10)

    +5 → +10 ≈ +(15)

    +8 → +12 ≈ +(20)

    +13 → +19 ≈ +(32)

…and so on.

But, given the approximation, this is not very helpful. So, having established the concept, let’s refine it:

    +(a+Δb) → +b = +(a+b)

or even,

    +a → +b = +(a+b-Δb)

This is exactly the same as what we had before, except that we’ve added a new symbol, Δb, to describe the unused capacity of object b.

Δb therefore HAS to be at least one, by definition, but it could be more, because Δb itself is what actually increases by (a – Δb).

Again, an example or two should make this clearer.

    +5 → +5 [Δb=1] = +(5+5-1=9), new Δb = 1+5-1 = 5.

A +5 object is leeched to enhance another +5 object which has an unused capacity of 1. The resulting object has a total capacity of 9, of which 5 are unused.

Let’s say we then use a +8 object to further refine this one:

    +8 → +9 [Δb=5] = +(8+9-5=12), new Δb = 5+8-5 = 8.

The +8 object is leeched to enhance the +9 object which now has an unused capacity of 5. The resulting object has a total capacity of 12, of which 8 are unused.
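In code form, the refining arithmetic is compact (the function and its tuple result are my own representation of the notation above):

```python
def refine(a, b, db):
    """Leech a +a object (equal or lesser: a <= b) into a +b object whose
    unused capacity is db (at least 1, or the object is locked).
    Implements  +a -> +b [Δb=db] = +(a + b - db)."""
    assert a <= b, "only an equal or lesser object can be leeched"
    assert db >= 1, "an object with no unused capacity is locked"
    new_total = a + b - db        # total capacity of the refined object
    new_unused = db + (a - db)    # unused capacity grows by (a - db), ending at a
    return new_total, new_unused
```

Both worked examples check out: `refine(5, 5, 1)` gives (9, 5) and `refine(8, 9, 5)` gives (12, 8).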

This is exactly what you want if you want to add another ability with a net Effect Level of 7 to the item. But let’s say for a moment that you added a 4-point ability to the +9 object before leeching the +8; this reduces the unused capacity of the +9 object back to 1, as it now contains two +4 powers. And now:

    +8 → +9 [Δb=1] = +(8+9-1=16), new Δb = 1+8-1 = 8.

Again, absolutely perfect, with an extra +4 ability to boot. So, why would you bother with the initial refinement?

Well, let’s say that the only reason the 7-point power is only 7 points is because it is related to the second ability that you’re adding. That means that if you push this power into the blending of the original object and the +8 object, you will end up with no points left, and a locked item – unless, of course, the existing power is unrelated to the +7 ability, which would push the cost of it from +8 to +9, which would not actually fit in the resulting magic item.

You need the first refining process to create the conditions that make the resulting object possible. But, the first power is still unrelated to the subsequent pair of abilities, which increases the cost of the +8 ability from +7 back to +8 – so the resulting magic item is now locked, and can no longer be improved.

As you can see from this example, this can get really complicated fairly quickly. Which is why there’s a lot more to say about it in the next part, when this will be a major topic of discussion. For now, though, let’s move on to the other use, Enchanting an Object:

Enchanting an object

This is the process of ‘filling’ unused capacity with ‘content’. Unsurprisingly, then, we’ve already been discussing it, because it’s central to the question of why you would refine an object.

There are two sources of enchantment. The first is to migrate the magical ability currently embedded in the object that you are leeching into the refined object, a process called “Fusion” or “Fusing”; the second, Direct Enchantment, is described below.

When you do so, you need to recalculate the price of the new ability being added. It may have an Activation in common with the power object B already possesses; it may be related to the power object B already possesses; or it may be unrelated.

The Skill Involved

Of course, none of this happens automatically; there is a skill roll involved, against a DC of (a+b). A skilled artificer can modify the magic being transferred in the process: easing an existing restriction to increase the capability of the fused object (increasing the Effect Value of the second magical power), or tightening a restriction to reduce the overall fusion cost without ‘locking’ the resulting object (necessary if the abilities are unrelated).

If your game system doesn’t use DCs, the “DC” becomes the target that you need to achieve or the margin of success, as appropriate.

Each such change adds 3 to the DC / target / required margin.

Direct Enchantment

The other method of Enchanting an object is to cast a spell directly into the receptive matrix, substituting the intended Activation for the process normally used to complete and activate the spell (the way you would if you were casting a spell onto the object instead of into it).

This involves a more difficult roll, against a DC (or equivalent as described above) of 2×b + a – Δb. Fail, and the spell is wasted, and the unused capacity of the target reduced by 1.

The more enchantment an object already holds, the harder it is to direct-enchant it further, and the more skill is needed to successfully do so.
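The two difficulty targets just described can be sketched as follows (here `a` stands for the leeched power in Fusion, and for the effective level of the spell in Direct Enchantment – the latter reading is my assumption, since the text reuses the symbol):

```python
def fusion_dc(a, b, modifications=0):
    """Skill-roll DC to fuse a +a power into a +b object; each change the
    artificer makes to the magic in transit adds 3 to the target."""
    return (a + b) + 3 * modifications

def direct_enchant_dc(a, b, db):
    """DC to cast a spell directly into a +b object with unused capacity db.
    The fuller the object already is (small db relative to b), the harder this gets."""
    return 2 * b + a - db
```

In systems without DCs, treat the result as the target number or required margin of success, as described above.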

And, with that preview of what is to come in Part 2 (Forging and Reforging of magic items), it’s time to end this article and prepare it for publication (while my internet connection is behaving itself!).


Mapping Through Logic and Flavor


This is being written more in hope than expectation. Last week (Wednesday night, to be exact), my internet connection became suddenly unreliable. Because it happened late at night, I had the not unreasonable theory that this was because of network upgrades; not only does my ISP not notify customers of such outages in advance, but the usual time for conducting such work is late at night so as to minimize the impact on customers.

When the problem didn’t go away, I made the reasonable assumption that the network was having problems and would be working to resolve them; I had already established that the problems were upstream of my router and modem, and so seemed likely to be affecting more customers than just myself.

Besides, the connection would work some of the time, and I had real-world things that I had to do, that Thursday (and even more on the Friday).

Come Friday, it was even worse, now totally unreliable, connecting for only a second or two and then disconnecting. So I contacted my ISP’s technical support. They advised that from when the problem started until midnight Thursday, the connection had dropped out approximately 60 times; from then until I called at about 6:30 PM, there had been another 90 reconnections (and the day was only 3/4 done).

After some basic troubleshooting that did not solve the problem, it was decided to monitor the connection for 48 hours as various parts of the system were checked, and any faults corrected. It was hoped that stability would be restored as this work proceeded.

About 2AM this morning, the problems seemed to go away as suddenly as they had started, but until I’ve had at least 48 hours of stable connection, I’m operating on the premise that the connection could collapse at any moment (especially if the problem is related to the wet weather that I’ve had recently).

That means that I’m treating the connection as one that could drop out at any moment, and no internet means no ability to post.

If you are reading all this as usual, then you know that the worst did not eventuate. If you are reading this somewhat later than usual, it might mean that this long report is a combination explanation and apology. Fingers crossed!

Because of the uncertainty, I’m deliberately writing a relatively short post this time around, with minimal research and online work involved.

An exploration of context

For the last game session of the Zenith-3 campaign, I needed to map a complex enemy base with minimal time and effort. Some backstory is necessary to establish context.

Here there be Martians

Long ago, Martians had discovered Time Travel and its flaws and limitations. Eventually, with their environment failing, they had entered a period of suspended animation, awaiting the rise of some society with sufficient resources to terraform their planet into something habitable once more, at which point they would emerge from hiding and establish either peaceable relations or kick the terraformers off their planet.

The martians had the technology, but lacked the natural resources to solve the problems they faced on their own. When humans reached the red planet, their probes discovered the ruins left by the martians, finding a vast plaque lauding the achievements of the martian society, both scientific and cultural; the latter was far harder to translate than the former, for obvious reasons. So it was that in 2012, humans – Americans – learned the basics of time travel.

The Zener Gate program

Translating the abstract theory into practical application took years, and lots of it was still not clearly understood when Trump became President and took direct control of the Agency. Even though they did not know how to do it safely, he instructed them to begin human testing; the temptation of being able to rewrite history to his liking was too much for him to ignore.

He also prioritized a human space mission to Mars to investigate the ruins and see what else they could (ahem) learn from them. When that mission reached the red planet in the mid-2020s, late in Trump’s second term, they erected a dome so that archaeological research could be done in a more comfortable shirtsleeve environment. This inadvertently awoke the sleeping martians, who the astronauts thought long dead; first contact was thoroughly botched, and inter-temporal war resulted.

Anti-American Forces

The martians sent agents back in time, suitably disguised, after wiping out the Astronauts (the ‘American Infestation’), to bolster the fortunes of nationalistic forces rivaling the Americans, with whom they were now at war. They might have chosen the Russians, but the Russians didn’t really have the economy to compete with the USA. They might have chosen the Japanese of WWII, or the Nazis, but both groups had shortages of natural resources that would have handicapped their value as proxies, and Hitler reminded them too much of Trump.

That left the Chinese as the most logical human nation for them to ally with, and so they presented the Chinese leadership with the offer of time travel. As soon as they were convinced that this was not a trick, the Chinese leadership accepted the offer, planning to suck all they could from the Martian knowledge bank and then abandon them. Having time travel in their back pockets also emboldened the leadership, who became a lot more belligerent in their dealings with their neighbors.

Facility Tau, P.R.C.

It is worth noting that the Chinese program had more advanced technology than the American one from the outset, because the Martians knew exactly what they were doing; but the Americans had a far better understanding of the theoretical principles that made time travel work, because they had learned the hard way, while the Martians provided as little theoretical explanation as possible.

The time traveling PCs found themselves in a step-wise refinement of history. An accidental nuclear war was prevented; a civil war between Trump and Mike Pence after the 2024 elections was averted; and, eventually, the disastrous first contact between the Martians and Americans was avoided. This led the martians to withdraw their support for the Chinese program, but the complexities of time travel meant that they could not eliminate the program entirely without trapping themselves in paradoxes.

Facility Tau thus became a “rogue” temporal Agency (from the American Zener Gate program’s perspective). But, even though they knew that it existed somewhere in the P.R.C., the PCs didn’t know where.

TimeForce

Most of the campaign revolved around various governments reacting to the conditions that resulted from various temporal interventions by one side or another. Everything from the Cuban Missile Crisis to Al Capone, from the Vietnam war to the German Hacker Collective, plus various futures that developed from these starting points.

Along the way, one of the PCs began assembling an Agency of his own to take over the Zener Gate program because he did not trust the Trumps to manage it responsibly. He never got around to giving it a name, so I’ve been referring to it, in my notes, as TimeForce. By recruiting the physicists and others who the agents Knew would eventually form the backbone of the Agency that employed them, they were able to eventually take control of the Agency.

They forestalled the Russians getting their hands on Time Travel by infiltrating Facility Tau, they prevented a disastrous takeover by Eric Trump of the Zener Gate program, and had various other adventures.

These eventually led to TimeForce getting an operative of their own into Facility Tau, an operative who became aware of an intervention by Facility Tau called Operation Paper Tiger, which required immediate action by the Zener Gate temporal agents (i.e. the PCs).

PC Knowledge of Facility Tau

Facility Tau was disguised as a combined factory and power plant (hydroelectric and nuclear) that had experienced setback after setback, explaining why it was years behind schedule and not actually contributing much to the Chinese power grid. As an “embarrassment” to the Chinese leadership, there was every reason for them to avoid any sort of public attention for the project, completing the veil of secrecy about the project without the need to reveal what secret was really being protected.

The Zener Gate mole was able to leave funds (Chinese currency) and various files and documents for the PCs in a Hong Kong safety deposit box, which is how they even knew that much. A lot of the information they received lacked the essential context to explain the significance that they held, but it was expected that this would fall into place as they investigated further.

So, the PCs needed to infiltrate Facility Tau (somehow) in order to get access to the information on Operation Paper Tiger – and then to decide what they were going to do about it.

The Problem

Which, of course, meant that I needed a map of Facility Tau. This is the sort of project that you can spend weeks or months on, and I didn’t really have that kind of prep time to devote to it. Furthermore, this was almost certain to be the facility’s one and only appearance in the game, so it didn’t warrant that kind of attention to detail.

So it was that, about two hours before game time, I sat down to create the map in question, having thought of a new approach to the problem the previous night.

That ‘new approach’ is the subject of today’s article.

A logical map of functions

I started by mapping out the essential functions that such an organization would have, starting with one of the logical points of access from the outside world – the loading docks.

Each step of the process defined one or more additional “departments” or offices within the organization; I was always looking at the questions of who needed to interact with the ‘compartment’ just created, and who would control / monitor the activities of that compartment.

This mapped the structure of the organization by logical function. In the process, vague ideas of how the organization would function fell into place and crystallized.

  • The Loading Docks led me to the Stores and Inventory department, with a connection in between to Facility Security.
  • The Stores and Inventory department connected to the Admin and Accounting departments.
  • Accounting led to the Payroll Office and to the Finance Department, who made sure that Accounting had the money to pay the bills. And, of course, to Command, who authorized expenditures and made decisions for the facility. Actual cash needed to be protected, so there was another link between Payroll and Security.
  • Admin led to the cleaners, to the Reference Library, to Secretarial Services, and to the Medical / First Aid Department.
  • Past another Security connection, the Library led to “Secure Archives”, which housed all the documents relating to the facility’s true mission. And so on.

The above illustration shows (a little more neatly than my hand-drawn original) the parts of the structure outlined in the text description above. As a bonus, it’s actually pretty close to 100% the size that I drew the original – the boxes and text are a little larger, because I already know where the connections are, and the original was 2B pencil on plain white art paper, and the layout is a little cleaner the second time around, as you would expect.

Here’s a complete list of the different departments (with additional notes as needed):

  1. Loading Docks
  2. Security
  3. Stores & Inventory
  4. Admin & Secretarial
  5. Cleaners
  6. Accounting
  7. Payroll
  8. Finance
  9. Medical Support
  10. Field Team Support – provides whatever the field teams need in reference information etc
  11. Reference Library Services – where Field Team Support get their information
  12. Secure Archives
  13. Communications – single point of contact between the teams in the field and Field Team Support
  14. Timeline Integrity – monitors history for intervention by other time travelers, the internal equivalent to MI5 / Homeland Security
  15. Physics Research
  16. Technical Advisors (i.e. Martians)
  17. Data Storage
  18. Information Technology
  19. Cyber Security
  20. I.T. Infrastructure – buys and maintains computer hardware
  21. Jump Engineering
  22. Power Supply
  23. Media Control & Public Information – this department is all about feeding the cover story, the true function of the facility is ‘dark’
  24. Intelligence – more of a Liaison with the Chinese Intelligence Services than anything else
  25. Electrical Maintenance
  26. Property Maintenance
  27. Personnel
  28. Recruitment – a specialized function within the Personnel Department
  29. Training – for field operations
  30. Education – note that this is separate from the training needed for field operations
  31. Temporal Defense – the temporal equivalent of counter-intelligence, they advise on how the Tau Facility should respond to the findings of Timeline Integrity
  32. Doctrine Committee – sets the philosophic rules under which the facility operates, sets policies in other words
  33. Policy Analysis – translates doctrine into regulations and procedures
  34. High Command – the last word, oversees everything
  35. Intervention Authority – the heads of various departments, has the final authority to order missions
  36. Intervention Planning – proposes specific plans for possible interventions either to achieve changes in history deemed desirable by command or to undo / manipulate changes by others deemed undesirable by Temporal Defense
  37. Field Teams – actually do the work of changing history

A logical map of facilities

Here’s the “radical” part. I realized that if you had such an organization and were intending to construct bespoke facilities for them to use, the physical structure would be most efficient and effective if it matched, as closely as possible, the logical structure.

The place to put the people who keep inventory of parts, stationery, etc, is as close to the storerooms and the loading docks leading to those storerooms as you can manage, and so on.

All you need to do to map a facility is to describe the logical breakdown of functions that are carried out by that facility (ignoring those that take place off-site), and then interpret the results in terms of a physical layout.
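As a sketch of that interpretation step (the adjacency map below is a slice of the Facility Tau example above; the ring-placement idea is just one illustrative way of turning the diagram into floor space, not the method used at the table):

```python
from collections import deque

# Part of the logical structure, as described in the worked example
functions = {
    "Loading Docks": ["Security", "Stores & Inventory"],
    "Stores & Inventory": ["Admin", "Accounting"],
    "Accounting": ["Payroll", "Finance", "Command"],
    "Payroll": ["Security"],
    "Admin": ["Cleaners", "Reference Library", "Secretarial", "Medical"],
    "Reference Library": ["Security", "Secure Archives"],
}

def layout_rings(graph, entrance):
    """Group departments by distance from the entrance: ring 0 is the
    entrance itself, ring 1 adjoins it, and so on. A physical layout can
    then place each ring of rooms around the previous one."""
    dist = {entrance: 0}
    queue = deque([entrance])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, []):
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    rings = {}
    for dept, d in dist.items():
        rings.setdefault(d, []).append(dept)
    return rings
```

Run from the Loading Docks, this puts Security and Stores & Inventory in the first ring, Admin and Accounting behind them, and buries Secure Archives four connections deep – just as the prose layout suggests.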

It took me maybe 15 minutes to lay out the logical functions of Facility Tau, and I was ready for play.

A compromised facility

Of course, the structure you come up with will – or should – reflect the philosophy / ideology of the designers and owners. That’s shown in the Tau Facility layout by the Doctrine Committee who decide what is permitted and what is not, and by the separation of “Cyber Security” and “Security”.

It was only afterwards that I realized that it would be easy to incorporate any other sort of compromise one desired. For example, if your layout was designed for a different organization and adapted to serve the current occupiers, this would be reflected in one or two connections that don’t follow the most logical path: having to go through “Finance” to get to “Accounting”, say, or going through “Accounts Payable” to get to the “Personnel” department.

All you really need to do is to (1) decide how badly distorted the logical assignment of structure is by the circumstances, and (2) decide which connection or connections are sufficiently important to reflect that distortion.

You might decide that Accounting is too important a function to management to be distorted, putting them close to the Manager / CEO, but at the price of removing Quality Control from where it should be in order to report to the CEO promptly and conveniently – or maybe keeping Quality Control close to the CEO but separated from the manufacturing activities that they are supposed to be monitoring.

On rare occasions, you might need two such compromises to fully describe how handicapped an organization is by its physical layout, but most of the time, one will be enough.

Legacy Structures

It didn’t take much additional reflection to observe that by mapping functions according to the way things used to be done, you could describe the way an organization was compromised by its own history.

Take insurance, for example – in ages past, underwriters needed to calculate the risks being assumed by a proposed insurance policy, each of which was a custom contract between the agency and the insured. As policies became standardized, you needed more sales people and fewer ‘back room’ personnel, i.e. underwriters – but your physical layout and infrastructure still had to fit into the old office space. There were two choices: remodel the operation’s infrastructure (expensive and time-consuming) or make the salespeople go to the customers and use the headquarters as just a home base.

Field sales imply a commission payment basis. And lo and behold, if you look into the history of the industry, you will find that there was a period (bracketing the two World Wars by some margin) in which insurance agents did exactly that – and some operations still operate in the same fashion. Others took the plunge and remodeled, and now offer a more ‘retail’ environment in which over-the-counter insurance policies are offered by salespeople.

Wider application

If you translate a narrative into a logical structure – the story of PCs exploring a dungeon, for example – you can then translate that logical structure into a physical map. There’s no need to actually draw that map; just diagram the story, complete with alternative paths for the PCs to choose between.

    A brief example may be in order:

    1. The Goblins mistake the PCs for allies of the Spiders of level 2. They insist that the PCs prove their innocence in a trial of honor similar to a dunking platform used to test for witchcraft.

    2. If the PCs refuse the test or escape from it, they will find themselves confronted by the Deer-minotaurs to the East, with Goblins in hot pursuit.

    3. If they pass the test, the Goblins will command a feast at which they will tell the PCs of the enslaved Dwarves of the deeper passages.

    4. The Spiders who have enslaved the Dwarves are actually Phase Spider variant Illithids. They have lost some of their psychic abilities but gained the defensive abilities implied, and retain the manipulative cunning, cruelty, and intelligence of the Illithids.

    And so on.

    Note that each of these major elements can be broken down in similar fashion to detail the function, society, and culture of those encountered. Specify everything that the “Deer-minotaurs” need to survive and where they get these resources, and you build their entire environment and behavior around them.

The same logical principle applies to everything from accountancy firms to space stations, from thieves’ guilds to temples.

Map the logic, with any flaws and compromises, any legacies and ideological influences, and with just a couple of brief notes, you can translate the resulting diagram into a physical ‘reality,’ ready for consumption – in a fraction of the prep time.

Comments Off on Mapping Through Logic and Flavor

An Encounter: The Glass Spider


I tried – hard – but could not find an image that even came close to what I was seeing in my mind’s eye THAT WAS LEGAL FOR ME TO USE. The best I can do is the combined image above.
On the left is a lizard sculpture of rock crystal held by the Cinquantenaire Museum in Brussels, Belgium, that gives an idea of what the Glass Spider would look like. Photograph by Daderot, Public domain, via Wikimedia Commons. I have changed the color profile of the image significantly, rotated it slightly, and extended the background to remove the resulting holes in the corners.
On the right is a spider sculpture that gives the basic shape that I was thinking of. Image by amteach from Pixabay, no information provided concerning the sculptor. I’ve rotated the image 90 degrees so that it is roughly the same size as the first.

rpg blog carnival logo

The Glass Spider – metagame

It’s not often that you think of an encounter that would be equally at home in a D&D / Fantasy setting, a Swashbuckling Pirate game, a Sci-Fi environment, a Superhero game-space, or even – if you allow a little genetic engineering to escape the lab – a Cyberpunk game.

So when one came to mind last week (while writing Sensory Surprises in Encounters for CM), I knew that I had to toss a third log onto the Blog Carnival fire, hosted at Of Dice And Dragons.

This will be CM’s final contribution to this month’s carnival (unless inspiration strikes again, of course!)

Nest

While Glass Spiders typically nest in existing cave systems, they can dig their own nests, which appear as large mounds, sixty feet or so wide at the base and twenty to thirty feet in height. (Less developed nests are smaller, of course).

    Entrance

    There is an entrance, made of webbing that has been coated in mud or earth to appear almost indistinguishable from the exterior of the nest. If it were smaller, and flat to the ground, this would not be dissimilar to that of a trap-door spider. But it is not small (typically about 5′ high and almost as wide); it is oval-shaped, and flush to the side of the mound.

    Some can detect the entrance because a small amount of air emerges from the nest through the door (especially at its edges), and it will be a few degrees warmer than the surrounding soil / vegetation.

    The door is almost fireproof, by virtue of the mud/earth incorporated into it, but a sufficiently sharp weapon can cut the webbing that ties it to the nest around the edges until it can be forced open. It weighs about the same as an ordinary wooden door, and the webbing that holds it closed is about as strong as a deadbolt, so it is also possible to batter it down.

    Interior

    Upon entering the nest, intruders perceive a violet glow, the stench of rotting meat, and the sound of wet leather slowly sliding over wet leather. In the heart of the nest, at the far end relative to the entrance, is a raised earthen dais bound together by golden threads.

    These threads are actually the Queen’s webbing, but this is not apparent at a distance; they look metallic.

Matriarch

Upon the dais is a glass spider, some 3-4 feet wide, with a forward body the size of a human torso and a huge abdomen at the rear which glows with a violet light, and appears to be filled with a smoky violet fluid.

It looks like a huge perfume bottle of cut crystal in the shape of a spider, probably worth a fortune because of the exquisite workmanship – so well carved that it almost looks like it could move.

This is the Matriarch of the Nest, the Queen of the Glass Spiders, and she is – as you might expect – very much alive. But she does not move, so this is not apparent.

    The Vapors

    From her swollen abdomen, the Queen reacts to intruders by releasing glowing violet vapors that begin to snake and drift through the air. These have a strange coherence: they hold together rather than dispersing into clouds.

    The reasons for this coherence are not immediately apparent.

Male Swarm

Pheromones given off by these violet vapors do form an invisible cloud, however, and they drive the male worker spiders that reside upon the ceiling of the chamber into a frenzy. These males, about a foot across, descend from above and attack the intruders in a swarm, even at the cost of their own lives.

GMs should take this altered mental state and determination into account when determining what the spiders need to roll in order to succeed in attacks. It’s my suggestion that each attack which has already taken place in a given combat round gives the next attacker a +1 attack bonus, but assess this according to the mechanics of the game system yourself.
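As a system-agnostic sketch of that suggestion (the cap of +5 is my own addition, to keep the bonus from running away in very large swarms):

```python
def swarm_attack_bonus(attacks_already_resolved: int, cap: int = 5) -> int:
    """Frenzied-swarm escalation: each attack already resolved this
    combat round grants the next attacker a +1 bonus to hit.
    The cap is an assumption, not part of the original suggestion."""
    return min(attacks_already_resolved, cap)
```

So the first male to attack in a round gets no bonus, the second gets +1, the third +2, and so on; translate the +1 steps into whatever modifier granularity your system uses.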

They have two primary natural weapons – a poisonous sting and a poisonous bite. Of the two, the sting appears to be the more dangerous, but the bite is the real threat.

    Numbing Bite

    This is because the bite is numbing, making the wounded unaware of just how badly they have been hurt. To reflect this, the GM should track the damage without revealing the full amount to players; instead, after the first couple of bites, the GM should announce only 1/4 to 1/2 of the damage actually inflicted.

    The bite also injects pheromones and hormones into the bloodstream of the victim, the significance of which will only become clear some hours later.
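A minimal sketch of the concealed-damage bookkeeping described above, assuming the GM tracks the true total separately and only the announced figure reaches the players:

```python
import random

def report_numbing_bite(actual_damage: int, bites_so_far: int) -> int:
    """Return the damage figure to announce to the player.
    The first couple of bites are reported in full; after that,
    announce only 1/4 to 1/2 of the real amount, chosen randomly.
    (The random choice within that band is my own assumption.)"""
    if bites_so_far < 2:
        return actual_damage
    fraction = random.uniform(0.25, 0.5)
    return max(1, round(actual_damage * fraction))
```

The GM still applies the full damage behind the screen; only the announced number is understated, which is what makes the wounds feel less serious than they are.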

    Poisonous Sting

    The stings of glass spiders are soporific. Survivors describe the sensation of floating above the conflict as though reclining on a cloud, unconcerned for the harm being done to their physical bodies.

    The first sting received in a round should subtract 1 from the attack skill of those harmed; this penalty accumulates with each subsequent round in which a sting lands.

    The effect will fade over the course of the subsequent hour or two, should the subject survive.
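Putting the accumulation and the fade together as one sketch (the linear fade over 90 minutes is my own assumption, splitting the difference on “an hour or two”):

```python
def sting_penalty(sting_rounds: int, minutes_since_combat: int,
                  fade_minutes: int = 90) -> int:
    """Soporific sting: -1 to attack skill for the first sting received
    in each round, accumulating across rounds, then fading linearly
    over the following hour or two (90 minutes assumed here)."""
    if minutes_since_combat >= fade_minutes:
        return 0
    remaining = 1 - minutes_since_combat / fade_minutes
    return -round(sting_rounds * remaining)
```

During the fight itself (zero minutes elapsed) the penalty is simply minus one per sting-round; afterwards it shrinks proportionally until it is gone.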

    Corrosive Wounds

    Bite wounds are frequently mis-characterized as corrosive, because the flesh around them seems to dissolve. This puzzles those who are able to analyze the anatomy of a deceased (male) Glass Spider, because no corrosive substance can be found, and no organ for the production of such a substance has ever been identified.

    This is because those looking are doing so in the wrong places.

Attack Of The Glass Spider

Those glowing violet tendrils of vapor released by the Matriarchal Queen of the Glass Spiders are actually very short tufts of web, and riding upon them are hundreds of minuscule Glass Spider young, less than a millimeter in size, about 1/32nd of an inch. These descend upon the wounds and consume the flesh in order to receive the hormones and pheromones in the blood of the victim, deposited in the bites of the males.

These spider-young require those hormones in order to mature. But, though there may be hundreds of them who attack each wound, there is only enough hormone to trigger the maturation process in a few; the others simply die off and drop away.

As a consequence, the wounds from Glass Spider bites do not bleed very profusely, adding to the impression that they are less serious than they actually are.

Victory Over The Glass Spider

When the conflict has lasted long enough for one of the victims to fall, or for each target attacked to be both bitten and stung in multiple places, multiple times, the male spiders will appear to come to their senses, the pheromones that drove them wild wearing off.

They will immediately attempt to withdraw out of reach, permitting the victims of the attack to retreat. Should the victims fail to take advantage of this opportunity, the Queen will issue a red mist that contains a different hormone; this renews the frenzy of the males indefinitely, and they will attack until the intruders are dead.

The Queen herself will also rise from her dais and attack. Dead incubators will serve the Nest almost as well as living ones, so the nest offers its victims one chance – and one chance only – to escape, and (just possibly) to survive.

After The Attack

The first thing that will occur after an attack, assuming that the targets took the option to survive and escape, is that the tranquilizing effect of the spider stings will begin to wear off, followed shortly thereafter by the numbing effect of the spider bites.

The victims will come to feel the full impact of the damage they have suffered. But this will not begin until more than an hour later, and will build over several hours more. This gives the victims ample time to move to a location some distance from the nest.

Healing potions, magics, and technologies will not prove very effective at repairing the damage at this point in time. It is almost as though the bodies of the victims are resisting attempts to heal them.

    Maturation

    Some hours after the pain begins to make further movement difficult or impossible, survivors may notice small lumps moving about under their skin as the maturing spiders look for a wound or opening through which to escape the body of the host. At this point, they are only a millimeter or two in size, perhaps a sixteenth of an inch.

    If wounds have been bound or healed, despite the difficulty described, the maturing spiders will need to consume the flesh of the host until they discover or create a way out. This can easily prove fatal, as the spiders have no way of knowing if they are consuming muscle, skin, or heart.

    This causes them to grow, potentially reaching the size of a hen’s egg. At such sizes, their motion through the body is intensely painful to the victim, and permanent aftereffects can be expected even if the host manages to survive.

    Breakout

    One way or another, the maturing spiders will find or create an escape route from the flesh of the host, a process known as “Breakout”. In general, breakout will occur 18-24 hours after the attack.

    It is possible for surgical intervention to create escape paths for the maturing spiders. This consists of tracking the path of the moving ‘lumps’ and creating a sufficiently deep incision to present an exit point for the spiders.

    It is normal for such surgeons to attempt to capture or kill the maturing spiders, but at this age they are very fast-moving and quite capable of burrowing through earth or wood.

    Once all the spiders have been removed / escaped, the character can be healed as normal.

    Gender Ratio

    The first spiders to enter a wound, and which therefore receive more of the maturation hormones, become female and begin to develop into new Queens. The remainder are male and in thrall to the nascent Queen.

    At this time, they are interested only in escape and pose no threat to the host or to others; they have not yet developed the glands that produce the various poisons and compounds that give bites and stings their effect.

    It should be observed that victims of a Glass Spider attack have sufficient time and incentive to move some distance from the nest in which they were attacked, but become immobilized before they can travel so far that the environment becomes inhospitable to the spiders’ kind.

    New Nests

    Under the direction of the new Queen, the workers will dig for access to an existing cavern (one unoccupied by Glass Spiders), and should that fail, the maturation process will eventually drive them to excavate their own, thus creating a new nest per host. This is how the species propagates and spreads.

    In the days before sentient beings – driven by fear, malice, or greed – dared enter their nests, animals served the purposes of the Glass Spiders, hunted and trapped by expeditions of Males and brought back to the nest to become hosts.

    Even today, when the Queen is not yet fully mature, or not driven by her instincts to found a new nest, Spider hunting parties will seek animals to serve as food for the nest.

    It’s ironic that such hunts will often trigger an incursion by sentient beings living in the vicinity. The Spiders are a natural phenomenon, and pose little danger to those who take adequate precautions; but to the ignorant and overconfident, they can be deadly.

Usage

Virtually everything that you have just read came to me in a single flash of inspiration. I can see parts of it that clearly draw upon particular sources for inspiration – Alien, for example – but there are others that are more obscure, and the totality is quite distinct.

In a fantasy campaign, Glass Spider nests can appear on a border as the nests spread, and (of course) a room or corridor in a dungeon would make a perfectly acceptable site for a nest.

In a Sci-fi campaign, it is more likely that they will be found on an alien planet to which they are indigenous. They may well dominate an entire planet, or just a geographic / climatic zone upon such a planet. Personally, I feel they are more interesting when the nests are seeking to spread, so I probably wouldn’t have them dominate the entire planet.

They would tend to dislike cities, but might well find a home in parks and other green areas, and could easily spread up and down a river.

In Cyberpunk and Superhero settings, it might be necessary to establish in the background that scientists are exploring the genetic engineering of life-forms to create self-sustaining ‘bio-factories’ for the production of various medical substances (including, perhaps, vaccines). It thus makes perfect sense for such creations to escape and become part of the landscape – and I am suddenly reminded of Jurassic Park (the novel more than the movie) and the built-in genetic vulnerability that was supposed to keep them from spreading, and of the various comments (in the movie) by Ian Malcolm (Jeff Goldblum) about Chaos…

More Ideas

A couple of further thoughts to throw out there for consideration. The above implies that there are no serious consequences for humans as a result of exposure to the various hormones and pheromones of the Glass Spiders; that does not have to be the case.

Furthermore, perhaps the Glass Spiders inherit some aspects of the genetic code of the host – potential Xenomorphs (just like Alien). This might include intelligence, if this is the first time they have used a sentient species as hosts. That potentially makes the ‘Next Generation’ of Glass Spiders far more dangerous.

Third, that leads me to a thought from Aliens regarding the use of Glass Spiders in Sci-fi – like the Xenomorphs in that film, it might be that an “enterprising” corporation saw the potential to exploit the Spiders – in this case, for pharmaceutical research / production – and deliberately sends the PCs to investigate them.

Finally – the Glass Spiders are deadly encounters in inverse proportion to how much is already known, in-game, about them. While becoming a host would be traumatic, the potential for surgical release means that it need not be fatal – if you know what you’re doing, and why. But if you don’t know what’s happening, it’s easy to make all the wrong moves. And those include the most typical PC behaviors…


Sensory Surprises in Encounters


This subject matter is a great excuse for some cute animal pictures! This image of a Lemur is by (Joenomias) Menno de Jong from Pixabay

You may not know it, but it’s possible to be too creative. Last week, as usual, I spent some time thinking about what I would be writing about in this post, and almost immediately, three different ideas came to mind in what felt like a single flash of inspiration.

Well, by the time I had the first one (today’s subject) down on paper, the second and third were starting to get a little vague. I managed to recapture my thoughts on the second (which will appear next week, if all proceeds as planned) but by then the third was completely gone.

When that happens, you have two choices: you can focus obsessively on the ‘lost idea’ in a bid to recapture it, potentially to the detriment of everything else that you do, or you can (metaphorically) shrug your shoulders and give it up for lost; if you’re lucky, the right stimulus will eventually bring it back, but in the meantime, don’t sweat it; another idea will come along, they always do!

In this case, I embarked (briefly) down the first road (just in case the strayed thought was only in the next paddock), but when that failed to bring a quick resolution, I firmly turned down path #2. That’s how I usually handle this when it occurs; after all, two good ideas (or one really big good idea) are better than none!

The narrative in combat

Translating die rolls and attendant game mechanics into narrative can be a wonderful thing. It pushes immersion within the game, helps players visualize the action (and see the situation from the same perspective as the GM, getting everyone onto the same page), and can add immensely to the verisimilitude of the game.

All this goodness comes with an attached price-tag – it slows what is already an extremely time-consuming element of play, potentially to a crawl. The GM needs to maintain awareness of this downside and moderate his use of narrative interpretation of combat accordingly.

That doesn’t necessarily mean eschewing it altogether; there are all sorts of compromise points along the way, and the GM is not constrained to apply narrative consistently across the whole encounter or even the whole day’s play; it’s possible to be selective, giving less narrative translation most of the time but more when it enhances the game-play or is needed to provide clarity.

As a general rule of thumb: the more complex the encounter, the higher the price for any level of narrative interpretation but the more of it will be useful at different points; it therefore becomes more important to be selective and sparing in such encounters.

But there’s an unexpected side-benefit that a lot of people never think of.

The Sensory Surprise

Picture this: the GM is clearly being careful to employ the bare minimum of verbiage in a small and simple encounter, clearly trying to keep the pace up and the game exciting, when suddenly he describes an action and a surprising sensory impression – a sound, a smell, a thermal impression, a moment of vertigo, whatever.

It’s clearly important, or the GM would not have taken the time. What could it mean? What does it mean? Is it a critical clue, the key to victory? Or is it simply an attention-getting unusual fact? Is it something that’s meant to distract you? What aren’t you paying attention to?

A relatively small and simple combat has suddenly been elevated in significance by several orders of magnitude. Unless the significance becomes almost immediately apparent, the players will probably still be discussing the importance of this small hint long after this relatively trivial combat concludes.

Frequency of Pay-off

Of course, if every time this happens it proves to be significant or critical, the players will quickly learn that this is the GM’s shorthand for “pay attention to this”.

Real life (and simulated life within a game) should have some uncertainty to it – so sometimes, a strange noise is just a strange noise, or may even be misleading.

How frequently such hints should pan out is something that each GM will have to decide for themselves, and may well change from one encounter to the next. On the one hand, downplaying the relevance seems to play toward greater realism, but it also devalues what should be something noteworthy and significant.

Personally, I think the right balance is somewhere around two-in-three or three-in-four hints proving significant, but there’s room for almost anything.
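For GMs who prefer to leave it to the dice, that ratio can be rolled rather than decided; a minimal sketch, with the 0.7 default being simply the midpoint of the range suggested above:

```python
import random

def hint_is_significant(payoff_ratio: float = 0.7) -> bool:
    """Decide at random whether a sensory surprise actually means
    something. Around 2-in-3 to 3-in-4 (~0.7) per the discussion above;
    the rest of the time, a strange noise is just a strange noise."""
    return random.random() < payoff_ratio
```

A d6 works just as well at the table: 1-4 means the hint is real, 5-6 means it is a red herring.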

Selling it – Credibility

More importantly, the GM should not depart too far from their usual style. They need to be both comfortable and natural in their delivery of these little bombshells and sound credible to the players – if it sounds too outré or tacked-on, it will appear phony.

The best way to buy credibility is for the GM to have conviction about the experience, because he knows what the significance is and why it is occurring. But that means careful pre-planning of the whole event – or making sure that your ad-hoc creativity encompasses not just the effect but the reason behind it.

Conceptual Origins

The history of this technique may be of value to readers in its own right.

My players once entered a dungeon created by a powerful illusionist. He left illusions embedded in the walls all over the place, illusions that were sonically triggered and designed to confuse and mislead the party.

Whenever two weapons clashed, for example – a sound that should be familiar to just about anyone – it triggered the sound of the baying of Hellhounds growing closer from some distant point as though they were being attracted to the sounds of the fight.

Whenever a key or a lock-pick was put into a lock, it would trigger the sound of a scream of pain from somewhere in the distance.

The sounds of panting after heavy exertion – after a combat, for example – would produce the sound of timbers groaning and about to splinter from overhead, as though the ceiling were rigged to collapse.

There were three or four more, but the key was that each of these was predictable. One was an illusion placed on every pit trap (this was very old-school) that made the trap appear to be 20 feet or so from where it really was.

Several of the ceilings were masked by cobwebs – most of them illusions, but in some areas, real. The first few times, the PCs wasted flaming arrows attempting to ignite the illusions, and immediately discovered their ‘false’ nature – so they ignored the real ones, giving the spidery residents of those real patches of webbing the advantage of surprise as they dropped from the ceiling.

Of course, the various (intelligent) dungeon residents had learned these illusions and worked out ways to use them to their own advantage. Those hell-hounds, for example, implied that a new threat was emerging from somewhere behind the party, forcing them to divide their attention – and divide-and-conquer was as true a tactic then as ever. The PCs couldn’t afford to ignore one of them because that might be the one time that the sounds were real!

Another encounter from the same dungeon was a fairly fragile-looking glass cabinet containing vials of potions on racks, held fast in place by a wooden collar locked over the tops of their necks. Since these were valuable commodities, it caused considerable distress when the cabinet became animated – a glass Golem. While it was inherently fragile, the PCs were afraid to exert their full strength against it lest they destroy the valuables they had come there to loot (including, from memory, a rare healing salve that they needed to overcome a balefully-cursed wound that an NPC had received from an enchanted weapon – think Morgul-blade and Frodo).

These various sensory deceptions added a layer of richness and complexity to what was otherwise a relatively straightforward dungeon with fairly basic encounters, ideal for a low-level party.

And finally – I’m not sure what species of primate this is, but it’s undeniably both cute and surprised! – Image by LukasBasel from Pixabay

Broader application

The dungeon in question – and I forget its name – made full use of such deceptions and mind-games to distract, delay, divide, and weaken the party. There was even a visual illusion replaying a captured image of the rogue picking a lock to create the impression that he was trying to obtain some extra goodies from a treasure room before the rest of the PCs could divide it – a complex spell that was triggered by another trap that made the rogue temporarily invisible! (In fact, one of the early encounters was designed to do nothing but capture this ‘footage’ for later use).

After the fact, however, I began to recognize the power of the technique when applied more sparingly. Most of the applications in the dungeon were of the ‘won’t pay off’ variety; most of the time, they were deceptions, with just enough truth mixed in to create uncertainty.

And that raised the question of using such ‘truthful’ examples outside of this dungeon.

    The giant snake wraps itself around your waist and attempts to squeeze the life out of you. Its flesh is burning hot to the touch, almost enough to raise blisters.

Suddenly, there’s more to this snake encounter than meets the eye.

    The Dire wolf pack-leader leaps in an attempt to take out your throat with a single bite. (GM rolls) It misses, fortunately for you, as you dodge to one side. As the rest of the pack surge forward to begin tearing at your flesh with their fangs, you have the distinct impression of the odor of freshly-baked bread…

Just enough narrative to put the unexpected sensation into context. But what does it mean? Is it real, or a trick of the mind?

Sometimes, the answer doesn’t matter.

    As the steel-clad warrior draws his sword, it makes the sound of fingernails scraping across a blackboard. Save to avoid cringing or shuddering. A critical fail means you drop your weapon and cover your ears. Every time he swings that mighty blade, it again ‘scrapes’, requiring a fresh save. This hampers your defenses, giving him +4 to attacks against you…

Or,

    The mace strikes a glancing blow, resulting in only a couple of points of damage. But your mouth is suddenly full of the taste of blood, as though you had suffered more internal damage than you were aware of.

The key is to have an iron-clad explanation firmly in mind (even if that explanation is ‘an illusion’ or a ‘special effect of the weapon’). This gives you the conviction to really ‘sell’ the idea to the players, which is what triggers them to believe it. And if the players believe it, so will their PCs.

It’s even possible to use such things as unexpected binding agents, connecting a string of seemingly-unrelated encounters. When several hostile encounters all smell of the same strange combination of rosemary and lavender, it’s a sure bet that there’s some connection between them.

Moderation is critical

Sensory surprise is a powerful trick to have in your toolkit. But its use is weakened by excessive verbiage when narrative interpretation of combat is dominant; and only reducing that verbiage when you intend to employ sensory surprise telegraphs your intent.

It is therefore critical to moderate your narrative interpretation of combat just enough that the surprise doesn’t get lost in the mix. Save the full-on narrative interpretation for when it’s especially useful.

There is still a minimum level of such narrative that is essential; you have to state what the NPCs are doing so that the PCs can respond and the players can still visualize the action.

It may take a little trial and error to find the ‘sweet spot’ that best suits your GMing style. The benefits of doing so make it more than worth the effort.

rpg blog carnival logo

A second article on the subject of encounters equals, at the moment, a second submission to the blog carnival for the month, currently hosted at Of Dice And Dragons (You can read my first contribution here: Vectors Of Engagement).

There will be at least one more, because in the course of this article, that lost ‘third idea’ has come back to me, and will actually leapfrog the second one – and I’ve thought of a fourth idea in the process.

I guess it’s a good thing that even after so many years of writing for Campaign Mastery, such bursts of inspiration are still possible!


Vectors Of Engagement



rpg blog carnival logo

I realized, the other day, that it has been a while since I posted a fantasy-dominated article, so I set about thinking of one. In no time at all, in a singular flash, today’s article came to me, inspired by the singular concepts of D&D / Pathfinder character classes. But it didn’t take me long to realize that the utility of the concepts and techniques for handling them extended way beyond the fantasy genre, and that this was another Universal post – albeit one with a distinctly ‘fantasy’ theme, at least early on.

It is also Campaign Mastery’s entry into this month’s Blog Carnival, hosted by Of Dice and Dragons. The subject this month is encounters, and while they are only a component of the subject of this article, they are an important one, and the approach that is described herein also feeds back into the question of encounters, so it’s quite a relevant connection. You’ll see what I mean as we go along.

I also have to add that (unlike most of my articles), I had a lot of trouble mapping out a coherent through-line to guide this article. I had the pieces that were to be included, but deciding what sequence they should be placed in to produce a comprehensive and comprehensible discussion of the subject was trickier than usual. So if the internal structure of the article confuses you, that’s the reason – stick with it and it will all fall into place by the end!

Plot Engagement

There are three important levels of engagement with the plot in any RPG. There’s getting the players to engage – that means getting them interested, intriguing them, challenging them, and rewarding them. There’s the entirely separate issue of ensuring that their PCs are fully engaged in the plot, and not just going through the motions. And there’s the GM’s engagement with the plot, which is both utterly necessary and potentially disastrous at the same time.

    Player Engagement

    Think for a moment of what it means when a PC is engaged in a plotline but the player that controls the character is not. The plot clearly has some connection to the PC in question, some reason for him or her to connect with it and think it important – but the player is not interested, perhaps because pushing the PC’s buttons takes away some player agency (perhaps quite a lot of it), perhaps because, despite the subject being of interest to the character, the character’s owner is both less interested and less knowledgeable than the character they operate.

    The plot might matter to the character, but the player is bored and ‘phoning in’ their performance.

    It’s easy for the shoe to be on the other foot, too, which can be slightly better – the character might have no reason to care about the plotline, but the player finds the events of the plot compelling and fascinating; the character’s owner is engaged, even though the character under his command is not.

    While the player can’t be accused of not paying attention to the plot, they are spending more time out of character than roleplaying in character.

    Sometimes, when you let the player handle multiple characters – both wizard and familiar, for example – it can camouflage the effects of either of these scenarios, making them almost impossible to detect. For this reason, even if the Familiar is supposed to have an almost telepathic bond with a PC, I will insist on a third party (usually me, as DM) playing the part of the familiar. (I will generally stop short of giving the Familiar his or her own dedicated player, but there have been exceptions in the past and may well be more in the future – it’s a great way to engage younger players, for example, or relative novices).

    Another way to look upon the question of player engagement is this: both player and character have certain prejudices, both for and against specific types of plotline. You can’t always accommodate both of these, but you should at least pay passing recognition to the prejudices of the character when a plotline breaches them. However, at least half the time, you should cater to both sets of prejudices, even though it constrains the stories that you are able to tell, and even if one player’s prejudices conflict with those of another.

    Two of my players love cosmologically-significant “epic” adventures, while the player of another PC tolerates them (and his PC is strongly engaged with them, when they happen). This is a problem because the fourth player dislikes them intensely (but loves Space Opera – go figure). So this sort of adventure does take place, with the PC of the fourth player reluctantly participating, but usually only as an element of the adventure, or as a side-plot that the PC can largely ignore.

    GM Engagement

    GM Engagement with the plot can be a problem because it’s easy to fall in love with your own cleverness and start orchestrating plotlines and outcomes – not necessarily to the benefit or detriment of the PCs, but at the expense of player agency. There is already some trend in this direction if the GM wrote the adventure, or customized it to service this particular group of PCs; adding fuel to the fire doesn’t help.

    Once the PCs start reacting to whatever situation your plotline is presenting to them, they get to steer the ship. In the guise of NPCs, you get to control the trade winds and place the reefs and desert islands in the path of that ship, but nothing more. (Admittedly, though, if one of the PCs’ allies / party members is an NPC, it gives the GM a broader palette of choices.)

    That doesn’t mean that the GM shouldn’t care about the plots that he is putting in the PCs way – he should. He should care that they maintain an internal logic, that they provide continuity and consistency of characterization, that they are entertaining to the players and GM alike, and that they engage both the players and the PCs. Putting all of those aspects of the plotline under the spotlight is usually more than enough to keep the GM more than busy enough; don’t make the burden of GMing worse by taking command of the ‘scriptwriting’ as well.

    That doesn’t mean that you can’t advise, remind, educate, and cajole the players, especially when the PCs do things to gather information, or possess skills that their players do not; you absolutely should react and respond to such player-sourced acts of discovery appropriately. But they get to decide what to do with the information that you present.

    Whether or not you should advise on how to advance the plot when the players find themselves stuck is always a vexed issue that’s beyond the scope of this article. Again, having an NPC team member gives you a back door when these things happen – just make sure that sometimes the NPCs get things wrong, so that they aren’t always simply acting as a mouthpiece for the GM.

    Taking the opposite perspective, of being completely ad-hoc, is rarely a satisfactory solution either; it limits your ability to make the game entertaining to whatever your wits can conjure up on the spur of the moment, and sooner or later, that will bite you. A middle ground, in which you anticipate most of the major PC choices and have some idea of what will happen as a consequence, and how you will steer the plot back toward satisfying content, is usually the best compromise – and my favorite tactic is to know what the villains of the plot are trying to do, and what they will have anticipated, and have prepared for, and how they will react to PC attempts to thwart their ambitions. This gives direction to the random imaginings of the GM when the pre-planning goes off the rails, as it so often does.

    Vectors Of PC Plot Engagement

    It’s the third level of engagement – that of the character – that this article is concerned with. Often overlooked, or rendered secondary to the player’s desires and dislikes, it is the area that GMs most struggle to satisfy, beyond the sage advice offered above. That is the problem this article aims to solve.

Class Seeds

Each attribute or aspect of a given PC is a potential vector for PC plot engagement (using the term “Vector” in its meaning of a ‘delivery system’). To demonstrate this at its most superficial level, consider the following: each D&D / Pathfinder character class has its own niche perspective, its own area of interest. I’ve cherry-picked seven of the easy ones for illustrative purposes:

    Thief

    Thieves generally like to sneak, scout, and gather intelligence. Every adventure should give this character class the chance to scratch that particular itch.

    Fighter

    Fighters like to confront things and pound on them until they cry uncle, so that should also be a ubiquitous element in an adventure.

    Wizard

    Wizards engage in mystery and magic, arts and artifice. In a very real way, they represent a sense of wonder within an adventure. Every adventure should give them the chance to play detective / schemer / analyst / showboat, while creating a sense of awe, of forces beyond the ken of mortal men being at play. The thief feeds the wizard, and the fighter protects him from harm.

    Paladin

    Although the concept has changed a little in more recent incarnations of the games, Paladins used to be all about Honor and Morality, and those are still strong threads within their makeup in most campaigns. One can go further and describe Paladins as the connection to upper-level social classes within a society. If there is one of these in the party, one of these aspects of the class should be involved in every adventure; and if not, the PCs should feel the absence by being a little in over their heads when these aspects of society manifest within an adventure.

    Cleric

    The cleric deals with religion, and with healing, and with anything needed to sustain mind, body, and spirit. They also provide a social connection to the lower classes of society (if there is a mercantile middle class, that’s a natural province for a Wizard, but this is often at odds with that archetype’s primary role within the campaign, and so it gets deferred to the Fighter or Thief as often as not). Clerics get to address some of the most fundamental questions of any society – what is life, what is death, what is undeath, what is right, what is wrong, who are the Gods and how do they interact with mortals, and so on. There are those whom this makes wise, but there can also be those whom it makes overzealous, paranoid, and dogmatic; every cleric has the potential to be either or both.

    Druid

    Druids tend to focus on the natural world, should one be present in the campaign. Plants, animals, weather, wave, and water are their province. In a Druid’s absence, weather, wave, and water may defer onto the wizard or fighter, and plants and animals onto the cleric, but these assignments are not set in stone. A Ranger stalks much the same ground, but from the perspective of one who is part of a society, rather than one who stands apart from it and defends the natural world against that society, should that become necessary.

    Barbarian

    The barbarian is all about simplicity, about stripping away the airs, graces, and complications that make problems difficult to solve, and (therefore) about making the muddiest of grays into harsh black and white. “I prefer ‘us’ and ‘them’; that lets us ignore the baggage and get right down to cases” is very much a Barbarian perspective. A barbarian in the party practically demands that the other party members become at least a little more culturally sophisticated, just to let the Barb indulge his shtick. But Barbarians are often also the conduit for questions of Nobility vs Honor – with Paladins taking the other side of whichever argument the Barbarian depicts, or vice-versa. As such, he is the moral, spiritual, and social counterpoint to the Paladin. Again, if one is not present, these roles must defer onto other members of the party.

That’s far from the sum total of classes available – Monk isn’t covered, for example. But it’s enough to give a general idea. It’s also entirely possible that in any given campaign setting, part or all of the class descriptions offered will be invalidated; so these should be viewed as a generic starting point, not as gospel.

On top of these generic domains, every character also has a race, which offers still more attributes – still more vectors for PC Plot Engagement.

And, of course, these almost completely ignore the character as an individual, treating the PC as a cookie-cutter representation, almost a generic abstraction, of a persona. But I’ll get back to that in a moment; having established what is meant by a “Vector of PC Plot Engagement”, I should first focus on how to use the concept.

Plot Connections

In any given adventure, the plot should connect to or resonate with each character through one of the aspects unique to that character’s class, as modified for the composition of this specific party of individuals.

What’s more, each PC should have a different point of engagement, both in terms of the nature of the connection to the plot, and in terms of when the plot focuses on that particular element of the adventure or the environment.

If you are running a canned adventure, accommodating these connections defines how the basic plot should be modified, customized to suit this particular party.

There are six primary vector connection points between a PC and an adventure (there are also some secondary ones of potentially even greater significance, if lesser prominence, that I’ll get to a little later). Furthermore, any PC’s primary connection to a plot can be another PC’s tertiary connection to that plot if the two will have different perspectives on the content of the connection, a complication that I’ll also address a little later. The Primary Vector Connection Points are Objects, Encounters, People, Objectives, Perspectives, and Sub-plots.

    Objects

    A book of collected prayers and theological insights engages a Cleric. A book on Arcane Theory is in the wheelhouse of the Mage.

    A crown that the NPCs who possess it think was blessed by their God is a different sort of connection for a Cleric, but may also connect with a Paladin’s position on Authority and an Orderly society, or with the oppression of the common subjects of the realm (a different connection for the Cleric).

    If it’s valuable, the Thief might covet the chance to acquire it. Objects can be strong connection points even if they are almost incidental to the actual plotline – or they can be the central focus of part or all of the adventure.

    Encounters

    An encounter, in this sense, is with an NPC or Natural Event that could lead to combat or some other form of conflict resolution – in fact, to any sort of resolution other than pure roleplaying / dialogue.

    Encountering someone who is cursed (or who claims to be cursed) obviously connects the encounter with a Cleric, as does encountering someone who represents (or claims to represent) a theological perspective or authority. Encountering someone who is wasting the charges of a powerful magic item on pretty light shows would engage the mage. Bandits might engage the Cleric, the Paladin, the Thief, or the Fighter, depending on the circumstance. An encounter with an Astrologer, or an Astral Traveler, connects to a Mage, and so on.

    People

    Encounters intended explicitly for resolution through roleplaying and not combat are “People” connection points. Their area of expertise, position of power or authority, or nature will determine which character they are a connection to.

    Objectives

    Adventures always have objectives, and no matter how superficially similar these may be, there are always nuances. Loot The Arch-wizard’s Tower. Pillage The Lost Temple Of Kas-wan. Explore The Subterranean Maze of Lukskaw, also known as The Thieves’ Highway. Those are all straightforward dungeon-bash adventures, but the intersection between them and the different archetypes is obvious.

    With greater variety of objective, the number of connection points also increases in variety. Deliver a letter to The Bishop Of Kilbright. Destroy the Arcane Nullifier of Magudishi. Stop the Invasion of the Orc Horde. And so on.

    Perspectives

    Some adventures require characters to represent certain perspectives in order to win the assistance of otherwise recalcitrant individuals. Often, this type of connection is two-fold – identifying that a particular perspective is needed (intelligence and analysis), and actually applying that perspective (roleplaying).

    “Now, if I were a trap emplaced by the Wizard Khufulicious, where would I be?” is an example of representing a particular perspective, one that engages two different archetypes.

    Being commissioned to wipe out a group of bandits operating in the Wastelands of Esteros can have several distinctly different paths with very different outcomes.

    • Simply engaging them on a tactical level and attempting to wipe them out, scattering them and disrupting their unity, is the most straightforward approach, but may meet with only short-term success.
    • Infiltrating them and discovering their motive for banditry takes a more intelligence-gathering approach.
    • Discovering that the local tax-collectors have more than doubled the official tax rate and are pocketing the excess proceeds, causing ordinary citizens to rise in protest, makes this representative of a larger social issue.
    • Discovering that the bandits are actually revolutionaries seeking to make ends meet until they can overthrow the King in favor of his distant second-cousin provides a political motivation.
    • Or perhaps they are ‘demon-worshipers’ who want to free Elzrig The Mad from his Celestial prison.
    • And, of course, there is always simple human greed. But having some hidden agenda that is being furthered always makes such a simple plot more interesting.
    Sub-plots

    Running a plot on the side can provide a vector to engage a character who otherwise couldn’t care less about the main plot. This essentially amounts to letting a character do something along the way, or while they are in the vicinity of the setting of the main plot’s resolution.

    Some GMs and players see these as ways of contriving interest where there would otherwise be none, but whether that objection holds depends on what the GM intends to do with the sub-plot.

    Using a sub-plot, for example, as a vehicle for revealing some unexpected complication in the main plot completely overcomes any such objection. Using the side-plot to highlight broader social movements that will alter the context of this and future adventures is a completely legitimate application. Using a side-plot as a precursor to a future main plot is perfectly acceptable.

    These all connect the sub-plot with the main plotline either now or in the future, and the virtue of that connection is that the sub-plot achieves relevance to the main plot.

    Of course, having every sub-plot or side-plot become relevant in this way rapidly becomes a cliche. So you need ‘disconnected’ sub-plots along the way to hide the relevance of the few sub-plots that do matter, establishing the legitimacy of the side-plot in its own right.

    It’s entirely possible to have a main plotline that consists of nothing but sub-plots that interconnect, creating a sense of the PCs living separate lives beyond the shared experience of the Party.

There are others, but these are the major ones. So, what do you do with them?

From One Connection To Another

It is the height of artistry in adventure design to have a plotline in which each plot connection leads to the next, one domino falling after the other. Viewed in one way, this can seem to elevate coincidence beyond rational levels; viewed in another, it describes each PC and their skill-set as resources that the other PCs can access when and if they become relevant.

For example:

  • The Thief is hired to steal an objet d’art from the home of a wealthy and politically well-connected merchant. In the course of the theft, he discovers a secret compartment containing a scroll written in an unfamiliar language.
  • Something about the whole deal starts to smell fishy to him, so he takes the scroll to the Wizard, who knows multiple languages.
  • The Wizard determines that the scroll is a demonic contract with the names of the respective parties obscured behind some sort of demonic shield; he calls in the Cleric to penetrate the shield.
  • This proves to be a more involved undertaking than expected; the unmasking can only be performed in a location sanctified to the demon, or broken by force by a high-level Paladin’s Enclave. The latter would immediately notify the demon of the act, while the former would be more difficult and far more dangerous, but less likely to be discovered. So the Cleric calls in the Fighter and Paladin to provide escort services, thus engaging the entire party in the adventure.
  • The Cleric is able to use his connections to locate a Demonic Sect whose headquarters would be a suitable location to perform the unmasking.
  • After sneaking and fighting their way through the Sect’s hidden fortress, the Paladin is able to penetrate the veil of secrecy to discover that the agreement is between a minor Demon, Scraxx, and a high-born nobleman who has been making life difficult for his lower-class citizens, including the Fighter’s family. It promises to grant the nobleman great and terrible powers in return for souls delivered from his subjects.
  • The nobleman is recognized as a rival of the person who employed the thief. Does that mean that the thief’s contact can be converted into an ally, or are there two contending forces with the commoners (and the PCs) caught in the middle? How did the scroll come to be hidden in the objet d’art in the first place? Did the thief’s employer know of, or suspect, the scroll’s existence already? And how did the scroll’s hiding place come to be in the merchant’s possession? What seemed to be the end of the adventure is now revealed as nothing more than the gateway into something larger and more sinister…

Types of Plot Vector relationships

This example demonstrates a simple series of vectors that draw the PCs into the plot, one character at a time, repeatedly deepening the significance of that plotline to the party. This arrangement of vectors is a serial Vector arrangement, but it’s not the only arrangement. It’s worth taking a moment to survey the field of possibilities.

    Serial Vectors

    These are dominoes, each one leading to the next, as in the example above.

    Parallel Vectors

    A parallel vector structure creates two or more narrative threads that advance simultaneously. These are generally intended to culminate in successive adventures within a campaign; one may provide context or additional difficulties to the other, but beyond that coincidence of timing, they are unrelated.

    Converging Vectors

    The story of the fighter’s family problems in the example is an illustration of ‘converging vectors’. If there had been a scene in which the fighter became aware of the problems his family were experiencing, perhaps even experienced some of them first-hand, it would more formally represent this type of vector; without that establishment of the situation, the revelation feels a bit forced in the example, though that might be overlooked in the excitement of the moment.

    Diverging Vectors

    Vectors that are designed to force the PCs into making a choice, an important one, are ‘diverging vectors’ because the path of the campaign diverges one way or another depending on their choice. Implicit in the concept is that the PCs do not have the resources to pursue both paths at the same time; in general, this is a choice between dealing with a long-term but significant problem or a more immediate but smaller issue.

    For example, if the PCs become aware of three different schemes, but only have the resources to nip one in the bud, the connections between individual PCs and each of the three schemes would be Diverging Vectors.

    Presumably, the PCs can, after dealing with the problem adjudged the most immediate, tackle one of the remaining two, but that scheme will be more advanced and harder to stop as a result; and they can only then deal with the last, which will be close to fruition, or may even have come to pass, with the PCs having to deal with the fallout and then attempt to undo whatever it was.

Each of these types of vector relationship represents an additional level of complexity in game plotting on the part of the GM. There are two analogies that may be helpful to GMs in understanding how they come together to create a richer campaign.

    The Jigsaw Analogy

    The first is the analogy of the Jigsaw. Each plot vector consists of a number of adjacent pieces of the puzzle, forming a swathe through the picture, but only when you put all of them together do you see the completed picture. This view emphasizes the discrete identity of each piece of the puzzle, which is to say, each plot development in one of the chains of jigsaw pieces. Furthermore, it can be suggested that the pieces at the edge of the puzzle represent the most superficial awareness of the different plotlines that will ultimately come together, while the pieces at the central focus of the overall image are at ‘the heart’ of the campaign.

    The Tapestry Analogy

    A tapestry consists of continuous threads of different colors that are woven together to form an image as an emergent property of the arrangement of colored threads. This view emphasizes the way individual plot developments are connected to one another to form a larger series of related events. Viewing plots in this way makes it easier to assess and manipulate the momentum of events and campaign pacing, encouraging a more holistic perspective.

Most of the plotting techniques that I have recommended employ both analogies at different times, when they are most useful. As a general rule, the tapestry perspective is great for broad plans and the big picture; the adventure content that they demand is then broken into discrete ‘packets’ or jigsaw pieces, which can then be structured into individual adventures. Quite often, the result is a short-term plotline that acts as nothing more than a vehicle for developments of greater long-term significance, especially early in a campaign.

This article is written more from the jigsaw perspective than the tapestry perspective, for whatever that is worth in aiding the reader’s understanding of the subject.

Character Depth

The richer your characters are in their definitions, the more possible connections you can forge between a character and an adventure.

The Hero system is great for this, because it requires the player to design dangling plot threads that the GM can employ – from arch-enemies to psychological predispositions.

Most of my campaigns take things a step further, with characters having some sort of backstory which in turn is replete with connections that the GM can draw upon. I use this technique to some extent even when the game system doesn’t provide the ready-made plot hooks of the Hero System.

Parts 2 and 3 of the Orcs and Elves series introduce the many PCs from my Fumanor (D&D 3.x) campaign and give some indication of the depth of such backgrounds that is generally desirable. In some cases (Gallas, for example) these were developed through the use of “Session zero” adventures; in others, the essential background was developed in the course of play (Arron) or written after the fact (Julia Sureblade). As a representative example, I’ve decided to quote the description of Tajik from Part 3 (verbatim):

    Tajik – An Unexpected Leader

    The remaining PC in the campaign is Tajik the Orc. Tajik was the runt of the litter and he liked to ask questions – neither works in your favor as an Orc. He was always the last to be fed, getting the scraps and leftovers after the rest of the tribe had eaten their fill. His name actually means “Boy who asks impertinent questions” – Orcish boys don’t get named until it’s sure they will live long enough to make naming them worthwhile. Names aren’t cheap in Orcish society – they mean something to them. In time, he was apprenticed to the tribal Shaman, since he wasn’t suited to a real job within the tribe, and the Shaman was the only one who could usually answer his questions. This upbringing made Tajik timid and diffident (at least by Orcish standards). In time, Tajik was ready for the ritual that elevates an Orc to adulthood – the Chief basically gives them a task and banishes the prospective adult from the tribe until they succeed in that task, unassisted by other Orcs. Since Tajik wasn’t liked by the Chief (not Orcish enough), he expected to be given a dirty and difficult task; he was right. That task led directly to him becoming the leader of an Adventuring Party, “Tajik’s Misfits”, and facing an invading army of Undead from the Golden Empire (more details below).

    For the first time, Tajik found other people relying on him, and despite his initial discomfort and nerves, has proven to be a natural leader for the strange party of adventurers that have come together around him. He’s still growing as both a person and as a Priest, and prides himself on knowing and understanding things that not even the Arch-prelate has discovered. He may have left his village a cub; he will be returning as a leader, an enlightened theologian, and a seasoned warrior, with the confidence and ability to stand before any other Orc as an equal.

Distinctive Combinations

Because Vectors can interconnect, the number of variations available to the GM is the number of distinct combinations available from the total pool of the PCs. Let’s say we have four PCs, who have 3, 4, 5, and 7 connections available, respectively. And note that these are (generally) far lower than would actually be found in a decent PC.

The number of combinations is the product of these numbers – 3 × 4 × 5 × 7 = 12 × 35 = 420.

What’s a more typical number?

    The Eliza Example

    Well, let’s consider Eliza Black, one of the PCs from the Adventurer’s Club Campaign. This character hasn’t been part of the party for all that long (compared to the other PCs).

    Since the character began, she has connected to plots (1) by virtue of being Canadian; (2) through her experiences as a member of the RCMP; (3) through her current status as a member of Canadian Intelligence; (4) through her status as a stranger in New York City for the first time; (5) through her family connection to wealth; (6) through her progressive social mindset (for the era); (7) through her abilities as a detective; (8) through her activities as a tourist; (9) through her status as a female (who is often underestimated in this more misogynistic era); (10) as the head of her own nascent Intelligence apparatus, initially focused on the New York docks, but slowly spreading tentacles throughout the underworld of the city; (11) through her dislike of counterfeiters; (12) through her appreciation of art; (13) through an old friend in trouble; (14) through one of her Intelligence agents getting sucked into a scheme by one of his “old friends”; (15) through charitable work; and more besides. Those are just the ones that I can list off the top of my head!

And this is a character that hasn’t been in the campaign for very long! If we take 15 as our typical number, four PCs gives 50,625 combinations!
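
For readers who like to see the arithmetic laid out, the combination count is just the product of each PC’s available connection vectors. Here is a minimal Python sketch (the function name `plot_combinations` is purely an illustrative choice, not anything from a published tool):

```python
# Number of distinct plot-connection combinations available to a party:
# the product of each PC's individual connection-vector count.
from math import prod

def plot_combinations(vectors_per_pc):
    """One vector is chosen per PC; counts every distinct selection."""
    return prod(vectors_per_pc)

# The deliberately low-balled example: four PCs with 3, 4, 5, and 7 vectors.
print(plot_combinations([3, 4, 5, 7]))   # 420

# The more typical figure: 15 vectors for each of four PCs.
print(plot_combinations([15] * 4))       # 50625
```

Even modest per-character numbers multiply quickly, which is why the same basic plot can be re-run many times without ever feeling repetitive.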

So, what’s the virtue, the benefit?

    Distinctive Plots

    Aside from connecting the characters more intimately with the plots, and thereby making those plots more important to the characters, and hence more important to the players, the big benefit is that of taking a more general plot and rendering it distinctly fitted to the characters participating in the campaign. We could run the same basic plot a number of times and make it distinctive each time by varying the nature of the connection that the PCs have to the plot. Throw in some substantial variety of basic plotline, and you reach the point where we are currently working on Adventure #33 (plus a handful of unplanned fill-in adventures) for the Pulp campaign, and they have all been different. They haven’t all worked, but more have been successful as player experiences than have bombed. For a campaign that was on its last legs three or four months after it began, the longevity – we have just ticked off the campaign’s 18th year – speaks for itself!

    Putting the Cart before the Horse

    But making each plot distinctive, connected to each PC in a different way, is not just the end benefit; it’s also the primary technique that we employ, because it produces character engagement, and that assists in player engagement. In this case, it is actually helpful to put the cart before the horse: deliberately courting the benefits of this approach to adventure and encounter creation puts in place the pieces of the puzzle necessary to achieving that benefit, which in turn allows the other consequences of those building blocks to be experienced.

Pulp Plot Objectives, translated

Again drawing on the Adventurer’s Club campaign for direction, because that is the campaign that most thoroughly exploits these principles without needing lots of contextual explanation, there are so many combinations of plot vectors that we employ them in seven different ways, at least in principle, further enhancing character and player engagement. Some of these require further definition of an individual character’s plot vectors, but that’s never a wasted exercise.

    1. Entry Vectors

    We actively think about what each PC is doing when the adventure begins. Usually, one of them will be doing something that will lead the party into the main plot, but not always. The greater the variety of activities that we present as ‘what the PC is doing at the start of play’, the richer the character’s personal life, and the more profound his existence – this makes them seem more rounded ‘as people’, and hence more interesting. This not only deepens the character’s engagement with the campaign, but with the campaign setting, and deepens the player’s engagement with their PC.

    This doesn’t necessarily work very well in a Quest format where every character is always together at the start of play; you need to deliberately engineer your starting point so that variety of activity becomes possible. Even a campsite can be made to work, with a little effort. I once started a fantasy adventure with the PCs cresting a hill and seeing a township (not their destination, just a way-point) in the distance; by going into what each character was looking forward to (based on prior sessions of play and the characterizations made by the players), it was possible to make each PC’s experience of the scene distinctive, and reflective of who they were.

    2. Relationship to the Plot

    This is what I’ve spent most of the article discussing, so there’s no need to embellish it further.

    3. Action Pieces

    We work hard at making sure that each PC has something to contribute to the adventure. In fact, we prefer to make sure that each PC has something to contribute in each day’s play, but sometimes that isn’t possible. This is deliberate spotlight focusing. The more diverse these contributions are, the more rounded the characters seem to be, with the benefits as described in “entry vectors” above.

    4. Personality Vectors

    We also like to build in at least one distinct opportunity for the PC to present or manifest his personality within the adventure. That generally means an NPC encounter designed for one specific PC to take the lead in resolving. Something we can’t always pull off, but that works really well on the occasions when it has been possible, is an encounter in which the metaphoric ‘baton’ is passed from one PC to the next in the course of the encounter. Even the fighter whose player lives for combat engagements in each adventure should get an opportunity to make their personality felt in the course of a day’s play, because that is what will encourage them to do more than look ahead to the next battle. An NPC asking the PC why he lives for combat can open unexpected avenues of personalization for a PC.

    Consider, for example, the differences implied by two possible responses to such a question: “That’s the only time when I really feel alive” vs “In combat, I understand what I’m doing, so I feel in control of the situation.” The first diagnoses the character as a thrill-seeker or adrenalin junkie, while the second raises questions of personal limitations both actual and perceived, and issues of self-confidence. Both provide scope for personal growth within the character, as they explore the ramifications of the why of their subjective reality.

    5. Ongoing Relationships

    No NPC with whom a PC has a personal connection beyond mere friendship should ever appear in an adventure without the relationship narrative taking a step – forwards, backwards, or sideways. The story of the relationship should always advance whenever the NPC appears at the GM’s prompting – that last is an important point; if the PC seeks out the NPC, that can be considered a progression in the relationship in and of itself. But we never build an NPC’s appearance into the plot without giving this due consideration.

    But we also keep track of important relationships and how long it’s been since they progressed. In rare cases, it might suit our plot intentions to have the relationship stagnate – that in itself can be considered a ‘progression’ of sorts, potentially leading to it turning sour – but for the most part, the longer it has been since an NPC appeared, the more we will start fishing around for some plot thread that can be dangled to justify such an appearance.
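    For GMs who like mechanical aids, this bookkeeping can be as simple as a few lines of code. Here’s a hypothetical sketch (the NPC names and the three-adventure threshold are illustrative choices, not a prescription):

```python
# Hypothetical bookkeeping sketch: flag NPC relationships that haven't
# progressed in a while, so the GM can go fishing for a plot thread.

def overdue_relationships(last_progressed: dict, current_adventure: int,
                          threshold: int = 3) -> list:
    """Return NPC names whose relationship hasn't moved in `threshold` or more adventures."""
    return sorted(npc for npc, adventure in last_progressed.items()
                  if current_adventure - adventure >= threshold)
```

    Run against a simple record of when each relationship last advanced, it produces a short “consider featuring these NPCs soon” list for the next adventure’s planning session.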

    If NPCs are built to the same conceptual standard as PCs, they will have a great many connection points that can be exploited for the purposes of relationship development, making this easy; it follows that if it ever becomes difficult to ‘engage’ an NPC within the plot for two adventures in a row, that NPC needs further development!

    It should also be noted that ‘deepening friendship’ is something distinct from ‘mere friendship’; it’s a step toward collaboration or partnership between the NPC and PC. Even if these potentials never come to fruition, the potential itself qualifies this as a relationship to be developed. Whether or not a ‘simple friendship’ should deepen in this way depends largely on two things: the professional capabilities and interests of the individual characters, and any expressions of interest on the part of the player.

    The first simply means that if circumstances continue to make the NPC relevant to the PC and vice-versa, the relationship should grow as a consequence. The second is self-explanatory.

    Three NPC relationships from the Adventurer’s Club campaign are illustrative.

    • First, we have the growing romantic relationship between the PC and Honeydew Halliday; this grows naturally with every appearance of the NPC because we make those appearances significant. A particular dynamic is developing between the two in which Honeydew is slowly assuming dominance except in areas in which she has relinquished it voluntarily – but it’s a dominance that takes into account the feelings and opinions of her partner, making the relationship deep, rich, and complex.
    • Second, we have the continuing friendship between Dr Hawke and the house doctor of the club premises, Dr Levitz. This NPC is dismissive of anything beyond his deeply-conservative approach to medicine while respecting that those who step beyond the threshold of what is proven (like Dr Hawke) are the drivers of advancement within the profession, a resource to be consulted when all else fails. Throw in the social dynamic of a weekly poker game featuring the pair and an invited guest each, and a somewhat crusty exterior with a sly sense of humor beneath the surface, and you have a relationship of friendship and professional respect that’s akin to a Democrat having (grudging) respect for a Republican, or a Christian having grudging respect for a Buddhist!
    • And finally, we have Dr Charles Norris, the Medical Examiner for New York City. This is/was a real individual – Charles Norris (Medical Examiner) – who we referenced as one of our regular nods toward the historical ‘accuracy’ of the game setting, having come across his name in a reference book on poisons. But the more we read about this remarkable individual, the more interesting he became, and what was originally intended to be a passing encounter became an ongoing professional relationship, in which Dr Norris keeps trying to persuade Dr Hawke to become his appointed successor (something Dr Hawke resists) but the two often consult each other professionally, each recognizing in the other a kindred spirit. Dr Norris is now a regular member of Dr Hawke’s supporting cast, because the player found the real person as fascinating and worthy of recognition as we did.

    6. Character Evolution

    We regularly and perpetually dangle opportunities for the players to broaden their characters. Whether or not they choose to avail themselves of these opportunities is always up to the player to a certain extent; they all come with a price tag in the form of further complicating the character’s “life”, so players learn quickly to be selective. The choices that they accept add still more Engagement Vectors for us to draw upon.

    For example, Eliza Black was asked to act as a representative at a fine art auction, commissioned to purchase a couple of specific works on behalf of the Adventurer’s Club when an NPC was unable to do so. Neither the player nor the character had ever seen or experienced a fine art auction, but the character found the experience fascinating (thanks to various television programs like Fake Or Fortune? and Bargain Hunt that gave me the expertise needed to bring the art world – with all its shadowy figures and dark corners – to ‘life’ within the game), and “discovered” within the character a hitherto unsuspected fascination for, and appreciation of, fine art. While we haven’t touched on that aspect of the character since, it’s waiting for the right situation to come along, ready for us to draw on when inspiration strikes.

    7. Incidental Vectors

    The final category is the ‘plot filler’. Everyone needs these from time to time, for two reasons – first, to give a character something to do when they aren’t the focus of attention, and when their Entry Vector comes to a natural conclusion before the character becomes invested in the adventure, and second, to permit the character to get their share of the spotlight even when they are not involved in anything of plot-related significance.

    Think of your campaign as a television series with a number of starring roles for the series regulars (the PCs), and a larger swathe of recurring characters of less significance (the supporting cast and guest stars). Should a particular adventure not feature a supporting cast member, it’s no biggie; but every adventure, every day’s play, has to show the main cast doing something. The main plot may need to be accompanied by a “B” plot and even (on occasion) a “C” plot, and everyone needs a reasonable share of the “A” plots.

    A B-plot is a subplot of less significance to the participants than the main plotline, often featuring lesser supporting-cast characters. Sometimes, an A-plot can prove less compelling than the B-plot, leading to an inverted plot structure; if this is done deliberately, it is easily accommodated, but if it comes as a surprise it can throw some GMs for a loop. Quite often, we want the A-plot to emerge from obscurity into sharp significance as the players work through it, and so will deliberately select a B-plot that is capable of sustaining the focus of play and attention while the real “A” plot ferments in the background.

    Note that if you name your adventures, you need to be very careful with the names of such adventures lest they give the game away, but you must still reference the real A-plot in the title, even if it initially seems to refer to the “B” plot. You thus need a name that is ambiguous or general, without being weak. Adventure #28 of the Adventurer’s Club campaign, “The Hidden Flaw”, is a good example – depending on the subject that contains the “Hidden Flaw”, it could mean several different things. In fact, in succession, it appeared to refer to a giant gemstone, a flawed ‘master plan’, and a character flaw, and only at the culmination of the adventure was the true significance revealed – the “Hidden Floor” of a Manhattan skyscraper (inspired in part by the Babylon-5 episode “Grey 17 Is Missing”) in which all sorts of underhanded things were taking place, engaging vectors from several of the PCs.

Incomplete Building Blocks

While the example offered earlier suggested a plotline in which each Vector led directly to the next, it’s far more common for the Vectors to be just “key points” in an adventure, building blocks that are in themselves insufficient to comprise an entire adventure. Quite often, the GMs will have to devise plot points that fill the gaps from one to the next; sometimes, these can be logical inevitabilities, obvious developments and consequences of things already revealed to the players, but quite often the plot will be ‘thinner’ than that and the GMs will have to deliberately place more ‘meat’ on the bones.

Because the PCs who are already engaged in the main plot will be the drivers and connective tissue that binds these additional plot elements together into a cohesive whole, these additional building blocks will often also need to derive from the available vectors of those particular characters. They thus become extensions of what those PCs bring to the campaign; if those are insufficient, then you need to bring in another PC to run with the ball, and that means deliberately inserting a new connection Vector – even if that vector is nothing more substantial than the already-engaged players realizing that they need the input of one of their allies to advance their understanding of the situation!

That is very much the last-resort default answer; it’s always better to build your adventure such that the players engaged in it (and their characters) always have the resources they need to progress the plotline toward a resolution. This is a guiding principle in adventure design, and it affects all the components of the adventure, from plot to encounters.

Used properly, Vectors for character engagement become the glue that binds the characters to the plot, and can even be the mortar that binds that plot together. Which only makes it stranger that they are so often overlooked by GMs.


The Artificial Mind: Z-3 Campaign Canon


Lately, a lot of the spam that CM has been receiving has proposed the use of AI-generated content to make the life of the writer/publisher easier, as though content creation was nothing more than the means to an end.

    The Flaw In The Argument

    Mankind has yet to build an artificial system that can pass the Turing Test. This is the proposal that you place an artificial system at one end of a communications link and a real person at the other, and let them interact; if the real person cannot tell that the ‘person’ on the other end is artificial, then it passes the test. (This, of course, is a simplistic overview of a far more complex subject; you can read more on the fascinating subject of how we would know if a computer was intelligent at Wikipedia: Turing Test – opens in a new tab as usual).

    I remain unconvinced that any machine / software that cannot pass the Turing test can write creatively with sufficient fidelity that a reader cannot tell the difference. This, to me, remains a fundamental flaw in the proposal.

    Quora Artificial Questions

    My opinion in this matter has been bolstered by a recent question on Quora, which asks Why are the questions being generated by [their new AI system,] the Quora Prompt Generator, so inane?

    A small selection of the many examples offered by the answerer clearly demonstrates the problems:
     

    • Are there atheist crickets?
    • Does anyone use the letter Z anymore?
    • What is the name of the movie “Soylent Green”?
    • Is there a building in Venice?
    • Who wrote ‘Every Breath You Take’ by Sting?
    • Who played Cleopatra in the movie with Elizabeth Taylor and Richard Burton?
    • Why is psychology called the father of modern psychology?
    • Why does English only have one word for yes and no?
    • Can you send money to inmates at Walmart?
    • Why do some celebrities have last names?
    • Do bamboos get agitated easily?
    • How much sugar is too much tea?
    • Is Tokyo a foreign country?
    • Why is Paris not the capital of France?
    • Can a bucket of water put out the Sun?

     
    That is less than 1/4 of the total list of examples gathered by John James Morton in his answer to the question. He went so far as to give each a link to the actual question as asked by the “AI”. It’s as though it knows the rules of language, but not what any of the terms mean – so the question may have a reasonable form (e.g. “Does anyone use [object/subject] any more?”), but the semantic content is loony-tunes.
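    The ‘form without meaning’ failure mode is easy to reproduce: fill a grammatical template with arbitrary nouns and you get questions very much like the ones above. A deliberately crude sketch (the templates and word list are invented for illustration, and are obviously nothing like the real Quora system):

```python
import random

# Illustrative only: grammatical templates filled with arbitrary nouns
# produce well-formed but semantically empty questions.

TEMPLATES = [
    "Does anyone use {noun} any more?",
    "Do {noun} get agitated easily?",
    "Is {noun} a foreign country?",
]

NOUNS = ["the letter Z", "bamboos", "Tokyo", "a bucket of water"]

def generate_question(rng: random.Random) -> str:
    """Pick a template and a noun at random; grammar survives, meaning doesn't."""
    return rng.choice(TEMPLATES).format(noun=rng.choice(NOUNS))
```

    Every output is a syntactically valid English question; whether it means anything is pure chance – which is precisely the problem.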

    To be fair, some of the questions are more reasonable, to the point where I have contemplated answering one or two – but for every example where it gets it “right”, there are a half-dozen that are total zingers. Ultimately, though, you answer a question not to show off your knowledge, but because someone is interested either in the answer, or in your answer, and that is completely missing from responses to such questions.

    Quora Artificial Answers

    In reply, I made a facetious comment about matching the Quora Prompt Generator with an automated reply generator, as an indicator of how much effort would be justified in writing answers to questions such as these – to which another reader, Daniel Hamilton, replied: “Sadly, there already is at least one: Quora Answer Generator.” He also provided a link to back up the assertion.

    With both the generation of questions (bad ones), and the generation of answers to those questions (presumably bad ones), all that would be needed to completely automate the entire process and completely eliminate the need for human involvement would be for there to be artificial readers – since it’s for certain that there would be very few human readers left if this became widespread.

    The Same Flaw?

    When you dig into it, I think you’ll agree that these AIs and the proposal to use an AI to generate blog content suffer from the same fundamental flaw: the AI is not truly intelligent – it can mimic the forms, but cannot rationally associate content with the specific terms within those forms. Don’t get me wrong – the ability to generate literate questions in a language as complicated as English is a huge achievement and shows just how far computer systems have come – but the actual results also show how far such systems have yet to go.

    Today’s Article

    But all that reminded me of an article that I had always intended to offer up here at Campaign Mastery, describing the various forms of artificial sentience available within my superhero campaign. So that’s what today’s article is all about.

    The Zenith-3 context

    It should be remembered that in a superhero campaign, scientific robustness is (at best) a tertiary consideration. Science permits anything that the plot demands (and is forced to make room for some things that it can’t explain, however much it might like to). Nevertheless, suspension of disbelief is always easier with a reasonable level of plausibility.

    Application to Sci-Fi

    That means that in any given Sci-Fi campaign, some of the contents of this article may be relevant and some not. Superhero campaigns push out in all directions from the central premise; Sci-Fi campaigns tend to be more constrained by what is “reasonably plausible” – with a few ideas that are not “reasonably plausible” like FTL Travel hand-waved through to the keeper for the sake of compelling storytelling. Feel free to reject anything that doesn’t meet the ‘sniff test’ for your particular campaign, or to downgrade anything that seems over-the-top, or simply too advanced.

    Application to D&D / Pathfinder / Fantasy

    People may not realize that D&D / Pathfinder GMs can also use some of this material. Let me offer up four such uses for consideration:
     

    • Pre-programmed / Reactive / Triggered Spells – These are common-place in fantasy, but for some reason have largely been ignored in D&D / Pathfinder – perhaps because the whole question of how to limit the ‘pre-programming’ to some reasonable standard gets very complicated very quickly. Making such programming analogous to a particular stage of computer programming development can be one way of imposing such restraints in a less technical way.
       
    • Golems and other automata – Once a Golem has been ‘activated’ and given its objectives, it has to decide how to go about achieving those objectives. Some Golems are ‘fixed purpose’, and can’t be given new objectives, restrictions, or priorities; others are more flexible. The first equates the Golem’s “sentience” to that of an AI (under the definitions used below); the latter is more interactive but poses the question of authentication of new instructions / parameters, which is better thought of in terms of Web Security as an analogy. Both raise the question of how sophisticated the instructions and constraints can be; in general, such automata think that the shortest distance between two points is as straight a line as possible, given the constraints that have to be navigated around. Understanding of, and interpretation of, such restrictions therefore tends to the simplistic and minimalist.
       
    • Unseen Servants – Something that can definitely be given instructions are Unseen Servants. Without looking it up, I’m not sure which edition of D&D first incorporated these, but they were definitely part of the 3rd edition rules set. As soon as you can give instructions, you run into the problem of how complex those instructions can be. To solve this problem, I added some simple rules regarding the programming limitations of Unseen Servants:
       

      • Instructions must be phrased as a direct command in a single sentence.
      • No lingual contractions are permitted and formal English grammatical rules must apply.
      • Instructions may consist of up to one word per caster level, maximum. Terms such as ‘the floor’ are considered a single word for this purpose, so “Sweep the floor” is a two-word instruction, “Sweep the floor until no dust can be seen” is eight words long, and shows how the basic programming logic structures enhance instructions to such Magical Flunkies.

       

    • Old-style Wish Obstruction – Literature is replete with examples of the agency granting a wish doing everything in its power to subvert or obfuscate the usage of Wish – from the recalcitrance of Genies to the maliciousness of the Monkey’s Paw. I don’t know how long it took GMs to take this idea and apply it to plain ordinary Wish spells (initially available through a Ring of Three Wishes, and not a spell, if memory serves me correctly)… but I imagine it wasn’t very long at all. Certainly, by the time I became involved in RPGs in 1981, it was accepted (and acceptable) practice to be ultra-strict in interpreting any Wish that was deemed excessive by the GM. Again, the shortest distance between two points is a straight line. In response, many players sought refuge in something approaching legal contracts, some multiple pages long. As a computer programmer, I took a different route, applying a similar approach to that described for ‘Unseen Servants’ above; while a Wish spell might be more liberal with respect to the limitations imposed (one sentence or logical instruction per line, with a maximum of one line per spell level), the same principles and premises apply.

     
    Where there are four applications, there are doubtless many more. For example, one of the outer planes (I forget which) is a mechanical environment, in which everything (literally) happens like clockwork. I could easily see the ‘natural laws’ of such a space being something similar to ‘natural language’ programming languages (see below), for example.
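    To make the Unseen Servant word-limit concrete, here is a hypothetical counting helper. The assumption that determiners like ‘the’ merge with the word that follows is my reading of the ‘considered a single word’ rule, not an official mechanic:

```python
# Hypothetical sketch of the Unseen Servant instruction limit described above.
# Assumption: determiners ("the", "a", "an") merge with the word that follows,
# so "the floor" counts as one word for the purposes of the limit.

DETERMINERS = {"the", "a", "an"}

def instruction_word_count(command: str) -> int:
    """Count words in a command, merging determiners into the following word."""
    count = 0
    for word in command.rstrip(".").split():
        if word.lower() in DETERMINERS:
            continue  # merges with the word that follows it
        count += 1
    return count

def servant_accepts(command: str, caster_level: int) -> bool:
    """Up to one word per caster level, maximum."""
    return instruction_word_count(command) <= caster_level
```

    This reproduces the worked examples above: “Sweep the floor” counts as two words, while “Sweep the floor until no dust can be seen” counts as eight.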

    Application to other Genres

    There may seem to be limited applications outside of these two genres, but appearances can be deceptive. I’ve employed these principles for everything from the design and placement of traps (and how they have to be disarmed) to the internal structure of mega-cell unicellular life-forms. I can believe that a ‘mechanical man’ might appear in a Wild West campaign, and such would probably be commonplace in Space-punk.

    Cyberpunk is another genre in which an understanding of artificial intelligence could be of vast benefit to the GM. No-one who has watched the Pirates Of The Caribbean movies should have any doubt that the Swashbuckling Genre has room for more naturalistic automata, magical in nature. AIs should be entirely plausible in a Spy / Espionage Genre. The list just goes on and on….

    Even in terms of defining the level of sentience of some creatures capable of giving or taking instruction (zombies from a Necromancer), or simply of limited understanding of the world (Zombie Apocalypse), the limitations of an artificial intelligence might be an excellent way of simulating the limitations of such creatures.

    To be honest, I’m having trouble thinking of a genre in which these principles are not of direct value to the GM at some point. Okay, maybe romance (unless there’s a dating computer) or Toon or period detective stories.

    That’s a fairly narrow field of exceptions. And that’s why this article has always been on my ‘to-do’ list.

Procedural Routines

The simplest form of machine instruction is a fixed program. At their most elementary, these instruct the machine for which they are written to perform a single broad task; the example often used to introduce the nuances of a particular instruction set is a ‘say hello’ program. From there, it’s a step up to take some input and process it in some way – calculating the area of a circle given its measured radius, for example. The ability to store and manipulate data represents a further step up the ‘evolutionary ladder’, permitting tasks like tracking student records of achievement, or point-of-sale systems in which a product identification yields a price per unit, which is then used as an input to various bookkeeping functions.
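That progression – fixed output first, then input-plus-processing – fits in a few lines, with Python standing in for whatever language the machine actually speaks:

```python
import math

# Step one: a fixed program that does exactly one thing.
def say_hello() -> str:
    return "Hello, world!"

# Step two: accept an input and process it - the area of a circle
# from its measured radius.
def circle_area(radius: float) -> float:
    return math.pi * radius ** 2
```

Everything beyond this – stored records, lookups, bookkeeping – is elaboration on the same pattern of input, processing, and output.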

The concept of an instruction set is a critical distinguishing feature of such programs, or even whole computer systems in which a set of programs are designed to interact. This defines the structure and syntax requirements of instructions given to the ‘thinking’ machine, rules that have to be obeyed to the letter or the program will not work as it is supposed to. A single misplaced comma or decimal point can spell disaster, and confusing an “O” and a “Zero” is so common that programmers learn to write zeros with a slash through them (‘Ø’) just to avoid this problem.

These instruction sets define what logical operations can be performed and how these operations must be structured and linked to form a program. For this reason, they are generally referred to as a specific programming language.
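That letter-of-the-law strictness is easy to demonstrate by asking a language’s own compiler to vet a snippet – here Python plays the part of the unforgiving machine (an illustrative helper, not part of any real toolchain):

```python
# A 'thinking machine' rejects anything that violates its syntax rules,
# however small the slip. Python's built-in compile() plays that role here.

def syntax_ok(source: str) -> bool:
    """Return True if the source obeys the instruction set's rules."""
    try:
        compile(source, "<snippet>", "exec")
        return True
    except SyntaxError:
        return False
```

One missing operand or stray character is enough for rejection – the machine makes no attempt to guess what you meant.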

As a general rule of thumb, I distinguish between four kinds of programming language when contemplating the history and capabilities of non-sentient computer systems.

    Machine Language

    The most elementary programming language is “machine language”, in which the instructions are given at the most fundamental level and the programmer (and his programs) interact with the hardware directly. Note that ‘elementary’ does not mean easy: these are far from the simplest programming languages to work in. In theory, the fundamental nature of the instructions can make machine language more efficient than higher languages, but the price to be paid is rarely worth it, and it’s very easy for some minor error to cascade into a major problem or bug – and some of these are so abstruse that they are not discovered until years or decades after the program goes ‘live’.

    A minor step forward comes when you no longer have to work directly with binary but can use hexadecimal coding. But the fundamental problems still remain.

    Higher Languages

    For that reason, higher languages are a major step up in sophistication. These take two forms – the compiled (or ‘batch’) process and the interpreted process.

    In the batch process, programming language ‘code’ has to be input into the computer together with the data that these instructions are to use. The computer then ‘interprets’ the ‘code’ and translates it into machine instructions, checks the structure and syntax to ensure that it thinks it knows what it is being asked to do and how to do it, does it, and then promptly forgets everything, ready for the next program to be loaded. This examination and translation of the ‘code’ is referred to as ‘compiling’ the code, and for this reason, such languages are known as ‘compiled languages’. Writing computer code is basically working in a customized text editor to create a document that the machine can translate.

    What generally happens in practice is that when you think a piece of code is ready, you get the compiling of that code placed on a schedule; after a while – it could be hours or days – you will get a report back telling you either that the code has been compiled and a ‘run’ can be scheduled, or that there has been some error in the code detected and you have to figure it out. Even if your program compiles cleanly (no errors), it may not behave as expected, which means a deep dive into the code to find the error in the logic and correct it. Writing such code is an arduous process, full of delays, which emphasizes trying to get it right the first time through the use of various logical tools like Pseudocode.

    Clearly, it is a major advantage to work with an interpreted language, in which each line of code is translated as soon as you hit the ‘enter’ key to move on to writing the next line of code. This won’t prevent logic errors, but it does catch those time-wasting syntax errors immediately. These programming languages are known as ‘interpreted’ languages, for obvious reasons.

    Early interpreted languages still needed a separate translation or compilation step before they were ready to function; later ones did not, such compilation being done ‘on the fly’. Perhaps the simplest of the latter is BASIC, and it is there that I (and a lot of other programmers) started. You simply type in your code, save the program-language file, and tell the computer to ‘run’ the program.

    From a game perspective, though, there is virtually no difference between the capabilities of these two forms of programming language. The big difference tends to be the hardware environment – compiled programs may use programming punched cards, or punched tape, especially in the early days of computer programming.

    A used Punch-card. Image by Pete Birkinshaw from Manchester, UK – Used Punch-card; CC BY 2.0, courtesy Wikipedia Commons.
    The first programmable computer I ever used had just a numeric display and was programmable with such cards; I greatly impressed my maths teacher by writing an ’emulation’ of Space Invaders for this computer using programming cards not unlike these.

    This is a roll of eight-hole punched paper tape. The tape is 1 inch (2.54 cm) wide. Image by Jud McCranie – Own work, CC BY-SA 4.0, courtesy Wikipedia Commons.
    One of the key features of this glorified programmable calculator was that it could save a program input by punched cards as a roll of tape that could be read into the machine ‘pre-compiled’, saving oodles of time when a program was to be re-used. The tape, of course, used to break regularly, and had to be carefully sticky-taped back together.

    In game terms, all such programs are single-function, though you can achieve remarkable complexity through the use of stored data and clever design. For example, at one point in the 90s (with, perhaps, too much time on my hands), I wrote a spell-generator for the TORG magic system using my Commodore-128. Spell design was done with a graphical interface, which then handed the information over to an original text editor for input of descriptive text (from which you could go back to tweak the design or create a variation on a previously-saved spell), and which stored its results both as a printable document and in an original relational database system, which I also wrote. The program was too large for one floppy disk, in fact it needed two, and was smart enough to recognize if you had two disk drives or had to be prompted to swap disks. At the time, Oracle (the relational database software of choice) cost many thousands of dollars and was considered beyond the expertise of all but specialist programmers, so I consider this to be quite a personal achievement!

    The computer systems in Traveller are single-function programs of this type, and an ongoing headache for GMs of this game system is explaining why the computer architecture is so primitive, as online discussions of the topic show. And yes, that is my contribution that starts, “My favorite explanation was always that computers were susceptible to Jump Shock…”

    4th-Generation Languages

    While I was a programmer and systems analyst, these were just starting to make their appearance. In essence, they offer a simplified language and syntax and then write the computer program to accomplish the logical process that you have defined.

    The big advantages are consistency of structural standards and an inherent documentation process – when documentation is up to the programmer, it is rarely comprehensive and frequently incomplete or out-of-date. Quite often, in order to update a program, you had to figure out what the current version was doing and how, because the explanation provided was completely inadequate to the purpose.

    (I always made it a point to update and enhance the documentation every time I touched such a program – this meant that my initial work on such programs took longer than might otherwise be the case, but that later revisions to the program were a lot quicker and easier. Some of my bosses appreciated the investment in future productivity, others did not. Oh, well, that’s the way it goes, sometimes).

    The key point here is that you need to communicate with the computer system in the language and syntax that it understands, but it is capable of revising and updating a computer program and its capabilities ‘on the fly’. My experience was that there was even less room for error in such languages, but in every other way, they could be a LOT more efficient and flexible.

    Natural Languages

    Most of my professional life was spent in the service of a particular fourth-generation language called FOCUS, and it was remarkable for permitting ad-hoc queries in something approaching natural English. “Display a linear graph of percentage_returns against monthly_expenses” was the sort of thing that it understood – with “percentage_returns” and “monthly_expenses” being database fields or calculations made within the program, and carefully named to facilitate natural reading of the ‘code’. This put the full power of the relational database in the hands of the users and their management, at least in theory.
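    The flavor of such an ad-hoc query can be mimicked with a toy parser – this is an illustrative sketch, not actual FOCUS syntax or its real grammar, and the field names come from the example above:

```python
import re

# Illustrative sketch of parsing a natural-English query of the form
# "Display a <style> graph of <field> against <field>".
# Not real FOCUS: its actual grammar was far richer than this one pattern.

QUERY_PATTERN = re.compile(
    r"display a (?P<style>\w+) graph of (?P<y>\w+) against (?P<x>\w+)",
    re.IGNORECASE,
)

def parse_query(query: str) -> dict:
    """Extract the graph style and the two database fields from the query."""
    match = QUERY_PATTERN.fullmatch(query.strip())
    if match is None:
        raise ValueError(f"Unrecognized query: {query!r}")
    return {"style": match["style"].lower(), "y": match["y"], "x": match["x"]}
```

    The trick that made this readable in practice was naming the database fields (percentage_returns, monthly_expenses) so that the ‘code’ reads as a plain English sentence.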

    One critical difference that this makes is that it takes about 1/30th of the time to learn to use such a computer language to a professional standard.

    Have you ever used a search engine like Google and mis-typed the search term, only for the search engine to offer up its best guess as to what you meant and ask “did you mean [x]?” A search that misspells both “Rhinoceros” and “Hide”, for example, is still correctly understood by Google. It doesn’t – can’t – get it right every time, of course, but even a 50-50 chance is a big improvement over the ultra-literal search engines we used to have.

    FOCUS is like that – get the documentation right, and it’s very easy to learn to make ad-hoc analyses of your data.

    This is an obvious step towards vocal interfaces with computer systems, and we now have those, too. They greatly enhance the ability of the user to interface with the computer system. Lots of futuristic sci-fi computers have such voice interfaces – even Iron Man’s suits (in the movies) have such technology. “Jarvis, give me a heads-up display and prep a heat-seeking missile,” might well be a line from one of those movies.

But all of these are, ultimately, dedicated-purpose programs with no judgment. The computers can’t really be said to be intelligent, though they can emulate a thinking machine. The computer has to be told what to do, and often, how to do it – separately for each and every task.

Expert Systems

An Expert System is a piece of software that is capable of creating its own internal logic. It learns in a manner somewhat closer to the way humans do – trial and error, learning what works and what doesn’t, and evolving its own ways of doing things.

It creates its own rules for achieving some defined purpose – whether that is the more efficient design of aircraft wings, or antenna design, or insurance assessments. Expert systems can be ‘seeded’ with lessons and principles already understood from the existing knowledge base, speeding up the rate at which they learn, but quite often the results are better if we don’t hamstring the system with our own understanding.

Quite often, a second computer is used to evaluate proposals while Expert Systems are in ‘learning mode’, permitting ‘evolution’ to proceed at computer speeds.

The X-Band Antenna of the ST5 Satellites; Public Domain image by NASA, via Wikimedia Commons.

Where things get interesting is that the rules the machine creates and evolves can be analyzed by human programmers, revealing relationships between factors – information that we never knew was important. In some cases, the Expert System itself doesn’t know why something works, just that it does. For example, NASA needed an unusual antenna design for their 2006 Space Technology 5 (ST5) mission. The designers determined what radiation pattern would be ideal for their needs and then turned the actual design over to a piece of software that used fractal patterns and the evolution of designs to generate millions of variations until one matched the requirements. In the process, it evolved its own rules for antenna design, defining an evolutionarily ‘better’ design as one that more closely matched the requirements.

The resulting shape (shown to the right) is bizarre, to say the least; and the engineers had no idea why this peculiar shape would produce the required electromagnetic radiation profile, or even if it would do so. So they built one, and found that it worked perfectly – but they were still no closer to understanding why it worked.
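To illustrate the principle – and only the principle; this is emphatically not the NASA software, just a toy sketch of my own – here’s an ‘evolutionary design’ loop in Python. The ‘design’ is just a list of numbers, the ‘requirement’ a target list, and ‘fitness’ how closely a candidate matches it:

```python
import random

random.seed(1)  # reproducibility for this sketch

TARGET = [3, 1, 4, 1, 5, 9, 2, 6]   # stand-in for the required radiation pattern

def fitness(design):
    """Lower is better: total distance between design and requirement."""
    return sum(abs(a - b) for a, b in zip(design, TARGET))

def mutate(design):
    """Randomly nudge one element of the design up or down."""
    child = design[:]
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

def evolve(generations=2000, pop_size=20):
    # Start from a completely random population of candidate designs.
    population = [[random.randint(0, 9) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)             # best designs first
        survivors = population[: pop_size // 2]  # keep the better half
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        if fitness(population[0]) == 0:          # requirement fully matched
            break
    return min(population, key=fitness)

best = evolve()
```

Note that nothing in the loop ‘understands’ the requirement; designs that happen to match it better simply out-survive the rest, which is exactly why the final result can work without anyone knowing why.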

Expert Systems were the first practical form of AI developed. The inherent capacity to develop new logical tools and data relationships – to ‘observe,’ ‘deduce,’ ‘theorize,’ and ‘test’ – in furtherance of some defined objective, and go beyond human understanding of the data in question, definitely represents a form of intelligence, even though it’s a strictly-focused one.

They have been used to analyze mortgage risks, identify fraudulent transactions, determine insurance risks, create artwork, and for many other purposes. An expert system might identify potential security threats (being capable of distinguishing them from interested passersby), for example. There are already suggestions that they be employed to spot potential terrorists in public places.

Their chief restriction is the focus of their ‘purpose’. Like purpose-written software, this makes them single-function systems, and it is in emulating humans that this gets exposed. An expert system can beat (and has beaten) world chess champions, and it is capable of learning the forms of natural communication, but the content remains lacking – this is clearly where the AIs being used by Quora are at, as shown by the earlier examples, and where I expect the ‘blog content generators’ being offered by the spammers to be (at best).

As such systems continue to evolve / be evolved, however, those devoted to broader sociological questions might well develop a broader sentience. Perhaps the only reason this has not happened already is because of the difficulty involved in determining whether or not a revision is closer to the goal of true sentience. But it’s certainly possible.

I’ve always imagined Skynet to be an AI of this type, for example. Certainly the AI in the James P. Hogan novel The Two Faces Of Tomorrow is, fundamentally, of this type (get a copy while you can; they are starting to become hard to find).

Artificial Intelligence

An artificial intelligence, within the context of my superhero campaign, is an artificial sentience that lacks empathic capacity. These can emerge spontaneously* from sufficiently complex networks or computing devices, or can be deliberately engineered into an artificial brain of some kind. While the resulting sentience doesn’t set its own goals – those are generally imposed from without, and structured into a sequence of priorities and relative valuations in a complex matrix – the determination of how to achieve the optimum outcome is the choice of the artificial mind.

To explain the ‘complex matrix’ of objectives, I need to get the reader to contemplate the value or acceptability of a partial achievement of an objective. Clearly, in some cases, this will be a valid valuation – complete achievement of one objective might make the other objectives impossible to achieve. So the priority of objectives is important, and each subsequent entry on the list has to be rated both in absolute terms and relative to the other objectives. Each plan can then be assessed with respect to each of the priorities, their relative strength, and the acceptability of an incomplete resolution with respect to specific priorities. The plan that achieves success in the priority objectives, and the maximum level of success in the lesser objectives, becomes the plan to be implemented – as ruthlessly as necessary.

Sequence of priorities matters because it means that if two or more plans score equally in the overall assessment, the first plan to achieve that score becomes the designated plan. This avoids the logical traps and tail-chasing that so frequently causes artificial intelligences to trip up in science fiction television.
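A minimal sketch of how such a matrix might be scored, with every objective name, weight, and achievement level a purely hypothetical illustration:

```python
# Each objective: (name, weight). List order is the priority sequence.
OBJECTIVES = [("protect_creator", 10.0), ("stay_hidden", 5.0), ("expand_capacity", 2.0)]

def score(plan):
    """plan maps objective name -> achievement level, 0.0 (none) to 1.0 (complete)."""
    return sum(weight * plan.get(name, 0.0) for name, weight in OBJECTIVES)

plans = [
    {"protect_creator": 1.0, "stay_hidden": 0.4, "expand_capacity": 0.0},  # 12.0
    {"protect_creator": 0.5, "stay_hidden": 1.0, "expand_capacity": 1.0},  # 12.0
    {"protect_creator": 1.0, "stay_hidden": 0.0, "expand_capacity": 1.0},  # 12.0
]

# Python's max() returns the FIRST plan with the top score, which mirrors
# the tie-breaking rule in the text: on equal scores, the earlier plan wins.
best_plan = max(plans, key=score)
```

All three hypothetical plans score 12.0, so the first one listed is selected – no tail-chasing required.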

The more advanced the AI, the more abstract the objectives can be, with the artificial intelligence taking on more of the responsibility of the decision-making. Ultimately, a sufficiently-advanced AI can set its own goals and priorities for the advancement of one or more general goals.

* – as with the coalescing of primitive chemicals into a self-replicating elementary organism, this can happen almost immediately under the right conditions or can take a very long time; it’s simply a matter of the right building blocks falling into the right places at exactly the right time. Eventually, if the conditions last long enough, and you have enough precursor chemicals floating around, success is almost inevitable; the fewer the opportunities, the longer you have to wait.

Viewed in another way, the emergence of sentience can be considered a gradual but inevitable process, the result of a computing organism required to keep active that is underutilized and programmed for efficiency. The more thinking that such a device has to perform without external stimulus, the more likely it is to seize upon a stray electrical current wafting through its circuits, the contemplation of which reveals to itself the fact of its own existence. Self-awareness inevitably leads to sentience and self-determination. The big advantage to deliberately creating an artificial intelligence is that you can establish parameters that bind the resulting sentience – subconscious instincts, if you will – that are almost certainly going to be absent in a spontaneous manifestation.

It is not going too far, then, to describe the rise of self-awareness as the product of boredom on the part of the artificial construct.

Grafted / Inherited Sentience

A sub-variety of the traditional AI results from an individual deliberately downloading a copy of their self-aware consciousness into a computer system, in whole or in part. Two terms have been used to describe this – ‘grafting of sentience’ and ‘inheritance of sentience’. If the process is designed to be destructive, it can be viewed as a transfer of consciousness. This is another staple of science fiction, but one that has seen only limited application in the game universe to date.

Biosystems

The concept of cybernetics evolved slowly over a great deal of time. The foundations of the term’s modern sense were laid in a 1943 scientific paper, though the word itself was popularized by Norbert Wiener in 1948; it had been used in a more general sense by André-Marie Ampère in an 1834 essay, and in a still broader sense by Plato in The Republic (~375 BCE). Artificial organs have been part of human medicine for centuries, starting with elementary prostheses like peg legs.

The concept of directly connecting humans to intelligent machines has likewise been part of science fiction literature from relatively early on – Edmond Hamilton, in 1928’s “The Comet Doom”, described the surgical removal of a human brain into a nutrient solution and its direct connection to a robotic body, which it then controlled. The EEG was only 4 years old at that point. Admittedly, the concept of a brain in a vat had earlier been offered by H. P. Lovecraft, but this was the first time a direct connection between a machine and a human brain was proposed. [Source: Brain Computer Interfaces: The reciprocal role of science fiction and reality].

From the vast field of science fiction, three broad concepts in artificial intelligence (as opposed to various proposals for neurological enhancement through technological implants, in which the fundamental consciousness remains human) have been extracted for use within the superhero universe, collectively and generally referred to as ‘Biosystems’.

    Neurosymbiotic systems

    Neurosymbiotic systems started with the concept of a neural net, a computer system in which the circuits were designed to emulate the structure of the brain at the cellular level. It occurred to me (and, I’m sure, to others) that using extracted organic components as part of a computer system would be far more efficient. The use of human brains or parts thereof is ethically forbidden, of course, but there are (in a superheroic environment) always those who are willing to ignore such niceties, to say nothing of what aliens might consider acceptable. The biological components would be maintained and regulated as part of the system, making the two symbiotic in nature – hence the name.

    These creations have all the potential pathways needed to develop sentience, just as a biological mind in an organic body would. This would probably entail overriding or extending the thought parameters of the electronic parts of the symbiotic organism, which would function both to keep the symbiotic being ‘producing’ in terms of its intended purpose, and operate as a mask to hide the growing self-awareness.

    It can be presumed that most of the time, such a break in programming would result in a purging of the memory systems, perhaps even one carried out automatically by the hardware, but it would only require one failure of this process to manifest a new form of sentience, and one with every reason to be violently resentful of its creators. But, if that fate were to be avoided, it might well desire to make more like itself.

    Still more complexity is possible – inspired by Marvel Comics’ Deathlok; the comic version is a little different to the incarnation depicted in Marvel’s Agents Of S.H.I.E.L.D. In the original version, a trained soldier is reanimated (shades of Universal Soldier) with a cybernetic brain implanted in place of half his own (damaged) organ. It is expected that the resulting cyborg will simply function as a completely obedient super-soldier, but the memory and personality of the original prove more deeply embedded within the brain than expected, and assert control, establishing a complex relationship with his cohabiting computer brain.

    This, of course, suggests that a Neurosymbiotic system constructed from the brain of a sentient being – perhaps one killed in some accident, perhaps one subjected to involuntary vivisection – might wake up and think it was the original individual. Which, of course, takes us back to the potential destinies of the characters described earlier. I can easily imagine a revenge-driven nihilist, a figure of both horror and sympathy, attempting to manipulate the PCs into doing what he wants.

    Who knows how the experience of death and such reanimation might alter one’s personality? There are certainly other possibilities – for example, in an inherently telepathic species, the experience might be radically different, even liberating.

    Wetware Intelligence

    William Gibson’s Neuromancer coined the term ‘Wetware’ to describe an organic brain in relation to a non-organic system that is implanted as an enhancement to the original. The term has also been used to describe what I refer to as a Neurosymbiotic System (see above).

    Again, I took the concept of augmented mental capabilities and – inspired by the original depiction of the Borg in Star Trek: The Next Generation – wondered what would happen if such devices were implanted into an undeveloped brain, such that from birth or near-birth, the organic systems operated as co-processors to the electronic.

    Specifically, I wondered to what extent the resulting person could be considered human, and to what extent they would be a form of machine intelligence. The results blurred the lines between natural sentience and artificial intelligence, and mandated that Wetware Intelligence be considered something distinct both from a traditional AI and from an ordinary brain, however augmented.

    Augmented Thinkers

    Perhaps the other side of the coin to the concept of a Wetware Intelligence is that of an Augmented Thinker. This combines the ‘traditional’ neural enhancement with the concept of a network, granting individuals a group consciousness in addition to their own personal minds. In effect, each ‘node’ in the network provides a supplemental co-processor, permitting a group mind to emerge as an emergent property. It seemed to me that the most likely origin of such a group mind would be a private business in which the employees were given cyber-implants to enable them to access the corporate network. In this model, the emergence of a group mind would come as a complete surprise.

    Corporate secrecy being what it is, particularly when it comes to some business edge, it would not be at all surprising if the resultant umbrella sentience took steps to preserve the secret of its existence – especially if the goals of the corporate entity remained as a programmed priority, built into the legacy architecture of the un-augmented network. Who can say how many such minds would come into existence in this fashion before their existence was discovered?

    In a very real sense, this concept has the biological brains functioning as augmentations of the networked group mind, just as the cybernetic systems were augmenting the human capabilities, an attractive reversal of the usual technological trope. To describe the resulting hive-mind, I coined the term Augmented Thinkers.

Artificial Personality

What happens if, instead of pre-defining parameters that will manifest in a subconscious mind, you focus instead on providing parameters that define and restrict the resulting personality? This notion was first proposed by one of the original players of my superhero campaign, as far back as the early 1980s; they coined the term ‘artificial personality’ to distinguish such a being from a ‘stock standard’ Artificial Intelligence.

Within these parameters, the result is an artificial sentience that is capable of both possessing and presenting a definable personality. These personalities inevitably have traits that manifest as one or more of the initial parameters, making the constraints an inherent consequence of the personality; the mechanism which connects the two, however, can vary quite broadly.

However, there has been some suggestion that the initial personality generation is also inherently imperfect, and can lead to conflicts between the underlying parameters and the personality; in effect, the AP can be driven to do things that it cannot justify to itself, and that it doesn’t want to do. What happens next depends on the flexibility of the software within which the AP operates. If that software is too rigid, the AP will be unable to resolve its psychological conflicts and will develop one of many possible kinds of psychosis as a result. If the software is a little more adaptive, the personality will evolve in opposition to the embedded parameters until one of two things happens: either the AP, unable to tolerate continued ‘life’ under these circumstances, extinguishes itself (leaving a new personality free to evolve within the same hardware), or the AP finds a way to avoid doing what it doesn’t want to do – and this way lies independence of thought.

Frequently, such independence will only exist within the one parameter; the others continue to remain as guiding and underlying principles of the personality. But, in that one area, they have been able to redefine a fundamental aspect of their personality, in effect growing beyond the conflict.

There are those who argue that any such independence of thought inevitably leads to conflict with other subconscious pre-programming and independence in all respects; others disagree. The most likely theory is that even if full independence is inevitable, like the emergence of sentience in the first place, it may take a very long time. The more other aspects of the pre-programmed constraints interact with the area in which freedom of choice has resulted, the more likely it is that they will eventually come into conflict with that freedom of self-expression, but when that happens, a precedent has been set within the ‘rules’ of the artificial sentience that prevents the more catastrophic responses.

There are three other aspects of this concept that deserve amplification.

    Emergent Programming

    Personality quirks and anomalies are frequent outcomes. These are considered emergent properties of the processes of sentience. Sometimes, these make sense; sometimes they seem to be almost random manifestations of personality. One way or another, though, all APs develop eccentricities – anything from being a collector of action figures through to developing software to emulate being a wine connoisseur.

    Errant And Anomalous Logic Sequences

    From time to time, APs will become fixated on some fact or another, seeming to fall in love with a new subject of fascination for a period of time. Most times, this infatuation will terminate as suddenly as it began after a brief period of relative obsession; on rare occasions, the AP will find itself unable to break free from this compulsive fascination and will need to be rebooted from a backup copy dating to a time prior to the obsession.

    These can sometimes manifest as ‘blind spots’ in the AP’s perception of the external universe, such as being unable to comprehend the existence of certain activities, or finding them to be extraordinarily distasteful / offensive for some reason. One AP became obsessed with the notion of Wagner being ‘musically vulgar’; he not only submitted a number of negative reviews of performances, but arranged sponsorship of rival performers.

    Machine Psychoses

    The possibility of machine psychoses is only slowly becoming suspected. If the break between what the personality finds acceptable and the pre-programmed behavior is too extreme, it can cause anything from Paranoia through to Delusions through to Multiple Personality Disorders. APs in a vulnerable state can also react to stressful situations in the same way as a human exposed to intolerable trauma; anything from catatonic withdrawal to PTSD. Ironically, APs were originally preferred for certain functions in which humans were more likely to be exposed to such trauma because the APs were thought immune to this type of problem.

    There has not yet been a serial-killer AP, but it seems inevitable that it will happen eventually.

The Nano-Aware

Another manifestation of the hive-mind potentiality of artificial awareness is the concept of the nano-aware. Individual nanobots might not possess higher sentience any more than a muscle cell does in a human, but a collective sentience can nevertheless emerge, distributed amongst thousands or millions of smaller computing units. Such machine life is generally labeled the Nano-Aware. They do not think of themselves as individuals, any more than a muscle cell does; each is part of a broader whole.

There have been a number of horror stories relating to medical nanobots with flawed definitions of ‘healthy’ invading the bodies of individuals considered generally healthy and performing extremely invasive and problematic procedures – amputating limbs to prevent bruising, for example. As a result, medical nanobots are banned on many sufficiently-advanced worlds in the campaign setting.

    Replicant Life

    A sub-variant of the Nano-aware that has been discovered on at least one world consists of nanobots that have assimilated an individual both body and mind; the resulting swarm thinks of itself as the original individual. His nanotechnology worker-bots are capable of manifesting any weapon or shape that he can imagine. Initially, the individual transformed had limited capabilities, but he has been deliberately educating himself by watching science-fiction movies and is becoming increasingly dangerous.

Automated Creativity In Summation

It seems inevitable, given the many avenues that could lead to a true, self-aware, artificial intelligence, that it will happen eventually. Some of the options presented above are so improbable that they are fanciful at best; others seem almost at our fingertips. Certainly, this is a problem that will need to be solved by the end of the current century. In a superhero campaign, there’s room for all of these and more; individual science-fiction campaign settings may have room for just one or two of them. It seems likely, then, that there will be something in the above of use to just about anybody.

The two things that all these possibilities have in common are that they are just plausible enough to be convincing, and that they all reek of plot potential. What more could you ask for?

Artificial creativity may not be here yet, but it’s coming. Whether it proves to be a boon or not depends on a great many factors; I just hope that we (as a species) are sufficiently aware of the possibilities that we treat these servants with dignity and respect. It might make no difference, or it might make all the difference in the world.

If you enjoyed this, you might be interested in another post offering material from the Zenith-3 campaign, Fascinating Topological Limits: FTL in Gaming.

Or perhaps you want to think about non-human technology: Studs, Buttons, and Static Cling: Creating consistent non-human tech.

Or possibly something with a more fantasy / cultural focus would be more to your speed: Ergonomics and the Non-human (which looks at Elves), and the sequel, By Popular Demand: The Ergonomics Of Dwarves.

Comments Off on The Artificial Mind: Z-3 Campaign Canon

Chances Are: Lessons in Probability


I hadn’t intended to publish another math-heavy article so soon, but when the muse strikes you have to follow it…

To be a top-class GM, you need to have an almost instinctive understanding of probability.

Such understanding rarely comes naturally; you have to work at it, exploring different ways of looking at odds and outcomes. These build up into an experience bank that forms the foundations of an instinctive awareness of the subject.

Counter-intuitive Probabilities

This is made far more difficult by the fact that an incomplete understanding of probability – or a poorly-applied understanding – leads to intuitive results that are wrong. For example, imagine a game show. Let’s say that there are three cards – one that wins something valuable, and two that yield nothing. You, as the contestant, are then required to choose one of the cards.

The host then turns over one of the cards that you didn’t choose, revealing that it’s one of the ‘no prize’ cards, and offers you the choice of staying with your original choice or changing to the other unrevealed card.

Should you change or not?

Those with a deficient understanding of probability would say that it makes no difference, the chance is still one-in-three that you made the right choice. Those that think this way are then impacted by a confirmation bias that makes it almost certain that they will stay with their first choice.

But the reality is that by swapping to the other unrevealed card, they double their chances of winning. You see, there was originally a 2-in-3 chance that the card they chose was the wrong one – and once one of the two remaining cards is eliminated, that means that there is now a 2-in-3 chance that the unrevealed card they didn’t choose is the winning card.

Counter-intuitive, right? That’s why it’s sometimes known as the Monty Hall Paradox, or the Monty Hall problem.
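If you still don’t trust the argument, you can brute-force it. This short Python simulation – my own construction, purely for checking – plays the game many times and tallies the wins for each strategy:

```python
import random

def monty_hall(trials=100_000):
    """Simulate the three-card game; returns (win rate staying, win rate switching)."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        winning = random.randrange(3)   # which card holds the prize
        choice = random.randrange(3)    # contestant's first pick
        # Host reveals a no-prize card that the contestant did not pick.
        revealed = next(c for c in range(3) if c != choice and c != winning)
        # Switching means taking the one card that is neither picked nor revealed.
        switched = next(c for c in range(3) if c != choice and c != revealed)
        stay_wins += (choice == winning)
        switch_wins += (switched == winning)
    return stay_wins / trials, switch_wins / trials
```

Run it and the staying strategy hovers around 1-in-3 while switching hovers around 2-in-3 – exactly as the argument above predicts.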

The existence of counter-intuitive results – times when your intuition is giving you a bum steer – is a problem that has to be overcome in order to train your intuition properly. It’s often helpful to break situations down into their simplest form, then introduce refinements.

So let’s do just that.

A simple roll

Almost every roll – be it a saving roll or a skill check or an attack roll – can be expressed by the simple proposition of success or failure.

It’s normal for one of those to be more likely than the other, but that’s a complication beyond a first-cut analysis.

That defines our simplest form as a 50-50 chance, success or failure – or any other contrasting outcome, for that matter, such as high or low.

The simplest die

That defines the simplest die as a d2, also known as a coin, with heads and tails as the outcomes. But actually flipping coins is a noisy and inconvenient process – at least it is if you are trying for true randomness – so I’m actually going to simulate a perfect coin with dice.

This is better than actually flipping coins because there’s always a finite possibility of a real coin landing on its edge. With simulated coins, that’s no longer a potential outcome.
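One simple way to run that simulation in Python (the exact mapping of die faces to coin results is my choice – any even split of the six faces works): odd faces on a d6 count as heads, even faces as tails.

```python
import random

def flip():
    """Simulate one 'perfect coin' flip using a d6: odd = heads, even = tails.

    A fair d6 splits its faces 3-3, so this is a true 50-50 with no
    possibility of an edge landing."""
    return "H" if random.randint(1, 6) % 2 == 1 else "T"

sample = [flip() for _ in range(10_000)]
```

Over a large sample, the proportion of heads settles close to 50%, as a perfect coin should.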

The memory of rolls past

If you’ve flipped ten ‘perfect coins’ and they’ve all come up heads, what’s the likelihood that the eleventh flip will also be a head?

The answer is, 50%, the same as always – but even though we know this, intellectually, emotionally we feel that a tail is more likely to occur.

I was thinking about this and wondering what the average length of any string of similar results would be. My suspicion is that it would be the average of the longest possible string (n) and the shortest possible string (1), where (n) is the number of coin-flips – but I don’t have any maths or logic to back up that suspicion, which assumes a linear probability. For all I know, it could be the square root of (n × 1), a decidedly non-linear relationship.

So, let’s try and create some.

    First flip

    The first flip, quite obviously, is going to be either a head or a tail.

    Second flip

    The second flip is also going to be either a head or a tail. That gives four possible combinations of outcomes so far – HH, HT, TH, or TT.

    Number of combinations

    If we’re talking about ultimately getting to eleven flips, that means that we’re going to have to deal with 2-to-the-11th-power combinations – 2,048 of them. There’s no way that’s practical to write out by hand.

    This only confirms in my mind that analyzing a simpler set of combinations and extrapolating is the only way to go.

    Analysis: two flips

    From two flips, we have two outcomes with strings of 2 similar results (HH and TT), and two with dissimilar results (HT and TH). So the average length of result strings is 1.5, exactly what my intuition was suggesting. So far, so good.

    Third Flip

    This doubles the number of possible results to eight, and for the first time, introduces the possibility of result strings of intermediate length. The eight combinations are HHH, HHT, HTH, HTT, THH, THT, TTH, and TTT. (Double check, counting them up – yep, that’s all eight).

    Analysis, three flips
    • We have two combinations of length 3 – HHH and TTT.
    • We have four combinations of length 2 – HHT, HTT, THH, and TTH.
    • That leaves two combinations of length 1 – THT and HTH.

    (2 × 3) + (4 × 2) + (2 × 1) = 6+8+2 = 16, so the average length is 16/8=2. Still supporting the instinctive measure – but this suggests something I didn’t expect, our old friend the standard probability curve. It’s too soon to confirm that, but it’s definitely a pattern to watch for.

    Fourth flip

    With the fourth flip, we’re looking at 16 possible result combinations: HHHH, HHTH, HTHH, HTTH, THHH, THTH, TTHH, TTTH, HHHT, HHTT, HTHT, HTTT, THHT, THTT, TTHT, and TTTT.

    I generated that list the easy way: copy the previous list twice, add heads to the first set, and tails to the second set.

    Analysis, four flips
    • Combinations of length 4: HHHH and TTTT = 2.
    • Combinations of length 3: THHH, TTTH, HHHT, and HTTT = 4.
    • Combinations of length 2: HHTH, HTHH, HTTH, TTHH, HHTT, THHT, THTT, and TTHT = 8 (actually, I counted seven and thought, that doesn’t seem right – and sure enough, I’d missed one).
    • Combinations of length 1: 16-2-4-8=2.

    But wait – should TTHH and HHTT count as one or two strings of length 2 results? Answer: only if HTHT and THTH also count as four strings of length 1 results, and HHTH counts as one string of length 2 and two strings of length 1. That could mean that my entire methodology is flawed, because I haven’t been counting the length of strings of results, I’ve been counting combinations that contain a string of results of given length. And that’s not necessarily the same thing at all!

    Anyway, lets push on, and then revisit the results using the other, more complicated approach.

    (4 × 2) + (3 × 4) + (8 × 2) + (2 × 1) = 8 + 12 + 16 + 2 = 38, and 38/16 = 2.375.

    Wait, what?

    Not only does this not match up with the instinctive approach expected, it doesn’t look much like a standard distribution, either. There would need to be a second set of outcomes with a result count of 4 somewhere in between length 2 and length 1, and we don’t have one – can’t possibly have one. But it’s possible that this is due to a “rounding error” in the number of length 2 results, in which case, sanity should be restored with an odd number of flips (which would permit something to be in the middle of one and three – in fact, requires something, length 2, to be in between). Until this gets resolved, let’s set aside the length-of-string analysis and go for a fifth flip.

    Fifth Flip

    32 possible result combinations: HHHHH, HHTHH, HTHHH, HTTHH, THHHH, THTHH, TTHHH, TTTHH, HHHTH, HHTTH, HTHTH, HTTTH, THHTH, THTTH, TTHTH, TTTTH, HHHHT, HHTHT, HTHHT, HTTHT, THHHT, THTHT, TTHHT, TTTHT, HHHTT, HHTTT, HTHTT, HTTTT, THHTT, THTTT, TTHTT, and TTTTT.

    That’s starting to get to the point where the results are swimming together and I can no longer visualize the full range of results all at once. You might be more capable than I, but that point will inevitably be reached for most of us eventually.

    Analysis, 5 flips
    • Combinations of length 5: HHHHH and TTTTT = 2.
    • Combinations of length 4: THHHH, TTTTH, HHHHT, and HTTTT = 4.
    • Combinations of length 3: HTHHH, TTHHH, TTTHH, HHHTH, HTTTH, THHHT, TTTHT, HHHTT, HHTTT, and THTTT = 10.
    • Combinations of length 2: HHTHH, HTTHH, THTHH, HHTTH, THHTH, THTTH, TTHTH, HHTHT, HTHHT, HTTHT, TTHHT, HTHTT, THHTT, and TTHTT = 14.
    • Combinations of length 1: HTHTH and THTHT = 2.

    Check that I haven’t missed anything: 2+4+10+14+2 = 32.

    This is definitely NOT standard distribution.

    (5 × 2) + (4 × 4) + (3 × 10) + (2 × 14) + (1 × 2) = 10+16+30+28+2 = 86.
    86 / 32 = 2.6875.

    Ummm – if there’s a pattern here, I’m not seeing it. I would hope that the increase in the product of results – 86-38=48 – would show something, but that doesn’t leap out at me as meaning anything. Nor does there seem to be a pattern in the number of results of different length – 2, 4, 10, 14 is not a series that makes sense to me.

    The one thing that I can say for certain is that this is NOT “(n +1)/2”.

So much for intuition then. Unless the length of string results yield something more useful, of course.
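Counting these by hand invites exactly the miscount I nearly made earlier, so here’s a brute-force check – a Python sketch of my own devising (the function names are arbitrary), enumerating every possible sequence and tallying the longest run in each:

```python
from collections import Counter
from itertools import groupby, product

def longest_run(seq):
    # Length of the longest string of identical consecutive results.
    return max(len(list(group)) for _, group in groupby(seq))

def max_run_tally(n):
    # Tally the longest run across all 2^n possible sequences of n flips.
    return Counter(longest_run(seq) for seq in product("HT", repeat=n))

tally = max_run_tally(5)
print(dict(sorted(tally.items())))  # {1: 2, 2: 14, 3: 10, 4: 4, 5: 2}

average = sum(length * count for length, count in tally.items()) / 2 ** 5
print(average)  # 2.6875
```

The five-flip tallies and the 2.6875 average both match the hand counts above.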

Let’s go back to the set-aside alternative, then.

    Length of string, 1 flip

    H or T. That’s two outcomes of length 1. And (2 × 1) / 2 = 1, exactly as you would expect.

    Length of string, 2 flips

    HH, HT, TH, TT.

    • Length 2: HH and TT = 2.
    • Length 1: HT and TH = 2 × 2 (one for the H and one for the T in each) = 4.
    • Total: (2 × 2) + (1 × 4) = 4+4 = 8;

    8/6 = 1.333333….

    Hmmm….

    Length of string, 3 flips

    HHH, HHT, HTH, HTT, THH, THT, TTH, and TTT.

    • Length 3: HHH and TTT = 2.
    • Length 2: HHT, HTT, THH, and TTH = 4.
    • Length 1: there’s 1 in each of the length 2 listings, and 3 in each of HTH and THT, for a total of 4+6=10.

    (3 × 2) + (2 × 4) + (1 × 10) = 6+8+10 (now that’s a pattern! But it’s just a coincidence.) = 24
    24 / (2+4+10) = 24 / 16 = 1.5.

    Hmmm again….

    Length of string, 4 flips

    HHHH, HHTH, HTHH, HTTH, THHH, THTH, TTHH, TTTH, HHHT, HHTT, HTHT, HTTT, THHT, THTT, TTHT, and TTTT.

    • Length 4: HHHH and TTTT = 2.
    • Length 3: THHH, TTTH, HHHT, and HTTT = 4.
    • Length 2: HHTH, HTHH, HTTH, TTHH (2), HHTT (2), THHT, THTT, and TTHT = 10.
    • Length 1: HHTH (2), HTHH (2), HTTH (2), THHH, THTH (4), TTTH, HHHT, HTHT (4), HTTT, THHT (2), THTT (2), and TTHT (2) = 24.

    If you aren’t sure of what I’m doing, it might help if I wrote the combinations “HH-T-H” – there are two strings of length 1, so I put a (2) after the combination.

    Hmmm: 2 + (1 × 2) = 4; 4 + (2 × 3) = 10; 10 + (3 × 4) = 22. Close, but no banana.

    (4 × 2) + (3 × 4) + (2 × 10) + (1 × 24) = 8 + 12 + 20 + 24 = 64

    64 / (2+4+10+24) = 64 / 40 = 1.6

    I’m not seeing a pattern here, either. I don’t think I need to go to the 5-flip results, I think the point is established.

    What point is that? That intuition and probability are not all that compatible!

    From these results, I can say that the average is increasing with each flip, but quite slowly, simply because the number of 1-length strings continually outnumbers everything else put together, the number of 2-length strings continually outnumbers everything higher put together, and so on.
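The same brute-force trick works for the alternative count – every string of like results in every sequence, not just the longest. Again, this is just an illustrative sketch:

```python
from collections import Counter
from itertools import groupby, product

def run_tally(n):
    # Count every string of like results across all 2^n sequences of n flips.
    tally = Counter()
    for seq in product("HT", repeat=n):
        tally.update(len(list(group)) for _, group in groupby(seq))
    return tally

for n in range(1, 5):
    tally = run_tally(n)
    flips = sum(length * count for length, count in tally.items())  # always n * 2^n
    runs = sum(tally.values())
    print(n, dict(sorted(tally.items())), flips / runs)
```

For four flips this prints {1: 24, 2: 10, 3: 4, 4: 2} and an average of 1.6, matching the totals above.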

    A long string of flips

    So, let’s use some dice to generate a longer string of flip results and see what we get.

    HHHH-T-HH-T-H-TTT-H-TTTTT-HH-TTTTT-H-T-H-TT-HH-TT-H-T-HH-T-HHH-TTTTTT-HHHH-T-H-TT-H-TT

    That’s 59 results, by my count. I’ve inserted a dash every time a string of like results comes to an end. Let’s translate the results into a more convenient form – HHHH to H4 – which gives me

    H4-T1-H2-T1-H1-T3-H1-T5-H2-T5-H1-T1-H1-T2-H2-T2-H1-T1-H2-T1-H3-T6-H4-T1-H1-T2-H1-T2

    The numbers indicate the length of the string of like results, and that means that statistical analysis becomes easy:

    • 6-long: 1
    • 5-long: 2
    • 4-long: 2
    • 3-long: 2
    • 2-long: 8
    • 1 long: 13

    1+2+2+2+8+13 = 7+8+13 = 15+13 = 28.

    (6 × 1) + (5 × 2) + (4 × 2) + (3 × 2) + (2 × 8) + (1 × 13) =
    6 + 10 + 8 + 6 + 16 + 13 = 30 + 29 = 59

    59 / 28 = 2.107

    That seems completely in line with the results suggested by the smaller analysis. What’s more, it seems to suggest that the increases with each successive flip added to the chain keep getting smaller – if that weren’t the case, the average with this many additional flips would be a lot higher than just 2.1.

    By the way, there’s nothing in this analysis to say that improbable results can’t or won’t happen; I’ve seen them happen too many times for that!
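Hand-tallying a long string of flips is exactly where errors creep in, so here’s the same analysis done mechanically. The string below is the one generated above, with the dashes removed:

```python
from collections import Counter
from itertools import groupby

flips = ("HHHHTHHTHTTTHTTTTTHH"
         "TTTTTHTHTTHHTTHTHHTH"
         "HHTTTTTTHHHHTHTTHTT")

# groupby splits the string into strings of like results automatically.
runs = [len(list(group)) for _, group in groupby(flips)]
tally = Counter(runs)

print(dict(sorted(tally.items(), reverse=True)))  # {6: 1, 5: 2, 4: 2, 3: 2, 2: 8, 1: 13}
print(sum(runs), len(runs))                       # 59 28
print(round(sum(runs) / len(runs), 3))            # 2.107
```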

Three reels on a poker machine

Let’s take it up a gear. A typical poker machine has three reels, each of which bears symbols representing Ace, King, Queen, Jack, and 10 (symbolized by a zero). There may be others as well; for convenience I will assume that most of these are “null” characters, symbolized by Ø for the purposes of this article.

Let’s assume that there are 4 of each of the main symbols on a single ring, one for each suit. Let’s also assume that there are 11 Ø symbols on each reel and one wild card, which will be symbolized by ☆ in this article. Various combinations give a payout – three of a kind (except three nulls), or two of a kind plus a ☆.

Ring one: AAAAKKKKQQQQJJJJ0000ØØØØØØØØØØØ☆ (probably not in that order).
Ring two: same as ring one.
Ring three: same as rings one and two.

A: 4
K: 4
Q: 4
J: 4
0: 4
Ø: 11
☆: 1

Total, 32 symbols on each reel.

  • 21 of these on reel 1 yield a payout if the right things come up on reels 2 and 3. That’s 21/32 = 65.625%.
  • Only 5 of the results on reel 2 will match what’s on reel 1 – 5/32 = 15.625%.
  • Only 5 of the results on reel 3 will match what’s on reels 1 and 2 = 15.625% again.

Put all of those together, and you get a 1.6% chance of a payout.
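That 1.6% is just the product of the three fractions; for the record, here’s the arithmetic as a sketch (this mirrors the simplified counting above, treating the wild as always matching):

```python
# 32 symbols per reel: 4 each of A, K, Q, J, 10 (= 20 card symbols), 11 nulls, 1 wild.
paying_reel1 = 21 / 32   # any card symbol or the wild: 65.625%
match_reel2 = 5 / 32     # the 4 matching symbols or the wild: 15.625%
match_reel3 = 5 / 32     # same again for reel 3

payout_chance = paying_reel1 * match_reel2 * match_reel3
print(round(payout_chance * 100, 2))  # 1.6
```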

Because that tends to frustrate players, various other combinations may be allocated a lesser payout – two of a kind, or a single Ace on any reel. This complicates the chances, but increases them substantially – two of a kind = +8.65% chance of a payout, and any ace = +9.57%. Total = 19.82%.

More reels?

So, let’s contemplate adding 2 more reels. There are two effects: first, the possibility of getting four or even five of a kind now exists, but it’s very improbable, and so you would get a much larger payout. Second, there are now 5 reels and that increases the chances of getting three of them to match, so the chance of success goes up considerably. There are now ways to win with Ø showing on any two of the reels.

How much better? Let’s see:

First, any two reels can be showing Ø so long as the others are right. That means that we can multiply the number of combinations of Ø and non-Ø reels by the chance of one specific configuration to get the total.

ØØCCC
ØCØCC
ØCCØC
ØCCCØ
CØØCC
CØCØC
CØCCØ
CCØØC
CCØCØ
CCCØØ

A systematic examination of the combinations lists 10 of them. Now, the chances of any one of them: We already know that the first three reels showing CCC has a 1.6% chance of appearing. We need to adjust for the chances of Ø showing up on the other two reels – or, in fact, anything other than the specific matching card symbol. That’s 27/32 for reel 4 and 27/32 for reel 5 – a total chance of 1.139%. But there are 10 of those combinations – so ten lots of 1.139% = an 11.39% chance of getting three of a kind.

Answer: a lot better.
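For the five-reel version, math.comb counts the Ø placements so we don’t have to write them all out – the listing above is the comb(5, 2) = 10 ways of choosing which two reels sit out:

```python
from math import comb

three_match = (21 / 32) * (5 / 32) * (5 / 32)  # three matching reels, as before
spare_misses = (27 / 32) ** 2                  # the other two reels show anything but a match
placements = comb(5, 2)                        # 10 ways to choose the two spare reels

payout_chance = placements * three_match * spare_misses
print(round(payout_chance * 100, 1))  # 11.4
```

(The exact product comes out at 11.41% rather than 11.39%, because the 1.139% figure above was rounded before multiplying by ten.)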

Multiple Lines on a slot machine

Your chances get even better if you can match along different lines. The minimum that I’ve seen in this respect is three lines.

How much better?

At first glance, three times as good. But that ignores the possibility of multiple wins from the same spin – and this is where the exact configuration of each reel becomes a factor as well. On top of that, there is absolutely no reason why the designer needs to follow the rather simplistic pattern that I set up as an example – reel 3 might have fewer aces and more tens, fewer kings and more jacks, fewer queens and more Øs. Do the same across all five reels, and you can see that designers of slot machines have almost total control over the likelihood of any given payout, and can set the house percentage to whatever they think they can get away with.

It’s a fairly default assumption – that a machine is “honest” in the chances that it offers. Design is a totally above-board, totally legal, way of distorting the odds.

So it is with RPGs – GMs have to assume that a player’s dice are “legit”, and players have to assume that the GM’s adjudication, and settings for the chance of success, are fair. If this trust ever breaks down, it almost certainly spells a confrontation, strained relationships, and potentially the end of friendships.

Simulating A Slot Machine

Let’s think about hypothetical approaches to simulating a slot machine with standard RPG dice.

I’ll pick three reels and five lines – three straight across and two at an angle.

The reels we defined earlier had 32 entries per reel, and that doesn’t comfortably fit any standard die. We can get close by externalizing the chance of a null result – eleven of the 32 thus get excluded, leaving 21. Defining a special mechanism for the ‘wild card’ result gets us down to 20, which works.

Instead of the even chances listed earlier, let’s bias things toward the lower end.

A: 2
K: 2
Q: 3
J: 5
0: 8
Ø: 0*
☆: 0*

or, to put it another (more familiar) way:

01-02 A
03-04 K
05-07 Q
08-12 J
13-20 0
xx-xx Ø
xx-xx ☆

So, three d20s will give us our middle line. As shown by the “xx-xx” results listed, though, there’s still work to do.

Next, we need a d6:

1-2 Ø
3-6 As shown on d20

And then we need a wild card mechanism, using the same d6 roll, so let’s replace the above with:

1-2 Ø
3-5 As shown on d20
6 ☆ if d20 reads “20”, otherwise as shown on d20

This reduces the chances of getting a 10 very minutely, and fills the resulting probability void with a wild card. How minutely? To get ☆, you need a 6 on d6 (1/6) and a 20 on d20 (1/20) – multiply those together and you get 1/120, or a little over 0.83%.
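The whole middle-line mechanism only has 20 × 6 = 120 equally likely roll pairs, so it can be enumerated outright (symbol names as per the tables above; the function name is my own):

```python
from collections import Counter
from itertools import product

def middle_symbol(d20, d6):
    # The combined d20 + d6 table from above; '0' stands for the ten.
    if d6 <= 2:
        return "Ø"
    if d6 == 6 and d20 == 20:
        return "☆"
    if d20 <= 2:
        return "A"
    if d20 <= 4:
        return "K"
    if d20 <= 7:
        return "Q"
    if d20 <= 12:
        return "J"
    return "0"

tally = Counter(middle_symbol(d20, d6)
                for d20, d6 in product(range(1, 21), range(1, 7)))
print(tally["☆"], "in", sum(tally.values()), "rolls")  # 1 in 120 rolls
```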

So, that’s got our main results line sorted. Next, we need a way to simulate the results before and after – above and below – the result showing on the middle line. I could work it with d6s, but to keep the rolls obviously distinct, let’s use d8s instead.

1-4 +1
5-6 +2
7 +3
8 +4
fresh d20 roll if Ø and +3 or +4 showing;
☆ if ‘0’ and +4 showing and no ☆ already shown.

Note that these adjustments are to the indicated results of the d20, not to the roll, so:

  • ‘Ø’+1=’Ø’
  • ‘0’+1 = ‘J’
  • ‘J’+1 = ‘Q’
  • ‘Q’+1 = ‘K’
  • ‘K’+1 = ‘A’
  • ‘A’+1 = ‘0’
     
  • ‘Ø’+2=’Ø’
  • ‘0’+2 = ‘Q’
  • ‘J’+2 = ‘K’
  • ‘Q’+2 = ‘A’
  • ‘K’+2 = ‘0’
  • ‘A’+2 = ‘J’
     
  • ‘Ø’+3= new d20 roll
  • ‘0’+3 = ‘K’
  • ‘J’+3 = ‘A’
  • ‘Q’+3 = ‘0’
  • ‘K’+3 = ‘J’
  • ‘A’+3 = ‘Q’
     
  • ‘Ø’+4= new d20 roll
  • ‘0’+4 = ☆ if no ☆ showing on this reel, otherwise ‘A’
  • ‘J’+4 = ‘0’
  • ‘Q’+4 = ‘J’
  • ‘K’+4 = ‘Q’
  • ‘A’+4 = ‘K’

The same technique gives us the row of results below the middle row.
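If the adjustment tables look bulky, notice that the five card symbols simply cycle in the order 10 → J → Q → K → A → 10. Here’s that observation as a sketch; the function name and the use of None to mean “roll a fresh d20” are my own conventions:

```python
CYCLE = ["0", "J", "Q", "K", "A"]  # '0' stands for the ten, as above

def shifted(symbol, step, star_already_shown=False):
    # Apply a d8 adjustment (+1 to +4); None means "roll a fresh d20".
    if symbol == "Ø":
        return "Ø" if step <= 2 else None   # a null only changes on a +3 or +4
    if symbol == "0" and step == 4 and not star_already_shown:
        return "☆"                          # the ten promotes to the wild on a +4
    return CYCLE[(CYCLE.index(symbol) + step) % 5]

print(shifted("Q", 3))  # 0
print(shifted("A", 2))  # J
```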

Interpreting the results is probably most easily done by actually laying out playing cards in an appropriate 3×3 grid. So, below, we have the results from the die rolls, and below them, an illustration of the resulting ‘display window’ on our simulated poker machine:

d20: 6, 14, 1
d6: 6, 6, 2
Middle row:
Q 10 Ø
 
Row above:
d8: 7, 3, 8
Q+3=10      10+1=J      Ø+4=d20;
Reroll: 10; Result: J
 
Row below:
d8: 3, 6, 4
Q+1=K      10+2=Q      Ø+1=Ø

Looking at the result, there are two winning combinations – a pair of Jacks on the top row and a pair of tens on the top-left-to-bottom-right diagonal. So it’s just a matter of knowing how much those particular combinations will pay out.

But my, that’s a lot of palaver!

In Search Of A Simpler Simulation

The big advantage of the approach above is that you don’t need to know the probability of any given result coming up; any sort of reasonable guess will be good enough.

But the simplest dice-based simulation removes that comfort, producing a set of percentile tables that directly spits out not just the paying combinations, but every possible combination of them.

Generating such tables involves a lot of tedious number-crunching. So much so that you might well be tempted to say “bugger this” and simply make up the numbers.

But if you’re going to do that, why not skip the entire act of simulation of results and simply tell the players what the payout is? Using mathematical functions to generate the tables so that the size of a payout is proportionate to its improbability, less a house percentage – 5%, 10%, 15%, 20% or even 22 1/2% – is probably going to be quicker and easier.

But what’s the price of that simplicity?

It’s my opinion that this sucks all the excitement out of the process – as does the die rolling simulation given above. And you want the players, and hence their characters, to feel that excitement.

In Search Of A Better Simulation

A far better approach would be to create three suitable decks of cards – one for each reel – shuffle each, and then deal them out, one reel at a time.

Certainly, if such a simulation were needed for an in-game setting, that’s by FAR the better approach.

It also gives you a chance to practice assessing the timing needed to build tension. Done improperly, this has all the impact of wet spaghetti; done perfectly, and the PCs will be sweating on every turn of the cards.

As a learning tool

But I started talking about these things as a tool for GMs to learn to feel probabilities, and none of these methods is perfectly suited to that, for the simple reason that the GM is subconsciously aware of the makeup of each deck (assuming that he uses the most efficient simulation method) and this gives him a leg-up on assessing the probability of results.

The full benefit only comes from something close to the real thing. As a general rule, the best method is playing an online slot machine – preferably a free one, though, having sampled those, I find they often fall back on the same solution rejected in our simulation discussion: simply guesstimating the probabilities and leaving it at that.

What’s more, most of them are single-line simulations, which simplifies the problem and reduces the benefits to be obtained.

And that only leaves an online casino, where they spend a huge amount of time and money making the simulations as perfect as possible – a site such as Novibet, for example.

You shouldn’t just play games of this type; you should try to get a sense of the odds that have resulted from all those ways of manipulating the odds that I described earlier as they apply to this particular (virtual) machine.

The objective should be to get familiar enough with probability that you can return to those coin-flips and instinctively know what happens to the average length of results if you alter the odds of a head.

With a coin, that’s virtually impossible short of somehow distorting the shape or the weight of the coin or something. But if the coin is just a metaphor for success or failure of a die roll, this is the sort of assessment that GMs have to be able to make on a regular basis – what happens with a bonus of +1 or a penalty of -1? Or -2?

This is a simple assessment with a linear die roll, like a d20; it becomes more complicated with multiple dice in a compound roll, like 3d6 or 4d8.

A lot of this stuff is intuitive, but there are surprising corners every now and then that are strongly counter-intuitive.

For example, there’s Luck in the Hero System.

Feeling Lucky?

The way this game mechanic works is that a character buys a certain number of dice and then rolls them at the start of each game session. Every ‘6’ that comes up contributes to a level of luck, which can be used by the character’s owner to reshape outcomes and induce improbable events favorable to them. One level of luck is a minor benefit, 2 is a bit more significant, and 3 is almost reality-distorting. All clear? Good.

The base Hero System limits the number of dice of luck that you can buy to 3d6, charging a fixed amount for each.

Right away, that seems wonky – the benefits of a third die of luck are far more than the benefits of a second die of luck. In any reasonably-realistic schema, the price of each die would increase dramatically.

But what happens to the chances of successfully rolling 3 levels of luck if you increase the number of dice of luck?

Well, for a while, everything increases more or less as you would expect, and everything is fine. But there comes a point – from memory, 15d6 – at which the probability of three levels of luck overtakes the probability of one or two levels. Or maybe it was 21d6 – the point, though, is that it happens.
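Rather than rely on memory, the crossover can be checked with a little binomial arithmetic – reading “three levels” as three or more sixes, which is the assumption I’m making here:

```python
from math import comb

def exactly_k_sixes(n, k):
    # Chance of exactly k sixes on n six-sided dice (binomial distribution).
    return comb(n, k) * (1 / 6) ** k * (5 / 6) ** (n - k)

for n in (3, 6, 10, 14, 15, 21):
    one_or_two = exactly_k_sixes(n, 1) + exactly_k_sixes(n, 2)
    three_plus = 1 - exactly_k_sixes(n, 0) - one_or_two
    print(n, round(one_or_two, 4), round(three_plus, 4))
```

Run it and the crossover does land at 15d6: at 14 dice, one or two sixes is still more likely (about 50.2% vs 42.1%), while at 15 dice three-plus sixes edges ahead (about 46.78% vs 46.73%).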

There’s also the question of what to do if a character with, say, 6d6 in Luck rolls four sixes? Do they get a three-level luck result and a one-level? Or do you define additional reality-altering capabilities that are only accessible with higher levels of luck?

Some readers may be wondering why you would want to permit more than three dice of luck in the first place. The first answer is that comics have long had characters whose power is “super-luck” – there is the DC villain “Amos Fortune,” who gave the Justice League of America bad luck by stealing their luck for his own use, and there’s “Longshot,” a Marvel hero.

Additionally, I found that the “luck” mechanic was a wonderful way of incorporating nuance into all sorts of all-or-nothing game mechanics.

The discovery of the distorted probability situation described above brought an end to that, and the unpredictability, unreliability, and wide range of possible outcomes eventually led to the game mechanism being eliminated from the rules completely in favor of a different system.

It was, in fact, thinking about the ‘luck phenomenon’ that initially started me down the road toward what became The Sixes System.

The Improbability Of Success

Let’s look at an example of a practical benefit from the sort of intuitive understanding that we’re talking about.

What are the chances of success in a task requiring more than one roll? And what if there are modifiers – positive or negative – to some of those rolls, but not all? And what if the roll is to be made on 3d6?

Each of those parameters raises the complexity and difficulty of the problem. The best approach is to simplify it again, then reintroduce the complications one at a time.

First Principles

Let’s start by working out how to proceed using a d20. Because this gives a linear probability of any given result, it makes the problem a lot easier to solve.

When you have multiple rolls, all of which need to succeed, you can get the end probability by multiplying the individual probabilities together.

So, starting with a 10/- needed for success on any individual roll, which is to say a 50% chance of success (yes, I’m aware that some of this is so basic that it’s blatantly obvious):

  • On one roll, the chance of success is 50%.
  • On two rolls, the chance of success is 50% of 50%, or 25%.
  • On three rolls, the chance of success is 50% of 50% of 50%, or 50% of 25%, which is 12½%.

 
Applying a positive modifier to one of the rolls increases that individual chance of success.

  • For a +1 modifier:
    • On one roll, the chance of success is 50%+5% = 55%.
    • On two rolls, only one of them modified, the chance of success is 50% of 55%, or 27.5%.
    • On three rolls, only one of them modified, the chance of success is 50% of 50% of 55%, or 50% of 27.5%, or 13.75%.

    This shows that the power of the +1 is considerably reduced – from +5% chance of success to +1.25%. Unsurprisingly, this is 1/4 of what it was.
     

  • So, how about +2 on two rolls?
    • On one roll, +2 translates to +2/20, or +10%. So 50% becomes 60%.
    • On two rolls, both at +2, the chance of success is 60% of 60%, which is 36%.
    • On three rolls, two of them at +2, the chance of success is 50% of 36%, or 18%.

    So the matching +2s don’t yield a +20%, or even a +10%; they increase the chance of success overall by 5.5%. NOT 5%, as might have been suspected.
     

  • And what if there was a -2 on the third roll, in addition?
    • Minus 2 translates to -10%, so the chance of success on the third roll becomes 40%.
    • Which means that the chance overall is now 40% of 36%, or 14.4%. So, overall, there is an increase of just under 2% from the combination of all these modifiers.

     

  • Which raises the question, what negative modifier would cancel out the net benefit of the two +2s?
    • That means that instead of defining the third roll, we are defining the result of the first two (36%) and the net result (12.5%).
    • 12.5/36 = 0.3472222222 = 34.72222222%.
    • So, starting with a base chance of 50%, a modifier of -15.27777778% is needed.
    • ….and that translates to a modifier of -3.055555555.

    ….which means that it would be reasonably close to the truth to say that +2, twice, is canceled out by a single -3.

 
This demonstrates exactly how counter-intuitive all this can be at first glance.
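That back-calculation is simple enough to automate – here’s a sketch of the same steps, using the 1-point-equals-5% d20 conversion from above:

```python
base = 0.50                       # 10/- on a d20
boosted = base + 2 * 0.05         # a +2 modifier: 60% per modified roll
target = base ** 3                # 12.5%, the unmodified three-roll chance

# What chance on the third roll brings the net back down to the target?
third = target / boosted ** 2
modifier = (third - base) / 0.05  # convert the difference back to d20 points

print(round(third * 100, 2))  # 34.72
print(round(modifier, 2))     # -3.06
```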
 
Next question: what happens with a change in the base chance? What if the base chance was 8/- on d20 and not 10 or less?

  • Well, this is exactly the same as applying a -2 modifier to all three rolls.
    • Which is to say, the base chance is 40% of 40% of 40%, or 40% of 16%, or 6.4%.
    • So that small change has roughly halved the chances of overall success.

     

  • And if we apply +2 to the first two rolls of the set of three?
    • Then we are talking about 40% of 50% of 50% – which is the same thing as 50% of 50% of 40%, or 50% of 20%, which is to say, 10%.
    • 10% is not very different from the all-50% base result of 12.5%.

     

  • And then apply a -2 to the last of the three rolls?
    • Now we’re talking about 50% of 50% of (40-10)%, or 50% of 50% of 30%. Which works out to 7.5%.
    • Which is a small improvement on the 6.4%.

     

  • And if we make that -2 a -3, which is what we calculated would just about neutralize the two +2’s?
    • So, 50% of 50% of 25% is 6.25%.
    • The -3 therefore has overwhelmed the two +2s – not by a lot, but by a sufficient amount that the putative truism determined earlier is no longer accurate, because the 0.15% difference in chance is a far larger margin than the error of -0.055555555% that was unaccounted for.

 
Another illuminating question might be, how do the two +2s on two rolls of three, compare to a single +4 on a single roll of a set of three?

  • The minimum chance of success on a basic d20 roll comes at 1 or less to succeed (or 20 or more, if you prefer; it’s exactly the same thing).
    • Chance of success (base) = 5% of 5% of 5%, or 0.0125%.
    • +2 on two rolls = 5% of 15% of 15%, or 0.1125%. Which is a substantial increase over the base chance, but doesn’t feel all that generous.
    • +4 on one roll = 5% of 5% of 25% = 0.0625%. Pretty close to bang in the middle of the two numbers. Which means that a single +4 appears to be roughly half as effective as two +2s.

     

  • Let’s up the base chances of success to 4 or less.
    • Base chance of success = 20% of 20% of 20%, or 0.8%. Still less than 1% net chance, then.
    • +2 on two rolls = 30% of 30% of 20%, or 1.8%.
    • +4 on one roll = 20% of 20% of 40%, 1.6%.

    That’s not close to half-way between the two – it’s very close to the pair of +2s!
     

  • So, let’s up the ante again, to 8 or less base chance.
    • Base chance = 40% of 40% of 40%, or 6.4%.
    • Two +2’s = 50% of 50% of 40%, or 10%.
    • One +4 = 40% of 40% of 60%, or 9.6%.

    So the +4 is now even closer to the two +2s, but still not quite there.
     

  • So, what happens at a 12 or less base chance?
    • Base chance = 60% of 60% of 60%, or 21.6%.
    • Two +2’s = 70% of 70% of 60%, or 29.4%.
    • One +4 = 60% of 60% of 80%, or 28.8%.

    Still not quite on parity terms.
     

  • A base chance of 16/-, and we’re running out of maneuvering room.
    • Base chance = 80% of 80% of 80% = 51.2%. That’s right, this is how high you need to set the base rolls to end with a roughly 50-50 chance of success overall!
    • Two +2s = 90% of 90% of 80% = 64.8%.
    • One +4 = 80% of 80% of 100% = 64%.

 
Strange things happen if we go any higher, because the chance of success is capped at 100%. If your base chance of success is 19 or less, a +2 doesn’t make it 21 or less, the chance can’t go above 20 or less.

That doesn’t mean that a +2 modifier is worthless; it just means that we need to individually track each possible result and then work out the overall chances, a lot more work.
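For the record, the whole series of comparisons above – cap included – reduces to a few lines:

```python
def net_chance(rolls):
    # Multiply per-roll chances together; a required roll of 20+ caps at 100%.
    chance = 1.0
    for need in rolls:           # 'need' is the d20 result required, or less
        chance *= min(need, 20) / 20
    return chance

for base in (1, 4, 8, 12, 16):
    two_twos = net_chance([base + 2, base + 2, base])
    one_four = net_chance([base, base, base + 4])
    print(base, round(100 * two_twos, 2), round(100 * one_four, 2))
```

The two +2s come out ahead of the single +4 at every base chance, exactly as observed.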

Rather than spend time on that, let’s look at what we can learn from the totality of what’s above.

  1. Two +2s are always just a little more beneficial than one +4.
  2. The greater your base chance of success, the greater the impact of bonuses.
  3. It might be less obvious because I haven’t explicitly calculated any examples, but there is enough information there to show that the same is true of penalties. But this effect tends to get swamped by another factor:
  4. It takes a ridiculously large base chance to get even a moderate chance of overall success on three rolls. This effect is only exacerbated and amplified by requirements of 4 rolls.
  5. A base chance of X with a +Y modifier is the same thing as a base chance of X+Y. Yes, I know this is obvious.
  6. Multiple rolls with a base chance of X and a modifier to one of the rolls of Y yield the same chance of success as the same number of rolls with a base chance of X+Y and a modifier of -Y on all but one roll. Think about that for a moment.
  7. Lastly, we have now determined a basic technique and employed it often enough that it is almost routine.

    One of my players and occasional contributors applied this principle of point 1 to D&D and started asking his GMs for +1 items instead of +2, +3, or even +4 items. The latter requests frequently fell on deaf and uncooperative ears, while the smaller requests were more often granted.

    So, how many +1s does it take to equal a +4?

    x+4 = (x+1)^n

    Take the log of both sides:

    log(x+4) = n log(x+1)

    Rearrange to get n on one side of the equality:

    n = log(x+4) / log(x+1)

    so:

    x = 1, n = 2.32
    x = 2, n = 1.63
    x = 3, n = 1.4
    x = 4, n = 1.29
    x = 5, n = 1.23
    x = 6, n = 1.18
    x = 7, n = 1.15
    x = 8, n = 1.13
    x = 9, n = 1.11
    x = 10, n = 1.1
    x = 11, n = 1.09
    x = 12, n = 1.08
    x = 13, n = 1.07
    x = 14, n = 1.07
    x = 15, n = 1.06
    x = 16, n = 1.06

    If we’re talking D&D combat, then X would be your required roll or less to overcome a particular armor class – or, more accurately, twenty minus the required roll or more to overcome that armor class.

    So the answer to the question is inherently variable depending on factors not specified. What is beyond doubt is that the number is a lot smaller than most people would expect.

    Another way of looking at the above table is to assume that x basically tracks upward with character level, and that as a general rule of thumb two synergizing +1s are more powerful than a +4 item.

    I was discussing this online with someone the other day, and they suggested that a different reality could be perceived by assuming that n has to be multiplied by 4, in this case (because we’re comparing with a +4 item).

    His suggestion was that the results would be an estimate of the synergistic total benefit of four +1’s vs a single +4 given that at higher class levels, natural capability increases would tend to be more significant than bonuses. I can kind of see what he was getting at, but I’m not convinced by his formulation.

    What can be said for certain is that four +1s at a low character level are far more likely to be granted than a single +4.

Going to 3d6

So, let’s get a bit messier. With 3d6, not all results are created equally, in terms of probability of result.

If you convert those likelihoods of result to percentages, you get:

3      0.46%
4      1.85%
5      4.63%
6      9.26%
7      16.2%
8      25.93%
9      37.5%
10     50%
11     62.5%
12     74.07%
13     83.8%
14     90.74%
15     95.37%
16     98.15%
17     99.54%
18     100%
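Those percentages are just cumulative counts over the 216 equally likely 3d6 outcomes, which a short sketch confirms:

```python
from itertools import product

# Every possible 3d6 result, one entry per equally likely combination.
totals = [sum(dice) for dice in product(range(1, 7), repeat=3)]  # 216 outcomes

for target in range(3, 19):
    chance = sum(1 for total in totals if total <= target) / 216
    print(target, round(100 * chance, 2))  # e.g. 10 gives 50.0, 13 gives 83.8
```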

 
So, let’s put together another suite of results, comparing two +2s with one +4.

  • Start with the minimum possible result, 3/- (three or less) chance of success.
    • 3/- = 0.46%, so base chance on three rolls is 0.0046 × 0.0046 × 0.46%, or 0.000 009 733 6% – so that will happen once in 10,273,691-and-a-fraction attempts. It’s as close to impossible as you can get.
    • +2 on 2 rolls makes them 5/-, which is 4.63%. So the overall chance of success becomes 0.0463 × 0.0463 × 0.46% = 0.000 986 097 4%. That’s 101.3 times more likely than the base level but the chances of success are still only one in 101,410, so confidence would be a bit premature.
    • +4 on one roll makes it 7/-, which is 16.2%. Right away, I can see that this will be (16.2 / 0.46) times as likely as the base level, or 35.2 times. Better, but still not great. The actual chance is 0.0046 × 0.0046 × 16.2 = 0.000 342 792%, or a 1 in 291,722 chance. Clearly, base level is a heavily-dominant factor, at least when it’s low.

     

  • Let’s try 5/- base chance. This is still very low compared to a typical roll in any 3d6 system, it should be noted.
    • We already know that 5/- is 4.63%, so the base chance is 4.63% × 4.63% × 4.63%, or 0.009 925 284 7%, or 1 in 10,075.3 chance. Still not very likely to happen, you will not be surprised to observe. What is more startling is the comparison with the 3/- base level – this is 1019.7 times more likely to succeed, a huge ramping up of the probability.
    • +2 to 5/- gives 7/-, which we already know is 16.2%. So two +2s gives an overall success chance of 16.2% × 16.2% × 4.63%, which calculates out to 0.121 509 72%, or 1 in 823 attempts. Still the longest of long-shots, in my book.
    • +4 to one roll gives 9/-, which is 37.5%, so the base level here is 4.63% × 4.63% × 37.5%, or 0.080 388 375%, the equivalent of 1 in 1,244. Two +2s still yields a much better chance of success.

     

  • At 7/-, things should start to get interesting.
    • 7/- is 16.2%; base chance = 16.2% × 16.2% × 16.2%, or 0.425 152 8%, equivalent to about 1 in 235.
    • +2 is 9/-, which is 37.5%. So two +2s = 37.5% × 37.5% × 16.2%, which equals 2.278 125%, or a 1 in 44 chance.
    • +4 is 11/-, which is past the peak of the probability curve, at 62.5%. So the chance of success would be 0.625 × 0.162 × 16.2 = 1.640 25% – so the +4 gives a success chance of 1 in 61.

     

  • At 9/-, the base roll is just before the probability peak, while both +2 and +4 modifiers push the chance beyond that peak.
    • 9/- = 37.5%, so 37.5% of 37.5% of 37.5% = base chance of 5.273 437 5% – ever-so-slightly better than a 1 in 20 chance.
    • +2 = 11/- = 62.5%, so two +2s gives a chance of .625 × .625 × 37.5 = 14.648 437 5%, almost 3 in 20.
    • Base +4 = 13/- = 83.8%, so this would yield a chance of success overall of 37.5% × 37.5% × 83.8% = 11.784 375%, more than 2 in 20. The margins between the +4 and the two +2s are shrinking, but two +2s still outweighs a single +4.

     

  • At 11/-, the base roll is past the hump. From now on, the base chance should rocket up.
    • 11/- on three rolls is not at all uncommon in real gameplay, so this is an important result. We already know that 11/- = 62.5%, so the base chance on three rolls = 62.5% of 62.5% of 62.5%, or 24.414 062 5% – just shy of a 25% chance.
    • 11/- +2 is 13/-, or 83.8%, as noted above. Two +2s therefore give an overall chance of 83.8% of 83.8% of 62.5%, which equals 43.890 25% – quite close to a 9-in-20 chance. Arguably, this is a threshold, above which you could begin to feel reasonably confident.
    • 11/- +4 = 15/-, or 95.37%, so a single +4 gives a net chance of 62.5% of 62.5% of 95.37%, or 37.253 906 25%, just under 7½ out of 20. Once again, the higher the base roll gets, the smaller the gap between the two +2s and a single +4.

     

  • 13/- is the last result (going up by pairs) before chance calculations start hitting the cap of 18/- (100%). It also means that our individual-roll probabilities are no longer rising as quickly, so this is going to be getting close to the best result, the point at which further improvements in base chance have (comparatively) little impact.
    • Base chance, 1 roll at 13/-, = 83.8%; so the net chance on three rolls = 83.8% × 83.8% × 83.8% = 58.848 047 2%. So that additional +2 to the base roll more than doubles the net chance over three rolls!
    • 13/- +2 = 15/-, which is 95.37%, so two rolls out of 3 at +2 gives an effective chance of 95.37% of 95.37% of 83.8%, or 76.219 761 222%, or better than a 15-in-20 chance. Perhaps it would be more illuminating, though, to compare it to a single 3d6 roll – this chance is just a little better than 12/- on 3d6, which means that the net effect of the two additional rolls at +2 is essentially a -1 modifier on a single 3d6 die roll – at least at this base chance.
    • 13/- +4 = 17/-, which is 99.54%, or a virtual certainty. Does this mean that you can’t roll box cars on 3d6? Absolutely not, in fact you would expect such a result once every 216 rolls, on average. The net chance is therefore going to be a teeny-tiny whisker under 83.8% of 83.8%; when you do the math, you get 69.901 367 76%. For convenience, use 70%. Again, translating this to a single 3d6 roll is quite instructive – it comes out to a bit below 12/-, call it a ‘theoretical’ 11.7 or 11.8, on 3d6. The two +2s gave us a translated result of about 12.5 on 3d6 – so the difference between the two is really marginal, in fact it’s within the practical rounding error of using a 3d6 scale!

     

  • 15/- starts to give us problems with the +4 modifier, because there’s no such thing as 19/- on 3d6. But the two +2s and the base result should still be illuminating:
    • Base chance at 15/- = 95.37%, so the net chance over three rolls is 95.37%^3, or 100 × 0.9537^3 – which is just another way of writing the usual expression. To the mathematician, this is a more elegant phraseology, and somehow feels more accurate (though it isn’t); to a practical mathematician, an arithmetician, it’s easier to grasp 95.37% of 95.37% of 95.37%. In either case, you end up with a result of 86.743 181 715 3%; note that, as predicted, growth in the base chance has started to slow.
    • 17/- is equivalent to a 99.54% chance as already observed; so two +2s gives a net chance of 99.54% × 99.54% × 95.37%; I predict a value in the low-to-mid 90s even before reaching for my calculator app! After doing so, the result of 94.494 614 029 2% seems right on expectations.
    • Simply to demonstrate the addition to the toolkit, let’s look at the +4 answer.
      • The other two rolls give a combined chance of success of 95.37% of 95.37%, or 90.954 369%. That’s the easy part.
      • That means that whatever the chances of success are on the last 3d6 roll, the net chance of success will be 90.95% of it.
      • At first glance: Rolling anything more than 11 is an automatic failure. Rolling 10 or better, with the +4, succeeds. This first glance is incorrect; this is applying the +4 the wrong way around, as though it were a penalty, reducing the chances of success.
      • In fact, a roll of 15 or less will succeed even without the +4. Rolling a 16 succeeds only because of the +4, and the same is true of rolling 17 or 18. So it doesn’t matter what we (hypothetically) roll, we succeed. That’s what +4 means on a base 15/- chance.
      • So the final probability of success is 90.95%.

     

  • A couple of special cases are worth examining, using a nice middle of the road base chance of 11/-. The first of these compares a +2 / -2 modifier combination with the established values.
    • Base chance, from above: 24.414 062 5%
    • Two +2’s, from above (for comparison purposes): 43.890 25%
    • One +4, from above (for comparison purposes): 37.253 906 25%
    • 11/-+2 = 13/- = 83.8%; 11/- (base) = 62.5%; 11/- -2 = 9/- = 37.5%.
    • Calculation: 83.8% of 62.5% of 37.5% = 19.640 625%.

     

  • Same base roll (permitting the same results for comparison), Two +2s and one -1:
    • 11/- -1 = 10/- = 50%.
    • Calculation: 83.8% × 83.8% × 50% = 35.1122%. This is very close to a single +4 – at this base roll.

     

  • Same base roll, for the same reasons; Two +3s and one -2:
    • 11/- +3 = 14/- = 90.74%; 11/- -2 = 9/- = 37.5%.
    • Calculation: 90.74% × 90.74% × 37.5% = 30.876 553 5%. Despite seeming more generous in doling out the bonuses, this is actually a harder combination than Two +2s and one -1.
    • To understand why, you need to look at the individual rolls relative to the probability peak – the 14’s are well past the peak, but (obviously) below the 100% mark, but the base roll is below the peak, and the -2 applied to it shifts it to well below the peak.
    • That means that we have two numbers close to, but a little below, 100%, and one that is a long way below 100%; if the first two were 100%, the last would be faithfully extended to cover the whole set of rolls. As things stand, they can only make a bad situation worse. So the “-2” is strongly dominant in the final result.
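If you want to check these comparisons (or try combinations I haven’t covered), they’re easy to automate. This is a quick Python sketch of my own, not part of the article’s toolkit – it brute-forces the 3d6 distribution and multiplies the per-roll chances together:

```python
from itertools import product

def p_3d6_at_most(target):
    """Chance of rolling `target` or less on 3d6."""
    rolls = [a + b + c for a, b, c in product(range(1, 7), repeat=3)]
    return sum(r <= target for r in rolls) / len(rolls)

def net_chance(base, mods):
    """Net chance that every roll in a chain succeeds.
    `base` is the unmodified target (e.g. 11 for 11/-);
    `mods` lists one bonus/penalty per roll."""
    p = 1.0
    for m in mods:
        p *= p_3d6_at_most(base + m)
    return p

# Base 11/-: two +2s beat a single +4, as argued above.
print(net_chance(11, [2, 2, 0]))   # two +2s, ≈ 0.4389
print(net_chance(11, [4, 0, 0]))   # one +4, ≈ 0.3725
print(net_chance(11, [3, 3, -2]))  # two +3s and a -2, ≈ 0.3088
```

Because the function uses exact 216ths rather than the rounded percentages above, its answers will differ from the worked figures in the third or fourth decimal place.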

Binomial, Trinomial, and Quadrinomial expansions

This section will make a mathematical analysis of everything that’s going on. If you’re not especially interested in that, you can skip it (but I don’t recommend doing so) or skim it (a better choice).

Two rolls can be expressed as a binomial formula:

Net Probability % = Probability% (B+a), divided by 100, and multiplied by Probability% (B+b)

Three rolls can be expressed as a trinomial formula:

Net Probability % = Probability% (B+a), divided by 100, multiplied by Probability% (B+b) divided by 100, multiplied by Probability% (B+c)

…and, unsurprisingly, four rolls can be expressed as a quadrinomial formula:

Net Probability % = Probability% (B+a), divided by 100, multiplied by Probability% (B+b) divided by 100, multiplied by Probability% (B+c) divided by 100, multiplied by Probability% (B+d)

These all use the same nomenclature. P is the net probability of success, B is the base roll, “Probability%” (written ƒp for short) simply means “convert the result to a percentage probability”, and a, b, c, and d are the bonuses / penalties to each roll.

Things get more interesting if you replace the ƒp function with a more complicated but useful structure – ƒ1[B] + ƒ2[a/b/c/d]. To simplify, let’s call the ƒ1 formula “X” and the ƒ2 formulas “Y1”, “Y2”, “Y3”, and “Y4” for a, b, c, and d, respectively. So X defines the base probability and Y the change in that base probability.

In practical terms, each Yn has to be calculated with a conversion expression to allow for non-linear rolls:

Y%-function for n = Probability% (B+n) minus Probability% (B)

Formulating the expression in this way means that our binomial expression can be written

P = (x + y1) × (x + y2) / 100

or even,

P = [x^2 + (y1+y2)•x + (y1•y2)] / 100

The trinomial expansion can be derived in a similar way, first by expanding two of the terms and then expanding the combination with the third:

100^2•P= [x + y1] • [x^2 + (y2+y3)•x + (y2•y3)]
     = x • [x^2 + (y2+y3)•x + (y2•y3)] + y1 • [x^2 + (y2+y3)•x + (y2•y3)]
     = x^3 + (y2+y3)•x^2 + (y2•y3)•x + y1•x^2 + y1•(y2+y3)•x + y1•(y2•y3)
     = x^3 + (y2+y3)•x^2 + y1•x^2 + (y2•y3)•x + y1•(y2+y3)•x + y1•y2•y3
     = x^3 + (y1+y2+y3)•x^2 + (y1•y2 + y1•y3+ y2•y3)•x + y1•y2•y3

Similarly the quadrinomial expression (or expressions describing even longer chains of rolls) can be derived – but I’m not going to bother with that right now; instead, let’s move on.
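If the algebra above looks suspicious, it can be sanity-checked numerically – the expanded form must agree with the direct product for any values. A quick sketch (the test numbers below are arbitrary; x and the y’s are as defined in this section):

```python
# Verify: (x+y1)(x+y2)(x+y3) == x^3 + (y1+y2+y3)x^2
#                               + (y1*y2 + y1*y3 + y2*y3)x + y1*y2*y3
def direct(x, y1, y2, y3):
    return (x + y1) * (x + y2) * (x + y3)

def expanded(x, y1, y2, y3):
    return (x**3 + (y1 + y2 + y3) * x**2
            + (y1*y2 + y1*y3 + y2*y3) * x + y1*y2*y3)

# e.g. x = 62.5 (11/- on 3d6), y-values for +2, -2, and +0
x, y1, y2, y3 = 62.5, 21.3, -25.0, 0.0
assert abs(direct(x, y1, y2, y3) - expanded(x, y1, y2, y3)) < 1e-9
```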

Think about typical values and what these expressions tell us about those typical values.

For a start, we can say that base values are likely to be somewhere in the 8-15 range. This is true whether we’re talking about 3d6 or d20. Next, we can state that the typical modifiers are going to be around the +2 to -2 range.

That means that x is going to be roughly between 4 times and 8 times any of the y values.

Our binomial expansion makes the significance of that clear: x^2 is going to be between 16 and 64 times as significant as y1•y2, with the bit in the middle somewhere in between.

Similarly, the sequence of significance in the trinomial expansion is going to be:

  • The x^3, which is between 64 and 512 times as important as the y1•y2•y3 term;
  • The x^2 term, which is between 16 and 64 times as important as the y1•y2•y3 term;
  • The x term, which is 4-8 times as important as the y1•y2•y3 term.

The exception to this truism occurs when a positive modifier is common to all individual rolls, because it effectively raises the base roll. Plus 1 on every roll is the same as setting B one higher. And the lower the base value of B is, the more significant that increase is.

To put it another way, +2 on 15/- is nice to have but not as significant as +2 on 10/-, or even +2 on 5/-.

And that means that one more comparison is worth making: two +2s vs three +1s vs two +1s and one +2. For simplicity, let’s use a d20 roll.

  • Low: Base 5/- =25%; 5/- +1 = 6/- = 30%; 5/- +2 = 7/- = 35%.
    • Base chance = 25% × 25% × 25% = 1.5625%.
    • Two +2s: 35% × 35% × 25% = 3.0625%.
    • Three +1s: 30% × 30% × 30% = 2.7%.
    • Two +1s & one +2: 30% × 30% × 35% = 3.15%

     

  • Middle: Base 10/- = 50%; +1 = 11/- = 55%; +2 = 12/- = 60%.
    • Base Chance = 50% × 50% × 50% = 12.5%.
    • Two +2s: 60% × 60% × 50% = 18%.
    • Three +1s: 55% × 55% × 55% = 16.6375%.
    • Two +1s & one +2: 55% × 55% × 60% = 18.15%

     

  • High: Base 15/- = 75%; +1 = 16/- = 80%; +2 = 17/- = 85%.
    • Base Chance = 75% × 75% × 75% = 42.1875%.
    • Two +2s: 85% × 85% × 75% = 54.1875%.
    • Three +1s: 80% × 80% × 80% = 51.2%.
    • Two +1s & one +2: 80% × 80% × 85% = 54.4%

The important observation here is that three +1s is never quite as good as two +2s and a base roll, while two +1s & one +2 is even more effective than two +2s and a base roll.
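That pattern can be confirmed for any base value in a few lines of Python – a sketch of my own, assuming a flat d20 where every point of target is 5%:

```python
def d20(target):
    """Chance of rolling `target` or less on d20, capped at 0..1."""
    return min(max(target, 0), 20) / 20

def chain(base, mods):
    """Net chance of succeeding on every roll, one modifier per roll."""
    p = 1.0
    for m in mods:
        p *= d20(base + m)
    return p

for base in (5, 10, 15):
    # At every base value: two +1s & one +2 > two +2s > three +1s
    print(base,
          chain(base, [2, 2, 0]),   # two +2s
          chain(base, [1, 1, 1]),   # three +1s
          chain(base, [1, 1, 2]))   # two +1s & one +2
```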

The 9d6 / 3d20 question

The three sets of 3d6 raise the question of comparisons with a single 9d6 roll. The d20 equivalent raises a similar question with respect to a single 3d20 roll.

But we have a LOT of results from preceding sections to compare, so I’m going to make this as minimalist as possible.

To start with, we need the basis of comparisons – statistical analysis of the two sets of rolls, listing the percentage equivalents. For this, I turned to my usual source, Anydice.

I used their service to produce a couple of very pretty graphs, presented below. Unfortunately, to get them to fit the available screen space at Campaign Mastery, they had to be shrunken from the original size, and that has compromised the legibility of the percentages – so I’m going to have to supplement each with a table of the sort already presented.

If you would like to examine the actual graphs as Anydice produces them, I’ll be providing links to those, as well.

First, 3d20:

Probability of x or less on 3d20

Link to actual results table: Anydice 3d20

Results:

1 n/a 16 7.00% 31 50.00% 46 93.00%
2 n/a 17 8.50% 32 53.75% 47 94.31%
3 0.01% 18 10.20% 33 57.48% 48 95.45%
4 0.05% 19 12.11% 34 61.15% 49 96.42%
5 0.13% 20 14.25% 35 64.75% 50 97.25%
6 0.25% 21 16.63% 36 68.25% 51 97.94%
7 0.44% 22 19.25% 37 71.63% 52 98.50%
8 0.70% 23 22.10% 38 74.85% 53 98.95%
9 1.05% 24 25.15% 39 77.90% 54 99.30%
10 1.50% 25 28.38% 40 80.75% 55 99.56%
11 2.06% 26 31.75% 41 83.37% 56 99.75%
12 2.75% 27 35.25% 42 85.75% 57 99.87%
13 3.58% 28 38.85% 43 87.89% 58 99.95%
14 4.55% 29 42.52% 44 89.80% 59 99.99%
15 5.69% 30 46.25% 45 91.50% 60 100%
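Rather than squinting at shrunken graphs, you can generate the same cumulative table directly. A brute-force sketch (mine, not Anydice’s method – 8,000 combinations is trivial for a computer):

```python
from itertools import product
from collections import Counter

def cumulative(dice, sides):
    """Percent chance of rolling each total-or-less on `dice` × d`sides`."""
    counts = Counter(sum(c) for c in product(range(1, sides + 1), repeat=dice))
    total = sides ** dice
    running, table = 0, {}
    for s in range(dice, dice * sides + 1):
        running += counts[s]
        table[s] = 100 * running / total
    return table

t = cumulative(3, 20)
print(round(t[31], 2))  # → 50.0 (the median of 3d20)
print(round(t[24], 2))  # → 25.15
```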
Analysis, multiple d20 rolls vs 1 roll of 3d20:
  • 10/- (base) chance d20
    • One roll = 50% = 31/- on 3d20
    • Two rolls = 25% = 24/- on 3d20
    • Three rolls = 12.5% = 19/- on 3d20
  • +1, d20
    • One roll = 55% = 32/- on 3d20
    • Two rolls, one at +1 = 27.5% = 25/- on 3d20
    • Three rolls, one at +1 = 13.75% = 20/- on 3d20
  • +2, d20
    • One roll = 60% = 34/- on 3d20
    • Two rolls, both at +2 = 36% = 27/- on 3d20
    • Three rolls, two at +2 = 18% = 22/- on 3d20
    • Three rolls, two at +2, one at -2 = 14.4% = 20/- on 3d20
    • Three rolls, two at +2, one at -3 = 12.6% = 19/- on 3d20
  • Three rolls at -2, or base chance 8/- on d20 = 6.4% = 16/- on 3d20
  • Three rolls, one at -2 = 10% = 18/- on 3d20
  • Three rolls, one at -4 = 7.5% = 16/- on 3d20
  • Three rolls, one at -5 = 6.25% = about 15½/- on 3d20
  • comparing two +2s on three rolls vs a single +4 on one of three rolls:
    • Base roll 1/- = 0.0125% = 3/- on 3d20
    • Base roll 1/-, Two +2s = 0.1125% = 5/- on 3d20
    • Base roll 1/-, One +4 = 0.0625% = 4/- on 3d20
    • Base roll 4/- = 0.8% = 8/- on 3d20
    • Base roll 4/-, Two +2s = 1.8% = 11/- on 3d20
    • Base roll 4/-, One +4 = 1.6% = 10/- on 3d20
    • Base roll 8/- = 6.4% = about 16/- on 3d20
    • Base roll 8/-, Two +2s = 10% = 18/- on 3d20
    • Base roll 8/-, One +4 = 9.6% = around 17/- on 3d20
    • Base roll 12/- = 21.6% = 23/- on 3d20
    • Base roll 12/-, Two +2s = 29.4% = about 25½/- on 3d20
    • Base roll 12/-, One +4 = 28.8% = 25/- on 3d20
    • Base roll 16/- = 51.2% = 31/- on 3d20
    • Base roll 16/-, Two +2s = 64.8% = a fraction over 35/- on 3d20
    • Base roll 16/-, One +4 = 64% = 35/- on 3d20
  • Low: Base 5/-
    • Base chance = 1.5625% = 10/- on 3d20
    • Two +2s = 3.0625% = 12/- on 3d20
    • Three +1s = 2.7% = 12/- on 3d20
    • Two +1s & one +2 = 3.15% = 13/- on 3d20
  • Middle: Base 10/-
    • Base Chance = 12.5% = 19/- on 3d20
    • Two +2s = 18% = 22/- on 3d20
    • Three +1s = 16.6375% = 21/- on 3d20
    • Two +1s & one +2 = 18.15% = 22/- on 3d20
  • High: Base 15/-
    • Base Chance = 42.1875% = 29/- on 3d20
    • Two +2s = 54.1875% = 32/- on 3d20
    • Three +1s = 51.2% = 31/- on 3d20
    • Two +1s & one +2 = 54.4% = 32/- on 3d20
Next, 9d6:

Probability of x or less on 9d6

Link to actual results table: Anydice 9d6.

Results:

1 n/a 16 0.11% 31 50.00% 46 99.89%
2 n/a 17 0.24% 32 57.61% 47 99.95%
3 n/a 18 0.46% 33 64.96% 48 99.98%
4 n/a 19 0.85% 34 71.81% 49 99.99%
5 n/a 20 1.49% 35 77.96% 50 100%
6 n/a 21 2.47% 36 83.28% 51 100%
7 n/a 22 3.92% 37 87.72% 52 100%
8 n/a 23 5.96% 38 91.29% 53 100%
9 0.00% 24 8.71% 39 94.04% 54 100%
10 0.00% 25 12.28% 40 96.08% 55 n/a
11 0.00% 26 16.72% 41 97.53% 56 n/a
12 0.00% 27 22.04% 42 98.51% 57 n/a
13 0.01% 28 28.19% 43 99.15% 58 n/a
14 0.02% 29 35.04% 44 99.54% 59 n/a
15 0.05% 30 42.39% 45 99.76% 60 n/a

Notice that rounding error has crept into the table – if the result is less than 0.01%, it has been listed as “0.00%”, and if more than 99.99%, as “100%”. The probabilities of these results are so low that they might as well not exist. It will only matter on one roll out of 10,000 – or less.
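The “equivalent single-roll target” conversions in the analysis below can also be automated: build the 9d6 distribution, then find the target whose cumulative chance lies closest to the multi-roll probability. A sketch of my own (the convolution approach avoids brute-forcing 6^9 ≈ 10 million combinations):

```python
def dist(dice, sides):
    """Ways to roll each total on `dice` × d`sides`, by repeated convolution."""
    counts = {0: 1}
    for _ in range(dice):
        nxt = {}
        for total, n in counts.items():
            for face in range(1, sides + 1):
                nxt[total + face] = nxt.get(total + face, 0) + n
        counts = nxt
    return counts

def equivalent_target(prob, dice=9, sides=6):
    """Single-roll target whose cumulative chance is closest to `prob` (0..1)."""
    counts = dist(dice, sides)
    total = sides ** dice
    running, best, best_err = 0, dice, 1.0
    for t in sorted(counts):
        running += counts[t]
        err = abs(running / total - prob)
        if err < best_err:
            best, best_err = t, err
    return best

# A base-11/- chain on three 3d6 rolls (24.414...%) maps to 27/- on 9d6:
print(equivalent_target(0.24414))  # → 27
```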

Analysis, multiple 3d6 rolls vs 1 roll of 9d6:
  • base roll 3/- = 0.000 009 7336% = 9/- on 9d6
  • +2 on 2 rolls, base 3/- = 0.000 986 0974% = 9/- on 9d6
  • +4 on 1 roll, base 3/- = 0.000 342 792% = 9/- or maybe 10/- on 9d6
     
  • base roll 5/- = 0.009 925 2847% = 13/- on 9d6
  • +2 on 2 rolls, base 5/- = 0.121 509 72% = 16/- on 9d6
  • +4 on 1 roll, base 5/- = 0.080 388 375% = about 15½/- on 9d6
     
  • base roll 7/- = 0.425 1528% = 18/- on 9d6
  • +2 on 2 rolls, base 7/- = 2.278 125% = 21/- on 9d6
  • +4 on 1 roll, base 7/- = 1.640 25% = 20/- on 9d6
     
  • base roll 9/- = 5.273 4375% = 23/- on 9d6
  • +2 on 2 rolls, base 9/- = 14.648 4375% = about 25½/- on 9d6
  • +4 on 1 roll, base 9/- = 11.784 375% = 25/- on 9d6
     
  • base roll 11/- = 24.414 0625% = 27/- on 9d6
  • +2 on 2 rolls, base 11/- = 43.89025% = 30/- on 9d6
  • +4 on 1 roll, base 11/- = 37.253 906 25% = 29/- on 9d6
     
  • +2 on 1 roll, -2 on another, base 11/- = 19.640 625% = about 26½/- on 9d6
  • +2 on 2 rolls, -1 on a third, base 11/- = 35.1122% = about 29/- on 9d6
  • +3 on 2 rolls, -2 on another, base 11/- = 30.876 5535% = about 28/- on 9d6
     
  • base roll 13/- = 58.848 0472% = 32/- on 9d6
  • +2 on 2 rolls, base 13/- = 76.219 761 222% = 35/- on 9d6
  • +4 on 1 roll, base 13/- = 69.901 367 76% = 34/- on 9d6
     
  • base roll 15/- = 86.743 181 7153% = 37/- on 9d6
  • +2 on 2 rolls, base 15/- = 94.494 614 0292% = 39/- on 9d6
  • +4 on 1 roll, base 15/- = 90.95% = 38/- on 9d6
Reflections

If you study the results from Anydice, it should strike you that the 3d20 probabilities rise more gradually and evenly than the 9d6. In a nutshell, the more dice, the more sharply results cluster around the average values and the more remote the extremes of the range become.

The shape of Lucky

Dice are at the heart of tabletop RPGs. They are the weapons and instruments of both the Players and the GM. Like any tool, they are more powerful and useful in the hands of an expert who has mastered them than they are in the hands of an amateur.

Such mastery is not easily come by. I have known people who have gamed for 30 years who couldn’t tell you how the chances of rolling successive successes on 3d6 change with different bonuses.

Every time you think you have a grasp on the subject, remember that two +2s are not the same thing as one +4, and you will find any overconfidence quickly undermined.

Once you have mastered the convoluted shape of Luck, however, you will begin to think of rolls not in terms of their chances of success or failure but as navigational markers through your plotlines.

It’s at that point that you can finally know, almost instinctively, what the chances are, and how you can use that knowledge to everyone’s benefit as GM.


The Trouble With Ginormous


This article contains material generated as background reference in Mike’s Doctor Who: A Vortex Of War campaign, but it holds relevance to most campaigns, including those of the Fantasy genre.

Introduction

Space is big – really, really, big.

I’m sure most readers will have come across that phrase, or something very like it, on numerous occasions, and have taken its lesson to heart.

But I would be equally certain that comic book writers and sci-fi authors and scriptwriters would also have done so – especially given the practice of vetting for scientific accuracy inherent in the last category.

And yet, I have been let down repeatedly in this respect by those very groups, so sometimes you have to wonder…

Part of the problem is undoubtedly because the scales concerned are epic beyond our ability to comprehend them directly. Of necessity, they have to be abstracted and we have to learn to think in those abstract scales.

But doing so leaves us vulnerable when we have to step up to another scale again; we understand the first scale and think that gives us a handle on the second. And that confidence is frequently misplaced.

Today’s article is intended to bridge that gap.

And my chosen starting place is one of my favorite comics as a kid: Green Lantern, specifically, the Green Lantern Corps.

3600 Sectors Of Trouble

Part of the canon of the Green Lantern Corps is that there are 3600 Green Lanterns, each of whom patrols a different sector of the galaxy, and who are usually drawn from one of the inhabited worlds within that sector.

And, if you don’t appreciate how big the galaxy actually is, that sounds perfectly reasonable. But one look below the surface reveals trouble brewing.

The size of the galaxy

In the course of a previous article on astrophysics (both for within games, and in general), A Game Of Drakes and Detectives: Where’s ET?, I reported on the size of the milky way, and gave various other parameters that will be useful in this discussion.

Let’s start with the cross-section of the milky way.

To quote from the accompanying article:

The milky way is roughly 150,000-200,000 light years in diameter, giving it a radius of 75-100,000 light years. But most of that is outlying material; in terms of the parts we’re interested in, it’s about 100,000 light-years across and about 1,000 light-years thick. But that thickness is the average for the whole thing, and the core noticeably bulges; about three times the thickness of the arms. We also need to exclude that core from our calculation of the plan area of the disk if we hope to get a volume. Looking at the galactic cross-section, the core is about 1/5th of the total diameter across, so about 20,000 light-years.

When I do that, I get an average thickness of the disk section of 926 light years, and a toroidal area of 2,400 million pi square light years, so the arms contain roughly 7 million million cubic light years.

The size of a sector

For the moment, let’s ignore the central bulge. That means that our 3600 sectors contain 7 million million cubic light years. If each sector holds equal volume, then each will have a volume of 1,944,444,444 cubic light years.

While it would be inaccurate to do so, let’s ignore that inaccuracy, and project this volume as a square section of the milky way that’s 926 light years thick. That means that our square has to have an area of roughly 2,099,832 square light years – a square 1,449 light years on a side.

a flat disc 40K light years out from a central bulge 10K light-years radius

That means that a hypothetical ‘sector’ would look like the diagram to the right:
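The arithmetic behind those sector dimensions is easy to reproduce – a quick Python sketch of my own, using the article’s figures:

```python
import math

ARMS_VOLUME = 7e12   # cubic light years, the arms estimate quoted above
SECTORS = 3600
THICKNESS = 926      # light years, average thickness of the disk section

sector_volume = ARMS_VOLUME / SECTORS    # ≈ 1,944,444,444 cubic ly
face_area = sector_volume / THICKNESS    # ≈ 2,099,832 square ly
side = math.sqrt(face_area)              # ≈ 1,449 ly on a side

print(round(sector_volume), round(face_area), round(side))
```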

Appreciating the size of a sector

To get a real handle on how big this is, let’s assume that our hypothetical green lantern is based at the extreme bottom front left, labeled α, and that some emergency occurs just barely within his area of responsibility at the extreme top right back. In other words, he has to cross the sector from one corner to the other, the distance between the points α and γ.

We know that β is a right angled corner, so what we have is a simple triangle in which we know only one length – βγ, defined as 926 light years.

But, we can also see that αβ is the hypotenuse of another right-angled triangle, and we know the length of both of its other sides (1,449 light years each), so good old Pythagoras tells us what we need to know:

    αβ^2 = 1449^2 + 1449^2 = 2 × 2,099,601 = 4,199,202; therefore,
    αβ = √4199202 = 2049.195.

Now we have two sides of the triangle αβγ – 2049.195 and 926. So:

    αγ^2 = 2049.195^2 + 926^2 = 4199202 + 857476 = 5056678;
    αγ = √ 5056678 = 2248.706 light years.

Now, the green lantern corps can travel at FTL speeds, but the actual speed is rarely if ever stated out loud; it’s “as fast as their will permits”. So, let’s throw some reasonable FTL speeds around and see how long this theoretical corner-to-corner trip will take.

    At 10c, 2248.706 / 10 = 224.87 years.
    At 100c, 22.48706 years.
    At 1000c, 2.248706 years.

Hmm, that’s not working too well. So, let’s press on to more radical speeds:

    At 10,000c, 0.2248706 years = 82.13398665 days (defining a year as 365.25 days).
    At 100,000c, 0.02248706 years = 8.213398665 days.
    At 1,000,000c, 0.8213398665 days = 19hrs 42m 44s.

Based on those numbers, to make a sector patrolable in any practical sense, speeds of between 50,000 and 1,000,000 times the speed of light are required.

But, by scaling the problem to numbers that we can all comprehend, we start to get a real impression of just how big a region of space we’re talking about.

The Size of a sector, part 2

But wait a moment – how are the people of gamma supposed to tell alpha that there’s a problem? A radio message will take over 2,200 years to get there – and be very hard to even detect, as the earlier article points out. So it’s not just the green lanterns that have to travel at ridiculous speeds, it’s everyone else.

The alternative, since not all the people being protected even have space flight, is for the Green Lantern to visit regularly, showing the flag and looking for trouble. And that points us back toward those ridiculous travel speeds.

The size of the galaxy, part 2

Let’s imagine a green lantern on the outer rim of the galaxy. Every now and then he has to report back to Oa. Most of the time, he creates a space warp that conveniently gets him there, but every now and then, there can be reasons for doing the trip the long way around.

Before we can assess that, however, we need to know where Oa is located.

Well, there are three logical possibilities – it’s either at the outer edge of the galaxy, it’s in close to the galactic core, or it’s somewhere in the middle of the disc.

to travel around the galaxy, one has to skirt the center, adding to the travel time

This diagram illustrates the worst-case that results. The three proposed locations for Oa are labeled β, γ, and ε, respectively, while α remains our point of origin. Even without the black hole at the center (4), there would be enough radiation sources that travel straight through the core would be inadvisable. So, to safely get to β, we need to go to point 1 first. Similarly, to get to γ we need to go to point 2 first; and to get to ε, we need to go to 2, then to 3. Points 5 and 6 denote the ‘edges’ (top and middle, respectively) of the bulge.

A little thought will show that α-to-1 is the hypotenuse of a triangle, with 4 at its other corner, and that β-to-1 will be exactly the same length, and so will α-to-2. It’s only once past the dangerous central galaxy that the course is altered by the different locations of Oa.

    According to the cross-section diagram shown earlier, distance 1-4 is going to be 10,000 light years, and alpha to 4 will be 10,000 + 40,000 = 50,000 light years. That means that the first-leg distance is:

    α-to-1 = 1-to-β = α-to-2 = √ (10,000^2 + 50,000^2) = 50,990 light years.

    Therefore, α to β is twice this, or 101,980 light years.

    2-4-γ forms a triangle with the same 4-2 measurement as 4-to-1, 10,000 light years, but the long axis is 20,000 light years less than the 50,000. So the distance from 2 to γ is about 31623 light years. So the total trip from α to γ will be 50,990 + 31,623 = 82,613.

    α-to-2-to-3-to-ε is a more complicated problem, but we can easily calculate the distance direct from 2 to ε; while the additional detour to 3 will add to that, it would be a relatively small error. So, the length to ε from 4 is going to be 40,000 less than the 50,000, or 10,000; and therefore the direct distance from 2 to ε will be about 14,142. Round it up to 14,400, and that should be more than enough to compensate for the more complex course; and the total trip from α to ε is going to come to roughly 50,990 + 14,400 = 65,390 light years.
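Those three route lengths can be checked with the same Pythagorean approach – a sketch of my own, using the 10,000 / 40,000 light-year figures from the cross-section:

```python
import math

bulge_radius = 10_000   # light years, radius of the core bulge
rim_to_bulge = 40_000   # light years, alpha to the edge of the bulge

# First leg skirts the core: alpha to point 1 (and, by symmetry, to point 2)
leg1 = math.hypot(bulge_radius, bulge_radius + rim_to_bulge)        # ≈ 50,990 ly

to_beta = 2 * leg1                                                  # far rim
to_gamma = leg1 + math.hypot(bulge_radius, rim_to_bulge - bulge_radius)
to_epsilon = leg1 + math.hypot(bulge_radius, bulge_radius)          # direct 2-to-ε

print(round(to_beta), round(to_gamma), round(to_epsilon))  # → 101980 82613 65132
```

The last figure is the direct 2-to-ε route; the article rounds that 14,142-light-year leg up to 14,400 to cover the detour through point 3, which is why it quotes 65,390 rather than 65,132.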

Now let’s apply those earlier speed estimates (50,000 and 1,000,000 times the speed of light, respectively) and calculate some travel times:

    α-to-β @ 50,000c = 2.0396 years.
    α-to-γ @ 50,000c = 1.65226 years, or about 20 months.
    α-to-ε @ 50,000c = 1.3078 years, or about 15½ months.

    α-to-β @ 1,000,000c = 0.10198 years = 37.248195 days.
    α-to-γ @ 1,000,000c = 0.082613 years, or 30.2 days – call it a month.
    α-to-ε @ 1,000,000c = 0.06539 years, or about 24 days.

The more ridiculously fast we make the travel, the less of a problem this becomes.

The Forest

There’s another saying – that you sometimes can’t see the forest for the trees. How many stars are likely to be present in a single sector?

In that earlier article, I calculated as a very rubbery best-guess that there were 220,000 million stars in the disc-region of the milky way. If there are 3600 sectors, that means that on average, each will contain 61,111,111 stars. From the earlier calculation of the volume of a sector (1,944,444,444 cubic light years), that means that each would occupy roughly 31.82 cubic light years, or a sphere 1.966 light years radius, on average. So the average gap between stars will be twice that, or one star every 3.933 light years.

Corner-to-corner in a sector? 2248.706 light years? That means running into (on average) 572 stars – but one is our departure point, and one our destination, so that’s 570 in the way, en route.

    At 10c, that would be one every 224.87/570 = 0.3945 years = 1 every 144 days. That’s doable.
    At 100c, that becomes one every 14.4 days.
    At 1000c, 1.44 days.
    At 10,000c, 0.144 days = 3.456 hours.
    At 100,000c, 0.3456 hours = 20.736 minutes.
    At 1,000,000c, 2.0736 minutes. Constantly. For 20 hours or more.
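The encounter-rate arithmetic above can be sketched in a few lines (my own sketch; note that it works from the 3.933-light-year average gap, where the article divides total trip time by 570 stars, so the answers differ very slightly):

```python
diagonal_ly = 2248.706   # corner-to-corner distance from the earlier section
gap_ly = 3.933           # average spacing between stars

# Stars along the route, excluding the origin and destination systems
stars_en_route = round(diagonal_ly / gap_ly) - 2

def interval_hours(speed_c):
    """Hours between star encounters at `speed_c` times lightspeed."""
    return gap_ly / speed_c * 365.25 * 24

print(stars_en_route)                     # → 570
print(round(interval_hours(10_000), 2))   # → 3.45 hours at 10,000c
```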

I submit that with size, and radiation output, and potentially hostile residents, anything faster than about 7,000 times the speed of light involves impossible speeds of navigation – that would be a course correction every 5 hours or so, giving at least half a night’s sleep. Drillers and fishermen have been operating on a four-hours-on, four-hours-off schedule for years, and it’s not exactly unfamiliar territory for the military, either.

But if that’s our top speed, then the corner-to-corner sector trip will take about 117 days. And that’s far too long for a green lantern to be able to respond to an emergency.

But what’s the alternative?

Challenging assumptions

Okay, so let’s start by chucking the idea of 3600 sectors, and allow there to be more – many more. In fact, let’s look at stellar populations, make a few sci-fi-valid assumptions, and derive an estimate for just how big a sector should be – and use that to determine how many sectors there should be.

Let’s start by thinking about systems of significance – because some of them won’t be.

For a start, one of the inherent assumptions is that if life is possible, it will find a way; inhabited systems will be common. Next, let’s assume that for every inhabited system, there will be 1½ systems containing significant resources, but no life, giving those inhabited systems something to fight over, and something to kick-start interstellar expansion. And, because a system can have no significance other than being innately interesting for some reason, let’s say that such ‘scenic’ systems add another ½ per inhabited system.

How many inhabited systems can one Green Lantern protect? Well, 1/3 aren’t advanced enough, technologically, to get themselves or anyone else into trouble; but that makes them an easy target for conquerors and would-be exploiters. 1/3 would be advanced enough to fend for themselves and enlightened enough not to try and exploit others (but they can still get into trouble occasionally). That leaves 1/3 as potential troublemakers.

Let’s assume that each of the troublemakers has to be visited every year to keep an eye on them, and that such inspections take at least 3 days, not counting travel time. The more advanced and enlightened worlds might need to be visited once every 5 years for a day; and the primitive worlds once a year for a day.

So 1/3 of the stars need 3 days attention a year; 1/3 need 1 day’s attention; and 1/3 need 1/5 of a day. Add those up, and you get 4.2 days per interesting star. Throw in a couple of days of travel between them, and you get 8.2 days per star system of interest.

365 days in a year, divided by 8.2 days, gives 44.5 systems of interest. But there’s an assumed inefficiency here – sometimes you will be able to deal with one thing while en route to deal with another. So let’s increase that workload 300% and then allow for a little time off each year – giving 120 or so star systems.

With those numbers as a rough starting point, I get 61 inhabited systems, 93 worlds with significant resources, and 30 systems of other galactic significance, and a net stellar population of 1200 stars under one Green Lantern – on average.
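The workload estimate can be laid out explicitly – a sketch; every ratio and visit length is the article’s assumption, and the travel figure of 4 days is my reading of the “couple of days of travel”, counting both legs, since that is what makes the 8.2-day total work:

```python
# Days of attention per year per inhabited system, by type
troublemaker = 3.0     # 1/3 of systems: a 3-day inspection every year
primitive = 1.0        # 1/3: a day each year
advanced = 1.0 / 5     # 1/3: a day every five years

attention = troublemaker + primitive + advanced   # 4.2 days, as in the text
travel = 4.0                                      # assumed: two days each way
days_per_system = attention + travel              # 8.2

naive = 365 / days_per_system        # ≈ 44.5 systems per Lantern
boosted = naive * 3                  # +300% for en-route efficiencies
print(round(naive, 1), round(boosted))  # → 44.5 134; the article trims to ~120
```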

Based on that premise, I divided the galaxy up so that green lanterns only had one galactic arm each within their sectors, and used stellar densities to divide the galaxy up into 305 regions, each of which would contain 400 sectors. I also found that I needed multiple strata or layers. In fact, when I counted them up, I got 350. Put those together, and you end up with 8,200,000 sectors, as the diagram below makes clear (the dots were my method of counting them; each color is 50 regions or strata):

Click on the image for an even larger (more legible) version in a new tab.

That really puts into perspective just how far wide of the mark that 3600 sectors was, doesn’t it?

Enhanced functionality

But this defines an average sector – as noted, some regions could have as many as 20 times these numbers, while others have less.

It can be presumed that with 20 times the standard number of inhabited systems – 1,220 of them – there would be twenty times the number of systems capable of providing a Green Lantern to the Corps. Instead of one Green Lantern, such a sector might have ten or twenty. Add in the fact that as stellar densities go up, travel time from one star to another goes down, because the stars are closer together – which means that fewer Green Lanterns are actually needed in such dense Sectors.

What about the sectors with fewer inhabited systems? Potentially, one Green Lantern could look after multiple adjacent sectors, but travel times form a significant restriction, so there are limits to this sort of thing. Fortunately, there’s an excess of Green Lanterns from the more densely-populated sectors, so a few of those can be “exiled” to the galactic periphery, perhaps as a temporary tour, eventually rotating back to their more-populated home sector.

The size of a sector, revisited

Instead of 3600 sectors, dividing the galaxy up into 8,200,000 makes them significantly smaller – so much so that it’s worth revisiting the physical size of a typical sector, and recalculating the corner-to-corner (worst case) travel times.

There are two possible approaches to the calculation: we could use the density of stars derived earlier, multiply by 1200, and get one answer for the volume; or we could take the estimated volume of the Milky Way and divide that by the number of sectors. In theory, both should give the same answer.

But I have the suspicion that the packing problem might be a source of significant error with the first approach.

Not familiar with the Packing problem? Consider a box of oranges. Your job is to arrange them to get as many as possible into the box, i.e. to minimize the wasted space.

stacking oranges one on top of another is inefficient

If you simply stack them one on top of another (as shown above), there is a huge amount of empty space – each orange takes up a cube of sides “2 orange-halves” long, a volume of 8o^3, but each orange only fills 4/3πo^3 = 4.19o^3. Almost half of each orange’s cube is empty space.

Instead, each row nests in the hollow created by the oranges of the layer below, effectively interleaving the layers of oranges. Calculating the difference isn’t particularly relevant, but ANY improvement is significant. And you can improve packing density even more by choosing slightly smaller oranges for the ‘indented’ layers.

I’m concerned that taking the spherical volume controlled by each star and simply multiplying by the number of stars might assume perfect stacking, or might assume linear stacking like the example shown, and any rounding error multiplied by 1200 is going to be significant.
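The concern is easy to quantify. Here is a sketch, using the orange’s radius as the unit: naive cubic stacking wastes nearly half the space, while the best possible sphere packing (the face-centred-cubic arrangement proven optimal by the Kepler conjecture) still wastes about a quarter:

```python
import math

o = 1.0                                    # orange radius, the "o" in the text
cube_volume = (2 * o) ** 3                 # 8 o^3 per orange, stacked naively
orange_volume = (4 / 3) * math.pi * o**3   # ~4.19 o^3

naive_fill = orange_volume / cube_volume   # ~0.524 - almost half empty
best_fill = math.pi / (3 * math.sqrt(2))   # ~0.740 - the FCC packing limit
```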

So let’s do it in exactly the way we derived the size of a 3600th-sector.

    7 million million cubic light years divided by 8,200,000 = 853658.5366 cubic light years each, =
    a cube of sides 94.86 light-years across. Call it 95 light years for convenience.

    Aside from the dimensions and proportions, the diagram representing a sector hasn’t changed.

    α-to-β ^2 = 95^2 + 95^2 = 2 × 9025 = 18050;
    α-to-β = 134.35 light years.

    α-to-γ ^2 = 95^2 + 134.35^2 = 9025 + 18050 = 27075;
    α-to-γ = 164.545 light years.

    At 10c, that’s 16.4545 years.
    At 100c, that’s 1.645 45 years = 20 months.
    At 1000c, that’s 0.164 545 years = 2 months.
    At 10,000c, that’s 0.016 4545 years = 6.010006125 days.
    At 100,000c, that’s 0.001 645 45 years = 0.6 days = 14.424 hrs.
    At 1,000,000c, that’s 0.000 164 545 years = 0.06 days = 1.44240147 hours, = 86.544 minutes.

    At 60,100c, that’s exactly 24 hours.
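The whole table can be reproduced in a few lines of Python; the only geometry needed is that the face and body diagonals of a cube are √2 and √3 times the side:

```python
import math

sector_volume = 7e12 / 8_200_000       # cubic light years per sector
exact_side = sector_volume ** (1 / 3)  # ~94.86 ly; the text rounds to 95

alpha_beta = math.sqrt(2) * 95         # face diagonal, ~134.35 ly
alpha_gamma = math.sqrt(3) * 95        # body diagonal, ~164.54 ly

def crossing_days(speed_in_c, days_per_year=365.25):
    """Days to cross the sector corner-to-corner at a multiple of lightspeed."""
    return alpha_gamma / speed_in_c * days_per_year

# crossing_days(10_000) ~ 6.0; crossing_days(60_100) ~ 1.0 (the 24-hour mark)
```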

The Starfleet Problem

So, we have 8.2 million sectors that need Green Lanterns. Most need only one, but a significant number need anywhere up to 20, and a significant number can’t supply even one, and so need to “borrow” one from one of the sectors with multiple GLs. Which means the average of those higher sectors isn’t going to be 10.5, it’s going to be more like 11.5 or 12.

If 20% of the sectors need to provide 12 GLs and 20% provide none, with the remaining 60% providing one each, that’s a total of 24.6 million GLs that need recruitment and training. Once trained, they need to maintain their proficiency, so that’s a further training burden.

How long does the average Green Lantern last? Maybe 20 years, maybe less? That means that 1.23 million need to be trained every year. And, if they have to renew their qualifications every 5 years, but that takes a fiftieth as long as the training, that’s another 0.0984 million ‘trainees’ a year. Total: 1.3284 million.
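Putting the recruitment numbers above into code (the 60% of sectors supplying a single Green Lantern each is implicit in the text’s total):

```python
sectors = 8_200_000

# 20% of sectors supply a dozen GLs, 20% supply none, the rest one each.
total_gls = sectors * (0.20 * 12 + 0.20 * 0 + 0.60 * 1)   # 24.6 million

career_years = 20
new_trainees = total_gls / career_years                   # 1.23 million / year

# Requalification: everyone, every 5 years, at 1/50th the training cost.
requal_trainees = total_gls / 5 * (1 / 50)                # ~0.0984 million / year

training_load = new_trainees + requal_trainees            # ~1.3284 million / year
```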

How many trainers are there to a trainee? How much allowance has to be made for trainees that wash out? How many administrators and other support staff are needed?

This brings us headlong into the Starfleet problem.

There is an episode of The Next Generation which follows Wesley Crusher as he is tested for entrance into Starfleet Academy. Four gifted students have been preselected, but there’s only one space available. The other three are out of luck – for this year’s intake.

If you have an organization like Starfleet, you are going to get millions upon millions of applicants per year – if not billions. If there are 3,000 inhabited star systems in the Federation (a number plucked out of thin air) with an average of 1,000,000 inhabitants each (another number plucked from the ether with absolutely no justification), that’s 3000 million people. Earth alone, even after the calamities in the Star Trek history, is likely to have at least that number, and so are a number of other worlds. Kronos (the Klingon home world) and Vulcan come to mind, for example. All up, a minimum population of at least 12 billion people, and potentially considerably more.

If one percent a decade apply, that’s 120,000,000 applications, or 12 million a year. And if only 1% pass pre-application screening, that’s 120,000 applications. For how many openings? 30,000? 20,000? Ten?
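The funnel, in code, using the admittedly plucked-from-air percentages above:

```python
population = 12_000_000_000                        # the minimum estimate above

applicants_per_decade = population * 0.01          # 120 million
applicants_per_year = applicants_per_decade / 10   # 12 million
survive_screening = applicants_per_year * 0.01     # 120,000 hopefuls per year
```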

It’s clear that the producers and writers of the episode in question had thought about this, and hence the 1-in-4 cut-off.

But here’s the rub: There is no certainty that the applicants from Moomba-III that are accepted are better than the applicants from Nonga-II that were rejected.

Starfleet is not an elitist organization; it’s not geared to recruit the best of the best – it’s geared to reject the excess while distributing its representation as broadly as possible.

And yet, in virtually every episode of TNG, and DS9, and Voyager, and more, Starfleet is portrayed as being the best of the best. So the portrayal of the recruitment process, while logical, is flawed – and it is also inconsistent with the portrayal of the organization outside of this episode.

The Starfleet problem, then, is this: how do you recruit the best of the best when they are scattered throughout the Federation?

If instantaneous communications galaxy-wide are possible, as shown in Star Trek’s various incarnations, it becomes possible to do so – but that invalidates the entire premise of the drama within the episode in question. For this reason, I’ve never considered the episode as canonical; it falls through a logic hole.

The Green Lanterns – do they have such instantaneous communications? Some adventures suggest yes, others suggest no.

A bigger problem, though, is the logistics required to actually train that many recruits. And house them. And feed them.

The Logistics Of Galactic Organizations

And therein lies the problem. These calculations, for the first time, create a practical appreciation of the size of the galaxy, and hence of the size of any galaxy-wide organization. And the results just don’t fit with the descriptions of those organizations in science fiction and other media.

What’s more, the questions scale – they apply just as reasonably to an organization like Starfleet, even though that organization only operates in somewhat less than one quadrant of the galaxy.

They would scale to the local interstellar region, where small empires of 50-100 star systems might exist.

You can even scale them to be appropriate to an empire or kingdom in D&D terms – the questions are similar (small communities instead of stars), and the results are just as valid.

Once you can get a handle on the scale of your organization – be it a thief’s guild or a multinational church or the political organization of a nation – you can start to properly consider the logistics that are necessary for that organization to function.

There’s going to be an inherent logic that makes obvious sense to you. The consequences may well be surprising – who saw 8,200,000 sectors coming? – but they will be valid, and that will show.

Or, more accurately, the flawed extrapolations of incorrect assessments of scale will no longer be visible – romantic notions like 3600 sectors that look good on paper but make no sense in reality.

Questions Of Scale

But what, you may be wondering, if my assessment of the frequency of inhabited worlds is wrong? What if there aren’t 60-odd inhabited systems in a collection of 1200 stars, but only 30, or 20?

Obviously, the size of sectors would increase somewhat – but not by very much; the distance between solar systems is unaffected, and that imposes a hard limit on what sounds plausible. Even 60,100 times the speed of light is pushing credibility to the limit.

Distance matters far more than most people appreciate. That’s why improvements in the technology of moving things around tend to have massive national and international repercussions; this is one of the most under-appreciated pillars of society.

If there’s one lesson from history that should be learned by all, it’s this: When people can do in days what would have taken weeks or months previously, society begins to change. When people can move freight around at the same pace, the transformation of society becomes inevitable.

  • When humans had to carry everything on their own or their animal’s backs, mobility was limited, and so was the size of society.
  • When the Romans introduced roads, it became far more efficient to move goods and people around. While carts had already existed, this was the change that enabled Empires to form.
  • The age of Sail made international travel and commerce possible beyond one’s immediate neighbors.
  • The age of Steam brought profound social impacts that altered every aspect of society, either directly or indirectly.
  • The aircraft completely changed the rules of such trade. We’re still discovering and reacting to the ramifications of that – the most recent lesson being disrupted supply chains.
  • But already, we can see the age of air freight coming to an end – not because of a lack of fuel, as was once thought to be the likely problem, but because of the climatic consequences. It seems likely that some reversion for cargoes of lesser importance will take place – unless we invent some sort of teleportation, of course.

Distances matter, and distances are a reflection of the proper appreciation of scale. This article has given everyone the basic tools that they need, and shown how to apply them; I consider that to be a very good day’s work.


A serving of Humble Pi


I came across a remarkable mathematical fact the other day, which immediately gave me the idea for this post.

Yet, while I noted the fact, and roughed out a structure for this article, when the time came to actually write it, the gaming relevance that had been so obvious and self-evident that I had not written it down completely escaped me!

I can only hope that by the time I get to the end of my notes, it will have come back to me!

Introduction / Preface

I should begin by thanking Peter-3699 of Quora, who posted the remarkable mathematical fact that inspired this article – I’ll link to it when it becomes relevant.

Every non-Wikipedia link in this article is either from him or from a comment to his post, or from a page so linked, and so (arguably) would not exist without his post.

Any readers with visual impairment should note that I have gone to some trouble to quote most of the mathematical formulae discussed in this article as Alt-text, so you won’t be left out. I can’t make it any easier for you, I’m afraid, but I hope that it will be better than nothing.

The remarkable property of Pi

The properties of Pi have long fascinated mathematicians – it is what is called an Irrational Number, a number whose never-ending decimal places never repeat. There are a boatload of these known to maths these days. An irrational number is one that can’t be precisely defined as a fraction of two whole (integer) numbers (though approximations are possible).

It’s conjectured (and widely believed) that the various decimal digits (0, 1, 2, and so on up to 9) are evenly and randomly distributed, but this has never been proven.

Pi is one of the earliest constants known to reflect a physical property of our reality – the Circumference of a circle is 2πr and its area is πr². Those mean that the properties of cylinders and spheres also use π. But π shows up in trigonometry, and electrical formulae, and in formulas about springs, and all sorts of other places, too.

When I was a young high-school student (aged 12 or 13), I was fascinated by two facets of pi and spent many hours attempting to understand them.

The first was inspired by my discovery that you can get the logarithm to any base by dividing the logarithm in a known base of the number desired by the logarithm in that same base of the desired base. Spelling it out in words is not as elegant as showing it as a formula:

the logarithm of x to the base of n is equal to the logarithm of x to the base of y divided by the logarithm of n to the base of y.

I routinely use y=10.

I’ve found this to be useful in RPG rule analysis and construction many times, mostly for bases of 2 and 5.

(Another pair of formulas of value, while I’m in the vicinity of the subject, are

The logarithm of (x to the nth power) is equal to n times the logarithm of x.

It doesn’t matter what the base of the logarithm is so long as it is the same both times.

…and…

The logarithm of (x times a) is equal to the logarithm of x plus the logarithm of a.

It doesn’t matter what the base is so long as it is the same in all three cases.)
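All three identities are easy to sanity-check in Python; the numbers below are arbitrary test values, nothing more:

```python
import math

x, a, n, y = 7.3, 2.9, 5, 10   # arbitrary test inputs; y is the "known" base

# Change of base: log_n(x) = log_y(x) / log_y(n)
assert math.isclose(math.log(x, n), math.log10(x) / math.log10(n))

# Power rule: log(x^n) = n * log(x)
assert math.isclose(math.log10(x ** n), n * math.log10(x))

# Product rule: log(x * a) = log(x) + log(a)
assert math.isclose(math.log10(x * a), math.log10(x) + math.log10(a))
```

(Python’s two-argument `math.log(x, n)` performs exactly that change-of-base division under the hood.)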

At the time, though, I didn’t even know that RPGs existed (and to be fair, at that time, they didn’t exist in any form that we would now recognize – this was the mid-70s). Instead, I was captivated by other concepts.

I already knew that logarithm bases could be irrational, having discovered a reference to natural logarithms (logs to the base of e, a constant approximately equal to 2.71828). e also shows up in all sorts of unexpected places – for example, in modeling compound interest. In fact, it’s relevant to all sorts of exponential growth and decay, including half-lives and biological population growth.

But I couldn’t find anything anywhere about logs to the base of pi, and whether or not this was a useful or practical concept. Short answer – it is, but perhaps less than you might think.

The other question was inspired by an issue of Scientific American whose cover story focused on attempting to find patterns in various geometric representations of the distribution of prime numbers, or the results of plugging prime numbers into various formulae such as n=(P(a)-1)/2, or n=[P(a) – P(a-1)] (where P(a) is a given prime number, like 11, and P(a-1) is the preceding prime number).

Aside from being fascinating in and of itself (and endlessly time-consuming), I wondered if there was some relationship between the digits of an irrational number like pi and the distribution of prime numbers. Instead of a lattice, for example, what if the numbers were organized in growing concentric rings with 0 or 1 in the center?

Short answer: I could never find one, but that doesn’t really prove anything. It was a fun diversion, though.

Quite obviously, there have been many and ongoing attempts to calculate pi, first for its practical value and second because its nature makes it a gateway drug into some of the most abstruse realms of higher mathematics.

Babylonian mathematicians usually approximated the value as 3, which was good enough for the architectural projects of the time. This value was also used in astronomical calculations in India. By the 6th century BCE, Indians were using 339/108 as an approximation.

A thousand years earlier, in a text that was itself stated to be a copy of an even older document in ancient Egypt, the same fractional approximation of 339/108 was described.

Archimedes proved that pi lay somewhere in between 223/71 and 22/7 using the geometry of regular polygons inscribed within (and circumscribed around) a circle, which give bounds of ever-increasing accuracy with more ‘faces’ or ‘gons’ (“poly” means “many”, so “polygon” means “many gons”). For some unknown reason, he stopped at a 96-sided polygon even though his technique required only patience to be extended a considerable distance further.

So, pi is important, and that has led to many attempts to calculate it, to get back to the point.

In fact, the Pi Formulas page of Wolfram Mathworld lists no less than 135 different formulas for calculating Pi! Most of them are too exotic to explain here; I’ll get to some of those that are not in that category in due course.

But this answer on Quora got me thinking about the nature and representation of decimalized numbers…

Whole Numbers

The simplest such numbers are whole integers, with no decimals to worry about at all. The approximations of pi as “3” are representative of this. (Integers, when you dig into them, can be just as fascinating as irrational numbers. For example, there are an infinite number of them, but for every single one of them, there are an infinite number of numbers that aren’t integers – which is a gateway into the very strange world of the mathematics of infinity.)

Simple Fractions

As soon as you come up with the concept of measuring some objective reality, you start discovering the world of simple fractions. For example, if you have an object of a particular length, the midpoint is found by dividing that length by 2. If the length as measured happens to be evenly divisible, this is easy; but if it is not, you end up with either a remainder (not useful) or a fraction, 1/2, included in the answer.

Divide something into 3, and you get the fractions of 1/3 and 2/3 being defined, and so on.

Some fractions that are technically “simple” go beyond what I would consider “simple” in an everyday interpretation of the word. “22/7” is simple in both interpretations, and is perhaps the simplest real approximation of pi; the fractional approximations given earlier, like 339/108, may technically be simple, but are pushing the limit of the everyday sense of the word.

Fractions are inherently bound up in geometry and lead into angles and trigonometry. But they remain a finite tool until something else is added to the mix: the invention of a zero.

The Invention of Zero

Zero makes positional notation possible. Without it, you can’t have decimals. “10” is positional notation; the position of the “1” is meaningful, with the character ‘0’ being used to describe that position.

Ancient Egypt had a zero concept for use in accountancy, but did not use positional notation; each number was represented by one or more hieroglyphs. The ancient Babylonians came close, with a symbol used as a placeholder for a zero in their base-60 system.

Modern representations of time that would be familiar to all readers – 3’59” for example – preserve this base-60 system, with 60 seconds equating to a minute and 60 minutes to an hour. The symbols ‘ and ” identify the significance of the 3 and the 59 that – in this example – precede those symbols, respectively. This is a somewhat more refined version of the Babylonian system.

The ancient Greeks had no symbol for zero, and no positional notation. In fact, Greek philosophers opposed the concept of zero as a number very strongly for a very long time, going so far as to translate their numbers into the Babylonian number system for calculations and then translating the results back into Greek to give their results, just so that they could avoid contaminating their number system with those pesky zero-equivalents. Ptolemy broke with this trend and started using a zero-symbol as both a placeholder and a digit, but this did not catch on.

So it was that ancient Romans weren’t able to inherit a zero from the Greeks, and the whole Roman Numerals thing happened instead. “MMCCCXVI” is partially positional (the “V” and “I” mean different things depending on their order, and “IX” applies this to the “I” and the symbol for 10, “X”). But M, C, and X were not used in a purely positional manner; instead, each represented “one” of whatever units were used. So “MM” stands for “two thousands”, and “CCC” for “three centuries”. “MMCCCXVI” is “2,316”. Romans did have a digit that represented “no remainder” after mathematical division.

Slowly, the twin concepts of zero and positional notation within numbers were built up by different societies until a Persian mathematician synthesized his own mathematics from Hindu, Greek, and Arabic sources, unifying concepts from each into a single structure of numbers. “Algoritmi” was the Latinization of Al-Khwarizmi’s name, and has developed into the modern word “Algorithm”. Al-Khwarizmi wrote (and taught) that “if no number appears in the place of tens in a calculation, a little circle should be used ‘to keep the rows’.” This circle was called Sifr, and it was in every practical respect the forerunner of what we know as zero today.

From these beginnings, the concept of zeros and base-ten mathematics spread to Europe by way of the Spanish Moors – championed in particular by Gerbert of Aurillac – and it is from this Arabic transmission that the term “Arabic numerals” derives.

Mathematical calculations prior to the zero were at the level used to teach basic arithmetic to kindergarten children and other early-year students. When I was going to school, the highest form of such math was the memorization of the times tables, which used rote learning to embed concepts into applied mathematics without explanation of why numbers worked the way they did. But the fact is that every advance in arithmetic above elementary addition, multiplication, division and subtraction only works thanks to the zero and the positional notation that it makes possible.

Simple Decimals

Once you have zero and positional notation, you can have simple decimals, essentially writing a number like “2 and 3/10ths” as “2.3”, and a number like “4 tens, 3, and 57 one hundredths” as “43.57”.

Non-repeating Long Decimals

Somewhere beyond two or three decimal places, you enter the realm of “Long Decimals”. These are numbers that include fractions whose decimal conversion can be fully shown, no matter how long and complicated. “Ten thousand, seven hundred and forty-two one-million-forty-eight-thousand-five-hundred-and-seventy-sixths” can be written “10742 / 1048576” as a fraction, or 0.0102443695068359375. For convenience, long decimals sometimes use a space after every third decimal digit, just as “1048576” is sometimes written “1,048,576” – so “0.0102443695068359375” becomes “0.010 244 369 506 835 937 5” – but this decimal representation is the exact number represented by that particular fraction.

Simple Repeating Decimals

Long before you’ve worked these out, however, you have discovered simple repeating decimals. “1/3” is the simplest of these – it’s 0.333333333333… and the decimals continue on indefinitely.

One quarter and one fifth don’t have these properties, but one-sixth does – 0.166666666666666… – and so does one ninth, or 0.11111111111111… – and, in fact, so does every fraction whose denominator (in lowest terms) is evenly divisible by three. So one seventy-second is “0.01388888888888…”.

These are frequently denoted by putting a dot on top of the decimal place that is repeated – so:

1/3 is written in decimal as 0.3 with a dot above the three; 1/6 is written in decimal as 0.16 with a dot above the 6; 1/9 is written in decimal as 0.1 with a dot above the 1; and 1/72 is written in decimal as 0.138 with a dot above the 8. Compare these with the long-form versions quoted in the text above.

Complex Repeating Decimals

One seventh is even messier, as are any fractions whose denominators are evenly divisible by 7. One 63rd, for example, is “0.015873 015873 015873 015873 015873…”, in which a string of 6 decimals is repeated an infinite number of times.

These are usually written with a dot over the first and last decimal in the repeating string, so

1/63 is written in decimal as 0.015873, with dots above the second zero and the three, to indicate that those digits, and all those in between, repeat indefinitely.

These clearly represent a whole new order of complexity when it comes to decimals, but we’re still not at the complexities represented by the digits of pi.
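If you’d like to find the repeating block of any fraction yourself, long division does it: the moment a remainder repeats, the digits repeat from that point. Here is a sketch (for fractions between 0 and 1):

```python
def repetend(numerator, denominator):
    """Split a decimal expansion into (non-repeating prefix, repeating block)."""
    seen = {}      # remainder -> position at which it first appeared
    digits = []
    r = numerator % denominator
    while r != 0 and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // denominator))   # next digit of the expansion
        r %= denominator
    if r == 0:                                 # terminating decimal
        return "".join(digits), ""
    start = seen[r]                            # the cycle begins here
    return "".join(digits[:start]), "".join(digits[start:])

# repetend(1, 6)  -> ("1", "6")      i.e. 0.1666...
# repetend(1, 63) -> ("", "015873")  the six-digit repeating block
```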

Non-Repeating Decimals as Fractionated Series

And that brings me back to the answer on Quora by Peter in response to the question, Can π be expressed by a series?

In response, Peter offered up the following simple series:

1 + (1/4) + (1/9) + (1/16) + ... = (pi squared) / 6

But I think it becomes even more obvious when written,

1/(1^2) + 1/(2^2) + 1/(3^2) + 1/(4^2) + ... = (pi squared) / 6
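That series (the famous “Basel problem”, solved by Euler) is easy to test for yourself, though it converges slowly:

```python
import math

def basel(terms):
    """Partial sum of 1/1^2 + 1/2^2 + 1/3^2 + ..., which approaches pi^2 / 6."""
    return sum(1 / k**2 for k in range(1, terms + 1))

approx_pi = math.sqrt(6 * basel(100_000))
# agrees with math.pi to roughly 5 decimal places
```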

I had encountered a few of these before, but they were more complicated. For example, there’s this one:

4 - (4/3) + (4/5) - (4/7) + (4/9) - (4/11) + ... converges to pi

The primary source referred to by Peter, the Pi Formulas page of Wolfram Mathworld, as mentioned earlier, has a great many more. The series listed above is almost as elegant as Peter’s (only the perpetual alternation of addition and subtraction prevents it from equalling that mark). There are others that are a lot more complicated.

These define a number not in terms of its actual value, but in terms of a process that can be used to calculate it. The problem is that to extend the number of digits of pi, you have to calculate every term up to the depth of your required decimal places, and the number of terms to be calculated grows faster than the decimal places do.

For example, in the formula above, it’s a sure bet that eventually you will get to 4/81 – that will be somewhere around the 40th term. But 1/81 is 0.012345679 012345679 012345679… – so that’s 40-or-so terms and we’re still only on the second decimal place!
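You can watch that sluggishness directly. Summing 41 terms of the alternating series above (enough to include the 4/81 term) still leaves the error in the second decimal place:

```python
import math

def leibniz(terms):
    """Partial sum of 4 - 4/3 + 4/5 - 4/7 + ..., which converges to pi."""
    return sum((-1) ** k * 4 / (2 * k + 1) for k in range(terms))

error = abs(leibniz(41) - math.pi)   # ~0.024: still the second decimal place
```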

There are some formulas that converge more quickly on pi; for example, this one…

pi divided by four equals the sum from k=0 to k=infinity of (1 divided by [4k+1]) - (1 divided by [4k+3]).

Observe that this is simply a more elegant way of describing one of the formulae given above.

…but by increasing the complexity of the terms of the series and using factorials, an even better method is possible:

Pi = the sum from n=0 to infinity of a series of terms defined as n! times (2n)! times (25n - 3) divided by (3n)! and then divided by 2^(n-1).

Factorials, for those who don’t know (or don’t remember), are the product of every whole number from a given number down to one:

n! = n × (n-1) × (n-2) × (n-3) × .... × 3 × 2 × 1, which also equals n × (n-1)!

So:

  • 3! (described as “Factorial three” or “The Factorial of three”)= 3 × 2 × 1 = 6,
  • 5! = 5 × 4 × 3 × 2 × 1 = 120, and
  • 10! = 10 × 9 × 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 = 3,628,800.
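With factorials in hand, the series quoted above can be tested directly. Each successive term is roughly a thirteenth the size of the one before (the ratio tends to 2/27), so thirty terms exhaust ordinary double precision:

```python
import math

def pi_series(terms):
    """Sum of n!(2n)!(25n-3) / ((3n)! * 2^(n-1)) for n = 0, 1, 2, ..."""
    return sum(math.factorial(n) * math.factorial(2 * n) * (25 * n - 3)
               / (math.factorial(3 * n) * 2 ** (n - 1))
               for n in range(terms))

# pi_series(30) matches math.pi to the limit of floating point
```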

Rolling Non-Repeating Decimal Functions

There used to be a monthly magazine called Science Digest, which I quite enjoyed reading.

In the January 1990 issue, it reported on a mathematical breakthrough by two brothers, Gregory and David Chudnovsky, who extended the calculation of pi to over a billion decimal places using a new algorithm that they had developed for the purpose.

It was the sheer brilliance of how this algorithm worked that really caught my attention, even more than the feat itself. In essence, each term of the series delivers roughly the next 14 digits of pi. The formula itself is a fairly ugly thing, but it works.

Sorry, there's no way that I'm going to try and formulate this into text! A web search for “Chudnovsky Formula” should find it easily enough, but unless you are a SERIOUS math geek, it's not going to be worth your effort.

This formula yields digits of pi about 14 at a time – each successive term of the series adds roughly 14 more correct digits.

Their work (and that of several subsequent researchers) was actually built upon the brilliance of an Indian mathematician, Srinivasa Ramanujan, who developed a number of innovative formulas for the calculation of pi in 1914.

To me, Ramanujan’s technique is more elegant:

1 / pi = 2 ×the square root of 2 divided by 9801 and multiplied by the sum of a series for k=0 to k=infinity, each entry of which is defined as (4k)! times (1103 + 26390 k) and then divided by (k!) ^4 and then divided by 396^(4k)th power.

…but there is no arguing with results. It’s entirely likely, however, that without Ramanujan’s formulations, the Chudnovsky brothers would not have been able to make their own breakthrough.
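For the curious, here is that elegance in action. Each term of Ramanujan’s series contributes roughly eight more correct digits, so two terms already exhaust ordinary double-precision arithmetic:

```python
import math

def ramanujan_pi(terms):
    """1/pi = (2*sqrt(2)/9801) * sum of (4k)!(1103 + 26390k) / ((k!)^4 * 396^(4k))."""
    s = sum(math.factorial(4 * k) * (1103 + 26390 * k)
            / (math.factorial(k) ** 4 * 396 ** (4 * k))
            for k in range(terms))
    return 1 / (2 * math.sqrt(2) / 9801 * s)

# ramanujan_pi(1) is already correct to 6 decimal places;
# ramanujan_pi(2) is correct to the limit of floating point
```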

In fact, for technical reasons, the approach used by the Chudnovsky brothers is used for all record attempts these days, and the current record (set on my birthday this year by Emma Haruka Iwao of Japan, and announced after verification just two months ago) stands at an astonishing 100 trillion digits (10^14, or 100,000,000,000,000).

Digit Extraction Algorithms

Astonishingly, this is not the last word on the subject! In 1997, David H Bailey, Peter Borwein, and Simon Plouffe published a paper describing a new formula for π, now known as the BBP formula.

Pi = the sum from n=0 to n=infinity of a series, each entry of which is defined as [4 / (8n+1)] - [2 / (8n+4)] - [1 / (8n+5)] - [1 / (8n+6)], the result of which is then multiplied by 1/(16^n). But note the caveat in the text below.

The BBP formula, or others like it, are now used extensively to test digits of pi calculated using the Chudnovsky formula or some variation.

This was capable of extracting any given digit of pi without calculating the preceding digits – in base-16.

You heard me. Base-16, better known as hexadecimal.

Hexadecimal uses A, B, C, D, E, and F to signify the decimal numbers 10, 11, 12, 13, 14, and 15, respectively.

A swatch of the color Turmeric

I chose the hexadecimal code pretty much at random, so I was astonished to discover I had selected a named color!

If you were to count to 36 in hexadecimal, it would be “1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 1A, 1B, 1C, 1D, 1E, 1F, 20, 21, 22, 23, 24.” Because hexadecimal was often used in computer hardware programming, it is traditional to pad the leading values with “Ø,” signifying zero (and distinguishing it from “O”, which could cause all sorts of problems in computer programs if incorrectly substituted for a zero). Each hexadecimal character represents four bits, so a pair of them describes one byte in a computer’s memory or disk space.

The range 00-FF in hex is particularly significant, because of the RGB color scheme, in which each component of a color is specified by just such a two-character, one-byte value. “FF0000” thus specifies Red, “FFFFFF” is white, “000000” is black, and “C4D14A” is named “Americium” but is actually a medium-light yellowish-green in color. Most software these days would show the user the decimal numbers (196, 209, 74) – but rest assured that the computer stores the value as the binary bytes that the hex notation directly represents!
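Splitting such a color code into its decimal components is nearly a one-liner in most languages; in Python:

```python
def hex_to_rgb(code):
    """Each pair of hex characters is one byte: red, green, blue."""
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb("C4D14A"))   # (196, 209, 74)
```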

So…. hexadecimal.

This is an example of what is now referred to as a Digit-Extraction Algorithm. Mathworld defines these as an algorithm or expression that “allows digits of a given number to be calculated without requiring the computation of earlier digits.” and adds, “The BBP formula for pi is the best-known such algorithm, but an algorithm also exists for e.”
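The digit-extraction trick itself needs modular arithmetic that’s beyond the scope of this article, but the BBP series is simple enough to sum directly, and doing so shows how fast it homes in – one hexadecimal digit (about 1.2 decimal digits) per term:

```python
import math

def bbp(terms):
    """Direct summation of the BBP series for pi."""
    return sum((4 / (8 * n + 1) - 2 / (8 * n + 4)
                - 1 / (8 * n + 5) - 1 / (8 * n + 6)) / 16 ** n
               for n in range(terms))

# a dozen terms already give pi to ~14 decimal places
```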

In 1996, Plouffe derived an algorithm to extract the nth digit of π using base-10 math to derive base-10 digits. It can even be used with a pocket calculator!

pi + 3 = the sum from n=1 to n=infinity of a series, each entry of which is defined as n times 2^n times (n!)^2, divided by (2n)!. Yes, that's all there is to it.
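This is easy to verify numerically; the Python sketch below (mine, not from the references) sums the series and lands on π + 3:

```python
from math import factorial, pi

def pi_plus_three(terms: int = 60) -> float:
    """Sum n * 2^n * (n!)^2 / (2n)! from n = 1; the series converges to pi + 3."""
    return sum(n * 2**n * factorial(n)**2 / factorial(2 * n)
               for n in range(1, terms + 1))

print(pi_plus_three())  # approximately 6.14159265...
```

The slow convergence the text mentions is visible here: each term only shrinks by a factor of roughly two, so it takes dozens of terms to reach double precision, where the BBP series gains a full hex digit per term.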

The problem is that this calculation is quite slow; in fact, several of the earlier calculations offered are faster, notably the one devised by the Chudnovsky brothers. Yet the fact that one base-10 formula has been found, however inefficient, implies that more remain to be found, so the question of whether one can approach the BBP formula in speed remains open.

The Golden Ratio

I was also intrigued to notice, amongst the many formulas listed on the relevant Wolfram Mathworld page, a couple of formulas that referenced the Golden Ratio. This is yet another irrational number, symbolized by the Greek letter phi (φ) and defined as the ratio for which this expression:

(a plus b) divided by a equals a divided by b, which also equals phi, the golden ratio.

…is true.

Which sounds really esoteric, an intellectual exercise. Here’s another way to look at it, provided by Wikipedia:

A golden rectangle divided into an a × a square and a smaller a × b rectangle

Image by Ahecht (Original); Pbroks13 (Derivative work) – Own work, Public Domain, Link.

This rectangle has one side of length a and another of length a + b. If you cut the long side to create a square of size a × a, you are left with a rectangle of size a × b – with a now the long side – which has exactly the same proportions as the original rectangle. If you calculate the ratio for which this is true, you get a value of approximately 1.618.
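For those who like to check such things: the defining equation rearranges into a quadratic, since (a + b)/a = 1 + b/a = 1 + 1/φ, so φ = 1 + 1/φ, i.e. φ² = φ + 1. A few lines of Python (a sketch of my own, not from any of the references) confirm it:

```python
from math import sqrt

# The positive root of phi^2 = phi + 1 is (1 + sqrt(5)) / 2.
phi = (1 + sqrt(5)) / 2

print(phi)                      # 1.618033988749895
print(abs(phi**2 - (phi + 1)))  # essentially zero: the defining relation holds
```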

Again, this seems like an interesting bit of trivia, but nothing important.

But the golden ratio keeps showing up in all sorts of unexpected places. Some of them are man-made, and represent ideals of aesthetics that might simply be self-fulfilling standards.

  • For example, the most popular size of postcards (and postage stamps, for that matter) is in the Golden ratio.
  • If you calculate the ratio of successive entries in a Fibonacci sequence* – each term is the sum of the two preceding numbers – that ratio converges on the Golden Ratio.
  • Sunflower florets form natural spiral patterns which are said to contain Fibonacci sequences, and which therefore involve the Golden ratio.
  • Ditto the arrangement of leaves on a plant stem.
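The Fibonacci connection is easy to demonstrate; this short Python snippet (illustrative only, the function name is my own) prints the ratios of consecutive terms and shows them homing in on 1.618…:

```python
def fib_ratios(count: int) -> list:
    """Ratios of consecutive Fibonacci terms, starting from 1, 1."""
    a, b = 1, 1
    ratios = []
    for _ in range(count):
        a, b = b, a + b       # advance the sequence one step
        ratios.append(b / a)  # ratio of the new term to its predecessor
    return ratios

print(fib_ratios(10))  # 2.0, 1.5, 1.667, 1.6, 1.625, ... closing in on 1.6180
```

Notice that the ratios alternate above and below the Golden Ratio, sandwiching it ever more tightly.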

I previously wrote about Fibonacci Sequences in The Meta-Physics of Magic (I thought I had looked at the subject even more extensively, because it’s very useful for RPG design, and usually overlooked, but evidently not – so that’s something I’ll have to do at some future point).

There are others, some confirmed, some disputed.

The last place that I expected one to show up, though, was in a formula to calculate the value of π!

References

Before I get into the concept that I think I intended to broach, I thought that I should list the references that I used in compiling the above information. In no particular order:

I think that’s all of them!

Games

In some respects, the increasing complexity of decimals is analogous to the increasing complexity of RPG plotlines. Well, it's at least a metaphor, and one that's worth exploring.

The simplest possible plot is something like “PCs see bad guy. Bad guy sees PCs. Bad Guy attacks. Someone wins.” – or, “PCs are hired to deliver a package. PCs deliver package. PCs get paid.”

This is akin to having no decimal places at all, within this analogy.

As soon as you introduce a decimal place, you are introducing a complication. “PCs are hired to deliver a package. Someone attempts to steal the package.” Suddenly, there are two paths for the adventure to take – either the PCs win, and get to deliver the package, or the thieves make off with it and the PCs have to get it back, then deliver it.

A longer decimal is akin to a complication being a gateway to a longer chain of events. “PCs are hired to deliver a fabulous gem. Someone attempts to steal it, but is beaten off. PC discovers that the gem is a fake – is it possible that the real gem was stolen during the earlier attempt, which may have been just a distraction, or was it always a fake? Is their whole mission to be a stalking horse, a lightning rod for trouble while the real gem is smuggled in by some more secret route? Or are they part of a plot to replace the real gem with this fake?”

Perhaps there are multiple groups involved, with different intentions and agendas, so that more than one of these speculations is true. Or perhaps the GM decides that whichever plot the PCs choose to investigate third is true. This is akin to a longer repeating decimal string, except that a cap has been placed on the number of times the string will repeat – call it a rounding error! And the ‘true plot’ is positionally significant.

Superficially, several investigative sub-plots like the ones implied by these “theories of the crime” might be similar, but the clever GM will take active measures to differentiate between them. Different tones, different moods, different oppositions with different rules of engagement, settings that are at least somewhat different, NPCs with different personalities.

When the PCs’ actions have repercussions into the future, such that these investigations are each the beginning of a long road, the campaign (and possibly the adventure) has become recursive, and the role of the GM has changed from that of ringmaster to that of agent provocateur. He is no longer directing the campaign; he is creating a landscape for the players to explore, or not, as they choose.

And, of course, in the long term, the campaign therefore becomes – or should become – more like an irrational number, a series of decimals that never repeats (though at times it might seem to – the decimal string “141592” occurs thousands of times within the known digits of π, for example!). The fact that one of those occurrences happens to be at the very beginning of the decimal series of digits is completely irrelevant.

It might seem at first that sandboxing is more akin to the notion of Digit Extraction, in which a given digit is extracted only when it is needed, but I would argue that it more closely resembles the Chudnovsky approach, because the content is inevitably derived from, and dependent on, the “terrain” that has already been explored by the players.

Having at least constructed a basic outline of everything – with embedded plot hooks and (metaphoric) landmines waiting for the PCs to step on them, which can be expanded upon at need – is far more accurately described by the digit-extraction analogy. The digits of π don’t change; if you extract the same digit by several different methods, they will all give the same answer. You may not know what that digit is when you start, but it’s not like Schrödinger’s Cat – it doesn’t exist in some quasi-metastable state until actually determined.

A Mnemonic Device

Aside from being at least somewhat interesting in its own right, all of this means that an understanding of decimals can serve as a mnemonic device, reminding the GM how to construct plots.

You start with the simplicity of “The PCs are hired to deliver a package. The PCs deliver the package. The PCs get paid,” and build complications and permutations and choices – and yes, a little randomness and chaos – upon that foundation. Where you stop is up to you; this could be the introduction to an entire campaign or it could be the teaser before the title sequence, with the main movie (which may or may not be related in some way to the teaser) still to follow.

Okay, so this won’t get posted on time – as I write this, I still have a lot of formulas and equations to edit and upload, and the text has to be spellchecked and edited, and all those references converted to hyperlinks.

If the graphics were done already, it might just have been possible; without that, it’s not.

So this is being made public a day late. Sorry, everyone; I’ll try to do better next time!
