Campaign Mastery helps tabletop RPG GMs knock their players' socks off through tips, how-to articles, and GMing tricks that build memorable campaigns from start to finish.

Delineating Overarching Character Traits


A technique for creating unique and interesting characters that makes their cultures richer and more detailed. Simple but comprehensive.

This image of a Mannequin in Ferengi Makeup and Uniform by Marcin Wichary from San Francisco, Calif. was first published on Flickr under the Creative Commons Attribution 2.0 Generic License, https://commons.wikimedia.org/w/index.php?curid=79570035.

I was reading something on Quora the other day about how Deep Space 9 used the overall concept of Ferengi Traits to make the personalities of Quark, Rom and Nog distinctive (and don’t worry if you don’t know who those characters are, it’s not important to the article).

The key point being proposed was that while all three fell into the general pattern of ‘Ferengi’, each had his own unique traits for which that general pattern provided context. Putting those together permitted an interpretation of those traits from the Ferengi perspective, which in turn broadened the perspective on that society from comic-book simplicity to rich and culturally-detailed.

To employ a metaphor, a spotlight on one of the characters reflected back on the overarching commonalities, exposing fresh facets of the collective generality.

My thoughts went immediately to the gaming applications. These are essentially the same thing, but four-fold: Racial, Archetypal, Cultural/Social, and Characteristic. Each of these represents a way of generalizing a character, and provides (through interpretation) specific traits that denote the individual personality.

Initially, I was focused on NPC delineation, because that’s always a topic of value to GMs, but then I realized that the same methods would work for PCs as well – and that a lot of advice offered both here and elsewhere over the years was already groping in this particular direction.

An introduction to the Architecture

I’ve tried very hard, in this article, to use different collective descriptions for each facet and sub-facet of a subject. This has two purposes – first, by using non-standard nomenclature, it invites readers to take a fresh look at a very familiar subject; and second, it helps keep it clear just what facet or sub-facet I’m talking about. The goal is to avoid boxing ourselves in with stereotypes while creating a broad range of end personalities within a particular culture of which the individual (and all other individuals) are collectively representative.

This matters because it transforms the personalities from something being dictated by rules narrative and cultural write-ups to a foundation for individuality – it lets individuals be unique while maintaining that cumulative impression.

And it matters because that’s how characters in-game would formulate their impressions of both an individual and of a collective grouping – they wouldn’t be given an overarching definition. If they were told anything at all about the race / culture, they would be given stereotypes into which they would have to ‘fit’ the individual; if told nothing about the race / culture, they would be presented with one or more individuals whom they would then have to generalize into an overall impression.

In other words, this approach is both more akin to, and more facilitative of, the situation as it would be encountered in the real world. That makes this less work for the GM, allows more creativity, and produces more unique individuals.

    Three Options and how to choose between them

    GMs can either start with a generalized pattern as a structure, or let one emerge naturally as a collective impression created by a group of individuals. Or they can occupy a half-way house somewhere in between these two extremes, offering a broad summary as a guideline and being content to extrapolate from that beginning, fleshing out the resulting general view one individual at a time.

    There are two factors that should be considered when choosing between these three options. (1) How much contact has the society in general had with the race / species? The more contact there has been, the more this pushes toward the generalized pattern as foundation. And (2), how diverse is the race / species in personality, and, within that question, how representative of their race / species does the GM want this individual to be? Diversity pushes toward the middle ground, while the desire for a less representative individual goes further and promotes the emergent collective impression as the path to follow.

    There’s even a variation on the half-way house in which the specific description is filled with half-truths and inaccuracies perpetuated through myth and legend and culture. The GM may not know what the truth behind this picture is, only that it’s partially accurate and partially invented or romanticized.

    There should never be a forced ‘one size fits all’ answer to this question; it should be different each and every time – but, once made, the choice should remain in effect for each representative of a race / species until you have good reason to change it.

The Four Stanzas Of A Character

The general picture of an individual character can be broken down into four stanzas – four paragraphs / lines that collectively delineate an individual persona. Some GMs may add a fifth, alignment, but that’s fallen out of favor in gaming circles these days.

That redefines the objective – we want to end up with a four-to-eight-sentence summary of the individual and how he represents the broader culture from which he derives.

Before we can achieve that, we need to know the subjects of these four stanzas.

    Racial Traits

    These are the racial stereotypes that collectively apply in some manner to the normal individual – even if the individual is wildly different from them, they are still defined, in relative terms, against those racial traits. “The typical Orc is boisterous and brash, ill-mannered, and prone to violence, with a huge chip on their shoulders from being suppressed as a species, and as an individual within the species.” Right away, there’s a lot in that description that will seem familiar, but there’s a nuance or two that are just a little different to the generic description of the race. It provides a subtle redefinition of the race, one that can manifest in different ways in every individual.

    Archetypes

    Similarly, in most RPGs there are archetypes – sometimes explicitly defined as character classes, sometimes not. Each archetype, in turn, carries baggage in the form of a description of the type of persona that it welcomes and develops, the personas that naturally ‘fit’ the archetype and how well-suited the individual is to their profession.

    Social Class, Associations, and Faiths

    These three are all ways that individuals associate with others, sometimes within their culture, and sometimes forming a point of connection with others beyond it. Each of them carries an expectation of behavior that forms part of the collective identity of the specific sub-group of which the individual is a member, and whether that behavior comes naturally speaks to the persona of the individual. On the other hand, if the individual rebels against one or more aspects of the group identification, that also says something about the personality of the individual.

    There can be several such groupings to which an individual belongs, but one of them will always be dominant, and their response to that dominant grouping will be definitive, providing a guideline to how they integrate (and how well they integrate) with the other groups to which they belong. These other groups provide nuance, not definition. They can warrant a mention in this stanza only when it is culturally expected that this association is definitive – and in this individual’s case, it is not.

    Characteristic Attributes

    There are three different aspects of characteristics that shape an individual – those that are relatively high, those that are unrelentingly average (relative to those around him or her), and those that are notably lacking or low (same caveat). Each of these can form an important element of the individual persona or can be negligible. The latter should be ignored for now; it’s the former that we are interested in.

    If the individual is notably stronger than those around them, this will have a profound influence on them, amplifying the consequences of some typical adolescent behaviors into life-altering events. Similarly, if they are faster, more nimble, more agile, more athletic, smarter, wiser, more attractive, or more resilient, there will be profound impacts that will push them either more firmly toward the stereotype, or more strongly away from it.

    If the individual is notably weaker than those around them, or more foolish, or more stupid / easily led, less genteel, or more clumsy, these impacts will also be profound. Always being the last person picked for games or teams will amplify other attributes of the persona, and may even put the individual into situations that threaten their lives. Some may devote their lives to overcoming this handicap, no matter the cost; others will accept it and embrace another path through life.

    It doesn’t matter how many characteristics the game mechanics define; there will always be more than can easily be accommodated in a short descriptive passage of the type being discussed here. Of necessity, you need to focus on the one, two, or (at most) three that are most definitive of the individual relative to the broader population around them.

    I want to highlight something before continuing. I’ve made a big point of using terminology relating to racial / social expectations, for example, “relative to the broader population around them”, for three reasons.

    First, it’s the relative value in comparison to those expectations that shapes a persona, not the absolute value;

    Second, this accommodates circumstances of adoption / resettlement, in which the racial norms themselves deviate from the expectations of the society around the individual; and

    Third, defining these attributes in relative terms means that the individual’s raw numbers can be filtered through the relative terminology to say something about the culture from which they derive.

The Process

With the subjects of each stanza now defined, we can move on to the process of generating an individual’s persona. For each of the Stanzas, this is a four-step process that is often conducted intuitively. As with most intuition-driven events, greater understanding and control can be achieved by understanding the process intellectually, and this can provide a road-map to follow when intuition fails us.

In fact, the four-step process is so quick (and usually easy) that we can contemplate far more than the four stanzas, and that creates a need for a fifth step, placed second-last, and labeled step 4:

  1. Generic Trait to Profile Spectra
  2. Individual Placement within Spectra
  3. Alternative Interpretations & Adaptations of Individual Placements
  4. Selection
  5. Facets of Individuality from Specific Interpretations

Let’s briefly look at each of these in greater detail.

    1. Generic Trait to Profile Spectra

    I recently wrote, though I’m not sure where, “Nature doesn’t deal in absolutes, it deals in spectra”, or words to that effect – I think it might be in the Zenith-3 adventure currently being played.

    Every element in the four stanzas can be viewed as a placement upon a general range of spectra that collectively define the application of the element to the collective identity of the race / species.

    You can see this readily in the case of the characteristic attributes – the character has a specific value for each characteristic, while the full range of possibilities defines the scope of the spectrum from low-to-high. One of my very early advocacies, long before I started writing articles for Campaign Mastery, dealt with the spectrum of full possibilities permitted by the game mechanics and the placement of the individual upon that spectrum as a guide to personality traits.

    In this case, the spectrum of possibilities is reduced to just those considered ‘valid’ for the race / species, permitting a socially-relevant measure of the impact of that placement, but the older interpretation still has some value in terms of defining the significance of those racial restrictions relative to the human population base.

    If the human range is 3-18, for example (very traditional D&D scale), an individual value of 15 gives rise to certain character traits (depending on which characteristic is being discussed). If the race in question has a spectral range of 12-20, the 12 tells you something about the race relative to humans, as does the 20, and the individual’s value of 15 tells you something about where they fit within that 12-20 spectrum of possibilities.

    Set aside the individual value of 15 for the moment, though; this step is about defining the 12-20 and translating that into general descriptions of the characteristic with respect to this particular race.
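
    If it helps to see that comparison laid out mechanically, here’s a minimal sketch (Python, purely illustrative – the 3-18 and 12-20 ranges and the labels are just the examples above, not anything a particular system mandates):

        def placement(value, low, high):
            # Where a value sits within a spectrum, as a fraction from 0.0 to 1.0.
            return (value - low) / (high - low)

        def describe(fraction):
            # Purely illustrative labels for a relative placement.
            if fraction < 0.25:
                return "notably low for their kind"
            if fraction < 0.75:
                return "unremarkable for their kind"
            return "notably high for their kind"

        strength = 15
        print(describe(placement(strength, 3, 18)))   # vs the human 3-18 spectrum: notably high
        print(describe(placement(strength, 12, 20)))  # vs the racial 12-20 spectrum: unremarkable

    The same 15 reads very differently against the two spectra, which is the whole point of defining them.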

    Each of the stanzas can be treated in the same way, as a range of possibilities that define the race / species, and this step is one of defining those spectra.

    Obviously, if you don’t take racial notes, you have to repeat this process every time. When you don’t have a unified concept of the race / species in your head, that can help create one through step-wise refinement and iteration of the process; but when you do have a clear idea of the central concept of the race / species, it’s a waste of prep time to repeat the process. Either way, the process is sped up in the future with a little careful note-taking at this point.

    2. Individual Placement within Spectra

    This is where that individual’s value of 15 reenters the picture. You aren’t so much looking at what this enables the character to do, or not do; you are looking for the consequences of that specific value toward the personality of the individual. What comes naturally to him or her, what do they struggle with, and how do those things fit them into the culture surrounding them?

    Again, this step is easier when thinking about characteristics, but it’s true of all the stanzas. Social Class, for example, will have a range from those at the bottom to those most-valued by the society (usually rulers, but not necessarily so). Elves may revere those making cultural contributions far above their social standing as defined by their political influence.

    Applying a little creativity can nuance racial definitions in ways you would scarcely believe – for example, if the Brewers of Ale are the most influential in Dwarven societies, you get a very different picture of the society. If you then generalize that from the specific Beer-maker to ‘Social Lubricants’ to ‘Social Interaction Enablers’, you find that anyone who makes social interactions easier or more significant grows in stature within the resulting society, and that social interactions of all sorts become more significant within the resulting culture.

    Feasts, parties, and casual get-togethers of all sorts become more significant, more frequent, and more embedded within the society. There would be excuses for such, both informal and formally-defined, that stretch even beyond the extremes in human cultures – there would literally be an excuse for a ‘party / celebration’ every week of the year. Some of these might even be negatively contextualized in expression – commemorating a war in which such celebrations were not possible might be remembered by making ale forbidden during the first phase of the social event (to be followed by an even more extreme celebration of the victory, when social norms once again became possible). So you have a week of fasting (in terms of alcohol) and then a blow-out.

    3. Alternative Interpretations & Adaptations of Individual Placements

    So we have a spectrum of results and a placement of the individual within that spectrum. The racial profile associated with that spectrum defines what is usually meant by that placement, but nothing exists in a vacuum; how an individual reacts to a specific spectral placement will not be an isolated phenomenon, it will be a part of the unified whole that is the individual’s personality. Rather than look to the generic cardboard cut-out interpretations, it’s worth spending a few moments contemplating alternatives that might better represent a coherent profile of the individual, relegating the generic contribution to (at best) a secondary status within this individual’s makeup.

    This stage of the process is an exploration of ideas – don’t be afraid to throw in something from left field to see what becomes of it.

    4. Selection

    By the time you’ve finished that, you will have a vast swathe of contributing elements, a soup of possibilities, all present in equal strength, and so yielding a fairly bland and unfocused characterization. Time to apply a little selectivity, picking out the elements within each stanza that best define the individual and their place within their natural society.

    Remember, the goal is to be able to sum up the individual and their place within their native culture in just 4-8 sentences of simple construction – none of those 15-line paragraphs that read like legal fine-print. Simple, direct statements. Anything that doesn’t belong in the description of the individual’s personality and placement should be part of the racial notes.

    5. Facets of Individuality from Specific Interpretations

    When you’ve boiled off the dross – and it’s likely that your pruning will need to be ruthless – what remains is canon for that character. Everything not explicitly stated is free for interpretation in response to triggering events, though logical implication may narrow the reactions to such events.

    Roleplaying is about taking those defining elements and merging them into a holistic view of the personality which can then be expressed in thought (decisions), word, and deed. The GM has to do it just as much as the players do.

    It can be the case that the holistic view needs 1-2 more sentences to unify the constituent elements. “[Name] is a Party Animal” can mean very different things in different cultures, and usually requires a clarifying clause within the sentence. “Elvor is a Party Animal, always up for a good poetry recital or inspection of the blooming of roses” – by redefining ‘Party Animal’ into a relevant social context, this describes a very specific individual in a single sentence; everything that follows merely enhances that overall summation.

    Simply by virtue of making this the dominant personality trait of ‘Elvor’, you automatically insinuate that everything else is secondary to this aspect of their personality, to be sacrificed if and when it becomes necessary. Right now, there’s an impression that the character is a gadfly, without serious heft and gravitas – but if this love of ‘intellectual events’ has driven the character to become engaged in internal politics, or a social firebrand / conscience, it’s possible that nothing could be farther from the truth. That’s what the other elements of the characterization are there for.

    It’s the overall summation that GMs and Players should keep in mind when roleplaying. Nuance is all well-and-good, but can often conflict with other characterization elements; the overall summation is the guide to navigating such complexities.

Spotlight Placement

Like most creative types, I love to show off my handiwork to the players. Perhaps eight times in ten, I’ll get a shrug and a ‘so what’, but the remainder generates varying degrees of appreciation and occasionally awe.

There’s a wrong way and a right way and a better way.

The wrong way is simply to dish up “here’s something I’ve been working on,” without in-game context. This risks giving away key details of plot not yet played, throwing away any surprise or wow factors at the game table for a moment of gratification that might not even be coming. It’s something that most of us have been guilty of at some point along the way, and we all have to learn (sometimes repeatedly) not to do it.

The right way is to make the revelation part of the plot by ensuring that the plotline focuses on at least one of the more unique aspects of the character, showcasing his or her individuality.

The better way is to fully integrate the character and one or more of their unique personality attributes into the plot, making them an essential building block of the campaign, while using them to shed light and add substance to the range of possibilities implicit in their race, profession, and social position. This might require the involvement of a second character whose job is merely to forewarn the PCs about the uniqueness or place it in a racial / professional context afterwards, specifically addressing the nuances that make the character function.

    Focal Point

    As you can see, there’s a great deal of similarity between the ‘right way’ and ‘the better way’ – the distinction is in how central the uniqueness of the character is to the plot.

    Both start with the selection of a focal point – the aspect of the personality that is going to be on most prominent display. This could be any one of the character’s stanzas of description, and there will always be a best choice in terms of the plot and intended usage. But if, by chance, the character you’ve created doesn’t match up with your plot needs, it’s at this point that you should set the character created aside for use some other time, and start over – letting the plot guide you to a unique character for that critical role in the story.

    Reflections Of Individuality

    Once the primary point of uniqueness is built into the plotline, the second step is to look for opportunities and character-roleplay moments that can briefly highlight one or more other unique aspects of the character. Failing that, a foil – someone present merely to expose the existence of those other unique attributes – is often the best answer.

    The Racial Rainbow

    I am always cognizant of what the uniqueness of the character adds to the rainbow of racial aspects and colors contained within the race. How does this character, and their role within the adventure, expand the fundamental definition of the race that lives in the players’ heads? How can we make that expansion unforgettable, so that the next example builds upon it, having a cumulative impact?

    Every non-cliche Elf, Dwarf, Orc (or whatever) adds to the substance of that race, so long as their uniqueness can somehow be put on show and made memorable. The more central they are to the plotline, the more easily the latter can be achieved, and the more interesting the character, the more easily you will be able to drop them into future occasions.

    If you make six unique NPCs and only one of them goes on to become a central figure in the campaign, that’s a win for the GM – because if they weren’t memorable, none of them would do so; they would simply be part of the campaign furniture. But at the time of creation, you never have any idea which of them will turn up again in the future – you’re simply placing as many top-quality building blocks to hand as you can come up with.

    The Archetypal Rainbow

    It’s the same thing with respect to the character’s archetype. Expanding the role that the individual can play expands the potential capabilities of their archetype, providing a second avenue into their becoming a recurring element.

    The Social Rainbow

    The sheer variety of groups around which the character can be oriented means that their contributions to the social rainbow will be more diffuse, unless this is the central facet of the character spotlighted.

    But this also brings me to a top tip – The Path Not Fated

      The Path Not Fated

      We’ve all met people who would excel in a different vocation or social position, but who were forced by circumstance, or family, or opportunity, or whatever, into a pathway through life for which they aren’t really a very good fit.

      They nevertheless do as much as they can to fit themselves into the square hole, no matter how much of a round peg they may be, and do enough to continue on in that square hole, though it doesn’t come naturally to them.

      Whenever fate (or a PC’s decision) throws up the need for a generic cardboard cut-out NPC, my favorite tactic these days is to make them something else, then reconcile that with their life and its demands.

      The noble who would be better-suited to being a bookkeeper. Or a Beekeeper. Or an architect. Anything but a typical ruler, in fact.

      The inn-keeper who was born to tread the Tennis Court. Or the Pool Hall. Or to be a famous singer.

      The Blacksmith who should have been a painter. Or a gardener. Or a butler.

      It’s a shortcut through the processes described here that doesn’t fully flesh out the character but still captures at least half of the uniqueness that would result from such a treatment, and is fast enough that it can be done on the fly – which is exactly what you need in this game situation.

      The biggest trap to watch out for is creating a new stereotype by reusing the same ‘alternative vision’ repeatedly. Avoid that, and you’re well on your way.

    The Characteristic Hues

    Characteristic-defined traits are a little different to the rest. They rarely stand alone, instead compounding with other personality traits to add additional nuance and depth. These are personality elements that would be largely similar no matter what archetype / profession the character adopted, what their social class was, and that are embedded within their racial profile, inseparable from it to at least some degree.

    Contemplate, for example, the differences in the following:

    • “He’s unusually strong for a Gnome.”
    • “He’s unusually strong for a Storm Giant.”

    Both will have generated similar formative influences within their respective cultures; it’s when you step outside those boundaries that the context becomes important. In the first case, the character is likely in for a rough time, adjusting to no longer being the biggest and toughest around, but they may end up a better person for the humbling. In the latter case, any personality traits engendered by their strength are likely to be amplified, if anything.

Totality: The sum of many reflections

The techniques described in this post shouldn’t be used every time you generate an NPC. Their power stems from the cumulative impact of many diverse representatives; if you can’t envisage a pathway through the campaign that yields many encounters with Ettins, it may not be worth going through the whole process.

That’s certainly one path to take. The on-the-other-hand counter-argument is that if there’s only going to be one Ettin, you should make it as memorable and distinctive as possible. While the pragmatist in me aligns with the former position (less time spent on this means more time that can be spent on something else), everything else in my nature (excluding laziness) demands the latter.

I can’t decide this question for you – I can only advise people to find the balance and pathway that works best for them. Every GM has some talent at which they are better than the rest; some have several. Prep time invested in something that comes naturally to the GM yields a better dividend, but leaves holes in their performance behind the screen; prep time invested in the areas they are weaker in elevates the performance bottom line and also frees up some of their time and attention for their strengths to be displayed. There’s no one right answer.

But I thought it worth the effort, before wrapping up this article, to think about some even bigger pictures and the impact the technique can have.

    Genre Variations

    By defining the racial and archetypal parameters differently, even within the same game system, you create genre variations, and these can be as nuanced as you want them to be. If you want to distinguish between high fantasy and low fantasy, you can – even in the middle of a campaign, if you perceive that the campaign has evolved through characters gaining wealth and experience. That’s a powerful benefit, but it misses one of the more useful functions of the process.

    It also makes possible the conceptual repackaging of one genre’s creatures into another genre. There are two examples that I could offer right now, but both are from adventures that haven’t yet been played. Instead, I’ll throw out a less-developed idea just to illustrate the power of the technique.

    Let’s take a Troll and translate it into Sci-Fi using nanotech repair mechanisms housed within the humanoid organism. There would be certain aspects of the ‘repaired’ creature that would be user-customizable, and some that aren’t. Increased strength, size, and resilience? No problem. Diminished intellect and Agility? Suggestive of nerve damage as a consequence of the nanotechnology, and maybe neuron damage to boot. That suggests an inverse relationship between Strength / Resilience and Intellect / Nimbleness. It might be that every time the nanotech repairs the body, it gains a point of strength and/or resilience, but loses a point of intelligence and/or dexterity. Slowly, the character becomes more brutish – and more dangerous.

    This treatment doesn’t say anything about the ‘racial’ traits or the social groupings; the latter would probably be generic aspects of the sub-culture that embraced nanotech / cyberware, while the former would be about the places such ‘modified people’ hang out and the jobs they perform, and that would reflect their integration (or the lack thereof) within the broader society. That in turn suggests either a game setting that leans heavily into cyberpunk tropes, or one that is actively trying to avoid going down that path.

    In my Zenith-3 (superhero) campaign, Earth-prime has started down the road to cyberpunk but there is considerable resistance, not least of which stems from a number of unique illnesses / diseases / conditions (some of them physical, some mental) that exist and act as a deterrent to many. There are a few fatalists who believe that cures will eventually be found, and that upgrading now gets them in on the ground floor of the next stages of human evolution; there are some who see the diseases as a natural price that has to be paid if ordinary people are going to compete with superheroes and villains; and there are some who are simply overconfident (“it will never happen to me”). Philosophy colliding with Futurology in a Superhero context. These ‘trolls’ would fit right in.

    There can even be an argument made in reference to the purported ugliness of a Troll. Characters who opt for this type of augmentation will probably start out fairly average in appearance, maybe even a little sickly. At first, the gains would be positive – they would put on muscle mass and become more attractive as a result. That wouldn’t last; they would slowly become more grotesque in appearance, a trend enhanced by the natural occupations of this sort of augmented person – bouncers and enforcers and the like. All professions in which intimidation is an asset. And so most of them slide down a slippery slope into a more horrific appearance.

    We can make such a character unique by making them friendly, polite, soft-spoken, with exquisite manners. The dichotomy of such a social paragon being an ugly SOB who does an ugly job does the rest.

    Campaign Variations

    I’ve often discussed my desire to make no two campaigns that I run exactly alike. Sometimes, where they are both set in the same game world and operating concurrently in game time, the distinguishing features may have to be more nuanced and less casually-obvious, but they are still there.

    This is particularly the case when it comes to the different D&D campaigns that I’ve run over the years. I want Elves and Dwarves and Orcs and so on to be different in each, and to have some reason in back of those differences. Collectively, those racial differences manifest from conceptual differences within the world and its history. Put both together, and each campaign takes on its own unique flavor.

    It should be obvious that this technique not only assists in creating such unique reinterpretations, it helps spotlight them in play. That’s both a win and a bonus, in my book.

    GM Individuality

    I’ve often made the point that each GM is a little bit different from the next. No two of us think exactly alike. Over time, the strengths, weaknesses, likes and dislikes, etc of the individual start to come together in a unique GMing style, one that often transcends campaigns and genres and game systems.

    There is a corollary to this perspective – not every game system will suit every GM equally. Some game systems will simply be a complete bust; others may flex ‘muscles’ that the GM didn’t know they had, enhancing and developing their capabilities; and some will fit them to a T, while the GM (metaphorically) next door can’t cope with that system and doesn’t see its attraction.

    Because this process enables individual GMs to craft individual interpretations of common elements like races or species, it facilitates the expression of a GM’s particular style – even before they know what that style is. Without that knowledge as a guide, there will probably be false starts and missteps along the way – but those would happen anyway. We make mistakes and we learn from them.

    The Developmental Sandbox

    The final big-picture that I want to point out is that you can start with a completely generic setting and evolve it, one step at a time, using this process. Eventually, you will find that you have developed your own singular ‘take’ on that setting – your “Eberron” might be completely different to another GM’s “Eberron”, your “Middle Earth” unique, while still deriving from and reflecting the source material.

    The process allows for the development of singular elements within a sandboxed game narrative, permitting the incorporation of creativity in greater or smaller doses – but one at a time, making assimilation of the distinguishing features easier for both GM and players.

    That’s not nothing, either.

A Powerful Tool

In conclusion, then, this is a powerful tool for character creation that expands the mythos surrounding the specific races, classes / archetypes, and social groupings to which the individual belongs. Rather than being confined by pre-packaged concepts of those character facets, it causes their expansion to accommodate greater diversity and richness of material within a campaign.

Throw in a few side-benefits along the way, and it should be easy to see why it’s worth your attention.


All About Ripple Plotlines


Ripple plotlines use domino chains that feed back to the main plotline while cascading out to trigger other plotlines in a chain reaction. They can start from the most apparently inconsequential act or decision and grow until whole Kingdoms hang from them like Christmas baubles.

Today (as I write this) is Australia Day, our equivalent of the 4th of July, and yesterday was unbearably hot and humid, so I got nothing done. Which meant, of course, that I would need something fairly quick and simple for this week’s topic.

I’ve given a pretty fair description of what a ripple plotline is in my introduction, so instead let’s look at the anatomy of one.

Anatomy Of A Ripple

Every ripple starts with an act or decision, which can be described in an abstract manner as the ‘seed’. This is similar, but not identical, to an adventure seed in that there are some very specific requirements that it has to possess. Specifically, it has to affect others in a number of different ways.

Each of those effects is a Primary Strand of the plotline. At least one primary strand has to affect a PC – usually directly, though indirectly can be okay, too.

Each group or individual affected is a secondary node, and each secondary node has to have the need to act or react to the Seed Event. That, too, is a requirement of the Seed that has to be met in order for this to qualify as a Ripple Plot.

Those secondary nodes give off consequences of the decisions. One of these “Secondary Strands” has to connect back to the Seed Originator in some way, and another has to impact one or more PCs in a specific fashion. I’ll come back to that detail in a little bit.

The rest of the Secondary Strands can either connect to the campaign background, creating a change in that background moving forward, or can connect with a Tertiary Node. That tertiary node will cast off Tertiary Strands, which – just like the Secondary Strands – have to affect the original Seed Originator, and either the background, or one or more PCs, or both.

A ripple plotline grows via a chain reaction of dominoes falling, spreading outward like ripples on a pond – hence the name.

The Binding Agent

One of the characteristics of a Ripple Plot is that, initially, it’s about something other than the ripple plot itself. It starts in the background, just a backdrop to the “Through Plot” which serves as a Binding Agent. As ripples intercept the participants in this “Through Plot”, it gains momentum and significance, until the through plot is less important than the ripples that are rewriting the adventuring environment around the characters.

I’ve labeled this a ‘binding agent’ because it ties the narrative together, it ties the PCs to the ripples, and it gives the whole thing a momentum that it would otherwise be lacking. These are important functions, and it follows that the choice of through plot can be just as important as the Ripple Seed.

So what should you look for in a Through Plot?

In a word, discontinuity. It has to be something that starts and stops and then resumes, so that in the intervals in between, the ripples have time to manifest. A dungeon that has to be completed in sections, with rest and recovery away from the dungeon in between, for example. A courier job in which a message has to be taken to several different noblemen, and the replies brought back to the employer. Or maybe, instead of noblemen, it’s a particular character class or occupation.

The nature of the Ripple Seed

Some types of plots lend themselves readily and obviously to Ripple Plots, in particular political events / decisions. But these are often too obvious and too significant, causing the PCs to focus on them before the full impact has time to manifest; there’s a fine line to be walked.

A lot of GMs come up with the basic idea, or some variation of it, on their own, usually based around a political seed, and this effect then causes them to lose control of the ripple plot. They then write the whole thing off as an uncontrollable force within a campaign, and never discover the power that it can have from a more subtle Seed.

What’s really desirable is something that’s going to be minor to start off with and grow.

Timing is everything

I can best explain this point by offering up an example. Suppose our Ripple Seed is the notion of disbanding the Inland Revenue Service and contracting the collection of taxes out to public groups / agencies. The theory is that in a year or two, this will save so much money that the tax rate itself can be lowered.

Right away, there’s a potential problem – what if the PCs decide to become one of these contracted groups? There are two ways of avoiding this, and I would use them both. First, make the remuneration less than the existing tax collectors were being paid – a disincentive; and second, make sure the PCs are busy with something that looks far more important / useful / profitable than this before it is even an option.

That ‘something’, obviously, is the Through Plot. I might foreshadow the Ripple Plot with news of a new Advisor to the Government (the Throne in a Kingdom) who has privately proposed radical reforms of the tax code. This, of course, is only half-right; he or she is not advising changes to the Tax Code, only suggesting that such might become possible if this change is put in place. But it sounds both important and boring at the same time, and so will incline the PCs towards the Through Plot when it manifests.

The thing that makes this a suitable Ripple Seed is that there will be lots of different groups who will have different reactions. Some will embrace it, in a restricted manner – Professional Guilds, for example, collecting the Taxes from their members, and using the revenue paid to them for performing this service to lower their guild fees. Churches might embrace it, mandating that the congregations pay their taxes on the collection plate. Thieves’ Guilds might also embrace it, as a way of hiding their thugs in plain sight, giving them a veneer of respectability, and fattening their coffers by ‘increasing the tax rate’ (unofficially, of course) – not to mention the money-laundering possibilities. Various bandit groups might sign up as a way of gaining, or regaining, legitimacy.

Other groups will oppose it. Some might see the potential for corruption. Others, the prospect of confusion and/or tax avoidance. Winemakers and Vintners might claim that they’ve paid their taxes through their guild (when they haven’t) and so don’t need to pay agency X – whoever it is that comes around demanding tax payments. Still others may see it as a way for the neighbors to justify intruding into their privacy. How do you prove that you’ve paid your taxes – by showing a token of some sort?

“Psst, hey, kid — wanna buy a token? I can give a discount for lots of six or more. Almost as good as the real thing, I promise.”

Instead of a central authority, there would be dozens of smaller authorities – and that makes any inequities in the system harder to remove by increasing the bureaucratic burden. Some groups might take matters into their own hands – if the merchants feel that sales taxes are high enough to stifle business opportunities, they might arbitrarily reduce the amounts they are collecting to what they consider ‘reasonable’.

Some groups may hear rumors of such goings on and decide to do likewise. Others will hear such rumors and decide that the guild in question is elevating themselves and their prosperity over that of others, and start acting against the guild who is the subject of the rumor.

Everyone will have an opinion of the idea, of the way it is implemented, of the groups backing it, of the groups opposing it, of the groups trying to make the system fairer and those who are trying to take advantage of it. Those opinions will shape or reshape the implementation of the idea, and some will shift from ardent supporters to vehement denialists. “I was all for this until the Seafarer’s Guild signed up to collect taxes from the docklands. You can’t trust them as far as you can throw a warehouse.”

Trust. In this Ripple Plot, trust becomes a taxable quantity that not everyone can afford.

And, at the end of the day, when society starts coming apart at the seams, it can all be undone by decree the same way as it was implemented. The old Tax Collectors can be rehired – at increased pay, no doubt – and taxes will go up to cover this increased cost. That won’t put the genie back in the bottle – the consequences and repercussions will take years to unravel and stabilize. And lots of different groups will have entirely changed attitudes toward the government who foisted this shambles off onto the public.

The Key To Success

Ripple plots succeed or fail, live or die, according to the extent to which the characters are directly affected. Those impacts should start small and innocuous, as already noted, but should compound one on top of another.

Ripple Plots. Everyone should know how to make them and how to use them.


A Fairy Colony In Zenith-3


What is a Fairy Colony, and why should you never annoy one? Or attack one? I didn’t want to go full “Fey” so I came up with something different…


In the Zenith-3 superhero campaign, there’s a Fairy Colony at the bottom of their back yard. It was placed there years ago (real time) but until late last year, no details had ever been worked out. Heck, there wasn’t even a functional definition of a fairy, let alone a Fairy Colony! But, with play set to resume next week, that had to change; so I wrote up some concepts, and then added to them, and added to those, and so on. None of my players have seen this yet (and the details in that specific campaign’s version are slightly different, anyway). That’s because I’ve adapted this write-up to work with D&D / Pathfinder, even though it remains, fundamentally, a concept for use with Hero Games.

Fairy Physical Structure

Fairies range from 6 to 12 inches (15-30 cm) in height.

They trend towards being slightly built, though a few are stockier. Typical weights range from 40-320 g; stockier examples weigh about 1.45 times as much.

Their wingspan is typically 2.4 x their height (each wing = x1.2 height), and their wings resemble those of a dragonfly. They fly at peak speeds of up to 45 mph (72 km/h) at 6″ tall and up to 65 mph (105 km/h) at 12″. Divide these speeds by 1.45^0.5 = 1.204 for stockier builds.

Cruising speeds range from 20-25 mph (32.2-40.2 km/h) at 6″ to 30-35 mph (48.3-56.3 km/h) at 12″.

They have three fingers and a thumb on each hand. As a result, they tend to number things in base-8.

    1=1
    10=8 (two hands)
    20=16 (four hands)
    100=64 (a great hand)

etc.

At 12 inches tall, a fairy is effectively a small, sentient projectile. Flying at 65 mph, an impact would be significant – carrying about the same kinetic energy as a professional pitcher’s 100mph fastball.
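
If you want to sanity-check that claim, here’s a quick back-of-the-envelope sketch (Python; the 320 g fairy mass is the top of the weight range above, and 145 g is the standard baseball weight):

    def kinetic_energy(mass_kg, speed_mph):
        # 0.5 * m * v^2, with the speed converted from mph to metres per second.
        v = speed_mph * 0.44704
        return 0.5 * mass_kg * v ** 2

    print(kinetic_energy(0.320, 65))   # ~135 J: a heavy fairy at top speed
    print(kinetic_energy(0.145, 100))  # ~145 J: a 100 mph fastball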

Wearing a pointed helmet or using a pole arm, they become the equivalent of a living AP round (at low velocity relative to a gun, but still…)

The wingspan of the larger fairies handicaps them in forest and indoor settings. They dominate the open skies. The smaller fairies are far more maneuverable and dominate tighter spaces. As a species, they take advantage of these facts – short fairies are melee fighters while taller fairies use javelins and bows.

Because of the high speeds and small size, these fairies would likely have an incredibly high metabolism, requiring constant intake of high-energy foods (nectar, fats, or sugars) to fuel their flight muscles. They magically concentrate food daily. They will eat once when the moon rises, twice more at four-hour intervals, and have a half-meal when it sets (to give them an energy reserve to call upon if attacked in the night). Their preferred diet is tree sap (especially of the maple variety), leaves, and fruit. Most flowers do not produce enough nectar to do more than add flavoring, but they prize them for that function. Especially brave or hungry fairy colonies may raid a beehive.

Fairy Social Structure:

They consider themselves a single clan or “colony”. When their numbers grow too large, the colony will split and have a big fight to see who gets to stay and who has to look elsewhere. Normally, about 2/3 will refuse to fight, either choosing after the outcome is decided which group they will affiliate with, or volunteering to relocate, regardless.

How many is too many? The real number is somewhere between 500 and 1000 adults, but most Kings pick a number between 100 and 500 with which they are comfortable. Beyond a few hundred, you stop knowing everyone as individuals very well, and past about 500, you start losing track of individuals completely – and social cohesion and relationships are essential to a Fairy.

Fairies hold grudges for decades, if not longer, as hot and passionate at the end as when the incident is fresh. They are easily placated, however, if this is done sincerely. For the most part, they simply want to be left alone. And party. And celebrate nature. And socialize. And gossip about each other (usually in a friendly way).

Then, too, in every generation there are a few really mean and nasty individuals – bullies and the like. If the colony is small in size, there won’t be many of these, and they will be easily quelled and controlled by the society at large; once numbers become more significant, society begins to splinter into subcultures, and these louts can become a gang, sparking difficulty with those living around the colony as well as internal strife. They can become a significant problem for the colony.

Four times a year, on the second full moon of the season, the Fairies have a celebration with an outsider as guest of honor. This outsider is chosen by a process called the Fabrinelle, a kind of treasure hunt through the surrounding lands. To be chosen, the person must be a true lover of nature. At the end of the night of wild celebrations, the guest is given a gift of some sort and an honored role in Fairy Society; he or she may call upon the Colony to aid them in some struggle or task that is beyond them. This power, once used, is lost forever.

On rare occasions, a guest may wish to remain with the fairies permanently. It is up to the King to determine if this is possible, and to make any arrangements necessary, but his primary task is to ensure the security of the Colony; there are times when this makes the request impossible. Some Kings, especially those without the guidance of a Queen, have made poor choices in this regard, such as replacing a child guest with a simulacrum – a changeling – who will fall ill and seem to ‘die’ over the next month or so.

Of secondary importance is that the request must not create conflict between the family of the guest and the colony.

Those who are permitted to remain are transformed permanently into fairies and become members of the colony like any other.

Fairy Political Structure:

On paper, it’s a Monarchy, but Fairies don’t use paper. Kingship rotates through the male population on a weekly basis. The Kings from the previous two weeks and the one who will assume the throne next week form a council of advisors, providing some semblance of continuity. If a King is wed, then his wife becomes Queen. The role of the Queen is to provide a conduit between the rest of the colony and the throne. She is also in charge of the recreational activities of the colony – usually some one-in-all-in social occasion.

It is when a King is unwed that things can get messier. The King has the authority to choose as his consort any unwed female who will have him, and she will then act as Queen for the remainder of the King’s Reign, but she has no training or authority to organize events, so the King does that himself – usually more masculine activities like hunts.

Fairy Activity Orientation:

As a general rule, fairies are neither nocturnal nor diurnal – they rise with the moon and set with it. But they can function outside these hours at need. To human observers, their daily cycles drift by about 50 minutes later every earth day; one week they are active at midnight; two weeks later, they are active at noon.

During the New Moon phase, the fairies rise and set almost exactly with the Sun. This is likely their most stressful time – they are active when the “Big People” (humans) and daylight predators are most active, and they lack the cover of night.

Clothing and Equipment:

Fairy clothing is generally made of leaves that have been treated with tree-saps to stiffen them and bind layers together, then magically hardened. Their very best armors are as protective as those used by human SWAT teams.

They carve many implements from wood and then preserve them with lacquers. Because of their small size, these can possess incredible delicacy and detail.

They forge metal through (magical) transmutation and melt/cast/smith it using magical fires. A single “blacksmith” might be one artisan and 15 or 31 others generating the heat. 256 fairies casting in unison can produce brief bursts of plasma-cutter temperatures.

Domiciles & Structures

Edible tree sap isn’t the only type that Fairies use. They dry sap out into flat planes, usually sandwiched between two leaves, building up layers which they treat magically to make them more resistant and resilient, at least as hard as granite, depending on the number of layers. These are then assembled and joined to construct homes and other structures.

The most common practice is to suspend these from tree branches, but every Colony has a different approach. The most grandiose structures may be suspended from multiple sides enabling a much larger construction – these can be full-on medieval palaces in miniature. But most structures are smaller and more humble.

The simplest structures are round, like beehives.

By far the favorite place to reside if one isn’t entitled to a ‘palace’ or ‘castle’ is in the hollow of a tree. These can be extensively and elaborately carved internally while little or nothing is visible from the outside save some internal illumination through windows.

They can sharpen sticks by coating them in resin, wrapping a leaf around them, and transforming them in the same way. A ‘forest’ of 3-6 inch spikes surrounding a colony for a couple of feet – with gaps big enough for the feet of any human(oid) visitors – is enough to discourage most predators; these spikes are needle-sharp and capable of penetrating the hardest hooves. If they have been attacked in the past, other refinements may be added to inflict poisons or diseases on hostile entities. This is also how they make their javelins and arrows.

This often makes a colony in a relatively safe environment confident enough to build dwellings on the ground as well as aloft, though only the lowest social classes would live there.

Fairy Magic

This is generally more elementary than that of a human mage, and more elemental, but it is capable of great subtlety, and backed by enormous power, because the whole clan participates in the casting. They may only have 1 mana point each, but 500 or 600 fairies cast spells more powerful than most human mages can even contemplate.

They recover that 1 Mana point almost instantly – it actually takes 5 or 6 seconds.

Fairy Spells tend to blow some aspect of the spell out to extremes.

Base area is proportionate to their size, so about 6 non-game inches to a hex.

In practical terms:

    log [Area (square feet) x 12 / 6] / log(2) = area modifier.

So double the area (or less) for a +1 modifier, or halve the area for a -1 modifier.

    EG: 10 sqr feet: 10×12/6 = 20; log(20)/log(2) = 4.3, which rounds up to a +5 modifier.

Note that you don’t need a calculator. 2; 4; 8; 16; 32. 32 is more than 20, so we stop doubling. Count the number of doublings: 5. So a x20 multiplier = +5 – and so is any multiplier from x17 to x32.

  • 1 square foot = +1. This is the area to affect a human-sized individual.
  • 10 sqr ft area is x20, so +5.
  • 20 sqr ft area is x40 = +6.
  • 100 sqr ft is x200 = +8.
  • 1000 sqr ft is x2000 = +11.
  • 10,000 sqr ft is x20,000 = +15 (a large stadium).
  • 1 square km = 1.55e+9 sqr inches = x1.55e+9 / 6 = x258,333,333.3 = +28.
  • 1 sqr mile = 4.01451e+9 sqr inches = x 4.01451e+9 / 6 = x669,085,000 = +30.
  • 25 sqr km (5km x 5km) (a moderate city) = x6,458,333,333.3 = +33
  • 22.7 sqr miles (Manhattan island)= x15,188,229,500 = +34
  • 100 sqr miles (a larger city) = x66,908,500,000 = +36.
  • 12,367 sqr km (Greater Sydney) = x3,194,808,333,333.3 = +42
  • 30-40,000 sqr km (small Western European Country) = x7,750,000,000,000 – x10,333,333,333,333.3 = +43 to +44
  • 100,000 sqr km (average Western European Country) = 25,833,333,333,333.3 = +45
  • 540,000 sqr km (France) = x139,500,000,000,000 = 1.395e+14 = +47 (barely)
  • 7,660,000 sqr km (continental US) = x1.978833e+15 = +51
  • 255 million sqr km (Earth Hemisphere) = x6.5875e+16 = +56
  • 510 million sqr km (Earth) = x1.3175e+17 = +57
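
If you’d rather let a script do the doubling for you, here’s a minimal sketch (Python, purely illustrative) of the same calculation, reproducing a few of the sample values from the list above:

    import math

    def area_modifier(square_feet):
        # The formula above: log2( area in square feet x 12 / 6 ), rounded up.
        multiplier = square_feet * 12 / 6
        return math.ceil(math.log2(multiplier))

    print(area_modifier(1))       # +1  (a human-sized individual)
    print(area_modifier(10))      # +5
    print(area_modifier(100))     # +8
    print(area_modifier(10_000))  # +15 (a large stadium)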

Duration: the base is instant (+0), then 1 second (+1), as usual. The calculation is the same, as you will observe below.

  • 1 minute = 60 sec = x60 = 1+log(60)/log(2) = +7.
  • 5 mins = 300 sec = x300 = 1+8.2 = +10.
  • 30 mins = 1800 sec = x1800 = +12.
  • 1 hr = 3600 sec = x3600 = +13.
  • 1 great-hand of minutes = 64×60=3840 sec = x3840 = +13.
  • 1 hand of life = 4 great-hands of minutes = x3840x4 = x15360 = +15
  • 6 hrs = 21,600 sec = x 21,600 = +16.
  • 1 sky-cycle (lunar rise to lunar set) = approx. 12 hrs 43 min = 45780 sec = x45780 = +17
  • 1 long-day (max lunar rise to set, occurs every 18.6 years) = 18.5 hrs (max) = x66600 = +18. Most will be +17.
  • 1 day = x24x60x60 = x86400 = +18.
  • 1 Fairy-day = x(86400+50) = x86450 = +18
  • 1 Fairy-week = x7x86450 = x605,150 = +21
  • 2 hands of fairy days = 1 half-cycle = x8x86450 = x691600 = +21
  • 1 hand of hands of fairy days = 1 cycle = x1,383,200 = +22
  • “15” cycles = 13 cycles = 1 season = x13x1,383,200 = x17,981,600 = +26
  • 1 hand of seasons (1 year) = x4x17,981,600 = x71,926,400 = +28
  • 1 hand of years (4 years) = x4x71,926,400 = x287,705,600 = +30
  • 2 hands of years (8 years) = x2x287,705,600 = x575,411,200 = +31
  • 2 hands of hands of years = 32 years = 1 Fairy generation = 2x4x4x287,705,600 = x9,206,579,200 = +35
  • 1 great-hand of years = 2 Fairy Generations = 1/4 of an age = 1.841316e+10 = +36
  • 1 hand of great-hands of years = 8 Fairy Generations = 1/2 an age = x4x1.841316e+10 = x7.365264e+10 = +38
  • 2 hands of great-hands of years = 16 Fairy Generations = an age = x2x7.365264e+10 = x1.4730528e+11 = +39
  • 1 great-hand of great hands of years = 4096 years = an ‘eternity’ = x4096x71,926,400 = x2.946e+11 = +40
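
The duration modifiers use the same doubling logic, just offset by one for the 1-second base, so a companion sketch (again, the naming is mine) looks like this:

    import math

    def duration_modifier(seconds):
        # instant is +0, 1 second is +1, then +1 per doubling
        if seconds < 1:
            return 0
        return math.ceil(1 + math.log2(seconds))

    for label, secs in [("1 minute", 60), ("5 minutes", 300),
                        ("1 day", 86400), ("1 Fairy generation", 9_206_579_200)]:
        print(f"{label}: +{duration_modifier(secs)}")
    # 1 minute: +7, 5 minutes: +10, 1 day: +18, 1 Fairy generation: +35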

Difficulty in breaking spells:

  • Caster level required +1 = +1
Adapting to D&D Spells:

    Colony Size / (Spell Level* +1) = maximum total pluses (round down).
    Area pluses + Duration pluses + Difficulty-in-breaking pluses = total pluses spent

      * includes any additional caster levels to achieve desired effect level.

Kings can choose to cast with fewer total pluses; the above sets the maximums.

A Fairy Queen. Image by Jim Cooper from Pixabay, cropped by Mike

As a general rule, choose the spell effect that you want and then select the spell that best fits. “Bless” and “Curse” are frequent choices.

    EG “May it rain on you, wherever you roam, regardless of cover, for an entire season.”
    Curse, 1st level spell. Human sized individual. Colony of 85 faeries.
    85 / (1+1) = 42.5, rounds to 42. So an individual could be cursed for more than 4096 years. But let’s play it safe (for the colony) and limit the curse to a season (+26). And let’s spend +10 adding to the caster level requirement of any mage or cleric who attempts to lift the curse, for a total of 1 (area) + 26 (duration) + 10 = 37. This leaves 5 unallocated.
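
As a quick Python sketch of that bookkeeping (the helper name is mine), using the numbers from the example above:

    def max_pluses(colony_size, effective_spell_level):
        # Colony Size / (Spell Level + 1), rounded down
        return colony_size // (effective_spell_level + 1)

    limit = max_pluses(85, 1)            # 42
    spent = 1 + 26 + 10                  # area + duration + dispel difficulty
    print(limit, spent, limit - spent)   # 42 37 5 -> 5 pluses left unallocated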

Casting Consequences

A colony casting a spell is literally doing so with their life-force. It’s not done trivially.

    (30 x actual total pluses / maximum total pluses) + spell level + 10 = % of colony half-killed = 2 x % of colony killed (round both down).

    % colony killed can be reduced by X% by increasing the % half-killed by 2 x X% and reducing the number of pluses AFTER the above calculation by 0.5 x X.

    EG Continued: 30 x 37/42 = 26%. 26+1+10=37% half-killed and 18% killed. We can use the 5 pluses remaining to reduce the death penalty by 10, from 18% down to 8%. This adds +20% to the number half-killed, for totals of 8% killed and 57% half-killed.

Not a trivial exercise at all; this curse is right at the limits of what a colony this small can do.
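
Here’s a minimal Python sketch of the casualty arithmetic, reproducing the 85-fairy curse above (the function and parameter names are mine, not part of any system):

    def casting_cost(actual, maximum, spell_level, protect=0):
        # protect = percentage points of 'killed' bought off; each point costs
        # 0.5 pluses and adds 2 points to the 'half-killed' total
        half_killed = int(30 * actual / maximum) + spell_level + 10
        killed = half_killed // 2
        killed = max(0, killed - protect)
        half_killed += 2 * protect
        return half_killed, killed, 0.5 * protect

    # 37 of a possible 42 pluses, 1st-level spell, death toll bought down by 10
    print(casting_cost(37, 42, 1, protect=10))   # -> (57, 8, 5.0)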

    Comparison example: Colony of 170 (twice the size): “May it rain on you, wherever you roam, regardless of cover, for an entire YEAR.”
    Curse, 1st level spell. Human sized individual.
    170 / (1+1) = 85. Duration: 1 year (+28). +20 caster level requirement of any mage or cleric who attempts to lift the curse, for a total of 1 (area) + 28 (duration) + 20 = 49. This leaves 36 unallocated.

    30 x 49/85 = 17% half killed, 8% killed. Reduce the 8% to 0: uses 4 additional pluses, plenty in reserve. Totals: 0% killed, 17+16=33% half hit points (recovered at 1 per day as usual).

Not only is this a nastier spell (it lasts a year and is harder to dispel), the colony is able to cast it with relative impunity.

Let’s nasty it up a little more, so that it not only affects the individual but anyone physically close to them.

    Comparison example: Colony of 170 (twice the size): “May it rain on you and any who approach you, wherever you roam, regardless of cover, for an entire DECADE.”
    Curse, 1st level spell. Human sized individual + surrounds = 5′ x 5′ area.
    170 / (1+1) = 85.
    Area: 5′ x 5′ = 25 sqr ft. log(25)/log(2) = 4.64, so +5.
    Duration: A decade isn’t on the list, but 8 years is – value of +31. So a decade will be +32.
    +23 caster level requirement of any mage or cleric who attempts to lift the curse.
    Total of 5 + 32 + 23 = 60. This leaves 25 unallocated.

    30 x 60/85 = 21% half killed, 10% killed. Reduce the 10% to 0: uses 5 additional pluses, still plenty left over. Totals: 0% killed, 21+20=41% half hit points (recovered at 1 per day as usual).

Half-killing almost half the colony is about as far as it’s reasonable to go; anything more risks the colony’s survival, should a predator find them.

    One more example:
    “May every building you enter burn to the ground for the rest of your natural life” (man, the King must really be pissed off at the target!)
    Colony Size 400.
    Spell: Fireball (3d6), Level 3 spell, plus 2 caster levels to get 3d6 = level 5.
    Max Bonuses = 400 / (5+1) = 66.
    Area: 20′ x 20′ = +6.
    Duration: +37.
    Dispel Difficulty =+7
    Total = 6+37+7=50, leaves 16.

    30 x 50 / 66 = 22% half-killed, 11% killed. Protect the 11% = +6 levels, 10 in reserve.
    Net cost: 22+22=44% half hit points, no fatalities.

Note that this is right on the edge for a colony of this size, which is close to as big as they come. Maybe the colony could have afforded another +5 dispel difficulty. But most spell-casters would be disinclined to help if the practice of consulting them burned down their houses, so maybe that’s not necessary.

Personal Magic

In addition to the major castings above, which always involve a ritual and a whole colony, most fairies are capable of smaller, more temporary ‘personal magic’ – making vines and tree limbs light up with glowing ‘fairy light’, shrinking visitors so that they can enter fairy homes, and so on. No such magic effect can last more than a day, and most last far less. It is ten times more efficient to sustain an existing spell than it is to cast it anew.

Fairy Personalities

Fairies are generally lighthearted and friendly, though some have nasty senses of humor. A few – generally marked for greatness within their society as a result – are capable of being more serious, more judgmental, and exhibit gravitas that far outweighs their stature. Relatively few are the sly, cunning, scheming types; most are happy-go-lucky and take life one day at a time as it comes to them.

These moods and attitudes vanish instantly when the colony feels under threat. Fairies are capable of an anger that has to be seen to be believed, and can sustain it for generations. Hillbilly feuders have nothing on these folks when someone earns their enmity. Entire colonies have uprooted and moved simply to be in a better position to harass someone the Fairies think worthy of that level of enmity – though it is more common for a colony to split over such an issue.

One of the fastest ways to earn such enmity is a failure to respect nature. Fairies have no theology as such, but they are fiercely protective of the environment around them. This is not all that surprising: as the land on which they abide sickens or is befouled, so the fairies succumb to ill-health. They are bound to the life of the nature which surrounds them, and they guard and protect it as fiercely as they guard and protect themselves.

Dishonesty and misrepresentation are the second-fastest ways to arouse a Fairy’s ire. A Fairy’s word is inviolable; one would die before breaking it, and would sacrifice its entire family if need be. And they don’t care about ‘the letter of the law’; they operate on the intention of the principle as spelled out in the original agreement. They never forget the exact wording of an agreement, and never forget, ignore, or obfuscate the intention behind it; if an agreement is no longer fit to serve that purpose because circumstances have changed, or if the intended purpose has become obsolete, the whole agreement needs to be renegotiated – it cannot be amended. At the same time, Fairies have no equivalent of the human sense of Honor, because that implies the possibility of dishonor, which is unthinkable in a Fairy. They are natural seekers of Justice.

Educated Fairies

Fairies with natural Gravitas are natural leaders, and are groomed for that role. About 1% of the population are natural geniuses (by Fairy standards), with two or even sometimes three times the intelligence of the smartest ‘typical’ fairy. It is very common for these to get an initial education by listening outside the windows of human institutions, becoming fascinated by words, stories, and higher learning. When recognized, if it is socially acceptable to the culture outside the colony, these may even be sent to study at a more advanced institution or at the feet of a non-Fairy master of some sort. Eventually, these ‘expatriates’ return to the colony and learn to apply what they have learned – be it the cultivation of foodstuffs, new construction techniques, new science, or whatever. They frequently become advisors to the crown – whoever happens to be wearing it this week.

Note that they adapt the knowledge they have gained to Fairy Society and its benefit, and not the other way around. Anything learned that requires a change in social structures or patterns has to be put to the colony as a whole, and may not be implemented until all not only understand it but approve of the change. Anything that can’t be used within this structure is discarded.


The Power Of 1 on Root R


Today, I offer a new technique for rolling multiple dice many times with great efficiency. Any RPG can benefit from that!

Sometimes, the shortness of the road can make up for rougher conditions. Image by Nataly from Pixabay

I hope everyone had a wonderful Christmas break. Mine was great, though not without its challenges – but I have evidently weathered them, because here we all are, in a bright and shiny New Year!

This isn’t going to be a long post – but it is going to be a profound one. In the adventure I’m currently working on for the Zenith-3 campaign, a situation arose in which a character was going to be exposed to multiple minutes of an environment doing damage to him every turn.

Not just a few dice, but a lot of dice. Fortunately, he also has a lot of protection. How many dice, and whether or not that protection was going to be enough, would depend on what the character chose to do.

(Note that I’m being circumspect because this adventure hasn’t been run yet).

He could choose to head into the danger and incur a higher rate of damage. He could try to get out of danger by the shortest possible route – which also incurs that higher rate of damage but only for a relatively short time. Unless he gets lost along the way – a potential real danger. He has other options, as well.

So I didn’t know how many dice a round he would be taking, but I knew this: there are 3 twenty-second rounds in a minute (or 6 10-second rounds – the latter is our default, the former something I’m experimenting with). That’s 15 rolls of 8-to-10d6 every five minutes. And the character could be waiting in this situation for 20, 30, 40 minutes or more.

120 or more rolls of 8-to-10 d6 each. And apply defenses to each. And calculate damage from each. And accumulate that damage from each. And recover some of that damage from each.

It might take as little as two minutes to do each, but it would probably be more. FOUR HOURS of making rolls while everyone twiddled their thumbs.

There had to be a better way. And then I thought of one, and got Google Gemini to help flesh it out and make it real.

The Principle

As you make more and more rolls, they become more and more inclined to average out. That’s one of the abiding principles harnessed by The Sixes System, and it’s something I understood very clearly. So why not leverage that fact? Roll ONCE and apply a mathematical manipulation to that result to get the outcome of R rolls.

Sounds incredibly simple, doesn’t it? Well, it’s not quite that easy, but it’s pretty close to it.

The procedure

  1. Roll Once.
  2. Subtract the average roll to get Delta.
  3. Determine R, the number of Rolls that this calculation is going to represent.
  4. Multiply the Delta by 1/ (R^0.5).
  5. Add the average roll to the result.
  6. Apply any modifiers that are applicable to every roll. The result is the average result over the totality of R rolls.
  7. Multiply by R.
  8. Apply any other adjustments. Which gives you the total of effect at the end of those R rolls.

This sounds complicated, but in most RPGs it will be even simpler.
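
If you want to sanity-check the arithmetic at the table, here’s a minimal sketch of the procedure in Python – the function name and parameters are my own invention, not part of any system:

    import random

    def root_r_total(dice, sides, rolls, per_roll_mod=0, seed_roll=None):
        # approximate the total of `rolls` rolls of (dice)d(sides) from one roll,
        # shrinking that roll's deviation from the average by 1 / sqrt(rolls)
        if seed_roll is None:
            seed_roll = sum(random.randint(1, sides) for _ in range(dice))
        average = dice * (sides + 1) / 2
        per_roll = average + (seed_roll - average) / rolls ** 0.5 + per_roll_mod
        return per_roll * rolls

    # the worked example below: 8d6, a seed roll of 33, 12 rolls, defenses of 20
    print(root_r_total(8, 6, 12, per_roll_mod=-20, seed_roll=33))   # ~113.3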

An example

Let’s pick… 8d6 damage, 12 rolls over 12 rounds. Defenses subtract 20 from the result. Anything that gets through the defenses also does x3.5 Stun damage. At the end of each minute, the character gets 25 Body back and 50 Stun. He has a pool of 120 HP and 240 stun to draw upon.

  1. I roll 8d6 and get 33.
  2. The average of 8d6 is 8 x 7 / 2 = 28. Delta = +5.
  3. R = 12.
  4. Delta x 1 / (R^0.5) = 5 / 12^0.5 = 5 / 3.464 = 1.4434
  5. Add the average roll 28 + 1.4434 = 29.4434.
  6. Subtract Defenses of 20 = 9.4434.
  7. Multiply by R = 12 x 9.4434 = 113.3208. Round in the character’s favor to 113. Multiply this by 3.5 for the Stun = 395 stun damage.
  8. If 3 rolls is a minute, 12 rolls is 4 minutes, and the character gets 4 x 25 = 100 HP back and 4 x 50 = 200 Stun back. So his losses at the end of the 4 minutes are 113-100=13 HP and 395-200=195 stun.

That took about 5 minutes to do – but I was typing explanations. If I just did it? 2 minutes, tops – 60 to 90 seconds, more likely.

Another example

There are 25 men defending a castle wall. There are 200 archers attacking them, and each archer gets 2 shots per round. Each shot does 1d6 if it hits. The archers have a 3 in 20 chance of hitting, and half of those hits will strike the castle wall instead, so it’s effectively 1.5 on d20. Archers have to inflict 20 points of damage to kill a target.

There are a couple of preliminary calculations needed for this example.

  • 200 x 2 x 1.5 / 20 = 30 hits per round.
  • Distributed over 25 men, that’s effectively 1.2 hits per defender per round.
  • At an average of 3.5 points per hit, that’s an average of 4.2 damage per defender per round.
  • At 20 needed, that’s an average of 20 / 4.2 = 4.76 rounds of combat.

That’s all well and good, but we don’t want averages – we want specifics.

So let’s do 5 x 6d6 per round for 4 rounds and see where we’re at (5 x 6 = 30).

  1. Roll 6d6. I get 18.
  2. The average of 6d6 is 6 x 7 / 2 = 21. Delta is -3.
  3. R = 4.
  4. -3 x 1 / 4^0.5 = -3 / 2 = -1.5.
  5. -1.5 + 21 = 19.5.
  6. 19.5 x 4 = 78.
  • 78 points distributed amongst 25 men is 3.12 points per man per round.
  • For every man who’s taken twice that, there will be one who’s taken half that. So 1.56 and 6.24.*
  • Repeat: 0.78 and 12.48.
  • Repeat: 0.39 and 24.96.
  • Six numbers, so out of every 6 defenders, 1 is dead, 1 is half-dead but still fighting, and 1 is wounded slightly.
  • 25 defenders, so the total is 25/6=4 dead, four half-dead, four lightly wounded, 13 virtually whole.
  • * Assuming the roll is symmetrical.**

    ** Okay, this isn’t quite true – if there’s a minimum result, the true answer is half-way from the result to the minimum matches halfway from the maximum to the maximum minus the result. But this is a lot quicker and easier, and it works even when you don’t know what the maximum is, as in this case.

Specifics vs Averages – it makes a VERY big difference.

I would then run the same calculation for the defenders taking down attackers. About 4 minutes to run 4 rounds worth of siege.

But the next time around, I’d be informed by the results of the first run and increase R to 6 or 8, and run the attack in bigger ‘chunks’ of time.

Useful R values

If you can arrange it, the following R values are especially convenient, for reasons that should be obvious: 4, 9, 16, 25, 36, 49, 64. The square roots of these numbers are 2, 3, 4, 5, 6, 7, and 8, respectively.

Perhaps less obvious are 2.25, 6.25, 12.25, 20.25, 30.25, 42.25 and 56.25. These become 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, and 7.5, respectively.

Wait, What? “2.25” rolls? “2.25” rounds? How does THAT work?

The “round” or “turn” is an artificial construct. It doesn’t actually exist, it’s just a convenient dividing line. Multiply by the number of minutes or seconds in one, and you get real-world units of, respectively, minutes or seconds.

And that works in the other direction, as well. Let’s say there are 12 seconds in a round – then 2.25 rounds is 2.25 x 12 = 27 seconds.

Or, let’s say there are 15 seconds in a round, and a character has to run through a danger zone, which will take him 72 seconds at his movement rate. 72 / 15 = 4.8 rounds. Not 4 rounds or 5 rounds: 4.8 rounds.

Or, to go back to the original trigger for all this – the character might spend 16 minutes in the 6d6 zone, then cross 100m of 8d6, 100m of 10d6, and 200m of 12d6. Most movement rates aren’t going to translate those distances into neat time intervals when they are measured in rounds. Seconds, maybe, maybe not, but rounds? Almost certainly not.

Three Final Tips

    Tip #1

    If you really want your results to FEEL like you’d rolled them all, aim for an R that is one less than required and add one totally legitimate random roll. In reality, this inflates the randomness more than is warranted, but it gives the right ‘feeling’ in play.

    So if your true R is 15, use R=14. One random roll feeds into the calculation, and one stands alone. I do NOT recommend this, though – it’s an extra set of die rolls for not enough reward.

    Tip #2

    The second one is this: if you have a long interval, break it into smaller chunks and a smaller R, and generate a new ‘seed value’ for each chunk. For 20, 30, or 40 minutes? 5 or 6 minutes at a time. For longer? 10, or 15. For even longer? 20.

    Divide the time by the total number of rolls that you want to make. That will tell you how long each chunk should be – just round to the nearest convenient number.

    Tip #3

    The more granular the die roll, the better this works. Let that sink in for a moment. It’s not just that the system processes 12d6 just as quickly as it does 6d6, saving more time; the results are qualitatively more nuanced.

    But that granularity is also enhanced with higher R values.

    That implies a sweet spot – and it’s going to be roughly found at (R x N) ^0.5. And the closer that R and N are, therefore, the closer you are to the sweet spot – without even calculating it.

    If you have a choice between 15 dice and R=8, or 10 dice and R=12, the second will give the better results.

    If you have a choice between 60 dice and R=4 vs 15 dice and R=16, the second one wins every time. Not just in ease of rolling, but in quality of result.

Well, that’s the power of 1 on Root R. Hopefully it’s useful out there!


The Adverse Effects Engine


The AEE is a subsystem that slots into any RPG for simulating everything from Bad Weather to Plagues & Poisons.

Time Out Post Logo

I made the time-out logo from two images in combination: The relaxing man photo is by Frauke Riether and the clock face (which was used as inspiration for the text rendering) Image was provided by OpenClipart-Vectors, both sourced from Pixabay.

The Backstory

A while back, I was working on an adventure for one of my campaigns (being deliberately vague, here) and I needed to look up the effects of Cobra Venom in the Hero System.

I wasn’t impressed – this stuff is supposed to be dangerous, even deadly, and what was offered in the bestiary supplement would barely kill a child.

And this particular venom was supposed to derive from supernatural Cobras summoned by a pissed-off deity. So that wouldn’t cut it.

I developed the Venom described in the box below, but wasn’t very happy with it – too fiddly, and perhaps a touch TOO lethal.
 
 
 
 
 

PER HIT:

  • Immediate on exposure: -5 all primary stats -2 PD -2 ED -10 END -1 ALL SKILLS -2 OCV -2 DCV plus 10 STUN 1 BODY dmg
  • Round after exposure: -3 all primary stats -1 PD -1 ED -6 END -1 ALL SKILLS -1 OCV -1 DCV (all cumulative) plus 10 STUN 2 BODY dmg
  • 2nd round after exposure: -2 all primary stats -4 END -1 ALL SKILLS -1 OCV -1 DCV (all cumulative) plus 5 STUN 3 BODY dmg
  • 3rd, 4th, rounds after exposure: -1 all primary stats -2 END plus 3 STUN 2 BODY
  • 5th round after exposure: -1 all primary stats -1 PD -1 ED -2 END -1 ALL SKILLS -1 OCV -1 DCV plus 2 STUN 1 BODY
  • 6th, 7th round after exposure: as per 3rd & 4th rounds
  • 8th round after exposure: as 5th round
  • 9th, 10th round after exposure: -2 END plus 2 STUN 1 BODY

These are accompanied by appropriate physical & mental responses – shaking, stumbling, delirium, semi-consciousness, poor decision-making, extreme pain (burning sensations) etc. The wound site will blister as though exposed to Mustard Gas or a gas stove’s flame, and the effect will slowly spread through the 10 rounds, starting at 2-3 cm diameter and growing by +1 cm in diameter each subsequent round.

TOTAL EFFECTS:

    -5-3-2-2-1-2-1= -16 all primary stats;
    -2-1-1-1 = -5 PD same ED;
    -10-6-4-2-2-2-2-2-2-2 = -32 END;
    -1-1-1-1-1=-5 ALL SKILLS;
    -2-1-1-1-1=-6 OCV & DCV;
    10+10+5+3+3+2+3+3+2+2 = 43 STUN
    1+2+3+2+2+1+2+2+1+1 = 17 BODY

Clothing: Adds 1 round delay to the above

A tourniquet: Halves the rate of effect shown

Antivenom: Stops effects instantly, restores 1/4 of the damage taken to stats & skills (round down)

If the character survives the course of the attack and does not get hit again, he can recover:

    1 Primary stat point (each stat) / 30 mins
    1 OCV & DCV / 30 mins
    1 Secondary stat point / hour
    END as Normal
    STUN as 1/2 Normal
    BODY as Normal

Those second thoughts didn’t happen right away – in fact, about a year passed between generating and reviewing the above, and we’re still nowhere near it appearing in play (which it may never do), so I marked it for reconsideration and moved on to higher-priority tasks.

Then, a few weeks ago, in Traits of Exotic d20 Substitutes pt 1, I casually tossed out a completely original system (inspired by the Sixes System, for which I still have to write the final part).

A number of people seemed to like its elegance and simplicity and flexibility. So, a couple of days later, when I came across my note to review the Cobra Venom, the two thoughts clicked together.

But, to actually be usable in play, I needed to dig deeper into what was a casual aside at the time. And so, here we are.

The Core System

The GM specifies N dice, and a target of T sixes. At intervals (generally fixed by the GM but may be variable), the character rolls Nd6. Any sixes are counted towards T, until the total is T or more.

    If one 1 is showing, something bad happens (specified by the GM but not necessarily announced).

    If two 1s are showing, something worse happens (specified as above). Or the same bad thing happens twice. Or the same bad thing happens, and some other bad thing happens. Whatever – it’s worse.

    If three 1s are showing, something really bad happens (specified as above). And T might increase by 1. Or one of the alternatives listed previously. It’s useful to be consistent.

    If four or more 1s are showing, something catastrophically bad happens and T increases by 1 or more. Or (you guessed it) as above.

    You also have the option of specifying a very small ‘something bad’ if no 1s are showing, just to remind the victim that they have this hanging over their head.

The GM controls the severity of each level of effect, the frequency of rolls, the size of the rolls (N), and the target (T). The combination of N and T also dictates what the frequency of occurrence of the different levels of penalty should be.

Nice, neat, and simple – in theory.

To really use it in practice, the GM needs a way to estimate what the total effects are likely to be. Then he can adjust the penalty levels and N and T accordingly to get exactly what he wants the probable outcome to be.

Or he can start with predetermined outcomes in mind and divide them up into the different penalty levels according to a convenient pairing of N and T, based on E, the number of rolls it’s expected to take to reach T.
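
If you’d rather simulate than calculate, a quick-and-dirty Monte Carlo sketch (Python; all the naming here is mine) will estimate both E and the likely spread of 1s for any N and T before you inflict them on a victim:

    import random
    from collections import Counter

    def estimate(n_dice, target, trials=10_000):
        total_rolls, ones = 0, Counter()
        for _ in range(trials):
            sixes = 0
            while sixes < target:
                roll = [random.randint(1, 6) for _ in range(n_dice)]
                sixes += roll.count(6)
                ones[roll.count(1)] += 1
                total_rolls += 1
        avg_rolls = total_rolls / trials
        per_run = {k: round(v / trials, 2) for k, v in sorted(ones.items())}
        return round(avg_rolls, 2), per_run

    # average rolls needed, plus the average number of rolls showing each
    # count of 1s along the way, for N=5 dice and a target of T=4 sixes
    print(estimate(5, 4))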

On Today’s Menu

I’m going to outline the process in full, with tables and convenient shortcuts built in for the GM, for the first approach. Then I’ll outline the second in a shorter format, because it will use the same tables as the first approach.

When I was planning and contemplating this expansion, I also thought up a number of variations, so I’ll describe them and their impacts as the cherry on top.

Set N and T

These should always be determined by E, the expected number of rolls to reach T rolling N dice at a time.

    T=1, for N=1 to 8: 6, 3, 2, 2, 2, 1, 1, 1
    T=2, for N=1 to 8: 12, 6, 4, 3, 3, 2, 2, 2
    T=3, for N=1 to 8: 18, 9, 6, 5, 4, 3, 3, 3
    T=4, for N=1 to 8: 24, 12, 8, 6, 5, 4, 4, 3
    T=5, for N=1 to 8: 30, 15, 10, 8, 6, 5, 5, 4
    T=6, for N=1 to 8: 36, 18, 12, 9, 8, 6, 6, 5
    T=7, for N=1 to 8: 42, 21, 14, 11, 9, 7, 6, 6
    T=8, for N=1 to 8: 48, 24, 16, 12, 10, 8, 7, 6

or, you might prefer to pick an N and then a T:

    N=1, T=1 to 8: 6, 12, 18, 24, 30, 36, 42, 48
    N=2, T=1 to 8: 3, 6, 9, 12, 15, 18, 21, 24
    N=3, T=1 to 8: 2, 4, 6, 8, 10, 12, 14, 16
    N=4, T=1 to 8: 2, 3, 5, 6, 8, 9, 11, 12
    N=5, T=1 to 8: 2, 3, 4, 5, 6, 8, 9, 10
    N=6, T=1 to 8: 1, 2, 3, 4, 5, 6, 7, 8
    N=7, T=1 to 8: 1, 2, 3, 4, 5, 6, 6, 7
    N=8, T=1 to 8: 1, 2, 3, 3, 4, 5, 6, 6

Don’t worry about these not lining up in neat columns; the same information is available in the tables below.

Advice:

This second arrangement is appealing because of the clear patterns shown for N=1, 2, 3, and 6 – but those patterns can be misleading if used for extrapolation, as N=4 shows with its jump from 3 to 5, and N=5 with its jump from 6 to 8 (the stronger example of the two). Extrapolation is not as reliable as the patterns suggest and can’t be depended on – so I will always recommend using the first arrangement, simply because it doesn’t invite potentially misleading extrapolations.
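
If you ever need a value outside the 8 x 8 range, the E values in the two lists above appear to be nothing more exotic than 6 x T / N, rounded up:

    import math

    def expected_rolls(T, N):
        return math.ceil(6 * T / N)

    print(expected_rolls(4, 5), expected_rolls(10, 6))   # 5, 10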

High-T = long durations, especially with lower N values. That’s suitable for diseases that have a long interval between checks – every 12 or 24 hours, say. But for poisons, you don’t want an E that’s more than 6 or 8, even for the worst ones, and 5-6 is probably a better target even for those. E=3-4 is good for mid-strength poisons, and E=1-2 should really be reserved for only the fastest-acting.

For every really lethal poison or disease, there should be several of the mid-strength variety, and for every mid-strength, many weaker poisons – or so runs one line of thinking. But evolution favors those poisons that are strong enough to take down whatever the poisoner feeds on or is commonly attacked by; it doesn’t happen in isolation. That can cause potency to increase, moderating the earlier trend. So here are a trio of ratios to get you thinking:

    By Theoretical Threat Magnitude: 1: 3: 9-12
    By Evolutionary End-point: 1 : 2 : 3
    Compromise: 2 : 5 : 10

Playing into that decision should be the poison reservoir. In other words, how many bites of the poison cherry can one poisoner deliver?

Size of the creature impacts this – the larger the creature, the larger the venom sacs (or their equivalent).

Here are some real-world assessments:

Tiny/Small – insects, small spiders, scorpions, small centipedes – venom capacity is very low and either single-use or low-frequency bursts. The venom is metabolically costly relative to body size. Often have a single, full dose for immediate defense/predation. Recovery is long (hours/days).

Medium – mid-sized snakes, large spiders, cone snails, large scorpions, etc – Moderate venom capacity, low-moderate frequency of delivery – three uses in quick succession. Capable of venom metering – injecting less than maximum to conserve supply. May deliver a full dose for a large threat, or a “dry bite” (no venom). Can deliver a burst of 2-3 significant bites, then need short recovery (minutes).

Large – large snakes, octopuses, large fishes – high venom reservoirs, Moderate-high frequency of use (multiple uses or sustained delivery). High reservoir allows for multiple, significant envenomations. Gaboon Vipers, in particular, are known for a massive venom yield and ability to deliver repeated, high-volume strikes. Delivery can be sustained over a short period. Recovery time for full capacity is still long, but practical use is frequent.

As a general rule of thumb, the less venom, the deadlier it has to be, because volume decreases as the cube of linear size. The venom therefore has to become more potent just to keep up. Larger creatures have much more venom, which they can utilize in a number of different ways, one species compared to another. On top of that, smaller creatures are less physically resilient, and need to end combat encounters more quickly in order to survive – so that’s an extra push toward higher toxicity.

The graphic below was provided by Gemini, Google’s AI, and edited by me:

I also asked Gemini to extrapolate its findings to cover giant and ‘dire’ creatures, and this is what it came back with (edited):

Gargantuan Creatures – 5m long spiders, Giant Snakes: Size factor 5-10 x earth “real”. Venom Capacity up to 50x that of normal equivalents. Potency may decrease slightly, but total damage output increases exponentially due to volume. Sustained High Frequency of venom delivery, can deliver (5-10x earth “real”) lethal doses with minimal pause. (May take weeks to recharge but still have sufficient venom for 2-3 encounters while recharging).

Colossal Creatures – 25m sea creatures, “Kaiju” spiders, etc. Size Factor 25+ times earth “real”. Venom Capacity – essentially unlimited. Potency is often low relative to size, but the volume is so immense that it acts as a biological weapon (or breath weapon, acid spray, etc.), with toxic effects on top. The creature’s bite/sting is less about injecting a dose and more about dousing the target (and/or the environment around it).

A “Dire Version” is a creature that defies the standard biological trade-off, making it inherently more dangerous and a true “boss” encounter. The Dire modifier should break the Inverse Correlation by increasing both Reservoir Size and Venom Potency.

So, once you have T, N and E, and have started thinking about bite frequency vs toxicity, the next step is working out what those E rolls are actually likely to inflict.

Probable Occurrence of Adverse Effects

By the way, before we get to it – generating this table of results proved too complicated for both Gemini and ChatGPT! Both understood clearly what I wanted them to do, and (as much as an LLM can) why, and generated a solution to the problem of how – that didn’t work.

Repeated corrections were attempted in both cases, and failed. That’s not a measure of my intellect or anything like that – it’s an indication of just how much detailed work lies under the surface of this innocuous-looking table.

If I had a BASIC compiler, I could have written the code myself from one of their algorithms in less time, and in about 20 lines.

Key:

“No +” represents low chance of more. Use the indicated number of occurrences in estimating total impact from impact per occurrence.

“+” represents a moderate chance of more. Use the indicated number of occurrences in estimating total impact from impact per occurrence.

“++” represents a significant chance of more. Use the indicated number of occurrences + 0.5 to estimate the average total impact from impact per occurrence.

“+++” represents a high likelihood of more occurrences than the number shown, and a high confidence of at least this many occurrences. Use the indicated number +1 to estimate the average total impact from impact per occurrence.

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
1 1 6 1
1 2 4 1 0
1 3 3 1 0 0
1 4 2 0+++ 0 0 0
1 5 2 0+++ 0+ 0 0 0
1 6 1 0+++ 0+ 0 0 0 0
1 7 2 0+++ 0+ 0 0 0 0 0
1 8 1 0++ 0++ 0 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
2 1 12 2
2 2 7 1+++ 0
2 3 5 1++ 0+ 0
2 4 4 1++ 0+ 0 0
2 5 3 1 0+ 0 0 0
2 6 3 1 0+ 0 0 0 0
2 7 3 1 0++ 0 0 0 0 0
2 8 2 0++ 0++ 0 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
3 1 18 3
3 2 10 2+++ 0+
3 3 7 2+ 0+ 0
3 4 5 1+++ 0++ 0 0
3 5 4 1++ 0++ 0 0 0
3 6 4 1++ 0+++ 0 0 0 0
3 7 3 1 0++ 0 0 0 0 0
3 8 3 1 0+++ 0+ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
4 1 24 4
4 2 13 3++ 0+
4 3 9 3 0++ 0
4 4 7 2++ 0+++ 0 0
4 5 5 2+ 0+++ 0 0 0
4 6 5 2 1 0+ 0 0 0
4 7 4 1++ 0+++ 0+ 0 0 0 0
4 8 4 1++ 0+++ 0+ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
5 1 30 5
5 2 16 4+ 0+
5 3 11 3+++ 0+++ 0
5 4 8 3 0+++ 0 0
5 5 7 2+++ 1 0 0 0
5 6 6 2+ 1 0+ 0 0 0
5 7 5 1+++ 1 0+ 0 0 0 0
5 8 5 1+++ 1+ 0++ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
6 1 36 6
6 2 19 5+ 0++
6 3 13 4++ 0+++ 0
6 4 10 3+++ 1 0 0
6 5 8 3 1+ 0+ 0 0
6 6 7 2+++ 1+ 0+ 0 0 0
6 7 6 2+ 1+ 0+ 0 0 0 0
6 8 5 1+++ 1+ 0++ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
7 1 42 7
7 2 22 6 0++
7 3 15 5 1 0
7 4 11 4 1+ 0 0
7 5 9 3++ 1+ 0+ 0 0
7 6 8 3 1++ 0+ 0 0 0
7 7 7 2++ 1++ 0++ 0 0 0 0
7 8 6 2 1++ 0++ 0 0 0 0 0

 

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
8 1 48 8
8 2 25 6+++ 0++
8 3 17 5+++ 1 0
8 4 13 5 1++ 0 0
8 5 10 4 1++ 0+ 0 0
8 6 9 3++ 1+++ 0+ 0 0 0
8 7 8 3 1+++ 0++ 0 0 0 0
8 8 7 2++ 1+++ 0++ 0 0 0 0 0

E is usually a decimalized number because the calculations determine the average outcome over many sets of rolls. “2.6” means that 40% of the time it will take 2 rolls and 60% of the time it will take 3 – but there is always an outside chance that it might take 1 or 4, so those percentages are approximate. Because in the real world you can’t have “0.6 of a roll”, these have been rounded up, and the resulting whole number of rolls used to calculate the rest of the table.

If you want to know the exact query that ‘broke’ the AIs, it was something like this:

For N 6-sided fair dice from 1 to 8, calculate the number of rolls required to reach a total number of sixes shown across all rolls equal to or greater than T, which also varies from 1 to 8, and label it E1. Because in the real world you can’t have “0.6” of a roll, round E1 up and label it E. For E rolls of N fair six-sided dice, calculate the number of rolls exactly K 1s will be seen, with K varying from one to 8. If the result for a given K (designated R) is an integer, show the integer; else if RK-INT(RK) is <0.25, show INT(RK); else if RK-INT(RK) is <0.5, show INT(RK) and one “+” sign; else if RK-INT(RK) is <0.75, show INT(RK) and two “+” signs; else show INT(RK) and three “+” signs, for example “2+++”. If an entry is impossible, eg K>N, show a blank space, not a 0. Format the results in a plaintext tab-delimited table with columns T, N, E, K=1, K=2, etc, sorted by T and sub-sorted by N.

Note that I had to run this query about 25 times, refining it each time, and eventually had to take out everything relating to the encoding and requesting the answer to 3 decimal places so that I could ‘manually’ do the coding.

Gemini calculated the results correctly, including the formatting, but couldn’t get the columns of data to line up correctly after 24 rows plus the heading – the K=1 column kept overwriting the E column, no matter what was done.

ChatGPT failed completely to apply the encoding correctly and had several calculation errors at first, but with a bit of patience and simplifying the question, did manage to produce a table that I could copy and paste into a spreadsheet. I then inserted additional columns to perform the calculation of RK-INT(RK) and interpret the results as per the “if” statement shown above. I then hid the working and manually transcribed the results into the tables above.
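
For anyone who’d rather skip the AI-wrangling, here’s roughly what the underlying computation looks like in Python – a sketch of my own, using the encoding from the query above. It takes E as an input (read it off the tables, or approximate it as 6T/N rounded up), and it won’t necessarily reproduce every entry above to the exact number of ‘+’ signs, but it lands very close:

    import math

    def encode(x):
        # the blank / + / ++ / +++ encoding described in the query above
        whole, frac = int(x), x - int(x)
        plus = 0 if frac < 0.25 else 1 if frac < 0.5 else 2 if frac < 0.75 else 3
        return str(whole) + "+" * plus

    def k_row(n, e):
        # expected number of rolls (out of E) showing exactly K ones, K = 1..N
        return [encode(e * math.comb(n, k) * (1 / 6) ** k * (5 / 6) ** (n - k))
                for k in range(1, n + 1)]

    print(k_row(6, 7))   # compare with the 6-6 line above: 2+++ 1+ 0+ 0 0 0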

Oh, and for clarity, I decided at the last minute to break what was one big table into the more user-friendly 8 smaller tables.

I’m getting ahead of myself with this picture, but it had to go somewhere! You’ll see why it’s included in due course. Image by Daniel McWilliams from Pixabay

So let’s pick an entry, I’ll decode it, and show you how it works. How about… 5 dice, target of four 6’s.

  1. Look for the line that starts 4 – 5.
  2. E is 6, so you can expect the victim to roll 6 times on average before getting to the target of 4 sixes – of course, it could happen on the very first roll, but it probably won’t.
  3. So, what’s likely to happen, bad-things wise, over the course of those expected 6 rolls?
    • K=1 has a value of 2+, so there will probably be two times that a single 1 is showing.
    • K=2 has a value of 0+++ – so the expectation is that this won’t happen on any of them, but there’s a very high chance of it happening at least once – just not a relative certainty of it. And that makes sense – there’s a 1 in 36 chance that you’ll get 2 ones on two dice, and a 25/36 chance that there will be no 1s on the other 2 dice, for a total chance of 25/1296 of this outcome, or 1.9%. But that doesn’t allow for a 1 on the first dice and another 1 on, say, the 3rd dice – so there are more ways for this to happen. And that puts the chance up so high that it’s very likely to happen.
    • K=3 through K=5 are extremely unlikely to occur. Not impossible, but not likely. For all practical purposes, this is a two- or three-tiered penalty structure.
  4. The key takeaway, though, is: 2 x one 1, 1 x two 1’s, and 8-3=5 x no 1’s.
  5. So multiply that by the chosen harm levels that go with those one-counts, add it up, and you have your expected damage.
    • To demonstrate this, let’s say no 1’s = 1 HP, one 1 = 5 HP, and two 1’s = 10HP. Then we would have 1×5 + 1×10 + 5×1 = 20 HP damage.
  6. But the system can be as complicated as you want.
    • Try no 1’s = 2 HP, 1 one = +5 HP, and 2 ones = +10 HP and a point of STR, each accompanied by the lesser levels.
    • Then, we would expect 2x(5+2) + 1x(10+5+2, & 1 STR) + 5×2 = 14+17+10 HP & 1 STR = 41 HP & 1 STR.

Choosing N and T

Unless you are modeling a specific set of conditions that dictate otherwise, or are working to deliver an ‘average fixed amount of damage’ (both covered in subsequent sections), the place to start is with the time intervals* between rolls and the number of rolls expected to be needed, E.

That will give you a short-list (perhaps VERY short) to choose between.

For example, if I want an effect to apply for an average of 6 time-intervals – it could be six rounds, six lots of 30 seconds, 6 minutes, 6 hours, 6 days, or whatever – I would look for E of 5, 6, or 7.

A whopping 17 entries in the table match, so I’m spoiled for choice. Since there are so many, I would lose the 5’s and 7’s and go with just the options that give exactly what I want.

That gets me down to 5 choices. I want the players to roll more than 1 die but no more than 4, because anything else takes longer to add up.

But that kills all my choices, so the decision is now which restriction do I desire more – the 6 rounds, or the 4 dice?

I decide that 7 rounds is acceptable, after all. That puts a lot of options back on my radar, including T=4 N=4 and T=4, N=5. The first has a higher chance of K=1 results, the latter introduces an outside chance of K=5 and an increased chance of K=3 and K=4. But it does fit my original 6-round desire. In the end, I choose to flip my compromise and choose the N=T=4 option.

Job Done.

Extending The table

Let’s compare the 4-4 line with the 8-8 line.

4-4: 7, 2+, 0+++, 0, 0; vs
8-8: 7, 2++, 1+++, 0++, 0, 0, 0, 0, 0

So you can’t break an 8-8 into two sets of 4-4 rolls. But there is a simple way.

Let’s look at N=12 T=12.

    Step 1: Divide both N and T by 2 (they have to be even).

    Step 2: Look up the results on the tables above. In this case, we get N=6, T=6.

    Step 3: The total number of rolls expected is the same for both – in this case, 7.

    Step 4: Because the scaling also increases the deliberately-induced ’rounding error’, subtract 1/2 from the expected number of rolls in response to the doubling. So that’s 6½.

    Step 5: The total number of rolls is the same, but doubling the dice makes it easier to roll high numbers of ones. The counts for the worse penalties will increase, while the count for the standard penalty remains stable or slightly decreases. Balanced against that is the fact that the probability of those higher penalties is so low that, in most cases, you’re increasing it by no more than a smidgen. Analysis led to these rules for doubling:

    • # and #+ are always treated as #.
    • ++ should be read as #+1.
    • If the full E is <16, +++ should also be read as #+1.
    • If E >15, +++ should be read as #+2.

    So, in this case, we have 2+++, 1+, 0+, 0, 0, 0.
    E is <16, so 2+++ becomes 3.
    1+ stays 1.
    0+ stays 0.
    0 stays 0.

    So three single 1s, 1 pair of 1s, and 2.5 rolls without ones.

    Step 6: But then we have to factor in the drop from 7 to 6½ expected rolls:

    3 x 6.5 / 7 = 2.8 single 1s, 0.93 pairs of 1s, and 6.5 – 2.8 – 0.93 = 2.77 rolls with no 1s.

    Step 7: Multiply those by your chosen penalty values.

    Let’s use…

      No 1’s = 3 HP
      One 1 = 10 HP
      Two 1s = 25 HP

    3 x 2.77 + 10 x 2.8 + 25 x 0.93
    = 8.31 + 28 + 23.25 = 59.56 HP.

    Step 8: Round up and add half of N or half of T, whichever is lower, to allow for the possibility of those results of 3 or more 1s.

    In this case, both are 6, giving a final estimate of 66 HP damage.
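
In Python, Steps 6 through 8 for this example boil down to a few lines (a sketch only, with the Step 5 counts plugged in as constants; the variable names are mine):

    import math

    e_scaled, e_original = 6.5, 7          # from Steps 3 and 4
    ones  = 3 * e_scaled / e_original      # single 1s after the doubling rules
    pairs = 1 * e_scaled / e_original      # pairs of 1s
    clear = e_scaled - ones - pairs        # rolls showing no 1s
    hp = {0: 3, 1: 10, 2: 25}              # the chosen penalty values
    damage = clear * hp[0] + ones * hp[1] + pairs * hp[2]
    print(math.ceil(damage) + min(12, 12) // 2)   # Step 8 -> 66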

It is recommended that + and +++ rolls should have their expected penalties softened, especially if using compound effects, as the levels set for them are based on occurrence numbers that are only partially expected to occur. 10% weaker is about right. Similarly, ++ rolls should be subjected to a moderate reduction (~20%) for the same reason.

Setting penalty levels

Ensure the penalty definitions are geometrically worse as K increases (e.g., K=2 is far worse than K=1) to reflect the exponentially decreasing probability of high-K rolls.

Setting penalty levels from a designated target

If plugging values into the calculations above doesn’t suit, you can establish a fixed geometric ratio – 2.5, 3, or 4 all work well – and use it to reduce your high-K results to a specific number of K=1 or K=0 results. I recommend the first of these, but it’s up to you.

For example, let’s use 6 dice and a Target of 3 sixes. E=4.

    One 1 = 1++, treated as 1.5
    Two 1s = 0+++, treated as 1.
    Three to Six 1s = 0. Ignored.
    No 1’s = 4-1-1.5 = 1.5.

And let’s set a nice robust target like 100 HP. That’ll get a PC’s attention in a hurry!

Set the ratio as 4, and let’s extend the calculation down to K=0.

    Two ones = 4 (the ratio) single ones, for a total of 5.5 of them.
    One 1 = 4 (the ratio) ‘no 1s’, so 5.5 x 4 = 22.

    100/22 = 4.54. Round down to 4. That 4 x 1.5 expected = 6 points, so our target is now 94 points from 5.5 k=1s.

    94 / 5.5 = 17.09. Round it down to 17. Multiply by the 1.5 times it’s expected to occur and we get 25.5. So our target goes down by 25 (round it down again) and our K=1 value is 17 HP.

    94-25 = 69. So our K=2 – expected once – is 69 HP.

Final results:

    K=0 does 4 HP.
    K=1 does 17 HP.
    K=2 does 69 HP.

Of course, if you set more modest targets, you’ll get more moderate results. This was deliberately extreme.
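
Here’s the same allocation as a small Python sketch (the helper name is mine), in case you want to play with different targets or ratios:

    def allocate(target_hp, ratio, exp_k0, exp_k1, exp_k2):
        k1_units = exp_k2 * ratio + exp_k1      # K=2 events expressed as K=1s
        k0_units = k1_units * ratio             # ...and those expressed as K=0s
        k0 = int(target_hp // k0_units)
        remaining = target_hp - int(k0 * exp_k0)
        k1 = int(remaining // k1_units)
        remaining -= int(k1 * exp_k1)
        return k0, k1, remaining                # what's left is the K=2 value

    print(allocate(100, 4, 1.5, 1.5, 1))        # -> (4, 17, 69)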

Variation One: Nested Damage Types

Try this on for size:

    K=0: minor HP damage.
    K=1: significant HP damage.
    K=2: significant HP damage & single-stat damage.
    K=3: significant HP damage & second-stat damage.
    K=4: Significant HP damage & both stats damaged.
    K=5: K=4 + Significant HP damage.
    K=6: K=4 + K=2.
    K=7: K=4 + K=3.
    K=8: 3 x K=4.

These results ‘nest’ three types of damage – two to stats and HP. You can use a similar system if the game system has multiple damage types, as in the Hero System:

    K=0: Some END loss
    K=1: K=0 + Some Stun loss
    K=2: 2 x K=1 + Some Body damage
    K=3: K=1 + K=2 + Some temporary Stat loss
    K=4: 2 x K=2 + Some temporary Stat loss
    K=5: K=4 + K=2
    K=6: K=5 + K=3.
    K=7: K=6 + K=4.
    K=8: 3 x K=5.

Defining ‘some’ as 5 points, that becomes:

    K=0: -5 END
    K=1: -5 END -5 Stun
    K=2: -10 END -10 Stun -5 Body
    K=3: -15 END -15 Stun -10 Body -5 Stat
    K=4: -20 END -20 Stun -10 Body -5 Stat
    K=5: -30 END -30 Stun -15 Body -5 Stat
    K=6: -45 END -45 Stun -25 Body -10 Stat
    K=7: -65 END -65 Stun -35 Body -10 Stat
    K=8: -90 END -90 Stun -45 Body -15 Stat

Or you could simplify things:

    K=0: -5 END -1 Stun -0 Body
    K=1: -10 END -5 Stun -1 Body
    K=2: 2 x K1
    K=3: 4 x K1 plus -1 stat
    K=4: 8 x K1 plus -5 stat
    K=5: 15 x K1 plus -10 stat
    K=6: 30 x K1 plus -20 stat
    K=7: 50 x K1 plus -30 stat
    K=8: 100 x K1 plus -40 stat

The Healing Difference

It’s up to you to decide whether or not healing – or recoveries, in the Hero System – can function until whatever-it-is has run its course.

Denying them makes these effects much nastier, and should cause you to halve whatever damage levels you had in mind – unless you want it to be potentially deadly.

Other Systemic Options

There are six other options that the GM can choose from. Some of these can operate in combination.

1. The Exhaustion Option

When you roll a 6, after adding it to your tally, that dice no longer gets rolled.

That means that your biggest risk of a really bad result is at the start, and the possible effects moderate as the pool shrinks.

It makes it much harder to predict the net outcome though.

Statistical Impact: This dramatically reduces the dice pool (N) over the course of the effect. Successes are achieved quickly, but the chance of rolling K>0 adverse events on any remaining die remains constant (1 in 6). Since the pool shrinks, the absolute chance of rolling multiple 1s decreases rapidly.

Game Feel: Front-loaded risk and rapid resolution. The initial rolls are the most dangerous. If a character survives the first two or three checks, the difficulty in rolling 1s drops faster than the difficulty in reaching the target, T.

Best For: Fast-acting, non-renewable poisons (like a single large dose of nerve agent) or short, focused challenges where the effect is quickly flushed from the system.

2. The Continual Option

Once you roll a 1, that dice is set aside still showing the 1 – it isn’t rerolled thereafter, and it counts toward future penalties. Rolling continues until every dice shows either a 1 or a 6. The Core exit condition of accumulating T sixes remains in effect but is overshadowed by the alternative.

This means that things get progressively worse until whatever-it-is has run its course and left your system. It’s nasty but good for supernaturally-sourced troubles.

The one saving grace is the additional way out – if every dice is either a 1 or 6, the nightmare ends. In some cases, the cause – disease or poison – will burn itself out fast, in others it will be the cause of extremely protracted suffering.

The higher the initial N, the worse this gets. If you start with 6 dice:

    1, x, x, x, x, x – T sixes (cumulative) or 5 sixes needed
    K=1 events every roll until you roll another 1 or exit
    1, 1, x, x, x, x – T sixes (cumulative) or 4 sixes needed
    K=2 events every roll until you roll another 1 or exit
    1, 1, 1, x, x, x – T sixes (cumulative) or 3 sixes needed
    K=3 events every roll until you roll another 1 or exit
    1, 1, 1, 1, x, x – T sixes (cumulative) or 2 sixes needed
    K=4 events every roll until you roll another 1 or exit
    1, 1, 1, 1, 1, x – 1 six needed
    K=5 events every roll until you roll a 1 or a 6. If you roll a 1, there is a K=6 event.

Each time a die is locked on ‘1’, your chances of getting the sixes you need go down and the number of rolls you’re expected to need will go up.

Damage accumulates very rapidly, and with accelerating pace.

3. The Progressively-worse Option

Each 1 that gets rolled increases the Target by 1.

This puts survival on a knife-edge and ensures that if you suffer badly, the effects will linger for longer – making it a good choice for plagues.

Statistical Impact: This maintains the dice pool (N) but increases the overall target (T) dynamically. Every adverse event makes the recovery condition harder to achieve. This means rolling a 1 directly increases the expected duration (E) of the effect. A single unfortunate roll early on can potentially double the total expected number of checks.

Game Feel: Cascading failure and desperation. Failure feeds failure. The character sees the light at the end of the tunnel (the target T) constantly moving further away. This is highly effective for plagues or diseases that exploit the body’s weakening condition.

Best For: Plagues, zombie infection progression, or effects that are harder to fight off the longer they persist (like a viral load).

4. The Blessed Balm Option

Sixes rolled can undo some of the harm caused. Two sixes = one 1, three sixes = 2 ones, and so on.

This creates a situation in which the health of the sufferer is on a roller-coaster, up and down with each roll of the dice. Eventually, these changes will tend to dampen out. Works very well with the Progressively-Worse option.

This fundamentally re-balances the risk assessment, introducing greater variance into the process – rolls are either great (success towards T), terrible (a large number of 1s), or tension-building (anything else). It models a scenario where the character’s vitality is constantly tested.

Game Feel: Roller-coaster effect and high stakes per roll. The character may suffer a terrible wound but instantly cancel it in the same roll with a heroic recovery effort. This variation is highly dramatic.

Best For: Magical duels, effects that fluctuate with effort or willpower, and scenarios where the poison’s progression is inherently unstable.

5. The Devastating Option

The first 6 in a roll doesn’t count, only sixes above that one.

This strongly biases the results away from recovery, without ruling it out entirely. It makes any of the ‘nasty’ options far worse.

Statistical Impact: This increases the expected number of rolls (E) needed to reach T without changing the probability of adverse events (K). Since E is higher, the total number of adverse events over the life of the affliction is necessarily higher. If you use the same N and T, the effect will be substantially longer and more severe than calculated in the base tables.

Game Feel: Recovery – and the downhill slide before it – feels incredibly sluggish and unforgiving. Successes are hard-won. This makes the affliction feel resistant or deeply embedded in the character’s system, guaranteeing prolonged suffering.

Best For: Artifact-level curses, dire creature venom, effects designed to be a significant narrative roadblock, or spurs for quests for a cure. Don’t hit a PC with this variant except in unusual circumstances when they have no-one to blame but themselves; DO hit someone important who the PCs want to save.

6. The With-A-Bang Option:

A selected number of the dice pool (N) start already showing 1s and are not re-rolled. These reduce in number by 1 each round, each becoming a regular rolled dice instead of a “fixed one”.

The “Fixed Ones” should be 1/2 of N or less. This ‘forces’ the occurrence of a high K result in the first round, tapering off in subsequent rounds. It also extends E by reducing the likelihood of sixes being rolled, generally by the number of fixed ones at the beginning, minus 1.

    6a. Bigger Bang Sub-variant

    The “fixed ones” are only removed when a 6 is rolled. A 6 used for the purpose does not count toward the target.

    This extends the durability of the high-N count AND effectively increases T by the number of initial ones showing.

    6b. It Will All Be Over Soon Sub-variant:

    As per the basic option 6, but fixed ones do not become regular dice, they become automatic sixes.

    This front-loads the results with high-K results but effectively reduces T by the number of initial ones showing.
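
To get a feel for how these options change play, here’s a small sketch of the core loop with the Exhaustion and Devastating options wired in as flags – the other options slot in the same way (all naming here is mine, not part of any published system):

    import random

    def afflict(n, t, exhaustion=False, devastating=False):
        # returns the count of 1s showing at each interval of a single run
        sixes, k_history = 0, []
        while sixes < t and n > 0:
            roll = [random.randint(1, 6) for _ in range(n)]
            new_sixes = roll.count(6)
            if devastating:
                new_sixes = max(0, new_sixes - 1)   # the first 6 doesn't count
            sixes += new_sixes
            if exhaustion:
                n -= roll.count(6)                  # dice that rolled 6 drop out
            k_history.append(roll.count(1))
        return k_history

    print(afflict(6, 4, exhaustion=True))   # e.g. [1, 0, 2, 0]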

Going Further

Any situation in which one character uses his skills to solve a multipart problem, or a group collaborate on a challenge, or a group face adversity together, or that can otherwise be broken down into units of roughly equal value, can be modeled using the Adverse Effects Engine.

Each part of the problem, or contributor to a solution, or participant, gets one dice, and they all roll collectively at the same time. This is especially powerful when coupled with the variants listed above.

Think of T as Progress, N as Resource/Skill, and K as Consequence (usually Immediate, but that depends on the definitions of harm that you set up).

Here are just a few of the many situations that the engine, correctly configured, can simulate.

Extreme Weather

N = number of PCs / NPCs in the group

T = N unless there is a natural channel either guiding the weather toward them (+1-3 T) or away from them (-1-2 T).

K = scale of impact of the weather event on the group.

Best Option: The Blessed Balm PLUS Progressively Worse:
Each 1 that gets rolled increases the Target by 1.
Sixes rolled can undo some of the harm caused. Two sixes = one 1, three sixes = 2 ones, and so on, mitigating an existing K result OR reducing the Target by 1 if there are no K results to mitigate.

Everybody rolls a dice and contributes the result to the roll. Sixes push the weather away from the party, Ones bring it down on top of them to a degree. Net effects change from round to round, with weather either just missing the characters (K=0), catching them at its fringes (K=1), or enveloping them (K>1).

For added flavor, throw in Nested Damage Types – First impact = Wind, Second impact = Rain / Hail / Snow, Third impact = Stronger Wind, and so on.

Product Development

Your PC is part of a team developing a new product for sale. You will need a Market Specialist (salesman), a production / manufacturing engineer, a marketer, a technical expert, and a team manager.

The salesman will identify a gap in the market to be targeted, the technician will design the product to fill that gap, the engineer will determine what the possible price-points are, and the rate of production that is possible, the marketer will figure out how to sell the product, and the team manager will make decisions and look at the costs of altering the production environment to change the production engineer’s forecasts.

Each team member gets at least 1 dice to contribute; if their specific skill is more than double the lowest specific skill in the team, they get a second one. If the company has a good history / reputation in the field, the GM can award 1-3 extra dice.

T starts at 1 per team member. If the company has a bad history or reputation to overcome, add 2. If the product is especially cutting-edge, increase this subtotal by +50% or even +100%. If the market is especially cut-throat, add another 25% on top of that. For each team member whose specific skill is less than half the highest specific skill amongst the team, add another 1.

Each 6 counts +1 towards the product being fit for purpose. Each roll marks a milestone in the development process – there can be blind alleys, competitor announcements changing the market / playing field, cost increases, new markets opening up, old markets closing down, scandals in the boardroom – anything and everything that affects the market for the product.

Penalties take the form of additional design time between rolls (K=0, K=1) and reductions in the fitness for purpose of the resulting product (K>0).

I don’t think any of the optional configurations are appropriate for this application.

Collaboration to overcome an environmental hazard (1)

Use the AEE for ongoing natural challenges where the group’s collective effort determines the duration, and individual poor luck determines the immediate suffering.

Crossing a Frozen Lake or Glacier, for example: N (Dice) = The number of characters in the group, or the lowest relevant skill rating in the group, or some reasonable fraction thereof. Only characters with a relevant skill or with a relevant stat value higher than a medium-high threshold get a die. Below those marks, the characters are liabilities toward the group’s success.

T = the GM-assigned difficulty, or some simple fraction thereof, +1 per character, whether they get a die or not.

Options Configuration: The Continual Option, PLUS The Blessed Balm Option.
Once a dice rolls a 1, it is locked, not rolled again, and counts toward future penalties. Rolling continues until every dice shows either a 1 or a 6. The Core exit condition of accumulating T sixes remains in effect but is overshadowed by the alternative.
Sixes rolled can undo some of the harm caused. Two sixes = one 1, three sixes = 2 ones, and so on – removing it from the locked pool and releasing it back into the live dice to be rolled.
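Read that way, the locked-pool mechanic looks like this in code – again only a sketch, and the assumption that a round’s surplus sixes buy locked 1s back at the “two sixes = one 1” rate is my own interpretation:

```python
import random

def frozen_lake_crossing(n_dice, target):
    """Continual + Blessed Balm, as I read it: dice that roll a 1 lock in as penalties,
    dice that roll a 6 lock in as progress, and everything else keeps rolling. Each
    round, surplus sixes can buy a locked 1 back into the live pool (two sixes = one 1,
    three sixes = two, and so on). The crossing ends when no live dice remain, or when
    the locked sixes reach the Target."""
    live, locked_ones, locked_sixes = n_dice, 0, 0
    rnd = 0
    while live > 0 and locked_sixes < target:
        rnd += 1
        rolls = [random.randint(1, 6) for _ in range(live)]
        sixes, ones = rolls.count(6), rolls.count(1)

        released = min(locked_ones, max(0, sixes - 1))   # Blessed Balm buy-back
        locked_ones = locked_ones - released + ones
        locked_sixes += sixes
        live = live - sixes - ones + released
        print(f"Round {rnd}: {rolls}  live {live}  progress {locked_sixes}/{target}  K {locked_ones}")
    return locked_sixes >= target, locked_ones

if __name__ == "__main__":
    made_it, harm = frozen_lake_crossing(n_dice=6, target=8)
    print("Across!" if made_it else "Stranded on the ice.", "Accumulated K:", harm)
```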

Collaboration to overcome an environmental hazard (2)

The party are roped together and have to climb.

N = Characters with climbing skill of +2 or better, or STR+DEX of 16 or better.

T = Total number of characters + 1-4 for difficulty of climb. Add 2 if the characters are under attack or otherwise pressured to climb at speed.

K = falls / setbacks. K>2 = ropes break.

Options Configuration: The Exhaustion Option simulates the rope tying the bad climbers to the good ones: When you roll a 6, after adding it to your tally, that dice no longer gets rolled.

For especially difficult climbs, add The Progressively-Worse Option: Each 1 that gets rolled increases the Target by 1.

For the most supremely challenging climbs, add the Devastating Option instead of Progressively Worse: The first 6 in a roll doesn’t count, only sixes above that one.

Ransacking A Library for specific (hidden / obscure) information

How long it takes to find a specific piece of hidden or obscure lore in a Library that might not even contain what you’re looking for depends on your reading speed (INT), presuming you have the ability to read, and your ability to recognize what you’re looking for, or that what you have just found is a clue to where to look next.

Well-structured libraries also make it a lot easier by excluding most of the books as irrelevant.

I would employ a simulation similar to the Design-A-Product example, but based purely on INT and not on specific skills. Note that if you have a character participating who is low INT, they can actually disrupt the efforts of higher-INT characters by continually interrupting them with “is this it?”.

Specifically, you want the total number of 6s to exceed the total number of 1s before the search comes to an end. If it doesn’t, either the answers aren’t there, or you’ve missed them. So long as there are dice to be rolled, there’s a chance, even if you’re at -2 or -3 to getting a result.

K=penalties to the success total, high-K = passing guards, accidental fires, magical books that scream when opened, ghostly librarians…

Focal Character overcoming an environmental hazard

All sorts of things fall into this category. Picking a combination lock, for example. Or Disarming a bomb with N critical steps that have to be performed in the right order. Or using a code-breaker.

You’ve seen these devices in the movies. Attach one to a lock and let it work its way through the combinations. To make life more difficult, consider a rolling code – that’s where a complex algorithm sets a new code every time, but only the 1000 or so valid results from that algorithm will be accepted. Which means that if you lock in the wrong answer, you have to start over.

The relevant skill here isn’t necessarily one of yours – it’s the design and programming skill of whoever designed and built the code-breaker. All you have to do is place it on the lock in roughly the right position.

With each success (each 6 toward the T), the stakes get higher. One wrong move (K>1) and it’s back to square one.

This scenario seems tailor-made for the Exhaustion Option – a 6 is a locked-in digit: When you roll a 6, after adding it to your tally, that dice no longer gets rolled.

Lesser K results are events that threaten failure / discovery, but which may not actually incur the problem.

T=Number Of Digits in the code.

N=T+a simple fraction of the programmer / designer’s skill.

Let the tension build…
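For the curious, here’s a rough sketch of that scenario in code; treating any roll with two or more 1s as the K>1 “back to square one” result, and leaving the “simple fraction of the designer’s skill” up to the caller, are both assumptions of mine:

```python
import random

def run_codebreaker(code_digits, designer_skill_dice, max_rounds=50):
    """The code-breaker with the Exhaustion Option: T = number of digits, N = T plus
    a few dice for the designer's skill, every 6 is a locked-in digit whose die leaves
    the pool, and a round with two or more 1s (K>1) resets the device to square one.
    Treating single 1s as mere scares is my own reading of the 'lesser K' results."""
    target = code_digits
    n_dice = code_digits + designer_skill_dice
    live, locked = n_dice, 0

    for rnd in range(1, max_rounds + 1):
        rolls = [random.randint(1, 6) for _ in range(live)]
        sixes, ones = rolls.count(6), rolls.count(1)

        if ones > 1:                          # one wrong move -- back to square one
            print(f"Round {rnd}: {rolls} -- the rolling code resets!")
            live, locked = n_dice, 0
            continue

        locked += sixes                       # Exhaustion: each 6 locks a digit in
        live -= sixes
        print(f"Round {rnd}: {rolls}  digits locked {locked}/{target}")
        if locked >= target:
            return True
        if live == 0:                         # only possible if N was set below T
            return False
    return False

if __name__ == "__main__":
    print("Cracked!" if run_codebreaker(code_digits=4, designer_skill_dice=3) else "No luck.")
```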


Click the icon to download the PDF

Using The AEE

If you prep in advance, you have plenty of opportunity to consult the tables and simply put the specific simulation instructions into your notes.

If you want to be able to use the system off-the-cuff, though, you’re going to have to be able to take it with you. For that reason, I’ve put together a PDF with the essential mechanics, shorn of explanation and example – but WITH a hyperlink back to this article.


Trade In Fantasy Ch. 5: Land Transport, Pt 5b


This entry is part 20 of 20 in the series Trade In Fantasy

This post continues the text of Part 5 of Chapter 5. Its content has been added to the parent post here and the Table of contents updated.

I have a series of images of communities of different sizes which will be sprinkled throughout this article. This is the first of these – something so sparsely-settled that it barely even qualifies as a community. It’s more a collection of close rural neighbors! Image by Jörg Peter from Pixabay

5.8.1.5 Blended Models

In general, the rule is one zone, one model. In fact, as a general rule, your goal should be one Kingdom, one model – that way, if you choose “England” as your model, your capital city will resemble London in size and characteristics, and not, say, Imperial Rome.

But, if you can think of a compelling enough reason, there’s no reason not to blend models. There are lots of ways to do this.

The simplest is to designate one model for part of a zone, and another to apply to the rest.

Example: if your capital city were much older than the rest of the Kingdom, you might decide that for IT ALONE, the Imperial model might be more appropriate, while the rest of the Kingdom is England-like. Or you might decide that because of its size, it has sucked up resources that would otherwise grow surrounding communities more strongly, and declare a three-model structure: Imperial Capital, France for all zones except zone 1, and England for the rest of Zone 1.

Example: A zone contains both swamp and typical agricultural land. You decide that those parts that are Swamp are German or Frontier in nature, while the rest are whatever else you are using.

An alternative approach to the problem that works in the case of the latter example is to actually average the two models’ characteristics and apply the result either to just the swamp areas, or to the zone overall.

When you get right down to it, the models are recommendations and guidelines, describing a particular demographic pattern seen in Earth’s history. There’s absolutely nothing to prevent you from inventing a unique one for a Kingdom in your world – except for it being a lot of work, that is.

5.8.1.6 Zomania – An Example

I don’t really think that a fully-worked example is actually necessary at this point, but I need to have one up-to-date and ready to go for later in the article. So it’s time for another deep-dive into the Kingdom of Zomania.

5.8.1.6.1 Zone Selection

I’ll start by picking a couple of Zones that look interesting, and distinctive compared to each other.

Zone 7 is bounded by a major road, but doesn’t actually contain that road; it DOES have capacity for a lot of fishing, though. And I note that there are cliffs in the zones to either side of it, so they WON’T support fishing – in fact, those cliffs appear to denote the limits of the zone. Zone 7 adds up to 167.8 units in area, and features 26 units of pristine beaches.

Zone 30 has an international border, and a major road, lots of forest and foothills becoming mountainous. It’s larger than Zone 7, at 251.45 units.

Because I haven’t detailed these areas at all, the place that I have to start is back in 5.7.1.13. But first…

5.8.1.6.1.1 Sidebar: Anatomy Of A Fishing Locus

I was going to bring this up a little later, but realized that readers need to know it, now.

Coastal Loci are a little different to the normal. To explain those differences, I threw together the diagram below.

1: is a coast of some kind. It might not be an actual beach, but it’s flat and meets the water.

2: It’s normal, especially if there’s a beach, for the ends to be ‘capped’ with some sort of headland. This is often rocky in nature. This is the natural location for expensive seaside homes and lighthouses.

3. Fishing villages.

4. Water. It could be a lake, or the sea, or even a river if it’s wide enough.

5. Non-coastal land, usually suitable for agriculture.

6. A fishing village’s locus is compressed along the line of the coast and bulging out into the water. This territory produces a great deal more food than the equivalent land area – anywhere from 2-5 times as much. Some cultures can go beyond coastal fishing, doubling this area – though what’s further out than shown is generally considered open to anyone from this Kingdom. Beyond that, some cultures can Deep-Sea fish (if this is the sea), which quadruples the effective area again. If you’re keeping track, that’s 2-5 x 2 x 4 = 16-40 times the land area equivalent. The axis of the locus is always as perpendicular to the coast as possible.

7. The bottoms of the lobes are lopped off…

8. And the land equivalent is then found by ‘squaring up’ the locuses…

9. …which means that these are the real boundaries of the locus. The area stays roughly the same, though.

The key point is this: you don’t have to choose “Coastal Mercantile” to simulate living on the coast and fishing for food. There are mechanisms already built into the system for handling that – it’s all done with Terrain and a more generous interpretation of “Arable Land”.

Save the “Coastal Mercantile” Model for islands and coastal cultures whose primary endeavor is water-based trade.

Zone 7, then, should have the same Model as all the other farmland within the Kingdom. I think France is the right model to choose.

Zone 30 is a slightly more complicated story. For a start, don’t worry about the road – like coastal villages, that gets taken care of later. For that matter, so are the heavy forestation and the local geography – hills and mountains. But this is an area under siege from the wilderness, as explained in an earlier post. Which changes the fundamental parameters of how people live, and that should be reflected in a change of model. In this case, I think the Germany / Holy Roman Empire model of lots of small, walled, communities is the most appropriate.

But this does raise the question of where the change in profile takes place. I have three real options: The Zone in its entirety may be HRE-derived; or the HRE model might only apply to the forests; or might take hold in the hills and mountains, only.

My real inclination would be to choose one of the first two options, but in this case I’m going to choose door number 3, simply because it will contrast the HRE model with the base French version of the hills and forests. In fact, for that specific purpose, I’m going to set the boundary midway through the range of hills:

5.8.1.6.1.2 Sidebar: Elevation Classification

Which means, I guess, that I should talk about how such things are classified in this system. There are eight elevation categories, but the categories themselves are based on the differences between peak elevation and base elevation.

I tried, but couldn’t quite get this to be fully legible at CM-scale. Click on the image above to open a larger copy in a new tab.

To get the typical feature size – the horizontal diameter of hills or mountains – divide 5 x the average of the Average Peak Elevation range by the average Relief range and multiply by the elevation category number, squared for mountains, or twice the previous category’s value, whichever is higher. Note that the latter is usually the dominant calculation! The results are also shown below. Actual cases can be 2-3 times this value – or 1/2 of it.

1. Undulating Hillocks – Average Peak Elevation 10-150m, Local Relief <50m; Features 16m (see below).
2. Gentle Hills – Average Peak Elevation 150-300m, Local Relief 50-150m; Features 32m.
3. Rolling Hills – Average Peak Elevation 300-600m, Local Relief 150-300m; Features 64m

     -> □ Zone 30 Treeline from the start of this category
     -> □ Normal Treeline is midway through the range

4. Big Hills – Average Peak Elevation 600-1000m, Local Relief 300-600m; Features 128m
5. Shallow Mountains – Average Peak Elevation 1000-2500m, Local Relief 600-1500m; Features 417m
6. Medium Mountains – Average Peak Elevation 2500-4500m, Local Relief 1000-3000m; Features 834 m
7. Steep Mountains – Average Peak Elevation 4500-7000m, Local Relief 3000-5000m; Features 1668m
8. Impassable Mountains, permanent snow-caps regardless of climate – Average Peak Elevation 7000m+, Local Relief 5000m+; Features 3336m.

Undulating Hillocks (also known as Rolling Hillocks or Rolling Foothills) are basically a blend of scraped-away geography and boulders deposited by glaciers. If the boulders have any sort of faults (and most do), they will quickly become more flat than round and start to tumble within the Glacier. When they come to rest, several will be stacked, one on top of another, generally in long waves. There will be gaps in between, which get filled with earth and mud and weathered rock over time, unless the rocks are less resistant to weathering than soil, in which case the rocks get slowly eaten away. In a few tens of thousands of years, you end up with undulating hillocks, or their big brothers. The flatter the terrain, the more opportunity there is for floodwaters to cover everything with topsoil, smoothing out the bumps. The diagram above shows how this ‘stacking and filling’ can produce structures many times the size of individual hillocks.

A very similar phenomenon – wind instead of glaciers, and sand instead of boulders – creates sandy dunes in deserts prone to that sort of thing. Over time, great corridors get carved out before and after each dune, generally at right angles to the prevailing winds. It can help you picture it if you think of the wind “rolling” across the dunes – when they come to a spot where the sand is a little less held together, it starts to carve out a trench, and before long, you have wave-shaped sand-dunes.

5.8.1.6.3 Area Adjustments – from 5.7.1.13

Zone 7 has a measured area of 167.8 units, but that needs to be adjusted for terrain. Instead of the slow way, estimating relative proportions, let’s use the faster homogenized approach:

Hostile Factors:
     Coast 1.1 + Farmland 0.9 + Scrub 1.1 = 3.1; average 1.03333.
     Coast +0.25 + Beaches -0.05 + Civilized -0.1 = +0.1
     Towns -0.1
     Net total: 1.03333
167.8 x 1.0333 = 173.4 units^2.

Benign Factors:
     Town 0.1 + Coast 0.15 + Beaches 0.15 + Civilized 0.2
     Subtotal +0.6
     Square Root = 0.7746
173.4 x 0.7746 = 134.3 units^2.

Zone 30 is… messier. Base Area 251.45 units^2.

Hostile Factors:
     Mining 1.5 +
     Average (Mountains 1.4 + Forest 1.25 + Hills 1.2 = 3.85) = 1.28
     Town -0.1 + Foreign Town 0.1 + River 0.2 + Caves 0.05 + Ruins 0.4 + “Wild” 0.1 = +0.75
     Net total = 1.5 + 1.28 + 0.75 = 3.53
251.45 x 3.53 = 887.6 units^2.

Benign Factors:
     Town 0.1 + Foreign Town -0.1 + River +0.1 + Caves 0.05 + Ruin 0.4 + Major Road 0.2
     Subtotal 0.75
     “Wild” = average subtotal with 1 = 0.875
     Sqr Root = 0.935
887.6 x 0.935 = 829.9 units^2.
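For anyone who wants to automate this, the whole homogenized adjustment fits in a few lines. The function below is only a sketch of the arithmetic shown above, and the ‘Wild’ handling (average the benign subtotal with 1) is hard-coded the way I used it for Zone 30; small differences from the hand-worked figures are just rounding.

```python
from math import sqrt

def adjusted_area(base_area, terrain_factors, hostile_features, benign_features, wild=False):
    """The homogenized adjustment used above: average the terrain multipliers, add the
    net hostile feature modifiers, multiply the base area by the result, then multiply
    by the square root of the benign subtotal (averaged with 1 in 'Wild' zones)."""
    hostile = sum(terrain_factors) / len(terrain_factors) + sum(hostile_features)
    benign = sum(benign_features)
    if wild:
        benign = (benign + 1) / 2
    return base_area * hostile * sqrt(benign)

# Zone 7: coast / farmland / scrub, net-zero hostile features, benign subtotal 0.6
print(round(adjusted_area(167.8, [1.1, 0.9, 1.1],
                          [0.25, -0.05, -0.1, -0.1],
                          [0.1, 0.15, 0.15, 0.2]), 1))            # ~134.3

# Zone 30: mountains / forest / hills, Mining plus the other features, 'Wild' benign handling
print(round(adjusted_area(251.45, [1.4, 1.25, 1.2],
                          [1.5, -0.1, 0.1, 0.2, 0.05, 0.4, 0.1],
                          [0.1, -0.1, 0.1, 0.05, 0.4, 0.2], wild=True), 1))   # ~831, vs 829.9 by hand
```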

To me, this looks very Greek – but it’s actually ‘Gordes’ in France, which the photographer describes as a village. One glance is enough to show that it’s bigger than the town depicted previously. Image by Neil Gibbons from Pixabay

5.8.1.6.4 Defensive Pattern – from 5.7.1.14

Zone 7 is pretty secure, the biggest threat being local insurrection or maybe pirate raids. A 4-lobe structure of 2½,5 looks about right.

When I measure out the area protected by a single fort and 4 satellites, I get 47.2 days^2. That takes into account overlapping areas where this one structure shares the burden 50% with a neighboring structure, and the additional areas that have to be protected by cavalry units.

That means that in Zone 7, there should be S x 134.3 / 47.2 = 2.845 x S of them, depending on what the size of a “unit” on the map is, measured in days’ march for infantry.

S is going to be the same for all zones. I’ve avoided making that decision for as long as I can – the question is, how large is Zomania?

5.8.1.6.5 Sidebar: The Size of Zomania, revisited

16,000 square miles – at least, that’s the total that I threw out in 5.7.1.3.

That’s about the same size as the Netherlands.

It’s a lot smaller than the Zomania that I’m picturing in my head when I look at the map. It IS the right size if the units shown are miles. But if they aren’t?

There are two reasons for regularly offering up Zomania as an example. The first is to provide a consistent foundation and demonstration of the principles discussed coming together into a cohesive whole. And the second is for me to check on the validity of the logic and techniques that I’ve described.

Feeling ‘wrong’ is keeping my subconscious radar from achieving purpose #2. And the Zomania being described being too small, which is the cause of that ‘wrong’ feeling, means that it isn’t going to adequately perform function #1, either.

There can be only one solution – Zomania has to grow, has to be scaled up. I want Zone 7 to be comparable to the size of the Netherlands, not the entire Kingdom, which should be comparable to France, or Germany, or England, or Spain.

A factor of 10? Where would 160,000 sqr miles place Zomania amongst the European Nations that I’ve named?

UK: 94,356. Germany: 138,063. Spain: 192,466. France: 233,032. So 160,000 would be smack-dab in the middle, and absolutely perfect for both purposes.

So Zomania is now 160,000 square miles, and the ‘units’ on all the maps are 10 miles each.

It wasn’t easy sorting this out – it’s been a road-block in my thinking for a couple of days now – triggered by results that seemed to show Zone 7 to be about 0.08 defensive structures in size.

And that is due to a second scaling problem that was getting in the way of my thinking:

How much is that in days’ marching?

In 5.7.1.14.3, I offered up:

    If d=10 miles (low), that’s 103,923 square miles.
    If d=20 miles (still low), that’s 415,692 square miles.
    If d=25 miles (reasonable), that’s 649,519 square miles.
    If d=30 miles (doable), 935,307 square miles.
    If d=40 miles (close to max), 1.66 million square miles.
    If d=50 miles (max), 2.6 million square miles.

But that was in reference to a theoretical 6 x 4, 12 + 12 pattern. Nevertheless, the scales are there. And they are way bigger than I thought they would be, and way too big to be useful as examples. Yet the logic that led to them seemed air-tight. Clearly, there was an assumption that had been made that wasn’t correct, but this problem was getting in the way of solving the first one.

Once I had separated the two, answers started falling into place. The numbers shown above are how far infantry can march in 24 solid hours, such as they might do in a dire emergency. But defensive structures would not be built and arranged on that basis.

If infantry march for 8 hours, they have just about enough daylight left to break camp in the morning (after being fed) and set up camp in the evening (digging latrines and getting fed). That’s the scale that would be used in establishing fortifications, not the epic scale listed. In effect, then, those areas of protection are nine times the size they should be.

So, let’s redo them on that basis:

    If d=10 miles (low), that’s 11,547 square miles.
    If d=20 miles (still low), that’s 46,188 square miles.
    If d=25 miles (reasonable), that’s 72,169 square miles.
    If d=30 miles (doable), 103,923 square miles.
    If d=40 miles (close to max), 184,444 square miles.
    If d=50 miles (max), 288,889 square miles.

And those are still misleading, because mentally, I’m thinking of this as the area protected by the central stronghold, and ignoring the satellites. To get the area per fortification, we should divide by the total number of fortifications in the pattern – in the case of the numbers cited, that’s 6×4+12=36.

    If d=10 miles (low), that’s 320.75 square miles.
    If d=20 miles (still low), that’s 1283 square miles.
    If d=25 miles (reasonable), that’s 2,004.7 square miles.
    If d=30 miles (doable), 2,886.75 square miles.
    If d=40 miles (close to max), 5,123.4 square miles.
    If d=50 miles (max), 8024.7 square miles.

Reasonable = 2004.7 square miles, or roughly equal to a 44.8 x 44.8 mile area. For a really tightly packed defensive structure like the one being discussed, that’s entirely reasonable – and it fits the image in my head.

In my error-strewn calculation, my logic went as follows:

    ▪ In the inner Kingdom, I think that life is easy and lived fairly casually. That points to the lower end of the scale – 10 miles a day or 20 miles a day.

    ▪ 10^2 = 100, so at 10 mi/day, 16,000 = 160 days march.
    ▪ 20^2 = 400, so at 20 mi/day, 16,000 = 40 days march.

    ▪ That’s a BIG difference. 40 is too quick, but 160 sounds a little too slow. Tell you what, let’s pick an intermediate value of convenience and work backwards.

    ▪ 100 days march to cover anywhere in 16000 square miles gives 160, and the square root of 160 is 12.65 miles per day.

Now, that logic’s not bad. But it doesn’t factor in the ‘working day’ of the infantry march – it needs to be divided by 3. And it DOES factor in my psychological trend toward making the defensive areas smaller, because my instinct was telling me they were too large – but this is the wrong way to correct for that. So this number is getting consigned to the dustbin.

After all, the ‘hostile’ and ‘benign’ factors are supposed to already take into account the threat level that these fortifications are supposed to address, and hence their relative density.

    ▪ So, let’s start with the “reasonable” 25 miles.
    ▪ Apply the ‘working day’ to get 8.333 miles.
    ▪ The measured area of the defensive structure is 47.2 ‘days march’^2.
    ▪ Each of which is 8.333^2= 69.444 miles^2 in area.
    ▪ So the defensive unit – stronghold and four satellites – covers 47.2 x 69.444 = 3277.8 sqr miles.
    ▪ Or 655.56 sqr miles each.
    ▪ Equivalent to a square 25.6 miles x 25.6 miles.
    ▪ Or a circle roughly 14.4 miles in radius.
    ▪ Base Area 173.4 units^2 = 17340 square miles.
    ▪ Adjusted for threat level, 134.3 units^2 or 13430 square miles. In other words, defensive structures are further apart because there’s less threat than normal.
    ▪ 13430 / 3277.8 = 4.1 defensive structures, of 1 hub and 4 satellites each.
    ▪ So that’s 4 hubs and 16 satellites plus an extra half-satellite somewhere.

Those satellites could be anything from a watchtower to a small fort to a hut with a couple of men garrisoned inside, depending on the danger level and what the Kingdom is prepared to spend on securing the region. The stronghold in the heart of the configuration needs to be more substantial.

Okay, so that’s Zone 7. Zone 30 is a whole different kettle of fish.

I wanted to implement a 3-lobed configuration with more overlap than the four-lobed choice made for Zone 7. And it was turning out exactly the way I wanted it to; every hub was reinforced by three satellites, every satellite reinforced by three hubs. I had the diagrams 75% done and was gearing up to measure the protected area.

Which is when the plan ran aground in the most spectacular way. There were areas where responsibility was shared two ways, and three ways, and four ways, and – at some points – six ways. It was going to take a LONG time to measure and calculate.

If I were creating Zomania as an adventuring location for real, I would have carried on. If I lived in an ideal world, without deadlines (even the very soft ones now in place at Campaign Mastery) I would have continued. I still think that it would have provided a more enlightening example for readers, because I would be doing something a little bit different and having to explain the differences and their significance.

But since neither of those circumstances is the case, and this post is already several days late due to the complications explained earlier, I am going to have to compromise on principle and re-use the configuration established for Zone 7.

Well, at least that will show the impact that the greater threat level will impose on the structure, but it leaves the outer reaches of the Kingdom less well-protected than they should be. If and when I re-edit this series into an e-book, I might well spend the extra time and replace the balance of this section – or even work the problem both ways for readers’ edification.

REMINDER TO SELF – 3 LOBES, 1 DAY EXAMPLE

But, in the meantime…

Zone 30.
    ▪ Actual area 251.45 square units = 25,145 square miles.
    ▪ Adjusted for threat level = effective area 829.9 square units = 82,990 sqr miles. (in other words, the defensive structures you would expect to protect 82,990 square miles are so closely packed that they actually protect only 25,145 square miles, a 3.3-to-1 ratio.)
    ▪ Defensive Structure = 3277.8 square miles (from Zone 7).
    ▪ 82,990 / 3277.8 = 25.32 defensive structures of 5 fortifications each, or 126.6 fortifications in total. Zone 7 is 69% of the area and had a total of 20.5 fortifications, in comparison.

What does 0.32 defensive structures represent? Well, if I take the basic structure and ‘lop off’ two of the satellites, then it’s 3/5 of a protected area minus the overlaps. By eye, those overlaps look to be a bit more than 2 x 1/4 of one of those 1/5ths, and since 1/4 of 1/5 is 1/20th, that’s roughly 0.6-0.1 = 0.5.

If I take away a third satellite, the structure is down to 2/5 protected area minus overlaps, and those overlaps are now 1 x 1/20th, so 0.4-0.05=0.35. So, somewhere on the border, there’s a spot with one hub and one satellite.

One more point: 3.3 to 1. What does THAT really mean? Well, the defensive structure used has satellites 2.5 days march from the hub. But everything is more compressed, by that 3.3:1 ratio, so the satellites in Zone 30 are actually 2.5 / 3.3 = 0.76 day’s march from the hub. The area each commands is still the same, but there’s a lot more overlap and capacity to reinforce one another.

Another way to look at it is that there are so many fortifications that each only has to protect a smaller area. 3277.8 sqr miles / 3.3 = 993 sqr miles.
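The whole chain from adjusted area to structure count is short enough to script; this sketch just restates the arithmetic above, with the 10-mile map unit and the 47.2 (day’s march)² structure footprint as its inputs:

```python
DAYS_MARCH_MILES = 25 / 3        # ~8.33 miles: a third of the 25-mile "reasonable" rate
STRUCTURE_AREA_DAYS2 = 47.2      # one hub plus four satellites, in (day's march)^2

def defensive_structures(effective_area_units2, unit_miles=10):
    """Turn an adjusted zone area (in map units^2) into a count of hub-and-satellite
    structures, following the chain of arithmetic worked through above."""
    structure_sq_miles = STRUCTURE_AREA_DAYS2 * DAYS_MARCH_MILES ** 2   # ~3277.8
    zone_sq_miles = effective_area_units2 * unit_miles ** 2
    return zone_sq_miles / structure_sq_miles

print(round(defensive_structures(134.3), 2))   # Zone 7:  ~4.1 structures
print(round(defensive_structures(829.9), 2))   # Zone 30: ~25.3 structures
```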

5.8.1.6.6 Sidebar: Changes Of Defensive Structure

The point that I’m going to make in this sidebar won’t make a lot of sense unless you’re paying close attention, because the Zone 30 example has the same defensive structure as Zone 7 – it’s just a lot more compressed. But imagine for a moment that there was a completely different defensive structure in Zone 30.

What does that imply for Zone 11, which lies in between the two?

You might think that it should be some sort of half-way compromise or blend between the two, but you would be wrong to do so.

If you look back at the overall zone map for Zomania (reproduced below)

…and recall that the zones are numbered in the order they were established, a pattern emerges. Zone 1 first, then Zone 2, then Zones 3-4-5-6-7, then zones 8-9-10-11-12, and so on. Until Zones 29-32 were established, Zone 11 was the frontier. It would likely have the same defensive structure as Zone 30. Rather than fewer fortifications, it would have them at the same density as Zone 30 – but the manpower in each would be reduced.

If you know how to interpret it, the entire history of the Kingdom should be laid bare by the changes in its fortifications and defenses.

But that’s not as important as the verisimilitude that you create by taking care of little details like this and keeping them consistent. The specifics might never be overtly referenced – but they still add a little to the credibility of the creation.

5.8.1.6.7 Inns in Zone 7 – from 5.7.3

Zone 7 is noteworthy for NOT having a major road – that’s on the Zone 11 / Zone 6 side of the border. Some of the inns along that road, however, may well be over that border – it’s a reasonable expectation that half of them would count. But only that half that is located where the border runs next to the road – there’s a section at the start and another at the end where the border shifts away.

But there’s a second factor – what is the sea, if not another road to travel down? And Zone 7 has quite a lot of beach. The reality, of course, is that these are holiday destinations, and places for health recovery – but it’s a convenient way of placing them.

So that’s two separate calculations. The ‘road that is a road’ first: There are actually two sections. The longer one runs through Zones 6 and 11, as already noted; it measures out at 15 units long, or 150 miles.

The second lies in Zone 15, and it’s got a noticeable bend in it. If I straighten that out and measure it, I get 5 units or 50 miles.

Conditions:
    Road condition, terrain, good weather = 3 x 2.
    Load = 1 x 1/2.
    Everything else is a zero.
    Total: 6.5.
6.5 / 16 x 3.1 = 1.26 miles per hour.
1.26 mph x 9 hrs = 11.34 miles.

Here’s the rub: we don’t know exactly where the hubs and satellites are in Zone 7, only how many of them there are to emplace. But it seems a sure bet that those areas where the road and border part ways, do so because there’s a fortification there that answers to Zone 6 or Zone 11, respectively. And that means that we can treat the entire length of the road as being between two end points.

We know from the defensive structure diagram that the base distance from Satellite to Hub is 2 1/2 days march, and that there’s a scaling of x 1.0333 (hostile) x 0.7746 (benign) = x 0.8 – and that benign factors space fortifications further apart while hostile ones bunch them together, so distances are divided by this factor rather than multiplied. We know that 8.333 miles has been defined as a “day’s march”.

If we put all that together, we get 2.5 x 8.333 / 0.8 = 26 miles from satellite to hub.

Armies like their fortifications on roads, it makes it faster to get anywhere. Traders like their trade routes to flow from fortification to fortification, it protects them from bandits. The general public, ditto. If a road doesn’t go to the fortification, people will create a new road and leave the official one to rot. So it can be assumed that the line of fortifications will follow the road, and be spaced every 26 miles along it, alternating between hub and satellite.

    150 miles / 26 = 5.77 of them.

It’s an imperfect world; that 0.77 means that you have one of three situations, as shown below:

The first figure shows a hub at the distant end of the road. The second shows a hub at the end of the road closest to the capital. And the third shows the hubs not quite lining up with either position.

But those aren’t the actual ends of the road – this is just the section that parallels the border of Zone 7, or vice-versa. So the last one is probably the most realistic.

Now, let’s place Inns – one every 11.34 miles. But we have to do them from both ends – one showing 1 day’s travel for ordinary people headed out, and one showing them heading in. Just because I’m Australian, and we drive on the left, I’ll put outbound on the south side and inbound on the north.
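Before drawing the diagram, it’s worth laying the positions out numerically. This little sketch assumes the fortification line starts flush with one end of the 150-mile section – an assumption on my part – and simply marches the inns in from each end:

```python
FORT_SPACING = 26.0      # miles between fortifications along the road, from above
INN_SPACING = 11.34      # one day's travel for ordinary wagon traffic
ROAD_MILES = 150.0       # the section of road paralleling the Zone 7 border

forts = [round(i * FORT_SPACING, 1) for i in range(int(ROAD_MILES // FORT_SPACING) + 1)]
outbound = [round(i * INN_SPACING, 1) for i in range(1, int(ROAD_MILES // INN_SPACING) + 1)]
inbound = sorted(round(ROAD_MILES - i * INN_SPACING, 1)
                 for i in range(1, int(ROAD_MILES // INN_SPACING) + 1))

print("fortifications:", forts)       # 0, 26, 52, 78, 104, 130
print("outbound inns: ", outbound)    # 11.3, 22.7, ... 147.4
print("inbound inns:  ", inbound)     # 2.6, 14.0, ... 138.7
```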

Isn’t that annoying? They don’t quite line up – to my complete lack of surprise. Look at the second in-bound inn – it’s about 20% of a day short of getting to the satellite, and that puts it so close that it’s not worth stopping there; you would keep going.

Well, you can’t make a day longer, but you can make it shorter. And that makes sense, because these are very much average distances.

I’ve shortened the days for the ordinary traveler – including merchants – just a little, so that every 5th inbound Inn is located at a Stronghold, and every 5th outbound inn is located at a satellite. Every half-day’s travel now brings you to somewhere to stop for a meal or for the night.

It’s entirely possible that not all of these Inns will actually be in service, it must be added. Maybe only half of them are actually operating. Maybe it’s only 1/3. But, given its position within the Kingdom, there’s probably enough demand to support most of these, so let’s do a simple little table:

    1 inn functional
    2 inn functional
    3 inn functional but 1/4 day closer
    4 inn functional but 3/4 day farther away
    5 inn not functional
    6 inn not functional, and neither is the next one.

Applying this table produces the following (for some reason, my die kept rolling 3s and 6s):

Even here, in this ‘safe’ part of the Kingdom, travelers will be forced to camp by the roadside.

And that’s where I’ll have to leave it, for this post. I had hoped to get all of the Zomania examples done, but the problems early on put paid to that, and didn’t even leave me enough time to get Zone 30 detailed through to the inn stage – let alone up to date! That’s obviously for the next post….


Trade In Fantasy Ch. 5: Land Transport, Pt 5a


This entry is part 19 of 20 in the series Trade In Fantasy

This post continues the text of Part 5 of Chapter 5. Its content has been added to the parent post here and the Table of contents updated. I have decided at the last minute to let the featured image (but not the head image) evolve with each post.

I have a series of images of communities of different sizes which will be sprinkled throughout this article. This is the first of these – something so sparsely-settled that it barely even qualifies as a community. It’s more a collection of close rural neighbors! Image by Jörg Peter from Pixabay

5.8.1 Villages

The village is the fundamental unit of the population distribution simulation – everything starts there and flows from it.

    5.8.1.1 Village Frequency

    I’ve given this section a title that I think everyone will understand, but it’s not actually what it’s all about. The real question to be answered here is, how big is the Locus surrounding a population?

    The answer differs from one Demographic Model to another, unsurprisingly.

    The area of a given Locus is:

        SL = MF x (Pop)^0.5 x k,
            where,
            SL = Locus Size
            MF = Model Factor
            Pop is the population of the village
            and k = a constant that defines the units of area.

    The base calculation, with a k of 1, is measured in days of travel. That works for a lot of things, but comparison to a base area of 10,000 km^2 isn’t one of them. For that, we need a different k – one based on the Travel Ranges defined in previous parts of this series.

    Section 5.7.1.14.5.1 gives answers based on travel speed – more as a side-issue than anything else – expressed as the number of miles that can be traversed in a day:

      (Very) Low d = 10 miles / day
      Low d = 20 miles / day
      Reasonable d = 25 miles / day
      Doable d = 30 miles / day
      Close To Max (High) d = 40 miles / day
      Max d = 50 miles / day
          ( x 1.61 = km).

    — but these are the values for Infantry Marching, and that’s a whole other thing.

    Infantry march faster than people walk or ride in wagons. The amount varies depending on terrain (that’s the main variable in the above values), but – depending on who you ask – it’s 1 2/3 or 2 or 2.5 times.

    But, because they travel in numbers, they can march for less time in a day. Some say 6 hours, some 7, some 8. Ordinary travelers may be slower, but they can operate for all but an hour or two of daylight. That might be 8-2=6 or 7 hours in winter, but it’s more like 12-2=10 or 11 hours in summer.

    And it has to be borne in mind that the basis for these values assumes travel in Summer – at least in medieval times. But we want to take the seasons out of the equation entirely and set a baseline from which to adjust the list given earlier.

    One could argue that summer is when the crops are growing, and therefore that should be the basis of measurement, given that we’re looking for the size of a community’s reach.

    So let’s take the summer values, and average them to 10.5 hours. When you take the various factors into account and generate a table (I used 6, 6.5, 7, 7.5, and 8 for army marching times per day, and the various figures for speed cited plus 2.25 as an additional intermediate value), work out all the values that it might be, and average them, you get 1.04. That’s so small a change as to be negligible – 1.04 x 50 = 52. We will have far bigger approximations than that!

    So we can use the existing table as our baseline. Isn’t that convenient?

    But which value from amongst those listed to choose? Overall, unless there’s some reason not to, you have to assume that terrain is going to average out when you’re talking about a baseline unit of 10,000 sqr kilometers. So, let’s use the “Reasonable” value unless there’s reason to change it.

    And that gives a conversion rate of 1 day’s travel = roughly 25 miles, or 40 km. And those are nice round numbers.

    Now, a locus is roughly circular in shape, so is that going to be a radius or a diameter? Well, a “market day” is how far a peasant or farmer can travel with their goods and return in a day, so I think we’re dealing with a radius of 1/2 the measurement, so that measurement must be the diameter of the locus.

    Which means that the base radius of a locus is 12.5 miles or 20 km.

    In an area where the terrain is friendly in terms of travel, this could inflate to twice as much; in an area where terrain makes travel difficult, it could be 1/2 as much or less. But if we’re looking for a baseline, that works.

    12.5 miles radius = area roughly 500 sqr miles = area 1270 sqr km. So in 10,000 sqr km, we would expect to find, on average, 7.9 locuses. But that’s without looking at the population levels and the required Model Factors.

    The minimum size for an English Village is 240 people. The Square Root of 240 is 15.5.

    So the formula is now 1270 = 15.5 x 20 x Model Factor, and the Model Factor for England conditions and demographics is 4.1. Under this demographic model, there will be 4.1 Village Loci – which is the same thing as 4.1 villages – in 10,000 sqr km.
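    The same calculation in code form, for anyone who wants to roll their own Model Factors – the 20 km “day’s travel” conversion and the ~1270 km² baseline locus are the values from the worked example above:

```python
from math import sqrt

def model_factor(min_village_pop, locus_km2=1270, k=20):
    """Solve SL = MF x Pop^0.5 x k for MF, using the 20 km 'day's travel' conversion
    and the ~1270 km^2 baseline locus area from the worked example above."""
    return locus_km2 / (sqrt(min_village_pop) * k)

print(round(model_factor(240), 1))    # England: ~4.1
print(round(model_factor(320), 2))    # France: ~3.55
print(round(model_factor(80), 1))     # Tribal / Clan: ~7.1
```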

    Having worked one example out to show you how it’s done, here are the Model Factors for all the Demographic Models:

    ▪ Imperial Core: 480^0.5 = 21.9, and 21.9 x 20 x Model Factor = 1270, so MF = 2.9
    ▪ Germany (HRE): 400^0.5=20, and 20 x 20 x MF = 1270, so MF = 3.175
    ▪ France: 320^0.5 = 17.9, and 17.9 x 20 x MF = 1270, so MF = 3.55
    ▪ Coastal Mercantile Model: 280^0.5 = 16.733, and 16.733 x 20 x MF = 1270, so MF = 3.8
    ▪ England: 4.1
    ▪ Frontier Nation: 200^0.5 = 14.14, and 14.14 x 20 x MF = 1270, so MF = 4.5
    ▪ Scotland: 160^0.5 = 12.65, and 12.65 x 20 x MF = 1270, so MF = 5.02
    ▪ Tribal / Clan Model: 80^0.5 = 8.95, and 8.95 x 20 x MF = 1270, so MF = 7.1

    So, why didn’t I simply state the number of loci (i.e. the number of villages) in an area?

    It’s because that’s a base number. When we get to working on actual loci or zones, these can shrink or grow according to other factors. This is a guideline – but to define an actual village and its surrounds, we will need to use the MF. Besides, you might want to generate a specific model for a specific Kingdom in your game.

    You may be wondering, then, why it should be brought up at all, or especially at this stage? The answer to those questions is that the area calculated is a generic base number which may have only passing resemblance to the actual size of the locus.

    A locus will continue to expand until it hits a natural boundary, a border, or equidistance to another population center. Very few of them will actually be round in shape – some of them not even approximately.

    The ratio between ACTUAL area and BASE area is an important factor in calculating the size of a specific village.

    An example of the ‘real borders’ of a Locus

    To create the above map, I made a copy of the base map (shown to the left). At the middle top and bottom, I placed a dot representing the Locus ‘radius’.

    At the left top, another dot marked the half-way point to the next town (top left), where it intersected a change of terrain – in this case, a river.

    At the top right, doing the same thing would have made the town at top right a bit of a mixed bag – it already has forests and hills and probably mountains. I didn’t want it to have a lot of farmland as well. So I deliberately let the current locus stretch up that way. The point below it is also slightly closer to the top right town than it would normally be, but that’s where there is a change of terrain – the road. I tossed up whether the locus in question should include the intersection and road, but decided against it.

    And so on. Once I had the main intersection points plotted, I thought about intermediate points – I didn’t want terrain features to be split between two towns, they had to belong to one or the other. You can see the results in the “bites” that are taken out of the borders of the locus at the bottom.

    If you use your fingers, one pointing at the town in the center and the other at the top-middle intersection point, and then rotate them to get an idea of the ‘circular’ shape of the locus, you can see that it’s missing about 1/6 of its theoretical area to the east, another 1/6 to the south, and a third 1/6th to the west. It’s literally 1/2 of the standard size. That’s going to drive the population down – but it’s fertile farmland, which will push it up. But that’s getting ahead of ourselves.

    As an exercise, though, imagine that the town lower right wasn’t there. The one that’s on the edge of the swamp. Instead of ending at a point at the bottom, the border would probably have continued, including in the locus that small stand of trees and then following the rivers emerging from the swamp, and so including the really small stand of trees. The Locus wouldn’t stop until it got to the swamp itself. The locus would have extended east to the next river, in fact, encompassing forest and hills until reaching the East-road, which it would follow inwards until it joined the existing boundary. It would still have lost maybe 1/12th in the east, but it would have gained at least that much and probably more in the south, instead of losing 1/3. The locus would be 1 – 1/12 + 1/3 – 1/12 – 1/3 = 10/12 of normal instead of 1/2 of normal.

    5.8.1.2 Village Base Size

    If you look at the models, you will notice “Base Village” and a population count, and might be fooled into thinking that everything in that range is equally likely. It’s not.

    Take the French model – it lists the village size as 320-480.

    First, what’s the difference, high minus low? In this case, it’s 160. We need to divide that by 8 as a first step – which in this case is a nice, even, 20.

    Half of 20 is 10, and three times 10 is 30. Always round these UP.

    With that, we can construct a table:

        01-30 = 320
        31-40 = 321-350 (up by 30)
        41-50 = 351-380 (up by 30)
        51-60 = 381-400 (up by 20)
        61-70 = 401-420 (up by 20)
        71-75 = 421-430 (up by 10)
        76-80 = 431-440 (up by 10)
        81-85 = 441-450 (up by 10)
        86-90 = 451-460 (up by 10)
        91-95 = 461-470 (up by 10)
        96-00 = 470-480 (up by 10)

    I used Gemini to assist in validating various elements of this section, and it thought the “up by 30” terminology was confusing and should be replaced with something more formal.

    I disagree. I think the more colloquial vernacular will get the point across more clearly.

    It was also concerned – and this is a more important point – that GMs couldn’t implement this roll and the subsequent sub-table quickly. I disagree, once again – I’ve seen far more complicated constructions for getting precise population numbers than two d% rolls, especially since the same tables will apply to all areas within the Kingdom that are similar in constituents. Everywhere within a given zone, in fact, unless you deliberately choose to complicate that in search of precision.

    In general, you construct one set of tables for the entire zone – and can often copy those as-is for other similar zones as well. Maybe even for a whole Kingdom.

    The d% breakdown always uses the same percentages, and there are always 2 “up by 3 x 1/2” bands, 2 “up by 2 x 1/2” bands, and 5 “up by 1/2” bands – with the final one absorbing any rounding errors; in this example there aren’t any.

    We then construct a set of secondary tables by dividing our three (or four) increments by 10. In this case, 30 -> 3, 20 -> 2, 10 -> 1. And we apply the same d% breakdown in exactly the same way, but from a relative position:

    So:
        1/2 x 3 = 1.5, rounds to 2; 3 x 1.5 = 4.5, rounds to 5.
        1/2 x 2 = 1; 3 x 1 = 3.
        1/2 x 1 = 0.5, rounds to 1; 3 x 1 = 3.

    The “Up By 30” Sub-table reads:

        01-30 = +0
        31-40 = +5
        41-50 = +5+5 = +10
        51-60 = +10+3=+13
        61-70 = +13+3=+16
        71-75 = +16+2 = +18
        76-80 = +18+2 = +20
        81-85 = +20+2 = +22
        86-90 = +22+2 = +24
        91-95 = +24+2 = +26
        96-00 = +30 (up by whatever’s left).

    The “Up By 20” Sub-table:

        01-30 = +0
        31-40 = +3
        41-50 = +3+3 = +6
        51-60 = +6+2 =+8
        61-70 = +8+2=+10
        71-75 = +10+1 = +11
        76-80 = +11+1 = +12
        81-85 = +12+1 = +13
        86-90 = +13+1 = +14
        91-95 = +14+1 = +15
        96-00 = +20 (up by whatever’s left).

    The “Up By 10” Sub-table:

        01-30 = +0
        31-40 = +3
        41-50 = +3+3 = +6
        51-60 = +6+1 =+7
        61-70 = +7+1=+8
        71-75 = +8+1 = +9
        76-80 = +9+1 = +10
        81-85 = +0-1 = -1
        86-90 = -1-1 = -2
        91-95 = -2-1 = -3
        96-00 = -3-1 = -4

    Notice what happened when I ran out of room in the “+10”? The values stopped going up, and starting from +0, started going DOWN.

    It takes just two rolls to determine the Base Population of a specific village with sufficient accuracy for our needs within a zone.

    EG: Roll of 43: Main Table = 380, in an up-by-30 result. So we use the “Up By 30” Sub-table and roll again: 72, which gives a +18 result. So the Base population is 380+18=398.

    These results are intentionally non-linear.

    Optional:

    If you want more precise figures, apply -3+d3.

    Or -6+d6.

    Or anything similar – though I don’t really think you should go any larger than -10+d10 – and I’d consider -8+2d6 first.

    I have to make it clear: this relates to the population of a specific village in a specific zone, not a generic one. For anything of the latter kind, continue to use the minimum base population. I just thought that it bookended the ‘real locus’ discussion. We had to have the former because it affects what terrain influences the town size and how much of it there is; the latter is just a bonus that seemed to fit.

    5.8.1.3 Village Demographics

    Let’s start by talking Demographics, both real-world and Fantasy-world.

    The raw population numbers are not as useful as numbers of families would be. But that’s incredibly complicated to calculate and there’s no good data – the best that I could get was a broad statement that medieval times had a child mortality rate (deaths before age 15) of 40-50%, an infant mortality rate (deaths before age 1) of 25-35%, and an average family size of 5-7 children.

    If we look at modern data, we get this chart:

    Source: Our World In Data, cc-by, based on data from the United Nations. Click the image to open a larger version (3400 x 3003 px) in a new tab.

    I did a very rough-and-ready curve fitting in an attempt to exclude social and cultural factors and derive a basic relationship for what is clearly a straight band of results:

    Derivative work (see above), cc-by, extrapolating a relationship curve in the data

    …from which I extracted two data points: (0%,1.8) and (10%,5.6), which in turn gave me: Y = 0.38 X + 1.8, which can be restated, X = 2.63Y – 4.74. And that’s really more precision than this analysis can justify, but it gives a readout of child mortality for integer family sizes.

    Yes, I’m aware that the real relationship isn’t linear. But this simplified approximation is good enough for our purposes.

    That, in turn, gives me the following:

        Y = Typical Number Of Children,
        X = Overall Child Mortality Rate

        Y, X:
        1, -3%
        2, 0%
        3, 3%
        4, 5%
        5, 8%
        6, 11%
        7, 13%
        8, 16%
        9, 18%
        10, 21%
        11, 24%
        12, 26%

    …so far, so good.

    Next, I need to adjust everything for the rough data points that we have for medieval times, when bearing children was itself a mortality risk for the mothers.

    5-7 children, 40-50%

    so that gives me (5, 8, 40) and (7, 13, 50) – more useful in this case as (8, 40) and (13,50) – which works out to Z = 2X + 24.

        Z=Child Mortality, Medieval-adjusted

        Y, X, Z:
        1, -3%, 18%
        2, 0%, 24%
        3, 3%, 30%
        4, 5%, 34%
        5, 8%, 40%
        6, 11%, 46%
        7, 13%, 50%
        8, 16%, 56%
        9, 18%, 60%
        10, 21%, 66%
        11, 24%, 72%
        12, 26%, 76%

    But here’s the thing: realism and being all grim and gritty might work for some campaigns, but for most of us – no. What we need to do now is apply a “Fantasy Conversion” which contains just enough realism to be plausible and replaces the balance with optimism.

    I think Division of Z (the medieval-adjusted child mortality rate) by 3 sounds about right – YMMV. That gives me the F values below – but I also checked on a ratio of 2.5, which gives me the F2 values.

    Gemini suggested using 3.5 or 4 for an even ‘softer’ mortality rate, and 2.25 or 2 for a grittier one.

    In principle, I don’t have a problem with that – and part of the reason why I’m not just throwing the mechanics at you, but explaining how they have been derived, is so that GMs can use alternate values if they think them appropriate to their specific campaigns.

    I don’t just want to feed the hungry, I want to teach them to fish, to paraphrase the biblical parable.

        F= Fantasy Adjusted Child Mortality Rate
        F2 = more extreme Child Mortality Rate

        Y, X, Z, F, F2:
        1, -3%, 18%, 6%, 7%
        2, 0%, 24%, 8%, 10%
        3, 3%, 30%, 10%, 12%
        4, 5%, 34%, 11%, 14%
        5, 8%, 40%, 13%, 16%
        6, 11%, 46%, 15%, 18%
        7, 13%, 50%, 17%, 20%
        8, 16%, 56%, 19%, 22%
        9, 18%, 60%, 20%, 24%
        10, 21%, 66%, 22%, 26%
        11, 24%, 72%, 24%, 29%
        12, 26%, 76%, 25%, 30%
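    If you’d rather generate that table than transcribe it, the following sketch chains the relationships together; flooring X before applying the medieval adjustment appears to reproduce the table as printed, but that rounding choice is my inference, not gospel:

```python
from math import floor

def family_mortality(children):
    """Chains the relationships above: modern child mortality X from the curve fit,
    medieval-adjusted Z, and the fantasy-softened F (Z/3) and F2 (Z/2.5) rates.
    Flooring X before the adjustment appears to reproduce the table as printed."""
    x = floor(2.63 * children - 4.74)   # modern child mortality, %
    z = 2 * x + 24                      # medieval adjustment
    return x, z, round(z / 3), round(z / 2.5)

for y in range(1, 13):
    print(y, family_mortality(y))
```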

    I think the F values are probably more appropriate for High Fantasy, while the F2 are better for more typical fantasy – but you’re free to use this information any way you like, the better to suit your campaign world.

    You might decide, for example, that averaging the Medieval Adjusted Values with the F2 is ‘right’ – so that 5 children would indicate (40+16)/2 = 28% mortality.

    Social values can also adjust these values – traditionally, that means valuing male children more than females. But in Fantasy / Medieval game settings, I think that would be more than counterbalanced, IF it were a factor, by the implied increased risks from youthful adventuring. In a society that practices such gender-bias, it would not surprise me if the ultimate gender ratio was 60-40 or even 70-30 – in favor of Girls.

    5.8.1.3.1 Maternal Survival

    The next element to consider is the risk of maternal death in childbirth. That’s even harder to pin down data on, but 1-3% per child is probably close to historically accurate. Balanced against that are the greater risks from adventuring, and the availability of clerical healing. So I’m extending the table to cover 4, 5, and 6%, but you are most likely to want the values in the first columns. To help distinguish these extreme possibilities from the usual ones, they have been presented in Italics.

    We’re not interested so much in the number of cases where it happens as in the number of cases where it doesn’t – the % of families with living mothers, relative to the number of children.

        Y, @1, @2, @3, @4, @5, @6:
        1, 99%, 98%, 97%, 96%, 95%, 94%
        2, 98.0%, 96.0%, 94.1%, 92.2%, 90.3%, 88.4%
        3, 97.0%, 94.1%, 91.3%, 88.5%, 85.7%, 83.1%
        4, 96.1%, 92.2%, 88.5%, 84.9%, 81.5%, 78.1%
        5, 95.1%, 90.4%, 85.9%, 81.5%, 77.4%, 73.4%
        6, 94.1%, 88.6%, 83.3%, 78.3%, 73.5%, 69.0%
        7, 93.2%, 86.8%, 80.8%, 75.1%, 69.5%, 64.8%
        8, 92.3%, 85.1%, 78.4%, 72.1%, 66.3%, 61.0%
        9, 91.4%, 83.4%, 76.0%, 69.3%, 63.0%, 57.3%
        10, 90.4%, 81.7%, 73.7%, 66.5%, 59.9%, 53.9%
        11, 89.5%, 80.1%, 71.5%, 63.8%, 56.9%, 50.6%
        12, 88.6%, 78.5%, 69.4%, 61.3%, 54.0%, 47.6%

    The method of calculation is 100 x ( 1- [D/100] ) ^ Y. Just in case you want to use different rates than these.
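    In code, that’s a one-liner – a sketch using the formula just given, so you can plug in whatever per-birth risk you prefer:

```python
def surviving_mothers(children, per_birth_risk_pct):
    """% of families in which the mother survives all her childbirths:
    100 x (1 - D/100)^Y, exactly as stated above."""
    return 100 * (1 - per_birth_risk_pct / 100) ** children

print(round(surviving_mothers(6, 2), 1))   # 88.6 -- the table's Y=6, @2 entry
```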

    There does come a point at which the likelihood of maternal death begins to limit the size of the average family, though, and I think the 6% values are getting awfully close to that mark.

    Let’s say that a couple have 6 children, right in the middle of the historical average. If the mother falls pregnant a 7th time, then at 6% per birth the cumulative odds say she has roughly a 1 in 3 chance of dying somewhere along the way (and a fair risk of the child perishing with her) – and if she dies, she HAS no more children. But if she beats those odds to have 7 children, her chances are even worse when it comes to child #8, and so on.

    Of all the cases with a mother who survived childbirth, we then need to factor in death from all other causes – monsters and adventuring and mischance and so on. Fantasy worlds tend to be dangerous, so this could be quite high – maybe as much as 5% or 10% or 20%. So multiply the living mothers by 0.8. Or 0.7 Or 0.9 – whatever you consider appropriate – to allow for this.

    This rural community is obviously alongside a major river or coastline – the proximity of the mountains suggests the first, but isn’t definitive. The name offers a clue: ‘hallstatt’, which to me sounds Germanic, and suggests that the waterway may be the Rhine. Or not, if I’ve misinterpreted. Image by Leonhard Niederwimmer from Pixabay

    5.8.1.3.2 Paternal Survival

    The result is the % of families with a surviving mother. So how many surviving fathers are there per surviving mother? Estimates here vary all over the shop, and more strongly reflect social values. But if I’m suggesting 5% – 20% mortality for mothers from other sources, the same would probably be reasonably true of fathers – if those social values don’t get in the way.

        0.95 x 0.95 = 90.25%.
        0.9 x 0.9 = 81%.
        0.85 x 0.85 = 72.25%
        0.8 x 0.8 = 64%.

    Those values give the percentages in which both parents have survived to the birth of the average number of children.

    If you’re using 10% mortality from other causes, then in 90% of cases in which the mother has died, the father has survived. But in 10% of the cases in which the mother has succumbed, the children are orphaned by the loss of the other parent.

    The higher this percentage, the higher the rate of survivors remarrying and potentially doubling the size of their households at a stroke. And that will distort the average family size far more quickly than the actual mortality percentages, unless there is some social factor involved – maybe it’s expected that parents with children will only marry single adults without children, for example.

    The problem with this approach is that if it’s the mother who is remarrying, this puts her right back on that path to mortality through childbirth; the child-count ‘clock’ does not get reset. If it’s a surviving father marrying a new and childless wife, it DOES reset, because the new mother has not had children previously.

    In a society that permits such actions, there is a profound dichotomy at its heart that favors larger families for husbands who survive while placing mothers who survive at far greater risk of the family becoming a burden to the community – which is likely to change that social acceptance. Paradoxically, a double standard is what’s needed to give both parents a more equal risk of death, and a more equal chance of surviving.

    5.8.1.3.3 Childless Couples

    Next, let’s think about the incidence of Childless Couples. We can state that there’s a given chance of pregnancy in any given year of marriage; but once it happens, there is just under a full year before that chance re-emerges.

        Year 1: A% -> 1 child born
        Year 2: (100-A) x A% -> 1 child born, A%^2 -> 2 children born
        Year 3: (100-A)^2 x A% -> 1 child born, (100-A) x A% -> 2 children born, A^3% -> 3 children born

    … and so on.

    This quickly becomes difficult to calculate, because each row adds 1 to the number of columns, and it's easy to lose track.

    But here’s the interesting part: we don’t care. To answer this question, there’s a far simpler calculation.

    In any given year, there will be B couples married. (100-A)% of them will not have children in the course of that year. If we specify B as the average, rather than as a value specific to a given year, then in the year before, B couples will also have married, with (100-A)% of them still childless at the end of that year – which means that in the course of their second year of marriage, A% of those will have children and stop being counted in this category, while (100-A)% will not, and will still count.

    Adding these up, we get (100-A)% + ((100-A)%)^2 + … and so on – a geometric series in which each term is (100-A)/100 times the one before it, so the additions get progressively and very rapidly smaller, and the total can never quite reach 100 x (100-A) / A percent.

    Let’s pick a number, by way of example – let’s try A=80%, just for the sake of argument.

    We then get 20% + 4% + 0.8% + 0.16% + 0.032% + 0.0064% … and I don’t think you’d really need to go much further, the increases become so small. I pushed on one more term (0.00128%) and got a total of 24.99968%. I pushed further with a spreadsheet, and not even 12 years was enough to cross the 25% mark – but it was getting ever closer to it. Close enough to say that for A=80, there would be 25 childless couples for every… how many?

    The answer to that question comes back to the definition of A: it’s the number of couples out of every 100 who have a child in any given year. So, over 12 years, that’s a total of 1200 couples. And 25 / 1200 = 2.08%.

    I did the math – cheating, I used a spreadsheet – and got the following, all out of 1200 couples:

        A%, C, [C rounded]
        80%, 25.00, 25
        75%, 33.33, 33
        70%, 42.86, 43
        65%, 53.85, 54
        60%, 66.67, 67
        55%, 81.81, 82
        50%, 99.98, 100
        45%, 122.13, 122
        40%, 149.67, 150
        35%, 184.66, 185
        30%, 230.10, 230
        25%, 290.50, 291
        20%, 372.51, 373

    But that means the rest of those 1200 couples do have children – and their number of children will approach the average number that you chose.

    So if you pick a value for A, you can calculate exactly how many childless couples there are relative to the number of families with children:

        A=45%, C=122:

        1200-122 = 1078
        1078 families with children, 122 childless couples
        1078 / 122 = 8.836
        8.836 + 1 = 9.836
        so 1 in 9.836 families will be childless couples.
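
    The spreadsheet work is easy to reproduce if you'd rather script it; a short Python sketch (purely illustrative) that regenerates both the table and the 'one in N' ratio:

        # C = childless couples out of 1200 (12 years x 100 couples married per
        # year); a couple counts while it remains childless, for up to 12 years.
        def childless_couples(A, years=12):
            p = (100 - A) / 100                      # chance of no child in a given year
            return 100 * sum(p ** y for y in range(1, years + 1))

        C = childless_couples(45)
        print(round(C, 2))                           # 122.13 - matches the table
        print(round(1200 / round(C), 3))             # 9.836 - 1 family in ~9.8 is childless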

    5.8.1.3.4 Unwed Singles

    The social pressure to marry has varied considerably through the ages, but the greater the dangers faced by the community, the greater this pressure is going to be. And the fitter and healthier you are, the more that pressure is going to be amplified.

    This is inescapable logic – the first duty of any given generation in a growing society is to replace the population who have passed away, and it takes a long time to turn children into adults.

    You could calculate the average lifespan, deduct the age of social maturity, and state that society frowns heavily on unwed singles above that age – with the social pressure growing each year as the individual approaches it – and that would be a valid approach.

    The problem is that the average lifespan is complicated by those high rates of childhood death, and trying to extract that factor becomes really complicated and messy. And then you throw in curveballs like Elves and Dwarves, with their radically different lifespans and the whole thing ends up in a tangled mess.

    So, I either have to pull a mathematical rabbit out of my hat, or I do the sensible thing and get the GM to pick a social practice and do my best to make it an informed choice.

    While a purely mathematical approach is possible, the more that I looked at the question, the more difficult it became to factor every variable into the equation.

    Want the bare bones? Okay, here goes.

    For a given population, P, there are B marriages a year, removing B x 2 unwed individuals from the population. We can already extract the count of those who are ineligible for marriage due to age, because they are all designated as children.

    We can subtract the quantity of childless couples who are already wed in a similar fashion to the calculations of the previous subsection.

    The end result is the number of unwed singles of marriageable age who have not married. Setting P at a fixed value – say 100 people – we can then quickly determine the number of unmarried singles.

    What ultimately killed this approach was that it was – in the final analysis – using a GM estimate of B as a surrogate for getting the GM to estimate the % of singles in their community, and doing so in a manner that was less conducive to an informed choice, while requiring a lot of calculations to end up with a number that they could have directly estimated in the first place.

    Nope. Not gonna work in any practical sense.

    So, instead, let’s talk about the life of the social scene – singles culture. There is still going to be all that social pressure to marry and contribute to the population, especially if you are an even half-successful adventurer, because that makes you one of the healthiest, wealthiest, and most prosperous members of the community.

    It can be argued that, instead of using the average lifespan (with all its attendant problems) and deducting the age of maturity (i.e. the age at which a child becomes an adult) to determine the age by which a couple have to have children in order to keep the population at least stable (you need two children for that, since there are two adults involved, and you need to take the child mortality rate into consideration, dividing those 2 by the child survival rate and rounding up), you should use the age at which the mother’s mortality in childbirth begins to rise sharply, and work back from that age. In modern times, that’s generally somewhere in the thirties, maybe up to 40. That doesn’t mean that older women can’t have children, just that under these circumstances, the risks of dying before you have enough offspring are considered too high by the general culture.

    But what does that really get you? There’s always going to be some age at which the pressure to wed starts to grow. Shifting it this way or that by a couple of years won’t change much.

    Looking at it from the reverse angle – how much single life will society tolerate – can be far more useful.

    I would suggest a base value of a decade. Ten years to be an adventurer and live life on the edge.

    In high-danger societies, especially those with a high mortality rate, that might come back 2 or 3 years; at its most extreme, 5. That’s all the time you have to focus on becoming a professional who is able to support a family, or at least to set your feet firmly on that path.

    In low-danger societies, especially those with a lower mortality rate, it might get pushed out a few years, maybe even another 5. That’s enough time that you can sow some wild oats and still settle down into someone respectable within the community.

    How long is the typical apprenticeship? In medieval times? In your fantasy game-world? From the real world, I could bandy about numbers like 4 years, or 5 years, or 5 years and 5 more learning on the job, or repaying debts to the master that trained you. And you end up with the same basic range – 5-15 years.

    What is the age of maturity in your world? Again, I could throw numbers around – 18 or 21 seem to be the most common in modern society, but 16 (even 15) has its place in the discussion – that’s how old you had to be back when I was younger before you could leave school and pursue a trade, i.e. becoming an apprentice. But I have played in a number of games where apprenticeships started at eight, or twelve, and lasted a decade – and THEN you got to start repaying your mentor for the investment that he’s made in you. With interest.

    Does there come a point where people are deemed anti-social because they have not married, and find their prospects of attracting a husband or wife diminishing as a result? Don’t say it doesn’t happen – there is plenty of real-life evidence that it exists as a social undercurrent, one that shifts, and sometimes intensifies or weakens, without any real understanding of the factors that drive the phenomenon. But forget the real world for a moment and think about the game-world.

    How optimistic / positive is the society? How grim and gritty?

    Think about all these questions, because they all provide context to the basic question: What percentage of the population are unwed with no (official) children?

    Here’s how I would proceed: Pick a base percentage. For every factor you’ve identified that gives greater scope for personal liberty, add 2%. For every factor that demands the sacrifice of some of that liberty, from society’s point of view, subtract 2%. In any given society, there are likely to be a blend of factors, some pushing the percentage up, and some down – but in more extreme circumstances, they might all factor up or down. If you identify a factor as especially weak, only adjust by 1%; if you judge a factor as especially strong, adjust by 3 or even 4%.

    In the end, you will have a number.

    Let me close out this section with some advice on setting that base percentage.

    There are two competing and mutually-exclusive trains of thought when it comes to these base values. Here’s one:

    ▪ In positive societies, low child mortality means fewer young widows/widowers. The society is more stable, allowing for strong family formation and early marriage. Base rate is low.

    ▪ In moderate societies, dangers still disrupt family units, leading to a moderate rate of single, adult households. Base rate is moderate.

    ▪ In dangerous societies, high death rates mean many broken families, orphans, and single parents. The number of adult individuals living outside a stable family unit is maximized. Base rate is high.

    Here’s the alternative perspective:

    ▪ Positive societies produce less social pressure and greater levels of personal freedom, reducing the rate of marriage and increasing the capacity for unwed singles. Base rate is high.

    ▪ Moderate societies have a positive social pressure toward marriage at a younger adult age, and less capacity for personal liberty. Base rate is moderate.

    ▪ Societies that swarm with danger have a higher death rate, and there would be more social pressure to marry very young to create population stability – the alternative leads to social collapse and dead civilizations. Base rate is low.

    What’s the attitude in your game world? They are all reasonable points of view.

    In a high-fantasy / positive social setting, I would start with a base percentage of 22%. Most factors will tend to be positive, so you might end up with a final value of 32% – but there can be strains beneath the surface, which could lead to a result of 12% in extreme cases.

    In a mid-range, fairly typical society, I would employ a base of 27%. If there are lots of factors contributing to a high singles rate, this might get as high as 37%, and if there are lots of negatives, it might come down to 17% – but for the most part, it will be somewhere close to the middle.

    In an especially grim and dark world, I would employ a base of 33%, in the expectation that most factors will be negative, and lead to totals more in the 23-28% range. But if social norms have begun to break down, social institutions like marriage can fall by the wayside, and you can end up with an unsustainable total of 40-something percent.

    Anything outside 20-35% should be considered unsustainable over the long run. Whatever negative impacts can apply will be rife.
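
    Putting the whole procedure together as a sketch (the factor values are whatever you judged above; the function name is mine):

        # Bases: ~22 (high-fantasy / positive), ~27 (typical), ~33 (grim & dark).
        # Factors: +2 per factor granting personal liberty, -2 per factor that
        # demands its sacrifice; use 1 for weak factors, 3-4 for strong ones.
        def unwed_singles_percent(base, factors):
            pct = base + sum(factors)
            sustainable = 20 <= pct <= 35            # outside this, expect trouble
            return pct, sustainable

        print(unwed_singles_percent(27, [2, 2, -1, -2, 3]))   # (31, True)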

    5.8.1.3.5 Population Breakdown

    That’s the final piece of the puzzle – with that information, you can assess the four types of ‘typical families’ with children, plus childless couples and unwed singles, and their relative frequency:

        # Children with no parents,
        # Children with mothers but no fathers,
        # Children with fathers but no mothers, and
        # Children with two parents.
        # Childless Couples
        # Unwed Singles

    Get the total size of each of these family units / households* in number of individuals, multiply that size by the frequency of occurrence, add up all the results, and convert them to a percentage and you have a total population breakdown. Average the first five and you have the average family size in this particular region and all similar ones.

    Multiply each frequency of occurrence by the village population total (rounding as you see fit), and you get the constituents of that village.
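
    As a sketch of that arithmetic – the frequencies and household sizes below are placeholders of mine, not canonical values; plug in your own:

        # frequency = share of all households of each type; size = people per
        # household. Both columns are illustrative placeholders only.
        households = {
            "children, no parents":  (0.02, 3),
            "children, mother only": (0.06, 4),
            "children, father only": (0.10, 4),
            "children, two parents": (0.50, 6),
            "childless couples":     (0.07, 2),
            "unwed singles":         (0.25, 1),
        }

        village_population = 400
        total = sum(f * s for f, s in households.values())    # people per 'average' household mix
        for name, (f, s) in households.items():
            share = f * s / total                              # share of the total population
            print(name, round(100 * share, 1), "%", round(share * village_population), "people")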

    I have never liked the use of the term ‘households’ in a demographic context, even though that seems to be the most commonly preferred term these days. I’ve lived in a number of shared accommodations as a single over the years, and that experience muddies what’s intended to be a clearer understanding of the results. If you have 50 or 100 singles living in a youth hostel, are they one household or 50-100? Families – nuclear or non-nuclear – is, for me at least, the clearer, more meaningful term.

    5.8.1.3.6 The Economics Of The Demographics

    In modern times, it’s not unusual for two adults and even multiple children all to have different occupations with different businesses, all at the same time. Some kids start as paper boys and girls at a very young age. Even five-year-olds with lemonade stands count in this context.

    Go back about 100 years and that all changes. There is typically only one breadwinner – with exceptions that I’ll get to in a moment – and while some of them will have their own business (be it retail or in a service industry), most will be working for someone else.

    There will be a percentage who have no fixed employment and operate as day labor.

    Going into Victorian times, we have the workhouses and poorhouses, where brutal labor practices earn enough for survival but little more. While some were profitable for the owners, most earned less than they cost, and relied on charitable ‘sponsorship’ from other public institutions – sometimes governments, more often religious congregations. These are the exceptions that I mentioned. This is especially true where the father has deserted the family or died (often in war) leaving the mother to raise the children but unable to do so because of the gender biases built into the societies of the time.

    Go back still further, and it was a matter of public shame for a woman to work – with but a few exceptions such as midwifery. Nevertheless, they often earned supplemental income for the families with craft skills such as sewing, knitting, and needlework.

    The concept that the male was the breadwinner only gets stronger as you pass backwards through history.

    Fantasy games are usually not like that. They tend to see the world from the modern perspective and force the historical reality to conform to that perspective. In particular, gender bias is frequently and firmly excluded from fantasy societies.

    The core reasoning is that characters and players can be of either gender (or any of the supplementary gender identifications) and the makers of the games don’t wish to exclude potential markets with discomforting historical reality.

    There are a few GMs out there who intentionally try to find an ‘equal but distinct’ role for females and others within their fantasy societies; it’s difficult, but it can be done – and it usually happens by excluding common males from segments of the economy within the society. If there are occupations that are only open to women, and occupations of equal merit (NOT greater merit) that are only open to men, you construct a bilateral society in which two distinct halves come together to form a whole.

    But it would still be unusual for a single household to have multiple significant breadwinners; you had one principal earner and zero or more supplemental incomes ‘on the side’.

    Businesses were family operations in which the whole family were expected to contribute in some way, subject to needs and ability.

    And that’s the fundamental economic ‘brick’ of a community – one income per family, whether that income derives as profits from a business or from labor in someone else’s business.

    You can use this as a touchstone, a window into understanding the societies of history, all the way back into classical times – who earned the money and how? In early times, it might be that you need to equate coin-based wealth with an equivalent value in goods, but once you start thinking of farm produce or refined ore as money, not as goods, the economic similarities quickly reveal themselves.

    So that is also the foundation of economics in this system. One family, one income (plus possible supplements). In fact, there were periods in relatively recent history in which the supplementary income itself was justification for marriage and children.

    In modern times, we evaluate based on the reduction of expenses; this is because most of our utilities don’t rise in usage as fast as the number of people using them (which goes back to the muddying concept of ‘households’; if two people are sharing the costs, both have more economic leftover to spend because the costs per person have gone down; if they are NOT sharing expenses, each providing fully for themselves, then they are two ‘households’, not one. It also helps to think of rent as a ‘utility’ within this context).

    But that’s a very modern perspective, and one that only works with the modern concept of ‘utilities’ – electricity, gas, and so on. Go back before that, into the pre-industrial ages, and the perspective changes from one of diminishing liabilities into one of growth of potential advantages. And having daughters who could supplement the household income by working as maids or providing craft services gave a household an economic advantage.

    5.8.1.3.7 An Economic Village Model

        8 a^2 = b^2 – c^2.

    Looks simple, doesn’t it? In fact, it is oversimplified – the reality would be

        a^d = (b^e – c^f ) / g,

    but that’s beyond my ability to model, and too fiddly for game use.

    a = the village’s profitability. Some part of this may show up as public amenities; most of it will end up in the pockets of the broader social administration, in whatever form that takes.

    b = the village’s productivity, which can be simplified to the number of economic producers in the village. You could refine the model by contemplating unemployment rates, but the existence of day laborers whose average income automatically takes into account days when there’s no work to be found, means that we don’t have to.

    c = the village’s internal demand for services and products. While usually less than production, it doesn’t have to be so. But it’s usually close to b in value.

    To demonstrate the model, let’s throw out figures of 60 and 58 for b and c.

        8 a^2 = 60^2 – 58^2 = 3600 – 3364 = 236.
        a = (236 / 8)^0.5 = 29.5^0.5 = 5.43

    The village grows. b rises to 62. c rises to 59.

        8 a^2 = 62^2 – 59^2 = 3844 – 3481 = 363.
        a = (363 / 8)^0.5 = 45.375^0.5 = 6.736.

    It has risen – but not by very much.
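
    If you want to poke at other values of b and c, the model as a one-line function:

        # 8 a^2 = b^2 - c^2, so a = sqrt((b^2 - c^2) / 8)
        def village_profitability(b, c):
            return ((b ** 2 - c ** 2) / 8) ** 0.5

        print(round(village_profitability(60, 58), 2))    # 5.43
        print(round(village_profitability(62, 59), 3))    # 6.736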

    Things become clearer if you can define c as a percentage of b:

        a^2 = b^2 – (D x b^2) / 100
        100 a^2 = 100 b^2 – D x b^2 = b^2 x (100-D)

    If 98% of the village’s production goes to maintaining and supporting the village, then only 2% is left for economic growth. If the village adds more incomes, demand rises by the normal proportion as well – so economic growth rises, but quite slowly. In the above example calculations, 59/62 = 95.16% going to support the village – and 95% is about as low as it’s ever going to realistically go. In exceptionally productive years, it might be as low as 66.7%, but most years it’s going to be much higher than that.

    Side-bar: 5.8.1.3.6.1 Good Times

    You can actually model how often an exceptional year comes along, by making a couple of assumptions. First, if 66.7 is as good as they get, and 95 is as bad as an exceptionally good year gets, then the average ‘exceptional year’ will be 80.85%.

    Second, if 95% is as good as a typical year gets, and 102% is as bad as a typical year gets, then the average ‘normal’ year will be 98.5%.

    Third, if the long term average is 95.16%, then what we need is the number of typical years needed to raise the overall average (including one exceptional year) to 95.16%.

        95.16 x (n+1) = 80.85 + (n x 98.5)
        95.16 x n + 95.16 = 80.85 + 98.5 x n
        (95.16 – 98.5) x n = 80.85 – 95.16
        3.34 n = 14.31
        n = 14.31 / 3.34 = 4.284.

        4-and-a-quarter normal years to every 1 good year.

    You can go further, with this as a basis, and make the good years better or worse so that you end up with a whole number of years.

        95.16 x (5 +1) = g + 5 x 98.5
        g = 95.16 x 6 – 98.5 x 5
        g = 570.96 – 492.5 = 78.46.

    That’s a six-year cycle with one good year averaging 78.46% of productivity sustaining the village and five typical years in which 98.5% of productivity is needed for the purpose.
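
    If you'd like to re-run that with different bounds for 'good' and 'typical' years, here is the same algebra as a sketch:

        # How many typical years per good year keep the long-term average on target?
        def years_per_good_year(average, good, typical):
            return (good - average) / (average - typical)

        print(round(years_per_good_year(95.16, 80.85, 98.5), 3))    # 4.284

        # Or fix the cycle length and solve for how good the good year must be:
        def required_good_year(average, typical, cycle_years):
            return average * cycle_years - typical * (cycle_years - 1)

        print(round(required_good_year(95.16, 98.5, 6), 2))         # 78.46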

    I grew up on the land, and I can tell you that an industry is thriving if one year out of 10 is really good; an industry is marking time if one year out of 20 is good, and in trouble if one year in 25 or less is really profitable. One year in six is a boom.

    So to close out this sidebar, let’s look at what those numbers equate to in overall economic productivity for the rural population that depend on them:

        Boom: (1 x 78.46 + 5 x 98.5) / 6
            = (78.46 + 492.5) / 6
            = 570.96 / 6
            = 95.16%
            (we already knew this but it’s included for comparison)

        Thriving: (1 x 78.46 + 9 x 98.5) / 10
            = (78.46 + 886.5) / 10
            = 964.96 / 10
            = 96.496

        Stable, Marking Time: (1 x 78.46 + 19 x 98.5) / 20
            = (78.46 + 1871.5) / 20
            = 1949.96 / 20
            = 97.498

        In trouble / in economic decline: (1 x 78.46 + 24 x 98.5) / 25
            = (78.46 + 2364) / 25
            = 2442.46 / 25
            = 97.6984

    Look at the differences, and how thin the lines are between growth and stagnation.

        Stable to In Decline: 0.2004% change.
        Stable to Thriving: 1.002% change.
        Thriving to Booming: 1.336% change.
        Booming to In Decline: 2.5384% change.

    The whole boom-bust cycle – and it can be a cyclic phenomenon – is contained within 2.54% difference in economic activity.

    An aside within an aside shows why:

        Boom: 95.16% = 0.9516;
        0.9516 ^ 6 = 0.74255;
        so 25.74% productivity goes into growth.

        Thriving: 96.496% = 0.96496;
        0.96496 ^ 6 = 0.8073;
        so 19.27% productivity goes into growth over the same six-year period.

        Stable: 97.498% = 0.97498;
        0.97498 ^ 6 = 0.859;
        14.1% of productivity goes into growth over the same six-year period.

        Declining: 97.6984% = 0.976984;
        0.976984 ^ 6 = 0.8696;
        13.04% of productivity goes into growth.

    Every homeowner sweats a 0.25% change in interest rates because they compound, snowballing into huge differences. This is exactly the same thing.
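
    For anyone who wants to replicate the numbers above before tinkering with them, a sketch that reproduces both the cycle averages and the compounding:

        # Average % of productivity consumed over a cycle with exactly one good
        # year, then how much productivity compounds into growth over six years.
        def cycle_average(good, typical, cycle_years):
            return (good + typical * (cycle_years - 1)) / cycle_years

        # Reproduces the Boom / Thriving / Stable / Declining figures above.
        for label, years in (("Boom", 6), ("Thriving", 10), ("Stable", 20), ("Declining", 25)):
            avg = cycle_average(78.46, 98.5, years)
            retained = (avg / 100) ** 6                   # six years of compounding
            print(label, round(avg, 4), round((1 - retained) * 100, 2), "% to growth")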

    5.8.1.4 The Generic Village

    The generic village is perpetually dancing on a knife-edge, but the margins are so small that it’s trivially easy to overcome a bad year with a better one. Even a boom year doesn’t incite a lot of growth, but a lot of factors pulled together over a very long time, can.

    Some villages won’t manage to escape the slippery slope long enough and will decline into Hamlets, but find stability at this smaller size. Given time, disused buildings will be torn down and ‘robbed’ of any useful construction material because that’s close to free, and that alone can make enough of a difference economically. With the land reclaimed, after a while you could never tell that it once was a village.

    Some won’t be able to arrest their decline – whatever led to their establishment in the first place either isn’t profitable enough, or too much of the profits are being taken in fees, tithes, greed, and taxes. They decline into Thorpes.

    In some cases, communities exist for a single purpose; they never grew large enough to even have permanent structures. They are strictly temporary in nature (though one may persist for dozens of years or more); they are forever categorized as Mining or Logging Camps.

    Other villages have more factors pushing them to growth, and once they reach a certain size, they can organize and be recognized as a town. And some towns become cities, and some cities become a great metropolis.

    With each change of scale, the services on offer to the townsfolk, and the services on offer to the traveler passing through, increase.

    The fewer such services there are, the more general and generic they have to become, just to earn enough to stay in operation.

    The general view of a generic village is that most services exist purely for the benefit of the locals, but a small number of operations will offer services aimed at a temporary target market, the traveler. These services are often more profitable but less reliable in terms of income, more vulnerable to changes in markets. They don’t tend to be set up by existing residents; instead, they are founded by a traveler who settles down and joins a community because they see an economic opportunity.

    That means that the number of such services on offer is very strongly tied to both the growth of the village, and to the overall economic situation of the Kingdom as a whole and to the local Region of which this village is a part.

    Here’s another way to look at it: The reason so much of the village’s economic potential goes into maintaining the village is because of all those tithes and taxes and so on. Some of those will be based on the land in and around the village; some on the productivity of that land; and some of it on the size and economic activity of the village. The rest provides what the village needs to sustain its population and keep everything going. There’s not a lot left – but any addition to the bottom line that isn’t eroded away by those demands makes the village and the region more profitable, creating more opportunities for sustained growth. Again, there is a snowball effect.

    Some villages – and this is a social thing – don’t want the headaches and complications of growth; they like things just the way they are. They will have local rules and regulations designed to limit growth by making growth-producing business opportunities less attractive or compelling. Others desperately want growth, and will try to make themselves more attractive to operations that encourage it.

    That divides villages into two main categories and a number of subcategories.

    Main Category: Villages that encourage growth
         Subcategory: Villages that are growing
         Subcategory: Villages that are not growing
         Subcategory: Villages that are being left behind, and declining.
    Ratios: 40:40:20, respectively.

    Main Category: Villages that are discouraging growth despite the risk of decline
         Subcategory: Villages that are growing and can only slow that growth
         Subcategory: Villages that have achieved stability
         Subcategory: Villages that have or are declining.
    Ratios: 20:40:40, respectively.

    And that will about do it for this post. It will continue in part 5b!


Trade In Fantasy Ch. 5: Land Transport, Pt 5 (incomplete)


This entry is part 18 of 20 in the series Trade In Fantasy

We’ve used the economy to distribute fortifications, and used those to locate inns. Now let’s wrap some communities around them.

I have a series of images of communities of different sizes which will be sprinkled throughout this article. This is the first of these – something so sparsely-settled that it barely even qualifies as a community. It’s more a collection of close rural neighbors! Image by Jörg Peter from Pixabay

Table Of Contents

In parts 1-3 of this chapter:

Chapter 5: Land Transport

    5.1 Distance, Time, & Detriments

      5.1.1 Time Vs Distance
      5.1.2 Defining a terrain / region / locality

           5.1.2.1 Road Quality: An introductory mention

    5.2 Terrain

      5.2.0 Terrain Factor
      5.2.1 % Distance
      5.2.2 Good Roads
      5.2.3 Bad Roads
      5.2.4 Even Ground
      5.2.5 Broken Ground
      5.2.6 Marshlands
      5.2.7 Swamplands
      5.2.8 Woodlands
      5.2.9 Forests
      5.2.10 Rolling Hills
      5.2.11 Mountain Slopes
      5.2.12 Mountain Passes
      5.2.13 Deserts
      5.2.14 Exotic Terrain
      5.2.15 Road Quality
           5.2.15.1 The four-tier system
           5.2.15.2 The five-tier system
           5.2.15.3 The eight-tier system
           5.2.15.4 The ten-tier system

      5.2.16 Rivers & Other Waterways
           5.2.16.1 Fords
           5.2.16.2 Bridges
           5.2.16.3 Tolls
           5.2.16.4 Ferries
           5.2.16.5 Portage & Other Solutions

    5.3 Weather

      5.3.1 Seasonal Trend
      5.3.2 Broad Variations
      5.3.3 Narrow Variations
           5.3.3.1 Every 2nd month?
           5.3.3.2 Transition Months
           5.3.3.3 Adding a little randomness: 1/2 length variations
           5.3.3.4 Adding a little randomness: 1 1/2-, 2-, and 2 1/2-length variations

      5.3.4 Maintaining The Average
           5.3.4.1 Correction Timing
                5.3.4.1.1 Off-cycle corrections
                5.3.4.1.2 Oppositional Corrections
                5.3.4.1.3 Adjacent corrections
                5.3.4.1.4 Hangover corrections

           5.3.4.2 Correction Duration
                5.3.4.2.1 Distributed corrections: 12 months
                     5.3.4.2.1.1 Even Distribution
                     5.3.4.2.1.2 Random Distribution
                     5.3.4.2.1.3 Weighted Random Distribution

                5.3.4.2.2 Distributed corrections: 6 months
                5.3.4.2.3 Distributed corrections: 3 months
                5.3.4.2.4 Slow Corrections (2 months)
                5.3.4.2.5 Normal corrections: 1 month
                5.3.4.2.6 Fast corrections: 1/2 month (2 weeks)
                5.3.4.2.7 Catastrophic corrections 1/4 month (1 week)

           5.3.4.3 Maintaining Synchronization
           5.3.4.4 Multiple Correction Layers

    5.4 Losses & Hazards
    5.5 Expenses – as Terrain Factors
    5.6 Expenses – as aspects of Politics
    5.7 Inns, Castles, & Strongholds

      5.7.1 Strongholds
           5.7.1.1 Overall Military Strength
                5.7.1.1.1 Naval Strength
                5.7.1.1.2 Exotic Strength
                5.7.1.1.3 Adjusted Military Strength

           5.7.1.2 Mobility
                5.7.1.2.1 Roads
                5.7.1.2.2 Cross-country

           5.7.1.3 Kingdom Size and Capital Location
           5.7.1.4 Borders
           5.7.1.5 Terrain
           5.7.1.6 Internal Threat
           5.7.1.7 Priority
           5.7.1.8 Threat Level
           5.7.1.9 Zones
                5.7.1.9.1 Abstract Zones
                5.7.1.9.2 Applied Considerations
                     5.7.1.9.2.1 Sidebar: Why do it this way?

                5.7.1.9.3 Preliminary Zones, Zomania

           5.7.1.10 Kingdom Wealth
                 5.7.1.10.1 Legacy Defenses
                 5.7.1.10.2 Military Training
                 5.7.1.10.3 Disaster Relief
                 5.7.1.10.4 Religion
                 5.7.1.10.5 Magic
                 5.7.1.10.6 Tools
                 5.7.1.10.7 Entertainment
                 5.7.1.10.8 Resource Development
                 5.7.1.10.9 A Hypothetical Disaster
                 5.7.1.10.10 Housing & Funding Boosts
                 5.7.1.10.11 Food
                 5.7.1.10.12 Diplomacy
                 5.7.1.10.13 Trade
                 5.7.1.10.14 Education
                 5.7.1.10.15 Transport (Road Maintenance)
                 5.7.1.10.16 The Impact On Population

           5.7.1.11 Military Need: Theoretical Scenario 2

In the last part of this series:

           5.7.1.12 Stronghold Density
           5.7.1.13 Zone Size
           5.7.1.14 Base Area Protected per Stronghold
                 5.7.1.14.1 The Distance between defensive centers
                 5.7.1.14.2 The relationship between defensive patterns
                 5.7.1.14.3 The shape of the defensive pattern
                 5.7.1.14.4 What is 100% coverage, anyway?
                 5.7.1.14.5 Calculating Area Protected
                      5.7.1.14.5.1 Three Satellite
                      5.7.1.14.5.2 Four-Satellite

                 5.7.1.14.6 Configuration Choice(s)
                 5.7.1.14.7 The Impact On Roads
                 The Impact on populations

           5.7.1.15 Economic Adjustments
           5.7.1.16 Border Adjustments
           5.7.1.17 Historical vs Contemporary Structures
           5.7.1.18 Zone and Kingdom Totals
           5.7.1.19 Reserves

      5.7.2 Castles, Fortresses, and the like
           5.7.2.1 Distance to a satellite fortification using 2d6
           5.7.2.2 Distance to a neighboring hub
           5.7.2.3 Combining the two: the nearest neighbor

      5.7.3 Inns

In this part:

    5.8 Villages, Towns, & Cities

      5.8.1 Villages
           5.8.1.1 Village Frequency
           5.8.1.2 Village Initial Size
                Optional
           5.8.1.3 Village Demographics

                5.8.1.3.1 Maternal Survival
                5.8.1.3.2 Paternal Survival
                5.8.1.3.3 Childless Couples
                5.8.1.3.4 Unwed Singles
                5.8.1.3.5 Population Breakdown
                5.8.1.3.6 The Economics Of The Demographics
                     Side-bar: 5.8.1.3.6.1 Good Times

           5.8.1.4 The Generic Village
           5.8.1.5 Blended Models
           5.8.1.6 Zomania – An Example
                5.8.1.6.1 Zone Selection
                5.8.1.6.2 Sidebar: Elevation Classification
                5.8.1.6.3 Area Adjustments – from 5.7.1.13
                5.8.1.6.4 Defensive Pattern – from 5.7.1.14
                5.8.1.6.5 Sidebar: The Size Of Zomania, revisited
                5.8.1.6.6 Sidebar: Changes Of Defensive Structure
                5.8.1.6.7 Inns In Zone 7 – from 5.7.3

      5.8.2 Towns
           5.8.2.1 Towns Frequency
           5.8.2.2 Town Initial Size
           5.8.2.3 The Generic Town

      5.8.3 Cities
           5.8.3.1 Small City Frequency
           5.8.3.2 Small City Size
           5.8.3.3 Size Of The Capital
           5.8.3.4 Large City Frequency
           5.8.3.5 Large City Size

      5.8.4 Economic Factors, Simplified
           5.8.4.1 Trade Routes & Connections
           5.8.4.2 Local Industry
           5.8.4.3 Military Significance
           5.8.4.4 Scenery & History
           5.8.4.5 Other Economic Modifiers
           5.8.4.6 Up-scaled Villages
           5.8.4.7 Up-scaled Towns
           5.8.4.8 Up-scaled Small Cities
           5.8.4.9 Upscaling The Capital & Large Cities

      5.8.5 Overall Population
           5.8.5.1 Realm Size
           5.8.5.2 % Wilderness
           5.8.5.3 % Fertile
           5.8.5.4 % Good
           5.8.5.5 % Mediocre
           5.8.5.6 % Poor
           5.8.5.7 % Dire
           5.8.5.8 % Wasteland
           5.8.5.9 Net Agricultural Capacity

           5.8.5.10 Misadventures, Disasters, and Calamities
           5.8.5.11 Birth Rate per year
           5.8.5.12 Mortality
                5.8.5.12.1 Infant Mortality
                5.8.5.12.2 Child Mortality
                5.8.5.12.3 Teen Mortality
                5.8.5.12.4 Youth Mortality
                5.8.5.12.5 Adult Mortality
                5.8.5.12.6 Senior Mortality
                5.8.5.12.7 Elderly Mortality
                5.8.5.12.8 Venerable Mortality
                5.8.5.12.9 Net Mortality

           5.8.5.13 Net Population

And still to come in this chapter:

      5.8.6 Population Distribution
           5.8.6.1 The Roaming Population
           5.8.6.2 The Capital
           5.8.6.3 The Cities
           5.8.6.4 Number of Towns
           5.8.6.5 Number of Villages
           5.8.6.6 Hypothetical Population
           5.8.6.7 The Realm Factor
           5.8.6.8 True Village Size
           5.8.6.9 True Town Size
           5.8.6.10 Adjusted City Size
           5.8.6.11 Adjusted Capital Size

      5.8.7 Population Centers On The Fly
           5.8.7.1 Total Population Centers
           5.8.7.2 The Distribution Table
           5.8.7.3 The Cities
           5.8.7.4 Village or Town?
           5.8.7.5 Size Bias
                5.8.7.5.1 Economic Bias
                5.8.7.5.2 Fertility Bias
                5.8.7.5.3 Military Personnel
                5.8.7.5.4 The Net Bias

           5.8.7.6 The Die Roll
           5.8.7.7 Applying Net Bias
           5.8.7.8 Applying The Realm Factor
           5.8.7.9 The True Size
                5.8.7.9.1 Justifying The Size
                5.8.7.9.2 The Implications

    5.9 Compiled Trade Routes

      5.9.1 National Legs
      5.9.2 Sub-Legs
      5.9.3 Compounding Terrain Factors
      5.9.4 Compounding Weather Factors
      5.9.5 Compounding Expenses
      5.9.6 Compounding Losses
      5.9.7 Compounding Profits
      5.9.8 Other Expenses
      5.9.9 Net Profit

    5.10 Time
    5.11 Exotic Transport

In future chapters:
  1. Waterborne Transport
  2. Spoilage
  3. Key Personnel
  4. The Journey
  5. Arrival
  6. Journey’s End
  7. Adventures En Route
5.8 Villages, Towns, & Cities

Part 5 of Chapter 5 is all about Population and its distribution. Most systems that I’ve seen for this purpose start with an overall population and work backwards, and often end up with unreasonable results, like a village every mile-and-a-half.

My system works the other way – from a population model, to a population density, to a local population. Many local populations give a Zone population, and the total of the Zone populations gives the Kingdom population overall.

5.8.0 Concepts & Principles

Select a model based on the desired ‘look and feel’ of the society within the Kingdom / Zone. The model describes the general distribution of population within the Kingdom / Zone, assuming a fixed unit of area (10,000 km^2), but most zones will be smaller.

The model plus a random roll sets initial village size. Village Frequency is determined by the placement of Inns & Administrative / Military structures, already defined. Together these define the total population density of an entire Kingdom according to the model.

This can then be applied to the size of the actual Kingdom to determine the total population of the Kingdom.

All of the above is on today’s agenda. In addition, there will be contributing factors determined that will be applied going forward.

Each village occupies a footprint termed a Locus.

The location within a locus actually occupied by the village or town is generally defined by the content of that locus. The population center will always be in the location within the locus that is most advantageous to growth.

A series of factors adjusts the size of the village within the locus, sometimes positively and sometimes negatively. Each factor yields a fractional value called a Scale Value. Applicable Scale Values determine the village location because many of them are specific to this place or that, enabling the location to be quickly refined within the locus.

Where there are multiple possible locations of roughly equal value, a community will split into two half-sized populations which will begin growing toward each other.

These Scale Values are totaled. The total Scale Value is applied as exponential growth to the base village size to determine the nominal size of the community.

If this is sufficient to trigger growth into a new size category, it is further adjusted and the new base size is used with the adjusted value to redetermine the size. This process iterates (i.e. gets applied repeatedly) until the final size of the settlement is determined.
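
None of the individual Scale Values have been defined yet – they come later in the chapter – but the basic shape of that last step, as a heavily hedged sketch (the growth base of 1.5 is a placeholder of mine, not a value from this system, and the re-iteration across size categories is omitted), looks something like this:

    # Illustrative only: GROWTH_BASE is a placeholder, not a system value.
    GROWTH_BASE = 1.5

    def nominal_size(base_size, scale_values):
        total = sum(scale_values)                  # the Scale Values are totaled...
        return base_size * GROWTH_BASE ** total    # ...and applied as exponential growth

    print(round(nominal_size(400, [0.3, -0.1, 0.2])))   # 400 x 1.5^0.4, about 470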

Some conditions restrict community size by passing on excess growth to neighboring communities; these are passed from one to another until reaching a community that is no longer restricted. That community is sometimes referred to as the “Gateway” to the region. Becoming a ‘gateway’ is also a growth factor!

This is all achieved by taking the excess part of the Scale Value and applying it as a modifier to the nearest Locus outside the restricted area, reducing the total scaling factor that applies to settlements within the restricted area. Not all the excess can be redirected; growth in restricted areas is slowed, not stopped.

Along the way, various side-issues will be raised and assessed, building up a population profile for the Locus, the administrative division, the Zone overall, and for the Kingdom as a whole. In particular, the political infrastructure of the Kingdom gets determined.

Finally, these various considerations will come together to provide a system whereby a GM can generate a village ‘on the fly’ whenever a group of characters (PCs most of the time) enter a locus or cross a border.

At least, that’s how it’s all supposed to work in theory! As always, if the reality doesn’t yield useful results, I’ll feel free to diverge from this road-map!

    5.8.0.1 Frequency, Size, and Services

    The section above does a good job of outlining the process, but I thought it worth taking a moment to explain the philosophy behind it and the reason for this particular approach.

      5.8.0.1.1 The traditional approach

      The fundamental concepts by which population levels are usually defined come down to two main ones and a boat-load of implications.

      The first primary factor is settlement frequency – how many miles or kilometers or day’s march apart they are. The first two options are the ones with which most readers will be familiar and they have the virtue – and penalty – of being absolute measurements. The third option is more abstract, but can also be more practical. It takes account of terrain, for example, and at first that might seem like a good thing – but then you realize that it takes it into account backwards: if the terrain is poor, travel over it will be slower – but a fixed ‘average time apart’ then means that the settlements will cluster more closely together, i.e. there will be less physical distance covered in the same amount of time because of the terrain. What you really want is the opposite – good terrain clustering communities together, bad terrain setting them further apart.

      The second primary factor is settlement size – how many families or dwellings make up a ‘typical community’ in the specific zone.

      It’s the implications that start to get complicated. Between them, these specify the level of economic and industrial capacity of the typical community, and thus, what services are likely to be available. But that then gets muddied somewhat by demand. Certain services are always going to be in demand and providing those services is an economic opportunity for a practitioner.

      And that then gets complicated by the logistics of travel – the ‘footprint’ serviced by a given provider will vary from one occupation to another. A good blacksmith may service several small communities (if they are close together), or just one, while a mill may have a much bigger ‘footprint’.

      Add to that the secondary impact of travel capabilities – if travel is easy, and the community is on a trade route, there will be more services geared toward supplying the needs of travelers; if not, the primary driving force will be the needs of the inhabitants.

      The more you look into it, the bigger the mess the whole thing becomes. And that’s why I have rejected this traditional approach, at least for the most part.

      5.8.0.1.2 The alternative approach

      Instead, each settlement starts off at a base size and separation. The ‘tail’ – the implications – then wag the dog. Every location has benefits and drawbacks – the benefits help the settlement grow, the drawbacks cause it to shrink in size. If the demand for a blacksmith is high enough, there will be a blacksmith – who gets added to the base population and causes further population growth. If there’s no local blacksmith, but there is one in the next town over, that makes that town grow at the expense of this community. Taking stock of every relevant factor, the size of the actual settlement is then adjusted.

      But there’s one more way of looking at this approach, and for me, it makes this the most compelling possible option – it develops village size to accommodate the needs of the plot! If you need there to be a sage, or a blacksmith, or a tavern with rooms for travelers in the next community, they are there – and the community grows, within the context of the terrain and other factors, to whatever size is needed to justify the presence of these services.

      And if you don’t have any specific plot needs, the defaults of terrain and frequency and traffic and trade dictate the size and the services that are available should the PCs decide they need them.

    5.8.0.2 Community Sizes: Base, and smaller

    The fundamental unit of community size in this system is the Village. It has a certain base population, and that population size supports the provision of a certain number of general services to the community. These are ‘General Services’ and they exist to meet the needs of the inhabitants. A base-sized village also supports a single “Specialist Service” – i.e. a service with a ‘footprint’ larger than just this community. If the distance between communities is large enough, it may add a second “Specialist Service”, causing the community to grow – but it’s still within the range of ‘normal’ for the base size.

    Various factors shrink communities. If a community shrinks too much, it enters a community scale lower down the size chart. While the real-world terminology is vague in application, in this ‘unified’ view, these are designated Hamlets, and they have a base size 1/8 that of the base community. Hamlets no longer offer any Specialist services, and support fewer ‘General Service’ providers. The model supports Ha-1, 2, and 3 (those terms will make more sense shortly).

    Communities smaller than a Hamlet are Thorpes. Officially, this is a variant of a Middle English word meaning hamlet or small village – but I’ve expropriated the term for usage to represent the smallest of settlements. Once again, we can have Th-1, 2, and 3, and the base size of a Thorpe is 1/8 that of a Hamlet.

    Except that we can go smaller!

    Smaller than a Thorpe is a mining or logging Camp. Actually, the biggest of these overlap with a Thorpe in size, but the typical-and-smaller range of camps starts where a Thorpe leaves off. Such camps exist to enable the residents to perform one function and one function only; they provide only the essentials necessary to achieve that. These are often (usually?) a satellite of a larger community somewhere nearby. Any single-purpose camp comes under this designation.

    Camps can be rated Ca-1, -2, -3, -4, or -5. The base size of a camp is 1/4 that of a Thorpe (but they also have a minimum population of 1).

    If you’re keeping track, that’s 1/4 of 1/8 of 1/8 of a village, or 1/256th. If your village base size is 256 people or smaller, then the ‘minimum 1’ rule can be said to be in effect.
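
    Those ratios, as a quick sketch:

        # Base sizes relative to the Village base, per the ratios above.
        def tier_base_sizes(village_base):
            hamlet = village_base / 8
            thorpe = hamlet / 8
            camp = max(1, thorpe / 4)          # camps bottom out at 1 person
            return hamlet, thorpe, camp

        print(tier_base_sizes(256))            # (32.0, 4.0, 1.0)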

    Technically, you could also describe a Caravan as a Camp – it just happens to be mobile, or semi-mobile.

    5.8.0.3 Community Sizes: Larger than Base

    Going the other way, we find ourselves buried in adjectives, because there aren’t many terms on offer. Things get even more confusing when you discover that the definition of a city isn’t what we tend to associate with the term – and different countries have different definitions in terms of size.

    And, since most adjectives tend to be relative in meaning, and subject to interpretation, I’ve tried to eschew them in favor of suffixes.

    So, larger than a Village is a Village-2, larger than a Village-2 is a Village-3, and larger than a Village-3 is a Village-4.

    A Village-5 is the same size as a Town (leaving off the -1 suffix). The meaning of the term “Town” is also something that can vary widely from one culture to another. The term is used here to designate a community with a municipal authority beyond a singular mayor / burgomaster / whatever. In England, a Town is usually formally defined by a legal Charter issued by the Crown, giving it a specific identity outside of the control of the regional Nobility. In the US, it loosely refers to incorporated communities – i.e. a community that has issued its own Charter, which formally “Incorporates” the community.

    Australia and Canada distinguish communities based on population thresholds – but these can vary from state to state. Nevertheless, this is the mindset that this system adopts.

    The difference between a Town and a Village is that the town provides, by virtue of its Charter, services restricted to the Town Limits, collecting rates and revenues to fund these services; in a Village, there is no central authority to provide these services, and any that are provided are provided by the broader administrative unit – be it a state government or a Nobleman, paid for from the taxes and fees they are entitled to collect.

    ‘Town’ is followed by Town-2, Town-3, Town-4, and Town-5.

    Towns -6 to -10 follow, but a Town-6 is the same size as a City-1.

    A city is distinguished by having a metropolitan area beyond a simple town square, surrounded by residential districts or suburbs. Many of these will possess some singular identifying traits or characteristics (social or economic in nature), or will claim such an identity. Each suburb or district has its own independent retail or services providers. The number of suburbs or districts is roughly equivalent to the city-suffix squared, plus 1, not counting the metropolitan zone. So City-1 has 2 residential zones, City-2 has 5, City-3 has 10, and so on. These residential zones are all still administered by the central metropolitan zone.

    City-5 (with its 26 residential zones) is the same size as Metropolis-1. This is the point at which the central metropolitan area and surrounding suburbs are excerpted from the larger community to form a smaller City (usually City-1 or -2), while the remaining suburbs or districts collectively organize into a separate but contiguous City (usually City-2 or City-3 in size) with an authority independent of that of the central hub. Collectively, these form “Greater [name]”. For example, Greater Sydney consists of the City Of Sydney and 32 surrounding Cities, each of which contains and administers a number of smaller Suburbs. My residence is in the suburb of Panania, which is one of 41 suburbs within the City of Canterbury-Bankstown.

    You can work backwards from such numbers.

    Canterbury-Bankstown, with 41 suburbs, would have a suffix = sqr root (41-2) = sqr root (39) = 6.245. But this is the result of forced amalgamation between two different cities by the state government, a quite unpopular move at the time. Canterbury used to have 17 suburbs and be a City-3.87, while Bankstown had 10, and was a city-2.8. When they were merged, additional suburbs were also added from surrounding areas. Greater Sydney itself would rank as a City-25.5 if taken collectively – but it instead rates as a Metropolis-5.5 (32 cities, -2, take the square root). But Greater Sydney is a BIG city – 5,356,944 people – or more than five times the population of Imperial Rome at its height (1 Million, according to best estimates).

    The justification given for the amalgamation was economy of scale, and for some councils who were struggling to provide services, that was fair enough – but some such mergers were refused by the State Government for political reasons, and others forced through against the wishes of residents even though the parent cities were financially sound. So the whole thing stank of corruption and political manipulation. The leader of the governing party saw his popularity plummet to trump-like figures as a result of this and a couple of other controversies, and was forced to resign so that his successor would stand a shadow of a chance at the next State Election and so that his unpopularity would not impact on the Federal Election due later that year. It was a successful move on the latter front (just barely) but the shadow wasn’t deep enough on the former, and there was a change of state government.

    Adding to the size of Sydney is the fact that it’s a State Capital – and our present National Capital only exists as a compromise between Sydney and Melbourne, neither of whom were willing to let the other be the political Big Dog.

    5.8.0.4 Demographic Research

    Although the models will abstract things greatly, and not adhere to historical reality if it’s inconvenient, reality has to be the underpinning of the Demographic Models that are available.

    You don’t have to dig very deep into the history of various townships in Arkansas to discover the effects, both economic and social, of gaining or losing County leadership; I can only project up to the effect of being named a State Capital, and then scale up again for a National Capital.

    But it is worth noting that in 33 out of 50 US States, the largest city in the state is Not the State Capital. I put this down to everyone else in the state not wanting to be dominated by that largest city, just as Melbourne would not accept Sydney as the capital of Australia as well as of the state of New South Wales.

    Before moving on from this discussion, some historical context is worth highlighting.

    According to this graph…

    Excerpted from “Mortality, migration and epidemiological change in English cities, 1600–1870” by Romola Davenport, University of Cambridge, CC BY 4.0, courtesy of Researchgate (image scaled by me)

    …in 1600, the population of England was 5 million, and about 10% – half a million – lived in an Urban setting. In about 1650, the general population peaked and only slow growth could be seen until about 1775. At that time, the urban population was about 25%, or 1.25 million – and half of them lived in London.

    This graph…

    Excerpted from “When Bioterrorism Was No Big Deal” by Patricia Beeson & Werner Troesken (both from the University of Pittsburgh), Copyright unstated, courtesy of Researchgate (left caption moved and image cropped and scaled by me).

    …is harder to read, but shows that the trend given in the first continues back another 50 years and then flattens – so in 1550 it would have been about 6% of 5 million (i.e. 300,000) and in 1500, it might only have been 5% (250,000). And almost all of them would have resided in London.

    (That paper, downloadable from the link “Researchgate”, has a bunch of others for comparison at the back – Western Europe, Scandinavia, Eastern Europe. Worth grabbing for reference if one of those resembles the Kingdom “tone” that you’re going for.)

    This graph…

    Historical_population_of_France.svg by Max Roser, CC BY-SA 3.0, via Wikimedia Commons

    …shows the historical population of France, which provides additional context.

    Below, I’ve isolated the part that matches the 1500-1950 range of the England Graphs:

    Extract From Historical_population_of_France.svg
    Creative Commons CC-BY-3.0 as above, Cropped and Enlarged by Mike

    In 1500, there were about 15 million in France, rising to 18 million by 1600. 1550 would therefore have been about 16.5 million.

    In 1500, it can be estimated that 5.6% of the French population lived in towns of 10,000 or more. In 1550, that was 6.3%; and in 1600, 8%, according to one source (and there aren’t many to pick from).

    In 1500, Paris had a population of about 150,000, or just 16.1% of the urban population.

    In 1550, that was somewhere between 300 and 350,000 people, and 25.2-29.4% of the urban population.

    In 1600, we’re talking between 300 and 400,000 people, and 18.8-25% of the urban population – so other cities grew faster than Paris in the 1550-1600 period.

    Which other cities? The only one with more than 60,000 on all three dates was Paris. In 1600, Lyon or Rouen may have hit that number. We need to go to one-sixth the size of Paris or less for the next biggest population center, Toulouse, but it might also be in the vicinity of Lyon and Rouen. Estimates of the population in those cities at the time vary from about 40-60,000 in 1500, and 70-80,000 in 1600. But when you compare that with England, you see a stark difference.

    Here are some estimated population densities and population levels from the year 1300:

    ▪  France – 36 to 40 people per sqr km – 18 to 20 million total population.

    ▪  England and Wales – 33 to 40 people per sqr km – 5-6 million total population.

    ▪  Germany (then core of the Holy Roman Empire) – 24 to 28 people per square km, 12 to 14 million total population.

    ▪  Scotland – 6-13 people per sqr km – 0.5 to 1 million total population.

    Some other relevant Demographic research:

    France

    ▪  Largest Regional Cities (Excluding Capital): Milan, Venice, Florence (in broader Western Europe) were over 100,000. In France, cities like Rouen or Bordeaux may have reached 25,000-40,000.

    ▪  Major Towns (5,000-10,000+): Numerous. The median major town size in this range may have been around 12,000-15,000.

    ▪  Minor Towns/Large Boroughs (1,000-5,000): The backbone of the French urban network; perhaps a few hundred such towns scattered across the kingdom.

    ▪  Very Small Boroughs (Below 1,000): Most settlements below 1,000 people were agricultural villages.

    England (and Wales)

    ▪  Largest Regional Cities (Excluding Capital): York and Bristol were the undisputed next-largest, likely reaching 15,000-25,000 at their peak before the Black Death.

    ▪  Major Towns (5,000-10,000+): Only a handful of towns (e.g., Norwich, Coventry, King’s Lynn) were in this tier, perhaps 8-10 total.

    ▪  Minor Towns/Large Boroughs (1,000-5,000): This was the most numerous class of true urban centers in England. The average was likely around 2,000-3,500 people.

    ▪  Very Small Boroughs (Below 1,000): Many hundreds of market settlements were under 1,000 people, functioning as local market centers but not true urban areas.

    Germany (Holy Roman Empire Core)

    ▪  Largest Regional Cities (Excluding Capital): Cities like Cologne and Prague were major international centers, likely with 30,000-40,000 inhabitants.

    ▪  Major Towns (5,000-10,000+): Cities like Lübeck, Nuremberg, and Augsburg were regional powers, mostly in the 10,000-25,000 range.

    ▪  Minor Towns/Large Boroughs (1,000-5,000): There were hundreds of walled, independent towns across the Empire, with many falling into this category. The average would be difficult to pin down but was lower than England.

    ▪  Very Small Boroughs (Below 1,000): A very large number of minor market towns and Minderstädte (small towns) were below 1,000.

    Scotland

    ▪  Largest Regional Cities (Excluding Capital): Edinburgh was the only city approaching major European size, perhaps 10,000-12,000 at its peak.

    ▪  Major Towns (5,000-10,000+): None. The scale of Scottish urbanization was significantly smaller than its neighbors.

    ▪  Minor Towns/Large Boroughs (1,000-5,000): The largest burghs, such as Aberdeen and Perth, were likely only around 3,000 people.

    ▪  Very Small Boroughs (Below 1,000): Most Scottish burghs (towns) throughout the Middle Ages are believed to have had populations below 1,000.

    Those four models emerge as the most robust to choose from. But I’m going to expand the list further with some bigger-population models and one or two even smaller ones, and abstract the ones that have already been identified so that it doesn’t matter if the results of the generation model aren’t quite 100% in line with history.

    This is clearly a village in Switzerland. The buildings are bigger and much closer together, but there’s still a lot of empty landscape. Image by Christel from Pixabay

    5.8.0.4 The reality-based Demographic Models

    ▪  France: Demonstrated a more distributed urban network with many cities (especially in the Low Countries/Italy) capable of sustaining populations of 25,000+.
        Urban Population: 5.6% (1500) – 8% (1600)
        Hierarchy Slope: Flat but rising sharply
        Regional Cities: 0.2-0.3 / 10,000 sqr km
        Major Towns: 0.5-1 / 10,000 sqr km
        Minor Towns: 5-7 / 10,000 sqr km
        Base Village: 320-480

    ▪  Germany: Akin to France but with a significant amount of Forests and Mountains which were relatively lightly populated while occupying great swathes of land.
        Urban Population: 10%
        Hierarchy Slope: Flat
        Regional Cities: 0.4-0.5 / 10,000 sqr km
        Major Towns: 1-2 / 10,000 sqr km
        Minor Towns: 8-12 / 10,000 sqr km
        Base Village: 400-600

    ▪  England: Had a relatively high urban density for its size, but a steep hierarchy. The difference between London and the next tier (York/Bristol) was large, and the gap between those and the average town was also significant.
        Urban Population: 5-6%
        Hierarchy Slope: Steep
        Regional Cities: 0.15 / 10,000 sqr km
        Major Towns: 0.4-0.5 / 10,000 sqr km
        Minor Towns: 3-4 / 10,000 sqr km
        Base Village: 240-360

    ▪  Scotland: Was the least urbanized region. Even its major burghs would be considered only medium-sized towns in England or minor towns in France.
        Regional Cities: None.
        Urban Population: 2-3%
        Hierarchy Slope: Very Flat, flattening further
        Major Towns: 0.1 / 10,000 sqr km
        Minor Towns: 0.5-1 / 10,000 sqr km
        Base Village: 160-240

    5.8.0.5 The Artificial Demographic Models

    To those four, I am adding the following:

    Imperial Core: A region dominated by a single capital or a handful of enormous cities, like Ancient Rome, Ancient China, or Mamluk Egypt. It would also apply to any of the others if they have significant improvements over standard medieval technology (including magic) in the fields of agronomy and food transportation.
        Urban Population: 15-20%
        Hierarchy Slope: Very Steep
        Regional Cities: 0.5 – 1 / 10,000 sqr km
        Major Towns: 0.1 – 0.3 / 10,000 sqr km
        Minor Towns: 1-2 / 10,000 sqr km
        Base Village: 480-720

    Coastal Mercantile Model: Based on the late medieval and early modern Low Countries (Flanders / Holland) and the Italian City States. Power and wealth are distributed among many medium-large communities, trading ports, and other economic centers, but there is no one super-sized city.
        Urban Population: 20-30%
        Hierarchy Slope: Very flat at low levels, rising sharply from higher town sizes (30,000 people)
        Regional Cities: 1 – 2 / 10,000 sqr km
        Major Towns: 2 – 4 / 10,000 sqr km
        Minor Towns: 4 – 6 / 10,000 sqr km
        Base Village: 280-420

    Frontier Nation: Somewhere in between Scotland and England, consisting of one part moderately densely settled, one part very sparsely settled (4-4 times as large) and a third part in the middle (2-3 times as large), relative to the densely settled region.
        Urban Population: 4-8%
        Hierarchy Slope: Moderate, flattens
        Regional Cities: 0.05 / 10,000 sqr km
        Major Towns: 0.2-0.25 / 10,000 sqr km
        Minor Towns: 1-2 / 10,000 sqr km
        Base Village: 200-300

    Tribal / Clan Model: based on Early Medieval Scandinavia and central Africa. Also useful for an extensive Nomadic Trading Network. Settlements are mainly defensive or seasonal gathering points.
        Urban Population: 2-5%
        Hierarchy Slope: Impossibly Steep but capped
        Regional Cities: None
        Major Towns: 0.001 / 10,000 sqr km
        Minor Towns: 0.05 / 10,000 sqr km
        Base Village: 80-120

5.8.1 Villages

The village is the fundamental unit of the population distribution simulation – everything starts there and flows from it.

    5.8.1.1 Village Frequency

    I’ve given this section a title that I think everyone will understand, but it’s not actually what it’s all about. The real question to be answered here is, how big is the Locus surrounding a population?

    The answer differs from one Demographic Model to another, unsurprisingly.

    The area of a given Locus is:

        SL = MF x (Pop)^0.5 x k,
            where,
            SL = Locus Size
            MF = Model Factor
            Pop is the population of the village
            and k = a constant that defines the units of area.

    The base calculation, with a k of 1, is measured in days of travel. That works for a lot of things, but comparison to a base area of 10,000 km^2 isn’t one of them. For that, we need a different k – one based on the Travel Ranges defined in previous parts of this series.

    Section 5.7.1.14.5.1 gives answers based on travel speed, more as a side-issue than anything else, based on the number of miles that can be traversed in a day:

      (Very) Low d = 10 miles / day
      Low d = 20 miles / day
      Reasonable d = 25 miles / day
      Doable d = 30 miles / day
      Close To Max (High) d = 40 miles / day
      Max d = 50 miles / day
          ( x 1.61 = km).

    — but these are the values for Infantry Marching, and that’s a whole other thing.

    Infantry march faster than people walk or ride in wagons. The amount varies depending on terrain (that’s the main variable in the above values), but – depending on who you ask – it’s 1 2/3 or 2 or 2.5 times.

    But, because they travel in numbers, they can march for less time in a day. Some say 6 hours, some 7, some 8. Ordinary travelers may be slower, but they can operate for all but an hour or two of daylight. That might be 8-2=6 or 7 hours in winter, but it’s more like 12-2=10 or 11 hours in summer.

    And it has to be borne in mind that the basis for these values assumes travel in Summer – at least in medieval times. But we want to take the seasons out of the equation entirely and set a baseline from which to adjust the list given earlier.

    One could argue that summer is when the crops are growing, and therefore that should be the basis of measurement, given that we’re looking for the size of a community’s reach.

    So let’s take the summer values and average them to 10.5 hours. When you take the various factors into account and generate a table (I used 6, 6.5, 7, 7.5, and 8 for army marching times per day, and the various figures cited for speed plus 2.25 as an additional intermediate value), work out all the values that it might be, and average them, you get 1.04. That’s so small a change as to be negligible – 1.04 x 50 = 52. We will have far bigger approximations than that!

    So we can use the existing table as our baseline. Isn’t that convenient?

    But which value from amongst those listed to choose? Overall, unless there’s some reason not to, you have to assume that terrain is going to average out when you’re talking about a baseline unit of 10,000 sqr kilometers. So, let’s use the “Reasonable” value unless there’s reason to change it.

    And that gives a conversion rate of 1 day’s travel = roughly 25 miles, or 40 km. And those are nice round numbers.

    Now, a locus is roughly circular in shape, so is that going to be a radius or a diameter? Well, a “market day” is defined by how far a peasant or farmer can travel with their goods and return in a day, so I think we’re dealing with a radius of 1/2 the measurement, which makes that measurement the diameter of the locus.

    Which means that the base radius of a locus is 12.5 miles or 20 km.

    In an area where the terrain is friendly in terms of travel, this could inflate to twice as much; in an area where terrain makes travel difficult, it could be 1/2 as much or less. But if we’re looking for a baseline, that works.

    12.5 miles radius = area roughly 500 sqr miles = area 1270 sqr km. So in 10,000 sqr km, we would expect to find, on average, 7.9 loci. But that’s without looking at the population levels and the required Model Factors.

    The minimum size for an English Village is 240 people. The Square Root of 240 is 15.5.

    So the formula is now 1270 = 15.5 x 20 x Model Factor, and the Model Factor for England conditions and demographics is 4.1. Under this demographic model, there will be 4.1 Village Loci – which is the same thing as 4.1 villages – in 10,000 sqr km.

    Having worked one example out to show you how it’s done, here are the Model Factors for all the Demographic Models:

    ▪ Imperial Core: 480^0.5 = 21.9, and 21.9 x 20 x Model Factor = 1270, so MF = 2.9
    ▪ Germany (HRE): 400^0.5=20, and 20 x 20 x MF = 1270, so MF = 3.175
    ▪ France: 320^0.5 = 17.9, and 17.9 x 20 x MF = 1270, so MF = 3.55
    ▪ Coastal Mercantile Model: 280^0.5 = 16.733, and 16.733 x 20 x MF = 1270, so MF = 3.8
    ▪ England: 4.1
    ▪ Frontier Nation: 200^0.5 = 14.14, and 14.14 x 20 x MF = 1270, so MF = 4.5
    ▪ Scotland: 160^0.5 = 12.65, and 12.65 x 20 x MF = 1270, so MF = 5.02
    ▪ Tribal / Clan Model: 80^0.5 = 8.95, and 8.95 x 20 x MF = 1270, so MF = 7.1
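
    If you prefer to let a computer do the arithmetic, here’s a minimal Python sketch of the same calculation – the model names and the minimum Base Village populations are taken from the models above, and the 1270 sqr km locus area and the k of 20 are the baseline values just derived.

        import math

        LOCUS_BASE_AREA = 1270   # sqr km: a circle of 20 km radius, rounded
        K = 20                   # km per day's travel, i.e. the locus diameter

        # minimum Base Village populations from the Demographic Models above
        BASE_VILLAGE_MIN = {
            "Imperial Core": 480,
            "Germany (HRE)": 400,
            "France": 320,
            "Coastal Mercantile": 280,
            "England": 240,
            "Frontier Nation": 200,
            "Scotland": 160,
            "Tribal / Clan": 80,
        }

        for model, pop in BASE_VILLAGE_MIN.items():
            mf = LOCUS_BASE_AREA / (math.sqrt(pop) * K)
            print(f"{model}: MF = {mf:.2f}")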

    So, why didn’t I simply state the number of loci (i.e. the number of villages) in an area?

    It’s because that’s a base number. When we get to working on actual loci or zones, these can shrink or grow according to other factors. This is a guideline – but to define an actual village and its surrounds, we will need to use the MF. Besides, you might want to generate a specific model for a specific Kingdom in your game.

    You may be wondering, then, why it should be brought up at all, or especially at this stage? The answer to those questions is that the area calculated is a generic base number which may have only passing resemblance to the actual size of the locus.

    A locus will continue to expand until it hits a natural boundary, a border, or equidistance to another population center. Very few of them will actually be round in shape – some of them not even approximately.

    The ratio between ACTUAL area and BASE area is an important factor in calculating the size of a specific village.

    An example of the ‘real borders’ of a Locus

    To create the above map, I made a copy of the base map (shown to the left). At the middle top and bottom, I placed a dot representing the Locus ‘radius’.

    At the left top, another dot marked the half-way point to the next town (top left), where it intersected a change of terrain – in this case, a river.

    At the top right, doing the same thing would have made the town at top right a bit of a mixed bag – it already has forests and hills and probably mountains. I didn’t want it to have a lot of farmland as well. So I deliberately let the current locus stretch up that way. The point below it is also slightly closer to the top right town than it would normally be, but that’s where there is a change of terrain – the road. I tossed up whether the locus in question should include the intersection and road, but decided against it.

    And so on. Once I had the main intersection points plotted, I thought about intermediate points – I didn’t want terrain features to be split between two towns, they had to belong to one or the other. You can see the results in the “bites” that are taken out of the borders of the locus at the bottom.

    If you use your fingers, one pointing at the town in the center and the other at the top-middle intersection point, and then rotate them to get an idea of the ‘circular’ shape of the locus, you can see that it’s missing about 1/6 of its theoretical area to the east, another 1/6 to the south, and a third 1/6th to the west. It’s literally 1/2 of the standard size. That’s going to drive the population down – but it’s fertile farmland, which will push it up. But that’s getting ahead of ourselves.

    As an exercise, though, imagine that the town lower right wasn’t there. The one that’s on the edge of the swamp. Instead of ending at a point at the bottom, the border would probably have continued, including in the locus that small stand of trees and then following the rivers emerging from the swamp, and so including the really small stand of trees. The Locus wouldn’t stop until it got to the swamp itself. The locus would have extended east to the next river, in fact, encompassing forest and hills until reaching the East-road, which it would follow inwards until it joined the existing boundary. It would still have lost maybe 1/12th in the east, but it would have gained at least that much and probably more in the south, instead of losing 1/3. The locus would be 1 – 1/12 + 1/3 – 1/12 – 1/3 = 10/12 of normal instead of 1/2 of normal.

    5.8.1.2 Village Base Size

    If you look at the models, you will notice “Base Village” and a population count, and might be fooled into thinking that everything in that range is equally likely. It’s not.

    Take the French model – it lists the village size as 320-480.

    First, what’s the difference, high minus low? In this case, it’s 160. We need to divide that by 8 as a first step – which in this case is a nice, even, 20.

    Half of 20 is 10, and three times 10 is 30. Always round these UP.

    With that, we can construct a table:

        01-30 = 320
        31-40 = 321-350 (up by 30)
        41-50 = 351-380 (up by 30)
        51-60 = 381-400 (up by 20)
        61-70 = 401-420 (up by 20)
        71-75 = 421-430 (up by 10)
        76-80 = 431-440 (up by 10)
        81-85 = 441-450 (up by 10)
        86-90 = 451-460 (up by 10)
        91-95 = 461-470 (up by 10)
        96-00 = 471-480 (up by 10)

    I used Gemini to assist in validating various elements of this section, and it thought the “up by 30” terminology was confusing and should be replaced with something more formal.

    I disagree. I think the more colloquial vernacular will get the point across more clearly.

    It was also concerned – and this is a more important point – that GMs couldn’t implement this roll and the subsequent sub-table quickly. I disagree, once again – I’ve seen far more complicated constructions for getting precise population numbers than two d% rolls, especially since the same tables will apply to all areas within the Kingdom that are similar in constituents. Everywhere within a given zone, in fact, unless you deliberately choose to complicate that in search of precision.

    In general, you construct one set of tables for the entire zone – and can often copy those as-is for other similar zones as well. Maybe even for a whole Kingdom.

    The d% breakdown always uses the same percentages, and there are always 2 “up by 3 x 1/2-step” bands, 2 “up by 2 x 1/2-step” bands, and 6 “up by 1/2-step” bands – with the final one absorbing any rounding errors; in this example there aren’t any.

    We then construct a set of secondary tables by dividing our three (or four) increments by 10. In this case, 30 -> 3, 20 -> 2, 10 -> 1. And we apply the same d% breakdown in exactly the same way, but from a relative position:

    So:
        1/2 x 3 = 1.5, rounds to 2; 3 x 1.5 = 4.5, rounds to 5.
        1/2 x 2 = 1; 3 x 1 = 3.
        1/2 x 1 = 0.5, rounds to 1; 3 x 1 = 3.

    The “Up By 30” Sub-table reads:

        01-30 = +0
        31-40 = +5
        41-50 = +5+5 = +10
        51-60 = +10+3=+13
        61-70 = +13+3=+16
        71-75 = +16+2 = +18
        76-80 = +18+2 = +20
        81-85 = +20+2 = +22
        86-90 = +22+2 = +24
        91-95 = +24+2 = +26
        96-00 = +30 (up by whatever’s left).

    The “Up By 20” Sub-table:

        01-30 = +0
        31-40 = +3
        41-50 = +3+3 = +6
        51-60 = +6+2 =+8
        61-70 = +8+2=+10
        71-75 = +10+1 = +11
        76-80 = +11+1 = +12
        81-85 = +12+1 = +13
        86-90 = +13+1 = +14
        91-95 = +14+1 = +15
        96-00 = +20 (up by whatever’s left).

    The “Up By 10” Sub-table:

        01-30 = +0
        31-40 = +3
        41-50 = +3+3 = +6
        51-60 = +6+1 =+7
        61-70 = +7+1=+8
        71-75 = +8+1 = +9
        76-80 = +9+1 = +10
        81-85 = +0-1 = -1
        86-90 = -1-1 = -2
        91-95 = -2-1 = -3
        96-00 = -3-1 = -4

    Notice what happened when I ran out of room in the “+10”? The values stopped going up, and starting from +0, started going DOWN.

    It takes just two rolls to determine the Base Population of a specific village with sufficient accuracy for our needs within a zone.

    EG: Roll of 43: Main Table = 380, in an up-by-30 result. So we use the “Up By 30” Sub-table and roll again: 72, which gives a +18 result. So the Base population is 380+18=398.

    These results are intentionally non-linear.
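
    If you’d rather script the two rolls than read them off the page, here’s a minimal Python sketch of the lookup as I read it – the tables are hard-coded copies of the French main table and the “Up By 30” sub-table above (the other two sub-tables would be added the same way), and the rolls shown reproduce the worked example.

        import random

        # French main table: (max d% roll, band top value, band increment)
        MAIN_TABLE = [
            (30, 320, 0), (40, 350, 30), (50, 380, 30), (60, 400, 20),
            (70, 420, 20), (75, 430, 10), (80, 440, 10), (85, 450, 10),
            (90, 460, 10), (95, 470, 10), (100, 480, 10),
        ]

        # "Up By 30" sub-table: (max d% roll, bonus)
        UP_BY_30 = [
            (30, 0), (40, 5), (50, 10), (60, 13), (70, 16), (75, 18),
            (80, 20), (85, 22), (90, 24), (95, 26), (100, 30),
        ]

        def lookup(table, roll):
            """Return the entry for a d% roll (1-100) from a banded table."""
            for ceiling, *rest in table:
                if roll <= ceiling:
                    return rest
            raise ValueError("d% roll must be 1-100")

        main_roll, sub_roll = 43, 72               # the worked example above
        base, increment = lookup(MAIN_TABLE, main_roll)
        bonus = lookup(UP_BY_30, sub_roll)[0]      # increment tells you which sub-table to use
        print(base + bonus)                        # 380 + 18 = 398

        # for a random village instead:
        # main_roll, sub_roll = random.randint(1, 100), random.randint(1, 100)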

    Optional:

    If you want more precise figures, apply -3+d3.

    Or -6+d6.

    Or anything similar – though I don’t really think you should go any larger than -10+d10 – and I’d consider -8+2d6 first.

    I have to make it clear that this relates to the population of a specific village in a specific zone, not a generic one. For anything of the latter kind, continue to use the minimum base population. I just thought that it bookended the ‘real locus’ discussion. We had to have the former because it affects what terrain influences the town size and how much of it there is; the latter is just a bonus that seemed to fit.

    5.8.1.3 Village Demographics

    Let’s start by talking Demographics, both real-world and Fantasy-world.

    The raw population numbers are not as useful as numbers of families would be. But that’s incredibly complicated to calculate and there’s no good data – the best that I could get was a broad statement that medieval times had a child mortality rate (deaths before age 15) of 40-50%, an infant mortality rate (deaths before age 1) of 25-35%, and an average family size of 5-7 children.

    If we look at modern data, we get this chart:

    Source: Our World In Data, cc-by, based on data from the United Nations.

    I did a very rough-and-ready curve fitting in an attempt to exclude social and cultural factors and derive a basic relationship for what is clearly a straight band of results:

    Derivative work (see above), cc-by, extrapolating a relationship curve in the data

    …from which I extracted two data points: (0%,1.8) and (10%,5.6), which in turn gave me: Y = 0.38 X + 1.8, which can be restated, X = 2.63Y – 4.74. And that’s really more precision than this analysis can justify, but it gives a readout of child mortality for integer family sizes.

    Yes, I’m aware that the real relationship isn’t linear. But this simplified approximation is good enough for our purposes.

    That, in turn, gives me the following:

        Y = Typical Number Of Children,
        X = Overall Child Mortality Rate

        Y, X:
        1, -3%
        2, 0%
        3, 3%
        4, 5%
        5, 8%
        6, 11%
        7, 13%
        8, 16%
        9, 18%
        10, 21%
        11, 24%
        12, 26%

    …so far, so good.

    Next, I need to adjust everything for the rough data points that we have for medieval times, when bearing children was itself a mortality risk for the mothers.

    5-7 children, 40-50%

    so that gives me (5, 8, 40) and (7, 13, 50) – more useful in this case as (8, 40) and (13, 50) – which works out to Z = 2 X + 24.

        Z=Child Mortality, Medieval-adjusted

        Y, X, Z:
        1, -3%, 18%
        2, 0%, 24%
        3, 3%, 30%
        4, 5%, 34%
        5, 8%, 40%
        6, 11%, 46%
        7, 13%, 50%
        8, 16%, 56%
        9, 18%, 60%
        10, 21%, 66%
        11, 24%, 72%
        12, 26%, 76%

    But here’s the thing: realism and being all grim and gritty might work for some campaigns, but for most of us – no. What we need to do now is apply a “Fantasy Conversion” which contains just enough realism to be plausible and replaces the balance with optimism.

    I think dividing Z (the medieval-adjusted child mortality rate) by 3 sounds about right – YMMV. That gives me the F values below – but I also checked on a ratio of 2.5, which gives me the F2 values.

    Gemini suggested using 3.5 or 4 for an even ‘softer’ mortality rate, and 2.25 or 2 for a grittier one.

    In principle, I don’t have a problem with that – and part of the reason why I’m not just throwing the mechanics at you, but explaining how they have been derived, is so that GMs can use alternate values if they think them appropriate to their specific campaigns.

    I don’t just want to feed the hungry, I want to teach them to fish, to paraphrase the old proverb.

        F= Fantasy Adjusted Child Mortality Rate
        F2 = more extreme Child Mortality Rate

        Y, X, Z, F, F2:
        1, -3%, 18%, 6%, 7%
        2, 0%, 24%, 8%, 10%
        3, 3%, 30%, 10%, 12%
        4, 5%, 34%, 11%, 14%
        5, 8%, 40%, 13%, 16%
        6, 11%, 46%, 15%, 18%
        7, 13%, 50%, 17%, 20%
        8, 16%, 56%, 19%, 22%
        9, 18%, 60%, 20%, 24%
        10, 21%, 66%, 22%, 26%
        11, 24%, 72%, 24%, 29%
        12, 26%, 76%, 25%, 30%
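
    If you want to regenerate this table with your own divisors (or family sizes beyond 12), here is a minimal Python sketch of the chain of calculations above – the floor and round choices are my reading of how the rounding was done:

        import math

        def child_mortality_chain(children):
            """Return (X, Z, F, F2) percentages for a typical family size."""
            x = math.floor(2.63 * children - 4.74)  # modern-world linear fit
            z = 2 * x + 24                          # medieval adjustment: Z = 2X + 24
            f = round(z / 3)                        # fantasy conversion (divide by 3)
            f2 = round(z / 2.5)                     # grittier conversion (divide by 2.5)
            return x, z, f, f2

        for y in range(1, 13):
            x, z, f, f2 = child_mortality_chain(y)
            print(f"{y}, {x}%, {z}%, {f}%, {f2}%")

    Swap the 3 and 2.5 for 3.5 / 4 (softer) or 2.25 / 2 (grittier), as discussed above, if those better suit your campaign.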

    I think the F values are probably more appropriate for High Fantasy, while the F2 are better for more typical fantasy – but you’re free to use this information any way you like, the better to suit your campaign world.

    You might decide, for example, that averaging the Medieval Adjusted Values with the F2 is ‘right’ – so that 5 children would indicate (40+16)/2 = 28% mortality.

    Social values can also adjust these values – traditionally, that means valuing male children more than females. But in Fantasy / Medieval game settings, I think that would be more than counterbalanced, IF it were a factor, by the implied increased risks from youthful adventuring. In a society that practices such gender-bias, it would not surprise me if the ultimate gender ratio was 60-40 or even 70-30 – in favor of Girls.

      5.8.1.3.1 Maternal Survival

      The next element to consider is the risk of maternal death in childbirth. That’s even harder to pin down data on, but 1-3% per child is probably close to historically accurate. Balanced against that are the greater risks from adventuring, and the availability of clerical healing. So I’m extending the table to cover 4, 5, and 6%, but you are most likely to want the values in the first columns. To help distinguish these extreme possibilities from the usual ones, they have been presented in Italics.

      We’re not so much interested in the number of cases where it happens as in the number of cases where it doesn’t – the % of families with living mothers, relative to the number of children.

          Y, @1, @2, @3, @4, @5, @6:
          1, 99%, 98%, 97%, 96%, 95%, 94%
          2, 98.0%, 96.0%, 94.1%, 92.2%, 90.3%, 88.4%
          3, 97.0%, 94.1%, 91.3%, 88.5%, 85.7%, 83.1%
          4, 96.1%, 92.2%, 88.5%, 84.9%, 81.5%, 78.1%
          5, 95.1%, 90.4%, 85.9%, 81.5%, 77.4%, 73.4%
          6, 94.1%, 88.6%, 83.3%, 78.3%, 73.5%, 69.0%
          7, 93.2%, 86.8%, 80.8%, 75.1%, 69.5%, 64.8%
          8, 92.3%, 85.1%, 78.4%, 72.1%, 66.3%, 61.0%
          9, 91.4%, 83.4%, 76.0%, 69.3%, 63.0%, 57.3%
          10, 90.4%, 81.7%, 73.7%, 66.5%, 59.9%, 53.9%
          11, 89.5%, 80.1%, 71.5%, 63.8%, 56.9%, 50.6%
          12, 88.6%, 78.5%, 69.4%, 61.3%, 54.0%, 47.6%

      The method of calculation is 100 x ( 1 – [D/100] ) ^ Y, where D is the per-birth mortality percentage – just in case you want to use different rates than these.
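
      As a minimal Python version of that formula, in case you want to extend the table to other rates or family sizes:

          def maternal_survival_pct(children, per_birth_mortality_pct):
              """% of mothers surviving all births: 100 x (1 - D/100)^Y."""
              return 100 * (1 - per_birth_mortality_pct / 100) ** children

          print(round(maternal_survival_pct(6, 2), 1))   # 88.6, matching the table above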

      There does come a point at which the likelihood of maternal death begins to limit the size of the average family, though, and I think the 6% values are getting awfully close to that mark.

      Let’s say that a couple have 6 children, right in the middle of the historical average. If the mother falls pregnant a 7th time, then at 6% per birth her cumulative chance of having died in childbirth by that point is roughly 1 in 3 (and there’s a fair risk of the child perishing with her). If that happens, she HAS no more children. But if she beats those odds to have 7 children, her chances are even worse when it comes to child #8, and so on.

      Of all the cases with a mother who survived childbirth, we then need to factor in death from all other causes – monsters and adventuring and mischance and so on. Fantasy worlds tend to be dangerous, so this could be quite high – maybe as much as 5% or 10% or 20%. So multiply the living mothers by 0.8 – or 0.9, or 0.7, whatever you consider appropriate – to allow for this.

      This rural community is obviously alongside a major river or coastline – the proximity of the mountains suggests the first, but isn’t definitive. The name offers a clue: ‘hallstatt’, which to me sounds Germanic, and suggests that the waterway may be the Rhine. Or not, if I’ve misinterpreted. Image by Leonhard Niederwimmer from Pixabay

      5.8.1.3.2 Paternal Survival

      The result is the % of families with a surviving mother. So how many surviving fathers are there per surviving mother? Estimates here vary all over the shop, and more strongly reflect social values. But if I’m suggesting 5% – 20% mortality for mothers from other sources, the same would probably be reasonably true of fathers – if those social values don’t get in the way.

          0.95 x 0.95 = 90.25%.
          0.9 x 0.9 = 81%.
          0.85 x 0.85 = 72.25%
          0.8 x 0.8 = 64%.

      Those values give the percentages in which both parents have survived to the birth of the average number of children.

      If you’re using 10% mortality from other causes, then in 90% of cases in which the mother has died, the father has survived. But in 10% of the cases in which the mother has succumbed, the children are orphaned by the loss of the other parent.

      The higher this percentage, the higher the rate of survivors remarrying and potentially doubling the size of their households at a stroke. And that will distort the average family size far more quickly than the actual mortality percentages, unless there is some social factor involved – maybe it’s expected that parents with children will only marry single adults without children, for example.

      The problem with this approach is that if it’s the mother who is remarrying, this puts her right back on that path to mortality through childbirth; the child-count ‘clock’ does not get reset. If it’s a surviving father marrying a new and childless wife, it DOES reset, because the new mother has not had children previously.

      In a society that permits such actions, there is a profound dichotomy at its heart that favors larger families for husbands who survive while placing mothers who survive at far greater risk of the family becoming a burden to the community – which is likely to change that social acceptance. Paradoxically, a double standard is what’s needed to give both parents a more equal risk of death, and a more equal chance of surviving.

      5.8.1.3.3 Childless Couples

      Next, let’s think about the incidence of Childless Couples. We can state that there’s a given chance of pregnancy in any given year of marriage; but once it happens, there is just under a full year before that chance re-emerges.

          Year 1: A% -> 1 child born
          Year 2: (100-A) x A% -> 1 child born, A%^2 -> 2 children born
          Year 3: (100-A)^2 x A% -> 1 child born, (100-A) x A% -> 2 children born, A^3% -> 3 children born

      … and so on.

      This quickly becomes difficult to calculate, because each row adds 1 to the number of columns, and it’s easy to lose track.

      But here’s the interesting part: we don’t care. To answer this question, there’s a far simpler calculation.

      In any given year, there will be B couples married. (100-A)% of them will not have children in the course of that year. If we specify B as the average, rather than as a value specific to a given year, then the year before we will also have had B couples marry, and (100-A)% of them without children at the end of that year – which means that in the course of the second year of marriage, A% will have children and stop being counted in this category, and (100-A)% will not, and will still count.

      Adding these up, we get (100-A)% + (100-A)%^2 + …. and so on. And these additions will get progressively and very rapidly smaller.

      Let’s pick a number, by way of example – let’s try A=80%, just for the sake of argument.

      We then get 20% + 4% + 0.8% + 0.16% + 0.032% + 0.0064% … and I don’t think you’d really need to go much further, the increases become so small. I pushed on one more term (0.00128%) and got a total of 24.99968%. I pushed further with a spreadsheet, and not even 12 years was enough to cross the 25% mark – but it was getting ever closer to it. Close enough to say that for A=80, there would be 25 childless couples for every… how many?

      The answer to that question comes back to the definition of A: it’s the number of couples out of 100 who have a child in any given year. So, over 12 years, that’s a total of 1200 couples. And 25 / 1200 = 2.08%.

      I did the math – cheating, I used a spreadsheet – and got the following, all out of 1200 couples:

          A%, C, [C rounded]
          80%, 25, 25
          75%, 33.33, 33
          70%, 42.86, 43
          65%, 53.85, 54
          60%, 66.67, 67
          55%, 81.81, 82
          50%, 99.98, 100
          45%, 122.13, 122
          40%, 149.67, 150
          35%, 184.66, 185
          30%, 230.10, 230
          25%, 290.50, 291
          20%, 372.51, 373
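
      Here’s a minimal Python sketch of that spreadsheet – it just sums (100-A)% raised to successive powers over 12 year-cohorts, and reproduces the C column above:

          def childless_couples(a_pct, years=12):
              """Still-childless couples out of (100 x years) marriages,
              given an annual chance a_pct% of a couple having a child."""
              q = 1 - a_pct / 100
              return 100 * sum(q ** year for year in range(1, years + 1))

          for a in range(80, 15, -5):
              print(f"{a}%: {childless_couples(a):.2f} out of {100 * 12}")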

      But that has to mean that the rest of those 1200 couples have to have children – and the number of children will approach the average number that you chose.

      So if you pick a value for A, you can calculate exactly how many childless couples there are relative to the number of families with children:

          A=45%, C=122:

          1200-122 = 1078
          1078 families with children, 122 childless couples
          1078 / 122 = 8.836
          8.836 + 1 = 9.836
          so 1 in 9.836 families will be childless couples.

      5.8.1.3.4 Unwed Singles

      The social pressure to marry has varied considerably through the ages, but the greater the dangers faced by the community, the greater this pressure is going to be. And the fitter and healthier you are, the more this pressure is going to be amplified.

      This is inescapable logic – the first duty of any given generation in a growing society is to replace the population who have passed away, and it takes a long time to turn children into adults.

      You could calculate the average lifespan, deduct the age of social maturity, and state that society frowns heavily on unwed singles above that age, with the social pressure growing as the individual approaches that age with every passing year – and that would be a valid approach.

      The problem is that the average lifespan is complicated by those high rates of childhood death, and trying to extract that factor becomes really complicated and messy. And then you throw in curveballs like Elves and Dwarves, with their radically different lifespans and the whole thing ends up in a tangled mess.

      So, I either have to pull a mathematical rabbit out of my hat, or I do the sensible thing and get the GM to pick a social practice and do my best to make it an informed choice.

      While a purely mathematical approach is possible, the more that I looked at the question, the more difficult it became to factor every variable into the equation.

      Want the bare bones? Okay, here goes.

      For a given population, P, there are B marriages a year, removing B x 2 unwed individuals from the population. We can already extract the count of those who are ineligible for marriage due to age, because they are all designated as children.

      We can subtract the quantity of childless couples who are already wed in a similar fashion to the calculations of the previous subsection.

      The end result is the number of unwed singles of marriageable age who have not married. Setting P at a fixed value – say 100 people – we can then quickly determine the number of unmarried singles.

      What ultimately killed this approach was that it was – in the final analysis – using a GM estimate of B as a surrogate for getting the GM to estimate the % of singles in their community – and doing so in a manner that was less conducive to an informed choice, and requiring a lot of calculations to end up with the number that they could have directly estimated in the first place.

      Nope. Not gonna work in any practical sense.

      So, instead, let’s talk about the life of the social scene – singles culture. There is still going to be all that social pressure to marry and contribute to the population, especially if you are an even half-successful adventurer, because that makes you one of the healthiest, wealthiest, and most prosperous members of the community.

      It can be argued that instead of using the average lifespan (with all its attendant problems) and deducting the age of maturity (i.e. the age at which a child becomes an adult) to determine at what age a couple have to start having children in order to keep the population at least stable (you need two children for that, since there are two adults involved, and you need to take that child mortality rate into consideration, dividing those 2 by the survival rate and rounding up), you should use the age of the mother as a factor in the rise of maternal mortality during childbirth, and work back from that age. In modern times, that’s generally somewhere in the thirties, maybe up to 40. That doesn’t mean that older women can’t have children, just that under these circumstances, the risks of dying before you have enough offspring are considered too high by the general culture.

      But what does that really get you? There’s always going to be some age at which the pressure to wed starts to grow. Shifting it this way or that by a couple of years won’t change much.

      Looking at it from the reverse angle – how much single life will society tolerate – can be far more useful.

      I would suggest a base value of a decade. Ten years to be an adventurer and live life on the edge.

      In high-danger societies, especially with a high mortality rate, that might come back 2 or 3 years; at its most extreme, 5. That’s all the time you have to focus on becoming a professional who is able to support a family, or at least to set your feet firmly on that path.

      In low-danger societies, especially those with a lower mortality rate, it might get pushed out a few years, maybe even another 5. That’s enough time that you can sow some wild oats and still settle down into someone respectable within the community.

      How long is the typical apprenticeship? In medieval times? In your fantasy game-world? From the real world, I could bandy about numbers like 4 years, or 5 years, or 5 years and 5 more learning on the job, or repaying debts to the master that trained you. And you end up with the same basic range – 5-15 years.

      What is the age of maturity in your world? Again, I could throw numbers around – 18 or 21 seem to be the most common in modern society, but 16 (even 15) has its place in the discussion – that’s how old you had to be back when I was younger before you could leave school and pursue a trade, i.e. becoming an apprentice. But I have played in a number of games where apprenticeships started at eight, or twelve, and lasted a decade – and THEN you got to start repaying your mentor for the investment that he’s made in you. With interest.

      Does there come a point where people are deemed anti-social because they have not married, and find their prospects of attracting a husband or wife diminishing as a result? Don’t say it doesn’t happen, because there is plenty of real-life evidence that it’s there as a social undercurrent – one that shifts, and sometimes intensifies or weakens, without any real understanding of the factors that drive the phenomenon. But forget the real world and think about the game-world.

      How optimistic / positive is the society? How grim and gritty?

      Think about all these questions, because they all provide context to the basic question: What percentage of the population are unwed with no (official) children?

      Here’s how I would proceed: Pick a base percentage. For every factor you’ve identified that gives greater scope for personal liberty, add 2%. For every factor that demands the sacrifice of some of that liberty, from society’s point of view, subtract 2%. In any given society, there are likely to be a blend of factors, some pushing the percentage up, and some down – but in more extreme circumstances, they might all factor up or down. If you identify a factor as especially weak, only adjust by 1%; if you judge a factor as especially strong, adjust by 3 or even 4%.

      In the end, you will have a number.

      Let me close out this section with some advice on setting that base percentage.

      There are two competing and mutually-exclusive trains of thought when it comes to these base values. Here’s one:

      ▪ In positive societies, low child mortality means fewer young widows/widowers. The society is more stable, allowing for strong family formation and early marriage. Base rate is low.

      ▪ In moderate societies, dangers still disrupt family units, leading to a moderate rate of single, adult households. Base rate is moderate.

      ▪ In dangerous societies, high death rates mean many broken families, orphans, and single parents. The number of adult individuals living outside a stable family unit is maximized. Base rate is high.

      Here’s the alternative perspective:

      ▪ Positive societies produce less social pressure and greater levels of personal freedom, reducing the rate of marriage and increasing the capacity for unwed singles. Base rate is high.

      ▪ Moderate societies have a positive social pressure toward marriage at a younger adult age, and less capacity for personal liberty. Base rate is moderate.

      ▪ Societies that swarm with danger have a higher death rate, and there would be more social pressure to marry very young to create population stability; the alternative leads to social collapse and dead civilizations. Base rate is low.

      What’s the attitude in your game world? They are all reasonable points of view.

      In a high-fantasy / positive social setting, I would start with a base percentage of 22%. Most factors will tend to be positive, so you might end up with a final value of 32% – but there can be strains beneath the surface, which could lead to a result of 12% in extreme cases.

      In a mid-range, fairly typical society, I would employ a base of 27%. If there are lots of factors contributing to a high singles rate, this might get as high as 37%, and if there are lots of negatives, it might come down to 17% – but for the most part, it will be somewhere close to the middle.

      In an especially grim and dark world, I would employ a base of 33%, in the expectation that most factors will be negative, and lead to totals more in the 23-28% range. But if social norms have begun to break down, social institutions like marriage can fall by the wayside, and you can end up with an unsustainable total of 40-something percent.

      Anything outside 20-35% should be considered unsustainable over the long run. Whatever negative impacts can apply will be rife.
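
      If you want to formalize that base-plus-adjustments procedure, here’s a minimal sketch in Python – the tone labels and the example factors are purely hypothetical illustrations:

          BASE_PCT = {"high fantasy": 22, "typical": 27, "grim and dark": 33}

          def unwed_singles_pct(tone, factors):
              """factors: +2 per liberty-enhancing factor, -2 per restrictive one;
              use +/-1 for weak factors and +/-3 or 4 for especially strong ones."""
              return BASE_PCT[tone] + sum(factors)

          # hypothetical example: a typical society with two mild liberties
          # and one strong pressure toward early marriage
          print(unwed_singles_pct("typical", [+2, +2, -3]))   # 28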

      5.8.1.3.5 Population Breakdown

      That’s the final piece of the puzzle – with that information, you can assess the four types of ‘typical families’, plus childless couples and unwed singles, and their relative frequency:

          # Children with no parents,
          # Children with mothers but no fathers,
          # Children with fathers but no mothers, and
          # Children with two parents.
          # Childless Couples
          # Unwed Singles

      Get the total size of each of these family units / households* in number of individuals, multiply that size by the frequency of occurrence, add up all the results, and convert them to a percentage and you have a total population breakdown. Average the first five and you have the average family size in this particular region and all similar ones.

      Multiply each frequency of occurrence by the village population total (rounding as you see fit), and you get the constituents of that village.

      * I have never liked the use of the term ‘households’ in a demographic context, even though that seems to be the most commonly preferred term these days. I’ve lived in a number of shared accommodations as a single over the years, and that experience muddies what’s intended to be a clearer understanding of the results. If you have 50 or 100 singles living in a youth hostel, are they one household or 50-100? Families – nuclear or non-nuclear – is, for me at least, the clearer, more meaningful term.
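
      To turn those frequencies into an actual head-count, something like this minimal Python sketch will do; every frequency and household size below is a placeholder for illustration, not a recommendation:

          # (relative frequency, average individuals per household) - placeholders only
          HOUSEHOLDS = {
              "children, no parents":   (0.02, 3),
              "children, mother only":  (0.08, 4),
              "children, father only":  (0.05, 4),
              "children, two parents":  (0.55, 6),
              "childless couples":      (0.10, 2),
              "unwed singles":          (0.20, 1),
          }

          village_population = 398   # from the earlier worked example

          total_weight = sum(freq * size for freq, size in HOUSEHOLDS.values())
          for name, (freq, size) in HOUSEHOLDS.items():
              share = freq * size / total_weight          # share of total population
              print(f"{name}: {share:.1%} of the population, "
                    f"about {round(share * village_population)} people")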

      5.8.1.3.6 The Economics Of The Demographics

      In modern times, it’s not unusual for two adults and even multiple children all to have different occupations for different businesses all at the same time. Some kids start as paper boys and girls at a very young age. Even five year olds with Lemonade stands count in this context.

      Go back about 100 years and that all changes. There is typically only one breadwinner – with exceptions that I’ll get to in a moment – and while some of them will have their own business (be it retail or in a service industry), most will be working for someone else.

      There will be a percentage who have no fixed employment and operate as day labor.

      Going into Victorian times, we have the workhouses and poorhouses, where brutal labor practices earn enough for survival but little more. While some were profitable for the owners, most earned less than they cost, and relied on charitable ‘sponsorship’ from other public institutions – sometimes governments, more often religious congregations. These are the exceptions that I mentioned. This is especially true where the father has deserted the family or died (often in war) leaving the mother to raise the children but unable to do so because of the gender biases built into the societies of the time.

      Go back still further, and it was a matter of public shame for a woman to work – with but a few exceptions such as midwifery. Nevertheless, they often earned supplemental income for the families with craft skills such as sewing, knitting, and needlework.

      The concept that the male was the breadwinner only gets stronger as you pass backwards through history.

      Fantasy games are usually not like that. They see the world from the modern perspective and force historical reality to conform to it. In particular, gender bias is frequently and firmly excluded from fantasy societies.

      The core reasoning is that characters and players can be of either gender (or any of the supplementary gender identifications) and the makers of the games don’t wish to exclude potential markets with discomforting historical reality.

      There are a few GMs out there who intentionally try to find an ‘equal but distinct’ role for females and others within their fantasy societies; it’s difficult, but it can be done – and it usually happens by excluding common males from segments of the economy within the society. If there are occupations that are only open to women, and occupations of equal merit (NOT greater merit) that are only open to men, you construct a bilateral society in which two distinct halves come together to form a whole.

      But it would still be unusual for a single household to have multiple significant breadwinners; you had one principal earner and zero or more supplemental incomes ‘on the side’.

      Businesses were family operations in which the whole family were expected to contribute in some way, subject to needs and ability.

      And that’s the fundamental economic ‘brick’ of a community – one income per family, whether that income derives as profits from a business or from labor in someone else’s business.

      You can use this as a touchstone, a window into understanding the societies of history, all the way back into classical times – who earned the money and how? In early times, it might be that you need to equate coin-based wealth with an equivalent value in goods, but once you start thinking of farm produce or refined ore as money, not as goods, the economic similarities quickly reveal themselves.

      So that is also the foundation of economics in this system. One family, one income (plus possible supplements). In fact, there were periods in relatively recent history in which the supplementary income itself was justification for marriage and children.

      In modern times, we evaluate based on the reduction of expenses; this is because the usage (and hence cost) of most of our utilities doesn’t rise as fast as the number of people using them. (This goes back to the muddying concept of ‘households’: if two people are sharing the costs, both have more left over to spend because the costs per person have gone down; if they are NOT sharing expenses, each providing fully for themselves, then they are two ‘households’, not one. It also helps to think of rent as a ‘utility’ in this context.)

      But that’s a very modern perspective, and one that only works with the modern concept of ‘utilities’ – electricity, gas, and so on. Go back before that, into the pre-industrial ages, and the perspective changes from one of diminishing liabilities into one of growth of potential advantages. And having daughters who could supplement the household income by working as maids or providing craft services gave a household an economic advantage.

      5.8.1.3.7 An Economic Village Model

          8 a^2 = b^2 – c^2.

      Looks simple, doesn’t it? In fact, it is oversimplified – the reality would be

          a^d = (b^e – c^f ) / g,

      but that’s beyond my ability to model, and too fiddly for game use.

      a = the village’s profitability. Some part of this may show up as public amenities; most of it will end up in the pockets of the broader social administration, in whatever form that takes.

      b = the village’s productivity, which can be simplified to the number of economic producers in the village. You could refine the model by contemplating unemployment rates, but the existence of day laborers whose average income automatically takes into account days when there’s no work to be found, means that we don’t have to.

      c = the village’s internal demand for services and products. While usually less than production, it doesn’t have to be so. But it’s usually close to b in value.

      To demonstrate the model, let’s throw out figures of 60 and 58 for b and c.

          8 a^2 = 60^2 – 58^2 = 3600 – 3364 = 236.
          a = (236 / 8)^0.5 = 29.5^0.5 = 5.43

      The village grows. b rises to 62. c rises to 59.

          8 a^2 = 62^2 – 59^2 = 3844 – 3481 = 363.
          a = (363 / 8)^0.5 = 45.375^0.5 = 6.736.

      It has risen – but not by very much.
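
      Here’s the same model as a minimal Python function; it reproduces both of the calculations above:

          import math

          def village_profitability(producers, internal_demand):
              """a from 8 a^2 = b^2 - c^2, the simplified economic model above."""
              return math.sqrt((producers ** 2 - internal_demand ** 2) / 8)

          print(round(village_profitability(60, 58), 2))    # 5.43
          print(round(village_profitability(62, 59), 3))    # 6.736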

      Things become clearer if you can define c as a percentage of b:

          a^2 = b^2 – (D x b^2) / 100
          100 a^2 = 100 b^2 – D x b^2 = b^2 x (100-D)

      If 98% of the village’s production goes to maintaining and supporting the village, then only 2% is left for economic growth. If the village adds more incomes, demand rises by the normal proportion as well – so economic growth rises, but quite slowly. In the above example calculations, 59/62 = 95.16% going to support the village – and 95% is about as low as it’s ever going to realistically go. In exceptionally productive years, it might be as low as 66.7%, but most years it’s going to be much higher than that.

      Side-bar: 5.8.1.3.7.1 Good Times

      You can actually model how often an exceptional year comes along, by making a couple of assumptions. First, if 66.7 is as good as they get, and 95 is as bad as an exceptionally good year gets, then the average ‘exceptional year’ will be 80.85%.

      Second, if 95% is as good as a typical year gets, and 102% is as bad as a typical year gets, then the average ‘normal’ year will be 98.5%.

      Third, if the long term average is 95.16%, then what we need is the number of typical years needed to raise the overall average (including one exceptional year) to 95.16%.

          95.16 x (n+1) = 80.85 + (n x 98.5)
          95.16 x n + 95.16 = 80.85 + 98.5 x n
          (95.16 – 98.5) x n = 80.85 – 95.16
          3.34 n = 14.31
          n = 14.31 / 3.34 = 4.284.

          4-and-a-quarter normal years to every 1 good year.

      You can go further, with this as a basis, and make the good years better or worse so that you end up with a whole number of years.

          95.16 x (5 +1) = g + 5 x 98.5
          g = 95.16 x 6 – 98.5 x 5
          g = 570.96 – 492.5 = 78.46.

      That’s a six-year cycle with one good year averaging 78.46% of productivity sustaining the village and five typical years in which 98.5% of productivity is needed for the purpose.
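
      The same rearrangement as a minimal Python function, if you want to try different cycle lengths or long-term averages:

          def good_year_support(long_term_avg, normal_year, normal_years_per_cycle):
              """Solve avg x (n+1) = g + n x normal for g, the share of a good
              year's production that goes to supporting the village."""
              n = normal_years_per_cycle
              return long_term_avg * (n + 1) - normal_year * n

          print(round(good_year_support(95.16, 98.5, 5), 2))   # 78.46, the six-year cycle above
          print(round(good_year_support(95.16, 98.5, 4), 2))   # a five-year cycle instead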

      I grew up on the land, and I can tell you that an industry is thriving if one year out of 10 is really good; an industry is marking time if one year out of 20 is good, and in trouble if one year in 25 or less is really profitable. One year in six is a boom.

      So to close out this sidebar, let’s look at what those numbers equate to in overall economic productivity for the rural population that depend on them:

          Boom: (1 x 78.46 + 5 x 98.5) / 6
              = (78.46 + 492.5) / 6
              = 570.96 / 6
              = 95.16%
              (we already knew this but it’s included for comparison)

          Thriving: (1 x 78.46 + 9 x 98.5) / 10
              = (78.46 + 886.5) / 10
              = 964.96 / 10
              = 96.496

          Stable, Marking Time: (1 x 78.46 + 19 x 98.5) / 20
              = (78.46 + 1871.5) / 20
              = 1949.96 / 20
              = 97.498

          In trouble / in economic decline: (1 x 78.46 + 24 x 98.5) / 25
              = (78.46 + 2364) / 25
              = 2442.46 / 25
              = 97.6984

      Look at the differences, and how thin the lines are between growth and stagnation.

          Stable to In Decline: 0.2004% change.
          Stable to Thriving: 1.002% change.
          Thriving to Booming: 1.336% change.
          Booming to In Decline: 2.5384% change.

      The whole boom-bust cycle – and it can be a cyclic phenomenon – is contained within 2.54% difference in economic activity.

      An aside within an aside shows why:

          Boom: 95.16% = 0.9516;
          0.9516 ^ 6 = 0.74255;
          so 25.74% productivity goes into growth.

          Thriving: 96.496% = 0.96496;
          0.96496 ^ 6 = 0.8073;
          so 19.27% productivity goes into growth over the same six-year period.

          Stable: 97.498% = 0.97498;
          0.97498 ^ 6 = 0.859;
          14.1% of productivity goes into growth over the same six-year period.

          Declining: 97.6984% = 0.976984;
          0.976984 ^ 6 = 0.8696;
          13.04% of productivity goes into growth.
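
      And the compounding calculation from this aside, as a minimal Python sketch:

          def growth_over_cycle(avg_support_pct, years=6):
              """Fraction of a cycle's production left for growth, if
              avg_support_pct % of each year's production goes to upkeep."""
              return 1 - (avg_support_pct / 100) ** years

          for label, pct in [("Boom", 95.16), ("Thriving", 96.496),
                             ("Stable", 97.498), ("Declining", 97.6984)]:
              print(f"{label}: {growth_over_cycle(pct):.2%} into growth")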

      Every homeowner sweats a 0.25% change in interest rates because they compound, snowballing into huge differences. This is exactly the same thing.

    5.8.1.4 The Generic Village

    The generic village is perpetually dancing on a knife-edge, but the margins are so small that it’s trivially easy to overcome a bad year with a better one. Even a boom year doesn’t incite a lot of growth, but a lot of factors pulling together over a very long time can.

    Some villages won’t manage to escape the slippery slope long enough and will decline into Hamlets, but find stability at this smaller size. Given time, disused buildings will be torn down and ‘robbed’ of any useful construction material because that’s close to free, and that alone can make enough of a difference economically. With the land reclaimed, after a while you could never tell that it once was a village.

    Some won’t be able to arrest their decline – whatever led to their establishment in the first place either isn’t profitable enough, or too much of the profits are being taken in fees, tithes, greed, and taxes. They decline into Thorpes.

    In some cases, communities exist for a single purpose; they never grew large enough to even have permanent structures. They are strictly temporary in nature (though one may persist for dozens of years or more); they are forever categorized as Mining or Logging Camps.

    Other villages have more factors pushing them to growth, and once they reach a certain size, they can organize and be recognized as a town. And some towns become cities, and some cities become a great metropolis.

    With each change of scale, the services on offer to the townsfolk, and the services on offer to the traveler passing through, increase.

    The fewer such services there are, the more general and generic they have to become, just to earn enough to stay in operation.

    The general view of a generic village is that most services exist purely for the benefit of the locals, but a small number of operations will offer services aimed at a temporary target market, the traveler. These services are often more profitable but less reliable in terms of income, more vulnerable to changes in markets. They don’t tend to be set up by existing residents; instead, they are founded by a traveler who settles down and joins a community because they see an economic opportunity.

    That means that the number of such services on offer is very strongly tied to the growth of the village, to the overall economic situation of the Kingdom as a whole, and to the local Region of which this village is a part.

    Here’s another way to look at it: The reason so much of the village’s economic potential goes into maintaining the village is because of all those tithes and taxes and so on. Some of those will be based on the land in and around the village; some on the productivity of that land; and some of it on the size and economic activity of the village. The rest provides what the village needs to sustain its population and keep everything going. There’s not a lot left – but any addition to the bottom line that isn’t eroded away by those demands makes the village and the region more profitable, creating more opportunities for sustained growth. Again, there is a snowball effect.

    Some villages – and this is a social thing – don’t want the headaches and complications of growth; they like things just the way they are. They will have local rules and regulations designed to limit growth by making growth-producing business opportunities less attractive or compelling. Others desperately want growth, and will try to make themselves more attractive to operations that encourage it.

    That divides villages into two main categories and a number of subcategories.

    Main Category: Villages that encourage growth
         Subcategory: Villages that are growing
         Subcategory: Villages that are not growing
         Subcategory: Villages that are being left behind, and declining.
    Ratios: 40:40:20, respectively.

    Main Category: Villages that are discouraging growth despite the risk of decline
         Subcategory: Villages that are growing and can only slow that growth
         Subcategory: Villages that have achieved stability
         Subcategory: Villages that have or are declining.
    Ratios: 20:40:40, respectively.

    5.8.1.5 Blended Models

    In general, the rule is one zone, one model. In fact, as a general rule, your goal should be one Kingdom, one model – that way, if you choose “England” as your model, your capital city will resemble London in size and characteristics, and not, say, Imperial Rome.

    But, if you can think of a compelling enough reason, there’s no reason not to blend models. There are lots of ways to do this.

    The simplest is to designate one model for part of a zone, and another to apply to the rest.

    Example: if your capital city were much older than the rest of the Kingdom, you might decide that for IT ALONE, the Imperial model might be more appropriate, while the rest of the Kingdom is England-like. Or you might decide that because of its size, it has sucked up resources that would otherwise grow surrounding communities more strongly, and declare a three-model structure: Imperial Capital, France for all zones except zone 1, and England for the rest of Zone 1.

    Example: A zone contains both swamp and typical agricultural land. You decide that those parts that are Swamp are German or Frontier in nature, while the rest are whatever else you are using.

    An alternative approach to the problem that works in the case of the latter example is to actually average the two models’ characteristics and apply the result either to just the swamp areas, or to the zone overall.

    When you get right down to it, the models are recommendations and guidelines, describing a particular demographic pattern seen in Earth’s history. There’s absolutely nothing to prevent you from inventing a unique one for a Kingdom in your world – except for it being a lot of work, that is.

    5.8.1.6 Zomania – An Example

    I don’t really think that a fully-worked example is actually necessary at this point, but I need to have one up-to-date and ready to go for later in the article. So it’s time for another deep-dive into the Kingdom of Zomania.

    5.8.1.6.1 Zone Selection

    I’ll start by picking a couple of Zones that look interesting, and distinctive compared to each other.

    Zone 7 is bounded by a major road, but doesn’t actually contain that road; it DOES have capacity for a lot of fishing, though. And I note that there are cliffs in the zones to either side of it, so they WON’T support fishing – in fact, those cliffs appear to denote the limits of the zone. Zone 7 adds up to 167.8 units in area, and features 26 units of pristine beaches.

    Zone 30 has an international border, and a major road, lots of forest and foothills becoming mountainous. It’s larger than Zone 7, at 251.45 units.

    Because I haven’t detailed these areas at all, the place that I have to start is back in 5.7.1.13. But first…

    5.8.1.6.1.1 Sidebar: Anatomy Of A Fishing Locus

    I was going to bring this up a little later, but realized that readers need to know it, now.

    Coastal Loci are a little different to the normal. To explain those differences, I threw together the diagram below.

    1. A coast of some kind. It might not be an actual beach, but it’s flat and meets the water.

    2. It’s normal, especially if there’s a beach, for the ends to be ‘capped’ with some sort of headland. This is often rocky in nature. This is the natural location for expensive seaside homes and lighthouses.

    3. Fishing villages.

    4. Water. It could be a lake, or the sea, or even a river if it’s wide enough.

    5. Non-coastal land, usually suitable for agriculture.

    6. A fishing village’s locus is compressed along the line of the coast and bulging out into the water. This territory produces a great deal more food than the equivalent land area – anywhere from 2-5 times as much. Some cultures can go beyond coastal fishing, doubling this area – though what’s further out than shown is generally considered open to anyone from this Kingdom. Beyond that, some cultures can Deep-Sea fish (if this is the sea), which quadruples the effective area again. If you’re keeping track, that’s 2-5 x 2 x 4 = 16-40 times the land area equivalent. The axis of the locus is always as perpendicular to the coast as possible.

    7. The bottoms of the lobes are lopped off…

    8. And the land equivalent is then found by ‘squaring up’ the loci…

    9. …which means that these are the real boundaries of the locus. The area stays roughly the same, though.

    The key point is this: you don’t have to choose “Coastal Mercantile” to simulate living on the coast and fishing for food. There are mechanisms already built into the system for handling that – it’s all done with Terrain and a more generous interpretation of “Arable Land”.

    Save the “Coastal Mercantile” Model for islands and coastal cultures whose primary endeavor is water-based trade.

    Zone 7, then, should have the same Model as all the other farmland within the Kingdom. I think France is the right model to choose.

    Zone 30 is a slightly more complicated story. For a start, don’t worry about the road – like coastal villages, that gets taken care of later. For that matter, so are the heavy forestation and the local geography – hills and mountains. But this is an area under siege from the wilderness, as explained in an earlier post, which changes the fundamental parameters of how people live, and that should be reflected in a change of model. In this case, I think the Germany / Holy Roman Empire model of lots of small, walled, communities is the most appropriate.

    But this does raise the question of where the change in profile takes place. I have three real options: the Zone in its entirety may be HRE-derived; or the HRE model might only apply to the forests; or it might take hold in the hills and mountains, only.

    My real inclination would be to choose one of the first two options, but in this case I’m going to choose door number 3, simply because it will contrast the HRE model with the base French version of the hills and forests. In fact, for that specific purpose, I’m going to set the boundary midway through the range of hills:

    5.8.1.6.1.2 Sidebar: Elevation Classification

    Which means, I guess, that I should talk about how such things are classified in this system. There are eight elevation categories, but the categories themselves are based on the differences between peak elevation and base elevation.

    I tried, but couldn’t quite get this to be fully legible at CM-scale. Click on the image above to open a larger copy in a new tab.

    To get the typical feature size – the horizontal diameter of hills or mountains – divide 5 x the average of the Average Peak Elevation range by the average of the Local Relief range, then multiply by the elevation category number (squared, for mountains) – or take twice the previous category’s value, whichever is higher. Note that the latter is usually the dominant calculation! The results are also shown below. Actual cases can be 2-3 times this value – or 1/2 of it.

    1. Undulating Hillocks – Average Peak Elevation 10-150m, Local Relief <50m; Features 16m (see below).
    2. Gentle Hills – Average Peak Elevation 150-300m, Local Relief 50-150m; Features 32m.
    3. Rolling Hills – Average Peak Elevation 300-600m, Local Relief 150-300m; Features 64m

         -> □ Zone 30 Treeline from the start of this category
         -> □ Normal Treeline is midway through the range

    4. Big Hills – Average Peak Elevation 600-1000m, Local Relief 300-600m; Features 128m
    5. Shallow Mountains – Average Peak Elevation 1000-2500m, Local Relief 600-1500m; Features 417m
    6. Medium Mountains – Average Peak Elevation 2500-4500m, Local Relief 1000-3000m; Features 834 m
    7. Steep Mountains – Average Peak Elevation 4500-7000m, Local Relief 3000-5000m; Features 1668m
    8. Impassable Mountains, permanent snow-caps regardless of climate – Average Peak Elevation 7000m+, Local Relief 5000m+; Features 3336m.

    Undulating Hillocks (also known as Rolling Hillocks or Rolling Foothills) are basically a blend of scraped-away geography and boulders deposited by glaciers. If the boulders have any sort of faults (and most do), they will quickly become more flat than round and start to tumble within the Glacier. When they come to rest, several will be stacked, one on top of another, generally in long waves. There will be gaps in between, which get filled with earth and mud and weathered rock over time, unless the rocks are less resistant to weathering than soil, in which case the rocks get slowly eaten away. In a few tens of thousands of years, you end up with undulating hillocks, or their big brothers. The flatter the terrain, the more opportunity there is for floodwaters to cover everything with topsoil, smoothing out the bumps. The diagram above shows how this ‘stacking and filling’ can produce structures many times the size of individual hillocks.

    A very similar phenomenon – wind instead of glaciers, and sand instead of boulders – creates sandy dunes in deserts prone to that sort of thing. Over time, great corridors get carved out before and after each dune, generally at right angles to the prevailing winds. It can help you picture it if you think of the wind “rolling” across the dunes – when they come to a spot where the sand is a little less held together, it starts to carve out a trench, and before long, you have wave-shaped sand-dunes.

    5.8.1.6.3 Area Adjustments – from 5.7.1.13

    Zone 7 has a measured area of 167.8 units, but that needs to be adjusted for terrain. Instead of the slow way, estimating relative proportions, let’s use the faster homogenized approach:

    Hostile Factors:
         Coast 1.1 + Farmland 0.9 + Scrub 1.1 = 3.1; average 1.03333.
         Coast +0.25 + Beaches -0.05 + Civilized -0.1 = +0.1
         Towns -0.1
         Net total: 1.03333
    167.8 x 1.0333 = 173.4 units^2.

    Benign Factors:
         Town 0.1 + Coast 0.15 + Beaches 0.15 + Civilized 0.2
         Subtotal +0.6
         Square Root = 0.7746
    173.4 x 0.7746 = 134.3 units^2.

    Zone 30 is… messier. Base Area 251.45 units^2.

    Hostile Factors:
         Mining 1.5 +
         Average (Mountains 1.4 + Forest 1.25 + Hills 1.2 = 3.85) = 1.28
         Town -0.1 + Foreign Town 0.1 + River 0.2 + Caves 0.05 + Ruins 0.4 + “Wild” 0.1 = +0.75
         Net total = 1.5 + 1.28 + 0.75 = 3.53
    251.45 x 3.53 = 887.6 units^2.

    Benign Factors:
         Town 0.1 + Foreign Town -0.1 + River +0.1 + Caves 0.05 + Ruin 0.4 + Major Road 0.2
         Subtotal 0.75
         “Wild” = average subtotal with 1 = 0.875
         Sqr Root = 0.935
    887.6 x 0.935 = 829.9 units^2.
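    To make the homogenized approach easy to re-run for other zones, here’s a short Python sketch of the two tallies above. The combination steps simply mirror the worked examples (hostile total as a straight multiplier, benign subtotal square-rooted, with “Wild” zones averaging the benign subtotal with 1); the function itself is my shorthand, not an official formula, and the small discrepancies are rounding.

        import math

        def adjusted_area(base_area, hostile_total, benign_factors, wild=False):
            """Apply the hostile multiplier, then the square root of the benign subtotal."""
            benign = sum(benign_factors)
            if wild:
                benign = (benign + 1) / 2    # 'Wild' zones: average the benign subtotal with 1
            return base_area * hostile_total * math.sqrt(benign)

        zone7 = adjusted_area(167.8, 1.0333, [0.1, 0.15, 0.15, 0.2])                       # ~134.3 units^2
        zone30 = adjusted_area(251.45, 3.53, [0.1, -0.1, 0.1, 0.05, 0.4, 0.2], wild=True)  # ~830 units^2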

    To me, this looks very Greek – but it’s actually ‘Gordes’, in France, which the photographer describes as a village. One glance is enough to show that it’s bigger than the town depicted previously. Image by Neil Gibbons from Pixabay

    5.8.1.6.4 Defensive Pattern – from 5.7.1.14

    Zone 7 is pretty secure, the biggest threat being local insurrection or maybe pirate raids. A 4-lobe structure of 2½,5 looks about right.

    When I measure out the area protected by a single fort and 4 satellites, I get 47.2 (days’ march)^2. That takes into account overlapping areas where this one structure shares the burden 50% with a neighboring structure, and the additional areas that have to be protected by cavalry units.

    That means that in Zone 7, there should be S x 134.3 / 47.2 = 2.845 x S of them, where S depends on how big a “unit” on the map is, measured in days’ march for infantry.

    S is going to be the same for all zones. I’ve avoided making that decision for as long as I can – the question is, how large is Zomania?

    5.8.1.6.5 Sidebar: The Size of Zomania, revisited

    16,000 square miles – at least, that’s the total that I threw out in 5.7.1.3.

    That’s about the same size as the Netherlands.

    It’s a lot smaller than the Zomania that I’m picturing in my head when I look at the map. It IS the right size if the units shown are miles. But if they aren’t?

    There are two reasons for regularly offering up Zomania as an example. The first is to provide a consistent foundation and demonstration of the principles discussed coming together into a cohesive whole. And the second is for me to check on the validity of the logic and techniques that I’ve described.

    That feeling of ‘wrongness’ is keeping my subconscious radar from achieving purpose #2. And the Zomania being described being too small – the cause of that ‘wrong’ feeling – means that it isn’t going to adequately perform function #1, either.

    There can be only one solution – Zomania has to grow, has to be scaled up. I want Zone 7 to be comparable to the size of the Netherlands, not the entire Kingdom, which should be comparable to France, or Germany, or England, or Spain.

    A factor of 10? Where would 160,000 sqr miles place Zomania amongst the European Nations that I’ve named?

    UK: 94,356. Germany: 138,063. Spain: 192,466. France: 233,032. So 160,000 would be smack-dab in the middle, and absolutely perfect for both purposes.

    So Zomania is now 160,000 square miles, and the ‘units’ on all the maps are 10 miles each.

    It wasn’t easy sorting this out – it’s been a road-block in my thinking for a couple of days now – triggered by results that seemed to show Zone 7 to be about 0.08 defensive structures in size.

    And that is due to a second scaling problem that was getting in the way of my thinking:

    How much is that in days’ march?

    In 5.7.1.14.3, I offered up:

        If d=10 miles (low), that’s 103,923 square miles.
        If d=20 miles (still low), that’s 415,692 square miles.
        If d=25 miles (reasonable), that’s 649,519 square miles.
        If d=30 miles (doable), 935,307 square miles.
        If d=40 miles (close to max), 1.66 million square miles.
        If d=50 miles (max), 2.6 million square miles.

    But that was in reference to a theoretical 6 x 4, 12 + 12 pattern. Nevertheless, the scales are there. And they are way bigger than I thought they would be, and way too big to be useful as examples. Yet the logic that led to them seemed air-tight. Clearly, there was an assumption that had been made that wasn’t correct, but this problem was getting in the way of solving the first one.

    Once I had separated the two, answers started falling into place. The numbers shown above are how far infantry can march in 24 solid hours, such as they might do in a dire emergency. But defensive structures would not be built and arranged on that basis.

    If infantry march for 8 hours, they have just about enough daylight left to break camp in the morning (after being fed) and set up camp in the evening (digging latrines and getting fed). That’s the scale that would be used in establishing fortifications, not the epic scale listed. In effect, then, those areas of protection are nine times the size they should be.

    So, let’s redo them on that basis:

        If d=10 miles (low), that’s 11,547 square miles.
        If d=20 miles (still low), that’s 46,188 square miles.
        If d=25 miles (reasonable), that’s 72,169 square miles.
        If d=30 miles (doable), 103,923 square miles.
        If d=40 miles (close to max), 184,444 square miles.
        If d=50 miles (max), 288,889 square miles.

    And those are still misleading, because mentally, I’m thinking of this as the area protected by the central stronghold, and ignoring the satellites. To get the area per fortification, we should divide by the total number of fortifications in the pattern – in the case of the numbers cited, that’s 6×4+12=36.

        If d=10 miles (low), that’s 320.75 square miles.
        If d=20 miles (still low), that’s 1283 square miles.
        If d=25 miles (reasonable), that’s 2,004.7 square miles.
        If d=30 miles (doable), 2,886.75 square miles.
        If d=40 miles (close to max), 5,123.4 square miles.
        If d=50 miles (max), 8024.7 square miles.

    Reasonable = 2004.7 square miles, or roughly equal to a 44.8 x 44.8 mile area. For a really tightly packed defensive structure of the kind being discussed, that’s entirely reasonable – and it fits the image in my head.
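    Here’s the same rescaling as a few lines of Python, taking the published 24-hour figures as given – divide by nine for the 8-hour working day, then by the 36 fortifications in the theoretical pattern:

        area_24hr = {10: 103_923, 20: 415_692, 25: 649_519, 30: 935_307, 40: 1_660_000, 50: 2_600_000}

        for d, area in area_24hr.items():
            working_day = area / 9       # 8-hour marches: one-third the radius, one-ninth the area
            per_fort = working_day / 36  # 36 fortifications in the theoretical 6 x 4, 12 + 12 pattern
            print(f"d={d} mi: {working_day:,.0f} sq mi pattern, {per_fort:,.1f} sq mi per fortification")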

    In my error-strewn calculation, my logic went as follows:

        ▪ In the inner Kingdom, I think that life is easy and lived fairly casually. That points to the lower end of the scale – 10 miles a day or 20 miles a day.

        ▪ 10^2 = 100, so at 10 mi/day, 16,000 = 160 days march.
        ▪ 20^2 = 400, so at 20 mi/day, 16,000 = 40 days march.

        ▪ That’s a BIG difference. 40 is too quick, but 160 sounds a little too slow. Tell you what, let’s pick an intermediate value of convenience and work backwards.

        ▪ 100 days’ march to cover anywhere in 16,000 square miles gives 160 square miles per (day’s march)^2, and the square root of 160 is 12.65 miles per day.

    Now, that logic’s not bad. But it doesn’t factor in the ‘working day’ of the infantry march – it needs to be divided by 3. And it DOES factor in my psychological trend toward making the defensive areas smaller, because my instinct was telling me they were too large – but this is the wrong way to correct for that. So this number is getting consigned to the dustbin.

    After all, the ‘hostile’ and ‘benign’ factors are supposed to already take into account the threat level that these fortifications are supposed to address, and hence their relative density.

        ▪ So, let’s start with the “reasonable” 25 miles.
        ▪ Apply the ‘working day’ to get 8.333 miles.
        ▪ The measured area of the defensive structure is 47.2 ‘days march’^2.
        ▪ Each of which is 8.333^2= 69.444 miles^2 in area.
        ▪ So the defensive unit – stronghold and four satellites – covers 47.2 x 69.444 = 3277.8 sqr miles.
        ▪ Or 655.56 sqr miles each.
        ▪ Equivalent to a square 25.6 miles x 25.6 miles.
        ▪ Or a circle of the same area, about 14.4 miles in radius.
        ▪ Base Area 173.4 units^2 = 17340 square miles.
        ▪ Adjusted for threat level, 134.3 units^2 or 13430 square miles. In other words, defensive structures are further apart because there’s less threat than normal.
        ▪ 13430 / 3277.8 = 4.1 defensive structures, of 1 hub and 4 satellites each.
        ▪ So that’s 4 hubs and 16 satellites plus an extra half-satellite somewhere.

    Those satellites could be anything from a watchtower to a small fort to a hut with a couple of men garrisoned inside, depending on the danger level and what the Kingdom is prepared to spend on securing the region. The stronghold in the heart of the configuration needs to be more substantial.
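    For anyone who wants to re-run the Zone 7 count (or repeat it for another zone), the whole chain reduces to a few lines. The names are my own; the constants are the ones established above.

        day_march = 25 / 3             # 'working day' infantry march, ~8.333 miles
        pattern_days2 = 47.2           # measured area of one hub + 4 satellites, in (days' march)^2
        pattern_sq_mi = pattern_days2 * day_march ** 2   # ~3,277.8 sq miles per group
        per_fort_sq_mi = pattern_sq_mi / 5               # ~655.6 sq miles per fortification

        unit_miles = 10                # 1 map unit = 10 miles
        zone7_effective = 134.3 * unit_miles ** 2        # threat-adjusted area, 13,430 sq miles
        groups = zone7_effective / pattern_sq_mi         # ~4.1 hub-and-satellite groups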

    Okay, so that’s Zone 7. Zone 30 is a whole different kettle of fish.

    I wanted to implement a 3-lobed configuration with more overlap than the four-lobed choice made for Zone 7. And it was turning out exactly the way I wanted it to; every hub was reinforced by three satellites, and every satellite reinforced by three hubs. I had the diagrams 75% done and was gearing up to measure the protected area.

    Which is when the plan ran aground in the most spectacular way. There were areas where responsibility was shared two ways, and three ways, and four ways, and – at some points – six ways. It was going to take a LONG time to measure and calculate.

    If I were creating Zomania as an adventuring location for real, I would have carried on. If I lived in an ideal world, without deadlines (even the very soft ones now in place at Campaign Mastery) I would have continued. I still think that it would have provided a more enlightening example for readers, because I would be doing something a little bit different and having to explain the differences and their significance.

    But since neither of those circumstances is the case, and this post is already several days late due to the complications explained earlier, I am going to have to compromise on principle and re-use the configuration established for Zone 7.

    Well, at least that will show the impact that the greater threat level will impose on the structure, but it leaves the outer reaches of the Kingdom less well-protected than they should be. If and when I re-edit this series into an e-book, I might well spend the extra time and replace the balance of this section – or even work the problem both ways for readers’ edification.

    REMINDER TO SELF – 3 LOBES, 1 DAY EXAMPLE

    But, in the meantime…

    Zone 30.
        ▪ Actual area 251.45 square units = 25,145 square miles.
        ▪ Adjusted for threat level = effective area 829.9 square units = 82,990 sqr miles. (in other words, the defensive structures you would expect to protect 82,990 square miles are so closely packed that they actually protect only 25,145 square miles, a 3.3-to-1 ratio.)
        ▪ Defensive Structure = 3277.8 square miles (from Zone 7).
        ▪ 82,990 / 3277.8 = 25.32 defensive structures of 5 fortifications each, or 126.6 fortifications in total. Zone 7 is 69% of the area and had a total of 20.5 fortifications, in comparison.

    What does 0.32 defensive structures represent? Well, if I take the basic structure and ‘lop off’ two of the satellites, then it’s 3/5 of a protected area minus the overlaps. By eye, those overlaps look to be a bit more than 2 x 1/4 of one of those 1/5ths, and since 1/4 of 1/5 is 1/20th, that’s roughly 0.6-0.1 = 0.5.

    If I take away a third satellite, the structure is down to 2/5 protected area minus overlaps, and those overlaps are now 1 x 1/20th, so 0.4-0.05=0.35. So, somewhere on the border, there’s a spot with one hub and one satellite.

    One more point: 3.3 to 1. What does THAT really mean? Well, the defensive structure used has satellites 2.5 days march from the hub. But everything is more compressed, by that 3.3:1 ratio, so the satellites in Zone 30 are actually 2.5 / 3.3 = 0.76 day’s march from the hub. The area each commands is still the same, but there’s a lot more overlap and capacity to reinforce one another.

    Another way to look at it is that there are so many fortifications that each only has to protect a smaller area. 3277.8 sqr miles / 3.3 = 993 sqr miles.
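    Re-using the same sketch for Zone 30 (again, the variable names are mine; the arithmetic follows the steps above):

        zone30_effective = 829.9 * 10 ** 2       # 82,990 sq miles, threat-adjusted
        zone30_actual = 251.45 * 10 ** 2         # 25,145 sq miles, as measured
        groups = zone30_effective / 3277.8       # ~25.3 groups, ~127 fortifications in all
        compression = zone30_effective / zone30_actual   # ~3.3
        sat_days = 2.5 / compression             # ~0.76 days' march, hub to satellite, as above
        actual_per_group = 3277.8 / compression  # ~993 sq miles actually protected per group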

    5.8.1.6.6 Sidebar: Changes Of Defensive Structure

    The point that I’m going to make in this sidebar won’t make a lot of sense unless you’re paying close attention, because the Zone 30 example has the same defensive structure as Zone 7 – it’s just a lot more compressed. But imagine for a moment that there was a completely different defensive structure in Zone 30.

    What does that imply for Zone 11, which lies in between the two?

    You might think that it should be some sort of half-way compromise or blend between the two, but you would be wrong to do so.

    If you look back at the overall zone map for Zomania (reproduced below)

    …and recall that the zones are numbered in the order they were established, a pattern emerges. Zone 1 first, then Zone 2, then Zones 3-4-5-6-7, then zones 8-9-10-11-12, and so on. Until Zones 29-32 were established, Zone 11 was the frontier. It would likely have the same defensive structure as Zone 30. Rather than fewer fortifications, it would have them at the same density as Zone 30 – but the manpower in each would be reduced.

    If you know how to interpret it, the entire history of the Kingdom should be laid bare by the changes in its fortifications and defenses.

    But that’s not as important as the verisimilitude that you create by taking care of little details like this and keeping them consistent. The specifics might never be overtly referenced – but they still add a little to the credibility of the creation.

    5.8.1.6.7 Inns in Zone 7 – from 5.7.3

    Zone 7 is noteworthy for NOT having a major road – that’s on the Zone 11 / Zone 6 side of the border. Some of the inns along that road, however, may well be over that border – it’s a reasonable expectation that half of them would count. But only that half that is located where the border runs next to the road – there’s a section at the start and another at the end where the border shifts away.

    But there’s a second factor – what is the sea, if not another road to travel down? And Zone 7 has quite a lot of beach. The reality, of course, is that these are holiday destinations, and places for health recovery – but it’s a convenient way of placing them.

    So that’s two separate calculations. The ‘road that is a road’ first: There are actually two sections. The longer one runs through Zones 6 and 11, as already noted; it measures out at 15 units long, or 150 miles.

    The second lies in Zone 15, and it’s got a noticeable bend in it. If I straighten that out and measure it, I get 5 units or 50 miles.

    Conditions:
        Road condition, terrain, good weather = 3 x 2.
        Load = 1 x 1/2.
        Everything else is a zero.
        Total: 6.5.
    6.5 / 16 x 3.1 = 1.26 miles per hour.
    1.26 mph x 9 hrs = 11.34 miles.
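    As a quick sanity check, the same travel-rate arithmetic in Python (the 16 and 3.1 constants are the ones used in the tally above):

        conditions = 3 * 2 + 1 * 0.5        # road x terrain x weather = 6, plus load = 0.5
        speed_mph = conditions / 16 * 3.1   # ~1.26 miles per hour
        day_travel = speed_mph * 9          # ~11.3 miles per 9-hour travelling day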

    Here’s the rub: we don’t know exactly where the hubs and satellites are in Zone 7, only how many of them there are to emplace. But it seems a sure bet that those areas where the road and border part ways do so because there’s a fortification there that answers to Zone 6 or Zone 11, respectively. And that means that we can treat the entire length of the road as being between two end points.

    We know from the defensive structure diagram that the base distance from Satellite to Hub is 2 1/2 days march, and that there’s a scaling of x 1.0333 (hostile) x 0.7746 (benign) = x 0.8 – and that benign factors space fortifications further apart while hostile ones bunch them together, so we divide by this factor when calculating distances. We know that 8.333 miles has been defined as a “day’s march”.

    If we put all that together, we get 2.5 x 8.333 / 0.8 = 26 miles from satellite to hub.

    Armies like their fortifications on roads; it makes it faster to get anywhere. Traders like their trade routes to flow from fortification to fortification; it protects them from bandits. The general public, ditto. If a road doesn’t go to the fortification, people will create a new road and leave the official one to rot. So it can be assumed that the line of fortifications will follow the road, and be spaced every 26 miles along it, alternating between hub and satellite.

        150 miles / 26 = 5.77 of them.
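    Or, for those following along in code, a hedged sketch of that spacing calculation:

        day_march = 25 / 3                  # 8.333 miles, from the defensive-pattern working
        scaling = 1.0333 * 0.7746           # ~0.8; benign factors spread fortifications out, so divide
        spacing = 2.5 * day_march / scaling # ~26 miles, satellite to hub
        forts_on_road = 150 / spacing       # ~5.77 fortifications along this stretch of road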

    It’s an imperfect world; that 0.77 means that you have one of three situations, as shown below:

    The first figure shows a hub at the distant end of the road. The second shows a hub at the end of the road closest to the capital. And the third shows the hubs not quite lining up with either position.

    But those aren’t the actual ends of the road – this is just the section that parallels the border of Zone 7, or vice-versa. So the last one is probably the most realistic.

    Now, let’s place Inns – one every 11.34 miles. But we have to do them from both ends – one showing 1 day’s travel for ordinary people headed out, and one showing them heading in. Just because I’m Australian, and we drive on the left, I’ll put outbound on the south side and inbound on the north.

    Isn’t that annoying? They don’t quite line up – to my complete lack of surprise. Look at the second in-bound inn – it’s about 20% of a day short of getting to the satellite, and that puts it so close that it’s not worth stopping there; you would keep going.

    Well, you can’t make a day longer, but you can make it shorter. And that makes sense, because these are very much average distances.

    I’ve shortened the days for the ordinary traveler – including merchants – just a little, so that every 5th inbound Inn is located at a Stronghold, and every 5th outbound inn is located at a satellite. Every half-day’s travel now brings you to somewhere to stop for a meal or for the night.

    It’s entirely possible that not all of these Inns will actually be in service, it must be added. Maybe only half of them are actually operating. Maybe it’s only 1/3. But, given its position within the Kingdom, there’s probably enough demand to support most of these, so let’s do a simple little table:

        1 inn functional
        2 inn functional
        3 inn functional but 1/4 day closer
        4 inn functional but 3/4 day farther away
        5 inn not functional
        6 inn not functional, and neither is the next one.

    Applying this table produces the following (for some reason, my die kept rolling 3s and 6s):

    Even here, in this ‘safe’ part of the Kingdom, travelers will be forced to camp by the roadside.
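    If you want to roll that table programmatically, here’s a sketch – the function name is mine, and the result text is paraphrased from the table above:

        import random

        def roll_inn_chain(count):
            """Roll the d6 table once per planned inn site, honoring the '6 knocks out the next inn' result."""
            results, skip_next = [], False
            for _ in range(count):
                if skip_next:
                    results.append("not functional (lost to the previous 6)")
                    skip_next = False
                    continue
                roll = random.randint(1, 6)
                if roll <= 2:
                    results.append("functional")
                elif roll == 3:
                    results.append("functional, but 1/4 day closer")
                elif roll == 4:
                    results.append("functional, but 3/4 day farther away")
                else:
                    results.append("not functional")
                    skip_next = (roll == 6)
            return results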

As the Table Of Contents makes clear, there’s still a lot to come in this part. It will continue in part 5c!


All Spiders (And Snakes) Are Not Alike


Snakes & Spiders in RPGs tend toward one-size-fits-all construction. Use reality to make them exceptional!

Image by Alan Couch, CC BY 2.0, via Wikimedia Commons

I got curious this morning.

Australia is well-known around the world for the number and variety of deadly fauna we live alongside.

The likelihood of your home being robbed drops by a factor of between 100 and 1,000 if you live above the ground floor, to the point that if you are not away for an extended period (more than a day) and have no neighbors on the same level, it’s perfectly safe to leave your front door unlocked for a few hours – while you go shopping, for example (doing so freaks a lot of urban dwellers out, though – it’s far more comfortable for those coming from relative security like a small country town).

So I suddenly wondered, “How much do Sydney Funnelweb Spiders like to climb? What are the rates of reported bites taking place on any above-ground level higher than the ground floor?”

I wasn’t able to answer the second because it’s not a statistic that is routinely recorded, but was able to get an answer to the first, based on the behavioral traits of the spiders in question. And that answer got me to thinking about Spiders and Snakes in RPGs.

Funnelweb Spiders

This is, perhaps, the most deadly spider in Australia. Nevertheless, there have been few if any fatal attacks since the anti-venom was developed.

Sydney Funnelweb Spiders (Atrax robustus) are generally terrestrial (ground-dwelling), but they are capable of climbing under specific circumstances.

Sydney Funnelweb Spiders are primarily known for building their silk-lined tubular burrows in sheltered, moist, cool habitats, usually under logs, rocks, or in suburban gardens. The females are especially sedentary and rarely leave their burrows.

The most common encounters occur with wandering males during the warmer months (especially November to April), particularly after rain, as they search for mates. This wandering behavior often leads them into backyards, garages, and houses, or they fall into swimming pools.

The species is overwhelmingly terrestrial (ground-dwelling). Their burrows are in the soil, under rocks, or in logs. The only ones that typically leave the burrow are the wandering males looking for a mate.

When males wander, they move across the ground and seek shelter at dawn. They are most often found entering homes by crawling under doors or sometimes through other ground-level openings.

They generally CANNOT climb smooth surfaces like clean glass, plastic, or very smooth painted walls due to a lack of specialized adhesive pads (like those found on many other spiders). This is common lore among experts.

They CAN climb textured or rough surfaces like rough brick, steps, or rough-barked trees, as their claws can find purchase. In fact, some related species, like the Northern Tree-dwelling Funnelweb (Hadronyche formidabilis), are known to live meters above the ground in tree bark.

So, while they prefer to stay at ground level, a Sydney Funnelweb Spider could potentially climb a textured wall or staircase to reach an above-ground level, but this is not their typical, preferred mode of movement or habitat.

By far the most likely source of an above-ground attack is a Spider being carried up on furniture or boxes being moved (carried up by a human), or an accidental journey in a lift – by definition, unnoticed by the user of that lift.

Bio-security Barrier

Living on an above-ground level in an apartment building significantly reduces your risk of encounter.

You can treat living above the ground floor as a form of “bio-security” against Funnelwebs (and many other ground-dwelling risks) that is analogous to the drop in burglary risk mentioned earlier.

Comparison: Huntsman Spiders

Huntsmen are climbers; they like to live high up on walls and on ceilings. Most varieties (maybe all) don’t build webs at all. They are incredibly fast and often very large (bigger than an open hand with the fingers splayed out as far as they will go). They are also adept at squeezing themselves through gaps that are much smaller than their bodies.

While most Australians don’t welcome the intrusion of a Huntsman into the home, it’s rarely a cause for panic. They are actually fairly shy creatures – just getting close to one and staring at it for a few minutes can be enough to get them to leave on their own when you then leave the immediate vicinity and don’t look at them – they treat this as coming across a predator that isn’t hungry enough to have them for lunch, a lucky escape, ‘now let’s get the hell out of here before it comes back!’

Huntsmen live on cockroaches, flies, and other far more annoying insects, so there are exceptions to that general rule. For the most part, in Australia, if you leave them alone, they will earn their keep.

But for the especially arachnophobic, that’s not an option, and there’s always the risk of a visitor freaking out, so it’s common practice to remove them gently and release them outside. Again, this is viewed as a predator ‘toying’ with them cruelly before letting them go – the last place they are likely to go is where they were removed from.

They have been known to scuttle inside cars and can even work their way through the door-seals of a closed door or a window that’s only opened a crack – 1/4 of an inch is more than enough. That’s why you’ll often see videos on the internet of spiders inside cars or on windscreens, and sometimes the braver souls will catch them, open the door, and release them. No Aussie questions the validity of these videos, they are far too plausible for that.

Huntsmen CAN climb smooth surfaces like glass, and can cling to a windscreen at highway speeds. They may not like the experience, though – I can’t attest to that, either way.

The largest one I’ve ever seen was the size of a dinner-plate. I think they can grow a little larger than that, but not much. But size alone makes them terrifying to some.

Snakes

The same is true of the most venomous snake varieties here, provided there is no access for them to get into the ceiling of the ground floor space.

Australia’s most medically significant snakes (like Eastern Brown Snakes or Tiger Snakes) are also strongly terrestrial. While they can climb surprisingly well, they are not naturally adapted to navigate the smooth, high, sheer walls and stairwells of a multi-story building.

Awareness of the ground-floor ceiling / roof void is key. If a snake gets into the space above the ground floor (by climbing a vine, tree, or rough surface to the roof-line and entering through a small gap), it is primarily a risk to the ground-floor residents. If you live on the first floor or higher, this risk is eliminated unless there is some opening in that crawlspace upwards that the snake is small enough to take advantage of – heating ducts or something, perhaps.

There is an evolutionary rationale for this: Because they are principally terrestrial, they are more likely to encounter predators, and so are more likely to develop defenses against those predators. So the general rule is, the less a snake likes to climb, the more likely it is to be dangerous.

Carpet Pythons

Carpet Pythons, and constrictors in general, are far stronger and better able to climb. They can be viewed as the Snake-world’s equivalent of Huntsmen. Their preferred attack mode is to leap / fall on prey from above or from the side and wrap themselves around it, squeezing it until it dies, then swallowing it whole.

The Second Bio-security Barrier

Even the climbing species tend to stay close to where the food is, and that’s closer to the ground. While they can climb higher than the first floor above ground level, there is little advantage to them in doing so, so there is, effectively, an equivalent ‘bio-security barrier’ that’s just one floor above the first. Encounter incidence drops dramatically at such heights. Part of it might be that while robust, strong, climbing snakes and spiders can survive a one-story drop completely unharmed, there is far greater risk when falling two or more stories. Just like people, they aren’t built for extreme heights, which are therefore scary to some (and thrilling to others – I wonder if that’s true in the Animal kingdom as well?).

Spiders In RPGs

While there can be exceptions of small-but-deadly spiders taken from the real world – Black Widows, Tarantulas, and so on – for the most part, RPGs treat Spiders as “one stat block does all”. They are all venomous, all climbers, all web-spinners, all generic except for size. At most, there might be cosmetic variations.

Simply dividing the world of spiders into two – terrestrial types vs climbers – and applying the difference to determine capabilities – is a direct infusion of verisimilitude into spider encounters. Go back and read the spider encounter in The Hobbit again, and this time don’t let yourself get distracted by the conversations and “Attercop”, and you will find that the encounter has a greater level of credibility because the behavior of the spiders feels realistic. There are species whose venom doesn’t kill right away, and who surround their prey in webbing and leave it hanging to die on its own, because it’s harder to tear flesh from bone when it hasn’t started to rot.

Snakes In RPGs

These fare somewhat better, but the same truth can ultimately be found here in an awful lot of cases. It might be, in part, due to varieties of deadly snake being recognized culturally with greater frequency – the cobras with their flaring necks, rattlesnakes with their rattles, and so on. When these get super-sized, some of their traits – those known to the referee – tend to go along for the ride. Many systems explicitly detail a “Giant Boa” or other constrictor.

But, past a certain point, the same truth is there – all snakes past a certain size are venomous, have similar behaviors and attitudes, and behave the same way – and can benefit in the same way by a little differentiation.

Example: Giant Swampy Tree-snakes

You don’t have to ground your ideas in reality, the mere fact that they are different from the ‘norm’ gives them instant credibility and interest. As an example, let me present to you the Giant Swampy Tree-snake, better known as the Green-backed Swamp Viper.

My chain of thinking:

  1. I don’t know what the defining characteristics of a Viper are, but the name sounds cool.
  2. These snakes cannot swim. In a swampy environment, that’s the key point of distinction, from which everything else will flow.
  3. To cross small rivers and streams, they learned to climb one tree, head out along its branches until it was above another tree’s branches, then drop down into it.
  4. Evolution favored smaller, lighter specimens, but required the retention of above-average strength relative to their size.
  5. After a while, they learned how to wrap their tails around the end of a tree-limb and swing, greatly increasing their chances of traversing terrain. This favors a longer, thinner body.
  6. Their eyesight grew more acute and their reactions faster in order to better target neighboring tree-limbs.
  7. Once you have a locomotive ability that doesn’t require descending to ground level, there is a survival benefit to not doing so most of the time. The only reason to drop to ground level is to attack prey, and once it’s in your mouth and on its way to being digested, you would head straight for the nearest tree and climb.
  8. Minimizing the time spent on the ground naturally demands a quicker-acting venom. Smaller body sizes give this snake a lower metabolic demand, so smaller prey, less frequently, becomes sufficient. The improved eyesight aids in the resulting development path. So the snake has fewer doses of its venom but it’s more potent.
  9. Take all of the above changes and repeat them because they are not just a change, they are a trend.
  10. Swinging from tree-limb to tree-limb imposes a natural length limit of average height above ground plus enough length to firmly grasp the tree-limb – two or three coils around. So, if the tree-limb is 1/2 an inch in radius, three coils comes to 3 x 2 x pi x 1/2 = 3 pi = 9.4 inches.

In reality, this looks a little cumbersome in terms of the snake releasing its grasp at the end of its swing – if it wants to leap from one tree to another, I’d probably take one coil out and make the added length 2 x 2 x pi x 1/2 = 2 pi = 6.3 inches.

Put all of these changes into an appropriate stat block, and you have something unique, interesting, unexpected, fantastic – and yet, it has a ring of authenticity.

Snakes that live in trees tend to evolve to have a diameter 1/2 the diameter of the branch, at most. If they stay in close to the trunk, they can be enormous in size; if they head for the outer branches, they shrink – fast. And maximum length, as said, tends to be height above ground in the average tree-limb plus a few inches.
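If you want to turn that length rule into something you can drop into a stat-block generator, here’s a minimal sketch – the function and its default values are mine, not anything official:

    import math

    def max_tree_snake_length(branch_height_in, limb_radius_in=0.5, coils=3):
        """Height of the average tree-limb above ground, plus enough body to coil around it."""
        grip = coils * 2 * math.pi * limb_radius_in   # e.g. 3 coils on a 1/2-inch-radius limb ~ 9.4 in
        return branch_height_in + grip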

Final Tips

Hunting Vs Defense: A creature’s venom can have either purpose or both.

If it’s for hunting, the quantity will be enough to bring down its usual prey quickly. Every second that a snake or spider is waiting for its prey to konk out (dead or unconscious) is another second that the spider or snake itself can be attacked.

If it’s for defense, the quantity and deadliness will follow the same logic with reference to whatever it usually has to defend itself from.

If both, half-way adaptions become likely – smaller venom amounts but the speed for multiple attacks, for example – so that venom is not wasted on prey when it might be needed for defense.

The same logic still applies when you scale these creatures up.

Before you go, I have a couple of announcements.

Monday Deadlines Erased (well, lightly scuffed)

I (or Johnn) have been publishing Campaign Mastery every Monday at around Midnight my local time since 2008 with just one extended break (not of my choice). Back then, we followed the usual formula of 1,000-2,000 words to a post. For the first ten years, we published twice a week, Mondays and Thursdays.

As of this post, that changes. When I started, I could knock out a post in one day – I often didn’t start writing until the Monday Morning, though I liked to have time up my sleeve by writing the next post early.

I had a set routine – Monday, CM; Tuesday, Pulp; Wednesday, the real world; Thursday, CM; Friday, prep the next campaign to be played on the monthly rotation cycle; Saturday, play; Sunday, personal time.

Then the posts started getting longer and more complicated. First Sundays and then Saturday Nights and then Tuesday Nights all got added to the CM schedule, one at a time. Lately, it’s been Thursday, Saturday, Sunday, Monday – more than half the week – and that often hasn’t been enough.

A number of times, a post has been almost but not quite completable before deadline, come Sunday / Monday, and I would have to set it aside and throw something together at the last minute, when another day or two would have seen it good to go.

So, as of this post, there’s a new publishing schedule here at CM:

Something New Every Week.

Where possible, I’ll stick to the old deadline, but when something’s not quite ready to go, I’ll give it the extra time that it needs and publish when it’s ready. If I get to Thursday and it’s still not ready, I’ll do the ‘something quick’ trick – and aim for the delayed post to appear the following week.

Partial Posts

When it’s a major series, like Trade In Fantasy, I’m going to pull a new trick out of my hat, the Partial Post. In a nutshell, come Monday or Thursday, I’ll publish whatever’s ready to go, no matter how minimal it might seem. The following week, I’ll publish everything done since the last post as “Part 5b” or whatever, but I will also update the incomplete post with the new content.

Like I said above, something new every week. I’ll even take my usual Time Out breaks in the middle of working on the larger post instead of waiting until it’s complete.

The “Part 5b”-style posts will be minimal – no updated TOC, a repetition of the same feature image, no commentary – just straight ahead from where I left off, with only a single text panel at the top with a link back to the main post.

When one of these drops, it will also signify that there may have been retroactive amendments to the content of preceding parts – these will be Works In Progress, not complete until the main post is complete.

And, on that main post, there will be a similar text panel which will keep track of the status of that post.

Right now, I’m working on Chapter 5 part 5. So the first part of it will get uploaded and published as “Chapter 5 Part 5 (Incomplete)”.

It will be followed by “Chapter 5 part 5a”, with the date and text saying “partial post, click here to read the more complete version” in a panel at the top. And, when it drops, the content will be integrated with the old “Chapter 5 part 5”, the end-of-post blurb will be updated to indicate whether or not Part 5 is complete or will continue, and a text panel will appear at the start, showing the date, and “Integrated part 5a”.

How well this will work remains to be seen, but the theory is sound, and hopefully readers will stick around.

What’s that? Why post separately at all? There are a number of subscribers who get Campaign Mastery delivered by email who won’t get the updated version of “Part X”. Posting the additional text means that they will still get the new parts.

Taking Time

I have a number of major projects on the go right now.

  • I’m illustrating a complex machine for the Warcry campaign – so far, it consists of more than 1800 layers.
  • When it’s finished, I have to write description and narrative around it in the adventure for which it’s written.
  • Then I have to finish the adventure – and I have a hard deadline of early January for this task. So far, it’s 41,200 words long and about 80% complete. It contains 97 original images and 7 sound effects (so far)!
  • Meanwhile, there’s a Pulp adventure that’s almost complete but needs some finishing touches. It has meant creating an 88-page offline website with 500 images, not counting ones that haven’t actually been used, and more than 129,000 words of text. I have one last page of the website to finish and then it’s done. The entire (still incomplete) “Value Of Material Things” series is a spinoff of the work put into this website. The adventure itself is 16,100 words, is about 95% complete, and also contains about 60 illustrations.
  • But before I can finish it, I need to complete work on another article for CM that currently stands at about 90% complete and is almost 9000 words long (there will be some compression in editing and many of those words are HTML, so it won’t be that long when it’s published).
  • After that, there’s another Pulp adventure that’s 80% complete, maybe 90%, but it needs a complicated illustration that I’ve barely been able to start. It needs to be complete by May 2026. So far, it has 184 illustrations (some originals, many hand-edited) and is 24,300 words long.
  • And then there’s my Superhero campaign. The next adventure is more or less complete at 7200 words and 28 illustrations, most of them original, but I have a growing itch to go back and add to it. But I also have to find time for the adventure that’s to follow it, and I haven’t even started on that beyond basic notes. It’s likely to run to 10-15,000 words.
  • And, meanwhile, the current Dr Who adventure currently stands at more than 56,000 words and is only 22% complete. 7200 words of that total have already been played (one full session), so this is turning out to be a monster. So far, it has 33 original illustrations and (in another first for me) 5 animations. Because play has already started, this has been a high priority for me. And the rest of that adventure needs to be illustrated – that’s probably another 67 or so images, maybe more, to be sourced. Most of those won’t be originals, though – I just have to find the ones I need on the internet.

Put all that together.

  1. 718 illustrations, most of them original, with 2 more major ones in progress and 78 more to be sourced.
  2. 7 sound effects. And 5 animated short movies.
  3. 10 documents & 1 88-page website.
  4. 282,800 words. That’s approaching three full-length novels.
  5. With 67200 still to write by February. And another 160,400 to follow later in the year.

That’s doable, but it means stealing back some of the time that Campaign Mastery posts have soaked up in recent times (hence the Partial Post concept). So, in addition to the measures stated above, more time is going to be diverted away from writing longer blog posts for the next few months. And, on top of that, I will be taking a two-week vacation covering Christmas and New Years Day.

There’s a lot to do, so I’d better get on with it!


Once We Were Heroes and the AI Controversy


This post is a review of Once We Were Heroes by Fool Moon Productions, which uses art that’s AI-Generated. So I’ve had to set some ground rules.

This post features AI-generated art. If you object to that art or its use, you can click on This Link to read a plaintext version of the article.

As the owner/operator of Campaign Mastery, I have spent a lot of time thinking about what the site’s policy should be with respect to art by Generative AI, and the text below is the result.

Campaign Mastery Policy on AI-generated Art

1. Campaign Mastery will not use or show AI art unless it is profoundly essential to the content of an article. “profoundly essential” includes reviewing a product, tool, or published work which uses AI-generated art, or when the art itself is the subject of the article.

2. AI art will never be used to replace any original art that would normally have been commissioned from or provided by an actual human artist by Campaign Mastery.

3. When AI art is used, a disclaimer will always warn readers, as shown above. This will precede any significant article content and especially any AI-generated content. Whenever possible, a link will be provided to a plaintext, AI-free version of the article.

Campaign Mastery Policy on AI Text

1. While text generated by an AI may be quoted, it will never be used to replace human-generated text. All text published on Campaign Mastery must be substantially written, analyzed, and edited by a human author.

2. Text generated by an AI may only be quoted for the purposes of analysis, illustration, or conversation (eg demonstrating prompt engineering). Any such text will be clearly identified as to source. Analyses performed by an AI must be converted into an HTML-code table.

3. Outside of direct quotation, AI may have been used for research or brainstorming, or generating outlines or summaries of other texts. Every such use will be verified for accuracy by a human and the final text will always be written and edited by a human.

4. No third-party submissions which are obviously AI-Generated in the exclusive opinion of the site owner will be accepted for publication.

Campaign Mastery Policy on AI Audio and Video

1. While Campaign Mastery is not an AV site, from time to time Audiovisual materials may appear, and some of these may be AI generated in some respect.

2. If any aspect of these materials (eg AI voice-overs, background music, etc) is significantly AI-generated, the materials will be treated as though they were “AI Art” as per the policies stated above.

This text has been added to the policies page and is effective as of this post.

Human-AI Collaboration?

I couldn’t find what I wanted to use to illustrate this post – an Artificial Artist painting a question mark. This is a next-best alternative, based on robot hand human handshake by Mohamed Hassan, to which I added a question mark image by Gerd Altmann in the background, and some color tweaking to get them to match. Both images were sourced from Pixabay using their “authentic” (human only) setting.

The AI controversy – an overview

I always knew that this day would come eventually. I had expected that the occasion would be when I wrote and published an article on how I use AI within my campaigns, and the techniques and limitations that come with it – but that article isn’t written yet, because the uses that are most illustrative come from an adventure that hasn’t yet been played.

Or another article that’s been drafted on the limitations of AI and how it could be improved – and on how to get the most out of what’s already here.

Polarization, Content, and Hard Lines

The issue of AI-generated art and other content is one of the most polarizing issues in the hobby. It has forced publishers, creators, GMs, and customers / players to establish rules that are starkly black and white, often with the best of intentions – but when those clash, the hobby itself can be the loser.

I’m more in favor of a softer line that acknowledges gray areas with transparency. There are certain ways that I consider ethical when it comes to the use of AI, and others in which it clearly is not. Asking for anything “in the style of” a living artist is a big no-no, for example. Asking for something in the style of a long-dead artist, that’s more of a gray area.

I regard AI as a tool, and like all tools, it can be used for good or ill. Throw in a healthy dose of pragmatism, an acknowledgment that no black-and-white policy can satisfy everyone and that there can be good and valid reasons for the ethical use of the tool, and you find yourself in the same uncomfortable middle-ground that I occupy, and that the policies stated above are intended to encapsulate and define.

Ethics and Labor Rights

This actually breaks down into a number of related concerns. First, there’s the conflict between how generative AIs learn to create their content and respecting the rights and integrity of human creators.

Most AI models are trained on massive accumulations of data scraped from the internet with no concern for the sources’ rights, and without recompense. And that irks those who support the rights of writers and artists. I’m one of them, so naturally, my sympathies align more with those who are critical in this regard.

But that perspective is nuanced by the reality of the internet. Once material is publicly available, it’s there for anyone to refer to and use as reference or inspiration. So long as sufficient input from outside that source is incorporated, and in a non-superficial way, so long as you are building on what has been made available and not simply copying it outright, how is what an AI does any different from what a human writer does?

If I want to create an image of a clown, and I start by doing research using Google Images on how other artists have depicted clowns to get ideas, that’s generally considered fine – because at the end of the day, I have to synthesize all those elements and ideas together into my own representation of “a clown”. I don’t generally restrict or place boundaries on those searches; I want as much fuel for the creative fires as I can get.

It’s a long-held maxim – if you don’t want something to be public, don’t put it on the internet.

Here’s a bone to chew on: if it’s valid and legal for a human to be educated by viewing online content, how is it not valid and legal and Fair Use for an AI system to use it in the same way, for the same purpose?

Shades of gray.

Some content creators argue that the results are a form of “unlicensed derivative work”. And that might be true, if only that content creator’s works were used to train the AI – but with every outside source, the purity of that argument gets eroded.

There comes a point where so many sources are being fused into one that you have to draw the line. It’s like music – the difference between doing a cover version of a Beatles song and drawing inspiration from the Beatles is clear and obvious. Both are forms of copying – but the nuance is completely different. One requires the payment of a license fee to the songwriters, and the other doesn’t. Doing them without that payment is legal and ‘fair’ in one case – and completely the opposite in the other.

You can’t copyright the D7(diminished) chord just because you’ve used it in a song. It’s there for anyone else to use.

What’s more, consider the necessary ‘spark of originality’ that distinguishes human creation from artificial construction. In order to generate a good image, a human user of an AI has to specify a prompt, and the general rule is that the more detailed the prompt, the better the result. Is this not providing the needed ‘spark of originality’ into the resulting image?

The more vague and generalized the input, the weaker this line of argument, I admit. But where do you draw the line? How many of these creators started out by imitating someone else’s work?

Shades of gray.

I don’t see how you can end up anywhere else in the argument if you’re applying any half-way reasonable standards.

Devaluation Of Creativity

There is a widespread fear among freelancers (artists, mapmakers, writers, editors, you name it) that AI tools will drastically reduce the market rate for their services. Why pay $500 for a unique monster illustration when you can generate a passable image for nothing, or close to it?

And they have a valid point – up to a point.

The keys to deciphering this argument are subtle. AI images may be ‘passable’ but they aren’t going to be as nuanced as a bespoke image from a human artist. This is stealing my own thunder to a certain extent, but here’s the reality: The more detailed you make an AI prompt, the more you are likely to get something close to what you want – but the more likely it is that some crucial element, spelt out in specific detail, is going to be left out completely. And if what you’ve requested isn’t something that people routinely post images of on the internet, you’re going to struggle – try generating an image of a “crashed alien spacecraft” and what you generally end up with is a flying saucer hovering serenely in the air. People don’t take many photos of crashed objects! And if the AI can’t learn what it should look like, it can’t create something like it.

What this argument is really pointing out is that amateur-prompted AI art raises the bar of amateur art to something with much of the gloss of the professional artist. But the professional will always be better at capturing originality and bringing it to the creative table. The differences may be more nuanced than a black-and-white line drawing, but they are real.

You’re still getting something for your $500 that you don’t get from the cheaper alternatives – but it’s not the same thing as it used to be.

And this argument also smacks of the arguments raised against every technological advance and the job losses that came with it. I’ve heard those arguments advanced against everything from the word processor to assembly-line robots. In every case, there has been more employment afterwards than before, once things settled down – but in some cases, that has mandated an evolution of skill-set, and in others, a complete replacement. So the truth of the matter in this respect is, once again, nuanced.

The similarity not only weakens this argument considerably, it points out, more starkly, my previous point – you pay for the services of a human artist for what he or she can provide that the cheaper substitutes can’t. Will that result in a realignment of the market rates? Possibly. But that’s life, it happens to everybody, like it or not – things change, and you either evolve accordingly, or you stagnate.

But there is a sting in the tail – the proposition that this leads to a “race to the bottom,” where only AI-assisted production can compete on cost. And that’s a point that I can’t argue with, and hence my comments on a possible realignment of market rates.

That said, it can also be suggested that AI generators are tools – some will learn to use them more effectively than others, just as some people are better at watercolors than with oil paints. The solution to this problem is for the creators to embrace AI and use it to increase their productivity so that they can accept five or ten times as many commissions paying one-fifth or one-tenth of what they used to command, while leveraging their artistic expertise.

So this line of argument is not as cut-and-dried as it first seems.

Specificity Of Style

Artists often feel that AI allows users to “mimic their unique style” without the artist receiving credit or compensation, effectively eroding their brand and professional identity.

For me, this is a far stronger argument than the preceding one, but I think the proposed remedy (don’t use AI, anti-AI, no AI, no, no, no!) is the wrong line to be pursuing. As I said earlier, I view creating something in the style of a living artist to be an ethical no-no. Once an artist is no longer available to take commissions by virtue of being dead, that’s a different story.

I think the correct remedy here is an extension of copyright protection to include the “distinctive style” of an artist. That’s already implied in the existing protections – more strongly in some fields than others. I always remember the time John Fogerty was sued by his previous record label for sounding too much like himself. That case established (or reinforced) the principle that each artist carries with him a uniqueness of style that cannot be licensed or sold and is emphatically NOT included in the rights purchased when you acquire control over an artist’s work product.

I think the existence of generative AI strengthens the demand for such protection to be formalized and generalized to cover all modes of creativity, be it visual, textual, or audio. I would include under that umbrella a singer’s unique voice.

There would still be gray areas. A guitarist could argue that they had a distinctive and unique playing style, for example, and that style should merit protection. But they would have to prove that uniqueness in comparison to others within that musical field.

The final jigsaw piece would be to require AI interfaces to explicitly block requests that enter protected fields. “In The Style Of” is permissible once the ‘copyright’ on that uniqueness has expired, and should be blocked the rest of the time – UNLESS you are the artist in question, I suppose. But that gets murky, so let’s keep it clean, and ban them from being lazy, too.

Publisher and market integrity

Large TTRPG publishers have taken explicit stances, and the community judges them harshly when they waver.

Major players like Paizo (Pathfinder/Starfinder) and many prominent independent publishers have issued clear policies stating they will “only accept human-created artwork” for their products, usually citing ethical concerns regarding data scraping. This is often driven by a commitment to supporting the freelance community.

Wizards of the Coast faced significant backlash when work by multiple freelancers, and even their own in-house content, was discovered to contain AI-generated elements, despite WotC claiming an anti-AI stance. These incidents reinforced the community’s demand for strict auditing and absolute transparency.

Since it’s my position that human-created artwork is superior to AI-generated content in specific ways, I don’t agree with the reasons cited for these policies. Many criticize AI-generated art for lacking the “soul, texture, and character” of human-created fantasy illustrations. In the TTRPG world, art often sells the ‘vibe’ of the setting, and AI is frequently accused of producing generic, overly smooth, or inconsistent visuals that break immersion – criticisms that go directly to my contention that human art is better in key respects. But I do agree with the policies themselves as a general principle.

It’s when people seek to extend these policies down the scale to smaller publishers that I think problems start to arise. But I have to admit to being a bit conflicted over that problem.

Ideologically, I’m egalitarian; I favor “one rule for everybody”. And yet, in this circumstance, I think that different standards need to be applied at different scales, and I see the good and ethical use of AI generation as ‘raising the bar’ for the small operators to the point where they are keeping the big-ticket producers honest.

My policies and ideologies don’t hold all the answers, and that admission pushes me back into the shades of gray. If you can afford to, you should always hire human artists, because the results will be better. If you can’t afford to, I’ll give you a pass for using AI-generated art. So there are two rules and a lot of gray in between them. But no one hard-and-fast rule or principle yields a satisfactory answer in every case, and I do NOT agree with anyone that tries to implement one. I’ll respect their position in terms of their own products or pages – to the extent that I’m offering a plaintext version of this article, for example – but that is as far as I’ll go.

I generally think hard-liners are part of the problem in any field, anyway. Having ethics and principles isn’t a problem; expecting them to hold all the right answers every single time, that’s a problem, and a serious one.

AI Limitations

I’m only going to touch on this briefly, because it’s not directly relevant – but it does at least need to be mentioned.

AIs are not intelligent. They don’t understand a word they say. They are sophisticated systems that guess at the best ‘next word’ to follow the word they have just decided to use. That these words form sentences with emergent properties of meaning when read by a human is a reality with which they can barely cope.

Some AIs do better in this respect than others. For brainstorming, and nailing down technical details, they can present an enormous advantage – but when it comes to writing text for a TTRPG rules-set or adventure, they vary from inspiring to exasperating in equal measure, sometimes within the same paragraph!

TTRPGs and good written works rely on an internal consistency that has to run deep. Very deep. And that’s a consistency of the emergent properties of meaning within a series of statements. And since AIs don’t understand meaning…

AIs – LLMs – are capable of generating vast amounts of text quickly. They can talk the ear off a donkey, even without voice synthesis enabled! But they are prone to “hallucinations” (in which they make up facts) and struggle to maintain adherence to obscure, specific worldbuilding details. Or a specific role in the creative process. This makes unedited AI text a major liability for professional products – or for decent amateur ones.

Partnership

I view my use of AI as a partnership with a very creative research assistant. I can offer a vague idea and have it refined. I can ask for a suggestion – but I then have to take the ball offered and run with it, or use it to spark a better idea in a brainstorming session. It’s great for narrowing in on technical details – but you have to check its work. One phrase that repeats frequently in my interactions is “Ask questions for clarification if necessary.” And a lot of my inputs start by clarifying or reiterating something that the LLM has not taken into account.

I see the big picture. I use the AI to help clarify and define the details. I frequently need to steer the conversation, offer corrections or clarifications, or outright reject something the AI has suggested, while using that suggestion to clarify my own thinking to offer an alternative. On a number of occasions, the AI that I use most frequently has made three or four suggestions, and I’ve accepted none of them – but taken part of one and part of another and a touch of my own creativity and sense of narrative direction to weld the parts together into something better.

That’s leveraging the strength of the AI while using my ‘bigger picture’ to overcome its limitations. There’s a huge amount more that can be said on that subject, but I’ll save that for another article sometime.

Summing Up, Moving On

If I were to generalize and sum up my ethical position on the use of AI, it could be encapsulated in the statement, “AI as a tool or in partnership with human creativity is fine – with inherent limitations. AI as a primary generator of content that a reader or viewer would expect to be produced by a human is unethical at best and incompetent at worst.”

This is the ethical boundary that we, as consumers, have to navigate. And it’s precisely this boundary that the creators of Once We Were Heroes have forced me, and you as a reader, to confront. This game supplement heavily employs AI-generated art, and makes no bones about it:

    “Recognizing their limited artistic expertise and budget, Jeremy and Matthew at Fool Moon Productions leverage generative AI to enhance their creative outputs. This includes generating thematic “original” artwork, refining existing designs, and improving written content by correcting spelling and grammar. Notably, even this disclaimer was crafted with the assistance of AI.”

It’s not my job, as a reviewer, to argue the rightness or wrongness of this policy or the motivations behind it. It IS my job as a reviewer to consider the efficacy of the results and to bring the matter to the attention of potential buyers, who can then make up their own minds.

To the maximum extent possible, this review will focus on the content without considering its source. If the use of AI has achieved something spectacularly fitting or evocative, I’ll comment on the fitness and the evocative nature of the art – and if something doesn’t fit, I won’t cut them any slack for the source; it will be judged by the standards of human art.

But I wanted to make that clear before we start, too.

To facilitate this review, I have been given a free copy of Once We Were Heroes. I have no other incentive to produce anything other than a fair and unbiased review.

Once We Were Heroes – First Impressions

Front Cover

The front cover gives a first impression of two worlds and a location trapped in between. It’s clearly a collage of two separate pieces of art, and the styles don’t quite mesh. Art by Fool Moon Productions with AI assistance.

You can’t escape a first impression from the front cover, but it’s not all that promising a beginning. The art of the house at the bottom doesn’t feel like any of the other art in the product, and more importantly, doesn’t quite gel with the top part as a result.

The title – for some reason, I kept reading this as “We Were Once Heroes”, and I think that derives from a grammatical choice in the title – specifically, the absence of a comma after “Once”. It’s a piece of minutiae in the larger scheme of things, but it is the difference between a statement that attracts attention and commands interest, and something that’s more vague and leaves you wondering what it’s all about. Compare for yourselves:

Once We Were Heroes

Once, We Were Heroes

The Subtitle doesn’t help much. “An Adventure About Life After You Are Left” – Left where? Left Hanging? Left Alive? Left For Dead?

For all I know at this point, though, that might be a masterpiece summary – the answer might be “All Of The Above, and more”. At least it tells me that this is supposed to be an Adventure.

But the first impression is that the subtitle is there to try and hook a reader into buying the book because the title isn’t doing a strong enough sales job, and it’s too wordy to be very effective at that job. This is back-cover text, not something that belongs on the front cover, especially since it distracts from the cover art.

And, aside from knowing it’s an adventure, I still don’t really know enough about the product to be interested in buying it – though price would factor into that question. I’ll deal with that toward the end of this review.

Back Cover

The first place I go when the front cover doesn’t enlighten me enough (which is usually the case, to be fair) is the back cover, where I would expect to find a more verbose blurb describing the product.

Okay, so there are cosmic purple swirls evocative of space, or a peculiar storm, set against what might be a mountain and the same two ‘spheres’ of existence. And aside from the Fool Moon Logo and credit, there’s… nothing. This cements the impression that the subtitle was the back cover blurb at some point, and used on the back cover it would be more effective as a tease, because it wouldn’t be trying to sell the product.

As it stands, the back cover is pretty but leaves me none the wiser.

Fool Moon Productions

I want to call attention at this moment to the Fool Moon logo, which they were kind enough to supply in a higher-resolution format – the version below is actually a compromised version of it because I had to shrink it down.

https://www.dmsguild.com/en/product/535760/once-we-were-heroes

I’m calling attention to it because there’s a subtlety within it that you can barely make out in the back cover presented above. It consists – at first glance – of a wolf (evocative of a full moon) wearing a fool’s cap, and set inside a white disk (often used as a symbol of the full moon). But there’s the barest hint of something more, when you look closely.

To examine what I was seeing, I did a little digital editing to bring up the slight tonal difference that I was detecting and make it more prominent.

And now it’s clear to see that this isn’t just a yellow-white circle – it’s an actual representation of the full moon, as seen in the Northern Hemisphere.
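If you want to try the same trick yourself, a blunt contrast boost is usually all it takes – any image editor’s contrast or levels tool will do it. For those who prefer to script it, here’s a minimal sketch in Python using the Pillow library; the filename and the enhancement factor are my own illustrative assumptions, not a record of the exact adjustment I made.

    from PIL import Image, ImageEnhance

    # Load the image to be examined (hypothetical filename)
    logo = Image.open("fool_moon_logo.png").convert("RGB")

    # Exaggerate the contrast so that near-identical tones separate visually.
    # A factor of 1.0 leaves the image unchanged; something around 2.5-3.0
    # makes subtle shading within a pale disk much easier to pick out.
    boosted = ImageEnhance.Contrast(logo).enhance(2.8)

    # Save the result alongside the original for comparison
    boosted.save("fool_moon_logo_contrast.png")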

Sidebar: Inverted Moon

Wait, what? People in different hemispheres see the moon differently?

Yep. Because the Earth is a sphere, people in the southern hemisphere are upside-down relative to those in the north, and as a result, the moon looks upside down to us, and the phases of the moon run in the opposite direction.

This image is from a post by “The Secrets Of The Universe” on Facebook, and from the logo top right, I assume that it is copyrighted by them. I have tweaked it slightly to enlarge the explanatory diagram at the top. Link to their post containing the original image, or click on the image itself.

But this is a rabbit hole full of traps for the unwary. Their post’s URL, and its text, claim that this happens because the moon is a sphere. WRONG – though they get everything else pretty much right, and got called out on the error in the comments.

This post on Facebook by “World GeoDemo” gets the explanation right – but has the flags that identify the perceived images back to front, which is only likely to spread confusion further.

So even the people explaining the phenomenon struggle to get the details right. We live in a topsy-turvy world, sometimes…

And all this because I wanted to know which perspective on the moon was being illustrated by Fool Moon’s logo.

Getting back to the point that I was trying to make: while it might have been more effective to paint out the ‘dark parts’ that lie under the wolf, the tonal difference as shown is subtle enough that you don’t really notice it – it’s only when you darken those ‘blue areas’ that it becomes noticeable.

But the attention to detail displayed in the logo, as a general statement, boded well for what I might find within the product. Nuances and details and subtlety are what it promises; now it’s up to the product to deliver.

The other thing that scrolling through the PDF to the back cover does is hint at the scale of the product – the back cover is page 158, with the front cover counted as page 1. It’s BIG, a lot more so than most ‘adventures’, by a factor of 4 or 5. And that’s an important thing to notice at this point.

Art

Some of the art is quite evocative. This is perhaps the best image in the product, but one or two others come close. For the most part, though, the art is strongly illustrative but nothing more. It does (mostly) avoid the ‘plastic’ impression that some AI art possesses, thanks to the careful and subtle use of textures. Art by Fool Moon Productions with AI assistance.

In fact, so much of the detail was lost in compressing the image above to fit Campaign Mastery’s display space that I decided to capture a larger partial image. The textures are still hard to make out but the impression they create is not. Art by Fool Moon Productions with AI assistance.

The art has been generated using Affinity Suite, Dungeon Draft, and 2-Minute Tabletop. I don’t know any of those tools, but the latter two sound like they are mapping-related, and there are a number of richly-detailed maps provided, so I assume that the first was the primary source for the artwork. The disclaimer, quoted earlier, suggests that the primary human creators involved in the artwork creation were Jeremy “Wolf” Morris and Matthew “Soulforge” Walsh, who are also listed as the writers of the product.

And, for the most part, it’s not bad. I’ve included both the best and (in my opinion) worst as illustrations in this post, but for the most part, it’s effective – at communicating to the GM. I’ll delve into that comment a little later in the review; I’m still conveying my first impressions at this point.

Day-Night Theme

Many of the pieces contain a day-vs-night theme, which is obviously related to the ‘two worlds’ impression created by the cover. At this stage, I’m not sure of the relevance, but it’s too prevalent not to be significant, so I’ll be looking for an answer when I get into the text. Art by Fool Moon Productions with AI assistance.

Encounter Illustrations

There is a stylistic thread that runs through most of the encounter illustrations. Sometimes it works, sometimes I’m not so sure. This is one of those ‘unsure’ examples, but it’s certainly the cutest Beholder that I’ve ever seen. All it lacks is a ribbon tied into a bow on the top of its head. Is that impression appropriate? I don’t know yet. But this is NOT menacing in the way a Beholder usually would be. Art by Fool Moon Productions with AI assistance.

Compare the Beholder with this Half-orc image. Clever use of negative space creates an impression of size, while the textures transform an image that might have been cartoonish into something more substantial. I wish it were larger though – I’ll discuss that in the text below. Art by Fool Moon Productions with AI assistance.

So far as I can tell from a quick glance through the pages (used to select the images extracted for this review), there’s an image to go with each encounter, though this might be an inaccurate impression. It’s something for me to look for when I dig into the content.

Scene Illustrations

Locations are well illustrated. Some of them are stylistically more related to the encounter illustrations, others are more removed from that but with consistent tonality that works to create a sense of a unified whole. Art by Fool Moon Productions with AI assistance.

This is an example of a scene illustration that is more in line with the encounter illustrations. The biggest problem with it is the size – I had to ENLARGE it to fit the available space. Art by Fool Moon Productions with AI assistance.

I guess, right now, we get to the rub. In terms of presenting a representation of a scene or an encounter to the GM to help them interpret the text, the art is absolutely fine – for the most part. But it’s not all that useful for showing to players – it’s too small. Despite the large page count, this product would be even better if the locations and maybe the encounters were enlarged, even though this would add to that page count.

Sure, you can zoom in to enlarge the image…

Art by Fool Moon Productions with AI assistance.

…but that’s not a perfect solution. Either you cut the top and/or bottom off images, or you show players content to the side of what you’re trying to show them. That could be another area, it could be an encounter, it could be a magic item, it could be text – but what it is most likely to be is a surprise-killer.
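There is a practical workaround, though it’s prep work that falls on the GM: crop the illustration out of the page before the session so that the picture is the only thing players can ever see. This is purely a suggestion of mine, not something the product describes; the sketch below uses Python and Pillow, and the filenames and pixel coordinates are placeholders.

    from PIL import Image

    # Open an exported page of the PDF (placeholder filename)
    page = Image.open("owwh_page_export.png")

    # Bounding box of the illustration in pixels: (left, top, right, bottom).
    # These numbers are placeholders - measure the real ones in any image viewer.
    illustration = page.crop((120, 300, 980, 860))

    # Scale the cropped region up for display at the table or on a screen,
    # leaving the original page untouched.
    display = illustration.resize(
        (illustration.width * 2, illustration.height * 2),
        Image.LANCZOS,
    )
    display.save("player_handout.png")

That gets you a player-safe handout with no stat blocks, text, or neighboring encounters in view – but a buyer shouldn’t have to do that for themselves, which is really the point: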

Not enough thought has been put into how customers will actually use the product.

Having been involved in the production of Assassin’s Amulet and a few other things over the years, I can see why this has happened – it’s essentially the age-old problem of not seeing the forest for the trees, and it’s an easy trap to fall into. In a nutshell, the creators were so busy actually making the content that no-one stepped back to look at usage, or not closely enough, anyway.

This goes right back to the initial content design decisions. Presenting the illustrations as full-width, 1/3-height panels would need to be decided right from the beginning, because it affects the size of the illustrations that you need. It would have made layout a lot more difficult, with text in columns and illustrations not. But the product would be a lot more user-friendly as a result.

Character Illustrations

There are plenty of character illustrations, too. I’m not sure if this is a petrified character or a statue – not without consulting the text – but it’s effective. Art by Fool Moon Productions with AI assistance.

This image is probably more indicative of the character illustrations, many of which are obvious homages to characters from popular culture. Are these NPCs or PC presets? I’m not sure, yet. There are lots of more typical spot illustrations throughout, too. Art by Fool Moon Productions with AI assistance.

The same problem affects most of the character illustrations in the book.

Now, I don’t see this as a flaw in the product; it’s a lost opportunity to improve the product. It won’t actually make the product unusable, by any stretch of the imagination, and that’s the distinction that defines what I consider to be a flaw.

The Prelude Page

I don’t know whether they referred to this internally as a prelude or a preamble, but it’s the first solid information we get about what we’re looking at. It’s worth quoting the text in full:

An Adventure About Life After You Are Left

Step into the well-worn slippers of elderly parodies of pop culture heroes and heroines, enjoying a mundane day at the Adumbral Strobus Home for Retired Adventurers. But the ordinary turns to chaos when the entire facility is suddenly whisked away to another plane of existence. Waking up in this bizarre new realm, the adventurers quickly realize they’re not in Kansas anymore, Toto.

As they explore their surreal surroundings, they must unravel a series of perplexing mysteries. Clues scattered throughout the complex will help them escape the pocket dimension, discover the fate of their fellow residents, navigate the bizarre mutated growths and entropic rot, and decipher the strange artwork depicting one of their own. Along the way, they might even uncover some juicy staff scandals.

Venture into the enigmas of the Adumbral Strobus Complex to uncover what Dr Mortem has been doing with the poor inmates of the Asylum for the Neglected Elderly. Confront him in the Adumbral Strobus Institute of Entropic Research to find a way to return yourselves and your home to the material plane. Can you solve the riddles, face the horrors, and lead your comrades back home? Adventure and intrigue await in “Once We Were Heroes”!

And remember, whatever you do, don’t look too closely at the toilets.

Okay, so some of the characters are presets, and some are NPCs. The premise is that a nursing home for elderly ‘retired heroes’ from many different realities gets pitched into somewhere else, and the main quest is to get home again. But there are side quests along the way that may impact the success or failure of that main quest. This is a micro-game setting as much as it is an adventure.

Nostalgia, pop culture, iconic characters, and a situation that pitches them all into one last great adventure – sounds intriguing.

Let’s talk for a minute about the font. For viewing on the internet or on screen, it’s long been recognized that a serif font is not ideal – that’s why Campaign Mastery uses a dirt-common sans-serif font for its content. It’s more legible and less tiring. On the printed page, that is reversed. You can read a serif font on the printed page up to three or four times as quickly as you can a sans-serif font. So this product is optimized for screen viewing and not for printing. That’s fine, it’s just something to be aware of.

Because you want headings to stand out, they are frequently in whatever font you aren’t using for your text, and that’s the case here, too. So the designers of the product know what they are doing, or (at the very least) have imitated the work of someone else who knows what they are doing, in terms of typography.

There’s something a little strange about the line heights in some of the text, however. This is usually a result of peculiarities with the actual font used, and it’s incredibly hard to get right. I can’t mark the product down because of it, but I have to mention it.

The text above is then followed by a humorous “Disclaimer” passage which at first glance might appear to be just fluff. This is written, like all fine print ever, in a far smaller version of the main font. But it does actually serve a valid function in terms of the content – in essence, it heads off the likelihood that someone will disagree with the specific adaptation of a specific entity from pop culture.

“Involuntary translocation across dimensional boundaries may present unforeseen hazards. Accordingly, Adumbral Strobus accepts no liability for any personal belongings that may become entropically compromised, nor for any injuries, accidents, transmogrifications, or sudden instances of extra-dimensional dissolution occurring within the confines of our esteemed establishment during such excursions. For your safety and well-being, certain chambers, thoroughfares, and inter-dimensional portals may be sealed off without prior notification.

“Height, weight, and chronological restrictions may apply in some dimensions, and individuals with specific physiological, psychological, or metaphysical conditions or impairments may find themselves unable to participate in certain dimensional experiences. It is advised, with the utmost gravity, that consumption of any foodstuffs or beverages discovered in alternate realities is strictly ill-advised, as Adumbral Strobus accepts no responsibility for any ensuing transformations, spontaneous combustion, or heroic expulsions of stomach contents that may result from such gastronomic indiscretions.”

The disclaimer continues for another couple of paragraphs after that.

This is exactly the sort of nuance and attention to subtle detail that I expected to find from the Logo, and so it gets a big tick. The final sentence is worth highlighting because it (a) smacks of an Alice-In-Wonderland vibe, and (b) implies that some characters who take the risk may regain some of their youth and former glory. But it also suggests that such reactions will be addressed on a case-by-case basis within the content – which speaks well of the attention to detail within the content.

The Credits and Contents Pages

Pages 4-6 cover this ground. I noted that the credits acknowledged the copyrights over D&D, Forgotten Realms, Ravenloft, and Eberron amongst others.

The contents page reinforces earlier impressions. The introduction runs for four pages from 7 to 11, and will get looked at in detail below. Chapter 1 is “Welcome To The Adumbral Strobus”, Chapter 2 is “The Extra-planar Adventures”, Chapter 3 is “Asylum for the Neglected Elderly” and Chapters 4 and 5 relate to the “Institute of Entropic Research”. It also contains 4 versions of the Aftermath and name-drops three more entities: Mortem, Yixith, and Xeghic. At this point, I know from the prelude that Mortem is a mad scientist who has been experimenting on patients, but don’t know the other two – so I suspect (until I know better) that they are the personifications of the “Day vs Night” conflict implied by the artwork. If so, one or both are probably responsible for the transdimensional relocation – but that’s just speculation with precious little solid foundation.

I have to admit to having a minor problem with the name “Adumbral Strobus” – I keep wanting to read it “Admiral Strobus”. That might be just me, or it might be more common than I think it is. But I’m quite sure that it would trip me up sooner or later.

The 5 main chapters are then followed by 7 appendices, and Appendix C, “Character Concepts”, stands out to me. It tells me – without actually saying so – that this is an adventure designed for some variety of D&D / Pathfinder, because it lists the different character classes and then offers two residents as representative of each class.

The Homages, when you look at them, are very tongue-in-cheek. The one that I used as an illustration is of “Prof. Alfus Percy Ulric Bron Dumblebeard” – I don’t think anyone will need a second guess as to who this is supposed to represent. But that sets a tone for the rest of the product that seems a little incompatible with the content thereof – it will be interesting to see how they cope with that.

The Introduction

Let’s look at the subsections of the Introduction – “About This Adventure,” “Once They Are Heroes,” “Adventure Summary,” “Running The Adventure,” “Character Creation,” “Locales” and “Dungeon Master’s Preparation Checklist”. Some of these are subdivided.

The Game System

Quote: “Once We Were Heroes” is an adventure based on the 5th edition of the worlds most popular role playing game, designed for four to six characters, where the player characters take on the roles of the story’s heroes. This book outlines the villains and monsters they must defeat, as well as the locations they must explore, to successfully complete the adventure.

So, that answers that question, but it produces a big black mark on the product in terms of my personal taste.

You see, like a lot of others, my friends and I participated in the WotC 5e play-test, back when it was “D&DNext,” and after a while, we noticed that every time our feedback said “Zig Left,” the next iteration of the rules went “Zag Right”. There was little-or-no interaction with anyone at WotC in the playtesting feedback reports that we filed, so there was little explanation as to this phenomenon; we could only assume that “Zag Right” was the more popular choice amongst other playtesters. Slowly, what ended up D&D 5e became something we were no longer interested in playing. Some have since changed their minds; others have not. It is what it is.

The problem with tying yourself to one game system so absolutely is that you find yourself living and dying with that game system. When writing Assassin’s Amulet, my co-authors and I worked very hard at making everything compatible with both D&D 3.5 and Pathfinder for that very reason.

Does that mean that this is unrunnable, or that it shouldn’t even be up for purchase consideration? Absolutely not. But it does mean that to run it, I would need to adapt it, and that adds to the hurdles that the quality of the product has to surmount.

Anyway, getting back to the “About This Adventure” text… setting for this adventure, right… can be placed in many published settings or even a world of the DM’s creation, good… Intended to be played as a one-shot, okay… Players can either choose from the provided options or create their own 10th-level characters, okay.

…The Tone of this adventure is a comedic take on a horror mystery, okay that’s interesting – those two are hard to make go together (though it can be done)… encourage you not to take it too seriously, okay…

Once They Were Heroes

“Many years ago, the world was saved by a legendary group of adventurers. They stood against the darkness, vanquished terrible evils, and ensured peace for generations…”

So the characters / PCs are not from ‘all over,’ they were allies and teammates who worked together, and then ALL of them ended up in this place? The first part is a disappointment, and the second strains credibility to breaking point right off the bat.

Were I to run this adventure, I would probably go back to my original impression – that these are retired heroes from multiple planes of reality who have been ‘parked’ in this facility; they don’t know each other; and the big thing that the facility offers (besides aged care) is anonymity, distance from the scenes that made you legendary, so that no-one from home can call you up one last time. This is a retirement home.

Some may find that this interpretation is even harder to swallow, in terms of credibility, and it probably is – if you run it using normal characters and not the ‘pop culture icons’ provided. But that risks undermining the ‘fun factor’ and making this all too serious. And if you create your own versions of iconic pop culture characters, you’ll find yourself back at the same basic question.

Of course, you may find that the premise doesn’t stretch your credibility as badly as it does mine – but that still doesn’t negate the possibility that your players may struggle with it more than you. So this is something that every GM will have to at least think about addressing.

The introduction then goes on to outline the adventure, but I’m not going to get into those specifics; there’s a lot of information that players will have to find out the hard way.

The plotline breaks down into three main sections – a ‘get to know you’ routine morning (my comments above play into this section very heavily); a sudden event and their need to work out what’s happened and what they can do about it, which leads into investigating the mystery and stumbling across side-plots; and the ultimate confrontation and resolution of the plotline.

Running The Adventure

This is pretty standard fare, with no surprises. Stat blocks for all encounters, and any spells or equipment referenced are provided, so the PHB and DMG are the only real requirements.

Character Creation

This section contains ‘meta-rules’ for character generation and explicitly references the PCs as parodies of pop-culture icons, who have aged and retired. It also outlines equipment (very limited) and rules for aging the characters (which may not go far enough, but there’s a playability need that has to be taken into account).

“Additionally, randomly allocate one flaw and one feature to each character, either by rolling a d20 and referring to the table in the Appendix A or by dealing cards from the provided deck. Encourage players to incorporate these traits into their role-playing to add depth and humor to their characters.”

The text also states that the characters supplied in appendices C and D should be considered backups for players who are struggling to create their own characters, not as the primary source.

Locales

Interior maps are provided for three buildings within the Adumbral Strobus complex – the Home For Retired Adventurers, the Asylum for the Neglected Elderly, and the Admin building, which includes the facilities belonging to Dr Mortem.

There are two pocket dimensions, the Everburn and Evergloom, which have an interesting cosmological concept that makes total sense in terms of the adventure as described (I’m being deliberately vague to leave players who may read this in the dark).

Visiting these pocket dimensions is not quite what players might expect – there are stings in the tail that are exactly the sort of thing that I like to build into my own campaigns.

This section also categorically identifies Yixith and Xeghic, who were name-dropped in earlier material, and their relationship to the plotline. I have one suggestion to make in this respect but don’t want to make it too easily accessed, so it’s in black text against a black background in a text box below – select the text contents with a shift-and-mouse-drag to read it. The text DOES contain spoilers that will ruin the adventure for any player who reads it, be warned.

One realm is a microplane of life and the other of death. Yixith and Xeghic are inhabitants of these microplanes, one to each. The depictions of each match the illustrations of the microplanes. I suggest REVERSING the indicated images WHEN THEY ARE ‘AT HOME’ so that they contrast with their environments. This will throw a curve ball that is likely to deceive even experienced players – for a while.

After a spot illustration of a nameplate that is REALLY hard to read, the introduction segues into a brief description of the setting – the grounds of Adumbral Strobus, the retirement home building, the Asylum, and the Institute.

Maps vs Battlemaps

The creators suggest using theater of the mind, with the GM referring to the maps provided for cues and the battlemaps in Appendix G reserved for combat situations. They point out that this will speed play, which is true. But they don’t mention that a battlemap should only be placed on the table when combat is actually about to begin – don’t telegraph the situation to the players! Stay in theater-of-the-mind mode until the last possible moment.

This also plays into my statements regarding image size. It can be argued that these are intended only for the GM, and not for player consumption, and it seems clear that this is what the writers had in mind; but it can also be argued that using theater of the mind is sped up and improved by giving a common visual reference for the group to process.

Prep Checklist

This has some additional steps not previously mentioned, and shouldn’t be ignored. But that’s what is most likely to happen because the only two entries on the first page on which it appears are reiterations of advice already provided. All the new content is on page 13. This is the biggest misstep so far in the content, in my opinion; if this is as bad as things get, OWWH will deserve very high praise and recommendations, indeed.

Encounter Balancing

Closing out the Introduction is a section on Encounter Balancing. There’s nothing startling or wrong with this section; the biggest issue is what is Not there.

This adventure is designed, according to the “About This Adventure” text, for 4-6 characters, with a presumed ratio of one character per player.

This section shows how to adjust encounters for 4, 5, or 6 players. It also has an adjustment for having fewer than the recommended number of players (3). But it makes no accommodation for groups with more than the recommended number. It’s not likely to come up often – but surely expending the three lines of text needed to cope with 7 or 8 players would not have been too much to ask?

That said, as I commented above, if this is the biggest faux pas, this adventure will be doing very well indeed.

Looking Deeper – Chapter 1

I’m not going to break this down into subsections the way I did the introduction – there will be too much trouble with spoilers if I do that. Instead I’m going to skim the chapter and report back.

  • While I can guess, I don’t know for certain what “Balloon Volleyball,” or its in-game equivalent, “Beholder Ball”, is.
  • It would have been a good idea to warn the GM to come up with “20 questions” for the Getting Ahead game. Unless this game is also not what I think it is.
  • Tess Trill – every facility of this type needs a hot girl for those characters so inclined to drool over, and she fills that need here. Her male equivalent, for those looking in the other direction, is the cleaner, Fenim. The text hints that he might have feelings for her, of which she is naively unaware. Adding the above to their respective descriptions adds massively to the background and general realism of the setting – even if they are cliches.
  • That credibility is sorely needed to counterbalance the presence of Derrick the Chevalier. Older nobility, as a general rule, do NOT get shuffled off to somewhere like this. Instead of an actual Noble, he should be a commoner with delusions of Nobility – or maybe pretensions of Nobility.
  • This whole sub-sequence would be a lot easier to roleplay if there was some indication of what this group was actually up to – they are clearly up to something that they probably shouldn’t be. The GM should probably also prepare some relationship cues that can be expressed through dialogue with the PCs. These might range from friendly (“Don’t forget we’ve got a chess game to finish later”) to softly hostile (“Mind your own business, [PC], and I’ll mind mine, and we’ll both be happier for staying out of each other’s way.”) In general, I get the impression that the PCs are the ones who have ‘settled’ into a calm existence in the retirement home, while this group are those who are still rebelling a bit and bucking the discipline. That, too, would be useful direction – especially if that wasn’t the impression the creators intended.
  • Okay, now we get the explanation of the 20 questions game. Some sort of indicator at the first mention that ‘details will be provided below’ would have been helpful.
  • While the text solves the puzzle, some sort of motivation on the part of the guilty party would be helpful.
  • Context within the adventure explains the Beholder image – so my earlier comments regarding it can be ignored.
  • The first real plot hole – “After the conclusion of the pirate hunt game”… but no such game has been specified or described.

Nine notes, two of them canceled out by a third, and only one (maybe two) really critical. I’ve read a lot of adventures and while there have been one or two that have scored ten out of ten for content, the vast majority have far more serious faux pas and plot holes.

Narrative Content

Most importantly, the narrative generally succeeds in bringing the location to life in a way that feels natural, realistic and interesting. Nailing any two of those three can be difficult; ticking all three boxes – especially in such an unorthodox setting, with… unusual… characters – is top-rate work.

Locations, Encounters, Mysteries, Solutions, and Action: Chapters 2-4

At this point, I don’t think I need to delve into these areas too deeply. While it’s possible that one of them will lower the established standard, there’s no reason to expect it. A quick skim of the next few chapters confirms that impression; this is a really well-written, well-crafted adventure.

It may have the occasional small hole for you to plug, but nothing that won’t be easily taken care of if you do what everyone always says to do and read the whole adventure before play.

I’ve very much been mindful, in writing this review, not to read ahead, but to generate my comments as I came to each passage of content. That permits an honest impression of what’s actually presented by that point in the product, with no cheating by looking ahead.

When I was selecting images, I was deliberately careful to avoid reading any of the text. When I was reading the introduction and making comments on it, I wasn’t looking ahead – I was reacting to what was currently in front of me, in the context of what I had already read. Similarly, my notes on Chapter 1 were very much stream-of-consciousness as I was reading – and you can see in those comments where that caught me out.

Above all else, I was making every effort to make this review both honest and comprehensive, without any bias resulting from the source of the artwork. I hope that I’ve succeeded, so that you can make a fair assessment of what’s being offered without that assessment being tainted by any bias over the art’s source.

Price

The price is AU$7.58, which is US$4.95. I would actually have expected the price to come out at an even $5 from this conversion; I suspect that what I got was the “live” conversion rate and not the daily rate. And if you don’t know the difference, don’t worry about it.
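If you’re wondering why that struck me as odd, the implied exchange rates are trivial to check – a back-of-the-envelope sketch using only the listed prices (these are not official rates):

    # Exchange rate implied by the listed prices
    aud_price = 7.58
    usd_price = 4.95
    implied_rate = usd_price / aud_price      # roughly 0.653 USD per AUD

    # The rate that a 'clean' even US $5.00 price point would imply instead
    clean_rate = 5.00 / aud_price             # roughly 0.660 USD per AUD

    print(f"listed: {implied_rate:.3f}, even-$5: {clean_rate:.3f}")

The gap between those two figures is small enough to be exactly the sort of thing a live conversion produces.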

Where Do You Get It?

https://www.dmsguild.com/en/product/535760/once-we-were-heroes – or just click on any of the illustrations excerpted from the product.

The Judgment Call

So here’s the bottom line: If you are really seriously opposed to AI-generated art in RPG products, I don’t think this adventure will change your mind.

If, however, you are willing to even contemplate the possibility that there are potentially valid counterarguments to that opposition, this adventure has enough merit that you should contemplate buying it.

Only the maps are really essential for play; you can blank out every other illustration and still be left with a product worth your attention. It will be diminished by that act, but that’s your choice to make.

If the art had not been AI-sourced, there are two possible paths that this adventure could have taken:

  • Far less art, far weaker presentation, and far less appeal despite the length. Marketplace viability would probably require reducing the price by a third, eating directly into the profits and making the existence of another small publisher less viable. Or,
  • Far less art of potentially slightly superior quality, and a price tag closer to USD $40 – a price that would be sure to compromise sales. The net effect is the same – reduced profitability and a small publisher becoming less viable within the hobby.

Some may argue that no publisher that crosses their hard line deserves to be viable in the market. I think that’s going too far.

For my (metaphoric) money, Fool Moon have done everything right in terms of ethics, here. They are up-front about the art and its source. They have done their best to leverage the output to the maximum benefit of their product without making it an indispensable element of that product.

Is it the greatest RPG product ever published? Probably not, but what right do you have to expect that – especially at this price point?

Is it worth every one of those US dollars? I think it is, and then a couple. And I don’t think you can ask more of Fool Moon Productions than that.

Leave a Comment

Once We Were Heroes and the AI Controversy – AI Redacted


This post is a review of Once We Were Heroes by Fool Moon Productions, which uses art that’s AI-Generated. So I’ve had to set some ground rules.

As the owner/operator of Campaign Mastery, I have spent a lot of time thinking about what the site’s policy should be with respect to art by Generative AI, and the text below is the result.

Campaign Mastery Policy on AI-generated Art

1. Campaign Mastery will not use or show AI art unless it is profoundly essential to the content of an article. “profoundly essential” includes reviewing a product, tool, or published work which uses AI-generated art, or when the art itself is the subject of the article.

2. AI art will never be used to replace any original art that would normally have been commissioned from or provided by an actual human artist by Campaign Mastery.

3. When AI art is used, a disclaimer will always warn readers, as shown above. This will precede any significant article content and especially any AI-generated content. Whenever possible, a link will be provided to a plaintext, AI-free version of the article.

Campaign Mastery Policy on AI Text

1. While text generated by an AI may be quoted, it will never used to replace human-generated text. All text published on Campaign Mastery must be substantially written, analyzed, and edited by a human author.

2. Text generated by an AI may only be quoted for the purposes of analysis, illustration, or conversation (eg demonstrating prompt engineering). Any such text will be clearly identified as to source. Analyses performed by an AI must be converted into a HTML-code table.

3. Outside of direct quotation, AI may have been used for research or brainstorming, or generating outlines or summaries of other texts. Every such use will be verified for accuracy by a human and the final text will always be written and edited by a human.

4. No third-party submissions which are obviously AI-Generated in the exclusive opinion of the site owner will be accepted for publication.

Campaign Mastery Policy on AI Audio and Video

1. While Campaign Mastery is not an AV site, from time to time Audiovisual materials may appear, and some of these may be AI generated in some respect.

2. If any aspect of these materials (eg AI voice-overs, background music, etc) is significantly AI-generated, the materials will be treated as though they were “AI Art” as per the policies stated above.

This text has been added to the policies page and is effective as of this post.

Human-AI Collaboration?

I couldn’t find what I wanted to use to illustrate this post – an Artificial Artist painting a question mark. This is a next-best alternative, based on robot hand human handshake by Mohamed Hassan, to which I added a question mark image by Gerd Altmann in the background, and some color tweaking to get them to match. Both images were sourced from Pixabay using their “authentic” (human only) setting.

The AI controversy – an overview

I always knew that this day would come eventually. I had expected that the occasion would be when I wrote and published an article on how I use AI within my campaigns, and the techniques and limitations that come with it – but that article isn’t written yet, because the uses that are most illustrative come from an adventure that hasn’t yet been played.

Or another article that’s been drafted on the limitations of AI and how it could be improved – and on how to get the most out of what’s already here.

Polarization, Content, and Hard Lines

The issue of AI-generated art and other content is one of the most polarizing issues in the hobby. It has forced publishers, creators, GMs, and customers / players to establish rules that are starkly black and white, often with the best of intentions – but when those clash, the hobby itself can be the loser.

I’m more in favor of a softer line that acknowledges gray areas with transparency. There are certain ways that I consider ethical when it comes to the use of AI, and others in which it clearly is not. Asking for anything “in the style of” a living artist is a big no-no, for example. Asking for something in the style of a long-dead artist, that’s more of a gray area.

I regard AI as a tool, and like all tools, it can be used for good or ill. Throw in a healthy dose of pragmatism, an acknowledgment that no black-and-white policy can satisfy everyone and that there can be good and valid reasons for the ethical use of the tool, and you find yourself in the same uncomfortable middle-ground that I occupy, and that the policies stated above are intended to encapsulate and define.

Ethics and Labor Rights

This actually breaks down into a number of related concerns. First, there’s the conflict between how generative AIs learn to create their content and respecting the rights and integrity of human creators.

Most AI models are trained on massive accumulations of data scraped from the internet with no concern for the sources’ rights, and without recompense. And that irks those who support the rights of writers and artists. I’m one of them, so naturally, my sympathies align more with those who are critical in this regard.

But that perspective is nuanced by the reality of the internet. Once material is publicly available, it’s there for anyone to refer to and use as reference or inspiration. So long as sufficient input from outside that source is incorporated, and in a non-superficial way, so long as you are building on what has been made available and not simply copying it outright, how is what an AI does any different from what a human writer does?

If I want to create an image of a clown, and I start by doing research using Google Images on how other artists have depicted clowns to get ideas, that’s generally considered fine – because at the end of the day, I have to synthesize all those elements and ideas together into my own representation of “a clown”. I don’t generally restrict or place boundaries on those searches; I want as much fuel for the creative fires as I can get.

It’s a long-held maxim – if you don’t want something to be public, don’t put it on the internet.

Here’s a bone to chew on: if it’s valid and legal for a human to be educated by viewing online content, how is it not valid and legal and Fair Use for an AI system to use it in the same way, for the same purpose?

Shades of gray.

Some content creators argue that the results are a form of “unlicensed derivative work”. And that might be true, if only that content creator’s works were used to train the AI – but with every outside source, the purity of that argument gets eroded.

There comes a point where so many sources are being fused into one that you have to draw the line. It’s like music – the difference between doing a cover version of a Beatles song and drawing inspiration from the Beatles is clear and obvious. Both are forms of copying – but the nuance is completely different. One requires the payment of a license fee to the songwriters, and the other doesn’t. Doing them without that payment is legal and ‘fair’ in one case – and completely the opposite in the other.

You can’t copyright the D7(diminished) chord just because you’ve used it in a song. It’s there for anyone else to use.

What’s more, consider the necessary ‘spark of originality’ that distinguishes human creation from artificial construction. In order to generate a good image, a human user of an AI has to specify a prompt, and the general rule is that the more detailed the prompt, the better the result. Is this not providing the needed ‘spark of originality’ into the resulting image?

The more vague and generalized the input, the weaker this line of argument, I admit. But where do you draw the line? How many of these creators started out by imitating someone else’s work?

Shades of gray.

I don’t see how you can end up anywhere else in the argument if you’re applying any half-way reasonable standards.

Devaluation Of Creativity

There is a widespread fear among freelancers (artists, mapmakers, writers, editors, you name it) that AI tools will drastically reduce the market rate for their services. Why pay $500 for a unique monster illustration when you can generate a passable image for nothing, or close to it?

And they have a valid point – up to a point.

The keys to deciphering this argument are subtle. AI images may be ‘passable’ but they aren’t going to be as nuanced as a bespoke image from a human artist. This is stealing my own thunder to a certain extent, but here’s the reality: The more detailed you make an AI prompt, the more likely you are to get something close to what you want – but the more likely it is that some crucial element, spelt out in specific detail, is going to be left out completely. And if what you’ve requested isn’t something that people routinely post images of on the internet, you’re going to struggle – try generating an image of a “crashed alien spacecraft” and what you generally end up with is a flying saucer hovering serenely in the air. People don’t take many photos of crashed objects! And if the AI can’t learn what it should look like, it can’t create something like it.

What this argument is really pointing out is that amateur-prompted AI art raises the bar of amateur art to something with much of the gloss of the professional artist. But the professional will always be better at capturing originality and bringing it to the creative table. The differences may be more nuanced than a black-and-white line drawing, but they are real.

You’re still getting something for your $500 that you don’t get from the cheaper alternatives – but it’s not the same thing as it used to be.

And this argument smacks of the ones raised by opponents of every technological advance, warning of the job losses that would follow. I’ve heard those arguments advanced against everything from the word processor to assembly-line robots. In every case, there has been more employment afterwards than before, once things settled down – but in some cases, those jobs have mandated an evolution of skill-set, and in others, a complete replacement. So the truth of the matter in this respect is, once again, nuanced.

The similarity not only weakens this argument considerably, it points out, more starkly, my previous point – you pay for the services of a human artist for what he or she can provide that the cheaper substitutes can’t. Will that result in a realignment of the market rates? Possibly. But that’s life, it happens to everybody, like it or not – things change, and you either evolve accordingly, or you stagnate.

But there is a sting in the tail – the proposition that this leads to a “race to the bottom,” where only AI-assisted production can compete on cost. And that’s a point that I can’t argue with, and hence my comments on a possible realignment of market rates.

That said, it can also be suggested that AI generators are tools – some will learn to use them more effectively than others, just as some people are better with watercolors than with oil paints. The solution to this problem is for creators to embrace AI and use it to increase their productivity, so that they can accept five or ten times as many commissions paying one-fifth or one-tenth of what they used to command, while leveraging their artistic expertise.

So this line of argument is not as cut-and-dried as it first seems.

Specificity Of Style

Artists often feel that AI allows users to “mimic their unique style” without the artist receiving credit or compensation, effectively eroding their brand and professional identity.

For me, this is a far stronger argument than the preceding one, but I think the proposed remedy (don’t use AI, anti-AI, no AI, no, no, no!) is the wrong line to be pursuing. As I said earlier, I view creating something in the style of a living artist to be an ethical no-no. Once an artist is no longer available to take commissions by virtue of being dead, that’s a different story.

I think the correct remedy here is an extension of copyright protection to include the “distinctive style” of an artist. That’s already implied in the existing protections – more strongly in some fields than others. I always remember the time John Fogerty was sued by his previous record label for sounding too much like himself. That case established (or reinforced) the principle that each artist carries with him a uniqueness of style that cannot be licensed or sold and is emphatically NOT included in the rights purchased when you acquire control over an artist’s work product.

I think the existence of generative AI strengthens the demand for such protection to be formalized and generalized to cover all modes of creativity, be it visual, textual, or audible. I would include under that umbrella a singer’s unique voice.

There would still be gray areas. A guitarist could argue that they had a distinctive and unique playing style, for example, and that style should merit protection. But they would have to prove that uniqueness in comparison to others within that musical field.

The final jigsaw piece would be to require AI interfaces to explicitly block requests that enter protected fields. “In The Style Of” is permissible once the ‘copyright’ on that uniqueness has expired, and should be blocked the rest of the time – UNLESS you are the artist in question, I suppose. But that gets murky, so let’s keep it clean, and ban them from being lazy, too.

Publisher and market integrity

Large TTRPG publishers have taken explicit stances, and the community judges them harshly when they waver.

Major players like Paizo (Pathfinder/Starfinder) and many prominent independent publishers have issued clear policies stating they will “only accept human-created artwork” for their products, usually citing ethical concerns regarding data scraping. This is often driven by a commitment to supporting the freelance community.

Wizards of the Coast faced significant backlash when work from multiple freelancers and even their own in-house content was discovered to contain AI-generated elements, despite WotC claiming an anti-AI stance. These incidents reinforced the community’s demand for strict auditing and absolute transparency.

Since it’s my position that human-created artwork is superior to AI-generated content in specific ways, I don’t agree with the reasons cited for these policies. Many criticize AI-generated art for lacking the “soul, texture, and character” of human-created fantasy illustrations. In the TTRPG world, art often sells the ‘vibe’ of the setting, and AI is frequently accused of producing generic, overly smooth, or inconsistent visuals that break immersion – and that goes directly to my allegation that human art is better in key respects. But I do agree with the policies themselves as a general principle.

It’s when people seek to extend these policies down the scale to smaller publishers that I think problems start to arise. But I have to admit to being a bit conflicted over that problem.

Ideologically, I’m egalitarian; I favor “one rule for everybody”. And yet, in this circumstance, I think that different standards need to be applied at different scales, and I see the good and ethical use of AI generation as ‘raising the bar’ for the small operators to the point where they are keeping the big-ticket producers honest.

My policies and ideologies don’t hold all the answers, and that admission pushes me back into the shades of gray. If you can afford to, you should always hire human artists because the results will be better. If you can’t afford to, I’ll give you a pass for using AI-generated art. So there are two rules and a lot of gray in between them. But no one hard-and-fast rule or principle yields a satisfactory answer in every case, and I do NOT agree with anyone who tries to implement one. I’ll respect their position in terms of their own products or pages – to the extent that I’m offering a plaintext version of this article, for example – but that is as far as I’ll go.

I generally think hard-liners are part of the problem in any field, anyway. Having ethics and principles isn’t a problem; expecting them to hold all the right answers every single time, that’s a problem, and a serious one.

AI Limitations

I’m only going to touch on this briefly, because it’s not directly relevant – but it does at least need to be mentioned.

AIs are not intelligent. They don’t understand a word they say. They are sophisticated systems that guess at the best ‘next word’ to follow the word they have just decided to use. That these words form sentences with emergent properties of meaning when read by a human is a reality they can barely cope with, let alone contend with.

Some AIs do better in this respect than others. For brainstorming, and nailing down technical details, they can present an enormous advantage – but when it comes to writing text for a TTRPG rules-set or adventure, they vary from inspiring to exasperating in equal measure, sometimes within the same paragraph!

TTRPGs and good written works rely on an internal consistency that has to run deep. Very deep. And that’s a consistency of the emergent properties of meaning within a series of statements. And since AIs don’t understand meaning…

AIs – LLMs – are capable of generating vast amounts of text quickly. They can talk the ear off a donkey, even without voice synthesis enabled! But they are prone to “hallucinations” (in which they make up facts) and struggle to maintain adherence to obscure, specific worldbuilding details. Or a specific role in the creative process. This makes unedited AI text a major liability for professional products – or for decent amateur ones.

Partnership

I view my use of AI as a partnership with a very creative research assistant. I can offer a vague idea and have it refined. I can ask for a suggestion – but I then have to take the ball offered and run with it, or use it to spark a better idea in a brainstorming session. It’s great for narrowing in on technical details – but you have to check its work. One phrase that repeats frequently in my interactions is “Ask questions for clarification if necessary.” And a lot of my inputs start by clarifying or reiterating something that the LLM has not taken into account.

I see the big picture. I use the AI to help clarify and define the details. I frequently need to steer the conversation, offer corrections or clarifications, or outright reject something the AI has suggested, while using that suggestion to clarify my own thinking to offer an alternative. On a number of occasions, the AI that I use most frequently has made three or four suggestions, and I’ve accepted none of them – but taken part of one and part of another and a touch of my own creativity and sense of narrative direction to weld the parts together into something better.

That’s leveraging the strength of the AI while using my ‘bigger picture’ to overcome its limitations. There’s a huge amount more that can be said on that subject, but I’ll save that for another article sometime.

Summing Up, Moving On

If I were to generalize and sum up my ethical position on the use of AI, it could be encapsulated in the statement, “AI as a tool or in partnership with human creativity is fine – with inherent limitations. AI as a primary generator of content that a reader or viewer would expect to be produced by a human is unethical at best and incompetent at worst.”

This is the ethical boundary that we, as consumers, have to navigate. And it’s precisely this boundary that the creators of Once We Were Heroes have forced me, and you as a reader, to confront. This game supplement heavily employs AI-generated art, and makes no bones about it:

    “Recognizing their limited artistic expertise and budget, Jeremy and Matthew at Fool Moon Productions leverage generative AI to enhance their creative outputs. This includes generating thematic “original” artwork, refining existing designs, and improving written content by correcting spelling and grammar. Notably, even this disclaimer was crafted with the assistance of AI.”

It’s not my job, as a reviewer, to argue the rightness or wrongness of this policy or the motivations behind it. It IS my job as a reviewer to consider the efficacy of the results and to bring the matter to the attention of potential buyers, who can then make up their own minds.

To the maximum extent possible, this review will focus on the content without considering its source. If the use of AI has achieved something spectacularly fitting or evocative, I’ll comment on the fitness and the evocative nature of the art – and if something doesn’t fit, I won’t cut them any slack for the source; it will be judged by the standards of human art.

But I wanted to make that clear before we start, too.

To facilitate this review, I have been given a free copy of Once We Were Heroes. I have no other incentive to produce anything other than a fair and unbiased review.

Once We Were Heroes – First Impressions

Front Cover

The front cover gives a first impression of two worlds and a location trapped in between. It’s clearly a collage of two separate pieces of art, and the styles don’t quite mesh.

You can’t escape a first impression from the front cover, but it’s not all that promising a beginning. The art of the house at the bottom doesn’t feel like any of the other art in the product, and more importantly, doesn’t quite gel with the top part as a result.

The title – for some reason, I started thinking of this as “We Were Once Heroes”, and I think that derives from a grammatical choice in the title – specifically, the absence of a comma after “Once”. It’s a piece of minutia in the larger scheme of things, but it is the difference between a statement that attracts attention and commands interest, and something that’s more vague and leaves you wondering what it’s all about. Compare for yourselves:

Once We Were Heroes

Once, We Were Heroes

The Subtitle doesn’t help much. “An Adventure About Life After You Are Left” – Left where? Left Hanging? Left Alive? Left For Dead?

For all I know at this point, though, that might be a masterpiece summary – the answer might be “All Of The Above, and more”. At least it tells me that this is supposed to be an Adventure.

But the first impression is that the subtitle is there to try and hook a reader into buying the book because the title isn’t doing a strong enough sales job, and it’s too wordy to be very effective at that job. This is back-cover text, not something that belongs on the Front Cover, especially since it’s distracting from the art of the cover.

And, aside from knowing it’s an adventure, I still don’t really know enough about the product to be interested in buying it – though price would factor into that question. I’ll deal with that toward the end of this review.

Back Cover

The first place I go when the front cover doesn’t enlighten me enough (which is usually, to be fair) is the back cover, where I would expect to find a more verbose blurb describing the product.

Okay, so there are cosmic purple swirls evocative of space, or a peculiar storm, set against what might be a mountain and the same two ‘spheres’ of existence. And aside from the Fool Moon Logo and credit, there’s… nothing. This cements the impression that the subtitle was the back cover blurb at some point, and used on the back cover it would be more effective as a tease, because it wouldn’t be trying to sell the product.

As it stands, the back cover is pretty but leaves me none the wiser.

Fool Moon Productions

I want to call attention at this moment to the Fool Moon logo, which they were kind enough to supply in a higher-resolution format – the version below is actually a compromised version of it because I had to shrink it down.

https://www.dmsguild.com/en/product/535760/once-we-were-heroes

I’m calling attention to it because there’s a subtlety within it that you can barely make out in the back cover presented above. It consists – at first glance – of a wolf (evocative of a full moon) wearing a fool’s cap, and set inside a white disk (often used as a symbol of the full moon). But there’s the barest hint of something more, when you look closely.

To examine what I was seeing, I did a little digital editing to bring up the slight tonal difference that I was detecting and make it more prominent.

And now it’s clear to see that this isn’t just a yellow-white circle – it’s an actual representation of the full moon, as seen in the Northern Hemisphere.

Sidebar: Inverted Moon

Wait, what? People in different hemispheres see the moon differently?

Yep. Because the Earth is a sphere, people in the southern hemisphere are upside-down relative to those in the north, and as a result, the moon looks upside down to us, and the phases of the moon run in the opposite direction.

This image is from a post by “The Secrets Of The Universe” on Facebook, and from the logo top right, I assume that it is copyright by them. I have tweaked it slightly to enlarge the explanatory diagram at the top. Link to their post containing the original image, or click on the image itself.

But this is a rabbit hole full of traps for the unwary. Their post’s URL, and its text, claim that this happens because the moon is a sphere. WRONG, though they get everything else pretty much right – and got called out on the error in the comments.

This post on Facebook by “World GeoDemo” gets the explanation right – but has the flags that identify the perceived images back to front, which is only likely to spread the confusion further.

So even the people explaining the phenomenon struggle to get the details right. We live in a topsy-turvy world, sometimes…

And all this because I wanted to know which perspective on the moon was being illustrated by Fool Moon’s logo.

Getting back to the point that I was trying to make: While it might have been more effective to have painted out the ‘dark parts’ that lie under the wolf, the tonal difference shown is subtle enough that you don’t really notice; it’s only when you darken those ‘blue areas’ that it becomes noticeable.

But the attention to detail displayed in the logo, as a general statement, boded well for what I might find within the product. Nuances and details and subtlety are what it promises; now it’s up to the product to deliver.

The other thing that scrolling through the PDF to the back cover does is hint at the scale of the product – the back cover is page 158, with the front cover counted as page 1. It’s BIG, a lot more so than most ‘adventures’, by a factor of 4 or 5. And that’s an important thing to notice at this point.

Art

Some of the art is quite evocative. This is perhaps the best image in the product, but one or two others come close. For the most part, though, the art is strongly illustrative but nothing more. It does (mostly) avoid the ‘plastic’ impression that some AI art possesses, thanks to the careful and subtle use of textures.

In fact, so much of the detail was lost in compressing the image above to fit Campaign Mastery’s display space that I decided to capture a larger partial image. The textures are still hard to make out but the impression they create is not.

The art has been generated using Affinity Suite, Dungeon Draft, and 2-Minute Tabletop. I don’t know any of those tools, but the latter two sound like they are mapping-related, and there are a number of richly-detailed maps provided, so I assume that the first was the primary source for the artwork. The disclaimer, quoted earlier, suggests that the primary human creators involved in the artwork creation were Jeremy “Wolf” Morris and Matthew “Soulforge” Walsh, who are also listed as the writers of the product.

And, for the most part, it’s not bad. I’ve included both the best and (in my opinion) worst as illustrations in this post, but for the most part, it’s effective – at communicating to the GM. I’ll delve into that comment a little later in the review; I’m still conveying my first impressions at this point.

Day-Night Theme

Many of the pieces contain a day-vs-night theme, which is obviously related to the ‘two worlds’ impression created by the cover. At this stage, I’m not sure of the relevance, but it’s too prevalent not to be significant, so I’ll be looking for an answer when I get into the text.

Encounter Illustrations

There is a stylistic thread that runs through most of the encounter illustrations. Sometimes it works, sometimes I’m not so sure. This is one of those ‘unsure’ examples – but it’s certainly the cutest Beholder that I’ve ever seen. All it lacks is a ribbon tied into a bow on the top of its head. Is that impression appropriate? I don’t know yet. But this is NOT menacing in the way a Beholder usually would be.

Compare the Beholder with this Half-orc image. Clever use of negative space creates an impression of size, while the textures transform an image that might have been cartoonish into something more substantial. I wish it were larger though – I’ll discuss that in the text below.

So far as I can tell from a quick glance through the pages (used to select the images extracted for this review), there’s an image to go with each encounter, though this might be an inaccurate impression. It’s something for me to look for when I dig into the content.

Scene Illustrations

Locations are well illustrated. Some of them are stylistically more related to the encounter illustrations, others are more removed from that but with consistent tonality that works to create a sense of a unified whole.

This is an example of a scene illustration that is more in line with the encounter illustrations. The biggest problem with it is the size – I had to ENLARGE it to fit the available space.

I guess, right now, we get to the rub. In terms of presenting a representation of a scene or an encounter to the GM to help them interpret the text, the art is absolutely fine – for the most part. But it’s not all that useful for showing to players, it’s too small. Despite the large page count, this product would be even better if the locations and maybe the encounters were enlarged, even though this would add to that page count.

Sure, you can zoom in to enlarge the image…

…but that’s not a perfect solution. Either you cut the top and/or bottom off images, or you show players content to the side of what you’re trying to show them. That could be another area, it could be an encounter, it could be a magic item, it could be text – but what it is most likely to be is a surprise-killer.

Not enough thought has been put into how customers will actually use the product.

Having been involved in the production of Assassin’s Amulet and a few other things over the years, I can see why this has happened – it’s essentially the age-old problem of not seeing the forest for the trees, and it’s an easy trap to fall into. In a nutshell, the creators were so busy actually making the content that no-one stepped back to look at usage, or not closely enough, anyway.

This goes right back to the initial content design decisions. Presenting the illustrations as full-width, 1/3-height panels would need to have been decided right from the beginning, because it affects the size of the illustrations that you need. It would have made layout a lot more difficult, with text in columns and illustrations not. But the product would be a lot more user-friendly as a result.

Character Illustrations

There are plenty of character illustrations, too. I’m not sure if this is a petrified character or a statue – not without consulting the text – but it’s effective.

This image is probably more indicative of the character illustrations, many of which are obvious homages to characters from popular culture. Are these NPCs or PC presets? I’m not sure, yet. There are lots of more typical spot illustrations throughout, too.

The same size problem affects most of the character illustrations in the book.

Now I don’t see this as a flaw in the product; it’s a lost opportunity to improve the product, but this won’t actually make it unusable, by any stretch of the imagination, and that’s the distinction that defines what I consider to be a flaw.

The Prelude Page

I don’t know whether they referred to this internally as a prelude or a preamble, but it’s the first solid information we get about what we’re looking at. It’s worth quoting the text in full:

An Adventure About Life After You Are Left

Step into the well-worn slippers of elderly parodies of pop culture heroes and heroines, enjoying a mundane day at the Adumbral Strobus Home for Retired Adventurers. But the ordinary turns to chaos when the entire facility is suddenly whisked away to another plane of existence. Waking up in this bizarre new realm, the adventurers quickly realize they’re not in Kansas anymore, Toto.

As they explore their surreal surroundings, they must unravel a series of perplexing mysteries. Clues scattered throughout the complex will help them escape the pocket dimension, discover the fate of their fellow residents, navigate the bizarre mutated growths and entropic rot, and decipher the strange artwork depicting one of their own. Along the way, they might even uncover some juicy staff scandals.

Venture into the enigmas of the Adumbral Strobus Complex to uncover what Dr Mortem has been doing with the poor inmates of the Asylum for the Neglected Elderly. Confront him in the Adumbral Strobus Institute of Entropic Research to find a way to return yourselves and your home to the material plane. Can you solve the riddles, face the horrors, and lead your comrades back home? Adventure and intrigue await in “Once We Were Heroes”!

And remember, whatever you do, don’t look too closely at the toilets.

Okay, so some of the characters are presets, and some are NPCs. The premise is that a nursing home for elderly ‘retired heroes’ from many different realities gets pitched into somewhere else, and the main quest is to get home again. But there are side quests along the way that may impact the success or failure of that main quest. This is a micro-game setting as much as it is an adventure.

Nostalgia, pop culture, iconic characters, and a situation that pitches them all into one last great adventure – sounds intriguing.

Let’s talk for a minute about the Font. For viewing on the internet or on screen pages, it’s long been recognized that a Serif font is not ideal – that’s why Campaign Mastery uses a dirt-common sans-serif font for its content. It’s more legible and less tiring. On the printed page, that is reversed. You can read a serif font on the printed page up to three or four times as quickly as you can a sans-serif font. So this product is optimized for screen viewing and not for printing. That’s fine, it’s just something to be aware of.

Because you want headings to stand out, they are frequently in whatever font you aren’t using for your text, and that’s the case here, too. So the designers of the product know what they are doing, or (at the very least) have imitated the work of someone else who knows what they are doing, in terms of typography.

There’s something a little strange about the line heights in some of the text, however. This is usually a result of peculiarities with the actual font used, and it’s incredibly hard to get right. I can’t mark the product down because of it, but I have to mention it.

The text above is then followed by a humorous “Disclaimer” passage which at first glance might appear to be just fluff. This is written, like all fine print ever, in a far smaller version of the main font. But it does actually serve a valid function in terms of the content – in essence, it heads off the likelihood that someone will disagree with the specific adaptation of a specific entity from pop culture.

“Involuntary translocation across dimensional boundaries may present unforeseen hazards. Accordingly, Adumbral Strobus accepts no liability for any personal belongings that may become entropically compromised, nor for any injuries, accidents, transmogrifications, or sudden instances of extra-dimensional dissolution occurring within the confines of our esteemed establishment during such excursions. For your safety and well-being, certain chambers, thoroughfares, and inter-dimensional portals may be sealed off without prior notification.

“Height, weight, and chronological restrictions may apply in some dimensions, and individuals with specific physiological, psychological, or metaphysical conditions or impairments may find themselves unable to participate in certain dimensional experiences. It is advised, with the utmost gravity, that consumption of any foodstuffs or beverages discovered in alternate realities is strictly ill-advised, as Adumbral Strobus accepts no responsibility for any ensuing transformations, spontaneous combustion, or heroic expulsions of stomach contents that may result from such gastronomic indiscretions.”

The disclaimer continues for another couple of paragraphs after that.

This is exactly the sort of nuance and attention to subtle detail that I expected to find from the Logo, and so it gets a big tick. The final sentence is worth highlighting because it (a) smacks of an Alice-In-Wonderland vibe, and (b) implies that some characters who take the risk may regain some of their youth and former glory. But it also suggests that such reactions will be addressed on a case-by-case basis within the content – which speaks well of the attention to detail within the content.

The Credits and Contents Pages

Pages 4-6 cover this ground. I noted that the credits acknowledged the copyrights over D&D, Forgotten Realms, Ravenloft, and Eberron amongst others.

The contents page reinforces earlier impressions. The introduction runs for four pages from 7 to 11, and will get looked at in detail below. Chapter 1 is “Welcome To The Adumbral Strobus”, Chapter 2 is “The Extra-planar Adventures”, Chapter 3 is “Asylum for the Neglected Elderly” and Chapters 4 and 5 relate to the “Institute of Entropic Research”. It also contains 4 versions of the Aftermath and name-drops three more entities: Mortem, Yixith, and Xeghic. At this point, I know from the prelude that Mortem is a mad scientist who has been experimenting on patients, but don’t know the other two – so I suspect (until I know better) that they are the personifications of the “Day vs Night” conflict implied by the artwork. If so, one or both are probably responsible for the transdimensional relocation – but that’s just speculation with precious little solid foundation.

I have to admit to having a minor problem with the name “Adumbral Strobus” – I keep wanting to read it “Admiral Strobus”. That might be just me, or it might be more common than I think it is. But I’m quite sure that it would trip me up sooner or later.

The 5 main chapters are then followed by 7 appendices, and Appendix C, “Character Concepts” stands out to me. It tells me – without actually saying so – that this is an adventure designed for some variety of D&D / Pathfinder, because it lists the different character classes and then offers two residents as representative of that class.

The Homages, when you look at them, are very tongue-in-cheek. The one that I used as an illustration is of “Prof. Alfus Percy Ulric Bron Dumblebeard” – I don’t think anyone will need a second guess as to who this is supposed to represent. But that sets a tone for the rest of the product that seems a little incompatible with the content thereof – it will be interesting to see how they cope with that.

The Introduction

Let’s look at the subsections of the Introduction – “About This Adventure,” “Once They Are Heroes,” “Adventure Summary,” “Running The Adventure,” “Character Creation,” “Locales” and “Dungeon Master’s Preparation Checklist”. Some of these are subdivided.

The Game System

Quote: “Once We Were Heroes” is an adventure based on the 5th edition of the worlds most popular role playing game, designed for four to six characters, where the player characters take on the roles of the story’s heroes. This book outlines the villains and monsters they must defeat, as well as the locations they must explore, to successfully complete the adventure.

So, that answers that question, but it produces a big black mark on the product in terms of my personal taste.

You see, like a lot of others, my friends and I participated in the WotC 5e playtest, back when it was “D&DNext,” and after a while, we noticed that every time our feedback said “Zig Left,” the next iteration of the rules went “Zag Right”. There was little-or-no interaction with anyone at WotC in the playtesting feedback reports that we filed, so there was little explanation as to this phenomenon; we could only assume that “Zag Right” was the more popular choice amongst other playtesters. Slowly, what ended up D&D 5e became something we were no longer interested in playing. Some have since changed their minds; others have not. It is what it is.

The problem with tying yourself to one game system so absolutely is that you find yourself living and dying with that game system. When writing Assassin’s Amulet, my co-authors and I worked very hard at making everything compatible with both D&D 3.5 and Pathfinder for that very reason.

Does that mean that this is un-runnable, or that it shouldn’t even be up for purchase consideration? Absolutely not. But it does mean that to run it, I would need to adapt it, and that adds to the hurdles that the quality of the product has to surmount.

Anyway, getting back to the “About This Adventure” text… setting for this adventure, right… can be placed in many published settings or even a world of the DM’s creation, good… Intended to be played as a one-shot, okay… Players can either choose from the provided options or create their own 10th-level characters, okay.

…The Tone of this adventure is a comedic take on a horror mystery, okay that’s interesting – those two are hard to make go together (though it can be done)… encourage you not to take it too seriously, okay…

Once They Were Heroes

“Many years ago, the world was saved by a legendary group of adventurers. They stood against the darkness, vanquished terrible evils, and ensured peace for generations…”

So the characters / PCs are not from ‘all over,’ they were allies and teammates who worked together, and then ALL of them ended up in this place? The first part is a disappointment, and the second strains credibility to breaking point right off the bat.

Were I to run this adventure, I would probably go back to my original impression – that these are retired heroes from multiple planes of reality who have been ‘parked’ in this facility; they don’t know each other; and the big thing that the facility offers (besides aged care) is anonymity, distance from the scenes that made you legendary, so that no-one from home can call you up one last time. This is a Retirement home.

Some may find that this interpretation is even harder to swallow, in terms of credibility, and it probably is – if you run it using normal characters and not the ‘pop culture icons’ provided. But that risks undermining the ‘fun factor’ and making this all too serious. And if you create your own versions of iconic pop culture characters, you’ll find yourself back at the same basic question.

Of course, you may find that the premise doesn’t stretch your credibility as badly as it does mine – but that still doesn’t negate the possibility that your players may struggle with it more than you. So this is something that every GM will have to at least think about addressing.

The introduction then goes on to outline the adventure, but I’m not going to get into those specifics – there’s a lot of information that players will have to find out the hard way.

The plotline breaks down into three main sections – a ‘get to know you’ routine morning (my comments above play into this section very heavily); a sudden event and their need to work out what’s happened and what they can do about it, which leads into investigating the mystery and stumbling across side-plots; and the ultimate confrontation and resolution of the plotline.

Running The Adventure

This is pretty standard fare, with no surprises. Stat blocks for all encounters, and any spells or equipment referenced are provided, so the PHB and DMG are the only real requirements.

Character Creation

This section contains ‘meta-rules’ for character generation and explicitly references the PCs as parodies of pop-culture icons, who have aged and retired. It also outlines equipment (very limited) and rules for aging the characters (these may not go far enough, but there’s a playability need that has to be taken into account).

“Additionally, randomly allocate one flaw and one feature to each character, either by rolling a d20 and referring to the table in the Appendix A or by dealing cards from the provided deck. Encourage players to incorporate these traits into their role-playing to add depth and humor to their characters.”

The text also states that the characters supplied in appendices C and D should be considered backups for players who are struggling to create their own characters, not as the primary source.

Locales

Interior maps are provided for three buildings within the Adumbral Strobus complex – the Home For Retired Adventurers, the Asylum for the Neglected Elderly, and the Admin building, which includes the facilities belonging to Dr Mortem.

There are two pocket dimensions, the Everburn and Evergloom, which have an interesting cosmological concept that makes total sense in terms of the adventure as described (I’m being deliberately vague to leave players who may read this in the dark).

Visiting these pocket dimensions is not quite what players might expect – there are stings in the tail that are exactly the sort of thing that I like to build into my own campaigns.

This section also categorically identifies Yixith and Xeghic, who were name-dropped in earlier material, and their relationship to the plotline. I have one suggestion to make in this respect but don’t want to make it too easily accessed, so it’s in black text against a black background in a text box below – select the text contents with a shift-and-mouse-drag to read it. The text DOES contain spoilers that will ruin the adventure for any player who reads it, be warned.

One realm is a microplane of life and the other of death. Yixith and Xeghic are inhabitants of these microplanes, one to each. The depictions of each match the illustrations of the microplanes. I suggest REVERSING the indicated images WHEN THEY ARE ‘AT HOME’ so that they contrast with their environments. This will throw a curve ball that is likely to deceive even experienced players – for a while.

After a spot illustration of a nameplate that is REALLY hard to read, the introduction segues into a brief description of the setting – the grounds of Adumbral Strobus, the retirement home building, the Asylum, and the Institute.

Maps vs Battlemaps

The creators suggest using theater of the mind, with the GM referring to the maps provided for cues and the battlemaps in Appendix G reserved for combat situations. They point out that this will speed play, which is true. But they don’t mention that a battlemap should only be placed on the table when combat is actually about to begin – don’t telegraph the situation to the players! Stay in theater-of-the-mind mode until the last possible moment.

This also plays into my statements regarding image size. It can be argued that these are intended only for the GM, and not for player consumption, and it seems clear that this is what the writers had in mind; but it can also be argued that using theater of the mind is sped up and improved by giving a common visual reference for the group to process.

Prep Checklist

This has some additional steps not previously mentioned, and shouldn’t be ignored. But that’s what is most likely to happen because the only two entries on the first page on which it appears are reiterations of advice already provided. All the new content is on page 13. This is the biggest misstep so far in the content, in my opinion; if this is as bad as things get, OWWH will deserve very high praise and recommendations, indeed.

Encounter Balancing

Closing out the Introduction is a section on Encounter Balancing. There’s nothing startling or wrong with this section; the biggest issue is what is Not there.

This adventure is designed, according to the “About This Adventure” text, for 4-6 characters, with a presumed ratio of one character per player.

This section shows how to adjust encounters for 4, 5, or 6 players. It also has an adjustment for having fewer than the recommended number of players (3). But it makes no accommodation for groups with more than the recommended number. It’s not likely to come up often – but surely expending the three lines of text needed to cope with 7 or 8 players would not have been too much to ask?

That said, as I commented above, if this is the biggest faux pas, this adventure will be doing very well indeed.

Looking Deeper – Chapter 1

I’m not going to break this down into subsections the way I did the introduction – there will be too much trouble with spoilers if I do that. Instead I’m going to skim the chapter and report back.

  • While I can guess, I don’t know for certain what “Balloon Volleyball,” or its in-game equivalent, “Beholder Ball,” is.
  • It would have been a good idea to warn the GM to come up with “20 questions” for the Getting Ahead game. Unless this game is also not what I think it is.
  • Tess Trill – every facility of this type needs a hot girl for those characters that way inclined to drool over, and she fills that need here. Her male equivalent for those looking in the other direction is the cleaner, Fenim. The text hints that he might have feelings for her, about which she is naively ignorant. Adding the above to their respective descriptions adds massively to the background and general realism of the setting – even if they are cliches.
  • That credibility is severely needed to counterbalance the presence of Derrick the Chevalier. Older nobility, as a general rule, do NOT get shuffled off to somewhere like this. Instead of an actual Noble, he should be a commoner with delusions of Nobility – or maybe pretensions of Nobility.
  • This whole sub-sequence would be a lot easier to roleplay if there was some indication of what this group was actually up to – they are clearly up to something that they probably shouldn’t be. The GM should probably also prepare some relationship cues that can be expressed through dialogue with the PCs. These might be friendly (“Don’t forget we’ve got a chess game to finish later”) to softly hostile (“Mind your own business, [PC], and I’ll mind mine, and we’ll both be happier for staying out of each other’s way.”) In general, I get the impression that the PCs are the ones who have ‘settled’ into a calm existence in the retirement home, while this group are those who are still rebelling a bit and bucking the discipline. That too, would be useful direction – especially if that wasn’t the impression the creators intended.
  • Okay, now we get the explanation of the 20 questions game. Some sort of indicator at the first mention that ‘details will be provided below’ would have been helpful.
  • While the text solves the puzzle, some sort of motivation on the part of the guilty party would be helpful.
  • Context within the adventure explains the Beholder image – so my earlier comments regarding it can be ignored.
  • The first real plot hole – “After the conclusion of the pirate hunt game”… but no such game has been specified or described.

Nine notes, two of them canceled out by a third, and only one (maybe two) really critical. I’ve read a lot of adventures and while there have been one or two that have scored ten out of ten for content, the vast majority have far more serious faux pas and plot holes.

Narrative Content

Most importantly, the narrative generally succeeds in bringing the location to life in a way that feels natural, realistic, and interesting. Nailing any two of those three can be difficult; ticking all three boxes – especially in such an unorthodox setting with… unusual… characters – is top-rate work.

Locations, Encounters, Mysteries, Solutions, and Action: Chapters 2-4

At this point, I don’t think I need to delve into these areas too deeply. While it’s possible that one of them will lower the established standard, there’s no reason to expect it. A quick skim of the next few chapters confirms that impression; this is a really well-written, well-crafted adventure.

It may have the occasional small hole for you to plug, but nothing that won’t be easily taken care of if you do what everyone always says to do and read the whole adventure before play.

I’ve very much been mindful, in writing this review, not to read ahead, but to generate my comments as I came to each passage of content. That permits an honest impression of what’s actually presented by that point in the product, with no cheating by looking ahead.

When I was selecting images, I was deliberately careful to avoid reading any of the text. When I was reading the introduction and making comments on it, I wasn’t looking ahead – I was reacting to what was currently in front of me, in the context of what I had already read. Similarly, my notes on Chapter 1 were very much stream-of-consciousness as I was reading – and you can see in those comments where that caught me out.

Above all else, I was making every effort to make this review both honest and comprehensive, without any bias resulting from the source of the artwork. I hope that I’ve succeeded, so that you can make a fair assessment of what’s being offered without my compounding whatever view of the art source you bring to it yourself.

Price

The price is Australian $7.58, which is $4.95 US. I would actually expect the price to be $5 from this conversion; I suspect that what I got was the “live” conversion rate and not the daily rate. And if you don’t know the difference, don’t worry about it.

Where Do You Get It?

https://www.dmsguild.com/en/product/535760/once-we-were-heroes, or just click on any of the illustrations excerpted from the product.

The Judgment Call

So here’s the bottom line: If you are really seriously opposed to AI-generated art in RPG products, I don’t think this adventure will change your mind.

If, however, you are willing to even contemplate the possibility that there are potentially valid counterarguments to that opposition, this adventure has enough merit that you should contemplate buying it.

Only the maps are really essential for play; you can blank out every other illustration and still be left with a product worth your attention. It will be diminished by that act, but that’s your choice to make.

If the art had not been AI-sourced, there are two possible paths that this adventure could have taken:

  • Far less art, far weaker presentation, and far less appeal despite the length. Marketplace viability would probably require reduction in the price by 1/3, eating directly into the profits and making the existence of another small publisher less viable. Or,
  • Far less art of potentially slightly superior quality, and a price tag closer to USD $40 – a price that would be sure to compromise sales. The net effect is the same – reduced profitability and a small publisher becoming less viable within the hobby.

Some may argue that no publisher that crosses their hard line deserves to be viable in the market. I think that’s going too far.

For my (metaphoric) money, Fool Moon have done everything right in terms of ethics, here. They are up-front about the art and its source. They have done their best to leverage the output to the maximum benefit of their product without making it an indispensable element of that product.

Is it the greatest RPG product ever published? Probably not, but what right do you have to expect that – especially at this price point?

Is it worth every one of those US dollars? I think it is, and then a couple. And I don’t think you can ask more of Fool Moon Productions than that.


Traits of Exotic d20 Substitutes pt 3: The Really Weird


Lots of die configurations can substitute for a d20, or for 3d6. This article looks at some of the most unusual. Part 3 of 3.

The image of the balance is by Anna Varsányi from Pixabay. I’ve changed its balance, added a load of dice, and changed the background color.

Time Out Post Logo

I made the time-out logo from two images in combination: the relaxing man photo is by Frauke Riether, and the clock face image (which was used as inspiration for the text rendering) was provided by OpenClipart-Vectors; both were sourced from Pixabay.

There’s something indescribably appropriate about writing the first words of this post on Halloween – after all, many of these rolls are monsters unfit for gentle company. At the same time, some of them might get under your skin and make themselves at home, because there are some absolutely fascinating (not to mention strange) alternatives being put under the microscope today!

Because the die rolls are so strange, I’ve decided that each graph will be linked to a larger version that can be opened in a separate tab by clicking on the thumbnail. I’m also toying with the notion of doing some even larger versions in a PDF – if so, I’ll feature the link to it prominently.

I’m kicking things off today with a last-minute extra inclusion just as a warm-up. Although conceptually wild, it’s by far the tamest alternative on show today!

BONUS EXTRA: Exotic Choice #0a: 2d6+1 (for high results desired) or 2d6+6 (for low results desired)

I came up with this while finalizing the formatting of the previous post, when a couple of the things I had written about caught my eye in succession and sparked new thoughts.

Specifically: what if the roll was 3d6 – but one of the dice was fixed, in the opposite direction of what a character wants to roll to succeed? A ‘1’ if they want to roll high, a ‘6’ if they want to roll low?

In form, this would then become a triangular probability curve, because it’s functionally the same as 2d6 plus modifier – against a target intended for 3d6. That modifier is critical – the average roll of a d6 is 3.5, so a 1 effectively means a -2.5 modifier against a target intended for 3d6 when you are trying to roll high, and a 6 means a +2.5 modifier on 3d6 when you are trying to roll low.

Integer values matter when they trigger a binary choice like that. In the Hero System, several defined rolls set the standards: 5/-, 7/-, 11/-, 14/-, and 17/-. These are all attempting to roll low, to get below the target number. In D&D, back when it was still 3d6 based, you often had to roll high but sometimes you had to roll low – it depends on what you’re rolling for. With 3rd Ed, this was cleaned up so that you were always trying to roll higher than the target. So both variations have to be evaluated. To do so, I’ll use the same standards – but look at rolling 5+, 7+, 11+, 14+, and 17+ – even though that edition also shifted to the d20. So this is a legitimate option for replacement of both.

With the Hero System rolls, the higher the target number, the easier the roll is supposed to be. With ‘modern’ D&D and Pathfinder, the higher the target number, the harder the roll.
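If you want to check these numbers yourself before looking at the graphs, here’s a minimal Python sketch – my own, not anything from this article’s usual toolchain – that enumerates both rolls, confirms the ±2.5 shift in the averages against 3d6, and evaluates the target numbers listed above in both roll-under (Hero-style) and roll-over (D&D / Pathfinder-style) modes. The helper names are mine, and treating the targets as “target or less” versus “target or more” is my reading of the two systems.

    from collections import Counter
    from itertools import product

    def dist(num_dice, modifier=0, sides=6):
        """Exact distribution of <num_dice>d<sides> + modifier, as {result: probability}."""
        counts = Counter(sum(roll) + modifier
                         for roll in product(range(1, sides + 1), repeat=num_dice))
        total = sides ** num_dice
        return {result: n / total for result, n in sorted(counts.items())}

    roll_high = dist(2, modifier=1)   # the fixed die shows 1 when you need to roll high
    roll_low  = dist(2, modifier=6)   # the fixed die shows 6 when you need to roll low
    baseline  = dist(3)               # the unmodified 3d6 bell curve

    def average(d):
        return sum(result * p for result, p in d.items())

    print(average(baseline), average(roll_high), average(roll_low))   # 10.5, 8.0, 13.0 (allow tiny float rounding)

    for target in (5, 7, 11, 14, 17):
        p_under = sum(p for r, p in roll_low.items() if r <= target)    # Hero-style: target or less
        p_over  = sum(p for r, p in roll_high.items() if r >= target)   # D&D-style: target or more
        print(f"target {target:2d}: roll-under {p_under:6.2%}, roll-over {p_over:6.2%}")

One thing the roll-under column makes obvious: because 2d6+6 can never produce less than 8, the penalty bites hardest at the easy end of the Hero scale – a 5-or-less or 7-or-less roll becomes literally impossible once the fixed die shows a 6.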

We start, as usual, with some probability graphs:

Every result in between the two curves is bad news for the rolling character. Click the thumbnail for 1024 x 361 version.

It’s the same story here. Click the thumbnail for 1024 x 361 version.

Min, Max, Ave

    2d6+1:

      Minimum 3
      Maximum 13
      Average 8

    2d6+6:

      Minimum 8
      Maximum 18
      Average 13

    The fact that one peaks as the other begins makes me kinda curious about what the sum of the two – 4d6+7 – would look like, but that’s outside the scope of this article.

The Thresholds
    The 1% Threshold

      Everything beats this minimum – no valid results are off the table.

    The 3% Threshold

      On 2d6+1, 3 and 13 are just below this threshold. On 2d6+6, 8 and 18 are in the same category. In both cases, it’s the most extreme results only; everything else is in the next threshold group or higher.

      In fact, there’s nothing in the 3%-5% band, either. The probability is rising too quickly for that.

    The 5% Threshold

      Breaking the 5% threshold but not making it to the next, 10% mark, are a couple of results on each side of each of the curves.

      2d6+1: 4-5 and 11-12; 2d6+6: 9-10 and 16-17. So these results are more likely to come up than on a d20.

    The 10% Threshold

      Between 10% and 15% are also two results from each side of the curve.

      2d6+1: 6-7 and 9-10. 2d6+6: 11-12 and 14-15. These results are more likely to come up than on a d10.

    The 15% Threshold

      That leaves only the absolute peaks of both ‘curves’, 8 and 13 respectively. They aren’t much higher than 15% but they legitimately beat that target. In fact, these results have the same probability as a flat 1d6 roll plus modifiers.
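For anyone who wants to reproduce the threshold breakdown above, here’s a short, hedged sketch. The rule it encodes – a result “clears” a threshold if its individual probability is at least that value – is my reading of the method used throughout this series, not something stated explicitly.

    from collections import Counter
    from itertools import product

    def dist2d6(modifier):
        """Probability of each result on 2d6 + modifier."""
        counts = Counter(a + b + modifier for a, b in product(range(1, 7), repeat=2))
        return {result: n / 36 for result, n in sorted(counts.items())}

    # Assumed rule: a result "clears" a threshold if its individual
    # probability is at least that value.
    thresholds = (0.01, 0.03, 0.05, 0.10, 0.15)

    for modifier in (1, 6):
        print(f"--- 2d6+{modifier} ---")
        for result, p in dist2d6(modifier).items():
            cleared = max((t for t in thresholds if p >= t), default=None)
            print(f"{result:2d}: {p:6.2%}  highest threshold cleared: {cleared}")

Running it reproduces the groupings above: 3 and 13 (or 8 and 18) clear only the 1% line, two results per side of each curve sit in the 5%-10% band, two more per side in the 10%-15% band, and only the peaks clear 15%.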

Slices Of Range: Percentages Of Probability
    Range Of Results

      3-13 and 8-18 have exactly the same range of results, which is not all that surprising since they are both 2d6 rolls. 11 results in each. The odd number means that there is a single result that represents the peak probability – until you get into the exotic die rolls to come, anyway!

    Ave – Min, Max – Ave
      These values will also be the same in all four cases – 8-3=5, 13-8=5, and 18-13=5.
    1/3 (Ave-Min) + Min

      Here’s where things have to diverge because the two rolls have different minimum values.

      1/3 of 5 is 1.6667, which will be common to both.

      1.6667 + 3 = 4.6667, so 3 & 4 are the lowest tier of results for 2d6+1. They have a combined probability of 8.33%.

      1.6667 + 8 = 9.6667, so 8 & 9 are the equivalents (with the same combined probability) for 2d6+6.

    2/3 (Ave-Min) + Min

      2/3 of 5 is 3.3333, again common to both because it’s a function of the 2d6 part of the rolls.

      3.3333 + 3 = 6.3333, so 5 and 6 are the middle lower results band for 2d6+1. They have a combined probability of 27.78 – 8.33 = 19.45%.

      3.3333 + 8 = 11.3333, so 10 and 11 are the equivalents for 2d6+6, with the same probability.

    The Lower Core

      That means that 7 and half of 8 comprise the lower core for 2d6+1 – that’s 13.89 + 1/2 x 16.67 = 22.225%.

      The 2d6+6 equivalents, with the same probability, are 12 and half of 13.

    The Upper Core: 1/3 (Max-Ave) + Ave

      Starting on the downhill leg of the probability charts, we have another 22.225% representing 9 and the other half of 8 on 2d6+1, and 14 and the other half of 13 on 2d6+6.

    2/3 (Max-Ave) + Ave

      Those are followed by the upper middle, a combined probability of 19.45% again, and a span of 2. For 2d6+1, that’s 10 & 11, and for 2d6+6, it’s 15 & 16.

    The Lofty Outcomes

      The very best results, with a probability of 8.33%, are 12 & 13 on 2d6+1, and 17 & 18 on 2d6+6.

    2d6+1:

      03-04: 8.33%
      05-06: 19.45%
      07-08: 22.225%
      08-09: 22.225%
      10-11: 19.45%
      12-13: 8.33%

    2d6+6:

      08-09: 8.33%
      10-11: 19.45%
      12-13: 22.225%
      13-14: 22.225%
      15-16: 19.45%
      17-18: 8.33%

    No real surprises in this set of results except possibly the closeness of 19.45% to 22.225% – especially given the threshold indicator that the probability slope is quite steep with 2d6.
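
    If you’d rather not do the sixths-of-the-range arithmetic by hand, a few lines of Python will produce the dividing points for any roll. This is only a sketch of the method (the helper names are mine): it gives you the boundaries and leaves the splitting of any result that straddles one – like the 8 and 13 above – to you.

      from collections import Counter
      from itertools import product

      def two_d6(mod):
          # All 36 combinations of 2d6, with the locked third die folded in as a modifier.
          counts = Counter(a + b + mod for a, b in product(range(1, 7), repeat=2))
          return {result: n / 36 for result, n in sorted(counts.items())}

      def range_dividers(dist):
          low, high = min(dist), max(dist)
          ave = sum(result * chance for result, chance in dist.items())
          below = [low + k * (ave - low) / 3 for k in (1, 2)]    # 1/3 and 2/3 of (Ave - Min)
          above = [ave + k * (high - ave) / 3 for k in (1, 2)]   # 1/3 and 2/3 of (Max - Ave)
          return low, below, ave, above, high

      print(range_dividers(two_d6(1)))
      # -> min 3, dividers ~4.67 and ~6.33, average 8, dividers ~9.67 and ~11.33, max 13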

Slices Of Probability: The Definitive Result Values

    Slicing up the 100% pie into 5 slices as equally as possible is the name of the game in this subsection.

    The Lowest 20%

      20% falls after the third result on each curve, so the lowest 20% of results comprise outcomes of (2d6+1) 3-5 and (2d6+6) 8-10. I think it’s just a coincidence that the upper limit of one is double the upper limit of the other.

    Second Lowest 20%

      21-40% contains only a single result in each case – 6 for 2d6+1 and 11 for 2d6+6.

    The Middle 20%

      41-60% contains two values, including the peak. For 2d6+1, those are 7-8, and for 2d6+6, 12-13.

    Second-Highest 20%

      61-80% again holds just one result – 9 for 2d6+1, and 14 for 2d6+6.

    Highest 20%

      Which means the highest 20% of rolls will contain the results from 10-13 for 2d6+1 and 15-18 for 2d6+6.

      Peak Probability

      In both cases the peak probability is 16.67%.

    Matching Result: 1/3 Peak Probability

      1/3 x 16.67 = 5.5567%. This lands in between 3 & 4 (and 12 & 13) on 2d6+1, and between 8 & 9 and 17 & 18 on 2d6+6. So, once again, only the most extreme results are chosen by this method. That’s actually rather predictable, given the earlier threshold results, since 5.5567 is so close to the 5% threshold.

    Matching Result: 2/3 Peak Probability

      2/3 x 16.67 = 11.1133%. As it happens, there are results that have 11.11% probability of occurring, and so these would have to be right on this line. On 2d6+1, these are 6 and 10 – so 4-6 are in this probability zone, as are 10-12. The 2d6+6 equivalents are, predictably, 5 higher – 9-11 and 15-17.

      The most probable results are therefore 7-9 (on 2d6+1) and 12-14 (on 2d6+6).

    2d6+1:

      01-20%: 3-5 (span 3)
      21-40%: 6 (span 1)
      41-60%: 7-8 (span 2)
      61-80%: 9 (span 1)
      81-100%: 10-13 (span 4)

      < 1/3 peak probability: 3 (span 1)
      1/3 – 2/3 peak probability: 4-6 (span 3)
      2/3 – peak – 2/3 peak: 7-9 (span 3)
      2/3 – 1/3 peak probability: 10-12 (span 3)
      < 1/3 peak probability: 13 (span 1)

      It’s the evenness of the spans in the latter table that is most telling. While there is clearly a peak probability associated with the innermost results, there is a significant chance of a result outside them. In fact, there is a 100 – 13.89 x 2 – 16.67 = 55.55% chance that the result of any given roll will be outside the 7-8-9 peak.

    2d6+6:

      01-20%: 8-10 (span 3)
      21-40%: 11 (span 1)
      41-60%: 12-13 (span 2)
      61-80%: 14 (span 1)
      81-100%: 15-18 (span 4)

      < 1/3 peak probability: 8 (span 1)
      1/3 – 2/3 peak probability: 9-11 (span 3)
      2/3 – peak – 2/3 peak: 12-14 (span 3)
      2/3 – 1/3 peak probability: 15-17 (span 3)
      < 1/3 peak probability: 18 (span 1)

      And these are exactly the same, just 5 higher on the results.
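
      The 20% slices, meanwhile, can be read straight off a cumulative probability table, which the same brute-force tally will generate – again, just an illustrative sketch:

        from collections import Counter
        from itertools import product

        counts = Counter(a + b + 1 for a, b in product(range(1, 7), repeat=2))   # 2d6+1; use +6 for the other
        running = 0.0
        for result in sorted(counts):
            running += counts[result] / 36
            print(f"{result:>2}: {counts[result] / 36:6.2%}   cumulative {running:7.2%}")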

Summary Of Results

    The bottom line in terms of mechanics is that you are taking a d6 away from the character’s roll and replacing it with the worst possible outcome.

    But I also have to make the point that you can work it in the other direction – choosing the option that is most beneficial to a character’s chances of success.

When To Use This Substitute

    That matters because of what this die roll is saying to whoever runs that character. If it’s the more difficult option, you are telling the operator of the character, “I want this roll to fail and I want to be sure that you know that”. Or, more simply, “This roll deserves to fail.”

    The alternative construction, that benefits the character’s chances of success, says, “I want this roll to succeed and I don’t care who thinks I’m being biased.”

    In other words, this construction should be reserved for those occasions when the whole point of the roll is making that statement. When a move is so brain-dead stupid that it doesn’t deserve even the minimal chance of success it might have on 3d6 or d20.

    So I guess I need to actually compare what the chances of success are for different targets.
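
    Rather than grinding through each case by hand, here’s a hedged sketch of how those chances can be tabulated by enumeration – the roll definitions are the ones above, and the helper name is mine:

      from itertools import product

      def chance_at_most(target, dice, mod=0):
          # Probability that (sum of the dice + mod) is target-or-less, by full enumeration.
          combos = list(product(*[range(1, sides + 1) for sides in dice]))
          return sum(1 for faces in combos if sum(faces) + mod <= target) / len(combos)

      rolls = {"3d6": ([6, 6, 6], 0), "d20": ([20], 0), "2d6+6": ([6, 6], 6), "2d6+1": ([6, 6], 1)}
      for target in (17, 14, 11, 8, 5):
          print(f"{target}/- :", "  ".join(f"{name} {chance_at_most(target, dice, mod):6.2%}"
                                           for name, (dice, mod) in rolls.items()))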

    Target 17/- (17 or less)

      With 3d6, you have a 99.54% chance of making this target.

      With a d20, it’s 85% chance.

      With the penalizing construction (2d6+6), it’s 97.22%.

      With the advantageous construction for an “or less” roll (2d6+1), it’s 100% certain that you will succeed.

    Target 14/-

      With 3d6, your chances of success are 90.74%. With a d20, it’s 70%.

      With 2d6+6, it’s 72.2% – so better than on a d20, but not by much. Compared to a 3d6 roll, you are way worse off.

      With 2d6+1, it’s still 100% success.

    Target 11/-

      3d6 gives a 62.5% chance of success. A d20 gives 55%.

      2d6+6 gives 27.78% chance. That’s like half the chance of a d20.

      And, for the first time, not even 2d6+1 makes success certain – there is a 91.67% chance of success, so the odds are way better than ‘normal’.

    Target 8/-

      3d6 has only a 25.93% chance of making this roll. 3 times in 4, roughly, you would expect to fail. On a d20, the chances are a little better at 40%, but the odds are still stacked against you a little.

      2d6+6 has just a 2.78% chance of success. It literally takes the lowest roll possible to make this target. If the dice don’t come up snake eyes, you’ve failed.

      2d6+1 has a better shot at it – 58.33% – but you’re still going to fail almost half the time. This is actually a fairly hard target to achieve!

    Target 5/-

      …but not as hard as this target. On 3d6 you have just a 4.63% chance. On a d20 it’s a little over 5 times that, at 25%.

      2d6+6 – forget it, your lowest result is an 8, so a 5 or less is not an option.

      Even the construction that appears to give as good a chance at success as you are likely to get, a 2d6+1, has only a 16.67% chance of success – so a d20 is actually the more generous option with a target this low.

    It’s much the same story if you look at rolling X or more, just in the other direction. The 2d6+6 becomes the generous option, and the 2d6+1, the handicapping one.

    Either way, this choice is all about the message; the actual die roll is almost superfluous.

    Exotic Choice #8: d4 x d6 – d4 +4 or +6

    Now, things start getting strange. For this, you need two different colored d4s and a d6. One d4 is designated the multiplier; whatever shows on the face of that die gets multiplied by whatever’s showing on the d6. The usual nomenclature around me borrows from the d% – the multiplying d4 is “high”.

    The +4 option is for replacing a d20 roll, the +6 for replacing a 3d6 roll.

    If you want strange, you’ve got it!

    Originally, I had this listed with no modifier whatsoever, but I was looking at the resulting probability chart and thinking about the prospects of replacing d20 and 3d6, and the modifiers suddenly made a lot of sense to me.

    Let me explain why. A d20 has results from 1 to 20, yes? The native construction of this roll gave results from -3 to 23. Which puts the mid-point of the results (NOT the average!) at 10. But the bulk of the probability is below this, at around 0-5. A +4 modifier shifted the curve to the right, because that’s what positive modifiers do – the middle of the range becomes 14, and the bulk of the probability sits at around 4-9. That makes it a usable substitute, if one that’s heavily weighted low.

    3d6 ranges from 3-18. The significant probability results of the native curve end at around 11. So adding 6 shifts the minimum to a 3d6-comparable 3, the middle of the range to about 16, the peak probability to 6-11, and the end of that significant results range up to about 17, again making this a usable substitute for the 3d6 roll, again one that is biased low.

    Applying the different modifiers makes both versions fit for purpose, and makes the advice regarding the use of this construction the same for both – or close enough to it.

    With that addressed, let’s talk about the core of the roll. Multiplied die rolls have a singular characteristic: they bulk the probabilities low, but have long tails leading off into higher values. These come at a penalty – certain results that simply can’t happen. There’s no multiple that leads to a result of 17, for example – it’s a prime number.

    To solve this issue, you either have to add a die roll or subtract one. Adding one extends the length of the tail by the size of the added die roll, subtracting one shortens it. Adding one also shifts the probabilities right by the average of the added die, while subtracting shifts it left by that average.

    Once you’ve decided to use a multiplied die roll, you’re then negotiating a compromise between the native result and a useful configuration by way of the added or subtracted die roll. The smaller you make it, the smaller the impact – so I thought hard about d2 and d3, but decided that d4 was small enough in this instance. I also considered d5 and d6, but thought that the impact of the larger die was too significant. So that’s why this offering is d4 x d6 – d4 + 4 or 6.
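
    Before the analysis, here’s a quick way to enumerate the construction’s full distribution if you want to check any of the figures below – a sketch only, with a made-up function name:

      from collections import Counter
      from itertools import product

      def mult_roll(mod):
          # d4 (the multiplier) times d6, minus a separate d4, plus the flat modifier.
          outcomes = [m * d6 - s + mod
                      for m, d6, s in product(range(1, 5), range(1, 7), range(1, 5))]
          return {result: count / len(outcomes)
                  for result, count in sorted(Counter(outcomes).items())}

      for result, chance in mult_roll(4).items():   # +4 replaces a d20; use mult_roll(6) for 3d6
          print(f"{result:>3}: {chance:6.2%}")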

    I’m going to introduce a new way of writing die rolls, having typed that sequence once too often for it to be convenient.

    It’s a simple extension of what’s already done – low dice size to high within an expression, ending with ‘d0’ i.e. modifiers. The new part is a way to indicate Conditional Changes.

    In this case,

    d4 x d6 – d4 +4,6 [d20r,3d6r]

    The conditional parts are separated by a comma instead of text and are followed by a symbolic representation of the condition for differentiation between the two. Once that has been established, in whatever context you are using this notation, you can leave off the content of the square brackets, with the empty brackets meaning “as before”:

    d4 x d6 – d4 +4,6 [ ]

    So, for example, you might have the following as a legitimate construct for some purpose:

    d4,d6 [a,b] x d6 – d6 +1,10 [ab,c]

    [a] = d20r
    [b] = 3d6r
    [c] = x ->20

    and, after the first use, you would just write

    d4,d6 [ ] x d6 – d6 +1,10 [ ]

    until the content of the square brackets next changed.

    Let’s break that example down for anyone who’s struggling to keep up (should be no-one but you can never tell).

    If this roll is to be used in place of a d20, you get condition [a], in which your main roll is d4 x d6. The “r” in condition [a] signifies ‘replacement’.

    If the roll is to be used in place of 3d6, you get condition [b], and the main roll becomes d6 x d6.

    Both a and b have a modifier of +1. But if the result of the multiplication and subtraction of die rolls – that’s the “x” in condition [c] – is greater than 20 (the “->20”), the modifier goes up to +10.

    All clear?

    So, for the remainder of this subsection, I’ll be writing d4 x d6 – d4 +4,6 [ ] for the die roll, with the [ ] signifying [d20r, 3d6r] without explicitly stating the condition every time. Okay?

    An afterthought – how do you decide where the body ends and the tail begins?

    There is a sharp flattening out of the curve at the point of division. You may even enter a secondary peak.

    Everything to the right of that dividing line is tail, everything to the left of it is body.

Min, Max, Ave

    Minimum = [1, 3]
    Maximum = [27, 29]
    Average = [10.25, 12.25]

    Right away, the new format has been extended to the display and differentiation of results, showing them in a far more compact way than would otherwise be possible.

    I got the average the old-fashioned way – multiply each result by its % chance and divide the total of all those results by 100.

    I did so because I wanted to test a shortcut that I’ve been using without verification like, forever – substituting in the value of an average roll to calculate the average result of a complex expression like we have here. So let’s try it:

    2.5 x 3.5 – 2.5 + 4 =
    8.75 – 2.5 + 4 =
    10.25

    Correct result. It seemed logical and obvious to me that it would work, but I’ve never actually tested it to be sure, until now.
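
    If you’d like more reassurance than a single case, a quick brute-force check confirms it for this roll. The same shortcut works for any roll built by adding, subtracting, and multiplying independent dice; division is another matter, as Exotic Choice #10 will show.

      from itertools import product

      outcomes = [m * d6 - s + 4
                  for m, d6, s in product(range(1, 5), range(1, 7), range(1, 5))]
      print(sum(outcomes) / len(outcomes),     # brute-force average over all 96 combinations
            2.5 * 3.5 - 2.5 + 4)               # the substitute-the-averages shortcut: both 10.25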

The Thresholds
    The 1% Threshold

      Everything beats this – but in the cases of [1 & 22-27, 3 & 24-29], only just, at a probability of 1.04%.

    The 3% Threshold

      Now things get juicier. In addition to the results mentioned above, each roll has 5 results below this threshold, and they are all in the tail: [16-17 & 19-21, +2]

      Another extension to the protocol – instead of explicitly listing the second case results, I’ve just indicated what the difference is.

      In the case of the second example offered initially, because the core die roll was changing, this wouldn’t work and you would have to use the longer, more explicit format continually.

      The +2 simply indicates, add 2 to get the alternative results, so 16-17 becomes 18-19.

    The 5% Threshold

      Between 3% and 5% things get more varied. We have one result in the main body – [2, +2] – and the entire rest of the tail except for [12, +2] – [10-11 & 13-15 & 18, +2].

    The 10% Threshold

      There are no results with a higher probability than this threshold, so the 5-10% bracket holds the entire rest of the results: [3-9 & 12, +2].

    I don’t usually do this, but I thought it would be worthwhile this time around: a summary of these results in tabular form.

      1-3%: [1, +2]
      3-5%: [2, +2]
      5-10%: [3-9, +2]
      3-5%: [10-11, +2]
      5-10%: [12, +2]
      3-5%: [13-15, +2]
      1-3%: [16-17, +2]
      3-5%: [18, +2]
      1-3%: [19-27, +2]

    The other thing worth mentioning is that the average has clearly been ‘pulled’ to a higher number by the tail. If [3-9,+2] is considered the main body, which is what the above results show, then you would expect an average of [6, +2] or thereabouts.

    The greater the probability contained in the tail, the greater the shift. In this case, up a full 4.25 from [6,+2] to [10.25,+2].

    That will have an impact in the next section.

Slices Of Range: Percentages Of Probability
    Range Of Results

      27-1 = 26, +1 for the 1 itself, makes 27.
      The results are [,+2] higher for the alternative construction, but the range is exactly the same.

      Ave – Min, Max – Ave

      10.25 -1 = 9.25
      27 – 10.25 = 16.75

      Because minimum, maximum, and average all go up by the same amount in the second formulation, these ranges are exactly the same.

      The tail isn’t quite twice as long as the main body – 16.75/9.25 = 1.8108. I’ve never tested whether or not that’s true globally, so at this point it’s just an observation, not even a demonstration of a rule-of-thumb principle.

    1/3 (Ave-Min) + Min

      The part of the graph that lies below the average is going to take in the entire body and part of the tail.

      [1/3 x 9.25 + 1 = 4.0833, +2]

      So the band of worst results runs from [1 to 4,+2] and has a combined probability of 17.71%.

    2/3 (Ave-Min) + Min

      [2/3 x 9.25 + 1 = 7.1667, +2]

      The poor results are from [5 to 7,+2] and these have a probability of 42.71 – 17.71 = 25%. So 1 in every 4 rolls will yield a [5, 6, or 7,+2].

    The Lower Core

      This obviously contains everything else up to the average, so [8-10,+2]. The total probability of these results is 59.38 – 42.71 = 16.67%. This is ever-so-slightly less than the bottom band.

    The Upper Core: 1/3 (Max-Ave) + Ave

      For the first time, we have an asymmetric roll, which means that I can’t simply echo the spans in reverse sequence, I have to actually calculate these values.

      [1/3 x 16.75 + 10.25 = 15.8333,+2]

      So the upper core is 11-15, and includes the secondary peak at 12. The total probability in this span of 5 results is 80.21 – 59.38 = 20.83%.

      If the main body is 3-9, this shows that the early part of the tail is quite fat.

    2/3 (Max-Ave) + Ave

      [2/3 x 16.75 + 10.25 = 21.4166,+2]

      The band of ‘good’ results ranges from [16 to 21,+2] and has a total probability of 93.75 – 80.21 = 13.54%.

      This is the lowest-probability band that we’ve seen so far. But the 93.75% [1-21,+2] indicates that there’s not much probability left for the very best results.

    The Lofty Outcomes

      The results from [22-27,+2] have to contain the rest of the 100% total, so 100 – 93.75 = 6.25%.

    d4 x d6 – d4 +4,6 [ ]:

      [01-04,+2]: 17.71%, span 4, sub-average=4.4275%
      [05-07,+2]: 25%, span 3, sub-average=8.3333%
      [08-10,+2]: 16.67%, span 3, sub-average=5.5555%
      [11-15,+2]: 20.83%, span 5, sub-average=4.166%
      [16-21,+2]: 13.54%, span 6, sub-average=2.2567%
      [22-27,+2]: 6.25% span 6, sub-average=1.0417%

      This table introduces a new diagnostic tool, the sub-average. This is the probability of the range divided by the span of results – so the range of [05-07,+2] has a total probability of 25% and a span of 3, giving an average probability across the span of 8.3333%.

      The combination of range and sub-averages gives a very approximate description in actual numbers of the shape of the probability curve, ironing out little deviations like the secondary peaks at [12 and 15 and 18,+2].

      I haven’t needed it before, but this is a far more complicated curve than the previous ones.

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      The 20% mark in total probability falls between [4 and 5,+2], so this band runs from [1-4,+2].

    Second Lowest 20%

    The 40% mark is a little below 7, so this 20% holds results from [5-6,+2].

    The Middle 20%

      We get a total probability of 60% just above [10,+2], so this band contains results from [7-9,+2].

    Second-Highest 20%

      The 80% total is reached just below 15, so this group contains results [10-14,+2].

    Highest 20%

      Which leaves only the cream of the crop, from [15-27,+2].

    Peak Probability

      The peak probability belongs to a result of [6,+2], exactly as I forecast from the body range of [3-9,+2]. It is 9.38%.

    Matching Result: 1/3 Peak Probability

      1/3 x 9.38 = 3.1267%

      [2,+2] equals this almost exactly, at 3.13%. It’s so close that it has to be included.

      In the tail, things get more interesting. You can look at the probability chart and describe the tail as having peaks at 12, 15, and 18, and/or you can talk about valleys at [10-11, 13-14, and 16-17,+2].

      [14 & 16-27,+2] are all below this threshold.

    Matching Result: 2/3 Peak Probability

      2/3 x 9.38 = 6.2533%.

      [3 & 8-13 & 15,+2] are all at or below this value.

      Which leaves [4-7,+2] as exceeding it.

      The question is always whether or not results that land exactly on a dividing line like this should be counted above or below it. But in this case, [2,+2] above set a precedent of including such cases in the lower of the divisions. So the dividing lines can be read as “[value] or less”.

      d4 x d6 – d4 +4,6 [ ]

      01-20%: [1-4,+2], span 4
      21-40%: [5-6,+2], span 2
      41-60%: [7-9,+2], span 3
      61-80%: [10-14,+2], span 5
      81-100% [15-27,+2], span 13

      [1-2,+2] 4.17%, span 2
      [3,+2] 5.21%, span 1
      [4-7,+2] 33.33%, span 5
      [8-13,+2] 30.21%, span 6
      [14,+2] 3.13%, span 1
      [15,+2] 4.17%, span 1
      [16-27,+2] 19.79%, span 12

Summary Of Results

    This is about as simple and clean as a multiplied die roll gets. The addition or subtraction of a die has done its job.

    If you examine the d4 x d6 chart above, one of the first things you notice is that it looks unfinished and incomplete. There are gaps – there’s no way to roll a 7, for example. Adding or subtracting a die fills in those gaps – at the expense of lowering probabilities (the possibility of the additional results has to come from somewhere).

    Note that if the gaps are too large, a d4 might not be big enough. With d6 x d8 – d4, there is still a gap between 41 and 44, with two results missing. To fill them in, the d4 has to grow to a d6 – note that 6-4=2=the number of missing results.
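
    Rather than hunting for gaps by inspection, you can let a short loop do it. A sketch only (the function and its name are mine), using the d6 x d8 – d4 example just mentioned:

      from itertools import product

      def gaps(mult_sides, other_sides, filler_sides):
          # Values between min and max that (d_mult x d_other) - d_filler can never produce.
          reachable = {m * o - f for m, o, f in product(range(1, mult_sides + 1),
                                                        range(1, other_sides + 1),
                                                        range(1, filler_sides + 1))}
          return [v for v in range(min(reachable), max(reachable) + 1) if v not in reachable]

      print(gaps(6, 8, 4))   # -> [42, 43]: two unreachable results, so the d4 filler is too small
      print(gaps(6, 8, 6))   # -> []: a d6 filler closes the gap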

When To Use This Substitute

    I wouldn’t use this to replace a d20 or 3d6 rolled for the usual purpose. I WOULD use it to replace those things on a custom table.

    For example, when it comes to diseases, there are all sorts of things that you need to know.

    • Unhosted Half-life
    • Base Infectious Rate
    • Immunity
    • Pre-symptom period
    • Infectious Stage Start
    • Infectious Stage End
    • Symptom Recovery
    • Disease Recovery

    Now, you could get some graph paper and draw a number of pretty curves to represent the probability you want; total those up and you can scale to exactly 100%.

    Or you can simply use a die roll like this one to create the curves for you.

    If I were to do that, I might get:

    • Unhosted Half-life = 9 days
    • Base Infectious Rate = 12/-
    • Immunity = 3%
    • Pre-symptom period = 4 days
    • Infectious Stage Start = 5 days
    • Infectious Stage End = 6 days
    • Symptom Recovery = 10 days
    • Disease Recovery = 7 days

    All these numbers were generated just by rolling d4 x d6 -d4 +4.

    What do these numbers mean? Well, a disease starts out with a Base Infection Rate chance of being caught. If it’s out in the open, in the soil for example, it loses half its infectiousness as disease cells die off every unhosted half-life that passes.

    So it starts as 12/- – that could be on 3d6 or d20 or whatever. After 9 days, it’s down to 6/-; after 18 days, 3/-; after 27 days, 1.5/-; then 0.75, 0.375, and so on. But that’s per exposure – if a dungeon was once plagued by the illness, you might easily have 10, or 50, or 100 exposures.

    You aren’t going to roll all of them. There’s a shortcut.

    • Determine the chance of failure of 1 exposure.
    • Convert it to a decimal.
    • Estimate the number of exposures to be rolled at once. 20, 50, 100 – the choice is yours.
    • Raise the decimalized risk of NOT catching the disease to the power of the number of exposures.
    • The result will be a much smaller number. Convert it to a percentage.
    • That’s your chance of not contracting the disease. Subtract from 100 to get the matching chance that you WILL contract the disease.
    • For example, let’s take our 12/- and assume it’s on 3d6. That’s 74.07%. But 10 half-lives have passed since then; 2^10 = 1024, so the chance per exposure is now down to 74.07 / 1024 = 0.072334%.
    • Which means your chance of NOT catching it is 99.927666% per exposure.
    • To convert it to a decimal, divide by 100. So that’s 0.99927666.
    • The GM decides that every 100 exposures sounds about right, with each step (and the dust raised) counting as an exposure, as does handling an object, touching a surface, or engaging in a round of combat.
    • 0.99927666 ^ 100 = 0.930196.
    • 0.930196 = 93.0196%.
    • So, every 100 exposures, there is a 6.9804% chance of catching the disease.
    • Instead of counting, the GM assumes 100 feet of walking is 100 steps, and whenever the time since the last check feels about right, based on their activities since, he has the characters roll.

    If the dungeon is 100′ x 100′, divide the area by 2 – that’s a safe estimate for the minimum number of exposures through the whole thing, without allowing for rounds of combat, touching things, etc. So 100^2 / 2 = 10000 / 2 = 5000 exposures. Every 100 exposures means 50 checks will be needed. The GM decides that’s too many and decides to increase the number of exposures per check to 500.

    • 0.99927666 ^ 500 = 0.696.
    • 0.696 = 69.6%. So there’s a 69.6% chance of NOT getting it every 500 exposures.
    • Which means that there’s a 30.4% chance of catching it, per roll.
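
    Put into code, the whole shortcut – per-exposure decay plus the power trick – looks something like this. A sketch only; the function name is mine:

      def catch_chance(base_pct, half_lives, exposures_per_check):
          # Decay the per-exposure chance, then aggregate it over a batch of exposures.
          per_exposure = base_pct / (2 ** half_lives)          # still a percentage
          avoid_all = (1 - per_exposure / 100) ** exposures_per_check
          return (1 - avoid_all) * 100                         # percentage chance per check

      # 12/- on 3d6 is 74.07%, ten half-lives ago; checked every 100 or 500 exposures:
      print(catch_chance(74.07, 10, 100))   # ~6.98%
      print(catch_chance(74.07, 10, 500))   # ~30.4%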

    A 30.4% chance per check, 10, maybe 11 checks, 4 PCs – what are the odds?

    At least 1 character: 100%. Well, more than 99.9999%.

    At least 2 characters: 99.997%.

    At least 3 characters: 99.80%

    All four characters: 92.78%
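
    Those ‘at least N characters’ figures are ordinary binomial arithmetic – each PC independently has to dodge every check – and for the curious, here’s that calculation as a sketch, assuming 11 checks at 30.4% each:

      from math import comb

      def at_least(n, party, p_per_check, checks):
          # Chance that one PC gets infected at some point across all the checks...
          p_char = 1 - (1 - p_per_check) ** checks
          # ...then the binomial chance that n or more of the party do.
          return sum(comb(party, k) * p_char ** k * (1 - p_char) ** (party - k)
                     for k in range(n, party + 1))

      for n in range(1, 5):
          print(f"at least {n}: {at_least(n, 4, 0.304, 11):.4%}")   # reproduces the figures above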

    Once someone is infected, they no longer need to roll, but the GM doesn’t want them to know that anything’s changed until symptoms appear, so he lets them continue and just ignores the results.

    The pre-symptom period is 4 days, so 4 days after infection, the symptoms start.

    A day later (5 days), the character becomes infectious – at the full base rate of 12/- on 3d6.

    They will stop being infectious 6 days later, so 11 days after infection (5+6=11).

    Symptoms might end before or after that date. The disease is far more dangerous if they end while the sufferer is still infectious! But in this case, symptoms persist for 10 days, so they end on day 14 (4+10=14). For the last 3 days of that period, they were no longer contagious.

    But the disease will have taken its toll. Recovery was rolled at 7 days, and that final clock starts when all the others have stopped – so day 21 is when the victim is back to their old selves – assuming they survived.

    In all 8 cases, the roll used was this one, and the results then interpreted. If I rolled up a second disease the same way, the results would be completely different:

    • Unhosted Half-life = 2 days
    • Base Infectious Rate = 18/-
    • Immunity = 6%
    • Pre-symptom period = 15 days
    • Infectious Stage Start = 15 days
    • Infectious Stage End = 18 days
    • Symptom Recovery = 8 days
    • Disease Recovery = 14 days

    This is a much slower, more pernicious ailment – but despite its very high infectiousness (18/- on d20 this time), it has a very short half-life, and 45 of them have passed.

    • 18/- on d20 = 90%
    • 14 half-lives so 90 / 351844 = 2.55795e-4%
    • 100 – 2.55795e-4 = 99.999744205%
    • 99.999744205% = 0.99999744205
    • 500 exposures per check
    • 0.99999744205 ^ 500 = 0.9987218
    • 0.9987218 = 99.87218%
    • 100 – 99.87218 = 0.12782%. Effectively no chance.
    • 5,500 exposures – the entire dungeon: 0.99999744205 ^ 5500 = 0.986
    • 0.986 = 98.6%
    • 100 – 98.6 = 1.4%

    So, unless there are 50 people in the party, it’s extremely unlikely that anyone will catch this. Its half-life is so short that it’s effectively dead. But encountering someone who has managed to beat those odds would be extremely bad news. 18/- on d20 chance of catching it? And not knowing it until 15 days later?

    Depending on your interpretation of the rules, having a disease like this might mean that ‘Cure’ spells no longer work on you – that they try to cure the disease and fail. If that’s your GM’s interpretation, it might at least offer an early clue.

    On the other hand, at least part of hit points are self-confidence, and there would be a psychological lift at receiving a Cure Light Wounds spell, and cosmetic improvements, so you might well regain some HP, anyway.

    Okay, here’s the important bit: Why this die roll works

    As the analysis shows, results skew markedly low. It’s rare for anything to be higher than 12. But it can happen. That means that results are focused on the trait that you are rolling for, and need only simple interpretation.

    While it’s rare to get a high result, it can occasionally happen, and it always causes something memorable and significant when it does.

A couple of quick other notes about multiplied die rolls.

  1. If you want the curve to bias in the other direction, (Maximum+1) – the die roll is your solution.
  2. There’s a huge temptation to try dividing something by the die roll. Don’t – it’s impossible to control. Most of your results will be sensible, but there’s always going to be a divide by 1 or a divide by zero to mess things up.

Exotic Choice #9: d30 +1 – d10

This roll looks deceptively simple. It only has two dice, for heaven’s sake!

And yet, dice subtraction can sometimes do weird things, so let’s take a look at this one…

You see what I mean? I wasn’t expecting it to look like that…!

Probably the first thing you notice is the flat top of what might once have been a triangle. It runs from 1 to 21.

The second thing that strikes you is the enormous range of results – from -8 to 30.

And then that minimum result sinks in. What does a roll of -8 even mean?
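
We’ll get to that. First, here’s a quick enumeration of the whole distribution, in case you want the shape in numbers rather than from the chart – purely an illustrative sketch:

    from collections import Counter
    from itertools import product

    counts = Counter(d30 + 1 - d10 for d30, d10 in product(range(1, 31), range(1, 11)))
    for result in sorted(counts):
        print(f"{result:>3}: {counts[result] / 300:6.2%}")   # a flat 3.33% plateau from 1 to 21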

Min, Max, Ave

    Minimum -8.
    Maximum 30.
    Average 11.

The Thresholds
    The 1% Threshold

      -6 is exactly at the 1% threshold. So is 28. So the really improbable rolls are -8 to -6 and 28-30.

    The 3% Threshold

      0 and 22 are exactly at the 3% threshold – so the unlikely rolls are -5 to 0 and 22 to 27.

    The 5% Threshold

      Nothing gets this high. Everything from 1 to 21 is at an absolutely flat 3.33%.

Slices Of Range: Percentages Of Probability
    Range Of Results

      30-(-8) = 38, +1 for the -8 result itself. So there is a span of 39 results!

    Ave – Min, Max – Ave

      11-(-8)= 19.
      30-11=19.

      So the roll is symmetric. The fact that the range spans an odd number of results means that there will be one result nominally in the middle whose probability is going to have to be split.

    1/3 (Ave-Min) + Min

      1/3 x 19 + -8 = -1.6667, so the division falls between -1 and -2.

      So the lowest division of results runs from -8 to -2, and comprises 9.33%. Span of 7.

    2/3 (Ave-Min) + Min

      2/3 x 19 + -8 = 4.6667, so the division is between 4 and 5, which means that the next tier of results are -1 to 4. These have a total probability of 28.33 – 9.33 = 19.00%. Span of 6.

    The Lower Core
      That means that everything from 5 to 10, and half of 11, form the lower core. This group has a total probability of 48.33 – 28.33 + 1/2 x 3.3333 = 20 + 1.6667 = 21.6667%. Span of 6 1/2.
    The Upper Core: 1/3 (Max-Ave) + Ave

      The upper side is a mirror-image of the lower. So the upper core is 6 1/2 wide, including 11 (which is split). That gives results of 11-17 and total probability of 21.6667%.

    2/3 (Max-Ave) + Ave

      Above the central core are the good results, a span of 6, starting at 18 – so 18-23 – and with a probability of 19.00%.

    The Lofty Outcomes

      At the very top, the very best results therefore are 24-30, a span of 7, and a total probability of 9.33%.

    d30+1 -d10:

      -8 to -2 = 9.33%, span 7.
      -1 to 4 = 19%, span 6
      5 to 11 = 21.667%, span 6.5
      11 to 17 = 21.667%, span 6.5
      18 to 23 = 19%, span of 6
      24 to 30 = 9.33%, span of 7.

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      The 20% total comes between 1 and 2, so -8 to 1.

    Second Lowest 20%

      The 40% mark is reached between 7 and 8, so this bracket contains 2-7.

    The Middle 20%

      We cross the 60% mark between 13 and 14, so this band consists of results from 8-13.

    Second-Highest 20%

      80% is almost but not quite to the 20 result. So this band contains 14-19.

    Highest 20%

      Which obviously leaves results from 20-30 to form the highest band of results.

    Peak Probability

      As already mentioned, this is 3.3333% – and it’s shared by 21 results.

    Matching Result: 1/3 Peak Probability

      1/3 of 3.3333 = 1.1111%. Results of -8 to -6, and 28 to 30, are below this level.

    Matching Result: 2/3 Peak Probability

      2/3 of 3.3333 = 2.2222%. That probability band contains -5 to -3 and 25 to 27.

      Everything else, from -2 to 24, is between 2.2222% and 3.3333%.

    d30+1-d10:

      01-20%: -8 to 1, span 10.
      21-40%: 2 to 7, span 6
      41-60%: 8-13, span 6
      61-80%: 14-19, span 6
      81-100% 20-30, span 11

      -8 to -6, 2%, span 3
      -5 to -3, 5%, span 3
      -2 to 24, 86%, span 27
      25 to 27, 5%, span 3
      28 to 30, 2%, span 3

Summary Of Results

    If you use the full span of results, you are going to get some very extreme results. But here’s the thing: If you re-roll any result below 1 or higher than 20, this is a perfect d20 simulation.

    Of course, it’s a lot of malarkey to go through for that result.

When To Use This Substitute

    This is the perfect die roll for bringing a sense of the absurd or ridiculous into a game. For example, when two combatants are roaring drunk.

    Anytime someone rolls below 1, they do something stupid or something completely ridiculous happens to them. Anytime someone rolls above 20, something ridiculous happens to their opponent.

    When circumstances warrant neither a farce nor a circus, there are better constructions to choose. But when those are the orders of the day, this construction is hard to beat.

Exotic Choice #10: 5d4 / d5

We’ve had multiplication and subtraction as well as the more commonplace addition – so it’s no surprise that division makes an appearance at this point.

This chart shows three curves,
all discussed in the text below (or this caption would be far too long):
5d4 / d5; [(2d4+2d6+2d8) / 3d2] +1; and (6d4 / d6) +5.

Three compositions for the price of one!

The first, 5d4 / d5, is the one we’re mainly interested in. It shows all of the classic characteristics of a divided die roll quite clearly – there’s a front, a crown, a back, and a tail with a secondary peak or ‘hump’.

The second shows how complicated these things can get. It was chosen to illustrate two things, maybe three: (1) that 2d4+2d6+2d8 have a maximum of 36, the same as 6d6; (2) that if the denominator is large enough with respect to the numerator, the ‘crown’ can compress into a single point with an extremely high probability – note the scale on the left and you’ll see that the peak is approaching 30% probability. That’s absolutely ridiculous in a roll with this many results! And (3) the back can make a smooth descent to a long tail of virtually no probability while the ‘hump’ has been flattened out of existence, so this shows how the shape of a divided die-roll curve can change.

The third is the configuration I almost chose for this section, shifted 5 spaces to the right because the resulting curve is so like the subject one that it would be confusing. But now that you can see how similar they are by having them side-by-side, you can meaningfully evaluate the differences, which are also significant in revealing traits of divided-die-roll anatomy:

See text below

First, notice that the brown line – our subject construction – isn’t quite flat at the crown, and that our reference comparison, the gold line, is even more angled. I’ve never seen one slope the other way, but it wouldn’t surprise me if I did.

Second, notice that the gold reference line has a tertiary hump at results of 8 & 9 – and, in fact, that our subject composition has one too, at 6 – it’s just a lot smaller.

Until I saw just how similar they are, I was tossing up whether or not to include 6d4 / d6 as a bonus extra, even though time is growing a little short and there’s still a lot to do. But the differences seem to be so small that it’s not worth the effort, and time, involved.

Afterthought: How do you decide where the back ends and the tail starts?

As with the multiplied die roll, there is a sudden flattening, and maybe even entry into a secondary peak. The back includes any tertiary hump(s).

In this case, 4 is a transition between crown and back; 5 is back; 6 is back and the tertiary hump; 7 is back; 8 is back; but at 9, there is a flattening, and 10 starts the buildup to the secondary hump in the tail. So 4-8 are clearly the back and 10+ is clearly tail, with 9 the dividing point, able to go either way.

From the definitions, and comparing the probability differences 8-to-9 (0.98%) to that from 9-to-10 (0.33%), there is an obvious difference that connects 9 more strongly to the tail than to the back. So I would classify 9 as the start of the tail.

Min, Max, Ave

    Minimum 1
    Maximum 20
    Average predicted 5 x 2.5 / 3 = 4.1667
    Average, measured = 5.4367 (which makes me glad that I decided to do it both ways!)
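
    That gap between the predicted and measured averages is exactly where the substitute-the-averages shortcut breaks down: the average of a quotient is not the quotient of the averages. Here’s a sketch of the enumeration, assuming the quotient is rounded down (the same treatment the division gets in Exotic Choice #12 below); round it differently and the measured figure shifts a little:

      from itertools import product

      d4, d5 = range(1, 5), range(1, 6)
      outcomes = [(a + b + c + d + e) // f                  # quotient rounded down
                  for a, b, c, d, e, f in product(d4, d4, d4, d4, d4, d5)]
      print(len(outcomes))                                  # 5120 combinations
      print(sum(outcomes) / len(outcomes))                  # measured average, about 5.44
      print(5 * 2.5 / 3)                                    # the naive prediction, about 4.17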

The Thresholds
    The 1% Threshold

      The only results with a probability of 1% or less are in the end of the tail, from 17-20.

    The 3% Threshold

      This threshold is a bit more diverse. Falling beneath it are the front (1), a little of the back (8) and most of the tail (9-11, 14-16).

    The 5% Threshold

      Between 3% and 5% there is part of the back (7) and the rest of the secondary hump in the tail (12-13). In fact, half the time, I would probably have rounded the latter (3.03%) down to include them in the 1-3% category. But the more accurate approach better reflects the anatomy of the die roll results.

    The 10% Threshold

      The 5%+ to 10% bracket has the middle of the back (5-6).

    The 15% Threshold

      In this bracket we have the remainder of the back (4).

    The 20% Threshold

      Both results in the crown climb higher than this percentage (2-3) – and between them, they will come up more than 40% of the time!

    5d4 / d5:

      1% to 3%: 1
      20% to 25%: 2-3
      10% to 15%: 4
      5% to 10%: 5-6
      3% to 5%: 7
      1% to 3%: 8
      1% to 3%: 9-11
      3% to 5%: 12-13
      1% to 3%: 14-16
      1% /-: 17-20

Slices Of Range: Percentages Of Probability
    Range Of Results

      There are 20 results, so if the curve were symmetric (it’s not) there would be two results with equal probabilities in the crown.

    Ave – Min, Max – Ave

      Here’s where things get interesting!

      5.4367 – 1 = 4.4367.
      20 – 5.4367 = 14.5633.

      One side of the average result is more than 3.2 times the size of the other!

    1/3 (Ave-Min) + Min

      The worst results band runs from the minimum (1) to

      1/3 x 4.4367 + 1 = 2.4789 – so almost exactly mid-way between 2 and 3.

      1-2 have a total probability of 23.75%.

    2/3 (Ave-Min) + Min

      2/3 x 4.4367 + 1 = 3.9578, so 4 doesn’t quite make the cut – but it’s so close that I would round to include it, anyway, splitting it in two (a leg in both camps).

      3 has a probability of 21.25%, +1/2 of 4’s probability of 13.01 = 6.505%, gives a total of 27.755%.

    The Lower Core

      Between 3.9578 and the average of 5.4367, we have 5, and the other half of 4. 5 has a probability of 8.57%, and 1/2 of 4 is still 6.505, so the total probability here is 15.075%.

      The lower bands of the curve total 66.58% of all the results!

    The Upper Core: 1/3 (Max-Ave) + Ave

      1/3 x 14.5633 + 5.4367 = 10.2911333, so the upper core stretches from 6 to 10 – that’s the lower back and the start of the tail, but not including the peak of the secondary hump.

      6-10 have a total probability of 84.34 – 66.58 = 17.76%.

    2/3 (Max-Ave) + Ave

      2/3 x 14.5633 + 5.4367 = 15.1455666, so 11 to 15 make up the ‘good but not great’ band of results. Those have a combined probability of 97.64 – 84.34 = 13.3%.

    The Lofty Outcomes

      That leaves the great results as being 16-20, with a combined probability of 100 – 97.64 = 2.36%.

      But I want to especially note the low chance of a 20 at 0.02%. Rounding error is likely to be huge, but on the face of it, you are 5 / 0.02 = 250 times more likely to get a 20 on a d20 than on this roll.

      A moment’s reflection will show why – to get there, absolutely everything has to go right. Maximum result on the 5d4 (20) and minimum result on the d5 (1). Out of 5 x 4 x 5 = 100 possible results. Actually, by my math, that’s a 1% chance, so I’m going to have to look into this a little further. One moment…

      (a few minutes later:) Okay, I’m back. My mistake in the above is in calculating the number of possible outcomes on the 5d4, which I’m sure most of you will have spotted right away.

      The correct number of possible result combinations of die faces is 4^5, not 4×5. That gives 1024, which multiplied by 5, gives 5120 combinations all told. Only 1 of them produces a result of 20, so that’s 0.01953%. And a d20 does indeed have 256.016385 times greater likelihood of resulting in a 20.

      All this might seem like a minor side-note at the moment, but I’m thinking ahead, and expecting it to weigh heavily on evaluating when to use this particular construction.

    5d4 / d5:

      1-2 ‘Worst possible roll’ = 23.75%, span 2
      3-4 ‘Poor result’ = 27.755%, span 2
      4-5 ‘Below Average result’ = 15.075%, span 2
      6-10 ‘ Above Average result’ = 17.76%, span 5
      11-15 ‘Good result’ = 13.30%, span 5
      16-20 ‘Great result’ = 2.36%, span 5
      (20 ‘Best possible result’ = 0.01953%).

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      We get to the 20% total really quickly – in fact, only one result falls into this band, a 1, which has a probability of just 2.79%. Extending the range to 2 carries it over the 20% total, to 23.75%.

      That tells me two things: (1) this tool is of limited utility for the analysis of divided die rolls because of the phenomenally steep face and high crowns; and (2) it might still be useful if I round and generalize a bit. This will compromise the precision of the result, but still give some value in terms of understanding the die roll.

      So, on that basis, the ‘lowest 20%’ contains 1-2.

    Second Lowest 20%

      And, right away, that plan goes off the rails and for exactly the same reason. The 40% mark lands between 2 and 3 and 2 has already been used – so that depopulates this entire zone. I could round 3’s 45% total down to include it, I suppose, but 45% is a full quarter of the way through to the next band.

      Part of the purpose in breaking up all these rolls in the same size divisions – the 20%’s – was to enable direct comparison. (dividing the range of results into two parts about the average and each part into thirds has a similar comparative benefit but one arranged around the results, not the probabilities). That still has value, so I’m going to accept the rounding.

      Which means that this band consists of the result of 3.

    The Middle 20%

      The 60% mark is between 4 and 5, so this band also contains just one result: 4.

    Second-Highest 20%

      We get to 80% almost exactly at 8 – we’ve had to swallow much larger deviations twice already than including 80.68 in the 61-80% band – so this is 5-8.

    Highest 20%

      Which leaves 9-20 for the rest. Basically, anything in the tail is a ‘good result’ to some degree.

    Peak Probability

      This belongs to the result of 3, at 21.25%, which narrowly beats 2, at 20.96%.

    Matching Result: 1/3 Peak Probability

      1/3 x 21.25 = 7.0833. That point-0833 can be very important because it makes it almost impossible for any result to fall exactly on the line, which is more likely to happen with an exact integer result.

      Anyway, 1 and 6-20 all fall below this line, with no results close enough to 7% to even argue about.

    Matching Result: 2/3 Peak Probability

      2/3 x 21.25 = 14.1667. Again, a clear division between the results – 4-5 are below this line and 2-3 are above it.

    5d4 / d5:

      01-20%: 1 to 2, span 2.
      21-40%: 3, span 1
      41-60%: 4, span 1
      61-80%: 5-8, span 4
      81-100% 9-20, span 12

      1: 0-7%, span 1
      2-3: 14%+, span 2
      4-5: 7-14%, span 2
      6-20: 0-7%, span 15

Summary Of Results

    This is a fairly basic divided die roll. It exhibits all the traits of that type of construction. It’s massively biased low in results, with a long tail of relatively low probability. You can spend hours playing around with variations of the general principle, and often land on unexpected results.

When To Use This Substitute

    This is the die substitute to use when failure is – in the GM’s mind – not possible, but degrees of success and complications of pathway in getting to that success ARE.

    “So, you’ve rolled a 2? No problem, here’s what happens…” followed by set-back after set-back, and a last-minute success that the characters fall into more than reach towards. In other words, it’s all about driving the narrative, about roleplay.

    And if you should happen to fall over the line with a result in the tail, that indicates one of those occasions where the universe seems bound and determined to let you succeed; even outright errors of judgment end up working to your benefit, potentially earning the party an unjustified reputation for brilliance – which they will then have to try to live up to.

Exotic Choice #11: (3d6+2) / d4

Having examined the probability curve, this construction has only one novel feature – a singular peak of probability at result 3. So I’ve decided that it’s not worth the additional time it would take, which I can put to better use on something far more exotic and interesting.

Exotic Choice #12: (4d10 / 2) – d2 +1

Okay, now this one’s subtle. If you look really closely, I think there’s the most minute difference in the two sides of the curve. To test this perception, below are graphed two curves: the Main Curve, M, and 21-M.

If there is a difference, the two will not line up.

…and there it is. A subtle but definite asymmetry.

The main roll is averaging just a little higher probability on the low side of the average and a little less on the high side. I wonder what that will do to the average?

Min, Max, Ave

    Minimum 1
    Maximum 20

    Average: Predicted: (4 x 5.5) / 2 -1.5 + 1 = 11 – 1.5 + 1 = 10.5
    Average, measured:10.24977502
    Call it 10.25. And there, again, is that very small difference manifesting itself.

    So I decided to look into why it’s there. Here’s what I found: The division by 2 implicitly rounds down results by treating odd and even rolls on the 4d10 differently. What appears to be one curve is, in fact, the sum of two interleaved curves – odds and evens. Because we’re dividing by 2, the losses on the odd-result rolls are -0.5 each, and because half the possible results of 4d10 are odd and half are even, when this gets averaged over the whole, the net effect is a -0.25 bias on the results. It’s a perfect example of how small nuances can manifest in real-world differences.
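
    If you want to see that interleaving in actual numbers, here’s a sketch that enumerates the roll with a round-down division and then isolates the odd 4d10 totals:

      from itertools import product

      totals = [sum(d) for d in product(range(1, 11), repeat=4)]        # all 10,000 results of 4d10
      results = [t // 2 - d2 + 1 for t in totals for d2 in (1, 2)]      # halve (rounding down), -d2, +1
      print(sum(results) / len(results))                                # 10.25, not the predicted 10.5

      odd_share = sum(1 for t in totals if t % 2) / len(totals)         # half the totals are odd...
      print(odd_share * 0.5)                                            # ...each losing 0.5: a -0.25 bias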

The Thresholds
    The 1% Threshold

      Below this threshold are 1-3 and 17-20, so the overall shift low has already had a significant effect. 1-3 have a cumulative probability of 0.80% (so they are well below the 1% mark individually), while 17-20 have a 1.4% total probability – but span 4 results, not 3.

    The 3% Threshold

      4-5 and 16 are below the 3% threshold. The difference in span arises because the curve is only slightly asymmetric, so a disparity in one band tends to be counterbalanced a little later on. Here, the previous band’s spans of 3 vs 4 are the disparity, and this band’s spans of 2 vs 1 are the counterbalance.

      4-5 have a total probability of 4.42%.
      16 has a probability of 2.13%.

    The 5% Threshold

      6 is almost at the 5% threshold, with a probability of 5.08%. It’s close enough for my money. At the high end, we have just 15, at 3.94% – 16, at 2.13%, has already dropped into the band below – so the spans match this time around.

    The 10% Threshold

      7 on the low side and 13-14 on the high are in the 5-10% bracket – so a disparity in spans opens up here.

      7 has a probability of 7.63%, while 13 & 14 total 15.24%.

    The 15% Threshold

      Everything that remains is in the 10-15% range, nothing breaks the 15% threshold. So that’s 8-12, which have a total probability of 59.38%.

      With no time left to even out the disparity, it has to stand – meaning that the right-hand side is cumulatively down on probability and needs a longer span to get to similar probability values. The span of this central region is 5 results.

    (4d10 /2) -d2 +1:

      <1%: 1-3 = 0.8%, span 3
      1% to 3%: 4-5 = 4.42%, span 2
      3% to 5%: 6 = 5.08%, span 1
      5% to 10%: 7 = 7.63%, span 1
      10% to 15%: 8-12 = 59.38%, span 5
      5% to 10%: 13-14 = 15.24%, span 2
      3% to 5%: 15 = 3.94%, span 1
      1% to 3%: 16 = 2.13%, span 1
      <1%: 17-20 = 1.4%, span 4

Slices Of Range: Percentages Of Probability
    Range Of Results

      Results span from 1 to 20, so a range of 20.

    Ave – Min, Max – Ave

      10.25 – 1 = 9.25.
      20 – 10.25 = 9.75

      There, once again, is the very subtle asymmetry lurking in the heart of this construction. At least I know and understand what’s causing it now.

    1/3 (Ave-Min) + Min

      1/3 x 9.25 + 1 = 4.0833, so 4 just scrapes into the lowest division of results. 1-4 have a total probability of 2.28%, roughly two-thirds of which comes from the 4 alone, and most of the rest from the 3; only a sliver is left for 1 and 2.

    2/3 (Ave-Min) + Min

      2/3 x 9.25 + 1 = 7.1667, so this band contains results from 5 to 7. They have a collective probability of 17.93 – 2.28 = 15.65%. That’s 6.864 times the probability of the previous division, meaning that you would expect to see 5, 6, or 7 come up about 7 times for every result in the 1-4 range.

    The Lower Core

      8-10 fall into this band. They have a combined probability of 53.30 – 17.93 = 35.37%.

      That’s about 2 1/4 times the probability of a 5-7 result, so for every four results in that range, you would expect to see 9 rolls producing results of 8-10.

    The Upper Core: 1/3 (Max-Ave) + Ave

      Because of the asymmetry, this has to be actually calculated.

      1/3 x 9.75 + 10.25 = 13.5. This range contains results from 11-13, and they have a combined probability of 86.22 – 53.30 = 32.92%, just a little less than the lower core.

      In fact, while probability says that it could happen sooner, what this amounts to is 12 results in this span for every 13 in the lower core.

    2/3 (Max-Ave) + Ave

      2/3 x 9.75 + 10.25 = 16.75. This band contains results from 14-16, which have a combined probability of 98.6 – 86.22 = 12.38%.

      The upper core will result 2.7 times as often as this range, so for every 8 results in the above average category, there will be 3 ‘good’ rolls.

    The Lofty Outcomes

      The best range of results are therefore 17-20, with a combined probability of just 1.4%.

      That’s a ratio of 8.8 times, so for every 5 results yielding this tier, there would be 44 rolls of the band below it.

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      20% of the probability contains results from 1-7 – so, on 100 rolls you would expect to see 20 of them within this range, give or take.

    Second Lowest 20%

      The 40% mark just fails to capture 9, so results of 8, technically, have this 20% all to themselves. That said, 40.19% is close enough that I’ll include it here for a span of 2. I think that’s a fairer representation of both results.

    The Middle 20%

      The 60% mark splits the difference between 10 and 11, while the average (10.25) sits just below that point – more evidence of the slight bias towards the low side.
      10 alone occupies this space, with an actual probability of 13.12%. When the disparity is that large (13.12% vs 20%, so almost half of the 20% is missing), you have to consider including the next result up. 1-11 has a combined probability of 66.08%, so this would be an error – but it’s a smaller error than not doing it. So this 20% is now considered to be 10-11, and to have a span of 2.

    Second-Highest 20%

      The 80% mark is distinctly between 12 and 13, so this range contains a single result 12. However, 12 only has a probability of 11.22% – even closer to 1/2 of the desired range of results. So I have to look at whether or not 13 can be included, with its 8.91% probability. The combination is a total of 20.13%, so even without looking at the combined value, I’m inclined to say yes. That combined value of 86.22, as before, does represent an error, but it’s a smaller error than not doing so, which confirms the predisposition. So this band is 12-13, a span of 2.

    Highest 20%

      But that leaves the last 20% to hold everything else – results from 14 to 20. That’s a span of 7 results, which is the same size as the first bracket, to be fair.

    Peak Probability

      Breaking this down by the alternative route requires the Peak probability. This belongs to a result of 10, without question, and 10 has a probability of 13.12%.

    Matching Result: 1/3 Peak Probability

      1/3 x 13.12 = 4.3733%.
      1-5 are below this chance, and so are 15-20. Note: spans of 5 and 6, respectively.

    Matching Result: 2/3 Peak Probability

      2/3 x 13.12 = 8.7467%.
      6-7 and 14 are below this result. I’d like to have included 13 or 15 to preserve the symmetry in this range, but the error that results is too great. Which means that this range cancels out the span discrepancy of the previous set of results.

      That leaves 8-13 as having the highest individual probabilities.

    (4d10 / 2) – d2 + 1:

      01-20%: 1 – 7, span 7
      21-40%: 8 – 9, span 2*
      41-60%: 10-11, span 2*
      61-80%: 12-13, span 2*
      81-100% 14-20, span 7

      The result of all that hand-tweaking of errors (indicated by the * in the table above) is a perfect reflection of the underlying symmetry of the curve; the bias is completely hidden. That’s why I’ve used so many analysis approaches – you can never tell which ones will definitively describe the curve, and they are all valid – just with a different emphasis.

      1-5: 0-4.37%, span 5
      6-7: 4.37-8.75%, span 2
      8-13: 8.75%+, span 6
      14: 4.37-8.75%, span 1
      15-20: 0-4.37%, span 6

      More than any other tool, this shows that this curve contains 3 major bands of results – low, middle, and high – connected by two short and therefore steep rises and falls in probability. It’s a classic bell curve, in other words. But it also highlights that slight bias low.

Summary Of Results

    And that sums up this construction, really – a classic bell curve with a hidden tiny bias.

When To Use This Substitute

    To be honest, I can’t think of an occasion that ticks all the boxes for using this alternative. Let’s check off the criteria, though, in case you are cleverer in this respect than I.

    • For the difference between this construction and 3d6 to matter, you need to be making a lot of rolls, or the bias won’t show up.
    • The range runs from 1 to 20, but the most extreme values are so unlikely that the practical range is 4-17. So you have to want to have the chance at a more extreme result, whilst making that chance vanishingly small.
    • It’s probably fair to say that this is a more ready substitute for 3d6 – but that’s not a good thing as it means you need a compelling reason to make that substitution.
    • Substituting this for a d20 roll integrates all the consequences of a bell-shaped curve, so that’s a more dramatic and potentially useful difference – but there are better choices for those cases. You need some valid reason for those choices not to work and for this choice to still be valid in order to justify using this roll. And that’s going to be rare.

    Ultimately, I think the greatest value that this construction holds is as an object lesson and a demonstration of principle.

    The object lesson relates to subtlety and nuance, and the dangers of making assumptions when probabilities are involved. They can, and from time to time will, lead you astray.

    And the demonstration of principle relates to what happens when dividing a die roll by a fixed value. The more you dig into this, the more you get swamped by minutiae becoming relevant characteristics.

    Dividing by 3, for example, means that 2/3 of results will have a rounding distortion.

    Dividing by 4 takes that up to 3/4.

    But – some of those bias errors will be larger than others. Take dividing by 10 – a rounding bias that loses 0.1 is not very large, while a bias that loses 0.9 is comparatively huge. And the overall impact: A bias adjustment of -0.5.

    Compare that with the divide-by-3: some results will have an error of -1/3, some of -2/3, and some will have no error at all. And the average of -1/3 and -2/3? It’s -0.5 – again.

    Dice with unequal numbers of odd results vs even can amplify or diminish the bias slightly. That requires the d# to be odd – so d3, or d5, or d7 if you can find them (the only ones I’ve seen are marked with the days of the week). And so on.
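
    If you want to see that principle in action for any die and divisor, a tiny helper does the job – a sketch only, using round-down division throughout:

      from fractions import Fraction

      def rounding_bias(die_size, divisor):
          """Fraction of faces that suffer a rounding loss, and the average loss."""
          losses = [Fraction(face % divisor, divisor) for face in range(1, die_size + 1)]
          distorted = sum(1 for loss in losses if loss)
          return distorted / die_size, float(sum(losses) / die_size)

      for die, div in [(6, 3), (20, 3), (20, 4), (10, 10)]:
          frac, loss = rounding_bias(die, div)
          print(f"d{die} / {div}: {frac:.0%} of faces distorted, average loss {loss:.3f}")

    The d6 / 3 line confirms the 2/3 figure above; the other lines show how the distortion grows as the divisor does.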

    Is the resulting bias large enough to justify the complexity of the process and analysis? I’m not sure that it is, but can’t say that it isn’t either.

BONUS EXTRA: Exotic Choice #13: log (base 2) [(d6 / 3) ^ d8] + d8 + 5

I’ve saved the weirdest till last! The ultimate in weirdness, this possibility came to me at the last possible moment, just a day before posting this work.

I couldn’t fully analyze this on my own (lack of time more than anything else), so I sought help from Google’s Gemini.

And AnyDice doesn’t understand logarithms, though it does understand exponents. So I’m going to have to do all the analysis the hard way, using a spreadsheet. Which will take additional time.

It’s even possible that I won’t have time to write it up before publication – in which case, I’ll update the post on Thursday and you can read all about this weirdie on Friday.

In which case, right now, the article will shift into ‘conclusions’ mode – but when people check back, they will find this final section miraculously inflated with content!

Have you ever seen anything like it? It looks like some sort of geological formation – but it only uses 3 dice!

The power of this lies in the d6. How well you roll on it determines what effect the first d8 (the exponent) has on the total. If it is 1-2, the d8 term turns negative and can drag the total much lower; if it’s 3, the effect is exactly neutral, and on a 4 it’s only mildly positive; and if the result is 5-6, the d8 term turns strongly positive and can push the total much higher – depending on what you roll on that d8.

The logarithm then compresses the results back down to a usable scale while placing emphasis on low results.

The second d8 smooths the curve a little, and fills in any gaps, while the +5 shifts the curve into the result space we want.

But it’s by far the weirdest computed probability curve that I’ve ever seen.
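
If you’d rather use code than a spreadsheet, here’s a minimal Python sketch of the enumeration. It assumes the two d8s are rolled independently and that the final value is rounded to the nearest whole number – assumptions on my part, so expect some differences from the figures below, particularly at the very bottom of the range.

    import math
    from collections import Counter
    from itertools import product

    counts = Counter()
    for d6, d8_exp, d8_add in product(range(1, 7), range(1, 9), range(1, 9)):
        raw = math.log2((d6 / 3) ** d8_exp) + d8_add + 5   # the construction itself
        counts[round(raw)] += 1                            # round to the nearest whole number

    total = sum(counts.values())                           # 6 x 8 x 8 = 384 combinations
    for result in sorted(counts):
        print(f"{result:>3}: {100 * counts[result] / total:5.2f}%")

    # A handful of combinations land below 1; how those are folded back into
    # the 1-21 range described below isn't spelled out here, so treat the low
    # end of this output as indicative only.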

NOTE that you can take results off the table and replace them with results of 20 just by increasing the modifier from +5 to +6.

Min, Max, Ave

    Minimum 1
    Maximum 21
    Average (measured) 6.67989

The Thresholds
    The 1% Threshold

      Two results fall below this line: 19 and 21.

    The 3% Threshold

      2-3, 17-18, and 20 are all below this line. 16 is so close to it that I will include it too, at 3.033%.

    The 5% Threshold

      4-5 and 15 are in this band.

    The 10% Threshold

      Between 5% and 10% we find everything else – nothing crosses this boundary. So that’s 6-14.

    log2 (d6/3) ^ d8 +d8 +5:

      1% to 3%: 2-3 = 4.841%
      3% to 5%: 4-5 = 6.794%
      5% to 10%: 6-14 = 71.942%
      3% to 5%: 15 = 4.688%
      1% to 3%: 16-18 = 6.894%
      <1%: 19 = 0.827%
      1% to 3%: 20 = 1.869%
      <1%: 21 = 0.276%

Slices Of Range: Percentages Of Probability
    Range Of Results

      21 results are possible. But with the average so far removed from the mid-point of this range, the roll is biased somewhat low, and that will be reflected in the divisions.

    Ave – Min, Max – Ave

      6.67989 – 1 = 5.67989
      21 – 6.67989 = 14.32011

    1/3 (Ave-Min) + Min

      1/3 x 5.67989 + 1 = 2.89329. The lower band contains 1 and 2. Coming close to inclusion is 3, but not quite close enough.

      1-2 have a total probability of 4.03%.

    2/3 (Ave-Min) + Min

      2/3 x 5.67989 + 1 = 3.78659.

      The only result in this span is 3, which has a probability of 2.681%.

    The Lower Core

      Between 3.78659 and the average are 4, 5, and 6. They have a total probability of 13.106%.

    The Upper Core: 1/3 (Max-Ave) + Ave

      As usual with an asymmetric roll, this has to be calculated; it won’t be the same as the span on the other side of the average.

      1/3 x 14.32011 + 6.67989 = 11.45326, so this band of results contains everything from 7-11, a combined probability of 42.742%.

    2/3 (Max-Ave) + Ave

      2/3 x 14.32011 + 6.67989 = 16.22663, so this band contains results from 12 to 16, a combined probability of 30.609%.

    The Lofty Outcomes

      At the very top, we have results 17-21, which have a cumulative probability of 6.833%.

    log2 (d6/3) ^ d8 +d8 +5:

      1-2 ‘Worst possible roll’ = 4.03%, span 2
      3 ‘Poor result’ = 2.681%, span 1
      4-6 ‘Below Average result’ = 13.106%, span 3
      7-11 ‘Above Average result’ = 42.742%, span 5
      12-16 ‘Good result’ = 30.609%, span 5
      17-21 ‘Great result’ = 6.833%, span 5
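
      All of these band boundaries are derived the same way, so if you want to apply this carve-up to a different roll, a small helper (a sketch – feed it whatever min, max, and average figures you have) will do the arithmetic:

        def range_slices(lo, hi, ave):
            """Boundary values for the six 'slices of range' bands."""
            return [
                lo,
                (ave - lo) / 3 + lo,       # 1/3 (Ave-Min) + Min
                2 * (ave - lo) / 3 + lo,   # 2/3 (Ave-Min) + Min
                ave,
                (hi - ave) / 3 + ave,      # 1/3 (Max-Ave) + Ave
                2 * (hi - ave) / 3 + ave,  # 2/3 (Max-Ave) + Ave
                hi,
            ]

        print(range_slices(3, 18, 10.5))   # plain 3d6: [3, 5.5, 8.0, 10.5, 13.0, 15.5, 18]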

Slices Of Probability: The Definitive Result Values
    The Lowest 20%

      1-6 will be the lowest 20% of results.

    Second Lowest 20%

      The 40% mark captures results 7-8.

    The Middle 20%

      The 60% mark contains 9-10. Result 11, which would take the cumulative percentage to 62.56%, just misses out.

    Second-Highest 20%

      So 11 starts this band off; it ends at the 80% mark with a result of 13, capturing 12 along the way.

    Highest 20%

      Leaving 14-20 as the top end of town.

    Peak Probability

      The other way of dividing results up is to stratify them by fractions of peak probability, which in this case is 11 at 9.100%.

    Matching Result: 1/3 Peak Probability

      1/3 of 9.1% is 3.0333%. Below that line we find 1-3 and 17-21, with 16 exactly on the line.

    Matching Result: 2/3 Peak Probability

      2/3 of 9.1% is 6.0667%. Between this line and the previous one we have a middle stratum of results: 4-5 and 14-15.

      Which in turn means that 6-13 are in the uppermost stratum.

    log2 (d6/3) ^ d8 +d8 +5:

      01-20%: 1 – 6, span 6
      21-40%: 7 – 8, span 2
      41-60%: 9-10, span 2
      61-80%: 11-13, span 3
      81-100%: 14-20, span 7

      1-3: 0-3.0333%, span 3
      4-5: 3.0333-6.0667%, span 2
      6-13: >6.0667%, span 8
      14-15: 3.0333-6.0667%, span 2
      16-21: 0-3.0333%, span 6

      When you cut the results up this way, the result seems relatively prosaic, barely hinting at the complexity below the surface.

      You can get even stranger results if you use 3d6/12 as the core roll. Some of the results I got while playing around with the concept looked like a cartoon shark’s tooth!

Summary Of Results

    In this case, nothing captures the nuance of what’s going on quite as well as the graph that I made at the top. It’s a bell curve with a flattened top and a longer descent to a secondary peak at 20 – but it’s lumpy.

    That’s because this isn’t one curve, it’s the sum of six different curves.

    There’s log2 (1/3 ^ d8) + d8,
    log2 (2/3 ^ d8) + d8,
    log2 (1 ^ d8) + d8 = d8,
    log2 (4/3 ^ d8) + d8,
    log2 (5/3 ^ d8) + d8, and
    log2 (2 ^ d8) + d8 = 2d8.

    Plus the modifier to shift the results, of course.
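
    To see why those six curves behave so differently, look at the multiplier each d6 result applies to every point rolled on the exponent d8 – a one-liner makes the point (illustrative only):

      import math

      # log2(k/3) is what each point of the exponent d8 adds to (or subtracts from) the total
      for k in range(1, 7):
          print(f"d6 = {k}: {math.log2(k / 3):+.3f} per point of the first d8")

    A 3 contributes nothing at all, a 6 contributes a full +1 per point (which is why that component collapses to 2d8), and a 1 drags the total down fastest of all.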

When To Use This Substitute

    This is the perfect roll to use when results could go either way and snowball, because that’s exactly what is being simulated. The d6 controls the ‘either way’ and the exponentiated d8 controls the degree of snowballing, from none (d8=1) to massive (d8=8). The rest of the construction is just there to make things pretty, and functional.

    There is a slight bias low, which is why the average is so low – but that is compensated for because a d6 has an even number of faces, so division by 3 adds a bias high. The result is the tail, which is clearly longer than the front-face of the curve.

The Wrap-up

If the content below looks familiar, it’s because it is, in essence, a summary of the ‘when to use this roll’ discussion, re-sequenced into a more streamlined narrative, with less focus on the die rolls and more focus on the circumstances that suggest their use be considered.

Replacing a d20:
  • When you need one and don’t have one to hand, 10 x (d2-1) + d10 or 5 x (d4-1) + d5 are perfect replacements.
  • For everyday skill checks with little value in an extreme result, consider 4d6-3. Add + modifiers to nuance the odds in the character’s favor.
  • Ditto combat training.
  • Consider using 3d6 for anything involving biological systems to take advantage of the trend toward the average. Be very aware of the impact of modifiers – which results become impossible, and which results get put on the table to replace them, and what it does to the neutral bias relative to a d20.
  • When you want to take extreme results off the table but still want to preserve a lot of the evenness of results throughout the range, consider 2d10.
  • You can put fumbles back on the landscape with 2d10-1 but this takes away the critical success possibility. This is recommended for a character performing a task unskilled.
  • Also consider 2d10-# when the game system states that you need a certain minimum attack bonus even to hit – it transforms ‘impossible’ into ‘unlikely’, giving your PCs a chance to survive. Works for NPCs up against PCs decked out with magical gear, too.

  • When a player indicates that near-enough-will-be-good-enough, use 2d10+#. It makes extremely good results more unlikely while increasing the likelihood of success. 3d6+# has the same effect but with a stronger bias toward the average result. This is also appropriate when time is more important than ‘pretty’.
  • Whenever 2d10 is an option, d10+d12-1 also needs to be considered. This makes extreme results just a little more common and resists the trend to the central results a little bit more.
  • When someone is being taught a skill, consider d8+d12, representing a supervisor who will gently nudge toward a satisfactory result, helping out when things get sticky. This roll makes both extremes less likely.
  • When a delicate situation could abruptly swing either way, consider using 2d4+d12 instead of d20. Especially when one character is actively trying to help or hinder another. Extremely sensitive to modifiers; there’s a whole range of nuanced options to pick from.
  • When you want to give players a sense that they are ‘winning’ (even if they aren’t), consider using 2d8+d6-2 instead of d20. Extreme results are more possible than on some other rolls but the overall average is higher, so success is more likely. At the same time, there is a mild push toward more average results.
  • Alternatively, consider d4+d6+d12-2, which is a flatter, more evenly distributed option with greater potential for extreme results. Or d4+d8+d10-2, which is not significantly different.
  • When you want the character to succeed while preserving the chance of potential failure, consider 3d8 – 3. This has an average result of 10.5, same as a d20, but that goes up by 1 for each +1 modifier to the roll. It might be easy to be too heavy-handed in this respect.
  • When you want to convey to a player that they are making a stupid mistake that you don’t want to succeed for the sake of the game, use 2d6+1 (for high results desired) or 2d6+6 (for low results desired).
  • When you want to convey to a player that the circumstances don’t really permit failure and probably don’t need to be rolled (but they insist or it’s an NPC doing something contrary to what the PCs would want), use 2d6+1 (for low results desired) or 2d6+6 (for high results desired).
  • When constructing a table based on a probability chart drawn to your specifications, consider using d4 x d6 – d4 +1 instead and letting the roll do all the hard work.
  • When you want to bring a carnival atmosphere, a sense of the absurd, into the game, use d30 +1 – d10 instead of d20. If you roll less than 1, the opponents do something monumentally stupid or something ridiculous happens to them; if you roll above 20, the shoe is on the other foot.
  • When the GM thinks that there is no chance of failure but degrees of success or complications to be overcome getting to success are present, consider replacing d20 with 5d4 / d5. A gateway to roleplaying.
  • When results could go either way and snowball quickly, consider using log2 (d6/3) ^ d8 +d8 +5.

    If you don’t know how to do a logarithm to the base X, the trick is

    logX(#) = log(#) / log(X).

    For example, log(1024) is 3.0103; log(1024) to the base of 2 is 3.0103 / log(2) = 10. Which means that 1024 is 2^10.

    Absurdities that are real: the log of 1024 to the base of pi is 6.0551 – that is, pi ^ 6.0551 = 1024. I don’t know why you would ever need to know that, but this is the technique that lets you calculate it if you ever do.
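
    In code, most languages give you the same trick directly; a quick Python illustration:

      import math

      print(math.log10(1024) / math.log10(2))   # change of base by hand: 10.0 (to float precision)
      print(math.log2(1024))                    # the same thing via a built-in: 10.0
      print(math.log(1024, math.pi))            # the pi 'absurdity': roughly 6.0551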

Replacing 3d6:
  • For additional drama: consider 4d6-3. Especially to resolve skill checks in which there is significant opposition or circumstantial difficulty to overcome.
  • Consider using a d20 and re-rolling any result below a threshold to describe the results of genetic modification or selective breeding.
  • Consider using d8+d12 to replace 3d6 for the simulation of poisons and diseases, where some effect takes place but extreme effects are unlikely – but can be worse than on a 3d6 roll. But there are better options even than this for this circumstance.
  • When a delicate situation could abruptly swing either way, consider using 2d4+d12 instead of 3d6. Especially when one character is actively trying to help or hinder another. Extremely sensitive to modifiers; there’s a whole range of nuanced options to pick from.

  • Consider using 2d6+d8 instead of 3d6 when the outcome is of lower importance. It has a much lower chance of an extreme result and more even chances of anything else. Modifiers are especially powerful. So trouble is more likely to happen and can be better mitigated by arranging circumstances in your favor. This encourages roleplaying AND tactical thinking.
  • 2d8+d6-2 increases the potential diversity of results, useful for situations that are on a knife-edge. Far less centrally-dominated than a 3d6 roll.
  • Alternatively, consider d4+d6+d12-2, which is a flatter, more evenly distributed option with greater potential for extreme results. Or d4+d8+d10-2, which is not significantly different.
  • When you want to convey to a player that they are making a stupid mistake that you don’t want to succeed for the sake of the game, use 2d6+1 (for high results desired) or 2d6+6 (for low results desired).
  • When you want to convey to a player that the circumstances don’t really permit failure and probably don’t need to be rolled (but they insist or it’s an NPC doing something contrary to what the PCs would want), use 2d6+1 (for low results desired) or 2d6+6 (for high results desired).
  • When constructing a table based on a probability chart drawn to your specifications, consider using d4 x d6 – d4 +4 instead and letting the roll do all the hard work.
  • When you want to bring a carnival atmosphere, a sense of the absurd, into the game, use d30 +1 – d10 instead of 3d6. If you roll less than 3, the opponents do something monumentally stupid or something ridiculous happens to them; if you roll above 18, the shoe is on the other foot.
  • When the GM thinks that there is no chance of failure but degrees of success or complications to be overcome getting to success are present, consider replacing 3d6 with 5d4 / d5. A gateway to roleplaying.
  • When results could go either way and snowball quickly, consider using log2 (d6/3) ^ d8 +d8 +5. See the notes on d20 substitution if you don’t know how to turn a logarithm in one base (usually 10 or e) into another (for example, 2). This comes up all the time in the Hero System, where +5 = twice as much. For example, adding 20 strength means you can lift 2 ^ (20/5) times as much = 2 ^ 4 = 16 times, and 5 points of temporary stat damage means you have half as much of that stat – see the sketch after this list.
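
  A quick sketch of that Hero System arithmetic, for the spreadsheet-inclined (illustrative only – check your edition’s actual tables before leaning on it):

    def hero_multiplier(points):
        """Every +5 points doubles the effect, so the multiplier is 2 ^ (points / 5)."""
        return 2 ** (points / 5)

    print(hero_multiplier(20))   # +20 STR -> 16.0 x the lifting capacity
    print(hero_multiplier(-5))   # 5 points of temporary damage -> 0.5 x (half as much)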

And that brings to a close yet another example of “it seemed like a quick and easy post when I started”. It wasn’t – it’s been arduous and grinding, with lots of detail needing very close attention and very high levels of concentration, which were mentally exhausting – to the point where I could only do about 1/2 a die roll’s analysis in a session without pausing to recuperate and recharge.

But I think the results are worthwhile, and in some cases, fun!

