Campaign Mastery helps tabletop RPG GMs knock their players' socks off through tips, how-to articles, and GMing tricks that build memorable campaigns from start to finish.

Strange-taste Worldbuilding: Pizza Adjectives


World-building through an exotic-cuisine technique – it’s easier than you might think, AND more effective.

I took the original image, by petrovhey from Pixabay (shown below), and subjected parts of it to various color-shifting and blending processes to create the otherworldly pizza shown above.

The original image, credit as above.

This post is one of those ideas that comes to you out of nowhere but, as you’ll see, is well worth sharing as a technique.

One of the hardest things to do is to convey a sense of the action taking place on a strange world, in a fantasy environment, or on a future Earth. Well, I’ve come up with a way to generate color content that does just that.

I’ve divided the subject into two, maybe three, parts: Pizza Adjectives, Strange Tech, and (maybe) Strange Words. My goal is to present a straightforward and simple mechanic or process for each of these categories that will function equally well in both Sci-Fi- and Fantasy-based genres (Steampunk should be considered fantasy that employs the sci-fi methodology for its technology).

Parts 1 and 2 will appear in close succession because I already have processes / techniques for them worked out. Part 3 might not appear for a while because it is, by far, the most difficult to make practical – I’ve thought of three different solutions to the problem, and none of them are anywhere close to being useful for the busy GM.

Let’s get started…

Pizza Adjectives

Pizza Adjectives – the initial thought that spurred the creation of this article – is where I’ll start. The fundamental premise of a pizza is a large base with a selected combination of toppings, served in slices; it is assumed that this basic combination is more-or-less universal, and that everywhere will have its own version or local equivalent.

Prior to the mid-20th century, that wasn’t the case. The modern Pizza, as we recognize it today, with a foundation of cheese and a tomato paste or sauce, traces back to Naples, Italy, in the late 18th century, but one can consider the open-faced sandwich to be an early form of the same concept.

That pushes the concept back to 1762, when the sandwich was invented by John Montagu, the 4th Earl of Sandwich, who wanted food in a form that could be consumed while gambling.

Open-faced sandwiches – more closely akin to Pizzas – date back still further, with roots in the trenchers of the Middle Ages and 17th-century Dutch tavern meals. A trencher uses a thick slice of bread as a plate.

It’s even possible to push the concept back further, to Hillel the Elder in the 1st century BC, growing progressively further away from the modern concept. White flatbreads with toppings existed in ancient Greece, Rome, and Egypt.

That makes this foundation concept “true enough for game purposes”.

Conceptual Technique

The technique that this article proposes and explores is incredibly simple – attach an adjective to the pizza that wouldn’t normally apply, and then make sense of it, using locally-available ingredients, to create something unique to this specific location.

Because a pizza has two basic parts – base and toppings – you can consider them individually or can combine variations on the two. And you will get different answers if you consider the paste to be part of the ‘base’ concept or not.

But the real power of the technique is this: it can be applied to non-pizza dishes. I’ll get into that a little later. But let’s start at the bottom, with Pizza Bases.
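For GMs who like a little automation, the technique’s prompt-generation half can be reduced to a dice roll. Here’s a minimal Python sketch – the adjective and dish lists are purely illustrative examples of my own, not canonical tables, and should be extended with your campaign’s flavor:

```python
import random

# Illustrative word lists -- extend these with your own campaign's flavor.
ADJECTIVES = ["Cloudy", "Liquid", "Zingy Seed", "Minty Leaf",
              "Creamy", "Fruity", "Smoky", "Crystalline"]
DISHES = ["Pizza", "Stew", "Dumpling", "Flatbread", "Skewer"]

def strange_dish(rng=random):
    """Pair a dish with an adjective that wouldn't normally apply.

    The GM's creative work -- making sense of the pairing with
    locally-available ingredients -- happens after the roll.
    """
    return f"{rng.choice(ADJECTIVES)} {rng.choice(DISHES)}"

# Roll a few prompts for the next session:
for _ in range(3):
    print(strange_dish())
```

The code only produces the prompt; the whole point of the exercise, as described above, is the interpretation step that follows.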

Pizza Base

Pizza bases can be thick or thin. Their primary job is to provide a foundation that holds the toppings together – there’s the ‘plate’ concept again, reaffirming the ubiquity of the idea.

They can be uncooked until the dish is assembled, or can be a pre-cooked bread of some sort.

It is becoming more common to consider them a contributor to the total flavor profile, and this has sparked the creation of many variations on the basic theme, ranging from adding donut batter to the pizza base recipe through to adding different seeds or herbs to shift the basic flavor profile. I’ve also seen pizza bases made with a variation on the usual milk – goat’s milk and camel milk, for example – for further subtle variations and nuances. Using grain variants is also possible – ground malt or barley, or rice flour, instead of wheat flour.

So there’s a lot of room for development even in this comparatively bland component – and that’s long before we even get to concepts like stuffed crusts!

And that’s how the ‘strange adjective’ can be made to work. To demonstrate the technique in action, here are four examples: “Cloudy”, “Liquid”, “Zingy Seed”, and “Minty Leaf”.

    Example #1: “Cloudy”

    Cloudy suggests some sort of aerogel to me. That would essentially be flavorless. It would need to be coated with something more solid on one side to provide structural integrity, and that coating can carry a flavor; the aerogel itself could carry some sort of flavor, added to the foam mixture before it is sprayed onto that foundation; and the whole could be coated with a flavorful emulsion of some sort. In addition, the deposition process could be interrupted half-way through to scatter some lightweight seeds or leaves that become encapsulated within the foam when spraying is resumed.

    When the pizza is cooked, there would likely be considerable contraction of the bubble contents, creating something more mechanically robust, while the bubbles would burst, forming cells of hot air.

    But the dominant trait of this construction would be a pizza base which contributed to the flavor of the final product while requiring the consumption of virtually zero calories. This is a pizza base for the responsible, health-conscious, adult, in an environment where food is plentiful and the problems of over-indulgence are more common than those of malnutrition.

    And I’m sure that this would be an angle explored by the pizza-makers – adding electrolytes, vitamin supplements, even protein powders – with appropriate flavorings.

    The addition of this one word, ‘cloudy’, opens up a whole new branch of the culinary art.

    It’s also likely that, for the purposes of indulgence, there would be a countermove in the opposite direction – you can already get fish and the like coated in a “beer batter”, which could likewise be added to a more traditional wheat-based dough recipe – but that’s getting a little off-target.

    Example #2 “Liquid”

    Does anyone not know what a Trifle is? If so, look it up, because I think you’ll be in the minority.

    Layers of cake and topping sandwiched by custard in a tall glass is what immediately comes to mind as an interpretation of ‘Liquid Pizza’. Except that it wouldn’t be custard, it would be a savory sauce of some kind, stiff enough that other components would remain in place within the glass. And it wouldn’t be cake, it would be something else – chunks of a sweet bread coated in something to resist becoming soaked by the sauce. Because there would be a lot of the sauce in comparison to a traditional pizza, it would have to be a lot more bland in flavor, providing a vehicle for the flavors of the ‘toppings’ suspended in it.

    In an inversion of the usual pizza content, it might be meat-flavored with chunks of fruit and/or vegetables and layers of sprinkled nuts. And maybe slices of, or a crumbled variation of, a meatloaf.

    That’s what I imagine when I hear the label “Liquid Pizza”.

    Of course, there are also sweet-flavored pizzas made with fruit – which hearkens back to the original concept of a trifle.

    Example #3: “Zingy Seed”

    Imagine a traditional pizza base with savory-flavored popping candy mixed into the dough. That’s my first thought.

    My second thought is “What does ‘Zingy’ actually mean in terms of flavor?” Well, that’s something easily answered through the use of the internet. According to Google, “Zingy” describes a taste that is sharp, fresh, and lively, often leaving a tingling or stimulating sensation on the tongue. It is typically associated with vibrant, acidic, or slightly spicy notes that make a dish feel energetic, bright, and invigorating rather than dull or flat.

    Citric flavors, vinegars, ginger, cayenne, and peppery flavors can all contribute to the sense of a flavor being ‘zingy’. Sauces described as ‘zingy’ are often used on blander salads.

    So a ‘zingy’ flavored popping candy would be little flavor-bombs of umami sharpness, adding a meaty tone to something that has little or no actual meat. A meat-lover’s variation on the vegetarian pizza, if you will!

    Example #4: “Minty Leaf”

    Adding chopped mint and coriander (or some other peppery plant) into the pizza base produces a very different flavor profile, one perfect for certain meats like lamb or goat. You would probably want a stronger-flavored cheese to weld the flavors together, and that might need some experimentation to get right – I’m expecting a cheese blend of some sort to be the ultimate answer.

Four radically different foundations for a pizza. Each of them, paired with complementary toppings, could produce a dish that is absolutely delicious. And none of them require anything more exotic than chemistry already familiar to us. The last one is definitely something that could be employed in a fantasy campaign.

And none of them are built around exotic native ingredients, which can add new variations and distinctiveness. In Australia, for example, we have something called “Lemon Myrtle” – Google describes it as a vibrant, intense citrus flavor, often summarized as “more lemon than lemon,” combining notes of lemon zest, lemongrass, and a hint of eucalyptus or menthol. It is sweet, aromatic, and tangy, yet lacks the harsh acidity or sourness of actual lemon juice, making it ideal for dairy-based dishes.

My experience with it is of a flavor that is punchier than lemon but less diffused, more of an accent than a foundational flavor. And it often seems to have a slight earthiness that pure lemon lacks.

Don’t be afraid to invent something – there are never enough ‘different plants’ in Fantasy / Alien environments, anyway, and there are usually all sorts of wild critters that would add to the range of palate options even if only semi-domesticated.

Which brings me to the subject of Pizza Topping Supplementals, our next stop in this culinary exploration.

Pizza Topping Supplementals

Far more so than with the pizza base, the toppings are where the GM can connect a distinctive flourish with a specific location.

When I’m contemplating a multi-world sci-fi setting, I pick a couple of ingredients as ‘ubiquitous’ – these ingredients, or something very like them, are going to be found almost everywhere. The more an ingredient is a signature component of a particular broad cuisine, the better a candidate for this treatment it makes. So tomatoes are, arguably, the cornerstone of Italian cuisine as we understand it today. Pepperoni is just a spicy sausage (for all that I love it as a topping), and many European cultures have those or something similar, but it’s really hard to imagine pizza without tomato, let alone many of the other definitively Italian dishes.

I then treat every remaining ingredient in a commonplace recipe from Earth – one that I know works as a combination – as a placeholder reading “insert local ingredient here”. And then I’ll transpose flavor profiles from another, similar-but-different recipe, or swap the ingredients’ flavors around. A meat that has the rich, tart-but-sweet flavor of tomato paste, plus a paste / thick sauce that carries the flavor of onion and garlic, and a leafy vegetable that tastes like ground beef – throw in some alien names for the source ingredients and you have a recipe that you know would be palatable, yet subtly unique, and definitively tied to the location from which these ingredients derive. Give it an overall name as a recipe – “Yamarkian Pizza” or something – and the act of worldbuilding is complete. At one stroke, you make the location more unique and more real, tie that into the culture, and connect that, through the food available, back to the distinctiveness of all three elements – the place, the culture, and the food. The distinctiveness and realism permeate all three at the same time.
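The transposition step can even be done mechanically. This Python sketch rotates a list of Earth flavor profiles against a list of local carriers; the flavor descriptions come from the example above, but the local ingredient names (“grensh haunch” and so on) are pure invention for illustration:

```python
# Flavor profiles from a commonplace Earth recipe that is known to work,
# each tagged with the ingredient that normally carries it:
earth_flavors = [
    "savory and meaty",        # normally: ground beef
    "rich, tart-but-sweet",    # normally: tomato paste
    "pungent and aromatic",    # normally: onion + garlic
]

# Invented local carriers -- these names are assumptions made up
# purely for this illustration:
local_carriers = [
    "grensh haunch (a meat)",
    "tuber mash (a thick paste)",
    "vellum leaf (a leafy vegetable)",
]

def transpose(flavors, carriers, rotate_by=1):
    """Rotate the flavor list against the carriers so each familiar
    flavor turns up on an unfamiliar kind of ingredient."""
    shifted = flavors[rotate_by:] + flavors[:rotate_by]
    return dict(zip(carriers, shifted))

yamarkian_pizza = transpose(earth_flavors, local_carriers)
for carrier, flavor in yamarkian_pizza.items():
    print(f"{carrier}: tastes {flavor}")
```

With the default rotation, the meat ends up tasting of tomato paste, the paste of onion and garlic, and the leafy vegetable of ground beef – exactly the kind of familiar-yet-alien swap described above.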

On its own, this is not enough to make any of the three distinguishably unique – this is just a worldbuilding ‘brick’ – but it’s a solid one.

    Example #5: “Curried Bloodwing”

    Pizzas built around curried meats are rare, even in regions where curries are popular, because tomato doesn’t really go all that well with them. Instead, you would want something creamy and mildly spicy that prevents the curry flavor from being too dominant, too overwhelming.

    But if you hunt around a bit, you can find such pizzas. Instead of tomato paste, they are often based on a blend of Hommus and Yoghurt, especially here in Australia, where we love a good fusion of cuisines. Hommus, for anyone who doesn’t know it, is a thicker, dip-like sauce with a creamy-but-slightly-gritty texture and a sesame and chickpea flavor. Onion, garlic, and lemon are sometimes also blended in. Technically, these are Tarator, or (with added roasted eggplant) Moutabel, but the ubiquitous Australian terms are just “Hommus”, “Hummas”, or “Yogurt Hommas”.

    So let’s postulate a meat with a slightly nutty flavor profile from a bat-like creature called a “Bloodwing”, which in turn gets its name from the bright red color that it assumes when cooked. Because it can sometimes be a little gamey, Bloodwings not being a domesticated species, something that tones down the flavor somewhat is the preferred treatment, and a curry fits that bill precisely.

    A pre-cooked base is placed on a tray and coated with a slightly thicker yoghurt hommas. The pre-cooked curried bloodwing and some peppery leaves are placed on top, and the whole placed in a pizza oven just long enough to reheat the ingredients. The sauce is more like a ‘stable custard’ than anything else; it doesn’t break down very much under heat. Because it doesn’t crust, the sauce acts as a flavor sponge, absorbing the flavor of the meat and leaves, creating swirls of spice and color.

    Variations on the curry – sweeter, milder, or hotter – provide distinguishing touches. It’s a pizza, but one both utterly unlike anything most people have had before and completely recognizable. And the red-and-green-and-yellow on white would create an immediately-appealing visual presentation.

    This could even work with meatballs cooked in a tomato-like sauce, for a radically different interpretation of “Pizza” in the more traditional sense!

    Example #6: “Creamy”

    When I think of the word “Creamy”, the first two things that come to mind are carbonara – pasta, meat, and a creamy sauce – and the texture of perfectly-cooked calamari, which is sometimes served in such a sauce but is usually coated in some sort of breadcrumb or batter.

    The first would be more challenging as a pizza motif, and would probably be too similar to a variation on “Curried Bloodwing”, above, so for the same campaign – and at least initially – I would focus on the latter.

    Calamari rings are squid-based, but let’s shift that sideways and go for an octopus-like creature where the limbs can be sliced into rings.

    Imagine an octopus that adapts first to become amphibious, and then to become wholly land-based. To shift expectations sideways a little, let’s replace “Octopus” with “Hexapus” or “Decipus” – I like the latter name a little more than the former, so 10-limbed it is. I think that Octopuses rely on salt water to keep their skin moist and flexible, so this adaptation would require crustacean-like shells to ‘trap’ salt water between shell and underbody. That, in turn, would give a Decipus both natural defenses and natural weapons, which would enable the species to compete and thrive, growing larger with the generations until it was the size of a wolf or large hound. Capable of short bursts of blinding speed and a high natural intelligence, it would become an apex predator, capable of defeating much larger opponents.

    The larger a calamari ring, the tougher it is likely to be unless prepared perfectly; the smaller, the more forgiving. But let’s set that aside and have this meat be gentle and forgiving of over- or under-cooking without becoming rubbery or leathery.

    And, finally, let’s say that it has a more crab-like flavor.

    Pizza base – probably fairly traditional.

    Pizza sauce – seafood generally needs something acidic, be it citric or vinegar-based. But tomato – with something of this sort added – would serve quite adequately. To make it more exotic, let’s postulate a citrus that is strongly lemony with a hint of lime, and call it “Limeny Grapes”, then incorporate that into a tomato paste.

    Rings of fried Decipus, possibly crumbed or battered, get placed on top, and grilled fish chunks are then placed inside the rings. Leaves from a salty & peppery bush – maybe called a “[planet-name] saltbush” – are added around the outside of the rings.

    The result is a seafood pizza that is reminiscent of Cioppino or Marinara and at the same time culinarily unique.

    Example #7: “Fruity”

    You can already get fruit-based pizzas, entirely setting aside things like “Ham & Pineapple” which arouse so much heated debate. So this adjective is so natural (if unusual) when applied to a pizza that we’re going to have to work especially hard to make it distinctive and unique.

    High-concept, even.

    So let’s go with a jellied or confit meat and a fruity flavor profile with no fruit of any kind.

    Jellying involves creating a savory jelly (aspic) by boiling collagen-rich meats to extract natural gelatin, which then encases meat pieces and sets when cooled. Collagen-rich parts (pork trotters, hocks, ears, chicken feet) are simmered slowly (3–8 hours) with meat, vegetables, and spices. Most jellies are served cold.

    Confiting is a preservation method – originally designed for times before refrigeration – that involves cooking meat slowly in fat, then storing it submerged in that same fat. The meat (duck, goose, or pork) is first cured with salt and aromatics for 12–24 hours to draw out the moisture and tighten the texture. It is then slow-cooked in a rendered fat like duck fat, a process taking another 3–10 hours. The fat is absorbed into the dried meat, replacing the moisture that was there with something that will survive reheating. The cooked mixture is packed into jars or containers and cooled, creating an oxygen-free environment that can preserve the meat for months in a cool place. The end result is an extremely tender and moist meat that can be served cold or crisped before serving.

    To meet the expectations created by the term ‘fruity’, we need to change the jelly or fat into something with a sweeter, fruitier flavor. Simply including fruit in the jelly or fat would achieve the goal. But you wouldn’t want it to be too sweet – so, depending on what else you’re putting on the ‘pizza’, you might want a more citrus-flavored profile.

    The aspic or fat would form the pizza sauce upon reheating, but – being mostly transparent – would not look all that appetizing. So let’s also add some food coloring into the mix. Which color would depend on the color of the underlying meat – white could go with almost anything, while brown would restrict you to red colorations, I think.

    Top the whole off with some leafy greens that have a fruity flavor, and maybe some shaved coconut or shaved roasted hazelnut to add some texture and a broader flavor, and it’s job done.

Some of those examples really have my mouth watering! But let’s move on.

Pizza Topping Substitutions

It’s also possible to reverse the process and, by doing so, to connect with existing game-world constructs. The one requirement for doing so successfully is the presence of some known and recognized indigenous life-form or plant.

You start with a known flavor profile, substitute in the core ingredient, and adjust the other flavors to match. This produces a familiar-tasting food item made with exotic ingredients that are indelibly tied to the location.
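Mechanically, this reversed process is a simple substitution over a recipe’s ‘slots’. Here’s a hypothetical Python sketch – the dict structure and helper function are my own framing, while the ingredient swaps are drawn from the Hawaiian-to-Tattooinian example that follows:

```python
def substitute(base_recipe, slot, local_ingredient, adjustments=None):
    """Reverse process: keep a familiar flavor profile, swap the core
    ingredient for a recognized indigenous one, then adjust the rest
    of the recipe to match."""
    recipe = dict(base_recipe)      # copy, so the Earth original survives
    recipe[slot] = local_ingredient
    recipe.update(adjustments or {})
    return recipe

# Hawaiian -> "Tattooinian", using the swaps from Example #8:
hawaiian = {"meat": "smoked ham",
            "fruit": "pineapple",
            "sauce": "tomato paste"}

tattooinian = substitute(
    hawaiian, "meat", "bantha steak",
    adjustments={"fruit": "cactus pulp",
                 "sauce": "drier paste, re-moistened with red wine"},
)
print(tattooinian)
```

The base recipe is left untouched, so the same Earth template can be re-skinned for as many locations as the campaign needs.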

To demonstrate this, I needed a setting that everyone would recognize, so I’ve chosen two pizzas that can represent Tattooine from Star Wars.

    Example #8: Hawaiian -> Tattooinian

    What flavor is Bantha Steak? They look mammalian, and most people would immediately think beef. But there’s an inference that they eat just about anything that’s edible (“bantha-fodder”), and that would be a survival trait in a desert environment like Tattooine. That in turn makes them more akin to goats or pigs.

    Goat is a stringy meat, with either a strong gamey flavor or hardly any flavor at all, depending on who you ask and possibly how it’s prepared. It tends to be very tough, and most cooking methods are aimed at softening it in some way. There’s not really very much that’s equivalent in common pizza toppings.

    Pork can also be quite tough as a meat, especially if overcooked. That’s why it remains a common subject for the jellied and confit cooking techniques. Pork is rarely the central ingredient in a pizza, but it can be present as an addition if finely diced.

    But bacon and ham are much softer and richer in flavor, and these are quite common, either as part of a medley in a meat-lover’s variety, or in the more controversial Ham and Pineapple. I don’t understand that controversy, I have to admit – I’m one of the percentage of people who quite like Ham & Pineapple Pizza – but for every lover of the flavors there’s someone who vehemently hates it, or so it sometimes seems, and this is doubly true in the USA (maybe Canada too, I’m not sure about that).

    From afar, one gets the impression that the objections are more ideological than anything else, people objecting to the presence of fruit on a savory dish. To anyone holding that opinion, there are two facts that need to be pointed out:

    • Tomato is a fruit, not a vegetable; and
    • Pineapple is often included in meatloaf recipes because it contains an enzyme that naturally tenderises meat and amplifies its flavor.

    A more refined version of this objection is that the sweetness and high moisture content of the fruit clash with the traditional, savory, and crispy nature of pizza. But that doesn’t entirely stand up to scrutiny, either – the tomato paste is just as capable of making pizza soggy. That’s why tomato-based sandwiches are buttered – to provide a fatty barrier between tomato and bread. And why don’t these purists complain about ham or bacon on pizza, when both of these can also be sweet ingredients? Sweetness is just one element of the dish, one note in a symphony of flavors – you can complain that it’s too dominant in a particular recipe, but not that it’s present, especially when the pizza sauce often contains added sugar.

    Critics, often dubbed “pizza purists,” argue that it violates Italian culinary traditions, with many objecting to the combination of hot cheese with a wet, acidic fruit. This traditionalist argument is undeniable, but my response to that would be – ‘so what’? Tradition is still served by all the traditional varieties of topping on offer, and no-one is forcing you to eat it. But I do wonder how many of these critics have actually tried it? How many are just food snobs? And of those who have tried it, how many had preconceived negative attitudes? My bet would be, the majority of them would fall into one of those two categories.

    So, sucks to be them – I’m going to use Ham And Pineapple Pizza as my base flavor profile for Tattooine Pizza – and for one of the reasons that the idealists and purists cite: the moisture content. Tattooine is an arid environment (for the most part – I’ll deal with that in my second example), and every source of moisture has to be exploited.

    We’ve never seen cacti on Tattooine, but they – and other succulents – would seem to be a natural fit, because they concentrate moisture in their pulpy flesh. So “Ham and Pineapple” naturally becomes “Bantha and Cactus” on Tattooine.

    But now we get to have a little fun with the idea. Cactus pulp (specifically from the prickly pear or “tuna” fruit) is often described as a cross between watermelon, kiwi, and bubblegum, making its flavor profile quite different from pineapple. While both are sweet and tropical, cactus pulp is less acidic, more floral, and generally less sharp than pineapple. For those who have never tried Kiwi Fruit, it’s strongly reminiscent of strawberry.

    Bubblegum, Watermelon, and Strawberry? At the very least, I think we’ll need something acidic to cut through the implied level of sweetness. But there are exceptions to this flavor profile – some types of barrel cactus fruit are, in fact, described as tasting remarkably like lemon or pineapple. That puts us back on more familiar ground, flavor-wise.

    So let’s imagine a variety of succulent that grows under the sand (which is why we don’t see it dotting the landscape), soaking up every skerrick of moisture its roots can find. Thick-skinned with air pockets to insulate the plant, its flesh has a slightly maple-like flavor with a mild citric punch moderated by the sweetness. When you bite through the pulp, it releases a burst of flavor and moisture.

    We need to then adjust the flavor of Bantha-meat to suit. That would mean making it a little less sweet and perhaps a little more smoky in flavor – smoked ham, not honey-glazed, in other words.

    That leaves only the sauce – let’s dry it out a little more and then re-moisten it with a bit of light red wine, just for the flavor and, in particular, the scent. This would add a slight earthiness that can be complemented with dried sliced mushrooms and maybe a little thinly-sliced and de-seeded red capsicum (called red peppers in the US).

    The result would be a pizza with the flavor complexity of a fine Italian dish, of mostly local ingredients (the flour might have to be imported), and unmistakably Tattooine on a plate. It even moisturizes the mouth to make it easier to swallow!

    Example #9: Pepperoni -> Spicy Swamp Rat

    Where is the Water on Tattooine? Virtually none of it appears to be on the surface. Luke’s aunt and uncle run a vapor-farm, extracting moisture from the air and presumably channeling it along underground pipes to farms growing more traditional produce.

    Certain arid regions experience seasonal influxes of moisture. For example, the Sonoran and Chihuahuan deserts, as well as parts of the Sahara and Arabian deserts, can experience high absolute humidity during summer monsoons, with moisture levels comparable to those in humid deciduous forest regions.

So it’s not going too far to suggest that some areas on Tattooine experience high humidity, and that’s where the bulk of the moisture is. Desert air is often very dry during the hot day, but relative humidity increases at night when temperatures drop, sometimes rising from roughly 15% to 25% or higher, making the night feel less dry even if the total moisture content remains the same.

    When discussing attacking the Death Star, Luke talks about bagging “Swamp Rats” from his speeder, saying they are about 2m in size. About the size of a cow in profile, in other words. Because of the subject of discussion at the time, everyone focuses on the size of the creatures, missing the key word: “Swamp”.

    That implies that there are some parts of the planet that have surface water, even if these bodies are too small to be seen when viewing the planet from space.

    If Swamp Rats are at all edible – something for which there is no evidence either way – their existence would not be ignored as a food source. In a desert, you eat everything you can get.

    What, then, might a Swamp Rat Pizza taste like? I’ve chosen Pepperoni Pizza as my base flavor profile in answer to that question. Pepperoni is a cured, fermented, and air-dried sausage typically made from a mixture of ground pork and beef, seasoned with paprika, garlic, and chili pepper to create a soft, smoky, and bright red meat.

    In my experience, there are two varieties of Pepperoni used as a Pizza Topping. There are the small sausage varieties, between 1/2 and 3/4 of an inch in diameter, and frequently sliced relatively thickly; these curve up at the edges when the pizza is cooked, but can easily become dry and tough if overcooked. And there is the much larger variety, 1 1/2 to 2 1/2 inches across, and sliced about half as thick – this dries out even more, but doesn’t become especially tough unless seriously overcooked because of the thinness of the slices.

    Given the size of the Swamp Rat, I would suggest the latter as the more equivalent. And let’s make it naturally peppery, so that it doesn’t need so much seasoning and preparation. The rest of the flavor – garlic and spices – would have to be supplied through other ingredients, though. Maybe some green peppers with a more paprika flavor, thinly sliced, and something garlicky added to the tomato paste.

    The meat in pepperoni is typically a mixture of ground pork and beef, balancing fatty and lean meat. So let’s say that Swamp Rats contain both – fatty meat around the stomach and lean meat in the more muscular parts such as breasts and limbs. This would make for an efficiency of preparation that would make Swamp Rat quite favorable as a food source, and the manner of preparation puts a degree of separation between the end product and the source – since “Swamp Rat” isn’t a particularly appetizing name. The product, as used in our imaginary pizza, would almost certainly be given a more appetizing moniker – “Swamperoni” maybe.

One thing more needs to be discussed before this pair of examples can be put to bed – and, because it applies to both, I’m separating it out from the two examples.

Tomatoes are one of the more moisture-dependent crops, ranking somewhere between wheat and rice. They need deep, consistent watering to support their high moisture content (94-95% water). If you can grow tomatoes outside of a controlled glass-house environment, you can grow wheat.

And there’s no sign anywhere of wheat farms or other such large-area crops on Tattooine. That has to raise doubts about the ability of Tattooine farmers to grow tomatoes, too, especially in quantities sufficient to make them a staple crop.

Green, leafy plants on Earth possess that color because they absorb more of the red and blue light and reflect the green. Clearly, one chemical reaction (photosynthesis) can only cover a limited part of the spectrum, and this is the part that is processed most efficiently. It can even be argued that if the plant did not do this, it might receive too much light energy, especially in the potentially harmful ultraviolet part of the spectrum. Applying that to Tattooine, where the sunlight is far more intense, we can conclude that even more of the light reaching the plants is reflected away, and that they would be comparatively white in color.

Views from space show the planet as a pale yellow color, often called “buff” when applied to manila folders, paper, and envelopes – a blend of yellow and white. While the sands of on-planet shots are generally similar, they are arguably darker yellow in tone (because they were filmed on location here on Earth, in Tunisia). To reconcile the two, we need a source of additional white – it doesn’t have to be much – so that logic holds together unexpectedly well.

We have moisture farming, extracting water from (relatively) humid air. We can also surmise that there is considerable underground water, but it’s deep and difficult to extract, and may be extremely salty. This suggests a moisture cycle in which the surface layers are relatively hot and dry, with that heat radiating down during the day to turn the water into water vapor at a comparatively deep level; this vapor then aerates the soil, keeping it loosely bonded except where igneous rocks have formed through volcanic action in the past. So the hot, moist air rises and forms a layer some distance above ground level, only to descend in the relatively cool night, and occasionally there would be rain and thunderstorms. Moisture farms are cooling towers that extract the water from the humidity and pipe it to where it can be used.

What’s needed is to introduce some sort of tomato-equivalent that could thrive in such an environment. I don’t want to make these another succulent, because I’ve already used that plant variety, and there are no visible fruiting bushes under mass cultivation – whatever these plants are, they need to be hidden from sight.

Tomatoes are nutrient-dense berries from the nightshade family. Despite being fruit, they are typically treated as vegetables in cooking.

The best answer I can come up with is tall underground plants that project roots down through the soft, arid soil to the water table, that are extremely salt-tolerant, and that send shoots upwards toward the light. The upper surface of much of Tatooine is sandy, and sand is actually translucent, so these plants can flower and fruit below the surface, protected from the harshness of the desert environment, and assisted by the relative looseness of the sandy ‘soil’. In this way, they form natural equivalents of the moisture farms in areas where the surface is especially sandy, and not as hard-packed as the family moisture farm from which Luke derives. Human colonists would have discovered this plant and sought to domesticate and crop it by supplementing the natural moisture it draws from the environment; this would enable the fruit to grow larger and faster. I imagine that the lower down the fruit forms, the less mature the berries, and that as the plant itself grows, these are pushed higher and higher – so only the uppermost layers would be cropped, with next month’s crop, and the month after, and the month after that, all growing deeper in the ground – a continuous production cycle.

And that solves the Tomato problem.

Next, I should consider the Wheat problem. There are various tubers that have been mentioned in various sources as growing on the planet, and they would fit both the ‘grows underground’ and moderate-water-needs profile of a wheat equivalent. These could be pureed and dried into ‘pizza bases’.

Golden Rule: Consider The Environment

Those last two illustrate a golden rule that should always be adhered to – if your dish is to be commonplace, and not something exotic reserved for high-end restaurants and fancy / celebratory meals, the constituents all have to be locally sourced.

There are two ways for that to happen: transplants and local flora / fauna.

I would reserve the ‘transplant’ option for use only when inspiration fails to deliver a local solution. But either way, the local conditions have to be taken into account – it’s no good saying “plums have been transplanted from Earth” if the environment doesn’t support plums being grown.

The culinary history of Australia over the last century or so contains a myriad of lessons for the GM looking in from outside.

Initially, everything here was based on food products transplanted from Europe and a very English cuisine built upon it. Some crops did well, others struggled. The cuisine began to adapt to the resulting scarcity / commonality profile even while efforts were undertaken to overcome the challenges that resulted from the environment.

Post-WW2, immigration from elsewhere in Europe opened up, and German, French, Italian, and Greek cuisines all made their way into the everyday diets of the Australian people. Of these, selected dishes aside, French pastries and cakes and Italian dishes dominated. Germany gave us the Black Forest cake, and Greece gave us some salads; the latter cuisine still struggles to penetrate the market, becoming more successful with each attempt.

It wasn’t long before fusions and local variations started to appear; bridging the gap from traditional cuisines to what was already established. Of the two, ‘local variations’ were dominant.

That was the cuisine landscape when the doors were opened to Asian immigration. Indian and Chinese immigration dominated for a while, and of course, they brought their cuisines with them. Indian curries – or rather, a softer, gentler variation – had a piecemeal integration that grew over time. Chinese dishes, and in particular those of the southwestern province of Sichuan, were an immediate hit – but the local influence permeated the cuisine almost immediately. The rapidity of the integration and adaptation was remarkable – in two or three years, Chinese restaurants were ubiquitous.

Over the next twenty years, other minority cuisines made their way here, and either adapted to the Australian palate, adjusted the Australian palate to make room for one or two dishes, or failed to make an impact. Combinations of the first two outcomes are common, though a trend toward ‘authenticity’ emerged in the late 20th and early 21st centuries. That seems to have died down lately, and fusion cuisines seem to be on the rise again.

Of these minority cuisines, one of the earliest and most successful has been Middle Eastern food, especially Lebanese and (to a lesser extent) Turkish.

Within easy walking distance of my residence there is a Himalayan Curry House and a Lebanese Pizza specialty store. There’s a restaurant that fuses Japanese and Australian food (especially in the area of drinks) – with a lot of Italian-style coffee varieties on the side [the coffee culture is incredibly strong in Australia and unbelievably fussy in its standards].

I’ve offered this brief summation of the evolution of the current culinary landscape for two reasons: first, it demonstrates the impact of the Golden Rule, and second, it provides some context for the perspective taken in the rest of the article – don’t look for ‘cultural purity’; you won’t find it here. Australian cuisine is a melting pot, unafraid to adopt a more local variation of a base recipe that a purist might abhor, but that Aussies culturally embrace.

Would someone without that environment come up with the simple process described in this article? Maybe – but I think the conceptual hurdles would have been higher for them, no disrespect intended. The purity of their basic cuisine imposes an additional barrier to overcome.

Broader Application

To close out this article, let’s broaden its conceptual boundaries beyond Pizza to some other dishes that are extremely common here (always draw on the cuisine that you know as a foundation – or better yet, the cuisine that your audience knows).

The technique is basically the same – add or insert an adjective that doesn’t commonly go there, and then make sense of it. I have three examples to offer.

When it comes to specific recipes, though, you have two paths to follow: you can either create something with the same flavor profile as the ‘inspiration dish’, recasting it into a new form, or you can have a dish with a similar structure and texture but a different flavor.

There is also the secondary question of texture – it can either (a) be the same as the inspiration dish, (b) be the same as the target dish, (c) be something in between the two, or (d) be something completely different.

The process of applying the principles demonstrated in earlier examples to a broader array of dishes is to transform the dish one way or another, then incorporate a twist – and always to use local ingredients, adding the dish to the world-building already in place.
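For GMs who like to work from random prompts, the ‘unusual adjective’ step can even be automated. Here’s a minimal Python sketch of that idea – every word list is a hypothetical placeholder, meant to be replaced with terms from your own setting, and the ‘transformation paths’ echo the flavor / form / texture options discussed above:

```python
import random

# Hypothetical word lists -- replace with terms from your own setting.
ADJECTIVES = ["Zesty", "Phantom", "Honeycomb", "Glacial", "Smoldering", "Hollow"]
DISHES = ["Pizza", "Irish Stew", "Layer Cake", "Lemon Chicken", "Dumplings"]
TWISTS = [
    "same flavor profile, new form",
    "same form and texture, new flavor profile",
    "texture somewhere between inspiration and target",
]

def strange_dish(rng=random):
    """Pair an uncommon adjective with a familiar dish, plus a
    transformation path for 'making sense of it' with local ingredients."""
    return (f"{rng.choice(ADJECTIVES)} {rng.choice(DISHES)} "
            f"({rng.choice(TWISTS)})")

if __name__ == "__main__":
    # Three prompts to riff on during prep.
    for _ in range(3):
        print(strange_dish())
```

The generator only hands you the prompt, of course – the creative work of justifying the combination with local flora, fauna, and history is still yours.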

    Example #10: Irish Stew

    I have to admit that this dish brought me to the limits of my imagination. I had a hard time thinking of a different flavor profile that would fit the textural form and cooking method, and an equally hard time thinking of a different form into which this flavor profile could be recast. It took a solid ten or fifteen minutes before something that would work came to me.

    Irish Stew to Sushi Roll, keeping the same flavor profile as the original, which means that it has to be served hot, not cold.

    The primary ingredients of traditional Irish Stew are lamb or mutton, potatoes, onions, water or stock (lamb or beef), salt, and black pepper. Carrots, leeks, parsnips, thyme or bay leaves, and Guinness are sometimes added, with the first being the most frequent. But the most common ingredient in the Australian version not listed above is barley, and the next most common is celery.

    The broth tends to be robust, thickened by using waxy potatoes. Its dominant flavor is earthiness, with an underlying sweetness from the lamb and carrots and added umami from the onions.

    My experience with the dish suggests that the size of the meat chunks is critical – too large, and they can be uncooked in the center, and not let enough of the meat flavor permeate the rest of the dish; too small, and the meat can become dry and bland, giving up too much of its flavor to the broth.

    So, let’s break this down into components for our ‘hot sushi’:

    Cooked Barley in place of rice. Chunks of meat alternating with potato in the middle, cooked on a skewer. A sauce containing onion, carrot, celery, stock, and potato wax from the potato chunks mentioned previously, held together with a gelatinous compound. Maybe a little mint and coriander as well. Barley to be cooked in the sauce with the vegetables – which implies that the chunks are big enough to be easily removed from spoonfuls of the barley-and-broth before the gelatinous agent has time to set. And something very liquid-resistant to hold the whole thing together, a wrapping of some kind – possibly banana leaf.

    Each of these is a candidate for replacement with an other-world equivalent. But first, let’s throw in our wild card, the adjective: let’s add ginger and a little garlic to the sauce and call it “Zesty Irish Stew” – with “Zesty” being our unusual adjective.

    In sequence of priority, in terms of making the dish unique to this setting and place, the ingredients would be:

    1. The Meat – a rich, dark meat that is not overwhelming in flavor. Let’s call it Londassian Winterdeer.
    2. The Barley – a cropped staple that binds together when cooked, possibly something more akin to rice in appearance but not flavor, maybe with a very slight walnut flavor – Londassian Nutrice.
    3. The Ginger & Garlic – one tuber to do both jobs, Londassian Yam.
    4. The Potatoes – Let’s let Londassian Yam take the place of the potatoes as well.
    5. The Stock – I’d like this to be slightly different and earthier than the meat, so Londassian Pan Mushrooms.
    6. The Banana Leaf (because it’s visibly obvious) – Londassian Berry-Leaf, let’s say, and make it a sweeter component, eliminating the carrots.
    7. Everything else – let’s call these transplanted varieties of these vegetables.


    Everything gets cooked and then strained through a colander with pores large enough to admit the Londassian Nutrice and the gelatinous liquid broth. These are then cooked a little longer to concentrate the broth into a sauce.

    Lay the Berry-Leaf flat on a marble surface, spread the Nutrice over the top, alternate Londassian Yam and Winterdeer chunks, dress with the other vegetables extracted from the broth, roll and leave to set, held together by string. Remove the string and steam just prior to serving.

    This is a hot dish, a sushi, and a variation on Irish Stew with hints of an Asian flavor infused into it, all at the same time. Slice and eat with chopsticks, or let the outside cool slightly and eat as a bar or roll.

    Unmistakably Londassian (wherever that is).

    Example #11: Layer Cake

    In the previous example, we kept the flavor profile but not the form, so this time let’s switch up the flavor profile but keep the idea of a cake with layers.

    Key ingredients are the cake, the separating layers, the icing, and maybe a topping of some sort.

    What’s our twist adjective? I thought initially of a lot of terms appropriate to a cake, like ‘fruity’, or to a layered construction, like ‘stacked’, but they don’t pose enough of a twist. I’ll use both ideas in the final construction, though. I toyed with ‘icy’ and ‘tropical’ for a while, but they aren’t unusual enough, either. My thoughts were tending toward a savory dish that looked like, and was constructed of, ingredients that would normally be assumed to be sweet. And that concept led me to the word ‘Phantom’.

    So we have ‘Phantom Layer Cake’ – something savory made from ingredients normally considered sweet, and consumed as a main course, cut into wedges the same way as one would a cake.

    So, for the cake element: A melon which, when baked, shrinks into a denser layer more akin to a natural pancake, with a strong mango-and-lime flavor, which gives a sweetness to the dish. Let’s use a literal name for this – Foaming Wintermelon.

    The separating layers: A berry which becomes a dark red compote with a flavor strong in meaty umami, blended with a stiffened cream and something to further reinforce it structurally (but not gelatin). Let’s name them after their discoverer – Forsterberries. The stiffening agent should be cornflour or something equivalent. I looked up what ‘corn’ is called in non-English languages and couldn’t go past the Chinese name, Yumi, because it looks so similar to the English “Yummy”. So “Yumiflour”.

    The coating/icing: We’ve got plenty of sweetness, and a meaty flavor. We need a little spice-and-herb action, and this is the place to put it. So a jellied stock with ground spices and herbs. But a ‘stock’ doesn’t sound especially like a component for a sweet dish, so let’s call it a glace, derive it from a citrus – the “Pepperlime” – and add some mint, lavender, basil, rosemary, sage and thyme (transplanted) to it.

    Topping: I think we need to make it obvious that this isn’t a sweet dish, so caramelized Onion (transplanted) and chunks of roast chicken (Konglish Birdmeat), and a cherry in the middle that was roasted in a small quantity of mushroom sauce with red-berries (generic term) that then get spooned onto the top here and there. Top that with a sprinkling of salt, and the “Phantom Layer Cake” is done. To further connect it to its place of origin, let’s amend the name to “Konglish Phantom Layer Cake”.

    Example #12: Lemon Chicken

    For this last example, let’s play it (relatively) straight – same basic flavor profile (citrus and white meat from a bird), same basic ingredients (local equivalents), but a single twist ingredient: “Honeycomb [Citrus] Chicken”, giving the combination a slightly more sweet-and-sour taste.

    Inspired by that, let’s add some thinly-sliced red, yellow, and green peppers to the sauce.

    So, we have:

    • Meat – a local variety of game bird, equivalent to a chicken;
    • Citrus – a local fruit of the citrus type, turned into a sauce;
    • A herb-flavored pale honeycomb, probably ground or powdered; with more of the citrus;
    • Peppers – candied in the honeycomb mixture and then added to the sauce to unify the dish.

    Honeycomb is made by heating sugar, honey (or golden syrup or corn syrup), and water to a high temperature and then adding bicarbonate of soda to make it foam.

    Since we aren’t changing the flavor profile much – just adding some sweetness and herbs to the standard “Lemon Chicken” – the ingredients are going to be similar. Let’s make it an alien meat from the Dominclinan Cassowary, a Dominclinan Lemon-Lime that tastes like a slightly more sour blend of the two, the honeycomb with transplanted herbs, and Dominclinan Peppers (which are really transplanted peppers but grown in an alien soil, which greatly impacts the flavors).


Creating bespoke recipes that recall familiar foods using local ingredients is just one way of creating a broader, more robust culture with which to populate a planet or part of an Empire. Or a fantasy world, or even a different Plane Of Existence. In the next article in this series, I’ll look at ‘Strange’ Technology as a unifying force gluing disparate parts of a culture together.

PS

A couple of last-minute thoughts to add.

First, I believe that between 1/3 and 1/2 of your worldbuilding should be aimed directly at the PCs that you expect, even if you have nothing more than a race and a class to go on. Being able to say to a player, “this is a popular dish where you come from, it’s up to you whether your character loves it or hates it,” and then building on the food relationship and the chosen reaction through minor incidents in the first session or two will help establish that the characters are ‘real’ and inhabiting a ‘real’ landscape, with some sort of culture in back of them.

Second, one of the first interactions with a culture that a character will have upon entering it will be the food, and especially the differences between what’s offered them and what they’re used to. Don’t waste that opportunity – if you’re creating a trio of local dishes, make them reasonably consistent in generalized cuisine style, draw inspiration from that in building out the rest of the culture, and then use them to sneak information about that culture – especially anything not obvious just from looking around you – into the adventure. World-building may happen on the page, but it isn’t real until it’s put in front of a player, one way or another.

So have fun with this, and squeeze as much benefit out of it as you can.

If you enjoyed this article, you may also be interested in my previous posts about cuisine in RPGs.


Tales Of Youth


In this post, a tool for integrating a character’s evolving personality into their past and backstory. Works in all genres.

Wow, but this post is late. The main reason for that is the Worksheets that I’ve linked to later in the article – the article itself was 90% complete last Sunday night. The worksheets took four days to design and create, and a day to assemble – only for me to discover, late Thursday, a major error in construction, which meant that part of the design work and all of the assembly had to be redone. And only then did another two flaws get noticed – but since these were merely cosmetic, I haven’t bothered fixing them.

There’s an advertisement that I’ve seen repeatedly on Australian TV recently. The premise is that if you buy the advertised financial service, you will have more time to worry about various other things in life, from the positive to the profoundly negative, all listed in a humorous manner. It’s the last item listed that I find profoundly offensive – time to “worry about becoming your mother.”

Clearly, this product – usually bought only by those of middle-age or older – is being pitched at the youth market, and relying on some sort of generational divide or gap. If no such divide or gap exists, then the whole message does worse than fall flat, it becomes profoundly offensive and likely to dissuade from the purchase of the financial product in question, not encourage it.

I can’t speak for anyone else, but I am exceptionally proud of my mother and her achievements in life despite difficulty and hardships and the loss of opportunities that accompanied them.

On one occasion, this advert was aired in close conjunction with another for a TV drama, and the two connected in my mind to highlight a difference between perceptions of family in adult-oriented TV drama and in real life – which was followed by the realization that TTRPGs tend to follow the Television Drama model, even when they shouldn’t.

The Television Model Is Necessarily Broken

On TV, family is something no character seems to think all that deeply about unless one or more family members is directly involved in the episode of the week. The rest of the time, the characters stand alone in their adult iteration, their quirks, interests, relationships, personalities, and education all existing in magnificent isolation.

And when family do appear, at best they tell (not show) the backstory to those adult personality traits, if there is one, or simply acknowledge that the focal character was ‘always like that’.

There’s good reason for this – actors are expensive. Stars top the bill, followed by lesser stars, special guests, child actors, recurring adult supporting cast, one-line supporting actors, and extras. Child actors generally don’t have a lot of experience, and so earn less than the next tier up until they become global superstars and renegotiate their contracts. But they come with all sorts of extra expenses – they are more fragile and less likely to behave in a mature fashion, so all your insurance costs go up; they have to be educated at the same time as acting; they may need extra psychological support; they are more likely to disrupt filming; and they can only work limited hours, and the best way around that is to hire twins. But having two child actors can simply double or triple some of the costs mentioned earlier.

Children only appear on-screen when they absolutely have to be there. It’s cheaper to do something audio-only as a flashback – and cheaper yet to get one of your existing adult cast to provide a child-like voice for a little extra cash on the side.

On top of all that, child actors are rarely up to an adult standard, so getting good ones is likely to take three times as long in casting.

And on top of all that, you will often need several of them, depending on the scene. You can’t really do a school scene with just two kids; siblings alone can demand several.

All of that adds up, and it explains why the contracts that child stars signed back in the 60s, 70s, and 80s were far more predatory. Those contracts made it more practical to do the occasional scene with multiple child actors, but the screen actors’ guilds have grown more teeth since, and these days, there are fewer of those shortcuts available – so studios have to absorb more of the additional costs. Which only makes justifying a scene with multiple child actors harder, and such scenes, less frequent.

TTRPGs suffer none of these problems – so why are such formative experiences left out?

The Real Life Experience (for most of us)

Contrast that with the real-world experience that most people have. As a child, even if they aren’t passions of yours, you get drawn into the things that your parents do professionally, and into their hobbies and pastimes. You learn things from these people in the process, things that shape your own personality in the years to come, often without ever recognizing the source.

And it’s often not just your immediate family, but the extended family around them. And it’s your siblings, and their interests and hobbies, too. Some of it rubs off, leading to bonding experiences.

It’s fairly common to consider the impact of your parents’ occupations, but not common enough to consider how the rises and falls of the family’s fortunes impacted the rest of the family. As children, we’re generally not perceptive enough to connect cause and effect, but those effects impact us nevertheless, often as bolts from the blue.

It’s a similar story – with complications – when it comes to your parents’ family friends, your parents’ bosses and employers, etc. You start with the relationship between the individual and the family member, and fold in how they relate to children in general, to arrive at how they interact with a specific child (you) – and only then can you get into the impact of those interactions, as it is experienced by the child in question.

How about the relationships with the children of those bosses and family friends? There’s pressure to get on from the relationship between the adults, but that’s only the beginning of the story of the relationships involved.

All of these are formative experiences that contribute to the evolution of the individual. I had few friends as a child, and that led me to value friendships more highly as an adult, while many of those around me who were more socially-connected within my peer group placed less value on those friendships, considering them more expendable at need – they were somewhat more ‘fair weather’ friends to each other, in general. Most of them had one or two relationships that were more strongly felt on at least one side or the other, and in some cases this was reciprocated.

And almost all of it gets overlooked or ignored completely when thinking about characterization in both TV shows and RPG characters, especially PCs.

The TTRPG Model Is Necessarily Broken

If the television restriction is primarily down to expense, then it should not be surprising that the RPG version of the story revolves around a different kind of expense that has exactly the same consequence – time.

It takes quite a bit of time to create a character, and this is true of NPCs as much as it is PCs. Yes, there are shortcuts, such as Partial NPCs, but balancing that is the need for these characters to fulfill a specific narrative need to which they must be custom-fitted. That often leaves less scope for making them interesting, compounding the ‘price’ to be paid.

What’s the consequence? Why does this matter? There are three reasons that matter: Character Distinctiveness, Internal Consistency, and Game Leverage.

Character Distinctiveness

No character trait exists in isolation – it came from somewhere, it has shaped the character’s past somehow, the character has an emotional response to that – usually a defensive, self-justifying one – and it will shape the character’s response to future events and experiences.

There can be points of commonality with other characters, but the overlap is temporary, because each trait is also influenced and shaped in its manner of expression within the character’s life by the other traits that combine with it.

It’s as though the personality were a complex vector sum, with influences of different strengths pulling it in all sorts of different directions. If you plot those directions as vectors starting from a defined point on an alignment table, you get a simple sum showing how the overall personality manifests from these formative elements.

But you get an even more informative tool if each influence also pulls the character around the wheel of alignments, defining a personal character arc which tends to push characters into becoming the complete opposite of where they started, and may even lead them full-circle.

At least, that’s what would happen if they weren’t subjected to high-stakes adventures and interactions with others, each of which adds a vector of its own. And interactions with the world around them, some of which lead the character to adopt more extreme positions (outward) or positions of compromise and a refusal to hold extreme opinions (inward, toward neutrality).

Such a diagram can be a powerful tool for analyzing personalities, but it subtracts significantly from player agency, replacing character choice with a mechanical process.

You can simplify traits to expressions of the ‘pull’ toward a particular alignment, then automate and interpret these through die rolls to create an NPC generator, if you want – just roll a die for the relative strength of pull toward each of the alignments, do the resulting vector sums, and see where you end up – then assign traits accordingly.

But that’s not the focus of this article, so I’ll say no more about it.

Instead, suffice it to say that each character is on a journey through life, and these formative influences shape the course of that journey by pushing or pulling the character’s reactions to experiences in different directions. This differentiates an individual from the generalization of their alignment and from the prototypical generic member of every generalized group to which they belong – be that alignment, or character class, or professional association, or racial profile. That alone can make them worth having in the back of your pocket.

Internal Consistency

If you don’t have notes made, or those notes are not structured in a way that makes the factoid you want accessible very quickly, you often make things up on the spot, and it can be very easy for contradictions to creep in.

I remember one player who insisted that his character’s life had been utterly miserable, full of heartbreak and poverty, using this to justify some of his character traits – only to claim, a few months later, that it had been the happiest time of his life. It took a little while for the discrepancy to come to light, which occurred when a childhood friend reappeared in an adventure, one generated in the ‘heartbreak and misery’ phase, with some notes about how the pair used to steal pies off windowsills to hold off starvation. Very Oliver Twist. But this happened while the ‘happiest time’ statements were still fresh in memory. For the next six months, the player tried to reconcile these contradictions in various ways, psychologically twisting the character into a shape that was wildly different to its original personality – until he declared that the character had lost all sense of being fun to play and abandoned it in favor of something new.

That’s a fairly extreme example of what can happen. A less extreme example was the character who had three different ‘best childhood friends’ in succession. The player solved that by writing moves to other towns into the character’s background, but he also confessed the truth – that he had simply forgotten the prior owners of that label and relationship each time he reused it for a bit of characterization.

Both these cases could have been avoided with just a little more effort before their past became relevant to the respective adventures in which they took place.

Game Leverage

Having a bespoke supporting cast to draw on is gold for the GM, providing a mechanism to engage the character in an adventure purely through who they are and how they got to be that way. Such characters shouldn’t be invoked in every adventure, and certainly shouldn’t be featured in every adventure, particularly if you have multiple PCs within the campaign – but having some way to involve a character who would otherwise not be engaged beyond companionship can be vital.

It becomes even more important if each PC has his own character arc, upon which the player and GM have collaborated; they provide starting points and reference markers by which these transformative subplots can be measured, showing the PC how far he has come.

I like to think of them as a scaffolding for the PC to stand on while working on the adventure.

Relationship Bundles

For any systematic approach to solving the problem that has been revealed, we first need a systemic view of the relationships, a way of naturally grouping them.

I have such a structure.

This contains six primary Nodes, each with an associated Sub-node, and it’s usually the sub-nodes that are most important, because these are relationships with peers. Nodes are clusters of relationships that have something in common, that something being the node’s definition.

These nodes and sub-nodes are:

  • 1. Immediate Adult Family
    • 1a. Siblings
  • 2. Family Bosses, Employees
    • 2a. Children of… (peers)
  • 3. [Adult] Extended Family (Grandparents, Aunts, Uncles, Cousins etc)
    • 3a. Children of… (peers)
  • 4. Family Friends
    • 4a. Children of… (peers)
  • 5. Teacher(s) / Master
    • 5a. Fellow Students and Children of… (peers)
  • 6. Prominent Locals
    • 6a. Children of… (peers)

The sequence might seem capricious, but it is actually the product of two factors: Potential Influence and Frequency Of Proximity. So Family Bosses (and their children) are outranked by Immediate Family (including Siblings) but are potentially more prominent than Extended Family. The ranking sequence isn’t perfect, and may be wildly incorrect in individual cases.
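To make the structure immediately usable at the table, it can also be expressed as a weighted random picker. This is a hypothetical Python sketch: the weights are my own rough encoding of the Potential Influence × Frequency Of Proximity ranking above, and the two-to-one bias toward peer sub-nodes reflects the earlier note that sub-nodes are usually the most important.

```python
import random

# The six nodes / sub-nodes, with weights descending in rank order.
# (adult node, peer sub-node, weight) -- the weights are guesses.
NODES = [
    ("Immediate Adult Family",  "Siblings",                      6),
    ("Family Bosses, Employees", "Children of bosses/employees", 5),
    ("[Adult] Extended Family", "Children of extended family",   4),
    ("Family Friends",          "Children of family friends",    3),
    ("Teacher(s) / Master",     "Fellow students",               2),
    ("Prominent Locals",        "Children of prominent locals",  1),
]

def relationship_bundle(count=3, rng=random):
    """Pick `count` formative relationships, biased toward higher-ranked
    nodes, noting whether the adult node or the peer sub-node applies."""
    adults, peers, weights = zip(*NODES)
    picks = rng.choices(range(len(NODES)), weights=weights, k=count)
    bundle = []
    for i in picks:
        # Peers are chosen twice as often as adults, per the text.
        use_peer = rng.random() < 2 / 3
        bundle.append(peers[i] if use_peer else adults[i])
    return bundle
```

Run it once per new PC or significant NPC to decide which three-or-so relationships are worth detailing, then flesh each one out by hand.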

Let’s talk about each of these, briefly:

    Immediate Adult Family

    This also includes any parental substitutes. To qualify, they have to live in the same household during the youth of the character. We all have complex relationships with our parents, brought on by teen-aged rebellion and resentment for every time a parent does something “for your own good”. There are many formative experiences that result. In some respects, each generation becomes the exact opposite of their parents as a result of this interaction; even before it was given a name, the “generation gap” was very real. And each sibling has a different mode of rebellion. The pathway to mutual respect can sometimes be a difficult and slow one.

    The contributions to the personality of the individual can be significant or subtle, positive or negative – but there isn’t scope for a deep dive into these complex relationships. Instead, you have to focus on a few chosen elements that are fundamental to describing the character as they are today, showing the origin of those traits.

    I’ll end this with a couple of examples. I came home from school one day to find that my father had burned a randomly-chosen half of the comics collection that I had painstakingly gathered over a period of 2-3 years. Almost all my pocket money had gone into that collection, and I felt that it was mine. If I had been told I had to get rid of half of them and given a little while to choose, that would have been different; I wasn’t. In adult life, that turned me into a collector who has to be forced to give up anything that is even slightly valued. That’s why I have eighteen bookshelves full of books, RPG supplements, CDs, and DVDs, and it’s one of the reasons I prize physical media over online sources – I don’t trust that the online resources will always be there when I want them. I’m not (quite) a hoarder, but I flirt with that trait regularly, and make no apologies for it.

    A second anecdote from many years later – my father had a great deal of trouble with alcohol, and it brought out the worst in him in a self-destructive cycle of self-abuse (he broke that cycle when he was learning to be a pilot – good for him!). So I was passing through the town where he was living at one point and we went down to the pub for a drink, as adult males did in Australia at the time. And, after a couple of beers, he asked if I was ready for another, and I responded, “No, I think I’ve had enough for now.” That was the moment that he really started to see me as an adult; he commented that it took him many years to learn what I seemed to already know, and that it had been one of the hardest life-lessons he had learned over the years – to know and respect your limits. He was visibly proud of my ‘achievement’. I didn’t want to spoil the moment, so I didn’t mention that I had started learning that lesson in childhood by seeing what excess was doing to him and his life. But it’s a lesson that has helped me immeasurably, both directly and indirectly, through the years; these days, it principally manifests in knowing how hard I can push my body through the limitations imposed by my various medical conditions. But the core lesson remains the same.

    Siblings

    I’ve already discussed this at some length in this manuscript. With (almost) every sibling there is a rivalry over something; there is often a shared interest outside of that rivalry, and where there isn’t, there will be a subject about which the character doesn’t personally care, but which they have learned about because it’s an interest of a sibling. There will usually be one other element to the relationship, too: it could be something about the sibling that the character envies, something they resent, or an achievement of the sibling of which the character is proud, or vice-versa.

    Family Bosses, Employees

    The relationships in Node 2 tend to be more distant and of lesser significance in most cases, but when this is not the case, they tend to be pivotal. The employer may even act as a surrogate parent (note that co-workers fall under the heading of Family Friends).

    But what if the character’s parent(s) had no boss because they owned their own business (as much as anyone does in their specific culture, at least)? Then it’s their leading employees who come under the spotlight. These are frequently (but not always) going to be closer in age to the character at the time, making it easier to forge a connection. But as a general rule, these relationships are less important than those with the immediate family.

    The common element is that there is a relationship between the character’s parents and these individuals that stems from their respective occupations, and this relationship brings these guests into contact with the young character.

    Oftentimes, the ‘guest’ will present to the parents a personality that isn’t really theirs, while letting their true self shine through to the children (i.e. the character); sometimes it goes the other way around. Other situations can yield quite different relationships.

    Sometimes, there are no bosses or employees of special significance – but there’s usually someone. Think back to your own childhood; most of us will find someone fitting that description in there somewhere – I can think of a couple right off the top of my head. Use them as a starting point, replace the relationship between the adults with a cultural analogue, and use the relationship you experienced as inspiration for something more appropriate to who the character is supposed to have become in the here-and-now of the campaign.

    And don’t be too literal – one of the examples that came to mind when writing the preceding paragraph was the flatmate of the man who would become my stepfather. But they were flatmates because of my stepfather’s employment, which meant that he needed someplace in that community to live, so he qualifies.

    Children of Family Bosses, Employees (peers)

    If those individuals have children, it’s inevitable that you will come into contact with those children and form relationships with them. Sometimes these will be close, despite age differences, sometimes they will be distant despite them being peers. Regular contact between the adults makes these relationships inevitable.

    [Adult] Extended Family (Grandparents, Aunts, Uncles, Cousins etc)

    These relationships are often stronger and more influential than some of those listed before them, but there is a factor of distance involved that makes them less immediate than those intervening.

    Our adult extended family tend to touch our lives in ways only really appreciated in hindsight. My maternal grandfather lost his vision in a workplace accident, developed cancer, and stubbornly refused to give in to it; at one point, we were told that 98% of his body was cancerous. He actually died and was revived four times, refusing to pass until his wife did.

    That same maternal grandmother discovered British sci-fi comics like 2000 AD after I bought one. At first, she bought them so that I could read them, and read them herself to be able to relate to me better – but she discovered a love of them in her own right, and that reinforced my own love of the broader sci-fi genre.

    I had an Uncle who had stayed with my paternal great-grandmother in Sydney before deploying to Vietnam in the late 60s. He had bought a number of Marvel comics before departure to fill in the time, and left them there when he shipped out. I caught my love of cities from her, and taught myself to read at a teen+ level at the age of 3 or 4 using the Iron Man, Spider-Man, Avengers, and Fantastic Four comics that he left behind (I wanted to read them, but no-one was available – so I tried to puzzle them out on my own. And succeeded, one syllable at a time; by the end of the second one, I was reading them like I had been doing it for years).

    Children of [Adult] Extended Family (peers)

    The only difference between the population of this group and the one above is age – these are close enough in age to the character to be his or her peers in the time of his or her youth. I had a very large extended family, with about 12 relatives who ranged from about 10 years older than me to several years younger, whom I saw regularly, and a few more that were more distant. There were also some aunts who were younger siblings of my father, young enough that they counted – and later, their children. All of them contributed something to my makeup, some more than others.

    Family Friends

    I’ve talked about this group, too. But this is one of those relationships where the character’s experience is a complete byproduct of the adult interaction. These friends aren’t there to see you, they are there to see an adult in your household, and the general expectation of politeness is that you will say hello and goodbye and get out from underfoot for the bits in between. Any social chitchat with the child – the character – is secondary, and often nothing more than politeness.

    Nevertheless, there’s a formality to the structure of these interactions and that makes them both more understandable to the child’s mind, and more instructive in social interaction than the more intimate and casual interactions when the ‘guest’ isn’t around.

    Which is what makes these interactions fascinating to me when those formalities are not observed: Family Friends who insist on engaging with the children of the household when they don’t have to; Family Friends who are treated to the normal interaction modes of the family, as though they were also family members; the range of degrees of formality that are possible; which parts of etiquette can be discarded, and the impact that doing so has on the expressed relationship.

    My friend Stephen’s family always had a very casually fractious interaction with each other. It’s common and normal for that to be set aside when there’s a visitor in the house, a ‘polite fiction’; they wouldn’t have a bar of that with each other. It made the guest feel welcomed into the home in a strange way, but also slightly uncomfortable at the same time, wondering ‘how am I supposed to react to this?’ – my choice was always to politely ignore it, and to speak to all of them the way I would if this display was not taking place – and I’m quite convinced that this scored me brownie points with each of them individually. I saw others who were offended by it, or who gave like for like, and they never seemed to last as long as friends to the family members of this household.

    So there can be a lot of nuance in ‘social niceties,’ and that nuance comes out in this Node.

    Children of Family Friends (peers)

    Things can get even more interesting when you’re talking about the children of these family friends, because you are expected to get along with them and interact with them by virtue of the adult relationship involved regardless of actual feelings. This is great training for interactions when you’re working for a living and have to cope with co-workers.

    Some of them, like the children of employers, can possess an arrogance, a sense of entitlement that comes from the adult relationship. That forms the cornerstone of a lot of antagonistic interactions. But there can be times when it’s the adults who are beset by some kind of social friction, while close bonds form between the children, often leading them to roll their eyes at each other when the ‘friendly’ relationship becomes more heated – a shared ‘here we go again’.

    I think you learn as much about social behavior from navigating these awkward situations as you do interacting with your peers in the school-ground – the two balance because those peer interactions happen more frequently, and the capacity for influence is the product of intensity and frequency.

    Teacher(s) / Master

    Educational structures have changed a lot over the centuries, and a lot of fantasy gaming reflects this with an apprentice-and-master model rather than a common schoolyard. Anne McCaffrey’s “Pern” novels offer a variation on the ‘Lone Master’ approach, giving each major Craft or Service its own institute of higher education and a formalized four-step structure of rank. It can be difficult to adapt to D&D / Pathfinder – “You’re 18th level but still only a Journeyman?” – because level progression is independent of character ranking. This can be solved to some extent in 3.x using Prestige Classes in which a requirement is a level in the preceding Prestige Class. There’s a completely different structure in Raymond E. Feist’s “Magician”.

    I’ve tried to structure this entry to accommodate such complexities.

    Every student has one or more teachers with whom they have particularly strong relationships. In my case it was Science teacher Tom Sculley, Maths teacher Mr Jenkins (I know his first name but can’t bring it to mind at the moment), and Art teacher Art Dickinson. I always got on particularly well with Careers Advisor and English teacher Dick Rocheford, too, but never had him as an actual teacher. But in the educational lottery, my generation hit the jackpot: all my teachers were excellent, with a dedication that went well beyond the prescribed minimum.

    Fellow Students and Children of Teacher(s) / Master (peers)

    I’m not sure that I ever encountered one of the latter group in my personal experience, but it’s inevitable that some would. The students a year ahead of me had the Headmaster’s son amongst them, for example. But I had plenty of the former – some of those relationships pleasant and some not so, for many years. Fortunately, most hatchets were buried before scholastic education was complete. As in most schools, there were the Jocks and the Nerds, and I was definitely one of the latter.

    After the immediate family, this is the category most commonly serviced by existing backstory generators and models.

    Prominent Locals

    The final category is a bit of a catch-all for locals with whom you have relationships of particular note. In my case, these were mostly shopkeepers of one sort or another.

    Children of Prominent Locals (peers)

    And of course, there are the children of those prominent locals. Whereas in most of these categories, the cause of the relationship is the adults, it can especially be the case in this last group that the relationship derives from the peers, with the relationship with the parent being a secondary consequence.

    Since most of these will actually be covered under the ‘Fellow Students’ category, this ‘Children Of’ group generally includes ‘peers’ that are either younger or older than the direct peer group.

Okay, with those categories and classifications defined and explained, let’s move on to the solution I have devised – a ‘relationship stat block’. When I say that, it’s more of a worksheet, really – a place to document who contributed what to the character’s makeup, indexed by the age(s) at which it happened.

The relationship ‘stat block’

Obviously, there’s a lot to fit in, so there isn’t room for expansive backstories; this is more of an executive summary for use in crafting those backstories. The worksheet is divided up into the different nodes and subnodes, as numbered above.

In a lot of cases, there’s room only for a single sentence or a handful of keywords. And those are the more expansive entries; when it comes to peers, there’s only room for one-word answers, or maybe two short words. The rule of thumb in such cases is to generalize as much as necessary – if you want to include “poetry, musical appreciation, history”, there’s no way that all of that will fit in a ‘peer’ space; you would have to generalize it to ‘humanities’ or something similar.

This is a good thing, believe it or not – it means that when constructing a narrative form of the backstory, you have choices to make. Either the impact on the character is the broader picture painted by the more generic term, expanding the personality just a little bit more, or you have to explain why anything else covered by the generic term is NOT included. Either way, this adds to the personality of the character, of the individual with whom they have the relationship, or both. Where such a prompt is relevant, I recommend adding an asterisk to the end of the word as a reminder.

Below, I’ve discussed each of the entries that may be present, but not all of them will apply in all nodes.

    0. Character Identity

    There are four fields at the top to specifically identify the character. The first one is for the name. That’s followed by the Class field, used for character class and Race if either is relevant.

    After that is the campaign, which is either going to be a specific unique name or has to include the GM’s name. And, for version control, there’s space at the right for the date the worksheet was completed.

    0. Character Personality Concept

    This is the starting point, always. The Worksheet is a tool for mapping out the events and relationships that produced this personality, but before it can do that, you need to have some idea of what you’re aiming for.

    Note that this is probably not the personality in its final form – the results of the Worksheet will add nuance and secondary layers to this beginning.

    The most useful format is a general personality profile and then exceptions that apply in particular situations, a couple of things that the character likes, and a couple of things that he actively dislikes, and then finish up with an ambition or two. That’s a lot to fit into the space provided, so you will have to be sparse in your language and may even have to broaden your specifics to get everything to fit, as noted earlier.

    You do not have to fill out every panel, and you do not have to fill out every field within a panel. The goal is to capture the essentials and provide a spur of inspiration as to what those essentials might derive from.

    I’m going to put some final advice on the best use of the Worksheet at the end of the article.

    1. Node

    Look to the left of each main panel and you will find the Node number (and an identifying label), following the same sequence given earlier – so, ranked in sequence of potential influence on the character’s development, assuming a typical cultural setting.

    The more the character’s culture deviates from what we generally consider ‘normal’, the greater the likelihood of a node ascending a step or two, pushing those nodes it climbs over back down the order.

    That’s important because this worksheet is most effective when completed in sequence of potential impact.

    Each node consists of five panels – one for adults and four for peers, i.e. children of roughly the same age as the character at the time.

    That can also be a little trickier to assess when different Races with different lifespans are involved – make sure that you have any needed information from the GM.

    The one for adults contains space for multiple adults; the ones for peers are one-peer-to-a-space.

    While there’s a lot of overlap, there are also subtle differences in the fields. I’m going to look at each field separately below; these fields have been numbered to group alternatives together.

    2. Name

    Name appears as the first field in every panel, and every row of the adult panels. This isn’t the name of the focal character, it’s the name of the NPC that influenced their development in some way.

    3. Relationship

    Relationship always describes what the NPC is to the focal character, never the other way around. Keep this simple unless there is significance in being more precise – “Uncle” is usually good enough, “Maternal Uncle” immediately signifies that there’s something significant about the Maternal Family with respect to the character.

    But there are a few alternatives that might appear in place of the relationship field, so they have also been numbered 3.

    3. Parental Relationship

    In Node 2’s adult section this space appears for you to describe what the individual named on that line is, relative to one or more of your parents.

    3. Child Of

    In Node 2’s “Peers” panels, relationship to the focal character is pretty much a given, so this space is used instead to connect the adult listed in the Adult section to the Peer that is associated with them.

    The same thing happens in Nodes 3, 4, and 6, and this parameter is completely missing in Node 5.

    4. Age(s)

    This is the most complicated field because it is used for multiple purposes depending on the node and the panel within that node.

    Adults are deemed to have no age to a child – they are all just “old”. This field in the adult section is used to describe the age of the focal character during the period of their interaction with the adult. This may be a single digit or a range. If you think it relevant, you can precede this with a verbal suggestion of the age of the secondary character – “Young”, “Old”, and “Elderly” cover the gamut of possibilities under most circumstances.

    The real complications lie in the presence of this field in the ‘peers’ panels. You can write the ages of each member of the relationship during the period of interaction, separated by a comma, but then you have to indicate which age belongs to the focal character. A better choice is to write the relevant ages of the focal character as a number or a range, and then indicate the relative age of the peer – it might be -0.25 (indicating about 3 months younger), +1.5 (indicating a year-and-a-half older) or any other possibility meeting this profile.

    6. Influence

    Only Adults are deemed to be an influence. Peers are more about points of connection – I’ll deal with them later.

    The number of possibilities for describing what influence an adult had on a child are almost endless, but you (very deliberately) don’t have very much room, so you will almost certainly have to generalize here. There’s just about enough room to write “Love of” and a subject. If your handwriting is neat enough and small enough, you might be able to squeeze in an extra word like “encouraged” or something similar.

    It can be even harder to fit a negative influence into the space.

    I recommend standardizing two abbreviations: L/o (for “Love Of”) and D/o (for “Dislike Of”). You can also use D/i for “Disinterest In”. “Distrust of” is a problem because D/o is already allocated, but you can probably get away with “Dt/o”.

    And that’s as far as I recommend you go, or you will reach a point where you have to look up your abbreviation every time you deal with one.

    7. Personality (d6)

    Most of the time you are going to want to choose this, but sometimes you need a hint. That’s what the d6 refers to – roll it and consult the table below if you’re in need of inspiration.

      1. Similar personality, less extreme
      2. Different personality, less extreme
      3. Similar personality, similar intensity
      4. Different personality, similar intensity
      5. Similar personality, more extreme
      6. Different personality, more extreme

    I’ve allowed an extra line for the personality, but don’t waste it writing the interpretation of the roll – at most, write the number ‘rolled’ (feel free to choose if something seems to fit). But I probably wouldn’t even do that – I’d use every last millimeter of space describing the personality.

    Even so, space is limited, so you won’t be able to do a full profile. That’s fine – the worksheet is an intermediary tool, used to organize information.
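Incidentally, the d6 table above is easy to automate if you prefer digital tools at the table. Here’s a minimal Python sketch – the function name and the reading of entry 3 as ‘similar intensity’ (for symmetry with entry 4) are mine, not part of the worksheet:

```python
import random
from typing import Optional

# The d6 personality-hint table, as a simple lookup list.
# Entry 3 is read as "similar intensity" for symmetry with entry 4.
PERSONALITY_HINTS = [
    "Similar personality, less extreme",
    "Different personality, less extreme",
    "Similar personality, similar intensity",
    "Different personality, similar intensity",
    "Similar personality, more extreme",
    "Different personality, more extreme",
]

def personality_hint(roll: Optional[int] = None) -> str:
    """Return a personality hint, rolling a d6 if no result is supplied."""
    if roll is None:
        roll = random.randint(1, 6)  # roll the d6
    return PERSONALITY_HINTS[roll - 1]
```

As the article says, feel free to pass in a ‘roll’ of your own choosing if something seems to fit.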

    8. Location

    Where did these encounters occur, geographically?

    9. Commonalities

    Nodes 1 and 2 give you room for three things the focal character and the peer have in common. In Node 3, it’s two; in Node 4, it’s one; and Nodes 5 and 6 don’t have this field at all.

    10. Differences

    And it’s exactly the same story when it comes to points of difference between the focal character and the peer.

    11. Key Incident

    The reason for the reduction of Commonalities & Differences in Nodes 3-4 is that I needed the space for this field, in which you describe one specific incident involving both the focal character and the peer.

    12. Outcome

    ‘Key Incident’ is followed by another 2-line field to describe the consequences of the Key incident.

    13. Memorable Positive Experience

    In Nodes 5 and 6, Commonalities and Differences are replaced entirely with something a little more specific – A memorable positive experience the focal character and peer shared, and associated Outcome.

    14. Point of disaster

    …and a point of disaster where the association got you both into trouble of some sort, and an outcome.

    These incidents are often more impactful than a year or more of steady influence.

    One peer may have only a memorable positive experience, or a point of disaster, or may have both.

    15. Location Now (d12)

    Every peer panel has two lines dedicated to recording where this peer is now, or where they were before the focal character lost contact with them. A span in years since last contact should also be included in the latter case.

    As before, there’s a die roll to provide inspiration when it’s lacking. But this is a little trickier than a straight die roll.

      P = Rate the size of the community of residence of the focal character now

      S = Rate the size of the community of residence of both the focal character and the peer when they were associating.

      In the case of science fiction games, the ‘community’ might be an entire planetary population.

      Roll d12+P-S.

    •  <2 Same community that the two used to share
    •   3 A neighboring community to the one the two once shared
    •   4 The administrative community closest to the one once shared
    •   5 A much smaller community located some distance from the one once shared
    •   6 A community of similar size to the one once shared, some distance away
    •   7 A larger community some distance from the one once shared
    •   8 A much smaller community located a great distance from the one once shared
    •   9 A community of similar size to the one once shared, a great distance away
    •  10 A larger community a great distance away
    •  11 A much smaller community located reasonably close to focal character’s current location
    •  12 A similar-sized community located reasonably close to focal character’s current location
    • >12 The same community in which the focal character now resides – the exact result locates the peer within it:

    •  13. The far side of town relative to the focal character
    •  14. The middle of town in a district very different to that of the focal character
    • >14. Same part of town as the focal character

      Result² ÷ 4 = approx. % chance of bumping into the individual per week or month (GM’s discretion).

      Result³ ÷ 400 = approx. % chance of an encounter with someone who happens to know the non-focus individual per month or year (GM’s discretion).

    As usual, ignore the die roll and select a result if something seems especially appropriate.
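For the digitally-inclined, the whole Location Now procedure can be sketched in a few lines of Python. Everything here is illustrative: the article leaves the scale on which P and S are rated to the reader (any consistent integer scale will do), and this sketch folds all over-12 results into a single ‘same community’ outcome rather than the within-town breakdown:

```python
import random
from typing import Optional, Tuple

# Results 3-12 of the Location Now table; 2-or-less and over-12 are handled in code.
LOCATION_TABLE = {
    3: "A neighboring community to the one the two once shared",
    4: "The administrative community closest to the one once shared",
    5: "A much smaller community located some distance from the one once shared",
    6: "A community of similar size to the one once shared, some distance away",
    7: "A larger community some distance from the one once shared",
    8: "A much smaller community located a great distance from the one once shared",
    9: "A community of similar size to the one once shared, a great distance away",
    10: "A larger community a great distance away",
    11: "A much smaller community reasonably close to the focal character's current location",
    12: "A similar-sized community reasonably close to the focal character's current location",
}

def location_now(p: int, s: int, d12: Optional[int] = None) -> Tuple[int, str]:
    """Roll d12 + P - S and look up where the peer is now.

    p: size rating of the focal character's current community (scale assumed).
    s: size rating of the community the pair once shared (same scale).
    """
    if d12 is None:
        d12 = random.randint(1, 12)
    result = d12 + p - s
    if result <= 2:
        return result, "Same community that the two used to share"
    if result > 12:
        return result, "The same community in which the focal character now resides"
    return result, LOCATION_TABLE[result]

def bump_chance(result: int) -> float:
    """Result squared over 4: approx % chance of bumping into the peer per week/month."""
    return result ** 2 / 4

def secondhand_chance(result: int) -> float:
    """Result cubed over 400: approx % chance per month/year of meeting someone who knows them."""
    return result ** 3 / 400
```

Note how the P - S modifier works: a focal character who has moved to a bigger community than the one the pair once shared (P greater than S) skews the roll upward, making it likelier the peer has ended up in or near that larger community too.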

    16. Osmotic Knowledge

    Osmotic Knowledge is a subject on which the focal character gains knowledge simply by being around someone who is fascinated by the subject. It could excite passion for the subject in the focal character, or it could be things they’ve picked up about the subject while not caring about it in the slightest, or even while being bored to death by it. It’s a more generic version of the Commonality and Difference fields, both folded into the one category; it appears only in Node 1, in addition to those fields.

    17. Current Connection

    What is the current status of the relationship between the focal character and the peer? This field appears in every peer panel. I’ve allowed two lines, which should be enough for some nuance. If it’s been [x] years since the focal character has had contact with the individual, this is also the place to make that explicit.

The Worksheet PDFs

Click the icon to download the zip file, 1.60 MB.

Now that you have the anatomy of the Worksheets, you’re in a position to use them. I’ve provided them in 9 different combinations of size and format. Unless explicitly stated otherwise, a margin of 2 cm (about 8/10ths of an inch) is used on all four sides, and portrait format is employed. The format list is:

  • A4 pages, landscape orientation, 3-page
  • A4 pages, no margin, 2-page
  • A4 pages, no margin, 3-page
  • A4 pages, 2-page
  • A4 pages, 3-page*
  • Letter-size, landscape orientation, 3-page
  • Letter-size, no margin, 3-page
  • Letter-size, 2-page
  • Letter-size, 3-page**
  • *, ** = recommended formats

With so many fields to include, the text is tiny. I’ve chosen a font that should be able to withstand the distortions and remain legible, but YMMV – try them all until you find the format that best suits you and delete the rest.

Click the icon to download. 371 KB.

All of the formats started as PNGs that I compressed the heck out of in order to achieve full-page graphics with tiny file sizes. Because none of them is entirely free of the distortion needed to fit the available dimensions, I’ve also included the pure graphics in a second zip file.

1, 2, and 3 are pages 1, 2, and 3 of the 3-page versions; 1a and 2a are pages 1 and 2 of the two-page versions.

Final Advice

Take a good look at the diagram below. Figure 1 shows the simplest possible personality effect: a cause creates the effect, i.e. the contemporary personality trait. The result isn’t just a cardboard cut-out; it’s positively wooden. Better no backstory and a mystery than one this simplistic. Yet, without the benefit of the Relationship Worksheet, I see this sort of thing all too often.

Figure 2 shows a slight improvement: the cause has led to a reaction to an influence, and that reaction produces the contemporary personality trait. The backstory is richer and more complex. This is generally the best standard to aspire to if simply writing the backstory as prose, because it’s very hard to map ‘straight lines’ of formative influences and events without something like the Worksheet.

Figure 3 is the minimum standard that the Worksheet makes possible, with less effort than would be required to achieve Figure 2 through prose alone. We now have a cause that involves the focal character in an event, reshaping his views, which leads him to an influence that he might otherwise not have been exposed to, which leads to the contemporary personality trait. Break any one of the elements in that domino chain and the personality is radically reconfigured. The character has depth and nuance and history. This is the minimum standard aimed for by top novelists for their featured characters.

Figure 4 adds a new layer of depth to the domino chain and projects personality evolution into the future as the consequences of this personality trait manifest in the focal character’s life. That’s the sort of depth that makes for a great PC, because they have a past and a ‘now’ that is not static and can (and presumably will) evolve with the campaign.

But that’s just looking at one personality trait. Most characters have two or three major traits of this strength, and three or four more of lesser prevalence or impact but sufficient intensity to override or influence choices that would otherwise be driven exclusively by the dominant traits.

No matter what standard you aim for with your primary traits, I recommend going one step shallower for most of those secondary traits.

But I have to call attention to an alternate construction that can also be valid – a simpler general personality and more substantial secondary traits (‘learning by experience’), and probably more of them. The results have sometimes been summed up as ‘a study in contradictions’, but the character is unified by the concept and deep background.

Where Next From Here

I recommend writing a highly compressed diary of sorts, one entry to a year, for each age of the character’s life. The first section lets you introduce the immediate family and some members of the extended family; the second deals with the rest of the extended family; sometime in the next two years will be the focal character’s earliest memory, and will be full of influences and encounters that the focal character doesn’t even remember.

For example, I’m told that when music that I liked came onto the radio when I was 1 month old, I would bounce in my bassinet in time with it. I have no memories of that – but my life since has consisted of periods when my interest in music waned, only to be suddenly and vehemently reawakened. Right now, I’m in a waning stage – but once my Hi-Fi is finally all hooked up, and I can lean into my CD collection, I expect another strong awakening.

Here’s a nice little daisy chain for you to consider:

  1. Interest in Music -> Concert attendance, large music collection
  2. Concert attendance -> Better Sound Equipment
  3. Better Sound Equipment -> Contact with Musicians
  4. Contact with musicians + Concert Attendance -> Better Sound Equipment
  5. Better Sound Equipment + Interest In Music -> Interest in Production & Composing
  6. Interest in Production & Composing -> Better Computer Audio
  7. Better Computer Audio -> Better Musical Composition
  8. Better Musical Composition -> Awards for Composition
  9. Better Musical Composition + Contact With Musicians + Interest in Production & Composing -> Greater interest in music

That’s a different perspective on my relationship with music. It’s a great, big, circular loop, and at one or two points I seriously contemplated making music and music production my career path. In fact, I was preparing to release my first CD when a computer crash wiped out almost my entire archive of original music. Of course, I had a backup – but the archive proved to have been corrupted by the supposedly more reliable backup software that I had been using.

Be all that as it may, and getting back on topic – the diary would then continue with specific formative events and memories. This gives you the chance to add in all the specific details that the Worksheet doesn’t have room to hold. But by forcing you to generalize, it also gives the opportunity for the character to grow in unexpected directions.

Another gold mine is to have a latent interest – an undiscovered fascination with a subject that will lurk in the character’s personality until an event or encounter awakens it. For a while, the character will eat, drink, and sleep that fascination, until reluctantly forced to turn his or her attention back onto something else.

Giving a character a direction in which to evolve, absent any other defining events or resonances, means that he is not a static thing, but changes over time. How that evolution takes place is something to be determined in collaboration with the GM.

A “Resonance” happens when a focal character is not directly involved in the incident but it nevertheless alters their behavior and possibly opinions or priorities. This is probably, but not necessarily, out of sympathy – but it could also be from outrage, for example, or a sense of justice, or several alternative possibilities.

All This and Worldbuilding, too

The other area in which you should collaborate with the GM is geography and world-building. Get the GM to design – at a conceptual level at least – the focal character’s home town, to your specifications, then use that as a guideline for filling out the Relationship Worksheet.

The GM helping to define your character and his or her background also helps the GM build and develop the game world. Scratch each other’s backs and you both benefit from the effort.

And finally, remember that the more detailed the character, the more easily the GM can craft adventures that integrate specifically with the individual, to the greater enjoyment of all involved.


7 Reasons A Game Physics Matters


A question so obvious I don’t think I’ve ever answered it before: Why does a game physics matter? I give 7 reasons.

This image composites ‘Theory Of Relativity‘ from Pixabay, with Abstract Quantum Physics by p2722754, also from Pixabay. Even though the formula has been an iconic symbol for physics for about a century, unless your Genre is sci-fi or superheroes, I recommend keeping your GAME physics strictly classical.

In fact, you can do worse than to draw a line above a branch of physics you don’t want, eg Quantum Theory, and use that to select a ‘physics foundation’ date.

I was working on a future post detailing a speculative (real-world) physics – it’s good stuff, coming soon – when it struck me that I don’t think I’ve ever addressed this fairly fundamental subject.

What Is A Game Physics?

When you get right down to it, a game physics is a description of which effect follows which cause, from the perspective of the characters inhabiting the game world.

In the outer meta-verse, you get event chains like

    decision (p) -> appraisal (g)
    appraisal (g) -> [skill] check (p)
    [skill] check (p) -> outcome (c)
    outcome (c) -> consequence (c).

(p) is player, (g) is GM, and (c) is character.

A game physics describes what this process looks like from, and how it makes rational sense to, the characters. The character doesn’t know the game mechanics, and doesn’t know there’s a ‘higher power’ called a player, with a still-higher ‘power’ called the GM, both pulling his strings. From his, her, or its perspective, the character decides to do something, attempts it, the attempt succeeds or fails or lands somewhere in between, and there are then consequences of that outcome that have to be dealt with. The game physics provides the conceptual landscape for translating action (the attempt) into consequences.
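Purely as an illustration of the meta-verse event chain above, here is a code sketch. The function name, the d20-style check, and the consequence strings are all hypothetical stand-ins of my own, not a real game system:

```python
import random

def resolve_action(attempt, skill, difficulty, rng=random):
    """Walk the chain: decision (p) -> appraisal (g) -> [skill] check (p)
    -> outcome (c) -> consequence (c)."""
    # decision (player): the attempt itself, passed in as `attempt`.
    # appraisal (GM): the GM has already rated the attempt with `difficulty`.
    # [skill] check (player): a die roll modified by skill.
    roll = rng.randint(1, 20) + skill
    # outcome (character): in-world, the attempt succeeds or fails.
    success = roll >= difficulty
    # consequence (character): what must now be dealt with.
    consequence = "presses the advantage" if success else "scrambles to recover"
    return {"attempt": attempt, "success": success, "consequence": consequence}

result = resolve_action("leap the chasm", skill=5, difficulty=15)
```

The character only ever “sees” the last two steps; everything before them lives in the outer meta-verse.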

Noteworthy properties of a Game Physics include:

  • It doesn’t have to look anything like even a simplified model of real-world physics. It can incorporate things that we don’t consider real, like magic, teleportation, and FTL travel.
  • It should be internally consistent, up to a point – that point being the cutting edge of the game world’s understanding of the universe. (It’s often useful to be one step further along in your own understanding, because you can then describe the outcome of experiments aimed at probing that unknown.)
  • It can contain errors and mistakes of judgment and interpretation. It can contain wild speculation, with the caveat that this probably isn’t correct, though there may be elements of truth within it.
  • It doesn’t have to explain everything, though it should explain most things, even if only vaguely.
  • It can be argued that the game mechanics form an element of the Game Physics, and that analysis of those mechanics is physical law every bit as much as any natural law in real-world physics. It’s just abstracted a little.

As a general rule, a game physics should not go so far as to express reality in equations the way our real-world physics does – which is a good thing, because many GMs aren’t equipped to do so, anyway. A narrative description is good enough.

Inevitably, this will use technical terminology. While it perhaps isn’t necessary to define these terms explicitly, doing so helps keep their use consistent, and that’s far more important.

Does too much energy in a focal point tear open a hole in reality? How big a hole? Does this hole suck anything nearby into it, or do things have to pass through it by their own motive power? Where does it lead? How big does it get? These are all things that should be in a game physics.

Three Game Physics, not one

Here’s a technique that I’ve learned the hard way. Create a game physics that describes the universe as you, the GM, understand it.

Make a copy of it, and then redact the most advanced content. Then replace some (just a little) of the remainder with incorrect understandings. Then generalize a little bit. That’s the Game Physics as it is understood by the experts of the game world.

Make a copy of that, redact the most advanced content, then replace between 1/3 and 1/2 of the remainder with something that’s oversimplified or incorrect, then generalize the heck out of almost everything. That’s the game physics as understood by (most of) the PCs, and therefore what you provide to the players.
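The three-tier derivation just described can be sketched as a simple pipeline. Everything here – the principles, the advancement scores, the “folk version” rewriting – is an invented example of mine, not a real workflow:

```python
import random

def derive_tier(principles, cutoff, garble_fraction, rng):
    """Derive a lower-tier physics document from a higher-tier one:
    redact anything more advanced than `cutoff`, then replace a
    fraction of what remains with an incorrect understanding."""
    kept = [(text, level) for text, level in principles if level <= cutoff]
    result = []
    for text, level in kept:
        if rng.random() < garble_fraction:
            text = f"[oversimplified version of: {text}]"  # stand-in for a real rewrite
        result.append((text, level))
    return result

# Master physics: (principle, advancement level 0-10). All invented.
master = [("enough focused energy tears a hole in reality", 9),
          ("magic draws on ambient mana", 4),
          ("mana pools near ley lines", 6),
          ("cold iron disrupts spellwork", 2)]

rng = random.Random(1)
expert = derive_tier(master, cutoff=8, garble_fraction=0.1, rng=rng)  # the experts' version
player = derive_tier(expert, cutoff=5, garble_fraction=0.4, rng=rng)  # the players' version
```

The key point the code makes explicit: each tier is derived from the one above it, never written from scratch, so nothing in the player document can contradict the master – it can only be incomplete or mistaken.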

Oh, and one more tip before I move on: Different cultures will have different names for, and interpretations of, the same phenomena. We’re used to the modern world where terminology tends to become universally accepted; but look back just a century or two, and you find that’s not the case, especially at the edges of discovery. Everyone had their own terminology for the same phenomena, and sometimes didn’t even recognize the phenomena as being the same thing. The pioneers in electricity are a great example, and this cuts the scope of ‘the history of science’ into something small enough to digest. You don’t even need to understand the science itself, the stories of the electrical pioneers are enough.

You can start with this YouTube video;

follow it with this introduction for kids;

then move on to this BBC article;

before concluding with this PDF from the University Of Lisbon

– and having the Wikipedia page on the subject on hand to fill in any blanks might also be useful.

How Much Attention Should You Pay To A Game Physics?

About as much attention as you pay to physics in the real world – you consult it when you need to work out what just happened and when you need to figure out what’s going to happen, and more or less ignore it the rest of the time – which is to say, less often than you should.

The one aspect of physics in the real world that’s ignored all too often is the physics of motion – the correlations between distance, speed, time, acceleration, friction, traction, momentum, and braking distance. But that’s of dominant importance purely because we drive so much.
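If you want numbers behind those correlations, standard classical kinematics (my addition, not from the article) gives braking distance directly from speed and traction; a quick sketch:

```python
def braking_distance(speed_mps, friction_coeff, gravity=9.8):
    """Distance needed to stop from speed v, assuming constant
    deceleration a = mu * g, via the kinematic relation v^2 = 2 * a * d."""
    return speed_mps ** 2 / (2 * friction_coeff * gravity)

# A car at highway speed, ~30 m/s, on dry asphalt (mu roughly 0.7):
d = braking_distance(30, 0.7)  # roughly 65 meters
```

Note that distance scales with the *square* of speed – which is exactly the sort of non-obvious consequence a game physics exists to surface.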

In a fantasy world, the basic laws of Magic and Faith should take center stage. And in most sci-fi worlds, it’s either the laws of cyberspace or the physics of FTL that should be ubiquitous – but probably aren’t.

Part of your job as GM is to smack the PCs in the nose with a rolled-up game physics when their characters do something stupid – and then let them walk back the action if their characters should have known better (an important caveat).

That implies that you have to fill in the gaps that exist between your ‘master physics’ document (see the excerpted section above) and player ignorance (‘What do you mean, “you didn’t read it”?’).

There are four specific time frames in which special consideration needs to be applied, so let’s break those out and take a closer look at them.

    In Campaign Creation?

    If there’s a more important time to work closely with your game physics, I don’t know when it is.

    The logic runs like this: either your game physics is utterly conventional, or it’s not. If it is not, then you want every difference between your game reality and that of the familiar ‘real’ world to be on display, or what’s the point? A difference that makes no difference is no difference. Exceptions can always be made for twists and surprises that you intend to hit the players with in the course of the campaign.

    In particular, anything mentioned in the players’ briefing material as their characters’ understanding of the game physics should be represented, but must be supported by the ‘real’ game physics – even if the details that indicate that difference are somehow obscured.

    It’s like a mystery plot, where you have to present a clue without making the significance of the clue obvious.

    It’s long been my practice to deliberately design some unique property, some distinguishing concept, into my campaigns to differentiate them. Those “axioms” are fundamental to the game physics of the campaign, shaping both its past and its contemporary reality. And the logical consequences form subsidiary impacts on all of the above.

    In the Rings Of Time campaign, the central tenets were “The Gods are mostly paper tigers; they select a number of promising mortals and give them extraordinary luck and opportunities, use them to do their dirty work, then dump them back into the Normal World and forget them”. The PCs were two such mortals, and they decided (at the end of what was intended as a one-off adventure, doing that ‘dirty work’), that if they were going to do the Work of Gods, they were damned-well going to enjoy the perks that went with Divinity.

    In the Fumanor campaigns, it was “Belief and perception shape the Gods. Churches, Shrines, Temples, Priests, and Holy Books exist to shape beliefs and perceptions through Myth and Allegory and outright fiction, written by Mortals for their own purposes”. The very concept of Divinity, and what made a God a God, were central premises, and much of the Campaign revolved around Lolth’s insatiable Lust for Divinity; she did not realize that she would then be trapped in the form shaped by those Mortal Perceptions, with far less freedom of action than she currently enjoyed through the Faith of her followers.

    In the Shards Of Divinity campaign, it was an Origin Of The Universe story that created Gods and Anti-Gods to assuage the All-father’s loneliness, and that contained an inevitable apocalyptic confrontation between these Children Of The All-father, with the Mortals and Monsters and Demons and Devils created (imperfectly) by those Children playing the role of the ‘X-factor’ in the outcome. Leftover ‘shards’ of the body of the All-father, destroyed by his children out of Jealousy and Teenaged Angst, gave mortals the ability to wield Magic, both Faith-based and Arcane. Loneliness, Love of Creation for its own sake, Ego, Pride, Willfulness, Envy, and a sense of Responsibility (or the lack thereof) were built directly into the Game Physics, which was set against the imminent Apocalypse.

    In Zener Gate, it was ‘Causality in a universe with Time Travel’, and the nature and physics of Time Travel were key components of the game physics.

    And so on.

    I can’t recommend this approach strongly enough. Define your game Axioms and let all of History (from the perspective of the END of the campaign) be an exploration of the consequences, implications, and ramifications of those Axioms.

    In Character Generation?

    Anything a character can do should be represented or explained through the game physics. Again, work from the player version but check for compatibility with the ‘real’ version.

    Note carefully any differences between what a character will think is going to happen when they first use this ability, and what is really going to happen. And make sure, when they DO use it for the first time, that you make a point of telling the player that it is the first time, or they will want to know why they didn’t notice the difference sooner.

    Sometimes, the environment they were in can be used to explain this, and you can have the character be familiar with the ability and still get surprised by it, discarding the “first time” justification. That’s always good, because you can only plausibly go there so often.

    For this reason, either some of your game world should be set up to have this consequence – which means doing that part of the world-building after character generation – or you should anticipate the possibility of a character having the ability in question and construct your game world accordingly.

    In Game Prep?

    The game physics should always be in the back of your mind when you are designing adventures – it can be worth your time to skim the physics descriptions accordingly.

    Obviously, the longer and more involved they are, the longer (and probably more superficial) this review is going to have to be. That’s something to keep in mind when you’re creating the game physics in the first place.

    In Play?

    The final ‘special occasion’ is during actual play, because that is when the physics will actually make a difference – if it’s well-constructed. It should rarely be at the front of your mind, but it should normally be at the back of your mind.

    But always remember the Rule Of Cool – “any action attempted can succeed if it’s cool enough”. In a fantasy game, this is attention from a “passing deity” overriding normal cause-and-effect to decree a different outcome. It can be harder to explain in Sci-fi campaigns; perhaps dictate that physics has a fuzziness, an ‘uncertainty principle’, a ‘butterfly effect’, in which random factors, usually negligible, combine to change an outcome.

    And never forget that the Rule Of Cool should also apply to attempted actions by NPCs, be they allies, onlookers, or enemies.

    Sidebar: What is “Cool”?

    I’ve seen “The Rule Of Cool” referenced hundreds if not thousands of times, but it was only when I typed the statement above that I realized that I had never seen anyone define “Cool” – so let’s give it a shot, here and now.

    An action is “Cool” if it so perfectly and spectacularly distills the primary capability of the character performing it that it supersedes rules and planned plotlines. It creates an iconic “moment” within the campaign.

    If the game were a movie, these moments are the ones that bring roars of appreciation from the audience. They can be spectacular, but even without trying hard, I can think of two moments that qualify with nary a special effect in sight:

    • Indy vs the swordsman, Raiders Of The Lost Ark; and
    • Ripley, Aliens, to the Alien Queen: “Get away from her you bitch!”

    Both of these brought the entire cinema to their feet when I first watched the respective movies. It’s the difference in reactions between Star Trek The Motion Picture and The Wrath Of Khan.

    On top of those examples, there are a few that rely on special effects, a paradigm shift in the way you see possibilities.

    • Bullet Time from The Matrix; and
    • Legolas surfing down the Cave Troll’s back while firing an arrow in The Fellowship Of The Ring.

    Once again, cinema-goers were on their feet cheering these scenes the first time they saw them. These were Cool moves.

    The first has its equivalent in the GM getting his narrative description of action absolutely perfect, coalescing a mass of small details into a perfectly-visualized scene. The latter is more obvious – it’s the perfect blend of audacity and skill.

    A rogue catching a dagger in mid-air and throwing it back at the target is a great example. By the rule-book, they might have to make all sorts of rolls and might lack the specific abilities needed – but because it’s definitive of the character, and the PCs have their backs to the wall, the GM lets it happen regardless – once. Try to do it again, and the results are likely to be less than spectacular.

    “The Rule Of Cool” should also apply to NPCs attacking the PCs, but because the GM has so much control over the game, this is more difficult to justify; it has to be self-evident that it’s a ‘Cool Move’ or it will fall worse than flat.

    The final point to be noted is that “The Rule Of Cool” has to be permitted to break GM plot planning. The alternative is disruptive of player agency to the point of sounding like a plot train.

    At most, there can be a brief delay while the GM builds the dramatic effect to a maximum, but even that relies on the player concerned trusting the GM, who can’t just say “The rule of cool will let you succeed” without undermining the dramatic effect of the moment. Managing this situation can be quite tricky: you have to acknowledge the move, avoid letting the announcement influence your NPC decisions, and avoid telegraphing your ruling, all without discouraging the player. Perhaps the best approach is to spread the action out – “Okay, you’re moving into position to make your attempt, I’ll let you know when you get there.”

    This can force the GM to have to abandon his script and improvise. I find it a lot easier to do so if I’ve noted at the very top of the encounter what the NPCs are trying to achieve in the encounter, because that’s what will dictate their future decisions.

    It can be tempting to play a game of ‘tit-for-tat’, giving the player his “rule of cool” moment but then giving a countermove to the NPCs. Resist this urge. Resist it hard.

    The overriding nature of Critical Hits and Failures also gets explained in terms of Game Physics in the same way.

Seven Reasons For A Game Physics

There’s a lot to keep track of when you’re the GM. The game world, the game rules, the motives, the environment, the personalities and capabilities and plans of the NPCs, the personalities and desires and capabilities and plans of the PCs, the adventure, and how it fits into the bigger picture of the campaign.

As a result, GMs quickly get used to pulling decisions out of their butt when they are needed. Over time, and with experience, they get better at all of this, and start learning to integrate all of this into a coherent view of the current in-game situation, permitting those ad-hoc decisions to be better informed by all the different considerations.

Given that expertise in making decisions, and the already high demands on the GM, why on earth would the GM want to add another demand, another consideration, to their slate?

There are Seven critically-important reasons for having a defined game physics. They aren’t all equal, it has to be admitted, but which ones are dominant will vary from one GM to the next. If there were only one or two, you could argue that the totality might not be enough to justify the added workload, but with seven, I think the justification is a slam-dunk.

1. Consistency Of Rulings

There is always a tension between creating an exciting story and creating verisimilitude. One of the pathways to the latter is consistency of GM rulings, because that adds up to an impression that the GM isn’t just making things up as he goes along; he is describing the interface between the game world and the actions of the PCs, especially the more spectacular actions that are less likely to be covered by the standard rules.

Understanding the fundamental principles behind the simulation of ‘game reality’ gives you a foundation to make those verisimilitude decisions more consistently, with less deep thought – and that leaves you with more capacity to focus on other things.

2. Fairer Rulings

Greater consistency, as already explained, makes the world feel more real, and the decisions you make feel fairer from the perspective of the player. It builds that trust in the GM that I mentioned at the end of the sidebar on the Rule Of Cool. But this isn’t some illusion; your decisions actually do become fairer and more justifiable. If one is challenged, you can pull out the game physics to explain it, and if the players can accept that your NPCs are operating within the limits of the game physics (and fail when they attempt to push beyond them without the benefit of The Rule Of Cool), then it all feels more real to them.

3. Ease Of Rulings

If decisions can be guided by fundamental principles that are clearly understood, making those decisions becomes easier – and that produces an automatic lift in the quality of everything else you have to do as a GM because you have more capacity to devote to those aspects of the task.

4. Predictability By Players

If you’re consistent in your use of Game Physics, even if it isn’t explained to the players, they will start making notes and developing theories. But you get greater benefits by providing an abridged and generalized game physics to the players (with an explanatory note that the players’ version is generalized and simplified), because the players will then be paying attention to the underlying physics.

You know that the physics has been accepted and integrated into the campaign when the players start using it to make plans, and noticing when the physics overrules the usual game mechanics.

5. Internal World-Logic

This benefit is a through-line direct from world creation to game-play. Using a game physics that has been properly worked out creates an internal logic to the game world, and using that same physics to adjudicate game decisions puts the players into direct contact with that internal logic.

    A development process

    But there was an important caveat in the preceding paragraph that almost snuck through without being noticed, I’m sure: ‘properly worked out’.

    I have seen GMs create their game worlds and THEN try and devise a game physics from scratch, and it never really works out very well. The Game Physics should come first.

    Of course, that’s hard to do if your campaign is already underway, but there is a way around the problem. It’s my old favorite technique, iterative development.

    Start with the most overt expression of game physics. In a fantasy game, that might be ‘what are the gods?’ or ‘how does magic work?’. Develop your game physics to answer the question.

    Then look at your game world and find the next biggest item that hasn’t been explained by the first principle, its assumptions, its ramifications, and so on. Add a new principle to your game physics to cover it.

    And repeat the process for as long as necessary.

    You then need to review the results in light of all your in-game descriptions and decisions. Hopefully, there will be no contradictions, but the far greater likelihood is that there will be. For each contradiction, you either need to refine the game physics you’ve built to remove / explain the contradiction, or you need to add another principle to override the nascent game physics.

    You will be faced with inconsistencies and anomalies and outright mistakes, and you have to wrest some sort of rational sense out of all of them. It’s a lot harder than getting it right in the first place – but it’s the only right way to do it after the fact.
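The iterative process just described is essentially a loop, and a minimal sketch may make the shape of it clearer. The `prominence` scores and the `explains` test here are invented stand-ins for the GM’s own judgment calls:

```python
def develop_physics(phenomena, explains):
    """Iterative development: repeatedly take the most prominent
    unexplained item, add a principle covering it, then strike out
    everything that principle (with its ramifications) now explains.
    Assumes each new principle explains at least its own target."""
    principles = []
    remaining = sorted(phenomena, key=lambda p: p["prominence"], reverse=True)
    while remaining:
        target = remaining[0]
        principle = f"principle covering {target['name']}"  # the GM writes the real text
        principles.append(principle)
        remaining = [p for p in remaining if not explains(principle, p)]
    return principles

# Invented example: the first principle (magic) also happens to explain undead.
phenomena = [{"name": "magic", "prominence": 10},
             {"name": "gods", "prominence": 9},
             {"name": "undead", "prominence": 5}]

def explains(principle, phenomenon):
    return (phenomenon["name"] in principle
            or (phenomenon["name"] == "undead" and "magic" in principle))

physics = develop_physics(phenomena, explains)
```

The payoff of working biggest-first is visible in the example: one well-chosen principle can dispose of several lesser phenomena for free.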

6. Verisimilitude

I’ve saved the most obvious benefit for second-last (because the last item was added at the last possible minute). The more fantastic the campaign setting and concept, the harder and more important it is to really sell those things to the players. Every factor that contributes to the believability of the campaign, that aids the suspension of disbelief, is a precious resource not to be neglected.

Even if your campaign is not so out there that you need the Verisimilitude boost, a game physics gives you license to stray beyond the lines that your concept has created. You don’t have to stray that far if you don’t want to; it’s just another plot card that you have up your sleeve.

7. Natural Color

Probably the least-obvious reason, which is why it was an afterthought to the original list of six. The reasoning behind this one lies in the prediction of outcomes, making descriptions of those outcomes more detailed and – when it’s justified – more spectacular. But because these narrative blocks are describing an implied reality, they feel more natural, and contain hidden ‘worldbuilding’ that lends a solidity to the game world.

There’s an example of this that I would love to cite, but (1) it hasn’t been played yet and citing it would give entirely too much away, and (2) it would probably double the length of this article. So, instead, I’ll choose a pair of second-best examples from my Doctor Who campaign that suffer from neither of these restrictions.

    Example #1: Gallifreyan Units

    The Gallifreyan units of measurement described below were determined by asking the question, ‘how would this defined race measure these quantities?’ – and that gets into areas of psychology and culture. There’s nothing sacrosanct about the units of measurement employed by humans, and you immediately introduce differentiating color to the species and culture by employing units that they find sensible and traditional. The units described below reflect the internal world-view of Gallifrey and its residents, and add additional layers of world-building to the campaign. Most significantly, it’s possible to reverse-engineer the reasons for these choices of unit to reveal subtle details about the Doctor’s home-world.

    There are two primary levitation coils for each wheel, and 6 more along the chassis that provide lateral stability, for a total of 18 coils in sets of three. A critical failure of the system, requiring a repair, is defined by two or more adjacent primary coils failing entirely, resulting in one corner of the vehicle dropping uncontrollably.

    Repairing only the critical failures would leave the ride quality degraded by 15% or more – you would feel every rock, bump, and jolt, and there would be an increased risk of instability at high speeds, during sudden maneuvers, and during heavy braking using the emergency brakes.

      GM Note: Driving Penalties should be increased.

    It would have taken 10 minutes to fetch a portable power unit from the Tardis and 10 minutes to take it back when the work was done, but this is avoided by using the power supply you have already taken to the Bug. It only takes one minute per coil to disconnect it from the system, hook up some power leads, feed it some juice, measure the magnetic flux, and write it down on a Christmas tag (the first thing that came to hand that won’t be excessively degraded by exposure to vacuum, since there’s no glue or binding involved). Tie each tag to the relevant coil’s power cables with a bit of string, disconnect the power, and move on to the next. Once you can compare the readings for each coil, you can decide on your next move.

    Gallifreyans measure magnetism in Lobes, a number of Quantum lines of magnetic force per micron. Lobes are useful for the most minute of magnetic effects; kL (Kilolobes) is the right unit for lifting small weights, ML (Megalobes) for the lifting of heavier objects (like the Bug), and GL (Gigalobes) for reactor field strengths and the like.

    They measure temperature in Quanta-D (and yes, the Doctor is well aware that the English translation of the term creates a pun with the word “Quantity”). The unit is defined from the Perfect Gas Law, which relates the absolute temperature to the average velocity of the gas molecules; the “D” refers to “Derivative”, which comes from the definition of velocity as the rate of change in position and direction over time. There’s roughly 100 QD in 1°C.
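Purely for illustration, the two unit definitions reduce to trivial conversions. The helper names are mine; only the ‘roughly 100 QD per 1°C’ figure and the metric-style Lobe prefixes (kL, ML, GL) come from the text:

```python
QD_PER_CELSIUS = 100  # 'roughly 100 QD in 1 degree C', per the text (approximate)

def celsius_to_qd(delta_c):
    """Convert a temperature change in degrees C to Quanta-D."""
    return delta_c * QD_PER_CELSIUS

def lobes(value, prefix=""):
    """Express a Lobe reading (L, kL, ML, or GL) as raw Lobes."""
    scale = {"": 1, "k": 1e3, "M": 1e6, "G": 1e9}
    return value * scale[prefix]

# A 2000 QD thermal change corresponds to roughly a 20 degree C rise:
rise = celsius_to_qd(20)   # 2000 QD
coil_3 = lobes(12, "M")    # 12,000,000 Lobes
```

Trivial as they are, helpers like these let you hand players readings in the alien units while keeping your own GM notes in familiar ones.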

    Clockwise from Front Right:

    • SET 1 Front Right:
      • Coil #1 (Primary): Magnetic Strength 10.8 ML, Thermal Change 10 QD, Implied health status: Slight Degradation.
      • Coil #2 (Primary): Magnetic Strength 10.9 ML, Thermal Change 11 QD, Implied health status: More micro-breaks but they are less severe than Coil #1, so Slight Degradation.
      • Coil #3 (Lateral): Magnetic Strength 12 ML, Thermal Change 10 QD, Implied health status: Very Slight Degradation.
    • SET 2 Middle Right:
      • Coil #4 (Primary): Magnetic Strength 10.6 ML, Thermal Change 9 QD, Implied health status: Slight Degradation.
      • Coil #5 (Primary): Magnetic Strength 9.6 ML, Thermal Change 50 QD, Implied health status: Moderate Degradation.
      • Coil #6 (Lateral): Magnetic Strength 6 ML, Thermal Change 300 QD, Implied health status: Severe Degradation.
    • SET 3 Rear Right:
      • Coil #7 (Primary): Magnetic Strength 0 ML, Thermal Change 0 QD, Implied health status: Complete Failure.
      • Coil #8 (Primary): Magnetic Strength 1.2 ML, Thermal Change 2000 QD, Implied health status: Critical Failure, will cause thermal stress degrading surrounding coils.
      • Coil #9 (Lateral): Magnetic Strength 7.3 ML, Thermal Change 200 QD, Implied health status: Moderate Degradation.
    • SET 4 Rear Left:
      • Coil #10 (Primary): Magnetic Strength 8.4 ML, Thermal Change 104 QD, Implied health status: Moderate Degradation.
      • Coil #11 (Primary): Magnetic Strength 10.5 ML, Thermal Change 11 QD, Implied health status: Slight Degradation.
      • Coil #12 (Lateral): Magnetic Strength 8.5 ML, Thermal Change 97 QD, Implied health status: Moderate Degradation.
    • SET 5 Middle Left:
      • Coil #13 (Primary): Magnetic Strength 0 ML, Thermal Change 0 QD, Implied health status: Complete Failure
      • Coil #14 (Primary): Magnetic Strength 1.2 ML, Thermal Change 2000 QD, Implied health status: Critical Failure, will cause thermal stress degrading surrounding coils. Coil #13 might be its first victim.
      • Coil #15 (Lateral): Magnetic Strength 5.2 ML, Thermal Change 63 QD, Implied health status: Moderate Degradation.
    • SET 6 Front Left:
      • Coil #16 (Primary): Magnetic Strength 8.5 ML, Thermal Change 88 QD, Implied health status: Moderate Degradation.
      • Coil #17 (Primary): Magnetic Strength 11.1 ML, Thermal Change 8 QD, Implied health status: Slight Degradation.
      • Coil #18 (Lateral): Magnetic Strength 12.2 ML, Thermal Change 7 QD, Implied health status: Very Slight Degradation.
         

          GM NOTE: Prepare a Table.

    Given the minute nature of the components, disassembly of severely degraded coils is not really a viable time-effective option, though you’ll work out the logistics just in case there’s a way around the problems. Instead, you (through Quasima) will probably have to have the Tardis manufacture replacements while you’re doing other things – but that’s going to be easier said than done, too.

    Electromagnets consist of coils of wire (usually very fine, but that depends on the current they are expected to carry, which is a function of the material and the magnetic strength they are to output). Increase either the current or the coil density and you get a stronger magnetic field. These coils are usually wrapped around a ferrous material of some sort, which amplifies the magnetic effect produced. Both wires and core are often composed of exotic materials when they have to cope with unusual conditions – and the Moon definitely qualifies in that respect, due mostly to the thermal differential between night and day (254°C, or about 26000 QD).

    With permanent magnets, the magnetism can recover from exposure to extreme heat up to a threshold known as the Curie point, but anything beyond that causes permanent loss.

    It’s the same with the cores of electromagnets. The core material is chosen for the tendency of its crystalline domains to align easily, amplifying the magnetic force produced (potentially) thousands of times, while releasing that magnetism quickly, and as close to completely as possible, when the electrical current stops flowing through the coils. The Curie point is the temperature at which these tendencies break down and the material becomes paramagnetic. Approaching the Curie point, there is a loss of efficiency, but cool the cores back down and they are as good as ever – go past that temperature, though, and the ability of the core to create magnetism simply stops working, and the magnetic field almost completely collapses.
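    The real-world relationship behind all this – more turns or more current means a stronger field, multiplied by the core – is the standard long-solenoid formula. A minimal sketch, using ordinary terrestrial values (nothing here is Bug- or QD-specific):

```python
import math

# Field inside a long solenoid: B = mu_r * mu_0 * n * I
# where mu_0 is the vacuum permeability, mu_r the relative permeability of
# the core, n the winding density (turns per metre), and I the current.
MU_0 = 4 * math.pi * 1e-7  # T*m/A

def solenoid_field(turns_per_m: float, current_a: float, mu_r: float = 1.0) -> float:
    """Flux density in tesla for an idealized long solenoid."""
    return mu_r * MU_0 * turns_per_m * current_a

air_core = solenoid_field(10_000, 2.0)          # ~0.025 T with no core
iron_core = solenoid_field(10_000, 2.0, 5_000)  # a soft-iron core multiplies that by mu_r
# (real iron saturates long before mu_r delivers its full multiple -- the
#  idealization ignores saturation)
```

    Past the Curie point, mu_r collapses toward 1 – which is exactly the field collapse described above.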

    Most commonly-used electromagnetic materials can cope with lunar daytime temperatures just fine – being near a fusion reactor is more likely to cause them trouble. But there’s always a secondary consideration when dealing with materials intended for space – mass, or more specifically density relative to magnetic performance, and THAT is then complicated by the lunar conditions.

    The =second=-best answer, when everything is taken into account, is old-fashioned iron, and until about 2035, that would have been the only choice considered. But in that year, engineers came up with the Axiom-Cobalt Matrix, designed for extreme thermal stability and high magnetic density – and with a Curie point at an unprecedented 1450°C, which is 145 kQD. Its weight is significantly higher than that of iron, but a little clever engineering can bring it back down to the effective weight of iron with most of its benefits preserved.

    Ideally, you would like to use the old cores and construct a new winding around them, because the new unit would be a more exact replacement for the dead or damaged unit. But that would take time, and time is the one thing you can’t manufacture.

    The Tardis uses a different core material in preference, one humans of this era have not yet invented: a Quantum-Graphene Ferromagnet (QDF). It weighs less than 40% of the weight of an iron core, but still has several times the magnetic efficiency – though still less than Axiom-Cobalt. It’s just considered a more practical compromise by Gallifreyans.

    It would take the Tardis about an hour to build a new coil with an Axiom-Cobalt core. Once it’s made one, it can turn out as many more as required in about 10 minutes.

    It would take about 40 minutes to build a new coil around the existing Axiom-Cobalt cores, but that’s 40 minutes per coil – it’s detailed and finicky work. And you would have to carry each dead coil back to the Tardis, and cut away the existing wiring loom without damaging the core underneath – that’s another 40 minutes per pair of coils – and give the Tardis 10 minutes to measure the number of coil windings, wire diameter, and electrical properties of the wire. Even if you only have to replace 4 of them, that’s more than 4 hours, and four coils is the bare minimum for functionality.

    Using the technology that the Tardis already ‘understands’ is a significant time saving – about 5 minutes to measure the physical shape of an existing (dead) unit, 5 minutes to do the electrical analysis and determine the number of windings, another 5 minutes to override the scans and replace broken wiring with whole in the specification (with Quasima doing the work; it would take you hours), and 10 minutes to manufacture as many as you need. You would still have to remove a defective coil and carry it down to the Tardis for all this to take place, and still need to ferry the replacements back out to the Bug, but that’s 25 minutes plus transportation time for unlimited replacements – there’s no real alternative to consider, given the time constraints. There’s so =much= you could get done in the more than 3½ hours you save!

    It takes 8 minutes to remove a single defective coil/unit, 12 minutes to install a single replacement and connect it, plus 8 minutes returning to the Tardis (you’re learning the terrain) and 10 minutes per pair of coils to be repaired/replaced, carrying them from the Tardis to the Bug. You can’t take more than 2 at a time on the sled.
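    If you want to re-derive or vary these totals at the table, the per-task durations above reduce to a simple tally. A sketch – the task names and the example sequence are mine; the sequencing assumptions are what separate the options below:

```python
# Per-task durations in minutes, as given in the text.
TASK_MINUTES = {
    "remove_coil": 8,         # remove one defective coil/unit
    "install_coil": 12,       # install one replacement and connect it
    "walk_to_tardis": 8,      # trip back to the Tardis (you know the terrain now)
    "carry_pair_to_bug": 10,  # sled one pair of coils from the Tardis to the Bug
    "tardis_work": 25,        # scan + electrical analysis + spec override + manufacture
}

def total_time(sequence: list[str]) -> str:
    """Total a repair sequence and format it as 'Xh Ym'."""
    minutes = sum(TASK_MINUTES[task] for task in sequence)
    hours, mins = divmod(minutes, 60)
    return f"{hours}h {mins:02d}m"

# e.g. remove all four coils, carry one down, and wait out the Tardis work:
prep = ["remove_coil"] * 4 + ["walk_to_tardis", "tardis_work"]
# total_time(prep) -> "1h 05m"
```

    The option totals in the text fold in additional to-and-fro and downtime, so treat this as a planning aid rather than a minute-perfect re-derivation.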

    And that raises a logistical question: which approach is the most efficient? The Tardis can’t begin its scans until you carry a unit down, but you only need to take one. By the time you’ve finished evaluating the different combinations, you have multiple options to consider.

      Option 1: Remove all the coils to be replaced, carry one to the Tardis, wait for the analysis to be completed and the replacements to be manufactured (taking advantage of the time for a rest, a quick cuppa, and to recharge your O2 tanks), collect the first pair, return, collect the second pair, and then install them.
      Total: 2 hrs 31 min.

      Option 2: Remove a coil, carry it to the Tardis and set it up to be analyzed, return to the Bug and remove the other 3 coils to be replaced while the Tardis is performing its scans and analyses and manufacturing the replacements, head back down to the Tardis, bring back the first pair of coils, back to the Tardis, bring back the second pair, and install all four. Total time of 2 hrs 16 mins.

      * Is saving 15 minutes worth foregoing a 25-minute break? Only you can answer that, but I suspect not – you’ve been working for just over 6 hours at this point and are more than ready to take a break.

      But the options don’t end there! Both options 1 & 2 are based on the QDF cores that the Tardis understands and can manufacture relatively quickly. There are still two A-C core options that have to at least be considered.

      Option 3: Much the same as Option 2, but the Tardis manufactures new A-C core Coils instead of QDF Core Coils, which should make them more compatible with the existing systems of the Bug. Time: 3:34.

      Option 4: Remove the damaged coils from the Bug, take them to the Tardis (2 trips), carefully remove the existing windings after the Tardis has run its scans, and manufacture new windings around the existing cores. Take the rebuilt coils back to the Bug (2 trips) and install them. The problem is the winding removal and replacement; this is time-consuming and requires your expert supervision. Time: 7 hrs 12 minutes. You can now rule this option out with a clear conscience.

    It’s when contemplating additional repairs in the Critical Systems phase that things get interesting – assuming you have the foresight to manufacture all the QDF cores at the same time.

    Options 1 and 2 permit the replacement of an additional four Coils in just 1:56. That’s short enough that it has to stay an open question, for now. Option 3 permits the same with a new A-C core coil, but will take almost an hour more, making it only marginally possible. Option 4 would take 6 hrs 36 minutes to do the same job, so it’s definitely off the table.

    After those 8 coils are replaced, the next worst is Coil 9. You can add it to the to-do list at a cost of just 38 minutes (options 1 & 2), 52 minutes (option 3), or a prohibitive 1 hr 48 mins (option 4).

    Which would leave only two coils to be replaced with any benefit, #5 and #12. If you had the replacements on standby, Options 1 and 2 would see them installed in another 58 minutes. Option 3 would require 1 hr 26 mins – and the cumulative total is starting to look pretty bleak for that option. Option 4, at 3 hrs 18 mins more, is right out of the question. Or you could go the whole hog and replace all 18, even though that would mean replacing functional units with less effective ones.

    Aside from recharging your O2 tanks (Option 2 is the only one that doesn’t allow this) and the period of rest that ONLY Option 1 permits, it’s really when thinking about these additional repairs that the differences between Options 2 and 3 become stark. Choosing Option 3 is choosing systems compatibility over more substantial repairs – in the time it takes to replace the most critical 8 coils under Option 3, you could replace 11 coils under Option 1 or 2 and still have about an hour in hand to work on other systems.

    But options 1-3 are only fully effective if you get as many replacement cores as you need all manufactured at the same time. So, while you don’t have to =commit= to the larger repair jobs right now, you DO have to commit to a core choice (QDF or A-C) and to keeping the doors open to further repairs to the suspension.

    (What’s your initial gut instinct?)

    Before you make that choice final, there are a few more things to know (that have been taken into account in those timings).

    Assuming you go with one of the QDF core options, the replacement units will have a 25% efficiency loss relative to the existing ones – it would be twice that, but Gallifrey has had a long time to work out the kinks in the technology, and you can use high-current-density Samarium-Copper Alloy in the wiring, making the replacements that little bit more efficient than whatever the designers used, and mitigating the negative impact.

    Your new units will produce about 9.7 ML, so any existing unit doing that or better should not be replaced unless you are prioritizing consistency over speed.
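    That cut-off falls straight out of the survey data at the top of this section. A quick filter (coils #1 and #2 aren’t listed in this excerpt, so they’re omitted):

```python
# Measured outputs (ML) from the suspension survey, coils #3-#18.
coil_output = {
    3: 12.0, 4: 10.6, 5: 9.6, 6: 6.0, 7: 0.0, 8: 1.2,
    9: 7.3, 10: 8.4, 11: 10.5, 12: 8.5, 13: 0.0, 14: 1.2,
    15: 5.2, 16: 8.5, 17: 11.1, 18: 12.2,
}

REPLACEMENT_ML = 9.7  # QDF-core replacement output after the 25% efficiency loss

# Any coil already at or above the replacement's output isn't worth swapping.
worth_replacing = sorted(c for c, ml in coil_output.items() if ml < REPLACEMENT_ML)
# -> [5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16]
```

    That list is exactly the four dead coils plus the seven others whose output now falls short of a replacement’s – eleven candidates in all.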

    There are four coils that are completely dead or virtually so: #7, #8, #13, and #14. They HAVE to be replaced, but that’s all that has to be done for =minimum= function.

    Adding coils #6 and #15 would significantly improve lateral stability and #10 and #16 would boost maneuverability and reliability. Finally, although you might not get time for it, replacing Coil #9 would further enhance both those factors.

    Quasima makes a firm declaration of his opinion: The optimum strategy is to manufacture ALL of those replacement coils (9 of them) at the same time using QDF cores so that you have at least the option of doing more if time later permits.

    You CAN operate the Bug without the additional repairs, but just barely.

    Example #2: An advanced fusion reactor

    For various reasons, human technology in Doctor Who is 20-30 years more advanced than in our world, but most of this doesn’t show, most of the time. This reactor was supposedly built in 2040 by Volvo for the European Space Agency, and almost certainly both the design and construction were subcontracted out, with Volvo supplying the parameters, tolerances, & specifications. It had to be compact, robust, and reliable.

    It exhibits world-building from three directions at once. First, it is a statement of the capabilities of human engineering at the time it was built; second, it shows the industrialized deployment of advanced technology, with the compactness and miniaturization that results; and third, it combines that with well-known, tried-and-proven technologies that connect back to those of our world in clever ways.

    The Fusion Reactor is a brilliant bit of design. It starts with the batteries, which ionize the deuterium fuel with an electrical current through one side of an ionization chamber. This causes massive expansion of the liquefied gas, propelling some of it to the next stage of the process.

    A second electrical current elevates the Hydrogen to plasma temperatures, stripping it of its ability to hold electrons for very long. Electromagnets pull the resulting witches’ brew apart.

      Electrons are pulled toward a metal plate to create a secondary electrical circuit.

      Nuclei that still have electrons are pulled toward an intermediate point where they enter a pipe that recirculates them back to the ionization end of the chamber using magnetic accelerators.

      But nuclei without electrons are pushed in the opposite direction by the magnets and enter the next element of the system.

    Carbon nanotubes carry them forward toward the reactor in twenty separate pathways through a particle accelerator that raises their speeds to a significant fraction of the speed of light and synchronizes them into pairs, accelerating or slowing them until each pair is timed to arrive at the reaction chamber together, one pair every 1/10th of a second.

    The clever part is pulling these nuclei into a curved path and holding them there, so that the accelerator consists of only 4 sets of electromagnetic rings for all twenty nanotubes. This makes the whole accelerator incredibly compact and efficient.

    When the two nuclei are released into the reaction chamber, their trajectories will be close to the desired intersection point but not perfect. To refine that trajectory, they enter a crystalline lattice that uses the Quantum-slit principle in reverse to focus the motion of the nucleus into the precise direction needed. It’s like a misstatement of the old tale: “You take the high road and I’ll take the low road and we’ll both end up in Tipperary”. No matter what the trajectory of the particle was, the Quantum Tunnel Engineering of the reaction chamber forces it onto an increasingly correct line.

    A magnetic containment field keeps the nuclei from escaping – always possible when dealing with Quantum-level effects.

    A pair of synchronized nuclei, still traveling at significant fractions of the speed of light, strike each other so hard that fusion takes place. Just a single reaction, but it will keep happening every tenth of a second.

    The two nuclei do not strike each other in exactly opposite directions; they intersect at an angle, so that the vector sum of their post-fusion movement thrusts the resulting Helium nucleus toward another, larger nanotube set with a lesser accelerator. These nanotubes coil around the reaction chamber, so the nuclei absorb the energy generated by their own creation as heat, which they then pass on to the Salt Chamber. This transfer is about 60% efficient, and is key to the operation of the reactor.

    But that other 40% is not all thrown away; a Magnetohydrodynamic System (essentially a big pair of electromagnets) harnesses the passage of the charged particles to deliver instantaneous surges of power when they are needed by this system. The fact that they are positively charged and not negatively charged is a minor detail in the engineering (any charged particle passing through a magnetic field induces a current – the faster it goes, the greater the current). This also slows the nuclei down considerably.

    The nuclei then continue through the nanotube tunnel that eventually delivers them to a stream of electrons =from the collection plate in the ionization chamber,= where they pick up a couple of those electrons from the electron beam to become helium =atoms= and are ejected through the exhaust as waste heat, cooled slightly by the cryonic coolant system.

    Meanwhile, the salt has boiled from the heat, and expansion forces it into a pump built around gravitic technology (no blades needed) to circulate the liquid salt through a generator that converts the kinetic energy into electrical power. That power then drives the wheels of the Bug. The liquid salt is thermally isolated from everything else (and from the outside world) by the cryonic coolant system; without it, the salt would slowly overheat until pressure forced the salt chamber to explode. That’s the weakest point in the system; there can’t possibly be any sort of overflow valve, because the white-hot liquid salt would consume most things in its path.

    The problem with – and virtue of – this arrangement is that the salt takes time to get hot enough to create the kinetic energy. That smooths out the peaks in power delivery (a pulse every 1/10th of a second) from the fusion reaction and delivers consistent baseline power, but can’t cope with a sudden increase in demand very quickly. That’s why the MHD system also harvests energy from the system – to cover any temporary shortfall.
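    The salt chamber is acting as a low-pass filter: a pulsed input becomes a smooth output, at the cost of a slow response to changes in demand. A toy illustration of that behavior (the numbers are illustrative only, not the Bug’s):

```python
def thermal_buffer(pulses: list[float], alpha: float = 0.05) -> list[float]:
    """Exponential smoothing: the buffer moves a fraction alpha toward the input each tick."""
    level, trace = 0.0, []
    for p in pulses:
        level += alpha * (p - level)
        trace.append(level)
    return trace

# One spike of energy per 10 ticks (a fusion pulse every 1/10th of a second):
pulses = [10.0 if i % 10 == 0 else 0.0 for i in range(200)]
trace = thermal_buffer(pulses)
# The spiky input (0 or 10) settles into a gentle ripple around the average
# power of 1.0 -- but a step change in demand would take many ticks to show
# up in the output, which is why the MHD harvest exists to cover the gap.
```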

    It’s taken you 5 minutes to work out the intricate ballet of particles that drive the fusion generator. Now that you know how it’s supposed to work, you can turn your attention to what went wrong – and what you might have to do to fix it.

    THE FAILURE:
    Analysis (Sonic Scan): Confirm the containment field integrity and check for radiation leaks (2 mins) by feeding power from the batteries into the electrical components.

    Log Check: Access the black box logs to confirm why the reactor shut down (5 mins). You will need to download these to the Sonic Screwdriver and then extract that information while in the Tardis. But you will have known that was coming, and so could have taken the information down on your last trip for analysis.

    The system itself is robust, but was shut down automatically when the power distribution system overloaded.

    The logs show that the system performed exactly as it was designed to do in such an emergency, 16 years ago. When the Power Regulation System (PRS) failed, the IFB detected a “Load Rejection” (nowhere for the power to go) and initiated an automatic Emergency Shutdown, automatically decreasing the acceleration of the particles in the system to a point below which fusion would occur.

    The reactor should still be intact and ready to go – but whatever caused the PRS to fail is still in effect, so the reactor will =not= fire up until that is fixed.

    If the shutdown wasn’t in time, then there were 1-9 uncontrolled Hydrogen-Hydrogen fusion reactions within the chamber – technically referred to as “stutter pulses” – which may have caused further damage, but you think the design is robust enough to cope with that. So those few minutes spent analyzing the way the system works lead to the conclusion that it will probably fire up again if the electrical system is repaired. There will have been minor thermal pitting on the containment wall, but this is purely cosmetic and will not affect the function of the reactor.

    Example #3: Main Headlights

    Oh heck, why not one more, since I have the source document open, and because I’m really pleased with the cleverness of the design.

    Lighting systems in the Bug can be subdivided into two classes – those for internal illumination, which you’ve already dealt with, and those to illuminate the outside world. Anything in shadow on the lunar surface is more or less pitch black and could conceal absolutely anything. And that includes the Bug – if you can see where you’re going. Exterior lighting can also be subdivided into running lights and twin light assemblies at the front.

    Running lights form two lines along the sides and rear of the Bug, each light having a rectangular shape 3mm x 2mm. The two rows are slightly offset from each other to form a ‘brick’ pattern. One set is designed to be on continuously while the Bug is in operation, but dimly, while the other set brightens and then goes out a hundred times a second. The net effect isn’t so strong as to be considered ‘strobing’, though it might still trigger anyone with epilepsy; instead, shadows of objects within range seem to ripple – the closer the object, the stronger the effect. You could also liken it to the shadows shifting to give a cross-referenced position of the object casting the shadow. It takes a few minutes to get used to, but it’s clearly designed to address a landscape in which normal distance referents are either absent or misleading because they are based on terrestrial experience.

    But those are just sideshow attractions. It’s easy to dismiss the headlights as ho-hum and humdrum systems – until you examine them really closely, using the Sonic Screwdriver as a kind of sonar that lets you look inside. The design anticipates maintenance being performed in an atmospheric environment, even though that was not an available feature of the human Lunar Outposts in 2048, when it was being designed. It uses a secondary cryonic flow to reduce any gas to a liquid state so that it can drain out harmlessly through minute “tear-ducts” at the bottom, matched by inverted ones at the top that permit any gas NOT liquefied to escape. So it is designed to operate in the hardest vacuum possible, at superconducting temperatures, even in the harsh light of the lunar day. That means that the operating conditions are as firmly stabilized as the engineers could, well, engineer.

    But that’s just the foundation for one of the most amazing pieces of design you’ve ever encountered. Some genius must have reached the absolute pinnacle of their creativity while creating this system. At its heart, it imagines the human eye as if the optic nerve were a light-generation system – and then improves it – and then nests three of them one inside the other – and then has them work in harmony.

    ‘Adaptive Optics’ is probably how the marketing gurus would have described it, but that undersells its brilliance, unlike most marketing hype.

    The three “optic nerves” are banks of low-powered monochromatic laser beams with different output frequencies. These aren’t milliwatt devices like those that might be used as pointers in a lecture hall – they are smaller and more reliable solid-state devices that probably operate closer to the microwatt range, and each is collimated to exactly match the aperture of the lens system. Each lens is optically tuned to respond only to the specific wavelength of one of the beams. The lenses are studded with microscopic light-absorbing-and-emitting diodes embedded in the lens material, which is a gel-like substance sandwiched between layers of self-sealing crystal membrane barely one molecule thick – thin enough, in fact, that they can flex to shape when compelled to by the gel within. Unlike the beams that charge them, the diodes release their stored light energy when the laser pulse ceases, emitting white light from the back of the lenses, which is then shaped and directed by the lens as the system requires.

    Running horizontally through the lenses are microscopic wires of a memory metal, which contract like muscles when electrical current flows through them. This controls the width of the resulting beam, from broad and diffuse to sharp and focused. Also embedded in the lens are ferromagnetic ‘fibers’ which respond to shaped electromagnetic fields from above and below to raise or lower the focal length of the beams – right in front of the Bug for the most diffuse, and farthest away for the most focused.

    That in turn means that the natural tendency of beams to ‘spread’ has been used to the advantage of the system.

    But the cleverest piece of the whole system is a connection to the sensors that report on the speed with which the front wheels are turning – the faster the bug is going, the more the beams stretch out in front of it. Go slow, and the light focuses on what’s immediately in front of you – go faster, and it stretches to illuminate what you have time to react to at those speeds. And it makes these adjustments with no need for computer controls or other mechanisms – they are engineered directly into the headlamps as a fully-autonomous system.
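    The control logic being described – light what you have time to react to – is just stopping-distance math. A hypothetical sketch of the relationship; the reaction time and deceleration figures are my assumptions, not from the source:

```python
def beam_reach_m(speed_ms: float, reaction_s: float = 1.5, decel_ms2: float = 1.0) -> float:
    """Distance that needs illuminating: reaction distance plus braking distance.

    Braking distance is v^2 / (2a); the low deceleration figure reflects
    limited traction on loose regolith.
    """
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

# Crawling vs cruising: the needed reach grows faster than linearly with speed.
crawl = beam_reach_m(2.0)    # 2 m/s  -> 5.0 m
cruise = beam_reach_m(10.0)  # 10 m/s -> 65.0 m
```

    Feeding the wheel-speed signal directly into something like this function – mechanically, via the memory-metal and ferromagnetic fibers – is what makes the lamp self-adjusting with no computer in the loop.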

    It’s a complex and innovative design that must have taken hundreds of iterations to get right. And then you realize that the most rearward lens’ output has to pass through the two lenses in front of it, and so has to take their shapes and settings into account, and you revise that estimate to thousands of iterations, possibly tens of thousands.

    But what strikes you most of all is the robustness of the design. Each of the fibers, be it memory metal or magnetic, acts completely independently; unlike a coil, which is one long strand of wire, these are all independent and parallel, so even if one suffers a breakdown or puncture, the rest will still perform. One diode emitter can fail, and the rest will still function – all with no moving parts, in a system that nevertheless changes shape. And the rigidity of those environmental settings means that the whole was designed to operate indefinitely under lunar conditions, or something close to them. There is no one point of failure; there are hundreds of them, in parallel with each other. This level of engineering would be cutting-edge even on Gallifrey!

    Back in the Bug’s cavern, you isolate an ‘on’ switch (to conserve battery power when the Bug doesn’t need to light up), and a ‘test mode’ that lies to the optical mechanism, telling it that the Bug is actually traveling at speed. It takes only a minute to run the system through its paces, finding – as you expected from the design – that there is some degradation, but that the system still functions adequately.

    But that raises a separate question – should you actually disable the running lights, despite the possibility that you might need them? They clearly silhouette the Bug, making its presence more obvious. Quasima suggests that the gains from doing so are not worth the effort – not with the main headlights shining. Those, you decide, should only be activated at need; the full earthshine is enough light out in the open on Mare Crisium. Only when approaching a ‘built-up’ area or an area with hazardous terrain should they be activated – the way they are now. You hastily flip the switch to deactivate the system.

    In total, you have spent all of 8 minutes on the lighting systems, mostly gushing over the headlamps, if you’re honest.

    Future-proofing Your Game Physics

    The more robust your connection with the real-world physics around us, the more vulnerable you make your game physics to new discoveries in that real world. You avoid some of this by not detailing formulas and more of it by using narrative-based explanations of the physics, but – to be honest – the only real solution that is worth the time needed to implement it is simply to specify a date and ‘fix’ the foundations of your game physics to the real world as of that date. Anything new that comes along has to fit within the spaces between any customized elements.

    Fantasy tends to get a lot more freedom and flexibility, but the historical foundations of your game physics tend to occupy a strange half-way house between what was believed at the time and the scientific views of the 1850s-1950s. The latter are simple enough that a high-school education is generally enough to understand them, and a primary-school education gets you most of the way, providing a window of accessibility for most players and GMs.

    If you can dispense with the details while locking in a solid foundation with the single line, ‘Based on the Physics of 1910 except where otherwise contradicted’, you are left free to focus on the exceptional elements.

    Nor do you necessarily have to restrict yourself to reality-based physics; there’s absolutely nothing to stop you from defining your foundations as ‘Based on the novelized Physics contained within [source X]’, and this shortcut can save you an awful lot of work – but there is a price to be paid: this opens the door to differences of interpretation of that material.

    These are best addressed by ignoring them until they arise unless you’re already aware (thanks to professor internet) of such differences of interpretation.

    Some Final Advice

    When writing a game physics, always fall in love with the word ‘because’ and all its synonyms. Not doing so creates a superficial appearance of depth with no real substance behind it.

    This is a problem that Star Trek: The Next Generation suffered from frequently; as a result, it achieved a consistency of technobabble, and nothing more. The capabilities of its technology became extremely inconsistent because of this.

    And it connected to other problems on the show – “Tech Today, Gone Tomorrow” being how I describe one of the more important ones: an almost-complete reset of capabilities from episode to episode. This week, the cast figured out how to do [x]; three episodes later, when confronted with a similar challenge, they had completely forgotten it. Every now and then, the scripts would drop in a line to cover this – “None of our existing solutions will work,” or “unlike anything we’ve ever encountered before” – but these wear thin after so many shows, too. The sheer number of encounters made the crew of the Enterprise some of the hottest problem-solvers around, with exemplary records for bringing home the bacon, week after week, and that only made the absence of institutional knowledge stand out all the more.

    Viewers – and players – remember. They have that institutional knowledge. If you don’t prepare for that, they will get on top of you, every time. The solution to this is to continuously update the game physics every time the PCs figure out how to use it to achieve something else.

    A Game Physics will actually come under more intense scrutiny than that shown to a television show. It needs to be more about what can be done with the physics, and why things work the way they do, than simply “The [x] permits [y]. It is aimed through the [z] panel and has the following side effects…”.

    But it can also be a GM’s best friend – if you let it.


The Making Of Complex Newness


A process for designing & constructing big or complex things, from spells to magic items, castles to space stations, industrial processes to political campaigns, new chemistries to better TVs and AC systems, using something every RPG already has.

It’s not often that I have a clear idea of what I want in a featured image before I go looking, but this time I did – and couldn’t find it, and didn’t have time to put it together for myself. So I threw this together instead, adding some color to the original image by Gordon Johnson, from Pixabay.

The real world caught up with me somewhat in the course of writing this article – I had hoped to publish it Monday (everything but the examples were finished Sunday night, so that was a realistic ambition) but things just didn’t work out that way. I also had no idea that it was going to turn into this 17K+ behemoth; the rules are described in their source location in just 1200 words. This was supposed to be a quick article to let me turn my attention to other things; instead, it has consumed my week.

Background: The evolution of a plotline

The Zenith-3 campaign is transitioning, in the course of the current adventure, from a phase in which a lot of roleplay is handwaved, to full operation following its long shutdown. There have been whole weeks of game time in which lots of little things happened, but no major decisions that impacted the team’s overall mission, so handwaving seemed like a functional approach.

For the one or two major decisions that did have to be made, I stepped outside the deliver-events-as-narrative approach and let the players roleplay, having covered all the options and their consequences in the adventure text.

Over The Christmas Break

When you have multiple events per day over multiple weeks, you need ideas – a lot of them. During the Christmas break, I jotted down 14 of them to slot in as they matched up to the narrative. The intention was for full gameplay to restart on Wednesday, Day 53, when the action would segue into… more of the same, but building in intensity as the main mission of the adventure sequence proceeded.

One of those ideas was an all-PCs disaster story. It was summarized in eight lines (four times as many as most of the ideas), and scheduled for Tuesday, Day 52.

Of course, to describe the events within one of these seed ideas, they had to be expanded, in the manner I’ve described in several past posts. How did the story start, Where, How did the participating PCs (usually not all of the team) get involved, What setback(s) were encountered, How were they overcome, and How did the story resolve?

At one sentence or a short paragraph each, that takes a 1-2 line encounter idea and turns it into a 6-12 sentence / paragraph short-short-story. Where it was natural to do so, I indulged in a little more world-building, exactly as I would have done if these had been played out fully. Special attention was paid to the characterization given each PC by the player and each established NPC. All this added a few more – sometimes many more – paragraphs to the story, but it was all kept as concise as possible. I wanted this to serve as a reminder of the tone and style of the campaign, making it easier for everyone to step back into character when the time came.

Because of its length as an idea, the ‘wildfire’ plot idea – and I’m being circumspect because it hasn’t started in play yet – was always likely to be 3-4 times the length of these smaller encounters. Part of the story involved a new creation, a species the PCs hadn’t encountered before. I had vague ideas regarding the morphology, abilities, and persona of this encounter, nothing more.

January 6th

So that was the state of development when I got back to work on the campaign on January 6th. Step one, initial ideas, done. Step four, making the situation and its differences from normality seem both more visceral and more credible, was accomplished by having the team rescue some trapped firefighters.

Step three, better delineation of the threat, followed. That led straight into Step five, a complete description of the new life-form. As it developed, it became clear that if I delivered it all as a single text block, it would be overwhelming. Too big an infodump. So the overall shape of the story had to change; half of the PCs would not participate in the rescue, while the other half played detective and churned out parts of the infodump.

They needed somewhere to be while that was happening, so that led back into a complete revision of Step two, the ‘how the PCs get involved’ sequence. And then another one. And critical decisions started mounting up. Some could be resolved in narrative, because there was only one logical solution to the problems at hand. Verisimilitude demanded the introduction of additional NPCs, and interactions with those NPCs.

It became clear, after about 9 days of development, that this needed to be more of a full adventure with roleplay of at least the critical moments, and that led to further changes, both in the work done already, and in the work still being planned.

The important bit – the end result

As it currently stands, the eight-line idea has become 98,650 words plus 13,475 words in notes still to be integrated into the text. This is spread amongst 282 scenes, most of which will never see play – there are 28 different pathways through the adventure, based on PC decisions, skill rolls, etc. Some of those decisions are absolutely critical and could have impacts long outside this single adventure, and the adventure itself is going to cast a longer shadow, too.

The adventure will push the PCs into areas they haven’t had to explore for a while, if at all. And it’s had to push the rules system into areas it has never covered before, too. And one of those is the subject of today’s article.

New Rules for the construction of complexity

In one (or more) turns of events, a PC has to create a new chemical. I’m not going to get into purposes or reasons. So I wrote some rules for that, and carefully built in limitations and restrictions to keep this from overpowering the campaign while maintaining verisimilitude.

And, at another point in the narrative, another character has to create a complicated device with a simple function. And I found that the same rules worked for that, too.

And then I realized that there was a way of simplifying that task to make it more manageable, by having another PC use a power in way they had never done before. And the process of doing so could also be described and managed with the same rules.

These rules are nothing like anything I’ve seen in any other RPG. That always meant that they were slated to eventually become an article here at CM. But because I knew that at this point in time, I couldn’t quote the actual usage in the adventure to which those rules have been put, I either had to delay the article until that wasn’t a problem, or offer some examples from outside the adventure.

So I started thinking of some, and ended up with a list sufficiently long that I would never cover them all in just one article. With some additional ideas on the side to throw out there for people to use as they see fit.

I’ll develop one or two of these examples in full, touch on some highlights exemplified by the rest, and call it a day.

The mechanics

The mechanics of this process are really simple, but at the same time, quite elegant and capable of deep levels of richness and complexity – if the game system permits it.

They are capable of simulating the design of a new product, a new technology, a new magic item, a new spell, a new chemical, a website, a computer program, you name it – anything that requires some form of design.

Conceptual & Functional Elements

To make the system work, you need to have a clear and detailed design objective, because that’s what the process simulates making.

The functional requirements and possibly their underlying concepts need to be listed. That’s step one of the process. It doesn’t happen instantaneously; the GM should decide how long it takes, based on the character’s expertise in the most relevant skill.

Skill Foundation

But what is the most relevant skill? That’s for the GM to specify, based on the desired end product and the level of specificity of the game system.

It’s even possible that there may be a number of different skills required.

Interval

Incorporating each element of what the end product can / will achieve takes time. The time required is standardized and is referred to as the design interval. This is also specified by the GM based on the desired end result and the expertise of the character. It could be seconds, minutes, hours, days, weeks, or years.

    A preliminary discussion of interval selection

    Interval multiplied by the total number of specifications gives the minimum time to complete a project. The actual time spent incorporating any given specification is quite a bit more variable, but under most circumstances, it will be either the minimum or something more. When thinking about intervals, I suggest using the following scales:

    Tactical: seconds – jury-rigging a door, a simple emergency repair, etc.

    Positioning: minutes – solving an urgent problem of some sort when time is short.

    Professional: hours/days – writing code, crafting a new Grimoire spell

    Strategic: days/months/years – developing a plan to achieve a shift in a strategic balance (or imbalance), a campaign to change public perceptions, a new National Constitution, a plan to end national or city-scale dependence on a particular industry.

    Industrial: minutes/hours/days/weeks – Developing a prototype or one-off gadget. The wide range is sub-selected by the purpose of the project.

    Industrial II: weeks/months – City planning, a new product or system design (eg a new Engine), designing a building or a space station.

    Major: months/years/decades – We Choose To Go To The Moon (even if we don’t currently know how), Terraforming, Inventing something current game physics doesn’t permit eg FTL, Time Travel. The latter devolve into Industrial II if there is a working model whose principles are understood, and may devolve further if the technology is commonplace.
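For GMs who like a quick reference, the scales above and the minimum-time rule already stated (interval multiplied by the total number of specifications) can be jotted down in a few lines. This is purely an illustrative sketch of my own – the names transcribe the list, and the helper function is hypothetical, not part of any published system:

```python
# Illustrative transcription of the interval scales listed above;
# the dict and helper are a sketch, not part of any game system.
INTERVAL_SCALES = {
    "Tactical":      "seconds",
    "Positioning":   "minutes",
    "Professional":  "hours/days",
    "Strategic":     "days/months/years",
    "Industrial":    "minutes/hours/days/weeks",
    "Industrial II": "weeks/months",
    "Major":         "months/years/decades",
}

def minimum_project_time(interval, num_specifications):
    """Interval multiplied by the total number of Specifications
    gives the minimum time to complete a project; the actual time
    spent will be this or more."""
    return interval * num_specifications
```

So a Professional-scale project with a 4-hour interval and 10 Specifications has a floor of 40 hours, before any setbacks or extra time.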

Specifications

This is the GM’s opportunity to add to the list of functional elements. Some may be implied foundational steps ruled necessary in order to achieve a functional element already listed, some may be necessary to convert a prototype into a manufacturing process, and some may simply be parameters that the player hasn’t thought to list. There will almost always be something that the character hasn’t thought to include.

One question that the GM will eventually face is the difference between global effect and specificity. In most cases, specificity is easier to achieve than global effectiveness. It’s a lot easier to find a cure or treatment for one specific cancer than it is to find a universal treatment, simply because “cancer” is a generic label applied to many different diseases with some common elements.

But sometimes, where you don’t want to affect everything, specificity can be just as hard to achieve. If you are designing a new chemical, for example, there may be things that you specifically DON’T want it to react to, and engineering that can be a lot harder than letting things happen – because, if you don’t specify it as a requirement, the GM is free to have the combination have any effect that HE wants.

The scope of the purpose is all-important in this context. A one-off solution to a specific problem is always going to be easier than a global cure-all or even a general solution to a specific type of problem, simply because those reduce the GM’s scope for storytelling. Yes, this is blatant metagaming – so what? It’s in the best interests of the long-term campaign, so that’s fine in my book. It would be worse metagaming if the GM made a blanket ruling of ‘you can’t do that’.

There is also ‘the rule of cool’ to consider. This is a character doing something extraordinary, bringing abilities to bear on the problem that rarely get shown off. Such moments are character-building.

And, finally, there’s the spotlight issue – does this solve the problem all on its own, or does it require deployment in a specific way; can it in fact require the efforts of the whole team of PCs just to get the solution into the right place at the right time? The first is only satisfactory if this is a last-ditch Hail Mary after the other characters have had their shot at glory and failed; the second makes this a group victory, which inherently creates more opportunities for the action to be tense and dramatic. Both can be good, but the first can also be bad, making the other PCs fifth wheels. This consideration can go in all sorts of directions depending on the circumstances and the purpose of the creation effort. Imagine a situation in which the other PCs have to face certain defeat and possibly death, just to buy the one character with a shot at winning the time that they need?

If it makes for a good story, that’s a big tick. If it will make similar stories harder to tell in the future, that’s a down-check. Neither is the totality of consideration, but either can tip the balance one way or another.

If the GM thinks there are too many Specifications, there are ways that he can conflate two or more into a single entry on the list, making them two or more aspects of a single requirement – for example, on a space station, there might be ‘functions in an Earth orbital environment’, which covers everything from radiation-proofing to vacuum-seals. Applying this to every room / space so that they are independently safe requires a second specification, though. The GM should take the Specification, as it finally stands, as the basis for determining the difficulty.

Rolls & Difficulty

Each Specification needs a successful die roll to integrate it into the design. The GM sets the difficulty of each roll, scaling it to the ambition of the Specification and the prevailing conditions. Each roll consumes one interval of time; that requirement has to be met, with the roll coming at the end.

Rolls that fail by a small amount – and the GM can determine how much this is, on a case-by-case, roll-by-roll basis – may, at his discretion, achieve a partial success. If there is no numeric variable involved, the roll is generally all-or-nothing.

If checkpoints are employed (see below), only the final roll ultimately matters; what the intervening checkpoint results reveal is the path taken to get there, one that can be full of ups and downs, but none of them critical to the final result (but also see the section dealing with critical successes and failures).

Barriers / Problems

Rolls can either succeed or fail. And there can be a gray middle ground offered by the GM in terms of partially meeting the requirements. This then bounces the question back to the player of the character – accept the partial solution, or encounter a Barrier / Setback.

A setback is a situation in which the GM feels that the desired functional Specification can be met, it’s just going to take a little more time. There’s a domino effect involved – the character has to go back one, two, or three steps and implement THAT specification in a different way. They can then work their way forwards, with a bonus to the success of the die rolls, until they achieve the required specification.

A barrier is more difficult to overcome; it adds an additional Specification to the list, inserting it just prior to the point of failure.

99%+ of medicines created in the lab never see human trials. Some of them simply don’t work; there’s an error in the theory on which they are based. Some of them have severe side effects, whether or not they work. And some of them are simply too toxic – they might cure whatever condition they are aimed at, but only at the expense of killing the patient. Those are all Barriers to success, and potentially insurmountable ones.

It’s the GM’s decision what sort of barrier the development process encounters, and it’s a decision based on whether or not they actually want the development process to succeed or fail – which hearkens back to the points made in the previous section.

If you hit a barrier, the time spent has been used discovering that there is a barrier. It does not magically vanish from the clock. The character has pursued a theory to the point of proving that this approach doesn’t work – an essential, if frustrating, part of the real world.

Extra Time

Another option open to the GM in such cases is to apply an ‘extra time’ modifier. This enables them to say, “You succeed but it takes N times as long as you thought it would / should.” You achieve this by looking at the margin of failure and calculating how much extra time is needed to compensate for it. This becomes a little trickier in that chances of success have to be rounded to whole values. The Hero system has the rule that any rounding happens in the character’s favor, and that seems fair enough to me.

It also opens the door for a character to state, “I’m getting close to completion, and have a little time up my sleeve, so I’m going to spend some extra time on each step from here onwards, dotting i’s and crossing t’s. That should improve my chances on each roll.” This is a perfectly legitimate application of the system, but it precludes the GM ‘helping’ the character with extra time – that ‘help’ has already been taken into account.

Extra Time can therefore be used in one of two ways, mutually exclusive. It can be used presumptively by the character to improve their chances of success on what they perceive to be a critical stage, i.e. one in which a partial success isn’t good enough, or it can be used by the GM at his discretion to turn a failure into a partial or complete success. The player’s choice to use extra time actively precludes the GM’s ability to help with extra time. I know I’ve pointed that out before, but reviewers of the draft rule still missed it.

If a player allocates extra time and still fails the roll, it must result in a Setback or a Barrier.

The reason for the exclusion is geometric expansion – two sources of extra time multiply, and the total can escalate out of control too quickly for effective game management. If the player specifies 4x normal time be used proactively, and the GM found that more was needed, you could end up with 4 x 8 = 32 times the normal interval. Neither 4x nor 8x are unreasonable, but the compound of the two takes the system right to the edge. And if both sides used the table’s maximum, 32 x 32 = 1024 – so if the interval was originally one minute, the player will have spent more than 17 hours getting there.

With intervals of one minute, you can reasonably expect the task to be complete in minutes – anything more than, say, 90-120 minutes breaks the limit of being ‘reasonable.’ If you had a task that you thought was going to be 1-4 minutes in time, and decided to take the full 4 minutes to do it well, would you be happy pursuing that path for more than 2 hours, or would you stop and look for a faster way? Even if it meant starting over from an earlier point in the process?

I know what my answer would be.

The player can also set a hard limit on the amount of extra time the GM can force them to use before they fall back to an earlier step and try a different approach. Neither the player nor the GM have to actually describe that ‘different approach’ – the rules assume that one exists. They still use the time that they spent chasing down what they now perceive to have been a blind alley.

xN time = 5% x log(N)/log(2) is the usual pattern, but the “5%” then has to be modified to fit the mechanics of the roll. For a 3d6-based system, 18 (maximum)-3 (minimum) = 15 (range), and 5% x 15 (range) = 0.75. For a d20, the base number would be 5% of 19, or 0.95. All rounding should be in the character’s favor, i.e. round up.

     Time x 1 1/3 = +0.3 (3d6) = +0.4 (d20) = +2 (%)
     Time x 1.5 = +0.4 (3d6) = +0.6 (d20) = +2.9 (%)
     Time x 2 = +0.75 (3d6) = +1 (d20) = +5 (%)
     Time x 3 = +1.2 (3d6) = +1.5 (d20) = +8 (%)
     Time x 4 = +1.5 (3d6) = +1.9 (d20) = +10 (%)
     Time x 5 = +1.8 (3d6) = +2.2 (d20) = +11.6 (%)
     Time x 6 = +2 (3d6) = +2.5 (d20) = +13 (%)
     Time x 8 = +2.25 (3d6) = +2.85 (d20) = +15 (%)
     Time x 10 = +2.5 (3d6) = +3.2 (d20) = +16.6 (%)
     Time x 12 = +2.7 (3d6) = +3.4 (d20) = +18 (%)
     Time x 16 = +3 (3d6) = +3.8 (d20) = +20 (%)
     Time x 20 = +3.25 (3d6) = +4.1 (d20) = +21.6 (%)
     Time x 24 = +3.4 (3d6) = +4.4 (d20) = +23 (%)
     Time x 30 = +3.7 (3d6) = +4.66 (d20) = +24.5 (%)
     Time x 32 = +3.75 (3d6) = +4.75 (d20) = +25 (%)
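The table follows a single pattern, so a GM can compute in-between values rather than interpolate by eye. Here is a minimal Python sketch of my own, assuming the d% base of 5 percentage points per doubling of time, rescaled for 3d6 (range 15) and d20 (range 19) as described in the text; the final round-in-the-character’s-favor step is left to the reader:

```python
import math

def extra_time_bonus(n, die="percent"):
    """Bonus for spending n times the normal interval.
    Base pattern: 5% per doubling of time, i.e. 5 x log2(n)
    percentage points on d%. For 3d6 and d20, the 5% is rescaled
    to 5% of that system's roll range (15 for 3d6, 19 for d20);
    round any fraction in the character's favor when applying it."""
    base = 5.0 * math.log2(n)          # percentage points on d%
    if die == "percent":
        return base
    scale = {"3d6": 15, "d20": 19}[die]
    return base * scale / 100.0
```

For example, extra_time_bonus(8) gives 15.0, matching the Time x 8 row, and extra_time_bonus(32, "3d6") gives 3.75.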

I would not extend the table further than that without explicit permission from the player – in fact, I would probably get such permission far sooner than the table implies. “You’ve spent 10x as long on this as you thought it would take, and you’re not sure you’re anywhere near a solution. You can either call the attempt a failure and deal with the consequences, or you can keep going in hopes of finding a solution.” And then revisit that question at 20x and 30x. Or do it by eights, or fours. (It can be worthwhile, once the base system is understood by the player, to get an indication from them of what time-checks they want you to use – remembering that nominating a time-check does not intentionally spend extra time on this stage of the design process; it caps the amount of extra time the GM can use before consulting the player.)

Extra time applies only to the current Specification; the interval resets for the next one.

A clarifying note

3d6 have a nonlinear probability curve, but the system deliberately ignores this. That has consequences, which in turn have consequences.

The ‘round in the character’s favor’ rule covers a lot of the resulting issues; it causes a flattening of the non-linearity of the probability curve, undervaluing the most probable results and overvaluing the extremes, but not by so much that it can’t be tolerated.

The net effect of this is to make the roll a little more ‘knife edge’, because success by any amount is a success (and so the flattening of the best results can be ignored), while failure is made a little more probable. But this is mitigated by the round-in-favor rule, and the availability of partial successes further softens the impact, producing a system that is intuitive at the game table rather than robustly perfect in its statistical modeling.

A second clarifying note

Rounding for a die roll always results in integer values – you can’t roll “3.6” on 3d6, it always collapses into a 3 or a 4 – and the round-in-favor rule makes this explicitly a 4.

Equally, a 3.2 is actually a 4.

Rounding errors are a fact of life. Compression of 5% of a 1-100 range to a 3-15 range (3d6) or 1-20 range (d20) is always going to introduce them anyway. In fact, they are so ubiquitous that their absence is the exception, not the expectation. Don’t stress about it; there are far greater sources of error that can and will drown this out, even in the course of a single project.

A third clarifying note

To be statistically robust, the table only needed entries for 2^N x Time – 2, 4, 8, 16, 32. I’ve included selected other values because they are likely to occur in the real world (x3, multiples of x5), and because they help players and GMs visualize the curve, i.e. the relationships between values.

There are enough results on a d% to make the curve appear smooth despite the rounding. That’s not the case with other dice structures.

There’s no real need for a Time x 14 entry, for example – so none was included.

    Metaspecifications

    I can only think of one of these, but I’m making it a general category in case GMs find others.

    “I want / need to complete this project in half the usual time”.

    Okay, so halve the interval, and then think of the downsides.

    If extra time gives a bonus to success, less time should give a penalty to all rolls.

    Multiply 32 x (1 minus the fraction of time) and look up / calculate the result of the resulting ‘extra time’. Double the resulting modifier and make it bad instead of good.

    The following are intended to show how it’s done (and provide a bit of a cheat sheet), not to be a comprehensive table of results.

         9% Time Reduction = Time / 1.1 = 32 x (1 – 1 / 1.1) = 3 = -7.9%
         17% Time Reduction = Time / 1.2 = 32 x (1 – 1 / 1.2) = 5 = -11.6%
         23% Time Reduction = Time / 1.3 = 32 x (1 – 1 / 1.3) = 7 = -14%
         29% Time Reduction = Time / 1.4 = 32 x (1 – 1 / 1.4) = 9 = -15.85%
         33% Time Reduction = Time / 1.5 = 32 x (1 – 1 / 1.5) = 10 = -16.6%
         37.5% Time Reduction = Time / 1.6 = 32 x (1 – 1 / 1.6) = 12 = -17.9%
         42% Time Reduction = Time / 1.7 = 32 x (1 – 1 / 1.7) = 13 = -18.5%
         44% Time Reduction = Time / 1.8 = 32 x (1 – 1 / 1.8) = 14 = -19%
         48% Time Reduction = Time / 1.9 = 32 x (1 – 1 / 1.9) = 15 = -19.5%

         5% Time Reduction = 95% Time Taken = 32 x (1 – 0.95) = 1.6 = -3.4%
         25% Time Reduction = 75% Time Taken = 32 x (1 – 0.75) = 8 = -15%
         30% Time Reduction = 70% Time Taken = 32 x (1 – 0.7) = 9.6 = -16.3%

         Time / 2 = 32 x (1 – 1/2) = 16 = -20%
         Time / 3 = 32 x (1 – 1/3) = 21 1/3 = -22%
         Time / 4 = 32 x (1 – 1/4) = 24 = -23%
         Time / 5 = 32 x (1 – 1/5) = 25.6 = -23.4%
         Time / 6 = 32 x (1 – 1/6) = 26 2/3 = -23.6%
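The reductions above all come from that one formula, so here is a Python sketch of my own for the lookup step (i.e. before the doubling the instructions apply). Note, as an assumption on my part, that some rows of the cheat sheet round the intermediate multiple to a whole number first, which shifts the result slightly:

```python
import math

def reduction_penalty(time_fraction):
    """Magnitude of the d% penalty for finishing in time_fraction
    of the normal time: convert the saving to an equivalent
    'extra time' multiple via 32 x (1 - time_fraction), then apply
    the same 5 x log2(n) pattern used for extra-time bonuses.
    (Sketch only; double and negate per the text when applying.)"""
    n = 32 * (1 - time_fraction)
    return 5.0 * math.log2(n)
```

For example, reduction_penalty(0.5) gives 20.0, matching the Time / 2 row, and reduction_penalty(0.75) gives 15.0, matching the 25% Time Reduction row.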

    Add an extra Specification to the start – “Accelerated Development” – and another to the end, “Minimal Testing”.

    The GM gets to add a free “unwanted side effect”.

    If the purpose could be described as “Industrial II” or higher, add another “Early Release”, and the GM can add a second free “unwanted side effect” or an “application restriction” – which reduces the effectiveness of the solution but usually doesn’t make it completely useless for the intended purpose. “Takes twice as long to have an effect” is about the softest choice.

    If the character fails ANY of the extra specifications, there IS no way to complete the project in the time desired.

    The character has two or three options:

    1. The GM can rule that a specification carried over from a base model is affected as though it were a Specification that had failed. This option is ONLY available in that specific circumstance.

    2. Revert immediately to the standard timeline, crossing out the accelerated development Specifications (but not the consequences of their having been there).

    3. Keep the accelerated development timeline but further compromise the effectiveness of the project with partial solutions that are twice as bad as those normally encountered.

    Regardless, the time spent trying to find a method of achieving the accelerated development is gone.

    Slices Of Time

    If you study the numbers in the table closely, you will see that the arrangement is non-linear (as you would expect with logarithms involved). Four attempts at Time x 8 are 4 attempts at +15. You can’t conflate those into +60, but the combination is obviously going to be somewhere between that value and +15 – you can derive an algebraic expression for this specific series of numbers, but it’s not worth the effort. Spending all that time on one attempt gives a total bonus of +25%, and it’s intuitively likely that this is below the compounded value. That means that it appears worthwhile for the researcher to divide their time into smaller slices and multiple attempts, trying multiple different paths to success. What’s more, it is also logical to make all those attempts concurrently, achieving a greater likelihood of success in a fraction of the time that a single long attempt would require.

    I’ve described this logic in detail because it is in this detail that you discover the flaw in this arrangement – the system assumes that you are doing this already. This ‘suggested’ layering of processes is how you get to the +15% in the first place. So this ‘logic’ is counting the benefits twice. And that’s how you get to a modifier somewhere in the vicinity of +50% when the system dictates +25%. It’s a subtle but definite attempt to cheat the system. Those 4 attempts at +15% simply mark 1/4, 1/2, 3/4, and completion ‘checkpoints’ on the path to a net +25% in the final roll.

    Checkpoints can be useful with long intervals, describing the process and its progress toward ultimate success or failure in the integration of this specific requirement. The character can make 3 rolls at +15% or its system equivalent, which the GM interprets as measuring progress – a success doesn’t get the character all the way to the next step, and a failure doesn’t obstruct forward progress. They then make the fourth roll at the indicated +25% for ultimate success or failure in this step.

    Greater verisimilitude comes from the use of cumulative time for these rolls – in the case of this example, 8x, 16x, 24x, and finally 32x. This ‘weights’ the intermediate results to reflect the character discarding paths that seem to be going nowhere and homing in on their ultimate solution and its success or failure.

    Critical Successes do nothing but affect character confidence. If a Critical Failure occurs on a checkpoint roll, the GM should invent a number off the top of their head for progress, which the character will know not to be correct – but they won’t know how badly incorrect it is. The resulting confusion, uncertainty, and doubt are the consequence of the failure. These interpretations do NOT apply to the final roll needed to complete the Specification’s implementation; they are only about indicating the progress-to-date. But they do serve one additional function: they remind the players that the project is ongoing. The GM should look for opportunities to insert ‘progress text’ and event descriptions, even little roleplay moments, into the ongoing narrative on a regular basis.
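The checkpoint procedure is easy to mis-apply, so here is a minimal sketch of it, assuming a hypothetical roll-under d% system with a base success target; the three intermediate rolls only color the narration, and only the final roll decides the Specification:

```python
import random

def checkpointed_specification(target, rng=random):
    """Sketch of the checkpoint mechanic on a hypothetical
    roll-under d% system. Three checkpoint rolls at +15 describe
    progress only (a failure there doesn't obstruct anything);
    the fourth roll, at the full +25 extra-time bonus, decides
    whether the Specification is ultimately met."""
    narration = [rng.randint(1, 100) <= target + 15 for _ in range(3)]
    success = rng.randint(1, 100) <= target + 25
    return narration, success
```

The GM would read the three narration results as ups and downs along the way, then report the final result as the actual outcome of the step.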

Notes regarding Burnout and Fatigue

While ‘burnout’ and ‘compounded fatigue’ are real world phenomena, they are deliberately ignored by the system in favor of game-play.

The potential for large-scale intervals, for designing and constructing a space station for example, implies that characters don’t have to focus continuously on the task, but can interrupt it as necessary; time not spent on the project doesn’t count toward it (though a generous GM might allow that the problems are still ticking over in the back of the character’s mind and permit 10%, 5%, or 1% of such time to contribute to the total).

I thought about excluding sleep time from that, but there are many documented cases of problems being solved ‘on awakening’ which suggests that the subconscious keeps working on problems even while sleeping. That in turn suggests that this would only complicate things as an exclusion or as a separate ‘passive time’ accumulation; it’s a detail that is either unnecessary (interval less than hours) or undesirably complicated (intervals more than minutes).

So they have been left out for cleaner game-play.

Endurance – if you have it

In any system where Endurance gets tracked, skill use – concentration on a task – should cost Endurance. The amount should be determined by the recovery frequency and amount.

In the Hero system, Endurance costs are generally determined by dividing the active cost by 5, then applying any modifiers to that. I originally split modifiers into two groups – one that affected END cost and one that didn’t – because it didn’t make sense to me that “reduced END cost” should increase the END cost of using a power, while “increased END cost” decreased the END cost by reducing the Active cost.

In the Hero system, characters will recover 2-4 END or more twice per 12-second turn. That’s 4-8 (plus) in 12 seconds, with the character’s Speed determining how many opportunities they get to act, i.e. to spend that Endurance. It’s geared to relatively low levels of powers – 4-dice attacks costing 4 END each. But it can be made to scale to more epic power levels, and that’s what my original home rules were intended to do. I wanted Superman types who were epic but ran out of steam and had to pause to rest for a while, creating windows for other characters to have the spotlight, and lower power-level characters with low END costs who could act more frequently and more continuously, and all points in between.

If you consider the Purchase price plus improvement cost of skills in the Hero System to be the active cost, then this same division by 5 works perfectly. Skills at very high level (15+) cost 2-3 END, skills at competent levels (10-15) cost 1-2 END, skills at a relatively low level (5-10) cost 1-2 END, and skills at the amateur level (1-5) cost 0-1 points. The bigger the potential game impact, the bigger the END cost.

The current version of the homebrew game system rules is d% based. Many of the stats can range higher – a lot higher – END reserves being one of them. It’s not unusual to have 50-60 END, recover 5-10 per turn, and act just once in a turn. But END costs are also higher, and you can do more in a turn. Skills are purchased with Skill Points, which in turn are paid for with character points, so characters with high capacities for learning skills can get more skill points per character point. Skill points spent on a skill divided by 20 almost works, but penalizes skill-heavy characters; instead, there’s a flat 1, 2, 3, or 4 cost based on skill level.

In implementing the system described in this post, with multiple skill rolls required over a time frame specified by the GM, I would specify a 2-END cost, that cannot be recovered until the end of the process. If there are 8-10 Specifications (not uncommon), that’s 16-20 END, lowering the character’s capacity to act in the meantime without making them completely helpless. That’s within the range of normal people in the system. Furthermore, taking a substantial break (one interval) would deduct END Recovery from that accumulated total.

For the standard Hero System, I would do the same, but price the END cost per Skill Roll at 1.

D&D and Pathfinder don’t track END, but they do have the concept of “Shock Points” – the character gets as many of those as they do hit points, and they go down with Damage just like regular hit points. I would contend that these represent mental fatigue amongst other things, and some attacks do non-lethal Shock damage instead of regular damage. They are normally fully recovered at the end of a character’s turn, or once a minute, or something like that, and – like hit points – the pool can grow quite large at higher levels.

For low-level campaigns, I would use a similar approach to the base Hero system – non-recoverable shock points, 1 per skill roll. For mid-level campaigns, I would make it 2 points, and for high-level campaigns, 3. Note that this doesn’t describe the current levels of the characters, but where those characters are expected to be at the end of the campaign / for the majority of the campaign. This limits the capacity of low-level characters in a higher-level campaign so that characters can feel like they are growing in competence as they progress.

Of course, if a character runs out of END in the process, they suffer from Burnout and need to take a significant break of 1 interval. If the intervals are seconds or minutes, that’s not all that significant; if it’s hours or days, that’s inconvenient; if it’s weeks, months, or years, that’s a lot more painful. The first interval restores 2 END; if the break is extended another interval, this rises to 4, then 8, 16, 32, and so on, up to the original maximum.
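That doubling sequence is easy to mechanize. A sketch, assuming (my reading) that the listed figures are the running total restored, which doubles with each extra interval of rest:

```python
def burnout_recovery(intervals, max_end):
    """Total END restored after resting for `intervals` intervals
    following Burnout: 2 after the first, doubling each extra interval
    (4, 8, 16, ...), capped at the character's original maximum."""
    if intervals <= 0:
        return 0
    return min(2 ** intervals, max_end)
```

Six intervals restore up to 64 END, so even a large pool refills within a handful of intervals of genuine rest.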

If the system doesn’t have any mechanism for tracking Endurance, just ignore this whole section.

Critical Successes and Failures

These don’t apply to every game system but are generally profound when present.

A critical success halves the interval for that Specification and may optionally reduce the interval for subsequent Specifications, reflecting the 'stroke of genius' or 'flash of inspiration' inherent in the concept of a critical success.

Optional: A breakthrough can carry extra momentum into the next stage, worth a +5% or +10% bonus (+1 or +2 in other game systems).

Optional: The GM can rule that the breakthrough makes a future Specification redundant, or simplified to the point of being incorporated into this Specification, removing that future item from the list completely.

Optional: if using the main proposal of reducing subsequent intervals, the GM has to decide by how much. I recommend that values of 10%, 20%, and 25% be considered; 15% is a good compromise.

A critical failure should be a failure like any other, but worse, or may be interpreted by the GM as a Barrier to a later step, only discovered when that stage of the process is achieved.

That turns the failure into a time bomb with a hidden clock – the player will know that it’s ticking but not when it will blow up under their feet. For the moment, the character thinks they have succeeded even if the player knows better, and this should be made clear to the player. However, the solution found to that Specification contains a hidden defect that only time will reveal.

The GM should devote some thought to what this hidden defect might be – it shouldn’t be anything that would be obvious in an earlier step; it should be something subtle but catastrophic in terms of the intended purpose. Designing an air-breathing jet engine, only to discover that it needs to be kept underwater in operation because parts would otherwise overheat, makes the design worthless. The solution is to replace the affected parts with something more heat-tolerant, or recalculate and remodel the engine to divert the excess heat away from the affected parts. The choice of solution can have a profound impact on the look-and-feel – an experimental jet engine with huge radiator fins is SO steampunk or pulp!

Success at last!

Eventually, the last roll required will be achieved with the last Specification successfully incorporated. The results are now ready for use as specified in the purpose. But here’s where the fun starts – anything NOT specified in the design is free for interpretation by the GM. Side effects are always possible. The goal should never be to make the results useless, but to make the experience interesting, so save these ideas for occasions when the application of the results themselves is less interesting than it should be. Remember both the “Rule of Cool” and that no product is EVER perfect.

Optional Rule (All Systems)

Sooner or later, a setback will require the character to repeat part of a series of rolls, looking for an alternative route to success (the most obvious approach having failed). That potential is baked into the system, deliberately.

The GM can choose to provide a +5% or +1 modifier to subsequent iterations of a repeated step, reflecting the thought that the character has already put into the integration of that Specification.

This does two things, one more important than the other (okay, maybe three). First, it slightly changes the balance between setbacks and partial solutions in favor of the former, because it weakens the penalty involved; but to my mind it doesn’t do so by enough to warrant concern.

Second, it adds a hard limit to the number of times a single step can recur before it becomes an automatic success (critical successes and failures notwithstanding). Automatic success rolls still need to actually be rolled to test for critical success or failure, but such rolls take no time. This puts a cap on the process. That’s the most important consequence of this optional rule, I think.
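That cap is easy to compute. A sketch (`retries_to_auto` is my own name; the 17/- threshold for 3d6 is taken from the discussion of criticals elsewhere in this section):

```python
import math

def retries_to_auto(effective_roll, auto_threshold=17, bonus=1):
    """Number of repeats of a failed step before the cumulative
    per-retry bonus makes success effectively automatic. On 3d6,
    17/- is about as close to automatic as the dice allow;
    criticals, where used, still override the result."""
    if effective_roll >= auto_threshold:
        return 0
    return math.ceil((auto_threshold - effective_roll) / bonus)
```

A step sitting at 9/- takes 8 repeats to reach 17/-; for the d% variant, pass `bonus=5` and a percentage threshold instead.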

And third, it changes, slightly, the way a player looks at the system in their favor. That can be an important element of the decision-making process when it comes to using this system at all. Given that the system itself enriches the tactical options open to the player and enriches the storytelling at the game table, its existence within the operating rules of the campaign is a benefit to the GM; but a benefit that remains only potential until and unless the rules are actually used. If they then become tedious, they are unlikely to be used again; if they add to the tension and drama, and hence the entertainment value of the game, players are more likely to call upon them again. This is an in-between consequence, starting small and growing with successive uses of the system.

It’s never possible for an Automatic Success result if the system has critical successes and failures, because they override a success if rolled. However, you can approach that point – 17/- on 3d6 is just about there. But there’s always that chance to roll box cars. It doesn’t matter if your chance is 28/- (it would never actually get that high) – box cars or their equivalent is ALWAYS a failure.

If you don’t have criticals, then automatic success does become possible, but it never becomes possible to automatically get a critical success because this set of mechanics doesn’t have them.

Before moving on, I want to highlight that there’s at least one other Optional Rule described within the examples, so don’t skip over them too quickly, even if you think you understand the functioning of the system or they are referring to a game system other than the one you’re using.

Selected Full Examples

I kept thinking up new ways of using this system, too many to offer them all as fully-worked examples. So I have divided them into three categories: a couple of full examples to illustrate the application of the system; a few examples in which some key conceptual element can be brought to light, which are discussed briefly and perhaps partially worked up in furtherance of that; and a few more that will get nothing more than a high-level summary, or perhaps even less.

Using the system to design a new magic spell for the Hero System

Interpreting this system for the base Hero system is conceptually simple – one Specification for the power or skill or ability that’s going to be used to simulate the results from a game mechanics standpoint, or a general conceptual description simplified as much as possible (stripped of anything detailed, in other words), and one Specification per modifier.

Plus one Specification that has to be #2 on the list – Ad-hoc vs Permanent.

    Ad-hoc vs Permanent magics

    Ad-hoc spells are here-today, gone-tomorrow deals: you get one shot at successfully using them. I highlighted the word successfully because the GM can arrange circumstances in which the first shot fails – the goop doesn’t hit the target or whatever – and it’s not fair to make the character go through the whole process again. One interval is enough to whip up a whole new batch, if necessary. But when the effect that the character has invested their time in does have the effect specified, the mechanism for delivering that effect fizzles and is gone.

    Some circumstances – constructing a new chemical, say – may seem to preclude this from being reasonable. That’s tough luck for the creator; it still happens. But for a spell, this is entirely reasonable.

    Permanent magics are stored or recorded somewhere and can be used again. Depending on how magic is configured in your game mechanics – and there are multiple options – this can usually be regarded as creating a new ability for the character; the required character points are then spent, and it becomes a permanent addition to their character sheet.

    Once this Specification is successfully incorporated, the GM can assign a blanket modifier to all the subsequent skill rolls. He should be consistent but open to exceptions. If the GM doesn’t want ad-hoc spell use, apply a -50%. If the GM thinks that permanent magics, with side effects that are tolerable on a recurring basis, should require more effort dotting i’s and crossing t’s, then a -20% for permanent solutions is reasonable. If they want to emphasize the flexibility of magic, they can apply a +25% to ad-hoc spell creation.

    Increasing the likelihood of success on the skill rolls shifts the actual time closer to the minimum. Decreasing it adds to the likelihood of some sort of roadblock by making success less likely. This will generally slow the project down.

    All this is relative, of course. If you have 8 or less in a skill, a -4 modifier is huge, especially taking into account the non-linear nature of a 3d6 roll. If you have 13/-, that same modifier is significant but not catastrophic; if you have 17/- or 18/-, it’s niggling but little more.

    The other way to represent this distinction is with Interval. Almost by definition, the Interval for ad-hoc spell use is seconds; I would set the Interval for the crafting of permanent magics at hours if not days. Given how many seconds there are in a day (86400), that’s a massive ratio. Using the divide-by-5 and convert to log-2 scaling of the Hero System, that’s a little more than +14. But hours or minutes seem too short for realism and game balance to be maintained.

    What CAN be done is to impose another blanket modifier to reflect the breakneck pace of development. Not the whole -14, though – that makes the system just about unusable. Maybe half that – a -7 makes ad-hoc more difficult. -7 on a d20 would be equivalent to -35% chances, on a 3d6 it would be more – an eyeball, seat-of-the-pants estimate, the same as I would make when GMing, is between -40% and -45%, and probably at the lower end of that range.

    Bearing in mind that there might already be a blanket modifier for ad-hoc use, this can either make the process almost untenable, or can moderate the net impact back toward neutral.
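A quick script checks the interval arithmetic above (illustrative only, using the divide-by-5, log-2 time-chart reading just described):

```python
import math

seconds_per_day = 86400
# divide by 5, then count doublings (log base 2) per the Hero time chart
steps = math.log2(seconds_per_day / 5)
print(round(steps, 2))  # a little more than 14, as noted above

# halve it for the ad-hoc pace penalty, as suggested
pace_penalty = -round(steps / 2)   # -7
d20_equivalent = pace_penalty * 5  # about -35% at 5% per point
```

The 3d6 equivalent sits between -40% and -45%, per the eyeball estimate above, because of the bell-curve distribution.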

The Example: Eyes Of Hodur

I’m going to lift a spell from the Grimoire of one of the characters in my Zenith-3 campaign. Credit to Nick Deane for the source (and the character). The numbers for the modifiers might not be (almost certainly won’t be) the same as in the official material but it will be close enough.

Skill: Spell Use
Value: 13/-
Interval: Seconds (ad-hoc spell)
Blanket Modifier: -25% = -25/5 * 0.75 (from earlier in the system description) = -3.75, round to -4. Net roll: 9/-.
Conditions: Magic Workshop +2, significant past experience at crafting spells +2. Net roll: 13/-.
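That Blanket Modifier conversion can be wrapped in a reusable helper (a sketch; the function name is mine, while the /5 and x0.75 factors are the ones quoted above):

```python
def percent_to_3d6(pct):
    """Convert a percentage blanket modifier into a 3d6 modifier:
    one point per 5%, scaled by 0.75 for the narrower 3d6 range."""
    return round(pct / 5 * 0.75)

print(percent_to_3d6(-25))  # -4, matching the Net roll calculation above
```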

Spell Description: Eyes of Hodur
School: Mind Magic
Effect: 2d6 Flash vs entire sight group + Range Modifier x2 (10 pts)

Modifiers:
Skill Roll Required -1,
Incantation -1,
Gestures -1,
Linked -1 (GM’s Note: Linked to what? Assumed valid),
Can Use Normal Mana or Mana Battery + 1/4,
Extended Duration: 30 minutes per point of success +3,
Only Affects One Target /4

Modifiers’ Totals: -4, +3 1/4, /4

Base Cost: 30
Active Cost: 25
Net Cost: 5
Mana Cost: 1
END Cost: 5
Range: 30′ (Flash has a range of 5″ for every 10 active points, round down – power description) GM’s Note: Range miscalculated; from the rule cited it should be 25/10 x 5 = 2.5 x 5 = 12.5″, round down to 12″ = 24m.

In the base Hero system, there is no division between types of modifiers: they are all + or -, with the + all counting toward active cost and the – not. So the modifier totals become +3 1/4, -8. This impacts the results:

Base Cost: 30
Active Cost: 30 x (1 + 3 1/4) = 127.5, round down to 127
Net Cost: 127 / (1 + 8) = 14
Mana Cost: n/a
END Cost: 127/5 = 25
Range: 127/10 = 12″ = 24m

Clearly, the spell would not be designed quite this way using the base Hero system, but this is good enough for example purposes.
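The base-Hero arithmetic in those figures can be captured as a helper (a sketch of the worked numbers above; real builds may round differently, and the /5 END divisor is this campaign's epic-scale rule rather than the book standard):

```python
def hero_costs(base, advantages, limitations):
    """Active Cost = Base x (1 + total advantages);
    Net Cost = Active / (1 + total limitations, as a positive sum);
    END at 1 per 5 Active, per the worked figures above.
    All values round down."""
    active = int(base * (1 + advantages))
    net = int(active / (1 + limitations))
    end = active // 5
    return active, net, end

print(hero_costs(30, 3.25, 8))  # (127, 14, 25)
```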

Specifications:
1. Flash
2. Ad-hoc Spell
3. 2d6
4. Skill Roll Required -1
5. Incantation Required -1
6. Gestures Required -1
7. Linked -1
8. Can Use Normal Mana or Mana Battery +1/4
9. Extended Duration: 30 minutes per point of success +3
10. Only Affects One Target /4

1 sec, Roll #1, Specification 1: 13/- -> 7, success
2 sec, Roll #2, Specification 2: 13/- -> 11, success
3 sec, Roll #3, Specification 3: 13/- +2 modifier = 15/- -> 17, failure
     GM applies extra time, x4 intervals = +2 = success
6 sec, Roll #4, Specification 4: 13/- +2 modifier = 15/- -> 7, success
7 sec, Roll #5, Specification 5: 13/- +2 modifier = 15/- -> 12, success
8 sec, Roll #6, Specification 6: 13/- +2 modifier = 15/- -> 9, success
9 sec, Roll #7, Specification 7: 13/- -4 modifier = 9/- -> 15, failure

The GM, having used extra time once, could do so again, but the modifier required to get from 9/- to 15/- is huge and probably beyond the limits of that capability. But he imposes a partial extra time adjustment of x4 time for this step anyway, because reducing the gap from +6 required to +4 required lets him be a little more generous with his partial solution offer.

The player can choose the GM’s offer of a partial success, “Link takes 2 rounds to establish each time”, or can choose a block/setback.

This offer is very nuanced. If it were 1 round each time, the offer would almost certainly be accepted, because it doesn’t compromise the spell’s function very much. If it were 3 rounds each time, the offer would almost certainly be rejected. 2 rounds is the sweet spot at which the character might be tempted if he needed the spell in a hurry – which he does, it’s an ad-hoc spell.

The GM warns the player that the magnitude of the failure means that the setback will be substantial, and gives him one last chance to change his mind, but the player feels that a few extra seconds is tolerable.

The GM returns the clock back to the start of the “Links to” Requirements (Specification 4), explaining that the simple method of making the spell restricted is being overridden by the link. The details beyond that don’t matter.

So that last entry now reads,

9 sec, Roll #7, Specification 7: 13/- -4 modifier = 9/- -> 15, failure
     x4 Extra Time -> 11/-, still failure

…and the process continues from there.

12 sec, Roll #8, Specification 4: 13/- +2 modifier = 15/- -> 12, success

The GM notes that a full turn has passed and lets everyone else act.

13 sec, Roll #9, Specification 5: 13/- +2 modifier = 15/- -> 11, success
14 sec, Roll #10, Specification 6: 13/- +2 modifier = 15/- -> 17, failure
     x4 Extra Time -> 17/-, success
15 sec, Roll #11, Specification 7: 13/- -4 modifier = 9/- -> 9, success

The revised approach solves the problem, but could easily have failed again.

16 sec, Roll #12, Specification 8: 13/- +2 modifier = 15/- -> 15, success
17 sec, Roll #13, Specification 9: 13/- -4 modifier = 9/- -> 9, success

This is the second potential ‘choke point’ identified by the GM when considering the list of specifications. He really wanted to be able to ‘force’ the spell effects down to 30 or even 15 seconds per point of success, because 30 minutes of blindness is massive, in tactical terms.

He then realizes that the design fails to specify ‘success on [what]’. He can define it as ‘success on the skill roll’ (up to 6 points of success x 1/2 hour = approx 3 hrs of blindness, maximum) or ‘points of flash rolled in excess of defense’ (2d6 averages 7; minus flash defense of 0-to-5, that’s a net 2-7 points, or 1-3.5 hours of blindness). He doesn’t know what the character intended when designing the spell parameters, and not seeking clarification on this point leaves him the latitude to decide. He chooses the latter option, but deliberately doesn’t tell the player, who will find out when he casts the spell. I’ve considered labeling this an “Ambiguity Tax”, but “Ambiguity In, Ambiguity Out” probably comes closer.

Note, too, that the player hasn’t specified the skill, so that reverts to the system default of the most appropriate skill, “Magic Use, 13/-”. If the player wanted to employ something other than the system standard, he would have had to Specify that, and justify it to the GM.

18 sec, Roll #14, Specification 10: 13/- -2 modifier = 11/- -> 13, failure

The GM notes that the player hasn’t specified casting time, and therefore also expects that to default to the system standard of 1 round activation time. He could ‘solve’ this failure with additional time, but really wants to reduce the effectiveness of this spell a bit, so he takes advantage of the player’s assumption to add an 11th Specification, x2 casting time.

The player’s choice not to list this variable was not unreasonable; the GM’s use of the added Specification to override the system standard is exactly what the player could have done originally, except that specifying the base standard itself would have incurred a modifier. It’s even possible that the player deliberately left this open for the GM to exploit, confident that the GM wouldn’t be unfair.

This changes the failure into a success:

18 sec, Roll #14, Specification 10: 13/- -2 modifier +2 additional specification = 13/- -> 13, success

19 sec, Roll #15, Specification 11: 13/- -> 11, success

The modified spell is now complete and ready to use. It has taken 19 seconds to craft, and the spell itself has changed slightly in the construction process thanks to the x2 Casting Time added at the end. So the ‘stat block’ describing the spell would have to be recalculated to accommodate the additional parameter.

The GM can now append special / side effects and a description of the spell when it is cast. He notes that there is no ‘no attack roll’ parameter specified, so the character still has to make one when using the spell (system default), but he adds an additional element: if the attack roll fails, the spell affects the nearest character, with a preference for anyone between the target and the caster for tie-breaking purposes. Assuming that the target is being attacked by the other characters, that almost certainly means a team-mate. He describes the spell as a ball of light the size of a fist that streaks from caster to target and wraps around the target’s eyes for the duration of the spell’s effects. This is not at all what the player was expecting; Hodur was a blind deity in the Norse mythos, and he expected the effect to be one of simply denying the target the ability to see. But if he wanted ‘no visible effects’ he should have Specified that and incorporated the resulting modifier; he didn’t, and the system default is for a visible effect. The GM simply drew inspiration for the nature of that effect from the power, “Flash,” on which the spell is based.

Using the system to design a new magic spell for the D&D System

I know I said that there would be a second complete example, but I completely ran out of time after discussing the translation mechanics. Sorry!

This is equally straightforward, at least conceptually. Each spell has a stat block that describes most of its specifics; each one is the basis of a Specification. In each case, the GM has to determine whether the numbers are a ‘per level’ or absolute number. Some spells have additional numeric descriptors in the spell descriptions, so these should also be scoured.

On top of that, in some versions of the D&D system, Metamagics can be used to adjust these values, but no Metamagic can be incorporated directly into a base spell without GM approval. Such approval still doesn’t incorporate the metamagic, but it does simulate it and does require an additional Specification for each increase, for example “x2 Range”.

    One-off vs Spellbook storage

    The exception to that statement would be if the GM adopted some variant of the “ad-hoc” spellcasting concept. Ad-hoc spells always have to build their spell variations directly into the spell construction and that includes metamagics, because those metamagics can’t be tacked on after the fact – ad hoc spells are always cast ‘as is’.

    In addition, I would consider the following as a House Rule within that new subsystem: Multiply each spell level by the number of spells within that spell level that the caster can memorize / use, daily, and divide by 3; the results are a ‘Spell Creation END pool’. This limits the ad-hoc elements of the spell to manageable levels. It should not be used outside of this system; it’s not sufficiently robustly developed for that.
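The ‘Spell Creation END pool’ arithmetic, sketched in a few lines (the dict shape and the round-down are my assumptions):

```python
def spell_creation_end_pool(daily_slots):
    """daily_slots maps spell level -> spells of that level the caster
    can memorize / use per day. Pool = sum(level x slots) / 3;
    rounding down is an assumption."""
    return sum(level * n for level, n in daily_slots.items()) // 3

# e.g. four 1st-, three 2nd- and two 3rd-level slots:
print(spell_creation_end_pool({1: 4, 2: 3, 3: 2}))  # (4 + 6 + 6) // 3 = 5
```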

    All that said, D&D does, in fact, offer a form of one-off ad-hoc spell variants, permitting these to be captured in either potion or scroll form. Of these two, Potions are the better choice from the GM’s perspective, because they can’t then be transferred into a character’s spellbook for free (in terms of this system).

    To offset the benefits of attempting to rort the system in this way, all scrolls generated using this system should have the property “Fragile” appended to their descriptions by the GM; this means that there is a 75-80% chance that any attempted transfer into a spellbook fails because the scroll self-destructs prematurely, requiring the whole creation process to be repeated. This applies ONLY to spells created using this process; the GM is free to set whatever rules he wants regarding “normal” scrolls and their fragility. And it only applies in situations where a character is trying to cheat the system.

    For such repetition, I would normally grant a +1 modifier because it’s been done by the character before, but in this instance, forget it. The cosmic halo of Mana has shifted or changed character or something, the ‘weave’ has been stretched by the failure, or whatever. You don’t earn a GM’s goodwill by attempting to cheat the system.

    On the other hand, creating a ‘permanent’ magic item (see below), with its attendant difficulties, is going about this honestly, and no such penalty or ill-will should result. But note that the penalties for creating a spell this way are already greater than those involved in crafting a permanent spell.

      Magic Items

      You can use this system for the crafting of a magic item. It is the GM’s option whether or not to use it for ‘standard’ items, but my recommendation would be not to. However, using this approach to get an estimate for the crafting time of a “variant item with no significant variations” is perfectly valid. The GM should also estimate the crafting cost of magic items using standard items as a guide.

      Start with the name of the item. “Sword of,” “Armor of,” etc are your cues to the first type of specification, the Form.

      There is a hierarchy to these things, providing a scale for partial successes like any other. That hierarchy is:

      Consumables:
           Potions (n x s / m / h)
           Scrolls (h)
           Wands (h / d)
           Arrows / Other (d)

      Permanent:
           Miscellaneous Minor (d)
           Daggers & Arrows (d)
           Miscellaneous Medium (n x d)
           Shields (w)
           Rods & Staves (n x w)
           Miscellaneous Greater (n x w)
           Other Weapons (n x w)
           Swords (n x w)
           Armors (n x w / mo)
           Minor Artifacts (n x mo / y)
           Major Artifacts (n x y)

      The higher up the scale you go, the greater the global penalty to rolls. I would set the zero point to be Miscellaneous Minor; Forms lower in the ranking get a +2 bonus per step, forms higher in the ranking get -1 per step.
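The step bonuses and penalties can be read straight off the hierarchy. A sketch (the list order follows the hierarchy above; the unquantified extra artifact penalty is left to the GM):

```python
# Lowest (easiest) form first; the zero point is 'Miscellaneous Minor'.
FORMS = [
    "Potions", "Scrolls", "Wands", "Arrows / Other",      # consumables
    "Miscellaneous Minor", "Daggers & Arrows", "Miscellaneous Medium",
    "Shields", "Rods & Staves", "Miscellaneous Greater",
    "Other Weapons", "Swords", "Armors", "Minor Artifacts", "Major Artifacts",
]

def form_modifier(form):
    """+2 per step below the zero point, -1 per step above it."""
    steps = FORMS.index(form) - FORMS.index("Miscellaneous Minor")
    return -steps if steps > 0 else -2 * steps

print(form_modifier("Scrolls"), form_modifier("Swords"))  # 6 -7
```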

      Artifacts of any kind should get an additional global penalty.

      Note the codes next to the forms – these are the recommended intervals (s = seconds, m = minutes, h = hours, d = days, w = weeks, mo = months, y = years). If a recommended interval is preceded by an (n x ), it means that the intervals should be multiplied by n, which is an integer from 2 to 6. Where the magic item has a plus associated with it, n should normally be ‘plus’+1, but I would make exceptions for consumable items falling into this category.

      But you don’t know what the ‘plus’ is yet?

      That’s the next Specification. Each magical ‘plus’ gives a -1 modifier to the roll for this Specification.

      Each ‘plus’ permits (but doesn’t require) the incorporation of one spell-like effect. If this effect is already extant within the rules, a single Specification is needed for each. The GM should use modifiers to adjust for more powerful effects at his discretion.

      The first Power incorporated gets a +5 modifier, the second a +4, and so on. These are not fully universal, but they apply to everything related to that Power.

      What’s more, if the designer voluntarily gives up some of these Power Slots, he gets a bonus to the rest – +2 for each slot ‘locked out’. This enables larger-plus equipment that is conceptually focused to have bigger abilities than one that is all over the shop.
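Those slot bonuses are easy to tabulate (a sketch; whether the per-Power bonus goes negative past the sixth Power isn't stated above, so I've let it):

```python
def power_bonus(power_number, slots_locked_out=0):
    """Modifier applying to everything related to the Nth Power
    incorporated: +5 for the first, +4 for the second, and so on,
    plus +2 per Power Slot voluntarily locked out."""
    return (6 - power_number) + 2 * slots_locked_out

print(power_bonus(1), power_bonus(3, slots_locked_out=2))  # 5 7
```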

      If the effect is not normal, but is described by a spell that the character knows, it requires two Specifications: Spell-like effect is one, and the spell name and Character Level it is set to is the other. The higher the ‘effective’ character level, the worse the modifier.

      If the effect is a customized version of the spell, it should be listed as its original form, by name (one Specification), followed by one Specification for each of the modifications.

      If the effect is a completely new spell, even one derived from a Reference Spell, the whole spell design process has to be incorporated. That means that you can save a lot of work if you create your new spells in permanent form first, and only then commence enchanting the item.

      Most spell-like effects are then followed by either ‘permanent’, ‘at will’, or ‘X times a day’, describing how often the effect can be used. This is a separate Specification, but there’s such a big gap between even “5 times a day” and the first two that some special handling is needed.

      X times per day is handled as a single Specification, with a penalty increasing as X increases, the amount of which is left to the GM to determine. But there are already a lot of penalties, so I recommend -X, which is therefore a peak modifier of -5.

      ‘Permanent’ and ‘At Will’ also have a -5 modifier, but they confer an additional -1 modifier on all other Specifications related to this specific ability. That gets fairly significant when there are a lot of rolls, as with incorporating a custom spell.

      Wands and their equivalents have a different Specification at this point; every spell in them is the same, but the number of copies of a given spell that can be included per ‘plus-equivalent’ is 2, 4, 6, 8, or 10. The lowest value gives a +6 modifier, then +4, +2, +0, and -2.

      Potions can usually only contain 1 ‘charge’ by default, a second and third Specification have to be included for a second and then a third ‘charge’. But all of these have the same modifier. In general, this makes potions more compatible with experimental spell design in the form of one-off spells.

      After the first power, you have the second, and so on.

      There are a lot of negative modifiers, so expect low net rolls and lots of failures. Almost every non-standard magic item is compromised in some way. I strongly recommend the optional rule that gives bonuses for repeated efforts.

      Exotic Materials

      Some materials are more easily enchanted than others. These materials give a blanket modifier to certain types of uses. I want to discuss two of them, and then mention a couple of others in more general form.

      Adamantine is the ultimate recipient for Dwarven magics in the form of weapons and armor. It grants a blanket +8 to all rolls subsequent to this Specification. The Specification “Adamantium” (or “Adamantine”) does not receive this bonus, because it is such a difficult material to work with.

      Mithril is the ultimate receptacle for Elven magics in any non-martial form, giving +6 to all rolls subsequent to its Specification. It’s not quite as difficult as Adamantine to work with, so it also gives a +2 to its own Specification.

      Other materials should be assessed by their purity. Steel that is forged and folded multiple times becomes more pure, and so do most other materials – enough to make this a general rule. Some other materials can also be considered effectively ‘pure’: gold, silver, platinum, ebony, ivory, gemstones, etc. The most pure give a +5 modifier to the first 3 specifications in each power slot, then +4 to the first 3, +3 to the first 4, +2 to the first 4, and +1 to the first 5.

      Herbs and woods are the weakest of the lot. They give +1 to the first 2.

Back to crafting a unique spell!

Wow, that side-trip into magic item design was a lot more extensive in the end than I originally had planned!

Reference Spell & Modifier Levels

Each spell should also list a ‘reference spell’ that is used to guide the GM in assessing the Specifications. This doesn’t have to be a spell that the character has access to, and it’s more powerful within the ad-hoc system if the character doesn’t; it serves purely to set a baseline for the GM evaluation of differences relative to the chosen Specifications.

Each numeric Specification is a point on a continuum, and can be moved up or down by discrete steps (being careful to avoid using the term ‘intervals’ because that already has a specific meaning in this set of rules). It’s up to the GM how large these steps are, but they should make most desirable values a small integer number of steps away.

Range might be “20′ per level” based on the Reference Spell; steps down from that (making the spell easier to craft) might be “15′ per level”, “10′ per level”, “5′ per level”, “1′ per level”, and “touch” – with “touch” being a floor to that particular Specification. Similarly, you can’t have a fraction of “Instantaneous,” so that’s a floor on Casting Time. Each of these steps down would add +2 to the roll for that specific Specification (but note the optional rule below).

Increasing the Range Specification from a Reference level of “20′ per level” would yield values of “25′ per level”, “30′ per level”, and so on, and these give a -2 modifier to the skill roll for incorporating that Specification into the design.

This approach is especially advantageous because it builds a scale for the formulation of “partial solutions” directly into the system. However, if necessary, the GM can use different step values for partial solutions.

Let’s say the player was aiming for a range of 30′ per level against a reference of 20′ per level. He flubs the roll by 3. Using the standard steps of ±5′ per level, there isn’t a whole lot of room to maneuver; there’s just one intermediate step. If the GM chooses each unit of failure to represent 2′ per level of difference, that failure by 3 can be interpreted as being 6′ per level away from what the player was aiming for, or “24′ per level”. The GM can then offer this as an option to the character.
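That partial-solution arithmetic in one line (a sketch; the names are mine):

```python
def partial_offer(target_per_level, failure_by, feet_per_point=2):
    """Each point of failure shifts the offered result back toward the
    Reference value by `feet_per_point` feet per level (GM's choice)."""
    return target_per_level - failure_by * feet_per_point

print(partial_offer(30, 3))  # 24, i.e. "24' per level" as in the example
```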

Optional Rule (all systems): Transfer Of Bonuses

I noted the GM’s determination of “Choke Points” in the first example. There’s nothing stopping the player from making the same assessment and ‘banking’ modifiers from steps he regards as easy, later in the process, to boost his chances of success on the earlier, harder steps.

Sometimes, the player’s assessment will be correct, and this can cushion the process in his favor. And sometimes, the GM will have thought of a reason why a different step is the real Choke Point in the process, and the player will have made a hard roll worse. The better the player understands the game world and its internal physics, and the GM and his way of thinking, the more accurately he will make this assessment; and when those understandings are more limited, this exercise provides a direct route to acquiring that knowledge.

Modifiers can only be transferred from Specifications yet to be rolled. Once the roll is made, only the specified modes of variation are permitted; player modifiers of this type become fixed, as does any matching penalty.

Players can decide that they are about to roll an easily-successful step, and take a penalty on it to make a later, more difficult step easier to complete. That’s fine. But they can’t look at a bonus after the roll and decide that the bonus was unused; they aren’t allowed to bank it for later.

This makes each Specification and its roll a more dynamic process, and boosts player interaction with the system and its mechanics – not a bad thing. But it does reduce the GM’s ability to modify the outcomes, either in the character’s favor or against it, and that can be a bad consequence, especially from the GM’s perspective.

My own thoughts are balanced 50-50 on the question of whether or not to implement this; I can see both benefits and liabilities. That’s why it’s an optional rule. I would probably give the system a couple of opportunities to establish itself without the optional rule, and then introduce it on a trial basis. Or run a couple of “solo playtests” to see how big a difference it made to the ‘look and feel’ of the system mechanics. Or both. But that’s me; every GM is different and has different underlying philosophies to their GMing style, so you do you.

If there’s no one right answer, there are no completely wrong answers, either.

    Spell Variants

    Oftentimes, the goal isn’t an entirely new spell, it’s a variation on an existing one, which therefore becomes the Reference Spell. If the character already knows or has access to the Reference Spell, this confers a +5 advantage when crafting an ad-hoc variant and a +2 advantage when crafting a permanent addition to a Spellbook or Grimoire.

    Intelligent Items

    To craft an intelligent, sentient item is HARD.

    They start with a seed of intelligence taken from the caster. This can be as large as the caster wants, so long as it is 1 point or better, but his own Intelligence goes down by the size of the seed, so casters tend to favor fairly small ones. These points cannot be recovered by any means so long as the crafting is underway, and the character suffers all the attendant consequences of his lowered INT.

    Each Specification applied to that seed doubles the resulting INT of the item, or doubles its rate of maturation. The INT growth normally takes 32 years to mature, so cutting that down to 6-12 months is highly desirable; most mages would prefer to take it further, but each such doubling also adds a -1 modifier to the subsequent Specifications of this type.

    INT Seed 1 doubles to 2, doubles again to 4, again to 8, again to 16; maturation takes 32 years, halving to 16, 8, 4, 2, and 1 year – four doublings and five halvings, a total of -9.

    INT Seed 2 doubles to 4, doubles again to 8; maturation halves from 32 years to 16, 8, 4, 2, and 1 year, then to 6 months – two doublings and six halvings, a total of -8.

    INT Seed 3 doubles to 6, again to 12, again to 24; maturation halves from 32 years to 16, 8, 4, 2, and 1 year, then to 6 months, 3 months, 1 1/2 months, and finally 3/4 of a month (about 22 days) – three doublings and nine halvings, a total of -12.
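The pattern in those three worked examples is just counted doublings and halvings, so it can be automated. This is a sketch only – the function name is mine, and I’ve assumed maturation times expressed in years:

```python
import math

# Illustrative sketch (names mine): each doubling of the INT seed, and
# each halving of the 32-year maturation time, is one Specification at
# -1. Maturation time is given in years.

def crafting_penalty(seed, final_int, years_to_mature):
    doublings = round(math.log2(final_int / seed))
    halvings = round(math.log2(32 / years_to_mature))
    return -(doublings + halvings)

print(crafting_penalty(1, 16, 1))       # -9, matching the first example
print(crafting_penalty(2, 8, 0.5))      # -8  (0.5 years = 6 months)
print(crafting_penalty(3, 24, 0.0625))  # -12 (0.0625 years = ~3/4 month)
```

Anything that isn’t an exact power-of-two multiple gets rounded to the nearest whole number of steps, which matches how the worked examples count them.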

    The creator gets to specify one personality trait. The GM can add three more, or add one more plus some words before or after the player-specified trait to modify its meaning. These must be specified secretly and in writing; only the GM is permitted to know everything. The other players at the table then supply (secretly and in writing) a single word each, which the GM has to arrange into 1-3 additional personality traits. To do so, he can transform any noun into adjective or verb form, substitute a quality especially associated with the word, or attach an emotional state to a noun. Any unused words get discarded. The personality emerges as the item matures; only at the end of that process does the GM reveal the substance of the personality summary.

    EG: The creator supplies the one word, “Loyal”. The GM adds “to himself” and, after contemplating “puppy-dog eager” as a second personality trait, adds the tried-and-true “Manipulative” instead. Player #2 offers “Lemon”, #3 provides “Eccentric”, #4 suggests “Affectionate”, and #5 gives “Gleams”. The GM transforms these into “Affectionate About Lemons” and “Gleams Eccentrically” (Eccentric converted to an adverb).

    So this is a magic item that is self-centered to the point of potential disloyalty, that likes to manipulate others to protect itself, that loves everything about lemons from their color to their scent to being bathed in lemon juice, and that somehow twists the light striking its surface to reflect in unusual and unexpected directions – a peculiar expression of vanity.

    The GM could also have transformed “Lemon” into “Sour” and used it as a standalone personality trait. But he decided not to be that mean.

    The item has all its powers while maturing, but is not able to apply as much intelligence to such use. If employed in this time, it might well make mistakes – serious ones.

    If the final integration roll fails, does the caster get their INT seed back? – no, because failure isn’t necessarily the end of the story. The GM can apply extra time modifiers that turn the failure into an eventual success. He can impose a Block or a Setback, forcing the process to retrace some of its steps, or navigate an additional Specification; either of these choices keep the chance of success alive. Or the character can wait an interval and just try again. And again. And again.

    It’s perfectly legitimate for the GM to rule that the final integration can only take place under certain conditions – “inside a magic circle on the night of a full moon” for example. However, he should ensure that the character at least hears hints as to such requirements long before he actually reaches this point in the casting. If he doesn’t follow up on this information, that’s on him. Me, I would use this as the trigger to a whole adventure – the character has to steal into the tower of a bunch of evil wizards and ransack their library for the information he needs, that sort of thing. And, should the character be discovered by the Wizards (he will be), he’ll need the other PCs to help him escape!

Selected other examples

There are three other examples of using this system that I want to highlight because they show off some aspect of what can be done with these mechanics. Like the spells (and magic item) example above, these will be more ‘how-to’s’ than full examples.

Using this system to design a better television set for mass production

To design and create a better TV set (or any other industrial gadget), you need to first define its fundamental properties. I picked this as an example because I think every reader will know what a TV set is and what it does. And for that reason, I think we can define a TV set as, well, a TV set. So that’s the first Specification: “Prototype Television Set”.

What are the fundamental characteristics of a TV set? What do you look at when considering a purchase?

    Price

    The first item is retail price, but that gets a little complicated because the price is relative to the value of the currency. In general, it takes the form of a range from “0.5 x X” to “1.5 x X”, where X is the median price. But you can’t define a median price unless you’re comparing like with like. This is the Target Retail Price. It guides later design choices in ways that are too complicated to map out in a general form, and may or may not be achieved at the end of the process.

    Screen Size & Function

    Which brings us to the second item: Screen Size. This is defined as a basic shape (square, letterbox, cinema) plus the size of the screen from corner to corner. The latter is traditionally measured in inches long after every other measurement has been converted, in those countries that have switched to the metric system – but sooner or later, these measurements will also make the switch.

    10, 20, 30, 40, 50 – those are the sizes in cm of small, portable units; divide by 2 to get inches. Choosing one of these adds the third item to the list, Portable, which implies a weight range.

    60, 80, 100, 120, 150 – these are the sizes of smaller domestic units in cm. Again, divide by 2 to get inches. Third Specification is ‘domestic, small’.

    180, 220, 260, 300, 400 – those are medium modern domestic units in cm. Divide by 2 for inches. Third Specification is “domestic, medium”.

    500, 800, 1000, 1200, 1500 – those are large domestic units in cm. Third specification: “Domestic, large”.

    1600, 2000, 2500, 3000, 4000 – these are the “Home Cinema” sizes in cm, giving the third specification accordingly. Only the first two are common, but the next two are around – my nephew has a set that’s somewhere in the 3000 range, and it takes up an entire wall.

    5K, 10K, 20K, 40K, 60K, Special – those sizes are starting to reach the point where people have trouble grasping them, so let’s switch the units up, to meters and feet or yards: 50, 100, 200, 400, 600 meters, Special; multiply by 3.3 to get feet or 1.1 to get yards. The latter conversion is so simple it can be done in your head, so let’s use it: 55, 110, 220, 440, 660 yards, Special. 660 yards is 3/8 of a mile. To the best of my knowledge, none of these sizes are in actual production – it would be more common to have a bank of smaller sets.

    Note that anything in this final size group adds a penalty to the Reliability testing later in the process. These should be -10%, -20%, -30%, -40%, -60%, and -80%, respectively, or their game system equivalents.

    These are the “fantasy” set sizes, and I don’t think we need to go much further. Admittedly, my Dr Who campaign recently featured a spacecraft whose cylindrical body was a TV “set” more than a km in length, but its display was segmented into different levels within the spacecraft.

    I’ve included the size category “Special” for such purposes.

    Once you know the size category and associated X-value you’re talking about, you can go back to the actual size and fill that in, and that will inform the typical price point for that size of set.

    EG:
         X=2000
         Size = 1000-3000
         Price: $2500 AUD

    Resolution

    Each of these sizes carries an implied resolution – an expectation of display resolution. In the old TV world of cathode-ray-tube displays, these were measured vertically in “lines”; in the modern, wide-screen world, they are measured in pixels across the top.

    Old-style sets: square display area (more or less – they were actually 4:3 ratio). Later versions offered a “letterbox” format for showing widescreen images. You can still find sets in the smallest screen sizes that preserve this arrangement, if you search hard enough, but they are increasingly rare. 60, 80, 100, or 120 lines were on offer in portable, domestic small, and the lower sizes of domestic medium – which were considered large sets back in the day – but the real standards were 480 or 576 lines.

    In more modern designs, the smallest portable sets will have 512 pixels, but older sets might have 128 or 256 pixel displays. The next size up is 512 (256 in early sets), and the rest of the category is 1024-pixel resolution (twice as sharp as the best of the vacuum-tube screens, basically).

    In the “Domestic, small” category, you have 1024 and 2048 pixel displays.

    “Domestic, medium” is 2048 and 4K in the two smallest sizes, 4K exclusively in the middle, and 4K and 8K at the top end.

    “Domestic, large” is pretty much all 4K and 8K, but I have seen some sets that go to 16K and “interpolate” every second pixel (or AI-upscale the image, such as the BOE 110-inch device). This blurs the image slightly if you get too close to it, but is good for more distant viewing. This is a way of getting around the problem that the source media / transmissions are rarely 8K, let alone better. 32K is barely theoretically possible, but a slow uptake of 8K means that there’s no commercial impetus to go there, and all sorts of technologies need breakthroughs to support this resolution. Your Sci-Fi screens (Bridge-of-the-Enterprise stuff) might go to this resolution, and might even go to 48K or 64K. One of the major hurdles is the size of the pixel, which becomes very hard to manufacture when they ALL have to work, reliably, for a long service life, or you get damage to the displayed image.

    HD is 1920 x 1080 pixels; Ultra-HD is 3840 x 2160 pixels, also known as 4K. UHD2 is 7680 x 4320 pixels, also known as 8K.

    The standard ratio these days is 16:9. If you do the math, that gives a corner-to-corner length of 18.358 grid units – so dividing the screen size by this gives the size of one grid unit. The screen is 16 such units wide and 9 high, and dividing the height by the vertical pixel count gives the size of each pixel, which can be useful in terms of appreciating that size.

    EG Cont:
    Size 1000-2000 cm = 500-1000 in. Taking the 1000 cm diagonal: height = 1000 x 9 / 18.358 ≈ 490 cm; at HD resolution, 490 / 1080 ≈ 0.45 cm, so each pixel is about 4.5 mm across – big enough to pick out individually if you stand too close, which is why a set this size is meant to be viewed from a distance.
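If you’d rather not do the 16:9 geometry by hand, here’s a short illustrative sketch (Python; the function name is mine) that turns a diagonal screen size and a vertical pixel count into a per-pixel size, assuming square pixels on a 16:9 panel:

```python
import math

# 16:9 screen: the diagonal spans sqrt(16^2 + 9^2) ~ 18.358 grid units,
# the screen is 16 units wide and 9 high. Square pixels assumed, so
# height / rows gives the same answer as width / columns.

DIAG_UNITS = math.sqrt(16**2 + 9**2)  # 18.35755...

def pixel_size(diagonal, vertical_pixels=1080):
    height = diagonal * 9 / DIAG_UNITS
    return height / vertical_pixels

# The example set: a 1000 cm diagonal at HD resolution.
print(round(pixel_size(1000), 2))  # 0.45 (cm), i.e. about 4.5 mm per pixel
```

Feed it the diagonal in whatever unit you like; the answer comes back in the same unit.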

    Resolution is the fourth Specification. And, if you want to allow the use of media with other resolutions, it might also be the fifth and sixth – one Specification for each resolution on offer in the set.

    Sources & Inputs

    You get one for free – that’s usually antenna and digital decoder, here in Australia. The first additional Specification gets you two more – commonly an HDMI and a USB. The second adds three more to that list – frequently a second USB, a second HDMI, and a composite input. A third Specification in this area adds up to four more source inputs.

    What about internet streaming? You may need that 4th Specification, or you may need to sacrifice one of the existing sources to make room.

    What about cable, or satellite? Same story. An inbuilt CD/DVD player? You got it. And then there’s the input that everyone forgets – the remote control.

    And then, there’s the kicker – price points. One of the ways to get sets down to a low price point is to sacrifice inputs, but what’s acceptable in this department has changed a lot. These days, no-one would dare to offer a TV without a remote control and at least one other source. My TV is a mid-priced small unit (slightly smaller than I wanted, in fact) and it has a remote, two HDMI inputs, two USB inputs, a composite input, internet streaming, and Bluetooth that I can use to connect it to my laptop.

    Outputs

    Outputs work the same way – one (the screen) for free, +2 for the first Specification, +3 for the second, +4 for the third, and so on.
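The same “some free, then +2, +3, +4…” progression applies to Sources and (with a larger free allowance) Controls, so here’s a quick sketch of the running total it produces. Purely illustrative – the function is mine, not a rule as written:

```python
# Illustrative sketch (function mine): some items come free, then the
# first Specification adds 2, the second adds 3, the third adds 4, etc.

def total_features(free, n_specs):
    return free + sum(k + 1 for k in range(1, n_specs + 1))

print(total_features(free=1, n_specs=3))  # 1 + 2 + 3 + 4 = 10 outputs
print(total_features(free=4, n_specs=2))  # 4 + 2 + 3 = 9 controls
```

The escalating increments mean that piling on features is cheap in Specification terms but, as noted later, every extra Specification makes the prototype testing rolls harder.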

    Outputs can include headphones (virtually all sets have one jack, some have two), HDMI (sometimes x2), loudspeakers (usually x2 for stereo), and audio output (to a soundbar or hi-fi). A DVD/Blu-ray player may also be a DVD/Blu-ray burner. Some TVs have a sound bar built in (which can be bypassed if you have a better one) and so will have connections for two external speakers (stereo), three (adds a sub-woofer), four (better stereo), five (surround sound), or seven (better surround sound). I have also seen a 9-speaker rig (differentiated low to high based on the part of the screen with the greatest brightness), but that didn’t work too well.

    What else? Well, some TVs let you ‘cast’ (short for broadcast) the image to a second TV in a different room. Some have Bluetooth (for wireless headphones), some have Bluetooth for video, and some have cameras (an additional input) and function as a telephone for videocalls (there’s the matching output, but it’s also another input).

    That’s about it, really.

    Switching

    Most sets don’t even have this as a Specification, but a few do – video-within-video, casting one channel while watching another, recording one or two channels to an internal hard disk while watching another, and so on. These are 0 free; +1 for the first Specification, +2 for a second, and so on – one per ‘display mode’. Okay, most TVs will have a control panel – I guess that counts as the free one.

    Controls

    You get volume, brightness, input selection, and channel selection, for free. Everything else costs you. +2 controls for the first Specification, +3 for the second, and so on.

    Controls can be grouped into Visual, Audio, and Other, and it can be helpful to think of them that way, but it’s the grand total that matters.

    Visual contains contrast, color saturation, tint or hue or both, sharpness, and probably a few that aren’t coming to mind. In the old cathode-ray tubes you may have had pinch and skew and x and y adjustments. Some had a degauss function that could be considered either a visual or an ‘other’ control.

    Audio includes tone, wide, bass, treble, sub-bass, ultra-treble, independent headphone volume control, Dolby, de-Dolby, boost, crossover, front-to-rear volume, and I’m sure there are a few that I haven’t thought of. You can also get preset EQ settings for movies, TV, rock music, classical music, and so on. And mute, possibly accompanied by separate Mute Headphones and Mute All. Oh, and balance.

    In the ‘Other’ category you will find channel tuning, channel offsets, auto-tuning on/off, the ability to turn off powering some functions if you aren’t using them, a system reset, a favorites option, TV guide, automated channel switching, search functions, internet browsing, and a text input mode. Some systems have certain streaming modes built in – mine has over 500, including most of the domestic free-to-air channels (but not the Channel 7 family, for some strange reason) – those don’t count separately; consider them one item. Headphones on/off is a common one that could go in either this category or in sound.

    Also in that category, I’ve seen at least one TV which had a USB printer port, so that you could do more than just browse the internet – you could print out part or all of the page. Selectable printing is a second item above whole-page print. And I’ve seen one internet-enabled set that let you rest your cursor on a link, click a different button on the remote, and print the page at the other end of the link without even opening it – very handy for gathering research.

    As you can see, these can add up quickly. That’s because, in modern times, these are all done in software. Back in the analogue era, they had to be done with physical circuits and engineering, and that meant there were far fewer of them. But there was nothing to stop manufacturers adding some of these extra functions, and some did. My first color TV (I started with a much older black-and-white set inherited from my step-great-grandmother) had both bass and treble controls.

    Special

    This is a catch-all for other parts of the package, the clever bits that can make you stand out from the crowd. OLED, LED, LCD, QLED, mini-LED / Neo-LED / QNED, MicroLED, RGB Mini-led / Micro RGB, Laser Projection, and Lifestyle TVs – most of these are acronyms but all modern TVs are one of these, and they all add different pros and cons.

    Anything you can think of can go in this category. If you want a TV with an attached 3D printer that builds a diorama of the currently-displayed TV image, interpolating multiple frames through the scene to construct a 3D map of the scene – go for it. I’ve never heard of this being done, but with AI, it should be possible. Or maybe you want to license some popular games and build them into the system.

    As a general principle, forget about the acronyms and just Specify the technological benefit that you want – “better blacks”, “sharper colors”, “curved screen”, and so on. Each of these forms a separate Specification.

    If you want to know more, go to Google and search for “Types Of TV Screen” – that’s what I did to generate the list given above.

    This category can also contain things like “User-friendly Menu design” or “Quiet” – some TVs generate so much heat that they need a lot of cooling to maintain reliability, and those fans make noise.

    Prototype Testing

    So, with all this specified, you have defined this particular model of TV. Next, you have to build a prototype. Since that’s effectively the end result of integrating all these specifications, it gets built for free.

    But then you need to test that prototype for efficiency, reliability, heat generation, electromagnetic interference, and physical robustness. Each of these tests on the prototype (save physical robustness) is a separate Test, listed as a separate Specification, and has to be passed. There’s a +5 modifier to doing so, but the number of Specifications listed prior to this point each contribute -1; the expectation is that ‘extra time’ will be spent just to get to the point where this roll is achievable. The more complicated your TV set, the harder these tests are to pass and the longer they will take. Obscure design flaws often don’t show themselves until this point.

    Factory-ready design

    Next, you need to redesign the prototype into a compact design that can be manufactured by assembly-line processes – that’s an additional Specification. Modern product development often does this through CAD as each component is selected and its function incorporated into the design, but I’ve gathered it all into this discrete step.

    The next two Specifications can be done in whatever order the player decides.

    The first one I’m going to discuss is Component Sourcing. Prototypes can be built with as many custom parts as desired, and are often over-engineered, because until he gets there, the designer doesn’t know what’s going to be needed. There are three levels to Component Sourcing: all off-the-shelf; bespoke components that a parts manufacturer builds specifically for this model and design (these should be limited in number, and are generally minor variations or adaptations of existing components); and custom parts that have to be commissioned from scratch just for this model (these should be avoided if at all possible, because they are VERY expensive). Either of the last two runs the risk of supply lines being compromised; while that can happen with off-the-shelf parts where there is only one source, it’s relatively rare.

    The designer makes his roll, and has to keep making this roll until he succeeds – or until extra time can be applied to make a roll successful. That might not be the first roll, or even the second; it will be a roll which is ALMOST a success, because that’s the more efficient pathway.

    The GM takes the margin of success and subtracts the number of intervals required for this Specification.

    If the result is greater than or equal to zero, this is an entirely “off the shelf” design, and costs just plummeted to the middle of the lower half of the range. If the result is -1 to -5, then there are some bespoke parts required, but they are needed in sufficient quantity, or have sufficient other applications, that a manufacturer of such parts will partner up for the manufacturing of these components. But if the process was a troubled one, resulting in an adjusted roll of -6 or worse, at least one custom part is needed, and someone will have to be contracted to make it. Or its manufacture can be brought in-house, but that may require training and special expertise that the factory doesn’t have. Costs per unit immediately spike, doubling or tripling ([d8+4]/4, keep fractions).
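As a sketch of how those three outcomes fall out of the adjusted roll (Python, illustrative only; the category labels and function shape are my own, not the rules text):

```python
import random

# Illustrative sketch (labels and structure mine): Component Sourcing
# outcome = margin of success minus the intervals required.

def sourcing_outcome(margin_of_success, intervals, rng=random):
    net = margin_of_success - intervals
    if net >= 0:
        return "off-the-shelf", 1.0   # costs drop sharply
    if net >= -5:
        return "bespoke", 1.0         # partner manufacturer required
    # -6 or worse: at least one custom part; unit cost spikes by [d8+4]/4
    return "custom", (rng.randint(1, 8) + 4) / 4

kind, cost_mult = sourcing_outcome(margin_of_success=4, intervals=2)
print(kind)  # off-the-shelf
```

The cost multiplier on a “custom” result ranges from 1.25 to 3.0, which lines up with the “doubling or tripling” language above.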

    The player is permitted to reject this outcome and go back to the start of this Specification (time used is lost, of course), and try again.

    This Specification not only represents replacing the custom builds and expensive components in the prototype, it covers the negotiations with suppliers, legal, and the signing of contracts.

    The other Specification is carried out simultaneously and in conjunction with the first: Miniaturization. The smaller the parts, the smaller (and sometimes the cheaper) the product. Until computers came along and used plug-and-play in 1995, there wasn’t a whole lot going on in terms of off-the-shelf circuitry; almost everything was built by the factory. This was a paradigm shift in manufacturing that not a lot of people were aware of. Plug-and-play computer chips soon followed, and these found utility in markets the chip manufacturers never dreamed of. The extent of the revolution became clear when Y2K loomed; aside from the bigger, more obvious devices, there were millions upon millions of microprocessors embedded in other technologies, and all of them had to be considered suspect until proven otherwise. Between 25 and 100 million products had to be tested, and in some cases, hastily redesigned or reprogrammed.

    The more that can be done with off-the-shelf components, even if that’s not what they were intended for, the better. The more the top-grade components used in the prototype can be replaced with cheaper ones, the better. The smaller the level of over-engineering compatible with safety, the better.

    It can be cheaper to use an off-the-shelf chip in which 90% of the functionality is ignored than to have a custom chip with just the circuitry you want. And designers tend to then look for ways to use that additional functionality even if that wasn’t what it was intended for.

    Miniaturization can reduce the electrical demands, which reduces the cooling required, which reduces weight and noise and size.

    So Miniaturization is incredibly important. The designer rolls until successful, as with Component Sourcing, then subtracts the total number of Specifications from the margin of success, treating a net -5 as a floor; any Specifications left over once that floor is reached are added back to the total – because, past a certain point, the capabilities of the parts make it easier to add more functions. The results can then be read off the following table:

         +6 or better: very large scale miniaturization, product size halved
         +4 or +5: large-scale miniaturization, product size to 60%
         +2 or +3: large-scale miniaturization, product size 70%
         +0 or +1: considerable miniaturization, product size 80%
         -2 or -1: some miniaturization, product size 90%
         -4 or -3: minimal further miniaturization, product size 100%
         -5: No effective miniaturization, product size +(2d4-1) x 5%
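Reading the floor-at-minus-5 rule together with that table, a hedged sketch might look like the following. This is my interpretation of the mechanic, in Python, with my own function names; the average 2d4 roll is assumed when no dice are supplied:

```python
# Illustrative sketch (my interpretation): margin minus Specifications,
# floored at -5, with leftover Specifications adding back +1 each.

def miniaturization_score(margin, total_specs):
    net = margin - total_specs
    if net < -5:
        leftover = -5 - net   # Specifications unused once the floor is hit
        net = -5 + leftover   # ...and each adds back +1
    return net

def size_factor(net, roll_2d4=None):
    """Final product size as a percentage of the prototype's size."""
    if net >= 6:  return 50   # very large scale miniaturization
    if net >= 4:  return 60
    if net >= 2:  return 70
    if net >= 0:  return 80
    if net >= -2: return 90
    if net >= -4: return 100
    roll = roll_2d4 if roll_2d4 is not None else 5  # 2d4 averages 5
    return 100 + (roll - 1) * 5   # -5: the product actually grows

print(size_factor(miniaturization_score(margin=3, total_specs=2)))  # 80
```

Note the bounce-back effect: a heavily-Specified design that blows through the floor climbs back up the table, which matches the “easier to add more functions” observation above.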

    That’s followed by one last item – aesthetics. These are generally kept fairly simple these days, but wood-grain vs plastic vs pseudo-metal were all valid considerations in the older days.

    The end result is a second prototype, and guess what? It needs to be tested, too, just like the first – but this time WITH physical robustness and possibly portability added to the list.

    It is worth noting that the choice of sequence can matter if different foundation skills are used, as a critical success in the first then greatly benefits the second. That won’t matter most of the time, but never assume that it will never happen.

    Logically, these two processes should influence each other. Getting a “custom part” result should give a bonus to miniaturization, and part of the process of achieving a high level of miniaturization might involve using bespoke components, or otherwise ‘steering’ the outcome of the Component Sourcing roll.

    So I was very keen to establish a firm sequence for these two Specifications – but every attempt (and there are only two orders they can be in) collapsed completely, because the bonuses run both ways.

    Either I could implement something complicated in which each roll fed back into the other, or I could completely divorce the two in terms of sequence and just leave it to the GM to interpret the actual results on the day it comes up. How he chooses to interpret the results of the first roll, whichever one it happens to be, should be reflected in a tweak of the results of the second, and in the bonuses offered for successful completion of the second.

    I chose the second of those two choices. The truth is that there are probably dozens of ways a particular circuit could be implemented (in modern times – don’t try to apply this pre-WW2), and the process is an iterative one of finding out what’s available, and how much of the circuitry it can provide the design, and how miniature the results are.

    Patents and Trade Secrets

    Any Custom Part will be patented by the design owner. Any bespoke parts will be patented by the manufacturer, though sometimes this can be shared with the design owner. Neither of these prevents a rival from reverse-engineering the product to learn its secrets; instead, it places those secrets in plain sight, but protects the profits from the use of them.

    But the sweetest result of all is when the design yields a Trade Secret – something that isn’t protected by a patent because it’s a lot harder to reverse-engineer, and so is exclusive to the product manufacturer – at least for a while.

    Either a critical success on the Miniaturization roll, or a Critical Failure on the Component Source roll, can yield a Trade Secret.

    Having a trade secret means that no-one else can use the technology. There’s an unresolved variable in their analyses. But that secret has no legal protection; if anyone else does figure it out, it’s too late to patent it. So the designer gets the choice – publish or don’t publish?

    If he decides to go down the Trade Secret route, he can add one more Specification (which he has to roll for, of course) – a difficulty modifier to be applied to all attempts to reverse-engineer the tech. He specifies how hard he wants it to be, and that becomes a negative modifier on his roll, but his Margin of success increases the penalty for anyone else duplicating his work.
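One possible reading of that mechanic – and it is only my reading; the text leaves room for others – expressed as a trivial sketch:

```python
# My interpretation only, not the rules text: the designer names a
# difficulty, takes it as a penalty on his own roll, and his margin of
# success deepens the final penalty applied to reverse-engineering
# attempts by rivals.

def reverse_engineering_penalty(chosen_difficulty, margin_of_success):
    # Both arguments are positive magnitudes; the result is the
    # modifier applied to rivals' duplication attempts.
    return -(chosen_difficulty + margin_of_success)

print(reverse_engineering_penalty(chosen_difficulty=4, margin_of_success=3))  # -7
```

Under this reading, the designer is gambling: a big chosen difficulty makes his own roll harder, but a good roll on top of it makes the secret very hard indeed to crack.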

    Manufacture

    And, finally, there is Manufacture. This is a Specification that doesn’t require a roll by the player; instead, it’s a case of what seems reasonable to the GM. The design might seem perfectly reasonable to him, in which case, he has only one variable to assign – manufacturing time.

    But the GM may feel that there’s been too much built into the design for the price-point – that the unit cost of manufacture is higher than desired. He gets to assign any variations on the initial parameters that he considers appropriate. Throughout the process, the designer should have had one priority in mind as the most important, and the GM should respect that. It might be the price (and note that the numbers assigned for this are retail, not manufacturing; you need a 30-50% allowance for store profits and 5% for transportation per division in the size category, except 2% for Portable units). It may have been portability, which is a compound of size and weight. It may have been quality.

    These three, and manufacturing cost, form the unholy quadrilateral – you can have any three, but the fourth bears the brunt; or you can have any two as specified, and the consequences can be split amongst the remaining pair.

    Quality, Features, Price, and Size/Portability/Weight. Every design exists at a specific point in the resulting 4-dimensional product space, and the GM decides what that point is, and what – if anything – has to be compromised, based on what the player’s choices have been during the design process, and what he said when making his decisions, and his rolls.

    This process can’t be carried out by the player on his own. It HAS to be done face-to-face with the GM.

    It’s not at all uncommon for potential functions to be disabled in this step. Sometimes, you find products which have all the software built in to perform a certain function, but the hardware has been removed from the design, which often shows the legacy of where those components used to be – mounting holes and what have you.

    Manufacturing Time

    From the manufacturing cost, the GM can set the number of manufacturing processes involved – the lower the cost, the fewer of these there will be. The exact numbers don’t matter; what’s important is that the more Specifications there are, the more has to be done at each step of the process. For game purposes, it’s best to reverse the relationship – setting an average time for the number of processes, then multiplying by the number of specifications. Anything from 1 second to 5 minutes is reasonable.

    The average time per workstation dictates how fast the units can be manufactured and packaged, ready for shipment. The product gives the total manufacturing time for a collection of parts to become a completed unit.
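That arithmetic can be sketched quickly (a hypothetical illustration; the function name and the example numbers are mine, not part of the rules):

```python
def unit_manufacturing_time(num_processes: int, avg_step_seconds: float,
                            num_specifications: int) -> float:
    """Total seconds for a collection of parts to become a completed unit.

    Per the text, the work done at each process step scales with the number
    of Specifications, so we set an average time per step and multiply.
    """
    per_step = avg_step_seconds * num_specifications  # more Specs = more work per step
    return num_processes * per_step

# e.g. a cheap product: 4 processes, 30-second average step, 6 Specifications
# -> 4 * (30 * 6) = 720 seconds (12 minutes) per unit
```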

    There is then only one consideration left: profit per unit. But that’s way beyond the scope of this article.

Using this system to design and construct a space station

Each required system is a Specification – Life Support is one, Accommodations are one, Meeting Rooms are one, a Control Center is one, docking facilities are one, making those universally compatible is another, and so on.

Anything that has to be there in order for the station to do whatever the designer wants it to do should get listed.

The environment for which it is being designed should also be a Specification.

Weapons are the bare-basics minimum currently available that is capable of functioning in the specified environment. Each upgrade in type or in effectiveness has to be contained in an “Improved Weapons” Specification. From bullets to lasers? One Specification. Bullets to better lasers, that’s two Specifications.

Using this system to design a more efficient air-con

This illustrates two important principles, but beyond that is very like the “Better TV” process discussed earlier.

Principle One: you can’t just say “Better Air Con”, or even “More Efficient Air-Con” – you have to give the GM some sort of conceptual point to hang the difference in efficiency on. “Two force-fields in a closed box separate, lowering the air pressure within. This cools the air. A valve releases the cool air into the room while drawing new air into the chamber.”

So long as the description meets the minimum standards for credibility within the campaign, it’s fine; it doesn’t even matter if it would really work (I suspect that it would be rather noisy). There’s enough there to work with – chamber size, force field creation, force field location, force-field speed of movement, movement mechanism, valve size, outflow fan. The design process might show that more effective cooling is achieved by compressing the air in the chamber and then letting it expand when released, with some sort of radiator mechanism or pool of water (water can absorb a lot of heat) in a tray underneath. As I said, there’s plenty of sci-fi / engineering crunch for the GM to use in the design process.

The second important principle is scale – there’s no mention anywhere in “More Efficient Air Con” of the system scale, but refrigerating a room is way different from refrigerating an entire floor-plan or skyscraper. Adjust designs and difficulty levels accordingly.

Still more examples

Some more applications of the system for you to consider:

  • Designing and constructing a castle or stronghold (or dungeon) – done in a similar way to a space station, each element that you want to add becomes another Specification.
  • Anti-magic grenades – which reduce a spellcaster’s capacity for spellcasting for the rest of the day, or until they Rest sufficiently.
  • Designing and constructing a robotic companion or servant – bread-and-butter for a system like this.
  • Designing and manufacturing a chemical solution to a problem – this is exactly what the system was designed to do; each chemical property you want the result to have is a Specification.
  • Building a complex shape using force-fields – the second purpose to which this system was actually put. This purpose ties in directly to Power Skills, which is why last week’s post had to precede this one – they make a 1-2 punch.
  • Designing and constructing a piece of computer software – each function is a Specification, the Operating System is a Specification, the minimum hardware is one or more Specification.
  • Designing and running a PR campaign to resurrect the career of a politician after a scandal – you didn’t think they’d slip away quietly into the night, did you? Even caught red-handed, they would at least try.

The list goes on and on – at least one of the items above hadn’t even been considered before I started typing from my notes.

This is a methodology for creating complex structures, objects, patterns, and effects using the tools provided in every RPG for which those things are relevant, from a Royal Carriage to a plot to take over the world. The basic processes are simple, but have enough depth that projects of any complexity can be handled.

Some final notes on Interval selection

This is one of the most critical aspects of the system. As stated earlier, Specifications x Interval = minimum time, but the totality could be much higher, and is dependent upon choices made by both player and GM along the way, and so, unpredictable.

The likelihood of success of the character’s skill is the only guide that the GM has as to how much extra time the totality is likely to take – the higher the skill, the more closely the total will approach the minimum.

This can be used by the GM to get an indication of the Interval to use, based on available time and task complexity. But that’s not a perfect solution. Set a total estimated time, deduct a margin for Setbacks and Barriers based on the character’s skill relative to the scope of the task, and divide the total by the number of Specifications to get a rough indication. Probably round off to something useful, because the raw result is likely to be anything but.
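The procedure just described can be written out as a quick helper (a sketch only; the names and the choice of rounding to the nearest half-hour are mine, not from the rules):

```python
def estimate_interval(total_time_hours: float, setback_margin: float,
                      num_specifications: int) -> float:
    """Rough Interval: deduct a margin for Setbacks/Barriers, divide by Specs.

    setback_margin is the fraction of total time reserved for Setbacks and
    Barriers (smaller for highly skilled characters, larger for novices).
    """
    working_time = total_time_hours * (1.0 - setback_margin)
    raw = working_time / num_specifications
    # round off to something useful - here, the nearest half-hour
    return round(raw * 2) / 2

# e.g. 100 hours available, 20% reserved for setbacks, 10 Specifications
# -> a rough Interval of 8 hours per Specification
```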

What’s more, the time interval itself can be part of the Specifications, where the player wants to accomplish in minutes something that should take hours, etc.

By looking at the character’s success margin and the number of Specifications, the GM can “tune” the Interval so the project fits the campaign’s pacing. If the hero has a month of downtime and the project has 10 Specs, an Interval of 2 days feels high-stakes, whereas an Interval of 1 day feels comfortable.

There you have it!

There will be no post next week as I have to deal with medical issues and a property inspection mid-week, and have to prep for the latter despite the former. But I’ve already started work on a post for the following week!


Power Skills in Zenith-3 (and elsewhere)


A ‘Power Skill’ measures how adept a character is at pushing an ability beyond its normal limits. These are rules for handling them, from my Zenith-3 System, and adapting them to other game systems, permitting their application to D&D class abilities and Feats and so on. Useful in any genre with unusual abilities.

What Is A Power Skill?

A Power Skill is the skill of using a power or ability in ways beyond the straightforward applications given in the applicable rules. The more of their capability a character has invested in the power or ability, the more adept they are at using it in different ways. A character can start with just the one ‘trick’ and expand their repertoire as they gain in power.

The Zenith-3 Rules

The rules system used by my Superhero campaign (1982-) has gone through multiple iterations over the years.

  • v1: 1982: 18 pages of handwritten amendments to the official Champions Rules. One of the initial changes was to go to a d20-based set of mechanics instead of the Hero System’s original 3d6.
  • v2: 1982-1984: Expanded to 20-odd pages typed on a manual typewriter by the sister of one of the players. The first draft consisted of post-it notes attached to both a copy of the official rules and a photocopy of the v1 rules. Also incorporated the official Champions 2 and Champions 3 rulebooks.
  • v3: 1985-1986: Supplementary notes that built on the v2 rules but didn’t change much; most of the changes were to power and skill descriptions from the Hero System. In this period, a comprehensive game physics was written and provided to players for the first time, and there were a few skills and powers revised to accommodate it.
  • v4: 1987-1989: Added some new powers and skills and redacted some old ones, generally aimed at ‘tightening up’ overly broad skills by splitting them into two smaller ones. More notable now for the general principles, which attempted to create a coherent skills structure for the first time using the concept of ‘dependencies’. Incorporation of these principles was mostly never completed.
  • v5: 1989-2000: I now had a Commodore-64 and some word-processing software to go with it. That soon became a C-128. The rules were now printed by an actual computer printer, for which I had to write my own device driver, triggered by codes that I could embed in the documents. This was the first attempt to transition to a fully self-contained game system, based on Champions 3rd Edition. It ran to over 800 single-sided pages in 5 volumes, two-columns (mostly), and was never finished – but enough was done that it was playable. Introduced the concept of Hybrids – subsystems that were partially one thing and partially another – initially restricted to things like Running. Those pages were completed in about a year-and-a-half, and remained the basis of the campaign for 11 years.
  • v6: 2000-2001: An aborted effort to compress, compact, and complete v5 while incorporating the accumulated errata and revisions from 11 years of play-testing. By now, I had a laser printer and a Windows 98 PC. Some concepts were carried forward into the next edition wholesale, some were further compacted, and some were flagged for deletion because testing showed that they didn’t work as they should in practice.
  • v7.0: 2001-2002: A co-writer came on board by uttering the immortal words, “It will only take three or four weeks”. A huge amount of progress was made in this year-and-a-half. For the first time, it was segregated into individual files, one for each chapter and appendix. In fact, we were working on the appendices and only had one chapter of skill descriptions outstanding. Notably, the system foundation switched from d20 to d%.
  • v7.1 2002-2003: Revision of some powers & disadvantages concepts that didn’t quite work as hoped and correction of errata. Started on the skill descriptions. Most of the contents were completely unchanged from v7.0.
  • v7.2 2003-2005: Addition of some new powers, new disadvantages, and more skill descriptions, plus more inclusion of errata and corrections. A complete overhaul of the three frameworks – Magic, Psionics, and Martial Arts. Most of the content was completely unchanged from v7.1.
  • v7.3 2005-2006: More errata, more revisions, more clean-up and replacement of things that weren’t satisfactory. Most of the content was unchanged from v7.2 but almost 1/3 of the documents had been revised from v7.0 through accumulated revisions. Stayed stable for about 3 years.
  • v7.4: 2009-2012: More errata, more revisions (including a significant revision of the core concepts of the Magic framework, and discarding of the ‘Contacts’ system). But it was a bit more comprehensive than the revisions of previous iterations. There was a lot of effort to find and eliminate ‘infinite points generators’ from the rules – something we thought had been achieved with version 7.2.
  • v7.5 October 2010-2026: In October of 2010 it became apparent that it was possible to push the mechanics too far, and that the points costs for some things needed to be changed. I set about analyzing the situation, discovering that the same power could be under-priced, correctly priced, and drastically overpriced, all at the same time, depending on the combination of modifiers involved. Ultimately, it was shown that this was a rounding error from a process simplification. Dozens of analysis graphs were prepared like the one below, and a complete top-to-bottom revision of the fundamental concepts began. This mostly consisted of removing that simplification, but after the deep-dive into prices that its discovery engendered, I also ended up discarding the notion that all things should have the same price at all power levels. Instead, progressive costs were introduced for things other than skills, especially stats. I also took the opportunity to incorporate some changes that the players had been asking for, like shifting the minimum score in skills to 0. The adoption of this iteration of the rules remains incomplete; for the most part, we’re still operating on a hybrid of v7.4, with some revisions, and some incorporation of v7.5. While unstable, this hybrid remains playable. But the change is so substantial, I’m contemplating renaming this to v8.0.

First Question, having noticed the problem: How significant was it? Answer: Very. I had hoped that it would resolve into a single curve, which would make correction simple. It didn’t.
 

Second Question, how pervasive was it? Answer: Very. 90%-plus of prices were incorrect through enlargement of rounding errors.
 

Third Question, how significant was it at a practical scale? Was it being exaggerated in high-price purchases, and could it be ignored at lower levels? Answer: Not really – not by enough, anyway. The ‘chord’ where prices are correctly calculated is really obvious in this graph – as is the fact that it’s a very small percentage of the whole.

Fourth Question: was continuous plotting the right way to visualize the data? Connecting each data point to the next was great at showing the overall shape, but it implied data that wasn’t really there. Since there was an obvious set of patterns that didn’t fit a simple curve, switching to individual data points would be more useful in trying to understand the pattern.

Fifth Question: Could the problem be isolated / simplified by reducing the values of a single variable? Could a formula be derived that way? Was there a pattern? Answer – there was a pattern, but its structure varied with all five variables. In other words, it was systemic; the whole technique was flawed. I would never be able to look at ‘rounding errors’ quite the same way again!

The Concept of Power Skills and first efforts

The concept of power skills was introduced in v5 of the rules, but virtually nothing was done about implementing that concept until v7.1. The roots of the concept can actually be traced further back, to v2, and the idea of ‘pushing,’ which enabled characters to spend Endurance to boost the effectiveness of a stat or power, either buying more of it temporarily or overcoming a limitation placed on it by sheer effort.

Between this initial idea (which worked) and the formal mention of Power Skills in v5, I began to feel that it was too easy, that there should be some sort of skill in using the power against which the character should have to roll in order to push that power beyond its normal limits.

From that seed, the idea grew that this same skill check would permit the character to use their basic ability in all sorts of tricky ways – bounced shots, some combat tactics, and so on. V7.0 had included, for the first time, rules structured in such a way that skills could also be pushed.

First efforts at implementing the idea went no further than the drawing board, because some fundamental issues remained unsolved: What should the skill cost and how fast should it improve? Should everyone have it, or should it be an ‘optional extra’ that a character had to buy? Or should they get some of it for free but have to pay for the rest? And how much should they pay? Should the expenditure reduce the END cost?

Debate went back-and-forth, with positions adopted, revised, and reversed. When it came time to actually bite the bullet and draft some rules, I deliberately chose a relatively simple solution with some of these proposals to be listed as optional rules or options for future consideration after seeing how well the simple proposal worked in play.

Power Skills In Other Systems

My rules weren’t the only ones to adopt similar concepts. I don’t think that any of these played a role in shaping what my rules became, but I thought it worth mentioning them. Feats from D&D 3.x are a more likely influence.

    Burning Wheel (2002)

    “Ugly Truth” allows a character to manipulate social situations to an extreme degree, or use “Intimidation” to completely break a target. I heard about Burning Wheel when it came out but have never read the rules.

    GURPS Powers (GURPS 4e Supplement) (2005)

    “Power Parry” and “Power Stunts” allowing for exceptional combat feats that transcend standard melee attacks.

    D&D 4e (2008)

    Similar to Daggerheart, “Powers” permit significant alterations to combat conditions and the like. I never played or read the rules for 4e; I was still transitioning to 3.5 at the time, and the negative press and edition wars made me reluctant to spend the money for a copy.

    Powered By The Apocalypse (2010)

    This is a game design framework that has been the foundation for hundreds of Indie game systems. “Moves” or “Actions” function similarly to power skills by focusing on dramatic, genre-specific, high-impact actions rather than mundane tasks. I had never heard of it until I started doing background research for this article.

    Forged In The Dark (2017)

    An SRD used as a foundation for other RPG systems. To date, there are over 300 systems based on these standard mechanics, which derive from the Blades In The Dark rules set (2015-2017).

    Daggerheart (2025)

    “Power Cards” or utility powers are used to give characters, particularly martial ones, more engaging and specialized options beyond just moving and attacking.

The V7.4 System

Power Skills are defined in the same way as other skills in the campaign. Excerpting them for this presentation has bypassed all the explanations that go with that standard approach, so I’ll have to explain them as I go along.

Such explanations will be boxed off, like this. I’ll try to keep them to a minimum.

Let’s start with some fundamentals:

Skills are bought with Skill Points, enabling the subdivision of a single character point into smaller (and hence more flexible) pieces. All prices given within the skills system are in skill points (SP) unless otherwise stated. These Skill Points are purchased with character points, and are used to both buy and improve skills.

How many skill points you get for a ten-character-point investment (purchases are usually made in blocks of 10) is determined from General Aptitude. If characters purchase less than a full block of 10 character points’ worth, they get a -1 on the Skill Point Conversion and round the fraction down.
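A sketch of that conversion, assuming a hypothetical Skill Point Conversion rate (the actual General Aptitude table isn’t reproduced here, so treat the rate as a given), with partial blocks handled as described:

```python
def skill_points(character_points: int, conversion: int) -> int:
    """Convert character points spent into skill points.

    `conversion` is the Skill Point Conversion: SP received per full block
    of 10 character points (determined from General Aptitude; the table
    itself isn't reproduced here). Any partial block takes -1 on the
    conversion, with the fraction rounded down.
    """
    full_blocks, remainder = divmod(character_points, 10)
    sp = full_blocks * conversion
    if remainder:
        # partial block: -1 on the conversion, round the fraction down
        sp += remainder * (conversion - 1) // 10
    return sp

# e.g. at a conversion of 25 SP per 10 CP:
#   23 CP -> 2*25 + (3 * 24)//10 = 50 + 7 = 57 SP
```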

4.9.1 Ex-cathedra Commentary

The concept of Power Skills coalesced from four directions simultaneously. I was looking for a way to reward characters who invested a lot of character points in a given power, and at the same time I was looking for a way to encourage characters to think about using what powers they had in different ways. I was also looking for a mechanism that would describe how effectively characters had mastered what abilities they had bought. And, finally, there was the existing concept of “pushing” a power, which should not be automatic, but should require a skill roll of some kind. All of these contribute towards character consistency and focus, encouraging characters to become singular masters of a single related ability instead of buying everything under the sun – thereby leaving more scope for unique individuals within the rules.

The concept of each power having an associated skill which would permit the character to express the power in different ways first arose during initial work on the d%-based skills system, when considering better ways of representing a flying character’s ability to do barrel rolls, Immelmanns, etc., initially inspired by the techniques and rules provided for ad-hoc spell use. This would define the various flight maneuvers as tasks with an associated base difficulty, which could then be used as the basis for an appropriate skill check. The approach appealed because it would be equally applicable to Stukas, B-52s, hot air balloons, and flying characters – only the difficulty would change. The idea was expanded on further consideration, prompted by recollections by Graham MacDonald of some of the ways in which Force Fields could be used – frictionless surfaces, simple shapes for grabbing things (à la Green Lantern), ball bearings, etc. – and by similar expansions of capabilities by other characters in the past, such as Ian Mackinder’s use of the “Earthquake Special” attack with Titan’s STR.

It was originally thought that most of the power skills would be reflections of existing skills – for example, EDM (Extra-Dimensional Movement, including Teleport) would be analogous to Warp Physics – and that the character would get a number of skill points towards the purchase of the appropriate skill free with the purchase of the appropriate power. This plan did not survive, however; in some cases there were too many appropriate skills, in others there were no appropriate skills, and in still others there were skills that seemed appropriate but which just didn’t work on closer examination.

And so, the current, less-defined, more flexible system was created, in which each power has its own unique Power Skill. Ordinary skills may be complementary to the Power Skill (see Skill Use below), and (when appropriate) the Power Skill may be complementary to a more traditional skill – thereby reflecting the benefits a character gets to his understanding of Warp Physics from his many EDM journeys. But each power has a skill all its own.

4.9.2 Basis

All power skills are based on a specific single Characteristic or Skill Roll, and are treated as Fundamental or Expert skills based on the Base Cost of the power per level.

Skills are d% based. Each stat converts to a characteristic roll to permit saves vs STR, for example. Some of these are referenced frequently in play, some are quite rare. The suffix “#” is used to distinguish characteristic score from saving roll, so CON refers to the stat and CON# to the roll. The stat rolls generate Aptitudes, which are ‘the potential for skills’ – there are 15 of those.

The Aptitude scores are then used to generate actual Fundamental skill values. This approach means that you can improve a stat without having to recalculate dozens of skills; all that changes is the cost of improving aptitudes, saving a few points off the cost of the stat improvement.

Skills are broken down into Fundamental and Expert skills. The two major differences are (1) The Fundamental Skills are a fixed list; and (2) Expert skills are based on Fundamental Skill scores. There are generally 3 Fundamental skills per Aptitude. There are also user-definable Advanced Expert Skills, but we don’t need to worry about those.

Skills range in value from -80 to +150. “Average” works out to be -12. A skill of 0 or better is enough for the owner to qualify for a job using the skill; a skill of 20 or more represents a professional qualification in the skill, or the equivalent. There are ten ‘ranks’ of characters, from pathetic normal (not called that) to mega-deity (not called that, either), with various grades of Paranormal occupying three of the middle grades. Each caps skills to a different maximum and adds or reduces the cost of skills; the scores quoted in this paragraph relate to Veteran Paranormals, one step below Demigods.

The choice is a matter for assessment by the referee and of the creativity of the character. In some cases, the Basis will be obvious; in others, it may require some thought by the character’s creator. EG: HKA (Hand-to-hand Killing Attack) would normally be based on STR#, but a character could be built where it was based on AGIL# (the character doesn’t use brute strength, he uses precision), or INT# (the character identifies and targets vulnerable points on the target), or even WILL# (the character uses determination and puts English on his blows to create the additional damage).

The examples illustrate how the appropriate choice of Basis adds to the definition of a power – the above are 4 different interpretations of HKA. What was previously justification and explanation of a power now has a real impact on the description of the power and what can be done with it.

It is expected that characters will tend to play to their strengths – a character with a high stat will normally have powers that are derived from that stat in some way – but specious logic will be frowned upon. Just because a character has a high stat is not a good enough reason for power skills to be based on it.

When no compelling case is made for any other choice, it is presumed that the basis will be INT#, reflecting that the character’s ability to use the power in different ways is dependent on his ability to work out how to use it in those different ways.

Skills are also classified into subcategories – A-F for Fundamental Skills, G-J for expert skills. These designate sub-tables within the system – lower is cheaper and easier to learn and generally gives more skill ‘bang’ for your buck, higher is more expensive, harder to learn, and gives a lower score.

4.9.3 Classification Code for Base Value

This is determined by the base cost of the power per level. Note that codes A-E indicate that the Power Skill is treated as a Fundamental Skill when necessary and F-J indicate that it’s considered an Expert Skill.

    <5     A
    5     B
    6-10     C
    11-15     F*
    16-20     D
    21-25     G
    26-30     E
    31-35     H
    36-40     I
    41+     J

         * Use the F column under “Expert Skills”.

EG: Telekinesis has a base cost of 15 points. It would use the “F” column of the Expert Skills table to determine the Base Value of the Power Skill. A character with a Basis of 15 would therefore have a Base Power Skill of 5%.
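The table above can be captured as a simple lookup (a convenience sketch; the function name is mine):

```python
def classification_code(base_cost_per_level: int) -> str:
    """Classification code from a power's base cost per level (section 4.9.3).

    Codes A-E are treated as Fundamental Skills, F-J as Expert Skills
    (F sits out of alphabetical order in the cost bands, per the table).
    """
    bands = [(4, "A"), (5, "B"), (10, "C"), (15, "F"),
             (20, "D"), (25, "G"), (30, "E"), (35, "H"), (40, "I")]
    for upper, code in bands:
        if base_cost_per_level <= upper:
            return code
    return "J"  # 41+

# Telekinesis: base cost 15 per level -> "F" (use the F column of Expert Skills)
```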

4.9.4 Base Cost

This is always 0 for a Power Skill.

4.9.5 Free Improvement

Each power has a “difficulty of learning” value given in the table below, based on how flexible the power is and how difficult it is to adapt the basic power to some exotic usage. The character gets 1 skill point worth of improvement to the base value for every 2 character points in the power’s Net cost.

The costing for powers applies modifiers in two stages, where the Hero System has one. “Plus” and “minus” modifiers are applied to the total base cost first, yielding the Active Points Cost, which is used to determine the END cost of using the power. The formula is

     Active Cost = Total Base Cost x (1 + total “plus modifiers”) / (1 – total “minus modifiers”).

The Active Cost is then adjusted by “times modifiers” and “slash modifiers” to get the Net (or Actual) Cost:

     Net Cost = Active Cost x (1 + total “times modifiers”) / (1 + total “slash modifiers”).

These are normally summarized with math symbols – “+1, -2, x3, /4”. Most modifiers contain only a single type of modifier value, but there are a few rare ones with both an Active and a Net contribution.
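The two-stage calculation can be sketched directly from the formulas above (the example modifier values here are invented for illustration):

```python
def active_cost(base: float, plus: float, minus: float) -> float:
    """Active Points Cost: base cost adjusted by "plus"/"minus" modifiers.

    Used to determine the END cost of using the power.
    """
    return base * (1 + plus) / (1 - minus)

def net_cost(active: float, times: float, slash: float) -> float:
    """Net (Actual) Cost: Active Cost adjusted by "times"/"slash" modifiers."""
    return active * (1 + times) / (1 + slash)

# e.g. base 60, with +1/2 in plus modifiers and 1/4 in minus modifiers:
#   Active = 60 * 1.5 / 0.75 = 120
# then x1 in times modifiers and /1 in slash modifiers:
#   Net = 120 * 2 / 2 = 120
```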

I’m not going to quote the whole list, just the first dozen or so entries.

    Ablative Armour     b
    Aid     d
    Armour     b
    Change Environment     g
    Characteristics     –
    Clairsentience     d
    Cosmic Awareness     f
    Damage Reduction     d
    Damage Resistance     c
    Danger Sense     d
    Darkness     b
    Density Increase     b
    Desolidification     e

EG: A character has bought 80 points worth of Telekinesis. Assuming the character had a Basis of 15, giving a Base Power Skill of 5%, that would give them 40 skill points worth of improvement in the Telekinesis Power Skill. Consulting the table above, TK (Telekinesis) has a code of “d”, so finding 40 points worth of improvement on the “d” column in section 4.7.1 gives +40, for a total skill of 45%.
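That arithmetic can be sketched as follows. Note the assumption: the 4.7.1 improvement table isn’t reproduced here, and the worked example implies the “d” column gives +1% per skill point in this range, so a 1:1 mapping is used purely for illustration.

```python
def free_improvement_points(net_power_cost: int) -> int:
    """1 skill point of free improvement per 2 character points of Net cost."""
    return net_power_cost // 2

def total_power_skill(base_skill: int, net_power_cost: int) -> int:
    # ASSUMPTION: a 1:1 skill-point-to-percent mapping stands in for the
    # 4.7.1 "d" column, which isn't reproduced in the article. The +75
    # maximum improvement (section 4.9.6) is applied as a cap.
    improvement = min(free_improvement_points(net_power_cost), 75)
    return base_skill + improvement

# The TK example from the text: an 80-point power and a 5% base skill
# -> 40 free skill points -> a total Power Skill of 45%
```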

Click the link to download

I’d love to include the actual tables here that the above example is using, but Hero Games imposes a four-page limit on the presentation of House Rules. I’m skirting close to that limit already, even assuming these explanatory interjections don’t count.

What I can do, I think, is to provide those tables in a PDF.

4.9.6 Maximum Improvement

Power Skills cannot be improved by more than +75%, as shown on the improvement table.

4.9.7 Costed Improvement

Characters can improve Power Skills using skill points like any other skill. The “free improvement” amount does not count as improvement for the purposes of determining the cost of such purchased skill, but DOES count against the +75% improvement limit.

EG: Our character with the 80 points of TK and 45% TK Skill wants to buy an extra +35% for his power skill. Looking up the table in 4.7.1 shows that this costs 35 skill points. Since 40* + 35 = 75, this is the maximum that the character can buy in improved Power Skill.

     * From the earlier part of the example.

    Purchase Restriction

    You cannot spend more additional skill points on a power skill than you get free.
    EG: In the case of our Telekinetic, he can’t buy more than 40 skill points worth, or +40. Since the purchase he has made, +35, is less than this, the purchase is fine.

    Lost Points

    The downside of buying additional improvement to the Power Skill is that the points are “locked in” once play begins. That means that if the character buys additional power, raising his free improvement, any points expended in a purchased improvement are lost. Purchasing additional power skill should be perceived as a bootstrap to give the character a desired level of flexibility before the character has sufficient points to invest in the power level actually desired.

4.9.8 Reduction Of Power Skill Scores

It’s unusual to do so but characters can reduce their power skill scores in the same way that they buy improvements. However, any such reduction is considered a permanent reduction in the base power skill, so even a later improvement in the power skill, or the purchase of additional power, leaves the character with a lower net Power Skill, and the maximum improvement in the power skill becomes +75 from the reduced value. This is useful for simulating powers that the character wants to have under only marginal control.

Whenever the character chooses to reduce power skill scores, they should also suggest (in writing for future reference) a story arc that permits the character to “buy back” the reduction. The referee can schedule or rewrite this plot arc as he deems desirable, but cannot force the character to pay off the reduction.

In other words, the referee can’t run the scenario until the player wants him to, but he can wait as long as he wants to thereafter. These restrictions are designed to prevent characters taking advantage of the rules to buy extra ability and then paying off the limitation before it has a chance to bite the character.

Now we get to the meat of the rules, the parts that will matter the most to readers.

4.9.9 Default Use Of Powers

Each power description should include a default use of the power. This must be as basic as possible (while retaining the special effects that flavor the power and the appropriate consequences of any advantages and limitations) and is the effect that takes place (if any) when the character fails a power skill roll. Some powers have these largely predefined.

EG: Our continuing TK example: “Default: Push the target aside with full STR” (normal attack).

4.9.10 Routine Use Of Powers

Anything that is a default or obviously-straightforward use of the power will usually be declared a “Routine” use of the power. This is anything at the “aim and fire” or “just pull the trigger” level. The referee will usually not require a roll for such uses of the power unless the character’s power skill is extremely low (score less than 0) before the difficulty modifier.

4.9.11 Congruent Powers

It is possible for a character to have two or more variations on the same power. When this happens, they have the choice of declaring the variations as “Congruent Powers” or treating them as separate.

When powers are Congruent, the “free points” are determined by adding 1/2 the net value of the most expensive power, 1/3 of the net value of the next most expensive, then 1/4, 1/5, and so on. This is a compromise between assuming that the additional variations contribute fully, with NO expertise overlap between the two powers, and assuming 100% overlap (which could penalize characters for focused concepts). Both powers use the same Power Skill roll. The higher classification code (the one closest to “A”) is used for determining both Base Skill Values and improvement costs.
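The weighting scheme can be sketched in a few lines of Python. Rounding each fraction up, in the character's favor, is an assumption drawn from the worked example in 4.9.16; the function name is mine:

```python
import math

def congruent_free_points(net_values):
    """Free points for Congruent Powers: 1/2 the net value of the most
    expensive, 1/3 of the next most expensive, then 1/4, 1/5, and so on,
    each rounded in the character's favor."""
    total = 0
    for i, value in enumerate(sorted(net_values, reverse=True)):
        total += math.ceil(value / (i + 2))  # divisors 2, 3, 4, ...
    return total
```

For a 100-point power congruent with a 46-point variation this yields 50 + 16 = 66 points' worth of free skill, matching the worked example later in these rules.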

“Force Field” and “Force Wall” are considered eligible for treatment as Congruent Powers.

4.9.12 Elemental Controls

Skipped as irrelevant to most readers.

4.9.13 Multipowers

Skipped as irrelevant to most readers.

4.9.14 Spellcasters, Psis, and Martial Artists

Skipped as irrelevant to most readers.

4.9.15 Gadgets

Gadgets, by definition, are ad-hoc constructs; the character has minimal skill in using them for any given purpose. They are always treated as per the basic system, but there’s no point in listing the relevant skill because the device is here today and gone tomorrow. That’s why it’s always better to give a gadget to someone who has skills in the relevant area than to use it yourself. HOWEVER, in terms of CONTROLLING the gadget, the referee can choose to permit the use of a “Control – Gadgets” skill based on the cost of the gadget pool in character points.

This is in contrast to FOCI which are gadgets bought more-or-less permanently using character points, in which the character gets a skill based on the net cost of the Focus or 1/4 the ACTIVE points, whichever is higher.

An exception to these rules is vehicles, which are controlled using the appropriate driving / piloting skill, regardless of the vehicle’s cost to the character.

4.9.16 Congruances With Framework Elements

It is obviously possible for a character to have a “normal” ability and an element within a framework that are congruent, eg a character could have an EB (Energy Blast) and a separate EB that is in a multipower or elemental control or whatever. Where this is the case, use the appropriate “modified” value for the power in the framework as a congruent value to stack with the power outside the framework.

EG: a character has a 100 point EB and a spell that cost 6 character points, on which he has listed a 20-extra-Mana cost. The spell is therefore worth a net “value” of 46 points for the purposes of determining skill level in the power, but this is treated as congruent to the 100 point EB. The character’s “free” skill with EBs is 1/2 of 100, or 50 points worth, plus 1/3 of 46, which rounds (in the character’s favor) to 16 points, for a grand total of 66 points worth of “free” skill.

Author’s Note: This is a complicated situation that I hope never arises in real life but feel that sooner or later it’s sure to come up….

Planned Expansions in v7.5

4.9.17 Specialties

Characters will be able to buy a specialty in a specific use of a Power Skill, for example “Trick Shot”. This costs the standard amount in Skill Points for such a specialty as though the skill were the same as any other, and gives +30% to the use of the power in that way.

4.9.18 Expert Versions

For powers rated A-E, once the maximum improvement has been achieved, characters can choose to purchase an Expert Version of the power skill with GM approval. Such approval will only be given if the character has made extensive use of the Power Skill in play. The expert skill has a base level of the skill achieved in the Fundamental Skill version and permits improvement of the skill by another +75%.

The process is as follows:

1. The code transitions to the code 5 higher – A to F, B to G, C to H, D to I, E to J.

2. Consult the purchase price table looking in the appropriate column for a base skill of 1/2 the indicated level.

3. The indicated price is the cost of +0% in the “Expert Version.”

4. The same code is cross-referenced with the desired improvement in the Expert Version to get the price of the improvement.

5. Improvement purchased is added to the existing skill score.

Purchasing the Expert Version reduces the level of difficulty for maneuvers by one step – ‘Easy’ becomes ‘Routine’, ‘Difficult’ becomes ‘Moderately Difficult’, etc. – in addition to permitting further increases in skill level. This makes extremely difficult or complex maneuvers more achievable and less complicated ones more reliable.
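Step 1's code transition is mechanical enough to express directly. A sketch (helper name mine):

```python
def expert_code(code):
    """Shift a power's classification code five letters up:
    A->F, B->G, C->H, D->I, E->J.
    Only powers rated A-E have Expert Versions."""
    if len(code) != 1 or code not in "ABCDE":
        raise ValueError("Expert Versions exist only for codes A-E")
    return chr(ord(code) + 5)
```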

Task Difficulty

There’s no section number on this section because I cut out a whole truckload of stuff not relevant to power skills – and what’s left starts off in mid-section.

Task Difficulty is the GM’s response to the question, “How hard should this proposed action be, under ideal conditions”. It sets a baseline modifier. That then gets adjusted progressively to take into account the differences between “ideal conditions” and the actual circumstances in the field.

    Task Difficulty Table

         Trivial task +100
         Routine task +50
         Easy task +25
         Moderately Difficult task +0
         Difficult task -10
         Very Difficult task -30
         Extremely Difficult task -50
         Almost Impossible task -75
         Absurd to even try -100
         “Permission Denied” -120

    Environmental Circumstances

    Environmental circumstances are usually rated on a +50 to -50 scale, but extreme cases may call for plus-or-minus more than that. A positive modifier indicates a more ideal environment, a negative modifier indicates a handicap. It is generally easier to rate the suitability of the environment from 1-10, multiply the result by 10, and subtract 50, but the technique employed is left to the referee’s best judgment.

    Action Modifiers

    Third, the referee should assess anything else the character is, or has been, doing, that might improve or lessen the chances of success. This includes any modifiers from combat maneuvers. These are generally rated on a +25 to -25 scale each, and in general there will be no more than 1 or 2 of them. He should also assess and include anything else about the character who is making the check that is relevant, which includes Aiming (refer Chapter 12 Combat), Complimentary Skills (see below), Specialties, etc.

    Target Modifiers

    Fourth, the referee should assess anything the target is, or has been, doing, that might improve or lessen the chances of success. This includes any modifiers from combat maneuvers being performed by the target, movement, etc. These are generally rated on a +25 to -25 scale each, and in general there will be no more than 1 or 2 of them. He should also assess and include anything else about the target that is relevant, for example the size of the target relative to the range +([size/range-1] x 25), in inches (one inch = 2m).

    Range Modifiers

    Fifth, the referee should apply the appropriate range modifier. This is normally the standard range modifier given in chapter 12, Combat, but this can be modified by advantages and limitations on powers, etc.

    Anything Else

    Finally, the referee should apply anything else that’s applicable. There generally won’t be anything, but it’s worth a moment’s thought to double-check.

    The Total

    The total should then be determined (assuming the referee hasn’t been working it out as he went along) and, if necessary, adjusted to fit within the absolute limits of a ±150 modifier. The referee need not announce the exact modifier, simply the closest “category” to the total – taken from the same Task Difficulty scale given above.

    EG: If the modifiers total +35, the referee need only announce that it’s an “Easy” task (Easy = +25 is closer to the total than Routine = +50).
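The whole pipeline is easy to automate. This sketch (names are mine) sums the modifiers, clamps the total to ±150, and names the closest category from the Task Difficulty table:

```python
# The Task Difficulty categories and their baseline modifiers.
CATEGORIES = [
    ("Trivial", 100), ("Routine", 50), ("Easy", 25),
    ("Moderately Difficult", 0), ("Difficult", -10),
    ("Very Difficult", -30), ("Extremely Difficult", -50),
    ("Almost Impossible", -75), ("Absurd to even try", -100),
    ("Permission Denied", -120),
]

def announce(modifiers):
    """Clamp the total modifier to +/-150 and report the category
    whose baseline is closest to it."""
    total = max(-150, min(150, sum(modifiers)))
    name = min(CATEGORIES, key=lambda c: abs(c[1] - total))[0]
    return name, total
```

announce([25, 10]) reports ("Easy", 35), +25 being closer to the total than +50.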

4.14.5 Power Skill considerations when designing Powers

In the old days, what mattered was getting your power for the fewest possible points. The less you spent on something, the more you could spend elsewhere.

Well, the old days are gone. The Power Skill system rewards focused characters with flexibility and ability.

In the past, it was enough to decide how much of something you wanted to have – an RKA that was so big, Flight that was this fast, and so on. Then you tried to afford all these abilities. With the advent of the power skill, it is now just as important, or even more so, to decide how much you want something to cost. There is a benefit to NOT reducing the END cost, and it has to be weighed and compared with the benefits of doing so.

When designing powers, the definitive question is now “What do you want to be able to do with it?”. Power Levels and Power Skill levels are both necessarily defined by the question. Are there any defined standard maneuvers that you want to be able to achieve? The difficulty, and resulting chance of success may well be the defining issue for the power.

In Practical Terms

1. Powers have a base rating according to how much a minimum level costs. You look up that rating on a table.

2. Next, you decide the skill basis of the power. This is usually a stat roll, but can be a skill roll if a sufficiently convincing case is presented.

For example, St Barbara’s Flight is based on her Acrobatics skill rather than her Agility, because she literally uses aerial acrobatics for sharper changes of direction. The downside is that she has to shut her power off for a round to do so. For most characters this would be a massive restriction but because she is literally an Olympic Gymnast with skills to match, St Barbara gains massively in the accuracy of her flight maneuvers and the likelihood of success in them compared to most characters. This also feeds verisimilitude – this “stop, reorient, re-start” approach is probably closer to how a character with that background would fly.

3. Cross-referencing the score in the Basis with the classification code gives a base score in the skill.

4. Next, you find the power itself on a list for its improvement code, and cross-reference the net cost of the power with that improvement code to get the amount of free improvement in that base score that has resulted from spending more than the minimum on that power.

5. The maximum improvement in the power skill is either determined by subtracting this ‘free improvement’ from 75%, or by doubling that free improvement, whichever is LOWER.

6. Another table using the same codes can either yield the cost of that much improvement or the amount of improvement for a given cost, whichever is more useful.
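Step 5 is the one that trips people up, so here it is as a sketch (function name mine, and the 75-point ceiling is the one described above):

```python
def max_improvement(free_improvement):
    """Maximum purchasable improvement in a power skill: the LOWER of
    (75 - free improvement) and (2 x free improvement)."""
    return min(75 - free_improvement, 2 * free_improvement)
```

So a character with +20 free improvement can buy up to +40 more, while one with +40 free is capped at +35 by the 75-point ceiling.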

Adaptation To Other Systems

Until the shift to the d% based system, skills were rated on a d20 scale. To facilitate characters being adapted from the old system, or from standard Champions / GURPS, this section listed approximate equivalents.

What’s interesting is that this is a two-way street, providing an opportunity to adapt the mechanics provided for D&D or whatever. Even if you only use this system when your existing mechanics don’t cover whatever a PC is trying to do, it can be useful to have this in your back pocket.

    As a guide, the following are a list of approximate conversions from the d20 scale to the new d% scale:

      1 = -80
      2 = -71
      3 = -64
      4 = -57
      5 = -48
      6 = -41
      7 = -34
      8 = -25
      9 = -18
      10 = -11
      11 = -2
      12 = 4
      13 = 12
      14 = 20
      15 = 27
      16 = 35
      17 = 43
      18 = 50
      19 = 58
      20 = 66
      21 = 73
      22 = 81
      23 = 89
      24 = 96
      25 = 104
      26 = 112
      27 = 119
      28 = 127
      29 = 135
      30 = 142
      31 = 150
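The table is very nearly linear, so if you prefer a formula to a lookup, the following approximation (my own fit to the table's endpoints, off by a point or two in a few rows) reproduces it:

```python
def d20_to_pct(score):
    """Approximate the d20-to-d% conversion table above:
    1 maps to -80 and 31 maps to +150, a slope of 230/30 per point."""
    return round(-80 + (score - 1) * 230 / 30)
```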

    The same conversion scale can be used for the Official 3d6 Champions System. However, it should be noted that it is now much harder to get higher scores; it is also recommended that, before conversion, the old score be reduced by 2 to give a more realistic target.

    EG: A character used to have Acrobatics 18/-. In the new d% system, that’s equivalent to an Acrobatics skill of 50%, but a more realistic figure to aim for in character conversion comes from converting 18-2=16/-, ie 35%.

But, if you want to go further, or your game mechanics are neither d20 nor 3d6-based (Traveler, I’m looking at you), there are two essential translations that you will have to make; everything else will flow from those.

Skills Extremes

If you look at the tables provided in the attachment, you will find that the highest base skill value is 100, and there’s a maximum improvement of +75 to that. And a specialty can get you another +30, if it’s relevant. So the highest possible skill is 205.

What is the highest skill possible in the game mechanics you are using? If it’s open-ended, use 3x, 4x, or 5x the highest roll result. So, for Traveler, that’s 2d6 -> 12; x3 = 36, x4 = 48, x5 = 60. One of those three numbers will be the maximum.

The absolute minimum that you can usually have in a skill would appear, at first glance, to be zero – but in the realm of roll conversions, that’s misleading. Remember the -2 suggested for 3d6 conversions? That’s because 3d6 have a minimum roll of 3, and both d20 and d% have a minimum roll of 1. You have to ‘pin’ the adjustment to the same foundation.

Let’s pick a hypothetical 4d6 system and derive a conversion to a basic d%. This consists of a mathematical formula of the form, d% = md + (4d6r – m4d) x (Md – md) / (M4d – m4d).

Looks complicated. But it gets a lot simpler once you realize that, in defining the system basis, you’ve defined everything in that formula except the roll itself as a constant, so that you end up with something that reads, d% = ## + (4d6result – ##) x ##.

md = minimum roll on d%, Md = maximum roll on d%, m4d = minimum roll on 4d6, M4d = maximum roll on 4d6. What we’re trying to match is the range of variability.

So: d% = md + (4d6r – m4d) x (Md – md) / (M4d – m4d)
= 1 + (4d6r – 4) x (100 – 1) / (24 – 4)
= 1 + (4d6r – 4) x 99/20
= 1 + (4d6r – 4) x 4.95.

It’s almost certainly close enough that you could use 1 + (4d6r – 4) x 5.
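The same pinning logic generalizes to any dice mechanic. A minimal sketch (function name mine):

```python
def scale_roll(roll, src_min, src_max, dst_min=1, dst_max=100):
    """Map a roll from one dice range onto another, pinning both
    scales at their minimums so the extremes line up."""
    span = (dst_max - dst_min) / (src_max - src_min)
    return dst_min + (roll - src_min) * span
```

scale_roll(4, 4, 24) gives 1 and scale_roll(24, 4, 24) gives 100, with the 4d6 results spread evenly in between.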

In our case, the d% skill scale runs from -80 to +150, and the conversion target is whatever range the system you’re converting from delivers.

But if I were working up such a conversion for my own use, I would actually break the results into two bands – one for less-than-average humans and one for more – simply because the Zenith 7.4 rules bias that value low to leave a little more room for skilled individuals.

Basis Decision

Once you have the ranges, the next decision to be made is what you’re going to base the ‘free base competence’ on – in other words, how much skill are you going to give the characters for free, and how are you going to measure the answer?

Both the Zenith 7.4 rules and the Hero System from which they derive are point-buy systems. Everything is under the control of the player. That’s not the case with other systems, where stats may be rolled, not chosen.

I urge GMs to get creative. For example let’s pick a class ability from D&D – just about any version will do. It first becomes available at, say, level 8, and is based on a character’s STR score.

As a basis for a skill in “using that ability,” I would look at 8 + STR score. I might add multipliers to change the relative importance of each contribution and bring the STR scores closer to the range of the aptitudes used in this system to set skill levels – that’s up to you.
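As a sketch of that suggestion (the weights and names here are hypothetical, not from any edition of D&D):

```python
def ability_skill_basis(level_available, stat_score,
                        level_weight=1.0, stat_weight=1.0):
    """A possible 'free base competence' for a class ability:
    the level at which it unlocks plus the governing stat score,
    with optional multipliers to rebalance the two contributions."""
    return level_weight * level_available + stat_weight * stat_score
```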

Improvement Cost & Quantity

The third design parameter also deals in the points-buy question – how much does it cost to improve the skill, and how much improvement should you get for that expenditure?

Difficult Decisions

And finally, you have all those decisions that were so hotly debated about the philosophical underpinnings of the system. Really, these boil down to one simple-to-state question: Does everyone with the relevant ability get some or all of the Skill that goes with it, or is this something extra that they have to buy or obtain somehow?

Again talking about D&D for a moment, I can envisage a whole range of magic items – tokens or badges or rings – that do nothing but unlock “Prowess” in a particular class ability.

I had to focus in on this system because the adventure that I’m working on will give one or two of the PCs some difficult challenges in using specific powers – and neither of them has the Power Skill scores for those powers written down. So I will have to calculate them.

And there wasn’t enough time for me to do that AND write a post for Campaign Mastery. This, on the other hand, was 70% copy-and-paste. Nevertheless, the more I worked on it, the more I realized the value of the premise of the article – this IS something that should be more widely available for GMs to consider. It IS valuable as a concept and as a technique outside of the Zenith 7.4 rules. And therefore, this is NOT a filler post – which is what I thought it might be, when I started it.


What’s The Real Value? A ‘Trade In Fantasy’ Extra


A simplified mechanism for the simulation of trade in an RPG where it is not to be the focal point.

Image by Roy from Pixabay

Background

A confluence of thoughts from different sources came together the other day relating to how we assess profit from selling something. I’m not sure it was strong enough to count as a revelation, but it’s an insight at the very least, a way of looking at objects and trade goods that helps encapsulate an entire economy.

It’s completely irrelevant to the currently-in-progress chapter of the Trade In Fantasy series, but completely relevant to the broader subject, so it will eventually get given a place in the total text – I’m just not sure where it should go at this point. Because it’s a fairly fundamental conceptual tool, it will probably end up being tacked on to the end of one of the chapters already published, or inserted somewhere into the middle of it.

For today, though, it’s a standalone subject for later integration into the main text.

The fundamental concept of Trade

The whole basis of Trade as a concept is the notion that some commodity or item is worth more over there than it is here, and the difference is more than the cost of transporting the Goods over the intervening distance. A merchant therefore buys it here, moves it there, and sells it, becoming wealthier at the end of the process than they were at the start of it.

To avoid bogging down in nomenclature, let’s just call it a ‘thing’.

Processed ‘Things’

There is often an intermediate step in which a character with appropriate expertise takes the commodity, adds work to it, and transforms it from one ‘thing’ into another. It’s usually simpler to disconnect the supply chain into separate transactions, but that’s not always the case.

So,
1. Person #1 makes or extracts Thing A at Location 1.
2. Person #2 buys Thing A from Person #1.
3. Person #2 transports it to location 2.
4. Person #2 sells Thing A to Person #3.
5. Person #3 transforms Thing A into Thing B by adding Work to it.
6. Person #4 buys Thing B from Person #3.
7. Person #4 transports Thing B to location 3.
8. Person #4 sells Thing B to Person #5.
9. Person #5 either resells Thing B to Person #6, or adds more Work to it to create Thing C.
10. If Thing C was created, the process loops back to step 6 with new People added to the supply chain.
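The chain above can be sketched with numbers to show how each link earns its margin. Every figure here is invented purely for illustration:

```python
# Each link buys the Thing, adds cost (transport or work), then sells it on.
links = [
    {"who": "extractor", "buy": 0,  "added": 5,  "sell": 10},
    {"who": "carrier",   "buy": 10, "added": 4,  "sell": 20},
    {"who": "processor", "buy": 20, "added": 15, "sell": 45},
    {"who": "finisher",  "buy": 45, "added": 30, "sell": 100},
]
margins = [l["sell"] - l["buy"] - l["added"] for l in links]
print(margins)  # [5, 6, 10, 25]
```

Note that each link's sale price becomes the next link's purchase price; that is the whole supply chain in miniature.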

Each of these steps is as simple as it’s possible to make it, but to make it even clearer, let’s look at an example.

1. Person #1 digs up some iron ore.
2. Person #2 buys the iron ore from Person #1.
3. Person #2 transports it to a smelter.
4. Person #2 sells the ore to the owner of the smelter, or pays them to add work to it.
5. Person #3 transforms the ore into iron, probably in the form of rods or ingots.
6. Person #4 buys the ingots from the smelter (or Person #2 reclaims his property, becoming Person #4 in the process).
7. Person #4 transports the iron to location 3.
8. Person #4 sells the iron to Person #5.
9. Person #5 adds more Work to it to create a steel sword.
10. Person #5 sells the sword, either direct to the public, or by completing the commission to create a sword, or to a retailer (Person #6), or to another intermediary (Person #7).
11. Person #6 (if any) sells the sword to the public, or joins it with others to fulfill a supply contract. It will almost certainly have to be moved, a service Person #7 is hired to provide.
12. Person #7 moves the sword (and other trade goods) from the place it was made to a place where there is higher demand for such.
13. Person #7 sells the sword if they own it, or delivers it. The purchaser, Person #8, either sells it to the public, uses it to fill a commission or contract, or keeps it as a personal possession.

This example breaks down a little in steps 10 and 13 because swords are typically sold by the blacksmith and not to a retailer, but it’s good enough. Many steps may be added – decorations, and scabbards and hilts – before the final product is achieved. For a presentation sword, the sort of item one Noble might gift to another, I could easily double the length of the list.

Value isn’t what you think, perhaps

At each stage of the process, the Thing being traded has three ways of being valued, and they are all valid in some respect.

There’s how much it has cost so far.
There’s how much the current owner can sell it for.
And, there’s how much the ultimate end-product can be sold for.

At each stage of the process, the current owner sells the product after increasing its value, either by adding Value of Location or by adding work. They incur costs in the process, which diminish the profits, so they want those profits to not only cover those costs, but pay them enough to live on until the next sale.

The third value helps increase the second, helping achieve this goal.

So, at any given point in the process, how much is the Thing, in its current form, actually worth?

To someone who has already sold it, it’s worth exactly what was paid for it.

To someone who currently owns it, its COST is what they paid for it plus the cost of whatever they are doing to it to increase its value. It’s either worth the total of those two costs, or it’s worth what they can sell it for at the end of that process.

If it were taken from them, the Cost Sum is how much they are actually out of pocket. But the effect on their prosperity is the higher, second, value.

Profits

A lot of people think that a business adds up its costs, including what it paid for the product that it is selling, and adds a % profit margin to the total to get the price that it charges.

That’s not how it works.

For any given product, there’s a price that customers are willing to pay, and that’s what drives the retail price.

Even that’s an oversimplification in two important ways. First, there is a correlation between sale price and sale volume. Drop the price, and you sell disproportionately more of a product. If you chart the product of those two numbers against price (assuming all costs are fixed), you find a single-peaked curve, with the peak at the point of maximum overall profit.
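A toy demand curve makes the peak visible. All the numbers are invented; the point is only that profit rises, tops out, and falls again as price climbs:

```python
def profit(price, unit_cost=2, a=100, b=8):
    """Linear demand: volume = a - b*price; profit = margin x volume."""
    volume = max(0, a - b * price)
    return (price - unit_cost) * volume

# Search candidate prices for the most profitable one.
best_price = max(range(2, 13), key=profit)
print(best_price, profit(best_price))  # 7 220
```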

But all costs aren’t fixed, some of them are proportionate to shelf time, and there are other factors that impact sales volume – products stored at eye height outsell those stored somewhat higher, which in turn outsell those stored lower. The higher the sales volume, the shorter the shelf time – so lowering your price a little below that predicted peak volume can actually reduce costs and boost profits. And, if you’re already selling a large volume of a commodity, there’s a temptation to place it at eye height – but that can be a mistake; you’re already selling more relative to a market’s capacity to buy, so there might not be enough room for growth in sales for the better placement to bring maximum benefit; you may be better served putting the popular product just below eye level and using that optimum shelf space for a product with greater capacity for sales volume.

Second, because the correlation between sale price and sale volume doesn’t even mention cost directly, but cost is a critical constraint on profitability, it can be worthwhile selling one commodity at a lower price even than the ideal in terms of profitability and pricing a more premium product on the high side. That’s a modern perspective, driven by studies in the economics of supermarkets, but the principle can apply to farmers markets of a more medieval nature as well.

And you can confuse matters even further with sales and discounts. These are often kept simple for the understanding of the buying public, but “ten cents off a dozen plums if you also buy a melon” can often be a more lucrative approach.

And then you have to factor in quality, both real and perceived. Actual quality pushes up both costs and the price people are willing to pay, but not as a simple addition that would be easily mapped onto sales charts – there’s a complicated relationship between quality and desire to purchase at a given price (it’s not a simple proportionate impact, either).

Perceived quality – a component of reputation in an industrialized setting, but largely independent of it in a more medieval society, where brand identities were subordinate to personal identification – pushes both volume of sales and price tolerance upward, at minimal increase in cost.

I once read somewhere that for every dollar spent improving a product, you should spend $10 telling people about it, but I think that’s more an aphorism of perceived wisdom in the 1970s and 80s than it is a useful guideline – word of mouth is still a thing, and some companies are adept at various forms of free media. I do think the general principle would still hold true in pseudo-medieval times, but the ratio is likely to be 1:1 or less.

Even today, I think the principle is correct but the ratio is not to be relied on save at a global level of development budgets vs marketing budgets, and not at a product-by-product level – and even then, 10:1 seems extreme. Between 2:1 and 5:1 seems far more persuasive to me as a realistic set of numbers. But such marketing aphorisms often exaggerate to get the point across (like everything else in marketing).

Ultimately, there are so many interlocking variables that an informed best-guess is probably the best that you can do in terms of setting an initial price point, and actual measurements of revenue vs price carried out over a period of time used to tweak prices toward the optimum.

Modern production methods also make for more consistent price levels; there would have been a lot more variability in market prices in a pre-industrial era, and seasonal factors probably outweighed everything else, also affecting factors like perceived quality.

Don’t get that last point? When a product is in season, quality perception sits at a different bar to when things are late-season, early season, or off-season. Something that you wouldn’t give a second glance to at season peak might be seen as very high quality in the off-season – so quality expectations are a relative thing.

One final point before I move on: What about our Iron example? Quality there won’t change as a function of the season, there’s no such thing as an “iron mining season”. But I would contend that seasonality is just as important in this product space as in any other – the season might impact mining costs, it might impact how hard workers can or will labor and so impact yields, it will impact transport difficulties and costs, and so on. As a result, even if no-one thinks of it in those terms, there would in fact be an “Iron Ore Season” in every practical sense.

Costs

Before you can properly evaluate what price to sell at, you need to calculate your total costs, and that can be a lot trickier than many people imagine.

There are costs per commodity, like the purchase price. There are costs per load, such as drivers and guards. There are costs per trip, like wagon maintenance. There are costs spread over many loads, like the purchase price of a new wagon (or the repayment of a loan to permit the purchase of this one). There are all sorts of license and permit fees. There are tolls. If it’s available, and you’re sensible, and can afford it, there’s insurance. And there are taxes and import duties and the like.
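Those cost classes amortize differently, which is exactly why the bookkeeping gets tricky. A sketch with invented figures (names mine):

```python
def cost_per_load(per_commodity, per_load, per_trip,
                  loads_this_trip, capital, loads_amortized):
    """Fold every cost class down to a single per-load figure:
    per-trip costs split across this trip's loads, capital costs
    (wagons, loans) spread over their working life in loads."""
    return (per_commodity + per_load
            + per_trip / loads_this_trip
            + capital / loads_amortized)

# E.g. 100 purchase price, 20 per load, 30 per trip over 3 loads,
# a 500 wagon amortized over 50 loads: 100 + 20 + 10 + 10 = 140.
print(cost_per_load(100, 20, 30, 3, 500, 50))  # 140.0
```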

If you’re a retailer, you have often-overlooked items like shelf space and product positioning (which has been mentioned already) on top of all of the above. You may need to woo vendors and suppliers, creating entertainment expenses. There may be bribes and protection money. There are staff wages – possibly including your own. There may be advertising and marketing. You may need to hire spies to watch an opposition. There will be guards, often hired from a specialist organization.

Each of these is more complicated than it appears. So you may also need bookkeepers and accountants and a paymaster.

To see just how complicated things can get, let’s simplify things down and consider a single wagon-load.

An example wagon-load

Alphonse has been hired to transport 12 cases of vintage wine to the city 100 miles away. It will take him one day to load the wagon and one day to unload it, and he can travel about 20 miles a day, so the total length of the trip is going to be 7 days. Alphonse owns his cart outright but has maintenance costs to pay, or (more specifically) a small amount of cash that he has on hand to pay for repairs when they become necessary. His wagon is drawn by two horses who are nearing retirement age, so he’s saving for replacements. He will have to pay three tolls along the way – one to enter the city, one to cross a bridge, and one to pay a ferryman. He needs three guards, and he has to pay them well and hire the best he can find. He drives the wagon himself, but he needs a relief driver in case he falls ill. He needs a cook, who will also serve as a medic. He hopes to be able to buy a second cart sometime soon, and so is training an apprentice, but he’s not experienced or skilled enough, yet, to act as the relief driver. He has to buy and carry food and water for his people and fodder and water for the horses. Every trip, he pays a blacksmith to check the horses and replace any horseshoes showing signs of significant wear. He has to allow for a sales tax and a luxuries tax and an income tax, and he also has to pay a fee for each of his workers to safeguard them from being pressed into state service. He has to pay vet bills for the care of the horses. Twice along the route, he stays at inns, which cost money for himself and his crew; the other three nights on the road, they have to rough it. He has to provide tents, and cooking equipment.

12 cases of wine fill only 1/3 of his wagon’s capacity. Tools and personal effects and other items for use along the way fill another third of the available space. But that still leaves a significant amount that’s not earning any money; it’s just dead weight.
The bulkier a commodity is, the more space it takes up, and the heavier it is, the more carrying capacity it consumes. Spotting a wagon that is carrying high-density items, and therefore is not packed as high, increases the risk of interest on the part of bandits. Riding unusually high (lighter) or low (heavier) can indicate the presence of gems or gold, respectively, and disposable wealth always gets the attention of the more attentive low-lifes.

The only available commodities that could fill that space are low-profit cargoes like wheat or timber. To make a decent return, multiple cargoes will be needed to fill the space, each with a different weight, a different volume, a different cost, and a different profit level. That’s complicated enough on its own, but then you have to factor in the value of position – if there’s a commodity that can be sold at one of the intervening stops along the way (for a profit) and then replaced, that ‘empty space’ becomes even more profitable. There are umpteen jillion combinations of cargo and quantity, and the conveyor has to pick the one that is most likely to be the most profitable, without wasting a lot of time in the process.

The greater the diversity of products within that space, the less he’s carrying of whatever proves most profitable at the end of the day – but the more reliably he earns at least some profit.

So that adds questions of supply and demand, not just at the destination, but all along the travel route. Assuming that ‘home’ is the city where the wagon is bound, and that the carrier brought a load out with him that he sold before loading the wine, he may have had the opportunity on that trip out to get a sense of demand that he could fill upon his return; the risk is that someone else will have filled that demand in the time in between. The closer to the far end of the trip, the less likely it is for that to have happened, so any knowledge picked up will be least reliable as he approaches his final destination.

There are endless possible outcomes. The trick is always to turn a profit, even if it’s less than hoped; anything more than that is a bonus. The wine itself is paying all the expenses of the trip save for the actual purchase of goods to fill the void, so that’s a lot easier to achieve than it might have been.

A barrel of apples, a cask of apple cider, a smaller cask of apple vinegar, a couple of bags of beans, a quarter-bin of pumpkins, six crates of shingles, a barrel of nails, 50 horseshoes, a small barrel of pig’s trotters in brine with a hidden compartment in its base to conceal half-a-dozen gemstones, and six live chickens in a cage, plus any eggs they lay en route. And six woven blankets of wool and four cow-hides and a side of beef – that last won’t quite fit and will overload the wagon slightly, but after a day or two, enough weight in water and food will have been consumed to solve that problem. Plus 500′ of rope to tie it all down beneath a weather-resistant tarpaulin.

If the wagon owner can just get the wine to its destination, he will turn a profit; the wine and gemstones alone will make it a very profitable trip, even if he sells the rest at a small loss or just breaks even.

There are always more variables to take into account. The roads will be at their worst when the wagon is most over-loaded, so there is an increased risk of a breakdown of some sort that diminishes as he travels, for example. Is that risk worthwhile, or should he forget the side of beef or the barrel of apples or maybe the bags of beans? Those all act as low-cost camouflage, hiding the real source of profits from spying bandits, so they have a value beyond the obvious.

Minutia and an Alternative

As this example demonstrates, Trade as an activity is all about minutia, and – most of the time – minutia is boring. A Traveller GM I know was so put out by this that he ended the campaign when the players decided to become traders instead of blindly engaging in the politics that he had set up as the centerpiece of his campaign; it was that incident that led to my writing the original Trade In Traveller article, “Buy Low, Sell High“.

A Mathematical Trick

I developed all sorts of tricks to speed up mental and paper arithmetic as a child because I had trouble learning, of all things, my times tables. Some of those tricks continue to serve me well, even today, and the principles that they exploit can be even more useful.

Let’s say that I have 20 numbers ranging from 1 to 10, as might result from a series of d10 rolls that have to be totaled to give 20d10: 2, 8, 8, 1, 6, 3, 9, 10, 5, 7, 6, 9, 5, 4, 10, 1, 5, 7, 1, 1.

If I add the highest possible result to the lowest, I get 11, which is not very useful. But if I exclude the highest possible result and use the one below it, I get 1+9=10. And that is VERY useful for quick counting. So I partner the results up as much as possible and see what’s left over.

    2+8=10
    8+1+1=10
    6+4=10
    3+7=10
    9+1=10
    10=10
    5+5=10
    9+1=10
    10=10

    That’s 9 tens for a total of 90, and I have 6, 5, 7 left over. But I’m not finished yet – I take the highest and lowest of these leftovers, and add them together: 5+7 = 12. Which makes the final addition, 12+6, even simpler – 12+6=18. Add the 90, and you get 108.

This is even easier to do if you actually have 20d10 to roll, because you can physically move the dice into their ‘partnerships’. But even without that, with a list of numbers generated 5 at a time (the number of d10s that I happen to have gotten out), it’s easy – just cross the numbers off the list as you partner them, or use backspace / delete if your list is in an electronic format.

It’s faster and more reliable than simply adding the results one at a time, where it’s easy to lose count of how many dice you’ve already included.

If I’m talking about d6s, the goal is still to make tens. Here’s 50d6: 1, 6, 1, 4, 3, 3, 3, 5, 5, 4, 2, 2, 5, 5, 6, 3, 1, 4, 3, 5, 2, 5, 1, 2, 4, 3, 5, 5, 4, 3, 4, 4, 5, 4, 1, 2, 6, 6, 1, 6, 3, 6, 1, 2, 5, 6, 5, 2, 3, 4.

    5+5=10
    6+4=10
    3+3+3+1=10
    4+5+1=10
    2+2+6=10
    5+5=10
    3+4+3=10
    1+2+5+2=10
    1+5+4=10
    3+5+2=10
    4+6=10
    4+6=10
    4+6=10
    3+1+6=10
    5+5=10
    4+6=10
    3+2+5=10
    4+3+2+1=10

    That’s 18 tens and I have a 1 left over – a total of 181.
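The partnering trick is easy to codify. Below is a minimal Python sketch of it; the function name and the greedy ‘start with the largest die, fill with the biggest die that still fits’ strategy are my own formalization of the process, not anything prescribed above.

```python
def total_by_tens(rolls):
    """Total a pile of dice by first bundling them into groups that
    sum to exactly 10, then adding the leftovers. The result always
    equals sum(rolls); the grouping just speeds up the arithmetic."""
    pool = sorted(rolls, reverse=True)
    tens = 0
    leftovers = []
    while pool:
        bundle = [pool.pop(0)]              # start with the largest remaining die
        need = 10 - bundle[0]
        while need > 0:
            # take the largest remaining die that still fits in the bundle
            fit = next((d for d in pool if d <= need), None)
            if fit is None:
                break
            pool.remove(fit)
            bundle.append(fit)
            need -= fit
        if need == 0:
            tens += 1                       # bundle sums to exactly 10
        else:
            leftovers.extend(bundle)        # couldn't complete this bundle
    return tens * 10 + sum(leftovers)

rolls = [2, 8, 8, 1, 6, 3, 9, 10, 5, 7, 6, 9, 5, 4, 10, 1, 5, 7, 1, 1]
print(total_by_tens(rolls))  # 108, matching the 20d10 example above
```

The same call on the 50d6 list gives 181, matching the worked example.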

A Cargo Standard

So, our commodities have 4 values that don’t change but that are different from one commodity to the next: Purchase Price, Quantity, Volume, and Weight. We also have Other Costs and Profit. Most of those numbers are relative to something – kg per bag or kg per 10 items or whatever. How can we repackage those numbers to eliminate some of these in favor of a more user-friendly description that has less minutia?

Volume

Let’s start with volume. Our cart has a fixed amount of it. If you divide that capacity by the commodity that takes up the greatest amount of volume per item, you get a relative minimum quantity that it can carry, and if you divide by the smallest volume per item, you get a maximum quantity. Neither of those are particularly helpful, but if you take the average, you can get a ‘typical quantity per load’. Is that of any more use? Not really.

What we can do is define a standard volume size, and package commodities by volume to fill that exact volume. To do that, we need to start with the volume occupied by the commodity with the highest volume per unit, and round that to a convenient number.

Or we can start by defining the volume capacity of a ‘typical cart’ and divide that by a convenient number to get a standard volume. Our actual cart will have a capacity of so many of those standard volumes. It’s a simple spreadsheet calculation to transform all of those volume-per-unit numbers into a value of standard volumes per unit – or take the reciprocal to get the number of units in a standard volume, usually with some space left over. We can then partner that commodity with another whose volume per unit exactly fills the remaining space.
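Here’s what that spreadsheet calculation might look like, sketched in Python. Every figure below – the cart capacity, the divisor of eight, the per-unit volumes – is invented purely for illustration.

```python
# Work in liters (whole numbers) so the arithmetic stays exact.
CART_CAPACITY = 4000                  # capacity of a 'typical cart' -- assumed figure
STD_VOLUMES_PER_CART = 8              # the 'convenient number' we divide by
STD_VOLUME = CART_CAPACITY // STD_VOLUMES_PER_CART   # 500 L per standard volume

# volume per unit for a few commodities (illustrative numbers only)
volume_per_unit = {"wine case": 50, "wheat bag": 100, "shingle crate": 60}

for name, v in volume_per_unit.items():
    units = STD_VOLUME // v           # whole units that fit in one standard volume
    spare = STD_VOLUME - units * v    # leftover space to partner-fill
    print(f"{name}: {units} units per standard volume, {spare} L spare")
```

A shingle crate at 60 L leaves 20 L spare per standard volume – exactly the sort of gap you’d partner-fill with a smaller commodity.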

Weight

The typical wagon will also have a maximum load that it can carry before you start adding to the likelihood of a breakdown. If we divide that by the same number of standard volumes as will fit, we get a maximum weight per unit. If we then use our spreadsheet, we can translate the weight of each of our partnerships into a certain number of those maximum weights per unit.

But here’s where my mathematical trick first shows up. We don’t care about the actual weight of any standard volume, so long as the average overall is within the capacity of our actual cart. So we can partner the weights per unit so that we achieve this overall average – a heavy standard unit partnered with 1, 2, or 3 lighter ones. There will be some that are a little over, and some that are a little under, but we’re packaging groups of standard units that will fit in the physical volume so that the weight overall is right.

This process can even be refined – if you’re 0.2 over on one pairing / partnership, you can take that off the next one that you’re putting together so that it ends up being 0.2 under. But that’s probably more detail than you need to go into.

A better approach is to ensure that each package of standard-units is as close to the desired value as possible without going over it.

So far, then, we’ve created standard shipping units that contain combinations that ‘fit’ the available space and weight capacity with specific quantities of a group of commodities.
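The weight-balancing described above can be sketched the same way. The greedy ‘pair the heaviest unit with the lightest until the average comes down’ strategy and the sample weights are illustrative assumptions, not a prescription.

```python
def partner_by_weight(unit_weights, target_avg):
    """Group standard-volume units so that each group's average weight
    is at or below target_avg: start with the heaviest remaining unit
    and add the lightest remaining units until the average comes down.
    (If the heaviest unit alone exceeds the target and nothing light
    is left, its group will run over -- the 'slightly overloaded' case.)"""
    pool = sorted(unit_weights)
    groups = []
    while pool:
        group = [pool.pop()]                  # heaviest remaining unit
        while pool and sum(group) / len(group) > target_avg:
            group.append(pool.pop(0))         # partner with the lightest
        groups.append(group)
    return groups

# weights per standard volume, relative to a target average of 1.0 (all assumed)
weights = [1.4, 0.6, 1.1, 0.9, 0.5, 1.3, 0.8, 0.4]
for g in partner_by_weight(weights, target_avg=1.0):
    print(g)
```

With these sample weights, every group comes out at or below the target average, so the whole load fits the cart’s rated capacity.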

Price and Profit

Price per item is known, so each of these combinations will have a total price. Profit per item is trickier because of all the different costs that have to be taken into account, some of which only affect specific commodities. Perhaps, then, it’s a good thing that the actual selling price is what a customer is willing to pay, which has nothing to do with either the purchase price paid by the shipper / wholesaler or the costs incurred. Instead of profit, we should be looking at revenue – the income that can be generated from selling the commodity.

Choosing the commodity package that yields the highest total revenue creates the maximum scope for profit after those expenses are taken out. We could even label it “Idealized Profit”.

That’s what the successful trader wants to maximize. In an ideal world, he would pack his entire wagon’s capacity with whatever yielded the highest idealized profit and be on his way. Unfortunately, it’s not quite that easy.

Compromises With Reality

Said trader has to accommodate two more limitations: Finances and Availability. The second is the more readily dealt with, so let’s do it first.

Availability

There might only be two units of the most profitable combination while the merchant has room for 8. So he picks those two and then looks at the next most potentially profitable units. If there are six of them available, that fills his wagon and he’s on his way. If there aren’t, he adds what is available and then moves to the third most potentially profitable, and so on.

That’s just common sense, right?

I’ll get back to that in a moment. First, we have the other compromise with reality to deal with.

Finances

Merchants frequently don’t have as much money to spend buying commodities to trade as they might like. Perhaps the most potentially profitable single commodity is Emeralds plus something cheap to fill out the unit – potatoes, maybe. But these cost 10,000 GP a unit, and the trader might only have 14,000 to spend.

So, even though there might be two or three such units available, the trader can only afford one – and maybe not even that.

To find out, he has to play a little game with himself. Deduct the price of however many top-profit units he can afford from his ready cash and divide what’s left by the number of spaces still to fill. The result is the maximum amount he can spend per unit on the remainder.

If the list of available consignments has been sorted in sequence of idealized profit from high to low, the job is simple – work your way down the list until you find the highest potential profit package that the trader can afford. Fill the remaining space with them – if there are enough available. If not, buy them all and repeat the assessment.

If you reach the bottom of the list without being able to fill your wagon, you then have a choice: run light, or assume that you can’t buy as many of the most expensive units as you thought you could. Reduce that quantity of units by one, and recalculate.

Eventually, you will end up with a full wagon-load that is as much profit as you can afford. The next time you come back, hopefully you will have a bit more cash to spend.
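Pulling the last two sections together, the whole buying procedure can be sketched as a short algorithm. The data structure, commodity names, and prices below are all invented for illustration, and note that this version always chases a full wagon – ‘running light’ is the other legitimate choice.

```python
def plan_cargo(offers, spaces, cash):
    """Greedy cargo selection under availability and finances.
    Each offer: name, idealized profit per unit, price per unit,
    and qty (units available). Works down the profit-sorted list;
    if the wagon can't be filled, buys one fewer of the top-profit
    unit and recalculates, as described above."""
    offers = sorted(offers, key=lambda o: o["profit"], reverse=True)
    cap = offers[0]["qty"]        # how many top-profit units we may buy
    while True:
        bought, space, money = [], spaces, cash
        for i, o in enumerate(offers):
            avail = min(o["qty"], cap) if i == 0 else o["qty"]
            n = min(avail, space, money // o["price"])
            if n > 0:
                bought.append((o["name"], n))
                space -= n
                money -= n * o["price"]
        if space == 0 or cap == 0:
            return bought, space, money   # full wagon, or nothing left to trade away
        cap -= 1                          # assume one fewer top unit; recalculate

offers = [
    {"name": "emeralds", "profit": 5000, "price": 10000, "qty": 3},
    {"name": "wool",     "profit":  300, "price":   800, "qty": 10},
    {"name": "grain",    "profit":  100, "price":   200, "qty": 20},
]
print(plan_cargo(offers, spaces=8, cash=14000))
# ([('wool', 8)], 0, 7600) -- one emerald unit would strand the wagon part-empty
```

With 40,000 in cash instead, the same call buys all three emerald units and fills out with wool – the difference is purely the finance constraint.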

If that’s all there was to being a successful trader, there would be a lot more of them to go around.

Idealized Profits, revisited

What’s missing from the picture is allowance for the impact of supply-and-demand, and market knowledge. Both of these impact the idealized profits of the different types of unit on offer.

Demand for certain commodities rises and falls with the time of year and with the market environment. If there hasn’t been a supply of something that’s in demand – no matter what the level of demand is – that demand will rise, carrying the price people are willing to pay with it. If the market is oversupplied, demand will fall, and will again take sales price with it. You might know what the prices and supply were like a couple of days, or a couple of weeks, ago, but you have no idea what they are now, or what they will be when you finally bring your goods to market.

Some of the change depends on factors outside your control – what other traders have delivered a commodity in the meantime, for example, or a temporary hazard that means fewer such loads are reaching their destinations. Inclement weather and a hazardous river crossing can cause loads to build up, undelivered, while demand skyrockets, until conditions improve. And suddenly, there will be a glut on the market as everyone tries to capitalize on that pent-up demand. If you’re one of the early traders, you can do unusually well; if you’re late to the market, you can lose your shirt.

The one thing that’s for certain is that any ‘idealized profit’ list will bear only a passing resemblance to the actual prices at sale. Some commodities will be higher, and some lower.

In practical terms for the GM, they can either drive themselves nuts tracking every influence on actual sales prices – back to minutia again – or they can simply make a bell-curved die roll to get a relative price adjustment, which they then explain in narrative terms.
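As a concrete example of such a roll – with the die choice and the scaling factor entirely my own assumptions – 3d6 gives a nicely bell-curved result that can be mapped onto a relative price multiplier:

```python
import random

def price_adjustment(rng=random):
    """Map a bell-curved 3d6 roll (average 10.5) to a relative price
    multiplier around 1.0. The 0.04 step per pip is an assumed scale,
    giving a range of roughly 0.70x to 1.30x."""
    roll = sum(rng.randint(1, 6) for _ in range(3))   # 3..18, bell-curved
    return 1.0 + (roll - 10.5) * 0.04

print(f"Price multiplier this market day: {price_adjustment():.2f}x")
```

Extreme multipliers are rare (a 3 or an 18 comes up about once in 216 rolls each), which is exactly the behavior wanted: most days the market is near normal, with the occasional windfall or disaster for the GM to narrate.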

But just because a factor is outside your control, that doesn’t mean that it’s completely unpredictable. Market knowledge is a powerful tool that only the most intelligent can access. If there are a bunch of new homes being built, building supplies will increase in demand and therefore in price. If there’s a major sporting event coming up, the resulting groundswell of visitors will push up the demand for food of all sorts, and alcohol in particular. If the other team are from an area with a specialized cuisine, the ingredients for such cuisine will rise disproportionately relative even to the inflation of food demand in general.

If a particular bridge is rickety and old and in urgent need of repairs, it might be worthwhile going around it, even if it slows you down; while that might produce a short-term reduction in profits, sooner or later that bridge will fail, and you will reach your destination to find demand skyrocketing.

The more the canny trader knows about the world around them, the more they can use that knowledge to anticipate movement in demand and in price, and can then buy accordingly.

The GM Shortcut

All that fussing around with so much of this and so much of that comprising a unit can still be a lot of work, time that could more profitably be spent on something else. It’s still minutia, just a more generalized form of it. Is there a way around that?

The answer is yes, and it gives rise to a fundamental principle of making trade work in an RPG – any RPG, regardless of genre.

The GM sets the prices and the quantities and every other significant value in the process.

Some GMs use random die rolls to do so, thinking that takes the effort out of the problem. And they’re right, it does – but it makes more work and more minutia than you save, in the long run, and the removal of GM bias doesn’t make the system any more or less fair, just more chaotic.

Two die rolls are all that are needed. Two.

The first one sets the current market conditions if these are not already known / inferrable.

    <0, 0 = catastrophically unfavorable, +2 to the second roll
    1 = strongly unfavorable, +1 to second roll
    2 = somewhat unfavorable, +1 to second roll
    3 = slightly unfavorable / neutral
    4 = slightly favorable
    5 = somewhat favorable, -1 to second roll
    6 = strongly favorable, -1 to second roll
    7+ = incredibly favorable, -2 to second roll

The second one sets the trend, the direction things are going. The GM can add or subtract 0, 1, or 2 from this die roll to correspond to known external factors, plus there are the modifiers from the first roll.

    -2, -1 = becoming strongly more unfavorable, -2 to the next ‘first roll’
    0, 1 = becoming much less favorable, -1 to the next ‘first roll’
    2 = becoming somewhat less favorable, -1 to the next ‘first roll’, re-roll 5+ results on the next ‘first roll’
    3 = becoming slightly less favorable, -1 to the next ‘first roll’, re-roll 6+ results on the next ‘first roll’
    4 = market steady, no real change
    5 = becoming slightly more favorable, +1 to the next ‘first roll’, re-roll 0- results on the next ‘first roll’
    6 = becoming somewhat more favorable, +1 to the next ‘first roll’, re-roll 1- results on the next ‘first roll’
    7, 8 = becoming much more favorable, +2 to the next ‘first roll’

These then become the foundation for narrative (which has to explain both the movement from the last ‘first roll’ and the current market trend).
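Here is the two-roll system sketched in code. The article doesn’t name the die sizes, so d6 is assumed for both rolls; the re-roll riders on the trend table are omitted, and the labels lightly condensed, to keep the sketch short.

```python
import random

CONDITIONS = {0: "catastrophically unfavorable", 1: "strongly unfavorable",
              2: "somewhat unfavorable", 3: "slightly unfavorable / neutral",
              4: "slightly favorable", 5: "somewhat favorable",
              6: "strongly favorable", 7: "incredibly favorable"}
COND_TO_TREND = {0: +2, 1: +1, 2: +1, 3: 0, 4: 0, 5: -1, 6: -1, 7: -2}
TRENDS = {-1: ("strongly more unfavorable", -2), 0: ("much less favorable", -1),
          1: ("much less favorable", -1), 2: ("somewhat less favorable", -1),
          3: ("slightly less favorable", -1), 4: ("steady", 0),
          5: ("slightly more favorable", +1), 6: ("somewhat more favorable", +1),
          7: ("much more favorable", +2), 8: ("much more favorable", +2)}

def market_cycle(first_mod=0, external_mod=0, rng=random):
    """One cycle: the first roll sets market conditions (carrying any
    modifier from the previous cycle), the second sets the trend plus
    the modifier to carry forward. Results clamp to the table ends."""
    first = max(0, min(7, rng.randint(1, 6) + first_mod))
    second = max(-1, min(8, rng.randint(1, 6) + external_mod + COND_TO_TREND[first]))
    trend, carry = TRENDS[second]
    return CONDITIONS[first], trend, carry

cond, trend, carry = market_cycle()
print(f"Market is {cond}, becoming {trend} ({carry:+d} to the next first roll)")
```

Feed the returned carry value back in as `first_mod` on the next cycle, and the market drifts from session to session exactly as the tables above intend.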

The GM then interprets that narrative to set a general trend in commodities and pick out two or three exceptions going up and two or three going down. He then deliberately constructs unit packages with an eye to the implications of supply and demand from the narrative, and from that, can set availability levels.

Working backwards from the general picture of trade to the specifics of what’s available and what it’s likely to sell for saves an awful lot of work.

But it’s possible to simplify the GM’s job a little bit more, by considering group effects.

Group Effects

The principle of group effects is a simple one: events affect related commodities in a common way. As a general rule, you can treat all grains as a single entity, all vegetables, common meats, alcohols (except beer / ale, which often stands apart, and which may go down in demand when other alcohols like wine go up), and so on. If there’s a military conflict on the horizon, weapons, horseshoes, saddles, and armor will all increase in demand, and so will (higher-quality) cloth that can be dyed for standards and flags. And basic produce, like beans, hay, and oats. If there’s an increase in building, stone, timber, nails, panes of glass, and tools are all affected. And so on.

If only part of a unit is affected in this way, only the affected proportion of its value (by cost price) goes up or down; the rest doesn’t.
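In practice, that adjustment is just a weighted sum. A minimal sketch, with the cost split and the market factor both assumed for illustration:

```python
def adjust_unit_value(components, factor):
    """components: (cost, affected?) pairs making up one standard unit.
    Only the affected proportion (by cost price) moves with the market
    factor; the rest of the unit's value is unchanged."""
    return sum(cost * (factor if affected else 1.0) for cost, affected in components)

# a unit that is 60% weapons (affected by war news) and 40% grain (not)
unit = [(60, True), (40, False)]
print(adjust_unit_value(unit, 1.25))   # 115.0 -- only the weapons moved
```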

Fitting All This Into Trade In Fantasy

The main series deals with the minutia of effects. It’s designed to fully immerse the PCs in the world of commerce, where business activities are to be used as a springboard to adventures.

Sometimes, that’s overkill, because that’s not what the PCs have in mind at all – but the GM still needs to simulate the complex field of commerce within the game world. The PCs are hired on as guards for a wagon train of goods, for example – what are they actually protecting? And why does the wagon master need so many guards? Is the owner just paranoid, or is there something he’s not telling the PCs?

Or perhaps the PCs are employed to scout markets in remote places, searching for unusual commodities that might be valuable, and places where something or other seems to be in high demand. That could be a gateway to all sorts of adventures because it’s simply a justification for them going places they’ve never been before. You need some method of simulating trade to give the mission verisimilitude, but it should be as unobtrusive as possible.

That’s where the simplified system contained in this post comes in. You can make it as generalized as you want to – having standardized volume and weight, why not cost as well? It can be done in the same way, so long as you handle expenses separately.

There are some circumstances where this article is all you’ll need.

Leave a Comment

Delineating Overarching Character Traits


A technique for creating unique and interesting characters that makes their cultures more rich and detailed. Simple but comprehensive.

This image of a Mannequin in Ferengi Makeup and Uniform by Marcin Wichary from San Francisco, Calif. was first published on Flickr under the Creative Commons Attribution 2.0 Generic License, https://commons.wikimedia.org/w/index.php?curid=79570035.

I was reading something on Quora the other day about how Deep Space 9 used the overall concept of Ferengi Traits to make the personalities of Quark, Rom and Nog distinctive (and don’t worry if you don’t know who those characters are, it’s not important to the article).

The key point being proposed was that while all three fell into the general pattern of ‘Ferengi’, each had his own unique traits for which that general pattern provided context. Putting those together permitted an interpretation of those traits from the Ferengi perspective, which in turn broadened the perspective on that society from comic-book simplicity to something rich and culturally detailed.

To employ a metaphor, a spotlight on one of the characters reflected back on the overarching commonalities, exposing fresh facets of the collective generality.

My thoughts went immediately to the gaming applications. These are essentially the same thing, but four-fold: Racial, Archetypal, Cultural/Social, and Characteristic. Each of these represents a way of generalizing a character, and provides (through interpretation) specific traits that denote the individual personality.

Initially, I was focused on NPC delineation, because that’s always a topic of value to GMs, but then I realized that the same methods would work for PCs as well – and that a lot of advice offered both here and elsewhere over the years was already groping in this particular direction.

An introduction to the Architecture

I’ve tried very hard, in this article, to use different collective descriptions for each facet and sub-facet of a subject. This had two purposes – first, by using non-standard nomenclature, it invites readers to take a fresh look at a very familiar subject; and second, it helps keep it clear just what facet or sub-facet I’m talking about. The goal is to avoid boxing ourselves in with stereotypes while creating a broad range of end personalities within a particular culture of which the individual (and all other individuals) are collectively representative.

This matters because it transforms the personalities from something being dictated by rules narrative and cultural write-ups to a foundation for individuality – it lets individuals be unique while maintaining that cumulative impression.

And it matters because that’s how characters in-game would formulate their impressions of both an individual and of a collective grouping. They wouldn’t be given an overarching definition. If told anything at all about the race / culture, they would be given stereotypes into which they would have to ‘fit’ the individual; if told nothing, they would be presented with one or more individuals whom they would then have to generalize into an overall impression.

In other words, this approach is both more akin to, and more facilitative of, the situation as it would be encountered in the real world. That makes this less work for the GM, allows more creativity, and produces more unique individuals.

    Three Options and how to choose between them

    GMs can either start with a generalized pattern as a structure, or let one emerge naturally as a collective impression created by a group of individuals. Or they can occupy a half-way house somewhere in between these two extremes, offering a broad summary as a guideline and being content to extrapolate from that beginning, fleshing out the resulting general view one individual at a time.

    There are two factors that should be considered when choosing between these three options. (1) How much contact has the society in general had with the race / species? More contact pushes toward the generalized pattern as foundation. And (2), how diverse are the race / species in personality, and within that question, how representative of their race / species does the GM want this individual to be? Diversity pushes toward the middle ground, while the desire for a less representative individual goes further and promotes the emergent collective impression as the path to follow.

    There’s even a variation on the half-way house in which the specific description is filled with half-truths and inaccuracies perpetuated through myth and legend and culture. The GM may not know what the truth behind this picture is, only that it’s partially accurate and partially invented or romanticized.

    There should never be a forced ‘one size fits all’ answer to this question; it should be different each and every time – but, once made, the choice should remain in effect for each representative of a race / species until you have good reason to change it.

The Four Stanzas Of A Character

The general picture of an individual character can be broken down into four stanzas – four paragraphs / lines that collectively delineate an individual persona. Some GMs may add a fifth, alignment, but that’s fallen out of favor in gaming circles these days.

That redefines the objective – we want to end up with a four-to-eight-sentence summary of the individual and how he represents the broader culture from which he derives.

Before we can achieve that, we need to know the subjects of these four stanzas.

    Racial Traits

    These are the racial stereotypes that collectively apply in some manner to the normal individual – even if the individual is wildly different from them, they are still defined, in relative terms, by those racial traits. “The typical Orc is boisterous and brash, ill-mannered, and prone to violence, with a huge chip on their shoulders from being suppressed as a species, and as an individual within the species.” Right away, there’s a lot in that description that will seem familiar, but there’s a nuance or two that are just a little different to the generic description of the race. It provides a subtle redefinition of the race, one that can manifest in different ways in every individual.

    Archetypes

    Similarly, in most RPGs there are archetypes – sometimes explicitly defined as character classes, sometimes not. Each archetype, in turn, carries baggage in the form of a description of the type of persona that it welcomes and develops, the personas that naturally ‘fit’ the archetype and how well-suited the individual is to their profession.

    Social Class, Associations, and Faiths

    These three are all ways that individuals associate with others, sometimes within their culture, and sometimes forming a point of connection with others beyond it. Each of them carries an expectation of behavior that forms part of the collective identity of the specific sub-group of which the individual is a member; when that behavior comes naturally, it speaks to the persona of the individual. On the other hand, if the individual rebels against one or more aspects of the group identification, that also says something about the personality of the individual.

    There can be several such groupings to which an individual belongs, but one of them will always be dominant, and their response to that dominant grouping will be definitive, providing a guideline to how they integrate (and how well they integrate) with the other groups to which they belong. These other groups provide nuance, not definition. They can warrant a mention in this stanza only when it is culturally expected that this association is definitive – and in this individual’s case, it is not.

    Characteristic Attributes

    There are three different aspects of characteristics that shape an individual – those that are relatively high, those that are unrelentingly average (relative to those around him or her), and those that are notably lacking or low (same caveat). Each of these can form an important element of the individual persona or can be negligible. The latter should be ignored for now; it’s the former that we are interested in.

    If the individual is notably stronger than those around them, this will have a profound influence on them, amplifying the consequences of some typical adolescent behaviors into life-altering events. Similarly, if they are faster, more nimble, more agile, more athletic, smarter, wiser, more attractive, or more resilient, there will be profound impacts that will push them either more firmly toward the stereotype, or more strongly away from it.

    If the individual is notably weaker than those around them, or more foolish, or more stupid / easily led, less genteel, or more clumsy, these impacts will also be profound. Always being the last person picked for games or teams will amplify other attributes of the persona, and may even put the individual into situations that threaten their lives. Some may devote their lives to overcoming this handicap, no matter the cost; others will accept it and embrace another path through life.

    It doesn’t matter how many characteristics the game mechanics define, there will always be more than can easily be accommodated in a short descriptive passage of the type being discussed here. Of necessity, you need to focus on the one, two, or (at most) three that are most definitive of the individual relative to the broader population around them.

    I want to highlight something before continuing. I’ve made a big point of using terminology relating to racial / social expectations, for example, “relative to the broader population around them”, for three reasons.

    First, it’s the relative value in comparison to those expectations that shapes a persona, not the absolute value;

    Second, this accommodates circumstances of adoption / resettlement, in which the racial norms themselves deviate from the expectations of the society around the individual; and

    Third, defining these attributes in relative terms means that the individual’s raw numbers can be filtered through the relative terminology to say something about the culture from which they derive.

The Process

With the subjects of each stanza now defined, we can move on to the process of generating an individual’s persona. For each of the Stanzas, this is a four-step process that is often conducted intuitively. As with most intuition-driven events, greater understanding and control can be achieved by understanding the process intellectually, and this can provide a road-map to follow when intuition fails us.

In fact, the four-step process is so quick (and usually easy) that we can contemplate far more than the four stanzas, and that creates a need for a fifth step, placed second-last, and labeled step 4:

  1. Generic Trait to Profile Spectra
  2. Individual Placement within Spectra
  3. Alternative Interpretations & Adaptations of Individual Placements
  4. Selection
  5. Facets of Individuality from Specific Interpretations

Let’s briefly look at each of these in greater detail.

    1. Generic Trait to Profile Spectra

    I recently wrote, though I’m not sure where, “Nature doesn’t deal in absolutes, it deals in spectra”, or words to that effect – I think it might be in the Zenith-3 adventure currently being played.

    Every element in the four stanzas can be viewed as a placement upon a general range of spectra that collectively define the application of the element to the collective identity of the race / species.

    You can see this readily in the case of the characteristic attributes – the character has a specific value for each characteristic, while the full range of possibilities defines the scope of the spectrum from low-to-high. One of my very early advocacies, long before I started writing articles for Campaign Mastery, dealt with the spectrum of full possibilities permitted by the game mechanics and the placement of the individual upon that spectrum as a guide to personality traits.

    In this case, the spectrum of possibilities is reduced to just those considered ‘valid’ for the race / species, permitting a socially-relevant measure of the impact of that placement, but the older interpretation still has some value in terms of defining the significance of those racial restrictions relative to the human population base.

    If the human range is 3-18, for example (very traditional D&D scale), an individual value of 15 gives rise to certain character traits (depending on which characteristic is being discussed). If the race in question has a spectral range of 12-20, the 12 tells you something about the race relative to humans, as does the 20, and the individual’s value of 15 tells you something about where they fit within that 12-20 spectrum of possibilities.
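
    That dual reading of the 15 – strong by human standards, unremarkable among their own kind – is just simple proportion. A quick sketch (the function name is my own invention):

```python
# Where a score sits depends on which spectrum you measure it against.
def placement(value, low, high):
    """Return a 0.0-1.0 position within the spectrum [low, high]."""
    return (value - low) / (high - low)

print(placement(15, 3, 18))   # 0.8   -> top fifth of the human 3-18 range
print(placement(15, 12, 20))  # 0.375 -> below the middle of the racial 12-20 range
```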

    Set aside the individual value of 15 for the moment, though; this step is about defining the 12-20 and translating that into general descriptions of the characteristic with respect to this particular race.

    Each of the stanzas can be treated in the same way, as a range of possibilities that define the race / species, and this step is one of defining those spectra.

    Obviously, if you don’t take racial notes, you have to repeat this process every time. When you don’t have a unified concept of the race / species in your head, that repetition can help create one through step-wise refinement and iteration; but when you do have a clear idea of the central concept, repeating the process is a waste of prep time. Either way, a little careful note-taking at this point will speed the process up in the future.

    2. Individual Placement within Spectra

    This is where that individual’s value of 15 reenters the picture. You aren’t so much looking at what this enables the character to do, or not do; you are looking for the consequences of that specific value for the personality of the individual. What comes naturally to them, what do they struggle with, and how do those things fit them into the culture surrounding them?

    Again, this step is easier when thinking about characteristics, but it’s true of all the stanzas. Social Class, for example, will have a range from those at the bottom to those most valued by the society (usually rulers, but not necessarily so). Elves may revere those making cultural contributions far above their social standing as defined by their political influence.

    Applying a little creativity can nuance racial definitions in ways you would scarcely believe. If the Brewers of Ale are the most influential figures in Dwarven society, you get a very different picture of that society. Generalize from the specific Beer-maker to ‘Social Lubricants’ and then to ‘Social Interaction Enablers’, and you find that anyone who makes social interactions easier or more significant grows in stature within the resulting society, and that social interactions of all sorts become more central to the resulting culture. Feasts, parties, and casual get-togethers of every kind become more significant, more frequent, and more embedded within the society. There would be excuses for such, both informal and formally-defined, stretching beyond even the extremes of human cultures – there would literally be an excuse for a ‘party / celebration’ every week of the year.

    Some of these might even be negatively contextualized in expression – a war during which such celebrations were not possible might be commemorated by making ale forbidden during the first phase of the social event, to be followed by an even more extreme celebration of the victory, when social norms once again became possible. So you have a week of fasting (in terms of alcohol) and then a blow-out.

    3. Alternative Interpretations & Adaptations of Individual Placements

    So we have a spectrum of results and a placement of the individual within that spectrum. The racial profile associated with that spectrum defines what is usually meant by that placement, but nothing exists in a vacuum; how an individual reacts to a specific spectral placement will not be an isolated phenomenon, it will be a part of the unified whole that is the individual’s personality. Rather than look to the generic cardboard cut-out interpretations, it’s worth spending a few moments contemplating alternatives that might better represent a coherent profile of the individual, relegating the generic contribution to (at best) a secondary status within this individual’s makeup.

    This stage of the process is an exploration of ideas – don’t be afraid to throw in something from left field to see what becomes of it.

    4. Selection

    By the time you’ve finished that, you will have a vast swathe of contributing elements, a soup of possibilities, all present in equal strength, and so yielding a fairly bland and unfocused characterization. Time to apply a little selectivity, picking out the elements within each stanza that best define the individual and their place within their natural society.

    Remember, the goal is to be able to sum up the individual and their place within their native culture in just 4-8 sentences of simple construction – no 15-line paragraphs that read like legal fine-print. Simple, direct statements. Anything that doesn’t belong in the description of the individual’s personality and placement should go into the racial notes instead.

    5. Facets of Individuality from Specific Interpretations

    When you’ve boiled off the dross – and it’s likely that your pruning will need to be ruthless – what remains is canon for that character. Everything not explicitly stated is free for interpretation in response to triggering events, though logical implication may narrow the reactions to such events.

    Roleplaying is about taking those defining elements and merging them into a holistic view of the personality which can then be expressed in thought (decisions), word, and deed. The GM has to do it just as much as the players do.

    It can be the case that the holistic view needs 1-2 more sentences to unify the constituent elements. “[Name] is a Party Animal” can mean very different things in different cultures, and usually requires a clarifying clause within the sentence. “Elvor is a Party Animal, always up for a good poetry recital or inspection of the blooming of roses” – by redefining ‘Party Animal’ into a relevant social context, this describes a very specific individual in a single sentence; everything that follows merely enhances that overall summation.

    Simply by virtue of making this the dominant personality trait of ‘Elvor’, you automatically imply that everything else is secondary to this aspect of their personality, to be sacrificed if and when it becomes necessary. Right now, there’s an impression that the character is a gadfly, without serious heft and gravitas – but if this love of ‘intellectual events’ has driven the character to become engaged in internal politics, or to act as a social firebrand or conscience, nothing could be farther from the truth. That’s what the other elements of the characterization are there for.

    It’s the overall summation that GMs and Players should keep in mind when roleplaying. Nuance is all well-and-good, but can often conflict with other characterization elements; the overall summation is the guide to navigating such complexities.

Spotlight Placement

Like most creative types, I love to show off my handiwork to the players. Perhaps eight times in ten, I’ll get a shrug and a ‘so what’, but the remainder generates varying degrees of appreciation and occasionally awe.

There’s a wrong way and a right way and a better way.

The wrong way is simply to dish up “here’s something I’ve been working on,” without in-game context. This risks giving away key details of plot not yet played, throwing away any surprise or wow factors at the game table for a moment of gratification that might not even be coming. It’s something that most of us have been guilty of at some point along the way, and we all have to learn (sometimes repeatedly) not to do it.

The right way is to make the revelation part of the plot by ensuring that the plotline focuses on at least one of the more unique aspects of the character, showcasing his or her individuality.

The better way is to fully integrate the character and one or more of their unique personality attributes into the plot, making them an essential building block of the campaign, while using them to shed light and add substance to the range of possibilities implicit in their race, profession, and social position. This might require the involvement of a second character whose job is merely to forewarn the PCs about the uniqueness or place it in a racial / professional context afterwards, specifically addressing the nuances that make the character function.

    Focal Point

    As you can see, there’s a great deal of similarity between the ‘right way’ and ‘the better way’ – the distinction is in how central the uniqueness of the character is to the plot.

    Both start with the selection of a focal point – the aspect of the personality that is going to be on most prominent display. This could be any one of the character’s stanzas of description, and there will always be a best choice in terms of the plot and intended usage. But if, by chance, the character you’ve created doesn’t match up with your plot needs, it’s at this point that you should set the character created aside for use some other time, and start over – letting the plot guide you to a unique character for that critical role in the story.

    Reflections Of Individuality

    Once the primary point of uniqueness is built into the plotline, the second step is to look for opportunities and character-roleplay moments that can briefly highlight one or more other unique aspects of the character. Failing that, a foil – someone present merely to expose the existence of those other unique attributes – is often the best answer.

    The Racial Rainbow

    I am always cognizant of what the uniqueness of the character adds to the rainbow of racial aspects and colors contained within the race. How does this character, and their role within the adventure, expand the fundamental definition of the race that lives in the players’ heads? How can we make that expansion unforgettable, so that the next example builds upon it, having a cumulative impact?

    Every non-cliche Elf, Dwarf, Orc (or whatever) adds to the substance of that race, so long as their uniqueness can somehow be put on show and made memorable. The more central they are to the plotline, the more easily the latter can be achieved, and the more interesting the character, the more easily you will be able to drop them into future occasions.

    If you make six unique NPCs and only one of them goes on to become a central figure in the campaign, that’s a win for the GM – because if they weren’t memorable, none of them would do so; they would simply be part of the campaign furniture. But at the time of creation, you never have any idea which of them will turn up again in the future – you’re simply placing as many top-quality building blocks to hand as you can come up with.

    The Archetypal Rainbow

    It’s the same thing with respect to the character’s archetype. Expanding the role that the individual can play expands the potential capabilities of their archetype, providing a second avenue into their becoming a recurring element.

    The Social Rainbow

    The sheer variety of groups around which the character can be oriented means that their contributions to the social rainbow will be more diffuse, unless this is the central facet of the character spotlighted.

    But this also brings me to a top tip – The Path Not Fated

      The Path Not Fated

      We’ve all met people who would excel in a different vocation or social position, but who were forced by circumstance, or family, or opportunity, or whatever, into a pathway through life for which they aren’t really a very good fit.

      They nevertheless do as much as they can to fit themselves into the square hole, no matter how much of a round peg they may be, and do enough to continue on in that square hole, though it doesn’t come naturally to them.

      Whenever fate (or a PC’s decision) throws up the need for a generic cardboard cut-out NPC, my favorite tactic these days is to make them something else, then reconcile that with their life and its demands.

      The noble who would be better-suited to being a bookkeeper. Or a Beekeeper. Or an architect. Anything but a typical ruler, in fact.

      The inn-keeper who was born to tread the Tennis Court. Or the Pool Hall. Or to be a famous singer.

      The Blacksmith who should have been a painter. Or a gardener. Or a butler.

      It’s a shortcut through the processes described here that doesn’t fully flesh out the character, but it still captures at least half of the uniqueness that would result from the full treatment, and it is fast enough to be done on the fly – which is exactly what you need in this game situation.

      The biggest trap to watch out for is creating a new stereotype by reusing the same ‘alternative vision’ repeatedly. Avoid that, and you’re well on your way.

    The Characteristic Hues

    Characteristic-defined traits are a little different to the rest. They rarely stand alone, instead compounding with other personality traits to add additional nuance and depth. These are personality elements that would be largely similar no matter what archetype / profession the character adopted, what their social class was, and that are embedded within their racial profile, inseparable from it to at least some degree.

    Contemplate, for example, the differences in the following:

    • “He’s unusually strong for a Gnome.”
    • “He’s unusually strong for a Storm Giant.”

    Both will have generated similar formative influences within their respective cultures; it’s when you step outside those boundaries that the context becomes important. In the first case, the character is likely in for a rough time, adjusting to no longer being the biggest and toughest around, but they may end up a better person for the humbling. In the latter case, any personality traits engendered by their strength are likely to be amplified, if anything.

Totality: The sum of many reflections

The techniques described in this post shouldn’t be used every time you generate an NPC. Their power stems from the cumulative impact of many diverse representatives; if you can’t envisage a pathway through the campaign that yields many encounters with Ettins, it may not be worth going through the whole process.

That’s certainly one path to take. The on-the-other-hand counter-argument is that if there’s only going to be one Ettin, you should make it as memorable and distinctive as possible. While the pragmatist in me aligns with the former position (less time spent on this means more time that can be spent on something else), everything else in my nature (excluding laziness) demands the latter.

I can’t decide this question for you – I can only advise people to find the balance and pathway that works best for them. Every GM has some talent at which they are better than the rest, some have several. Prep time invested in something that comes naturally to the GM yields a better dividend, but leaves holes in their performance behind the screen; prep time invested in the areas they are weaker in elevates the performance bottom line and also frees up some of their time and attention for their strengths to be displayed. There’s no one right answer.

But I thought it worth the effort, before wrapping up this article, to think about some even bigger pictures and the impact the technique can have.

    Genre Variations

    By defining the racial and archetypal parameters differently, even within the same game system, you create genre variations, and these can be as nuanced as you want them to be. If you want to distinguish between high fantasy and low fantasy, you can – even in the middle of a campaign, if you perceive that the campaign has evolved through characters gaining wealth and experience. That’s a powerful benefit, but it misses one of the more useful functions of the process.

    It also enables the conceptual repackaging of one genre’s creatures into another genre. There are two examples that I could offer right now, but both are from adventures that haven’t yet been played. Instead, I’ll throw out a less-developed idea just to illustrate the power of the technique.

    Let’s take a Troll and translate it into Sci-Fi using nanotech repair mechanisms housed within the humanoid organism. There would be certain aspects of the ‘repaired’ creature that would be user-customizable, and some that aren’t. Increased strength, size, and resilience? No problem. Diminished intellect and agility? Suggestive of nerve damage as a consequence of the nanotechnology, and maybe neuron damage to boot. That suggests an inverse relationship between Strength / Resilience and Intellect / Nimbleness: it might be that every time the nanotech repairs the body, it gains a point of strength and/or resilience, but loses a point of intelligence and/or dexterity. Slowly, the character becomes more brutish – and more dangerous.

    This treatment doesn’t say anything about the ‘racial’ traits or the social groupings. The social groupings would probably be generic aspects of the sub-culture that embraced nanotech / cyberware, while the racial traits would be shaped by the places such ‘modified people’ hang out and the jobs they perform, and that would reflect their integration (or the lack thereof) within the broader society. That in turn suggests either a game setting that leans heavily into cyberpunk tropes, or one that is actively trying to avoid going down that path.
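
    The inverse trade-off is simple enough to sketch as a mechanic. This is a minimal illustration only – the class name, attribute names, and numbers are all my own inventions, not from any published system:

```python
# Sketch of the nanotech trade-off: each major repair trades mind for muscle.
# All names and numbers are illustrative assumptions.
class NanotechHost:
    def __init__(self, strength=10, resilience=10, intellect=10, agility=10):
        self.strength, self.resilience = strength, resilience
        self.intellect, self.agility = intellect, agility
        self.repairs = 0

    def repair(self):
        """Each major repair adds brawn and erodes mind and coordination."""
        self.repairs += 1
        self.strength += 1
        self.resilience += 1
        self.intellect = max(1, self.intellect - 1)
        self.agility = max(1, self.agility - 1)

troll = NanotechHost()
for _ in range(8):          # eight serious injuries later...
    troll.repair()
print(troll.strength, troll.intellect)  # -> 18 2 : brutish and dangerous
```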

    In my Zenith-3 (superhero) campaign, Earth-prime has started down the road to cyberpunk but there is considerable resistance, not least of which stems from a number of unique illnesses / diseases / conditions (some of them physical, some mental) that exist and act as a deterrent to many. There are a few fatalists who believe that cures will eventually be found, and that upgrading now gets them in on the ground floor of the next stages of human evolution; there are some who see the diseases as a natural price that has to be paid if ordinary people are going to compete with superheroes and villains; and there are some who are simply overconfident (“it will never happen to me”). Philosophy colliding with Futurology in a Superhero context. These ‘trolls’ would fit right in.

    There can even be an argument made in reference to the purported ugliness of a Troll. Characters who opt for this type of augmentation will probably start out fairly average in appearance, maybe even a little sickly. At first, the gains would be positive – they would put on muscle mass and become more attractive as a result. That wouldn’t last; they would slowly become more grotesque in appearance, a trend enhanced by the natural occupations of this sort of augmented person – bouncers and enforcers and the like. All professions in which intimidation is an asset. And so most of them slide down a slippery slope into a more horrific appearance.

    We can make such a character unique by making them friendly, polite, soft-spoken, with exquisite manners. The dichotomy of such a social paragon being an ugly SOB who does an ugly job does the rest.

    Campaign Variations

    I’ve often discussed my desire to make no two campaigns that I run exactly alike. Sometimes, where they are both set in the same game world and operating concurrently in game time, the distinguishing features may have to be more nuanced and less casually-obvious, but they are still there.

    This is particularly the case when it comes to the different D&D campaigns that I’ve run over the years. I want Elves and Dwarves and Orcs and so on to be different in each, and to have some reason behind those differences. Collectively, those racial differences manifest from conceptual differences within the world and its history. Put both together, and each campaign takes on its own unique flavor.

    It should be obvious that this technique not only assists in creating such unique reinterpretations, it helps spotlight them in play. That’s both a win and a bonus, in my book.

    GM Individuality

    I’ve often made the point that each GM is a little bit different from the next. No two of us think exactly alike. Over time, the strengths, weaknesses, likes and dislikes, etc of the individual start to come together in a unique GMing style, one that often transcends campaigns and genres and game systems.

    There is a corollary to this perspective – not every game system will suit every GM equally. Some game systems will simply be a complete bust; others may flex ‘muscles’ that the GM didn’t know they had, enhancing and developing their capabilities; and some will fit them to a T, while the GM (metaphorically) next door can’t cope with that system and doesn’t see its attraction.

    Because this process enables individual GMs to craft individual interpretations of common elements like races or species, it facilitates the expression of a GM’s particular style – even before they know what that style is. Without that knowledge as a guide, there will probably be false starts and missteps along the way – but those would happen anyway. We make mistakes and we learn from them.

    The Developmental Sandbox

    The final big-picture point that I want to make is that you can start with a completely generic setting and evolve it, one step at a time, using this process. Eventually, you will find that you have developed your own singular ‘take’ on that setting – your “Eberron” might be completely different to another GM’s “Eberron”, your “Middle Earth” unique, while still deriving from and reflecting the source material.

    The process allows for the development of singular elements within a sandboxed game narrative, permitting the incorporation of creativity in greater or smaller doses – but one at a time, making assimilation of the distinguishing features easier for both GM and players.

    That’s not nothing, either.

A Powerful Tool

In conclusion, then, this is a powerful tool for character creation, one that expands the mythos surrounding the specific races, classes / archetypes, and social groupings to which the individual belongs. Rather than being confined by pre-packaged concepts of those character facets, it expands them to accommodate greater diversity and richness of material within a campaign.

Throw in a few side-benefits along the way, and it should be easy to see why it’s worth your attention.


All About Ripple Plotlines


Ripple plotlines use domino chains that feed back to the main plotline while cascading out to trigger other plotlines in a chain reaction. They can start from the most apparently inconsequential act or decision and grow until whole Kingdoms hang from them like Christmas baubles.

Today (as I write this) is Australia Day, our equivalent of the 4th of July, and yesterday was unbearably hot and humid, so I got nothing done. Which meant, of course, that I would need something fairly quick and simple for this week’s topic.

I’ve given a pretty fair description of what a ripple plotline is in my introduction, so instead let’s look at the anatomy of one.

Anatomy Of A Ripple

Every ripple starts with an act or decision, which can be described in an abstract manner as the ‘seed’. This is similar, but not identical, to an adventure seed in that there are some very specific requirements that it has to possess. Specifically, it has to affect others in a number of different ways.

Each of those effects is a Primary Strand of the plotline. At least one primary strand has to affect a PC – usually directly, though indirect effects can work, too.

Each group or individual affected is a secondary node, and each secondary node has to have the need to act or react to the Seed Event. That, too, is a requirement of the Seed that has to be met in order for this to qualify as a Ripple Plot.

Those secondary nodes give off consequences of the decisions. One of these “Secondary Strands” has to connect back to the Seed Originator in some way, and another has to impact one or more PCs in a specific fashion. I’ll come back to that detail in a little bit.

The rest of the Secondary Strands can either connect to the campaign background, creating a change in that background moving forward, or can connect with a Tertiary Node. That tertiary node will cast off Tertiary Strands, which – just like the Secondary Strands – have to affect the original Seed Originator, and either the background, or one or more PCs, or both.

A ripple plotline grows via a chain reaction of dominoes falling, spreading outward like ripples on a pond – hence the name.
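
The anatomy just described lends itself to a simple tree of nodes and strands. A hypothetical sketch (all names and reactions below are invented for illustration):

```python
# A ripple plot as nested nodes: each node is someone affected, each strand a
# consequence cascading outward. All names here are illustrative inventions.
from dataclasses import dataclass, field

@dataclass
class Node:
    affected: str                                # who this strand touches
    reaction: str                                # how they act or react
    strands: list = field(default_factory=list)  # consequences cascading outward

seed = Node("Seed Originator", "disbands the tax service", strands=[
    Node("Merchants' Guild", "collects its own members' taxes", strands=[
        Node("Seed Originator", "revenue falls short of projections"),  # feeds back
        Node("PCs", "hired to audit a suspicious guild"),               # touches a PC
    ]),
    Node("Thieves' Guild", "signs up as 'collectors'", strands=[
        Node("Background", "street-level tax rates quietly rise"),
    ]),
])

def count_strands(node):
    """Total strands in the ripple, counted recursively."""
    return len(node.strands) + sum(count_strands(s) for s in node.strands)

print(count_strands(seed))  # -> 5
```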

The Binding Agent

One of the characteristics of a Ripple Plot is that, initially, it’s about something other than the ripple plot itself. It starts in the background, just a backdrop to the “Through Plot” which serves as a Binding Agent. As ripples intercept the participants in this “Through Plot”, it gains momentum and significance, until the through plot is less important than the ripples that are rewriting the adventuring environment around the characters.

I’ve labeled this a ‘binding agent’ because it ties the narrative together, it ties the PCs to the ripples, and it gives the whole thing a momentum that it would otherwise be lacking. These are important functions, and it follows that the choice of through plot can be just as important as the Ripple Seed.

So what should you look for in a Through Plot?

In a word, discontinuity. It has to be something that starts and stops and then resumes, so that in the intervals in between, the ripples have time to manifest. A dungeon that has to be completed in sections, with rest and recovery away from the dungeon in between, for example. Or a courier job in which a message has to be carried to several different noblemen, and the replies brought back to the employer. Or maybe, instead of noblemen, it’s a particular character class or occupation.

The nature of the Ripple Seed

Some types of plots lend themselves readily and obviously to Ripple Plots, in particular political events / decisions. But these are often too obvious and too significant, causing the PCs to focus on them before the full impact has time to manifest; there’s a fine line to be walked.

A lot of GMs come up with the basic idea, or some variation of it, on their own, usually based around a political seed, and this effect then causes them to lose control of the ripple plot. They then write the whole thing off as an uncontrollable force within a campaign, and never discover the power that it can have from a more subtle Seed.

What’s really desirable is something that’s going to be minor to start off with and grow.

Timing is everything

I can best explain this point by offering up an example. Suppose our Ripple Seed is the notion of disbanding the Inland Revenue Service and contracting the collection of taxes out to public groups / agencies. The theory is that in a year or two, this will save so much money that the tax rate itself can be lowered.

Right away, there’s a potential problem – what if the PCs decide to become one of these contracted groups? There are two ways of avoiding this, and I would use them both. First, make the remuneration less than the existing tax collectors were being paid – a disincentive; and second, make sure the PCs are busy with something that looks far more important / useful / profitable than this before it is even an option.

That ‘something’, obviously, is the Through Plot. I might foreshadow the Ripple Plot with news of a new Advisor to the Government (the Throne in a Kingdom) who has privately proposed radical reforms of the tax code. This, of course, is only half-right; he or she is not advising changes to the Tax Code, only suggesting that such might become possible if this change is put in place. But it sounds both important and boring at the same time, and so will incline the PCs towards the Through Plot when it manifests.

The thing that makes this a suitable Ripple Seed is that there will be lots of different groups who will have different reactions. Some will embrace it, in a restricted manner – Professional Guilds, for example, collecting the Taxes from their members, and using the revenue paid to them for performing this service to lower their guild fees. Churches might embrace it, mandating that the congregations pay their taxes on the collection plate. Thieves’ Guilds might also embrace it, as a way of hiding their thugs in plain sight, giving them a veneer of respectability, and fattening their coffers by ‘increasing the tax rate’ (unofficially, of course) – not to mention the money-laundering possibilities. Various bandit groups might sign up as a way of gaining, or regaining, legitimacy.

Other groups will oppose it. Some might see the potential for corruption; others, the prospect of confusion and/or tax avoidance. Winemakers and Vintners might claim that they’ve paid their taxes through their guild (when they haven’t) and so don’t need to pay agency X – whoever it is that comes around demanding tax payments. Still others may see it as a way for their neighbors to justify intruding into their privacy. How do you prove that you’ve paid your taxes – by showing a token of some sort?

“Psst, hey, kid — wanna buy a token? I can give a discount for lots of six or more. Almost as good as the real thing, I promise.”

Instead of a central authority, there would be dozens of smaller authorities – and that makes any inequities in the system harder to remove by increasing the bureaucratic burden. Some groups might take matters into their own hands – if the merchants feel that sales taxes are high enough to stifle business opportunities, they might arbitrarily reduce the amounts they are collecting to what they consider ‘reasonable’.

Some groups may hear rumors of such goings on and decide to do likewise. Others will hear such rumors and decide that the guild in question is elevating themselves and their prosperity over that of others, and start acting against the guild who is the subject of the rumor.

Everyone will have an opinion of the idea, of the way it is implemented, of the groups backing it, of the groups opposing it, of the groups trying to make the system fairer and those trying to take advantage of it. Those opinions will shape or reshape the implementation of the idea, and some will shift from ardent supporters to vehement opponents. “I was all for this until the Seafarer’s Guild signed up to collect taxes from the docklands. You can’t trust them as far as you can throw a warehouse.”

Trust. In this Ripple Plot, trust becomes a taxable quantity that not everyone can afford.

And, at the end of the day, when society starts coming apart at the seams, it can all be undone by decree the same way as it was implemented. The old Tax Collectors can be rehired – at increased pay, no doubt – and taxes will go up to cover this increased cost. That won’t put the genie back in the bottle – the consequences and repercussions will take years to unravel and stabilize. And lots of different groups will have entirely changed attitudes toward the government who foisted this shambles off onto the public.

The Key To Success

Ripple plots succeed or fail, live or die, according to the extent to which the characters are directly affected. Those impacts should start small and innocuous, as already noted, but should compound one on top of another.

Ripple Plots. Everyone should know how to make them and how to use them.


A Fairy Colony In Zenith-3


What is a Fairy Colony, and why should you never annoy one? Or attack one? I didn’t want to go full “Fey” so I came up with something different…

Pieces Of Creation Logo version 2

In the Zenith-3 superhero campaign, there’s a Fairy Colony at the bottom of their back yard. It was placed there years ago (real time) but, until late last year, no details had ever been worked out. Heck, there wasn’t even a functional definition of a fairy, let alone a Fairy Colony! But, with play set to resume next week, that had to change; so I wrote up some concepts, and then added to them, and added to those, and so on. None of my players have seen this yet (and the details in that specific campaign’s version are slightly different, anyway). That’s because I’ve adapted this to work with D&D/Pathfinder, even though it remains, fundamentally, a concept for use with Hero Games.

Fairy Physical Structure

Fairies average 6-12 inches (15.24-30.48 cm) in height.

They trend towards being slightly built, though a few are stockier. Weight ranges from 40 to 320 g; stockier examples weigh about 1.45 times as much.

Their wingspan is typically 2.4 x their height (each wing = 1.2 x height), and their wings resemble those of a dragonfly. They fly at peak speeds of up to 45 mph (72 km/h) at 6″ tall and 65 mph (105 km/h) at 12″. Divide these by 1.45^0.5 = 1.204 for stockier builds.

Cruising speed ranges from 20-25 mph (32.2-40.2 km/h) at 6″ to 30-35 mph (48.2-56.3 km/h) at 12″.

They have three fingers and a thumb on each hand. As a result, they tend to number things in base-8.

    1=1
    10=8 (two hands)
    20=16 (four hands)
    100=64 (a great-hand)

etc.
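Since fairy counting is base-8, converting a human quantity to a fairy numeral is just octal conversion. A minimal sketch (the function name is mine, purely illustrative):

```python
def fairy_numeral(n: int) -> str:
    """Render a quantity as a fairy (base-8) numeral.

    Three fingers and a thumb per hand gives 8 counting digits,
    so fairy numerals are simply octal.
    """
    return oct(n)[2:]  # strip Python's '0o' prefix

# The list above: 8 -> '10' (two hands), 16 -> '20' (four hands),
# 64 -> '100' (a great-hand).
```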

At 12 inches tall, a fairy is effectively a small, sentient projectile. Flying at 65 mph, an impact would be significant – carrying about the same kinetic energy as a professional pitcher’s 100mph fastball.

Wearing a pointed helmet or using a pole arm, they become the equivalent of a living AP round (at relatively low velocity relative to a gun, but still…)
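As a sanity check on the fastball comparison, here’s a quick back-of-envelope sketch. The masses are my choices, not figures from the text: the heaviest listed fairy weight (320 g) and a regulation 145 g baseball.

```python
MPH_TO_MS = 0.44704  # metres per second per mile per hour

def kinetic_energy_joules(mass_kg: float, speed_mph: float) -> float:
    """KE = 1/2 m v^2, with speed given in mph for convenience."""
    v = speed_mph * MPH_TO_MS
    return 0.5 * mass_kg * v ** 2

fairy_ke = kinetic_energy_joules(0.320, 65)      # ~135 J
fastball_ke = kinetic_energy_joules(0.145, 100)  # ~145 J
```

The two figures land within about 7% of each other, so the comparison holds up.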

The wingspan of the larger fairies handicaps them in forest and indoor settings, but they dominate the open skies. The smaller fairies are far more maneuverable and dominate tighter spaces. As a species, they take advantage of these facts – shorter fairies are melee fighters, while taller fairies use javelins and bows.

Because of the high speeds and small size, these fairies would likely have an incredibly high metabolism, requiring constant intake of high-energy foods (nectar, fats, or sugars) to fuel their flight muscles. They magically concentrate food daily. They will eat once when the moon rises, twice more at four-hour intervals, and have a half-meal when it sets (to give them an energy reserve to call upon if attacked in the night). Their preferred diet is tree sap (especially of the maple variety), leaves, and fruit. Most flowers do not produce enough nectar to do more than add flavoring, but they prize them for that function. Especially brave or hungry fairy colonies may raid a beehive.

Fairy Social Structure:

They consider themselves a single clan or “colony”. When their numbers grow too large, the colony will split and have a big fight to see who gets to stay and who has to look elsewhere. Normally, about 2/3 will refuse to fight, either choosing after the outcome is decided which group they will affiliate with, or volunteering to relocate, regardless.

How many is too many? The real number is somewhere between 500 and 1000 adults, but most Kings pick a number between 100 and 500 with which they are comfortable. Beyond a few hundred, you stop knowing everyone as individuals very well, and past about 500, you start losing track of individuals completely – and social cohesion and relationships are essential to a Fairy.

Fairies hold grudges for decades, if not longer, as hot and passionate at the end as when the incident is fresh. They are easily placated, however, if this is done sincerely. For the most part, they simply want to be left alone. And party. And celebrate nature. And socialize. And gossip about each other (usually in a friendly way).

Then, too, in every generation there are a few really mean and nasty individuals – bullies and the like. If the colony is small in size, there won’t be many of these, and they will be easily quelled and controlled by the society at large; once numbers become more significant, society begins to splinter into subcultures, and these louts can become a gang, sparking difficulty with those living around the colony as well as internal strife. They can become a significant problem for the colony.

Four times a year, on the second full moon of the season, the Fairies have a celebration with an outsider as guest of honor. This outsider is chosen by a process called the Fabrinelle, a kind of treasure hunt through the surrounding lands. To be chosen, the person must be a true lover of nature. At the end of the night of wild celebrations, the guest is given a gift of some sort and an honored role in Fairy Society; he or she may call upon the Colony to aid them in some struggle or task that is beyond them. This power, once used, is lost forever.

On rare occasions, a guest may wish to remain with the fairies permanently. It is up to the King to determine if this is possible, and to make any arrangements necessary, but his primary task is to ensure the security of the Colony; there are times when this makes the request impossible. Some Kings, especially those without the guidance of a Queen, have made poor choices in this regard, such as replacing the child with a simulacrum, a changeling, who will fall ill and seem to ‘die’ over the next month or so.

Of secondary importance is that the request must not create conflict between the family of the guest and the colony.

Those who are permitted to remain are transformed permanently into fairies and become members of the colony like any other.

Fairy Political Structure:

On paper, it’s a Monarchy – but Fairies don’t use paper. Kingship rotates through the male population on a weekly basis. The Kings from the previous two weeks and the one who will assume the throne next week form a council of advisors, providing some semblance of continuity. If a King is wed, his wife becomes Queen. The role of the Queen is to provide a conduit between the rest of the colony and the throne. She is also in charge of the recreational activities of the colony, usually some one-in-all-in social occasion.

It is when a King is unwed that things can get messier. The King has the authority to choose as his consort any unwed female who will have him, and she will then act as Queen for the remainder of the King’s Reign, but she has no training or authority to organize events, so the King does that himself – usually more masculine activities like hunts.

Fairy Activity Orientation:

As a general rule, fairies are neither nocturnal nor diurnal – they rise with the moon and set with it. But they can function outside these hours at need. To human observers, their daily cycles drift by about 50 minutes later every earth day; one week they are active at midnight; two weeks later, they are active at noon.

During the New Moon phase, the fairies rise and set almost exactly with the Sun. This is likely their most stressful time – they are active when the “Big People” (humans) and daylight predators are most active, and they lack the cover of night.

Clothing and Equipment:

Fairy clothing is generally made of leaves that have been treated with tree-saps to stiffen them and bind layers together, then magically hardened. Their very best armors are as protective as those used by human SWAT teams.

They carve many implements from wood and then preserve them with lacquers. Because of their small size, these can possess incredible delicacy and detail.

They forge metal through (magical) transmutation and melt/cast/smith it using magical fires. A single “blacksmith” might be one artisan and 15 or 31 others generating the heat. 256 fairies casting in unison can produce brief bursts of plasma-cutter temperatures.

Domiciles & Structures

Edible tree sap isn’t the only type that Fairies use. They dry sap out into flat panes, usually sandwiched between two leaves, building up layers which they treat magically to make them more resistant and resilient – at least as hard as granite, depending on the number of layers. These are then assembled and joined to construct homes and other structures.

The most common practice is to suspend these from tree branches, but every Colony has a different approach. The most grandiose structures may be suspended from multiple sides enabling a much larger construction – these can be full-on medieval palaces in miniature. But most structures are smaller and more humble.

The simplest structures are round, like beehives.

By far the favorite place to reside if one isn’t entitled to a ‘palace’ or ‘castle’ is in the hollow of a tree. These can be extensively and elaborately carved internally while little or nothing is visible from the outside save some internal illumination through windows.

They can sharpen sticks by coating them in resin, wrapping a leaf around it, and transforming it in the same way. A ‘forest’ of 3-6 inch spikes surrounding a colony for a couple of feet – with gaps big enough for the feet of any human(oid) visitors – is enough to discourage most predators; these spikes are needle-sharp and capable of penetrating the hardest hooves. If they have been attacked in the past, other refinements may be added to inflict poisons or diseases on hostile entities. This is also how they make their javelins and arrows.

This often makes a colony in a relatively safe environment confident enough to build dwellings on the ground as well as aloft, though only the lowest social classes would live there.

Fairy Magic

This is generally more elementary than that of a human mage, and more elemental, but it is capable of great subtlety, and backed by enormous power, because the whole clan participates in the casting. They may only have 1 mana point each, but 500 or 600 fairies cast spells more powerful than most human mages can even contemplate.

They recover that 1 Mana point almost instantly – it actually takes five or six seconds.

Fairy Spells tend to blow some aspect of the spell out to extremes.

Base area is proportionate to their size, so about 6 non-game inches to a hex.

In practical terms:

    log [Area (square feet) x 12 / 6] / log(2) = area modifier.

So double the area (or less) for +1 modifier. or half area for -1 modifier.

    EG: 10 sqr feet: 10×12/6 = 20; log(20)/log(2) = 4.3 so this is a +5 modifier.

Note that you don’t need a calculator. 2; 4; 8; 16; 32. 32 is more than 20, so we stop doubling. Count the number of doublings: 5. So x20 = +5 – and so is any multiplier from x17 to x32.

  • 1 square foot = +1. This is the area to affect a human-sized individual.
  • 10 sqr ft area is x20, so +5.
  • 20 sqr ft area is x40 = +6.
  • 100 sqr ft is x200 = +8.
  • 1000 sqr ft is x2000 = +11.
  • 10,000 sqr ft is x20,000 = +15 (a large stadium).
  • 1 square km = 1.55e+9 sqr inches = x1.55e+9 / 6 = x258,333,333.3 = +30.
  • 1 sqr mile = 4.01451e+9 sqr inches = x 4.01451e+9 / 6 = x669,085,000 = +30 (both fall within the same power of two).
  • 25 sqr km (5km x 5km) (a moderate city) = x6,458,333,333.3 = +33
  • 22.7 sqr miles (Manhattan island)= x15,188,229,500 = +34
  • 100 sqr miles (a larger city) = x66,908,500,000 = +36.
  • 12,367 sqr km (Greater Sydney) = x3,194,808,333,333.3 = +42
  • 30-40,000 sqr km (small Western European Country) = x7,750,000,000,000 – x10,333,333,333,333.3 = +43 to +44
  • 100,000 sqr km (average Western European Country) = x25,833,333,333,333.3 = +45
  • 540,000 sqr km (France) = x139,500,000,000,000 = x1.395e+14 = +47 (barely)
  • 7,660,000 sqr km (continental US) = x1.978833e+15 = +51
  • 255 million sqr km (Earth Hemisphere) = x6.5875e+16 = +56
  • 510 million sqr km (Earth) = x1.3175e+17 = +57
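The doubling trick can be written as a one-liner: apply the Area x 12 / 6 formula to get the multiplier, then round log-base-2 up to the next whole number. A sketch (the ceiling rounding is my reading of the worked examples):

```python
import math

def area_modifier(square_feet: float) -> int:
    """Area modifier: count doublings of the 6-inch base hex.

    Equivalent to the doubling trick above: Area x 12 / 6 gives
    the multiplier, then log2 of that, rounded up.
    """
    multiplier = square_feet * 12 / 6
    return math.ceil(math.log2(multiplier))

# Matches the list: 1 sq ft -> +1, 10 -> +5, 20 -> +6,
# 100 -> +8, 1,000 -> +11, 10,000 -> +15.
```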

Duration: the base is instant (+0), then 1 second (+1), as usual. The calculation is the same, as you will observe below.

  • 1 minute = 60 sec = x60 = 1+log(60)/log(2) = +7.
  • 5 mins = 300 sec = x300 = 1+8.23 = 9.23, rounding up to +10.
  • 30 mins = 1800 sec = x1800 = +12.
  • 1 hr = 3600 sec = x3600 = +13.
  • 1 great-hand of minutes = 64×60=3840 sec = x3840 = +13.
  • 1 hand of life = 4 great-hands of minutes = x3840x4 = x15360 = +15
  • 6 hrs = 21,600 sec = x 21,600 = +16.
  • 1 sky-cycle (lunar rise to lunar set) = approx. 12 hrs 43 min = 45780 sec = x45780 = +17
  • 1 long-day (max lunar rise to set, occurs every 18.6 years) = 18.5 hrs (max) = x66600 = +18. Most will be +17.
  • 1 day = x24x60x60 = x86400 = +18.
  • 1 Fairy-day = x(86400+50) = x86450 = +18
  • 1 Fairy-week = x7x86450 = x605,150 = +21
  • 2 hands of fairy days = 1 half-cycle = x8x86450 = x691600 = +21
  • 1 hand of hands of fairy days = 1 cycle = x1,383,200 = +22
  • “15” cycles = 13 cycles = 1 season = x13x1,383,200 = x17,981,600 = +26
  • 1 hand of seasons (1 year) = x4x17,981,600 = x71,926,400 = +28
  • 1 hand of years (4 years) = x4x71,926,400 = x287,705,600 = +30
  • 2 hands of years (8 years) = x2x287,705,600 = x575,411,200 = +31
  • 2 hands of hands of years = 32 years = 1 Fairy generation = 2x4x4x287,705,600 = x9,206,579,200 = +35
  • 1 great-hand of years = 2 Fairy Generations = 1/4 of an age = 1.841316e+10 = +36
  • 1 hand of great-hands of years = 8 Fairy Generations = 1/2 an age = x4x1.841316e+10 = x7.365264e+10 = +38
  • 2 hands of great-hands of years = 16 Fairy Generations = an age = x2x7.365264e+10 = x1.4730528e+11 = +39
  • 1 great-hand of great hands of years = 4096 years = an ‘eternity’ = x4096x71,926,400 = x2.946e+11 = +40
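The duration scale follows the same pattern – instant is +0, 1 second is +1, and every doubling of the duration adds +1. As a sketch (my formalization of the list above; treat the rounding as an assumption):

```python
import math

def duration_modifier(seconds: float) -> int:
    """Duration modifier: +1 for 1 second, +1 per doubling thereafter."""
    if seconds <= 0:
        return 0  # instant
    return math.ceil(1 + math.log2(seconds))

# Matches the list: 1 minute -> +7, 1 hour -> +13,
# a sky-cycle (45,780 s) -> +17, 1 day -> +18.
```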

Difficulty in breaking spells:

  • Caster level required +1 = +1
Adapting to D&D Spells:

    Colony Size / (Spell Level* +1) = maximum total pluses (round down).
    Area pluses + Duration pluses + Difficulty-in-breaking pluses = actual total pluses spent.

      * includes any additional caster levels to achieve the desired effect level.

Kings can choose to cast with a lower total; the above sets maximum levels.

A Fairy Queen. Image by Jim Cooper from Pixabay, cropped by Mike

As a general rule, choose the spell effect that you want and then select the spell that best fits. “Bless” and “Curse” are frequent choices.

    EG “May it rain on you, wherever you roam, regardless of cover, for an entire season.”
    Curse, 1st level spell. Human sized individual. Colony of 85 faeries.
    85 / (1+1) = 42.5, rounds to 42. So an individual could be cursed for more than 4096 years. But let’s play it safe (for the colony) and limit the curse to a season (+26). And let’s spend +10 adding to the caster level requirement of any mage or cleric who attempts to lift the curse, for a total of 1 (area) + 26 (duration) + 10 = 37. This leaves 5 unallocated.

Casting Consequences

A colony casting a spell is literally doing so with their life-force. It’s not done trivially.

    (30 x actual total pluses / maximum total pluses) + spell level + 10 = % of colony half-killed = 2 x % of colony killed (round both down).

    % colony killed can be reduced by X% by increasing the % half-killed by 2 x X% and reducing the number of pluses AFTER the above calculation by 0.5 x X.

    EG Continued: 30 x 37/42 = 26%. 26+1+10=37% half-killed and 18% killed. We can use the 5 pluses remaining to reduce the death penalty by 10, to 8%. This adds +20% to the number half-killed, for totals of 8% killed and 57% half-killed.

Not a trivial exercise at all; this curse is right at the limits of what a colony this small can do.
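The bookkeeping above can be condensed into a couple of helpers. This is a sketch of my reading of the rules (the truncation and ceiling choices are assumptions), reproducing the 85-fairy curse example:

```python
import math

def max_pluses(colony_size: int, spell_level: int) -> int:
    """Maximum total pluses: Colony Size / (Spell Level + 1), round down."""
    return colony_size // (spell_level + 1)

def casting_cost(spent: int, maximum: int, spell_level: int, mitigate: int = 0):
    """Return (% half-killed, % killed, pluses consumed by mitigation).

    Mitigation: each -1% killed costs 0.5 pluses and adds +2% half-killed.
    """
    half = int(30 * spent / maximum) + spell_level + 10
    killed = half // 2
    half += 2 * mitigate
    killed -= mitigate
    return half, killed, math.ceil(0.5 * mitigate)

cap = max_pluses(85, 1)  # 42
half, killed, cost = casting_cost(37, cap, 1, mitigate=10)
# -> 57% half-killed, 8% killed, 5 pluses spent on mitigation
```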

    Comparison example: Colony of 170 (twice the size): “May it rain on you, wherever you roam, regardless of cover, for an entire YEAR.”
    Curse, 1st level spell. Human sized individual.
    170 / (1+1) = 85. Duration: 1 year (+28). +20 caster level requirement of any mage or cleric who attempts to lift the curse, for a total of 1 (area) + 28 (duration) + 20 = 49. This leaves 36 unallocated.

    30 x 49/85 = 17% half killed, 8% killed. Reduce the 8% to 0: uses 4 additional pluses, plenty in reserve. Totals: 0% killed, 17+16=33% half hit points (recovered at 1 per day as usual).

Not only is this a nastier spell (it lasts a year and is harder to dispel), the colony is able to cast it with relative impunity.

Let’s nasty it up a little more, so that it not only affects the individual but anyone physically close to them.

    Comparison example: Colony of 170 (twice the size): “May it rain on you and any who approach you, wherever you roam, regardless of cover, for an entire DECADE.”
    Curse, 1st level spell. Human sized individual + surrounds = 5′ x 5′ area.
    170 / (1+1) = 85.
    Area: 5′ x 5′ = 25 sqr ft; 25 x 12 / 6 = x50; log(50)/log(2) = 5.64, so +6.
    Duration: A decade isn’t on the list, but 8 years is – value of +31. So a decade will be +32.
    +23 caster level requirement of any mage or cleric who attempts to lift the curse.
    Total of 6 + 32 + 23 = 61. This leaves 24 unallocated.

    30 x 61/85 = 21% half killed, 10% killed. Reduce the 10% to 0: uses 5 additional pluses, still plenty left over. Totals: 0% killed, 21+20=41% half hit points (recovered at 1 per day as usual).

Half-killing almost half the colony is about as far as it’s reasonable to go; anything more risks the colony’s survival, should a predator find them.

    One more example:
    “May every building you enter burn to the ground for the rest of your natural life” (man, the King must really be pissed off at the target!)
    Colony Size 400.
    Spell: Fireball (3d6), Level 3 spell, plus 2 caster levels to get 3d6 = level 5.
    Max Bonuses = 400 / (5+1) = 66.
    Area: 20′ x 20′ = 400 sqr ft; 400 x 12 / 6 = x800; log(800)/log(2) = 9.64, so +10.
    Duration: +37.
    Dispel Difficulty = +7.
    Total = 10+37+7 = 54, leaves 12.

    30 x 54 / 66 = 24% half-killed, 12% killed. Protect the 12% = +6 pluses, 6 in reserve.
    Net cost: 24+24 = 48% half hit points, no fatalities.

Note that this is right on the edge for a colony of this size, which is close to as big as they come. Maybe the colony could have afforded another +5 dispel difficulty. But most spell-casters would be disinclined to help if the practice of consulting them burned down their houses, so maybe that’s not necessary.

Personal Magic

In addition to the major castings above, which always involve a ritual and a whole colony, most fairies are capable of smaller, more temporary ‘personal magic’ – making vines and tree limbs light up with glowing ‘fairy light’, shrinking visitors so that they can enter fairy homes, and so on. No such magic effect can last for more than a day, and most last far less. It is ten times more efficient to sustain an existing spell than to cast it anew.

Fairy Personalities

Fairies are generally lighthearted and friendly, though some have nasty senses of humor. A few – generally marked for greatness within their society as a result – are capable of being more serious and more judgmental, and exhibit a gravitas that far outweighs their stature. Relatively few are the sly, cunning, scheming types; most are happy-go-lucky and take life one day at a time, as it comes to them.

These moods and attitudes vanish instantly when the colony feels under threat. Fairies are capable of an anger that has to be seen to be believed, and can sustain it for generations. Hillbilly feuders have nothing on these folks when someone earns their enmity. Entire colonies have uprooted and moved simply to be in a better position to harass someone the Fairies deem worthy of that level of hostility – though it is more common for a colony to split over such an issue.

One of the fastest ways to earn such enmity is a failure to respect nature. Fairies have no theology as such, but they are fiercely protective of the environment around them. This is not all that surprising: as the land on which they abide sickens or is befouled, so the fairies succumb to ill-health. They are bound to the life of the nature which surrounds them, and they guard and protect it as fiercely as they guard and protect themselves.

Dishonesty and misrepresentation are the second-fastest ways to arouse a Fairy’s ire. A Fairy’s word is inviolable; one would die before breaking it, sacrificing their entire family if need be. And they don’t care about ‘the letter of the law’; they operate on the intention of the principle as spelled out in the original agreement. They never forget the exact wording of an agreement, and never forget, ignore, or obfuscate the intention behind it; if an agreement is no longer fit to serve that purpose because circumstances have changed, or if the intended purpose has become out of date, the whole agreement needs to be renegotiated – it cannot be amended. At the same time, Fairies have no equivalent of the human sense of Honor, because that implies dishonor, which is unthinkable in a Fairy. They are natural seekers of Justice.

Educated Fairies

Fairies with natural Gravitas are natural leaders, and are groomed for that role. About 1% of the population are natural geniuses (by Fairy standards), with two or sometimes even three times the intelligence of the smartest ‘typical’ fairy. It is very common for these to get their initial education by listening outside the windows of human institutions, becoming fascinated by words, stories, and higher learning. When recognized, if it is socially acceptable to the culture outside the colony, they may even be sent to study at a more advanced institution or at the feet of a non-Fairy master of some sort. Eventually, these ‘expatriates’ return to the colony and learn to apply what they have learned – be it the cultivation of food stuffs, new construction techniques, new science, or whatever. They frequently become advisors to the crown – whoever happens to be wearing it this week.

Note that they adapt the knowledge they have gained to Fairy Society and its benefit, and not the other way around. Anything learned that requires a change in social structures or patterns has to be put to the colony as a whole, and may not be implemented until all not only understand it but approve of the change. Anything that can’t be used within this structure is discarded.


The Power Of 1 on Root R


Today, I offer a new technique for rolling multiple dice many times with great efficiency. Any RPG can benefit from that!

Sometimes, the shortness of the road can make up for rougher conditions. Image by Nataly from Pixabay

I hope everyone had a wonderful Christmas break. Mine was great, though not without its challenges – but I have evidently weathered them, because here we all are, in a bright and shiny New Year!

This isn’t going to be a long post – but it is going to be a profound one. In the adventure I’m currently working on for the Zenith-3 campaign, a situation arose in which a character was going to be exposed to multiple minutes of an environment doing damage to him every turn.

Not just a few dice, but a lot of dice. Fortunately, he also has a lot of protection. How many dice, and whether or not that protection was going to be enough, would depend on what the character chose to do.

(Note that I’m being circumspect because this adventure hasn’t been run yet).

He could choose to head into the danger and incur a higher rate of damage. He could try to get out of danger by the shortest possible route – which also incurs that higher rate of damage but only for a relatively short time. Unless he gets lost along the way – a potential real danger. He has other options, as well.

So I didn’t know how many dice a round he would be taking, but I knew this: there are 3 twenty-second rounds in a minute (or 6 10-second rounds – the latter is our default, the former something I’m experimenting with). That’s 15 rolls of 8-to-10d6 every five minutes. And the character could be waiting in this situation for 20, 30, 40 minutes or more.

120 or more rolls of 8-to-10 d6 each. And apply defenses to each. And calculate damage from each. And accumulate that damage from each. And recover some of that damage from each.

It might take as little as two minutes to do each, but it would probably be more. FOUR HOURS of making rolls while everyone twiddled their fingers.

There had to be a better way. And then I thought of one, and got Google Gemini to help flesh it out and make it real.

The Principle

As you make more and more rolls, they become more and more inclined to average out. That’s one of the abiding principles harnessed by The Sixes System, and it’s something I understood very clearly. So why not leverage that fact? Roll ONCE and apply a mathematical manipulation to that result to get the outcome of R rolls.

Sounds incredibly simple, doesn’t it? Well, it’s not quite that easy, but it’s pretty close to it.

The procedure

  1. Roll Once.
  2. Subtract the average roll to get Delta.
  3. Determine R, the number of Rolls that this calculation is going to represent.
  4. Multiply the Delta by 1/ (R^0.5).
  5. Add the average roll to the result.
  6. Apply any modifiers that are applicable to every roll. The result is the average result over the totality of R rolls.
  7. Multiply by R.
  8. Apply any other adjustments. Which gives you the total of effect at the end of those R rolls.

This sounds complicated, but in most RPGs it will be even simpler.

An example

Let’s pick… 8d6 damage, 12 rolls over 12 rounds. Defenses subtract 20 from the result. Anything that gets through the defenses also does x3.5 Stun damage. At the end of each minute, the character gets 25 Body back and 50 Stun. He has a pool of 120 HP and 240 stun to draw upon.

  1. I roll 8d6 and get 33.
  2. The average of 8d6 is 8 x 7 / 2 = 28. Delta = +5.
  3. R = 12.
  4. Delta x 1 / (R^0.5) = 5 / 12^0.5 = 5 / 3.464 = 1.4434
  5. Add the average roll 28 + 1.4434 = 29.4434.
  6. Subtract Defenses of 20 = 9.4434.
  7. Multiply by R = 12 x 9.4434 = 113.3208. Round in the character’s favor to 113. Multiply this by 3.5 for the Stun = 395 stun damage.
  8. If 3 rolls is a minute, 12 rolls is 4 minutes, and the character gets 4 x 25 = 100 HP back and 4 x 50 = 200 Stun back. So his losses at the end of the 4 minutes are 113-100=13 HP and 395-200=195 stun.

That took about 5 minutes to do – but I was typing explanations. If I just did it? 2 minutes, tops – 60 to 90 seconds, more likely.
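The worked example can also be scripted directly. A sketch (the function name and signature are mine), using the numbers above – 8d6, a seed roll of 33, R = 12, and 20 points of defenses:

```python
import math

def root_r_total(roll: int, dice: int, r: int, per_roll_mod: int = 0) -> int:
    """Estimate the total of r rolls of `dice`d6 from a single seed roll."""
    average = dice * 7 / 2                                     # mean of Nd6
    delta = roll - average                                     # step 2
    per_roll = average + delta / math.sqrt(r) + per_roll_mod   # steps 4-6
    return int(per_roll * r)                                   # step 7, rounded down

body = root_r_total(33, 8, 12, per_roll_mod=-20)  # 113
stun = int(body * 3.5)                            # 395
# After 4 minutes of recovery (100 HP, 200 STUN): 13 HP and 195 STUN lost.
```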

Another example

There are 25 men defending a castle wall. There are 200 archers attacking them, and each archer gets 2 shots per round. Each shot does 1d6 if it hits. The archers have a 3-in-20 chance of hitting, and half of those hits will strike the castle wall instead, so it’s effectively 1.5 on d20. Archers have to inflict 20 points of damage to kill a target.

There are a couple of preliminary calculations needed for this example.

  • 200 x 2 x 1.5 / 20 = 30 hits per round.
  • Distributed over 25 men, that’s effectively 1.2 hits per defender per round.
  • At an average of 3.5 points per hit, that’s an average of 4.2 damage per defender per round.
  • At 20 needed, that’s an average of 20 / 4.2 = 4.76 rounds of combat.
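The preliminary arithmetic above is easy to verify in a few lines (the variable names are mine):

```python
# Effective hit chance: 3-in-20, halved because half the hits strike the wall.
archers, shots_per_round = 200, 2
hit_chance = 1.5 / 20

hits_per_round = archers * shots_per_round * hit_chance  # 30 hits/round
hits_per_defender = hits_per_round / 25                  # 1.2 hits each
damage_per_defender = hits_per_defender * 3.5            # 4.2 (avg d6 = 3.5)
rounds_to_kill = 20 / damage_per_defender                # ~4.76 rounds
```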

That’s all well and good, but we don’t want averages – we want specifics.

So let’s do 5 x 6d6 per round for 4 rounds and see where we’re at (5 x 6 = 30).

  1. Roll 6d6. I get 18.
  2. The average of 6d6 is 6 x 7 / 2 = 21. Delta is -3.
  3. R = 4.
  4. -3 x 1 / 4^0.5 = -3 / 2 = -1.5.
  5. -1.5 + 21 = 19.5.
  6. 19.5 x 4 = 78.
  • 78 points distributed amongst 25 men is 3.12 points per man.
  • For every man who’s taken twice that, there will be one who’s taken half that. So 1.56 and 6.24.*
  • Repeat: 0.78 and 12.48.
  • Repeat: 0.39 and 24.96.
  • Six numbers, so out of every 6 defenders, 1 is dead, 1 is half-dead but still fighting, and 1 is wounded slightly.
  • 25 defenders, so the total is 25/6=4 dead, four half-dead, four lightly wounded, 13 virtually whole.
  • * Assuming the roll is symmetrical.**

    ** Okay, this isn’t quite true – if there’s a minimum result, the true symmetric spread runs halfway from the result toward the minimum on the low side, matched by halfway from the result toward the maximum on the high side. But this is a lot quicker and easier, and it works even when you don’t know what the maximum is, as in this case.

Specifics vs Averages – it makes a VERY big difference.

I would then run the same calculation for the defenders taking down attackers. About 4 minutes to run 4 rounds worth of siege.

But the next time around, I’d be informed by the results of the first run and increase R to 6 or 8, and run the attack in bigger ‘chunks’ of time.

Useful R values

If you can arrange it, the following R values are especially convenient, for reasons that should be obvious: 4, 9, 16, 25, 36, 49, 64. The square roots of these numbers are 2, 3, 4, 5, 6, 7, and 8, respectively.

Perhaps less obvious are 2.25, 6.25, 12.25, 20.25, 30.25, 42.25, and 56.25. These become 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, and 7.5, respectively.

Wait, what? “2.25” rolls? “2.25” rounds? How does THAT work?

The “round” or “turn” is an artificial construct. It doesn’t actually exist, it’s just a convenient dividing line. Multiply by the number of minutes or seconds in one, and you get real-world units of, respectively, minutes or seconds.

And that works in the other direction, as well. Let’s say there are 12 seconds in a round – then 2.25 rounds is 2.25 x 12 = 27 seconds.

Or, let’s say there are 15 seconds in a round, and a character has to run through a danger zone, which will take him 72 seconds at his movement rate. 72 / 15 = 4.8 rounds. Not 4 rounds, or 5 rounds, 4.8 rounds.

Or, to go back to the original trigger for all this – the character might spend 16 minutes in the 6d6 zone, then cross 100m of 8d6, 100m of 10d6, and 200m of 12d6. Most movement rates aren’t going to translate those distances into neat time intervals when they are measured in rounds. Seconds, maybe, maybe not, but rounds? Almost certainly not.

Three Final Tips

    Tip #1

    If you really want your results to FEEL like you’d rolled them all, aim for an R that is one less than required and add one totally legitimate random roll. In reality, this inflates the randomness more than is warranted, but it gives the right ‘feeling’ in play.

    So if your true R is 15, use R=14. One random roll feeds into the calculation, and one stands alone. I do NOT recommend this, though – it’s an extra set of die rolls for not enough reward.

    Tip #2

    The second one is this: if you have a long interval, break it into smaller chunks and a smaller R, and generate a new ‘seed value’ for each chunk. For 20, 30, or 40 minutes? 5 or 6 minutes at a time. For longer? 10, or 15. For even longer? 20.

    Divide the time by the total number of rolls that you want to make. That will tell you how long each chunk should be – just round to the nearest convenient number.

    Tip #3

    The more granular the die roll, the better this works. Let that sink in for a moment. It’s not just that the system processes 12d6 just as quickly as it does 6d6, saving more time; the results are qualitatively more nuanced.

    But that granularity is also enhanced with higher R values.

    That implies a sweet spot – and it’s going to be roughly found at (R x N) ^0.5. And the closer that R and N are, therefore, the closer you are to the sweet spot – without even calculating it.

    If you have a choice between 15 dice and R=8 or 10 dice and R=12, the second one will give the best results.

    If you have a choice between 60 dice and R=4 vs 15 dice and R=16, the second one wins every time. Not just in ease of rolling, but in quality of result.

Well, that’s the power of 1 on Root R. Hopefully it’s useful out there!


The Adverse Effects Engine


The AEE is a subsystem that slots into any RPG for simulating everything from Bad Weather to Plagues & Poisons.

Time Out Post Logo

I made the time-out logo from two images in combination: the relaxing-man photo is by Frauke Riether, and the clock face (which also inspired the text rendering) was provided by OpenClipart-Vectors; both were sourced from Pixabay.

The Backstory

A while back, I was working on an adventure for one of my campaigns (being deliberately vague, here) and I needed to look up the effects of Cobra Venom in the Hero System.

I wasn’t impressed – this stuff is supposed to be dangerous, even deadly, and what was offered in the bestiary supplement would barely kill a child.

And this particular venom was supposed to derive from supernatural Cobras summoned by a pissed-off deity. So that wouldn’t cut it.

I developed the Venom described in the box below, but wasn’t very happy with it – too fiddly, and perhaps a touch TOO lethal.
 
 
 
 
 

PER HIT:

  • Immediate on exposure: -5 all primary stats -2 PD -2 ED -10 END -1 ALL SKILLS -2 OCV -2 DCV plus 10 STUN 1 BODY dmg
  • Round after exposure: -3 all primary stats -1 PD -1 ED -6 END -1 ALL SKILLS -1 OCV -1 DCV (all cumulative) plus 10 STUN 2 BODY dmg
  • 2nd round after exposure: -2 all primary stats -4 END -1 ALL SKILLS -1 OCV -1 DCV (all cumulative) plus 5 STUN 3 BODY dmg
  • 3rd, 4th, rounds after exposure: -1 all primary stats -2 END plus 3 STUN 2 BODY
  • 5th round after exposure: -1 all primary stats -1 PD -1 ED -2 END -1 ALL SKILLS -1 OCV -1 DCV plus 2 STUN 1 BODY
  • 6th, 7th round after exposure: as per 3rd & 4th rounds
  • 8th round after exposure: as 5th round
  • 9th, 10th round after exposure: -2 END plus 2 STUN 1 BODY

These are accompanied by appropriate physical & mental responses – shaking, stumbling, delirium, semi-consciousness, poor decision-making, extreme pain (burning sensations) etc. The wound site will blister as though exposed to Mustard Gas or a gas stove’s flame, and the effect will slowly spread through the 10 rounds, starting at 2-3 cm diameter and growing by 1 cm diameter each subsequent round.

TOTAL EFFECTS:

    -5-3-2-2-1-2-1= -16 all primary stats;
    -2-1-1-1 = -5 PD same ED;
    -10-6-4-2-2-2-2-2-2-2 = -34 END;
    -1-1-1-1-1=-5 ALL SKILLS;
    -2-1-1-1-1=-6 OCV & DCV;
    10+10+5+3+3+2+3+3+2+2 = 43 STUN
    1+2+3+2+2+1+2+2+1+1 = 17 BODY

Clothing: Adds 1 round delay to the above

A tourniquet: Halves the rate of effect shown

Antivenom: Stops effects instantly, restores 1/4 of the damage taken to stats & skills (round down)

If the character survives the course of the attack and does not get hit again, he can recover:

    1 Primary stat point (each stat) / 30 mins
    1 OCV & DCV / 30 mins
    1 Secondary stat point / hour
    END as Normal
    STUN as 1/2 Normal
    BODY as Normal

Those second thoughts didn’t happen right away – in fact, about a year passed between generating the above and reviewing it. We’re still nowhere near it appearing in play (and it may never do so), so I marked it for reconsideration and moved on to higher-priority tasks.

Then, a few weeks ago, in Traits of Exotic d20 Substitutes pt 1, I casually tossed out a completely original system (inspired by the Sixes System, for which I still have to write the final part).

A number of people seemed to like its elegance and simplicity and flexibility. So, a couple of days later, when I came across my note to review the Cobra Venom, the two thoughts clicked together.

But, to actually be usable in play, I needed to dig deeper into what was a casual aside at the time. And so, here we are.

The Core System

The GM specifies N dice, and a target of T sixes. At intervals (generally fixed by the GM but may be variable), the character rolls Nd6. Any sixes are counted towards T, until the total is T or more.

    If one 1 is showing, something bad happens (specified by the GM but not necessarily announced).

    If two 1s are showing, something worse happens (specified as above). Or the same bad thing happens twice. Or the same bad thing happens, and some other bad thing happens. Whatever – it’s worse.

    If three 1s are showing, something really bad happens (specified as above). And T might increase by 1. Or one of the alternatives listed previously. It’s useful to be consistent.

    If four or more 1s are showing, something catastrophically bad happens and T increases by 1 or more. Or (you guessed it) as above.

    You also have the option of specifying a very small ‘something bad’ if no 1s are showing, just to remind the victim that they have this hanging over their head.

The GM controls the severity of each level of effect, the frequency of rolls, the size of the rolls (N), and the target (T). The combination of N and T also dictates what the frequency of occurrence of the different levels of penalty should be.

Nice, neat, and simple – in theory.
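For anyone who wants to automate the bookkeeping (at a virtual tabletop, say), the core check is tiny. This is just a sketch of my reading of the rules above – the function name is mine, not part of any published system:

```python
import random

def adverse_check(n, dice=None, rng=random):
    """One check of the Adverse Effects Engine: roll N d6 (or score a
    pre-rolled list), returning (sixes toward T, count of 1s showing)."""
    if dice is None:
        dice = [rng.randint(1, 6) for _ in range(n)]
    return dice.count(6), dice.count(1)

# Scoring a pre-rolled hand: two 6s toward the target, one 1 showing
print(adverse_check(5, dice=[6, 1, 3, 6, 5]))  # → (2, 1)
```

The GM keeps a running total of the sixes between checks; the second number indexes whichever penalty tier applies.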

To really use it in practice, the GM needs a way to estimate what the total effects are likely to be. Then he can adjust the penalty levels and N and T accordingly to get exactly what he wants the probable outcome to be.

Or he can start with predetermined outcomes in mind and divide them up into the different penalty levels according to a convenient pairing of N and T, based on E, the number of rolls it’s expected to take to reach T.

On Today’s Menu

I’m going to outline the process in full, with tables and convenient shortcuts built in for the GM, for the first approach. Then I’ll outline the second in a shorter format, because it will use the same tables as the first approach.

When I was planning and contemplating this expansion, I also thought up a number of variations, so I’ll describe them and their impacts as the cherry on top.

Set N and T

These should always be determined by E, the expected number of rolls to reach T rolling N dice at a time.

    T=1, for N=1 to 8: 6, 3, 2, 2, 2, 1, 1, 1
    T=2, for N=1 to 8: 12, 6, 4, 3, 3, 2, 2, 2
    T=3, for N=1 to 8: 18, 9, 6, 5, 4, 3, 3, 3
    T=4, for N=1 to 8: 24, 12, 8, 6, 5, 4, 4, 3
    T=5, for N=1 to 8: 30, 15, 10, 8, 6, 5, 5, 4
    T=6, for N=1 to 8: 36, 18, 12, 9, 8, 6, 6, 5
    T=7, for N=1 to 8: 42, 21, 14, 11, 9, 7, 6, 6
    T=8, for N=1 to 8: 48, 24, 16, 12, 10, 8, 7, 6

or, you might prefer to pick an N and then a T:

    N=1, T=1 to 8: 6, 12, 18, 24, 30, 36, 42, 48
    N=2, T=1 to 8: 3, 6, 9, 12, 15, 18, 21, 24
    N=3, T=1 to 8: 2, 4, 6, 8, 10, 12, 14, 16
    N=4, T=1 to 8: 2, 3, 5, 6, 8, 9, 11, 12
    N=5, T=1 to 8: 2, 3, 4, 5, 6, 8, 9, 10
    N=6, T=1 to 8: 1, 2, 3, 4, 5, 6, 7, 8
    N=7, T=1 to 8: 1, 2, 3, 4, 5, 6, 6, 7
    N=8, T=1 to 8: 1, 2, 3, 3, 4, 5, 6, 6

Don’t worry about these not lining up in neat columns; the same information is available in the tables below.
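If you’d rather compute than look up, every entry in both lists above is just 6T/N rounded up – on average, each die contributes one-sixth of a six per roll. A quick sketch; note that this is the simple approximation, not the exact expectation used in the larger tables further down:

```python
from math import ceil

def quick_e(t, n):
    """Approximate E: expected rolls of N d6 to accumulate T sixes."""
    return ceil(6 * t / n)

# Reproduce the T=4 line above: N = 1..8 → 24, 12, 8, 6, 5, 4, 4, 3
print([quick_e(4, n) for n in range(1, 9)])
```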

Advice:

The second arrangement shows clear patterns for N=1, 2, 3, and 6 – but those patterns can be misleading if used for extrapolation, as N=4 shows with its jump from 3 to 5, and N=5 with its jump from 6 to 8 (the stronger example of the two). So extrapolation is not as certain as a pattern might suggest and can’t be relied on – which is why I will always recommend the first arrangement: it doesn’t suggest potentially misleading extrapolations in the first place.

High-T = long durations, especially with lower N values. That’s suitable for diseases that have a long interval between checks – every 12 or 24 hours, say. But for poisons, you don’t want an E that’s more than 6 or 8, even for the worst ones, and 5-6 is probably a better target even for those. E=3-4 is good for mid-strength poisons, and E=1-2 should really be reserved for only the fastest-acting.

For every really lethal poison or disease, there should be several of the mid-strength variety, and for every mid-strength, many weaker poisons – or so runs one line of thinking. But evolution favors those poisons that are strong enough to take down whatever the poisoner feeds on or is commonly attacked by; it doesn’t happen in isolation. That can cause potency to increase, moderating the earlier trend. So here are a trio of ratios to get you thinking:

    By Theoretical Threat Magnitude: 1: 3: 9-12
    By Evolutionary End-point: 1 : 2 : 3
    Compromise: 2 : 5 : 10

Playing into that decision should be the poison reservoir. In other words, how many bites of the poison cherry can one poisoner deliver?

Size of the creature impacts this – the larger the creature, the larger the venom sacs (or their equivalent).

Here are some real-world assessments:

Tiny/Small – insects, small spiders, scorpions, small centipedes – venom capacity is very low and either single-use or low-frequency bursts. The venom is metabolically costly relative to body size. Often have a single, full dose for immediate defense/predation. Recovery is long (hours/days).

Medium – mid-sized snakes, large spiders, cone snails, large scorpions, etc – Moderate venom capacity, low-moderate frequency of delivery – three uses in quick succession. Capable of venom metering – injecting less than maximum to conserve supply. May deliver a full dose for a large threat, or a “dry bite” (no venom). Can deliver a burst of 2-3 significant bites, then need short recovery (minutes).

Large – large snakes, octopuses, large fishes – high venom reservoirs, Moderate-high frequency of use (multiple uses or sustained delivery). High reservoir allows for multiple, significant envenomations. Gaboon Vipers, in particular, are known for a massive venom yield and ability to deliver repeated, high-volume strikes. Delivery can be sustained over a short period. Recovery time for full capacity is still long, but practical use is frequent.

As a general rule of thumb, the less venom, the deadlier it has to be, because volume decreases as the cube of linear size. The venom therefore has to become more potent just to keep up. Larger creatures have much more venom, which they can utilize in a number of different ways, one species compared to another. On top of that, smaller creatures are less physically resilient, and need to end combat encounters more quickly in order to survive – so that’s an extra push toward higher toxicity.

The graphic below was provided by Gemini, Google’s AI, and edited by me:

I also asked Gemini to extrapolate its findings to cover giant and ‘dire’ creatures, and this is what it came back with (edited):

Gargantuan Creatures – 5m long spiders, Giant Snakes: Size factor 5-10 x earth “real”. Venom Capacity up to 50x that of normal equivalents. Potency may decrease slightly, but total damage output increases exponentially due to volume. Sustained High Frequency of venom delivery, can deliver (5-10x earth “real”) lethal doses with minimal pause. (May take weeks to recharge but still have sufficient venom for 2-3 encounters while recharging).

Colossal Creatures – 25m sea creatures, “Kaiju” spiders, etc. Size Factor 25+ times earth “real”. Venom Capacity – essentially unlimited. Potency is often low relative to size, but the volume is so immense that it acts as a biological weapon (or breath weapon, acid spray, etc., with toxic effects on top). The creature’s bite/sting is less about injecting a dose and more about dousing the target (and/or the environment around it).

A “Dire Version” is a creature that defies the standard biological trade-off, making it inherently more dangerous and a true “boss” encounter. The Dire modifier should break the Inverse Correlation by increasing both Reservoir Size and Venom Potency.

So, once you have T, N, and E, and have started thinking about bite frequency vs toxicity, you’re ready to work out how often the adverse effects will actually show up.

Probable Occurrence of Adverse Effects

By the way, before diving in – generating this table of results proved too complicated for both Gemini and ChatGPT! Both understood clearly what I wanted them to do, and (as much as an LLM can) why, and each generated a solution to the problem of how – that didn’t work.

Repeated corrections were attempted in both cases, and failed. That’s not a measure of my intellect or anything like that – it’s an indication of just how much detailed work lies under the surface of this innocuous-looking table.

If I had a BASIC compiler, I could have written the code myself from one of their algorithms in less time, and in about 20 lines.

Key:

“No +” represents low chance of more. Use the indicated number of occurrences in estimating total impact from impact per occurrence.

“+” represents a moderate chance of more. Use the indicated number of occurrences in estimating total impact from impact per occurrence.

“++” represents a significant chance of more. Use the indicated number of occurrences + 0.5 to estimate the average total impact from impact per occurrence.

“+++” represents a high likelihood of more occurrences than the number shown, and a high confidence of at least this many occurrences. Use the indicated number +1 to estimate the average total impact from impact per occurrence.

T = target number of 6s
N = number of dice at a time
E = expected number of rolls required, on average
K = number of cases of k ones showing over the span of E rolls
T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
1 1 6 1
1 2 4 1 0
1 3 3 1 0 0
1 4 2 0+++ 0 0 0
1 5 2 0+++ 0+ 0 0 0
1 6 1 0+++ 0+ 0 0 0 0
1 7 2 0+++ 0+ 0 0 0 0 0
1 8 1 0++ 0++ 0 0 0 0 0 0

 

T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
2 1 12 2
2 2 7 1+++ 0
2 3 5 1++ 0+ 0
2 4 4 1++ 0+ 0 0
2 5 3 1 0+ 0 0 0
2 6 3 1 0+ 0 0 0 0
2 7 3 1 0++ 0 0 0 0 0
2 8 2 0++ 0++ 0 0 0 0 0 0

 

T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
3 1 18 3
3 2 10 2+++ 0+
3 3 7 2+ 0+ 0
3 4 5 1+++ 0++ 0 0
3 5 4 1++ 0++ 0 0 0
3 6 4 1++ 0+++ 0 0 0 0
3 7 3 1 0++ 0 0 0 0 0
3 8 3 1 0+++ 0+ 0 0 0 0 0

 

T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
4 1 24 4
4 2 13 3++ 0+
4 3 9 3 0++ 0
4 4 7 2++ 0+++ 0 0
4 5 6 2+ 0+++ 0 0 0
4 6 5 2 1 0+ 0 0 0
4 7 4 1++ 0+++ 0+ 0 0 0 0
4 8 4 1++ 0+++ 0+ 0 0 0 0 0

 

T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
5 1 30 5
5 2 16 4+ 0+
5 3 11 3+++ 0+++ 0
5 4 8 3 0+++ 0 0
5 5 7 2+++ 1 0 0 0
5 6 6 2+ 1 0+ 0 0 0
5 7 5 1+++ 1 0+ 0 0 0 0
5 8 5 1+++ 1+ 0++ 0 0 0 0 0

 

T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
6 1 36 6
6 2 19 5+ 0++
6 3 13 4++ 0+++ 0
6 4 10 3+++ 1 0 0
6 5 8 3 1+ 0+ 0 0
6 6 7 2+++ 1+ 0+ 0 0 0
6 7 6 2+ 1+ 0+ 0 0 0 0
6 8 5 1+++ 1+ 0++ 0 0 0 0 0

 

T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
7 1 42 7
7 2 22 6 0++
7 3 15 5 1 0
7 4 11 4 1+ 0 0
7 5 9 3++ 1+ 0+ 0 0
7 6 8 3 1++ 0+ 0 0 0
7 7 7 2++ 1++ 0++ 0 0 0 0
7 8 6 2 1++ 0++ 0 0 0 0 0

 

T N E K=1 K=2 K=3 K=4 K=5 K=6 K=7 K=8
8 1 48 8
8 2 25 6+++ 0++
8 3 17 5+++ 1 0
8 4 13 5 1++ 0 0
8 5 10 4 1++ 0+ 0 0
8 6 9 3++ 1+++ 0+ 0 0 0
8 7 8 3 1+++ 0++ 0 0 0 0
8 8 7 2++ 1+++ 0++ 0 0 0 0 0

E is usually a decimalized number because the calculations determine the average outcome over many sets of rolls. “2.6” means that 40% of the time it will take 2 rolls and 60% of the time it will take 3 – but there is always an outside chance that it might take 1 or 4, so those percentages are approximate. Because in the real world you can’t have “0.6 of a roll”, these have been rounded up, and the resulting whole number of rolls used to calculate the rest of the table.

If you want to know the exact query that ‘broke’ the AIs, it was something like this:

For N 6-sided fair dice from 1 to 8, calculate the number of rolls required to reach a total number of sixes shown across all rolls equal to or greater than T, which also varies from 1 to 8, and label it E1. Because in the real world you can’t have “0.6” of a roll, round E1 up and label it E. For E rolls of N fair six-sided dice, calculate the number of rolls on which exactly K 1s will be seen, with K varying from one to 8. If the result for a given K (designated RK) is an integer, show the integer; else if RK-INT(RK) is <0.25, show INT(RK); else if RK-INT(RK) is <0.5, show INT(RK) and one “+” sign; else if RK-INT(RK) is <0.75, show INT(RK) and two “+” signs; else show INT(RK) and three “+” signs, for example “2+++”. If an entry is impossible, eg K>N, show a blank space, not a 0. Format the results in a plaintext tab-delimited table with columns T, N, E, K=1, K=2, etc, sorted by T and sub-sorted by N.

Note that I had to run this query about 25 times, refining it each time, and eventually had to take out everything relating to the encoding and requesting the answer to 3 decimal places so that I could ‘manually’ do the coding.

Gemini calculated the results correctly, including the formatting, but couldn’t get the columns of data to line up correctly after 24 rows plus the heading – the K=1 column kept overwriting the E column, no matter what was done.

ChatGPT failed completely to apply the encoding correctly and had several calculation errors at first, but with a bit of patience and simplifying the question, did manage to produce a table that I could copy and paste into a spreadsheet. I then inserted additional columns to perform the calculation of RK-INT(RK) and interpret the results as per the “if” statement shown above. I then hid the working and manually transcribed the results into the tables above.

Oh, and for clarity, I decided at the last minute to break what was one big table into the more user-friendly 8 smaller tables.
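For the record, here’s roughly what those “20 lines of BASIC” look like in Python – a sketch of my understanding of the query above: exact E by recursion on the number of sixes still needed, then E times the binomial chance of exactly K ones, encoded with the same “+” scheme. Treat it as a checking tool rather than gospel; given the AIs’ struggles, a few entries above may disagree with the exact calculation.

```python
from math import ceil, comb

def p_exact(n, k):
    """Chance of exactly k copies of one chosen face on n fair d6."""
    return comb(n, k) * (1 / 6) ** k * (5 / 6) ** (n - k)

def expected_rolls(n, t):
    """Exact expected rolls of n d6 to accumulate t sixes
    (recursion on the number of sixes still needed)."""
    f = [0.0] * (t + 1)
    for j in range(1, t + 1):
        acc = 1.0 + sum(p_exact(n, s) * f[max(0, j - s)]
                        for s in range(1, n + 1))
        f[j] = acc / (1 - p_exact(n, 0))
    return f[t]

def k_entry(n, t, k):
    """Table entry: expected rolls showing exactly k ones over E rolls,
    encoded with the 0 / + / ++ / +++ scheme from the key."""
    e = ceil(expected_rolls(n, t))
    r = e * p_exact(n, k)
    whole, frac = int(r), r - int(r)
    pluses = 0 if frac < 0.25 else 1 if frac < 0.5 else 2 if frac < 0.75 else 3
    return str(whole) + "+" * pluses

print(ceil(expected_rolls(2, 2)), k_entry(4, 1, 1))  # → 7 0+++
```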

I’m getting ahead of myself with this picture, but it had to go somewhere! You’ll see why it’s included in due course. Image by Daniel McWilliams from Pixabay

So let’s pick an entry, I’ll decode it, and show you how it works. How about… 5 dice, target of four 6’s.

  1. Look for the line that starts 4 – 5.
  2. E is 6, so you can expect the victim to roll 6 times on average before getting to the target of 4 sixes – of course, it could happen on the very first roll, but it probably won’t.
  3. So, what’s likely to happen, bad-things wise, over the course of those expected 6 rolls?
    • K=1 has a value of 2+, so there will probably be two times that a single 1 is showing.
    • K=2 has a value of 0+++ – so the expectation rounds to zero occurrences, but there’s a very high chance of it happening at least once – just not a relative certainty. And that makes sense: any particular pair of dice shows double 1s with a 1-in-36 chance, and the other three dice show no 1s with a (5/6)³ = 125/216 chance, for a total of roughly 1.6% per pair. But there are 10 ways to choose which pair of the five dice shows the 1s, lifting the per-roll chance to about 16% – high enough that, over six rolls, it’s very likely to happen once.
    • K=3 through K=5 are extremely unlikely to occur. Not impossible, but not likely. For all practical purposes, this is a two- or three-tiered penalty structure.
  4. The key takeaway, though, is: 2 x one 1, 1 x two 1’s, and 6-3=3 x no 1’s.
  5. So multiply that by the chosen harm levels that go with those one-counts, add it up, and you have your expected damage.
    • To demonstrate this, let’s say no 1’s = 1 HP, one 1 = 5 HP, and two 1’s = 10 HP. Then we would have 2×5 + 1×10 + 3×1 = 23 HP damage.
  6. But the system can be as complicated as you want.
    • Try no 1’s = 2 HP, 1 one = +5 HP, and 2 ones = +10 HP and a point of STR, each accompanied by the lesser levels.
    • Then, we would expect 2x(5+2) + 1x(10+5+2, & 1 STR) + 3×2 = 14+17+6 HP & 1 STR = 37 HP & 1 STR.
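The damage estimate in step 5 is just a weighted sum, so it automates trivially. A sketch using illustrative penalty values of my own; the counts are the expected occurrences of each K over the E rolls, with the clean rolls filed under K=0:

```python
def expected_damage(counts, penalties):
    """Weighted sum: expected occurrences of each K times the penalty per K."""
    return sum(counts[k] * penalties[k] for k in counts)

# 5 dice, target of four 6s, E = 6:
# ~2 single-1 rolls, ~1 double-1 roll, 3 clean rolls
print(expected_damage({0: 3, 1: 2, 2: 1}, {0: 1, 1: 5, 2: 10}))  # → 23
```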

Choosing N and T

Unless you are modeling a specific set of conditions that dictate otherwise, or are working to deliver an ‘average fixed amount of damage’ (both covered in subsequent sections), the place to start is with the time intervals* between rolls and the number of rolls expected to be needed, E.

That will give you a short-list (perhaps VERY short) to choose between.

For example, if I want an effect to apply for an average of 6 time-intervals – it could be six rounds, six lots of 30 seconds, 6 minutes, 6 hours, 6 days, or whatever – I would look for E of 5, 6, or 7.

A whopping 17 entries in the table match, so I’m spoiled for choice. Since there are so many, I would lose the 5’s and 7’s and go with just the options that give exactly what I want.

That gets me down to 5 choices. I want the players to roll more than 1 die but no more than 4, because anything else takes longer to add up.

But that kills all my choices, so the decision is now which restriction do I desire more – the 6 rounds, or the 4 dice?

I decide that 7 rounds is acceptable, after all. That puts a lot of options back on my radar, including T=4 N=4 and T=4, N=5. The first has a higher chance of K=1 results, the latter introduces an outside chance of K=5 and an increased chance of K=3 and K=4. But it does fit my original 6-round desire. In the end, I choose to flip my compromise and choose the N=T=4 option.

Job Done.

Extending the Table

Let’s compare the 4-4 line with the 8-8 line.

4-4: 7, 2++, 0+++, 0, 0; vs
8-8: 7, 2++, 1+++, 0++, 0, 0, 0, 0, 0

So you can’t break an 8-8 into two sets of 4-4 rolls. But there is a simple way to extend the table to larger values.

Let’s look at N=12 T=12.

    Step 1: Divide both N and T by 2 (they have to be even).

    Step 2: Look up the results on the tables above. In this case, we get N=6, T=6.

    Step 3: The total number of rolls expected is the same for both – in this case, 7.

    Step 4: Because the scaling also increases the deliberately-induced ’rounding error’, subtract 1/2 from the expected number of rolls in response to the doubling. So that’s 6½.

    Step 5: The total number of rolls is the same, but doubling the dice makes it easier to roll high numbers of ones. The counts for the worse penalties will increase, while the count for the standard penalty remains stable or slightly decreases. Balanced against that is the fact that the probability of those higher penalties is so low that, in most cases, you’re increasing them by only a smidgen. Analysis has led to these rules for doubling:

    • # and #+ are always treated as #.
    • ++ should be read as #+1.
    • If the full E is <16, +++ should also be read as #+1.
    • If E >15, +++ should be read as #+2.

    So, in this case, we have 2+++, 1+, 0+, 0, 0, 0.
    E is <16, so 2+++ becomes 3.
    1+ stays 1.
    0+ stays 0.
    0 stays 0.

    So three single 1s, 1 pair of 1s, and 2.5 rolls without ones.

    Step 6: But then we have to factor in the drop from 7 to 6½ expected rolls:

    3 x 6.5 / 7 = 2.8 single 1s, 0.93 pairs of 1s, and 6.5 – 2.8 – 0.93 = 2.77 rolls with no 1s.

    Step 7: Multiply those by your chosen penalty values.

    Let’s use…

      No 1’s = 3 HP
      One 1 = 10 HP
      Two 1s = 25 HP

    3 x 2.77 + 10 x 2.8 + 25 x 0.93
    = 8.31 + 28 + 23.25 = 59.56 HP.

    Step 8: Round up, then add half of N or half of T, whichever is lower, to allow for the possibility of those results of 3 or more 1s.

    In this case, both are 6, giving a final estimate of 66 HP damage.
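Steps 4 through 8 bundle neatly into one helper. This sketch (function and argument names are mine) takes the half-table E, the K-counts after applying the doubling rules, your penalty values, and the full N and T:

```python
from math import ceil

def scaled_damage(e_half, k_counts, penalties, n, t):
    """Estimate total damage for doubled N and T (steps 4-8 of the text)."""
    e_adj = e_half - 0.5                       # step 4: rounding-error correction
    scale = e_adj / e_half                     # step 6: rescale counts to e_adj
    counts = {k: c * scale for k, c in k_counts.items()}
    counts[0] = e_adj - sum(counts.values())   # remaining rolls show no 1s
    total = sum(penalties[k] * c for k, c in counts.items())  # step 7
    return ceil(total) + min(n // 2, t // 2)   # step 8: safety margin

# Worked example from the text: N = T = 12, via the 6-6 row (E = 7)
print(scaled_damage(7, {1: 3, 2: 1}, {0: 3, 1: 10, 2: 25}, 12, 12))  # → 66
```

The unrounded intermediates differ fractionally from the hand calculation, but the final estimate comes out the same.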

It is recommended that + and +++ rolls should have their expected penalties softened, especially if using compound effects, as the levels set for them are based on occurrence numbers that are only partially expected to occur. 10% weaker is about right. Similarly, ++ rolls should be subjected to a moderate reduction (~20%) for the same reason.

Setting penalty levels

Ensure the penalty definitions get geometrically worse as K increases (e.g., K=2 is far worse than K=1) to reflect the exponentially decreasing probability of high-K rolls.

Setting penalty levels from a designated target

If plugging values into the calculations above doesn’t suit, you can establish a fixed geometric ratio – 2.5, 3, or 4 all work well – and use them to reduce your high K results to a specific number of K=1 or K=0 results. I recommend the first of these, but it’s up to you.

For example, let’s use 6 dice and a Target of 3 sixes. E=4.

    One 1 = 1++, treated as 1.5
    Two 1s = 0+++, treated as 1.
    Three to Six 1s = 0. Ignored.
    No 1’s = 4-1-1.5 = 1.5.

And let’s set a nice robust target like 100 HP. That’ll get a PC’s attention in a hurry!

Set the ratio as 4, and let’s extend the calculation down to K=0.

    Two ones = 4 (the ratio) single ones, for a total of 5.5 of them.
    One one = 4 (the ratio) ‘no ones’, so 5.5 x 4 = 22.

    100/22 = 4.54. Round down to 4. That’s 4 HP x 1.5 expected occurrences = 6 points, so our target is now 94 points, to come from the 5.5 K=1 equivalents.

    94 / 5.5 = 17.09. Round it down to 17. Multiply by the 1.5 times it’s expected to occur and we get 25.5. So our target goes down by 25 (round it down again) and our K=1 value is 17 HP.

    94-25 = 69. So our K=2 – expected once – is 69 HP.

Final results:

    K=0 does 4 HP.
    K=1 does 17 HP.
    K=2 does 69 HP.

Of course, if you set more modest targets, you’ll get more moderate results. This was deliberately extreme.
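The same allocation can be done mechanically. This sketch generalizes the worked example slightly – it counts the K=0 occurrences in the first division too, where the hand calculation skipped them – so rounding choices can shift a level by a point or two, but the principle is identical:

```python
from math import floor

def penalty_levels(target, counts, ratio):
    """Work a total damage target back into per-K penalty values
    using a fixed geometric ratio between adjacent levels.
    counts: expected occurrences per K, e.g. {0: 1.5, 1: 1.5, 2: 1}."""
    ks = sorted(counts)
    values, remaining = {}, target
    for i, k in enumerate(ks):
        # remaining damage expressed in level-k units
        # (each higher level weighted by a further factor of ratio)
        units = sum(counts[j] * ratio ** (j - k) for j in ks[i:])
        values[k] = floor(remaining / units)
        remaining -= floor(values[k] * counts[k])
    return values

# 6 dice, T=3, target 100 HP, ratio 4 (the example above)
print(penalty_levels(100, {0: 1.5, 1: 1.5, 2: 1}, 4))  # → {0: 4, 1: 17, 2: 69}
```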

Variation One: Nested Damage Types

Try this on for size:

    K=0: minor HP damage.
    K=1: significant HP damage.
    K=2: significant HP damage & single-stat damage.
    K=3: significant HP damage & second-stat damage.
    K=4: Significant HP damage & both stats damaged.
    K=5: K=4 + Significant HP damage.
    K=6: K=4 + K=2.
    K=7: K=4 + K=3.
    K=8: 3 x K=4.

These results ‘nest’ three types of damage – two different stats plus HP. You can use a similar system if the game system has multiple damage types, as in the Hero System:

    K=0: Some END loss
    K=1: K=0 + Some Stun loss
    K=2: 2 x K=1 + Some Body damage
    K=3: K=1 + K=2 + Some temporary Stat loss
    K=4: 2 x K=2 + Some temporary Stat loss
    K=5: K=4 + K=2
    K=6: K=5 + K=3.
    K=7: K=6 + K=4.
    K=8: 3 x K=5.

Defining ‘some’ as 5 points, that becomes:

    K=0: -5 END
    K=1: -5 END -5 Stun
    K=2: -10 END -10 Stun -5 Body
    K=3: -15 END -15 Stun -5 Body -5 Stat
    K=4: -20 END -20 Stun -10 Body -5 Stat
    K=5: -30 END -30 Stun -15 Body -5 Stat
    K=6: -45 END -45 Stun -20 Body -10 Stat
    K=7: -65 END -65 Stun -30 Body -15 Stat
    K=8: -90 END -90 Stun -45 Body -15 Stat

Or you could simplify things:

    K=0: -5 END -1 Stun -0 Body
    K=1: -10 END -5 Stun -1 Body
    K=2: 2 x K1
    K=3: 4 x K1 plus -1 stat
    K=4: 8 x K1 plus -5 stat
    K=5: 15 x K1 plus -10 stat
    K=6: 30 x K1 plus -20 stat
    K=7: 50 x K1 plus -30 stat
    K=8: 100 x K1 plus -40 stat
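Because each level is defined in terms of earlier ones, expansions like the ‘Some = 5’ table can be generated mechanically rather than by hand – worth doing, since adding nested definitions by hand is exactly where arithmetic slips creep in. A sketch tracking (END, Stun, Body, Stat); if its output differs by a point or two from a hand-built table, trust the recursion:

```python
def combine(*parts):
    """Component-wise sum of (END, Stun, Body, Stat) tuples."""
    return tuple(sum(vals) for vals in zip(*parts))

SOME = 5
K = {0: (SOME, 0, 0, 0)}                      # K=0: some END loss
K[1] = combine(K[0], (0, SOME, 0, 0))         # K=0 + some Stun
K[2] = combine(K[1], K[1], (0, 0, SOME, 0))   # 2 x K1 + some Body
K[3] = combine(K[1], K[2], (0, 0, 0, SOME))   # K1 + K2 + stat loss
K[4] = combine(K[2], K[2], (0, 0, 0, SOME))   # 2 x K2 + stat loss
K[5] = combine(K[4], K[2])
K[6] = combine(K[5], K[3])
K[7] = combine(K[6], K[4])
K[8] = combine(K[5], K[5], K[5])

for k, (end, stun, body, stat) in K.items():
    print(f"K={k}: -{end} END -{stun} Stun -{body} Body -{stat} Stat")
```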

The Healing Difference

It’s up to you to decide whether or not healing – or recoveries, in the Hero System – can function until whatever-it-is has run its course.

Ruling healing out makes these effects much nastier, and should cause you to halve whatever damage levels you had in mind – unless you want the affliction to be potentially deadly.

Other Systemic Options

There are six other options that the GM can choose. Some of these can operate in combinations.

1. The Exhaustion Option

When you roll a 6, after adding it to your tally, that dice no longer gets rolled.

That means that your biggest risk of a really bad result is at the start, and the possible effects moderate as the pool shrinks.

It makes it much harder to predict the net outcome though.

Statistical Impact: This dramatically reduces the dice pool (N) over the course of the effect. Successes are achieved quickly, but the chance of rolling K>0 adverse events on any remaining die remains constant (1 in 6). Since the pool shrinks, the absolute chance of rolling multiple 1s decreases rapidly.

Game Feel: Front-loaded risk and rapid resolution. The initial rolls are the most dangerous. If a character survives the first two or three checks, the chance of rolling 1s drops away faster than the remaining progress needed to reach the target, T.

Best For: Fast-acting, non-renewable poisons (like a single large dose of nerve agent) or short, focused challenges where the effect is quickly flushed from the system.
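If you want to sanity-check the Exhaustion Option’s feel before springing it on players, it simulates in a few lines. A sketch assuming T is no greater than N (function name is mine):

```python
import random

def exhaustion_run(n, t, rng):
    """Simulate the Exhaustion Option: each die that shows a 6 is banked
    toward T and leaves the pool. Requires t <= n.
    Returns (rolls taken, list of 1s showing on each roll)."""
    sixes, pool, ones_seen = 0, n, []
    while sixes < t:
        dice = [rng.randint(1, 6) for _ in range(pool)]
        ones_seen.append(dice.count(1))
        sixes += dice.count(6)
        pool -= dice.count(6)        # banked dice are not rolled again
    return len(ones_seen), ones_seen

print(exhaustion_run(6, 6, random.Random(2024)))
```

Running it a few dozen times makes the front-loading obvious: the worst K results cluster in the first couple of rolls, while the tail is long but toothless.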

2. The Continual Option

Once you roll a 1, it stays unrolled thereafter and counts toward future penalties. Rolling continues until every dice shows either a 1 or a 6. The Core exit condition of accumulating T sixes remains in effect but is overshadowed by the alternative.

This means that things get progressively worse until whatever-it-is has run its course and left your system. It’s nasty but good for supernaturally-sourced troubles.

The one saving grace is the additional way out – if every dice is either a 1 or 6, the nightmare ends. In some cases, the cause – disease or poison – will burn itself out fast, in others it will be the cause of extremely protracted suffering.

The higher the initial N, the worse this gets. If you start with 6 dice:

    1, x, x, x, x, x – T sixes (cumulative) or 5 sixes needed
    K=1 events every roll until you roll another 1 or exit
    1, 1, x, x, x, x – T sixes (cumulative) or 4 sixes needed
    K=2 events every roll until you roll another 1 or exit
    1, 1, 1, x, x, x – T sixes (cumulative) or 3 sixes needed
    K=3 events every roll until you roll another 1 or exit
    1, 1, 1, 1, x, x – T sixes (cumulative) or 2 sixes needed
    K=4 events every roll until you roll another 1 or exit
    1, 1, 1, 1, 1, x – 1 six needed
    K=5 events every roll until you roll a 1 or a 6. If you roll a 1, there is a K=6 event.

Each time a die is locked on ‘1’, your chances of getting the sixes you need go down and the number of rolls you’re expected to need will go up.

Damage accumulates very rapidly, and with accelerating pace.

3. The Progressively-worse Option

Each 1 that gets rolled increases the Target by 1.

This puts survival on a knife-edge and ensures that if you suffer badly, the effects will linger for longer – making it a good choice for plagues.

Statistical Impact: This maintains the dice pool (N) but increases the overall target (T) dynamically. Every adverse event makes the recovery condition harder to achieve. This means rolling a 1 directly increases the expected duration (E) of the effect. A single unfortunate roll early on can potentially double the total expected number of checks.

Game Feel: Cascading failure and desperation. Failure feeds failure. The character sees the light at the end of the tunnel (the target T) constantly moving further away. This is highly effective for plagues or diseases that exploit the body’s weakening condition.

Best For: Plagues, zombie infection progression, or effects that are harder to fight off the longer they persist (like a viral load).

4. The Blessed Balm Option

Sixes rolled can undo some of the harm caused. Two sixes = one 1, three sixes = 2 ones, and so on.

This creates a situation in which the health of the sufferer is on a roller-coaster, up and down with each roll of the dice. Eventually, these changes will tend to dampen out. Works very well with the Progressively-Worse option.

This fundamentally re-balances the risk assessment, introducing greater variance into the process – rolls are either great (successes towards T), terrible (a large number of 1s), or tension-building (anything else). It models a scenario where the character’s vitality is constantly tested.

Game Feel: Roller-coaster effect and high stakes per roll. The character may suffer a terrible wound but instantly cancel it in the same roll with a heroic recovery effort. This variation is highly dramatic.

Best For: Magical duels, effects that fluctuate with effort or willpower, and scenarios where the poison’s progression is inherently unstable.

5. The Devastating Option

The first 6 in a roll doesn’t count, only sixes above that one.

This strongly biases the results away from recovery, without ruling it out entirely. It makes any of the ‘nasty’ options far worse.

Statistical Impact: This increases the expected number of rolls (E) needed to reach T without changing the probability of adverse events (K). Since E is higher, the total number of adverse events over the life of the affliction is necessarily higher. If you use the same N and T, the effect will be substantially longer and more severe than calculated in the base tables.

Game Feel: Recovery – and the downhill slide before it – feels incredibly sluggish and unforgiving. Successes are hard-won. This makes the affliction feel resistant or deeply embedded in the character’s system, guaranteeing prolonged suffering.

Best For: Artifact-level curses, dire creature venom, effects designed to be a significant narrative roadblock, or spurs for quests for a cure. Don’t hit a PC with this variant except in unusual circumstances when they have no-one to blame but themselves; DO hit someone important who the PCs want to save.
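The cost of discarding the first 6 can be quantified. For a pool of N dice, the expected number of counting sixes per round drops from N/6 to E[max(0, X−1)], where X is Binomial(N, 1/6) – a sketch of that calculation:

```python
def expected_counting_sixes(n, devastating=False):
    """Expected 6s per round that count toward T, for a pool of n d6.
    Under the Devastating option the first 6 is discarded, so only
    max(0, sixes - 1) count each round."""
    p = 1 / 6
    if not devastating:
        return n * p
    # E[max(0, X - 1)] = E[X] - P(X >= 1) for X ~ Binomial(n, p)
    return n * p - (1 - (1 - p) ** n)

# For a 4-dice pool: ~0.667 counting sixes per round normally, but only
# ~0.149 under the Devastating option -- recovery takes roughly four to
# five times as many rolls, all else being equal.
```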

6. The With-A-Bang Option:

A selected number of the dice pool (N) start already showing ones and are not re-rolled. Their number reduces by 1 each round, the freed dice becoming regular dice that are rolled normally.

The “Fixed Ones” should be 1/2 of N or less. This ‘forces’ the occurrence of a high K result in the first round, tapering off in subsequent rounds. It also extends E by reducing the likelihood of sixes being rolled, generally by the number of fixed ones at the beginning, minus 1.

    6a. Bigger Bang Sub-variant

    The “fixed ones” are only removed when a 6 is rolled. A 6 used for this purpose does not count toward the target.

    This extends the durability of the high-N count AND effectively increases T by the number of initial ones showing.

    6b. It Will All Be Over Soon Sub-variant:

    As per the basic option 6, but fixed ones do not become regular dice, they become automatic sixes.

    This front-loads the results with high-K results but effectively reduces T by the number of initial ones showing.
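A sketch of the basic With-A-Bang option, under the same assumed base mechanics. One interpretation is adopted here: fixed ones count as adverse events for each round they remain, which fits the “high K in the first round, tapering off” description:

```python
import random

def with_a_bang(n, t, fixed, rng=random, max_rounds=500):
    """With-A-Bang: `fixed` dice (at most n // 2) start locked on 1 and
    are not rolled; each counts as an adverse event every round it
    remains, and one converts back to a live die per round.
    Assumed base engine: live d6s, sixes accumulate toward t.
    Returns (rounds_taken, total_adverse_events)."""
    successes, rounds, adverse = 0, 0, 0
    while successes < t and rounds < max_rounds:
        rounds += 1
        roll = [rng.randint(1, 6) for _ in range(n - fixed)]
        successes += roll.count(6)
        adverse += roll.count(1) + fixed
        fixed = max(0, fixed - 1)   # one fixed die is released per round
    return rounds, adverse
```

The Bigger Bang sub-variant would instead release a fixed die only when a 6 is spent on it (with that 6 not counting toward t); It-Will-All-Be-Over-Soon would convert each released die to an automatic six instead.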

Going Further

Any situation in which one character uses his skills to solve a multipart problem, or a group collaborates on a challenge, or a group faces adversity together – any situation that can be broken down into units of roughly equal value – can be modeled using the Adverse Effects Engine.

Each part of the problem, or contributor to a solution, or participant, gets one dice, and they all roll collectively at the same time. This is especially powerful when coupled with the variants listed above.

Think of T as Progress, N as Resource/Skill, and K as Consequence (usually Immediate, but that depends on the definitions of harm that you set up).

Here are just a few of the many situations that the engine, correctly configured, can simulate.

Extreme Weather

N = number of PCs / NPCs in the group

T = N unless there is a natural channel either guiding the weather toward them (+1-3 T) or away from them (-1-2 T).

K = scale of impact of the weather event on the group.

Best Option: The Blessed Balm PLUS Progressively Worse:
Each 1 that gets rolled increases the Target by 1.
Sixes rolled can undo some of the harm caused. Two sixes = one 1, three sixes = 2 ones, and so on, mitigating an existing K result OR reducing the Target by 1 if there are no K results to mitigate.

Everybody rolls a dice and contributes the result to the pool. Sixes push the weather away from the party, Ones bring it down on top of them to a degree. Net effects change from round to round, with weather either just missing the characters (K=0), catching them at its fringes (K=1), or enveloping them (K>1).

For added flavor, throw in Nested Damage Types – First impact = Wind, Second impact = Rain / Hail / Snow, Third impact = Stronger Wind, and so on.

Product Development

Your PC is part of a team developing a new product for sale. You will need a Market Specialist (salesman), a production / manufacturing engineer, a marketer, a technical expert, and a team manager.

The salesman will identify a gap in the market to be targeted; the technician will design the product to fill that gap; the engineer will determine the possible price-points and the rate of production that can be achieved; the marketer will figure out how to sell the product; and the team manager will make decisions and weigh the costs of altering the production environment to change the production engineer’s forecasts.

Each team member gets at least 1 dice to contribute; if their specific skill is more than double the lowest specific skill in the team, they get a second one. If the company has a good history / reputation in the field, the GM can award 1-3 extra dice.

T starts at 1 per team member. If the company has a bad history or reputation to overcome, add 2. If the product is especially cutting-edge, increase this subtotal by +50% or even +100%. If the market is especially cut-throat, add another 25% on top of that. For each team member whose specific skill is less than half the highest specific skill amongst the team, add another 1.

Each 6 counts +1 towards the product being fit for purpose. Each roll marks a milestone in the development process – there can be blind alleys, competitor announcements changing the market / playing field, cost increases, new markets opening up, old markets closing down, scandals in the boardroom – anything and everything that affects the market for the product.

Penalties take the form of additional design time between rolls (K=0, K=1) and reductions in the fitness for purpose of the resulting product (K>0).

I don’t think any of the optional configurations are appropriate for this application.

Collaboration to overcome an environmental hazard (1)

Use the AEE for ongoing natural challenges where the group’s collective effort determines the duration, and individual poor luck determines the immediate suffering.

Crossing a Frozen Lake or Glacier, for example: N (Dice) = The number of characters in the group, or the lowest relevant skill rating in the group, or some reasonable fraction thereof. Only characters with a relevant skill or with a relevant stat value higher than a medium-high threshold get a die. Below those marks, the characters are liabilities toward the group’s success.

T = the GM-assigned difficulty, or some simple fraction thereof, +1 per character, whether they get a die or not.

Options Configuration: The Continual Option, PLUS The Blessed Balm Option.
Once you roll a 1, that dice is locked and not re-rolled thereafter, and it counts toward future penalties. Rolling continues until every dice shows either a 1 or a 6. The core exit condition of accumulating T sixes remains in effect but is overshadowed by the alternative.
Sixes rolled can undo some of the harm caused. Two sixes = one 1, three sixes = 2 ones, and so on – removing it from the locked pool and releasing it back into the live dice to be rolled.

Collaboration to overcome an environmental hazard (2)

The party are roped together and have to climb.

N = Characters with climbing skill of +2 or better, or STR+DEX of 16 or better.

T = Total number of characters + 1-4 for difficulty of climb. Add 2 if the characters are under attack or otherwise pressured to climb at speed.

K = falls / setbacks. K>2 = ropes break.

Options Configuration: The Exhaustion Option simulates the rope tying the bad climbers to the good ones: When you roll a 6, after adding it to your tally, that dice no longer gets rolled.

For especially difficult climbs, add The Progressively-Worse Option: Each 1 that gets rolled increases the Target by 1.

For the most supremely challenging climbs, add the Devastating Option instead of Progressively Worse: The first 6 in a roll doesn’t count, only sixes above that one.

Ransacking A Library for specific (hidden / obscure) information

How long it takes to find a specific piece of hidden or obscure lore in a Library that might not even contain what you’re looking for depends on your reading speed (INT), presuming you have the ability to read, and your ability to recognize what you’re looking for, or that what you have just found is a clue to where to look next.

Well-structured libraries also make it a lot easier by excluding most of the books as irrelevant.

I would employ a simulation similar to the Design-A-Product example, but based purely on INT and not on specific skills. Note that if you have a character participating who is low INT, they can actually disrupt the efforts of higher-INT characters by continually interrupting them with “is this it?”.

Specifically, you want the total number of 6s to exceed the total number of 1s before the search comes to an end. If it doesn’t, either the answers aren’t there, or you’ve missed them. So long as there are dice to be rolled, there’s a chance, even if you’re at -2 or -3 to getting a result.

K=penalties to the success total, high-K = passing guards, accidental fires, magical books that scream when opened, ghostly librarians…

Focal Character overcoming an environmental hazard

All sorts of things fall into this category. Picking a combination lock, for example. Or Disarming a bomb with N critical steps that have to be performed in the right order. Or using a code-breaker.

You’ve seen these devices in the movies. Attach one to a lock and let it work its way through the combinations. To make life more difficult, consider a rolling code – that’s where a complex algorithm sets a new code every time, but only the 1000 or so valid results from that algorithm will be accepted. Which means that if you lock in the wrong answer, you have to start over.

The relevant skill here isn’t necessarily one of yours – it’s the design and programming skill of whoever designed and built the code-breaker. All you have to do is place it on the lock in roughly the right position.

With each success (each 6 toward the T), the stakes get higher. One wrong move (K>1) and it’s back to square one.

This scenario seems tailor-made for the Exhaustion Option – a 6 is a locked-in digit: When you roll a 6, after adding it to your tally, that dice no longer gets rolled.

Lesser K results are events that threaten failure / discovery, but which may not actually incur the problem.

T = the number of digits in the code.

N = T + a simple fraction of the programmer / designer’s skill.

Let the tension build…
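The code-breaker scenario with the Exhaustion option can be sketched as follows. The reset rule is an assumption drawn from “One wrong move (K>1) and it’s back to square one”, and `skill_bonus` stands in for “a simple fraction of the designer’s skill”:

```python
import random

def code_breaker(digits, skill_bonus, rng=random, max_rounds=10000):
    """Code-breaker with the Exhaustion option. T = digits in the code;
    N = T + skill_bonus. Each 6 locks in a digit and retires that die.
    Assumption: a round with two or more 1s (K>1) wipes the locked-in
    digits and restores the full pool. Returns rounds taken."""
    n = digits + skill_bonus
    live, locked, rounds = n, 0, 0
    while locked < digits and rounds < max_rounds:
        rounds += 1
        roll = [rng.randint(1, 6) for _ in range(live)]
        if roll.count(1) > 1:              # K > 1: back to square one
            live, locked = n, 0
            continue
        sixes = min(roll.count(6), digits - locked)
        locked += sixes
        live -= sixes                      # Exhaustion: locked dice retire
    return rounds
```

Because locked dice never return (short of a reset), the pool shrinks as the tally grows, and a late reset stings all the more.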


Click the icon to download the PDF

Using The AEE

If you prep in advance, you have plenty of opportunity to consult the tables and simply put the specific simulation instructions into your notes.

If you want to be able to use the system off-the-cuff, though, you’re going to have to be able to take it with you. For that reason, I’ve put together a PDF with the essential mechanics, shorn of explanation and example – but WITH a hyperlink back to this article.


Trade In Fantasy Ch. 5: Land Transport, Pt 5b


This entry is part 20 of 20 in the series Trade In Fantasy

This post continues the text of Part 5 of Chapter 5. Its content has been added to the parent post here and the Table of contents updated.

I have a series of images of communities of different sizes which will be sprinkled throughout this article. This is the first of these – something so sparsely-settled that it barely even qualifies as a community. It’s more a collection of close rural neighbors! Image by Jörg Peter from Pixabay

5.8.1.5 Blended Models

In general, the rule is one zone, one model. In fact, as a general rule, your goal should be one Kingdom, one model – that way, if you choose “England” as your model, your capital city will resemble London in size and characteristics, and not, say, Imperial Rome.

But, if you can think of a compelling enough reason, there’s no reason not to blend models. There are lots of ways to do this.

The simplest is to designate one model for part of a zone, and another to apply to the rest.

Example: if your capital city were much older than the rest of the Kingdom, you might decide that for IT ALONE, the Imperial model might be more appropriate, while the rest of the Kingdom is England-like. Or you might decide that because of its size, it has sucked up resources that would otherwise have grown surrounding communities more strongly, and declare a three-model structure: Imperial Capital, France for all zones except Zone 1, and England for the rest of Zone 1.

Example: A zone contains both swamp and typical agricultural land. You decide that those parts that are Swamp are German or Frontier in nature, while the rest are whatever else you are using.

An alternative approach to the problem that works in the case of the latter example is to actually average the two models’ characteristics and apply the result either to just the swamp areas, or to the zone overall.

When you get right down to it, the models are recommendations and guidelines, describing a particular demographic pattern seen in Earth’s history. There’s absolutely nothing to prevent you from inventing a unique one for a Kingdom in your world – except for it being a lot of work, that is.

5.8.1.6 Zomania – An Example

I don’t really think that a fully-worked example is actually necessary at this point, but I need to have one up-to-date and ready to go for later in the article. So it’s time for another deep-dive into the Kingdom of Zomania.

5.8.1.6.1 Zone Selection

I’ll start by picking a couple of Zones that look interesting, and distinctive compared to each other.

Zone 7 is bounded by a major road, but doesn’t actually contain that road; it DOES have capacity for a lot of fishing, though. And I note that there are cliffs in the zones to either side of it, so they WON’T support fishing – in fact, those cliffs appear to denote the limits of the zone. Zone 7 adds up to 167.8 units in area, and features 26 units of pristine beaches.

Zone 30 has an international border, a major road, lots of forest, and foothills becoming mountainous. It’s larger than Zone 7, at 251.45 units.

Because I haven’t detailed these areas at all, the place that I have to start is back in 5.7.1.13. But first…

5.8.1.6.1.1 Sidebar: Anatomy Of A Fishing Locus

I was going to bring this up a little later, but realized that readers need to know it, now.

Coastal Loci are a little different from the norm. To explain those differences, I threw together the diagram below.

1. A coast of some kind. It might not be an actual beach, but it’s flat and meets the water.

2. It’s normal, especially if there’s a beach, for the ends to be ‘capped’ with some sort of headland. This is often rocky in nature. This is the natural location for expensive seaside homes and lighthouses.

3. Fishing villages.

4. Water. It could be a lake, or the sea, or even a river if it’s wide enough.

5. Non-coastal land, usually suitable for agriculture.

6. A fishing village’s locus is compressed along the line of the coast and bulging out into the water. This territory produces a great deal more food than the equivalent land area – anywhere from 2-5 times as much. Some cultures can go beyond coastal fishing, doubling this area – though what’s further out than shown is generally considered open to anyone from this Kingdom. Beyond that, some cultures can Deep-Sea fish (if this is the sea), which quadruples the effective area again. If you’re keeping track, that’s 2-5 x 2 x 4 = 16-40 times the land area equivalent. The axis of the locus is always as perpendicular to the coast as possible.

7. The bottoms of the lobes are lopped off…

8. And the land equivalent is then found by ‘squaring up’ the loci…

9. …which means that these are the real boundaries of the locus. The area stays roughly the same, though.

The key point is this: you don’t have to choose “Coastal Mercantile” to simulate living on the coast and fishing for food. There are mechanisms already built into the system for handling that – it’s all done with Terrain and a more generous interpretation of “Arable Land”.

Save the “Coastal Mercantile” Model for islands and coastal cultures whose primary endeavor is water-based trade.

Zone 7, then, should have the same Model as all the other farmland within the Kingdom. I think France is the right model to choose.

Zone 30 is a slightly more complicated story. For a start, don’t worry about the road – like coastal villages, that gets taken care of later. For that matter, so are the heavy forestation and the local geography – hills and mountains. But this is an area under siege from the wilderness, as explained in an earlier post. That changes the fundamental parameters of how people live, and it should be reflected in a change of model. In this case, I think the Germany / Holy Roman Empire model of lots of small, walled communities is the most appropriate.

But this does raise the question of where the change in profile takes place. I have three real options: the Zone in its entirety may be HRE-derived; the HRE model might apply only to the forests; or it might take hold only in the hills and mountains.

My real inclination would be to choose one of the first two options, but in this case I’m going to choose door number 3, simply because it will contrast the HRE model with the base French version of the hills and forests. In fact, for that specific purpose, I’m going to set the boundary midway through the range of hills:

5.8.1.6.1.2 Sidebar: Elevation Classification

Which means, I guess, that I should talk about how such things are classified in this system. There are eight elevation categories, but the categories themselves are based on the differences between peak elevation and base elevation.

I tried, but couldn’t quite get this to be fully legible at CM-scale. Click on the image above to open a larger copy in a new tab.

To get the typical feature size – the horizontal diameter of hills or mountains – divide 5 x the average of the Average Peak Elevation range by the average Relief range and multiply by the elevation category number, squared for mountains, or twice the previous category’s value, whichever is higher. Note that the latter is usually the dominant calculation! The results are also shown below. Actual cases can be 2-3 times this value – or 1/2 of it.

1. Undulating Hillocks – Average Peak Elevation 10-150m, Local Relief <50m; Features 16m (see below).
2. Gentle Hills – Average Peak Elevation 150-300m, Local Relief 50-150m; Features 32m.
3. Rolling Hills – Average Peak Elevation 300-600m, Local Relief 150-300m; Features 64m

     -> □ Zone 30 Treeline from the start of this category
     -> □ Normal Treeline is midway through the range

4. Big Hills – Average Peak Elevation 600-1000m, Local Relief 300-600m; Features 128m
5. Shallow Mountains – Average Peak Elevation 1000-2500m, Local Relief 600-1500m; Features 417m
6. Medium Mountains – Average Peak Elevation 2500-4500m, Local Relief 1000-3000m; Features 834 m
7. Steep Mountains – Average Peak Elevation 4500-7000m, Local Relief 3000-5000m; Features 1668m
8. Impassable Mountains, permanent snow-caps regardless of climate – Average Peak Elevation 7000m+, Local Relief 5000m+; Features 3336m.

Undulating Hillocks (also known as Rolling Hillocks or Rolling Foothills) are basically a blend of scraped-away geography and boulders deposited by glaciers. If the boulders have any sort of faults (and most do), they will quickly become more flat than round and start to tumble within the Glacier. When they come to rest, several will be stacked, one on top of another, generally in long waves. There will be gaps in between, which get filled with earth and mud and weathered rock over time, unless the rocks are less resistant to weathering than soil, in which case the rocks get slowly eaten away. In a few tens of thousands of years, you end up with undulating hillocks, or their big brothers. The flatter the terrain, the more opportunity there is for floodwaters to cover everything with topsoil, smoothing out the bumps. The diagram above shows how this ‘stacking and filling’ can produce structures many times the size of individual hillocks.

A very similar phenomenon – wind instead of glaciers, and sand instead of boulders – creates sandy dunes in deserts prone to that sort of thing. Over time, great corridors get carved out before and after each dune, generally at right angles to the prevailing winds. It can help you picture it if you think of the wind “rolling” across the dunes – when they come to a spot where the sand is a little less held together, it starts to carve out a trench, and before long, you have wave-shaped sand-dunes.

5.8.1.6.3 Area Adjustments – from 5.7.1.13

Zone 7 has a measured area of 167.8 units, but that needs to be adjusted for terrain. Instead of the slow way, estimating relative proportions, let’s use the faster homogenized approach:

Hostile Factors:
     Coast 1.1 + Farmland 0.9 + Scrub 1.1 = 3.1; average 1.03333.
     Coast +0.25 + Beaches -0.05 + Civilized -0.1 = +0.1
     Towns -0.1
     Net total: 1.03333
167.8 x 1.0333 = 173.4 units^2.

Benign Factors:
     Town 0.1 + Coast 0.15 + Beaches 0.15 + Civilized 0.2
     Subtotal +0.6
     Square Root = 0.7746
173.4 x 0.7746 = 134.3 units^2.

Zone 30 is… messier. Base Area 251.45 units^2.

Hostile Factors:
     Mining 1.5 +
     Average (Mountains 1.4 + Forest 1.25 + Hills 1.2 = 3.85) = 1.28
     Town -0.1 + Foreign Town 0.1 + River 0.2 + Caves 0.05 + Ruins 0.4 + “Wild” 0.1 = +0.75
     Net total = 1.5 + 1.28 + 0.75 = 3.53
251.45 x 3.53 = 887.6 units^2.

Benign Factors:
     Town 0.1 + Foreign Town -0.1 + River +0.1 + Caves 0.05 + Ruin 0.4 + Major Road 0.2
     Subtotal 0.75
     “Wild” = average subtotal with 1 = 0.875
     Sqr Root = 0.935
887.6 x 0.935 = 829.9 units^2.
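The homogenized adjustment used for both zones reduces to one small formula: average the terrain hostility factors, add the flat modifiers, multiply the base area by that net total, then multiply by the square root of the benign subtotal. A sketch reproducing both calculations (small differences from the figures above are rounding):

```python
from math import sqrt

def adjusted_area(base, terrain_factors, flat_modifiers, benign_subtotal):
    """Homogenized area adjustment: base area x (average of the terrain
    hostility factors + sum of flat hostile modifiers) x square root of
    the benign subtotal."""
    net = sum(terrain_factors) / len(terrain_factors) + sum(flat_modifiers)
    return base * net * sqrt(benign_subtotal)

# Zone 7: Coast/Farmland/Scrub terrain, +0.1 coastal modifiers and
# -0.1 towns, benign subtotal 0.6
zone7 = adjusted_area(167.8, [1.1, 0.9, 1.1], [0.1, -0.1], 0.6)       # ~134.3

# Zone 30: Mining 1.5 and the +0.75 modifier block are flat additions;
# the "Wild" rule has already averaged the benign subtotal with 1
zone30 = adjusted_area(251.45, [1.4, 1.25, 1.2], [1.5, 0.75], 0.875)  # ~830
```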

To me, this looks very Greek – but it’s actually ‘Gordes’, in France, which the photographer describes as a village. One glance is enough to show that it’s bigger than the town depicted previously. Image by Neil Gibbons from Pixabay

5.8.1.6.4 Defensive Pattern – from 5.7.1.14

Zone 7 is pretty secure, the biggest threat being local insurrection or maybe pirate raids. A 4-lobe structure of 2½,5 looks about right.

When I measure out the area protected by a single fort and 4 satellites, I get 47.2 days^2. That takes into account overlapping areas where this one structure shares the burden 50% with a neighboring structure, and the additional areas that have to be protected by cavalry units.

That means that in Zone 7, there should be S x 134.3 / 47.2 = 2.845 x S of them, where S depends on the size of a “unit” on the map, measured in days’ march for infantry.

S is going to be the same for all zones. I’ve avoided making that decision for as long as I can – the question is, how large is Zomania?

5.8.1.6.5 Sidebar: The Size of Zomania, revisited

16,000 square miles – at least, that’s the total that I threw out in 5.7.1.3.

That’s about the same size as the Netherlands.

It’s a lot smaller than the Zomania that I’m picturing in my head when I look at the map. It IS the right size if the units shown are miles. But if they aren’t?

There are two reasons for regularly offering up Zomania as an example. The first is to provide a consistent foundation and demonstration of the principles discussed coming together into a cohesive whole. And the second is for me to check on the validity of the logic and techniques that I’ve described.

That ‘wrong’ feeling is keeping my subconscious radar from achieving purpose #2. And the Zomania being described is too small – the cause of that ‘wrong’ feeling – which means that it isn’t going to adequately perform function #1, either.

There can be only one solution – Zomania has to grow, has to be scaled up. I want Zone 7 to be comparable to the size of the Netherlands, not the entire Kingdom, which should be comparable to France, or Germany, or England, or Spain.

A factor of 10? Where would 160,000 sqr miles place Zomania amongst the European Nations that I’ve named?

UK: 94,356. Germany: 138,063. Spain: 192,466. France: 233,032. So 160,000 would be smack-dab in the middle, and absolutely perfect for both purposes.

So Zomania is now 160,000 square miles, and the ‘units’ on all the maps are 10 miles each.

It wasn’t easy sorting this out – it’s been a road-block in my thinking for a couple of days now – triggered by results that seemed to show Zone 7 to be about 0.08 defensive structures in size.

And that is due to a second scaling problem that was getting in the way of my thinking:

How much is that in days’ marching?

In 5.7.1.14.3, I offered up:

    If d=10 miles (low), that’s 103,923 square miles.
    If d=20 miles (still low), that’s 415,692 square miles.
    If d=25 miles (reasonable), that’s 649,519 square miles.
    If d=30 miles (doable), 935,307 square miles.
    If d=40 miles (close to max), 1.66 million square miles.
    If d=50 miles (max), 2.6 million square miles.

But that was in reference to a theoretical 6 x 4, 12 + 12 pattern. Nevertheless, the scales are there. And they are way bigger than I thought they would be, and way too big to be useful as examples. Yet the logic that led to them seemed air-tight. Clearly, an assumption had been made that wasn’t correct, but this problem was getting in the way of solving the first one.

Once I had separated the two, answers started falling into place. The numbers shown above are how far infantry can march in 24 solid hours, such as they might do in a dire emergency. But defensive structures would not be built and arranged on that basis.

If infantry march for 8 hours, they have just about enough daylight left to break camp in the morning (after being fed) and set up camp in the evening (digging latrines and getting fed). That’s the scale that would be used in establishing fortifications, not the epic scale listed. In effect, then, those areas of protection are nine times the size they should be.

So, let’s redo them on that basis:

    If d=10 miles (low), that’s 11,547 square miles.
    If d=20 miles (still low), that’s 46,188 square miles.
    If d=25 miles (reasonable), that’s 72,169 square miles.
    If d=30 miles (doable), 103,923 square miles.
    If d=40 miles (close to max), 184,444 square miles.
    If d=50 miles (max), 288,889 square miles.

And those are still misleading, because mentally, I’m thinking of this as the area protected by the central stronghold, and ignoring the satellites. To get the area per fortification, we should divide by the total number of fortifications in the pattern – in the case of the numbers cited, that’s 6×4+12=36.

    If d=10 miles (low), that’s 320.75 square miles.
    If d=20 miles (still low), that’s 1283 square miles.
    If d=25 miles (reasonable), that’s 2,004.7 square miles.
    If d=30 miles (doable), 2,886.75 square miles.
    If d=40 miles (close to max), 5,123.4 square miles.
    If d=50 miles (max), 8024.7 square miles.

Reasonable = 2004.7 square miles, or roughly equal to a 44.8 x 44.8 mile area. For a really tightly packed defensive structure of the kind being discussed, that’s entirely reasonable – and it fits the image in my head.
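The three tables above collapse into a single expression. The 1039.23 constant is inferred from the d=10 row of the original table (103,923 / 10²); tiny differences from the listed values come from rounding in those tables:

```python
def area_per_fortification(d_miles, pattern_area=1039.23, forts=36):
    """Protected area per fortification for the theoretical 6 x 4,
    12 + 12 pattern (36 fortifications). d_miles is the 24-hour forced
    march; dividing by 3 gives the 8-hour working day actually used to
    site fortifications, and areas scale with the square of that."""
    working_day = d_miles / 3
    return working_day ** 2 * pattern_area / forts

# The "reasonable" 25-mile row: ~2004.7 square miles per fortification
```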

In my error-strewn calculation, my logic went as follows:

    ▪ In the inner Kingdom, I think that life is easy and lived fairly casually. That points to the lower end of the scale – 10 miles a day or 20 miles a day.

    ▪ 10^2 = 100, so at 10 mi/day, 16,000 = 160 days march.
    ▪ 20^2 = 400, so at 20 mi/day, 16,000 = 40 days march.

    ▪ That’s a BIG difference. 40 is too quick, but 160 sounds a little too slow. Tell you what, let’s pick an intermediate value of convenience and work backwards.

    ▪ 100 days march to cover anywhere in 16000 square miles gives 160, and the square root of 160 is 12.65 miles per day.

Now, that logic’s not bad. But it doesn’t factor in the ‘working day’ of the infantry march – it needs to be divided by 3. And it DOES factor in my psychological trend toward making the defensive areas smaller, because my instinct was telling me they were too large – but this is the wrong way to correct for that. So this number is getting consigned to the dustbin.

After all, the ‘hostile’ and ‘benign’ factors are supposed to already take into account the threat level that these fortifications are supposed to address, and hence their relative density.

    ▪ So, let’s start with the “reasonable” 25 miles.
    ▪ Apply the ‘working day’ to get 8.333 miles.
    ▪ The measured area of the defensive structure is 47.2 ‘days march’^2.
    ▪ Each of which is 8.333^2= 69.444 miles^2 in area.
    ▪ So the defensive unit – stronghold and four satellites – covers 47.2 x 69.444 = 3277.8 sqr miles.
    ▪ Or 655.56 sqr miles each.
    ▪ Equivalent to a square 25.6 miles x 25.6 miles.
    ▪ Or a circle 14.45 miles in radius.
    ▪ Base Area 173.4 units^2 = 17340 square miles.
    ▪ Adjusted for threat level, 134.3 units^2 or 13430 square miles. In other words, defensive structures are further apart because there’s less threat than normal.
    ▪ 13430 / 3277.8 = 4.1 defensive structures, of 1 hub and 4 satellites each.
    ▪ So that’s 4 hubs and 16 satellites plus an extra half-satellite somewhere.
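The Zone 7 chain above, as straight-line arithmetic (units of 10 miles each, per the rescaling decided earlier):

```python
day_march = 25 / 3                        # 8-hour working-day march, miles
unit_area = day_march ** 2                # one square day's-march, sq miles
structure = 47.2 * unit_area              # hub + 4 satellites, ~3277.8 sq mi
per_fort = structure / 5                  # ~655.56 sq mi per fortification
zone7_sq_miles = 134.3 * 10 ** 2          # adjusted area in square miles
structures = zone7_sq_miles / structure   # ~4.1 hub-and-satellite groups
```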

Those satellites could be anything from a watchtower to a small fort to a hut with a couple of men garrisoned inside, depending on the danger level and what the Kingdom is prepared to spend on securing the region. The stronghold in the heart of the configuration needs to be more substantial.

Okay, so that’s Zone 7. Zone 30 is a whole different kettle of fish.

I wanted to implement a 3-lobed configuration with more overlap than the four-lobed choice made for Zone 7. And it was turning out exactly the way I wanted it to: every hub was reinforced by three satellites, every satellite reinforced by three hubs. I had the diagrams 75% done and was gearing up to measure the protected area.

Which is when the plan ran aground in the most spectacular way. There were areas where responsibility was shared two ways, and three ways, and four ways, and – at some points – six ways. It was going to take a LONG time to measure and calculate.

If I were creating Zomania as an adventuring location for real, I would have carried on. If I lived in an ideal world, without deadlines (even the very soft ones now in place at Campaign Mastery) I would have continued. I still think that it would have provided a more enlightening example for readers, because I would be doing something a little bit different and having to explain the differences and their significance.

But since neither of those circumstances is the case, and this post is already several days late due to the complications explained earlier, I am going to have to compromise on principle and re-use the configuration established for Zone 7.

Well, at least that will show the impact that the greater threat level will impose on the structure, but it leaves the outer reaches of the Kingdom less well-protected than they should be. If and when I re-edit this series into an e-book, I might well spend the extra time and replace the balance of this section – or even work the problem both ways for readers’ edification.

REMINDER TO SELF – 3 LOBES, 1 DAY EXAMPLE

But, in the meantime…

Zone 30.
    ▪ Actual area 251.45 square units = 25,145 square miles.
    ▪ Adjusted for threat level = effective area 829.9 square units = 82,990 sqr miles. (in other words, the defensive structures you would expect to protect 82,990 square miles are so closely packed that they actually protect only 25,145 square miles, a 3.3-to-1 ratio.)
    ▪ Defensive Structure = 3277.8 square miles (from Zone 7).
    ▪ 82,990 / 3277.8 = 25.32 defensive structures of 5 fortifications each, or 126.6 fortifications in total. Zone 7 is 69% of the area and had a total of 20.5 fortifications, in comparison.

What does 0.32 defensive structures represent? Well, if I take the basic structure and ‘lop off’ two of the satellites, then it’s 3/5 of a protected area minus the overlaps. By eye, those overlaps look to be a bit more than 2 x 1/4 of one of those 1/5ths, and since 1/4 of 1/5 is 1/20th, that’s roughly 0.6-0.1 = 0.5.

If I take away a third satellite, the structure is down to 2/5 protected area minus overlaps, and those overlaps are now 1 x 1/20th, so 0.4-0.05=0.35. So, somewhere on the border, there’s a spot with one hub and one satellite.

One more point: 3.3 to 1. What does THAT really mean? Well, the defensive structure used has satellites 2.5 days march from the hub. But everything is more compressed, by that 3.3:1 ratio, so the satellites in Zone 30 are actually 2.5 / 3.3 = 0.76 day’s march from the hub. The area each commands is still the same, but there’s a lot more overlap and capacity to reinforce one another.

Another way to look at it is that there are so many fortifications that each only has to protect a smaller area. 3277.8 sqr miles / 3.3 = 993 sqr miles.
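The compression can be expressed directly, reusing the figures above:

```python
# Effect of the 3.3:1 compression ratio on Zone 30's fortifications.
BASE_MARCH_DAYS = 2.5      # satellite-to-hub distance, in days' march
RATIO = 3.3                # threat-level compression ratio

compressed_march = BASE_MARCH_DAYS / RATIO   # Zone 30 satellite-to-hub
area_per_fort = 3277.8 / RATIO               # sq miles per fortification

print(round(compressed_march, 2))  # 0.76 days' march
print(round(area_per_fort))        # 993 sq miles
```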

5.8.1.6.6 Sidebar: Changes Of Defensive Structure

The point that I’m going to make in this sidebar won’t make a lot of sense unless you’re paying close attention, because the Zone 30 example has the same defensive structure as Zone 7 – it’s just a lot more compressed. But imagine for a moment that there was a completely different defensive structure in Zone 30.

What does that imply for Zone 11, which lies in between the two?

You might think that it should be some sort of half-way compromise or blend between the two, but you would be wrong to do so.

If you look back at the overall zone map for Zomania (reproduced below)

…and recall that the zones are numbered in the order they were established, a pattern emerges. Zone 1 first, then Zone 2, then Zones 3-4-5-6-7, then zones 8-9-10-11-12, and so on. Until Zones 29-32 were established, Zone 11 was the frontier, so it would likely have the same defensive structure as Zone 30. Rather than fewer fortifications, it would have them at the same density as Zone 30 – but the manpower in each would be reduced.

If you know how to interpret it, the entire history of the Kingdom should be laid bare by the changes in its fortifications and defenses.

But that’s not as important as the verisimilitude that you create by taking care of little details like this and keeping them consistent. The specifics might never be overtly referenced – but they still add a little to the credibility of the creation.

5.8.1.6.7 Inns in Zone 7 – from 5.7.3

Zone 7 is noteworthy for NOT having a major road – that’s on the Zone 11 / Zone 6 side of the border. Some of the inns along that road, however, may well be over that border – it’s a reasonable expectation that half of them would count. But only the half located where the border runs next to the road – there’s a section at the start and another at the end where the border shifts away.

But there’s a second factor – what is the sea, if not another road to travel down? And Zone 7 has quite a lot of beach. The reality, of course, is that these are holiday destinations, and places for health recovery – but it’s a convenient way of placing them.

So that’s two separate calculations. The ‘road that is a road’ first: There are actually two sections. The longer one runs through Zones 6 and 11, as already noted; it measures out at 15 units long, or 150 miles.

The second lies in Zone 15, and it’s got a noticeable bend in it. If I straighten that out and measure it, I get 5 units or 50 miles.

Conditions:
    Road condition, terrain, good weather = 3 x 2 = 6.
    Load = 1 x 1/2 = 0.5.
    Everything else is a zero.
    Total: 6.5.
6.5 / 16 x 3.1 = 1.26 miles per hour.
1.26 mph x 9 hrs = 11.34 miles per day.
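In code, the travel calculation above looks like this. The /16 divisor and x3.1 multiplier are constants from the travel system established earlier in this series; I'm taking them as given here.

```python
# Travel speed for ordinary travelers on this road, per the condition
# scoring above (road/terrain/weather = 3 x 2, load = 1 x 1/2).
conditions = 3 * 2 + 1 * 0.5
speed_mph = round(conditions / 16 * 3.1, 2)
daily_miles = speed_mph * 9   # 9 traveling hours per day

print(speed_mph)              # 1.26 mph
print(round(daily_miles, 2))  # 11.34 miles per day
```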

Here’s the rub: we don’t know exactly where the hubs and satellites are in Zone 7, only how many of them there are to emplace. But it seems a sure bet that those areas where the road and border part ways do so because there’s a fortification there that answers to Zone 6 or Zone 11, respectively. And that means that we can treat the entire length of the road as being between two end points.

We know from the defensive structure diagram that the base distance from Satellite to Hub is 2 1/2 days march, and that there’s a scaling of x 1.0333 (hostile) x 0.7746 (benign) = x 0.8 – and that benign factors space fortifications further apart while hostile ones bunch them together, so distances are divided by this factor. We know that 8.333 miles has been defined as a “day’s march”.

If we put all that together, we get 2.5 x 8.333 / 0.8 = 26 miles from satellite to hub.

Armies like their fortifications on roads; it makes it faster to get anywhere. Traders like their trade routes to flow from fortification to fortification; it protects them from bandits. The general public, ditto. If a road doesn’t go to the fortification, people will create a new road and leave the official one to rot. So it can be assumed that the line of fortifications will follow the road, and be spaced every 26 miles along it, alternating between hub and satellite.

    150 miles / 26 = 5.77 of them.
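Putting the spacing derivation into a few lines of Python (variable names are mine):

```python
# Fortification spacing along the 150-mile road, per the derivation above.
MILES_PER_DAYS_MARCH = 8.333
BASE_SPACING_DAYS = 2.5          # satellite-to-hub, in days' march
scaling = 1.0333 * 0.7746        # hostile x benign factors, ~0.8

# Benign factors spread fortifications out, so we divide by the scaling.
spacing = round(BASE_SPACING_DAYS * MILES_PER_DAYS_MARCH / scaling)
count = 150 / spacing            # fortifications along the road

print(spacing)          # 26 miles, satellite to hub
print(round(count, 2))  # 5.77 of them
```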

It’s an imperfect world; that 0.77 means that you have one of three situations, as shown below:

The first figure shows a hub at the distant end of the road. The second shows a hub at the end of the road closest to the capital. And the third shows the hubs not quite lining up with either position.

But those aren’t the actual ends of the road – this is just the section that parallels the border of Zone 7, or vice-versa. So the last one is probably the most realistic.

Now, let’s place Inns – one every 11.34 miles. But we have to place them from both ends – one set marking 1 day’s travel for ordinary people headed out, and one for those heading in. Just because I’m Australian, and we drive on the left, I’ll put outbound on the south side and inbound on the north.

Isn’t that annoying? They don’t quite line up – to my complete lack of surprise. Look at the second in-bound inn – it’s about 20% of a day short of getting to the satellite, and that puts it so close that it’s not worth stopping there; you would keep going.
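That shortfall can be checked with idealized mileposts (the real map is less regular, so treat this as a rough sanity check):

```python
# Inns every 11.34 miles from the inbound end of the 150-mile road,
# fortifications every 26 miles; how short of a fort is the 2nd inn?
DAY = 11.34
FORT_SPACING = 26
ROAD = 150

second_inn = ROAD - 2 * DAY                                     # 127.32 miles
nearest_fort = round(second_inn / FORT_SPACING) * FORT_SPACING  # 130 miles
shortfall_days = (nearest_fort - second_inn) / DAY

print(round(shortfall_days, 2))  # ~0.24 of a day short
```

That's roughly a fifth to a quarter of a day, consistent with the eyeball estimate.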

Well, you can’t make a day longer, but you can make it shorter. And that makes sense, because these are very much average distances.

I’ve shortened the days for the ordinary traveler – including merchants – just a little, so that every 5th inbound Inn is located at a Stronghold, and every 5th outbound inn is located at a satellite. Every half-day’s travel now brings you to somewhere to stop for a meal or for the night.

It must be added that not all of these Inns will necessarily be in service. Maybe only half of them are actually operating. Maybe it’s only 1/3. But, given its position within the Kingdom, there’s probably enough demand to support most of these, so let’s do a simple little table:

    1 – inn functional
    2 – inn functional
    3 – inn functional but 1/4 day closer
    4 – inn functional but 3/4 day farther away
    5 – inn not functional
    6 – inn not functional, and neither is the next one.

Applying this table produces the following (for some reason, my die kept rolling 3s and 6s):

Even here, in this ‘safe’ part of the Kingdom, travelers will be forced to camp by the roadside.
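If you want to roll this table for a longer road, a minimal sketch (the function and seed are my own, for illustration only):

```python
import random

# Roll the d6 inn-status table once per inn site, honoring the rule
# that a 6 also knocks out the next inn along.
TABLE = {
    1: "functional",
    2: "functional",
    3: "functional, but 1/4 day closer",
    4: "functional, but 3/4 day farther away",
    5: "not functional",
    6: "not functional, and neither is the next",
}

def roll_inns(count, rng=None):
    """Return a status for each of `count` inn sites along the road."""
    rng = rng or random.Random()
    statuses, skip_next = [], False
    for _ in range(count):
        if skip_next:
            statuses.append("not functional (previous roll was a 6)")
            skip_next = False
            continue
        roll = rng.randint(1, 6)
        statuses.append(TABLE[roll])
        skip_next = roll == 6
    return statuses

for i, status in enumerate(roll_inns(8, random.Random(7)), 1):
    print(i, status)
```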

And that’s where I’ll have to leave it, for this post. I had hoped to get all of the Zomania examples done, but the problems early on put paid to that, and didn’t even leave me enough time to get Zone 30 detailed through to the inn stage – let alone up to date! That’s obviously for the next post….
