Campaign Mastery helps tabletop RPG GMs knock their players' socks off through tips, how-to articles, and GMing tricks that build memorable campaigns from start to finish.

It’s Not Cheating Unless You Get Caught: Game Fraud and Counter-Fraud in RPGs


Base image by Stelogic, modified by Mike

This is an article that’s been brewing in the back of my mind for a very long time, encouraged in part by the Numb3rs Season 2 episode ‘Double Down’ (and several subsequent episodes), in part by movies such as ‘The Sting’, ‘Ocean’s 11’, and ‘Ocean’s 13’, in part by the Babylon 5 Season 1 episode ‘The Quality Of Mercy’ (in which Londo cheats at Poker using an extra (tentacle) limb to reach across under the table), and in part by the first two or three seasons of ‘Las Vegas’. Oh, and some of the Poker games of the crew of the Enterprise in Star Trek: The Next Generation as well – just take a look at this list of ‘poker references in Star Trek’ at Memory Alpha, the Star Trek wiki.

Specifically, I started thinking about how people could or would attempt to cheat in worlds where characters had different abilities to the norm, and what measures casinos – who obviously would want to deter such practices – would have to put in place to stop them.

Cheating in Fantasy

At first glance, there seem to be two ways of cheating in addition to the standard methods. The first is the use of precognition, and the second uses telekinesis of some sort. But with a little imagination, a few more start suggesting themselves.

Precognition

The problem with precognition is that it always lacks context or detail. If a spell is so kind as to show the cheater’s hand (assuming that we’re talking about a card game such as poker), a large bet, and a triumphant win, there is no guarantee that these events happen in sequence, or all relate to the same hand of the game. This makes precognition more a “will-I-won’t-I-play-tonight” coin-toss, cast in advance, but even used in this way there are problems. The player may think they have won, only for someone else at the table to crash the party with a better hand even as the precognitive player celebrates. Or for the player to lose his winnings – and then some – in one or more subsequent hands, for that matter.

Because the GM has full control over the content of the precognition, and will never tell the whole story (he needs to leave room for player input, if nothing else), a player should never be able to rely on it 100%. Scenes should be disjointed, not a seamless narrative.

This is justifiable in a fantasy game situation because the casino would make every effort imaginable to stay on the good side of the God(dess) of Luck, God(dess) of Fate, and any other such power that might be able to influence the outcome – or prevent such influences from being helpful.

This notion can be taken further – a casino might employ priests of the God(dess) of Dreams to ensure that everyone who stays at the Casino has dreams of winning, just to encourage them to play!

Telekinesis

The ability to manipulate objects at a distance would seem to be an obvious way to cheat, but in a fantasy milieu, such spells are almost always accompanied by a visual display of some kind – a spectral hand, for example. This makes the approach generally untenable.

Scrying

A more subtle approach is to employ an ability to scry to see other players’ cards. In some game systems, this requires a rather obvious physical object to use as the focus of the magic, such as a crystal ball, a mirror, or something similar; this makes Scrying unsuitable as a technique for cheating. Other game systems permit scrying in any reflective surface – a refinement of the obvious approach of simply seeing the reflection of someone’s cards. This suggests an obvious technique – a team, one of whom is a fighter in brightly-polished armor and the other of whom uses that armor to scry on his opponents’ hands!

How would a casino combat scrying? The obvious remedy is to ban players from wearing anything polished or reflective, but that would be difficult to enforce. Carried through to its logical extreme, such a ban would impart a rustic flavor to the most up-market casinos, a touch which could help distinguish the game world from the real one. The other problem with such a ban is that it would require casinos to store a large quantity of very valuable magic items belonging to patrons, who would be reluctant to hand them over unless the casino assumed full responsibility for them (and had adequate – and expensive – security to boot) – a financial risk that could quite literally break the bank.

A better, but more difficult, approach would be to employ illusionists to overlay false ‘readings’ on any reflective surface, ensuring that scrying gave a false ‘read’. Since these are generally the most easily-cast illusions – the casino wouldn’t even have to make them look real, if they publicized the security measure – this would also be a relatively inexpensive solution. It follows that the best and most famous casinos would be found in cities/nations with strong magic schools.

It might even reach the point where the casino paid the costs of apprenticeships in the illusionary art, at least until the apprentice/casino employee had learned enough to carry out this function. Many of the apprentices would not have the skills to progress further, but this is still a win/win/win for the parties involved – the casino gets its illusionists, the trainees get secure employment with the prospects of being taken on as a full apprentice if they are good enough, and the tutor gets additional income, a patron, and assistance in locating suitable apprentices to work on his behalf.

Illusions

Of course, illusions work both ways. A cheat can overlay one of his cards with an illusion of a different card, or do the same with a card in a rival’s hand (though that’s trickier to get away with). What’s more, it would be harder for a casino to combat this without disrupting the illusions that they are using to prevent scrying. The best answer to this is to use a deck of custom-created magic cards that disrupt illusions cast on them – again strengthening the affiliation between schools of magic and successful gambling dens.

This would have an additional benefit – most magic items are far more resilient than their non-magical counterparts. Modern casinos combat card-marking by swapping decks frequently, a practice that relies on having a manufacturing industry able to produce hundreds of virtually-identical decks of cards; the pseudo-medieval nature of most fantasy campaigns does not permit this, but using magic cards achieves the same benefit by making the cards harder to mark in the first place.

Non-magical techniques

Of course, there are always the traditional approaches: high levels of manual dexterity, ‘trick’ shuffles, card-counting, and so on. Some of these might not have been thought of in a fantasy environment, or probability might not work exactly the same ‘in-game’ as it does in real life, but as a general rule, they would still be expected to work. The techniques to combat these would be equally traditional, and most revolve around a dealer provided by the casino.

‘Anything goes’ games

Of course, any private game run outside the scope and protection of a well-heeled casino would not be able to counter these measures so easily – which can be the basis for a fun adventure, in which the PCs sit in on a ‘friendly’ game in which everyone else is cheating by means of a different method!

Cheating in Sci-Fi

As technology and analytic sophistication improve, statistical analysis becomes an increasing element of both cheating (and advantage playing) and of the detection of cheating. It is generally considered cheating for someone to have computer assistance, for example, but this becomes difficult to enforce when mobile telephones which can also run software apps become ubiquitous, and almost impossible if cybernetic enhancements are commonplace.

To combat this, casinos would employ computer-based statistical analysis to detect cheating. The house has the advantage of being able to see everyone’s cards through hidden cameras, so they can always determine the optimum approach, assuming ignorance on the part of the players. A player may legitimately make a mistake and ignore what is apparently the optimum move once or twice without arousing suspicion, especially if they are clearly not a professional player. A professional may even get away with it occasionally if they can ‘read’ another player well enough – though this becomes more problematic when the other players are also professionals who have eliminated as many ‘tells’ as possible. The more often a player wins by virtue of an apparent mistake, the more suspicious that player’s behavior becomes.
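The arithmetic behind that suspicion can be sketched directly. Here’s a minimal Python illustration – the 20% “win after a deliberate mistake” figure is an invented placeholder, not a real poker statistic – showing how the house might compute the odds of a run of ‘lucky mistakes’ if the player really were just guessing:

```python
import math

def suspicion_score(mistake_wins: int, mistakes: int, win_prob: float = 0.2) -> float:
    """Probability of seeing at least `mistake_wins` wins out of `mistakes`
    sub-optimal plays, if a win after a mistake really happens with
    probability `win_prob` (an illustrative figure, not real poker odds)."""
    return sum(
        math.comb(mistakes, k) * win_prob**k * (1 - win_prob)**(mistakes - k)
        for k in range(mistake_wins, mistakes + 1)
    )

# Two lucky mistakes out of three is plausible; nine out of ten is not.
print(suspicion_score(2, 3))    # about 0.10 -- no alarm yet
print(suspicion_score(9, 10))   # around 4e-06 -- flag the player
```

When that probability collapses toward zero, “apparent mistake” stops being a credible explanation and the pit boss starts paying attention.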

Adding to this are other methods of profiling players, both legitimate (facial recognition to identify known and suspected cheats) and illegitimate, and the capabilities of non-human species.

Consider a race with multiple eyes on stalks which weave back-and-forth like cats’ tails by instinct, alert to any potential danger or threat – how hard would it be to prevent such a player from getting a glimpse of his neighbors’ cards? Or a race with a marsupial’s natural pouch in which marked cards or cold decks can be located, but which is normally used to protect the young for whom exposure to the outside world can be dangerous – searching one of those would be the equivalent of a body-cavity search, which would have to be deemed unacceptable as a business practice (if not outright illegal). A race with very accurate thermal senses or very sharp hearing or even canine olfactory abilities might be able to measure the nervousness of another player – hardly definitive, but a definite advantage. Heck, even a race from a low-gravity environment with an extra meter (very roughly 3¼’) of height would find it much easier to see other players’ cards!

There are really very few things that can be done to combat the use of such player advantages. One possibility would be the use of ‘isolation walls’ between players so that they could not see each other, but the ability to ‘read’ an opponent is a huge part of the difference between live play and online play, so this would probably not be especially welcomed. It would also impact the audience’s vicarious attraction to the game, which is a big part of the atmosphere of high-stakes tournaments, so once again, the solution is left wanting.

Ultimately, I expect casinos would simply tell players to learn to live with it – each player has a different mix of abilities and skills that they can use to their advantage as they see fit.

The alternative would be to embrace one alternative solution that comes to mind – but I’m going to save discussion of that alternative for the end of the article.

Telepathy

One particular mode of cheating that needs mentioning at this point is the issue of telepathy. Unlike a fantasy campaign, with its special-effects-friendly magic spells, psionics is usually an invisible ability in most sci-fi and superhero games.

There are two primary defenses that could be employed against this method of cheating. The first would be a technological defense – some sort of “jammer” that each patron could wear or carry, which would broadcast mental “static” to any telepath present. The alternative would be taken from the old adage, “set a thief to catch a thief” – staff telepaths who would wander around “looking” for people using telepathic means to cheat.

I can see the latter being especially appropriate to a Babylon 5 universe (or any similar game) where there is an aggressive recruitment campaign by the Psi-Corps. For the former to work, there needs to be, one, a general acceptance that psionic talents are real, backed by experimental proof, and two, formal and detailed studies into exactly how they work. The second can’t exist without the first, and is required to provide a theoretical foundation for developing the gadget in the first place.

Cheating in Superhero Campaigns

A world with Superheroes would have to be a Casino Manager’s worst nightmare. Not only would it bring all the modes of cheating from both sci-fi and fantasy environments – and some of the limiting factors that constrain these would or could be reduced or even obviated completely – but there would be more besides.

Telekinesis

Superhero TK doesn’t have to be visible, and often isn’t. That means that players don’t have to ‘reach’ for cold decks or extra cards, or even to take surreptitious glimpses at the deck.

The one saving grace for Casinos where this ability is concerned is that psionics rarely occur alone – a telekinetic usually has at least some telepathic ability, however poor and untrained it is. So developing and then turning up the Jammers might be sufficient.

Stretching

This brings us back to Londo Mollari and his “extra limb”. When a big toe can stretch and curl around the table to surreptitiously reveal the top card on the deck, casinos have a problem – though a card “shoe” would mitigate this.

Precognition

Superhero precognition is usually more controllable and more reliable than the fantasy variety. If this is a psionic ability, then the measures mentioned under Telekinesis might be effective, but if the character reads the future by sensing the shape of the timeline, the power might be completely non-psionic. However, the latter approach would probably require a greater measure of skill in interpreting the results since there is no direct link between outcome and perception, so this would probably be considered the equivalent of being a professional player, and tolerated.

Teleportation

Can a sufficiently-skillful player teleport cards out of a card shoe at the same time as they teleport in a cold deck? If someone were to propose this to me in a game I was running, I would be extraordinarily skeptical, and would require the character to have put in many long hours of practice. Can it be done with no-one noticing (including security cameras)? That’s even more problematic. Casinos could combat this further by installing simple but highly-accurate pressure sensors under each card shoe – sufficient to detect the difference in weight of a single card. If the reduction is accompanied by the dealer extracting a card from the shoe, it can be ignored; if not, the alarm sounds. The casino might not know who was cheating, but they would know that someone was – and could declare the hand null and void, bring in a fresh shoe, and so on, just as they would if they detected someone marking the cards without knowing who.
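The sensor logic described above is simple enough to sketch. In this hedged Python illustration, the card weight and noise tolerance are invented round numbers, not measurements of real casino equipment:

```python
CARD_WEIGHT = 1.8  # grams -- a plausible playing-card weight; illustrative only
TOLERANCE = 0.2    # sensor noise margin, in grams

def shoe_alarm(prev_weight: float, new_weight: float, dealer_drew: bool) -> bool:
    """Return True if the shoe's weight changed in a way the dealer's
    actions can't account for (i.e. a card teleported in or out)."""
    delta = new_weight - prev_weight
    if dealer_drew:
        # A legitimate draw removes exactly one card's weight.
        return abs(delta + CARD_WEIGHT) > TOLERANCE
    # With no draw, any measurable change is suspicious.
    return abs(delta) > TOLERANCE

print(shoe_alarm(93.6, 91.8, dealer_drew=True))    # normal draw -> False
print(shoe_alarm(93.6, 91.8, dealer_drew=False))   # card vanished -> True
```

Note that, exactly as in the text, the alarm tells the house *that* a card moved, not *who* moved it.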

Super-luck & Mind control

There are many other approaches to cheating that can be considered for a superhero campaign, but most of them also exist in either fantasy or sci-fi settings.

One step up from mere precognition is the ability to actually alter the outcome. The simplest method is simply to influence another character’s decision-making process – but that’s generally psionics again, or some form of hypnotism (which in turn requires some form of communication with the victim that would either be obvious or be psionic in nature), so the same measures work.

The more unusual method is to be able to fiddle with probability itself, during the shuffling process for example. Not even automated card-shuffling machines would be able to resist this power, and the only way to counter it is to employ someone to give the patrons bad luck – an unacceptable choice.

In fact, the only solution to this method of cheating is to have the randomizing agent somewhere else completely. And that brings us to the final anti-cheating technique, the one that I alluded to earlier:

Online Gambling

Almost all of these problems go away if you replace the casino with a server farm. If there are no physical cards to manipulate, if players don’t know who they are up against except as a screen name (let alone where they are and who they are), most of the old methods of cheating simply won’t work.

Even unusual methods like reading or manipulating the shape of the future become more difficult, and more easily detected, simply because the game relies on numbers that appear random – but are actually pseudo-random in nature, and hence can be replicated on a backup system in a remote third location. If the two machines agree, there’s no problem – but if they disagree, then one of them has been manipulated.

Is online gambling a complete cure? Not at all – there are some very clever people out there who have devised ways of beating the system (I don’t want to encourage this sort of behavior, so I’m not going to tell you what they are). But if I could find out about them, anyone else who really wants to can do so as well.

Suffice it to say that the proprietors of online casinos know all about them, too, and have watchdog routines in place to curb them – or (in some cases) have decided that it is not worth trying to do so and amended the terms and conditions of their games to permit such behavior.

Know the game

As the long list of media at the start of this article attests, gambling is a common human activity. Being able to simulate it in your games will be necessary from time to time.

The environment is a factor that needs to be taken into account if you are to do this with any real success. There have been enough movies and TV shows that this is easy for a bricks-and-mortar casino, but it’s not so easy when you’re talking about online gaming. You might imagine that you know what it is like, but your imagination – in general – won’t capture the real essence of what a genuine game online is like, any more than a few friendly games will prepare you to participate in a genuine tournament.

Your emotional state, the fact that the stakes are real, the fact that someone else – with a different thought process to your own – has an equal impact on the outcome, these all combine to alter your mindset in the genuine event.

It follows that the best prep for simulating online poker in one of your games is to actually play a few competitive games at pokersoft sites, and take careful note of how the actual experience differs from your imaginary one.

A few words of warning: to get the psychological impact right, there need to be real stakes, and that means a real game. Expect to lose (unless you make an ongoing hobby of it) and budget accordingly – don’t throw good money after bad in an attempt to recoup a loss. That means considering any money spent on a gambling site to have been spent – if you’re lucky, you might get some or all or even more back from a source that just happens to be the same, but that should be considered a separate transaction.

Secondly, gambling addiction is real, and can be a real problem. Johnn raised this point a couple of weeks ago in his article, ‘Who Got Poker In My RPG?’ and I would like to second his advice on the subject. You may play online poker with real money to learn how to simulate it in your RPG, but DON’T play for real money in your RPG. In fact, think twice before playing for anything of value, even M&Ms. One GM I know once suggested that RPG gambling should be played for XP – I don’t agree, and even consider the suggestion dangerous.

Above all, have fun! There’s nothing like playing a game with no pressure to win – be it an RPG or online gambling. In fact, thinking of the stakes as “money already spent” should allow you to bring the same detachment to the online game as you do to an RPG session – so enjoy it.

Oh, and if you cheat, on your own head be it – it’s one thing to talk about villains and underhanded types doing so in a game; we neither support nor condone doing so in real life.


Hints, Metaphors, and Mindgames: Naming Adventures (Part 1)


This entry is part 6 of 11 in the series A Good Name Is Hard To Find


I use scenario/adventure titles all the time. Used correctly, they can put players into the correct frame of mind to react in the “right” way to the events in a scenario, conceal the identity of a villain or hide a plot twist until the big reveal, heighten the drama of a situation, and/or raise the expectations of the players. At the very least, they provide a referent ‘index’ to the events that occur in the course of the adventure. They can also add to the flavor of the campaign, reinforcing genre elements.

Many of the same methods and criteria that are used for naming campaigns are also relevant to naming adventures. Double or even triple meanings, exaggerations, heightened drama, metaphors and use of nouns, taking synopsis phrases out of context, and so on, are all valid tools to be used.

The heart of this article is a hundred-or-more (!) examples, with discussion of where the name came from, how it relates to the adventure, and – where appropriate – why it is an especially apt title. I’ve organized these by campaign, so that the campaign notes provided in the previous part of this series can be helpful in providing some context.

Some of these adventure titles will be discussed in more detail than others (mainly because it takes time to boil adventures down to a one-sentence synopsis, while I can cut-and-paste from more detailed summaries in next to no time)!

The Adventurer’s Club

The first three adventures in this campaign occurred before I started co-refereeing it. The adventure titles were one of the touches that I brought to the campaign…

  1. “Ghost Ship” – Synopsis: A series of ships have mysteriously vanished in Haiti. Others report sighting a ghost ship before narrowly escaping. The PCs are hired to solve the mystery. The insurance investigator who hires them is then killed by “Zombies” (drugged voodoo cultists). The PCs romp through Haiti having fun with Voodoo priests and Zombies in what appears to be a power struggle between Voodoo cults. The Ghost Ship turns out to be a fake by the Haitian military to enable them to gather the naval forces required to stage a coup, backed by rogue elements in the US Military. Commentary: The title got the PCs thinking along supernatural lines, an impression reinforced by the “Zombies”. What was basically a political thriller with lots of double-crossing and betrayal was given a lot of color and flavor by the supernatural overtones. The scenario ended with a slightly ambiguous note as the PCs hear of another ghost ship sighting after the fake had been exposed and put out of business.
  2. “St Michael & St George” – Synopsis: The “Order of St George & St Michael” is the British equivalent of the Spanish Inquisition, a darker, quieter, more secretive, and far more insidious organization than its continental counterpart. Most don’t even know of its existence, and they have no direct contact with the church authorities. Membership includes some of the elite of the British Peerage. The existence of the organization comes to light when a former member leaves his memoirs to the British Library on his deathbed in a fit of conscience. The Director of MI5 (and secretly a member of the Order) needs to retrieve them before his ‘extracurricular activities’ are discovered, but he can’t use his regular people without revealing his hand. So he calls in Paper (one of the PCs), who in turn involves the rest of the Party. Commentary: Not a title we were completely happy with. It does contrast the sanctity of the two named saints with the villainy of the organization acting in their name, and it again implies a strong supernatural element in what turned out to be a spy-thriller – again full of betrayal and subterfuge.
  3. “Teutonic Metaphysics” – Synopsis: This was a mission to recover “Teutonic Metaphysics” by J Michelet, London 1928 (1st edition), with marginal notes by Aleister Crowley. A Bishop with a Yugoslav name is about to be made Cardinal and placed in charge of the Vatican Library; the Adventurer’s Club is concerned about his fascist connections and rumors that he is an occultist. Commentary: Once again, a title that hints at the supernatural, but the players – wary from the last two scenarios, when similar hints turned out to be red herrings – immediately discounted that, an opinion reinforced by the revelation that the title refers to a manuscript. As a result, the PCs were considerably surprised when the curse of ill-fortune that comes with the manuscript (which they had dismissed) turned out to be true – everything that could possibly go wrong on the mission, did, starting as soon as they got their hands on the manuscript!
  4. “Heisenberg’s Nightmare” – Synopsis: Several Volumes are missing from the Club Library; an investigation is launched, and the library assistant (Honeydew, one of the players’ favorite NPCs) is stood down while the librarian (Mrs Hobbs) re-catalogs the entire library herself. One of the missing volumes contains information on an atomic weapon: “The Hydrogen Bomb”. The FBI is called in to investigate officially. Through contacts, the club learns that someone is offering the manuscript for sale in Denmark. When they get there, the PCs find that the document has been purchased by German Agents and is en route to a castle run by the SS. The PCs have to follow it into Nazi Germany and secure it from the castle – then get out of Germany with it. After recovering the manuscript, the PCs return to the club, where the FBI has established that Honeydew’s locker has been wiped clean of fingerprints – either someone is trying to interfere with the investigation or the theft was by Honeydew herself. Commentary: The “Hydrogen Bomb” mentioned turned out to be exactly what it sounded like, much to the surprise of the players, who thought we were being clever again – Heisenberg had gathered together a summary of all atomic knowledge then existent, from Szilard to Rutherford, only then realizing that he had spelt out exactly what research, theoretical breakthroughs, and engineering solutions were needed to construct the terrible weapon. This was the first time we had made Nazis the central villains of the plot. A classic “chase the Macguffin”, but the players never expected us to lead them through Berlin to a Schloss via SS Headquarters. Getting out of Germany afterwards was even more action-packed than getting in had been.
  5. “The Library Crimes” – Synopsis: Honeydew and her best friend (one of the PCs) go on the run against the full might of the police and FBI while the rest of the PCs set a trap to catch the real thief. Commentary: A seemingly straightforward title, it was only when they uncovered the identity of the real thief and learned that he was selling the stolen manuscripts to raise funds to buy more books for the very library he was stealing from that the double-meaning became apparent. At the end of the adventure, the FBI took over the operation of the club on the grounds of national security.
  6. “Flash & Richthoffen” – Synopsis: The FBI inform the PCs of intelligence regarding a Nazi super-dirigible missile – a sonic generator in the nose “loosens” the air in front of the dirigible, permitting it to travel at 300 mph, and of course it is a huge bag of very flammable hydrogen. The Government can’t be seen to be doing anything about it, because the US is technically neutral, but New York is within range of the super-weapon – so they want the PCs to undertake a covert mission into the German R&D facility at Rugen Island. Commentary: This title had so many layers of meaning that some of them almost got lost in the shuffle – it was a little too clever for our own good. One of the featured NPCs was Professor Zarkov (from Flash Gordon). The “Richthoffen” angle obviously referred to the super-dirigible. The whole title was a metaphor for “Shock and Awe”. And finally, the whole thing promised to be a heavily sci-fi / space opera style pulp adventure – so it really surprised the PCs when they discovered Templar Vampires living beneath the Nazi base!
  7. “Southern Comfort” – Synopsis: I’ve described this adventure twice before, in ‘There Is A Hold In Your Mind…’: Solving Mental Block and in Bam! Zap! Crunch! World Conventions In Pulp so I won’t repeat it here. Commentary: This is one of the better titles from the Pulp campaign. While it is obviously named after the world-famous bourbon, the title actually refers to the Bond-style villain who invited one of the PCs to a black-tie dinner aboard his riverboat to gloat (some genuine “southern hospitality”) – putting the PC in the perfect position to break his teammates free and completely louse up the villain’s plans. But it also describes the situation of the villain, a mastermind sitting in luxury while plotting the most foul of deeds.
  8. “Things Of Stone And Wood” – Synopsis: By far the biggest adventure that we’ve run in the Adventurer’s Club campaign, it took the PCs deep into China up the Yangtze River where they encountered a Chinese Emperor and Sorcerer who returned from the dead and reanimated an army of stone, who could only be defeated by using the Four Elements – but these were the Chinese elements, including wood, not the western ones with which the players (and probably our readers) are more familiar. Commentary: In addition to the obvious references to the Chinese elements and the stone army, there were a number of encounters along the way with other representations of the Chinese elements, all connected by association with the title – everything from corals through to pirates. The entire adventure took close to a year-and-a-half of real time, and greatly expanded the overall scope of the campaign.
  9. “Scenes From The Balcony” – Synopsis: A series of mini-adventures in which we looked at how each PC’s life had been changed by their association with the Adventurer’s Club and their growing fame. Tommy, the aviator, test-flew a radical new type of fighter; Father O’Malley, the priest, had a murder mystery and weird-science swamp monsters; Captain Ferguson, the Sea-Captain and Treasure Hunter, was hired by the Navy to supervise the salvage of an experimental submarine; and Doctor Hawke, the Medic, got involved in a plot to use Native Americans as involuntary subjects of medical experimentation by an immoral pharmaceuticals company, which ended up reuniting the PCs as each of their solo missions came to a conclusion and the Doctor found he needed help to deal with the larger problem he faced. Commentary: The title was determined as soon as we came up with the concept of the adventure, because it described perfectly the situation in which the other players were onlookers (and freely able to kibitz) during each other’s adventures, which were carefully intertwined. Some of the internal timing of the adventures went a bit wonky, but overall it worked fine – as a one-off. For once, the adventure was exactly what it said on the tin.
  10. “The Dream Factory” – Synopsis: The team are hired to troubleshoot a string of mysterious accidents on the Hollywood production of a movie being filmed about the Adventurer’s Club. One of those incidents leads to the discovery of a child-smuggling ring and a madman practicing human sacrifice in the name of the Aztec God, Tezcatlipoca. Commentary: This is the most recently-completed adventure at the time of writing. Originally an idea by Blair entitled “Hollywood Hijinx”, it evolved into a dark and disturbing tale of insanity, evil, and the tragic exploitation of children and their dreams. The title is an obvious reference to Hollywood, but it has a second and darker meaning in the way the villain is exploiting the kidnapped children as slave labor in order to fulfill his deranged dreams.
  11. “The Legacy Of Vigo” – Synopsis: One of the PCs inherits a castle, and a noble title – in Transylvania. The estate turns out to be beset with problems – everything from Gypsy squatters to ancient curses to ghosts to crazy weather to… well, that would be telling. Commentary: “Vigo” is Vigo Moldova – the bad guy from Ghostbusters II. The estate in question is that of the legendary Vlad Dracul, aka Vlad The Impaler, the model for the Dracula legends. Since the whole adventure is about what Tommy (the PC in question) inherits from Vigo, whose life he apparently saved in WWI without knowing it, the meaning of the title seems clear. There may or may not be more to that story, but I can’t reveal it here, since the adventure is still ongoing…

Fumanor: The Last Deity

Unfortunately, my adventure notes from this campaign have all been filed away “somewhere safe”. I’m sure they will turn up eventually, but I wasn’t able to locate them in time for this article.

Fumanor: Seeds Of Empire

With this campaign, I actually went so far as to provide a list of the adventure titles (at least for the first half of the campaign) to the players, and have kept it more-or-less up to date with short synopses of the adventure contents. The campaign is divided up into six Phases:

  • Phase I: The Golden Empire – the Kingdoms of Fumanor face an invasion by the much stronger Golden Empire and select an elite force of unknown youngsters to find a solution.
  • Phase II: The Caverns Of Zhin Tarn – a series of extra-dimensional explorations which reveal the solutions to many mysteries.
  • Phase III: Imperial Sunset – either the PCs defeat the Golden Empire, or the Golden Empire conquers the Kingdoms of Fumanor. The title is appropriate either way. This is the current phase of the campaign.
  • Phase IV: A Minor Matter Of Elves – In the course of their adventures in the Golden Empire, the PCs have learned what they need to know to begin a campaign to overthrow Lolth, who conquered the Elves in the first Fumanor Campaign.
  • Phase V: Shadow-plays – In the first campaign, the Drow were liberated from the Worship of Lolth, becoming enlightened citizens of the Kingdoms of Fumanor – it says so on the tin lid. Reality is rarely so clear-cut, and attempts to release the Elves from the domination of Lolth are sabotaged by ambitious Drow, complicating the Elvish Civil War just as victory seemed to be within the grasp of the PCs.
  • Phase VI: Divine Vengeance – Either the PCs succeed in overthrowing Lolth in the name of Corellon, or they fail, or something in between. No matter what the outcome, this title is appropriate. Will the party reunify the elvish population? And how will the long-lost Aquatic Elves play into events?
  1. “Distant Rumbles” – Synopsis: Tajik (Orc PC) is enlisted to investigate and find a way to counter the threat of potential Invaders from somewhere beyond. Together with Ziorbe (Drow NPC), Eubani (Elf PC), and Arron (Ogre NPC), he forms Tajik’s Misfits… Commentary: This adventure title dealt with four things: the internal socio-political relationships between elements of Orcish society, the relationship between the Orcish “Nation” and the Kingdoms of Fumanor, disquieting rumors from the world beyond the Orc Tribes’ borders, and early reports of the trouble that an “elite force of adventurers” was being assembled to investigate. The adventure title applies with equal validity to each of these subjects.
  2. “Devastation Scene” – Synopsis: Discovering that the enemy is mighty in lost arts and undead soldiers, the Party take advantage of an opportunity to (magically) slip behind enemy lines – straight to their capital city. Commentary: Possibly the weakest of the adventure titles from this campaign, it has no depths of meaning and is completely literal in interpretation.
  3. “Dead Hands” – Synopsis: The true scale of the problem becomes apparent when it is discovered that the Golden Empire is an empire built on the services of undead menial labor. Commentary: This campaign was about getting the PCs in over their heads a foot or so at a time. Every time they thought they had a handle on how serious the situation was, another implication or complication was revealed to them that showed the enemy’s strength to be that much greater than previously suspected. The first scenario reported an invading army; in the second, it was revealed that most of the army are undead, and that the living commanders of the army have access to more powerful magic than anyone has ever heard of; and in this part, the economic implications of having virtually all work done by unsleeping, uneating undead are revealed – as is the fact that these undead retain the mind and personality of the deceased. I also started acquainting the PCs with the social and religious ramifications. The title was a subtle reference to all these concepts.
  4. “Rights & Rites” – Synopsis: Deciding that this reliance on Undead labor is the Empire’s biggest weakness as well as the source of much of its strength, the team begin to focus their investigation on the specifics. Commentary: The synopsis was obviously written after the fact. The actual plot was simply for the PCs to stir around and come to grips with the society of the enemy, looking for a weak point. In particular, this session focused on the arcane capabilities of the Golden Empire, on their Theology, and on the secondary effects on their society of the presence of so many undead. Notably, they learned that the living possess every luxury and lead lives of sybaritic excess – but that they are expected to pay for this luxury with service to the state as an undead. Again, a fairly literal adventure title, a last-minute substitution for “Rites & Wrongs”, the working title of the adventure, which was intended to also go into the criminal code and law-enforcement practices within the Golden Empire – content which was pulled since it would have needed more explanation than simple observation.
  5. “Captive Audience” – Synopsis: The party are captured by a Mummified Temple Guard (Chrin, guest PC), whom they persuade to join them. Commentary: An adventure that didn’t run entirely according to plan. The PCs’ plan was to invade the high temple of the capital in search of Holy Books and spiritual writings that would help them understand how the Golden Empire was able to raise undead so easily and effortlessly – and, more importantly, how to cut the puppet strings, since the Golden Empire was completely dependent on their undead workers and armies. Instead, they were captured by a Mummified Temple Guard, a zealot and minor priest, who was intended to give them more answers without lots of exposition. They were then supposed to defeat him and escape. Instead, they used some specious argument and logic to persuade him to join them – the guest player acting completely out of character for a young idealist and zealot. Fortunately, I was able to solve the characterization inconsistencies later in the campaign. The title of the adventure actually acquired a delicious double-meaning through the PCs’ actions, because, while the PCs were Chrin’s ‘Captive Audience’, he was also theirs.
  6. “Troubled Waters” – Synopsis: Escaping from the Capital, the party trigger a massive pursuit. In dodging the hunters, they find themselves in the custody of the only group ever to fight the Golden Empire to a standstill – Aquatic Elves – and charged with treaty violations. They escape after being found guilty, and find that the pursuit has passed them by – for now. Commentary: This adventure served multiple purposes. It extended the mythology of the campaign, by revealing the survival of the aquatic elves (supposedly wiped out by the Drow long ago), laying the foundation for a reconciliation between the Elf and Drow party members. It gave the PCs an outside perspective on the Golden Empire, a third point of view, and revealed more of the history. It laid the foundations for the fourth phase of the Campaign. And the PCs had to learn to work with their Undead party member, and vice versa, with the latter lamenting his hasty decision to join them. So the “Waters” were both literal and figurative, referring to the relationships within the party and their slow transformation from rugged individualists to a team. With this adventure, Chrin reverted to being an NPC; his brief span as a PC had been an experiment that didn’t work and almost derailed the entire campaign. Chalk up another lesson learned.
  7. “Sage Advice” – Synopsis: Chrin leads the Misfits to a cleric he trusts for advice. That cleric points Chrin at the outlands, the only place where Priests sufficiently heretical to listen and sufficiently devout not to be slain out of hand can be found, and gives them a name. The Misfits discover that the Empire are back, hot on their trail, because they are able to track Chrin. Commentary: Another fairly straightforward title at first glance, but this adventure was full of people – PCs, Non-PC party members, and other NPCs – all offering each other good advice, or trying to absorb the good advice they had received.
  8. “Digging A Hole” – Synopsis: The pursuing warriors of the Golden Empire chase the Misfits into a region with a number of caves. In the largest, the party find a door that can only be detected by Elven sight – and the Golden Empire numbers no elves in its population. The locks are clearly highly magical, giving rise to the hope that this may give the Misfits one or more weapons or allies against the Empire. They resolve to escape their pursuers by exploring the Realms concealed within the Caverns of Zhin Tarn. Commentary: This is the first adventure in phase II of the campaign. The title obviously refers to the idea of “digging a hole and pulling it in after you”, “going to ground”, and all sorts of other metaphors for hiding – but it actually has a triple meaning. The second meaning refers to these Caverns, and their treasures, being hidden under the noses of the Golden Empire simply because they are located in some unattractive real estate that has only ever been seen by Undead – exposing another of the vulnerabilities of the Golden Empire to the PCs. And the final, most deeply hidden, layer of meaning is that of Chrin’s true allegiances as he slowly works his way into the confidence of the team, encouraging them to dig themselves a hole that they can’t get themselves out of. This last won’t be revealed until adventure #14!
  9. “Air” – Synopsis: The party explore a realm based on the Elemental Plane of Air. They ally with a Verdonne from another Plane who has been trapped here for years and recover the first of six keys needed to open the door to the seventh cavern. They discover that time rates are distorted within the Cavern Realms – and that while they were within, their pursuers have caught up and are camped right outside. Commentary: A seemingly straightforward title, but with hidden overtones, as much of the events within the plane can be described using the qualities of air – transparency, wind, fog, cloud, insubstantiality – as metaphors. This was perhaps a little too subtle – I don’t think any of the players picked up on it at the time – but, at the same time, it gave me a focal point for all my thinking, inspiration and design, so it was worthwhile. Verde is a new PC, given life by the player who had temporarily controlled Chrin (and had so much fun doing so that he wanted to join the campaign full-time). The Verdonne are closer to the Ents as depicted in the Lord Of The Rings movie trilogy than to the traditional Ents of D&D or the book, quite literally “the shepherds of trees”.
  10. “Earth” – Synopsis: The Misfits sneak across the wastelands between the Cave Realms without alerting the Imperial Undead to their location and enter the Cavern Realm of Earth, finding it far more densely packed than Air was. Thanks to their latest recruit, Verde, they navigate the rivers of dust and find their way to a sentient population – Dwarvlings. After a number of misadventures and a little cultural exploration, the Misfits are joined by Leif, Prince of one of the Dwarvling Clans (New PC). They discover that they are cut off from the Gods while within the Cavern Realms. They eventually find and retrieve the key, rescuing a time-lost Paladin (Julia Sureblade, PC) from a past age in the process. Commentary: This adventure was rewritten into a standalone adventure and published here at Campaign Mastery a few years back – The Flói Af Loft & The Ryk Bolti – so I’m not going to rehash it too much here. The same seemingly obvious meaning to the title, and the same style of hidden metaphors. This time the players seemed to get them. It wasn’t intended at the time of design that the new PCs would join the campaign, but the opportunity was certainly there and they have certainly made their mark in the campaign. A critical point to note is that the PCs learn that the Pseudo-realms are slowly breaking down, and have the potential for devastating the entire prime material plane when they do.
  11. “Water” – Synopsis: Find the third key. The gods find the party. Commentary: Doesn’t sound like much, does it? But this was one of the most imaginative settings I’ve ever created, a totally alien environment full of strange and original creatures and a full biology. It was actually created as “an environment” more than a “location”, on the theory that water is only bounded by its interfaces with other elements – earth and air, especially. The metaphors for water qualities (especially “slippery”) were just as prevalent, adding the usual additional layer of meaning to the title. More importantly, this scenario established the relationships between the new PCs and the other party members, and started arousing suspicion towards Chrin. Midway through this adventure, Leif became an NPC.
  12. “Fire” – Synopsis: The Fire Pseudo-realm is the least terrestrial thus far encountered, being shaped like a vast donut with variable internal gravity and pressure. It is a Stratified realm with 4 layers. Temperature differentials rise far more steeply than normal, and heat does not conduct as well as it should. The party find the fourth key, and the Chaos Powers find the party. Commentary: Once again, a plot synopsis that leaves out more than it includes. Heat, pressure, and energy were both the central properties and the metaphors for the action. Chrin is able to seemingly allay the suspicions and raise prospects for an eventual community of interest between the Kingdoms of Fumanor and the Golden Empire, suggesting a possible diplomatic solution to the main plot, but the PCs have become suspicious of Chrin once too often. The Chaos Powers are the central opposing force to the Gods in the Fumanor campaign’s mythology. At the start of this adventure, Verde became an NPC.
  13. “Negative” – Synopsis: The PCs find the fifth key and the mind flayer who created the ecologies and inhabitants of the Caverns. They learn that he killed his research partner – who constructed the “Experimental Pseudo-realms” – when the latter was about to carry the experiment to its intended conclusion, because he feared that the ecologies he had created would be destroyed in the process. Commentary: Unfortunately, I didn’t have the time to write this up as a full adventure, only enough to create some interesting and deadly new opponents (“Mortus Elementals”); for most of it I ran off-the-cuff, directly from my conceptual notes. I did have time to create a “map” of the realm, which I have used to illustrate this article, but most of the details have been lost to posterity. I do recall that much of this setting was about negative emotions, and that the players had been deliberately holding onto Chrin as a team member for this environment (completely forgetting that in this campaign, Mummies had been established as “Positive Energy” undead, and that he was going to be even more vulnerable to the forces and opposition here than the rest of them).
  14. “Positive” – Synopsis: The PCs find the final key and make the choice of whether or not to deep-six their undead Guide. Commentary: As the commentary to the previous adventure makes clear, the PCs had gotten their wires crossed really badly. Instead of the Positive Energy demi-plane weakening their undead “ally”, he was made stronger than ever. Fortunately, they figured out their mistake before making any irrevocable missteps – and, much to their surprise, Chrin did not take advantage of his heightened abilities to betray them. Perhaps they had misjudged him – again – after all? Any such deliberations were ended prematurely as the boundaries between the pseudo-planes began catastrophic breakdown, Mortus Elementals tearing a hole into the positive energy demiplane.
  15. “The Laboratory Of Tenga Mort” – Synopsis: The PCs use the six keys to create a new world from the cavern realms under the direction of the Gods, who plan to use it as a hidden fortress in their ongoing war with the Chaos Powers. But all the plans are knocked into a cocked hat when Chrin betrays the party and the (PC) Cleric is possessed by the ghost of the dead Mind Flayer whose experiment they are completing, delaying the final act of creation long enough for the Chaos Powers to contaminate the new Material Plane. Verde is trapped in the forming reality, using his powers over fate and luck to protect Leif’s people (and the other inhabitants of the demiplanes – he can’t be selective) during the merging process. Commentary: This adventure could have been subtitled “relationships in a petri dish” and still been consistent with the theme of the main plot. The PCs learn more than anyone in their world has ever known about the origins and formation of the Prime Material Plane, the origins of dungeons and the weirder beasties that inhabit their world, and how the primal forces interplay. They also discover that Chrin has been deliberately leading them away from the help that they need, and that the only thing that they had convinced him of (refer Adventure #5) is that the PCs were potentially dangerous and subversive and should be removed from the inhabited regions of the Empire as quickly as possible. In effect, he has wasted over a month of their time – but, in the process, they have recruited new allies and learned to work together far more effectively. The entire plotline is about the ethics of experimentation, the assumption of responsibility, and a collision of faiths.
  16. “Columbus Verde” – Synopsis: The PCs enter the new world in search of their missing teammate. They discover that he was successful in his attempts and that after a period of anarchy, a new ecology is forming. When they discover Leif’s people, they learn that many years have passed since the Merging Of Realities and become embroiled in a murder mystery. Leif realizes that if he leaves again, he will never be able to return without imperiling his people – but honor compels him to remain with the team. Commentary: This adventure changed a lot between initial conception (when Verde was simply going to insist on the team exploring the new world) and what ultimately transpired. The title lost much of its meaning in the process, though its secondary meaning – of “exploring Verde” – remained valid, just cryptic. This was very much a “price of victory” piece, wrapping up loose plot threads so that the campaign could move forward into Phase III, and a title that reflected that would probably have been more appropriate in the end. Heck, even “The Rule Of Lore” would have been an improvement, since the myths and legends surrounding the party were fundamental to the reshaping of the Society within the new world. Oh, well.
  17. “Broken Bonds & Lost Worlds” – Synopsis: The Misfits resume their travels through the Golden Empire, learning that Chrin was taking the long way round, seemingly intentionally. Meanwhile, Leif is coming to terms with the destruction of the world he knew, and his first encounters with the outside world are not quite what he had come to expect. Commentary: The “bonds” are bonds of trust, and the bond of mission objective to plan of action. Everyone needs to figure out where the team goes from here, but they can’t wait around to do it in the Caverns – Imperial Forces have been drawn back to the area around the gates by the massive discharge of magic that created the New World, which leaves them no choice but to plunge right in. They also learn that the Lich template makes Beholders really nasty. In the course of the adventure, Leif reconnects with the party, the party discover that the plan appears to still be sound, and everything gets itself back on track. This adventure marks the start of Phase III of the campaign.
  18. “The Garden of Shimono” – Synopsis: After the trouble they had in town, the Misfits head cross-country. But trouble is never far away when you’re on the run! They make good time until they enter an Estate that is besieged by Demons every night. Commentary: Both the previous adventure and this one feature the everyday life of ordinary citizens of the Golden Empire. I felt it was important to highlight the reality that some problems were universal to both societies. In rescuing the owner of the estate (the sole survivor) from his possessed daughter, the PCs earned a genuine ally in their struggles. The title conveys a slightly Asian flavor, an important element in the Golden Empire that has been occasionally lacking in descriptions so far. In particular, the estate resembled an Asian garden, and that association is deliberately reflected in the title of the adventure.
  19. “On A Larger Scale” – Synopsis: The Misfits, with the help of their new ally, have to prevent an alliance between the Elves and the Golden Empire. The Gods warn them of an instinctive belief that such an alliance must be prevented at any cost. As the negotiations continue, the potential implications and outcomes become more and more dangerous and disturbing, and the party begins to discover the reasons for the Gods’ misgivings. Commentary: This adventure suffers from the most awkward of circumstances – a title whose significance was forgotten when the time came to write the adventure scenario, and which is therefore irrelevant to the actual adventure that took place. While I have since remembered what that significance was to be, that irrelevance means there is no point in discussing the particulars; instead, I would rather focus on the potential pitfall of being too clever for my own good, in hopes that others can learn from the mistake. For heaven’s sake, if you think of a subtle and especially clever title for an adventure, write down that meaning before you forget what it is!
  20. “Specter Of Defeat” – Not yet played, so I have to keep this secret – sorry! That said, the title suggests that things will get even more critical. When the PCs began their investigation, they estimated that they had between one and three months before the invasion began. It’s now been about eight weeks, so it doesn’t take a great deal of insight to guess what this scenario is all about!
  21. “The Last Samurai” – Not yet played.
  22. “A Summons Of Strife” – Not yet played.
  23. “The Hidden” – Not yet played.
  24. “Knocking On Heaven’s Door” – Not yet played.
  25. “Wrack And Ruin” – Not yet played.
  26. “Imperium Redux” – Not yet played.
  27. “Any Landing You Walk Away From…” – Not yet played. The final adventure in Phase III.

No adventure titles have yet been assigned for phases IV to VI. Note that the big finish to this campaign actually takes place in the “Fumanor: One Faith” campaign, listed below.

Fumanor: One Faith

The One Faith campaign – at first glance – has a far more linear structure. But look a little closer, and you will find that the campaign is about to split into two, one character going one way while the other two have a series of isolated adventures unrelated to the main plot. The following adventures were part of the first phase of the campaign.

  1. “Surfaceworld” – Synopsis: The Drow Gallas leaves the tunnels in which he was raised and makes his way to Fort Sharpfang (capital of the Outer Kingdom), where he is recruited by the Inquisition. Commentary: This adventure was all about Gallas experiencing the world above the tunnels for the first time, with all its quirks. He is also surprised to discover that some Elven traits have emerged in his bloodline after his people’s recent experiences in the Underdark when the Matriarchy was overthrown, their deception (that Lolth still ruled) having been exposed in the previous Fumanor campaign.
  2. “The Silver Palms” – Synopsis: Gallas receives his first assignment. He joins the Silver Palms (a group of adventurers based on members of the Knights Of The Dinner Table and Black Hands), and gets to know them while the party is en route to retrieve The Red Masque, a legendary artifact of incalculable value. Along the way, they recruit a Bard named Sebastian (PC) and one of the members of the Silver Palms becomes host to a Chaos Power. Commentary: The name of the group is both a play on “The Black Hands” and on “Sweaty Palms”, a condition that the Silver Palms reputedly suffer from when confronted by large sums of gold.
  3. “The Grave Of The Prince Of Lies” – Synopsis: Based on the excellent adventure from 0one Games, free from DrivethruRPG. It turns out that the Silver Palms don’t actually know where the Red Masque is, just where to find a clue to where it might be – in the icy tomb of an Undead Dwarven Prince who was seduced, then betrayed, by a Drow Priestess. Commentary: The module’s backstory was incorporated (with some modification) directly into the campaign history because it fitted almost perfectly. That is acknowledged within the campaign by preserving the title of the original module.
  4. “Reap The Whirlwind” – Synopsis: While following the trail of breadcrumbs to the next clue to the location of the Red Masque, Kardles (Dwarven Cleric, NPC, one of the Silver Palms) is contacted by the image of Dis The Destroyer, a Chaos Power so evil that even the other Chaos Powers locked him away, who offers him the deal of a lifetime. Kardles is “seduced by the Dark Side”, thinking he can act as a spy against Dis and betray him later – or stick with him, if it looks like Dis will win. The party’s travels take them through a village where the connections between some of the current problems become clearer – the Church’s divisions have been systematically weakening the economy of Fumanor, which has forced some of the current unpopular decisions, which is what is seeding the rebellion. The Golden Empire, in internal terms, is irrelevant; if it weren’t them, it would be the Goblins or the Elves or whatever; some threat would reveal the underlying fragility of the economy. As the team approach the town of Khom, Kardles’ deal with Dis is revealed – though he doesn’t know it – as he seeks salvation; and the power that Dis has over him is also made clear to the rest of the party. Commentary: The title derives from the proverbial phrase that suggests that those responsible for an event will experience the consequences of their actions, a form of natural justice or Karma. In the context of the adventure, the relationship of the title to the events should be fairly obvious, but as it happens there are multiple such events within the adventure, not just the encounter between Dis and Kardles. Each character (both PC and NPC) experienced something that would qualify, as did a couple of NPC priests that were encountered en route – but the encounter with Dis was the one of most significance to the campaign overall.
  5. “Khom Back Again” – Synopsis: The party reach the town of Khom only to discover that it is surrounded by a bubble of disrupted time, in which the past and the future collide. They confront Dinosaurs, and Cultists, and past victims of the cultists, and finally a 21st-century druid and his pack of cybernetic hounds (the ultimate ecoterrorist), before discovering that the entrance to the hiding place of the Red Masque is only accessible from one brief interval of the past. They also learn that the temporal disruptions were part of the imprisonment of Dis, but his attempts to break free had spread the temporal disruption to the surrounding districts. Commentary: This adventure’s title is a misspelling of the name of a hit pop song from the 70s by an Australian band, “Daddy Cool”, and it obviously relates to the temporal shifts. These got quite hairy for a while, as the PCs would be focused on one threat, only for a new threat to appear from a different era in a different direction and get a new surprise round against them. Dis’ imprisonment came with a twist on Heisenberg’s uncertainty principle – Dis could localize his awareness in space (but would have no idea what time frame he was in) or he could localize his awareness in a particular time, but only by not knowing where he would find himself in space. All his dealings with Kardles had been from the distant past, reaching into the future in an effort to bootstrap himself to freedom.
  6. “The Burning Sage’s Demesne” – Synopsis: Based on another free module from DrivethruRPG, this one by S.T.Cooley Publishing, The Burning Sage’s Demesne required a bit more work to integrate with the campaign history and with the situation at Khom, but it was ultimately well worth the effort. The primary change was in making Dis the power ultimately responsible for the background events that led to the creation of the dungeon. At the last possible moment, Kardles saved himself from damnation and rejected Dis. Commentary: The basic story from this module – a tale of love, betrayal, revenge, and grief – was changed almost beyond recognition during the adaptation process, but the actual module barely changed at all. Once again, the title pays homage to the source material.
  7. “The Red Masque” – Synopsis: The PCs disband the Silver Palms and take the Red Masque through Goblin Territory to the province of Viscount Asher under an assumed identity. They then use its recovery to ingratiate that cover identity with Brother Thaloran, a priest of considerable standing within the region who is amongst the most outspoken critics of the harsh taxation. Viscount Asher, with the assistance of the PCs, will then determine who amongst the priesthood are organizing the bandits of the region and gather evidence against them. They then have to blackmail those church members – with the threat of exposure of their activities to the Viscount (a mandatory and immediate Death Sentence) – into travelling with Gallas to Ortin, the capital city of the Kingdom Of Fumanor, and there to confess and submit – quietly – to the judgment and punishment of the Archprelate. Commentary: Throughout their journey to Viscount Asher, Dis was attempting to tempt the PCs, throwing encounters in their way (and even into their future paths) in hopes of persuading them that they needed the power of the Red Masque. (They still haven’t figured that out completely, but it’s now too late for them to do anything about it, so there’s no harm in letting that particular cat out of that particular bag.) Somehow, using the Red Masque will set Dis free.
  8. “The Brown Heart” – Synopsis: Two of the PCs from the original Fumanor campaign now occupy positions of great authority within the Kingdoms (as NPCs). One suspects that a third has been murdered for possession of a powerful artifact that the team had recovered, so she ‘borrows’ his ace investigator (Gallas) to investigate. Gallas and Sebastian discover that many of the problems besetting the Kingdoms of Fumanor are being caused by the Druids, who are nowhere near as unified in belief as they had assumed in the past (refer Flavors Of Neutral – Focusing On Alignment, Part 4 of 5). The current leader of the Druids was the head of an unstable coalition of forces, barely holding his leadership together. In the course of their mission, they met and befriended an Ambassador heading for the capital of the outer Kingdom (Arazal, a new PC). In return for his assistance, they agreed to escort him. Between them, they were able to establish that BriteOak is actually a non-supporter of the policies of Ceriseth (the ex-PC, and a Moderate) but is doing his best to follow those policies out of loyalty to his mentor. They prove that one of the more radical factions had betrayed and killed Ceriseth and arrange for the leader of the rebel faction to reveal his actions, temporarily preventing a further escalation of trouble from the Druids. Commentary: The title of this adventure has multiple layers of meaning. Not only does it reflect the “eco-credentials” of the Druids, but it refers to the “heart” (moral centre) of BriteOak (who is, essentially, an animated half-tree). Thirdly, it describes the autumnal setting of BriteOak’s Grove, the central site of the action of the Adventure; and, finally, a “brown heart” is somewhere in between a “Good heart” and a “Black heart”, reflective of the Neutrality of the Druids – neither good nor bad, but somewhere (actually, many somewheres) in between.
  9. “Monastery” – Synopsis: Arazal’s mission is an offer to negotiate an alliance between the Kingdoms and the Jal-Pur, a nomadic desert people gifted with high magical abilities that are only vaguely understood by the Kingdoms. The demand includes a poetic but vivid description of the person who the Jal-Pur insist on accompanying the diplomat, a description that matches Gallas to a “T”. Eager to accept, the Kingdom gives Gallas and Sebastian some new orders – ensure that the negotiations succeed, at any cost. Their first stop: the Monastery which is the most remote settlement within the Kingdom and closest to the Jal-Pur to collect the diplomat who will rendezvous with them. During the trip, the PCs encounter strange and wondrous events – corridors of wild magic encapsulating zones of dead magic – without understanding the cause. Commentary: There are times when we are all alone with what’s in our heads, and that’s the real meaning of the adventure title: each of the PCs encounters something on the trip that is uniquely personal to their past, present, and future. Each faces a choice of some kind that will define, or redefine, their characters in subtle but long-ranging respects – and has to make that choice alone, with no help from the other PCs. They also meet an Ogre, Arron, who is being sent to join a young Orc Priest named Tajik in a bold attempt to gather direct intelligence on the impending invasion by the Golden Empire.
  10. “The Sands Of Blood” – Synopsis: The PCs meet the Ambassador and find him to be a most disagreeable, even blunt and acerbic, character. But as they journey with him, and he interrogates Arazal about the Jal-Pur, they begin to realize that the Jal-Pur prize honesty and forthrightness above all else, and that “diplomatic language” was one of the barriers that prevented and strained alliances with them in the past. The Ambassador is probably the perfect representative to the Jal-Pur. At the negotiations, all goes well despite the open opposition of some Jal-Pur tribes; a mutually acceptable list of outstanding issues to be resolved is agreed upon. Just as the negotiations are about to begin in earnest, both the Ambassador and the Matriarch of the Jal-Pur are killed under suspicious circumstances. Arazal is appointed the representative in negotiations and first ambassador to the Kingdoms (despite his being the wrong gender) and Gallas is ordered to represent the Kingdoms, leaving Sebastian as the lead investigator into the murders. It turns out ultimately that the killings were carried out by the Golden Empire in an attempt to sabotage the negotiations. A treaty is eventually agreed between the two and the party sets out to return to the Kingdoms.
    Commentary: A straightforward adventure title because the situation carried enough drama without need for subtle overtones.
  11. “Goblin, Goblin!” – Not yet played. This will be the final adventure for the unified party.

Strand 1: Gallas (plus 2 temporary PCs)

  1. “Shoot The Messenger” – Not yet played.
  2. ~Adventure is not part of this strand~
  3. “Let Not The Left Hand Know” – Not yet played.
  4. ~Adventure is not part of this strand~
  5. “The Subversion Of Thom Elias” – Not yet played.
  6. “What Price, Freedom?” – Not yet played. Temporarily reunites Gallas with Sebastian and Arazal.
  7. “Everybody’s Human” – Not yet played.
  8. ~Adventure is not part of this strand~
  9. “Heretics To The Left Of Me, Heretics To The Right” – Not yet played.
  10. “Jailbreak” – Not yet played.
  11. ~Adventure is not part of this strand~
  12. “It’s Only Politics” – Not yet played.
  13. “Death Of An Icon” – Not yet played. Reunites Gallas with Sebastian and Arazal.
  14. “Broken Chains” – Not yet played. Begins the buildup to the campaign Climax.
  15. “Grand Theft” – Not yet played. The big finish. Teams Gallas, Sebastian, and Arazal with the members of Tajik’s Misfits (refer “Fumanor: The Seeds Of Empire” above) in final battle with Lolth and Dis.

Strand 2: Sebastian & Arazal (plus 1 temporary PC)

  1. ~Adventure is not part of this strand~
  2. “Meanwhile…” – Not yet played. Teaser: An old friend finds trouble when Sebastian comes to visit.
  3. ~Adventure is not part of this strand~
  4. “Sebastian’s Groupie” – Not yet played.
  5. ~Adventure is not part of this strand~
  6. “What Price, Freedom?” – Refer Strand 1 entry, above.
  7. ~Adventure is not part of this strand~
  8. “The Higher Standard” – Not yet played.
  9. ~Adventure is not part of this strand~
  10. ~Adventure is not part of this strand~
  11. “The Lost Chord” – Not yet played. The final adventure in Strand 2; Sebastian & Arazal will reunite with Gallas for adventure 22.

Out of time and still with several campaigns and about a hundred more adventure titles to analyze. So, I guess there’ll be a part two to come in a couple more weeks. I’ve adjusted the title of this post accordingly.


The Age Of An Elf: Demographics of the long-lived


I’m taking a break from the ongoing Earth-Regency Alternate History series this week (mostly because research has been taking more time than I’ve had available). Instead, the following is based on an email exchange between one of my players and myself, raising some serious questions about the population dynamics of longer-lived species and aging in RPGs…

One of my players asked me today about how to determine the age of his new character, an elf who has entered the game in question in an age category of “Venerable”. But the game in question – I won’t name the rules system – has no rules for character aging, and doesn’t even nominate standard lifespans for different races. He proposed, “would it be appropriate to use the 3.5 tables? If so, then my elf would be at least 350 years of age (more probably 450+) with a maximum age of *rolls 4 percentile dice* 606 years, according to 3.5 PHB ageing for elves.”

This was the first time in several years that I’d looked at the assumptions that underlie “standard aging” tables, and I’ve learned quite a lot since the last time. As a result, my thought process led me down some interesting paths, paths which showed how significant a “mere” +50%-or-so lifespan was – never mind the 400-500% suggested by the 3.5 PHB.

Demographics are not a flat line

My first problem is now, and always has been, with the notion of a flat percentage being used to determine where in a race’s lifespan a particular character’s age falls. This makes it just as probable that a character will have a high age as it is that they will have a low age – and it doesn’t take much examination of demographics to realise that the real world simply doesn’t work that way.

Demographics are not a bell curve

The next most-common approach that I’ve seen is the rolling of multiple dice to determine age. This makes a character’s age more likely to be at or around the mathematical mean, offset by any adjustment made to ensure a minimum age that’s suitable for adventuring. This makes character ages too old, on average, and – once again – looks nothing like a real demographic curve.

The problems

Either of these approaches can yield what seem to be reasonable character ages in the case of individuals; it is only when you start looking at larger populations that the answers stop making sense. The population aging approach you choose brings with it implications for knowledge of the past, acquisition of skills, birth and death rates, relative population levels, and the resulting social mechanisms.

Knowledge of the past

If your character is 500 years old, you should expect them to have a fair idea of what was going on 400 years ago, and about events between then and now. This is a cross for the GM to bear that he really doesn’t need; it would, in general, be better to have events of more than a generation ago being lost in the mists of time and the pages of history. Why? Because then the GM can bring out historical events as he needs them for maximum story gain, rather than having to prepare the history in advance.

It doesn’t matter so much in Fantasy novels, where the author can introduce an Elven character only when it suits the plotline; an Elven PC will be pestering the GM for detailed histories every time the past becomes relevant to a plotline. It adds to the Prep Burden of the GM, sometimes massively, and can totally erase a lot of the atmosphere and mystery of the past.

Acquisition Of Skills

As soon as you have a race living four or five times as long as humans, the GM has to start fudging questions concerning the acquisition of skills – or they will end up with Ubermensch who don’t need the PCs. If it takes 20 years to master a craft or skill, for example, most humans will do so at around the age of 30, and – given probable lifespans – be able to master only one or perhaps two in a lifetime (50-60 years). Your typical elf, if they have 500-year lifespans, even if youth and childhood are also increased proportionately, will (in comparison) have time to master TEN to TWENTY, even without any advantages from genetic/racial predisposition. And that also ignores any compounding effects – even though, in reality, studying one subject often makes it easier to learn a related subject. That doesn’t matter so much to humans, where there’s only time for the mastery of two (perhaps 3 or 4 in exceptional cases) skills – but when you start talking about 10-20 skills, this effect goes from negligible to seriously important.

To combat it, and prevent elves from coming to dominate society, you have to start making assumptions about how easily long-lived races learn new things, about how ambitious and motivated they are, and generally adding in whole reams of additional racial profile – much of which doesn’t marry up with other source material like official adventure modules.

Heck, consider the number of diplomatic and trade contacts an even-moderately accomplished Elven trader could amass in hundreds of years, the number of secrets and confidences that one could accumulate!

Four hundred years ago, it was 1612 – how much has occurred since then? How many mysteries have arisen because every eyewitness died before their stories could be documented?

Birth and Death Rates & Relative Population Levels

This is something that I alluded to not long ago in Sugar, Spice, and a touch of Rhubarb: That’s What Little Names Are Made Of, where I was discussing the effects of birth and death rates on population levels and how to stop long-lived races from overwhelming other races from sheer population level, and the implications for character names.

In a nutshell, the more long-lived the race, the lower the population level needs to be simply to maintain population parity with a human society. I’ll return to this subject as the discussion proceeds.

The Human Analogue In A Fantasy Campaign

Consider humans – get their aging right and then it should be possible to simply scale the answers to get elves or any other long-lived race.

Historically, in the historical timeframe on which D&D is based, 40% of children born die before reaching double-digits in age. 30% of those who get to age 10 will be dead before they reach age 20. 50% of those who get to age 20 will be dead before reaching age 30, and 70% of those who get to 30 will be dead before 40. Of those who reach 40, 80% won’t get to fifty, and of those who get to 50, 90% won’t make 60. Of those who make 60, 95% won’t get to age 70. Thereafter it’s 96% dead before 80, 97% dead before 85, 98% dead before 90, and 99% dead every 2 years thereafter – 92, 94, 96, 98, 100, 102, 104, and so on. In theory, if you make your aging save, you can keep going – the record is believed to be about 116 years, though there is a substantial error rate. There are unsubstantiated claims of a South American tribesman reaching 150 years of age, for example.

Now, factor in the availability of healing magic, and the fact that most of those who die in the 0-20 age bracket die of disease, while most of those who die in the 20-40 age bracket do so in military campaigns of one sort or another.

Then factor in the increased danger of accidental death because there are dangerous monsters and magic and what-have-you around.

You can assume that these two factors cancel each other out, implying that the younger the age, the more likely you are to encounter one of these additional dangers – and that appears to make sense. You increase the rate of accidental death and reduce the rate of death from wounds and disease – but that’s just an assumption that could very well go either way. Make this assumption, though, for the sake of argument, and let’s look at the results:

The Population Breakdown

With our base assumptions and something vaguely approaching a historical foundation in place, we can generate a demographic breakdown:

  • 40% die before age 10 (4 in 10). 60% reach 10 years old (6 in 10).
  • 30% of this 60% die before 20 = 3/10 of 6 in ten = 18 in 100. The other 70% survive = 7/10 of 6 in ten = 42 in 100.
  • 50% of the 42 in 100 die before 30, = 21 in 100. The same amount survive.
  • 70% of the surviving 21 in 100 will die before 40 = 147 in 1000. 30% survive = 3/10 x 21/100 = 63 in 1000.
  • 80% of the surviving 63 in 1000 will die before 50, so 20% will survive = 1/5 x 63/1000 = 63/5000.
  • 90% of the 63 in 5000 die before 60, so 10% will survive = 63/50,000.
  • 95% of the 63 in 50,000 die before 70, so 5% will survive = 63/1e6.
  • 96% of the 63/1e6 die before age 80, so 4% will survive = 126/5e7.
  • 97% of the 126/50 million die before age 85, so 3% survive = 378/5e9.
  • 98% of the 378 in 5000 million die before age 90, so 2% survive = 378/25e10.
  • 99% of the 378 in 250 thousand million die before age 92, so 1% survive = 378/25e12.

…and so on.

Application to a typical population

Now multiply those by a population base – let’s say, 100,000 people.

  • 40,000 will be <10, 60,000 will be 10+.
  • The 60,000 are made up of 18,000 aged 10-19 and 42,000 aged 20+.
  • The 42,000 are made up of 21,000 aged 20-29 and 21,000 aged 30+.
  • The 21,000 are made up of 14,700 aged 30-39 and 6,300 aged 40+.
  • The 6,300 are made up of 5,040 aged 40-49 and 1,260 aged 50+.
  • The 1,260 are made up of 1,134 aged 50-59 and 126 aged 60+.
  • The 126 are made up of 119.7 aged 60-69 and 6.3 aged 70+. That doesn’t make a lot of sense, so round the numbers to 120 and 6 for practical usage.
  • The 6.3 people are made up of 6.048 people aged 70-79 and <1 person older than 80 – though we are now well within the 0.3 in 100,000 rounding error. So leave it be at 6 people aged 70+.
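If you want to experiment with different mortality assumptions, the whole breakdown above can be reproduced in a few lines. The following is a sketch in Python (my own illustration, not part of the original article), using the article’s estimated death rates – which, as the caveat at the end of this article admits, are plausible guesses rather than census data:

```python
# Per-bracket death rates from the discussion above: 40% die before 10,
# 30% of survivors die before 20, and so on. These are the article's
# estimates, not real demographic data.
death_rates = [0.40, 0.30, 0.50, 0.70, 0.80, 0.90, 0.95, 0.96]

alive = 100_000.0          # starting population
counts = []                # people whose age falls inside each ten-year bracket
for rate in death_rates:
    dying = alive * rate   # die somewhere within this bracket
    counts.append(dying)
    alive -= dying         # survivors carry on to the next bracket

for start, n in zip(range(0, 80, 10), counts):
    print(f"aged {start}-{start + 9}: about {n:,.0f} per 100,000")
```

Run as-is, this reproduces the 40,000 / 18,000 / 21,000 / 14,700 / 5,040 / 1,134 / 120 / 6 figures, complete with the fractional leftovers that the article rounds away.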

The result is a population curve which is noticeably bunched up toward the lower end of the scale – quite different from the bell curve or the completely flat line that either of the generation methods discussed earlier would produce.

The Next Step Not Taken

In my youth, I would have gone on to plot these results on a graph, and then perform a mathematical analysis to derive a complex equation describing the exact percentage of the population for any given age (to fill in the missing points on the graph), then converted the results into a table for generating a randomly rolled age.

Of course, if we simply assume a flat distribution of possible results across the sub-range of ages specified, we can get a simpler answer far more quickly – a d1000 for the age band, and then a d10 for range within that age band. But for the purposes of this article, even that is going further than we have to.
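If you do want a quick roll-your-own generator along those lines, here is a sketch (my own illustration of the band-then-year approach described above, not the author’s actual table): weight the age bands by the per-100,000 counts derived earlier, then roll flat within the band.

```python
import random

# Age bands for adventuring-age humans, weighted by the (rounded)
# per-100,000 counts from the breakdown above; under-10s are omitted
# as too young to adventure.
bands = [            # (first age in band, people per 100,000)
    (10, 18_000),
    (20, 21_000),
    (30, 14_700),
    (40, 5_040),
    (50, 1_134),
    (60, 120),
    (70, 6),
]

def roll_age(rng=random):
    """Pick a ten-year band weighted by population, then a flat year in it."""
    start, _ = rng.choices(bands, weights=[n for _, n in bands])[0]
    return start + rng.randrange(10)

print(roll_age())
```

This is the same idea as the d1000-then-d10 roll, just with the weighting done for you.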

Elves with a 60% longer lifespan

To be honest, with all the social impacts of being long-lived, I can’t really see elves having more than a +60% lifespan over humans without the difficulties becoming insuperable. Doesn’t sound like a lot, does it? But let’s apply it and see what effects it would actually have on the demographic.

Because the dangers faced by the young would be the same for both humans and for elves, I’m not going to apply the full factor to the young. Instead, I’m going to go: Times 1, times 1.2, times 1.4, and times 1.6 thereafter.

So:

  • 10+ stays 10+.
  • The ten-year gap between 10+ and 20+ becomes a 12-year gap to 22+.
  • The ten-year gap between 20+ and 30+ becomes a 14-year gap – but it now starts at 22+ and runs to 36+.
  • The ten-year gap between 30+ and 40+ becomes a 16-year gap, but it now starts at 36+ and runs to 52+.
  • All the subsequent age brackets are also 16 years of length.

That gives a population breakdown of:

  • 40,000 will be under 10, 60,000 will be 10+.
  • The 60,000 are made up of 18,000 aged 10-21 and 42,000 aged 22+.
  • The 42,000 are made up of 21,000 aged 22-35 and 21,000 aged 36+.
  • The 21,000 are made up of 14,700 aged 36-51 and 6,300 aged 52+.
  • The 6,300 are made up of 5,040 aged 52-67 and 1,260 aged 68+.
  • The 1,260 are made up of 1,134 aged 68-83 and 126 aged 84+.
  • The 126 are made up of 120 aged 84-99 and 6 aged 100+.
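The bracket edges above follow mechanically from the multipliers. As a quick check (a sketch under the same x1 / x1.2 / x1.4 / x1.6 assumptions, not anything from the original article):

```python
# Each ten-year human bracket is stretched by its multiplier; the bracket
# edges are the running total. x1 for childhood, then x1.2, x1.4, and
# x1.6 for every bracket thereafter.
multipliers = [1.0, 1.2, 1.4] + [1.6] * 5
edges = [0]
for m in multipliers:
    edges.append(edges[-1] + round(10 * m))
print(edges)  # [0, 10, 22, 36, 52, 68, 84, 100, 116]
```

The survival counts themselves don’t change; they simply re-label onto the wider brackets.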

That’s what a 60% increase in the lifespan looks like. For any given calendar age, you get more elves alive of that age than you do humans. In the bracket containing 75 years of age, for example, you have 6 humans in every hundred thousand and 1260 elves.

To reduce the population levels of both to match – 6 in both – you find that elvish communities are one 210th the size of comparable human communities – so a city of 20,000 people would be the same as a ‘city’ of 95 elves. And a town of 2000 humans would be the equivalent of a group of 9-10 elves.

A correction

Actually, that’s not quite correct. In both cases, we’re aiming for an age range – to get an absolutely correct comparison, we should divide that age range up. So the 6 humans are actually 6 aged 70+ (with, effectively, none older than 80, according to our earlier calculations). So that means 0.6 of them will be exactly 70 years of age.

The elvish age bracket containing age 70 applies to 1134 people out of 100,000, and runs from 68 to 83, a span of 16 years – so 1134 / 16 gives 70.875 people out of 100,000 aged exactly 70. To get that back to 0.6 people, we have to divide the elvish population by a factor of 70.875 / 0.6, or 118.125.

That means that a city of 20,000 humans is as common as a “city” of 20,000/118.125=169 elves. A town or village of 2,000 humans is as common as a “town” of about 17 elves.
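The correction works out like this in code (a sketch of the arithmetic above; the figures are the article’s own):

```python
# Compare people aged exactly 70, per 100,000, per year of age, then
# scale the elvish population so the two densities match.
human_density = 6 / 10        # 6 humans aged 70+, spread over a 10-year bracket
elf_density = 1_134 / 16      # 1,134 elves in the 68-83 bracket, 16 years wide

scale = elf_density / human_density   # population scaling factor
print(round(scale, 3))                # 118.125
print(round(20_000 / scale))          # 169 -- the "city" of elves
print(round(2_000 / scale))           # 17  -- the "town" of elves
```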

400-500 years?

These differentials would be even more extreme if the 400-500 year lifespan model were applied. You would end up with the average Elven city having perhaps two people in it, and villages would contain fewer than one person.

Don’t believe me? Well, let’s have a go.

The conversion factor

So, to start with, we want to graduate from x1 to x5 smoothly. The square root of 5 is 2.236, and the square root of that is 1.5, near enough. So, let’s say the factors are:

  • times 1;
  • times 1.5;
  • times 1.5 x 1.5 = times 2.25;
  • times 2.25 x 1.5 = times 3.375;
  • times 5, thereafter.

Our ten-year population intervals become:

  • 10 years;
  • 15 years;
  • 23 years;
  • 34 years;
  • 50 years, thereafter.

And that gives, from a standard 100,000 breakdown:

  • 40,000 will be under 10, 60,000 will be 10+.
  • The 60,000 are made up of 18,000 aged 10-24 and 42,000 aged 25+.
  • The 42,000 are made up of 21,000 aged 25-47 and 21,000 aged 48+.
  • The 21,000 are made up of 14,700 aged 48-81 and 6,300 aged 82+.
  • The 6,300 are made up of 5,040 aged 82-131 and 1,260 aged 132+.
  • The 1,260 are made up of 1,134 aged 132-181 and 126 aged 182+.
  • The 126 are made up of 120 aged 182-231 and 6 aged 232+.

…and so on.

With 70 years being our standard of comparison, we have 6 humans in 100,000 and 14,700 elves in roughly that time-span. Dividing the 6 humans into the 10-year span gives 0.6 people in 100,000 being exactly 70 years old, while dividing the 14,700 elves into the 34 year age span gives 432-and-a-fraction elves exactly 70 years old out of every 100,000. Reducing elvish populations so that both groups have 0.6 members in 100,000 who are aged exactly 70 years gives a ratio of 720.6.
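The same comparison for the x5 model, again as a sketch of the arithmetic above:

```python
# Under the x5 model, the bracket containing age 70 is 48-81: 34 years
# wide, holding 14,700 elves per 100,000. Humans are unchanged: 6 people
# spread over the 10-year bracket containing age 70.
human_density = 6 / 10
elf_density = 14_700 / 34          # roughly 432.35 elves aged exactly 70

scale = elf_density / human_density
print(round(scale, 1))             # 720.6
print(round(20_000 / scale))       # 28 -- the elvish "city"
print(2_000 / scale)               # about 2.8 -- the elvish "village"
```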

So an elvish city of “20,000 humans” would contain about 28 elves, and a village of “2000 humans” would be the equivalent of an elvish village of… three. Most of the time. Actually, 20% of the time, it would only be two elves.

Conclusion

Plucking numbers out of the air for lifespan is all well and good, but if you don’t know what you’re doing, the implications can overwhelm your game setting. Or, if they are not taken into account – something few people take the time and trouble to do – they can completely demolish the plausibility of the game setting when someone else hits you between the eyes with some hard questions.

One Caveat: I don’t have any actual population demographics for the calculations shown here, especially for those specified in the section The Human Analogue In A Fantasy Campaign. These are simply numbers that seem about right from the many sources and references that I have read in the past. More accurate data would yield more accurate analysis and projections – but the results ‘feel’ right, as they stand. So you can take them with a grain of salt – but I’ll use them until something more accurate presents itself.


Who Got Poker In My RPG?


At my workplace there’s a poker group. At my previous job there was a group. Players in my Riddleport campaign play in such groups. Games are even broadcast during prime time TV. When did this game become so popular?

If you have a player in your group who is a raving poker fan, I’ve put together a few ideas on how to include the game in some form in your campaign and give that player a thrill. It’ll be like chocolate in their peanut butter!

Meet The Villain

Losing at a card game can become clever foreshadowing. Have the PCs play against the villain or a notable minion.

Bonus points if they keep their identity hidden during the game!

They’ll cheat, of course. And the loss gives great foreshadowing when the players confront the NPC in an encounter later on in the campaign.

If the PCs somehow win the hand or clean out the bad guy, the villain flips the table and announces revenge. A dish best served cold, my friend.

Get Real

Try to get a realistic feel for the culture of the game. You want to do more than deal out a few hands. You want to roleplay it.

You might consider visiting a casino and observing. Take notes on the sights, sounds, and smells. Look at the ceiling, the floor, and the nooks and crannies to collect cool little details you can add to the game scene.

You can also visit poker sites online to get a feel for the lingo, style, and themes to help your encounter and NPC descriptions. For all you method-actor GMs out there, you can even compete at real money poker sites if you want to feel firsthand what it’s like.

Theme It

Mash up poker with your campaign setting to make your own version of the game a unique experience for your players.

Perhaps the dealer is a cigar-chomping quasit. The chips are a special set brought back from the Goblin Lands, covered in goblin runes with accompanying teeth marks. Instead of dealing cards you cast them. Instead of ante it’s spit. You declare shield instead of call.

Even better, give all the hand combos new names. Full House becomes A Coup. A Straight is a Crossbow.

Check out this list of hands to help you figure out what you can rename thematically.

It’s Just Fun

First, a word of caution. Do not play for real money in the game. That seems obvious to you and me. But to someone with the fever, you bring out a deck of cards and say there’s a game in the tavern, and their eyes get squinty and they reach for their wallet out of habit.

No matter how much they ask, keep that aspect to the real game. Think I’m joking?

Combine a casual-style RPG player with a passion for poker, and you can see where their loyalty lies. They’ll cancel out on you to play that other game. They’ll have a card suit for an earring. They’ll talk the language in-character.

If you run a game within your game, they’ll want to do it for real. Stick to your policy of playing for fun only.

As a compromise, to create real stakes, gamble for GM helper roles. Play a fixed number of hands. The one with the most gold pieces at the end has no duties. The one with the least has table cleanup and garbage duty. Assign other responsibilities to everybody in between – scribe, quartermaster, mapper and so on.

That’s assuming stakes are not already high enough with characters gambling their own wealth away. :)

Find Cool Cards

You can find themed cards, chips and accessories online.

Enhance play, for example, with a fantasy deck. Maybe you have enough discarded Magic cards to create a poker deck.

If you just have regular chips or can’t find cool themed ones online, buy stickers and apply. You could also paint them up to get the look you like. Actually, metallic spray paint + stickers gives you a fantasy or sci-fi set pretty fast.

Hinge A Game Outcome On It

Make the results of the game affect the game world in some way.

OSR folk love getting chess sets into encounters as puzzles and mini-games.

I remember in one campaign I used a chess board as the Game of Gods.

As events unfolded in the world, the chess board got updated. One of the players had the ability to scry this board to see developments. As they were mid-level, they also influenced the moves on the board, which made the scrying even more important. “Did we successfully block the King’s check!?”

Additionally, I turned the pieces into NPCs. And the scryer could see some pieces cracking, vibrating, leaning and so on. In this way, I provided plot clues.

So too could you make poker have real game world impact. The cards could be NPCs. Or a mix of NPCs, locations and items. Hands dealt or played could be encounters and events.

The game players? Gods, demon lords, kings, imprisoned mega-psionicists, or unknowing rogues in a plane far away. Pick a group that would make cool epic tier NPCs for your game or future adventure opportunities.

There you have it. Six ideas for getting this real world hobby into your RPG game:

  1. Foreshadow future confrontation against the villain
  2. Observe real poker to help you roleplay it better
  3. Theme things in-game to give you better ambiance
  4. Raise the stakes with GM helper duties
  5. Find great props
  6. Base the plot on the hands

Have you ever used the game in your RPG sessions? How did it go?


The Imperial History Of Earth-Regency, Part 11: The Post-Modernist Dark Age – 1998-2015


This entry is part 11 of 12 in the series The Imperial History of Earth-Regency


Pieces Of Creation is an occasional recurring column at Campaign Mastery in which Mike offers game reference and other materials that he has created for his own campaigns.

All images used to illustrate this article are public-domain works hosted by Wikipedia, Wikipedia Commons, or derivations of such works, except as noted, and except for the Image of Big Ben as a 9/11 target, which is the author’s own work.

This article is a work of fiction and no endorsement of the content should be attributed to any of the individuals or institutions named, photographed, or credited.

Author’s Notes: It has been almost a decade since the first draft of this Alternate History was written in 2003. While some of the events forecast in these pages have come to pass, others have not; so this marks the point in the narrative where significant divergences occur, simply because my imagination went left while the real world went right. Very little in the overview below has been changed from the original text, though there have been a few clarifications here and there.

Nevertheless, the content of this series has been updated, as each part has been published through Campaign Mastery, to reflect the benefits of a further decade of life experience and perspective, and the next few chapters will be likewise enhanced with the introduction of more “real history” into the timeline.

This ‘era’ within the fictional history marked, and marks, the transition from based-on-real-history to completely fictional. All the 2012 updates do is smear that transition over a ten-year span within the history. But it will mean a gradual change in style, and possibly shorter subsequent chapters, due to the added research and development requirements.

When one era ends, it’s traditional for a new one to begin. But the seeds of the new era had been laid many years before the conclusion of the Communications Era, with the development and popularization of Home Computers and the Internet. The dominant theme of the new era would be the consequences of the events of the 15 years which preceded it.

Ted Nelson, one of the many fathers of the internet, in 2011. Photo by Gisle Hannemyr

The Popularization of the Internet

The internet began as ARPANET, a means for defense researchers at different universities to exchange technical information by computer linkup without the need for a face-to-face meeting. The scientists who used the systems found the technology so convenient that they began using it in their personal lives: e-mail had arrived. From that point on, it became inevitable that a domestic equivalent would eventually emerge. But it was when Australian researcher Ted Nelson first invented the hyperlink in the 1960s that the seeds of the phenomenon most people envisage when they refer to the internet truly came into existence.

Nelson’s proposals were ignored for almost 20 years, but when they finally caught on, the internet exploded. Crucial to this was the mass-popularization of Microsoft’s Windows 95 operating system, which brought many different network protocols and content types – FTP for file transfers, HTTP for web pages, JavaScript and ASP for interactive scripts on web pages, and SMTP/POP3 for sending and receiving emails – together into one (relatively) seamless system, with one type of communications linking directly to another through a simple point-and-click user interface. People didn’t see all these different programs as independent and separate; they saw a single entity – the internet – which had all these different things that could be done with it. This was the ultimate development of the Communications Age.

Nothing this revolutionary could come into existence without widespread social consequences. At first, this was dismissed as a possibility, even while the Dot Coms were setting stock markets on fire. Email was just like regular mail, just faster. The web was just like the magazine rack in the world’s best library – except that all the magazines were by amateurs. Web Pages were regarded as a mass-production extension of the Desktop Publishing that had predominated in the late 80s and early 90s. Even as late as 2002, 70% of internet users cited email as their primary purpose for an internet connection. But all Booms must Bust, and the Dot Com collapse which began in 1998 lasted for more than five years. Despite this implosion, there were four developments that would signpost and make possible a firestorm of change in subsequent years.

Zappos, based in Henderson, Nevada, was already the largest online shoe retailer when it was purchased by Amazon in 2009. Their fulfillment centre - a small part of the Amazon online empire - shows the incredible scope of the Amazon retail juggernaut. This image is from 2006, by lizzielaroo.

Online Retailing

The first of the four was the development, through scripting languages – JavaScript & ASP in particular – of online purchasing systems. Pioneered by the online bookstore Amazon, this development challenged the concept of the storefront display, one of the greatest expenses of a startup business. As the Dot Com explosion proceeded, it was proclaimed as an absolute necessity for a business to have a web site, and for customers to be able to interact with the business through that website as though it was the actual store. Although it never reached the point of totally wiping out in-person purchasing, due largely to the rise in internet fraud and public wariness, the first part of the prediction was largely a self-fulfilling prophecy; the more companies acted on that perception, the more the internet boomed, and the more necessary it came to appear to other companies that they too needed a web presence to remain competitive.

The most significant consequence was the erosion of trade controls and regulations at the direct customer level – if it was cheaper, transport costs included, to buy a book or a CD from the US, or Venezuela for that matter, you could – and hundreds of thousands did. While internet shopping never grew at the hysterical rate predicted by the “experts”, it did grow, year by year, and with it the economic basis of whole nations slowly changed. The internet had bypassed the local economy completely – or, more accurately, had put small manufacturers and multinationals on a level playing field, no matter where the small manufacturer was located.

This in turn inspired shady entrepreneurs, who realized that the internet was not only unregulated, it was almost impossible to regulate. Porn websites proliferated, bypassing local censorship regulations; gambling sites soon followed. The internet became the Great Leveler, reducing enforceable laws to the lowest common denominator. Many countries made vain attempts to limit or restrict the content placed on servers within their jurisdictions in an attempt to turn back the tide; the owners of those sites simply moved the websites to servers in other countries. They did not even have to physically move to those countries – the entire transaction and infrastructure set-up took place electronically, over the internet. Typically, it took less than 24 hours before a site was back on the web.

A version of the famous (or notorious, depending on who you ask) Napster icon, from a set of themed icons by DBGthekatu.

File-sharing & The Media Industries

The entertainment industry learned nothing from this lesson, as shown by its reaction to the third major development of the new era: the combined arrival of MP3s and file-sharing. The first such combination, Napster, was eventually shut down because it relied on centralized servers holding the index of files available for sharing, and those servers could be targeted; but even before it went defunct, new programs without this legal vulnerability were available to the public free of charge.

The entertainment industry, already stung by surveys showing that the internet was beginning to eat into “traditional” recreations like television, responded in 2003 with lawsuits that were beyond any rational belief. They had already tried to gain exemption from liability for any damage they might “inadvertently” cause by invading personal computers with web-based viruses in search of illegal files or the technology to exchange them; increasingly, their reactions were more suggestive of a state of panic than of a rational business. The irony was that they had only themselves to blame for this state of affairs. In the 1980s they had decided to eliminate relatively cheap LPs in favor of CDs, which they then priced at profit levels that were, to say the least, exorbitant; this created an unsatisfied demand which ensured that alternative (free) distribution methods would flourish. They had also neglected the MP3 phenomenon in its early stages, despite being given the opportunity to bring out a “downloadable files” option of their own. Having decided that internet distribution of movies and audio would not amount to much because file sizes were so extraordinarily large, they were caught entirely unprepared when new file formats and improvements in internet technology shrank file sizes dramatically without apparent impact on quality. Their own unwillingness to embrace the potential of new technology had left them vulnerable to the problem in the first place, and their own greed had created the demand.

The IRIA (Imperial Recording Industry Association) lawsuits of 2003 were terrorism by law; the damages sought were set so high as to ensure that the defendants would never even contemplate fighting the case in court. An out-of-court settlement, setting a precedent favorable to the Recording Industry, was what they were sure of achieving. It didn’t work out that way, for a very simple reason – Iraq.

An F/A-18C Hornet coming in for a landing aboard the USS Constellation (CV-64) after a mission in support of Operation Iraqi Freedom, whose objectives were to liberate the Iraqi people, eliminate Iraq's supposed weapons of mass destruction, and end the regime of President Saddam Hussein. Photo by Photographer's Mate 2nd Class Daniel J. McLain.

The Gulf War II Connection

In 2003, the Empire took up where it had left off with the invasion of Iraq, as officials tired of the games Saddam Hussein had been playing with inspection teams (and grew wary of increasing Mao tensions over the issue). The result of that conflict was fairly predictable, given the circumstances; but there were some unexpected spin-offs.

Notably, the Imperial Government took notice that many of the most patriotic pieces of music were still under copyright, meaning that they could be used on websites only at the site owner’s risk. This was not an acceptable situation, given that troops were fighting a war that had deeply divided the community even as the combat began. That, in turn – and in conjunction with the IRIA’s threats to sue the universities where the alleged infractions had taken place – ensured that a coalition of forces began assembling against the Recording Industry. The Universities called on their alumni, especially the politicians and the politically connected, and before the IRIA knew what was happening, the entire political and economic infrastructure of the Empire was bankrolling a defense fund – one which had already engaged the world’s leading attorneys (all of whom had also graduated from universities!).

A Barrister in traditional wig, 2009. Photo by Southbanksteve, Flickr

The Copyright Quagmire

Court was, in reality, the last place on earth the IRIA wanted to go. Once there, only three outcomes were possible. The jury might return a verdict for the defendant – possibly out of revulsion at the greed displayed by the damages requested – undermining the entire legal status of copyright that the IRIA had worked for years to create. Worse, they might do their duty under the law and find the defendants liable, but set a trifling damages bill, destroying the deterrent value of pursuing copyright violations. Worse still, they might return a finding of malicious prosecution, handing the bill for all legal costs to the IRIA – costs that were sure to be exorbitant, given the size of the legal team lined up to contest the case. And worst of all, the verdict would be legally binding and a permanent precedent. Any appeal would place the entire matter in the hands of the High Court – which had the authority to redefine copyright, or abandon it altogether, if it saw fit. There were no outcomes that benefited the IRIA – but they had never expected their bluff to be called. Nor could they afford to back down from it; the loss of credibility would do exactly the same damage as a loss in court.

Screenshot of the Chatzilla IRC software in use. Image is subject to the Mozilla Public Licence.

IRC & The Law

When the case actually reached the Court of Manchester in 2005, the legal process was in its own way being radically overhauled by the increasingly omnipresent internet, by means of a fourth technological innovation – Internet Relay Chat, or IRC. This enabled multiple people to converse by typed text in utter silence, save the tapping of computer keyboards. For the first time, IRC connections between laptop computers via wireless networking enabled the lead attorneys to have their entire research staff hooked into the case, locating precedents and past arguments and rulings more quickly than the clerk of the court could.

It was noteworthy that the defense team had this technology but the prosecution did not – a situation many considered symbolic of the entire affair. It was the Progressives vs. the Luddites all over again, and the Luddites were doomed to failure. One reporter would liken it to “the One-Eyed Man in the Kingdom of the Blind”, so profound were the differences; in effect, a legal team of over 2,000 people was defending against 3 unassisted lawyers. Much to the chagrin of the IRIA, the defense steamrollered them – and managed to get the whole problem kicked up to the Supreme Court on a jurisdictional issue even before a verdict had been reached. The result was a complete redefinition of copyright within the Empire, one driven by the purpose to which the copyrighted material was being put – and an end to the power of the IRIA.

Just part of the plethora of connections that made up the Internet (also known as 'The Cloud' and 'The Dreamtime') on Jan 15, 2005. Image by the Opte Project.

Mass Intercommunications & The Dreamtime

Internet Chat had other implications for society. For the first time, people living in many different countries were able to talk directly to one another – it was not uncommon for a chat room to have Europeans, Pakistanis, Arabs, Americans, Australians and South Americans all chatting at the same time – and those who participated began to forge bonds of understanding and to adopt a more cosmopolitan view. Communities of those with common interests arose, as they always do, but with the fundamental difference that Geography was irrelevant – these were neighborhood clubs whose membership happened to be scattered all over the Earth.

In the past, it had been held by some cultures that reality was more than the eye could see – that overlaying the physical world was a spiritual world containing forces, allies, and foes, a world invisible to all but the initiated, and in which things were possible that were impossible outright, or impractical at the very least, in the physical world. Amongst the citizens of the Empire, the Australian Aborigines and the Inuit had the clearest views of the “spirit world”, preserved despite the influence of Modern “Education”. It was a world of unsuspected dangers and unimaginable possibilities. The Internet was similar: a Dreamtime of new dangers and new promise, an electronic web connecting 70% of the Empire together in ways considered science fiction a decade earlier. The correlation between the two would spark new religions and new analogies; by 2015, “the ‘net” had become known colloquially as “the Dreamtime”.

The DynaTAC 8000x, also known as "The Brick", was the world's first commercially available hand-held mobile phone. The version released in 1983 weighed 28 ounces and was 10 inches tall - not counting the antenna! Previous mobile telephones either had to be mounted in a vehicle or were contained in heavy briefcases.

Collaboration In The Dreamtime

IRC continued to evolve; with improvements in compression technologies and faster computer processing, it became possible first to chat using spoken words, and then to communicate with visuals. Shared electronic whiteboards enabled design teams to collaborate no matter where in the world they were physically located. Surgical instruction and supervision were conducted live, over the internet, as operations proceeded. It became increasingly common for people to work from home and “telecommute”, with obvious ramifications for the transport industry (Canada astonished the world by becoming the world leader in this social trend).

It was a French company, Yahoo – which, thanks to the power of the internet, many people thought American in origin – who first realized that, with relay servers connected to phone lines in almost every local district of every nation of the Empire, it was possible for a telephone call to be routed through an internet connection from another country to a local telephone. A single internet connection made all phone calls local, and almost free. They built the technology into their free “Messenger” IRC software as much as a proof-of-concept as an actual marketing exercise. At first, the service connected only to USK telephones (from anywhere else in the world); but one after another, more districts and countries were connected.

Telephone monopolies, accustomed to reaping the bulk of their profits from long-distance communications, became increasingly unprofitable, and were forced to resort to draconian business practices of the most cutthroat variety to stay ahead of the competition. It could be argued that in many ways the various governments chose the worst time possible to privatize and deregulate the industry, as standards of service were sacrificed to the almighty bottom line.

Nor was this the only challenge to confront the traditional communications monopolies; radio-based mobile telephones evolved rapidly with improvements in technology, leaping from the wildly impractical to the ubiquitous in less than a decade, and then continuing to both shrink in size and grow in capability. The majority of telephone companies had been more than willing to ignore these devices when they were bulky and unwieldy; the sudden explosion brought with it new and more modern rivals. Worse yet, in order to compete with those rivals, their entire infrastructure would need to be overhauled, even replaced, placing a substantial burden on the operating capital of the telecommunications giants. For some, the burden would prove too much; they would go under or be broken up, their assets consumed by the johnny-come-lately telcos. Others managed, leveraging the size of their existing networks in partnership arrangements with the new services in order to raise the capital needed for infrastructure development; but these could not possibly compete with the powerhouses that the newcomers became with this added advantage. Increasingly, they would fade from the services sector of the industry or face imminent collapse at every turn, becoming administrators of the communications “backbone” and of the hubs to which the modern service providers purchased access. Their customer base was no longer the man in the street, but the company with which the man-in-the-street did business – a wholesaler, not a retailer.

The unsustainable progression of a classic pyramid scheme from a US Securities & Exchange Commission report on the subject.

There is No Heaven without a Hell

The new technologies brought new crimes and new threats, and identity theft was one of the most prominent. The growth of pay-by-internet made it possible to steal credit card details for subsequent use without ever possessing the actual credit card. From this beginning, enterprising criminals found that if they intercepted mail addressed to an individual, they could quickly acquire enough documentation to open multiple credit cards in the victim’s name, enabling them to purchase goods and obtain cash advances as they saw fit – without the victim even knowing that a crime had been committed until a month after the fact, when the credit card bills arrived.

In 2004, a refinement was devised by Eldon Bartels; he redirected the account mail to a safe-house under his control, and used the proceeds of one “stolen” credit card to make the monthly payments on several others. The result combined the “best” aspects of identity theft and a pyramid scheme. By the time the plot was accidentally discovered and Bartels arrested, he had managed to acquire an estimated 14 billion dollars through credit card fraud. Hiring the best lawyers – he could afford them – he received a 6-month suspended sentence and retired to the Bahamas.

The 404 file not found page of Wikipedia on 29 Sept 2011. Image by Mover. This sort of page is what customers see when attempting to reach a web server that has been subject to a Denial of Service (DoS) attack.

The Denial of Service Threat

The other threats were more technological, designed to exploit people’s penchant for connecting their computers to networks of other computers. Usually in furtherance of idle mischief, these menaces – worms, trojans, exploits, viruses, and metaphasic threats – were otherwise about gaining access to a person’s identity and passwords to permit petty thefts: obtaining the user code to a piece of software could save the perpetrator hundreds of pounds. A few used this as an adjunct to identity theft, but most attacks were the work of petty criminals or vandals.

Somewhat more serious were denial-of-service attacks, in which viruses with a time-based trigger were set to attempt to access a particular server at a given moment, or to flood a service with email – whatever the server was designed to do, these attacks were intended to overwhelm it.

The first such attack was an accident, but several others followed, usually in support of political activism. Whenever a company with a web presence – which was virtually all of them – did something to offend someone, there was always a chance that the company would become the target of a D.O.S. attack. In many cases these attacks had little direct impact; the fact that the website for “Joe Bloggs Tire Service” was overloaded didn’t make much difference in the scheme of things. But few companies hosted their own web servers; it was far more common for several companies to lease disk space and web access from a Hosting Service. In that case, whoever was so unhappy with Joe Bloggs had not only shut down access to Joe’s site, but had cut off everyone else who just happened to have chosen the same Hosting Service. And of course, the rest of the internet was slowed, sometimes dramatically, by all the extra traffic flowing across it. Like graffiti, the expense to the world in general of recovering from a D.O.S. attack far outweighed both the damage to the actual target and the cost of mounting the attack. Slowly, people – and the courts – began to take the matter increasingly seriously.

The rise of Spyware drove the development of new tools to fight the problem. HijackThis, an analysis tool targeting browser-hijacking methods, was released to Open Source on February 16, 2012. Click on the thumbnail to visit the Wikipedia page describing the software and its use, which also has a link to the project website at SourceForge.

Spyware: The War For Privacy

Many of these attacks were in retaliation for attacks on the privacy of individuals, which came under ever-greater pressure in the post-internet world. The rise of new deployment strategies for software which funded development and support costs through advertising – Adware – led to the incorporation of sub-programs which explicitly tracked web surfing habits, name, address, online purchases, and so on.

Then someone hit on the idea of doing the same thing without the ads – so that the victim didn’t know their security was compromised. This type of software was named Spyware. Increasingly rapacious marketers developed new tricks – like tiny, invisible graphics called Web Bugs – which enabled them to determine which site people had come from and which site they went to when they left. The companies claimed that the information was aggregated and homogenized, statistical in nature – but as the Dot Com collapse continued and mergers resulted, it quickly became possible for a company to hold two separate databases, neither of which contained enough data on an individual to identify them, but each of which stored enough common details that the two could be treated as one large database – with the result that the company knew all about the person.
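The database-merging trick described above is, in effect, a record-linkage join: two individually “anonymous” databases combined on the quasi-identifier fields they share. A minimal Python sketch of the idea follows; every field name and record here is invented purely for illustration.

```python
# Two databases, each "anonymous" on its own, joined on shared quasi-identifiers.
# All field names and records below are hypothetical illustrations.

db_browsing = [   # hypothetical adware database: surfing habits, no names
    {"zip": "90210", "birth_year": 1970, "sites": ["news", "poker"]},
    {"zip": "10001", "birth_year": 1985, "sites": ["sports"]},
]
db_purchases = [  # hypothetical merged-partner database: names, no surfing habits
    {"zip": "90210", "birth_year": 1970, "name": "J. Bloggs", "bought": ["shoes"]},
]

def link(a, b, keys=("zip", "birth_year")):
    """Join two record sets on the quasi-identifier fields they share."""
    index = {tuple(rec[k] for k in keys): rec for rec in b}
    merged = []
    for rec in a:
        match = index.get(tuple(rec[k] for k in keys))
        if match is not None:
            merged.append({**rec, **match})  # one combined, identifying profile
    return merged

profiles = link(db_browsing, db_purchases)
# The matched profile now ties a name to browsing habits that neither
# database could attribute on its own.
```

Neither input table identifies anyone by itself, but the joined profile does – which is exactly the privacy hazard the paragraph describes.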

The 9/11 terrorist attack against Big Ben united most of the world in outrage. Restoration of the Imperial Icon would be complete in 2015. Click on the thumbnail for a larger image.

9/11 & The Hunt For Al-Qaida

This is not to say that more physical threats were a thing of the past. If anything, the new breed of terrorist was more threatening than their predecessors had been.

On September 11th, 2001, hijackers seized control of a number of fully laden passenger aircraft and flew them into key Imperial landmarks and structures. Big Ben was destroyed. Buckingham Palace was severely damaged (fortunately the Royal Family was not in residence at the time). Another plane crashed before it could strike the headquarters of the Imperial Military Planning & Intelligence Services Building. Imperial Citizens throughout the world were outraged and united in anger; this one act of barbarism created more solidarity on a single political issue than ever before. Even the Mao declared their support and cooperation in the hunt for those responsible.

For the first time, Mao intelligence was assisting the Empire, but that proved less beneficial than Imperial strategists had expected – largely because, through earlier alliances with the Chinese, parts of the Middle East had acquired Mao technology, and were shielded against their most effective methods. Nevertheless, the hunt for Osama Bin Laden and his Al-Qaida terrorist network proceeded with a determination and ferocity that was as much about wounded pride as it was political necessity.

Movement of the NASDAQ index from 1994 shows the inflation and collapse of the Dot Com bubble. Image made by ed g2s using publicly-available data from nasdaq.com.

The State Of The Economy

The Imperial economy represents the spending habits of billions of people, and in times of transition it is acutely vulnerable. Lasting fortunes are won and lost in such times, and the post-modernist era was no exception. Most visibly affected were the “Dot Coms”, of course, many of which were financially unsound from the moment of inception. But a raft of other industries were also transformed and transfigured: Postal, Communications, Advertising, Entertainment, Transport, Education, Insurance. These in turn had knock-on effects on other industries. Restaurants that had relied on office staff found themselves unviable with so many workers telecommuting instead of attending daily. Corporate offices became less necessary, but meeting rooms took their place. Even so, the average size of a corporate headquarters would shrink markedly over this period, which reduced the infrastructure costs of business, helping pave the way for greater prosperity and economic diversity in years to come.

Robotic workers photographed (in real life) at the Shanghai Science and Technology Museum by Mountain.

The Consequences of a changing Demographic

The era was always going to be revolutionary, even without the impact of the Internet. The 1950s and 60s had seen a population boom of unprecedented magnitude, with social consequences to match. With the workforce expanding faster than the economy generated jobs, it was inevitable that there would be massive unemployment – with the peak occurring in the decades 1970-1990 as hordes of better-educated workers entered the job market. This, in turn, left humane governments with little option but to adopt social policies designed to support the unemployed. Even paying a pittance – and most unemployment payments were a bare minimum sustenance level – boosted the amount of cash flowing through the economy over what had been there in previous years.

All these people needed to spend their money – generally on the necessities of life – and so supermarkets and takeaway restaurants and video rental dealerships and the like flourished. The growth in the retail sector was driven by lower-income earners, and primarily geared to satisfy their needs. This in turn generated additional employment requirements, and eventually an unstable equilibrium was reached.

As population growth had moderated after the Baby Boom, so 18 years later, workforce growth also began to moderate. Through the early 1990s, stability was at last achieved. Governments took the credit for their economic management. From about 1995 onwards, as women retired from the workforce to raise families, and the baby boomers began to drift out of the workforce through age and attrition, employment growth began to outstrip the growth in workers, and the unemployment rates slowly began to recede. Not by much – about 0.1% a month, most months – but these added up. From a high of 12% in the mid-70s, unemployment first receded to a stable 6.5% in the mid-90s. The Dot Com collapse temporarily drove it back up to almost 8%, but by 2004, it was back down to 6%, and the decline was accelerating as the first onrush of workers approached mandatory retirement and life on an old-age pension.

In some specialist, well-paid professions, and in those where working conditions were unusually poorly rewarded – doctors and nurses, for example – demand had already outstripped supply for a decade. Governments took the credit for their economic management – again. In 2010, unemployment hit 1%, and even the dullest of political thinkers had realized what was going on. By 2012, there were more jobs than there were people to fill them, and the only unemployment was an irreducible minimum of people in transit from one position to the next.

Of course, the aging population, in combination with the technological developments, had tremendous impact on the type of work that people were being employed to carry out. Tourism, especially on the local/national scale, aged care, and health care of all types, skyrocketed. There were some losses in construction and similar areas, as there were in clerical staff, but these were relatively stable. The biggest losses were in “blue collar” labor, especially manufacturing, where production became increasingly automated. The ideal was one worker (now a white-collar production supervisor) to one production line, but that was never quite attainable – it was simply too rigid to adapt quickly enough to changing economic circumstances.

Although obvious in hindsight, these patterns were rarely appreciated at the time, and the economic consequences came as surprises when they shouldn’t have. The explosion in service providers to the lower end of the wage spread surprised everyone in the 1970s and 80s. Fast Food chains proliferated. Where a town might have supported one supermarket, it had two or three. The manufacturing and infrastructure demands of this active population pushed industrial systems to the limits and beyond, and had much to do with the pollution problems of the era. As the baby boomers progressed into middle and upper management positions, it came as a total shock when they chose to exercise their greater spending power; the industries that had initially exploded began to contract, making way for a proliferation of mid-priced restaurants and service industries, which carried out the tasks that the workers had no time or inclination to perform.

Convenience and quality of life were the objectives of the growth industries of the 1990s. In the first decade of the new century, as the problem of unemployment receded, the problems of an aging population grew. Too many people were unable to provide for their own retirement, and where once it was unemployment support that was the critical social expenditure of national governments, the pension became dominant. The result was a resurgence of the industries that had dominated growth in preceding decades, but it was short-lived. Even as demand was growing, the number of available workers was shrinking; in order to attract staff, wages and employee-related expenses were rising; and these forces demanded a contraction of the market. The result was a number of spectacular collapses and mergers, with concomitant economic chaos.

To meet the needs of business operations, two words became the touchstones of the economy in the late 2000s: Retraining and Subsidy. If a business was going to need a professional, it was better (and less expensive) to hire someone fresh out of school at low wages and pay their way through the education process – as well as giving them real world experience in the business practices of the company. Businesses resumed the roles of Patrons, just as they had during the Renaissance. In order to recoup their expenses, and hold onto their employees, businesses began moving to longer employment contracts. It was not unusual for a new employee to be hired on an eight-year contract – two years of part-time study and part-time work, three years of full-time study, and three years of full employment. As the working population declined, so did the causes of pollution – the number of cars on the road, for example. Although it would be decades before any sort of genuine ecological recovery was underway, the new century slowly shed the problems of the last like a used overcoat.

The Emperor William II in RAF service uniform prior to ascending the throne. Photograph by Robert Payne, Flickr.

The Politics of Empire

In the middle of all this, a new Imperial Monarch came to the throne. Elizabeth defied all predictions by reigning for 54 years, but over the first decade of the new era her health increasingly deteriorated. In 2006, she suffered a stroke that left her incapable of carrying out the duties of her office, and Prince William became the Emperor William II. Given her age at the time, it would be stating the obvious to point out that to most Imperial citizens, Elizabeth was the only Monarch they had ever known. The presence of a new Monarch with new ideas and a new style was going to shake up the Empire in ways they could barely imagine.

Since the events of 1993, there had been a quasi-stability about the Empire – the Empress had established in that year that she could override Civil Service Policy in any matter that came to her attention, but her conduits of information were still controlled by the Civil Service, and it had not taken them long to realize that if they disrupted her attention with minor matters while obfuscating the important issues, and filled her days with social engagements, her powers were effectively neutralized. She had been unable to batter her way out of the cotton wool in which they had increasingly cocooned her, and the Civil Service machine had, after a hiccough or two, rolled majestically onward, untroubled by the demonstrations of the power of the throne. They were confident that the same now well-established and polished techniques would soon have William II firmly emplaced as a figurehead as well, leaving the running of the Empire to ‘those who knew how to do it’ – they just had to “housetrain” him. In the meantime, the search for a bride – receptions, galas, state visits – would keep him happily neutralized for months to come.

View across St Salvator's Quad at the University of St Andrews. Photo by Oliverkeenan.

Reforms Of The Anonymous Monarch

They reckoned without the impact that a technologically adept Emperor could have. William knew full well how to use the internet, and had many aliases for use in chat rooms, where he could find out what his citizens were really thinking. Fully capable of doing his own research, and knowing that this day would eventually come, he had drawn up an agenda for sweeping reforms well in advance of actually ascending the throne. He also had the advantage of an informal secondary education from his Grandmother. It had not taken the Empress long to discover the political realities of the post-1993 situation, and she had spent the next 10 years studying the civil service and their techniques, and working out ploys with which to outmaneuver them. When her course in “Advanced Politics” was ready, she handed her notes over to William. Instead of being an insulated and fairly callow youth of only 24 years, he was more fully prepared than any Monarch who had previously ascended the throne.

He started by delegating all ceremonial and social matters to his father, save those few which that social guardian approved for the Emperor’s personal attention. He then presented his agenda to the public and to the civil service at the same time. The principles on which he proposed to reform the Empire were simple:

  • Any bureaucrat whose sole responsibility was to the bureaucracy itself had his position placed in abeyance, pending reallocation of manpower.
  • There were to be no more than two layers of bureaucrats between himself and his subjects – one local manager of the office at which the public connected to the Civil Service, and one senior manager who reported and advised on policy.
  • All policy queries were to be directed to a working group consisting of a member of the house of Lords, the elected minister of the government, the Civil Service head of department, and an independent expert appointed by the Throne – who would have at least 24 hours and at most a week to determine how to resolve the problem – even if that solution impacted another department.

The Civil Service was horrified – it would lead to total anarchy, they predicted. There wasn’t enough time to determine what the real issues were, let alone find solutions, they forecast. Legislation could not be enacted that quickly, they moaned. A four-man committee could become deadlocked, they solemnly announced.

To which William issued a press statement conceding:

  • that there could be some confusion at first;
  • that if officials didn’t know the details of their departments, they might not be able to reach an acceptable decision in the length of time provided – but that outside of unusual circumstances, he would consider that a demonstration of incompetence, which was grounds for dismissal from the service;
  • that the government and opposition representatives had been deliberately included to permit a broader view than the isolated perspectives of the individual bureaucrat – being appointed head of a ministry or shadow ministry meant that they were then placed in charge of government policy as it impacted the subject at hand;
  • and that the potential for deadlock was the reason why he always had the deciding vote.

At the heart of William’s reforms were the concepts of exceptions processing and regression analysis, things he had learned of in a computer-programming course at University.

There was a single rule, a statement of principle, for each government department’s function. Against that rule, exceptions accumulated, and each exception in turn became a general case. When the number of exceptions reached a certain threshold, the statement of purpose was reviewed and revised to incorporate them, and the count started over. The result was a simple set of rules which stated that “X” – whatever the member of the public had requested or required – either could or could not happen. If the member of the public disagreed, he could appeal to the staff responsible for that function of the department in question. If the basis of the appeal was an exact match for a previous appeal, and there had been no change in government policy or in circumstances, the previous ruling was used as a precedent; if it could not be judged by precedent, it was forwarded for review, and a new precedent was determined to fit.
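The exceptions-and-precedents machinery described above can be sketched as a short program – fittingly, since William is said to have borrowed the idea from a programming course. Everything here (the class, the threshold of five, the ruling strings) is invented purely for illustration:

```python
# Illustrative sketch of the exceptions-and-precedents workflow described
# above. All names and the threshold value are invented for this example.

REVIEW_THRESHOLD = 5  # exceptions allowed before the rule itself is revised


class Department:
    def __init__(self, rule):
        self.rule = rule          # predicate: request -> bool (allowed?)
        self.precedents = {}      # appeal key -> prior ruling
        self.exception_count = 0

    def handle(self, request, appeal_key=None):
        if self.rule(request):
            return "granted"
        if appeal_key is None:
            return "denied"
        # Appeal: an exact match for a previous appeal is settled by precedent...
        if appeal_key in self.precedents:
            return self.precedents[appeal_key]
        # ...otherwise the case goes to review and sets a new precedent.
        ruling = self.review(request)
        self.precedents[appeal_key] = ruling
        self.exception_count += 1
        if self.exception_count >= REVIEW_THRESHOLD:
            self.revise_rule()
        return ruling

    def review(self, request):
        # Stand-in for the four-member working group.
        return "granted-on-review"

    def revise_rule(self):
        # Fold accumulated exceptions back into the statement of purpose
        # and restart the count, as the article describes.
        self.exception_count = 0


# The DSS principle from later in the article, as a rule predicate:
dss = Department(rule=lambda req: req["income"] < 0.25 * req["average_wage"])
print(dss.handle({"income": 5000, "average_wage": 30000}))  # granted
```

The point of the sketch is the shape of the system, not the numbers: the general rule answers most cases instantly, precedents absorb repeat appeals, and only genuinely novel cases ever reach the working group.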

The Civil Service was even more horrified after all this was explained to them. It would amount to an 80% downsizing of the service. Not so, replied William – it would be a 40% downsizing because he was going to triple the number of local offices of most government departments. There would be a greater impact on senior positions, but this would be balanced by the recruiting of more of the people who actually did the work.

William then laid down the operating principles that defined each function of the various government departments, further stunning the Civil Servants and the government representatives. For example, the Social Security function of the DSS: “All Imperial Citizens earning a gross income of less than 25% of the average wage are entitled to a supplementary income of an amount set by the Elected Government in its annual budget, but not less than 10% of the average wage.”

“But what about married couples where only one of them works? What about people who have millions in assets?” demanded the civil servants. “Excellent questions for review,” replied the Monarch. “Put some numbers on them – perhaps deeming an asset to have an annual income value equivalent to the current interest rate multiplied by the current value of the asset – and you will have your first exceptions. But don’t forget to include the deemed amount in the average wage.”

In general, the principles had a similar theme – all members of the public were entitled to whatever support was available unless they were specifically excluded. This sweeping reform also had the advantage of wiping out reams of legalese and casting matters into plain English. It made implementing Government policy changes so easy that there were no excuses for not carrying out an election promise – unless that promise was blocked by the combination of the Throne and the peerage for the long-term good of the Empire. At a stroke, it completely changed the power structure of the Imperial Government, and the expectations that the throne had of the branches of government.

Whitehall in London, looking south towards the Houses of Parliament. Photo by ChrisO.

The Battle Of Whitehall

The Civil Service retaliated with an attempt to drown the new system in paperwork. Every case was sent to review, every review resulted in a question, every question went to committee, every committee was deadlocked and sent to the monarch for approval. William sacked the staff responsible on the grounds of incompetence, including their managers. It was precisely so that he would have enough redundant staff to afford such dismissals that he had tripled the number of local offices. The Civil Service went on strike; he imprisoned the union leaders using the essential services legislation that they had helped draft.

After 6 months, the Civil Service was reduced to about 40% of its previous size. William compromised to the extent of permitting regional and national managers, responsible for staffing levels, administration, and expenditures, restoring some opportunity for upward mobility and promotion within the service – in exchange for removing the almost automatic handing out of honors. He gave each Civil Service head the choice of additional rewards for service as index-linked pensions or honors – they could not have both. He permitted the breaking up of some old departments into smaller ones, knowing that in the process he was reducing the powers of the Mandarins at the top of each, and the creation of a number of new departments. The resulting government structures were very similar to a bank’s business model – a loans officer (appeals), a branch manager, some clerical staff, the tellers (who dealt with the public), regional and national managers, and so on.

The National Governments also underwent their own upheaval. The new structure transferred power from the party rooms into the hands of the ministers appointed by the local Prime Minister. At first, there was some risk that they would join with the House Of Lords to veto the whole programme; but it was argued by William in meetings with the Parliamentary leaders that the setting of Policy would still be in the hands of the party; it was simply a matter of selecting the best member for the job of implementing those policies in the working parties. Delegation of authority worked in the military, it worked in business, so there was no reason it would not work in politics. What it also meant was that there was no longer any room for rewarding lengthy service with a promotion to the front bench, or other such corruptions; Ministries would have to be handed out to those judged competent to handle them. Besides, how did the Government think the public would react at the next election if they blocked the plans? The voting margin was narrow, but the decree was passed by the Lower House, and by 2010 the new system was operating smoothly.

The Economic Revolution

Having reinvented Government, William turned his attention to business. The profit-at-any-cost mentality had brought the economy to the edge of collapse, and could no longer be tolerated. He decreed a charter of social obligations for various “essential industries” to achieve within 5 years, or the relevant institutions would be nationalized and run by the Government.

His first targets were the Banks, Insurance Companies, Entertainment, and Telecommunications giants. The result was an immediate sharp recession – but one in which, for a change, levels of public service rose – resulting from the affected industries selling the shares they had bought as investments. The public eagerly snapped these up, ensuring that the recession was brief.

It was the most tumultuous period of change ever recorded outside of a time of War, and it freed William to deal with what he considered his real life’s task – the political problems of international relations.

Seventeen years of transformation

These, then, were the themes of the new age: privacy under attack; basic freedoms under attack; a legal system in disarray, threatening total breakdown as the courts became increasingly computerized; terrorists conducting precision attacks calculated to cause the maximum loss of pride, prestige, and innocent lives; dictators rattling sabers and threatening with weapons of mass destruction; new crimes, new pastimes, and a social revolution which empowered individuals as never before – only for the empowered to become lost in the crowd of everyone else so empowered, except to those seeking to take advantage of them; new business models and practices which were ignored at the owner’s peril; upheaval on every front. It was a short but intense Dark Age, in which the “barbarians” emerged to smash the machinery of society, and then to build something new on the foundation stones that remained.

Next time, we’ll get into the year-by-year chronology of this period and examine some of these events in detail…


Exceeding the Extraordinary: The Meaning Of Feats



From time to time, I like to look behind the curtain – to see what makes the mechanics of the games that I play tick, and what the implications are. Sometimes this leads down unexpected byways, and at other times it yields a nugget or two of insight. And sometimes, it just goes nowhere. So: in a d20 system (whether it be D&D, Pathfinder, d20 Modern, or whatever), what are Feats?

What Are Feats?

The 3.5 PHB doesn’t define them in its glossary. Chapter 5 of the PHB describes them as “a special feature that either gives your character a new capability or improves one that he or she already has.” The PHB then goes on to define Feats in terms of the differences between Feats and Skills, and then confuses the issue by dividing Feats up into several different types, with different rules applying to each type. Bonus Feats, Class-restricted Feats, Racial Bonus Feats and Feat Slots, Special Feat Lists… It’s a very flexible game mechanic, and it’s been used in all sorts of different ways as a result.

The Pathfinder SRD is even more vague: “A Feat is an ability a creature has mastered. Feats often allow creatures to circumvent rules or restrictions. Creatures receive a number of feats based off their Hit Dice, but some classes and other abilities grant bonus feats.”

One website defined them in terms of character options, used to customize a character. Another speaks of them as a metagame mechanic used to alter the way a character interacts with the rules. A third suggests that they are a way to change the rules of the game as characters become more powerful. A fourth describes them as a way to differentiate the capabilities of representatives of the same race/class combination at a metagame level. Still another talks of evolving characters from a generic common standard to a customized state that is more tightly integrated with the campaign world. And a sixth describes them as a way to give characters a bonus beyond what the normal character gets.

A fellow GM I was chatting to about this a few months back described them as “a way to customize classes or races as a means of adjusting game balance between these conceptual entities”.

And one of my players talks about them in terms of restoring the balance between humans and non-humans with racial abilities, and between Fighters and Mages.

These definitions run the gamut from the hypothetical to the min-maxing character crunch, from the simulationist to the pure roleplaying, from the campaign perspective to the metagame. And in any given campaign, any or all of them may be true – and there are some serious implications and repercussions buried beneath the surfaces of some of them.

Innate vs. Learned

One of the more interesting ideas that I came across in researching this article was the suggestion that all feats represented an innate natural skill or talent while class abilities were all things that the character learned, or learned to do, in the course of their professional development. I’m not entirely sure that I buy the notion, but it certainly raised an interesting question for consideration: is a Feat something that you learn or something that you can do? Or do Feats encompass both? And is that simply because later writers didn’t understand what a Feat was supposed to be any more than I do? In other words, has the original concept been contaminated – and if so, should feats that violate the definition that we arrive at be banned from the game?

It didn’t take very long to get into radical, even controversial, territory, did it?!

Because we have not defined exactly what a Feat is, no answer to these questions is possible. They are just something for us to keep in mind as we examine possible definitions.

If this were the purpose of Feats, progression would be as shown on the left - not as on the right

Balancing Acts

The notion that Feats are a means of tweaking the game balance between races and character classes doesn’t hold water, in my book – though it can have merit as a secondary usage. If this were the concept, only Humans and Fighters would receive feats, and they are not so limited. Of course, it’s possible that this was the original intention, and that the designers decided to raise the bottom line from zero to the default HD-based allocation mechanism.

I’m afraid that this theory doesn’t match up to reality. If it were correct in terms of race, then we must disregard the standard Feat every X levels and look only at what the two factors contribute. Can anyone seriously argue that one Bonus Feat is enough to counterbalance all the racial advantages and abilities that Elves receive, or Dwarves? And if it were correct in terms of class balance, then the receipt of Feats would be more in line with the geometric increases in power that Mages experience, instead of a bonus feat every X levels. Either the workmanship is slipshod and no-one’s ever noticed – yeah, right – or the actual distribution of Feats doesn’t match either pattern.

Nevertheless, the fact remains that there IS a bonus Feat for humans, and there ARE regular bonus Feats for Fighters. So these are either manipulations of the original intent, or the actual definition of what a Feat is must encompass this usage.

Character Options

The notion that Feats are a way of customizing characters by presenting them with a palette of choices and options is one that has played no small part in my approach to the subject, and one that several of the tentative definitions offered by different websites and GMs also touch on. But in order for this definition to work properly, it has to be assumed that all, or almost all, feats are of roughly equal value, and also that there are an arbitrarily large number of Feats for any given character to draw on.

The first condition is problematic. There is a standard employed in setting the effectiveness of Feats in the PHB, though it is one that I have inferred from the details instead of something that has been explicitly stated. That standard is:

  • +4 to one skill or roll
  • +2 to two related skills
  • +1 to four related skills
  • +2 to a type of saving throw
  • +1 to a combat-related numeric value, e.g. critical threat range
  • An ability that is normally useful once per round in combat
  • A more powerful combat ability that is only useful under specific conditions or is otherwise constrained

Metamagics, of course, fit into the last category, and introduce a sub-mechanic to the game – level adjustment – that is designed to contain the relative effectiveness of the Feat to something close to the appropriate standard. Unfortunately, not everyone has followed this standard – possibly due to poor analysis of the standard, or error – and there are some outright violators out there, some of them in “official” WOTC publications. By and large, though, the worst offenders tend to be home-brewed feats.

Side-Note: Class Abilities
It’s worth noting that most class abilities also meet this standard, but there are some even more outrageous exceptions. Possibly the worst offenders are “evasion” and “improved evasion”, which can make a character all but immune to damage-dealing magic regardless of the circumstances.

Where such magic has to target the character, evasion seems fair enough, but when it comes to Fireballs and other area-effect spells the logic starts to get shaky. Add to that the fact that the check involved is one in which those characters with access to these abilities are naturally good, and that the ability is absolute and not relative to the class levels of the characters involved, and it becomes a right mess.

Evasion and its improved counterpart virtually force the GM into the use of “Save-Or-Die” spells, which I loathe.
 

Fixing “Evasion”

“Evasion” and “Improved Evasion” are not that difficult to fix.

  • First, the DC for an evasion check should be increased by double the Spell Level, to make it a little bit harder to save against, especially with higher-level spells;
  • “Evasion” should be amended to read “Half the damage otherwise indicated on a successful Evasion save, Full damage otherwise”; and
  • “Improved Evasion” should be amended to read “One Quarter the damage otherwise indicated on a successful Evasion save, Half damage otherwise”.

These changes have the effect of permitting these abilities to continue making a major difference to the character’s capacity for surviving such spells while not making them a complete “get out of jail free” card. It also establishes a differential in which high-level spells are harder to Evade than low-level spells.

Finally, I would rule that using evasion leaves the character prone, requiring them to spend a move action to get back to their feet – unless they have an appropriate feat to let them do so more quickly.
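As a sanity check, the amended rules above can be expressed as a short function. This is a sketch of the house rule, not official d20 text, and the base DC and damage figures in the example are made up:

```python
# A minimal sketch of the revised Evasion / Improved Evasion house rules
# proposed above. The function and its parameters are illustrative only.

def evasion_damage(rolled_damage, spell_level, reflex_roll, base_dc,
                   improved=False):
    """Damage taken under the amended Evasion rules."""
    dc = base_dc + 2 * spell_level   # DC raised by double the spell level
    saved = reflex_roll >= dc
    if improved:
        # Improved Evasion: quarter damage on a save, half otherwise.
        return rolled_damage // 4 if saved else rolled_damage // 2
    # Evasion: half damage on a save, full damage otherwise.
    return rolled_damage // 2 if saved else rolled_damage


# A fireball that rolled 35 damage, from a 3rd-level spell with base DC 14:
print(evasion_damage(35, 3, reflex_roll=22, base_dc=14))                 # 17
print(evasion_damage(35, 3, reflex_roll=18, base_dc=14))                 # 35
print(evasion_damage(35, 3, reflex_roll=22, base_dc=14, improved=True))  # 8
```

Note how the character always takes *something* – the “get out of jail free” card is gone, but a successful save still halves or quarters the damage, and the `2 × spell level` term makes high-level spells progressively harder to shrug off.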

 

Side-Note: Class Abilities and Feats
Another pet peeve that is also relevant to this discussion is the example of character customization offered by the DMG (Sidebar p175). Many players have interpreted this passage to mean that if the character has a class ability that is identical in description to a Feat, that class ability can be swapped out for a different feat with automatic approval by both the rules and the DM. Those who customarily wear medium or heavy armor are, according to these players, better off swapping out all armor-use feats bar the category of the armor they are actually using.

In theory, this is fine, but in practice, it plays hob with low-level game balance by giving characters as many as 4 additional feats (3 armor types and shield use). The result, when applied to archery-oriented characters or dual-weapon characters, is a hugely disproportionate capacity for inflicting mayhem.
 

Fixing Armor Proficiencies

This peeve is also susceptible to an easy fix. Simply designate Light Armor Proficiency as a prerequisite for Medium Armor Proficiency and Medium Armor Proficiency as a prerequisite for Heavy Armor Proficiency.

Further refinements are possible, such as:

  • Ruling that characters cannot fail to take an armor Proficiency when it is offered and then take it again at a later point. In the long run, this would make no difference; but at low levels in a campaign, the effects can be considerable. And/or,
  • Ruling that Magical Armor counts as one Proficiency Type less unless used with a shield, i.e. no armor proficiency is needed to use Magical Light Armor, Light Armor Proficiency is sufficient to permit use of Magical Medium Armor, and Medium Armor Proficiency is sufficient to permit use of Magical Heavy Armor – unless the character wants to use a shield with this armor, in which case they not only need the Shield Proficiency, they need the correct Armor Proficiency.

These changes not only expand the available choices of armor for a character (a little honey to make the restrictions more palatable), they restrict the benefit that can be achieved by gainsaying proficiencies. The character is forced to choose between being restricted to a lighter armor permanently and having more feats, or having access to the heavier armor types and the full range of protections, with only the standard number of feats. There’s even a middle ground, for those who like to compromise!

They also make the nature of an encounter less prone to telegraphing by means of the armor being worn. That guy in scale mail – is he a poorly-equipped fighter, a fighter wearing some fantastical enchanted armor, or a mage or rogue in enchanted armor? The beefy guy next to him in full plate – is he a Fighter, a Paladin, or a Cleric? If it’s not quite so obvious what character class an NPC is, it is also not quite so obvious what his vulnerabilities are, or what threat he poses!
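Taken together, the prerequisite chain and the magical-armor refinement amount to a simple lookup. The sketch below is house-rule shorthand invented for this example, not rules text from any book:

```python
# Sketch of the armor proficiency prerequisite chain proposed above,
# including the optional magical-armor relaxation. Names are illustrative.

ARMOR_TIERS = ["light", "medium", "heavy"]  # each tier requires the one below


def required_proficiency(armor_tier, magical=False, with_shield=False):
    """Return the armor proficiency needed to wear this armor, or None."""
    idx = ARMOR_TIERS.index(armor_tier)
    if magical and not with_shield:
        idx -= 1  # magical armor counts as one proficiency type less
    return ARMOR_TIERS[idx] if idx >= 0 else None


def can_wear(proficiencies, armor_tier, magical=False, with_shield=False):
    """Check a character's proficiency set against a piece of armor."""
    need = required_proficiency(armor_tier, magical, with_shield)
    if with_shield and "shield" not in proficiencies:
        return False
    return need is None or need in proficiencies


print(can_wear({"light"}, "medium"))                # False: needs medium
print(can_wear({"light"}, "medium", magical=True))  # True: one tier relaxed
print(can_wear({"light", "shield"}, "medium",
               magical=True, with_shield=True))     # False: shield voids relaxation
```

This is exactly the trade-off described above: the rogue in enchanted medium armor is legal, but the moment a shield enters the picture the full proficiency chain applies again.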

 

Side-Note: Feats from multiple sources

A third (relevant) pet peeve that I might as well get off my chest while I’m talking about them is the assumption that players can draw feats from any published, compatible, sourcebook without GM approval, and that two different feats that do the same thing but have different names can automatically stack unless the bonus is of a Named Type.

This opens up all sorts of game-unbalancing possibilities. Two feats that are perfectly satisfactory in isolation can combine to create and exploit a rules loophole through which all sorts of game-unbalancing effects can crawl.

In 99% of cases, there is no problem, but that last percent – which min-maxers always seem to locate – annoys the heck out of me.
 

Fixing Multisource Feat Problems

Thankfully, yet again, this is easy to fix.

  • Any Feat that affects a given subsection of the rules, e.g. Flanking or Charge Maneuvers, is deemed to be mutually exclusive to feats from other sources that affect the same subsection of rules except with GM permission. Such permission is given on a case-by-case basis and never as a blanket ruling.
  • Any feat that confers a bonus to a given ability or score is deemed to be the same as any other feat that confers the same bonus, and therefore the benefits do not stack with that feat.
  • The “Flavor Text” that describes a feat, including any personality traits, is considered rules just as much as the game mechanics are; the referee is entitled to require it to be applied in roleplay, to take it into account in relations with NPCs, etc., and/or to require the feat to be replaced with another if the character acquires a feat or other ability that inhibits or controls the non-game-mechanic consequences of the original.
  • No feat or character class is permitted unless the referee also gains access to a copy of same for use with NPCs.

These four simple house rules permit the characters to utilize any game supplement that the players might have, from any source – within appropriate and reasonable limits.
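The first two of these house rules lend themselves to a mechanical check at character-audit time. This sketch is illustrative only – the feat records, their fields, and the sample feats are all invented:

```python
# A sketch of the first two multisource house rules as a validation check.
# Feat records and field names are invented for this illustration.

def check_feat(new_feat, existing_feats, gm_approved=False):
    """Return why a feat is rejected under the house rules, or None if OK."""
    for feat in existing_feats:
        # Rule 1: same rules subsection from another source needs GM sign-off.
        if (feat["subsection"] == new_feat["subsection"]
                and feat["source"] != new_feat["source"]
                and not gm_approved):
            return f"overlaps {feat['name']} ({feat['subsection']}): ask the GM"
        # Rule 2: identical bonuses are deemed the same feat; they don't stack.
        if feat.get("bonus") and feat.get("bonus") == new_feat.get("bonus"):
            return f"duplicates the bonus of {feat['name']}: does not stack"
    return None


# Two hypothetical charge feats from different splatbooks:
power_lunge = {"name": "Power Lunge", "source": "Splatbook A",
               "subsection": "charge", "bonus": ("attack", 2)}
reckless_rush = {"name": "Reckless Rush", "source": "Splatbook B",
                 "subsection": "charge", "bonus": ("attack", 2)}

print(check_feat(reckless_rush, [power_lunge]))                    # rule 1 fires
print(check_feat(reckless_rush, [power_lunge], gm_approved=True))  # rule 2 fires
```

Even with GM approval for the overlapping subsection, the second rule still catches the identical +2 attack bonus – which is the point: the min-maxer’s “1% loophole” has to survive both filters.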

Getting Back To Our Knitting

We’re still trying to figure out exactly what a feat is. I’ve been looking at the various definitions I could find, in reverse order, so far without finding a complete one. The best we have so far is “a way to customize characters”, which is at roughly a 2nd-grade level so far as definitions go!

The next one is “A way to give characters a bonus beyond what the normal character gets”.

This immediately raises the question, what is meant by the term, ‘the normal character’?

Two possible interpretations come to mind:

  • An NPC without character levels
  • A character without the feat

NPCs without character levels

This assumes that characters don’t automatically get class levels, that there is something extraordinary about those who do. The majority of NPCs encountered will be 1HD peasants, in other words, with no extraordinary capabilities whatsoever.

This interpretation actually stems from older versions of the D&D game system, especially AD&D, which stated outright that most NPCs never gain class levels. The problems with this notion are that it’s hard to set up social infrastructures for the advancement of PCs when they are so rare, and that it becomes difficult to explain where the villains come from.

If the campaign world is set up along these lines, it becomes something very different from the majority of modern campaigns. When creating the world, it becomes necessary for the GM to spell out exactly who the high-level characters are because they would be famous figures throughout the civilized world – and the same goes for the bad guys. PCs become tethered to the base of operations because that is where their character class has its resources – they can go out and adventure but must periodically return to home base to utilize those resources. If characters need to train in order to go up levels – another element that has fallen by the wayside in modern campaigns – that can only occur in locations where there is the infrastructure for such training. This is an elitist model in which the PCs are the elite.

More modern campaigns are more generalist. Removing the need to return for training before a character can go up a level permits more flowing narratives (the need to train being a constant handicap and interruption to an ongoing storyline). The result is a more “novelized” approach. It also means that many more characters have class levels, which in turn makes the integration of their class infrastructure more ubiquitous. It doesn’t completely solve the problem – for that you need something like the Shadow Levels approach that I offered in Shadow Levels: A way to roleplay the acquisition of Prestige Classes in D&D 3.x, one of the more popular articles here at Campaign Mastery.

Removing the restriction on the acquisition of class levels makes adventure easier to come by, but it makes a profound difference to the game world. It also necessitates the creation of character “classes” such as “Noble”, for those characters who don’t go out and adventure to maintain some level of authority over those who do. Without it, the NPCs can quickly be forced to dance to whatever tune the PCs call. (In my original AD&D campaign, I ensured that every nobleman had at least 10 class levels purely to justify their positions of authority – and higher rank had higher-level requirements. That campaign was the tale of the son of the King going out into the world to ‘prove himself worthy’ of his right of succession).

So, there are advantages and disadvantages to both, and a very different campaign flavor. This one assumption takes a modern game and gives it a very ‘old-school’ flavor. But the very fact that it is necessary to distinguish between the two indicates that this is not a correct base interpretation, though it is one that can viably be made in a campaign’s house rules.

That leaves us with:

Characters without the feat

Restating the proffered definition of a feat as “A way to give characters a bonus beyond what a character without the feat gets” doesn’t seem to get us very far. It seems to be a tautology, adding up to “a bonus is an advantage over anyone who doesn’t have it”.

But there is a hidden implication there, one that removes the tautological overtone. In order to make this definition sensible, the assumption has to be made that not everyone receives everything that’s on offer.

If everyone has access to class levels, but NOT everyone has access to feats, we have a blended compromise between the ‘old school’ approach suggested in the previous section and a fully-democratized model in which everyone has at least theoretical access to everything – class levels and feats.

To be honest, I’ve never seen anyone write up such a campaign structure, in which Feats are the difference between elite (PCs & their Arch-nemeses) and ‘mundane’. But it makes a certain amount of sense, in terms of preserving the infrastructure benefits of the world and still making the PCs elite. It’s not much of an edge, especially at low levels, but it would become massive as the characters progressed.

The only problem with this approach is that I haven’t seen it done anywhere before. In fact, the opposite is true – the trend has been to make Feats more ubiquitous, not less.

Monsters gain feats just by increasing their hit dice, for example – preserving some semblance of balance between PC capabilities and the difficulty of encounters. Can it be seriously suggested as logically-consistent that Monsters can get feats and a 14th level NPC cleric can’t? I’m not saying it can’t be done, but the campaign setting would need to justify this discrepancy.

Examination of this possible definition of a Feat has led us down some interesting byways, but the very fact that standard usage – even within official publications like adventure modules – doesn’t fit the resulting models demonstrates that this is NOT the correct definition.

From Generic To Unique

The definition that I actually use is the next one to be considered: “A means of evolving characters from a generic state to a uniqueness that is more tightly integrated with the campaign world.”

Again, there are some implications and hidden nuances to this definition that are worth taking the time to explore, contained within a couple of loaded phrases in the definition.

A generic state

The suggestion here is that all characters start out being carbon-copies of every other character (other than differences in statistics). That being the case, character stats become the primary differential, in game mechanics terms, between suitability for this career vs that, without actually blocking characters from undertaking a career for which they are unsuited in ability.

The implications for roleplay are enormous. A character whose personality draws them to a particular character class – any character class – can adopt that class regardless of ability. There will be especially pious clerics with a Wisdom of six who think they have heard “the call”. There will be fighters with glass jaws. There will be rangers who couldn’t follow a ploughed furrow in the ground, and Wizards who can barely light a candle, and Thieves who can trip over their own shadows.

Naturally, few of these will progress beyond 1st level, and many may die trying – but from time to time there will be some who survive by sheer luck or by virtue of more capable companions.

What’s more, this raises the prospect of differentials between temporal authority and levels of expertise – knowing the right people, or having the right relatives, can get people promoted to positions of authority their abilities do not warrant. And of members who go ‘bad’ (which means different things when you’re talking about Rogues and Paladins, of course). The result is more akin to the real world with which we are all familiar, where nepotism and low-levels of corruption are routinely expected by the populace (whether they occur or not).

If the GM recognizes these implications, they can build a consistent world around them. Eventually, that will lead to the players becoming aware of the implications (assuming the GM doesn’t tell them outright as part of the campaign briefing) – and once they know, they can begin employing them as tools in their interactions with the society around them. For example, identifying a discrepancy between demonstrable capability and level of temporal authority implies that the position was achieved by virtue of something more than competence – something that’s useful to know when characters are seeking an avenue to political influence.

A uniqueness

To a min-maxer, there is an ‘ideal solution’ that maximizes the power of any given character class by stacking slight differences in relative effectiveness of every nuance open to the character in their favor. If every character class and feat and class ability is equal in effectiveness to every other, their constructions are no better than anyone else’s. The more players subscribe to this philosophy, the more the characters at higher levels become carbon-copies of all others within their class and character level.

Equality in diversity is naturally the enemy of min-maxers and the ally of the GM who wants the PCs and NPCs in his campaign to be more interesting than this carbon-copy approach. The more viable options, and combinations of options, that are available, the more diverse and distinctive characters become even with exactly the same race, stats, character class, and class levels.

One of the many definitions of “game balance” – or, more properly, game imbalance – can be “the capacity for successful min-maxing within the system”. A less negative definition of game balance might be “the capacity for reflecting character persona in ability options without detrimental effects to the character’s abilities relative to characters who have made different choices.”

As a general statement, the more “equality in diversity” there is, the more character construction becomes an adjunct to roleplay, as opposed to “rollplay”. Clearly, the two types of activity in synergy produce a greater effect than if they are in opposition. Ideally, you would want players to be able to identify what a character can (reportedly) do and be able to extrapolate to a personality.

In practice, that might be an unachievable ideal; but the more closely it can be achieved, the healthier a campaign will be in many respects.

So the sine-qua-non of this definition, in practical terms, and with respect to feats specifically, is the type of parity standards that were offered in “Character Options” above. Without such a standard, the capacity for inequalities exists – which undermines the potential for uniqueness by defining “must have” feats for any given character class.

A plurality of equality

The implications go further. Another is that there should be many feats available for characters to choose from – at least five times as many as any single character has the capacity to receive, and the more, the better.

To some extent, this actually undermines the arguments and justifications I employed in my “pet peeves” boxed sections above. Every ability that is traded out makes a character more different from the standard, by definition. A character who has expended a feat slot doubling up on an existing advantage – an initiative modifier, for example – has not used that slot to gain a different advantage or ability.

So, which “Pet Peeve” solutions do I actually use?

  • Evasion/Improved Evasion
     
    I’d love to implement this solution but my players won’t hear of it. They argue that it is one of the few mechanisms counterbalancing the excessive power of high-level Wizards – which is true. So, until I develop some other counterbalancing influence over the spell-slingers, this is a non-starter. It’s ironic that one game-unbalancing element’s removal should be countermanded by another game-unbalancing element.
     
  • Exchanging Class Abilities for feats
     
    The only reason that this fix is not in place in my campaigns is that I hadn’t thought of it at the time! Right now, there’s a blanket ban on the practice in my campaigns – but that’s subject to change without notice if my players approve (it’s their campaign, too).
     
  • Open-sourcing of feats
     
    The actual restriction in place in my campaigns is more strict than this proposal. While I couldn’t always put my finger on the source of the problem, I felt that certain feats caused game balance issues, and so set up an approvals process† that let me approve, reject, or modify feats – and prestige classes, and spells, and so on. I have given ground (reluctantly) in the latter case – the Spell Compendium is just too convenient a resource – but in other areas, the process remains.
     
    At the same time, this solution is partially implemented – the “flavor text” part, to be specific. It’s actually taken quite some time to derive a general statement of what I look for in that approvals process. I’d love to introduce this fix in its entirety, but it’s probably too late for the current campaigns.
     
    Besides, if there is one “pet peeve” that is undermined by the argument given in “A plurality of equality” above more than any other, it’s this one.

I’ll write a separate blog post on the approvals process some other time.

Campaign Integration

The final loaded phrase in this prospective definition is “more tightly integrated with the campaign”. What does that mean? In practice, it means that some feats can be designated “only available to race X” or to “Class X” or to “characters of level X or more”, or combinations thereof – in other words, manipulating and extending the requirements list for a feat to suit the particular campaign. It can also permit certain feats to be made available for free to all members of Race X or Class X – sometimes in addition to, and sometimes as a replacement for, racial or class abilities.

You’re really only limited in this area by the amount of time and effort you can put into the campaign ahead of time (it’s generally too late once play starts).
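To make the idea concrete, here is how campaign-specific feat restrictions might be captured in a simple data model. This is purely an illustrative sketch – the `Feat` and `Character` classes, their field names, and the “Stonesense” feat are all hypothetical inventions for this example, not drawn from any published ruleset.

```python
# A minimal sketch of campaign-specific feat requirements.
# All class names, fields, and the example feat are hypothetical.

class Feat:
    def __init__(self, name, races=None, classes=None, min_level=1,
                 free_for=None):
        self.name = name
        self.races = races                  # None means "any race"
        self.classes = classes              # None means "any class"
        self.min_level = min_level
        self.free_for = free_for or set()   # races/classes granted it at no cost

    def available_to(self, character):
        """True if the character may take this feat under campaign rules."""
        if self.races and character.race not in self.races:
            return False
        if self.classes and character.char_class not in self.classes:
            return False
        return character.level >= self.min_level

    def is_free_for(self, character):
        """True if the character receives this feat without spending a slot."""
        return (character.race in self.free_for
                or character.char_class in self.free_for)


class Character:
    def __init__(self, race, char_class, level):
        self.race, self.char_class, self.level = race, char_class, level


# Example: a feat restricted to Dwarves of 5th level or higher,
# granted free to members of a (hypothetical) "Stonewarden" class.
stonesense = Feat("Stonesense", races={"Dwarf"}, min_level=5,
                  free_for={"Stonewarden"})

pc = Character("Dwarf", "Fighter", 6)
print(stonesense.available_to(pc))   # True
print(stonesense.is_free_for(pc))    # False
```

The point of the sketch is that “manipulating and extending the requirements list” amounts to nothing more exotic than adding fields and checks to a structure like this – which is why the only real limit is the GM’s preparation time.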

Counter-skinning

I refer to this practice as “Counter-skinning” because it really is the opposite to the technique of “skinning” one race or monster to create another whose capabilities just happen to match.

It’s when you combine the two that you really develop a powerful tool: “Monster X is exactly the same as monster Y except…”

I employed this technique extensively to create unique (additional) skills for the different races in the House Rules for my “Shards Of Divinity” campaign – another series of blog posts to be presented in the future – and it worked a treat.

Differentiation

The next potential definition of a feat to be examined – “differentiation between stock examples of a given race/class combination” – has turned out to be just another way of stating the same thing as the previous one, but one with fewer tools for the GM to employ. If the GM employs no manipulations – no Counter-skinning – that affect PC races, the “campaign integration” phrase of the preceding definition goes away and we are left with what is essentially a recapitulation of this definition. So it’s a functional definition, but one that’s less useful than the more verbose one already examined.

Changing Rules with Power Levels

This implies a greater structure to feats than is actually the case, though some feat dependency chains lend it an air of plausibility. Unfortunately, these dependencies are too haphazard for this definition to be correct in a general sense.

Could a more formalized, structured hierarchy of feats be developed? Of course. Is there any benefit that outweighs the expense in time and effort of doing so? Are there any lurking downsides?

The potential upside is in fighting min-maxing fire with fire – because that’s what we’re really talking about, here. The downside is the usual one that comes with rampant min-maxing: cookie-cutter assembly-line characters become ubiquitous, the common standard.

This is pandering to the min-maxing crowd – either a GM who has reached his wits’ end and decided “if you can’t fight ’em, join ’em”, or a GM who thinks this is the way the game is supposed to be. I’m sure there are some out there who fall into the latter category.

Unfortunately, simply because the GM is restricted in development time and scatters his efforts over many different characters, he can never compete with the focused (almost obsessive) attention lavished on their characters by the dedicated min-maxer. The GM is on a hiding to nothing, to use an apt Australian expression. (Actually, the phrase comes from the UK but it has fallen into relative disuse there, so far as I can tell, while it remains a common part of the ‘Ocker’ parlance).

The problem is that min-maxing – or “power gaming” to put a more friendly face on the practice – is fairly addictive, something that everyone falls prey to now and then. Weaning players off it can be exceptionally difficult, if not completely impossible. The original premise behind The Knights Of The Dinner Table is that most of the players are incurable power gamers – much to the frustration of the GM and the representative of the genuine roleplayers at their gaming table.

The final definitions

The last couple of definitions to consider don’t actually tell us much more than those already examined – “a metagame mechanic used to alter the interaction between character and rules” (which implies that Feats aren’t part of the rules) and “Character options used to customize a character” – which ignores that Feats are available to more than just characters.

So, where does that leave us?

A feat can be many different things, in many different campaigns – and subtle changes to the definition can have massive effects on the underlying precepts of a campaign. There IS no one “right” answer; instead, we have a tool for manipulating campaign and game elements at an almost primal level, if carried through to their logical conclusions.

Make the choice that’s right for the campaign you and your players want to play, and you strengthen that desired style’s hold on the campaign. Make the choice that’s wrong and you’ll be fighting the game system all the way – and wondering why you can’t get it to work for you.

You can even change from one definition to another to reflect some subtle but fundamental change in the game world, changing the tone and texture without your players being able to put their finger on exactly how you’ve worked the magic.

Have fun…


The Imperial History of Earth-Regency, Part 10: The Crumbling Of Icons – 1980-1997 continued


This entry is part 10 of 12 in the series The Imperial History of Earth-Regency


Pieces Of Creation is an occasional recurring column at Campaign Mastery in which Mike offers game reference and other materials that he has created for his own campaigns.

All images used to illustrate this article are public-domain works hosted by Wikipedia, Wikipedia Commons, or derivations of such works, except as noted.

This article is a work of fiction and no endorsement of the content should be attributed to any of the individuals or institutions named, photographed, or credited.

Author’s Notes: This Alternate History continues right from where it left off last time. The Civil Service has become a millstone around the metaphoric neck of the Empire, and the Empress has embarked on a plan to regain control of her Empire that is composed of equal parts inspiration, determination, and desperation…

1987

The new offensive against the Peerage bore unexpected fruit one year into the Empress’ four-year plan. By forcing businesses to focus on environmental issues, expenses began to rise dramatically, eating into profits, as expected. This reduced the stock value of the companies affected, leaving them ripe for hostile takeovers by the New Entrepreneurs, also as expected. Politically, this reduced the desirability of the Civil Service/Peerage/Big Business path, and reduced their ability to sway public opinion, also as expected, while elected politicians gained in power and influence, also as expected. As the peerage lost their grip on elected officials, so the Empress regained lost ground – all according to plan. In her political planning, she anticipated that the Lower House would eventually become representative not of the peerage who had previously funded their election campaigns but of the New Entrepreneurs who would be funding them henceforth – so that her decrees would only be blocked on those few issues on which both agreed. She had a whole raft of civil service reforms prepared and ready to go as soon as they became viable.

The movement of the Dow Jones showing the effect of Black Monday; image by Edward.

Black Monday

But on October 19th, the lesson of History that the Empress had not taken into account produced a shattering reminder of its importance, as a series of profit-estimate revisions in “blue chip” stocks brought about a massive fall in the Dow Jones index – some 23 per cent – creating a panic that threw the Empire into recession. Hundreds of thousands of jobs were placed in immediate jeopardy across the globe as businesses reacted sharply to their loss of value. Interest rates rose steeply as banks tried to minimize the risks they faced when issuing loans, putting further pressure on business profitability. Many businesses went under, further driving up unemployment, increasing the perceived risk of other loans, prompting further rises in interest rates.

The Derailing Of Reform

Instead of concentrating on her agenda for civil service reform, the Empress joined politicians and employers alike in trying to restore sanity to the economy, while the unions began bitter wars for redundancy benefits, retraining, and other reforms. In order to prevent starvation, the government was forced to introduce welfare payments on a massive scale; to prevent a total collapse of the health-care system, it was forced to provide a medical rebate scheme offering free health care. All of this meant that the government was spending money that didn’t exist relative to economic growth. That produced massive inflation, driving prices up – and weakening the value of every pound the Government was providing. For every pound of expenditure, by year’s end, the government was providing only 89p of value – or, more accurately, for every pound of value that had to be provided to maintain a minimum, marginal existence for those affected, the government had to spend £1.13. The Government had unintentionally become the Empire’s greatest “employer” – paying people to do nothing more than look for work – and the Civil Service was more entrenched than ever.

So massive were the consequences that they overshadowed all other news that year – not that there was much to tell. There were the usual assortments of calamities, calumnies, and catastrophes; the usual pointless bloodshed continued with nothing gained on any side; and so on. None of it mattered very much by year’s end, though it seemed important enough at the time.

1988

The political balance within the Empire had changed as a result of the Empress’ manipulations, though, and she had regained much of the throne’s ability to rule by decree. There was an ongoing momentum toward change, which she was able to harness. Although she had not been able to achieve her long-term goals of civil service reform, she was at least able to influence events. She started by dismissing for incompetence the senior civil servants of the Empire, and promoting younger public servants to positions of high authority – people with fresh ideas, who had not yet been fully indoctrinated into the culture of the Peerage. Using the New Year’s Day honors list, the Empress demonstrated clearly that the game of Imperial Rule had changed, and that the player whom many thought defeated was staging a thunderous re-emergence.

David Copperfield, noted magician, in a publicity photograph for his 1977 Television Special 'The Magic Of ABC Featuring David Copperfield', photo by ABC Television. Copyright may persist in some countries.

The Honor List of 1988

For many years, the awarding of honors had been under the control of senior civil servants. They were the ones who drew up the lists of potential honorees, who controlled the choices (frequently through a “magician’s force”, where two choices were offered for each post, the one the peerage wanted and one who was patently unsuitable). While there had been the scope for the occasional “extra” to recognize some common citizen who had achieved extraordinary things on behalf of the Empire, these titles were ceremonial and titular, conferring no authority within the peerage.

With the New Year’s Honors of 1988, that changed. Not having had the chance to be taught all the long-established tricks of the trade, many of the new Department Heads had provided two reasonable candidates; where they had not, it was immediately apparent, and the Empress was able to refuse to accept the nominations, insisting on two viable candidates. The result was a weakening of the influence of the conservatives even within the Peerage; another 10 years of the same, and some sort of equilibrium would be reached, though that was hoping for too much, and the Empress knew it, as shown by her memoirs (posthumously published in 2025).

A window of urgency

The Empress knew that visible change was a political necessity, and very quickly. Firstly, there was the need to restore some optimism in the future of the Empire, to end the economic distress that was an unintentional by-product of her power struggle with the peerage. Secondly, there was the need to recapture public confidence in the ability of the Government to improve the welfare of the common man, lest the rioting and endless succession of coups continue. Thirdly, while the new “Big Businesses” were currently progressive in their attitudes, time and growth would make them increasingly conservative as the political and economic landscape grew to their liking. They would support changes they perceived as being in their best interests, but once those were achieved they would want to maintain the new status quo as strongly as the previous crop had done, so her window of opportunity was limited. And finally, the senior civil servants she had dismissed were still around, as were their predecessors; given time, the new Heads of the Civil Service would learn from them all the old tricks; only the headlong rush of events had left them floundering so far. Having pushed them off balance, she had won a short span of time in which to make changes; if it were squandered, she would soon find that nothing but the names had changed.

So this would inevitably be a year of transition and rapid changes within the Empire.

Symbol of the League of Nations, original image by Mysid and refined by others. Symbol may be trademarked.

100 days of Chess moves

The Empress started by ending the military farce in Afghanistan. The invasion had never come close to achieving its goals, and by providing an ongoing reason for hostility amongst the Arabian population, had encouraged acts of terrorism.

She then rearranged the Civil Service, creating a number of new bureaus and departments to deal with the new technologies and their applications.

She decreed tax benefits for the ongoing training of employees, especially those who would otherwise be retrenched, and changed the priorities of the Space Programme to a more pro-environmental stance.

Finally, she established a new organization, the League Of Nations, to provide a political counterpoint to one of the Peerage’s more subtle but far-reaching advantages, the “Family” network.

The League Of Nations
The latter requires some further description of the political structure of the Empire, as it had evolved over time.

The Empire had a monarch, an elected lower house, an appointed upper house, a civil service, and a military force – the latter mostly consisting of forces contributed by each member nation.

Each member nation also had a monarch, an elected lower house, an appointed upper house, a civil service, and a military force, as described in earlier sections of this history.

In theory, the latter were restricted to dealing with internal issues, and were subservient to their Imperial Equivalents. In practice, there had been unification, over the years, of the Peerages and Civil Services. The Civil Service of Spain, for example, could be considered nothing more than the “local branch” of the Imperial Civil Service. In particular, intermarriage amongst the traditional peerage meant that any given member could trace some sort of relationship to almost any other member; they were effectively one large family, related by blood to the Empress (The exceptions being the peerages of Africa and the Middle East, who had shown considerable disinclination to marry outside of ethnic bounds).

This of course totally broke down the theoretical independence of the various national peerages, and shows quite clearly why the same problems were experienced virtually simultaneously in all corners of the Empire. It hadn’t mattered much before the rise of modern communications, but coordination of strategies, policies, and planning had become increasingly easy as technology developed.

The elected politicians had no equivalent relationships, a significant contribution towards their ineffectualness.

The “League Of Nations” was intended to rectify that lack and to provide a new channel for diplomatic coordination amongst the member nations. Furthermore, it was inherently self-limiting; if politicians assumed positions of real power over their nations, they would inevitably develop agendas which favored their nation over others, which would generally lead to the development of opposing power blocs within the League.

The Avalanche Of Reforms

Many of the Empress’ moves in the 100 days of reform had been designed to dissolve – or at least, disrupt – the uniformity of structure in the peerage and the Civil Service. The real coup came with a reform of the Peerage.

Elizabeth decreed that the membership of the Peerage should be restricted in number according to the growth of industry within the Empire, rather than by population and socio-economic regionality.

At the same time, she changed the structures of the civil services in many of the key members, on the pretext of trialing various solutions to the problems facing the Empire to determine the best one – but “inadvertently” making them harder to relate to one another except through the Imperial Civil Service. While this shifted power from the locals to the overall Civil Service, it also meant that the flow of information through those offices rose by an even greater ratio. Of course, in a time of economic distress, it was easy to refuse any requests for the expansion of the civil service.

The net effect was to ensure that the Imperial Civil Service had no time to use their theoretically-greater powers – unless they neglected their primary tasks. Any Civil Servant who exercised their authority thus became eligible for dismissal on the grounds of incompetence. The theory was that over time, as the National civil services came to recognize that they had greater effective powers than their Imperial Counterparts, they would begin to assert that authority to their own benefit over those in rival nations, fracturing the overall unity of the Civil Service.

They’re all Domestic Issues

In the meantime, the National politicians were using the disarray of their local peerages to effect their own changes.

The South African government announced harsh new restrictions to the Apartheid policy, clearly making the first step in abandoning it altogether.

Ethiopia and Somalia used the avenue provided by the newly-formed League Of Nations to arrive at a peace settlement arbitrated by Denmark, who were seen by both as having absolutely no stake in the outcome, and hence as being as impartial as it was possible to get. This ended 11 years of ongoing border disputes.

The military forces released by the Afghan withdrawal undertook a series of lightning strikes in the Middle East, designed to force various hostilities into a pause for reflection, and began to blockade various nations in the region whose politics were opposed to that of the Empire – Iran and Iraq, Israel, Afghanistan, Syria, and so on.

This caused an immediate escalation of the ongoing Oil shortages, but those nations suffered immediate economic collapse. Within months, peace talks were underway between previously intractable opponents. At the end of the year, PLO leader Yasser Arafat renounced terrorism as an effective means of instituting change and recognized the state of Israel in a speech to the League, in a largely-successful attempt to win political support in the new venue for his claim to “Dispossessed Nation” status.

The emblem of the ‘Empire Games’ (Commonwealth Games) until supplanted by the Five Olympic Rings. Image by Bill William Copton.

The Last Empire Games: Symbol Of Change

The Empire Games of 1988 were especially significant, not only for the sudden wave of optimism that was sweeping the world as a consequence of these changes, but because of a new participant. For the first time, Chinese athletes participated, at the personal invitation of the Empress. Although not overwhelmingly competitive in the events, the Chinese nevertheless began discovering common ground with the population of the rest of the world, and the event was adjudged a magnificent success. Certainly, the Mao reacted to the humiliation they experienced with a determination to succeed that would ensure the process of bridge-building would continue.

This was the last Empire Games per se to be held; at the conclusion of the event, it was announced that it would be renamed the Olympic Games thereafter, and that all nations who were willing to attend the Barcelona Games in four years time would be welcome. India, Pakistan, Central America, Japan, and many others could rejoin the international community. This was viewed as potentially the greatest step in achieving peaceful relations with the Mao in decades.

Only one occurrence marred the Games; Canadian Ben Johnson won the 100m, but was subsequently disqualified for taking steroids. Although it prompted global outrage, this occurrence was not seen for the ominous portent that it would eventually prove….

The Exxon Valdez, three days after the vessel ran aground and shortly before the fateful storm. Photo by the US National Oceanic and Atmospheric Administration.

1989

The peerage rallied, as predicted, in 1989. Their chosen battleground was a legal challenge to the reported likelihood of an ecological catastrophe, alleging that the campaign was designed to restrain trade and force them into unprofitable business practices.

This was an unfortunate choice, because 2 months after the lawsuit was announced, the Exxon Valdez, a fully-laden oil tanker, ran aground, spilling more than 40 million liters of oil along the Alaskan coastline. While mathematicians pointed out that probability merely measures how unlikely something is to happen, what little public support they had wilted, and many of their allies chose the better part of discretion and abandoned them. No-one was fooled; the timing may have forced the traditional businesses further onto their back foot, but they would wait a while and regroup.

OPEC Headquarters Building, Vienna, Austria. Photo by Priwo.

The Coming Of OPEC

The Middle East continued to slowly edge towards a fragile peace. In June, one of the most divisive figures in the region, the Ayatollah Khomeini, died in Iran. Beloved of the religious hard-liners, even his many enemies conceded that the man fought for what he believed in – usually, they added, to the point of obsession. No matter how earnest his beliefs, it was his unwillingness to compromise – and his willingness to treat any who did as an enemy – that had been the cause of failure of many initiatives intended to heal the wounds. Ultimately, others would take his place as clerical spokesman and intractable fundamentalist, but without his political authority.

The change of government brought about a domino effect, leading the oil-producing nations of the region into a trade coalition designed to regulate oil prices and availability – OPEC. This was the biggest indicator to date that the Arabian nations were serious about peace; in order to be effective, OPEC pretty much presumed a lack of hostilities.

Frederik de Klerk at the annual meeting of the World Economic Forum, January, 1992. Photo and copyright by World Economic Forum.

The End Of Apartheid

Nor was this the only government to change direction completely in response to the changes in global political atmosphere. The next to fall was the Botha administration in South Africa; although they had moved towards reform in the course of 1988, the Botha administration had not moved far enough, fast enough, to satisfy the advocates for change who were growing in authority under the reform umbrella. He was succeeded by F. W. de Klerk, who immediately set about dismantling Apartheid and ending years of political repression.

Imperial Prime Minister John Major in 1996, Photo by PFC Tracey L. Hall-Leahy, Courtesy US Department Of Defense

The Winds Of Change

The same sort of thing was happening all over. Poland, Germany, Hungary, Yugoslavia, Bulgaria, Czechoslovakia, Romania, and Russia all moved away from previously hard-line conservative governments towards more progressive representatives. At year’s end came the ultimate expression of the reform movement, as sufficient changes were made in the different member nations governments to trigger a change of government at the Imperial level. John Major was suddenly the Prime Minister of the Empire – right in line with the decreasing average age of politicians, and ending decades of conservative administration.

1990

The nineties felt like a new beginning, in a lot of ways. The problems of the past were falling away, one after another, and in their place, new conundrums were emerging to trouble the policymakers. The three iconic figures of the 80s were now at their lowest ebb to date; Michael Jackson was a recluse haunted by media allegations of strange lifestyles and pedophilia; Sir Bob Geldof was virtually penniless, divorced, and an often-forgotten man; and the Princess Diana, while still a public favorite, was beginning to experience the bloodthirsty downside of what was generally known as the “media circus” or “paparazzi”, as her marriage to Prince Charles began its public disintegration.

Economic Recovery

The economy had staggered back to its feet after the blows dealt it in the 80s and the same mood of cautious optimism was pervading the stock markets and boardrooms, driven more by colorful entrepreneurs than by faceless men in corporate grey.

There was something of the underdog about these flashy moneymen, battling with the corporate greed of the peerage, which lent many of them public support and market share that they would otherwise not have captured. The 90s would reveal the economic consequences of this new economy and the fates of a second generation of new entrepreneurs, but at the time, they were riding the crest of a wave, a looming boom and groundswell of confidence not seen since the end of the Third Global War.†

That’s WWII in our history.

The Middle East: Musical Ideologies

The Middle East continued to experience a transition that had seemed unthinkable only a decade earlier, as the architects of much of the violence moved closer to a moderate position, while nations who had become members of unstable alliances against them reacted by becoming more extremist and distant from the Empire. In particular, Iraq and Israel would start the decade as staunch members of the Empire and end it as mistrusted agitators, not far removed from hostilities. The first part of this transition had become a clear trend in the late 80s; 1990 saw the introduction of the second.

Saddam Hussein as Prime Minister of Iraq, photo by Iraqi State Television.

Iraq: A disintegrating friendship

The concern was all about weapons of mass destruction under the control of nations in an unstable region, and had been since the Libyan flirtation with a nuclear weapons programme in the late 70s. It had been known for years that Iraq had built up vast stockpiles of nerve and biological agents, flouting the Imperial treaty with the Mao, but because they had no delivery systems for these weapons, because the weapons had never been used, and because the Empire needed staunch allies so badly in the region, there had been no urgency in addressing the situation. Furthermore, one of the reasons for the Iraqi regime’s ability to be such a staunch ally was the security that having these weapons under their control conferred. In 1990, that began to change. As peace continued to grow within the region, the Empire’s need for Iraqi support eased; but that alone was insufficient to prompt military opposition to Prime Minister Saddam Hussein, now entering his 12th year as supreme political power in Iraq.

In 1990 the second “great excuse” – lack of a delivery system with sufficient range – vanished. In April, Imperial Security agents intercepted components of a “supergun” bound for the regime. Subsequent investigation revealed that the Iraqis had been quietly acquiring Russian-made SCUD missiles for much of the last decade; these potentially had ranges roughly double those of the Nazi V2 of the Third Global War, which is to say that London was just inside their theoretical range. The SCUDs were conventional high-explosive devices, but a review of intelligence revealed that they could be modified to carry any warhead desired – and that the greatest concentration of the relevant expertise was Iraqi in nature.

None of these were newly-discovered facts; the failure was not one of intelligence gathering, but one of analysis, as different departments of the Civil Service attempted to protect their sources of information and gain an advantage over the heads of rival departments. The Ministry Of Trade knew of the purchase of the missiles, but because Iraq was a member in good standing and one of the few nations in the region allied to the Empire, they knew of no reason to bring the purchases to the attention of the Intelligence Department; the Ministry of Science knew that the expertise in converting SCUD missiles to alternate payloads was Iraqi, but because they had no missiles, saw no need to stress the fact to the Intelligence analysts; and so on.

All this put a new context on a number of side-comments made in speeches and in diplomatic talks with Saddam over the previous decade, in which the Iraqi Leader had repeatedly emphasized the “rich rewards” that would follow from the national support of the Empire without enumerating exactly what those rewards were. Analysts had dismissed this as rhetoric, or had assumed that the rewards in question were the same ones that the Empire foresaw – peace and prosperity, stability and trade. The question of what rewards Saddam believed would be forthcoming had never been asked, let alone answered. Now for the first time, intelligence analysts focused their attention not on the enemies of the Empire in the region, but on one of their allies, and they did not like what they found.

Saddam Hussein had privately expressed the opinion at one point that the conflicts of the Middle East would not be resolved until the region was brought under the control of a single political force; this had been interpreted at the time as meaning “Imperial Control”, but now it was speculated that he believed that Iraq would be rewarded for its support of the Empire by being given control of the dissident nations. He had similarly suggested that placing the control of the disputed Palestinian West Bank into the hands of a third party might be the most viable solution to the problems there – again, not mentioning Iraq specifically, but in this new context, the implications were clear. Saddam had been expecting to receive his “rewards” – but with peace looming without the conquest of rogue nations being necessary, he was preparing to take what he considered his due, with or without Imperial dispensation.

USK Navy F-14A Tomcat over the burning Kuwaiti oil fields - DN-SC-04-15221, photo provided by US Department Of Defense. Click on the thumbnail to see the full-sized image.

The Kuwait Invasion

By the time all this had been uncovered, it was late July. Even as Imperial Intelligence was reporting to an emergency joint session of the upper and lower Houses of Government, Iraqi military forces were staging. On August 2nd, 3 days after the realization of what was forthcoming, and long before a response had been determined, Iraq invaded Kuwait, and within a week had conquered the neighboring nation. Furthermore, the successful conquest had utilized nerve gas on both military and civilian targets. Iraqi forces were lining up to invade Saudi Arabia even as the Empire was being briefed on Iraq’s Kuwait campaign.

Upping The Ante

From out of nowhere (literally), a new factor strode into the centre of Imperial deliberations on a response, as a representative of the Mao appeared before the Imperial Parliament to convey a message from his government: “It now stands clearly revealed that a faction of your Empire has foresworn the agreements held between us. At the time of these treaty violations, they were considered good and faithful servants of your Empress. The Empire Of Greater Britain is clearly in breach of the agreements between our states. You now brand this faction as rebellious, but have taken no action to curb this rebellion. We will generously grant you a brief span of time in which to address this situation, before we declare your default a formal violation of the treaties between us, grounds for immediate action on our part. Understand that our agreements do not recognize individual prerogatives; they are treaties between cultures, and cultures do not change. Should you fail in this most reasonable request, your Empire will henceforth be considered untrustworthy, all agreements between us shall stand as void, and we will undertake whatever actions are required to eradicate any dangers posed by the Empire Of Greater Britain. Your Empire will forever be considered false and foresworn by ours, and we shall eradicate it without quarter or clemency. This is our first, last, and only warning.”

Having issued what amounted to a declaration of War, to be rescinded only if the Empire acted immediately against Iraq, the Mao representative vanished as suddenly as he had appeared.

This put events in Iraq into a whole new context. Within 72 hours, the Imperial military had begun a full mobilization, a complete embargo and blockade of Iraq had been decreed, and the Empire was officially in a state of Civil War. On August 7th, Imperial troops began to arrive in Saudi Arabia. A deadline of Jan 16th, 1991 was announced for the complete withdrawal of Iraq’s military to their former borders even as the military buildup for Operation Desert Storm continued. Imperial policies made it clear to Iraq – if chemical or biological weapons were employed against Imperial citizens, a Nuclear response would be forthcoming, with no further warning.

1991

Iraq again dominated the news in the early part of the year. January 16th came and went with no attempt to meet the Empire’s deadline; accordingly, on January 17th, hostilities commenced. It was afterwards determined that Hussein was so convinced of his own entitlement that he found it impossible to believe that anyone would seriously oppose him, and that reports of the Mao intervention – which left the Empire with no choice in the matter – were discounted as propaganda within Iraq.

This was the first “modern” war, unlike the Afghanistan campaign, which had been fought along traditional lines, and the various civil wars that utilized whatever weaponry was on hand. Ironically, it had been the former relationship between the Empire and Iraq that had left the latter with a state-of-the-art military apparatus.

The conflict began with exchanges of missile barrages. While Imperial anti-missile technology proved very effective at stopping the majority of the somewhat out-of-date SCUD missiles*, the Iraqi interceptors had considerably less success at dealing with the latest generation of Smart Missiles launched by the Empire. Within 48 hours, the Iraqi airfields were severely damaged and their Air Force crippled, clearing the way for precision bombing runs over key defensive emplacements. 48 hours after these commenced, the ground invasion began, and on the fifth day after the commencement of hostilities, the Iraqis were in retreat, falling back to planned positions – only to find that those positions had been destroyed by the Imperial Air Force. After 5 weeks of military action, Kuwait had been all but reclaimed, though the departing Iraqis, in an act of economic barbarism akin to the temper tantrum of a child, had torched hundreds of Kuwaiti oil wells. On Feb 27th, Kuwait City was liberated and the Iraqis defeated.

*It emerged, two decades later, that the criteria used to define a ‘successful interception’ were generous beyond belief – simply launching a missile when an incoming launch was detected wasn’t quite enough to be called a successful interception, but having that anti-missile missile head in the general direction of the attack was. Nevertheless, the outcome was a clear victory for the Imperial forces.

The Dictation Of Terms

Politically, the conduct of the war with Iraq had been dictated by outside forces. Having achieved the objectives that the Empire had set, and unwilling to sustain the high numbers of casualties that would have resulted if Saddam had been forced to employ his weapons of mass destruction, the politicians of the Empire determined that diplomatic and trade pressures would be a more acceptable method of dealing with Iraq.

An ultimatum was issued – the Empire would not invade with the intention of deposing Saddam, provided that his regime immediately acted to destroy their stockpiles of chemical and biological weapons and any facilities capable of manufacturing more, and submitted to Imperial inspection and verification procedures.

There was widespread demand for the reclassification of Iraq’s Imperial Membership status, but that option was restricted to dealing with an inability to administer a nation effectively. Instead, trade sanctions and military isolation zones were established, essentially locking up the entire country within its own borders. It was widely anticipated that a civil war would soon depose Saddam without the need for the Empire to dirty its hands by violating its own charter and principles, especially given that Iraq was dependent on Imperial grain shipments.

The Aftermath Of Victory

News emerging through March suggested that this would indeed be the case, as the people reacted to their defeat and its consequences. The Iraqis responded in various ways: some blamed the Empire for turning against its own; others blamed their leadership for being so foolish as to challenge the might of the Empire, a fight that they were never going to win; some shouldered a burning resentment against whoever they considered ultimately responsible; others responded more actively; and a few looked around for someone to lash out at – their attention falling on the repressed Kurdish minority. Word of the resulting atrocities slowly began to filter out from the Iraqi borders even through the censorship imposed by Saddam, and in April the Imperial forces established to enforce the blockade created a number of safe havens for Kurdish refugees fleeing the regime.

Publicly, Saddam had agreed to the terms of the Imperial Ultimatum, but almost immediately he began playing games with the Imperial inspectors, hamstringing their abilities to pursue their mandate through bureaucratic interference and refusing to permit access to religious sites and his personal palaces. The Empire, for its part, was wary of any military buildup and ready to respond at once to any actual use of the banned weapons, but was otherwise prepared to starve Saddam out.

Map of The war in Yugoslavia, 1993, by Pawel Goleniowski. Click on the thumbnail to see the full-sized image.

An unstable stability

1991 also saw a civil war in Yugoslavia, as Croatia and Slovenia sought independence within the Empire. There was further violence aimed at achieving the same goal in Northern Ireland. Terrorism continued to evolve; where once the goal had been a perpetual wave of small attacks, the trend now was toward fewer acts of greater impact. And the first member of the Peerage fell victim to the economic climate, as Earl Robert Maxwell died under mysterious circumstances, his business & publishing empire, beset by massive debts and financial corruption, collapsing within days.

In hindsight, it is easy to see that the optimism and security felt by the Imperial Citizens of the time were a superficial coating of progress with a perilously-rotted and unstable core lying beneath it. But, as remarkable as it now seems, no one at the time foresaw the inevitable crash.

1992

This was the year that some problems long considered “solved” within the Empire returned to haunt the administrators of the Throne, as racial issues dominated events. The year began with the Empire continuing its hypocrisy, simultaneously recognizing the independence of Croatia and Slovenia while denying Ireland the same treatment. Those setting policy were accused of bending over so far in the name of political correctness that it had become reverse discrimination – the Balkan Nations were granted recognition while the Irish were not because one population was of Slavic stock and the other was White. Although this accusation was strenuously denied, it was becoming apparent that this was a de-facto policy brought about because no-one was willing to risk appearing politically incorrect.

South Africa continued to march toward the abolition of the racial division that had marred its political landscape for decades, a referendum giving the Prime Minister of the beleaguered nation overwhelming backing for his plans to dismantle the Apartheid policies, while a Trade Embargo was imposed on the rogue state of Libya in an attempt to force it to hand over suspected terrorists.

The race riots of 1992 had an eerily-familiar feeling to those who remembered the Watts Riots of the 1960s. This photo from 1965 by New York World-Telegram, declared to be in the Public Domain by the US Government.

The Riots Of Los Angeles

On April 29th, racial issues surged to the forefront, as 5 days of rioting began in Los Angeles following the acquittal of a white policeman for the beating of a black motorist. Throughout the Empire it suddenly became clear that all sorts of racial double-standards remained in effect, regardless of the laws demanding equal treatment. Ethnic stereotyping and the natural congregation of communities of similar ethnicity had combined to overlay separate sub-nations over one another within the same geographic space, within which law, and social perspectives, were perceived differently, and handled differently. No solutions were obvious, and the problems would remain an undercurrent within Imperial society for the next two decades.

The Imperial Racial Divide

Nor was this phenomenon unique to the USK‡; it was simply more pronounced and obvious there. In London, Pakistani and Indian communities who had taken refuge from the invasion and conquest of their homelands by the Chinese developed their own branches of organized crime. In Australia, militant Aboriginal leaders began forging links with radical groups; although they would eventually back away from the terrorism route, the connections forged would eventually prove crucial to the Empire’s ongoing prosperity.

Within virtually every member nation of the Empire, there was an ethnic minority which began to feel oppressed by the majority or their public instruments. It didn’t matter how trivial some of the complaints were; the fact that there was any difference in treatment of ethnic groups at all was sufficient to arouse heated protest. Only one ethnic stock was not permitted to cry “foul” over its treatment – the Caucasian. These social developments virtually assured the introduction of exactly the type of reverse-discrimination that had set off a new wave of violence in Northern Ireland only a few months earlier.

Wanted poster for Serb leaders including Slobodan Milosevic, from the US State Department.

Serbia & Montenegro

So heated were the ethnic divisions in the Balkan regions that by the end of May, the Empire was forced to impose trade sanctions against Serbia and Montenegro following fierce attacks on Sarajevo, and the Imperial Military – hardly recovered from their Middle Eastern excursion – was poised to attack Central Europe. Within the month, diplomatic efforts at finding a resolution had largely been abandoned, and Imperial Troops had captured Sarajevo airport, permitting relief supplies to be airlifted into Bosnia.

For many of those affected, these shipments were the first substantial food they had received in months. As the Imperial hold on the region grew, stories of ethnic cleansing began to emerge, which soon lost the Serbs any sympathy or support for their already unstable position. In mid-August, the Empire officially condemned the “ethnic cleansing” in Bosnia and vowed to use force if necessary to deliver humanitarian aid. For a time, the Serbs seemed to back off, but renewed acts of aggression in November caused the imposition of a Naval Blockade. On December 21, Slobodan Milosevic became Prime Minister of Yugoslavia. The Empire’s racial problems were about to get a whole lot worse.

Michael Jackson at the Cannes Film Festival 1997. Photo by Georges Biard.

The Crumbling Of Icons

At the time, the election results were hardly front-page material beyond the local area. Instead, the primary hue and cry in the media at the end of the year was the separation of the Prince and Princess of Wales. The lives of the three “anointed ones” of the 1980s had led all three into hard times by now; Michael Jackson’s popularity had ebbed away as lifestyle choices, innuendo, and a rapacious media left his music less relevant than his existence. He had been forced into a hermit-like lifestyle which was inherently abnormal, and then criticized for the abnormality of that existence. No longer the entertainer who had captivated the world, he was a parody of popularity.

Similarly, Sir Bob Geldof’s musical career had failed to survive the ramifications of Band Aid; no matter what he produced musically thereafter, it was inevitably going to seem shallow in comparison to the weighty issues and the massed icons of popular culture with which he had attacked those issues. His crusade had cost him his marriage, custody of his children, and his career; he would forever live in the shadow of what he had achieved in the mid-1980s.

In comparison, it might seem to the casual historian that Princess Diana had escaped the carnage of the paparazzi relatively whole. While headlines had screamed of the infidelities of her husband, and the partisan support of the Empress, and the betrayals of personal staff and acquaintances, she had at least managed to retain her dignity and avoid being torn down to the common level. But at last the scandal sheets had real meat to their stories, and what had once been seen as the ultimate expression of hope for the future of the Empire was reduced to an increasingly tawdry divorce proceeding. From December 9th, 1992, the Empire would have to look elsewhere for hope.

The Uffizi Gallery is one of the oldest and most famous art museums in the world. Photo by Chris Wee showing the gallery restored after the bombing.

1993 – A year of extremism

The Balkan situation continued to be a source of trouble throughout the year. The Imperial War Crimes tribunal was called into session for the first time since 1945 to seek justice for the Bosnian atrocities – when and if the people responsible were captured. Imperial Forces supervised the evacuation of civilians from Srebrenica, hoping to clear the field for combat. In mid-year, six “safe zones” for refugees were created. These immediately became targets for Serbian radicals.

It was also a year of increased terrorist activities elsewhere. A Palestinian extremist bombed the World Trade Centre in New York, killing five people and completing the transition to “mature” terrorism. The extent to which the authorities were still struggling to find a working counter to the threat posed by the new terrorists was shown by the bungling of a raid by USK police forces on the headquarters of an extremist religious sect in Waco, Texas, after a 51-day siege; 72 people were killed, including many women and children. Just five weeks later, a bomb was detonated outside the Uffizi Gallery in Florence, Italy, killing 6 people. To this day (2055), it is uncertain who was behind the attack and what they hoped to achieve, as several credible groups claimed responsibility.

Smaller Extremism

1993 also saw the emergence of other extremist groups, fighting for smaller, better-defined causes rather than one all-encompassing general manifesto. The prototype made its existence known to the world on March 11, when a doctor was murdered by an anti-abortion activist outside a Florida abortion clinic. As people despaired of being able to influence the political policies that controlled their lives, their increased desperation was becoming the breeding ground for more and more extreme positions.

Battle is joined

More than anything else, this showed that despite the Empress’ success in taking back some control over the Civil Servants when it came to major and singular issues, the fundamental apparatus that was the real problem remained whole and intact. It was still the purview of petty bureaucrats and civil servants to interpret and implement government policy in most of the areas that mattered to the ordinary citizen.

To be sure, the principle of the Empress overriding or revising policies in any given case or any specific rule or regulation had been established; but unless a case came to the Empress’ attention, nothing had changed. To some extent, her victory had simply ensured that elements of the Civil Service were now covertly antagonistic to her continued reign.

And of course, they controlled the media, and the media told a significant segment of the population what to think. The peerage/industry machine had taken several months to formulate their response to the increased political control of the Empire by its head of state, but from this year forward, many stories appearing in the mass-media began to have anti-monarch undertones, as the battle lines began to be drawn.

'Internet' Icon from the Tango Project, image by warszawianka. Click on the image to see the terms of use & learn more about the Tango Project. Image from the Open Clipart Library http://openclipart.org/

Birth of The Internet

But this was also the year in which the tools which would ultimately lead to the overthrow of the civil service and the reinvigoration of the Empire by its citizens began to achieve popular acceptance. There was a new conduit for information coming into existence – one which linked people together directly, and which bypassed the media barons’ spin doctoring. Although it would take many years to mature, the age of communications had achieved its ultimate expression: The Internet.

Cropped Photograph of Srebrenica in August 2004 by Samum. At first glance, an idyllic setting, but note the missing roofs and damaged buildings in the foreground. Click on the thumbnail to see a full-sized, unedited image.

1994

It seemed to many that one troubled region (the Middle East) had been calmed only for the Balkan Regions to take their place. Ongoing unrest in Bosnia dominated many of the 1994 headlines. In February, a mortar attack on a marketplace in the capital, Sarajevo, killed 68 and wounded over 200, and in March, Serbian forces bombed the “Safe Zones” of Gorazde and Srebrenica. In April, the Imperial military retaliated with Air Strikes against the Serb forces at Gorazde, and the conflict resumed a slow boil. In December, a cease-fire accord was finally reached.

In general, it’s fair to say that the conflict had little impact on events in the Empire overall. Fighting in a relative backwater of little strategic importance was not something to overly disturb the daily routines of a culture that vast.

The key phrase, according to later criticism, was “of little strategic importance”; the Empire, they would accuse, had grown so unmanageable that there was no capacity left to fight for the ordinary citizen, only the larger ideological conflicts. This criticism overlooks the obvious – that any conflict in which there was a confluence of interests would naturally receive greater support from individuals who had that vested interest. Thus, the peerage (whose economic alliances were threatened by any disruption of oil supplies) would enthusiastically support any action aimed at achieving stability in the Middle East, but such support would be far more lukewarm when confronted with an issue of relatively pure morality, with no greater economic impacts in prospect. No conspiracy is necessary when there is an overlapping of purposes.

Peace at last?

But in one particular part of the world, also renowned for its ongoing violence, the pointlessness of the unrest was noted by a few unexpected observers. The year seemed like any other in the Middle East from early on; in late February, over 50 Palestinians were killed in a Hebron mosque when an Israeli settler opened fire with an automatic weapon. A week later, six Israelis were killed by a Palestinian sniper. It all seemed so pointless to both sides, and was attacked by both as ‘unproductive’, a refreshing change of perspective. On May 4th, Israel and the PLO signed an agreement giving Palestine self-rule in the Gaza Strip and Jericho.

Former nuclear test site Maralinga, South Australia. Photo by Wayne England, who participated in the clean-up, April 2007. Note: the colors are correct. Click on the thumbnail for the full-sized image.

The Thorny Issue Of Reparations

1994 saw developments in the racial issues which had been edging toward a resolution for some time. In South Africa, the African National Congress won the first free elections. The transition to black self-rule was now complete.

The Australian Government agreed to pay £7,000,000 to South Australian Aborigines displaced by the nuclear tests of the 1950s and 60s, and the following week, the New Zealand government offered £400,000,000 in compensation to the Maori Tribes displaced by the arrival of European Settlers.

These were disquieting developments to many, opening the door to lawsuits for compensation by many others, even as they addressed passionately-held sources of tension. Most fervently opposed to such measures was the USK, which knew that any serious attempt at reparations to its Native American and African-American populations would bankrupt not only the country but the entire Empire.

The contentious question remained: how much discounting of any reparations should take place for the acknowledged benefits of, and participation in, modern society? How much should be discounted because the events in question took place in a different time, when different behavior and practices were deemed acceptable and right, and to which a more modern standard of morality could not be applied? How responsible were modern generations for the failures in the past of those who could not have known “better”? In other words, how much of the demand for reparations was the result of greed alloyed with hindsight?

Those who opposed reparations, in principle, were always going to struggle to persuade others of their position, since the events of the past few years made it clear that Racial Equality was not a “solved” problem, but those who supported it, even in principle, had an even more difficult struggle to overcome: the limits of practicality. Realism dictated that some compromise would have to be reached, a compromise that neither side was willing to contemplate.

One proposed solution placed a statute of limitations on the offences, but no agreement could be found on where the dividing line should be located. Another proposal applied a fixed discounting rate to each year since the act of inequality was committed, on the basis that ongoing processes of social reform provided a larger share of any compensation owed; since this applied the greatest discounting to the most expensive claims, it met with considerable private support, but since it would fail to achieve any of the social ambitions of the most vocal and aggressive reformers, it failed to get traction amongst the “victims”.
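The arithmetic of that second proposal can be sketched as a simple compound discount; note that the function name, the 2%-per-year rate, and the claim figures below are hypothetical illustrations, not values from the record:

```python
# Illustrative sketch only: the discount rate and claim amounts are
# invented for this example; the proposal specified no actual figures.

def discounted_reparation(base_claim: float, years_elapsed: int,
                          annual_discount: float = 0.02) -> float:
    """Reduce a claim by a fixed rate for every year since the offence,
    on the theory that each year of social reform repays part of the debt."""
    return base_claim * (1.0 - annual_discount) ** years_elapsed

# Compounding means the oldest grievances, typically the largest claims,
# are discounted most heavily:
recent = discounted_reparation(1_000_000, 30)    # roughly 545,000 still owed
ancient = discounted_reparation(1_000_000, 140)  # roughly 59,000 still owed
```

This compounding is why the scheme attracted private support, since it shrank the biggest liabilities the most, while offering the most vocal reformers almost nothing.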

In truth, neither side was willing to abandon the pulpits of exorbitant rhetoric that the issue provided. Any solution would have to be imposed from higher up the policy food-chain, but the leaders of the Imperial Government had more than enough to contend with in the modern day problems of the Empire.

The Rise Of The Internet

Communications technology continued to grow apace; by the end of the year over 15 million people were connected to the internet. From this year forwards it would be considered a mass medium. With it came a surge of excitement in the business world, as any business connected with the internet seemed to be poised for mammoth profits.

It was quite literally possible for a startup with nothing more than an idea to make its founder a multimillionaire overnight. The IT sector was exploding.

But 1994 was not without its warning bells in this area; for this was the first year in which a new subject was mentioned in the specialty press, a subject that would end the century on everyone’s lips: the millennium bug.

1995

Conflict resumed in Bosnia as the cease-fire agreement broke down. After months of pointless bloodshed, an agreement was reached for the partitioning of Bosnia-Herzegovina into separate nations for the Muslims and Croats, and for the Serbs, respectively. This was the only practical solution, but at the same time it aggravated the minority populations in both of the resulting nations, who felt disenfranchised as a result.

The New Terrorism

The transition to “the new terrorism” was completed, as exemplified by the only three serious terrorist incidents to occur in the course of the year. In the first, a religious fringe group conducted a nerve gas attack on a crowded Los Angeles subway, killing 10 and injuring thousands. Three months later, the leader of the Supreme Truth cult would be arrested for masterminding the attack. A month later, a terrorist bomb in Oklahoma City killed 158 and injured hundreds. The third, coming late in the year, involved a radical right-wing Zionist sect which carried out a surgical strike on the Israeli cabinet. The Prime Minister and a number of aides were killed, hundreds more were injured.

The Price Of Profit

Even more significant, though less dramatic, were the consequences of a decade’s gradual transition in Economic Circles. Traditional businesses had been forced to react to the depredations of the “New Entrepreneurs” by adopting many of the same philosophies and practices.

In particular, many had converted from a policy of growth through capital acquisition and an expanding customer base to a practice of treating these as mere seed capital for investments, which would then generate incredible profits for the owners and shareholders. In the process, many of these corporations’ traditional values, such as customer service, had been thrown aside as “uncompetitive business practices”. The credo had become “profit at any expense”. As the need for ever-increasing profits bred unrealistic expectations, greed, and overconfidence, it was inevitable that someone would go too far.

There had been a number of near-misses, covered up by higher authorities lest confidence in the economic system be eroded, producing a depression; but in 1995 there occurred a loss so vast that it could not be concealed. The result was the total collapse of Barings Bank after losses by trader Nick Leeson of more than £800 million. Tragically, this warning sign was misinterpreted by the management of other corporations as a breakdown of the audit systems, the procedures that were supposed to ensure accountability and limit losses. The corporate culture which caused the breakdown would not be examined until it was far too late.

The Privatization Fallacy

The profits to be made by listing companies on the stock exchanges were so vast that even governments had gotten into the act, privatizing many essential industries, in a direct reversal of policies that had been in place and considered incontrovertible only two decades earlier.

Industries that had been nationalized, because their continued functioning was deemed essential to society, were now privatized for vast sums to retire national debt that had accumulated over decades.

Of course, as soon as they were in private hands, the same credo – “profit at any expense” – came into operation, and with it, the vulnerability to all manner of economic ills. On the surface, the economy was stronger than ever; driven by booming technology stocks, it was growing at unprecedented rates – 1000% a month was not unheard of in extreme cases and sectors – but at its core, the economy was rotten, and the first stiff wind would result in the loss of branches – if not the collapse of the whole. And hurricane season was fast approaching.

The beautiful Lagoon at Mururoa Atoll, scene of a series of French Nuclear Weapons tests in the 20th century. Photo by Georges Martin, 10 May, 1972.

1996 – collapse of the house of sticks

Politically, the slow boil within the Empire continued, as Chechen Rebels seized 3,000 hostages in the Russian town of Kizlyar.

France continued a series of tests of Nuclear Weapons in the Pacific aimed at giving them a Nuclear Arsenal independent of Imperial control.

The IRA called off the cease-fire that had endured for 17 months, just as the rest of the world perceived genuine hope for a peaceful resolution of the ongoing conflict.

A series of rapid-fire suicide bombings in Israel killed 31 people and injured over 100. Following the fourth attack within the fortnight, Israel announced that all peace agreements with Palestine had been abrogated by the Palestinians, and invaded to reclaim the territories to which they had granted independence. Within a month, an Israeli rocket strike had hit an Imperial base in Lebanon, killing 105 civilians and turning the political clock back to the darkest days of the 1960s.

1996 was also the year that the Prince and Princess of Wales petitioned the courts for divorce.

Emerging Social Trends

A number of trends that had been building for years made public debuts in the course of the year. Legislation aimed at controlling the Internet began to appear, but these were national laws – or worse yet, local ones – and as such completely unenforceable. These were the first indications of one of the dominant themes of a new era in Imperial History (and as such, will be discussed more fully in a subsequent chapter of this history): the rise of internationalism.

The quality-of-life debates that had been growing in intensity for decades came to a head as the Northern Territory of Australia passed legislation permitting terminally-ill patients to instruct their doctors to end their lives, despite Imperial and National laws against assisted suicide. Significant not only in the quality-of-life domain, this was a further manifestation of the theme of the years to come, as the interaction of laws at different hierarchic levels within the Empire, and some of the fundamental assumptions of the Empire itself, were called into question.

Resistant Diseases

But the biggest themes of the news year were medical developments. The warning by the Imperial Health Authority of an imminent potential plague of antibiotic-resistant strains of tuberculosis called into question 60 years of accepted medical practices. By the year’s end, resistant strains of many other diseases would also be generating headlines, as would the arrival of new, more persistent strains of diseases long considered to be of minor importance, in particular Legionnaires’ disease.

Up to 10 million sheep, pigs, and cows were destroyed in Britain before the Foot-and-mouth 'epidemic' was brought under control. Similar scenes took place in most of the Empire. Photo provided by Lawrence Livermore National Laboratories, USA.

The Mad Cow Nightmare

This followed the announcement in January of an outbreak of foot-and-mouth disease in Britain and Europe, and the admission by the Imperial Health Authority in March that the “Mad Cow Disease” could be transmitted to humans through eating contaminated beef products.

This, for the first time in history, raised the specter of an epidemic that took advantage of the existence of the Empire. While there were customs laws and inspections when shipping goods from one country to another, the fact that they were all members of an active and overriding political organization meant that these were far less stringent and restrictive than would otherwise have been the case. A massive programme of testing every herd in the Empire would be announced in December; but by then, France, Germany, Spain, Portugal, Austria, Denmark, Norway, Switzerland, South Africa, Tanzania, Zanzibar, Argentina, Brazil, Mexico, and Canada would all have confirmed outbreaks.

Mercifully, thus far, Israel, the US, Australia, and New Zealand, all appeared to be free of infection; and immediate bans on the import of beef, beef products, and fodder were put in place to keep them that way, while harsh countermeasures were undertaken that had been derived from long-standing policies on Anthrax infections. A single instance was determined to be sufficient cause for the slaughter and incineration of the entire herd.

Beef prices throughout the majority of the Empire collapsed, and untainted beef became a luxury commodity. The USK reserved the bulk of its beef production for domestic usage, over considerable protest; exports from Israel were limited by both political concerns and practical difficulties; and that left the antipodean supply as the only safe source, a fact that the national governments immediately began to take advantage of. Short-sightedness squandered what could have been a huge windfall, however, when additional export charges failed to distinguish between live cattle and cattle for slaughter; some of the herds reaching affected ports were immediately diverted from the slaughterhouses to usage as breeding stock. It would take a decade, but eventually the domestic herds would be repopulated from uncontaminated sources and – aside from an occasional isolated outbreak – rebuilt.

It did not happen before public dietary patterns had been fundamentally changed, however. Lamb and Sheep production had grown quickly to occupy much of the gap left by the virtually-vanished beef industry, and Mutton and Chicken would be the dominant meat source for most of the Empire for decades to come.

Mad Cow and the USK

Willie Nelson, one of the primary organizers of the original Farm Aid benefit concert. Photo by Larry Philpot of www.soundstagephotography.com


The Agricultural sector of the USK economy, despite being the largest employer in the nation, had been struggling for more than a decade. It is not insignificant that one of the first imitators of “Live Aid” had been the rather more topically-focused “Farm Aid”, targeting support for family farmers in the USK in danger of losing their farms through Mortgage debt. A concert was organized for Sept 22, 1985, less than a year after the event that was its direct inspiration, and quickly evolved into an annual event (missing 1988 and 1991).

Responses to the Mad Cow crisis in 1996 were consequently more varied than might be expected. Some primary producers saw the event as a vindication of the superiority of the USK over the rest of the world, and argued against any increase of exports, a view that – when stripped of the excessively-nationalistic rhetoric – would ultimately prevail. Others wanted to trade uncontaminated beef for political concessions at the Imperial Scale, while some wanted to impose additional taxes on beef exports to raise funding to be spread as relief payments throughout the agricultural sector.

Nervous commodity markets immediately discounted the value of Beef stocks, which in itself imposed new economic pressures on the farmers, but which did not go as far as the near-total collapse of prices in the realms to both the north and south of the USK, where outbreaks had been confirmed.

This immediately produced a black market in cheap – questionable – beef shipments into the USK. Rumors of these shipments began circulating almost immediately, further depressing an already deflated market and further lowering public confidence in the beef industry. It was concern that inflating the value of beef further would only encourage these unsafe practices that ultimately killed any prospects of the USK using the international demand for beef to solve its domestic agricultural problems.

1997

The “Mad Cow” catastrophe went from bad to worse as it was discovered that the soil itself could harbor the infectious agent. This discovery was made as farmers attempted to replace their herds, only to have the disease re-emerge in cattle who had been tested and certified “clean”. It was clearly necessary to not only slaughter an entire affected herd, but to sterilize the soil on which they had grazed and to quarantine the affected farm for a period of 6 months – draconian measures that aroused storms of unrest amongst the public.

Despite the produce only coming from farms tested and declared free of the disease, the domestic beef market in much of the Empire collapsed to such an extent that it would be a decade before it had fully recovered. But with these harsh measures stringently applied, the threat posed by the disease was clearly receding by mid-year.

Only then did the government begin to examine closely the causes of the original problem, seeking answers to the questions of where the infection had come from and what could be done to ensure that it never happened again. The answers would not be as forthcoming as Imperial analysts expected, and would not become public for years.

The sea of flowers left at the gate of Buckingham Palace in memorial to Diana, former Princess of Wales, speaks to the affection in which she was held. Photo by Maxwell Hamilton.

The final bloom of the ‘English Rose’

This was the year in which the fairytales came to an end. Following her divorce of a year earlier, the former Princess of Wales began keeping company with Dodi Fayed, the son of the owner of Harrods (and many other businesses), and the shy smile that had captured the sympathies of millions returned with increasing frequency.

But the divorce had left her vulnerable to the predations of the paparazzi, and increasingly desperate measures were necessary to maintain the couple’s privacy. One rainy night in August, after the couple had been drinking at a restaurant, the press again caught up with them; the couple drove off at high speed, pursued by the reporters and photographers. On a road made greasy by the rain, the driver lost control of the powerful BMW, which hit a tunnel upright; the pair were killed instantly.

This was one of the critical moments in history; Diana died before the press were able to tear her reputation down for the sake of headlines, despite their best efforts; and in her passing, she was anointed a saint by the public. For the second time in the century, the Empire stopped for a few hours; in 1969 it had been the first landing on the moon, in 1997 it was for the funeral of the embodiment of the promised future. Even those who felt distant from the monarchy found in those days that the world was a sadder, greyer, place.

The Crown In Crisis

Had the Imperial Family behaved differently, the outpoured support might have shored up the rule of the Empress Elizabeth, or even that of the future monarch, Charles; but they were widely held responsible for the circumstances that led to Diana’s death, and instead found that popular support for their rule was markedly declining.

In part, this was driven by a hostile press, who were willing to attack anyone for headlines; in part it was driven by hostile media owners, who had come under attack by the Empress; and in part, it was fully deserved.

The Empress had been so busy focusing on the ongoing battles with the Government, the Peerage, and the Civil Service, and on one emergency after another, that she had lost touch with her subjects. A blinkered view of the deteriorating relationship between the Imperial Family and Diana, and an old-school perspective that told her to keep her feelings private, had gradually blinded her to what modern citizens expected of their rulers and public figures. This, more than anything, had been at the heart of many of the conflicts between her and Diana; she had considered the Princess’ behavior to be excessively demonstrative, consistently outrageous, and perpetually verging on the exhibitionist.

These problems were compounded by a situation in which etiquette did not permit her servants to correct or even advise her; on the contrary, she was supposed to advise them. It took someone who was not afraid to be critical of the Imperial Family, even to discard protocol completely, to correct the situation.

Prime Minister Tony Blair at the White House, 2001. Photograph by Paule Morse, made available by the Executive Office of the President Of The United States

An unlikely savior

Fortunately, there was such a person at hand – the newly-elected Prime Minister of England, Tony Blair, who had in the past been highly critical of the role of the Imperial Family. It was Blair who explained to the Empress his ‘theory’ that Princess Diana had been so popular with the public because she enabled them to identify with her; distance and forced deference were barriers that had been erected between the Imperial Family and the public, but if she desired, there was an opportunity to use the current climate of discontent to reconnect with them; all that was necessary was to discard an outmoded policy of presenting herself as an impersonal throne and let the public see the Monarch as a person. “Of course,” he is reported to have said, “I am sure that this is nothing that has not occurred to Her Majesty,” covering the breach of protocol. Prior to this moment, the generational gap had been the cause of considerable disrespect toward Blair behind the scenes within Buckingham Palace; at this moment, that barrier fell away, and the two entered into a new and more cooperative relationship, one that would reinvigorate the connection between Ruler and Ruled.

This was a policy that Prince Charles had also been advocating, something that he, ironically, had learned from Diana herself. But it forced on the Empress a very hard choice: she could rehabilitate the Monarchy’s image by humanizing herself, but in doing so, she would entrench the public perception of her son as unworthy to inherit, a view deriving from the tawdry infidelities that had caused the marriage to Diana to break down in the first place; or she could attempt to rehabilitate his image despite this additional handicap, risking the Empire itself should they fail to win back the support of the people.

The decision of destiny

By the end of the year, it was clear to the Empress that should Charles ever succeed her, he would preside over a hollow shell of what had been. She had originally intended to retire in favor of her son on her 60th birthday; but circumstances left her no option but to continue beyond that date, until her grandson, Prince William, had reached his age of majority. On William’s 21st Birthday, he would be crowned Emperor of Greater Britain.

It was Prince Charles who had made the decision for her – pointing out that if he abdicated his right to inherit, he could follow his heart and marry for the love he felt for Camilla Parker-Bowles, a decision with which he would be more than satisfied. Given a choice between the two, he would choose happiness over the throne – a choice that Elizabeth herself might have made, but one that she had never had the opportunity to explore. Only when the Empress’s memoirs were published in 2025 would the world learn that she had already decided to act as she subsequently did.

So ended the Age of the three anointed saints of the late 20th century. What had started as an absurdly popular recording of dance music had ended in the disinheriting of the heir to the British Throne.

Grokking The Message: Naming Places & Campaigns


This entry is part 5 of 11 in the series A Good Name Is Hard To Find

So, here it is: a day late, thanks to the Easter long weekend, but better late than never! Normal Service will be restored next week… in the meantime, enjoy.

We’re still working our way through what was originally intended to be Part 4 of this series, believe it or not! Part 1 concerned itself with setting the goals for the series, identifying the characteristics of a good name and considering the value that a good name could add – and the impairments that could result from a detrimental name. Part 2 explored Name Seeds, a system for generating character names of passable-or-better quality that I have developed. In parts 3 & 4, I examined name structures, which are the framework within which a Name Seed can be employed, a subject that segued into telling a story with a name.

Logically, if I were not so focused on trying to make up for lost time, I would have left the last couple of sections of Part 4 for this segment of the series – it would have been a better fit. But hindsight is 20/20 by definition, and at the time I just wanted to get as much of it done as I could – I was so tired at the end of it that I could barely put one word in front of another, let alone see the structural forest for the narrative trees!

Looking at the rather ambitious agenda I have laid out for this post, I’m not even sure that I’ll get all the way through it in one sitting. Assuming that I do, Part 6 (the originally-intended Part 4) will look at integrating name cores and name structures; and Part 7, to follow that, will look at various name-generation tools and aids – most of which may come as some surprise. But, in this part – and the last few sections of the previous one – we are taking a minor diversion. The subject is telling a story with a name…

Naming Places

Everything happens somewhere. If you are lucky enough to have your adventures take place on Earth, or some commercially-published game setting, a lot of the work of naming things is done for you – thank your lucky stars! If this is not the case, then you have a lot of work in front of you, because any map contains a lot of things that need naming: Mountains, Valleys, Forests, Plains, Deserts, Rivers, Lakes, Waterfalls, Seas & Oceans, Roads, Cities, Towns, Streets, Inns, Banks & Lawyers, Other Business Establishments, Towers, Keeps, & Castles – even Planets, Stars, Nebulas, and Galaxies… and I’m sure I’ve missed something.

Place names always tell a story, whether it be of exploration, discovery, exploitation, nobility, greed, or whatever – they always have a tale to tell. Even a name like “New York” tells one – the subtext being “Just like York was, only better”. Beyond this, there’s no one pattern – until we look at each type of Place in its own right…

Naming Mountains

Mountains are generally named for appearance (especially when a metaphor can be used to describe that appearance), for their climate, for the explorer who discovered the mountain or some family member, for its height (using a relative measure), for the political location, for the inhabitants, or for a famous person known to the discoverer. The combination of all these options is so broad that just about any name you can think of can be acceptable and justified later.

That’s a bad way to do business. Unless there is some obvious name (“Troll Mountain”) or something highly distinctive about the mountain’s appearance (“Dagger Point”), I prefer the name to reflect the historic activity of the region, or the type of action that I expect to hit the players with if they enter the vicinity, either symbolically or metaphorically.

These modes of assigning the name ensure that whatever name is chosen reflects the sort of things that the namer would have been thinking about. “Black Rock” (coal) – “Small Nugget Mountain” – “Long Pine” – “Bloodfreeze” – “Goat Back” – “Twisted Alley” … well, you get the idea.
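For GMs who like to roll such names up mechanically, this lexicon-combination approach can be sketched in a few lines of Python. The word lists below are invented placeholders, not a canonical table – stock them with terms drawn from your region’s history or the encounters you have planned, and note that the one-in-three chance of a fused “Bloodfreeze”-style name is just my own assumption:

```python
import random

# Hypothetical lexicons -- swap in words that reflect the region's history
# or the kind of action you intend to spring on the players there.
DESCRIPTORS = ["Black", "Long", "Twisted", "Grey", "Broken", "Blood"]
FEATURES = ["Rock", "Pine", "Nugget", "Back", "Spire", "Hollow"]

def mountain_name(rng: random.Random) -> str:
    """Combine a descriptor with a feature word, occasionally fusing them."""
    desc = rng.choice(DESCRIPTORS)
    feat = rng.choice(FEATURES)
    # Fused, single-word names about one time in three (an arbitrary ratio).
    if rng.random() < 1 / 3:
        return desc + feat.lower()
    return f"{desc} {feat}"

rng = random.Random(42)
print([mountain_name(rng) for _ in range(4)])
```

Seeding the generator means you can reproduce the same batch of candidate names later, which is handy when you only jot down the one you liked.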

I rarely explain the origins of the name, and certainly not without a high-level skill check of some kind. The more evocative the name, the better – but once you have the PCs on the hook of curiosity, you have to reel them in gently, to use a fishing metaphor! This way, the PCs are never sure whether I’m dropping hints, describing history, describing prophecy, being cryptic, or trying to mislead them – a lot of potential interpretation when the real objective was simply to get a name that sounds cool.

The same technique works for Volcanoes, but the lexicon will usually involve anger, hostility, violence, smoke, or some other more specific reference to the nature of the mountain. These can be made quite subtle, however, if you are in the mood – “Glacier Slip” is a great name for a volcano with a frozen peak. After the eruption, the PCs will know why the Glacier Slipped – beforehand, they won’t have a clue.

One more example that they may not get even after the fact, giving you an inside joke with which to amuse yourself: “Jaggerfalls”. Don’t get it? “Jaggerfalls” = “Jagger + Falls” = “Rolling Stones” + “Falls” = “Landslides” – and what causes landslides? Earthquakes, i.e. Tectonic Activity, i.e. Volcanically active.

The tallest peak in a region should have a name that especially reeks of Majesty. Put a little extra effort into naming it.

Naming Valleys

Valleys tend to be given optimistic names, because they will frequently be the closest thing to prime real estate in the region. Many are named for the first town to be located in the valley, or vice-versa. They may also be named for some other geographic feature in the region, such as “Three Falls Valley”. With those caveats, the same approach used for mountains usually works just fine for Valleys.

How many valleys can you think of that are named “Happy Valley”, “Pleasant Valley”, “Green Valley”, “Paradise Valley”, “Peaceful Valley”, or something similar?

This optimism can often be used to form a poignant counterpoint to whatever nastiness you have in mind for the location. The darker and more disturbing the events to take place, the more I tend to give the valley a sweetness-and-light name.

The final source of Valley names is the name of the tallest peak adjacent to the valley. When I don’t have anything especially nasty in mind for the inhabitants, I will often use this approach simply to save the more evocatively misleading names for the occasions when they will be most useful.

Of course, no pattern of this sort should be 100% consistent, or it will become predictable. Mix it up occasionally, just to keep the players on their toes.

Naming Forests

Forests are often named for the quality of light within them, or some metaphor describing that quality, though they will sometimes take their name from that of the underlying terrain. Avoid the temptation to name forests for a shape they might make on a map – not only do their perimeters change frequently (making that shape a relatively recent phenomenon), maps were usually not that accurate when it comes to forests.

The second popular source for a forest name is something related to the watercourse that feeds the forest. You HAVE figured out where all the water comes from and where it goes, right?

Finally, beware the temptation to use the actual word “Forest” too often within the names of this type of geographic feature. Pick some other descriptive quality or some metaphor for what lies within, most of the time. “The Silverdim” is a much more evocative name than “Dim Forest” – though “Dimwood” works for Tolkien.

Beyond these considerations, the same guidelines provided for Mountains work fairly well.

Naming Plains

Plains are incredibly dull places, lacking dramatic elements or geography to use in naming them. As a result, they are frequently named for the waterway into which they drain, for the color of the soil, for explorers and their families, in fact for just about anything the explorer can think of. As a result, most have very prosaic names.

An exception comes with one specific type of plain: Tundra. The climate tends to dominate the naming of such areas, often cloaked in metaphor once again.

Quite often, plains don’t receive any name at all – that’s how dull they are. The names are reserved for the towns that locate themselves on the plain.

Naming Deserts

If climate dominates the naming of Tundras, how much more common are such name derivations when it comes to Deserts? “Dry Well” works well. So does “Hazy Desert”. Naming a desert “Blue Water” after the mirages is a nasty trick.

Colour, especially of sand, is almost as common. “White Sands” is the obvious example, with the Painted Desert a close second.

Explorer names are also very common – Australia’s Simpson Desert, for example, is named for a person. Sometimes deserts are named for the first discoverers, sometimes for the first to successfully enter and return, and sometimes for a lost expedition.

Geography & Vegetation come fourth. Mesas, cacti, isolated mountains, all these may lend their names to the desert which surrounds them.

Naming Rivers

River names are almost as broad in derivation as mountains. Frequently, the best tool you have for naming geographic features is an Atlas, but when it comes to rivers, I’m afraid the US is mostly out of luck, because the names are those provided by the Native American inhabitants who preceded white settlers. If you can find a resource that provides literal translations of such names, however, treasure it, because these literal translations are the best bible to naming rivers and waterways that you can find. African rivers have the same problem, as do Australian, and the Pacific regions.

England & Europe are also out of luck, but for a different reason – the languages there have changed so much that the original meaning is frequently as obscure as for their North American counterparts.

Spanish speakers may have an advantage here, because the Spanish frequently renamed the rivers they discovered in places like South America – though many may retain native names of obscure derivation, so not even this guide is completely infallible.

If you can’t tell where a river name comes from on a modern map, why should things be any different anywhere else? Use the “alien languages” techniques presented later in this post (or in the next, if I run out of time) to generate a language for the original natives and use it to name the rivers by translating names derived in the usual ways.

In fact, the only time that you really have to worry about naming rivers and waterways in general is when there are no “native speakers” to ‘solve’ the problem for you. When this occurs, take a step back and use some literally descriptive elements – Size, width, shape, depth, colour, speed.
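If you would rather automate the invented-native-language approach, here is a minimal sketch in Python. The syllable inventory and the literal “translations” below are entirely made up for illustration; the real work is still building a gloss list from the descriptive qualities named above (size, width, shape, depth, colour, speed):

```python
import random

# A throwaway "native language" built from a fixed syllable inventory.
# Every syllable and gloss below is invented purely for illustration.
SYLLABLES = ["ka", "wi", "sho", "na", "tem", "ro", "hu", "lin"]
GLOSSES = ["swift brown water", "river of many stones",
           "wide slow mother", "water that sings"]

def make_word(rng: random.Random, n_syllables: int = 3) -> str:
    """Assemble a pronounceable word from the syllable inventory."""
    return "".join(rng.choice(SYLLABLES) for _ in range(n_syllables)).capitalize()

def river_name(rng: random.Random) -> tuple[str, str]:
    """Pair an invented 'native' name with a literal translation derived
    the usual way -- from size, speed, colour, and so on."""
    return make_word(rng), rng.choice(GLOSSES)

rng = random.Random(7)
name, gloss = river_name(rng)
print(f"The {name} ('{gloss}')")
```

Keeping the gloss alongside the name gives you the “literal translation” resource recommended above for free, ready to drop as a lore reward when a PC makes the right check.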

If you don’t think those qualities, and the metaphors they engender, are going to be enough, consider the nature of rivers – sometimes willful, occasionally contrary, changing direction as they see fit – these are qualities that (rightly or wrongly) have been attributed to women by men for millennia. “It’s a female prerogative to change her mind” – is there anyone in western society who hasn’t heard that before? In modern times, it can be appreciated that this is probably the result of human biology – monthly hormonal changes, the changes and cravings of pregnancy, and so on. Nevertheless, I’ve found that giving rivers feminine names works very well.

Naming Lakes

Lakes, on the other hand, are more frequently given names of more recent derivation. That is because it takes a relatively high level of sophistication to recognize a lake for what it is, rather than any other type of large body of water.

The larger the lake, the more likely it is to have a name of ‘modern’ derivation. (“Lake Superior”, “Lake Victoria”). The smaller it is, the more likely it is to have a native-tongue-derived name. So for small lakes, use the same approach suggested for Rivers; for larger lakes, size and importance are obviously the dominant factors to consider in naming them.

Naming Waterfalls

The Waterfalls people think of are always spectacular geographic features, frequently very beautiful, and warrant naming accordingly. But there are innumerable small falls, especially in mountainous regions, and these frequently receive more prosaic names.

The road from Sydney to Katoomba, for example, is a distance of less than 48km (30 miles), but I have counted more than a dozen minor waterfalls – often little more than a trickle – in that span. Now, the cross-mountain passages around Sydney are a little unusual in that there is no access through the valleys; to get across the Blue Mountains – as people found out the hard way – you have to actually go across the top of the peaks, down into a valley, then up across the top of the next peak. So we get to see more of this phenomenon than citizens of most other countries.

You can get a better idea of the scale of the situation with a quick squizz at – and remember that these photographs are just the larger ones, there are many smaller ones not featured!

With two scales of waterfall, there are two approaches to naming, and these tend to follow a similar pattern to that of lakes – the smaller ones have minor, relatively unimportant names (though some can be quite picturesque, as the link above shows), while the larger, more spectacular ones tend to be named for more important people. The more prosaic names are often named for the nearest township, or the waterway, or the peak.

Naming Seas & Oceans – and Straits

The largest bodies have specific and unique names, derived from ancient Gods (“Atlantic Ocean”, from Atlas), from some relative characteristic (“Pacific Ocean”, from the word meaning peaceful, named for the contrast with the Atlantic), or from the dominant landmass (“Indian Ocean”, for India).

Intermediate bodies – Seas – are generally named for the local landmass, especially when there is a slightly-different archaic name for the landmass. Often, these need to be qualified with a geographic location to distinguish one from another – “South China Sea”, for example – but a quick glance over this list of seas will show that this general statement is honored almost as often in the breach as in the observance. “Red Sea”, “Cooperation Sea”, “Cosmonauts Sea”, “Black Sea”. Added to which are the bodies of water named for their explorers, also obvious on the list – “Mawson Sea”, “Drake Passage”, “Bass Strait”, even “Bismarck Sea”.

The smaller the body of water, the more likely it is to have been named either from a native source, for the discoverer (or a relative or sponsor), or for a famous explorer.

Naming Roads

It’s not uncommon for roads to have more than one name, because a road gets its significance from where it leads. Each town, then, will often have a different name for each road that leads from it.

There are very few exceptions to this general rule. Most of those are named for the explorers who mapped and surveyed the route followed by the road. The longer a region has been settled, the less likely this is. Navigational references are also reasonably common, as are roads that are named for a geographic feature that they pass – a road past “Washingoa Falls” (a waterfall, invented name) might be named Washingoa Road.

This phenomenon means that giving the geographic feature a good name is a two-and-a-half-for-one beneficial deal – not only does the feature become an iconic element of the landscape, but the road shares in that iconic status, and (this is the half), each name-checks and reminds the players of the other. (For the record, I don’t think “Washingoa Falls” is a very good name).

Naming Cities & Towns

So, if many roads get their names from population centers, the problem of naming the roads is merely deferred – and not for very long.

The names of population centers frequently follow a pattern that differs from one geographic and socio-political region to another. You can often hear the name of such a population centre and think “that sounds like a town in (region)”. Names from the US Northeast are different to names from the Midwestern US which are different to names from the Western US, which are different to names from Mexico, or Alaska, or Hawaii, or Southern England, and so on.

In part, these patterns are real, reflecting the history of settlement – Southern California names have a more Spanish flavor, for example – but in part, they are psychological.

The key to naming cities and towns is to employ generic names for places that don’t matter, and reserve the effort for the ones that do – then try to capture the iconic flavor that you wish to impart, so that hearing the name puts you into the correct mindset for the landscape.

Do this right, and a lot of other things that would be hard work become easy.
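For GMs who like to automate the grunt work, the regional-flavor idea can be sketched as a tiny name generator: give each region its own pool of name fragments, so that anything drawn from one pool “sounds like” that region. A minimal sketch follows – the region labels and every name fragment in it are my own inventions for illustration, not drawn from any real gazetteer.

```python
import random

# Hypothetical regional pools of name fragments. Each region pairs a list
# of prefixes with a list of suffixes; combining within one pool keeps the
# regional "sound" consistent.
REGION_PARTS = {
    "southern-england": (["Ash", "Thorn", "Wyche", "Elden"],
                         ["bury", "combe", "worth", "stead"]),
    "spanish-colonial": (["San ", "Santa ", "El ", "La "],
                         ["Rosa", "Mateo", "Dorado", "Mirada"]),
}

def town_name(region: str, rng: random.Random) -> str:
    """Build a town name by joining one prefix and one suffix from the region's pool."""
    prefixes, suffixes = REGION_PARTS[region]
    return rng.choice(prefixes) + rng.choice(suffixes)

rng = random.Random()
print(town_name("southern-england", rng))   # e.g. "Thornbury" or "Ashcombe"
print(town_name("spanish-colonial", rng))   # e.g. "San Mateo" or "La Rosa"
```

Use the generator freely for the places that don’t matter, and hand-craft the ones that do.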

Naming Inns

Take, for example, the concept of an inn or hostel. There is a world of difference between an Irish Pub and the equivalent establishment in New York, London, or Las Vegas, or outback Australia. Not only will they have different names, but the appearance and flavor of the establishments will be very different.

If the town name already has the players (and yourself) in a receptive and geographically-appropriate mindset, simply referring to “an inn” conveys the right mental image right away. Reinforce this with an appropriate inn name, and the mind fills in any blanks in the details provided by the GM with an appropriate mental image.

This impression can be fragile, however, and easily disrupted if there are jarring discrepancies between the description you provide and the impression generated by the name. It’s important to get the architecture and furnishings right, or you will undo all the good work.

The easiest way of making sure that all the details match up is to identify a real-world analogue for the region. Make sure that the neighboring regions also match up.

For example, let’s say that your game setting is somewhere very much like the central Irish countryside. Use the town names from the region as models and templates for your town names, get descriptions of the local architecture from tourist sites, and so on.

You can also work from the other direction – find a book which features an inn or establishment description, and use its location to lead you to regional maps and other information of use. It’s best to avoid fantasy novels for this purpose, for two reasons:

  • There is going to be a lot less reference material available concerning a small region of a fantasy world. You can’t exactly use Google Image Search to hunt for photos, or Google’s Street View to get a look at the local architecture.
  • You don’t know how accurately the author has done his research, and hence how consistent the architectural and narrative references are.

A far better source is generic non-fiction. Find an evocative narrative description and make it your own. Use it as a starting point for your own research – and be prepared to revise, replace, or abandon parts of the original description if your research contradicts it. You can even use a keyword internet search to find the right description. For example, “smoky cantina” pulls up a number of websites on a Google search, each of which contains part of a phrase – put them together, with a few bits in-between, and you get:

“Sad mariachi songs play until dawn over the moonlit beach behind the low fence. Men roll dice in the corner, wagering nonsensical sums on the outcome and puffing blue smoke from hand-rolled cigarettes. In a back room, two sweaty men in checked-red shirts and scarves are playing pool, while at the dingy bar, a surly bartender pours shots of tequila and lime for an out-of-place figure while a hot-blooded flamenco dancer crawls over him in search of a ticket to a better tomorrow.”

Notice how little of this passage actually describes the architecture or the people in the setting; and yet, how evocatively it conveys an impression of the place. Sight, sound, taste, smell, temperature – five of the six main senses are engaged. The reference to tequila makes it clear that the scene is in Mexico or the southwestern United States – so to really ground this location, fire up Google Maps, go to the right part of the world, look at the place names and use them as a template. Translate them if necessary – no need to name somewhere “La Cereza” if the language is inappropriate. Take the English translation – “The Cherry” – and do a search for similar names in the right part of the world. It won’t take too long to find “Cherrybrook”. Just change the iconic references – the dress style, the game, the music, and the drink (tossing in one or two more for good measure) and you get:

“Sadly-plucked lute strings waft music over the moors until dawn behind the low fence of the Cherrybrook Inn. Men roll dice in the corner, wagering nonsensical sums on the outcome and puffing blue smoke from a long-stemmed pipe. In a back room, two sweaty men in faded robes are playing jacks, while at the mahogany bar, a surly bartender pours tankards of ale for an out-of-place figure while a hot-blooded barmaid crawls over him in search of a ticket to a better tomorrow.”

We’re clearly talking English Pub; the only dating references we have are to the “long-stemmed pipe” and the “faded robes”, and those place it anywhere from the early 18th century back to the dark ages – all prime fantasy eras. Even without describing boars’ heads mounted on the walls, or pennants and flags, or thickly-smoke-stained windows, a sense of the presence of such typical decorative features is created.

There’s also a subtext – mahogany isn’t cheap, so there is a hint that the present clientele is a step down the social ladder from the pub’s past.
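The “swap the iconic references” step is mechanical enough to sketch in code: keep the evocative sentence structure, replace only the cultural markers. A minimal sketch follows – the substitution table is a hypothetical example built from the cantina-to-pub conversion above, not a tool I actually use.

```python
# Hypothetical table of iconic references: cantina flavor on the left,
# English-pub flavor on the right.
SWAPS = {
    "mariachi songs": "lute strings",
    "moonlit beach": "moors",
    "hand-rolled cigarettes": "a long-stemmed pipe",
    "checked-red shirts and scarves": "faded robes",
    "playing pool": "playing jacks",
    "dingy": "mahogany",
    "shots of tequila and lime": "tankards of ale",
    "flamenco dancer": "barmaid",
}

def reskin(description: str, swaps: dict) -> str:
    """Replace each iconic reference with its counterpart for the new setting."""
    for old, new in swaps.items():
        description = description.replace(old, new)
    return description

cantina = "Men puff blue smoke from hand-rolled cigarettes at the dingy bar."
print(reskin(cantina, SWAPS))
# → "Men puff blue smoke from a long-stemmed pipe at the mahogany bar."
```

The point of the exercise is that only the markers change; the rhythm and sensory detail of the original passage do all the heavy lifting.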

Inns and pubs are frequently named for wildlife, for the owner, for the town or suburb in which they are located, for some local geographic feature, for famous figures, for famous battlefields or events – in fact, for just about anything you can think of. Unfortunately, it’s just as easy to get a non-evocative name as it is to create an evocative one. “Cherrybrook”, the example given above, is somewhere in between.

When naming an Inn, the best approach is to try to capture the tone of the place, and of the action you want. Whatever overtones the name projects will be added to your narrative description; the same narrative will have a slightly-different nuance if the Inn is named “The Surly Griffon” as compared to “The Bath-house Tavern”, as compared to “The Soldier’s Rest”.

Naming Streets

Streets are usually named for the same things as everything else in the country – towns, famous figures, you name it. The only time to really worry about street names is when you want to cast a general impression or tone over an entire district – a subtext similar to that of an Inn, but applying to many buildings.

The plebeian approach is to take that subtext and apply it directly. “Diplomat Row”, “Merchant’s Way”… you get the idea.

A far more effective approach is to employ a metaphor for whatever quality you want the region to embody, or a synonym, or even for something you associate with that quality. “Envoy’s Row” and “Barter Way” both have a touch more nuance to them, a little more style. Compare “Temple Street” (okay but dull) with “Cloister Avenue”.

There are two parts to a street name, and that last example gives some notion of the importance of each. As a general rule of thumb, your important streets should never be named “street” – unless a bucolic humdrum is the mood you are trying to capture.

Remember, too, that a plebeian name will often be replaced with something descriptive by the local population – “Potter’s Road” may become “The Avenue of Smells” if there are a lot of tanneries along it, or “Tinker’s Road” if that’s where all the blacksmiths are located.

Naming Banks & Lawyers

There are times when a particular institution will want to project a particular image. Banks and Lawyers are the two institutions that reflect this most clearly; each needs to project trustworthiness and, consequently, conservatism. To some extent, in modern times, we have stepped away from that ever so slightly; but in almost every setting you can point to, the names of this type of institution will be positively dripping with formality.

The best way of expressing that formality is to take the rules for naming upper-class individuals and generate one or more, then name the institutions for those individuals.

With Banks, it is most commonly a single individual, but names that reflect the national government are also popular (Bank Of Cyprus, Bank Of England, Commonwealth Bank, Bank Of New South Wales – just to name a few that come to mind right away).

In general, the difference is that the Banks named for individuals are private banks, founded to facilitate growth and/or trade in a particular region, while the more abstract names are ‘official’ banks established by the Government.

One name is rarely enough for a law firm, however – two, three, or five seem to be the most common (for some reason, I’ve never noticed many with only four names. Perhaps there is some compound growth relationship that means most firms go from three partners to five almost every time – or not at all).
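The partner-name pattern is easy to mechanize when you need a firm name in a hurry. A hypothetical sketch, assuming you already have a pool of suitably formal surnames (the ones below are invented):

```python
import random

# Invented pool of "upper-class" surnames for illustration.
SURNAMES = ["Ashworth", "Pemberton", "Calloway", "Fairfax",
            "Winterbourne", "Hargreave", "Ollerton", "Strathmore"]

def law_firm_name(rng: random.Random) -> str:
    """Generate a firm name with 2, 3, or 5 partners (never 4), e.g.
    "Calloway, Winterbourne & Fairfax"."""
    partners = rng.sample(SURNAMES, rng.choice([2, 3, 5]))
    return ", ".join(partners[:-1]) + " & " + partners[-1]

rng = random.Random()
print(law_firm_name(rng))
```

Swap in surnames generated by your setting’s upper-class naming rules and the output will carry the right regional flavor automatically.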

References

The best reference I can point to for information on the origin and functions of Banks is fictional: in one passage of Time Enough For Love by Robert Heinlein, the central character, Lazarus Long, is acting the part of a Banker in a burgeoning Colony. Warning: I would rate this book as MA15.

I don’t have any singular reference to offer on the formative aspects of Law Firms. My understanding is a mélange of episodes of L.A. Law, A Civil Action (both the movie starring John Travolta and the novel by Jonathan Harr), and many novels by John Grisham. And oh, yes – throw in some Boston Legal while you’re at it.


Naming Other Business Establishments

Most other business establishments will reference the name of the owner, or the name of the settlement in which they are located, at least until the late 19th century or early 20th century. Only then do mass communications and a wide-ranging transport system permit business to start out national or international in scope.

There are exceptions, especially when it comes to trade consortia – the most famous example being the East India Trading Company, which featured prominently in the second and third Pirates Of the Caribbean movies.


Naming Towers, Keeps, & Castles

There are two primary reasons for such structures to be erected: first, to defend a region or a border; and second, to control and dominate the region around them. These can be characterized as defensive and offensive functions, respectively.

Naming conventions for these structures are often differentiated by the primary function. Defensive structures take their names from the population centers quite frequently, while offensive/control structures take their name from the surname of the family who control them – with further refinements to the name necessary only if there are several belonging to the one family.

One mistake that a lot of fantasy game writers and GMs make is failing to distinguish between Keeps and Castles. A keep is a fortified tower, frequently built inside a castle; the two terms are not interchangeable. Often, this won’t matter, but as soon as someone corrects the basic terminology of your name, its credibility – and all the beneficial effects that it might have had – goes out the window.

Until you are sure of what you’re doing, check Wikipedia – or some other appropriate reference source – any time you give a class of building a title!

Naming Planets

There’s always a story behind the naming of a planet as soon as you get beyond our solar system. The names we use in our own system come from mythology – and there are only so many mythological references to go around.

Take a look at the Extrasolar Planets Encyclopedia listing of the planets discovered to date beyond our solar system and you will find that not one of them actually has a name. Instead you get things like “1RXS1609 b” and “CD-35 2722 b” – clearly not names intended for everyday usage. As of this writing, 611 planetary systems containing 763 planets have been detected – and that’s not counting 158 Unconfirmed, Controversial and Retracted planets (some of which might eventually make it onto the main list).

Most authors don’t have a naming pattern for the planets with which they populate their science-fiction universes. Two of the exceptions are Larry Niven’s Known Space series, where each world has a name and a reason for that name – whether it be Down or WeMadeIt – and The Mote In God’s Eye (and its sequel) by Larry Niven, this time with Jerry Pournelle. Since the latter are set within Pournelle’s CoDominium universe, I think it fair to count these as separate examples, rather than a recurring gesture of verisimilitude by one author.

And that’s the lesson here. So long as you have a plausible reason behind the name, you can be as inconsistent as you like, except that if multiple planets are named by the same source, they will almost certainly exhibit a consistent pattern or theme.

Naming Stars, Nebulas, and Galaxies

Again, in modern times, these objects are given a user-unfriendly catalog designation that would never make the grade in regular service. The practice in Star Wars is to name stars after the inhabited planet that orbits them and append the word “System”, and that’s a definite step in the right direction.

Very few stars actually have names; the few that do received them in ancient times, because they were visible and distinctive to the naked eye – stars like Rigel, Regulus, Vega, Sirius, Polaris, and Mira. Most of these proper names derive from Arabic, with Latin a distant second. Only a handful have proper English names, such as Barnard’s Star. The problem with these names is that they are used inconsistently, often spelt in different ways with no standardization, and there are also a few cases where names have been duplicated – there is an Alnair in Grus and another in Centaurus, for example.

Another way of naming stars is by apparent brightness (as seen from Earth) and constellation, using the Greek alphabet – “Alpha Centauri”, “Epsilon Eridani”, and so on. Officially named the Bayer Designation, this system (created in 1603) quickly runs into problems because a constellation contains a LOT more stars than there are letters in the Greek alphabet – something not really appreciated until the later 19th century.
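The Bayer convention is simple enough to sketch: rank the stars in a constellation by apparent magnitude (lower magnitude = brighter), then attach Greek letters in that order to the constellation’s genitive name. A minimal sketch, using invented catalog IDs and magnitudes rather than real catalog data:

```python
# Greek letters in Bayer order (truncated for illustration; the real
# alphabet runs out after 24 letters, which is exactly the problem).
GREEK = ["Alpha", "Beta", "Gamma", "Delta", "Epsilon", "Zeta", "Eta", "Theta"]

def bayer_designations(constellation_genitive, stars):
    """stars: list of (catalog_id, apparent_magnitude) pairs.
    Returns {catalog_id: bayer_name} for as many stars as there are letters."""
    ranked = sorted(stars, key=lambda s: s[1])  # brightest (lowest mag) first
    names = {}
    for letter, (star_id, _mag) in zip(GREEK, ranked):
        names[star_id] = f"{letter} {constellation_genitive}"
    return names  # stars beyond the alphabet keep only their catalog IDs

# Invented example data:
stars = [("GSC-0001", 2.3), ("GSC-0002", -0.01), ("GSC-0003", 1.3)]
print(bayer_designations("Centauri", stars))
# → {'GSC-0002': 'Alpha Centauri', 'GSC-0003': 'Beta Centauri',
#    'GSC-0001': 'Gamma Centauri'}
```

Note the built-in failure mode: once `zip` exhausts the letter list, every remaining star is left nameless – which is precisely why the system broke down in the 19th century.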

The Guide Star Catalog II contains 945 million stars of up to Magnitude 21 (the higher a magnitude number, the dimmer it appears to be).

That’s an awful lot of names needed. No one system will be enough. Most stars will never receive a meaningful name – to do so, a star will have to be significant, and probably lacking a name from any of the other sources. And that’s without counting Nebulas and Galaxies!

So the rule of thumb to use is the same one as for planets – but employ it sparingly.


Naming Campaigns

GMs often don’t name their campaigns, simply referring to them as “My Campaign” or by the name of the game system. You don’t have to GM for very long before this becomes inadequate. Some GMs leave it to the players to come up with a name after it’s been running for a while but that often leads to unsatisfactory results. So it’s better for the GM to come up with his own name.

General Principles

A campaign title should tell the story of the campaign – and that gets tricky if you don’t intend to railroad the campaign. The title has to entice and tease the players without giving too much away, while still accurately summing up the overall uniqueness of the campaign.

That’s more easily said than done. I think that the easiest way to explain how to achieve this in practice is to demonstrate with ten of my own campaigns, and a couple of Johnn’s – which I’ll start with:

The Carnus Campaign

According to Johnn’s introduction (“A Brief Word from Johnn”), the Carnus Campaign started with the players in the City of Carnus, which was actually Ptolus with a new name.

Naming a campaign for a central adventuring location isn’t new, but the problems come when setting a second campaign in the same location. How do you distinguish between them? How do you refer to one and exclude the other?

Further, such a naming approach makes the adventuring location the central fact of the campaign, rather than simply the place where the action takes place. It’s the difference between naming the trilogy by JRR Tolkien “Middle Earth: The Fellowship Of The Ring” (etc) and naming them “The Lord Of The Rings: The Fellowship Of The Ring”. Now, if your campaign is just a lot of unconnected stuff that happens, that may be fine – but if there is a larger theme or plotline involved, such a name can detract from it (I’ll offer a counterpoint to this argument in a few paragraphs, so don’t get excited just yet).

The Riddleport Campaign

Johnn used the same approach when naming his Riddleport Campaign, but this was more appropriate since the city was/is central to the campaign premise and events, as his posts on the campaign here at Campaign Mastery, and at Roleplaying Tips make clear.

I have also seen a similar approach used in Pirate Genre and Sci-Fi campaigns, where a ship that serves as the PCs’ base of operations is the central hub of the Campaign, and hence the Campaign is named for the vessel.

The Adventurer’s Club

The Adventurer’s Club is the Pulp Campaign that I co-referee. It is named for the club that has gathered the PCs together, and that serves as a hub for their adventures. At the same time, the club has taken on a life of its own, having its own plot arc which touches the lives of the PCs frequently, either tangentially, incidentally, or directly. A couple of years ago (real time) the Club was taken over by the FBI as a resource too dangerous to be left to its own devices, for example.

There are a couple of subtexts to the name. Putting “Adventure” up-front in the title describes the sort of scenario that we run – very much a “there and back again” with dramatic action in-between – a stylistic promise to the players. “Club” emphasizes that the collective is more important than the individual parts that make up the PCs, and also stresses that alliances and fellowship will be ongoing subthemes within the campaign. Lastly, the name has the right flavor for a Pulp campaign.

Fumanor: The Last Deity

The players adventured in this campaign for two years before I revealed more than the first part of the name. As a result, they still refer to the Campaign simply as “Fumanor”. I didn’t like withholding the name, but it gave away altogether too much; that said, it took the PCs a lot longer than I expected to reach a point where they could be told the name, by a good couple of years. Initially, the title referred to the quest to name the last Deity of the Pantheon (described in more detail in “The Absence Of Plot Direction” section of my article, A Potpourri Of Quick Solutions: Eight Lifeboats For GM Emergencies), but it had been designed to have a potential sequel campaign with the same characters and with exactly the same name. In this second phase of the campaign, the title referred to the last Deity not to have joined the Pantheon assembled by the PCs, or to the rise of Lolth from lesser being to a Demigod (or better), or both – and implied that it had done so throughout the campaign, since the seeds and clues to both developments had been carefully planted in the course of the first campaign.

It’s worth noting that the first part of the title is the name of the Kingdom in which most of the action takes place because the central plotline was the destiny of that Kingdom. This, of course, is in direct contradiction to my earlier comments that such a title was only useful when the campaign was undirected; this is an exception to that rule because the direction and theme of the campaign are provided by the subtitle.

Fumanor: Seeds Of Empire

This effect, in turn, permitted me to continue to use that Kingdom, and its fate, as the central connecting thread of sequel campaigns. The Seeds Of Empire campaign is about the difficult transition from Kingdom to larger political state; the Kingdom has now grown to the point where Kingdom-level administration is inadequate, and is facing Imperial-scale problems – like rival contenders for control. Since that growth was a direct byproduct of PC actions in “The Last Deity” campaign, and the PCs were all from races whose political, social, theological, and personal statuses had been radically altered by the events of that campaign, the connection was fairly obvious. One of the three contending societies that feature in this campaign WILL dictate the shape of the emerging Empire – it’s up to the PCs to make sure it’s the one they want.

Fumanor: One Faith

Originally, there was only going to be one three-part sequel campaign to the original Fumanor, but when one of the players temporarily relocated to Canberra for a year or so and didn’t want to surrender his participation, I split it into two. The first part of the originally-intended campaign became the foundation of the One Faith campaign, the second and most of the third part became the basis of the Seeds Of Empire campaign, and I whipped up a new second half for the One Faith campaign. Although the events in the One Faith campaign thus far have preceded the entire Seeds Of Empire plotline, the two are gradually synchronizing; the whole shebang is intended to (eventually) climax in an epic finale featuring the PCs from both campaigns. At the moment, both campaigns are roughly half-complete.

Shards Of Divinity

When a player asks you to run a campaign so that he can learn how you do it, and how he can improve as a player, it’s hard to say no. Shannon was a player in the later stages of the “second half” of the original Fumanor campaign, but chose to drop out – the campaign was too big in scope, and he was too inexperienced, for him to get a handle on. Five years on, and he felt that he had learned a lot, and was now ready to dive into something bigger. The result was the Shards Of Divinity campaign – a world in which the source of all arcane power is the shattered remains of the original creator of the Universe, and it’s now running out, and in which one PC (Shannon’s) is – through a stroke of chance – in a position to undertake a quest to restore it – having become the sole witness to the original act of creation, and the highlights of human history since.

From that description, the source of the title seems fairly obvious, but the PCs are slowly coming to realize that there are layers of hidden meaning to the name as things that originally seemed quite unrelated begin to connect – everything from Gods in extreme depression who are a mere fraction of what they are purported to be, to the nature of divinity, to the source of divine power, to the nature of the fey, to mystic circles and rituals are starting to link to each other in unexpected ways, and everything they see around them is being revealed to be both more and less than they thought.

Champions

This is the oldest campaign of mine that I’m going to mention here. It was named for the superhero team that was the focus of the campaign, which in turn was named for the rules system. That team name was chosen by the players – but I now deeply regret not having pushed them to be a little more creative, as the fact that it is a trademarked name limits what I can do with my vast stockpile of notes and adventures. I’ve written two and three half-novels telling the adventures of the group, with a lot more material to work from – and none of it can be published without a complete rewrite.

Zenith-3

Over a decade ago, the Champions Campaign – which had been put on hold for a few years while I ran TORG – was rebooted into a sequel campaign. The original was heading toward Ragnarok, an epic climax; in the new campaign, that event was five years in the past. The PCs were a team of novices, recruited into a trainee program by the original, parent team and sent to an alternate dimension, D-Halo, because Earth-Prime was too dangerous for novices. They eventually discovered that they were in fact the focus of a conspiracy by a fifth agent within the ranks of the parent team, and had been sent somewhere almost as dangerous as Earth-Prime would have been. Eventually – at the end of the original Zenith-3 campaign – they overcame that threat.

The name of this campaign obviously derives from the code-name of the superhero team – even though, to the inhabitants of Dimension-Halo, they were simply known as The Champions – because this is the story of the team’s evolution and coming-of-age.

But the name carried a hidden sub-context: the team were forced to climb to the very summit of their chosen profession in order to succeed.

Warcry

I’ve described the origins of the Warcry campaign before, so I won’t go into it again. Created in a hurry as a spinoff to contain a PC that was too powerful for the main team, a minimum of effort went into looking beyond its self-evident title.

Zenith-3: The Regency Campaign

As the Zenith-3 campaign neared its climax, a miscommunication between my players and me was discovered. It had been my intent for them to return to Earth-Prime and deal with all the ongoing problems that I had seeded into the background; but they had the impression that they were to engage in a rotation programme, exchanging places with another of the Zenith teams – and they were quite looking forward to it. After some thought and discussion, a plan emerged which would see a split campaign – some adventures would take place on Earth-Prime and some on the new world to which they were assigned, Earth-Regency – whose history I have been publishing in these pages each Monday for the last few months.

I can’t give too much away at this point, but I have told the players – and so can tell you – that over time, their presence in Dimension-Regency will make that dimension a focal point for something BIG, which I have code-named Armageddon. I wrote extensively about the process that I employed in designing the campaign architecture in the series of articles on campaign and adventure structures (November-December 2011).

What I can say is that there are, once again, a couple of meanings to both the main campaign title, and to the campaign sub-title. There is the obvious reference to the team itself; but, once again, the team will have to climb to the peak of their profession – and beyond – in order to win at the end. The plot arcs and circumstances will give the characters the chance to do so, but seizing the opportunities will be up to the players; I can (and have) warned them that subtlety, cleverness, and control will be more important than raw power to the outcome. In terms of the sub-title, it is a metaphoric reference to the Dimension in which the adventures will predominantly take place; but there are at least 3 other layers of meaning as well, that I can’t reveal. Let’s just say that the campaign title is relevant for all sorts of reasons and leave it at that!

The Tree Of Life

Nor can I tell you a whole lot about this campaign yet. The basic premise, from which the campaign appears to draw its name, is that the cosmology of the prime material plane is shaped like a vast tree, with its branches running through two of the elemental planes to the outer planes, and its roots running through the other two to reach the abyss; that, for reasons they don’t understand yet, heaven is full; and that a demon prince has successfully wiped out every cleric (and virtually all the non-clerical support staff) of every church in the world in a simultaneous strike. Only four PCs survived, the de jure spokesmen of their faiths, and one of those has since fallen.

Once again, there are layers within layers in the campaign title.

Summing Up

Some of these campaign titles work well, for various reasons, mostly relating to a depth of meaning within the title. The rest range from acceptable to poor; these have only a straightforward meaning, of varying degrees of nuance and relevance. A great name gives a reference point and a context to the entire campaign; a poor one can detract from a campaign – or from the later usefulness of the work involved in creating and setting it up.

If I get the opportunity, I put a lot of effort into coming up with a campaign title; it serves as a touchstone to the identity of that campaign and is instrumental in shaping not only my thinking as the campaign proceeds but that of the players. In every case where I haven’t had that time (or the expertise, in the case of “The Champions”), I’ve regretted it to at least some extent.

Whew! Almost 8000 words and I am seriously out of time on this post. There’s still a lot to come; in the next part of this series, I will focus on the fine art of naming adventures, with dozens of examples. A Dozen Dozens is not out of the question…


The Imperial History of Earth-Regency, Part 9: Peter Pan, The Saint, & The Fairy Princess – 1980-1997


This entry is part 9 of 12 in the series The Imperial History of Earth-Regency


Pieces Of Creation is an occasional recurring column at Campaign Mastery in which Mike offers game reference and other materials that he has created for his own campaigns.

All images used to illustrate this article are public-domain works hosted by Wikipedia, Wikipedia Commons, or derivations of such works, except for the image of Prince Charles and Lady Diana.
 

This post was delayed for the Easter Holiday. I hope all our readers had a great break!

Photo by NASA, taken at the Kennedy Space Center.

The Communications Age: Peter Pan, The Saint, and the Fairy Princess – 1980-1997 (~60 years ago)

Author’s notes: Of all the material contained within this Alternate history, this was the section that my players found hardest to digest when it was initially presented to them. I think that this can be attributed to two factors:

  • First, they were either too young, or too old and cynical, to appreciate the way the public felt at the time about certain public figures; if you did not experience it, you can’t find it completely credible;
  • Second, their view of the era is principally Australian in nature (unsurprisingly), without an appreciation of how the rest of the world – North America and Britain in particular – responded to these individuals.

I’ll respond to their comments about specific individuals in authorial asides as they become relevant.

An era of Transition

1982 marked the beginning of the end for the Age of science, or so it seemed. At its beginning, scientific discovery had been seen as the answer to all problems, and the future had been perceived with optimism. Society was open and welcoming and people could leave their doors unlocked, and the Government was the people’s friend.

By its end, science had been forced to admit that it didn’t have all the answers, and might never have them. Much of that progress had been found to carry price tags that were unacceptably high – industrial pollution, thalidomide, the discovery of incurable social diseases like AIDS and antibiotic-resistant STDs. The drug trade threatened to tear society apart, having already driven crime rates so high that people lived in fear despite the locks on their doors and the bars on their windows.

The Government lied at best and conspired with big business and “the establishment” to keep “the system” in power, at worst. Hope for the future had been replaced with fear and greed. But when you reach rock bottom, there are only two choices: death or the long climb out of the abyss…

Author’s Notes: I personally lay much of the cynicism and mistrust of government in modern times at the feet of seven key events, and two of these did not occur in this alternate history. I thought it worth taking a moment to reflect on each of these, to provide some context for the events and attitudes of the alternate history.

  • McCarthyism: If you accepted the cold-war position that you were either “with us or against us”, then the McCarthy witch-hunts made a certain amount of sense, but can only be seen as having gone way too far. If you were more inclined to think that the other side was made up of human beings too, with the same desires and needs as ‘The West’, then they were an abomination, one miscarriage of justice after another. Either way, by adopting an us-vs.-them attitude and then casting popular figures with whom many people identified into the “them” camp (rightly or wrongly), McCarthy forced everyone else to think about which side they were on. As the witch-hunts became more and more ridiculous in their extremes, and increasingly politically biased against McCarthy’s domestic political opposition, the “them” camp looked increasingly attractive by comparison. This was divisive politics at its worst, and it should be no surprise that it divided the community – and put “The Government” that McCarthy represented into the opposition.
  • The Korean War: This was a conflict whose resolution never seemed to be a victory; it just limped to a conclusion. It engendered a sense not of a titanic struggle between two enormous alliances, but rather of leaders who seemed lesser than those who had come before. After all, the previous generation of leadership had won World War II fairly decisively.
  • The Kennedy Assassination: Although he was never as strongly supported as rose-colored hindsight would have us believe, it is nevertheless a fact that John F Kennedy embodied the hope of a brighter future to an awful lot of people, and that this hope seemed to die with him. In part, this is due to the contrast of the eras before and after this pivotal event – before, the Space Race seemed the dominant theme, and afterward it was the mud, muck, and flies of the Vietnam Jungle. Had Kennedy not been killed, I am sure that his all-too-human faults would have tainted his reputation; but he died, and became an icon and a legend.
  • The Vietnam War: Which, of course, brings us to the war that so many opposed so strongly that they eviscerated the servicemen and -women who fought it. I think that a lot of people resented everything about the war, from being forced to fight it through to the manner in which it was fought. Spouting slogans at the government wasn’t enough; the mob needed a symbol of the enemy they opposed – and that symbol became those who actually fought in the war, whether they wanted to or not.
  • The Watergate Scandal: With the world slowly realizing that their leaders pulled on their pants one leg at a time (the same as ordinary people), the atmosphere was ripe for the idealistic perception of government to be shattered, once and for all – and this was the watershed event that showed that the administration had feet of clay. I’m not an American, but even in Australia, the reverberations were felt. First hope, and now Trust had been destroyed; is it any wonder that cynicism and pessimism would be the hallmarks of the decades that followed?
  • The Tabloids: Feeding all of this was the inescapable conclusion of the trend that had started with Hearst and his willingness to pump up, or even fabricate outright, stories to sell his Newspapers. The tabloid mentality, pandering to the most sensationalistic urges and emotions of its readership, holds unremitting sway over the populace only so long as they believe what they read. Every time an excess of zeal in reporting is revealed, it fuels cynicism; every time the press pursues a beat-up story for the sake of sales, or ratings, it slices off a thin segment of the community who know better and who will be mistrustful thereafter. In order to reach those affected by this growing cynicism, the headlines and exaggerations have to be even stronger. The National Enquirer, in the 80s and 90s, became a byword for going to such nonsensical extremes that some people could no longer tell what was fiction and what was fact, a situation which has been lampooned mercilessly ever since. Newspaper stories were always colored by the Editorial philosophy and vested interests of their owners, I’m sure; but efforts to keep those influences at arm’s length were slowly worn away. All sense of self-restraint seemed to vanish, and people became increasingly aware of the bias that existed as it became more obvious. The current situation in the UK with The News Of The World and the phone-tapping scandal seems to me to be the ultimate expression of this trend, and hopefully the outraged reaction that has followed will be the start of a trend in the opposite direction.
  • The Dictators: And finally, providing fuel for the fire were the excesses of dictators. The revelations of the practices of Idi Amin were a bombshell to any who thought the human race had outgrown such barbaric acts, or been purged of them by the victory over the Nazis in World War II. Such events had occurred in the past; they were part of the folklore of human history, nothing new; witness the excesses of Vlad The Impaler, or the Spanish Inquisition. But that was the problem – people had thought that we had outgrown such barbarism, and when we learned that we (as a species) had not, it tarred everyone who had stood with the architects of barbarity with a little of the same brush. This problem persists even in more modern times – was Saddam Hussein’s persecution of his citizens because of their faith really all that different (if a little less systematic and extreme) from what the Nazis did to the Jewish population of Germany? And how much of the American reaction to the Gulf War was a sense of guilt over having supported his regime?

It must be remembered that the current generation will become World Leaders in three-to-five decades, and the experiences and philosophies upon which their attitudes are built will form the baseline of their politics. The attitudes of youth 30-40 years earlier are their formative experiences. Our current leaders were children in the 60s, 70s, and 80s, when Industrial Pollution became regular front-page fodder and the illusion that what benefited a large corporation was necessarily good for the community at large was shattered. Their priorities are fixing the things that they perceived to be most wrong with the world at the time, or their modern incarnations.

So, what role do these events play in the alternate-history world of Earth Regency? Vietnam and Korea didn’t happen – but the Russian equivalent of those experiences, the invasion and occupation of Afghanistan, did – though expanded to cover a larger field of the Middle East in general. The other key events remain, though – in many cases – once removed from the centre of power. Government can thus be mistrusted, but there are the Empress and Imperial family to shield the ordinary citizen. There is thus an avenue of hope and trust that was lacking in our history. But, at this time in history, the Empress is beginning to seem more remote and distant, the representative of a past generation. The populace, and especially their youth, are looking for new figures to idealize and idolize.

Michael Jackson, 1984, derived by SpeedDemon74 from a photograph by the White House Photo Office

The Peter Pan Of Pop

Although no-one recognized it at the time, there were three socially-significant figures who exemplified the rebirth of optimism and human decency in the Empire, and it had been in 1980 that the stories of these three figures began.

The first was an American entertainer, whose showmanship made him the most popular artist in the world. Michael Jackson had very deliberately turned his entire existence into a larger-than-life circus act, selling millions of records, and carrying pop music to its zenith as an entertainment medium. Popular entertainment was transformed by the sales of his album “Thriller”; he turned a larger-than-life cottage industry into a professionally-operated Big Business.

And then it all began to fall apart on Jackson; reports of increasingly-eccentric behavior led to the nickname Wacko Jacko, and a public made insatiable for sensation began devouring not only the product, but the people that generated it. Rumors, fiction, and outright lies were all grist for the mill – accuracy was no longer important, headlines were all that mattered. The youth countercultures of the 60s had been inherited by the firebrands of the 70s and were unified by Jackson. There was always a mythical element to the story of the “Peter Pan of Pop”, a fairy-tale element that played on people’s lack of hope, offering an escapist retreat from a world that was otherwise becoming unbearable.

To me, Michael Jackson will always be a figure of tragedy. Denied any semblance of a normal childhood, it is no surprise that his adulthood – after earning enough money to make any dream a personal reality – would be bizarre. A child-like naivety and trust lay behind virtually every decision he ever made, in my opinion – whether that be trust in the Medical profession, in his preference for relating with children (who naturally shared his perspective), or that his life of excess would be understood by his fans.

There was a time, before the advent of the tabloid headlines of his life, where Jackson could seemingly do no wrong. Everything that he touched turned to gold. Those same child-like qualities made him a repository for optimism and hope, a living idol to the inner child within all of us.

His rise and fall are a Greek Tragedy, writ large because of the way he embodied what others wanted to preserve in themselves – hope, optimism, and the ability to enjoy life to the full with no cares for tomorrow. He was successful because he appealed to the things we like about ourselves. It was all too easy to forget that he was also human, and fallible.

Like JFK, he became a popular idol; unlike JFK, he survived to be torn down from the pedestal upon which he had been placed, rightly or wrongly, by the public. Had Kennedy survived, perhaps the same thing would have happened to him; he certainly had enough opposition with whom to contend. But that’s another might-have-been, and one that doesn’t fit within the current story.

The Communications Age

Not all the lessons learned from the larger-than-life success of Jackson were good ones. It became acceptable to spend as much as necessary to achieve a blockbuster success. The same attitude began to pervade all other forms of entertainment, and then business in general. The counterculture figures from the 60s were now aged in their 20s, 30s, and 40s, and had largely been assimilated into the mainstream of the society against which they had rebelled.

The youngest amongst them achieved new levels of greed and excess, and were officially tagged with the collective nickname “Yuppies”.

Real-life prediction: A decade from now, if not sooner (5 years or so), the big international issue will be corporate responsibility and making the executives of corporations accountable to the public for their behavior. Think back over the news stories of the last couple of years and you can see the early trends in this direction.

Nor was this the only new word entering the language at this time; a more formal title for the period might well be “The Communications Age”. New language had been infiltrating for decades as a consequence of scientific and industrial progress; but the vast majority of these additions were technical terms that had little influence on everyday usage.

A frying pan was still a frying pan; the “Non-stick Teflon Coating” was just sales jargon. Now, however, domestic innovations began to appear with increasing regularity, and the language changed as a result. And the most fertile field for those innovations was the communications field, as ‘GPS’, ‘VCR’, ‘Mobile’, ‘Hands-free’, ‘HUD’, ‘ISP’, ‘PC’, and ‘CD’ all became everyday conversational terms.

Princess Diana at the opening ceremony of the community centre on Whitehall Road, Bristol, UK, May 1987. Photo by Rick.

The Princess

The second great figure of the era was even more strongly symbolic of the escapist / fairy tale popular appeal. Lady Diana Spencer was considered a flower, the embodiment of the shreds of hopes and dreams of the common people made manifest.

When she married the heir apparent of the Empress Elizabeth, she became the public symbol of hope. It was then widely believed that the Empress Elizabeth would abdicate on her 60th Birthday, and that her son would ascend the throne; and thus the coming generation would have a representative, an ear and a voice, at the very centre of power.

Behind the scenes, the Cinderella fairy-tale was far from reality, the combination of the weight of public expectations and a husband with adulterous inclinations overwhelming the young woman at the centre of the storm. In the public thirst for sensation, the fairy tale would be exposed, piece by piece, as a sham.

Matters were not helped by the old-world moral judgments of the Empress Elizabeth, who was placed in an impossible situation as the marriage began to fail. Hailing from an era in which loyalty, “for better or worse”, meant forever, she did not support the fragile Diana as much as the Princess was, perhaps, entitled to expect; nor was she especially successful at reining in her son’s indiscretions. As the marriage first floundered and then ended, she discovered that the sensationalist press had eroded all faith in Prince Charles as a potential Monarch, even as they had destroyed her faith in her son’s discretion and attention to duty.

Worse, he had undermined confidence in the Monarchy as an institution; Diana had been perceived as the People’s Princess, the ally of the commons – titles that were supposed to belong to the Empress – and the failure of the marriage had become perceived as the failure of the Empress to stand by the people. The entire concept of the Empire as a political institution was beginning to lose favor amongst its citizens – without whom, the Empire would be nothing at all.

Diana somehow emerged from the entire fracas with her perceived connection to the people intact; but now that she was no longer royalty, she was seen as fair game for the sensationalists, who slowly dragged her down to earth. She had been careful to maintain a public face of respectability, and (to her credit) never let her dignity escape her, and had even begun to rebuild her personal prestige through many social & charitable projects, when she was killed in a terrible automobile accident. The wolves turned on the legend and did their best to tear it asunder; but the sensationalist movement was beginning to die, and as a result, her legend survived.

We in Australia held a privileged position in terms of being able to see the entire story unfold at arm’s length. We saw the British public attitude of the era as they identified with “Lady Di”; we saw the disintegration of the fairy tale; we saw the rebirth and rise of popularity within the United States; and we saw the British public revere her as a martyr to the lust for headlines of the tabloids and paparazzi.

Part of the appeal was generational; Prince Charles was roughly the same age as my father, Queen Elizabeth roughly the same as my Grandmother. Diana was approximately my age, seemed to like the same things that people of my age liked, had similar attitudes and opinions, and so on. She embodied a hope for the future to many people, whether they were strong supporters of the monarchy, or not.

At the same time, this was the coming of the New Romantics and the tail-end of their extreme counterpoint, Punk. More than musical styles, these represented philosophies in opposition; and for those without the anger at and resentment of society to fuel a punkish attitude, Princess Diana seemed to embody the cleaner-cut image of the New Romantics.

Sir Bob Geldof at the headquarters of the International Monetary Fund, 23 April 2009. Photograph by Stephen Jaffe, courtesy International Monetary Fund.

The Saint: Sir Bob Geldof

The third of the Great figures of the 80s could not have existed without the contributions of the first two. If Michael Jackson unified youth cultures throughout the Empire, however briefly, and Lady Diana gave them optimism and hope, it was Bob Geldof, later nicknamed “Saint Bob”, who showed just what the combination could achieve, socially, when they really wanted to.

His relief project, Band Aid, and subsequent Global Live Pop Festival “Live Aid” (the Mao being a notable non-participant) raised funds in excess of 500 Million Pounds for famine relief in Africa. It spawned imitator events from across the world, most notably USA For Africa (organized by Harry Belafonte and Ken Kragen, and featuring Michael Jackson amongst others). Equally important to future generations was the revelation of the consequences of misrule by African Warlords and Dictators.

The political promise that the politicians had so feared in the late 1960s had been realized – in a socially-acceptable way. Equally importantly, the devastating pictures of mass starvation that resulted reminded people of the benefits that science, and the Empire, when used properly, could provide.

African hunger would be a recurring issue; while no-one of the time thought that any of these activities would be a lasting solution, African aid, and its management (and mismanagement in some cases) would focus attention on the causes of many of the problems in future years.

Granted an Honorary Knighthood by the Empress in 1986, Sir Bob remained active in African relief and similar projects for the remainder of his life. His plainspoken demeanor, occasional outbursts of hyperbole, and – to some extent – his naivety in terms of distribution of the proceeds of his various ventures in the cause, left his efforts open to criticism after the fact, though few doubted his sincerity and willingness to sacrifice his own personal career to the cause. His de-facto position as the media spokesman for just causes and political enlightenment was eventually usurped by Bono of U2, whose activism covered a wider range of issues; but to the public at large, they were all walking in Saint Bob’s shoeprints.

Imperial Resurgence

These three people, more than any others, could be considered the prime movers behind the Imperial Resurgence. Had any one of the three not existed, it is doubtful (in retrospect) whether the Empire would have survived to the present day (2055).


Okay, so Mike was popular, Bob made social responsibility popular, and Diana’s wedding was a popular fairy tale. That doesn’t mean that people have to buy into the Deification of the Holy Trio. It can be argued that the people would have rekindled their hopes anyway – very few can live in total despair for any period of time and continue to function – and that these three, amongst others, just happened to be the figureheads anointed by the resurgence. But to Imperial Citizens, they are revered.

A return to prosperity

The increased enthusiasm on the part of the ordinary citizen generated other resurgences. In particular, the Economy, which had slowly become moribund, began to grow again, and a more wary and realistic faith in technology emerged. Technological Solutions were perceived as only part of the story; the real problem with Industrial Pollution, for example, was not a scientific one, it was a social one. People demanded the products that were being manufactured – a social phenomenon – and it was that demand that was the real cause of the environmental damage. The solution would also have to be a social one.

There was a general perception that any seemingly insoluble problem only seemed so because it was not properly understood. Crime, for example, wasn’t just a social problem; it needed a scientific analysis to find the solution. The concept of prison reform, which had become popular through the 1970s, was increasingly perceived as a failure, because it promised an easy ride to criminals; the deterrent element was missing.

These changes in attitude took time. Together with changes in fundamental social concepts like ownership, and the social unit, they would slowly reinvigorate the Empire, and ultimately culminate in a new groundswell of optimism in the following generation; but it was the Communications Age that laid the foundations.

The original Sony Walkman, photo by joho345

1980

There were few developments of obvious, lasting historical significance in 1980; no doubt the days were as filled as at any time, but from a remote perspective the world seemed to be holding its breath and enduring the calm before the storm.

The Communications Age began before the end of the Age Of Science, with the launch of a portable, personal tape player, the “Walkman”. It was not recognized at the time as the harbinger of a social revolution; it was just another gadget. It would be years before many people discovered its existence, and even today many have the (false) impression that the Walkman post-dates the Personal Computer. (Even fewer realize that the Compact Disc predates the Walkman by two full years!)

Those false perceptions notwithstanding, the Walkman was a new concept in that it personalized the entertainment experience, elevating the individual over his surroundings. Prior to its release, music and entertainment were social activities, involving anyone within earshot. If music were played, everyone in the room heard it; there was a shared aspect, a social aspect, to the experience. Now music became a personal experience; in itself not a groundbreaking development, but one that would symbolize the coming decade and much of the decade to follow.

The Individualistic Experience

For 17 years, in fact, the dominant social trends could be symbolically cast in that one concept – the Individual over Society. Individuals worked for their own benefit first, the benefit of other individuals second, and a collective society hardly at all. Indeed, so little common ground was experienced through this period that society collectively was perceived as a faceless mass, a lowest common denominator, a generalization of individuals.

But at the time, none of this was evident. Life was dominated by day-to-day events, and only with hindsight could a trend be perceived; and many of those day-to-day events were trivial, even irrelevant in the historical sense. Which is not to say there were no significant developments….

Rhodesian Disunity

The sequence of events in Rhodesia came to an end as the South made the transition to Black Rule under the joint leadership of Prime Ministers Mugabe & Nkomo; the north continued in its state of anarchy.

Afghanistan Deadlock

It was announced in February that 90% of Afghanistan was now under direct Imperial Military Control – but that 60% of the Afghan military remained intact within the last 10% of the country.

The Iran Crisis

On April 25th, the USK took unilateral action to free the hostages in Tehran, launching a commando strike. Unfortunately, the Americans were not the equal of the Australian Special Forces, who had already ruled out a raid as too risky; the action was bungled and the hostages killed by their captors.

Less than a week later, terrorists seized the Iranian embassy in London, demanding the release of political prisoners; but unlike the Tehran situation, the layout of the London embassy was conducive to successful intervention, and Australian Special Forces successfully freed the hostages and captured the terrorists within a week of the alarm being raised.

Although Wikipedia Commons also has pictures of the eruption itself, I couldn't go past this spectacular 2004 image of the volcano crater steaming. Prior to the eruption, it looked like an ordinary mountain. Click the thumbnail for a larger image.

A Bellow Of Nature

In mid-May, the long-dormant Mount St Helens unexpectedly erupted with the force of 10,000 atomic weapons. Because of the demonstrated capability of The Mao to create and trigger volcanic events, this brought the Empire closer to global war with the Mao than at any time in the last 35 years. Tensions did not ease until specialist geologists from around the globe confirmed that the eruption was natural in origin.

Ronald Reagan, photograph courtesy the National Archives & Records Administration, ARC 558523. Photo by the US Department of Defense, Department of the Navy.

Other news of the day

There were other terrorist actions through the year – bombings, assassinations, and so on – as the extremists offered the frustrated an outlet for their dissatisfaction. Israel unified Jerusalem and declared it to be the new capital of the Zionist nation.

Ronald Reagan was elected Prime Minister of the USK despite opposition by King Jeremy Washington I. And finally, Michael Jackson’s “Thriller” was released to critical acclaim and initially poor sales.

1981

1981 felt much the same as 1980. There were a few developments of lasting interest, and new trends continued to gather momentum, but it was nevertheless a year in which life was simply business-as-usual for most of the population.

The Iranian Crisis Deepens

In January, Iran released the 52 Imperial Hostages who had been held in Tehran since November 1979, carrying an offer to the Empress: Iran would rejoin the Empire, and use its influence to help persuade the other rebelling Middle Eastern states, in return for an equal voice in the governance of Jerusalem, and the eviction of the USK† from the Empire.

† The Kingdom Of The United States Of America. Refer to earlier parts of this series for explanations.

While the Empress may have been tempted to consider the offer in those moments when the United States was being especially exasperating, the peace offer failed to take into account two crucial facts:

  1. The USK was vital to the defense of the Realm; and,
  2. As a practical measure, the Empress didn’t have the power to accept or reject the proposal; that would be controlled by the Diplomatic Corps.

Iran had lost touch with the political realities of the Empire, and as such, the proposal was doomed to an inevitable failure.

Prince Charles & Princess Diana Photo © 2010 hans thijs, flickr

The Spanish Experiment & other events

The “United Leadership” experiment of King Carlos‡ came to an unhappy end, as 200 civil guards under the command of Colonel Tejero Molina attempted a coup. Carlos resigned as Prime Minister, admitting that his bold attempt to unify sufficient power to force change had failed.

‡ See “The Rules Change” in Part 7 of this series.

Heavy fighting again broke out in Beirut in April, and in June Israeli aircraft bombed a nuclear reactor under construction near Baghdad.

In July, Charles, Prince Of Wales, married Lady Diana Spencer.

The following month, USK Aircraft shot down two Libyan jets over the Gulf of Sirte, while October saw the assassination of Anwar Sadat of Egypt in protest over his peace accords with Israel.

Throughout the first few months of the year, sales of “Thriller” would grow, until it ultimately became the most popular single body of music publicly available; it would be “Top Of The Charts” worldwide for over a year.

The IBM 5150 was the first fully-assembled PC; all the previous ones came as kits which had to be assembled by the user. Photo by Biffy B.


The year also saw the arrival of the space shuttle and the recognition of AIDS as a disease. IBM launched the PC (with 64K of RAM and a single floppy disk drive); it would become the industry standard over the years to come. It certainly was not recognized as the means by which individualism – not yet the dominant social force – would first achieve its full flower, and then ultimately wither.

1982

At the start of the year, it looked like it was just going to be more of the same old same old. But the strongest hurricanes start as a light breeze…

Africa

In the culmination of the Smith Plan, Southern Rhodesia established a new identity as Zimbabwe. Joint leader Nkomo, whose relations with Mugabe were always strained at best, was dismissed from office because he would not agree to Prime Minister Robert Mugabe’s intention to establish a police state.

A Pyrrhic Defeat

Israel agreed to give the Sinai over to direct Imperial control in the interests of maintaining peace. By the end of April, all Israeli forces had withdrawn from the region. The Afghanistan advance by the Imperial Military all but ended in stalemate.

Relations between Iran & Iraq decayed and then devolved into war. The Israeli Ambassador to the Imperial Court was shot by Palestinian terrorists; in retaliation, Israel invaded Lebanon. The significance of this last development would not be recognized for over two decades, when it would revolutionize politics within the Empire.

In the meantime, the bloodshed continued unabated. It took less than a month for Israeli forces to encircle Beirut. In an effort to prevent civilian casualties, Prime Minister Begin offered to permit the PLO to withdraw from the city with their weapons.

This was the first acknowledgement by the Israelis of the earlier decision by the Civil Service to recognize the PLO as a political organization – by negotiating with them and treating them as a political authority within the region, they gained political credibility throughout the Middle East as a “dispossessed nation”.

Debate raged for almost two months, but the resulting political benefits were too strong for the more moderate elements within the PLO to resist. By accepting, they would be able to claim shelter and sanction within the same laws and rulings which created the artificial national state known as “Israel” – and could thereby claim all the legal and diplomatic protections and concessions extended by the Empire toward the Zionist state.

The only areas in the region under direct Imperial control were Afghanistan and the Sinai, so the Empire had only two choices in terms of a homeland for the PLO protected by Imperial Law. For the population to relocate to the former, they would have to march directly through the centre of the Iran-Iraq conflict; the only viable answer was for the PLO to be accorded protected status and Rule of the Palestinian region.

In permitting themselves to be ‘defeated’ and withdrawn from Beirut by the Israelis, they would ironically achieve everything that they had been fighting for. For the first time, “Success” was no longer synonymous with “Victory”. Furthermore, Israel would have to support their position or risk weakening their own political authority within the region and losing many of the concessions granted them by a sympathetic Empire in the wake of the Holocaust.

Cordoned-off street in front of the HSBC branch in Beirut, October 2005. Photo by Robysan. Click on the thumbnail for a larger image.

The Beirut Bloodbath

Although it was widely regarded as a troublemaker and an agitator, the PLO was in fact a stabilizing influence within Beirut. Within a week of their departure, and as the Israelis began to push into the city, the Lebanese Druze militia and the Lebanese Army reignited their long-standing Civil War. Three warring factions, each opposed to the others, converged, and Beirut became a bloodbath.

Each faction committed what can only be characterized as atrocities on the captured supporters of the others. On August 18th, over 800 Palestinians were executed by Christian militia in two refugee camps in western Beirut, for example.

It was slowly becoming evident that, just as a revolution in military tactics would be needed to succeed against the desert guerillas of Afghanistan, so a revolution in politics would be needed to solve the problems of the Middle East. But at the time, no-one had any idea of what shape that revolution would have to take – had no idea even of where to begin – and in any case, the Civil Service / Peerage alliance was inherently conservative and resistant to any change. Only when this political problem was solved could the search for new paradigms within the Arabian Peninsula begin.

The first reasonably portable computer was the Epson HX-20, shown here in its carrying case. Photo by sandstein.

1983

While a lot happened in ’83, most of it made little difference in the long run. Bloodshed continued in the Middle East. Apartheid continued in South Africa. Uproar continued in central Africa. Terrorism just continued. But some events sowed the seeds of future developments.

The worst drought since 1973 (!) ravaged Ethiopia, bringing famine to millions. The Laptop computer introduced the concept of portable computing.

Pioneer 10 or 11, painted by Don Davis, Image provided by NASA. Click on the thumbnail to see the fullsized image.


Pioneer 10 passed the orbit of Neptune, then the most remote planet of the solar system. The IRA destroyed Harrods in London using what “must have been” a Mao sonic bomb that was attuned to the stress-points of the steel girders; 6 people were killed and dozens injured when the building collapsed.
 

I felt that some direct terrorist attacks would be made on the heart of the Empire, simply because it was the central point of authority. Such attacks often occurred in our history, targeting the United States, but on Earth-Regency, some would have to be aimed at London, simply because London was more important on the global scale. Furthermore, because London is closer to the Middle East, there would be more capacity for such attacks. This was the first such additional attack.

The HIV retrovirus was identified. Australia stole the America’s Cup from under the USK’s feet – the first non-USK victory since the contest’s inauguration in the 1870s. This was done using clever, innovative engineering. And the second round of arms limitations talks with the Mao ended in complete breakdown.

The Apple-II computer. Photo by Marcin Wichary, Flickr.

1984

This was the year in which the disparate elements that marked the decade as a turning point began to coalesce. Violence continued in the Middle East, but calm began to return to Central Africa with South African troops leaving Angola, just as internal civil violence escalated.

Diplomatic talks with the Mao resumed after the contentious issue of disarmament was removed from the Agenda; by the end of the year, a new trade agreement was in place which promised a massive economic boost. Apple Computers released the Apple II, the first computer with a graphic interface. “Thriller” sold over 37 million copies in the USK alone, while Bob Geldof’s “Band Aid” produced a chart-topping single to raise money for famine relief.

Greed Is…

Entrepreneurs and Interest Rates began to emerge as the economic patterns of the decade. The rise of the new breed of Entrepreneur, the ultimate expression of the “Yuppie” movement, was a particularly significant development, because for the first time, these were not members of the Peerage.

A new subclass of the “Working Class”, they generated unprecedented wealth through three avenues: Communications (Michael Jackson, Alan Bond); New Technology (Bill Gates, Steve Jobs), and New Products (Franklin Andrews, head of the Asia-Pacific Trading Company).

Our readers will be familiar with most of those names, except possibly Alan Bond; his Wikipedia entry is located here.

The one name that they won’t know is that of Franklin Andrews, because that was the name of the father of Lance Andrews, aka “Behemoth”, one of the characters from the original superhero campaign, from which this history is divergent. The latter is a name that will crop up a number of times in the later chapters of this text.

Differences in the history of trade within the Empire become significant at this point. Tea was an Indian product, as was rubber; when India fell to the Mao in 1914, production of these commodities shifted to South America. Many of the other Chinese products, like Silk, had never reached Western markets.

While the sale of Imperial products to China was the province of the Peerage, differences in Cultural & Economic systems ensured that Asia was a relatively small market. The sale of Chinese products within the Empire was where the Big Money was; and the most significant trader was the Australian, Franklin Andrews.

The peerage tried to stop the rise of these new competitors, but found themselves hamstrung by Common Law. The battle lines between old and new, age and youth, were now clearly established.

It was Band Aid that showed the strength of the emerging youth factor as a social force. The more disposable income trended toward the young, and the more of that money was spent on products under the control of the new entrepreneurs, the more strength the established political parties gained from their policies of recruiting a younger generation.

In 1964, the average age of the members of the Lower House of the Imperial Government was 52; in 1974, it was 49; and, by 1984, it had lowered to 45. If the trend held true, the 1990s would be dominated by a Prime Minister in his 40s, and the 2000s by one in his 30s.

Live Aid at JPK Stadium, Philadelphia, 1985. Photo by Squelle; click on the thumbnail for a larger image.

1985

“Band Aid” was no more than a band aid on the problems faced by Ethiopia and central Africa. By the end of 1984, Geldof was planning an even more ambitious project – a 48-hour-long rock concert televised globally, including (for the first time) China.

Taking place in July, and watched by over 1500 Million people, Live Aid raised over £350 million for further famine relief.

Although this was a tiny sum in comparison with the needs of the region, it was equivalent to five years of additional disaster relief through official channels.

The most notable omission from the performance list, which included hundreds of heavyweights in the popular music industry, was Michael Jackson, who had organized his own “Band Aid” equivalent project. His reluctance to be involved in the project, and the public castigation that followed, proved the first cracks in the Jackson mythos.

The long slow road to peace

The slow trend towards Peace in the Middle East resumed without addressing the problems that had led to previous outbreaks of violence in the region. Israel agreed on a staged withdrawal from Lebanon, which was complete by mid-year. Libya released four Imperial civilians after negotiations by an envoy of the Archbishop of Canterbury. But that was as good as it got.

October saw the Peace again shattered when PLO extremists murdered 3 Israelis in Cyprus. Within hours, Israeli bombs were falling on the Empire-supervised PLO holding camp from which the extremists were believed to derive. While 60 hardliners within the PLO’s ranks were killed, civilian casualties were ten times this number. In addition, 23 Imperial representatives were maimed and 4 killed. The Empire responded by declaring the Israeli action an “Over-reaction”, and warning that any repeat would result in punitive action. This was not enough for some of the Arab nations, and threats of War as a result of the incident lingered for months.

Eyeball-to-eyeball ruthlessness

The Israelis insisted that demonstrating that for every Zionist killed, 200 Palestinians would be executed in reply – including those responsible – would have a deterrent effect. This position ignored the obvious facts that the fanatics, responding to what they considered oppression, would only become more fanatical in response to such “punishments”; and that dead religious fanatics frequently became martyrs to their cause. Consequently, relations between the Imperial Court and Israel became strained, and the PLO moderates gained in sympathy, which they hoped to parlay into additional support for their claims to “Dispossessed Nation” status.

Only in 2015 would it be discovered that the PLO hardliners deliberately targeted the Israelis whose deaths had triggered the retaliatory strike in anticipation of a Jewish overreaction. They viewed the deaths of over 600 of their own as a worthwhile sacrifice if it generated additional pressure for the Empire to recognize their claims over the West Bank.

The Terrorism Escalation

If peace was again in short supply in ’85, one thing this year had too much of (as had been the case of late) was acts of Terrorism. March saw the 25th anniversary of the Sharpeville massacres in South Africa; the anniversary was commemorated by fresh rioting and by the police firing into the crowd – a mirror image of the events of a quarter-century earlier.

146 were killed during Tamil separatist attacks in Sri Lanka on May 14th. One month later, Shi’ite Muslim gunmen hijacked a TWA airliner and demanded the release of 700 prisoners held by Israel, while in July French Nationalists blew up the Greenpeace ship Rainbow Warrior while it was anchored in Auckland harbor.

August saw 60 dead, 100 injured, when a car bomb exploded in Christian-controlled east Beirut; two days later a retaliatory car bomb exploded in the Muslim sector, killing 50. Of course, the PLO attack on 3 Israelis and the response have already been discussed. Less than a week afterwards, Palestinian guerillas seized the Italian luxury liner Achille Lauro and murdered a USK hostage. The grim total of over 300 deaths through acts of terrorism in the course of the year would not be exceeded for the rest of the century.

Author’s Notes: While it’s certainly possible to criticize the West Wing episode “Isaac And Ishmael”, which was written and broadcast in the week following 9/11, the one line that most strongly resonated with me at the time, and which (in hindsight) summed up what would be the US attitude in response, was the exchange with Rob Lowe’s character: “What’s the one thing that strikes you most about terrorism?” – “Its 100% failure rate.”

The litany of punch and counterpunch listed in the above section clearly demonstrates the utter futility and waste of such methods. Nothing makes a populace more determined to resist than poking them with a stick – and, when you’re talking about a national body, that’s what all acts of terrorism amount to. The 9/11 attack united most of the world in anger and fury, and certainly stiffened American resolve to resist any attempts to change their attitudes toward the Middle East.

It was thinking about that event and the response that it engendered that led me to the plot idea expressed in the last paragraph of the preceding section. I have no inside knowledge concerning the incident; I can only state that such an action as I have described seems consistent with the characters of the people involved.

Top portion, front face, Space Shuttle Challenger Memorial, Arlington National Cemetary, USA. Photo by Tim1965.

1986

The success of Live Aid didn’t have an immediate impact on Imperial Society. The youth movement was largely cause-driven, and not yet the outright political movement that it would become in the 1990s. They still needed a unifying trigger, a cause to rally behind. This was the year in which they gained that cause.

Undoubtedly the biggest news stories of the year were two catastrophic engineering failures. The first was the dramatic and tragic failure on launch of the Space Shuttle Challenger, which threw the Imperial space programme into disarray; the second was the explosion of the nuclear power plant at Chernobyl, which leaked substantial quantities of nuclear fallout over northern Europe.

Only marginally less significant was the concession of the failure of the peace process in Northern Ireland and the new offensive against Libya, which once again was edging towards a nuclear capability.

Reactions

While the events themselves were significant, the reactions of the general public were even more telling. For many years, the value of the Space Programme had been questioned; as diplomatic progress was made, the need for intensive development of space became less pressing. Increasingly, the demand was to shift funding away from Space and toward environmental concerns.

With the discovery of the Ozone hole over the Antarctic in September of 85, these concerns became even more pointed. But following the catastrophic failures of the engineering of which the Empire had been so proud, the impetus became overwhelming, and the minority Green parties in the various members of the Empire became a significant political force, largely by capturing the youth vote. There was a general demand for a step back from the technological forefront and an increasing emphasis on more mundane endeavors.

Increasing concern was repeatedly and loudly voiced concerning the growing population problem and the ability of the Empire to maintain production of food – issues that were heightened by the contamination resulting from the Chernobyl incident.

New Dilemmas

There were a whole raft of new issues to be contemplated. Actually, most of them weren’t all that new; but the sense of urgency, of insistence on priority, was new.

Issues such as recognition of the native inhabitants of the colonies – Canada, Australia, the USK, and Africa – had been growing for some time, for example, spearheaded by the anti-Apartheid movement. There were suggestions that the Empire had double standards which were difficult to refute.

Awareness of the problems of Agriculture had been becoming more general long before the plight of Ethiopia brought them into the living rooms of Imperial Citizens all over the globe. Soil Salinity, Ozone, Oil spills, Nuclear Waste, Smog, Reliance on fossil fuels, Urban Sprawl, Topsoil Erosion, Rainforest restoration – none of them were new issues.

  • The most extreme position demanded that polluting industries be shut down until the environmental issues were resolved. The economic chaos that would have ensued made these demands absurd, and these demands were rejected out of hand.
  • A more balanced (but still extreme) proposal called for a moratorium on further research & development until the rest of the world was brought up to core Imperial engineering standards.

Comments Off on The Imperial History of Earth-Regency, Part 9: Peter Pan, The Saint, & The Fairy Princess – 1980-1997

The Power Of Synergy: Maximizing Character Efficiency



One of my regular players and an occasional contributor here at Campaign Mastery, Ian Gray, has a simple philosophy when it comes to rewards – never ask for +5 when five +1’s will do.

The Judo Of Wishes

It’s a philosophy that has developed from his experiences with Rings Of Three Wishes and similar items. Like almost every D&D player out there, he’s seen people make outrageous demands and requests when using Wishes, and the inevitable reaction by the GM has been to do their utmost to screw the PC up as punishment for their audacity and in an attempt to keep some semblance of game balance.

The usual player reaction to this denial of their unmitigated greed has been to become amateur lawyers, attempting to make the terms and conditions of the wish ironclad in defense of the desired and exorbitant benefit they have claimed. The worst case of this that I have ever witnessed occurred when one player prepared a sixteen-page typed contract – for one wish.

In my personal experience, this only makes the GM work harder and with more bloody-mindedness at finding and exploiting any loophole he can uncover, and Ian has made the same observation. Since anything the GM says goes (short of driving his players away from the Game Table in outrage), the deck is inevitably stacked in the GM’s favor in such contests – sooner or later, he will neutralize, steal, pervert, corrupt, or render unusable the Player’s ill-gotten gains.

Ian observed this happening to other players on several occasions and quickly decided that a plus-one or plus-two that he got to keep and use was infinitely better than a plus-five that the GM would move heaven and earth to turn into a plus-zero. What’s more, as soon as it is announced that a PC is using a Wish, the GM – through experience and ingrained habit – inevitably girds his mental loins, bracing for whatever abomination the greedy player is about to demand. Making a slightly-weaker-than-reasonable request makes the granting of that request – with no hidden catches or strings – practically automatic, turning the GM’s own determination to fight unreasonable requests against him.

The Stacking Equation

At around the same time, as I understand it, Ian was also formulating a second philosophic principle that has shaped his PC development ever since – it doesn’t really matter which came first. This states that it is more than twice as much work to get and keep a +2 bonus as it is to get and keep a +1 bonus. In other words, it’s easier to get two +1 bonuses that stack than it is to get a single +2, and much easier to get three +1 bonuses that stack than it is to get a single +3 item – but the end result is the same, in terms of character capabilities.

Extrapolating from that: it’s easier to get four +4 bonuses than it is to get a single +16 item (in fact, outside of perhaps some Monty Haul campaigns, no such items even exist, nor should they) – or six +4 bonuses instead of a single +24.

Are these numbers starting to look alarming yet?

Looking at the rules

A typical +3 weapon costs roughly 18,000gp according to both the 3.x DMG and the Pathfinder Core Rulebook. According to the NPC gear value charts for 3.x (p127, 3.5 DMG) that means that the absolute earliest that a character should be able to get his hands on that equipment is around 11th level; the Pathfinder rules are more explicit and suggest 17th level (p454, Core Rulebook). The character-wealth-by-level table brings that forward to about 7th level (3.x DMG p135); I couldn’t find an equivalent table in Pathfinder.
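Ian’s “Stacking Equation” falls straight out of the pricing rules: a weapon’s enhancement cost grows with the square of its bonus (bonus² × 2,000gp, which is where the 18,000gp figure above comes from), while stacked +1s from separate sources only grow linearly. A minimal sketch, assuming each +1 source can be priced like an independent +1 enhancement:

```python
# 3.x/Pathfinder weapon enhancement pricing: bonus squared x 2,000gp
# (masterwork and base weapon costs are ignored for simplicity).
def enhancement_cost(bonus: int) -> int:
    return bonus ** 2 * 2000

# Pricing n stacking +1 bonuses as if each were an independent +1 source.
def stacked_cost(n: int) -> int:
    return n * enhancement_cost(1)

for total in (2, 3, 5):
    print(f"+{total}: single item {enhancement_cost(total):,}gp "
          f"vs {total} stacked +1s {stacked_cost(total):,}gp")
```

By +5 the gap is fivefold: 50,000gp for the single item against a 10,000gp equivalent for five stacked +1s.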

If you can achieve the same result from three different sources of +1 – a feat, a magic item, and a +2 stat gain or a class ability – how soon can you get there? A feat: 1st level, or perhaps 3rd if you have to wait. A +1 magic item (value approx. 2,000gp): 2nd level (3.x), 7th level (Pathfinder) – but I have seen 1st level characters for both that are so equipped. But let’s stick with the official guideline for the moment. A stat gain of +2? You can get +1 at 4th level – and a potion or a scroll can make up the balance from 2nd level on (but again, I’ve seen 1st level characters with potions as starting equipment). A class ability that only gives +1 is pretty low-level – certainly, any such would normally be received by 4th or 5th level, and 3rd or sooner would not be unexpected.

Total: between 3rd and 5th level (3.x) a character can have the same benefits expected of a 7th level character. For Pathfinder, that’s 7th level to achieve the same effect as an 11th level character.
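The level arithmetic can be captured in a few lines: bonuses from stacking sources simply add, while the earliest level at which the full stack is online is the latest (maximum) of the sources’ individual availability levels. The levels below are the 3.x figures from the text, used purely for illustration:

```python
# Each stacking source contributes (bonus, earliest level available).
# Totals add; availability is governed by the slowest source.
# Levels are the 3.x figures from the text, for illustration only.
sources = {
    "feat": (1, 1),
    "+1 magic item": (1, 2),
    "stat gain (+2)": (1, 4),
}

total_bonus = sum(bonus for bonus, _ in sources.values())
earliest_level = max(level for _, level in sources.values())
print(f"+{total_bonus} online from level {earliest_level}")  # +3 from level 4
```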

It takes work

A lot of players just show up to play, not even looking at their character sheets away from the Game table. Ian is not like that – he works hard for his +16 or +24 or whatever. Outside of game time, he will go over his supplements and references, looking for combinations – this class ability with that feat and the other magic item and this other feat – that actually total the sum of their parts, or more.

Nor is he – despite the impression you may have received so far – a min-maxer. He carefully develops a character concept and profile, evolving it as he interacts with the game world, and every choice that he makes has to be justified in light of that character concept. If it seems right for the character, he will ignore an obviously beneficial combination (in terms of the rules) and choose an option that seems more appropriate to who the character is. All this is an expression of his role-playing, not rules-lawyering (at least most of the time).

As he puts it: the bottom line is that you get out of a game rewards equal to the effort you put into it. Ian puts in a lot of effort, and he reaps the rewards – and he has trouble understanding those who don’t, especially if they complain about the relative power level between his character and theirs.

An Unfair Advantage?

Yet, all this single-minded attention gives Ian what many would consider an unfair advantage, simply because the GM can’t spend months or years developing and improving each encounter in advance. Heck, we’re usually lucky to find time to rub two dry words together!

GMs can live with this situation in one of three ways:

  • they can either target the lowest common denominator – matching the effectiveness of most of the party – and accept that Ian will make things look easy; or,
  • they can craft opposition that presuppose Ian-level effectiveness on the part of the PCs and accept that those characters not built to the optimum standard will suffer for their laziness; or,
  • they can try to mix-and-match – one foe of a standard suitable for confronting Ian’s PC and others to a standard appropriate for the other PCs.

Right off the bat that seems like a no-brainer, doesn’t it? When you put it that way, #3 is the obvious right answer. Unfortunately, it’s not that simple. Let’s consider the ramifications of each (in reverse order):

The Mix-and-match solution

Because the GM doesn’t have the time to build an efficient enemy (in the same way that Ian’s characters are efficient PCs), this solution equates to adding gross firepower to the encounter. Instead of (say) a CR8 creature, drop in a CR15.

But that means that the entire party gets not only the experience for defeating the CR15, but also the loot that a CR15 carries – which is a lot more than that of the typical CR8. The net result is that the characters earn more experience than is warranted at this point in the campaign, becoming more capable more quickly. And because Ian’s PC is not of a higher level than the others (or not much – something I’ll get to in a moment), he progresses just as quickly, with the progression amplified by his ability to design good characters.

This solution might work in the short-term, but it does so at the price of making the overall problem worse.

There’s a second exacerbating factor as well – using this approach means that when solo encounters occur, matching effectiveness means that Ian gets the experience for beating a CR15 while the others get the experience for beating a CR8. It doesn’t take very many such encounters before he has gained several levels over the rest of the party – which only makes the apparent disparity of power levels worse.

All this tends to create ill-feeling and jealousy amongst the other players as well, because not only do they get nowhere near as much time in the spotlight, that spotlight doesn’t even burn as brightly when it IS on them. So it’s not the perfect solution that it might have seemed on the surface. In fact, it’s not even close. Throw in the frustration that the GM experiences, and the genuine difficulties of coping with parties whose power levels are so disparate, and you have a recipe for disaster – I have seen whole campaigns shut down as a result.

I have to admit, this lesson was hard-earned; for a very long time, this was my solution to the problem. It was only when I started to wonder why the problem seemed to be getting worse that I came to the realizations offered in this section.

Targeting the Optimum PC

So, what then, for the idea of using Ian’s power level as the guideline for everyone, in effect “encouraging” the other players to match his expertise in character construction?

This falls into the trap of creating an “us-vs.-him” feeling at the game table, where the “him” is the GM – the other players feeling (quite rightly) that the GM is picking on them because they aren’t as skilled, or don’t have as much time to invest, or don’t have access to the same game resources, as Ian does. There is also a growing resentment toward Ian, whose fault they often consider this to be.

Mechanically, too, this solution has its problems – in fact, these are just the same problems as the previous answer, but amplified by the fact that there are now several CR15 opponents and not just one.

This is throwing HP at the problem and hoping it goes away – but because XP and HP are connected, you are also throwing XP at the problem, which only makes it worse.

In other words, this is no solution at all.

Targeting the Lowest Common Denominator

By virtue of excluding the other proposed solutions as fundamentally flawed, this then has to be the correct answer. But there are consequences of adopting it that make life harder for the GM.

The game effectively becomes too easy for the players. You can expect them to win every straightforward encounter without great difficulty. So the trick to making this solution work is to fill the game with challenges that are not so straightforward. Build nasty little surprises into the game. Be deceptive. Be secretive. Accepting that you are overmatched on the power front, attack on a different vector. Play smart, not strong. Emphasize role-play and relationships and situations in which the shortest distance between two goals is NEVER a straight line.

In an ideal world, this is the perfect solution: it takes the power that a player like Ian confers on the characters he designs and makes it largely irrelevant. If you are at least 20 IQ points smarter than your players, it can work. If you and they are more reasonably matched – if you are a mere mortal when not ensconced behind the GM’s screen – you will need to find some other answer. Sadly, it’s never that easy.

I want to digress for a moment to emphasize that it’s not all downside, having a player like Ian in your games. What you have here is a player who pays attention to what you reveal in the game, who actively thinks about it a lot, who gets the little hints and appreciates the bigger picture and the twists and turns of the plot, who gets and appreciates more of the game than anyone else at the table. And who is a nice guy, to boot.

Everyone has a different tolerance level for the problems that players like Ian engender, but I’ll put up with an awful lot to keep those qualities at my table.

This article is not intended to be a criticism of him or his play – he’s doing nothing wrong – it’s about a GM being able to cope with a player of his caliber.

Other solutions

There are more than those three answers, of course, and it’s entirely possible that the reason none of them seems entirely satisfactory is that we haven’t looked hard enough for alternatives.

  • Ian as player consultant: It’s a simple solution to the problem of disparate PC power levels: even up the playing field a bit by having the other players consult the acknowledged expert at character creation. Ian is quite happy to do so, because character creation is a skill like any other – the more you do it, the better you get at it. This also eases tensions, hostilities, and resentments amongst the other players toward Ian, producing greater harmony at the game table. Not a total solution, but a definite ameliorative.
  • Recruit Ian’s Talents: There have been a few occasions when I have needed a really top-notch NPC, and judged that the price of giving Ian some inside info about the campaign direction was less than the price of using an under-created character. Getting Ian to help in the creation of some of the top-line NPCs makes the game better for everybody, so he’s usually happy to do that, too. Again, not a complete solution, but a useful approach when you need the enemy to be top-notch.
  • Talk to him about the problem: The first character of Ian’s to really exhibit the mega-built problem in one of my campaigns was Warcry. The first thing I did was verify that Ian wasn’t cheating, and the second thing I did was to talk to him about the problem. Much of this article is a distillation of that, and many subsequent, conversations with him concerning his approach. The initial conversation led to the next solution:
  • Retire the character when it gets to be too much: In the case of Warcry, it was a good character with a lot of plot potential and I had worked up a number of interesting adventures for the character to have with the team. The obvious solution was to split the character off into his own campaign and have Ian generate a new PC for the main campaign. It worked quite well, and with greater awareness of the problems, Ian deliberately chose to create a less confrontational character the second time around; as a result, Glory was able to stick around until the first Zenith-3 campaign came to a close, even though (towards the end) she was again becoming too powerful relative to the other PCs. For the new campaign, Ian has generated another new character – one that he’s had about seven years to polish – but one that is even less directly powerful in terms of a direct confrontation.
  • Find a shortcut: The final solution is to match Ian at his own game. But wait a minute – the entire premise of this article is that no GM can spare the time from general game prep to do so, isn’t it? Well, yes, it is, but that’s not the end of the story. If a shortcut can be found that at least simulates what Ian does, then the whole problem goes away. Suddenly, that impractical answer, “Target the lowest common denominator”, becomes practical. And I think I’ve thought of just such a solution.

One Structure To Rule Them All

If it is conceded that there is one optimum construction for each character class, and that what Ian does is winnow through the lesser options until he settles on the best one for the current circumstances of the game and the particular character he has created, then there is an approach that replicates his work in less time. It lacks the finesse and artistry that he employs, so it will be a lesser solution – but better than nothing.

The solution is a Zwicky Morphological Box:

  • Each class has a number of functions and abilities.
  • Each of these functions will emphasize or be controlled by a particular numeric value. Sometimes there will be more than one, creating sub-variants.
  • Each sub-variant will have a particular characteristic upon which it is based.
  • Every feat will either benefit a numeric value, enhance a particular class ability (i.e. a function), or enhance a characteristic. Some feats will produce a paradigm shift, altering the basis to a different characteristic.
  • The same is true for every magic item.
  • The same is true for other class abilities and Prestige Classes.
  • The same is true for the optimum tactical situation for the character to utilize their primary focus to best effect.

What I propose is a series of columns of lists, one set to each character class, one column to each character class function (ie, class ability) and any sub-variants. Each column would be divided into sections – Class Abilities, Feats, Magic Items, Skills, Spells, Prestige Classes, Tactical Notes. In addition, there would be a simpler set of columns (no sub-variants) of lists, one for each characteristic.

  1. Go through each of the class abilities for that class. If any of them enhance the character’s primary focus AND are accessible at the same class level as the primary ability, they go on the list under “Class abilities” for that primary ability focus. You can do this at the same time as you are setting up the initial lists.
  2. Go through each feat in the Core Rulebooks for your game – find a list of them, if you can – and index them numerically. Then list each feat, by number, in the Feats section of every primary focus or stat to which it is relevant. This should take a matter of seconds per feat; you aren’t worried about all the bric-a-brac, fluff, and restrictions that come with it, just with the general question, ‘does this enhance or improve this focus ability?’ If the feat has any prerequisites, these can be noted by number in brackets. Of course, you will also need a master list of indexed feats.
  3. Ditto magic items, in the Magic Items section. (Some won’t go anywhere – add them to another list, called “fluff”). Some may generate new sub-variants – Frostbrands vs. Flame Tongues, for example. Create these by copying and pasting an existing column into a new one.
  4. Ditto skills, in the Skills section. Most of these will have no effects on any core functions, and can be ignored – you’re mostly looking for synergy bonuses and opportunities to enhance tactical positions. But some skills will recur often – spellcraft, and knowledge (religion), and spot, and listen, and search, for example.
  5. Ditto Class abilities from Prestige Classes.
  6. And so on, until you’ve finished with the core rulebooks. Next, grab the first game supplement that comes to hand, and do the same for what’s in that.
  7. Repeat as necessary. (It might be a good idea to keep a list of game supplements that you’ve processed, in alphabetic order, so you don’t waste time going over old ground a second time).

What you are really doing is culling all the alternatives that don’t benefit the class ability that you want to focus on in each column.
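For GMs who would rather build the box in code than in a spreadsheet, the structure and filing process described above can be sketched in a few lines. This is a minimal illustration only, not a definitive tool; the class names, focus names, and feat entries below are hypothetical placeholders, not drawn from any actual rulebook.

```python
# A minimal sketch of the morphological box as nested dictionaries:
# one column per (class, focus) pair, one list per section.
# All class/focus/feat names here are hypothetical placeholders.

SECTIONS = ["Class Abilities", "Feats", "Magic Items", "Skills",
            "Spells", "Prestige Classes", "Tactical Notes"]

def new_box(classes_and_foci):
    """Build an empty box from a {class: [focus, ...]} mapping."""
    return {
        cls: {focus: {section: [] for section in SECTIONS}
              for focus in foci}
        for cls, foci in classes_and_foci.items()
    }

def file_entry(box, cls, focus, section, index, name, prereqs=()):
    """Record an indexed entry (e.g. feat #17) under the right column,
    noting any prerequisites by number, in brackets."""
    note = f"{index}: {name}"
    if prereqs:
        note += f" [{', '.join(str(p) for p in prereqs)}]"
    box[cls][focus][section].append(note)

# Hypothetical example: a Fighter column focused on melee damage.
box = new_box({"Fighter": ["Melee Damage"], "Wizard": ["Evocation"]})
file_entry(box, "Fighter", "Melee Damage", "Feats", 17, "Power Attack")
file_entry(box, "Fighter", "Melee Damage", "Feats", 42, "Cleave", prereqs=(17,))

print(box["Fighter"]["Melee Damage"]["Feats"])
# → ['17: Power Attack', '42: Cleave [17]']
```

The point of the sketch is the culling: anything that doesn’t get filed under a column simply never appears in it, exactly as with the spreadsheet version.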

A table or spreadsheet is perfect for this work – and the implementation of tables in OpenOffice makes it better suited than Word for the purpose, because it lets you copy part or all of a column.

The Time Factor

Will this take time? I’m afraid so – but by simplifying the questions involved, permitting a quick skim to do the work, making each entry as simple as possible, and using cut-and-paste with multiple lists open at the same time, it should not take very long.

The beauty of the approach is that in the long run, it actually makes your game prep more efficient, so an initial investment in time helps in the longer term.

And, of course, the results are persistent within the game system that you are using – until a new edition comes out and your campaign switches over.

Why I haven’t done the work for you

I entertained thoughts of doing just that – bit by bit, over the course of multiple articles here at Campaign Mastery. Or of putting the results in an e-book – I’m sure that it would sell! And it would have the benefits of recycling something I’d like to do for my own campaigns into something publishable – which is probably the only way I’m going to find time to do it at all for the next few months!!

But a little thought about the project gave me pause. Every campaign is different – I don’t have every game supplement that’s out there, and I don’t interpret them all the same way, and my House Rules are different to those of the next campaign over. That means that every campaign’s lists would be just a little different from each other, and the format means that it becomes a lot harder to customize them after the fact. In fact, I think it would be even more work to customize an existing list than it would be to create a new one from scratch.

I could be persuaded otherwise, if our readers demand it – once the current Monday series of the Alternate History is finished, of course – but, for the moment, the best solution is to show all of you how to do it.

So, if I have to do it myself – why no example?

Unfortunately, it would take almost as long to craft an example as it would to do the whole thing. I would still have to glance at every feat, every magic item, and so on. In fact, arguably, it is more work to do it one class at a time (because there is more redundant activity) than it is to deal with each potential entry just once for each of the lists required.

This is an all-or-nothing project – and so it isn’t possible to extract and create an example, except perhaps for the layout of the lists – and those will vary with the software each GM has available, and with their own ideas, anyway. I’ve certainly had no time to optimize the design, and have not actually done this myself yet – so there are no examples to offer. Sorry.

The Bigger Picture

A few of you may be thinking that none of this matters to you – after all, you don’t have an Ian Gray in your campaigns (for good or ill)!

But the fact is that everybody does have an Ian, at least to some extent. Every player has his own unique strengths and abilities, and no two are ever going to be identically competent at character design. Some will have a ‘favored class’ that is their preference, and whose options and nuances they’ve mastered, but be fish-out-of-water when it comes to optimizing a different class.

So the same problems exist, to at least some extent, in every campaign out there. It’s only that Ian has gone further than anyone else I know down this path – and hence, brought the associated difficulties to sufficient prominence to be noticed.

Fractionalizing the Differential

Can this power, this technique, be turned to the Dark Side? Can it be adopted by the players to add to the problems confronting the GM?

Of course – but it’s hardly the end of the world if it happens. In fact, by normalizing the efficiency of character construction for both players and GM, and reducing the differential between the run-of-the-mill player and the Ian Grays of the gaming world, a campaign will be a lot stronger. Opposition will be more nearly a match for the PCs, making the challenge – and the fun of meeting that challenge – better for all.

Oh yes – and it also somewhat pulls the teeth of any genuine min-maxers amongst your players.

Not a bad thing to have your name associated with, eh, Ian?


The Imperial History of Earth-Regency, Part 8: The Ascendancy Of The Peerage – 1978-1979


This entry is part 8 of 12 in the series The Imperial History of Earth-Regency

Only a short post this week, I’m afraid, and half of it is taken up with a reality check on where things stand at this point for readers who may be coming in late. I could have continued, but I would like to start each Chapter in its own post – so I’ll make up for it, next time around.


Pieces Of Creation is an occasional recurring column at Campaign Mastery in which Mike offers game reference and other materials that he has created for his own campaigns.

All images used to illustrate this article are public-domain works hosted by Wikipedia, Wikipedia Commons, or derivations of such works, except for the image of the photographers, which is governed by the SXC terms of agreement.
 

You don't appreciate how big the Pyramids of Giza really are until the skyscrapers offer some perspective. Photo by Jerzy Strzelecki, licenced under the GNU Documentation Licence version 1.2. Click on the thumbnail for a larger image.

Status Check:

As we resume this Alternate History, the Empire is beset by tumult and dissension. Many of the problems are political, some are social, and some are economic.

Politically, the Middle East is by far the least-stable corner of the Empire. Ideological conflicts have produced an unstable political landscape full of ongoing wars and temporary peaces. At the start of 1978, Lebanon was in a state of Civil War, and an attempt to invade Afghanistan had turned into a 7-year bloody standoff for the Empire. A moderate had been elected Prime Minister of Israel, leading to a negotiated peace with Egypt; currently hostile are Syria, Iraq, Libya, Algeria, South Yemen, and Afghanistan. Another of the trouble spots, Saudi Arabia, had recently had a change in head of state, with a Moderate King succeeding a Militant. Pakistan, once the most loyal of Imperial members, had slowly disintegrated politically to such an extent that it had been placed under direct Imperial Control, with neither political party trusted to conduct an honest election. Middle East-based terrorists are an ongoing problem for the Empire.

Second only to the Middle East is the balance of the African states. Idi Amin’s regime is coming under increased public attention as his record on civil rights begins to emerge. The Empire has already placed an arms embargo on South Africa in protest over the Apartheid policy. Somalia had invaded Ethiopia, while Rhodesia had commenced an attempt at creating a unified African Tribal political entity based on the Israeli model. Several other kingdoms have attempted or threatened secession or revolution, with varied results.

Early indications suggested that South America was heading down the same political path as The Middle East and Africa. Most recently, a coup in Argentina had removed Prime Minister Isabel Peron after she blocked investigation of Electoral Fraud allegations leveled against her, while an attempt by Chile to secede from the Empire had been blocked by the application of “The Pakistan Resolution”. In general, the continent is viewed as a remote backwater, of little overall significance.

Europe also teeters on the edge of political disintegration, largely resulting from the overwhelming control of daily life within the Empire by the Civil Service, which the Empress Elizabeth saw as the solution to all her problems for most of the first 25 years of her reign. This has effectively left elected representatives powerless to implement changes in policy not approved by the Civil Service, whose first rule is to protect themselves and their positions. Public unrest is at unprecedented levels as a result.

Unconventional attempts to find solutions are beginning to surface; in Spain, the King is also the Prime Minister, a situation viewed as a one-off – but one that will demand closer scrutiny should King Carlos manage to rein in the Public Service. In the meantime, Elizabeth has at least forced the Civil Service to accept the principle of dismissal for incompetence. Northern Ireland is also a trouble spot, where Mao-backed guerillas have committed a number of terrorist acts in support of demands for an independent voice within the Empire.

North America, dominated by the USK†, has become a problem of an entirely different nature to the Empire; as its strength has grown and that of Europe has waned, they have begun to dictate various aspects of Imperial Policy. The Americans are consumed by a particular arrogance that reflects their status as the strong right arm of the Empire: they have slowly become the Political and Popular Cultural leaders of the world, and they know it. Bringing them to heel has so far proven almost impossible. Only the presence of the renegade Central American Kingdom on their borders has so far kept them in check.

† USK= Kingdom Of The United States Of America. Refer to previous chapters of this Alternate History.

Socially, other problems remain unsolved throughout the Empire. Youth countercultures and the Generation Gap have opened a divide between the Middle-aged majority and their children. Gender and Racial inequities are slowly easing in most corners of the Empire, though some areas remain backwaters of discrimination. The criminalization of Narcotics has generated an escalating crime wave by addicts which society has proven helpless to control.

Even more turbulent is the Imperial Industrial sector. While the issue of union corruption has receded into the background, it remains an ever-present background element. The union movement has become a breeding ground for politicians, just as the Civil Service has become the breeding ground for Peers. Because the Peerage also controls big business, and the Civil Service effectively controls government policy, these two groups are in an overwhelmingly strong position; only the protections of Common Law prevent their total control of the Empire.

Because the Empire was reaching saturation point in the development of known resources, the inherent weaknesses in the 20th century economic models had made inflation an ongoing crisis; this enabled the combined power of the Peerage & Civil Service to clamp down on wages, leading to industrial action on a broad front. The Empress was aware that the Lower House and the Union Movement were her best weapons against the rampant power of the Peerage, but it was a weapon she dared not use, for it threatened the Empire with total economic collapse as a byproduct. Only the Coal Act, which defined industrial actions that interfered with “Essential Services” as a form of terrorism, had so far prevented the cessation of industry altogether.

Overshadowing all these internal crises was the ever-present threat posed by the Mao. The non-human rulers of Asia possessed technologies which, for all their gains in scientific knowledge, remained as unfathomable and inscrutable as ever; while science was capable of analyzing and identifying the applications to which this technology was put, as shown by the discovery of their ability to control the weather, the fundamental operating principles remained cloaked in shadows, the subject of equal parts speculation, assumption, and prejudice. Until the invasion of the rogue state of Afghanistan, the most significant wars of the last two centuries had been fought with the Chinese masters of the Asian continent or their allies. While of late, progress had been made in establishing accords and protocols with the Chinese and their shadowy ruling class in summit talks aimed at achieving specific goals to the benefit of both, they remained the most significant single threat to the ongoing existence of the Empire.

Only beginning to emerge as problems to be solved in the latter part of the 20th century were the environmental consequences of the massive industrialization of the last century. Although the full scope of the problem is not yet appreciated, some progress has already been made, with Business held liable for ecological damage resulting from their operations – in theory. In practice, these laws have just failed their first real test, following the first recorded ecological disaster, centered on the town of Seveso (near Milan) in Northern Italy, devastated by the accidental release of poisonous dioxin gas from a nearby pesticide plant. By reaching private settlements with those directly affected, the Peers involved had successfully prevented their testimony; while this effectively amounted to the commission of even more serious crimes, without the testimony of those receiving the settlements, the case was legally hopeless. In effect, a criminally-negligent administration used wealth to reduce very serious charges to a rap on the knuckles – at an expense far less than a legal defense would have cost them. The peerage had finally found a way around Common Law, the only thing that had been keeping them in check….

Joshua Nkomo. Photo by Robin Wright courtesy The Christian Science Monitor and the Alicia Patterson Foundation, licenced under the creative commons 3.0 unported licence.

1978

The Rhodesia plan for a united “Black African Nation” was rejected by black leaders Joshua Nkomo (leader & founder of the Zimbabwe African People’s Union) & Robert Mugabe (the Secretary General of that organization, who had been imprisoned as a Political Prisoner since 1964) in March, as the Empire declared it illegal under Imperial Law; immediately, guerilla warfare increased dramatically as the “Patriotic Front” attempted to force moderate Black Africans to reject the plans. In the midst of these developments, Somalia accepted defeat and withdrew from its invasion of Ethiopia.

This was a particularly bloody month; it also saw a PLO attack which killed 11 Israelis, and an invasion of southern Lebanon by Israel in response. In mid-year, Islamic fundamentalists rioted in Tehran calling for the removal of the Shah (King), whose policies of modernization were at odds with the religious fundamentalists. Civil Unrest and violent demonstrations would lead to Martial Law and a military government by the end of the year.

The following month, Ahmad al-Gashmi, the President of North Yemen, was killed by a bomb. Two days later, the same extremist faction assassinated Muhammad Ali Haitham, the Prime Minister of South Yemen. With tensions mounting, the Empress personally interceded with the leaders of Egypt & Israel; two weeks of face-to-face negotiations in Buckingham Palace led to the Buckingham Accords, which formally ended 30 years of hostility between the two.

Terrorism remained an ongoing problem. Former Italian PM Aldo Moro was kidnapped by Red Brigade terrorists; this was the first international recognition of the group, whose goals were the restoration of the Roman Empire. They did not want to be rid of the Empress so much as they wanted to be free of Her civil servants; although it had never been done previously, they had no problem with the concept of the one Empress being head-of-state of multiple Empires at the same time. The proposal was unilaterally opposed by all concerned as inherently unstable; inevitably there would arise an occasion when the Empress would be called upon to favor one over the other, destroying the loyalty of the people slighted.

1979

Whenever a society experiences rapid expansion of knowledge, watershed years have a tendency to occur more frequently. The sum of human knowledge in the Empire was now doubling every 25 years, and even experts were finding that they could not master the entirety of their chosen general subject, but were increasingly confined to specializations. Synthesis of new approaches by collecting a disparate group of specialists in relevant fields – the think-tank – would play an increasing role over the next two decades.

Short-term consequences of this expansion of knowledge meant that paradigm shifts in perception occurred more frequently – and with each, ‘acceptable behavior’ was redefined. The Generation Gaps were widening. 1979 was recognized even before its commencement as just such a decisive year.

The Mao Summit Talks

January 1st 1979 was touted as a day of hope for all mankind, as ongoing diplomatic relations with the Mao were agreed to for the first time. The breakthrough came with the begrudging political acceptance by the Imperials that the Chinese Empire was the equal of their own system of government. It was hoped that through greater understanding and respect for one another, a fourth Global War could be avoided. Nor were the diplomatic concessions one-sided; the Mao had to swallow their own pride somewhat and acknowledge that the British Empire had grown to the point of achieving parity and equality with their own culture, and were worthy of respect.

However promising the achievement of mutual recognition, it did not erase the fundamental differences between the two regimes. They had different cultures, different technologies, different religious beliefs, and different philosophies. The Mao regime emphasized the comfort and security of their citizens, at the expense of their independence; while the British Empire stressed personal achievement, social mobility, and the maximum amount of freedom for its citizens, at the expense of social guarantees of prosperity. The poorest citizens of the Mao regime were incalculably better off than the homeless and destitute of the Empire, but the wealthiest of the Imperial Peers possessed a luxury unheard-of within the Chinese borders.

The Mao were slow-growing, deliberate, and methodical; already plans were underway that would not reach fruition for centuries. The Empire, in comparison, was explosive in growth, moving into new areas long before the old was fully established. The results were a much larger Society subject to perpetual growing pains, and one which perpetually needed new areas to grow into. Many of the social and psychological problems that were beginning to emerge were analogous to cabin fever, the result of a confinement and bottling up of that drive to explore. Escapism, in many forms, became an increasingly-prominent feature of literature and mass media; in the past, the youthful vigor and drive had been marshaled and directed into exploration and colonization, but with nowhere remaining to go, new forms of diversion were needed to consume that energy, and media providers who saw this as an opportunity for profits were eager to take advantage of the need.

The Mao were not without problems of their own; slow to change, slow to react, slow to integrate new ideas and new discoveries. It was a certainty that progress of all sorts – literary, social, and scientific – was ponderously slow. If the Empire had now achieved Parity with the Mao, in a century, the Mao would be as antiquated in capabilities as a Victorian Army faced with the best military capabilities of the modern day, or as the Native Americans had been against the western settlers who confronted them during the conquest of North America. These facts did not change human nature; the citizens of China were just as ambitious and desirous of luxury, just as caring for their children, as were their Western counterparts, and their youth possessed just as much excess energy. The Mao focused this energy into an obsession with precision and ritual; the average Mao citizen participated in over a dozen ceremonies and rituals each day, and dissipated the remainder through an increased reliance on manual labor. But the price of this solution was a stultification of their society, a reluctance to innovate when conventional solutions were no longer sufficient.

A few philosophers dared to suggest that both were extremist views, forced down mutually-exclusive social developments by the presence of the other; the optimum social solution would be somewhere in between, a blending of the British drive to explore new ground with the Mao ability to make maximum benefit of what resources they had available.

Donald Perisque Summerkinde, in his landmark 2032 historical and social analysis, ‘A Romanesque Myopia’, compared both societies with that of the long-past Roman Empire, finding many analogies for each to ponder.

The Roman Empire had been limited in size by the nature of its administrative and economic systems, while the limitations that faced its modern-day equivalents were essentially geographic in nature – there simply was no new territory left to gain, save by means of hostility against the other. But the consequences were the same: each had found its own form of social degeneration and decline, inevitably manifested most strongly by those with the greatest excess of energy at their disposal, as a rebelliousness against whatever had been fashionable a decade or two earlier.

This, he argued, was the true cause of the rise of The Teenager as a social and marketing force. In both societies, the excess energy was manifested and consumed by new means of artistic expression, usually condemned by the generations prior to theirs as “barbaric noise”.

Summerkinde also compared Mao society with that of the North American natives, and came to the conclusion in persuasive fashion that the two were more alike than had been generally realized; the study of Amerind culture would thereafter become an accepted part of the curriculum for the training of diplomatic personnel, and surviving tribal members who had fought so hard in the late 20th and early 21st centuries to preserve their culture suddenly found themselves rewarded with high diplomatic credentials. The irony that a people who had been lied to and deceived so often, and been subject to so many broken treaties and promises, were now the leading negotiators of such treaties and promises, was not lost on them. Some consider it Coyote’s grandest jest.

The Ayatollah Khomeini, Photo by Aleain DeJean, taken 5 February 1979. Photograph is in the public domain in Iran, its country of publication. This photo has been edited, click on the link to see the original and the terms of use.

Rise Of The Modern Theocracy

Internally, developments were far less promising. Faced with near-universal revolt, the Shah of Iran fled to Egypt even as troops were staging to arrest and imprison him. Within two weeks a Theocratic regime led by the once-exiled Ayatollah Khomeini had seized control, and Iran joined the ranks of those hostile to Imperial control.

Harsh laws, based on Ideology instead of democratic principles, began being implemented daily. For the rest of the year, Iran would be in turmoil as the new state sought to override the protests of those disenfranchised under the new regime; in November, terrorists seized the Imperial diplomatic headquarters, taking over 100 hostages, in protest at Imperial “meddling” in the Middle East.

The promise of an African Peace

African developments at least showed the possibility of peaceful outcomes to ongoing problems. Nationalist troops aided by Tanzanian soldiers drove Idi Amin from office in March, reestablishing normal relations with the Empire, while in Zimbabwe the parliament voted overwhelmingly to support the enfranchisement of a predominantly black government. The two-year plan for African Black Unity had failed to be accepted outside of the Rhodesian borders, thanks in part to opposition from within the Empire (read: the Civil Service / Peerage), but the developments in Uganda suggested that this was more because it was ahead of its time than from any real impracticality.

Photograph of the Three Mile Island nuclear power generation station. The reactors are in the smaller cylindrical buildings with the rounded tops. Photograph by the United States Department Of Energy, 1979. Click on the thumbnail to see the full-sized image.

The march of progress

In October, the Imperial Health Office declared that after a 22-year campaign, smallpox had at last been eradicated. In hindsight, this was the height of irony; just as the age of science appeared to be drawing to a close, it had begun delivering on the promises it had made.

Unfortunately for the increasingly polarized society, popular sentiment was more in tune with the panic created by a minor failure at the Three Mile Island nuclear reactor in the US; the radiation released was less than the dose received during a dental x-ray, or from 8 hours of television viewing, but these facts did nothing to quell public hysteria.

The Tabloid Media

This event was a turning point in journalism within the Empire, marking the rise of sensationalism over substance as a guiding principle. While the experts recognized that the public trust won by Woodward & Bernstein and others of their ilk had been betrayed, the integrity of the news media discarded in the choice of flash over substance, this realization would be slow to come to the public at large. The media barons – Peers all – had in effect seized control of the public, and through the public, the branch of the government designed to keep them in check. The Empress’ task of regaining control of her Empire had been made that much harder.

She still controlled the courts (though the judicial process had been at least partially derailed by the application of money and the prospect of rewards of privilege and peerage), and she still controlled the Military (who were dependent on the Peerage for supplies and armaments). But without an independent Media, the Peerage would tell the public what to think – and Public Opinion would tell the Lower House to support the true Peerage position (the Upper House would often adopt a seemingly antagonistic position, arguing over trivial details, while the substance of what they wanted came to pass). With both branches of government united, policy was now the province of Big Business. The descendants of the Barons had at last won the battle with the Throne.

Or so they thought.


With The Right Seasoning: Beyond Simple Names


This entry is part 4 of 11 in the series A Good Name Is Hard To Find


Welcome to “part 3a” of this series on names and naming things – and finding the right choice. Today’s post was actually intended to be part of the previous entry in the series, but the subjects of Mononyms (got it right this time, thanks again elijah!) and bi-structured names just sort of grew… a lot.

So, we’re still talking about Name Structures, and there is a still a lot of ground to cover, so let’s dive right in…

Tertiary Names

In our society, Tertiary names come in three principal varieties: Middle Names, Maiden Names, and Addenda. This barely scratches the surface of the potential value of such names.

Middle Names

With increasing populations and rising levels of communication, two names can become insufficient to identify a specific individual. How many Paul Smiths are there in the world? How many John Jones? The practice of Middle Names usually begins in those with sufficient prestige that many members of the one family are known publicly throughout the land – the aristocracy, the wealthy, and the nobility. To preserve and utilize the prestige that past family members have accumulated, these often have very similar Christian Names and (of course) the same Surname – so some means of identifying two different individuals within the family becomes necessary, especially since these groups tend to have greater longevity and hence a greater probability of two like-named individuals being alive at the same time.

Another way of looking at this trend is that as Christian name choices become relatively constrained, the flexibility and freedom that most citizens enjoy with respect to Christian names needs to be transferred somewhere. It follows that in important families, most of the advice concerning choice of Christian Names in the previous article actually applies to the Middle Name of the individual, while the Christian Name becomes an adjunct to the Surname.

I once read – and I no longer recall where, so unfortunately I can’t cite the reference – that it was only in the 20th century that middle names became routine and common. That, if true, simply speaks to the power of Christian names as a means of unique identification, especially when coupled with an address or locality. Even now, it is not all that common for people to emphasize all three of their names – though the trend would be for this to become more common in the future if the population continued to increase.

Ethnic Alternatives
Middle names are not the only solution; they are principally a Western-society approach to the problem. Chinese Names, Arabian Names, and (some) Indian Names use an entirely different approach, for example. In fact, this seems to be an excellent place to point to the excellent series of Wikipedia articles on Ethnic Names, which I wish I had discovered many years ago (assuming that it existed then)!

In particular, the Chinese approach to naming reflects the dangers inherent in using English as a cultural basis for assessing the limitations of language. Because the Chinese written language contains so many characters (3-4,000 in general usage), they will not reach the point of needing additional names beyond their current three-character (three-syllable) system for centuries, even if their population growth were to continue unchecked.

Nevertheless, the majority of our readers – and of game settings – are Western in derivation. So this series will continue as though the Western approach is the ‘natural’ solution, even though I – and now you all – know better.

Middle-Name Emphasis

That means that, within the context of the general population, it is possible to infer things about a character simply from the emphasis he or she places on the middle name. In any pre-20th century Westernized setting, emphasizing a middle name is a mark of arrogance. Where it may be necessary as a point of identification, it would be more common for characters to reduce the middle name to an initial, and this continues in formal address to this day – my bank uses this format to refer to me, for example. Consider the (fictitious) name of Patrick Jonathon Bellweather, which I will be using as an example throughout this section: ignoring the middle name and reducing the Christian name gives a fairly typical name, “Pat Bellweather”. Slightly more formal is “Patrick Bellweather”. More formal again (in a modern context) or – perhaps – more rebellious is “Patrick J. Bellweather”. This same name, in a sixteenth-century setting, carries a distinct overtone that is diminished or lacking completely in the modern context.

Another approach, especially where first names are controlled by inheritance issues and eccentric demands, is to reduce the Christian name to an initial and to use the middle name as the Christian name. This conveys the same overtones of wealth and authority, but without the same level of formality. Compare “Patrick J. Bellweather” with “P. Jonathon Bellweather”. Because this particular approach is no longer as popular as it once was, modern usage carries overtones of traditional formality, while it would not have been all that remarkable 150 years ago.
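
Since these variants follow a mechanical pattern, they can be sketched as a tiny helper – useful, say, for producing NPC names at different levels of formality. This is purely illustrative: the style labels and the crude “first three letters” diminutive rule are my own assumptions, not anything drawn from real naming practice.

```python
# A toy sketch of the formality variants discussed above, using the
# article's example name. The style labels and the crude diminutive
# rule (first three letters) are illustrative assumptions only.

def styled_name(first, middle, last, style):
    """Return one formality variant of a three-part Western name."""
    forms = {
        "casual":     f"{first[:3]} {last}",           # "Pat Bellweather" (crude diminutive)
        "standard":   f"{first} {last}",               # "Patrick Bellweather"
        "initialed":  f"{first} {middle[0]}. {last}",  # "Patrick J. Bellweather"
        "middle-led": f"{first[0]}. {middle} {last}",  # "P. Jonathon Bellweather"
    }
    return forms[style]

for style in ("casual", "standard", "initialed", "middle-led"):
    print(styled_name("Patrick", "Jonathon", "Bellweather", style))
```

A GM could extend the dictionary with setting-specific styles – the point is that each rung of formality is a simple, repeatable transformation of the same three components.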

In particular, where one name is unisex (or has a masculine equivalent that differs only in spelling, if at all), these approaches were often used by women to disguise their gender when participating in male-dominated fields of activity, especially literature and science, from the turn of the 20th century all the way through to the 1950s and 60s.

The Impact Of Culture

The preceding examples make clear the extent to which cultural attitudes can shape names and naming conventions – and hence, the capacity of a given naming convention to reflect a character’s social and cultural background. The name is all about where the character is coming from – his or her reaction to those origins is a key component of the character’s personality.

Equally importantly, by placing a group of characters and their current circumstances within a visible social context, a GM can generate naming conventions and give them an original context simply by persistent usage, adding to the uniqueness and verisimilitude of an original society within his game. The preeminent exponent of this approach remains J.R.R. Tolkien, with his many imitators walking in his footsteps. The Lord Of The Rings and The Hobbit employ this approach throughout, and so accustomed is the human mind to detecting such nuances that we don’t need to be told that “Aragorn” and “Arathorn” are related – the names themselves do most of the work; all we need is the specific relationship. This is also true of the Halflings, the Dwarves, the Elves, and even the Rohirrim – names are used to bind them collectively into cohesive social entities within the stories.

In modern times, naming conventions and name sources have become so homogenized that this approach leaps out at the reader, almost hitting him over the head with the cultural indicator to make sure that he doesn’t miss it.

Maiden Names & Regnal Names

And speaking of the impact of culture, consider the practice of Maiden Names and Regnal Names. Both use a change of name to symbolize a change of social status – whether that be by marriage or ascent to a throne (Civil or Ecclesiastic).

Prior to the mid-20th century, the change of name on marriage was symbolic of the domination of women in society by men. In the course of the latter part of that century, this attitude was challenged by extremist proponents of Women’s Liberation, but even as they did so, the social convention was changing. These days it is viewed as a commitment to the union, not a gesture of submission; and some of the practices discussed below in “Decadent Naming Structures” such as hyphenating the surnames have also become accepted practice, as has the option of the woman retaining her maiden name.

Inverting this traditional practice is one of the most obvious ways of using naming conventions to signal a matriarchy. It was thinking about this that led, in part, to the “syllable exchange” of Ullar’s society (whose naming conventions were discussed in the previous part of this series).

Addenda as Tertiary Names

Patrimony, Lineage, and Ancestry are often displayed through tertiary names, and in far more traditional ways than simply playing around with Middle names. In fact, western society has three means of doing so, and many game cultures import a fourth from other sources.

‘Jnr’ is applied when a child has exactly the same name as a surviving or famous ancestor – most commonly the father, but sometimes a grandparent or older relative of sufficient fame. It is implicitly (and sometimes explicitly) coupled with ‘Snr’ for the elder – so it is possible for three generations within a single family line to have exactly the same “primary name” without confusion (Senior, no suffix, Junior). Rarely, “Senior” is used as an alternative, usually denoting a case in which the younger generation has become famous despite coming from (relatively) common roots.

One of the easiest (and perhaps best) ways of giving a society a different feel is to preserve (and make more common) these practices while translating the suffixes into the language of the society, or using a synonym such as “elder” or “the elder” – effectively, using the tertiary name as a title. Titles are a subject we’ll get to in a little while!

Junior imparts a sense of youth, innocence, and even naivety to a name, while senior imparts a sense of seniority, maturity, and even gravitas. Compare “Patrick Bellweather Jnr” to “Patrick Bellweather Snr”, picturing the image that each name brings to mind – it doesn’t matter if “Jnr” is 81 years of age (with a still-older father); the first image most people will have is of a youngster, early 20s or less, while “Snr” brings to mind a middle-aged man.

The third approach that is common is to employ numbers. Where this familial naming convention extends beyond two generations (or three at the most), this is the accepted practice. This is a technique for demonstrating lineage that avoids the immediate connotations of “Junior” and “Senior” while implying a larger family history. “Patrick Bellweather IV” could be of any age – but the emphasis placed on lineage reeks of old money and family history, even as the (relatively common) primary names indicate working class roots. The name itself is a capsule history lesson.

The final approach that fantasy cultures often assimilate from other sources is the use of ‘bridging words’ to tell the story of the family in condensed format – the equivalent of “son of”, or “of the”.

All these are useful ways to reinforce character descriptions, adding to the backstory of a character without wasting time on descriptive irrelevances – a shorthand approach, if you will.

The Lack Of Female Equivalence

Once again, the male-female social dichotomy that is part of Western history has an influence, in that there are no female equivalents to Jnr and Snr. In part, that’s because the female was expected to change her name when she married, but it is also in part due to inheritance precedents, which generally favored males. Even money bequeathed to a woman was likely to be placed under the control of a nominated male administrator, be it a brother, an uncle, a legal representative (conservator or trustee), the local priest, a family friend – almost anyone short of a passing stranger, really. There have even been a few cases where celebrities and trusted political figures have been named as trustees without ever having met, or known of, the decedent or heir. In general, these are refused, though legend and rumor have it that a few have been accepted – but it is equally possible that these are examples of Hollywood scriptwriting!

The general method of distinguishing “Marie Obatelli” from her mother remains with another change that occurs with marriage, the change of title. One is referred to as “Mrs.” while the other is “Miss”, “Ms.”, or uses no title at all. It is also relatively uncommon – for the reasons espoused in the preceding paragraph – for female children to be given the same middle name as their mother, thus using the middle name for its purest purpose. If the use of “junior” and “senior” had remained as popular in the modern day as it was 50-60 years ago, it seems virtually certain that some female equivalents to those terms would have entered the lexicon, but the fading from popularity of the masculine terms left little demand for the creation of a feminine version.

Any society with anything approaching gender equality and gerontocratic tendencies (rule by the elderly) – such as most fantasy Elven cultures – would either forbid direct name inheritance, have some other naming structure, or need both male and female equivalents of “Junior” and “Senior”.

Bridging Words

The use of bridging words is not all that uncommon. Spanish has “de la”, which means “of the”, or just “of”. “De” and “du” are also “of” in French, and prepended to many surnames, as is the Italian “di”. “De” also recurs in Portuguese. German for “of” is “von”, and I’m sure that it is immediately recognizable as a part of names from that part of the world, as is the Dutch “van”. Finally, the Irish “O’” – as in “O’Brien”, “O’Kelly”, and so on – denotes “descendant of”, while the Scottish “Mac” means “son of”.

Some cultures use patronymics as suffixes – these are the syntactic equivalent of bridging words. The Scandinavian nations are especially prone to this practice.

There are an almost unlimited number of relationships that can be acknowledged through bridging words; the only restriction is the imagination of the GM. These should always reflect the society in which they are found (or vice-versa) – a traditional meritocracy might well have “student of” and “teacher of” as bridging words! They won’t look so strange when they are translated into an appropriate language – though these will usually yield polysyllabic results, and if there is one thing all the real-world examples have in common, it’s that they are short.

Monosyllables tend to be the early words in a language, expressing things that are fundamental to the lives of the primitive cultures from which they derive, or that those cultures judge important – so the use of bridging words in this way implies a fundamental trend in the society’s history toward valuing the relationships the bridging words describe. Anything too long would eventually be “worn down” by regular usage. Take “student” and “teacher” – in most languages, these are immediately recognizable to English-speakers when translated. But Icelandic offers “Nemandi” and “Kennari” as translations. “Nem” and “Kari” would be entirely appropriate “condensations” of such roots after centuries of usage – “Jon nemMagnus Eriksson” would be “Jon, son of Erik, student of Magnus”, while “Magnus kariErik Vigfusson” would be “Magnus, son of Vigfus, teacher of Erik”. This usage also suggests a one-to-one apprenticeship system similar to what many fantasy games have for Wizards.

By all means, strive not to be literal. By far the easiest way to simplify a relationship to a monosyllable is to use a metaphor prior to translation – “light of”, “fire of”, and the like will work well in just about any language. “Jon Eldur Erik” works quite well (“Jon, fire of Erik”), as do “Louis foc de Vega” (“Louis, fire of Vega”), “Helena luz de Ruiz” (“Helena, light of Ruiz”), and “Marcel lumi Versoire” (a condensation of the French for “light of”).
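
The mechanics above – condense a bridging word, then weld it to the elder’s name – can be sketched in a few lines, if you want a repeatable tool for a whole society of NPCs. The condensed forms below follow the Icelandic examples in the text; the lookup table is my own convenience.

```python
# A minimal sketch of the bridging-word technique described above.
# Condensed forms follow the article's Icelandic examples ("nemandi" ->
# "nem", "kennari" -> "kari"); the composition follows the article's
# "Jon nemMagnus" pattern.

BRIDGES = {
    "student of": "nem",   # condensed from Icelandic "nemandi"
    "teacher of": "kari",  # condensed from Icelandic "kennari"
}

def bridged_name(given, relationship, elder):
    """Compose a name of the form "Given bridgeElder", e.g. "Jon nemMagnus"."""
    bridge = BRIDGES[relationship]
    return f"{given} {bridge}{elder.capitalize()}"

print(bridged_name("Jon", "student of", "Magnus"))   # Jon nemMagnus
print(bridged_name("Magnus", "teacher of", "Erik"))  # Magnus kariErik
```

Populating the table once per culture keeps every generated name consistent, which is exactly what makes the convention read as a real social institution at the table.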

An Orcish Diversion
Just for the practice, let’s try applying these principles to an interpretation of Orcish society – even though it means briefly skipping ahead to some of the content from later in the series on manipulating languages.

Orcish male names would tend to be simple and violent in nature, and fairly guttural – “Crush” and “Kill” and “Axe” and “Blade” and “Make Bleed” and the like. The most guttural real-world languages include German, Russian, and Hungarian. Just because we haven’t mentioned it before, let’s go with Hungarian as the basis for our fantasy Orcish and alter the words as necessary or desirable. Orcish female names would be more prosaic, and probably related to natural phenomena that the Orcs encountered, like “Running Deer” (somewhat Amerind in flavor).

There are two ways children can be perceived within most Orcish societies: As weapons to be hurled against the enemy (sons), or as shields against time that will breed more weapons (daughters). [Side-note – this immediately suggests that the women are the keepers of culture, craft, treaties, records, and the like. It is arguable whether or not – in light of this side-note – inheritance would be through the mother (the stay-at-home keeper of the culture) or the father (very Nordic, always looking for trouble somewhere). The best solution when this is the case is to try it both ways and see what looks best. Or perhaps to take a third choice: daughters acknowledge Mothers, sons acknowledge Fathers. I like this option, so that’s what I’ll choose.]

So: “Kill, blade of Sword” becomes “Oldmeg Penge Kard”, which we can simplify to “Oldmeg pengKard”. “Sunshine, shield of Flower” becomes “Napzutes Pajzsa Virag”, which we can simplify to “Napzutes ZsaVirag”. Both sound like perfectly acceptable names, and furthermore, names that seem to have a cultural depth and realism behind them that is otherwise hard to convey, especially in so short a statement. You could waffle on for five or ten minutes of narrative about Orcish society without sounding anywhere near as convincing.

Decadent Naming Structures

When you are a person of influence, you tend to marry into other families of significance. And, when you are a person of influence, you occasionally need to remind people of the power and authority at your command. Using your name to do so is one of the more subtle techniques available, and one that is open to anyone – whereas philanthropic personalities can’t readily employ ostentatious displays of wealth, which also tend to work against credibility in business negotiations. So there is continual pressure amongst the wealthy and powerful toward what I describe as “decadent naming structures”.

Hyphenated Surnames

The most common approach employed in the early-to-mid 20th century was the hyphenated surname. With better communications and the advent of the PR machine, this has become less necessary in more modern eras, but prior to the rise of television for the masses it was frequently the best approach for advertising an overt connection between two major power blocs.

To see how effective this is, let’s try adding a couple of hyphenated names to our usual test subject, “Patrick Bellweather”. Picture the character that is so named in your mind, and then compare that image to:

  • “Patrick Bellweather-Rothschild”
  • “Patrick Bellweather-Hilton”
  • “Patrick Carnevon-Hughes-Bellweather”

Now, if confronted with one of those hyphenated names, how would your impression of the Bellweather family change? That’s right: all of a sudden the entire family is given a degree of cachet and significance that is beyond the reach of someone who is just a “Bellweather”.

The Significance of Hyphens
The hyphenation indicates that the character is important – but what does that actually mean? In our culture, derived from that of Western Europe, the significance is attached to wealth or political power, because those are things that we value – even if that is only at the insistence of those who possess wealth or political power. In a different culture, those values would be expected to differ. Wisdom, physical strength, athletic prowess, even seductive capacity and hedonistic appetite – choose something appropriate to the culture that you have created.

Extended Names

The wealthy of some other cultures take the principle of hyphenated names a step further, and use their names to tell a story. The use of bridging words in names is a sort of ‘watered down’ version of this practice, though I have no idea if there is an actual cultural connection. Traditional middle-eastern cultures are strong proponents of this naming practice.

For example, consider “Muhammad Saeed ibn Abd al-Aziz al-Filasteeni” – which translates to “Praised Happy, grandson of the Palestinian slave of The Magnificent”. Muhammad (Praised) is the name of the character, Saeed (Happy) is the name of his father, Abd al-Aziz (Slave of The Magnificent, which is one way of referencing God in Islamic cultures) is his grandfather, and the family are Palestinian in origin. So what we have here is the grandson of a Priest.

More than anything else, this shows the extent to which religious thought dominates the society in question – the most important thing about the character is not anything he might do, nor any achievement of his father, but that he had or has a grandfather who dedicated his life to God. This verges on the obsessive, in western eyes – which fits our perceptions of the culture in question.

Extended Names In Games
By now, you can probably predict what I’m going to say. To employ this technique for characters in a game, simply pick something that your society obsesses about, compose a relevant description, then use an appropriate language to develop names that reflect that society.

Dwarves are probably the perfect example of this approach, obsessed as they are with mining and the earth.

So, “Son of the brother of the digger of silver on the western slope of Mount Implacable”.

Trying that phrase in different languages eventually turns up Basque, where it reads “Zilarrezko Digger anaia Son mendiaren Implacable mendebaldeko malda”.

With a little tweaking, we get “Zilarrezko mugitzeko lurra anaia gizonezko umea mendiaren ez da gelditzen mendebaldeko malda”. Now, that’s a dwarvish name to reckon with!

In ordinary usage, this would be “Zilarrezko iloba gizonezko”, or “nephew of the silver man”.

Sounds pretty good to me.
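
If you adopt this approach, it’s worth keeping the ceremonial mouthful and the everyday short form paired, so they never drift apart between sessions. A minimal sketch – the data structure is my own convenience; the Basque strings are the example above:

```python
# A sketch pairing a ceremonial extended name with its everyday short
# form, so a GM can switch between them as the occasion demands.
from dataclasses import dataclass

@dataclass
class ExtendedName:
    everyday: str    # used at the tavern
    ceremonial: str  # used at clan moots and formal introductions

    def introduce(self, formal=False):
        """Return whichever form suits the social occasion."""
        return self.ceremonial if formal else self.everyday

digger = ExtendedName(
    everyday="Zilarrezko iloba gizonezko",
    ceremonial="Zilarrezko mugitzeko lurra anaia gizonezko umea "
               "mendiaren ez da gelditzen mendebaldeko malda",
)
print(digger.introduce())             # the everyday form
print(digger.introduce(formal=True))  # the full ceremonial form
```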

Abstract and Descriptive Names

Of course, there are a lot more things – and people – that need names. Supervillains and heroes pose a particular challenge, as do things that people name – supernatural monsters, ships (both naval and star-), organizations, places, projects – and adventures. These are all slightly different problems, and some people have trouble with them. In general, they can all be categorized as ‘Abstract names’ or ‘Descriptive names’, and those categories are what the following sections are all about.

Naming Superheroes & Villains

Superhero and villain names are all about projecting a dramatic identity in a single word – usually a noun, sometimes a verb, and sometimes accompanied by a title. The simpler and less intelligent the character, the simpler and more straightforward the name usually is – but sometimes names are bestowed by the media, so this is not always a reliable guide.

More intelligent characters have a wider palette to draw upon, and some choose a nom de plume which represents a subtle in-joke (which they never explain, but which makes them smile every time they hear it – useful for endearing yourself to the media). Others like to subtly reference their powers without blatantly advertising their nature. An example from my superhero campaign is “St. Barbara”, named for the patron saint of artillerymen, rocket scientists, pyrotechnicians, and all others who deal in high explosives, and who wields explosive energy beams (amongst her other abilities).

Others reference abstract qualities that (they believe) they represent, or national values, or simply have names that sound “cool” or “threatening” or whatever the image is that they want to put forward.

And then you get the really clever ones, who deliberately use their identification to mislead others as to the nature of their powers so that their enemies will underestimate them, or prepare defenses against the wrong things, or simply be steered away from some weakness that they would really rather not see exploited. One obvious example is a former PC in the supers campaign prior to the original Zenith-3 campaign, who went by the name of Behemoth to disguise the fact that he was both the gadgeteer and the brains of the team. It didn’t work so well when they became famous, but it did give him a decided edge in his early adventures.

So, how do I choose a superhero or villain name?
Taking into account the intelligence level & creativity of the source of the name, I start by considering the various factors and approaches listed above and choose the one that seems most appropriate.

Once I know the naming philosophy that the character is to embody, I can start listing possible names that express the concept of the character in the appropriate manner.

I then employ a thesaurus to find synonyms for all those potential names which are added to the list of potential options. I’ll also do a web search and check Wikipedia for more ideas.

Once I have about 10-12 items on the list, each new possibility gets compared to those already on the list; unless it is at least as good as those I already have, it gets left off. If it’s noticeably better, it replaces one of the lesser choices.

Unless the character is being named by the media or some other English-speaking source, the next step is to translate the list of potential names into the native tongue of the character.

Finally, I’ll go through the final list of contenders, one by one, assessing them for drama, pronounceability, and “appropriate overtones” – the subtle qualities that distinguish a workable name from an inspiring one.

(I was going to insert an example at this point, but this article is running a little late, so we’ll take that as read).
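
The shortlist step of this process is essentially a fixed-size “keep the best” filter, and can be sketched in code. The numeric scores stand in for the GM’s subjective judgment of each candidate; the cap of twelve and the “noticeably better” margin are assumptions drawn from the description above, not hard rules.

```python
# A toy sketch of the shortlist step described above: once the list is
# full, a new candidate only displaces the weakest entry if it scores
# noticeably better. Scores model the GM's subjective judgment.

SHORTLIST_CAP = 12

def consider(shortlist, name, score, margin=1):
    """shortlist is a list of (name, score) pairs, capped at SHORTLIST_CAP."""
    if len(shortlist) < SHORTLIST_CAP:
        shortlist.append((name, score))
        return
    weakest = min(shortlist, key=lambda entry: entry[1])
    if score >= weakest[1] + margin:   # noticeably better than the weakest
        shortlist.remove(weakest)
        shortlist.append((name, score))

shortlist = []
for i in range(SHORTLIST_CAP):
    consider(shortlist, f"candidate-{i}", 5)   # fill the list with so-so ideas
consider(shortlist, "Behemoth", 9)             # displaces one weaker entry
print(len(shortlist), any(name == "Behemoth" for name, score in shortlist))
```

The same filter works whether the candidates come from your own brainstorming, the thesaurus pass, or a web search – which is the point of doing the comparison before the translation step rather than after it.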

Ships & Starships

These are named for those who have historically represented the ideals of the operators; for those who have commissioned, constructed, or even designed the vessel (or for their relatives or pets); or for abstract qualities that symbolize those ideals.

For a military vessel, that generally means a famous captain or admiral; a shipyard or dockyard city, or another great city; a ruler or member of the aristocracy; or an abstract quality that reflects naval traditions or ideals, or the specific military role of this particular vessel.

Merchant vessels are named for famous traders or merchants, the trade routes taken by those traders, the merchandise that the vessel is to carry, a figure with whom the owners wish to curry favor, a city with a great trading history, and so on. They tend not to be named for abstract qualities, as these are not considered all that attractive – though many are named after wives or girlfriends or pets, or qualities like luck that the owners hope the vessel will enjoy.

Ships of exploration are generally named after famous explorers, after those who have commissioned or funded the expedition, for the abstract qualities of discovery or endurance, or for a family member of the owners.

Pirates, on the other hand, like booty, bawdiness, alcohol, freedom/liberty, and intimidating others. Their ships are often named accordingly, though sometimes they will choose a name that lets them claim innocence – at least in the eyes of a nation whose privateer they hope to become. All that having been said, it’s surprising how often a ship will be named for the figure nailed to the prow! (One of our regular readers, Ian Mackinder, runs a 7th Sea campaign; in the past, he’s run Traveller, Star Trek, and Klingons campaigns, so between them he’s worked with vessels in all of these categories. I’m sure he’ll enlighten/correct me if I’ve left anything significant out.)


From the other direction
I want to close this subsection by mentioning an idea presented in “My Enemy, My Ally” by Diane Duane (now getting hard to find, I’m afraid). The Romulans in this Classic Star Trek novel take the concept of sympathetic magic and apply it to the names of starships, believing that the ship’s subsequent history will reflect and be shaped by the name and the qualities it symbolizes – the crew of a ship named “Intrepid”, for example, will forget to feel fear, and so on. For this reason, Romulan vessel names are derived from a specific animal or weapon or the equivalent rather than from a general quality, which could otherwise overpower all common sense.

Next Time

Whew, out of space and time already! In the next part of this series, I’ll be talking about

  • Naming Places (including Inns and Castles)
  • Alien & Non-human names
  • How to create an Alien Language
  • How to appear to create an Alien Language
  • The Emphasis Of Inheritance
  • Fashions In Naming, and
  • The Importance & Usage Of Titles

Look for Part 5 of “A Good Name Is Hard To Find” in two weeks’ time…
