
Three Strange Places Pt 1: Cemetery Gates


This was originally going to be one monster post containing three locations that I have devised recently for different campaigns. I quickly realized that this was too ambitious, so this will be a trilogy of articles, one every 2 weeks.

This is an idea that hints at deeper connections in the game world, metaphysical relationships that tantalize with a glimpse of a more complex reality. It’s suitable for a wide range of campaigns but is, perhaps, most powerful in a Fantasy context.

Naming strangeness in Arkansas…
  • Perryville is the county seat for Perry county, which makes sense.
  • Lonoke is the county seat for Lonoke County, which also seems reasonable. But then we start tiptoeing into the Twilight Zone.
  • Bentonville is the county seat for Benton County.
  • Benton is the county seat for Saline County.
  • Nashville is the county seat for Howard County – but there’s no chance of it being confused with its more famous namesake in Tennessee, is there?
  • Russellville is the county seat for Pope County (not Russell County, there’s no such thing).
  • Pope County is not to be confused with Polk County (where Mena is the county seat) or Pike County (whose county seat is Murfreesboro – at least that’s fairly distinctive, isn’t it? Oops, there are three of them – one in Tennessee, one in North Carolina, and this one in Arkansas. Oh well…)
  • Clinton is the county seat for Van Buren County.
  • Van Buren, on the other hand, is the county seat for Crawford County.
  • Clarksville is the county seat for Johnson County, not Clark County (whose County Seat is Arkadelphia).
  • Mountain View is the County Seat for Stone County, Mountain Home is the County Seat for Baxter County. But no-one would ever confuse the two, would they?
  • Yellville is the county seat for Marion County, but there is also a Yell County, which has TWO county seats – Dardanelle and Danville.
  • On top of that, there is also a city named Marion which is the county seat for Crittenden County.
  • Yell County is not the only one with authority shared between two county seats – in the case of Franklin County, it’s Charleston and Ozark…
  • … Booneville and Paris are both county seats for Logan County…
  • … Carroll County shares power between Berryville and Eureka Springs…
  • …I mustn’t forget Arkansas County, which has both DeWitt and Stuttgart as County Seats…
  • …Sebastian County has Greenwood and Fort Smith…
  • …and in the case of Prairie County (which has a lot of ghost towns and unincorporated communities), it’s DeValls Bluff in the south and, to the North, the unincorporated city (one of two in the county) of Des Arc.

    For those that don’t know, an unincorporated area or community is a region not governed by a local municipal corporation. They may have a town council or other form of local government, or not, and it may be considered ‘attached’ to a larger region for various official purposes like a census.

  • Hot Springs is the County Seat for Garland County.
  • Malvern is the County Seat for Hot Spring County.
  • And, last (and possibly least), The County Seat of Sharp County is Ash Flat which should have anyone who knows anything about music deep in thought!

But those alone weren’t enough to get my creative juices flowing. Nor was it the long list of towns and cities that share a name with somewhere else (and are usually better known), like Nashville, Melbourne, and Paris. There are lots of these in Arkansas, but I don’t know whether they are more prevalent there than anywhere else in the world (it seems to be quite a common thing in the US, but it’s far from unheard-of here in Australia, either).

The Midways

Maybe the idea started with the Midways.

Lots of regions have a place named Midway, especially in the US.

You can generally expect one place called Midway in a given state or country (or some equivalent thereof), maybe even two. But when you discover three of them visible on the map at the same time – and you notice them – it gets your attention.

I put this together using a screen capture from Google Maps just to show my players.

What I found out subsequently was that this was the tip of a much larger iceberg.

Wikipedia, on this page, lists 11 states with one place named Midway: Colorado, Delaware, Nebraska, New Mexico, New York, Ohio, Oklahoma, South Dakota, Utah, Washington, Wisconsin, plus part of the geothermal areas of Yellowstone.

The same list contains 23 more states with multiple places named Midway: Alabama, Arizona, Arkansas, California, Florida, Georgia, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Minnesota, Mississippi, Missouri, North Carolina, Oregon, Pennsylvania, South Carolina, Tennessee, Texas, Virginia, and West Virginia.

According to this list, Arkansas alone has no fewer than 16 places named Midway (including one ghost town)! Or maybe that’s not abnormal in the US?

Well, let’s see. Alabama has 5, California 4 (including a mountain), Florida 5, Illinois 8, Indiana 6 – so far, that 16-count is sticking out like a sore thumb!

Iowa has 5, one of them a ghost town; Kansas 3; Kentucky 5; Louisiana 5; Minnesota 2; Mississippi 7; and Missouri, 4. No, that list of 16 is definitely remarkable.

When I showed my players, they were equally intrigued. One made a comment about getting confused about where you were, and that planted the initial seed – but it would take further discoveries to bring it to bloom.

The Cemeteries of Arkansas

That was achieved by the cemeteries.

Arkansas is home to 4224 of them. A lot of these cemeteries share a common name, or could be mistaken for having a common name.

Somewhere on the internet, I was misinformed that this was the greatest number per capita anywhere in the world (excluding temporary grave sites in war zones, and counting mass burials as only one site), but attempting to verify that claim has shown it to be false – Tennessee has more graveyards per 100,000 people than anywhere else in the world, with a total of 33,000 of them within the state.

The Searchable Database

But let’s get back to Arkansas. As anyone who’s been reading me for the last year or two knows, the PCs in my superhero campaign are in the process of converting a Mansion that I have positioned in Royal, Arkansas, having spent a couple of days on an extended road trip exploring the state.

In The Power Of Basic Utilities, I discussed the process of creating a searchable database of businesses and localities and sites of interest. That list deliberately excluded churches, landmarks, and cemeteries, for reasons that are too complicated to go into right now.

The Second Pass

But, in making a second sweep through the various relevant localities, having discovered that the scale I had been using left out entirely too many of the things that I wanted to list, I decided that selected landmarks needed to be included, and so did cemeteries.

The Lost Cemetery

I was about 95% through my data acquisition for the resulting additional sites when I had trouble finding a particular cemetery for a second time (I no longer remember which one it was). I had found it once to put it on my list, but finding it a second time to actually gather information about it simply wasn’t happening.

Google Search

I tried all the usual tricks that I had developed without success, until I was at what had become my last resort: a Google Search.

This had tracked down businesses that I had misnamed (2 or 3), businesses that had closed, businesses that had relocated, and so on – why couldn’t it find this particular cemetery?

Well, not only did it do that, it also provided links to a couple of really useful resources –

  • The Arkansas page of The US Cemetery registry, which sometimes has details about a site that you can’t find anywhere else,
  • And Roadside Thoughts, which has lists of all the communities in the US and Canada, but also lists of all the Cemeteries, divided by state or province. The list includes the county in which the cemetery is located and clicking on an entry takes you to a page with more information, including – crucially – GPS coordinates that can be plugged back into Google Maps.

Lost Cemetery – Found

This allowed me to locate the “lost cemetery” and discover that of all the functional zoom levels showing businesses etc., there was only ONE that showed this cemetery; the rest of the time, the name and marker were covered over by other points of greater potential importance.

There’s more where that came from. A LOT more.

It also revealed that there were a LOT more cemeteries in the target zone that were simply not showing up through Google Search (and that Google had a few showing that this source did not).

A lot of those resting places had the same name as ones that I had already processed – but I might have found 4 or 5 when there were twenty or more.

For example, there are six Adams Cemetery listings, plus Adams / Singer Cemetery and the Adams Chapel Cemetery. Sometimes, one county would have two or three with the same name – for example, Newton County has three named Curtis Cemetery.

Flash Of Inspiration

I’ll get to the why of it in a moment, but a flash of inspiration demanded that I work my way through their list, adding those names that recurred (and the occasional one that was too distinctive and intriguing to ignore) to my list.

So far, these additions total about 6 times the length of my original list – and right now, I’m in the middle of the O’s (Oak Grove Cemetery to be more specific), so probably half the names are still to come!

Almost everything in the list of place-name weirdness that opened this part of the article was compiled from the side-notes I made on these entries as I noticed the strangeness of some of the names.

My entries look like this:

    O’Neel Cemetery [Lincoln County], Star City [33.8787, -91.7646] (note spelling)

– that is: the name of the Cemetery; the County; the County Seat, IF it’s close enough to the location that Google might cough up a street address when I search for the name and locality; the GPS coordinates; and – in this case – a notation to myself, which most entries don’t have.
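If you accumulate a lot of entries in this format, a little automation helps. Here’s a minimal sketch in Python of a parser for the one-line format shown above; the pattern and all the field names are my own invention for illustration, not part of any published tool:

```python
import re

# Hypothetical pattern for the entry format shown above:
#   Name [County], Locality [lat, lon] (optional note)
ENTRY = re.compile(
    r"(?P<name>.+?)\s*\[(?P<county>[^\]]+)\],\s*"
    r"(?P<locality>[^\[]+?)\s*\[(?P<lat>-?\d+\.\d+),\s*(?P<lon>-?\d+\.\d+)\]"
    r"(?:\s*\((?P<note>[^)]*)\))?"
)

def parse_entry(line: str) -> dict:
    """Split one entry line into its named fields."""
    m = ENTRY.match(line.strip())
    if not m:
        raise ValueError(f"Unrecognized entry format: {line!r}")
    fields = m.groupdict()
    fields["lat"] = float(fields["lat"])
    fields["lon"] = float(fields["lon"])
    return fields

print(parse_entry(
    "O'Neel Cemetery [Lincoln County], Star City [33.8787, -91.7646] (note spelling)"
))
```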

Portals Connected By Names

But now we get to the flash of inspiration – what if there were an arcane connection between places that share a name, one that permits relatively easy instant transit from one to another?

What if this were also possible between places with similar but not identical names, especially if the names (when spoken aloud) sound the same?

And what if the planet-spanning enclave of Mages insinuating themselves into the fabric of society at every level knew how to use this trick?

Not to mention: what if the uniformity of design of fast-food franchise outlets were an attempt at creating a less contentious network?

Intended Usage

At some point in the future of the Campaign, someone will mention the existence of this network to one or more of the PCs.

Or maybe they’ll ask to meet the PCs at a particular graveyard and “port in”, giving a visual demonstration that leads to an explanation.

Right now, the PCs are having to get used to being without their “instant teleport into the heart of a problem” – by the time they learn of something, it may have been resolved by conventional forces or it will have developed significantly by the time they arrive.

One key element of the searchable database will be how long it will take the PCs to get to “X”. If and when they learn of an incident, they will have to decide whether they can get to it in time – something they have never had to do before.

Once they have gotten somewhat used to the notion that they can’t be everywhere, and can’t solve every problem, it will be time to loosen the restraints a bit. That’s when and where this concept will come into play.

And there may be the occasional encounter in a location whose name is too inspirational to be refused along the way!

But that’s what I intend to do with it.

Other Uses

Writing Campaign Mastery has taught me well. As soon as I come up with something for one of my campaigns, I assess it for potential value for others.

I could see this being a useful gimmick in all sorts of modern Fantasy campaigns (including Vampire and the like), in Pulp campaigns, in Horror campaigns, and so on.

But in straight Fantasy lies perhaps its greatest potential. Most places would have only one or two graveyards in such campaigns, but the fact that the network would be small only makes it more manageable for the GM. It won’t break the game world, but it might create a backdoor into adventures.

Thoughts about controls

Just because something is possible doesn’t mean that it’s easy. There are three sorts of significant controls to contemplate:

  • Usage Limits
  • Usage Cost
  • Usage Difficulty

Usage Limits

This is simply a matter of a restriction on how many times the network can be used in a certain time frame, or how long you have to wait before it can be used again, or possibly both.

There are all sorts of pseudo-scientific rationales that can be offered, from an accumulated charge of some sort of strange static field, to incompatibility between the living and the dead, to a limited energy supply within the Cemetery Network.

More sophisticated limits may be used – a total distance traveled, or a delay factor deriving from the total distance traveled. But these are rarely as much trouble as they are worth.

Usage Cost

Another way of ensuring that the network isn’t taken for granted, becoming so ubiquitous that it dominates gameplay and tactical considerations, is for there to be some sort of significant cost involved in using it.

There are three common usage costs that can be applied.

    1. stat points – Frankly, I don’t like this choice, even though it’s one I’ve seen used for this purpose any number of times. It doesn’t impart much flavor unless you can convince your players to roleplay the (temporary) characteristic loss.

    2. hit points – using the network costs 25%, 50%, or 75% of a character’s hit points. These of course are easily recovered through rest and healing magic, but until that happens, characters using the Network will be vulnerable.

    This WILL have implications for usage – short trips are viable, but there will be times when travel plus rest time equals or exceeds conventional travel time. The more rest that is required, the more the network favors long trips over short ones.

    A variation might be to associate the loss with distance traveled, but this will erode those implications to some extent, and I’m not sure the added complexity is worth it.

    But my preferred answer is #3.

    3. attack & defense values – using the network causes some disorientation and dizziness. This is expressed as a temporary loss of attack and defense scores, and probably anything DEX related as well, the amount of loss to be determined by a roll of some sort – perhaps d6+2, or d6+log(distance[km]).

    The last is worth taking a moment to clarify:

      0-9 km = d6+0
      10-99 km = d6+1
      100-999 km = d6+2
      1000-9999 km = d6+3
      10,000-99,999 km = d6+4
      ….

    A further variation divides the log of the distance value by log(5), or 0.699. Instead of powers of 10, this sets the threshold at powers of 5:

      0-4 km = d6+0
      5-24 km = d6+1
      25-124 km = d6+2
      125-624 km = d6+3
      625-3124 km = d6+4
      3125-15624 km = d6+5
      and so on.

    Or perhaps you would prefer to use log(2.5)=0.398?

      0 – 2.4 km = d6+0
      2.5 – 6.24 km = d6+1
      6.25 – 15.624 km = d6+2 (note that 3 significant decimals is as far as I’ll go)
      15.625 – 39.062 km = d6+3
      39.063 – 97.656 km = d6+4
      97.657 – 244.14 km = d6+5
      244.141 – 610.351 km = d6+6
      610.352 – 1525.879 km = d6+7
      1525.88 – 3814.7 km = d6+8
      3814.701 – 9536.743 km = d6+9
      ….

    But if I was going in this direction, I would step it down to the much more elegant powers of 2, using log(2) = 0.30103

      <2 km = d6+0
      2 – 3.9 km = d6+1
      4 – 7.9 km = d6+2
      8 – 15.9 km = d6+3
      16 – 31.9 km = d6+4
      32 – 63.9 km = d6+5
      64 – 127.9 km = d6+6
      128 – 255.9 km = d6+7
      256 – 511.9 km = d6+8
      512 – 1023.9 km = d6+9
      1024 – 2047.9 km = d6+10
      2048 – 4095.9 km = d6+11
      4096 – 8191.9 km = d6+12
      8192 – 16383.9 km = d6+13
      ….

    In all of these, the first value of each band – the more convenient one – can be considered a threshold; exceed it and you step up to the next penalty step.
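    To pin the general rule down, here’s a minimal sketch in Python, assuming the rule is “d6 plus the logarithm of the distance in your chosen base, rounded down and never below zero” (the function names and the zero clamp are my choices; the tables above remain the authority):

```python
import math
import random

def jump_shock_step(distance_km: float, base: float = 2.0) -> int:
    """The '+x' step: floor(log_base(distance)), clamped to a minimum of 0."""
    if distance_km < 1:
        return 0
    return max(0, math.floor(math.log(distance_km) / math.log(base)))

def roll_jump_shock(distance_km: float, base: float = 2.0) -> int:
    """d6 plus the distance-based step."""
    return random.randint(1, 6) + jump_shock_step(distance_km, base)

# Spot-checks against rows of the tables above:
assert jump_shock_step(9, base=10) == 0       # 0-9 km        = d6+0
assert jump_shock_step(99, base=10) == 1      # 10-99 km      = d6+1
assert jump_shock_step(24, base=5) == 1       # 5-24 km       = d6+1
assert jump_shock_step(1023, base=2) == 9     # 512-1023.9 km = d6+9
```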

    To my tastes, this last choice goes too far, and the one before it is too complicated to casually remember. On the other hand, the powers-of-five peaks too quickly, and the powers-of-ten WAY too quickly.

    What is needed is some mechanism to push results back up the list toward the top. I’ll get back to that thought in a moment. First, let’s put those distances into context.

    To do that, some distances might be useful:

    All distances are shown in kilometers, and thanks to Google Maps, make full allowance for the curvature of the Earth. The maps are copyright free.

    I should caution readers that they were assembled and compiled in some haste, so they may not be error-free – in fact, as I was preparing the above artwork, I spotted at least one number that seemed rather dodgy to me. And there was another pair that I think I transposed in my research – so I swapped them on the graphic.

    Okay, I got carried away working on these maps, I admit. I estimated it to be a 2-3 hour job – but then I added all the smaller-scale illustrations.

    Using the maps

    Decide what scale your campaign world is, in terms of how far the PCs are free to roam. Find the largest number on the map. Look at the different penalty levels listed for that distance and decide on the one that seems most appropriate to you.

    Choose high rather than low – having up to 5 levels ‘in hand’ can come in handy.

    Remember, the highest value on your chart is how far the PCs can (theoretically) jump in a single action. You might decide that you want it to take three jumps to cover that distance in your campaign – i.e., to get from one side of your playable area to the other; in that case, divide the distance by three to get the ‘jump scale’.

    For example: GM chooses a European scale. He selects the map that includes southern Scandinavia. It has two numbers showing: 3183 and 3398. As instructed, he takes the higher of these. But he wants Jumps to be less powerful, so he decides that it will take 6 of them to get from one side of the 3398 to the other.

    3398 / 6 = 566 1/3 km per maximum jump. That’s a d6+2 modifier on the powers-of-ten chart, d6+3 on the powers-of-5 chart, d6+6 on the powers-of-2.5 chart, and d6+9 on the powers-of-2 chart.

    The first two are too low. The powers-of-2.5 chart seems about right, and the powers-of-2 chart gives too high a result. But, bearing in mind the advice to choose high (even though I haven’t explained why yet), he selects the latter.
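    Running the worked example’s 566 1/3 km maximum jump through the same formula reproduces those chart lookups (a quick self-contained check):

```python
import math

dist = 3398 / 6  # ~566.33 km per maximum jump
for base in (10, 5, 2.5, 2):
    step = max(0, math.floor(math.log(dist) / math.log(base)))
    print(f"powers-of-{base}: d6+{step}")
# powers-of-10: d6+2, powers-of-5: d6+3, powers-of-2.5: d6+6, powers-of-2: d6+9
```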

    Recovering Jump Shock

    Each round after arrival, the character gets a FORT save or some similar check. If they succeed, they recover one or perhaps two points of the losses, so some characters will recover more quickly than others.
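    As one possible reading of that recovery rule in code – the d20 save, the DC of 12, and the one-point-per-success recovery rate are illustrative assumptions, not rules text:

```python
import random

def rounds_to_recover(shock: int, fort_bonus: int, dc: int = 12,
                      per_success: int = 1) -> int:
    """One FORT-style save per round; each success recovers part of the loss."""
    rounds = 0
    while shock > 0:
        rounds += 1
        if random.randint(1, 20) + fort_bonus >= dc:  # d20 save, illustrative
            shock -= per_success
    return rounds

# A character with a better save bonus shakes off Jump Shock sooner, on average.
print(rounds_to_recover(shock=9, fort_bonus=5))
```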

    Usage Difficulty

    The final form of restriction is to require some sort of skill check in order to use the Cemetery Gates.

    There are so many advantages and benefits to this that it’s practically a necessity, in my view.

    It lets you decide how difficult a Cemetery Jump is in the first place. It lets you factor in very similar names (easier) or not-so-similar names (more difficult). As characters advance in power, jumps that were once very difficult will become easier, making the system progressive. You can throw in all sorts of other modifiers as you see fit – perhaps there are ways of warding cemeteries against such purposes.

    You also get to decide what the roll represents. Do you have to haggle with the Dead Residents every time? Do you have to arrange something in some arcane pattern? Perhaps you have to manipulate arcane energies, somehow?

    You should choose only one of these options. But you can use a different one in different campaigns to give their Cemetery Gates a somewhat different flavor.

    In a 3d6 or d20 system, for every 2 points by which the character activating the Cemetery Gate makes his or her roll, move their Jump Shock up the table one step. With a d% system, it’s one step for every 10% success, and the minimum jump shock is the “+x” listed against the original entry.

    In the example offered earlier, a maximum jump incurred a penalty of d6+9. So the minimum jump shock is 9. If the character succeeds in his roll by 12 (a very good roll), he can reduce it to d6+3, minimum 9. Quite obviously, there’s no point in rolling the d6; he has reduced the Jump Shock to the irreducible minimum.

    If he succeeds by 6 (a good roll), he reduces it to d6+6, minimum 9. This is still a good result – less than half the time, the total will be worse than the 9 minimum.

    If he succeeds by 2 (a fair roll), he reduces it to d6+8, minimum 9 – and may as well not have bothered.

    Note that the characters can choose to jump less than the maximum, reducing their Jump Shock accordingly. It would probably be reasonable to give a bonus to the skill check for doing so, too.
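    Putting the margin-of-success rule together with the “minimum 9” floor from the worked example, here’s a sketch (reading the minimum as a floor on the rolled total, which matches the numbers above):

```python
import random

def jump_shock_total(base_step: int, success_margin: int) -> int:
    """d6 + step, with every 2 points of margin moving the step up the table
    by one; the rolled total never drops below the original '+x' step.
    (For a d% system, substitute success_margin // 10 for the reduction.)"""
    reduced_step = max(0, base_step - success_margin // 2)
    return max(random.randint(1, 6) + reduced_step, base_step)

# The examples above, for a d6+9 maximum jump:
#   margin 12 -> d6+3, floored at 9: always exactly 9
#   margin 6  -> d6+6, floored at 9: results of 9 to 12
#   margin 2  -> d6+8, floored at 9: results of 9 to 14
print(jump_shock_total(9, 12), jump_shock_total(9, 6), jump_shock_total(9, 2))
```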

So, there you have it. There is just so much flavor that you can build into your world with this campaign element, and it has such wide utility in terms of genre, that it has to be worth noting!


Economics In RPGs 8: The Digital Age Ch 5


This entry is part 15 of 16 in the series Economics In RPGs

Another image from Gerd Altmann from Pixabay. I’ve color-shifted this one because it’s been downloaded nearly 2,000 times already.


I usually write the bulk of these articles on a Monday and publish them at midnight that night, or just a little after. The Monday just past has all sorts of significance for Australians.

The first Monday of October – actually, technically, I think it’s the Monday after the first Sunday of the month – is a public holiday, making this a treasured Long Weekend.

I always liked to take four days of my annual leave in the following week, so that my weekend effectively lasted for 9 days!

It’s the start of Daylight Savings in the Eastern states of Australia (I’m not sure about the other states). So I got one hour less sleep last night before anything else gets factored in.

This weekend marks the end of the football seasons here in Australia (well, the two biggest ones, anyway) – both have just had their grand finals. That doesn’t bother me much, I don’t follow the football.

But it also signals one week until the crowning event of the (local) motorsport season, the Bathurst 1000 – typically 16-20 hours of coverage starting on the Friday and running through to the Sunday – and that is something important to me. Everything else stops that weekend – and to do that properly, I need to prepare properly!

It’s also significant in a number of more personal ways. A niece and a nephew both had birthdays over the weekend. I’m meeting my mother today, as she travels to my Uncle’s (her brother’s) funeral. My new TV arrives on Tuesday or Wednesday (and would probably be here today if it weren’t a Public Holiday). And this is the week that I have to start Christmas shopping in earnest.

When you put it all together, it adds up to a significant disruption of the usual routine.

Normally, I don’t take public Holidays off. For one thing, my Tuesday is already pretty full. On this particular occasion, however, I am taking a significant chunk of that Monday away from the keyboard, for reasons already stated. That could mean a delay in completing this article – and since I’m already geared up to accept that, I’m feeling no deadline stress to even try and get it done in time. So when it will get published, I have no idea.

So, if it’s late, now you know the reasons why.

Wow, I can’t believe that I got this finished so close to deadline!

The Digital Age, Sixth Period: Pandemic

Sometimes, sub-eras are quite lengthy – a decade or more. And sometimes they are only a year or two long.

Everyone’s perception and lived experience of the pandemic is different from everyone else’s. While there may be commonalities within particular populations, Florida was not the same as Washington, which was not the same as New York; Los Angeles was not the same as Toronto, or Auckland, or Sydney, or London.

The experiences of my relatives still living in Nyngan are different to those of my stepfather and mother in Crookwell NSW, which are different to those of my father and stepmother in the Central Coast, which are different to those of my Sister living just a few miles away from them, and all of those are different to my own experiences, which are different from those of my neighbor across the street.

Commonalities?

Even when you assemble a national picture of these disparate experiences, there are distinctive individualities to the experience – Israel is not the same as Hong Kong, which is not the same as Canada, which is quite different to Venezuela, which bears little resemblance to New Zealand, which is quite distinct from the Australian experience.

Someone who is adept at sniffing out the commonalities and highlighting the distinctions will one day write the definitive book on the time period, and millions of us will buy it, simply because it helps us relate each other’s experiences to our own.

And yet, despite being different, these experiences are all the same to some degree; the differences largely come down to timing and extent.

Timing

Timing deals in several related variables. When the first cases were recorded; when the problem escalated to a crisis in the minds of the governing authorities; when they acted to contain it; and when they began to reduce restrictions.

To some extent, these are controlled by three overarching factors:

  • the severity of the pandemic in the local jurisdiction, which impacts both the severity of restrictions imposed and their duration;
  • the degree of success of those restrictions in containing further infections;
  • local politics.

Extent

This is all about preventing further spread, and ensuring that emergency services are not overwhelmed. The latter point is a critical one – if there is no more hospital space, or no more ambulance capacity to deliver new patients, or no more doctors to treat them, a 5% mortality rate can become an 80% or higher mortality rate overnight. And it’s whichever of these capacities is lowest that controls the overall result.

Say there are 100,000 cases in the local region, of which 5,000 (that’s 5%) definitely require hospitalization, and the same number again might or might not. Of those, a given percentage cannot be saved – initially, perhaps as many as 20%; later, as few as 3-5% (I’ll use 10% as a convenient figure midway between these extremes) – which gives 500-1,000 deaths.

If that’s per year, it’s negligible though tragic. If that’s per month, it’s serious. Weekly, that’s an emergency. Daily, it’s a crisis.

Now, let’s say you only have capacity for 3,000 additional patients a week. If you’re getting 10,000 a month, that’s no problem, though the margins are closer than you would like. If you’re getting 10,000 a week, that’s very bad news – it means that a high percentage of the 7,000 that you can’t treat are going to die, and at least 6,000 of those could have been saved.
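That arithmetic generalizes into a short sketch; the 95% mortality rate for untreated patients is my illustrative assumption, chosen because it lands close to the “at least 6,000 could have been saved” figure above:

```python
def excess_deaths(needing_care: int, capacity: int,
                  treated_mortality: float, untreated_mortality: float) -> float:
    """Deaths above what full hospital capacity would have allowed."""
    treated = min(needing_care, capacity)
    untreated = needing_care - treated
    if_all_treated = needing_care * treated_mortality
    actual = treated * treated_mortality + untreated * untreated_mortality
    return actual - if_all_treated

# The scenario above: 10,000 needing care per week, capacity for 3,000,
# 10% mortality when treated, near-total mortality when not.
print(excess_deaths(10_000, 3_000, 0.10, 0.95))  # ~5,950 avoidable deaths/week
```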

The more extreme the spread, the lower the margins before it’s safe to relax restrictions, and so the longer those restrictions should stay in place.

Local Politics

It takes a truly global event to turn national politics into ‘local’ politics, but the Pandemic was an event of that magnitude.

    The Australian Experience

    Let’s start with this quote from the previous chapter of this article:

      On 23 January 2020, bio-security officials began screening arrivals on flights from Wuhan to Sydney. Two days later the first case of a SARS-CoV-2 infection was reported, that of a Chinese citizen who arrived from Guangzhou on 19 January. The patient was tested and received treatment in Melbourne. On the same day, three other patients tested positive in Sydney after returning from Wuhan.

      — Wikipedia, COVID-19 Pandemic in Australia

    Australia initially pursued a Zero-COVID “Suppression” policy, one of the largest places in the world that could reasonably even contemplate such an option. This policy held until late 2021, when it became clear that it was no longer tenable. In truth, it was probably always overly optimistic.

    Combining strict controls over international arrivals and an aggressive response to local outbreaks with localized lockdowns and exhaustive contact tracing kept the case count low enough that most of the country could simply go about their business, unrestrained and unrestricted.

    It couldn’t last, but it took a dog’s breakfast of confused lines of authority, miscommunications, and unwillingness to take responsibility for the wheels to come off.

      On 8 March 2020, [the] Ruby Princess departed Sydney, Australia for a 13-night cruise around New Zealand.

      — Wikipedia, Ruby Princess

      The cruise was cut short on 15 March and Ruby Princess returned direct to Sydney from Napier

      — Same source

    …but no-one paid much attention to that at the time. I don’t remember it even being mentioned in the media.

      On 19 March 2020, the ship arrived back in Sydney, New South Wales two days early from the New Zealand cruise, docking at 3am, as some COVID-19 swabs needed to be tested as an urgent matter.

      The ship disembarked 2,700 passengers later that morning.
      [emphasis mine]

      The state health minister, Brad Hazzard, announced on 20 March 2020 that 13 of the people on the ship had been tested for the SARS-CoV-2 coronavirus, and 3 of them were positive. New South Wales health authorities asked all passengers to go into self-isolation. It was announced on 24 March that one passenger had died and 133 on the ship had tested positive for the coronavirus.

      — Same source

    By 30 March, 440 passengers had tested positive. One day later, the death toll was 5. The genie was out of the bottle; several returning passengers had been present at welcome-home parties and events, or had simply resumed a normal social schedule, spreading the virus far and wide. Ultimately, 900 deaths would be attributed to the Ruby Princess, either directly or indirectly.

    This set the pattern for Covid in Australia – lockdowns and contact tracing would suppress the virus to an acceptable degree, restrictions would be relaxed, and someone would do something stupid and set off a whole new cluster. Schools and Aged Care facilities were particularly susceptible.

    Even so, no-one realized just how profound these events were – not until the Australian Grand Prix was canceled in the second week of March, 2020. Since this event was to be telecast live – it’s not quite as big a deal here as the Indianapolis 500 is in the US (that status is reserved for the Bathurst 1000) – and the reporters had nothing to cover except the official confusion and eventual decision, it made headline news around the country.

    To be fair, in most of Australia, nothing changed except the mood of the populace. But the sense of smug security was shattered completely, especially for those of us living in larger cities.

      Australian borders were closed to all non-residents on 20 March, and returning residents were required to spend two weeks in supervised quarantine hotels from 27 March.

      Many individual states and territories also closed their borders to varying degrees, with some remaining closed until late 2020, and continuing to periodically close during localized outbreaks.

      Social distancing rules were introduced on 21 March, and state governments started to close “non-essential” services. “Non-essential services” included social gathering venues such as pubs and clubs but unlike many other countries did not include most business operations such as construction, manufacturing and many retail categories.

      — Wikipedia, COVID-19 Pandemic in Australia

    The Australian economic response

    In terms of the economy, then-Prime Minister Scott Morrison acted reasonably decisively, though his approach was (at least in part) shaped by his party’s criticism of the methods (successfully) used to mitigate the GFC.

    Morrison’s approach was to support businesses on condition that they keep their employees ‘on the books’ regardless of whether or not they were engaged in any productive work for the business. There was also a relatively small support package for those on social welfare payments.

    This approach was not without its critics, either. There was no allowance for profit levels, so some of the biggest beneficiaries were businesses that arguably did not need the support. Qantas also accepted the stimulus, only to use legal trickery to fire its entire maintenance staff and send the operations offshore (it has recently been found guilty of this in the courts here).

    Nevertheless, and despite the flaws and criticism, Morrison got a LOT of credit and political capital from his handling of the Pandemic. Which only makes it remarkable how precipitously he squandered it, and how total his fall from grace was.

    The ultimate impact was to put the bulk of the stimulus money in the pockets of business owners, with a secondary amount finding its way into the hands of their employees, and a still smaller amount being provided to those who arguably needed it more. This largely diluted the spending of the stimulus, as trade was soon restricted to a much stricter definition of “Critical Services”.

    In many ways, this parallels the approach taken in the US, so the post-pandemic effect has been similar in both countries.

    COVID in the UK

    I’ll admit up-front that, aside from knowing it was very bad, I don’t know enough about the sequence of events in the UK to write intelligently about them. Local outbreaks took the headlines, and the tragedies in Italy and the US/Trump situation filled most of what was left. Shoehorned in there somewhere was the rest of the news.

    I suspect that this pattern will largely hold true everywhere – local first, terrible events elsewhere second showing how bad things could get, and everything else newsworthy scrunched into whatever broadcast time remained. The only exceptions would be the US itself and the scene of any of those tragedies.

    In the US, the “local” would refer to the state or city, the national response and clown show would be second, tragedies elsewhere would be third, and anything else would be a remote fourth – but I wasn’t there, so I can’t say definitively.

    But this principle is worth remembering by GMs – it won’t just apply to news coverage during Covid, it will apply to any high-impact local disaster.

    Back to the UK:

      The virus began circulating in the country in early 2020, arriving primarily from travel elsewhere in Europe. Various sectors responded, with more widespread public health measures incrementally introduced from March 2020.

      The first wave was at the time one of the world’s largest outbreaks. By mid-April the peak had been passed and restrictions were gradually eased.

      A second wave, with a new variant that originated in the UK becoming dominant, began in the autumn and peaked in mid-January 2021, and was deadlier than the first.

      — Wikipedia, COVID-19 Pandemic in the United Kingdom

    Once a vaccination program was underway, restrictions were gradually eased.

      A third wave, fueled by the new Delta variant, began in July 2021, but the rate of deaths and hospitalizations was lower than with the first two waves – this being attributed to the mass vaccination program. By early December 2021, the Omicron variant had arrived, and caused record infection levels.

      — Same source

      A national Lockdown was introduced on 23 March 2020 and lifted in May, replaced with specific regional restrictions. Further nationwide restrictions were introduced later in 2020 in response to a surge in cases. Most restrictions were lifted during the Delta-variant-driven third wave in mid-2021. The “winter plan” reintroduced some rules in response to the Omicron variant in December 2021, and all restrictions were lifted in February and March 2022 as the Omicron wave continued.

      — Same source

    Economic Response and Impact In The UK

    Parts of this have a very familiar ring to them.

      Economic support was given to struggling businesses, including a furlough scheme for employees.

      — Same source

      The pandemic was widely disruptive to the economy of the United Kingdom, with most sectors and workforces adversely affected. Some temporary shutdowns became permanent; some people who were furloughed were later made redundant.

      — Same source

    Of course, no-one expected one of the casualties to be Boris Johnson’s position as Prime Minister!

    COVID-19 in the USA – some observations and recollections

    When the pandemic began, President Trump downplayed it to avoid a public panic. This was probably the right thing to do at the time – but he was too slow to then gear up when the true seriousness began to manifest itself.

    What’s more, there was a shortage of protective equipment – there weren’t enough masks even for the doctors, never mind the general public. Trump’s first really big misstep of the Pandemic was to play games in this respect – he should immediately have nationalized both a production facility and the existing supply, distributed that existing supply according to need, and had that production facility churning out new PPE 24/7. He didn’t, and the only explanations can be

      (1) that he had convinced himself that the downplay was the truth; or,

      (2) that the need to continue the ‘no panic’ downplay was more important than the longer-term needs created by the pandemic.

    Neither is particularly flattering toward his Presidency.

    A Litany Of Disastrous Missteps

    I can’t speak for residents of any other country, but here in Australia, the mismanagement of the pandemic has imposed a permanent pall over the Presidency of Donald Trump – not that he was all that popular beforehand, but it hit a new low in 2020. It’s not without reason that I referred to it earlier as the “Clown Show”.

    One got the very strong impression (rightly or wrongly) that Dr Fauci was controlling the effective parts of the response, working around the interference of the Clown-In-Chief.

    This impression started to form early on, when Australian news would describe the severe restrictions (including spending a week in quarantine in a third nation if traveling from a country that had an ongoing outbreak) and then cut to footage of US citizens breezing through customs – quarantine was only required of foreign nationals, according to the broadcasts.

    Cue a collective face-palm.

    And then the news would talk about the dire situation in Italy, and that in New York City, and the overall impression was that few (if any) were learning from the experiences of others, especially in the US administration. I remember commenting to someone at the time, “this can’t end well”.

    It didn’t.

    Next, Trump signed the “Coronavirus Preparedness and Response Supplemental Appropriations Act” into law, which provided $8.3 billion in emergency funding for federal agencies.

    Which sounds very impressive until you realize that this is only about 12.9 billion Australian dollars – and the Australian Government’s first stimulus package was 17.6 billion Australian dollars – and a second tranche of stimulus worth another 66 billion AUD would be allocated before the first had even taken effect. To match this commitment per person (the US population is roughly 13 times Australia’s), the US response would have needed to be about 1.07 trillion Australian dollars (roughly 690 billion USD).
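    For transparency, the per-capita arithmetic behind that claim; the population figures and the ~0.64 USD/AUD exchange rate are my approximations for the period:

```python
aud_stimulus = 17.6e9 + 66e9        # the two Australian tranches, in AUD
us_pop, aus_pop = 330e6, 25.7e6     # approximate 2020 populations
matching_aud = aud_stimulus * (us_pop / aus_pop)
print(f"{matching_aud / 1e12:.2f} trillion AUD")  # ~1.07 trillion AUD
print(f"= {matching_aud * 0.64 / 1e9:.0f} billion USD at ~0.64 USD/AUD")
```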

    If our government could do it, why couldn’t “the greatest economy on Earth”?

    The Ivermectin nonsense. The “Bleach” incident. Putting his son-in-law in charge of the vaccine effort – about which we then heard nothing until Biden came to office. Continuing to downplay the virus even after he was hospitalized with it. Exposing his Secret Service detail to the virus for a publicity stunt. The list just went on and on.

    Ultimately, the US did step up and allocate significant funding to Stimulus packages. The question remained, how much worse was it because of all this nonsense?

    Medical Economic Impact

    By putting the available PPE up for auction to the highest bidder (which is effectively what Trump did, with the Government reaping the profits), Trump triggered a massive inflationary surge in the medical supplies industry of the US.

    This duly leaked into related fields. Trump can’t be fully blamed for that, the same thing happened in different forms everywhere.

    Eventually, supply caught up with and even exceeded demand, and over time, an equilibrium was achieved.

    It’s worthwhile spending a moment considering the mechanism by which this was achieved. It’s called competition – when all supplies are perceived as fundamentally the same, price differentiates. The business that is willing to forego some small part of the exorbitant profits demanded of the others gets all the customers, and all the profits. The others either have to lower their prices to match, or even try to undercut the competition to make up lost ground.

    Assuming there is no collaboration to keep prices high by manipulating the market, competition drives profits down to a reasonable level.

    A blended economy

    I’ve already discussed stimulus payments, so I’ve excised the section which was to discuss them. Instead, let’s look at the results:

    • Some businesses rely entirely on the Stimulus payments. They are effectively closed. If the stimulus runs out before demand returns, they will close, creating negative economic growth. The smaller the business, the more susceptible they will be to this.
    • Some businesses rely partially on the Stimulus payments. They are split between activities that are deemed “Essential Services” and those that aren’t. If the stimulus runs out early, they face a choice: reopen fully, and shoulder the associated risks, or downsize and shrink. The first choice may see them pick up where they left off, better their positions, or collapse – depending on demand. Overall, this will stifle growth but not cause the economy to go backwards. Those that downsize because they anticipate a reduction in demand shrink the economy, as before.
    • Finally, there are those businesses that are not reliant on Stimulus payments at all. They continue to contribute to economic growth (best case) or are economically neutral (worst case).

    Put this together, and it’s the impact on the first and second categories, in combination, that is dominant. If it exceeds the growth from the third category, you have an economy heading into recession. Now factor in the relative impact on the economy displayed earlier, and it’s easy to see that the first category will largely outweigh the other two in terms of impact.

    The economy during a pandemic is a blend of all three.

    An economy without stimulus payments is effectively the same as those payments having ended prematurely. That’s why there is a common element of such payments in most economies around the world – the amounts, the timing, the duration, and the delivery mechanism may vary, but the broader reality remains.

    Reopening in a Blended Economy – Local Politics II

    Politics on a smaller scale – the state level – also played into decisions about when and how to reopen. In particular, because Biden was now President, and advocating for caution, the Republican states pushed hard for a quick reopening.

    It would be unfair and unrealistic not to acknowledge that they had a point worth taking into consideration – the shutdowns were undoubtedly bad for business, small businesses in particular. In fact, it’s fair to say that the impact was inversely proportional to business size.

    And that’s the problem. The Republican initiatives favored large companies over small, because larger companies were better able to absorb a proportion of their workforce being ill.

    If your business has 2 employees, and one becomes ill, that’s half your operation crippled. If you have 10, and one becomes ill, that’s bad – but if 9 of the ten have to stay home even though they are not yet sick, that’s devastating.

    To get some sense of that, we need some sort of comparison of how many businesses there are of a given size. The following is an excerpt from a page at AskWonder.com, US Employers by Employee Count, research by Dagmawit W:

    • Under 500 Employees: 71,720,729 businesses.
    • 500-749 Employees: 88,334 businesses.
    • 750-999 Employees: 58,930 businesses.
    • 1000-1499 Employees: 75,754 businesses.
    • 1500-1999 Employees: 54,095 businesses.
    • 2000-2499 Employees: 41,738 businesses.
    • 2500-4999 Employees: 143,141 businesses.
    • 5000-19,999 Employees: 304,674 businesses.
    • 20,000 + Employees: 600,947 businesses.
    • Second biggest employer: Amazon, 1.29 Million employees.
    • Largest single employer: Walmart, 2.3 Million employees.

    (Data from 2018. Counts branches and franchises as separate businesses. The totals in the last 2 entries are over multiple operations centers, obviously!)

    Let’s multiply those numbers by the average number of employees in the range, which should give a truer picture of the average economic impact of each scale of business:

    • 250 Employees x 71,720,729 businesses = apr. 17,930 Million
    • 624.5 Employees: 88,334 businesses = apr. 55 Million
    • 874.5 Employees: 58,930 businesses = apr. 51.53 Million.
    • 1249.5 Employees: 75,754 businesses = apr. 94.65 Million.
    • 1749.5 Employees: 54,095 businesses = apr. 94.64 Million.
    • 2249.5 Employees: 41,738 businesses = apr. 94 Million.
    • 3749.5 Employees: 143,141 businesses = apr 537 Million.
    • 12499.5 Employees: 304,674 businesses = apr. 3,808 Million.
    • 20,000+ Employees: 600,947 businesses = apr. 12,019 Million.
    • Second biggest employer: Amazon, 1.29 Million employees x 1 = 1,290,000. – but these will be included in the above.
    • Largest single employer: Walmart, 2.3 Million employees x 1 = 2,300,000 – also included in the breakdown above.

    It doesn’t matter where you draw the line between big and small – that 17,930 million at the top of the list overwhelms anything else you can add. Even the combined power of the top 601,000 businesses (in terms of employees) doesn’t match up. In fact, if you total the numbers for businesses of 500+ employees, and throw in Amazon and Walmart as free extras, you get a grand total of about 16,757 million – still short of the 17,930 million of the first category.
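    The comparison is mechanical to reproduce. Here’s a sketch of the midpoint arithmetic used above (the open-ended top band is pegged at exactly 20,000 employees, as in the figures above):

```python
bands = [                    # (midpoint employees, number of businesses)
    (250.0, 71_720_729),     # under 500
    (624.5, 88_334),
    (874.5, 58_930),
    (1_249.5, 75_754),
    (1_749.5, 54_095),
    (2_249.5, 41_738),
    (3_749.5, 143_141),
    (12_499.5, 304_674),
    (20_000.0, 600_947),     # 20,000+
]
for midpoint, businesses in bands:
    print(f"{midpoint:>9,.1f} x {businesses:>10,} "
          f"= {midpoint * businesses / 1e6:>10,.2f} million")

small = bands[0][0] * bands[0][1]       # the under-500 category
big = sum(m * b for m, b in bands[1:])  # everything 500+
# Adding Amazon (1.29M employees) and Walmart (2.3M) as 'free extras'
# brings the 500+ total to about 16,757 million.
print(f"under 500: {small / 1e6:,.0f}M; 500+: {big / 1e6:,.0f}M")
```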

    That’s always been the Democrats’ secret weapon for a healthy economy – don’t neglect the big businesses, but prioritize conditions that help small business flourish. It’s not as headline-grabbing, but it works.

    Sidebar: A perception of “socialist” policies

    Countries with a strong social safety net, like Australia, can be viewed as extending this general concept, on the basis that an unemployed person is essentially a business with zero employees.

    The stronger the safety net, the more they contribute to the economy, which creates the conditions for greater employment, which leads to them leaving that category.

    The different parties differ in the use of honey and the whip to get them to make that change, but the principle remains.

    A similar effect takes place with socialized medicine – early intervention transforms those who would otherwise have become drains on the system into productive contributors. The net cost is not only lower, it gets defrayed, making it much lower.

    At least, that’s how we see it. GMs should be familiar with this if they want to properly represent those nations in their games, or people from them.

    Candor also compels me to admit that not everyone in those countries agrees – most of them align with the ‘cut off your nose to spite your face’ perspective of the Republicans. But they are a relative minority here (5-10% of the population at most.)

    But It’s Not That Simple…

    Is it ever?

    Let’s say the economy reopens tomorrow after a shutdown.

    • People stand ready to buy Product X from retail salesmen.
    • Retail Salesmen stand ready to sell Product X, but they need some to be delivered.
    • Delivery Drivers stand ready to deliver Product X, but they need some to be manufactured.
    • The Manufacturers stand ready to make more Product X, but they need the parts to be delivered.
    • Freight Companies are ready to deliver the parts, as soon as they are manufactured.
    • Manufacturers are more than happy to make the parts, but they need the raw materials.
    • More freight companies…. you know the rest.

    This is a supply chain. It’s like a bunch of railroad cars hooked together – there’s some give between them. The first one starts moving, and then the second, and so on.

    In the case of the salesman, whatever stock they have on hand has to last them until fresh Product X arrives. They have a problem if it’s perishable. The Product X manufacturer is similarly constrained by whatever parts inventory they have.

    If all these businesses reopen simultaneously, someone’s going to get the short end of the stick. Ideally, you would want a progressive reopening in which each step only gets back to life as usual when the step below it has something for it to do.

    But that’s impossible to organize, in practice. Especially if you throw in complicating factors like one or more of the parts manufacturers being overseas in a country that has NOT reopened.

    Shutdowns disrupt an economy, and without stimulus, push negative growth on that economy. Re-openings are equally disruptive, and those disruptions have the same effect, usually worsened because stimulus payments have either been removed or reduced.

    The Fall Of Trump, The Success Of Biden

    Trump’s management of the Pandemic was a disaster. David W. Rudlin points out on Quora that the US had 4.25% of the world’s population but 16.9% of the world’s deaths.

    President Biden, at the very least, is an effective administrator. His hand may have been forced by Republican governors to some extent, but he successfully managed the reopening of the US economy despite the difficulties described.

    Key to this is the development and successful distribution of the Covid-19 Vaccines. Despite a number of false starts along the way, we got there.

    The Digital Age, Seventh Period: Post-Pandemic

    Like most Pandemics, this one hasn’t really ended – it’s just sort of petered out as we have learned better to adjust to the new reality. We survived, individually, and are now trying to get on with life, collectively. But that collective deep breath has contained some unexpected wrinkles along the way, because trying to predict the future is never going to be wholly satisfactory to those who get their tails caught in the door.

    Some post-pandemic experiences are shared fairly globally; others are distinctly local, a consequence of the distinctive individual experiences mentioned earlier.

    For example, the governor of the Reserve Bank – the institution that sets interest rates here in Australia – made the mistake of forecasting little or no change in those rates in the course of reopening. Yes, he was eviscerated for the forecast; in hindsight, it was a fairly silly prediction, and one that was quickly proven inaccurate. In a nutshell, he should probably have known better.

    Around the world, inflation peaked at somewhere around the 7-8% mark, post-pandemic, and in many places it has now fallen to a rather more comfortable 3-4%. At the same time, the jobless rate has fallen below what is considered sustainable. This series would be incomplete without looking at why that’s the case.

    Supply Chains: Rebuilding Trade

    The story starts with supply chains, and the problems with everything reopening at once described earlier. But supply is only part of the equation.

    The other part is demand. This was suppressed during shutdowns, but in most industries, surged when lockdowns were lifted. And, because of the stimulus payments that were needed to avoid a recession (or worse, a Depression), people had the money to satisfy that demand.

    When demand is high, and supply is low, prices rise. And that’s inflationary.

    But it also creates a demand for employees to produce the supply. So there is an immediate increase in the employment rate. And that means that even more of the population have money to spend and things they want to spend it on. Demand gets another tick up, so it still exceeds supply.

    Workforce Decentralization

    On top of that, a lot of businesses found that their employees were more efficient or more productive when they were working from home. That can only mean that the restrictions they had imposed on the workforce were counterproductive in terms of profitability.

    This has created a trend toward decentralized workplaces. Demand for skyscraper offices has fallen in most CBDs, sometimes precipitously. The price of such real estate is falling, and falling fast.

    That sounds deflationary, or it should be. The problem is that a skyscraper concentrates real estate holdings by its nature; all the businesses that used to occupy skyscraper space are looking elsewhere for what central office space they still require, and the places they are looking are NOT concentrated to anything like the same extent. Rather than a floor of a skyscraper, they are looking for a building of their own.

    Demand outside the CBD has risen by far more than the demand inside it has fallen – and that means that overall, this causes inflation to rise.

    The full social and economic impact of this won’t be understood for years, possibly even a decade or more, but some predictions are possible. Eventually, CBD real estate will fall to the point where the cachet of being in the heart of a city reasserts itself. A number of businesses will realize that they can sell their existing properties for a profit and buy that cachet relatively cheaply.

    CBDs will change in character somewhat, but equilibrium will be restored – especially if the CBD demands of these businesses are downsized to accommodate decentralization. Ultimately, we’ll end up with an even greater concentration of businesses in city centers.

    Restricted Oil

    On top of the factors already described, there’s the price of oil, or – more specifically – the price of petrol (what the Americans call gasoline), and the price of diesel as well. Let’s call it “fuel”.

    During the pandemic, there was no demand for fuel, so prices went down. To sustain the oil companies – a cynic might say to sustain their profit levels – President Trump ‘persuaded’ Saudi Arabia to cut their production – but, as usual, there’s more to this story than a lot of people realize.

    Domestic oil production was rising rapidly in the US, to the point that in October 2018, they exceeded 11 million barrels of oil production a day, becoming the world’s leading oil producer. That puts Trump’s ultimatum to the Saudis – cut production by 9.7 million barrels a day or lose the 75-year-old military alliance with the US – into a whole new context, doesn’t it?

    But there’s another wrinkle – the Saudis had been engaged in an oil war with Russia that had increased production and driven prices down. Trump’s demand was, in effect, that the Saudis lose that conflict. Interesting point, eh?

    See this article at Reuters for more information if interested.

    The oil price rose – but there were other oil suppliers increasing their own production – Nigeria, for example. The Saudis began actively bidding up the price of oil supplied by other producers, including Australia, while adhering scrupulously to the agreement forced on them by Trump.

    So the oil price rose, and so did the fuel price. And then the lockdowns ended.

    International & Domestic Travel

    There was little demand for international travel, and less supply was made available, keeping air fares at a record high. Denied that avenue for their pent-up thirst for travel, for being somewhere other than where they had been locked down, people turned to domestic travel instead.

    Demand spiked, at much the same time as the Saudis were actively pushing the price of oil up.

    Revenge? Or Normality?

    There was a lot of confidence about a Saudi increase in production once Trump’s deal expired. Instead, they announced a fresh cut, driving the price of oil (and fuel) even higher. This, of course, is inflationary, and some might be tempted to claim that the resulting economic damage was revenge for Trump’s blackmail.

    I don’t think that’s warranted, for two reasons – (1) by now, Biden was in the White House and a lot of relationships had been reset; and (2) the oil war with Russia was over, and Russian oil supplies were increasingly threatened by the consequences of the invasion of Ukraine. In effect, despite Trump’s intervention, Saudi Arabia won the oil war. Why upset that with vindictiveness? Instead, further winding back the increases of the past meant that they kept more of what they have always regarded as a strategic commodity.

    I could be wrong about this, but I think this is normal service being resumed, not revenge.

    Paying The Piper

    So you have excess cash in the economy (inflationary), high employment (inflationary), increased demand pushing prices up (inflationary), supply-chain problems pushing availability down and prices up even more (inflationary), increased heat in the real estate market (inflationary), and higher fuel prices (inflationary) which create higher energy prices (inflationary), coupled with a war reducing the supply of oil and gas from Russia and food from Ukraine (inflationary)… is it any wonder that we’ve ended up with reasonably high inflation?

    I think this litany of influences demonstrates why I thought the prediction of little change in interest rates was a silly one. Not everything on that list could have been predicted, but enough of it should have been obvious to justify predicting at least modest increases in interest rates.

    Crystal-ball Gazing

    Until the extra money in the economy washes out, interest rates will remain high. How long that takes depends on what people spend their money on.

    Higher interest rates have an impact on that, too. Money is like water: it flows downhill. The interest rates charged by banks are taking some of the heat out of the real estate markets, but those markets will be stubborn; the first to go will be domestic debt. Paying off too much of that can also overheat an economy, but counterbalancing that is the inevitable rise in the cost of necessities.

    Once a reasonable balance is restored, inflation will drop to an underlying value that reflects the things that are not susceptible to Interest Rate manipulation – workplace decentralization and fuel / energy costs.

    The latter are being influenced by the drive toward Carbon Neutrality, which complicates the situation and is likely to keep prices relatively high. But outside of that, we are approaching a ‘new normal’. Some places – like the US – are already there. Australia is not, at least not yet. Our inflation rate is falling but not fast enough for comfort.

    Wages Growth

    The problem is that the longer inflation rages unchecked – even if it is less than the high double digits of ages past – the more it fuels demands for compensatory wages growth, and that’s another inflationary cause. That’s how we got to those 18%+ values back in the 70s and 80s.

    Governments the world over have a delicate juggling act, and each slip has potential global repercussions because we’re back in a global economy again.

    Some slippage can be tolerated. Too much can be economically disastrous.

An Imminent Pivot? (The Near Future)

I can’t help but feel the world is approaching a critical point, where the course of history will be changed, one way or another. It might be in the 2024 elections – the choice between Trump and Biden seems pretty stark, but if Trump is forced to withdraw by his legal troubles, it might be any of a number of Trump-lites on offer.

An armed insurrection should Biden win reelection – however brief and certain to fail – presents another possibility.

But let’s presume for a moment – just in order to explore all the possibilities – that Biden doesn’t win, but the Republicans have managed to put someone relatively sensible up, or Harris takes over the top job. With relative inexperience, what are the odds of a significant slip on the economic side? Worse than with an old hand like Biden in charge, I think.

On top of that, we have climate change, and workplace decentralization, and changes to spending habits – there are already indications that shoppers are re-prioritizing, and retail is going to have to adapt in response. More and more banks here are going cashless (is the same trend occurring worldwide? I don’t know, but I suspect that it is). Throw on top the potential chaos in the energy market and the potential collapse of the insurance industry, and… wait, you want me to explain that last one?

The Insurance Failure

Australia is closer to the edge of this change than many places in the world, along with Canada and the Western and Midwestern US. Why? Because we’ve all experienced climate catastrophes that can be attributed to climate change in recent times.

  • The Record temperatures in the US
  • Hurricane and tornado activity in the US
  • The Canadian Bushfires (though maybe they call them Wildfires there, the way the US does?)
  • The floods and bushfires in Australia (discussed in previous chapters)
  • The floods in India in 2019…
  • And South Asia in 2020 and 2021, and 2022…
  • And finally, the 2023 Derna flood that devastated eastern Libya.

We are approaching a point where failure to mitigate climate change sufficiently could be regarded as an act of war akin to the use of weapons of mass destruction (because of the indiscriminate nature of the resulting damage).

Will we reach that point? I hope not, because global unity on the cause will inevitably be forced into a pre-existing matrix of alliances if that happens. But it could be the foundation of a new global unity, too. Either way, it’s one more potential transformation that – once it takes place, if it takes place – will completely change the world.

But I’ve wandered off-track. The simple fact is that people who thought they were insulated against the full impact of these disasters have discovered that they are not protected. Another point that we are approaching – or are already at, in some places – is that if you need insurance, you can’t afford it, and if you don’t need it (except in case of freak accident), you have no incentive to buy it (because freak accidents and ‘acts of god’ are also not covered).

If people no longer trust the insurance industry to be there when they need them, they will start to sequester emergency funds themselves. They will resist actually purchasing homes – let someone else take the risk. The social impact will be subtle but unstoppable, and economies are not geared up for it. What’s more, governments are increasingly expected – even required – to fill the gap, and that’s not baked into their budgets, either.

The perceived increase in the frequency of such events (whether it is real or just a statistical anomaly), coupled with the perceived increase in their severity and scale, is pushing this problem to a crisis point.

Generational Change

There’s one final factor that needs to be taken into account when discussing the possibility of such a transformative change – the rise of Gen Z.

People of my parents’ generation have been dying off at a serious rate for quite a while now. They are in their 80s and 90s at this point. My generation are about to enter the same phase of life – we’re in our 60s and 70s. Both groups are reasonably well balanced between conservative and progressive politics, and both are becoming increasingly irrelevant to any political outcome.

In their place comes Generation Z – who are, from all reports, far more politically aware as a broad group than most of us were at that age, and most have been alienated by the partisan politics that conservative parties have been exhibiting for a while now.

It won’t take much of a shift for political landscapes to be rewritten. It’s been reported, for example, that Democrats actually won the popular vote in Texas at the mid-terms (or maybe it was the last Presidential election?) – and that it was only Republican gerrymandering that enabled them to stay in power there. Texas has so many votes in the Electoral College that if it ever flips, the Republicans can forget about the Presidency for decades.

There are too many critical issues on which Generation Z have opinions and want action – and they will start electing those they think can deliver it.

There are too many changes afoot at the same time for this not to be the end of one era and the beginning of another.

Sidebar: What Unifies The Digital Era?

IT is all about taking DATA and transforming it into INFORMATION through analysis and the application of context (see the sketch after the list below).

  • Data: You visited Website X.
  • Context: Website X sells general product type Y.
  • Information: You may be interested in buying Product Type Y. Other companies who sell product type Y will pay us to insert ads for such on other pages you visit.
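If you’ll forgive a programmer’s aside, that pipeline is trivially easy to sketch in code – the site names, catalogue, and function below are all hypothetical, invented purely for illustration, and certainly not how any real ad network is built:

    # Data -> Context -> Information, as a toy Python pipeline.
    from collections import Counter

    # Data: raw site visits -- meaningless on their own.
    visits = ["site-x.example", "site-z.example", "site-x.example"]

    # Context: the general product type that each site sells.
    catalogue = {"site-x.example": "camping gear", "site-z.example": "cookware"}

    def infer_interests(visits, catalogue):
        """Information: product types the visitor may want to buy,
        ranked by how often they visited sites that sell them."""
        counts = Counter(catalogue[s] for s in visits if s in catalogue)
        return [product for product, _ in counts.most_common()]

    print(infer_interests(visits, catalogue))
    # ['camping gear', 'cookware'] -- and sellers of camping gear will
    # pay to put their ads in front of this visitor.

The analysis itself is trivial; almost all of the value is in the context – which is why companies hoard it.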

This is the process that has made Google one of the biggest companies on the planet – if not the biggest.

Over the decades, we’ve gone from being very poor at capturing data to being very, very good at it. Over that same period, we’ve gone from being almost inept at transforming it into something meaningful to, again, being very, very good at it.

And as we have done so, our decision making – personal, professional, and political – has increasingly been driven by the efficacy of the results.

Impersonal, hard facts. It’s worth remembering that when people protest progress, it’s usually that impersonality that they have a problem with – and their second problem is with being told what to do by someone they don’t trust. Eventually, they’ll make the association between those instructions and a machine, and we’ll have a new batch of Luddites on our hands.

And this at the same time as Generative AI is becoming a thing…? Isn’t that food for thought?

The Downfall Of Scott Morrison

I’ve discussed his path through power over the last several chapters, so I may as well close this post out by completing the story.

As the post-COVID election began to loom on the calendar, there was a clear mood for change. The instability of the Morrison government had seen minister after minister resign, the government was beset by scandal after scandal, and the never-ending failure to resolve and implement an energy policy combined with these to leave the election balanced on something close to a knife-edge.

And then it emerged that, at the height of the Pandemic, Morrison had contrived to have himself sworn in as head of several ministries without informing the ministers already responsible for those portfolios.

This is like the President of the US secretly appointing himself Secretary Of State AND Secretary Of Energy AND Secretary Of the Treasury – without informing the people he had already appointed to those positions.

At first, it was only the Ministry Of Health, but then it grew to three ministries, and – after the election – two more came to light. And to this day, he refuses to acknowledge that he did anything wrong.

Generation Z in Australia gutted his party at the election. Morrison resigned the Prime Ministership but remains a back-bench member of parliament – like the President resigning, but remaining as a member of the House (there are unkind suggestions that he is so on the nose that no employer will actually hire him). And, in his place, we have a leader of the conservative coalition who is even less popular than Morrison was at the time of his defeat.

Instead of the usual (progressive) rival party, it was the Greens who saw a significant upswing in their results, but even bigger was the rise of a group now known as the “Teal Independents”. These are conservative-oriented except when it comes to one key policy area – they believe climate change is real and demand stronger policy action to address the problem. It’s as though the conservatives had shed their centrist elements, becoming so extreme in the process that they were all but unelectable (sound familiar?)

Which direction Gen Z will force politics into, ultimately, I don’t know. But they are going to drive it, of that there can be no doubt.

One more break and then it will be the final chapter of this series – about how to use all this information in RPGs. But first, it’s time to update the Table Of Contents:

In part 1:

  1. Introduction
  2. General Concepts and A Model Economy
  3. The Economics of an Absolute Monarchy (The Early Medieval)

In part 2:

  1. The Economics of Limited Monarchies (The Later Medieval & Renaissance)
  2. In-Game Economics: Fantasy Games

In Part 3:

  1. The Renaissance, revisited
  2. Pre-Industrial Economics I: The Age of Exploration
  3. Pre-Industrial Economics II: The Age of Sail

In Part 4:

  1. Industrial Economies I: The Age Of Steam
  2. In-game Economics: Gaslight-era

In Part 5, Chapter 1:

  1. Industrial Economics II: The Age Of Electrification & Motoring

In Part 5, Chapter 2:

  1. Industrial Economics III: War & Depression
  2. In-Game Economics: Pulp
  3. In-Game Economics: Sci-fi
  4. In-Game Economics: Steampunk

In Part 6, Chapter 1:

  1. The Pre-Digital Tech Age
  2. World War 2
  3. Post-war & Cold War

In Part 6, Chapter 2:

  1. Government For The People
  2. Aviation

In Part 6, Chapter 3:

  1. The Space Race
  2. Tech Briefing: Miniaturization
  3. Behemoths Of Blind Logic (early computers)
  4. The Promise Of Atomics
  5. A Default Economy

In Part 7:

  1. Economic Realities (Inflation & Interest Rates)

Part 8, Chapter 1 contains:

  1. The Digital Age: Themes
  2. The Digital Age: 70s-80s
  3. The Digital Age: 80s-90s

In part 8, Chapter 2:

  1. The Digital Age: 90s-00s

Chapter 3 of Part 8:

  1. The Digital Age: 00s-2010s

Last week, Chapter 4 of Part 8:

  1. The Digital Age: 2010s-2020

In this post, Chapter 5 of Part 8:

  1. The Digital Age: Pandemic
  2. The Digital Age: Post-Pandemic
  3. An Imminent Pivot? (the near future)

Still to come, in the final part of the series:

  1. In-Game Economics: A Plot-based foundation
  2. In-Game Economics: Modern
  3. In-Game Economics: A broader net (Fantasy +)
  4. Future Economics I: Dystopian
  5. In-Game Economics: Dystopian Futures
  6. Future Economics II: Utopian
  7. In-Game Economics: Utopian Futures
  8. In-Game Economics: Space Opera
  9. In-Game Economics – Look Beyond The Obvious

Addendum

A couple of thoughts that I should have included in the “Near Future” section of the main article but didn’t think of at the time.

Third Party vote-stealing

One of the specters being raised in discussions of the 2024 elections is the prospect of third-party / independent candidates ‘stealing’ enough votes from the Democrats that Trump gains a narrow Electoral College victory.

Most of those I have discussed this possibility with have pooh-poohed it, and I’m the first to admit that it might not happen. But it has happened before (that’s one of the reasons we had President G. W. Bush instead of President Gore), and in a close election, anything that shifts the voter balance, even just a little bit, can be decisive.

That’s why the Gen Z factor is likely to be vital. The usual argument that I am offered as rebuttal is that US politics is a two-horse race, no third-party candidate will ever win the Presidency. Which completely misses the point – under this scenario, they aren’t electing a third-party winner, but they are diverting votes that would otherwise flow to the more progressive candidate.

Thinking that it can’t happen because they can’t vote in a third-party winner is exactly what Scott Morrison thought about the coalition inner-city heartland at the last Australian Federal Election – and look at how that turned out for him.

Food Insecurity

One of the most troubling consequences forecast for climate change is food insecurity – where the crops you usually grow, and on which your nation relies, fail in whole or in part, because of the temperature change or associated weather events.

When people get hungry enough, wars have been known to start. I was thinking about that this morning and realized something. From day one, the Invasion of Ukraine has been put down to the vanity and ego of Vladimir Putin.

What if there’s more to the story? Ukraine is sometimes called the World’s Bread-basket – what if it’s not territory or oil & gas that is motivating Putin, but food insecurity? If this is the case, then he would have to view it as an existential threat to Russia in order for him to go to the lengths he has done – and that means not only that he won’t ever back down, but that he will throw everything he can muster at the situation, regardless of the national cost.

This is just speculation – but it’s very interesting speculation, and it raises the prospect of the West having misread Russian intentions and motives from the outset, which in turn would hamstring any attempts to resolve the crisis. No peace overtures made thus far could even be contemplated by Putin; they aren’t telling him anything that he wants to hear.

I suspect that it will be decades, if not longer, before the whole truth of the matter is known – if it ever is – but it’s a possible angle on the Invasion that should not be forgotten.

How Long?

I’ve indicated that I view 2024 as a watershed year, but there are so many thrusts toward change underway that most will be incomplete by the end of next year. In fact, I think it likely that it won’t be until 2030 that the shape of the ‘next era’ is fully understood, and it might not be until 2035 that the trends come to fruition.

A lot can change in 11 years….


Economics In RPGs 8: The Digital Age Ch 4


This entry is part 14 of 16 in the series Economics In RPGs

Android default wallpapers and icons of the smartphones, which is released under CC 2.5 Attribution. Portions of this page are reproduced from work created and shared by the Android Open Source Project and used according to terms described in the Creative Commons 2.5 Attribution License. Unmodified Image from Wikimedia Commons, licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license. Styluses removed by Mike and background image added. Image page: File:Smartphone_with_Android.jpg.
Background also via Wikimedia Commons and released to the public domain by the creator. Image page: File:148625_mundoimg_david1878_abstract-wallpaper-10.jpg

As usual, I’m going to get right down to business. While I’ll try to have this post function as a standalone, you’ll get a lot more out of it if you have already read Chapter 8.1, Chapter 8.2, and Chapter 8.3 before continuing.

The Digital Age, Fifth Period 2010s-2020: The decade of fallout

The most recent decade to conclude was, in many respects, all about the fallout and legacy of the traumas that punctuated the period before it. And yet, it’s possible to more or less ignore all of that, giving rise to a revised perception of the period without that high fever masking the reality.

I’ve often found that time lends clarity and perspective. Without that, events often seem disjointed, and the interconnections that form the cohesive outlines of a bigger picture are that much harder to put together.

Careful study of the analyses of isolated events can get you part of the way there, but that takes additional time and effort, and quite a lot of it. The alternative is simply to apply a liberal layer of fuzziness – I know (as does everyone reading this) what happened, and even have some idea of why. If the big picture is a little faded and vague, so what?

The fundamental assumption – that everything that was already happening would continue, unless noted otherwise – holds true.

    Beginning: Recovery

    I ended the previous period with the beginnings of economic recovery from the GFC, and that recovery persisted almost all the way through to the end of the decade. With life getting better on a daily basis, it was easy to lapse into a casual daze, and simply drift along.

    The problem with this sort of attitude is that it becomes habit-forming, harder and harder to break. Mountainous problems seemed to have had the foundations excavated from under them, and it was possible to ignore them in favor of minutiae that seemed oh-so-important at the time.

    Domestic Australian Political Turmoil

    Australia started the decade with political turmoil to spare.

    Kevin Rudd

    Kevin Rudd was elected in 2007, and was a very popular figure at the time. Unbeknownst to the Australian public, there was considerable division behind the scenes, Rudd’s autocratic manner putting more and more people offside.

      During his first two years in office, Rudd set records for popularity in Newspoll opinion polling, maintaining very high approval ratings. By 2010 … Rudd’s approval ratings had begun to drop significantly, with controversies arising over the management of the financial crisis, the Senate refusal to pass the Carbon Pollution Reduction Scheme, policies on asylum seekers and a debate over a proposed “super profits” tax on the mining industry.

      — Wikipedia, Kevin Rudd

    The opposition drew political blood over each of these issues. It nevertheless came as a complete shock to the public when his Deputy Prime Minister, Julia Gillard, overthrew Rudd in a party room spill on June 23, 2010. There had been ongoing speculation about leadership tensions for months, all of it concluding that there would be no challenge for the leadership.

    Julia Gillard

    Gillard seemed to breathe new life into the Labor party, enough to at least stem the erosion of popular support. She sought to take advantage of this honeymoon period, calling the next Federal Election just 23 days after taking office.

    This resulted in her government being returned to power, but in a minority capacity; she was able to reach agreement on confidence and supply with the Greens and three independent crossbenchers, the price being a carbon emissions trading scheme that directly contradicted the policies she had taken to the public during the election.

    Nevertheless, life seemed to steady down, and the Gillard government made some significant achievements during this term. But once again, the opposition began to score body blows against the government.

    Rudd Returns

    By 2013, it was becoming clear to the Labor leadership that unless something changed radically, the Labor government was going to lose the next election. In some desperation, they turned to a familiar figure: Kevin Rudd, who remained popular with a public that had seen none of the behind-the-scenes complaints. One attempt to return the former Prime Minister to power had already failed. In March of 2013, there was a second attempt, aborted when Rudd refused to stand against Gillard.

    Clearly, this was a government in turmoil. That never plays well to the Australian electorate, and the polls continued their unfavorable trends. On 26 June, Rudd was returned to power to lead the party to the polls later that year.

    Although the policies promised by Rudd in the 2013 campaign appealed to a lot of traditional Labor supporters, there was very little confidence in his ability to deliver on his promises.

    As a result, Tony Abbott of the Liberal-National coalition became Prime Minister in 2013.

    Tony Abbott

    Anyone who thought that this would usher in a period of stability was about to receive a rude shock. Although his policies were disliked by many, there were also many who supported them, and him, in a fashion reminiscent of the adoration of President Trump in more recent times.

    The first hint at what lay ahead came in March 2014, when Abbott announced that, without telling anyone in his cabinet, he had advised the Queen to reinstate the knight and dame system of honors in Australia – a wildly unpopular move with the public. The nation had been flirting with becoming a Republic for years, and although the model put forward by the pro-monarchy Prime Minister of the time had been defeated, many of the trappings of Monarchy had been removed, and the nation as a whole was comfortable with the half-way house at which it had arrived. This one autocratic decision upset that comfortable apple-cart, threatening to steer the nation back towards the Monarchy.

    People were still digesting this when the 2014 Budget was announced. Harsh and austere to the point of being bleak, it contained measures that were condemned as “Un-Australian” (the harshest criticism one can make of an Aussie) – measures that publicly broke election promises, despite the polls having warned Abbott, prior to the election, that these were “deal-breakers” with the Australian public. Some went so far as to claim the budget broke all his election promises.

    The Abbott government plunged precipitously in public approval – Australians will forgive a lot, but broken promises on this grand a scale were seen as intolerable. The reality was that Abbott was following the usual electoral budget cycle – a very harsh first budget, one or two moderate budgets, and then a generous budget as the next election approached, paid for (essentially) by the first, harsh, budget. Abbott and his Treasurer, Joe Hockey, simply went too far too fast.

    From that misstep on, it seemed the Abbott government could not go two weeks without some fresh public policy disaster. By February 2015, Abbott had made one too many authoritarian decisions for even his own party to tolerate, and there was a leadership spill that he only narrowly won. He promised to do better, consult more widely, and reduce the role of his unpopular chief-of-staff.

    The government limped on until September, setting new records for unpopularity amongst the voting public. Opinion poll after opinion poll painted the government as rancid. It came as no surprise that a second spill motion ousted Abbott from the top job in September of 2015.

      By the time he was removed from premiership, Abbott was one of the most unpopular world leaders, and he has been regarded [since] by critics and political experts as one of Australia’s worst prime ministers.

      — Wikipedia, Tony Abbott

    Malcolm Turnbull

    In his place came a moderate who was hopelessly compromised by the extremists in his own party, who actively undercut his authority and government on a number of occasions over a number of policies, notably energy supply and climate change.

    Within some policy areas, he was viewed as weak; in others, he was seen as opinionated. He had won a lot of popular support for his role in the Spycatcher trial (Wikipedia, Spycatcher), and in some policy areas he was more liberal-left (in US terminology) than he was right-wing. (In Australia, the labels can mislead – the conservative side of politics is represented by the Liberal Party.)

    The Liberal Party had always aspired to be, claimed to be, fiscally conservative but socially progressive, but the decades since the Whitlam government of 1972-75 had eroded that position. There were many who hoped that Turnbull was the beginning of a return to that position, one that had made Liberal Coalition federal governments the norm for many decades (from 1932-1941, and 1949-1972, and 1975-1983, and 1996-2007, Australia had conservative governments).

    The problem was too many extremists in his own ranks who were unwilling to toe a new party line, and who actively sought to undermine and back-stab the new leader – our fifth since the start of the decade, if anyone’s keeping count.

    These hopes, coupled with a honeymoon period and a repudiation of some of his predecessor’s more controversial policies, were enough to secure an extremely narrow victory in the 2016 Federal Election – by a single seat. They were quickly dashed, however, as the radical elements of the coalition continued their efforts to undermine his leadership, already threatened by the wearing-off of the honeymoon period.

    Throughout 2018, it felt as though the leadership was under siege – one spill attempt had taken place, and more had been threatened or expected without materializing. It was seen as a question of when, not if, there would be a successful move to oust him, probably to be replaced with the unpopular and controversial Peter Dutton (sometimes characterized as the Lord Voldemort of Australian Politics).

    A preemptive move was initiated in August of 2018 that installed another seeming moderate (though one who leaned a little further to the right than Turnbull had), Scott Morrison.

    Scott Morrison

    Time has not been kind to the public perception of the Reign of Morrison. His honeymoon period, however, was lengthy, and proved enough to lead to a victory and an improved coalition position within parliament compared to that achieved by Turnbull. This was considered an unwinnable election for the coalition, so Morrison was perceived by his party to have walked on water.

    It’s possible that this license to do as he willed went to his head, but where his first term was relatively controversy-free, Morrison’s second term was anything but. There was a widespread perception of corruption, of religious-based favoritism, and of ideological extremism, and long-standing problems with misogyny within the party repeatedly surfaced.

    The second term got off to a bad start during the 2019-20 horror Bushfire season (Wikipedia), now known as the Black Summer.

    In Intensity, Size, Duration, and Impact – whole communities being wiped out – this was the first murmur of what would become the end of the era. Morrison was on holidays with his family in Hawaii (no-one begrudged him that) when the fires broke out; but his office lied about where he was, and when exposed, he refused to return, offering a cavalier comment that showed him to be completely out of touch with the community.

    Crisis after crisis followed. Allegations of Sexual Misconduct, a high-profile and still-controversial Rape allegation, another lukewarm response to the 2022 eastern Australia Flooding (Wikipedia) and the emerging Robodebt scandal were just the headlines; there were dozens of smaller crises along the way to see out the decade.

    It didn’t help that these once-in-a-century floods then occurred again in 2023; even though Morrison was no longer in office, this cemented the popular zeitgeist for many.

    Nevertheless, Morrison – the sixth Prime Minister of the decade – was still in power when this sub-period, and this era, came to an end, and he got a lot of credit at the time for his response to the Covid epidemic. More on that later.

    Six Prime Ministers had seen Australia through from 1971 to 2007 (36 years). Six more (plus a 1-week caretaker PM) had been in charge from 1941 to 1971. That really puts into perspective how turbulent the 2010-20 decade was here, politically.

    Consequences

    For the most part, the economy trucked on without problems. Despite being controversial amongst the Coalition, the rapid response with stimulus cheques to the economically lowest members of society (who spent almost all of it) had prevented a recession here during the GFC – or at least, that was the popular narrative.

    That criticism shaped the response to Covid, by the way – something I’ll deal with, later.

    It was the cost of those stimulus payments that had prompted the horror budget of 2014.

    It might seem that this economic bloom contradicts the basic assertion of this article series, that the economy of a time reflects the social and political state leading up to that time and drives changes in those social and political realities in the years that follow.

    The reality is that a mining sector boom, fueled by the growth in China, masked everything else that was going on; without that, this era would have been far more economically turbulent.

    If you read Part 7 of this series, Economic Realities, you will realize that two things affect just about everything else in an economy – energy costs (especially electricity) and fuel costs. There is also a significant overlap between the spheres of consequence in which these factors play out. Something I don’t remember pointing out is that electricity costs increase the expense of refining crude oil…

    The electricity price in Australia was unstable throughout the decade in question, a consequence of not only the political turbulence but also the massive toing-and-froing on how to address the climate change problem.

    Another of those factors that have an octopus-like reach into multiple economic and social sectors is public and business confidence – in stability and prosperity, specifically – and that also experienced a roller-coaster ride in those years.

    A superficial glance from the outside, and all looked rosy – but the reality was quite different for those caught in the middle.

    The UK / Europe

    I have to admit to not paying as much attention to the politics and economy of the UK as, perhaps, I should have. They started the decade with Gordon Brown and ended it with Boris Johnson, having experienced David Cameron and Theresa May in the middle.

    Brown never made much of an impression here. Cameron was respected and viewed as a “typical” English Prime Minister, whatever that means! Theresa May was more controversial, but had little impact here. And Johnson was a maverick, good for entertainment value if nothing else; a huge part of the local impression stemmed from his appearance on Top Gear (UK) while Mayor of London, on which he made a very good impression.

    But it was Cameron who promised a vote on what would become Brexit during the election campaign of 2015, after a 5-year buildup on the issue. That referendum took place in 2016, and implementation took effect on 31 January, 2020. So this was the decade in which Brexit went from a minor grumble to public policy to reality.

    This instability may not have manifested in the high turnover of leaders that was experienced in Australia, but they did have a far more dire experience with the GFC, and then the Brexit economic debate to navigate. I suspect that the experienced reality on the ground was no less turbulent there than it was here – it simply manifested in a different form.

    The United States

    Barack Obama and Joe Biden’s first term ran from 2009 to 2013, and is considered part of the previous sub-era because it was dominated by recovery from the GFC. His second term was about building on that foundation, restoring the economy to full health.

    In 2016, the American Public voted Donald Trump into office.

    It doesn’t matter what side you are on, politically – no-one can dispute that the four years of Trump presidency were beset by controversy after controversy, a revolving door of staff in key positions, and a dramatic economic downturn even before Covid.

    To about a third of the 42% of Americans who support him, he is the greatest political leader the world has ever seen. To about the same number, he is personally distasteful as a leader, but they will vote for him anyway; everyone else holds various shades of negative opinion about his Presidency.

    If the Australian experience is described as turbulent, the Trump Presidency was white-water rafting while wearing a blindfold. The economy reflected that chaos and confusion and resulting lack of confidence in the future. Even without Covid, it was on a steep downward trend throughout the four years.

    There are some who consider Trump to have been the worst President in US History; there are others who don’t quite rank him that poorly. And then, there are his fanatically-loyal followers.

    In summary, then, if there was a dominant theme to the economic reality of the decade, a deathly economic illness (the GFC) had been thrown off, but in its wake was instability being masked by prosperity.

    Beginning: Social Media

    There’s a long trail of predecessors that led to the rise of what we would recognize as social media. We had bulletin boards and chat rooms well before the start of this decade.

    GeoCities was a precursor to the modern micro-blogging platforms like Facebook and Twitter (now “X”). I used it pretty much exactly the same way that I use Campaign Mastery these days, and have even recycled some of my old posts (which I carefully archived at the time) for articles here.

    Arguably, the first social media platform to make a splash in a very big way was Myspace, which started in 2003, but it was still focused on being a delivery system for whatever interested the account holder, and so was closer to a traditional website.

    Facebook was created in 2004, Twitter in 2006. Neither was an overnight success.

    Facebook

    Facebook opened to public users in 2006, available to anyone 13 years of age (or more) with an email address.

      By late 2007, Facebook had 100,000 pages on which companies promoted themselves.

      — Wikipedia, Facebook

    Between 2007 and 2008, developers created 33,000 applications to run on the platform, and there were more than 400,000 registered developers.

    What we would recognize as “Facebook” came into existence with a significant redesign of the user interface dubbed “Facebook Beta” in July 2008. A January 2009 Compete.com study ranked Facebook the most used social networking service by worldwide monthly active users.

      The company announced 500 million users in July 2010. Half of the site’s membership used Facebook daily, for an average of 34 minutes, while 150 million users accessed the site from mobile devices.

      — Same source

    The creator, Mark Zuckerberg,

      …announced at the start of October 2012 that Facebook had one billion monthly active users, including 600 million mobile users, 219 billion photo uploads and 140 billion friend connections.

      — Same source

    The decade of the 2010s was therefore one in which Facebook became a ubiquitous platform, and various users spent the decade learning just what that meant for society.

    Twitter

    Twitter also got off to a slow start. Created at about the same time as Facebook went public, it was not until 2007 that it really got noticed.

    A key turning point was the 2007 South by Southwest Interactive conference.

      During the event, Twitter usage increased from 20,000 tweets per day to 60,000. “The Twitter people cleverly placed two 60-inch plasma screens in the conference hallways, exclusively streaming Twitter messages,” remarked Newsweek’s Steven Levy. “Hundreds of conference-goers kept tabs on each other via constant twitters. Panelists and speakers mentioned the service, and the bloggers in attendance touted it.”

      — Wikipedia, Twitter

    From this beginning, growth was massive and continual. Over its first three years, Twitter rose from ranking 22nd amongst ‘social networking’ sites to be number 2, and it was still surging forward.

      On November 29, 2009, Twitter was named the Word of the Year by the Global Language Monitor, declaring it “a new form of social interaction”. In February 2010, Twitter users were sending 50 million tweets per day. By March 2010, the company recorded over 70,000 registered applications. As of June 2010, about 65 million tweets were posted each day, equaling about 750 tweets sent each second, according to Twitter. As of March 2011, that was about 140 million tweets posted daily.

      — Same source

      Twitter’s usage spikes during prominent events. … A record was set during the 2010 FIFA World Cup when fans wrote 2,940 tweets per second in the thirty-second period after Japan scored against Cameroon on June 14, 2010.

      The record was broken again when 3,085 tweets per second were posted after the Los Angeles Lakers’ victory in the 2010 NBA Finals on June 17, 2010, and then again at the close of Japan’s victory over Denmark in the World Cup when users published 3,283 tweets per second.

      The record was [re-]set again during the 2011 FIFA Women’s World Cup Final between Japan and the United States, when 7,196 tweets per second were published.

      When American singer Michael Jackson died on June 25, 2009, Twitter servers crashed after users were updating their status to include the words “Michael Jackson” at a rate of 100,000 tweets per hour.

      The current record as of August 3, 2013, was set in Japan, with 143,199 tweets per second during a television screening of the movie Castle in the Sky (beating the previous record of 33,388, also set by Japan for the television screening of the same movie).

      — Same source

      From September through October 2010, the company began rolling out “New Twitter”, an entirely revamped edition of twitter.com. Changes included the ability to see pictures and videos without leaving Twitter itself by clicking on individual tweets which contain links to images and clips from a variety of supported websites.

      — Same source

    Like Facebook, then, the 2010s were a period of Twitter dominance. Putting the two together made this the decade of social media in the eyes of many.

    Consequences

    A lot of what follows are personal impressions, with which others may disagree.

    Facebook always seemed to be a platform for more deliberate posts, while Twitter was more casual, more in-the-moment, more ephemeral.

    Facebook largely killed email as a means of staying in contact with family and friends; the capacity to painlessly share photos and videos being an initial snare. To be clear, there were (and are) other solutions to that problem, but they require users to all have the same software just for that purpose; Facebook does all the heavy lifting for you.

    Another difference between the two is the longevity of posts: Facebook posts remain accessible for days after they are made, even without you or anyone you know adding to the discussion thread. Again, Twitter seems far more immediate and ephemeral.

    Event Organization

    Protests and even attempted Revolutions have been organized through Social Media. Based on what I’ve written above, it should come as no surprise that events which demand spontaneity for security reasons are more oriented toward Twitter, while those which are deemed more publicly acceptable, and hence can organize publicly, tend to be more Facebook-oriented – at least until the event begins.

    I can never think about such events without remembering the 1973 novella by Larry Niven, Flash Crowd. This looked at the social impacts of instant, practically free, teleportation:

      One consequence not foreseen by the builders of the system was that with the almost immediate reporting of newsworthy events, tens of thousands of people worldwide – along with criminals – would teleport to the scene of anything interesting, thus creating disorder and confusion.

      — Wikipedia, Flash Crowd

    Instant transport is not necessary for such events; near-instant, crowd-derived mass communication is sufficient. The resulting social activity can be pre-planned, or spontaneous, and has become known as a flash mob (clearly a tip of the hat to Niven’s story, IMO, though I am hardly the first to draw a connection between the two).

    Here in Australia, the concept went largely unnoticed until a private party was gatecrashed by over 1000 party-goers, doing the sort of damage that such a large group of drunken revelers naturally commits. The host of the event, who had tweeted out an “all welcome”, had no expectation of the response. His parents’ home was largely destroyed, neighbors’ homes were damaged, there were noise and disturbance complaints for the entire street, and when police arrived, some attempted to riot. He went to jail (I think for 24 months) as a consequence, and interviews following his release made it clear that his life had been forever changed by one thoughtless public tweet.

    Echo Chambers & Political Polarization

    Social media are built around the concept of interacting with a chosen social circle, but – unlike real life – social media users can choose to block or inhibit the display of content from sources with whom they disagree.

    Studies have shown that, when never contradicted, people become more prone to accept as factual fringe reasoning that accords with their existing prejudices, reorienting their belief structures to accommodate the new ‘truths’ that have been revealed to them. Once fringe content is accepted as factual, a new fringe opens for the user, and the process begins again.

    This is what is meant by a social media “Echo Chamber” – someone posts a controversial opinion, and if they get only positive responses, because they have curated their audience down to those who support such thinking, the original opinion is reinforced.
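    To make that feedback loop concrete, here’s a toy simulation – entirely my own construction, purely illustrative, and not drawn from any of the studies mentioned. Its one working assumption is that a curated feed tolerates content somewhat more extreme than the user’s current position far more readily than content pulling them back toward the center:

      # A toy echo-chamber model (hypothetical, for illustration only).
      # Belief is a number in [0, 1]: 0.5 is the center, 1.0 is the fringe.
      import random

      random.seed(1)

      def drift(steps, back_tolerance=0.05, fringe_tolerance=0.20):
          belief = 0.55  # a mild initial leaning
          for _ in range(steps):
              opinion = random.random()  # an incoming post
              # The curated feed: block anything much closer to the center
              # than we are, but accept content somewhat more extreme.
              if belief - back_tolerance <= opinion <= belief + fringe_tolerance:
                  belief += 0.5 * (opinion - belief)  # reinforcement
          return belief

      for steps in (10, 100, 1000):
          print(steps, round(drift(steps), 2))
      # The longer the loop runs, the further belief creeps toward the
      # fringe -- each accepted 'new fringe' opens the door to the next.

    Widen the back-tolerance and the drift disappears; narrow it and the ratchet only turns one way – which is, in miniature, the argument against curating away all disagreement.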

    Conspiracy theories, paranoia, and delusions are inevitable outgrowths if one is not careful. I’ve often described this as a rabbit hole down which rationality can vanish, never to be seen again – which is probably a little too strong on the hyperbole, but gets the point across.

    Up until they embrace provable misinformation as fact, people in the grip of this particular form of mental aberration can be reasoned with, I have found; once that line is crossed, a form of induced psychosis takes hold, and the person becomes an adherent of a cult-like mentality. Outside their delusions, such people can be warm and friendly, opening the door for strangers, helping the elderly, being nice to dogs, you name it – but they have certain triggers that engage a break with reality.

    It must be noted that it was quite rare for things to go that far until the latter end of the decade. The impact of the phenomenon is that political and social viewpoints become increasingly dogmatic and polarized.

    In part to combat this, I wrote my 2019 article, The Olympian Perspective: Personal Opinions, Fake News, and the GM. The basic contention is that, as a GM, you need to be able to create rational characters who do not share your personal opinions and make them plausible to the wider audience (normally just your players, but some have greater reach).

    Misinformation Manipulation

    How much worse can the echo chamber effect get when it’s not just opinion and flawed reasoning being shared, but falsehoods deliberately designed to fog beliefs and promote social and political agendas?

    That’s the difference between pre- and post-QAnon, when the flaws in rationalism began to be deliberately exploited – for personal affirmation, for entertainment purposes, or for political influence. That makes this extreme outcome of the social media experience a development that started in 2017.

    Things took an even more serious turn with the interference by Russia (and others) in the 2016 US elections, though this was not widely recognized until considerably later, with the publication of the Mueller Investigation report. I find it fascinating that there were no serious suggestions of similar interference in the 2020 presidential election or the 2018 and 2022 mid-terms, though it’s more understandable in the latter case – the Ukrainian invasion and related disinformation efforts were clearly more essential.

    But that’s getting ahead of myself.

    Social Media: A box of matches?

    I didn’t want to end this section on such a negative note. I’m well aware that I have focused hard on the problems of social media without giving equal emphasis to the positive aspects of the technology.

    I can only really report these on a personal basis – I have made friends from all over the world through social media. I have regular readers and supporters who I would never have encountered otherwise. I’ve complained elsewhere about the impact of social media on blog comments (Social Media, SEO, and the dying of comments, written all the way back in 2013), but at least until recently, the social contributions of Social Media have been positive for the most part. Fire is useful too; we wouldn’t have a civilization without it.

    Social Media is a box of matches. Used properly, it can enhance our lives and society. Mishandled, it can burn the house down. And no-one had read an instruction manual; we were all just figuring it out as we went.

    I have tried very hard to separate recent events from this discussion. Everything that I’ve written about is relevant up to the point where Elon Musk purchased Twitter. Beyond that point… the jury is still out, but there’s a lot of yelling coming from the room where they deliberate.

    Beginning: Wearable Tech

    There was a time, at the start of the decade, when wearable tech looked like it was going to be The Next Big Thing. And then, for the most part, it went away, squashed flat by the smartphone.

    Slowly, a decade later, it has started to reemerge – as data monitoring devices that feed to a smartphone. In particular, devices that continuously monitor blood sugar levels look set to take diabetes management into the 21st century.

    But this tech doesn’t need to stop there. Consider the possibilities of wearable devices that monitor blood for reduced concentrations of chemotherapy drugs and release targeted medications in consequence. Or anti-psychotics, or dementia preventatives. Doses can be smaller and more targeted, reducing side effects while increasing efficacy. It’s not here yet, but we could be at the thin end of a medical revolution, one which changes the very concept of medication. Time will tell.

    Beginning: Death Of A Visionary

    Steve Jobs was controversial at times, treating Apple more as a vehicle for his personal games with technological possibility than as a corporation seeking to make profits for its shareholders.

    But those very qualities are what led to his second coming as Apple CEO, and the development of the iPad, iTunes, the iPod, and the iPhone.

    Jobs died in 2011, about 8 years after being diagnosed with a far less aggressive variety of pancreatic cancer than is typical. While he initially refused conventional treatments in favor of alternatives, he ultimately underwent surgery in mid-2004 that appeared to successfully remove the tumor.

    18 months later, his cancer had returned. Over the next three years, his health seemed to decline, his medical issues became more complex, and he began stepping back into the shadows.

    A lot of people in the Tech community, and its more public fringes, treated Jobs’ passing as the death of innovation itself, and the decade seemed determined to justify that reaction. Certainly, Jobs had discovered a rare knack for uncovering technological innovations that would receive public favor and mass adoption.

    That simply meant that it would take time for people to emerge to replace him, and to find the right niches for their talents. It seemed to me as unlikely that anyone would become so messianically-perceived for many years as it was that innovation would actually cease.

    But it would slow for a while, and this contributed to the placidity of the early decade in a business and social sense.

    Middle: The New Entrepreneurs

    The middle of the decade seemed to invalidate that assessment, though, as a breed of entrepreneurs emerged to match those of past eras. Zuckerberg with Facebook; Jeff Bezos with Amazon; Elon Musk with Tesla – all promised revolutionary change to the way people lived their lives, and grew wealthy persuading others of their technological visions.

    Many of these got their start well before this decade; Tesla was incorporated in 2003, Amazon in the 1990s. Arguably, though, it was in the 2010-20 decade that their times came and they delivered on the promises recognized a decade or more earlier.

    As with most overnight successes, people paid little attention to the decades of preparative work involved.

    Like their early 20th-century and 19th-century forebears – see Section 6, Locomotives & Robber Barons, in Part 4 of this series, The Age Of Steam – some of those individuals felt obliged to share their success with the broader community, while others did so for more cynical PR purposes.

    Either way, and following the trail blazed by modern entrepreneurial archetype Bill Gates (the Bill & Melinda Gates Foundation was launched in 2000 and was reported in 2020 to be the second largest charitable foundation on earth, holding $69 billion in assets), they became modern philanthropists.

    (For the record, the largest is the Novo Nordisk Foundation of Copenhagen, with $120.2 billion USD in its coffers).

    These identities were regularly prominent throughout the decade – sometimes due to controversies, sometimes due to their business operations, and sometimes for their charitable works – though the extent to which they embrace such publicity varies. Bezos, for example, is known to prefer to operate behind the scenes, while Musk is always willing to self-promote.

    Climate Change: A Decade Of Lip Service

    There are two events in recent times that have yielded a different experience in every nation on earth. One, quite obviously, is the Covid pandemic and policy reactions to same; and the other is Climate Change and the policy responses to that challenge.

    Australia

    Here in Australia, redneck refusal to acknowledge the danger produced attempts to derail public policies aimed at addressing the threat when there was a Prime Minister who wanted to act, and a willingness to pay lip service but little more when there was a Prime Minister who did not.

    The Morrison government, in particular, tried to use clever bookkeeping to “meet” Australia’s international carbon-emissions commitments, despite warnings that climate change posed an existential threat in the minds of many outside the home territories of those rednecks.

    On top of that came the Black Summer bushfires of 2019-20 and the flood emergencies of 2022 in the Eastern states. Remember the controversy over the Hawaiian holiday discussed earlier? Proving that he had learned nothing, Morrison campaigned in Western Australia while communities were ravaged by unprecedented flooding, which in turn caused a federal relief package for those affected to be delayed. There was strong public belief that both emergencies were either triggered by, or worsened by, climate change.

    The fact that the Coalition Government, over its nine-year reign, had (1) dismantled an unpopular but effective carbon-tax system, and then (2) offered up no less than 22 energy policies, none of which it had succeeded in enacting, left his ‘climate credibility’ in complete tatters.

    In the course of the catastrophic 2022 election, the Liberal-National coalition was savaged at the ballot-box, winning just 58 seats – their lowest representation in government since the coalition first formed in 1946.

      Six formerly safe Liberal seats in urban and suburban areas, most held by the party and its predecessors for decades, were won by “teal independents”.

      — Wikipedia, 2022 Australian Federal Election

      The Liberals also suffered large swings in a number of suburban seats that had long been reckoned as Liberal heartland. The Greens increased their vote share and won four seats, gaining three seats in inner-city Brisbane, the first time in the party’s history it won more than one seat in the lower house.

      — Same source

    All that, of course, falls on the far side of the pandemic, but it’s simply a measure of the ill-will and resentment that Morrison had accumulated on the environmental front – an arrogance that was duly punished at the post-Pandemic election.

    The media had, of course, been dutifully reporting on the pronouncements of the various climate authorities, and the Bushfire/Flood/Flood trilogy created a sense that the Government had wasted a decade on inaction, or on actions that were subsequently undone.

    Elsewhere

    But, of course, everywhere else had its own distractions and problems. The US had been a world leader in the fight against carbon emissions under Barack Obama, but Donald Trump undid all that. Europe had Brexit on its plate. Both had the GFC demanding priority. The causes were different, though related, but the end result was the same – a decade came and went with no substantial progress to show for it.

    End: Stirrings Of Alarm

    The beginning of the end of the era was signaled by news reports in November and December 2019 of an outbreak of a new illness in China. These continued into January 2020, but caused no panic.

    Past Epidemics

    In part, the world was a victim of its own past successes. Scares like Bird Flu had come and gone without a major international ripple. There was lots of hand-wringing and moaning about how bad things could be in the worst-case outcome – but those dire warnings never seemed to actually materialize.

    Perceived Non-events

    By 31 January, Italy had confirmed its first infections, in two tourists from China. But still, this was viewed as a minor incident – there might need to be some restrictions placed on travel from China, but there was little cause for panic.

    In general, this was seen as a local Chinese problem, and a non-event elsewhere.

      On 23 January 2020, bio-security officials began screening arrivals on flights from Wuhan to Sydney. Two days later the first case of a SARS-CoV-2 infection was reported, that of a Chinese citizen who arrived from Guangzhou on 19 January. The patient was tested and received treatment in Melbourne. On the same day, three other patients tested positive in Sydney after returning from Wuhan.

      — Wikipedia, COVID-19 Pandemic in Australia

    This Time It’s Real

    And then that perception changed, and bodies began to pile up in New York and Italy. By now it was March. Too little, too late, serious travel restrictions were slammed into place.

The Pandemic changed everything. Anyone who thought we could just reopen and life would go back to normal had rocks in their heads.

The next part of this series will look at the two years of Pandemic and the years that have followed, and – to the best of my ability – consider what’s likely to happen over the next decade or so, at least in economic terms.

Comments Off on Economics In RPGs 8: The Digital Age Ch 4

How Long Is A Generation?


Four generations of the Dukes of Richmond, painted in 1900, artist unknown. Seated front left is the 6th Duke of Richmond and Gordon (1818-1903); seated right is the Earl of March and Kinrara (later the 7th Duke) (1845-1928); standing is Lord Settrington (later the 8th Duke) (1870-1935). The infant on the Duke’s knee is Hon. Charles Henry Gordon Lennox (the son of Lord Settrington) (1899-1919).
 
Note that artistic works by unknown artists remain copyrighted until at least 1 January, 2040, unless they were published or exhibited publicly prior to 1 January 1953.
 
Source: ClanMacfarlaneGenealogy.info via Wikipedia Commons (image page)
 
Four generations in one image means that it encompasses a span of three generations. The dates given suggest that those three generations span roughly 82 years, which gives an answer to the central question of about 27 1/3 years. Make of that what you will…

Last week, I promised readers something completely different from the Economics in RPGs series, and even though this is a different article to the one I had in mind at the time, I still think it delivers – in classic Campaign Mastery style :)

While working on the Adventurer’s Club campaign this week (as I outline this article), I was prompted to ask myself this question.

“Been in the family for x generations,” is an entirely valid statement that expects the listener to be able to decode the message.

“…and lay undisturbed for generation after generation, until the events had long ago faded into myth and legend,” lays less emphasis on unpacking the units, but the GM still needs to have some vague idea of what they mean.

And it’s not as simple as it sounds.

Prior Engagements

In the past, I’ve skirted around this issue as much as I possibly could. Nevertheless, there are at least two prior articles in which I could not fully avoid it (the first, in two parts):

There may have been others, but those are the ones that come most readily to mind.

A fixed number – 20? 25? More? Less??

There have been all sorts of fixed numbers thrown around by various sources through the years, with only marginal agreement between them. Four candidate definitions stand out, in my opinion.

    25?

    Twenty-five years was the number that I first associated with the term, though I’m not entirely sure why – this was long before I got into gaming. For many years, it was my default go-to interpretation.

    It’s a convenient subdivision of a century, for one thing, and long enough to encompass most viable analysis-based interpretations.

    Over time, a subtle difference crept into my interpretation of the term, “generation” – rather than a strict 25 years, it became an ‘average’ of 25 years. This attempted to reconcile function-based applications of the term with the simplicity of a numeric definition.

    It made the whole term fuzzier, and potentially more useful.

    30?

    When I raised the question with my fellow GMs – because I had most of this article written already – one offered this up as his go-to interpretation, while admitting that he had never really put any thought into the meaning of the term.

    He was no more sure of where his number had come from than I was of mine, but he justified it in functional terms. There was a hint of a suggestion that the term may have originally had a functional definition that was rounded and approximated by later users, clouding the whole issue – sometimes, a user would have meant the strictly functional definition, and sometimes, the generalized interpretation – and sometimes, they would have simply tossed the term out as a vague “long time” with no significant thought invested in the meaning at all!

    I don’t know about you, but I found this line of thought fairly compelling, even though it clouded the issue more than a little. But then I realized that it could operate in the other direction too – starting with a vague definition, to which people like myself had tried to apply functional refinements, only to find that none of them quite fit the sources (in fact, it would be an astounding coincidence if they did line up).

    More cloudiness, less clarity – both theories have a ring of plausibility.

    An online source then suggested 33 years, using exactly the same logic as for the proposed 30-year units – my impression was that this was compromising the unit to get a simple fraction of a century (well, of 99 years).

    20?

    Another number that I’ve seen seemingly plucked out of thin air from time to time is Twenty Years. In modern times, this would be a reasonable fit for the usual functional definition, perhaps rounded to a convenient number.

    We all know people who consign everything that happened before they were born to this vast dumping ground of “Don’t know, don’t care, don’t know why you care, either”. These are the people who allegedly ask about wooden aircraft carriers, and why Knights didn’t use rifles instead of swords – though I find those to be caricatures, not entirely divorced from reality but not truly reflective of it, either.

    I suspect that this value is a vague compromise between a lot of numbers that have some element of plausibility, bolstered by the modern ‘fit’ and the convenience of being a nice, simple, number.

    But this raises another complicating factor to think about – the possibility that the numeric value interpretation of a definition has changed over time with improvements in medical knowledge and changes in society.

    18?

    A functional definition is the age at which a set of generic parents could be replaced – I’ll examine that definition with more rigor later in the article.

    The first time I remember encountering it, it was being used to justify a generational ‘unit’ of 16-18 years. The line of logic used was that you could marry at 18, and have two children before you were 19, if you really worked at it.

    Actually, the definition used marked the next generation as being the birth of the first child, perhaps for simplicity. Yet another complication!

    What’s more, the fact that people used to marry younger – historically, 16, 14, and even 12 were not unheard of – lends added credence to this value.

    My personal suspicion is that this is simply the youngest legal age in modern times at which marriage, and hence legal childbirth, is permitted. So you couldn’t have a generation be anything less, and anything more sat in the difference between a theoretical future potentiality and the current reality.

    But this brings in a whole raft of new complications – social factors, legal factors, and the difference between theory and application in reality.

    It’s a lazy definition, but an illuminating one.

    Age Of Consent?

    In particular, this raises the question of legal age of consent, and whether it’s anything more than an artificial line in the calendar that most people observe – at least officially.

    It’s tempting to toss this issue aside as an irrelevant distraction, but it seems unlikely that social expectations and behavioral demands can be completely divorced from the question at hand.

    These social restrictions would act as a cut-off filter, setting a minimum legal value which is only loosely related to the biological elements of the definition.

    Biological capacity, in other words, sets a range of values that could be acceptably associated with the term “generation” but social restrictions limit which of those possible answers are considered “socially acceptable”.

    At best, this is a secondary factor – something to be taken into account, but not the primary foundation of a definition.

    But it does point to one more complication: Perceived Value vs Reality. Just what you – and I – needed….

Complication Scoreboard

It’s probably worth rounding up all those complicating factors and questions at this point, and putting them all in a list.

  1. Functional definitions may be compromised for simplicity.
  2. A simple fraction of a century is an obvious and attractive arbitrary value.
  3. Vague and arbitrary definitions may be compromised to fit a functional definition.
  4. Definitions may have changed over time, so you can never be sure of the intended interpretation in any specific text or reference.
  5. Modern interpretations can cloud and bias interpretations.
  6. Applied medical knowledge is an obvious factor in ‘modern interpretations’ and is therefore an implied factor for other historical periods / game settings.
  7. Social factors, especially those consequent to medical knowledge, are also an obvious factor in ‘modern interpretations’ and a factor to definitions in other historical periods / game settings.
  8. Even if a constant, consistent definition is assumed, the numeric value associated with that definition can and will change over time (as a consequence of 6 & 7 at the very least).
  9. The limiting end-points must be a part of any practical definition.
  10. Legal factors can apply a bias, a socially-acceptable limit, or both.
  11. There may be a disconnect between theoretical values and reality on the ground, because 7 and 10 have no impact on the biological reality.
  12. There may be a disconnect between perceived values and theoretical or actual numbers.

It suddenly seems completely UNsurprising that there is so much confusion surrounding such a “simple” question.

Functional Generational Replenishment?

It takes (for most species) two adults to create an offspring. So replacing those two adults is a valid measurement of a generation.

This builds in all sorts of social factors. It re-frames the question to “What is the average age of a couple at the time of the birth of their second child who survives to adulthood?”

It’s a simple answer to specify it as “Age Of Consent plus two” – assuming human gestation periods and social structure.

If the Age of Consent is 18, that takes us back to 20. But if marriage is permitted at a younger age, as was common in medieval times, we get a different answer.

But that whole “+2” is problematic. Infant mortality in medieval times was appalling – I’ve seen values of 70%, 80%, and even 90%, depending on who you ask. In fact, that was a contributing factor to the lower consent age – where “consent” is construed as “Consent to Wed”.

Mortality Impact

I tried running calculations to determine what the “+2” should be at different mortality rates, but got bogged down in detail to the point where I was no longer confident of the results. For the record, at a 90% mortality, I ended up with a value of +24.5. That’s huge, but is it right? I’m not sure. I can equally see it being half of 1.5+24.5, or 13 years – because we’re looking for the average, not the certainty.

The only thing I can state with confidence is that it’s going to be a LOT higher than +2.
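
For anyone who wants to experiment, here’s a minimal simulation sketch of that calculation. Every assumption in it is invented and negotiable: first birth at age 18, a birth every 2 years, a fertility window closing at 44, and each child independently surviving to adulthood with probability (1 - mortality).

    import random

    def age_at_second_survivor(mortality, first_birth=18, spacing=2, last_birth=44):
        """Parent's age when their second surviving child is born, or None."""
        survivors = 0
        for age in range(first_birth, last_birth + 1, spacing):
            if random.random() > mortality:   # this child lives to adulthood
                survivors += 1
                if survivors == 2:
                    return age
        return None                           # never managed two survivors

    def average_age(mortality, trials=100_000):
        ages = [age_at_second_survivor(mortality) for _ in range(trials)]
        ages = [a for a in ages if a is not None]
        return sum(ages) / len(ages), len(ages) / trials

    for m in (0.0, 0.5, 0.7, 0.9):
        avg, reached = average_age(m)
        print(f"mortality {m:.0%}: average age {avg:.1f}, "
              f"{reached:.0%} of couples ever get there")

Under those assumptions, 90% mortality means most couples never reach a second surviving child at all before the window closes – which is precisely why the “+2” framing falls apart at such rates.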

Other Mortality Factors

This is another very real factor that should be taken into account. If the mother dies during one of these childbirths, then (to continue having children) the father needs to remarry, and that means that we are no longer looking for the second surviving child, but the third, because three discrete adult individuals now have to be replaced.

A man with extremely bad luck or judgment might need five wives to have five children.

It works in the other direction, too – if the man gets killed in a battle someplace, the widow needs to remarry in order to continue having (legitimate) children.

It’s entirely possible that BOTH parents will perish before having two children who will survive to adulthood. What do we do then?

Fait Accompli

For all practical purposes, it’s far better to presume a fait accompli and work backwards.

    Current Generation

    Let’s say the current date is 1210, and the current person of interest is 32 years of age. That means that they will have been born in 1210-32 = 1178.

    Prior Generation

    How old was this current person’s father or mother when he was born? Subtract that from our running date and you get the year in which the parent was born.

    Let’s say that he was 22 years of age at the time. That means that parent was born in 1178-22 = 1156.

    Grandparent’s Generation and older

    Repeat for as many generations as you need, or until you reach the critical date in question.

    What we were working on was a treasure hidden just prior to the US civil war, with a starting date of 1938. We ended up going back 4 generations – and this is not some nebulous generic “generation”, it’s the term as it applies to this specific family.
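
    Here’s a minimal sketch of that work-backwards arithmetic; the 22 matches the example above, while the 25 and 19 are my own invented ages for the older generations:

        current_year, focal_age = 1210, 32
        ages_at_childs_birth = [22, 25, 19]   # parent, grandparent, great-grandparent

        year = current_year - focal_age       # 1178: the focal character's birth
        print(f"{year}: focal character born")
        labels = ["parent", "grandparent", "great-grandparent"]
        for label, age in zip(labels, ages_at_childs_birth):
            year -= age                       # that ancestor's own year of birth
            print(f"{year}: {label} born (aged {age} at the relevant birth)")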

Age Of Death

If you know the age of the parent at which the heir to the family was born, and the year of birth of the parent, it’s then a simple matter to add X years to that age to get the age at death of the parent.

X is important here because that’s the number of years that the parent and child co-existed. You are essentially constructing a family narrative while anchoring it to actual dates in your chronology.

What we found, when applying this concept, was that it was best to start documenting the family history with the earliest significant member and work forward.

We tried arbitrarily saying someone died at age Y but found that Y rarely married up to the chronology of events within the family; in effect, it was putting the cart before the horse.
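
To make that concrete with invented numbers: the parent born in 1156 was 22 when the heir arrived, in 1178. If the family narrative says they co-existed for X = 35 years, the parent died at age 22 + 35 = 57 – that is, in 1178 + 35 = 1213 on your chronology.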

Complications

We assumed that only one direct line of descent was important, even though we knew that this was inaccurate.

Every child has at least 2 parents. Every parent has at least 2 parents, who are grandparents to the child. Every grandparent has at least two parents, who are great-grandparents to the child, and so on.

A more complete (and much more complicated and tedious) approach would track each of these family links back using the same technique.

Then we get to the implied question of siblings – uncles and aunts, great-uncles and great-aunts, and so on.

And then we get to the question of their descendants – cousins and the like.

The guiding principle should always be the lived experience of the ‘child’ at the center of your family narrative. If they didn’t know relative Z, there is no need for you to mention relative Z, let alone place them on your growing family tree.

Integrated Histories

I’ve touched on this already, but thought it was worth explicitly considering: no family history should exist in isolation. No family is immune to the big events in the world around them – such as the US Civil War.

There are two general approaches that you can take to such integration.

    The Fast Approach

    The first is to have a list of the critical dates, in sequence oldest to most recent, and simply incorporate them as you are constructing the family narrative.

    Much of the time, this will work seamlessly with no problems. Trouble arises when – for whatever reason – you need to alter the birth and death dates to marry up to the narrative that you are creating. Suddenly, you can find that an event that was supposed to impact a particular family member no longer touches them, or that an event that you thought you could avoid is suddenly very much a part of the family story of the generation in question. Any mistake in your arithmetic can tear your entire chronology apart.

    The Second Approach

    The alternative is to take a little more time and effort and actually map out a chronology:

    xxxx Richard Randall born
         1832 Steven Randall is born
         1834 Efram Randall is born
         1835 Eliza Douglass is born
    1853 Richard Randall dies
         1858 Steven & Efram duel for Eliza’s hand
         1858 Efram Randall survives but is disinherited
         1858 Steven Randall marries Eliza Douglass
    1861 Civil War begins
         1861 Efram Randall joins the Confederate Army
         1861 Steven Randall joins the Union Army
         1862 Efram leads his unit on a raid on the family farm…

    ….and so on (note the careful use of indents!)

    For the record, this example is completely fictitious and bears no resemblance to what was being developed for actual game use.

    This shows that there is no need for the father, Richard, to be affected by the Civil War; he’s already dead and buried. But it shows the siblings, Steven and Efram, caught up within it as an extension of their pre-existing family feud.

    If your narrative requires Richard to be (a) alive but (b) too old to serve in the Civil War, and so (c) in a position to repel the raid, or perhaps to be killed in the course of it, the date of his death will need to be altered, and possibly the date of his birth.

    This can have ripple effects both up and down the timeline that are far more easily handled with a simple list like this – especially if you can use cut-and-paste to move a whole line and change the date.

    If you stick to the principle of only listing those family members known or directly relevant to the current generation, you can invent and insert long-lost relatives and other chapters of the family history as you need them.
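
    One way to make that cut-and-paste workflow even more robust is to hold the chronology as data and let a sort repair the sequence after each change. A minimal sketch, reusing a few of the fictitious Randall entries from above:

        # Each entry: (year, indent level, event). Changing a date is one edit;
        # sorting restores chronological order automatically.
        timeline = [
            (1832, 1, "Steven Randall is born"),
            (1834, 1, "Efram Randall is born"),
            (1853, 0, "Richard Randall dies"),
            (1858, 1, "Steven & Efram duel for Eliza's hand"),
            (1861, 0, "Civil War begins"),
            (1862, 1, "Efram leads his unit on a raid on the family farm"),
        ]

        for year, indent, event in sorted(timeline):
            print(f"{'     ' * indent}{year} {event}")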

Lost To Living Memory

A similar technique can be used when you need to set events beyond living memory. I actually went into this in some detail in discussing the planning tool linked to above, so I’ll try not to belabor the point here.

  • Generation 0 – directly involved.
  • Generation 1 – first-hand accounts from parents
  • Generation 2 – first-hand accounts from grandparents
  • Generation 3 – possible first-hand accounts from great-grandparents,
             more likely 2nd-hand accounts (some distortion) from parents and grandparents.
  • Generation 4 – 2nd and 3rd-hand accounts from grandparents and parents, respectively.
  • Generation 5 – events become part of family mythology.
    The Rule Of Threes

    From that point on, the significance of the event (whatever it was) fades in relevance.

    I always work on the principle of the rule of threes – at any given time, there are normally three living generations of significance: subject, parents, grandparents.

    Anyone older is likely to be deemed an unreliable source.

    That’s why, in the chronology-by-generation listed above, it is Generation 3 where the first-hand accounts stop. Generations 5 or 6 are where the second-hand accounts stop, and the story becomes a family legend. Somewhere around generations 8 or 9, that legend will be so vague and unsubstantiated that it is completely unreliable, if it’s remembered at all.

    For example, one of my Grandfathers was killed in the Second World War; I never knew him. My father’s middle name commemorates him, and I was named for my father’s middle name, so I am concatenated directly to my grandfather’s life – but know very little about it beyond the simple fact of his service. He is a family legend to me.

    A great-grandparent by marriage was in the First World War, but survived; I have personal memories of him and his stories of service. He survived the Gallipoli landing, for example. I can remember him telling me of the sense of wonder he felt as a child when reports of the Wright Brothers’ flight reached Australia. But going any further back? There are just fogs and mists. My living memory extends to second-hand reports of those early-20th-century events.

    Application

    One of the truths that I have gleaned from Who Do You Think You Are? is that the facts about the earliest actually-experienced generation – personality and so on – are known, but the reasons rarely are, and anything beyond that wall of time is lost, at best preserved in myth, folklore, and rumor. In some cases, that earliest generation is a parent; in some cases it’s a grandparent; and in only a few cases is it a great-grandparent.

    Using this, and the preceding section, as a guide, you can determine how many generations forward you need to go before an event becomes an almost forgotten legend, or even gets lost entirely in the sands of history.

    Once you know that, you can start assembling the family history from that moment forward, stopping when you get to the present day. Or you can determine that it was nothing more than a myth for X years and start your narrative from that point.

    What stories were those who are alive today told as children? Who was around to pass on those whispers and murmurs?

    Error

    If you list four family tales from beyond that point of personal second-hand knowledge – beyond the reach of stories from those who were actually there – you can break them down as follows:

    • About 1/4 will be more-or-less accurate, though circumstances may be wildly different than expected and details will be wrong.
    • About 1/2 will be, at best, half-truths and potentially misleading.
    • And about 1/4 will be outright inaccuracies or willful falsehoods that have been perpetuated through the family history.

    It’s usually helpful to the GM to at least have a vague outline of what the truth was, and then to apply these ratios to the stories known to the current generation.
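
    If you’d rather let the dice assign tales to categories, a throwaway sketch (the tale names are invented placeholders):

        import random

        CATEGORIES = ["more-or-less accurate", "half-truth at best", "outright falsehood"]
        WEIGHTS = [0.25, 0.50, 0.25]   # the ratios suggested above

        tales = ["the duel", "the buried strongbox", "the traitor uncle", "the fire"]
        for tale in tales:
            print(f"{tale}: {random.choices(CATEGORIES, weights=WEIGHTS)[0]}")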

Non-humans and Hi-Tech

Everything written above applies to humans and those who live on a time-scale that is something resembling that of humans. There clearly have to be modifiers applied to such considerations when you are talking about non-humans, and other modifiers that have to be applied to take into account the penchant of medical advance to meddle in the ‘natural’ state of affairs.

Fortunately, it’s not all that hard.

  1. Start with the natural time-span. Derive a multiplier that can be used to transform a lifespan into human terms, or vice-versa.
  2. Think about the life-cycle of the species, as modified by medical science. This can be considered to shorten some stages of life, lengthen others, and make still others more rigid (treating any variation as a potential ‘medical issue’ that needs to be ‘treated’). Derive an appropriate set of multipliers for these factors and apply to both the human scale and the non-human scale.
  3. Think about the social structure of the species, especially in light of medical advance and the life-cycle impacts already defined. Modify accordingly, and add social restrictions on life-stage transitions.
  4. Use the resulting modified human scale to sketch out the foundations of a family tree, ignoring anything not of direct relevance to the core of the modern family. This works because it’s easier for us to think about such things on the human scale, even a modified one – it takes away one more point of confusion. The answer will be in years relative to the birth of the focal character of the current generation. Go back one further generation than you think you have to.
  5. Apply the human-to-alien scaling factor to get ‘real experienced years’ on the alien timescale. Translate the relative dating into actual dating using whatever the protocol is for such in your game world. Generate a list of the individuals and the dates of their births and deaths, and a timeline which lists those events in chronological sequence.
  6. Starting from the earliest date on your timeline, work forwards through time, looking for key dates in the campaign background to define generational transitions and life-altering events experienced by the population of the family tree. Add each to the overall chronological sequence, and add each event to the “bio” of the characters that experienced it. As you do so, make a brief note as to the consequences / impact on the individual. NB: I always start with a snapshot of the ‘status quo’ at the time of the earliest family member.
  7. When you reach the modern day, you have compiled a set of ancestors and milestones experienced. Some of these may be important enough to expand into a fuller biography – in particular anyone of special significance to the focal character, and anyone still surviving.

There’s a fair amount of work there, but none of it is especially difficult.
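
As a quick illustration of steps 1, 4, and 5, here’s a sketch that scales a human-framed set of generational offsets onto a longer-lived species. The lifespans, offsets, and anchor year are all invented for the example:

    HUMAN_LIFESPAN = 80                       # assumed human natural span, years
    ALIEN_LIFESPAN = 320                      # assumed non-human natural span
    SCALE = ALIEN_LIFESPAN / HUMAN_LIFESPAN   # step 1: the multiplier (here, 4.0)

    # Step 4: human-scale offsets, in years relative to the focal character's birth.
    human_offsets = [
        ("great-grandparent born", -75),
        ("grandparent born", -50),
        ("parent born", -25),
        ("focal character born", 0),
    ]

    FOCAL_BIRTH_YEAR = 1205                   # step 5: anchor to the campaign calendar
    for event, offset in human_offsets:
        print(f"{FOCAL_BIRTH_YEAR + round(offset * SCALE)}: {event}")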

NB: you can also use the same technique to generate ‘histories’ of Kingdoms, of multi-generational businesses, of towns – anything you want. The use of a consistent campaign background creates the functionality of a checklist of important events. After you’ve done a few of these for the one game setting, that ‘checklist’ will start to make the whole process even faster and easier.

The same technique works in Sci-Fi, in Fantasy, in Steampunk – in fact, in any genre that you care to apply it to.

Wrap-up

Family histories are not always necessary; they’re not even useful a lot of the time. But, when they are relevant, they can provide a background narrative that makes a character more substantial, or create an adventure in uncovering the past, or one which has its roots in a part of history intimately and directly connected to a specific PC.

This article isn’t about when to create such histories; that’s best left to each GM and the circumstances of their campaigns. The purpose here is to offer a practical answer to the impossibly-vague question of how long a span of years comprises a generation, and how to employ that when it’s useful to do so.

Postscript Sidebar: Adventure style, tone, and sub-genre

Every player has a particular set of preferences and dislikes. One of the foundational players in my development as a GM loved Sherlock Holmes stories – and hated being ground zero of a mystery plotline. One of my current players hates “Big Cosmic” adventures – but loves Space Opera as a Sci-Fi sub-genre (just not in his RPGs, thank you very much). But it’s not just negative preferences – there are some “zones of subject matter” that are certain to bring certain players to life whenever they are encountered. Some players love major plot twists and surprises, some hate them.

More than at any other time, these preferences should be taken into account when constructing a plotline around the background of a particular PC. It does you no good to make a character the focal point of an adventure that won’t interest them – not unless that is the foundation of the whole adventure, at least!

On the other hand, setting such a background-based adventure in a genre that the player likes and enjoys boosts their interest levels, and makes both character and campaign more appealing to them – and, vicariously, to everyone else.

Bear those facts in mind and don’t be too clever for your own good :)

Comments Off on How Long Is A Generation?

Economics In RPGs 8: The Digital Age Ch 3


This entry is part 13 of 16 in the series Economics In RPGs

A visual example of a 24-satellite GPS constellation (the minimum needed to make the technology work) in motion with the Earth rotating. Notice how the number of satellites in view from a given point on the Earth’s surface changes with time. The point in this example is in Golden, Colorado, USA (39.7469°N 105.2108°W).
Image by Paulsava – Own work, CC BY-SA 4.0, Link

Lots still to get through, so as usual I’m going to dive right in. While I’ll try to have this make sense in a standalone mode, it would be preferable for you to have read Chapter 8.1 and last week’s Chapter 8.2 before continuing.

The Digital Age, Fourth Period 00s-2010s

Most of the decade that followed the beginning of the Millennium just seemed to coast on by without being particularly noticeable or significant. Everything seemed more personal in scope and less international and collective, or at least that is my impression in hindsight.

At the same time, I’m very well aware that this is a false impression that has been created by the epic buildup to the Sydney Olympics, which I wrote about in the previous post.

The goodwill and happy vibe that resulted from “The greatest Games of all time” endured until September 11, 2001 – exactly 22 years ago as I write this – diminishing the importance attached to all troubles and letting people just coast along.

Even the troubled election of George W. Bush (hanging chads, etc.) in late 2000 fitted this narrative, with his strong domestic agenda.

    Beginning: Internet Awakening

    One of the major challenges in constructing a series like this is trying to pick ‘mile markers’ for the end-points. In many respects, a strict chronology means the start of a period exactly matches the end of the previous one.

    You can achieve a starker contrast by shifting those end points to a logical map rather than a strictly chronological one, and that’s something that’s been done throughout this series. But it frequently raises the question – move the dividing line forward or back?

    Frequently, you need a string of different logical dates, which creates a fuzziness about the end/beginning points. So it is, this time around, but it’s worse than usual because one of the defining elements of the era is also somewhat fuzzy in its history, with no clear dividing line separating before from after.

    Internet beginnings and early growth

    While its roots trace back to the 1960s, for all practical purposes the internet really began sometime in the 1990s. The starting point that I prefer is the 1995 decommissioning of the NSFNET in the US, which removed the last impediment to full commercialization of the internet – an event so obscure that most readers will never have heard of it. But you might prefer the 1993 invention of the search engine with web crawler, before which all website indexes had to be manually curated.

    By the end of the decade, the internet was doubling in size every year, while the number of users was increasing by 20-50% each year (Wikipedia, The Internet).

    Here’s the thing with geometric expansions like this – the greater share of whatever you are measuring will always have ‘just happened’, only the precise numbers will change.

    Consider the following sequence, which increases by a third (33⅓%) each year, starting (for convenience) with a value of 3:

    2000 = 3
    2001 = 4
    2002 = 5.333
    2003 = 7.111
    2004 = 9.481
    2005 = 12.642

    Each year’s growth, expressed as a share of the new total:

    2001: 4 - 3 = 1 → 1 / 4 = 25%
    2002: 5.333 - 4 = 1.333 → 1.333 / 5.333 = 25.0%
    2003: 7.111 - 5.333 = 1.778 → 1.778 / 7.111 = 25.0%
    2004: 9.481 - 7.111 = 2.370 → 2.370 / 9.481 = 25.0%
    2005: 12.642 - 9.481 = 3.161 → 3.161 / 12.642 = 25.0%

    Any deviation from exactly 25% is just rounding error. In other words, a quarter of the current total has always arrived in the last 12 months, no matter where you draw the line (those mathematically inclined will realize that this holds for any geometric series, by definition).
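
    For the mathematically inclined, the general result behind that observation (in my notation): if a quantity grows geometrically, $V_n = V_0 r^n$, then the share of the current total added in the most recent period is

        $\frac{V_n - V_{n-1}}{V_n} = 1 - \frac{1}{r}$

    which for $r = 4/3$ gives $1 - 3/4 = 25\%$, whatever the value of $n$.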

    Search Engines

    But in this series, I’ve never been overly concerned with the existence of something compared to the ability to do something with it. You can create the greatest web page in the history of the world; it won’t mean a thing if no-one can find it to read it.

    Early search algorithms were simple and unreliable (I can remember articles in computer magazines testing them and finding that, to be close to comprehensive, you needed to use at least two and preferably three). That led to Copernic (named for Copernicus) – a metasearch engine that aggregated dozens of search results. You can date its roots to 1996.

    That was what eventually set Google apart – the algorithms that it used to rank results in an effort to bring the most relevant results to the top of the list. They have only gotten better at this in the years since, despite the best efforts of others to ‘game the system’.

    Opinion: Google page ranking cheats

    My take on such nonsense as ‘buying back-links’ and other google-ranking trickery: Yes, you might get a massive boost from trickery, but when it becomes apparent that you are gaming the system, you will be penalized – and such long-term pain is quite likely to exceed the short-term gain.

    So I don’t go in for such – heck, I barely make a stab at SEO, preferring to put my efforts into better content, in the belief that it will pay off in the long run.

    Or, to put it another way, you are only as good as your reputation – and that’s far more easily harmed than regenerated. The optimum approach is to avoid reputational harm as much as you possibly can, in the first place.

    Web-based applications

    Until potential customers can find your products, the internet is a plaything. The more effortlessly consumer and provider can find each other, the more significant the internet becomes. As someone once wrote (a deliberate rephrasing of the statement concerning the Bill Clinton presidency from 1992), “It’s all about the applications, stupid.”

    And it’s the 2000s when web-based applications came into their own. Again, there were precursors, but the real development started during this decade, in several forms.

    First, Cloud Computing – Amazon Web Services, in 2002, permitted developers to use Amazon’s hardware to build applications; in 2006, Google Docs was released in Beta Version. This was the ultimate in ‘smaller devices’ – using someone else’s hardware, so that all you needed was enough computing power to interface with that hardware.

    Second, the first technologies to use the internet as a communications backbone rather than an end in itself – you had milestones in chat room development in 1971, 1973, 1980, and 1988. IRC (Internet Relay Chat) peaked in 2003, and has been declining since, overtaken by Social Media (and especially Twitter – which is now in decline itself).

    Third, in the same vein, peer-to-peer networked applications were one of the hot topics through the 2000s – most famously because of Napster, which dates from May 1999. By popularizing the MP3 and MP4 digital file formats, these applications paved the way for the modern-day streaming services that are their legacy.

    And, finally, E-Commerce. This started in 1994, and the first product for sale over the internet was Sting’s album Ten Summoner’s Tales.

    Wine, chocolate, flowers, pizza, and internet banking soon followed (see Wikipedia, Online Shopping). It is worth remembering that the creators of the 1995 film The Net had to actually explain (and demonstrate in the movie) online shopping for goods and services, in sequences that look incredibly clunky to those used to such services in the world of the 2020s. (I strongly recommend this movie for anyone trying to get a ‘feel’ for the state of the art around 1995-2000.)

    The 2000s were when online shopping hit its stride – Amazon may have started in 1994, and gone public in 1997, and started selling books and videos in 1998, but it was in the DVD era that it really exploded, largely because the products weighed less, reducing postage costs.

    The 2000s were all about the Internet going from a toy with better things ‘on the near horizon’ to ubiquitously connecting everything and everyone – an astonishing rate of adoption. It took 25 years for mobile phones to achieve that level of market penetration, for example.

    Mobile Telephones

    That was because mobile phones got started earlier, and because the constraint was always the construction of a dedicated wireless network and other infrastructure.

    While early examples were suitcases and bricks, the size problem was solved by the mid-90s.

    In the year 2000, there were about 35 mobile phone subscriptions per 100 people in the developed world, and maybe 11 per 100 people in the developing world. By 2010, those numbers were 113 and 68, respectively. That was the year that numbers appeared to reach saturation in the developed world (only to start rising again in 2011-12), while the developing world would reach (by my estimates) 100 units per 100 people around 2017.

    Shortly before the start of the decade, therefore, mobile phones were still expensive toys for wealthy and pretentious people – but the explosive growth in subscriptions from 1999 to 2000 signaled that for the decade to follow, that would no longer be the case.

    Early adopters were mostly professionals (who could afford the devices), but by mid-decade, tradespeople like electricians and plumbers were signing up as a business necessity.

    Such rapid change means that the GM needs to “fingerprint” his representation of individual years within the period with an appropriate level of market penetration of, and an appropriate public attitude toward, mobile phones.

Beginning: 9/11: Shockwaves & Awe

So domesticity was the big ticket in 2000, and looked to be the focus for at least the first half of the decade.

Other people had other ideas.

At the time, it was common for Australian TV to affiliate itself with an American network; our ‘late night’ TV was their morning shows. (This continues to some extent, even now.)

Since these were preceded by the graveyard shift, to which the networks relegated the shows they didn’t understand the appeal of, like Sci-Fi, it was common for gamers and the like to at least get to see the start of the shows – and make up their minds on whether or not to stay tuned based on the promised content (sometimes yes, sometimes no).

The introduction to that day’s show talked about a “terrible accident” – a 747 had struck one of the World Trade Towers. I was still getting my head around that when a second plane struck. No-one on-screen said so, but it was immediately clear that this was a terrorist attack, and the most shocking one that the world had ever seen.

I had been doing prep for one of my RPG campaigns, I forget which, but all thoughts of that vanished as I watched the events from half-a-world away. There seemed an inevitability to the third strike, as though a sword of Damocles had finally fallen. At that point, I could bear inaction no longer, and started trying to encapsulate what I was seeing and feeling in music.

That was my personal experience of 9/11. That night, the world changed. Some of the outcomes were predictable – a massive increase in security at airports, a pointed investigation into the obvious failure of intelligence and new tools and resources for the agencies, and a hot war against anyone who was deemed to have been involved. Someone had poked the bear and was about to feel the Wrath Of God.

The next few days were confused. I couldn’t understand the increasing focus on Iraq when the culprits had been traced to Afghanistan, and to locations within that country where strikes would cause relatively few civilian casualties. Eventually, I realized that Al Qaeda didn’t represent enough of an outlet for the “righteous anger” of some in the US – I couldn’t blame them for that, but it made it clear to me that unless calmer heads prevailed, and quickly, there would be consequences.

Sadly, no calmer heads emerged.

    Consequences

    Of course, these consequences and responses were not restricted to the United States. Security at airports everywhere was immediately ramped up, for example.

    Domestic issues immediately became nothing more than an arena in which the consequences of international relations played out. Even principles like personal liberty and human rights, long held to be sacrosanct, were set aside in the resulting paranoia.

    The problem with doing the previously-unthinkable is that it weakens the commitment to everything else once held inviolable. Breaking rules can be habit forming, needing only sufficient motivation or perceived benefit to doing so. This was the slippery slope upon which the US – and to a lesser extent, the rest of the world – now embarked.

    I see a direct connection between these developments in response to 9/11 and events like the attempted coup of January 6, 2021, the former normalizing radicalism sufficiently to permit the latter to be contemplated.

Middle: Mega-corp Services Proliferate

But the world kept turning, and a new normal soon established itself. And that new normal was the rise of four additional Mega-corporations, to join Microsoft: Apple, Amazon, Google, and Napster.

I’ve already touched on several of these and why & how they became significant. Ultimately, these were Utility and/or Service providers who simply happened to provide a capability that everyone wanted. As the sole suppliers of those services to that standard, they were all but immune to the anti-monopoly laws that had broken up large corporate entities before they got anywhere near the size of these “new” entities.

Like most overnight successes, these weren’t – they had spent years or even decades laying the groundwork for their ultimate dominance.

Nor were any of them as ubiquitous as they seemed at the time; there were alternatives to all of them, or would be before long.

Nevertheless, the second half of the decade revolved largely around those mega-corporations and the products and services they offered.

End: Personal Tech

So much so that those products and services are also central to understanding the social patterns that obtained by the end of the decade.

All of those products and services can be characterized as belonging to a singular conceptual theme: Personal, or personalized, tech. Let me demonstrate by listing some – a few obvious ones and some less obvious examples.

  • iPad / Tablet: This was all about portability, about being able to take your computer, and everything it provided, anywhere you went.
  • Napster / iPod: The ultimate mix-tape, fueled by the most personal of choices, the music that you listen to. Leave out any tracks you don’t like or want, and curate only your personal selection – then take it with you everywhere you want. Approve or not, there’s no arguing with the outcome.
  • Google Search: In seeking ways of making the search results more relevant to you, Google was also able to target you with advertising relevant to you – and that translates directly into increased sales for the providers of those products and services. That was the theory, and Google translated it into becoming one of the biggest corporations in the world. In 2010, it was worth about 400 billion USD; at the high point in 2021, that had risen to 2,000 billion USD, or 2 trillion dollars. Its value declined sharply through 2022 (down to about 1½ trillion), but has been recovering in 2023. If the trend continues, it will fully recover in 2024.
  • Amazon: This one’s a little less obvious, but a key part of the sales strategy at Amazon is to use the shopping of others to present the individual with additional products and services that are customized to their profile, or rather, to Amazon’s best guess as to your personal profile. Everything you buy on the site, everything that you put into your wishlist, anything you even look at – in theory, they all weight the selection of products to be offered, on the premise that getting you to buy anything is better (for Amazon) than you not doing so.
  • GPS: The Global Positioning System became fully operational in 1993, following twenty years of development and ‘installation’ by the US Department of Defense. Initially intended to be a purely military application, civilian use was permitted by Reagan after the Korean Air Lines Flight 007 disaster. Its civilian accuracy was deliberately downgraded in the early 1990s (a technology known as Selective Availability) to prevent other militaries using the system contrary to perceived US interests, which has led multiple nations toward developing their own Sat-Nav systems; the policy was discontinued by Clinton in May 2000. In 2004, linking GPS to mobile phones for civilian purposes was successfully tested, the facility for using GPS to locate survivors of a disaster having been mandated in 2002. GPS is all about taking you to where you want to go. Most early problems with commercial Sat-Nav systems can be laid at the feet of completely artificial human traffic-control inventions like one-way streets; since these are all exceptions to the default assumption (two-way travel on a road), they all have to be manually coded within the navigational software – without slowing it down so much as to make it useless. In 2007, Toyota introduced Map On Demand, a technology for distributing updated maps automatically, and the popularity and scope of Sat-Nav systems has been increasing ever since. I even have an App that tracks the bus that I’m waiting for, continually revising its ETA at my stop.
  • iPhone / Smartphone: January 9, 2007, saw Steve Jobs introduce the first generation iPhone. Although there had been mobile computing telephony devices like the Blackberry previously, the iPhone was the device that made the Smartphone popular. Rivals developed their own, and the iPhone now accounts for just 15.6% of the global market share (as of 2022) – but that is still enough for more than 2.2 billion of them to have been sold by Apple since that auspicious 2007 date. Because of their premium pricing, I’ve always regarded the iPhone as a luxury version of the smartphone. Because they can do so much more than a “standard” mobile phone, I’m inclined to treat these as a separate product category in their own right.
End: The GFC

The 2007-2008 financial crisis, known to most of the world as the GFC, marks the beginning of the end of the decade, insofar as all the other trends had made their debut and would encounter no significant development – just more of the same.

Most of us lived through it, and most of us have only a vague idea of what happened and why. Wikipedia lists three causes culminating in a “Perfect Storm”: Predatory lending targeting low-income home-buyers, excessive risk-taking by global financial institutions, and the bursting of the United States housing bubble (Wikipedia, 2007-2008 Financial Crisis) but I would add the subsequent international banking crisis, and prepend the US policy settings for affordable housing that enabled those predatory lending practices in the first place.

Affordable US Housing

The story starts here, with government policies in the US designed to promote the construction of housing that would be affordable by those on less than the median income. There is often assumed to be a greater risk in financing such housing purchases, which is used to justify higher interest rates – but those rates can put such housing out of reach of the very people intended to benefit from it.

Overcoming that problem often requires government support, and that support has taken many forms in many different countries around the world. Ultimately, most of them seem to be founded on the idea of the government sponsoring or co-owning the mortgage, effectively guaranteeing it against failure (or, at least, softening the blow). The most common alternative is some sort of home-buyer’s grant, especially those focused on first buyers, and the construction of low-cost rental accommodations.

The last were especially popular in the 60s and 70s, and contributed markedly to the deterioration of some urban centers as lack of adequate maintenance transformed them into slum tenements. Whenever demand for low-cost housing outstrips supply, the risk is that cheaply-built substandard dwellings will be erected to satisfy that demand; urban decay inevitably follows, with all the attendant social problems.

Correcting these problems can be difficult and expensive, and it’s easy for it to lead to gentrification, which drives out the original residents in favor of a wealthier elite – shifting the problem to somewhere else and pretending that it’s a solution rather than a temporary band-aid.

Subsidizing the construction and purchase of well-built dwellings is the alternative – but said purchase demands affordable loans. The provision of such loans either has to be direct government policy or the result of government policies that are sufficient to persuade commercial entities that there is enough profit involved to be worth the perceived risk.

In my experience, there are two types of low-income tenant / purchaser – those who, when presented with an affordable option, will move heaven and earth to meet their commitments, and those who equate the lowered price with lowered value, and who can hardly be bothered. The first group are generally as safe as houses, given a stable economic foundation; the latter are as reliable as the weather on a changeable day. Ninety percent or more will belong in the first category; but it only takes a few bad apples…. Telling one from the other is always the difficult part.

Anyway, to avoid these problems, in the 1990s, the US Department of Housing and Urban Development (HUD) initiated policies that financed property purchases through the government sponsored entities Fannie Mae and Freddie Mac. Evidence from a securities fraud investigation against six former executives of these entities suggests that in 2008, they held 13 million subsidized loans worth a total of more than 2 trillion dollars.

Several governments, both Democratic and Republican, had sought to limit the amount of government funds that were tied up in such loans by creating policies that encouraged the private sector to do more of the heavy lifting. In particular, various credit controls that were designed to prevent risky and questionable loans to low-income households that had been emplaced as a consequence of the Great Depression were successively watered down or removed entirely.

Predatory Lending

These policy changes permitted, even encouraged, the pursuit of subprime lending. This is the provision of loans to those who may have difficulty keeping up with the repayment schedule. Such loans are generally characterized by higher interest rates, poor-quality collateral, and less favorable terms, in order to compensate for the higher credit risk, as described above. But with perceived government backing, and terms that permit some flexibility in repayment schedules, they can be lucrative.

Some of the protections set up post-Depression included limits to how much risk any given institution could carry. This was designed to protect everyone involved – but a spot of creative accounting on the part of the banks issuing these loans and a weakening of the financial oversight regulations combined to undermine the protection.

Here’s how it worked, as I understand it: You’re a bank, and you’ve issued (say) 50 million dollars worth of these loans, in the knowledge that in a stable economy, 10% of them will fail (costing 5,000,000 dollars), but 90% of them will eventually be paid off, earning maybe 20% interest along the way – so a net profit of roughly 4-5 million dollars.

Packaging all of these into a bundle, you can sell it as an asset worth maybe 53,000,000 dollars to someone else, generating an instant 3M profit (instead of an eventual 4-5M), and wiping all those loans off your books – so you can issue another $50 million worth. Rinse and repeat.
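
To sanity-check that arithmetic, here’s a toy model; every figure is the illustrative one from the example above, not real market data:

    issued = 50_000_000          # face value of loans issued
    default_rate = 0.10          # expected failures in a stable economy
    interest_rate = 0.20         # interest eventually earned on the rest

    losses = issued * default_rate                          # 5,000,000
    interest = issued * (1 - default_rate) * interest_rate  # 9,000,000
    hold_profit = interest - losses                         # ~4,000,000, eventually

    sale_price = 53_000_000
    sale_profit = sale_price - issued                       # 3,000,000, instantly

    print(f"hold to maturity: ~${hold_profit:,.0f}, eventually")
    print(f"sell the bundle:  ${sale_profit:,.0f} now, and the books are wiped clean")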

These packages were ‘mortgage-backed securities’ and they were being tossed around the various financial institutions like confetti because they were so profitable. Each seller was advantaged by downplaying the risk of defaults and promoting the notion that these were good economic risks to take. The sellers of these ‘securities’ also made greater profits if they exaggerated the value of the properties, assuming the ‘best-case’ outcomes of selling them. After all, property always increases in value in the long run, doesn’t it?

The problem is that if there is any sort of economic instability, you can quickly have 95% defaults instead of 10%, and when you repossess the properties, you’re likely to get maybe 10 cents on the dollar (or less) compared to that ‘best case’ valuation. Until the train-wreck, though, the policy appeared to be working, especially if you were only looking at the headline numbers.

In the years leading up to the Wall Street Crash, subprime loans were being issued to low-income working-class people to use for speculation on the stock market – but the loans weren’t sold as ‘speculative’; they were safe as houses, and the booming stock market would ‘always’ pay more than the cost of the loan, wouldn’t it? People mortgaged their homes and personal possessions because it was easy money….

Guess what happened in 2007-8 when things went pear-shaped? The sub-prime loans acted as an amplifier, adding a housing crisis and property valuation crisis and banking crisis on top of the original troubles.

This has had an effect on modern-day US politics, too. Republicans were broadly condemned for the GFC, because theirs had been the hands behind the ultimate deregulation, the removal of the financial guard rails. This helped get Obama elected President, and began the drift to a new political paradigm by the right-wing party – if you don’t have a policy, just an intention, you can never be blamed when something goes pear-shaped. It’s never your fault, it’s just an accident that things worked out that way. And sure, “Mexico will pay for the wall. I intend to make them.”

My advice (for whatever it may be worth): Don’t elect a wish-list. Make people tell you exactly how they are going to achieve their promises and the things that they want to get done. And use your GMing hat to look for ways things might go pear-shaped.

It doesn’t matter what they promise if they are incompetent to implement it, or if you can’t trust them to keep their promises – at least, that’s how I see it.

The Housing Bubble

Every time property gets purchased, it gets inflated in value. There are lots of reasons for this, some good and some bad. Some increase is inevitable because of inflation, for one thing.

Mostly, it’s because there is no rigorous process for valuing a property. It’s all guesstimates and semi-educated guesswork. “Someplace down the road sold for X, but this property isn’t quite the same, so we’ll add Y and take off Z…”

As soon as a property goes up in value, so do all the properties around it, even a block or two away. And these increases can both stack and amplify each other, chains of property value inflation rippling up and down a neighborhood.

Mortgages and interest rates normally act as a brake on these price impacts, slowing the growth of housing bubbles and even occasionally letting some of the hot air out of the prices. After all, if you can’t afford a property because its price has been over-inflated, it won’t sell, and sooner or later the value will be cut back until it does sell.

Now apply the sub-prime mortgage securities situation described earlier to this valuation mechanism. Properties that shouldn’t sell get purchased (by people who shouldn’t be able to afford them). And the place across the road, and another down the street, and another around the corner. And all these purchases are at inflated values. The result is a runaway housing bubble that is inevitably going to burst at some inconvenient time.

Financial Risk: Trading Sub-Prime Mortgage Securities

It gets worse. Even if the government is no longer keeping proper track of debt levels and the insecurity of those debts, you would expect that the people buying these bundles of debt (and thinking them an asset) would do some sort of due diligence to make sure that they really are worth what they are paying for them, right?

But the numbers they would get to see, on which to base such an assessment, are the very numbers subject to the hyper-inflating housing bubble. So it would look like you were buying property valued at maybe 80 million for your 53 million (to continue and extend the example). That’s 27 million in paper profits right there – even if things go belly-up, you can sell that property and more than recoup your losses. You can’t lose, right?

Incestuous Financing In The Banking Industry

We’re still not at the bottom. Many of the banks that were issuing these risky loans were also investing in credit default swaps and derivatives – essentially bets on the financial soundness of the loans.

A credit default swap is essentially a promise that, in return for a fee, should a particular loan go bad, another bank will cover the loss. Since the expectation was that relatively few defaults would be recorded, this was largely seen as being paid for doing nothing (by the bank offering the guarantee) and as a sure-fire insurance policy (by the bank holding the potentially shaky loan).
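
A quick hypothetical sketch of that asymmetry – why the fee looked like free money right up until defaults spiked (every number here is invented for illustration):

```python
# A hypothetical sketch of why a credit default swap looked like free
# money to the bank offering the guarantee: the fee is certain, the
# payout looks improbable. All numbers invented for illustration.

loan_principal = 10_000_000  # the loan being 'insured'
annual_fee_rate = 0.02       # 2% of principal per year, paid to the guarantor
recovery_rate = 0.40         # what can be recouped after a default

def guarantor_expected_profit(default_probability: float) -> float:
    fee = loan_principal * annual_fee_rate
    expected_payout = default_probability * loan_principal * (1 - recovery_rate)
    return fee - expected_payout

print(guarantor_expected_profit(0.01))  # +$140,000: 'paid for doing nothing'
print(guarantor_expected_profit(0.50))  # -$2,800,000: ruinous once defaults spike
```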

Derivatives are contracts that derive their value from the performance of an underlying entity. In essence, those buying the derivatives are investing their money indirectly in the underlying entity; if all goes well, they get an asset of greater value that they can liquidate or on-sell. It’s quite common for assets of this type to be reinvested at the end of the term – why wouldn’t you? It’s already proven able to earn you money and to be safe.

Nothing wrong with that if the underlying entity or operation is sound – by that loose definition, even government bonds behave like a form of derivative, their value resting on the performance of the issuing government. Put derivatives into this economic climate, however, and they simply increased the stakes that everyone had invested in sub-prime mortgages.

It really was one house of cards built on another, built in turn on a third, which was itself built on an earthquake simulator.

The Enron Failure

It’s just possible that everyone involved should have had a better sense of the dangers after the Enron crisis in 2001, the last time deregulation and a lack of oversight combined with people who were being too clever by half.

The story of that scandal is really beyond the scope of this article (and outside the time available to finish it), so I’ll simply drop a link – Wikipedia, Enron scandal – and recommend people watch or read Enron: The Smartest Guys In The Room.


Lehman Brothers Collapse

Far from reading any tea-leaves, or listening to any cautionary tales, the financial services sector seemed to have drunk the Kool-Aid. Lehman Brothers were the front-line example, but it could have been any of several institutions – they all borrowed money from other institutions to fund more sub-prime mortgages.

It was like a pyramid scheme in which each one was reinvesting their proceeds in the pyramid when they all should have known better.

Lehman Brothers were so exposed that a 3-4% decline in the value of their assets would entirely wipe out the equity that underwrote their whole operation.
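
The arithmetic behind that claim, sketched with round, hypothetical numbers:

```python
# The arithmetic of extreme leverage, with round, hypothetical numbers.
# At roughly 30:1 leverage, equity is only ~3.3% of assets, so a ~3.3%
# fall in asset values erases the entire cushion.

assets = 600_000_000_000    # $600B of assets (illustrative, not Lehman's books)
leverage = 30               # assets are ~30x the equity behind them
equity = assets / leverage  # a $20B cushion

decline = 0.035             # a 3.5% fall in asset values
loss = assets * decline     # a $21B loss

print(f"Equity cushion:          ${equity / 1e9:.0f}B")
print(f"Loss from 3.5% decline:  ${loss / 1e9:.0f}B")
print("Insolvent!" if loss > equity else "Still solvent")
```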

During the boom, this insane level of risk earned them and their stockholders monstrous profits, for obvious reasons – but once you start down that path, you are committed; any retreat from the practices that got you into that position shatters public confidence in your operation.

There were attempts to rescue them, of course. In August 2007, they bit the bullet and closed their subprime lending division, BNC Mortgage, eliminating 1,200 staff positions in 23 locations, taking a $25 million after-tax charge, and writing off a $27 million loss in goodwill.

Unlike others, they didn’t repackage their subprime loans and on-sell them; they wanted the whole profit. And just because they were no longer issuing such loans doesn’t mean that they had eliminated the sub-prime loans they had already issued – and that was ultimately their undoing.

They had simply borrowed too much money to survive when the bottom fell out of the overinflated housing market. The rot then spread, as a multitude of lenders had to write off their loans to Lehman Brothers, carrying them perilously close to the edge of their own financial cliffs.

Government Bailouts

    The bankruptcy [of Lehman Bros] triggered a 4.5% one-day drop in the Dow Jones Industrial Average, then the largest decline since the attacks of September 11, 2001.

    It signaled a limit to the government’s ability to manage the crisis and prompted a general financial panic. Money market mutual funds, a key source of credit, saw mass withdrawal demands to avoid losses, and the inter-bank lending market tightened, threatening [other] banks with imminent failure.

    The government and the Federal Reserve system responded with several emergency measures to contain the panic.

    — Wikipedia, Bankruptcy of Lehman Brothers

There was a real risk of the entire financial system collapsing, so deep ran the rot:

    After the onset of the crisis, governments deployed massive bail-outs of financial institutions and other palliative monetary and fiscal policies to prevent a collapse of the global financial system.

    In the US, the October 3, $800 billion Emergency Economic Stabilization Act of 2008 failed to slow the economic free-fall.

    — Wikipedia, 2007-2008 Financial Crisis

International Scope

One question not answered so far in this summary of events is why the GFC was so global in scope. Everything so far points to a US Domestic Crisis; however serious, the rest of the world should have been insulated from it, or so the casual reader might think.

There are two problems with this perspective. The first is that it completely ignores how interconnected the finances and economies of countries are in the modern world; and the US is still the focal point of the global economy.

(China could have claimed the crown in more recent years, but didn’t want the responsibility and international scrutiny that goes along with it, and definitely didn’t want the increased transparency that would have been required. So they managed their economy to keep it just a little smaller than that of the US).

The other critical factor is that banks the world over like to invest in profitable enterprises, and like to loan money to people with the apparent ability to pay it back. Combining the two left them hip-deep in the septic tank of the American problems – they simply didn’t know it until the balloon went up.

On top of all that, there’s an additional consequence of the size of the US economy – it results from a lot of people all over the world doing business with the US. People like me, for example. That makes me, and people like me, elements of both the US and my local economy – if financial trouble in the US means that buying products costs me more, that means I have less money to spend locally. There’s an inherent spread of such economic woes beyond the shores of the United States.

Credit where it’s due

When Obama won the Presidential Election of 2008, Bush went out of his way to ease the transition to the new Administration, inviting the President-elect and members of his team to important summits and meetings such as the G-20.

Bush allegedly told Obama that the GFC was going to be his to manage, and rather than derail attempts to resolve the crisis with an abrupt shift of policies on Inauguration Day, the two worked together crafting and implementing government responses. Much as President Obama gets credit for resolving the crisis, the outgoing President Bush deserves at least some of that credit.

Plenty Of Blame

There’s also plenty of blame to be apportioned. The Republicans may have pulled the final trigger, but the banking and finance sectors had been hard at work during the Clinton administration, persuading those with the authority that they could be trusted – and that the whole country would benefit from the ongoing easing of restrictions.

I vaguely remember it being suggested in one documentary or another that the erosion of protections began with Truman. I’m not sure I entirely believe it, but Kennedy, LBJ, Nixon, or Ford? Could easily have been any of them if it wasn’t Truman.

Greed doesn’t like to be regulated.

Surviving The Storm: An Australian Perspective

Unlike most of the world, Australia did not experience a recession as a consequence of the GFC.

This was a consequence of generous stimulus payments designed to boost the economy, paid to the lowest income earners, including pensioners and the unemployed. Because they have so little, the theory went, they would pump virtually all of it directly into the economy.

The government pumped 11.8 billion Australian dollars into the economy. They say the proof of the pudding is in the eating – it worked. It was neither too little nor too much (though pundits suggested both at the time); while there was a minor recession in the non-mining sector, overall, the economy grew at 0.4% in the fourth quarter of 2011 and 1.3% in the first quarter of 2012. (Wikipedia, Economy Of Australia – Global Financial Crisis).

The upshot: I think that I can offer a more Olympian perspective on the entire GFC, simply because I don’t have any vested-interest axe to grind.

The Rise Of Obama

    On November 10, Obama traveled to the White House and met with President Bush to discuss transition issues while First Lady Laura Bush took his wife Michelle on a tour of the mansion.

    NBC News reported that Obama advanced his economic agenda with Bush, asking him to attempt to pass a stimulus package in a lame duck session of Congress before the inauguration.

    He also urged Bush to accelerate the disbursement of $25 billion in funds to bail out the automobile industry and expressed concern about additional Americans losing their homes as mortgage rates increase again.

    — Wikipedia, Presidential Transition of Barack Obama

In February, mere weeks after the inauguration, the Democrats put forward the American Recovery and Reinvestment Act of 2009 in response to the ongoing crisis, which

    included a substantial payroll tax credit, saw economic indicators reverse and stabilize less than a month after its February 17 enactment.

    — Wikipedia, 2007-2008 Financial Crisis

It’s highly doubtful that the new President would have been able to have all his ducks in a row this quickly if not for the assistance and cooperation of the outgoing President.

I should point out that, in part, President Bush’s treatment of Obama was a response to the events of 9/11; Bush had learned the hard way that these things can come out of nowhere at any time, and he wanted the country to be as ready to respond to an emergency on the evening of Inauguration Day as he could make it.

The economic relief was unfortunately temporary, as secondary effects sparked a recession – now known to Americans as the “Great Recession”.

    In 2010, the Dodd–Frank Wall Street Reform and Consumer Protection Act was enacted in the US as a response to the crisis to “promote the financial stability of the United States”. The Basel III capital and liquidity standards were also adopted by countries around the world.

    — Same Source

With these measures, the economy finally turned the corner.

I still have one, maybe two, chapters left to go in this penultimate part of the series. But I think that next week I’ll take a break from it to present something a little different.


Economics In RPGs 8: The Digital Age Ch 2


This entry is part 12 of 16 in the series Economics In RPGs

The Sydney Olympic Games were one heck of a good reason for a party, more than a decade in preparation. So it only makes sense to illustrate this article about the 1990s with this image of the opening ceremonies. The image is courtesy of Wikipedia and is considered to be in the US Public Domain – see the image page.

As usual in this series, I’ve decided to just push on from where we left off, without the preambles and without a synopsis. You should read Chapter 8.1 before starting this continuation of the article to get the most out of it. But you can dive right in if you want to – at your own risk.

One word of warning: I was in the IT industry in the period in question, and know an awful lot about it as a result. That creates a tendency to waffle on (which I have tried to fight against) and to disappear down (tangentially-relevant) rabbit holes (which I have actively tried to resist). That can have two outcomes: either I have skirted over something superficially that deserved greater attention on behalf of those less-informed, or I have delved too deeply into things that seem relevant to me, but which may not be so important in the eyes of others. It’s even possible to fail in both ways at the same time. If there’s anything that’s unclear, use this as a starting point and guideline for your own research.

The Digital Age, Third Period 90s–00s

The final decade of any century is always going to be a compound of conclusions and of precursors signposting the beginning of the new. This is doubled and even tripled when we’re talking about the end of a millennium. And it was doubly true here in Australia and in Sydney as we ramped up not only for Y2K (like everyone else) but to host the Sydney Olympics; the city was obsessed with going the extra mile to make those games a spectacular success.

Sidebar: The Best Games In History

The reasons for this are an illustrative case study in how many influences can come together to create an irresistible trend. In this series, we’re primarily concerned with influences on the economy, and I’ll touch on that again at the end of this sidebar.

First, there was the inevitable desire to show our city and our nation off to the world. The economic benefits in terms of tourism were expected to last long beyond the games and even beyond the interval to the 2004 games, and in fact this was the outcome. It took COVID to bring them to an end, and now that most countries have reopened, there is undoubtedly a lingering residuum. But there was an increased emphasis on this because Australians are well aware of the tyranny of distance – we are a long way from anywhere else, especially from the US, Canada, and Europe, and needed to sell the nation as being worth that extra effort and expense.

Second, there was a dollop of inter-city rivalry. Melbourne had hosted the Olympics back in 1956, and while Sydneysiders had supported those games wholeheartedly, we wanted to seriously up the ante. Such friendly rivalries are a common element in Australian society – there are those who think that Australians approach everything as though it were a friendly game, and there’s an element of truth in that.

Third, the preceding games had not been the greatest success. We were acutely aware that the then-president of the IOC had failed to deliver his usual pronouncement of ‘the best games in history’ over those games, and that earning that accolade would only enhance the perception of the city and of the event. What’s more, everyone overseas knew that Australia felt that way – so there was an expectation. (The thing with expectations is that you either fail to live up to the hype, and are viewed as diminished as a consequence, or you meet or even exceed expectations, and gain added reputational luster as a result. The higher the expectation, the easier it is to fall flat. But if expectations are already going to be sky-high and you pull it off, the world is your oyster. We wanted to ensure that the hype, however elevated it was, would be seen as understatement afterwards – which demanded our going the extra mile to make visitors feel welcome. We did, and it worked.)

Fourth, this would be either the culminating sporting event of the century, of the millennium even, or the launchpad for those of the century and millennium to come. Either way, there was extra pressure to get it ‘better than right’.

Fifth, there was widespread community support for the Games. Making the event a success was a point of pride for almost everyone in the city – to the point where some of the volunteers and officials took time off work (at their own expense) to attend the two prior games and learn what to do, and what not to do. And they were so successful that they were actively recruited, to pass those lessons on, by a number of subsequent games and comparable events.

Sixth, Australia had developed an international reputation for hosting big events “better than anyone else”, starting with the Formula 1 in Adelaide. The number of times teams, drivers, and officials reported that ‘the Australian temporary facilities are better than those of many of the permanent tracks that we go to’ started that perception, and it’s been built on every year since.

Seventh, Australians as a group tend to be perceived – more accurately than not – as sports-mad. We regularly punch above our weight in sporting events, and pride ourselves on always making an opposition earn their victories, no matter how great the mismatch may be on paper. We successfully translated that into a perception of the games as a whole being a sporting event in which other host cities, past and future, were our rivals.

And, finally, there were the other economic and social benefits. These were estimated to be in the billions of Australian dollars pre-games, and more billions in the course of the actual event. Afterwards, not only were there the anticipated tourism benefits but the games infrastructure was designed and intended to be economically and socially productive. Other games had made such claims and failed to deliver, and those lessons were harshly scrutinized as our plans moved forward. The result: the stadium is still in regular use, the Olympic Village is now a residential suburb, there was a marquee event designed to step into the Olympic aftermath to keep a positive view of the location active (which it successfully did for many years), and the games turned a significant profit for the state and the country as a whole.

One example of how this was achieved can be considered indicative: there was collaboration with tourism providers to produce integrated tour packages that either culminated in, or kicked off with, attendance at the Games. Australia may have been a once-in-a-lifetime destination due to the distance involved, but a lot of work was done to maximize the bang that people got for their buck, on the assumption (and I think it was calculated at the time) that every happy tourist would generate 2-point-something more in future years – and that would mean more work, and more money, for everyone.

So there were eight factors contributing to set the standard for our hosting. Some of them were more significant early on, to be largely supplanted and left behind as a culture of excellence took hold. But they were all pushing in the same direction. Other events have done their best to emulate the success, and achieved it at least somewhat, but there were factors – like the fourth item on my list – that they could not replicate.

In economics, there are often four factors pulling one way and three pulling in the other, creating an unstable tension with an overall (and temporary) trend that can be reversed by a quite small change or event. When the dice eventually all line up, the result is a tsunami of events, all but unstoppable; the most you can hope for is to cushion the impact. More often than not, these events are negative in nature; positive examples require a lot of hard work on every front. But sometimes you can aim for the stars, and hit the mark.

The end of one era is the beginning of another

Getting back to the main point, the timing means that everything that happens in the last decade of a century or a millennium is viewed either as a culmination – “everything has been leading to this” – or as the harbinger of the future. There is considerable truth to the concept that – rightly or wrongly – anything that can’t be characterized as the first is automatically assumed to be the second, which shapes policies – sometimes, when it shouldn’t.

There is therefore an inherent turbulence built into social and economic systems and perceptions at the start of a new millennium or century – but this is often masked, and temporarily delayed from public perception, by a sense of having made the milestone. Everyone relaxes afterwards, at least for a while, and feels secure – when perhaps they shouldn’t. The seeds of the First World War were undoubtedly sown by the Empires of the 19th century, and by their relationships and treaties. Intended to create peace and respect between the dominant powers of their day, no-one foresaw that they would lead instead to war on an unprecedented scale.

We have the benefit of hindsight, and so can see – looking back – the significance of the milestones marked off in the critical decade, whereas they were largely unappreciated or underappreciated or misinterpreted at the time. The lesson some people – including scholars – fail to incorporate into their perceptions and theses is that the same is true of EVERY time period you study. Seeing the forest for the trees is always a challenge, often not possible until you look back from some remove. At the time, critical events are just ‘stuff that happens’.

    Beginning: Invasion Of The PCs

    The ‘stuff that was just happening’ at the beginning of the ultimate decade of the 20th century was the tsunami of adoption of the Personal Computer. No-one, and I mean no-one, foresaw how big this movement would be, or what impact it would have.

      Office Computing

      Computers had been in big businesses for a while, but they cost so much – to purchase, to install, and to operate – that they were completely out of reach for even moderate-sized businesses. To change that, all three of these elements had to change.

      The PC, in any of its incarnations, solved the first problem, purchase price, to at least some extent. The IBM product, and its clones, solved it better than the Apple Mac, and the price difference became a leading cause of tribal contention at the time – were Macs worth the extra expense? Those in the pro-Apple tribe said yes, those in the IBM tribe said no, and there was a very narrow group caught in the middle that said “yes – for now.” The latter were shouted down by everyone but were proven right in the end.

      Both solved the installation problem almost completely – to the point where such problems were viewed as unusual exceptions when they arose. You didn’t need a dedicated computer room to house these devices, you didn’t need expensive air-conditioning systems, etc – you simply plugged them into a wall socket or power-board and sat them on a desk. Job done. Very occasionally, a domestic power supply would not be stable enough for reliable operations, but these were so rare that they were viewed as a failure on the part of the electrical supplier. In effect, the definition of ‘reliable power’ was rewritten to meet the needs of the newly-ubiquitous technology.

      Nevertheless, such problems continued to crop up throughout the decade, becoming less and less frequent as electricity providers caught up.

      At the start, the IBM product had no GUI (Graphical User Interface – i.e. mouse and pointer). It was a text-based system that was perceived as less user-friendly than the Mac. GUIs are inherently easier to learn, more intuitive. Windows 95 changed that – at least partially – and Windows 98 took the best selling point of the Mac and welded it to the best of the IBM product to create a world-beater. Apple responded with better color and high-resolution graphics, but the writing was largely on the wall.

      But Windows had already taken the business world by storm long before Windows 95. There are three major legs of business applications: what became known as desktop publishing, database entry and retrieval, and spreadsheets. The Macs had an undoubted edge in the desktop publishing arena – WYSIWYG (‘What you see is what you get’, in other words what you saw onscreen looked like the eventual printed page). Microsoft and the IBM-clones had the edge in the other two, and there was a lot of debate about which was more important, most of which missed the point; it wasn’t that the spreadsheets and databases themselves were more important, it was what you could do with an application built on top of them, using them as a ‘back-end’: point-of-sale systems, inventory management systems, accounting systems, and so on.

      What was more, these applications could then output information to template ‘forms’ in the word processors to produce invoices and sales records and what-have-you, integrating everything into one unified suite of products. As the potential for these integrated systems emerged, customer after customer was won over – especially since the price was so much better. WYSIWYG was nice, but not critical to operations; everything else permitted better management of resources and potentially greater profitability.
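
      For readers who never saw these systems in action, here’s a minimal modern sketch of the kind of integration being described – a sales ‘back-end’ feeding a word-processor-style invoice template. The file format and field names are hypothetical; this illustrates the idea, not any particular product:

```python
# A minimal sketch of the 90s-style integration described above:
# a sales 'database' back-end feeding a word-processor-style invoice
# template. The data format and field names are hypothetical.

import csv
from io import StringIO

# The 'database back-end': sales records, here as inline CSV.
sales_csv = """customer,item,qty,unit_price
Acme Stores,Widget,10,4.50
Acme Stores,Gadget,3,12.00
"""

# The 'form template' the word processor would hold.
invoice_line = "INVOICE for {customer}: {item} x{qty} @ ${unit_price} = ${total:.2f}"

for row in csv.DictReader(StringIO(sales_csv)):
    total = int(row["qty"]) * float(row["unit_price"])
    print(invoice_line.format(total=total, **row))
```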

      Cautionary Tales

      Before moving on, let me address one of the biggest mistakes that people made at the time, and continued to make right through the 2000s. Computerizing an operation did not make for less work for staff, or less expense, but that was always the #1 reason offered for doing so. Unscrupulous salespeople took full advantage of this misperception, promising the earth, then washing their hands when it didn’t materialize.

      Computerizing an operation permits better management of the operation, if you structure the implementation to give you essential information in a timely manner. Interpreting that information is always up to management, the computer can’t do it for you. You pay for this control by requiring (generally) more work from staff, and possibly even additional staff.

      You can’t control the cost of implementation; what you can control is getting value for money. Toe-in-the-water implementations are doomed to inevitable failure; the more whole-hog you go, the greater the bang that you get for the buck, and the more likely the implementation is to be a success.

      In the mid-90s, I saw an estimate that 80% of computer implementations were disasters waiting to happen, and half of the rest had already been disasters that had been survived and learned from. Only one-in-ten computerizations had been successful, and half of those (or more) were down to blind luck. Bear these facts in mind when dealing with any business operation in such a time period.

      Back to the topic at hand – the invasion of the PCs into office spaces. I’ve covered two of the three problems that needed to be overcome, showing that one was only partially solved by the Mac but fully solved by the IBM-clone, and that both solved the second successfully – which brings me to the third: ease of operation. And, in fact, I’ve touched on a lot of aspects of this third problem along the way. It’s worth remembering that prior to the PC, staff needed specialist training to operate computer equipment, often very expensive training. The new desktop computers did away with that need almost completely – or so it might seem at first glance.

      At first, the Macintoshes and Apple-IIs were comparable to, if not better than, the IBM-PC in the user-interface stakes. The problem was that there was so little software available for business applications on the Apples. The IBMs may have had a menu-based system, with keyboard shortcuts that needed to be memorized, but they had the software ready-to-go. And they had training available for using those applications, providing an instant productivity boost in terms of using the new system. Apples looked prettier (on-screen, at least) but had none of this infrastructure to make them business-friendly. They were selling raw beef and expecting the customer to do the cooking – which is fine at home, but not what you expect at a four-star restaurant.

      An example of benefits

      GMs running games in this era need to understand what made a computerized workflow successful, but this is really hard to pin down unless you were there at the time, because glossy promises in glossier promotional materials, coupled with a certain rose-tinted hindsight, obscure the truth. So here’s a practical, if fictional, example.

      The computer gets brought in to manage sales and inventory for a small general store (I’ll be using a ‘corner store’ in the Australian sense of the term, but it should be pretty close to similar retail operations the world over). This store has been operating at a profit for a number of years, but the profit margins are shrinking and the customers are starting to dry up. Instead of relying on general impressions of what’s selling when, a year’s worth of recorded sales permits the owner to determine that some products sell better in certain seasons (which he already knew) but that there are a couple of spikes in demand because of holidays and the like. As a result, he is able to reduce the inventory that he is carrying of those products, stocking more of them when demand is about to increase; instead of $100,000 in stock, he now only has to keep $80,000 worth for most of the year, but at key points, he needs to carry $110,000 worth – explaining why he experiences difficulty in paying his bills at such times of year.

      That gives him $20,000 for most of the year to invest in products in greater demand – he uses half of it for that purpose, puts half of what’s left towards those difficult bills, and intends to use the remaining $5000 for promotional activities – sales and discounts and so on. Using the computer’s sales records, he can determine week-to-week which promotions work (overall revenues go up) and which don’t. So he grows his entire business by 10%. His staff are somewhat disgruntled, because they have to work harder, but their jobs are actually more secure because they know how the new system works – except for those who don’t take the time to understand it. But he can now afford to give them a 2-4% pay rise, and still take more than half the extra earnings as profits. What’s more, he’s able to better cope with local changes in buying habits – stocking products that the nearby supermarket doesn’t, for example, with a small overlap of products that might be needed unexpectedly or when the supermarket is closed.
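
      For anyone who wants to check the arithmetic, here it is as a quick sketch (all figures taken from the fictional store above):

```python
# The corner-store arithmetic from the fictional example above.

old_stock = 100_000    # inventory carried year-round, before computerization
new_baseline = 80_000  # leaner inventory for most of the year
peak_stock = 110_000   # carried only when demand is about to spike

freed_capital = old_stock - new_baseline            # $20,000 freed up
new_products = freed_capital // 2                   # $10,000 into better sellers
bills_buffer = (freed_capital - new_products) // 2  # $5,000 toward those bills
promotions = freed_capital - new_products - bills_buffer  # $5,000 remaining

print(freed_capital, new_products, bills_buffer, promotions)
# -> 20000 10000 5000 5000
```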

      History shows that this won’t be enough in the long term to fight off the supermarkets, but this store won’t be amongst the early casualties.

      Home Computing

      Here’s a simple thought experiment: If the average small business has two-to-four employees plus the owner, what’s the more important market: the domestic computer, or the business computer package? The latter costs twice as much, maybe even three times as much, but we’re talking about 3-5 times as many home systems for every business installation. So the home systems market is the more important, right?

      Wrong. People are strongly resistant to learning to do things two different ways. If your workplace is using a Windows installation, you are twice as likely (or more) to prefer a Windows installation at home, too. And businesses like to encourage this, because it means that any experience or additional familiarity you acquire through using your home system makes you more productive with the business systems, and vice-versa – so they start including ‘familiarity with’ in their job vacancy requirements. And salesmen cotton on quickly, and start discounting lower-end home systems, because if four of a business’ staff already have familiarity with System X, it makes it more likely that the business will buy System X.

      On top of that, there are all those employees of businesses who had not computerized – and at the start of the decade, that was the majority. If you can capture that market segment, too…

      Computers had already started infiltrating the domestic market for entertainment purposes, but the rise of the business PC massively accelerated the process, and overwhelmingly, it was an IBM-clone that became the baseline home system.

      Bulletin Boards to Early Networks

      So you have a shiny new computer at home. It’s just going to sit there until YOU do something with it.

      A computer is a tool; you have to use it for something. There are a number of application areas that emerged at much the same time as PCs started infiltrating homes in a big way – games, art, writing/publishing, personal finances, well-being, automation, and communications. To these were soon added music and video. All of these were primitive to start with, and developed at different paces. Initially, a lot of development went into the computer games sector, because it made the best and fastest returns. In most of the other areas, you had one or perhaps two programs (if any), while you might have fifty or a hundred different games.

      At different times in the history of the PC, other applications came to the fore to (temporarily) dominate the landscape – the desktop publishing craze of the mid-90s, for example. No-one really expected communications to emerge from the pack as one of the most important to users’ day-to-day lives. E-mail grew and grew in importance, slowly gaining ground and not relinquishing it until the late 2010s. But it started far simpler, as bulletin boards.

      Describing these for anyone who hasn’t used one is phenomenally difficult, because any explanation takes far longer to describe the limitations than the subject is worth. A smaller, simpler Reddit is probably the closest brief description that can be made, but it only tells half the story. Early examples had no subject differentiation, for example – everything went into one vast list of threads that had to be manually selected. There was no search function. There was no internet – you had to dial a specific number to get access to the bulletin board hosted at that number. To go to a different one, you had to hang up and dial a different number, establishing a new connection. You discovered new sites through word-of-mouth recommendations from other users.

      Over the course of the decade, the bulletin boards evolved into the internet, and spun off into chat rooms (which evolved into social media), and connections began to be extended into all sorts of other areas – when it became possible to email a fax machine, for example. And then along came the search engine and internet protocols, so that one window could be used to “browse” from one site to another, all on related topics. Online trade was still in its infancy by the end of the decade, with people just barely beginning to grasp the potential.

      The history of this development is one milestone after another at lightning-fast pace, and WAY beyond the scope of this series. But some awareness of ‘the state of the art’ is necessary to properly simulate any point in history properly, and it’s generally better described, from a modern perspective, by listing all the things that you couldn’t do – with the knowledge that within two years, the list would have changed.

      Microsoft: The first Mega-corp

      The success of Windows made Microsoft the first in the modern generation of mega-corporations, a term unashamedly stolen from Cyberpunk. They didn’t just lead the industry, they were dominant. And they used this power to do some unsavory things in the corporate sense.

      Serious attempts were made to apply anti-monopoly legislation against them, but these generally fell foul of the fact that this was the product that everyone wanted. There were times when regulators were able to restrict and restrain the monolith – the browser wars come to mind, for example – but these were drops in the ocean. Even today, the monopolistic powers of the mega-corps like Google are at best restrained by legislative authority.

      History shows us that these powers will persist until some fundamental shift in the technological foundation creates an opportunity for a new player to supplant the existing power base; the smartphone brought Apple back into prominence, but Google’s Chrome has become even more ubiquitous. To the extent that anyone can be said to have won the Browser Wars, Google has perhaps the best claim to the ‘trophy’.

      Here’s a very common potted business history: Someone writes an app that adds useful functionality to the then-current generation of Windows. It starts selling like hot cakes. Microsoft do one of three things: (1) decide it’s a flash in the pan, and not worth the effort of doing anything about; (2) buy the rights to the killer-app and bundle it into the next generation of Windows; or (3) develop their own version or buy the rights to a rival product and enhance it, then bundle that with the next generation of Windows (if not sooner, through an update). In two out of three cases, a prosperous entity within the IT universe vanishes – and the next version of Windows becomes that little bit more profitable, having something to sell and market to existing customers.

    Middle: Skirting Eco-disaster

    Who remembers the hole in the Ozone layer? This was an eco-disaster in the making in the mid-80s, and led, in 1987, to a ban on CFCs and other chemicals known to cause ozone depletion.

      The ban came into effect in 1989. Ozone levels stabilized by the mid-1990s and began to recover in the 2000s.

      — Wikipedia, Ozone Depletion

      Recovery is projected to continue over the next century, and the ozone hole was expected to reach pre-1980 levels by around 2075. In 2019, NASA reported that the ozone hole was the smallest ever since it was first discovered in 1982.

      — Same source

    When considering the “Ozone Hole” (actually a misnomer), it has to be remembered that when the ban was instituted, no-one knew how quickly or completely the damage would be repaired. As the populated nation most closely affected, Australia paid particular attention to the situation for the next couple of years, until other policy priorities began to distract the government of the day.

    This is perhaps the most widely-recognized environmental disaster that was narrowly avoided in this era. It is far from the only one. Another one to assume prominence in Australia was Soil Erosion.

    In the US, Acid Rain was perhaps a bigger concern, threatening their aquaculture-based industries. This led to the 1990 Clean Air Act. How close to disaster was this? You can judge by the 1992 closure of all eastern seaboard fishing grounds because there had been insufficient recovery of stocks.

    The 1990s saw increased awareness of the hazards of oil spills, soil and water contamination, toxic waste dumping, and chemical accidents.

    And then there’s Asbestos…

    Asbestos

      The use of asbestos in new construction projects has been banned for health and safety reasons in many developed countries or regions, including the European Union, the United Kingdom, Australia, Hong Kong, Japan, and New Zealand.

      A notable exception is the United States, where asbestos continues to be used in construction such as cement asbestos pipes.

      The 5th Circuit Court prevented the EPA from banning asbestos in 1991 because EPA research showed the ban would cost between US$450 and 800 million while only saving around 200 lives in a 13-year time-frame, and that the EPA did not provide adequate evidence for the safety of alternative products.

      — Wikipedia, Asbestos

      Before the ban, asbestos was widely used in the construction industry in thousands of materials. Some are judged to be more dangerous than others due to the amount of asbestos and the material’s friable nature.

      Sprayed coatings, pipe insulation, and Asbestos Insulating Board (AIB) are thought to be the most dangerous due to their high content of asbestos and friable nature. Many older buildings built before the late 1990s contain asbestos.

      — Same source

    The Australian Experience
    Asbestos and the diseases that it causes were far more prominent in Australia even than in countries that took strong action with respect to the construction material. We had as many asbestos-related fatalities here as did the UK, despite having only 1/3 the population, and there was ongoing litigation on behalf of miners throughout the 80s that kept the issue returning to the front pages.

      Western Australia’s center of blue asbestos mining was Wittenoom. The mine was run by CSR Limited (a company that had been the Colonial Sugar Refinery).

      — Wikipedia, Asbestos and the law

    The 1990 single Blue Sky Mine by environmentally and socially-aware rock group Midnight Oil exemplified the anger and resentment that was felt – not so much over the issue itself, but with the way those deemed responsible sought to dodge culpability.

    James Hardie Industries

      The main manufacturer of asbestos products was James Hardie, which set up a minor fund for its workers, then transferred operations to the Netherlands where it would be out of reach of the workers when the fund expired.

      — Wikipedia, Asbestos and the law

    Just to finish the story: In 2001, James Hardie separated two of its subsidiaries from the parent company to create the Medical Research and Compensation Foundation (MRCF), essentially an inadequately-funded dumping ground for the company’s asbestos liabilities.

      Then CEO of James Hardie, Peter McDonald, made public announcements emphasizing that the MRCF had sufficient funds to meet all future claims and that James Hardie would not give it any further substantial funds.

      …The net assets of the MRCF were $293 million, mostly in real estate and loans, and exceeded the ‘best estimate’ of $286 million in liabilities which had been estimated in an actuarial report commissioned by James Hardie.

      — Wikipedia, James Hardie Industries

    The 2004 Jackson report (see below) later found that

      …this ‘best estimate’ was ‘wildly optimistic’ and the estimates of future liabilities was ‘far too low’.

      — Same source

    James Hardie then moved all of their operations to the Netherlands in an attempt to isolate the rest of the company from these liabilities.

    Such tactics created outrage here, and cemented public opinion firmly against what had been one of the most successful and respected companies in the country. But the story kept getting worse:

      Shortly after the move, an actuarial report found that James Hardie asbestos liabilities were likely to reach $574 million.

      The MRCF sought extra funding from James Hardie and was offered $18 million in assets, an offer the MRCF rejected.

      The estimate of asbestos liabilities was promptly revised to $752 million in 2002 and then $1.58 billion in 2003.

      — Same source

    James Hardie was dragged to the negotiating table, kicking and screaming, by the findings of the Jackson Report cited above, but eventually promised to set up a compensation fund – then stalled and delayed for another two years.

      It was not until November 2006, after the federal government had created ‘black hole’ tax legislation, which made the contributions of James Hardie into the voluntary fund tax deductible, and had granted the voluntary fund tax-exempt status, that James Hardie finalized the compensation deal.

      — Same source

    The saga continues!
    But the story still wasn’t over.

      In February 2007 every member of the 2001 board and some members of senior management were charged by the Australian Securities & Investments Commission (ASIC) with a range of breaches of the Corporations Act 2001 including breach of director’s duties by failing to act with care and diligence.

      ASIC also undertook investigations into possible criminal charges against the company’s executives but in September 2008 the Commonwealth Director of Public Prosecutions decided there was insufficient evidence and charges were not pursued.

      In 2009, the Supreme Court of New South Wales found that directors had misled the stock exchange in relation to James Hardie’s ability to fund claims. They were also banned from serving as board members for five years. Former chief executive Peter Macdonald was banned for 15 years and fined $350,000 for his role in forming the MRCF and publicizing it.

      — Same source

    The former directors other than Macdonald appealed, but the ruling against seven of them was upheld.

    Circling back to relevance
    At the start of the decade, there was a general perception that asbestos claims were related to the mining of the raw material and the preparation of products that used it, and that the finished products were stable and safe to use.

      Asbestos cement, genericized as fibro, fibrolite (short for “fibrous (or fibre) cement sheet”) or AC sheet, is a building material in which asbestos fibres are used to reinforce thin rigid cement sheets.

      — Wikipedia, Asbestos Cement

    Since WWII, this product had been massively popular for quick and easy construction of homes and other structures. While it was used world-wide to some extent because of its resilience and affordability, it became a ubiquitous construction material in Australia and New Zealand through this period. I spent most of my youth living in Fibro-based houses. And, so long as it remained intact, those who saw no danger were correct.

    Damage or demolition, however, tore and shattered the sheets and other forms created using the material, releasing dangerous levels of asbestos fiber into the air and onto surfaces, from which it could be picked up by workmen simply by touching them.

    Over the course of the decade, as this came to light, asbestos removal (abatement) and remediation measures became mandatory, and often expensive. Mitigating the exposures involved tents to confine the dust, high-quality masks and environmental suits for the workforce. At one point, continual wetting of the sources was thought to be necessary. The water itself that was used had to be captured and cleaned, because otherwise you were simply spreading the fibers around.

    Home renovations were a big thing in the Australia of the 90s and have stayed that way, led by TV shows such as Our House (not to be confused with movies or the US TV series). Our House (Australian) ran from 1993 to 2001, and arguably it would have continued if not for the untimely death of host and former Skyhooks front-man, ‘Shirley’ Strachan.

    It was far from the only one, though – “Better Homes And Gardens” (a TV series modeled on the Australian version of the American magazine) has been running for twenty-eight seasons, and a game-show-styled renovations reality program, “The Block”, has appeared onscreen for a total of 19 seasons – two in 2003-2004, and the rest after a 6-year break. (The 19th season is currently airing.)

    So Asbestos abatement is a particularly big deal here, but is important elsewhere, too.

    Bottom line: Environmental considerations will crop up frequently and unexpectedly throughout the decade, but especially the latter half. That was when the general public started becoming aware of Climate Change.

    Middle: Rise Of The Smaller Device

    Smaller computing devices had been around for years, but exploded in popularity in the 90s.

      Laptops

      Laptops – if you can call them that – had been a goal for almost as long as there have been computers. They weren’t really portable until the Epson HX-20 of 1981, but this had only a small LCD screen and offered nothing like the full “portable PC” experience. Displays reached 640×480 (VGA) resolution by 1988 (Compaq SLT/286), and color screens started becoming a common upgrade in 1991 (Wikipedia, Laptop). So these were the first generations of what would be recognized as a modern laptop.

      PDAs

      Parallel to these developments was the rise of the PDA. The first example was the Psion Organiser of 1984, but PDAs didn’t really take off until the Psion Series 3 of 1991. And then they seemed to explode for the rest of the decade.

      If, as is arguably the case, the Laptop evolved into the iPad, it was a merger between the iPad and the PDA that became the iPhone – the beginnings of the now ubiquitous Smartphone. It can also be argued that e-book readers are also a development of PDAs, with some technology from the iPad incorporated (bigger screens, for example).

      Pagers

      Before all of these was the Pager. These were first developed in the 50s and 60s, became popular in the 1980s, and were ubiquitous amongst certain professions and tiers of management throughout the 1990s. For a lot of the 2000s, they were still preferred over more capable devices by some government groups because they were perceived as more resilient services in the event of natural or man-made emergencies. They were also widely used in restaurants and medical facilities like hospitals.

      In Japan, more than ten million pagers were active in 1996 (Wikipedia, Pager). It’s a measure of the decline of the technology that on 1 October 2019, the last Japanese provider of pager services ceased operating. Everywhere, they are now being phased out, a technology that has reached its sunset.

      But in the 1990s, they were a ubiquitous presence. Some people had – and routinely used – more than one.

    Understanding what is popular at any given point in the era, what it can be used for, and its limitations and fallibilities, is essential to properly depicting the era (does everyone remember Nelson and his Apple Newton from an early episode of The Simpsons?).

    End: Hope Fails

    Although it had been possible for most of the decade to balance good news against the occasional piece of bad, there were a couple of historical developments that could not be dismissed so easily. Four developments in particular would create tensions – some of which would only be inflamed by subsequent decades.

    The end of Glasnost and Rise of Yeltsin

    Much of the following paraphrases content from the Wikipedia page for Mikhail Gorbachev.

    The Russian word from which Glasnost derives has long been used to mean “openness” and “transparency”. In the mid-1980s, it was popularized by Mikhail Gorbachev as a political slogan for increased government transparency in the Soviet Union within the framework of perestroika and the word entered the English language with that definition, especially in relation to the Soviet Union and Russian Federation.

    Under Gorbachev the ice not only thawed, it seemed to shatter; every time he had a summit with a western leader, there was a positive outcome for both Soviet Citizens and those in the West.

    To those who knew what to watch for, Gorbachev was straddling a fine line between hardliners and even more progressive elements, and in time this led to an attempted coup in August of 1991. In less than three days, the coup leaders had realized that they lacked sufficient popular support to continue, and stood down. Boris Yeltsin emerged as a popular figure for standing up against the coup, giving a memorable speech atop a tank.

    Gorbachev pledged to reform the Soviet Communist Party, but faced aggressive criticism from Yeltsin for having appointed many of the coup members to their positions of authority in the first place. His attempts at compromise between the two factions were now held to have been a mistake, and his authority was gone; he had been pushed off that fine line by the conservative hard-liners. Just two days after his return, he resigned as general secretary of the Communist Party and called on the Central Committee to dissolve.

    After the coup, the Supreme Soviet indefinitely suspended all Communist Party activity, effectively ending communist rule in the Soviet Union, and the collapse followed at breakneck speed. Many (who should have known better) celebrated a final victory in the Cold War, but most considered the resulting instability a greater threat than a Soviet Union under Gorbachev would have been. Yeltsin, now wielding greater authority than Gorbachev, stated that he would veto any idea of a unified state, instead favoring a confederation with little central authority. The referendum in Ukraine on 1 December, in which some 90% voted for secession from the Union, was a fatal blow; Gorbachev had expected Ukrainians to reject independence.

    Without Gorbachev’s knowledge, Yeltsin met with Ukrainian president Leonid Kravchuk and Belarusian president Stanislav Shushkevich in Belovezha Forest, near Brest, Belarus, on 8 December and signed the Belavezha Accords, which declared the Soviet Union had ceased to exist and formed the Commonwealth of Independent States (CIS) as its successor. Gorbachev was furious but impotent to preserve the Soviet Union, as one state after another ratified the new political structure. Forced to accept the fait accompli, Gorbachev announced that he would resign when the CIS became a reality.

    Gorbachev reached a deal with Yeltsin that called for Gorbachev to formally announce his resignation as Soviet president and Commander-in-Chief on 25 December, before vacating the Kremlin by 29 December. On the 26th, the Soviet of the Republics, the upper house of the Supreme Soviet of the Soviet Union, formally voted the country out of existence.

    Few people knew what to expect from Yeltsin and this new political entity, the CIS. And unpredictability always lends itself to uncertainty and doubt. The question that needed to be answered was how sincere Yeltsin was in his past proposed reforms, and how much of a political opportunist had he been?

    The fall of Yeltsin & the Rise of Putin

    As it happened, Yeltsin was sincere, but fell victim to the perpetual enemy of the idealist – wishful thinking. His policies were insufficiently robust and had too great a tendency to assume that things would always work out the way he thought they would. Or at least, that’s how many people came to see him after the fact.

    On at least two occasions, he survived attempts to impeach him, signaling the renewed presence of hardliners within the government ranks. In 1998, the Prosecutor General of Russia, Yuri Skuratov, opened a bribery investigation against Mabetex, a Swiss construction firm that held many contracts with the Russian Government, accusing its Chief Executive Officer, Behgjet Pacolli, of bribing Yeltsin and his family. Swiss authorities issued an international arrest warrant for Pavel Borodin, the official who managed the Kremlin’s property empire. Stating that bribery was a common business practice in Russia, Pacolli confirmed in early December 1999 that he had guaranteed five credit cards for Yeltsin’s wife, Naina, and two daughters, Tatyana and Yelena. Yeltsin resigned a few weeks later, on 31 December 1999, appointing Vladimir Putin as his successor.

    By some estimates, his approval ratings when leaving office were as low as 2%. Polling also suggests that a majority of the Russian population were pleased by Yeltsin’s resignation.

    — Paraphrased from the Wikipedia page for Boris Yeltsin.

    Yeltsin had come to be seen as a reasonably amiable drunkard by many in the West. Few recognized the empowerment of the Oligarchs for the potential threat that it became. There was considerable angst over an every-man-for-himself attitude amongst the military and ex-military, in particular the potential for the black-market sale of nuclear weapons – a plot element that features strongly in a number of Hollywood movies of the time. But Yeltsin had only sown the seeds; they would flower under Putin.

    A former intelligence officer who was generally prepared to play the long game,

      Following Yeltsin’s resignation, Putin became acting president and, in less than four months, was elected to his first term as president. He was subsequently reelected in 2004. Due to constitutional limitations of two consecutive presidential terms, Putin served as prime minister again from 2008 to 2012 under Dmitry Medvedev. He returned to the presidency in 2012, following an election marked by allegations of fraud and protests, and was reelected in 2018. In April 2021, after a referendum, he signed into law constitutional amendments that included one allowing him to run for reelection twice more, potentially extending his presidency to 2036.

      — Wikipedia, Vladimir Putin.

    At first, Putin’s ascension seemed a good thing, bringing stability to an unstable situation. But rather than bringing the oligarchs into line, he encouraged them, indebted each of them to himself, then played one off against the others. At the same time, he blatantly rewrote the rules, as described above. He has now become obsessed with the notion of recreating the Soviet Union (by force since there is no other way); step one was to have been the annexation of Ukraine. Part of that obsession is that 2036 time limit – there must be some reason why he can’t simply rewrite the rules again. So he’s bet the farm on a Ukrainian Assimilation – and appears to be losing.

    But the seeds of the current whirlwind were sown way back then, I think.

    AIDS and the Death of Free Love

    Flower Power was all the idealism and hope of a generation in one package, and nothing was more symbolic of it than the free love movement. Made possible by the contraceptive pill, a hedonist expression of women’s liberation, it was starry-eyed idealism at its most extreme. It’s more than a little ironic that Flower Power withered and died before the sub-movement that it engendered, but the impact of the Vietnam War was too great a cross for it to bear. The irony stems from the fact that the horror of seeing the actual fighting and the atrocities of war made peace seem all the more desirable. The world might have been a very different place if the flower power movement had post-dated that conflict.

    But the legacy remained – sexual liberation – and its descendant movements remain with us today in the struggle for LGBT rights and recognition.

    The 1980s came close to killing what remained, however, as a new disease arose which seemed to target the promiscuous, and especially the gay community: AIDS, caused by HIV. It’s not my intention to delve too deeply into this story; what I am more concerned with is the sense of despair that it engendered. At first, it was thought to be a disease that only afflicted gay men, but slowly it seemed to spread to those addicted to intravenous narcotics, and then to the general public.

    There was something close to a public panic, fueled by suspicion and paranoia, which gave rise to massive levels of disinformation. I remember someone asking me if you could get it from giving someone a haircut, or shaking their hand. It didn’t quite reach the level where “breathing the same air” was suspicious and to be avoided, but any other form of contact was deemed “dangerous” by some.

    We’ve learned a lot, and a lot better, since those days, and AIDS is no longer a death sentence. In fact, there are indications that a full cure is not far away – the legacy of the bucket-loads of money that were eventually targeted at the disease. But, at the time, there was mortal fear in being anywhere near a potential victim, and any number of people who had done nothing wrong were made outcasts by the more fearful and intolerant elements of society.

    The imminence of Y2K

    Finally, we have Y2K, sometimes described as the Armageddon That Never Came (or other, equally-colorful terms). The “Millennium Bug” is now considered a non-event by the general public, in the same chicken-little vein of predictions of Armageddon by Planetary Alignment or the 2012 panic, or any number of other similar events.

    Unfortunately, there is a qualitative difference – the Y2K problem was very real, and the potential disasters, had nothing been done about them, may have been worst-case scenarios but were otherwise equally real. To me, it’s ironic that the ‘non-event’ was used to cast aspersions on the credibility of Climate Change – ironic, because the reason that Y2K wasn’t a disaster is that a lot of people put in a lot of very hard and sometimes tedious work making sure that it wasn’t, and that same hard, sometimes tedious work is exactly what is required to mitigate Climate Change.

    But for those who claim that it was a non-event, consider the following list of actual events and consequences, from the same source cited above (a short sketch of the coding defect behind most of them follows the list):

    • Before 2000:
      • Late 1998: Commonwealth Edison reported a computer upgrade intended to prevent the Y2K glitch caused them to send the village of Oswego, Illinois an erroneous electric bill for $7 million.
      • 1 January 1999: taxi meters in Singapore stopped working, while in Sweden, incorrect taxi fares were given.
      • Midnight, 1 January 1999: at three airports in Sweden, computers that police used to generate temporary passports stopped working.
      • February 8, 1999: while testing Y2K compliance in a computer system monitoring nuclear core rods at Peach Bottom Nuclear Generating Station, a technician accidentally changed the time on the operations computer instead of resetting the time on the external computer meant to simulate the date rollover. The operations computer had not yet been upgraded, and the date change caused all the computers at the station to crash. It took approximately seven hours to restore all normal functions, during which time workers had to use obsolete manual equipment to monitor plant operations.
      • November 1999: approximately 500 residents in Philadelphia received jury duty summonses for dates in 1900.
      • December 1999: in the United Kingdom, a software upgrade intended to make computers Y2K compliant prevented social services in Bedfordshire from finding if anyone in their care was over 100 years old, since computers failed to recognize the dates of birth being searched.
      • Late December 1999: Telecom Italia (now Gruppo TIM), Italy’s largest telecom company, sent a bill for January and February 1900. The company stated this was a one-time error and that it had recently ensured its systems would be compatible with the year rollover.
      • 28 December 1999: 10,000 card swipe machines issued by HSBC and manufactured by Racal stopped processing credit and debit card transactions. This was limited to machines in the United Kingdom, and was the result of the machines being designed to ensure transactions had been completed within four business days; from 28 to 31 December they interpreted the future dates to be in the year 1900. Stores with these machines relied on paper transactions until they started working again on 1 January.
      • 31 December, at 7:00 pm EST, Virginia, USA: as a direct result of a patch intended to prevent the Y2K glitch, computers at a ground control station in Fort Belvoir, Virginia crashed and ceased processing information from five spy satellites, including three KH-11 satellites. The military implemented a contingency plan within 3 hours by diverting their feeds and manually decoding the scrambled information, from which they were able to produce a limited dataset. All normal functionality was restored at 11:45 pm on 2 January 2000.
    • 1 January, 2000:
      • Australia: bus ticket validation machines in two states failed to operate.
      • Japan: machines in 13 train stations stopped dispensing tickets for a short time.
      • Japan: the Shika Nuclear Power Plant in Ishikawa reported that radiation monitoring equipment failed at a few seconds after midnight. Officials said there was no risk to the public, and no excess radiation was found at the plant.
      • Japan: at 12:02AM, the telecommunications carrier Osaka Media Port found date management mistakes in their network. A spokesman said they had resolved the issue by 02:43 and that it did not interfere with operations.
      • Japan: NTT Mobile Communications Network (NTT Docomo), Japan’s largest cellular operator, reported that some models of mobile telephones were deleting new messages received, rather than the older messages, as the memory filled up.
      • South Korea: at midnight, 902 Ondol heating systems and water heating failed at an apartment building near Seoul; the Ondol systems were down for 19 hours and would only work when manually controlled, while the water heating took 24 hours to restart.
      • South Korea: two hospitals in Gyeonggi Province reported malfunctions with equipment measuring bone marrow and patient intake forms, with one accidentally registering a newborn as having been born in 1900, four people in the city of Daegu received medical bills with dates in 1900, and a court in Suwon sent out notifications containing a trial date for 4 January 1900.
      • South Korea: a video store in Gwangju accidentally generated a late fee of approximately 8 million won (approximately $7,000 US dollars) because the store’s computer determined a tape rental to be 100 years overdue. South Korean authorities stated the computer was a model anticipated to be incompatible with the year rollover, and had not undergone the software upgrades necessary to make it compliant.
      • Hong Kong: police breathalyzers failed at midnight.
      • China: In Jiangsu, taxi meters failed at midnight.
      • Egypt: three dialysis machines briefly failed.
      • Greece: approximately 30,000 cash registers, amounting to around 10% of the country’s total, printed receipts with dates in 1900.
      • Denmark: the first baby born on 1 January was recorded as being 100 years old.
      • France: the national weather forecasting service, Meteo-France, said a Y2K bug caused a webpage map showing Saturday’s weather forecast to be dated “01/01/19100”. Additionally, the government reported that a Y2K glitch rendered one of their Syracuse satellite systems incapable of recognizing onboard malfunctions.
      • Germany: at the Deutsche Oper Berlin, the payroll system interpreted the new year to be 1900 and determined the ages of employees’ children by the last two digits of their years of birth, causing it to wrongly withhold government childcare subsidies in paychecks. To reinstate the subsidies, accountants had to reset the operating system’s year to 1999.
      • Germany: a bank accidentally transferred 12 million Deutsche Marks (equivalent to $6.2 million) to a customer and presented a statement with the date 30 December 1899. The bank quickly fixed the incorrect transfer.
      • Italy: courthouse computers in Venice and Naples showed an upcoming release date for some prisoners as 10 January 1900, while other inmates wrongly showed up as having 100 additional years on their sentences.
      • Norway: a day care center for kindergartners in Oslo offered a spot to a 105-year-old woman because the citizens’ registry only showed the last two digits of citizens’ years of birth.
      • Spain: a worker received a notice for an industrial tribunal in Murcia which listed the event date as 3 February 1900.
      • Sweden: the main hospital in Uppsala, a hospital in Lund, and two regional hospitals in Karlstad and Linkoping reported that machines used for reading electrocardiogram information failed to operate, although the hospitals stated it had no effect on patient health.
      • UK: In Sheffield, a Y2K bug that was not discovered and fixed until 24 May caused computers to miscalculate the ages of pregnant mothers, which led to 154 patients receiving incorrect risk assessments for having a child with Down syndrome. As a direct result two abortions were carried out, and four babies with Down syndrome were also born to mothers who had been told they were in the low-risk group.
      • Brazil: at the Port of Santos, computers which had been upgraded in July 1999 to be Y2K compliant could not read three-year customs registrations generated in their previous system once the year rolled over. Santos said this affected registrations from before June 1999 that companies had not updated, which Santos estimated was approximately 20,000, and that when the problem became apparent on 10 January they were able to fix individual registrations, “in a matter of minutes”. A computer at Viracopos International Airport in Sao Paulo state also experienced this glitch, which temporarily halted cargo unloading.
      • Jamaica: in the Kingston and St. Andrew Corporation, 8 computerized traffic lights at major intersections stopped working. Officials stated these lights were part of a set of 35 traffic lights known to be Y2K non-compliant, and that all 35 were already slated for replacement.
      • USA: the US Naval Observatory, which runs the master clock that keeps the country’s official time, gave the date on its website as 1 Jan 19100.
      • USA: the Bureau of Alcohol, Tobacco, Firearms and Explosives could not register new firearms dealers for 5 days because their computers failed to recognize dates on applications.
      • USA: 150 Delaware Lottery racino slot machines stopped working.
      • USA: In New York, a video store accidentally generated a $91,250 late fee because the store computer determined a tape rental was 100 years overdue.
      • USA: In Tennessee, the Y-12 National Security Complex stated that a Y2K glitch caused an unspecified malfunction in a system for determining the weight and composition of nuclear substances at a nuclear weapons plant, although the United States Department of Energy stated they were still able to keep track of all material. It was resolved within three hours, no one at the plant was injured, and the plant continued carrying out its normal functions.
      • USA: In Chicago, for one day the Chicago Federal Reserve Bank could not transfer $700,000 from tax revenue; the problem was fixed the following day. Additionally, another bank in Chicago could not handle electronic Medicare payments until January 6, during which time the bank had to rely on sending processed claims on diskettes.
      • USA: In New Mexico, the New Mexico Motor Vehicle Division was temporarily unable to issue new driver’s licenses.
      • USA: The campaign website for United States presidential candidate Al Gore gave the date as 3 January 19100 for a short time.
      • USA: Godiva Chocolatier reported that cash registers in its American outlets failed to operate. They first became aware of the problem and determined its source on 2 January, and immediately began distributing a patch. A spokesman reported that they had restored all functionality to most of the affected registers by the end of that day and had fixed the rest by noon on 3 January.
      • USA: The credit card companies MasterCard and Visa reported that, as a direct result of the Y2K glitch, for weeks after the year rollover a small percentage of customers were being charged multiple times for transactions.
      • USA: Microsoft reported that, after the year rolled over, Hotmail e-mails sent in October 1999 or earlier showed up as having been sent in 2099, although this did not affect the e-mail’s contents or the ability to send and receive e-mails. [To me, this sounds like an error in the Y2K patch.]
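
    Almost every incident in that list traces back to the same root cause: storing the year as two digits and doing arithmetic on the result. Here is a minimal sketch of the failure mode, and of the most common cheap fix (“windowing”) – illustrative Python only, since the affected systems were mostly COBOL, assembler, and embedded firmware, but the arithmetic error is identical:

      def full_year(yy):
          # Expand a two-digit year the way many legacy systems did -
          # the fatal assumption being that every year is 19xx.
          return 1900 + yy

      # The video-store incidents: a tape rented in "99" and returned in "00".
      rented, returned = full_year(99), full_year(0)
      print(returned - rented)    # -99: the rental appears to predate itself by a century

      # The "100-year-old newborn" incidents: a birth year of "00" read as 1900.
      print(2000 - full_year(0))  # 100: the baby is recorded as a centenarian

      # "Windowing", a common remediation, picked a pivot year instead of
      # rewriting every record to four digits:
      def windowed_year(yy, pivot=30):
          # Two-digit years below the pivot are 20xx; the rest are 19xx.
          return 2000 + yy if yy < pivot else 1900 + yy

      print(windowed_year(99), windowed_year(0))  # 1999 2000

    Windowing was cheap, but it only deferred the problem until the pivot year – one reason date-handling bugs kept surfacing well after 1 January 2000.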

    ….and there are about as many problems again that took effect after January 1. Some of the problems involved February 29 – under the Gregorian calendar, a year divisible by 4 is a leap year unless it is also divisible by 100, in which case it isn’t – unless it is divisible by 400 as well, in which case it is. So 1900 was not a leap year, but 2000 was, and software that implemented only the first two clauses of the rule got 2000 wrong. There were also a number of failures that took place on 31 Dec, 2000, or Jan 1, 2001, often leap-year related. And a couple of significant errors have even come to light in the years since – the destruction of NASA’s Deep Impact spacecraft has been blamed on a time tagging error, for example.
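
    For the record, here is the full Gregorian rule in code form – a short Python sketch; a system that implemented only the first two clauses would have treated 29 February 2000 as a non-existent date:

      def is_leap_year(year):
          if year % 400 == 0:
              return True    # 2000 WAS a leap year - the clause some systems omitted
          if year % 100 == 0:
              return False   # 1900 was not, and 2100 won't be
          return year % 4 == 0

      print([is_leap_year(y) for y in (1900, 1996, 2000, 2001)])
      # [False, True, True, False]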

    In most cases, these were minor errors, though my heart goes out to the UK mothers who had abortions because of a miscalculation of the risk of Down syndrome. But there are enough of them, and they are serious enough here and there – the Japanese nuclear power plant, for example – to show what could have happened.

    Y2K was NOT a non-event. But we survived it, and rolled into the new millennium.

Wow, I can’t believe how much space and time it took to get through all that! There’s absolutely no time to take this article further. So, next week: the 2000s and (hopefully) beyond!


Economics In RPGs 8: The Digital Age Ch 1


This entry is part 11 of 16 in the series Economics In RPGs

The first mainframe I used professionally was a networked pair of DEC PDP-11/70s, the same as the one depicted in this photograph. A successful minicomputer – yes, this is smaller than a mainframe! – the 11/70 dates from the mid-70s. Over 100,000 PDP-11 units were sold in the course of the decade. The example pictured includes two nine-track tape drives, two disk drives, a high speed line printer, a DECwriter dot-matrix keyboard printing terminal and a cathode ray tube terminal, all installed in a climate-controlled machine room – cables run underneath the suspended floor. Image by Wikipedia Commons user Kozan, who released it into the Public Domain on April 3, 2016. Image reference page: https://en.wikipedia.org/wiki/File:PDP-11-70.JPG

A word of advice: Each part of the series builds heavily on the content from the previous one. While you may be able to get relevant information without doing so, to get the most out of each, you should have read the preceding article. In this case, though, that “previous part” is actually the one before last, and a three-chapter set of quite lengthy posts. You might have to skim – just bear in mind that if anything is puzzling but not explained, it’s probably because it has already been explained earlier in the series.

Welcome & General Introduction

I’m still not clear on how this article will turn out. My thinking has gelled considerably since the situation encountered last week, but my magic eight-ball is still very cloudy and unreliable.

What’s clear is that this post is going to encompass just one or two of the bullet points planned for the end-of-series, and that will actually comprise at least three major sub-sections.

How to fit three into two has been the structural problem I’ve been wrestling with for the last couple of weeks. Hopefully everything makes sense. And what it shows is that there is virtually no chance of getting this entire article in one hit – whether it will be in two or three chapters (or possibly more?) remains to be seen.

A disclaimer: I am not an economist and I’m not trying to turn anyone else into an economist. An awful lot of this content will be simplified, possibly even oversimplified. Bear that in mind as you read.

A second disclaimer: I’m Australian, with a working understanding, however imperfect and incomplete, of how the US Economy works, and an even more marginal understanding of how the UK economy works (especially in the post-Brexit era). Most of my readers are from the US, with Brits in second place; Canadians and Australians fight over third place on pretty even terms. Those are the contexts in which what I write will be interpreted, and that means that the imperfection can become an issue.

Any commentary that I make comes from my personal perspective. That’s important to remember. Now, sometimes an outside perspective helps see something that’s not obvious to those who are enmeshed in a system, and sometimes it can mean that you aren’t as clued-in as you should be. So I’ll apologize in advance for any errors or offense.

I’ll repeat these disclaimers at the top of each part in this series.

Related articles

This series joins the many other articles on world-building that have been offered here through the years. Part one contained an extremely abbreviated list of these. There are far too many to list here individually; instead check out

the Campaign Creation page of the Blogdex,

especially the sections on

  • Divine Power, Religion, & Theology
  • Magic, Sorcery, & The Arcane
  • Money & Wealth
  • Cities & Architecture
  • Politics
  • Societies & Nations,
  • Organizations, and
  • Races.
Where We’re At – repeated from Part 3

Along the way, a number of important principles have been established.

  1. Society drives economics – which is perfectly obvious when you think about it, because social patterns and structures define who can earn wealth, the nature of that wealth, and what they can spend it on – and those, by definition, are the fundamentals of an economy.
  2. Economics pressure Societies to evolve – economic activity encourages some social behaviors and inhibits others, producing the trends that cause societies to evolve. Again, perfectly obvious in hindsight, but not at all obvious at first glance – largely because the changes in society obscure and alter the driving forces and consequences of (1).
  3. Existing economic and social trends develop in the context of new developments – this point is a little more subtle and obscure. Another way of looking at it is that the existing social patterns define the initial impact that new developments can have on society, and the results tend to be definitive of the new era.
  4. New developments drive new patterns in both economic and social behavior but it takes time for the dominoes to fall – Just because some consequences get a head start, and are more readily assimilated into the society in general, that does not make them the most profound influences; those may take time to develop, but can be so transformative that they define a new social / political / economic / historic era.
  5. Each society and its economic infrastructure contains the foundations of the next significant era – this is an obvious consequence of the previous point. But spelling it out like this defines two or perhaps three phases of development, all contained within the envelope of a given social era:
    • There’s the initial phase, in which some arbitrary dividing line demarks the transition from one social era to another. Economic development and social change are driven exclusively by existing trends.
    • There’s the secondary phase, in which new conditions deriving from the driving social forces that define the era begin to infiltrate and manifest within the scope permitted by the results of the initial phase.
    • Each of the trends in the secondary phase can have an immediate impact or a delayed impact. The first become a part of the unique set of conditions that define the current era, while the second become the seeds of the next social era. There is always a continuity, and you can never really analyze a particular period in history without understanding the foundations that were laid in the preceding era.

The general principles contained within these bullet points are important enough that I’m going to be repeating them in the ‘opening salvos’ of the remaining articles in the series.

Introducing The Digital Age

It’s actually been quite difficult to conceptually unify the events of the last 50 years. It’s as though the historic period was filled with movements that had beginnings, actualities, and either endings or transitions, but these distinct narrative threads have overlapped continually with others, occasionally producing unexpected synergies and compounded problems.

It was only when I realized that this, in itself, was a connecting thematic thread characteristic of the period that things started to fall into place, beginning with the question of “why?”. Suddenly, there seemed to be an inevitability to the pattern I had observed, responses being shaped by the intervention of technology and by the social environment engendered by the technology of the period.

What had been just one element of the last fifty years, computers and computer applications, emerged as the driving force behind many of the changes. By this time, I had identified many other themes to the age, and discarded them when they didn’t persist for the whole period. With the new touchstone, several coalesced into unexpected forms, and I was left with five, plus that touchstone. As formal layout of the article proceeded, a seventh theme became apparent.

These themes are not consistent through the period; they often shift to a different focus or manifestation as the period progresses.

    Theme 1: Outcome-targeted Change

    There was a time when a unified, internally consistent perspective of what the world could be formed a central ideology that framed the policies offered by political parties. As the digital age progresses, this central ideology is eroded in one of two ways.

    The first is an obsession with gaining power for its own sake; this mandates humiliation and repudiation of the opposition to perpetually demonstrate their perceived unfitness for office. In the US, the Republican Party has fallen down this rabbit hole.

    The alternative is the proposition of specific policies that are not reflective of the general ideology, but are exceptions aimed at a specific policy outcome in the affected field. “We still believe in doing X, but in this specific case, we need to do Y in order to achieve Z. Once we have done so, we will have to reexamine our priorities and policies, and it’s likely that some compromise between X and Y will be necessary, achieving most of the benefits of X while maintaining Z.”

    I’ve heard this statement, or variations on it, at least a dozen times over the last twenty or so years. It’s a common feature in US and Australian Politics, and I suspect that it will also be the case through the Parliaments of Europe, though I don’t know enough about their domestic politics to confirm this suspicion.

      Theme 1a: Eco-warriors Through The Modern Age

      A secondary manifestation of this theme is the changing context in which those who would term themselves “Eco-warriors” operate.

      In the early 70s, the focus was on industrial pollution in cities, groundwater contamination by manufacturing, and logging. The EPA cleaned up the first two, and compromises were reached by loggers to enable them to get back to work – under scrutiny and oversight.

      In the 80s, the focus shifted to the preservation of endangered species – more accurately, to the protection of such species from industrial threats. Scandals rocked some of the biggest groups of organized Eco-warriors over the next 20 years – ranging from the destruction of the Rainbow Warrior to accusations of profiteering leveled at the WWF.

      The 90s saw governments expanding their environmental protections, but only in specific areas: asbestos, and the Ozone layer. As the impacts of past regulatory changes began to accumulate, the skies began to clear and water became safer to drink; for the most part, the environmental portfolio consisted of attempts to cut protections in the name of business efficiency, and was mostly forgotten by the general public.

      Millennium Blues

      The new Millennium brought with it new challenges – the deforestation in the Amazon region and the first warnings about Global Warming. By the end of the first decade, in-principle commitments had been reached with a number of Nations around the world to limit atmospheric carbon – eventually. Initial targets were so low that it was possible to finagle the books to meet them, something the Australian Government of the time was rightly criticized for both internally and externally, for example. With some sort of action being taken, a lot of the heat went out of the Eco-warrior movement, which began to be seen as more fringe.

      In the decade leading up to the Pandemic, action was more driven by consumers and corporate profit, especially when it became clear that renewable sources were becoming cheaper than traditional power-generation mechanisms. But the decade of neglect and desultory inaction began to catch up with reality as signs began to emerge that global warming was proceeding at a faster rate than expected.

      Last year, some places had thirty extra inches of snow, or so I’ve heard. The record-breaking heat wave through the South and Midwest is unprecedented. The hurricanes striking the Atlantic Coast have increased in frequency if not in severity – but the latter caveat might be an artifact of the broad brush of the Saffir-Simpson scale; if hurricanes were measured as “Category 4.7” instead of rounding down to Category 4, an increase in intensity might also become apparent. Then there’s the Canadian bushfires, and a year or two back, the Californian wildfires. And, of course, there’s one hitting California as I write this.

      Closer to home, a season of unprecedented bushfires here in Australia was followed by two once-in-a-century floods – thirty days apart! A few communities were hit by both. And we’re expecting a record year for bushfires, in both severity and threat; our authorities last week admitted that they had been able to do only 20% of their planned remediation. As of last night, even though it’s still officially winter here, there were 60 fires burning out of control!

      The more such events stack up, the less likely it is that ‘coincidence’ or ‘it’s just a bad run’ or other dismissals are an adequate explanation. I’ve been a skeptic in the past, waiting for clear and unequivocal evidence, while advocating for measured climate change action ‘just in case’. To be clear, I considered it undeniable that the climates of the world were changing, heating up; only the cause remained unproven to me – it was still a crisis that needed urgent attention.

      And that brought the fringes-of-society Eco-warriors out of the woodwork, with ever more extreme acts of vandalism to get media attention, all in the name of accelerating progress not just toward net zero, but toward an actual reduction in carbon levels. These acts would be unthinkable to anyone from the mainstream, and are strongly condemned – and they annoy people sufficiently that climate action is becoming less popular within the community.

      For those who don’t think that a relatively small change in this space can have huge impacts, think of the atmospheric carbon, and the additional energy that it captures and pumps into the environment, as catalysts, accelerating and intensifying the energy transfers and atmospheric consequences that manifest as weather.

      And yet, just today, I saw a post on Quora in which a denialist tried to rally support for his delusions.

    Theme 2: Limited resources

    This era begins with the Oil Crisis of 1973. While that subject will get appropriate scrutiny in a later section, this section is more concerned with the long-term impacts. US domestic oil production peaked in 1970; it had become cheaper to buy oil from the Arabian Gulf than to find and exploit it locally, and as a result, the US had been growing more dependent on foreign oil for about a decade.

    If there were any insecurities about the reliability of supply, the US Domestic Reserve – a stockpile of reserves aimed at meeting critical needs in the event of an emergency – quelled them. And so the lemmings marched toward the edge of the cliff, unaware of the doom they faced. No, that’s probably being a little unfair and a little over-the-top melodramatic.

    I have to point out that the Crisis was experienced in many different countries around the world.

      The initial nations targeted were Canada, Japan, the Netherlands, the United Kingdom and the United States, though the embargo also later extended to Portugal, Rhodesia and South Africa.

      — Wikipedia, 1973 Oil Crisis

    The reality was that the embargo itself had minimal effect on the supply of oil; but the resulting panic and perceived threat drove the price of crude oil through the roof, and caused many nations to start hoarding oil against the possibility of more serious restrictions of supply, which (supply-and-demand) drove that price even higher. This meant that even nations which were not explicitly targeted, like Australia, were impacted.

    Impacts

    The direct impact was profound.

      The crisis reduced the demand for large cars [globally]. Japanese imports, primarily the Toyota Corona, the Toyota Corolla, the Datsun B210, the Datsun 510, the Honda Civic, the Mitsubishi Galant (a captive import from Chrysler sold as the Dodge Colt), the Subaru DL, and later the Honda Accord, all had four cylinder engines that were more fuel efficient than the typical American V8 and six cylinder engines. Japanese imports became mass-market leaders with unibody construction and front-wheel drive, which became de facto standards.

      From Europe, the Volkswagen Beetle, the Volkswagen Fastback, the Renault 8, the Renault LeCar, and the Fiat Brava were successful. Detroit responded with the Ford Pinto, the Ford Maverick, the Chevrolet Vega, the Chevrolet Nova, the Plymouth Valiant and the Plymouth Volare. American Motors sold its homegrown Gremlin, Hornet and Pacer models.

      — Same Source

    But I think there was an indirect impact that went even deeper than changing global consumer habits. There had been warnings that oil was a limited resource that would eventually run out ever since the 1800s (refer Wikipedia, Reserves-to-production ratio). As the linked article states, there had been many false claims that the world’s oil was soon to run out, caused by simplistic interpretations and assumptions that, time and again, have proven false. Inevitably, these follow or accompany any oil supply crisis, and there had never been one on the scale of the 1973 Oil Crisis.

    I think that people looked around them at the material objects in their life and realized that everything they owned was constructed from raw materials, and that there was a finite quantity of those raw materials on earth. The notion that, sooner or later, something critical would run out, became embedded in a lot of official thinking thereafter.

    Limits to Services

    And then the concept was broadened to service delivery – there are only so many doctors, and they can only see so many patients; if that isn’t enough to cope with the normal population of an area, you need to provide more doctors. Or social workers, or psychologists, or plumbers, or whatever.

    Previously, there was the presumption that shortages would lead to higher prices, which would make the employment sector more appealing to new recruits, which would produce more of whatever shortage was being discussed. The “free market” would self-correct, in other words. But that presumption rests on its own assumptions, and those become invalid when government policies change. For example, if regulations increase the qualifications needed to become a doctor, for what seems like perfectly valid reasons (even if those are a knee-jerk reaction to some failure of medical fidelity on the part of a single bad apple), there will be fewer doctors who graduate as a result.

    Baby Boomers

    These assumptions also become invalid when population growth rates change. In 1964, the post-war baby boom came to an end.

      The term “baby boom” is often used to refer specifically to the post–World War II (1946–1964) baby boom in the United States and Europe [and Australia, New Zealand, and several other nations].

      Although the answer of when it happened can vary, most people agree that the baby boom occurred between 1946 and 1964. This generation of “baby boomers” was the result of a strong postwar economy, in which Americans felt confident they would be able to support a larger number of children. Boomers also influenced the economy as a core marketing demographic for products tied to their age group, from toys to records.

      — Wikipedia, Baby Boom

    Worker Shortages

    In the 1970s, people started realizing what the end of the baby boom would mean for the future. In essence, an aging population would increase the demand for certain services, while the declining birth rate meant that there would be fewer people to provide those services, either practically (fewer doctors) or economically (fewer incomes).

    This realization led directly to the demand for increased productivity by workers that has featured so heavily in pay-scale negotiations over the last 25 years or so, at least here in Australia (and, I believe, in all other nations that also experienced the baby boom). The intensity of those demands has changed over the years, but there have to be limits to how productive (in economic terms) a single individual can be.

    I came in on the tail end of the Baby Boom, being born in 1963. I’m currently 60, and almost half a year beyond that mark. The vast bulk of Baby Boomers are now in their 60s and 70s. Ten years from now, that will be their 70s and 80s. Governments the world over are taking this seriously, and it has already led to economic changes.

    More than half the Baby Boomers have already retired, and are now principally living off whatever they had set aside along the way. That means that investment capital is being withdrawn from superannuation accounts and spent, instead of staying put and providing sustainable cash reserves. Within the next 8 years, this process will conclude, and Superannuation companies will have a lot less cash to spread around. That will have an effect on stock markets – fewer customers, lower demand – and supply-and-demand ordains that share prices will erode at least somewhat as a result. Increasingly, superannuation providers will become economically shakier over the next ten years or so, more prone to problems they could previously have resisted. Some will undoubtedly fail.

    Worker Shortages have been making headlines in the post-pandemic economy. My take on this situation is ‘get used to it’ – this is only the beginning.

    Most seriously impacted will be any business that is manpower-intensive, which is the reason why there is so much interest in automation at the moment. Technology has reduced the need for employees in many fields over the last 50 years, and AI promises to continue that trend. But seasonal workers from outside the country are a very real necessity now and into the future. This isn’t just happening in Australia; it’s a near-global trend (for proof, just look at what happened to the Florida economy after the Governor’s recent anti-immigrant legislation).

    Right now, the primary impact is on unskilled labor (which is actually not all that unskilled, these days). But in the future, it could apply to doctors and nurses and electricians and other qualified professionals.

    Theme 3: Myopic Expectations

    Every now and then, a policy seems to be announced, by whoever is in office at the time, that relies on the magical powers of wishful thinking to have the desired impact. There are two logical fallacies involved:

    • “We have to do something. THIS is something.” – but will it have the desired effect? Or is it busy-work, or a band-aid, or an effort to at least look like you are doing something?
    • “If we do this, and the this happens, and the that happens, then everything will be great for everybody. We’ve consulted the best experts and they assure us that everyone in the industry who matters wants this.”

    Not all offerings are this blatant, or this unrealistic. And some are even worse, I’m sorry to say. The more common problem, as referenced in the section title, is that each department head in a government starts to develop myopia when it comes to matters outside their particular jurisdiction. That becomes a problem when a policy decision directly affects matters outside your portfolio, or is impacted by them.

    The worst example that comes to mind – and there have been many in recent years – is the Robodebt scandal here in Australia.

    • Take an unproven, ideologically-inspired, assumption: People on welfare are always trying to cheat the system.
    • Advance a test that has inherent flaws: compare the income reported to the welfare authority to the total income listed on the individual’s tax return, averaged to match the fortnightly reporting periods (see the sketch after this list for why that averaging is flawed).
    • Devise a scheme to identify those mismatches and recover the money that these individuals have “stolen” from the taxpayer.
    • Ignore multiple pieces of advice that say that the scheme is not legal, and legislation will be needed.
    • Get no outside audits of how much can be recovered, or how rife under-reporting actually is. In particular, ignore the fact that those on welfare are required to estimate their income if they don’t know the exact amount on the date the form is lodged each fortnight.
    • Institute your scheme in such a way that any mismatch automatically generates a penalty notice and applies that penalty.
    • Permit those accused to challenge the penalty notice by producing proof of income from up to ten years prior, even though the law only requires records to be kept for 7 years – a requirement that a significant percentage of the population ignores.
    • Get totally shocked when the whole thing blows up in your face.
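
    To see why the averaging test was inherently flawed, consider a hypothetical seasonal worker – a short Python sketch, with all figures invented for illustration:

      # A seasonal worker earns $26,000, all of it in the first half of the
      # year, and honestly reports zero income while unemployed.
      fortnightly_reports = [2000] * 13 + [0] * 13  # 26 fortnights, all accurate
      annual_total = sum(fortnightly_reports)       # the tax return shows 26,000

      # The scheme's test: smear the annual total evenly across every fortnight...
      average = annual_total / len(fortnightly_reports)   # 1,000 per fortnight

      # ...and treat any fortnight that doesn't match the average as under-reporting.
      flagged = [r for r in fortnightly_reports if r != average]
      print(len(flagged))  # 26: every fortnight "mismatches", and the 13
                           # zero-income fortnights generate a wholly fictitious debt

    Any income that isn’t perfectly uniform across the year fails such a test, which is exactly how the scheme came to raise debts against people who had reported their income accurately.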

    I hope that no reader will ever be able to match that comedy of errors and willful myopia. But I know better – I could point at the ‘social war’ underway between Disney and the Florida Government, for example, where the score is currently 4-1 in favor of Disney – the whole contest being a petty, childish own goal by DeSantis.

    Theme 4: Everything’s The Same, Until It’s Not

    This is something that I noticed with regard to advances in technology, and in particular digital technology. No matter how profound the social change that such technology (including applications) will eventually bring, there is a general perception that nothing will change as a result.

    For a time, this expectation is proven true – it generally takes time for any significant social change to permeate an entire culture, even now. But in the longer term, if you look at what the technology enables you to do that you could not do before, or what those who develop such technologies could create based on such a platform, the eventual impact can be forecast.

    The same is true of everything else that ends up being a critical element of economic change – it generally takes time to manifest.

    Theme 5: Permanent Opposition

    I could just as easily have entitled this section “Blind Polarization”. Ideological positions are becoming, or have become, entrenched and unyielding, with levels of obsessive belief that have hitherto been applied to cults. It has obviously happened in the US; it is happening here. The practical manifestation is an inability to compromise – “whatever it is, I’m against it”.

    I could go into the whole history of the situation, starting with Newt Gingrich, but I don’t know that there’s any value to it. Suffice it to say that this trait goes from almost non-existent in 1973 to almost-universal in 2023, and it’s almost completely a trait of the right wing of politics.

    This is not purely an American problem; variations have been exported to the UK, to Australia, and to many other nations, in part because it appears successful as a stratagem in the USA. The problem is that I can’t see it being overturned until it causes some irreparable damage to one of the political parties espousing it, sufficient to persuade the remainder to forswear the philosophy. Or something less likely happens, of course! A charismatic leader promoting cooperation and mutual benefits, perhaps – but probably not.

    Theme 6: Digital Revolutions Change Everything – and Nothing

    I’ve touched on this already, in discussing Theme 4. Every revolution in the digital world starts by creating something that does exactly what the existing technology does (hence things stay the same) but with added new potentials. As those potentials are slowly translated into actual capabilities and people learn to exploit them, an inevitable cumulative effect begins that ultimately reshapes the technological and/or social landscape.

    Contemplate for a moment the shape of those digital revolutions. When our period opens, computing is restricted to mainframes that are horribly expensive and require extensive customer programming to be useful. These are at their best in handling large datasets – a census, a bank, an insurance firm, and so on. It is sometimes said that the electronic fuel injection systems of a ‘modern’ car (from 25 years ago) packed more computing power than the Apollo spacecraft did.

    For the record, that’s both true and misleading. NASA insisted on reliability, first and foremost, and that policy remains in effect to this day. One of the prices of that reliability is simplification and minimization, especially in terms of interface. So Apollo had the equivalent of programmable calculators, the Space Shuttle had the equivalent of 286s and Apple-IIs, and so on. Modern digital processors are now in the Pentium era – unless they are being privately built, in which case a different balance point between modernity and simplicity may have been chosen.

    The home computer revolutionized the office space. Off-the-shelf hardware. Off-the-shelf software.

    The mobile phone started off as not much more than a substitute for the nearest phone booth, but quickly became more portable and more convenient. Things eventually reached the point where mobile phones contained the potential to completely eliminate the land-line phone connection, so ubiquitous had they become.

    The Graphical User Interface, or GUI, made computers easy for laypeople to use. Very few people can’t get the hang of one simply by moving a mouse around and observing what the pointer does on-screen.

    The internet and the world-wide-web enabled distributed processing, using remote hardware to execute complex processes. To a certain extent, computing was no longer about the hardware that you were using and its limitations; it was about the hardware that you could access, and how easily you could do things with it.

    The smartphone initially didn’t do much more than add a GUI to the telephone, providing exactly the same service already being delivered by mobile phones – with added convenience. These days, they are less about making phone calls and more about sharing data – replacing identification and bank cards and even music libraries.

    Social Media killed the email almost as surely as the email killed traditional snail-mail. These days, email is primarily a link distribution method, not a primary communications channel. But, more importantly, social media meant that people were cross-connecting their beliefs and ideas, while excluding those who did not share in those philosophies, creating the echo chambers that continue to spread both information and misinformation to this day. And yet it started off as nothing more than a chat room, a multi-person email of limited scope.

    Streaming started off as sharing a YouTube video, then became a catch-up service, and is now a direct existential threat to cable TV.

    AI is now threatening, or promising, to provide artificial simulations of creativity itself, while replacing menial human-to-human connectivity – phone an automated switchboard, and instead of a limited number of options for you to choose from (none of which sometimes seem to fit), you will be able to have a conversation with the AI, which will then determine how to route your call, and may well be able to initiate solutions to routine matters without human intervention.

    There could be others. For example, tracing popular music from the LP to the CD to the MP3 to Napster to iTunes to streaming services. That’s an entire industry that’s gone from cultural co-dominance to near-irrelevance over the term of the age – but the general principle is clear without it.

    Theme 7: Wastelands, Again and Again and Again

    Originally, this theme referred only to economic wastelands, but while writing it I became aware that environmental regulation, criminal law, and a whole bunch of other things also followed the same pattern.

    There is a certain extent to which this is an outgrowth of Theme 5. One political party cleans up the economic mess caused by some sort of financial disaster or potential disaster, the other lot eventually regain power and remove or disrupt the regulatory frameworks and safety mechanisms to enable those financially active in the space to make more money, and eventually the point is reached where the cycle can repeat itself.

    To be honest, though, this is something of an oversimplification. Each financial crisis is different, with different causes – when you dig into the finer details. It isn’t those finer details that I want to focus on, however; it’s the fact that these keep happening, usually 7-9 years apart.

    There are those who dismiss the pattern as simply the tail end of an in-built boom-and-bust cycle, but while some examples fit that criterion – the dot-com bubble for example – others do not.

A structure of Contexts

These seven themes don’t exist in isolation. They operate in a context of events, frequently in combination with one or more of the other themes that is also manifesting through those contextual events, and there is always a historical element to the resulting narrative, too.

What remains in dissecting this era is largely an identification of those contextual threads – what was happening, and sometimes, why.

I was not expecting the clarity of structure that I eventually discovered within the era. Cued by the rolling repetition of those economic disruptions every decade or so, and starting of course with one of the most important, I began by subdividing the era roughly by decade. I was then able to identify one or more significant events or sequences of events at the start, middle, and end of each subdivision. I was helped in assembling the resulting structure by the inherent flexibility in defining end points; what was more important was that the resulting structure be definitive of the subdivision of time concerned.

Most of the time, I could actually be guided by perceptions of the time subdivision in question. In trying to make the event bundles definitive of those perceptions, I found that the critical events fell into place fairly naturally and inevitably. The chronology may not always be exact – there can be some overlap, thematic elements at the beginning of a new period overlapping with thematic elements of the ending of the old. That’s why these periods all form part of a single broader era, instead of being relatively short eras in their own right.

My intent was to discuss each as succinctly as possible, so that the resulting structure could be more easily apprehended by the readers, but some of them are so significant or profound that it seemed inevitable that there would be too much text in between at times. So I’ve made the last-minute decision to present the structure as a table of contents of sorts.

Anatomy Of The Digital Era
  • First Period 70s-80s
    • Beginning: Oil Crisis
    • Middle: Mainframe Politics
    • End: Fall Of The Wall
  • Second Period 80s-90s
    • Beginning: Hope and Hostility
    • Trickle-down Reaganomics
    • Middle: Eighties Angst
    • Middle: Hope In The Face Of Despair
    • End: Digital Development
  • Third Period 90s–00s
    • Beginning: Invasion Of The PCs
    • Middle: Skirting Eco-disaster
    • Middle: Rise Of The Smaller Device
    • End: Hope Fails
  • Fourth Period 00s-2010s
    • Beginning: Internet Awakening
    • Middle: Megacorps Proliferate
    • End: Personal Tech
    • 9/11: Shockwaves & Awe
    • End: The GFC
  • Fifth Period 2010s-2020
    • Beginning: Social Media
    • Middle: The New Entrepreneurs
    • Climate Change: A Decade Of Lip Service
    • End: Stirrings Of Alarm
  • Pandemic Interruptus
    • Medical Economic Impact
    • Public Impact
    • Economic Disruption
    • Economic Stimulus
    • Reopening in a Blended Economy
    • The Fall Of Trump
    • The Age Of Biden?
    • The Corner Is Turned
    • General Reopening
  • Post-Pandemic Economics
    • Supply Chains: Rebuilding Trade
    • Workforce Decentralization
    • Restricted Oil
    • Ukraine Invasion
    • Paying The Piper
    • Playing Chicken With The World
    • Climate In Meltdown
    • An Imminent Pivot

This comprises my road map for the rest of this article, and any subsequent parts if I need to break it into two or more (as I expect to be necessary).

There’s a lot of ground to cover, so let’s get busy….

The Digital Age, First Period 70s-80s

The 1970s were full of whistling-in-the-dark pretense that nothing had changed since the 1960s, even while they shaped and re-shaped consumer spending habits, with attendant knock-on effects that would so dominate the 80s. There was a manic edge to the popular culture, a harder edge than the celebrations of the 60s – like comparing Woodstock to the Rolling Stones concert disaster at Altamont Speedway, or comparing a poppy folk song (If You’re Going To San Francisco) to anything from the disco era. There was a loss of innocence, much of it these days laid at the feet of the Vietnam War. And yet, the Korean conflict, by and large, had not had the same effect; was this purely because of the newfound global reach of mass media, the televising of horrific images? In retrospect, I think there was another factor at play, the psychological consequence of the events that I have chosen to demark the beginning of the new era.

    Beginning: Oil Crisis

    When analyzed from the perspective of actual gasoline supply, the 1973 oil crisis was pretty much a non-event. In October of that year,

      members of the Organization of Arab Petroleum Exporting Countries (OAPEC)*, led by King Faisal of Saudi Arabia, proclaimed an oil embargo targeted at nations that had supported Israel during the Yom Kippur War. The initial nations targeted were Canada, Japan, the Netherlands, the United Kingdom and the United States, though the embargo also later extended to Portugal, Rhodesia and South Africa.

      — Wikipedia, 1973 Oil Crisis

      * Not to be confused with OPEC, who were blamed for the policy by a number of media outlets.

    Cheap prices and declining domestic production in the US had led to an increase in the dependence on foreign oil, though most of it was actually purchased from Canada and Venezuela, and not from the Middle East as was popularly perceived. The US did purchase 638,500 barrels of oil a day from the Middle East, but that was a relatively small fraction of US consumption at the time (17 million barrels a day) – under 4%. Nor were any of those embargoed barrels of oil actually withheld from the US. Actual enforcement of the embargo was less significant than the public perception that a significant amount of oil was being withheld.

    What did happen was that the price of oil skyrocketed, from under $10 a barrel to over $40. This was, of course, passed on to consumers in the form of higher gasoline and energy prices. Because the cost was so much greater, many gas stations could not afford to fill their reservoirs, and at the same time, public panic caused demand to spike. Inevitably, some suppliers ran out, and with each such event, the public panic grew. In an effort to get the escalating situation under control, many levels of government imposed quotas and rationing.

    Impact

    Of course, as shown in the previous part of this series, oil prices are directly inflationary in nature. In fact, oil-driven cost increases compound with each other, producing inflation to an extent far greater than might be expected.
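
    As a toy illustration of that compounding (invented numbers, sketched in Python): suppose an oil shock adds 5% to costs at each of four stages of a supply chain – raw materials, manufacture, freight, and retail – because every stage pays for its own energy:

      stage_increase = 0.05   # each stage's costs rise 5% (invented figure)
      stages = 4              # raw materials, manufacture, freight, retail

      # Each stage passes its higher costs on to the next, so they multiply:
      final_increase = (1 + stage_increase) ** stages - 1
      print(f"{final_increase:.1%}")  # 21.6% - more than four times the 5%
                                      # that a naive, single-stage estimate expects

    The real economy is messier than this, but the multiplicative structure is why energy prices punch so far above their weight in inflation figures.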

    The side effects of the oil crisis caused the price of everything else to go up, and created wage demands to compensate – but things are never that simple; simply putting wages up by that extent would only have increased inflation until the increase was completely eaten away, creating the need for yet another wage rise.

    One alternative is for prices to fall as inflation is curbed – but that never seems to happen, with good reason – that would reduce the numeric profit levels of a business, which would undermine confidence and share value. That can be enough to kill even an extremely successful business, and it’s simply too hard to manage such situations. Instead, measured wage increases are offered once inflation falls to manageable levels that restore financial purchasing power while retaining economic stability – at least until the next crisis.
    The increase in oil prices created an impact splash that hit many more countries than those on the embargo list. Although not directly targeted, the oil crisis had just as big an impact here in Australia, for example, as in the US.

    Consequences

    There were a number of direct consequences. Large vehicles with high gas consumption were immediately seen as vulnerable to the petrol price, and potentially unusable should supply be restricted. The number of small, efficient cars sold immediately assumed an upwards trajectory, eventually leading to the demise of many domestic manufacturers. There was an immediate governmental re-prioritization of domestic oil exploration and production, and ‘dependence on foreign oil’ became a catch-phrase employed by all sides of politics everywhere, and by the media that reported on politics. The perceived reality was that this was an area of politics with direct relevance to the ordinary worker.

    This, in turn, embedded, in the global zeitgeist, the concept that there was only so much oil out there. Experts had been warning of this reality for decades, but the message had mostly fallen on deaf ears; suddenly, it had a new cachet, because the oil-price shock could be highlighted as a taste of what was to come.

    Once the concept of limited resources takes hold, it produces a fundamental shift in perceptions. Everything is measured in terms of consumption of limited resources – productive hours, money, raw materials, you name it. In some cases, this simply locked into place shifts in public attitude that were already taking place; in others, it began the process of popularizing new concepts like sustainability and closed-cycle recycling. Everything from environmental awareness to power distribution would be affected – if not directly, then indirectly.

    Middle: Mainframe Politics

    Digital products were restricted, at the time, to governments and large-scale corporations, where there were gains to be made from the more efficient use of computer-based billing and analysis. Thomas Watson, then the head of IBM, is alleged to have said in 1943, “I think there is a world market for maybe 5 computers”.

    Changes in technology which increased capability and vastly cut costs had already invalidated that prediction, but the perception lingered. Because computers at the time were difficult to program, stories of ‘silly’ computer errors, like bills for one cent, filled the popular zeitgeist, all oriented around the literal-minded inflexibility of computers.

    But clever programming did exist, and permitted analysis of ever-more complex combinations of hypothetical situations, and these were used increasingly to guide the formulation of public policies.

    Sometimes, these policies were effective, but more often they failed because people are rationalizing, not rational, creatures; it might be in the collective best interest to behave in a certain way, but people will gravitate towards actions that better their personal best interests even at the expense of the collective welfare.

    Conflicts between human nature (especially a cynical interpretation of same) and the logic of collective welfare take hold as undercurrents in a lot of fiction of the era. On some occasions, the logic is shown to be faulty, because it does not take human nature into account; on others, it is human nature that is at fault, as flaws within humanity are given an opportunity to prosper.

    You see this even today, as policies that have some justifiable validity are manipulated beyond any acceptable standard. Billionaires who pay only a couple of hundred dollars of tax a year, for example, create the impression that the system is geared toward their benefit, when the reality is that they are better able to take advantage of situations to minimize their taxes.

    End: Fall Of The Wall

    Yes, I know that this didn’t actually happen until 1989, almost the end of the period that follows. What happened, historically, was that as the 1970s gave way to the 1980s, new trends began to supplant those carried over from the preceding era; Glasnost and the fall of the Wall marked the end of the Cold War that was so strongly a feature of the 1950s and 60s, for example.

    There was a slow retreat from the themes exemplified by the 1970s over the ensuing decade, and the fall of the Berlin Wall is the singular event that punctuates the end of that period. The 1980s are a blend of the emerging philosophies and the legacy attitudes of the 1970s.

    I made this same point in a different way when discussing the impact of digital technology; it takes time for social movements to transform potential into actuality, just as it takes time for digital potentials to be manifested into concrete technological and social changes.

    The 1970s were largely spent pretending that the social and political foundations of the 1960s were still completely applicable, despite gathering evidence following the oil crisis that this was not true; it was the ending of the cold war that made that fiction impossible to sustain. And the crescendo of that global thawing was the fall of the Berlin Wall.

The Digital Age, Second Period 80s-90s

“Greed,” according to a popular mantra of the 80s, “is good.” Well, no, it’s actually not – but the 80s were when people stopped fooling themselves that corporate behavior had any remaining vestiges of the altruism that was present at the end of the time of the robber barons. Being altruistic was an ice-cold marketing decision, nothing more or less. There are three movies that I consider definitive of the economic culture of the time.

The first is, of course, Wall Street, from whence the quotation springs. Gordon Gekko, played by Michael Douglas, offers the line. This character, and those who have bought into the social environment that accompanies him, seduce the main character (played by Charlie Sheen) into sharing their worldview. The story, in its totality, is a fall-and-redemption narrative, as Sheen’s character uses the weapons of Gekko’s philosophy against his mentor, falling on his metaphoric sword to bring down the seductive but evil Gekko. The conflict between human values and corporate profits remains an exemplar of the era’s business philosophy.

Movie #2 didn’t seem to be as big at the box office; it’s something that I stumbled across on TV one afternoon, and became an instant favorite. Other People’s Money tells the story of “Larry The Liquidator”, a more comic-book version of Gordon Gekko who specializes in buying companies, stripping them of their assets, and then liquidating whatever remains once there is insufficient capacity to meet obligations – for example, pensions. When the targets are moribund and unprofitable, this can be seen as burying the corporate dead; but that’s never enough to satisfy, so the Liquidator turns his attention to companies that are productive and profitable, because they can be worth still more to him dead than alive.

Third on my list of movies is Working Class Man, starring Michael Keaton, released through most of the world as “Gung Ho,” about the takeover of an American Car Plant by a Japanese corporation. Although promoted as a comedy film at the time, it was repackaged here as a drama after the success of Batman, enjoying a successful second life. At the time of that second life, the buzzword throughout the white-collar sector was “Japanese Management Practices”, and the movie is all about the cultural clash between such management practices and the more casual, laid-back philosophy of the western worker.

I consider these three to be required viewing for any GM running a game set in the 1980s, or any setting that is an outgrowth of those political and economic philosophies – Cyberpunk, for example.

That’s the 80s in a (rather large) nutshell – corporate greed with no pretense. What is good for the stockholder is good for the company, and vice-versa; customers and workforce are nothing more than necessary evils to be exploited to the fullest extent of law.

When this attitude was last in vogue, rebellions arose and brought forth the labor unions. This time around, a few cases of union corruption enabled a concurrent war against workers’ rights that eventually culminated in such protections being at something close to an all-time low. “At Will” employment statutes in many US states mean that an employee can be fired because the boss doesn’t like the color of the worker’s socks. Sold to the public as preserving a worker’s right to move from one job to another where pay and conditions were better, the actual effect on the ground has been significantly different.

In this period, those laws remain a future development. The contemporary reality was an emphasis, especially in the mass media, on the inconvenience inflicted on the public by workers striking for what they perceived as good reason (and which sometimes was and sometimes wasn’t). No-one seems to have wondered whether or not the media in question (or their owners) had a vested interest in the issue.

These attitudes took time to manifest and deepen; the beginning of this period more strongly resembles the 1970s than it does this heartlessly grim picture, the middle of the 80s is somewhere in between as these social attitudes are taking root, and by the end of the period, they are entrenched (and viewed by some as the unchangeable new foundations of economic reality).

    Beginning: Hope and Hostility

    The first recorded aircraft hijack took place on February 21, 1931, in Arequipa, Peru (Wikipedia, Aircraft Hijacking). A sprinkling of other events took place through the years that followed, slowly growing in frequency. Between 1958 and 1967, there were 40 hijackings world-wide, while the FAA claims more than 100 attempted hijackings in the 1960s (same source). The 1967 termination date is significant, because in the five-year period starting in 1968, there were 326 attempted hijackings – more than an eight-fold increase in raw numbers, and considerably more as an annual rate, since the earlier figure covers a decade.

    The 1980s saw a profound shift in the nature of these incidents – organized terrorists destroying aircraft to draw attention, or threatening to do the same in order to obtain specific political ends. The majority of incidents prior to this development were attempts to reroute aircraft to a destination desired by the hijackers; passenger screening and greater international cooperation had, by 1980, reduced the incidence of such attempts to significantly less than the 1968 level.

    A number of the new wave of attacks seemed linked to Middle Eastern groups, a reaction to greater Western involvement in political events there. Another legacy of the 1973 oil-price shock, and of a second event in 1979 that was actually more serious, was to highlight that this part of the world could not be taken for granted. Western ‘adventures’ into the politics of the region can be traced back to the Crusades – there was nothing new about them – but there seemed to be a rising intensity on all sides, and the existence and politics of Israel was a polarizing factor that only made these interventions and adventures worse in the eyes of many directly affected.

    Many have characterized these interventions as being all about oil; this assignment of motive is misdirected at best, and wholly incorrect at worst. Some were legitimately well-intentioned, others were for political advantage, and some were in furtherance of half-baked plans for the imposition of stability.

    It wasn’t just the Western powers, either – witness the disastrous Russian invasion of Afghanistan in December 1979.

    Regardless of the motivations, these all provoked reactions from those residing in the region. Because it was perceived that they were relatively weak in comparison, methods needed to be employed that had greater indirect and supplementary impact than direct impact – and thus the middle east became inextricably associated with Terrorism in the zeitgeist of the time.

      Afghanistan: An illustrative microcosm of mistakes

      In April 1978, the communist People’s Democratic Party of Afghanistan (PDPA) seized power in a bloody coup d’etat against then-President Mohammed Daoud Khan, in what is called the Saur Revolution. The PDPA declared the establishment of the Democratic Republic of Afghanistan, with its first leader named as People’s Democratic Party General Secretary Nur Muhammad Taraki. This would trigger a series of events that would dramatically turn Afghanistan from a poor and secluded (albeit peaceful) country to a hotbed of international terrorism.

      The PDPA initiated various social, symbolic, and land distribution reforms that provoked strong opposition, while also brutally oppressing political dissidents. This caused unrest and quickly expanded into a state of civil war by 1979, waged by guerrilla mujaheddin (and smaller Maoist guerrillas) against regime forces countrywide.

      It quickly turned into a proxy war as the Pakistani government provided these rebels with covert training centers, the United States supported them through Pakistan’s Inter-Services Intelligence (ISI), and the Soviet Union sent thousands of military advisers to support the PDPA regime.

      Meanwhile, there was increasingly hostile friction between the competing factions of the PDPA – the dominant Khalq and the more moderate Parcham.

      — Wikipedia, Afghanistan – Contemporary History – Democratic Republic and Soviet war

      An assassination led to the Soviet invasion, which led to the rise of a more feudal social structure of warlords, which led to a coalition government that collapsed into dysfunction, which led to another civil war, and so on. The occasional intervention on one side or another was disastrous. Eventually, the Taliban emerged as a movement and forcibly installed itself as ruler of the country. Some of the warlord factions survived within the new regime as terrorist groups, notably Al Qaeda. Subversive acts of retaliation against those who had used the nation to fight their proxy wars followed, with the tacit approval of the regime.

    Afghanistan is a pointed example, but not the only one. Iran and Iraq tell similar stories in broad strokes, for example. As a result, incidents of Middle Eastern terrorism are sprinkled throughout the 1980s.

    Trickle-down Reaganomics

    From January 20, 1981, to January 20, 1989, Ronald Reagan was President of the United States. The economic policies of the sub-era are definitively those of his administration.

      These policies are characterized as supply-side economics, trickle-down economics, or “voodoo economics” by opponents, while Reagan and his advocates preferred to call [them] free-market economics.

      The pillars of Reagan’s economic policy included increasing defense spending, balancing the federal budget and slowing the growth of government spending, reducing the federal income tax and capital gains tax, reducing government regulation, and tightening the money supply in order to reduce inflation.

      The results of Reaganomics are still debated. Supporters point to the end of stagflation, stronger GDP growth, and an entrepreneurial revolution in the decades that followed. Critics point to the widening income gap, what they described as an atmosphere of greed, reduced economic mobility, and the national debt tripling in eight years which ultimately reversed the post-World War II trend of a shrinking national debt as percentage of GDP.

      — Wikipedia, Reaganomics

    Ultimately, the espoused principle was reducing the tax burden on the wealthy on the presumption that this would encourage more investment, which would then permit business expansion, which would enable the employment of more people, which would spread the largesse downward through the economy.

    In terms of stimulating a somewhat moribund economy, this worked – but it reckoned without the recipient businesses, which diverted much of the money received to shareholders as increased profits.

    Inflationary consequences meant that the working class continued to get squeezed as a result, though some benefits did work their way downwards as intended, slowing the rate of deterioration in real wages.

    Image by Wikimedia Commons user Soibangla, based on data released by the US Bureau of Labor Statistics. The chart is considered ineligible for copyright, and therefore in the public domain, because it consists entirely of information that is common property and contains no original authorship.

    While there are those who simply claim that Reaganomics didn’t work, I consider that to be an oversimplification – it worked just fine for those in the upper economic brackets. It was, nevertheless, a flawed policy simply because the economic benefits were not shared with the rest of the society of the time.

    Thatcherism

    If the US had Ronald Reagan, the UK had Margaret Thatcher. Reagan and Thatcher had been elected within a year of each other; while some called that a remarkable coincidence, to me it always seemed indicative of a general trend in politics and society of the era.

    There were a number of similarities between the two. Both were strongly anti-government and anti-Keynesian, advocates of tax reductions and of private-sector economic prosperity.

      In terms of the general thrust of their policies, both leaders tried to shift the center of the political spectrum sharply to the Right. Reagan set about undoing a half-century of legislation which had built up the public sector while opening up America to expansion led by the private sector. Mrs. Thatcher busied herself with doing the same in Britain. Both leaders believed that government itself was partly the cause of their mutual economic problems, including high inflation and slow economic growth, the answer being less government. In contrast, all previous leaders since the 1930s had assumed that if things went wrong, the remedy would be government intervention or more government.

      — Reaganomics and Thatcherism: Origins, Similarities and Differences, by Christopher Deeds

    The Christian Science Monitor puts it even more forcefully:

      Both have slowed down welfarism and curbed the power of the trade unions.

      Both have stressed enterprise and the marketplace.

      Both have turned away from nationalization and have sold off as much nationalized industry as the private sector would accept.

      Both have reduced government regulation of private enterprise, stimulated competition and, by so doing, stimulated productivity.

      But the United States, after six years in office of Ronald Reagan, and Great Britain, after seven years of Margaret Thatcher, are in startlingly different condition. The United States has an annual budget deficit running at nearly $200 billion and an adverse trade balance of nearly as much. Mrs. Thatcher’s Britain has a balanced budget at home and a balance in its foreign trade account.

      — Thatcher and Reagan: the difference, by Joseph C. Harsch, The Christian Science Monitor

    In other words, Thatcher was cruel but achieved success with her policies, while Reagan’s success is more open to debate. The difference in outcome, argues Harsch, is due to the severity of application of theory; Reagan raised his defense spending massively and cut income taxes radically, while Thatcher raised her defense budget in moderation and cut income taxes modestly.

    Both paid a hefty price – Reagan in deficits, Thatcher in unemployment rates and failed businesses.

    And yet, if you were to assign their policies based on their personalities, Thatcher was more abrasive, more confrontational, and less sympathetic, while Reagan was naturally a conciliator. Those traits would lead one to suspect Reagan of the softer, more moderate approach, while assigning Thatcher the more hawkish, uncompromising set of policies. I have no explanation, beyond observing that, to their respective nations, one was a grandfatherly figure while the other was a stern mother.

    Nevertheless, the distinction between personality and policy, and resulting implications, needs to be understood by any GM with a campaign set in the era.

    Middle: Eighties Angst

    The discussion is starting to get away from me, in terms of length, so I’m going to try and rein it in a bit.

    Eighties Angst wasn’t just expressed through TV shows and the like; people of the time often had good reason to fear for the future and worry about where it was all headed.

    There were a number of factors contributing to this depressed attitude.

    • This was the era when manufacturing jobs began to migrate away from the developed economies, though it might not be noticed for another 20 years in some cases. While some of that migration was direct – close a factory here and open another one somewhere else to manufacture exactly the same products – more of it was indirect, as was the case with the US auto industry being supplanted by Japanese small-car imports.
    • The wage-inflation-cost spirals already discussed were an ongoing problem that no-one could see any hope of escaping. The resulting economic reality was readily visible – I remember interest rates of 21, 22, and 23% being thought ‘normal’. Even now, I’m not entirely sure how we got off that international treadmill.
    • Reactions to the harsh economic practices of the time led to industrial disruptions and often contributed to economic disruptions, more particularly in the UK and other commonwealth countries than in the US, where a war against unionism inhibited them.
    • Business failures, and the abject failure of the pension funds and similar schemes that were supposed to insulate the workforce from such developments, created a sense of despair in the industrial workforce. Farm foreclosures did the same in rural communities. This often manifested in anti-social behavior on the part of those who felt abandoned and without hope for the future, more notably in the UK – and thus arose expressions of social dystopia like Punk Rock.

    A lot of people were desperate, surrounded by bleak times and unforgiving government policies, sometimes depressed and sometimes angry about it all, especially when it seemed to result from economic forces over which they had no control and less understanding.

    Middle: Hope In The Face Of Despair

    At the same time, there were rays of hope, a belief that collective efforts could yield social dividends that were greater than the sum of their parts. This started with the success of the Band Aid single, “Do They Know It’s Christmas?” – a friend of mine was in England at the time, and he remembered the song being on a 30-minute rotation on virtually all the popular radio stations.

    That led to a US counterpart (USA For Africa’s “We Are The World”), then the global Live Aid concerts, and then more diverse examples like Farm Aid and Artists Against Apartheid.

    At least here in Australia, telethons had been around for a long time, usually in response to some particular pressing need or disaster, but they had started to lose some of their luster in the 70s. Prior to these examples of social welfare activism, the high-water mark had been the benefit telethon in support of the victims of Cyclone Tracy in 1974 – I have vague memories (possibly inaccurate) that all three commercial networks and the national broadcaster worked in unison on that appeal. After that, telethons seemed to fade in significance and appeal – until the global Live Aid telecast came along.

    The success of these fundraising pursuits in terms of actually achieving their ambitions is disputable. There are those who claim that much of the revenue raised never made it ‘on the ground’, frittered away in administration fees. Regardless, though, their success in rekindling a sense of optimism and hope for the future marks them as a significant social change within the era.

    End: Digital Development

    As already intimated, computer technology never stands still. There is a perpetual engineering effort aimed at making computers stronger, smaller, faster, and cheaper.

    Integrated Circuits had been commercialized in 1964, and the first microprocessor had been developed in 1971. From the mid-70s on, the first microcomputers were developed that were cheap enough to be commercially viable.

      In what was later to be called the Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of features that would later become staples of personal computers: e-mail, hypertext, word processing, video conferencing, and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time.

      — Wikipedia, Personal Computer

    So the pieces of the puzzle had been there for several years. 1974 saw the introduction of what is generally considered the first true personal computer, the Altair 8800, based on the 8-bit Intel 8080 microprocessor chip. The Apple-1 followed in 1976. The first successfully mass-marketed personal computer, the Commodore PET, was revealed in 1977, but it was back-ordered and not available until later in the year. In June 1977, the Apple II was first shipped, followed later in the year by the Tandy TRS-80.

    Even so, they were largely considered objects for play. They had infiltrated the home, but software for personal productivity pointed the way to the future. The stage was set for the computer to enter the workplace in a serious way.

And with that, I’m right out of time. 30-odd years of recent history and economic change remain. To be continued!


Inherent, Relative, and Personal Modifiers


To succeed, all one needs is the Vision to imagine,
the Gumption to try, the Capacity to learn,
and the Determination to persist.

Image by Achim Thiemermann from Pixabay

For the first time since I started it, when I went to draft the penultimate(?) parts of the Economics in RPGs series today, I found myself unsure of how best to structure the post.

While I have total confidence that, given enough time, I would have found a satisfactory sequence to bring out the key points, I was extremely uncertain that I would have enough time to both do so and get the article written in time.

To avoid the problem, I pulled out a standby article concept that’s been sitting around for a while, awaiting just this sort of circumstance.

Lending weight to the decision was the fact that this is not a small article, either; the decision needed to be made early enough that there would be time to write it before the normal publishing deadline.

I think that the decision has been made in sufficiently timely fashion – but waffling on in this introduction is eating into that time, so let’s get on with it!

Modifiers – A Ubiquitous Concept

Almost every RPG incorporates, either officially or unofficially, the concept of skill modifiers into its game mechanics. There are good reasons for this, and I’ll touch on several of these before the end of the article.

Superficially, these are a simple concept, simply applied. “Under the circumstances, you are at +x to succeed” – or maybe it’s “−x”.

But the more you dig into it, the more hidden wrinkles come to light. It’s not nearly as straightforward as it first appears.

Most GMs cut through this complexity and confusion with chutzpah, gut instinct, and the Gamemaster’s Authority – they declare a bonus (or penalty) that simply “feels right” to them.

This is an approach that is beset with problems. Consistency of decisions is one – and a poor showing in this area can affect a player’s confidence in the GM. Errors of judgment are another; we’re all human and make mistakes, but proceeding on instinct incorporates no safety net against such errors. Perceived bias for or against a character (or worse, for or against a player, or worse still, against the players as a group and in general) is a third.

These are serious issues, ones that demand serious attention before they take root. There are solutions to these problems that can enrich the playing experience for all concerned.

The starting point for any such attention has to be a more detailed examination of the basic concept, so that’s where we’ll begin.

    Conceptual Origins: Old-School Combat

    I started out as a player, and then a GM, running an AD&D campaign. The fundamental representation of magic weapons and armor in that game system is as a modifier, either to To-Hit or Armor Class, respectively.

    It’s a short step from that to adding additional modifiers to represent unusual environmental conditions and other circumstances – underwater combat, unstable footing, surprise, or distraction, to offer a few quick examples.

    So elementary are these that many of them are incorporated into the official rules of the game. They, in turn, betray the origins of the D&D game system in wargames, which are full of such modifiers and complexities.

    Once their incorporation into game mechanics is accepted, it’s only a matter of time before the question arises of equivalents in the application of Skills and Proficiencies of a non-combat variety. The representation of characters occupying space in a “simulated reality” makes the existence of Circumstantial Skill Modifiers something of an inevitability.

    Even later incarnations of the rules contain these, though sometimes they cloak them in different attire – there’s the Difficulty Level of a check in 3.x, for example. But when you dig into the conceptual framework of the rules, you find they are simply different ways of assessing the same basic questions.

Skill Modifiers, or Difficulty Modifiers, or whatever a given game system calls them, are a ubiquitous concept, fundamental to the representation of characters functioning in a game world or game environment.

How Big An Adjustment? – The Eternal Dilemma

The problems with Skill Modifiers aren’t related to the fundamental concept, which is on a sound footing; they are related to identifying the various factors that should be taken into consideration and quantifying their effects – translating the situation in-game into numbers that the game mechanics can then take into account.

Some GMs avoid the whole question, making the assumption that for every modifier one way not quantified in the rules, there is one in opposition if you look closely enough, and hence anything that is not explicitly Rules-As-Written can be assumed to be already taken into account.

This makes the game simpler, and that can have its own virtues, especially when gaming with younger people and those of less experience. Personally, though, I dislike the approach; it sucks too much flavor out of the in-game situation (instead of projecting the flavor of the moment into the game mechanics), and places too much faith – and too heavy a burden – on the shoulders of the game designers.

It’s a starting point, and suitable when that’s all that participants can cope with, but a broader approach can take such games to an entirely new level of immersion and verisimilitude.

And hence, we arrive at the ‘gut instinct’ approach, and its attendant problems.

Three Types Of Modifiers

Once you start studying the details, you come to realize that “Skill Modifiers” is actually an umbrella term that encompasses three similar but distinct game variables.

For the purposes of this article, I have named these “Inherent, Relative, and Personal”.

One fallacy that some GMs fall prey to is limiting their Skill Modifiers to just one of these three sets of variables, and adopting an approach to the problem that fits that perception.

Expanding our understanding of the subject requires understanding all three.

Inherent Modifiers

“Inherent” Modifiers – sometimes known as “absolute modifiers” – are fixed in value. You can simply read the value off a table. Mostly, these are environmental in nature, and assume that the nature of the skill check and the expertise of the character are irrelevancies.

It’s quite common for these to be the sum total of Skill (and Ability) modifiers that a GM applies, at least when they are in the intermediate stage of experience.

They have the virtue of being fairly simple, and of applying universally to any situation. “-2 to all Dexterity-based skills and checks due to the cold of the environment” is an example. All a GM then has to do is discriminate between the different types of skill check with a simple question – does the “-2” apply to this check?

Adding to the appeal is the fact that such modifiers will generally also apply to combat rolls. That means that they are more likely to be predefined in any reasonably comprehensive game mechanics, putting less pressure on the GM.

Relative Modifiers

The second class of Modifiers relate to the task itself, whatever it might be. For the purposes of this article, I have labeled them “Relative Modifiers”, for reasons that will become obvious, and despite the fact that the third type of Modifier can also make justifiable claim to the title.

There are two subtypes to be considered. In general, they are mutually exclusive, with the alternative sub-type considered to be encompassed by the chosen sub-type.

    3.x / D&D

    Some game systems – D&D 3.x for example – confuse the whole Skill Modifier subject by applying these Modifiers to the determination of a target (DC), entirely separately from the other types.

    Mechanically, any increase to a DC is exactly the same as a negative modifier to the skill being used to attempt the task. The major difference lies in the arithmetical operation to be performed – this edition of D&D tries to avoid subtractions at all costs, preferring to make everything an addition, even if it means applying the variable to the ‘other side’ of the basic equation.

    Hero Games

    The default approach of Hero Games is not dissimilar, either. They prefer modifiers that apply to an attack roll, for example, and separate modifiers that add to the target number needed for success. Their theory appears to be that this means that the player is doing half the work and the GM, the other half. In practice, the GM has to involve himself on both sides of the question, so it doesn’t actually make things easier. Our games using this game system revolve around the structure, 11 + OCV − Roll = Combat Value Hit.

    This is simply a reformulation of the equations/process defined by the mechanics, but it means that the GM can announce any environmental modifiers that apply, and the player can roll his dice, perform the calculation, and simply inform the GM of what defense he has overcome. The GM need only glance at the character being attacked to determine the outcome – which leaves him or her free to engage in other aspects of the game situation.

    It also means that the GM doesn’t need to reveal critical information about the target. From memory, GURPS works in a similar way.
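
    As an illustration of that division of labor, here is a minimal sketch in Python – the function names and sample numbers are my own, not official Hero Games terminology:

    ```python
    import random

    def player_side(ocv: int, modifiers: int = 0) -> int:
        """Roll 3d6 and announce the highest DCV hit: 11 + OCV + mods - roll.
        Low rolls are good -- they hit better defenses."""
        roll = sum(random.randint(1, 6) for _ in range(3))
        return 11 + ocv + modifiers - roll

    def gm_side(announced_dcv: int, target_dcv: int) -> bool:
        """The GM only compares the announced value to the (hidden) target DCV."""
        return announced_dcv >= target_dcv

    announced = player_side(ocv=8, modifiers=-2)   # -2: a GM-announced penalty
    print("Hit!" if gm_side(announced, target_dcv=6) else "Miss!")
    ```

    The player does the arithmetic and announces a single number; the GM never has to reveal the defender’s DCV.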

Generalizing the conceptual underpinnings of the question permits it to be rephrased, “How hard is this task?” Answering that question is where the two approaches make themselves apparent.

    Version 1: Difficulty For An Average Character Or Reference Standard

    One approach is to ask how difficult the task would be for an ‘average character’ to perform, or some predesignated reference standard of ability.

    – “How hard is it for a blacksmith to repair this wagon wheel?”

    – “How hard would it be for a typical motor mechanic to diagnose the failed water temperature sensor in the car?”

    – “How hard would it be for a typical falconer to train a Roc?”

    …and so on. That tells you what the DC should be, or what the modifier for attempting this particular task should be. The relative competence of the character making the attempt then defines their chance of success or failure at the task.

    Version 2: Difficulty Relative To A Standard Task

    The alternative is to define a minimal skill level as ‘competence to achieve a fundamental task X% of the time’. This approach has the advantage of viewing competencies as a general umbrella. “Fishing”, for example, would include the ability to craft a lure, or to repair a net.

    Some GMs use a 50% success standard for a minimal skill, others take the approach that a fundamental task should succeed 95% of the time (in other words, on any roll but a natural ‘1’) because this task is fundamental to the skill. Still others pick somewhere in between.

    A lot depends, in my opinion, on the basic level of competence that a minimal skill represents. If it’s enough for a character to operate professionally, the higher percentage is appropriate; if it’s barely enough for the character to function as an apprentice, the 50% (or even a 25%) definition might be more appropriate.

    Critical to that question is whether or not the rules are being written for PCs, and their assumed levels of competence, or if they are to encompass the entire game-world population. I tend to think the latter, and so define a lower % frequency of success.

Personal Modifiers

Personal Modifiers, the third type of Modifiers, attempt to quantify the question of difficulty for the specific character attempting the task.

This is even more difficult than it sounds, unless a systematic approach is employed, and is full of fuzzy interpretations of vaguely-defined parameters and language.

    Quantifying Characteristics Of Characterization

    The systematic approach that I recommend is one part metagaming and three parts judgment, breaking the individual down into four components.

      A Character’s Forte

      The metagame element is this: is the task reflective of whatever is supposed to be the character’s forte? Every major character (which includes PCs, by definition) should have their area of expertise in which no-one who does not share that expertise will be superior, even if they have the same (or better) stats and skill levels.

      Some game systems define characters in terms of archetypes or character classes (although sometimes racial background and heritage will be more important – no amount of education on the subject can equal the experience of actually being an elf, or a Dwarf, for example).

      In terms of character archetypes, no matter how educated a character might be in theology, they should be unable to match a character who actually is a priest or cleric. The educated non-theologian may have equal measure in terms of theory, but will lack the hands-on practical experience of the theologian, and it is entirely likely that the theologian will have access to abstruse texts and resources that no layman can match.

      The same skill level, even with the same stats, doesn’t mean the same thing, once these contexts are taken into account.

      Games that don’t employ such blatant and broad archetypes permit an even more nuanced approach. One of the PCs in my superhero campaign is a former police detective from Los Angeles; he knows the LA area better than anyone who has simply read books about the place ever can, and no matter how good a non-detective might be at deduction, when it comes to criminal investigation, the character should have a significant edge.

      Such game systems routinely utilize character background concepts like this to justify and support the acquisition of skills during character generation, making the character naturally reflective of the concept to some extent, but this extends that synchronization into all the areas that aren’t represented by a specific skill or application of a skill.

      Additionally, each character should have one or more areas in which they are psychologically predisposed to succeed; some things just come more naturally to them, and one hour of study may match the learning that takes an ordinary character a week or more.

      A Character’s Inadequacies

      Equally, most people have areas in which their comprehension is inadequate. It’s as though they have to work twice as hard just to almost-fail. Sometimes, these can be incredibly nuanced. For example, I took to algebra and differential equations like a duck to water in school, but I struggled to learn my multiplication tables. In fact, to be honest, I never did – instead, I devised short-cuts and workarounds that enabled me to get around my inadequacy.

      (An example: 6×7 – I know that 6×6 is 36, so I can simply add 6 to that. I also know that 7×7 is 49, so I can simply take 7 off that. And I know that 5×7 is 35, so I can simply add 7 to that. Experience has taught me that 7×7-7 is faster – for me – because it gets the 10s place right).

      (Another example: Add up the digits of a number. If the resulting total is evenly divisible by three, so is the original number. If there’s a remainder, dividing the original number by three will leave the same remainder. This enables me to simplify the division and get to an answer more quickly. EG: 57 → 5+7 = 12; 12 / 3 = 4, no remainder, so 57 is divisible by 3. So I can quickly add three to it, divide the result (60) by three (to get 20), and then take one-third of three back off to get 19. This really pays off with bigger numbers, like 8695442 – the total of the digits is 38, so there will be a remainder of 2 from 8695442 / 3. That lets me simplify the calculation enough to do it in my head – 2898480, remainder 2).

      (One more: if the last N digits of a number are evenly divisible by 2^N, so will the whole number. So if I need to divide by 4, that’s 2^2, so I only need to test the last 2 digits).
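
      Those last two tricks are mechanical enough to express as code. A quick sketch, purely to show the workarounds in action (the function names are mine):

      ```python
      def mod3_by_digit_sum(n: int) -> int:
          """Remainder of n divided by 3, found by repeatedly summing digits."""
          while n > 9:
              n = sum(int(d) for d in str(n))
          return n % 3

      def divisible_by_power_of_two(n: int, exp: int) -> bool:
          """n is divisible by 2**exp iff its last `exp` digits are."""
          tail = int(str(n)[-exp:])      # e.g. the last 2 digits when testing for 4
          return tail % (2 ** exp) == 0

      print(mod3_by_digit_sum(57))               # 0 -> 57 divides evenly by 3
      print(mod3_by_digit_sum(8695442))          # 2 -> remainder 2, as in the text
      print(divisible_by_power_of_two(1316, 2))  # True: 16 divides by 4, so 1316 does
      ```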

      Okay, I have learned some of the times tables through the years, not in a systematic way, but piecemeal. The point is that basic arithmetic presents problems for me that mean it can take me four or five times as long to get an answer. The fact that I know of these properties of numbers, and can use them in this way, arguably gives me a higher arithmetic skill than someone who has successfully memorized their times tables, with a deeper understanding of the underlying concepts – but that skill still leaves me worse off than someone with a better memory for numbers and less understanding, because they have learned the answer by rote.

      A Character’s Psychology

      That leads into the whole question of a character’s psychology and how that can help or hinder their abilities in a specific sphere or in tackling a specific problem. A character who is good at understanding people could have – should have – an advantage when it comes to analyzing possible criminal motives, for example. They probably still need someone with Detective experience to lay out the parameters and circumstances and gather the evidence that this character then applies.

      Let’s take the current crop of PCs in the Adventurer’s Club as an example. One is a doctor, and an experienced Medical Examiner – he’s good at gathering and analyzing forensic evidence. Another is an engineer – he’s good at analyzing physical evidence. And the third is a Priest – good at understanding people and finding credible motives for committing a criminal act. The three of them put together make one adequate detective – but it will take them three or more times as long as it would to simply present their findings to an actual policeman.

      A Character’s Experience & History

      Finally, is there anything in the character’s backstory and in-game history that should assist them in performing the task? The mage in my superhero campaign grew up in a Nordic fishing village – he knows how to handle a small boat, how to fish, and so on. Because of the location, it’s cold most of the year, so he never naturally learned to swim; even now, his locomotion through the water is slow, thrashing, and barely adequate.

      Even if the character had not bought any skill at Fishing, I would take that background into account when determining the parameters of an attempt to do something similar.

    Each of these approaches has its own set of merits, but I don’t consider any one of them to be wholly adequate to the task of assigning an appropriate Skill Modifier.

A Compound Approach

What’s needed is a method that combines all three – environment, task, and aptitudes. Breaking the problem up in this fashion also means that the scale of any errors in those ‘gut instinct’ values shrinks in relevance, putting greater distance between the game and those potential pitfalls; a formalized approach of considering each factor in succession further mitigates such failures – and also provides an opportunity for error detection and correction.

How much longer does it take to make three informed snap decisions, and integrate them into a single answer, than to make a single uninformed gut-instinct call?

My assessment, from experience, is three-to-five times as long. Let’s translate that according to the time spent arriving at a gut-instinct estimated circumstantial Skill Modifier:

  • 1 second → 3-5 seconds
  • 2 seconds → 6-10 seconds
  • 3 seconds → 9-15 seconds
  • 5 seconds → 15-25 seconds
  • 10 seconds → 30-50 seconds
  • 15 seconds → 45-75 seconds

These time frames show that if you are used to making quick decisions, all it takes is a moment or two of added reflection to implement a more robust solution to the problem.

Mitigating the extra time required, the decisions to be made are more focused and specific, and hence more easily made. You can normally cover the entire process with a smooth patter mentioning some of the main factors that you are taking into account – and this has the added advantage of telling the players that you are taking these factors into consideration, and not just plucking numbers out of the air.

But for all the added precision, ultimately, the process is still about mentally employing a holistic approach rather than a long, tedious, ultra-precise approach.

The other benefit of outlining what is to be considered a reasonable time-frame for the entire process to take is that the process itself can be tailored to meet the requirement.

What follows is the step-by-step process that I routinely employ. Most of the time, each step is in the 1-3 second time frame, and the totality thus lands somewhere near the 15-second total specified above. In a complex or game-critical situation, I can take a couple of extra seconds on the relevant steps and still remain well within the time-frames given.

    1. Standardized Ideal Environment → Inherent Modifier Foundation

    I start by assuming that the task will be attempted under ideal conditions, with no time or performance pressures; the character can theoretically take as long as necessary, and may even be able to backtrack if an error takes them down the wrong path. I then abstract that set of hypothetical conditions into a base modifier.

    2. Relative Modifiers

    Determine how difficult the task itself would be to accomplish in that ideal environment, still ignoring any time or performance pressures (those are factored in later, in step 5). Sometimes I will use approach Version 1, sometimes Version 2, according to which one I consider more appropriate to the actual task and the game mechanics. Abstract the result into a Relative Modifier contribution.

    I often find it useful to have a list of “difficulty standards”, which may or may not have fixed modifier ranges allocated to them. For my superhero campaign, for example, the categories are:

    • Trivial task
    • Routine task
    • Easy task
    • Moderately Difficult task
    • Difficult task
    • Very Difficult task
    • Extremely Difficult task
    • Almost Impossible task
    • Absurdly Difficult Task
    • Virtually Impossible Task

    That system is % based, with skill values ranging from -100 to 150 (a revision is in progress to change that to a range of 0-250). A score of 0 is enough for the character to earn a living using that skill, it’s a minimum ‘professional’ level.

    The modifiers that go with the categories are +100, +50, +25, +0, -10, -30, -50, -75, -100, and -120, respectively – so Moderately Difficult tasks use the skill levels as the character has them, anything easier gets a bonus that ranges from substantial to huge, and everything more difficult attracts a penalty ranging from small, through significant, to absolutely huge.

    You can appreciate the system best by contemplating the resulting chances of success for a character with, say, 30% skill: 100% chance at trivial tasks, 80% for routine tasks, 55% for easy tasks, 30% for moderately difficult tasks, 20% for difficult tasks, 1% (the minimum) for very difficult and harder tasks.

    Compare that with the capabilities of a character with 0% skill, or one with 60% skill. I wanted scores in the range from 0-100 to be significant in terms of what scope they gave the characters.
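
    Mechanically, the whole table collapses to a clamped sum. Here is a minimal sketch of the system as just described – the function name is mine; the categories and modifiers are those listed above:

    ```python
    # Difficulty categories and modifiers exactly as listed above.
    DIFFICULTY = {
        "Trivial": 100, "Routine": 50, "Easy": 25,
        "Moderately Difficult": 0, "Difficult": -10, "Very Difficult": -30,
        "Extremely Difficult": -50, "Almost Impossible": -75,
        "Absurdly Difficult": -100, "Virtually Impossible": -120,
    }

    def chance_of_success(skill: int, category: str) -> int:
        """Skill plus category modifier, clamped to the 1-100% range."""
        return max(1, min(100, skill + DIFFICULTY[category]))

    # Reproduce the comparison suggested above, for 0%, 30%, and 60% skill:
    for skill in (0, 30, 60):
        chances = [f"{cat}: {chance_of_success(skill, cat)}%" for cat in DIFFICULTY]
        print(f"Skill {skill}% ->", ", ".join(chances))
    ```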

    For a d20 / DC -oriented system, I would use DCs of 3, 5, 8, 10, 15, 20, 30, 40, 50, and either 70 or 100, respectively.

    3. Subtotal

    I compound the two variables into a single total, if that’s appropriate; in game mechanics where it’s not, the Relative modifier gets recorded on scrap paper (or mentally, if I feel up to that) and the subtotal is the Inherent Modifier. There are also times where I consider it most appropriate to apply some of the relative modifiers to a DC / target, and some to the chance of achieving that target – but that’s a rather more complicated choice that I only apply because I’m used to employing both alternatives.

    4. Adjust Modifier to the Individual

    I think about the character and how much more skilled than the reference standard they will be under these conditions and circumstances and adjust the subtotal accordingly.

    5. Inherent Environmental Factors

    I mentally compare the actual environmental conditions to those that would be considered ideal for accomplishing the task. That comparison includes the lack of resets at critical points in some tasks – carving a model, or adding spices to a roast, permits no backtracking.

    There may also be standard modifiers defined by the game system – for being blind, surprised, underwater, or whatever. This step is where I add those in – or, more commonly, an estimate of what they will amount to, overall. One of the most common is applying range modifiers to skills being used at a distance – associating landmarks / signposts to navigational references, for example.

    Circumstantial Modifiers also get factored in – performance anxiety, the need to deliver, time pressures, distractions, etc.

    Abstract all of those environmental considerations into a modifier describing how much worse the actual environment is than the ‘ideal environment’ assumed in step one.

    6. Adjust Environmental Factors for Equipment

    Good equipment can help mitigate these environmental difficulties. A lack of essential equipment can make them a lot worse.

    Note that this ‘essential’ equipment doesn’t actually need to be essential – it simply has to be ‘essential’ in the character’s mind, because that’s what he has been trained to use.

    It can be absolutely reasonable to apply a penalty for lacking ‘essential’ equipment, a partially-compensating bonus if the character possesses the real minimum requirements – at the cost of taking three or four times as long to complete the task – and then a further penalty for the resulting time pressure.

    But I try not to get that deeply into specifics, most of the time. Instead, this simply generates an adjustment either upward (greater difficulty) or reduces the existing environmental modifier from step five towards zero.

    It should also be observed that it’s rare for equipment to actually do part or all of the job for you – that should get factored in separately in a later step – so it’s not likely that equipment adjustment will be enough to turn the negative environmental modifier into a positive.

    7. Adjust Environmental Factors for Individuality

    People who are used to cold conditions are less affected by cold conditions. The same is true of any other environment. However, acclimatization wears off surprisingly quickly if you aren’t regularly being exposed to it.

    Although it’s not completely realistic, I divide such adjustments into two components – one that wears off quickly and one that persists for quite a long time, but will wear off eventually.

    It’s also true that those used to a cold environment will find it much harder to cope with a hot environment and vice-versa. There’s a natural sensitivity to the opposite that also needs to be taken into account.

    Unlike equipment, a favorable acclimatization can completely overcome penalties for a non-ideal environment (though it can’t do much more) – and that can permit equipment bonuses to breach the “better than ideal” barrier.

    I adjust the environmental modifier from step 6 to take such capacity on the part of the individuals into consideration.

    8. Total

    Add the resulting environmental modifier to the subtotal determined in step 4, and you’re done – announce the total.

Well, you’re done if you’re seated at the gaming table, performing an ad-hoc adjustment for some task that the player has decided to attempt. I try very hard to anticipate critical skill checks when doing game prep, allowing a still more robust approach.
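
Condensed to arithmetic, the whole ad-hoc process looks something like the sketch below. The parameter names and the sample values are mine, as is the choice to apply the step-7 cap before the equipment adjustment; in play, each argument is a snap judgment, not a computed quantity:

```python
def circumstantial_modifier(ideal_base: int, task_difficulty: int,
                            individual_adj: int, environment: int,
                            equipment_adj: int, acclimatization: int) -> int:
    """Steps 1-8 condensed into arithmetic. Every argument is a GM estimate."""
    subtotal = ideal_base + task_difficulty + individual_adj   # steps 1-4
    env = min(environment + acclimatization, 0)   # step 7: acclimatization can
                                                  # cancel the penalty, not exceed it
    env += equipment_adj                          # step 6: but gear may push the
                                                  # total past the 'ideal' barrier
    return subtotal + env                         # step 8: announce the total

# A cold-acclimatized sailor repairing rigging in a storm, with good tools:
print(circumstantial_modifier(0, -10, +5, -30, +10, +20))   # -> -5
```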

The Metagame Approach – What chance of success do you want?

That robust approach is far more heavily metagamed, and adds another six steps to the process.

    9. Assess The Character’s Chances

    The first step is to work out the character’s chances of success, given the Circumstantial Modifiers determined so far. It doesn’t have to be exact – a rough sense of the odds is enough.

    10. Assess The Level Of Challenge

    I then compare that result to the degree of challenge that I want the skill check to pose to the characters.

    11. Assess The Adequacy

    Comparing the two describes the adequacy of the challenge actually being presented to the character. It’s possible that the answer is “close enough”, but it’s far more likely that it is too easy or too hard.

    12. Revise The Target Chance

    How much harder or easier do the circumstances have to be to turn the first chance into the second? Just roughly, it doesn’t have to be perfect.

    13. Document the resulting actual Modifier to be applied.

    Taking that correction into account, I document the actual Circumstantial Modifier to be applied to the skill check, and write it down. This is simply a matter of taking the rough correction value from step 12 and adding it to the total from step 8.

    14. Revisit Foundation Decisions To Account For The Difference

    But that revised difficulty value needs to be justified – the decisions made in steps 1-3 and 5-7 need to be reassessed and the circumstances modified to reach the desired target Difficulty.
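
    In the percentile system described earlier, steps 9 through 13 reduce to a single subtraction. A minimal sketch (the helper name and sample numbers are mine; step 14 remains pure judgment and can’t be automated):

    ```python
    def metagame_correction(skill: int, modifier: int, desired_chance: int) -> int:
        """Steps 9-13 in the percentile system described earlier."""
        current_chance = max(1, min(100, skill + modifier))   # step 9
        correction = desired_chance - current_chance          # steps 10-12
        return modifier + correction                          # step 13: document this

    # A 40% skill and a -25 modifier give a 15% chance; I want 35%:
    print(metagame_correction(40, -25, 35))   # -> -5; step 14: now justify it in-game
    ```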

Because time is less of a factor when you can do the work in advance like this, there is less time pressure on the GM to make and refine these decisions. Even so, it doesn’t tend to take much more than the same time, again, to perform these additional steps – and that’s a fairly small investment to make.

Modifiers At The Speed Of Plot Part I: Planning For Success

Something ‘proceeding at the speed of plot’ is a favorite expression in these parts. It describes such a multitude of sins, but what it boils down to is subordinating something of inherent importance to the even more important goal of collaborating with the players to tell a coherent, compelling, and exciting story that entertains those participating.

Skill Modifiers, and skill checks in general, are no exceptions to this principle.

It’s at this point that I usually pull out the plot structure pyramid, last shown in Simulated Unreality: Game Physics Tribulations, which also contains links to its earlier appearances.

This is the third time that I’ve presented this particular depiction of the pyramid. The needs of each level overrule the content of the levels below. Making sure that the campaign is fun is a prerequisite for campaign longevity, so it’s absolutely a 2nd-level need – only practicality is a more important goal.

In this context, you should take a moment to plan for the characters succeeding at some task, especially if you’ve stacked the deck against them.

The chance of success is either a function of the Official Rules or applicable House Rules, possibly overridden or modified by the simulation of a coherent game reality or by fitting the plot to the genre.

What’s being discussed here is modifying level 5, Plot, to service the needs of the Campaign, level 6.

Critical Successes?

A related issue that also needs to be addressed is this: does your campaign, and its underlying game mechanics structures, permit or anticipate the possibility of critical successes? Some do, some don’t. If they are permitted, you need to make sure that a critical success at the right time both rewards the successful character and doesn’t derail the adventure’s plot.

There are four ways to reward a critical success – pick one that makes sense, given the context, and the check being made, and that avoids plot destruction.

    Option 1: Less Time

    It takes the character half as much time as would normally be the case – every decision or step in the process of completing the task fell into place almost automatically. The character was, quite simply, at the top of his game – at least for a while.

    Option 2: Quality Of Result

    This option doesn’t really apply to some skill checks. It’s at its most powerful when the skill check is to craft something or do something creative, but it can also apply to interpersonal skills. It can be applied to research tasks, but this tends to have greater long-term impact than is desirable.

    Option 3: Penumbra Of Expertise – Putting 2 & 2 Together

    This option shines where the previous one doesn’t apply. Research and deduction and other intellectual skills are at the heart of it. There can be application to interpersonal skills, as well, when the subject of such skills ‘just happens to mention’ something useful to the PCs somewhere down the track.

    One of my favorite applications of this benefit is connecting seemingly-unrelated pieces of information. A critical piece of information lies in a book, waiting for a PC to make a research roll of some sort. On a critical success, they not only find the information they want, they find a hint to a future problem they will face – and notice that the previous person to borrow this book from the library is an NPC who has exhibited a deep interest in their progress through the adventure because, unknown to the PCs, he is secretly in league with the enemy of the adventure. Noticing that little detail gives the PCs another line of investigation – one that won’t solve the major puzzle for them, but will give them an advantage later in the adventure (because the villain will be less informed of their capabilities and progress).

    A Bigger Picture

    Picture a series of standalone adventures whose primary metagame campaign-level purpose is to present characters with a need to make a specific skill check. An ordinary success is dealt with in the ordinary way, but the intended purpose is to give the characters a chance to have a critical success – and their reward for doing so is the discovery of some critical fact deriving from a past adventure, connecting smaller plots and plot threads together into something much larger, leading to a much bigger adventure once the current one is completed, abandoned, or delayed.

    Option 4: Distributed Benefits

    There have been times when I have distributed the success amongst two or more of the preceding categories – not giving full value in any one of them, but giving something of value in all that apply.

Not all rewards for a critical success need to be earth-shattering; often, a couple of small advantages are sufficient. The resilience of your adventure can only be assessed in the context of the circumstances and the specific skill check; determining the correct level and nature of reward for success is best done in advance, especially if that success could potentially be adventure-wrecking.

That said, every now and then, let the PCs short-cut an adventure, just as it might happen in real life – assuming that you have a Filler of some sort on standby for the next time you need it.

Modifiers At The Speed Of Plot Part I: Planning For Failure

Just as essential as planning for unexpected success when failure is anticipated is planning for failure when success can be reasonably expected, especially if the consequences can be campaign- or adventure-wrecking.

I know that some GMs advocate letting the chips fall where they may at such times. While I can understand their perspective, this perpetually flirts with campaign train-wrecking in the name of player agency, with only the GM’s wits insulating against disaster.

A better approach, in my opinion, is to prepare for the worst just in case – you only need to make it seem like disaster at the time!

That explicitly does NOT mean that you should hand the players a get-out-of-jail-free card; the failure may add considerably to the difficulties that have to be overcome before the adventure reaches its climax. Just because there may be an alternative route through the adventure, you don’t have to immediately dangle it in front of the players’ noses.

Snatching defeat from the jaws of victory can happen to the bad guys, too. Maybe the consequence of the players’ failure is that the villain grows overconfident and overreaches at a critical moment – but, in the meantime, the players get to wallow in the consequences of their failure.

There are four options for dealing with failures, and – still more pointedly – with critical failures. All of them should leave the enemy against whom the players were acting (directly or indirectly) with some sort of advantage. But it won’t happen in a controlled manner without a little advance planning on your part.

    Option 1: Multiple Attempts

    The simplest option is to give the characters multiple attempts. Perhaps the book in which they sought answers contains a veiled reference to some other work, where the answer can ultimately be found.

    Too often (and I’m as guilty of this as anyone else), GMs make the mistake of telling the players that they can try again. While this prevents the failure from being a show-stopper, it sucks all the pain that should be there out of the failure. Instead, make the search more arduous.

    The only time when this might not be the case is when there are time-critical, in-game pressures known to the players – a time-proven way of heightening tension that can occasionally blow up in the PCs’ faces.

    In such circumstances, I might relent to the point of giving the PC who failed a partial success – insufficient to progress, but enough when married to information gleaned by another PC from another source – if and when the players are clever enough to put two and two together to make four.

    A favorite technique for achieving this is to have an NPC make wild, ill-informed speculation that contains a nugget of truth – a nugget that can only be exposed by connecting it with the hint that is the “partial success”.

    Option 2: Extra Time or Delayed Cognizance

    A twin-barreled alternative is to determine that success is inevitable – eventually – and that it’s the timeliness of the solution that is the real benefit of success.

    Extra Time simply means that the answer takes longer than expected to disinter, or the crafting strikes surmountable hurdles that delay the eventual success.

    Delayed Cognizance only applies to information-gathering / research rolls. When the player is subsequently presented with a ‘trigger’ of some sort, the answer (or part thereof) will come to them – a flash of inspiration, as it were.

    There can be catastrophic consequences to this approach, however, if the character treats eventual success as inevitable, or if the players come up with the right answer before the GM provides the intended hints. You can dodge this bullet if you are adroit enough – “Charlie, when Abby makes her half-in-jest speculation, you suddenly realize that she’s closer to being right than you were expecting…” – and proceed to lead the character who failed to the correct answer by coupling what they gleaned from their research with a piece of the idle speculation.

    Option 3: Multiple Pathways & Back Doors

    Having some alternate route by which the failure can be overcome is often a preferred solution. As noted earlier, you don’t have to make this obvious – creating a sense of frustration over the failure, a sense that shadowy forces are moving unchecked as a result, and that time is passing as the sword of Damocles descends above the PCs’ heads before the alternative pathway to success (or even a back door out of the failure) reveals itself, is essential. Again, you want the players to feel the pain of failing a critical roll.

    What is a back door? Giving the answer to the question the roll was supposed to solve to an NPC – who will only think to mention it when the PCs happen to say the right thing to the right person. “Funny you should say that, I was reading something about it in the paper just the other day. I think it said…” and follow it with a breadcrumb and a lot of hogwash. The players then have to separate the wheat from the chaff.

    Option 4: Outside Assistance

    The fourth method of converting a failure into a success is to factor in some form of outside assistance that increases the chance of success.

    It’s quite unrealistic, these days, for a breakthrough discovery to be attributable to a single researcher; instead, it’s generally a team, with a handful of assistants. Or an army of assistants.

    For this approach to work, you have to convince the player who failed that he is right on the edge of success, but there’s something that he’s missing, and he knows it. That can be tricky, as it frequently feels manipulative and anticlimactic to simply announce it outright. You need to approach the issue more covertly, more slowly, and more lingeringly, so that the player feels the frustration.

    The general concept of outside assistance raises a number of associated issues; not all of them will be a part of every such situation, but sooner or later they will manifest.

      Confidence Vs Overconfidence

      Ultimately, the root cause of failure that can be overcome with outside assistance comes down to mistaking overconfidence for justifiable confidence. The character thought he could succeed, when he didn’t quite have everything he needed. The role of the outside assistance is to provide whatever is lacking.

      “Confidence” is a key word to employ in describing the road to potential failure.

      There can be occasions when an outside factor intervenes to transform what should have been a success into a failure. “Flying monkeys descend and commence ripping pages out of the book you are trying to study” is probably going too far, but it’s correct in principle.

      Teamwork

      That shows that the form that the assistance takes can be much more varied than is usually realized. It is also important to recognize outside assistance as a form of teamwork; it’s not like the helper has to have the same skill as the primary character is using to attempt the check.

      When anticipating possible failure, however unlikely, it is worth asking how another PC or allied NPC (or even loose-lipped enemy NPC!) can bridge the gap between failure and success – without simply serving up the correct answer on a platter.

      The Benefits of Unskilled Assistance

      With any number of practical tasks, an unskilled assistant can enable the skilled character to focus on the heart of the problem or task. Simply holding a piece of timber steady while a carpenter measures and marks it, then cuts it to size, doesn’t sound like much, but it can be enough that the carpenter recognizes an error in the design in time to correct it – simply because he has less on his mind.

      The larger the project, the more likely it is that one or more unskilled assistants can be beneficial.

      But it works with research tasks, too. Consider an unskilled character combing bookshelves, looking for books on subject X for the skilled character to search through; perhaps he picks up something that seemed to fit the bill but isn’t actually relevant in the mind of the skilled character. Nevertheless, there has to be a reason why it was categorized where it was found, so – while taking a break from the research – the skilled character finds himself casually flipping through the book, only to discover something crucial within its pages.

      There are, of course, limits. I prefer to think of them as diminishing returns. It might take two additional assistants to do twice as much as one, creating a progression of team sizes: 1, 3, 7, 15, 31, 63, and so on – each step offering one incremental gain equal to the contribution of that first assistant (the progression might make more sense if I write it as 2−1, 4−1, 8−1, 16−1, 32−1, 64−1, and so on).
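
      For anyone who wants to apply that progression at the table, here’s a minimal sketch in Python – assuming, as the progression above implies, that a team of n unskilled assistants yields floor(log2(n+1)) “units” of effective help (the function name and the “units” are mine, purely for illustration):

```python
import math

def assistance_units(assistants: int) -> int:
    """Effective 'units' of help under the halving progression:
    1 assistant -> 1, 3 -> 2, 7 -> 3, 15 -> 4, 31 -> 5, 63 -> 6.
    Equivalent to floor(log2(assistants + 1))."""
    return int(math.log2(assistants + 1)) if assistants >= 1 else 0

for n in (1, 2, 3, 7, 15, 31, 63):
    print(f"{n:2d} assistants -> +{assistance_units(n)} unit(s) of help")
```

      Each doubling-less-one of the team size buys just one more unit of help – which is why the cap on useful team size arrives so quickly.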

      The Pitfalls of Skilled Assistance

      If unskilled assistance can be useful, surely skilled assistance can be even more so?

      The answer to this rhetorical question is yes, obviously – but there are a couple of pitfalls to be wary of if you’re the leader of such a team effort (on top of concerns like security and jealousy that are outside the scope of this article).

      Pitfall 1: Too Many Cooks

      The assistant can entirely misjudge his own competence and make decisions that he can’t justify. “I was only trying to help” is cold comfort. The larger the team grows, the greater the vulnerability. What’s more, the larger the team, the more time the leader has to spend on administrative tasks, removing his focus from actually solving the problem. The real cap on effective assistants is far smaller than the diminishing-returns progression indicates.

      Pitfall 2: Failures of Assumption

      “I thought Dendron was taking care of that.” “I didn’t think it would matter.” “It looked like a shortcut.” These excuses, and many more like them, are all reflective of failures of assumption on the part of the assistants, which are much more likely to occur when the leader can no longer give each assistant a specific task that contributes to the central effort, but must delegate authority.

    Basic Errors

    One final source of failure deserves special mention – the skill attempt itself can be misdirected, the lead character having made some fundamental error in formulating the plan for tackling the task. When this is the case, it always has to come from the player’s own decisions in making the skill check; the GM cannot foist this justification for failure on a player from the outside.

    When this happens, success means that the character recognizes the mistake in time to redirect his efforts to answering the question he should be asking; failure means that the research continues until the character realizes that he’s made some basic mistake in his thinking. When he can formulate, unassisted by the GM, the nature of that failure, he can make another attempt – at the expense of more time.

    Sidebar: A Pathway To Learning

    That brings up an only tangentially-related subject. In a Traveller campaign of which I was once a player, skill use was the only way to improve a character’s expertise. On a success, a character got to roll two additional dice; if they came up Box Cars, the character got +1 to his skill from the next time he used it onward. But the GM held to the principle that you learn more from failures than from successes – so if you failed, you rolled two dice and if either of them came up a 1, you also got a +1 to your skill thereafter.

    The big benefit is that you couldn’t use a skill simply for the sake of raising it – there had to be real stakes attached. You had to be trying to do something that would advance the adventure or the campaign, and that made gains in the things that fitted your character’s role within the campaign more fertile ground for further gains.

    Success also meant that further progress became slower, because the chance of failure diminished.
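
    To put rough numbers on that (and assuming that ‘Box Cars’ means double sixes on the two follow-up dice), here’s a quick sketch of the odds the house rule implies:

```python
from fractions import Fraction

# Gain on a SUCCESS: the two follow-up dice must both show 6 (Box Cars).
p_gain_on_success = Fraction(1, 36)

# Gain on a FAILURE: at least one of the two follow-up dice shows a 1.
p_gain_on_failure = 1 - Fraction(5, 6) ** 2  # = 11/36

print(f"+1 skill after a success: {p_gain_on_success} = {float(p_gain_on_success):.1%}")
print(f"+1 skill after a failure: {p_gain_on_failure} = {float(p_gain_on_failure):.1%}")
```

    An improvement is eleven times more likely after a failure than after a success – so as a character’s skill rose and failures became rarer, further gains slowed dramatically, exactly as described.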

    As a house rule, it was clever in a number of ways. I’m not suggesting everyone adopt it – I don’t use it in any of my campaigns – but it’s an idea that each GM should evaluate for themselves.

Modifiers – Complexity Disguised As Simplicity

For such a simple and ubiquitous concept, there sure are a lot of nuances to the subject of Skill Modifiers, as shown by the fact that this article is now around 8000 words in length!

Good usage of Skill Modifiers can enhance a campaign in any number of ways, but achieving good usage requires some effort on the part of the GM. The easier part is assessing potential modifiers on the fly in a rational and systematic way. Much harder is dealing with the consequences, which would be present whether or not you employ skill modifiers at all.

If anything, you could say that the Skill Modifiers assessment process provided here affords you a measure of control over the circumstances of those consequences. As tools go, Skill Modifiers are pretty good ones – but they are facilitators, not ultimate solutions in and of themselves, and certainly no way to avoid hard work in the development of adventures and campaigns.

Learn to use them wisely, and rewards will follow. But you have to put in the work.


Economics In RPGs 7: Economic Realities


This entry is part 10 of 16 in the series Economics In RPGs

Image by Nika Akin from Pixabay, Gemstone eyes by Mike

A word of advice: Each part of the series builds heavily on the content from the previous one. While you may be able to get relevant information without doing so, to get the most out of each, you should have read the preceding article.

Welcome & General Introduction

So, here we are – the six parts that have preceded this one have delivered us to an era that can generally be considered now. I’ll go into the specifics of what changed and how it affected the economy in the next part – because, at last, we’re up to the blog post within this series that I originally intended to write.

You see, a while back I started seeing a lot of ill-informed vitriolic posts on social media about the US Economy, specifically the interest rates and the terrible job that President Biden was doing with the economy.

At much the same time, Interest Rates were experiencing sustained and regular rises in England, where similar rhetoric was on display – just a little toned down.

And Australia was also experiencing a similar phenomenon, creating what’s being described as a “Cost Of Living Crisis”. But our interest rate rises were more cautious, for fear of triggering a recession – so, while rates in the US have now moderated to something America can live with, and the rhetoric has shifted focus, Interest Rates remain high here, and the opposition have tried to employ similar rhetoric (but it hasn’t worked).

As I was reading the umpteenth variation on the theme, it was becoming more and more apparent that a lot of people have no idea how and why interest rates are set, or how economies work. And I thought to myself, “What a pity that it’s not suitable for a Campaign Mastery post that could explain it all – more or less.”

And then I realized that every modern game out there has, or should have, some form of economy, and that GMs should have at least a basic idea of how economies work – so it was a suitable topic, after all.

But I thought that it needed context to make it relevant to non-modern RPGs, and so the rest of the series was born.

A disclaimer: I am not an economist and I’m not trying to turn anyone else into an economist. An awful lot of this content will be simplified, possibly even oversimplified. Bear that in mind as you read.

A second disclaimer: I’m Australian, with a working understanding, however imperfect and incomplete, of how the US Economy works, and an even more marginal understanding of how the UK economy works (especially in the post-Brexit era). Most of my readers are from the US; the second-largest group are Brits. Canadians and Australians fight over third place on pretty even terms. Those are the contexts in which what I write will be interpreted, and that means that the imperfection can become an issue.

Any commentary that I make comes from my personal perspective. That’s important to remember. Now, sometimes an outside perspective helps see something that’s not obvious to those who are enmeshed in a system, and sometimes it can mean that you aren’t as clued-in as you should be. So I’ll apologize in advance for any errors or offense.

I’ll repeat these disclaimers at the top of each part in this series.

Related articles

This series joins the many other articles on world-building that have been offered here through the years. Part one contained an extremely abbreviated list of these. There are far too many to list here individually; instead check out

the Campaign Creation page of the Blogdex,

especially the sections on

  • Divine Power, Religion, & Theology
  • Magic, Sorcery, & The Arcane
  • Money & Wealth
  • Cities & Architecture
  • Politics
  • Societies & Nations,
  • Organizations, and
  • Races.
Where We’re At – repeated from Part 3

Along the way, a number of important principles have been established.

  1. Society drives economics – which is perfectly obvious when you think about it, because social patterns and structures define who can earn wealth, the nature of that wealth, and what they can spend it on – and those, by definition, are the fundamentals of an economy.
  2. Economics pressure Societies to evolve – economic activity encourages some social behaviors and inhibits others, producing the trends that cause societies to evolve. Again, perfectly obvious in hindsight, but not at all obvious at first glance – largely because the changes in society obscure and alter the driving forces and consequences of (1).
  3. Existing economic and social trends develop in the context of new developments – this point is a little more subtle and obscure. Another way of looking at it is that the existing social patterns define the initial impact that new developments can have on society, and the results tend to be definitive of the new era.
  4. New developments drive new patterns in both economic and social behavior but it takes time for the dominoes to fall – Just because some consequences get a head start, and are more readily assimilated into the society in general, that does not make them the most profound influences; those may take time to develop, but can be so transformative that they define a new social / political / economic / historic era.
  5. Each society and its economic infrastructure contains the foundations of the next significant era – this is an obvious consequence of the previous point. But spelling it out like this defines two or perhaps three phases of development, all contained within the envelope of a given social era:
    • There’s the initial phase, in which some arbitrary dividing line demarcates the transition from one social era to another. Economic development and social change are driven exclusively by existing trends.
    • There’s the secondary phase, in which new conditions, deriving from the driving social forces that define the era, begin to infiltrate and manifest within the scope permitted by the results of the initial phase.
    • Each of the trends in the secondary phase can have an immediate impact or a delayed impact. The first become a part of the unique set of conditions that define the current era, while the second become the seeds of the next social era. There is always a continuity, and you can never really analyze a particular period in history without understanding the foundations that were laid in the preceding era.

The general principles contained within these bullet points are important enough that I’m going to be repeating them in the ‘opening salvos’ of the remaining articles in the series.

1. Inflation & The Economic Thermometer

Simply put, Inflation measures how quickly prices are rising across an economy – in effect, how much faster the economy is growing than its capacity to supply what is being bought. In a system tied to a standard, like gold or silver, the number is relative to expectations, which define how much currency needs to be printed and put into circulation.

In economies with a floating currency, if you print too little, it just means that the value of the currency rises to compensate, and vice-versa – but that affects all sorts of other things if you buy or sell anything internationally, so a controlled inflation rate remains absolutely critical to keeping an economy stable.

Negative inflation rates – deflation – are undesirable. It’s the same thing (essentially) as setting interest rates too low, which I’ll get to a little later.

Pandemic

That’s what happened in the pandemic – people weren’t going out, lots of businesses were closed, money wasn’t being spent, and the economy started to crash.

Stimulus

To fight that, governments issued economic stimulus in various forms. This artificially stimulates the economy by increasing the total purchasing power of the sum of all the money in the economy. That’s inflationary – a good thing in this scenario – but once the cause of the slowdown ends (lockdowns, in this case), all that extra money starts to circulate, and the economy starts to overheat.

Post-Pandemic Inflation

The law of supply and demand means that this expenditure is also inflationary, so inflation spiked, all over the western world – especially as supply chains had been disrupted and would take a while to get back to normal. That meant that supply was down just as demand was going up, and that results in higher prices – which cause inflation.

I’ll cover a whole truckload of economic vulnerabilities a little later – this is just supposed to be a general introduction to the subject.

So, economic managers – whatever their job title – within any given government will make it their business to know what the economy is doing, so that they can intervene if necessary.

2. What’s More Important than the current Inflation Rate? Tomorrow’s Inflation Rate!

To be honest, what the inflation rate is now is not all that important. What matters is what it’s going to be – assuming that you do nothing – and how that forecast is affected by economic management (i.e. doing something).

You see, the ‘current’ inflation rate is the result of what has already happened within the economy. It’s too late to change it. What matters is how you respond to it.

Because deflation is so nasty, the usual practice is to aim for modest, consistent economic growth, and matching low levels of inflation – generally, 2-4%. If your inflation rate is higher than this, you’re in an economic boom – and those are usually caused by an economic “bubble” of some sort that will eventually burst. And that’s bad.

To forecast what Inflation is going to be, you need to identify a trend. And a good starting point is the change in the inflation rate now compared to what it was, the last time you measured it.

But it’s not that simple.

3. Crystal-ball-gazing, Economic Style

Measuring Inflation is never as easy as it is made to sound on the TV News. Predicting what it will be is even harder.

Here’s a set of charts that I put together to illustrate the problems.

Figure 1 is the ideal situation. It shows three measurements of inflation – one very good figure, and two with a margin of error.

You usually get a very good number once a year, or once a quarter, and two less reliable figures in the months that follow.

But the core numbers are solid and show that in period 1, inflation rose by a small amount, and in period 2, it went up by a lesser amount. It then projects that last measured change forward to what it will be at the end of the year/quarter, if it keeps changing at the last measured rate of change.

Figure 2 allows for a margin of error upwards. Figure 3 adds a margin of error downwards. The bar on the right shows the level of fuzziness this introduces in the forecast. Based on this, you’d probably not see an intervention as warranted.

But there’s that pesky margin of error. In figure 4, I project the results if the current number is at the extreme low of the margin of error, and the previous result was at the extreme low, at the estimate, and at the extreme high. And it paints a very grim picture.

Figure 5 does the same thing based on the true value currently being at the top of the margin of error. And its results are equally nasty.

But those are unlikely extremes. If you put the two together, you can actually map out a bell curve, just like you would get with (say) 3d6. The center of the curve is right at the point of our original estimate, spreading out to either side by the original fuzziness. Based on that, you still probably wouldn’t intervene.

But how do you know? Well, you can compare the fuzzy figure at your next measurement to the current one, and it will indicate the current trend. If it’s a, the end-of-period number will be the alarming a1. If it’s b, the forecast will be the still-serious b1. If it’s c, the forecast will be the alarm-bell-ringing c1.

But it’s still not that simple. Numbers need to be adjusted for seasonal variations, which means taking whatever your forecast says and guessing what it will be once those only-fuzzily-known errors are taken into account.

And, on top of that, the reason that there’s a margin of error in the first place is that a lot of the contributing economic factors aren’t precisely measured until the end of the period. Until then, all you’ve got are estimates and indicators, producing an estimated number.
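
To make the problem concrete, here’s a minimal sketch of the projection logic behind those figures – linear extrapolation of the last measured change, with an invented ±0.4% margin of error on each reading (all the numbers are made up, purely for illustration):

```python
def project(prev: float, curr: float, periods_ahead: int) -> float:
    """Extrapolate: assume inflation keeps changing at the last measured rate."""
    return curr + (curr - prev) * periods_ahead

prev_est, curr_est, margin = 3.1, 3.3, 0.4  # annual %, with +/- margin of error

central = project(prev_est, curr_est, 2)
high = project(prev_est - margin, curr_est + margin, 2)  # trend maximally overstated
low = project(prev_est + margin, curr_est - margin, 2)   # trend maximally understated

print(f"central forecast: {central:.1f}%")
print(f"forecast band:    {low:.1f}% to {high:.1f}%")
```

A mere ±0.4% of measurement fuzz turns a benign-looking 3.7% central forecast into a band running from 1.7% to 5.7% – which is the dilemma of Figures 4 and 5 in a nutshell.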

4. Underlying Causes and Trends

The obvious thing to do is to look at the constituents, and use the estimates and indicators to track trend lines in those economic variables, and from that, try to work out what part of that bell curve you are likely to be heading for.

They never quite match up with reality. Think about those 3d6 again, and pretend that each d6 represents one of these underlying constituents. The average of a d6 is 3.5. So tell me then, how many of those d6 are actually going to roll 3.5?

That’s right – none of them. The best that you can hope for is that if one comes out particularly low, another will be high, and the whole thing will balance out in the end.
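
A quick simulation makes the point – treat each die as one underlying constituent, and compare how wildly a single die swings against how much the average of three settles down (illustrative only):

```python
import random
import statistics

random.seed(1)
rolls = [[random.randint(1, 6) for _ in range(3)] for _ in range(10_000)]

singles = [r[0] for r in rolls]         # one constituent on its own
averages = [sum(r) / 3 for r in rolls]  # the aggregate of all three

print(f"single die : mean {statistics.mean(singles):.2f}, stdev {statistics.stdev(singles):.2f}")
print(f"3-die mean : mean {statistics.mean(averages):.2f}, stdev {statistics.stdev(averages):.2f}")
```

No individual die ever lands on 3.5, but the aggregate clusters tightly around it – which is exactly the cancellation that forecasters are hoping for.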

These contributing trends are called “underlying causes of inflation” by economists. If the underlying trends are fairly stable, the inflation rate will be fairly stable too, and that nice, simple forecast from figure 1 is probably going to be fairly close.

In actual fact, there are eight underlying values or trends that go into a normal inflation forecast.

5. Entwined Correlations & Vicious Circles

But it’s STILL not that simple. Lots of these underlying values are entwined and interconnected – and these secondary impacts often have delayed effects as well as immediate ones – and the eight factors are never equal in their contributions; some are bigger, some are smaller. And these scaling factors can vary with the size of the underlying trend.

Let’s take a fairly realistic example: The price of crude oil gets put up by OPEC, as recently happened.

Gas stations raise their prices. Distribution costs of goods go up.

It costs more to farm. Food prices go up.

It costs more for your employees to get to work. They start pushing for a pay rise, and because their food prices have gone up, they want it to be a healthy increase.

Oil prices eventually feed into the cost of power, which your employees use at home to run their refrigerators and TVs and whatnot – like cooking food. Their demands get larger and louder.

Meanwhile, the cost of electricity also means that it costs more to keep your lights on and your machinery operating. In some cases, this costs a bit; in others, like aluminum manufacturing, it costs a huge amount. So you have to put your prices up, and the employees get at least some of the pay rise.

(Not all industries and businesses are going to be affected the same way or by the same amount, by the way).

But if you and everybody else puts their prices up, it now costs more to buy everything. So the employees want another pay rise.

In the meantime, the extra spending power that they now have gets spent, and as a result, there’s more money in the economy. That’s inflation.

It can all end in a vicious circle in which the necessary response to inflation creates still more inflation.

To put it in a nutshell, the more overheated the economy, the more unstable it is.

6. Hyperinflation, Recession, and Depression: Why Inflation Matters

While the causes of these phenomena are quite different, a lot of the symptoms of the economic disruption are the same.

When one of these positive feedback loops goes completely out of control, you can end up with hyperinflation. Anything more than about 20% inflation is generally well on the way to hyperinflation.

Germany experienced Hyperinflation between the world wars; people had to literally cart wheelbarrows full of notes to the store to buy bread, which cost a ridiculous amount, and which was increasing in price daily.

Imagine buying a dozen eggs one day for $2.50. The next day, those eggs cost $3.00, the day after, $4, then $7, then $12, followed by $16 and $25. A week after this hyperinflation started, you can no longer buy eggs by the dozen; it’s one egg at a time, for $5 each – then $7, $10, $16, $22, $40, $70, and $120. With prices going up like this, in a few weeks you will be talking about thousands of dollars – for an egg.
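
The scary part is how fast compounding does its work. A toy calculation – the 40% daily rise is invented, but it roughly matches the egg prices quoted above:

```python
import math

price, daily_rise, target = 2.50, 0.40, 1000.0  # invented illustrative figures

# Days until price * (1 + daily_rise)^d exceeds the target.
days = math.ceil(math.log(target / price) / math.log(1 + daily_rise))
print(f"at +{daily_rise:.0%} per day, a ${price:.2f} item passes ${target:,.0f} in {days} days")
```

Eighteen days, on those numbers – “a few weeks” isn’t hyperbole.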

Pay rates can’t keep up. No-one can afford to buy eggs (or anything else) – so they steal them. Black markets start dealing in basic (stolen) produce, and people buy from them because the only other choice is to starve. Riots and looting ensue. The shopkeeper doesn’t have the income to keep his shop open, so he closes it and joins the rioters – and so does everyone who used to work there. If this is restricted to one country, anyone who could flee has done so long ago – but hundreds of thousands more are following. Instant Dystopia.

Employment rates are a function of supply and demand, too. In a hyper-inflation situation, the demand vanishes, and there are hundreds of people for every vacancy.

Of course, everyone’s withdrawn what money they have saved long before, so the banks have collapsed, too.

Recessions

A contracting economy is not an improvement on this situation.

Recessions happen when there’s not enough wealth being generated by an economy to sustain spending. You can hide one by going into debt, borrowing money from somewhere else – for as long as they will lend it.

In a recession, expenses go up, profit margins go down, and businesses start to collapse. This drives a spike in the unemployment rate and pretty soon there are dozens of applicants for every job. When I first entered the job market, there was a small recession, with 10-12% unemployment. That’s only happened once or twice, since.

Depressions

Two quarter-years (usually abbreviated to ‘quarters’) of economic contraction in a row is the usual rule-of-thumb definition of a Recession; when the contraction runs deeper and longer than that, a Recession becomes a Depression.

You might think that there’s not much difference, things just don’t improve – but that assumes that a new equilibrium is reached in the economy, and things stabilize, albeit at a level that contains a lot of suffering.

Unfortunately, that’s not the case. Some businesses are ‘primary arteries’ within the ‘body’ of the economy, essential elements of the economy itself. In a Depression, these start to fail. The electric company. Major supermarket chains. Banks.

Domino effects follow, causing the closure of many businesses that might have survived, and massive downsizing of those that attempt to weather the storm.

The greater significance within the economy that these institutions represent makes a Depression slightly different in kind, and not just in degree.

A recession is the economy experiencing sudden, acute, alarming chest pains. A Depression is a full-blown heart attack. A prolonged depression is a stroke on top of the heart attack.

And depressions generally last a long time. It’s that much harder to rebuild the economy without those arterial components; it can, and has, taken years. It’s like a flood caused by the sudden collapse of a dam, a flood that wipes out the only available suppliers of steel and concrete (which you need in order to rebuild the dam). Before you can rebuild the dam, you need to replace those suppliers; before you can replace those suppliers, the flood waters need to recede; and even accomplishing all this still leaves a mess to repair.

7. Interest Rates & Inflation

The most effective tool to use against inflation in general is interest rates. You lower them to boost the economy and avoid (or mitigate) a recession; you raise them to suck money out of the economy when it starts to overheat.

If you rely on today’s inflation rate, you will always do too little, too late. You need to forecast what the inflation rate is going to be and set your interest rates accordingly.

Increasing interest rates has a whole slew of effects. It makes it harder to keep a business operating (almost all businesses are built on debt; the infrastructure needed for the business to operate is too expensive to fund any other way). And it makes it more expensive to borrow for housing, squeezing what households can spend on everything else.
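
To see how hard that squeeze bites, here’s a sketch using the standard amortized-loan repayment formula (the loan size, rates, and term are invented for illustration):

```python
def monthly_repayment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortized-loan formula: P * r / (1 - (1 + r)^-n), r = monthly rate."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

loan = 500_000  # a 30-year mortgage, illustrative figures only
for rate in (0.02, 0.06):
    print(f"at {rate:.0%}: ${monthly_repayment(loan, rate, 30):,.0f} per month")
```

On those numbers, a rise from 2% to 6% takes the repayment from roughly $1,850 to roughly $3,000 a month – money that immediately stops being spent elsewhere in the economy.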

So businesses need to put prices up, and that means that people won’t get as much to spend on luxuries. Rents go up, and so do power prices. Banks need to raise the interest rates they pay for savings in order to maintain liquidity.

Businesses also look to lower what costs they can – and that often means doing without as many staff. So employment goes down, and unemployment goes up. Pay rates may be cut – just when people are demanding that they go up.

And so on. But that brings me to a second wild-card when it comes to forecasting inflation rates – not all these effects migrate through the economy at the same speed, or with the same intensity. Some of them are immediate; others are a ticking time bomb.

Economists in Australia are forecasting a gradual but steepening fiscal cliff as people’s mortgages go from fixed (low) rates, set before the current financial problems arose, onto variable rates (set according to the current economic conditions), for example.

Just in case it works differently, I should explain that in Australia, ‘fixed rate loans’ on housing are for a limited period – sometimes one year, sometimes two, occasionally five – after which, the loans revert to variable rates. A lot of people refinance their mortgages when that happens, for obvious reasons – a great fixed rate often means a more draconian variable rate.

On top of that, some of the effects of an interest rate increase are persistent – they linger. Just as hitting the interest-rate brakes doesn’t slow the inflation trajectory all at once, so taking your foot off the brake has both immediate effects and slower, longer-term ones.

Our electricity regulator has fixed prices for the remainder of the year at a rate commensurate with the high costs experienced earlier in the year (when it looked like those prices were going to go up by another 20%, it must be admitted) – which means that they will stay high even though the wholesale cost of generating and distributing the power has now started to fall.

There will be no relief on electricity and gas rates until January next year. The Reserve Bank (which sets interest rates here) could drop rates tomorrow, and they would effectively stay ‘locked in’ at the higher rate so far as power costs are concerned, at least until then.

8. Putting The Cart Before The Horse

So, not only is forecasting the inflation rate much more difficult than it might initially have seemed, it’s also far more important to get right – because that determines the interest rate settings, and those have a profound impact on the economy.

One of the ways economists counter the difficulty in predicting inflation rates is to put the cart before the horse. Instead of trying to measure the effects driving inflation, they measure the consequences of inflation on a given key element and use that to infer what the impact on inflation was over the period in question.

You can get an idea of how much more or less people have to spend by looking at house prices, and at retail spending – and at what people are spending money on. This uses the consequences of the inflation that has already been experienced to deduce the trend, the impact that the resulting patterns of expenditure will have on the future inflation rate.

It makes the guesses more informed, in other words.

Unemployment Rates

Another key example is the current unemployment rate, because that is driven by the inflation that was. If unemployment is too high, it’s a sign that the economy may be heading into a recession; if it’s too low, it means that demand for workers remains high, and that means that businesses have the money to spend on hiring more workers – and that’s a sign that the economy might be overheating.

In any given jobs market, there is a given percentage of the workforce that is going to be in the process of changing employers. The technical term, believe it or not, is “slosh”. It’s usually around 2-4%.

When there is a strong jobs market, and unemployment is low, employees feel more confident in their ability to get a job if their current one isn’t satisfactory for any reason – so slosh can actually go up, even though the overall unemployment rate is down. That confidence can flow through to other indicators, like retail spending, as well.

When there’s a lot of unemployment, businesses work extra hard to keep their best staff but treat everyone else as disposable parts – because there are multiple applicants for every vacancy, and they can pick and choose amongst them. People are less confident in their job security, and less inclined to risk changing jobs unless they have to.

So the unemployment rate is not only a key indicator in its own right, it also provides valuable context for the interpretation of other values. If only it weren’t always 4-6 weeks out of date by the time it gets measured…

9. Surfing The Crest Of A Placid Wave

Economists love it when things are nice and calm, when all the indicators are that inflation is stable at a couple of percent or so, unemployment is mostly churn (slosh accounting for 50-75% of an overall rate of around 4-4.5%), and so on. When that happens, they can set Interest rates to a nice stable 4% that doesn’t put significant price pressure on wages or anything else, and they can continually surf the crest of a very placid wave.

That rarely happens. So many things feed into an inflation rate that there’s always something that’s too high or too low or too unstable and needs to be monitored. If it’s only one or two variables, that can usually be managed – though the reporting delays create considerable discomfort – but when the instability persists, nerves begin to fray.

10. The Blunt Weapon

Part of the problem is that Interest rates are very much a blunt weapon.

Let’s say that there’s low unemployment, and housing prices are setting new records, and power prices are high, and OPEC have just reduced global supplies to increase the price of Oil. Those are all indicators that inflation is high, and needs action.

But at the same time, there’s high levels of mortgage stress, and there’s a mortgage cliff approaching, and families are under considerable price stress that is impacting retail sales and travel is down, and food prices are too high, creating considerable pressure for wages growth – those are all signs that interest rates are too high, and raising them further will only make things worse.

That’s the situation right now in Australia – about half the indicators say inflation is too high, and half say that interest rates are too high, and you can’t have it both ways.

It takes nerves of steel not to jump, one way or the other. What it probably means is that interest rates are too high, but have to stay that way until at least one more of those high-inflation indicators comes down. That will happen as people start falling off that mortgage cliff; the trick is to help retail and tourism businesses, and those individuals living at lower economic levels, survive until interest rates can be safely cut – without pumping more money into the economy, which will delay that fall.

At any given point in time, there are ten economic vulnerabilities that can massively disturb that placid wave. Governments usually have no control over these; they simply have to deal with the consequences. One is usually not enough to create an economic panic – but two or more can combine at any time.

11. Vulnerability 1: The Price Of Oil

The first vulnerability is the one to which everyone is most vulnerable, even the oil-producing nations.

That might not be immediately obvious to readers. Let’s say that another country restricts oil supplies (hello, OPEC); that has no bearing on how much it costs to produce your oil, so – in theory – the price of oil doesn’t go up in your country. Except that the oil producers are private companies who – like all such – are required by their boards and shareholders to make profits, and so they will restrict the amount of oil they sell domestically to increase what they can supply to countries where the price is higher. In effect, in order to be competitive, your country has to raise its domestic oil price to the international value, or close enough to it.

This is the vulnerability that completely blindsided everyone in the 1970s, bringing an end to the Pre-Digital Tech Age, as discussed in the previous part of this series. Simply put, no-one really recognized that it was a vulnerability until then; it was simply taken for granted.

The price of Oil impacts delivery and transport costs, so everything goes up in price. There are a huge number of products that are petroleum-based, especially plastics, so packaging for a lot of things goes up, too. It impacts the cost of consumer travel, so people have less money to spend on other things. It affects the cost of power, which takes longer to work its way through the system but adds an unwelcome second kick to all three effects.

And it’s completely out of your control; or, more accurately, it’s subject to the whims of the least-stable oil-producing nation. Everyone – including the oft-maligned Saudis (at least in this space) – is hostage to the price of oil.

12. Vulnerability 2: The Price Of Power

Everything uses power, even oil refining. Manufacturing, packaging, transportation, distribution, communications, retail, domestic consumption – you name it.

Even today, when they should know better, economists frequently underestimate the impact of rising power prices, getting unpleasant surprises as a result.

Some industries and operations are more vulnerable than others; leading this pack is aluminum refining, which passes these costs downstream to every product that uses aluminum components, or uses machines that contain aluminum components (thankfully, there aren’t so many of those).

Right behind them are anything that has to be stored in a refrigerated state. And that’s more foodstuffs than anyone realizes.

Anything that impacts the fundamentals of food prices has a disproportionate impact on consumer spending, consumer confidence, and consumer demand for better wages – and so rapid is this response that it can often outpace the spread of its causal impact through an economy.

Third in line – nominally – is anything that gets transported by road (or by rail, in many cases). Guess what the two primary distribution methods are? This effect is all about street lighting.

Domestic cooling and heating are in fourth place except every summer and winter, when they vault into third place. That’s because these are fundamentally inefficient processes. But this includes the cost of heating or cooling workplaces and office spaces, so it affects virtually everything.

Even seemingly unrelated industrial processes still rely on electricity to make their conveyor belts and other machinery function.

The price of power is in an especially precarious state at the moment. Already vulnerable due to labor costs and long-standing neglect of distribution infrastructure – substations, poles, and wires – environmental reality is mandating a switch from coal- and oil-powered power generation to more ecologically-friendly sources. This shifts power generation from instantaneous demand responses to longer-term supplies – and that demands new storage technologies that are inherently expensive, adding further cost pressures to the electricity price. In time, those will moderate and the system will stabilize, but it’s currently precarious.

Even the cost of steel is impacted by the price of electricity and gas.

13. Vulnerability 3: The Price Of Materials

Which brings us to the bogeyman that arose as a consequence of the oil shock in the 1970s – the shift in thinking from an unlimited-resources perspective to a limited-resources perspective.

Ironically, this seems to have coincided with a rise in consumer consumption of disposable products – many are no longer designed to last as long as possible, but incorporate planned obsolescence, designed to increase profits for the manufacturers and sellers.

The truth is that many of the commodities deemed most greatly at risk have proven more resilient in supply than was dreamed of, back in the 1950s and 60s when the first warnings were being sounded on the subject (and falling on deaf ears). It’s materials that were largely unnoticed back then, like lithium, and rare-earth metals that are critical commodities these days – and part of that is due to supply-chain issues and politics.

Rare Earth Metals

In particular, China is the world’s #1 source of these commodities – by a factor of one hundred or more, compared to the rest of the world combined – and all domestic computers, mobile phones, and other smart electronic devices depend on them.

Rubber

There was a time in which there was not enough rubber being produced to meet demand. This would have been the first great Materials Shock (predating the Oil Shock by decades), but an artificial supply – synthetic rubber – was devised in the nick of time.

Helium

Let’s talk for a moment about another material resource that doesn’t get enough attention: Helium. Modern disk drives rely on Helium to achieve their astonishing capacities and life-spans. Without Helium, you’re back at the 1-terabyte bricks of yesteryear – at best. Laptops and smartphones are suitcase-sized, and heavy. Helium and Hydrogen are the only suitable gasses – the next-lightest common gasses are Nitrogen and Oxygen, which won’t work for obvious reasons; you may as well use air. Hydrogen doesn’t work because its atoms are so small that they escape too quickly. The only satisfactory compromise is Helium, and for that reason, it’s now designated a Strategic Material by the US Government.

Yet, we throw it away on party balloons. The only reason this isn’t stopped is because it is feared a public panic would result – and it’s a hard sell persuading people that the second most-common substance in the universe is in critically-short supply.

Silicon

One more, for the sake of completeness: Sand is everywhere, and it’s the source of Silicon, which is fundamental to all sorts of computer chips. Not something you would ever expect to be in short supply. But it is.

The reason is that modern computer chips require exceptionally-pure silicon, and that only comes from exceptionally-pure sands, and that – in turn – is in comparatively short supply. The only reason this is not as dire a situation as the others mentioned is that there are expectations that ways can be found of ‘purifying’ lower-quality sources – though these are likely to double or triple the prices of electronics virtually overnight, if (when?) they are ever needed.

Food Sources

An afterthought, but an important one. How much agricultural land in the US is actively farmed, do you think?

About 16.8% of the US is considered arable – that is, capable of producing crops. This number has been rising as we get better at farming – it was reported to be 17.24% in 2020.

Of that percentage, 52% is actually used for agriculture. Some of it is reserved for forests, for example.

Because of subsidies, the most profitable crop is corn, so that uses up more than half of the land actually farmed. By area, that’s roughly the size of California.
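
A back-of-the-envelope check of that claim, using a rounded US land area of about 3.8 million square miles (the inputs are approximations, not official statistics):

```python
US_AREA_SQ_MI = 3_800_000       # total US land area, approximate
CALIFORNIA_SQ_MI = 163_700      # for comparison

arable = US_AREA_SQ_MI * 0.168  # 'considered arable'
farmed = arable * 0.52          # share actually used for agriculture
corn = farmed * 0.5             # 'more than half' of farmed land goes to corn

print(f"land under corn: ~{corn:,.0f} sq mi (California: ~{CALIFORNIA_SQ_MI:,} sq mi)")
```

That works out to roughly 166,000 square miles – close enough to California’s area to justify the comparison.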

The rest – roughly the size of Indiana – is where the rest of the American food supply comes from.

Because farms need to make profits, too, there’s an incentive for corn syrup to be put into almost everything. This contributes massively to the health problems in the US – about 42 pounds of the stuff per head, each and every year. For comparison, the average American also consumes about 110 pounds of red meat a year.

It’s too late to change – farming anything else would require a multi-generational investment in different farm machinery that would send farmers broke long before they got there.

Everyone needs to eat – so anything that affects food prices (like, say, an invasion of one of the world’s biggest grain exporters, Ukraine) is going to affect everyone and everything.

14. Vulnerability 4: The Price Of Labor

By and large, this is a negligible factor unless inflation is already running rampant, but labor costs are extremely sensitive to other primary vulnerabilities. This has enabled past state governments in the US to pass laws limiting – almost eliminating – employee rights. Nevertheless, as the current Hollywood strikes reveal, if everyone pulls together, massive disruptions can still occur – employees can only be pushed so far before they will revolt.

Unions, by giving employees some negotiating power, can actually decrease the level of industrial disruption – at the price of increasing its frequency.

What’s less harmful? The occasional catastrophic meltdown, or smaller but more frequent temporary disruptions? I don’t pretend to know the answer, but lean toward the latter.

Nothing happens without workers somewhere in the process. So everything is equally vulnerable to general increases in the cost of Labor. But, at the same time, some growth in this area is essential or you get those catastrophic meltdowns, no matter what interventions may attempt to prevent them.

The cost of labor is a key ingredient in everything from mining to power production. This ubiquity means that labor cost increases have delayed impacts on every other critical vulnerability, capable of sending them over the edge.

Minimum Wage

So the answer is to suppress the fundamental cost of labor as much as possible, right? No, no, no! Every time a general increase in minimum wages is mooted here in Australia, the conservative side of politics sound alarm bells about businesses collapsing, and I have no doubt that it’s the same everywhere else in the world.

If the minimum wage is too low, people have to work multiple jobs to make ends meet – and exhaustion means they will be less productive at all of them. There was a time – back in the 50s and 60s – when a family could live comfortably off a single wage; that’s no longer the case.

Two-Income Families

Economics driving social changes which in turn drive economic changes – this is something I’ve tried to show in this series as a significant pillar of reality. In this case, consumer habits adapted to twin-incomes, and businesses evolved to satisfy those consumer habits, and before you know it, you need a two-person income to achieve a satisfactory standard of living.

Which leaves single-income households in a very difficult position. There are only two solutions without raising wages – government support, or second jobs. So the one person is now ‘consuming’ two or more job vacancies, at the expense of someone else who wants a job. The end result is an increase in the unemployment rate, and a more profound increase in the long-term unemployment rate.

This suits businesses, however, because it keeps their labor costs down, enabling more income to be characterized as profits.

Doubling The Minimum Wage

I remember clearly a doubling of the minimum wage in the US being mooted by some as proposed policy in the 2016 US elections, and on first glance, it would do a lot of good. But…

Does anyone have any doubts that doubling the minimum wage would flow through to increases in all other wage levels? I don’t. The effect might be attenuated for those already on good money, so it wouldn’t end up doubling the entire wage levels of a country, but it would be a significant increase. Conservatives are right about that.

Aside from this, what would happen if such a policy was enacted? Well, people would no longer need a second job, assuming that nothing went up in price. That means that they would achieve a better work-life balance – good for mental health and general happiness.

To replace them, employers would have to recruit more workers. So unemployment would go down, and things would stabilize at a new normal – ironically, one closer to the much-lauded 1950s idolized by the MAGA-crowd.

But, if labor costs go up, businesses will lose profits – and that’s unacceptable. There would be an immediate increase in prices, which would start to erode those wage gains. On top of that, there would be massive short-term economic disruption, with some businesses not raising prices enough and some going too far.

You can only raise prices so far before consumers stop buying. So there are limits to the wages growth that business can absorb, ones imposed by public opinion. So some businesses will get their adjustments wrong, and lose profitability, and go out of business. And some will make the calculations correctly, and conclude that they won’t stay profitable because they can’t raise profits enough, and close.

Doubling the minimum wage will have all sorts of short-term economic and social benefits – before plunging an economy into a Recession or a Depression. Not good.

Equally severe problems result when this picture is used to suppress growth in the minimum wage. They are just less overt and obvious. When this happens, the only solution is to raise pay scales – and if you suppress wages growth too much, for too long, you eventually reach the point where you need a doubling just to get to where your wage-rates should have been.

Theoretical Solutions?

So, short-term, what you need is a graduated rise in the minimum wage, with a defined end-point, rather than trying to do it all in one hit.

Once you get there, indexing the minimum wage to Inflation seems the obvious long-term solution.

It’s not that simple, because wage increases are, in and of themselves, inflationary. This is a positive-feedback loop – another one – that will regularly send the economy surging out of control.

Businesses thrive on stability, and this is anything but stable. As with many other things in life and politics, there are no easy answers, and anyone who tells you there are is attempting to pull the wool over your eyes.

15. Vulnerability 5: Economic Disparity

While I’m in the vicinity, I should talk about this briefly. Income Disparity, also described as Income Inequality, is the uneven distribution of total income throughout a population.

In modern times, it means too much money going to executives and shareholders and not enough being fed into the pay packets of the people actually generating the income.

This is an obvious result of pushing labor costs down too much. Those who stand to gain – the wealthy and well-paid – are obviously in favor of it, for purely selfish reasons. This leads them to donate heavily to political groups who promise to cut their taxes or keep wages “under control”.

Some are smart enough to recognize these as short-term gains leading to long-term pain, but many are not. This problem produces a hidden instability within the economy, but competitive pressures – the pay scales needed at the top end to recruit good employees – restrict action to deal with it.

Solutions need to affect everyone all at the same time, whether they like it or not, and that makes them the responsibility of the highest levels of government.

In theory, higher tax rates help to balance the pressures – but greater access to tax avoidance mechanisms minimizes this effect.

The only solution that I can see is some sort of imposed regulatory mechanism which fixes the wages of a supervisor at (say) 2.5 times the average scale of pay of those supervised. But that contains inherent inefficiencies, encouraging poor business structures, and I doubt that any political party espousing it could ever get elected, so it’s neither effective enough nor ever going to be implemented. No easy answers – again.

16. Vulnerability 6: The Price Of Land

Land is sometimes said to be the safest investment, because – in the long run – land values always go up, not down. Oh, dear, sounds inflationary, doesn’t it?

This is one of those statements that’s both usually true and misleading at the same time. It depends on how volatile the housing market is in a given region, and how long “in the long run” is, and all sorts of other factors.

If you buy property in a small town, and the rail line to that town closes, you’d better believe that the bottom will fall out of the property market – you may have to wait a century or two for land prices to get back to where they were.

Buy property that’s in demand because workers at a nearby factory need residences, and if that factory contaminates the water, your property will become worthless overnight. If you’re lucky, the government will forcibly buy you out – for one-tenth of what you paid for the land, which – effectively – will never go back to being worth what you paid for it.

Here in Australia, the big stink is about insurers who will no longer cover communities for flood damage because once-in-a-century floods have struck twice in the last two years. This devalues the properties concerned so significantly that the government is contemplating relocating entire towns to higher ground. (The whole argument confuses probabilities with predictions – but that’s a bitter debate for another day and maybe a different venue.)

Part of the problem is that there is no formula for working out what a piece of land is worth; there hasn’t even been a proper analysis to identify the factors and their relative strengths. Instead, the ‘formula’ is to look at what properties in the area went for recently, how this property compares with those, what’s changed economically in the area since then, add a gut-instinct variation to that baseline for these factors, plus 5% and inflation, then list it and wait to see if anyone bites.

The inherent volatility of auctions doesn’t help matters any, either.

I remember when the first property in Sydney sold for more than a million dollars – a mansion with estate grounds in the most affluent part of town, overlooking the world-famous Sydney Harbour.

These days, the average 2-BR house sells for over a million, and despite an initial dip when those who saw the interest rate writing on the wall got out while the getting was (relatively) good, prices are still surging.

The number of sales per month is expected to triple or quadruple over the next 4 months, as more people fall off the mortgage cliff already described and either sell up (accepting a loss) or get foreclosed by the banks. Around Christmas, it will be a buyers’ market – if you’ve got the money to invest.

The value of property feeds into inflation in a number of ways – first, mortgages eat into disposable income, in exactly the same way as high inflation does; and second, rents always go up when property does, and that means that business premises cost more, an expense that gets passed on as higher prices for goods.

This can be viewed as the tail wagging the dog – by mimicking the consequences of a rise in inflation, the price of land creates a rise in inflation. Cause-and-effect are all messed up in economics – at least, when it comes to land prices.

A sudden spike in the value of land tends to be a relatively local thing, and so you get pro-inflation hot-spots breaking out here and there all the time. That’s happening right now to land around the still-under-construction second International Airport for Sydney – except where the land lies under the flight path of jet aircraft, where values have crashed. Those affected get only a modicum of sympathy from me; the project has been on-again-off-again for three decades now, and they knew the risks when they bought there.

Most of the time, those hot-spots get ‘leveled out’ to a large extent by the far greater lands where the prices are relatively stable, or even declining. Take a look at photos of the urban decay in Detroit sometime, and think about the property values. There are locations where the cost of demolishing a decrepit structure is more than the land is worth as an empty lot.

But these hot-spots arise as a confluence of random and non-random events – and, just as you can roll four sixes on 4d6 three times in a row if you roll for long enough, sooner or later those random hot-spots can combine into a national housing value boom. Once they start, there’s nothing that can be done about these, because house and land prices are as much about perception as they are any concrete, justifiable, valuations. All that can be done is to ride the whirlwind and wait for the inevitable crash.

Supply and demand also factor into the land-value equation; if something happens to limit supply, or greatly increase demand, inflation, and inflation of land values, are the inevitable results.

17. Vulnerability 7: The Price Of Goods

I’ve already touched on this, under the heading of the cost of materials. Consider these formulas:

  • Wholesale price of goods × units produced = (cost of materials + manufacturing + labor + packaging + distribution + marketing + overheads + loan repayments + savings) × (1 + profit margin/100) × (1 + inflation/100).
  • Retail price of goods × units purchased = (cost of sales + labor + warehousing + sub-distribution + marketing + overheads + loan repayments + savings) × (1 + profit margin/100) × (1 + inflation/100).

There are so many inputs into the final cost that the customer pays for his goods that anything and everything makes them go up. If any factor ever causes them to go down, that can be taken as additional profits or hidden as a ‘sales price’ because it will only be temporary.

And, again, that all sounds very inflationary, doesn’t it?

If modest increases happen in several – or all – of these factors, even though they are not of concern individually, they can aggregate and then amplify into out-of-the-blue spikes in retail prices. I’m talking rises of 30-50% in one step.
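
To see the aggregation at work, here’s a minimal sketch of the wholesale formula above in code – every dollar figure is invented for illustration, and the retail formula then chains on top of the result in exactly the same way, with its own margin and inflation multipliers:

    # A sketch of the wholesale pricing formula, with invented figures.
    def unit_price(costs, margin_pct, inflation_pct, units):
        """(sum of costs) x (1 + margin/100) x (1 + inflation/100), per unit."""
        return sum(costs.values()) * (1 + margin_pct / 100) * (1 + inflation_pct / 100) / units

    costs = {"materials": 40_000, "manufacturing": 25_000, "labor": 30_000,
             "packaging": 5_000, "distribution": 8_000, "marketing": 6_000,
             "overheads": 10_000, "loan_repayments": 4_000, "savings": 2_000}
    before = unit_price(costs, margin_pct=10, inflation_pct=4, units=10_000)

    # Modest-looking rises in just four of the nine inputs...
    for item, rise in [("materials", 1.20), ("labor", 1.08),
                       ("distribution", 1.30), ("overheads", 1.12)]:
        costs[item] *= rise
    after = unit_price(costs, margin_pct=10, inflation_pct=4, units=10_000)
    print(f"{100 * (after / before - 1):.1f}%")   # about 11% at the wholesale tier alone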

Most of the time, that’s just a blip, and while market share may be impacted, for the most part, things go on as usual.

Every now and then, however, some critical commodity will be impacted – be it ball bearings or electrical switches – and there’s an unexpected snowballing in the price of goods that are deemed essential by the consumer.

Most of the time, these impacts are just noise within the system, with a small overall upward trend. Like rolling 6d6-4d6 repeatedly.

Image generated (very quickly) using Anydice. The peak of the curve, as you would expect, is at 7.

Although most of the results will be scattered between, say, 2 and 12, and the occasional oddity might be between -1 and 15 but not in that core region, every now and then, you’ll hit the jackpot.
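
If you don’t have AnyDice handy, a few lines of code make the same point – the thresholds below are arbitrary choices of mine:

    # Simulating 6d6-4d6: mostly noise around 7, with rare extreme spikes.
    import random
    from collections import Counter

    def roll(n):
        return sum(random.randint(1, 6) for _ in range(n))

    results = [roll(6) - roll(4) for _ in range(100_000)]
    print("most common result:", Counter(results).most_common(1)[0][0])   # 7
    core = sum(2 <= r <= 12 for r in results) / len(results)
    spike = sum(r >= 16 for r in results) / len(results)
    print(f"within 2-12: {core:.0%}; 16 or more: {spike:.2%}")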

Inflation can come out of nowhere, when the stars align.

18. Vulnerability 8: The Price Of Services

The same thing can be said for the price of services. But there’s a bias that is often not taken into account.

In Australia, there was a time when a University Education was free; it was felt that the contributions an educated workforce would make to the economy would more than repay the investment in the future of our society.

Successive governments first undermined this system (because it was expensive) and then replaced it with a government-underwritten fee scheme – something similar to US-style college fees, except that the up-front costs are paid by the government, which then recovers the resulting debt from the student. There are some protections for the students – you only have to start repayments once your income hits a certain level, for example – but the fees have simply grown and grown since.

These costs obviously flow directly into the fees charged by these graduates for their services. The whole thing is directly inflationary, and at the same time, a drag on economic growth – a contradiction in terms that only resolves when you realize that we’re talking about different time scales.

University administrators love the scheme; by putting a flat fee on the price of a degree, they can charge international students up-front, easing their running costs. A Bachelor’s Degree costs AU$15K-33K per year, usually for 3 or 4 years. A Masters is two more years at AU$14K-37K a year; a Doctorate is a couple more at the same rate. Medical and Veterinary degrees are up to twice this amount. An MBA can cost as much as AU$121K all told.

As of 23 June this year – about two weeks ago – it was estimated that a total of AU$74.4 billion was owed by about 3 million Australians, an average of A$24,700 each. Any outstanding debt is indexed every year against inflation – balances rose 6.6% this past June 1. This encourages students to repay their debt voluntarily, before the income threshold is even reached.

The problem here is that incomes have not risen by anywhere close to the inflation rate – so ex-students are suddenly falling deeper and deeper into debt. In effect, 7-14 years of student productivity is lost to the economy repaying this debt.
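
To make the mechanism concrete, here’s a hedged sketch: the A$24,700 average and the 6.6% indexation are from above, while the salary, wage growth, and repayment rate are figures I’ve invented purely for illustration:

    # Indexed debt vs repayments that grow with slower-moving wages.
    debt, salary = 24_700.0, 55_000.0    # the salary is an invented figure
    years = 0
    while debt > 0 and years < 40:
        debt *= 1.066             # indexation, holding the 2023 rate constant
        debt -= salary * 0.04     # repayment as a 4% slice of income (invented)
        salary *= 1.03            # wages growing slower than the indexation rate
        years += 1
    print(years, "years to clear the average debt")   # about 16, with these inputs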

The more an ex-student can charge for their services, the more quickly they can discharge their debts. This debt is keeping graduates from home ownership, for example, and increasing the stress they experience and the drop-out rates in various professions, while at the same time pushing the services in question further out of reach of lower-income clients. The full social and economic impact has yet to be discovered, but the indications are not good.

The college system in the US sucks this money out of the parents’ finances, so the economic impact starts immediately instead of being deferred.

In effect, this is acting as an amplifier to inflation rates in a limited range of areas. It means that any sudden increase (however temporary) in inflation rate gets an additional boost. What may have been a manageable 4-5% inflation has the effects of a 6-7% rise.

It doesn’t matter what the cause is – if the cost of plumbers and electricians and doctors and teachers and the like goes up, it drives inflation beyond the increase itself.

19. Vulnerability 9: Corporate Greed

A favorite bogeyman of the left, and with some justification. It doesn’t matter if 99 out of 100 corporate citizens behave responsibly; that 1-in-100 can and usually does have a disproportionate impact.

Greed basically siphons money out of the economy and parks it where the greedy individual can profit from it but no-one else. Effectively, this is the same as not printing enough currency, which in turn raises the value of the currency – which, at first, might seem to be a good thing for reducing inflation.

The problem is that every dollar (or peso, or whatever) that does get spent in the rest of the economy then has to do more work than its face value would suggest – and that is inflationary.

On top of that, even the hint of an allegation of corporate greed is guaranteed to get workers agitating for a pay rise, for their fair share. Since the mid-90s, governments have linked productivity gains (which boost the economy) to wage rises; the problem is that since 1994-5, every 1% productivity gain has only earned a 0.8% pay rise. That other 0.2% has gone to business owners and shareholders.
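
Compounded over nearly three decades, that missing 0.2% adds up. A quick sketch – the 0.8 ratio is from above; the 1.5%-a-year productivity growth is an invented but plausible-looking rate:

    # Wages capturing only 0.8% for every 1% of productivity growth, compounded.
    productivity = wages = 100.0
    for _ in range(28):                 # roughly 1994-5 to the early 2020s
        productivity *= 1.015           # 1.5% annual productivity gain (invented)
        wages *= 1 + 0.8 * 0.015        # workers receive 0.8 of each 1%
    print(f"productivity: {productivity:.0f}, wages: {wages:.0f}")   # ~152 vs ~140
    print(f"wages have drifted {100 * (1 - wages / productivity):.0f}% behind")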

I have seen arguments that this is entirely justified – “After all, it’s their business and they deserve to get more if the business becomes more profitable.” The flaw in this line of argument is that they would get more anyway.

I’ll be the first to admit that the diagram above is exaggerated – it’s intended to illustrate the principle, not the reality.

The white bars are the total income of the business, adjusted for inflation. Unrealistically, it’s the same throughout. This is distributed into three areas – Profits, Wages, and Costs – colored yellow, green, and red, respectively. In row one, these are roughly equal (again, improbably).

Row two illustrates a 30% productivity gain, over several years.

Row three shows that if the proceeds of the productivity gain are shared equally between owners (profits) and workers (wages), everyone is equally better off. Of course, profits may be divided amongst one or many, but it’s unlikely to be as many as wages are divided amongst, so an individual worker will see less in dollar terms than an individual owner – but everyone gets 15%. At least in theory.

Row four, completely unrealistically, shows all 30% of the productivity gain being diverted into profits; the wages segment is the same size as it was in rows one and two.

The reality is that we are 1/5th of the way between rows three and four. As I said, this is exaggerated for illustrative purposes.

It’s critical to realize that industrial relations have as much to do with perceptions and mindset as they do with economic realities. What that one-in-100 business is doing affects that mindset, and that spreads far beyond the one business, creating an us-vs-them mentality.

If this sort of thing were the result of a single decision, it would be bad, and the results would be ugly. Because it has been the result of many wages decisions and agreements over many years, it has largely sailed under the radar, and there was not a lot of outrage – until the current ‘cost of living crisis’ made people search for every last cent they could find.

Those in other countries may have escaped that crisis – certainly, despite a lot of noise, the US economy seems to be loping along quite comfortably at the moment – but there remains plenty of evidence of the same problems there. Explosions in this area are only ever deferred, never avoided – and sometimes, are all the worse when they do finally explode because of the additional buildup of pressure.

20. Vulnerability 10: The Banking Sector

It’s an old story, best elucidated in the movie Sneakers.

“Posit: Someone starts a rumor that a bank is financially shaky. Consequence: people withdraw their money, and pretty soon, it IS financially shaky.” — from memory, so don’t be surprised if this is slightly paraphrased.

Banks are businesses like any other, and like other businesses, they can fail. The difference is that when banks fail, they take other businesses with them. Too many bank failures, or banks of a certain scale failing, can rot an economy from the inside.

And all that is before the impact on public confidence is taken into account – even though that can be the most damaging of all.

It’s fair to say that the biggest crisis in the US economy of late has been the failure of three banks in close succession. In each case, there was ‘good’ reason for the failure*, and the system that has been built up since the GFC did what it was supposed to do, protecting the consumers who banked with those institutions.

* – well, in two cases there were good reasons. The third case was different, and more a manifestation of perception over reality, magnifying a short-term problem into a terminal ailment. Which goes back to the opening comments of this section, I guess.

The banks are the heart of the economy, pumping money around. Once, there was some cushion – physical cheques in the mail – but these days it’s all done electronically, for the greater convenience of customers. That takes away a safety net – though, so far, that safety net hasn’t been needed.

Nevertheless, any failure – perceived or real – of the banking infrastructure can place an economy on life-support, no matter how robust it might otherwise be.

21. Vulnerability 11: The Stock Market

Let’s talk about the stock market for a minute, even though most of my knowledge of it comes from Trading Places.

In essence, it’s all about confidence. When people aren’t sure how much a stock will be worth tomorrow, they sell it – and usually put their money in something they think more secure. The more at-risk the stock, the greater the potential gains, but most such bets don’t pay off; the safer and more stodgy a stock, the less likely it is to pay a big return (but the more likely it is to pay that smaller return reliably).

When things look like they are going well, in economic terms, the stock market becomes bullish, and the traders who buy and sell stocks become more inclined to take a risk; when things look precarious, the opposite happens.

‘Gold Fever’ is a real phenomenon, and it has its modern-day equivalent on the stock market floor. If convinced that a company will deliver in the long run, investors are quite capable of throwing good money after bad – regardless of the reality. Equally, doubt and a lack of confidence can be contagious, driving down the value of perfectly acceptable companies.

Postulate a company, XYZ Tech (I hope it’s not a real one!). For the last three years, it’s traded steadily at $1 a share. But there are rumors of a big contract being negotiated, so today, it’s trading at $2.50 a share, and rising. There are two schools of thought: buy now, because it’s still rising; or don’t buy, because most of the good profit has already gone out of the transaction – instead, focus on the businesses that will go higher if the rumors are true, and on some that will rise if they aren’t, depending on how much stock you put in the rumors.

Tomorrow, the news breaks that the negotiations have failed, and the stock plunges to 60 cents a share. Those who bought at $2.50 lose a lot of money, that being the risk of the gamble. There are, once again, two schools of thought: one is that you should under no circumstances buy, because there’s no evidence that the slide is over, or that the company can even survive this roller-coaster. The other looks at those years of stable trading, and decides that the likelihood is that the stock will eventually stabilize at something close to that original $1 a share – buying now will not only shore up the stock price, making that a more likely outcome, it stands to return a two-thirds profit (or better) to anyone buying in at 60 cents.
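
Putting rough numbers on those two gambles, using the prices from the example:

    # Outcomes for the hypothetical XYZ Tech shares.
    rumor_buy, crash, recovery = 2.50, 0.60, 1.00
    print(f"bought on the rumor, caught in the crash: {100 * (crash / rumor_buy - 1):+.0f}%")   # -76%
    print(f"bought in the crash, if it recovers to $1: {100 * (recovery / crash - 1):+.0f}%")   # +67%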

Context is obviously all-important in choosing between these different perspectives. What else is going on that could impact share values, and shares of this sector and this business in particular?

The stock market thus provides a direct connection between confidence in the economy and decision-makers, albeit one that is hopelessly drowning in noise. Nevertheless, the long-term trend of stock markets is normally upwards (beware of the rare exceptions, however!). So it’s that whole 6d6-4d6 thing all over again.

There’s not much of a direct link between a stock market and economic health; it reacts, it doesn’t drive. But what it reacts to DOES impact inflation, and interest rates – by affecting profitability – DO impact the share market, driving some shares higher and some lower.

Viewed from another perspective, a stock market always reacts to events and to the perception of events, with foresight if at all possible. And that makes it a valuable guide to what an interest rate decision needs to be, in order to manipulate both the economic factors that are directly affected and the public confidence and perceptions that stem from them.

To what extent is the Great Depression directly attributable to the Stock Market Crash? And to what extent were both driven by other factors that they had in common, or domino-consequences of those factors? You can spend years unpicking the minutiae and still not be sure, because the ultimate connective tissue is the attitude and psychology of the time.

Which raises the question, “What were you Thinking?” to a whole new level.

22. Impossible Predictions?

By now, it should be easy to see why accurate predictions of inflation, and hence of interest rate needs, are all but impossible. The best that can be hoped for is to get an educated best-guess that’s not too far wide of the mark – and to shade your guesses based on recent events and underlying trends. “If in doubt, do what you did last month” is trite – but not as far wide of the mark as people might think.

The RPG Perspective

There are so many inputs into inflation and interest rate decisions that a GM can plausibly make these (and other economic indicators) do anything they want, within reason (I’ll come back to that caveat in a moment).

Understanding what the vulnerabilities of an economy are, and what the underlying contributing factors are, means that you can sound credible when asserting those decisions “made in-game by NPCs with the authority to do so”. This understanding also prepares the GM for handling any PC engagement with the economy – whether that’s building a castle or persuading the government to fund your Yiddish Space Laser Defenses.

But here’s the rub: these announcements don’t come at random intervals. There’s always a key reporting date, and a key meeting by the central bank or whoever it is that makes the decisions. Plausibility demands that the relevant trends, and their consequences, manifest a month or months earlier, even if you don’t draw attention to them. And it’s important to note any remedial action that will logically take place in consequence, too, and any direct consequences of such actions.

The in-game economy is completely in your hands. What you do with it is up to you – but before you start playing with the controls, it might be worth taking a moment to skim the instruction manual.

That’s what this post is: a generic user manual for modern economic conditions. Use it in good wealth!

In part 1:

  1. Introduction
  2. General Concepts and A Model Economy
  3. The Economics of an Absolute Monarchy (The Early Medieval)

In part 2:

  1. The Economics of Limited Monarchies (The Later Medieval & Renaissance)
  2. In-Game Economics: Fantasy Games

In Part 3:

  1. The Renaissance, revisited
  2. Pre-Industrial Economics I: The Age of Exploration
  3. Pre-Industrial Economics II: The Age of Sail

In Part 4:

  1. Industrial Economies I: The Age Of Steam
  2. In-game Economics: Gaslight-era

In Part 5, Chapter 1:

  1. Industrial Economics II: The Age Of Electrification & Motoring

In Part 5, Chapter 2:

  1. Industrial Economics III: War & Depression
  2. In-Game Economics: Pulp
  3. In-Game Economics: Sci-fi
  4. In-Game Economics: Steampunk

In Part 6, Chapter 1:

  1. The Pre-Digital Tech Age
  2. World War 2
  3. Post-war & Cold War

In Part 6, Chapter 2:

  1. Government For The People
  2. Aviation

In Part 6, Chapter 3:

  1. The Space Race
  2. Tech Briefing: Miniaturization
  3. Behemoths Of Blind Logic (early computers)
  4. The Promise Of Atomics
  5. A Default Economy

In this Part:

  1. Economic Realities (Inflation & Interest Rates)

Planned for parts 8+:

  1. Digital Economics
  2. Post-Pandemic Economics
  3. In-Game Economics: Modern
  4. Future Economics I: Dystopian
  5. In-Game Economics: Dystopian Futures
  6. Future Economics II: Utopian
  7. In-Game Economics: Utopian Futures
  8. In-Game Economics: Space Opera


Holistic NPCs: Creating Special Characters


Image by John Hain from Pixabay. Slight crop by Mike.

Because the last part of the Economics in RPGs Series ran to three instalments, I’ve decided to throw in an extra non-series article before continuing with it next week. Fortunately, I had an idea on tap.

The Holistic NPC

“Holistic” essentially means ‘complete’. The term derives from Holism, which is a philosophic notion that focusing on specific aspects of something doesn’t convey a complete understanding of the whole, no matter how perfect the understanding of those aspects – the interconnections between them mean that the whole is literally more than the sum of its parts.

Medically, the term refers to the treatment of a person as a whole, regarding the interplay of conditions, rather than just the symptoms of an illness, specifically incorporating mental and social factors.

So a Holistic NPC is one that is more complete, more rounded, more comprehensive than is normal. Today’s article provides and describes a process for the development of such an NPC.

Clearly this is not something that should be utilized routinely. It requires considerably more work and effort than most NPCs warrant, and should only be used when that effort is justified. My go-to for most NPCs remains the Partial NPC (see Creating Partial NPCs To Speed Game Prep).

The good news is that you can feed the results of just about any other NPC-generation process into the Holistic procedure as a starting point.

1. Central Focus

The process starts by defining the central focus of the NPC. This might be a particular ability or professional skillset that is defined by their intended role in an adventure, or within a campaign. It might be a particular personality trait. It might be a particular professional role, or even a specific weakness of personality.

This is the most important element of the character from the point of view of the campaign. It will serve as a focal point around which everything else will revolve. It’s critically important to get it right, and not to choose the first thing that jumps into your head.

I have often found that a singular adjective and a single, specific noun, work best, but that’s not always the case. It’s usually a good starting point, however.

2. Inevitabilities

With almost every central focus, there are traits, skills, and characteristics that come as baggage; these are essential to match the NPC with the assigned role. Sometimes, these are more obvious than others.

Nuance can be incredibly important in defining something as an inevitable corollary of the central focus. For example, if we’re discussing a Priest, “Pious” and “Religious” are not the same thing – one describes a personal philosophy or central belief in a faith, the other refers to the practice of behaviors that are commonly associated with such belief. That practice can be the result of such beliefs, or it can be a cloak, superficial trappings.

3. Manifestations & Consequences

The third step is to take the entirety of the world around the character and contemplate the interactions between that world and that character in three broad areas. The goal is (1) to define the ways in which the focus and the inevitabilities will manifest, and (2) to define the consequences for the character.

Note that so far, what we’re producing is essentially a cardboard cutout of the character defined by the Central Focus. The fact that such variety of focus is possible creates somewhat greater variety of 2-dimensional characters than would otherwise be the case, but be under no illusions – the real effort is still in front of us.

That said, let’s look at those three broad areas.

    Background

    Background refers to the current status of everything except the character and his family. Everything in the game universe – whether that’s practical and objective, or conceptual, abstract, or subjective.

    A “cop” means vastly different things in a street-level superhero campaign, a cosmic thriller, and a post-apocalyptic wilderness.

    Before you can place the character, you need to know what these surroundings are, and what they represent within the campaign. What part do they play, and how does this character’s role intersect with them? If there’s anything for which you can honestly say, “It doesn’t”, then that’s an irrelevancy that defines that part of the background as something to ignore so far as this character is concerned. Only relevant background applies, in other words.

    Culture

    A Culture is not the same thing as a society. Rather, it’s the context within which a society exists. For most characters, there will be a broader culture, and frequently, a number of sub-cultures that are applicable.

    What expectations of behavior, and what general perceptions of such characters, apply to someone with this particular focus?

    Society

    Society defines the rules of interaction between individuals – everything from marriage to criminal acts and their punishment.

    Again, ignore anything that’s not directly relevant, but annotate this section with anything that does pertain to the character.

Once you have the three broad areas populated with the relevant constituent elements, it’s time to focus on how the character will manifest, and what the consequences will be for the character, for each item listed within each of the three areas.

These are signposts to the character, sometimes useful in and of themselves for defining traits or circumstances, but more important when treated as facets of the whole. The goal is to generate a brief description of that ‘whole’.

An example to get the mental wheels turning over: contemplate an ‘honest cop’ within a society in which the police are generally viewed as lazy and corrupt.

4. History, Family, & Shaping Events

Personality traits don’t emerge from nowhere, career decisions are rarely made flippantly, and it is exceptionally rare for someone’s first job to be the same as their current job – even if they still have the same job title.

The goal in this stage of the process is to take every trait, characteristic, and fact determined thus far and trace them back to a causative trigger. There may be a number of steps in between the ultimate cause and the current situation; don’t be satisfied with simple answers. These should be classified into one of three broad phases of life – Childhood & Family; Youth & Education; and Vocation & Career To Date (which includes any vocational training).

    Childhood & Family

    Childhood friends, parental figures, other relatives, and family friends all normally have some impact on the life of the NPC as a child. Sometimes this impact is a positive one, sometimes it’s negative, and sometimes it’s a mixture of both.

    There are two major reactions possible to each: either adoption as a guiding force or principle, or rebellion. The greater the negative impacts of alcoholism by a parent, for example, the more powerful a driving force sobriety becomes – and that may not mean complete abstinence, it could simply mean a determination to retain self-control.

    Youth & Education

    Beyond the early formative years, there are the years in which some responsibilities are conferred on an individual but they are nevertheless not free to make decisions for themselves in many parts of their life. Western societies tend to label this period adolescence; when universal education is part of the society surrounding the character, it generally coincides with exposure to external role models and to life experiences with educators.

    Fantasy games tend to be set in a more medieval society, in which characters serve as footmen or apprentices. There is frequently an assumption that apprenticing to a trade locks a character into pursuing that trade, or some offshoot of it, and any deviation from that course is traumatic, disruptive, and a personal milestone event.

    One of the great takeaways from Magician by Raymond E. Feist was the concept of a pre-apprenticeship, in which youths get shared around as general laborers amongst the different trades so that aptitudes and attitudes can be assessed; those with an affinity for a particular ‘trade’ are then taken on as apprentices within that trade. Each year, any given trade only has so many vacancies to fill, so the naturally gifted tend to get chosen first while the mediocre sweat it out. I’m not suggesting that this system is, or should be, universal; but the need for this (or some equivalent) is so obvious that it should be a central element of the society.

    Vocation & Career To Date

    What led the character to the profession he or she now holds? What experiences did they have along the way that have shaped their capabilities, attitudes, and reputation? What training did the NPC have, who delivered it, and how did what the NPC learned shape him or her into the future? What natural gifts did he or she possess that have aided them in this vocation, and what gifts or traits occasionally lead them to contemplate some other path in life? Finally, what traits had to be overcome in order to succeed, and how did they learn to do that (if they did)?

Once a catalog of formative influences and critical events has been compiled, this step is completed by projecting the character’s life story forward from the critical moment to the present day and the character’s current circumstances.

Some influences play a part in the character’s development and are then superseded by a differing influence, as the character grows as a ‘person’. These mark distinct phases of life for the character, and these are critical in delineating the character’s personal story and their personality. Very few of us do not experience some such moment of transition; some of us are unlucky enough to experience several. The transitions are always moments of great personal growth, and critical to defining the character as they are now.

This approach has the virtue of ensuring consistency within the character’s life story even if the end result appears to be a mass of contradictions.

    An Abstract Representation

    It can sometimes be helpful to view this stage of the process through a more abstract perspective, so I thought I would offer one as a tool.

    The triptych above represents the process as something similar to a spiderweb of straight lines.

    Panel 1

    The first panel starts with the ‘now’ of the hollow circle (the central focus) at the bottom and traces it back through the character’s past to a critical incident that shaped the character’s decision or destiny to become the central focus. Every time that critical incident caused the character to make a decision, it is represented by a change in direction and a marker that indicates an intermediate stage of development, a personal history milestone.

    Panel 2

    The second panel starts back at that formative event, and traces other impacts of that and the other secondary milestones. In the process, two personal crisis moments (shown in yellow) are identified, moments when two or more of the character’s values came into conflict.

    These are interconnected insofar as the second is a conflict between the consequences of the first and another key aspect of the character’s primary focus. If the character’s path had changed direction at either of these incidents, that’s an example of a “slippery slope” in which an avalanche of past decisions begins to accumulate and will eventually threaten to overwhelm the character.

    Note that such an avalanche is not necessarily a negative – it might be a crisis of conscience in which a villain reforms, at least partially. The first crisis might be an act that the NPC was required to perform and that he came to regret; the second is a moment when he second-guessed his duty or task because of the regret. Again, he did whatever it was that he was being paid to do (no change in life-course) but this only compounded the regrets and – no doubt – doubts would begin to emerge as a consequence.

    These can be considered secondary elements, subordinate only to the primary focus.

    Panel 3

    Panel three examines each of the critical moments and whether or not there is a direct consequence of that decision that is unaccounted for. These are often fringe issues to the central focus, and can be considered tertiary to the other manifestations and influences generated in preceding panels.

    For the first time, this takes the character beyond the simple cardboard cut-out, exploring the penumbra of the formative decisions of the character’s past and the ramifications of those past decisions on the character’s present.

5. Personal Consequences

Step four defined other aspects of the character’s personality by way of past decisions and formative events. These should all have consequences in one or more areas: Relationships, Financial Status, or Social Status.

What’s more, each of these consequences should also manifest in one or more of the areas of initial development – background, culture, or society.

One of the formative personal events may have led to the character entering an unhappy marriage, for example (a relationship impact) – that could have ramifications on the character’s role within the general campaign background, though that’s relatively unusual; but it is far more likely to have an impact on the character’s role within his culture and society, in the form of obligations and expectations.

    Relationships

    Relationships include spouses, children, employers, employees, personal contacts, friends, allies, and enemies. Despite the breadth, only those relationships that can be deemed essential to understanding the NPC should be listed, or those which signpost an aspect of the character’s personality or ethos.

    These are important because in any investigation of the NPC, these are the indicators that reveal – at least in part – what sort of person the NPC is. Such an investigation might never connect with the events that shaped the character’s thought processes; why they did what they chose to do is not especially relevant, what matters is any impact on future decisions.

    Financial

    Many decisions will have financial repercussions.

    Continuing the unhappy marriage example, it might be that the marriage was necessary to gain access to a business opportunity. That means it had a positive effect on the character’s finances, as well as the negative effect of having to share any prosperity for as long as the character remains married.

    Should the marriage strike rocky ground, the financial consequences could be dire, or it might be that the character needs to undergo a (presumably bitter) divorce and make a fresh start in order to take advantage of future opportunities. It wouldn’t be the first time that an opportunity steered someone into a personal or professional cul-de-sac.

    Social

    Many decisions compromise a character’s social engagements, and this is often a factor that is not taken into account when assessing the costs and benefits of a decision. I know one person who got married and was forced to give up RPGs as a result – the wife did not understand them, or what the player got out of them, and made it a choice: her or the games. After a while, she relented to the extent of permitting him to play board games with other people who played RPGs, but he never got back into the hobby, even after they were divorced.

    You can never go back again to exactly where you were after events like this, as the example demonstrates. The more traumatic the events, the greater the loss; other social activities tend to expand to fill the resulting free time, and the person undergoes a personal evolution as a result.

    The more quickly the reversal takes place, the closer to ‘the way things were’ the character can get. Married in Vegas as the highlight of a drunken weekend? Divorced on the Monday, when you came to your senses? Relatively little impact – unless the new partner makes trouble, of course. Married thirty years earlier? The entanglements ensure that a divorce will be traumatic and expensive.

6. Causes & Contradictions

Step six is to identify and catalog the things that the character believes in, and any contradictions within his persona.

Note that the layout of the graphic emphasizes that these are more removed from the Core Focus of the character concept.

As usual, there are three general subdivisions.

    Passions, Addictions, & Interests

    What is the character passionate about? What does he do habitually – whether he needs to, or not? What subjects and activities interest him?

    These need not have any relationship whatsoever with the central focus of the character. Quite often, the less they relate to that focus, the more noteworthy they are.

    I once knew an English major who liked to solve differential equations as a way of ‘loosening up his mind’, distracting himself from the world around him and whatever his personal circumstances were so that he could achieve maximum creativity. He had me write a random equation generator app for his laptop when we were both at University.

    My sister loves “True Crime” stories, for no particular reason. They have nothing to do with her career or family. My niece’s favorite color is Purple. No identifiable reason, but it’s a defining characteristic of who she is. My brother and I are both interested in Formula 1, but for very different reasons – I like the tactics and engineering, he likes the excitement and drama. He enjoys going to F1 races as a consequence, while I prefer to watch on television.

    Prejudices

    Everyone has prejudices, even if they are nothing more than opinions formulated on an encounter with one individual that has been generalized.

    Sometimes, these prejudices take the form of a receptiveness, a greater willingness to take a chance that interactions with an individual won’t be a waste of time – my Twitter feed shows clear evidence that anyone involved in TTRPGs, Sci-Fi / Fantasy, or Photography / Art is very likely to get a ‘follow’ from me without further inspection; accounts that do not fall into this category are subjected to far more stringent examination.

    Another manifestation is a dislike of certain kinds of behavior that the individual considers socially or personally unacceptable – berating a partner in public, for example.

    But everyone has prejudices.

    Weaknesses & Mistakes

    There can be, at first glance, considerable overlap between this section and the other two. ‘Weaknesses’ generally refer to behaviors rather than to “Kryptonites”, if you get what I mean. Someone who can never resist the last slice of cake, or the last little leftover bit of dessert. It’s not an addiction, it’s not something the character obtains deliberately – it’s something that outside circumstances regularly offer them.

    ‘Mistakes’ also come with a caveat – these don’t include any of the formative events listed in stage 4 of the process, but they do have to be influential on the character or their circumstances. You would have to be a saint not to have made mistakes in the past, whether acknowledged or not.

The other content to be incorporated in this section is anything that stands as an unresolved contradiction to the general nature of the NPC as already described. Someone who is extremely safety-conscious, but loves to drive fast. Someone who is financially fastidious (even miserly) who indulges in collecting something (regardless of cost or monetary worth).

Some people are a bundle of conflicts and contradictions; most are more consistent. But we all have exceptions, hot buttons that override our normal behavior.

7. Ramifications, Manifestations & Consequences

Of course, everything that got listed in Stage 6 is rooted in the character’s past. So the next logical step is to explore the impact that they have on the character’s present-day personality and circumstances.

The subheadings in this phase map directly onto those from stage 3 – in fact, you can simply expand that section if you want, though I find it more useful to keep them separate, so that stage 3 represents the direct consequences of the Core Focus. If there is ever a contradiction or conflict between two aspects of the character – one in this section and the other in the manifestations / consequences of the core – it’s usually the core that holds sway, perhaps modified slightly.

Just to remind you, those three sub-fields are:

  • Background
  • Culture
  • Society

Their definitions haven’t changed since Stage 3, so I’ll forego repeating them.

8. Correlations: The Origins Of Beliefs

Stage 8 of the process is to take all the entries in the “Causes & Contradictions” category and use them to further populate the “History, Family & Shaping Events” category. What this means will differ depending on the type of content in the “Causes” subcategories.

For “passions” (and “interests”), where did the character first come into contact with the subject? And how did it impact on, and interact with, their other formative events?

A passion for art might have started as a mild interest in drawing, but harsh dictates by a parent who saw it as ‘wasting time’ restricted its expression. What might have been a passing phase or minor sideline grew, through lack of satisfaction, into a more obsessive interest – probably manifesting in a completely different manner to the original, such as being a collector of artworks.

For Addictions, the key question is when the character was first exposed to the substance or practice.

One can argue that Weaknesses were always a part of the character’s makeup, but they have a strong potential to interact with critical decisions, often to the character’s detriment.

Mistakes generally translate into specific incidents in the character’s past. It’s often not the event of the mistake itself that is significant, however, but the character’s awareness of having made a mistake. Nevertheless, it can be useful to incorporate both, with a link connecting the two. Even that impact can be delayed, if the character was not in a phase of life that encouraged or permitted introspection, so there can even be a third step in the causal chain before the original mistake becomes important to the character’s development.

Contradictions can require a deeper consideration – when did they start, when have they contradicted something in the primary makeup, and how was that contradiction resolved without eliminating the contradiction and without it being a formative event critical to the core focus? Sometimes, these answers are easy; sometimes, you will need to look deep into the character’s psychology, and perhaps even conclude that on this subject, they are deluding themselves.

9. Correlations II: Consequences

Of course, once you have new entries in the “History, Family & Shaping Events” category, you need to process these into consequences, in the same way that you did in Stage 4. Since the process is exactly the same, I’ll forego additional details.

10. Interpolations

The interpolations stage is two-fold:

  1. Take everything that’s listed in the Personal Consequences space and interpolate connected entries in the Manifestations & Consequences space, and,
  2. Take everything that’s listed in either of the Manifestations & Consequences spaces that is not already connected to Personal Consequences and interpolate additional entries in the personal consequences space.

It is these connections that are ultimately the difference between this process and other NPC generation methods. Throughout this procedure, the emphasis has been on how two discrete elements of the emerging character concept interact and relate to each other. Put that together with all the elements of the character, and ensure that the impact on the character is front-and-center, and the end result is a more holistic definition of the character.

In particular, emphasizing the connections ensures that the character is internally consistent, even those elements that are contradictory.
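
If you keep your prep notes digitally, the whole structure reduces to a handful of named lists plus the connections between their entries. A purely illustrative scaffold – the field names simply mirror this article’s stages:

    # Illustrative only: one way to file Holistic NPC notes digitally.
    from dataclasses import dataclass, field

    @dataclass
    class HolisticNPC:
        central_focus: str                                          # stage 1
        inevitabilities: list = field(default_factory=list)         # stage 2
        manifestations: dict = field(default_factory=dict)          # stages 3 & 7
        shaping_events: list = field(default_factory=list)          # stages 4, 8 & 9
        personal_consequences: dict = field(default_factory=dict)   # stage 5
        causes_contradictions: list = field(default_factory=list)   # stage 6
        connections: list = field(default_factory=list)             # stage 10: (entry, entry) pairs

    npc = HolisticNPC(central_focus="honest cop")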

11. Concluding Stage

Of course, it’s not entirely in a user-friendly format at this point. You have perhaps a page of conceptual elements, classified and categorized in various subcategories. You’ve ensured that everything correlates and interconnects as it would if you were describing a real person, and documented those correlations and connections in two or three additional pages, also within the same categories and subcategories.

The final step is to take each of these lists and combine them into a brief narrative description. In particular, you want to generate a thumbnail perspective of the character’s past history, of the primary focus and inevitabilities, of the way everything manifests outwardly, and what the personal consequences are for the character.

The result is purely conceptual in nature. It is a road-map to the expression of the character in game mechanics, and – as a general rule of thumb – should override any such mechanics that don’t fit. If the concept demands a character who is a skilled negotiator, with a particular approach to such negotiations that breaks through entrenched positions to get results, and there are no game mechanics for such negotiations, either let the approach always play out in roleplaying terms, or introduce such game mechanics (since it’s clearly a hole in the system).

NEVER let the mechanics restrict or interfere with the character concept that you have so clearly crafted; unless you have been completely myopic and created a Saint or a Devil, your creation will have inbuilt checks and balances, flaws and the potential for misjudgments and errors, and that’s all you need in terms of restriction.

It’s a different story for PCs, where there may need to be game mechanics restrictions as well as conceptual ones; if this process is used for the creation of PCs, the “always play it out” option is off the table, simply because there will always be a gap between the player’s skills and abilities and those of the character that they control.

Postscript: Full-sized Graphic

To fit everything in, this had to be a large graphic (1807 × 1723), and almost 4Mb in size. This in turn made getting everything to be legible when compressed to Campaign Mastery’s display footprint a challenge.

Because the final graphic incorporates everything from all ten stages, I thought it best to overcome the latter problem (as usual) by providing a full-sized version.

Click through to open it in a new tab, from which it can be downloaded – complete with spelling error!


The Power Of Basic Utilities


Image by M W from Pixabay, cropped by Mike

Today I’m going to tell readers about something I’ve been working on a lot over the last few months, because it highlights an important principle – you can usually do a lot more with basic tools and utilities than you think.

A Recurring Pattern

Every now and then we seem to get waves of tools and utilities emerging, as though in support of the premise, “One Task, One Tool for that Task”. It happens in real life, it happens in the world of computer software, and it happens in RPGs.

What usually happens is that someone decides that they need to perform a particular task and either none of their existing tools and utilities will do the job, or none of them will do it in the way that the person wants the job done. So they write a Utility to do that job for them, and then make it available to others.

Eventually, that functionality gets incorporated into a larger application, and the Utility gets abandoned and fades away – if the incorporation is successful. Sometimes, the developer of the larger application can’t do the work cost-effectively or without hitting legal trouble, and they will buy the rights to the Utility from the original developer – creating an incentive for that developer (and others like them) to go write another one.

I wrote about the Office Suite that I use in The Braiding Of Plot Threads, more-or-less in passing. For some reason, it’s not a subject that I’ve written about directly – maybe because what I use might not help others, or because I have a fairly eclectic selection that’s built up over the years.

I might have to do something to correct that deficit in the near future…

RPG Equivalent

The RPG equivalent is writing a standalone mini-supplement to solve a particular creative need, and offering it either as a free download from somewhere or through someplace like RPGNow. These succeed because they plug a hole in one or more game systems. When a new version of those game systems gets written, the best of these solutions get incorporated into the revised mechanics, or the authors put in place an ‘official fix’ for the hole – either way, the standalone product goes away.

The reality is that people often already have all the tools that they need, they just don’t want to – or have time to – do the work, or they don’t know exactly what their existing tools can do.

The back-story

The players in my superhero campaign have chosen, as their new base of operations, a Mansion in Royal, Arkansas. The process of making this their Home-away-from-home and a functional base of operations is now underway.

Meanwhile, I have been working on implementing their decision into the campaign in various ways, effectively converting the reality of Arkansas as it now is (because that’s what the internet gives me access to) into the fictional post-Ragnarok 1986-era version of Arkansas that exists in the campaign.

I want to admit up-front that there is absolutely nothing stopping me from inventing things ad-hoc as needed – except that spur-of-the-moment creativity is rarely as effective as putting a little time and effort into things in advance.

Purposes

One of the things that I wanted to do was to create a sortable, searchable database of businesses and interesting localities within ‘adventuring range’. The latter is defined by the parameters of this phase of the campaign, and can grow and change in time.

Originally, my intent was simply a list of townships, but it quickly grew from that basic concept. If the PCs needed a set of four different banks in their local vicinity, for example – a need that I both anticipated and that has now materialized within the campaign – having a list of the ‘real’ banks in their local area would save me a lot of work. If they find that they need a particular type of consultant, or a particular chemical, or whatever, I wanted to know where they might look for that resource.

And, I wanted a spur to my creativity in coming up with adventures.
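
As a hedged illustration of what ‘sortable and searchable’ can mean in practice – the file name and column layout below are my own invention, not a description of the tool I actually used:

    # Filter a CSV of entries by category and/or free text, nearest first.
    import csv

    def lookup(path, category=None, term=None):
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))   # columns: name, category, locality, miles
        if category:
            rows = [r for r in rows if r["category"] == category]
        if term:
            rows = [r for r in rows if term.lower() in r["name"].lower()]
        return sorted(rows, key=lambda r: float(r["miles"]))

    # e.g. the four nearest banks: lookup("localities.csv", category="Bank")[:4]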

Data Acquisition

Before you can create a database, you need the information with which to populate it. Of course, you need a preliminary design so that you know what information to look for, too!

In this case, what I wanted was a list of targets.

    Basic Parameters

    The PCs’ operations meant that road distance and driving time were all that would be relevant for the most local business operations – Royal and the nearby Hot Springs, basically. Outside of that, I would also need straight-line distances.

    Anything and everything within about 100 miles of their base of operations was fair game.

    I decided fairly quickly to exclude churches, temples, and the like, because none of the PCs is especially religious and most of the adventures involving such were becoming stale. But businesses and retail operations and landmarks were fair game. Later, I revised those parameters to specifically include graveyards, because I had a specific idea regarding them. I also excluded hotels and wedding venues and the like; the key intention was to list the suppliers that the PCs might want to buy from, and the services that they might want to employ.

    Scope

    Arkansas is a little like medieval France – some bigger places (but none as big as you would expect) and a huge number of much smaller places. In fact, if you count suburbs as ‘towns’, you get the sense that there is one every 10 miles or so, on average. If each location has between 3 and 50 businesses, how many does the zone of interest run to? How big a list are we talking about?

    What I’ve essentially defined is a circle 100 miles in radius, with an excluded zone in the middle of 10-16 miles radius, and what I want to know is how many more 10-mile radius circles I can fit into it. Multiplying the answer by 3-to-50 gives some indication of how many entries to expect.

    100 miles radius = 100 × 100 × π = roughly 31,416 square miles.

    10 miles radius = 10 × 10 × π = roughly 314.2 square miles.

    16 miles radius = 16 × 16 × π = roughly 804.3 square miles.

    Difference = 30611.7 to 31101.8 square miles.

    Divide by 314.2 = 97.4 to 99.0 localities. Call it 100 for convenience.

    Multiply that by the estimate of 3-50 data points per locality, and you get 300-5,000 entries.

    But this is very sensitive to the initial inputs, especially the gap between localities and the number of entries per locality. If the true gap is, say, 8.5 miles, then I get 30611.7 to 31101.8 divided by 226.98 square miles = 135 to 137 localities. Call it 136. If the number of data points per locality is more like 10-100, that’s 1,360-to-13,600 entries.

    Realistically, I expected 1000-2000 entries.
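
    If you’d rather let a few lines of code do the arithmetic – and test how sensitive the estimate is to its inputs – here’s a minimal sketch in Python of the same calculation (same numbers as above, nothing more):

```python
import math

def estimates(outer_r=100.0, excluded_r=(10.0, 16.0), gap_r=10.0,
              per_locality=(3, 50)):
    """Estimate locality and entry counts in the annulus of interest."""
    outer_area = math.pi * outer_r ** 2
    claim_area = math.pi * gap_r ** 2   # each locality 'claims' one circle
    for r in excluded_r:
        localities = (outer_area - math.pi * r ** 2) / claim_area
        low = round(localities * per_locality[0])
        high = round(localities * per_locality[1])
        print(f"excluded {r} mi: {localities:.1f} localities, "
              f"{low}-{high} entries")

estimates()            # ~97-99 localities, roughly 300-5,000 entries
estimates(gap_r=8.5)   # ~135-137 localities
```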

    Approach

    The basic approach to acquiring those entries was simple: call up Google Maps, locate the position selected for the mansion, measure out a straight-line distance of 141 miles (a right-angle triangle with two sides 100 miles long gives a hypotenuse of 100 × √2 ≈ 141.4) at a 45-degree angle to get the corners of a box that’s 200 × 200 miles and centered on the zero point. Then start listing localities.

    Once I have that list, I can zoom in on each and create a list of the entries of interest.

    Map

    As always, shrinking a full-screen capture down to fit the available visible space at Campaign Mastery remains a challenge, especially if legibility is to be retained. The top part of the image shows the whole window, resized to fit; the bottom is an extract from the source at something close to actual size.

    Creating the list is just a matter of being organized and systematic.

    List

    I ended up with 343 localities, and that’s without treating suburbs independently. That’s very bad news because it means that the average gap is nowhere near 10 miles. Doing the reverse calculation – square root of (31101.8 / 343 / π), I get 5.372 miles between localities.
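
    The same check in code form – again, just the arithmetic above:

```python
import math

# 343 localities found in ~31,102 square miles of annulus:
print(math.sqrt(31101.8 / 343 / math.pi))   # 5.372... miles
```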

    But that’s okay – if the number of localities is 2-3 times as many as expected, I can simply list only 1/2 to 1/3 of the entries for each location. A little more selectivity and a little more ruthlessness would keep the job to manageable proportions, or so I thought to myself.

    Here’s a sample of what the resulting list looks like:

      Chappell Armory (Army Reservist Training Center)
      Camp Joseph T. Robinson aka Camp Robinson (Army Reservist Military Base)
      Arkansas Storage Centers (Self-Storage Facility)
      Smackies Grill (Discount Restaurant)
      S&V Renovations LLC (Remodeller)
      Family Dollar, 5613 MacArthur Dr (Discount Store)
      Levy Concrete, 5613 MacArthur Dr, Crystal Hill (Discount Shop)
      Dollar General, 6700 MacArthur Dr, Crystal Springs (Discount Shop)
      Two Men and A Truck, 4125 Crystal Hill Rd Ste A, Crystal Hill (Removalist)
      Smith Campers (Campervan and Caravan Dealer)
      Nicky Houses Cleaning (Janitorial Services)
      Ed’s Corvette Body Repair (Auto Body Shop)
      Hatchet Jacks Sport Shop (Fishing Store)
      Taco Mexicano (Fast Food – Mexican)
      Skyline Automotive (Car Repairs & Maintenance Service)
      Laffert Equipment Manufacturing (Automated Systems Manufacturer)

    After each entry, using copy-and-paste, I then added a line for the data to be gathered:

          x miles, x min, x km (x miles) straight line

    I ended up with about 1200 entries to process. Sounds good, right? Well within estimates.

    You may also observe that I added a description of what the business was or did. In a lot of cases, these were obvious; in others, I was greatly assisted by the fact that if you hover your pointer over an entry, a pop-up appears within the map that contains additional information about the place – sometimes an address, sometimes a photo, sometimes a description. I soon came to the conclusion that most of these were provided by the businesses themselves. I have an illustration of that, but I’ll save it for a few minutes.

    Map Scale

    I was, in fact, close to completing the task when I had my graveyard brainwave and decided to add them to the list. After all, how many could there be?

    If those sound like famous last words… they were. The reality was that I had to zoom in closer than the 1 km-per-inch scale I had been using in order to see most of the cemeteries on the map.

    And when you do that, you discover that there are a lot of businesses that simply aren’t shown until you get in close enough. Important businesses, ones that should be included.

    Long-story-short: The list of entries now has 2837 items. I keep finding more to add. About 300 of the additions are cemeteries, and about 1500 are places that I overlooked on my initial sweep, by using too small a scale. Ultimately, I went to a scale of about 200 ft per inch outside of urban areas, and 100 ft per inch within urban areas. This still won’t show everything – most shopping centers are represented by only one or two entries out of maybe 50 stores – but it’s good enough.

Data Processing

But that discovery still lay in my future as work got underway.

    Basic Tools

    I want to make it clear how basic the tools were – pencil and paper to start with, then a text editor (the same one that I use to write adventures and Campaign Mastery articles), and my web browser.

    Windows Arrangement

    The screen capture above shows the basic layout that I employed: text editor on one side, at a fairly narrow width, and browser window with map and navigation panel on the other. I had no control over how much screen real estate Google Maps used for its navigation panel (visible on the left-hand side of the browser), I needed a reasonable amount of space for the map itself, and whatever was left was used by the text editor.

    I quickly discovered that if I chose a text-editor window width that left the last “x” – the one in “(x miles) straight line” – at the end of a line, I could simply use the “End” key to get there, and repetition made even the smallest time saving worth the effort.

    I had three additional factors to contemplate as I went.

      Correcting for 1986

      Back in ’86, there were nowhere near as many foreign-car dealerships. There were a few, but nothing like there are now. There were no genetic testing labs, and medical imaging was far more primitive – and less common. Instead, we had video rentals, and record stores, and so on. Every time I found a business that looked like it didn’t fit the 1986 era, I would replace it with something era-appropriate that isn’t around in modern times. Even so, stores like Blockbuster are probably under-represented.

      Correcting for Campaign

      I also had a few business operations that never existed in real life but that either had been, or should be, established within the campaign. Entirely fictitious creations, but I actively looked for ways to insert the Campaign Continuity into the database.

      I added things like a Kzin spaceport / launch & landing facility, for example, and thought about crew needs and human proclivities. That’s why one fast-food joint of an over-represented type became “Cardinal Landing”, offering Americanized Kzin cuisine for the curious and the real thing for the benefit of the crews; a real pet-grooming establishment became “Kitty Pride Fur & Grooming”, a Kzin grooming and clan-marking establishment; and a generic gift store became “Kzin Imports” – no prizes for guessing what they sell, both wholesale and retail!

      Rebuilding after Ragnarok would have created considerably more builders, electricians, plumbers, etc., so I deliberately over-represented them on my list. But most of these would now be starting to struggle – most of the rebuilding has been done, and the world is starting to return to whatever passes for “normal” in a world in which Ragnarok happens – and is stopped by superheroes.

      Sparks Of Inspiration

      The third and final factor to be taken into account is outright inspiration. A business that didn’t exist in the fictional 1986 could become something far stranger – sometimes with a name change, sometimes not – simply because it offered opportunities for adventures. A “Geo-chemical & Genetics Research Lab,” for example. An “Exotic Meat Butchery”. A “Genetic Engineering Company”. And so on.

      For example, when I went to screen-capture one of Google Maps’ popup descriptions – I wanted one that was as complete as possible – I came across this:

      “Omega Technical Violator Center”, described as a Prison. It’s NOT on my list of entries (yet).

      I can well imagine that in real life this is a facility for low-risk, short-term prisoners (unpaid parking tickets, anyone?) – I didn’t bother to look it up, because the name itself, and the descriptor “Prison”, is suggestive of something completely different in a superhero world. This is now a high-tech super-max prison for supervillains – a very different interpretation of the term “Technical”!

    Doing The Research

    So I took the whole-of-screen capture presented earlier and added some graphics to highlight the process of completing the data capture.

    Don’t worry, I’ll be zooming in and going into specifics – this is just to give you an idea of the overall workflow, showing it to be a six-step process.

    I’ve divided the work up into full-page blocks that take between 60 and 90 minutes to complete. If you look closely at the Text Editor, just after the blank space, you might just be able to make out a “25” – that tells me that I have 25 full-screen lists to go.

    And if you look at the Greeked-text whole-of-document panel on the right-hand side of the editor, you can get a sense of just how big that 25 pages is, compared to the work already done (perhaps that should be, “how much has already been done, compared to that 25 pages”).

      Acquisition, Step 1

      Step one only has to be done at the start of a session.

      I highlight the address of the mansion in the text editor, copy it, and paste it into the Map Search.

      Acquisition, Step 2

      For the first item, I then add ” to ” and then the name of the business to be located to the search. Google maps will then present me with one or more alternative interpretations; I pick the right one.

      The search box splits into two, one for the start point and one for the chosen destination, both already filled in for you.

      On subsequent searches, the two separate search panels already exist, so I just have to click in the box and over-type what’s there with the next name.

      Sometimes, you have to specify the street number and name (if they are known) or the locality (if known) in order to find the business. I didn’t know that at first, but once I discovered the fact, I started noting the (incomplete) address when I could get it from the hover-over described above – that’s why some of the unprocessed entries already have the location listed.

      Acquisition, Step 3

      Google also (usually) adds the address, which I then type after the name in the text editor.

      Google is not always very good at getting the locality right – it defaults to the nearest big settlement, not the actual location. The same is true of suburbs of larger cities. I list the locality both ways, putting the real locality in brackets after the “map search” answer.

      Acquisition, Step 4

      To measure the straight-line distance for the first time, you have to right-click on the start location, and then somewhere near the end location. I zoom in to the 50′ scale (which is probably more accuracy than needed) in order to do so, then zoom back out to place the destination marker.

      Zooming in to the destination point indicated by Google’s trip calculator, I can then click-and-drag the straight-line destination marker so that it coincides – moving (3) to (4) on the captured image above.

      Thereafter, when you change the destination (in Step 2), the straight-line-destination marker stays where it is, and it has to be shifted again, while the start-position marker remains unchanged. So this part of the process gets quicker for subsequent searches.

      Acquisition, Step 5

      From the information panel on the left of the map, I locate the distance and travel time, and enter them into the first two ‘x’ placeholders on the blank line in the text editor.

      That sometimes entails changing the route offered by Google – it doesn’t always make the best choice.

      It also requires converting the hours-and-minutes time that often results into just minutes, by adding 60 minutes for each hour.

      Acquisition, Step 6

      The last step is to look at the “measure distance” box at the bottom of the map and extract the Total distance.

      Because I’m in Australia, where km are the default, the distance is shown in km with the miles in brackets afterwards; I would presume that if I lived in the US, it would only show miles, or it might show miles and put kilometers in brackets afterwards.

      This is shown to far greater precision than I need; I round off to the nearest 0.5 km / 0.5 miles. This means that I can get a little quick and sloppy in positioning the start and end points of the straight-line measuring tool, greatly speeding up the whole process.
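
      As an aside: if you have latitude and longitude for both endpoints (Google Maps can show these when you right-click a point), the straight-line leg can be computed rather than measured. That’s a substitute for the click-and-drag ruler, not what I actually did, and the coordinates below are approximate – but the formula (the haversine) is well known:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle ('straight line') distance between two points, in km."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = math.radians(lat2 - lat1)
    d_lam = math.radians(lon2 - lon1)
    a = (math.sin(d_phi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(d_lam / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

# Royal, AR to central Little Rock (approximate coordinates)
print(round(haversine_km(34.53, -93.27, 34.75, -92.29), 1))   # ~93 km
```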

      Why do I need the straight line distance at all?

      The PCs are going to get to “superhero emergencies” by flying.

      Flight speed in the Hero System is given in inches per second, and there are 2 m in every Hero-inch. And, of course, you fly in a straight line, instead of following the sometimes-convoluted road routes.

      Strictly speaking, I don’t need the distance in miles at all – but it will provide a convenient reliability check on the quality of data once it gets entered into the database.
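
      Taking those units at face value – 2 m per Hero-inch, speeds quoted per second – the conversion from a flight rating to map units is simple arithmetic:

      $$ v_{\text{kph}} = v_{\text{in/s}} \times 2\,\tfrac{\text{m}}{\text{in}} \times 3.6 = 7.2\, v_{\text{in/s}} $$

      so, to pick a round number, a 20″ flight rating works out to 144 kph.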

      Problems & Solutions

      Google Maps is sometimes very broad in its interpretations of the name you’ve searched for, but there are times when the slightest error in punctuation or capitalization means that your search won’t be found.

      So I’ve taken extra care to match what I’ve listed in the text editor to the exact spelling shown on the map. But minuscule errors, like leaving (or adding) a full stop after “Inc” or a comma between business name and “LLC” can make all the difference in the world.

      SOLUTION #1
      Sometimes, though, all of the above is not enough. When that happens, I’ll try a sensible best-guess for the locality, on the presumption that I’m more likely to list two things that are close to each other, sequentially. That’s best-guess #1.

      Best Guess #2 is the actual locality of the previous destination, (because Google is inconsistent). Best Guess #3 is always Hot Springs because I went over the local area with a finer comb. Best Guess #4 is Little Rock because it’s biggest.

      SOLUTION #2
      If that doesn’t work, I’ll cautiously try some obvious variations on the spelling, as described above.

      SOLUTION #3
      If I’m still coming up dry, it’s time for desperate measures. Option #1 is to get rid of the measure-distance box and clear the destination field; doing that lets me click on a business name on the map, and Google will decide that’s where I want to go and fill in the destination search box appropriately. But for that to work, I need to find it on the map, and that’s sometimes easier said than done. I won’t spend too much time searching, because…

      SOLUTION #4
      Google Maps is incredibly efficient at removing businesses when they close down, or at processing changes of name. And a gap of one to eight weeks between my listing a business and searching for it is more than enough for this to happen. So my last resort is to open a new tab with a regular Google search, because other sites – like Facebook – are slower to take down old information. Once I have the street address, I can enter that into the search panel, ignoring the business name, and the problem is solved.

      SOLUTION #5
      My absolute last resort is to treat the location as “lost” – set it aside and just keep an eye open for it as I continue on.

      Frequency Of Problems
      It’s near-certain that any given page will have at least one or two cases where I have to resort to ‘stronger measures’ than the quick-and-easy Google Maps search. These techniques are in the sequence presented because I’ve found which ones are the least work for the maximum likelihood of getting a result.

    End-Of-Page

    Google Maps appears to chew up a huge amount of disk cache. By the time I’ve done about a page-and-a-half, Chrome will start throwing up “Not Responding” messages that will grow progressively worse; by the time I’ve done two pages, the browser is essentially frozen.

    The only solution to this problem (aside from more memory) is to close the browser and then start again. So that’s exactly what I do. But first – just in case something goes wrong – I’ll save the updated text file!

    And before I do that, I’ll cut and paste the page that’s been done to the end of the list of completed work, so that the line with the mansion address is near the head of the next page to be processed.

    The List After

    After about an hour’s work, I’ll have a page of completed results. For the sake of completeness, here’s the most recently-completed set of results.

      Burger King, 4227 Camp Robinson Rd, North Little Rock (Crystal Hill) (Fast Food)
          71.9 miles, 69 min, 94 km (58 miles) straight line
      Senor Tequila (Bowman), #A1, 1011 S Bowman Road, Little Rock (Brodie Creek) (Restaurant & Bar – Mexican)
          63.5 miles, 63 min, 80.5 km (50 miles) straight line
      Senor Tequila, 2000 S University Ave, Little Rock (Boyle Park) (Restaurant & Bar – Mexican)
          66.5 miles, 64 min, 85.5 km (53 miles) straight line
      Senor Tequila (Cantrell), 14524 Cantrell Rd, Little Rock (Woodland Heights) (Restaurant & Bar – Mexican)
          68.7 miles, 67 min, 80.5 km (50 miles) straight line
      Senor Tequila (Sherwood), 8605 AR-107, Sherwood (Apple Valley) (Restaurant & Bar – Mexican)
          74.7 miles, 75 min, 99 km (61.5 miles) straight line
      Senor Tequila (Maumelle), 9847 Maumelle Blvd, North Little Rock (Maumelle) (Restaurant & Bar – Mexican)
          69.7 miles, 66 min, 86.5 km (54 miles) straight line
      Senor Tequila, 4304 Camp Robinson Rd, North Little Rock (Park Hill Historic District) (Restaurant & Bar – Mexican)
          71.8 miles, 69 min, 94 km (58.5 miles) straight line
      McDonalds, 4422 Camp Robinson Rd, North Little Rock (Park Hill Historical District) (Fast Food)
          72.0 miles, 69 min, 94 km (58.5 miles) straight line
      Popeye’s Louisiana Kitchen, 4704 Camp Robinson Rd, North Little Rock (Park Hill Historical District) (Restaurant – Chicken / Cajun)
          72.0 miles, 70 min, 94 km (58.5 miles) straight line
      Kroger, 4401 Camp Robinson Rd, North Little Rock (Park Hill Historic District) (Grocery Store)
          72.0 miles, 70 min, 94 km (58.5 miles) straight line
      Edwards Cash Saver, 3801 Camp Robinson Rd, North Little Rock (Park Hill Historic District) (Grocery Store)
          72.1 miles, 69 min, 93.5 km (58 miles) straight line
      El Paisano, 406 W 47th St, North Little Rock (Grocery Store)
          72.2 miles, 69 min, 94 km (58.5 miles) straight line
      Arvest Bank, 4724 Camp Robinson Rd, North Little Rock (Park Hill Historic District) (Bank)
          72.3 miles, 70 min, 94.5 km (58.5 miles) straight line
      Hogg’s Meat Market, 3901 John F. Kennedy Blvd, North Little Rock (Lakewood) (Butcher Shop)
          71.1 miles, 68 min, 94.5 km (59 miles) straight line
      Bargain Brothers NLR, 4135 John F. Kennedy Blvd, North Little Rock (Lakewood) (Discount Store)
          71.4 miles, 69 min, 94.5 km (59 miles) straight line
      Hobby Lobby, 4701 John F. Kennedy Blvd, North Little Rock (Lakewood) (Craft Store)
          71.8 miles, 69 min, 95.5 km (59 miles) straight line
      Camp Robinson Mountain Biking, North Little Rock (Crystal Hill) (Sports Venue)
          76.7 miles, 78 min, 92.5 km (57.5 miles) straight line
      Pi Roofing and Construction, 6109 Remount Rd, North Little Rock (Indian Hills) (Roofing Contractor)
          73.2 miles, 71 min, 95 km (59 miles) straight line
      Bungalow Bills, 22 Remount Rd, North Little Rock (Indian Hills) (Novelty Store)
          73.5 miles, 72 min, 95 km (59 miles) straight line
      B.E.E. Promotional Products, 7000 Remount Rd, North Little Rock (Indian Hills) (Gimmick Manufacturer)
          73.8 miles, 72 min, 95.5 km (59.3 miles) straight line
      US Army Reserve Center, 8000 Camp Robinson Rd, North Little Rock (Crystal Hill) (Government Office)
          74.3 miles, 74 min, 94 km (58.5 miles) straight line

Data Entry

For obvious reasons, I haven’t seriously started on the data entry stage. But I have started work on the database that I’ll be using.

    The Database

    This takes the form of a spreadsheet, because the easiest way of actually using the results is to sort the table contents by different columns, and the results can be used directly and immediately – and updated instantly.

    There are 21 fields and sub-fields in the design.

    Entry Number

      Entry Variation – see note 1 below.

    Name
    Address – doesn’t include locality.
    Google Locality – see note 2 below.
    Actual Locality
    Type – see note 3 below
    Dist to near outskirts by Road (miles) – see note 4 below
    Driving Time w/out stops (min)
    Est. breaks (min) – Calculated automatically – see note 5 below
    Time Zone Adjust (hrs) – see note 6 below
    Net travel time by Car

      (min) – Calculated automatically by adding breaks and driving time.
      (hrs, min) – Calculated automatically by dividing the travel time by 60 and using a different number format.

    Direct distance to same point

      (km)
      (miles)

    Flight Time with Time Zone adjust – see note 7 below

      (ST B s-sonic, min)
      (ST B s-sonic stealth, min)
      (ST B noncom, min)
      (ST B noncom stealth, min)
      (ST B evasive) (min)
      (ST B evasive stealth) (min)

    Notes – see note 8 below

    Notes
      [1]

      Sometimes there are two or more alternative routes of significance. Most of the time, I’ll just measure the fastest/most direct, but where there’s a variation, a number will be put into this field.

      [2]

      Only present to permit the address to be located again using Google Maps.

      [3]

      I’ve tried hard to be consistent, but already know that I’ve failed spectacularly in this respect. But by sorting on this field, and using copy-down, I can perform mass editing to fix that. I was originally going to put this information into the Notes – and skip it when it was obvious, e.g. “Lanais Funeral Home” – anyone want to guess what service they provide?

      [4]

      I want to briefly comment on the “Near Outskirts” specification. Some communities are tiny, some are huge. I deliberately targeted the nearest ‘city limits’ rather than the Google Maps default (somewhere in the middle).

      It will be renamed for the business / specific sub-table.

      [5]

      Standard practice of the PCs is to stop for 5 minutes every full hour – except when they reach their destination.

      [6]

      I’ve also done a list of other significant cities in the US, which will be extended as more become relevant, and a very selective list of international locations which might become relevant if the PCs do something they aren’t supposed to. These will be put in a separate sub-table with the appropriate adjustments for time zone noted in this field.

      [7]

      The character providing ‘transport’ for the group, St Barbara (operating under the name ‘Nightshade’ in this sub-campaign), has several flying modes. This set of fields automatically calculates the travel time in these different modes; the calculation is sketched in code after the list below.

      The modes are

      – Supersonic (100 km in 1.62 min = 3704 kph)
      – Supersonic Stealth (100 km in 6.48 min = 926 kph)
      – Non-combat (100 km in 16.67 min = 360 kph)
      – Non-combat Stealth (100 km in 50 min = 120 kph)
      – Evasive (100 km in 100 min = 60 kph)
      – Evasive, Stealth (100 km in 200 min = 30 kph)
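
      To make the calculated fields concrete, here’s a minimal sketch of that logic in Python – not my actual spreadsheet formulas, and the ‘no break on arrival’ handling is just my reading of the standard practice described in note 5:

```python
# Flight speeds in kph, from the list of modes above
MODES_KPH = {
    "supersonic": 3704, "supersonic stealth": 926,
    "non-combat": 360, "non-combat stealth": 120,
    "evasive": 60, "evasive stealth": 30,
}

def breaks_min(drive_min):
    """5-minute stop per full hour of driving, except on arrival."""
    full_hours = drive_min // 60
    if full_hours and drive_min % 60 == 0:
        full_hours -= 1   # the final 'break' coincides with arrival
    return full_hours * 5

def net_drive_min(drive_min):
    """The 'Net travel time by Car (min)' field."""
    return drive_min + breaks_min(drive_min)

def flight_min(dist_km, mode, tz_adjust_hrs=0):
    """Straight-line flight time, with optional time zone adjustment."""
    return dist_km / MODES_KPH[mode] * 60 + tz_adjust_hrs * 60

# The Burger King entry from the sample list earlier:
# 69 min driving, 94 km straight line
print(net_drive_min(69))                        # 74
print(round(flight_min(94, "non-combat"), 1))   # 15.7
```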

Data becomes Information

The whole point of this exercise is to generate information that can be used by players and GM alike. They may be at DeBarrie and decide they need sacks of cement – where’s the nearest place to get some?

The data entered into the table becomes information when it has meaning attached to it. Often, that meaning is relative to other related entries. “Where’s the nearest Demolitions expert? The nearest Doctor?” – and so on.

This meaning is conferred by sorting the information in the table. Sometimes, a sort can be ad-hoc, but I actually think that duplicating the tables and doing semi-permanent sorts is the best way to go. Or maybe I’ll do a sort and save the results as a PDF.

Already, I have seven useful ways to sort the information.

    1. Locality, Type of business, Distance, Name, Address

    What’s at a location. All the businesses in a given locality, grouped by type, in sequence of distance, i.e. flying time. If there are multiple examples – e.g. fast food restaurants all the same distance away – they are listed alphabetically.

    2. Locality, Name, Type, Distance, Address

    What’s at a location, 2. This is a basic business directory.

    3. Distance, Type, Locality, Name, Address

    “What’s nearby?” Note that really local businesses will have no straight-line distance information because the PCs aren’t supposed to fly to them.

    Process Flaw

    In theory, since the group will be driving to a nearby secret location and flying from there, to avoid risking their secret identities, I should have been measuring distances from that departure point. Except that they will only be doing so in daylight, and will forego it if the emergency is serious enough.

    What’s more, if their destination is in the right direction, this error will have comparatively little effect on the total. It is most significant when they drive one way and then fly back the way they came.

    With all that in mind, and remembering that the drive time to the secret location is only about 10 minutes (less if they speed), the resulting error seemed relatively negligible, and not worth the effort of taking into account. But it is a flaw in the process, and one that I might eventually be able to use against the PCs!

    4. Drive Time, Locality, Type, Name, Address

    A variation on “What’s Nearby” that takes into account road quality and focuses on communities as a whole – “where’s the nearest town with a dentist?”. I will probably stratify this into time bands – <3 min, 3-10 min, 10-20 min, 20-30 min, 30-45 min, 45-60 min, 60-90 min, >90 min.

    5. Drive Time, Type, Locality, Name, Address

    A variation on “What’s Nearby” that focuses on type of business. Stratified as above.

    6. Type Of Business, Locality, Name, Address

    List every builder in the state, in order of where they are – that’s achieved by loading this sort and scrolling through to “builder”. It relies on consistency of nomenclature in classifying businesses to work, though.

    7. Name, Locality, Type

    Finally, when you’re looking for somewhere specific, this is the list of value.

There may be others, or refinements to these. They are so easy to produce that if something proves useful, I can generate it ad-hoc and keep it forever. And if one of these proves less useful, it can be replaced or simply deleted.

The key point is that I don’t want to have to enter new entries multiple times. One master file that gets resorted is a lot less upkeep!
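
For anyone curious about the mechanics, a multi-column sort is a one-liner in most tools. Here’s a hedged sketch of sort #1 in Python – the field names are stand-ins of my own, not the actual column headers, and the address column is omitted for brevity:

```python
rows = [
    {"locality": "North Little Rock", "type": "Grocery Store",
     "name": "Kroger", "dist_km": 94.0},
    {"locality": "North Little Rock", "type": "Fast Food",
     "name": "McDonalds", "dist_km": 94.0},
    {"locality": "North Little Rock", "type": "Fast Food",
     "name": "Burger King", "dist_km": 94.0},
]

# Sort #1: Locality, then Type, then Distance, then Name
whats_here = sorted(rows, key=lambda r: (r["locality"], r["type"],
                                         r["dist_km"], r["name"]))
for r in whats_here:
    print(f"{r['locality']} | {r['type']} | {r['name']}")
```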

The Lesson To Learn

This entire process uses only basic tools – a browser, Google Maps, a text editor, a table, table sort functions, and maybe a PDF export function.

In other words, a browser, text editor, and office suite.

I could search Google for weeks and not find what I’m looking for simply because that relies on other people doing the hard work for me.

What we get out of it

For the next few weeks, game time, the plan is for the PCs to head for a randomly chosen location and do superheroic things as opportunity presents itself – and just get to know the place until opportunity knocks.

As a planning tool, the right type of sorted list will be invaluable. And, if there’s no inspiration in sight, the entire outing can be hand-waved. So they get more adventure, and I get less ongoing work – plus fresh sources of inspiration.

Generalizing

As I wrote at the start of this article, the general principle applies in all sorts of ways.

Let’s take a D&D example, just for some variety.

You could buy a supplement – “99 Swamp Encounters”, a simple table listing things that you might encounter. The results are likely to be fairly generic and you might well commit to something only to draw a blank on the specifics, or have the players not be interested.

Or you could have a list of the various monsters with an entry for each terrain in which they are common, and then sort it. You could add in monster supplements from other publishers – that requires adding a field that lists the source and, to be truly convenient, the page number.

You can modify and home-brew the official data to fit your campaign world.

And that’s only a direct replication of the core concept.

Further Generalizing

So let’s look beyond it. I can’t present the chart below in legible form without ruining a forthcoming adventure in the Adventurer’s Club campaign.

Fortunately, I have a version without labels and other tell-tales. It took me about 4½ hours to create AND fully populate – using the draw function in my Office Suite, a part of the package that I had never used before.

Once I had the structure, I added some blocks of color and a temporary background – and that’s included in that 4½ hours, too.

That’s a ferocious learning curve!

In the past, I’ve looked for an online diagram generator, or even downloaded and installed software just for the purpose. But everything needed was already there.

What else can my Office Suite do? I use a fraction (about 1/2) of its modules – Writer, Spreadsheet, now Draw. What else is in there? What functions are hidden from view in menus and sub-menus?

What else can your basic utilities do for you? Maybe a sortable list of every NPC ever to appear in your campaign, for example?


Economics In RPGs 6c: Pre-Digital Tech Age Ch 3


This entry is part 9 of 16 in the series Economics In RPGs

I’ve clearly decided to push on and get this trilogy of posts out of the way before interrupting the series for another break.

As usual, because this is a direct continuation of what’s already been posted, I’m going to skip the usual preamble, so make sure that you have read Chapter 1 and Chapter 2 first.

But before I dive headlong into it, there are a couple of Kickstarters that are worth mentioning.

First up, and closing early next week, is The Geologist’s Primer by Anna Urbanek, with additional content by Jakub Wisz.

A massive 360-page “illustrated guide to Magical gems, rocks and metals – Gem Folklore, Magic & Occult Magic Item Recipes, Game Master’s Tools” – and more.


System-agnostic, the list of content inclusions is quite comprehensive; “Each entry in the Geologist’s Primer provides basic geological information, notes on where to find and how to extract these materials, along with their industrial, decorative, magical, and sometimes even culinary properties. Each entry also includes a short, handy description. So if you’re just browsing for information, you won’t have to read through pages of text!”


The example pages look fantastic, and the utility of this as a reference work amply justifies backing it. Note that most of the stretch goals that have been unlocked so far have created Add-ons which have to be added to your order when you back the project.

This project was fully-funded within 5 minutes of launch and has currently raised more than $430,000 toward its $10K initial goal, so it’s as sure of delivering as any Kickstarter ever can be. Time is running out, with 2/3 of the funding window already closed, so if you are interested, the time to dive in is now.

The PDF-only option is a relatively-affordable $20, but I put my money down on the hardcover, costing $50 (I briefly considered the US$80 deluxe edition, but my budget wouldn’t stretch that far).

The scale of the public response shows quite clearly that there is a significant level of demand for this product, so I’m quite sure that it will interest some of you out there!

One down! Here’s the second serve:


Next, and launching (if all goes according to plan) later this week, there’s some newly-designed dungeon tiles and 3D printed accessories that are sure to interest some of my regular readers.

Mad Wizard’s Hall provides “Pre-painted wooden tiles, doors, columns, and traps for any fantasy tabletop RPG and miniatures games”.

These look great, and come in a variety of shapes and sizes. There’s a free print-for-yourself sampler, and 341 people have subscribed to be notified on launch already, so it’s very likely to get up.

This is the first major outing for an indie designer hailing from Klaipeda, Lithuania.

Yes, Lithuania – talk about gaming having a global reach! So to welcome Ilya into the pro gaming fold, you should at least consider backing his project!

Okay, with the decks cleared, let’s get back to business!

The Space Race

It’s fair to say that the Space Race was the single most transformative series of economic events of the twentieth century – and in a period that includes two world wars and the Great Depression, that’s saying something.

Entirely separate from the outcome and spin-offs, and the space industry itself (addressed separately below), there’s the direct investment – Project Mercury was $2.57 billion in 2021 money; Project Gemini, $8.2 billion in inflation-corrected currency; and Apollo $178 billion in 2022 money.

That adds up to (roughly) 189 billion (corrected) dollars in direct investment in the research, engineering, and manufacturing capabilities of the US. Even had the project failed, it’s hard to see that not having a massive economic payoff in the long run.

    Nationalism Vs Progress

    The roots of the Space Race run deep, and branch off into unexpected sociological domains. One of the strangest is the complete 180°-reversal in public perceptions that followed.

    When the trilogy of space programs began, support for them was intensely jingoistic; there was a sense of direct confrontation with the advance elements of the Enemy, and failure to support the space program was viewed as ‘unpatriotic’.

    As soon as the Apollo program succeeded with Apollo 11, that began to change, and quite rapidly. Space exploration was immediately and increasingly subject to harsh budgetary constraints and the catch-cry (paraphrased) was ‘there are more important priorities here on earth’.

    Poor Salesmanship

    To be fair, NASA did a very poor job of selling the economic value of their achievements, and still does. Correlation doesn’t imply causation, so it’s entirely possible that the decline in American manufacturing capabilities mirrors the retreat from investment in space technologies entirely by coincidence.

    But the list of sciences and technologies that got a big boost out of the Space Race reads like a comprehensive list of human technological and social achievements. And that’s without the spin-offs and indirect benefits. We’re talking everything from computer technology to materials science and all points in between.

    Softer subjects also benefited – there was so much written about the space program that literature itself had to evolve. There were so many creative artists in other fields that drew inspiration from the projects that whole new fields and styles began to manifest. And manufacturers were quick to learn that if you slapped “Astro-” onto the name of a product, or established some connection with the space program (however tenuous), sales went through the roof.

    Social Antagonism

    The post-Apollo shift defined a new social antagonism between the interests of Nationalism and those of Research. Suddenly, research grant applications had to justify their funding requests in terms of concrete benefits, and you can’t run research efficiently along such lines; the only guaranteed outcome of research is that you’ll have the chance to learn something. What that something might be, and how it might translate into economic and social benefits, is completely unpredictable.

    From a modern perspective, it’s easy to cast this as an opening skirmish in the political wars between progressives and conservatives, but that’s an oversimplification, in my view; NASA simply failed to plan for Apollo’s success, taking their funding for granted, as I have explained in earlier sections of this article.

    That failure is what opened the door for those forces of economic management that wanted to re-prioritize and cut expenditure. These were politicians who saw only the immediate / short-term goal of “Beat The Russians” and not the longer-term benefits to society, and NASA failed to educate them about the longer-term gains. I’m quite sure that they tried, but on this mission, they failed.

    The demand for practical research outcomes to justify investment became a characteristic of the remainder of the 20th century and still lingers today to a large extent. It became part of the economic and social infrastructure of the western world, a fundamental assumption of society, thereafter.

    Progress Vs Service

    Increased funding for social programs was often used as a justification for winding back investment in the space program, and that had the flow-on effect of painting those two elements of society into an antagonistic relationship.

    Increasingly, they were seen as competing for shrinking slices of the available resources.

    The attitudes engendered were pernicious, and spread into a perception that funding of research stole money from the delivery of services as politicians employed divide-and-conquer approaches to enable a growth in their personal power.

    There is a key sequence in The Distinguished Gentleman, the political comedy starring Eddie Murphy in one of the best-written roles of his career (link is to a Double-feature DVD set with Trading Places, his other great role in this sphere. I get a small commission if you buy).

    Murphy’s character meets lobbyist Olaf Anderson, who sounds him out on his positions on various policy questions so that he knows which lobby groups’ funding he can direct into Murphy’s reelection campaign. Olaf doesn’t care which position Murphy takes; if he chooses a position in favor of a policy, group “A” will give him money, and if he opposes it, group “B” will. Either way, Olaf remains the kingmaker and gets his slice of the pie from both sides.

    While a cynical exaggeration, this explanation for why nothing ever gets done in government save for politicians feathering their own nests still resonates. Systemic corruption by lobbyists continues to handicap the political system of many nations; only the form varies.

    The key point in this context is that artificially created competitions for funding set different lobby groups into antagonistic positions which can let the orchestrators play one off against the other, to the benefit of the orchestrator.

    Service Vs Profit

    The increasing emphasis on environmental regulation and protection was always described in terms of the public benefit that would (and did) result. That this regulation ate into the ability of a given operation to generate the maximum possible profits created another of these antagonisms, in which the corporate sector increasingly focused on the short term over the long term, and on immediate benefits over even the short term.

    There is little doubt that the same forces which set research against service delivery also encouraged this perspective, but it was all an outgrowth of the more general government-vs-business disharmony that had existed since the end of the Second World War.

    Three Rivalries

    These three rivalries, manifestations of deeper political philosophies and personal greed and altruism, became increasingly strident as the Pre-Digital era neared its end.

    There are those who would argue that they did not reach their most extreme levels of conflict until the 1980s, but I think of these trends as more of a parabolic arc; the impetus pushing these agendas begins to tail off at the end of this era but momentum pushes the conflicts to greater extremes before the social perspectives responsible begin changing course.

    Any campaign set in this time period needs to keep the three rivalries in mind, and GMs should remember that there will be forces on all sides who will resent and resist any efforts to change the status quo – sometimes from the best of intentions, sometimes for more venal reasons, and sometimes out of pure self-interest.

    Alliances are short-term, extremely focused, and unreliable. There are too many social and political forces pulling these “special interest groups” apart for them to last very long.

    Very Strange Fruit

    Getting back to the Space Race, the legacies of the Apollo program and its predecessors were more significant indirectly than they ever were directly. In order to make Apollo work, industries needed to learn to do new things, and they often found those lessons applicable in other areas and new products.

    For example, the adhesives industry was revolutionized by the space program; it wasn’t so much the adhesives needed for space applications as it was taking the failed experiments along the way to those products and turning them into something useful (and profitable).

    Computers and Communications and Satellite weather maps often get the headline billing when discussing Space Race spin-offs, but the technological ramifications and confluences run much deeper, and include artificial limbs, scratch-resistant lenses, insulin pumps, firefighting equipment, automation, water filters, sports shoes, long-life tires, freeze-dried food, ear thermometers, vacuum cleaners, air purifiers, LEDs, pens, medical imaging and diagnostic technology, and (of course) Velcro!

    Every new product creates new employment and new manufacturing needs, new marketing requirements, new or augmented distribution channels, new demands on income, and new prosperity.

    Despite the high price, the economic benefits were an ongoing contribution to the global economy that far outweighed the costs.

Tech Briefing: Miniaturization

One of the biggest forms of technological progress to result from the space race was the miniaturization of electronic components. While no more than half of this takes place in the pre-digital era, it’s worth looking at the totality, at least briefly, because the beginnings of this process set the foundations for the era to follow, as well as shaping the available technology throughout the post-WWII part of the era.

    Beginnings

    Vacuum tubes were developed in the late 19th and early 20th centuries. The simplest and most common example was the humble light globe.

    Vacuum tubes of greater sophistication are delicate and expensive. Though glass-sealed, air slowly leaks into the vacuum within and destroys their effectiveness. Some could last for many years, and some could die a very quick death, depending on the quality of manufacture and the complexity (amongst many other factors) – but quality always costs.

    They are large, and heavy, and power-hungry.

    Worse still, they are horribly inefficient, wasting a lot of the energy fed into them as light and heat. Light could be lived with, but heat distorts glass and can render the technology stone dead.

    Early computers needed significant cooling in order to function, and this could easily double the cost of an installation.

    Vacuum Tubes to Transistors

    Vacuum tubes made digital computers possible, but not practical. So many of them were required that the head of IBM famously (and perhaps apocryphally) predicted a total global market of five or six computers.

    The first transistor was invented in 1947. Discrete components mounted individually on a circuit board, transistors were soon 1/100th the size of their vacuum tube equivalents, consumed 1/100th the power (or less), and wasted 1/20th as much of that power (or less). TV sets went from large, bulky cabinets to portable devices, despite still relying on a vacuum tube for the television display.

    They were more reliable, more robust, less expensive, less expensive to operate, and much more compact.

    The improved electrical requirements also meant that power supplies could be made smaller and more reliable.

    All this meant that more circuits could be squeezed into a given space, and that gave rise to greater capabilities. The earliest remote controls could arguably have been achieved using transistors, but development was proceeding at a breakneck pace.

      In 1954 the world’s first transistor radio, the Regency TR-1, used four Texas Instruments npn transistors and cost $49.95, equivalent [to] $507 in 2021 [dollars]. Today, a 512GB SD card can contain over a trillion transistors and costs about $30.

      — Curious-Droid.com, MOSFET – The Most significant invention of the 20th Century

    Transistors to ICs

    In 1958-59, a way was devised to mount many transistors and a number of auxiliary components onto a single piece of silicon, shrinking the transistors and magnifying all the other benefits of transistorization at the same time. This was the beginning of the Integrated Circuit.

    The 1962 prototype contained 16 transistors. In 1964, the first commercial MOS integrated circuit was released, containing 120 transistors. These were roughly 1/50th of a millimeter (0.787 thousandths of an inch) across. 120 discrete transistors would have taken up an area of about 3 inches x 4 inches – assuming no electrical components were required in between, but they almost always were. This could easily double or triple the area required, so call it the equivalent of about 7½ x 10 inches.

    ICs to Chips

    By the late 1960s, Integrated Circuits had grown so large and complex that a new term was in use: LSI, or Large-Scale Integration. Early in the 1970s, this gave way to VLSI (“Very Large Scale Integration”).

    The first digital microprocessor is generally considered to be the Intel 4004, a four-bit CPU whose descendants lead all the way to the Pentium and beyond. It had 2,300 transistors on a ‘chip’ of silicon about 3.15 × 4.46 mm (roughly 0.12 × 0.18 inches) – plus case.

    Moore’s Law

    Moore’s Law postulated that the number of transistors on a single chip would double every two years for an unforeseeable period of time – but certainly for the immediate future.
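
    In symbols – a standard paraphrase, not Moore’s original wording – with $N_0$ transistors at a reference year $t_0$:

    $$ N(t) \approx N_0 \cdot 2^{(t - t_0)/2} $$

    which is why transistor counts plot as a (nearly) straight line on a logarithmic scale.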

    A logarithmic graph showing the timeline of how transistor counts in microchips have almost doubled every two years from 1970 to 2020 (Moore’s Law), by Max Roser & Hannah Ritchie, from Our World In Data via Wikipedia (image page), licensed under the CC Attribution 4.0 International license.

    While it’s been refined and revisited a number of times, as a rough-and-ready guesstimate, it’s proven remarkably resilient. On several occasions, doom-and-gloom forecasts have prophesied the end of Moore’s Law, only for new technological developments to make the formerly impossible, possible.

    History shows that Moore’s law is a useful generalization. If it were perfectly valid, the graph above would be a perfectly straight line; clearly, it’s not.

    There are corollaries, which state that power requirements are a function of the physical size of chips (and hence power requirements per transistor will continue to fall), and that R&D and manufacturing costs also increase exponentially in step with Moore’s Law – which would mean that the cost per transistor remains constant; in practice, it continually falls, thanks to economies of scale driven by demand. These have also proven useful rules of thumb, but are less accurate than Moore’s original law.

    Chips to Multi-cores

    Modern computer chips have transistors that are as little as 35 silicon atoms wide. There can be tens of billions of transistors in each. We passed the threshold at which quantum effects had to be taken into account a long time ago – around the time of the Pentium 4, if memory serves.

    These days, a single CPU can’t pump electrons through its circuits fast enough; problems and tasks are distributed amongst several CPUs on the one chip, a multi-core, designed to utilize parallel-processing methods such as multi-threading.

    Supercomputers can hold as many as 20,000 processors (not necessarily all on one chip). The technicalities don’t matter much – the operative factors from an economic perspective are that computers get cheaper (in real terms) and more powerful every year or two, from the moment of first creation in 1970 through to now (though there have been occasional brief reversals of this trend).

    Ubiquity

    Despite the dire prediction of that IBM executive, that essentially means that computers have been getting cheaper and more powerful over that period of time. It’s well known that the onboard computers that ran the Apollo spacecraft were less powerful, in computational terms, than a 1990s engine-management computer in a typical family car.

    A lot of that reduction in price is due to economies of scale – essentially, the more you make of something, the less per unit they cost. And the only way you get economies of scale is through increased usage. Almost everything has an onboard computer of some kind, these days – right down to extension cords and power sockets.

    This has been a steady progression that started in the 1970s and has continued ever since. It was in its infancy at the end of the pre-digital era, but had progressed far enough that bureaucracies and large corporations were increasingly using computers by this point in time.

Behemoths Of Blind Logic

Which means that public perceptions of computers had also begun to take shape. These would also evolve with increasing ubiquity, but – outside of specialist areas – the headline of this section denotes the general attitude that I remember.

Many people could see, at the time, that this would not always be the case; the promise of computer technology was well-known and widely appreciated – and frequently mis-characterized or misunderstood by CEOs.

These failures would exist right through to the 2000s – in particular, the belief that computers would streamline workflows and permit a reduction in labor costs, making a business cheaper to run. That never happens, in my experience. What computers facilitate is greater control, and better management of internal processes – but they were and remain unforgiving.

GIGO is an abbreviation for “Garbage In, Garbage Out”, a phrase that was coined all the way back in 1957. Back then, computer professionals used it in reference to sloppy programming practices, but sometime in the 1970s it began to be used to refer to operator errors and corrupt data, and when PCs began infiltrating office spaces, the term almost exclusively referred to these problems.

    Operator Error

    When computers were new, operators needed – and received – special training in how to use them. This training was not cheap, and often extended for weeks or months.

    In part, that was because many operations that take place automatically in modern times needed to be carried out manually.

    Computer Errors come in four basic varieties:

      Logic Errors

      Computers are – currently – stupid devices, the current crop of “AI” functions notwithstanding. They will do whatever they are told to do, whether that is the right thing to do or not. Errors in the underlying logic that the computer is to implement are the most fundamental mistakes, and some of the hardest to diagnose and correct.

      Hardware & Software Bugs

      If the intended instructions to the computer are correct, they can be mistranslated into computer instructions (a software bug) or misunderstood because of a flaw in the hardware itself. These days, the latter are so rare that there is an almost-automatic assumption of the former. That’s what made the floating-point computational error of the early Pentium chips (now known as the FDIV bug) so shocking to the IT community in the mid-1990s.

      User Errors

      By far the biggest cause of computer errors is an operator typing in something they shouldn’t. Some estimate that 90% of computer code is directly purposed at spotting and handling such incorrect inputs, but I think this is exaggerated – a little.

      A real-world but trivial example is of an operator entering numeric values for invoices with a dollar sign at the start – “$123.45” instead of just “123.45”. If the software isn’t told how to handle this – something that could take several lines of code – it won’t add up the invoice line entries to produce a correct total.
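
      Those ‘several lines of code’ might look something like the following – a minimal sketch of defensive input handling, not drawn from any real invoicing system:

```python
def parse_amount(raw):
    """Accept '$123.45', '1,234.50' or plain '123.45'; reject garbage."""
    cleaned = raw.strip().lstrip("$").replace(",", "")
    try:
        return float(cleaned)
    except ValueError:
        raise ValueError(f"unreadable amount: {raw!r}") from None

line_entries = ["$123.45", "1,000.00", "99.99"]
print(round(sum(parse_amount(s) for s in line_entries), 2))   # 1223.44
```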

      There are all sorts of derogatory terms used by computer professionals to describe this sort of error, most of which will go completely over the heads of laymen, but these are becoming more rare in the modern world because of user-friendly interface design expectations, which hold that operator errors are the fault of the system programmer who should have anticipated that possibility.

      Interpretational Errors

      The fourth type is perhaps the most pernicious, though slowly becoming less frequent; it occurs when the computer does everything right, and so does the operator, but the human who receives the information misinterprets what the results are telling them.

      This used to be a lot more common when computerized functions were newly-introduced to a business, and the wealth of data outputs first became available to management. I once knew a manager who was quite happy spending 12 hours a day restructuring his reports to view information in new contexts, for example.

      Like everyone else who has trouble with data saturation, he eventually figured out what reports were actually useful and what were simply noise, or worse yet, misleading.

      Nevertheless, this remains a valid interpretation of GIGO that casts the expression into a more human context.

    No matter how highly trained, computer operators were human and capable of making a mistake. Depending on the specifics of those mistakes, the results could be catastrophic in terms of the purpose of the information being processed, and decisions deriving from it.

    PCs

    With the advent of the business-purpose personal computer, there was a significant reduction in the training that operators received, and a natural increase in the number of errors that would typically occur.

    Let me be clear – it takes time to master ANY software. The best software for any purpose is often the software that lets you dive straight in and start being productive right away; that doesn’t reduce the learning curve, it just lets you do something useful in the meantime.

    For example, I’ve tried more than a dozen varieties of different music composition software, but one of them clicked with me immediately (sadly, it’s no longer available). Others who tried the software on my recommendation found that it was not so user-friendly for them – in particular, if they knew (musical) keyboards and used one to ‘play’ music into the software (I did everything by mouse). Other packages were ‘best’ for them.

    The immediacy of productivity didn’t mean that I had mastered it; I was still learning new tricks right up to the day that a forced operating system upgrade meant that it stopped working.

But the Pre-digital era falls at the very beginning of that story, at a time when many of these dangers went unrecognized, at least by management; amongst those who had championed a corporation’s adoption of computers, there was a sense that the machines were infallible, and there was little capacity for human judgment to leaven harsh and sometimes incorrect decisions.

The popular zeitgeist at the time was that computers would be responsible for all manner of simple mistakes that common sense would have prevented immediately, like issuing invoices for 1 cent (often due to a rounding error) or for 99.999 dollars.

Of course, mainframe computers were both huge and hugely expensive. So: Behemoths of blind logic.

Whatever fun mistakes you can have an overly-literal computer make, I guarantee that a worse mistake really happened.

The Promise Of Atomics

Sci-fi of the 1930s had a rose-colored myopia with respect to the future of atomics. The writers of the time had enough understanding of the fundamental research that had been published that they could (and in at least one case, did) predict atomic weapons.

But, to be honest, it was frequently a catch-phrase meant to “sci-fi” an object up. ‘Destroyer – sounds too naval. I know, we’ll call it an Atomic Destroyer!’ Or an Atomic Car. An Atomic Dredge. An Atomic Mole.

Atomics promised power supplies that were smaller, lighter, and more powerful than anything then available – and that was the serious stuff. Every city block would have its own atomic generator that would last a decade, or maybe a century. Self-powered factories, automated refineries…

More frivolous and less-grounded but still somewhat-plausible applications that were predicted included transmutation, atomic-powered rockets, force-fields, and atomic rays.

Setting aside the ridiculous stuff, concepts like Atomic Automobiles that never needed refueling were not only seriously contemplated but expected.

All that was the promise of Atomics.

So, what happened?

    Stumbling Block 1: Cold-War Paranoia

    Klaus Fuchs was arrested in 1950, and the Rosenbergs later that same year. These three names were sensationalized following their arrests and trials (and, in the Rosenbergs’ case, their 1953 executions).

    On 29 August 1949, the Soviet Union secretly conducted its first successful atomic weapon test. On September 23, President Truman revealed that the Soviets had developed their own version of the super-weapon that many felt had ended the War.

    These developments did not ignite the Cold War, which had already been underway since 1945, following a string of broken agreements regarding post-war Europe and Iran. But they did usher in an increased level of (justifiable) paranoia and secrecy regarding key aspects of nuclear and other cutting-edge technology.

    While military applications – better and newer bomb designs, delivery systems, nuclear-powered vessels, and attempts to create defenses – were well-funded, there was a slowing effect on civilian applications of nuclear power.

    Stumbling Block 2: Government Protectiveness

    The growing environmental awareness of the 1960s and 70s also had a massive impact. Suspicion that nuclear power was not the key to unlimited energy had been growing for a while, as the dangers emerged into the public consciousness.

    In response, safety standards for nuclear power plants were set at an almost impossibly-high level. The granite of Grand Central Station, like all granite, is slightly radioactive – and in fact exceeds the permitted emission standards applied to US nuclear reactors.

    The accidental escape of radioactive gas at Three Mile Island turned nervousness into outright panic for some. Fact: the radioactivity released was less than that received from a dental x-ray, or a single trans-continental flight.

    The shielding and safety mechanisms that were required – rightly or wrongly – made atomic installations huge and expensive. Both factors signaled the death of the Promise of Atomics.

    Stumbling Block 3: Fear & Atomic Nightmares

    B-movies frequently used Atomic-based monsters as villains. The Beast From 20,000 Fathoms featured a fictional type of dinosaur awakened from the Arctic ice by an Atomic weapons test. Them! (1954) and Godzilla (1954) cemented an exaggerated concept of what nuclear power could do.

    There were more serious movies as well, ranging from The Day The Earth Caught Fire (1961), in which atomic tests displace the Earth from its normal orbit, through to movies like The China Syndrome (1979), On The Beach (1959) and Silkwood (1983).

    All of these, and many more, created a distorted awareness of nuclear power that directly resisted the atomic dreams of the more optimistic visions. I don’t know that anti-nuclear sentiment ever reached the point where support for the nuclear industry was enough, on its own, to cost someone victory in an election, but it was often a drag on political support.

    Chernobyl & other nuclear disasters

    That’s not to pretend for one minute that Nuclear Power is not dangerous if mismanaged. The Chernobyl nuclear disaster in 1986 is proof of that.

    Nor can nuclear power ever be made 100% secure against natural disaster, as demonstrated by the 2011 Fukushima accident.

    And, one can never entirely dismiss inimical acts by others, such as the ongoing Russian invasions of Ukraine.

      The Russian 22nd Army Corps approached the Zaporizhzhia Nuclear Power Plant on 26 February 2022 and besieged Enerhodar in order to assume control. A fire began, but the International Atomic Energy Agency (IAEA) stated that essential equipment was undamaged. Despite the fires, the plant recorded no radiation leaks.

      — Wikipedia, Russian Invasion of Ukraine – Southern Front

    That, of course, did not end the danger; in fact, the Russians attempted to use the power plant as a pawn in their invasion as the offensive bogged down (see Russian Invasion of Ukraine – Zaporizhzhia Front).

    The plant continued to be a strategic target in the months that followed.

      On 3 September 2022, an IAEA delegation visited the nuclear power plant at Zaporizhzhia and on 6 September a report was published documenting damage and threats to the plant security caused by external shelling and the presence of occupying troops in the plant.

      [Eight Days Later] at 3:14 a.m., the sixth and final reactor was disconnected from the grid, “completely stopping” the plant. The statement from Energoatom said that “Preparations are underway for its cooling and transfer to a cold state”.

      — Wikipedia, Russian Invasion of Ukraine – Zaporizhzhia Front

    Ukraine, of course, remains subject to threat and the invasion is ongoing. Until that changes, the danger posed remains, however it has been mitigated.

    Other uses of Atomics

    Nuclear materials, of course, have a number of other applications, which many people overlook. Medical uses are obvious (see Wikipedia, Nuclear Medicine). There are other industrial and commercial applications, too, such as Industrial Radiography – used for

      …the testing and grading of welds on piping, pressure vessels, high-capacity storage containers, pipelines, and some structural welds. Other tested materials include concrete (locating rebar or conduit), welder’s test coupons, machined parts, plate metal, or pipewall (locating anomalies due to corrosion or mechanical damage).

      — Wikipedia, Industrial Radiography – Inspection of products

    Whenever I think of this subject, though, an odd source springs to mind – a secondary plot thread in Arthur Hailey’s Wheels, in which an auto worker accidentally spreads radioactive contaminants.

    Alternate Reality, Alternate Physics

    So there are lots of good reasons why the envisaged ‘golden atomic age’ didn’t, and was never going to, happen.

    Well, I don’t know about you, but I’m a GM; I’d never let something so trivial get in the way if I really needed a campaign element like ubiquitous atomics. All that’s needed is some simple plot devicium to eliminate the dangers and the need for heavy shielding.

    A thin material that uses something similar to the photoelectric effect to transform one type of radiation (alpha, beta, gamma) into electricity would do it – and would simultaneously get rid of the bulky (and heavy) plumbing, permitting the direct conversion of radiation into energy. One triple layer later – one layer per radiation type – and the “Pocket reactor” (perhaps one cubic meter, perhaps half that) is ready to go.

A Default Economy

Time is starting to get away from me – I really wanted to reach this point in the article three or four hours ago. But, press on…

One of the biggest changes over the last 50-70 years of economics has been the relative importance of wages as a component cost of manufacturing. Wages have, in the western world, skyrocketed (in relative terms); this, more than any other factor, has resulted in the exodus of manufacturing to regions where the wages bill will be smaller.

This effect may have been less noticeable in the early 1970s, but it was nevertheless present; the increasing pressure on the US auto industry was an early manifestation, and while it would take the Oil Crisis of 1973 to bring matters to a head, that crisis merely accelerated – albeit greatly – a transition that was already ongoing.

Prior to the Oil Crisis, the dominant cost factor to the manufacturing sector was industrial in nature – machinery, tooling, and resources (materials). Environmental concerns were a growing area of expense for many industries, but still secondary; and wages and training were a remote third (Administrative costs were fourth on the list, a fact that will become significant in the next era).

Many of the classic entrants into different genres of RPG were written by people whose experience in economics was rooted in the society and attitudes of the era, and hence a low-scarcity high-manpower foundation became the default economy of those games.

    Incorrect Economics in Fantasy

    Most fantasy GMs knew enough to recognize that assembly-line techniques were inappropriate to the genre, but that was often as far as they went. Very few investigated the economics of steel production, especially the impact on forests. To be fair, the resources to do so were not as readily available.

    But let’s think about this a moment: anything in scarce supply goes up and up in price – that’s the law of supply and demand. And labor was in very short supply – which means that the basic model of the economics was wrong.

    Some GMs tried to correct this problem by increasing labor efficiency and effectiveness – healing magic to make the population healthier and more capable of hard work, and greater crop production through Druidic intervention (which not only makes the populace healthier, but frees more of them up to work elsewhere).

    Nothing wrong with that as a foundation for fantasy economics – but many of the secondary impacts of these changes were ignored, or not spelled out properly (at the very least), and the changes themselves were inserted as explanation after the fact. No impact on the prices and availability of various goods was taken into account, for example.

    Now that this has been pointed out to you, you have three choices:

    • Make the explanation official and correct the game mechanics to devalue skilled labor costs and introduce other relevant knock-on effects, including social consequences;
    • Remove the incorrectly applied assumptions and their consequences to produce a more realistic medieval economy and society;
    • Find some other explanation for the incorrect modeling, one that (perhaps) requires less change to other areas of the mechanics – and implement the consequences and knock-on effects without fear or favor, having first adjusted for the incorrect assumptions already present.

    Anything magical or mechanical in nature should either get a little cheaper or a lot more expensive. Anything that requires extremely high skill, likewise. Anything in common demand will be more easily available, and this may act as a depressant on the price.

    Similarly, apprentice numbers for blacksmiths and wizards and what-have-you will either go down considerably, or go way up.

    These changes aren’t rocket science; they are fairly straightforward and simple, actually. But there’s a lot of them.
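
    If you’d rather apply those adjustments systematically than argue over each item, a helper along these lines can do the bookkeeping. This is a minimal sketch; the categories and multipliers are entirely hypothetical, so plug in whichever direction of change you chose above:

```python
# Hypothetical price-adjustment pass for a fantasy equipment list.
# The multipliers encode the rules of thumb above: scarce skilled
# labor makes high-skill goods dearer, while common demand improves
# availability and depresses prices. Tune them to your own campaign.
MULTIPLIERS = {
    "magical":    1.5,   # or below 1.0, if magic made such labor cheap instead
    "mechanical": 1.5,
    "high_skill": 2.0,
    "common":     0.8,
}

def adjust_price(base_price: float, tags: set[str]) -> float:
    price = base_price
    for tag in tags:
        price *= MULTIPLIERS.get(tag, 1.0)
    return round(price, 2)

print(adjust_price(100.0, {"mechanical", "high_skill"}))  # 300.0
print(adjust_price(2.0, {"common"}))                      # 1.6
```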

    Once those are complete, you can start thinking about economic flows and who has money – and who doesn’t, but wants it – because the generic fantasy society that I have often seen at play is no more realistic.

    To be clear – you can choose not to change a thing, especially if this level of realism is not considered desirable by your players; but this should be an intentional choice, and those who make it should at least give passing consideration to the consequences.

    Sci-Fi Optimism: A Simpler Age

    But, befitting an age of technology, there’s a lot more to talk about on the Sci-Fi front.

    Modern sci-fi is far more dystopian in tone, far more cynical and pessimistic. Sci-fi that’s rooted in the era can go one of two ways:

    • It can be faithful to the era, with a far more positive outlook; read classic Heinlein and Asimov and EE ‘Doc’ Smith for tonal cues. And, in general, think a little more ‘Victorian’.
    • Or, you can adapt to service a modern audience, with cautious injections of pessimism and cynicism – but these changes won’t come out of nowhere and will have knock-on effects, and your campaign setting will need to incorporate and reflect those. Start with the three axes of conflict described at the start of this post, amp them up to 11, and throw in modern levels of political corruption; then incorporate some form of massive betrayal of the people to create that tonal quality within society. Go full pre-Cyberpunk, in other words.

    Sci-Fi Pessimism: Monster-bashes

    Monster movies should be treated as documentary references. This week, the Triffids; next week, Them; and so on.

    Take Myths, Legends, and Cryptids, and add a sci-fi twist. The Headless Horseman from Mars? The Radioactive Ghost? Swamp Men from Venus?

    Why not?

    Sci-Fi Optimism: Depth & Richness

    Both pessimistic and optimistic genres are morally-simplified in some ways. Identify the ones that pertain to your particular genre and run with them.

    In particular, though, the pessimistic route involves a more universally-downbeat attitude; greater variation and richness is possible in a more optimistic campaign, even if the optimism is a single persistent thread through the darkness.

    Sci-Fi Pessimism: Apocalyptic Visions

    There is no such thing as the doomsday clock in an optimistic vision of the sci-fi world; in a pessimistic sci-fi campaign, it should represent an ever-present existential threat.

    A perfect comparison is possible: watch both the original 1951 version of The Thing (The Thing From Another World) and the John Carpenter remake. Then watch a whole bunch of other sci-fi and categorize each, tonally, into either the ‘B&W Thing’ or the ‘Carpenter Thing’ compartment. Alien? – Carpenter. Aliens? B&W. The Blob (the original, with Steve McQueen)? B&W – the good guys win in the end, and the threat is ended. Invasion Of The Body Snatchers? More ambiguous, and there’s always a suspicion that a pod has survived, somewhere – so that equals paranoia, and that’s Carpenter in classification.

    And so on.

    Sci-Fi Optimism: The Scale of Ginormous

    More than anything else, this section tips a hat toward EE ‘Doc’ Smith, and towards the original Star Wars (the revised Death Star in Return Of The Jedi may have been bigger, but it didn’t feel bigger. Just the opposite, in fact).

    Anything worth doing is worth overdoing. Spacecraft 5 miles long? Go for it! Spacecraft 15 feet long? Get ye to the Dark Side – except in comparison to the scale of the enemy, of course!

    That’s the only reason for the X-Wings to be so small – to make them seem insignificant relative to the Star Destroyers and Death Star of the Empire.

    This applies to more than just the physical infrastructure. Contemplate for a moment the economics of building something on the scale of a death star. Here, this site should help: John M Jennings – Economics of the Death Star.

    Superheroics & Idealism

    Okay, let’s take a sideways step in Genre. It’s clearly just a short step from the positive sci-fi sub-genre to the idealism inherent in a superhero campaign.

    Once again, though, contemplate the economic impact of what your PCs and their enemies are up to. If there’s one crisis a month, resulting in significant damage to one or more metro areas, that’s a damage bill that’s going to total up into the billions – of 1970s dollars. Possibly more.
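
    To put a very rough number on that (the per-crisis figure is pure guesswork on my part, purely to show the order of magnitude):

```python
# Back-of-the-envelope damage bill; every figure is an assumption.
crises_per_year = 12                # one crisis a month
damage_per_crisis = 150e6           # say $150M (1970s dollars) per metro-area incident
annual_bill = crises_per_year * damage_per_crisis
print(f"${annual_bill / 1e9:.1f} billion per year")  # $1.8 billion per year
```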

    Either the national economy of your setting is going into a lasting depression, with public confidence in the toilet and going under for the third time, or there is some factor that’s giving everyone an unlikely positivity.

    Two obvious factors can (should?) play into that confidence: the good guys always win (in the end), and/or there’s a steady growth in technological prowess that shows up as a more vibrant economic outlook.

    Let’s start by thinking about the rebuilding costs – unemployment goes down, and scarcity of good workers drives wages up. That money has to come from somewhere, and the easiest source is a more rapid technological progression, which boosts corporate profits. And it all plays into greater tax revenues. But, since 90% of the economy gets those positive effects without experiencing the downside, the result is an economic boom.

    So far, so good. Sure, the government will have some additional expenses – a more potent space industry? A holding facility to contain supervillains? And so on – check and check. Rebuilding that damaged infrastructure is just another of those items.

    Let’s say that half of the extra tax revenues gets eaten up – ten per cent per item, plus one or two not listed. The government can bank 20% of what’s left, and still give everyone a 30% tax reduction.
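
    As a back-of-the-envelope check on those percentages (the split is the one described in the paragraph above; the units are arbitrary):

```python
# The revenue split described above, made explicit.
extra_revenue = 100.0                     # windfall tax take from the boom

expenses = 0.5 * extra_revenue            # half eaten: ~10% per item, plus one or two more
remainder = extra_revenue - expenses      # 50.0
banked = 0.2 * remainder                  # government banks 20% of what's left -> 10.0
tax_relief_fund = remainder - banked      # 40.0 available to hand back as tax cuts

print(expenses, banked, tax_relief_fund)  # 50.0 10.0 40.0
```

    Whether that remaining 40% of the windfall actually funds a ‘30% tax reduction’ depends on how large the boom is relative to the pre-boom tax base – but the shape of the argument holds.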

    Next, contemplate the industrial benefits of regularly replacing aging industrial resources. Japan and West Germany, it has been argued, benefited massively from such replenishment post-World War 2 – but don’t take my word for it, do your own research on the subject.

    That’s easily another 10% kick along for the economy – because additional government spending always comes back three-fold, if you wait long enough, provided that the spending isn’t going straight into the pockets of some corrupt corporation or politician.

    Okay, that’s all just a starting point; you can take it as far as you think you need to. But there’s a lot of good reasons for optimism in that lot, don’t you think?

    Modern Pulp

    Modern pulp – the Clive Cussler model, for example – takes superheroes out of the picture and relies on extraordinary examples of ordinary people rising to the occasion. In general, this straddles both positive and negative tones, and so the surrounding world is not going to be all that different from our own.

    What follows, in my opinion, is a more dynamic roller-coaster in terms of the economy – more significant and prominent ups and downs. But instability of this type makes investors more nervous, and is (in itself) a negative impact on the economy.

    Once again, then, we need some positive counterbalance – just to sustain the status quo, in this case. What might that be?

    It could take any of several forms – a series of medical breakthroughs, for example, or the discovery of friendly aliens (even if they are standoffish, with some version of the Starfleet Non-Interference Directive, the mere fact that there are solutions to problems if we want them badly enough could be enough).

    We don’t need an impact on the same scale as superheroes provide – a mere 10% should be enough to cover the shortfall, or even less.

    Into this environment, we can then add the benefits of altruistic big business – and all the social changes that flow as a consequence – and we find ourselves firmly in the positive frame, in which all problems have solutions, and the good guys and girls always win in the end. Both of these are part of the infrastructure of such campaigns, a necessary assumption – but one that isn’t often enough factored into the broader society.

    War Games

    Back in the two-genres mold, we find military-based campaigns. These range from WW2 (positive) to Korea (positive but just barely) to Vietnam (negative, and not much fun). But alternate histories provide a more flexible foundation that can occupy any particular space on the map.

    For example: at the height of the Korean War, the USSR invades Canada, intending to plant a Soviet super-state right on the American doorstep. Already stretched by the Asian conflict, the US (and its allies) can’t spare a huge manpower commitment – so it puts together an elite force – and suddenly we’re back in ‘Modern Pulp’ territory.

    Spies & Spy Games

    The final genre to be considered is one that goes hand-in-hand with Cold War settings: the solo super-spy or elite counterintelligence force. Variations take place in WW2 settings.

    There’s good reason for what many consider ‘the definitive James Bond’ to derive from this era, and that’s where your economic cues should be drawn from – in essence, whatever it takes (within reasonable limits) is available at need; but you always have to look for a less expensive alternative than simply throwing money at a problem.

    Go read (or re-read) the original Ian Fleming novels. There’s always enough money to spend on supervillain lairs or fancy gadgets. There’s a limited amount that can be spent on establishing a cover if necessary. Villains make fortunes by being villainous – but that only makes them a target that will eventually become the focus of attention.

    They really are the economics of the 60s and 70s, amplified.

Whew – got there at last! It’s been a marathon, but the finish line for this three-part article has now been crossed – and, in the process, the series grows to almost 80,000 words!

Next week: something completely different (and, since this part ran for an extra chapter, maybe the week after, as well).

Until then, have fun!
