This entry is part 11 in the series Economics In RPGs

The first mainframe I used professionally was a networked pair of DEC PDP-11/70s, the same model as the one depicted in this photograph. A successful minicomputer – yes, this is smaller than a mainframe! – the 11/70 dates from the mid-70s. Over 100,000 PDP-11 units were sold in the course of the decade. The example pictured includes two nine-track tape drives, two disk drives, a high-speed line printer, a DECwriter dot-matrix keyboard printing terminal and a cathode ray tube terminal, all installed in a climate-controlled machine room – cables run underneath the suspended floor. Image by Wikipedia Commons user Kozan, who released it into the Public Domain on April 3, 2016. Image reference page: https://en.wikipedia.org/wiki/File:PDP-11-70.JPG

A word of advice: Each part of the series builds heavily on the content from the previous one. While you may be able to get relevant information without doing so, to get the most out of each, you should have read the preceding article. In this case, though, that “previous part” is actually the one before last, and a three-chapter set of quite lengthy posts. You might have to skim – just bear in mind that if anything is puzzling but not explained, it’s probably because it has already been explained earlier in the series.

Welcome & General Introduction

I’m still not clear on how this article will turn out. My thinking has gelled considerably since the situation encountered last week, but my magic eight-ball is still very cloudy and unreliable.

What’s clear is that this post is going to encompass just one or two of the bullet points planned for the end of the series, and that even those will comprise at least three major sub-sections.

How to fit three into two has been the structural problem I’ve been wrestling with for the last couple of weeks. Hopefully everything makes sense. And what it shows is that there is virtually no chance of getting this entire article in one hit – whether it will be in two or three chapters (or possibly more?) remains to be seen.

A disclaimer: I am not an economist and I’m not trying to turn anyone else into an economist. An awful lot of this content will be simplified, possibly even oversimplified. Bear that in mind as you read.

A second disclaimer: I’m Australian, with a working understanding, however imperfect and incomplete, of how the US economy works, and an even more marginal understanding of how the UK economy works (especially in the post-Brexit era). Most of my readers are from the US, with Brits in second place; Canadians and Australians fight over third place on pretty even terms. Those are the contexts in which what I write will be interpreted – and that means that the imperfection can become an issue.

Any commentary that I make comes from my personal perspective. That’s important to remember. Sometimes an outside perspective helps you see something that isn’t obvious to those enmeshed in a system, and sometimes it means you aren’t as clued-in as you should be. So I’ll apologize in advance for any errors or offense.

I’ll repeat these disclaimers at the top of each part in this series.

Related articles

This series joins the many other articles on world-building that have been offered here through the years. Part one contained an extremely abbreviated list of these. There are far too many to list here individually; instead check out

the Campaign Creation page of the Blogdex,

especially the sections on

  • Divine Power, Religion, & Theology
  • Magic, Sorcery, & The Arcane
  • Money & Wealth
  • Cities & Architecture
  • Politics
  • Societies & Nations,
  • Organizations, and
  • Races.

Where We’re At – repeated from Part 3

Along the way, a number of important principles have been established.

  1. Society drives economics – which is perfectly obvious when you think about it, because social patterns and structures define who can earn wealth, the nature of that wealth, and what they can spend it on – and those, by definition, are the fundamentals of an economy.
  2. Economics pressure Societies to evolve – economic activity encourages some social behaviors and inhibits others, producing the trends that cause societies to evolve. Again, perfectly obvious in hindsight, but not at all obvious at first glance – largely because the changes in society obscure and alter the driving forces and consequences of (1).
  3. Existing economic and social trends develop in the context of new developments – this point is a little more subtle and obscure. Another way of looking at it is that the existing social patterns define the initial impact that new developments can have on society, and the results tend to be definitive of the new era.
  4. New developments drive new patterns in both economic and social behavior but it takes time for the dominoes to fall – Just because some consequences get a head start, and are more readily assimilated into the society in general, that does not make them the most profound influences; those may take time to develop, but can be so transformative that they define a new social / political / economic / historic era.
  5. Each society and its economic infrastructure contains the foundations of the next significant era – this is an obvious consequence of the previous point. But spelling it out like this defines two or perhaps three phases of development, all contained within the envelope of a given social era:
    • There’s the initial phase, in which some arbitrary dividing line marks the transition from one social era to another. Economic development and social change are driven exclusively by existing trends.
    • There’s the secondary phase, in which new conditions, derived from the driving social forces that define the era, begin to infiltrate and manifest within the scope permitted by the results of the initial phase.
    • Each of the trends in the secondary phase can have an immediate impact or a delayed impact. The first become a part of the unique set of conditions that define the current era, while the second become the seeds of the next social era. There is always a continuity, and you can never really analyze a particular period in history without understanding the foundations that were laid in the preceding era.

The general principles contained within these bullet points are important enough that I’m going to be repeating them in the ‘opening salvos’ of the remaining articles in the series.

Introducing The Digital Age

It’s actually been quite difficult to conceptually unify the events of the last 50 years. It’s as though the historic period was filled with movements containing beginnings, actuality, and either endings or transitions, but these distinct narrative threads have overlapped continually with others, occasionally producing unexpected synergies and compounded problems.

It was only when I realized that this, in itself, was a connecting thematic thread characteristic of the period that things started to fall into place, answering the question of “why?”. Suddenly, there seemed to be an inevitability to the pattern I had observed, responses being shaped by the intervention of technology and by the social environment engendered by the technology of the period.

What had been just one element of the last fifty years – computers and computer applications – emerged as the driving force behind many of the changes. By this time, I had identified many other themes of the age, and discarded those that didn’t hold for the whole period. With the new touchstone, several coalesced into unexpected forms, and I was left with five, plus that touchstone. As the formal layout of the article proceeded, a seventh theme became apparent.

These themes are not uniform through the period; they often shift to a different focus or manifestation as the period progresses.

    Theme 1: Outcome-targeted Change

    There was a time when a unified, internally consistent vision of what the world could become formed the central ideology that framed the policies offered by political parties. As the digital age progresses, this central ideology is eroded in one of two ways.

    The first is an obsession with gaining power for its own sake; this mandates humiliation and repudiation of the opposition to perpetually demonstrate their perceived unfitness for office. In the US, the Republican Party has fallen down this rabbit hole.

    The alternative is the proposition of specific policies that are not reflective of the general ideology, but are exceptions aimed at a specific policy outcome in the affected field. “We still believe in doing X, but in this specific case, we need to do Y in order to achieve Z. Once we have done so, we will have to reexamine our priorities and policies, and it’s likely that some compromise between X and Y will be necessary, achieving most of the benefits of X but maintaining Z.”

    I’ve heard this statement, or variations on it, at least a dozen times over the last twenty or so years. It’s a common feature in US and Australian Politics, and I suspect that it will also be the case through the Parliaments of Europe, though I don’t know enough about their domestic politics to confirm this suspicion.

      Theme 1a: Eco-warriors Through The Modern Age

      A secondary manifestation of this theme is the changing context in which those who would term themselves “Eco-warriors” express their activism.

      In the early 70s, the focus was on industrial pollution in cities, groundwater contamination by manufacturing, and logging. The EPA cleaned up the first two, and compromises were reached with loggers to enable them to get back to work – under scrutiny and oversight.

      In the 80s, the focus shifted to the preservation of endangered species – more accurately, to the protection of such species from industrial threats. Scandal and controversy touched some of the biggest organized Eco-warrior groups over the next 20 years, ranging from the sinking of the Rainbow Warrior to accusations of profiteering leveled at the WWF.

      The 90s saw governments expanding their environmental protections, but only in specific areas: asbestos, and the ozone layer. As the impacts of past regulatory changes began to accumulate, the skies began to clear and water became safer to drink; for the most part, the environmental portfolio consisted of attempts to cut protections in the name of business efficiency, and was mostly forgotten by the general public.

      Millennium Blues

      The new Millennium brought with it new challenges – deforestation in the Amazon region and the first warnings about Global Warming. By the end of the first decade, in-principle commitments had been reached by a number of nations around the world to limit atmospheric carbon – eventually. Initial targets were so low that it was possible to finagle the books to meet them, something the Australian Government of the time was rightly criticized for, both internally and externally. With some sort of action being taken, a lot of the heat went out of the Eco-warrior movement, which began to be seen as a fringe concern.

      In the decade leading up to the Pandemic, action was more driven by consumers and corporate profit, especially when it became clear that renewable sources were becoming cheaper than traditional power-generation mechanisms. But the decade of neglect and desultory inaction began to catch up with reality as signs began to emerge that global warming was proceeding at a faster rate than expected.

      Last year, some places had thirty extra inches of snow, or so I’ve heard. The record-breaking heat wave through the South and Midwest is unprecedented. The hurricanes striking the Atlantic Coast have increased in frequency if not in severity – but the latter caveat might be an artifact of the broad brush of the Saffir-Simpson scale; if hurricanes were measured as “Category 4.7” instead of rounding down to Category 4, an increase in intensity might also become apparent. Then there are the Canadian wildfires, and a year or two back, the Californian wildfires. And, of course, there’s one hitting California as I write this.
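
      The rounding point is easy to illustrate. Here is a toy sketch in Python – all intensities invented purely for illustration – showing how two sets of storms can be reported as identical whole-number categories while the underlying intensities have clearly drifted upward.

        import math

        # Toy illustration (made-up intensities): rounding storms down to whole
        # categories can hide a real upward drift in intensity.
        earlier_decade = [3.1, 3.4, 4.2, 3.6, 4.1]
        later_decade   = [3.9, 3.8, 4.7, 3.9, 4.9]

        for label, storms in (("earlier", earlier_decade), ("later", later_decade)):
            categories = [math.floor(s) for s in storms]   # what gets reported
            mean = sum(storms) / len(storms)
            print(f"{label}: reported categories {categories}, true mean intensity {mean:.2f}")

      Both (invented) decades report the same categories – 3, 3, 4, 3, 4 – but the mean intensity has risen from 3.68 to 4.24.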

      Closer to home: a season of unprecedented bushfires here in Australia was followed by two once-in-a-century floods – thirty days apart! A few communities were hit by both. And we’re expecting a record year for bushfires, in both severity and threat; our authorities admitted last week that they had been able to do only 20% of their planned remediation. As of last night, even though it’s still officially winter here, there were 60 fires burning out of control!

      The more such events stack up, the less likely it is that ‘coincidence’ or ‘it’s just a bad run’ or other dismissals are an adequate explanation. I’ve been a skeptic in the past, waiting for clear and unequivocal evidence while advocating for measured climate-change action ‘just in case’. To be clear, I considered it undeniable that the climates of the world were changing, heating up; only the cause remained unproven to me – it was still a crisis that needed urgent attention.

      And that brought the fringes-of-society Eco-warriors out of the woodwork, with ever more extreme acts of vandalism to get media attention, all in the name of accelerating progress not just toward net zero but toward an actual reduction in carbon levels. These acts would be unthinkable to anyone in the mainstream, and are strongly condemned – and they annoy people sufficiently that climate action is becoming less popular within the community.

      For those who don’t think that a relatively small change in this space can have huge impacts, think of the atmospheric carbon, and the additional energy that it captures and pumps into the environment, as catalysts, accelerating and intensifying the energy transfers and atmospheric consequences that manifest as weather.

      And yet, just today, I saw a post on Quora in which a denialist tried to rally support for his delusions.

    Theme 2: Limited resources

    This era begins with the Oil Crisis of 1973. While that subject will get appropriate scrutiny in a later section, this section is more concerned with the long-term impacts. US domestic oil production had peaked in 1970, and it had long since become cheaper to buy oil from the Arabian Gulf than to find and exploit it locally; as a result, the US had been growing more dependent on foreign oil for about a decade.

    If there were any insecurities about the reliability of supply, the US domestic reserve – a stockpile aimed at meeting critical needs in the event of an emergency – quelled them. And so, the lemmings marched toward the edge of the cliff, unaware of the doom they faced. No, that’s probably being a little unfair and a little over-the-top melodramatic.

    I have to point out that the Crisis was experienced in many different countries around the world.

      The initial nations targeted were Canada, Japan, the Netherlands, the United Kingdom and the United States, though the embargo also later extended to Portugal, Rhodesia and South Africa.

      — Wikipedia, 1973 Oil Crisis

    The reality was that the embargo itself had minimal effect on the supply of oil; but the resulting panic and perceived threat drove the price of crude oil through the roof, and caused many nations to start hoarding oil against the possibility of more serious restrictions of supply, which (supply-and-demand) drove that price even higher. This meant that even nations which were not explicitly targeted, like Australia, were impacted.

    Impacts

    The direct impact was profound.

      The crisis reduced the demand for large cars [globally]. Japanese imports, primarily the Toyota Corona, the Toyota Corolla, the Datsun B210, the Datsun 510, the Honda Civic, the Mitsubishi Galant (a captive import from Chrysler sold as the Dodge Colt), the Subaru DL, and later the Honda Accord, all had four cylinder engines that were more fuel efficient than the typical American V8 and six cylinder engines. Japanese imports became mass-market leaders with unibody construction and front-wheel drive, which became de facto standards.

      From Europe, the Volkswagen Beetle, the Volkswagen Fastback, the Renault 8, the Renault LeCar, and the Fiat Brava were successful. Detroit responded with the Ford Pinto, the Ford Maverick, the Chevrolet Vega, the Chevrolet Nova, the Plymouth Valiant and the Plymouth Volare. American Motors sold its homegrown Gremlin, Hornet and Pacer models.

      — Same Source

    But I think there was an indirect impact that went even deeper than changing global consumer habits. There had been warnings, starting in the 1800s, that oil was a limited resource that would eventually run out (see Wikipedia, Reserves-to-production ratio). As that article states, there have been many false claims that the world’s oil was soon to run out, caused by simplistic interpretations and assumptions that, time and again, have proven false. Such claims inevitably follow or accompany any oil supply crisis – and there had never been one on the scale of the 1973 Oil Crisis.
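
    The “years remaining” figures behind those false claims generally come from the reserves-to-production (R/P) ratio: proven reserves divided by current annual production. Here’s a minimal sketch in Python – all figures invented – of why reading that ratio as a countdown keeps producing false alarms: both the numerator and the denominator keep moving.

      # Toy sketch (all numbers invented): the reserves-to-production (R/P)
      # ratio behind most "the oil runs out in N years" claims, and why a
      # static reading of it keeps producing false alarms.

      reserves = 600.0    # hypothetical proven reserves, billions of barrels
      production = 20.0   # hypothetical annual production, billions of barrels
      additions = 18.0    # hypothetical annual additions (new discoveries, technology)

      for year in range(0, 41, 10):
          rp = reserves / production
          print(f"Year {year:2d}: rolling R/P forecast = {rp:4.1f} years of supply")
          # Advance a decade: production draws reserves down, exploration tops them up.
          reserves += 10 * (additions - production)

    In this toy model, the naive prediction made in year 0 – thirty years of oil left – never comes true, because new discoveries keep topping up the reserve figure. What the 1973 crisis changed was not the arithmetic but the public’s willingness to believe the conclusion.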

    I think that people looked around at the material objects in their lives and realized that everything they owned was constructed from raw materials, and that there was a finite quantity of those raw materials on Earth. The notion that, sooner or later, something critical would run out became embedded in a lot of official thinking thereafter.

    Limits to Services

    And then the concept was broadened to service delivery – there are only so many doctors, and they can only see so many patients; if that isn’t enough to cope with the normal population of an area, you need to provide more doctors. Or social workers, or psychologists, or plumbers, or whatever.

    Previously, there was the presumption that shortages would lead to higher prices, which would make the employment sector more appealing to new recruits, which would produce more of whatever was in short supply. The “free market” would self-correct, in other words. But that presumption rests on its own assumptions, and those become invalid when government policies change. For example, if regulations increase the qualifications needed to become a doctor, for what seem like perfectly valid reasons (even if those are a knee-jerk reaction to some failure of medical practice by a single bad apple), fewer doctors will graduate as a result – as the sketch below illustrates.
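
    Here is a minimal sketch in Python (all numbers invented) of why the self-correcting assumption fails so easily: the price signal arrives immediately, but the supply response only arrives after the training pipeline completes – and lengthening that pipeline stretches the shortage.

      # Toy labor-market sketch (all numbers invented): a shortage raises pay at
      # once, but extra doctors only arrive once the training pipeline completes,
      # so lengthening the training requirement stretches the shortage.

      def years_of_shortage(shortfall: int, extra_graduates_per_year: int, training_years: int) -> int:
          """Years until cumulative extra graduates cover the initial shortfall."""
          years, filled = training_years, 0
          while filled < shortfall:
              filled += extra_graduates_per_year
              years += 1
          return years

      print(years_of_shortage(1000, 200, training_years=6))   # 11 years of shortage
      print(years_of_shortage(1000, 200, training_years=9))   # 14 years of shortage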

    Baby Boomers

    These assumptions also become invalid when population growth rates change. In 1964, the post-war baby boom came to an end.

      The term “baby boom” is often used to refer specifically to the post–World War II (1946–1964) baby boom in the United States and Europe [and Australia, New Zealand, and several other nations].

      Although the answer of when it happened can vary, most people agree that the baby boom occurred around 1946 and 1964. This generation of “baby boomers” was the result of a strong postwar economy, in which Americans felt confident they would be able to support a larger number of children. Boomers also influenced the economy as a core marketing demographic for products tied to their age group, from toys to records.

      — Wikipedia, Baby Boom

    Worker Shortages

    In the 1970s, people started realizing what the end of the baby boom would mean for the future. In essence, an aging population would increase the demand for certain services, while the declining birth rate meant that there would be fewer people to provide those services, either practically (fewer doctors) or economically (fewer incomes).

    This realization led directly to the demand for increased productivity from workers that has featured so heavily in pay-scale negotiations over the last 25 years or so, at least here in Australia (and, I believe, in every other nation that experienced the baby boom). The intensity of those demands has varied over the years, but there have to be limits to how productive (in economic terms) a single individual can be.

    I came in on the tail end of the Baby Boom, being born in 1963. I’m currently 60, and almost half a year beyond that mark. The vast bulk of Baby Boomers are now in their 60s and 70s; ten years from now, that will be their 70s and 80s. Governments the world over are taking this seriously, and it has already led to economic changes.

    More than half the Baby Boomers have already retired, and are now principally living off whatever they set aside along the way. That means that investment capital is being withdrawn from superannuation accounts and spent, instead of staying put and providing sustainable cash reserves. Within the next 8 years or so, this process will be largely complete, and superannuation companies will have a lot less cash to spread around. That will have an effect on stock markets – fewer customers, lower demand – and supply-and-demand ordains that share prices will erode at least somewhat as a result. Superannuation providers will become economically shakier over the next ten years or so, more prone to problems they could previously have resisted. Some will undoubtedly fail.
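
    A minimal sketch (all numbers invented) of the flow described above: while workers outnumber retirees, contributions exceed withdrawals and the pool of investable capital grows; as the retired share of members rises, the net flow turns negative and the pool shrinks.

      # Toy superannuation-pool sketch (all numbers invented): the net annual
      # flow into the pool of investable capital as the retired share grows.

      members = 1_000_000
      contribution = 8_000    # hypothetical average annual contribution per worker
      withdrawal = 30_000     # hypothetical average annual drawdown per retiree

      for retired_share in (0.2, 0.3, 0.4, 0.5):
          workers = members * (1 - retired_share)
          retirees = members * retired_share
          net_flow = workers * contribution - retirees * withdrawal
          print(f"{retired_share:.0%} retired: net flow = {net_flow / 1e9:+.1f} billion per year")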

    Worker Shortages have been making headlines in the post-pandemic economy. My take on this situation is ‘get used to it’ – this is only the beginning.

    Most seriously impacted will be any business that is manpower-intensive, which is the reason why there is so much interest in automation at the moment. Technology has reduced the need for employees in many fields over the last 50 years, and AI promises to continue that trend. But seasonal workers from outside the country are a very real necessity now and into the future. This isn’t just happening in Australia; it’s a near-global trend (for proof, just look at what happened to the Florida economy after the Governor’s recent anti-immigrant legislation).

    Right now, the primary impact is on unskilled labor (which is actually not all that unskilled, these days). But in the future, it could apply to doctors and nurses and electricians and other qualified professionals.

    Theme 3: Myopic Expectations

    Every now and then, a policy seems to be announced, by whoever is in office at the time, that relies on the magical powers of wishful thinking to have the desired impact. There are two logical fallacies involved:

    • “We have to do something. THIS is something.” – but will it have the desired effect? Or is it busy-work, or a band-aid, or an effort to at least look like you are doing something?
    • “If we do this, and the this happens, and the that happens, then everything will be great for everybody. We’ve consulted the best experts and they assure us that everyone in the industry who matters wants this.”

    Not all offerings are this blatant, or this unrealistic. And some are even worse, I’m sorry to say. The more common problem, as referenced in the section title, is that each department head in a government starts to develop myopia when it comes to matters outside their particular jurisdiction. That becomes a problem when a policy decision directly affects matters outside their portfolio, or is impacted by them.

    The worst example that comes to mind – and there have been many in recent years – is the Robodebt scandal here in Australia.

    • Take an unproven, ideologically-inspired, assumption: People on welfare are always trying to cheat the system.
    • Advance a test that has inherent flaws: Compare the income reported to the welfare authority to the total income listed on the individual’s tax return, averaged to match the fortnightly reporting periods.
    • Devise a scheme to identify those mismatches and recover the money that these individuals have “stolen” from the taxpayer.
    • Ignore multiple pieces of advice that say that the scheme is not legal, and legislation will be needed.
    • Get no outside audits of how much can be recovered, or how rife under-reporting actually is. In particular, ignore the fact that those on welfare are required to estimate their income if they don’t know the exact amount on the date the form is lodged each fortnight.
    • Institute your scheme in such a way that any mismatch automatically generates a penalty notice and applies that penalty.
    • Permit those accused to challenge the penalty notice by producing proof of income from up to ten years prior, even though the law only requires records to be kept for 7 years – a requirement that a significant percentage of the population ignore.
    • Get totally shocked when the whole thing blows up in your face.

    I hope that no reader will ever be able to match that comedy of errors and willful myopia. But I know better – I could point at the ‘social war’ underway between Disney and the Florida Government, for example, where the score is currently 4-1 in favor of Disney – with the whole contest being a petty, childish own goal by DeSantis.

    Theme 4: Everything’s The Same, Until It’s Not

    This is something that I noticed with regard to advances in technology, and in particular digital technology. No matter how profound the social change that such technology (including applications) will eventually bring about, there is a general perception that nothing will change as a result.

    For a time, this expectation is proven true – it generally takes time for any significant social change to permeate an entire culture, even now. But in the longer term, if you look at what the technology enables you to do that you could not do before, or what those who develop such technologies could create based on such a platform, the eventual impact can be forecast.

    The same is true of everything else that ends up being a critical element of economic change – it generally takes time to manifest.

    Theme 5: Permanent Opposition

    I could just as easily have entitled this section “Blind Polarization”. Ideological positions are becoming, or have become, entrenched and unyielding, with levels of obsessive belief hitherto associated with cults. It has obviously happened in the US; it is happening here. The practical manifestation is an inability to compromise – “whatever it is, I’m against it”.

    I could go into the whole history of the situation, starting with Newt Gingrich, but I don’t know that there’s any value to it. Suffice it to say that this trait goes from almost non-existent in 1973 to almost-universal in 2023, and it’s almost completely a trait of the right wing of politics.

    This is not purely an American problem; variations have been exported to the UK, to Australia, and to many other nations, in part because it appears successful as a stratagem in the USA. The problem is that I can’t see it being overturned until it causes some irreparable damage to one of the political parties espousing it, sufficient to persuade the remainder to forswear the philosophy. Or something less likely happens, of course! A charismatic leader promoting cooperation and mutual benefits, perhaps – but probably not.

    Theme 6: Digital Revolutions Change Everything – and Nothing

    I’ve touched on this already, in discussing Theme 4. Every revolution in the digital world starts by creating something that does exactly what the existing technology does (hence things stay the same) but with added new potentials. As those potentials are slowly translated into actual capabilities and people learn to exploit them, an inevitable cumulative effect begins that ultimately reshapes the technological and/or social landscape.

    Contemplate for a moment the shape of those digital revolutions. When our period opens, computing is restricted to mainframes that are horribly expensive and require extensive customer programming to be useful. These are at their best in handling large datasets – a census, a bank, an insurance firm, and so on. It is sometimes said that the electronic fuel injection systems of a ‘modern’ car (from 25 years ago) packed more computing power than the Apollo spacecraft did.

    For the record, that’s both true and misleading. NASA insisted on reliability, first and foremost, and that policy remains in effect to this day. One of the prices of that reliability is simplification and minimization, especially in terms of interface. So Apollo had the equivalent of programmable calculators, the Space Shuttle had the equivalent of 286s and Apple-IIs, and so on. Modern digital processors are now in the Pentium era – unless they are being privately built, in which case a different balance point between modernity and simplicity may have been chosen.

    The home computer revolutionized the office space. Off-the-shelf hardware. Off-the-shelf software.

    The mobile phone started off as not much more than a substitute for the nearest phone booth, but quickly became more portable and more convenient. Things eventually reached the point where it contained the potential to completely eliminate the land-line phone connection, ubiquitous as land-lines then were.

    The Graphical User Interface, or GUI, made computers easy for laypeople to use. Very few people can’t get the hang of one simply by moving a mouse around and observing what the pointer does on-screen.

    The internet and the world-wide-web enabled distributed processing – using remote hardware to execute complex processes. To a certain extent, computing was no longer about the hardware that you were using and its limitations; it was about the hardware that you could access, and how easily you could do things with it.

    The smartphone initially did little more than add a GUI to the telephone, providing exactly the same service already being delivered by mobile phones – with added convenience. These days, smartphones are less about making phone calls and more about sharing data – replacing identification, bank cards, and even music libraries.

    Social Media killed email almost as surely as email killed traditional snail-mail. These days, email is primarily a link-distribution method, not a primary communications channel. More importantly, social media meant that people were cross-connecting their beliefs and ideas while excluding those who did not share those philosophies, creating the echo chambers that continue to spread both information and misinformation to this day. And yet social media started off as nothing more than a chat room – a multi-person email of limited scope.

    Streaming started off as sharing a YouTube video, then became a catch-up service, and is now a direct existential threat to cable TV.

    AI is now threatening, or promising, to provide artificial simulations of creativity itself, while replacing menial human-to-human connectivity – phone an automated switchboard, and instead of a limited number of options to choose from (none of which sometimes seem to fit), you will be able to have a conversation with the AI, which will then determine how to route your call, and may well be able to initiate solutions to routine matters without human intervention.

    There could be others. For example, tracing popular music from the LP to the CD to the MP3 to Napster to iTunes to streaming services – an entire industry that has gone from cultural co-dominance to near-irrelevance over the course of the age. But the general principle is clear without it.

    Theme 7: Wastelands, Again and Again and Again

    Originally, this theme referred only to economic wastelands, but while writing it I became aware that environmental regulation, criminal law, and a whole bunch of other things also followed the same pattern.

    There is a certain extent to which this is an outgrowth of Theme 5. One political party cleans up the economic mess caused by some sort of financial disaster or potential disaster, the other lot eventually regain power and remove or disrupt the regulatory frameworks and safety mechanisms to enable those financially active in the space to make more money, and eventually the point is reached where the cycle can repeat itself.

    To be honest, though, this is something of an oversimplification. Each financial crisis is different, with different causes – when you dig into the finer details. It isn’t those finer details that I want to focus on, however; it’s the fact that these crises keep happening, usually 7-9 years apart.

    There are those who dismiss the pattern as simply the tail end of an in-built boom-and-bust cycle, but while some examples fit that criterion – the dot-com bubble, for example – others do not.

A Structure Of Contexts

These seven themes don’t exist in isolation. They operate in a context of events, frequently in combination with one or more of the other themes that is also manifesting through those contextual events, and there is always a historical element to the resulting narrative, too.

What remains in dissecting this era is largely an identification of those contextual threads – what was happening, and sometimes, why.

I was not expecting the clarity of structure that I eventually discovered within the era. Cued by the rolling repetition of those economic disruptions every decade or so, and starting of course with one of the most important, I began by subdividing the era roughly by decade. I was then able to identify one or more significant events, or sequences of events, at the start, middle, and end of each subdivision. I was helped in assembling the resulting structure by the inherent flexibility in defining end points; what mattered more was that the resulting structure be definitive of the subdivision of time concerned.

Most of the time, I could actually be guided by perceptions of the time subdivision in question. In trying to make the event bundles definitive of those perceptions, I found that the critical events fell into place fairly naturally and inevitably. The chronology may not always be exact – there can be some overlap, thematic elements at the beginning of a new period overlapping with thematic elements of the ending of the old. That’s why these periods all form part of a single broader era, instead of being relatively short eras in their own right.

My intent was to discuss each as succinctly as possible, so that the resulting structure could be more easily apprehended by the readers, but some of them are so significant or profound that it seemed inevitable that there would be too much text in between at times. So I’ve made the last-minute decision to present the structure as a table of contents of sorts.

Anatomy Of The Digital Era
  • First Period 70s-80s
    • Beginning: Oil Crisis
    • Middle: Mainframe Politics
    • End: Fall Of The Wall
  • Second Period 80s-90s
    • Beginning: Hope and Hostility
    • Trickle-down Reaganomics
    • Middle: Eighties Angst
    • Middle: Hope In The Face Of Despair
    • End: Digital Development
  • Third Period 90s–00s
    • Beginning: Invasion Of The PCs
    • Middle: Skirting Eco-disaster
    • Middle: Rise Of The Smaller Device
    • End: Hope Fails
  • Fourth Period 00s-2010s
    • Beginning: Internet Awakening
    • Middle: Megacorps Proliferate
    • End: Personal Tech
    • 9/11: Shockwaves & Awe
    • End: The GFC
  • Fifth Period 2010s-2020
    • Beginning: Social Media
    • Middle: The New Entrepreneurs
    • Climate Change: A Decade Of Lip Service
    • End: Stirrings Of Alarm
  • Pandemic Interruptus
    • Medical Economic Impact
    • Public Impact
    • Economic Disruption
    • Economic Stimulus
    • Reopening in a Blended Economy
    • The Fall Of Trump
    • The Age Of Biden?
    • The Corner Is Turned
    • General Reopening
  • Post-Pandemic Economics
    • Supply Chains: Rebuilding Trade
    • Workforce Decentralization
    • Restricted Oil
    • Ukraine Invasion
    • Paying The Piper
    • Playing Chicken With The World
    • Climate In Meltdown
    • An Imminent Pivot

This comprises my road map for the rest of this article, and any subsequent parts if I need to break it into two or more (as I expect to be necessary).

There’s a lot of ground to cover, so let’s get busy….

The Digital Age, First Period 70s-80s

The 1970s were full of whistling-in-the-dark pretense that nothing had changed since the 1960s, even as events shaped and re-shaped consumer spending habits, with attendant knock-on effects that would so dominate the 80s. Compared to the 60s, there was a manic edge, a harder edge, to popular culture – like comparing Woodstock to the Rolling Stones concert disaster at Altamont Speedway, or comparing a poppy folk song (“If You’re Going To San Francisco”) to anything from the disco era. There was a loss of innocence, much of it these days laid at the feet of the Vietnam War. And yet the Korean conflict, by and large, had not had the same effect; was this purely because of the newfound global reach of mass media, the televising of horrific images? In retrospect, I think there was another factor at play: the psychological consequence of the events that I have chosen to demarcate the beginning of the new era.

    Beginning: Oil Crisis

    When analyzed from the perspective of actual gasoline supply, the 1973 oil crisis was pretty much a non-event. In October of that year,

      members of the Organization of Arab Petroleum Exporting Countries (OAPEC)*, led by King Faisal of Saudi Arabia, proclaimed an oil embargo targeted at nations that had supported Israel during the Yom Kippur War. The initial nations targeted were Canada, Japan, the Netherlands, the United Kingdom and the United States, though the embargo also later extended to Portugal, Rhodesia and South Africa.

      — Wikipedia, 1973 Oil Crisis

      * Not to be confused with OPEC, who were blamed for the policy by a number of media outlets.

    Cheap prices and declining domestic production in the US had led to an increase in dependence on foreign oil, though most of it was actually purchased from Canada and Venezuela, and not from the Middle East as was popularly perceived. The US did purchase 638,500 barrels of oil a day from the embargoing nations, but that was a relatively small fraction – a little under 4% – of US consumption at the time (17 million barrels a day). Nor were any of those embargoed barrels actually withheld from the US. Actual enforcement of the embargo was less significant than the public perception that a significant amount of oil was being withheld.

    What did happen was that the price of oil skyrocketed, from under $10 a barrel to over $40. This was, of course, passed on to consumers in the form of higher gasoline and energy prices. Because the cost was so much greater, many gas stations could not afford to fill their storage tanks, and at the same time, public panic caused demand to spike. Inevitably, some suppliers ran out, and with each such event, the public panic grew. In an effort to get the escalating situation under control, many levels of government imposed quotas and rationing.

    Impact

    Of course, as shown in the previous part of this series, oil prices are directly inflationary in nature. In fact, oil-driven price rises compound with each other through the supply chain, and are inflationary to an extent far greater than might be expected.

    The side effects of the oil crisis caused the price of everything else to go up, and created wage demands to compensate – but things are never that simple: simply raising wages by that amount would only have increased inflation until the increase was completely eaten away, creating the need for yet another wage rise.

    One alternative is for prices to fall as inflation is curbed – but that never seems to happen, with good reason: falling prices would reduce the nominal profits of a business, which would undermine confidence and share value. That can be enough to kill even an extremely successful business, and it’s simply too hard to manage such situations. Instead, once inflation falls to manageable levels, measured wage increases are offered that restore purchasing power while retaining economic stability – at least until the next crisis.
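
    To make the wage-price spiral described above concrete, here is a minimal sketch in Python – all numbers invented – of the feedback loop: an external shock raises prices, wages chase prices, and firms pass part of the higher wage bill back into prices. Partial pass-through lets the loop converge; full compensation on both sides would just feed the next round.

      # Toy wage-price spiral (all numbers invented). An external shock raises
      # prices; wages catch up to prices; firms pass part of the wage rise back
      # into prices. A pass-through below 1.0 is what lets the spiral converge.

      price_level = 100.0   # index of consumer prices
      wage_level = 100.0    # index of nominal wages
      shock = 0.10          # hypothetical 10% external price shock (e.g. oil)
      pass_through = 0.6    # fraction of a wage rise that firms put back into prices

      price_level *= 1 + shock
      for round_no in range(1, 7):
          wage_rise = price_level / wage_level - 1        # wages catch up to prices
          wage_level *= 1 + wage_rise
          price_level *= 1 + pass_through * wage_rise     # firms recover higher costs
          print(f"Round {round_no}: prices {price_level:6.1f}, wages {wage_level:6.1f}")

    With pass_through set to 1.0, the loop never settles – which is the point made above: full compensation just feeds the next round, so the adjustment has to come from somewhere else.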
    The increase in oil prices created an impact splash that hit many more countries than were on the embargo list. Although Australia was not directly targeted, for example, the oil crisis had just as big an impact here as it did in the US.

    Consequences

    There were a number of direct consequences. Large vehicles with high gas consumption were immediately seen as vulnerable to the petrol price, and potentially unusable should supply be restricted. The number of small, efficient cars sold immediately assumed an upwards trajectory, eventually leading to the demise of many domestic manufacturers. There was an immediate governmental re-prioritization of domestic oil exploration and production, and ‘dependence on foreign oil’ became a catch-phrase employed by all sides of politics everywhere, and by the media that reported on politics. The perceived reality was that this was an area of politics with direct relevance to the ordinary worker.

    This, in turn, embedded in the global zeitgeist the concept that there was only so much oil out there. Experts had been warning of this reality for decades, but the message had mostly fallen on deaf ears; suddenly, it had a new cachet, because the oil-price shock could be highlighted as a taste of what was to come.

    Once the concept of limited resources takes hold, it produces a fundamental shift in perceptions. Everything is measured in terms of consumption of limited resources – productive hours, money, raw materials, you name it. In some cases, this simply locked into place shifts in public attitude that were already taking place; in others, it began the process of popularizing new concepts like sustainability and closed-cycle recycling. Everything from environmental awareness to power distribution would be affected – if not directly, then indirectly.

    Middle: Mainframe Politics

    Digital products were restricted, at the time, to governments and large-scale corporations, where there were gains to be made from the more efficient use of computer-based billing and analysis. The president of IBM, Thomas Watson, is alleged to have said in 1943, “I think there is a world market for maybe five computers”.

    Changes in technology which increased capability and vastly cut costs had already invalidated that prediction, but it remained a conceptual perception. Because computers at the time were difficult to program, stories filled the popular zeitgeist of ‘silly’ computer errors, like bills for one cent, all oriented around the literal-minded inflexibility of computers.

    But clever programming did exist, and permitted analysis of ever-more complex combinations of hypothetical situations, and these were used increasingly to guide the formulation of public policies.

    Sometimes, these policies were effective, but more often they failed because people are rationalizing, not rational, creatures; it might be in the collective best interest to behave in a certain way, but people will gravitate towards actions that better their personal best interests even at the expense of the collective welfare.

    Conflicts between human nature (especially a cynical interpretation of same) and the logic of collective welfare take hold as undercurrents in a lot of fiction of the era. On some occasions, the logic is shown to be faulty, because it does not take human nature into account; on others, it is human nature that is at fault, as flaws within humanity are given an opportunity to prosper.

    You see this even today, as policies that have some justifiable validity are manipulated beyond any acceptable standard. Billionaires who pay only a couple of hundred dollars of tax a year, for example, create the impression that the system is geared toward their benefit, when the reality is that they are better able to take advantage of situations to minimize their taxes.

    End: Fall Of The Wall

    Yes, I know that this didn’t actually happen until 1989, almost the end of the period that follows. What happened, historically, was that as the 1970s gave way to the 1980s, new trends began to supplant those carried over from the preceding era; Glasnost and the fall of the Wall marked the end of the cold war that had been so strongly a feature of the 1950s and 60s, for example.

    There was a slow retreat from the themes exemplified by the 1970s over the ensuing decade, and the fall of the Berlin Wall is the singular event that punctuates the end of that period. The 1980s are a blend of the philosophies of that era and the legacy attitudes of the 1970s.

    I made this same point in a different way when discussing the impact of digital technology; it takes time for social movements to transform potential into actuality, just as it takes time for digital potentials to be manifested into concrete technological and social changes.

    The 1970s were largely spent pretending that the social and political foundations of the 1960s were still completely applicable, despite gathering evidence following the oil crisis that this was not true; it was the ending of the cold war that made that fiction impossible to sustain. And the crescendo of that global thawing was the fall of the Berlin Wall.

The Digital Age, Second Period 80s-90s

“Greed,” according to a popular mantra of the 80s, “is good.” Well, no, it’s actually not – but the 80s were when people stopped fooling themselves that corporate behavior had any remaining vestiges of the altruism that was present at the end of the time of the robber barons. Being altruistic was an ice-cold marketing decision, nothing more or less. There are three movies that I consider definitive of the economic culture of the time.

The first is, of course, Wall Street, from whence the quotation springs. Gordon Gekko, played by Michael Douglas, delivers the line. Gekko, and the social environment he embodies, seduce the main character (played by Charlie Sheen) into sharing his worldview. The story, in its totality, is a fall-and-redemption narrative, as Sheen’s character uses the weapons of Gekko’s philosophy against his mentor, falling on his metaphoric sword to bring down the seductive but evil Gekko. The conflict between human values and corporate profits remains an exemplar of the era’s business philosophy.

Movie #2 didn’t seem to be as big at the box office; it’s something that I stumbled across on TV one afternoon, and became an instant favorite. Other People’s Money tells the story of “Larry The Liquidator”, a more comic-book version of Gordon Gekko who specializes in buying companies, stripping them of their assets, and then liquidating whatever remains once there is insufficient capacity to meet obligations – for example, pensions. When the targets are moribund and unprofitable, this can be seen as burying the corporate dead; but that’s never enough to satisfy, so the Liquidator turns his attention to companies that are productive and profitable, because they can be worth still more to him dead than alive.

Third on my list of movies is Working Class Man, starring Michael Keaton, released through most of the world as “Gung Ho,” about the takeover of an American Car Plant by a Japanese corporation. Although promoted as a comedy film at the time, it was repackaged here as a drama after the success of Batman, enjoying a successful second life. At the time of that second life, the buzzword throughout the white-collar sector was “Japanese Management Practices”, and the movie is all about the cultural clash between such management practices and the more casual, laid-back philosophy of the western worker.

I consider these three to be required viewing for any GM running a game set in the 1980s, or any setting that is an outgrowth of those political and economic philosophies – Cyberpunk, for example.

That’s the 80s in a (rather large) nutshell – corporate greed with no pretense. What is good for the stockholder is good for the company, and vice-versa; customers and workforce are nothing more than necessary evils to be exploited to the fullest extent of the law.

When this attitude was last in vogue, rebellions arose and brought forth the labor unions. This time around, a few cases of union corruption enabled a concurrent war against workers’ rights that has culminated in such protections being at something close to an all-time low. “At Will” employment statutes in many US states mean that an employee can be fired because the boss doesn’t like the color of the worker’s socks. Sold to the public as preserving a worker’s right to move from one job to another where pay and conditions were better, the actual effect on the ground has been significantly different.

In this period, those laws remained a future development. The contemporary reality was an emphasis, especially in the mass media, on the inconvenience inflicted on the public by workers striking for what they perceived as good reason (and which sometimes was, and sometimes wasn’t). No-one seems to have wondered whether the media in question (or their owners) had a vested interest in the issue.

These attitudes took time to manifest and deepen; the beginning of this period more strongly resembles the 1970s than it does this heartlessly grim picture, the middle of the 80s is somewhere in between as these social attitudes are taking root, and by the end of the period, they are entrenched (and viewed by some as the unchangeable new foundations of economic reality).

    Beginning: Hope and Hostility

    The first recorded aircraft hijack took place on February 21, 1931, in Arequipa, Peru (Wikipedia, Aircraft Hijacking). A sprinkling of other incidents took place through the years that followed, slowly growing in frequency. Between 1958 and 1967, there were 40 hijackings world-wide, while the FAA claims more than 100 attempted hijackings in the 1960s (same source). The 1967 termination date is significant, because in the five-year period starting in 1968 there were 326 attempted hijackings – a more than 6-fold increase in the annual rate of attempts.

    The 1980s saw a profound shift in the nature of these incidents – organized terrorists destroying aircraft to draw attention, or threatening to do the same in order to obtain specific political ends. The majority of incidents prior to this development were attempts to reroute aircraft to a destination desired by the hijackers; passenger screening and greater international cooperation had, by 1980, reduced the incidence of such attempts to significantly less than the 1968 level.

    A number of the new wave of attacks seemed linked to middle-eastern groups, a reaction to the greater involvement of outside powers in political events there. Another legacy of the 1973 oil-price shock – and of a second, more serious, shock in 1979 – was to highlight that this part of the world could not be taken for granted. Western ‘adventures’ into the politics of the region can be traced back to the Crusades – there was nothing new about them – but there seemed to be a rising intensity on all sides, and the existence and politics of Israel was a polarizing factor that only made these interventions and adventures worse in the eyes of many directly affected.

    Many have characterized these interventions as being all about oil; this attribution of motive is misdirected at best, and wholly incorrect at worst. Some were legitimately well-intentioned, others were for political advantage, and some were in furtherance of half-baked plans for the imposition of stability.

    It wasn’t just the Western powers, either – witness the disastrous Russian invasion of Afghanistan in December 1979.

    Regardless of the motivations, these all provoked reactions from those residing in the region. Because it was perceived that they were relatively weak in comparison, methods needed to be employed that had greater indirect and supplementary impact than direct impact – and thus the middle east became inextricably associated with Terrorism in the zeitgeist of the time.

      Afghanistan: An illustrative microcosm of mistakes

      In April 1978, the communist People’s Democratic Party of Afghanistan (PDPA) seized power in a bloody coup d’etat against then-President Mohammed Daoud Khan, in what is called the Saur Revolution. The PDPA declared the establishment of the Democratic Republic of Afghanistan, with its first leader named as People’s Democratic Party General Secretary Nur Muhammad Taraki. This would trigger a series of events that would dramatically turn Afghanistan from a poor and secluded (albeit peaceful) country to a hotbed of international terrorism.

      The PDPA initiated various social, symbolic, and land distribution reforms that provoked strong opposition, while also brutally oppressing political dissidents. This caused unrest and quickly expanded into a state of civil war by 1979, waged by guerrilla mujaheddin (and smaller Maoist guerrillas) against regime forces countrywide.

      It quickly turned into a proxy war as the Pakistani government provided these rebels with covert training centers, the United States supported them through Pakistan’s Inter-Services Intelligence (ISI), and the Soviet Union sent thousands of military advisers to support the PDPA regime.

      Meanwhile, there was increasingly hostile friction between the competing factions of the PDPA – the dominant Khalq and the more moderate Parcham.

      — Wikipedia, Afghanistan – Contemporary History – Democratic Republic and Soviet war

      An assassination led to the Soviet invasion, which led to the rise of a more feudal social structure of warlords, which led to a coalition government that collapsed into dysfunction, which led to another civil war, and so on. The occasional intervention on one side or another was disastrous. Eventually, the Taliban emerged as a movement and forcibly installed itself as ruler of the country. Some of the warlords survived within the new regime as terrorist groups, notably Al-Qaeda. Subversive acts of retaliation against those who had used the nation to fight their proxy wars followed, with the tacit approval of the regime.

    Afghanistan is a pointed example, but not the only one. Iran and Iraq tell similar stories in broad strokes, for example. As a result, random incidents of middle-eastern terrorism sprinkle the 1980s at regular intervals.

    Trickle-down Reaganomics

    From January 20, 1981, to Jan 20, 1989, Ronald Reagan was President of the United States. The economic policies of the sub-era are definitively those of his administration.

      These policies are characterized as supply-side economics, trickle-down economics, or “voodoo economics” by opponents, while Reagan and his advocates preferred to call [them] free-market economics.

      The pillars of Reagan’s economic policy included increasing defense spending, balancing the federal budget and slowing the growth of government spending, reducing the federal income tax and capital gains tax, reducing government regulation, and tightening the money supply in order to reduce inflation.

      The results of Reaganomics are still debated. Supporters point to the end of stagflation, stronger GDP growth, and an entrepreneurial revolution in the decades that followed. Critics point to the widening income gap, what they described as an atmosphere of greed, reduced economic mobility, and the national debt tripling in eight years which ultimately reversed the post-World War II trend of a shrinking national debt as percentage of GDP.

      — Wikipedia, Reaganomics

    Ultimately, the espoused principle was reducing the tax burden on the wealthy on the presumption that this would encourage more investment, which would then permit business expansion, which would enable the employment of more people, which would spread the largess downward through the economy.

    In terms of stimulating a somewhat moribund economy, this worked – but it reckoned without the recipient businesses diverting the money received to shareholders as increased profits, rather than into expansion.

    Inflationary consequences meant that the working class continued to get squeezed as a result, though some benefits did work their way downwards as intended, slowing the rate of deterioration in real wages.

    Image by Wikipedia Commons user Soibangla based on data released by the US Bureau Of Labor Statistics. The chart is considered ineligible for copyright and therefore in the public domain because it consists entirely of information that is common property and contains no original authorship.

    While there are those who simply claim that Reaganomics didn’t work, I consider that to be an oversimplification – it worked just fine for those in the upper economic brackets. It was, nevertheless, a flawed policy simply because the economic benefits were not shared with the rest of the society of the time.

    Thatcherism

    If the US had Ronald Reagan, the UK had Margaret Thatcher. Reagan and Thatcher had been elected within eighteen months of each other; while some called that a remarkable coincidence, to me it always seemed indicative of a general trend in the politics and society of the era.

    There were a number of similarities between the two. Both were strongly anti-government and anti-Keynesian, and both were advocates of tax reductions and private-sector-led economic prosperity.

      In terms of the general thrust of their policies, both leaders tried to shift the center of the political spectrum sharply to the Right. Reagan set about undoing a half-century of legislation which had built up the public sector while opening up America to expansion led by the private sector. Mrs. Thatcher busied herself with doing the same in Britain. Both leaders believed that government itself was partly the cause of their mutual economic problems, including high inflation and slow economic growth, the answer being less government. In contrast, all previous leaders since the 1930s had assumed that if things went wrong, the remedy would be government intervention or more government.

      — Reaganomics and Thatcherism: Origins, Similarities and Differences, by Christopher Deeds

    The Christian Science Monitor puts it even more forcefully:

      Both have slowed down welfarism and curbed the power of the trade unions.

      Both have stressed enterprise and the marketplace.

      Both have turned away from nationalization and have sold off as much nationalized industry as the private sector would accept.

      Both have reduced government regulation of private enterprise, stimulated competition and, by so doing, stimulated productivity.

      But the United States, after six years in office of Ronald Reagan, and Great Britain, after seven years of Margaret Thatcher, are in startlingly different condition. The United States has an annual budget deficit running at nearly $200 billion and an adverse trade balance of nearly as much. Mrs. Thatcher’s Britain has a balanced budget at home and a balance in its foreign trade account.

      — Thatcher and Reagan: the difference, by Joseph C. Harsch

    In other words, Thatcher was cruel but achieved success with her policies, while Reagan’s success is more open to debate. The difference in outcome, Harsch argues, comes down to how severely each applied the theory: Reagan raised defense spending massively and cut income taxes radically, while Thatcher raised her defense budget in moderation and cut income taxes modestly.

    Both paid a hefty price – Reagan in deficits, Thatcher in unemployment rates and failed businesses.

    And yet, if you were to assign their policies based on their personalities, Thatcher was more abrasive, more confrontational, and less sympathetic, while Reagan was naturally a conciliator. Those traits would lead one to expect Reagan to take the softer, more moderate approach, and Thatcher the more hawkish, uncompromising set of policies. I have no explanation; I simply observe that one was a grandfatherly figure to his nation while the other was a stern mother to hers.

    Nevertheless, the distinction between personality and policy, and resulting implications, needs to be understood by any GM with a campaign set in the era.

    Middle: Eighties Angst

    The discussion is starting to get away from me, in terms of length, so I’m going to try and rein it in a bit.

    Eighties Angst wasn’t just expressed through TV shows and the like; people of the time often had good reason to fear for the future and worry about where it was all headed.

    There were a number of factors contributing to this depressed attitude.

    • This was the era when manufacturing jobs began to migrate away from the developed economies, though it might not be noticed for another 20 years in some cases. While some of that migration was direct – close a factory here and open another one somewhere else to manufacture exactly the same products – more of it was indirect, as was the case with the US auto industry being supplanted by Japanese small-car imports.
    • The wage-inflation-cost spirals already discussed were an ongoing problem that no-one could see any hope of escaping (a toy sketch of the feedback loop appears after this list). The resulting economic reality was readily visible – I remember interest rates of 21, 22, and 23% being thought ‘normal’. Even now, I’m not entirely sure how we got off that international treadmill.
    • Reactions to the harsh economic practices of the time led to industrial action and often contributed to economic disruption, more particularly in the UK and other Commonwealth countries than in the US, where a war against unionism inhibited them.
    • Business failures, and the abject failure of the pension funds and similar schemes that were supposed to insulate the workforce from such developments, created a sense of despair in the industrial workforce. Farm foreclosures did the same in rural communities. This often manifested in anti-social behavior on the part of those who felt abandoned and without hope for the future, more notably in the UK – and thus arose expressions of social dystopia like Punk Rock.
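
    As promised above, here is a toy sketch of that wage-price feedback loop, again in Python and again purely illustrative – the pass-through rates are invented, and nothing here models any real economy. Each round, workers seek a raise to recover lost purchasing power, firms raise prices to cover the higher wage bill, and the cycle repeats, so inflation compounds instead of settling.

```python
# Toy sketch of a wage-price spiral (illustrative only): workers chase
# last round's price rises, firms pass the higher wage bill back into
# prices with a margin, and the two feed each other round after round.

def wage_price_spiral(rounds=5, initial_inflation=0.10,
                      wage_catchup=1.0, price_pass_through=1.1):
    """Print wage and price growth each round as the spiral compounds."""
    inflation = initial_inflation
    for year in range(1, rounds + 1):
        wage_rise = inflation * wage_catchup          # workers recover lost purchasing power
        inflation = wage_rise * price_pass_through    # firms pass costs on, plus a margin
        print(f"year {year}: wages +{wage_rise:.1%}, prices +{inflation:.1%}")

if __name__ == "__main__":
    wage_price_spiral()
```

    With any price pass-through greater than 1.0, each round’s inflation exceeds the last – which is the treadmill referred to above.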

    A lot of people were desperate, surrounded by bleak times and unforgiving government policies, sometimes depressed and sometimes angry about it all, especially when it seemed to result from economic forces over which they had no control and less understanding.

    Middle: Hope In The Face Of Despair

    At the same time, there were rays of hope, a belief that collective efforts could yield social dividends greater than the sum of their parts. This started with the success of the Band Aid single, “Do They Know It’s Christmas?” – a friend of mine was in England at the time, and he remembered the song being on a 30-minute rotation on virtually all the popular radio stations.

    That led to a US counterpart, and then the global Live Aid concert, and then more diverse examples like Farm Aid and Artists Against Apartheid.

    At least here in Australia, telethons had been around for a long time, usually in response to some particular pressing need or disaster, but they had started to lose some of their luster in the 70s. Prior to these examples of social welfare activism, the high-water mark had been the benefit telethon in support of the victims of Cyclone Tracy in 1974 – I have vague memories (possibly inaccurate) that all three commercial networks and the national broadcaster worked in unison on that appeal. After that, telethons seemed to fade in significance and appeal – until the global Live Aid telecast came along.

    The success of these fundraising pursuits in terms of actually achieving their ambitions is disputable. There are those who claim that much of the revenue raised never made it ‘on the ground’, frittered away in administration fees. Regardless, though, their success in rekindling a sense of optimism and hope for the future marks them as a significant social change within the era.

    End: Digital Development

    As already intimated, computer technology never stands still. There is a perpetual engineering effort aimed at making computers more powerful, smaller, faster, and cheaper.

    Integrated circuits had been commercialized in 1964, and the first microprocessor had been developed in 1971. From the mid-70s on, the first microcomputers were developed that were cheap enough to be commercially viable.

      In what was later to be called the Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of features that would later become staples of personal computers: e-mail, hypertext, word processing, video conferencing, and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time.

      — Wikipedia, Personal Computer

    So the pieces of the puzzle had been there for several years. 1974 saw the introduction of what is generally considered the first true personal computer, the Altair 8800, based on the 8-bit Intel 8080 microprocessor chip. The Apple I followed in 1976. The first successfully mass-marketed personal computer, the Commodore PET, was revealed in 1977, but it was back-ordered and not available until later in the year. In June 1977, the Apple II was first shipped, followed later in the year by the Tandy TRS-80.

    Even so, they were largely considered objects for play. They had infiltrated the home; software for personal productivity pointed the way to the future. The stage was set for the computer to enter the workplace in a serious way.

And with that, I’m right out of time. 30-odd years of recent history and economic change remain. To be continued!
