This entry is part 9 in the series Economics In RPGs

I’ve clearly decided to push on and get this trilogy of posts out of the way before interrupting the series for another break.

As usual, because this is a direct continuation of what’s already been posted, I’m going to skip the usual preamble, so make sure that you have read Chapter 1 and Chapter 2 first.

But before I dive headlong into it, there are a couple of Kickstarters that are worth mentioning.

First up, and closing early next week, is The Geologist’s Primer by Anna Urbanek, with additional content by Jakub Wisz.

A massive 360-page “illustrated guide to Magical gems, rocks and metals – Gem Folklore, Magic & Occult Magic Item Recipes, Game Master’s Tools” – and more.

System-agnostic, the list of content inclusions is quite comprehensive; “Each entry in the Geologist’s Primer provides basic geological information, notes on where to find and how to extract these materials, along with their industrial, decorative, magical, and sometimes even culinary properties. Each entry also includes a short, handy description. So if you’re just browsing for information, you won’t have to read through pages of text!”

The example pages look fantastic, and the utility of this as a reference work amply justifies backing it. Note that most of the stretch goals that have been unlocked so far have created Add-ons which have to be added to your order when you back the project.

This project was fully-funded within 5 minutes of launch and has currently raised more than $430,000 toward its $10K initial goal, so it’s as sure of delivering as any Kickstarter ever can be. Time is running out, with 2/3 of the funding window already closed, so if you are interested, the time to dive in is now.

The PDF-only option is a relatively-affordable $20, but I put my money down on the hardcover, costing $50 (I briefly considered the US$80 deluxe edition, but my budget wouldn’t stretch that far).

The scale of the public response shows quite clearly that there is a significant level of demand for this product, so I’m quite sure that it will interest some of you out there!

One down! Here’s the second serve:

I snuck this photo out of the campaign preview I was sent. But I won’t tell if you don’t! Click it to visit the Kickstarter Page (once it’s live).

Next, and launching (if all goes according to plan) later this week, there are some newly-designed dungeon tiles and 3D-printed accessories that are sure to interest some of my regular readers.

Mad Wizard’s Hall provides “Pre-painted wooden tiles, doors, columns, and traps for any fantasy tabletop RPG and miniatures games”.

These look great, and come in a variety of shapes and sizes. There’s a free print-for-yourself sampler, and 341 people have subscribed to be notified on launch already, so it’s very likely to get up.

This is the first major outing for an indie designer hailing from Klaipėda, Lithuania.

Yes, Lithuania – talk about gaming having a global reach! So to welcome Ilya into the pro gaming fold, you should at least consider backing his project!

Okay, with the decks cleared, let’s get back to business!

The Space Race

It’s fair to say that the Space Race was the single most transformative series of economic events of the twentieth century – and in a period that includes two world wars and the Great Depression, that’s saying something.

Entirely separate from the outcome and spin-offs, and the space industry itself (addressed separately below), there’s the direct investment – Project Mercury was $2.57 billion in 2021 money; Project Gemini, $8.2 billion in inflation-corrected currency; and Apollo $178 billion in 2022 money.

That adds up to (roughly) 189 billion (corrected) dollars in direct investment in the research, engineering, and manufacturing capabilities of the US. Even had the project failed, it’s hard to see that not having a massive economic payoff in the long run.

    Nationalism Vs Progress

    The roots of the Space Race run deep, and branch off into unexpected sociological domains. One of the strangest is the complete 180°-reversal in public perceptions that followed.

    When the trilogy of space programs began, supporting them was very jingoistic; there was a sense of direct confrontation with the advance elements of the Enemy, and failure to support the space program was viewed as ‘unpatriotic’.

    As soon as the Apollo program succeeded with Apollo 11, that began to change, and quite rapidly. Space exploration was immediately and increasingly subject to harsh budgetary constraints and the catch-cry (paraphrased) was ‘there are more important priorities here on earth’.

    Poor Salesmanship

    To be fair, NASA did a very poor job of selling the economic value of its achievements, and still does. Correlation doesn’t imply causation, so it’s entirely possible that the decline in American manufacturing capabilities mirrors the retreat from investment in space technologies entirely by coincidence.

    But the list of sciences and technologies that got a big boost out of the Space Race reads like a comprehensive list of human technological and social achievements. And that’s without the spin-offs and indirect benefits. We’re talking everything from computer technology to materials science and all points in between.

    Softer subjects also benefited – there was so much written about the space program that literature itself had to evolve. There were so many creative artists in other fields that drew inspiration from the projects that whole new fields and styles began to manifest. And manufacturers were quick to learn that if you slapped “Astro-” onto the name of a product, or established some connection with the space program (however tenuous), sales went through the roof.

    Social Antagonism

    The post-Apollo shift defined a new social antagonism between the interests of Nationalism and those of Research. Suddenly, research grant applications had to justify their funding requests in terms of concrete benefits, and you can’t run research efficiently along such lines; the only guaranteed outcome of research is that you’ll have the chance to learn something. What that something might be, and how it might translate into economic and social benefits, is completely unpredictable.

    From a modern perspective, it’s easy to cast this as an opening skirmish in the political wars between progressives and conservatives, but that’s an oversimplification, in my view; NASA simply failed to plan for Apollo’s success, taking their funding for granted, as I have explained in earlier sections of this article.

    That failure is what opened the door for those forces of economic management that wanted to re-prioritize and cut expenditure. These were politicians who saw only the immediate / short-term goal of “Beat The Russians” and not the longer-term benefits to society, and NASA failed to educate them about the longer-term gains. I’m quite sure that they tried, but on this mission, they failed.

    The demand for practical research outcomes to justify investment became a characteristic of the remainder of the 20th century and still lingers today to a large extent. It became part of the economic and social infrastructure of the western world, a fundamental assumption of society, thereafter.

    Progress Vs Service

    Increased funding for social programs was often used as a justification for winding back investment in the space program, and that had the flow-on effect of painting those two elements of society into an antagonistic relationship.

    Increasingly, they were seen as competing for shrinking slices of the available resources.

    The attitudes engendered were pernicious, and spread into a perception that funding of research stole money from the delivery of services as politicians employed divide-and-conquer approaches to enable a growth in their personal power.

    There is a key sequence in The Distinguished Gentleman, the political comedy starring Eddie Murphy in one of the best-written roles of his career (link is to a Double-feature DVD set with Trading Places, his other great role in this sphere. I get a small commission if you buy).

    Murphy’s character meets lobbyist Olaf Anderson, who sounds him out on his positions on various policy questions so that he knows which lobby groups’ funding he can direct into Murphy’s reelection campaign. Olaf doesn’t care which position Murphy takes; if he chooses a position in favor of a policy, group “A” will give him money, and if he opposes it, group “B” will do so. Either way, Olaf remains the Kingmaker and gets his slice of the pie from both sides.

    While a cynical exaggeration, this explanation for why nothing ever gets done in government save for politicians feathering their own nests still resonates. Systemic corruption by lobbyists continues to handicap the political system of many nations; only the form varies.

    The key point in this context is that artificially created competitions for funding set different lobby groups into antagonistic positions which can let the orchestrators play one off against the other, to the benefit of the orchestrator.

    Service Vs Profit

    The increasing emphasis on environmental regulation and protection was always described in terms of the public benefit that would (and did) result. That this regulation ate into the ability of a given operation to generate the maximum possible profits created another of these antagonisms, in which the corporate sector increasingly focused on the short term over the long term, and on immediate benefits over lasting ones.

    There is little doubt that the same forces which set research against service delivery also encouraged this perspective, but it was all an outgrowth of the more general government-versus-business disharmony that had existed since the end of the Second World War.

    Three Rivalries

    These three rivalries, manifestations of deeper political philosophies and personal greed and altruism, became increasingly strident as the Pre-Digital era neared its end.

    There are those who would argue that they did not reach their most extreme levels of conflict until the 1980s, but I think of these trends as more of a parabolic arc; the impetus pushing these agendas begins to tail off at the end of this era but momentum pushes the conflicts to greater extremes before the social perspectives responsible begin changing course.

    Any campaign set in this time period needs to keep the three rivalries in mind, and GMs should remember that there will be forces on all sides who will resent and resist any efforts to change the status quo – sometimes from the best of intentions, sometimes for more venal reasons, and sometimes out of pure self-interest.

    Alliances are short-term, extremely focused, and unreliable. There are too many social and political forces pulling these “special interest groups” apart for them to last very long.

    Very Strange Fruit

    Getting back to the Space Race, the legacies of the Apollo program and its predecessors were more significant indirectly than they ever were directly. In order to make Apollo work, industries needed to learn to do new things, and they often found those lessons applicable in other areas and new products.

    For example, the adhesives industry was revolutionized by the space program; it wasn’t so much the adhesives needed for space applications as it was taking the failed experiments along the way to those products and turning them into something useful (and profitable).

    Computers and Communications and Satellite weather maps often get the headline billing when discussing Space Race spin-offs, but the technological ramifications and confluences run much deeper, and include artificial limbs, scratch-resistant lenses, insulin pumps, firefighting equipment, automation, water filters, sports shoes, long-life tires, freeze-dried food, ear thermometers, vacuum cleaners, air purifiers, LEDs, pens, medical imaging and diagnostic technology, and (of course) Velcro!

    Every new product creates new employment and new manufacturing needs, new marketing requirements, new or augmented distribution channels, new demands on income, and new prosperity.

    Despite the high price, the economic benefits were an ongoing contributor to the global economy that far outweighed the costs.

Tech Briefing: Miniaturization

One of the biggest forms of technological progress to result from the space race was miniaturization of electronic components. While no more than half of this takes place in the pre-digital era, it’s worth looking at the totality, at least briefly, because the beginnings of this process set the foundations for the era to follow as well as impacting the available technology throughout the part of the era that follows WWII.

    Beginnings

    Vacuum tubes were developed in the late 19th and early 20th centuries. The simplest and most common example was the humble light globe.

    Vacuum tubes of greater sophistication are delicate and expensive. Though glass-sealed, air slowly leaks into the vacuum within and destroys their effectiveness. Some could last for many years, and some could die a very quick death, depending on the quality of manufacture and the complexity (amongst many other factors) – but quality always costs.

    They are large, and heavy, and power-hungry.

    Worse still, they are horribly inefficient, wasting a lot of the energy fed into them as light and heat. Light could be lived with, but heat distorts glass and can render the technology stone dead.

    Early computers needed significant cooling in order to function, and this could easily double the cost of an installation.

    Vacuum Tubes to Transistors

    Vacuum tubes made digital computers possible, but not practical. So many of them were required that the head of IBM, in the 1950s, reputedly predicted a total global market of five or six computers.

    The first transistor was invented in 1947. Discrete components mounted individually on a circuit board, transistors were soon 1/100th the size of a vacuum tube equivalent, consumed 1/100th the power (or less), and wasted 1/20th as much of that power (or less). TV sets went from being large, bulky cabinets to being portable devices, despite still relying on a vacuum tube for the television display.

    They were more reliable, more robust, less expensive, less expensive to operate, and much more compact.

    The improved electrical requirements also meant that power supplies could be made smaller and more reliable.

    All this meant that more circuits could be squeezed into a given space, and that gave rise to greater capabilities. The earliest remote controls could arguably have been achieved using transistors, but development was proceeding at a breakneck pace.

      In 1954 the worlds first transistor radio, the Regency TR-1 used four Texas Instruments npn transistors and cost $49.95, equivalent [to] $507 in 2021 [dollars]. Today, a 512GB SD card can contain over a trillion transistors and costs about $30.

      — Curious-Droid.com, MOSFET – The Most significant invention of the 20th Century

    Transistors to ICs

    In 1958-59, a way was devised to mount many transistors and a number of auxiliary components onto a single piece of silicon, shrinking the transistors at the same time, and increasing all those other benefits of transistorization at the same time. This was the beginning of the Integrated Circuit.

    The 1962 prototype contained 16 transistors. In 1964, the first commercial MOS integrated circuit was released, containing 120 transistors. These were roughly 1/50th of a millimeter (0.787 thousandths of an inch) across. 120 discrete transistors would have taken up an area of about 3 inches x 4 inches – assuming no electrical components were required in between, but they almost always were. This could easily double or triple the area required, so call it the equivalent of about 7½ x 10 inches.

    ICs to Chips

    By the late 1960s, Integrated Circuits had grown so large and complex that a new term was in use: LSI, or Large-Scale Integration. Early in the 1970s, this gave way to VLSI (“Very Large Scale Integration”).

    The first digital microprocessor is generally considered to be the Intel 4004, a four-bit CPU whose descendants led all the way to the Pentium and beyond. It had 2,300 transistors on a ‘chip’ of silicon about 3.15 × 4.46 mm (roughly 1/8 inch × 3/16 inch) – plus case.

    Moore’s Law

    Moore’s Law postulated that the number of transistors on a single chip would double every two years for an unforeseeable period of time, but certainly, for the immediate future.
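
    As a rough illustration only (this sketch is mine, not part of the original article), the doubling rule can be expressed in a few lines of Python, seeded with the Intel 4004’s 2,300 transistors of 1971 mentioned above; the function name and parameters are purely illustrative.

    ```python
    # A back-of-envelope sketch of Moore's Law: transistor counts doubling
    # roughly every two years, starting from the Intel 4004 (2,300
    # transistors, 1971) mentioned earlier in this article.
    def projected_transistors(year, start_year=1971, start_count=2300,
                              doubling_period=2.0):
        """Projected transistor count under a strict two-year doubling."""
        return start_count * 2 ** ((year - start_year) / doubling_period)

    for y in (1971, 1981, 1991, 2001, 2011, 2021):
        print(y, f"{projected_transistors(y):,.0f}")

    # 2021 comes out at roughly 77 billion - broadly the right order of
    # magnitude for the largest single chips of that year, which is why
    # the "law" has held up as a rule of thumb.
    ```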

    A logarithmic graph showing how transistor counts in microchips have doubled roughly every two years from 1970 to 2020 (Moore’s Law), by Max Roser & Hannah Ritchie, from Our World In Data via Wikipedia (image page) and licensed under the CC Attribution 4.0 International license.

    While it’s been refined and revisited a number of times, as a rough-and-ready guesstimate, it’s proven remarkably resilient. On several occasions, doom-and-gloom forecasts have prophesied the end of Moore’s Law, only for new technological developments to make the formerly impossible, possible.

    History shows that Moore’s law is a useful generalization. If it were perfectly valid, the graph above would be a perfectly straight line; clearly, it’s not.

    There are corollaries, which state that power requirements are a function of the physical size of chips (and hence power requirements per transistor will continue to fall), and that R&D and manufacturing costs increase exponentially, in step with Moore’s Law – which, on its own, would keep the cost per transistor constant; in practice, that cost continually falls, thanks to economies of scale driven by demand. Those have also proven useful rules of thumb, but less accurate than Moore’s original law.

    Chips to Multi-cores

    Modern computer chips have transistors that are as little as 35 silicon atoms wide. There can be tens of billions of transistors on each. We passed the threshold at which quantum effects had to be taken into account a long time ago – around the time of the Pentium 4, if memory serves.

    These days, a single CPU core can’t pump electrons through its circuits fast enough; problems and tasks are distributed amongst several cores on the one chip – a multi-core processor – designed to exploit parallel-processing methods such as multi-threading.
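
    As a small illustration of that idea (my own sketch, not anything from the era), here is how a modern program might farm independent tasks out to every available core – using separate processes rather than threads, since that is the simpler route to true parallelism in Python:

    ```python
    # A minimal sketch of distributing work across several CPU cores.
    # The workload function is purely illustrative.
    from multiprocessing import Pool, cpu_count

    def crunch(n):
        """A stand-in for any CPU-heavy task."""
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        tasks = [2_000_000] * 8                    # eight independent jobs
        with Pool(processes=cpu_count()) as pool:  # one worker per core
            results = pool.map(crunch, tasks)      # run them in parallel
        print(f"{len(results)} tasks completed on {cpu_count()} cores")
    ```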

    Supercomputers can hold as many as 20,000 processors (not necessarily all on one chip). The technicalities don’t matter much – the operative factors, from an economic perspective, are that computers have become cheaper (in real terms) and more powerful every year or two, from the first microprocessors of the early 1970s through to now (though there have been occasional brief reversals of this trend).

    Ubiquity

    Despite the dire prediction of that IBM executive, that essentially means that computers have been getting cheaper and more powerful over that entire period. It’s well known that the onboard computers that ran the Apollo spacecraft were less powerful, in computational terms, than a 1990s engine-management computer in a typical family car.

    A lot of that reduction in price is due to economies of scale – essentially, the more you make of something, the less per unit they cost. And the only way you get economies of scale is through increased usage. Almost everything has an onboard computer of some kind, these days – right down to extension cords and power sockets.

    This has been a steady progression that started in the 1970s and has continued ever since. It was in its infancy at the end of the pre-digital era, but had progressed far enough that bureaucracies and large corporations were increasingly using computers by this point in time.

Behemoths Of Blind Logic

Which means that public perceptions of computers had also begun to take shape. These would also evolve with increasing ubiquity, but – outside of specialist areas – the heading of this section captures the general attitude that I remember.

Many people could see, at the time, that this would not always be the case; the promise of computer technology was well-known and widely appreciated – and frequently mis-characterized or misunderstood by CEOs.

These failures would persist right through to the 2000s – in particular, the belief that computers would streamline workflows and permit a reduction in labor costs, making a business cheaper to run. That never happens, in my experience. What computers facilitate is greater control, and better management of internal processes – but they were, and remain, unforgiving.

GIGO is an abbreviation for “Garbage In, Garbage Out”, a phrase that was coined all the way back in 1957. Back then, computer professionals used it in reference to sloppy programming practices, but sometime in the 1970s it began to be used to refer to operator errors and corrupt data, and when PCs began infiltrating office spaces, the term almost exclusively referred to these problems.

    Operator Error

    When computers were new, operators needed – and received – special training in how to use them. This training was not cheap, and often extended for weeks or months.

    In part, that was because many operations that take place automatically in modern times needed to be carried out manually.

    Computer Errors come in four basic varieties:

      Logic Errors

      Computers are – currently – stupid devices, the current crop of “AI” functions notwithstanding. They will do whatever they are told to do, whether that is the right thing to do or not. Errors in the underlying logic that the computer is to implement are the most fundamental mistakes, and some of the hardest to diagnose and correct.

      Hardware & Software Bugs

      If the intended instructions to the computer are correct, they can be mistranslated into computer instructions (a software bug) or misunderstood because of a flaw in the hardware itself. These days, the latter are so rare that there is an almost-automatic assumption of the former. That’s what made the floating-point computational error of the early Pentium chips (now known as the FDIV bug) so shocking to the IT community in the mid-1990s.

      User Errors

      By far the biggest cause of computer errors is an operator typing in something they shouldn’t. Some estimate that 90% of computer code is directly purposed at spotting and handling such incorrect inputs, but I think this is exaggerated – a little.

      A real-world but trivial example is of an operator entering numeric values for invoices with a dollar sign at the start – “$123.45” instead of just “123.45”. If the software isn’t told how to handle this – something that could take several lines of code – it won’t add up the invoice line entries to produce a correct total.
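
      Those “several lines of code” might look something like the following minimal sketch – mine, and purely illustrative; the function name and cleanup rules are assumptions, not anything from a real invoicing system:

      ```python
      # Defensive input handling: accept "$123.45", " 1,000.00 " or "123.45"
      # alike before adding up invoice lines. Purely illustrative.
      def parse_amount(raw):
          """Strip currency symbols, thousands separators and whitespace."""
          cleaned = raw.strip().lstrip("$").replace(",", "")
          return float(cleaned)

      lines = ["$123.45", "123.45", " 1,000.00 "]
      print(sum(parse_amount(x) for x in lines))   # 1246.9
      ```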

      There are all sorts of derogatory terms used by computer professionals to describe this sort of error, most of which will go completely over the heads of laymen, but these are becoming more rare in the modern world because of user-friendly interface design expectations, which hold that operator errors are the fault of the system programmer who should have anticipated that possibility.

      Interpretational Errors

      The fourth type is perhaps the most pernicious, though slowly becoming less frequent; it occurs when the computer does everything right, and so does the operator, but the human who receives the information misinterprets what the results are telling them.

      This used to be a lot more common when computerized functions were newly-introduced to a business, and the wealth of data outputs first became available to management. I once knew a manager who was quite happy spending 12 hours a day restructuring his reports to view information in new contexts, for example.

      Like everyone else who has trouble with data saturation, he eventually figured out what reports were actually useful and what were simply noise, or worse yet, misleading.

      Nevertheless, this remains a valid interpretation of GIGO that casts the expression into a more human context.

    No matter how highly trained, computer operators were human and capable of making a mistake. Depending on the specifics of those mistakes, the results could be catastrophic in terms of the purpose of the information being processed, and decisions deriving from it.

    PCs

    With the advent of the business-purpose personal computer, there was a significant reduction in the training that operators received, and a natural increase in the number of errors that would typically occur.

    Let me be clear – it takes time to master ANY software. The best software for any purpose is often the software that lets you dive straight in and start being productive right away; that doesn’t reduce the learning curve, it just lets you do something useful in the meantime.

    For example, I’ve tried more than a dozen varieties of different music composition software, but one of them clicked with me immediately (sadly, it’s no longer available). Others who tried the software on my recommendation found that it was not so user-friendly for them – in particular, if they knew (musical) keyboards and used one to ‘play’ music into the software (I did everything by mouse). Other packages were ‘best’ for them.

    The immediacy of productivity didn’t mean that I had mastered it; I was still learning new tricks right up to the day that a forced operating system upgrade meant that it stopped working.

But the Pre-digital era falls at the very beginning of that story, at a time when many of these dangers went unrecognized, at least by management; amongst those who had championed a corporation’s adoption of computers, there was a sense that the machines were infallible, and there was little capacity for human judgment to leaven harsh and sometimes incorrect decisions.

The popular zeitgeist at the time was that computers would be responsible for all manner of simple mistakes that common sense would prevent immediately, like issuing invoices for 1 cent, often due to a rounding error, or for 99.999 dollars.

Of course, mainframe computers were both huge and hugely expensive. So: Behemoths of blind logic.

Whatever fun mistakes you can have an overly-literal computer make, I guarantee that a worse mistake really happened.

The Promise Of Atomics

Sci-fi of the 1930s had a rose-colored myopia with respect to the future of atomics. The writers of the time had enough understanding of the fundamental research that had been published that they could (and in at least one case, did) predict atomic weapons.

But, to be honest, it was frequently a catch-phrase meant to “sci-fi” an object up. ‘Destroyer – sounds too naval. I know, we’ll call it an Atomic Destroyer!’ Or an Atomic Car. An Atomic Dredge. An Atomic Mole.

Atomics promised power supplies that were smaller, lighter, and more powerful than anything then available – and that was the serious stuff. Every city block would have its own atomic generator that would last a decade, or maybe a century. Self-powered factories, automated refineries…

More frivolous and less-grounded but still somewhat-plausible applications that were predicted included transmutation, atomic-powered rockets, force-fields, and atomic rays.

Setting aside the ridiculous stuff, concepts like Atomic Automobiles that never needed refueling were not only seriously contemplated but expected.

All that was the promise of Atomics.

So, what happened?

    Stumbling Block 1: Cold-War Paranoia

    Klaus Fuchs was arrested in 1950; the Rosenbergs were executed in 1953. These three names were sensationalized following their arrests and trials (and, in the latter case, executions).

    On 29 August 1949, the Soviet Union secretly conducted its first successful nuclear weapon test. On September 23, President Truman revealed that the Soviets had developed their own version of the super-weapon that many felt had ended the War.

    These developments did not ignite the Cold War, which had already been underway since 1945, following a string of broken agreements regarding post-war arrangements in Europe and Iran. But they did signal an increased level of (justifiable) paranoia, and of secrecy regarding key aspects of nuclear and other cutting-edge technology.

    While military applications – better and newer bomb designs, delivery systems, nuclear-powered vessels, and attempts to create defenses – were well-funded, there was a slowing effect on civilian applications of nuclear power.

    Stumbling Block 2: Government Protectiveness

    The growing environmental awareness of the 1960s and 70s also had a massive impact. Suspicion that nuclear power was not the key to unlimited energy had been growing for a while, as the dangers emerged into the public consciousness.

    In response, safety standards for nuclear power plants were set at an almost impossibly-high level. The granite of Grand Central Station, like all granite, is slightly radioactive – and in fact exceeds the permitted emission standards applied to US nuclear reactors.

    The accidental escape of radioactive gas at Three Mile Island turned nervousness into outright panic for some. Fact: the radioactivity released was less than that received from a dental x-ray, or a single trans-continental flight.

    The shielding and safety mechanisms that were required – rightly or wrongly – made atomic installations huge and expensive. Both factors signaled the death of the Promise of Atomics.

    Stumbling Block 3: Fear & Atomic Nightmares

    B-movies frequently used Atomic-based monsters as villains. The Beast From 20,000 Fathoms had a fictional type of dinosaur awakened from the ice of the Arctic Circle by an atomic weapons test. Them! (1954) and Godzilla (1954) cemented an exaggerated concept of what nuclear power could do.

    There were more serious movies as well, ranging from The Day The Earth Caught Fire (1961), in which atomic tests displace the Earth from its normal orbit, through to movies like The China Syndrome (1979), On The Beach (1959) and Silkwood (1983).

    All of these, and many more, created a distorted awareness of nuclear power that directly resisted the more optimistic atomic dreams described earlier. I don’t know that it ever reached the point where support for the nuclear industry was enough, on its own, to cost someone victory in an election, but it was often a drag on political support.

    Chernobyl & other nuclear disasters

    That’s not to pretend for one minute that Nuclear Power is not dangerous if mismanaged. The Chernobyl nuclear disaster in 1986 is proof of that.

    Nor can nuclear power ever be made 100% secure against natural disaster, as demonstrated by the 2011 Fukushima accident.

    And, one can never entirely dismiss inimical acts by others, such as the ongoing Russian invasion of Ukraine.

      The Russian 22nd Army Corps approached the Zaporizhzhia Nuclear Power Plant on 26 February 2022 and besieged Enerhodar in order to assume control. A fire began, but the International Atomic Energy Agency (IAEA) stated that essential equipment was undamaged. Despite the fires, the plant recorded no radiation leaks.

      — Wikipedia, Russian Invasion of Ukraine – Southern Front

    That, of course, did not end the danger; in fact, the Russians attempted to use the power plant as a pawn in their invasion as the offensive bogged down (see Russian Invasion of Ukraine – Zaporizhzhia Front).

    The plant continued to be a strategic target in the months that followed.

      On 3 September 2022, an IAEA delegation visited the nuclear power plant at Zaporizhzhia and on 6 September a report was published documenting damage and threats to the plant security caused by external shelling and the presence of occupying troops in the plant.

      [Eight Days Later] at 3:14 a.m., the sixth and final reactor was disconnected from the grid, “completely stopping” the plant. The statement from Energoatom said that “Preparations are underway for its cooling and transfer to a cold state”.

      — Wikipedia, Russian Invasion of Ukraine – Zaporizhzhia Front

    Ukraine, of course, remains subject to threat and the invasion is ongoing. Until that changes, the danger posed remains, however it has been mitigated.

    Other uses of Atomics

    Nuclear materials, of course, have a number of other applications, which many people overlook. Medical uses are obvious (see Wikipedia, Nuclear Medicine). There are other industrial and commercial applications too, such as Industrial Radiography – used for

      ….the testing and grading of welds on piping, pressure vessels, high-capacity storage containers, pipelines, and some structural welds. Other tested materials include concrete (locating rebar or conduit), welder’s test coupons, machined parts, plate metal, or pipewall (locating anomalies due to corrosion or mechanical damage).

      — Wikipedia, Industrial Radiography – Inspection of products

    Whenever I think of this subject, though, an odd source springs to mind – a secondary plot thread in Arthur Hailey’s Wheels, in which an auto worker accidentally spreads radioactive contaminants.

    Alternate Reality, Alternate Physics

    So there are lots of good reasons why the envisaged ‘golden atomic age’ didn’t, and was never going to, happen.

    Well, I don’t know about you, but I’m a GM; I’d never let something so trivial get in the way if I really needed a campaign element like ubiquitous atomics. All that’s needed is some simple plot devicium to eliminate the dangers and the need for heavy shielding.

    A thin material that uses something similar to the photoelectric effect to transform one type of radiation (alpha, beta, gamma) into electricity would do it – and would simultaneously get rid of the bulky (and heavy) plumbing, permitting the direct conversion of radiation into energy. One triple layer later and the “Pocket reactor” (perhaps one cubic meter, perhaps half that) is ready to go.

A Default Economy

Time is starting to get away from me – I really wanted to reach this point in the article three or four hours ago. But, press on…

One of the biggest changes over the last 50-70 years of economics has been the relative importance of wages as a component cost of manufacturing. Wages have, in the western world, skyrocketed (in relative terms); this, more than any other factor, has resulted in the exodus of manufacturing to regions where the wages bill will be smaller.

This effect may have been less noticeable in the early 1970s but it was nevertheless present; the increasing pressure on the US auto industry was an early manifestation, and while it would take the Oil Crisis of 1973 to bring matters to a head, that crisis only greatly accelerated a transition that was already underway to some extent.

Prior to the Oil Crisis, the dominant cost factor to the manufacturing sector was industrial in nature – machinery, tooling and resources (materials). Environmental concerns were a growing area of expense for many industries, but still secondary; and wages and training were a remote third (Administrative costs were fourth on the list, which will become significant in the next era).

Many of the classic entrants into different genres of RPG were written by people whose experience in economics was rooted in the society and attitudes of the era, and hence a low-scarcity high-manpower foundation became the default economy of those games.

    Incorrect Economics in Fantasy

    Most fantasy GMs knew enough to recognize that assembly-line techniques were inappropriate to the genre, but that was often as far as they went. Very few investigated the economics of steel production, especially the impact on forests. To be fair, the resources to do so were not as readily available.

    But let’s think about this a moment: anything in scarce supply goes up and up in price – that’s the law of supply and demand. And labor was in very short supply – which means that the basic model of the economics was wrong.

    Some GMs tried to correct this problem by increasing labor efficiency and effectiveness – healing magic to make the population healthier and more capable of hard work, and greater crop production through Druidic intervention (which not only makes the populace healthier, but frees more of them up to work elsewhere).

    Nothing wrong with that as a foundation for fantasy economics – but many of the secondary impacts of these changes were ignored, or not spelled out properly (at the very least), and the changes themselves were inserted as explanation after the fact. No impact on the prices and availability of various goods was taken into account, for example.

    Now that this has been pointed out to you, you have three choices:

    • Make the explanation official and correct the game mechanics to devalue skilled labor costs and introduce other relevant knock-on effects, including social consequences;
    • Remove the incorrectly applied assumptions and their consequences to produce a more realistic medieval economy and society;
    • Find some other explanation for the incorrect modeling, one that (perhaps) requires less change to other areas of the mechanics – and implement the consequences and knock-on effects without fear or favor, having first adjusted for the incorrect assumptions already present.

    Anything magical or mechanical in nature should either get a little cheaper or a lot more expensive. Anything that requires extremely high skill, likewise. Anything in common demand will be more easily available, and this may act as a depressant to the price.

    Similarly, apprentice numbers for blacksmiths and wizards and what-have-you will either go down considerably, or go way up.

    These changes aren’t rocket science; they are fairly straightforward and simple, actually. But there’s a lot of them.

    Once those are complete, you can start thinking about economic flows and who has money – and who doesn’t, but wants it – because the generic fantasy society that I have often seen at play is no more realistic.

    To be clear – you can choose not to change a thing, especially if this level of realism is not considered desirable by your players; but this should be an intentional choice, and those who make it should at least give passing consideration to the consequences.

    Sci-Fi Optimism: A Simpler Age

    But, befitting an age of technology, there’s a lot more to talk about on the Sci-Fi front.

    Modern sci-fi is far more dystopian in tone, far more cynical and pessimistic. Sci-fi that’s rooted in the era can go one of two ways:

    • It can be faithful to the era, with a far more positive outlook; read classic Heinlein and Asimov and EE ‘Doc’ Smith for tonal cues. And, in general, think a little more ‘Victorian’.
    • Or, you can adapt to service a modern audience, with cautious injections of pessimism and cynicism – but these changes won’t come out of nowhere and will have knock-on effects, and your campaign setting will need to incorporate and reflect those. Start with the three axes of conflict described at the start of this post, amp them up to 11, and throw in modern levels of political corruption; then incorporate some form of massive betrayal of the people to create that tonal quality within society. Go full pre-Cyberpunk, in other words.

    Sci-Fi Pessimism: Monster-bashes

    Monster movies should be treated as documentary references. This week, the Triffids; next week, Them; and so on.

    Take Myths, Legends, and Cryptids, and add a sci-fi twist. The Headless Horseman from Mars? The Radioactive Ghost? Swamp Men from Venus?

    Why not?

    Sci-Fi Optimism: Depth & Richness

    Both pessimistic and optimistic genres are morally-simplified in some ways. Identify the ones that pertain to your particular genre and run with them.

    In particular, though, the pessimistic route involves a more universally-downbeat attitude; greater variation and richness is possible in a more optimistic campaign, even if optimism is only a single persistent thread through the darkness.

    Sci-Fi Pessimism: Apocalyptic Visions

    There is no such thing as the doomsday clock in an optimistic vision of the sci-fi world; in a pessimistic sci-fi campaign, it should represent an ever-present existential threat.

    A perfect comparison is possible: watch both the original 1951 version of The Thing and the John Carpenter remake. Then watch a whole bunch of other sci-fi and categorize each, tonally, into either the ‘B&W Thing’ or the ‘Carpenter Thing’ compartment. Alien? – Carpenter. Aliens? B&W. The Blob (the original, with Steve McQueen)? B&W – the good guys win in the end, and the threat is ended. Invasion Of The Body Snatchers? More ambiguous, and there’s always a suspicion that a pod has survived, somewhere – so that equals paranoia, and that’s Carpenter in classification.

    And so on.

    Sci-Fi Optimism: The Scale of Ginormous

    More than anything else, this section tips a hat toward EE ‘Doc’ Smith, and towards the original Star Wars (the revised Death Star in Return Of The Jedi may have been bigger, but it didn’t feel bigger. Just the opposite, in fact).

    Anything worth doing is worth overdoing. Spacecraft 5 miles long? Go for it! Spacecraft 15 feet long? Get ye to the Dark Side – except in comparison to the scale of the enemy, of course!

    That’s the only reason for the X-Wings to be so small – to make them seem insignificant relative to the Star Destroyers and Death Star of the Empire.

    This applies to more than just the physical infrastructure. Contemplate for a moment the economics of building something on the scale of a death star. Here, this site should help: John M Jennings – Economics of the Death Star.

    Superheroics & Idealism

    Okay, let’s take a sideways step in Genre. It’s clearly just a short step from the positive sci-fi sub-genre to the idealism inherent in a superhero campaign.

    Once again, though, contemplate the economic impact of what your PCs and their enemies are up to. If there’s one crisis a month, resulting in significant damage to one or more metro areas, that’s a damage bill that’s going to total up into the billions – of 1970s dollars. Possibly more.

    Either the national economy of your setting is going into a lasting depression, with public confidence in the toilet and going under for the third time, or there is some factor that’s giving everyone an unlikely positivity.

    Two obvious factors can (should?) play into that confidence: the good guys always win (in the end), and/or there’s a steady growth in technological prowess that shows up as a more vibrant economic outlook.

    Let’s start by thinking about the rebuilding costs – unemployment goes down, and scarcity of good workers drives wages up. That money has to come from somewhere, and the easiest source is a more rapid technological progression, which boosts corporate profits. And it all plays into greater tax revenues. But, since 90% of the economy gets those positive effects without experiencing the downside, the result is an economic boom.

    So far, so good. Sure, the government will have some additional expenses – a more potent space industry? A holding facility to contain supervillains? And so on – check and check. Rebuilding that damaged infrastructure is just another of those items.

    Let’s say that half of the extra tax revenues gets eaten up – ten per cent per item, plus one or two not listed. The government can bank 20% of what’s left, and still give everyone a 30% tax reduction.

    Next, contemplate the industrial benefits of regularly replacing aging industrial resources. Japan and West Germany, it has been argued, benefited massively from such replenishment post-World War 2 – but don’t take my word for it, do your own research on the subject.

    That’s easily another 10% kick along for the economy – because additional government spending always comes back three-fold, if you wait long enough, provided that the spending isn’t going straight into the pockets of some corrupt corporation or politician.

    Okay, that’s all just a starting point; you can take it as far as you think you need to. But there’s a lot of good reasons for optimism in that lot, don’t you think?

    Modern Pulp

    Modern pulp – the Clive Cussler model, for example – takes superheroes out of the picture and relies on extraordinary examples of ordinary people rising to the occasion. In general, this straddles both positive and negative tones, and so the surrounding world is not going to be all that different from our own.

    What follows, in my opinion, is a more dynamic roller-coaster in terms of the economy – more significant and prominent ups and downs. But instability of this type makes investors more nervous, and is (in itself) a negative impact on the economy.

    Once again, then, we need some positive counterbalance – just to sustain the status quo, in this case. What might that be?

    It could take any of several forms – a series of medical breakthroughs, for example, or the discovery of friendly aliens (even if they are standoffish, with some version of the Starfleet Non-Interference Directive, the mere fact that there are solutions to problems if we want them badly enough could be enough).

    We don’t need an impact on the same scale as superheroes provide – a mere 10% should be enough to cover the shortfall, or even less.

    Into this environment, we can then add the benefits of altruistic big business – and all the social changes that flow as a consequence – and we find ourselves firmly in the positive frame, in which all problems have solutions, and the good guys and girls always win in the end. Both of these are part of the infrastructure of such campaigns, a necessary assumption – but one that isn’t often enough factored into the broader society.

    War Games

    Back in the two-genres mold, we find military-based campaigns. These range from WW2 (positive) to Korea (positive but just barely) to Vietnam (negative, and not much fun). But alternate histories provide a more flexible foundation that can occupy any particular space on the map.

    For example: at the height of the Korean War, the USSR invades Canada, intending to plant a Soviet super-state right on the American doorstep. Already stretched by the Asian conflict, the US (and its allies) can’t spare a huge manpower commitment – so it puts together an elite force – and suddenly we’re back in ‘Modern Pulp’ territory.

    Spies & Spy Games

    The final genre to be considered is one that goes hand-in-hand with Cold War settings: the solo super-spy or elite counterintelligence force. Variations take place in WW2 settings.

    There’s good reason for what many consider ‘the definitive James Bond’ to derive from this era, and that’s where your economic cues should be drawn from – in essence, whatever it takes (within reasonable limits) is available at need; but you always have to look for a less expensive alternative than simply throwing money at a problem.

    Go read (or re-read) the original Ian Fleming novels. There’s always enough money to spend on supervillain lairs or fancy gadgets. There’s a limited amount that can be spent on establishing a cover if necessary. Villains make fortunes by being villainous – but that only makes them a target that will eventually become the focus of attention.

    They really are the economics of the 60s and 70s, amplified.

Whew – got there at last! It’s been a marathon, but the finish line for this three-part article has now been crossed – and, in the process, the series grows to almost 80,000 words!

Next week: something completely different (and, since this part ran for an extra chapter, maybe the week after, as well).

Until then, have fun!


