
I’ve been desperately trying to clear enough time to attempt to get my main computer back up and running. It occurred to me the other day that there is a clear analogy between that process and the process of creating an RPG campaign.

The Computer Of Today: some context

Personal Computers are everywhere – or, perhaps, I should say, “Personal Computing Devices”. These days, the trend seems to be toward fixed solutions determined at the Manufacturer level, rather than the flexibility of the older Personal Computer; customers buy a tablet or a PC in some ready-made ready-to-go configuration. It might not even be called a PC; it might be called an iPad, or a Smartphone, or a Gaming Console. Economies of scale mean that these preassembled and preconfigured options have a slight edge in cost, and they have a definite edge in required standard of customer expertise.

These advantages come at the price of restricted flexibility, ubiquitous conformity, uniform vulnerability, relinquished control, and diminished expertise.

Restricted Flexibility

Fifteen years ago, at the height of the Personal Computer explosion, the easiest way to tell an expert computer tech selling a PC from a salesman selling a PC was as follows: The expert’s first response to the statement “I want to buy a PC” would be, “What do you want to do with it?” while the salesman would ask “How much do you want to spend?” or simply start showing you the standard models that the computer store had put together.

Purpose drove the choice of components, and those in turn drove every other decision. Your machine could be optimized for gaming, or for working with multi-megabyte graphics, or for business operations, or in any of half-a-dozen or more other ways. These choices would shift the relative expenditure on different components within the overall budget, and hence the computer would start its existence as customized to fit. (My problem was always that I had my toe in so many different areas that no one configuration not costing an arm and a leg would be optimal; I was a writer, an artist, a composer, liked to play games, used the computer to manage my self-employment, and did a lot of stuff on the internet. My computer had to be good at everything!)

These days, that flexibility is becoming a thing of the past, replaced with a set of one-size-fits-all solutions. In part, that’s because tech improvements mean that even the default choices are adequate in every area, so you no longer have to sweat the finer points of customization; and in part, it’s because computer sales have moved away from the specialized trade and into the electrical goods stores as the product matures. Nevertheless, the bottom line is that someone else is making those decisions for you, sight unseen; in order to get all the functionality you require in one arena of purpose, you may have to buy more than you need in others.

Ubiquitous Conformity

Software writers – and especially the people writing the operating systems – like this conformity. The less the customization, the easier support is; rather than having to worry about the compatibility of the components, they know that there are a smaller number of options. What’s more, it makes it easier for them to sell new hardware and software; on the one hand, you have the minimum requirements for the latest version of whatever demanding that you upgrade your hardware, and on the other, you have the line “Takes full advantage of the latest hardware”. It’s a continuous cycle, fueled by the need to sell to the same customers time and time again.

Uniform Vulnerability

There’s a price-tag: the same security vulnerabilities tend to turn up on a lot of systems at the same time. In the old days, sheer variety afforded an extra layer of security; these days, sheer conformity affords an extra layer of vulnerability. This is counterbalanced, at least in theory, by the ability to write and release patches to cover security holes more quickly and easily, and with fewer chances for them to go wrong; the process of patching such vulnerabilities can be streamlined as a result.

Theory and reality aren’t quite the same thing, even in this most artificially-controlled and contrived microcosm. Anyone who tracks these things will be well aware of the number of system patches that have been released over recent years with flaws and errors. I personally get the impression that testing of computer patches is perpetually flirting with the line of minimum possible testing before release, with the users of the world functioning as unwitting guinea pigs, though – to be fair – most patches do what they are supposed to without a hitch.

When things go wrong, though, they do so spectacularly, and on a fairly broad scale. My current computer problems may stem from a Hard Disk failure, but nothing started to go wrong until an iTunes update crashed. It might be a coincidence; it might not. Restoring that piece of software to a functioning state then caused a problem with a Flash update and a DirectX update, and that was when I started getting graphics card errors (possibly a coincidence, possibly not) that froze the system and regularly crashed it in the middle of writing updates to the (problematic) hard disk, and from there the whole domino chain began to fall. Over the next month or so, the operating system corrupted itself to the point where it would take 90 minutes or more of rebooting to get it to load at all – if it felt like cooperating on that particular day – at which point it might run for a few minutes, a few hours, or even a few days.

Relinquished Control

I hate “push” software, and cloud computing in general. I work hard at becoming proficient with the current software that I am using, learning how best to get it to do the things that I need it to do, and the things to avoid doing because they function unreliably. I learn how to turn unexpected side effects into “features” that I can take advantage of. And I know that the software that I am using will get the job done when I’m on a deadline, enabling me to manage my time and schedule around my health problems and social life. I’m integrated with the foibles of my computer, in other words.

Cloud computing and “push” technology take all of that away from me. Someone else decides when the software gets updated; a whole new interface can be foisted on me whether it’s convenient or not. Functionality that I rely on can be de-emphasized or eliminated entirely. Sure, it’s an inconvenience having to think for myself, and it can be an inconvenience not using state-of-the-art software; but I consider the alternatives to be worse. Frankly, I don’t trust the software vendors to get it right every time, and I rely on my PC to function every day – well, I used to. Now I have a backup solution – two, if I count a cybercafe.

Diminished Expertise

It used to be that you brought your computer home in pieces, and then put it together. Then you installed the operating system, then you configured it to your tastes, and then you started installing software to do the things that you required. Then the computers started being assembled at the specialist stores and sold as a unit; just install the Operating System and away you went. It wasn’t long before the operating systems were being pre-installed and pre-configured as well – all you had to do was plug it in and turn it on.

Every time you had to get your hands dirty, you learned a little more about your computer and the way it worked. Expertise was forced upon you. It was intimidating to many – too intimidating for some to even contemplate. That forced acquisition of expertise has gone; users nowadays are left more helpless in the face of a system failure.

This is the price that has had to be paid for making computing power and systems available to a wider audience. While that end, and the benefits it brings to society, is more than enough justification, the fact remains that diminished expertise is the unavoidable downside.

Technological Maturity?

I tend to see a technology as “mature” when it reaches the plug-it-in-and-turn-it-on stage, with no real need to understand what is going on under the hood. With cars, this happened so slowly that it’s hard to pinpoint exactly when it became the case. Even with computer technology, there were a number of significant milestones along the way. Nevertheless, you would have a hard time arguing that computers had not become a mature technology.

Or have they? By forcing users to abdicate the nuts-and-bolts perspective of the expert, and by hiding much of the decision-making behind closed doors and automatic updates, is this maturity actually an illusion, with the whole thing secretly held together with sealing wax and baling wire? Does the Emperor, in fact, have no clothes?

The one point where the various manufacturers concerned can’t hide the true state of play is when discussing security concerns and problems. The assumption has to be made that if there is a hole, someone could exploit it tomorrow, and the systems for which they are responsible need to be patched now to prevent that.

If computer technology really is mature, these holes will have been systematically closed off, and a design principle learned that prevents that hole or anything similar to it ever being an issue again. The implication is that design refinements and improvements will reduce the number of vulnerabilities needing to be patched, year-on-year. Has that been happening?

Well, in a nutshell – no. The frequency of security alerts has gone up, not down. The software engineers have managed to get ahead of the game to the point where they are most frequently patching systems before “exploits” show up “in the wild”, ie as anything more than theoretical; but that’s about the limit of the achievement thus far. And many of these exploits have disturbingly familiar descriptions that suggest that lessons are not being learned quickly enough to permit a systematic approach to the problem.

One computer-savvy person I know has even gone so far as to suggest that an increased need for responsiveness to threats is the real reason for software manufacturers’ insistence on push technology, and that the cozy hardware/software update cycle is not so much the result of a systemic conspiracy to enhance profitability (that has evolved naturally through the years) as it is an expression of people hanging onto the edge of the cliff by their fingernails.

Now, it might be that the growth in experts poking and prodding and discovering new vectors of insecurity is outstripping the pace of systematic improvements in computer security, producing an increase in threats being discovered even though those improvements are taking place. I’m certainly prepared to entertain that as a scenario. But even if that were the case, it argues that – despite appearances – personal computing technology is not yet a mature environment.

And there’s one final point to consider in this debate: the rise of Script Kiddies and the technology that makes them possible. It is now (reportedly) possible to go to websites and find ready-made software for creating your own worm or virus or spambot. I sincerely hope that this is urban legend, but having seen some of the spam that arrives at Campaign Mastery showing malformed scripts with familiar spam phrasing, I somehow doubt it.

The PC is not dead

Some people seem to have the impression that the personal computer is dead and buried. Many tech writers on the net seem to think so, too. To investigate the question, I pulled statistics from the last 6 months of traffic to Campaign Mastery. 64% of visits were from a machine using Windows as its operating system; about 11.5% used iOS; 10.5% used Android; 9% were Macintosh; and about 4% were Linux. The rest in combination totaled less than 1%. That’s 77% PCs of one sort or another and 22-23% some form of mobile computer. Even taking the Windows and Linux systems alone, the numbers are roughly 3:1 in favor of the PC. (There were a total of about 20 visits from game consoles, which I found interesting.)
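
For the curious, the aggregates can be recomputed from the quoted shares; this is just a quick sketch using the approximate percentages stated above:

```python
# Approximate OS shares of six months of Campaign Mastery traffic,
# as quoted above (values are percentages, rounded).
shares = {
    "Windows": 64.0,
    "iOS": 11.5,
    "Android": 10.5,
    "Macintosh": 9.0,
    "Linux": 4.0,
}

desktop = shares["Windows"] + shares["Macintosh"] + shares["Linux"]  # PCs of one sort or another
mobile = shares["iOS"] + shares["Android"]                           # mobile devices

print(f"Desktop share: {desktop}%")   # 77.0%
print(f"Mobile share: {mobile}%")     # 22.0%
print(f"Windows+Linux vs mobile: {(shares['Windows'] + shares['Linux']) / mobile:.1f}:1")
```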

I wanted to address this point because it directly relates to the ability of the readership to relate to the analogy that is at the heart of this article. Every PC user will, at some point, have to reinstall their operating system, or install a new one – or have it done for them. With no evidence of any sort other than anecdotal, I would estimate that at least half of the PC users out there have done it themselves at least once, and a substantial number of the rest will have at least some idea of what’s involved. For the rest of you, I’m going to describe the process – using as little jargon as possible.

Installing a computer

Putting a computer together is reasonably straightforward, at least in broad terms. You plug the hardware into the places the hardware is supposed to go, and you connect the cables to the places they are meant to connect. Most of these are designed so that there is only one right place for them to fit; where there is more than one, the connectors are fairly universally interchangeable amongst those connecting points.

The next step is to install the operating system. This tells the computer how to use the various bits of hardware and provides the environment for software to function within. This process can take a surprisingly short amount of time (when everything runs smoothly) or an extremely lengthy one (when it doesn’t); most of the problems tend to relate to hardware that the Operating System doesn’t understand or doesn’t configure properly. This may also involve installing device drivers for additional hardware – I have to do that for my graphics card, for example. Some people advocate making a systems backup at this point; others say to wait until after Step Three or Step Four.

Step Three is to configure all your options and settings within the operating system. Some people advise making a system backup at this point, others say to wait until after Step four.

Step Four is to patch the operating system – because the original system CD-ROM that you installed from doesn’t include the many updates that have occurred since it was purchased. If you haven’t made a systems backup already, now is the time to do it.

Step Five is to install the software that you will use to actually do things, and to start using the computer to do those things. Security-related software like Antivirus Software is right at the top of the list; everything else can be installed pretty much as you need it. Sometimes the order does matter, though – getting it wrong can make the task ten times as painful. Having that systems backup at least lets you go back to the start of Step Five if you monumentally foul it up.

A lot of people, myself included, divide software into two categories: Essentials and Others. I tend to install the Essentials and then do another whole-system backup, and I recommend this course of action to everyone else out there, as well.
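
The whole sequence can be sketched as a simple checklist; the step names here paraphrase the text, and the backup checkpoints follow the advice above (back up after patching at the latest, and again once the essentials are in):

```python
# The installation sequence described above, as a simple checklist.
# Step names are paraphrases; the True flags mark the recommended
# whole-system backup checkpoints.
INSTALL_STEPS = [
    ("assemble the hardware", False),
    ("install the operating system", False),
    ("configure options and settings", False),
    ("patch the operating system", True),
    ("install essential software", True),
    ("install everything else as needed", False),
]

for number, (step, backup_after) in enumerate(INSTALL_STEPS, start=1):
    note = "  -> make a whole-system backup" if backup_after else ""
    print(f"Step {number}: {step}{note}")
```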

Creating A Campaign

So, is everyone clear on all that? Good, then let’s move on to how it resembles the creation of an RPG campaign – and what can be learned from that.

The Hardware

The Hardware is the most fundamental part of the campaign: its concept. I wouldn’t go so far as to say that there’s only one right way to put the different parts of a campaign concept together, though, the way I can for computer hardware.

The Operating System

The Operating System is the core rulebooks that you require to implement the campaign. You can run the same campaign using a number of different systems; sometimes the differences that result will be profound, and sometimes they will be barely noticeable.

There’s not a lot of difference between AD&D and 2nd Ed D&D, for example. There’s not a lot of difference between D&D 3.0 and 3.5, and not a lot of difference between 3.5 and Pathfinder. There’s somewhat more difference between any of these and another fantasy-genre RPG such as The Lord Of The Rings RPG, Rolemaster, Empire Of The Petal Throne, Tunnels & Trolls, or Hackmaster (though early editions of the latter bear a great and deliberate resemblance to AD&D, the current edition has evolved down its own path). More different again are games dedicated to other genres, though there are some that deliberately evoke the same basic architecture – compare D&D 3.x with d20 Modern, for example.

It’s even possible to change “Operating Systems” mid-campaign. My original Fumanor campaign was designed for AD&D, initially ran under 2nd Ed, went to Rolemaster for a while (a bit of a disaster, that), and then became 3.0. The current Fumanor campaigns were designed for 3.0, and have run under 3.5 – and may eventually become Pathfinder, though I have yet to be convinced that there is any pressing reason to make that change. The next Fumanor Campaign will probably be run under Pathfinder rules – maybe.

Configuring Options

This is the equivalent of deciding what official supplements you are going to “turn on” within the campaign. Like some Operating System options, once you turn them on, they make lasting changes that can never be fully undone by turning them off again. Also like OS options, sometimes they cause compatibility or configuration problems, usually because one hasn’t been fully developed or properly tested before it was included.

The Essential Software

Analogous to Essential Software are those third party supplements that you need to incorporate to transform the concept into a functional reality. These can also conflict with the Operating System; to some extent, they all modify it to add needed functionality to the system.

The Non-Essential Software

Non-essential Software is analogous to House Rules. They modify, extend, or replace either parts of the Operating System or a key component, like Internet Explorer; and – in theory – they can be put in and taken out as necessary. If something isn’t working – say you don’t like the first MP3 player you install – you can uninstall it and put in another one. But sometimes these, too, have made lasting changes to the campaign that aren’t undone quite so easily.

User Documents

All this is infrastructure, designed to give you the tools needed to create and manipulate the documents and files that you create – the adventures and supplementary materials. Without these, the campaign doesn’t actually exist, however ready to play it may be. The principal documents in question are PCs, PC Campaign Briefings, and a Campaign Outline for the GM that describes (in general terms) what the adventures are going to stem from and what they are going to be. In television terms, these comprise the “Bible” that is sent to prospective writers who are going to write episodes for the TV show. They ensure consistency (at least in theory) from one week to the next, and keep all the creative personnel on the same page.

The Tail Wags The Dog

The end purpose of the entire assemblage is to enable the production of, or access to, “user documents”. Change the format of those User Documents and you will usually have to make a change at some more fundamental level – just as adding support for a new file format often means installing additional software. Fancy-formatted documents? You’ll need a word processor. Spreadsheets? You need spreadsheet software. Graphics? You need image-editing software. And so on.

But more than that, the shape and nature of the files that you want to work with extend their influence all the way back to the “operating system” itself. Some are inherently better at doing things than others. To the best of my knowledge, for example, Graphic Design studios are still almost completely dominated by Macintosh systems. The advent of Dreamweaver and Flash may have changed that, though.

The Power Of Analogy: Some truths about House Rules

Analogies are useful because they often put things into a different perspective, enabling the observer to perceive aspects and attributes that were previously hidden or not obvious. This analogy spells out the differences and relationships between the different categories of rules, for example, and how deeply a change at one level impacts the campaign as a whole. And it highlights some attributes of House Rules that are worth taking on board.

Modified Out-of-context excerpts

Most house rules consist of out-of-context excerpts from another rules system that have been modified to make them compatible with the rules in play in the current campaign. “I want to use the critical hits system from Rolemaster”, for example. Even if it’s mostly original work, the intent is generally to replicate the functionality of that rules subsystem.

Because these are inserted and adopted piecemeal, key checks and balances are often left out, or left crippled by the modification process. The worst system failures and abuses always stem from House Rules.

Always exist for a reason

House Rules always exist for a reason. That reason may be good (“this fixes a flaw in the experience table”, “this corrects one aspect of the rules that is incompatible with the campaign concept”), mediocre (“it fixes one problem but introduces another”) or trivial (“it looked interesting”, “it seemed like a good idea at the time”).

Always disruptive to the system

House Rules always come with an overhead. Even if they replace one mechanic with a simpler one, there is the price-tag of learning the new system. And because they are almost always written by relative amateurs, or integrated into the campaign by them, they are like a piece of software that hasn’t yet been fully tested. Expect strange bugs and quirks to show up.

But there’s more. Every House Rule is, to some extent, disruptive of the system that has been crafted and playtested. As my recent article on efficiency in game mechanics showed, even small disruptions that occur frequently can have an enormous cumulative effect – just as a new piece of software can chew up memory or processing power even when it’s not actually in use.

More Responsive To Change

There is an upside. It’s a lot easier to change a house rule, or even drop one completely, than it is to change or drop an official rule. You may have to live with the consequences of that House Rule having been in place for a period of time, just as a computer may be left with documents produced using software that’s no longer installed – but going forward, the rule is gone.

The Key To Success

The key to the successful creation and integration of House Rules is always a question of Utility vs Overhead. If the Utility value is high, then you want to keep the House Rule, or (at most) tweak it a little to reduce the overhead cost – then live with it. If the Utility value is moderate, then look around for alternative approaches that may also have the utility, or close to it – even if they do things a different way – but that have a much smaller overhead. If the Utility is low, the rule isn’t working and should be junked or replaced, ASAP.

But these values are only mildly covariant – the Utility Value depends on the campaign, the rules system, and the context within the game world, and has only a moderate relationship to the scale of the overhead. There is a whole array of combinations of these two values:

House Rules
Utility vs. Overhead Table

                      Low Overhead           Moderate Overhead      High Overhead
  High Utility        keep as-is             keep; tweak it         keep; trim the cost
  Moderate Utility    keep, or seek better   seek an alternative    needs improvement
  Low Utility         junk or replace        junk or replace        junk or replace, ASAP

The closer a rule sits to the bottom-right corner of the table, the more “broken” it is. The middle band needs improvement, and is tolerable only if there’s nothing better.
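
The Utility-vs-Overhead reasoning can be made concrete with a small sketch that maps a pair of ratings to a verdict; the wording of the verdicts is my own illustration of the guidance above, not a formula from any rulebook:

```python
# A sketch of the Utility-vs-Overhead verdicts described above.
# Ratings are "low", "moderate", or "high"; the verdict strings
# are illustrative paraphrases, not official terminology.
def assess_house_rule(utility: str, overhead: str) -> str:
    """Return a verdict for a house rule with the given ratings."""
    if utility == "high":
        # High utility: keep it, tweaking at most to trim the overhead.
        return "keep as-is" if overhead == "low" else "keep; tweak to reduce overhead"
    if utility == "moderate":
        # Moderate utility: hunt for a lower-overhead alternative.
        return "usable, but look for a lower-overhead alternative"
    # Low utility: the rule isn't working.
    return "junk or replace, ASAP"

print(assess_house_rule("high", "low"))       # keep as-is
print(assess_house_rule("moderate", "high"))  # usable, but look for a lower-overhead alternative
print(assess_house_rule("low", "moderate"))   # junk or replace, ASAP
```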

Self-containment

The more self-contained a rule is, the more it will tend to be in the top-left of the table – the “good zone” – and the more amenable that rule is for excerpting into a satisfactory House Rule.

An excellent example is the Hero System rules for Psychological Limitations. These could be dropped almost straight into D&D because they are so self-contained; each defines a psychological quirk that is strong enough to alter the character’s behavior away from what is best for him, rates it in terms of how often it impacts the character and how severely, and gives a value to the result in Character Points. To drop this in as a house rule, you need only do two things (aside from copying the two tables and the paragraph or so that outline the rule): define a “character point” in terms of what a character gets for it in D&D, and set limits as to how many a character can have.

An unwritten principle of the Hero System is that if a character is handicapped in some way, there is a corresponding benefit of equal measure. The Character Points, which can be spent on additional powers or skills or stats or whatever, are the benefit that a character receives for taking the disadvantage. A strongly and often-stated corollary is that a disadvantage that is no hindrance is no disadvantage, and worth no points.

So clearly, the “character points” received should provide some benefit to the character. One obvious choice would be a certain amount of starting gold per character point, to be used for additional equipment. Another might be some equivalent value in XP. If the GM was particularly paranoid about game balance, he could determine an “average expected value” and subtract that from the amount of the commodity that is usually received by starting characters – leaving it up to each player whether they wanted their characters to be psychologically flexible but with a poorer starting position in the game, or more psychologically fixed and defined, but with a starting advantage to make up for this constraint on their future choices.
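
As a concrete sketch of that conversion, here is one hypothetical way to turn Psychological Limitation points into a starting-gold adjustment; the exchange rate and the “average expected value” are invented for illustration, and come from no actual rulebook:

```python
# A hypothetical conversion of Hero-style Psychological Limitation
# points into bonus starting gold for a D&D character. Both constants
# below are illustrative assumptions, not official values.
GOLD_PER_POINT = 25           # assumed: 25 gp of starting equipment per point
AVERAGE_EXPECTED_POINTS = 10  # assumed average a "paranoid" GM subtracts first

def starting_gold_bonus(limitation_points: int, paranoid_gm: bool = False) -> int:
    """Gold adjustment for taking this many points of Psych Lims."""
    points = limitation_points
    if paranoid_gm:
        # Subtracting the expected average means psychologically flexible
        # characters start poorer, while heavily-limited ones still gain an edge.
        points -= AVERAGE_EXPECTED_POINTS
    return points * GOLD_PER_POINT

print(starting_gold_bonus(15))                    # 375
print(starting_gold_bonus(15, paranoid_gm=True))  # 125
print(starting_gold_bonus(0, paranoid_gm=True))   # -250
```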

You could go further, and define related House Rules for the change or removal of these limitations – perhaps it takes adventures totaling so much XP in order to effect such a change. You could add rules for acquiring additional ones, perhaps lifting them from Call of Cthulhu’s insanity rules. But those are a separate issue.

These are all – except possibly the last one – low overhead, moderate-to-high utility house rules. By mandating specific examples, the GM can define and enforce attitudes that are reflective of the social position and mindset of a given group – “All druids must have ‘Environmental Awareness’ of 10 points or more”, for example.

Excerpted Rules

Excerpting rules in this fashion takes advantage of the playtesting and experience of others in the use of that subsystem, even if it has not previously appeared as part of the Core Rules that you are using for your campaign. You could get examples of Psychological Limitations from the Hero Games rulebooks or website. You can ask about Psych Lims that caused problems within a campaign, and talk to other gamers who are familiar with the source rule about interpretations.

It’s hard to be both original and to produce a quality rule without either a lot of experience, a lot of luck, a natural flair for rules design, or a crutch to lean on. Existing rules from other game systems can be that crutch. At the very least, they can provide a template upon which your House Rule can be built, a foundation that makes the entire structure more robust.
