This entry is part 3 in the series The Zener Gate System

This illustration is a composite of ‘Hexagon Structure 1c’ by freeimages.com / deafstar
and ‘Vector Gears’ by freeimages.com / Andrew Javorsky.

Prelude I:

Someone asked why readers might want to read a diary of rules creation.

The answer is simple: it helps you understand rules and rules processes, making it easier for a GM to interpret other game mechanics as they encounter them.

That’s always the value of a glimpse behind-the-scenes!
 

Prelude II:

Well, that was an adventure! Sorry for the delay in posting, folks – it wasn’t my fault! There was a security tangle between my ISP’s backbone provider and my hosting service, with the bottom line being that I was ejected and locked out as a hacker. It was supposed to be only for 10 minutes, but didn’t unlock properly because I was already logged onto the site and in the process of uploading this article. But Bryan from TCH Hosting has done a great job of helping me sort it out – thanks, Bryan! :)

Usually, when you develop rules structures, you edit and write over the top of your draft in progress until satisfied. Because I want this to be as much about my thought processes during rules development as about the finished rules, that’s not the approach that this article will take. Instead, I’ll be transcribing my thoughts in chronological sequence as they happen, with a minimum of editing for clarity, and showing all my intermediate stages – even if they lead me down a blind alley for a time.

In the last article dealing with the Zener Gate rules, I made mention of a table that was to be at the heart of the system, and a few dangling unresolved questions. Today’s article is intended to complete the picture.

What needs to be in this table of comparative values? Range, size of target (at the large end), delicacy or precision (at the small end), time, weight. Maybe speed.

The parts of the system worked out so far indicate that +1 is a significant advantage, -1 a significant liability, and anything up to plus-or-minus-6 can be tolerated – as an extreme modifier. Since some modifiers can counter others, that means that the most useful range on the table will be -12 to +12. I could run it up to plus-or-minus 15, or I could go 20, or even 25 – but whatever I choose, the number of entries on the table will be double that number, and that has me inclined to go smaller rather than larger in terms of range.

But that also makes a big assumption: that minus values will need to extend to the same distance as positive ones. And I don’t think that is likely to be the case. For every 5 values I remove from the low end of the scale, I gain 5 more that I can use at the high end. If I can, I’d like to get away with a low of -5, leaving me 10 more to play with at the high end on a thirty-entry table. But that will all depend on the progressions that I choose and which seem reasonable. And those will be different for each attribute that is indexed.

Weight

The Hero System bases its LIFT value – the real-world index of STR – on a geometric progression in which each +5 STR doubles lifting capacity. The base value is 100kg at STR 10.

That works well for a superhero game, moderately well for a pulp game, not all that well for a game populated by normal people. LIFT goes up too fast – a STR of 25 permits a lift of 800kg, or a small trailer.

A key question has always been whether or not this “Lift” included the character’s body weight. Part of the table (the low part, in which a grenade requires a STR of -25 to lift) argues no, but the base value makes a heck of a lot more sense (given that STR 10 is supposed to be the Strength of “the average person”) if 100 lb – about 45 kg – or so is already used up getting the character upright.

I don’t consider my personal Strength to be that far removed from average, but I doubt that I could lift 100kg. Even 50kg would be a struggle – if lifting meant being able to hoist it overhead without assistance.

So instead, I’m going to look at the question of weight in a different way – as “Load”.

Load

A character’s total load capacity is determined by looking up their STR on the index and finding the corresponding weight value.

A Distributed Load counts for 1/3 of its actual weight. So 6kg of uniform, boots, etc. uses only 2kg of the capacity. 60kg of body armor would only use 20kg of the load capacity. Medieval armor, at its heaviest, came in at about 50kg, because the heaviest load that could be carried by warhorses of the era was the limiting factor. Note, too, that if you were expected to fight while wearing it, you would not want this load to be anywhere near the wearer’s capacity!

A Balanced Load counts for 1/2 of its actual weight. So 20kg of backpack would use 10kg of capacity.

Unbalanced Loads are the least desirable, counting fully.

Shared Loads

If multiple characters work together to lift or move something heavy, how should loads be assessed? Dividing the load by the number of participants gives each individual load, and the group can only move as fast, and as far, as its most heavily-burdened member.
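The load rules above reduce to a tiny bit of arithmetic. Here’s a minimal sketch – the function and category names are mine, not part of the system:

```python
# Sketch of the load rules: distributed loads count for 1/3 of their weight,
# balanced loads for 1/2, unbalanced loads in full; a shared load is divided
# evenly among the carriers first.
DIVISOR = {"distributed": 3, "balanced": 2, "unbalanced": 1}

def effective_load(weight_kg, category, carriers=1):
    """Load capacity used up per carrier, in kg."""
    return weight_kg / carriers / DIVISOR[category]

print(effective_load(60, "distributed"))              # 60kg of body armor → 20.0
print(effective_load(20, "balanced"))                 # 20kg backpack → 10.0
print(effective_load(100, "unbalanced", carriers=4))  # four carriers → 25.0
```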

That means that the base value can be set quite a bit lower, and the progression can be quite a bit slower, and reasonable results can still come out the other end.

I was momentarily inclined toward the elegance of a base of 10kg at STR 10, but that seems too low. Something closer to 25 or 30 kg seems more reasonable.

To work out the progression, the simplest way is to look at the top end of the scale. If the top STR value to be indexed for humans is 25, what’s the world record clean-and-jerk?

263.5 kg, lifted by Hossein Rezazadeh, according to Wikipedia.

Let’s plug that in and see where we get:

Unbalanced Load

So, if every +1 represents ×x on the scale, with STR 10 being 25 or 30 and STR 11 being 25×x or 30×x respectively, then STR 25 is 25 or 30 times x to the 15th power:

    263.5 = approx 25 x^15. Take the log of both sides:
    Log(263.5) = log(25) + 15×log(x)
    Log(263.5) - log(25) = 15×log(x) = 2.42078 – 1.39794 = 1.02284
    log(x) = 1.02284 / 15 = 0.06819
    x = 1.17001

Now, that’s not all that convenient a number. Trying it with 30 as the basis won’t make a huge amount of difference, either; x would still likely end up being 1.1-something.
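The log manipulation above is easy to check numerically. A quick sketch (the function name is mine):

```python
import math

def step_factor(target, base, steps):
    """Per-step multiplier x such that base * x**steps == target."""
    return 10 ** ((math.log10(target) - math.log10(base)) / steps)

# STR 10 → STR 25 is 15 steps; world-record lift of 263.5 kg:
print(round(step_factor(263.5, 25, 15), 5))  # → 1.17001, as derived above
print(round(step_factor(263.5, 30, 15), 5))  # → 1.15587 – still "1.1-something"
```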

So let’s go with a progression of 1.2, and round the progression off every now and then – downwards.

    (STR 10) 25;
    (STR 11) 25×1.2=30;
    (STR 12) 30×1.2=36;
    (STR 13) 36×1.2= 43.2, round down to 43;
    (STR 14) 43×1.2=51.6, round down to 50.

That’s a doubling every +4 STR, much to my surprise! So +15 STR would be ×2 ×2 ×2 ×43/25 of the STR 10 value: 43 ×2 = 86, ×2 = 172, ×2 = 344kg.

We can quickly work out the actual record: 263.5 / 8 = 32.9375, which is a smidgen more than STR 11 above, which means the record is 11-point-something, +12, = 23-point-something. That’s close enough to be workable.

What if the progression is fine, but the base value is a bit too high? What does it need to be for the record to come in at exactly STR 25?

263.5 / 8 = 32.9375; 32.9375×25 / 43 = 19.149, or 19.15kg.

So the best compromise would probably be to define STR 10 as permitting a 20kg load, with a ×1.2 progression from there:

    (STR 10) 20;
    (STR 11) 20×1.2=24;
    (STR 12) 24×1.2=28.8, round down to 28;
    (STR 13) 28×1.2= 33.6, round up to 34;
    (STR 14) 34×1.2=40.8, round down to 40.
    (STR 18) 40×2=80.
    (STR 22) 80×2=160.
    (STR 23) 160×1.2=192.
    (STR 24) 192×1.2=230.4, round down to 230.
    (STR 25) 230×1.2=276.

Still not quite there – the world record would be somewhere in the vicinity of STR 24.5.
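The smooth (unrounded) version of the same question can be asked directly: what STR does the ×1.2 progression from a base of 20 assign to any given weight? A sketch, with the caveat that the hand-rounded table above will differ slightly from the pure formula:

```python
import math

def str_for_weight(weight_kg, base=20.0, factor=1.2):
    """Unrounded STR at which a geometric progression from a STR-10 base
    reaches the given weight."""
    return 10 + math.log(weight_kg / base) / math.log(factor)

# The record 263.5 kg on the base-20, ×1.2 scale:
print(round(str_for_weight(263.5), 2))  # → 24.14 – short of STR 25 either way
```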

Hold the phone – what if we consider the load to be balanced, instead of unbalanced?

Balanced Load

In this case, the static load was 263.5, but the balanced load is half that, or 131.75.

We now have three possible bases for consideration: 20, 25, and 30.

Base 20:

    131.75 = approx 20 x^15. Take the log of both sides:
    Log(131.75) = log (20) + 15×log(x)
    Log(131.75)-log(20) = 15×log(x) = 2.11975 – 1.30103 = 0.81872
    log(x) = 0.81872 / 15 = 0.05458
    x = 1.134

…not especially nice. It’s too far away from 1.1 to round down and from 1.2 to round up.

Base 25:

    131.75 = approx 25 x^15. Take the log of both sides:
    Log(131.75) = log (25) + 15×log(x)
    Log(131.75)-log(25) = 15×log(x) = 2.11975 – 1.39794 = 0.72181
    log(x) = 0.72181 / 15 = 0.04812
    x = 1.117

…better, not far removed from 1.1.

Base 30:

    131.75 = approx 30 x^15. Take the log of both sides:
    Log(131.75) = log (30)+ 15×log(x)
    Log(131.75)-log(30) = 15×log(x) = 2.11975 – 1.47712 = 0.64263
    log(x) = 0.64263 / 15 = 0.042842
    x = 1.10367

…which is really close to 1.1. Rounding errors would soon swamp any difference that small. So base 30 gets the nod, and the progression is now ×1.1:

    (STR 10) 30;
    (STR 11) 30×1.1=33;
    (STR 12) 33×1.1=36.3, round down to 36;
    (STR 13) 36×1.1= 39.6, round up to 40;
    (STR 14) 40×1.1=44.
    (STR 15) 44×1.1=48.4, round down to 48.
    (STR 16) 48×1.1=52.8, round up to 53.
    (STR 17) 53×1.1=58.3, round down to 58.

… looks like we aren’t going to get a nice neat “doubles in this many steps”. Maybe if we round up at STR 15?

    (STR 15) 44×1.1=48.4, round up to 49.
    (STR 16) 49×1.1=53.9, round up to 54.
    (STR 17) 54×1.1=59.4, round up 60.

It took another “round up, not off” in the last step, but this progression gets us there – load capacity doubles every +7 STR.

Of course, this list isn’t used just for people. Vehicles have a STR, too, that defines their carrying capacity. A sports car has room for 2 people (240kg-250kg, maximum), plus at best 50kg of baggage. Plus itself, of course, but that doesn’t count. This is a distributed load (over all four tires), so the actual static load equivalent would be 4×300=1200kg.

    (STR 17) 60.
    (STR 24) 120.
    (STR 31) 240.
    (STR 38) 480.
    (STR 45) 960.
    (STR 46) 960×1.1=1056.
    (STR 47) 1056×1.1=1161.6, round down to 1161.
    (STR 48) 1161×1.1= 1277.1, round down to 1277.

So a sports car would have a STR of about 47.2 or something like that.

A four-passenger saloon can carry four people and easily 300kg of luggage. 4×120=480, +300 = 780. But this is a distributed load, so the static load equivalent is 4×780 (four tires) = 3120.

    (STR 49) 1277×1.1 = 1404.7, round up to 1405.
    (STR 50) 1405×1.1 = 1545.5, round down to 1545.
    (STR 51) 1545×1.1 = 1699.5, round up to 1700.
    (STR 52) 1700×1.1 = 1870.
    (STR 53) 1870×1.1 = 2057.
    (STR 54) 2057×1.1 = 2262.7, round up to 2263.
    (STR 55) 2263×1.1 = 2489.3, round down to 2489. Except that it should also be 2×1277, which is 2554. So split the difference and call it 2500.
    (STR 56) 2500×1.1 = 2750.
    (STR 57) 2750×1.1 = 3025.
    (STR 58) 3025×1.1 = 3327. So a family saloon would have a STR of about 57.3.

Note that this isn’t the only way to calculate the table. I could take as gospel the principle of double every +7 STR. Which means that STR 18 will be double STR 11, and STR 19 will be double STR 12, and so on. This preserves the rounding errors in the original progression, and enlarges them, but it preserves the shortcut perfectly.

And that makes it easy to find any load on the table, even if the table doesn’t go up that high. Simply keep halving the load (and counting the number of times you have to do so) until you get to a value within the range of the table. Count +7 for each doubling, and add the STR indicated by the table.

A freighter carrying 100,000 tonnes? That’s a classic distributed load, so ×3 (there are no legs or tires to spread the load over, so we fall back on the standard).

    300,000 -> 150,000.
    150,000 -> 75,000.
    75,000 -> 37,500.
    37,500 -> 18750.
    18750 -> 9375.
    9375 -> 4687.5.
    4687.5 -> 2343.75. Which is a smidgen under halfway between STR 54 and STR 55, according to our calculations above – call it 54.5. But those table entries are in kilograms, and we’ve been halving tonnes: ×1000 is almost exactly ten more doublings (2^10 = 1024), worth another +70. So (7×7) + 70 + 54.5 = 49 + 70 + 54.5 = 173.5.
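The “keep halving, count the doublings” shortcut can be sketched mechanically. The table fragment and the band handling are my own simplification, using the ×1.1 values chosen above:

```python
# Halve the (category-adjusted) load until it falls within the STR 10-16
# rows of the table, then add +7 STR per halving.
TABLE = {10: 30, 11: 33, 12: 36, 13: 40, 14: 44, 15: 49, 16: 54}

def str_for_load(load_kg):
    doublings = 0
    while load_kg > 54:        # 54 kg is the top entry kept in TABLE
        load_kg /= 2
        doublings += 1
    # highest STR whose capacity the remaining load still reaches
    # (loads below 30 kg floor at STR 10 in this sketch)
    base = max((s for s, cap in TABLE.items() if cap <= load_kg), default=10)
    return base + 7 * doublings

print(str_for_load(60))    # → 17, matching the table above
print(str_for_load(240))   # → 31
print(str_for_load(1200))  # → 47 – the sports car from earlier
```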

A third approach is hinted at by what I did at STR 55, above. I rounded off to a convenient number. Which might not be mathematically accurate, but which is a heck of a lot easier to use. And that’s a winning argument in my book.

At this point, constructing the “weight” part of the table is a simple exercise.

Length/Distance

Whenever I think of this value, I think of modifiers to an attack roll, or to a perception or “spot” roll – however the PC wants to define it. Something along those lines is ubiquitous in RPG game mechanics.

But here I don’t have a base value to start from. I could define one – “-1 at 5m” or “-1 at 10m” or something along those lines. I also have no real idea of the desired progression rate. So this is going to be a great deal harder.

I think the way to get a handle on this is to look at the sporting events of some sort of international competition. I didn’t find a list of Olympic events at Wikipedia (I’m sure it’s there somewhere) but did find one for the Commonwealth Games – 10m air pistol, 25m sport pistol, 25m standard pistol, 50m small-bore rifle – so these are important values that need to be embedded within the table.

The longest confirmed sniper kill in combat was achieved by an undisclosed member of the Canadian JTF2 special forces in June 2017 at a distance of 3,540m. So that gives some sort of upper range to the table. I presume that a specialized weapon and expert training are both required, and those would presumably be worth something like +5 each, maybe more – let’s say +10-20 between them. Aiming could achieve as much as +10, also maybe more. Skill checks are to be made using 3d6, and low is better than high. So a 3/- has to result from difficulty minus modifiers. Or, to put it another way, difficulty = 3 + modifiers.

That pegs this value as roughly index points 23-33 on the table. That more or less fits with the notion of a total number of entries of about 30 – and means that there will be some close ranges at which characters receive a bonus to hit for proximity instead of a penalty for distance.

So 3500m is going to be roughly 30 on the table, and 1m=+0 seems reasonable.

    3500/1 = x^30.
    log 3500 = 30 log x.
    3.544 / 30 = log x = 0.11813333
    x = 1.3126.

That’s not at all a convenient number. Increasing this reduces the number at which 3500m falls on the range, and so reduces the modifiers against success at that range. But we haven’t even done aiming time yet, which is one of the factors being taken into account – so it might be +10 (as speculated) or it might be +7 or something like that. Adjusting the aiming time bonus compensates for any reduction in difficulty.

Reducing it blows the difficulty out, making this even more of a difficult shot to make. And, realistically, a 3 on 3d6 comes up one in 216 times, which is not all that remarkable. Getting six dice to snake eyes would make this a one-in-46,656 shot – which is closer to the mark. Nine dice to snake eyes would make this a one in 10,077,696 shot – that’s noteworthy!
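Those long-odds figures are simply the chance of rolling the minimum on N six-sided dice – a quick check:

```python
# Chance of every die showing a 1 on N six-sided dice is 1 in 6**N.
for n in (3, 6, 9):
    print(f"{n} dice, all 1s: 1 in {6 ** n:,}")
# → 1 in 216, 1 in 46,656, 1 in 10,077,696 – the figures quoted above
```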

Six Dice? Nine Dice? Where did that come from?

Since writing the previous article, I’ve decided to incorporate an additional game mechanic. If success is impossible (i.e. a 2 or less is required), a character can try for a miracle success. For every extra die they roll and count toward the total, they increase the target by +2, up to the point where a possible roll is achieved. So 2/- on 3 dice becomes 4/- on 4 dice.

Similarly, if a character can’t fail – the chance is 18/- on 3d6 or better – the character can choose to add “extra benefits” to their attempt. The GM evaluates whatever benefit or trick the player wants to add as an increase in the difficulty. For every 2 over 18/-, the difficulty target gets reduced by 2 for each extra die that the character gets to roll, ignoring all but the lowest 3. So a 19/- becomes a 17/- on 4 dice, keep the lowest three, with a +2 gimmick, benefit, or advantage. A 20/- becomes a 16/- on 5 dice, keep the lowest three, with a +4 gimmick, benefit, or advantage. A 22/- becomes 16/- on six dice, keep the lowest three, with a +6 gimmick, benefit, or advantage.

These are intended to (1) give PCs a chance at achieving a hail-Mary pass; and (2) offer them a benefit if they increase the chance of failing when success would otherwise be automatic, both as optional rules that the player (not the GM) can invoke.

So, 3/- on 9 dice (six more than the usual 3d6) is worth +12 modifier, meaning that the original chance could be as low as 3-12=-9. Which in turn means that I can put the range entry for 3500m as much as 9 places higher up the table.

That gives me some wriggle room in constructing this progression. I can pick a convenient value, and so long as 3500 comes out meaning something between 23 and 42, everything else can be tweaked to fit the scale.

The pivot point is a progression of 1.3126 – higher than that, and the difficulty is lower; lower than that, and it becomes higher.

Rather than trying to match that with an exact result of convenience, though, a far better approach is to work out how quickly the range index doubles. Is it every step? Every 2nd step? Every 3rd? 4th? 5th? more?

Or, indexing to a ×5 or a ×10 might make more sense.

When you have so many options to choose from, the best answer is to try them all out for size, and see which one looks prettiest.
 

    ×2 every +1 = ×2; 3500m = 12. Too low, our window is 23-42.
    ×2 every +2 = ×1.414; 3500m = 23.55. At the very low end of what’s permitted.
    ×2 every +3 = ×1.26; 3500m = 35.31. Nicely in the middle of the range of permitted values.
    ×2 every +4 = ×1.19. 3500m = 46.91. A little more than the highest acceptable value.
     
    ×5 every +2 = ×2.236. 3500m = 10. The acceptable window is 23-42, so this is too low.
    ×5 every +3 = ×1.71. 3500m = 15. Still too low.
    ×5 every +4 = ×1.5. 3500m = 20.12. A little too low.
    ×5 every +5 = ×1.38. 3500m = 25.34. Acceptable, but on the low side.
    ×5 every +6 = ×1.308. 3500m = 30.393. Close to perfect.
    ×5 every +7 = ×1.2585. 3500m = 35.493. Still acceptable.
    ×5 every +8 = ×1.223. 3500m = 40.5377. Acceptable, but on the high side.
    ×5 every +9 = ×1.1958. 3500m = 45.6366. Too high.
     
    ×10 every +4 = ×1.778. 3500m = 14.18. Too low.
    ×10 every +5 = ×1.585. 3500m = 17.718. Too low.
    ×10 every +6 = ×1.4678. 3500m = 21.264. A little too low.
    ×10 every +7 = ×1.3895. 3500m = 24.808. The low end of acceptable.
    ×10 every +8 = ×1.3335. 3500m = 28.354. Acceptable, but still a little low.
    ×10 every +9 = ×1.29155. 3500m = 31.8966. Acceptable.
    ×10 every +10 = ×1.26. 3500m = 35.43156. Acceptable.
    ×10 every +11 = ×1.233. 3500m = 38.9616. Acceptable.
    ×10 every +12 = ×1.2115. 3500m = 42.5339. Just barely outside the acceptable range.
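The whole survey above is one formula evaluated over and over. A sketch that reruns a few of the shortlisted candidates (minor differences from the hand-rounded figures above are just rounding):

```python
import math

def index_at(distance_m, factor, steps):
    """Table index of a distance if the scale multiplies by `factor`
    every `steps` entries."""
    return steps * math.log(distance_m) / math.log(factor)

# Where does 3500m land for each candidate? The window is 23-42.
for factor, steps in [(2, 2), (2, 3), (5, 6), (10, 9), (10, 10)]:
    idx = index_at(3500, factor, steps)
    verdict = "acceptable" if 23 <= idx <= 42 else "out of window"
    print(f"×{factor} every +{steps}: 3500m = {idx:.2f} ({verdict})")
```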

 
So, the choices are:

  • ×2 every +2
  • ×2 every +3
  • ×5 every +5
  • ×5 every +6
  • ×5 every +7
  • ×5 every +8
  • ×10 every +7
  • ×10 every +8
  • ×10 every +9
  • ×10 every +10
  • ×10 every +11

 
Scoring big for elegance are “×2 every +2”, “×5 every +5” and “×10 every +10”. Scoring big for accuracy to the desired result of about 30 are “×5 every +6” and “×10 every +9”, but neither of those makes the elegance cut, so at best they are on an equal standing with the first three choices shortlisted. Scoring big in terms of a multiplication factor that’s easy to work with are “×5 every +4” and “×10 every +10”, with “×5 every +5” close behind. That means we have one clear winner with a score of 2 out of 3 – “×10 every +10”, or ×1.26.

That wasn’t the result that I was expecting – I was sure that a ×2 or ×5 would be more likely to get the nod – but mathematics doesn’t bend to suit our expectations.

The resulting progression is:

    0 = 1m
    1 = 1.3m
    2 = 1.6m
    3 = 2m
    4 = 2.5m
    5 = 3.2m
    6 = 4m
    7 = 5m
    8 = 6.4m
    9 = 8m
    10 = 10m
    11 = 13m
    12 = 16m
    13 = 20m

…and so on. And 3500m is a modifier of 5 (from 3.5) +10 (to 35) +10 (to 350) +10 (to 3500)=35.
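Because the chosen scale is exactly ×10 every +10, the whole range column collapses into a logarithm. A sketch (rounding to the nearest table entry):

```python
import math

def range_modifier(distance_m):
    """Range index on the ×10-every-+10 scale: 10 × log10(distance)."""
    return round(10 * math.log10(distance_m))

print(range_modifier(1))     # → 0
print(range_modifier(13))    # → 11, matching the table above
print(range_modifier(3500))  # → 35, matching the count-up by decades
```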

Size

I once did an experiment to get a better handle on how target size should work. I drew a number of squares on a sheet of graph paper – 5cm×5cm, 10cm×10cm, 2cm×2cm, 4cm×4cm, and 8cm×8cm, all arranged concentrically. From a height of about 10cm, I dropped 1cm×1cm×1cm d6s and made a mark where each landed. I then repeated the experiment from a height of about 20cm, about 40cm, and about 80cm.

The purpose was to see whether increasing the target area also increased the number of “hits” proportionally, using the 5cm×5cm score and the 10cm×10cm score. These results would either largely track with the 4cm×4cm vs 8cm×8cm results or they wouldn’t, and the 2cm×2cm vs 4cm×4cm results would give some indication of how the accuracy changed with target area. Comparing all of these with the matching results from the different heights would permit an estimation of the effect of range on accuracy relative to target size.

So, did doubling the target size double the accuracy?

In fact, it did everything but, depending on the range (height above the graph paper). At close ranges, the majority of dice landed inside the 5×5 area – something like 80% of them. Virtually all of them landed inside the 10×10 area – only about a 25% accuracy increase from quadrupling the area.

This finding was reinforced by the 2-vs-4-vs-8 results. About 35% landed inside the 2×2 area, another 30% in the 4×4 area, and about 20% more inside the 8×8 area.

As the range increased, so did my inaccuracy (no surprise there!), and the accuracy counts began to approach the sort of ratios that you would expect from the different areas, but even at the greatest range, they never quite got there. I could only conclude that my attempts to aim for the center of the target – no matter how good or how bad – biased even the misses closer to the target than area alone would suggest. At close ranges, this effect overwhelmed the randomness.

So the size of the target, as a modifier, is dependent on the range. Which is extremely difficult to model using simple mechanics of the sort being contemplated for this game system.

Up to a certain point, doubling the size of the target more than doubles the accuracy. Which is another way of saying that the modifiers should not require a doubling of the size for a doubling of the modifier; a smaller increase in the area will do.

That stops when the range outweighs the target size. The easiest way to build this behavior into the table is a “shift” up the table if the range modifier is greater than the size value, and a shift down the table if the range modifier is smaller than the size value – in terms of determining the size increase represented by a particular modifier.

But in practical usage, we will want to determine a modifier based on the size of the target, so these adjustments have to go in the other direction – a “shift down” if the range value is greater than the target, a “shift up” if the range modifier is smaller than the target modifier.

For various reasons that I won’t go into here (too long and complicated), these shifts should have non-linear intervals – 1,2,3,4,5,6, and so on.

So,

    +1 = diff 1
    +2 = diff 2 to 1+2=3
    +3 = diff 4 to 3+4=7
    +4 = diff 8 to 7+5=12
    +5 = diff 13 to 12+6=18
    +6 = diff 19 to 18+7=25
    +7 = diff 26 to 25+8=33
    +8 = diff 34 to 33+9=42
    +9 = diff 43 to 42+10=52.
    +10 = diff 53 to 52+11=63.

…which is more than we are ever likely to need, but the table can be extended from there.
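The bands above can be generated rather than memorized. A sketch that reproduces the table (band-width rule inferred from the listed values):

```python
# Band widths run 1, 2, 4, 5, 6, 7… – after the first two bands, each
# upper bound grows by one more than the shift value.
def shift_for(diff):
    """Shift earned by a (positive) difference between range and size modifiers."""
    shift, upper = 1, 1
    while diff > upper:
        shift += 1
        upper += 2 if shift == 2 else shift + 1
    return shift

print([shift_for(d) for d in (1, 3, 7, 12, 23, 33)])  # → [1, 2, 3, 4, 6, 7]
```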

To accommodate this effect, I need to extend the table seven extra entries in either direction for size only. But that means that I can then use a simple doubling of area for a given modifier.

Next, we need to define a base standard. I keep coming back to 1m × 1m at 2m. If you do the math, that’s a target that subtends about 28 degrees of a possible 180 (360 if you had eyes in the back of your head), or roughly 15% of the visible width.

Why 1m × 1m? Well, the typical human is roughly 2m high × 0.5m wide, which just happens to come to the same area as a 1m × 1m target.

Torso plus head is roughly half that size – leaving an area of about the same size if the goal is to avoid hitting a vital area, conveniently! Head and neck alone are roughly 1/4 the size of torso+head. A hand and wrist is about half that, if open, or about 1/4 of it if wrapped around a grip – so, to attempt to shoot the weapon out of someone’s hand, we’re talking about the same area as the open hand, consisting of half weapon and half gripping hand. Eye sockets are each about 1/3 of the width of the head and about 1/6th its length – so that’s 1/18th of the head – but a glancing blow to the eyebrow ridge has a 50-50 chance of deflecting towards the eye socket, so we can justify making them a little larger; a convenient 1/16th of the head size is a good working value. And a ring, or a darts bull’s-eye, is about half that area. So 1m × 1m gives a whole range of useful values!

I want these to all be listed on the table. They are all things that a PC might want to target, depending on the situation.

    +0 = 1m² at 2m, human
    -1 = head + torso or flesh wound
    -2 = head + neck
    -3 = open hand or weapon in hand
    -4 = fist
    -5 = finger
    -6 = eye socket
    -7 = ring, darts bull’s-eye, marble, button
    -8 = keyhole

With the main table, I’m going to take a couple of “rounding error” liberties to keep the values useful.

    1 = 2 m² (large motorcycle, doorway)
    2 = 4 m² (small car side view)
    3 = 10 m² (truck side view)
    4 = 15 m² (aircraft control cabin)
    5 = 30 m² (fishing trawler, barn door)
    6 = 60 m² (locomotive, barn side view)
    7 = 120 m² (small train)
    8 = 250 m² (large train, freighter side view, small house)
    9 = 500 m² (large house)
    10 = 1000 m² (small mansion, lighthouse)
    11 = 2000 m² (large mansion, Eiffel tower)
    12 = 4000 m² (the Pentagon, top view)
    13 = 8000 m² (small skyscraper, side view)
    14 = 12,000 m²
    15 = 25,000 m²
    16 = 50,000 m²
    17 = 1 km²
    18 = 2 km²
    19 = 4 km²
    20 = 8 km²
    21 = 15 km²
    22 = 30 km²
    23 = 60 km²
    24 = 120 km²
    25 = 250 km²
    26 = 500 km²
    27 = 1000 km²
    28 = 2000 km²
    29 = 4000 km²
    30 = 8000 km²
    31 = 15,000 km²
    32 = 30,000 km²
    33 = 60,000 km²
    34 = 120,000 km²
    35 = 250,000 km²
    36 = 500,000 km²
    37 = 1,000,000 km²
    38 = 2,000,000 km²
    39 = 4,000,000 km²
    40 = 8,000,000 km²

That probably goes further than necessary. 8,000,000 square km is slightly smaller than the USA – including Alaska and Hawaii. It’s slightly larger than Australia, which is roughly the same size as the continental US.

It’s important to bear in mind the “at 2m” qualifier. At 1m, the target is effectively twice the size – a +1 modifier. At 0.5m – effectively point-blank – it’s twice that again, or a +2 modifier.

So how about at 200m?

That’s a range modifier of 23. The size at 2m is +0. So you might expect that we’re talking a modifier of 23. But the range modifier is greater than the size modifier – by 23 – so we shift 6 rows down the size table, effectively increasing the size of the target. So the modifier is actually 17.

Time

Time as a modifier has multiple functions. It can be used to determine the penalty for rushing through a task (i.e. taking less time than is required to do the job with care, accuracy, and precision, in the GM’s opinion), or a bonus for taking extra time over and above the minimum requirement, or it can be used to define the modifier for aiming based on how long you aim – and capped by the type of weapon.

That last is critical, because none of the others give us any clue as to the base or the scale.

Most people point at the target and shoot. Taking a second or two to aim with a pistol greatly increases the accuracy, but more time after that has a negligible effect. Taking five or ten seconds to aim a rifle will markedly improve the accuracy, but not much more. A sniper can take five or ten minutes or more to aim, and then spends time waiting for the target to move into the optimum position, so that the hit, when it comes, is as effective as possible. He might also spend as much as half an hour letting his eyes adjust to the natural light, but that’s not time spent aiming.

The Sniper Record Revisited

That brings us back to that record kill-shot by a sniper, which is a key metric for determining what the time modifier for “5 to 10 minutes” is. We want our hypothetical sniper to have a -9 on 3d6 chance.

There’s a 3500m range, which gives a 35 range modifier.

For a kill shot, we could be talking chest, but head/neck seems more likely. So there’s a base size modifier of -2. So that’s a difference of 37. And that’s an adjustment of +7 to the target size, so the total modifier so far is 30. Let’s assume that the telescopic sights are worth another -5, and that the sniper has a +3 from stats and +4 from skill – that’s quite a high score.

    Roll required = skill + modifiers, or less.

    -9 = 3 (stat) +4 (skill) -30 (range and size) + 5 (sights) + Aim, which is the one modifier that we don’t know.

    3+4+5-30=-18. So Aim-18 = -9, or Aim = 18-9 = 9.

If we can identify one other value on the table, we can work out a progression. And we have one – spending 0 time aiming has to be the lowest entry on the table, because you can’t spend less than that. So “0 time” = -5.

But “0 time” is meaningless, because 0 multiplied by a number is always zero. What that actually means is “less than 1 second” has a value of -5 – and therefore, “1 second” has a value of -4.

The difference between -4 and 9 is 13. That means that whatever the progression is, 12 lots of it turns 1 second into 5-10 minutes, i.e. 300-600 seconds.

That’s a big difference. But let’s work out those values and then pick something convenient in between.

    1 times x^12 = 300
    log (x^12) = log (300)
    12 log (x) = log (300)
    log (x) = log(300) / 12 = 2.477 / 12 = 0.2064.
    x = 10^0.2064 = 1.6085.

    1 times x^12 = 600
    log (x^12) = log (600)
    12 log (x) = log (600)
    log (x) = log(600) / 12 = 2.77815 / 12 = 0.2315
    x = 10^0.2315 = 1.7041.
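The two bracketing values above are just 12th roots, which makes them easy to verify:

```python
# Per-step factor that turns 1 second into 300s (or 600s) over 12 steps,
# as in the working above.
low = 300 ** (1 / 12)
high = 600 ** (1 / 12)
print(round(low, 3), round(high, 3))  # → 1.608 1.704
```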

Anything in between those values will work just fine. Given that this was a record, we can assume that the value is closer to the high end, requiring more time to take the shot.

    +1: 1.7×1 = 1.7.
    +2: 1.7×1.7 = 2.89
    +3: 2.89×1.7 = 4.93
    +4: 4.93×1.7 = 8.35
    +5: 8.35×1.7 = 14.19.

That’s not looking too neat, but there are a couple of alternatives that leap out: ×5 for every +3, or ×10 every +4.

    x^3 = 5
    3 log (x) = log (5) = 0.69897
    log (x) = 0.69897 / 3 = 0.23299
    x = 1.71 – a fraction outside our acceptable range.

    x^4 = 10
    4 log (x) = log (10) = 1
    log (x) = 1 / 4 = 0.25
    x = 1.7782

…which is even further outside the acceptable range. Obviously, adjusting any of the factor results upwards gets us in trouble. The third-best choice is ×8 every +4:

    x^4 = 8
    4 log (x) = log (8) = 0.90309
    log (x) = 0.90309 / 4 = 0.22577
    x = 1.6818

That’s not an especially pretty number, either. Perhaps this approach should be scrapped, keeping only the identified value of, say, ×500 at +9, and filling in the rest through some other function of the table.

Spending Extra Time on a task

One of the applications of this list is to determine a bonus for spending extra time on something, and a penalty for rushing a task. Base time required is always +0.

It strikes me as appropriate that +1 should result from spending an extra 50% of the time required, and +2 from spending twice the base time. +3 could result from spending 4× the base time required, +4 from spending 8 times the base time. That gives us a number that’s very close to the 1.7-factor we were looking for. And base time ×15 at +5 sets up a neat progression. So the table would be:

    +0 = ×1
    +1 = ×1.5
    +2 = ×2
    +3 = ×4
    +4 = ×8
    +5 = ×15
    +6 = ×20
    +7 = ×40
    +8 = ×80
    +9 = ×150
    +10= ×200
    +11 = ×400
    +12 = ×800

…but that’s not going up fast enough to give us ×500 at +9.

So, keeping the lower values, let’s try again:

    +0 = ×1
    +1 = ×1.5
    +2 = ×2
    +3 = ×4
    +4 = ×10
    +5 = ×15
    +6 = ×25
    +7 = ×50
    +8 = ×100
    +9 = ×200

… still not enough.

    +0 = ×1
    +1 = ×1.5
    +2 = ×2
    +3 = ×5
    +4 = ×10
    +5 = ×20
    +6 = ×50
    +7 = ×100
    +8 = ×200
    +9 = ×500

…bingo!

    +10 = ×1000
    +11 = ×2000
    +12 = ×5000
    +13 = ×10,000

… which is probably as far as I need to take the table.

And what of going the other way?

    +0 = ×1
    -1 = 1 × 5/10 = 0.5
    -2 = 0.5 × 2/5 = 0.2
    -3 = 0.2 × 1.5/2 = 0.15
    -4 = 1 × 1/10 = 0.1
    -5 = less than 0.1
    -5 = < 0.1

That defines “the time it takes to point at the target and pull the trigger” as 0.1 seconds, and “the time it takes to pull the trigger indiscriminately” as something less than 0.1 seconds.

These have the opposite problem – they seem to decline too quickly. According to Wikipedia,

Mean Reaction Time for college-age individuals is about 160 milliseconds to detect an auditory stimulus, and approximately 190 milliseconds to detect visual stimulus. The mean reaction times for sprinters at the Beijing Olympics were 166 ms for males and 189 ms for females, but in one out of 1,000 starts they can achieve 109 ms and 121 ms, respectively.

109 ms is 109 thousandths of a second, or 0.109 seconds. Close enough to the 0.1 already in place, but with a -5 modifier. That gives me room for an extra entry.

    +0 = ×1
    -1 = ×0.75
    -2 = ×0.6
    -3 = ×0.4
    -4 = ×0.2
    -5 = ×0.1 or less

That works for me, time to move on.
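Once settled, the whole spending-time column reduces to a tiny lookup. Here's a minimal sketch in Python (the function and names are mine, not part of the system); from +2 upward the column is just the 2-5-10 "preferred number" ladder, gaining a factor of 10 every three steps:

```python
# Multipliers below +0 don't follow the 2-5-10 ladder, so they're a lookup.
RUSH = {-1: 0.75, -2: 0.6, -3: 0.4, -4: 0.2, -5: 0.1}

def time_multiplier(modifier):
    """Base-time multiplier for a time modifier from -5 to +13."""
    if modifier < 0:
        return RUSH[max(modifier, -5)]  # -5 is 'x0.1 or less'
    if modifier == 0:
        return 1
    if modifier == 1:
        return 1.5
    # From +2 up: repeat the 2-5-10 pattern, x10 every three steps.
    step = modifier - 2
    return [2, 5, 10][step % 3] * 10 ** (step // 3)

print(time_multiplier(9))   # 500 -- the target value identified earlier
```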

Precision

Doing delicate, precise work can be just as difficult as a physically challenging task requiring great strength or agility. Some people can never do such work; others are capable only by spending a great amount of time on the task. The ability to perform time-critical precision tasks, on a very small scale, under pressure, is pretty rare. Some electronics techs might have it; some surgeons have it, especially neurosurgeons; watchmakers have it to some degree; bomb disposal techs often have it in some measure; artists often have some capacity in this direction.

In practical terms, this is a two-fold issue: the delicacy of the task (based on the size of the target) vs the visual amplification or zoom factor and any tools that scale movement down. Zoom factor makes it easier to see exactly what you are doing, movement scaling means that a large movement in the real world becomes a small movement in dealing with the target.

In game system terms, this is all about setting the difficulty of a task. Some of these factors are under the control of the PCs insofar as they can increase the magnification of whatever microscope technology they are using, or acquire better technology if it’s available. Both of those factors have limits according to the technology of the era, and those limits define the limits of what is possible – with skill, natural talent, training, and innate artistry (i.e. skill level) having to bridge the gap.

That means that this will actually be three columns in the finished table. Assuming that zoom factor and movement scaling can use the same column, that can be simplified to two: Delicacy and Scaling.

Delicacy

This is similar to the range table, but moving in the other direction – a smaller size gives a higher difficulty.

So the place to start is with the range column that I worked out earlier. The first few entries will match the negative values on that column; from there, it should be possible to take the reciprocal of entries from the range table.

So my starting point is:

    RANGE:
    0 = 1m
    1 = 1.3m
    2 = 1.6m
    3 = 2m
    4 = 2.5m
    5 = 3.2m
    6 = 4m
    7 = 5m
    8 = 6.4m
    9 = 8m
    10 = 10m
    11 = 13m
    12 = 16m
    13 = 20m

… and so on.
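That column is, near enough, a tenth-root-of-ten progression – each +10 multiplies the range by exactly 10. The generating curve can be sketched as follows (this is my reconstruction; the table above rounds a few entries to friendlier values, such as 6.4 rather than 6.3):

```python
# Each step multiplies range by 10^(1/10), about 1.2589,
# so every ten steps is exactly a factor of 10.
STEP = 10 ** (1 / 10)

for n in range(14):
    print(f"{n:2d} = {STEP ** n:.2f}m")
```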

Two observations strike me immediately: first, that I didn’t work out any negative modifier entries earlier, and second, that this progression rate is very small. Too small to be useful in this way, in fact; most modifiers would be so large that mental arithmetic would be hard-put to cope (that’s another reason why I’ve been trying to keep the number of entries in the table small).

So plan “A” is a washout. Back to square one.

Carpenters etc. have to be accurate to within a mm in most tasks. Many amateur mistakes come from not being sufficiently precise – my dad has a setup on his workbench that allows for the thickness of his pencil, because that’s between 0.5 and 1mm thick – and if you cut on the wrong side of that line, you’re in trouble. He also has to allow for the thickness of the cutting blade, especially when using a disk cutter; that can be about 1.5mm thick. Again, it’s all about making sure that whatever is left when you finish cutting is exactly what you want.

So I want 1mm to have a small modifier, enough to distinguish between those with some experience or skill in carpentry and those who don’t – between him and me, in other words!

I think that a modifier of 2 would be about right.

At the same time, I remember some of the very rough-and-ready “furniture” that we knocked up at our field camp when I worked for the NSW Dept of Agriculture, essentially using a chainsaw and wire. Okay, there might have been a drill and some bolts on some of it, too. Anything within about 5mm was good enough. Instead of chairs with four legs, we used three-legged designs, because they won’t rock if one of the legs is a little short – it just means that the table or chair slopes a little. For chairs, in fact, we simply sliced a section out of a tree and left it to air-dry – a ‘one leg’ solution!

At the same time, though, I’ve known people who couldn’t do that, more because they had never thought about the practicalities involved. So that’s a modifier of 1.

I’m something of an artist, and have been for decades. I have done my best to adapt those skills to a digital medium, but have in fact ended up developing a whole new set of skills – at least to the point where ten or 15 minutes of effort produced the “dropping dice” illustration above. But there are a huge number of things that I can do with pencil and ink that I would have extreme difficulty replicating in an electronic format.

‘Ink Of The Squid’ illustration from Assassin’s Amulet, with enlargements.

When I was doing the artwork for Assassin’s Amulet – this piece, for example – I did sketches at double-size in pencil, went over them (correcting as I went) with a 0.5mm marker, scanned them, and then “painted” over the top of them. Finally, the scanned “underlying image” was deleted when I was satisfied.

With such manual tools, I have a resolution of about 1/10th of a mm – which is to say, if a pencil stroke is 0.1mm away from where I want it to be, I can see the error. Well, I used to be able to – I haven’t done anything like this for 6 years, now!

That didn’t mean that the pencil or pen went where I wanted it to go, every time – just that I could detect it when it didn’t.

When doing the digital work, I also worked much larger than the final scale – the “raw image” of this work was about 2400×2400 pixels, as I recall. The image shown here is about 450 pixels wide, the one that actually appears in Assassin’s Amulet is more like 600 pixels wide – so that’s a 4x zoom. But to do some of the detail work – the ribs on the end of the bottle, the suckers and so on – I would have zoomed in perhaps another 500%. So 2400×5=12000, or about 20x zoom.

It meant that small errors – that might not have even been visible to others – became vanishingly small, enabling me to work at absolutely top speed. I was doing 3-5 of these illustrations a night while working on the text and maintaining Campaign Mastery during the day – giving some idea of the speed that was possible from these working practices.

Would I have liked more time? Absolutely. I would love to have been able to linger over one of these for a whole day or two – a week in some cases. But time and financial pressures meant that I had to churn them out at top speed. (I did the best I could – deliberately pairing complex pictures like “Ink Of The Squid” with a couple of simpler ones, so that I could lavish some more attention on it. But it was all compromised to some extent by practicalities.)

So, this illustrates both the zoom effect and the mechanical scaling effect (both of which are to be dealt with shortly), and gives another data point on the scale: 0.1mm. I don’t think the modifier that goes with that scale should be much more than the 2 I’ve already allocated to 1mm, so let’s make it a 3.

But that brings me to the question of progression. There is a clear pattern beginning to emerge, but I’m concerned that it won’t progress fast enough to give workable modifiers for really small operations. At the same time, I want to be sure that these are only possible if you have both the skill and the right equipment. Choosing a non-linear progression should solve these problems.

So let’s start with what we’ve got and extend the table from there, and see how it looks:

    -2 = 1m (FM radio wavelength – included for completeness)
    +0 = 1cm (microwave wavelength)
    +1 = 5mm (ants, seeds, rice grains)
    +2 = 1mm (pixels, grains of sand or salt, furniture tolerance)
    +3 = 0.1 mm = 100µm (width of a human hair, limit of unaided vision)
    +4 = 0.05mm = 50µm (thickness of 1 sheet of paper, human skin cell = 35µm)
    +5 = 0.01mm = 10µm (width of a silk fiber, white blood cell, 1971 Transistors, infrared wavelength)
    +6 = 0.005mm = 5µm (cell nucleus, X chromosome, red blood cell)
    +7 = 1µm (1 micron) (Y chromosome, clay particle, E. coli)
    +8 = 0.5µm = 500 nm (largest virus, red wavelength = 750)
    +9 = 0.1µm = 100 nm (limit optical microscopes, HIV, violet wavelength = 400)
    +10 = 0.05µm = 50nm (Hep B virus, extreme ultraviolet wavelength)
    +11 = 0.01µm = 10nm (2017 Transistors = 25nm)
    +12 = 0.005µm = 5nm (cell membrane, DNA)
    +13 = 1 nm = 10 Angstroms (buckyball)
    +14 = 0.5 nm = 5 Angstroms (glucose molecule, cesium atom, x-ray wavelength)
    +15 = 0.1 nm = 1 Angstrom = 100 picometers (carbon atom = 340, water molecule = 280)
    +16 = 0.05 nm = 0.5 Angstrom = 50 picometers (limit electron microscopes)
    +17 = 0.01 nm = 0.1 Angstrom = 10 picometers (Hydrogen atom = 31, Helium = 25)
    +18 = 0.005nm = 0.05 Angstrom = 5 picometers
    +19 = 1 picometer (gamma ray wavelength)
    +20 = 0.5 picometer
    +21 = 0.01 picometer (uranium nucleus = 0.015 picometers)
    +22 = 5 femtometers
    +23 = 1 femtometer (proton, neutron, helium nucleus = 3)
    +24 = 500 attometers
    +25 = 100 attometers (smallest confirmed objects in existence)

That’s not bad!

Credit where it’s due: the examples are from The Scale Of The Universe 2 by Cary & Michael Huang. Have a play around with their interactive app, then get their email link from this page to thank them!
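For table use, the delicacy column boils down to "find the first row at least as fine as the feature you're working on". A minimal Python sketch, assuming the GM rounds toward the harder row when a size falls between entries (the function and table names are mine):

```python
# (modifier, scale in metres) rows from the delicacy table above.
DELICACY = [
    (0, 1e-2), (1, 5e-3), (2, 1e-3), (3, 1e-4), (4, 5e-5),
    (5, 1e-5), (6, 5e-6), (7, 1e-6), (8, 5e-7), (9, 1e-7),
    (10, 5e-8), (11, 1e-8), (12, 5e-9), (13, 1e-9), (14, 5e-10),
    (15, 1e-10), (16, 5e-11), (17, 1e-11), (18, 5e-12), (19, 1e-12),
    (20, 5e-13), (21, 1e-14), (22, 5e-15), (23, 1e-15), (24, 5e-16),
    (25, 1e-16),
]

def delicacy_modifier(size_m):
    """Modifier for working at a given feature size (in metres)."""
    for modifier, scale in DELICACY:
        if scale <= size_m:
            return modifier
    # Finer than anything on the table -- treat as the extreme entry.
    return DELICACY[-1][0]

print(delicacy_modifier(1e-3))  # 2: 1mm, furniture tolerance
print(delicacy_modifier(1e-7))  # 9: 100nm, limit of optical microscopes
```

A 0.3mm feature, for example, falls between the +2 and +3 rows and gets the harder +3.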

Scaling

The above also makes the scaling pretty clear. Because scaling modifiers are to be half the delicacy scale (leaving the other half for movement scaling technology), we get:

    +0 = ×1
    +1 = ×10 (magnifying glass, jeweler’s loupe)
    +2 = ×100
    +3 = ×1000
    +4 = ×10k
    +5 = ×100k (limit optical microscopes)
    +6 = ×1M
    +7 = ×10M
    +8 = ×100M
    +9 = ×1000M (limit, electron microscopes)
    +10 = ×10G or more (sci-fi only)

Movement scaling is relatively new technology, though it was always possible to a limited extent mechanically. In fact, a lot of tools are intended to scale movement in a very limited way – teeny-tiny screws and screwdrivers, for example. These days, robotized tools controlled through a computer let us manipulate objects as small as 50nm or so, and we have processes that let us design and manufacture tangible objects as small as 10nm (the component parts of a 25nm transistor, for example).

Nanotechnology machines are the obvious next stage of development, the cutting edge. Again, we haven’t devised tools to scale our own movement that small, instead we have designed processes that create the components. We are only just getting to the point of being able to assemble these components – that will involve more processes. Fraser Stoddart, Bernard Feringa and Jean-Pierre Sauvage shared the 2016 Nobel prize for their work in the field, especially the creation of a “nanocar”. But tracking down a size for these devices has proven incredibly hard – the best that I’ve been able to manage quotes “a few billionths of a meter”, which is around the +7 or +8 mark on the scale given above. It was just as difficult trying to find a freely-licensed image to illustrate it – the best image I was able to find is shown in an article on The Verge but the terms of usage don’t leave me any the wiser as to who the copyright owner is. So the best I can do is provide the link and let you check it out for yourselves.

So, what we have is the following:

    +0 = ×1 precision tools
    +1 = ×10 high-quality precision manual tools
    +2 = ×100 limit precision manual tools
    +3 = ×1000 primitive process-based designer tools, computerized scaling tools
    +4 = ×10k generation-2 process-based tools, computerized scaling tools
    +5 = ×100k generation-3 process-based tools, light/laser-based scaling tools
    +6 = ×1M generation-4 process-based tools, energy-beam based scaling tools
    +7 = ×10M virus-based nanotechnology, generation-5 process-based tools
    +8 = ×100M true nanomachines, the nanocar
    +9 = ×1000M process-based chemical tools (buckyballs)
    +10 = ×10G or more (sci-fi only)

Each scale of tools permits – in theory – the construction of parts of roughly the size of the tool, and the assembly of those parts into a “machine” one scale larger. So tools the scale of the nanocar would permit the construction of virus-based nanotechnology.

Before I wrap up this section, let’s run a realism check: Designing and creating a custom computer chip at the limits of known precision manufacture in 2017:

    Precision modifier +11, less optical tools +5, less energy-beam based scaling tools +5, plus the design difficulty, gives an overall difficulty of 1 more than the design difficulty (11 − 5 − 5 = 1).

    So, if the GM sets a design difficulty of 3, the manufacturing difficulty will be 4. If the character has a skill of 3 and +3 from stats – both reasonable for an expert in the field – he will have to roll 6 or less on 3d6+3, which is all but impossible (only a roll of triple 1s will do it). So we add a d6 to improve the roll required: 8 or less on 4d6+3, which is the same as 5 or less on 4d6. That’s a 0.39% chance of success, or about 1 in 260. And the manufacture will be even harder – 4 or less on 4d6, a 0.08% chance, or about 1 in 1300. But manufacturers will typically put 1000 or more chips in a single manufacturing batch – so, if they can get 1300 on a sheet, they are likely to get 1 fully-functional chip from the process.

    Compare that with a genius in the field with skill 5 and stats +4: that’s 9 or less on 3d6+3 for the design, and 3d6+4 for the manufacture: 9.26% chance of success for the design and 4.63% chance of success in the manufacture.

    And both of those test-cases ignore the potential for spending extra time to get the design and manufacturing right. But the results I did get all sound reasonable!
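Those percentages are easy to verify by brute force – enumerate every possible roll and count the successes. A short Python check (the function name is mine):

```python
from itertools import product

def chance_at_most(target, dice):
    """Chance that `dice` six-sided dice total `target` or less."""
    rolls = list(product(range(1, 7), repeat=dice))
    return sum(1 for roll in rolls if sum(roll) <= target) / len(rolls)

# Expert: 8 or less on 4d6+3 is 5 or less on 4d6; manufacture needs 4 or less.
print(f"{chance_at_most(5, 4):.2%}")  # 0.39% design
print(f"{chance_at_most(4, 4):.2%}")  # 0.08% manufacture
# Genius: 9 or less on 3d6+3 (design) and on 3d6+4 (manufacture).
print(f"{chance_at_most(6, 3):.2%}")  # 9.26% design
print(f"{chance_at_most(5, 3):.2%}")  # 4.63% manufacture
```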

Assistance

It also brings up another point that I don’t think I’ve addressed previously: how to handle multiple people working in teams. Going it alone might work for geniuses and mavericks, but most R&D is done by teams of experts.

This is to be based on the non-linear size adjustment, enabling me to re-use the same table entry.

Number of assistants of skill 1 lower than the lead operator required for a given bonus

    +1 = 1
    +2 = 2-3
    +3 = 4-7
    +4 = 8-12
    +5 = 13-18
    +6 = 19-25
    +7 = 26-33
    +8 = 34-42
    +9 = 43-52
    +10 = 53-63
    +11 = 64-75
    +12 = 76-88
    +13 = 89-104

For assistants of skill 2 lower, drop down one count. So 2-3 such assistants give +1, and so on.

Even unskilled assistants can be useful, taking care of the daily routine, for example. If we use “+3 skill” to signify “expert”, then laymen (by definition, those with +0 in the skill) have three ranks less, so 8-12 such assistants are still worth +1.

One expert, leading a team of half a dozen skilled technicians and another half-dozen trainees, and supported by a dozen unskilled people doing mundane tasks, is a reasonable small engineering firm in this sort of industry.

+3 from the expert, +3 from his stats, +2 from extra time, +3 from skilled assistants, +1 from the trainees, and +1 from the support staff, gives 14/- on 3d6+3 – a 62.5% chance of success. If the normal design process takes 1 month, that means that a first attempt will be ready in 3 months, and a second (if necessary) three months after that, increasing the chance of success in design to almost 86%. A third attempt is close to 95% certainty of success; a fourth gets that up to about 98%. A year spent in design and another in manufacture gives you that cutting-edge computer chip almost every time. Most experts would be secure enough in their ability to deliver taking a 2-year contract of this sort.

And all of those calculations assume that nothing is learned from the failures – that it’s all trial-and-error until you get it right; most design/engineering firms wouldn’t work that way. As a GM, I’d rule that investing a month in analyzing each failure would reasonably be worth another +1. So you could have four attempts totaling a 98% chance of success, or three of them – the first at 14/-, the second at 15/-, and the third at 16/-. Those are a 37.5% chance of failure, a 25.93% chance of failure, and a 16.2% chance of failure, respectively – a 98.4% chance of success, all told. And, if a fourth attempt was still needed, that would be at a 9.26% chance of failure – about a 99.85% chance of success overall, delivering the design 3 months behind schedule, time that you might well be able to make up on the manufacturing side.
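The compound-failure arithmetic can be checked the same brute-force way: multiply the failure chances of the successive attempts together. A sketch in Python (all names mine):

```python
from itertools import product

def chance_at_most_3d6(target):
    """Chance that 3d6 totals `target` or less."""
    rolls = list(product(range(1, 7), repeat=3))
    return sum(1 for roll in rolls if sum(roll) <= target) / len(rolls)

# Attempts at 14/-, 15/-, 16/-, 17/- on 3d6+3, i.e. 3d6 <= 11, 12, 13, 14.
failure = 1.0
for raw_target in (11, 12, 13, 14):
    failure *= 1 - chance_at_most_3d6(raw_target)
    print(f"cumulative success: {1 - failure:.2%}")
```

The four printed lines climb from 62.50% after the first attempt through 98.42% after the third to 99.85% after the fourth.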

All of which sounds like it works to me.

To be continued…

So, the core table has now been designed, but I’m out of time for compiling it, and for looking at the other unanswered questions, like how combat will work. That means there will need to be one more of these posts, probably in a few weeks’ time.


