A bit of a departure from the usual today, with an article that is only indirectly game-related. Every now and then, you have to let your imagination run wild or it gets soft and flabby… normal service will resume on Thursday, this was just something that I had to get out of my system. Hopefully, it will still make for some fascinating reading…

A thought occurred to me a little while back, while thinking about Heisenberg’s Uncertainty Principle, and some of the unexplored implications that it contains.

Consider that as an object approaches the speed of light, its apparent length shrinks, eventually reaching the point at which that length is less than the Uncertainty limit. Once this point is reached, we can no longer be certain of exactly where the object is by an amount equal to the Uncertainty limit, and can speak of its position only as a statement of probabilities, just as we do for subatomic particles smaller than this limit.

This error must lie in the direction of the object’s motion, and hence its true position is either forward or back of the apparent position. But the object’s position at any given instant of time is used to determine its velocity – if the object’s Uncertainty places its true position forward of its apparent position at the instant of measurement, that means that its velocity has increased by a minuscule amount (distance over time). Eventually, if the object continues to accelerate by however minuscule an amount each second, it must get close enough to the speed of light that this increase in velocity pushes the total past the speed of light – which is, supposedly, an impossibility.
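To put some (entirely made-up) numbers against that argument, here is a minimal Python sketch. It uses the standard Lorentz contraction formula, but the one-metre object, the stand-in ‘Uncertainty limit’ dx and the measurement interval dt are illustrative placeholders only, chosen just to compare the size of the apparent velocity bump with how far short of light-speed the object still is when its contracted length reaches that scale.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gap_below_c_when_length_reaches(rest_length: float, dx: float) -> float:
    """How far below c an object is when its Lorentz-contracted length equals dx.

    Solving rest_length * sqrt(1 - (v/c)^2) = dx for v gives
    c - v = c * (1 - sqrt(1 - (dx/rest_length)^2));
    for tiny dx/rest_length the series form c * (dx/rest_length)^2 / 2 is used
    instead, to dodge floating-point cancellation.
    """
    r = dx / rest_length
    if r < 1e-8:
        return C * r * r / 2.0
    return C * (1.0 - math.sqrt(1.0 - r * r))

# Illustrative placeholder values only -- not physical constants.
rest_length = 1.0   # a 1 m object
dx = 1e-15          # stand-in for the article's 'Uncertainty limit'
dt = 1e-9           # stand-in for the measurement interval, in seconds

print(f"gap below c when the length shrinks to dx: "
      f"{gap_below_c_when_length_reaches(rest_length, dx):.3e} m/s")
# The article's apparent 'velocity bump': a positional fuzz of dx read over dt.
print(f"apparent velocity bump dx/dt: {dx / dt:.3e} m/s")
```

With these made-up numbers, the apparent bump (1e-06 m/s) utterly dwarfs the remaining gap to light-speed (about 1.5e-22 m/s) – which is exactly the contradiction described above.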

There are only three ways out of this conundrum:

  • Either the Uncertainty ‘constant’ itself also shrinks with velocity, so that the amount of possible gain in apparent velocity is always less than the amount required to achieve the speed of light; or,
  • The other quantity in our calculation of velocity, time, is also subject to an Uncertainty principle similar to that of space; or
  • Faster than light travel is a valid physical phenomenon.

As a science-fiction enthusiast, I know which of these three I would like to be true, but let’s consider the alternatives for a moment.

Certainty at speed

If the Uncertainty constant shrinks with velocity in the direction of travel only, that would effectively mean that Certainty was achieved at the same instant as the object achieved the speed of light, i.e. that the quantum Uncertainty reaches a value of zero at the speed of light. I am quite sure that if this were the case, numerous particle acceleration experiments would have revealed it long ago, because as a particle accelerated to close to the speed of light, this effect would have thrown off the timing necessary to further accelerate the particle. Nevertheless, perhaps such effects have gone unnoticed.

If this were the case, how could the flawed designs have functioned? It could only have been a consequence of brute-force methods – pumping enough additional (wasted) energy into the process of accelerating the particle to compensate. The implication is that particle accelerators can be much smaller and more powerful than our best designs to date; this has massive implications for weapons technology, if true, as well as for fusion research and many other fields of high-energy engineering.

Uncertainty at any speed

If both space and time are subject to some sort of quantum Uncertainty, however, the effects are altogether more interesting, because every other phenomenon that physics can measure derives from one or both of these. A precise determination of gravitational attraction requires the precise measurement of distances between the two masses. Mass itself is measured by means of the force required to accelerate it, i.e. the acceleration due to gravity on earth, which involves both space and time. The Uncertainty principle would effectively impose a limit of resolution on all physical measurements, whether micro- or macroscopic. Of course, in the macro world, the degree of Uncertainty would be so small that it would be swamped by other margins of error, but no matter how instrumentation was improved, this limit of certainty would exist.

Computer Chips

And yet… there are macroscopic engineering products which are comprised of such sensitive components that quantum effects have to be taken into account in their design. Modern integrated circuits, for example – I remember reading that this was a subject of engineering concern when the Pentium was being designed. If computer chips are sensitive enough that spatial quantum effects are factors, surely they are also small enough, fast enough, to be subject to temporal Uncertainty as well? The implication is that there is a fundamental limit to computer clock speeds, and to the capacities of computer processors.

Fiber Optics

Another avenue of engineering worth considering in this context is fiber-optic communications – specifically, the precision of frequency of the laser beams used within such communications technologies. The distance from one peak to the next defines the wavelength of the energy beam; but any distance is subject to the Uncertainty principle, and the wavelengths in question are really, really small – more than small enough for Uncertainty to be a factor, smearing the frequency across a narrow band of frequencies.

And yet, I was once told, such a smearing effect was very real, and forced a limit on the distance that fiber optics could carry a recognizable stream of data. Some clever engineering by Australian researchers overcame the effect, which was due to the way the beam bounced off the walls of the glass ‘tube’ carrying the data signal. Refinements in fiber-optic engineering – glass whose density, and hence the speed of light it permits, varies across its cross-section – eliminated the smearing and gave us the capability of long-distance fiber-optic connections. Did this engineering also unwittingly correct for any temporal Uncertainty, or is that another limitation that can be overcome with some clever engineering, vastly accelerating the rate of data communications?

Particle Accelerators – again

If there is a temporal Uncertainty factor, doesn’t this also bring us back to the same problems with particle accelerators, and the same potential for improvements in efficiency? Getting the timing right as a particle blasts past the electromagnets used to accelerate it is fundamental to the design and operation of such devices, after all.

Singularities are fuzzy

Another thought came to me as I was musing over all of the above, prior to writing this article. A singularity is a point in space at which physics breaks down, or appears to do so within the limits of our understanding due to the incomprehensibility of infinities. They exist at the heart of black holes, but Stephen Hawking showed that microscopic black holes could be formed, and could evaporate through pair production (another consequence of the Uncertainty principle), leaving the naked singularity behind.

How can you have a point in space when all distances are uncertain by the amount of Heisenberg’s Uncertainty constant? Just as the position of a subatomic particle cannot be precisely located, only stated as a probability function, surely the same must be true of a singularity, which is the ultimate expression of smallness?

And, if the position of a singularity can only be stated as a probability, then the balance of that probability must describe the likelihood that the singularity’s properties do NOT prevail at position X relative to the centre of probability of the singularity. Space in the region around a singularity must therefore be – until the potential quantum states collapse into a solution – a strange mixture of normal physics and (for lack of a better term) abnormal singularity-physics, just as Schrödinger’s Cat is both alive and dead.

A singularity, therefore, cannot be a point except when its potential quantum states collapse; it must, most of the time, be a region one Heisenberg Uncertainty Limit in radius.

The Uncertainty Barrier

This in turn, gave rise to another thought: Is Heisenberg’s Uncertainty Constant the dividing line between the macroscopic world and the world of Quantum effects? I’m not entirely sure where to go with this conjecture, but there has to be a dividing line somewhere between the two – a scale at which Quantum effects are no longer negligible. Or is it more that beyond this distance, the probability that Quantum effects are negligible exceeds the probability that they are not?

Time As A Vector

In my superhero game physics, one of the fundamental concepts of reality is that time is a vector in a three-dimensional temporal ‘space’. All human measurements of time are measurements of the rate of change of some physical phenomenon within a universe, effectively the equivalent of the scalar length along that timeline. This provides an answer to the question that arises from the many-worlds theory of quantum mechanics, “where are these alternative universes?”, by separating them in temporal ‘space’ from the pre-collapse universe from which they have branched. A universe becomes a tree through time, each branch representing a different quantum outcome where a probability function has collapsed into one specific possible outcome, with all unresolved quantum probabilities preserved. Because a timeline can be infinitely small, any number of space-times can lie alongside each other in this temporal volume. Because one quantum difference can interact with other quantum possibilities, space-times branching from a given event tend to separate from each other – in other words, the temporal vectors of each resulting “world” are different with respect to each other.

This gets interesting when the possibility of a temporal Uncertainty factor is introduced, because what was a perfect line becomes fuzzy, just as a line in space does under the Uncertainty principle. For a period of time after branching from a quantum event, each resulting timeline would overlap with the others from which it branched, and all outcomes would momentarily exist within the one timeline. There would be a delay in the quantum collapse, in other words, equal in size to the temporal Uncertainty.

This gets really interesting when pair production is considered, because it implies that pair production that occurs during this delay can result in one or both of the resulting particles manifesting in a timeline other than the one in which the pair production took place, effectively crossing the Uncertainty threshold from one timeline to another. In effect, energy is siphoned from one timeline into another.

This presents the potential for a practical application: unlimited free energy, derived by inducing a quantum collapse virtually simultaneous with inducing pair production, then converting the resulting added mass into energy. Anything that can happen at a quantum scale, if repeated often enough in a sufficiently-confined space, becomes macroscopically significant. Free energy from space – sounds like something Nikola Tesla might have dreamed up!

The Edge Of Reality

Something that I find amusing is that the existence of temporal Uncertainty was built into my game physics inadvertently, from day one, by virtue of the explanation within that physics of Heisenberg Uncertainty. The theory runs like this: the ‘edge’ of the universe is itself subject to Heisenberg Uncertainty, therefore pair production means that some particles must manifest outside the space-time that created them, loose within the temporal ‘space’. The resulting loss of energy means that the universe itself shrinks a little, compressing all the distances within by an incredibly minute quantity.

It follows that other loose particles must impact the dimensional boundary, where they cease to be virtual particles in hyperspace and become real, inflating the universe a little.

The net effect is that the ‘edge’ of the universe is constantly quivering like jello, and the ratio of maximum possible growth or shrinkage over average size is one-half of the Uncertainty total. Of course, the term ‘edge’ is a little question-begging here; because we are talking about the boundary in the three temporal dimensions, anywhere in the physical space is equally at the ‘edge’.

But each of these incoming particles interacts with particles and energy that are already present, effectively altering the temporal vector of the combined whole. Just by a microscopic fraction, given their relative masses, but by a measurable amount nevertheless. So the vector itself must also be quivering ever so slightly – and that’s a temporal Uncertainty.

The Uncertainty Constant – the fundamental measure of length?

If the Uncertainty Constant is the limit of resolution that is theoretically possible without disturbing that which is being measured, does that make it the fundamental smallest length that can exist in any realistic model of the universe? There is an obvious argument to be made in favor of a ‘yes’ answer. Beneath this limit, according to the conjectures made earlier, we are in the quantum world, where position (and everything else) can only be stated as a probability; measuring any more precisely means interacting with the object being measured to such an extent that other key values become forever unknowable and – quite probably – the property being measured is itself altered. We may know where something was, but that tells us nothing beyond some small generalities about where something now is.

For the sake of convenience, however, I’m going to set the minimum unit of length at one-fifth of Heisenberg Uncertainty, because doing so lets me demonstrate something cool (the exact numeric value doesn’t matter).

A measurement that has to take Uncertainty into account is more accurately stated as measured length ± Uncertainty. But that can be rephrased as minimum + X% of Uncertainty, where X averages out at 50%.

And that means that we can rewrite our measurement, in our theoretical subatomic units, as (minimum – 1) + d6. (Actually, it’s minimum + (d6 – 1), but I’ve simplified it.)

Now, if we take multiple readings of the length of an object, the minimum won’t vary – it will be a constant. Only the actual value will change. Giving us d6+C for our measurement. Look familiar? C is a very large number in this case, but the general statement should be recognized by every gamer out there.

Similarly, any time measurement can be written in the same format – only the units change. The Uncertainty limit would also present an absolute limit of resolution in time.

And that means that speeds – which are distance over time – can be written (d6 + Cd) / (d6 + Ct).
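To see that notation in action, here is a minimal Python sketch; the constants C_D and C_T and the number of readings are arbitrary illustrative values, not anything derived from real physics.

```python
import random

def d6() -> int:
    """A single six-sided die roll."""
    return random.randint(1, 6)

# Illustrative constants only: the fixed part of a distance and of a duration,
# expressed in the article's made-up sub-Uncertainty units.
C_D = 1_000_000   # distance constant, in minimum-length units
C_T = 2_000_000   # time constant, in minimum-time units

def measured_speed() -> float:
    """One speed reading: fuzzy distance over fuzzy time, (d6 + Cd) / (d6 + Ct)."""
    return (C_D + d6()) / (C_T + d6())

readings = [measured_speed() for _ in range(100_000)]
mean = sum(readings) / len(readings)
spread = (max(readings) - min(readings)) / mean
print(f"mean speed reading: {mean:.9f}")
print(f"relative spread of the readings: {spread:.2e}")
# The d6 'fuzz' in both numerator and denominator is swamped by the constants,
# so repeated readings of the same speed differ only by a few parts per million.
```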

That’s something we, as gamers, can get our teeth into!

Divided and Multiplied Die Rolls

A long time ago, there was an article on Divided Die Rolls in an issue of The Dragon. The whole concept fascinated me, in a way that nothing had done since an article in Scientific American about the patterns of Prime Numbers (long before my gaming days), and I spent a large chunk of my spare time playing around with various divided die rolls and the shapes of the probability curves that resulted, even deriving various mathematical laws describing minimum, maximum, average, median, etc. Some time later, it was pointed out to me that multiplication is easier than division, and multiplying die rolls had exactly the same effect as dividing them – but produced some more convenient numbers to work with.

You see, the thing with such rolls is that the probable results will tend to bunch up at the lower end of the range of possible results, with extreme values possible but unlikely. Consider the results for d6 × d6:

  • 1st d6 rolls a 1: 1, 2, 3, 4, 5, 6.
  • 1st d6 rolls a 2: 2, 4, 6, 8, 10, 12.
  • 1st d6 rolls a 3: 3, 6, 9, 12, 15, 18.
  • 1st d6 rolls a 4: 4, 8, 12, 16, 20, 24.
  • 1st d6 rolls a 5: 5, 10, 15, 20, 25, 30.
  • 1st d6 rolls a 6: 6, 12, 18, 24, 30, 36.

The average of the possible results is (1 + 36)/2, or 18.5. But 28 of the 36 possible results are less than this, and only 8 are higher. In fact, 23 of the 36 possible results are twelve or less, so well over half the possible outcomes are concentrated into 1/3 of the range of possible results – and into a range of double the unit size (2 × d6). Finally, 14 of the 36 possible outcomes fall within the range of the unit die (1-6). That’s almost 39%. Put all of that together and you get: 38.9% 1-6; 25% 7-12; 13.9% 13-18; 22.2% 19-36.

Let’s round that off a bit:

  • 1-6: 40%;
  • 7-12: 25%;
  • 13-18: 15%;
  • 19-36: 20%.

(It’s the unexpected patterns that I find so fascinating: 28 and 14, for example – both multiples of 7. Why?)
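For anyone who wants to check those figures, here is a minimal Python sketch that simply enumerates all 36 equally likely outcomes of d6 × d6 and counts the buckets used above.

```python
from collections import Counter
from itertools import product

# Enumerate every equally likely outcome of d6 x d6 and bucket the products.
products = [a * b for a, b in product(range(1, 7), repeat=2)]

total = len(products)                          # 36 outcomes
below_mean = sum(p < 18.5 for p in products)   # outcomes under the midpoint
buckets = Counter(
    "1-6" if p <= 6 else "7-12" if p <= 12 else "13-18" if p <= 18 else "19-36"
    for p in products
)

print(f"outcomes below the midpoint (18.5): {below_mean} of {total}")
for label in ("1-6", "7-12", "13-18", "19-36"):
    n = buckets[label]
    print(f"{label:>5}: {n:2d} outcomes ({100 * n / total:.1f}%)")
```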

The Distribution Of Uncertainty

Of course, the distribution of Uncertainty may not be – probably is not – a flat curve. It’s just as likely to be a bell curve, something that might be better represented as 2d6 or even 3d6.

When you have compound die rolls, with their own probability curves, multiplied together – or divided by each other – the resulting probability curve centres on (or near) the value you get by multiplying or dividing the average results of each side of the calculation:

  • d6 × d6 centres on 3.5 × 3.5 = 12.25.
  • 2d6 × d6 centres on 7 × 3.5 = 24.5.
  • d6 × 2d6 centres on 3.5 × 7 = 24.5.
  • 2d6 × 2d6 centres on 7 × 7 = 49.
  • 3d6 × 3d6 centres on 10.5 × 10.5 = 110.25.
  • d6 / d6 centres on 3.5 / 3.5 = 1.
  • 2d6 / d6 centres on 7 / 3.5 = 2.
  • d6 / 2d6 centres on 3.5 / 7 = 0.5.
  • 2d6 / 2d6 centres on 7 / 7 = 1.
  • 3d6 / 3d6 centres on 10.5 / 10.5 = 1.
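These are easy enough to brute-force. Here is a minimal Python sketch covering a few of the combinations above (the selection is arbitrary); it shows that for the multiplied rolls the true average lands exactly on the listed figure, while the divided rolls average out somewhat higher – so the division entries are better read as the rough centre of the hump than as exact averages.

```python
from collections import Counter
from itertools import product as cartesian

def roll_space(n_dice: int):
    """Every equally likely total of n six-sided dice (with multiplicity)."""
    return [sum(faces) for faces in cartesian(range(1, 7), repeat=n_dice)]

def combined(op, left_dice: int, right_dice: int):
    """Every equally likely outcome of (left roll) op (right roll)."""
    return [op(a, b) for a in roll_space(left_dice) for b in roll_space(right_dice)]

cases = [
    ("d6 x d6",   lambda a, b: a * b, 1, 1),
    ("2d6 x 2d6", lambda a, b: a * b, 2, 2),
    ("d6 / d6",   lambda a, b: a / b, 1, 1),
    ("2d6 / d6",  lambda a, b: a / b, 2, 1),
]

for name, op, left, right in cases:
    outcomes = combined(op, left, right)
    mean = sum(outcomes) / len(outcomes)
    mode, count = Counter(outcomes).most_common(1)[0]
    print(f"{name:>9}: average = {mean:7.3f}, most common single result = {mode} ({count}x)")
```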

(I must stop myself here, before I get too far off-track).

The uncertainty of speed

That means that the Uncertainty of speeds, deriving from the combination of Uncertainty in location and Uncertainty in time, is going to be smaller than the Uncertainty associated with the two values going into the calculation.

All speeds, including the speed of light itself.

The implication of Heisenberg’s Uncertainty is this: all values that we can measure are fuzzy, and have an inherent limit of resolution beyond which strange things can happen (from a macroscopic, familiar, perspective).

The Hero Connection

There’s one other place in gaming that something similar to these divided die rolls can be observed, and it also holds a lesson on the nature of Uncertainty.

In the Hero system, the price of a power is Base × (1 + total advantages) / (1 + total limitations).

When building characters to a budget, you often need to assess the impact of an additional advantage or limitation on the price. As someone who GMs a game using the Hero System (and GMs another two which use a derivative of that system), this is something that I felt I needed to have a solid understanding of.

To start with, let’s state the relationship:

  • Price (New) = Base × (A0 + 1 + DA) / (L0 + 1 + DL),

where A0 is the total value of advantages already built into the power, L0 is the total value of limitations already built into the power, and DA and DL are the total values of the proposed new advantages and limitations, respectively. In effect, though, A0 + 1 and L0 + 1 are going to be constants – I’ll use C and K, respectively – which gives us

  • Price (New) = Base × (DA + C) / (DL + K).

If we set the base price as 1, the structure of the resulting calculation is exactly the same as that for Speeds with Uncertainty in both distance and time.
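To make that concrete, here is a minimal Python sketch of the pricing relationship; the 60-point base power and the particular advantage and limitation values are hypothetical examples, not drawn from any published build.

```python
def power_price(base: float, advantages: float, limitations: float) -> float:
    """Hero-style pricing: Base x (1 + total advantages) / (1 + total limitations)."""
    return base * (1.0 + advantages) / (1.0 + limitations)

# Hypothetical example values: a 60-point base power that already carries
# +1/2 in advantages and no limitations.
base, a0, l0 = 60.0, 0.5, 0.0
print(power_price(base, a0, l0))            # 90.0

# Adding a further +1/4 advantage (DA) versus a further -1/4 limitation (DL).
# With these particular constants the denominator change moves the price more,
# but as noted below, the balance shifts as C and K grow.
print(power_price(base, a0 + 0.25, l0))     # 105.0
print(power_price(base, a0, l0 + 0.25))     # 72.0
```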

It’s been my experience that the results are far more sensitive to changes in DL, the analogue of changes in time, than they are to changes in DA, the analogue of changes in distance. That’s because any increase on the denominator side divides any increase in the numerator into something smaller. The exact amounts change depending on the values of the constants – the larger one is, the smaller in proportion an increase of +1 is.

If K is 3, in other words, +1 is a bigger difference than if K were 103. But any change in the denominator reduces the impact of a change in the numerator (there are so many variables that it’s hard to get more specific without becoming totally confusing).

To show how complicated these things can get, and as a bonus for other users of the Hero system:

  • DP (change in price) = Base × [(DA × K) – (DL × C)] / [K × (K + DL)].

I find it simpler to consider things on a case-by-case basis, working with established and fixed values for K and C. But that’s the general solution, for whatever it’s worth.
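And as a quick check that the general solution behaves, here is a minimal Python sketch comparing it against a straight before-and-after price difference, using hypothetical values for C, K, DA and DL.

```python
def power_price(base: float, advantages: float, limitations: float) -> float:
    """Hero-style pricing: Base x (1 + advantages) / (1 + limitations)."""
    return base * (1.0 + advantages) / (1.0 + limitations)

def price_change(base: float, c: float, k: float, da: float, dl: float) -> float:
    """The general solution above: DP = Base x (DA x K - DL x C) / (K x (K + DL))."""
    return base * (da * k - dl * c) / (k * (k + dl))

# Hypothetical constants: C = A0 + 1 and K = L0 + 1 for a power that already
# has +1/2 in advantages and -1/2 in limitations.
base, c, k = 60.0, 1.5, 1.5
da, dl = 0.25, 0.5

direct = power_price(base, (c - 1) + da, (k - 1) + dl) - power_price(base, c - 1, k - 1)
formula = price_change(base, c, k, da, dl)
print(direct, formula)   # both print -7.5
```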

Conclusion

Game mechanics is all about systems of simulation, about assigning numerical, manipulable quantities to our imaginations so that others can work with, interact with, and enjoy, the products of those imaginations. But systems of simulation have all sorts of real-world applications beyond the game table. The military use them for tactical planning. Science can use them, too.

The existence of Uncertainty inherent in one type of measurement makes all other types of measurement subject to their own Uncertainty. You don’t have to skim many of the Wikipedia articles on Heisenberg’s Uncertainty Principle and related phenomena (look especially for the name Planck, as in Planck Units and Planck Time) to see that this is not entirely an original thought – though I hadn’t read them prior to writing this article!

The potential real-world technological implications discussed should manifest in any futuristic high-tech society, or at least be on their way.

But there’s a more important implication purely for us gamers: if the real world is fuzzy, our simulated worlds should be fuzzy too. So stress less about the last decimal place of accuracy in simulations and focus instead on the meaningful generality – and get on with the game. It’s that train of thought that leads to solutions like the 3-minute NPC. And that’s a “real world” benefit that we, as gamers, can all uniquely appreciate.

“I don’t care what anything was designed to do, or intended to do, I care about what it CAN do!” – Col. Jack O’Neill, Stargate SG-1

(I think – and possibly a misquotation, but the point is made…)

