This isn’t the post I expected to be writing today. There are two reasons why you’re reading this and not the first of a three-part article about Priests: the first is a documentary (part 3 of The Fabric Of The Cosmos) and the second is the explosion of inspiration that it produced. I wanted to strike while the iron was hot…
This article builds on and extends the game physics that I use for the Zenith-3 and Warcry campaigns, which I previously discussed in Fascinating Topological Limits: FTL in Gaming amongst several other articles. You might also want to check out A Journey Of 1,000 years which is part two of my series on time travel. More to the point, this article (in parts) will assume that you are familiar with that content.
So, let’s start with the source of inspiration behind this whole article – Quantum Entanglement. If you don’t understand this concept, you could try reading the Wikipedia article that I just linked to, or you could watch the documentary I linked to earlier – or you could stick around for my slightly simplified recapitulation of the explanation they offered in that documentary. If you have used one of the other techniques, feel free to skip down to the next section, “Multiple Dimensions Of Space-Time”.
Still here? Excellent. Okay, imagine you have a roulette-style wheel with alternating blue and red panels, which represents an electron. If you bring two electrons or other particles together closely enough, quantum theory says that their properties can become entangled with each other. The colored panels on the wheel represent some property of the particle it stands for, such as the direction of spin – so we need a second wheel for the other particle.
If you spin the wheels, entanglement means that whatever color comes up on one wheel, the opposite color will come up on the other. The act of spinning the wheels represents manipulating that property of the electron. Now for the strange part – it doesn’t matter how far apart the two particles are once they are entangled. One will always have the opposing value of the property being manipulated. Somehow, the second particle “knows” the condition of the first, no matter the distance between them, and “communicates” this information to the other instantaneously – with no connection of any sort between the two.
Einstein hated the notion. He called it “spooky action at a distance” – the “spooky” part referring to the lack of connection. At best, he thought, quantum theory was incomplete – and he had already shown that information couldn’t travel faster than the speed of light. Since quantum entanglement appeared to do exactly that, he had to believe either that he was wrong, or that Quantum Theory was… dubious, shall we say?
Multiple Dimensions Of Space-Time
Quantum Theory also states that you can’t state where a particle is (and some other qualities, like its spin) until you observe it; until then, it has every possible outcome at the same time.
The physics of my superhero campaign rests, in part, on the concept that there are three dimensions of time just as there are of space, and that what we experience as time is a vector through the resulting space-time. When a quantum state collapses into one of these possible outcomes, the timeline splits into multiple branches, each of which experiences only one of the possible outcomes. In part, that’s my theory, and in part, it’s established physics theory, known as the “many worlds” theory or the many worlds interpretation (the link is to the Wikipedia article that describes it in far more technical detail). According to my game physics, when that happens, the temporal vector lines branch off in multiple directions within the three dimensions of time, but if you were to perform a vector addition of the results, you would end up back with the original vector.
We don’t experience anything that doesn’t happen on the vector that describes our particular space-time, only the effects of the various forces that affect the path of timelines.
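For the numerically inclined, that branch-and-recombine idea can be sketched in a few lines of Python. This is purely illustrative – the axes, branch weights, and offsets are invented for the example, not derived from any physics:

```python
import numpy as np

# Toy model of "Vector Time": the universe's temporal vector in a
# hypothetical 3-dimensional time-space (axes t1, t2, t3).
base_vector = np.array([1.0, 0.0, 0.0])  # our experienced timeline

# A quantum event with three possible outcomes splits the timeline.
# Each branch angles off in a different temporal direction; the
# probability of each outcome acts as that branch's weight.
probabilities = np.array([0.5, 0.3, 0.2])
offsets = np.array([
    [0.0,  0.40,  0.0],   # branch 1 veers along t2
    [0.0, -0.50,  0.2],   # branch 2 veers the other way, and along t3
    [0.0, -0.25, -0.3],   # branch 3 veers back along t3
])

# The offsets are chosen so the probability-weighted deviations cancel:
# vector addition of the branches reproduces the original vector.
assert np.allclose(probabilities @ offsets, 0.0)

branches = base_vector + offsets
recombined = probabilities @ branches
print(np.round(recombined, 12))
```

Any set of branch offsets whose weighted sum is zero has this property; the example just picks one such set by hand.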
When I first described this theory of “Vector Time”, one of the comments I got was “why?”. Quantum entanglement is a great way to demonstrate the value of the approach.
Demystifying the spookiness
Since there is no absolute temporal frame of reference, all measurements will be relative to some base timeline. The act of entangling the two particles clearly divides the timeline in two – one in which the particles were entangled and one in which they weren’t – but we aren’t interested in the second of those, so let’s discard it and use the timeline in which entanglement took place as our base.
Next, we had the act of separating the entangled particles. This also splits the timeline – one branch for every possible adjusted location of the entangled pair. But we’ll simplify it and assume that we only ever move the second particle to the same point relative to the world around the first. In that case, we split the base timeline into only two – one in which the pair were separated, and one in which they weren’t. Since there is no change from the base timeline in the non-separated case, it continues along the same temporal vector that the universe had before the question of separating the pair came up. The timeline in which we did separate them is obviously different, so it has some temporal vector relative to the first – it angles off from the base timeline, starting from the point of divergence.
Subjective time is akin to a wavefront traveling along these temporal lines, expanding into the future from the point at which measurement started. This is the equivalent of an arc drawn with its center placed on one of the timelines. Note that nothing has been “duplicated” – these are still the same particles on the different temporal vectors. It’s just that one has moved within the internal space-time and the other hasn’t. There is also now a connection, imperceptible to any of our senses or instruments, that links the past (when the two became entangled) and the present, where they have been physically separated but the entanglement persists.
The act of causing one of the particles to “collapse” into a deterministic outcome by measuring a property that is subject to entanglement propagates back along the temporal vector to the point in time where the two became entangled in the first place, and it is at THAT instant that the collapse of one takes place – forcing the other to also collapse into the opposing state. Therefore, whenever that property of the entangled remote particle is measured, it will always have the “correct” value.
Some people might now be saying, “but the particle didn’t collapse into one possible configuration at that instant – that happens as a result of the act of observation, at the instant of observation” – to which I reply: how would you know? The only way to find out would be to measure the property of the particle before you measure it – a logical impossibility. All you end up doing is shifting the point from which the collapse event begins to propagate back along the timeline.
All this takes zero perceived time within the integral space-time containing the particles. It’s not on our temporal vector, in fact, it’s as far removed from our temporal vector as it is possible to get. By definition we can’t experience the connection between the two entangled particles – only the measurable fact that changing one also changes the other, instantaneously within our frame of reference. The connection between the entangled particles is outside our space-time.
I’m still not claiming that my theory is correct. I’m simply saying that it offers a way of explaining the otherwise inexplicable.
The Virtues of Entanglement
Entanglement is real. It has been proven in the lab. And once we know how to do something in the lab, humans have a natural tendency to start looking for ways to utilize it for practical purposes. At this point, the documentary started talking about Star Trek-style teleportation – without addressing any of the multiple flaws that make the proposal impractical. But I began thinking about some more realistic applications, and very quickly found that entanglement opens up a whole new branch of technology – or maybe a whole new sub-branch of electronics.
Let’s start with the blindingly obvious and go from there…
Real-Time FTL Communications
We’re really good at manipulating electrons. We’ve been doing it for quite a while now. Electron spin, one of the properties affected by entanglement, is the basis of all magnetic storage – hard disks, floppy disks, magnetic tape – and of the emerging field of spintronic memory. Suppose we were to separate a pair of entangled electrons and send one of the pair a very long way away. Light-years. By altering the spin of one of the pair, we also alter the state of the one left behind. Examining the state of the one left behind gives us the (manipulated) state of the distant one – instantly. What we have is a 1-bit communications channel. We don’t care that reading the status of the one at the receiving end also alters the state of the remote one, because as soon as the next bit of information is encoded onto that remote electron, it resets our local one, undoing the effects of our last reading – so we simply read its state again, ending up with digital serial communications. If we have 8 of them, we can send one byte of information at a time. Of course, we’ll want more for greater bandwidth, and still more for redundancy.
Imagine something like a matched pair of memory cards with 100 MB capacity each, in which each memory circuit on one contains an electron entangled with its counterpart in the other card. Or 256 MB. Or gigabytes. The limiting factor is how quickly the information can be read off the memory card into a computer; the bandwidth available is vastly more than can be used, since you need time to read and process the information being sent before the next manipulation at the remote end. Data can be streamed – in one direction at a time only – at infinite speed over infinite distance.
While it would be possible to reverse the process using the same electrons – reading data into the home RAM to be read by the remote RAM after processing the signal sent from the remote RAM – it would probably be easier to permanently dedicate half to each direction of travel. The result is real-time digital communications to anywhere, instantly.
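To make the scheme concrete, here’s a toy Python simulation of that byte-serial channel, taking the article’s premise at face value – that setting the spin of one electron instantly forces its partner into the opposite spin. The class and function names are illustrative inventions, not a real API, and real quantum mechanics is not being simulated:

```python
class EntangledPair:
    """One entangled electron pair; spin is modelled as 0 or 1."""
    def __init__(self):
        self.local_spin = None
        self.remote_spin = None

    def set_local(self, spin: int) -> None:
        # Manipulating one particle fixes the other to the opposite value
        # (the premise of the channel described in the text).
        self.local_spin = spin
        self.remote_spin = spin ^ 1

    def read_remote(self) -> int:
        # The receiver reads its own particle and infers the sender's bit
        # by flipping it back.
        return self.remote_spin ^ 1


def send_byte(pairs, value: int) -> None:
    """Encode one byte onto 8 entangled pairs, most significant bit first."""
    for i, pair in enumerate(pairs):
        pair.set_local((value >> (7 - i)) & 1)


def receive_byte(pairs) -> int:
    """Read the 8 pairs back into a byte."""
    value = 0
    for pair in pairs:
        value = (value << 1) | pair.read_remote()
    return value


channel = [EntangledPair() for _ in range(8)]
for byte in b"FTL":
    send_byte(channel, byte)          # each new byte overwrites the last,
    assert receive_byte(channel) == byte  # exactly as described above
```

Note how re-encoding each new byte simply overwrites the previous state – the “reset” behavior the text relies on.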
Real-time Remote Control
Of course, once you have bi-directional instantaneous communications, there is absolutely no barrier to real-time remote control of space probes, etc. Without the “instantaneous” part of that description, speed-of-light lag makes this impractical; you can’t issue a command based on what the situation was so many seconds or minutes ago, and which won’t actually be carried out until that same number of seconds or minutes later.
Stick a couple of probes in (relatively) close solar orbit to watch for solar flares and storms, and send the information back to Earth ahead of the event so that electrical companies have time to prepare their systems, for example.
Sidebar: Dispensing With Paradox
It has been said a number of times that FTL communications creates paradox. This comes about because physicists (and astronomers, and others) mistakenly equate an event with the awareness of the event having occurred. Astronomers are prone to describing events observed in distant galaxies – their motions, for example – as what is happening “now”, ignoring the fact that light has a finite speed and that the actual event therefore occurred quite a long time ago. The same mistake creates the apparent paradox of ‘being able to receive information about an event before the event occurs’. The only reason there is a paradox is the assumption that nothing can travel faster than the speed of light, so that becoming aware of the event via communications at the speed of light is tantamount to the event occurring at that instant.
If you define the speed of sound as the fastest speed possible for information to travel, then any communications by means of light automatically creates an artificial paradox, a means of learning about an event before awareness of that event (by means of sound) reaches the observer. Therefore either the light-based information is describing the event before it occurs, or there’s a flaw in your logic, assumptions, or both.
I am perfectly happy to accept that the speed of light is the fastest speed possible for information to be sent electromagnetically. But not everything is necessarily subject to electromagnetic limits. That’s a subject I’ll return to a little later, when I ask whether gravity is a force – or the effect of a force acting through an extra-temporal domain, possibly even the same one as entanglement.
The Ultimate in secure communications
Before getting to that, I have a few other aspects of the communications-by-entanglement approach to discuss. It occurs to me that it would be the ultimate in secure communications – assuming that you can’t have multiple particles mutually entangled. The entanglement link cannot be monitored – though, once the messages are received and passed to a standard electronic device, they would become vulnerable, just as a secure internet connection will not protect your passwords from a virus on your computer. Nor can the message be interfered with in any way, so far as I am aware.
The prospects for military communications alone are revolutionary. “Radio” signals that are not subject to interference, can’t be blocked, and can never be intercepted?
But, even more to the point, a modem “pair” employing entanglement technology would be a wireless internet connection that has these same attributes, and that can never be tapped by a bandwidth pirate. The wifi node is secure, by definition.
The Ultimate in off-line storage
Picture a box into which you can plug memory card after memory card, accessed over an entangled data connection by a counterpart the size of a SIM card. Limits to computer RAM become a thing of the past. You can have the equivalent of unlimited hard disk space. You take one end of this connection with you wherever you go, plugged into a very localized low-power wireless connection. Suddenly, you have full access to your computer storage everywhere you go. You are no longer limited to the amount of memory in your mobile phone, or your digital camera, or your tablet device, or your e-book reader. You can store and retrieve information instantly, from wherever you are.
That memory device does not have to be in plain view. It can be in a fire safe. It can be in another, more secure, building. It can be on another continent, or another planet for that matter.
The Lord Of The Rings movie trilogy produced multiple terabytes of data. How much more convenient would it have been to have had all of that information instantly accessible, anywhere in the world that Peter Jackson happened to be? And that’s just one application of this technology.
The Universal Telescope
When did entanglement start? There was a time, perhaps a few milliseconds or even microseconds after the big bang, when the fundamental particles came into existence, when there was ample scope for some of them to have become entangled with one another. Those particles could now be anywhere in the universe. Let’s say we want real-time information about the stars at the center of our galaxy – we can get some of that information simply by interrogating a particle we have access to here on earth which happens to have been entangled, way back when, with a particle that is now in one of those stars. It’s only a matter of finding a local particle with the correct entanglement.
It seems probable that shared history means that the majority of entangled particles in a given location, say the earth, will have their partner found nearby. It should be ‘relatively’ easy to find an entangled pair in which both particles are part of the earth. The number of particles which are entangled with a more remote location should be far fewer, because they have less of that shared history.
Like all telescopes, a telescope based on existing entangled pairs would be limited in the data that it can convey, just as radio telescopes, infrared telescopes, x-ray telescopes, and optical telescopes, all give different information to astronomers. It follows that it’s uncertain how much useful information we would get from such a telescope.
There are also some extremely difficult hurdles to overcome before such a telescope would be practical. If you grab a random particle here on earth, first you have to identify it as an entangled particle; secondly, you have to identify where its partner is located by correlating changes in its condition with events observed by other means, where one is subject to a speed-of-light lag of unknown proportions and the other is not; and third, that remote location has to be somewhere of interest. Only once those three problems have been solved can you start addressing the question of what the particle is telling you about conditions where it is located.
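That second hurdle – matching a particle’s state changes against light-lagged observations of a candidate location – is essentially a correlation-at-unknown-lag problem. Here’s a sketch on purely synthetic data; every number in it (the lag, the record lengths, the noise model) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# The entangled readout tracks remote events *instantly*, while
# telescope observations of the same location arrive light-lagged.
# If the two records correlate best at some shift, that shift
# estimates the light-travel time (and hence the distance).
n = 2000
events = rng.standard_normal(n)        # what actually happens remotely
true_lag = 137                         # light-lag in time steps (unknown to us)

particle_record = events                       # instantaneous record
telescope_record = np.roll(events, true_lag)   # lagged record

# Scan candidate lags, looking for the best correlation.
window = n - 299
scores = [np.corrcoef(particle_record[:window],
                      np.roll(telescope_record, -k)[:window])[0, 1]
          for k in range(300)]
best = int(np.argmax(scores))
print(best)  # recovers the light-lag, here 137
```

With real data the particle would be noisy, the event record incomplete, and the lag enormous – which is why the text suggests only quantum computing offers any hope here.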
Part of these difficulties – a very small part – might be overcome by assuming that every particle that can be entangled, is entangled, and that when we speak of ‘entangling’ two particles, what we are actually describing is a change of entanglement partner. That vaults the first hurdle that I mentioned, and only leaves two to go. The second is almost impossible to overcome, and only the prospect of quantum computers holds any hope of a solution in my opinion – what we have now is simply not up to the task.
I don’t think the third is as much of a burden as it might initially appear, though – finding any naturally-entangled pair with a positively-identified remote location for one of the pair would automatically make it acutely interesting. Maybe it’s somewhere in the Earth’s core? Or attached to an atom in the earth’s atmosphere? Every positive identification would be of acute interest, at least for the foreseeable future – so the third hurdle is a non-event. That leaves only one technological challenge of any seriousness to be overcome, and puts this idea squarely into the “someday soon, maybe” category.
Another prospect that this holds is the potential for direct examination of the big bang itself – that only needs a particle whose state hasn’t changed since that time. The more remote from anything interesting, the less chance an entangled particle has of having encountered anything to change its state since that time – the void between stars, or more likely, between galaxies, is the place to look.
If entanglement communications can become a reality, then any sort of radio communication swiftly becomes obsolete. That might explain why SETI hasn’t found any alien radio signals: the window in history during which radio transmissions dominate, and are detectable over interstellar distances, might be less than a century or so wide.
Pick an extrasolar planet that has intelligent life. Find a matching entangled particle here on earth. Start transmitting messages using that particle to change the state of the remote particle in a deliberate pattern. Wait for someone there to find and identify that pattern. Eureka – instant Contact!
Admittedly, that’s a really inefficient approach. It would be a lot better to grab a hunk of particles, assume that they are entangled, and start altering their states in a non-random way. This shotgun approach makes it more likely that someone, somewhere, will notice the signal – if there is anyone out there to do so. Remember: no interference, no attenuation of signal with distance, instantaneous. The “background noise” here is of an entirely different kind.
Hold on a moment – that means that every computer on the planet is broadcasting to ET every time it gets used. But our computers aren’t equipped to cope with transmissions received by means of entanglement – they would simply kick out a memory error, or a corrupt file. We may already have missed ET’s reply. Certainly, we are signaling at a more prodigious rate than ever, without even realizing it.
I can picture some future incarnation of SETI testing large numbers of particles for non-random condition changes over a long period of time – the more they test, the more likely they are to find one – while also non-randomly changing the state of another large group of entangled particles – the more they do of that, the more likely it is that they will make contact with ET’s SETI project. Then comes the hard work of convincing each other that the contact is real – and convincing anyone else that it’s real, too.
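One crude way such a future SETI project might screen records for non-random condition changes is a compressibility test: deliberate patterns compress, noise doesn’t. A sketch – the threshold, the sample sizes, and the “signal” are all invented for illustration:

```python
import random
import zlib

random.seed(42)

def looks_artificial(record: bytes, threshold: float = 0.9) -> bool:
    """Flag a state-change record as non-random if it compresses well.
    Compressibility is a crude but serviceable randomness test."""
    ratio = len(zlib.compress(record)) / len(record)
    return ratio < threshold

# A record of genuinely random state changes barely compresses at all...
noise = bytes(random.getrandbits(8) for _ in range(4096))
# ...while a deliberately patterned transmission compresses dramatically.
signal = b"CQ CQ CQ DE EARTH " * 228

assert not looks_artificial(noise)
assert looks_artificial(signal)
```

Real randomness testing would use proper statistical batteries, but the principle – test huge numbers of particles and flag the compressible ones – is the same.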
Bypassing the event horizon
What happens to an entangled pair when one of the pair is swallowed by a black hole? I don’t know, but it would be fascinating to find out. More to the point, it’s entirely possible that entanglement would permit the direct observation of the interior of a black hole, beyond the event horizon, because it bypasses the speed of light limit that creates the event horizon in the first place. That depends on whether or not the subatomic properties that create the entangled condition are lost during the transition.
If we could determine what the properties of a particle whose entangled partner has been swallowed by the hole (but not by the singularity) would be, we can start looking for such a particle…
The mysteries of the Big Bang
Cosmologists sometimes seem to have a masochistic streak. No sooner do they come up with what seems to be a workable theory than they start apologizing for its simplicity and start trying to complicate it.
That’s certainly what happened with the big bang. For some reason that no-one has yet adequately explained, the subsequent expansion doesn’t seem to have occurred at a steady, even pace. This is called Cosmic Inflation, and while it remains controversial, unproven, and possessed of known flaws, it does predict some phenomena that have been observed, so it would appear to be at least part of the story of the early universe (read the Wikipedia article if you want a more detailed discussion). One of the big problems is explaining why things would happen this way.
Multidimensional time might just offer an answer. If the post-big-bang universe was always expanding at the same rate overall, but part of that expansion was along a time axis that we cannot directly observe, it changes the nature of the question by providing a different frame of reference within which the “missing expansion” might have taken place. Interactions between the observable and unobservable time frames can then account for the apparent (but illusionary) change of expansion rates – it’s an illusion caused by the fact that we can only perceive part of the totality.
This all implies that the additional temporal dimensions can function as an additional energy transfer mechanism, one that is not taken into account by any current theory. Our definition of the universe as a closed system, in thermodynamic terms, becomes untrue unless the parts of the universe present within these other temporal dimensions are taken into account. So far as our observable universe is concerned, energy could be (apparently) created (actually, siphoned from elsewhere) and destroyed (siphoned off to elsewhere).
This is not a trivial matter. It affects just about everything in science that you can think of – though the impacts on most things would be vanishingly small, because this sort of thing wouldn’t happen very readily or very often; conditions would need to be exactly right. Study of entanglement would lead to an understanding of the mechanism by which it operates, opening the door to the adaptation of those mechanisms to practical ends.
Unlimited free energy?
Let’s start with a couple of benign applications. The first is obvious: we have a source of unlimited free energy; we just have to work out how to tap it.
Even just a controlled and short-lived tiny trickle might be enough to enable self-sustaining fusion reactions, itself no small benefit. Ultimately, though, the goal would be to go beyond using this energy source as an exciter for fusion and harness it directly on a much larger scale.
It has been suggested by some writers that if you have enough energy, you can do anything that is permitted within the laws of physics, and that might well be the case. Oil could be created artificially from waste, for example, and we could turn garbage back into its constituent parts and then build those up into whatever material we desired. Practicality is not so certain, though. So it would not be a universal panacea, though it would certainly help.
Thermal Pollution Cleanup
Human activity creates a lot of heat. I’d love to see a study sometime of the climatic effects of thermal pollution from a city, and the way that thermal energy interacts with the existing climatic models. In fact, since the majority of human activity takes the chemical energy locked away in various fuels and releases it into the environment with no corresponding mechanism to take the excess and waste energy and store it away again, I would be more prepared to consider this as the cause of global warming than the usually-attributed culprits, simply because there are natural feedback mechanisms to moderate the others. There are natural processes that take CO2 out of the atmosphere, for example. That’s not the case with thermal pollution.
The time might come when every city has a heatsink that gathers the excess heat from around it and shunts it away onto another temporal axis. It’s simply the reverse of the flow that yields all that free energy; implicit in the development of one is the development of the other.
Of course, this is simply sweeping the problem under the carpet unless the realm to which we export this thermal garbage is also the realm from which we draw our energy supply – but out of sight tends to be out of mind, and I can’t imagine that stopping us.
If unlimited energy is on tap everywhere, someone’s going to find a way to weaponize the technology. The obvious result is a directed energy weapon, but it might not deliver the energy in its pure form; self-powered rail guns are another distinct possibility, as are lasers without the bulky power supplies that they now require.
It’s not often in the design of a new weapon that its natural defense is also implicit in that development, but that just might be the case this time around. If you can tap an extra-dimensional space for its energy, you can also shunt unwanted energy away, at least in theory. The same tech that deals with thermal pollution could be adapted to provide an energy shield, i.e. a shield against unwanted forms or concentrations of a particular form of energy.
While it might never be practical as a defense against the concentrations of energy produced by the weapons discussed a moment ago, that doesn’t mean that it won’t be useful – for example, as a sunscreen that blocks 100% of the harmful UV from your favorite beach or park.
Black Holes & Entanglement
Let’s talk about gravity for a minute. Imagine that you have two spheres in space orbiting their mutual center of gravity. One of them is charged, the other is not. Now place a static charge somewhere nearby so that there is a new force acting on the charged sphere. This alters its orbit, which in turn will cause a secondary alteration in the orbit of the other sphere. If you were measuring their mutual gravitational attraction, would you really have to wait for light to travel from one sphere to the other before this gravitational shift was experienced by the uncharged sphere? Or would it occur simultaneously with the event itself, rather than with the awareness of the event by electromagnetic means?
Gravitic interactions and Entanglement seem to share a common property, then – they operate instantaneously so far as our frame of reference is concerned. Could it be that the reason we’ve never found a graviton is because they function on a different temporal axis that we can’t directly perceive?
The implications of the similarity are staggering, opening the door to gravitic technologies.
Transmission of gravity to a remote destination – in other words, artificial gravity. No more zero-G problems for astronauts on long missions.
Transmission of gravity from a remote destination – i.e. artificial anti-gravity. No more worrying about blackouts from high G-forces. The ability to manufacture the effects of a gravity well in a controllable direction relative to the path of flight – in other words, Star Wars / Star Trek space vehicle maneuverability.
Combining the two would permit acceleration at huge rates by focused gravitic attraction. You already get huge numbers from sustained 1-G acceleration – how about 100 G, or 10,000? What this amounts to is moving gravity from somewhere it is not useful to us to somewhere it is, without moving the mass that causes it. Of course, there might be side effects to consider wherever we’ve taken the gravity from. What did I say earlier about problems that were out of sight? Oh, yes. If we can do it, we will.
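For concreteness, here’s the simple Newtonian arithmetic behind those “huge numbers”. Relativity is deliberately ignored (you can’t actually reach light speed), so these are ballpark illustrations only:

```python
# Constant-acceleration arithmetic: how long does it take, from rest,
# to (naively) reach light speed, and to cover one astronomical unit?
g = 9.81             # standard gravity, m/s^2
c = 299_792_458      # speed of light, m/s
au = 1.496e11        # Earth-Sun distance, metres

for gees in (1, 100, 10_000):
    a = gees * g
    days_to_c = c / a / 86_400               # v = a*t, solved for t
    hours_to_1au = (2 * au / a) ** 0.5 / 3_600   # d = a*t^2/2, solved for t
    print(f"{gees:>6} G: ~{days_to_c:8.2f} days to c, "
          f"~{hours_to_1au:6.2f} h to cross 1 AU")
```

At 1 G the naive time to light speed is roughly a year; at 100 G it drops to a few days, and crossing an astronomical unit takes hours rather than days – which is why controllable gravitic acceleration would transform space travel.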
How about sending matter through one of these different temporal vectors? The results have to be FTL travel so far as either the crew, or the universe around them, are concerned – possibly both. Though this is a different order of magnitude in difficulty compared to most of the other engineering/science problems that have been discussed, it might just be possible.
Similarly, if matter can be transmitted through a different temporal axis to a remote point in our perceived space-time, it’s not that difficult to come up with teleportation, at least in theory. The practical problems might be bypassed by this approach, or they might still be relevant – if you can do it by affecting every constituent particle with an energy field of some kind, for example, there is no need to individually analyze and replicate each one at the destination. But that’s a very big ‘if’.
The same sort of technology could be adapted to permit Star Trek -style replicators.
The existing theories concerning large-scale teleportation – which the documentary that inspired this article immediately went to – require that the original be destroyed during the scanning process, with the resulting data used to recreate every particle in its appropriate quantum state at the destination. They completely ignore practical problems such as the sheer quantity of data involved and the transmission of that data from source to destination. While it might be theoretically possible, I don’t see this approach ever being practical.
The Infinite Computer
But even being able to perform the data acquisition, analysis, and transmission using a different temporal axis so that it all takes zero perceived time, or close to it, might change all that. You could hand a computer a problem that will take it 10,000 years to solve, send it off along another temporal axis – more of a temporal loop, really – and have it return to our space-time a short interval later having spent those 10,000 years according to its frame of reference, and therefore having the answer.
Speculation vs Reality
Some of what I have suggested in this article is undoubtedly speculative at best, and potential pie-in-the-sky; but some of it is stuff we know how to do already (especially the proposals made early in the article), which rely on the fact of entanglement and not on any particular mechanism by which it occurs. Some of what I’ve described will happen – as soon as we figure out how to entangle particles reliably on an industrial scale. That could happen tomorrow, it could happen in a year, a decade, or a century – or even never, if there is some physical principle that stands in the way and which I’m not aware of.
It’s going to be interesting to see what happens!
I’m having surgery on Monday so there may not be a new post until this time next week – though, if I can, I’ll do something quick over the weekend and schedule it in advance. The intervention is minor, to remove a lump believed to be a cyst from the back of my skull, but you never know when you’re having an operation.
And, speaking of operations, I’d like to send a quick shout-out to my nephew, Patrick, who hit something on the skateboarding rink two days ago and today had to have an artificial elbow and arm reconstruction. Get well soon, Patrick!