A Game Of Drakes and Detectives: Where’s ET?

Think you know what our galaxy looks like? Think again – the latest findings have changed our view completely. Click on the image to view the 5600×5600 pixel original. Source: NASA/JPL-Caltech/ESO/R. Hurt via Wikipedia.
Over the Christmas break, and for some weeks prior, I read “First Contact” by Ben Bova and Byron Preiss, and three or four times in the course of doing so, I found myself mentally yelling at the page, “that makes no sense”.
There are some logical errors in the assumptions upon which SETI is founded, and even more in the understanding of SETI by all but the most dedicated casual observer. A correction to these radically reshapes the theory.
Certainly, the general public has no idea of the limitations or constraints imposed by even the accepted theory, never mind the corrected version that I will be discussing today.
Now, I’ve met only one gamer who wasn’t completely convinced that we are not alone, and that we would eventually find ourselves in some kind of first contact situation. This was the universally-prevalent opinion amongst all the gamers I knew way back in the early 80s, when SETI was only just becoming respectable in the popular zeitgeist.
The Drake
Our guide through the technicalities of SETI, and the spine of this article, will be the Drake Equation. In its currently accepted formulation, that is:
N = R* × fp × ne × fl × fi × fc × L
in which:
- N = the number of civilizations in our galaxy with which communication might be possible;
- R* = the average rate of star formation in our galaxy;
- fp = the fraction of those stars that have planets;
- ne = the average number of planets that can potentially support life, per star that has planets;
- fl = the fraction of planets that could support life that actually develop life at some point;
- fi = the fraction of planets with life that actually go on to develop intelligent life (civilizations);
- fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space; and
- L = the length of time for which such civilizations release detectable signals into space.
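In code form, the equation is nothing more than a product of those seven factors. A minimal sketch, with purely illustrative inputs (roughly in the range kicked around at the original 1961 meeting – these are talking points, not measurements):

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable civilizations: the product of all factors."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative inputs only: 1 star/year, half with planets, 2 habitable
# planets each, life and intelligence certain, 10% detectable, 10,000-year L.
N = drake(R_star=1, f_p=0.5, n_e=2, f_l=1, f_i=1, f_c=0.1, L=10_000)
print(round(N))  # 1000
```

Every one of this article's complaints is, in the end, a complaint about what gets plugged into one of those seven slots.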
(Actually, I have to admit that the above resolves a great many of my complaints about the formulation as it was presented in the book – but not all of them).
History of the Drake Equation
It’s always worth remembering that the Drake Equation was never intended to be anything more than a conversation-starter – a way of organizing the program of the first SETI meeting on a rational basis.
What’s Wrong With The Drake
Okay, that all seems reasonable on the surface of it. The most fundamental problem with the Drake Equation is not apparent at a superficial glance.
The problem: half the equation deals with the probability of such life existing, the other half with the probability of our detecting such life. By conflating the two, many otherwise reasonable thinkers and researchers have confused the two purposes, making assumptions about the terms of the equation that impact its functionality.
To see what’s up, and get closer to a meaningful answer to the question of how many intelligent species there are in the milky way (or any other region of space sufficiently large to permit statistical treatment), the easiest approach is to discuss each of the terms in the equation in succession.
Rate Of Stellar Formation
This was originally estimated at 1 per year, and has now been refined to a rate of 1.5-3 stars per year.
Actually, in its original formulation, as I have seen it presented elsewhere, this has been replaced with a completely different term – the number of stars which could potentially have planets. I suspect that the change was made because the final factor in the equation has units of years, so there needs to be a “per year” somewhere in one of the other terms to cancel it out. This is one example of the confusion between the two purposes to which the equation has been put interfering with its capacity to do either properly.
The way I always saw the Drake was as a “logical onion” – peeling away those locations that, for one logical reason or another, could not host such a civilization from amongst the total pool of contenders, to determine the number of civilizations that probably exist. Such a view of the equation makes sense with an N* (number of candidate stars); it doesn’t make sense with an R*.
I’m getting ahead of myself a little, but for the purposes of this discussion, I’m going back to the old formulation. After all, why should the current rate of stellar formation have any relevance whatsoever to the number of stars that were created in the past – say, around the birth-time of our own sun?
Number Of Eligible Stars
There are an estimated 250 billion stars in the Milky Way galaxy, according to Google – give or take 150 billion, which is something that I’ll get back to in a moment. For a start, though, this is a very question-begging answer. Britain and most other Commonwealth countries traditionally give ‘billion’ a different meaning (million million) to the US (thousand million); is this answer in the US sense of the term, because Google are American, or has Google detected that I am in a Commonwealth country and used the local meaning? It’s only a thousand-fold difference, after all.
To find out, I had to run a second search, for the term Billion, which brought up the interesting factoid that the British officially adopted the US terminology in an attempt to avoid confusion, all the way back in 1974 – though unofficial old-form usage continued for decades after, and even now, when someone says “Billion” you almost always have to ask what they mean.
So, 250,000,000,000 stars. Maybe 400,000,000,000. Or maybe 100,000,000,000. That’s an extraordinarily broad range! The exact figure depends on the number of very-low-mass stars, which are hard to detect, especially at distances of more than 300 light-years. The other problem is the galactic core – it’s so bright, and stars rub shoulders so closely in that part of the galaxy (some are less than 1/10th of a light-year apart!), that the total is simply impossible to calculate. And that’s completely ignoring the presence, now thought confirmed, of a supermassive black hole at the heart of the galaxy, which has consumed vast numbers of those stars – and it’s always worth remembering that we can’t observe the situation as it is, only as it was, as I explained in Fascinating Topological Limits: FTL in RPGs.
But “First Contact” solves this problem in a reasonably simple manner – the radiation is so great in the Galactic core that there is no chance of life surviving there. So let’s rule the core out of bounds and only consider the spiral arms.
Right away, we run into a problem that few astronomers ever seem to mention – part of those arms lie within the “light shadow” of the core, so we can’t see them completely, either. On top of that, there are all sorts of stellar phenomena – dust clouds, stellar nurseries, and what-have-you – in, and in-between, the spiral arms – and one arm can get in the way of our seeing another.

Image by User:Rursus – a redevelopment of Image:Milky Way Arms-Hypothetical.png by User:YUL89YYZ and User:Ctachme, based on “The Pictorial Atlas of the Universe” by Kevin Krisciunas and Bill Yenne, page 145 (ISBN 1-85422-025-X). CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=2221433
This diagram from Wikipedia illustrates what we really know as opposed to what’s educated guesswork. The dashed areas are hidden from us, and extrapolated from what we DO know, with varying degrees of accuracy. Notice that the galactic core throws a huge “shadow”, blocking direct observation of a huge wedge of the galaxy.
Our best guess is that, for this purpose, the arms contain 2/3 to 3/4 of the stars in the Milky Way.
Which brings us back to that more-than-somewhat-rubbery guess as to how many stars are in the milky way. The fact is that every time there has been a revision to the number of stars over the last century, it’s been upward, so a higher figure seems more probable than a lower one. If we apply the smaller fraction to the larger estimate, and the larger fraction to the “real” estimate, and then average the results, it may be hoped that some of our errors will cancel out and at least give a workable number.
The result: 227,083,333,333. Call it 220,000 million for convenience.
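For what it’s worth, here is that averaging step spelled out – assuming, as described above, that the smaller arm-fraction gets paired with the larger star-count estimate and vice versa, in the hope that the errors cancel:

```python
# Star-count estimates for the Milky Way (the figures used in this article):
real_estimate, high_estimate = 250e9, 400e9
# Fraction of stars in the spiral arms: smaller fraction with the larger
# estimate, larger fraction with the "real" estimate, then average the two.
avg = (2/3 * high_estimate + 3/4 * real_estimate) / 2
print(f"{avg:,.0f}")  # 227,083,333,333
```

Close enough to 220,000 million that the rounding-off costs nothing.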
A lot of these stars are going to be too young to have planets, or are too energetic for life – it’s the same thing, so far as we’re concerned. Many of them will be too cold. A lot of early SETI work restricted the range to reasonably sun-like stars, and also excluded binary and triple-star (‘trinary’) systems, because the then-prevailing theory was that planets couldn’t or wouldn’t form in such systems. This excludes about 98% of stars.
That was back in the days before we found ways of actually detecting exoplanets. These days, it’s routinely assumed that the percentage of stars which have planets is close to 100%. But that doesn’t address the temperature concerns.
If we look around our solar system, though, we quickly find that those concerns are somewhat overblown. There are three factors that aren’t being taken into account.
- Planet-sized moons close to giant planets can generate internal temperature through tectonic action.
- Our profiles of life are based on what we know of chemistry, which is evolving all the time. I was taught, for example, that there were four states of matter. A fifth, super-cooled, was later added. Now there’s a sixth. So our knowledge of chemistry, which currently defines only two possible chemical “profiles” for life, is still evolving.
- It’s already well-known that atmospheric pressure changes melting and boiling points, anyway – so the potential ranges of planets on which life might form is far larger than is often thought.
So I’m putting brown and red dwarfs back on the list of potential life-bearing sites. They aren’t supposed to be excluded at this point, anyway, according to the Drake Equation. That, in turn, puts 80-90% of the previously-excluded solar systems back into contention. Let’s call it 85% – and throw in that 2% that everyone agrees on.
That means that N* should be 191,400,000,000.
Fraction With Planets
The original estimate was that 1/5 to 1/2 of the stars in the galaxy would have planets. Actual surveys, and the number and variety of stars that have been found to have exoplanets, have exploded that estimate. This is now considered something close to 100%, as I mentioned earlier.
What has to be remembered is that in order for us to detect an exoplanet, the plane of its orbit has to put the planet in between us and its star at some point – or it has to be so massive that we can detect the gravitational “wobble” that it produces. That means that, unless there’s some cosmic principle that we haven’t yet figured out, the plane of a system’s planetary orbits will align with the rotation of its parent star – an orientation that is, as far as we know, pretty much random relative to Earth – so one complete dimension of detection is almost completely ruled out.

This shows three possible planes of planetary rotation for an exoplanet. If either of the green options prevails, we can find the planet by the occlusion of the star’s light. If the orbit of the planet never puts it between us and its star, we can’t.
That’s perhaps as much as 1° out of 360° – which would mean that we’re only finding 1/360th of the exoplanets that are out there for us to find. It could be even less. And yet, as of 1 January 2019, there are 3,946 confirmed planets in 2,945 systems, the most distant of which is 2,540 light years away (an unconfirmed exoplanet is claimed for another star more than 5,000 light years removed from us), and we are finding suggestive hints of exoplanets in the Andromeda Galaxy and in quasar RX J1131-1231, 3.8 billion (there’s that word again!) light-years from earth.
Of course, the farther away a star is, the harder it is to detect anything. But there are almost certainly as many planets out there as there are stars, if not a substantial multiple of that number.
Our Onion remains 191,400,000,000.
Planets That Can Potentially Support Life (per star that has planets)
The original SETI conference worked with an estimate of 1-5 planets that can potentially support life, on average, per star with planets. There have been a number of attempts to reduce this over the years. Astronomical surveys have suggested that the correct value is 0.4 – using very pessimistic and earth-like definitions of life. There have been suggestions that the correct number is 0.1 times the average number of planets in a solar system – which, while interesting, begs the actual question. This is a number in heated debate at the moment, as cosmologists try to understand what Hot Jupiters do to the process of planetary formation and stability.
Proponents of SETI continue to set a lower bound of 3-5 on this number, pointing out that our Solar System has five.
The more planets we find, the more likely it is that there will be more planets to find, and that some of them will be rocky, small enough, and within one of the habitable zones. But even without that, the factors pointed out above mean that those are not the only planets that could potentially support life.
Trappist-1, for example, is an ultra-cool red dwarf 39.6 light-years from Earth. It is known to possess 7 planets, 3 of them in the liquid-water temperate zone. The other 4 are also considered potentially habitable, as they all possess liquid water somewhere on their surfaces.
A huge number of astronomers and lay-people assume that the conditions have to be earth-like to support life. Yes, this is the only model that we have that we know works, but that’s not enough to say that it’s the only one that can be – and even if it is, the point made earlier about pressures still applies; it isn’t necessary that the environment be all that earth-like for it to happen.
Putting all this together, I’m inclined to set the minimum value at something like 2.5 – One for the environmental conditions that we know work, and 3/4 each for the variations that we suspect work but aren’t sure of.
If this work were to be scientifically rigorous, though, what should happen is that star populations get subdivided by spectral class, enabling each set of conditions to be independently assessed. It might be, for example, that conditions within the habitable zone of a red dwarf supply so much less energy that life – the next factor – is considerably rarer on such planets. Whilst things remain lumped together, geocentrism perpetually invades thinking on the subject.
Anyway, that lifts our Onion to 478,500,000,000 – a strange onion, this, where an interior layer can be larger than the one that surrounds it!
The Incidence Of Life
This is thought to either be very close to 0 or very close to 1, depending on who you ask. We have no data other than that of earth.
So let’s try and formulate some.
It’s now well-known that if you stuff the atmosphere of primeval earth in a bottle and run an electric current through it, or expose it to sunlight, or do any of half a dozen other things to it, you end up with the building blocks of amino acids after a while. If conditions are right and we persist long enough in waiting for it to happen, some of those are going to find the right configuration at the right time to form actual amino acids.
Again, if we take amino acids and provide food and energy and mobility and enough time, the probability approaches certainty that eventually something that is simpler than a bacterium, but is nevertheless life, will emerge. We’re mixing billions of amino acid molecules together billions of times for billions of years – even a small chance eventually becomes near-certainty, so long as conditions are remotely hospitable to the chemistry.
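That “small chance times billions of trials” intuition is easy to check. The probability of at least one success in n independent trials with per-trial chance p is 1 − (1 − p)^n; here’s a sketch with a deliberately absurd one-in-a-billion chance per trial (the numbers are illustrative, not a model of real chemistry):

```python
# Probability of at least one success in n independent trials:
p = 1e-9  # one-in-a-billion chance per "mixing event" (illustrative)
for n in (1e9, 1e10, 1e11):
    prob = 1 - (1 - p) ** n
    print(f"{n:.0e} trials -> {prob:.5f}")
# 1e+09 trials -> ~0.63; 1e+10 -> ~0.99995; 1e+11 -> effectively certain
```

Even a chance that small becomes near-certainty long before you run out of billions.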
And we have already defined the conditions under consideration as being hospitable to life.
The uncertainty remaining is one of “What is enough time”? Here once again we become enveloped in anthropic bias. We have only the one example to look at.
Or do we?
Organic molecules have been found on Ceres, the largest of the asteroids, and in fact these may be more prevalent than first thought. [Scientists] from NASA and the University Of Chicago simulated the movements of 5,000 ice grains like those in the asteroid belt prior to the formation of Earth to over a million years in the turbulence of the solar nebula, which tossed them about like laundry in a dryer, lofting some “high enough [so] that they were being irradiated directly by the young Sun.” High-energy ultraviolet radiation broke molecular bonds, creating highly reactive atoms that were prone to recombine and form more stable – and sometimes, more complex – compounds.
(Excerpted from The Building Blocks of Life May Have Come From Outer Space.)
This is obviously a far more challenging environment for life than a primitive planet earth was – but even so, the first part of the process was achieved. If that environment had persisted, or those molecules found their way to a more hospitable environment, they would have had every chance of developing into full-blown life.
That’s a second data point but one that leaves us delicately poised, because they didn’t actually become life. We have one example that says yes, and another that says “maybe”. So let’s apply a relatively conservative factor of 0.75 for this factor.
Our Onion shrinks to 358,875,000,000.
Intelligence
Here we again need to apply a caveat. By “Intelligence”, we’re talking about tool-using and/or abstract reasoning – i.e. an intelligence that we can communicate with. Dolphins and whales are intelligent enough that we haven’t yet learned to speak their language – but they don’t seem to use that intellect for anything important from our abstract-reasoning, tool-using perspective. The octopus displays incredibly sophisticated problem-solving capability, but it’s even further removed from what we consider intelligent.
This is even more controversial than preceding factors. The original SETI discussion set this to 100%, defining intelligence as inevitable. Those supporting this position employ similar logic to that used in the preceding sections. Others argue that the presence of “only one intelligent species” means that it happens very rarely, and this factor should be extremely low.
I’m personally inclined to have a foot in both camps on this issue. Yes, the chances are low and require the correct stimulation and opportunities, but there are so many opportunities for it to happen that the most pessimistic options seem improbable. If you compare the length of time that life has existed on earth with the length of time that higher life forms have existed, you get a ratio of 0.084-0.141 (depending on which estimate you use of when life appeared and how you define higher life); the midpoint is 0.1125 – a long way removed from 1 (inevitable), but a lot higher than some of the cynics have suggested (1 in a million, or a billion).
If we use this admittedly geocentric value, our onion drops to “only” 40,373,437,500.
On the other hand, if we use early man as our yardstick of higher life forms and intelligent life, we get a much smaller number – 0.00276.
That drops the onion to 990,495,000.
But I’m going to use still another value, by defining intelligence as the capacity to send and receive radio signals – 123 years ago. That’s a ratio of 0.00000002674, and re-skins the onion down to just 9596. Call it 10,000 for convenience.
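Pulling the whole onion together in one place – all of these are, of course, this article’s working estimates from the preceding sections, not established values:

```python
# The "onion" so far, using this article's working values:
stars    = 220_000e6              # workable Milky Way star count
n_star   = stars * (0.85 + 0.02)  # restored dwarf systems plus the agreed 2%
per_star = 2.5                    # potentially life-supporting planets per system
f_life   = 0.75                   # fraction on which life actually arises
f_radio  = 123 / 4.6e9            # fraction of Earth's history with radio tech

onion = n_star * per_star * f_life * f_radio
print(f"{n_star:,.0f}")  # 191,400,000,000 eligible stars
print(round(onion))      # roughly 9,600 radio-capable civilizations
```

Change any one input and the final count swings wildly – which is rather the point of the sections that follow.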
Civilization
This has the advantage of setting the next value at something close to one, which is otherwise another controversial value. Once again, I’m hoping that some of our errors cancel out.
So we have 10,000 civilizations out there happily broadcasting radio waves – at some point in time.
Not so fast, cowboy! There are a lot of fudge-factors in the above. The number of stars alone could drop this to a couple of thousand. A more conservative estimate would be 1,000. A really conservative estimate would be 200.
Lifetime
Which brings me to the factor that most disturbs and annoys me. Like everything else has been, it SHOULD be a factor – that is to say, a fraction that is yes and a fraction that is no, leaving only the ‘yes’. For me, this is where the Drake Equation breaks down.
Fraction of civilizations that don’t blow themselves up? Okay, that’s a start. Fraction of civilizations that don’t get wiped out by some cosmic calamity or an asteroid strike or whatever? Yeah, that’s a factor to think about.
“How long a civilization lasts” seems totally counter-intuitive in this context.
Replacing this with “fraction of a radio-capable civilization’s lifetime that they are actually broadcasting” gets us somewhere interesting.
Everything else has been about the number of civilizations out there that we might be able to detect. This is all about trying to say “the percentage of those civilizations that we can actually detect” – but it’s usually not described that way.
Two ways to detect E.T.
There are two ways that we can detect an alien civilization through their manipulation of the electromagnetic spectrum – the first is listening for a message they have sent us, and the second is detecting their radio ‘noise’.
Distances Between Civilizations
Before we can reasonably analyze either of them, though, we need to get some impression for the average distance between these civilizations.
The milky way is roughly 150,000-200,000 light years in diameter, giving it a radius of 75,000-100,000 light years. But most of that is outlying material; in terms of the parts we’re interested in, it’s about 100,000 light-years across and about 1,000 light-years thick. But that thickness is the average for the whole thing, and the core noticeably bulges, to about three times the thickness of the arms. We also need to exclude that core from our calculation of the plan area of the disk if we hope to get a volume. Looking at the galactic cross-section, the core is about 1/5th of the total diameter across – so about 20,000 light-years.
When I do that, I get an average thickness of the disk section of 926 light years, and a toroidal area of 2,400 million pi – so the arms contain roughly 7 million million cubic light years.
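Here’s that geometry worked through, using the rough figures above (3× core thickness, core diameter 1/5th of the disk):

```python
from math import pi

# All distances in light-years:
disk_d, core_d = 100_000, 20_000       # disk and core diameters
avg_thickness, core_factor = 1_000, 3  # core bulges to ~3x arm thickness

area_total = pi * (disk_d / 2) ** 2
area_core  = pi * (core_d / 2) ** 2
area_arms  = area_total - area_core    # the "2,400 million pi" toroid

# Arm thickness t such that the area-weighted average thickness is 1,000 ly:
t = avg_thickness * area_total / (area_arms + core_factor * area_core)
volume_arms = area_arms * t
print(round(t))              # 926
print(f"{volume_arms:.1e}")  # ~7e+12 cubic light-years
```

So the arms work out to roughly 7 million million cubic light years, as stated.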
That means that each of those 10,000 civilizations (if there are that many) would have roughly 700,000,000 cubic light years each, on average. If you imagine two cubic bricks, corner to corner or side-by-side, each with a civilization at its center, you get an impression of the arrangement. Exactly side-by-side gives the minimum distance between them, corner-to-corner gives the maximum, and half-and-half gives a rough value for the typical distance. 700 million cubic light-years is a cube about 888 light years on a side.
The minimum: half of 888 from #1’s brick and half from #2’s adds up to 888 light-years – no surprise there.
The corner-to-corner distance is the full diagonal of the cube – √3 times the side – which works out to 1538 light years. The in-between cases are a lot harder to work out in timely fashion, but the average of those two numbers isn’t far off: 1213 light years. So I’d say a little over 1200 light years would be about right.

1200 light-years away? They’re right next door! Why haven’t we heard anything?
Not so fast, cowboy! At only 1,000 such civilizations, each would have 7000 million cubic light-years of space. That’s a minimum of 1913 light years, a maximum of 3313 light years, and an average of 2613 light-years.
At only 200, each would have 35 thousand million cubic light-years to play in. That’s a minimum of 3271 light-years, a maximum of 5666 light-years, and an average of 4470, near enough.
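The cube arithmetic for all three population counts can be sketched like so – noting that the center-to-center distance of two cubes meeting at a corner is √3 times the side:

```python
# Distances between neighbors, one civilization per cube (light-years):
def spacing(volume_each):
    side = volume_each ** (1 / 3)  # side-by-side: minimum distance
    corner = side * 3 ** 0.5       # corner-to-corner: full cube diagonal, maximum
    return side, corner, (side + corner) / 2

galaxy_volume = 7e12  # cubic light-years in the spiral arms
for civs in (10_000, 1_000, 200):
    lo, hi, typical = spacing(galaxy_volume / civs)
    print(f"{civs}: {lo:.0f} to {hi:.0f}, typically ~{typical:.0f}")
```

The spacing only grows as the cube root of the volume per civilization, which is why a fifty-fold drop in population doesn’t even quadruple the distances.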
Of course, this being a statistical result, anything up to three, four, or even five times these numbers is absolutely reasonable – if others are correspondingly closer, to bring the averages down. Even ten times is plausible, but wins us the galactic loner-for-life tag. That gives us potentially 6,000 light years for 10,000 civilizations, 13,000 light-years for 1,000 civilizations, and 22,000 light years for 200 civilizations.
Those numbers are significant. If a civilization 6,000 light years from us invented radio at the same time we did, we’ll detect it – 5,877 years from now, at best! In the year 3858 BC, an equal span of years away, the Naqada culture was ruling in Egypt, writing had yet to be invented anywhere on Earth, and – according to the traditional chronology of the Hebrew Bible – Adam himself was still alive.
The Communications Window
That’s all a bit awkward if the average lifespan of a civilization is, say, 300 years. It means that our 300 years has to match up to their 300 years, less the distance between us. If the nearest is only 50 light-years away, we could have 250 years of productive conversation before time ran out for one of us. Maybe 5 messages back and forth. If 100 light-years, that window is down to 200 years, and we’re only likely to get 2 messages exchanged. At 1000 light-years, they had better have had radio in the time of Ethelred The Unready. And at 6,000 light-years, they would need to have been capable of showing Methuselah a trick or two.
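That message arithmetic can be sketched as follows – assuming both civilizations last 300 years, start broadcasting simultaneously, and each message takes one light-travel time:

```python
def messages(distance_ly, lifespan=300):
    """One-way messages exchangeable before one side's time runs out."""
    window = lifespan - distance_ly  # conversation years left after first contact
    return int(window // distance_ly) if window > 0 else 0

print(messages(50))    # 5 messages back and forth
print(messages(100))   # 2
print(messages(1000))  # 0 - they needed radio before we did
```

The window shrinks twice as fast as the distance grows, because every reply has to make the same trip back.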
Based on the duration of 60 earthly civilizations, the average lifespan of a civilization has been calculated as 420 years. Based on 28 that are more recent than the roman empire, the average falls to 304 years – determined by the same scientist. Food for thought!
The farther away a civilization is, the longer our civilization needs to last if we are to have contact with them.
That brings us back to our two methods of contact, having gained some feeling for what the distances could be – and hence, the times. The first is to detect a signal deliberately sent out by them, and the second is to detect their byproduct electronic noise.
SETI, quite frankly, pins all its hopes on the first. And on them doing all the heavy lifting, too.
Sending A Message
We’ve never sent a radio message to a nearby star. What makes us think that an alien civilization would send us a signal? Especially before there was any way to know if there was intelligent life here?
But let’s set that aside, and assume that they aren’t like us in this respect.

Based on an image at https://www.seti.org/
When you look at radio noise by frequency – noise from which you want your signal to stand out, assuming you’re sending one – there’s a rapidly-descending wall on the left, caused by electrons in the milky way’s magnetic field, and a series of peaks and valleys on the right, caused by the different molecules in Earth’s atmosphere. The result is a noise “trough” from 1 GHz to 10 GHz.
For a very long time, then – almost as long as we’ve had radio astronomy – this “trough” has been targeted as a likely set of frequencies for interstellar communications. Personally, I’m not 100% sold on that, based on my once being told that this quiet zone was due to absorption of the noise by hydrogen clouds in space – if that’s the case, then this might be the last frequency band you’d choose – but I’m not 100% sure the information I was given was correct, either.
But even so, there’s a problem, and a huge one: Doppler Shift.
Solar systems and the like barrel through space at a fair old rate of knots. If they happen to be coming toward us, every frequency is blue-shifted, moved up the frequency band. If they happen to be moving away, there’s a red shift.
Our sun, for example, is moving at around 43,000 miles per hour in the direction of Vega, and this speed is not in any way unusual. That means that anything under 86,000 mph closing speed and 86,000 mph receding speed is quite believable – two stars that just happen to be moving more or less straight toward, or straight away from, each other.
Now, in terms of the speed of light, those speeds aren’t all that spectacular. The difference is 0.013%, either plus-or-minus – but that can be enough to throw it out of the detection band. Because it would mean recalculating the correct frequency for the motion of every star observed, SETI relies on the aliens sending the message to have adjusted the frequency of their transmission to allow for Doppler effects.
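To put a number on it: the fractional Doppler shift is just v/c. Applied to the 1.42 GHz hydrogen line – a favorite candidate frequency inside that quiet band – the closing speeds above look like this:

```python
# Doppler shift at the closing speeds discussed above:
c_mph = 670_616_629  # speed of light, miles per hour
v_mph = 86_000       # two suns moving more or less straight toward each other
f_hz = 1.420e9       # 21-cm hydrogen line, inside the 1-10 GHz trough

fraction = v_mph / c_mph
shift_khz = f_hz * fraction / 1e3
print(f"{fraction:.3%}")       # ~0.013% of the transmitted frequency
print(f"{shift_khz:.0f} kHz")  # ~182 kHz off the expected frequency
```

A narrowband search that isn’t looking 182 kHz to either side of the “magic” frequency sails right past the signal.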
Why are we so special that they would do that? Unless they had already determined that there was intelligent life here – despite our deliberate policy of not telling anyone?
And then another shoe drops – if they send a message using FM, SETI won’t receive or understand it. All its efforts are bent toward constant-frequency transmissions – that’s AM or digital. It leaves FM to the hobbyist, mainly because it’s difficult and relatively expensive, and SETI has always had to be done on a shoestring.
Incidental Transmissions
When the layperson thinks about SETI, even the relatively educated one, they think about picking up the radio “noise” that’s leaking out from the planet. We use so much electromagnetic communications, more every day. And that stuff leaks, despite the best efforts of engineers to contain it.
They want to keep signals confined to the purposes for which they were transmitted because anything else is wasted power, and wasted bandwidth.
One of the earliest TV signals to be broadcast was Adolf Hitler opening the 1936 Olympics in Berlin. That signal is now arriving at any star that happens to be a little over 80 light years away.
Throughout the 20th century, our radio noise increased in intensity. It has since started to either stabilize or decline, as more signals are carried digitally through optical fiber, or more precisely aimed using dish antennas. That’s how we can still be in contact with Voyager 1 even though it is now in interstellar space, the most distant man-made object at 13.2 billion miles away.
So, if we were to use our best (at the time) radio telescope equipment, and point it at a star (and a planet) that (distance) years ago was emitting just as much radio noise as we were (at the time), from what distance do you think we could detect enough of the signal to recognize it as having an intelligent origin?

800 light years vs 6,000 light years. The 800 is more than a speck, but that’s about all you can say for it.
Eight hundred light years.
At best.
Let’s go back to the scale of the milky way again for a moment. The measurements shown on the diagram to the right are in thousands of light-years. We’re talking a bubble that’s less than ONE thousand light-years.
Ah, but our equipment has no doubt improved vastly since then. Unfortunately, we’re up against the inverse-square law: to double the 800 to 1600, we need a four-fold improvement in sensitivity – against a background that is increasingly hostile to radio astronomy, which is a whole other story I don’t have time to go into. An eight-fold increase would push the range out to roughly 2,260 light-years.
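The inverse-square arithmetic, for the record – detection range grows only with the square root of any sensitivity improvement:

```python
# Detection range vs receiver sensitivity (inverse-square law):
def new_range(base_range_ly, sensitivity_gain):
    return base_range_ly * sensitivity_gain ** 0.5

print(new_range(800, 4))           # 1600.0 light-years
print(f"{new_range(800, 8):.0f}")  # ~2263 light-years
```

So every doubling of range costs a quadrupling of sensitivity – brutal economics for a shoestring program.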
But our general broadcast emissions are dropping off as our technology improves. There was a period of peak noticeability, and now we’ve started to fade, from a radio signal point of view. By now, that eight-fold increase won’t cut it – we would need a 10- or 12-fold improvement just to stand still. The signals are so much weaker, with so little “spill”.
Which brings us back to the question of a receptivity “window”. We’ve had radio for 123 years. We’ve had TV for 80-odd years. We’ve had STRONG TV signals for maybe 60 – but we started “the great fade” about 30 years ago. That window, for a civilization right on the edge of reception, was only about thirty years.
The Panic Merchants
I started thinking about writing this article somewhere around August-September last year, when I spent a couple of very interesting days reading articles on “where are all the ETs?”
You see, the longer the period of time that passes without our detecting someone, the more extraordinary it starts to look. As a result, there has been a great deal of thought lately that’s gone into the question of why we aren’t finding them, if there’s anyone out there. Some of the analyses and speculations were absolutely fascinating, and frequently cause for considerable alarm – if the SETI enthusiasts’ estimates of 100,000+ alien civilizations within the milky way are to be believed. Others were more benign in nature.
I’d love to point you at the discussions, but I’m no longer sure of the URLs.
But, I think that the factors that I’ve pointed out in this article go a long way toward explaining the radio silence. And to those, I can add one more.
The Principle of Mediocrity states that there’s nothing exceptional about where we are in terms of the physics and chemistry; the natural laws that apply here also apply out there. That has often been interpreted as meaning that if there are 500 civilizations out there, roughly 250 will be younger than ours and roughly 250 will be older.
I submit that this application is nonsense. When you toss a coin for the 250th time, there’s still a 50-50 chance that it will come up heads. When you roll a die for the 250th time, it’s still just as likely to come up 1 as it is 6 (wear notwithstanding). Someone has to be first, and assuming that it’s not us is assuming that there is some reason why we can’t be first – and that is itself a violation of the Principle of Mediocrity.
Statistically, the odds are that we aren’t first – but someone wins the lottery.
But, let’s postulate that we aren’t first, we’re in fact third. One of the others is on the far side of the galaxy from us and we’ll probably never even know they were there. The other one is a mere 30,000 light years or so from us, so far beyond our ability to detect them (and vice-versa) that we may as well not exist – each from the point of view of the other.
The Chance Of Making Contact
Let’s get back to the Drake Equation. What I think should replace the last term in it is The chance of making contact. This is a value that accumulates, year on year. You can state it as the average chance of success in any given year multiplied by the lifetime of the civilization doing the listening/sending. The units work out – we have years, and we have per-year.
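The proposed substitution can be sketched in a few lines of code. This is purely illustrative – the function name and the sample values below are my own assumptions, not an established formulation:

```python
# A minimal sketch of the proposed substitution: replace the Drake
# Equation's final term L with (average chance of contact per year
# x years spent listening). Sample values are illustrative only.

def drake_modified(r_star, f_p, n_e, f_l, f_i, f_c,
                   p_contact_per_year, years_listening):
    """Expected number of contactable civilizations when L is replaced
    by an accumulating per-year chance of contact."""
    return (r_star * f_p * n_e * f_l * f_i * f_c
            * p_contact_per_year * years_listening)

# With every fraction set to 1 for simplicity, a 0.1%-per-year chance
# accumulated over 100 years of listening yields an expectation of 0.1.
expected = drake_modified(1, 1, 1, 1, 1, 1, 0.001, 100)
print(expected)
```

Note the units: the per-year chance and the years of listening multiply out to a dimensionless probability, exactly as L (years) did against R* (per year).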
But what the preceding discussion makes clear is that the chances are NOT very good. They improve vastly if we have nosy neighbors who stop in (metaphorically) to say “hi” and welcome us to the neighborhood and take a good long look at the drapes while they’re here. We aren’t that type.
We show up, and draw all the blinds, and then spend all our time peeking out through the corner of the window to see if anyone’s watching us. We’re the paranoid nutters and ax-murderers of the neighborhood, the ones always described as “very quiet, kept to himself”, the survivalist convinced that the end is nigh. At best, we’re the neighborhood cat lady.
Maybe if we were more welcoming, we would be more welcome.
But, setting that aside, what actually are our chances of making contact, defined as “hearing a signal that may or may not have been meant for us?”
Well, how do SETI searches usually work?
For a period of time, we take a good close look at one particular group of targets. For perhaps a few hours, we’re paying attention to a single possible target – and then we have to move on to the next. When it comes time to plan the next search, the SETI community are spread so thinly that the main objective is not to waste time on redundancy; “Someone checked Beta Hydri last month / last year / a couple of years ago. Nothing. We’re better off looking at a star they DIDN’T examine.”
But the signals are almost certainly so weak that unless they were deliberately signaling us, we have a very good chance of not picking up anything. And it’s no good for them to be signaling us NOW – if they are 200 light-years away, they needed to be signaling us 200 years ago, when there was virtually no chance of them knowing there was anyone here to hear them.
It follows that the chances of a message being sent our way increase with every passing year. But if we don’t happen to be looking in the right direction in the right way at the right time, we will never know the phone was ringing.
That all means that the chance of making contact is still rising – not because of anything we’re doing that’s all that much more than we were already doing, but because the wave front of those strong TV signals is still out there, expanding, and so is the aliens’ wave front of strong TV (“Buy Grimklakk’s Chelating Cream for a smoother finish!”) heading our way. The odds of us being dead-center in the middle of the range of current civilizations are just as great as the odds of us being the first – and either way, we’ve been listening for a while now.
Once a decade is the maximum frequency with which any given star can be assured of being checked, on average – there are some that are conveniently located and are checked more frequently, and some that are not, and which are checked more rarely. And when they are checked, we’re talking a few hours of observation, at best.
So that’s 2 hours every decade. That would probably be enough to pick up an incidental leakage from someone that was close enough – but for anyone outside a 1,000 light-year bubble, we’re reliant on them sending a message at the exact right time. So we have two chances to calculate, one for each detection method, and the total gives us our chance of detection.
Leakage
Peak signal period is an estimated 30 years. We’ve been listening for about 40. We’ve had radio for 123. Our detection methods are good for 1,000 light years.
1000 light years, as a sphere, has a volume of 4/3 pi r cubed – call it 4,200,000,000 cubic light years. The Milky Way is 7 million million cubic light years. So the ratio is 0.0006 – per decade. SETI started back in 1980, or close enough to that – so we’re coming up on 40 years of it, i.e. 4 decades. Our accumulated chance of detection from leakage is 0.0024, or 0.24%.
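The leakage estimate is easy to reproduce. A short sketch, using the article’s own round numbers for the bubble radius and galactic volume:

```python
from math import pi

# Fraction of the Milky Way's volume inside a 1,000 light-year
# detection bubble, accumulated per decade of searching.
# Both volumes are the article's own round figures.

bubble_volume = (4 / 3) * pi * 1_000 ** 3   # ~4.19e9 cubic light-years
galaxy_volume = 7e12                        # cubic light-years

fraction_per_decade = bubble_volume / galaxy_volume  # ~0.0006
decades_of_seti = 4

leakage_chance = fraction_per_decade * decades_of_seti
print(round(leakage_chance, 4))  # 0.0024, i.e. 0.24%
```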
Signal
We have to assume that there’s a signal sent to be received. 70% or so of the galaxy is open to our radio telescopes, so I’m going to assume that we can potentially pick up 70% of the civilizations that are out there, however many there are. At least, we could, if it weren’t for that pesky speed of light limitation.
We’ve been listening for about 40 years. So, assuming there’s a signal to detect, we can detect it today if it comes from 40 light years away and was sent when we started listening. 40 light years is trivial, a minuscule speck – if I tried putting a 40-light-year bubble on the 800-vs-5500 diagram above, it would have been about 2.2 pixels across!
But things are not quite so dire. We don’t know L. L might be 1000 years, once a civilization becomes technologically advanced. So if the people who were first got radio 1000 years ago, their signals could be picked up tomorrow.
Heck, L might be five thousand five hundred, for all we know. It seems improbable, but not implausible.
But let’s say that the pessimists are right, and societies only last for a few hundred years. That 300-year block of time is an expanding hollow sphere centered on where the civilization sending it was located when the signals were sent. It’s a message in a bottle. There might even be a hundred thousand of them, as the Police song’s lyrics suggest. If they started sending radio signals 500 years ago, the outer edge of the sphere is 500 light-years out at this point, and the last goodbye is 200 light-years out.
The moving window is moving in perpetuity. So the correct fraction is N x 10 x 2 / L, divided by the number of hours in a year – that’s N civilizations, 10 years per decade, 2 hours of observation per star per decade, and a signal lifetime of L years. At an L of 300, that gives us 0.0000076 per civilization per decade – so, at 10,000 civilizations, we get 0.076 per decade. And, as I said, we’ve been looking for 4 decades – so 0.304, or 30.4%. But, before anyone starts doing high-fives, there’s a catch.
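Spelled out in code, the per-civilization fraction works out like this (a sketch using the article’s own numbers – 2 observation hours per star per decade and a 300-year signal lifetime):

```python
# The signal-window fraction: (10 years per decade x 2 hours of
# observation per star per decade) / (signal lifetime L in years x
# hours in a year). All figures are the article's own.

HOURS_PER_YEAR = 8766  # 365.25 days x 24 hours

def chance_per_civilization_per_decade(obs_hours=2, years_per_decade=10,
                                       signal_lifetime_years=300):
    return (years_per_decade * obs_hours
            / signal_lifetime_years / HOURS_PER_YEAR)

per_civ = chance_per_civilization_per_decade()  # ~0.0000076
per_decade = per_civ * 10_000                   # ~0.076 for 10,000 civs
four_decades = per_decade * 4                   # ~0.304, i.e. 30.4%
print(per_civ, four_decades)
```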
That’s assuming that there’s a deliberate signal being sent specifically to us. We need to factor the likelihood of that happening as well. There are roughly 512 G-type stars within about 100 light years of earth. Let’s assume a similar number of F and K stars – the three stellar types considered most likely to have earth-like planets and hence earth-like life.
That gives 1536 stars within 100 light-years. If we were to signal another star without knowing there was intelligent life there, we probably wouldn’t go much further out than that. If we spent 40 years signaling – a number very deliberately chosen – how long would each of those 1536 stars get? 365.25 x 40 = 14610 days. Divide that by 1536, and you get 9.5 days each.
And, as already pointed out, if we aren’t listening for the ten days that are our turn? Bad luck.
We’ve spent 40 years listening, but we spread ourselves a little thinner – about 10,000 stars have been examined at least once in that 40 years. 365.25 x 40 / 10,000 = 1.461 days.
So the final calculation is 30.4% x 9.5 / 14610 x 1.461 / 14610 = roughly 0.000002 – per cent. Or 0.00000002.
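Combining the factors in code makes the scale of that number plain. A sketch, again using the article’s own figures (the 30.4% window chance, our 9.5-day share of a hypothetical sender’s 40-year signaling sweep, and our 1.461-day average look at each star):

```python
# Chance of catching a deliberate signal: the window chance, times the
# fraction of a 40-year sweep during which the sender points at us,
# times the fraction during which we point at them.

days_in_40_years = 365.25 * 40        # 14,610 days
sender_points_at_us = 9.5 / days_in_40_years
we_point_at_them = 1.461 / days_in_40_years

signal_chance = 0.304 * sender_points_at_us * we_point_at_them
print(signal_chance)  # ~2e-8, utterly negligible next to the 0.0024 leakage term
```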
The total
0.0024 + 0.00000002 is, near enough, 0.0024. If we multiply 10,000 civilizations by that fraction, we get about 24.
We could have detected as many as 24 civilizations by now!
But, if we multiply 1,000 civilizations by that fraction, we get about 2.4.
We could have detected as many as … two?
And, if we multiply 200 civilizations by that fraction, we get about 0.48 – get back to me in about 50 years…
…all of which completely ignores the possibility that we are a lot closer to being the first of the 10,000 or 1,000 or 200 to invent radio.
But that’s just an assumption – we’re equally likely to be at the tail end of the queue to get into the party.
But there’s still another caveat. As you can see, the chances of accidental detection of leakage are WAY higher than they are from a deliberate message – there are just too many stars in the sky for blind chance to bring two strangers together. But here’s the thing: most radio astronomy surveys are not routinely analyzed for ETI signals, or at least, they weren’t. So we may very well have picked up those 24 signals from 24 neighboring alien races – and not noticed any of them.
We can state with confidence that 40 years ago, no-one in the 100 closest star systems had radio technology. Beyond that, the speed of light kills the chances of a message having been received.
None of which means that one couldn’t be detected tomorrow. The odds are just as good as today’s, if not infinitesimally better. But in looking for deliberate signals, the SETI community are looking for the black cat in the cellar at midnight. If they keep it up long enough, and the black cat is really there to be found, they could succeed. But the odds are slim.
There’s certainly no need to go all Chicken Little – not yet. There’s ample reason for us to have failed to find extraterrestrial intelligences, and no reason to think they aren’t out there waiting to be found. But if the nearest one really is 5500 light-years away, we might have to look for a LONG time.
January 29th, 2019 at 12:15 pm
Good article, matey. Some thoughts of mine.
First, I am reminded of a bit from an SF novel (‘The Disinherited’ by James White, I think). At one point, Earth humans meet people from another star and, whilst talking, the subject of SETI and the Fermi Paradox arises. That Earth has been avidly searching the electromagnetic spectrum for ET signals is something the aliens find absolutely hilarious – they jokingly ask why not check for smoke signals as well?
The point being that radio/TV communication represented a relatively brief span in their history, the aliens having moved on to FTL communications. Which may or may not be plausible in reality, but you never know.
Second, the basic term “Earthlike” has become VERY rubber-y in recent years. Back around when the Drake Equation was first formulated, we had very specific notions of where life could and could not exist on Earth. Now, well, we’re finding life all over, especially in places previously deemed impossible. Life Finds A Way.
Third, we may be bombarded with signals from elsewhere, but are simply not equipped to understand them, or else there is something fundamental to the entire process that we are so far overlooking.
Best analogy I have is to imagine a basic telegraph station (plus operator) of the mid-18th century. Further imagine that, by some bizarre confluence of space-time, said telegraph station somehow taps into an active fibre-optic bundle. Question being, with his limited understanding, will the telegraph operator recognize whatever comes in on his limited equipment as an actual signal of some kind, or will he instead go yell for the repair guy to come fix the cable?
Just my thought, though of course we have no way of knowing or proving if this could be the case. My own belief, based on our scientific track record so far, is that we are probably UNDER-estimating rather than OVER-estimating about all this.
Finally noting that I am an avid Trekker and Fortean, which arguably gives me a certain inherent bias on the subject. :)
January 29th, 2019 at 2:18 pm
Thanks, Ian.
I’ve read a number of White’s novels, but that’s not one of them. Nevertheless I’m sure you’ve spoken of it from time to time when this topic has arisen in conversation and the principle lodged in the back of my mind.
Your analogy is interesting, but it only works at a metaphoric level. Fiber-optics are essentially glass strings that carry pulses of light; they need something at the end to convert that light into electrical signals and vice-versa. The bigger the fiber-optic cable, the more individual strands make up that cable, each carrying separate data (ignoring the likelihood of there being some redundancy built into the system). That’s quite different from an electrical cable, where the bigger it is, the more strands of copper it contains – but they are all sharing the same signal unless insulated from each other. Penetrating or puncturing a fiber-optic cable with an old copper cable wouldn’t produce any signals that the telegraph equipment could decipher, wouldn’t let the telegraph inject any signals into the fiber-optic bundle, and would immediately disrupt the ability of each strand so damaged to carry information – catastrophically.
The variables inherent within the Drake Equation are such that current ‘best estimates’ for their possible values yield anything from 39 billion civilizations to 10-to-the-minus-8 civilizations, i.e. we are the lone sentient species in the entire universe. The probable truth is somewhere in between, but that covers a very wide range. So I applied the best data and the best reasoning that I could to the subject, and think that my 10,000 is probably about right – and more than adequate under the circumstances to explain why we haven’t found one yet.
The number one thing I can think of that can be done, and needs to be done, to boost our chances is for all radio telescope data to be automatically run through a SETI-detection protocol – because, as my analysis shows, the far greater chance is that we’ll be looking at something else and spot something peculiar, than that they should send us a Doppler-corrected signal at the exact same moment that we happen to be listening, in a format that SETI is equipped to detect and comprehend.
January 29th, 2019 at 12:17 pm
Oops, should be ‘mid 19th century’ rather than ’18th century’ in my telegraph station analogy.
January 29th, 2019 at 2:02 pm
I thought so!
January 30th, 2019 at 1:21 pm
The telegraph station analogy is basically an analogy, nothing more. The central point being that differing mindsets, levels of knowledge, and methodologies applied can all make for a very big difference in what is or is not PERCEIVED as an “intelligent” signal.
Another thought is about distinguishing a “natural” signal from an “artificial” one. Some cite a regular cycle as being the obvious choice for at least getting attention, but there are things already out there generating regular signals that (as far as we know) are natural phenomena. IRregular signals are also common, needless to say, hence the difficulty. But something that was, say, pulsing sets of prime numbers or the value of pi … well, if a natural phenomenon is achieving THAT, it’s a bloody interesting one regardless. :)
Hmm. Interesting where lines of thought can take one. As you may recall, I have an interest in ww2 and in the huge role that cryptography (Enigma, Colossus, etc.) played in it. Cryptography, in its most fundamental form, comprises hiding information in what amounts to HUGE sets of numbers. In classic code-breaking, analysts wouldn’t be just scrutinising one message or set thereof, they’d be comparing as many as possible looking for commonality. I daresay there are parts of SETI using approaches like this, but if not maybe there should be. It may be a total waste of time, or it might give us the access codes to the Galactic Federation’s Chat Room. One never knows.
January 31st, 2019 at 2:42 am
Good point. I would also apply steganography principles. Who knows what may be lurking in the noise?
January 31st, 2019 at 12:01 pm
Indeed. Having some sort of puzzle / test inherent in an intentional signal would seem a smart approach – “screen” possible candidates in advance for intelligence and general know-how. A REALLY devious test might even reveal something of the respondent’s mindset and capabilities just by which parts of the signal they figure out and answer.