Infinity's Illusion, page 21
“Hekla . . . one of history’s great eruptions”
The Hekla-3 eruption, in about 1000 BCE, has been blamed for years or decades of climate disruption and famines as far afield as Egypt—though the connections with Hekla are disputed.
“We should not live as animals, and we can’t live as gods”
It’s Aristotle Morag is thinking of. The point he’s making in the Politics is that human beings are naturally social—creatures of community, we might say. Creatures that don’t need community are either less or more than human.
“May you live all the days of your life”
Jonathan Swift, Complete Collection of Genteel and Ingenious Conversation (1738). It’s a good slogan—but Swift, characteristically, makes its value ambiguous by putting it into the mouth of a character who appears to be chatting someone up while getting drunk.
SOMETHING TO THINK ABOUT
“There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.”
—Shakespeare, Hamlet
“We do not yet know what we have not yet discovered.”
—David Deutsch, The Beginning of Infinity
The Babel Trilogy is speculative fiction, and therefore it contains, well, fiction. But its central spooky puzzle—consciousness—is a real-world puzzle, and its main motivation is a real-world idea that is, or ought to be, pretty disturbing.
Again and again, history shows our ancestors assembling a “big picture” that explains what the world is like, who or what created it, and how we humans fit into the plan. Being able to tell such a story is comforting. But again and again it has been the work of time (and insufficiently respectful grandchildren) to show that—however settled and revered and insisted upon—the big picture is hopelessly mistaken. Our ancestors weren’t really living in the real world at all, but inside the bubble of an illusion they’d built. (“They believed things were like that? Really?”)
Almost certainly, the same is true of us.
First, consider this. Even a devoutly religious person in the twenty-first century understands that the vast majority of all past “supernatural theories” are a total bust. Who believes, now, that the onset of winter is caused by Demeter’s sorrow when her daughter Persephone is forced to return to the Underworld? No one, because no one now believes that Demeter or Persephone exist. For the same reason, people no longer explain their victory in battle by claiming that Odin came down and fought alongside them with his magical dwarf-made spear that never misses. Fewer and fewer people even believe, if they stop to think about it, that their team won the game last weekend, or their cancer went into remission last month, because God (looming benevolently but inexplicably over this hospital bed here and not that one right there) decided to lend His merciful hand.
The atheist Richard Dawkins puts the wider point with characteristic bluntness: “We are all atheists about most of the gods that humanity has ever believed in. Some of us just go one god further.” And it’s striking that, even among believers, central doctrines can be discarded quite rapidly. That sinners fill God with an angry rage, and He burns them for all eternity in a lake of hellfire, is not a fashionable theory about the afterlife these days, even in cultures where until recently it was believed (or professed, which isn’t quite the same) by almost everyone; the sort of God we’re prepared to justify, or excuse, or put up with, changes over time. And the fact that people have treated so many supernatural doctrines as central, only to abandon them, puts enormous pressure on existing religious or supernatural belief. Why does confidence in today’s doctrines make any more sense than the confidence we used to have in the doctrines that we’ve since discarded as baseless?
Thank goodness for the rise of reason and science, eh! But wait . . .
Every scientifically educated person understands that the majority of scientific conceptions of the world have also turned out to be wrong, or anyway fundamentally misleading. Medical science revolved around the four bodily “humors” for two thousand years, but humors don’t exist. Astronomers assured us that the sun revolved around a static Earth, against a background of “fixed” stars set, like metal studs, into rotating crystalline spheres. Down on the surface of the Earth, “each species has been the same since the Creation” was a bedrock belief among naturalists for centuries—and then it vanished like a puff of smoke after 1859, when Darwin showed how evolution was possible, created the modern discipline of explanatory biology, and reinvented our whole conception of what life is. At the same time (1845–1865), Michael Faraday and James Clerk Maxwell were putting electromagnetic fields at the center of science. Yet no previous century’s scientists would have counted a field as a physical entity at all.
Despite that radical change, Maxwell was helping to build such a successful “big picture” of the physical world that, over the next generation or so, it would become the foundation for some exceptional scientific complacency. There’s no better example of a good scientist being hopelessly wrong about everything than Albert Michelson. In 1894 he said: “The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote. Our future discoveries must be looked for in the sixth place of decimals.” You can hear in this pronouncement exactly the conceit and overconfidence (and lack of imagination) that closed down the minds of early Christian fundamentalists like Tertullian. (“For us, curiosity is no longer necessary”: see my note on him in Ghosts in the Machine.)
Notice Michelson’s wording: “the possibility . . . is exceedingly remote.” That might sound like a scientific judgment, because it comes from a scientist and is couched (if loosely) as an evidence-based probability assessment. But wait: On what basis can anyone, Michelson included, assign a probability to the effect of future theories on current ones?
(Try this experiment. Time-travel back to the eighteenth century and find yourself a court physician. He’ll tell you that most diseases are caused either by miasmal vapors or by an imbalance in the bodily humors. Now ask: “Given that that’s what you believe, could you please assess the probability that, a couple of centuries from now, everyone will agree that miasmal vapors and bodily humors are fictional, and that most diseases are caused instead by colonies of invisibly tiny animals?”)
The point is, Michelson isn’t really saying that there’s a low probability of physics changing fundamentally. What he’s saying is: “Despite being one of the best scientists of my era, I can’t imagine how physics could change fundamentally.” But of course he couldn’t! That mental feat would have required him to be a different Albert altogether.
Ironically, Michelson’s own work on the speed of light, as far back as the 1870s, was crucial groundwork for that other Albert’s 1905 shocker: “On the Electrodynamics of Moving Bodies,” the paper that gave us the special theory of relativity. The idea that space and time are fixed, and independent of matter and energy, was arguably the most basic scientific idea of all for more than two centuries following the publication of Newton’s Principia. But Einstein (with help from less celebrated figures like Lorentz) blew the Newtonian worldview to smithereens—and, in the following decades, Heisenberg and Bohr and Schrödinger and Dirac took Einstein’s smithereens and ground them into a fine quantum silt.
In fact Heisenberg’s work (and the Schrödinger and Dirac equations of 1926–1928) demanded a conceptual shift even more radical than Maxwell’s. Science studies interactions between things. But quantum mechanics, even more than electromagnetic theory, forces us to throw away all our previous ideas about what it is, in science, for a thing to count as a thing. Exposing all our ordinary concepts as feeble and misleading analogies, it says that we, and the world we experience, are built up not from small atomic “bricks” but from smeared waves of electromagnetic-event probability.
It gets worse. According to at least one interpretation, those waves travel from A to B not by taking the shortest possible route through space, like sensible Newtonian objects, or by taking some wacky route mangled by gravity, as Einstein might tell us to expect, but by taking every possible route through the universe simultaneously. And their properties depend on the presence of conscious observers—a claim that sounds alarmingly close to an old philosophical theory that science thought it had cured us of: idealism, which says, heretically, that matter depends for its existence on pre-existing mind, rather than the other way around.
Einstein made reality strange; with the quantum it has become (to quote J. B. S. Haldane, or possibly Arthur Eddington, or possibly someone else) “not just stranger than we suppose, but stranger than we can suppose.”
By the way, in the face of such mind-numbing exotica, let’s not forget everything else that was happening in more “ordinary” science while these revolutions were going on. We discovered that far more than 99 percent of everything, including our own bodies, is empty space. And that the continents drift, like pieces of wood on a pond. And that we are all descended from a single “mitochondrial Eve,” who lived not six thousand years ago in Eden, like her biblical namesake, but six to ten thousand generations ago, in Africa. And that the Earth is not twice as old as our great-grandparents thought, nor ten or twenty or a hundred times as old, but a million times as old. (This is like saying, “I lost my keys this morning. No, sorry, my mistake—I lost them while the Assyrians were sacking Babylon.”) And then there’s the scale of space: until the 1920s, we thought the Milky Way was the whole universe; now we know that our galaxy, immense as it is, isn’t even a trillionth part of the whole shebang.
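The arithmetic behind that keys joke holds up, by the way. Here is a quick sanity check in Python (my own illustration, not anything from the text; the figures assumed are Archbishop Ussher’s roughly six-thousand-year-old Earth, the modern estimate of about 4.5 billion years, and the Assyrian sack of Babylon in 689 BCE):

# Sanity-checking the "million times as old" claim and the lost-keys analogy.
# Assumed figures: Ussher's ~6,000-year-old Earth; modern estimate ~4.5 billion years.
ussher_age_years = 6_000
modern_age_years = 4_500_000_000
print(modern_age_years / ussher_age_years)  # 750000.0, i.e., roughly "a million times"

# The same mistake scaled down to lost keys: "this morning" means within the
# last day, and a million days is about 2,700 years, which lands close to the
# Assyrian sack of Babylon in 689 BCE.
print(1_000_000 / 365.25)  # ~2737.9 years

A factor-of-a-million error applied to “this morning” really does land you in the seventh century BCE.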
In case you think all that’s old news, and maybe this dizzying process of having to rebuild our scientific worldview every few years has come to an end, look up inflationary cosmology, or the string landscape, or Max Tegmark’s four-level multiverse, or the increasingly popular theory that our universe is merely a holographic projection. If that’s not weird enough, try cognitive scientist Donald Hoffman’s “evolutionary argument against reality,” or, as I recommended in an earlier note, philosopher Nick Bostrom’s simulation argument. If your head doesn’t spin, you’re not awake.
There are excellent reasons, then, to bet that the future’s best scientific view of the world (and of our own history, our true nature, and our destiny—the Big Three for religion and science alike) is something that you and I have not yet imagined. And it follows that we will share the fate of Michelson, and all those other overconfident Victorians we like to look down our noses at: the most sophisticated big picture that science gives us now—a quantum field theory that combines superb predictive accuracy with the minor difficulty of being in a straightforward sense unintelligible, even to experts—will make us seem like fools, simpletons, straw-chewing slack-witted bumpkins, in the eyes of our own descendants.
The science-minded will be rolling their eyes by now, because I’m drawing a false parallel: scientific understanding is cumulative—despite shocks and revolutions—in a way that religious knowledge (if there is such a thing) is not. Readers will know by now that I’m inclined to agree, up to a point. But some embarrassing facts remain undigested. Science seeks to offer a unified and complete explanation of the world, and science-boosters often talk as if science is either the only way to do so, or on the verge of doing so, or both. But this is nothing but lies and propaganda. For science has been and continues to be at a loss to explain at least two phenomena that loom large in the world—or that, arguably, constitute the world: numbers, and feelings.
The relationship of mathematical objects like prime numbers to everyday reality is a mystery, and it’s one about which experimental science itself has had exactly nothing to say. Thousands of pages have been written on the subject by philosophers, mathematicians, and philosophically inclined scientists, but it remains as tantalizing and enigmatic a problem now as it was when Euclid and Eratosthenes were trying to get their exceptional heads around it almost two and a half thousand years ago. Mathematicians ever since have trembled in awe before the strangeness of the primes. For here is something that seems to be speaking to us about the most fundamental structure of things, like a slyly whispering god—and yet, for all our efforts, we can’t understand what the god of the primes is saying. More generally, there’s a sense in which none of science is on a sure foundation, because all of science is founded on numbers . . . and one good way to express the problem of the primes is to say that we don’t know what numbers are.
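To make the puzzle concrete, here is the sieve that bears Eratosthenes’ name, sketched in a few lines of Python (my illustration, not anything from the trilogy; any introductory programming text has an equivalent). The rule that generates the primes could not be more mechanical, yet what it produces has resisted simple description for over two millennia:

# Sieve of Eratosthenes: a purely mechanical rule for listing primes.
def sieve(limit):
    """Return every prime up to `limit` by crossing out multiples."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False  # 0 and 1 are not prime by definition
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Cross out every multiple of n, starting at n*n.
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve(50))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

The gaps between successive primes come out 1, 2, 2, 4, 2, 4, 2, 4, 6, 2, 6, and so on; no simple formula is known that predicts this sequence, which is one small way of feeling the force of the problem.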
The relationship of conscious experience to the objective world, the world in which experiments get done, is perhaps an even more intractable mystery. Science has changed our view of the brain beyond recognition, and that fact may eventually—as lots of people and their research grants are currently hoping—help us to unlock the multidimensional puzzle so beautifully expressed in the passage from Julian Jaynes at the beginning of this book. Or not. Contrary to a great deal of breathless, wide-eyed commentary, the current state of research on the origin and nature of consciousness (as opposed to, say, the mechanisms of cognition, intelligence, and human information processing, with which consciousness itself is routinely confused) is pretty much that there is no research. What the great philosopher and mathematician Leibniz said about this is every bit as true now as it was three centuries ago: you can poke around in the brain, and discover mechanisms ever more intricate, and that may be interesting. Yet it remains radically unclear how or why any mere mechanism can go beyond reacting to being poked and instead, say, resent being poked.
That critical distinction has never been taken seriously enough by Alan Turing and his disciples. For to react is merely to behave—but to resent is to experience an emotion. And, as Darwin’s friend Thomas Henry Huxley put it: “[H]ow it is that anything so remarkable as a state of consciousness comes about as the result of irritating nervous tissue, is just as unaccountable as the appearance of the Djinn when Aladdin rubbed his lamp.”
This “unaccountable” nature of consciousness is closely related to one of the most long-running and most basic lines of disagreement in our civilization. Plato thought that the immaterial or mathematical world was the ultimate reality, and that our material “reality” was an illusion—a shadow on the wall of a cave, in his famous metaphor. The Christian tradition, deeply influenced by him, played around with many different theologies, and is marked to this day by a view that comes out of Platonism: the immaterial or spiritual world is good, and associated with God, and the material or physical world is sinful or evil, and associated with the devil.
Modern science has consistently favored the view that this is all nonsense—the spiritual or immaterial simply doesn’t exist. Matter is all there is! But this is an increasingly strained position in the era of quantum mechanics, which seems to cast a deep shadow of doubt on the whole idea of a “purely” objective, observer-independent reality “out there.”
Notice: this is not to say science has shown that “everything’s relative.” That’s as lazy and confused a view as any. But our success in creating scientific models that work—for example, by getting complex predictions right to ten decimal places—is of only modest comfort when we lack any context, any images or metaphors, with which to make sense of those models. One common response is: Well, tough—reality is just too weird for us to wrap our tiny little Pleistocene monkey-brains around; just keep quiet and trust the math. But maybe that’s wrong. Because maybe, one day, we will have an intuitive understanding of these theories (and their successors)—and maybe not having that now is merely evidence (from the viewpoint of some future science) that we, including our most confident scientists, just don’t get it yet.
I said it was a disturbing idea that our ignorance of fundamentals goes so deep. But I think our ignorance is also thrilling, and a cause for optimism. So much cool stuff to find out! It’s nice to think that one day people will know how language evolved, whether a civilization run by intelligent cuttlefish exists under the oceans of Jupiter’s moon Europa, and what dark energy (the missing three-quarters of the universe—big, big oops) really is. It’s nice to think that one day the headache-inducing paradoxes of quantum mechanics won’t force physicists, like medieval theologians, to keep changing their minds about which five mutually contradictory things to believe before breakfast. Moving further out into the realms of fantasy, it’s nice to think that one day people will be able to give a plausible answer to the simple question “Why are prime numbers prime—what made them so?” And how wonderful to think that one day people will also stop arguing over the nature of conscious experience because, unlike us, they will no longer find it puzzling.
But don’t hold your breath on that last one. In every age there are people eager to believe that we’re at last just around the corner from a convincing solution to what philosophers call “the hard problem.” Scientists, especially in certain fields such as neuroscience and artificial intelligence, love to pretend that it isn’t really a big deal, or that it’ll go away just as soon as we’ve had one more iteration of Moore’s Law, or done another semester’s worth of experiments on the biochemistry of your brain’s microtubules. The ecologist James Lovelock has said that robots taking over doesn’t matter, because their cognition may be superior to ours—which indicates that, like so many people, he understands neither what the hard problem is, nor what its most elementary ethical implications are. Another recent writer dismisses the problem of consciousness as just a matter of what our cognitive systems are or are not “attending to.” Anyone who finds that convincing should read Molière’s comedy The Hypochondriac, in which the doctor “explains” a drug’s capacity to make the patient sleepy by reference to its “dormitive power.” An engineer might as well say that a wall collapsed because it no longer contained enough verticality.