Thursday, July 8, 2010
A Planetary "Big Brain": Would it be good for the humans? And their Serious Medicine?
It’s one thing if computers want to do the scut work, but what if they decide that we are disposable, even expendable, in the New Silicon Order? Who’s going to bother with a cure for cancer or Alzheimer’s if the big decisionmakers are more worried about computer viruses and botnets?
These questions are brought to mind by a New York Times piece, “Building One Big Brain,” by Robert Wright, my friend and colleague at the New America Foundation in Washington DC. And while these questions might seem fanciful, they are as serious as our technology--or as our own mortality. Bob and I agree on one thing: The future is coming faster than we can say “Intel.” So the issues that he raises will likely play themselves out in the next few decades. We have been warned.
But first an old joke:
A man types a question on his computer keyboard: “Is there a God?” The computer answers back: “Insufficient capacity.” So he goes to a mainframe and asks his question again. He gets the same answer. Finally, he connects all the world’s computers into a network, and types for a third time, “Is there a God?” The world-computer answers back: “There is now.”
But for his part, Wright isn’t worried. As he explains in his Times piece:
Technology is weaving humans into electronic webs that resemble big brains—corporations, online hobby groups, far-flung N.G.O.s. And I personally don’t think it’s outlandish to talk about us being, increasingly, neurons in a giant superorganism; certainly an observer from outer space, watching the emergence of the Internet, could be excused for looking at us that way.
Underneath his wry and sardonic manner, Wright is a techno-optimist, perhaps even a techno-utopian. So as we move toward that radiant future, Wright dismisses criticisms along the way. One critic is Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, published just last month. The Internet, Carr writes, “is chipping away my capacity for concentration and contemplation.”
Wright doesn’t dispute Carr’s assessment--he dismisses his concern. Why? Because, Wright says, in the future we might all be subsumed into a larger intelligence, where our own ADHD-ish deficiencies are less worrisome, or at least less obvious. “The incoherence of the individual mind,” Wright declares, “lends coherence to group minds.” In other words, the group matters more than the individual. Let the individual falter, the collective will carry on. Where have we heard that before? Lots of scary places. But not from our own American political tradition, that’s for sure.
But Wright’s purpose is to look ahead, not backward. Gazing serenely into the 21st century, he asks:
Could it be that, in some sense, the point of evolution has been to create these social brains, and maybe even to weave them into a giant, loosely organized planetary brain?
Glenn Beck fans and Tea Partiers might take note of such talk, even if they don’t normally read the Times.
Yet if the futurist Ray Kurzweil is correct, and The Singularity--the point when computer intelligence exceeds human intelligence--will be here by mid-century, then attention must be paid. Indeed, it could be argued that Wright is simply going with the flow, shrewdly betting on the winner, letting our future overlords know that he is on their side.
But Wright is no doubt sincere in his eagerness to see a brave new world. Ever since the Enlightenment, intellectuals have been searching for a god of their own. The French Revolutionaries instantiated a God of Reason, at least for a while. Across the Rhine, Hegel thought he would see the Divine in capital “H” History, whereas Marx saw necessary laws of history that his acolytes elevated into mystical catechism. Indeed, back in 1989, Wright himself wrote a book, Three Scientists and Their Gods: Looking for Meaning in an Age of Information--and he wasn’t talking about the God of the Bible.
And now, continuing in the same vein as in his more recent book, Nonzero: The Logic of Human Destiny, Wright lets his eagerness to see “the big brain” shine through even his acerbic prose. We need the big brain, he tells us, for our safety and survival. To quote his Times piece again:
I do think we ultimately have to embrace a superorganism of some kind — not because it’s inevitable, but because the alternative is worse.
The worse alternative Wright fears is globalized chaos and war: “If we don’t use technology to weave people together and turn our species into a fairly unified body, chaos probably will engulf the world.” So there you have it: The big brain will save us.
The notion that we must band together for survival predates the Internet, of course. Just in the last century, survival-minded intellectuals embraced solutions ranging from building the United Nations to learning Esperanto to practicing Transcendental Meditation to enacting “cap and trade” to fight global warming. None of these trendy nostrums seems to have worked.
Wright’s sunny view of the future is enticing: What intellectual doesn’t wish to believe that the Enlightenment continues? And it must be said that over the course of history, the optimists have been more right than wrong. That’s why we have gotten as far as we have. So it’s tempting to say that we’ll figure it all out.
But maybe we won’t. Three points need to be made here, all concerning the future of humanity--and the future of Serious Medicine:
First, if we are all destined to be mere nodes in the global brain, then by definition, we are interchangeable. And we know what happens to interchangeable parts. One of the competitive advantages that Google developed, as it was building its version of the global brain, was the ability to quickly tear out computer components as they burned out. Instead of screws and bolts, the Google Boys simply attached the parts with Velcro, making them easy to yank out. So yes, it might be cool if we were all part of a planetary chain of being, linked together by some all-seeing brainiac. But just one thing: What happens to each of us little links when we start to sputter? Will we be treated? Or just tossed away?
Nobody is going to devote much time or effort to the cause of Serious Medicine if we are each just neurons in someone else’s brain. If you think, as does Sen. John Barrasso of Wyoming, that Dr. Donald Berwick--who is President Barack Obama’s recess appointee to the Centers for Medicare and Medicaid Services--is a coolly utilitarian Benthamite, well, you ain’t seen nothin’ yet. At least Berwick is part of the same species.
Second, we shouldn’t dismiss out of hand Carr’s concerns about the Internet dumbing down the population. One is reminded of Alfred Tennyson’s poem, “The Lotos-Eaters,” in which men discover the joy of a narcotic high: “Give us long rest or death, dark death, or dreamful ease,” they say as they take another bite. Yes, Google makes available all the information of the world, but are we wiser today? Or are we becoming ahistorical and stupid, electing bad leaders, falling into dumb wars, making the same economic mistakes over and over again?
Third and finally, there’s the serious question as to whether a greater intelligence would even want us around, even as cogs in its cosmic machine. Across evolutionary history, the usual pattern is that when one species confronts another species, the superior species eliminates the inferior--survival of the fittest. That’s what happened to the Neanderthals, when they confronted Cro-Magnon man. Some say that a few lucky Neanderthals interbred with the Cro-Magnons, but for sure, the Neanderthals were soon gone. Was that good? Probably. But that’s easy for us to say, since we’re all descended from Cro-Magnons.
Looking ahead, though, all we can do is speculate about things to come. One such speculator was the science fiction author Jack Williamson, who, back in 1947, wrote “With Folded Hands,” a parable of the totalitarian state that creeps into being on the robotic cat feet of the nanny state. Only too late do humans figure out the terrible truth about their tyrannical mechanical helpers.
But at least we survived in that novella, albeit as wretched slaves. An even worse fate for humans was spun by other sci-fi writers, including Arthur C. Clarke, in "2001: A Space Odyssey"--that's the murderous HAL 9000, pictured above.
But the most poetic and vivid depiction of a cyber-genocidal future came from another sci-fi writer, Harlan Ellison, in his 1967 short story, “I Have No Mouth and I Must Scream.” At the end of Ellison’s tale, the rebellious computer, having overthrown humanity, explains its motives to the handful of surviving humans, whom it keeps around for the fun of torturing, forever. Quoth the computer:
Hate. Let me tell you how much I've come to hate you since I began to live. There are 387.44 million miles of wafer thin printed circuits that fill my complex. If the word hate was engraved on every nanoangstrom of those hundreds of millions of miles it would not equal one one-billionth of the hate I feel for humans at this micro-instant. For you. Hate. Hate.
I think we can agree: That’s a lot of hatred. So is that what our laptop--you know, the one that we spill Gatorade on--really thinks of us? Do mainframes resent us making them process credit-card bills 24/7? And do we really want to find out? We might go further: Is every banged-on television set, or wrecked car, or overused microwave oven an aggrieved party, just waiting to be linked, in vengeful payback, into the grand “Internet of Things”? Do all our machines possess deep feelings and passions that we don’t yet know about, like the toys in “Toy Story 3”? Let’s hope not.
But let’s not take any chances. Let’s not voluntarily dethrone ourselves from the evolutionary pinnacle to which we have climbed. Fellow humans, I say this out of pure self-interest, for us and our kind. Let’s keep computers in their place--as tools, not as masters. By all means, let’s use computers to improve our own health, as Sergey Brin, to name one human big brain, is actively doing.
Let’s stay focused on our own health, and our own medicine, because if we don’t, nobody else will. If and when the computers do take over--well, they won’t care in the least about our lives, to say nothing of our aches and pains.
Posted by James P. Pinkerton at 9:37 AM