Artificial humans

“Although biological components act in ways that are comparable to those in electronic circuits,” he argued, in a talk titled ‘What Cells Can Do That Robots Can’t,’ “they are set apart by the huge number of different states they can adopt.

Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell.

The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events.”

That makes the ones and zeros that computers trade in look pretty crude. 

Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being – in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.)

Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness – a machine with no ghost in it? And how would we know?

Even if you grant that the Singularity is plausible, you’re still staring at a thicket of unanswerable questions.

  • If I can scan my consciousness into a computer, am I still me?
  • What are the geopolitics and the socioeconomics of the Singularity?
  • Who decides who gets to be immortal?
  • Who draws the line between sentient and non-sentient?
  • And as we approach immortality, omniscience and omnipotence, will our lives still have meaning?
  • By beating death, will we have lost our essential humanity?

Kurzweil admits that there’s a fundamental level of risk associated with the Singularity that’s impossible to refine away, simply because we don’t know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do.

It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don’t have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error. 

If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous.

“It would require a totalitarian system to implement such a ban,” he says.

“It wouldn’t work. It would just drive these technologies underground, where the responsible scientists who we’re counting on to create the defenses would not have easy access to the tools.”

Kurzweil is an almost inhumanly patient and thorough debater. He relishes it.

He’s tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.

Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer.

He refuses to fall on his knees before the mystery of the human brain.

“Generally speaking,” he says, “the core of a disagreement I’ll have with a critic is, they’ll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don’t believe I’m underestimating the challenge. I think they’re underestimating the power of exponential growth.”

This position doesn’t make Kurzweil an outlier, at least among Singularitarians.

Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the École Polytechnique Fédérale de Lausanne in Switzerland. It’s called the Blue Brain project, and it’s an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM’s Blue Gene super-computer.

So far, Markram’s team has managed to simulate one neocortical column from a rat’s brain, which contains about 10,000 neurons.

Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you’d then have to educate the brain, and who knows how long that would take?) 

By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it.

He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware.

“When people look at the implications of ongoing exponential growth, it gets harder and harder to accept,” he says.

“So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I’ve tried to push myself to really look.”

In Kurzweil’s future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level.

Progress hyper-accelerates, and every hour brings a century’s worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all.

Kurzweil hopes to bring his dead father back to life.

We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species. 

Or it isn’t. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.

But as for the minor questions, they’re already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn’t have 600 million humans carrying out their social lives over a single electronic network.

Now we have Facebook. Five years ago you didn’t see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics.

Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?

Already 30,000 patients with Parkinson’s disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy!

Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter.

It got every question it answered right, but much more important, it didn’t need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn’t strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits. 

A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century’s answer to the Founding Fathers – except unlike the Founding Fathers, they’ll still be alive to get credit – or their ideas could look as hilariously retro and dated as Disney’s Tomorrowland.

Nothing gets old as fast as the future.

But even if they’re dead wrong about the future, they’re right about the present. They’re taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another.

Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago.
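The arithmetic behind that comparison is easy to sketch. A minimal back-of-envelope model, assuming price-performance doubles roughly every 18 months (a common Moore's-law rule of thumb used here for illustration, not Kurzweil's exact figure), lands in the same ballpark as the phone-versus-mainframe numbers:

```python
# Back-of-envelope extrapolation of exponential price-performance growth.
# The ~18-month doubling period is an illustrative assumption, not a
# figure taken from Kurzweil.

DOUBLING_YEARS = 1.5

def improvement_factor(years, doubling_years=DOUBLING_YEARS):
    """How many times price-performance improves over `years`."""
    return 2 ** (years / doubling_years)

# Looking back 40 years: ~10^8, within an order of magnitude of the
# article's comparison (a millionth the price times a thousand times
# the power is a 10^9 gain in price-performance).
print(f"past 40 years:  ~{improvement_factor(40):.1e}x")

# Flipping the same curve forward another 40 years gives a gain of the
# same staggering size again, compounded on top of today's hardware.
print(f"next 40 years:  ~{improvement_factor(40):.1e}x more")
```

The point of the sketch is only that a constant doubling period compounds into factors too large for linear intuition, which is exactly the gap Kurzweil accuses his critics of falling into.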

Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box.

Or maybe you have to think further inside it than anyone ever has before.
