Artificial humans

Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity’s most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He’s good-natured about it.

His manner is almost apologetic: “I wish I could bring you less exciting news of the future, but I’ve looked at the numbers, and this is what they say, so what else can I tell you?”

Kurzweil’s interest in humanity’s cyborganic destiny began around 1980, largely as a practical matter. He needed ways to measure and track the pace of technological progress.

Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right.

“Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project,” he says.

“So it’s like skeet shooting – you can’t shoot at the target.”

He knew about Moore’s law, of course, which states that the number of transistors you can put on a microchip doubles about every two years.

It’s a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.

As it turned out, Kurzweil’s numbers looked a lot like Moore’s. They doubled every couple of years.

Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his curve backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.

Kurzweil then ran the numbers on a whole bunch of other key technological indexes – the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond – the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents.

He kept finding the same thing: exponentially accelerating progress.

“It’s really amazing how smooth these trajectories are,” he says. “Through thick and thin, war and peace, boom times and recessions.”

Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.

Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity.

According to Kurzweil, we’re not evolved to think in terms of exponential growth.

“It’s not intuitive. Our built-in predictors are linear. When we’re trying to avoid an animal, we pick the linear prediction of where it’s going to be in 20 seconds and what to do about it. That is actually hardwired in our brains.”
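Kurzweil’s point about linear versus exponential extrapolation is easy to see in a few lines of Python. The starting value and growth rates below are illustrative assumptions, not his published figures; only the two-year doubling period comes from the curves described above.

```python
def linear_forecast(start, yearly_gain, years):
    """Extrapolate the way our intuition does: add a fixed increment each year."""
    return start + yearly_gain * years

def exponential_forecast(start, doubling_period, years):
    """Extrapolate the way Kurzweil's curves do: double every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# Hypothetical starting point: 1,000 MIPS per $1,000, projected 30 years ahead.
print(linear_forecast(1_000, yearly_gain=500, years=30))         # 16000
print(exponential_forecast(1_000, doubling_period=2, years=30))  # 32768000.0
```

Over 30 years the linear projection grows 16-fold, while the doubling curve grows by a factor of more than 32,000; that gap is what Kurzweil says our hardwired predictors miss.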

Here’s what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s.

By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity – never say he’s not conservative – at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today. 

The Singularity isn’t just an idea. It attracts people, and those people feel a bond with one another.

Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.

Not all of them are Kurzweilians, not by a long chalk. There’s room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won’t happen.

But Singularitarians share a worldview.

They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you’re walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything.

They have no fear of sounding ridiculous; your ordinary citizen’s distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality.

When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

In addition to the Singularity University, which Kurzweil co-founded, there’s also a Singularity Institute for Artificial Intelligence (SIAI), based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.)

Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology.

At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James “the Amazing” Randi.

The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading – the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters – handed out pamphlets. An android chatted with visitors in one corner.

After artificial intelligence, the most talked-about topic at the 2010 summit was life extension.

Biological boundaries that most people think of as permanent and inevitable, Singularitarians see as merely stubborn but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It’s not just wishful thinking; there’s actual science going on here.

For example, it’s well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can’t reproduce anymore and dies. But there’s an enzyme called telomerase that reverses this process; it’s one of the reasons cancer cells live so long.
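The shortening mechanism just described can be caricatured in a few lines of Python. The telomere length and per-division loss below are rough illustrative numbers, not measurements from any study.

```python
def divisions_until_senescence(telomere_length, loss_per_division):
    """Count how many times a cell can divide before its telomere is exhausted.

    Each division trims the telomere; once it runs out, the cell stops
    dividing. Telomerase, by contrast, rebuilds the telomere and resets
    this counter, which is part of why cancer cells keep dividing.
    """
    count = 0
    while telomere_length > 0:
        telomere_length -= loss_per_division
        count += 1
    return count

# Roughly plausible orders of magnitude: ~10,000 base pairs of telomere,
# ~100 base pairs lost per division.
print(divisions_until_senescence(10_000, 100))  # 100
```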

So why not treat regular non-cancerous cells with telomerase?

In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away.

The mice didn’t just get better; they got younger. 

Aubrey de Grey is one of the world’s best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence.

He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine.

“People have begun to realize that the view of aging being something immutable – rather like the heat death of the universe – is simply ridiculous,” he says.

“It’s just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically.

This is why we have vintage cars. It’s really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable.”

Kurzweil takes life extension seriously too.

His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father’s genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day.

He says his diabetes is essentially cured, and although he’s 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.

But his goal differs slightly from de Grey’s. For Kurzweil, it’s not so much about staying healthy as long as possible; it’s about staying alive until the Singularity. It’s an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they’ll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans.

Alternatively, by then we’ll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.

It’s an idea that’s radical and ancient at the same time.

In “Sailing to Byzantium,” W.B. Yeats describes mankind’s fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead?

But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves.

“There are people who can accept computers being more intelligent than people,” he says.

“But the idea of significant changes to human longevity – that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that’s the major reason we have religion.”

Of course, a lot of people think the Singularity is nonsense – a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience.

Most of the serious critics focus on the question of whether a computer can truly become intelligent.

The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn’t currently produce the kind of intelligence we associate with humans or even with talking computers in movies – HAL or C-3PO or Data.

Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don’t make conversation at parties. They’re intelligent, but only if you define intelligence in a vanishingly narrow way.

The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn’t exist yet.

Why not? Obviously we’re still waiting on all that exponentially growing computing power to get here.

But it’s also possible that there are things going on in our brains that can’t be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon.

The biologist Dennis Bray was one of the few voices of dissent at last summer’s Singularity Summit.
