Vernor Vinge, 62, a pioneer in thinking about artificial intelligence, warned in a recent interview about the risks and opportunities that an electronic super-intelligence would offer mankind.
Vinge is a retired San Diego State University professor of mathematics, a computer scientist, and a science fiction author. He is well known for his 1993 manifesto, "The Coming Technological Singularity," in which he argues that exponential growth in technology will reach a point beyond which the consequences are unknowable. Vinge still believes in this future, which he thinks could arrive any time after 2020.
Exactly 10 years ago, in May 1997, Deep Blue won its chess match against Garry Kasparov. Was that the first glimpse of a new kind of intelligence?
I think there was clever programming in Deep Blue, but the predictable success came mainly from ongoing trends in computer hardware improvement. The result was better-than-human performance in a single, limited problem area. In the future, I think improvements in both software and hardware will bring success in other intellectual domains.
In 1993 you gave your famous, almost prophetic, speech on "Technological Singularity." Can you please describe the concept of Singularity?
It seems plausible that with technology we can, in the fairly near future, create (or become) creatures who surpass humans in every intellectual and creative dimension. Events beyond such an event -- such a singularity -- are as unimaginable to us as opera is to a flatworm.
Do you still believe in the coming singularity?
I think it's the most likely non-catastrophic outcome of the next few decades.
Does the explosion of the Internet and grid computing ultimately accelerate this event?
Yes. There are other possible paths to the Singularity, but at the least, computer+communications+people provide a healthy setting for further intellectual leaps.
When intelligent machines finally appear, what will they look like?
Most likely they will be less visible than computers are now. They would mostly operate via the networks and the processors built into the ordinary machines of our everyday life. On the other hand, the results of their behaviour could be very spectacular changes in our physical world. (One exception: mobile robots, even before the Singularity, will probably become very impressive -- devices more agile and coordinated than human athletes, even in open-field situations.)
How could we be certain such machines have a conscience?
The hope and the peril is that these creatures would be our "mind children". As with any child, there is a question of how moral they may grow up to be, and yet there is good reason for hope. (Of course, the peril is that these particular children are much more powerful than natural ones.)
In 2001, Stephen Hawking advocated genetically enhancing our species in order to compete with intelligent machines. Do you believe that would be feasible, even practical?
I think it's both practical and positive -- and subject to the same qualitative concerns as the computer risks. In the long run, I don't think organic biology can keep up with hardware. On the other hand, organic biology is robust in different ways than machine hardware is. The survival of Life is best served by preserving and enhancing both strategies.