We will need a great deal of capability if we want to decide how people look, or even determine what their personality might be like. But it is quite likely that, over time, we will start doing all these things. We will modify people, and we will quite likely end up with a number of much-augmented human capabilities. The question is: how far do we go before we decide we want to add genetic enhancements that allow people, for example, to link directly to machines? Do we want them to link to the network? Should we engineer genetics that allow people, say in 2050 or 2060, to connect directly to the internet just by thinking, and therefore to communicate telepathically with other people walking around? Why not follow that path? People might well want to, if the capabilities and the engineering to do so are there. All these things could be possible, and they are very interesting possibilities.
Right now the Pentagon is using some 5,000 robots in Iraq and Afghanistan, patrolling cities, disarming explosives and making reconnaissance flights. The next step is allowing them to carry weapons. Does this path lead to a Terminator scenario?
[Laughs] It's certainly one of the top concerns engineers worry about: whether this leads toward Terminator as it happens in the film, where you design a robot that should be under your command, but then it becomes self-aware or something and decides not to follow your commands. When the U.S. develops robotic weapons, it is taking a step in that direction. The question is how far you can go down that path. Could a small dictatorship, for example, afford to take it all the way and build such a weapon system? Well, probably not yet, because it would lack capabilities that so far have not been built, and that development path would require enormous resources. It would take a long time to reach the point where it would be possible. I think that is a potential we have. If you are aware that the possibility exists, you obviously have to think about it when designing these machines, and not stop merely at building something that is quite likely to head toward the Terminator scenarios. We certainly exercise a kind of self-censorship: we try not to be stupid enough to destroy the world.
Are you an optimist about the future? Do you believe we can improve technology while also saving the world from hunger, overpopulation, pollution and environmental destruction?
I am an optimist. I recognize that there are dangers in the future, but somehow I still believe that we will manage to avoid those problems and that the future will be much better than today. If you go far enough ahead, we will solve a lot of those problems using advanced machines. Somehow we will find a way to avert disaster without destroying the world. That is what I believe. On the negative side, there is a significant risk that we might destroy the world in ways we cannot foresee. I think that in the next several decades there will be a balance between problems caused by technologies and solutions provided by them. In the short to medium term it probably won't be much better or much worse than today: we will have some new problems, but we will also have new solutions. In the very long term, though, there is a lot of room for optimism that we will solve many of the problems we have caused, even as we catch up with new problems created by coming technology. So, concerning the far future, I am an optimist, because the opposite is too nasty to think about.