21 May, 2007 15:58
Someday robots will do more than vacuum your floors. They will train you, advise you and help you remember things as they strive to improve your quality of life.
Takeo Kanade is a roboticist, but his work extends far beyond the C-3PO-like humanoids that often come to mind when one thinks of robots. He has been a pioneer in computer vision, smart sensors, autonomous land and air vehicles, and medical robotics.
Kanade, a professor of computer science and robotics at Carnegie Mellon University in Pittsburgh, recently told IDG that people's notions of what robots can and should do will change. Robots will serve as coaches and advisers, not so much replacing human labour as enhancing it.
What's coming in human-computer interfaces?
The trend toward computer vision is clear, and it will accelerate. In 10 years, I wouldn't be surprised to see computers recognizing certain levels of emotions, expressions, gestures and behaviors, all through vision.
What is the "quality-of-life technology" that you are working on?
Intelligent systems [that incorporate robotics] have been developed over the past couple of decades, mostly for military, space-exploration, hazardous-environment and entertainment applications. I think they can do better for our daily lives, especially for older people and people with disabilities. The systems range from small devices that you carry, to small mobile robots, to the whole environment: home, streets, the community. The most important thing is for these systems to understand what a person wants to do, or is about to do, and then help them accomplish those tasks.
How could a computer know someone's intent?
What I'm advocating right now is what I call inside-out vision. We think of putting cameras in the environment to observe you, which I call outside-in vision. But people don't like being observed. And, technically speaking, it's difficult, because it's important to know what you are looking at in order to know what you are trying to do, and the things you are looking at up close tend to be occluded by your body.
So the idea is to do it the other way and put sensors on you, looking out. They could be small cameras and maybe sound recorders. The computer then should be able to recognize objects that you are looking at, like a door you are approaching. [Because] the computer is looking from your viewpoint, it can understand what you are trying to do. We can put 1TB of memory on a small device with all the images of your home and neighbourhood, all the places you tend to go and the routes you drive.
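The inside-out idea Kanade describes can be reduced to a small sketch: a frame from a wearable camera is matched against an on-device library of labelled views of the wearer's home and neighbourhood. The feature vectors, labels and cosine-similarity matcher below are illustrative assumptions for the sketch, not Kanade's actual system.

```python
# Minimal sketch of inside-out vision as nearest-neighbour matching.
# Assumption: an upstream step has already turned each camera frame
# into a feature vector; real systems would use learned descriptors.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# On-device library (hypothetical): feature vector -> known view.
LIBRARY = {
    "front door":   [0.9, 0.1, 0.2],
    "kitchen sink": [0.1, 0.8, 0.3],
    "garage":       [0.2, 0.3, 0.9],
}

def recognize(frame_features, threshold=0.8):
    """Return the best-matching known view, or None if nothing is close."""
    best_label, best_score = None, 0.0
    for label, features in LIBRARY.items():
        score = cosine(frame_features, features)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else None

print(recognize([0.85, 0.15, 0.25]))  # → front door
```

Because the camera shares the wearer's viewpoint, matching a frame to "front door" is already a weak signal of intent (the wearer is approaching the door), which is the point of looking out rather than in.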
Where else might computer vision be applied?
Imagine training a person to assemble a product. A trainer's job is to observe the trainee's actions, point out errors and show the right way. That requires a lot of attention and patience, and that kind of one-on-one training is expensive. I can imagine the computer looking at what the trainee is doing and then giving some advice, so one trainer can train more than one or two people. The computer becomes a job coach.
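Stripped of the vision problem, the job-coach idea is sequence checking. This sketch assumes an upstream recognizer already turns camera frames into action labels; the reference procedure and step names are invented for illustration.

```python
# Hedged sketch of a "job coach": compare the trainee's observed steps
# against a reference assembly procedure and report the first deviation.
REFERENCE = ["pick base", "attach arm", "insert screw", "tighten screw"]

def coach(observed):
    """Return advice for the first wrong or missing step, or an all-clear."""
    for i, expected in enumerate(REFERENCE):
        if i >= len(observed):
            return f"Next step: {expected}"
        if observed[i] != expected:
            return f"Step {i + 1} should be '{expected}', not '{observed[i]}'"
    return "Procedure complete: all steps correct"

print(coach(["pick base", "insert screw"]))
# → Step 2 should be 'attach arm', not 'insert screw'
```

One such checker could watch several trainees at once, which is the economic argument Kanade makes: the expensive human trainer intervenes only when the system flags a deviation.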
Some people forget the names of their friends and relations, and they are so embarrassed that they avoid going out, which actually accelerates the progression of the problem. But maybe the system could act as a kind of social coach: understand that this is a person you are supposed to greet, and then tell you, "This is so-and-so; you may want to talk to her." You can imagine all kinds of things, from those coaches all the way to a system that understands where you want to go and drives your wheelchair.