New interface methods will revolutionise how we interact with computers
When workplace computers moved beyond command-line interfaces to the mouse-and-windows-based graphical user interface, that was a major advance in usability. And the command line itself was a big improvement over the punch cards and tape that came before. We're now entering a new era of user interface design, with companies experimenting with everything from touch and voice to gestures and even direct mind control.
Can you hear me now?
Voice recognition is one input technology that has made significant progress. A decade ago, accuracy was low and the technology required extensive training. Today, it is common to find voice recognition when calling customer support, and, of course, in the latest smartphones.
Voice recognition in medicine
Beyond general office use, however, voice recognition has made its biggest impact in specialized fields such as law and medicine. At the University of Pittsburgh Medical Center, for example, automated transcription has almost completely replaced human transcriptionists in the radiology department.
The artificial intelligence factor
Increased accuracy of speech recognition is just the beginning of how new interfaces are transforming the way we interact with computers. "We can say, 'Remind me that I have a meeting at five,' and that's very different from turning on the phone, getting to the home screen, picking the clock application, putting it into alarm mode, and creating a new alarm," says Henry Holtzman, who heads the MIT Media Lab's Information Ecology group. Apple's Siri, for example, uses artificial intelligence to figure out what the user wants to do.
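The shift Holtzman describes, from navigating menus to simply stating a goal, rests on mapping an utterance to a structured command. A minimal sketch of that idea, using a toy regular-expression grammar (real assistants like Siri use statistical language models, not a single pattern):

```python
import re

def parse_reminder(utterance):
    """Toy intent parser: map a spoken request to a structured command.

    Illustrative only; this single regex stands in for the language
    understanding that a real assistant performs.
    """
    m = re.search(r"remind me (?:that )?(?P<what>.+?) at (?P<when>\w+)",
                  utterance, re.IGNORECASE)
    if not m:
        return None  # not a reminder request; hand off to another handler
    return {"intent": "set_reminder",
            "subject": m.group("what"),
            "time": m.group("when")}

print(parse_reminder("Remind me that I have a meeting at five"))
```

One utterance replaces the five manual steps Holtzman lists, which is exactly the usability gain he is pointing at.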
Companies are having success setting up intelligent agents that understand spoken language in limited contexts, for example, banking and telecom call centers. "Hannah, for instance, for [UK's] M&S Bank, knows all about their credit cards, loans, and other financial service products," says Chris Ezekiel, CEO of Creative Virtual. For companies that deploy virtual assistants like Hannah, the goal is to answer questions that would normally be handled by human staff. According to Ezekiel, these virtual agents typically average 20% to 30% success rates, and the systems are continuously updated to learn from previous encounters so that they can handle more queries.
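Working in a limited context is what makes agents like Hannah tractable: the system only has to match a query against a known product domain, and anything it cannot match escalates to a person. A crude sketch of that pattern using keyword overlap (Creative Virtual's actual system is not public, and the FAQ entries here are invented):

```python
def answer(query, faq):
    """Toy retrieval for a limited-domain virtual agent.

    Score each canned answer by keyword overlap with the query; below a
    threshold, escalate to a human. This mirrors the partial success
    rates Ezekiel describes: the agent handles what it recognizes and
    hands off the rest.
    """
    words = set(query.lower().split())
    best, score = None, 0
    for question, reply in faq.items():
        overlap = len(words & set(question.lower().split()))
        if overlap > score:
            best, score = reply, overlap
    return best if score >= 2 else "Let me connect you to an agent."

# Hypothetical knowledge base for a banking domain.
faq = {"what is my credit card limit": "Your limit is shown in the app.",
       "how do i apply for a loan": "You can apply for a loan online."}

print(answer("what is my credit card limit", faq))
```

Updating the system "to learn from previous encounters," as Ezekiel puts it, amounts to growing and reweighting that knowledge base over time.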
Interface designers looking to translate spoken -- or written -- words into practical goals have a solid advantage over those designing interfaces for gestures or other non-traditional input methods. That's because people already know how to express what they want in words, while a gesture vocabulary has to be invented and learned. "We're at the beginning of the gesture phase," says MIT's Holtzman. "And not just the gestures, but everything we can do with some kind of camera pointing at us, such as moving our eyebrows and moving our mouths. For example, the screen saver on the laptop -- why doesn't it use the camera on the lid to figure out whether to screen save? If your eyes are open and you're facing the display it should stay lit up."
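The camera-aware screen saver Holtzman imagines comes down to a simple decision rule layered on top of a face detector. A sketch of just that decision logic, assuming a face_visible flag supplied by some detector (for example, OpenCV's Haar cascades watching the lid camera; the capture and detection code is omitted):

```python
def screensaver_should_activate(face_visible, idle_seconds, timeout=300):
    """Decide whether to blank the screen.

    face_visible is assumed to come from an external face detector; the
    timeout value is an arbitrary example, not a platform default.
    """
    if face_visible:
        return False  # someone is facing the display: keep it lit
    return idle_seconds >= timeout  # otherwise fall back to the idle timer

print(screensaver_should_activate(True, 600))   # False
print(screensaver_should_activate(False, 600))  # True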
One company tracking hand motion is Infinite Z, which requires that users wear 3D glasses and use a stylus to touch objects that appear to float in the air in front of them. "A virtual environment makes a lot of sense for computer-aided design, data visualization, pharmaceuticals, medicine, and oil and gas simulations," says David Chavez, the company's CTO. The products work with Unity 3D and other virtual environment engines, as well as the company's own Z-Space platform.
Eye tracking, by contrast, has proved difficult to commercialize. It is commonly used to see which portions of an ad or website viewers look at first, and to improve communication for the handicapped. Reynold Bailey, a computer science professor at the Rochester Institute of Technology, uses eye-tracking technology to teach doctors to read mammograms better. But he doesn't expect eye tracking to replace a mouse for general-purpose use. "With the mouse, you can hover over a link and decide whether to click or not. With the eye, you might just be reading it, so you don't want to activate everything you look at."
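The problem Bailey raises -- that a gaze interface would activate everything you merely read -- is commonly worked around with dwell-time selection: a target fires only after the eye rests on it long enough. A minimal sketch, with made-up timing parameters:

```python
def dwell_click(gaze_samples, target, dwell_ms=800, sample_ms=50):
    """Fire a 'click' only after sustained fixation on a target.

    gaze_samples is the sequence of targets an eye tracker reports, one
    per sample interval. The 800 ms dwell threshold is illustrative; real
    gaze interfaces tune it per user.
    """
    needed = dwell_ms // sample_ms  # consecutive samples required
    run = 0
    for hit in gaze_samples:
        run = run + 1 if hit == target else 0
        if run >= needed:
            return True   # sustained fixation: activate the target
    return False          # gaze only passed over the target while reading
```

A glance while reading produces a short run of samples and no click; deliberately staring at a link produces a long run and activates it.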
It may sound like science fiction, but mind-reading devices are already on the market -- and they don't require sensors or plugs implanted into your skull. Some work by sensing nerve signals sent to arms and legs, and are useful for helping restore mobility to the handicapped. Others read brain waves, such as the Intific, Emotiv and NeuroSky headsets.
Moving objects with the mind
The Intific and Emotiv headsets can be used to play video games with your mind. NeuroSky makes the technology behind the Star Wars Force Trainer and Mattel's MindFlex Duel game, both of which allow players to levitate balls with the power of their minds. That doesn't mean office workers can sit back, think about the sentences they want to write, and have them magically appear on the screen. "If you're an able-bodied individual, typing words on a keyboard is just so much quicker and more reliable than doing it with the brain control interfaces," says MIT Media Lab's Holtzman.