Tufts' Jacob has developed a framework that unites such disparate technologies as multitouch screens and systems that can read emotions from the faces of users. He says such "reality-based interaction" is based on these four concepts:
Naive physics. It's common sense that tells people that an apple falling from a tree will drop to the ground, even if they have forgotten Newton's laws. The Apple iPhone plays to this intuition by responding to gravity (the screen switches from portrait to landscape mode as the user rotates the device) and by simulating inertia (e.g., when flicking through the contacts list, a fast flick keeps the contacts scrolling after the user's finger has been removed, as if the list had mass).
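The inertia effect described above is often implemented as momentum with friction: the list keeps the velocity the finger had at release, and that velocity decays frame by frame until scrolling stops. Here is a minimal sketch of that idea; the friction and cutoff values are illustrative assumptions, not Apple's actual parameters.

```python
FRICTION = 0.95        # assumed per-frame velocity retention
MIN_VELOCITY = 0.5     # assumed pixels-per-frame cutoff below which scrolling stops

def inertial_scroll(offset, release_velocity):
    """Yield successive scroll offsets after the finger lifts.

    The list continues moving at the release velocity, which friction
    bleeds off each frame -- as if the list had mass.
    """
    velocity = release_velocity
    while abs(velocity) >= MIN_VELOCITY:
        offset += velocity
        velocity *= FRICTION
        yield offset

# A fast flick (high release velocity) keeps the list scrolling for a while:
positions = list(inertial_scroll(offset=0.0, release_velocity=40.0))
```

A slow flick (small release velocity) dies out after only a few frames, which is what makes the interaction feel physical rather than mechanical.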
Body awareness and skills. It's the ability -- learned very early in life -- to coordinate one's hands, eyes and senses independent of environment. "Emerging interfaces support an increasingly rich set of input techniques based on these skills, including two-handed and whole body interaction," Jacob says.
Environmental awareness and skills. Humans naturally take actions in the context of objects in the environment. Emerging human-computer interaction styles -- such as virtual reality -- do the same. For example, "context aware" or sensing systems may compute a user's location and take actions based on that.
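A context-aware system of the kind described above can be reduced to a simple pattern: sense the user's location, then map that context to an action. The sketch below is purely hypothetical; the room names and rules are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Location:
    """A sensed user context (here, just which room the user is in)."""
    room: str

def action_for(location):
    """Choose a response based on the sensed context (illustrative rules only)."""
    if location.room == "meeting_room":
        return "silence notifications"
    if location.room == "lobby":
        return "show building map"
    return "do nothing"

print(action_for(Location(room="meeting_room")))  # prints "silence notifications"
```

Real systems derive the context from sensors (GPS, Wi-Fi, cameras) rather than a hard-coded field, but the sense-then-act structure is the same.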
Social awareness and skills. People are aware of the presence of others and interact with them in various ways. Similarly, emerging technologies such as the Mitsubishi DiamondTouch Table facilitate collaboration via a touch display that allows users to maintain eye contact while interacting with the display simultaneously.
Says Jacob, "All of these new interaction styles draw strength by building on users' preexisting knowledge of the everyday, nondigital world to a much greater extent than before."
Tiny devices and the 'fat finger' problem
Some researchers say that a logical extension of touch technology is gesture recognition, by which a system recognizes hand or finger movements across a screen or close to it without requiring an actual touch.
"Our technology is halfway there," IBM's Pinhanez says, "because we recognize the gesture of touching rather than the occlusion of a particular area. You can go over buttons without triggering them."
The occlusion problem, where a finger or hand blocks a user's view of what he's doing and leads him to make mistakes, is being attacked in some novel ways. Microsoft and MERL collaborated on development of a research prototype called LucidTouch, a two-sided, "pseudo-translucent" mobile device that allows users to issue commands with their fingers on either the front or the back of the device.
"The problem we are addressing is what some people have called the 'fat finger problem,'" says Patrick Baudisch at Microsoft Research. When a user touches the back of the device, an image of his fingers appears behind, rather than in front of, the on-screen targets, so his hand no longer blocks what he is trying to touch. LucidTouch can accept input from as many as 10 fingers at once.
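Two-sided input of this kind implies two small pieces of bookkeeping: touches sensed on the back panel must be mirrored horizontally to line up with what the user sees on the front, and the system must track several simultaneous touch points. This is a hypothetical sketch of that mapping, not LucidTouch's actual implementation; the screen width is an assumed value.

```python
SCREEN_WIDTH = 320   # assumed device width in pixels
MAX_TOUCHES = 10     # LucidTouch accepts up to 10 simultaneous fingers

def to_screen_point(x, y, side):
    """Map a raw touch to front-screen coordinates.

    A touch on the back of the device is seen "through" the screen,
    so its x coordinate is mirrored; front touches pass through as-is.
    """
    if side == "back":
        return (SCREEN_WIDTH - x, y)
    return (x, y)

def track(touches):
    """Resolve a list of (x, y, side) touches, capped at MAX_TOUCHES."""
    return [to_screen_point(x, y, side) for x, y, side in touches[:MAX_TOUCHES]]
```

Mirroring is what lets the rendered finger shadows land exactly under the targets the user intends to hit, which is the crux of the pseudo-translucency effect.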