Chip imitates ear to clean up mobile calls
05 September, 2008 10:03
If instead of writing this story I were telling it to you on my mobile phone, while rushing through an International Airport past crying babies and boarding announcements, you probably wouldn't understand half of what I'm about to say.
Mobile phones frequently sound terrible, and often it's not that you can't hear anything. The problem is that you can hear too much. Unintelligible background noise competes with the intelligible voices trying to communicate. Lloyd Watts thinks that to solve that problem, we need to make mobile phones more like us.
"Here's the insight that allowed us to build a product out of this little bit of neuroscience that we were doing. We said, 'If we could put a second microphone onto the mobile phone, we could turn the mobile phone into a creature that has two ears,'" Watts told an audience at the Hot Chips conference at Stanford last week.
Watts is CTO of Audience, which is based in Mountain View, California, and in February introduced a custom chip for mobile-phone noise suppression as well as echo cancellation. It turns out that having two microphones helps to distinguish a speaker's voice from background sounds, if there's a chip that can analyze all the signals. The Audience A1010 Voice Processor, which is designed to handle that, is already available in one phone sold in Japan and one in South Korea, and it will soon appear in more handsets, according to the company.
To tap into the human ear's ability to distinguish between sounds, Audience first had to understand both the ear and the human brain. It drew upon an advisory board of eight scientists, each of whom studies a different part of the brain. After identifying and diagramming the elements of human hearing, Audience designed a DSP (digital signal processor) with custom algorithms that carry them out.
The key to the company's chip is the Fast Cochlea Transform process, which attempts to do the same thing as the cochlea, a spiral chamber in the inner ear. The cochlea breaks up signals into different frequencies, allowing us to distinguish one pitch from another. Also, because we have two ears and two cochleas, the brain can figure out where a sound is coming from based on small differences in the loudness and timing of the sounds picked up by each cochlea, Watts said.
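The cochlea's trick of breaking a signal into frequency bands can be illustrated with a plain FFT filterbank. This is a generic sketch, not Audience's Fast Cochlea Transform; the sample rate, tone frequencies, and the 1 kHz band split are all invented for illustration.

```python
import numpy as np

# Illustrative only: separate a mixed signal into frequency bands,
# loosely analogous to how the cochlea distinguishes pitches.
fs = 8000                                    # sample rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)                # half a second of audio
voice = np.sin(2 * np.pi * 300 * t)          # stand-in "voice" at 300 Hz
noise = 0.5 * np.sin(2 * np.pi * 2500 * t)   # stand-in background at 2.5 kHz
signal = voice + noise

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# Energy per band: low (below 1 kHz) vs high (1 kHz and up)
low_band = spectrum[freqs < 1000].sum()
high_band = spectrum[freqs >= 1000].sum()
print(low_band > high_band)  # the 300 Hz tone dominates the low band
```

Once the signal is split this way, each band can be examined and treated separately, which is what makes pitch-by-pitch classification possible at all.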
Here's how the two microphones work: When you're talking into a phone with two microphones, your voice goes mostly into the one nearest your mouth. The second microphone typically will be opposite the one you normally speak into, on the back side of the phone.
Noises originating far from the phone reach the two microphones at about the same volume. There's also a delay -- maybe half a millisecond -- between when those background noises reach one microphone and the other. The A1010 factors in that delay and is also smart enough to figure out which incoming pitch belongs to your voice and which sounds come from other sources, Watts said, demonstrating the technology last week at Stanford.
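That inter-microphone delay can be estimated by cross-correlating the two signals and finding the lag of maximum similarity. This is a textbook time-delay-of-arrival sketch, not the A1010's actual algorithm; the sample rate and 8-sample delay are assumptions chosen so the delay works out to roughly half a millisecond.

```python
import numpy as np

# Sketch: estimate the arrival-time delay of a distant sound
# between two microphones via cross-correlation.
fs = 16000                        # sample rate in Hz (assumed)
delay_samples = 8                 # 8 samples = 0.5 ms at 16 kHz
rng = np.random.default_rng(0)
background = rng.standard_normal(fs)           # 1 s of diffuse noise

mic_front = background
mic_back = np.roll(background, delay_samples)  # same noise, delayed

# Cross-correlate and find the lag where the two signals line up
corr = np.correlate(mic_back, mic_front, mode="full")
lag = np.argmax(corr) - (len(mic_front) - 1)
print(lag)  # → 8, i.e. about 0.5 ms at 16 kHz
```

A delay near zero would instead suggest a sound very close to one microphone -- such as the user's own voice -- which is one cue for telling foreground from background.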
"Once we've decided what is a foreground sound -- voice -- and what is a background sound, then we can suppress the background sound," Watts said.
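The suppression step itself can be sketched as attenuating the spectral bins labeled as background before resynthesizing the audio. This is a toy spectral-gating example under invented assumptions (everything above 1 kHz is treated as background), not Audience's classifier.

```python
import numpy as np

# Toy spectral suppression: scale down bins classified as background,
# then resynthesize. Frequencies and thresholds are illustrative.
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
voice = np.sin(2 * np.pi * 300 * t)            # stand-in foreground
babble = 0.8 * np.sin(2 * np.pi * 2200 * t)    # stand-in background
mixed = voice + babble

spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

# Pretend classification has flagged everything above 1 kHz as background
background_bins = freqs > 1000
spectrum[background_bins] *= 0.05              # suppress, don't fully zero

cleaned = np.fft.irfft(spectrum, n=len(mixed))
# The residual above 1 kHz is now a small fraction of its original energy
```

Leaving a small residual rather than zeroing the bins outright avoids the hollow, artifact-laden sound that aggressive gating can produce.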
Audience's chip goes beyond the technology used in noise-reducing products such as the headphones pioneered by Bose, Watts said. Those headphones identify steady sounds such as ventilation fans and then put out their own steady tones to counter them. The Audience DSP can suppress even intermittent and changing sounds, such as someone talking near you.