Our faces, not just our ears, "hear" speech: McGill study

Published: 19 January 2009

Neural processing of speech involves somatosensory input

A McGill-led study has found that the perception of speech sounds is modified by stretching the facial skin in different directions: different patterns of skin stretch change which words subjects report hearing. The researchers used a robotic device to stretch the facial skin in ways it would normally be stretched during speech, and found that manipulating the skin while subjects simply listen to words alters the sounds they hear.

In an article published in the current issue of the Proceedings of the National Academy of Sciences (PNAS), McGill neuroscientist David Ostry of the Department of Psychology reports results from testing a group of 75 native speakers of American English.

Ostry and his colleagues at Haskins Laboratories and the Research Laboratory of Electronics at the Massachusetts Institute of Technology had subjects listen, one word at a time, to words drawn from a computer-produced continuum between "head" and "had". When the skin was stretched upward, words sounded more like "head"; with downward stretch, they sounded more like "had". A backward stretch had no perceptual effect. The subjects' choices were clearly influenced by how their facial skin was being manipulated.

"Our study provides clues on how the brain processes speech. There is a broad non-auditory basis to speech perception. This study indicates that perception has neural links to the mechanisms of speech production" said Ostry.
