Synching sound and motion

One distinct feature of badly dubbed Hong Kong action movies from the 1970s is English-language voiceovers that end long before the actor’s lips, shaping the equivalent syllables in what may have been Cantonese, stop moving.

German researchers wanted to learn more about the bridges the human brain builds to make sense of information arriving through multiple channels, such as sight, sound and touch. For example, concertgoers are able to match a singer’s lip movements and his hand gestures over a guitar with the music blasting out of speakers around the arena.

In a study published online Nov. 23 in the Proceedings of the National Academy of Sciences USA, the team opted to focus on how the brain of a trained musician might take in and translate sight, sound and touch information compared with the brain of a nonmusician. A pianist, for example, has to be able to read the sheet music and know where the hands should be placed, how long each note should last and how loudly or softly it should be played.

“For this study, we availed of the fact that the pianists specifically train in an activity, in which several sensory stimuli, that is visual and auditory information, movement and the striking of the piano keys, have to be connected,” said study senior author Uta Noppeney of the Max Planck Institute for Biological Cybernetics in a statement.

Synchronized or not

The researchers worked with a group of amateur pianists who’d been playing for at least six years, starting lessons before the age of 12. Each musician was shown two kinds of videos—one where a woman spoke short sentences directly into the camera, and another with a hand playing short melodies that matched the rhythms of the spoken lines. In some cases, the researchers tinkered with the video and audio tracks so that one would either be slightly slower or slightly faster than the other. The study participants were asked to determine in each case if the video and audio tracks were synchronized or not.

Noppeney and her team also scanned the musicians’ brains as they watched the footage. The researchers expected that the altered videos would trigger what they called an “error signal” in the pathways of the brain involved in processing the audiovisual information. The pianists’ brain scans were then compared against the scans from a group of nonmusicians, none of whom had learned to play any instrument, who were shown the same videos.

Similar scores

The musicians and the nonmusicians had similar scores when it came to identifying which videos featuring the speaking woman had been altered and which had not. However, when the researchers compared the results involving the musical videos, they found that the piano-playing study participants were more perceptive in identifying the altered videos, and were more conscious of just how much the music had been slowed down or sped up in each case.

A comparison of the brain scans revealed that the same sections of the brain were being activated when the study participants were reviewing both the spoken and musical videos. The researchers found that there was more activity in certain portions of the musicians’ brains when the participants were identifying the altered musical videos. In short, the team concluded, the musical training served to “fine-tune” the pathways of the brain involved in consolidating and translating the information from, say, seeing the finger movements and hearing the sounds of the piano keys.

Noppeney and her colleagues are planning further studies on the effect of musical training on the brain. One direction they’re considering is whether musicians trained on one instrument, such as the piano, can still recognize slightly out-of-sync videos featuring a different instrument, such as the violin.

E-mail the author at massie@massie.com.
