One of the most confusing sentences my non-Filipino-speaking college classmates ever heard was, “Bababa ka ba?” They knew from the intonation that a question had been posed, but couldn’t figure out how the same syllable repeated four times could possibly translate to either, “Are you getting off [at this stop]?” or “Are you going down[stairs]?”
(For the record, the confusing sentence runner-up was the reply, “Bababa.” There was no attempt to explain how the same syllable repeated three times could mean, “Yes.”)
The June 22 issue of the Journal of Neuroscience includes a report from American researchers who worked with several volunteers to identify a three-tier process by which the human brain identifies and makes sense of speech and language. The pathways that carry sound from the ear to the brain track it to its source and identify it so that it can be interpreted. The team, led by neuroscientist Josef Rauschecker of Georgetown University Medical Center, also found that this pathway in the human brain is very similar to one previously found in monkeys.
In a statement, Rauschecker said the similarities between these pathways in the human and monkey brains that allow language to be processed indicate that “in evolution, language must have emerged from neural mechanisms at least partially available in animals.”
The work by Rauschecker’s team complements a report published online June 30 in the journal Current Biology. As detailed by a team of British researchers, the findings suggest that the human brain’s ability to identify sounds as voices, music or something else starts in infancy.
“Human voices play a fundamental role in social communication, and areas of the adult ‘social brain’ show specialization for processing voices and their emotional content,” wrote the team led by King’s College London researcher Declan Murphy in their paper. “However, it is unclear when this specialization develops.”
Working with more than 20 babies ranging in age from 3 months to 7 months, Murphy and his colleagues imaged the children’s brains to see which parts were activated when they heard adults make noises such as laughing, crying or coughing, as well as familiar sounds made by toys or water. The results revealed that the parts of the brain that were active when the babies heard human voices are the same regions that are active in adult brains.
The team found that when the babies heard the recorded adult voices, even when no actual words were spoken, emotional sounds such as laughing and crying activated a specific portion of the brain. In studies of adults, this area is known to be involved in processing emotions.
A second part of the babies’ brains, one known to be involved in understanding human-made noises, was also active during the study. This same portion of the brain showed less activity when the water and toy noises were played.
Finding that the active areas of the babies’ brains lie close to the similarly activated areas in adult brains reassured the researchers, who found working with infants challenging. The results also underscore the importance of being able to follow and understand conversations. “It is probably because the human voice is such an important social cue that the brain shows an early specialization for its processing,” said study co-first author Anna Blasi of King’s College London in a statement. “This may represent the very first step in social interactions and language learning.”