Learning New Sounds

Speech. A one-syllable word for the entire complexity that makes humans, human. Humans’ ability to communicate with each other is unparalleled in any other species. While chimpanzees have the anatomy that should be able to produce speech, such as a pharynx, larynx, tongue, lips, and a pair of lungs, they lack the complex nervous system and oral flexibility that allow humans to speak so precisely. Babies’ ability to learn language is truly amazing. According to Business Insider, right after birth, newborns can not only tell the difference between their mother’s language and another language, but can also distinguish between two languages they have never heard. There are around 800 phonemes across all human languages combined, 40 of which occur in English, and studies show that at birth, babies can tell the difference between all 800 phonemes, implying that, in theory, babies could learn any language they are exposed to.

 

“Newborns can distinguish between two unheard languages."

 

Scientists tested this (and their findings indeed support it!) by gathering babies growing up in monolingual households; they had each baby listen to the same sound repeatedly, then inserted a similar but slightly different sound into the sequence and observed the baby’s response. For example, the baby would listen to “ba, ba, ba, pa,” and the scientists would look for a head turn, which would signal that the baby perceived a difference in the stimuli.

Babies can do this for sounds they have never been exposed to, showing the true complexity and beauty of babies’ learning abilities. This aptitude for speech sound learning is strongest from birth to about one year of age; around the 8-month mark, babies’ ability to tell the difference between the sounds slowly starts to diminish. As they start learning more of their native language, they stop paying attention to sounds that don’t change the meaning of words. They focus only on sounds that are important in their language and tend to group sounds from other languages under the closest-sounding category present in their native language. As the baby grows older, language learning becomes more difficult.

For adults, monolinguals and bilinguals alike, perceiving sounds from a different language is hard, especially when those sounds are similar to sounds in their native language(s). For example, native English speakers group various /d/-like sounds, such as the dental /d̪/ (produced with the tongue against the upper teeth) and the retroflex /ɖ/ (produced with the underside of the tongue against the area between the superior alveolar ridge and the hard palate), under the alveolar /d/ (produced with the tongue against the superior alveolar ridge). These particular sounds make a good test case because English has the alveolar /d/ in its native inventory, but not the dental or retroflex “d”s. Different people produce speech in different ways, but humans quickly learn the distributional properties of the input. In other words, we learn that most /d/ sounds sound a certain way, even when produced by different people or in different contexts. So when we hear /d/ sounds from someone with a non-native accent, we know they really mean to produce the /d/ we are more familiar with: we assimilate their /d̪/ or /ɖ/ into our /d/ category. For babies, even those raised in a monolingual household, the dental, retroflex, and alveolar “d”s are all distinct sounds, but for monolingual English-speaking adults, they are just funky ways to say “d.” To those adults, the dental /d̪/ and retroflex /ɖ/ so often found in Hindi are no different from the alveolar /d/ found in English. The hardest task for such a listener is to split the /d/ category back up into the alveolar, retroflex, and dental sounds.
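The category assimilation described above can be sketched as a toy model in a few lines of Python. This is purely illustrative, not the lab’s method: the feature values and category parameters below are made-up numbers, and each native category is modeled as a simple Gaussian along one hypothetical acoustic dimension. A listener who classifies every incoming token by its most likely native category will absorb dental-like and retroflex-like tokens into /d/.

```python
import math

# Hypothetical 1-D acoustic feature (e.g., a place-of-articulation cue).
# Each native English category is modeled as a Gaussian: (mean, std dev).
# All numbers are illustrative, not real phonetic measurements.
native_categories = {
    "/b/": (0.0, 1.0),
    "/d/": (5.0, 1.0),
    "/g/": (10.0, 1.0),
}

def gaussian_likelihood(x, mean, sd):
    """Probability density of x under a normal distribution."""
    return math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

def assimilate(token_value):
    """Assign an incoming sound to the most likely native category."""
    return max(native_categories,
               key=lambda c: gaussian_likelihood(token_value, *native_categories[c]))

# Hindi-like tokens that fall near the English alveolar mean are both
# absorbed into the /d/ category, mirroring what adult listeners do.
print(assimilate(4.2))  # dental-like token    -> /d/
print(assimilate(5.8))  # retroflex-like token -> /d/
```

In this sketch, splitting /d/ back into three categories would mean replacing one broad Gaussian with three narrower ones, which is one way to picture why relearning the distinction is so hard for adults.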

At the Myers Lab at the University of Connecticut, former graduate student Sayako Earle and principal investigator Emily Myers tested college students’ ability to learn these new sounds, and whether the time of day of learning matters. Their hypothesis was that memory consolidation during sleep might help people learn the Hindi sounds better. To test this, they trained participants either in the morning or in the evening (on a single day) and assessed their improvement over time. Participants in both groups took part in three sessions: in the first, subjects were taught the Hindi sounds and then tested on their ability to discriminate and identify them; in the second and third sessions, 12 and 24 hours later, subjects were tested on how well their learning had been retained. Although the time of day of training differed between the groups, both schedules included an overnight interval so that gains after sleep could be assessed. The study showed that only the subjects trained in the evening improved after sleep; participants trained in the morning actually got worse after the overnight interval.

Earle and Myers hypothesized that this is most likely because the morning group spent the entire day exposed to their native language, and this exposure before sleep may have interfered with the Hindi sounds they learned, whereas the evening group had much less interference before sleep.

Pictured above: Dr. Emily Myers

Why adults initially fail to distinguish these sounds is still being researched, but essentially, because the dental /d̪/ and retroflex /ɖ/ do not change the meaning of any words in English, the listener perceives them as “funky-sounding” alveolar /d/s rather than as completely different sounds. As long as a sound is somewhat similar to a native-language sound, the listener recognizes it as part of the native language. This trait evolved for efficiency: the human brain tends to ignore signals that do not change any meanings of words, which makes language processing extremely efficient for the human brain.

 

“The brain tends to ignore signals that do not change any meanings in words."

 

While losing the ability to distinguish speech sounds outside the native inventory makes language processing maximally efficient, it also makes non-native speech sounds harder to learn as an adult. Beyond our previous exposure to certain sounds, sleep and the time of day at which we learn new sounds are crucial to how successful that learning is. Sleep helps consolidate newly learned non-native sounds, while native-language interference hinders that learning in adults. Where most adults struggle with this task, babies pick it up very quickly, thanks to their remarkable ability to learn language and discriminate speech sounds! Even with all the extensive research that has been done and is underway, there are still many unanswered questions in the field of speech and language sciences.