[ Neuron vol. 67 pp. 713 – 727 ’10 ] These are my notes on another review written 10 years later by Patricia Kuhl (who wrote the PNAS review 10 years earlier, the first paper cited in the previous post). If you found the previous post interesting, or if the acquisition of language by infants interests you, I suggest you start with the previous post, then move on to this one. Kuhl’s article is well written and contains over 100 references to the current literature if you want to go further. As a neurologist, I know how the evidence for the various statements she makes was acquired. Fortunately, her review explains the various techniques used.
We all know that kids learn languages much better than adults. My uncle used to talk about his World War II experience with little orphan kids in Europe who spoke 3 or 4 languages. This is good evidence for a ‘critical period’ of language development (just as there is one for ocular dominance). The critical periods differ for the different components of language: phonetic learning peaks just before 1 year of age, syntactic learning at 18 – 36 months, and vocabulary from 18 months onward (though vocabulary has less of a critical period).
In toto, human languages have 600 phonemes (distinct speech sounds), with each individual language using about 40. No machine in the world can derive the phonemic inventory of a language from natural language input. Models improve when exposed to motherese (see previous post).
Infants who are better at discriminating native-language phonetic contrasts at 7 months do better on language measures at 30 months, while those who remain equally good at distinguishing native and nonnative contrasts do worse.
Interestingly, Japanese babies hear both R and L sounds, but the two don’t represent different phoneme categories in Japanese. The two languages show different distributional patterns: R and L occur often in English, while in Japanese the most frequent sound is the Japanese r, which is related to, but distinct from, both English variants.
It’s the distribution which the infants are picking up. In one experiment, 6- to 8-month-old infants were exposed for 2 minutes to 8 sounds forming a continuum from /da/ to /ta/. All infants heard all the stimuli, but experienced different distributional frequencies of the 8 sounds. The bimodal group heard more frequent presentations of the stimuli at the ends of the continuum, while the unimodal group heard a Gaussian distribution of stimuli, with the most common stimuli in the middle of the range. Infants in the bimodal group could discriminate /da/ from /ta/ while those in the Gaussian group couldn’t. Even more interestingly, perception improved when the infants could look at a face making the sounds. Discrimination didn’t occur if the same facial motion was used with all auditory stimuli.
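To make the distributional idea concrete, here is a minimal sketch of my own (not from the paper): it asks whether the tokens a group heard are better explained by one phonetic category or two. The stimulus coding (steps 1 – 8), the presentation counts, and the acoustic jitter are all invented for illustration.

```python
# Minimal sketch (not from the paper) of distributional phonetic learning.
# Each "group" hears tokens from an 8-step /da/-/ta/ continuum with different
# presentation frequencies; a Gaussian mixture model then compares a
# one-category vs. two-category account of what was heard, using BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def sample_exposure(weights, n_tokens=400):
    """Draw n_tokens stimuli from continuum steps 1..8 with the given frequencies."""
    steps = np.arange(1, 9)
    probs = np.asarray(weights, dtype=float)
    probs /= probs.sum()
    tokens = rng.choice(steps, size=n_tokens, p=probs)
    # small acoustic jitter so the data aren't purely discrete
    return (tokens + rng.normal(0, 0.3, n_tokens)).reshape(-1, 1)

# Bimodal group: most tokens near the ends of the continuum (invented counts).
bimodal = sample_exposure([5, 8, 3, 1, 1, 3, 8, 5])
# Unimodal (Gaussian) group: most tokens in the middle of the range.
unimodal = sample_exposure([1, 2, 4, 8, 8, 4, 2, 1])

for name, data in [("bimodal", bimodal), ("unimodal", unimodal)]:
    bic = {k: GaussianMixture(n_components=k, random_state=0).fit(data).bic(data)
           for k in (1, 2)}
    best = min(bic, key=bic.get)
    print(f"{name:8s} exposure -> BIC favors {best} category(ies)")
```

Run as written, the bimodal exposure comes out favoring two categories and the unimodal exposure one, which mirrors the infants’ discrimination behavior: the statistics of the input alone are enough to separate (or collapse) the categories.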
Before they know the meaning of a single word, 8-month-old infants detect likely word candidates through their sensitivity to the transitional probabilities between adjacent syllables. The syllables -py and new are more likely to occur in this order (e.g. Happy New Year) than the reverse (newpy ! ? !). When exposed to a 2-minute string of nonsense syllables with no acoustic breaks or other cues to word boundaries, they treat syllables having high transitional probabilities as words. Even sleeping newborns do this (event-related potentials were used to pick it up). So their brains are very sophisticated statistical analyzers. I find this remarkable.
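Here is a minimal sketch of the transitional-probability idea itself (mine, not the experimenters’ method): TP(B | A) is just the count of syllable A followed by B, divided by the count of A. The nonsense “words” and the stream below are invented; the point is only that syllable pairs inside a word recur with higher TP than pairs that straddle a word boundary.

```python
# Sketch of Saffran-style statistical word segmentation (illustrative only).
# Transitional probability TP(B | A) = count(A followed by B) / count(A).
import random
from collections import Counter

random.seed(0)
words = ["bidaku", "padoti", "golabu"]              # hypothetical nonsense words
syllabify = lambda w: [w[i:i + 2] for i in range(0, len(w), 2)]

# Build a continuous stream with no pauses, as in the infant experiments.
stream = []
for _ in range(300):
    stream.extend(syllabify(random.choice(words)))

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

print("within-word pair  bi->da :", round(tp.get(("bi", "da"), 0.0), 2))  # high TP (~1.0)
print("across-boundary   ku->pa :", round(tp.get(("ku", "pa"), 0.0), 2))  # lower TP (~0.33)
```

A learner that treats high-TP syllable pairs as belonging together, and low-TP pairs as boundaries, recovers the “words” from nothing but the statistics of the stream.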
A great experiment exposed English infants at 9 months of age to someone speaking Mandarin to them over 12 sessions during a period of one month (how long were the sessions?). Books were read, toys were played with. After the month was over, the English babies did as well on a Mandarin phonetic contrast which doesn’t occur in English as 10-month-old Taiwanese infants (who had been exposed to Mandarin the whole time). There was no ‘forgetting’ of the Mandarin over the next 30 days. Even more interestingly, no learning occurred from a television or audiotape giving the Mandarin sounds. So the social exposure is crucial (perhaps this is why reading to your kids and playing with them is so important, as well as why it’s so much fun for you and them).
Bilingual adult speakers, and even school-aged bilinguals, show enhanced executive control skills (measured how?) < Reference Neuroimage vol. 47 pp. 414 – 422 ’10 >. Infants learning Spanish in a natively English environment started babbling with the prosodic patterns of Spanish (but only in response to Spanish).
There’s a whole other neuroscience ballgame under intense study, in which areas of the brain used to produce an action are activated in response to just WATCHING the action (I don’t have time to go into it here, but the neuroscientists, if any, reading this know what I’m talking about). Here’s a link http://en.wikipedia.org/wiki/Mirror_neuron. Just listening to speech in 6 – 12 month olds activates brain areas which later will be responsible for speech production. They used functional MRI (fMRI) to pick this up. But beware. I have problems with all fMRI work because the authors invariably seem to find exactly what they expected, and the raw data is never given — science just doesn’t work like that. This type of work is often NOT reproducible. In fact, some of it has been called pseudocolor phrenology.
Even this good review is marred by the occasional idiotic statement, as when Kuhl goes outside her field: “Brain oscillations in four frequency bands have been associated with cognitive effects — theta (4 – 7 Hertz), alpha (8 – 13 Hertz), beta (13 – 30 Hertz) and gamma (30 – 100 Hertz)”. Wonderful, but malheureusement that’s just about ALL the brain oscillations we know about (except those frequencies below 4 Hertz, which are seen only in deep sleep or general anesthesia).
The acoustic stretching in motherese, observed across languages, makes phonetic units more distinct from one another.
Children with autism spectrum disorder (whatever that is) prefer to listen to nonspeech rather than speech when given a choice, and this preference shows up in their evoked brain electrical responses to speech.
Young zebra finches need visual interaction with a tutor bird to learn songs in the lab.
If social interaction makes learning easier, then infants might restrict their learning to signals that derive from live humans. This may be why plunking them in front of a TV isn’t a good idea (although, they do seem interested in it).
One must now wonder about the apocryphal king of old, who raised children without any verbal contact, to see what language they would develop. Did this ever happen in real life?
Comments
For an interesting review of two books on how language developed (as opposed to how infants pick it up) see Science vol. 329 pp. 1600 – 1601 ’10 (24 September). The field is as contentious as ever. Chomsky remains influential and contentious, but not universally accepted. This has been going on for years. For the early years, with an explanation of Chomsky’s many views on the subject, see “The Linguistic Wars” by R. A. Harris (note that it was written 17 years ago).