Not only is the predisposition to language genetic (and if you don't hear one, you'll make one up), but your *ears* are physically differentiated.
It has long been known that different parts of the brain process different kinds of sounds. The left hemisphere, for example, deciphers rapidly changing sounds such as speech, whereas the right hemisphere dominates the processing of tones and musical sounds. But, explains Yvonne Sininger of the David Geffen School of Medicine at the University of California at Los Angeles, no one has looked closely at the role played by the ear in processing auditory signals. Sininger and co-author Barbara Cone-Wesson of the University of Arizona spent six years evaluating the hearing of newborns before they left the hospital. More than 3,000 babies had their hearing tested using a tiny probe (see image) that measures how much certain sounds are amplified by parts of the ear.
The researchers tested the children's hearing using two types of sounds: sustained tones and a set of rapid clicks, which were timed to imitate speech. "We were intrigued to discover that the clicks triggered more amplification in the baby's right ear, while the tones induced more amplification in the baby's left ear," Sininger remarks. This parallels how the brain processes speech and music, except that the sides are reversed because of the brain's cross connections. According to the authors, the results indicate that auditory processing begins in the ear, before sounds reach the brain, and could help improve therapies for hearing-impaired patients. Notes Cone-Wesson: "Sound processing programs for hearing devices could be individualized for each ear to provide the best conditions for hearing speech or music."

--Sarah Graham

http://sciam.com/article.cfm?chanID=sa003&articleID=0000EC2C-11A4-1142-87D683414B7F4945