Dr. Patel: Linking Neurobiology and Music

Chris Sinclair | Staff Photographer
Dr. Aniruddh Patel of the San Diego-based Neurosciences Institute gives a lecture at the Barclay Theater.

First, Charles Darwin argued that our ability to love music was innate and biologically powerful. Then, Steven Pinker argued the opposite: music is invented and biologically superficial. Now, Dr. Aniruddh Patel of the San Diego-based Neurosciences Institute proposes a middle ground between the two theories, viewing music as a kind of transformative technology. It is invented, yet biologically powerful.

Standing before a packed house that included prominent doctors and psychologists at the Irvine Barclay Theater last Tuesday, May 18, Patel began his lecture on how music, evolution and the human mind intersect. Speaking at the 16th UCI Distinguished Lecture Series on Brain, Learning and Memory, he described his search for how the brain processes music. This, he argued, will tell scientists more about how the brain actually functions.

Patel received his bachelor’s degree in biology from the University of Virginia and his Ph.D. in organismic and evolutionary biology from Harvard University. He has played the clarinet since he was young, but it wasn’t until the end of his college years that he chose to combine neurobiology and music to find clues as to how the brain processes music and language. He claims that the similarities and differences between the two will provide a much better understanding of the brain itself.

Dressed in a classic brown suit, Patel is laid back and surprisingly funny. He knows how to command a crowd and interacts with the audience, showing YouTube clips of parrots and chimps. His anecdotes draw eruptions of laughter, and many listeners were already quoting him at the post-lecture reception. He is the kind of lecturer that organizations invite because he engages the audience and his words linger in your mind even after you leave the room.

“Mr. Patel is the perfect lecturer to assist in the CNLM’s primary goal of public outreach. He learns from the brain and that is why we all do science – to study the fundamental operation of the brain and then go about learning how to fix it,” said Craig Stark from the Center for the Neurobiology of Learning and Memory, the organization that co-sponsored the event with UCI MIND.

Using a variety of techniques such as theoretical analyses, acoustic research, comparative studies of nonhuman animals and neuroimaging, Patel set out to support his theory that music is what he calls “transformative technology.” The term refers to the idea that our capacity to love music was nurtured, not selected for by evolution, yet is biologically fundamental, with lasting effects on nonmusical brain systems.

According to Patel, while there is music-specific knowledge, growing evidence suggests that music processing is built from nonmusical brain functions. So why, then, is music ancient, universal and so pervasive in human life? The answer, he said, lies in music’s emotional power.

“Music is like the controlling of fire. Humans invented it way back when, but it is a thing that has become so wildly popular that it seems innate,” Patel said.

Listening to music gives rise to emotion in multiple ways, including episodic memory, visual imagery, emotional contagion, expectancy, evaluative conditioning and brainstem reflexes. Music is unique in mixing all of these together in a way that no other aspect of life can.

Patel then provided an in-depth look at the two factors he feels are most important in studying the interaction between the brain and music – tonality and synchronization.

Tonality is a uniquely musical system for organizing pitches: it has scales and intervals, puts differential emphasis on certain pitches and gives various pitches unique perceptual qualities. For example, the final note may be acoustically identical in two different melodies, but depending on the context of the entire series, one will be heard as a resolve pitch and the other as a leading pitch. A resolve pitch gives the listener a sense of finality, while a leading pitch creates anticipation of a further note to complete the phrase.

Neuroimaging of healthy brains shows that tonal, or harmonic, processing overlaps with linguistic grammatical processing. This is because tonality involves discrete elements, principles of combination, hierarchically organized sequences and abstract structural categories of sounds, all of which are features of language syntax as well.

“Tonality does involve domain-specific representations, but it shares processing resources with language,” Patel said. “Tonality processing also shares some of the same mechanisms with language, which may explain why tonal music is so ‘sticky’.”

Synchronization to a musical beat emerges spontaneously and is a special form of rhythmic entrainment. It involves stimulus complexity, tempo flexibility and cross-modality, which means that we move in response to a beat rather than simply uttering another sound.

But it is not just the musical mind that can sync to a beat; a nonmusical brain system can do it as well through vocal learning (VL). Outside of humans, vocal learning is very rare in nature, since it requires specialized brain circuits linking auditory and motor centers.

Thus the vocal learning and rhythmic synchronization hypothesis was born. It predicts that only VL species are capable of learning to sync to a musical beat; dogs, cats and chimps, among others, are unable to. A recent study used monkeys to test the theory. For a year, researchers attempted to train monkeys to bang a drum in time with a metronome, but the best the animals could do was hit a fraction behind the beat. The monkeys displayed reactive behavior, responding after each sound, whereas humans exhibit anticipative behavior, inevitably hitting either exactly on the beat or slightly ahead of it.

This is where YouTube meets neuroscience. Snowball, the parrot behind the YouTube sensation of “avian dancing,” was shown to dance in sync with human music. This raised an abundance of questions in the scientific community; for instance, was the bird given timing cues by humans and, if not, could she adjust to different tempi?

In a controlled experiment, Snowball underwent a series of tests, dancing to 11 different tempi while human movement was suppressed. Quantitative analyses of the videos showed true synchronization to a musical beat, though it occurred in bouts, much like the dancing of a human child.

The result prompted Harvard researchers to ask whether there are more VL animals, and they turned, once again, to YouTube. They analyzed over one thousand videos of animals and music but found only 14 other cases similar to Snowball’s. Of these 14, 13 were parrots. The other was a circus elephant.

“Parrots are incredibly social and that may be why they enjoy music and have the ability to sync to a beat,” Patel said. “But this ability is far from innate; it is not shaped by natural selection and it is definitely not part of their natural behavior.”

Patel also studies the effects of music within an individual lifetime, a growing research area that examines nonmusicians and compares persistent versus transient effects. This focus is a crucial part of studies on cognitive recovery and verbal fluency after a stroke.

Simply put, music builds on brain systems that evolved for other purposes. While our draw toward music is an invention, it can reshape those systems within the lifetime of an individual. Empirical studies of children, mammals and birds are leading the study of music as a kind of transformative technology.

“In essence,”  Patel said, “studying music will help reveal how both the mind and memory work.”