
Learning music changes how our brains process language, and vice versa

Language and music seem to go hand in hand in the brain, according to new research. The team explains that music-related hobbies boost language skills by influencing how speech is processed in the brain. But flexing your language skills, for example by learning a new language, also shapes how our brains process music, the authors explain.

Image credits Steve Buissinne.

The research, carried out at the University of Helsinki’s Faculty of Educational Sciences in cooperation with researchers from Beijing Normal University (BNU) and the University of Turku, shows that there is a strong neurological link between language acquisition and music processing in humans. Although the findings are limited by the size and cultural makeup of the participant sample, the authors are confident that further research will confirm their validity on a global scale.

Eins, Zwei, Polizei

“The results demonstrated that both the music and the language programme had an impact on the neural processing of auditory signals,” says lead author Mari Tervaniemi, a Research Director at the University of Helsinki’s Faculty of Educational Sciences.

“A possible explanation for the finding is the language background of the children, as understanding Chinese, which is a tonal language, is largely based on the perception of pitch, which potentially equipped the study subjects with the ability to utilise precisely that trait when learning new things. That’s why attending the language training programme facilitated the early neural auditory processes more than the musical training.”

The team worked with Chinese elementary school pupils aged 8 to 11, whom they monitored over one full school year. Each participant attended either a music training programme or a similar programme designed to help them learn English. During this time, the authors measured and recorded the children’s brain responses to auditory stimuli, both before the programmes began and after they concluded. This was done using electroencephalogram (EEG) recordings: 120 children were recorded at the start, and 80 of them were recorded again one year later.
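For a rough sense of what “measuring brain responses” involves, EEG studies of this kind typically average many stimulus-locked snippets of the recording into an event-related potential. The Python sketch below simulates that averaging step with made-up numbers; it is a generic illustration, not the authors’ actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 250                                    # sampling rate in Hz (illustrative)
n_trials, epoch_len = 200, fs               # 200 one-second epochs

# Simulated single-channel EEG: a small auditory-evoked bump buried in noise.
t = np.arange(epoch_len) / fs
evoked = 5e-6 * np.exp(-((t - 0.15) ** 2) / 0.001)   # ~150 ms response, in volts
epochs = evoked + 10e-6 * rng.normal(size=(n_trials, epoch_len))

# Averaging across trials cancels the random noise and leaves the evoked
# response; changes in this average are what a study like this compares
# before versus after a training programme.
erp = epochs.mean(axis=0)
print(f"peak response at {t[erp.argmax()] * 1000:.0f} ms")
```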

During the music training classes, pupils were taught to sing from both hand signs and sheet music and, obviously, practised singing quite a lot. Language training classes combined exercises in both spoken and written English, since English relies on a different orthography (writing system) than Chinese. Both programmes ran in one-hour sessions held twice a week, either after school or during school hours, throughout the school year. Around 20 pupils and two teachers attended each session.

All in all, the team reports that pupils who underwent the English training programme showed enhanced processing of musical sounds in their brains, particularly in regard to pitch.

“To our surprise, the language program facilitated the children’s early auditory predictive brain processes significantly more than did the music program. This facilitation was most evident in pitch encoding when the experimental paradigm was musically relevant,” they explain.

The results support the hypothesis that music and language processing are closely related functions in the brain, at least as far as young brains are concerned. The authors explain that both music and language practice help modulate the brain’s ability to perceive sounds, since both rely heavily on sound. That being said, we can’t yet say for sure whether the two have the exact same effect on the developing brain, or whether they would influence it differently.

At the same time, the study used a relatively small sample, and all participants shared the same cultural and linguistic background. Whether children who are native speakers of other languages would show the same effect remains an open question for future research.

The paper “Improved Auditory Function Caused by Music Versus Foreign Language Training at School Age: Is There a Difference?” has been published in the journal Cerebral Cortex.


AI developed to tackle physics problems is really good at summarizing research papers

New research from MIT and elsewhere has produced an AI that can read scientific papers and generate a plain-English summary of one or two sentences.


Image credits Mike Thelwall, Stefanie Haustein, Vincent Larivière, Cassidy R. Sugimoto (paper). Finn Årup Nielsen (screenshot).

A big part of our job here at ZME Science is to trawl through scientific journals for papers that look particularly interesting or impactful. These papers are written in dense, technical jargon, which we then take and present in a (we hope) pleasant and easy-to-follow way that anybody can understand, regardless of their educational background.

MIT researchers are either looking to make my job easier or get me unemployed, I’m not exactly sure which yet. A novel neural network they developed, together with computer scientists, journalists, and editors, can read scientific papers and render a short, plain-English summary.

Autoread

“We have been doing various kinds of work in AI for a few years now,” says Marin Soljačić, a professor of physics at MIT and co-author of the research.

“We use AI to help with our research, basically to do physics better. And as we got to be more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

It’s far from perfect at what it does right now; in fact, the neural network’s abilities are quite limited. Even so, it could prove to be a powerful resource in helping editors, writers, and scientists scan large numbers of studies for a quick idea of their contents. The system could also one day find applications in other areas of language processing, including machine translation and speech recognition.

The team didn’t set out to create the AI for the purpose described in this paper. In fact, they were working to create new AI-based approaches to tackle physics problems. During development, however, the team realized the approach they were working on could be used to solve other computational problems — such as language processing — much more efficiently than existing neural network systems.

“We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm,” Soljačić adds.

Neural networks generally attempt to mimic the way our brains learn new information. The computer is fed many different examples of a particular object or concept to help it ‘learn’ what the key, underlying patterns of that element are. This makes neural networks our best digital tool for pattern recognition, for example identifying objects in photographs. However, they don’t do nearly as well when it comes to correlating information across long sequences of data, such as a research paper.

Various tricks have been used to improve their capability in that latter area, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU). All in all, however, classical neural networks are still ill-equipped for any sort of real natural-language processing, the authors say.
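For a concrete sense of what that conventional approach looks like, here is a minimal sketch in Python using PyTorch’s built-in nn.LSTM. The vocabulary size, dimensions, and toy input are illustrative assumptions on my part, not the configuration the authors benchmarked:

```python
import torch
import torch.nn as nn

# A conventional recurrent encoder: each word's embedding updates a hidden
# state, and the final hidden state has to compress the whole sequence.
vocab_size, embed_dim, hidden_dim = 1000, 64, 128
embed = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

tokens = torch.randint(0, vocab_size, (1, 50))   # a toy 50-word "document"
outputs, (h_n, c_n) = lstm(embed(tokens))

# h_n is the fixed-size summary of everything read so far; for long
# documents, this bottleneck is where LSTMs and GRUs start to forget.
print(h_n.shape)   # torch.Size([1, 1, 128])
```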

So, what they did was base their neural network on mathematical vectors, instead of on the multiplication of matrices (the classical neural-network approach). This is very deep math territory but, essentially, the system represents each word in the text by a vector: an object with a certain length and direction, created and altered in a multidimensional space. Encyclopaedia Britannica defines a vector, in mathematics, as a quantity that has “both magnitude and direction but not position,” listing velocity and acceleration as examples.

The network then uses each of these vectors, as the words are read in, to successively modify a starting vector. The final vector, or set of vectors, is then translated back into a string of words. The name the team gave this approach, thankfully, is much easier to wrap your head around: RUM (rotational unit of memory).
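To make the “rotational” idea more concrete, here is a toy Python sketch of a norm-preserving rotational memory update, under the simplifying assumption that each incoming word vector rotates the hidden state within the plane the two span. The function name, fixed angle, and random embeddings are hypothetical stand-ins; the actual RUM learns its rotations during training:

```python
import numpy as np

def rotate_toward(state, target, angle):
    """Rotate `state` by `angle` radians in the plane spanned by `state`
    and `target`. Norm-preserving, like a rotational memory update."""
    u = state / np.linalg.norm(state)
    v = target - np.dot(target, u) * u      # part of target orthogonal to state
    v_norm = np.linalg.norm(v)
    if v_norm < 1e-12:                      # target parallel to state: no plane
        return state
    v = v / v_norm
    return np.linalg.norm(state) * (np.cos(angle) * u + np.sin(angle) * v)

rng = np.random.default_rng(0)
h = rng.normal(size=8)                      # the hidden "memory" vector
for _ in range(5):                          # one update per word read in
    word_vec = rng.normal(size=8)           # stand-in for a word embedding
    h = rotate_toward(h, word_vec, angle=0.3)

print(round(float(np.linalg.norm(h)), 6))   # the norm never changes
```

Because rotations preserve a vector’s length, information encoded in the state isn’t repeatedly shrunk or blown up the way repeated matrix multiplications can do, which is one intuition for why such units “remember better.”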

“RUM helps neural networks to do two things very well,” says Preslav Nakov, a senior scientist at the Qatar Computing Research Institute and paper co-author. “It helps them to remember better, and it enables them to recall information more accurately.”

RUM was developed to help physicists study phenomena such as the behavior of light in complex engineered materials, the team explains. However, the team soon realized that “one of the places where […] this approach could be useful would be natural language processing.”

Artificial summaries

Soljačić says he recalls a conversation with Mićo Tatalović, a former Knight Science Journalism fellow at MIT, a former editor at New Scientist magazine, and co-author of the study, who said that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was, at the time, exploring AI in science journalism as his Knight fellowship project.

“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”

As a proof of concept, the team ran the same research paper through a conventional (LSTM-based) neural network and through their RUM-based system, asking each to produce a short summary. The end results were dramatically different. RUM can read through an entire research paper, not just its abstract, and summarize its content. The team even ran the present study through RUM (they were probably just showing off at this point).

Here’s the summary produced by the LSTM system:

“‘Baylisascariasis,’ kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed ‘baylisascariasis,’ kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed ‘baylisascariasis,’ kills mice, has endangered the allegheny woodrat.”

Here’s the one the RUM system produced:

“Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.”

Here’s the neural network’s summary of the study we’re discussing:

“Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.”

You guys like my coverage better, though, right? Right…?

The paper “Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications” has been published in the journal Transactions of the Association for Computational Linguistics.