Tag Archives: music


Musical training makes your brain better at paying attention

Musical training won’t just make you cool at get-togethers — it also gives you better control and focus over your attention, new research reports.


Image via Pixabay.

Individuals who train in music see lasting improvements in the cognitive mechanisms that make us more attentive and harder to distract, the study reports. Trained musicians exhibit greater executive control of attention (a main component of the attentional system) than non-musicians, it explains, and this effect increases the longer they train in music.

Professional advantage

“Our study investigated the effects of systematic musical training on the main components of the attentional system. Our findings demonstrate greater inhibitory attentional control abilities in musicians than non-musicians,” explained lead investigator Paulo Barraza, PhD, of the Center for Advanced Research in Education, University of Chile, Santiago, Chile.

“Professional musicians are able to more quickly and accurately respond to and focus on what is important to perform a task, and more effectively filter out incongruent and irrelevant stimuli than non-musicians. In addition, the advantages are enhanced with increased years of training.”

Our attention is made up of three types of functions: alerting, orienting, and executive control. The alerting function is associated with maintaining states of readiness for action. The orienting function is linked to the selection of sensory information and change of attentional focus. The executive control function is involved both in the suppression of irrelevant, distracting stimuli and in top-down attentional control. Each is handled by an anatomically-distinct neural network, the team writes.

For the study, the team worked with 18 professional pianists and a matched group of 18 non-musician professional adults, whom they ran through an attentional network test. The musician group consisted of full-time conservatory students or conservatory graduates from Conservatories of the Universidad de Chile, Universidad Mayor de Chile, and Universidad Austral de Chile. On average, participants in this group had over 12 years of practice. “Non-musicians” were university students or graduates who had not had formal music lessons and could not play or read music.

The participants were asked to view a series of rapidly-changing images and provide immediate feedback on what they were being shown, to test the efficiency of their reactive behavior. On average, the musician group had a score of 43.84 milliseconds (ms) for alerting functions, 43.70 ms for orienting, and 53.83 ms for executive functions, the team reports. For non-musicians, the mean scores were 41.98 ms, 51.56 ms, and 87.19 ms, respectively. Higher scores indicate less efficient attentional control (i.e. poorer control of attention), so the musicians’ advantage showed up in orienting and, most strikingly, in executive control, while the two groups were virtually tied on alerting.
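For readers curious where such millisecond scores come from, here is a minimal sketch of how the classic Attention Network Test derives its three network scores from reaction times under different cue and flanker conditions. The reaction times below are invented for illustration, and the exact conditions used in this study may differ.

```python
# Sketch of how Attention Network Test (ANT) scores are typically computed.
# All reaction times (ms) below are invented for illustration only.

mean_rt = {
    "no_cue": 590.0,       # no warning before the target appears
    "double_cue": 548.0,   # warning cue with no location information
    "center_cue": 560.0,   # cue at the center of the screen
    "spatial_cue": 512.0,  # cue at the target's upcoming location
    "congruent": 540.0,    # flankers point the same way as the target
    "incongruent": 627.0,  # flankers point the opposite way (conflict)
}

# Each network score is a difference between two conditions. For the
# executive network, a LARGER score means WORSE conflict control.
alerting = mean_rt["no_cue"] - mean_rt["double_cue"]
orienting = mean_rt["center_cue"] - mean_rt["spatial_cue"]
executive = mean_rt["incongruent"] - mean_rt["congruent"]

print(f"alerting:  {alerting:.2f} ms")
print(f"orienting: {orienting:.2f} ms")
print(f"executive: {executive:.2f} ms")
```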

The authors say their results point to musical training having a lasting (and beneficial) effect on attention networks that previous research didn’t spot.

“Our findings of the relationship between musical training and improvement of attentional skills could be useful in clinical or educational fields, for instance, in strengthening the ability of ADHD individuals to manage distractions or the development of school programs encouraging the development of cognitive abilities through the deliberate practice of music,” says co-author David Medina, from the Department of Music, Metropolitan University of Educational Sciences, Santiago, Chile.

“Future longitudinal research should directly address these interpretations.”

The paper “Efficiency of attentional networks in musicians and non-musicians” has been published in the journal Heliyon.

Scientists played music to aging cheese to change its taste. Hip-hop worked the best

Non-stop loops of rock, hip-hop, classical, ambient, and techno music were played to aging Emmental cheese wheels to see whether the sounds would affect the taste. Surprisingly, the music did appear to have an effect on the cheese, especially the hip-hop music: it made for a stronger and fruitier smell.

They look similar but taste slightly different. Image credits: Bern University / Beat Wampfler.

Cheese-making is big business. Most people love a healthy chunk of tasty cheese, but for some people, the old Cheddar, Camembert, or even Danish Blue just won’t cut it. As a result, there’s immense variability in the cheeses you can find on shelves, ranging from fruity cheeses to mold cheeses and even some cheeses made with worms.

But, regardless of what they contain or how extreme they are, all cheese-making involves a dance between milk and fermenting bacteria. Despite our thousands of years of making and eating cheese, we’re still only now learning some things about this process and the bacteria that drive it. To test a rather quirky theory, Swiss producer Beat Wampfler and a team of researchers from the Bern University of the Arts played different types of music for 6 months to several aging Emmental wheels.

They split the wheels of cheese into several groups: one wheel for each musical genre (rock, hip-hop, classical, ambient, and techno), wheels exposed to low-, medium-, and high-frequency tones, and a control wheel left to mature in silence.

The researchers then subjected the cheeses to two types of tests: laboratory analysis (which has not been finished yet) and a taste test carried out by a star-studded line-up. Both highlighted differences between the cheeses. The sample from the medium-frequency cheese exhibited the strongest taste, along with the control sample. The hip-hop cheese, however, was considered the tastiest.

“The experiment was a success, and the results are amazing: the bio-acoustic impact of sound waves affects metabolic processes in cheese, to the point where a discernible difference in flavour becomes apparent – one which can even be visualised using food technology,” researchers write in a press release.

The tests were quite thorough. Core samples were taken from the cheese wheels for the sensory screening tests immediately before the evaluation, with each core sample being approximately 10 cm long and 1 cm in diameter.

Each tester received half a core sample, served to them on odorless and flavorless paper plates. It was a “blind” test, so they did not know which cheese they were tasting. The sensory screening of the cheese samples was carried out based on a consensus profile, the press release reads.

Image credits: University of Bern / Beat Wampfler

Hip-hop cheese appeared to be on top for most people, having a stronger and more fruity flavor, something regarded as favorable in the context. Of course, the taste test is subjective and not everyone agreed. The Mozart cheese, which had a milder taste, was preferred by some.

“My favorite cheese was that of Mozart, I like Mozart but it’s not necessarily what I listen to… maybe a sweet little classical music it does good to the cheese,” chef and jury member Benjamin Luzuy tells Agence France-Presse. Luzuy also told Reuters that “The differences were very clear, in term of texture, taste, the appearance, there was really something very different.”

While this is obviously a cool project with a creative approach, perhaps a healthy dose of skepticism is also required on top of that cheese. The lab analyses have not yet been completed, and the study itself has not been peer-reviewed — there’s a certain advertising bonus associated with this whole thing, so it’s safe to say that it will still be a while before the results are confirmed. For instance, the heat generated by the transducer playing the music might also have influenced how the cheeses aged.

The scientists themselves acknowledge the need for a stronger case for this type of study.

“In general, it can be confirmed that the discernible sensory differences detected during the screening process were minimal. The conclusion that these differences did indeed confirm the hypothesis, namely that they can clearly be traced back to the influence of music, is conceivable, but not compelling.”

“More extensive testing is required in order to determine whether there is a link between exposing cheese wheels to music as they mature and discernible sensory differences,” they conclude.

Still, it’s not completely implausible that the soundwaves (or some associated process) slightly affected the bacteria and caused changes in taste. It’s certainly a creative process which raises intriguing questions, and the results may end up appearing on our plates before they appear in a scientific journal. Luzuy concludes:

“For chefs like me, these results are fascinating. This opens up new avenues for us in terms of how we can work creatively with food in the future.”


If you want to be creative, turn the music off, new research reveals

The popular view that music enhances creativity has it all backwards, according to an international team of researchers.


Image via Pixabay.

Psychologists from the University of Central Lancashire, the University of Gävle in Sweden, and Lancaster University investigated the impact of background music on creative performance and let me tell you — the results aren’t encouraging if you like music.

Creatively uncreative

The team pitted participants against verbal insight tasks that require creativity to solve. All in all, they report, background music “significantly impaired” people’s ability to perform these tasks. Background noise (the team used library noises) or silence didn’t have the same effect on creativity, the team notes.

“We found strong evidence of impaired performance when playing background music in comparison to quiet background conditions,” says first author Dr Neil McLatchie of Lancaster University.

As an example, one of the tasks involved showing a participant three words (e.g. dress, dial, flower) and asking them to find a single associated word that can be combined with the three to make a common word or phrase (for example, “sun” to make sundress, sundial, and sunflower).

Each task was performed in three different settings: in the first, music with foreign or unfamiliar lyrics was played in the background. In the second setting, instrumental music (no lyrics) was played in the background. The third setting involved music with familiar lyrics being played in the background. Control groups performed the same task either in a silent environment or with a background of library noises.

All participants in settings with background music showed “strong evidence of impaired performance” in comparison to quiet background conditions, McLatchie says. The team suggests this may be because music disrupts verbal working memory.

The third setting in particular (music with familiar lyrics) impaired creativity regardless of whether it also induced a positive mood, whether participants liked it, or whether they usually study or work with music in the background. This effect was less pronounced when the background music was instrumental, with no lyrics, but it was still present.

“To conclude, the findings here challenge the popular view that music enhances creativity, and instead demonstrate that music, regardless of the presence of semantic content (no lyrics, familiar lyrics or unfamiliar lyrics), consistently disrupts creative performance in insight problem solving.”

However, there was no significant difference in performance on verbal tasks between the quiet and library noise conditions. The team says this is because library noise is a “steady state” environment which is not as disruptive as music.

So it may be best for your productivity to close that YouTube tab when trying to study or work. Can’t say that I’m thrilled about the findings but hey — science is science!

The paper “Background music stints creativity: Evidence from compound remote associate tasks” has been published in the journal Applied Cognitive Psychology.


Unexpected notes make music more enjoyable

Unexpected but consonant notes in music activate the reward centers in our brains, new research reveals.


Image via Pixabay.

Doesn’t that unexpected, but perfect, note peppered into a song send shivers down your spine? You’re not alone. New research at McGill University shows that a dash of the unexpected in music lights up our brain’s reward centers, and helps us learn about music as we listen.

Unexpected but not unpleasant

The team, led by PhD candidate Ben Gold, worked with 20 volunteers through a musical reward learning task. The task consisted of a game where participants had to pick a color and then a direction. Each choice had a certain probability of returning a consonant (pleasurable) musical excerpt or a dissonant (unpleasurable) one.

Over time, participants started to learn which choices were more likely to produce either of these excerpts, which was what the team wanted. The test was designed to create an expectation in the mind of participants — either for musical dissatisfaction or enjoyment. Each participant had their brain activity measured using functional magnetic resonance imaging (fMRI) during the trial.

Pooling all this data together, the team determined the reward prediction error for each choice the participants made. This error is the difference between the expected reward and the actual outcome of a choice. Comparing these errors to the fMRI data revealed that reward prediction errors correlated with activity in the nucleus accumbens (NA), a brain region that previous studies have linked to feelings of musical pleasure. This is the first evidence that musically-elicited reward prediction errors cause musical pleasure, the team writes, as well as the first time an aesthetic reward such as music has been shown to create this kind of response. Previous studies have found similar results, but they worked with more tangible rewards such as food or money.
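To make the quantity concrete, here is a minimal Rescorla-Wagner-style sketch of reward prediction error learning. The choices, outcomes, and learning rate are all invented for illustration; the model actually fit to the fMRI data was more elaborate.

```python
# Minimal sketch of reward prediction error (RPE) learning.
# RPE = actual reward - expected reward; the expectation is then
# nudged toward reality by a learning rate.

learning_rate = 0.2

# Expected value of each (hypothetical) choice, initially neutral.
expected = {"red": 0.5, "blue": 0.5}

# 1 = consonant (pleasant) excerpt, 0 = dissonant (unpleasant) one.
trials = [("red", 1), ("red", 1), ("blue", 0), ("red", 1), ("blue", 0)]

for choice, reward in trials:
    rpe = reward - expected[choice]          # the prediction error
    expected[choice] += learning_rate * rpe  # learning from the error
    print(f"chose {choice}: RPE = {rpe:+.2f}, "
          f"new expectation = {expected[choice]:.2f}")
```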

Finally, the team explains that subjects whose reward prediction errors most closely mirrored activity in the NA learned which choices led to consonant tones faster than their peers. This, the team writes, establishes music as a neurobiological reward capable of motivating learning. The pleasurable feeling elicited by this effect motivates us to listen again and again, they explain, which helps us learn.

“This study adds to our understanding of how abstract stimuli like music activate the pleasure centres of our brains,” says Gold. “Our results demonstrate that musical events can elicit formally-modeled reward prediction errors like those observed for concrete rewards such as food or money, and that these signals support learning. This implies that predictive processing might play a much wider role in reward and pleasure than previously realized.”

The paper “Musical reward prediction errors engage the nucleus accumbens and motivate learning” has been published in the journal Proceedings of the National Academy of Sciences.

How people use music as a sleeping aid

British researchers at the University of Sheffield surveyed individuals on their quality of sleep and use of music as a sleeping aid. The study, which is the first to perform such an investigation on the general population, found that respondents who use music as a sleep aid do so because they think it blocks external stimuli, induces a mental state conducive to sleep, offers unique properties that stimulate sleep, or simply because it’s become a habit.

Credit: Pixabay.

Approximately 50 to 70 million American adults report having problems sleeping. The widespread problem has serious physical and economic consequences, with studies linking it to a range of health issues. For instance, studies show that even a single night of poor sleep can impair short-term memory. Consistent poor sleepers are more likely to report lower levels of happiness and more feelings of depression. As a result of inefficient cognition, improper sleep also increases the rate of work-related and driving accidents.

To improve sleep, most people turn to pharmaceutical sleeping aids, and the sharp increase in sales of such drugs around the developed world suggests that a good night’s rest is becoming harder to find. In the UK, 1 in 10 adults takes some form of pharmaceutical intervention on a regular basis to improve sleep, a 31% increase between 2006 and 2011. In the United States, sleep-related prescriptions jumped from 5.2 to 20.8 million between 1999 and 2010, a four-fold increase.

However, sleep aids have been linked to various negative side effects that become worse with long-term use. These include nausea, dizziness, dependency, withdrawal symptoms, amnesia, seizures, and an increased risk of mortality. Music, on the other hand, is free and bears no negative side effects.

Previously, studies have suggested that music relieves anxiety and the subjective effects of pain. One study, in particular, found that music increases oxytocin — a powerful hormone that regulates social interaction and sexual reproduction — and, accordingly, promotes relaxation.

Given the link between stress and poor sleep, it makes sense that music has sleep aid properties. The efficacy of music as a sleep aid has mostly been studied in individuals with chronic insomnia or hospitalized patients. One such study showed that listening to music for 45 minutes prior to sleep for four days shortened stage 2 duration and extended REM sleep in adults with chronic insomnia. Most dreams occur during rapid eye movement (REM) sleep, and it is thought to play a role in learning, memory, and mood.

The new study performed by Tabitha Trahan and colleagues at the Department of Music at the University of Sheffield is the first to survey the use of music as a sleep aid in the general population. In total, 62% of the 651 respondents confirmed that they play music to help themselves sleep, and described 14 musical genres comprising 545 artists. Most respondents (31%) fall asleep to classical music, followed by rock (10.8%), and pop (7.5%).

According to the findings, even those who don’t suffer from sleep disorders use musical intervention in their everyday lives to improve the quality of their sleep. Younger people, who are generally more musically engaged, were more likely to use music to sleep better.

The participants believe that music both stimulates sleep and blocks an internal or external stimulus that would otherwise disrupt sleep. The self-reported answers, however, represent a study limitation. So the findings could represent respondents’ beliefs about how music helps them sleep, rather than a conclusion about the psychological and physiological effects of music. Nevertheless, the results suggest that anyone could sleep better at night with just about any kind of music.

“The largest ever survey of everyday use of music for sleep reveals multiple pathways to effect that go far beyond relaxation; these include auditory masking, habit, passion for music, and mental distraction. This work offers new understanding into the complex motivations that drive people to reach for music as a sleep aid and the reasons why so many find it effective,” the authors concluded in the journal PLOS ONE.


Music therapy makes children with autism more socially aware

A new study performed by Canadian researchers found that musical training significantly improves social communication in autistic children. The improvements were significant after only 12 weekly sessions, suggesting that music can have a lasting and profound effect on the quality of life of autistic children and their families.


Credit: Pixabay.

For decades, scientists have known that there’s some kind of relationship between autism spectrum disorder (ASD) and music, seeing how a disproportionate amount of those with this disorder have “perfect pitch” — the ability to instantly and effortlessly identify the pitch of a tone without the use of a reference tone. A person who possesses this rare gift can, for instance, hear any single note and tell if it’s an A or B-flat or anything else.

Absolute pitch seems to run in families, suggesting a genetic link. Some researchers also think studying this musical ability can reveal valuable clues about some of the genes involved in autism and, more broadly, about how the human brain develops and functions.

It’s not all about perfect pitch, though — people with ASD seem to have a much finer auditory experience in general. They might be able to hear the buzzing of electricity in the walls or find noisy environments simply unbearable, something which was previously confirmed by British researchers.

In a new study, scientists at the Université de Montréal and McGill University wanted to get a clearer picture of the impact musical lessons can have on individuals with ASD. They enlisted 51 children with ASD, ages 6 to 12, some of whom received a music-based intervention for three months.

The parents of each child first completed standard questionnaires to gauge the child’s social communication skills, the family’s quality of life, and the ASD symptom severity. MRI scans were performed for each child in order to establish a baseline of brain activity.

Children were randomly assigned to one of two groups: one received music-based therapy, the other did not. Both groups worked with a therapist whose main task was to foster reciprocal interaction. The music-therapy group, however, also sang and played different musical instruments for 45 minutes every week.

At the end of the music therapy sessions, parents of children in this group reported significant improvements in communication skills and family quality of life compared to those in the control group. Parents of children in both groups did not report reductions in autism severity.

“Importantly, our study, as well as a recent large-scale clinical trial on music intervention, did not find changes with respect to autism symptoms themselves,” said Megha Sharda, a postdoctoral fellow at Université de Montréal and lead author of the new research, published in Translational Psychiatry. “This may be because we do not have a tool sensitive enough to directly measure changes in social interaction behaviors.”

Brain scans suggest that the improved communication skills in the kids that received musical therapy may be the result of increased connectivity between the auditory and motor regions of the brain, and decreased connectivity between auditory and visual regions (which are commonly over-connected in individuals with ASD).

According to the Canadian researchers, when connectivity between these regions is sub-optimal, it becomes difficult to engage in social interaction. Tuning into a conversation implies a series of processes: paying attention to what the other person is saying, recognizing cues that hint when your turn to speak comes in, and ignoring irrelevant noise. All of these are easy tasks for most people, but can be challenging for individuals with ASD.

The findings show that musical intervention can make a major difference when it comes to improving social skills in children with ASD. Many teachers and parents could find it useful for school-age children to practice some form of music. What’s more, the neurological link between musical training and social communication adds to a body of evidence that suggests ASD also has an important influence on sensory processing in the brain.

“The universal appeal of music makes it globally applicable and can be implemented with relatively few resources on a large scale in multiple settings such as home and school,” said Aparna Nadig, an associate professor at McGill’s School of Communication Sciences and Disorders (SCSD) and co-senior author of the study with Krista Hyde, an associate professor of psychology at UdeM.

The findings appeared in the journal Translational Psychiatry. 

Why music makes you feel less tired while exercising

Credit: Pixabay.

It’s leg day, so you hit the gym with perhaps less enthusiasm than usual. Luckily, you can plug in your headphones and play Eye of the Tiger (or your favorite Spotify workout playlist) to make those reps a little bit more bearable. Music, scientists say, improves performance of physical tasks such as exercising — and now, a new study has uncovered a potential mechanism that explains this effect. According to researchers at Brunel University London, intense auditory stimuli activate a region of the brain that suppresses fatigue.

There are many studies that have documented the effects music — be it jazz or death metal — has on exercise. In 2012, researchers at Brunel University completed a systematic review of 62 studies on the performance-enhancing effects of music published since 1997. They found that listening to music before running or playing sports increases arousal and improves the performance of simple tasks. When music is played during physical activity, it has ergogenic (work-enhancing) effects. One way it improves outcomes is by both delaying fatigue and lessening the subjective perception of fatigue.

In a new study, a team of researchers at the same university delved deeper in order to investigate what’s responsible for these effects. For the study, 19 healthy adults had to exercise with a hand strengthener grip ring while they were sitting in an MRI scanner. During some of the 30 sets of reps they had to do, the participants listened to music, such as Creedence Clearwater Revival’s I Heard It Through The Grapevine.

When music was playing, the participants were more excited to complete the task and showed an increase in thoughts that were unrelated to the task at hand. Music also activated a region of the brain called the left inferior frontal gyrus, which integrates and processes information from internal and external sources. The researchers found that the more this region was activated, the less exertion the participant felt.

Unraveling this mechanism might have important practical implications. For instance, the brain region identified in the study could be stimulated directly, facilitating low to moderate exercise and motivating high-risk individuals, such as those with obesity or diabetes. In a previous study, the authors used portable EEG technology — a cap that records the electrical activity of the brain — and found that listening to music while walking increased energy levels and enjoyment, at the expense of mental focus. Podcasts also led to performance improvements, though not as pronounced as listening to music. The effects were associated with an increase in beta waves in the frontal and frontal-central regions of the cortex.

However, music can only get you so far. The review mentioned earlier found that listening to music does not reduce perceptions of exertion when individuals are pushing beyond the anaerobic threshold — the point at which lactic acid begins to accumulate in the bloodstream and you feel sore.

The authors also note that people should be careful not to overdo this, lest they risk becoming overly reliant on music to exercise.  

“We have learned so much about the psychophysical, psychological, and psychophysiological effects of music in the past two decades that people are almost developing a peculiar form of stimulus dependence. If we continue to promote the unnecessary use of auditory and visual stimulation, the next generation might be no longer able to tolerate fatigue-related symptoms and exercise in the absence of music,” Marcelo Bigliassi of Brunel University London and co-author of the new study told PsyPost. 

“My view is that music and audiovisual stimuli can and should be used and promoted, but with due care,” Bigliassi said. “We should, perhaps, learn more about the joys of physical activity and develop methods/techniques to cope with the detrimental effects of fatigue (i.e., learn how to listen to our bodies and respect our biomechanical and physiological limitations).”

The findings appeared in the International Journal of Psychophysiology.


Ancient Greek music: now we finally know what it sounded like

Credit: Wikimedia Commons.


In 1932, the musicologist Wilfrid Perrett reported to an audience at the Royal Musical Association in London the words of an unnamed professor of Greek with musical leanings: “Nobody has ever made head or tail of ancient Greek music, and nobody ever will. That way madness lies.”

Roman mosaic with aulos player.
Wikimedia Commons

Indeed, ancient Greek music has long posed a maddening enigma. Yet music was ubiquitous in classical Greece, with most of the poetry from around 750BC to 350BC – the songs of Homer, Sappho, and others – composed and performed as sung music, sometimes accompanied by dance. Literary texts provide abundant and highly specific details about the notes, scales, effects, and instruments used. The lyre was a common feature, along with the popular aulos, two double-reed pipes played simultaneously by a single performer so as to sound like two powerful oboes played in concert.

Despite this wealth of information, the sense and sound of ancient Greek music has proved incredibly elusive. This is because the terms and notions found in ancient sources – mode, enharmonic, diesis, and so on – are complicated and unfamiliar. And while notated music exists and can be reliably interpreted, it is scarce and fragmentary. What could be reconstructed in practice has often sounded quite strange and unappealing – so ancient Greek music had by many been deemed a lost art.

An older reconstruction of ancient Greek music.

But recent developments have excitingly overturned this gloomy assessment. A project to investigate ancient Greek music that I have been working on since 2013 has generated stunning insights into how ancient Greeks made music. My research has even led to its performance – and hopefully, in the future, we’ll see many more such reconstructions.

New approaches

The situation has changed largely because over the past few years some very well preserved auloi have been reconstructed by expert technicians such as Robin Howell and researchers associated with the European Music Archaeology Project. Played by highly skilled pipers such as Barnaby Brown and Callum Armstrong, they provide a faithful guide to the pitch range of ancient music, as well as to the instruments’ own pitches, timbres, and tunings.

Central to ancient song was its rhythms, and the rhythms of ancient Greek music can be derived from the metres of the poetry. These were based strictly on the durations of syllables of words, which create patterns of long and short elements. While there are no tempo indications for ancient songs, it is often clear whether a metre should be sung fast or slow (until the invention of mechanical chronometers, tempo was in any case not fixed, and was bound to vary between performances). Setting an appropriate tempo is essential if music is to sound right.

Apollo plays the lyre.
Wikimedia Commons

What about the tunes – the melody and harmony? This is what most people mean when they claim that ancient Greek “music” is lost. Thousands of words about the theory of melody and harmony survive in the writings of ancient authors such as Plato, Aristotle, Aristoxenus, Ptolemy, and Aristides Quintilianus; and a few fragmentary scores with ancient musical notation first came to light in Florence in the late 16th century. But this evidence for actual music gave no real sense of the melodic and harmonic riches that we learn of from literary sources.

More documents with ancient notation on papyrus or stone have intermittently come to light since 1581, and now around 60 fragments exist. Carefully compiled, transcribed, and interpreted by scholars such as Martin West and Egert Pöhlmann, they give us a better chance of understanding how the music sounded.

Ancient Greek music performed

The earliest substantial musical document, found in 1892, preserves part of a chorus from the Athenian tragedian Euripides’ Orestes of 408BC. It has long posed problems for interpretation, mainly owing to its use of quarter-tone intervals, which have seemed to suggest an alien melodic sensibility. Western music operates with whole tones and semitones; any smaller interval sounds to our ears as if a note is being played or sung out of tune.

Musical fragment from Orestes by Euripides.
Wikimedia Commons

But my analyses of the Orestes fragment, published earlier this year, led to striking insights. First, I demonstrated that elements of the score clearly indicate word-painting – the imitation of the meaning of words by the shape of the melodic line. We find a falling cadence set to the word “lament”, and a large upward interval leap accompanying the word “leaps up”.

Second, I showed that if the quarter-tones functioned as “passing-notes”, the composition was in fact tonal (focused on a pitch to which the tune regularly reverts). This should not be very surprising, as such tonality exists in all the documents of ancient music from later centuries, including the large-scale Delphic Paeans preserved on stone.

With these premises in view, in 2016 I reconstructed the music of the Orestes papyrus for choral realisation with aulos accompaniment, setting a brisk tempo as indicated by the metre and the content of the chorus’s words. This Orestes chorus was performed by choir and aulos-player at the Ashmolean Museum, Oxford, in July 2017, together with other reconstructed ancient scores.

It remains for me to realise, in the next few years, the other few dozen ancient scores that exist, many extremely fragmentary, and to stage a complete ancient drama with historically informed music in an ancient theatre such as that of Epidaurus.

Meanwhile, an exciting conclusion may be drawn. The Western tradition of classical music is often said to begin with the Gregorian plainsong of the 9th century AD. But the reconstruction and performance of Greek music has demonstrated that ancient Greek music should be recognised as the root of the European musical tradition.

Armand D’Angour, Associate Professor in Classics, University of Oxford

This article was originally published on The Conversation. Read the original article.

Heavy Metal? Well, that’s just like renaissance music

For many years, heavy metal was stuck as a fringe preference, ignored by music theorists and mainstream listeners alike. But that has gradually changed, and some scientists are starting to focus more on this expansive music genre.

Iron Maiden is one of the pioneers of heavy metal. Image: Wiki Commons.

When Esa Lilja, Adjunct Professor at the University of Helsinki, started studying heavy metal academically in 1998, the literature was almost completely devoid of any information. Lilja, who is a pioneer of academic heavy metal research, now works with two PhD students to study the genre.

There are plenty of stereotypes about heavy metal, and Lilja is well aware of them. But most of them aren’t really true. Probably the most prevalent one is about metal’s musical origins.

“Metal is based on classical music theory, and it has a great deal in common with renaissance music, for example,” states Lilja.

Nowadays, heavy metal has an international audience and is well known to a larger share of the population, thanks to the efforts of pioneers such as Metallica and Iron Maiden, as well as more modern ambassadors such as Lordi (who won the mainstream Eurovision contest) and Apocalyptica (who became famous by performing covers of metal songs on classical instruments). In Scandinavian countries, metal is even more popular — it’s essentially mainstream there.

It makes a lot of sense, then, to study it in greater detail.

Paolo Ribaldini, one of the PhD students, studies vocal techniques, while Kristian Wahlström is focusing on the educational dimension of heavy metal – such as how metal could be employed in music education.

“If a student is interested in heavy metal and has an emotional connection to it, new learning material could be built around excerpts from heavy metal, which the student is already familiar with. They could be used to indicate similarities between different musical genres,” Lilja explains.

Apocalyptica has made a name for themselves by covering metal songs with classical instruments. The band started out as a university project. Image: Wiki Commons.

It’s also interesting to note that many metal bands emphasize historical or mythological themes, which may also have an educational component. Although there are national components to some songs, metal is essentially an international genre, Lilja says.

“I think the national features have more to do with extramusical factors, such as mythological allusions in the lyrics or the overall image of the band.” One well-known example is Amorphis, a band whose lyrics are rife with references to the Kalevala, the Finnish national epic.

Lilja, who is of course a metal head himself, also notes that although heavy metal is a pretty niche genre, there’s a lot of variety and no unified theme.

“We middle-aged metal heads in particular are as eclectic a bunch as any set of middle-aged people,” says Lilja, who is 45.

You can read Esa Lilja’s doctoral dissertation on Theory and Analysis of Classic Heavy Metal Harmony here.

Hip hop music teaches children to recognize stroke and act quickly, study finds

Researchers have found that a program that uses hip-hop music to educate economically-disadvantaged minority children and their parents about strokes shows promising results in raising stroke awareness.

Via YouTube

“The lack of stroke recognition, especially among blacks, results in dangerous delays in treatment,” said Olajide Williams, M.D., M.S., study author and associate professor of neurology at Columbia University Medical Center, New York Presbyterian Hospital. “Because of those delays, only a quarter of all stroke patients arrive at the hospital within the ideal time for clot-busting treatment.”

A simple 9-1-1 call can save someone’s life. Calling an ambulance immediately when stroke symptoms start could increase the rate of optimal stroke treatment by 24%. It is very important for people to start recognizing the symptoms and know what to do in this kind of situation. Strokes kill four times more 35- to 54-year-old black Americans than white Americans.

Sadly, a lot of stroke awareness campaigns have been limited by the high costs of advertising, lack of cultural tailoring and low penetration into ethnic minority populations. But not all of them — “Hip Hop Stroke”, a three-hour multimedia stroke awareness intervention that teaches children rap songs about strokes, has shown great success in stroke education.

Scientists studying more than 3,000 4th through 6th graders from 22 public schools in New York City and a group of 1,144 of their parents have discovered that this campaign increased optimal stroke knowledge from 2% of children before the intervention to 57% right after. Another encouraging finding was that three months after the campaign had ended, 24% of children remembered all they had learned.

“Hip Hop Stroke” uses original hip-hop songs, comic books, and cartoon-style videos to help kids remember facts about strokes. One of the acronyms the project teaches is F-A-S-T, which refers to the stroke warning signs: Face drooping, Arm weakness, Speech difficulty, Time to call 9-1-1. Famous rapper Doug E. Fresh lent a hand in the artistic process and composed music and lyrics for the campaign.

“Rhymes have been shown to have quantifiable educational value,” said Dr. Williams.

Parents also learned new things. Pre-intervention, only 3% of the adults could identify stroke symptoms. That figure rose to 20% after they watched the educational videos. Three months later, 17% retained the information.

Dr. Williams, also known as the Hip Hop Doc, said that time is of the essence when it comes to stroke and clot-busting treatment.

“Every minute a stroke continues 1.9 million brain cells die. The earlier the treatment, the better the outcome,” he declared.

Williams has been conducting this study over the past five years. He is delighted by the results and hopes that the free program will soon be used around the country.

“The program’s culturally-tailored multimedia presentation is particularly effective among minority youth or other groups among whom Hip Hop music is popular,” Williams said. “One unique aspect of the program is that the children who receive the program in school are used as ‘transmission vectors’ of stroke information to their parents and grandparents at home. Our trial showed that this is an effective strategy.”

https://www.youtube.com/watch?v=OlxWsWu9Y-Q

The paper was published in the American Heart Association journal Stroke.

LSD changes the way the brain reacts to music, study finds

Researchers have discovered how LSD changes the neural response to music in various brain regions associated with memory, emotion, auditory processing, and self-directed thought.

“I have always been fascinated by emotion, memory, and altered states of consciousness. To this end, I completed my PhD in cognitive neuroscience at UC Davis with Petr Janata, using computational models of music cognition to study the neural basis of emotions and memories evoked by music,” stated study author Frederick Barrett of Johns Hopkins University School of Medicine to Psypost.

After thinking about how natural and intimate the connection between music and psychedelic subjective experiences is, the author wanted to understand the way psychedelics alter how the brain processes music.

So, he contacted the University of Zürich’s Dr. Katrin Preller and Dr. Franz Vollenweider, who had conducted a study of the effects of LSD on meaning-making while listening to music. Barrett inquired about a collaboration and was met with a positive reply, receiving permission to analyze the imaging data collected during music listening sessions after administration of LSD.

Preller and Vollenweider surveyed 25 healthy participants about songs that had a special meaning for them. Next, the participants listened to personally meaningful songs and non-meaningful songs after receiving LSD or a placebo. They discovered that non-meaningful songs gained a sense of meaningfulness under the influence of LSD. The results increased scientists’ understanding of how personal relevance is attributed in the brain.

Barrett and his team conducted a secondary analysis of the fMRI scans from the first study. They found that LSD changed the neural response to music in a number of brain areas, including the superior temporal gyrus, inferior frontal gyrus, medial prefrontal cortex, and amygdala.

“Music can evoke a wide range of emotions, memories, and other feelings and states of mind. We can often identify with music, and music can change the way that we feel about and think about ourselves,” Barrett said.

“In the same way, music also engages a broad range of brain regions involved in memory, emotion, attention, and self-directed thought. LSD increases the ​degree to which these brain areas process music, and it seems to use a brain mechanism that is shared across all psychedelic drugs,” he added.

Barrett believes that the changes that occur in the brain when listening to music under the influence of LSD might actually be related to the therapeutic effects of psychedelics. But even though psychedelic drugs can be safely administered in a controlled setting, this doesn’t make them any less dangerous. Let’s just remember — bad trips exist.

Scientists still have to understand the degree to which music and LSD are needed for successful therapy. They also have to determine why these elements sometimes lead to bad experiences and find a way to optimize music listening during psychedelic therapy sessions.

“Psychedelics are powerful drugs that hold promise to help us to heal, understand our brains and minds, and potentially uncover the elusive basis of consciousness itself,” Barrett added.

The paper was published in the scientific journal Cerebral Cortex.

 


Jymmin combines working out with music, makes people feel less pain

Good news for all of us! Whether or not you enjoy exercising, scientists have developed new technology that makes working out more enjoyable than ever. The new study also found that it makes us more resistant to pain.


Researchers at Max Planck developed a new fitness technology called Jymmin that makes us less sensitive to pain. Credit: Max Planck Institute For Human Cognitive and Brain Sciences.

Researchers at Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) developed a new way of working out: they altered fitness machines to produce musical sounds during use. Scientists discovered that this novel approach, which they call Jymmin, increases pain threshold and makes people less sensitive to discomfort.

“We found that Jymmin increases the pain threshold. On average, participants were able to tolerate ten percent more pain from just ten minutes of exercise on our Jymmin machines, some of them even up to fifty percent”, said Thomas Fritz, head of research group Music Evoked Brain Plasticity at MPI CBS, in a press statement.

How do these machines work?

Scientists paired music composition software with sensors attached to the fitness machines. While exercising, the sensors captured and then transmitted signals to the software, which played back an accompaniment from each fitness machine. Basically, the researchers turned steppers and abdominal trainers into musical instruments, so you can get really creative while working out.
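The press release doesn’t detail the mapping between movement and sound, but the general idea of sonifying exercise can be sketched as sensor readings modulating musical parameters. Everything below (the sensor stream, the scale, the mapping function) is a hypothetical illustration, not the actual Jymmin software.

```python
# Hypothetical sketch of exercise sonification in the spirit of Jymmin:
# readings from a fitness machine's sensor are mapped onto notes of a
# musical scale. Sensor values, scale, and mapping are invented here.

C_MAJOR_PENTATONIC = [60, 62, 64, 67, 69, 72]  # MIDI note numbers

def sensor_to_note(reading, lo=0.0, hi=1.0, scale=C_MAJOR_PENTATONIC):
    """Map a normalized sensor reading onto a note of the scale."""
    reading = max(lo, min(hi, reading))  # clamp to the sensor's range
    index = int((reading - lo) / (hi - lo) * (len(scale) - 1))
    return scale[index]

# Fake stream of readings from, say, a stepper's resistance sensor.
workout = [0.1, 0.3, 0.55, 0.8, 0.95, 0.7, 0.4, 0.2]

melody = [sensor_to_note(r) for r in workout]
print(melody)  # harder effort produces higher notes: the workout 'plays'
```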

Researchers discovered that, after Jymmin, participants were able to immerse their arms in ice water of 1°C (33.8°F) for five seconds longer compared to a conventional exercise session.

Scientists believe that the pain resistance experienced by the participants is due to the increased release of endorphins. Apparently, if music composition and physical activity are combined, endorphins are flushed into our systems in a more efficient way.

Researchers divided all 22 participants according to how they rated pain and discovered that participants with the highest pain threshold benefitted the most from this training method. Maybe this happens because these participants already release endorphins more effectively in comparison to those who are more pain sensitive.

“There are several possible applications for Jymmin that can be derived from these findings. Patients simply reach their pain threshold later,” Fritz added.

Jymmin could do wonders in treating chronic or acute pain. It could also be used as support in rehabilitation clinics by enabling more efficient training.

Scientists tested top swimmers in South Korea and the results were remarkable: athletes who warmed up using Jymmin machines were faster than those using conventional methods. In a pilot test, five of six athletes swam faster than in previous runs.

Previous studies showed that Jymmin has many positive effects on our well-being. They revealed that personal mood and motivation improved, and even the music produced while Jymmin was perceived as pleasant.

Scientific reference: Thomas H. Fritz, Daniel L. Bowling, Oliver Contier, Joshua Grant, Lydia Schneider, Annette Lederer, Felicia Höer, Eric Busch, Arno Villringer. Musical Agency during Physical Exercise Decreases Pain. Frontiers in Psychology, 2018; 8. DOI: 10.3389/fpsyg.2017.02312.


Surprising harmonic structure might be the secret to writing a pop hit, new study finds


Credit: Pixabay.

Music is literally a rewarding experience: it activates the same kind of neural circuits as those associated with food or sex. It doesn’t necessarily happen for all kinds of music, though. Why is that? Not much is known about the structural aspects of music that elicit this sort of response — but we seem to be getting there.

Neuroscientists at Georgetown University, Washington, propose two hypotheses that explain why people prefer certain songs over others. The first, called the Absolute-Surprise Hypothesis, simply states that unexpected musical elements or phrases are rewarding. The second, the Contrastive-Surprise Hypothesis, suggests bridging unexpected and subsequent expected events leads to an overall rewarding sensation.

Who doesn’t love a surprise?

The Absolute-Surprise hypothesis is predicated on the notion that surprise is a good thing or valuable for the person perceiving it. Musical surprise, or processing harmonically surprising sections of music, is associated with dopamine release, and therefore with reward response. The Contrastive-Surprise hypothesis, on the other hand, is premised on surprise being bad for the listener, in line with the idea of contrastive valence, which attributes one type of a listener’s enjoyment of music to a release from the tension induced by surprise.

Measuring or quantifying expectations might sound daunting, but the team was up to the challenge. Luckily, the researchers had a great helping hand: a statistical framework called information theory. Within this probabilistic framework, surprise is nothing more than a mathematical measure of how much an event deviates from expectations, which means mathematical methods can be used to analyze those expectations and the deviations from them.
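Concretely, the surprise (or “surprisal”) of an event is the negative base-2 logarithm of its probability. Here is a minimal sketch of that idea applied to chords; the toy corpus is invented, and unlike this simplification, the study estimated probabilities from chord-to-chord transitions across its full song set.

```python
import math
from collections import Counter

# Toy corpus of Roman-numeral chord labels, invented for illustration.
corpus = ["I", "IV", "V", "I", "IV", "I", "V", "vi", "IV", "I", "V", "I"]

counts = Counter(corpus)
total = sum(counts.values())

def surprisal(chord):
    """Information-theoretic surprise: -log2 of the chord's probability."""
    return -math.log2(counts[chord] / total)

for chord in sorted(counts, key=surprisal):
    print(f"{chord:>3}: p = {counts[chord] / total:.2f}, "
          f"surprise = {surprisal(chord):.2f} bits")
# Rare chords (here, 'vi') carry the most surprise; the tonic the least.
```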

For their paper, the researchers studied an available dataset featuring transcriptions of 732 Western popular music songs chosen at random from the Billboard Hot 100 charts over a 34-year period, extending from 1958 to 1991.

“The goal of this statistical analysis is to learn more about how the brain processes music, by examining the structure of music that is preferred. A similar statistical approach has been used to study the neuroscience of the visual system. The principle behind this approach is that one can often explain the mechanisms of a brain sensory system as optimized processors for ecologically important stimuli,” the authors wrote in Frontiers in Human Neuroscience.

In order to examine a single uniform measure of surprise, the researchers were careful to transpose all their songs to a common key, which was C major. Songs that were in a minor key or featured within-song modulations were excluded. In the end, they were left with 545 Billboard songs to analyze.

The next step was to group the songs into quartiles, based on the peak Billboard chart position of each song. The top quartile (Q1) represented widely preferred songs and the bottom quartile (Q4) represented less widely preferred songs.

In both quartiles, the pattern of chord prevalence was strikingly similar, where chord I (the root chord) is followed by V (dominant chord) and IV (sub-dominant chord).


Credit: Frontiers in Human Neuroscience. 

Moving on to the statistical analysis of the harmonic structure of popular music, the authors determined the mean surprise of songs in the top (Q1) and bottom (Q4) quartiles using an equation that essentially determined how varied each song’s chord progressions were compared to the mean variation found in the entire corpus of 545 songs.

The authors found Q1 songs had significantly higher overall mean surprise than Q4 songs, which supports the Absolute-surprise Hypothesis, “providing evidence that moderate increases in the absolute level of surprise of a song may indeed drive music preference upward.”

To test the Contrastive-Surprise Hypothesis, the researchers had to analyze the transition between sections, typically verses, choruses and bridges. They measured the standard deviation of average surprise values for the different sections within each song, and then compared the values for Q1 to the corresponding values for Q4.

“Standard deviations of average surprise for sections within individual songs were significantly higher for Q1 than for Q4. Therefore, the Contrastive-surprise Hypothesis is supported by our data, providing evidence that the juxtaposition of high-surprise sections and low-surprise sections may indeed drive music preference upward,” the authors wrote.

“Although these hypotheses seem contradictory to one another, we cannot yet discard the possibility that both absolute and contrastive types of surprise play roles in the enjoyment of popular music. We call this possibility the Hybrid-Surprise Hypothesis,” they concluded.
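A toy sketch of the two measures may help. The per-section surprise values below are invented; in the study, both measures were computed for all 545 songs and then compared between the Q1 and Q4 quartiles.

```python
import statistics

# Invented per-chord surprise values (bits) for each section of two
# hypothetical songs; the study derived such values from real transcriptions.
hit_song = {"verse": [0.8, 0.9, 1.0], "chorus": [2.4, 2.6, 2.2],
            "bridge": [3.0, 2.8, 3.2]}
average_song = {"verse": [1.5, 1.4, 1.6], "chorus": [1.6, 1.5, 1.7],
                "bridge": [1.4, 1.6, 1.5]}

def mean_surprise(song):
    """Absolute-Surprise measure: average surprise over the whole song."""
    return statistics.mean(s for section in song.values() for s in section)

def section_contrast(song):
    """Contrastive-Surprise measure: spread of the per-section averages."""
    return statistics.stdev(statistics.mean(sec) for sec in song.values())

for name, song in [("hit", hit_song), ("average", average_song)]:
    print(f"{name}: mean surprise = {mean_surprise(song):.2f} bits, "
          f"section contrast = {section_contrast(song):.2f}")
# The 'hit' scores higher on both measures: more overall surprise, and a
# sharper jump between its tame verses and its surprising bridge/chorus.
```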

In other words, people enjoy familiar music as evidenced by the fact that most songs from the Billboard charts use common chord progression. But those songs that we really love and remember, out there in the top quartile, they better be surprising. It’s a familiar theme that’s presented in comedy, literature, painting, and other arts.

Artificial intelligence can write classical music like a human composer. It’s the first non-human artist whose music is now copyrighted

There’s nothing stopping a machine designed in our image from performing at least as well as humans in virtually any task. If you’re a skeptic, it’s enough to look at what a company from Luxembourg called Aiva Technologies is doing. Their novel artificial intelligence can write classical music, a genre deeply tied to human sophistication and musical acuity, that is for all intents and purposes on par with works written by human composers.

Image credits Gavin Whitner / Flickr.

There are a lot of startups nowadays working with machine learning techniques to craft artificial intelligence applications for anything from law to search engines. Such technologies have a huge potential for disruption because they can help some organizations drastically improve their productivity or returns. Artificial intelligence can also be a social disrupter as it affects the job market. If you’re employed as a truck or taxi driver, teller, cashier, or even as a cook, you run the risk of being sacked in favor of a machine. Some would think creative jobs like writing, painting, or music are exempt from such trends because there’s the impression you need inherently human qualities to deliver — but that’s just wishful thinking.

Already, AIs seem much better than people at competitive games like chess, Go, or poker. You might argue that writing music is a totally different affair from crunching raw data such as chess positions or the probability of holding a winning hand at poker, but the way these machines are set up really shouldn’t make any difference.

Mainframe prodigy

Aiva, which stands for Artificial Intelligence Virtual Artist, is based on deep learning algorithms which use reinforcement techniques. Deep learning essentially involves feeding a computer system lots of data so that it can make decisions about other data. All of this information is passed through so-called neural networks, which are algorithms designed to process information the way a human brain would. These networks are what allow Google, for instance, to analyze the billions of images in its index much as a human would interpret them; a previous full-length ZME article goes into more depth about how all of this works.

Reinforcement learning means that the artificial intelligence is trained to decide what to do next by being offered a cumulative ‘reward’. Unlike supervised learning, reinforcement learning doesn’t require labeled input and output data. What this means for Aiva, which was fed thousands of classical musical scores from Bach and Mozart to contemporary composers, is that it was never taught music theory. Essentially, Aiva learned music theory by itself after ‘listening’ to all of these scores. No one ever showed the machine what a triad or seventh chord is, or even what a note duration means.
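Aiva’s actual architecture isn’t public in detail, but the core idea of picking up musical regularities purely from example scores can be illustrated with something far simpler than a deep network: a first-order Markov chain over notes. The training melodies below are invented, and the Markov chain is a deliberately crude stand-in for the real system.

```python
import random
from collections import defaultdict

# Learn which note tends to follow which, purely from example 'scores'.
training_melodies = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "A", "G", "E", "C"],
    ["E", "G", "A", "G", "E", "D", "C"],
]

transitions = defaultdict(list)
for melody in training_melodies:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)  # statistics, not rules

def compose(start="C", length=8):
    """Generate a melody by sampling the learned transition statistics."""
    melody = [start]
    for _ in range(length - 1):
        melody.append(random.choice(transitions[melody[-1]]))
    return melody

random.seed(1)
print(compose())  # e.g. ['C', 'E', 'G', ...] -- no theory was ever taught
```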

“We have taught a deep neural network to understand the art of music composition by reading through a large database of classical partitions written by the most famous composers (Bach, Beethoven, Mozart, etc). Aiva is capable of capturing concepts of music theory just by doing this acquisition of existing musical works,” Aiva told Futurism. 

The startup is in the business of writing and producing musical scores for movies, games, trailers, or commercials, and the artificial intelligence acts like a 24-hour composer who never runs out of inspiration and always does what it’s told. Clients come to the company with a brief stating their objectives, then Aiva runs a couple of iterations until the sheet music looks good enough. Then, humans arrange and play the music with live or virtual instruments in a studio.

By now, you must be dying to hear what the machine came up with. Streaming below is Aiva’s first album called Genesis. Spoiler: it all sounds freaking good!

In the future, Aiva hopes to make its platform versatile enough that a client only needs to upload a reference track, say a song by Radiohead, and select some general themes (ambient, dark, war, suspense, etc.). Based on these simple settings, you would quickly get sheet music to play with, augment, and revise as you wish. Maybe in the not-so-distant future, you could have new music written and generated by a computer in real time based on your preferences, much as Spotify always knows to play the tunes you like, only this time it would all be completely new, original, and exclusive to a single pair of ears.

It’s also worth noting that while the music composed by Aiva is rather impressive, the machine doesn’t know how to write music that deliberately elicits emotions. Some of the tracks sampled above might stir certain feelings, but the machine didn’t seek them out on purpose. This may be set to change sooner than some would care to think. Just last week, we reported how Japanese researchers built an AI that writes and generates simple music designed to trigger an emotional response, based on brain scans of humans listening to certain kinds of music.

Like any self-respecting composer, Aiva is registered with the France and Luxembourg authors’ rights society (SACEM), so all of its tracks are copyrighted. Interestingly, though the results haven’t been peer-reviewed, Aiva claims it ran its own Turing tests and found humans couldn’t tell the music was written by a machine. That may well be true, and it’s not very surprising considering the music was, at the end of the day, arranged (very important) and played by humans. And if you work in a studio, you don’t have to worry too much yet, because your skills can’t be matched by a computer anytime soon. Writing tonal instructions, i.e. the sheet music itself, is different from sound design and arrangement. Perhaps Aiva and AIs like it will shine most in collaboration with humans, rather than in competition.

Previously, we reported how AIs wrote their first pop songs and even the script for a sci-fi short film. These works are still clumsy or augmented by human hands, but with each passing day, as ‘thinking machines’ get smarter, we’re forced to rethink the basic concepts that make us human: things like emotions, creativity, and ingenuity.

***

Until, one day, man and machine become indistinguishable, for a moment, before the machines ultimately surpass us for good.

Robots could soon write emotional or motivating songs

Art is one of the few things which truly separate us from artificial intelligence. Even though there are robots which can write music, they don’t consider the emotions that it elicits, they merely weave notes into patterns determined by the physical parameters of the notes. All that may change, thanks to the work of Japanese researchers.

This is a Brain Music EEG headset. The machine learning algorithm is hooked up to the listener’s brain, learning and adapting as it goes. Image credits: Osaka University.

Although it wouldn’t seem so at first glance, music and mathematics have a lot in common. There’s an almost mathematical beauty to musical patterns, something musicologists have remarked on for centuries. It would seem, then, that Artificial Intelligence (AI) should have a decent shot at writing songs, but that doesn’t really happen.

While AI can write passable songs, there’s one aspect it can’t incorporate: feelings. Really good songs mess with our feelings; they send us into despair or bring great joy to our hearts, and that’s something machines can’t do. At least, not yet.

An international research team led by Osaka University, together with Tokyo Metropolitan University, imec in Belgium, and Crimson Technology, has released a new machine-learning device that analyzes people’s brain waves as they listen to songs. The machine learns how listeners feel and adapts accordingly, giving them more of what they want to hear.

“Most machine songs depend on an automatic composition system,” says Masayuki Numao, professor at Osaka University. “They are preprogrammed with songs but can only make similar songs.”

Numao figured that if he could somehow tap into the listener’s emotional state and see what works for him or her, the AI could deliver more of the same. Basically, he found a way to feed feelings back into AI-generated songs. Users listened to music while wearing wireless headphones that contained brain-wave sensors. These sensors captured EEG readings, which the robot used to make music that reacts to the listener interactively. The music was created on the spot through Musical Instrument Digital Interface (MIDI) technology and played in a rich tone on a synthesizer.
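The Osaka system’s internals haven’t been published, but the shape of such a feedback loop is easy to sketch: estimate the listener’s state from an EEG window, then let that estimate steer the next bars. Everything below (the band choices, the arousal formula, the tempo and scale mappings, the simulated signal) is an assumption for illustration only.

```python
# Sketch of the feedback-loop shape: estimate a listener state from an EEG
# window, then steer the next bars of generated music with it. The band
# choices and mappings below are illustrative assumptions, not the Osaka
# team's published method.
import numpy as np

FS = 256                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
eeg = rng.standard_normal(FS * 2)          # stand-in for a 2-second EEG window

def band_power(signal, lo, hi, fs=FS):
    """Total spectral power of `signal` between lo and hi Hz."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

alpha = band_power(eeg, 8, 13)     # band often linked to relaxation
beta = band_power(eeg, 13, 30)     # band often linked to engagement
arousal = beta / (alpha + beta)    # crude 0..1 engagement estimate

# Steer the generator: more arousal -> faster tempo and a brighter scale.
tempo = int(70 + 80 * arousal)                                   # BPM
scale = [0, 2, 4, 5, 7, 9, 11] if arousal > 0.5 else [0, 2, 3, 5, 7, 8, 10]
root = 60                                                        # middle C
bar = [int(root + rng.choice(scale)) for _ in range(8)]          # MIDI notes
print(f"tempo={tempo} BPM, next bar: {bar}")
```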

“We preprogrammed the robot with songs, but added the brain waves of the listener to make new music.” Numao found that users were more engaged with the music when the system could detect their brain patterns.

So far, no audio samples have been released, but the machine was featured at the 3rd Wearable Expo in Tokyo, Japan, receiving positive reviews from volunteers.

Aside from being a potential breakthrough in AI, this technology could also have real-life applications. Customizable music that adapts to your feelings could, for instance, boost your productivity when you’re working or motivate you when you’re working out. As we all know, finding music that fits your mood can sometimes be pretty daunting.

Study looks at what makes some songs stick in your head, while others don’t

A study published by the American Psychological Association looked at the structure of the songs that get stuck in our heads — the so-called “earworms” — to find out exactly what makes us think about them again, and again, aaaand again.

Image via Pexels / Public Domain.

Have you ever caught yourself singing a song hours after you heard it? Of course you have. And you’re not alone.

But have you ever wondered why certain songs stick in our minds while others don’t? The oldest song in history, for instance, is pretty catchy. There are some very specific reasons why this happens, researchers from the American Psychological Association have found. Earworms are usually faster, with rather generic but easily remembered melodies. There are also particular intervals that set earworms apart from the average pop song, such as distinctive leaps or melodic repetitions interspersed throughout the melodic line, the study found.

The team asked 3,000 people from the UK to name the song that most frequently sticks in their head. To keep it simple, the researchers limited their questioning to popular genres such as pop, rap, and rock. They compared the answers to a database of songs, looking for tracks that weren’t named as earworms but matched them in popularity and in how recently they had appeared in the UK music charts. The features of the earworms were then analyzed and compared with those of the control songs. Here is the list of the stickiest songs out there. Be warned, however, that you’ll be singing each and every one to yourself while you read it.

  1. “Bad Romance” by Lady Gaga.
  2. “Can’t Get You Out Of My Head” by Kylie Minogue.
  3. “Don’t Stop Believing” by Journey.
  4. “Somebody That I Used To Know” by Gotye.
  5. “Moves Like Jagger” by Maroon 5.
  6. “California Gurls” by Katy Perry.
  7. “Bohemian Rhapsody” by Queen.
  8. “Alejandro” by Lady Gaga.
  9. “Poker Face” by Lady Gaga.

The data for the study was collected from 2010 to 2013.

“These musically sticky songs seem to have quite a fast tempo along with a common melodic shape and unusual intervals or repetitions like we can hear in the opening riff of ‘Smoke On The Water’ by Deep Purple or in the chorus of ‘Bad Romance,'” said lead author Kelly Jakubowski, PhD, of Durham University. She conducted the study while at Goldsmiths, University of London.

The team found that catchy songs tend to have more common global melodic contours, meaning they have overall melodic shapes commonly found in Western pop music. “Twinkle, Twinkle Little Star”, for example, with its first phrase rising in pitch and its second falling, shows the most common such contour pattern. Nursery tunes and children’s songs usually follow the same pattern, the authors note, making them easy for children to remember. The same element, in the opening riff of Maroon 5’s “Moves Like Jagger”, makes the song stick in your head.

Another crucial element of earworms is an unusual interval structure: unexpected leaps, or more repeated notes than you’d expect to hear in an average pop song, provide this structure, the study reads. The interludes of “My Sharona” by The Knack and “In the Mood” by Glenn Miller are good examples; a toy version of both features is sketched below.
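As a toy illustration of these two features, here is how one might extract a melody’s global contour and flag unusually wide leaps from a list of MIDI pitches. The study’s actual feature set and corpus were far richer, and the “unusual leap” threshold below is an arbitrary assumption.

```python
# Toy extraction of the two earworm features discussed above: the global
# melodic contour (up/down shape) and unusually large leaps. The real study
# used a much richer feature set over a large song corpus.
# Opening of "Twinkle, Twinkle Little Star" as MIDI pitches:
melody = [60, 60, 67, 67, 69, 69, 67, 65, 65, 64, 64, 62, 62, 60]

intervals = [b - a for a, b in zip(melody, melody[1:])]

# Contour: reduce each interval to up (+), down (-) or flat (0).
contour = "".join("+" if i > 0 else "-" if i < 0 else "0" for i in intervals)

# "Unusual" leaps: anything wider than a perfect fourth (5 semitones).
leaps = [i for i in intervals if abs(i) > 5]

print("contour:", contour)              # rises in phrase one, falls in phrase two
print("large leaps (semitones):", leaps)
```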

Catchy songs are more likely to get radio airplay and to feature at the top of the charts, neither of which is surprising. But there has previously been little evidence as to why some songs are “catchy” regardless of how popular they are or how many people have heard them.

“Our findings show that you can, to some extent, predict which songs are going to get stuck in people’s heads based on the song’s melodic content,” Jakubowski said.

“This could help aspiring song-writers or advertisers write a jingle everyone will remember for days or months afterwards,” she added.


But let’s say you’re on the other end of the line: you’re the listener, desperately trying to get the song out of your head. What do you do? Jakubowski suggests engaging with the song, as many people report that it becomes easier to get rid of an earworm after listening to the song in full. You can also try distracting yourself with another song, either by listening to it or thinking about it, though that sounds a lot like replacing one thorn with another. If neither approach works, just give it time and try not to think about the song; it will naturally fade away on its own.

The full paper, “Dissecting an Earworm: Melodic Features and Song Popularity Predict Involuntary Musical Imagery,” has been published online in the journal Psychology of Aesthetics, Creativity, and the Arts.


Some birds play tunes not all that different from jazz musicians

A young pied butcherbird playing its first songs. Credit: Pixabay


Perching birds are like nature’s choir, raising their voices to the tune of life alongside the clicks of crickets, the howls of wolves, and the choruses of fish. But there’s more to a bird’s chirps and whistles than meets the ear. Some species push the envelope and behave genuinely musically: the pied butcherbird’s tuneful behaviour, for instance, rivals that of professional human musicians, a new paper concludes.

Studying the musical nature of birds required serious interdisciplinary collaboration between biologists, neuroscientists, musicians, and engineers. Among the researchers were David Rothenberg, distinguished professor of philosophy and music in the New Jersey Institute of Technology’s Department of Humanities, and Eathan Janney, a Ph.D. candidate in the Department of Psychology at City University of New York (CUNY)’s Hunter College.

“Science and music may have different criteria for truth, but sometimes their insights need to be put together to make sense of the beautiful performances we find in nature,” said Rothenberg, who also plays the clarinet and saxophone. He is the author of Thousand Mile Song, a book about making music with whales, and Bug Music: How Insects Gave Us Rhythm and Noise, whose premise is that listening to cicadas and other humming, clicking, and thrumming insects fostered an innate sense of musical rhythm and synchronization over the long course of human evolution.

Not too long ago, the idea that some bird songs are actually based on musical principles would’ve put many in a quandary. The extensive analysis of the pied butcherbird, however, suggests that this sort of skepticism is no longer warranted.

Hours of recorded audio and statistical analysis suggest that these highly musical birds “balance their performance to keep it in a sweet spot between boredom and confusion,” according to co-author Ofer Tchernichovski, professor in the Hunter College Department of Psychology.

According to the researchers, the more complex a bird’s repertoire, the better it is at singing in time, rhythmically interacting with other birds more skillfully than birds that know only a few songs. The most skillful birds play around extensively with their tunes, balancing repetition and variation. It’s not all that different from what a jazz musician does, said Constance Scharff, a co-author who directs the animal behavior laboratory at the Freie Universität Berlin.

“We found that different phrase types often share motifs (notes or stereotyped groups of notes). These shared motifs reappeared in strikingly regular temporal intervals across different phrase types, over hundreds of phrases produced without interruption by each bird. We developed a statistical estimate to quantify the degree to which phrase transition structure is optimized for maximizing the regularity of shared motifs. We found that transition probabilities between phrase types tend to maximize regularity in the repetition of shared motifs, but only in birds of high repertoire complexity,” the researchers wrote.
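The paper’s regularity statistic is more involved than can be shown here, but one of its ingredients, the transition structure between phrase types, is simple to compute. The phrase labels below are made up for illustration.

```python
# Sketch of one ingredient of the analysis: empirical transition
# probabilities between phrase types in a song sequence. The paper's actual
# regularity estimate is more involved; this only shows the transition-
# structure part, on an invented sequence of phrase labels.
from collections import Counter, defaultdict

phrases = list("ABACABDACABAC")  # toy sequence of phrase types

counts = defaultdict(Counter)
for cur, nxt in zip(phrases, phrases[1:]):
    counts[cur][nxt] += 1

for cur, nxts in sorted(counts.items()):
    total = sum(nxts.values())
    probs = {n: round(c / total, 2) for n, c in sorted(nxts.items())}
    print(f"P(next | {cur}) = {probs}")
```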

In the video below, you can catch a glimpse of the butcherbird’s musical virtuosity. The sample includes a butcherbird solo, a song from another butcherbird, and one from an Australian magpie.

The origin of human music is difficult to pinpoint. Why do humans make music? Why do we like it so much? Why do we like it the way we do? These are serious questions whose answers we don’t fully know. By all accounts, though, humans aren’t the only musical species, and we’re certainly not the first to develop the ability to construct rhythmic patterns of sound.

“Since pied butcherbird songs share so many commonalities with human music,” Taylor said in a statement, “this species could possibly revolutionize the way we think about the core values of music.”

The findings appeared in the journal Royal Society Open Science.

This is the first pop song written by an A.I. and it sounds a lot like The Beatles

Hey there, folks! We have a special treat for you tonight: a fresh band called The BITles, featuring Ringo Ram, George Algorithsson, Paul McCryption and John Digienon. Let’s give it up for their latest single ‘Daddy’s Car’!

Liked it? Well, you should know that the whole track was written by a machine. It was composed by Sony’s Flow Machines A.I., which has 13,000 songs fed into its database, covering every musical genre and style. For each track, the harmony and melody were analyzed and fed into a huge algorithm that learned what makes each genre stand out from the others.
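Sony hasn’t detailed how Flow Machines models a style, but a crude stand-in for “learning what makes a genre stand out” is to characterize each style by its distribution of melodic intervals and score new material against those distributions. The toy corpora below are invented for the example.

```python
# A crude stand-in for "learning what makes a genre stand out": model each
# style by its distribution of melodic intervals, then score a new melody
# against each style. Flow Machines' real models are far richer and private.
from collections import Counter

def interval_profile(melodies):
    c = Counter(b - a for m in melodies for a, b in zip(m, m[1:]))
    total = sum(c.values())
    return {i: n / total for i, n in c.items()}

# Toy "corpora" (MIDI pitches): a stepwise style vs. a leapy style.
style_a = [[60, 62, 64, 65, 67], [67, 65, 64, 62, 60]]
style_b = [[60, 67, 60, 72, 67], [72, 65, 72, 60, 67]]
profiles = {"A": interval_profile(style_a), "B": interval_profile(style_b)}

def score(melody, profile, floor=1e-6):
    s = 1.0
    for a, b in zip(melody, melody[1:]):
        s *= profile.get(b - a, floor)  # likelihood of each interval under the style
    return s

new = [60, 62, 64, 62, 60]
best = max(profiles, key=lambda k: score(new, profiles[k]))
print("closest style:", best)   # the stepwise melody matches style A
```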

To demonstrate Flow Machines’ power, a human composer named Benoît Carré was asked to input a style (he chose ‘The Beatles’) and eventually got ‘Daddy’s Car’. He then arranged and produced the sheet music the A.I. had given him. The lyrics are also Carré’s work.

Previously, an A.I. wrote the script for a movie, so if you thought artists would be spared by the incoming artificial intelligence revolution in the job market, think again. But if you liked this track, wait till you hear the whole album, slated for 2017. The album, the first to be written by an artificial intelligence, will feature tracks spanning different genres. Another track made by Flow Machines is composed in the style of American songwriters such as Irving Berlin, Duke Ellington, George Gershwin, and Cole Porter. Check it out below.

So, what do you think? I’ve honestly listened to worse things made by humans.

These are the most metal words in the English language, data scientist says

Every once in a while, scientists turn their minds from the stars or more nutritious foods toward life’s real questions: for example, how to sound as metal as possible.

Image credits Getoar Agushi / Wikimedia.

Former physicist turned data scientist Iain of Degenerate State crunched the numbers and has the keywords you need to sound the part. He mined DarkLyrics.com, “the largest metal lyrics archive on the web,” for the lyrics of 222,623 songs by 7,634 bands and analyzed them to find the most, and least, metal words in existence. By comparing the DarkLyrics data with the Brown Corpus, a 1961 collection of English-language documents that is “the first of the modern, computer readable, general corpora,” he put together a list of the 20 most and least metal words, along with their “metalness” factor.

So without further ado, Iain’s top 10 most metal words are:

  1. burn.
  2. cries.
  3. veins.
  4. eternity.
  5. breathe.
  6. beast.
  7. gonna.
  8. demons.
  9. ashes.
  10. soul.

And the top 10 least metal words:

  1. particularly.
  2. indicated.
  3. secretary.
  4. committee.
  5. university.
  6. relatively.
  7. noted.
  8. approximately.
  9. chairman.
  10. employees.

Iain’s method is more sophisticated than you might think. He first analyzed the DarkLyrics data and built word clouds of the most common words across all of the songs. Just looking at that data, however, doesn’t offer any special insight into the genre, he found.

“Metal lyrics seem focused on “time” and “life”, with a healthy dose of “blood”, “pain” and “eyes” thrown in. Even without knowing much about the genre, this is what you might expect,” he writes.

But looking only at how frequently each word appears in songs doesn’t actually tell us which words are closest to the spirit of metal.

“To do this we need some sort of measure of what “standard” English looks like, and […] an easy comparison is to the brown corpus,” he adds.

Iain assigned each word a “metalness” factor, M, defined as the logarithm of the word’s frequency in the metal lyrics divided by its frequency in the Brown Corpus: M(word) = log(f_lyrics / f_brown).

“To prevent us being skewed by rare words, we take only words which occur at least five times in each corpus.”
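Iain’s score is easy to reproduce in miniature. The sketch below computes M(word) = log(f_lyrics / f_brown) with the same at-least-five-occurrences filter, using NLTK’s copy of the Brown Corpus and a tiny made-up stand-in for the DarkLyrics data (the real analysis scraped about 222,000 songs).

```python
# Recomputing the "metalness" score on toy data:
#   M(word) = log( f_lyrics(word) / f_brown(word) )
# keeping only words seen at least 5 times in each corpus. The lyrics below
# are a tiny invented stand-in for the scraped DarkLyrics corpus.
import math
from collections import Counter

import nltk
nltk.download("brown", quiet=True)   # one-time corpus fetch
from nltk.corpus import brown

def freqs(words):
    c = Counter(w.lower() for w in words if w.isalpha())
    return c, sum(c.values())

brown_counts, brown_total = freqs(brown.words())
lyrics = ("burn the ashes of eternity " * 6 +
          "the soul cries in my veins " * 6).split()
lyric_counts, lyric_total = freqs(lyrics)

metalness = {
    w: math.log((lyric_counts[w] / lyric_total) / (brown_counts[w] / brown_total))
    for w in lyric_counts
    if lyric_counts[w] >= 5 and brown_counts[w] >= 5   # rare-word filter
}
for w, m in sorted(metalness.items(), key=lambda kv: -kv[1]):
    print(f"{w:10s} M = {m:+.2f}")
```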

He plotted the metalness of all 10,000 words here, so you can know exactly how intense each word you say is. Unsurprisingly, topics like university and employment don’t quite have the metalness of, say, demons or the fiery hells.

Iain notes that his analysis isn’t perfect: because the Brown Corpus and the lyrics cover different topics, some words are naturally favoured with more or less metalness than they deserve. A more precise measurement would compare metal lyrics against the lyrics of other musical genres.

“A better measure of what constitutes “Metalness” would have been a comparison with lyrics of other genres, unfortunately I don’t have any of these to hand.”

However, it’s accurate enough to tell you what you need to know: the next time that sexy someone in a Judas Priest t-shirt saunters by, leave your uni and job out of the conversation. Your burning soul, the cries of the beast running through your veins, and so on are all you need to talk about.

Musical horns reveal 2,000 year old cultural ties between Europe and India

An archaeologist studying Irish Iron Age musical horns has found a very surprising counterpart to Europe’s ancient musical arts: these practices, long considered dead, are still alive and well in south India.

Billy Ó Foghlú with a Kompu from Kerala, India. Image credit: Stuart Hay/ANU


Europe and India aren’t exactly what you’d call close to each other, not only geographically but also culturally: different ideas, languages, cuisines, religious practices, and artistic concepts have shaped the two regions throughout their history. So you can imagine the surprise of PhD student Billy Ó Foghlú, from The Australian National University (ANU), when he discovered that modern Indian horns are almost identical to Iron Age artifacts found throughout Europe.

This is testimony to the strong cultural links that existed between the two regions some 2,000 years ago, he says.

“Archaeology is usually silent. I was astonished to find what I thought to be dead soundscapes alive and living in Kerala today,” said Ó Foghlú. “The musical traditions of south India, with horns such as the kompu, are a great insight into musical cultures in Europe’s prehistory.”

This cultural tie can be used both ways though, also offering insight into the history of India’s musical arts.

“And, because Indian instruments are usually recycled and not laid down as offerings, the artefacts in Europe are also an important insight into the soundscapes of India’s past.”

The level of similarity between the horns suggests that Europe and India had rich cultural exchanges, with musicians sharing instruments and practices native to their own regions. An example of this exchange can be found in a carving dating from 300 BC that shows a celebration in Sanchi: a group of performers is depicted playing two European carnyces, a type of horn whose end is fashioned in the image of an animal’s head.

Carnyx reconstruction at the Celtic museum in Hallein.
Image credits wikimedia user Wolfgang Sauber.

The musical style of Kerala also helps explain some of the mysteries surrounding the horns unearthed in European Iron Age excavations, and it suggests a musical soundscape very different from that of current Western music.

“Some almost identical instruments have been unearthed together, but they are slightly out of tune with each other to western ears,” Ó Foghlú said. “This was previously assumed to be evidence of shoddy workmanship.

“But in Indian music this kind of dissonance is deliberate and beautiful. Horns are used more as a rhythm instrument, not for melody or harmony in a western sense.”

The research paper, titled “Ancient Irish musical history found in modern India,” is published in the Journal of Indian Ocean Archaeology.