Tag Archives: language

Why some people can’t count past “1”: Mathematical thinking is shaped by language and culture

A Tsimane’ woman is tasked with lining up beads to replicate the exact number of white buttons on the table. Credit: Benjamin Pitt, UC Berkeley.

Humans seem to have an innate system for thinking about and organizing numbers, and some scholars have proposed that our brains have a built-in mechanism for counting. Such a mechanism would be distinct from language, so humans and other animals would be able to count without relying on words such as “one”, “two”, and so on. But that may be only partially true.

A new study suggests that language plays an integral part in shaping our mathematical thinking, as evidenced by members of Bolivia’s Indigenous Tsimane’ community. In this culture, people couldn’t count beyond the “number words” they knew.

“Our finding provides the clearest evidence to date that number words play an active role in people’s ability to represent exact quantities and supports the broader claim that language can enable new conceptual abilities,” said study lead author Benjamin Pitt, a postdoctoral fellow in UC Berkeley’s Computation and Language Lab.

Language drives mathematical reasoning, and not the other way around

Pitt traveled deep into the Amazon basin of Bolivia, using a Jeep, canoe, and finally some hiking to reach the remote villages of the Tsimane’ people. With the help of Tsimane’-Spanish interpreters, the researcher recruited 30 community members with little formal schooling for an experiment.

Each participant was shown a group of objects, such as four buttons, and asked to replicate what they saw using different objects like glass beads. Quite surprisingly, the participants could match the exact number of objects only when they knew the corresponding number words.

During a previous trip to the Amazon, Pitt and colleagues studied the organization of numerical information among the Tsimane’ people. Unlike children and adults in industrialized countries, who organize time and numbers or measure things from left to right, the Tsimane’ people organize them freely in either direction.

In this experiment, indigenous members received a set of five cards, where each card displayed a different number of dots. A card containing five dots was placed in the middle of a strip of Velcro, and the participants had to arrange their cards on either side of the middle card, according to their numerical value. The Tsimane’ participants were just as likely to organize the cards from left to right as from right to left. The same happened with cards representing fruit ripening over time, a measure of how they organize size and time.

“Abstract concepts are things we cannot see or hear or touch, like time, for example. You can’t see time. You can’t touch it. The same goes for numbers. We think and talk about time and numbers constantly. But they’re abstract. So how do we make sense of them? One answer is that we use space to make them tangible — thinking of them along a line from left to right or from top to bottom. My research looks at the root of these types of concepts by studying how they vary across cultures, age groups and even across individuals within a group,” Pitt said.

Although it might sound impossible to Western people, there are to this day some cultures across the world that do not have words for numbers. Languages that contain no number words in their lexicon are known as anumeric. A prime example is the Pirahã people of Brazil, who have no words for any exact number — not even the number “1”. Their language contains just three imprecise words for quantities: Hòi means “small size or amount,” hoì means “somewhat larger amount,” and baàgiso means “to cause to come together,” or “many.”

As a result, Pirahã people have great difficulty consistently performing simple mathematical tasks. For example, one test involved 14 adults in one village who were presented with lines of spools of thread and asked to create a matching line of empty rubber balloons. They were unable to match the lines one-to-one when the quantities were greater than two or three.

Are Pirahã adults less resourceful and intelligent than a four-year-old American child, for whom such a task is trivial? Of course not. In another experiment, when researchers at the University of Miami introduced numerical words, the indigenous people’s performance on mathematical tasks dramatically improved. These findings showed that language is key to mathematical reasoning.

The Tsimane’ study further strengthens this notion and adds new pieces to the puzzle. Unlike American or Pirahã adults, Tsimane’ adults vary greatly from one another in their ability to count. Some can count indefinitely, while others aren’t sure what follows after, say, the number 6 and can only approximate.

“We used a novel data-analysis model to quantify the point at which participants switched from exact to approximate number representations during a simple numerical matching task. The results show that these behavioral switch points were bounded by participants’ verbal count ranges; their representations of exact cardinalities were limited to the number words they knew. Beyond that range, they resorted to numerical approximation. These results resolve competing accounts of previous findings and provide unambiguous evidence that large exact number concepts are enabled by language,” Pitt and colleagues wrote in their study.
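The paper’s switch-point model isn’t reproduced in the article, but the general idea can be sketched in a few lines of Python. The trial data and the simple grid-search scoring below are illustrative assumptions, not the authors’ actual analysis:

```python
import numpy as np

# Hypothetical trial data: target quantity shown vs. quantity the participant produced.
targets   = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 12])
responses = np.array([2, 3, 4, 5, 6, 8, 7, 11, 9, 14])

def switch_point(targets, responses):
    """Return the candidate switch point k that best separates exact matching
    (target <= k) from approximate matching (target > k)."""
    best_k, best_score = None, -np.inf
    for k in np.unique(targets):
        exact = targets <= k
        # Score: exact trials should match perfectly; approximate trials
        # should merely be close (here, within a small relative error).
        exact_ok = np.all(responses[exact] == targets[exact])
        approx_err = np.abs(responses[~exact] - targets[~exact]) / targets[~exact]
        score = (np.sum(exact) if exact_ok else -np.inf) - approx_err.sum()
        if score > best_score:
            best_k, best_score = k, score
    return best_k

print(switch_point(targets, responses))  # 6: exact up to 6, approximate beyond
```

In this toy run the estimated switch point would sit at the largest quantity the participant still reproduced exactly, mirroring the study’s finding that exact matching stops at the edge of a person’s verbal count range.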

This Australian duck says ‘You bloody fool’ and can imitate other sounds — and scientists couldn’t be more fascinated

Musk duck. Credit: Wikimedia Commons.

When biologist Carel ten Cate heard rumors of a talking duck in Australia, he brushed it off as a comical anecdote, as any sane person would. But his curiosity got the better of him, so he tracked down a well-respected Australian scientist who had first noticed this phenomenon more than three decades ago. After listening to verified recordings of an adult musk duck vocalizing the sounds of a door slamming or squeaking, a pony snorting, a man coughing, and even the all too familiar slur “You bloody fool!”, the Dutch biologist was simply stunned.

Carel ten Cate’s encounter with this articulate duck led him down a rabbit hole in which he found more evidence that musk ducks (Biziura lobata) can mimic sounds from nature, as well as those made by humans.

This extraordinary ability, which was documented in the Philosophical Transactions of the Royal Society of London B, officially allows the musk duck to join an exclusive club of animals that are capable of acquiring vocalization through learning, which includes parrots, hummingbirds, and some songbirds, as well as some whales, seals, dolphins, and bats on the mammalian front.

“These sounds have been described before, but were never analysed in any detail and went so far unnoticed by researchers of vocal learning,” wrote ten Cate, a professor of animal behavior at Leiden University, in his study. His co-author is Australian scientist Peter J. Fullagar, who first documented a musk duck imitating sounds over 30 years ago.

Nearly all mammals produce some vocal sounds, from dogs barking and howling to cattle lowing and mooing. Humans are very different in that we can string together sounds that have particular meanings, which we call words, allowing us to communicate with one another through language. At the same time, while most mammals are born with innate vocalization abilities, humans are not.

We all need to learn how to speak and the brain processes that support this type of learning are still poorly understood. This is why studies such as this that probe acquired vocalization in other species are important for unraveling these processes.

Vocal learning refers to imitating sounds or producing completely new vocalizations, depending on the species involved. Central to this ability seems to be auditory feedback during development.

“Most species have a more innate ability to learn how to make sounds. But a few rare animals, including a handful of mammals and, of course, human beings, are vocal learners. They need auditory feedback to learn how to make the right sounds if they want to communicate,” said Michael Yartsev, assistant professor of bioengineering at the University of California, Berkeley, in a 2020 interview with the Dana Foundation.

Yartsev’s earlier studies with Egyptian fruit bats showed that individuals that have been isolated or exposed to unique acoustic environments right after they were born had different vocalizations than groups of bats that were raised normally.

“This suggests that their vocalizations have some plasticity. Our own work has shown that, even in adults, if you expose the bats to sound perturbation, they have the capacity to modify or adapt their vocalizations in a stable manner over prolonged periods of time. So, there are good indications that there is some form of plasticity there that we can investigate,” Yartsev said.

The musk ducks seem to be this way too. Besides the musk duck that imitated his former caretaker’s insults, ten Cate identified another musk duck that was raised alongside Pacific black ducks (Anas superciliosa) and consequently quacked like them. Both ducks were raised in captivity from the time they were hatchlings. Wild musk ducks sound very different and do not seem to add new sounds to their vocal repertoire, which also explains why their vocal learning abilities have been overlooked until now — they apparently make for horrible pets.

Furthermore, not all captive musk ducks seem to imitate non-native sounds. Captive female musk ducks don’t perform vocal displays, and the imitations performed by the males were part of their advertising displays to potential mates.

“Together with earlier observations of vocal differences between populations and deviant vocalizations in captive-reared individuals, these observations demonstrate the presence of advanced vocal learning at a level comparable to that of songbirds and parrots. We discuss the rearing conditions that may have given rise to the imitations and suggest that the structure of the duck vocalizations indicates a quite sophisticated and flexible control over the vocal production mechanism,” the scientists wrote in their new study.

Ducks split off from the avian evolutionary family tree earlier than birds such as parrots and songbirds. What’s more, duck brains differ considerably in structure from those of these avian relatives. Therefore, the “observations support the hypothesis that vocal learning in birds evolved in several groups independently rather than evolving once with several losses,” the researchers concluded.

This article was originally published in September 2021.

The average dog knows 89 words and phrases

Credit: Pixabay.

By several behavioral measures, the mental abilities of the average canine are on par with those of a human child around age 2. They’re also very similar in their comprehension of words, with a new study finding that, on average, dogs respond to 89 words.

“We aimed to develop a comprehensive owner-reported inventory of words to which owners believe their dogs respond differentially and consistently,” wrote Catherine Reeve and Sophie Jacques, the authors of the new study, both researchers at Dalhousie University, Canada.

Dogs, likely the first domesticated animal, and humans share a strong bond that stretches back thousands of years. Over time, dogs were selected for traits that made them more sociable, loyal, and cooperative. Early on, domesticated canines proved useful in hunting, but nowadays they occupy a wide range of specialized roles, such as search and rescue, agriculture, police, and scent detection (dogs can sense several types of cancer, migraines, low blood sugar, seizures, diabetes, and even COVID-19).

Their ability to fulfill these roles hinges, for the most part, on their responsiveness to human social cues. Often these cues are verbal commands and basic utterances during various contexts (i.e. playtime or walking), but also non-verbal cues such as gestures.

Scientists have sought to assess dogs’ ability to comprehend human speech since as early as the 1920s. One study from 1928 documented the ability of Fellow, a young male German Shepherd, to respond to verbal commands uttered by his owner. Fellow could recognize 68 words and phrases, including “Go outside and wait for me.” More recently, a 2004 study found that Rico, a Border Collie, could identify and retrieve over 200 items, such as various balls and stuffed toys, when his owner uttered each item’s unique name.

These studies show that dogs can respond consistently and differentially to spoken words and phrases, something not at all surprising even to first-time dog owners. But the Canadian researchers wanted to investigate more closely and empirically the extent to which typical dogs respond to words, given that Fellow, Rico, and most other canines involved in similar studies were very well trained.

In order to quantify the number of words a dog could comprehend, the researchers employed virtually the same tool that psychologists use to assess infants’ understanding and development of early language, based on a parent-reported checklist called the MacArthur-Bates Communicative Development Inventory.

A total of 165 owners of a variety of dog breeds were surveyed about the different words and phrases that their pets seemed to understand. Each owner was also asked questions about themselves that were relevant to the study, such as dog training experience and household member composition, as well as about their dogs (i.e. breed, age, sex, training background).
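As a rough illustration (not the researchers’ actual pipeline), owner-reported checklist responses of this kind could be tallied into per-dog word counts with a few lines of Python; the dogs, words, and responses below are invented:

```python
# Hypothetical owner-reported checklist responses: for each dog, a dict of
# word -> whether the owner reports a consistent, differential response.
checklists = {
    "Luna":  {"sit": True, "stay": True, "ball": True, "garbage": False},
    "Rex":   {"sit": True, "come": True, "squirrel": True, "dinner": True},
    "Mochi": {"sit": True, "no": False},
}

# Count the words each dog is reported to respond to, then summarize.
counts = {dog: sum(words.values()) for dog, words in checklists.items()}
average = sum(counts.values()) / len(counts)

print(counts)             # {'Luna': 3, 'Rex': 4, 'Mochi': 1}
print(round(average, 1))  # 2.7
```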

On average, dog owners identified 89 terms that their pets responded to consistently, half of which were classed as commands. There were outliers, of course, with one clever dog reported to respond to 215 words. The least responsive dog responded to only 15 words.

The most responsive breeds included the Australian Shepherd, Border Collie, German Shepherd, Bichon Frise, Cavalier King Charles Spaniel, and the Chihuahua. Breeds that were not quite as responsive included hounds like the Beagle and Whippet or working-guardians like Boxers and the Cane Corso.

The most common words dogs responded to were their own name, as well as command-like words like ‘sit’, ‘come’, ‘down’, ‘stay’, ‘wait’, ‘no’, ‘OK’ and ‘leave it’.  But many dogs could also understand nouns like ‘treat’, ‘breakfast’, ‘dinner’, ‘garbage’, ‘poo’ and things to chase, such as a ‘ball’ or ‘squirrel’.

A word was counted as one a dog responded to if the pet looked up, whined, ran, wagged its tail, or performed the requested action.

“The current study takes an important first step towards developing an instrument that makes it possible to identify which words might most likely be responded to by dogs. Although research on dogs’ responses to words exists, much of it has been limited in scope (e.g., teaching a handful of commands or object words) or sample size (e.g., training a single dog). The current study is consistent with existing research suggesting that dogs may be particularly adept at responding to commands rather than object words,” the researchers wrote in their study published in the journal Applied Animal Behaviour Science.

The researchers cautioned, however, that these results do not prove that dogs actually understand the meaning of the words. They could respond to various words uttered by humans due to operant or classical conditioning, such as that present in basic dog training (the sounds that form the word ‘sit’ are eventually associated with a reward). Dogs may also learn to associate certain sounds that form words with events or objects more passively by learning the association between them through repeated pairings, a process scientists call statistical learning.

“With additional research, our tool could become an efficient, effective, and economical research instrument for mapping out some of their competences and perhaps help predict early the potential of individual dogs for various professions,” they added.

Our brains may be naturally wired for multilingualism, being ‘blind’ to changes between languages

Our brains may be tailored for bilingualism, new research reports. According to the findings, the neural centers that are tasked with combining words together into larger sentences don’t ‘see’ different languages, instead treating them as if they belong to a single one.

Image credits Willi Heidelbach.

The same pathways that combine words from a single language also do the work of combining words from two different languages in the brain, according to the paper. In the brains of bilingual people, the authors report, this allows for a seamless transition in comprehending two or more languages. Our brains simply don’t register a switch between languages, they explain.

The findings are directly relevant to bilingual people, as they allow us a glimpse into how and why bilinguals often mix and match words from different languages in the same sentence. However, they are also broadly relevant to people in general, as they help us better understand how our brains process words and meaning.

Speaking in tongues

“Our brains are capable of engaging in multiple languages,” explains Sarah Phillips, a New York University doctoral candidate and the lead author of the paper. “Languages may differ in what sounds they use and how they organize words to form sentences. However, all languages involve the process of combining words to express complex thoughts.”

“Bilinguals show a fascinating version of this process — their brains readily combine words from different languages together, much like when combining words from the same language,” adds Liina Pylkkänen, a professor in NYU’s Department of Linguistics and Department of Psychology and the senior author of the paper.

Bilingualism and multilingualism are widespread around the world. In the USA alone, according to data from the U.S. Census, roughly 60 million people (just under 1 in 5 people) speak two or more languages. Despite this, the neurological mechanisms that allow us to understand and use more than a single language are still poorly understood.

The specific habit of bilinguals of mixing words from their two languages into single sentences during conversation was of particular interest to the authors of this paper. To investigate, the duo set out to test whether bilinguals use the same neural pathways to understand mixed-language expressions as they do to understand single-language expressions.

For the study, they worked with Korean/English bilinguals. The participants were asked to look at a series of word combinations and pictures on a computer screen. These words either formed a meaningful two-word sentence or a meaningless pair of verbs, such as “jump melt”. Some of these pairings had two words from a single language, while others used one word from English and another from Korean, simulating mixed-language conversations.

Participants then had to indicate whether the pictures matched the words that preceded them.

Their brain activity was measured during the experiment using magnetoencephalography (MEG), which records neural activity by measuring the magnetic fields generated in the brain when electrical currents are fired off from neurons.

The data showed that bilinguals used the same neural mechanisms to interpret mixed-language expressions as they did to interpret single-language expressions. More specifically, activity in their left anterior temporal lobe, a brain region known to play a part in combining meaning from multiple words, didn’t show any differences when interpreting single- or mixed-language expressions. This was the region that actually combined the meanings of the two words participants were reading, as long as they did combine into a meaningful whole.

All in all, the authors explain, these findings suggest that the mechanisms tasked with combining words in our brains are ‘blind’ to language. They function just as effectively, and in the same way, when putting together words from a single language or multiple ones.

“Earlier studies have examined how our brains can interpret an infinite number of expressions within a single language,” Phillips concludes. “This research shows that bilingual brains can, with striking ease, interpret complex expressions containing words from different languages.”

The research was carried out with bilingual people, for the obvious reason that non-bilinguals only understand a single language. While the findings should be broadly applicable, there is still a question of cause and effect here. Is the neural behavior described in this paper a mechanism that’s present in all of our brains? Or is it something that happens specifically because bilinguals have learned and become comfortable with using multiple languages? Further research will be needed to answer these questions.

The paper “Composition within and between Languages in the Bilingual Mind: MEG Evidence from Korean/English Bilinguals” has been published in the journal eNeuro.

White matter density in our brains at birth may influence how easily we learn to understand and use language

New research at Boston University found that the brain structure of babies can have an important effect on their language development within the first year of life. The findings show that, although nurture plays a vital role in the development of an infant’s language abilities, natural factors also matter.

Image via Pixabay.

The study followed dozens of newborns over the course of five years, looking to establish how brain structure during infancy relates to the ability to learn language during early life. While these results definitely show that natural factors influence said ability, they’re also encouraging — upbringing, or nurture, has a sizable influence on a child’s ability to develop their understanding and use of language.

For the study, the authors worked with 40 families to monitor the development of white matter in infants’ brains using magnetic resonance imaging (MRI). This was particularly difficult to pull off, they explain, as capturing quality data using an MRI relies on the patient keeping completely still.

Born for it

“[Performing this study] was such a fun process, and also one that calls for a lot of patience and perseverance,” says BU neuroscientist and licensed speech pathologist Jennifer Zuk, lead author of the study. “There are very few researchers in the world using this approach because the MRI itself involves a rather noisy background, and having infants in a naturally deep sleep is very helpful in accomplishing this pretty crazy feat.”

The fact that babies have an inborn affinity for absorbing and processing information about their environment and the adults around them isn’t really news. Anyone who’s interacted with an infant can hear the hints of developing language in their cries, giggles, and the myriad other sounds babies produce.

But we also like to talk to babies, thus helping them understand language better. The team wanted to determine how much of an infant’s ability to learn is due to their inborn traits, and how much of it comes down to the practice they get with the adults in their lives.

The new study reports that functional pathways in the brain play a large role in forming a child’s language-learning abilities during the first year of their life. These pathways are represented by white matter, the tissue that acts as a connector in the brain and links together areas of gray matter, where neurons reside and perform the actual heavy lifting in our brains. The team was interested in white matter in particular as it is the element that actually allows neurons to work together to perform tasks. The practice of any skill leads to the reinforcement of connections that underpin it, they explain, showcasing the importance of white matter in brain functionality.

“A helpful metaphor often used is: white matter pathways are the ‘highways,’ and gray matter areas are the ‘destinations’,” says Zuk.

Together with senior author Nadine Gaab from Boston Children’s Hospital, Zuk met with 40 families with infants to record the development of their white brain matter. In order to ensure the quality of the recorded data, they had to make sure that the babies were sound asleep before placing them in the MRI machine — which was quite a challenge, as these devices can become quite loud. This is the first time researchers have monitored the relationship between changes in brain structure over time and the development of language throughout the first few years of children’s lives.

One area they studied in particular is the arcuate fasciculus, a strip of white matter that connects two regions of the brain responsible for the understanding and use of language. MRI machines can assess the properties of tissue (in this case, of white matter pathways) by measuring how water molecules move through individual pieces of tissue.
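The article doesn’t name the exact measure, but in diffusion MRI the “organization” of white matter is commonly summarized by fractional anisotropy (FA), computed from how directionally water diffuses. A minimal sketch using the standard FA formula, with made-up eigenvalues, purely for illustration:

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """Standard FA formula: 0 for perfectly isotropic diffusion,
    approaching 1 for strongly directional (well-organized) fibers."""
    lam = np.array([l1, l2, l3], dtype=float)
    mean = lam.mean()
    num = np.sqrt(((lam - mean) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

# Illustrative eigenvalues (mm^2/s): water diffuses much more freely
# along a fiber bundle (l1) than across it (l2, l3).
print(round(fractional_anisotropy(1.7e-3, 0.3e-3, 0.3e-3), 2))  # ~0.8
```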

Five years after first peering into the babies’ brains, the team met up with the families again in order to assess each child’s language abilities. They tested vocabulary knowledge, the ability to identify sounds within individual words, and the ability to form words from individual sounds.

They report that children born with higher levels of white matter organization showed better language skills at the five-year mark, suggesting that biological factors do have an important role to play in the development of language skills. By themselves, however, these results are not enough to prove that biological factors outweigh nurture completely. They’re simply an indication that brain structure can predispose someone towards greater language abilities. The findings are meant to be a piece of a much larger picture, not the whole.

“Perhaps the individual differences in white matter we observed in infancy might be shaped by some combination of a child’s genetics and their environment,” says Zuk. “But it is intriguing to think about what specific factors might set children up with more effective white matter organization early on.”

Even if the foundation for language skills is established in infancy, the team explains, our upbringing and experiences are critical to build upon this natural predisposition and play a very important role in a child’s outcome. Judging from the findings, however, the first year of a child’s life is a very good time to expose them to language in order to promote the development of this skill in the long term.

The paper “White matter in infancy is prospectively associated with language outcomes in kindergarten” has been published in the journal Developmental Cognitive Neuroscience.

What is the hardest language to learn?

Mastering any new language is a challenge, but some take much more time and effort to reach proficiency.

The ability to learn a certain foreign language depends on a number of factors. These include how similar the foreign language is to an individual’s native language (or any other foreign language they might speak), how immersed a person is into the language (studying from books at home versus conversing with the locals), and cultural differences, as well as the complexity of the language itself, in terms of grammar, writing system, and linguistic concepts.

For English-speaking students, the hardest foreign languages to learn include Arabic, Cantonese, Mandarin, Japanese, and Korean. On the other hand, some of the easiest foreign languages to become proficient in include Spanish and Portuguese.

What’s the hardest foreign language to learn for an English speaker?

Reddit user Fummy made a map based on the FSI data. Fummy used six categories of difficulty rather than four for the purpose of this map. Countries are colored according to the difficulty of acquiring their official language for a native English speaker. Credit: Fummy/Reddit.

There is no official consensus as to which foreign language is the most difficult for an English speaker to learn. However, research by the United States Foreign Service Institute and the Defense Language Institute has ranked foreign languages by the number of hours students require, on average, to become proficient in them.

After 70 years of experience teaching languages to American diplomats, the U.S. Foreign Service Institute has grouped foreign languages into four categories of difficulty. The easiest require 575-600 hours of study (23-24 weeks of classroom study) for students to achieve sufficient competence to be posted overseas, whereas the hardest group requires at least 2,200 hours of study (88 weeks of full-time classroom study) to achieve the same level of proficiency. In other words, some languages can be 3-4 times harder to master than others.

Credit: Fummy/Reddit.

Category I Languages: 24-30 weeks (600-750 class hours)

Danish (24 weeks), Dutch (24 weeks), French (30 weeks), Italian (24 weeks), Norwegian (24 weeks), Portuguese (24 weeks), Romanian (24 weeks), Spanish (24 weeks), Swedish (24 weeks)

Category II Languages: Approximately 36 weeks (900 class hours)

German, Haitian Creole, Indonesian, Malay, Swahili

Category III Languages: Approximately 44 weeks (1100 class hours)

Albanian, Amharic, Armenian, Azerbaijani, Bengali, Bulgarian, Burmese, Czech, Dari, Estonian, Farsi, Finnish, Georgian, Greek, Hebrew, Hindi, Hungarian, Icelandic, Kazakh, Khmer, Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Macedonian, Mongolian, Nepali, Polish, Russian, Serbo-Croatian, Sinhala, Slovak, Slovenian, Somali, Tagalog, Tajiki, Tamil, Telugu, Thai, Tibetan, Turkish, Turkmen, Ukrainian, Urdu, Uzbek, Vietnamese

Category IV Languages: 88 weeks (2200 class hours)

Arabic, Chinese – Cantonese, Chinese – Mandarin, Japanese, Korean

Why some languages are harder to learn than others

Although the degree to which an individual manages to master a foreign language can depend a lot on their motivation, almost everything else comes down to how similar your native language is to the one you’re trying to learn.

According to Dr. Cindy Blanco, Senior Learning Scientist at Duolingo, “what makes a language easy or hard is all about what languages you already know — so your native language is really important, but so are any other languages you know or have studied.”

“Adults studying a new language have to suppress the language(s) they know best in order to learn the new system: they have already learned first language conceptual categories for vocabulary and grammar, intricate articulatory motions to pronounce sounds with precise acoustic targets (or manual targets, in signed languages), and maddeningly arbitrary connections between written squiggles and sounds, syllables, or words,” Blanco told ZME Science.

One obvious way that a language can be radically different from another is in the writing system. French uses the same writing system as English, apart from a few extra symbols, so it is much easier to learn than Japanese or Hindi, which have completely different writing systems. Japanese uses three different writing systems: imported Chinese characters (Kanji) and two syllabaries, Hiragana and Katakana. Each writing system has its time and place, so you must learn all three.

A vocabulary and syntax similar to those of your native language can also make it easier to pick up a new language. If you’re an English speaker, you’re quite fortunate, since other languages tend to borrow words from English, given how widely it is spoken across the world.

“We’re apt to transfer properties of our first language to the new language, and that could be helpful if the properties are the same across the languages and more challenging if they differ. Properties include things like first language sounds, concepts and meanings (what counts as “blue” isn’t universal!), patterns for what order to put words in, and rules about politeness,” Blanco said.

However, a common vocabulary isn’t always helpful. In fact, it can sometimes work against you. For instance, a French “préservatif” is not something you add to your food, but a condom.

Languages can also differ substantially through the use of different tones. There are four tones in Mandarin: high pitch (say, G in a musical scale), rising pitch (like from C to G), falling (from G to C), and falling low then rising (C to B to G). So the same word can have totally different meanings based on its pronunciation. There’s a famous poem in Mandarin by Chinese-American linguist Yuen Ren Chao called The Lion-Eating Poet in the Stone Den, which, translated into English, reads:

In a stone den was a poet called Shi Shi, who was a lion addict, and had resolved to eat ten lions.
He often went to the market to look for lions.
At ten o’clock, ten lions had just arrived at the market.
At that time, Shi had just arrived at the market.
He saw those ten lions, and using his trusty arrows, caused the ten lions to die.
He brought the corpses of the ten lions to the stone den.
The stone den was damp. He asked his servants to wipe it.
After the stone den was wiped, he tried to eat those ten lions.
When he ate, he realized that these ten lions were in fact ten stone lion corpses.
Try to explain this matter.

In Mandarin, the same poem is made up of the syllable “shi” repeated 107 times in various tones.

« Shī Shì shí shī shǐ »

Shíshì shīshì Shī Shì, shì shī, shì shí shí shī.
Shì shíshí shì shì shì shī.
Shí shí, shì shí shī shì shì.
Shì shí, shì Shī Shì shì shì.
Shì shì shì shí shī, shì shǐ shì, shǐ shì shí shī shìshì.
Shì shí shì shí shī shī, shì shíshì.
Shíshì shī, Shì shǐ shì shì shíshì.
Shíshì shì, Shì shǐ shì shí shì shí shī.
Shí shí, shǐ shí shì shí shī shī, shí shí shí shī shī.
Shì shì shì shì.

And that’s nothing — Cantonese has nine tones. It’s safe to say that if you’re tone-deaf, you shouldn’t try to learn this language. I’m joking, of course, but this extreme example illustrates how tonal differences can contribute massively to making a language extremely hard to acquire.
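To illustrate how tone alone can separate words, here is the classic textbook example of the Mandarin syllable “ma”, written as a small Python mapping (for illustration only):

```python
# The same Mandarin syllable "ma" means entirely different things
# depending only on its tone (classic textbook example).
ma_by_tone = {
    "mā": "mother",    # 1st tone: high, level
    "má": "hemp",      # 2nd tone: rising
    "mǎ": "horse",     # 3rd tone: falling then rising
    "mà": "to scold",  # 4th tone: falling
}

for syllable, meaning in ma_by_tone.items():
    print(f"{syllable} -> {meaning}")
```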

A foreign language may be based on concepts that are entirely absent in your native tongue, and that may make it more challenging to acquire proficiency. For instance, unlike English, Romance languages such as Spanish and French have gendered nouns, articles, and adjectives. Similarly, in Arabic, you have to conjugate the verb differently depending on the gender of the person, whereas in English it is the same for both genders and very straightforward.

That being said, all foreign languages will present their own unique challenges. But it does get a lot easier the more new languages you acquire.

“For an English speaker, the sounds of Japanese will be (relatively) familiar, and Japanese doesn’t have tones like Chinese or Vietnamese. But the three Japanese writing systems are hard to learn, as are its politeness categories and word order (both really different from English). On the other hand, Chinese has a really challenging sound system for English speakers, but Chinese doesn’t have verb tenses–and that might sound like a relief to any English speaker who has studied Spanish!,” Blanco said.

“Language transfer gets even more interesting when you learn a third or fourth language: you might actually be more likely to transfer properties of your *second* language rather than your *first*! Depending on when and how you learned your languages, your brain might treat your second language as a sort of template for all other languages. This generally means that once you’ve learned multiple languages, each subsequent language is a bit easier: you’re better prepared for the ways in which languages can vary, after you overcome the difficulty of un-learning your first language,” she added.

The languages mentioned earlier are spoken by a sizable number of people, which makes them of particular interest. But while there are over 7,000 languages spoken in the world today, just 23 of them account for half of the world’s population. Roughly 40% of languages are now endangered, often with fewer than 1,000 speakers remaining. Perhaps some of them are a lot harder for an English speaker to master than Japanese or Cantonese.

Learning any new foreign language can be daunting, but there’s no evidence to suggest that any language cannot be learned by a person regardless of their linguistic background. Some languages may take much longer to master than others, but ultimately it’s all a matter of how motivated you are to rise to the challenge.

“Motivation is really important in the learning task, since it takes a long time to build up high proficiency in a language. If you’re highly motivated to learn Norwegian because of family ties or because you want to attend grad school there, it’ll be easier for you to stick with it, compared to an “easy” language that you are less interested in sticking with for many months and years,” Blanco said.

AI reads, translates, and auto-completes ancient cuneiform texts

Babylonian version of the Achaemenid royal inscriptions known as XPc (Xerxes Persepolis c) from the western anta of the southern portico of the so-called Palace of Darius (building I) at Persepolis. Credit: Wikimedia Commons.

Researchers are using a very modern solution to understand one of the world’s most ancient languages.

A team of historians from two universities in Israel are using artificial intelligence to read broken passages in the Akkadian language, the oldest known Semitic language. Akkadian was spoken in the ancient Mesopotamian empires between 4,500 and 2,400 years ago, and has been preserved on clay tablets.

However, the passage of time hasn’t always been kind to these tablets, and they’ve often suffered damage such as cracks and breaks — which obviously make some of the text unreadable. Now, experts from Ariel University, the Israeli Heritage Department, and Bar Ilan University are working to train artificial intelligence (AI) to read the Akkadian script, a series of wedge-shaped marks known as cuneiform writing.

Machine reading

Reading a text can prove frustrating, especially when it was penned thousands of years ago. That’s why the team was hard at work digitizing the tablets in order to make them more readable and preserve them for future generations. The digitized texts were then analyzed and fed through a tool dubbed ‘the Babylonian Engine’.

This piece of software relies on machine learning to understand the script and is then used to translate what’s visible, filling in areas where the text was damaged. One of the quirks of cuneiform is that it is polyvalent, so each sign has more than one possible meaning. Appropriately interpreting each sign is a function of what characters precede and follow it.
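The inner workings of the Babylonian Engine aren’t described here, but the context-dependence the researchers mention can be sketched with a toy example: choosing the most plausible reading of a polyvalent sign based on how often each candidate reading appears between the same neighbors in already-read texts. The signs, readings, and counts below are invented for illustration:

```python
from collections import defaultdict

# Toy corpus of (previous_sign, ambiguous_sign_reading, next_sign) triples,
# standing in for contexts seen in already-deciphered tablets. Invented data.
training_contexts = [
    ("lugal", "ud", "an"),   # reading "ud" seen between "lugal" and "an"
    ("lugal", "ud", "an"),
    ("e2",    "utu", "an"),  # same written sign, read "utu" in another context
    ("e2",    "utu", "ki"),
]

# Count how often each candidate reading appears with each (prev, next) pair.
counts = defaultdict(int)
for prev, reading, nxt in training_contexts:
    counts[(prev, nxt, reading)] += 1

def best_reading(prev, nxt, candidates):
    """Pick the candidate reading most often attested in this context."""
    return max(candidates, key=lambda r: counts[(prev, nxt, r)])

# The sign could be read "ud" or "utu"; its neighbors decide.
print(best_reading("lugal", "an", ["ud", "utu"]))  # -> "ud"
print(best_reading("e2", "ki", ["ud", "utu"]))     # -> "utu"
```

The real system uses machine learning over far richer context than neighboring signs, but the principle is the same: the surrounding text determines which of a sign’s possible readings is most plausible.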

The researchers hope to refine their system to the point where it can translate whole cuneiform texts and auto-complete the missing bits.

“The quickest impact in fact is pedagogical since students of Akkadian can use this tool to train in reading and restoring ancient texts,” says Shai Gordin, a senior lecturer at Ariel University, and first author of the paper. “Moreover, historians with less formal training in Akkadian can try and enter Akkadian text and get results which are citable in their research and publications.”

“For scholars of ancient Near Eastern history this tool can help with their work on text editions and going back to earlier publications in an attempt to restore broken sections of texts.”

The people of Mesopotamia were very thorough record-keepers, and these clay texts can help us piece together their political, economic, and social history, the team explains. Having an AI on hand that can handle large-scale translation and reconstruction quickly and accurately would be a huge boon to our efforts at studying the ancient culture.

The software isn’t at all limited to Akkadian. The algorithm that powers it can just as easily be trained on other languages and types of script. For now, however, it can only work with languages that follow the same format as Akkadian, so it’s currently better suited to ancient languages than to newer ones. The team is currently working on further developing the algorithm to handle different formats.

“But we are focusing more on ancient history at any rate,” Gordin admits, “since there is the place where heritage is mostly in danger of being lost for good.”

The Babylonian Engine is accessible online for anyone to upload and analyze cuneiform transliterations.

The paper “Reading Akkadian cuneiform using natural language processing” has been published online in the journal PLOS ONE.

Politically-incorrect language can seem sincere, but only if you’re saying what the audience wants to hear

Everyone prefers politically correct language sometimes, a new study reports. Where we differ is who we use it with, and how we perceive it in regards to the groups it’s being applied to.

Image credits Rudy and Peter Skitterians.

The concept of political correctness doesn’t get a lot of love in the online environment, so much so that it’s often invoked to imply a lack of authenticity in those who use such language. But it’s also a very divisive term; what others would see as dishonesty and sweet-talking, I would often just chalk up to being nice in conversation.

But a new study shows that, in fact, we’re all inclined to use politically correct language; we just apply it to different people. We tend to see it as compassionate when it’s applied to groups we support or care for, and as disingenuous when it’s addressed to other groups. Overall, however, we all tend to view people who use politically correct language as warmer, but less authentic and thus less likely to hold true to a particular view or idea.

Speeches and stones may break my bones

Such language is often used in a (genuine or disingenuous) attempt to appear more sensitive to the feelings of other people, especially those perceived to be socially disadvantaged. One example would be saying “Happy Holidays” instead of “Merry Christmas” in the understanding that not everyone holds to Christian or religious beliefs.

On paper, it all sounds great — I think all of us here agree that being considerate of others is a good thing. In practice, as you may know from discussions on various social media groups, the term is thrown about as a shorthand for censorship or socially-sanctioned limitations on free speech.

So there’s obviously a disconnect, but where? The team carried out a series of experiments totalling roughly 5,000 participants to examine this issue, reporting that, broadly, such language can make us seem less sincere by making our speech seem more strategic and calculated.

The first experiment asked participants to review a written speech and imagine a senator delivering it to an audience. Half the participants received a speech revolving around transgender policy, and the others around immigration policy (the topics were selected from particularly polarizing topics in American public discourse on purpose). Each speech used either politically correct (“Of course I believe that LGBTQ persons are among the most vulnerable members of our society and we must do everything in our power to protect them”) or incorrect (“These people who call themselves LGBTQ are often profoundly disturbed and confused about their gender identity”) language.

All in all, participants who read speeches using politically correct language tended to rate the senators as warmer, but less authentic. The results were consistent between all participants, regardless of their self-reported like or dislike of such language.

For the second experiment, the participants were asked to read a short biography of either Congressman Steve King, Senator Jim Inhofe, or Governor Jeb Bush and watch one of their speeches that were deemed either politically correct or incorrect. Afterward, they were asked to predict what stance these politicians would take on political issues in the future. This step aimed to evaluate how the use of language impacts an individual’s perceived trustworthiness or willingness to defend their beliefs even in the face of social pressure.

Those who listened to politically correct speeches reported feeling less certain about what stance the politician would take on topics in the future. This step showcased one of the trade-offs of using such language: while it makes one appear warmer and more concerned with others, it also makes them seem less sincere or more easily persuaded.

But it’s bias that convinces me

By this point, you’re probably asking yourself an obvious question: where do ‘them libs’ fit into the picture? The authors asked themselves the same thing, and it turned out that political affiliation has very little to do with our propensity to use politically correct language — but very much to do with whom we use it for.

In the third experiment, the team separated participants (based on their responses in a pre-test) as either Liberal-leaning or Conservative-leaning. The first group reported feeling sympathy for undocumented immigrants, the LGBTQ community, and pro-choice individuals, while the latter was most concerned with the plight of religious Christians, poor white people, and pro-life individuals.

Each participant was asked to read a statement: “I think it is important for us to have a national conversation about” one of six groups. These groups were referred to using either politically-correct (e.g. ‘undocumented immigrants’) or incorrect terms (e.g. ‘illegal aliens’).

Unsurprisingly, when the participant felt sympathy for the group in question and was presented with a politically incorrect term — such as conservatives with ‘white trash’ or liberals with ‘illegal aliens’ — they didn’t view the language as particularly authentic, but as cold and uncaring. However, when presented with a politically-correct term for a group they did feel sympathy towards, they viewed it as authentic. On the flip-side, people also tended to rate politically incorrect language as more authentic when applied to groups they didn’t feel sympathy towards — such as liberals with ‘white trash’ or conservatives with ‘illegal aliens’.

But, and this is a very important ‘but’ in my opinion, there weren’t any divides between political groups in liking politically correct speech. Liberals and conservatives were equally supportive of it as long as it applied to groups they felt sympathy towards — and equally against it when it didn’t.

I feel the findings give us ample reason to pause and reflect on our own biases. Language does have power, and the way we use it speaks volumes about where our particular interests and sympathies lie. But at the same time, understanding that there are certain things we want to hear, and that this changes our perception of the ones saying them and the way they say it, is an important part of becoming responsible citizens for our countries.

The use of politically correct language can stem from genuine care and concern, just as much as it can from a desire to fake that care for brownie points. Politically incorrect language can come from one’s inner strength and willingness to state their mind regardless of society’s unspoken rules, but it can equally be used to deceive and appear no-nonsense when one is, in fact, callous and uncaring. It could go on to explain why considerate politicians can be perceived as weak, or why those downright rude and disrespectful can have the veneer of strength.

Perhaps, in this light, we should be most wary of those who tell us what we want to hear, the way we want to hear it. At the same time, it can help us understand that those we perceive as opposing our views and beliefs aren’t ‘out to get us’ — they literally see a different intent behind the same words, just as we do. Working together, then, doesn’t start with changing their minds, but with checking our own biases, and seeing which ones we truly believe in.

Back to the study at hand, the team explains that their findings showcase how the use of language can help shape other’s perceptions of us. Politically correct language can make us seem warmer but more calculated and thus less sincere. Politically incorrect language can make us look more honest, but colder and more callous — it all depends on what your conversation mates want to hear.

The paper “Tell it like it is: When politically incorrect language promotes authenticity” has been published in the Journal of Personality and Social Psychology.

Very young children use both sides of the brain to process language, while adults only use half

New research could uncover why children recover more easily from neural injury compared to adults.

Examples of individual activation maps in each of the age groups in the study.
Image credits Elissa Newport.

Very young human brains use both hemispheres to process language, a new paper reports. The study used brain imaging to see which parts of infants’ and young children’s brains handle such tasks. According to the findings, the whole brain pitches in, rather than a single hemisphere as is the case for adults.

Whole-brain experience

“Use of both hemispheres provides a mechanism to compensate after a neural injury,” says lead author Elissa Newport, Ph.D, a neurology professor at Georgetown University. “For example, if the left hemisphere is damaged from a perinatal stroke—one that occurs right after birth—a child will learn language using the right hemisphere. A child born with cerebral palsy that damages only one hemisphere can develop needed cognitive abilities in the other hemisphere. Our study demonstrates how that is possible.”

Human adults almost universally process language in their left hemisphere, a phenomenon known as ‘lateralization’. This has been shown by previous studies using brain imaging, as well as by observations of patients who suffered a stroke in their left hemisphere and lost language abilities.

Very young children, however, don’t seem to do the same. Damage to either hemisphere of their brains is unlikely to result in language deficits, and they have been noted to recover language even after heavy damage to their left hemispheres. Why this happened, however, was unclear.

“It was unclear whether strong left dominance for language is present at birth or appears gradually during development,” explains Newport.

The team used functional magnetic resonance imaging (fMRI) to show that adult lateralization patterns aren’t established during our early days. Specific brain networks which cause lateralization are only complete at around 10 or 11 years of age, Newport adds.

The team worked with 39 children aged 4 through 13, and 14 adults (aged 18-29). Participants were given a sentence comprehension task and the researchers examined their patterns of brain activation as they worked. The fMRI data was recorded for each individual’s hemispheres separately and was then compared between four age groups: 4-6, 7-9, 10-13, and 18-29. The team also carried out a whole-brain analysis for all participants to see which areas were activated during language comprehension across ages.
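The paper’s exact metrics aren’t given in the article, but hemispheric comparisons of this kind are often expressed as a laterality index. A minimal sketch, assuming the standard (L - R)/(L + R) definition and made-up activation values chosen only to illustrate the developmental trend the study describes:

```python
def laterality_index(left_activation, right_activation):
    """Standard laterality index: +1 = fully left-lateralized,
    -1 = fully right-lateralized, 0 = perfectly bilateral."""
    return (left_activation - right_activation) / (left_activation + right_activation)

# Made-up activation magnitudes for a language task, per age group.
groups = {"4-6": (10.0, 8.0), "7-9": (10.0, 6.0), "10-13": (10.0, 4.0), "adults": (10.0, 2.0)}

for group, (left, right) in groups.items():
    print(group, round(laterality_index(left, right), 2))
# 4-6 0.11, 7-9 0.25, 10-13 0.43, adults 0.67
```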

As an overall group, the team reports, even young children showed left lateralization of the process. However, a large number of them also showed heavy activation in the right hemisphere, which was not seen in adults. This area of the brain is involved in processing the emotional content of conversation in adults, the team notes.

Newport says that “higher levels of right hemisphere activation in a sentence processing task and the slow decline in this activation over development are reflections of changes in the neural distribution of language functions and not merely developmental changes in sentence comprehension strategies.”

The authors believe that younger children would show even greater involvement of their right hemisphere in comprehending speech. They plan to further their research by studying the same processes in teenagers and young adults who had a major left hemisphere stroke at birth.

The paper “The neural basis of language development: Changes in lateralization over age” has been published in the journal Proceedings of the National Academy of Sciences.

‘Brain fossil’ suggests origin of human language may be 25 million years old

The human brain is specially equipped to process language but the building blocks for this biological machinery may have appeared as early as 25 million years ago in a distant ancestor, according to researchers at Newcastle University, UK. This discovery pushes back the origin of the human language pathway in the brain by at least 20 million years.

The ancient seed for millions of words

Every day, it seems like we learn that many traits we thought of as unique to humans are shared by other animals. Killer whales and dolphins have a culture (defined as the sum of a particular group’s characteristic ways of living), great apes and some monkeys understand and employ deception, New Caledonian crows manufacture and use tools, and there’s even evidence that captive chimpanzees are able to make moral judgments.

What about language? After all, we know of no other animal that can utter words, so it makes sense to draw the line and conclude that speech is exclusively tied to the human lineage (our species as well as extinct relatives like Neanderthals and Denisovans). That may be true, but that doesn’t mean non-human animals can’t ‘understand’ and use language.

Research suggests that primates, birds, cetaceans, dogs and other species are able, through extensive training, to understand human words and simple sentences. Take Kanzi, for instance, a male bonobo that was an infant when he started working with primatologist Sue Savage-Rumbaugh. After a couple of years of training with Savage-Rumbaugh, Kanzi could understand several thousand words and communicate using a special keyboard with over 400 visual symbols called lexigrams.

The ability to, at least partly, understand and process some of the components of language in many non-human animals suggests that this characteristic is shared at a fundamental level — and where else but in the brain.

‘Brain fossils’ and the origin of language

Previously, researchers found evidence that pointed towards a 5 million-year-old precursor of the language pathway in the brain in a common ancestor of both apes and humans.

But unlike bones, brains cannot fossilize, so how do neuroscientists make such inferences?

The only tool at their disposal is comparing the brains of living primates with those of humans, in order to infer what the brains of common ancestors might have looked like, through a process that resembles reverse engineering.

The researchers at Newcastle University, led by Chris Petkov, relied on an open database of brain scans performed by the international scientific community. They also generated their own original brain scans, which they duly shared.

Next, they directed their attention towards the auditory regions and brain pathways of humans, apes, and monkeys.

Their investigation revealed that a segment of the language pathway in the brain interconnects the auditory cortex with frontal lobe regions, which serve important roles in processing speech and language. This pathway seems to be at least 25 million years old.

“It is like finding a new fossil of a long lost ancestor. It is also exciting that there may be an older origin yet to be discovered still,” Petkov said in a press release.

“We predicted but could not know for sure whether the human language pathway may have had an evolutionary basis in the auditory system of nonhuman primates. I admit we were astounded to see a similar pathway hiding in plain sight within the auditory system of nonhuman primates.”

The study also revealed how the language pathway changed over evolutionary time in humans. It seems the left side of this human brain pathway grew stronger, while the right side diverged from the auditory evolutionary prototype to involve non-auditory parts of the brain. These characteristics are unique to humans and may be responsible for granting us speech.

“This discovery has tremendous potential for understanding which aspects of human auditory cognition and language can be studied with animal models in ways not possible with humans and apes. The study has already inspired new research underway including with neurology patients,” said Professor Timothy Griffiths, consultant neurologist at Newcastle University, UK, and co-author of the new study.

The findings appeared in the journal Nature Neuroscience.

Correction: an earlier version of the article stated that Kanzi the bonobo was 31 years old when he started learning lexigrams. He was, in fact, an infant at the time of doing so.

Language forms spontaneously, and fast

Languages can form spontaneously, and surprisingly fast, reports a new paper.

Image credits Bruno Glätsch.

Researchers at Leipzig University and the Max Planck Institute for Evolutionary Anthropology report that preschool children are able to form communication systems that share core properties of language. The team was studying the processes by which communication systems such as language developed in the past.

Say what?

“We know relatively little about how social interaction becomes language,” says Manuel Bohn, Ph.D., of Leipzig University’s Research Center for Early Child Development and lead author of the study.

“This is where our new study comes in.”

People love to communicate — there are over 7,000 languages in use today according to Ethnologue. Just under half of them have few speakers remaining, but it does go to show how versatile people are at using speech to convey information.

Still, the processes through which languages form are up for debate. While they’re believed to have formed over millennia, we’ve also seen deaf strangers spontaneously form a new sign language, the Nicaraguan Sign Language (NSL), blisteringly fast. The team notes that children developed the NSL, but exactly how they went about it wasn’t documented. So, they set about finding out.

They attempted to recreate the process in a series of experiments with children from Germany and the US. The children were placed in two different rooms and provided with a Skype connection to communicate. Their task was to describe an image with different motifs to a partner in a coordination game. In the beginning, these were simple images showing concrete objects, such as a fork. As the game progressed, the images became more and more abstract and complex — a blank card, for example.

In order to prevent the children from falling back on a known language, the team allowed them a brief interval to familiarize themselves with the set-up and their partner, and then muted the conversation. The researchers then tracked the different ways the children communicated.

The children figured out pretty quickly that concrete objects can be conveyed by mimicking their corresponding action — eating to represent a fork, for example. The more abstract images, especially the blank paper showing nothing, were much harder to describe. The team notes how two of the participants managed to establish a gesture to convey the concept:

“The sender first tried all sorts of different gestures, but her partner let her know that she did not know what was meant,” explains Dr. Greg Kachel, the study’s second author. “Suddenly our sender pulled her T-shirt to the side and pointed to a white dot on her coloured T-shirt,” representing the card with the colors on her clothes.

Gesture language

Image via Pixabay.

When the two children switched roles later on in the experiment, the transmitter didn’t have white on her clothes but used the same approach. When she pulled her own t-shirt to the side and pointed to it, “her partner knew what to do,” Kachel adds. In effect, they had established a gestured ‘word’ for an abstract concept.

Over the course of the study, the children developed more complex gestures for the images they were given. When describing an interaction between two animals, for example, they first established individual signs for individual actors and then started combining them. The team notes that this works similarly to a very limited grammatical structure.

All in all, the team believes that people first established references for actions and objects using gestures that resembled them. Individual partners involved in dialogue would coordinate by imitating each other, so that they used the same signs for the same things. Eventually, this interpersonal meaning would spread to the group at large (as everybody mingled and coordinated), gaining conventional meaning. I personally find this tidbit very fascinating, especially in relation to pictorial scripts, be they ancient Egyptian hieroglyphs or save icons.

Over time, the relationship between the sign and the concept itself weakens, allowing for signs to describe more abstract or more specific concepts. As more complex information needs to be conveyed, layers of grammatical structures are gradually introduced.

Among the key findings of this study: partners need a common pool of experience and interaction in order to start communicating, and once that prerequisite is satisfied, the process can take as little as 30 minutes.

It also goes to show that while we think of language as being formed by words, communication can happen without them. When people can’t talk to one another for some reason, they’ll find other ways to convey information with surprising gusto. Spoken language likely formed following the same steps, however, and was preferred as the fastest and most effective way of transmitting a message.

“It would be very interesting to see how the newly invented communication systems change over time, for example when they are passed on to new ‘generations’ of users,” Bohn says. “There is evidence that language becomes more systematic when passed on.”

The paper “Young children spontaneously recreate core properties of language in a new modality” has been published in the journal Proceedings of the National Academy of Sciences.

Men are more likely than women to use more abstract language

Although they’re much more similar than different, men and women do have diverse ways of speaking, thinking and communicating overall. People will often point out that women tend to speak about specifics (concrete language) while men speak about the bigger picture, focusing on a goal (abstract language). A new study seems to confirm this anecdote.

Credit: Pixabay.

Researchers, led by Priyanka Joshi of San Francisco State University, studied the differences in communication styles between men and women by examining over 600,000 blog posts published on Blogger.com.

The study involved examining linguistic patterns in the content by rating abstractness for approximately 40,000 words in the English language. For instance, words that are easily visualized, such as “vehicle” or “stairs”, were given a low rating for abstractness, while words that are more difficult to visualize, such as “justice” or “love”, were given a high rating. This study showed that men employed more abstract language in their communication than women.
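
To make the method concrete, here is a minimal Python sketch of how concreteness-based scoring of a text could work. The tiny lexicon, the 1-to-5 scale, and the mean_abstractness helper are illustrative placeholders, not the actual ratings or code used by Joshi and colleagues.

# Illustrative sketch: score a text by the average abstractness of its rated words.
# The ratings below are made-up placeholders (1 = concrete, 5 = abstract); the study
# relied on human ratings for roughly 40,000 English words.
ABSTRACTNESS = {
    "vehicle": 1.2, "stairs": 1.4, "table": 1.3,   # easy to visualize, low rating
    "justice": 4.6, "love": 4.4, "freedom": 4.7,   # hard to visualize, high rating
}

def mean_abstractness(text):
    """Average abstractness of the rated words appearing in a text."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [ABSTRACTNESS[w] for w in words if w in ABSTRACTNESS]
    return sum(scores) / len(scores) if scores else None

print(mean_abstractness("The vehicle stopped by the stairs."))       # low score: concrete wording
print(mean_abstractness("Justice and love matter more than ever."))  # high score: abstract wording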

In a second study, the researchers put this hypothesis further to the test by analyzing transcripts from U.S. Congressional sessions from 2001 to 2017. Overall, they analyzed over 500,000 transcripts delivered by more than 1,000 members of Congress. Again, men tended to use significantly more abstract language in their communication. This was true regardless of the Congress members’ political affiliation.

The researchers believe this mismatch in communication style may have something to do with power dynamics. Historically, men have had more power in society, which may explain why they tend to use more abstract wording.

In a follow-up study that involved 300 students, the authors investigated this hypothesis by manipulating the power dynamics in an interpersonal setting. Each participant had to play the role of either an interviewer or interviewee. They then had to describe certain behaviors. Those in the high-status interviewer position tended to use more abstract language than the lower-status interviewees.

This suggests that the differences in communication styles between men and women may be contextual rather than a fixed tendency.

The findings were reported in the Journal of Personality and Social Psychology.

Old World primates can only use two ‘words’ at a time, new research suggests

Old World monkeys can use sentences — but only two words long.

Image via Pixabay.

New research from MIT reports that Old World monkeys can combine two vocalizations into a single sentence. However, they’re unable to freely recombine language elements as we do.

Pyow-hack

“It’s not the human system,” says Shigeru Miyagawa, an MIT linguist and co-author of a new paper detailing the study’s findings. “The two systems are fundamentally different.”

Along with Esther Clarke, an expert in primate vocalization, who is a member of the Behavior, Ecology, and Evolution Research (BEER) Center at Durham University in the U.K., Miyagawa re-evaluated recordings of Old World monkeys, including baboons, macaques, and the proboscis monkey.

The language of some of these species has been studied in the past, and different species have different kinds of alarm calls for each type of predator. Vervet monkeys have specific calls when they see leopards, eagles, and snakes, for example, because each predator requires different kinds of evasive action. Similarly, tamarin monkeys have one alarm call to warn of aerial predators and one to warn of ground-based predators.

These primates seem able to combine such calls to create a more complex message. The putty-nosed monkey of West Africa has a general alarm call that sounds like “pyow,” and a specific alarm call warning of eagles, “hack.” However, sometimes they will use “pyow-hack” in longer or shorter sequences to warn the group that danger is imminent.

In the paper, Miyagawa and Clarke contend that the monkeys’ ability to combine these terms means they are merely deploying a “dual-compartment frame” which lacks the capacity for greater complexity. The findings, the authors explain, showcase an important difference in cognitive ability between humans and some of our closest relatives.

They explain that these combined calls always start with “pyow”, end with “hack” and that the terms are never alternated. Although the animals do vary the length of the call, the authors say that their language lacks a “combinatorial operation” (the process that allows our brains to arrange individual words into functional sentences). It is only the length of the “pyow-hack” sequence that indicates how far the monkeys will run.

“The putty-nose monkey’s expression is complex, but the important thing is the overall length, which predicts behavior and predicts how far they travel,” Miyagawa says. “They start with ‘pyow’ and end up with ‘hack.’ They never go back to ‘pyow.’ Never.”
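
To make the contrast with human syntax concrete, here is a small illustrative Python sketch (not from the paper) of what such a “dual-compartment frame” amounts to: every “pyow” must come before every “hack”, and the only thing that varies is the length of each block.

def is_dual_frame(call):
    """True if the call is some number of 'pyow's followed by some number of 'hack's."""
    units = call.split()
    if not all(u in ("pyow", "hack") for u in units):
        return False
    first_hack = next((i for i, u in enumerate(units) if u == "hack"), len(units))
    # After the first 'hack', the sequence never returns to 'pyow'.
    return all(u == "hack" for u in units[first_hack:])

print(is_dual_frame("pyow pyow pyow hack hack"))  # True: a valid, longer warning
print(is_dual_frame("pyow hack pyow"))            # False: the order is never alternated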

Campbell’s monkey, a species found in West Africa, uses calls that are reminiscent of a human-style combination of sounds. The team explains that these monkeys also use a two-item system, adding an “-oo” sound to turn specific calls into generalized aerial or land alarms.

Miyagawa also notes that when the Old World monkeys speak, they seem to use a part of the brain known as the frontal operculum. Human language is heavily associated with Broca’s area, a part of the brain that seems to support more complex operations. The authors propose that humans’ ability to tap Broca’s area for language may be what enabled speech as we know it today.

“It seems like a huge leap,” Miyagawa says. “But it may have been a tiny [physiological] change that turned into this huge leap.”

The paper “Systems Underlying Human and Old World Monkey Communication: One, Two, or Infinite” has been published in the journal Frontiers in Psychology.

African green monkeys howling at drones teach us about the evolution of language

West African green monkey (Chlorocebus sabaeus) in Senegal. Credit: Julia Fischer.

West African green monkeys produced a new alarm call when faced with a novel aerial threat — a drone flown by German researchers at the Leibniz Institute for Primate Research. The meaning of this call was learned very quickly by other monkeys. However, the alarm call was not the green monkeys’ own but rather belonged to the closely related East African vervet monkey, which uses it to warn against aerial predators like eagles. Taken together, these findings suggest that the call’s structure was determined many generations ago and has been conserved even though the two species have since diverged evolutionarily.

‘Watch out for eagles’

East African vervet monkeys (Chlorocebus pygerythrus) produce different alarm calls for each of their main predators: leopards, snakes, and eagles. What’s more, each of these calls prompts a unique behavior. When nearby monkeys hear the call for “leopard”, they quickly climb a tree; when “snake” is called, they stand motionless on two legs; and when they hear the call for “eagle”, they search the sky and look for a place to hide.

West African green monkey (Chlorocebus sabaeus) in Senegal. Credit: Julia Fischer.

Their cousins in Senegal, the West African green monkeys (Chlorocebus sabaeus), exhibit similar behavior. But while they emit similar calls for leopards and snakes, they have none at all for eagles.

In a new study, Julia Fischer and colleagues at the German Primate Center introduced West African green monkeys at a research station in Simenti, Senegal, to an aerial drone. The drone was flown once over 80 monkeys at a height of about 60 meters. Researchers also recorded the sound of the drones and played back the recordings to 16 animals.

When the monkeys heard the played back recordings of the drone, they scanned the sky or ran away, suggesting that the monkeys had immediately learned what the sound indicated.

The aim of the experiments was to see how quickly the animals learned to recognize the drones as a threat. Much to everyone’s surprise, the green monkeys made calls resembling the calls that East African vervet monkeys utter when they detect eagles.

“The animals quickly learned what the previously unknown sounds mean and remembered this information,” said Fischer, who is the head of the Cognitive Ethology Laboratory at the German Primate Center and lead author of the study. “This shows their ability for auditory learning.”

Although the green monkeys had never encountered this kind of aerial threat before, the structure of their alarm call for drones was almost identical to that of East African vervet monkeys. Fischer says this must mean that the vocalization is deeply rooted in the evolution of vervet monkeys.

“Our findings support the view of a fundamental dichotomy in the degree of flexibility in vocal productions versus the comprehension of calls. Collectively, these studies indicate that the emergence of auditory learning abilities preceded the evolution of flexible vocal production,” the authors wrote in the journal Nature Ecology & Evolution.

New research sheds light into how our brains handle metaphors

Your brain can read the lines, and it can read between the lines, but it does both using the same neurons.

Fried CD.

Image credits Chepe Nicoli.

While we can consciously tell when a word is being used literally or metaphorically, our brains process it just the same. The findings come from a new study by University of Arizona researcher Vicky Lai, which builds on previous research by looking at when, exactly, different regions of the brain are activated in metaphor comprehension.

Twisting our words

“Understanding how the brain approaches the complexity of language allows us to begin to test how complex language impacts other aspects of cognition,” she said.

People use metaphors all the time. On average, we sneak one in once every 20 words, says Lai, an assistant professor of psychology and cognitive science at the UA. As director of the Cognitive Neuroscience of Language Laboratory in the UA Department of Psychology, she is interested in how the brain distinguishes metaphors from the broad family of language, and how it processes them.

Previous research has hinted that our ability to understand metaphors may be rooted in bodily experiences. Functional brain imaging (fMRI) studies, for example, have indicated that hearing a metaphor such as “a rough day” activates regions of the brain associated with the sense of touch. Hearing that someone is “sweet”, meanwhile, activates taste areas, whereas “grasping a concept” lights up brain regions involved in motor perception and planning.

In order to get to the bottom of things, Lai used EEG (electroencephalography) to record the electrical patterns in the brains of participants who were presented with metaphors that contained action words — like “grasp the idea” or “bend the rules.” The participants were shown three different sentences on a computer screen, presented one word at a time. One of these sentences described a concrete action — “The bodyguard bent the rod.” Another was a metaphor using the same verb — “The church bent the rules.” The third sentence replaced the verb with a more abstract word that kept the metaphor’s meaning — “The church altered the rules.”

Seeing the word “bent” elicited a similar response in participants’ brains whether it was used literally or metaphorically. Their sensory-motor regions activated almost immediately — within 200 milliseconds of the verb appearing on screen. A different response, however, was elicited when “bent” was replaced with “altered.”

Lai says her work supports previous findings from fMRI (functional magnetic resonance imaging) studies. However, while fMRI measures blood flow in the brain as a proxy for neural activity, the EEG measures electrical activity directly. Thus, it provides a clearer picture of the role sensory-motor regions of the brain play in metaphor comprehension, she explains.

“In an fMRI, it takes time for oxygenation and deoxygenation of blood to reflect change caused by the language that was just uttered,” Lai said. “But language comprehension is fast — at the rate of four words per second.”

“By using the brainwave measure, we tease apart the time course of what happens first,” Lai said.

While fMRI can’t tell you when a given brain region kicks in to decipher an action-based metaphor (it can’t distinguish regions that activate immediately from those that activate only after we already understand the metaphor), EEG provides a much more precise sense of timing. The near-immediate activation of sensory-motor areas after the verb was displayed suggests that these areas of the brain are key to metaphor comprehension.

Lai recently presented ongoing research looking into how metaphors can aid learning and retention of science concepts at the annual meeting of the Cognitive Neuroscience Society in San Francisco. She hopes the study we’ve discussed today will help her lab better understand how humans comprehend language and serve as a base for her ongoing and future research.

The paper “Concrete processing of action metaphors: Evidence from ERP” has been published in the journal Brain Research.

A new study estimates English only takes about 1.5 megabytes of your brainspace

New research says all the language in your head takes up as much space as a picture would on a hard drive — about 1.5 megabytes.

Floppy disk.

All the English in your brain would probably fit on one of these, it seems.
Image via Pixabay.

A team of researchers from the University of Rochester and the University of California estimates that all the data your brain needs to encode language — at least in the case of English — only adds up to around 1.5 megabytes. The team reached this figure by applying information theory to add up the amount of data needed to store the various parts of the English language.

Quick download

We learn how to speak by listening to those around us as infants. We don’t yet have a clear idea of how this process takes place, but we do know that it’s not a simple case of storing words alongside their definitions as you’d see in a dictionary. This is suggested by the way our minds handle words and concepts — for example, by forming associative clues between the concept of flight and the words “bird,” “wing,” or even “robin.” Our brains also store the pronunciation of words, how to physically create the sound as we speak, or how words interact with and are used with other words.

In an effort to map out how much ‘space’ this information takes up in our brain, the authors worked to convert all of the ways our brain might store a language into data amounts. To do so, they turned to information theory, a branch of mathematics that deals with how information is encoded via sequences of symbols.

The researchers assigned a quantifiable size estimate to each aspect of English. They began with phonemes — the sounds that make up spoken words — noting that humans use approximately 50 phonemes. Each phoneme, they estimate, would take around 15 bits to store.

Next came vocabulary. They used 40,000 as the number of words a typical person knows, which would translate into 400,000 bits of data (around 10 bits per word). Word frequency is also an important element in speech, one which the team estimated would take around 80,000 bits to ‘code’ in our brains. Syntax rules were allocated another 700 bits.

Semantics for those 40,000 words was the single largest contributor the team factored in: roughly 12 million bits. Semantics, boiled down, is the link between a word or a symbol and its meaning. The sounds that make up the words themselves were logged under ‘vocabulary’; this category basically represents the brain’s database of the meanings those sounds convey.

“It’s lexical semantics, which is the full meaning of a word. If I say ‘turkey’ to you, there’s information you know about a turkey. You can answer whether or not it can fly, whether it can walk,” says first author Frank Mollica at the University of Rochester in New York.

Adding it all up came to approximately 1.56 megabytes, which is surprisingly little: roughly the capacity of a floppy disk (the ‘save’ icon).

“I thought it would be much more,” Mollica agrees.

Keep in mind that these results are estimates, and the team applied their method only to English. Still, the result should be useful as a ballpark idea of how much space language acquisition takes up in our brains, and Mollica says the estimates are broad enough that they might carry over to other languages as well.
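
To see how these rounded figures add up to the headline number, here is a quick back-of-the-envelope tally in Python. It only reproduces the arithmetic quoted above; the category names and the script itself are illustrative, not the authors’ actual calculation.

# Back-of-the-envelope tally of the article's rounded estimates, in bits.
estimates = {
    "phonemes": 50 * 15,         # ~50 phonemes at ~15 bits each
    "vocabulary": 400_000,       # ~40,000 words at about 10 bits per word
    "word frequency": 80_000,
    "syntax": 700,
    "lexical semantics": 12_000_000,
}

total_bits = sum(estimates.values())
total_megabytes = total_bits / 8 / 1_000_000  # 8 bits per byte, 10^6 bytes per MB
print(f"{total_bits} bits is about {total_megabytes:.2f} MB")  # ~1.56 MB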

The paper “Humans store about 1.5 megabytes of information during language acquisition” has been published in the journal Royal Society Open Science.

Ancestral shift in diet may have changed human speech as well

Edge-to-edge bite of an ancient hunter-gatherer woman (left) vs. the overbite configuration seen in a Bronze Age male (right). Credit: Science.

When humans invented agriculture, the world changed forever. With a steady and predictable food supply, humans were free to diversify their labor and pursuits, effectively ushering in civilization as we know it. As we began cultivating cereals and raising livestock, our diets also changed, which altered our facial structure and led to less tooth wear. Now, a new study says that these biomechanical shifts may have allowed humans to produce new sounds such as “v” and “f”. In other words, language also changed along with diet.

Blame dairy for the “F” word

There are thousands of languages and dialects still spoken today, although many have only a handful of speakers left. These languages are not only generally mutually unintelligible, they can also differ radically in how the sounds that convey meaning are produced. Even so, most scholars believe that the biological machinery for producing human speech has remained largely unchanged since humans emerged hundreds of thousands of years ago.

A new study, however, suggests that language is more malleable to cultural influence — in this case, agriculture — than previously thought. In 1985, renowned linguist Charles Hockett claimed that hunter-gatherers would find it difficult to pronounce “f” and “v” sounds — which linguists call labiodentals — due to their jaw structure. Before the advent of agriculture, humans, like most other primates, had upper and lower teeth that met edge-to-edge, a result of their diet of hard food. When humans started eating softer foods like cheese, tooth wear became less pronounced and, as a result, more and more people kept an overbite into adulthood.

Steven Moran and colleagues at the University of Zurich put Hockett’s theory to the test by performing a complex statistical analysis of interdisciplinary evidence from linguistics, anthropology, and phonetics. A biomechanical computer model that mimics human speech showed that having an overbite allows humans to produce “f” and “v” sounds using 29% less energy than in an edge-to-edge configuration.

“In Europe, our data suggests that the use of labiodentals has increased dramatically only in the last couple of millennia, correlated with the rise of food processing technology such as industrial milling,” Moran said in a statement. “The influence of biological conditions on the development of sounds has so far been underestimated.”

The findings are compelling but they’re definitely not the last word on the matter. Human speech organs do not use all that much energy to begin with — not relative to movement, for instance. If energy expenditure played a very important role, difficult speech sounds would have been gradually sifted out. But this is not the case, since many languages employ difficult speech sounds, such as clicks in some languages native to southern Africa.

But the authors claim that although the probabilities of generating labiodentals accidentally are low, over generations these sounds could have become incorporated into language — and having a diet-induced overbite helps to improve the odds. In the future, the researchers believe that their method could be used to reconstruct how ancient written languages were spoken aloud.

“Our results shed light on complex causal links between cultural practices, human biology and language,” Balthasar Bickel, project leader and UZH professor, said in the press release. “They also challenge the common assumption that, when it comes to language, the past sounds just like the present.”

The findings appeared in the journal Science.

Activating a new language is easy — the effort goes in suppressing the old one

New research with speakers of English and American Sign Language (ASL) reveals the processes that go on in our brain when switching between languages.

Street signs with Latin and Cyrillic letters in Kirkenes, Norway.
Image credits Wikimedia.

It seems that our brain has to ‘boot up’ a language before we can start speaking it. Previous research has identified spikes in brain activity in areas associated with cognitive control (i.e., the prefrontal and anterior cingulate cortices) when this switch is performed. However, whether this activity was required to ‘activate’ a new language, turn a previous one off, or both, remained unknown. Now, a team of researchers has uncovered the mechanisms that underpin switching between different languages, a finding that provides new insights into the nature of bilingualism.

Speaking in tongues

“A remarkable feature of multilingual individuals is their ability to quickly and accurately switch back and forth between their different languages,” explains Esti Blanco-Elorrieta, a New York University (NYU) Ph.D. candidate and the lead author of the study. “Our findings help pinpoint what occurs in the brain in this process — specifically, what neural activity is exclusively associated with disengaging from one language and then engaging with a new one.”

The results showed that cognitive effort is required primarily when disengaging from one language — activating a new one, by comparison, comes virtually “cost-free from a neurobiological standpoint,” says senior author Liina Pylkkanen.

The biggest hurdle in this research effort was to separate the two processes, because they largely happen at the same time. For example, a Spanish-English bilingual participant would turn Spanish “off” and English “on” at the same time. To work around this issue, the team recruited participants fluent in English and American Sign Language (ASL) and asked them to name pictures shown on a screen.

Unlike most other language combinations, English and ASL can be produced simultaneously — and they often are. This dynamic gave the team the tool they needed to separate the language engagement and disengagement processes in the brain. They could ask the participants to go from using both languages to producing only one, to observe the process of turning a language ‘off’. Alternatively, participants could be asked to switch from using only one language to using both — giving the team a glimpse of the process of turning a language ‘on’.

In order to actually see what was going on in the participants’ brains, the team used magnetoencephalography (MEG), a technique that maps neural activity by recording magnetic fields generated by the electrical currents produced by our brain.

When the bilingual English-and-ASL participants switched between languages, deactivating a language led to increased activity in cognitive control areas. Turning a language ‘on’ was virtually indistinguishable from not switching, judging by brain activation levels, the team writes. In other words, little to no cognitive effort is required to activate a second language, be it spoken or signed language.

In fact, the team reports that when participants were asked to produce two words simultaneously (one sign and one spoken word), their brains showed roughly the same levels of activity as when they produced a single word. Most surprisingly, producing both at the same time elicited less activation than having to suppress the dominant language (in this case, English).

“In all, these results suggest that the burden of language-switching lies in disengagement from the previous language as opposed to engaging a new language,” says Blanco-Elorrieta.

The paper has been published in the journal Proceedings of the National Academy of Sciences.

From insects to whales, all sorts of animals take turns to communicate

Until not long ago, two-way communication was thought to be an exclusively human trait. But a new study shows just how flawed this idea was, reporting that many animals, from elephants to mosquitoes, employ turn-taking behavior when communicating among themselves. The findings might one day help scientists pinpoint the origin of human speech.

Marmosets (Callithrix jacchus) take 3000–5000 ms between turn-taking exchanges. Credit: Wikimedia Commons.

An international team of researchers reviewed the current scientific literature on animal communication. From the hundreds of studies that they analyzed spanning over 50 years of research, the authors found that the orderly exchange of communicative signals is far more common in the animal kingdom than anyone thought.

The challenge lay in piecing together the fragmented information from the various studies published thus far, whether the focus was on the chirps of birds or the whistles of dolphins. But once this task was complete, the researchers were impressed to learn just how complex animal communication really is.

Take timing, for instance, which is a key feature of communicative turn-taking in both humans and non-human animals. In some species of songbird, the latency between notes produced by two different birds is less than 50 milliseconds. On the other end of the spectrum, sperm whales exchange sequences of clicks with a gap of about two seconds between turns. We humans lie somewhere in the middle, producing utterances with gaps of around 200 milliseconds between turns.

Interestingly, we humans aren’t the only species that considers it rude to interrupt. The researchers found that both black-capped chickadees and European starlings practice so-called “overlap avoidance” during turn-taking communication. If an overlap occurred, individuals would go silent or fly away, a sign that overlapping may be seen as an unacceptable violation of the social rules of turn-taking.

Although temporal coordination in animal communication has attracted interest over several decades, no clear picture has yet emerged as to why individuals exchange signals, the researchers wrote.

It’s quite likely that this turn-taking behavior underlies the evolution of human speech, so a more systematic cross-species examination could yield striking results. The authors even offer a framework that might enable such a comparison.

“The ultimate goal of the framework is to facilitate large-scale, systematic cross-species comparisons,” Dr. Kobin Kendrick, from the University of York’s Department of Language and Linguistic Science and one of the authors of the new study, said in a statement.

“Such a framework will allow researchers to trace the evolutionary history of this remarkable turn-taking behavior and address longstanding questions about the origins of human language.”

The team included researchers from the Universities of York and Sheffield, the Max Planck Institute for Evolutionary Anthropology in Germany, and the Max Planck Institute for Psycholinguistics in the Netherlands.

“We came together because we all believe strongly that these fields can benefit from each other, and we hope that this paper drives more cross-talk between human and animal turn-taking research in the future,” said Dr. Sonja Vernes, from the Max Planck Institute for Psycholinguistics.

Scientific reference: Taking turns: Bridging the gap between human and animal communication, Proceedings of the Royal Society B.

Australian magpies can understand what other birds are ‘saying’ with surprising clarity

Australian magpies can understand the warning calls of other birds, a new study suggests.

An Australian magpie feeding.

Image credits Toby Hudson.

Despite their name, Australian magpies (Gymnorhina tibicen) aren’t actually very magpie-y. They’re part of a separate family of birds indigenous to Australia, Southern Asia, and the Indo-Pacific, while true magpies belong to the crow family (Corvidae).

However, the magpies down under seem to have an ace up their wings that should allow them to fit right in with their European counterparts. New research showed the birds can understand signals of at least one other species, the noisy miner, suggesting they could learn to interpret other species as well.

Orange balls and noisy miners

Noisy miners (Manorina melanocephala, not the profession) are a native species of birds that share their ecosystem with the Australian magpie. This small bird of the honeyeater family uses different calls to warn its peers of incoming predators. One characteristic that’s been especially useful for the team is that the noisy miners employ different warning calls for airborne and ground-based predators.

By playing recordings of both calls to wild magpies, the team observed that these could understand the meaning behind the noisy miners’ warnings.

The study took place in four locations in Canberra, including the Australian National University campus and parks in Turner. The researchers lured unsuspecting wild magpies with, funnily enough, grated cheese — then played the recorded calls back to them and filmed the results. As a control, the researchers used a large orange ball. They’d either roll it towards the magpies, to gauge their response to ground threats, or throw it into the air, to see how the birds reacted to airborne predators. Which must have been hilarious to witness.

Over 30 adult wild magpies had their reactions videotaped twice, while 9 individuals simply flew away.

B marks the base of the beak and T the tip. Tracker software was used to obtain coordinates for both B and T for every video frame, and researchers used this to calculate the change in beak angle.
Image credits Branislav Igic/Australian National University.

The team reports that the Australian magpie’s typical response to perceived threats is a tilting of the beak: the birds showed an average maximum beak angle of 29 degrees for the thrown ball, and an average maximum of 9 degrees when it was rolled. The miners’ aerial warning calls prompted an average maximum beak angle of 31 degrees, while the ground warning prompted an average of 24.

It may seem like useless trivia, but the reason why the team measured these angles is quite central to the research. They wanted to determine not just whether the magpies use the miners’ calls as danger warnings — the authors wanted to see if the magpies can understand what kind of danger each call signaled. The control test proved that magpies will aim their beaks towards the expected elevation of a threat. The second round of tests suggests that they can indeed discern the meaning of the miners’ calls, as the magpies consistently aimed their beaks higher for aerial warning calls than they did for terrestrial warning signals.
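
As a rough illustration of the measurement described in the figure caption above, the Python sketch below computes a beak angle from tracked base (B) and tip (T) coordinates in each video frame. The coordinate values and the beak_angle helper are hypothetical and assume simple 2-D pixel positions with y increasing upward; they are not the researchers’ actual tracking pipeline.

import math

# Hypothetical tracked coordinates per frame: (Bx, By, Tx, Ty),
# where B is the base of the beak and T the tip.
frames = [
    (100, 50, 140, 50),   # beak held level
    (100, 50, 138, 62),   # beak tilted upward
    (100, 50, 135, 71),   # tilted further up
]

def beak_angle(bx, by, tx, ty):
    """Angle of the beak (B -> T) above the horizontal, in degrees."""
    return math.degrees(math.atan2(ty - by, tx - bx))

angles = [beak_angle(*frame) for frame in frames]
max_change = max(angles) - angles[0]  # change relative to the first frame
print(f"maximum beak-angle change: {max_change:.1f} degrees")  # ~31 degrees for this made-up data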

“A lot of birds around the world have been shown to respond to a degree of threat, but this is a little bit more nuanced,” says co-author Dominique Potvin. “We’re not looking at ‘if you scream louder does that mean more danger and you hide’. This is a very particular sound that indicates the spatial location of something. For the magpies to actually hone in on that is pretty new.”

Speaking in beaks

The team writes that Australian magpies and noisy miners face the same types of predators: brown goshawks, peregrine falcons, and boobook owls from the skies, and foxes, cats, dogs, and snakes on the ground. The two species also frequently share the same ecosystems. But the magpies spend most of their time on the ground looking for food, while noisy miners, completely out of character with their name, like to perch up in trees.

The team believes that listening in to the latter’s warnings gave the magpies an edge against predators. So far, they’re the only species that we know of with this ability.

“It pays for the magpie to pay attention to somebody who has a better view of predators than they do,” Potvin explains. “Magpies are a pretty smart group. We’re not sure if they’re learning this from other magpies or if they’re figuring it out on their own, but the ability is there. We don’t think this would be isolated to Canberra populations.”

Just to make sure the birds weren’t reacting to the sound alone, the team also played a third call: the generic, non-warning call of a crimson rosella (Platycercus elegans), a parrot native to eastern and southeastern Australia. The magpies showed no response to this call.

The paper “Birds orient their heads appropriately in response to functionally referential alarm calls of heterospecifics” has been published in the journal Animal Behaviour.