Language forms spontaneously, and fast

Languages can form spontaneously, and surprisingly fast, reports a new paper.

Image credits Bruno Glätsch.

Researchers at Leipzig University and the Max Planck Institute for Evolutionary Anthropology report that preschool children are able to form communication systems that share core properties of language. The team was studying the processes by which communication systems such as language may have developed in the past.

Say what?

“We know relatively little about how social interaction becomes language,” says Manuel Bohn, Ph.D., of Leipzig University’s Research Center for Early Child Development and lead author of the study.

“This is where our new study comes in.”

People love to communicate — there are over 7,000 languages in use today, according to Ethnologue. Just under half of them have only a few speakers remaining, but the sheer number goes to show how versatile people are at using speech to convey information.

Even so, the processes through which languages form are still up for debate. While they’re believed to have formed over millennia, we’ve also seen deaf strangers spontaneously form a new sign language, Nicaraguan Sign Language (NSL), blisteringly fast. The team notes that children developed NSL, but exactly how they went about it wasn’t documented. So, the researchers set about finding out.

The team attempted to recreate the process in a series of experiments with children from Germany and the US. The children were invited into two different rooms and provided with a Skype connection to communicate. Their task was to describe images with different motifs to a partner in a coordination game. In the beginning, these were simple images showing concrete objects, such as a fork. As the game progressed, the images became more and more abstract and complex — a blank card, for example.

To prevent the children from falling back on known language, the team allowed them a brief interval to familiarize themselves with the set-up and their partner, and then muted the connection. The researchers then tracked the different ways the children communicated.

The children figured out pretty quickly that concrete objects can be conveyed by mimicking their corresponding action — miming eating to represent a fork, for example. The more abstract images, especially the blank card showing nothing, were much harder to describe. The team notes how two of the participants managed to establish a gesture to convey the concept:

“The sender first tried all sorts of different gestures, but her partner let her know that she did not know what was meant,” explains Dr. Greg Kachel, the study’s second author. “Suddenly our sender pulled her T-shirt to the side and pointed to a white dot on her coloured T-shirt,” using the white dot on her clothes to stand in for the blank card.

Gesture language

Image via Pixabay.

When the two children switched roles later in the experiment, the new sender didn’t have any white on her clothes but used the same approach. When she pulled her own T-shirt to the side and pointed to it, “her partner knew what to do,” Kachel adds. In effect, the pair had established a gestured ‘word’ for an abstract concept.

Over the course of the study, the children developed more complex gestures for the images they were given. When describing an interaction between two animals, for example, they first established individual signs for individual actors and then started combining them. The team notes that this works similarly to a very limited grammatical structure.

All in all, the team believes that people first established references for actions and objects using gestures that resembled them. Partners in a dialogue would coordinate by imitating each other, so that they used the same signs for the same things. Eventually, this interpersonal meaning would spread to the group at large (as everybody mingled and coordinated), gaining conventional status. I personally find this tidbit very fascinating, especially in relation to pictorial scripts, be they ancient Egyptian hieroglyphs or save icons.

Over time, the relationship between the sign and the concept itself weakens, allowing for signs to describe more abstract or more specific concepts. As more complex information needs to be conveyed, layers of grammatical structures are gradually introduced.

Among the study’s key findings are that partners need a common pool of experience and interaction in order to start communicating, and that, once this prerequisite is satisfied, the process can take place remarkably fast: in as little as 30 minutes.

It also goes to show that while we think of language as being built from words, communication can happen without them. When people can’t talk to one another for some reason, they’ll find other ways to convey information with surprising gusto. Spoken language likely formed following the same steps, and came to be preferred as the fastest and most effective way of transmitting a message.

“It would be very interesting to see how the newly invented communication systems change over time, for example when they are passed on to new ‘generations’ of users,” Bohn says. “There is evidence that language becomes more systematic when passed on.”

The paper “Young children spontaneously recreate core properties of language in a new modality” has been published in the journal Proceedings of the National Academy of Sciences.

Old World primates can only use two ‘words’ at a time, new research suggests

Old World monkeys can use sentences — but only two words long.

Image via Pixabay.

New research from MIT reports that Old World monkeys can combine two vocalizations into a single sentence. However, they’re unable to freely recombine language elements as we do.


“It’s not the human system,” says Shigeru Miyagawa, an MIT linguist and co-author of a new paper detailing the study’s findings. “The two systems are fundamentally different.”

Along with Esther Clarke, an expert in primate vocalization at the Behavior, Ecology, and Evolution Research (BEER) Center at Durham University in the U.K., Miyagawa re-evaluated recordings of Old World monkeys, including baboons, macaques, and the proboscis monkey.

The vocalizations of some of these species have been studied in the past, and different species have distinct alarm calls for each type of predator. Vervet monkeys, for example, have specific calls for leopards, eagles, and snakes, because each predator requires a different kind of evasive action. Similarly, tamarin monkeys have one alarm call to warn of aerial predators and another to warn of ground-based ones.

These primates seem able to combine such calls to create a more complex message. The putty-nosed monkey of West Africa has a general alarm call that sounds like “pyow,” and a specific alarm call warning of eagles, “hack.” However, sometimes they will use “pyow-hack” in longer or shorter sequences to warn the group that danger is imminent.

In the paper, Miyagawa and Clarke contend that the monkeys’ ability to combine these terms means they are merely deploying a “dual-compartment frame” which lacks the capacity for greater complexity. The findings, the authors explain, showcase an important difference in cognitive ability between humans and some of our closest relatives.

They explain that these combined calls always start with “pyow”, end with “hack” and that the terms are never alternated. Although the animals do vary the length of the call, the authors say that their language lacks a “combinatorial operation” (the process that allows our brains to arrange individual words into functional sentences). It is only the length of the “pyow-hack” sequence that indicates how far the monkeys will run.

“The putty-nose monkey’s expression is complex, but the important thing is the overall length, which predicts behavior and predicts how far they travel,” Miyagawa says. “They start with ‘pyow’ and end up with ‘hack.’ They never go back to ‘pyow.’ Never.”

Campbell’s monkey, a species found in West Africa, uses calls that are reminiscent of a human-style combination of sounds. The team explains that these monkeys also use a two-item system, adding an “-oo” sound to turn specific calls into generalized aerial or land alarms.

Miyagawa also notes that when Old World monkeys vocalize, they seem to use a part of the brain known as the frontal operculum. Human language, by contrast, is heavily associated with Broca’s area, a part of the brain that seems to support more complex operations. The authors propose that humans’ ability to tap Broca’s area for language may be what enabled speech as we know it today.

“It seems like a huge leap,” Miyagawa says. “But it may have been a tiny [physiological] change that turned into this huge leap.”

The paper “Systems Underlying Human and Old World Monkey Communication: One, Two, or Infinite” has been published in the journal Frontiers in Psychology.

Activating a new language is easy — the effort goes in suppressing the old one

New research with speakers of English and American Sign Language (ASL) reveals the processes that go on in our brain when switching between languages.

Street signs with Latin and Cyrillic letters in Kirkenes, Norway.
Image credits Wikimedia.

It seems that our brain has to ‘boot up’ a language before we can start speaking it. Previous research has identified spikes in brain activity in areas associated with cognitive control (i.e., the prefrontal and anterior cingulate cortices) when this switch is performed. However, whether this activity was required to ‘activate’ a new language, turn a previous one off, or both, remained unknown. Now, a team of researchers has uncovered the mechanisms that underpin switching between different languages, a finding that provides new insights into the nature of bilingualism.

Speaking in tongues

“A remarkable feature of multilingual individuals is their ability to quickly and accurately switch back and forth between their different languages,” explains Esti Blanco-Elorrieta, a New York University (NYU) Ph.D. candidate and the lead author of the study. “Our findings help pinpoint what occurs in the brain in this process — specifically, what neural activity is exclusively associated with disengaging from one language and then engaging with a new one.”

The results showed that cognitive effort is required primarily when disengaging from one language — activating a new one, by comparison, comes virtually “cost-free from a neurobiological standpoint,” says senior author Liina Pylkkänen.

The biggest hurdle in this research effort was to separate the two processes, because they largely happen at the same time: a Spanish-English bilingual participant, for example, would turn Spanish “off” and English “on” simultaneously. To work around this issue, the team recruited participants fluent in both English and American Sign Language (ASL) and asked them to name pictures shown on a screen.

Unlike most pairs of spoken languages, English and ASL can be produced at the same time — and they often are. This dynamic gave the team the tool they needed to separate the language engagement and disengagement processes in the brain. They could ask the participants to go from speaking in both languages to producing only one, to observe the process of turning a language ‘off’. Alternatively, participants could be asked to switch from producing only one language to producing both — giving the team a glimpse of the process of turning a language ‘on’.

In order to actually see what was going on in the participants’ brains, the team used magnetoencephalography (MEG), a technique that maps neural activity by recording magnetic fields generated by the electrical currents produced by our brain.

When the bilingual English-and-ASL participants switched between languages, deactivating a language led to increased activity in cognitive control areas. Turning a language ‘on’ was virtually indistinguishable from not switching, judging by brain activation levels, the team writes. In other words, little to no cognitive effort is required to activate a second language, be it spoken or signed.

In fact, the team reports that when participants were asked to produce two words simultaneously (one signed and one spoken), their brains showed roughly the same levels of activity as when they produced only one word. Most surprisingly, producing both at the same time elicited less activation than suppressing the dominant language (in this case, English).

“In all, these results suggest that the burden of language-switching lies in disengagement from the previous language as opposed to engaging a new language,” says Blanco-Elorrieta.

The paper has been published in the journal Proceedings of the National Academy of Sciences.