Tag Archives: Words

Old World primates can only use two ‘words’ at a time, new research suggests

Old World monkeys can use sentences — but only two words long.

Image via Pixabay.

New research from MIT reports that Old World monkeys can combine two vocalizations into a single sentence. However, they’re unable to freely recombine language elements as we do.

Pyow-hack

“It’s not the human system,” says Shigeru Miyagawa, an MIT linguist and co-author of a new paper detailing the study’s findings. “The two systems are fundamentally different.”

Working with Esther Clarke, an expert in primate vocalization and a member of the Behavior, Ecology, and Evolution Research (BEER) Center at Durham University in the U.K., Miyagawa re-evaluated recordings of Old World monkeys, including baboons, macaques, and the proboscis monkey.

The language of some of these species has been studied in the past, and different species have different kinds of alarm calls for each type of predator. Vervet monkeys have specific calls when they see leopards, eagles, and snakes, for example, because each predator requires different kinds of evasive action. Similarly, tamarin monkeys have one alarm call to warn of aerial predators and one to warn of ground-based predators.

These primates seem able to combine such calls to create a more complex message. The putty-nosed monkey of West Africa has a general alarm call that sounds like “pyow,” and a specific alarm call warning of eagles, “hack.” However, sometimes they will use “pyow-hack” in longer or shorter sequences to warn the group that danger is imminent.

In the paper, Miyagawa and Clarke contend that the monkeys’ ability to combine these terms means they are merely deploying a “dual-compartment frame” which lacks the capacity for greater complexity. The findings, the authors explain, showcase an important difference in cognitive ability between humans and some of our closest relatives.

They explain that these combined calls always start with “pyow”, end with “hack” and that the terms are never alternated. Although the animals do vary the length of the call, the authors say that their language lacks a “combinatorial operation” (the process that allows our brains to arrange individual words into functional sentences). It is only the length of the “pyow-hack” sequence that indicates how far the monkeys will run.

“The putty-nose monkey’s expression is complex, but the important thing is the overall length, which predicts behavior and predicts how far they travel,” Miyagawa says. “They start with ‘pyow’ and end up with ‘hack.’ They never go back to ‘pyow.’ Never.”

Campbell’s monkey, a West African species, uses calls reminiscent of a human-style combination of sounds. The team explains that these monkeys also use a two-item system, adding an “-oo” sound to turn specific calls into generalized aerial or land alarms.

Miyagawa also notes that when the Old World monkeys speak, they seem to use a part of the brain known as the frontal operculum. Human language is heavily associated with Broca’s area, a part of the brain that seems to support more complex operations. The authors propose that humans’ ability to tap Broca’s area for language may be what enabled speech as we know it today.

“It seems like a huge leap,” Miyagawa says. “But it may have been a tiny [physiological] change that turned into this huge leap.”

The paper “Systems Underlying Human and Old World Monkey Communication: One, Two, or Infinite” has been published in the journal Frontiers in Psychology.


People learn to predict which words come after ‘um’ in a conversation — but not with foreigners

People can learn to predict what a speaker will say after a disfluency (such as ‘um’ or ‘aaah’). However, this only seems to work with speakers of their native tongue, not with non-native speakers.

Dialogue.

Image via Pixabay.

Even flowing conversation is peppered with disfluencies — short pauses and ‘umm’s, ‘ahh’s, ‘ugh’s. On average, people produce roughly 6 disfluencies per 100 words. A new paper reports that such disfluencies do not occur randomly — they typically come before ‘hard-to-name’ or low-frequency words (such as ‘automobile’ instead of ‘car’).

The team notes that, while previous research has shown that people can use disfluencies to predict when such a low-frequency (uncommon) word is incoming, no research has established whether listeners would actively track the occurrence of ‘uh’, even when it appeared in unexpected places. And that’s exactly what the present study set out to find.

Small pauses for big words

The team asked two groups of Dutch participants (41 in total, 30 of whom produced usable data) to look at sets of two images on a screen (one ‘common’ image, such as a hand, and one ‘uncommon’ image, such as an igloo) while listening to both fluent and disfluent instructions. These instructions would tell participants to click on one of the two images. One of the groups received instructions spoken in a ‘typical’ manner — in which the talker would say ‘uh’ before low-frequency words — while the other group received ‘atypical’ instructions — in which the talker said ‘uh’ before high-frequency words.

Eye-tracking devices were used to keep track of where each participant was looking during the trial. The team wanted to see whether participants in the second group would keep track of the unexpected ‘uh’s and learn to expect the common object after them.

At the start of the experiment, participants listening to ‘typical’ instructions immediately looked at the igloo upon hearing the disfluency, as did those in the atypical group. Note that the team intentionally left a relatively long pause between the ‘uh’ and the following word, so the participants looked at an object even before hearing the word itself. However, people in the atypical group quickly learned to adjust this natural prediction and started looking at the common object upon hearing a disfluency.

“We take this as evidence that listeners actively keep track of when and where talkers say ‘uh’ in spoken communication, adjusting what they predict will come next for different talkers,” explains lead author Hans Rutger Bosker from the Max Planck Institute for Psycholinguistics.

The team also wanted to see if this effect would hold for non-native speakers. In a follow-up experiment — one that used the same set-up and instructions but this time spoken with a heavy Romanian accent — participants learned to predict uncommon words following the disfluencies of a ‘typical’ (‘uh’ before low-frequency words) non-native talker. However, they didn’t start predicting high-frequency words in an ‘atypical’ non-native speaker, despite the fact that the same sentences were used in the native and non-native experiments.

“This probably indicates that hearing a few atypical disfluent instructions (e.g., the non-native talker saying ‘uh’ before common words like “hand” and “car”) led listeners to infer that the non-native speaker had difficulty naming even simple words in Dutch,” says co-author Geertje van Bergen.

“As such, they presumably took the non-native disfluencies to not be predictive of the word to follow — in spite of the clear distributional cues indicating otherwise.”

The findings suggest an interplay between ‘disfluency tracking’ and ‘pragmatic inferencing’, according to the team. In non-science speak, that largely means we only track disfluencies if the talker’s voice makes us believe they are a reliable umm’er.

“We’ve known about disfluencies triggering prediction for more than 10 years now, but we demonstrate that these predictive strategies are malleable. People actively track when particular talkers say ‘uh’ on a moment by moment basis, adjusting their predictions about what will come next,” explains Bosker.

The paper “How tracking the distribution of native and non-native disfluencies influences online language comprehension” has been published in the Journal of Memory and Language.


Each language you speak in alters your perception of time, study finds

Language can have a powerful effect on how we think about time, a new study found. The link is so powerful that switching language context in a conversation or during a task actually shifts how we perceive and estimate time.

Typing fonts.

Image credits Willi Heidelbach.

I think we all like to consider our minds as being somehow insulated from the goings-on around us; that we take comfort in knowing they will absorb, process, and respond to external stimuli in a calm, efficient, but most of all consistent fashion. Maybe it comes down to the sense of immutable truth our reasoning is imbued with if we assume that it’s rooted in a precise and impartial system — in a chaotic world, we need to know that we can trust our mind. A view which is a tad conceited, I’d say, since it’s basically our mind telling us what to believe about itself.

And it’s also probably false. Professors Panos Athanasopoulos, a linguist from Lancaster University, and Emanuel Bylund, a linguist from Stellenbosch University and Stockholm University, have discovered that our perception of time strongly depends on the linguistic context we’re currently using.

Doublespeak

People who are fluent in two (bilinguals) or more (polyglots) languages are known to ‘code-switch’ often — a rapid and usually unconscious shift between languages in a single context. But each language carries within it a certain way of looking at the world, of organizing and referring to things around us. For example, English speakers mark duration of events by likening them to physical distances, e.g. a short lecture, while a Spanish speaker will liken duration to volume or amount, e.g. a small lecture. So each language subtly ingrains a certain frame of reference for time on its speaker.

But bilinguals, the team found, show a great degree of flexibility in the way they denote duration, based on the language context in use. In essence, this allows them to change how the mind perceives time.

For the study, the team asked Spanish-Swedish bilinguals to estimate the passage of time or distance (the latter serving as a distractor task) while watching a screen showing either a line growing across it or a container being filled. Participants reproduced duration by clicking the computer mouse once, waiting the appropriate time, and clicking again. They were prompted to do this task with either the word ‘duración’ (the Spanish word for duration) or ‘tid’ (the Swedish term). The containers and lines themselves weren’t an accurate representation of duration, however, but were meant to test to what extent participants were able to disregard spatial information when estimating duration.

The idea is that if language does interfere with our perception of duration, Spanish speakers (who talk about time as a volume) would be influenced more by the fill level of the containers than their Swedish counterparts (who talk about time as a distance), and vice-versa for the lines.

And it did

Image credits emijrp / Wikimedia.

The team recruited 40 native Spanish speakers and 40 native Swedish speakers, all bilingual, and had them run three variations of the test. The first one found that native Spanish speakers were influenced to a greater extent (there was a wider discrepancy between real and perceived time) by the containers than by the lines (scoring an average 463-millisecond discrepancy vs the Swedes’ 344 ms). Native Swedish speakers were more influenced by the lines than the containers (scoring a 412 ms discrepancy vs their counterparts’ 390 ms).

The second test again included 40 participants from each group. In the absence of the Spanish/Swedish prompt words, the team “found no interaction between language and stimulus type, in either the line condition or the container condition. […] both language groups seemed to display slightly greater spatial interference in the lines condition than in the containers condition. There were no significant main effects.”

The third test included seventy-four Spanish-Swedish bilinguals who performed either the line or the container task. The team removed the distractor task to reduce fatigue and alternated between the prompt languages. Each participant took the experiment twice, once with Spanish and once with Swedish prompt labels. The team concludes that “when all stimuli were analysed,” there were “no significant main effects or interaction” in either the distance or the time task — meaning both groups were just as accurate in estimating time or distance regardless of language.

“Our approach to manipulate different language prompts in the same population of bilinguals revealed context-induced adaptive behavior,” the team writes. “Prompts in Language A induced Language A-congruent spatial interference. When the prompt switched to Language B, interference became Language B-congruent instead.”

“To our knowledge, this study provides the first psychophysical demonstration of shifting duration representations within the same individual as a function of language context.”

Exactly why this shift takes place is still a matter of debate: the team interprets the finding in the context of both the label-feedback hypothesis and the predictive processing hypothesis, but mostly in technical terms for other linguists to discern. For you and me, I think the main takeaway is that as much as our minds shape words so do words shape our minds — texturing everything from our thoughts to our emotions, all the way to our perception of time.

The paper “The Whorfian Time Warp: Representing Duration Through the Language Hourglass” has been published in the Journal of Experimental Psychology.

Editor’s note: edited measured discrepancy for more clarity.

No matter what your native language is – we all speak the same, study finds

A new study has found an unexpected link between languages — all over the world, the sounds we use to form words for common objects and ideas are surprisingly similar. Could we all, in fact, be talking the same language?

language stays with us

Credit: Radboud University.

One of the core principles of linguistics is that languages appeared and evolved independently of each other. Language, in other words, is a creation of the people speaking it. This would mean that the sounds we use to create words carry no meaning in themselves, but serve to create something with meaning — just like a bunch of shapeless rocks have no purpose until you mortar them together to make a wall.

But what if those rocks weren’t so shapeless after all? A new study performed by a team including physicists, linguists, and computer scientists from the US, Argentina, Germany, the Netherlands and Switzerland analyzed 40-100 basic vocabulary words in around 3,700 languages – approximately 62 per cent of the world’s current languages. They found that humans use surprisingly similar sounds to form the words for basic concepts such as body parts, family relationships or parts of the natural world. This would suggest that concepts fundamental to human life instinctively evoke the same verbalizations from all of us.

“These sound symbolic patterns show up again and again across the world, independent of the geographical dispersal of humans and independent of language lineage,” said Dr Morten Christiansen, professor of psychology and director of Cornell’s Cognitive Neuroscience Lab in the US where the study was carried out.

“There does seem to be something about the human condition that leads to these patterns. We don’t know what it is, but we know it’s there.”

The team found that in most languages, the word for ‘nose’ was likely to include the sounds “neh” or “oo” (as in ‘coop’). The word for ‘sand’ was likely to include the sound “s”, and the word for ‘leaf’ the “l,” “p” or “b” sounds. The words for ‘red’ and ‘round’ were likely to include the ‘r’ sound, they found.

Other words were likely to share sound groups across languages, including bite, dog, fish, skin, water, and star. For words describing parts of the body, such as knee or breast, these associations were even stronger.

“It doesn’t mean all words have these sounds, but the relationship is much stronger than we’d expect by chance,” added Dr Christiansen.

There are also negative associations — meaning that there are sounds or groups of sounds that words for the same objects or concepts avoid across languages. This was most evident with pronouns, with the words for first person singular, I, unlikely to include the sounds u, p, b, t, s, r and L. ‘You’ is unlikely to include sounds involving u, o, p, t, d, q, s, r and L.

The study, however, doesn’t address the question of why this happens. So we have no explanation for why humans seem compelled to use the same sounds for basic objects or ideas, but of course, the scientists have a few theories. Dr Christiansen believes that because these concepts are important in all languages, children are likely to learn these words early in life.

“Perhaps these signals help nudge kids into acquiring language,” he added: “Maybe it has something to do with the human mind or brain, our ways of interacting, or signals we use when we learn or process language. That’s a key question for future research.”

And other studies seem to back the idea of sound-object associations underlying language. They found, for example, that regardless of the language, words for small, spiky objects are likely to contain high-pitched sounds, while words for rounder shapes tend to contain ‘ooo’ sounds, a phenomenon now known as the ‘bouba/kiki’ effect.

There’s also the possibility that some words share sounds across languages because they’re the first tentative vocalizations babies make — so ‘ma-ma’ and ‘da-da’ become ‘mama’ and ‘daddy’.

“You could argue that the words chosen here are very old and therefore most likely to have a common ancestor language in the past, from which they all derived,” said Dr Lynne Cahill, a lecturer in English Language and Linguistics at the University of Sussex.

“I think this is an interesting study which has looked at so many languages but I don’t think it quite justifies their claim that it debunks the idea that language is arbitrary and I think they looked at too few words to make any firm conclusions.”

The full paper “Sound–meaning association biases evidenced across thousands of languages” has been published in the journal Proceedings of the National Academy of Sciences.

These are the most metal words in the English language, data scientist says

Every once in a while scientists turn their minds from stars or finding more nutritious foods towards life’s real questions: for example, how to sound as metal as possible.

Image credits Getoar Agushi / Wikimedia.

Former physicist turned data scientist Iain of Degenerate State crunched the numbers and has the keywords you need to say to impress. Iain mined DarkLyrics.com, “the largest metal lyrics archive on the web,” for the lyrics of 222,623 songs by 7,634 bands and analyzed them to find the most, and least, metal words in existence. By comparing the data from DarkLyrics with the Brown Corpus — a 1961 collection of English-language documents that is “the first of the modern, computer readable, general corpora” — he put together a list of the 20 most and least metal words, along with their “metalness” factor.

So without further ado, Iain’s top 10 most metal words are:

  1. burn.
  2. cries.
  3. veins.
  4. eternity.
  5. breathe.
  6. beast.
  7. gonna.
  8. demons.
  9. ashes.
  10. soul.

And the top 10 least metal words:

  1. particularly.
  2. indicated.
  3. secretary.
  4. committee.
  5. university.
  6. relatively.
  7. noted.
  8. approximately.
  9. chairman.
  10. employees.

Iain’s method is actually more complex than you’d be inclined to think. He first analyzed the data from DarkLyrics and came up with word clouds showing the most common words in all of the songs. Just looking at this data doesn’t offer any special insight into the genre, however, he found.

“Metal lyrics seem focused on “time” and “life”, with a healthy dose of “blood”, “pain” and “eyes” thrown in. Even without knowing much about the genre, this is what you might expect,” he writes.

But looking only at the frequency with which each word appears in songs doesn’t actually tell us anything about which words are closest to the spirit of metal.

“To do this we need some sort of measure of what “standard” English looks like, and […] an easy comparison is to the brown corpus,” he adds.

Iain assigned each word a “metalness” factor, M, defined as the logarithm of the frequency with which it appears in the lyrics divided by the frequency with which it appears in the Brown Corpus.

“To prevent us being skewed by rare words, we take only words which occur at least five times in each corpus.”
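To make the computation concrete, here is a minimal Python sketch of that metalness measure, assuming the two corpora are already available as plain lists of tokenized words (the function and variable names are illustrative, not taken from Iain’s actual code):

```python
import math
from collections import Counter

def metalness_scores(lyrics_words, reference_words, min_count=5):
    """Metalness M = log(relative frequency in lyrics / relative frequency
    in the reference corpus), computed only for words that occur at least
    `min_count` times in each corpus. A sketch of the approach described
    above, not the original analysis code."""
    lyrics_counts = Counter(lyrics_words)
    ref_counts = Counter(reference_words)
    lyrics_total = sum(lyrics_counts.values())
    ref_total = sum(ref_counts.values())

    scores = {}
    for word, count in lyrics_counts.items():
        if count < min_count or ref_counts[word] < min_count:
            continue  # skip rare words to avoid noisy frequency ratios
        lyrics_freq = count / lyrics_total
        ref_freq = ref_counts[word] / ref_total
        scores[word] = math.log(lyrics_freq / ref_freq)
    return scores

# Hypothetical usage, with both corpora tokenized into word lists:
# scores = metalness_scores(darklyrics_tokens, brown_corpus_tokens)
# most_metal = sorted(scores, key=scores.get, reverse=True)[:10]
```

Under this definition, a metalness of zero means a word is equally frequent in both corpora, positive values mark words over-represented in metal lyrics, and negative values mark words that are distinctly un-metal.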

He plotted the Metalness of all 10,000 words here, so you can know exactly how intense each word you say is. Unsurprisingly, topics like university and employment don’t quite have the metalness of say, demons or the fiery hells.

Iain says that his final analysis isn’t perfect — because of the different topics in the Brown Corpus and the lyrics, some words are naturally favoured with more or less metalness. A more precise measurement should involve comparison with other musical genres.

“A better measure of what constitutes “Metalness” would have been a comparison with lyrics of other genres, unfortunately I don’t have any of these to hand.”

However, it’s accurate enough to tell you what you need to know — the next time that sexy someone in a Judas Priest t-shirt saunters by, leave your uni and job alone. Your burning soul, the cries of the beast running through your veins and so on are all you need to talk about.

Got an exam coming up? Better start sketching

A new study found that drawing information you need to remember is a very efficient way to enhance your memory. The researchers believe that the act of drawing helps create a more cohesive memory as it integrates visual, motor and semantic information.

Image via YouTube.

“We pitted drawing against a number of other known encoding strategies, but drawing always came out on top,” said lead author Jeffrey Wammes, PhD candidate in the Department of Psychology at the University of Waterloo.

Wammes’ team included fellow PhD candidate Melissa Meade and Professor Myra Fernandes. Together, they enlisted the help of some of the university’s students and presented them with a list of simple, easily drawn words, such as “apple.” Participants were given 40 seconds in which to draw or write out the word repeatedly. After this, they were given a filler task of classifying musical tones, to facilitate memory retention. In the last step of the trial, the students were asked to recall as many of the initial words as they could in just 60 seconds.

“We discovered a significant recall advantage for words that were drawn as compared to those that were written,” said Wammes.

“Participants often recalled more than twice as many drawn than written words. We labelled this benefit ‘the drawing effect,’ which refers to this distinct advantage of drawing words relative to writing them out.”

In later variations of this experiment, students were asked to draw the words repeatedly or add visual details to the written letters — shading or doodling on them for example. Here, the team found the same results; memorizing by drawing was more efficient than all other alternatives. Drawing led to better later memory performance than listing physical characteristics, creating mental images, and viewing pictures of the objects depicted by the words.

“Importantly, the quality of the drawings people made did not seem to matter, suggesting that everyone could benefit from this memory strategy, regardless of their artistic talent. In line with this, we showed that people still gained a huge advantage in later memory, even when they had just 4 seconds to draw their picture,” said Wammes.

While the drawing effect proved itself in testing, it’s worth noting that the experiments were conducted with single words only. The team is now working to find out why the memory benefit of drawing is so powerful, and whether it can be carried over to other types of information.

The full paper, titled “The drawing effect: Evidence for reliable and robust memory benefits in free recall,” has been published online in The Quarterly Journal of Experimental Psychology and can be read here.