Tag Archives: Talking

Language forms spontaneously, and fast

Languages can form spontaneously, and surprisingly fast, reports a new paper.

Image credits Bruno Glätsch.

Researchers at Leipzig University and the Max Planck Institute for Evolutionary Anthropology report that preschool children are able to form communication systems that share core properties of language. The team was studying the processes by which communication systems such as language may have developed in the past.

Say what?

“We know relatively little about how social interaction becomes language,” says Manuel Bohn, Ph.D., of Leipzig University’s Research Center for Early Child Development and lead author of the study.

“This is where our new study comes in.”

People love to communicate — there are over 7,000 languages in use today according to Ethnologue. Just under half of them have few speakers remaining, but it does go to show how versatile people are at using speech to convey information.

Still, the processes through which languages form are up for debate. While they’re believed to have formed over millennia, we’ve also seen deaf strangers spontaneously form a new sign language, the Nicaraguan Sign Language (NSL), blisteringly fast. The team notes that children developed the NSL, but exactly how they went about it wasn’t documented. So, they set about finding out.

The researchers attempted to recreate the process in a series of experiments with children from Germany and the US. The children were invited into two different rooms and provided with a Skype connection to communicate. Their task was to describe an image with different motifs to a partner in a coordination game. In the beginning, these were simple images showing concrete objects, such as a fork. As the game progressed, the images became more and more abstract and complex — a blank card, for example.

In order to prevent the children from falling back on a known language, the team allowed them a brief interval to familiarize themselves with the set-up and their partner, and then muted the conversation. The researchers then tracked the different ways the children communicated.

The children figured out pretty quickly that concrete objects can be conveyed by mimicking their corresponding action — eating to represent a fork, for example. The more abstract images, especially the blank paper showing nothing, were much harder to describe. The team notes how two of the participants managed to establish a gesture to convey the concept:

“The sender first tried all sorts of different gestures, but her partner let her know that she did not know what was meant,” explains Dr. Greg Kachel, the study’s second author. “Suddenly our sender pulled her T-shirt to the side and pointed to a white dot on her coloured T-shirt,” representing the blank card through the colors on her clothes.

Gesture language

Image via Pixabay.

When the two children switched roles later on in the experiment, the new sender didn’t have white on her clothes but used the same approach. When she pulled her own T-shirt to the side and pointed to it, “her partner knew what to do,” Kachel adds. In effect, they had established a gestured ‘word’ for an abstract concept.

Over the course of the study, the children developed more complex gestures for the images they were given. When describing an interaction between two animals, for example, they first established individual signs for individual actors and then started combining them. The team notes that this works similarly to a very limited grammatical structure.

All in all, the team believes that people first established references for actions and objects using gestures that resembled them. Individual partners involved in dialogue would coordinate with their peers by imitating each other, so that they used the same signs for the same things. Eventually, this interpersonal meaning would spread to the group at large (as everybody mingled and coordinated), gaining conventional meaning. I personally find this tidbit very fascinating, especially in relation to pictorial scripts, be they ancient Egyptian hieroglyphs or save icons.

Over time, the relationship between the sign and the concept itself weakens, allowing for signs to describe more abstract or more specific concepts. As more complex information needs to be conveyed, layers of grammatical structures are gradually introduced.

Among the key findings of this study are that partners need a common pool of experience and interaction in order to start communicating, and that, once this prerequisite is satisfied, the process can take place remarkably fast: in as little as 30 minutes.

It also goes to show that while we think of language as being formed by words, communication can happen without them. When people can’t talk to one another for some reason, they’ll find other ways to convey information with surprising gusto. Spoken language likely formed along the same steps, however, and came to be preferred as the fastest and most effective way of transmitting a message.

“It would be very interesting to see how the newly invented communication systems change over time, for example when they are passed on to new ‘generations’ of users,” Bohn says. “There is evidence that language becomes more systematic when passed on.”

The paper “Young children spontaneously recreate core properties of language in a new modality” has been published in the journal Proceedings of the National Academy of Sciences.


People learn to predict which words come after ‘um’ in a conversation — but not with foreigners

People can learn to predict what a speaker will say after a disfluency (such as ‘um’ or ‘aaah’). However, this only seems to work with speakers who share their native tongue, not with foreigners.

Dialogue.

Image via Pixabay.

Even flowing conversation is peppered with disfluencies — short pauses and ‘umm’s, ‘ahh’s, ‘ugh’s. On average, people produce roughly 6 disfluencies per 100 words. A new paper reports that such disfluencies do not occur randomly — they typically come before ‘hard-to-name’ or low-frequency words (such as ‘automobile’ instead of ‘car’).

The team notes that, while previous research has shown that people can use disfluencies to predict when such a low-frequency (uncommon) word is incoming, no research had established whether listeners actively track the occurrence of ‘uh’, even when it appears in unexpected places. And that’s exactly what the present study wanted to find out.

Small pauses for big words

The team asked two groups of Dutch participants (41 in total, 30 of whom produced usable data) to look at sets of two images on a screen (one ‘common’, such as a hand, and one ‘uncommon’, such as an igloo) while listening to both fluent and disfluent instructions. These instructions would tell participants to click on one of the two images. One of the groups received instructions spoken in a ‘typical’ manner — in which the talker would say ‘uh’ before low-frequency words — while the other group received ‘atypical’ instructions — in which the talker said ‘uh’ before high-frequency words.

Eye-tracking devices were used to keep track of where each participant was looking during the trial. What the team wanted to find out was whether participants in the second group would keep track of the unexpected ‘uh’s and learn to expect the common object after them.

At the start of the experiment, participants listening to ‘typical’ instructions immediately looked at the igloo upon hearing the disfluency, as did those in the atypical group. Note that the team intentionally left a relatively long pause between the ‘uh’ and the following word, so the participants looked at an object even before hearing the word itself. However, people in the atypical group quickly learned to adjust this natural prediction and started looking at the common object upon hearing a disfluency.

“We take this as evidence that listeners actively keep track of when and where talkers say ‘uh’ in spoken communication, adjusting what they predict will come next for different talkers,” explains lead author Hans Rutger Bosker from the Max Planck Institute for Psycholinguistics.

The team also wanted to see if this effect would hold for non-native speakers. In a follow-up experiment — one that used the same set-up and instructions but this time spoken with a heavy Romanian accent — participants learned to predict uncommon words following the disfluencies of a ‘typical’ (‘uh’ before low-frequency words) non-native talker. However, they didn’t start predicting high-frequency words in an ‘atypical’ non-native speaker, despite the fact that the same sentences were used in the native and non-native experiments.

“This probably indicates that hearing a few atypical disfluent instructions (e.g., the non-native talker saying ‘uh’ before common words like “hand” and “car”) led listeners to infer that the non-native speaker had difficulty naming even simple words in Dutch,” says co-author Geertje van Bergen.

“As such, they presumably took the non-native disfluencies to not be predictive of the word to follow — in spite of the clear distributional cues indicating otherwise.”

The findings suggest an interplay between ‘disfluency tracking’ and ‘pragmatic inferencing’, according to the team. In non-science speak, that largely means we only track disfluencies if the talker’s voice makes us believe they are a reliable umm’er.

“We’ve known about disfluencies triggering prediction for more than 10 years now, but we demonstrate that these predictive strategies are malleable. People actively track when particular talkers say ‘uh’ on a moment by moment basis, adjusting their predictions about what will come next,” explains Bosker.
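To make the idea of ‘disfluency tracking’ a bit more concrete, here is a purely illustrative sketch (my own toy model, not code from the study, with made-up names throughout): a listener simply tallies, per talker, what tends to follow an ‘uh’ and bases the next prediction on that tally, falling back on the usual ‘uncommon word incoming’ expectation for talkers they have no history with.

```python
from collections import defaultdict

class DisfluencyTracker:
    """Toy illustration (not from the paper): tally what follows 'uh'
    for each talker and predict the next referent accordingly."""

    def __init__(self):
        # counts[talker][category] = how often 'uh' preceded that category
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, talker, category_after_uh):
        """Record that this talker's 'uh' was followed by a word of the
        given category ('common' or 'uncommon')."""
        self.counts[talker][category_after_uh] += 1

    def predict(self, talker):
        """Predict the category most likely to follow this talker's 'uh'.
        With no history, default to 'uncommon', the baseline expectation."""
        history = self.counts[talker]
        if not history:
            return "uncommon"
        return max(history, key=history.get)

tracker = DisfluencyTracker()
for _ in range(5):                      # an 'atypical' talker: 'uh' before common words
    tracker.observe("talker_B", "common")
print(tracker.predict("talker_A"))      # 'uncommon': no history, baseline prediction
print(tracker.predict("talker_B"))      # 'common': prediction adjusted to this talker
```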

The paper “How tracking the distribution of native and non-native disfluencies influences online language comprehension” has been published in the Journal of Memory and Language.


That ridiculous voice we use to talk to dogs? They actually love it

A high-pitched voice and exaggerated emotion when interacting with a dog will get you a long way, science says.

Man with dog.

Image credits Besno Pile.

University of York researchers say that the way we speak to our dog-friends is a key relationship-building element between pet and owner. The effect is similar to how ‘baby-talk’ helps adults bond with babies.

Whosagoodbooooy?

Previous research suggests that talking to a puppy in a high-pitched voice, with the customary exaggerated amount of emotion, helps improve engagement. New research from the University of York tested whether this effect holds true for adult dogs as well. Their results suggest that using this “dog-speak” can also help improve attention, and helps strengthen the bond between owner and pet.

“A special speech register, known as infant-directed speech, is thought to aid language acquisition and improve the way a human baby bonds with an adult,” said first author Dr. Katie Slocombe from the University of York’s Department of Psychology. “This form of speech is known to share some similarities with the way in which humans talk to their pet dogs, known as dog-directed speech.”

This high-pitched, rhythmic speech is widely used in human-dog interactions in western cultures, but we don’t actually know if it’s any good for the dog. So, the team set out to find whether the type and content of the conversation help promote social bonding between pets and their human owners.

Unlike previous research efforts on this subject, the team placed real human participants in the same room as the dogs — up to now, such studies involved broadcasting speech over a loudspeaker, without any human present. This created a much more naturalistic environment for the dogs and allowed the team to better control the variables involved — i.e. to see whether a dog not only paid more attention to, but also wanted to interact more with, a person who spoke to it in such a way.

The tests were performed with adult dogs. Each animal first listened to one person who used dog-directed speech (the high-pitched voice) with phrases such as ‘you’re a good dog’ or ‘want to go for a walk?’, then to another person using adult-directed speech with no specific, dog-related content — phrases such as ‘I went to the cinema last night’, for example. The attentiveness of each dog during these ‘talks’ was measured. Following the speaking phase, each dog was allowed to choose one of the two people to physically interact with.

Dogs were much more likely to want to interact and spend time with those who used dog-directed speech containing dog-related content, compared to those who used adult-directed speech. But this result by itself doesn’t do much to clear the waters — so the team also performed something of a control trial, meant to give them insight into which elements of speech appealed to the dogs: was it the high-pitched, emotional tone, or the words themselves? During this phase, the speakers were asked to mix dog-directed speech with non-dog-related words, and adult-directed speech with dog-related words.

“When we mixed-up the two types of speech and content, the dogs showed no preference for one speaker over the other,” says Alex Benjamin, PhD student at the department of psychology, paper co-author. “This suggests that adult dogs need to hear dog-relevant words spoken in a high-pitched emotional voice in order to find it relevant.”

“We hope this research will be useful for pet owners interacting with their dogs, and also for veterinary professionals and rescue workers.”

The paper “‘Who’s a good boy?!’ Dogs prefer naturalistic dog-directed speech” by Alex Benjamin and Katie Slocombe has been published in the journal Animal Cognition.

You can’t keep eye contact during conversation because your brain can’t handle it, study finds

A new study suggests that we may struggle to maintain eye contact while having a conversation with someone because our brains just can’t handle doing both at the same time.

Image credits Madeinitaly / Pixabay.

It’s not (just) shyness, it seems. Scientists from Kyoto University, Japan, tested 26 volunteers on their ability to play word association games while keeping eye contact with computer-generated faces. Their results suggest that people just can’t handle thinking of the right words while keeping their attention on an interlocutor’s face. The effect, they found, became more noticeable when the participants had to think up less familiar words — implying that this process uses the same mental resources as maintaining eye contact.

“Although eye contact and verbal processing appear independent, people frequently avert their eyes from interlocutors during conversation,” write the researchers.

“This suggests that there is interference between these processes.”

The participants were asked to think of word associations for terms with various difficulty levels. Thinking of a verb for ‘spoon’, for example, is pretty easy — you can eat with it. Thinking of a verb associated with the word ‘paper’ is harder, since you can write on it, fold it, cut it, and so on. Participants were tested on their ability to associate while looking at animations of faces maintaining eye contact and animations of faces looking away. In the first case, they fared worse.

It took them longer to think of answers when maintaining eye contact, but only when they had to associate a more difficult word. The researchers believe that this happens because the brain uses the same resources for both actions — so in a way, talking while maintaining eye contact overloads it.

The team suspects that participants may be experiencing some kind of neural adaptation, a process in which the brain alters its response to a constant stimulus — take, for example, the way you don’t feel your wallet in the back pocket you usually keep it in, but find it uncomfortable in the other one. The sample size the team worked with is pretty small, so further research is needed to prove or disprove the findings.

The paper “When we cannot speak: Eye contact disrupts resources available to cognitive control processes during verb generation” has been published in the journal Cognition.


That urge to complete other people’s sentences? Turns out the brain has its own Auto Correct

The hippocampus might have a much more central role to play in language and speech than we’ve ever suspected, a team of US neuroscientists claims. They examined what happens in people’s brains when they finish someone else’s sentence.

Conversation

Image credits Isa Karakus / Pixabay.

Do you ever get that urge to blurt out the last word of somebody else’s sentence? Happens to me all the time. And it seems scientists do it too, because a team led by Vitoria Piai, senior researcher at the Donders Centre for Cognition and Radboud University Medical Centre, looked into the brains of 12 epileptic patients to make heads or tails of the habit. What they’ve found flies in the face of everything we currently know about how memory and language interact in our brains.

The 12 patients were taking part in a separate study trying to understand their unique patterns of brain activity. Each of their brains was monitored with a set of electrodes. Piai and her team read the participants a series of six-syllable (but incomplete) sentences, such as “she came in here with the…” or “he locked the door with the…”. After a sentence was read out to them, the researchers held up a card with the answer printed on it, all the while monitoring how the patients’ hippocampi — on the non-epileptic side of the brain — responded.

When the missing word was obvious, ten out of the twelve subjects showed bursts of synchronised theta waves in the hippocampus, a process indicative of memory association.

“The hippocampus started building up rhythmic theta activity that is linked to memory access and memory processing,” said Robert Knight from the Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley and co-author of the paper.

But when the answer wasn’t so straightforward, their hippocampi ramped up even more as they tried (without success) to find the correct word — like an engine revving with the clutch pedal pushed down.

The original auto correct

“[The results] showed that when you record directly from the human hippocampal region, as the sentence becomes more constraining, the hippocampus becomes more active, basically predicting what is going to happen.”

Much like an autocorrect feature, which replaces a more unusual word the first time you use it but adapts over time to not only stop replacing it but also start filling it in for you, the findings suggest that our minds try to fill blanks in dialogue by drawing on our memory stores of language and on the interlocutor’s particularities of speech, linking memory and language.
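To stretch the analogy a bit further, here is a purely illustrative toy sketch (mine, not the paper’s, and not how any real autocorrect engine works; all names are invented): a completer that only starts filling in a speaker’s word once it has heard that speaker finish the phrase the same way often enough.

```python
from collections import defaultdict

class ToyAutocomplete:
    """Toy analogy only (neither the brain's mechanism nor a real
    autocorrect engine): learn which word a given speaker tends to use
    to finish a phrase, and propose it once it has been seen enough."""

    def __init__(self, threshold=2):
        self.threshold = threshold
        # counts[(speaker, phrase)][word] = times that word finished the phrase
        self.counts = defaultdict(lambda: defaultdict(int))

    def hear(self, speaker, phrase, final_word):
        """Store how this speaker actually finished the phrase."""
        self.counts[(speaker, phrase)][final_word] += 1

    def complete(self, speaker, phrase):
        """Suggest a completion once one word has been heard often enough."""
        options = self.counts[(speaker, phrase)]
        if not options:
            return None
        word, seen = max(options.items(), key=lambda kv: kv[1])
        return word if seen >= self.threshold else None

ac = ToyAutocomplete()
ac.hear("Alice", "she came in here with the", "umbrella")
print(ac.complete("Alice", "she came in here with the"))  # None: not confident yet
ac.hear("Alice", "she came in here with the", "umbrella")
print(ac.complete("Alice", "she came in here with the"))  # 'umbrella': now fills it in
```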

“Despite the fact that the hippocampal area of the medial part of the temporal lobe is well known to be linked to spatial and verbal memory in humans, the two fields have been like ships running in the fog, unaware that the other ship is there,” Knight added.

This would mean that the hippocampus plays a much more important role in language, a faculty previously thought to be the domain of the cortex — though right now, the team doesn’t know exactly how this link works. Because of this, the team hopes to continue their work to better understand the bridge between memory and language, which will hopefully give us a better understanding of the brain itself.

Another implication would be that, because at least part of the act of speaking is handled by the hippocampus and not the cortex, language might not be as exclusively human as we’d like to believe.

The full paper “Direct brain recordings reveal hippocampal rhythm underpinnings of language processing” has been published in the journal PNAS.