

Bubble physics can explain why dialects appear and how they evolve

The way language and dialects evolve could be explained using the laws of an unexpected chapter of physics: the behavior of bubbles.


Bubbles: making everything more awesome since forever.
Image in public domain.

Physics and foreign languages have a lot of similarities: they both string up a bunch of letters that us regular folk can’t really make sense of, for example. But the similarities seem to extend to our native tongue as well: new research from the University of Portsmouth shows that equations from physics can accurately predict where and how dialects appear.

And we’re talking about the best part of physics: bubble physics!

“If you want to know where you’ll find dialects and why, a lot can be predicted from the physics of bubbles and our tendency to copy others around us,” says Dr James Burridge from the University of Portsmouth.

Bubblingly social

In broad strokes, Burridge’s theory goes like this: because we’re social animals and like to fit in, we strive to copy the way others around us speak. Since people tend to “remain geographically local in their everyday lives”, Dr Burridge explains, this creates areas where one particular way of speaking (what we call a dialect) becomes dominant.

Imagine this early step of dialect formation like a foamy bath. There are a lot of bubbles, but they’re pretty tiny and all mushed up into each other. These bubbles/dialects then start to interact, and here’s where physics gets involved.

“Where dialect regions meet, you get surface tension. Surface tension causes oil and water to separate out into layers, and also causes small bubbles in a bubble bath to merge into bigger ones,” Dr Burridge adds.

“The bubbles in the bath are like groups of people — they merge into the bigger bubbles because they want to fit in with their neighbours.”

As small dialect-dominated bubbles come into contact with their neighbours, they’ll tend to merge with them (align their dialects). The same happens with the now-bigger bubbles, leading to ever-larger areas where a single dialect imposes itself over the others.

Dialectologists use the term ‘isogloss’ to describe the boundaries between distinct linguistic features, such as dialects. Under Dr. Burridge’s theory, the isoglosses behave like the thin edges of bubbles and, he says, “the maths used to describe bubbles can also describe dialects.”
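Burridge’s full model is laid out mathematically in the paper, but the core intuition (copy your neighbours, and the boundaries between regions smooth themselves out) can be played with in a toy simulation. The sketch below is purely illustrative and is not the model from the study; the grid size, the number of dialects, and the majority-copying update rule are all arbitrary assumptions.

```python
# Toy illustration of dialect "coarsening": each site on a grid repeatedly
# adopts the dialect most common among its four neighbours. Small pockets of
# minority dialects shrink and merge, and boundaries (isoglosses) straighten.
import random

SIZE, STEPS, DIALECTS = 40, 30, 3  # arbitrary choices for the sketch
grid = [[random.randrange(DIALECTS) for _ in range(SIZE)] for _ in range(SIZE)]

def step(grid):
    new = [row[:] for row in grid]
    for y in range(SIZE):
        for x in range(SIZE):
            # Tally the dialects of the four nearest neighbours (edges wrap around).
            counts = {}
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                d = grid[(y + dy) % SIZE][(x + dx) % SIZE]
                counts[d] = counts.get(d, 0) + 1
            # Copy the locally dominant dialect, breaking ties at random.
            best = max(counts.values())
            new[y][x] = random.choice([d for d, c in counts.items() if c == best])
    return new

for _ in range(STEPS):
    grid = step(grid)

# After enough steps, a few large, smooth-edged dialect regions dominate the grid.
print(sum(row.count(0) for row in grid), "out of", SIZE * SIZE, "sites speak dialect 0")
```

Run it for longer and a single dialect typically takes over the whole grid, mirroring the way small dialect “bubbles” get absorbed by bigger ones.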


Image credits Natalia Kollegova (Наталья Коллегова).

Bubbles merge in your bath because of surface tension. This is the force of the bulk liquid’s molecules pulling on those forming the surface, doing their best to keep the surface area as small as possible for a given volume. Because water molecules stick to each other (cohesion) much more strongly than they stick to air (adhesion), the liquid’s surface is put under tension by this imbalance and gets ‘pulled in’. That’s what pulls small water drops into rounded beads, and why neighbouring drops tend to merge.
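As a back-of-the-envelope illustration (an idealised calculation, not a result from the study): merge two spherical bubbles of radius r into a single bubble of the same total volume, and the total surface area drops.

```latex
2 \cdot \tfrac{4}{3}\pi r^{3} = \tfrac{4}{3}\pi R^{3}
\;\Longrightarrow\; R = 2^{1/3} r,
\qquad
A_{\text{before}} = 2 \cdot 4\pi r^{2} = 8\pi r^{2},
\qquad
A_{\text{after}} = 4\pi R^{2} = 2^{2/3}\cdot 4\pi r^{2} \approx 6.3\,\pi r^{2}.
```

The merged bubble has roughly 20% less surface area than the two separate ones, which is exactly why surface tension favours coalescence.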

It’s also why new ways of speaking often spread outwards from a large urban center.

“My model shows that dialects tend to move outwards from population centres, which explains why cities have their own dialects. Big cities like London and Birmingham are pushing on the walls of their own bubbles. This is why many dialects have a big city at their heart — the bigger the city, the greater this effect,” he concludes.

“If people live near a town or city, we assume they experience more frequent interactions with people from the city than with those living outside it, simply because there are more city dwellers to interact with.”

This model also suggests that language boundaries get smoother and straighter over time, which explains why dialects eventually stabilize.

The paper “Spatial Evolution of Human Dialects” has been published in the journal Physical Review X.

What does gestural communication of great apes tell us about human language?

Our language is one of the features that define us as human beings and distance us from all other animals. Though no other species has developed language like us, animals communicate with each other through a vast set of signals.

A chimpanzee asking for a snack from a keeper at Wellington Zoo, New Zealand. Image credits: Gabriel Pollard.

In the case of great apes, they communicate through vocalizations, facial expressions, body displays or gestures. Due to the phylogenetic proximity between humans and great apes, the study of gestural communication is particularly attractive, since it allows us to hypothesize about how language evolved in our species. And the evolution of human language is one of the hardest scientific topics to research. The reason is simple: language does not fossilize. That is why we are forced to look for other clues to enlighten us about how our language evolved, and great ape gestures can lead us much further in the search for answers than we previously thought.

First of all, great apes employ gestures in intentional, flexible and goal-oriented ways and display them in various contexts, like grooming, playing or feeding. For example, to request food, great apes usually use begging gestures, in which they stretch their arms and open their hands towards a conspecific holding food.

But why can gestures be considered a precursor of human language? Well, neuroscience has brought some interesting and strong findings, but it is not my intention to discuss the gestural theories based on brain data. Instead, I’m going to take a quick look at the robust data collected during long-term field studies conducted at different study sites. Yeah, we already know that in our ontogenetic path, before we speak, we communicate with the world using gestures. In our species, gestures emerge first. Speech appears later. But this is not the proof that tickles my guts!

There is no doubt that great ape gestures are flexible. All scientific papers about primate gestural communication support this: the same gesture is used for different purposes, and different gestures are used for the same purpose. Pretty much like what we do with our spoken language: different words for the same meaning and vice versa. So, we can highlight that apes communicate different things in very different situations.

One particular paper, written by Amy Pollick and Frans de Waal, reports an outstanding discovery: the gestural repertoire varies from group to group within the same species, in a kind of gestural dialect. Some gestures were only observed in particular circumstances and at one study site. Once again, pretty much like our language. Moreover, in a broader view, Graham et al. (2017) diagrammed the gestural repertoires of chimpanzees (Pan troglodytes) and bonobos (Pan paniscus): the two great ape species share a very significant number of gestures, while some are unique to each species. Such striking overlap of gestures and, at the same time, the still more mind-blowing exclusivity of some gestures between chimpanzees and bonobos reveal a scenario in which, most likely, the different languages of today evolved from an ancestral language. We are biologically programmed to speak, but our language evolved culturally, as apparently occurs with great ape gestures.

Furthermore, Hobaiter & Byrne (2014) focused on an attempt to translate the meaning of chimpanzee gestures. At first glance, it may seem an exaggeratedly anthropocentric approach, trying to humanize all animal behavior. But for those who have spent many hours observing great ape gestural communication (like me), the similarities between human and great ape gestures pop out at you. So, in the paper cited above, the authors identified gestures that translate to something like “move away”, “please, groom me”, “stop that” or “follow me”.

Will great ape gestural communication be the holy grail for understanding the roots of human language? I guess so. The growing body of evidence coming to us from primatological studies is quite exciting, and it makes me very optimistic that we can solve the riddle of the evolution of our language. We need to keep collecting data and testing hypotheses.

This is a guest post from Miguel Oliveira.

References:

Pollick, A. S.; de Waal, F. B. M. (2007). Ape gestures and language evolution. PNAS, 104(19), 8184-8189.

Hobaiter, C.; Byrne, R. W. (2014). The meanings of chimpanzee gestures. Current Biology, 24(14), 1596-1600.

Graham, K. E.; Furuichi, T.; Byrne, R. W. (2017). The gestural repertoire of the wild bonobo (Pan paniscus): a mutually understood communication system. Animal Cognition, 20(2), 171-177.

 


Koko’s compassion might show the world that gorillas can communicate

Just a while ago, I told you how researchers translated the chimpanzee gesture language. It was a real breakthrough, since the work proved chimps are the first animals we know of, apart from humans, that intentionally communicate through gestures within their own society. There are other animals, however, that can be taught to emulate human language and directly communicate with us. The case of Koko the gorilla is perhaps the most famous and, at the same time, heartbreaking evidence of this idea.

Koko is a 38-year-old lowland gorilla who not only learned sign language as a baby, but has also grown a love for kittens, whom she treats like her own children. Koko can understand some 1,000 signs of American Sign Language and over 2,000 words of spoken English. The gorilla is deeply compassionate towards her feline friends, and can often be seen playing with her kittens or teaching them how to eat: she would pretend to bite, then offer the morsel to the kittens.

One unfortunate day, one of Koko’s pet kittens escaped from the gorilla’s cage and was hit by a car. Dr. Penny Patterson, her teacher and caretaker, informed her verbally and through sign language. Koko’s reaction follows.

[Image gallery: Koko’s reaction.]

Here’s the complete video, too.

So, does this mean that Koko understands human language? There’s an immense amount of debate surrounding Koko and Dr. Patterson. A lot of scientists believe Dr. Patterson has gravely overestimated Koko’s ability to comprehend human language, since there has yet to be definitive proof. It is generally accepted that she understands words, but that it is grammar she lacks, and grammar is essential to actually communicating. She can sign many words, but she doesn’t use them consistently and can’t actually form sentences. It is because of this that most people assert Koko doesn’t, in fact, communicate. You can learn more and read up on opinions that both support and criticize claims of Koko’s ability to communicate in the Wikipedia article.

If Koko can in fact communicate, how will the world react when it finds out that what was always classed colloquially as a ‘dumb animal’ is capable of feelings, emotions and thoughts, and of forming and expressing opinions? A more important question that might need addressing is whether we actually need any proof whatsoever to treat any living being with the respect we’d show our fellow Homo sapiens.


How we think before we speak

Photo: glowscotland.org.uk

The common saying “think before you speak” is usually deployed after a person has said something inappropriate, implying that they haven’t given enough thought to the consequences of their words. Obviously, we can’t speak without thinking, though, so the question naturally arises: how do we plan out our utterances? Researchers at the Max Planck Institute for Psycholinguistics in Nijmegen sought to answer this question. Their findings suggest that the temporal coordination of thought and speech depends on the situation, namely on how complex it is. If the situation calls for a simple description, the speaker will in most cases plan the utterance in advance, while for a more complex description, the speaker will pre-plan only an initial portion and improvise the rest on the go.

[READ] Primate howl hints to origin of human speech

Antje Meyer and her colleagues at MPI were particularly interested in how the thoughts we want to express in speech gradually take shape and, most importantly, whether this process is the same in all situations and for all individuals.

A strategy for planning your thoughts

To see how far in advance someone plans their speech, the researchers devised an experiment. Study participants were asked to describe various scenes, each comprising an “agent” figure (for instance, a girl shown performing an action) and a “patient” (a boy who undergoes the action). While they described the scene, the participants’ speech was recorded and their eye movements were tracked with a camera. This approach rests on the general principle that we usually direct our gaze at whatever is “important” in view: for example, the person performing an action, about whom we would like to speak. Knowing when a speaker directs his attention towards an object of interest, scientists can infer that this is the moment the subject begins to form the related thoughts, selects the corresponding words from memory and begins to speak.

So how do we plan our speech? Previously there have been two proposed hypotheses:

  1. Speakers are only able to define the first concept and first word before the utterance. According to this theory, as soon as a subject looks at an image, he directs his attention towards one object of interest (say, the girl in the scene) and begins to speak immediately.
  2. Speakers are able, before the beginning of the utterance, to roughly determine what happens in the image, meaning who does what to whom. The subject directs his attention to multiple objects of interest and, possibly, other secondary elements before commencing his speech. This expresses a more complex planning strategy than the first hypothesis.

A third possibility, considered this time by the MPI researchers, is that speakers do not use either of these strategies consistently, and that their speech planning depends on the difficulty of the task at hand. To determine whether this is true, the scientists presented subjects with scenes of varying complexity; some images were very easy to recognize at a glance, while others required a bit more thought. The test subjects were not given any specific instructions regarding the nature and length of the descriptions.

Proportion of fixations (gazes) on the agent (person acting) and patients (object of the action) when describing simple situations (a) and more complex situations (b). In the case of simple situations, the spoken utterance begins slightly later than in complex ones (vertical line in the graphs) because the speaker plans further in advance in the first case. © MPI for Psycholinguistics

The graph above shows the proportion of all gazes directed at the agent (black) and the patient (grey) at each point in time after the image appeared. The utterances began after around 1.8 to 2 seconds. In general, the test subjects tended to look at the agent first rather than the patient. However, when the action was easy to describe, this preference for the agent was not very marked.
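For the curious, a curve like the one above is built by binning every trial’s fixation samples over time and, in each bin, counting what share of looks fall on the agent versus the patient. The snippet below is a hypothetical illustration of that bookkeeping; the data layout and the 50-millisecond sampling interval are assumptions for the example, not details taken from the MPI study.

```python
# Hypothetical sketch: compute the proportion of gazes on "agent" vs. "patient"
# at each time point, across trials. Real eye-tracking pipelines are far more
# involved; this only illustrates the bookkeeping behind such a curve.

SAMPLE_MS = 50  # assumed sampling interval, for illustration only

# Each trial is a list of gaze labels, one per sample, from image onset onward.
trials = [
    ["other", "agent", "agent", "agent", "patient", "patient", "agent"],
    ["agent", "agent", "patient", "patient", "patient", "agent", "agent"],
    ["agent", "other", "agent", "patient", "patient", "patient", "agent"],
]

n_bins = max(len(t) for t in trials)
for i in range(n_bins):
    samples = [t[i] for t in trials if i < len(t)]
    p_agent = samples.count("agent") / len(samples)
    p_patient = samples.count("patient") / len(samples)
    print(f"{i * SAMPLE_MS:4d} ms   agent: {p_agent:.2f}   patient: {p_patient:.2f}")
```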

Prepare or improvise

This suggests that, when confronted with a simple situation, people begin by establishing an overview of the events (looking at both agent and patient) and only then start to formulate the utterance. In contrast, when the action was more difficult to describe, the test subjects tended to limit themselves to looking at the person performing the action before the start of the utterance.

This and other evidence tells us that there isn’t one governing planning strategy; rather, speakers flexibly adapt their approach to the situation. We can therefore plan our utterances in different ways and think ahead to varying extents.

The implications of these findings are important for studies of linguistics and psycholinguistics, particularly regarding the extent to which the structure of language influences thinking. The study was carried out with native Dutch speakers. In Dutch, as in German or English, the agent is named first and then the action. In some languages, such as Tzeltal (spoken in Mexico) and Tagalog (spoken in the Philippines), the verb is placed at the beginning of the sentence. Considering the MPI findings, it is plausible that speakers of such languages differ considerably in how they plan their utterances. The MPI researchers in Nijmegen plan to investigate these language patterns in the future, in the hope that, armed with more thorough empirical insight, psycholinguists can better explore how language and thinking are related to each other.

 

 


Oldest known words are 15,000 years old. They include “mother”, “not” and “spit”

A team of researchers at the University of Reading’s School of Biological Sciences has compiled a list of 23 of the oldest words known so far, all common to seven ancient Eurasiatic language families that, in turn, evolved into hundreds of languages, some still spoken today, others extinct. The researchers estimate these words are 15,000 years old.

These words are:

  • thou, I, not, that, we, to give, who, this, what, man/male, ye, old, mother, to hear, hand, fire, to pull, black, to flow, bark, ashes, to spit, worm.

Were you to find yourself beside a campfire 150 centuries ago, alongside a group of hunter-gatherers, chances are they might understand some of these words. Some are pretty obvious, like “mother”, “not”, “what” or the ever so life-saving “fire”, but “worm” and “spit” definitely come as a surprise.

There’s a consensus among linguists that a language typically can’t survive past 8,000 or 9,000 years, since it’s common for languages to mix, get replaced by more influential languages, or morph into new ones altogether. These timeless “ultraconserved” words, as they’ve been dubbed by the researchers, show that this isn’t entirely true, although the list is only a handful of words long.

Map showing approximate regions where languages from the seven Eurasiatic language families are now spoken. Image: Pagel et al./PNAS

Mark Pagel of the University of Reading’s School of Biological Sciences led the research. Pagel and his team started with 200 words that linguists know to form the core vocabulary of all languages. What interested them were “cognates”: words that have the same meaning and a similar sound in different languages. For instance, father (English), padre (Italian), pere (French), pater (Latin) and pitar (Sanskrit) are cognates. After tracing the roots of these words, the scientists came up with the list of 23.
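To make “similar sound” a bit more concrete, here is a toy sketch that scores the ‘father’ cognates above by normalized edit distance. It is only an illustration of the idea of sound similarity; it is not the statistical method Pagel’s team used to trace word histories.

```python
# Toy illustration: score candidate cognates by normalized Levenshtein distance.
# This is not the method used in the study; it only makes "similar sound" concrete.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

words = {"English": "father", "Italian": "padre", "French": "pere",
         "Latin": "pater", "Sanskrit": "pitar"}

for l1, w1 in words.items():
    for l2, w2 in words.items():
        if l1 < l2:  # each unordered pair once
            similarity = 1 - levenshtein(w1, w2) / max(len(w1), len(w2))
            print(f"{l1}-{l2}: {w1} / {w2}  similarity = {similarity:.2f}")
```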

“Our results suggest a remarkable fidelity in the transmission of some words and give theoretical justification to the search for features of language that might be preserved across wide spans of time and geography,” Pagel and his team wrote.

What’s rather interesting to note is the meaning of these words. They survived for 15,000 years despite dramatic changes in technology, society, religion and so forth. Their value has remained undisturbed for thousands of years.

“I was really delighted to see ‘to give’ there,” Pagel said. “Human society is characterized by a degree of cooperation and reciprocity that you simply don’t see in any other animal. Verbs tend to change fairly quickly, but that one hasn’t.”

If you’d like to hear how some of these words sounded thousands of years ago, check out the Washington Post, where you can hear several words like “mother”, “thou” and … “spit” rendered in some of the world’s oldest languages.

Findings were detailed in the journal Proceedings of the National Academy of Sciences.

Baby’s ability to interpret languages is innate, research shows

Despite having brains that are still “under construction”, babies are able, even three months before full term, to distinguish between different syllables.


Changes in blood oxygenation show how brain areas activate while premature babies listen to speech.

It was recently shown that full-term babies, even just a few days after they are born, display remarkable linguistic sophistication: they can distinguish between two different languages [1], they can recognize their mother’s voice [2], and they can remember stories they were told while in the womb [3]. But researchers started wondering: just how much of this ability is innate, and how much develops after birth?

To answer that question, neuroscientist Fabrice Wallois of the University of Picardy Jules Verne in Amiens, France, took a peek at what happens inside a baby’s head before full term. As you can guess, it’s quite hard to study fetuses, so the team turned to the next best thing: babies born two to three months prematurely. At that stage, neurons are still migrating to their final destinations, and the basic connections between upper brain areas are just taking shape.

Researchers played a soft voice recording to premature babies while they were asleep in their incubators a few days after birth, then monitored their brains (of course, using a non-invasive technique). They were looking for any sign that the babies noticed changes in the recording, for example when a male voice said something after a long stretch in which a female voice had been talking.

The babies always distinguished between male and female voices, as well as between the trickier sounds ‘ga’ and ‘ba’, which demands even faster processing. Interestingly enough, in the process, they were using the same parts of the cortex which adults use for sophisticated understanding of speech and language.

The results further showed that linguistic connections inside the cortex are already “present and functional” and are not formed as a result of “outside-the-womb” practice; this clearly suggests that at least part of these speech-processing abilities is innate. However, the “innate” issue is still a matter of debate:

“It is possible that the experience of birth triggers a set of processes that prime the brain of a premature infant to respond to language in ways that a same-aged fetus will not.”

Scientific article


Txting makes u stupid, study finds


A linguistic study found that people who regularly text message are less likely to accept new words, as opposed to those that read more traditional print media such as books, magazines, and newspapers. For the study, student volunteers were asked about their reading habits and text messaging frequency, and then presented with a set of words both real and fictitious.

“Our assumption about text messaging is that it encourages unconstrained language. But the study found this to be a myth,” says Joan Lee, who authored the study for her master’s thesis in linguistics. “The people who accepted more words did so because they were better able to interpret the meaning of the word, or tolerate the word, even if they didn’t recognize the word. Students who reported texting more rejected more words instead of acknowledging them as possible words.”

Study participants who were exposed to traditional reading material scored better at telling real words from fictitious ones. Lee suggests that reading traditional print media exposes people to a variety and creativity in language that is not found in the colloquial peer-to-peer text messaging used among youth, or ‘generation text’. The study author goes on to say that reading encourages linguistic flexibility and tolerance of different words, which helps readers interpret new or unusual words correctly.

According to a survey carried out last year by Nielsen unrelated to the present study, Americans between the ages of 13 and 17 send and receive an average of 3,339 texts per month. Teenage girls send and receive more than 4,000.

“In contrast, texting is associated with rigid linguistic constraints which caused students to reject many of the words in the study,” says Lee. “This was surprising because there are many unusual spellings or “textisms” such as “LOL” in text messaging language.”

According to a 2011 survey by the National Endowment for the Arts, the proportion of Americans between the ages of 18 and 24 who read a book not required at school or at work is now 50.7 percent, the lowest for any adult age group younger than 75, and down from 59 percent 20 years ago.

[RELATED] Growing up around gadgets hinders your social skills, study finds

Lee says that for texters, word frequency is an important factor in the acceptability of words.

“Textisms represent real words which are commonly known among people who text,” she says. “Many of the words presented in the study are not commonly known and were not acceptable to the participants in the study who texted more or read less traditional print media.”

Source (pay wall) / via