Tag Archives: communication

Earrings aren’t just fashionable. They’re also a form of communication.

Something as simple as earrings can serve as a means of communication just by themselves. As it turns out, humans have been using this type of non-verbal communication for millennia, with researchers recently recovering shell beads that were used for earrings 150,000 years ago.  

The researchers working at the cave. Image credit: The researchers

The beads are the earliest known evidence of widespread non-verbal communication, according to the group of anthropologists who made the discovery. In their study, they argue that the find provides valuable new information on the evolution of human cognitive abilities and interactions.

“You think about how society works—somebody’s tailgating you in traffic, honking their horn and flashing their lights, and you think, ‘What’s your problem?'” Steven Kuhn, lead author of the study, said in a statement. “But if you see they’re wearing a blue uniform and a peaked cap, you realize it’s a police officer pulling you over.”

Kuhn and the group of researchers recovered 33 marine shell beads between 2014 and 2018 from Bizmoune Cave, located about 10 miles from the Atlantic coast of southwest Morocco. The cave, formed in Upper Cretaceous limestone, was discovered during a survey of the area in 2004 and has since been the subject of archaeological excavations, including the ones that recovered the beads.

The beads were made from sea snail shells belonging to the species Tritia gibbosula, each measuring about half an inch long. They feature an oval or circular perforation, indicating they were hung on strings or from clothing. There are also traces of human modification, such as chipping, possibly done with a stone tool, the researchers found.

“They were probably part of the way people expressed their identity with their clothing,” Kuhn said. “They’re the tip of the iceberg for that kind of human trait. They show that it was present even hundreds of thousands of years ago, and that humans were interested in communicating to bigger groups of people than their immediate friends and family.”

Looking into the beads

While this is far from the first time researchers have found symbolic artifacts such as beads, previous examples are no older than 130,000 years. Some of the earliest examples are associated with the Aterian industry, a Middle Stone Age culture known for its advanced tools, such as spear points used to hunt diverse wild animals.

For anthropologists like Kuhn, the beads are a way to advance our understanding of the evolution of human cognition and communication. They are a fossilized form of basic communication, Kuhn said. While the researchers don't know exactly what the beads meant, they are symbolic objects deployed in a way that other people could see them, Kuhn explained.

The researchers agree that their findings, while significant, also leave a lot of open questions. They now want to explore further the role of the Aterian industry and why its people felt the need to make the beads when they did. One possibility is that they wanted to identify themselves as more people started expanding into North Africa.

Using a certain bead might have meant that you belonged to a certain clan, created as a way to protect limited resources due to population expansion. Still, as Kuhn explains, it’s one thing to know that they were capable of making them, and another to understand what actually stimulated them to do it. A chapter for another day. 

The study was published in the journal Science Advances. 

Puppies are likely born with the ability to communicate with humans

Credit: Emily Bray.

Apart from barking, whimpering, and muttering in various tones, dogs aren’t too big on vocal communication. But despite this shortcoming, dogs and humans are able to communicate remarkably well. Humans issue commands and dogs will tend to follow. Dogs will also issue their own commands, pawing their humans to get pets or barking while standing next to the door to signal it’s time to go for a walk. Now, researchers report that this amazing ability to interact with people is present in canines from a very early age. What’s more, this communication link between puppies and humans requires little, if any, training.

A good boy from an early age

For more than a decade, Emily Bray of the University of Arizona and colleagues have been conducting all sorts of research with dogs, particularly with those from Canine Companions, the largest service dog organization in the U.S. for people with physical disabilities. Ultimately, they’d like to learn how to improve the performance of service dogs, which requires a better understanding of how dogs think, solve problems, and communicate.

One important question the researchers attempted to investigate in the present study was how much of this capacity for communication is explained by biology. To this aim, they were fortunate enough to have access to hundreds of budding service dogs that were all starting out at the same early age. This setup allowed the researchers to constrain the experimental conditions quite nicely, since all the puppies had virtually the same rearing history and a known pedigree traceable across multiple generations. Because the degree of relatedness between the puppies was known, this information could then be used to weigh genetic against environmental factors when assessing certain outcomes, such as communication abilities.

Credit: Emily Bray.

In total, the researchers worked with 375 eight-week-old puppies — 98 Labrador retrievers, 23 golden retrievers, and 254 Labrador golden crosses from 117 different litters — that had to perform the same tasks. For instance, one such task involved finding and bringing back an object such as a cup pointed to by a human.

All the puppies ultimately completed the task successfully, even when the odor of the object to be retrieved was masked. This shows that dogs can engage in social communication using gestures and eye contact from a very early age. However, the puppies were only successful when a person initiated the interaction by speaking in a high-pitched voice, typical of how humans address cute babies. Without the human initiating the interaction, the puppies didn't look to people for help in completing a task in which food was locked in a Tupperware container.

“We show that puppies will reciprocate human social gaze and successfully use information given by a human in a social context from a very young age and prior to extensive experience with humans,” said Bray in a statement. “For example, even before puppies have left their littermates to live one-on-one with their volunteer raisers, most of them are able to find hidden food by following a human point to the indicated location.”

A statistical model employed by the researchers suggests that 40% of the variation in a puppy's ability to follow a human's finger-pointing or gaze can be explained genetically. The genetic component is likely even greater between more distantly related breeds. We know from previous research that breeds of dogs initially selected for cooperative work (like sheepdogs) are much better at following a person's point than breeds selected for other kinds of work (like guard dogs, hounds, or sled dogs).

In the future, Bray would like to identify some of these specific genes that are involved in such behaviors. The researchers have already collected blood samples and cognitive data, which will come in handy for a genome-wide association study.

“From a young age, dogs display human-like social skills, which have a strong genetic component, meaning these abilities have strong potential to undergo selection,” Bray said. “Our findings might therefore point to an important piece of the domestication story, in that animals with a propensity for communication with our own species might have been selected for in the wolf populations that gave rise to dogs.”

The findings appeared in the journal Current Biology.

4G on the Moon? Yes! NASA and Nokia are already working on it

In a bid to solve internet connectivity issues in space, NASA has partnered with Nokia to set up a 4G network on the Moon as part of the Tipping Point project. The plan is to first build a 4G network and eventually transition to 5G, just like on Earth. It will be the first 4G communication system in space.

Credit: Flickr / Osde8Info.

NASA awarded Nokia $14.1 million to deploy the cellular network on the Moon. The grant is part of $370 million worth of contracts for lunar surface research missions. Most of the funds were given to large space companies such as SpaceX and United Launch Alliance to perfect techniques to make and handle rocket propellant in space.

The project will have to move fast to stay in line with NASA’s goal to have astronauts working at a lunar base by 2028.

“We need power systems that can last a long time on the surface of the moon, and we need habitation capability on the surface,” NASA Administrator Jim Bridenstine said in a statement.

Back in 2018, Nokia and British firm Vodafone announced their goal for a moon mission. They intended to launch a lander and rover built by Audi, using a SpaceX rocket. The craft would set down near the Apollo 17 landing site and examine the Lunar Roving Vehicle that astronauts left behind in 1972.

The launch never took place, but the new contract with NASA brings Nokia's plans for moon projects to life. The upcoming 4G network could allow surface communications over greater distances, at higher speeds, and with more reliability than current standards, NASA explains. This means communication between lunar landers, rovers, habitats, and astronauts would be possible thanks to the service, said Jim Reuter, associate administrator for NASA's Space Technology Mission Directorate. Nokia will look at how terrestrial technology could be modified for the lunar environment, he added.

The moon's cellular network will operate during lunar landings and launches. At the same time, it will be designed to tolerate the particularities of the lunar surface, such as radiation, extreme temperatures, and vacuum.

The network will allow astronauts to control lunar rovers, stream high-definition video, transmit data, and navigate the lunar geography in real time. While 4G networks on Earth need big cell towers with power generators, Nokia has created small cell technology that is much easier to pack into a rocket.

Other technologies funded by NASA include demonstrations of lunar surface power generation and energy storage. Intuitive Machines will develop a hopping robot that could launch and carry small packages from one lunar site to another, while Alpha Space will create a small laboratory that could land on the moon’s surface.

Uncertainty can be reported without damaging trust — and we need that more than ever

The numbers on COVID-19 are often uncertain and based on imperfect assumptions. It’s an ever-changing situation that often involves uncertainty — but we’re better off communicating things that way.

Typing fonts.
Image credits Willi Heidelbach.

Communicating science is rarely an easy job. In addition to “translating” complex data and processes into a language that’s familiar and accessible to all, there’s also the problem of data itself, which is often not clear-cut.

Experts and journalists have long assumed that if science communication includes “noise” (things like margin of error, ranges, uncertainty), public trust in science will be diminished.

“Estimated numbers with major uncertainties get reported as absolutes,” said Dr. Anne Marthe van der Bles, who led the new study while at Cambridge’s Winton Centre for Risk and Evidence Communication.

“This can affect how the public views risk and human expertise, and it may produce negative sentiment if people end up feeling misled,” she said.

But this might not be the case, a new study concludes.

The researchers carried out a total of five experiments involving 5,780 participants, who were shown headlines with varying degrees of uncertainty. The participants were then asked how much they trusted the news.

The researchers report that participants were more likely to trust the source that presented data in the most accurate format, where the results were flagged as an estimate, and accompanied by the numerical range from which it had been derived.

For example: “…the unemployment rate rose to an estimated 3.9% (between 3.7%–4.1%)”.
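As a toy illustration of that reporting format (my own sketch, not code from the study; the function name and wording are invented), a newsroom tool might render estimates like this:

```python
def report_estimate(label: str, point: float, low: float, high: float) -> str:
    """Render a point estimate together with its explicit uncertainty range."""
    return (f"the {label} rose to an estimated {point}% "
            f"(between {low}% and {high}%)")

print(report_estimate("unemployment rate", 3.9, 3.7, 4.1))
# -> the unemployment rate rose to an estimated 3.9% (between 3.7% and 4.1%)
```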

The results of one of the five experiments: perceived uncertainty (A), trust in numbers (B), and trust in the source (C). Even as the trust in the numbers was lower, the trust in the source was slightly higher. Results were slightly different for some of the other experiments. Image credits: PNAS.

We’ve seen both before and during the COVID-19 pandemic how damaging scientific disinformation can be. Disinformation often presents things as certain and absolute, and science communicators are concerned about adding more uncertainty, and diminishing trust in science.

If this study is any indication, addressing uncertainty head-on might actually be better. At a time when scientific information and expertise are more important than ever, the researchers encourage communicators to consider their results.

“We hope these results help to reassure all communicators of facts and science that they can be more open and transparent about the limits of human knowledge,” said co-author Prof Sir David Spiegelhalter, Chair of the Winton Centre at the University of Cambridge.

Speaking of uncertainty and assumption, this is a limited sample size, and all the participants were British — there could be a cultural component in this case, and the results might not apply to a larger sample of people, or to people in other countries.

The results are intriguing nonetheless. Uncertainty cannot be avoided at this point in the COVID-19 outbreak, and we should become more comfortable in dealing with it.

Read the study in its entirety here.

Talkative robots make humans chat too — especially robots that show ‘vulnerability’

Robots admitting to making a mistake can, surprisingly, improve communication between humans — at least during games.

Image via Pixabay.

A new study led by researchers from Yale University found that in the context of a game with mixed human-and-robot teams, having the robot admit to making mistakes (when applicable) fosters better communication between the human players and helps improve their experience. A silent robot, or one that would only offer neutral statements such as reading the current score, didn’t result in the same effects.

Regret.exe

“We know that robots can influence the behavior of humans they interact with directly, but how robots affect the way humans engage with each other is less well understood,” said Margaret L. Traeger, a Ph.D. candidate in sociology at the Yale Institute for Network Science (YINS) and the study’s lead author.

“Our study shows that robots can affect human-to-human interactions.”

Robots are increasingly making themselves part of our lives, and there's no cause to assume that this trend will stop; in fact, it's overwhelmingly likely to accelerate. Because of this, understanding how robots impact and influence human behavior is becoming ever more important. The present study focused on how the presence of robots — and their behavior — influences communication between humans as a team.

For the experiment, the team worked with 153 people divided into 51 groups — three humans and a robot each. They were then asked to play a tablet-based game in which the teams worked together to build the most efficient railroad routes they could over 30 rounds. The robot in each group was assigned one pattern of behavior: it would either remain silent, utter neutral statements (such as the score or number of rounds completed), or express vulnerability through a joke, a personal story, or by acknowledging a mistake. All of the robots occasionally lost a round, the team explains.

“Sorry, guys, I made the mistake this round,” the study’s robots would say. “I know it may be hard to believe, but robots make mistakes too.”

“In this case,” Traeger said, “we show that robots can help people communicate more effectively as a team.”

People teamed with robots that made vulnerable statements spent about twice as much time talking to each other during the game, and they reported enjoying the experience more than people in the other two kinds of groups, the study found. Participants in teams with either the vulnerable or the neutral robot also communicated more than those in groups with silent robots, suggesting that a robot engaging in any form of conversation helps spur its human teammates to do the same.

“Imagine a robot in a factory whose task is to distribute parts to workers on an assembly line,” said Sarah Strohkorb Sebo, a Ph.D. candidate in the Department of Computer Science at Yale and a co-author of the study. “If it hands all the pieces to one person, it can create an awkward social environment in which the other workers question whether the robot believes they’re inferior at the task.”

“Our findings can inform the design of robots that promote social engagement, balanced participation, and positive experiences for people working in teams.”

Language forms spontaneously, and fast

Languages can form spontaneously, and surprisingly fast, reports a new paper.

Image credits Bruno Glätsch.

Researchers at the Leipzig University and the Max Planck Institute for Evolutionary Anthropology report that preschool children are able to form communication systems which share core properties of language. The team was studying the processes by which communication systems such as language developed in the past.

Say what?

“We know relatively little about how social interaction becomes language,” says Manuel Bohn, Ph.D. at the Leipzig University’s Research Center for Early Child Development and lead-author of the study.

“This is where our new study comes in.”

People love to communicate — there are over 7,000 languages in use today according to Ethnologue. Just under half of them have few speakers remaining, but it does go to show how versatile people are at using speech to convey information.

Still, the processes through which languages form remain up for debate. While languages are believed to have formed over millennia, we've also seen deaf strangers spontaneously form a new sign language, the Nicaraguan Sign Language (NSL), blisteringly fast. The team notes that children developed the NSL, but exactly how they went about it wasn't documented. So, the researchers set about finding out.

They attempted to recreate the process in a series of experiments with children from Germany and the US. The children were invited into two different rooms and provided with a Skype connection to communicate. Their task was to describe an image with different motifs to a partner in a coordination game. In the beginning, these were simple images showing concrete objects, such as a fork. As the game progressed, the images became more and more abstract and complex — a blank card, for example.

In order to prevent the children from falling back on known language, the team allowed them a brief interval for familiarization with the set-up and their partner, and then muted the conversation. Then they tracked the different ways they communicated.

The children figured out pretty quickly that concrete objects can be conveyed by mimicking their corresponding action — eating to represent a fork, for example. The more abstract images, especially the blank paper showing nothing, were much harder to describe. The team notes how two of the participants managed to establish a gesture to convey the concept:

“The sender first tried all sorts of different gestures, but her partner let her know that she did not know what was meant,” explains Dr. Greg Kachel, the study’s second author. “Suddenly our sender pulled her T-shirt to the side and pointed to a white dot on her coloured T-shirt,” representing the card with the colors on her clothes.

Gesture language

Image via Pixabay.

When the two children switched roles later on in the experiment, the transmitter didn’t have white on her clothes but used the same approach. When she pulled her own t-shirt to the side and pointed to it, “her partner knew what to do,” Kachel adds. In effect, they had established a gestured ‘word’ for an abstract concept.

Over the course of the study, the children developed more complex gestures for the images they were given. When describing an interaction between two animals, for example, they first established individual signs for individual actors and then started combining them. The team notes that this works similarly to a very limited grammatical structure.

All in all, the team believes that people first established references for actions and objects using gestures that resembled them. Individual partners involved in dialogue would coordinate with their peers by imitating each other, so that they use the same signs for the same things. Eventually, this interpersonal meaning would spread to the group at large (as everybody mingled and coordinated), gaining conventional meaning. I personally find this tidbit very fascinating, especially in relation to pictorial scripts, be they ancient Egyptian hieroglyphs or save icons.

Over time, the relationship between the sign and the concept itself weakens, allowing for signs to describe more abstract or more specific concepts. As more complex information needs to be conveyed, layers of grammatical structures are gradually introduced.

Some of the key findings of this study are that partners need a common pool of experience and interaction in order to start communicating, and how fast this process can take place if that prerequisite is satisfied: as little as 30 minutes.

It also goes to show that while we think of language as being formed by words, communication can happen without them. When people can’t talk to one another for some reason, they’ll find other ways to convey information with surprising gusto. Spoken language likely formed following the same steps, however, and was preferred as the fastest and most effective way of transmitting a message.

“It would be very interesting to see how the newly invented communication systems change over time, for example when they are passed on to new ‘generations’ of users,” Bohn says. “There is evidence that language becomes more systematic when passed on.”

The paper “Young children spontaneously recreate core properties of language in a new modality” has been published in the journal Proceedings of the National Academy of Sciences.

Scientists show how plants communicate — and it looks amazing

Image credits: Simon Gilroy.

For a while now, researchers have been observing an intriguing phenomenon: when one part of a plant is under attack (say, by a hungry caterpillar), the defense systems are activated in other parts of the plant. But how do they know to do so? A new study sheds new light on that process, highlighting the impressive means through which plants communicate — and they have the amazing videos to go with it.

Plants don’t have nerves, but, as it turns out, they have something that’s surprisingly similar: a network of signaling cues, the same cues that many animals use in their own nervous systems.

“We know there’s this systemic signaling system, and if you wound in one place the rest of the plant triggers its defense responses. But we didn’t know what was behind this system,” explained botanist Simon Gilroy from the University of Wisconsin-Madison.

Gilroy and botanist Masatsugu Toyota, a former postdoc in Gilroy’s lab, wanted to see how this signal propagates.

“We do know that if you wound a leaf, you get an electrical charge, and you get a propagation that moves across the plant,” Gilroy adds. What triggered that electric charge, and how it moved throughout the plant, were unknown. But there was one likely culprit: calcium.

Calcium is found almost everywhere in cells, often acting in a sensor-like fashion. Because it carries an electrical charge, it can produce a signal about a changing environment. But the problem is that calcium is very difficult to study, spiking and dipping quickly, and researchers needed a way to study it in real time.

So they genetically engineered a mustard plant that would reveal changes in calcium concentration in real time. The resulting plants produce a protein that fluoresces around calcium — basically, whenever there's a spike in calcium, the plant lights up. This allowed the researchers to see the signaling process, which propagates at a speed of about 1 millimeter per second — lightning fast in the plant world (at that rate, a signal crosses a 30-centimeter plant in about five minutes), but still only a fraction of the speeds seen in the animal world.

Toyota and Gilroy showed that when the plant is threatened (most commonly by insects), waves of calcium flow from the source of the attack throughout the plant. As soon as the defensive wave hits, defensive hormones are released in the plant in an attempt to stop the damage from taking place. These noxious hormones deter some of the plants' predators from eating them.

The team also wanted to see what triggers this calcium release in the first place. Previous research had suggested that glutamate, an amino acid and significant neurotransmitter in both plants and animals, is the key. So they used mutant plants lacking glutamate receptors and found that the flow of calcium was also disrupted.

“Lo and behold, the mutants that knock out the electrical signaling completely knock out the calcium signaling as well,” says Gilroy.

So essentially, when the plant is bitten or attacked, it spills out glutamate from the wound site. From there, this triggers a wave of calcium flowing through the plant, which leads to activation of the plant’s hormonal defense mechanisms. It’s a remarkably complex and dynamic process, for a group of organisms which are often regarded as inert and lacking a nervous system.

In addition to describing this process, the study's videos can also help scientists visualize this astonishing mechanism — and let's admit it, it's also really nice to look at.

“Without the imaging and seeing it all play out in front of you, it never really got driven home — man, this stuff is fast!” Gilroy says.

The study has been published in the journal Science.

In dolphin gangs, everybody knows everyone’s name

Dolphin behavior is often surprisingly human-like.

It’s already well known that male dolphins often form alliances with each other — sometimes lasting for a few decades. Now, researchers have found that dolphins use individual vocalizations to refer to each other. In other words, they have names.

“We found that male bottlenose dolphins that form long-term cooperative partnerships or alliances with one another retain individual vocal labels, or ‘names,’ which allows them to recognize many different friends and rivals in their social network,” says Stephanie King from the University of Western Australia. “Our work shows that these ‘names’ help males keep track of their many different relationships: who are their friends, who are their friend’s friends, and who are their competitors.”

Here's an example of the dolphin calls mentioned above.

Credits: Stephanie King.

 

King and her colleagues wanted to figure out how the dolphins were using vocalizations to address each other, so they recorded the dolphins’ vocalizations using underwater microphones and determined the individual vocal label used by each of the males. They compared vocalizations from different alliances and found that males in an alliance retain vocal labels that are quite distinct from one another.

“All bottlenose dolphins (male and female) have their own signature whistle, which is a unique identity signal somewhat comparable to a human name,” King told ZME Science. “Animals use these signature whistles to introduce themselves to others (broadcast their identity) and can copy another dolphin’s signature whistle as a means of addressing that specific individual. However, it was previously thought that male dolphins would converge on a shared signature whistle when they formed long-term cooperative partnerships/alliances with one another. This is because across the animal kingdom it is very common for pairs or groups of animals to make their calls more similar when they share strong social bonds. This can be seen in some parrots, bats, elephants and primates, and represents a means of advertising the strength of their relationships and their group membership. It was previously thought that allied male dolphins also shared an identity signal to advertise alliance membership. While this is true for pairs of allied male dolphins in Florida, USA, we found it was not the case for the male dolphins found in Shark Bay, Western Australia, who form multi-level alliances e.g. alliances within alliances where each alliance can comprise up to 14 males.”

This strongly suggests that dolphins in alliances give each other names, or at least refer to each other using name-like vocalizations. However, this is also in contrast to previous studies, which found that dolphins refer to each other using shared vocalizations as a way of advertising their membership to that partnership or group.

“With male bottlenose dolphins, it’s the opposite–each male retains a unique call, even though they develop incredibly strong bonds with one another,” King says. “Therefore, retaining individual ‘names’ is more important than sharing calls for male dolphins, allowing them to keep track of or maintain a fascinating social network of cooperative relationships.”

Interestingly, researchers also found no evidence of any genetic relatedness influencing signature whistle similarity between males. In other words, dolphins that were related didn’t necessarily have similar names. Most of the males in this study had signature whistles that were notably different from those of both first-order and second-order alliance partners.

Just like in early human populations, it seems that dolphins use individual vocal labels to maintain recognition within complex social structures. It’s not clear if this behavior is restricted only to bottlenose dolphins.

King and her colleagues now intend to study dolphin social relationships even more closely. They want to record the “names” of individual males and then play them back to see how different males react to the call in different circumstances.

“It will be interesting to reveal whether all cooperative relationships within alliances are equal or not,” she says.

The bottlenose dolphin is the best-studied dolphin species, so researchers aren’t sure yet if this behavior also exists in other species — but it very well might. King explains:

“For pairs of males that form alliances in Florida, it appears they do make their identity signals more similar. Where there is only one partner, such convergence may be used to indicate commitment to one another, but we found that when multiple males form alliances between themselves and other alliances (e.g. multi-level alliances) then retaining individual names is more important than sharing calls as it allows dolphins to negotiate a complex social network of cooperative relationships. There may be other vocalisations that do not encode identity that might be favoured by particular alliances. That is something that would be worth investigating. The bottlenose dolphin is, so far, the best studied small dolphin species, but evidence suggests that other species, such as spotted dolphins and common dolphins, also have signature whistles.”

Journal Reference: Current Biology, King et al.: “Bottlenose Dolphins Retain Individual Vocal Labels in Multi-level Alliances” https://www.cell.com/current-biology/fulltext/S0960-9822(18)30615-8


Your voice will always sound funny when talking to someone you think is your superior

When holding a conversation, men and women alike tend to change the pitch of their voices based on the perceived social status and prestige of their interlocutors, a new study reports.

Giraffes.

Image credits Christine Sponchia.

Humans are social animals par excellence. And, while we may like to think that we willingly, knowingly, make all the social magic happen, our bodies play a much larger role than we give them credit for. Non-verbal communication is known to convey a huge wealth of information, often right under the noses of those engaged in conversation. But a new paper shows that paraverbal information also comes to flesh out our social interactions — again, without us even being aware.

Pitching in

A team of psychologists from the University of Stirling wanted to see how perceived social status alters pitch, one of the main elements of paraverbal communication. So they had a group of 48 students, 24 male and 24 female, take a simulated job interview.

The interviews were conducted with a series of three virtual employers. The team started with a series of 28 virtual male interlocutors, designed before the study by another group of students working from written descriptions. Dominants were described as “approximately 36–45 year old, […] extremely dominant individual” who “likes to be in control and to get their way. They will use force, coercion, and intimidation to achieve their goals if necessary.” Prestigious individuals were described as “approximately 36–45 year old, male […] highly valued, prestigious and influential” with “many valued skills and qualities and others follow him freely. This ultimately leads to his achieving his goals.”

To get a feel for how close to the mark the designers were, a third group of 69 undergrads was asked to rate the faces on a scale from 1 (low dominance/prestige) to 7 (high dominance/prestige). The highest-scoring faces for dominance and for prestige were used, as well as the face which received median marks for both traits (as a control/neutral employer).

During the mock interview, students responded to introductory, personal, and interpersonal interview questions. Overall, they tended to speak with higher-pitched voices when talking with employers who they perceived were of higher social status than them, the team reports. This happened regardless of the way the students felt about their own social status — all that was needed was for them to perceive the employer as being of higher status — for male and female students alike.

In contrast, they tended to lower the pitch of their voice most in response to the more complex or interpersonal questions, such as when explaining a conflict situation to an employer.

The team believes this happens because low-pitched voices sound more dominant (particularly in men), while higher-pitched voices sound relatively submissive — which may be your body's way of saying “don't worry, I'm not here to stir trouble.”

“So, if someone perceives their interviewer to be more dominant than them, they raise their pitch. This may be a signal of submissiveness, to show the listener that you are not a threat, and to avoid possible confrontations,” explains Dr Viktoria Mileva, a Postdoctoral Researcher at the University of Stirling and co-author of the paper.

“These changes in our speech may be conscious or unconscious but voice characteristics appear to be an important way to communicate social status. We found both men and women alter their pitch in response to people they think are dominant and prestigious.”

Further supporting this theory, the team also found that participants who perceive themselves as dominant — those who use methods like manipulation, coercion, or intimidation to acquire social status — were less likely to change to a higher pitch when speaking with someone of a higher social status.

The findings show how deeply embedded subconscious communication is in every type of human interaction, and how our perceived place in society governs how we act in relation to our peers. The effects seen in this study might hold true for other settings in which there’s a perceived social status difference between the two interlocutors, such as in school or when settling a dispute.

“Understanding what these signals are, and what their effects are, will help us comprehend an essential part of human behaviour,” Dr Mileva adds.

The paper “Perceived differences in social status between speaker and listener affect the speaker’s vocal characteristics” was published in the journal PLOS One.


Women aren’t more expressive than men, contrary to the common stereotype

You'll hear a lot of people claim women are more expressive than men. But according to a new paper published by researchers at Microsoft, who used a facial expression recognition algorithm en masse, this doesn't seem to be true at all. Instead, the gender pattern is more nuanced: some emotions, for instance, are displayed more by men than by women.

Credit: Pixabay.

The team, led by Daniel McDuff, a scientist at Microsoft Research, Redmond, recruited 2,106 people online from France, Germany, China, the US, and the UK. The participants were asked to watch a series of ads from their own countries — on everything from cars to fashion to manufacturing — designed to elicit various emotional responses, and to film themselves with their own webcams while doing so.

Each video was analyzed by Microsoft's facial recognition software, which infers emotional patterns from facial expressions. The fact that this whole process is automated is a huge advantage, especially for something as subjective as assessing emotions. Since a machine did all the facial analysis instead of multiple human researchers, we at least get an objective, unified review.

According to the findings:

  • Women do seem to smile more, mirroring previous research. They also raise their inner brow more, which generally reflects fear or sadness.
  • Men, however, frowned more. Frowns are usually indicative of anger, though they can also reflect confusion or concentration.
  • Otherwise, there were no gender differences in facial expressions.

If we’re to believe emotions and facial expressions are closely associated, the obvious implication is that women are more prone to feeling happy but also to feel more anxious. Men, on the other hand, are more likely to feel angry and, maybe, confused. If this is the case, why? What evolutionary mechanisms could have supported this gender difference?


The mean fraction of videos in which inner brow raises, outer brow raises, brow furrows, lip corner pulls and lip corner depressors appeared. Credit: PLOS ONE.
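For the curious, the aggregation behind a chart like this is simple to reproduce. Below is a minimal sketch with invented data and column names (the study itself used Microsoft's own tooling, not this code):

```python
import pandas as pd

# One row per participant video; booleans mark whether the facial-coding
# system detected each action unit at least once in that video.
videos = pd.DataFrame({
    "gender":      ["F", "F", "F", "M", "M", "M"],
    "smile":       [True, True, False, True, False, False],
    "brow_furrow": [False, True, False, True, True, False],
})

# The mean of a boolean column is the fraction of videos showing that unit.
expressivity = videos.groupby("gender")[["smile", "brow_furrow"]].mean()
print(expressivity)
#            smile  brow_furrow
# gender
# F       0.666667     0.333333
# M       0.333333     0.666667
```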

The researchers believe that some of the findings can be partly explained by cultural and social expectations. For instance, in many countries happiness is considered more desirable for women than for men. The paper highlights data from the UK, where the smallest gender difference in expressivity was observed.

Nevertheless, apart from some differences across countries in smiling and frowning, the gender difference in expressivity was far less pronounced than the stereotype might have us think.

Speech and language deficits aren’t to blame for autistic children’s tantrums

Children with autism are known for throwing more frequent tantrums, but contrary to popular belief it doesn’t come down to language or speech impediments, Penn State College of Medicine researchers report.

Pidgeon Tantrum.

Image credits Ann Larie Valentine / Flickr.

Raising a child with autism can be a very difficult job. They are known to go through more tantrums than children without the disorder, and communication can become difficult if not impossible with those on the higher end of the spectrum. Their outbursts are usually chalked up to these difficulties in expressing their wants and needs, but that isn’t necessarily the case.

To understand the relationship between communication and these tantrums, a team of researchers from the Penn State College of Medicine worked with 240 children with autism aged 15 to 71 months.

“There is a common pervasive misbelief that children with autism have more tantrum behaviors because they have difficulty communicating their wants and their needs to caregivers and other adults,” said Cheryl D. Tierney, associate professor of pediatrics at the College of Medicine and section chief of behavior and developmental pediatrics at Penn State Children’s Hospital.

“The belief is that their inability to express themselves with speech and language is the driving force for these behaviors, and that if we can improve their speech and their language the behaviors will get better on their own.”

Tierney and co-author Susan D. Mayes, professor of psychiatry at Penn State, addressed limitations in previous research by including a larger sample of children and recording data on more of their characteristics. The authors also note that unlike previous research, they measured IQ and considered speech and language as distinct elements that might play into the development of tantrums in children with autism.

IQ level is particularly important here because a child with the mental capacity to understand and use language will behave very differently from one who can't. Furthermore, the authors make the distinction between language and speech: language is a child's ability “to understand the purpose of words and to understand what is said,” while speech is the ability “to use their mouth, tongue, lips and jaw to form the sounds of words and make those sounds intelligible to other people.”

They found that the children's IQ, their ability to understand language, and their ability to use words and speak clearly explained less than 3% of the variation in their tantrums. They also report that children with autism who could speak at an age-appropriate level still had just as many tantrums as those who couldn't.

“We found that only a very tiny percentage of temper tantrums are caused by having the inability to communicate well with others or an inability to be understood by others,” Tierney notes. “We had children in our sample with clear speech and enough intelligence to be able to communicate, and their tantrums were just as high in that group.”

The authors write that while the paper doesn't answer why these tantrums take place, the findings are enough to justify shifting the focus from improving speech to improving behavior. They note that a low tolerance for frustration and general mood dysregulation (two common traits on the autism spectrum) likely play a part and should be the focus of future studies.

Applied behavior analysis is probably the most helpful method of working through these tantrums, and having a well-trained and certified behavior analyst on a child’s treatment team is key to a good outcome.

“We should stop telling parents of children with autism that their child’s behavior will get better once they start talking or their language improves, because we now have enough studies to show that that is unlikely to happen without additional help,” she said.

“This form of therapy can help children with autism become more flexible and can show them how to get their needs met when they use behaviors that are more socially acceptable than having a tantrum,” Tierney said.

The full paper “Tantrums are Not Associated with Speech or Language Deficits in Preschool Children with Autism” has been published in the Journal of Developmental and Physical Disabilities.

World-first Braille Smartwatch brings all the connectivity of a smartphone to your fingertips

South Korean company Dot is launching the first-ever Braille smartwatch, giving visually impaired people the same connectivity as modern devices at the tips of their fingers.

And it does so in sleek style.

Smartphones and smartwatches are pretty neat. They owe a big part of their appeal to the touchscreen — by merging the display with the input method, you get more ‘screen’ and better handling of the devices. But what opens up a whole new class of accessibility and convenience for you seems like nothing but an inert piece of plastic to visually impaired people.

With estimates placing the number of such people worldwide at over 285 million, that's a lot of people not taking part in the fun. But one South Korean developer wants to bring them the same level of connectivity in a medium that's tailored to their needs. The company, named Dot, has unveiled the first-ever Braille smartwatch, which bears the same name.

The Dot Watch measures 43 mm (1.69 inches) in diameter and displays information through 4 proprietary dynamic Braille cells. These update the displayed information as if the user were running his or her fingers over a piece of static text. It offers users the option to change the speed at which the characters update on the screen for seamless reading, and two touch-sensitive areas to the left and right of the cells allow scrolling through the text.

It's not a big device, and having only four characters visible at a time might seem limited, but it's enough to let the watch display time down to the second (the first Braille device ever to do so), along with the date, and it includes all basic watch functionalities such as a stopwatch or timer. Dot is confident that the freedom users will have in manipulating information through the touch sensors and auto-scroll feature will allow them to easily access the information they require.
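To make the four-cell scrolling idea concrete, here is a rough sketch of how a message could be chunked into successive frames. The dot patterns are standard 6-dot Braille, but the frame logic is my own simplified guess, not Dot's actual firmware:

```python
# Standard 6-dot Braille patterns for a few letters, written as sets of
# raised dots (numbered 1-3 down the left column, 4-6 down the right).
BRAILLE = {
    "t": {2, 3, 4, 5},
    "i": {2, 4},
    "m": {1, 3, 4},
    "e": {1, 5},
    " ": set(),
}

CELLS = 4  # the Dot Watch exposes four refreshable cells

def frames(text, cells=CELLS):
    """Chunk a message into consecutive frames for auto-scrolling."""
    patterns = [BRAILLE[ch] for ch in text.lower()]
    for start in range(0, len(patterns), cells):
        yield patterns[start:start + cells]

for frame in frames("time"):
    print(frame)  # the raised dots for each of the four cells
```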

“The point is not to read a whole book, much like sighted users who won’t read a novel on a smartwatch — it’s to be mobile and connected,” a press kit from Dot reads.

The device also pairs with smartphones via the ubiquitous Bluetooth, so it can receive, translate, and display all the information a phone would. Think alarms, texts, or messages from any platform (such as Messenger), directions from Google Maps, and so on.

“When paired with a smartphone, additional functions [become available] such as receiving and reading notification messages, checking the caller information and receiving/rejecting the call, alarm, and “Find my phone” – a function that, when activated, calls up your connected phone with a loud “beep” and vibration,” Dennis Jung, Dot’s Account Executive, told me in an e-mail.

It’s not only about receiving, either — there are plans to enable Dot users to send simple messages using buttons embedded into the side of the device, although this feature may not be available in the first model.  It also supports Open API, so new apps and functions for the phone can be developed independently, lending huge flexibility to the device.

The Dot comes into a market dominated by sound-based devices. While these can rapidly convey information to the user, they have severe drawbacks: you can either plug in headphones to receive the message — which blocks the surrounding sounds blind people rely on for important cues about the world around them — or have it blasted out of speakers for all the world to hear. Tactile devices are available, but they're usually bulky and prohibitively expensive.

“Though the introduction of refreshable Braille displays to the public aided in widespread increase in information accessibility, only 5% of visually-impaired people worldwide have the rare privilege of owning such devices. This is largely due to the price being an incredible barrier,” Dot explains.

“Thus, our company is working tirelessly to address the demand for an affordable actuator technology. Our team strives to apply it to the wearables concept and make it into reality.”

The company has been developing the Dot Watch for 3 years now with funding from some 140,000 backers. They plan to ship the first 100,000 devices starting this month, with the rest of the initial batch scheduled for next year.

The initial model will support English and Korean, with an estimated battery life of around 336 hours — this is obviously highly dependent on level of use. Depending on customer feedback, Dot says they can incorporate “other features such as voice recognition” or speakers.

Dot hopes to cap the price tag at around US$300 for the USA and 300 EUR in the EU.

“Our goal is to keep the price as close to each other no matter the currency and purchasing platforms.”

If you plan to get your hands on one, the first 1,000 units will be available in London retail.

A bunch of artificial cells just passed the Turing Test

Scientists have built cells that are not living but are so life-like that other cells can communicate with them. Not only this, but they’ve also passed a version of the Turing Test.

E. coli, featured here, communicated with the artificial cells. Image credits: CDC

The Turing Test was developed by Alan Turing as a way to assess a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. That's not quite the case here; this is a different type of Turing Test, one that assesses a non-living system's ability to behave like a living system. Sheref S. Mansy, one of the study's authors, explains:

“The Turing test was formulated over sixty years ago to evaluate whether a machine could behave intelligently. One nice aspect of this test was that it avoided the contentious issue of defining intelligence. Instead, if a machine can trick a person through textual communication into believing that the machine is another person, and thus not a machine, then the machine must display some level of intelligence to pull off this deception.”

In order to create the cells, the team built tiny, cell-like structures packed with DNA instructions that they could use to make RNA, which in turn produces very specific responses (in this case, proteins) to stimuli. Think of it as a cell-robot, programmed by biological laws.

The proteins were only produced in the presence of a particular bacterial molecule – an acyl homoserine lactone (AHL). The robot cells were placed next to living bacteria from three different species – E. coli, Vibrio fischeri, and Pseudomonas aeruginosa. The artificial cells started producing the response proteins when exposed to AHL, which was the first good news — the robots were tuning into the conversation. But in order to have a proper conversation, the artificial cells needed to send out messages of their own, so they were also equipped with an AHL-production mechanism. It didn't take long for the real and the artificial cells to engage in a chemical conversation. Mansy adds:

“First, it is absolutely possible to make artificial cells that can chemically communicate with bacteria. Artificial cells can sense the molecules that are naturally secreted from bacteria and in response synthesize and release chemical signals back to the bacteria. Such artificial cells do a reasonably good job of mimicking natural cellular life and can be engineered to mediate communication paths between organisms that do not naturally speak with each other.”
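In software terms, the loop the researchers describe looks something like the toy model below. Every name, threshold, and number here is invented for illustration; the real system is wet chemistry, not code:

```python
class ArtificialCell:
    """Toy sense-and-respond loop mimicking the engineered cells."""

    AHL_THRESHOLD = 1.0  # arbitrary units of the bacterial signal

    def __init__(self):
        self.reporter_protein = 0.0  # produced when AHL is sensed
        self.ahl_secreted = 0.0      # the 'reply' sent back to the bacteria

    def step(self, ahl_in: float) -> None:
        # DNA -> RNA -> protein, but only in the presence of enough AHL
        if ahl_in >= self.AHL_THRESHOLD:
            self.reporter_protein += 1.0
            self.ahl_secreted += 0.5

cell = ArtificialCell()
for ahl in [0.0, 0.5, 1.5, 2.0]:  # bacteria ramping up their signalling
    cell.step(ahl)
print(cell.reporter_protein, cell.ahl_secreted)  # -> 2.0 1.0
```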

The scientists did cheat a bit, though. They didn’t enable the cells to produce their own “translation mechanism,” they just harvested it from living bacteria. Moving on from here, they plan on doing just that and making the robot cells completely self-reliant.

“The artificial cells were quite life-like for a short period of time, but this chemical system was completely reliant on translation machinery that was isolated from bacteria. The artificial cells could not produce their own translation machinery. To make more advanced and life-like artificial cells, the artificial cells would need to synthesize their own translation machinery, which is a daunting task. Either we figure out how to do this, or we’ll have to find a way to build artificial cells that are not reliant on the activity of proteins.”

This study isn't only academic, helping us understand how bacteria evolve and communicate. It has several real-life applications, and the potential of this type of research should not be underestimated. It could very well create a new delivery mechanism for drugs, or it could even interfere with dangerous pathogens. Mansy concludes:

“We also found that artificial cells can interfere with the signaling of pathogenic bacteria. If developed further, such artificial cells could be used to disrupt biofilms and thus help to clear infections.”

Journal Reference: Roberta Lentini et al. — Two-Way Chemical Communication between Artificial and Natural Cells. DOI: 10.1021/acscentsci.6b00330

Superdense-coded logo of an oak leaf sets new record for transfer rate over optic cable

Department of Energy researchers working at the Oak Ridge National Laboratory have just set a new world record for data transfer speed. They relied on a technique known as superdense coding, which uses properties of elementary particles such as photons or electrons to store much more information than previously possible.

Image credits Thomas B. / Pixabay.

The Oak Ridge team has achieved a transfer rate of 1.67 bits per qubit (quantum bit) over a fiber optic cable, a small but significant improvement over the previous record of 1.63 bits per qubit.

Awesome! What does it mean though?

One of the most fundamental differences between a traditional computer and a quantum one is how they encode and transmit data. Computers do it in bits — 1s or 0s. Quantum computers do it in qubits, which can be both a 1 and a 0 at the same time — bending minds and the limits on stored information alike. The team, composed of Brian Williams, Ronald Sadlier, and Travis Humble, used a physical system similar to that of quantum computers, which are widely touted for the speed with which they solve complex problems.
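For context on what "bits per qubit" means, here is the textbook superdense-coding scheme in broad strokes (the ORNL experiment used a more elaborate, hyperentangled variant of this idea). The sender and receiver share an entangled Bell pair; by applying one of four operations to her photon alone and then transmitting it, the sender conveys two classical bits:

```latex
\text{Shared Bell pair:}\quad
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)

\text{Encoding two bits via local operations:}\quad
I\,|\Phi^{+}\rangle = |\Phi^{+}\rangle,\quad
X\,|\Phi^{+}\rangle = |\Psi^{+}\rangle,\quad
Z\,|\Phi^{+}\rangle = |\Phi^{-}\rangle,\quad
XZ\,|\Phi^{+}\rangle = |\Psi^{-}\rangle
```

A complete Bell-state measurement at the receiving end distinguishes all four outcomes, recovering log2(4) = 2 classical bits per transmitted qubit in the ideal case; measured rates such as 1.67 bits per qubit fall short of that ceiling because of experimental imperfections.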

They were the first to ever transmit superdense code over optical fiber, a major step forward if we want to use quantum communication without re-installing every cable in the world. ORNL's oak-leaf logo was chosen as the first message ever transmitted with this technique, sent between two terminals in the lab. The exact mechanics of the process sound more like hardcore sci-fi than actual science, but hey — it's quantum physics.

“We report the first demonstration of superdense coding over optical fiber links, taking advantage of a complete Bell-state measurement enabled by time-polarization hyperentanglement, linear optics, and common single-photon detectors,” the team writes.

The team used run-of-the-mill laboratory equipment, such as common fiber optic cable and standard photon detectors, meaning their technique is already suited for practical use.

Right now, the technology remains largely experimental. Potential applications are very enticing though, including a novel, cost-effective way of condensing and transferring dense packages of information at high speed. The main winner here is, of course, the Internet — the tech could allow for anything from less buffering time on Netflix to improved cybersecurity applications.

“This experiment demonstrates how quantum communication techniques can be integrated with conventional networking technology,” Williams said. “It’s part of the groundwork needed to build future quantum networks that can be used for computing and sensing applications.”

The full paper “Superdense coding over optical fiber links with complete Bell-state measurements” has been published in the journal Physical Review Letters, where it was selected as an “Editor’s Suggestion” paper.

Turns out goats and dogs aren’t that different when communicating with humans

The derpy goat might rival a dog’s ability to communicate with humans, a new study found. This adds to previous research showing that these animals are good problem-solvers and have excellent long-term memory, suggesting these ruminants are more intelligent than they appear.

This guy learned how to bleat fluently in several languages.
Image credits wikimedia user EnderWikiTX.

Goats are the first domesticated animals we know of, with evidence of their husbandry stretching back as far as 10,000 years, according to lead author Dr. Alan McElligott from Queen Mary University of London's Department of Biological and Experimental Psychology. During all this time, they've built a reputation for eating virtually anything and getting stuck in weird places in their unending quest for food — hardly what you'd expect from an intelligent animal.

But their wily ways might hide a sharper mind than we give them credit for. McElligott's team found that goats employ human-directed communication behaviours very similar to the ones dogs rely on. The researchers trained the animals to remove a lid from a box with a tasty reward inside. After the goats got the hang of it, the team made the rewards inaccessible and recorded the animals' reactions towards an experimenter supervising the test — who was either facing the animal or had his back to it.

https://www.youtube.com/watch?v=5CJgWjhW2CE

When the animals found they couldn’t reach the treat, they shifted their gaze between the box and their human experimenters, much like dogs do when they need help. The ruminants also seemed to understand whether communication was viable: they looked towards a person facing them more often, and for longer periods of time, than towards an experimenter facing away from them.

Average (a) gaze latencies, (b) gaze durations, (c) gaze frequencies, (d) latencies until first gaze alternation, and (e) frequencies of gaze alternations towards either Experimenter 1 or Experimenter 2. Dark grey bars indicate the experimenter facing the goat, light grey bars the one facing away. Asterisks indicate significant differences between groups.
Image provided by authors.

“Goats gaze at humans in the same way as dogs do when asking for a treat that is out of reach, for example. Our results provide strong evidence for complex communication directed at humans in a species that was domesticated primarily for agricultural production, and show similarities with animals bred to become pets or working animals, such as dogs and horses,” said first author Dr. Christian Nawroth.

In dogs, domestication came with a decrease in foraging skills and social complexity, but their brains adapted to perceive information from humans. This makes sense for dogs, as they are bred to be companion animals, but not so much for goats, which have always been bred almost exclusively for agricultural purposes. The findings of this study thus suggest that domestication has more far-reaching implications for animals’ psychology than previously believed.

“From our earlier research, we already know that goats are smarter than their reputation suggests, but these results show how they can communicate and interact with their human handlers even though they were not domesticated as pets or working animals.”

The researchers hope their findings will help farmers better understand their animals and lead to a general improvement in animal welfare.

The full paper, titled “Goats display audience-dependent human-directed gazing behaviour in a problem-solving task”, has been published online in the journal Biology Letters.

Shrimps communicate using a secret, polarized light language

A University of Queensland study of mantis shrimp has discovered a new form of light communication employed by the animals, with potential applications in satellite remote sensing, biomedical imaging, cancer detection, and computer data storage.

Dr Yakir Gagnon, Professor Justin Marshall and their colleagues at the Queensland Brain Institute previously found that mantis shrimp (Gonodactylaceus falcatus) can sense and reflect circularly polarized light, an ability extremely rare in nature. Until now, however, no one knew what they used it for. The new study follows up on that research and shows how the shrimp use circular polarization to covertly signal their presence to aggressive competitors.

This is my rock!
Image via Wikimedia

“In birds, colour is what we’re familiar with and in the ocean, reef fish display with colour – this is a form of communication we understand. What we’re now discovering is there’s a completely new language of communication,” said Professor Marshall.

Where linearly polarized light oscillates in a single plane, circularly polarized light spirals clockwise or anticlockwise as it travels. The human eye can’t see polarized light, but special lenses — often found in sunglasses — make its effects visible. It’s also invisible to most other animals, and the shrimp use this to their advantage.
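For the curious, this difference has a tidy textbook description (standard optics, nothing specific to the shrimp study): circularly polarized light is simply two perpendicular, linearly polarized waves a quarter-cycle out of phase,

$$\mathbf{E}(z,t) = E_0\left[\hat{\mathbf{x}}\cos(kz - \omega t) \pm \hat{\mathbf{y}}\sin(kz - \omega t)\right],$$

where the choice of sign gives the clockwise or anticlockwise sense of rotation: the tip of the electric field vector traces a spiral along the direction of travel.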

“We’ve determined that a mantis shrimp displays circular polarised patterns on its body, particularly on its legs, head and heavily armoured tail,” said Professor Marshall. “These are the regions most visible when it curls up during conflict.”

“These shrimps live in holes in the reef,” he added. “They like to hide away; they’re secretive and don’t like to be in the open.”

They are also “very violent”, Professor Marshall goes on to explain:

“They’re nasty animals. They’re called mantis shrimps because they have a pair of legs at the front used to catch their prey, but 40 times faster than the praying mantis. They can pack a punch like a .22 calibre bullet and can break aquarium glass. Other mantis shrimp know this and are very cautious on the reef.”

And this aggression is what the team used to test the animals. For the study, the researchers put mantis shrimp in a water tank and gave them two burrows to choose from for shelter: one reflecting unpolarized light and the other circularly polarized light. The shrimps made a beeline for the unpolarized burrow in 68% of tests, suggesting they viewed the other hiding spot as already occupied.
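As a rough sanity check on that 68% figure, a simple binomial test shows how such a preference is judged against pure chance. The article doesn’t report the actual number of trials, so the trial count below is a purely hypothetical stand-in for illustration:

```python
# Hypothetical significance check: did shrimp avoid the "occupied" burrow
# more often than a 50/50 coin flip would predict? The real trial count
# isn't given in this article, so n_trials is made up for illustration.
from scipy.stats import binomtest

n_trials = 50                            # hypothetical number of tests
n_unpolarized = round(0.68 * n_trials)   # 68% chose the unpolarized burrow

# Null hypothesis: the shrimp pick either burrow at random (p = 0.5 each).
result = binomtest(n_unpolarized, n_trials, p=0.5, alternative="greater")
print(f"{n_unpolarized}/{n_trials} unpolarized picks, p = {result.pvalue:.4f}")
```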

“If you essentially label holes with circular polarising light, by shining circular polarising light out of them, shrimps won’t go near it,” said Professor Marshall. “They know – or they think they know – there’s another shrimp there.”

Cameras equipped with circular polarizing sensors, similar to the shrimp’s sensory organs, may detect cancer cells long before the human eye can see them.

“Cancerous cells do not reflect polarised light, in particular circular polarising light, in the same way as healthy cells,” he added.

But they’re not the only ones that see it

Professor Marshall also published a second study in the same issue of the journal, showing that linear polarized light is used as a form of communication by fiddler crabs, Uca stenodactylus. The crabs live on mudflats, a highly reflective environment, and use the amount of polarisation reflected by objects to navigate through and react to their surroundings, the researchers found.

“It appears that fiddler crabs have evolved inbuilt sunglasses, in the same way as we use polarising sunglasses to reduce glare,” Professor Marshall said.

Fiddler crab.
Image via Wikimedia

Fiddler crabs react to objects on the ground based on how much polarized light they reflect, either advancing in a mating stance or retreating back into their burrows, at varying speeds.

“These animals are dealing in a currency of polarisation that is completely invisible to humans,” Professor Marshall said. “It’s all part of this new story on the language of polarisation.”

Both the mantis shrimp study and the fiddler crab study are available online in the journal Current Biology.

Bonobos use flexible “baby communication”

Researchers have found that, just like human babies, bonobos exhibit a type of communication in which they use the same sound with different intonations to say different things. They use these high-pitched “peeps” to express their emotions.

Bonobo (Pan paniscus) mother and infant at Lola ya Bonobo

Formerly called pygmy chimpanzees, bonobos are endangered great apes found only south of the Congo River and north of the Kasai River, in the humid forests of the Democratic Republic of the Congo. Like all great apes, they exhibit some similarities to humans: they can pass the mirror recognition test, they communicate through vocalizations, and they can also communicate using lexigrams (geometric symbols). Now, a new study has found that they use the same call to mean different things in different situations, with other bonobos having to take context into account to determine the meaning – something previously observed only in humans, especially babies.

Lead author Zanna Clay was studying these endangered apes in their native Congo when she started noticing their peeps – she and her colleagues quickly understood that the same sounds were being used in different circumstances. Dr. Clay, a biologist at the Université de Neuchâtel, was also surprised by the frequency of these calls.

“Bonobos peep in just about every context you can imagine,” Clay says in an interview. “They peep when they’re traveling, feeding, grooming, resting, nest building, playing, aggression, alarm – you name it. This is striking because bonobos also produce many other calls which are much more fixed in their apparent use and function.”

Together with colleagues from the University of Birmingham, Clay started to look at these calls in more detail. Their complexity quickly became apparent.

“It became apparent that because we couldn’t always differentiate between peeps, we needed to understand the context to get to the root of their communication,” she said.

Other animals use fixed calls – individual vocalizations specific to a certain situation. Using function-flexible calls, however, can make for much more complex communication.

“In humans, protophones are the building blocks of speech, in that they vary in function across different emotional states and contexts,” Clay says. “This contrasts with fixed calls in human babies, such as laughter and crying, which resemble typical primate calls. The peep seems to be a rather exceptional call in the bonobo repertoire in its degree of flexible usage.”

This could represent an important evolutionary transition: bonobos are shifting towards flexible calls and more elaborate vocal communication. Given enough time, their calls might develop into something more similar to human speech.

“Although humans are unique in terms of our amazing speech and language capacities,” Clay says, “the foundations underlying these abilities appear to be already present in the last common ancestor we share with great apes. The findings suggest the existence of an intermediate stage between fixed vocal signalling seen in most primate calls and fully fledged flexible signalling in humans.”

According to previous research, the foundations of this ability date back some 6-10 million years, to the last common ancestor we share with great apes. It appears that features like this one are deeply rooted in the primate lineage.

Basic emotions in music are universally recognised

Nietzsche once said that “Without music, life would be an error”, and most of us would agree: few people can go an entire day without listening to their favorite tracks, and just a few tunes are enough to change our mood in a second. But the story doesn’t end there. It seems you don’t have to be familiar with a song at all to understand its emotional message: native African people who had never listened to Western music before in their lives (yes, such people do exist) were able to say correctly whether happiness, fear or sadness was the main emotion of a song.

This discovery could help explain why this type of music has become so successful all over the world, even in parts of the globe where less emphasis is put on transmitting a certain emotion and more importance is given to elements such as group coordination during rituals.

Similar studies had been conducted before, with Westerners asked to listen to Hindustani music for the first time in their lives and then answer some questions. The main goal of these researchers, however, was to find people who had never listened to a piece of Western music at all.

To achieve this, researcher Thomas Fritz traveled to the extreme north of the Mandara mountain range to find members of the Mafa, one of about 250 ethnic groups in Cameroon. “Armed” with a laptop and a solar collector, he made a surprising discovery: even though they were totally unfamiliar with Western music, the Mafa listeners were able to say whether a song conveyed sadness, fear or happiness more often than would be expected by chance alone. Their performance was variable, however, with 2 of the 21 participants performing at chance level.

Westerners had previously been given the same task, and it seems both groups relied mostly on temporal cues and mode to make their decisions, although Westerners did so to a larger extent.

Moreover, when the researchers manipulated the recordings, the Mafa listeners preferred the original versions, probably because of the increased sensory dissonance of the altered tunes.

The results of the study show that Western music is indeed something of a universal language: its three main emotions could be identified as readily as facial expressions or the emotional prosody of speech, which refers to its rhythm, stress and intonation.

Source: Cell Press

Robofish work together


Kristi Morgansen, an aeronautics and astronautics engineer at the University of Washington, presented the results of these amazing robofish. The new robots, which feature tails and fins, passed their tests with flying colours. What makes them so amazing? Well, unlike most robots, which receive instructions from a scientist or a satellite, the Robofish (as they are called) rely solely on one another, working as a team by communicating wirelessly only with each other.

The fish are about two feet long and wiggle through the water using their fish-like tails and fins. The fins are also a step forward: the scientists claim they are far better than the usual propellers, producing less drag and noise and allowing the fish better control when turning.

Here’s what Morgansen has to say about how the fish “talk” to each other:
“One of them will send a message, and the rest of them know it’s not their turn to talk and so they are listening. There’s a time during which they know there is a signal coming. If they receive it, they use it; if they don’t, they keep following what they were doing.”
“If you have some sort of event going on like an underwater eruption, you’re not going to be able to get one vehicle to a bunch of places quickly and so the more underwater researchers, the better, as long as they don’t all flock to the same location.”
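Morgansen’s description amounts to a time-slotted, round-robin broadcast schedule with graceful handling of lost messages. Here is a minimal sketch of that idea in Python; the class, the heading-averaging rule, and the 20% loss rate are all hypothetical illustrations of ours, not the actual Robofish software:

```python
# Time-slotted turn-taking among robofish: one talker per slot, everyone
# else listens, and a lost message simply leaves the old plan in place.
import random

NUM_FISH = 3
LOSS_RATE = 0.2  # underwater acoustic links drop messages often (made-up rate)

class Fish:
    def __init__(self, fish_id):
        self.fish_id = fish_id
        self.heading = random.uniform(0, 360)  # current plan: travel direction
        self.peer_headings = {}                # last heading heard from peers

    def broadcast(self):
        # "One of them will send a message..."
        return {"sender": self.fish_id, "heading": self.heading}

    def receive(self, msg):
        # "If they receive it, they use it": fold it into the shared picture
        # and steer toward the group average (naively ignoring angle wraparound).
        self.peer_headings[msg["sender"]] = msg["heading"]
        known = [self.heading, *self.peer_headings.values()]
        self.heading = sum(known) / len(known)

school = [Fish(i) for i in range(NUM_FISH)]

for slot in range(9):
    # Round-robin schedule: everyone knows whose turn it is, so the rest listen.
    speaker = school[slot % NUM_FISH]
    msg = speaker.broadcast()
    for fish in school:
        if fish is not speaker and random.random() > LOSS_RATE:
            fish.receive(msg)
        # "...if they don't, they keep following what they were doing."

print([round(f.heading, 1) for f in school])  # headings drift toward consensus
```

The key property is the one Morgansen highlights: because every fish knows the schedule, a dropped message degrades the shared picture gracefully instead of stalling the school.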