Tag Archives: face

People find AI-generated faces to be more trustworthy than real faces — and it could be a problem

Not only are people unable to distinguish between real faces and AI-generated faces, but they also seem to trust AI-generated faces more. The findings from a relatively small study suggest that nefarious actors could be using AI to generate artificial faces to trick people.

The most (top row) and least (bottom row) accurately classified real (R) and synthetic (S) faces. Credit: DOI: 10.1073/pnas.2120481119

Worse than a coin flip

In recent years, artificial intelligence has come a long way. It's no longer used just to analyze data; it can also create text, images, and even video. A particularly intriguing application is the creation of human faces.

In the past couple of years, algorithms have become strikingly good at creating human faces. This could be useful on one hand — it enables low-budget companies to produce ads, for instance, essentially democratizing access to valuable resources. But at the same time, AI-synthesized faces can be used for disinformation, fraud, propaganda, and even revenge pornography.

Human brains are generally pretty good at telling apart real from fake, but in this area, AIs are winning the race. In a new study, Dr. Sophie Nightingale from Lancaster University and Professor Hany Farid from the University of California, Berkeley, conducted experiments to test whether participants could distinguish state-of-the-art AI-synthesized faces from real faces, and what level of trust those faces evoked.

 “Our evaluation of the photo realism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces,” the researchers note.

The researchers designed three experiments, recruiting volunteers from the Mechanical Turk platform. In the first one, 315 participants classified 128 faces taken from a set of 800 (either real or synthesized). Their accuracy was 48% — worse than a coin flip.

Representative faces used in the study. Could you tell apart the real from the synthetic faces? Participants in the study couldn't. Image credits: DOI: 10.1073/pnas.2120481119.

More trustworthy

In the second experiment, 219 new participants were trained on how to analyze and give feedback on faces. They were then asked to classify and rate 128 faces, again from a set of 800. Their accuracy increased thanks to the training, but only to 59%.

Meanwhile, in the third experiment, 223 participants were asked to rate the trustworthiness of 128 faces (from the set of 800) on a scale from 1 to 7. Surprisingly, synthetic faces were ranked 7.7% more trustworthy.

“Faces provide a rich source of information, with exposure of just milliseconds sufficient to make implicit inferences about individual traits such as trustworthiness. We wondered if synthetic faces activate the same judgements of trustworthiness. If not, then a perception of trustworthiness could help distinguish real from synthetic faces.”

“Perhaps most interestingly, we find that synthetically-generated faces are more trustworthy than real faces.”

There were also some interesting takeaways from the analysis. For instance, women were rated as significantly more trustworthy than men, and smiling faces were also more trustworthy. Black faces were rated as more trustworthy than South Asian, but otherwise, race seemed to not affect trustworthiness.

“A smiling face is more likely to be rated as trustworthy, but 65.5% of the real faces and 58.8% of synthetic faces are smiling, so facial expression alone cannot explain why synthetic faces are rated as more trustworthy,” the study notes.

The researchers offer a potential explanation as to why synthetic faces could be seen as more trustworthy: they tend to resemble average faces, and previous research has suggested that average faces tend to be considered more trustworthy.

Although the sample size is fairly small and the findings need to be replicated on a larger scale, the results are pretty concerning, especially considering how fast the technology has been progressing. The researchers say that if we want to protect the public from "deep fakes," there should be guidelines on how synthesized images are created and distributed.

“Safeguards could include, for example, incorporating robust watermarks into the image- and video-synthesis networks that would provide a downstream mechanism for reliable identification. Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often-laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.

“At this pivotal moment, and as other scientific and engineering fields have done, we encourage the graphics and vision community to develop guidelines for the creation and distribution of synthetic-media technologies that incorporate ethical guidelines for researchers, publishers, and media distributors.”

The study was published in PNAS.

Obscuring the bottom half of our faces makes it harder for our brains to notice and mirror certain emotions

Although they keep us safe and were invaluable during the pandemic, masks may nevertheless influence how we socially interact with one another, new research suggests.

Image credits Marcos Cola.

Obstructing the bottom halves of our face can impact others’ ability to understand and empathize with some of our emotions or pick up on certain social cues, a new study reveals. Although not all emotions are affected by this — those that are primarily conveyed through the eyes are a notable exception — the authors explain that it is still important to understand these effects on our collective social interactions.

Something hidden

“Our study suggests that when the movements of the lower part of the face are disrupted or hidden, this can be problematic, particularly for positive social interactions and the ability to share emotions,” explains lead author Dr. Ross Vanderwert, from Cardiff University’s School of Psychology. “People tend to automatically imitate others’ facial expressions of emotion when looking at them, whether that be a smile, a frown, or a smirk. This facial mimicry — where the brain recreates and mirrors the emotional experience of the other person — affects how we empathize with others and interact socially.”

“Wearing a face mask continues to be vital to protect ourselves and others during the COVID-19 pandemic, but our research suggests this may have important implications for the way we communicate and interact.”

For the study, the team recorded the brain activity levels of 38 individuals using electroencephalography while they watched video recordings of people showing fearful, happy, or angry expressions. A collection of video footage of inanimate, everyday objects was used as a control. Participants were asked to watch half of these videos while holding a pen between their teeth, and the other half without the pen.

The aim of the study was to analyze what effect face masks have on neural mirroring. This is a process that our brain undergoes automatically in reaction to actions observed in another person. It is meant to help us better coordinate with others during simple tasks, and to facilitate social bonding by giving us insight into the emotions of those around us.

According to the findings, participants who could move their face freely (i.e. when they were not holding the pen between their teeth) showed significantly higher levels of neural mirroring when observing emotional expressions. They showed no mirroring when viewing everyday objects, as was to be expected.

However, when they were holding the pen between their teeth, they showed no mirroring for either happy or angry expressions, though they still did when looking at fearful ones.

“For emotions that are more heavily expressed by the eyes, for example fear, blocking the information provided by the mouth doesn’t seem to affect our brain’s response to those emotions. But for expressions that depend on the mouth, like a friendly smile, the blocking had more of an effect,” said second author Dr. Magdalena Rychlowska, from Queen’s University Belfast’s School of Psychology.

“Our findings suggest that processing faces is a very challenging task and that the brain may need more support from, and rely more heavily on, our own faces to support the visual system for understanding others’ emotions. This mirroring or simulation of another person’s emotions may enable empathy; however, up until now the neural mechanisms that underline this kind of emotion communication have been unclear.”

The findings don't dramatically change anything in our lives, but on a personal level they help us understand how certain factors shape the way we interact with others, and how others interact with us. Knowing exactly what that effect is also lets us do our best to counteract it or, alternatively, find ways to turn it to our advantage. The authors note that face masks can produce this effect as well, since they obscure the bottom half of our faces.

Beyond those direct implications, the study also helps us better understand some of the nuances of human interaction, the automated mechanisms in our brains that make us human.

The paper “Altering Facial Movements Abolishes Neural Mirroring of Facial Expressions” has been published in the journal Cognitive, Affective, & Behavioral Neuroscience.

Hamsters confirm — face masks work against the coronavirus

New research in Hong Kong re-confirms that the use of face masks can stop the spread of COVID-19 — even for hamsters.

Image via Pixabay.

Everyone is understandably anxious to get out of the house and resume normal life. But the coronavirus hasn’t left, not at all, and resuming normal life means we have to take precautions. The wide-scale use of face masks is the simplest and most effective step we can take towards ensuring public health. And hamsters are helping prove its worth.

Safely masked

“It’s very clear that the effect of masking the infected, especially when they are asymptomatic — or symptomatic — it’s much more important than anything else,” Yuen Kwok-yung, the University of Hong Kong microbiologist who led the research, told reporters Sunday.

“It also explained why universal masking is important because we now have known that a large number of those infected have no symptoms.”

The team claims that their research (not yet published) is the first to test whether masks specifically can stop COVID-19, both symptomatic and asymptomatic, from infecting other individuals.

The authors infected a group of hamsters with the virus and placed them in a container. In an adjoining container connected to the first, they placed healthy hamsters, creating an opportunity for infection. A fan was used to blow air from the infected animals' container into the neighboring one.

Then, they placed a surgical face mask in the space connecting these two in order to filter all airflow between them.

According to the results, two-thirds (10 out of 15) of the healthy hamsters were infected within a week when no mask was set in place (and without any direct physical contact between the healthy and infected animals). However, after the masks were installed, transmission rates went down by as much as 75%.

The findings have been detailed on the Hong Kong Today show and in the South China Morning Post. According to the SCMP, “only two of 12 subjects in the adjoining cage” (16.7%) tested positive for the coronavirus when masks were placed on the infected hamster’s box. When the masks were applied only to the cage with healthy hamsters, 4 out of 12 (33%) became infected.
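
For readers keeping score, the quoted percentages follow directly from those raw counts. A quick back-of-the-envelope check (plain arithmetic on the reported numbers, not the study's own statistical analysis):

```python
# Back-of-the-envelope check of the reported figures (counts taken from the
# article; this is plain arithmetic, not the study's statistical analysis).
no_mask      = 10 / 15   # ~66.7% infected with no mask barrier
mask_on_sick = 2 / 12    # ~16.7% infected, mask on the infected hamsters' cage
mask_on_well = 4 / 12    # ~33.3% infected, mask on the healthy hamsters' cage

def relative_reduction(baseline, with_mask):
    """Relative drop in the infection rate versus the no-mask baseline."""
    return (baseline - with_mask) / baseline

print(f"{relative_reduction(no_mask, mask_on_sick):.0%}")  # ~75%
print(f"{relative_reduction(no_mask, mask_on_well):.0%}")  # ~50%
```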

“Transmission can be reduced by 50% when surgical masks are used, especially when masks are worn by infected individuals,” Professor Yuen explained for SCMP.

Furthermore, hamsters that did become infected during the masked experiments showed lower levels of the virus within their body than those infected without a mask.

Sew face masks out of cotton and chiffon or natural silk to protect against COVID-19

A new study from the University of Chicago reports that a multi-layered mask made from cotton fabric and chiffon or natural silk can be just as effective as N95 masks against the coronavirus.

Image credits Alexandra Gerea.

There just aren’t enough masks to go around, and those that we do have should be earmarked for healthcare workers. How, then, are we to keep ourselves safe in the great (and pandemic) outdoors? Well, according to one new study, we should do like our forefathers before us — and sew!

The authors analyzed the filtration properties of fabrics against aerosols (the main method of transmission for the SARS-CoV-2 coronavirus) and reported on the types of materials to use in order to create an effective mask.

Cotton and chiffon

Although the U.S. Centers for Disease Control and Prevention recommends the use of face masks whenever going outside, the reality on the ground is that such equipment is often in short supply. Surgical masks are somewhat easier to come by, but they are much less effective than filtering masks such as the N95 model (although they’re still useful).

The real problem is that every mask we use is one that's no longer available for the healthcare sector, and the medical personnel fighting the disease need such masks to be able to keep doing their jobs. So people have started making their own, which is awesome. Researchers are now pitching in, too, telling us the best way, and the best materials, to use when making our own masks.

Coronavirus is spread through saliva droplets that form aerosols when we breathe, talk, or cough. The heavier droplets fall to the floor, but the lighter ones remain in suspension around us and can travel (and infect) up to 4 meters away.

The team, led by Molecular Engineering Professor Supratik Guha, used an aerosol mixing chamber to produce particles ranging from 10 nm to 6 μm in diameter, roughly the same size range as coronavirus-carrying aerosols. A fan was used to force them through various textile samples (the fan was set to generate airflow comparable to a person's respiration at rest), and the team compared particle levels in the air before and after passing through the material. The study was carried out at the U.S. Department of Energy's Center for Nanoscale Materials user facility at Argonne National Laboratory, with funding from the U.S. Department of Defense's Vannevar Bush Fellowship.
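
The figure of merit in this kind of test is the filtration efficiency, i.e. the fraction of particles the fabric removes, computed from the before/after concentrations. Here is a minimal sketch of how such an efficiency is typically computed, with made-up counts rather than the paper's data:

```python
import numpy as np

def filtration_efficiency(upstream_counts, downstream_counts):
    """Fraction of particles removed by the fabric: 1 - (downstream / upstream).

    The arguments are particle concentrations measured before and after the air
    passes through the fabric sample, e.g. one value per particle-size bin.
    """
    upstream = np.asarray(upstream_counts, dtype=float)
    downstream = np.asarray(downstream_counts, dtype=float)
    return 1.0 - downstream / upstream

# Hypothetical counts for three size bins (illustration only, not the study's data):
print(filtration_efficiency([1000, 800, 600], [150, 40, 6]))  # -> [0.85 0.95 0.99]
```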

Their results show that one layer of "tightly-woven" cotton combined with two layers of polyester-spandex chiffon (a type of sheer fabric most commonly seen in evening gowns) can filter out between 80% and 99% of all aerosol particles in a sample, depending on their size. Such performance, they add, is close to that of an N95 respirator mask.

The chiffon can be swapped for natural silk or flannel without losing filtering ability, or the whole thing can be replaced with a cotton quilt with cotton-polyester batting. The combination of two materials is important, however. The team explains that the cotton creates a physical barrier to incoming aerosol particles, while materials such as chiffon and natural silk can become charged, and serve as an electrostatic barrier.

Another thing to keep in mind is that it's essential for such masks to be perfectly fitted. Even the slightest gap between the mask's edges and the wearer's skin can reduce its filtering efficiency by 60%.

The paper “Aerosol Filtration Efficiency of Common Fabrics Used in Respiratory Cloth Masks” has been published in the journal ACS Nano.


Tilting your head down will make you seem more dominant — but also more aggressive

A new study shows that the position of our head can change how others perceive us.

Man portrait.

Image via Pexels.

Facial cues — how narrowed or widened someone’s eyes are, whether their mouth is turned up or down — can provide a wealth of information regarding the emotional state of those we’re interacting with. But they aren’t the only features we look to for this purpose. New research shows we also look to the tilt of the head.

Getting a heading

“These findings suggest that ‘neutral’ faces can still be quite communicative,” explain researchers Zachary Witkower and Jessica Tracy of the University of British Columbia, the study’s authors.

“Subtle shifts of the head can have profound effects on social perception, partly because they can have large effects on the appearance of the face.”

The way that facial muscle movements (i.e. facial expressions) correlate with social impressions has been well studied, the team explains, but the role of head movements in the same context is poorly understood. So the duo designed a series of experiments to see whether different head positions influence how we're perceived socially, even when facial features remain neutral.

They worked with 101 participants in an online trial. Each participant was shown an avatar with a neutral facial expression in one of three head positions: tilted upward 10 degrees, neutral (0 degrees), or tilted downward 10 degrees. They then had to judge how dominant each of the avatar images appeared to be using statements including “This person would enjoy having control over others,” and “This person would be willing to use aggressive tactics to get their way.” Overall, the participants rated avatars with a downward head tilt as being more dominant than the rest.

During a second online trial, 570 participants were put through a largely similar task, with the only difference being that they were shown pictures of actual people, not of computer-generated avatars. The results were consistent with those of the first trial.

The team reports that the area around the eyes and eyebrows is both necessary and sufficient to produce this perception of dominance. Participants consistently rated heads that were angled downwards as more dominant, even when they could only see the eyes and eyebrows. However, this effect didn't persist when the eyes and eyebrows were obscured and the rest of the face remained visible. Further experimentation showed that it is the apparent angle of the eyebrows that generates the effect.

“Tilting one’s head downward systematically changes the way the face is perceived, such that a neutral face — a face with no muscle movement or facial expression — appears to be more dominant when the head is tilted down,” the paper reads.

“This effect is caused by the fact that tilting one’s head downward leads to the artificial appearance of lowered and V-shaped eyebrows — which in turn elicit perceptions of aggression, intimidation, and dominance.”

Even in cases where the eyebrows didn't move from their neutral position, tilting the head downwards caused them to take on more of a V-like shape, and such faces were consistently rated as more dominant by participants.

“Head tilt is thus an ‘action unit imposter’ in that it creates the illusory appearance of a facial muscle movement where none in fact exists.”

“People often display certain movements or expressions during their everyday interactions, such as a friendly smile or wave, as a way to communicate information. Our research suggests that we may also want to consider how we hold our heads during these interactions, as subtle head movements can dramatically change the meaning of otherwise innocuous facial expressions.”

The paper “A Facial-Action Imposter: How Head Tilt Influences Perceptions of Dominance From a Neutral Face” has been published in the journal Psychological Science.

Look at all these faces. None of them are real — they were created by an AI

All these hyper-realistic faces were generated using NVidia’s new algorithm and it’s awesome — and a bit scary.

Credits: NVidia.

Women, children, different skin tones and complexions — it doesn't matter: NVidia's algorithm generates them all equally well. The algorithm separates coarse features (such as pose and identity) from finer details, producing faces in different positions and lighting. It can even throw in some random details like blemishes or freckles.

To better illustrate this ability, the computer scientists behind the system show the same face generated with different amounts of noise. The results are truly impressive.

Effect of noise inputs at different layers of the generator: (a) noise applied to all layers; (b) no noise; (c) noise in fine layers only; (d) noise in coarse layers only. Artificially omitting the noise leads to a featureless, "painterly" look. Coarse noise causes large-scale curling of hair and the appearance of larger background features, while fine noise brings out the finer curls of hair, finer background detail, and skin pores.

“We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature,” a paper published on arXiv reads. “The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis.”

The architecture borrows ideas from neural style transfer, a technique that has been used before to generate synthetic images — think of those algorithms that let you transform a photo into a particular style. Imagine a landscape image as if it were painted by Van Gogh, for instance. Neural style transfer typically works with a content image and a style reference image, blending the content of one with the look of the other. In this case, NVidia taught its generative adversarial network (GAN) to generate a number of 'styles': faces with glasses, different hairstyles, different ages, etc. As far as we can tell, there's no particular weak point in the algorithm's outputs.
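
To give a rough sense of what "noise at different layers" means in practice, here is a minimal, simplified sketch of per-layer noise injection in a generator block, written in Python with PyTorch. This is only an illustration of the general idea, not NVidia's actual implementation (which also includes style modulation and several other components), and the class names are our own:

```python
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    """Adds per-pixel Gaussian noise, scaled by a learned per-channel weight."""

    def __init__(self, channels):
        super().__init__()
        # Start at zero so the network learns how much noise each channel wants.
        self.weight = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        noise = torch.randn(x.size(0), 1, x.size(2), x.size(3), device=x.device)
        return x + self.weight * noise

class GeneratorBlock(nn.Module):
    """One synthesis block: upsample, convolve, inject noise, activate.

    Stacking these blocks from low to high resolution means the early (coarse)
    blocks can only nudge large-scale structure, while the late (fine) blocks
    perturb details such as hair strands and skin texture.
    """

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.noise = NoiseInjection(out_ch)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        return self.act(self.noise(self.conv(self.upsample(x))))

if __name__ == "__main__":
    # Smoke test: push a random 4x4 feature map through one block.
    block = GeneratorBlock(in_ch=512, out_ch=256)
    x = torch.randn(1, 512, 4, 4)
    print(block(x).shape)  # torch.Size([1, 256, 8, 8])
```

Because the noise is added independently at every resolution, low-resolution blocks can only nudge large-scale structure, while high-resolution blocks perturb fine detail, which is the separation illustrated in the figure caption above.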

However, the network also tried generating cat faces — and while results are still impressive they’re not as good. Some are indeed indistinguishable from the real thing, but a few are quite bizarre. Can you spot them?

Image credits: NVidia.

“CATS continues to be a difficult dataset due to the high intrinsic variation in poses, zoom levels, and backgrounds,” the team explain.

However, they did much better on different types of datasets — with cars and bedrooms.

AI-generated cars and bedrooms. Image credits: NVidia.

So what does this mean? Well, for starters, we might never be able to trust anything we see on the internet. It’s a remarkable achievement for NVidia’s engineers and for AI progress in general, but it’s hard to envision all the ways in which this will be used. It’s amazingly realistic — maybe even too realistic.


Anthropologists recreate the face of a 9,000-year-old teenager

Based on a skull found in a Greek cave, researchers have reconstructed what an ancient teenager might have looked like. This sheds new light on how our features evolved and softened across the millennia.

Facial features have greatly smoothed out in recent times. Image credits: Oscar Nilsson.

Scientists called her Avgi, which translates to Dawn in English. Some 9,000 years ago, Avgi must have had a really bad day. Not much is known about her life or what brought about her demise, and no one has seen her face since. Yet, through careful analysis and modern technology, we are now able to see her facial features once again — her prominent cheekbones, dimpled chin, and heavy brow speak of a period much unlike our own.

The skull was found in 1993 at Theopetra cave, a site in central Greece which has been occupied continuously for some 130,000 years. The cave has been intensely studied by archaeologists and anthropologists and has yielded many important insights about the lifestyles of ancient populations.

Reconstructing Avgi’s face was a painstaking process. The team which analyzed her skull included an endocrinologist, orthopedist, neurologist, pathologist, and radiologist, working under the guidance of orthodontist Manolis Papagrigorakis, who recently unveiled the reconstructed face at the Acropolis Museum in Athens. Together, they worked with Oscar Nilsson, a Swedish archaeologist and sculptor who specializes in recreating the features of ancient people.

Long time no see

The process started with a CT scan of the skull and ended with a 3D printer recreating Avgi's features. In between, the researchers closely examined the skull for any clues it might offer — specifically, they looked at cues that indicate the thickness of the flesh at certain points. Bit by bit, muscle after muscle was added. Then, the remaining features (such as eye color and skin complexion) were inferred from general population traits in the area.

The skull itself revealed a few surprises. Avgi's bones appeared to belong to a 15-year-old woman, but her teeth told a different story, indicating that she was 18, “give or take a year,” said Papagrigorakis.

It’s not the first time Papagrigorakis and Nilsson have teamed up to bring ancient faces back to life. In 2010, they recreated the face of an ancient, 11-year-old Athenian girl named Myrtis. Unlike Avgi, Myrtis had features which are much more familiar to us today.

The 11-year-old Myrtis, who lived in 5th century BC Athens. Image credits: Oscar Nilsson.

“Avgi has very unique, not especially female, skull, and features. Myrtis, still a child, does not differ at all in the features we find around us today,” says Nilsson. “Having reconstructed a lot of Stone Age women and men, I think some facial features seem to have disappeared or ‘smoothed out’ with time. In general, we look less masculine, both men and women, today.”

Avgi lived at an important time in human evolution — the dawn of human society, when people were just starting to grow their own food and settle down permanently. The transition to agriculture, called the Neolithic Revolution, has taken place independently many times and in many different places. In today’s Greece, it took place somewhere between 10,000 and 8,000 years ago, right as Avgi was going on with her own (unfortunately short) life.

It’s not clear exactly what killed Avgi. No obvious trauma is visible, and researchers aren’t quite sure what happened to her. Myrtis, on the other hand, was killed by a typhoid epidemic that ravaged Ancient Athens. To this day, the disease claims over 200,000 lives every year, according to the World Health Organization.

As both scanning and 3D printing technology advances, we can expect more and more detailed models to emerge. For the first time in history, we might get the chance to not only know how these people lived but also what they looked like.


Contrasting facial features make you seem younger, no matter where you’re from

Facial contrast, a measure of how much facial features stand out in the face, could be one of the most important elements we look for when trying to decide someone’s age, a new paper reports. The research shows that observers, regardless of their ethnic background, perceive women with increased facial contrast as being younger.

Wire mesh face.

Image via Pixabay.

Age plays a big role in how others see us. A youthful appearance is a sign of health and typically considered more attractive — so it’s no surprise that people often try to look a few years younger than they actually are. It’s a view that spans across cultural boundaries. Certain characteristics, such as wrinkles, for example, are viewed as a sign of aging in many ethnicities — but there are many cues our brains use to gauge someone’s age that we don’t yet know of.

Age at face value

A new paper authored by French and American researchers details the discovery of one new such cue, namely facial contrast.

“Facial contrast refers to how much the eyes, lips and eyebrows stand out in the face in terms of how light or dark they are or how colorful they are,” says Aurélie Porcheron, lead author of the paper.

Previous work has revealed that observers perceive female actors with increased facial contrast as healthier, more youthful, and more feminine compared to their unaltered pictures. However, research into facial contrast has so far been rooted in small-scale studies, mostly focusing on Caucasian faces or observers, making it difficult to expand on the findings. After all, the effect could come down to cultural or societal factors, i.e. a certain culture’s standards for what ‘old’ or ‘attractive’ look like in women.

Porcheron and her team suspected that this isn't the case. They believed the link between facial contrast and apparent age would hold across cultures and ethnicities, since although different peoples have different skin colors, age-related changes in skin color tend to be similar for everyone. To test their hypothesis, the team used images of women of different ethnicities, including Chinese, Latin American, South African, and French Caucasian women; only pictures of women were used to avoid differences caused by gender. The women were aged from 20 to 80, and the researchers analyzed their facial images using computer software to measure various facial contrast parameters.
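
The article doesn't spell out the exact metric, but a common way to quantify the luminance component of facial contrast is a Michelson-style ratio between a feature and the surrounding skin. The snippet below is a simplified, hypothetical illustration of that idea in Python, not the authors' actual pipeline (which also measures contrast in color channels):

```python
def michelson_contrast(feature_lum, skin_lum):
    """Contrast between a facial feature and the surrounding skin,
    from the mean luminance of each region (values in [0, 1])."""
    return (skin_lum - feature_lum) / (skin_lum + feature_lum)

# Hypothetical mean-luminance values for one face (illustration only):
lips_contrast = michelson_contrast(feature_lum=0.35, skin_lum=0.60)
brow_contrast = michelson_contrast(feature_lum=0.20, skin_lum=0.60)
print(round(lips_contrast, 2), round(brow_contrast, 2))  # 0.26 0.5
```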

The team discovered that several aspects of facial contrast decreased with age in all four groups of women, most notably contrast around the mouth and eyebrows. This indicates that at least some aspects of facial contrast naturally decline with age in women from around the world. Each picture was then digitally altered to create two versions of each face, one with high contrast, the other with low contrast. Participants were asked to look at the images (each participant received pictures belonging to every ethnicity involved in the test) and rate how old the women depicted appeared to be.

Face contrast sample.

A sample of the images participants had to pick between. Image credits: Aurélie Porcheron.

Male and female participants (from France and China) were asked to look at the images and choose the younger-looking face from the two options. High-contrast faces were selected as the younger-looking ones almost 80% of the time, regardless of the cultural origin of the participant or the face. This shows that previous results weren't picking up on a cultural or ethnic element, and that higher facial contrast seems to be a universal cue for youthfulness — at least for female faces.

It’s not exactly a fountain of youth, but the findings do show that there’s an easy trick you can pull off to appear more youthful to others — make your eyebrows and lips stand out.

“People of different cultures use facial contrast as a cue for perceiving age from the face, even though they are not consciously aware of it,” Porcheron says.

“The results also suggest that people could actively modify how old they look, by altering how much their facial features stand out, for example by darkening or coloring their features.”

Next up, we’ll have to see if this effect still stands with male faces.

The paper "Facial Contrast Is a Cross-Cultural Cue for Perceiving Age" has been published in the journal Frontiers in Psychology.


Cars or watches with wider faces make consumers feel more dominant

Modern product design is focused on aesthetics and functionality, but that might not be the whole picture. Brands could attract new customers, and charge a premium to boot, if they also keep an eye on the specific consumer personality traits their products can tap into. For instance, according to a new study from the University of Kansas, consumers prefer to buy products with wider 'faces', such as cars or watches, when they want to be perceived as more dominant in certain situations.

People prefer wider faces on products if they are seeking to show dominance or would like to project importance. Credit: Journal of Consumer Research.

Our knack for faces

If there’s one thing that computers still can’t do nearly as well as humans do, it’s pattern recognition — and no pattern is more easily recognizable than the human face. We’re basically hard-wired to recognize them because the human face is packed with cues that instantly inform us about a person’s identity, age, gender, mood, attractiveness, race, and friendliness. Humans and other primates even have specialized neurons in their brains – specifically six patches in the temporal lobe — dedicated to processing and recognizing faces.

Sometimes, however, this propensity for the human countenance makes us see faces in inanimate objects such as rocks or electricity plugs. When this happens, we usually shrug it off after a couple of milliseconds of processing, realizing it's just a rock, though some people just can't get over it. For instance, in the 1970s, NASA released a low-resolution photo taken by Viking 1 showing an area on Mars called Cydonia Mensae. The light, shadows, and low-resolution orbital photography made the outcrop uncannily resemble a human face. Even to this day, some people are convinced the 'Face on Mars' is a NASA cover-up conspiracy, despite modern high-resolution images plainly showing it's just a big freaking rock.

Right-lower corner: the low-resolution ‘face on Mars’ versus a 2001 high-resolution image of the same Cydonia region. Credit: Wikimedia Commons.

Shut up and take my money

Our tendency for anthropomorphism can also be a useful commercial trait for some companies, as Ahreum Maeng, an assistant professor of marketing at the KU School of Business, recently demonstrated.

“These kinds of things are automatically going on in people’s brains,” Maeng said in a statement. “When we see those shapes resembling a human face in the product design, we can’t help but perceive it that way.”

While previous studies found that people are averse to wider human faces because these elicit a fear of being dominated, the reverse seems to be true for wider faces on products, at least in situations where the consumer wants to feel dominant.

In five experiments, participants were asked to examine photos of human faces that ranged from a low width-to-height ratio (narrow and non-dominant) to a higher such ratio (wide-jawed and dominant). The participants then examined photos of products resembling faces, such as cars and watches, which similarly had a varied width-to-height ratio.

Finally, the study's volunteers were asked to imagine different scenarios, like preparing for an encounter with either an old high school bully or a former sweetheart at the 10-year high school reunion.

When the participants felt they were in a situation that required them to assert more dominance, such as meeting the old high school bully or heading into a tough business negotiation, they were more inclined to prefer wider-faced products. When the situation involved less of a desire to be perceived as dominant, the effect weakened, and people didn't give nearly as much importance to products with a high width-to-height ratio.

“It’s probably because people view the product as part of themselves and they would think, ‘it’s my possession. I have control over it when I need it, and I can demonstrate my dominance through the product,’” Maeng said.

Maeng says that some brands might want to pay attention to her findings, especially since the consumer preference for dominant-looking products is not the same as people's preference for luxury items. In 2013, her team found a positive correlation between automobile prices and width-to-height ratio, which suggests manufacturers can charge more for products with such an appearance. This alone “can have marketplace impact — by significantly improving the company’s bottom line,” concluded Maeng, whose findings appeared in the Journal of Consumer Research.


You likely can’t recognize faces when they’re upside down — and neither can the Japanese rice fish

Japanese rice fish (Oryzias latipes). Credit: Wikimedia Commons.

Some people recognize and remember faces better than others. This ability is suddenly lost if the face is inverted or if, by some weird turn of events, you're seeing things upside-down. This brain quirk has been thoroughly documented, and until not too long ago it was thought to be solely a mammal thing. A new study performed by scientists at the University of Tokyo, however, shows that the same happens to Japanese rice fish. It follows that the face-inversion effect is likely linked to brain mechanisms shared by social animals in general, be they mammals or fish.

A social fish

Japanese rice fish (Oryzias latipes) are tiny 3.5-centimeter-long shoaling fish commonly found in slow-moving streams in East Asia. As the name implies, they’re most often encountered in rice paddies where they mingle with their peers.

This is a highly social fish known to be able to recognize individuals easily. With this in mind, the Japanese researchers thought this was an excellent opportunity to investigate whether the face-inversion effect applies to these animals too.

To make things relevant, the researchers exploited the fact that rice fish females mate faster with a male that they recognize. What they did was pair various acquainted male and female rice fish, before gradually masking either the face, body or tail of males with a semi-transparent film.

The team found that only when the male's face was covered did the female fail to recognize her partner. To work out whether the fish are able to recognize inverted faces too, the Japanese scientists simply used a prism to invert the male's face either vertically or horizontally.

To everyone's surprise, the fish were unable to recognize the inverted faces, whether flipped horizontally or vertically. And just like humans, they had no trouble recognizing inverted objects that don't resemble faces.

Face time

Humans have no problem recognizing a chair or a car that's upside down, but neither we nor the Japanese rice fish seem able to handle inverted faces. This has something to do with a region of the brain dedicated to processing faces, called the fusiform face area. This brain area stores and simplifies faces so that you make next to no effort when the time comes to identify an individual. Oddly, the fusiform face area (FFA) shows far weaker activity when faces, or objects resembling faces, are rotated 180 degrees.

We always thought the FFA's response had something to do with how social we mammals are, but witnessing a fish behave the same way comes with many implications. The findings hint that there has been an evolutionary trade-off when animals specialized their face-recognition ability. Although an upside-down face and a right-side-up face convey essentially the same information, primate brains, and the few fish brains we know of so far, have gone down a route that quickly and accurately identifies upright faces. Somewhere along this route, a decision was made to ignore information from face-like objects in an unexpected orientation.

This hypothesis makes sense. Just take a second to imagine ancient humans strolling through the dark woods in search of game. Their brains are already hardwired to mistake a twig for a snake, because a false positive is better than the risk of a false negative — it could mean the difference between life and death. At the same time, twisted roots or twigs that might form an inverted face would only confuse us; a lot of things look like faces, after all. So a trade-off had to be made.

Forensic expert creates the most accurate Jesus you’ve seen so far

Christianity is currently the world's largest religious movement, with an estimated 2.2 billion followers. And because he plays such a huge role in Christian mythos and practice, and because of the influence he's had on the course of history (we even date our years from his birth), we all know what Jesus Christ looks like. We've seen him in paintings, on TV, in church, at Christmas; he's white, long-haired, and wears something thorny. Right?

Well, truth is we’ll never really know for certain, but we can approximate what he would have looked like.

Neave putting the finishing touches.
Image via art-sheep

With all the imagery of Jesus that we've learned to take for granted, it's easy to forget that not only was he born over 2,000 years ago, but for most of you he was also born somewhere very far away — in the region of Galilee, today in northern Israel. So, to help us get a clearer picture, back in 2002 Richard Neave, a forensic facial reconstruction expert and former medical artist at the University of Manchester, decided to recreate a typical resident of the region where Jesus was born.

Image via Naji.

Neave and a team of Israeli archaeologists started from three Galilean Semite skulls found in the area around Jerusalem. They then used computerized tomography to create 3D cross-sectional images of these skulls, which they fed into facial reconstruction software to create a mock-up of what the men would have looked like.

From these, Neave was able to cast a typical skull of a man from that area. Using information on soft tissue thickness from another reconstruction program, he applied layers of clay to the 3D cast to recreate the muscles and skin. The team had to turn to ancient drawings found throughout the region to estimate what his hair, skin tone, and eyes would have looked like, with the final result shown in the picture above.

Image via art-sheep

 


How much weight you need to lose to appear more attractive


Obesity rates have increased virtually everywhere in the world, especially in the developed world. Some 160 million Americans are obese or overweight: over 70 percent of all men and 60 percent of all women in the US are overweight, and it seems the next generation will have similar problems, with nearly 30% of boys and girls under age 20 either obese or overweight, up from 19% in 1980. Talking strictly about obesity, one-third of American men (32%) and women (34%) were obese in 2013, compared with about 4% of Chinese and Indian adults. Being obese puts you at risk of developing a myriad of conditions, from heart disease and stroke, to diabetes, to some cancers, to osteoarthritis.

Yet, for all the hazards that carrying extra weight brings, most people would rather lose weight to appear more attractive than to be healthier. The two are interlinked, as we shall see, but that's still better than having no reason at all to lose weight. Now, a new study has quantified just how much weight men and women need to lose for the change to show and make them look more attractive. Some might find the findings useful.

Increased facial adiposity is associated with a compromised immune system, poor cardiovascular function, frequent respiratory infections, and mortality.

Previously, other studies showed that facial adiposity, or the perception of weight in the face, significantly predicts perceived health and attractiveness.  Overweight people have high facial adiposity and are perceived to be less attractive and lower in leadership ability.

To see just how big a change in facial adiposity needs to be for people to notice, researchers from the University of Toronto, Canada, presented volunteers with a series of photos that had been digitally doctored to make the people in them appear more or less overweight than they are in reality. Participants looked at randomly selected pairs of images and were asked to choose the heavier one.

The researchers found that a change in BMI (body mass index, defined as body mass divided by the square of body height) of 1.33 kg/m² is required for someone to notice a difference between the doctored photos. The team then assessed just how much weight an individual needed to lose not only for an observer to notice, but to appear more attractive as well. For men, it was around 8.2 kg, or about 18 pounds. For women, the difference was 6.3 kg, or about 14 pounds.

“We calculated the weight change thresholds in terms of BMI rather than simple kilograms or pounds, so that people of all weights and heights can apply it to themselves according to their individual stature,” said Daniel Re, study co-author.
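
Since the thresholds are expressed in BMI units, converting them into kilograms for a given height is a one-line calculation: BMI = weight / height², so a BMI change corresponds to delta_BMI × height². A quick illustration in Python, using the study's ~1.33 kg/m² "just noticeable" threshold and two arbitrary example heights:

```python
def kg_for_bmi_change(height_m, delta_bmi=1.33):
    """Kilograms corresponding to a given BMI change at a given height.

    BMI = weight / height**2, so delta_weight = delta_BMI * height**2.
    The default delta_bmi is the ~1.33 kg/m^2 'just noticeable' threshold
    reported in the study; the heights below are arbitrary examples.
    """
    return delta_bmi * height_m ** 2

print(round(kg_for_bmi_change(1.60), 1))  # ~3.4 kg for someone 1.60 m tall
print(round(kg_for_bmi_change(1.85), 1))  # ~4.6 kg for someone 1.85 m tall
```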

Even adjusting for height, proportionally women need to lose less weight than men to appear more attractive, according to the paper published in the journal Social Psychological & Personality Science.

Header image via Pixabay


Human face diversity may have evolved to make us look unique


Human face traits are so diverse because of evolutionary pressure, according to a new study published in Nature Communications. Photo: Lynolive, Venice Carnival.

While you might find that people sometimes resemble each other, if you look closely enough you'll soon find unique features and facial characteristics that set them apart. It's remarkable how diverse human faces are across the billions alive today and the countless billions who came before. Scientists at the University of California, Berkeley, now believe they understand why this is the case: humans have evolved facial variety to make each of us look unique and easily recognizable.

“Humans are phenomenally good at recognizing faces; there is a part of the brain specialized for that,” said  Michael J. Sheehan, a postdoctoral fellow in UC Berkeley’s Museum of Vertebrate Zoology. “Our study now shows that humans have been selected to be unique and easily recognizable. It is clearly beneficial for me to recognize others, but also beneficial for me to be recognizable. Otherwise, we would all look more similar.”

“The idea that social interaction may have facilitated or led to selection for us to be individually recognizable implies that human social structure has driven the evolution of how we look,” said coauthor Michael Nachman, a population geneticist, professor of integrative biology and director of the UC Berkeley Museum of Vertebrate Zoology.

A face like no other

The premise started from a fundamental question: is the widely recognized variance in facial features, like the distance between the eyes or the width of the nose, dictated purely by chance, or has there been evolutionary selection for these traits to become more variable than they would be otherwise? To answer this, the team first mined a 1988 U.S. Army database that compiled male and female body measurements, then made a statistical comparison of facial traits: forehead-chin distance, ear height, nose width, and distance between pupils. Comparisons of other body parts were also made, such as forearm length and height at the waist.

The researchers found that facial traits are much more variable than other bodily traits, such as the length of the hand, and that facial traits are largely independent of one another, unlike most body measures. People with longer arms, for example, typically have longer legs, while people with wider noses or more widely spaced eyes don't necessarily have longer noses. Both findings suggest that facial variation has been enhanced through evolution.
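
A toy version of that comparison is easy to sketch: compute a scale-free measure of variability (the coefficient of variation) for each trait, then look at the trait-to-trait correlations. The code below uses made-up, randomly generated measurements purely to show the shape of the analysis; it is not the Army dataset or the authors' actual code:

```python
import numpy as np
import pandas as pd

# Hypothetical measurement table in the spirit of the 1988 Army dataset
# (columns = traits in mm, rows = individuals). These numbers are made up.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "nose_width":     rng.normal(35, 4.0, 500),
    "pupil_distance": rng.normal(63, 5.5, 500),
    "forearm_length": rng.normal(260, 14.0, 500),
    "hand_length":    rng.normal(190, 10.0, 500),
})

# 1) Relative variability of each trait: coefficient of variation (std / mean).
print((df.std() / df.mean()).round(3))

# 2) Independence between traits: pairwise correlation matrix.
print(df.corr().round(2))
```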


This marked difference between the variance of facial traits and that of other body parts was then put to the test against genetic data. The researchers turned to data collected by the 1000 Genomes Project, which has sequenced more than 1,000 human genomes since 2008 and catalogued nearly 40 million genetic variations among humans worldwide. Looking at regions of the human genome that have been identified as determining the shape of the face, they found a much higher number of variants than for traits, such as height, that don't involve the face.

“All three predictions were met: facial traits are more variable and less correlated than other traits, and the genes that underlie them show higher levels of variation,” Nachman said. “Lots of regions of the genome contribute to facial features, so you would expect the genetic variation to be subtle, and it is. But it is consistent and statistically significant.”

Of course, since being social, cooperating, and working together has proven so productive for humans, it seems natural for evolution to help the species capitalize further on the same mechanisms. Other animals, however, which don't rely as much on vision to distinguish individuals of the same species, seem to look more like each other. With this in mind, what about our hominid ancestors? Were individuals then also geared towards facial uniqueness? It's still too early to say anything about ancestors who lived hundreds of thousands or even millions of years ago, but the more recent Neanderthals and Denisovans are a different matter. The researchers compared human genomes with those sequenced from Neanderthals and Denisovans and found similar genetic variation, which indicates that the facial variation in modern humans must have originated prior to the split between these lineages.

“Clearly, we recognize people by many traits – for example their height or their gait – but our findings argue that the face is the predominant way we recognize people,” Sheehan said.

Findings appeared in the journal Nature Communications.