Tag Archives: perception

The color purple is unlike all others, in a physical sense

Our ability to perceive color is nothing short of a technical miracle — biologically speaking. But there is one color we can see that isn’t quite like the rest. This color, purple, is known as a non-spectral color. Unlike all its peers, it doesn’t correspond to a single wavelength of electromagnetic radiation and must always be born out of a mix of two others.

A violet rectangle over a purple background. Image credits Daily Rectangle / Flickr.

Most of you here probably know that our perception of color comes down to physics. Light is a type of radiation that our eyes can perceive, and it spans a certain range of the electromagnetic spectrum. Individual colors are like building blocks in white light: they are subdivisions of the visible spectrum. For us to perceive an object as being of a certain color, it needs to absorb some of the subdivisions in the light that falls on it (or all of them, for black). The parts it reflects (doesn’t absorb) are what give it its color.

But not so for purple, because it is a…

Non-spectral color

First off, purple is not the same as violet, even though people tend to treat them as interchangeable terms. This is quite understandable as, superficially, the two do look very similar. On closer inspection, purple is more ‘reddish’, while violet is more ‘blueish’ (as you can see in the image above), but that’s still not much to go on.

Why they’re actually two entirely different things only becomes apparent when we look at the spectrum of visible light.

Image via Reddit.

Each color corresponds to light waves oscillating at a particular frequency (which determines their wavelength). Humans can typically see light ranging from roughly 380 to 750 nanometers (nm). Below that range lies ultraviolet (UV) radiation, which we can’t see but which is energetic enough to cause sunburn, DNA damage, and other frightful things. Above the visible range we have infrared (IR), a type of electromagnetic radiation that carries heat and that armies and law enforcement use in fancy cameras; your remote and several other devices also use IR beams to carry information over short distances.

The exact numbers above aren’t especially important for our purposes here; they describe the exact colors used for flairs on a subreddit I follow, and the wavelengths noted there will shift slightly depending on the hue you’re dealing with. I left the numbers in, however, because they make it easier to showcase the relationship between light’s physical properties and our perception of it.

What we perceive as violet is, quite handily, the bit of the visible spectrum right next to that of UV rays. This sits on the left side of the chart above and is the most energetic part of light that our eyes can see (shorter wavelengths mean higher frequencies, which mean higher energy). On the right-hand side, we have red, with long wavelengths and low energy.

Going through the spectrum above, you can find violet, but not purple. You may also be noticing that while we talk of ultraviolet radiation, we’re not mentioning ultrapurple rays — because that’s not a thing. Purple, for better or worse, doesn’t make an appearance on the spectrum. Unlike red or blue or green, there is no wavelength that, alone, will make you perceive the color purple. This is what being a ‘non-spectral’ color means, and why purple is so special among all the colors we can perceive.

More than the sum of its parts

If you look at orange, which is a combination of yellow and red, you can see that its wavelength is roughly the average of those of its constituent colors. This works for pretty much every color combination, such as blue-yellow (for green) or red-green (which lands around yellow-orange).

Now, the real kicker with purple, which we know we can get by mixing red with blue, is that by averaging the wavelengths of its two parent colors, you’d get something in the green-yellow transition area, which is a decidedly not-purple color.
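
To make that arithmetic concrete, here is a minimal sketch in Python; the wavelength values are rough, illustrative figures of my own choosing, not ones taken from the chart above.

```python
# Rough illustration of the wavelength-averaging heuristic described above.
# The wavelengths are approximate, illustrative values in nanometres.
wavelengths_nm = {
    "red": 650,
    "yellow": 580,
    "green": 530,
    "blue": 470,
}

def average_mix(color_a, color_b):
    """Average the wavelengths of two spectral colors."""
    return (wavelengths_nm[color_a] + wavelengths_nm[color_b]) / 2

print(average_mix("yellow", "red"))  # 615.0 nm -- lands around orange, as expected
print(average_mix("red", "blue"))    # 560.0 nm -- green-yellow territory, not purple
```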

That’s all nice and good, but why are we able to perceive purple, then? Well, the short of it is “because brain”. Although purple isn’t a spectral color, mixtures of light that produce it do occur naturally, so our brains evolved the ability to perceive it; that’s the ‘why’. Now let’s move on to the ‘how’. It all starts with cells in our eyes called ‘cones’.

CIE colour matching functions (CMFs) Xbar (blue), Ybar (green) and Zbar (red). Image via Reddit.

The chart above is a very rough and imperfect approximation of how the cone cells on our retinas respond to different parts of the visible spectrum. There are three lines because there are three types of cone cells lining our retinas. While reality is a tad more complicated, for now, keep in mind that each type of cone cell responds to a certain color (red, green, or blue).

How high each line peaks shows how strong a signal it sends to our brain for individual wavelengths. Although we only come equipped with receptors for these three colors, our brain uses this raw data to mix hues together and produce the perception of other colors such as yellow, or white, and so on.

The more observant among you have noticed that cone cells that respond to the color red also produce a signal for parts of the visible spectrum corresponding to blue. And purple is a mix of red and blue. Coincidence? Obviously not.

The thing is, while every color you perceive looks real, they’re pretty much all just hallucinations of your brain. When light on the leftmost side of the spectrum (as seen in the chart above) hits your eye, signals are sent to your brain corresponding only to the color red. Move more towards the middle, however, and you see that both red and green are present. But the end perception is that of yellow, or green.

What happens is that your brain constantly runs a little algorithm that estimates what color the things you’re seeing are. If a single type of signal is received, you perceive the color corresponding to it. If a mix of signals is received, however, we perceive a different color or hue based on the ratio between signals. If both green and red signals are received, but there’s more of the red than the green, our brains will tell us “it’s yellow”. If the signal for green is stronger than that for red, we see green (or shades of green). The same mechanism applies to every possible combination of these three signals.

That bit to the right of the chart, where both red and blue signals are sent to the brain, is where the color purple is born. There’s no radiation wavelength that carries purple like there is with violet or orange. The sensation of purple is created by our brains, sure, but the reason why it needs to be created in the first place is this quirk of how the cone cells in our eyes work. From the chart above you can see that the cells responding to green also show some sensitivity in the area corresponding to purple, but for some reason, our brains simply don’t bother with it.
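
To illustrate the idea, here is a minimal sketch that uses simple Gaussian curves as stand-ins for the three cone responses (the real curves are more complicated, as noted above, and the peak values below are made-up round numbers). The point it makes is that a blue-plus-red mixture produces a pattern of cone signals that no single wavelength can reproduce, and that pattern is what the brain labels ‘purple’.

```python
import math

# Illustrative Gaussian stand-ins for the three cone types described above.
# Peak wavelengths and widths are made-up round numbers, not the real curves.
CONES = {
    "blue": (450, 40),
    "green": (540, 40),
    "red": (610, 50),
}

def cone_signals(wavelength_nm):
    """Relative signal each cone type sends for a single wavelength of light."""
    return {
        name: round(math.exp(-((wavelength_nm - peak) ** 2) / (2 * width ** 2)), 2)
        for name, (peak, width) in CONES.items()
    }

def mixture_signals(wavelengths_nm):
    """Signals for a mixture of wavelengths simply add up, cone by cone."""
    totals = {name: 0.0 for name in CONES}
    for wl in wavelengths_nm:
        for name, signal in cone_signals(wl).items():
            totals[name] = round(totals[name] + signal, 2)
    return totals

print(cone_signals(580))            # green and red both fire -> read as yellow
print(mixture_signals([450, 650]))  # strong blue and red, little green -> purple;
                                    # no single wavelength produces this pattern
```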

From my own hobbies (painting) I can tell you that mixing violet with green produces blue, but mixing purple with green results in brown. Pigments and colored light don’t necessarily work the same way, this is all anecdotal, and I have no clue whether that’s why green signals get ignored in purple — but I still found it an interesting tidbit. Make of it what you will.

In conclusion, what makes purple a non-spectral color is that there isn’t a single wavelength that ‘carries’ it — it is always the product of two colors of light interacting.

Are there any others like it?

Definitely! Black and white are prime examples. Since there’s no single wavelength for white (it’s a combination of all wavelengths) or black (no wavelengths), they are by definition non-spectral colors. The same goes for gray. These are usually known as non-colors, grayscale colors, or achromatic hues.

Furthermore, colors produced by mixing grayscale with another color are also considered non-spectral (since one component can’t be produced by a single wavelength, the final color can’t be either). Pink is most often given as an example, as is brown, since both require non-spectral components (white and/or purple for pink, gray or black for brown).

Metallic paints are also, technically, non-spectral colors. A large part of the visual effect of metallic paints comes from how they interact with and scatter light. A single wavelength produces a single color; the shininess we perceive in metallic pigments can’t be reproduced using a single wavelength, as it comes from tiny variations in the surface reflecting light in different directions. The metal itself may well be a solid color, but our final perception of it is not. A gray line painted on canvas doesn’t look like a bar of steel any more than a yellowish one can pass off as a bar of gold. As such, metallic colors are also non-spectral colors.

A matter of taste: tongue differences shape our palate

New research at the University of Copenhagen found that Danes can’t perceive bitter taste as well as Chinese individuals. This seems to come down to anatomical differences on the surface of the tongue between the two groups. The findings show how ethnicity can influence our enjoyment of food, and hint at how natural differences in perception can help give rise to cultural preferences.

Image via Pixabay.

We’ve known for a while now that not everybody perceives tastes the same. Women, for example, are better at picking up bitter flavors than men. New research suggests that ethnicity can also play a role in our sensitivity to bitter taste, and thus, our enjoyment of items such as broccoli or dark chocolate.

A matter of taste

“Our studies show that the vast majority of Chinese test subjects are more sensitive to bitter tastes than the Danish subjects. We also see a link between the prominence of bitter taste and the number of small bumps, known as papillae, on a person’s tongue,” says Professor Wender Bredie of the University of Copenhagen’s Department of Food Science (UCPH FOOD) and coauthor of the study.

The team used artificial intelligence to analyze the density of mushroom-shaped “fungiform” papillae on the tongues of 152 test subjects, half of them ethnically Danish and half ethnically Chinese. These papillae are concentrated at the tip of the tongue and contain a large share of our taste buds — as such, they’re a key player in our ability to perceive taste. In order to understand whether cultural preferences for particular tastes are mediated by these papillae, the team first set out to see what differences in the distribution, size, and quantity of papillae exist between different groups of people.

Papillae are usually counted by hand, making it a very slow, tedious, labor-intensive process. It also means that mistakes are often made (there are hundreds of tiny fungiform papillae on every tongue). The new method allowed for automated counting, which is much faster and more reliable. It uses a tongue-coordinate system to map individual papillae using image recognition, and is described in the first of the two papers.
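
The papers describe a tongue-coordinate system and image recognition; their exact pipeline isn’t reproduced here. Purely as an illustration of what automated counting can look like, the hypothetical sketch below uses a Laplacian-of-Gaussian blob detector from scikit-image (my own choice of tool, not necessarily the authors’) to flag papilla-like bumps in a tongue photograph. The file name and thresholds are made up.

```python
# Hypothetical sketch of automated papillae counting, not the published method.
from skimage import color, io
from skimage.feature import blob_log

image = io.imread("tongue.png")  # made-up file name for a tongue photograph
gray = color.rgb2gray(image)

# Fungiform papillae show up as small, roughly circular bumps, so a
# Laplacian-of-Gaussian blob detector can flag candidates automatically.
# The sigma range and threshold here are illustrative guesses.
blobs = blob_log(gray, min_sigma=2, max_sigma=10, threshold=0.05)

print(f"Detected {len(blobs)} papilla-like blobs")
```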

All in all, participants of Chinese descent had (generally) more of them than Danish subjects, which the team believes explains why the former are better able to pick up on bitter flavors. Still, Professor Bredie underscores that the findings need to be replicated on larger groups of participants before we can draw a definitive conclusion here; the study may have picked up on a fluke, and these differences may not hold at the general population level.

“It is relevant for Danish food producers exporting to Asia to know that Asian and Danish consumers probably experience tastes from the same product differently,” Bredie says. “This must be taken into account when developing products.”

Genetic factors are only one of the elements that influence our experience of food. Our personal preference also has a big part to play. Professor Bredie uses texture as an example, explaining that many Danes would likely prefer crispy chips over soft chips. Previous research at the UCPH showed that Danish and Chinese test subjects likely differ on this point as well — it found that a majority of Chinese subjects (77%) prefer foods that don’t need much chewing, whereas a majority of Danes (73%) like those with a harder consistency.

Exactly what causes this divergence in preference is unknown, but the team believes it’s due to cultural and diet differences and not down to the structure of our tongues.

The first paper, “A Novel Approach to Tongue Standardization and Feature Extraction,” has been presented at the International Conference on Medical Image Computing and Computer-Assisted Intervention.

The second paper, “Cross-cultural differences in lingual tactile acuity, taste sensitivity phenotypical markers, and preferred oral processing behaviors,” has been published in the journal Food Quality and Preference.

Exposure to graphic images shifts your perception of reality, video games study shows

It’s an unlockable perk.

Player fighting a dragon in Skyrim.
Image credits Eliot Carson / Flickr.

Gamers, particularly those who partake in violent video games, show greater resilience when viewing disturbing images than their peers, a new study suggests. While the research doesn’t establish a cause-effect relationship between the two, it is an important look into how exposure to violent images can alter perception.

Needs more dakka

“Our study focused on perception and how it may be disrupted by negative stimuli. This is very different from other research on the link between violent video games and social behaviour, such as aggression,” says study author and cognitive psychologist Dr. Steve Most from University of New South Wales (UNSW) Psychology.

People who frequently play violent video games may gain a degree of immunity to disturbing images. The findings come from a study on emotion-induced blindness at the UNSW carried out under the supervision of Dr. Most. Emotion-induced blindness is a process by which a person’s emotions impact their perception of the world.

Emotions have a central role in shaping our perception.

Such players were more adept at ignoring graphic content while viewing a rapid succession of images, making them better at focusing on the pictures they were asked to spot. The study doesn’t prove that this happened because of gaming history per se — it established a correlation, not causation. However, it’s an interesting look into how our perception might shift following exposure to violent imagery.

“When people rapidly sift through images in search of a target image, a split-second emotional reaction can cause some of them to be unable to see the target,” Dr Most explains. “This occurs even if you’re looking right at the target. It’s as if the visual system stops processing the target in order to deal with the emotional imagery it’s just been confronted with.”

For the study, the team split participants into two groups: a group of ‘heavy gamer’ participants and a control group of people who played no video games at all. They classified heavy gamers as those who played more than 5 hours per week of video games that ‘often’ or ‘almost always’ involved violence. Participants were not told that the experiment would focus on their video game playing history so as not to skew the results.

Call of Duty drones.

Call of Duty is definitely violent. And one of the most popular games out there right now.
Image via Flickr.

During the experiment, the participants were shown sequences of 17 images, each flashing on the screen for 0.1 seconds. Most of these were upright landscape-style photos, but among them was one ‘target’ image, rotated 90 degrees to the left or right, which participants were asked to spot and whose rotation they had to report.

In some of these sequences, the team also included a ‘distractor’ image. These would appear for a significantly longer period — between 200 and 400 milliseconds — before the target image, and were either emotionally neutral (such as non-threatening animals or people) or showcased graphic, emotionally negative content. The latter could be violent (depictions of assault) or simply kind of gross (dirty toilet bowls, for example).
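
As a rough sketch of how one such trial could be assembled (the structure, timings, and parameters below are my own illustrative assumptions based on the description above, not the authors’ code):

```python
import random

def build_trial(with_distractor=True):
    """Build one rapid-image trial loosely matching the description above:
    17 images at 0.1 s each, one rotated target, and optionally a distractor
    placed 200-400 ms before the target (read here as a 2-4 frame gap)."""
    trial = [{"type": "filler", "duration_s": 0.1} for _ in range(17)]

    target_index = random.randint(8, 14)  # target appears late in the stream
    trial[target_index] = {
        "type": "target",
        "duration_s": 0.1,
        "rotation_deg": random.choice([-90, 90]),
    }

    if with_distractor:
        gap_frames = random.randint(2, 4)  # 200-400 ms at 0.1 s per frame
        trial[target_index - gap_frames] = {
            "type": random.choice(["neutral", "graphic"]),
            "duration_s": 0.1,
        }
    return trial

print(build_trial())
```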

Those in the gamer group seemed to be largely immune to these emotional disruptions, the team reports. They were able to correctly identify target images and their rotation with greater accuracy than the control group, despite the team’s attempts to throw them off. In neutral-image streams, the two groups performed virtually the same, with no significant differences in accuracy.

This last bit is important because it rules out the possibility that gamers were simply better at paying attention than the control group. Instead, the team believes that those who regularly play violent video games are generally less responsive to emotional images. Since they didn’t focus disproportionately on these pictures, gamers could better perceive the other elements around them.

“This study suggests that, depending on the situation, people with different levels of violent media and game consumption can also have different perceptions of the environment,” says Dr Most.

“This suggests a link between violent video game exposure and a person’s perception, that is, how they process information.”

The team underscores that the results don’t mean violent video games make people emotionally numb. Instead, the study only focused on “perception and how it may be disrupted by negative stimuli,” Dr. Most explains, and shouldn’t be seen as linking violent video games with social behavior such as aggression.

“There is conflicting literature about the degree to which playing violent video games affect real-world behaviour. This study only investigated a low-level effect on an individual’s perception, and we definitely need further research into the mechanisms that underlie this impact of emotion on perception.”

The team plans to follow up on their study by investigating emotion-induced blindness in emergency first responders — another group that is frequently (and very directly) exposed to graphic images.

The paper “Aversive images cause less perceptual interference among violent video game players: evidence from emotion-induced blindness” was published in the journal Visual Cognition.

Goats can tell when you’re happy — and they like it when you smile

Not only can goats tell when people are happy, but they also prefer interacting with happy people.

Goats at the Buttercups Sanctuary. Image credits: Christian Nawroth.

It took us a while to figure it out and prove it, but now we know that animals feel and have empathy. Not only do they understand each other’s emotions, but they can also understand our emotions — something which is especially visible in pets, and even more so in dogs. As our closest companions since the dawn of time, dogs have become intimately familiar with our moods and way of life, even evolving alongside us.

But dogs aren’t the only domestic animals that can read our emotions.

In the first study to ever assess this in goats, researchers explain that goats can differentiate between happy and angry facial expressions, and that they prefer the happy ones. Dr. Alan McElligott, who led the study at Queen Mary University of London and is now based at the University of Roehampton, said:

“The study has important implications for how we interact with livestock and other species, because the abilities of animals to perceive human emotions might be widespread and not just limited to pets.”

Bernard the goat clearly likes happy people.

During the study, which was carried out at Buttercups Sanctuary for Goats in Kent, England, researchers showed 20 goats grey-scale pairs of unfamiliar human faces exhibiting happy or angry expressions. The team reports that happy faces elicited greater interaction — the goats were more likely to reach out to them and explore them with their snouts. Furthermore, this was particularly prevalent when the happy faces were positioned on the right, suggesting that the goats use their left (opposite) brain hemisphere to process positive emotions. Overall, this shows just how adept goats have become at reading human body language.

First author Dr. Christian Nawroth, who worked on the study at Queen Mary University of London but is now based at the Leibniz Institute for Farm Animal Biology, praises goats’ ability to ‘read’ humans and says that while this ability had been hinted at before, this is the first study to show that goats prefer happy people.

McElligott with a goat which clearly likes that he’s happy. Image credits: Alan McElligott.

“We already knew that goats are very attuned to human body language, but we did not know how they react to different human emotional expressions, such as anger and happiness. Here, we show for the first time that goats do not only distinguish between these expressions, but they also prefer to interact with happy ones.”

The study of emotion perception has already revealed complex capabilities in dogs and horses, says co-author Natalia Albuquerque, from the University of Sao Paulo. But this opens up a whole new avenue, paving the way for studying emotion perception in all domestic animals. It wouldn’t be surprising if, to some extent, all animals can tell when we’re happy.

The study has been published in Royal Society Open Science.

Birds perceive colors and hues the same way we do

Zebra finches seem to clump similar hues together and perceive them as single colors, new research suggests. This approach is similar to how the human mind processes color and sheds light on the biological root of color perception.

Zebra finches break the color spectrum into discrete colors — much like we do.
Image credits Ryan Huang / TerraCommunications LLC

For zebra finch (Taeniopygia guttata) males, wooing is all about what colors you’re wearing. These gents sport various hues on their beaks, ranging from light orange to dark red, which they use to attract mates. But all their chromatic efforts might be in vain, new research suggests, as the females may simply not perceive subtle nuances.

Red is red

For the study, the researchers worked with 26 zebra finch females and a handful of paper discs the size of a small coin. Some of these discs were painted in a solid color, while others were two-toned. The birds were taught that flipping over a two-toned disc earns them a reward in the form of a millet seed hidden beneath it. Solid-colored discs, meanwhile, didn’t return any tasty treats.

What the team wanted to determine through this experiment was how well the zebra finches could perceive ranges of hues. A bird picking at a certain disc before the others during the experiment indicated that it perceived it as two-toned, i.e. that it could perceive the hues on the disc as being different from one another. To see exactly how well the birds could distinguish different hues, some trials involved discs painted in color pairs that were far apart on the color spectrum (violet and yellow would be such a combination) while others used colors that were more similar (red-orange, for example).

Perhaps unsurprisingly, the females found it a breeze to perceive pairings of dissimilar colors. However, they didn’t fare nearly as well when trying to discern pairings of the hues in between these colors. The findings suggest a threshold effect at work — a sharp perceptual boundary near the orange-red transition.

The birds were also much better at spotting a two-toned disc if it bore colors from opposite sides of the boundary (a red-orange pair, for example) than if it bore a pair from the same side (two shades of the same color). This effect persisted even when the pairs were all equally far apart on the color spectrum, the team notes. This suggests that, while the finches have no problem perceiving different colors side-by-side, they do have some difficulty perceiving different hues of the same color on the discs.
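
A minimal sketch of that threshold idea, using a made-up boundary wavelength rather than anything measured in the study:

```python
# Illustrative sketch of categorical color perception with an assumed
# category boundary near the orange-red transition (placeholder value).
BOUNDARY_NM = 605

def category(wavelength_nm):
    return "red" if wavelength_nm > BOUNDARY_NM else "orange"

def pair_type(wl_a, wl_b):
    """Pairs straddling the boundary are cross-category; in the study such
    pairs were easier to tell apart, even at equal spectral distances."""
    if category(wl_a) != category(wl_b):
        return "cross-category"
    return "within-category"

print(pair_type(600, 610))  # 10 nm apart, straddles the boundary -> cross-category
print(pair_type(610, 620))  # also 10 nm apart, same side -> within-category
```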

First off, the findings help us gain a better understanding of how zebra finches handle romance. Previous research has shown that red-beaked males have more success with the ladies, likely because the color denotes good health. While this present study doesn’t show whether the females prefer one color over another, it does help us understand what females perceive when looking at potential mates.

The findings indicate that the birds lump all hues of red on one side of a certain threshold as being ‘red’. Because of this, the females likely aren’t very picky.

I give thee the color wheel.
Image credits László Németh.

“What we’re showing is: he’s either red enough or not,” said senior author Stephen Nowicki, a biology professor at Duke University.

It also helps us gain a better understanding of our own vision. The process of lumping similar hues together and perceiving them as a single color, known as categorical color perception, is something that our brain does as well. It’s not yet clear whether we share the same orange-red threshold with zebra finches, but the fact that we both exhibit categorical color perception suggests that the process has deep biological roots. Color, then, might not be just a construct of human language and culture, but may also stem from biological hardwiring.

Still, it likely doesn’t happen in the eye, the team writes — categorical color perception, even in zebra finches, is probably a product of their minds.

We don’t ‘see’ the light that hits our retina; what we see is the image our brain constructs from that data. This type of color perception, then, could be a way for the brain to help reduce ambiguity and noise from the environment — a way for our lumps of gray matter to help keep images simple so we don’t get overwhelmed.

“We’re taking in this barrage of information, and our brain is creating a reality that is not real,” said Nowicki.

“Categorical perception — what we show in zebra finches — is perhaps one strategy the brain has for reducing this ambiguity,” adds Duke postdoctoral associate and paper co-author Eleanor Caves. “Categories make it less crucial that you precisely interpret a stimulus; rather, you just need to interpret the category that it’s in.”

The paper “Categorical Perception of Colour Signals in a Songbird” has been published in the journal Nature.

Owls perceive moving objects like we do, suggesting bird and human vision are quite similar

Barn owls (Tyto alba) took part in an experiment which tested their behavioral and neural responses to moving objects. Credit: Yoram Gutfreund.

Differentiating a moving object from a static background is crucial for species that rely on vision when interacting with their environment. This is especially true for a predator such as an owl. Now, a new study found that owls and humans share the same mechanics for differentiating objects in motion.

Individual cells in the retina can only respond to a small portion of a visual scene and, as such, send a fragmented representation of the outside world to the rest of the visual system. This fragmented representation is transformed into a coherent image of the visual scene in which objects are perceived as being in front of a background. Previous studies, mostly performed on primates, found that we perceive an object as distinct from a background by grouping the different elements of a scene into “perceptual wholes.”

However, these studies left some important questions unanswered. For instance, is perceptual grouping a fundamental property of visual systems across all species? At least, this seems to be true for barn owls (Tyto alba), according to the latest findings by Israeli researchers at Technion University’s Rappaport Faculty of Medicine in Haifa.

Yoram Gutfreund and colleagues studied both the behavior and brain of barn owls as the birds tracked dark dots on a gray background. The owls’ visual perception was tracked by a wireless “Owl-Cam”, which provided a perceptual point of view while neural activity was mapped in the optic tectum — the main visual processor in the brain of non-mammalian vertebrates.

“In behaving barn owls the coherency of the background motion modulates the perceived saliency of the target object, and in complementary multi-unit recordings in the Optic Tectum, the neural responses were more sensitive to the homogeneity of the background motion than to motion-direction contrasts between the receptive field and the surround,” wrote the authors.

Caption: An example of owl DK spontaneously observing the computer screen. The target is embedded in a static array of distractors (singleton). The left panel shows a frontal view of the owl and the right panel the corresponding headcam view. The red circle in the right panel designates the functional fovea. The color of the circle changes to green when it is on target. Credit: Yael et al., JNeurosci (2018).

Caption: the same setup as above only now the target is embedded in a mixed array of moving distractors. Credit: Yael et al., JNeurosci (2018).

The two experiments conducted by the researchers revealed that owls seem to be indeed using perceptual grouping, suggesting that the visual systems of birds and humans are more similar than previously thought. More importantly, the study provides evidence that this ability evolved across species prior to the development of the human neocortex.

The findings appeared in the Journal of Neuroscience. 

Emotions shape how you see the world — quite literally

Our perception of the world isn’t a crystal-clear reflection of reality, new research suggests, but a looking glass bent and distorted by our emotional state. The findings show people will perceive a neutral face as smiling more often when it is paired with an unseen positive image.

Smiley face.

Image via Pixabay.

They say seeing is believing, but new research suggests that our emotions have a central role to play in shaping what we see in the first place.

Behind the looking glass

“We do not passively detect information in the world and then react to it — we construct perceptions of the world as the architects of our own experience. Our affective feelings are a critical determinant of the experience we create,” the researchers explain.

“That is, we do not come to know the world through only our external senses — we see the world differently when we feel pleasant or unpleasant.”

Lead author Erika Siegel and her team previously found that changing people’s emotional states — without them knowing — changed their first impressions of neutral faces, making them seem more or less likable, trustworthy, or reliable. For the new paper, they wanted to see if changing people’s emotional states outside of their awareness might change not only their impression, but their actual perception of neutral faces.

They used a technique called continuous flash suppression to expose participants to stimuli without them knowing it. In the first experiment, 43 participants had a series of flashing, alternating images of either a neutral face or a pixelated mess presented to their dominant eye. During this time, a low-contrast image of a smiling, scowling, or neutral face was shown to their nondominant eye. What typically happens is that the images seen by the dominant eye suppress the data incoming from the other eye, meaning participants shouldn’t be able to consciously experience the latter.

At the end of the trial, each participant was shown a set of five faces. They were asked to pick the one among them that best matched the face they saw during the trial. Their dominant eye was always shown a neutral face, but the participants tended to select smiling faces more often if their non-dominant eye had been shown a smiling face.

For the second experiment, the team included an objective measure of awareness: they asked participants to guess the orientation of the suppressed face. Those who could guess the orientation correctly above chance levels were excluded from subsequent analyses. The continuous flash suppression trial was repeated, following the same structure as the first trials. The results again indicated that being exposed to an unseen positive face changed the participants’ perception of the visible, neutral face.

The research focused on positive stimuli (i.e. smiling faces) because similar studies in the past tended to look at the effect of negative stimuli. The team notes that such research suggests negative stimuli have a greater influence on behavior and decision-making, but the effects of positive stimuli are still robust enough to warrant further study. Siegel adds that their findings could have broad applications in real-world situations, ranging from run-of-the-mill social interactions to situations with major consequences, such as courtroom rulings.

One of the most important lessons we can all draw from this research, I feel, is that our perception of the world is subjective — it’s a construct of our own minds, a representation that’s in part painted by our emotions.

The paper “Seeing What You Feel: Affect Drives Visual Perception of Structurally Neutral Faces,” has been published in the journal Psychological Science.

Biology imparts us with instinctive color categories — culture only shapes them

Although different cultures go about ordering colors into different systems, all babies seem to share a set of common, instinctive color categories.

Researchers have a pretty good grasp of how humans see colors. Different wavelengths of light reflected by various objects pass through the pupil and land on the retina, where specialized cells (known as cones) pick up on either short, medium, or long wavelengths. They send this information up to the brain, where it all gets put together and processed into the final image we see.

Color Dots.

But although every human out there sees the same way, we have different systems for explaining what we see. Some languages, like Japanese, don’t necessarily make the distinction between green and blue, two colors which most of you reading this take as obviously distinct. Culture has a big part to play in shaping how we group colors, but previous research has also shown that babies have a kind of built-in color category system.

So how do these two fit together?

To find out, a team from the University of Sussex studied the responses of 176 babies aged between four and six months to patches of color. They report that while cultural context does play a part, our brains are naturally inclined to bunch colors up into five basic categories.

Colorama

The infants were seated in front of a wooden booth which had two windows cut out at the sides. Initially, both windows repeatedly showed the same color, but as the experiment progressed, one of them was filled with a different color at random. This new pairing was then shown multiple times, and the babies were recorded with a webcam to capture their reaction. Each baby was shown only one pair of different colors, with at least 10 babies tested for each pair.

“We wanted to find out what’s the connection between two [color categories and groupings], what is it that babies are using to make their colour categories and what can that tell us about the way we talk about colour as adults,” said Alice Skelton, first author of the research and a doctoral candidate at the University of Sussex.

The team was looking for a phenomenon known as novelty preference in the babies — the infants will look at the second color for longer if they perceive it to be different from the first-shown color. So if babies consistently spend more time looking at the new color, even when the two are really close together on the color spectrum, that would suggest that their brains perceive it as belonging to a different category.
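
As a rough illustration of the measure (with made-up looking times, not data from the study), a novelty-preference score can be computed like this:

```python
def novelty_preference(looking_time_novel_s, looking_time_familiar_s):
    """Share of total looking time spent on the newly shown color.
    Scores reliably above 0.5 suggest the infant treats the new color
    as belonging to a different category. Illustrative sketch only."""
    total = looking_time_novel_s + looking_time_familiar_s
    return looking_time_novel_s / total

# Hypothetical looking times, in seconds, for one infant and one color pair:
print(novelty_preference(6.2, 3.8))  # ~0.62 -> looked longer at the novel color
```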

Some of the infants were shown very similar pairs of colors, while others were shown pairings farther apart on the color spectrum, to get a feel for where their boundaries of color categories fell. Fourteen different colors throughout the color spectrum and of the same lightness were used in total. The results show that babies order colors under five basic categories: red, yellow, green, blue, and purple.

The next step was to compare these categories to color groupings in English and in 110 nonindustrialized languages. There were obviously several differences in the way different cultures went about ordering color (such as different numbers of categories, their placement on the spectrum, and their exact boundaries) but overall, their systems tied in well with the five categories the team found.

Built-in color

“Infants’ categorical distinctions aligned with common distinctions in color lexicons and are organized around hues that are commonly central to lexical categories across languages,” the authors write.

“The boundaries between infants’ categorical distinctions also aligned, relative to the adaptation point, with the cardinal axes that describe the early stages of color representation in retinogeniculate pathways, indicating that infant color categorization may be partly organized by biological mechanisms of color vision.”

What’s more, four of the color boundaries the infants exhibited mapped onto the four extreme signals that the cone cells can produce once their output is processed and interpreted in the brain. Taken together, these findings suggest that biology creates our color categories, and that environmental and cultural factors shape them afterward — if your language doesn’t differentiate between green and blue, for example, babies learn not to make that distinction as they age.

The findings are important as they lend a lot of weight to the color universality theory, since infants show a definite categorical color structure long before they learn the words for colors.

But the paper isn’t without its limitations. First off, there is a possibility that the colors the babies were exposed to from birth, for example in toys or wallpaper, could have prompted their brains to create certain color categories. Since the study included only children from the UK, they were likely to have lived in similar conditions and been exposed to roughly the same color schemes. Repeating the test with children from other cultures should show whether these five categories are learned or instinctual.

The team now hopes to explore how our categories shift as we develop language.

The paper “Biological origins of color categorization” has been published in the journal Proceedings of the National Academy of Sciences.

Our perception of a character comes not from their actions, but from how they compare to others

There are some characters whom we love although they do legitimately bad things — take Walter White for example. A new paper from the University at Buffalo tries to explain why we still root for these characters.

Darth Vader.

On the one hand, I really hoped all the best for Walter, all the way to the end. Which I found surprising, because he does a lot of shady, even downright dark, things on the show. And I’m not the only one — in fact, most people feel the same way I do, while agreeing that Walter is, when you draw the line, more villain than hero. So what gives?

According to lead author Matthew Grizzard, an assistant professor in UB’s Department of Communication and an expert on the cognitive, emotional, and psychobiological effects of media entertainment, it’s because behavior isn’t the ultimate standard by which we gauge a villain or a hero.

Exactly how to make an audience like or dislike a hero has been a burning question on the minds of media researchers ever since the ’70s. They’ve had a lot of time to look into the issue since then, and one thing seems to stand the test of time — morality matters to the public. People simply love the good guys and dislike the bad guys. But Grizzard’s study suggests it’s not necessarily behavior that we use when making a distinction between the hero and the villain.

Whiter than thou

The team, which included co-authors Jialing Huang, Kaitlin Fitzgerald, Changhyun Ahn and Haoran Chu, all UB graduate students, wanted to find out if slight outward differences — for example wearing darker or lighter clothes — would be enough to make people consider a character as being a hero or villain. So, they digitally altered photographs of characters to see if they could influence the audience’s perception of them.

They also drew on previous research which found that while villains and heroes differ in morality, the two don’t differ in competence. In other words, villains aren’t simply immoral, but they’re “good at being bad”, according to Grizzard. This offered the team an opportunity to determine if their alterations activated participants’ perception of a hero or villain or if any shift in perception was caused by unrelated biases.

“If our data had come out where the heroic-looking character was both more moral and more competent than the villain, then we probably just created a bias,” says Grizzard.

“But because the hero was more moral than the villain but equally competent, we’re more confident that visuals can activate perceptions of heroic and villainous characters.”

The study found that while appearance does, to a certain degree, help skew the perception of a character as either a hero or a villain, characters were judged chiefly by how they compare to others, and by the order in which they’re introduced to the audience. For example, a hero was usually judged as being more moral and heroic if he or she appeared after the villain, and villains were usually considered to be more villainous if they were introduced after a hero. This suggests that people don’t make isolated judgments on the qualities of a character using a strict moral standard, but rather judge by comparing them against those they oppose.

In Walter’s case, people see the character’s ethics taking a constant turn for the worse and still stick by his side. The trick is that Walter doesn’t evolve by himself — there are all those other characters going about, usually turning worse by the episode, and Walter comes out on top when compared to them. He seems better when held against the really low standard the others in the show set, making him feel like the good guy.

Well, if nothing else, the villains at least have an easier time catching up to Mr. Good Guy, Grizzard says.

“We find that characters who are perceived as villains get a bigger boost from the good actions or apparent altruism than heroes, like the Severus Snape character from the Harry Potter books and films.”

The findings could help improve the effectiveness of character-based public service campaigns, or for programs trying to promote a certain behavior. By helping authors understand how we perceive their characters, the research could also help them write better stories.

And on a more personal note, it can help each and every one of us form a clearer image of the characters we love — with all their flaws and strengths.

The full paper “Sensing Heroes and Villains: Character-Schema and the Disposition Formation Process” has been published in the journal Communication Research.

Each language you speak in alters your perception of time, study finds

Language can have a powerful effect on how we think about time, a new study found. The link is so powerful that switching language context in a conversation or during a task actually shifts how we perceive and estimate time.

Typing fonts.

Image credits Willi Heidelbach.

I think we all like to consider our minds as being somehow insulated from the goings-on around us. We take comfort in knowing they will absorb, process, and respond to external stimuli in a calm, efficient, but most of all consistent fashion. Maybe it comes down to the sense of immutable truth our reasoning is imbued with if we assume it’s rooted in a precise and impartial system — in a chaotic world, we need to know that we can trust our mind. A view which is a tad conceited, I’d say, since it’s basically our mind telling us what to believe about itself.

And it’s also probably false. Professor Panos Athanasopoulos, a linguist at Lancaster University, and Professor Emanuel Bylund, a linguist at Stellenbosch University and Stockholm University, have discovered that our perception of time strongly depends on the linguistic context we’re currently using.

Doublespeak

People who are fluent in two (bilinguals) or more (polyglots) languages are known to ‘code-switch’ often — a rapid and usually unconscious shift between languages within a single context. But each language carries within it a certain way of looking at the world, of organizing and referring to things around us. For example, English speakers mark the duration of events by likening them to physical distances, e.g. a short lecture, while a Spanish speaker will liken duration to volume or amount, e.g. a small lecture. So each language subtly ingrains a certain frame of reference for time in its speakers.

But bilinguals, the team found, show a great degree of flexibility in the way they denote duration, based on the language context in use. In essence, this allows them to change how the mind perceives time.

For the study, the team asked Spanish-Swedish bilinguals to estimate the passage of time or distance (a distractor task) while watching a screen showing either a line growing across it or a container being filled. Participants reproduced the duration by clicking the computer mouse once, waiting the appropriate amount of time, and clicking again. They were prompted to do this task with either the word ‘duración’ (the Spanish word for duration) or ‘tid’ (the Swedish term). The containers and lines themselves weren’t accurate representations of duration, however; they were meant to test to what extent participants could disregard spatial information when estimating duration.

The idea is that if language does interfere with our perception of duration, Spanish speakers (who talk about time as a volume) would be influenced more by the fill level of the containers than their Swedish counterparts (who talk about time as a distance), and vice-versa for the lines.
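
As a rough sketch of how such interference might be quantified (the numbers below are made up for illustration, not data from the study):

```python
def discrepancy_ms(reproduced_ms, actual_ms):
    """Absolute gap between the duration a participant reproduced with their
    two mouse clicks and the actual duration of the stimulus."""
    return abs(reproduced_ms - actual_ms)

# Hypothetical trials: same actual duration, but containers filled to different
# levels. If fill level drags the reproduced duration around, that gap is the
# spatial interference the study measures.
trials = [
    {"fill_level": "low",  "actual_ms": 3000, "reproduced_ms": 2700},
    {"fill_level": "high", "actual_ms": 3000, "reproduced_ms": 3600},
]
for trial in trials:
    print(trial["fill_level"], discrepancy_ms(trial["reproduced_ms"], trial["actual_ms"]))
```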

And it did

Image credits emijrp / Wikimedia.

The team recruited 40 native Spanish speakers and 40 native Swedish speakers, all bilingual, and had them run three variations of the test. The first one found that native Spanish speakers were influenced to a greater extent (there was a wider discrepancy between real and perceived time) by the containers than by the lines (scoring an average 463-millisecond discrepancy vs the Swedes’ 344 ms). Native Swedish speakers were more influenced by the lines than by the containers (scoring 412 ms discrepancies vs their counterparts’ 390 ms discrepancies).

The second test again included 40 participants from each group. In the absence of the Spanish/Swedish prompt words, the team “found no interaction between language and stimulus type, in either the line condition or the container condition. […] both language groups seemed to display slightly greater spatial interference in the lines condition than in the containers condition. There were no significant main effects.”

The third test included seventy-four Spanish-Swedish bilinguals who performed either the line or the container task. The team removed the distractor task to reduce fatigue and alternated between the prompt languages. Each participant took the experiment twice, once with Spanish and once with Swedish prompt labels. The team concludes that “when all stimuli were analysed,” there were “no significant main effects or interaction” in either the distance or the time task — meaning both groups were just as accurate in estimating time or distance regardless of language.

“Our approach to manipulate different language prompts in the same population of bilinguals revealed context-induced adaptive behavior,” the team writes. “Prompts in Language A induced Language A-congruent spatial interference. When the prompt switched to Language B, interference became Language B-congruent instead.”

“To our knowledge, this study provides the first psychophysical demonstration of shifting duration representations within the same individual as a function of language context.”

Exactly why this shift takes place is still a matter of debate: the team interprets the finding in the context of both the label-feedback hypothesis and the predictive processing hypothesis, but mostly in technical terms for other linguists to discern. For you and me, I think the main takeaway is that as much as our minds shape words so do words shape our minds — texturing everything from our thoughts to our emotions, all the way to our perception of time.

The paper “The Whorfian Time Warp: Representing Duration Through the Language Hourglass” has been published in the Journal of Experimental Psychology.

Editor’s note: edited measured discrepancy for more clarity.

Your brain tricks you into seeing difficult goals as less appealing

Your lazy brain actually changes how you see the world to discourage you from effort, a new study found. The results suggest that the amount of work required to achieve a task changes our perception of it — in essence, our brain makes anything challenging seem less appealing.

Something tells me we’re not the only species to have this bias.
Image credits Dimitris Vetsikas.

Today was to be the day. You made a commitment to yourself — today, you’d replace the after-work couch-and-chips marathon with a healthy dose of jogging. You bought the sneakers, put together the mother of all jogging playlists, and had the route all planned. It would be the dawn of the new, fitter you.

So how on Earth did you end up on the couch munching on snacks again?

Blame the brain

A new paper from University College London found that your brain just won’t let you put the effort in. The estimated amount of work required to do a task influences the way we perceive it, making us choose the path of least resistance.

“Our brain tricks us into believing the low-hanging fruit really is the ripest,” says Dr Nobuhiro Hagura, who led the UCL study before moving to NICT in Japan.

“We found that not only does the cost to act influence people’s behaviour, but it even changes what we think we see.”

The team had 52 participants undergo a series of tests in which they had to judge the direction a bunch of dots moved on a screen. They would input their answer by moving one of two handles — one held in the right hand, the other in the left.

At first, these two handles required an equal amount of effort to move. But as the tests progressed, the researchers gradually added a load to one of the handles to make it more difficult to move. They report that the volunteers’ responses became gradually more biased towards the free handle as the load increased, even though the dots’ patterns of movement weren’t altered. For example, when weight was added to the left handle, they were more likely to judge the dots as moving to the right — because this answer was easier to express.

When asked about their choices, the participants reported they weren’t aware of the increased load on the handle. This suggests that their movements adapted automatically, which in turn changed their perception of the dots.
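
A minimal sketch of how that bias could be quantified, using made-up response counts rather than the study’s data:

```python
from collections import Counter

def proportion_right(responses):
    """Share of 'right' judgments for physically identical dot stimuli.
    A shift in this share when one handle is weighted is the perceptual
    bias described above. Illustrative numbers only."""
    return Counter(responses)["right"] / len(responses)

unloaded = ["left", "right"] * 50             # even split when both handles move freely
left_loaded = ["left"] * 40 + ["right"] * 60  # same dots, but the left handle is weighted

print(proportion_right(unloaded))     # 0.5
print(proportion_right(left_loaded))  # 0.6 -> judgments drift toward the easier response
```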

“The tendency to avoid the effortful decision remained even when we asked people to switch to expressing their decision verbally, instead of pushing on the handles,” Dr Hagura said.

“The gradual change in the effort of responding caused a change in how the brain interpreted the visual input. Importantly, this change happened automatically, without any awareness or deliberate strategy.”

Matter over mind

“That seems like a lot of work. Let’s watch Netflix instead.” — your brain.
Image credits Carlos Barengo.

Dr Hagura further explained that, up to now, researchers believed that we make decisions based on our senses and that the motor system simply reacts to those decisions — we see a tasty apple in the grocery store and we reach out for it. In this view, the motor system plays no part in our choice to act. The paper suggests that this isn’t entirely true. The effort required to complete a task actually changes how we perceive the object of our desire, playing a central role in influencing our decision.

The team believes their findings can be used to shape everyday decisions by making certain choices more effort-intensive.

“The idea of ‘implicit nudge’ is currently popular with governments and advertisers,” said co-author Professor Patrick Haggard from the UCL Institute of Cognitive Neuroscience.

“Our results suggest these methods could go beyond changing how people behave, and actually change the way the world looks. Most behaviour change focuses on promoting a desired behaviour, but our results suggest you could also make it less likely that people see the world a certain way, by making a behaviour more or less effortful. Perhaps the parent who places the jar of biscuits on a high shelf actually makes them look less tasty to the toddler playing on the floor.”

I’m not particularly big on the idea of being “implicitly nudged” and all, I have to admit. But seeing as my brain is already hard at work doing just that, I guess some counter-manipulation wouldn’t be so bad.

So why does it happen? Well, this effect is probably in place to conserve energy. Our brains evolved over hundreds of thousands of years during which access to food wasn’t guaranteed. One of their prime concerns is thus to make sure you put in as much work as you need to survive — but not much more than that.

The full paper “Perceptual decisions are biased by the cost to act” has been published in the journal eLife.

What babies can see that you can’t anymore

Rubik’s cube

Credit: DALE PURVES, American Scientist

Check out the red chips in these two Rubik’s cubes. Though the chips in the two pictures might look like the same colour, only shaded differently, the ones on the left are actually orange and the ones on the right are purple. Don’t stress yourself too much over this, because it will likely get you nowhere. A four-month-old infant, however, can spot these differences instantly.

That’s because very young babies haven’t yet developed a crucial perceptual skill that enables us to navigate the world properly: perceptual constancy, also known as object constancy or the constancy phenomenon.

To understand how this works, we first need to establish that what our retinas record is different from the images decoded by the brain that we know as sight. This evolutionary adaptation appeared because otherwise our minds would simply be engulfed in chaos by constantly shifting lighting conditions and the constantly changing apparent shapes of objects.

Imagine what it would be like to be consciously aware of people growing physically bigger as they approach you, of objects changing shape as they move, or of colours changing as the lighting changes. You wouldn’t be able to do anything as you tried to wrap your mind around all the chaos. That’s why our brains rely on perception rather than raw recording, keeping things like size, shape, lightness, and colour constant. Although a bus moving towards a bus stop grows on our retina from a dot to twice our height, we don’t perceive it as having grown in size — we’re capable of realising the bus has the same size, rectangular shape, and brightness as it had in the distance.

It’s really a game changer, although some faulty information, like optical illusions, sometimes slips in — a small price to pay, really, for the ability to make sense of the world. But little babies don’t have this constancy fully developed yet. The three snails in the image below, for instance, were featured in a recent paper published by Japanese psychologists at Chuo University, led by Professor Jiale Yang. Which two images are the most similar out of the three?

Computer generated renditions of the same 3D object. Credit: YANG ET AL.

If you answered A and B, you’re wrong — it’s B and C, as these two images of the snail are the most similar in terms of pixel intensity. Even though the physical disparities between B and C are small, we adults think this pair looks the most different. Infants, however, were able to identify the right discrepant pair almost immediately, the researchers found.

Of course, you can't ask a baby which pair is the most similar, because "ga, ga, guu". Instead, Yang and colleagues enlisted 42 babies aged 3 to 8 months and put them in front of a computer screen showing images rendered from real 3-D objects, such as the snails. Previous work had established that when babies are presented with a novel object, they spend more time looking at it than at a familiar one. The researchers found the babies looked at the first and second image for an equivalent amount of time, suggesting they found both images novel and different.

The data shows infants aged 3 to 4 months have a striking ability to spot physical disparities between images, but this ability is gradually lost starting around the age of 5 months. By 7 to 8 months, the babies start to discriminate surface properties like glossy vs. matte and lose this skill.

Previously, other research found that as babies grow up they lose other perceptual skills that adults lack, such as being able to differentiate between very subtle differences in the faces of monkeys, or the ability to distinguish speech sounds in languages other than the one spoken by their own families. Four-month-old babies are also able to tell which crossed foot actually got a tickle, unlike adults, who often mistake which hand is being touched when their hands are crossed.

In other words, we’re most objective during our very first months of life but gradually slip into subjectivity as we age.


Overweight people judge distances as being farther, making it harder to exercise

Our perception does not always reflect reality, as evidenced by numerous studies. The information sent by the eyes to the brain is processed using many shortcuts and assumptions, which makes things more efficient but also introduces biases. One study, for instance, found that people who are overweight will judge an object as being farther away than it really is. This suggests that physical characteristics, rather than self-image, play a major role in shaping perception: people who merely thought they were overweight, but weren't, did not share this bias.

distance long

Image: Pixabay

Sixty-six normal-weight, overweight, and obese Walmart shoppers were recruited for the study. They were asked to estimate distances and inclines, and also to play games such as golf and virtual tennis.


Everyone estimated distances poorly, but overweight and obese individuals tended to overestimate. Image: Acta Psychologica

Everyone seemed to judge distances poorly. What was interesting was how the bias depended on body weight. While people of normal weight judged distances as being shorter than they really were, overweight individuals perceived objects as being farther than they really were. On average, obese people see distances as at least 10% farther than people of average weight do. As for inclines, people of all heights and weights "grossly overestimate" how steep hills are.


The Ebbinghaus illusion (sometimes called the “Titchener illusion”) is an optical illusion of relative size perception. The two orange circles are exactly the same size; however, the one on the left seems smaller. Image: New World Encyclopedia

The findings suggest that overweight people "may choose to perform less physically demanding actions not as a result of how they perceive their bodies, but as a result of how they perceive the environment," the researchers write in the journal Acta Psychologica. This seems to create a vicious cycle of perception and behaviour.

The other experiments further exemplified how perception affects our ability to act. When the researchers used an Ebbinghaus illusion to make a golf hole seem smaller, participants performed worse; conversely, when the illusion made the hole seem bigger, they performed better. A virtual tennis ball was perceived to travel faster when participants held a larger racket, and slower when they used a smaller one, the Colorado State University Fort Collins researchers found.


Do you see a normal face when the mask rotates to its hollow side? Our prior knowledge is that a normal face's nose protrudes, so we subconsciously reconstruct the hollow face into a normal one. Image: PiktoChart


It would be interesting to find the root of this kind of behaviour. Some biases clearly seem to carry an evolutionary advantage. Trekking through the woods, people often mistake twigs for snakes, because it's better to be wrong than to take the risk. Similarly, cars seem to travel faster than they really do because it gives you 'more time' to think and act.


You can never truly kiss anyone!


I am sorry to break it to you, but you can never truly kiss anyone. But hey, it's not your fault! This is a consequence of the fact that two atoms can never really touch each other.

Wait, what?

Push them together as hard as you want, and they will resist.

Two like charges always repel each other, and no two electrons can ever be squeezed into the exact same quantum state (courtesy of the Pauli exclusion principle). Together, these rules curtail any possibility of two objects truly coming together.

This also means that, from an atomic perspective, you can never really touch anything; that chair you think you are sitting on? You are actually hovering just above it!
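To get a feel for how stubbornly electrons push back, here is a rough back-of-the-envelope sketch in Python (my own illustration, not from the original post) that plugs two electron charges into Coulomb's law, F = k·q1·q2 / r², at a few nanometre-scale separations.

```python
# Back-of-the-envelope sketch: Coulomb repulsion between two electrons as they
# are pushed closer together. Separations are chosen purely for illustration.

COULOMB_CONSTANT = 8.9875517923e9   # N*m^2/C^2, the k in F = k*q1*q2 / r^2
ELECTRON_CHARGE = 1.602176634e-19   # C

def coulomb_force(separation_m: float) -> float:
    """Repulsive force (newtons) between two electrons a given distance apart."""
    return COULOMB_CONSTANT * ELECTRON_CHARGE**2 / separation_m**2

# The force grows as 1/r^2: halving the distance quadruples the repulsion.
for separation_nm in (1.0, 0.5, 0.1):
    force = coulomb_force(separation_nm * 1e-9)
    print(f"{separation_nm:>4} nm apart -> {force:.2e} N")
```

The 1/r² term is the key point: every halving of the gap quadruples the repulsion, so "contact" is really just the point where pushing harder buys you almost no extra closeness.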


view-768429_1920

This goes against our intuition, because we certainly know when we have kissed someone. The adrenaline rush and the dopamine surge are unmistakable! So how can you feel it without ever touching?

Atoms love to Share.


You might have heard about bonds in chemistry. Well, as it turns out, although atoms don't like getting too close to one another, they are completely okay with sharing electrons! Hmm…

The Grandeur of touch

hands-437968_1920

If we can’t really touch anything, how can we explain the perception of touch?

This is where it gets astonishing. The answer boils down to how our brains interpret the physical world.

perception1

What your brain perceives as touch is merely the electrons' repulsive force.

A legion of atoms and molecules, collectively known as your skin, interacts with a surface. The repulsive force they experience is relayed to the brain, which then interprets the data as touch!

The sensation of touch is arguably a Grand Illusion, created as a way of interpreting interactions between electrons and electromagnetic fields.

The profundity of nature

It is essential to realize that these are constraints of nature, not some man-made voodoo. This is simply how the laws of nature are laid out.

We are all merely spectators to Nature’s endeavors.

sunset-681840_1920

Delusion

Delusional people actually see the world differently

There's an extremely fine line between delusion and grand vision – just take a look back through history and you'll find a myriad of examples where great minds who justly challenged the status quo were labeled insane and, in even more unfortunate times, heretics. That's not to say that behind every conspiracy theorist or person who puts forward extreme, upside-down hypotheses lies a man of genius or great foresight. Much of the time, these are simply delusions – erroneous views of the world and how it works. A new study addressed how delusions work and how people who have them 'see' the world. The findings are most interesting: according to researchers at Yale University in New Haven, Conn., the delusional mind actually perceives the world in a manner different from what most would class as real.

“Beliefs form in order to minimize our surprise about the world,” said neuroscientist Phil Corlett of Yale University in New Haven, Conn., who was not involved in the study. “Our expectations override what we actually see,” Corlett added.

In other words, our beliefs are guided and formed by our perception. If we perceive something – an event, a concept, even a way of life – differently from how our minds predicted it should be, then we generally update our beliefs. What we see or what we understand of certain events is extremely subjective, and some folks are prone to forming delusions. The Yale University researchers found that people who form delusions typically do not distinguish correctly among different sensory inputs.

This is why some people suffer from extreme cases of paranoia, constantly feeling they're being watched or persecuted. Others have inflated ideas about themselves, believing they hold a much greater position than they actually do. To test how delusions form and how they work in the brain, the researchers conducted a series of experiments.

Seeing the world through different eyes

First off, healthy volunteers with no mental health problems were recruited and asked to fill in a questionnaire that measured how delusional they were. Questions included: "Do you ever feel as if people are reading your mind?", "Do you feel there's a conspiracy against you?", and even "Do you feel like your partner may be unfaithful?".

Next, the study participants were asked to perform a task that tested their visual perception. A sphere-like cloud of dots rotated in an ambiguous direction, and participants had to report, at various intervals, which direction they perceived the dots to be moving in. People who scored high on the questionnaires saw the dots appear to change direction more often than the average person, suggesting what other similar studies had already found: delusional individuals have less stable perceptions of the world.
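For a sense of how such a stimulus can be built, here is a short Python sketch (my own illustration under assumed parameters, not the study's actual code): dots sitting on a rotating sphere are projected flat onto the screen, and because the depth coordinate is thrown away, the very same animation is consistent with rotation in either direction.

```python
# Minimal sketch of an ambiguously rotating dot sphere (structure-from-motion).
# Parameters (dot count, speed) are assumptions for illustration only.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(0)
n_dots = 200

# Random points scattered uniformly over a unit sphere.
phi = rng.uniform(0, 2 * np.pi, n_dots)        # azimuth
cos_theta = rng.uniform(-1, 1, n_dots)         # uniform in cos(elevation)
sin_theta = np.sqrt(1 - cos_theta**2)
x, y, z = sin_theta * np.cos(phi), cos_theta, sin_theta * np.sin(phi)

fig, ax = plt.subplots(figsize=(4, 4))
scatter = ax.scatter(x, y, s=6, c="black")
ax.set_xlim(-1.1, 1.1)
ax.set_ylim(-1.1, 1.1)
ax.set_aspect("equal")
ax.axis("off")

def update(frame):
    # Rotate the sphere about the vertical (y) axis...
    angle = 0.03 * frame
    xr = x * np.cos(angle) + z * np.sin(angle)
    # ...but plot only (xr, y): dropping depth (z) makes the direction ambiguous.
    scatter.set_offsets(np.column_stack([xr, y]))
    return (scatter,)

anim = FuncAnimation(fig, update, frames=600, interval=30, blit=True)
plt.show()
```

Because every frame is just a flat scatter of dots, the brain has to guess the depth ordering for itself, and that guess is exactly the kind of perceptual decision that prior beliefs can tip one way or the other.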

In a second experiment, participants were given 'special glasses' which, they were told, would bias their view so that the rotating dots would appear to go in one direction more often than the other. This was simple trolling on the scientists' part, if you will, since these were nothing but ordinary glasses. The same task was run again, but in two phases: a learning phase, where the dots clearly rotated in one direction, and a test phase, where the direction was ambiguous.

The participants reported seeing the dots rotate in the biased direction even in the test phase, when this clearly wasn't the case. They clung to the belief that the glasses altered their vision even though the visual evidence contradicted it, suggesting they used their delusional beliefs to interpret what they were seeing.

In a third run, the same experiment was repeated, but this time the participants' brains were scanned using fMRI. Brain imaging revealed that when people were deluded about the direction of the dots' rotation, their brains encoded the delusion as if they had really seen the dots move that way. In other words, these people weren't ignoring or actively denying what they saw – they were genuinely perceiving something else. The scans also revealed links between a brain area involved in beliefs, the orbitofrontal cortex, and an area involved in visual processing, the visual cortex.

via Live Science