
Our eyes have a focal point — but images don’t seem to focus on it, weirdly

New research says that if you want to see something better, you shouldn’t look directly at it. At least, that’s what our eyes seem to believe.

Image via Pixabay.

Researchers at the University of Bonn, Germany, report that when we look directly at something, we're not using our eyes to their full potential. When we do this, they explain, light doesn't hit the center of our foveas, where photoreceptors (light-sensitive cells) are most densely packed. Instead, the light (and thus the area where the image is perceived) is shifted slightly upward and toward the nose relative to this central, highly sensitive spot.

While this shift doesn't seem to impair our perception in any meaningful way, the findings will help improve our understanding of how our eyes work, and how we can fix them when they don't.

I spy with my little eye

“In humans, cone packing varies within the fovea itself, with a sharp peak in its center. When we focus on an object, we align our eyes so that its image falls exactly on that spot — that, at least, was the general assumption so far,” says Dr. Wolf Harmening, head of the adaptive optics and visual psychophysics group at the Department of Ophthalmology at the University Hospital Bonn and corresponding author of the paper.

The team worked with 20 healthy subjects from Germany, who were asked to fixate on (look directly at) different objects while monitoring how light hit their retinas using “adaptive optics in vivo imaging and micro-stimulation”. An offset between the point of highest photoreceptor density and where the image formed on the retina was observed in all 20 participants, the authors explain. They hypothesize that this shift is a natural adaptation that helps to improve the overall quality of our vision.

Our eyes function similarly to a camera, but they're not quite the same. In a digital camera, the light-sensitive elements are distributed evenly across the sensor; they all have the same size, properties, and operating principles. Our eyes use two types of cells to pick up on light: rod and cone photoreceptors. The former are useful for seeing motion in dim light, while the latter are suited to picking out colors and fine detail in good lighting conditions.

Unlike in a camera, however, the photosensitive cells in our retinas aren't evenly distributed. They vary quite significantly in density, size, and spacing. The fovea, a specialized central area of our retinas that produces the sharpest vision, has around 200,000 cone cells per square millimeter. At the edges of the retina, this can fall to around 5,000 per square millimeter, which is 40 times less dense. In essence, our eyes produce high-definition images in the middle of our field of view and progressively less-defined images toward the edges. Our brains fill in the missing information around the edges to make it all seem seamless, but if you try to pay attention to something at the edge of your vision, you'll realize how little detail you can actually make out there.
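
To put those numbers in perspective, here is a quick back-of-the-envelope check in Python, using only the approximate densities quoted above:

```python
# Back-of-the-envelope density comparison using the approximate
# figures quoted above (illustrative, not precise anatomy).
foveal_density = 200_000    # cone cells per square millimeter, foveal center
peripheral_density = 5_000  # photoreceptors per square millimeter, retinal edge

ratio = foveal_density / peripheral_density
print(f"The foveal center is ~{ratio:.0f}x more densely packed")
# -> The foveal center is ~40x more densely packed
```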

It would, then, seem very counterproductive to have the image of whatever we're looking at form away from this sweet spot. Wouldn't we want the best view of whatever we're, you know, viewing? The team explains that this is likely an adaptation to the way human sight works: both eyes, side by side, peering out in the same direction.

All 20 participants in the study showed the same shift, slightly upward and toward the nose relative to the point of peak cone density. For some the offset was larger, for others smaller, but its direction was the same in every participant, and it was symmetrical between both eyes. Follow-up examinations carried out one year after the initial trials showed that these focal points had not moved in the meantime.

“When we look at horizontal surfaces, such as the floor, objects above fixation are farther away,” explains Jenny Lorén Reiniger, a co-author of the paper. “This is true for most parts of our natural surroundings. Objects located higher appear a little smaller. Shifting our gaze in that fashion might enlarge the area of the visual field that is seen sharply.”

“The fact that we were able to detect [this offset] at all is based on technical and methodological advances of the last two decades,” says Harmening.

One other interesting conclusion the authors draw is that, despite the huge number of light-sensitive cells our retinas contain, we only use a small fraction of them, around a few dozen, when focusing on a single point. What's more, it's probably the same cells throughout our lives, as the focal point doesn't seem to move over time. This makes for a fun bit of trivia, but it's also valuable for researchers trying to determine how best to repair eyes and restore vision after damage or disease.

The paper “Human gaze is systematically offset from the center of cone topography” has been published in the journal Current Biology.

How the eye works

 

Image via Flickr.

Doing some light reading

Touch interprets changes in pressure, texture, and heat in the objects we come into contact with. Hearing picks up on pressure waves, and taste and smell read chemical markers. Sight is the only sense that allows us to make heads or tails of some of the electromagnetic waves zipping all around us. In other words, seeing requires light.

Apart from fire (and other incandescent materials), bioluminescent sources, and man-made objects (such as the screen you're reading this on), our environment generally doesn't emit light for our eyes to pick up on. Instead, objects become visible when part of the light from other sources reflects off of them.

Let's take an apple tree as an example. Light travels in a (relatively) straight line from the sun to the tree, where different wavelengths are absorbed by the leaves, bark, and apples themselves. What isn't absorbed bounces back and meets the first layer of our eyes: the thin film of tears that protects and lubricates the organ. Under it lies the cornea, a thin sheet of innervated, transparent cells.

Behind it sits a body of liquid named the aqueous humor. This clear fluid keeps constant pressure on the cornea so it doesn't wrinkle and maintains its shape. That's a pretty important role, as the cornea provides two-thirds of the eye's optical power.

Anatomy of the eye. Image via Flickr.

The light is then directed through the pupil. No, there are no schoolkids in your eye; the pupil is the central, circular opening of the iris, the pretty-colored part of our eyes. The iris contracts or relaxes to allow an optimal amount of light to enter deeper into the eye. Without it working to regulate exposure, our eyes would be burned when it got bright and would struggle to see anything when it got dark.

The final part of the eye's focusing mechanism is called the crystalline lens. It has only half the focusing power of the cornea, but its most important property is that it can change that power. The lens is attached at its equator to a ring of fibrous tissue that pulls on it to change its shape (a process known as accommodation), allowing the eye to focus on objects at various distances.
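
To see why this shape-changing matters, here is a minimal sketch using the textbook thin-lens relation 1/f = 1/d_object + 1/d_image. The ~17 mm effective image distance and the object distances are standard textbook approximations, not figures from this article:

```python
# Thin-lens sketch of accommodation: the total optical power (in diopters,
# i.e. 1/meters) needed to focus objects at different distances onto a
# retina roughly 17 mm behind the eye's optics. Textbook approximations only.
IMAGE_DISTANCE_M = 0.017  # effective distance from the eye's optics to the retina

def required_power(object_distance_m: float) -> float:
    """Total eye power in diopters from the thin-lens equation."""
    return 1.0 / object_distance_m + 1.0 / IMAGE_DISTANCE_M

far = required_power(1e9)    # object effectively at infinity: ~59 D
near = required_power(0.25)  # object at reading distance (25 cm): ~63 D

# The difference is what accommodation must supply by reshaping the lens.
print(f"Extra power needed up close: ~{near - far:.1f} diopters")  # ~4 D
```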

Fun fact: You can actually observe how the lens changes shape. Looking at your monitor, hold your hands up some 5-10 centimeters (2-4 inches) in front of your eyes and look at them to the count of ten. Then put them down; the blurry images during the first few moments and the weird feeling in your eyes are the crystalline lens stretching to adapt to the new focal distance.
Science at its finest.

After going through the lens, light passes through a second, more jelly-like body of fluid (the vitreous humor) and falls on an area known as the retina. The retina lines the back of the eye and is the part that actually processes the light. A lot of different parts of the retina work together to keep our sight crisp and clear, but three of them are key to understanding how we see.

  • First, the macula. This is the "bull's eye" of the retina. At the center of the macula there's a slight dip named the fovea centralis (fovea is Latin for pit). As it lies at the focal point of the eye, the fovea is jam-packed with light-sensitive nerve endings called photoreceptors.
  • Photoreceptors. These come in two categories: rods and cones. They're structurally and functionally different, but both serve to encode light as electro-chemical signals.
  • Retinal pigment epithelium. The RPE is a layer of dark tissue whose cells absorb excess light to improve the accuracy of our photoreceptors' readings. It also delivers nutrients to, and clears waste from, the retina's cells.

So far you've learned about the internal structure of your eyes and how they capture light, focus it, and translate it into electro-chemical signals. They're wonderfully complex systems, and you have two of them. Enjoy!

There’s still something I have to tell you about seeing, however. Don’t be alarmed but….

The images are all in your head

While the eyes focus and encode light into the electrical signals our nervous system uses to communicate, they don't see per se. The information is carried by the optic nerves to the back of the brain for processing and interpretation. This all takes place in an area of our brain known as the visual cortex.

Brain shown from the side, facing left. Above: view from outside; below: cut through the middle. Orange = Brodmann area 17 (primary visual cortex). Image via Wikipedia.

Because they're wedged in your skull a short distance apart from each other, each of your eyes feeds a slightly different picture to your brain. These little discrepancies are put to good use: by comparing the two images, the brain can tell how far away an object is. This is the mechanism that 'magic eye' or autostereogram pictures exploit, causing 2D images to appear three-dimensional. Other cues, like shadows, textures, and prior knowledge, also help us judge depth and distance.
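
As a toy illustration of depth-from-disparity, here is the classic pinhole stereo formula, depth = baseline x focal length / disparity. The numbers are rough, illustrative assumptions, not measurements from the article:

```python
# Toy depth-from-disparity calculation using the classic pinhole stereo
# model. All numbers are rough illustrative assumptions.
BASELINE_M = 0.063      # typical distance between human pupils
FOCAL_LENGTH_M = 0.017  # rough effective focal length of the eye

def depth_from_disparity(disparity_m: float) -> float:
    """Distance to an object, given the offset between the two retinal images."""
    return BASELINE_M * FOCAL_LENGTH_M / disparity_m

# Larger disparities mean closer objects; smaller ones mean distant objects.
print(f"{depth_from_disparity(0.001):.1f} m away")   # ~1.1 m: nearby
print(f"{depth_from_disparity(0.0001):.1f} m away")  # ~10.7 m: far off
```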


Neurons in the visual cortex work together to reconstruct the image based on the raw information the eyes feed them. Many of these cells respond specifically to edges oriented in a certain direction. From there, the brain builds up the shape of an object. Information about color and shading is also used as a further clue, and what we're seeing is compared against the data stored in our memory so we can understand what we're looking at. Objects are recognized mostly by their edges, and faces by their surface features.

Brain damage can lead to conditions such as agnosia, which impairs object recognition (an inability to recognize the objects one is seeing). A man suffering from agnosia was asked to look at a rose and described it as 'about six inches in length, a convoluted red form with a linear green attachment'. He described a glove as 'a continuous surface infolded on itself, it appears to have five outpouchings'. His brain had lost the ability to name the objects he was seeing or to recognize what they were used for, even though he knew what a rose or a glove was. Occasionally, agnosia is limited to a failure to recognize faces or an inability to comprehend spoken words despite intact hearing, speech production, and reading ability.

The brain also handles the recognition of movement in images. Akinetopsia, a condition that impairs movement recognition, is caused by lesions in the posterior side of the visual cortex. People suffering from it stop seeing objects as moving, even though their sight is otherwise normal. One woman, who suffered such damage following a stroke, described that when she poured a cup of tea the liquid appeared frozen in mid-air, like ice. When walking down the street, she saw cars and trams change position, but never actually saw them move.


Retina implant restores sight to the blind

In the culmination of 15 years' worth of painstaking research on retina implants, scientists from Germany and Hungary have for the first time demonstrated that a light-sensitive electronic chip, implanted under the retina, can restore useful vision in patients blinded by hereditary retinal degeneration.

As part of the research, nine previously blind people have had their vision partially restored. They can now identify objects in their surroundings and have become more independent, allowing them to live a life closer to normal. One participant in particular showed extraordinary improvement: he was able to discern seven shades of grey, read the hands of a clock, and combine the letters of the alphabet into words.

The 3 mm x 3 mm implant has 1,500 pixels, each an independent microphotodiode-amplifier-electrode element. It is meant to be surgically implanted below the fovea (the area of sharpest vision in the retina) and is powered by a subdermal coil behind the ear.
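
A quick sanity check of the chip's geometry, using the figures quoted above plus the textbook average of roughly 0.288 mm of retina per degree of visual angle (that conversion factor is my assumption, not a number from the article):

```python
import math

# Rough geometry of the subretinal chip described above: 3 mm x 3 mm,
# 1,500 pixels. The mm-per-degree retinal conversion is a textbook average
# and an assumption on my part, not a figure from the article.
CHIP_SIDE_MM = 3.0
N_PIXELS = 1500
MM_PER_DEGREE = 0.288  # approximate retinal magnification in humans

pixels_per_side = math.sqrt(N_PIXELS)                   # ~38.7
pixel_pitch_um = CHIP_SIDE_MM / pixels_per_side * 1000  # ~77 micrometers
diagonal_deg = CHIP_SIDE_MM * math.sqrt(2) / MM_PER_DEGREE

print(f"Pixel pitch: ~{pixel_pitch_um:.0f} micrometers")
print(f"Diagonal field of view: ~{diagonal_deg:.0f} degrees")  # ~15, as reported
```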

“So far, our approach using subretinal electronic implants is the only one that has successfully mediated images in a trial with freely moving blind persons by means of a light sensor array that moves with the eye,” the scientists said.

“All the other current approaches require an extraocular camera that does not link image capture to eye movements, which, therefore, does not allow the utilization of microsaccades for refreshing the perceived images.”

In people suffering from hereditary retinal degeneration, the photoreceptors in the retina progressively degenerate, often causing blindness in adult life. Unfortunately, there is no viable treatment that can prevent this from happening; however, frontline research like this might offer these patients a chance at a more normal life.

Patients implanted with the device now possess a diamond-shaped visual field measuring 15 degrees diagonally across the chip's corners. This is poor vision by any measure, and the visual field is tiny, but compared to the pitch-black darkness these patients previously lived in, even partial eyesight restoration is nothing short of a godsend. In the video below, for instance, a study participant needed two minutes to recognize and read a succession of letters forming the word "MIKA". Remarkably, the patient read it correctly and signaled to the researchers that they had spelled his name, Mikka, wrong; of course, the misspelling was intentional.

The findings were reported in the journal Proceedings of the Royal Society B.

The work was made possible thanks to a long-standing collaborative effort between the University Eye Hospitals in Tübingen and Regensburg, the Institute for Microelectronics in Stuttgart (IMS), and the Natural and Medical Sciences Institute (NMI) in Reutlingen, as well as Retina Implant AG and Multi Channel Systems (MCS).


Right below the eyes is the best place to look to read a person

Eye contact plays a very important role in human interactions; however, a recent study by psychologists at UC Santa Barbara found that looking just below the eyes is the best way to get a feel for what a person is up to. Apparently, most of us are already hard-wired to direct our initial gaze to this point, albeit unconsciously and for an extremely short period.

“It’s pretty fast, it’s effortless –– we’re not really aware of what we’re doing,” said Miguel Eckstein, professor of psychology in the Department of Psychological & Brain Sciences.

Points within the face (green circles) where, on average, each of 50 participants first looked when trying to identify the faces of famous people. The white circle corresponds to the average across all participants. The background is an average of 120 celebrity faces. (c) UCSB

Miguel Eckstein and Matt Peterson used high-speed eye-tracking cameras, more than 100 photos of faces, and a sophisticated algorithm to pinpoint the first place participants looked when fixing their gaze on a person in order to assess their identity, gender, and emotional state.

“For the majority of people, the first place we look at is somewhere in the middle, just below the eyes,” Eckstein said.

The whole initial, involuntary glance lasts a mere 250 milliseconds. Yet despite this, and the relatively featureless point of focus, during these highly important initial moments our brain performs incredibly complex computations that plan eye movements in advance to ensure the best information gathering possible, as well as to assess whether it's time to run, fight, or engage.

“When you look at a scene, or at a person’s face, you’re not just using information right in front of you,” said Peterson.

The eyes are the windows to one's soul, but what lies beneath them?

You might have noticed that whenever you look at something, the center of your point of view appears more refined and clearer than its surroundings, which offer less spatial detail. The high-resolution area is picked up by a region of the eye known as the fovea, a slight depression in the retina.

When sitting next to a person at conversational distance, the fovea can read the person's whole face in great detail and catch even the most subtle gestures. Detailed spatial information about facial features like the nose, mouth, and eyes is readily available. Despite this, when study participants were asked to assess the identity, gender, and emotions of an individual based on a photograph of, for instance, the forehead or mouth alone, they did not perform as well as when looking close to the eyes.

These empirical data were correlated with the output of a sophisticated computer algorithm that mimics the varying spatial detail of human processing across the visual field and integrates all the information to make decisions. This allowed the researchers to predict the best place within a face to look for each of these perceptual tasks. The common conclusion from both the computer model and the human participants is that just below the eyes is the optimal place to look, say the scientists, because it allows one to read information from as many features of the face as possible.

“What the visual system is adept at doing is taking all those pieces of information from your face and combining them in a statistical manner to make a judgment about whatever task you’re doing,” said Eckstein.
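
To make that idea concrete, here is a minimal toy sketch of such statistical combination. It assumes, purely for illustration, that each facial feature contributes a cue whose reliability decays with distance from fixation; the feature positions and the decay model are invented, not the study's actual algorithm:

```python
import math

# Toy "foveated observer": each facial feature provides a cue whose
# reliability decays with distance from the fixation point. The best
# fixation maximizes the total reliability gathered across features.
# Positions and the decay model are illustrative assumptions only.
FEATURES = {
    "left eye":  (-0.3,  0.6),
    "right eye": ( 0.3,  0.6),
    "nose":      ( 0.0,  0.0),
    "mouth":     ( 0.0, -0.6),
}  # (x, y) positions on a face, arbitrary units

def reliability(fixation, feature_pos, falloff=1.0):
    """Cue reliability decays exponentially with distance from fixation."""
    return math.exp(-falloff * math.dist(fixation, feature_pos))

def total_information(fixation):
    return sum(reliability(fixation, pos) for pos in FEATURES.values())

# Scan candidate fixations along the face's vertical midline.
candidates = [(0.0, y) for y in (-0.6, -0.3, 0.0, 0.3, 0.6)]
best = max(candidates, key=total_information)
print(f"Best fixation: {best}")  # lands below the eyes, near the nose
```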

This doesn't seem to be a general rule for all humans, though. Previous research, say the scientists involved with the paper published in the journal PNAS, has found that East Asians, for instance, tend to look lower on the face when identifying a person. Next, Peterson and Eckstein are looking to refine their algorithm in order to provide insight into conditions like schizophrenia and autism, which are associated with uncommon gaze patterns, or prosopagnosia, an inability to recognize someone by his or her face.

Source: UCSB