
Spanish researchers developed an “artificial retina” that beams sight directly into the brain of blind patients

A team of Spanish researchers is working to restore sight to blind people by directly stimulating their brains and bypassing the eyes entirely.

The 57-year-old participant of the study during testing of the device. Image credits Asociación RUVID.

Current efforts to address blindness generally revolve around the use of eye implants or procedures to restore (currently limited) functionality to the eye. However, a team of Spanish researchers is working on an alternative approach: bypassing the eyeball entirely.

Their work involves the use of an artificial retina, mounted on an ordinary pair of glasses, that feeds information directly into the users’ brains. The end result is that users can perceive images of what the retina can see. In essence, they’re working to create artificial eyes.

Eyeball 2.0

“The amount of electric current needed to induce visual perceptions with this type of microelectrode is much lower than the amount needed with electrodes placed on the surface of the brain, which means greater safety,” explains Fernández Jover, a Cellular Biology Professor at Miguel Hernández University (UMH) of Spain, who led the research.

The device picks up light from the visual field in front of the glasses and encodes it into electrical signals that the brain can understand. These are then transmitted to an array of 96 micro-electrodes implanted in the user’s brain.

The implanted array itself measures around 4 mm (0.16 inches) in width, and each electrode is 1.5 mm (0.06 inches) long. These electrodes come into direct contact with the visual cortex of the brain. Here, they both feed data to the neurons and monitor their activity.
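For a sense of what such an encoder has to do, here is a minimal sketch in Python. Everything in it is assumed for illustration (the grid layout, the current cap, and the brightness-to-current mapping are not details from the study); it only shows the general idea of condensing a camera frame into one stimulation level per electrode.

```python
import numpy as np

# Hypothetical parameters, for illustration only (not from the study):
N_SIDE = 10            # a 10x10 grid; 96 of the 100 sites are wired up
MAX_CURRENT_UA = 25.0  # per-electrode current cap, in microamps

def encode_frame(frame: np.ndarray) -> np.ndarray:
    """Condense a grayscale camera frame (H x W, values 0-255) into one
    stimulation amplitude per electrode."""
    h, w = frame.shape
    bh, bw = h // N_SIDE, w // N_SIDE
    # average the brightness inside each electrode's patch of the visual field
    blocks = frame[:bh * N_SIDE, :bw * N_SIDE].reshape(N_SIDE, bh, N_SIDE, bw)
    brightness = blocks.mean(axis=(1, 3)) / 255.0
    # brighter patch -> stronger stimulation, always below the safety cap
    return brightness * MAX_CURRENT_UA

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640))   # stand-in camera frame
amps = encode_frame(frame)
print(amps.shape, amps.max())                   # (10, 10), all <= 25.0 uA
```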

So far, we have encouraging data on the validity of such an approach. The authors successfully tested a 1,000-electrode version of their system on primates last year (although the animals weren’t blind). More recently, they worked with a 57-year-old woman who had been blind for over 16 years. After a training period — needed to teach her how to interpret the images produced by the device — she has successfully identified letters and the outlines of certain objects.

The device was removed 6 months after being implanted with no adverse effects. During this time, the authors worked with their participant to document exactly how her brain activity responds to the device, to analyze the learning process, and to check whether the use of this device would lead to any physical changes in the brain.

Although limited in what images it can produce so far, the good news is that the system doesn’t seem to negatively interfere with the workings of the visual cortex or the wider brain. The authors add that because the system requires lower levels of electrical energy to work than other systems which involve electrode stimulation of the brain, it should also be quite safe to use.

Such technology is still a long way away from being practical in a day-to-day setting, and likely even farther away from being commercially available. There are still many issues to solve before that can happen, and safely addressing these will take a lot of research time and careful tweaking. But the results so far are definitely promising and a sign that we’re going the right way. The current study was limited in scope and duration but, based on the results, the authors are confident that a longer training period with the artificial retina would allow users to more easily recognize what they’re seeing.

The team is now working on continuing their research by expanding their experiments to include many more blind participants. They’re also considering stimulating a greater number of neurons at the same time, which should allow the retina to produce much more complex images in the participants’ minds. During the course of this experiment, they also designed several video games to help their participant learn how to use the device. The experience gained during this study, as well as these video games, will help improve the experience of future users and give them the tools needed to enjoy and understand the experience more readily.

Apart from showcasing the validity of such an approach, the experiments also go a long way toward proving that microdevices of this type can be safely implanted and explanted in living humans, and that they can interact with our minds and brains in a safe and productive way. Direct electrode stimulation of the brain is a risky proposition, but the team showed that this can be performed using safe, low levels of electrical current and still yield results.

Professor Fernández Jover believes that neuroprosthetics such as the one used in this experiment are a necessity for the future. There simply aren’t any viable alternative treatments or aids for blind people right now. Although retina prostheses are being developed, many patients cannot benefit from them, such as people who have suffered damage to their optic nerves. The only way to work around such damage right now is to send visual information directly into the brain.

This study proves that it can be done. It also shows that our brains can still process visual information even after a prolonged period of total blindness, giving cause for hope for many people around the world who have lost their sight.

The paper “Visual percepts evoked with an Intracortical 96-channel microelectrode array inserted in human occipital cortex” has been published in The Journal of Clinical Investigation.

Our eyes have a focal point — but images don’t seem to focus on it, weirdly

New research says that if you want to see something better, you shouldn’t look directly at it. At least, that’s what our eyes seem to believe.

Image via Pixabay.

Researchers at the University of Bonn, Germany, report that when we look directly at something, we’re not using our eyes to their full potential. When we do this, they explain, light doesn’t hit the center of our foveas, where photoreceptors (light-sensitive cells) are most densely packed. Instead, light (and thus, the area where images are perceived) is shifted slightly upwards and towards the nose relative to this central, highly sensitive spot.

While this shift doesn’t seem to really impair our perception in any meaningful way, the findings will help improve our understanding of how our eyes work and how we can fix them when they don’t.

I spy with my little eye

“In humans, cone packing varies within the fovea itself, with a sharp peak in its center. When we focus on an object, we align our eyes so that its image falls exactly on that spot — that, at least, was the general assumption so far,” says Dr. Wolf Harmening, head of the adaptive optics and visual psychophysics group at the Department of Ophthalmology at the University Hospital Bonn and corresponding author of the paper.

The team worked with 20 healthy subjects from Germany, who were asked to fixate on (look directly at) different objects while monitoring how light hit their retinas using “adaptive optics in vivo imaging and micro-stimulation”. An offset between the point of highest photoreceptor density and where the image formed on the retina was observed in all 20 participants, the authors explain. They hypothesize that this shift is a natural adaptation that helps to improve the overall quality of our vision.
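To make the comparison concrete, here is a toy version of that measurement in Python: build a cone-density map, find its peak, and measure how far a fixation locus lands from it. The density profile and the fixation point are synthetic stand-ins, not data from the study.

```python
import numpy as np

# Synthetic cone-density map (cones/mm^2) on a 1-micrometer grid near the fovea
size_um = 400
y, x = np.mgrid[0:size_um, 0:size_um]
cy = cx = size_um / 2
density = 200_000 * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * 120 ** 2))

# Where is the retina most densely packed with cones?
peak = np.unravel_index(np.argmax(density), density.shape)   # (row, col)

# Hypothetical measured fixation locus: slightly up and toward the nose
fixation = (cy - 8, cx - 5)

offset = np.subtract(fixation, peak)
print(f"fixation offset (um): vertical {offset[0]:+.0f}, horizontal {offset[1]:+.0f}")
```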

Our eyes function similarly to a camera, but they’re not really the same. In a digital camera, light-sensitive elements are distributed evenly across the surface of their sensors. They’re the same all over the sensor, with the same size, properties, and operating principles. Our eyes use two types of cells to pick up on light, the rod and cone photoreceptors. The first kind is useful for seeing motion in dim light, and the latter is suited to picking out colors and fine detail in good lighting conditions.

Unlike in a camera, however, the photosensitive cells in our retinas aren’t evenly distributed. They vary quite significantly in density, size, and spacing. The fovea, a specialized central area of our retinas that produces the sharpest vision, has around 200,000 cone cells per square millimeter. At the edges of the retina, this can fall to around 5,000 per square millimeter, which is 40 times less dense. In essence, our eyes produce high-definition images in the middle of our field of view and progressively less-defined images towards the edges. Our brains kind of fill in the missing information around the edges to make it all seem seamless — but if you try to pay attention to something at the edges of your vision, you’ll notice how little detail you can actually make out there.

It would, then, seem very counterproductive to have the image of whatever we’re looking at form away from this point of peak cone density. Wouldn’t we want to have the best view of whatever we’re, you know, viewing? The team explains that this is likely an adaptation to the way human sight works: both eyes, side by side, peering out in the same direction.

All 20 participants in the study showed the same shift, which was slightly upwards and towards the nose relative to the point of peak cone density. For some, this offset was larger, for some, smaller, but its direction was the same for all participants, and all of them showed symmetry in the offset between both eyes. Follow-up examinations carried out one year after the initial trials showed that these focal points had not moved in the meantime.

“When we look at horizontal surfaces, such as the floor, objects above fixation are farther away,” explains Jenny Lorén Reiniger, a co-author of the paper. “This is true for most parts of our natural surroundings. Objects located higher appear a little smaller. Shifting our gaze in that fashion might enlarge the area of the visual field that is seen sharply.”

“The fact that we were able to detect [this offset] at all is based on technical and methodological advances of the last two decades,” says Harmening.

One other interesting conclusion the authors draw is that, despite the huge number of light-sensitive cells our retinas contain, we only use a small fraction of them — around a few dozen — when focusing on a single point. What’s more, it’s probably the same cells throughout our lives, as the focal point doesn’t seem to move over time. While this is an interesting tidbit of trivia, it’s also valuable for researchers trying to determine how best to repair eyes and restore vision following damage or disease.

The paper “Human gaze is systematically offset from the center of cone topography” has been published in the journal Current Biology.

Artificial eye paves the way for cyborg vision

Researchers have devised an artificial eye that mimics the structure of the human eye, which has important applications in robotics, scientific measurements, as well as cyborg-like prosthetics that restore vision.

Artist’s impression of an artificial eye. Credit: Yaying Xu.

The proof-of-concept, which was recently described in the journal Nature by a team led by Zhiyong Fan from the Hong Kong University of Science and Technology, is about as sensitive to light as its natural counterpart. What’s more, it even has a faster reaction time than the real thing (30 to 40 milliseconds, rather than 40 to 150 milliseconds).

The human eye is nothing short of spectacular — and much of what it’s capable of doing is owed to the dome-shaped retina, an area at the back of the eyeball that is littered with light-detecting cells.

There are around ten million photoreceptor cells per square centimeter, enabling a wide field of view and excellent resolution that has yet to be replicated by any man-made technology.

For many years, scientists have sought to replicate these characteristics in synthetic eyeballs. However, such efforts proved extremely challenging due to the inherent difficulties in mimicking the shape and composition of the human retina.

Fan and colleagues devised a hemispherical artificial retina, measuring only two centimeters in diameter and containing densely packed, light-sensitive nanowires made from a perovskite — a promising material that is very popular in solar cell manufacturing. The purpose of these nanowires is to mimic the photoreceptors of the human eye.

Schematic of the artificial eye. Credit: H. JIANG/NATURE 2020.

The artificial eye’s hollow center is filled with a conductive fluid, whereas the human eye is filled with a clear gel called vitreous humour.

In an experiment, the artificial eye was hooked up to a computer and could “see” by reconstructing the letters ‘E’, ‘I’, and ‘Y’.

However, this is a far cry from the capabilities of the biological eye. The array consists of just 100 pixels, where each pixel corresponds to three nanowires.

This is a proof of concept, though, and Fan is confident that his design can be scaled up so that the artificial eye can attain a resolution even higher than the human eye’s. According to Fan and colleagues, the nanowires could be packed to ten times the density of photoreceptors in the human eye.
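A quick back-of-the-envelope check shows what that density claim implies for spacing, assuming a simple square lattice of nanowires and the ten-million-per-square-centimeter figure quoted above:

```python
# If the human fovea packs ~10 million photoreceptors per cm^2 (see above),
# a tenfold-denser artificial retina needs 1e8 nanowires per cm^2.
human_density_per_cm2 = 1e7
target_density_per_cm2 = 10 * human_density_per_cm2   # 1e8 per cm^2

# On a square lattice, one wire sits in each pitch x pitch cell:
pitch_cm = (1.0 / target_density_per_cm2) ** 0.5
print(f"required nanowire pitch: {pitch_cm * 1e4:.1f} micrometers")  # ~1.0
```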

Each nanowire could theoretically function as a small solar cell, which means that artificial eyes might one day work without the external power source that the researchers’ current device requires.

The researchers envision applications in scientific measurements and advanced robotics. But, theoretically, the artificial eye could also be connected to an optic nerve, enabling the brain to process information received from the device like it would with a real eye. This latter prospect, however, is years and years away — but it’s still incredibly exciting.

Bigger boost in robot’s field of view

Oregon State University’s team has earned some serious bragging rights: they’ve come up with an optical sensor that can mimic the human eye. Think of robots, ones that are built to track moving objects. Roboticists dealing with such machines wouldn’t have to play with complex image processing anymore — they could rely on this optical sensor to do the job.

The human eye, while not nearly as highly performant as some of its counterparts from the animal kingdom, is still a magnificent structure. Replicating its functionality in robots has proven immensely challenging, but the OSU team’s work brings us one step closer to it, as their robot eye is able to closely match the human eye’s ability to perceive changes in its visual field.

Due to the way the team’s sensor works, a static item in the robot’s field of view draws no response; a moving object, however, registers a spike in voltage. Science Focus summed up the importance of their work thusly:

“Currently, computers receive information in a step-by-step way, processing inputs as a series of data points, whereas this technology helps build a more integrated system. For artificial intelligence, researchers are attempting to build on human brains which contain a network of neurons, communicating cells, able to process information in parallel.”

For example, the OSU team proceeded to simulate an array of “retinomorphic” (human eye-type) sensors that predict how a retina-like video camera would respond to visual stimuli. The idea was to input videos into one of these arrays and process that information in the same way a human eye would. For instance, one such simulation shows a bird flying into view, then all but disappearing as it stops at an invisible bird feeder. The bird reappears as it takes off. The feeder, swaying, becomes visible only as it starts to move. But you don’t just need the eye, you also need the processing power — which in the case of humans, is provided by the brain. The OSU team also tried to replicate that.
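The behavior described above (a strong response to change, none to a static scene) can be mimicked with a simple high-pass filter over time. The Python sketch below is a stand-in model of that idea, not the OSU team’s actual simulation code; the decay constant and the toy "bird" stimulus are invented.

```python
import numpy as np

def retinomorphic_response(frames: np.ndarray, decay: float = 0.8) -> np.ndarray:
    """frames: (T, H, W) illumination over time. Each pixel reports the
    difference between the current frame and a slowly adapting baseline,
    so static scenes fade to zero output."""
    out = np.zeros_like(frames, dtype=float)
    baseline = frames[0].astype(float)
    for t in range(1, len(frames)):
        out[t] = frames[t] - baseline                          # respond to change
        baseline = decay * baseline + (1 - decay) * frames[t]  # re-adapt
    return out

# a "bird" (bright pixel) flies in at t=5, sits still, and leaves at t=25
T, H, W = 30, 8, 8
frames = np.zeros((T, H, W))
frames[5:25, 4, 4] = 1.0
resp = retinomorphic_response(frames)
print(round(resp[6, 4, 4], 2), round(resp[20, 4, 4], 2))  # strong, then ~0
```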

The team’s paper appears in Applied Physics Letters, explaining that “neuromorphic computation is the principle whereby certain aspects of the human brain are replicated in hardware. While great progress has been made in this field in recent years, almost all input signals provided to neuromorphic processors are still designed for traditional (von Neumann) computer architectures.”

You may have already read about researchers exploring devices that behave like eyes, especially retinomorphic devices. But previous attempts to build a human-eye type of device relied on software or complex hardware, said John Labram, an assistant professor of electrical and computer engineering.

The Science Focus piece describes why he stepped up to this kind of research effort. Labram was “initially inspired by a biology lecture he played in the background, which detailed how the human brain and eyes work.” Our eyes are very sensitive to changes in light, the piece explains, but less responsive to constant illumination. This marked the core of a new approach for devices that mimic photo-receptors in our eyes.

The innovation in this work lies mostly in the materials and the technique they used. The authors discuss how “a simple photosensitive capacitor will inherently reproduce certain aspects of biological retinas.” Their design uses ultrathin layers of perovskite semiconductors — perovskite being a mineral also used for solar panels, among other applications. The perovskite layer is a few hundred nanometers thick and works as a capacitor whose capacitance varies under illumination.

These change from strong electrical insulators to strong conductors when exposed to light. “You can think of it as a single pixel doing something that would currently require a microprocessor,” said Labram, for the university’s news site.

Their human eye-like sensor would not just be useful for object-tracking robots, though. Consider that “neuromorphic computers” belong to a next generation of artificial intelligence in applications like self-driving cars. Traditional computers process information sequentially as a series of instructions; neuromorphic computers emulate the human brain’s massively parallel networks, said the OSU report.

The human eye can tell day from night with three types of cells

The circadian rhythm — our biological internal clock that regulates the sleep-wake cycle and resets every 24 hours — plays a major role in health. How exactly our bodies are able to synchronize with day-night cycles has been a matter of debate. Now, a new study has discovered that human eyes have three types of specialized light-sensing cells, a finding with important applications in preventing circadian rhythm disruptions.

Credit: Pixabay.

Researchers at the Salk Institute developed a new method that can keep retina samples healthy and functional well after a donor passed away. Such samples were placed on an electrode grid that allowed the researchers to study how the retina reacted to light.

Several colors of light were tested, which showed that a small group of cells in the retina — known as intrinsically photosensitive retinal ganglion cells (ipRGCs) — started firing about 30 seconds after they interacted with a pulse of light. After the light was turned off, the cells took several seconds to stop firing.

The cells were the most sensitive to blue light, which is the type of light used in LCD screens employed by most smartphones and laptops. Blue light also inhibits the production of melatonin, keeping us awake and messing with our natural sleep cycles.

Follow-up experiments revealed that there are, in fact, three types of ipRGCs.

  • Type 1 responds to light relatively quickly but takes a long time to turn off;
  • Type 2 takes longer to turn on and is also slow to turn off;
  • Type 3 responds only when the light is very bright, but it turns on faster than types 1 and 2 and switches off as soon as the light is gone.
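A toy simulation can reproduce those three qualitative profiles. In the Python sketch below, the time constants and the brightness threshold are invented for illustration; only the shapes of the responses follow the descriptions above.

```python
import numpy as np

t = np.arange(0, 60, 0.1)                      # seconds, 0.1 s steps
light = ((t > 10) & (t < 40)).astype(float)    # a 30-second pulse of light

def respond(stimulus, tau_on, tau_off, threshold=0.0, dt=0.1):
    """First-order response with separate on/off time constants; a cell with
    a threshold stays silent unless the stimulus is bright enough."""
    out, y = np.empty_like(stimulus), 0.0
    for i, s in enumerate(stimulus):
        drive = s if s > threshold else 0.0
        tau = tau_on if drive > y else tau_off
        y += (drive - y) * dt / tau
        out[i] = y
    return out

type1 = respond(light, tau_on=1.0, tau_off=10.0)   # fires fast, lingers
type2 = respond(light, tau_on=8.0, tau_off=10.0)   # slow on, slow off
dim    = respond(0.5 * light, tau_on=0.5, tau_off=0.5, threshold=0.6)
bright = respond(1.0 * light, tau_on=0.5, tau_off=0.5, threshold=0.6)
print(dim.max(), bright.max())  # type 3: silent in dim light, brisk when bright
```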

These cells may explain some very peculiar findings reported by other studies. For instance, blind people are able to align their sleep-wake cycles and circadian rhythms to the day-night cycle despite not being able to see. The new study may explain how they were able to sense light despite their visual impairment.

“We have become mostly an indoor species, and we are removed from the natural cycle of daylight during the day and near-complete darkness at night,” said Satchidananda Panda, senior author of the study and a professor at the Salk Institute.

“Understanding how ipRGCs respond to the quality, quantity, duration, and sequence of light will help us design better lighting for neonatal ICUs, ICUs, childcare centers, schools, factories, offices, hospitals, retirement homes and even the space station,” he added.

Although ipRGCs are responsible for sending light signals to the brain, they also work closely with rods and cones. The researchers believe that the ipRGCs may combine their light sensitivity with the light detected by purely visual cells to enhance brightness and add contrast.

“This adds another dimension to designing better televisions, computer monitors and smartphone screens in which changing the proportion of blue light can trick the brain into seeing an image as bright or dim,” says Panda.

In the future, the researchers plan on conducting more experiments on ipRGCs under different conditions of light color, intensity, and duration. The authors are also interested in how the cells will react to sequences of light (blue that turns into orange or vice-versa, for instance).

By understanding how each specialized light-sensing cell functions in the eye, the researchers claim it is possible to unlock an entirely new spectrum of applications. For example, the insights could be used to design indoor lights that offer better day-night synchronization or which — why not — improve our moods.

“It’s also going to open a number of avenues to try new drugs or work on particular diseases that are specific to humans,” says Ludovic Mure, a postdoctoral researcher in the Panda lab and first author of the new study.

The findings were reported in the journal Science.


Parkinson’s disease might soon be diagnosed with a simple eye test

Credit: MaxPixel.

Patients in the advanced stage of Parkinson’s disease experience severe and debilitating symptoms, such as rigidity and bradykinesia. There is currently no cure for this terrible disease, but the earlier it’s caught, the better it can be kept under control by medication before the problems with movement become irreversible. This is why the latest research out of South Korea is so exciting — it suggests that in the future a simple eye test could diagnose Parkinson’s.

The study involved 49 volunteers. Their average age was 69, and they had been diagnosed with Parkinson’s disease two years prior but were not yet medicated. They were compared to 54 age-matched healthy individuals.

Researchers performed a complete eye exam on each participant, along with eye scans that use light waves to image each layer of the retina. Additionally, 28 patients with Parkinson’s disease had dopamine transporter positron emission tomography (PET) imaging to measure the density of dopamine-producing cells in their brains.

Parkinson’s disease (PD) appears after dopamine-producing neurons die, causing symptoms such as tremor, slowness, stiffness, and balance problems. High levels of glutamate, another neurotransmitter, also appear in PD as the body tries to compensate for the lack of dopamine. The cause of Parkinson’s is largely unknown but scientists are currently investigating the role that genetics, environmental factors, and the natural process of aging have on cell death and PD.

The current study’s results suggest that PD not only kills dopamine-producing neurons but also thins the retina — specifically, the two inner layers out of the retina’s five layers.

“Our study is the first to show a link between the thinning of the retina and a known sign of the progression of the disease — the loss of brain cells that produce dopamine,” said study author Jee-Young Lee of the Seoul Metropolitan Government – Seoul National University Boramae Medical Center in South Korea.

In individuals with PD, the innermost layer of the retina had an average thickness of 35 micrometers while this measured 37 micrometers for participants without PD. The researchers say that thinner retinas correspond with loss of brain cells that produce dopamine. Perhaps most importantly, the measurements also correspond with the severity of the disease. For instance, people with the most thinning of the retina (30 micrometers) also showed the most severe PD symptoms while patients with the thickest retina layer (47 micrometers) had the least severe PD symptoms.

“Larger studies are needed to confirm our findings and to determine just why retina thinning and the loss of dopamine-producing cells are linked,” said Lee. “If confirmed, retina scans may not only allow earlier treatment of Parkinson’s disease but more precise monitoring of treatments that could slow progression of the disease as well.”

The findings appeared in the journal Neurology.

Google AI can now look at your retina and predict the risk of heart disease

Google researchers are extremely intuitive: just by looking into people’s eyes they can see their problems — cardiovascular problems, to be precise. The scientists trained artificial intelligence (AI) to predict cardiovascular hazards, such as strokes, based on the analysis of retina shots.

The way the human eye sees the retina vs the way the AI sees it. The green traces are the pixels used to predict the risk factors. Photo Credit: UK Biobank/Google

After analyzing data from over a quarter million patients, the neural network can predict the patient’s age (within a 4-year range), gender, smoking status, blood pressure, body mass index, and risk of cardiovascular disease.

“Cardiovascular disease is the leading cause of death globally. There’s a strong body of research that helps us understand what puts people at risk: Daily behaviors including exercise and diet in combination with genetic factors, age, ethnicity, and biological sex all contribute. However, we don’t precisely know in a particular individual how these factors add up, so in some patients, we may perform sophisticated tests … to help better stratify an individual’s risk for having a cardiovascular event such as a heart attack or stroke”, declared study co-author Dr. Michael McConnell, a medical researcher at Verily.

Even though the number of patients the AI was trained on might sound large, neural networks typically work with much larger sample sizes; the more data they can analyze, the more accurate their predictions become. For now, the study’s results show that the AI’s predictions cannot yet outperform specialized medical diagnostic methods, such as blood tests.

“The caveat to this is that it’s early, (and) we trained this on a small data set,” says Google’s Lily Peng, a doctor and lead researcher on the project. “We think that the accuracy of this prediction will go up a little bit more as we kind of get more comprehensive data. Discovering that we could do this is a good first step. But we need to validate.”

The deep learning applied to photos of the retina and medical data works like this: the network is presented with the patient’s retinal shot, along with some medical data, such as age and blood pressure. After seeing hundreds of thousands of these kinds of images, the machine starts to pick out patterns correlated with the medical data. So, for example, if most patients with high blood pressure have more enlarged retinal vessels, the pattern will be learned and then applied when the network is presented with just the retinal shot of a new patient. The algorithm correctly identified patients at high cardiovascular risk within a 5-year window 70 percent of the time.
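In code, the skeleton of such a model might look like the PyTorch sketch below: a shared convolutional trunk reads the retinal photo and separate heads predict each risk factor. This is only an illustrative toy; Google’s actual network was far larger and trained on hundreds of thousands of labeled images, and whether it used one multi-task model or separate models per factor is not detailed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RetinaRiskNet(nn.Module):
    """Tiny stand-in: shared convolutional trunk, one head per risk factor."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.age = nn.Linear(32, 1)      # regression: age in years
        self.sbp = nn.Linear(32, 1)      # regression: systolic blood pressure
        self.smoker = nn.Linear(32, 1)   # logit: smoking status

    def forward(self, x):
        h = self.trunk(x)
        return self.age(h), self.sbp(h), self.smoker(h)

model = RetinaRiskNet()
photos = torch.randn(8, 3, 224, 224)     # a batch of stand-in fundus photos
age, sbp, smoker = model(photos)

# one multi-task training step against stand-in labels
loss = (F.mse_loss(age, torch.randn(8, 1))
        + F.mse_loss(sbp, torch.randn(8, 1))
        + F.binary_cross_entropy_with_logits(smoker,
                                             torch.randint(0, 2, (8, 1)).float()))
loss.backward()
print(float(loss))
```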

“In summary, we have provided evidence that deep learning may uncover additional signals in retinal images that will allow for better cardiovascular risk stratification. In particular, they could enable cardiovascular assessment at the population level by leveraging the existing infrastructure used to screen for diabetic eye disease. Our work also suggests avenues of future research into the source of these associations, and whether they can be used to better understand and prevent cardiovascular disease,” conclude the authors of the study.

The paper, published in the journal Nature Biomedical Engineering, is truly remarkable. In the future, doctors will be able to screen for the number one killer worldwide much more easily, and they will be doing it without causing us any physical discomfort. Imagine that!

Artificial Intelligence can tell you your blood pressure, age, and smoking status — just by looking at your eye

Eyes are said to be the window to the soul, but according to Google engineers, they’re also the window to your health.

The engineers wanted to see if they could determine some cardiovascular risks simply by looking at a picture of someone’s retina. They developed a convolutional neural network — a feed-forward algorithm inspired by biological processes, especially the connectivity patterns between neurons, and commonly used in image analysis.

This type of artificial intelligence (AI) analyzes images holistically, picking up on shared patterns and symmetries rather than examining isolated pieces.

The approach became quite popular in recent years, especially as Facebook and other tech giants began developing their face-recognition software. Scientists have long proposed that this type of network can be used in other fields, but due to the innate processing complexity, progress has been slow. The fact that such algorithms can be applied to biology (and human biology, at that) is astonishing.

“It was unrealistic to apply machine learning to many areas of biology before,” says Philip Nelson, a director of engineering at Google Research in Mountain View, California. “Now you can — but even more exciting, machines can now see things that humans might not have seen before.”

Observing and quantifying associations in images can be difficult because of the wide variety of features, patterns, colors, values, and shapes in real data. In this case, Ryan Poplin, Machine Learning Technical Lead at Google, used AI trained on data from 284,335 patients. He and his colleagues then tested their neural network on two independent datasets of 12,026 and 999 photos, respectively. They were able to predict age (within 3.26 years) and, within an acceptable margin, gender, smoking status, and systolic blood pressure, as well as major adverse cardiac events. Researchers say the results were similar to those of the European SCORE system, which relies on a blood test.

To make things even more interesting, the algorithm uses distinct aspects of the anatomy to generate each prediction, such as the optic disc or blood vessels. This means that, in time, each individual detection pattern can be improved and tailored for a specific purpose. Also, a data set of almost 300,000 samples is relatively small for a neural network, so feeding more data into the algorithm can almost certainly improve it.

Doctors today rely heavily on blood tests to determine cardiovascular risks, so having a non-invasive alternative could save a lot of costs and time, while making visits to the doctor less unpleasant. Of course, for Google (or rather Google’s parent company, Alphabet), developing such an algorithm would be a significant development and a potentially profitable one at that.

It’s not the first time Google engineers have dipped their feet into this type of technology — one of the authors, Lily Peng, published another paper last year in which she used AI to detect diabetic retinopathy, a complication of diabetes that can lead to blindness.

Journal Reference: Ryan Poplin et al. Predicting Cardiovascular Risk Factors from Retinal Fundus Photographs using Deep Learning. arXiv:1708.09843

New hydrogel can glue retina back into eye

The retina is extremely important for vision. It processes light and sends all visual information to the brain through the optic nerve. Therefore, it’s very serious when the retina detaches from its normal position; the eye can’t function, and it can result in permanent blindness. To fix the problem, the retina needs to be repositioned to the back of the eye as soon as possible. The vitreous, the gel-like substance that fills the space between the retina and the lens, needs to be replaced during this surgery.

The structure of the eye: vitreous fills up the whole inner chamber. Image credits: Holly Fischer

The problem

Current procedures involve injecting silicone oil or gas bubbles in the eye to push the detached retina back into place. However, these substances don’t mix well with water and don’t function well in the long term. In addition, it is necessary for the patient to have his or her head secured facing downwards during this surgery. For an extended period after surgery, doctors recommend keeping the head only in certain positions and not flying in an airplane or venturing to high altitudes.

Hydrogels, which are elastic gels composed mostly of water, are a promising alternative for replacing the vitreous because they are similar to human soft tissue. Also, with hydrogels, keeping the head in a certain position is not necessary. However, one problem with hydrogels is that they can absorb water after a few months and swell, putting pressure on other parts of the eye and causing damage.

The solution

Associate Professor Takamasa Sakai of the University of Tokyo and his lab group developed a new hydrogel. It has a low concentration of polymers, so it can be placed in the eye as a liquid, yet it still forms a gel in only 10 minutes. To get it to gel this quickly, the scientists mixed two types of polymers to create branched polymer clusters in liquid, which are triggered to form a solid when injected into the eye. The hydrogel also stays transparent, while other gels turn cloudy over time and must be surgically replaced.

This hydrogel stays clear and doesn’t become cloudy. Image credits: 2017 TAKAMASA SAKAI

“Hydrogels are promising biomaterials, but their physical properties have been difficult to control. We wanted to show that these difficulties can be overcome by designing molecular reactions and I think we’ve been successful” — Takamasa Sakai

In a jar the hydrogel forms after 30 seconds, but in the eye, it takes up to 10 minutes. Video credits: 2017 TAKAMASA SAKAI

This technique was tested successfully on rabbits. The test subjects didn’t experience any side effects: the gels had not been rejected after 410 days, and no significant swelling was noted. In another experiment, the new hydrogel was shown to treat detached retinas in rabbits.

The hydrogel still needs to be tested for safety and efficacy in humans, but it could also replace other soft tissues when dealing with tumours, trauma, and degenerative diseases. In this way, this hydrogel could pave the way for new surgical techniques.


How the eye works


Image via flickr. 

Doing some light reading

Touch interprets changes of pressure, texture, and heat in the objects we come in contact with. Hearing picks up on pressure waves, and taste and smell read chemical markers. Sight is the only sense that allows us to make heads or tails of some of the electromagnetic waves that zip all around us — in other words, seeing requires light.

Apart from fire (and other incandescent materials), bioluminescent sources, and man-made objects (such as the screen you’re reading this on), our environment generally doesn’t emit light for our eyes to pick up on. Instead, objects become visible when part of the light from other sources reflects off of them.

Let’s take an apple tree as an example. Light travels in a (relatively) straight line from the sun to the tree, where different wavelengths are absorbed by the leaves, bark, and apples themselves. What isn’t absorbed bounces back and meets the first layer of our eyes, the thin surface of liquid tears that protects and lubricates the organ. Under it lies the cornea, a thin sheet of innervated transparent cells.

Behind them, there’s a body of liquid named the aqueous humor. This clear fluid keeps a constant pressure applied to the cornea so it doesn’t wrinkle and maintains its shape. This is a pretty important role, as that layer provides two-thirds of the eye’s optical power.

Anatomy of the eye.
Image via flickr

The light is then directed through the pupil. No, there are no schoolkids in your eye; the pupil is the central, circular opening of the iris, the pretty-colored part of our eyes. The iris contracts or relaxes to allow an optimal amount of light to enter deeper into our eyes. Without it working to regulate exposure, our eyes would be burned when it got bright and would struggle to see anything when it got dark.

The final part of our eye’s focusing mechanism is called the crystalline lens. It only has half the focusing power of the cornea, but its most important feature is that it can change how strongly it focuses. The crystalline is attached at its equator to a ring of fibrous tissue that pulls on the lens to change its shape (a process known as accommodation), allowing the eye to focus on objects at various distances.
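The thin-lens formula makes it easy to see how much extra power accommodation must supply. The numbers in the sketch below (a 17 mm effective eye length, a 25 cm reading distance) are textbook approximations, not measurements:

```python
# Thin-lens bookkeeping: total power P (in diopters, 1/meters) must satisfy
# P = 1/d_object + 1/d_image to form a sharp image on the retina.
EYE_LENGTH_M = 0.017        # effective cornea-to-retina distance, ~17 mm

def required_power(object_distance_m: float) -> float:
    return 1.0 / object_distance_m + 1.0 / EYE_LENGTH_M

far = required_power(1e9)     # object effectively at infinity
near = required_power(0.25)   # reading distance, 25 cm
print(f"far: {far:.1f} D, near: {near:.1f} D, lens adds {near - far:.1f} D")
# -> about 58.8 D versus 62.8 D: accommodation supplies the extra ~4 diopters
```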

Fun fact: You can actually observe how the lens changes shape. Looking at your monitor, hold your hands up some 5-10 centimeters (2-4 inches) in front of your eyes and look at them till the count of ten. Then put them down; those blurry images during the first few moments and the weird feeling you get in your eyes are the crystalline stretching to adapt to the different focal distance.
Science at its finest.

After going through the lens, light passes through a second (but more jello-like) body of fluid and falls on an area known as the retina. The retina lines the back of the eye and is the area that actually processes the light. There are a lot of different parts of the retina working together to keep our sight crystal-clear, but three of them are important in understanding how we see.

  • First, the macula. This is the “bull’s eye” of the retina. At the center of the macula there’s a slight dip named the fovea centralis (fovea is Latin for pit). As it lies at the focal point of the eye, the fovea is jam-packed with light-sensitive nerve endings called photoreceptors.
  • Photoreceptors. These fall into two categories: rods and cones. They’re structurally and functionally different, but both serve to encode light as electro-chemical signals.
  • Retinal pigment epithelium. The RPE is a layer of dark tissue whose cells absorb excess light to improve the accuracy of our photoreceptors’ readings. It also delivers nutrients to and clears waste from the retina’s cells.

So far you’ve learned about the internal structure of your eyes, and how they capture light, focus it, and translate it into electro-chemical signals. They’re wonderfully complex systems, and you have two of them. Enjoy!

There’s still something I have to tell you about seeing, however. Don’t be alarmed but….

The images are all in your head

While eyes focus and encode light into the electrical signals our nervous system uses to communicate, they don’t see per se. Information is carried by the optic nerves to the back of the brain for processing and interpretation. This all takes place in an area of our brain known as the visual cortex.

Brain shown from the side, facing left. Above: view from outside, below: cut through the middle. Orange = Brodmann area 17 (primary visual cortex)
Image via Wikipedia

Because they’re wedged in your skull a short distance apart from each other, each of your eyes feeds a slightly different picture to your brain. These little discrepancies are put to good use: by comparing the two, the brain can tell how far away an object is. This is the mechanism that ‘magic eye’ or autostereogram pictures attempt to trick, causing 2D images to appear three-dimensional. Other clues like shadows, textures, and prior knowledge also help us to judge depth and distance.
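The geometry behind this is simple enough to sketch: under a pinhole model, a point at depth Z produces a disparity of roughly f·B/Z between the two retinal images, where f is the eye’s focal length and B the distance between the eyes. The figures below are typical values, used only for illustration.

```python
# Pinhole stereo model: disparity d = f * B / Z for an object at depth Z.
F_M = 0.017   # effective focal length of the eye, ~17 mm
B_M = 0.065   # distance between the eyes, ~6.5 cm

for depth_m in (0.5, 2.0, 10.0):
    disparity_um = F_M * B_M / depth_m * 1e6   # shift on the retina, micrometers
    print(f"object at {depth_m:>4} m -> retinal disparity {disparity_um:7.0f} um")
# nearer objects shift more between the two images, which the brain reads as "close"
```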


The neurons work together to reconstruct the image based on the raw information the eyes feed them. Many of these cells respond specifically to edges oriented in a certain direction. From here, the brain builds up the shape of an object. Information about color and shading is also used as further clues to compare what we’re seeing with the data stored in our memory, helping us understand what we’re looking at. Objects are recognized mostly by their edges, and faces by their surface features.

Brain damage can lead to conditions that impair object recognition, such as agnosia (an inability to recognize the objects one is seeing). A man suffering from agnosia was asked to look at a rose and described it as ‘about six inches in length, a convoluted red form with a linear green attachment’. He described a glove as ‘a continuous surface infolded on itself, it appears to have five outpouchings’. His brain had lost its ability to either name the objects he was seeing or recognize what they were used for, even though he knew what a rose or a glove was. Occasionally, agnosia is limited to a failure to recognize faces or an inability to comprehend spoken words despite intact hearing, speech production, and reading ability.

The brain also handles the recognition of movement in images. Akinetopsia, a condition that impairs the ability to perceive motion, is caused by lesions in the posterior side of the visual cortex. People suffering from it stop seeing objects as moving, even though their sight is otherwise normal. One woman, who suffered such damage following a stroke, described that when she poured a cup of tea the liquid appeared frozen in mid-air, like ice. When walking down the street, she saw cars and trams change position, but did not actually see them move.


Human eye inspired processor is 400 times faster at detecting sub-atomic particles

Artist’s impression of a proton-proton collision producing a pair of gamma rays (yellow) in the ATLAS detector. (Image: CERN)

Inspired by the properties of the human eye, physicists have created a processor that can analyze sub-atomic particles 400 times faster than the current state of the art. The prototype might significantly speed up the analysis of data from the collisions of particles in high-end particle accelerators like the Large Hadron Collider, at CERN, as early as 2020.

Faster than the blink of an eye

The processor employs a detection algorithm that works in much the same way as the human retina. In our retinas, individual neurons are specialized to respond to particular shapes or orientations and locally analyze these patterns. This way, the brain is never consciously aware of the processing itself and only interprets the results. Analogously, the “artificial retina” takes a snapshot of the particle trajectories from each collision, which is then immediately analyzed, according to CERN physicist Diego Tonelli, one of the collaborators involved in the project.
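The general idea can be sketched in a few lines of Python: lay out a grid of "cells", each tuned to one candidate trajectory, and let every detector hit excite the cells whose trajectories pass near it; the strongest cell wins. The Gaussian response and all parameters below are illustrative choices, not the actual design used at the LHC.

```python
import numpy as np

rng = np.random.default_rng(1)

# one straight "track" (y = m*x + q) leaving noisy hits on 12 detector layers
true_m, true_q = 0.7, -0.2
xs = np.linspace(0, 1, 12)
hits = np.c_[xs, true_m * xs + true_q + rng.normal(0, 0.01, xs.size)]

# the "retina": a grid of cells, each tuned to one (m, q) trajectory
ms = np.linspace(-2, 2, 81)
qs = np.linspace(-1, 1, 81)
sigma = 0.02                 # how far a hit can be and still excite a cell

response = np.zeros((ms.size, qs.size))
for i, m in enumerate(ms):
    for j, q in enumerate(qs):
        r = hits[:, 1] - (m * hits[:, 0] + q)          # hit-to-track residuals
        response[i, j] = np.exp(-r**2 / (2 * sigma**2)).sum()

i, j = np.unravel_index(np.argmax(response), response.shape)
print(f"best cell: m = {ms[i]:.2f}, q = {qs[j]:.2f} (truth: 0.70, -0.20)")
```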

During these collisions, particles are accelerated to near the speed of light and smashed together. At these extremely high energies, peculiar things start to happen and new matter is born. Each second, the LHC generates some 40 million collisions, and each can result in hundreds of charged particles, which are the only kind whose trajectories can be mapped. Clearly, speed is of the essence, and the ‘artificial retina’ will definitely come in handy.

“It’s 400 times faster than anything existing or foreseen for high energy physics applications. If implemented in a real experiment it will allow us to collect more interesting data more quickly,” the researchers write.

The LHC has received a lot of hype in recent years, following the breakthrough moment of modern physics when the Higgs boson was confirmed using the particle accelerator. However, the ‘artificial retina’ won’t be employed for experiments that probe elementary particles, like the Higgs boson. Instead, it will mostly be used for ‘flavor physics’, which deals with the interaction of the basic components of matter, the quarks.

“When our detectors take these snapshots of the collisions – to us that’s like the picture that your eye sees and when your brain is scanning that picture and making sense of it, well we try and codify those rules into an algorithm that we run on computers that do the job for us automatically,” said Prof Tara Shears, a particle physicist at the University of Liverpool.

“When the LHC continues… we will start to operate with a more intense beam of protons getting a much higher data rate, and then this problem of sifting out what you really want to study becomes really really pressing,” she added.

“This artificial retinal algorithm is one of the latest steps in our mission to [understand the Universe], and it’s really good, it does the job vast banks of computers normally do.”

Right now, the LHC is shut down for maintenance, but it’s due to come back online in 2015 and resume its hunt for elusive particles. The algorithm won’t be introduced before 2020, however, when an upgrade is slated. The findings were documented in a paper published on the arXiv pre-print server.


Japanese woman is first recipient of next-generation stem cells

Researchers were able to grow sheets of retinal tissue from induced pluripotent stem cells, and have now implanted them for the first time in a patient. Image credits: RIKEN/Foundation for Biomedical Research and Innovation.

A Japanese woman in her 70s is the world’s first recipient of cells derived from induced pluripotent stem cells, a technology that promises to work wonders and has the scientific community excited about the prospects. Surgeons working on the case created the retinal tissue after reverting the patient’s own cells to a ‘pluripotent’ state.

If you’d like to benefit from stem cells, but you’re worried that you haven’t had cells harvested early enough – then stop worrying: the next-level technology is already here, offering the same advantages as embryo-derived cells but without some of the controversial aspects and safety concerns.

The two-hour procedure took place a mere four days after a health-ministry committee gave project leader Masayo Takahashi clearance to begin human trials; previously, the approach had been safely tested on rats and mice. The surgery’s objective was transplanting a 1.3 by 3.0 millimeter sheet of retinal pigment epithelium cells into an eye of an elderly Japanese woman suffering from age-related macular degeneration.

Yasuo Kurimoto of the Kobe City Medical Center General Hospital led the procedure, accompanied by a team of three other specialists.

“[She] took on all the risks that go with the treatment as well as the surgery,” Kurimoto said in a statement released by RIKEN. “I have deep respect for the bravery she showed in resolving to go through with it.”

Kurimoto also took a moment to acknowledge the work of Yoshiki Sasai, a researcher who recently committed suicide. Yoshiki Sasai, deputy director of the RIKEN Center for Developmental Biology (CDB) in Kobe was one of the most brilliant minds working in stem cell research, but a scandal swirling around two stem-cell papers published in Nature in January had wreaked havoc on his career.

“This project could not have existed without the late Yoshiki Sasai’s research, which led the way to differentiating retinal tissue from stem cells.”

Sadly enough, Sasai’s downfall wasn’t even his own doing – one of his protégées, Haruko Obokata, then a visiting researcher, manipulated the results of two research papers on which Sasai also worked. In Japan, the media rained criticism on Sasai, including unsubstantiated accusations; although he himself did not contribute to the forgery, he was faulted for not checking the facts closely enough. This immense pressure eventually led him to take his own life.

Yoshiki Sasai. Nature.

But the results of his work live on, and they show much promise for future research. Even in a patient over 70 years old, the procedure should halt further degeneration, although it is less likely to restore vision to what it was before the disease set in.

“We’ve taken a momentous first step toward regenerative medicine using iPS cells,” Takahashi said in a statement. “With this as a starting point, I definitely want to bring [iPS cell-based regenerative medicine] to as many people as possible.”


Weird state of matter found in chicken’s eye

You may not expect to find many interesting things when peering into a chicken’s eye, but after closely studying its retina, researchers at Washington University have come across a most fascinating discovery. It seems chicken eyes harbor a state of matter never before seen in biology, an arrangement of particles that is both ordered and disordered – neither crystal, nor liquid. This state is called “disordered hyperuniformity” and could previously be found only in non-biological systems, like liquid helium or simple plasmas.

Typically, the retina is made up of several layers, but only the cones and rods are photosensitive, allowing us to see and visually sense our surroundings. In the eye of a chicken, as in many other bird species, the retina contains five different types of cones – violet, blue, green, and red, while the fifth is responsible for sensing light level variance. Most importantly, however, each type of cone is of a different size.

This diagram depicts the spatial distribution of the five types of light-sensitive cells known as cones in the chicken retina. (c) Washington University in St. Louis

Most animal species have their cones arranged in an orderly pattern. Insects, for instance, have theirs arranged in a hexagonal pattern. Those of a chicken, however, seem to be in complete disarray. At first, if one didn’t know better, you might think they shouldn’t be able to see much at all. Upon closer inspection, though, a most peculiar discovery was made.

After building a computer model, the scientists found that the arrangement of chicken cones is surprisingly tidy after all. Each cone has a so-called exclusion area that blocks other cones of the same type from straying too close, which means each cone type settles into its own uniform arrangement. At the same time, the five patterns of the five cone types are layered on top of each other in a disorderly way, as opposed to the orderly structure found in other species’ eyes.
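A toy version of that model is easy to build in Python: drop points of five types at random, rejecting any candidate that lands within the exclusion radius of an already-placed cone of the same type. The radii and attempt count below are made up; only the mechanism follows the study.

```python
import numpy as np

rng = np.random.default_rng(42)
radii = {"violet": 0.05, "blue": 0.06, "green": 0.04, "red": 0.045, "double": 0.07}
placed = {t: [] for t in radii}

for _ in range(20_000):                  # random sequential addition
    t = rng.choice(list(radii))
    candidate = rng.random(2)            # position in the unit square
    same = np.array(placed[t]) if placed[t] else np.empty((0, 2))
    # reject if inside the exclusion radius of a same-type cone
    if same.size == 0 or np.min(np.linalg.norm(same - candidate, axis=1)) > radii[t]:
        placed[t].append(candidate)

for t, pts in placed.items():
    print(f"{t:>6}: {len(pts)} cones placed")
```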

“Because the cones are of different sizes it’s not easy for the system to go into a crystal or ordered state,” study researcher Salvatore Torquato, a professor of chemistry at Princeton University, explained in a statement. “The system is frustrated from finding what might be the optimal solution, which would be the typical ordered arrangement. While the pattern must be disordered, it must also be as uniform as possible. Thus, disordered hyperuniformity is an excellent solution.”

Simply put, systems like the arrangement of chicken cones or liquid helium act like crystals in that they keep the density of particles consistent across large volumes, and like liquids in that they have the same physical properties in all directions. This is the first time, however, that disordered hyperuniformity has been observed in a biological system.

Their findings were detailed on Feb. 24 in the journal Physical Review E.



New chemical restores light perception to blind mice

A while ago, I wrote a piece on the developments made by a group at the University of California, Berkeley that managed to restore light perception to blind mice without using invasive procedures like surgery. A chemical was used: as easily as applying some eye drops, the researchers enabled mice to sense light when such a thing wasn’t possible before. Now, the same group has developed a new chemical that works in much the same way as the previous one, only much better: it lasts longer and isn’t potentially dangerous like its predecessor. Hopefully, this chemical or some upgraded version may be used in treating patients suffering from degenerative retinal disorders.

A targeted retinal ganglion cell fires when illuminated by a white light after application of DENAQ photoswitch compound (credit: Ivan Tochitsky et al./Neuron)

The retina is made up of three layers, but only the outermost layer is photoreceptive, containing the rod and cone cells that respond to light. When the rods and cones die during the course of degenerative blinding diseases, like retinitis pigmentosa and age-related macular degeneration, the rest of the retina remains intact – it’s just no longer responsive to light, causing loss of sight.

The new chemical, called DENAQ, replaces the earlier AAQ. It confers light sensitivity for several days with ordinary white light and only affects retinal ganglion cells if the rods and cones have already died. The latter part is really important: if parts of your retina still function (you’ve lost some photoreceptors, but not all of them), the cells that still respond to light won’t be touched by the chemical. The previous AAQ photoswitch required very bright ultraviolet light, which can be damaging, to work.

“Further testing on larger mammals is needed to assess the short- and long-term safety of DENAQ and related chemicals,” says Richard Kramer of the University of California, Berkeley. “It will take several more years, but if safety can be established, these compounds might ultimately be useful for restoring light sensitivity to blind humans.”

Findings were reported in the journal Neuron.

New Urine Test Could Diagnose Eye Disease

Urine isn’t exactly the first place you want to start looking for eye diseases – but according to a new Duke University study, a patient’s urine can be linked to gene mutations that cause retinitis pigmentosa (RP), an inherited, degenerative disease that results in severe vision impairment and often blindness.

A composite image of the human retina shows diffused pigmentary retinal degeneration. Photo credit – Ziqiang Guan, Duke University Medical Center

“My collaborators, Dr. Rong Wen and Dr. Byron Lam at the Bascom Palmer Eye Institute in Florida first sought my expertise in mass spectrometry to analyze cells cultured from a family in which three out of the four siblings suffer from RP,” said Ziqiang Guan, an associate research professor of biochemistry in the Duke University Medical School and a contributing author of the study.

The team had previously sequenced the genome of this family and found that the children with RP carry two copies of a mutation in DHDDS, a gene responsible for synthesizing organic compounds called dolichols. This mutation appears to be prevalent in RP patients of Ashkenazi Jewish origin, the population most affected by this form of the disease, and some 0.3% of all Ashkenazi Jews carry one copy of the mutation.

They think that urine makes for better testing than blood in this case.

“Since the urine samples gave us more distinct profiles than the blood samples, we think that urine is a better clinical material for dolichol profiling,” he said. Urine collection is also easier than a blood draw, and the samples can be conveniently stored with a preservative. The team is now pursuing a patent for this new diagnostic test for the DHDDS mutation.

There are currently no treatments for RP, but Guan hopes that developing this urine-based test will also provide insight on how this ailment could be treated.

“We are now researching ways to manipulate the dolichol synthesis pathway in RP patients with the DHDDS mutation so that the mutated enzyme can still produce enough dolichol-19, which we believe may be important for the rapid renewal of retinal tissue in a healthy individual.”

This is not SciFi: software update slated for bionic eye will grant higher resolution and colour vision

The Argus II is the first bionic eye implant designed to restore vision to the blind that has been approved by the FDA in the US. The wearer of such an implant can distinguish objects and live an almost independent life, which is remarkable in itself; still, its performance is light years away from that of a natural eye. Technology trends ever upward, and it’s natural to expect the implant to improve in the coming years, but think about what upgrading means here. It’s not like buying a new hard drive for your PC: you’d have to go through surgery again, have your implant taken out, and then have a new one implanted, and the hassle is too great.

Zoom in on the future

Thing is, hardware isn’t the only thing you can upgrade to increase performance; a software update can often do wonders, and the bionic eye is no different. Recently, Second Sight, the company that developed the Argus II, announced it will soon roll out a firmware update that gives users better resolution, focus, and image zooming. A planned second update will even allow for colour recognition, even though the initial product offers only black-and-white imaging.

There are many causes that can lead to blindness: cataracts, glaucoma, macular degeneration, and various other diseases. At the root of the problem, the diseased eye becomes incapable of converting the light that hits the rods and cones of the retina into electrical signals. In a healthy eye, those signals are transmitted down the optic nerve to the brain, which processes them into images, granting sight.

Granting sight

The Argus II is essentially a retinal prosthesis. It works through 60 electrodes implanted into the macula of the patient, the central region of the retina that provides central, high-resolution vision. Since the eye can no longer receive light, the essential input it needs to convert into electrical signals, the actual “eyes” are replaced by a pair of spectacles with a mounted camera that records whatever the wearer is facing. The camera converts the captured images into electrical signals and sends them to a tiny antenna connected to the electrodes implanted in the retina. The signals picked up by the electrodes stimulate them in a pattern that produces neural activity the brain can read and understand. Finally, out of pitch black, enters light.

It’s a rough, pixelated world, however. Since the implant has only 60 electrodes (a 6×10 grid), the resolution is extremely low; still, it’s a lot better than being completely blind, and for some users it means the chance to live a normal life, or at least to take care of themselves.
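
To picture the camera-to-electrode pipeline described above, here is a minimal Python sketch. The 6×10 grid comes from the article; the linear brightness-to-amplitude mapping and the amplitude ceiling are assumptions, and the actual Argus II video processing unit is certainly more sophisticated.

```python
import numpy as np

# Illustrative sketch of the camera-to-electrode pipeline described above.
# A grayscale camera frame is reduced to the implant's 6x10 electrode grid,
# and each cell's mean brightness becomes a stimulation amplitude. The grid
# size comes from the article; everything else here is an assumption.

GRID_ROWS, GRID_COLS = 6, 10   # the implant's 60-electrode layout
MAX_AMPLITUDE = 100.0          # hypothetical stimulation ceiling (arbitrary units)

def frame_to_stimulation(frame: np.ndarray) -> np.ndarray:
    """Downsample a grayscale frame (H x W, values 0-255) to per-electrode amplitudes."""
    h, w = frame.shape
    bh, bw = h // GRID_ROWS, w // GRID_COLS
    # Crop so the frame tiles evenly, then average the brightness of each block.
    blocks = frame[:bh * GRID_ROWS, :bw * GRID_COLS].astype(float)
    blocks = blocks.reshape(GRID_ROWS, bh, GRID_COLS, bw)
    return blocks.mean(axis=(1, 3)) / 255.0 * MAX_AMPLITUDE

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
print(frame_to_stimulation(frame).shape)  # (6, 10): one amplitude per electrode
```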

“There’s a new firmware out for my bionic eye. Cool!”

Once the update, called Acuboost, rolls out, Argus II users will see their implants’ performance improve significantly. Interestingly, while Second Sight needed years of back-and-forth discussions with the FDA for its product to become the first implantable bionic eye ever approved, the company apparently requires no such approval for firmware updates. That will most likely change soon enough, as policies become just as demanding for software as they are for hardware.

The ‘see in colour’ update, scheduled to come after Acuboost, is the most anticipated and most fascinating release yet, since the implant’s users technically have no colour vision capability at all: the cells responsible for it were destroyed by disease. Instead, colour can be conveyed by ingeniously correlating specific frequencies and delays of electrode stimulation with particular colours.
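
Purely as an illustration of that idea, here is a speculative sketch of such a timing code: each colour gets its own pulse frequency and onset delay. The table and values are invented; Second Sight has not published its actual mapping.

```python
# Speculative sketch of a colour timing code: with no colour-sensing cells to
# draw on, a colour is encoded as a distinct stimulation pattern. The table
# below (pulse frequency in Hz, onset delay in ms) is entirely invented.

COLOUR_CODES = {
    "red":   {"freq_hz": 20, "delay_ms": 0},
    "green": {"freq_hz": 35, "delay_ms": 10},
    "blue":  {"freq_hz": 50, "delay_ms": 20},
}

def pulse_train(colour, duration_ms=200):
    """Pulse onset times (ms) encoding a colour on a single electrode."""
    code = COLOUR_CODES[colour]
    period_ms = 1000.0 / code["freq_hz"]
    t, onsets = float(code["delay_ms"]), []
    while t < duration_ms:
        onsets.append(round(t, 1))
        t += period_ms
    return onsets

print(pulse_train("green"))  # [10.0, 38.6, 67.1, 95.7, 124.3, 152.9, 181.4]
```

The interesting design question, which the trained users would settle in practice, is whether the brain can learn to associate such timing patterns with colour percepts at all.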

There’s a European, and arguably more interesting, counterpart to the Argus II. In Germany, the Alpha IMS bionic eye recently received European regulatory approval. It works much in the same way as the Argus II except in one major aspect: instead of relying on an auxiliary camera to feed imaging signals, the Alpha IMS is a self-contained bionic eye that grants vision using the light that actually enters the eye. Moreover, instead of 60 electrodes, the Alpha IMS boasts 1,500, greatly enhancing resolution. We reported on the retina implant in a previous ZME Science piece from February. Check out the presentation video for the Alpha IMS below.

Blood vessels in the eye linked to IQ and cognitive functions

It’s not quite what scientists expected: according to a new study published in Psychological Science, the width of the blood vessels at the back of the retina may indicate brain health risks, such as dementia and Alzheimer’s disease, years before they actually set in.

Credit: © lightpoet / Fotolia

It is already well known that young people who score very low on IQ tests are at higher risk of poorer health and shorter lifespans, and this can’t be explained by other factors alone. Psychological scientist Idan Shalev of Duke University and colleagues set out to find whether there is any connection between IQ score (I really prefer this term to intelligence in this case) and brain health.

To do this, they turned to ophthalmology, the branch of medicine that deals with the eye, and used a technique called digital retinal imaging, which is relatively new and completely noninvasive.

“Digital retinal imaging is a tool that is being used today mainly by eye doctors to study diseases of the eye,” Shalev notes. “But our initial findings indicate that it may be a useful investigative tool for psychological scientists who want to study the link between intelligence and health across the lifespan.”

Basically, you can get a pretty good idea of what happens in the blood vessels of the brain by looking at the blood vessels in the retina; it’s the next best thing, as retinal blood vessels share similar size, structure, and function with their counterparts in the brain. The results were rather intriguing.

Having wider retinal venules was linked with lower IQ scores at age 38, even after the researchers accounted for other likely causes, such as health, lifestyle, and environmental risk factors that might have played a role.

People who had wider retinal venules showed noticeable cognitive deficits, scoring lower on numerous tests of neuropsychological functioning, verbal comprehension, memory, and more. Even more surprising, people who had wider blood vessels at age 38 also had lower IQs in childhood, a full 25 years earlier.
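
The phrase “after the researchers accounted for other likely causes” describes statistical adjustment. Here is a generic sketch of that idea on synthetic data; this is the textbook partial-correlation approach, not the study’s actual model or numbers.

```python
import numpy as np

# Generic sketch of "controlling for" covariates via partial correlation,
# on synthetic data. Not the study's model or numbers: regress both venule
# width and IQ on the covariates, then correlate the leftover residuals.

rng = np.random.default_rng(0)
n = 500
covariates = rng.normal(size=(n, 3))                # e.g. smoking, BMI, SES
venule_width = covariates @ [0.3, 0.2, 0.1] + rng.normal(size=n)
iq = -0.4 * venule_width + covariates @ [0.5, 0.3, 0.2] + rng.normal(size=n)

def residualize(y, X):
    """Remove the part of y that is linearly explained by X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

r = np.corrcoef(residualize(venule_width, covariates),
                residualize(iq, covariates))[0, 1]
print(f"adjusted (partial) correlation: {r:.2f}")   # negative: wider venules, lower scores
```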

“It’s remarkable that venular caliber in the eye is related, however modestly, to mental test scores of individuals in their 30s, and even to IQ scores in childhood,” the researchers observe.

They believe there is no direct mechanism linking retinal vessels to cognitive functioning; rather, the association likely reflects how well the brain is oxygenated.

“Increasing knowledge about retinal vessels may enable scientists to develop better diagnosis and treatments to increase the levels of oxygen into the brain and by that, to prevent age-related worsening of cognitive abilities,” they conclude.

Retina implant restores sight to the blind

In the culmination of 15 years’ worth of painstaking research on retina implants, scientists from Germany and Hungary have demonstrated for the first time that a light-sensitive electronic chip, implanted under the retina, can restore useful vision in patients blinded by hereditary retinal degeneration.

As part of the research, nine people who were previously completely blind had their vision partially restored. They can now identify objects in their surroundings and have become more independent, allowing them to live lives closer to normal. One participant in particular showed extraordinary improvement: he was able to discern seven shades of grey, read the hands of a clock, and combine letters of the alphabet into words.

The 3 mm x 3 mm implant has 1,500 pixels, that is, just as many independent microphotodiode-amplifier-electrode elements. It is surgically implanted below the fovea (the area of sharpest vision in the retina) and is powered by a subdermal coil behind the ear.
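
The principle behind each element can be sketched in a few lines of Python. The pixel count comes from the article; the 38×40 layout, amplifier gain, and per-electrode current ceiling are assumptions for illustration only.

```python
import numpy as np

# Sketch of the subretinal chip's principle: each microphotodiode-amplifier
# element converts the light falling on it into a stimulation current at its
# own electrode, so the image is formed by light that actually enters the eye.
# A 38 x 40 grid gives roughly the 1,500 elements mentioned above; the exact
# layout, amplifier gain, and current ceiling are assumptions for illustration.

ROWS, COLS = 38, 40
GAIN = 0.5            # assumed amplifier gain (current units per light unit)
MAX_CURRENT = 60.0    # assumed per-electrode safety limit

def chip_response(light_on_chip: np.ndarray) -> np.ndarray:
    """Per-element stimulation currents for a given light pattern on the chip."""
    assert light_on_chip.shape == (ROWS, COLS)
    return np.clip(light_on_chip * GAIN, 0.0, MAX_CURRENT)

# A bright vertical bar of light drives a matching column of currents.
light = np.zeros((ROWS, COLS))
light[:, 18:22] = 200.0
currents = chip_response(light)
print(currents.max(), currents[0, 0])  # 60.0 (clipped) and 0.0 (dark)
```

Because the photodiode array sits in the eye and moves with it, the stimulation pattern refreshes with every eye movement, which is exactly the advantage over external-camera designs that the researchers describe below.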

“So far, our approach using subretinal electronic implants is the only one that has successfully mediated images in a trial with freely moving blind persons by means of a light sensor array that moves with the eye,” the scientists said.

“All the other current approaches require an extraocular camera that does not link image capture to eye movements, which, therefore, does not allow the utilization of microsaccades for refreshing the perceived images.”

In people suffering from hereditary retinal degeneration, the photoreceptors in the retina progressively die off, often causing blindness in adult life. Unfortunately, no viable treatment can prevent this from happening; however, frontline research like this might offer patients a chance at a more normal life.

Patients implanted with the device now possess a diamond-shaped visual field measuring 15 degrees diagonally across the chip’s corners. This is poor vision by any measure, made more limiting by how tiny the visual field is, but compared with the pitch-black darkness these patients lived in, even partial restoration of eyesight is nothing less than a godsend. In the video below, for instance, a study participant needed two minutes to recognize and read a succession of letters forming the word “MIKA”. Remarkably, the patient read it correctly and signaled the researchers that they had spelled his name, Mikka, wrong; of course, the misspelling was deliberate.

Findings were reported in the journal Proceedings of the Royal Society.

The work was made possible by a long-standing collaboration between the University Eye Hospitals in Tübingen and Regensburg, the Institute for Microelectronics in Stuttgart (IMS), and the Natural and Medical Sciences Institute (NMI) in Reutlingen, as well as Retina Implant AG and Multi Channel Systems (MCS).

Right below the eyes is the best place to look to get the measure of a person

Eye contact plays a very important role in human interactions; however, a recent study by psychologists at UC Santa Barbara found that a point just below the eyes is the best place to look to get a feel for what a person is up to. Moreover, most of us are apparently already hard-wired to fix our initial gaze on this point, albeit unconsciously and for an extremely short time.

“It’s pretty fast, it’s effortless –– we’re not really aware of what we’re doing,” said Miguel Eckstein, professor of psychology in the Department of Psychological & Brain Sciences.

Points within the face (green circles) where, on average, each of 50 participants first looked when trying to identify faces of famous people. The white circle corresponds to the average across all participants. The background is an average of 120 celebrity faces. (c) UCSB

Miguel Eckstein and Matt Peterson used high-speed eye-tracking cameras, more than 100 photos of faces, and a sophisticated algorithm to pinpoint the first place participants looked when fixing their gaze on a person in order to assess that person’s identity, gender, and emotional state.

“For the majority of people, the first place we look at is somewhere in the middle, just below the eyes,” Eckstein said.

The whole initial, involuntary glance lasts a mere 250 milliseconds. Yet despite its brevity and the relatively featureless point of focus, during these highly important initial moments our brain performs incredibly complex computations that plan eye movements in advance to ensure the best possible information gathering, as well as to assess whether it’s time to run, fight, or engage.

“When you look at a scene, or at a person’s face, you’re not just using information right in front of you,” said Peterson.

The eyes may be the windows to one’s soul, but what lies just beneath them?

You might have noticed that whenever you look at something, the center of your field of view appears sharper and clearer than the surroundings, which offer less spatial detail. The high-resolution area is picked up by a region of the eye known as the fovea, a slight depression in the retina.

When sitting next to a person at conversational distance, the fovea can read the person’s whole face in great detail and catch even the most subtle gestures; detailed spatial information about facial features like the nose, mouth, and eyes is readily available. Despite this, when study participants were asked to assess the identity, gender, and emotions of an individual based on a photograph of, say, the forehead or mouth alone, they did not perform as well as when looking close to the eyes.

These empirical data were correlated with the output of a sophisticated computer algorithm that mimics how the spatial detail of human visual processing varies across the visual field and integrates all available information to make decisions. This allowed the researchers to predict the best place within a face to look for each of these perceptual tasks. The common conclusion from both the computer model and the human data is that looking just below the eyes is optimal, the scientists say, because it allows one to read information from as many features of the face as possible.
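
A toy version of that optimal-fixation idea is sketched below: usable detail falls off with distance from the fixation point, and the best fixation is the one that maximizes the total information gathered across all features at once. The feature positions, information values, and exponential falloff are invented stand-ins for the model’s actual components.

```python
import math

# Toy version of the optimal-fixation idea: usable detail decays with
# distance from the fixation point, so the best fixation maximizes the total
# information gathered from all face features at once. Feature positions,
# information values, and the falloff constant are invented for illustration.

FEATURES = {                      # (x, y) positions on a normalized face
    "left_eye":  (0.35, 0.40),
    "right_eye": (0.65, 0.40),
    "nose":      (0.50, 0.55),
    "mouth":     (0.50, 0.75),
}
INFO = {"left_eye": 1.0, "right_eye": 1.0, "nose": 0.6, "mouth": 0.8}
FALLOFF = 5.0                     # how fast usable detail decays with eccentricity

def gathered_info(fixation):
    """Total feature information, weighted by distance from the fixation point."""
    fx, fy = fixation
    return sum(INFO[f] * math.exp(-FALLOFF * math.hypot(x - fx, y - fy))
               for f, (x, y) in FEATURES.items())

# Exhaustively search a grid of candidate fixation points.
grid = [(x / 40, y / 40) for x in range(41) for y in range(41)]
best = max(grid, key=gathered_info)
print(best)  # (0.5, 0.55): on the facial midline, just below the eyes
```

Even in this crude form, the optimum lands on the midline below the eyes, because that spot keeps every informative feature within reach of the resolution falloff at once.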

“What the visual system is adept at doing is taking all those pieces of information from your face and combining them in a statistical manner to make a judgment about whatever task you’re doing,” said Eckstein.

This doesn’t seem to be a universal rule, though. Previous research, say the scientists behind the paper published in the journal PNAS, has found that East Asians, for instance, tend to look lower on the face when identifying a person. Next, Peterson and Eckstein are looking to refine their algorithm to provide insight into conditions such as schizophrenia and autism, which are associated with uncommon gaze patterns, or prosopagnosia, the inability to recognize someone by his or her face.

source: UCSB

Stem cell treatment dramatically improves vision of the blind

Pigmented epithelial cells were grown from embryonic stem cells prior to injection.

As detailed in a recently published study, a team of ophthalmologists successfully improved the vision of both of their trial patients, each declared legally blind due to degenerative eye disease, by injecting cells derived from human embryonic stem cells into one eye of each person. Significant improvements were seen shortly after the procedure and continued in the months that followed. The untreated eyes remained in the same poor condition as before the operation.

Macular degeneration is the leading cause of vision loss among the elderly, while Stargardt’s macular dystrophy, or Stargardt’s disease, is a common cause of vision loss among children and young adults. Drugs, laser treatment of the retina, and the like only help slow the process down; the eventual outcome of these diseases cannot be averted, and they are hence considered incurable.

Stem cell treatment has been considered as an option before; however, the procedure conducted by the team of scientists, led by Steven Schwartz, an ophthalmologist and chief of the retina division at UCLA’s Jules Stein Eye Institute, is the first of its kind.

“This is a big step forward for regenerative medicine,” said Dr. Steven Schwartz at UCLA’s Jules Stein Eye Institute. “It’s nowhere near a treatment for vision loss, but it’s a signal that embryonic stem-cell-based strategies may work.”

The operation involved injecting the cells into one eye of each patient: a 78-year-old woman suffering from macular degeneration and a 51-year-old woman suffering from Stargardt’s macular dystrophy, both declared legally blind, in the hope that the cells required for proper vision would regenerate. Before injection, the stem cells were induced to grow into retinal pigment epithelial cells; the loss of these cells, located in the pigmented layer of the retina, is the leading cause of macular dystrophy.

[RELATED] Deafness cured by gene therapy

The results of the half-hour surgery, in which 50,000 cells were injected, were remarkable: within just a few weeks, the patients went from barely recognizing a hand to counting fingers, reading their own handwriting, and pouring a glass of water without spilling it all over the floor. In short, they were given the chance to live a normal life once more, and their vision continued to improve for months after the surgery. The patients were also given immunosuppressants to prevent their bodies from rejecting the foreign tissue.

Other scientists commenting on the research admit the results are indeed remarkable, while warning that the trial involved only two people and that the improvements can, so far, only be considered short-term. Extensive studies on a broader range of patients, over longer periods, are required to accurately measure the effectiveness of this kind of stem cell treatment.

According to Dr. Robert Lanza, chief scientific officer at Advanced Cell Technology and a co-author of the study, the embryo was destroyed after the stem cells were derived, but in the future, doctors will be able to derive stem cells from an embryo without destroying it.

The research was published in the journal The Lancet.

source: BBC via Singularity Hub