Tag Archives: synapse

Where are memories stored in the brain? They may be hiding in the connections between your brain cells

In the nervous system, a synapse is a structure that permits a neuron (or nerve cell) to pass an electrical or chemical signal to another neuron. Credit: NIH Image Gallery.

All memory storage devices, from your brain to the RAM in your computer, store information by changing their physical qualities. Over 130 years ago, pioneering neuroscientist Santiago Ramón y Cajal first suggested that the brain stores information by rearranging the connections, or synapses, between neurons.

Since then, neuroscientists have attempted to understand the physical changes associated with memory formation. But visualizing and mapping synapses is challenging. For one, synapses are very small and tightly packed together: roughly 10 billion times smaller than the smallest object a standard clinical MRI can visualize. Furthermore, there are approximately 1 billion synapses in the mouse brains researchers often use to study brain function, and they share the same opaque-to-translucent color as the tissue surrounding them.

A new imaging technique my colleagues and I developed, however, has allowed us to map synapses during memory formation. We found that the process of forming new memories changes how brain cells are connected to one another. While some areas of the brain create more connections, others lose them.

Mapping new memories in fish

Previously, researchers focused on recording the electrical signals produced by neurons. While these studies have confirmed that neurons change their response to particular stimuli after a memory is formed, they couldn’t pinpoint what drives those changes.

To study how the brain physically changes when it forms a new memory, we created 3D maps of the synapses of zebrafish before and after memory formation. We chose zebrafish as our test subjects because they are large enough to have brains that function like those of people, but small and transparent enough to offer a window into the living brain.

Zebrafish are particularly fitting models for neuroscience research. Zhuowei Du and Don B. Arnold, CC BY-NC-ND

To induce a new memory in the fish, we used a type of learning process called classical conditioning. This involves exposing an animal to two different types of stimuli simultaneously: a neutral one that doesn’t provoke a reaction and an unpleasant one that the animal tries to avoid. When these two stimuli are paired together enough times, the animal responds to the neutral stimulus as if it were the unpleasant stimulus, indicating that it has made an associative memory tying these stimuli together.

As an unpleasant stimulus, we gently heated the fish’s head with an infrared laser. When the fish flicked its tail, we took that as an indication that it wanted to escape. When the fish was then exposed to the neutral stimulus, a light turning on, a tail flick meant that it was recalling what happened when it previously encountered the unpleasant stimulus.
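The pairing procedure described above is classical conditioning in its textbook form, and its dynamics are often summarized with the Rescorla-Wagner learning rule. The sketch below is purely illustrative (the learning rate and trial counts are our own arbitrary choices, not values from the study):

```python
# Rescorla-Wagner model: the associative strength V of the neutral
# stimulus (the light) grows each time it is paired with the
# unpleasant one (the heat).
def condition(trials, alpha=0.3, lam=1.0):
    """Return associative strength after `trials` paired presentations.

    alpha: learning rate (salience of the stimuli)
    lam:   maximum strength the unpleasant stimulus can support
    """
    v = 0.0
    for _ in range(trials):
        v += alpha * (lam - v)  # update is proportional to surprise
    return v

print(condition(1))   # weak association after a single pairing
print(condition(20))  # near-maximal association after many pairings
```

Early on, the light predicts little and provokes little response; after enough pairings, the response to the light alone approaches the response to the heat itself, which is exactly the tail-flick behavior the experiment looks for.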

Pavlov’s dog is the most well-known example of classical conditioning, in which a dog salivates in response to a ringing bell because it has formed an associative memory between the bell and food. Lili Chin/Flickr, CC BY-NC-ND.

To create the maps, we genetically engineered zebrafish with neurons that produce fluorescent proteins that bind to synapses and make them visible. We then imaged the synapses with a custom-built microscope that uses a much lower dose of laser light than standard devices that also use fluorescence to generate images. Because our microscope caused less damage to the neurons, we were able to image the synapses without losing their structure and function.

When we compared the 3D synapse maps before and after memory formation, we found that neurons in one brain region, the anterolateral dorsal pallium, developed new synapses while neurons predominantly in a second region, the anteromedial dorsal pallium, lost synapses. In other words, some neurons were forming new partnerships while others were severing theirs. Previous experiments have suggested that the dorsal pallium of fish may be analogous to the amygdala of mammals, where fear memories are stored.

Surprisingly, changes in the strength of existing connections between neurons that occurred with memory formation were small and indistinguishable from changes in control fish that did not form new memories. This meant that forming an associative memory involves synapse formation and loss, but not necessarily changes in the strength of existing synapses, as previously thought.

Could removing synapses remove memories?

Our new method of observing brain cell function could open the door not just to a deeper understanding of how memory actually works, but also to potential avenues for treatment of neuropsychiatric conditions like PTSD and addiction.

Associative memories tend to be much stronger than other types of memories, such as conscious memories about what you had for lunch yesterday. Associative memories induced by classical conditioning, moreover, are thought to be analogous to traumatic memories that cause PTSD. Otherwise harmless stimuli similar to what someone experienced at the time of the trauma can trigger recall of painful memories. For instance, a bright light or a loud noise could bring back memories of combat. Our study reveals the role that synaptic connections may play in memory, and could explain why associative memories can last longer and be remembered more vividly than other types of memories.

Currently the most common treatment for PTSD, exposure therapy, involves repeatedly exposing the patient to a harmless but triggering stimulus in order to suppress recall of the traumatic event. In theory, this indirectly remodels the synapses of the brain to make the memory less painful. Although there has been some success with exposure therapy, patients are prone to relapse. This suggests that the underlying memory causing the traumatic response has not been eliminated.

It’s still unknown whether synapse generation and loss actually drive memory formation. My laboratory has developed technology that can quickly and precisely remove synapses without damaging neurons. We plan to use similar methods to remove synapses in zebrafish or mice to see whether this alters associative memories.

It might be possible to physically erase the associative memories that underlie devastating conditions like PTSD and addiction with these methods. Before such a treatment can even be contemplated, however, the synaptic changes encoding associative memories need to be more precisely defined. And there are obviously serious ethical and technical hurdles that would need to be addressed. Nevertheless, it’s tempting to imagine a distant future in which synaptic surgery could remove bad memories.

The Conversation

Don Arnold, Professor of Biological Sciences and Biomedical Engineering, USC Dornsife College of Letters, Arts and Sciences

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Artificial synapses work together with biological brain cells

A 2017 photo of Alberto Salleo and Scott Keene characterizing the electrochemical properties of a previous artificial synapse design. Credit: L.A. Cicero/Stanford News Service.

Scientists at Stanford University have devised a biohybrid system that allows artificial synapses to communicate with living brain cells. What sets it apart from other brain-machine interfaces is its ability to respond to chemical signals, rather than electrical cues. As such, this is an important leap forward in scientists’ efforts to mimic the brain’s efficiency and natural learning processes.

Machine synapses and biological brain cells

The researchers built upon their previous work from 2017, when they developed artificial synapses made of two soft polymer electrodes, separated by a gap filled with an electrolyte solution. Experiments later showed that such devices could be connected in arrays, mimicking the way real, biological synapses process and store information.

In the brain, synapses are junctions between neurons, allowing brain cells to communicate with one another by exchanging chemical information in the form of various neurotransmitters, such as dopamine or serotonin.

Neuroscientists believe that one of the reasons why the human brain is so efficient has to do with the ability of synapses to simultaneously process and store information. In contrast, computers store information after it is processed, making them very slow by comparison.

The new hybrid system also employs electrochemistry to allow an array of artificial synapses to communicate with living cells as though they were just another neuron exchanging information with its neighbor.

“This paper really highlights the unique strength of the materials that we use in being able to interact with living matter,” said Alberto Salleo, professor of materials science and engineering at Stanford and co-senior author of the new study.

“The cells are happy sitting on the soft polymer. But the compatibility goes deeper: These materials work with the same molecules neurons use naturally.”

Salleo and colleagues placed living neuroendocrine cells from rats — which release the neurotransmitter dopamine — on top of one of the electrodes of the artificial synapse. When neurotransmitters interact with the electrode, a chemical reaction takes place that produces ions, which travel across the synapse trench to the second electrode. The ions alter the conductive state of the electrode, resulting in a permanent change in the connection that simulates how learning occurs in nature.

“In a biological synapse, essentially everything is controlled by chemical interactions at the synaptic junction. Whenever the cells communicate with one another, they’re using chemistry,” said Scott Keene, a graduate student at Stanford and co-lead author of the paper. “Being able to interact with the brain’s natural chemistry gives the device added utility.”

For now, this is just a proof-of-concept. The researchers do not have any immediate plans or applications in mind for their device since the main focus of the research was to simply show that this is all possible. However, this work may one day lead to a new generation of brain-mimicking computers, brain-machine interfaces, medical devices, and novel research tools for neuroscience and drug discovery.

The findings appeared in the journal Nature Materials.

Schizophrenia patients show fewer brain connections than healthy people

New research confirms that schizophrenia’s cognitive symptoms are correlated with lower synaptic density in certain parts of the brain.

Image credits Ellis Chika Onwordi / MRC London Institute of Medical Sciences.

Researchers have hypothesized that there is a link between schizophrenia and malfunctioning synapses since the early 1980s, but lacked the tools needed to investigate it in living brains. The link had, however, been observed in post-mortem brain samples and in animal cells in the lab.

But there’s no better proof of something than seeing it in action. New research at the Medical Research Council (MRC) London Institute of Medical Sciences did just that by using advanced brain-imaging techniques to peer into the synapses of living schizophrenia patients.

Instant synapses, just add protein

“Our current treatments for schizophrenia only target one aspect of the disease—the psychotic symptoms—but the debilitating cognitive symptoms, such as loss of abilities to plan and remember, often cause much more long-term disability and there’s no treatment for them at the moment. Synaptic loss is thought to underlie these symptoms,” says Professor Oliver Howes from the MRC London Institute of Medical Sciences, Imperial College London and King’s College London, the paper’s lead author.

For the study, the team enlisted the help of 18 adults with schizophrenia and 18 people without (these were the controls). The research was made possible by a tracer molecule that emits a signal that can be picked up by a PET (positron emission tomography) brain scan. This tracer is injected into the bloodstream of a subject and binds to SV2A, a specific protein found in brain synapses. Animal and post-mortem human studies have shown that SV2A is a reliable marker for synaptic density in the brain.

The team reports that patients with schizophrenia showed lower levels of SV2A in the frontal and anterior cingulate cortices of the brain, which are involved in planning and other high-level functions. In essence, the lower levels of SV2A proteins seen here suggest a lower number of synapses (and thus, brain functionality) in the area.

“Our lab at the MRC London Institute of Medical Sciences is one of the few places in the world with this new tracer, which means we’ve been able for the first time to show there are lower levels of a synaptic protein in people with schizophrenia,” Professor Howes adds.

“This suggests that loss of synapses could underlie the development of schizophrenia.”

The schizophrenia patients that participated in this study had all received antipsychotic medication, which could affect the results. To address this, the team gave haloperidol and olanzapine, two antipsychotic drugs, to lab rats for 28 days, then analyzed their brains using the same method. Such medication had no effect on SV2A protein levels, they found, which helped to validate their results. This step also indicated that the antipsychotic medication currently in use doesn’t lead to a loss of synaptic density or function, which is always nice to know.

Therapeutic options for schizophrenia remain few and far between. The condition is a highly debilitating one, and any effective avenue of treatment would dramatically improve the quality of life for patients. Studies such as this one serve as a launching pad for developing future treatments, according to Dr. Ellis Onwordi from the MRC London Institute of Medical Sciences, lead author of the paper. The findings can also help guide brain research into other similar conditions by showcasing “how the extraordinarily complex wiring of the human brain is altered by this disease.”

“Having scans that can characterise the distribution of the approximately 100 trillion synapses in the living brain, and find differences in their distribution between people with and without schizophrenia, represents a significant advance in our ability to study schizophrenia,” he adds.

“We need to develop new treatments for schizophrenia. This protein SV2A could be a target for new treatments to restore synaptic function.”

In the future, the team hopes to scan the brains of younger people during the early stages of schizophrenia, to better understand how it develops in the brain. Do all the changes seen in this study happen suddenly, or do they develop over time as the condition progresses? Such data could help us better treat the condition, and maybe even stop it altogether.

The paper “Synaptic density marker SV2A is reduced in schizophrenia patients and unaffected by antipsychotics in rats” has been published in the journal Nature Communications.


Researchers capture first ever images of microglia eating brain synapses

It’s not just zombies that chow on brains — for the first time, researchers have filmed microglia chowing down on brain synapses.


Image credits Savant-fou / Wikimedia.

Brains, much like your room, tend to get messy. Unlike your room, however, the brain has specialized support cells, a class known as “glia”, to keep everything working — and one type of glia, named “microglia” because they’re quite tiny, act as the brain’s Roombas, keeping it tidy 24/7.

While we knew, in theory, what these microglia were up to, we had never actually seen them go about their business. That changed recently, when researchers from the European Molecular Biology Laboratory (EMBL) captured them in the act.

“You finished with that?”

Neurons are arguably the most important cells in the brain; they’re the ones that process information and perform the job the brain is meant to do. However, the real heavy lifting inside this organ is performed by glia. And they’re quite prolific: our brains are roughly 70% glia, and around 10% of all the cells in your brain are microglia specifically.

Microglia are related to macrophages — the variety of white blood cell that ‘eats’ invaders. Microglia perform a similar function in the brain, acting as the main active component of its immune system. But they also take an active part in steering brain development: when not fighting anything, microglia weed out brain synapses that have outlived their usefulness, making room for newer, more efficient synapses.

Given their massively influential role in the brain, it’s understandable why researchers have been trying to get a look at microglia in action for a long time now. Researchers from EMBL Rome, led by Laetitia Weinhard, working in collaboration with EMBL Heidelberg, are the first to actually succeed at this task. They set up a massive imaging study to capture the process in action in mouse brains.

The team combined two brain imaging techniques, correlative light and electron microscopy (CLEM), with light sheet fluorescence microscopy — a technique developed at EMBL — to capture the first images of microglia eating synapses.


Multiple synapse heads send out filopodia (green) converging on one microglia (red), as seen by focused ion beam scanning electron microscopy. L. Weinhard, EMBL Rome

One surprising finding was that around half of the interactions between microglia and synapses prompted the latter to send out thin projections, or ‘filopodia’, as if to greet the cell, the team notes. In one case, the researchers observed no fewer than fifteen synapse heads extending filopodia toward a single microglia as it chowed down on another synapse.

“Our findings suggest that microglia are nibbling synapses as a way to make them stronger, rather than weaker,” says Cornelius Gross (EMBL Rome), who led the research.

“As we were trying to see how microglia eliminate synapses, we realised that microglia actually induce their growth most of the time,” Laetitia Weinhard adds.

One other important finding is that microglia could underlie the formation of double synapses, where one neuron releases neurotransmitters to two others (instead of the traditional one-on-one communication). This mechanism shows that microglia are deeply involved in brain processes such as structural plasticity and can even induce the rearrangement of synapses — a process that underpins learning and memory.

The observations are the product of five years of technological development. During this time, the team worked with three different cutting-edge imaging systems before obtaining the images.

“This is what neuroscientists fantasised about for years, but nobody had ever seen before,” says Cornelius Gross. “These findings allow us to propose a mechanism for the role of microglia in the remodeling and evolution of brain circuits during development.”

Next, the team plans to investigate what role microglia play in brain development during adolescence, or if there’s any link between these cells and mental diseases.

Extra-virgin olive oil might prevent Alzheimer’s and protect your brain

A new study adds even more benefits to the already impressive pile that olive oil can boast: it protects your brain from Alzheimer’s and improves your synapses.

Image credits: Neufal / Pixabay.

Olive oil is good for you

Extra-virgin olive oil is a key component of the Mediterranean Diet, one of the few diets that has been consistently, scientifically shown to yield substantial health benefits. Olive oil itself has substantial benefits and is one of the healthiest types of fats you can consume. Now, a new study has focused on the mechanism through which olive oil can protect your brain.

“Consumption of extra virgin olive oil (EVOO), a major component of the Mediterranean diet, has been associated with reduced incidence of Alzheimer’s disease (AD). However, the mechanisms involved in this protective action remain to be fully elucidated,” the study reads.

Lead investigator Dr. Domenico Praticò, a professor in the departments of Pharmacology and Microbiology and the Center for Translational Medicine at the Lewis Katz School of Medicine at Temple University (LKSOM) in Philadelphia, believes that this study brings us closer not only to the prevention but also to the reversal of Alzheimer’s. He and his colleagues carried out the study on mice and found that animals on EVOO-enriched diets had better memories and learning abilities than those that weren’t.

At a closer look, researchers also learned that mice who consumed more olive oil had better functioning synapses — connections between neurons. But it gets even better.

Olive oil reduces brain inflammation and activates the autophagy process, cleaning out some of the intracellular debris and toxins in the process. This debris is strongly associated with the onset of Alzheimer’s, and the finding suggests that the oil could both help prevent the disease and tackle it directly.

“Thanks to the autophagy activation, memory and synaptic integrity were preserved, and the pathological effects in animals otherwise destined to develop Alzheimer’s disease were significantly reduced,” Praticò said. “We want to know whether olive oil added at a later time point in the diet can stop or reverse the disease.”

“This is an exciting finding for us,” the researcher added.

Image credits: G.steph.rocket / Wiki Commons.

Future studies

Next, they want to conduct a similar experiment starting later in the disease’s development, to see whether similar trends emerge.

The thing is, eating olive oil once or twice does nothing: you need to make it a firm part of your diet to reap the benefits, but that’s definitely worth it. Not only does it taste good and provide healthy fats for your body, but the data presented in the current paper show that chronic administration of an EVOO-enriched diet ameliorates working memory, spatial learning, and synaptic pathology. The case for olive oil as a prevention tool against Alzheimer’s seems quite strong, so there are solid scientific reasons to opt for this type of oil.

However, it’s not clear if it could counteract the disease once it’s already set in. As impressive as olive oil is, it might become even more powerful as a therapeutic tool.

“Usually when a patient sees a doctor for suspected symptoms of dementia, the disease is already present,” Dr. Praticò explains. “We want to know whether olive oil added at a later time point in the diet can stop or reverse the disease.”

Considering that over 5 million Americans suffer from Alzheimer’s and the figure is expected to almost triple to 14 million by 2050, having such a simple and effective tool to combat Alzheimer’s could prove immensely useful.

Journal Reference: Elisabetta Lauretti, Luigi Iuliano, Domenico Praticò — Extra-virgin olive oil ameliorates cognition and neuropathology of the 3xTg mice: role of autophagy. DOI: 10.1002/acn3.431

Artificial synapse brings us one step closer to brain-like computers

Researchers have created a working artificial, organic synapse. The new device could allow computers to mimic some of the brain’s inner workings and improve their capacity to learn. Furthermore, a machine based on these synapses would be much more energy efficient than modern computers.

It may not look like much, but this device could revolutionize our computers forever.
Image credits Stanford University.

As far as processors go, the human brain is hands down the best we’ve ever seen. Its sheer processing power dwarfs anything humans have put together, for a fraction of the energy consumption, and it does it with elegance. If you’ll allow me a car analogy: the human brain is a Formula 1 race car that somehow uses almost no fuel, and our best supercomputer… Well, it’s an old, beat-down Moskvich.

And it misfires.
Image credits Sludge G / Flickr.

So finding a way to emulate the brain’s hardware has understandably been high on the wishlist of computer engineers. A wish that may be granted sooner than they hoped. Researchers at Stanford University and Sandia National Laboratories have made a breakthrough that could allow computers to mimic one element of the brain — the synapse.




“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper.

“It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”


The artificial synapse is made up of two thin, flexible films holding three embedded terminals connected by salty water. It works similarly to a transistor, with one of the terminals dictating how much electricity can flow between the other two. This behavior allowed the team to mimic the processes that go on inside the brain: as neurons zap information to one another, they create ‘pathways’ of sorts through which electrical impulses can travel more easily, and every successful impulse requires less energy to pass through the synapse. For the most part, we believe these pathways allow synapses to store information while they process it, for comparatively little energy expenditure.

Because the artificial synapse mimics the way synapses in the brain respond to signals, it removes the need to separately store information after processing — just like in our brains, the processing creates the memory. These two tasks are fulfilled simultaneously for less energy than other versions of brain-like computing. The synapse could allow for a much more energy-efficient class of computers to be created, addressing a problem that’s becoming more and more pressing in today’s world.

Modern processors need huge fans because they use a lot of energy, giving off a lot of heat.

One application for the team’s synapses could be more brain-like computers that are especially well suited to tasks that involve visual or auditory signals — voice-controlled interfaces or driverless cars, for example. Previous neural networks and artificially intelligent algorithms used for these tasks are impressive but come nowhere near the processing power our brains hold in their tiny synapses. They also use a lot more energy.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper.

“Instead of simulating a neural network, our work is trying to make a neural network.”

The team will program these artificial synapses the same way our brain learns — by progressively reinforcing the pathways through repeated charge and discharge. They found that this method allows them to predict what voltage will be required to get a synapse to a specific electrical state and hold it, with only 1% uncertainty. Unlike a traditional computer, where data has to be written to storage before shutdown or be lost, the neural network can simply pick up where it left off, without the need for any data banks.
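The charge/discharge programming scheme can be caricatured in a few lines of code. This is purely an illustrative sketch under our own assumptions (a single state index nudged one level per pulse), not the device’s actual physics:

```python
# Illustrative sketch: a synapse-like device whose conductance is
# nudged up or down by charge/discharge pulses until it holds a
# target state, and which retains that state afterwards.
class ArtificialSynapse:
    def __init__(self, n_states=500):
        self.n_states = n_states
        self.state = 0  # index into the available conductance levels

    def pulse(self, polarity):
        """Apply one charge (+1) or discharge (-1) pulse."""
        self.state = min(max(self.state + polarity, 0), self.n_states - 1)

    def program(self, target):
        """Repeatedly pulse until the device sits at the target state."""
        pulses = 0
        while self.state != target:
            self.pulse(+1 if target > self.state else -1)
            pulses += 1
        return pulses

syn = ArtificialSynapse()
print(syn.program(250))  # pulses needed to reach state 250 from 0
print(syn.state)         # the state persists: non-volatile memory
```

The key property the sketch captures is non-volatility: once programmed, the state simply stays put, which is why such a network can pick up where it left off without separate data banks.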

One of a kind

Right now, the team has only produced one such synapse. Sandia researchers have taken some 15,000 measurements during various tests of the device to simulate the activity of a whole array of them. This simulated network was able to identify handwritten digits (0 through 9) with 93 to 97% accuracy — which, if you’ve ever used a handwriting-recognition feature, you’ll recognize as an incredible success rate.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper.

“We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

One of the reasons these synapses perform so well is the number of states they can hold. Digital transistors (such as the ones in your computer or smartphone) are binary — they can either be in state 1 or 0. The team has been able to successfully program 500 states in the synapse, and the higher the number, the more powerful a neural network computational model becomes. Switching from one state to another required roughly a tenth of the energy modern computing systems drain to move data from processors to memory storage.
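One way to see why more states matter: the information a single device can store grows with the logarithm of its state count. A quick back-of-the-envelope calculation (ours, not the paper’s):

```python
import math

# A binary transistor has two distinguishable states: one bit.
binary_bits = math.log2(2)

# A device with 500 distinguishable states stores log2(500) bits,
# i.e. nearly nine binary transistors' worth of information.
synapse_bits = math.log2(500)

print(binary_bits)             # 1.0
print(round(synapse_bits, 2))  # 8.97
```

So each 500-state synapse packs roughly nine bits of storage into a single device, on top of doing its processing in place.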

Still, this means that the artificial synapse is currently 10,000 times less energy efficient than its biological counterpart. The team hopes they can tweak and improve the device after trials in working devices to bring this energy requirement down.

Another exciting possibility is the use of these synapses in vivo. The devices are built largely from organic materials based on hydrogen and carbon, and should be fully compatible with the brain’s chemistry. They’re soft and flexible, and operate at the same voltages as human neurons. All this raises the possibility of using the artificial synapse in concert with live neurons in improved brain-machine interfaces.

Before considering any biological applications, however, the team wants to test a full array of artificial synapses.

The full paper “A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing” has been published in the journal Nature Materials.



We sleep to forget things, new study finds

Sleep is as mysterious as it is vital for our wellbeing. Over the decades, researchers have proposed several mechanisms through which sleep rejuvenates us, but we still don’t fully understand the big picture. Now, two recently published studies come up with an interesting explanation: we sleep to forget some of the things we learn during the day.

Image credits: Dagon / Pixabay

We store memories in networks in our brains. Whenever we learn something new, we grow new connections between neurons, called synapses. In 2003, Giulio Tononi and Chiara Cirelli, biologists at the University of Wisconsin-Madison, proposed something very interesting: during the day, we learn so much and develop so many synapses that things sometimes get fuzzy. Since then, the two and their collaborators have made quite a few interesting additions to that study.

For starters, they showed that neurons can prune out some synapses, at least in the lab. But they suspected the same thing happens every day, naturally, in our brains — probably during sleep. So they set up a painstaking experiment, in which Luisa de Vivo, an assistant scientist working in their lab, collected 6,920 synapses from mice, both awake and sleeping. Then, they determined the shape and size of all these synapses, learning that the synapses in sleeping mice were 18 percent smaller than in awake ones. That’s quite a big margin. “That there’s such a big change over all is surprising,” Dr. Tononi said. This was a big tell and helped direct their efforts.

After this, they designed a memory test for mice. They placed the animals in a room where they would get a mild electrical shock if they walked over one particular section of the floor. They injected some of the mice with a substance that had been proven to prevent the pruning of new synapses. The mice that received it were much more likely to forget about the dangerous section: after a good night’s sleep, they tended to walk over it again, while mice that slept normally remembered better.

Then, Dr. Tononi and his colleagues found that the pruning didn’t strike every synapse. Some 20% were left untouched, likely well-established memories that shouldn’t be tampered with. In other words, we sleep to forget, but in a smart way. Another interesting consequence concerns sleeping pills, which might interfere with the brain’s pruning process and prevent the brain from forming memories properly.

Markus H. Schmidt, of the Ohio Sleep Medicine Institute, said that the studies make a very good point in identifying one benefit of sleep, but he questioned whether this is the reason why we sleep.

“The work is great,” he said of the new studies, “but the question is, is this a function of sleep or is it the function?”

Of course, it would be very difficult to replicate this study in humans.

Journal Reference: Luisa de Vivo et al — Ultrastructural evidence for synaptic scaling across the wake/sleep cycle. Science. DOI: 10.1126/science.aah5982



Artificial synapse brings us one step closer to building a brain-like computer

A new study describes a novel computing component which emulates the way neurons connect in the human brain. This “memristor” changes its electrical resistance depending on how much current has already flowed through it, mimicking the way neurons transmit signals through synapses, the team writes.

Image credits Pixabay / JarkkoManty.

This device could lead to significant advancements in brain-like computers, capable of handling perceptual and learning tasks much better than traditional computers while being far more energy efficient.

“In the past, people have used devices like transistors and capacitors to simulate synaptic dynamics, which can work, but those devices have very little resemblance to real biological systems,” said study leader and professor of electrical and computer engineering at the University of Massachusetts Amherst Joshua Yang.

The human brain has somewhere between 86 and 100 billion neurons, which connect through as many as 1,000 trillion (that’s a one followed by 15 zeros) synapses, making your brain an estimated 1-trillion-bit-per-second processor. Needless to say, computer scientists are dying to build something with even a fraction of this processing power, and a computer that mimics the brain’s structure, and thus its computing power and efficiency, would be ideal.

Building a brain

When an electrical signal hits a synapse in your brain, it prompts calcium ions to flood in, triggering the release of neurotransmitters. These are what actually transmit the information across the synapse, causing an impulse to form in the next neuron, and so on. The “diffusive memristor” described in the paper is made up of silver nanoparticle clusters embedded in a silicon oxynitride film sandwiched between two electrodes.

The film is an insulator, but when a voltage is applied across the device, the clusters start to break apart through a combination of electrical forces and heat. The nanoparticles diffuse through the film to form a conductive filament, allowing current to flow from one electrode to the other. Cut the voltage and the temperature drops, and the clusters re-form, similar to how calcium ions behave in a synapse.

“With the synaptic dynamics provided by our device, we can emulate the synapse in a more natural way, more direct way and with more fidelity,” Yang told Live Science.

The device can thus mimic short-term plasticity in neurons, the researchers said. Trains of low-voltage, high-frequency pulses will gradually increase the device’s conductivity until a current can pass through. But, if the pulses continue, the conductivity will eventually decrease.
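As a rough illustration of this short-term plasticity, here is a minimal toy model, not the authors’ actual device physics; every parameter is invented for illustration. Each voltage pulse nudges the conductance up, and between pulses the silver clusters re-form, so the conductance relaxes back toward zero:

```python
import math

def simulate_memristor(pulse_times, tau=1.0, dg=0.2, g_max=1.0, t_end=10.0, dt=0.01):
    """Toy model: each pulse adds dg to the conductance (capped at g_max);
    between pulses the conductance decays exponentially with time constant tau,
    mimicking the silver clusters re-forming once the voltage is cut."""
    g, t, i = 0.0, 0.0, 0
    pulses = sorted(pulse_times)
    trace = []
    while t <= t_end:
        g *= math.exp(-dt / tau)          # relaxation (cluster re-formation)
        while i < len(pulses) and pulses[i] <= t:
            g = min(g_max, g + dg)        # a voltage pulse grows the filament
            i += 1
        trace.append((t, g))
        t += dt
    return trace

# A rapid train of pulses builds the filament up to high conductance...
fast = simulate_memristor([0.1 * k for k in range(10)])
# ...while widely spaced pulses decay away before the next one arrives.
slow = simulate_memristor([2.0 * k for k in range(10)], t_end=20.0)
```

In this sketch the high-frequency train drives the conductance near its ceiling while the sparse train never accumulates, which is the facilitation-like behavior described above; the model deliberately ignores the eventual conductivity drop under sustained pulsing.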

The team also combined their diffusive memristor with a drift memristor, which relies on electric fields and is optimized for memory applications. This allowed them to demonstrate a form of long-term plasticity called spike-timing-dependent plasticity (STDP), which adjusts the connection strength between neurons based on the relative timing of their impulses. Drift memristors have previously been used to approximate calcium dynamics, but because they’re based on physical processes very different from the ones our brains employ, they have limited fidelity and variety in the functions they can simulate.

“You don’t just simulate one type of synaptic function, but [also] other important features and actually get multiple synaptic functions together,” Yang said.

“The diffusion memristor is helping the drift-type memristor behave similarly to a real synapse. Combining the two leads us to a natural demonstration of STDP, which is a very important long-term plasticity learning rule.”
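STDP itself can be written down as a compact update rule. The sketch below is a textbook-style caricature with invented constants, not the paper’s model: the synaptic weight grows when the presynaptic spike precedes the postsynaptic one, shrinks in the opposite order, and the effect fades exponentially as the spikes move apart in time.

```python
import math

def stdp_update(w, dt_ms, a_plus=0.10, a_minus=0.12, tau_ms=20.0):
    """Toy spike-timing-dependent plasticity rule.
    dt_ms = t_post - t_pre: positive means 'pre fired before post' (LTP),
    negative means 'post fired before pre' (LTD)."""
    if dt_ms > 0:
        w += a_plus * math.exp(-dt_ms / tau_ms)    # potentiation
    elif dt_ms < 0:
        w -= a_minus * math.exp(dt_ms / tau_ms)    # depression
    return max(0.0, min(1.0, w))                   # keep the weight bounded

w_ltp = stdp_update(0.5, +5.0)   # pre 5 ms before post -> strengthened
w_ltd = stdp_update(0.5, -5.0)   # post 5 ms before pre -> weakened
```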

Reproducing synaptic plasticity is essential to creating a brain-like computer. And we should do our best to create one, Yang said.

“The human brain is still the most efficient computer ever built,” he added.

The team used fabrication processes similar to those being developed by computer memory companies to scale up memristor production. Silver doesn’t lend itself well to all of these methods; however, copper nanoparticles could be used instead, Yang said. He added that the approach is definitely scalable and that single-unit devices should be comparable to biological synapses in size, though in multi-unit systems the devices will likely need to be bigger, due to the practical considerations involved in making a larger system work.

The full paper “Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing” has been published in the journal Nature Materials.

Image: Wikimedia Commons

Your memories last as long as the neural connections: a long-standing theory now confirmed

Neuroscientists have long posited that memories last only as long as the connections in the brain that hold them, but putting this theory to the test has always proved challenging. Using the latest imaging techniques and sheer innovation, a group at Stanford confirmed it to be true: the researchers literally peered into the brains of mice and studied brain connections as they formed or were replaced. Once a connection was lost, so was the memory.


The group, led by Mark Schnitzer, a Stanford associate professor of biology and of applied physics, focused its efforts on unraveling the physical brain structures that underlie episodic memories. These are the kind of memories that are stored for a limited time and then lost if not used, like conversations you had or events that took place in the past couple of weeks or months. Episodic memories are stored in the hippocampus, a small region of the brain that forms part of the limbic system and is primarily associated with memory and spatial navigation.


In mice, episodic memories typically last 30 days at most; humans do somewhat better. When mice undergo hippocampus-disrupting surgery, memories formed in the previous 30 days are lost, but if the surgery takes place more than 30 days after a memory is formed, the mouse still retains the information that helps it identify a mate or navigate a maze. That’s because those memories have been moved from the hippocampus to the neocortex, the brain’s long-term repository.

Previously, researchers at Cold Spring Harbor Laboratory in New York studied the connections formed between neurons in the neocortex. These connections are located near the surface of the brain and thus easily monitored without significant disruption. However, they didn’t observe the connections per se, but instead looked at a proxy: the bulbous region of a dendritic spine where synapses form. By watching the spines come and go, the researchers could tell when and where new connections were being made. Using this approach, they found that about half of the spines in the neocortex were permanent and the rest turned over approximately every five to 15 days. In other words, half the connections in the neocortex are established long-term memories, while the rest are malleable, allowing new memories to be formed or old ones discarded (forgetfulness).
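The Cold Spring Harbor numbers imply a simple survival curve. As a back-of-the-envelope sketch: the 50% permanent fraction comes from the article, while the single-exponential decay and the 10-day mean lifetime (the midpoint of the 5-to-15-day range) are my simplifying assumptions.

```python
import math

def spines_surviving(t_days, permanent_frac=0.5, mean_lifetime_days=10.0):
    """Fraction of initially imaged neocortical spines still present after t_days:
    half are permanent, the rest decay exponentially with a ~10-day mean lifetime."""
    transient = (1.0 - permanent_frac) * math.exp(-t_days / mean_lifetime_days)
    return permanent_frac + transient

# Nearly all spines remain after a day; after a month the curve
# has flattened out just above the 50% floor of permanent spines.
day1, day30 = spines_surviving(1), spines_surviving(30)
```

Under these assumptions, the fraction of labeled spines you would re-find plateaus near 50%, which is exactly the split between stable and malleable connections described above.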

Using the same line of reasoning, Schnitzer suggested that the spines in the mouse hippocampus should turn over every 30 days or so, along with the memories they hold. Unlike the neocortex, however, the hippocampus is nestled deep inside the brain and hence much more challenging to image. Moreover, the spines there are so densely packed that multiple spines can easily be confused for one.

The Stanford team first implanted a microendoscope, essentially a high-tech needle that provides high-resolution images of structures deep within the brain, into the brains of mice. With the equipment in place, they used a technique first described by Schnitzer and colleagues in 2011 to stably image a single neuron in a living mouse over long time periods. But as mentioned earlier, even with their best efforts the researchers still found it extremely difficult to distinguish individual spines from one another. “The ability to resolve spines in the hippocampus is right on the hairy edge of our technological capability,” Schnitzer said.

The team overcame that problem with a mathematical model that took into account the limitations of the optical resolution and how that would affect the image datasets depicting the appearances and disappearances of spines.

Eventually, Schnitzer and colleagues found the region of the hippocampus that stores episodic memories contains spines that all turn over every three to six weeks, as reported in Nature. It’s no coincidence that this is roughly the duration of an episodic memory in mice.

“Just because the community has had a longstanding idea, that doesn’t make it right,” Schnitzer said. Now that the idea has been validated, he said, his technique could open up new areas of memory research: “It opens the door to the next set of studies, such as memory storage in stress or disease models.”

To recap, the Stanford researchers used novel techniques to probe how memories are formed, lost or transferred at an individual neural connection level. I don’t know about you, but that’s darn impressive!

When synapses are destroyed, the memories that they foster aren't necessarily erased. Credit: Red Orbit

Long-term memory isn’t stored in synapses, meaning it could be restored even when struck by Alzheimer’s

For a while, the general consensus was that long-term memories are stored in synapses. New UCLA research topples this paradigm: experiments on snails suggest that synapses aren’t as crucial for storing memories as previously believed, but only facilitate the transfer of information someplace else, most likely the nucleus of the neurons themselves, though this has yet to be proven. The findings defy conventional wisdom and offer new hope that people struck by neurodegenerative diseases like Alzheimer’s might be able to recover part of their memories.

Tabula rasa


“Long-term memory is not stored at the synapse,” said David Glanzman, a senior author of the study, and a UCLA professor of integrative biology and physiology and of neurobiology. “The nervous system appears to be able to regenerate lost synaptic connections. If you can restore the synaptic connections, the memory will come back. It won’t be easy, but I believe it’s possible.”

The team led by Glanzman studied a marine snail called Aplysia, which has a defensive reflex that protects its gill from potential harm: when the animal is disturbed, it withdraws its delicate siphon and gill. This simple, easy-to-observe behaviour has made it a lab favorite for neuroscientists. Despite what you might think, the underlying cellular and molecular processes seem to be very similar between the marine snail and humans, even though the snail has approximately 20,000 neurons while humans have tens of billions, each neuron forming several thousand synapses.

Researchers applied several electric shocks to the snail’s tail to enhance the snail’s withdrawal reflex. After a few more rounds of ‘shock therapy’, the enhancement persisted for days, showing it had been consolidated into the snail’s long-term memory. On a neural level, when the shock is applied, the neurotransmitter serotonin is released in the snail’s nervous system.

When serotonin reaches the nervous system, it promotes growth of new synaptic connections.  As long-term memories are formed, the brain creates new proteins that are involved in making new synapses. When this process is disrupted, by a concussion, some other injury or neurodegenerative disease, the proteins aren’t synthesized and long-term memories can’t form.

“If you train an animal on a task, inhibit its ability to produce proteins immediately after training, and then test it 24 hours later, the animal doesn’t remember the training,” Glanzman said. “However, if you train an animal, wait 24 hours, and then inject a protein synthesis inhibitor in its brain, the animal shows perfectly good memory 24 hours later. In other words, once memories are formed, if you temporarily disrupt protein synthesis, it doesn’t affect long-term memory. That’s true in the Aplysia and in humans’ brains.” (This explains why people’s older memories typically survive following a concussion.)

Brain grown in a jar

This process holds true even when the brain cells are studied in a Petri dish. The researchers placed sensory and motor neurons involved in the snail’s withdrawal reflex in a Petri dish and found that the neurons re-formed the synaptic connections that had existed when they were inside the snail’s body. When serotonin was added to the mix, entirely new synaptic connections formed. But if a protein synthesis inhibitor was added immediately after the serotonin, the new synaptic growth was blocked and long-term memories couldn’t form.

But do memories disappear when synapses do? The researchers wanted to know, so they counted the number of synapses in the dish and then, 24 hours later, added the protein synthesis inhibitor. After a re-count, they found that new synapses had still grown and the synaptic connections between the neurons had been strengthened; added after the fact, the inhibitor was of no consequence.

Next, the scientists added serotonin to a Petri dish containing a sensory neuron and a motor neuron, waited 24 hours, and then added another brief pulse of serotonin, which served to remind the neurons of the original training, and immediately afterward added the protein synthesis inhibitor. This time, both synaptic growth and memory were erased. This suggests that the “reminder” pulse of serotonin triggered a new round of memory consolidation, and that inhibiting protein synthesis during this “reconsolidation” erased the memory in the neurons.

So, if synapses were indeed storing long-term memory, the lost synapses should have been the same ones that had grown in response to the serotonin. That wasn’t the case, however. Instead, the researchers found that some of the new synapses were still present and some were gone, and that some of the original ones were gone, too. Glanzman says there doesn’t seem to be any pattern to which synapses stayed and which disappeared, which suggests they’re not tied to long-term memory storage.

Moreover, when the scientists repeated the experiment in the snail, and then gave the animal a modest number of tail shocks — which do not produce long-term memory in a naive snail — the memory they thought had been completely erased returned.

“That suggests that the memory is not in the synapses but somewhere else,” Glanzman said. “We think it’s in the nucleus of the neurons. We haven’t proved that, though.”

While late-stage Alzheimer’s destroys neurons, the disease causes memory loss even in its early stage, before neurons are lost. So just because synapses are destroyed by the disease, it doesn’t necessarily mean that the memories are erased as well. By re-activating the synaptic connections, one might be able to tap into those lost memories.

Findings appeared in the online journal eLife.


Scientists erase memory (and then reactivate it) in rats

Researchers have erased and then reactivated memories in rats, profoundly impacting the animals’ reaction to past events. This is the first study ever to demonstrate the ability to selectively erase and then reactivate a memory by stimulating nerves in the brain at frequencies that strengthen synapses, the connection between neurons.

Eternal Sunshine of the Spotless Mind

Quite possibly Jim Carrey’s best movie (and that says a lot), Eternal Sunshine of the Spotless Mind deals with erasing memories and then bringing them back, and explores the very nature of memory, and of love. I won’t spoil anything if you haven’t seen it yet, but you really should watch it.

I didn’t think we’d be dealing with anything like this anytime soon – and yet here we are. In this research, published on the 1st of June, scientists have erased, and then reactivated memories in rats.

“We can form a memory, erase that memory and we can reactivate it, at will, by applying a stimulus that selectively strengthens or weakens synaptic connections,” said Roberto Malinow, MD, PhD, professor of neurosciences and senior author of the study.

First of all, they genetically modified rats so that some of the nerves in their brains could be stimulated with light, a technique known as optogenetics. They then stimulated those nerves while simultaneously delivering an electrical shock to the animal’s foot. This created a conditioned reflex: the animals soon learned to associate the optical stimulation with pain, and whenever they experienced it, they would feel fear.

They then analyzed these nerves, and found indications of synaptic strengthening.

In the next stage of the experiment, they stimulated the same nerves, but this time with a memory-erasing, low-frequency train of optical pulses. After this, no matter how they stimulated the nerves, the rats didn’t respond with fear, and showed no indication of remembering the initial association.

Recreating Memories

But this wasn’t all. In what is maybe the study’s most startling discovery, they then found a way to recreate the initial memories, by re-stimulating the same nerves with a memory-forming, high-frequency train of optical pulses. These re-conditioned rats once again responded to the original stimulation with fear, even though they had not had their feet re-shocked.

“We can cause an animal to have fear and then not have fear and then to have fear again by stimulating the nerves at frequencies that strengthen or weaken the synapses,” said Sadegh Nabavi, a postdoctoral researcher in the Malinow lab and the study’s lead author.

To me, this is simply mind blowing. Sure, it’s a simple memory, and it’s a memory that they created – but it’s a proof of concept. The researchers showed that it is possible to eliminate and then recreate a memory using certain stimuli – this is not something I was expecting to find out when I woke up.

Naturally, it’s much too premature to talk about actually altering memories in humans. There is still a long way (and many years) to go before we can even start discussing that – but as I said, it’s a proof of concept; and there are some closer clinical applications of this discovery, for example in Alzheimer’s disease.

“Since our work shows we can reverse the processes that weaken synapses, we could potentially counteract some of the beta amyloid’s effects in Alzheimer’s patients,” Malinow said.

Journal Reference:
Sadegh Nabavi, Rocky Fox, Christophe D. Proulx, John Y. Lin, Roger Y. Tsien and Roberto Malinow. Engineering a memory with LTD and LTP. Nature, 2014 DOI: 10.1038/nature13294

Several prototypes of the synaptic transistor are visible on this silicon chip. (Photo by Eliza Grinnell, SEAS Communications.)

New transistor boasts neuron-like capabilities. It learns as it computes, hinting towards a new parallel computing future

The human brain is possibly the most complex entity in the Universe. It’s absolutely remarkable and beautiful to contemplate, and the things we are capable of because of our brains are outstanding. Even when people seem to be using their brains trivially, the truth is the brain is always doing something incredibly complex. Consider the technicalities alone: the human brain contains some 100 billion nerve cells, and these form connections in tandem, each neuron simultaneously engaged with another 1,000 or so. In total, the brain performs some 20 million billion calculations per second.

The unmatched computational strength of the human brain

That’s quite impressive. Some people think that just because they can’t add, multiply or differentiate an equation in a heartbeat like a computer does, the computer is ‘smarter’ than them. That couldn’t be farther from the truth: the machine can do only one thing, compute. Ask your scientific hand calculator to make you breakfast, write a novel or dig a hole. You could design a super scientific calculator with grippable limbs and program it to grab a shovel and dig, and it would probably succeed, but it would soon hit yet another limitation, because that’s all it was designed to do; it doesn’t ‘think’ for itself. Imagine this: if you were to virtually combine all the computing power on our planet, all the CPUs in the world, only then would you be able to match the computing speed of a single human brain. Building a machine similar to the human brain by today’s sequential computational standards would thus cost an enormous amount of money and energy; to cool such a machine you’d need to divert a whole river. In contrast, an adult human brain consumes only about 20 watts of power!

Mimicking the computing power of the brain, the most complex computational ‘device’ in the Universe, is a priority for computer science and artificial intelligence enthusiasts. But we’re just beginning to learn how the brain works and what lies within our deepest recesses – the challenges are numerous.

A new step forward in this direction has been made by scientists at the Harvard School of Engineering and Applied Sciences (SEAS), who reportedly built a transistor that behaves like a neuron, in some respects at least.

The brain is extremely plastic: it creates a coherent interpretation of the external world based on input from its sensory systems, and it is always changing and highly adaptable. In fact, some neurons, or even whole brain regions, can switch functions when needed, a fact attested by various medical cases in which severe trauma was inflicted. Remember Phineas Gage? Gage worked as a construction foreman during the railroad boom of the mid-19th century. A freak accident propelled a large iron rod directly through his skull, destroying much of his brain’s left frontal lobe. He survived for many years afterward, though his personality was severely altered, making him the prime example at the time that personality is deeply intertwined with the brain. What his case also demonstrates, however, is that key brain functions can be diverted to other parts of the brain.

A synaptic transistor


So how do you mimic this amazing plasticity? Well, if you want a chip that behaves like a human brain, you first need its building blocks to behave like the constituting elements of the brain: transistors standing in for synapses. A transistor in some ways already behaves like a synapse, acting as a signal gate. When two neurons are connected (they’re never in direct contact!), electrochemical reactions mediated by neurotransmitters relay specific signals.

In a real synapse, calcium ions induce chemical signaling. The Harvard transistor instead uses oxygen ions, embedded in an 80-nanometer-thick layer of samarium nickelate crystal, which serves as the analog of the synapse channel. When a voltage is applied to the crystal, oxygen ions slip through, changing the conductive properties of the lattice and altering its signal-relaying capabilities.

The strength of the connection is based on the time delay in the electric signal fed into it, much as real synapses get stronger the more signals they relay. Exploiting unusual properties of modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer.

“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS. “Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”

“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

So it does, in fact, run a bit like a neuron, in the sense that it adapts, strengthening and weakening connections according to external stimuli. Also, as opposed to traditional transistors, the Harvard creation isn’t restricted to the binary system of ones and zeros, and, interestingly enough, it runs on non-volatile memory, which means that even when power is interrupted, the device remembers its state. Still, it can’t form new connections the way a human neuron can.

“We exploit the extreme sensitivity of this material,” says Ramanathan. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”

It does have one significant advantage over the human brain: these transistors can run at high temperatures exceeding 160 degrees Celsius. That kind of heat would boil a brain, so kudos.

So, in principle at least, integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance. We’re still a long way from anything like that, but the work hints at a future of highly efficient and fast parallel computing. This is the very first baby step, a proof of concept.

“You have to build new instrumentation to be able to synthesize these new materials, but once you’re able to do that, you really have a completely new material system whose properties are virtually unexplored,” Ramanathan says. “It’s very exciting to have such materials to work with, where very little is known about them and you have an opportunity to build knowledge from scratch.”

“This kind of proof-of-concept demonstration carries that work into the ‘applied’ world,” he adds, “where you can really translate these exotic electronic properties into compelling, state-of-the-art devices.”

The findings were reported in the journal Nature Communications.



Gene mutation leads to insatiable eating disorder causing obesity

There are a number of factors that lead to obesity, the most obvious of which is, of course, eating too much without burning the excess calories through exercise. The fact is, there are some people who, no matter how much they eat, never seem to be satisfied, constantly consumed by a sense of hunger and a voracious appetite. These individuals have a problem, and it’s genetic in nature. Recent research by scientists at Georgetown University Medical Center has found that a gene mutation causes uncontrollable eating, the result of a malfunctioning appetite-quenching signal that never reaches the right place in the brain.

Hunger is an indispensable biological mechanism that signals a healthy individual that it’s time to ingest food and nourish the body, through fluctuations in the levels of the hormones leptin and ghrelin. Increasing levels of leptin reduce one’s motivation to eat. In humans, leptin and insulin are released into the body so that the brain knows it’s time to stop eating; however, the researchers found that mutations in the brain-derived neurotrophic factor (Bdnf) gene do not allow brain neurons to effectively pass the leptin and insulin chemical signals through the brain.

A gene mutation that makes you eat continuously

The BDNF gene is crucial to the formation and maturation of the synapses, structures that link neurons with one another and allow chemical signal transmission between them. The gene generates one long and one short transcript. Researchers observed that mice which lacked the long-form Bdnf transcript had many immature synapses, resulting in deficits in learning and memory. Mice suffering from the same Bdnf mutation were also severely obese.

“This is the first time protein synthesis in dendrites, tree-like extensions of neurons, has been found to be critical for control of weight,” says the study’s senior investigator, Baoji Xu, Ph.D., an associate professor of pharmacology and physiology at Georgetown.

“This discovery may open up novel strategies to help the brain control body weight,” he says.

Researchers have also looked at the Bdnf gene in humans, and large-scale genome-wide association studies have shown that Bdnf gene variants are indeed linked to obesity in people as well. That link has been known for some time, but the mechanics weren’t understood before this study. Xu’s research shows that the leptin and insulin chemical signals need to travel along the neuronal highway to the correct brain locations, where appetite can be quenched; when the Bdnf gene is mutated, neurons can no longer communicate well with one another.

“If there is a problem with the Bdnf gene, neurons can’t talk to each other, and the leptin and insulin signals are ineffective, and appetite is not modified,” Xu says.

Hope for a cure to obesity

Scientists are now looking for ways to regulate the movement of leptin and insulin signals through brain neurons. One immediate approach might be adeno-associated-virus-based gene therapy to produce additional long-form Bdnf transcript. Though this would be a safe procedure, the researchers believe gene therapy would likely be less effective than a drug that stimulates Bdnf expression in the hypothalamus.

The researchers’ findings were reported on March 18 in the journal Nature Medicine.

Georgetown Press Release

The unified theory of brain learning

The brain learns basically by shifting the strengths of its synapses in response to different stimuli – that much is clear. Recently, however, a team of UCLA scientists has challenged the common belief about the mechanism of learning, showing that the brain learns rhythmically: there is an optimal ‘rhythm’, or frequency, for changing synapse strength, and any frequency higher or lower than the optimal one results in slower, less efficient learning.

The findings, which, if correct, might pave the way towards a ‘unified theory of the brain‘, could also lead to new therapies for treating learning disabilities. The study was published in Frontiers in Computational Neuroscience.


“Many people have learning and memory disorders, and beyond that group, most of us are not Einstein or Mozart,” said Mayank R. Mehta, the paper’s senior author and an associate professor in UCLA’s departments of neurology, neurobiology, physics and astronomy. “Our work suggests that some problems with learning and memory are caused by synapses not being tuned to the right frequency.”

Any change in the strength of a synapse as a result of stimuli is known as synaptic plasticity, and it is induced through so-called ‘spike trains’ – series of neural signals that occur with varying frequency and timing. Previous experiments had already shown that stimulating the brain at very high frequencies, such as 100 spikes per second, strengthens the synapse, while lower frequencies reduce synaptic strength.

These earlier experiments used hundreds of consecutive spikes in the very high-frequency range to induce plasticity. Yet when the brain is activated during real-life behavioral tasks, neurons fire only about 10 consecutive spikes, not several hundred. And they do so at a much lower frequency — typically in the 50 spikes-per-second range.

“[..]spike frequency refers to how fast the spikes come. Ten spikes could be delivered at a frequency of 100 spikes a second or at a frequency of one spike per second.”
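The distinction the quote draws – spike count versus spike frequency – can be made concrete with a few lines of Python. This is an illustrative sketch, not code from the study; the function name `spike_times` is my own:

```python
def spike_times(n_spikes, freq_hz):
    """Return the times (in seconds) of n_spikes delivered at a fixed frequency."""
    interval = 1.0 / freq_hz  # time between consecutive spikes
    return [i * interval for i in range(n_spikes)]

# The same 10 spikes can arrive quickly or slowly:
fast = spike_times(10, 100)  # 10 spikes at 100 Hz: all within 0.09 s
slow = spike_times(10, 1)    # 10 spikes at 1 Hz: spread over 9 s
```

In both cases the neuron fires exactly ten spikes; only the spacing between them – the frequency – differs.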

What is different about this study is that Mehta and his coworkers measured the effect using a sophisticated mathematical model they developed and validated against experimental data. What they found, contrary to current belief, is that stimulating the brain at the highest possible frequencies is not the best way to strengthen a synapse – and the further you stray from the optimal frequency, the less efficient the stimulation becomes.

For example, stimulating a synapse with just 10 spikes at a frequency of 30 spikes per second induced a far greater increase in strength than stimulating the same synapse with 10 spikes at 100 per second.
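This non-monotonic relationship can be pictured with a toy tuning curve, in which the synaptic change induced by a spike train peaks at an optimal frequency and falls off on either side. The Gaussian-on-log-frequency shape and the 30 Hz optimum below are illustrative assumptions of mine, not the model from the paper:

```python
import math

def synaptic_change(freq_hz, optimal_hz=30.0, width=0.5):
    """Toy tuning curve: relative synaptic strengthening peaks at
    optimal_hz and declines for frequencies above or below it."""
    deviation = math.log(freq_hz / optimal_hz)
    return math.exp(-deviation ** 2 / (2 * width ** 2))

# Stimulating at the optimum strengthens the synapse most:
synaptic_change(30)   # -> 1.0 (peak response)
synaptic_change(100)  # weaker, despite the higher frequency
synaptic_change(5)    # also weaker, on the low side of the optimum
```

The key qualitative feature, matching the finding above, is that driving the synapse faster than its preferred frequency makes the strengthening smaller, not larger.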

“The expectation, based on previous studies, was that if you drove the synapse at a higher frequency, the effect on synaptic strengthening, or learning, would be at least as good as, if not better than, the naturally occurring lower frequency,” Mehta said. “To our surprise, we found that beyond the optimal frequency, synaptic strengthening actually declined as the frequencies got higher.”

Knowing that a synapse has a preferred frequency at which it performs best is a huge breakthrough in itself, but the researchers also concluded that for the best effect, the stimulation has to be perfectly rhythmic. Furthermore, they showed that once a synapse learns something, its preferred frequency changes. This learning-induced “detuning” process has important implications for treating disorders related to forgetting, such as post-traumatic stress disorder, the researchers said.

Much more research is needed to fully understand the mechanisms at hand, but even so, the results are extremely promising.

New imaging method reveals stunning detail of brain connections

The typical healthy human brain contains about 200 billion nerve cells, called neurons, which are linked through hundreds of trillions of tiny connections called synapses. A single neuron can form up to 10,000 synapses with other neurons, according to Stephen Smith, PhD, professor of molecular and cellular physiology.

Along with a team of researchers from the Stanford School of Medicine, he was able to quickly and accurately locate and count these synapses in unprecedented detail, using a new state-of-the-art imaging system on a brain tissue sample. Because synapses are so small and packed so closely together, it’s very hard to achieve a thorough understanding of the complex neuronal circuits that make our brains work. This new method could shed light on the problem: it works by combining high-resolution photography with specialized fluorescent molecules that bind to different proteins and glow in different colors. The computing power required to produce the imagery was massive.

A synapse is less than a thousandth of a millimeter in diameter, and the spaces between synapses are not much bigger. The method, called array tomography, is in its early years, but as time passes it will probably become more and more reliable and efficient.

“I anticipate that within a few years, array tomography will have become an important mainline clinical pathology technique, and a drug-research tool,” Smith said. He and Micheva are founding a company that is now gathering investor funding for further work along these lines. Stanford’s Office of Technology Licensing has obtained one U.S. patent on array tomography and filed for a second.

Full study here.