Tag Archives: Neural

We have the first genetic evidence of human self-domestication

New research at the University of Barcelona (UB) found the first genetic evidence that humanity has self-domesticated.

Comparison of Modern Human and Neanderthal skulls from the Cleveland Museum of Natural History.
Image credits DrMikeBaxter / Wikipedia.

The team found a network of genes involved in the evolution of human face structure and prosociality in modern humans which is absent in the Neanderthal genome. This suggests that our ancestors preferred to hang out and mate with friendlier and more cooperative companions over less-cooperative, more aggressive ones. In effect, this amounted to selective pressure for prosocial behavior over time, meaning that we domesticated our own species.

Our own best friend

Certain anatomical, cognitive, and behavioral traits of modern humans — chief among them docility and a fragile facial structure — are hallmarks of the domestication process. This is why the idea of human self-domestication was first proposed all the way back in the 19th century, the team explains. Until now, however, we lacked the tools to confirm that this process actually took place (i.e. to find genetic evidence for it).

The study builds on the team’s previous research into genetic similarities between humans and domesticated animals. This time, the researchers went one step further and looked for genetic evidence of self-domestication in neural crest cells — a population of cells that plays a major role in the early development of vertebrate embryos by differentiating into more specialized cell types.

“A mild deficit of neural crest cells has already been hypothesized to be the factor underlying animal domestication,” explains co-author Alejandro Andirkó, a Ph.D. student at the Department of Catalan Philology and General Linguistics of the UB.

“Could it be that humans got a more prosocial cognition and a retracted face relative to other extinct humans in the course of our evolution as a result of changes affecting neural crest cells?”

In order to test their hypothesis, the team focused on Williams syndrome, a human-specific neurodevelopmental disorder caused by a deficit of neural crest cells as the embryo develops. It is characterized by mild to moderate intellectual disability or learning problems, unique personality characteristics, distinctive facial features, and cardiovascular problems.

The researchers used in vitro models of Williams syndrome (stem cells derived from the skin of patients with this syndrome). After poking around, they found that the BAZ1B gene, conveniently located in the region of the genome associated with Williams syndrome, is responsible for controlling the behavior of neural crest cells. If this gene was under-expressed, it led to reduced migration of these cells; higher expression levels led to greater neural crest migration. Then, they compared this gene to its equivalent in samples of archaic (i.e. extinct) and modern (i.e. our ancestors’) human genomes.

“We wanted to understand if neural crest cell genetic networks were affected in human evolution compared to the Neanderthal genomes,” says Cedric Boeckx, ICREA professor at the Department of Catalan Philology and General Linguistics.

The comparison revealed a high frequency of mutations in BAZ1B that accumulated over time in modern humans — but not in any of the archaic genomes currently available. The team says this points to BAZ1B as being “an important reason our face is so different when compared with our extinct relatives, the Neanderthals.”

“In the big picture, it provides for the first time experimental validation of the neural crest-based self-domestication hypothesis,” Boeckx adds.

The paper “Dosage analysis of the 7q11.23 Williams region identifies BAZ1B as a major human gene patterning the modern human face and underlying self-domestication” has been published in the journal Science Advances.

Computers can now read handwriting with 98% accuracy

New research in Tunisia is teaching computers how to read your handwriting.

Image via Pixabay.

Researchers at the University of Sfax in Tunisia have developed a new method for computers to recognize handwritten characters and symbols in online scripts. The technique has already achieved ‘remarkable performance’ on texts written in the Latin and Arabic alphabets.

iRead

“Our paper handles the problem of online handwritten script recognition based on an extraction features system and deep approach system for sequence classification,” the researchers wrote in their paper. “We used an existent method combined with new classifiers in order to attain a flexible system.”

Handwriting recognition systems are, unsurprisingly, computer tools designed to recognize handwritten characters and symbols in a way similar to how our brains do. They’re similar in form and function to the neural networks we’ve designed for image classification, face recognition, and natural language processing (NLP).

As humans, we innately begin developing the ability to understand different types of handwriting in our youth. This ability revolves around the identification and understanding of specific characters, both individually and when grouped together, the team explains. Several attempts have been made to replicate this ability in a computer over the last decade in a bid to enable more advanced and automatic analyses of handwritten texts.

The new paper presents two systems based on deep neural networks: an online handwriting segmentation and recognition system that uses a long short-term memory network (OnHSR-LSTM) and an online handwriting recognition system composed of a convolutional long short-term memory network (OnHR-covLSTM).

The first is based on the theory that our brains work to transform language from the graphical marks on a piece of paper into symbolic representations. The OnHSR-LSTM works by detecting common properties of symbols or characters and then arranging them according to specific perceptual laws — based on proximity, similarity, and so on. Essentially, it breaks the script down into a series of strokes, which are then turned into code, and that code is what the program actually ‘reads’.

“Finally, [the model] attempts to build a representation of the handwritten form based on the assumption that the perception of form is the identification of basic features that are arranged until we identify an object,” the researchers explained in their paper.

“Therefore, the representation of handwriting is a combination of primitive strokes. Handwriting is a sequence of basic codes that are grouped together to define a character or a shape.”
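To make that ‘sequence of basic codes’ idea more concrete, here is a minimal PyTorch sketch of the kind of recurrent classifier this describes: an LSTM reads a symbol’s sequence of stroke-feature vectors and outputs a score for each possible character. The feature count, layer sizes, and number of classes are illustrative assumptions, not details of the authors’ actual OnHSR-LSTM.

```python
# Minimal sketch (not the authors' architecture): an LSTM that classifies a
# handwritten symbol from its sequence of stroke-feature vectors.
import torch
import torch.nn as nn

class StrokeSequenceClassifier(nn.Module):
    def __init__(self, n_features=8, hidden=128, n_classes=62):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, strokes):               # strokes: (batch, time, n_features)
        _, (h_n, _) = self.lstm(strokes)      # h_n holds the final hidden state per sequence
        return self.head(h_n[-1])             # one score per character class

model = StrokeSequenceClassifier()
dummy = torch.randn(4, 50, 8)                 # 4 symbols, 50 time steps, 8 stroke features each
print(model(dummy).shape)                     # torch.Size([4, 62])
```

The final hidden state acts as the network’s summary of the whole stroke sequence — the ‘representation of the handwritten form’ the quote above refers to.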

The second system, the convolutional long short-term memory network, is trained to predict both characters and words based on what it reads. It is particularly well suited to processing and classifying long sequences of characters and symbols.
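As a rough sketch of what pairing convolution with recurrence looks like for longer sequences, here is one common pattern in PyTorch. PyTorch ships no convolutional LSTM cell, so this stand-in uses a 1D convolution over time followed by an LSTM, and every size below (channels, hidden units, vocabulary) is an assumption rather than a detail from the paper.

```python
# Illustrative convolution-plus-recurrence pairing, not the paper's OnHR-covLSTM.
import torch
import torch.nn as nn

class ConvRecurrentRecognizer(nn.Module):
    def __init__(self, n_features=8, channels=64, hidden=128, vocab=1000):
        super().__init__()
        self.conv = nn.Conv1d(n_features, channels, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)              # scores over a character/word vocabulary

    def forward(self, x):                                 # x: (batch, time, n_features)
        feats = torch.relu(self.conv(x.transpose(1, 2)))  # convolve along the time axis
        out, _ = self.lstm(feats.transpose(1, 2))         # recurrent pass over the conv features
        return self.head(out)                             # one prediction per time step

model = ConvRecurrentRecognizer()
print(model(torch.randn(2, 200, 8)).shape)                # torch.Size([2, 200, 1000])
```

Here the convolution picks out local stroke patterns, while the LSTM strings them together across the sequence.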

Both neural networks were trained and then evaluated using five different databases of handwritten scripts in the Arabic and Latin alphabets. Both achieved recognition rates of over 98%, which the team describes as ‘remarkable’, and both performed comparably to human subjects at the task.

“We now plan to build on and test our proposed recognition systems on a large-scale database and other scripts,” the researchers wrote.

The paper “Neural architecture based on fuzzy perceptual representation for online multilingual handwriting recognition” has been published on the preprint server arXiv.


AI developed to tackle physics problems is really good at summarizing research papers

New research from MIT and elsewhere has produced an AI that can read scientific papers and generate a plain-English summary of just one or two sentences.

Scientific citation.

Image credits Mike Thelwall, Stefanie Haustein, Vincent Larivière, Cassidy R. Sugimoto (paper). Finn Årup Nielsen (screenshot).

A big part of our job here at ZME Science is to trawl through scientific journals for papers that look particularly interesting or impactful. They’re written in dense, technical jargon, which we then take and present in a (we hope) pleasant and easy to follow way that anybody can understand, regardless of their educational background.

MIT researchers are either looking to make my job easier or to put me out of one — I’m not exactly sure which yet. A novel neural network they developed, together with other computer researchers, journalists, and editors, can read scientific papers and render a short, plain-English summary.

Autoread

“We have been doing various kinds of work in AI for a few years now,” says Marin Soljačić, a professor of physics at MIT and co-author of the research.

“We use AI to help with our research, basically to do physics better. And as we got to be more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

It’s far from perfect at what it does right now — in fact, the neural network’s abilities are quite limited. Even so, it could prove to be a powerful resource in helping editors, writers, and scientists scan a large number of studies for a quick idea of their contents. The system could also find applications in a variety of other areas besides language processing one day, including machine translation and speech recognition.

The team didn’t set out to create the AI for the purpose described in this paper. In fact, they were working to create new AI-based approaches to tackle physics problems. During development, however, the team realized the approach they were working on could be used to solve other computational problems — such as language processing — much more efficiently than existing neural network systems.

“We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm,” Soljačić adds.

Neural networks generally attempt to mimic the way our brains learn new information. The computer is fed many different examples of a particular object or concept to help it ‘learn’ what the key, underlying patterns of that element are. This makes neural networks our best digital tool for pattern recognition — identifying objects in photographs, for example. However, they don’t do nearly as well when it comes to correlating information across a lengthy piece of data, such as a research paper.

Various tricks have been used to improve their capability in that latter area, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU). All in all, however, classical neural networks are still ill-equipped for any sort of real natural-language processing, the authors say.

So, what they did was base their neural network on mathematical vectors instead of on matrix multiplication (the classical neural-network approach). This is deep math territory but, essentially, the system represents each word in the text by a vector — an arrow with a certain length and direction — created and altered in a multidimensional space. Encyclopaedia Britannica defines vectors, in mathematics, as “quantities that have both magnitude and direction but not position,” listing velocity and acceleration as examples.

As words are read in, the network uses each new vector to modify a starting memory vector. The final vector, or set of vectors, is then translated back into a string of words. The name the team gave this approach is, thankfully, much easier to wrap your head around: RUM (rotational unit of memory).
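For readers who want the geometric intuition, here is a small numpy sketch of that rotation idea: the running memory vector gets nudged by a rotation in the plane it spans with each incoming word vector. The real RUM learns its rotations from data; the dimensions and the fixed rotation fraction below are assumptions made purely for illustration.

```python
# Simplified illustration of rotation-as-memory-update (not the actual RUM math).
import numpy as np

def rotation_matrix(a, b, frac=1.0):
    """Rotation in the plane spanned by a and b, through `frac` of the angle between them."""
    u = a / np.linalg.norm(a)
    v = b - (u @ b) * u                        # component of b orthogonal to a
    v_norm = np.linalg.norm(v)
    if v_norm < 1e-9:                          # vectors already aligned: nothing to rotate
        return np.eye(len(a))
    v = v / v_norm
    theta = frac * np.arccos(np.clip(u @ (b / np.linalg.norm(b)), -1.0, 1.0))
    plane = np.stack([u, v], axis=1)           # orthonormal basis of the rotation plane
    rot2d = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    return np.eye(len(a)) - plane @ plane.T + plane @ rot2d @ plane.T

rng = np.random.default_rng(0)
memory = rng.normal(size=64)                   # running memory vector
for word_vec in rng.normal(size=(10, 64)):     # ten incoming word vectors
    # rotate the memory a quarter of the way toward each new word,
    # so earlier inputs still shape where it points
    memory = rotation_matrix(memory, word_vec, frac=0.25) @ memory
print(np.linalg.norm(memory))                  # rotations preserve the vector's length
```

Because rotations are orthogonal transformations, the memory vector’s length never blows up or shrinks away — part of what makes rotation-based units good at remembering across long sequences.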

“RUM helps neural networks to do two things very well,” says Preslav Nakov, a senior scientist at the Qatar Computing Research Institute and paper co-author. “It helps them to remember better, and it enables them to recall information more accurately.”

RUM was developed to help physicists study phenomena such as the behavior of light in complex engineered materials, the team explains. However, the team soon realized that “one of the places where […] this approach could be useful would be natural language processing.”

Artificial summaries

Soljačić says he recalls a conversation with Mićo Tatalović, a former Knight Science Journalism fellow at MIT, a former editor at New Scientist magazine, and co-author of the study, who said that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was, at the time, exploring AI in science journalism as his Knight fellowship project.

“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”

As a proof of concept, the team ran the same research paper through a conventional (LSTM-based) neural network and through their RUM-based system, asking each to produce a short summary. The end results were dramatically different. RUM can read through an entire research paper, not just its abstract, and summarize its contents. The team even ran the present study through RUM (they were probably just showing off at this point).

Here’s the summary produced by the LSTM system:

‘”Baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat.”

Here’s the one the RUM system produced:

“Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.”

Here’s the neural network’s summary of the study we’re discussing:

“Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.”

You guys like my coverage better, though, right? Right…?

The paper “Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications” has been published in the journal Transactions of the Association for Computational Linguistics.


Paying attention shuts down ‘brain noise’ that isn’t related to what we’re looking for

New research sheds light into what our brains do as we try to pay attention to something.

Cat paying attention.

It seems that the price for paying attention is missing the big picture.
Image via Pixabay.

Attention has long been believed to function by turning down brain ‘noise’ — in other words, it amplifies the activity of some neurons while suppressing others. A new study confirms this view by showing how too much background brain noise can interrupt focused attention and cause the brain to struggle to perceive objects.

Divert energy to attention circuits!

“This study informs us about how information is encoded in the electrical circuits in the brain,” says Salk Professor John Reynolds, senior author of the paper. “When a stimulus appears before us, this activates a population of neurons that are selective for that stimulus.”

“Layered on top of that stimulus-evoked response are large, low-frequency fluctuations in neural activity.”

It’s laughably easy to miss something you’re not looking for. You’re probably aware of the gorilla experiment / selective attention test (if not, here it is). In short, when most people were asked to pay attention to two groups of people — one in black clothes, the other in white clothes — passing a ball among them and count the number of times this ball passed from one group to the other, they became oblivious to a man dressed as a gorilla walking among the players.

More than just being funny, the experiment shows how our brains can ignore visual information when it isn’t relevant to a certain task we’re trying to perform. However, this process governing our perception and ability to pay attention to our surroundings is poorly understood. In an effort to patch this blind spot in our knowledge, the team set out to find whether background neural activity can interrupt focused attention, and cause our brains to struggle with perceiving certain objects.

Previous work from Reynolds’ lab found that when attention is directed at a certain stimulus, low-frequency neural fluctuations (brain noise) are suppressed. The findings also suggested that failing to filter out these fluctuations should impair our perception and ability to pay attention.

To find out whether this is the case, the team used optogenetics — a technique that can activate or inactivate neurons by shining lasers onto light-activated proteins. They directed low-frequency laser pulses at the visual brain regions of animals to replicate brain noise, then measured how this affected the animals’ ability to detect a small change in the orientation of objects shown on a computer screen.

As predicted, the induced brain noise impaired the animals’ perception compared to controls. The team then repeated the experiment using a different laser-burst pattern to induce high-frequency fluctuations (a frequency that attention, as far as we know, doesn’t suppress). Consistent with their initial theory, this had no effect on the animals’ perception.

“This is the first time this theoretical idea that increased background noise can hurt perception has been tested,” says first and corresponding author Anirvan Nandy, assistant professor at the Yale University School of Medicine and former Salk researcher. “We’ve confirmed that attention does operate largely by suppressing this coordinated neuron firing activity.”

“This work opens a window into the neural code, and will become part of our understanding of the neural mechanisms underlying perception. A deeper understanding of the neural language of perception will be critical in building visual prosthetics,” Reynolds adds.

The team plans to examine how different types of cells in the visual networks of the brain take part in this process. Hopefully, this will give us a better idea of the neurological processes that govern attention and perception.

The paper “Optogenetically induced low-frequency correlations impair perception” has been published in the journal eLife.

We can’t grow new neurons in adulthood after all, new study says

Previous research has suggested neurogenesis — the birth of new neurons — was able to take place in the adult human brain, but a new controversial study published in the journal Nature seems to challenge this idea.

a. Toluidine-blue-counterstained semi-thin sections of the human Granule Cell Layer (GCL) from fetal to adult ages. Note that a discrete cellular layer does not form next to the GCL and the small dark cells characteristic of neural precursors are not present.

Scientists have been struggling to settle the matter of human neurogenesis for quite some time. The first study to challenge the old theory that humans cannot grow new neurons after birth was published in 1998, but scientists had been questioning this entrenched idea since the 1960s, when emerging techniques for labeling dividing cells revealed the birth of new neurons in rats. Another neurogenesis study was published in 2013, reinforcing the validity of the 1998 results.

Arturo Alvarez-Buylla, a neuroscientist at the University of California, San Francisco, and his team conducted a study to test the neurogenesis theory using immunohistochemistry — a process that applies various fluorescent antibodies to brain samples. The antibodies signal whether young neurons and dividing cells are present. The researchers were shocked by the findings.

“We went into the hippocampus expecting to see many young neurons,” says senior author Arturo Alvarez-Buylla. “We were surprised when we couldn’t find them.”

In the new study, scientists analyzed brain samples from 59 patients of various ages, ranging from fetal stages to 77 years old. The brain tissue came either from people who had died or from pieces extracted during brain surgery performed for unrelated reasons. The scientists found new neurons forming in prenatal and neonatal samples, but no solid evidence of neurogenesis in humans older than 13. The research also indicates that the rate of neurogenesis drops 23-fold between the ages of one and seven.

But some other uninvolved scientists say that the study left much room for error. The way the brain slices were handled, the deceased patients’ psychiatric history, or whether they had brain inflammation could all explain why the researchers failed to confirm earlier findings.

The 1998 study was performed on brains of dead cancer patients who had received injections of a chemical called bromodeoxyuridine while they were still alive. The imaging molecule — which was used as a cancer treatment — became integrated into the DNA of actively dividing cells. Fred Gage, a neuroscientist involved in the 1998 study, says that this new paper does not really measure neurogenesis.

“Neurogenesis is a process, not an event. They just took dead tissue and looked at it at that moment in time,” he adds.

Gage also thinks that the authors used overly restrictive criteria for counting neural progenitor cells, thus lowering the chances of seeing them in adult humans.

But some neuroscientists agree with the findings. “I feel vindicated,” Pasko Rakic, a longtime outspoken skeptic of neurogenesis in human adults, told Scientific American. He believes the lack of new neurons in adult primates and humans helps preserve complex neural circuits: if new neurons were constantly born throughout adulthood, they could interfere with precious preexisting circuits, causing chaos in the central nervous system.

“This paper not only shows very convincing evidence of a lack of neurogenesis in the adult human hippocampus but also shows that some of the evidence presented by other studies was not conclusive,” he says.

Dividing neural progenitors in the granule cell layer (GCL) are rare at 17 gestational weeks (orthogonal views, inset) but were abundant in the ganglionic eminence at the same age (data not shown). Dividing neural progenitors were absent in the GCL from 22 gestational weeks to 55 years.

Steven Goldman, a neurologist at the University of Rochester Medical Center and the University of Copenhagen, said, “It’s by far the best database that has ever been put together on cell turnover in the adult human hippocampus. The jury is still out about whether there are any new neurons being produced.” He added that if there is neurogenesis, “it’s just not at the levels that have been presumed by many.”

The debate still goes on. No one really seems to know the answer yet, but I think that’s a positive — the controversy will generate a new wave of research on the subject.

Artificial synapse brings us one step closer to brain-like computers

Researchers have created a working artificial, organic synapse. The new device could allow computers to mimic some of the brain’s inner workings and improve their capacity to learn. Furthermore, a machine based on these synapses would be much more energy-efficient than modern computers.

It may not look like much, but this device could revolutionize our computers forever.
Image credits Stanford University.

As far as processors go, the human brain is hands down the best we’ve ever seen. Its sheer processing power dwarfs anything humans have put together, for a fraction of the energy consumption, and it does it all with elegance. If you’ll allow me a car analogy, the human brain is a Formula 1 race car that somehow uses almost no fuel, while our best supercomputer… well, it’s an old, beat-up Moskvich.

And it misfires.
Image credits Sludge G / Flickr.

So finding a way to emulate the brain’s hardware has understandably been high on computer engineers’ wish list — a wish that may be granted sooner than they hoped. Researchers at Stanford University and Sandia National Laboratories have made a breakthrough that could allow computers to mimic one element of the brain: the synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper.

“It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

Copycat

The artificial synapse is made up of two thin, flexible films holding three embedded terminals connected by salty water. It works similarly to a transistor, with one of the terminals dictating how much electricity can flow between the other two. This behavior allowed the team to mimic the processes that go on inside the brain — as neurons zap information to one another, they create ‘pathways’ of sorts through which electrical impulses can travel faster, and every successful impulse requires less energy to pass through the synapse. For the most part, we believe that these pathways allow synapses to store information while they process it, for comparatively little energy expenditure.

Because the artificial synapse mimics the way synapses in the brain respond to signals, it removes the need to store information separately after processing — just as in our brains, the processing itself creates the memory. These two tasks are fulfilled simultaneously, for less energy than other approaches to brain-like computing. The synapse could therefore enable a much more energy-efficient class of computers, addressing a problem that’s becoming more and more pressing in today’s world.

Modern processors need huge fans because they use a lot of energy, giving off a lot of heat.

One application for the team’s synapses could be more brain-like computers that are especially well suited to tasks that involve visual or auditory signals — voice-controlled interfaces or driverless cars, for example. Previous neural networks and artificially intelligent algorithms used for these tasks are impressive but come nowhere near the processing power our brains hold in their tiny synapses. They also use a lot more energy.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper.

“Instead of simulating a neural network, our work is trying to make a neural network.”

The team will program these artificial synapses the same way our brains learn — by progressively reinforcing the pathways through repeated charge and discharge. They found that this method lets them predict the voltage required to bring a synapse to a specific electrical state, and to hold it there, with only 1% uncertainty. And unlike a traditional computer, where data must be saved to a hard drive or be lost when the machine shuts down, such a neural network can simply pick up where it left off, without the need for separate data banks.
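As a toy picture of that charge-and-discharge programming loop, here is a short Python sketch: voltage pulses are applied until a simulated conductance lands within 1% of the target state. The device model is a made-up stand-in for illustration, not the authors’ electrochemistry.

```python
# Toy programming loop: pulse the (simulated) device until its conductance
# sits within 1% of the target state. The device model here is invented.
def apply_pulse(conductance, voltage, step=0.02):
    """Crude stand-in physics: each pulse nudges conductance in the direction of the voltage."""
    return conductance + (step if voltage > 0 else -step)

def program_state(target, conductance=0.5, tolerance=0.01):
    pulses = 0
    while abs(conductance - target) / target > tolerance:
        voltage = 1.0 if conductance < target else -1.0   # potentiate or depress
        conductance = apply_pulse(conductance, voltage)
        pulses += 1
    return conductance, pulses

state, n = program_state(target=0.8)
print(f"reached {state:.3f} in {n} pulses")               # e.g. "reached 0.800 in 15 pulses"
```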

One of a kind

Right now, the team has produced only one such synapse. Sandia researchers took some 15,000 measurements during various tests of the device in order to simulate the activity of a whole array of them. This simulated network was able to identify handwritten digits (0 through 9) with 93% to 97% accuracy — which, if you’ve ever used a handwriting-recognition feature, you’ll recognize as an impressive success rate.
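For a sense of what that benchmark involves, here is a hedged software equivalent built on scikit-learn’s small bundled digits set. A small multilayer perceptron stands in for the simulated synapse array, so the accuracy it reaches is illustrative rather than the paper’s figure.

```python
# Software stand-in for the digit-recognition benchmark: a small neural network
# classifying 8x8 images of the digits 0-9. Not the Sandia synapse-array simulation.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)                         # 1,797 flattened 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.1%}")     # typically lands in the mid-90s
```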

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper.

“We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

One of the reasons these synapses perform so well is the number of states they can hold. Digital transistors (such as the ones in your computer or smartphone) are binary — they can be in either state 1 or state 0. The team was able to successfully program 500 distinct states into the synapse, and the more states available, the more powerful a neural-network computational model becomes; 500 distinguishable states correspond to roughly nine bits of information per device, versus one bit for a binary transistor. Switching from one state to another required roughly a tenth of the energy a modern computing system drains moving data from the processor to memory storage.

Still, this means the artificial synapse is currently about 10,000 times less energy-efficient than its biological counterpart. The team hopes that trials in working devices will let them tweak and improve the synapse enough to bring this energy requirement down.

Another exciting possibility is using these synapses in vivo. The devices are made largely of organic, hydrogen- and carbon-based materials and should be fully compatible with the brain’s chemistry. They’re soft and flexible, and they operate at the same voltages as human neurons. All this raises the possibility of using the artificial synapse in concert with live neurons in improved brain-machine interfaces.

Before considering any biological applications, however, the team wants to test a full array of artificial synapses.

The full paper “A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing” has been published in the journal Nature Materials.