Tag Archives: learning

Can courses be held inside Minecraft? Two researchers say “yes, and well”

Researchers at Concordia’s Milieux Institute for Arts, Culture, and Technology want to help teachers and students have a better, more productive experience with remote learning. Their solution? Minecraft.

The highest-selling video game of all time could, unexpectedly, point the way towards more engaging remote learning. Despite its massive popularity, Minecraft is, in the gaming world, regarded more as a “kids'” game; it has blocky graphics, it lacks epic, tense moments, and there’s no competitive scene for it.

But, according to Darren Wershler, professor of English, and Bart Simon, associate professor of sociology and director of Concordia’s Milieux Institute for Arts, Culture, and Technology, the game’s simple nature together with its malleability is exactly what made it ideal for the research.

Unusual courses

“One historically prevalent problem that game-based learning researchers have highlighted is the risk of students simply learning to play the game itself rather than learning the subject matter that the instructor is pairing with the game,” the study explains, “[or that] a game might over-emphasize the subject matter and impose stricter rules, which in turn makes self-actualizing student-driven learning impossible.”

“In this article we present a game-based teaching method where educators can address these issues by collapsing the real and the virtual into one another: the allegorical build. The allegorical build occurs when students use the relationships they have developed to in-game procedures in order to think about a range of other topics outside the game, as defined by the instructor.”

“The course is not a video game studies course, and it is not a gamified version of a course on modernity,” explains Wershler, a Tier 2 Concordia University Research Chair in Media and Contemporary Literature. “It’s this other thing that sits in an uncomfortable middle and brushes up against both. The learning comes out of trying to think about those two things simultaneously.”

Minecraft is readily moddable, the duo explains — modifiable through third-party and user-generated add-ons — so it can be adapted to accommodate a wide range of scenarios, including teaching. The authors of the study hope that educators can draw on the massive sandbox that this game represents to play, experiment with, and teach their pupils and students.

The study itself describes how the authors used Minecraft to teach a class on the history and culture of modernity. This course was carried out entirely in the game. Instructions, communications, and course work were handled through the voice and text chat app Discord (which we also recommend as excellent for remote work). The two researchers used this course to observe if and how students used the game to achieve their academic goals, and see if there’s any merit to the idea.

They report that the students were quick to adapt to this unusual classroom, and didn’t need much time to get to grips with the game. Some students took on a mentoring role among their peers, instructing their colleagues who were unfamiliar with Minecraft on how to find and mine resources, build structures, and survive the game’s main bad guys — skeletons, zombies, and exploding monsters that come out at night. Such a situation allowed students, even those who would not consider themselves to be natural-born leaders, to guide their peers using their knowledge of the game, the researchers report. This is a valuable skill to learn, one which traditional classrooms and courses do not tend to cultivate.

Eventually, the students decided on group projects which would be created in the game. Each project was related to an issue of modernity that was previously addressed in Wershler’s half-hour podcast lectures and readings. One group recreated Moshe Safdie’s futuristic Habitat 67, while another built an entire working city populated by Minecraft villagers modeled after the Nakagin Capsule Tower Building in Tokyo.

The whole course was set in the (more difficult) Survival mode rather than the Creative mode that most educators favor. This meant that the students had to contend with and were often killed by the game’s antagonists. The server used several fan-made mods to enhance the game in various ways, which came at the cost of increased server instability.

“It was important that the game remained a game and that while the students were working on their projects, there were all these horrible things coming out of the wilderness to kill them,” Wershler says. “This makes them think about the fact that what they are doing requires effort and that the possibility of failure is very real.”

All in all, the authors say they were surprised at how well the students adapted to the game-based environment and the course, which was co-designed along with a dozen other interdisciplinary researchers at Concordia. Wershler has been using Minecraft in his course since 2014, and believes the game — or a similar one — can serve as a bedrock for a new style of teaching.

“Educators at the high school, college and university levels can use these principles and tools to teach a whole variety of subjects within the game,” he says. “There is no reason why we could not do this with architecture, design, engineering, computer science as well as history, cultural studies or sociology. There are countless ways to structure this to make it work.”

With so many areas of our lives transitioning online, we’re bound to see changes in the way we merge our activities with the digital sphere. Some of them might sound quite dubious at first, and holding courses inside a video game definitely fits that bill. But this research shows that we should not dismiss ideas out of hand and that even the most improbable-sounding approaches can bring value to our lives. We just have to be willing to give them a go.

The paper “The Allegorical Build. Minecraft and Allegorical Play in Undergraduate Teaching” has been published in the journal Gamevironments.

Taking short breaks while practicing lets our brains review what we’re doing — and get better at it

When cultivating a new skill, taking a short break can go a long way. This gives our brains time to replay what we just practiced, helping to cement our skills.

Image via Pixabay.

A study from the National Institutes of Health has been looking into best practices when learning a new skill such as playing a new song on the piano. The research involved monitoring participants’ brain activity while practicing, and revealed that taking short breaks during this time is a great way to help speed the process along.

Although taking time off seems counterproductive when practicing, the authors explain that our brains rapidly and repeatedly go through the activity we’re learning during these breaks, reviewing it faster and faster. The more time it gets to do this, the better a participant’s performance during subsequent practice sessions, the team adds, which suggests that these breaks actually helped strengthen their memory of the task.

Festina lente

“Our results support the idea that wakeful rest plays just as important a role as practice in learning a new skill. It appears to be the period when our brains compress and consolidate memories of what we just practiced,” said Leonardo G. Cohen, M.D., senior investigator at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS) and the senior author of the study published in Cell Reports. 

“Understanding this role of neural replay may not only help shape how we learn new skills but also how we help patients recover skills lost after neurological injury like stroke.”

The study was carried out at the NIH’s Clinical Center in Bethesda, Maryland, using a technique known as magnetoencephalography. This allowed the team to record the brain activity of 33 healthy, right-handed volunteers as they learned to type a five-digit test code (41234) with their left hands. They were seated in a chair and wore a long, cone-shaped scanner cap during the experiment. Each participant was asked to type this code out as many times as possible for 10 seconds and then take a 10-second break, a cycle they repeated 35 times.

Participants improved dramatically over the first trials, with their typing speed leveling off around the 11th cycle. Previous research done at the NIH shows that the largest part of this improvement happens during the short rest periods, not when the subjects are actually typing. More significantly, the improvements seen during these trials were greater than those seen after a night’s sleep (when memories are strengthened naturally).
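
For readers who want to see how this kind of bookkeeping works, here is a minimal sketch of how "online" and "offline" gains could be tallied from per-cycle typing speeds. The numbers are invented for illustration and the analysis is a simplification of what the NIH team actually did; the point is only that offline gains are measured across the rest breaks, while online gains are measured within each practice bout.

```python
import numpy as np

# Illustrative numbers only: typing speed (correct sequences per second) at the
# start and end of each 10-second practice cycle. A real analysis would work
# from keypress timestamps; this sketch just shows how the two kinds of gains
# are tallied.
rng = np.random.default_rng(0)
n_cycles = 35
speed_start = 1.0 + np.cumsum(rng.uniform(0.05, 0.25, n_cycles))  # speed at cycle onset
speed_end = speed_start + rng.normal(0.0, 0.05, n_cycles)         # speed at cycle end

# "Micro-online" gain: improvement within a single practice cycle.
micro_online = speed_end - speed_start

# "Micro-offline" gain: improvement across the rest break, from the end of one
# cycle to the start of the next.
micro_offline = speed_start[1:] - speed_end[:-1]

print(f"total improvement:    {speed_end[-1] - speed_start[0]:.2f}")
print(f"summed online gains:  {micro_online.sum():.2f}")
print(f"summed offline gains: {micro_offline.sum():.2f}")
```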

As the participants improved at the task, the authors also saw a decrease in the size of a particular type of brain wave, called beta rhythms.

“We wanted to explore the mechanisms behind memory strengthening seen during wakeful rest. Several forms of memory appear to rely on the replaying of neural activity, so we decided to test this idea out for procedural skill learning,” said Ethan R. Buch, Ph.D., a staff scientist on Dr. Cohen’s team and leader of the study.

So the team developed software that could interpret the brain wave patterns recorded while each participant typed in their test code. This showed that a faster version of these waves — around 20 times faster — was replaying in the participants’ brains during the rest periods. Over the first eleven cycles, these ‘compressed’ versions of the events were replayed around 25 times per rest period. Beyond that point, they became two to three times less frequent during the final cycles compared to the first eleven.

Participants whose brains replayed the typing the most showed the greatest improvements in performance following each cycle, the authors note. This strongly suggests that the replaying has a direct impact on the efficiency of our practice sessions, likely through memory strengthening.

“During the early part of the learning curve we saw that wakeful rest replay was compressed in time, frequent, and a good predictor of variability in learning a new skill across individuals,” said Dr. Buch. “This suggests that during wakeful rest the brain binds together the memories required to learn a new skill.”

As for where in the brain this process takes place, the paper reports that it ‘often’ took place in sensorimotor regions of the brain — i.e. regions involved in movement and sensory processing. However, other areas of the brain were involved as well, most notably the hippocampus and entorhinal cortex.

“We were a bit surprised by these last results. Traditionally, it was thought that the hippocampus and entorhinal cortex may not play such a substantive role in procedural memory. In contrast, our results suggest that these regions are rapidly chattering with the sensorimotor cortex when learning these types of skills,” said Dr. Cohen.

The paper “Consolidation of human skill linked to waking hippocampo-neocortical replay” has been published in the journal Cell Reports.

Researchers teach AI to design, say it did ‘quite good’ but won’t steal your job (yet)

A US-based research team has trained artificial intelligence (AI) in design, with pretty good results.

A roof supported by a wooden truss framework.
Image credits Achim Scholty.

Although we don’t generally think of AIs as good problem-solvers, a new study suggests they can learn how to be. The paper describes the process through which a framework of deep neural networks learned human creative processes and strategies and how to apply them to create new designs.

Just hit ‘design’

“We were trying to have the [AIs] create designs similar to how humans do it, imitating the process they use: how they look at the design, how they take the next action, and then create a new design, step by step,” says Ayush Raina, a Ph.D. candidate in mechanical engineering at Carnegie Mellon and a co-author of the study.

Design isn’t an exact science. While there are definite no-no’s and rules of thumb that lead to OK designs, good designs require creativity and exploratory decision-making. Humans excel at these skills.

Software as we know it today works wonders within a clearly defined set of rules, with clear inputs and known desired outcomes. That’s very handy when you need to crunch huge amounts of data, or to make split-second decisions to keep a jet stable in flight, for example. However, it’s an appalling skill set for someone — or something — trying their hand at design.

The team wanted to see if machines can learn the skills that make humans good designers and then apply them. For the study, they created an AI framework from several deep neural networks and fed it data pertaining to a human going about the process of design.

The study focused on trusses, which are complex but relatively common design challenges for engineers. Trusses are load-bearing structural elements composed of rods and beams; bridges and large buildings make good use of trusses, for example. Simple in theory, trusses are actually incredibly complex elements whose final shapes are a product of their function, material make-up, or other desired traits (such as flexibility or rigidity, resistance to compression or tension, and so forth).

The framework itself was made up of several deep neural networks which worked together in a prediction-based process. It was shown five successive snapshots of the structures (the design modification sequence for a truss), and then asked to predict the next iteration of the design. The data was the same engineers use when approaching the problem — pixels on a screen — but the AI wasn’t privy to any further information or context (such as the truss’s intended use). The researchers emphasized visualization in the process because vision is an integral part of how humans perceive the world and go about solving problems.
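
The paper’s exact architecture isn’t reproduced here, but the general setup — a network that looks at a handful of successive design snapshots as raw pixels and predicts the next one — can be sketched in a few lines of PyTorch. Everything below (image size, layer sizes, the dummy data) is an illustrative assumption, not the authors’ model.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the authors' architecture): a small convolutional network
# takes five stacked 64x64 grayscale snapshots of a design-in-progress and
# predicts the pixels of the next snapshot.
class NextDesignPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, frames):            # frames: (batch, 5, 64, 64)
        return self.net(frames)           # next frame: (batch, 1, 64, 64)

model = NextDesignPredictor()
frames = torch.rand(8, 5, 64, 64)         # dummy batch of five-step design sequences
target = torch.rand(8, 1, 64, 64)         # the "next" human design step to imitate
loss = nn.functional.binary_cross_entropy(model(frames), target)
loss.backward()
print(f"imitation loss on the dummy batch: {loss.item():.3f}")
```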

In essence, the researchers had their neural networks watch human designers throughout the whole design process, and then try to emulate them. Overall, the team reports, the way their AI approached the design process was similar to that employed by humans. Further testing on similar design problems showed that on average, the AI can perform just as well as, if not better than, humans. However, the system still lacks many of the advantages a human user would have when problem-solving — namely, it worked without a specific goal in mind (a particular weight or shape, for example), and didn’t receive feedback on how successful it was on its task. In other words, while the program could design a good truss, it didn’t understand what it was doing, what the end goal of the process was, or how good it was at it. So while it’s good at designing, it’s still a lousy designer.

All things considered, however, the AI was “quite good” at the task, says co-author Jonathan Cagan, professor of mechanical engineering and interim dean of Carnegie Mellon University’s College of Engineering.

“The AI is not just mimicking or regurgitating solutions that already exist,” Professor Cagan explains. “It’s learning how people solve a specific type of problem and creating new design solutions from scratch.”

“It’s tempting to think that this AI will replace engineers, but that’s simply not true,” said Chris McComb, an assistant professor of engineering design at the Pennsylvania State University and paper co-author.

“Instead, it can fundamentally change how engineers work. If we can offload boring, time-consuming tasks to an AI, like we did in the work, then we free engineers up to think big and solve problems creatively.”

The paper “Learning to Design From Humans: Imitating Human Designers Through Deep Learning” has been published in the Journal of Mechanical Design.

We learn best when we fail around 15% of the time

If it’s too hard, or too easy, you probably won’t learn very well, according to a new study.

Image credits Hans Braxmeier.

Learning is a funny process. We’d all love to sit down, study something, and ace it in the first five minutes with minimal effort — but that’s not how things go. Empirical observations in schools and previous research into the subject found that people learn best when challenged by something just outside of their immediate grasp. In other words, if a subject is way above our heads, we tend to give up or fail so spectacularly that we don’t learn anything; nor will we invest time into studying something we deem too simple.

However, the ideal ‘difficulty level’ for learning has remained a matter of some debate. According to the new study, we learn best when we ‘fail’ around 15% of the time — in other words, when we get it right about 85% of the time.

The sweet spot

“These ideas that were out there in the education field — that there is this ‘zone of proximal difficulty,’ in which you ought to be maximizing your learning — we’ve put that on a mathematical footing,” said UArizona assistant professor of psychology and cognitive science Robert Wilson, lead author of the study.

The team, which also included members from Brown University, the University of California, Los Angeles, and Princeton University, conducted a series of machine-learning experiments for the study. This involved teaching computers simple tasks (such as classifying different patterns into one of two categories, or classifying handwritten digits as odd or even). The computers learned best, i.e. improved the fastest, when the difficulty of the task was such that they responded with 85% accuracy. A review of previous research on animal learning suggests that the ‘85% rule’ held true in these studies as well.

“If you have an error rate of 15% or accuracy of 85%, you are always maximizing your rate of learning in these two-choice tasks,” Wilson said.
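
As a rough, hands-on illustration of the idea (not the authors’ model), the sketch below trains a simple online logistic learner on a two-choice task while a staircase nudges the task difficulty so the learner’s running accuracy hovers near a chosen target. The paper derives its optimum analytically — an error rate of roughly 15.9% for this class of learner — whereas this toy script only shows the mechanics of holding accuracy at a target during training; it is not guaranteed to reproduce the quantitative result.

```python
import numpy as np

# Toy setup, not the paper's model: an online logistic learner does a two-choice
# task while a simple staircase adjusts the stimulus strength so the learner's
# running accuracy stays near a chosen target.
rng = np.random.default_rng(1)

def run(target_acc, n_trials=3000, lr=0.05):
    w_true = np.array([1.0, -1.0])              # the true decision rule
    w = np.zeros(2)                             # the learner's current weights
    difficulty = 1.0                            # stimulus strength (higher = easier)
    history = []
    for _ in range(n_trials):
        y = rng.choice([-1, 1])                 # correct answer for this trial
        x = y * difficulty * w_true + rng.normal(0, 1, 2)   # noisy stimulus
        p = 1.0 / (1.0 + np.exp(-w @ x))        # learner's belief that y == +1
        history.append((1 if p > 0.5 else -1) == y)
        w += lr * ((y == 1) - p) * x            # online logistic-regression update
        # staircase: harder when running accuracy is above target, easier below
        acc = np.mean(history[-50:])
        difficulty = np.clip(difficulty * (0.99 if acc > target_acc else 1.01), 0.05, 5.0)
    # cosine alignment between the learned rule and the true one
    return w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true) + 1e-9)

for target in (0.70, 0.85, 0.95):
    print(f"target accuracy {target:.0%}: alignment with true rule = {run(target):.3f}")
```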

This 85% rule most likely applies to perceptual learning, the gradual process by which we learn through experience and examples. An example of perceptual learning would be a doctor learning to tell fractured bones from fissured bones on X-ray scans.

“You get better at [the task] over time, and you need experience and you need examples to get better,” Wilson said. “I can imagine giving easy examples and giving difficult examples and giving intermediate examples. If I give really easy examples, you get 100% right all the time and there’s nothing left to learn. If I give really hard examples, you’ll be 50% correct and still not learning anything new, whereas if I give you something in between, you can be at this sweet spot where you are getting the most information from each particular example.”

Time for the pinch of salt, however. The team only worked with simple tasks involving crystal-clear right and wrong answers, but life tends to get more complicated than that. Another glaring limitation is that they worked with algorithms, not people. However, the team is confident that there is value in their findings, and believe that their ‘85%’ approach to learning could help improve our educational systems.

“If you are taking classes that are too easy and acing them all the time, then you probably aren’t getting as much out of a class as someone who’s struggling but managing to keep up,” he said. “The hope is we can expand this work and start to talk about more complicated forms of learning.”

The paper “The Eighty Five Percent Rule for optimal learning” has been published in the journal Nature Communications.

Computers can now read handwriting with 98% accuracy

New research in Tunisia is teaching computers how to read your handwriting.

Image via Pixabay.

Researchers at the University of Sfax in Tunisia have developed a new method for computers to recognize handwritten characters and symbols in online scripts. The technique has already achieved ‘remarkable performance’ on texts written in the Latin and Arabic alphabets.

iRead

“Our paper handles the problem of online handwritten script recognition based on an extraction features system and deep approach system for sequence classification,” the researchers wrote in their paper. “We used an existent method combined with new classifiers in order to attain a flexible system.”

Handwriting recognition systems are, unsurprisingly, computer tools designed to recognize characters and hand-written symbols in a similar way to our brains. They’re similar in form and function to the neural networks that we’ve designed for image classification, face recognition, and natural language processing (NLP).

As humans, we innately begin developing the ability to understand different types of handwriting in our youth. This ability revolves around the identification and understanding of specific characters, both individually and when grouped together, the team explains. Several attempts have been made to replicate this ability in a computer over the last decade in a bid to enable more advanced and automatic analyses of handwritten texts.

The new paper presents two systems based on deep neural networks: an online handwriting segmentation and recognition system that uses a long short-term memory network (OnHSR-LSTM) and an online handwriting recognition system composed of a convolutional long short-term memory network (OnHR-covLSTM).

The first is based on the theory that our own brains work to transform language from the graphical marks on a piece of paper into symbolic representations. The OnHSR-LSTM works by detecting common properties of symbols or characters and then arranging them according to specific perceptual laws, for instance, based on proximity, similarity, etc. Essentially, it breaks the script down into a series of strokes, which are then turned into codes — and these are what the program actually ‘reads’.

“Finally, [the model] attempts to build a representation of the handwritten form based on the assumption that the perception of form is the identification of basic features that are arranged until we identify an object,” the researchers explained in their paper.

“Therefore, the representation of handwriting is a combination of primitive strokes. Handwriting is a sequence of basic codes that are grouped together to define a character or a shape.”

The second system, the convolutional long short-term memory network, is trained to predict both characters and words based on what it read. It is particularly well-suited for processing and classification of long sequences of characters and symbols.
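The published systems are more elaborate than this, but the basic pattern — a pen-stroke sequence fed through a recurrent network whose final state drives a character classifier — can be sketched with a standard LSTM in PyTorch. The feature set (pen displacement plus pen-up/down) and the 62-class output are assumptions made for the example, not details from the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the paper's architecture): an online handwriting sample,
# represented as a sequence of per-point features (dx, dy, pen-down), is fed
# through an LSTM whose final hidden state drives a character classifier.
class StrokeLSTMClassifier(nn.Module):
    def __init__(self, n_features=3, hidden=128, n_classes=62):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, strokes):              # strokes: (batch, seq_len, n_features)
        _, (h_n, _) = self.lstm(strokes)     # h_n: (1, batch, hidden)
        return self.head(h_n[-1])            # logits: (batch, n_classes)

model = StrokeLSTMClassifier()
batch = torch.randn(4, 120, 3)               # four dummy samples, 120 pen points each
print(model(batch).shape)                    # torch.Size([4, 62])
```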

Both neural networks were trained and then evaluated on five different databases of handwritten scripts in the Arabic and Latin alphabets. Both systems achieved recognition rates of over 98%, which is ‘remarkable’ according to the team. Both systems, they explained, performed similarly to human subjects at the task.

“We now plan to build on and test our proposed recognition systems on a large-scale database and other scripts,” the researchers wrote.

The paper “Neural architecture based on fuzzy perceptual representation for online multilingual handwriting recognition” has been published on the preprint server arXiv.

Taking short breaks to reinforce memories is key to learning new skills or re-learning old ones

Taking a break is a key part of learning anything, new research suggests.

Brain scan.

Some of the brain areas that saw increased activity during the trials.
Image courtesy of Cohen lab, NIH/NINDS.

A new study from the National Institutes of Health says that taking short rests helps our brains solidify the memories of a skill we practiced just seconds earlier. The findings will help guide skill-relearning therapies for patients recovering from the paralyzing effects of strokes or other brain injuries, the team hopes. However, they should be broadly applicable to anybody trying to learn a new skill that involves physical movement.

Slow and steady wins the race

“Everyone thinks you need to ‘practice, practice, practice’ when learning something new. Instead, we found that resting, early and often, may be just as critical to learning as practice,” said Leonardo G. Cohen, M.D., Ph.D., senior investigator at NIH’s National Institute of Neurological Disorders and Stroke and a senior author of the paper.

“Our ultimate hope is that the results of our experiments will help patients recover from the paralyzing effects caused by strokes and other neurological injuries by informing the strategies they use to ‘relearn’ lost skills.”

Lead researcher Marlene Bönstrup, M.D., a postdoctoral fellow in Dr. Cohen’s lab, says she had believed, like many of her colleagues, that our brains needed long periods of rest (i.e. sleep) to strengthen new memories. This included memories associated with learning a new skill. However, after seeing brain wave recordings of healthy volunteers in ongoing learning and memory experiments at the NIH Clinical Center, she started questioning that view.

These brain waves were recorded in right-handed volunteers with magnetoencephalography, a very sensitive scanning technique. Each participant was seated in a chair facing a computer screen under a long, cone-shaped brain scanning cap. Volunteers were shown a series of numbers on the screen and asked to type the numbers as many times as possible in 10 seconds using their left hand. Then, they took a 10-second break, and started typing again; each participant repeated this cycle of practice and rest 35 times.

Volunteers’ performance improved dramatically over the course of the trial, leveling off around the 11th cycle, the team reports. However, an important finding was ‘when’ this improvement seemed to take place in the brain.

“I noticed that participants’ brain waves seemed to change much more during the rest periods than during the typing sessions,” said Dr. Bönstrup. “This gave me the idea to look much more closely for when learning was actually happening. Was it during practice or rest?”

The team explains that the data shows participants’ performance increased primarily during the short rest periods, not while they were typing. These improvements made while resting added up to create the overall gains each volunteer saw during the trial. Furthermore, the summed improvements seen during these breaks were much greater than the gains the volunteers made overnight between sessions (the trial spanned two days) — this last tidbit suggests that the short breaks played as critical a role in learning as practice itself.

By looking at the brain waves, Dr. Bönstrup found that the participants’ brains were busy consolidating memories during these short rest periods. The team reports finding changes in the participants’ beta rhythms that correlated with the improvements the volunteers made during the rests. Further analysis reveals that the changes in beta oscillations primarily took place in the right hemisphere and along neural networks connecting the frontal and parietal lobes. These structures are associated with planning and control of movements. These changes only happened during the breaks, and were the only brain wave patterns that correlated with performance.
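
For readers curious what “changes in beta rhythms” means in practice, here is a small, self-contained sketch of how beta-band power could be estimated for one channel of a rest period using SciPy. The signal is synthetic and the band edges (taken here as 15–30 Hz) are a common convention, not necessarily the exact range used in the study.

```python
import numpy as np
from scipy.signal import welch

# Toy illustration: beta-band power (taken here as 15-30 Hz) for one synthetic
# channel, the kind of per-rest-period summary the analysis describes. The
# signal is made up; only the idea of "power in a frequency band" carries over.
fs = 600                                       # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)                   # 10 seconds of signal
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 22 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
band = (freqs >= 15) & (freqs <= 30)
beta_power = np.sum(psd[band]) * (freqs[1] - freqs[0])   # integrate PSD over the band
print(f"beta-band power: {beta_power:.3f}")
```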

“Our results suggest that it may be important to optimize the timing and configuration of rest intervals when implementing rehabilitative treatments in stroke patients or when learning to play the piano in normal volunteers,” said Dr. Cohen.

“Whether these results apply to other forms of learning and memory formation remains an open question.”

Dr. Cohen’s team plans to explore, in greater detail, the role of these early resting periods in learning and memory.

The paper “A Rapid Form of Offline Consolidation in Skill Learning” has been published in the journal Current Biology.

Artificial intelligence still has severe limitations in recognizing what it’s seeing

Artificial intelligence won’t take over the world any time soon, a new study suggests — it can’t even “see” properly. Yet.

Teapot golfball.

Teapot with golf ball pattern used in the study.
Image credits: Nicholas Baker et al / PLOS Computational Biology.

Computer networks that draw on deep learning algorithms (often referred to as AI) have made huge strides in recent years. So much so that there is a lot of anxiety (or enthusiasm, depending on which side of the fence you find yourself on) that these networks will take over human jobs and other tasks that computers simply couldn’t perform up to now.

Recent work at the University of California Los Angeles (UCLA), however, shows that such systems are still in their infancy. A team of UCLA cognitive psychologists showed that these networks identify objects in a fundamentally different manner from human brains — and that they are very easy to dupe.

Binary-tinted glasses

“The machines have severe limitations that we need to understand,” said Philip Kellman, a UCLA distinguished professor of psychology and a senior author of the study. “We’re saying, ‘Wait, not so fast.’”

The team explored how machine learning networks see the world in a series of five experiments. Keep in mind that the team wasn’t trying to fool the networks — they were working to understand how they identify objects, and if it’s similar to how the human brain does it.

For the first one, they worked with a deep learning network called VGG-19. It’s considered one of the (if not the) best networks currently developed for image analysis and recognition. The team showed VGG-19 altered, color images of animals and objects. One image showed the surface of a golf ball displayed on the contour of a teapot, for example. Others showed a camel with zebra stripes or the pattern of a blue and red argyle sock on an elephant. The network was asked what it thought the picture most likely showed in the form of a ranking (with the top choice being most likely, the second one less likely, and so on).

Combined images.

Examples of the images used during this step.
Image credits Nicholas Baker et al., 2018, PLOS Computational Biology.

VGG-19, the team reports, listed the correct item as its first choice for only 5 out of the 40 images it was shown during this experiment (12.5% success rate). It was also interesting to see just how well the team managed to deceive the network. VGG-19 listed a 0% chance that the argyled elephant was an elephant, for example, and only a 0.41% chance that the teapot was a teapot. Its first choice for the teapot image was a golf ball, the team reports.

Kellman says he isn’t surprised that the network suggested a golf ball — calling it “absolutely reasonable” — but was surprised to see that the teapot didn’t even make the list. Overall, the results of this step hinted that such networks draw on the texture of an object much more than its shape, says lead author Nicholas Baker, a UCLA psychology graduate student. The team decided to explore this idea further.

Missing the forest for the trees

For the second experiment, the team showed images of glass figurines to VGG-19 and a second deep learning network called AlexNet. Both networks were trained to recognize objects using a database called ImageNet. While VGG-19 performed better than AlexNet, they were still both pretty terrible. Neither network could correctly identify the figurines as their first choice: an elephant figurine, for example, was ranked with almost a 0% chance of being an elephant by both networks. On average, AlexNet ranked the correct answer 328th out of 1,000 choices.

Glass figurines.

Well, they’re definitely glass figurines to you and me. Not so obvious to AI.
Image credits Nicholas Baker et al / PLOS Computational Biology.

In this experiment, too, the networks’ first choices were pretty puzzling: VGG-19, for example, chose “website” for a goose figure and “can opener” for a polar bear.

“The machines make very different errors from humans,” said co-author Hongjing Lu, a UCLA professor of psychology. “Their learning mechanisms are much less sophisticated than the human mind.”

“We can fool these artificial systems pretty easily.”

For the third and fourth experiment, the team focused on contours. First, they showed the networks 40 drawings outlined in black, with the images in white. Again, the machine did a pretty poor job of identifying common items (such as bananas or butterflies). In the fourth experiment, the researchers showed both networks 40 images, this time in solid black. Here, the networks did somewhat better — they listed the correct object among their top five choices around 50% of the time. They identified some items with good confidence (99.99% chance for an abacus and 61% chance for a cannon from VGG-19, for example) while they simply dropped the ball on others (both networks gave a white hammer outlined in black less than a 1% chance of being a hammer).

Still, it’s undeniable that both algorithms performed better during this step than any other before them. Kellman says this is likely because the images here lacked “internal contours” — edges that confuse the programs.

Throwing a wrench in

Now, in experiment five, the team actually tried to throw the machines off their game as much as possible. They worked with six images that VGG-19 identified correctly in the previous steps, scrambling them to make them harder to recognize while preserving some pieces of the objects shown. They also employed a group of ten UCLA undergrads as a control group.

The students were shown objects in black silhouettes — some scrambled to be difficult to recognize and some unscrambled, some objects for just one second, and some for as long as the students wanted to view them. Students correctly identified 92% of the unscrambled objects and 23% of the scrambled ones when allowed a single second to view them. When the students could see the silhouettes for as long as they wanted, they correctly identified 97% of the unscrambled objects and 37% of the scrambled objects.

Silhouette and scrambled bear.

Example of a silhouette (a) and scrambled image (b) of a bear.
Image credits Nicholas Baker et al / PLOS Computational Biology.

VGG-19 correctly identified five of these six images (and was quite close on the sixth, too, the team writes). The team says humans probably had more trouble identifying the images than the machine because we observe the entire object when trying to determine what we’re seeing. Artificial intelligence, in contrast, works by identifying fragments.

“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”

The results suggest that right now, AI (as we know and program it) is simply too immature to actually face the real world. It’s easily duped, and it works differently than us — so it’s hard to intuit how it will behave. Still, understanding how such networks ‘see’ the world around them would be very helpful as we move forward with them, the team explains. If we know their weaknesses, we know where we need to put most work in to make meaningful strides.

The paper “Deep convolutional networks do not classify based on global object shape” has been published in the journal PLOS Computational Biology.

Exposure to cannabis leads to cognitive changes in the offspring of rats

Pregnancy and weed probably shouldn’t go together, new research shows.

Rat pups.

Baby rats! Awww!
Image credits Karsten Paulick.

Researchers from Washington State University (WSU) found that heavy cannabis exposure during pregnancy can lead to cognitive changes in the offspring of rats. Cannabis is the most commonly used illicit substance among pregnant women and may have similar effects when used by human mothers.

Pass on the puff, puff

“Prenatal exposure to cannabis may cause meaningful changes in brain development that can negatively impact cognitive functioning into adulthood,” the authors wrote in a summary of the work presented yesterday at the Society for Neuroscience’s annual meeting in San Diego.

For the study, McLaughlin — an assistant professor of Integrative Physiology and Neuroscience at WSU — and his team exposed pregnant rats (dams) to various concentrations of cannabis vapor. This method was selected as it better recreates how people most often use the drug. The team then documented how these rats’ offspring performed in a lab test that required learning, and later adjusting, a strategy to get sugar rewards.

The vapor was administered in atmospherically controlled cages during two hour-long sessions each day. The treatment started before pregnancy and continued until the dams gave birth. Rats in the control group received cannabis-free vapor, while other groups received vapor with low to high levels of cannabis. The treatment was designed to raise the rats’ blood THC levels to that of a person who has had a few puffs of the drug, the team notes.

Roughly 60 offspring of these rats were then subjected to a task similar to the Wisconsin Card Sorting Test (WCST). The WCST is a method used to test a person’s flexibility when the stimulus for positive reinforcement changes. The rats were trained to press one of two levers. These levers were tied to lights, and the rats learned that they’d receive a treat when pressing the lever close to the shining light. After this, however, the team shook things up: the reward was assigned to one of the levers permanently, regardless of which light was shining.

Cognitive changes

The rats who were exposed to cannabis in utero (in the womb) had no difficulties learning the first rule, the team reports. However, those who were exposed to higher concentrations “showed marked deficits in their ability to shift strategies when the new rule was implemented,” the researchers add.

This doesn’t mean they were unable to learn the new strategy, mind you — these rats (from dams exposed to high levels of cannabis) appeared to understand the change, as they pressed the correct lever several times in a row. But they simply wouldn’t stick with it. They would give up before pressing the correct lever ten times in a row — something the offspring of dams exposed to little or no cannabis managed to do.

“The general take-home message is that we see deficits, particularly in the domain of cognitive flexibility, in rats prenatally exposed to high doses of cannabis vapor,” McLaughlin said. “The impairment is not a general learning deficit, as they can learn the initial rule just fine.”

“The deficit only emerges when the learned strategy is no longer resulting in reward delivery. They cannot seem to adapt properly and tend to commit more regressive errors as a result, which suggests impairment in maintaining the new optimal strategy.”
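
To get a feel for what a “regressive error” looks like in this kind of set-shifting task, here is a toy simulation of the task structure described above. The agent is a crude win-stay/lose-shift rule with a tunable habit bias toward the old cue-following strategy; none of this is the lab’s actual protocol or analysis code, just an illustration of how a stronger pull toward the old rule produces more slips back to it after the switch.

```python
import numpy as np

# Toy simulation of the task structure (illustrative only, not the lab's code).
# Phase 1: the reward follows the lit cue. Phase 2: the reward is fixed to the
# left lever regardless of the cue. The agent is a crude win-stay/lose-shift
# rule with a "habit" bias toward the old cue-following strategy.
rng = np.random.default_rng(0)

def simulate(habit_bias, n_trials=200, switch_at=100):
    follow_cue = True                      # the agent's current strategy
    regressive_errors = 0
    for trial in range(n_trials):
        cue = rng.choice(["left", "right"])
        choice = cue if follow_cue else "left"
        rewarded = (choice == cue) if trial < switch_at else (choice == "left")
        if trial >= switch_at and follow_cue and choice != "left":
            regressive_errors += 1         # acted on the old rule after the switch
        if not rewarded and follow_cue and rng.random() > habit_bias:
            follow_cue = False             # lose-shift: abandon the old strategy
        elif rewarded and not follow_cue and rng.random() < habit_bias * 0.2:
            follow_cue = True              # a strong habit occasionally pulls the agent back
    return regressive_errors

for bias in (0.2, 0.8):
    print(f"habit bias {bias}: {simulate(bias)} regressive errors after the rule switch")
```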

McLaughlin cautions against jumping to conclusions, however. He says that high-exposure rats aren’t necessarily less intelligent, just less motivated. They might not be very interested in the task itself, the sugar reward, or they’d simply rather explore other activities during the test.

“They don’t have these opinions about how they need to perform because they don’t want to be perceived as ‘the stupid rat,'” he said. “Clearly that’s not what’s motivating their behavior. They’re just going to try to get as many sugar pellets as they can.”

“But at some point, do sugar pellets continue to motivate your behavior after you’ve eaten 100? Do you still care as much about them?”

The findings are still preliminary, and the team has a lot of work ahead of them. Among others, they plan to look for differences in gene expression and protein levels in the brain to determine why the rats’ behavior changed.

The findings have been presented at the Society for Neuroscience’s annual meeting Neuroscience 2018 in San Diego.

Feedback, not evidence, makes us confident we’re right — even when we’re not

We tend to only look at the most recent feedback when gauging our own levels of competence, a new paper reports. The findings can help explain why people or groups tend to stick to their beliefs even in the face of overwhelming evidence to the contrary.

Feedback.

Image credits Mohamed Hassan.

A team of researchers from the University of California, Berkeley (UC Berkeley) thinks that feedback — rather than hard evidence — is what makes people feel certain of their beliefs when learning something new, or when trying to make a decision. In other words, people’s beliefs tend to be reinforced by the positive or negative reactions they receive in response to an opinion, task, or interaction, not by logic, reasoning, or data.

“Yes but you see, I’m right”

“If you think you know a lot about something, even though you don’t, you’re less likely to be curious enough to explore the topic further, and will fail to learn how little you know,” said study lead author Louis Marti, a Ph.D. student in psychology at UC Berkeley.

“If you use a crazy theory to make a correct prediction a couple of times, you can get stuck in that belief and may not be as interested in gathering more information,” adds study senior author Celeste Kidd, an assistant professor of psychology at UC Berkeley.

This dynamic is very pervasive, the team writes, playing out in every area of our lives — from how we interact with family, friends, or coworkers, to our consumption of news, social media, and the echo chambers that form around us. It’s actually quite bad news, as this feedback-based reinforcement pattern has a profound effect on how we handle and integrate new information into our belief systems. It’s especially active in the case of information that challenges our worldview, and can limit our intellectual horizons, the team explains.

It can also help explain why some people are easily duped by charlatans.

For the study, the team worked with over 500 adult subjects recruited through Amazon’s Mechanical Turk crowd-sourcing platform. Participants were placed in front of a computer screen displaying different combinations of colored shapes, and asked to identify which shapes qualify as a “Daxxy”.

If you don’t know what a Daxxy is, fret not — that was the whole point. Daxxies are make-believe objects that the team pulled out of a top hat somewhere, specifically for this experiment. Participants weren’t told what a Daxxy is, nor were they clued in as to what any of its defining characteristics were. The experiment aimed to force the participants to make blind guesses, and see how their choices evolve over time.

In the end, the researchers used these patterns of choice to see what influences people’s confidence in their knowledge or beliefs while learning.

Participants were told whether they picked right or wrong on each try, but not why their answer was correct or not. After each guess, they reported on whether or not they were certain of their answer. By the end of the experiment, the team reports, a trend was already evident: the subjects consistently based their certainty on whether they had correctly identified a Daxxy during the last four or five guesses, not all the information they had gathered throughout the trial.

“What we found interesting is that they could get the first 19 guesses in a row wrong, but if they got the last five right, they felt very confident,” Marti said. “It’s not that they weren’t paying attention, they were learning what a Daxxy was, but they weren’t using most of what they learned to inform their certainty.”

By contrast, Marti says, learners should base their certainty on observations made throughout the learning process — but not discount feedback either.

“If your goal is to arrive at the truth, the strategy of using your most recent feedback, rather than all of the data you’ve accumulated, is not a great tactic,” he said.
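
The contrast Marti describes is easy to make concrete: the snippet below compares a certainty estimate built from only the last five outcomes with one built from the whole trial history, using the 19-misses-then-5-hits pattern mentioned in the quote above.

```python
import numpy as np

# The outcome pattern from the quote above: 19 wrong guesses followed by 5
# correct ones. Two ways to turn that feedback into "certainty".
outcomes = np.array([0] * 19 + [1] * 5)

recent_certainty = outcomes[-5:].mean()      # only the last five guesses
overall_certainty = outcomes.mean()          # all of the evidence so far

print(f"certainty from the last 5 trials: {recent_certainty:.0%}")   # 100%
print(f"certainty from all 24 trials:     {overall_certainty:.0%}")  # ~21%
```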

The paper “Certainty Is Primarily Determined by Past Performance During Concept Learning” has been published in the journal Open Mind.

Novel system allows robots to learn new skills just by looking at you do it

It may soon be possible to teach a robot any task just by showing it how it’s done — a single time.

Robot.

Image credits John Greenaway / Flickr.

Researchers at UC Berkeley have developed a way to speed up the education of our silicon-brained friends. In a recently published paper, they report on a new learning algorithm that allows a robot to mimic an activity it observed just once on video.

Copy, paste

Training robots today is hard work. Even really simple actions like picking up a cup require paragraphs upon paragraphs of code expressly telling the bot what to do each and every step of the way — a process that is hard, complicated, and probably frustrating for us humans.

There’s work to do even after the code is fully laid out. For example, take assembly line workers. After all the instructions are copy-pasted into their circuits, these bots must undergo a long training process during which they must execute every procedure multiple times. They do so until they can perform the task without making any mistake along the way.

More recently, programmers have created software that allows robots to be programmed just by observing certain tasks. While this is more similar to how we or an animal would learn, it’s still clunky to use — currently, we need to show our robotic friends such training videos thousands of times until they get the hang of it.

The team from UC Berkeley, however, describes a new technique they developed that allows robots to learn a certain action just by observing a human do it a single time.

This technique combines imitation learning with a meta-learning algorithm, the team reports. The resulting system builds on an approach called ‘model-agnostic meta-learning’ (MAML). Meta-learning basically means ‘learning to learn’. MAML is a process by which a robot builds on prior experience in order to learn something new. If a robot is shown footage of a human picking up an apple and putting it into a cup, for example, it can gauge what its objective is — putting the apple in the cup. As it learns how to handle these objects, it can expand that knowledge to other similar behaviors. So, for example, if you then go on to show it a video of somebody putting an orange down on a plate, it can recognize the overarching behavior and quickly translate that into the motions it needs to perform to carry out the task.

Best of all for all those assembly-line robot trainers out there, the bot doesn’t need to know what an orange or a plate is — it will still perform the required task.

In short, MAML provides a platform that allows a neural network (or a robot) to learn a wide variety of tasks starting with relatively little data. It’s almost the polar opposite of how neural networks work today — which master a single task while drawing on a huge dataset.
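
To show what “learning to learn” looks like in code, here is a minimal MAML sketch on a toy sine-wave regression problem — not the vision-and-manipulation setup from the paper — with the two nested loops: an inner gradient step adapts the network to one task, and an outer step updates the starting weights so that one-step adaptation works well across tasks.

```python
import torch

torch.manual_seed(0)

def sample_task():
    """One 'task' = a sine wave with a random amplitude and phase."""
    amp = torch.rand(1) * 4.0 + 0.1
    phase = torch.rand(1) * 3.14
    def data(n=10):
        x = torch.rand(n, 1) * 10 - 5
        return x, amp * torch.sin(x + phase)
    return data

def forward(x, params):
    w1, b1, w2, b2 = params
    return torch.tanh(x @ w1 + b1) @ w2 + b2

# initial (meta-learned) parameters of a tiny one-hidden-layer network
params = [
    (torch.randn(1, 40) * 0.1).requires_grad_(),
    torch.zeros(40, requires_grad=True),
    (torch.randn(40, 1) * 0.1).requires_grad_(),
    torch.zeros(1, requires_grad=True),
]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for step in range(2000):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                         # a small batch of tasks per meta-step
        task = sample_task()
        x_s, y_s = task()                      # support set: adapt on this
        x_q, y_q = task()                      # query set: evaluate the adapted model
        support_loss = ((forward(x_s, params) - y_s) ** 2).mean()
        grads = torch.autograd.grad(support_loss, params, create_graph=True)
        fast_params = [p - inner_lr * g for p, g in zip(params, grads)]
        meta_loss = meta_loss + ((forward(x_q, fast_params) - y_q) ** 2).mean()
    meta_loss.backward()                       # backprop through the inner update
    meta_opt.step()
    if step % 500 == 0:
        print(f"meta-step {step}: average query loss {meta_loss.item() / 4:.3f}")
```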

The team tested MAML on several robots. After a “single video demonstration from a human”, they note, the robots could successfully perform the shown task. “After meta-learning, the robot can learn to place, push, and pick-and-place new objects using just one video of a human performing the manipulation,” they conclude.

The paper “One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning” has been published on the preprint server arXiv.

An AI recreated the periodic table from scratch — in a couple of hours

A new artificial intelligence (AI) program developed at Stanford recreated the periodic table from scratch — and it only needed a couple of hours to do so.

Atom2Vec.

If you’ve ever wondered how machines learn, this is it — in picture form. (A) shows atom vectors of 34 main-group elements and their hierarchical clustering based on distance. The color in each cell stands for value of the vector on that dimension.
Image credits Zhou et al., 2018, PNAS.

Running under the alluring name of Atom2Vec, the software learned to distinguish between different atoms starting from a database of chemical compounds. After it learned the basics, the researchers left Atom2Vec to its own devices. Using methods and processes related to those in the field of natural language processing — chief among them, the idea that the nature of a word can be understood by looking at the words around it — the AI successfully clustered the elements by their chemical properties.

It only took Atom2Vec a couple of hours to perform the feat; roughly speaking, it re-created the periodic table of elements, one of the greatest achievements in chemistry. It took us hairless apes nearly a century of trial-and-error to do the same.

I’m you, but better

The Periodic Table of elements was initially conceived by Dmitri Mendeleev in the mid-19th century, well before many of the elements we know today had been discovered, and certainly before there was even an inkling of quantum mechanics and relativity lurking beyond the boundaries of classical physics. Mendeleev recognized that certain elements fell into groups with similar chemical features, and this established a periodic pattern (hence the name) to the elements as they went from lightweight elements like hydrogen and helium, to progressively heavier ones. In fact, Mendeleev could predict the very specific properties and features of, as yet, undiscovered elements due to blank spaces in his unfinished table. Many of these predictions turned out to be correct when the elements filling the blank spots were finally discovered.

“We wanted to know whether an AI can be smart enough to discover the periodic table on its own, and our team showed that it can,” said study leader Shou-Cheng Zhang, the J. G. Jackson and C. J. Wood Professor of Physics at Stanford’s School of Humanities and Sciences.

Zhang’s team designed Atom2Vec starting from an AI platform (Word2Vec) that Google built to parse natural language. The software converts individual words into vectors (numerical codes). It then analyzes these vectors to estimate the probability of a particular word appearing in a text based on the presence of other words.

The word “king” for example is often accompanied by “queen”, and the words “man” and “woman” often appear together. Word2Vec works with these co-appearances and learns that, mathematically, “king = a queen minus a woman plus a man,” Zhang explains. Working along the same lines, the team fed Atom2Vec all known chemical compounds (such as NaCl, KCl, and so on) in lieu of text samples.
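
The same trick can be tried at toy scale with an off-the-shelf Word2Vec implementation, treating each compound as a tiny “sentence” of element symbols. The handful of compounds below is an illustrative stand-in for the full database the team used, so the resulting vectors will be crude — but elements that share binding partners should still drift toward each other.

```python
from gensim.models import Word2Vec

# Toy version of the idea: treat each compound as a short "sentence" of element
# symbols and learn element vectors from co-occurrence. This hand-picked list is
# a tiny stand-in for the full compound database the study used.
compounds = [
    ["Na", "Cl"], ["K", "Cl"], ["Li", "Cl"],
    ["Na", "Br"], ["K", "Br"], ["Li", "Br"],
    ["Na", "F"], ["K", "F"],
    ["Mg", "O"], ["Ca", "O"], ["Mg", "S"], ["Ca", "S"],
]

model = Word2Vec(compounds, vector_size=16, window=2, min_count=1, sg=1, epochs=200, seed=0)

# Elements that bind to the same partners should end up with similar vectors.
print(model.wv.most_similar("Na", topn=3))
print(f"similarity(Na, K) = {model.wv.similarity('Na', 'K'):.2f}")
```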

It worked surprisingly well. Even from this relatively tiny sample size, the program figured out that potassium (K) and sodium (Na) must be chemically-similar, as both bind to chlorine (Cl). Through a similar process, Atom2Vec established chemical relationships between all the species in the periodic table. It was so successful and fast in performing the task that Zhang hopes that in the future, researchers will use Atom2Vec to discover and design new materials.

Future plans

“For this project, the AI program was unsupervised, but you could imagine giving it a goal and directing it to find, for example, a material that is highly efficient at converting sunlight to energy,” he said.

As impressive as the achievement is, Zhang says it’s only the first step. The endgame is more ambitious — Zhang hopes to design a replacement for the Turing test, the golden standard for gauging machine intelligence. To pass the Turing test, a machine must be capable of responding to written questions in such a way that users won’t suspect they’re chatting with a machine; in other words, a machine will be considered as intelligent as a human if it seems human to us.

However, Zhang thinks the test is flawed, as it is too subjective.

“Humans are the product of evolution and our minds are cluttered with all sorts of irrationalities. For an AI to pass the Turing test, it would need to reproduce all of our human irrationalities,” he says. “That’s very difficult to do, and not a particularly good use of programmers’ time.”

He hopes to take the human factor out of the equation, by having machine intelligence try to discover new laws of nature. Nobody’s born educated, however, not even machines, so Zhang is first checking to see if AIs can re-create some of the most important discoveries we’ve made without help. By recreating the periodic table, Atom2Vec has achieved this goal.

The team is now working on the second version of the AI. This one will focus on cracking a frustratingly-complex problem in medical research: it will try to design antibodies to attack the antigens of cancer cells. Such a breakthrough would offer us a new and very powerful weapon against cancer. Currently, we treat the disease with immunotherapy, which relies on such antibodies already produced by the body; however, our bodies can produce over 10 million unique antibodies, Zhang says, by mixing and matching between some 50 separate genes.

“If we can map these building block genes onto a mathematical vector, then we can organize all antibodies into something similar to a periodic table,” Zhang says.

“Then, if you discover that one antibody is effective against an antigen but is toxic, you can look within the same family for another antibody that is just as effective but less toxic.”

The paper “Atom2Vec: Learning atoms for materials discovery” has been published in the journal PNAS.

How stress impairs our ability to learn and recall

Credit: Pixabay.

It’s happened to everyone: you study hard for weeks, you’ve got everything covered, but then on exam day — you fail miserably. You just can’t remember what you studied the night before or it’s all very fuzzy, to the point that you start confusing and mixing subjects. The problem is, of course, stress, which interferes with our ability to retrieve and encode memories. In a new study, researchers at the University of Hamburg in Germany have learned how exactly all of this pans out in the brain.

In recent years, the brain’s medial prefrontal cortex (mPFC) function has emerged as a key area of study for understanding higher-order memory and decision-making processes. It’s a large, functionally diverse region, whose intimate connectivity with the medial temporal lobes (MTL) may underlie its involvement in a multitude of memory-related tasks. For instance, one of its functions is to decide whether or not incoming information is in any way related to stored memories; e.g. “is this exam question related to anything I have studied?” When brand new information needs to be processed, this is handled by a separate brain region called the hippocampus.

The German psychologists wanted to investigate how stress interferes with both of these brain regions, looking for the neural basis of stress-induced learning impairment. During an experiment, participants took part in a mock 15-minute job interview, which also involved public speaking in front of an intimidating-looking jury. After being exposed to this stressful episode, each participant was tasked with learning two different types of information. The first task was related to information that was already known, while the second represented completely novel information. During the tasks, researchers recorded the participants’ brain activity with functional magnetic resonance imaging (fMRI).

When the participants learned new information that was related to stored memories, activity increased in the medial prefrontal cortex (mPFC). When completely new information was acquired, the hippocampus lit up. But when participants were under stress, mPFC activity was impaired — and this disruption predicted poor performance on the task.

These findings explain how a stressful event can disrupt our ability to access prior knowledge or perform memory-related tasks. On a more practical level, the study could help practitioners develop new therapies and techniques aimed at cases of stress-related mental disorders, such as generalized anxiety disorder (GAD). Furthermore, professionals working in education, such as teachers, professors, or coaches, could use these findings to devise new ways of mitigating stress to enhance performance.

Perhaps the most important takeaway here is that stressing over a memory-dependent outcome can have catastrophic consequences. Before an important exam or job interview, it may be wiser to de-stress, wind down, and relax, instead of trying to cram in as much new information as possible.

Researchers identify brain patterns associated with learning to improve teaching, fight Alzheimer’s

Researchers have identified two different brain-wave patterns that correspond to different types of learning. They hope this discovery will allow us to help people learn faster or counteract the effects of dementia.

Brain Learning.

Image via Pixabay.

Playing the guitar and studying for an exam require two very different types of learning — and now, for the first time, researchers have distinguished each type by looking at the patterns of brain-waves they produce. The findings will go a long way to help researchers understand how our brains learn motor skills and handle complex cognitive tasks.

Firing just right

When neurons activate, their electrical signals combine to form brain waves that oscillate at different frequencies. A team of researchers led by Earl K. Miller, Picower Professor of Neuroscience at MIT's Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences, set out to study how learning shapes these waves and to gain a better understanding of how our brains learn.

“Our ultimate goal is to help people with learning and memory deficits,” notes Miller. “We might find a way to stimulate the human brain or optimize training techniques to mitigate those deficits.”

Not so long ago, scientists assumed that all learning is handled the same way within the brain. That turned out to be wrong, as the famous case of Henry Molaison revealed. In 1953, Molaison had part of his brain removed in an attempt to bring his epileptic seizures under control, and he developed amnesia as a result. He couldn't remember eating a few minutes after finishing a meal, but he was still fully capable of learning and retaining motor skills. He, and patients in similar cases, would get better at skills such as drawing a five-pointed star in a mirror, yet hold no memory of ever having performed the task.

Cases like this one demonstrated that our brains rely on two different learning mechanisms, dubbed explicit and implicit. Explicit learning occurs when you’re aware of what you’re learning, you’re thinking about what you’re learning, and, most importantly, you can articulate what it is that you’re learning. Memorizing part of a text or learning the rules of a new game are examples of explicit learning.

Implicit learning is, in broad terms, what you might know as motor-skill learning or muscle memory. You don't have conscious access to what you're learning, you get better at these skills by practicing, and you can't really articulate what it is that you're learning. Learning to ride a skateboard or to throw darts falls under implicit learning.

Some other tasks, like learning to play a new piece of music, require both kinds of learning.

All in the brain

Brain sand sculpture.

Image via Pixabay.


Evan G. Antzoulatos, paper co-author and a former MIT postdoc now at the University of California, Davis, studied the behavior of animals learning new skills and found evidence that they, too, rely on implicit and explicit processes. For example, in tasks that required comparing and matching two objects, the animals appeared to use both correct and incorrect answers to improve their next matches, indicating an explicit form of learning. In tasks where the animals had to shift their gaze in one direction or another in response to visual patterns, their performance improved only after correct answers, which suggests implicit learning.

More importantly, the researchers found that the different types of behavior follow different patterns of brain waves.

Explicit learning tasks caused an increase in alpha2-beta brain waves (a pattern of oscillations at 10 to 30 hertz) following a correct choice and an increase in delta-theta waves (which occur at 3 to 7 hertz) after an incorrect choice. Explicit tasks also produced a general increase in alpha2-beta waves, which tapered off as learning progressed. Finally, the team noticed that spikes of neural activity in response to behavioral errors, a phenomenon known as event-related negativity, occurred only in tasks that required explicit learning. This suggests a conscious learning process in which the animals' brains can 'tell' when they have made a wrong choice or assumption.

Miller explains that the increase in alpha2-beta waves during explicit learning “could reflect the building of a model of the task,” and that after the animals learn the task, “the alpha-beta rhythms then drop off because the model is already built.”

By contrast, in implicit learning tasks, the team observed increased delta-theta rhythms in response to correct answers and a subsequent decrease in these rhythms during learning. Miller says this pattern could be indicative of “rewiring” to help encode the motor skill during learning.

“This showed us that there are different mechanisms at play during explicit versus implicit learning,” he notes.
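For readers curious how band-limited activity like this is quantified in practice, here is a minimal sketch using NumPy and SciPy. It is not the authors' analysis pipeline; the sampling rate, filter settings, and the synthetic signal are all illustrative assumptions.

```python
# Minimal sketch: compare power in the two frequency bands mentioned above
# for a recorded signal. The sampling rate and the synthetic "recording"
# are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000  # sampling rate in Hz (assumed)

def band_power(signal, low, high, fs=FS, order=4):
    """Band-pass filter the signal and return its mean power in that band."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal)
    return float(np.mean(filtered ** 2))

# Synthetic one-second signal: a 20 Hz component (alpha2-beta range)
# plus a weaker 5 Hz component (delta-theta range) plus noise.
t = np.arange(0, 1, 1 / FS)
signal = (np.sin(2 * np.pi * 20 * t)
          + 0.5 * np.sin(2 * np.pi * 5 * t)
          + 0.1 * np.random.randn(t.size))

print("alpha2-beta (10-30 Hz) power:", band_power(signal, 10, 30))
print("delta-theta (3-7 Hz) power:  ", band_power(signal, 3, 7))
```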

Roman F. Loonis, a graduate student in the Miller Lab and first author of the paper, says the findings could open up new avenues of teaching or training people to do specific tasks.

“If we can detect the kind of learning that’s going on, then we may be able to enhance or provide better feedback for that individual,” he says. “For instance, if they are using implicit learning more, that means they’re more likely relying on positive feedback, and we could modify their learning to take advantage of that.”

They could also help detect the onset of disorders such as Alzheimer's disease at an early stage. This disease destroys the brain's ability to perform explicit learning processes, leaving only implicit learning intact. Finally, the paper shows "a lot of overlap" between implicit and explicit learning, although previous research has found that the two processes are housed in separate areas of the brain.

The paper, entitled “A Meta-Analysis Suggests Different Neural Correlates for Implicit and Explicit Learning”, has been published in the journal Neuron.


Some otters learn how to solve problems by observing other otters

Otters are actually copycats. They solve puzzles by watching and repeating the actions of another otter. However, only the otter species that hunt together in the wild are able to learn from others.

Scientists from the University of Exeter gave otters sets of food-baited puzzles to solve. The experiments were conducted in captivity, at zoos and wildlife parks. The otters were given plastic Tupperware containers with clips on the lids, screw-top lids, or pull-off lids. Inside were treats such as peanuts and fish heads. The most difficult task was a block of frozen shrimp attached to a bamboo stick, which had to be moved up and to the right to get it out of the plastic container. Only half of the otters managed to get the shrimp out.

Smooth-coated otters learn from each other, especially when they are young. Image credits: Kokhuitan.

“Social learning has been studied in many species, but never in otters,” said Dr. Neeltje Boogert, of the Centre for Ecology and Conservation at the University of Exeter’s Penryn Campus in Cornwall.

Young otters copied their parents to solve the puzzles: the offspring solved the puzzles much faster than their parents. However, not every otter species did this. Smooth-coated otters copied their parents, while Asian short-clawed otters did not. The researchers expected to find social learning in both otter species, so it was surprising that the Asian short-clawed otter didn’t exhibit it.

“Asian short-clawed otters are not known to forage in groups, and their natural diet consists mainly of prey such as shellfish and crabs that do not require group-hunting strategies. As a result, they may have less of a tendency to turn to each other to see how to solve a puzzle such as how to extract food from a new source. In the wild, smooth-coated otters show coordinated group-hunting strategies such as V-shaped swimming formations to catch fish — so it makes sense that they would be naturally inclined to watch each other for foraging information,” explained Dr. Boogert.

This finding is cool, but it is also practical. Many otters are endangered in the wild, so captive breeding programs and re-release are used to help them recover. Previous work on captive breeding and re-release has found that animals with wild skills, like catching food (or cracking open sea urchins with rocks) and avoiding predators, have a higher rate of survival. Teaching the otters certain behaviors through social learning can help them to survive in the wild.

Journal reference: Zosia Ladds, William Hoppitt, Neeltje J. Boogert. Social learning in otters. Royal Society Open Science, 2017; 4(8): 170489. DOI: 10.1098/rsos.170489

Artificial synapse brings us one step closer to brain-like computers

Researchers have created a working artificial, organic synapse. The new device could allow computers to mimic some of the brain's inner workings and improve their capacity to learn. Furthermore, a machine based on these synapses would be much more energy efficient than modern computers.

It may not look like much, but this device could revolutionize our computers forever.
Image credits Stanford University.

As far as processors go, the human brain is hands down the best we've ever seen. Its sheer processing power dwarfs anything humans have put together, for a fraction of the energy consumption, and it does it all with elegance. If you'll allow me a car analogy, the human brain is a Formula 1 race car that somehow uses almost no fuel, while our best supercomputer… Well, it's an old, beat-down Moskvich.

And it misfires.
Image credits Sludge G / Flickr.

So finding a way to emulate the brain's hardware has understandably been high on the wish list of computer engineers, a wish that may be granted sooner than they hoped. Researchers at Stanford University and Sandia National Laboratories have made a breakthrough that could allow computers to mimic one element of the brain: the synapse.


“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper.

“It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

Copycat

The artificial synapse is made up of two thin, flexible films holding three embedded terminals connected by salty water. It works similarly to a transistor, with one of the terminals dictating how much electricity can flow between the other two. This behavior allowed the team to mimic the processes that go on inside the brain: as they zap information to one another, neurons create 'pathways' of sorts through which electrical impulses can travel faster, and each successive impulse requires less energy to pass through the synapse. For the most part, we believe that these pathways allow synapses to store information while they process it, for comparatively little energy expenditure.

Because the artificial synapse mimics the way synapses in the brain respond to signals, it removes the need to store information separately after processing: just as in our brains, the processing itself creates the memory. The two tasks are carried out simultaneously, for less energy than other approaches to brain-like computing. The synapse could therefore enable a much more energy-efficient class of computers, addressing a problem that's becoming more and more pressing in today's world.

Modern processors need huge fans because they use a lot of energy, giving off a lot of heat.

One application for the team’s synapses could be more brain-like computers that are especially well suited to tasks that involve visual or auditory signals — voice-controlled interfaces or driverless cars, for example. Previous neural networks and artificially intelligent algorithms used for these tasks are impressive but come nowhere near the processing power our brains hold in their tiny synapses. They also use a lot more energy.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper.

“Instead of simulating a neural network, our work is trying to make a neural network.”

The team will program these artificial synapses the same way our brains learn: by progressively reinforcing pathways through repeated charge and discharge. They found that this method allows them to predict the voltage required to bring a synapse to a specific electrical state, and to hold it there, with only 1% uncertainty. And unlike a traditional computer, where data must be saved to storage or be lost when the machine shuts down, such a neural network can simply pick up where it left off, without the need for separate data banks.
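To make the idea of programming a device into one of many states a little more concrete, here is a toy numerical sketch. The update rule, pulse behavior, and tolerance are invented for illustration and do not describe the real device physics; only the rough figures (around 500 states, roughly 1% uncertainty) come from the article.

```python
# Toy model: each pulse nudges a simulated device's conductance toward a
# target level, loosely mirroring the "repeated charge and discharge"
# programming described above. The constants are invented assumptions.
NUM_STATES = 500          # the article reports roughly 500 programmable states
G_MIN, G_MAX = 0.0, 1.0   # normalized conductance range (assumed)

def program_state(target_state, step=0.2, tolerance=0.01):
    """Pulse the simulated device until it sits within `tolerance` of the target."""
    target = G_MIN + (G_MAX - G_MIN) * target_state / (NUM_STATES - 1)
    conductance, pulses = 0.0, 0
    while abs(conductance - target) > tolerance * (G_MAX - G_MIN):
        # Each pulse moves the device a fraction of the remaining distance.
        conductance += step * (target - conductance)
        pulses += 1
    return conductance, pulses

g, n = program_state(target_state=250)
print(f"reached conductance {g:.3f} after {n} pulses")
```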

One of a kind

Right now, the team has only produced one such synapse. However, Sandia researchers took some 15,000 measurements during various tests of the device to simulate the activity of a whole array of them. This simulated network was able to identify handwritten digits (0 through 9) with 93 to 97% accuracy, an impressive success rate for anyone who has ever struggled with a handwriting-recognition feature.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper.

“We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

One of the reasons these synapses perform so well is the number of states they can hold. Digital transistors (such as the ones in your computer or smartphone) are binary: they can be in either state 1 or state 0. The team was able to successfully program 500 states into the synapse, and the higher that number, the more powerful a neural-network computational model becomes. Switching from one state to another required roughly a tenth of the energy a modern computing system drains to move data from the processor to memory storage.

Still, this means that the artificial synapse is currently 10,000 times less energy efficient than its biological counterpart. The team hopes they can tweak and improve the device after trials in working devices to bring this energy requirement down.

Another exciting possibility is the use of these synapses in vivo. The devices are largely composed of organic materials built from elements such as hydrogen and carbon, so they should be fully compatible with the brain's chemistry. They're soft and flexible, and they operate at the same voltages as human neurons. All of this raises the possibility of using the artificial synapse in concert with live neurons in improved brain-machine interfaces.

Before considering any biological applications, however, the team wants to test a full array of artificial synapses.

The full paper “A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing” has been published in the journal Nature Materials.


Google’s Neural Machine can translate nearly as well as a human

A new translation system unveiled by Google, the Google Neural Machine Translation (GNMT) framework, comes close to human translators in its proficiency.

Public domain image.

Not knowing the local language can be hell, but Google's new translation software might prove to be the bilingual travel partner you've always wanted. A recently released paper notes that Google's Neural Machine Translation system (GNMT) reduces translation errors by an average of 60% compared to the familiar phrase-based approach. The framework is based on unsupervised deep learning technology.

Deep learning simulates, inside a computer, the way our brains form connections and process information. Virtual neurons are mapped out by a program, and the connections between them receive a numerical value, a "weight". The weight determines how each of these virtual neurons treats the data fed to it: low-weight neurons recognize the basic features of the data, which they pass on to heavier neurons for further processing, and so on. The end goal is to create software that can learn to recognize patterns in data and respond to each one accordingly.
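As a rough illustration of the "virtual neurons and weights" picture above, here is a minimal sketch of a two-layer network in Python. The numbers, layer sizes, and activation function are arbitrary assumptions, not anything taken from Google's system.

```python
# Minimal sketch of virtual neurons connected by weights: low-level units
# pick up basic features of the input and feed a "heavier" unit downstream.
import numpy as np

def relu(x):
    # Simple activation function: pass positive signals, block negative ones.
    return np.maximum(0.0, x)

# Input "data" fed to the network (e.g. pixel intensities or audio samples).
x = np.array([0.2, 0.9, 0.4])

# First-layer weights: each row is one low-level neuron.
W1 = np.array([[0.5, -0.3, 0.8],
               [0.1,  0.7, -0.2]])

# Second-layer weights: one higher-level neuron combining those features.
W2 = np.array([0.6, 0.9])

hidden = relu(W1 @ x)       # low-level neurons extract basic features
output = relu(W2 @ hidden)  # higher-level neuron processes them further

print("hidden features:", hidden)
print("network output:", output)
```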

Programmers train these frameworks by feeding them data, such as digitized images or sound waves. They rely on big training datasets and powerful computers, both of which are becoming increasingly available, to work effectively. Deep learning has proven its worth in image and speech recognition in the past, and adapting it to translation seems like the logical next step.

And it works like a charm

GNMT draws on 16 processors to transform each word into a value called a "vector", which represents how closely that word relates to other words in its training database (2.5 billion sentence pairs for English and French, and 500 million for English and Chinese). "Leaf" is more closely related to "tree" than to "car", for example, and the name "George Washington" is more closely related to "Roosevelt" than to "Himalaya". Using the vectors of the input words, the system chooses a list of possible translations, ranked by their probability of occurrence. Cross-checking helps improve overall accuracy.
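To illustrate the vector idea, here is a minimal sketch comparing made-up word vectors with cosine similarity. The three-dimensional vectors are invented for the example; real systems learn much higher-dimensional vectors from their training data, and this is not Google's actual representation.

```python
# Minimal sketch: words as vectors, with similarity measured by the angle
# between them. All vector values below are illustrative assumptions.
import numpy as np

embeddings = {
    "leaf": np.array([0.9, 0.8, 0.1]),
    "tree": np.array([0.8, 0.9, 0.2]),
    "car":  np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    """Higher values mean the two words appear in more similar contexts."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("leaf vs tree:", cosine_similarity(embeddings["leaf"], embeddings["tree"]))
print("leaf vs car: ", cosine_similarity(embeddings["leaf"], embeddings["car"]))
```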

The increase in translation accuracy came about because Google let its neural network work without much of the supervision programmers previously provided. They fed in the initial data, but then let the computer take over and train itself. This approach is called unsupervised learning, and it has proven more efficient than earlier supervised learning techniques, in which humans retained a large measure of control over the learning process.

In a series of tests pitting the system against human translators, it came close to matching their fluency for some languages. Bilingually fluent people rated the system between 64 and 87 percent better than the previous one. While some things still slip through GNMT’s fingers, such as slang or colloquialisms, those are some solid results.

Google is already using the new system for Chinese to English translation, and plans to completely replace its current translation software with GNMT.


Machine learning could solve the US’s police violence issue

The Charlotte-Mecklenburg Police Department of North Carolina is piloting a new machine-learning system which it hopes will combat the rise of police violence. Police brutality has been a growing issue in the US in recent years.

The system combs through the police’s staff records to identify officers with a high risk of causing “adverse events” — such as racial profiling or unwarranted shootings.

Image credits Jagz Mario / Flickr.

A University of Chicago team is helping the Charlotte-Mecklenburg PD keep an eye on their police officers, and prevent cases of police violence. The team feeds data from the police’s staff records into a machine learning system that tries to spot risk factors for unprofessional conduct. Once a high-risk individual is identified, the department steps in to prevent any actual harm at the hands of the officer.

Officers are people too, and they can be subjected to a lot of stress in their line of work. The system is meant to single out officers who might behave aggressively under that stress. All the information on an individual's record (details of previous misconduct, gun use, deployment history, how many suicide or domestic violence calls they have responded to, and so on) is fed into the system. The idea is to prevent incidents in which stressed officers behave aggressively, such as the case in Texas where an officer pulled his gun on children at a pool party after responding to two suicide calls earlier that shift.
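As a rough illustration of how such a risk-flagging model might work, here is a minimal sketch using logistic regression. The feature names, toy data, and threshold are assumptions made for the example; the article does not describe the Chicago team's actual model or data.

```python
# Minimal sketch of a record-based risk model (illustrative assumptions only).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one officer: [prior complaints, gun-use reports,
# suicide/domestic-violence calls responded to in the past year]
X = np.array([
    [0, 0, 2],
    [3, 1, 9],
    [1, 0, 4],
    [5, 2, 12],
])
# 1 = an "adverse event" occurred later, 0 = it did not (toy labels).
y = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Flag an officer whose predicted risk exceeds a chosen threshold.
new_officer = np.array([[2, 1, 8]])
risk = model.predict_proba(new_officer)[0, 1]
print(f"estimated risk: {risk:.2f}", "-> flag for review" if risk > 0.5 else "")
```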

“Right now the systems that claim to do this end up flagging the majority of officers,” says Rayid Ghani, who leads the Chicago team. “You can’t really intervene then.”

But so far, the new system has produced some pretty impressive results. It retrospectively flagged 48 of the 83 adverse incidents that happened between 2005 and now, 12 per cent more than Charlotte-Mecklenburg's existing early intervention system. Its false positive rate (officers flagged as high-risk by the system who did not go on to behave aggressively) was 32 per cent lower than that of the existing system.

Ghani’s team is currently testing the system with the Los Angeles County Sheriff’s Department and the Knoxville Police Department in Tennessee. They will present the results of their pilot system at the International Conference on Knowledge Discovery and Data Mining in San Francisco later this month.

So the system works, but exactly what should be done after an official has been flagged as a potential risk is still up for debate. The team is still working with the Charlotte-Mecklenburg police to find the best solution.

“The most appropriate intervention to prevent misconduct by an officer could be a training course, a discussion with a manager or changing their beat for a week,” Ghani adds.

Whatever the best course of action is, Ghani is confident that it should be implemented by humans, not a computer system.

Or adorable toy police cars, at least.
Image via pixabay

“I would not want any of those to be automated,” he says. “As long as there is a human in the middle starting a conversation with them, we’re reducing the chance for things to go wrong.”

Frank Pasquale, who studies the social impact of algorithms at the University of Maryland, is cautiously optimistic.

“In many walks of life I think this algorithmic ranking of workers has gone too far – it troubles me,” he says. “But in the context of the police, I think it could work.”

He believes that while such a system for tackling police misconduct is new, it’s likely that older systems created the problem in the first place.

“The people behind this are going to say it’s all new,” he says. “But it could be seen as an effort to correct an earlier algorithmic failure. A lot of people say that the reason you have so much contact between minorities and police is because the CompStat system was rewarding officers who got the most arrests.”

CompStat, short for Computer Statistics, is a police management and accountability system, used to implement the “broken windows” theory of policing — the idea that punishing minor infractions like public drinking and vandalism severely helps create an atmosphere of law and order, and will thus bring down serious crime. Many police researchers have suggested that the approach has led to the current dangerous tension between police and minority communities.

Pasquale warns that the University of Chicago system is not infallible. Just like any other system, it’s going to suffer from biased data — for example, a black police officer in a white community will likely get more complaints than a white colleague, he says, because the police can be subject to racism, too. Giving officers some channel to seek redress will be important.

“This can’t just be an automatic number cruncher.”

Physical exercise after learning could improve long-term memory, study finds

A new study found that physical exercise performed after learning improves memory and memory traces, but only if you take a break between learning and working out.

Photo by Michael L. Baird.

For the study, 72 participants were split into three groups of 24: no exercise, immediate exercise, and delayed exercise (performed four hours after the learning activity). Each group of 24 was further split in half to control for the time of day. As the researchers describe it:

“Seventy-two participants were randomly assigned to one of three age- and gender-matched groups; all learned 90 picture-location associations over a period of approximately 40 min,” the scientists said. “In each group, half of the participants started at 9 a.m. and half at 12 p.m. to control for time-of-day effects. For the delayed exercise (DE) group, the protocol was identical but with the order of the exercise and control session reversed; for the no exercise (NE) group, both sessions before and after the delay period were control sessions.”

When the first test was carried out, there was no difference between the three groups, but by the second test, differences became quite visible. There was no difference between the "no exercise" and "immediate exercise" groups, but the "delayed exercise" group had significantly better results. This suggests that doing physical exercise after memorizing can be helpful, but only if it's correctly timed. The sentiment was echoed by the research team:

“Our results suggest that appropriately timed physical exercise can improve long-term memory and highlight the potential of exercise as an intervention in educational and clinical settings,” the scientists said.

So the main takeaway is: do your learning, take a break, then work out. We still don’t know how long the break should be, but 4 hours is a good starting point. Future research will probably zoom in on the optimal pause time.

Journal Reference: Physical Exercise Performed Four Hours after Learning Improves Memory Retention and Increases Hippocampal Pattern Similarity during Retrieval. Current Biology, published online June 16, 2016; doi: 10.1016/j.cub.2016.04.071

Good quality breakfast linked to better performance in school

Cardiff University public health experts have discovered a powerful link between a pupil's breakfast quality and their performance at school. The study, the largest to date looking at how nutrition influences school performance, recorded the breakfast habits of 5,000 pupils aged 9 through 11, along with their results in the Key Stage 2 Teacher Assessments 6 to 18 months later. The pupils who ate breakfast, and who had better quality food at breakfast, achieved higher academic outcomes than the ones attending classes on an empty stomach.

Image via freestockphotos

“While breakfast consumption has been consistently associated with general health outcomes and acute measures of concentration and cognitive function, evidence regarding links to concrete educational outcomes has until now been unclear,” said Hannah Littlecott, lead author of the paper.

“This study therefore offers the strongest evidence yet of links between aspects of what pupils eat and how well they do at school, which has significant implications for education and public health policy – pertinent in light of rumours that free school meals may be scrapped following the November spending review.”

The pupils were asked to list all the food and drinks they had consumed over a 24-hour period, noting what they ate and at what time throughout the day, as well as what they had eaten on the morning of the report.

The data shows that, besides the quality and number of healthy items consumed at breakfast, other dietary habits, such as the ratio of sweets to fruits and vegetables each pupil had daily, also had a positive effect on educational performance. Eating unhealthy items like sweets and crisps for breakfast, which was reported by one in five children, had no positive impact on educational attainment.

“For schools, dedicating time and resource towards improving child health can be seen as an unwelcome diversion from their core business of educating pupils, in part due to pressures that place the focus on solely driving up educational attainment. But this resistance to delivery of health improvement interventions overlooks the clear synergy between health and education. Clearly, embedding health improvements into the core business of the school might also deliver educational improvements as well,” Hannah concluded.

Professor of Sociology and Social Policy Chris Bonell, from the University College London Institute of Education, welcomed the study’s findings.

“This study adds to a growing body of international evidence indicating that investing resources in effective interventions to improve young people’s health is also likely to improve their educational performance. This further emphasises the need for schools to focus on the health and education of their pupils as complementary, rather than as competing priorities. Many schools throughout the UK now offer their pupils a breakfast. Ensuring that those young people most in need benefit from these schemes may represent an important mechanism for boosting the educational performance of young people throughout the UK”.

Dr Graham Moore, who also co-authored the report, added:

“Most primary schools in Wales are now able to offer a free school breakfast, funded by Welsh Government. Our earlier papers from the trial of this scheme showed that it was effective in improving the quality of children’s breakfasts, although there is less clear evidence of its role in reducing breakfast skipping.”

“Linking our data to real world educational performance data has allowed us to provide robust evidence of a link between eating breakfast and doing well at school. There is therefore good reason to believe that where schools are able to find ways of encouraging those young people who don’t eat breakfast at home to eat a school breakfast, they will reap significant educational benefits.”


Naps are key to infant learning and memory consolidation

People spend more of their time asleep as babies than at any other point in their lives, but although this has been common knowledge for some time, we're only beginning to understand what role sleep plays during this key stage. University of Sheffield researchers claim that sleeping is key to learning and forming new memories for infants up to 12 months old. Babies who didn't nap were far less able to repeat what they had been taught only 24 hours earlier. The findings aren't only important for parents looking for advice on managing their babies, though. The researchers draw a parallel between life's dawn and twilight years, suggesting that more sleep is also important for memory consolidation in the elderly and helps keep neurodegenerative diseases like Alzheimer's at bay. Is napping good or bad? Read on.

Sleeping through our baby years

Trials were performed with 216 babies six to twelve months old. The infants were taught three new tasks involving playing with hand puppets, then divided into two equal groups. Half the babies took a nap within four hours of learning, while the rest either had no sleep or napped for fewer than 30 minutes. Remarkably, those who took a nap could repeat one-and-a-half tasks on average the following day, in stark contrast to a big zero for the babies who stayed wide awake for the whole afternoon.

“Those who sleep after learning learn well, those not sleeping don’t learn at all,” said Dr Jane Herbert, from the department of psychology at the University of Sheffield.

Previously, it was assumed that staying wide awake is best for learning, yet the findings contradict this. Instead, it seems like learning new things just before a nap is best for infant memory consolidation, according to the paper published in Proceedings of the National Academy of Sciences.


Dr Herbert added: “Parents get loads of advice, some saying fixed sleep, some flexible, these findings suggest some flexibility would be useful, but they don’t say what parents should do.”

Prof Derk-Jan Dijk, a sleep scientist at the University of Surrey, said: “It may be that sleep is much more important at some ages than others, but that remains to be firmly established.”

In other words, the findings show that sleeping after learning yields positive results; being sleepy during learning, however, does not necessarily help.