Tag Archives: Thinking

Our brains get hit by epiphanies before they let us in on them, researchers find

A team of researchers from the Ohio State University (OSU) has found that the human brain gives off telltale signs when it’s about to stumble onto an epiphany.

Image credits Gerd Altmann.

Using a computer strategy game and eye-tracking technology to record pupil dilation, the researchers have captured people in the act of having an epiphany.

A numbers game

The lion’s share of learning throughout life is reinforcement learning, the kind where you work over time to build a skill or cement knowledge. In other words, the hard way. Reinforcement Learning (RL) is widely documented in the scientific literature, and in practice it can be recognized by gradual improvement at a given task as someone gains knowledge. A child learning to read and write will become faster and more comfortable doing these activities the more effort they put in. A player learning a new game will shift strategies from round to round, getting better scores with practice.
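To make that contrast concrete, here’s a minimal sketch (mine, not the study’s) of what reinforcement learning looks like in code: a simulated player keeps running estimates of how well each option pays off, and its scores creep upward with practice rather than jumping all at once.

```python
# Minimal illustration (not from the study): a simulated learner improving
# gradually through trial and error, the hallmark of reinforcement learning.
import random

random.seed(1)
true_payoffs = [0.2, 0.5, 0.8]   # hidden average reward of three possible actions
estimates = [0.0, 0.0, 0.0]      # the learner's running estimate of each action's value
counts = [0, 0, 0]

def play_round(epsilon=0.1):
    """Pick an action (mostly greedily), observe a noisy reward, update the estimate."""
    if random.random() < epsilon:
        action = random.randrange(3)              # occasional exploration
    else:
        action = estimates.index(max(estimates))  # otherwise exploit the best guess so far
    reward = true_payoffs[action] + random.gauss(0, 0.1)
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]
    return reward

for block in range(5):
    avg = sum(play_round() for _ in range(50)) / 50
    print(f"rounds {block * 50 + 1:3d}-{(block + 1) * 50}: average reward {avg:.2f}")
# The average reward creeps upward block by block: gradual, not sudden, improvement.
```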

But the arguably cooler way to learn stuff is Epiphany Learning (EL). This happens when something just clicks in your mind and you suddenly get the problem. I’m a big fan of this eureka moment; it’s like drawing Monopoly’s Get Out of Jail Free card, only better, because it gets you out of real-life hard work. I’m guessing a lot of researchers are fans too and would be very interested in studying it and making it happen more often, but until now we didn’t really know how to observe people going through the experience, precisely because of its spontaneous nature.

Ian Krajbich, assistant professor of psychology and economics at OSU, working with economics doctoral student Wei Chen, has managed to record an epiphany in the making. The team asked 59 students to play a computer game against an unseen opponent. The game screen showed 11 numbers (0-10) arrayed in a circle, similar to an old rotary phone dial. The game’s rules were deliberately complicated (to give the players some hassle in figuring it out), but it basically boiled down to this: each player had to choose a number, and the lower pick would win. Zero was thus objectively the best number to pick in any situation, since it was the lowest.
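To see why zero can’t lose, here’s a quick toy simulation of the stripped-down rule described above. The actual game in the study had more elaborate trappings, so treat the payoff details here as an assumption made for illustration.

```python
# Toy version of the game described above: two players each pick a number
# from 0-10, and the lower pick wins the round (a tie counts as no win here).
import random

random.seed(0)

def round_winner(my_pick, opponent_pick):
    if my_pick == opponent_pick:
        return "tie"
    return "me" if my_pick < opponent_pick else "opponent"

# Playing 0 against random opponents: you can never lose, you can only tie
# when the opponent also picks 0. That is what makes 0 the objectively best pick.
results = [round_winner(0, random.randrange(11)) for _ in range(10_000)]
print("wins:", results.count("me"),
      "ties:", results.count("tie"),
      "losses:", results.count("opponent"))
```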

Each participant played 30 rounds of the game, always against a different opponent. To incentivize the participants to understand the game and win, each victory was rewarded with a small sum of money. An eye-tracking camera was placed under the computer screens so that the team could monitor what numbers the students were looking at as they decided which to pick.

To determine when a player realized the trick of the game (0 always won), i.e. had the epiphany, the team also gave them the option, after each round, of committing to playing a single number for the rest of the trial. They would receive more money for each win after doing so, to incentivize committing. After deciding whether or not to commit, participants were reminded what number they played, what number their opponent played, and the outcome of the game.

Eyes-on learning

Dilated Pupil.

Image credits Nan Palmero / Flickr.

By the end of the trial, 42% of players had an epiphany sometime during the game and committed to playing only zero. A further 37% committed to another number, and the remaining 20% didn’t commit at all. So how could the team tell whether a player had an epiphany or simply got lucky? Well, it’s how they played.

“There’s a sudden change in their behavior. They are choosing other numbers and then all of a sudden they switch to choosing only zero,” Krajbich explains. “That’s a hallmark of epiphany learning.”

But most excitingly, the team found they could predict when a player was about to stumble his or her way into an epiphany.

“We could see our study participants figuring out the solution through their eye movements as they considered their options,” Krajbich says. “We could predict they were about to have an epiphany before they even knew it was coming.”

“We don’t see the epiphany in their choice of numbers, but we see it in their eyes,” Chen added. “Their attention is drawn to zero and they start testing it more and more.”

It’s likely that the participants didn’t even realize they were about to have an epiphany, the authors note. The eye-tracking camera footage showed that they were looking more towards zero and other lower numbers as their brain was subconsciously crunching the game, even if they ended up picking other numbers.

The players who were struck with an epiphany also spent less time looking at the numbers their opponents picked, and more on the actual game result, win or lose. The researchers say this suggests that some players were smartening up to the fact that it was their own choice of a lower number that determined the outcome. Another key trait of EL, its sudden nature, was also evident when the team analyzed how long players spent looking at the commitment screen. Players exhibiting EL didn’t build up confidence over time, which would have shown up as them eyeing the commit option more and more as the trial progressed; instead, they suddenly went for the commitment option once they understood the game.

Levels of pupil dilation also showed that EL-ers were reacting to the game differently than their counterparts. They displayed significantly more pupil dilation while looking at the results screen before committing; afterward, dilation stayed at normal levels, suggesting the epiphany had already come to pass.

“When your pupil dilates, we see that as evidence that you’re paying close attention and learning,” Krajbich said. “They were showing signs of learning before they made the commitment to zero. We didn’t see the same results for others.”

So, what can you do to help your brain help you? Trust your gut and your reasoning, and don’t blindly follow others.

“One thing we can take away from this research is that it is better to think about a problem than to simply follow others,” Krajbich concludes. “Those who paid more attention to their opponents tended to learn the wrong lesson.”

The paper “Computational modeling of epiphany learning” has been published in the journal PNAS.

Consciousness comes in “slices” roughly 400 milliseconds long

A new model proposed by EPFL scientists tries to explain how our brain processes information and then makes us consciously aware of it. According to their findings, consciousness forms as a series of short bursts of up to 400 milliseconds, with gaps of background, unconscious information processing in between.

Image via pixabay user johnhain

Subjectively, consciousness seems to be an uninterrupted stream of thought and sensation giving us a smooth image of the world around us. As far as we can tell, sensory information is continuously recorded and fed into our perception; we then process it and become aware of it as it happens. We can clearly see the movement of objects, we hear sounds from start to end without pause, and so on.

But have you ever found yourself reacting to something before actually becoming aware of the need to react? Let’s say you’re running and you trip, but you adjust your movements to prevent a fall almost automatically. Or you’re in traffic, the car in front of you suddenly stops, and you slam on the brakes instinctively, even before you register the danger. If so, you’ve most likely said “thanks, reflexes” and left it at that.

This, however, hints at processes that analyze data and elaborate responses without our conscious input, feeding a debate in the scientific community that goes back several centuries. Why does this automated response form? Is it just an extra safety measure, or is it because your consciousness isn’t always available when push comes to shove? In other words, is consciousness constant and uninterrupted, or more akin to a movie reel, a series of still shots?

Michael Herzog at EPFL and Frank Scharnowski at the University of Zurich now put forward a new model of how the brain processes unconscious information, suggesting that consciousness arises only in intervals up to 400 milliseconds, with no consciousness in between. By reviewing data from previously published psychological and behavioral experiments on the nature of consciousness — such as showing a participant several images in rapid succession and asking them to distinguish between them while monitoring their brain activity — they have developed a new conceptual framework of how it functions.

They propose a two-stage processing of information. During the first, unconscious stage, our brain processes specific features of objects, such as color or shape, and analyzes them with very high time resolution. But crucially for the proposed model, there is no actual perception of time during this phase; even time-dependent features such as duration or changes in color are not perceived as such. Time simply becomes a value assigned to each state, just like color or shape. In essence, during this stage your brain gathers and processes data, then puts it into a spreadsheet (a brainxcell, if you will), and “time” becomes just another value in a column.

Then comes the conscious stage: after unconscious processing is completed the brain renders all the features into our conscious thought. This produces the final picture, making us aware of the stimulus. Processing a stimulus to conscious perception can take up to 400 milliseconds, a considerable delay from a physiological point of view. The team focused their study on visual perception alone, and the delay might vary between the senses.
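To make the spreadsheet metaphor a bit more concrete, here is a toy sketch of the two-stage idea. It is purely illustrative, not the authors’ computational model; the record fields and the 400-millisecond window are stand-ins.

```python
# Toy illustration of the two-stage idea (a sketch, not the authors' model).
# Stage 1 fills a buffer of records in which time is just another stored value;
# stage 2 renders everything in a ~400 ms window into one conscious "slice".
from dataclasses import dataclass

@dataclass
class Snapshot:            # one row of the "brainxcell"
    color: str
    shape: str
    time_ms: int           # time stored as a value, not experienced as a flow

def render_slice(buffer, window_ms=400):
    """Conscious stage: integrate everything gathered within one processing window."""
    window = [s for s in buffer if s.time_ms <= window_ms]
    return {
        "colors": sorted({s.color for s in window}),
        "shapes": sorted({s.shape for s in window}),
        "window_ms": window_ms,
    }

buffer = [Snapshot("red", "circle", 120),
          Snapshot("red", "square", 260),
          Snapshot("blue", "square", 390)]
print(render_slice(buffer))   # one percept summarizing ~400 ms of unconscious records
```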

“The reason is that the brain wants to give you the best, clearest information it can, and this demands a substantial amount of time,” explains Michael Herzog. “There is no advantage in making you aware of its unconscious processing, because that would be immensely confusing.”

This is the first time a two-stage model has been proposed for how consciousness arises, and it may offer a more refined picture than the purely continuous or discrete models. It also provides useful insight into the way our brain processes time and relates it to our perception of the world.

The full paper, titled “Time Slices: What Is the Duration of a Percept?” has been published online in the journal PLOS Biology and can be read here.


Creative thinking requires more checks and balances than you’d think

Creative thinking requires the simultaneous activation of two distinct networks in the brain, the associative and normative networks. Higher connectivity between these completely different systems of your brain leads to new, original and useful ideas, University of Haifa research concludes.

Creativity is our ability to think in new and original ways to solve problems. But not every new idea can be called “creative”; if it isn’t actually applicable, an idea is simply considered unreasonable. Looking into how our brain can turn out both of these types of ideas, Dr. Naama Mayseless concludes that “creative thinking apparently requires ‘checks and balances’.”

Image via sciencedaily

The team hypothesized that for a new idea or concept to be produced, two different, and perhaps contradictory, brain networks must work together. To verify this, they organized two tests: in the first, respondents were given half a minute to come up with a new, original, and unexpected idea for the use of various everyday objects. Answers that were provided infrequently received a high score for originality, while frequently given ones scored low.
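The exact scoring scheme isn’t spelled out here, so the simple inverse-frequency rule below is an assumption, but it captures the idea of rewarding rare answers:

```python
# Sketch of the scoring idea described above. The actual scheme used by the
# researchers is not given, so this inverse-frequency rule is an assumption:
# answers given by few participants score high on originality, common ones low.
from collections import Counter

answers_for_brick = ["paperweight", "doorstop", "doorstop", "build a wall",
                     "build a wall", "build a wall", "grind into pigment"]
counts = Counter(answers_for_brick)
total = len(answers_for_brick)

for answer, n in counts.items():
    originality = 1 - n / total          # rarer answer -> higher score
    print(f"{answer:20s} given {n}x  originality {originality:.2f}")
```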

For the second part of the test, the volunteers were asked to give the most characteristic and commonly accepted description of the same objects. Just as in the first test, they had half a minute to complete the task.

During these tests, the subjects’ brain activity was recorded with an fMRI machine to capture how the brain behaved while working on the answers.

The researchers found increased brain activity in an “associative” region among participants whose originality was high. This region, which includes the anterior medial brain areas, mainly works in the background when a person is not concentrating, similar to daydreaming.

But this region doesn’t operate alone. For an answer to be original (i.e., not unreasonable), another brain network had to activate in collaboration with the associative region: the administrative control region. The authors describe it as a more “conservative” part of the brain, one that handles social norms and rules. The researchers also found that the stronger the connection between these two regions when they activated, the greater the originality of the answer.

“On the one hand, there is surely a need for a region that tosses out innovative ideas, but on the other hand there is also the need for one that will know to evaluate how applicable and reasonable these ideas are. The ability of the brain to operate these two regions in parallel is what results in creativity. It is possible that the most sublime creations of humanity were produced by people who had an especially strong connection between the two regions,” the researchers concluded.

This research was conducted as part of Dr. Naama Mayseless’ doctoral dissertation, and was supervised by Prof. Simone Shamay-Tsoory from the Department of Psychology at the University of Haifa, in collaboration with Dr. Ayelet Eran from the Rambam Medical Center.

Having access to the Internet changes the way you think

The Internet is a wonderful and wonderfully powerful place. Just think about it: if your parents needed an article to show their college friends that nah-I’m-totally-right-and-you’re-not (it’s a big part of college life), they had to go looking in a library. You have access to almost all of human knowledge with just a few keystrokes.

Or a few minutes’ walk.
Image via wikimedia

But it turns out that having such pervasive access to information may actually make us rely less on the knowledge we already have, altering how we think, University of Waterloo Professor of Psychology Evan F. Risko found in a recent study published in the journal Consciousness and Cognition.

For the study, 100 participants were asked a series of general-knowledge questions (such as naming the capital of France). For the first half of the test, participants didn’t have access to the Internet and simply indicated whether they knew the answer or not. In the second half, they had Internet access and were required to look up the answers they reported they didn’t know.

In the end, the team found that when the subjects had access to the web they were 5 percent more likely to report that they didn’t know an answer, and in some contexts, they reported feeling as though they knew less than when they had no access.

“With the ubiquity of the Internet, we are almost constantly connected to large amounts of information. And when that data is within reach, people seem less likely to rely on their own knowledge,” said Professor Risko, Canada Research Chair in Embodied and Embedded Cognition.

The team believes that having Internet access may make people less willing to claim they know something and risk being wrong. Another theory they considered is that participants were more likely to say they didn’t know the answer because looking it up online gave them an opportunity to confirm their knowledge or satiate their curiosity, both highly rewarding processes.

“Our results suggest that access to the Internet affects the decisions we make about what we know and don’t know,” said Risko. “We hope this research contributes to our growing understanding of how easy access to massive amounts of information can influence our thinking and behaviour.”

Professor Risko says he plans to further the research in this area by investigating the factors that lead to individuals’ reduced willingness to respond when they have access to the web.

Many parts, but the same mold – how the brain forms new thoughts

A recent study, described in the Sept. 17 edition of the Proceedings of the National Academy of Sciences and co-authored by postdoctoral fellow Steven Frankland and Professor of Psychology Joshua Greene, takes a look at exactly how the human brain creates new thoughts. According to the researchers:

The brain forms new thoughts using two adjacent brain regions that form the cornerstone of the process, performing a sort of conceptual algebra similar to the way silicon computers represent variables and their changing values.

Image via firstconcepts

“One of the big mysteries of human cognition is how the brain takes ideas and puts them together in new ways to form new thoughts,” said Frankland, the lead author of the study. “Most people can understand ‘Joe Biden beat Vladimir Putin at Scrabble’ even though they’ve never thought about that situation, because, as long as you know who Putin is, who Biden is, what Scrabble is, and what it means to win, you’re able to put these concepts together to understand the meaning of the sentence. That’s a basic, but remarkable, cognitive ability.”

Simple enough, but how are such thoughts put together? One theory holds that the brain does this by representing conceptual variables, answers to recurring questions of meaning such as “What was done?”, “Who did it?”, and “To whom was it done?”. This way, a new idea or thought can be built around the information our brain receives. So “Biden beats Putin at Scrabble” gets processed by making “beating” the value of the action variable, “Biden” the value of the agent variable, and “Putin” the value of the patient variable: what, by whom, to whom? Frankland and Greene’s pioneering work is the first to point to specific regions of the brain that encode this kind of mental syntax.
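Here is a minimal sketch of that “conceptual algebra”: each thought is represented by binding values to a small, reusable set of role variables. The structure below is purely illustrative, not a claim about the brain’s actual encoding.

```python
# Minimal sketch of the "conceptual algebra" idea: a new thought is built by
# binding values to a small set of recurring role variables. Illustrative only.
from dataclasses import dataclass

@dataclass
class Proposition:
    action: str   # what was done?
    agent: str    # who did it?
    patient: str  # to whom was it done?

# The same three slots are reused to compose thoughts never encountered before.
p1 = Proposition(action="beat at Scrabble", agent="Joe Biden", patient="Vladimir Putin")
p2 = Proposition(action="chased", agent="the dog", patient="the boy")
p3 = Proposition(action="chased", agent="the boy", patient="the dog")  # same parts, different meaning

for p in (p1, p2, p3):
    print(f"{p.agent} -> {p.action} -> {p.patient}")
```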

“This has been a central theoretical discussion in cognitive science for a long time, and although it has seemed like a pretty good bet that the brain works this way, there’s been little direct empirical evidence for it,” Frankland said.

To identify the regions, Frankland and Greene used functional magnetic resonance imaging (fMRI) to scan students’ brains as they read a series of simple sentences such as “The dog chased the man” and “The man chased the dog.” They crunched the data and came up with algorithms that they used to identify patterns of brain activity that corresponded to “dog” or “boy.”
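For a flavor of what “identifying patterns of brain activity” means in practice, here is a toy decoder run on simulated data. This is emphatically not the authors’ analysis pipeline; the fake “voxel” vectors and the choice of scikit-learn’s LogisticRegression are assumptions made purely for illustration.

```python
# Toy stand-in for the kind of pattern decoding described above (simulated data,
# not the authors' pipeline): learn which noun a "voxel" pattern corresponds to,
# then decode new trials from the activity pattern alone.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 50

# Pretend each noun evokes a characteristic activity pattern in a region.
dog_pattern = rng.normal(size=n_voxels)
boy_pattern = rng.normal(size=n_voxels)

def simulate_trials(pattern, n=40, noise=1.0):
    """Generate noisy repetitions of a noun's underlying activity pattern."""
    return pattern + rng.normal(scale=noise, size=(n, n_voxels))

X = np.vstack([simulate_trials(dog_pattern), simulate_trials(boy_pattern)])
y = np.array(["dog"] * 40 + ["boy"] * 40)

decoder = LogisticRegression(max_iter=1000).fit(X, y)

# New trials: the decoder labels them from the activity patterns alone.
new_trials = np.vstack([simulate_trials(dog_pattern, n=5), simulate_trials(boy_pattern, n=5)])
print(decoder.predict(new_trials))
```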

“What we found is there are two regions in the left superior temporal lobe, one which is situated more toward the center of the head, that carries information about the agent, the one doing an action,” Frankland said. “An immediately adjacent region, located closer to the ear, carries information about the patient, or who the action was done to.”

Importantly, Frankland added, the brain appears to reuse the same patterns across multiple sentences, implying that these patterns function like symbols.

“So we might say ‘the dog chased the boy,’ or ‘the dog scratched the boy,’ but if we use some new verb the algorithms can still recognize the ‘dog’ pattern as the agent,” he said. “That’s important because it suggests these symbols are used over and over again to compose new thoughts. And, moreover, we find that the structure of the thought is mapped onto the structure of the brain in a systematic way.”

And it’s this ability to use and reuse concepts to formulate new thoughts that makes our thought processes unique, immensely powerful, and adaptable.

“This paper is about language,” Greene said. “But we think it’s about more than that. There’s a more general mystery about how human thinking works.

“What makes human thinking so powerful is that we have this library of concepts that we can use to formulate an effectively infinite number of thoughts,” he continued. “Humans can engage in complicated behaviors that, for any other creature on Earth, would require an enormous amount of training. Humans can read or hear a string of concepts and immediately put those concepts together to form some new idea.”

Unlike models of perception, which put more complex representations at the top of a processing hierarchy, Frankland and Greene’s study supports a model of higher cognition that relies on the dynamic combination of conceptual building blocks to formulate thoughts.

“You can’t have a set of neurons that are there just waiting for someone to say ‘Joe Biden beat Vladimir Putin at Scrabble,’ ” Greene said. “That means there has to be some other system for forming meanings on the fly, and it has to be incredibly flexible, incredibly quick and incredibly precise.” He added, “This is an essential feature of human intelligence that we’re just beginning to understand.”