
Pill for your thoughts: what are nootropics?

Nootropics are drugs that have a stimulating effect on our minds and brains. They’re meant to improve our cognitive abilities in various ways. On the face of it, that sounds awesome; who doesn’t want to get smarter by taking a pill? But many drugs touted as having a nootropic effect have no evidence to back that claim. Some are complete swindles.

Image credits Lucio Alfonsi.

All of this doesn’t help give nootropics, which are a genuine category of drugs, a good name — despite the undeniable appeal of being referred to as ‘cognitive enhancers’.

Today, we’re going to take a look at what nootropics are, talk about a few that we know are genuine, their effects, and some of the controversy around this subject.

So what are they?

The term was coined in 1972 by Romanian-born chemist and psychologist Corneliu Giurgea. At the time, he stated that to qualify as a nootropic, a compound should do the following:

  • Improve learning and memory.
  • Make learned behaviors or memories more resilient in the face of factors or conditions that disrupt them, such as hypoxia.
  • Protect the brain against chemical or physical injuries.
  • Increase the efficacy of the tonic cortical/subcortical control mechanisms.
  • Have extremely low toxicity, produce few (ideally no) side-effects, and not induce the same effects as other psychotropic drugs (i.e. not get you high).

All of these are very useful pointers. However, I’ve found that the best way to explain what a family of drugs is, is to point to examples people already have direct experience with. We’re lucky, then, since virtually every one of us uses nootropics. Caffeine, nicotine, and the L-theanine found in various types of tea are some of the most-used nootropics in the world. Caffeine is the single most widely-used one. Besides coffee, caffeine is also naturally present in chocolate and tea. Many processed items such as food supplements, energy drinks, or sodas also contain caffeine.

All of these compounds influence our cognitive abilities in one form or another. Caffeine is notorious for helping pick us up when we’re feeling sleepy. But it also has a direct influence on the levels of various neurotransmitters in the brain. Past research has noted that this leads to improved short-term memory performance and learning ability. These effects were not caused by caffeine’s stimulating action but occurred alongside it. According to Stephanie M. Sherman et al., 2016:

“Participants who drank caffeinated coffee were significantly more awake by the end of the experiment, while participants who drank decaffeinated coffee did not experience the same increase in perceived wakefulness”, it notes, adding that caffeine also “increased explicit memory performance for college-aged adults during early morning hours. Young adults who drank caffeinated coffee showed a 30% benefit in cued recall performance compared to the decaffeinated coffee drinkers, and this effect was independent of the perceived positive effect of the caffeine.”

Nicotine, an active ingredient in tobacco plants, also seems to have nootropic potential. D M Warburton, 1992, reports on a range of effects nicotine has on the (healthy) brain, including improvements in attention “in a wide variety of tasks” and improvements in short- and long-term memory. It further explains that nicotine can help improve attention in “patients with probable Alzheimer’s Disease”. Some of these effects were attributed to the direct effect nicotine has on attention, while others “seem to be the result of improved consolidation as shown by post-trial dosing” — meaning the compound likely also helps strengthen memories after they are formed.

Please do keep in mind that I am not, in any way, encouraging you to pick up smoking. There isn’t any scenario under which I’d estimate that the potential nootropic effect of nicotine outweighs the harm posed by smoking. There are other ways to introduce nicotine into your system if you’re really keen on it.

L-theanine is very similar in structure to the neurotransmitter glutamate — which has the distinction of being the most abundant neurotransmitter in the human brain. Glutamate is our main excitatory neurotransmitter, and a chemical precursor for our main inhibitory neurotransmitter, as well. To keep things short, glutamate is an important player in our brains.

Because of how similar they are chemically, L-theanine can bind to the same sites as glutamate, although to a much lower extent. We’re not quite sure what effects L-theanine has on the brain exactly, but there is some evidence that it can reduce acute stress and anxiety in stressful situations by dampening activation in the sympathetic nervous system (Kenta Kimura et al., 2006).

How they work

Coffee and tea are some of the world’s most popular sources of natural nootropics. Image via Pixabay.

A wide range of chemically distinct substances can have nootropic effects. As such, it’s perhaps impossible to establish a single, clear mechanism through which they act. But in very broad lines, their end effect is that of boosting one or several mental functions such as memory, creativity, motivation, and attention.

The nootropic effects of caffeine come from it interacting with and boosting activity in brain areas involved in the processing and formation of short-term memories. It does this, as we’ve seen, by tweaking neurotransmitter levels in the brain. Others, like nicotine and L-theanine, also influence neurotransmitter levels, or bind to receptor sites themselves, thus influencing how our minds and brains function. Others still influence our mental capacity through more mechanical means. As noted by Noor Azuin Suliman et al., 2016:

“Nootropics act as a vasodilator against the small arteries and veins in the brain. Introduction of natural nootropics in the system will increase the blood circulation to the brain and at the same time provide the important nutrient and increase energy and oxygen flow to the brain”. Furthermore, “the effect of natural nootropics is also shown to reduce the inflammation occurrence in the brain […] will protect the brain from toxins and [minimize] the effects of brain aging. Effects of natural nootropics in improving brain function are also contributed through the stimulation of the new neuron cell. [Through this] the activity of the brain is increased, enhancing the thinking and memory abilities, thus increasing neuroplasticity”.

The brain is a very complicated mechanism, one whose inner workings we’re only beginning to truly understand. Since there are so many moving parts involved in its functions, there are many different ways to tweak its abilities. Way too many to go through them all in a single sitting. One thing to keep in mind here is that nootropics can be both natural and synthetic in nature. In general — and this is a hard ‘in general’ — we understand the working mechanisms of natural nootropics a bit more than those of synthetic nootropics.

Still, even with caffeine, we start seeing one of the main drawbacks of nootropics — most of which remain poorly understood. The word ‘nootropic’ is a compound of two Ancient Greek roots and roughly translates to ‘mind-turning’ or ‘mind-bending’. But, just as tuning a guitar’s strings alters what chords it can play overall, nootropics affect our minds and brains in their entirety. They often act on multiple systems in the body at the same time to produce these effects.

We separate nootropics by their effects into three classes. The first is the eugeroics, which promote wakefulness and alertness. One prominent eugeroic is modafinil, currently used to treat narcolepsy, obstructive sleep apnea, and shift work sleep disorder. It’s also being investigated as a possible avenue for the treatment of stimulant drug withdrawal.

The second class is the ADHD medication family, which includes methylphenidate, lisdexamfetamine, and dexamfetamine. Ritalin, a brand of methylphenidate, is a drug in this category. It was originally used to treat chronic fatigue, depression, and depression-associated psychosis. Today, Ritalin is the most commonly prescribed medication for ADHD, as it addresses the restlessness, impulsive behaviour, and inattentiveness associated with the disorder.

Finally, we have nootropic supplements. These include certain B vitamins, fish oil, and herbal supplements such as extracts of Ginkgo biloba and Bacopa monnieri. Supplements tend to be more contested than the rest, with the plant extracts being the most contested overall. One thing to keep in mind here is that the FDA doesn’t regulate nootropic supplements the same way it does prescription drugs, so buyer beware. Another is that there is little reliable evidence that these supplements actually help boost memory or cognitive performance beyond a placebo effect. A review of the literature on the efficacy of supplements (Scott C. Forbes et al., 2015) concludes that:

“Omega-3 fatty acids, B vitamins, and vitamin E supplementation did not affect cognition in non-demented middle-aged and older adults. Other nutritional interventions require further evaluation before their use can be advocated for the prevention of age-associated cognitive decline and dementia”.

One final point here is that the nutrients these supplements provide — if they work — shouldn’t produce meaningful effects unless you’ve been taking them for a while. Dr. David Hogan, co-author of that review and a professor of medicine at the University of Calgary in Canada, told Time.com that age also plays a factor, and that such nutrients may not be of much help if taken “beyond the crucial period” of brain development.

No side effects?

“Caffeine has been consumed since ancient times due to its beneficial effects on attention, psychomotor function, and memory,” notes Florian Koppelstaetter et al., 2010. “Caffeine exerts its action mainly through an antagonism of cerebral adenosine receptors, although there are important secondary effects on other neurotransmitter systems”.

Adenosine receptors in the brain play a part in a number of different processes, but a few that are important to our discussion right now are: regulating myocardial (heart) activity, controlling inflammation responses in the body, and keeping tabs on important neurotransmitters in the brain such as dopamine.

Caffeine helps make us more alert by impairing the function of these receptors; one of the things that happens when adenosine binds to these sites is that we start feeling drowsy, even sleepy. But our brains come equipped with these receptors for a very important reason — they keep us alive and healthy. Messing with their activity can lead to some very dangerous situations. Caffeine intake, for example, increases blood pressure and heart rate, at least in part by interfering with these adenosine receptors. Heavy caffeine intake has been linked to tachycardia (an abnormally fast heart rate) in certain cases.

The risk posed by nootropics comes down to their very nature. By design, these are drugs meant to tweak the way our brains work. But our brains are so essential to keeping our bodies alive that any wrong tweak can lead to a lot of problems. There is some evidence that the use of certain nootropics comes at “a neuronal, as well as ethical, cost”. Revving our brains ever harder could mean they wear out more quickly.

“Altering glutamate function via the use of psychostimulants may impair behavioral flexibility, leading to the development and/or potentiation of addictive behaviors”, Kimberly R. Urban and Wen-Jun Gao, 2014, report. “Healthy individuals run the risk of pushing themselves beyond optimal levels into hyperdopaminergic and hypernoradrenergic states, thus vitiating the very behaviors they are striving to improve. Finally, recent studies have begun to highlight potential damaging effects of stimulant exposure in healthy juveniles.”

“This review explains how the main classes of cognitive enhancing drugs affect the learning and memory circuits, and highlights the potential risks and concerns in healthy individuals, particularly juveniles and adolescents. We emphasize the performance enhancement at the potential cost of brain plasticity that is associated with the neural ramifications of nootropic drugs in the healthy developing brain”.

This leads us neatly to:

The controversy

The ethical implications of using nootropics in school

Although nootropics are still poorly understood, they have an undeniable allure. And there’s no shortage of people willing to capitalize on that demand.

There are valid uses for nootropics, and there is research to support these uses — ADHD medication is a prime example. But there is also a lot of false advertising, inflated claims, false labeling, and general snake-oilery going on in the field of nootropics.

We live in a world where cognitive ability and academic achievement have a large impact on our livelihoods and the quality of our lives. As such, there is a lot of incentive for us to boost these abilities, and nootropics seem to offer an easy way to do so. So, naturally, there’s a lot of incentive for people to try and sell them to you. There is a growing trend of nootropic use among students trying to make it through the curriculum — or to get an edge over their peers — at universities around the world. Factor in that we still have a poor understanding of nootropics, and a poorer understanding still of their side effects and long-term effects on our brains, and the trend becomes worrying.

The Food and Drug Administration and the Federal Trade Commission have sent multiple warnings to manufacturers and distributors of nootropic drugs and supplements over the years, over misleading marketing, the manufacture and distribution of unapproved drugs with no proven safety or efficacy at the marketed doses, and even the use of illegal substances.

In closing, nootropics are a valid and real class of drugs. While there is still much we don’t yet understand about them, we know that they exist and that they can work the way we envision, as long as we use them responsibly. In many ways, however, they suffer from their own fame. Everybody wants a pill that would make them smarter, sharper, more focused. That in itself isn’t damnable. The trouble starts when we’re willing to overlook potential risks, or even willingly ignore known side-effects, in chasing that goal.

Your first memory is probably older than you think

What’s your earliest memory? Statistically speaking, it’s likely from when you were two-and-a-half years old, according to a new study.

Image credits Ryan McGuire.

Until now, it was believed that people generally form their earliest long-term memories around the age of three-and-a-half. This initial “childhood amnesia” is, to the best of our knowledge, caused by an overload of the hippocampus — an area heavily involved in the formation and retention of long-term memories — in the infant brain.

However, new research is pushing that timeline back by a whole year — it’s just that we don’t usually realize we have these memories, for the most part.

There, but fuzzy

“When one’s earliest memory occurs, it is a moving target rather than being a single static memory,” explains lead author and childhood amnesia expert Dr. Carole Peterson, from the Memorial University of Newfoundland.

“Thus, what many people provide when asked for their earliest memory is not a boundary or watershed beginning, before which there are no memories. Rather, there seems to be a pool of potential memories from which both adults and children sample. And, we believe people remember a lot from age two that they don’t realize they do.”

Dr. Peterson explains that remembering early memories is like “priming a pump”: asking an individual to remember their earliest memory, and then asking them for more, generally allows them to recall even earlier events than initially offered, even things that happened a year before their ‘first’ memory. Secondly, she adds, the team has documented a tendency among people to “systematically misdate” their memories, typically by believing they were older during certain events than they really were.

For this study, she reviewed 10 of her research articles on childhood amnesia along with both published and unpublished data from her lab gathered since 1999. All in all, this included 992 participants, with the memories of 697 of them also being compared to the recollections of their parents. This dataset heavily suggests that people tend to overestimate how old they were at the time of their first memories — as confirmed by their parents.

This isn’t to say that our memories aren’t reliable. Peterson did find evidence that, for example, children interviewed after two and eight years had passed since their earliest memory were still able to recall the events reliably, but tended to give a later age when they occurred in subsequent interviews. This, she believes, comes down to a phenomenon called ‘telescoping’.

“Eight years later many believed they were a full year older. So, the children, as they age, keep moving how old they thought they were at the time of those early memories,” says Dr. Peterson. “When you look at things that happened long ago, it’s like looking through a lens. The more remote a memory is, the telescoping effect makes you see it as closer. It turns out they move their earliest memory forward a year to about three and a half years of age. But we found that when the child or adult is remembering events from age four and up, this doesn’t happen.”

By comparing the information provided by participants with that provided by their parents, Dr. Peterson found that people likely remember much earlier into their childhood than they think they do. Those memories are also accessible, generally, with a little help. “When you look at one study, sometimes things don’t become clear, but when you start putting together study after study and they all come up with the same conclusions, it becomes pretty convincing,” she adds, admitting that this lack of hard data is quite a serious limitation on her work.

According to her, all research in this field suffers from the same lack of hard, verifiable data. Going forward, she recommends that research into childhood amnesia rely on verifiable proof — either in the shape of independently confirmed memories or documented external dates against which memories can be compared — as this would prevent errors from both participants and their parents, thus improving the reliability of the results.

The paper “What is your earliest memory? It depends” has been published in the journal Memory.


Ants handle social isolation about as well as humans do — poorly

If you’re having a hard time coping with the isolation this pandemic has imposed on us, find solace in the fact that ants, too, would be just as stressed as you in this situation.

Close-up of an ant carrying something, probably a crumb of bread.
Image via Pixabay.

A new paper reports that ants react to social isolation in a similar way to humans and other social species. The most notable changes identified in ants isolated from their groups involve shifts in their social and hygiene behaviors, the team explains. The expression of genes governing the immune and stress response in these ants’ brains was also downregulated, they add.

The burden of loneliness

“[These observed changes] make the immune system less efficient, a phenomenon that is also apparent in socially isolating humans — notably at present during the COVID-19 crisis,” said Professor Susanne Foitzik from Johannes Gutenberg University Mainz (JGU), lead author of the study. The study on a species of ant native to Germany has recently been published in Molecular Ecology.

I don’t think I need to remind you all of this, but humans find social isolation to be a very stressful experience. It can go as far as having a significant and negative impact on our physical health and general well-being. Loneliness, depression, and anxiety can set in quite easily in isolated individuals; they also develop addictions more easily, and their immune systems (along with their overall health) take a hit.

Still, we know much less about how social insects respond to isolation than we do about social animals, including humans. Ants are extremely social insects, living their whole lives in a dense colony and depending on their nestmates to survive (just like everyone else there). Their lives are so deeply steeped in the social fabric of their colony that worker ants don’t even reproduce, instead caring for the nest and the queen, who does all the baby-making. This would be an unthinkable arrangement for most other species on Earth.

The team worked with Temnothorax nylanderi, a species endemic to Western Europe. This species lives in cavities formed in fallen plant matter such as acorns or sticks, with colonies usually containing a few dozen workers. From 14 colonies, the researchers collected young workers involved in caring for the brood and kept them in isolation for varying amounts of time. The shortest was one hour, and the longest, 28 days.

After the isolation period, these ants were released back into their colonies. The team explains that these individuals showed less interest in their adult colony mates and spent less time grooming themselves, but spent more time with the brood.

“This reduction in hygienic behavior may make the ants more susceptible to parasites, but it is also a feature typical of social deprivation in other social organisms,” explained Professor Susanne Foitzik.

Gene activity was also impacted. The authors report that a constellation of genes involved in governing the immune system and stress response of these ants was “downregulated”, i.e. less active. This finding is consistent with previous literature showing a weakened immune system after isolation in other social species.

“Our study shows that ants are as affected by isolation as social mammals are and suggests a general link between social well-being, stress tolerance, and immunocompetence in social animals,” concludes Foitzik.

The paper “Social isolation causes downregulation of immune and stress response genes and behavioral changes in a social insect” has been published in the journal Molecular Ecology.

People subconsciously believe that the world is ‘fair’ and that those who suffer will be rewarded later on

A new study published in the British Journal of Social Psychology reports on one of our more curious subconscious mechanisms. People expect their suffering to mean they have a greater chance of getting a reward in the future, the team explains.

Image credits Christine Schmidt.

We like to think of ourselves as fully factual, logical people, but that’s not always the case. Our brains still rely on ancient mechanisms to get us through the day, week, month, or year — and those tools don’t always follow cold facts. There’s nothing wrong or shameful about that, but it does pay to know ourselves (and what makes us tick) better. A new study looks at one such mechanism and delves into its roots.

I suffer, therefore I am (deserving)

The team reports that there are two main theories for why people believe suffering now means a greater reward later on. The first is known as the “just-world maintenance” hypothesis, which posits that people often believe we’re living in a just world where everyone gets what they deserve. In this light, unnecessary suffering would need to be compensated later on to restore the balance and keep the world just.

The second one is known as the “virtuous suffering” explanation, which holds that experiencing suffering can improve our moral character. This belief has been hinted at by previous research which found that committing self-punishment can make someone appear more moral. In essence, this explanation holds that suffering makes people more moral, and moral behavior leads to greater rewards in the future.

What the authors set out to determine was which one of these explanations has more merit. They started by presenting the participants with a vignette about a protagonist who had a cleft lip. Participants were either told the protagonist wasn’t suffering (the ‘low suffering’ condition) or that he was suffering greatly because of it (the ‘high suffering’ condition). Next, the participants were told this protagonist had been entered into a draw where he could win free medical treatment for the condition, and were asked to rate the likelihood that he would win.

Based on the results, the team says that the virtuous suffering explanation doesn’t really have much support. However, they report that when the protagonist was shown to experience more suffering, participants perceived them as more ‘deserving’ of future rewards — which would support the just-world maintenance explanation.

After this, the team wanted to see how participants would react if the protagonist’s suffering was presented as being self-inflicted. Such suffering, the team believed, would be perceived as deserved, and thus likely wouldn’t threaten participants’ belief in a just world. To test this, they gave participants a vignette about a student majoring in French who had recently had a limb amputated. The student applied to study abroad in France in a program that was nearly full, where the few vacant spots were to be awarded by random draw.

Depending on which group each participant was assigned to, they either read that the procedure was caused by the actions of another individual (‘other condition’), by his own decision (‘self-condition’), or as the result of random chance (‘stochastic condition’). They were then asked to rate the likelihood that the student would win the draw.

People rated the student as more likely to win if he was suffering (compared to the control condition where he wasn’t). However, they rated his likelihood of winning a spot much lower if his own actions led him to the amputation. In fact, people rated his chances in this scenario as low as they did in the control condition.

All in all, the authors write that their results support the “just-world maintenance” explanation, meaning that most people intuitively believe that the world is just and ‘acts’ fairly. They base this on the observation that unjust suffering threatens this belief far more than deserved suffering does, which means people would expect needless suffering to be followed by a reward as a means of making the world just again — their results, they note, align with this.

The paper “Why and when suffering increases the perceived likelihood of fortuitous rewards” has been published in the British Journal of Social Psychology.

What stress is, how it affects us, and how to handle it

Stress has many definitions, but it most usually refers to feeling overwhelmed or unable to cope with pressures in our lives. Rest assured, stress is a normal part of being alive. We all feel it to some degree in these scary, uncertain times.

Image via Pixabay.

Stress itself isn’t a bad thing. It’s our response to events that require us to change and adapt to threats or demands. It helps spur us to action and to overcome such moments in any area of our lives including family, work, hobbies, or education.

The negative effects, those we think of when saying “I’ve just been under a lot of stress lately”, stem from a build-up of this tension. When we feel that we’re not up to the challenges in our lives, or when we go too long without resting and relaxing, stress becomes chronic. This can have negative effects on our mood, performance, decision-making, and eventually on our health.

Why is stress a thing?

Stress is the product of our minds working in concert with our bodies to keep us safe. Its roots are firmly placed in the fight-or-flight response. This response is housed in our ‘lizard brain’, meant to keep us alive in the face of danger, and shared among all vertebrates.

Stress happens in response to internal and external stimuli that place a demand upon us, either mentally or physically. It is a neutral, non-specific response (it happens for a lot of reasons and doesn’t carry any emotional charge of its own). What does vary, depending on the stressor, is its intensity. The context shapes our emotional reaction to it.

For example, finding you’re all out of gum is a weak, negative stressor. Taking a ride on a roller coaster is a huge stressor, but a positive one (because you’re enjoying it, hopefully).

We seek out situations of controlled danger because danger can be exciting.
Image via Pixabay.

A lot of people will describe feeling ‘pumped up’ after such a ride, which is the effect of adrenaline. The release of adrenaline, also known as epinephrine, is closely tied to the fight-or-flight response — as are many other hormones such as cortisol, the stress hormone. They prime our body for either fighting off or running away from a threat by heightening physical performance, activating our immune systems (in case we get wounded), and interfering with processes that aren’t needed in a fight, such as digestion.

Boiled down, this response is our body’s go-to emergency mode when we’re threatened. It’s good at what it does, but it was meant to work on a savanna where “threatened” meant there was a lion or somebody with a sharp rock looking at you. Deadlines, lay-offs, and mortgages still register as threats to our brains, but we can neither run from them nor smack them, sadly — so the same response stays switched on, depletes our bodies, and we become stressed.

As a general guideline, psychologists distinguish four classes of stressors: crises (such as a pandemic), major life events (getting wed, a relative dying), daily annoyances (traffic, work), and ambient stressors (pollution, climate change, crowding).

Eustress and distress

Psychologists sometimes make a distinction between eustress (‘good stress’) and distress (‘bad stress’). Stress, as we’ve seen, takes on certain emotional charges depending on the context and our reaction to it.

‘Eustress’ is an umbrella term that denotes healthy levels of positive stress which give rise to emotions such as hope, excitement, fulfillment, and being energized. It’s most commonly produced by events or demands that are outside our current zone of comfort but are still within our means to achieve. Feeling challenged and motivated to see such a task through is in no small part the product of stress.

‘Distress’, on the other hand, refers to stress caused by conditions that are beyond our control or ability to rectify. It is usually characterized by prolonged periods of stress that become chronic, and it almost always leads to maladaptive behavior (substance abuse, social withdrawal, irritability, aggressiveness) as a means to cope. People under distress will start experiencing problems sleeping, focusing, and working, and eventually will see their health worsen.

Do we need it?

In many ways, stress impacts our performance similarly to arousal as described by the Yerkes-Dodson law. The right amount can keep us running merrily at peak performance; too much and we’re a mess. Too little arousal means that our performance suffers just as it does on the other end of the spectrum.

Yerkes-Dodson curve showing the impact of arousal on performance in simple tasks.
Image via Wikimedia.
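To make that inverted-U concrete, here is a tiny, purely illustrative Python sketch. The bell-shaped formula and the chosen ‘optimum’ and ‘width’ values are arbitrary assumptions for drawing the curve, not numbers taken from the research discussed here.

# Illustrative sketch of the Yerkes-Dodson inverted-U: performance rises with
# arousal up to an optimum, then falls off again. The Gaussian-shaped formula is
# just a convenient way to draw that shape, not a fitted model from any study.
import numpy as np

def performance(arousal, optimum=0.5, width=0.2):
    """Toy inverted-U: peak performance at `optimum`, declining on both sides."""
    return np.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for arousal in (0.1, 0.3, 0.5, 0.7, 0.9):  # from under-aroused to over-aroused
    bar = "#" * int(40 * performance(arousal))
    print(f"arousal {arousal:.1f} | {bar}")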

Stress serves a similar function to arousal. While they’re different concepts, there’s a lot of overlap between them, and both are linked in our bodies to sensations of anxiety. Stress is our initial response to events in our bodies or the environment; it’s the kick that jump-starts our response. It generally leads to arousal (which basically means the ‘activation’ of our bodies and minds). Anxiety, a negative emotional state associated with feelings of worry or apprehension, is our bodies’ natural reaction to stress. Too much stress (i.e. being, or coming close to being, overwhelmed by what’s required of us) will lead to a build-up of anxiety that makes us avoid dealing with a certain task or event.

So, sadly, we can’t just do away with stress — we need it in order to function properly. But too much stress can impair our work by interfering with attention, muscle coordination and contraction, and other bodily processes.

As we’ve seen previously, one of the elements of stress involves the release of hormones that alter bodily processes (among others, making energy reserves quickly available for our muscles and organs). Spending too much time in a state of stress, then, will deplete such resources, and we’ll find ourselves running on an empty tank (which hurts our health).

How to handle stress

When dealing with stress, management is key. Keeping an eye on your stress level, and pushing it down when it becomes overwhelming, can lead to better health, productivity, and enjoyment of life.

Chronic stress keeps our fight-or-flight response always on. The hormonal changes this causes can lead to circulatory issues (due to high levels of adrenaline damaging blood vessels), heart attacks, or strokes. High levels of cortisol over long periods of time lead to issues with metabolism and energy management, i.e. it can make us eat more and fatten up.

Some of the most common signs that you’re under a lot of stress include overeating or not eating, problems sleeping, rapid weight gain (or loss in some cases), irritability, trouble concentrating, a retreat from social activities or hobbies. Some harder-to-spot ones include higher levels of anxiety, random pains and aches, issues with digestion, with memory, a drop in libido and sexual enjoyment, even autoimmune diseases.

Managing stress, unsurprisingly, involves either putting the body at ease, or giving it an outlet to channel this tension through. One useful relaxation exercise you can do is to sit in a comfortable position, breathe regularly and deeply, take a few moments to experience and enjoy what your senses are telling you, and imagine tranquil, pleasant, nice scenes or events. Physical exercise can give your body a way to expend stress (it’s the “fight or flight” mechanism, and you’re doing just that). If you just can’t fit any of those into your schedule, talking to a friend or pretty much anyone who will listen about your issues can help lower your levels of stress by giving you emotional support.

Wherever stress stems from in your life, keep in mind that our feelings are not an accurate representation of reality. They’re a product of millions of years of evolution and biochemical tweaking whose sole purpose is to keep you from dying — and if they have to ruin your mood to do so, they will. But you don’t have to bear it alone, and you don’t have to listen to it more than necessary. Take some time every day to relax, unwind, and take care of your mental health, no matter how hopeless things may seem. It’s darkest at midnight, but that’s also when things start getting brighter.

Children as young as 4 use ‘cognitive aids’ to simplify thinking

Our tendency to use external aids to simplify thinking or calculations — a process known as “cognitive offloading” — has its roots in early youth.

Image credits Esi Grünhagen.

A new paper from The University of Queensland (UQ) reports that children as young as 4 will use external aids for cognitive offloading if they are available. The harder a task is, the paper adds, the more likely an individual is to use these aids.

A little help can’t hurt

“We often use cognitive offloading to simplify some tasks, such as turning to calendars to remind ourselves of upcoming events or calculators when confronted with difficult mathematical problems,” says Kristy Armitage, a Ph.D. candidate at the UQ School of Psychology.

Adults, she explains, show “remarkable flexibility” in this area: they tend to rely on internal processing but will offload the work onto external aids in situations of high demand. The way this tendency develops, however, and how we use this process as we grow up is still poorly understood.

The study focused on children aged 4 to 11 who were given a series of mental rotation tasks — they were asked to imagine how a given object would look after being moved. They could either work out the answer in their heads or use a turntable the team provided to solve the problem without relying on internal cognitive resources.

Children of all ages used the turntable more frequently as the tasks got harder, the team explains. This shows that we have an early inclination towards offloading mental tasks. Armitage explains that many kids resorted to it “even in situations where it was redundant, offering no benefit to performance.”

In this experiment the aid was a turntable, but calendars, notepads, apps, and many other things can serve as cognitive aids. By over-relying on external aids as children, we presumably learn to recognize, by the time we’re grown up, when their use is actually warranted.

“With increasing age, children became better at differentiating between situations where the external strategy was beneficial and where it was redundant, showing a similar flexibility to that demonstrated by adults,” Armitage explains.

“These results show how humans gradually calibrate their cognitive offloading strategies throughout childhood and thereby uncover the developmental origins of this central facet of intelligence.”

The paper “Developmental origins of cognitive offloading” has been published in the journal Proceedings of the Royal Society B: Biological Sciences.

More atmospheric CO2 could reduce cognitive ability, especially in children

New research from the University of Colorado Boulder, the Colorado School of Public Health, and the University of Pennsylvania found that higher levels of atmospheric CO2 in the future could lead to cognitive issues.

Image via Pixabay.

A new study found that higher concentrations of atmospheric CO2 could negatively impact our cognitive abilities — especially among children in the classroom. The findings were presented at this year’s American Geophysical Union’s Fall Meeting.

Heavy breathing

Prior research has shown that higher-than-average levels of CO2 can impair our thinking and lead to cognitive problems. Children in particular — and their academic performance — can be negatively impacted by this, but, so far, researchers have identified a simple and elegant solution: open the windows and let some fresh air in.

However, what happens when the air outside also shows higher-than-usual CO2 levels? In an effort to find out, the team used a computer model and looked at two scenarios: one in which we successfully reduce the amount of CO2 we emit into the atmosphere, and one in which we don’t (a business-as-usual scenario). They then analyzed what effects each situation would have on a classroom of children.

In the first scenario, they explain that by 2100 students will be exposed to enough CO2 gas that, judging from the results of previous studies, they would experience a 25% decline in cognitive abilities. Under the second scenario, however, they report that students could experience a whopping 50% decline in cognitive ability.

The study doesn’t look at the effects of breathing higher-than-average quantities of CO2 sporadically — it analyzes the effects of doing so on a regular basis. The team explained that their study was the first to gauge this impact, and that the findings — while definitely worrying — still need to be validated by further research. Note that the paper has been submitted for peer-review pending publication but has yet to pass this step.

All in all, however, it’s another stark reminder that we should make an effort to cut CO2 emissions as quickly as humanly possible. Not only because they’re ‘killing the planet’, but because they will have a deeply negative impact on our quality of life, and mental capacity, in the future.

A preprint of the paper “Fossil fuel combustion is driving indoor CO2 toward levels harmful to human cognition” is available on EarthArXiv, and has been submitted for peer-review and publication in the journal GeoHealth.

When trying out creative ideas, go for your second choice, a new study finds

People aren’t very good at evaluating how creative an idea is — but they’re not terrible at it, either, so we can improve.

Image via Pixabay.

New research from the Stanford Graduate School of Business is looking into how people gauge the creativity of their ideas, and how we can improve. The findings suggest that in the very early stages of the creative process (when rough ideas are first pieced together), people have a rough understanding of which ideas are most promising — but it’s usually your second choice that ends up being the most creative.

Tortoise and hare

“Evaluating creativity is difficult,” says Justin M. Berg, an assistant professor at Stanford Graduate School of Business who studies creativity and innovation, and the study’s author.

“A lot of research suggests that people are not very good at it, that a number of biases and challenges get in the way.”

Berg carried out five experiments in which he asked participants to tackle a creative project, such as designing a new piece of fitness equipment or a way to keep people from falling asleep in self-driving cars. Participants were asked to come up with three ideas and rank them according to how promising they were from a creative standpoint. Afterwards, they were given some time to flesh out and finalize one of them.

Berg then asked a separate sample of experts and consumers to rate the creativity of the participants’ ideas.

Overall, he found that when participants only had a short time to work on their ideas, the way the experts and consumers rated them was consistent with the ranking they provided. However, when more time was afforded to work on the ideas, the one they ranked second-best tended to be rated as most creative. He explains that, just like in the fable of the tortoise and the hare, the second-ranked idea started at a disadvantage but made it to the top in the long run. This pattern was strikingly regular, he explains.

“People’s most promising initial ideas were consistently ranked second,” Berg says. “People are not terrible at identifying their best initial idea, and they are not terrible in a non-random way, which means they can get better at it.”

Abstract it

Independent raters were also asked to judge how abstract each idea was. Berg found that the ideas initially ranked second in terms of creativity were also more abstract than the ideas ranked first. A concrete idea is necessarily more developed, he explains, so its virtues are more readily apparent. Abstract ideas, even very good ones, can be harder to recognize as promising.

“People value concreteness too much and abstractness too little in their initial ideas. The best initial ideas likely won’t seem very creative at the beginning—there may not be enough substance to see their potential originality and usefulness,” he adds.

“Their abstractness is a barrier that prevents people from spotting their potential.”

Participants were then put in more abstract states of mind — with questions such as “Why is this a good idea?” as opposed to “How good is this idea?” — and asked to rate the creativity of their ideas again. In this step, participants were much better able to identify the most promising idea from the get-go.

Berg says that there are obvious limitations to the study. For starters, the result could shift if participants were asked to work with more ideas.

“When you have lots of initial ideas, your most promising idea might not be your second favorite,” he says. “Instead, it may be somewhere in the top half of your predicted rankings, below the idea ranked first but above the ideas you think are your worst.”

“We’re probably all killing a lot of our best ideas early in the creative process without knowing it.”

When developing new ideas, he recommends opting for the more concrete ones if you’re under time pressure (as these will reach their potential the fastest). However, if time isn’t an issue, try focusing on asking why (versus how) an idea is good, to get you into a more abstract mindset — and then select the most promising one. If time and resources permit, develop two ideas to maturity rather than a single one. Pick a surer bet and a riskier bet, but develop the riskier bet first so you don’t get anchored by the sure bet.

Finally, when working with more abstract ideas, don’t share them until you’ve worked on them to make them more concrete.

“You may recognize an idea’s potential before others can see it. If you need to win support for an idea, sharing late may be better than sharing too early.”

The paper “When Silver is Gold: Forecasting the Potential Creativity of Initial Ideas” has been published in the journal Organizational Behavior and Human Decision Processes.

We create ‘fake news’ when facts don’t match our biases

If you also dislike fake news, you should probably find a mirror and put on a stern look. A new study found that people unconsciously twist information on controversial topics to better fit widely-held beliefs.

Image credits Roland Schwerdhöfer.

In one study, people were shown figures indicating that the number of Mexican immigrants has been declining for a few years now — which is true, but runs contrary to what the general public believes — and they tended to remember the exact opposite when asked later on. Furthermore, such distortions of fact tended to get progressively worse as people passed the (wrong) information along.

Don’t believe everything you think

“People can self-generate their own misinformation. It doesn’t all come from external sources,” said Jason Coronel, lead author of the study and assistant professor of communication at Ohio State University.

“They may not be doing it purposely, but their own biases can lead them astray. And the problem becomes larger when they share their self-generated misinformation with others.”

The team conducted two studies for their research. In the first one, they had 110 participants read short descriptions of four societal issues that could be quantified numerically. The general consensus on these issues was established with pre-tests. Data for two of them fit in with the broad societal view: for example, many people generally expect more Americans to be in support of same-sex marriage than against it, and public opinion polls seem to indicate that this is true.

However, the team also used two topics where the facts don’t match up with the public’s perception. For example, the number of Mexican immigrants to the U.S. fell from 12.8 million to 11.7 million between 2007 and 2014, but most people in the U.S. believe the number kept growing.

Image credits Pew Research Center.

After reading the descriptions, the participants were asked to write down the numbers given (they weren’t informed of this step at the beginning of the test). For the first two issues (those consistent with public perception), the participants kept the relationship true, even if they didn’t remember the exact numbers. For example, they wrote a larger number for the percentage of people supporting same-sex marriage than for those that oppose it.

For the other two topics, however, they flipped the relationship around to make the facts align with their “probable biases” (i.e. the popular perception of the issue). The team also used eye-tracking technology to track participants’ attention as they read the descriptions.

“We had instances where participants got the numbers exactly correct—11.7 and 12.8—but they would flip them around,” Coronel said. “They weren’t guessing—they got the numbers right. But their biases were leading them to misremember the direction they were going.”

“We could tell when participants got to numbers that didn’t fit their expectations. Their eyes went back and forth between the numbers, as if they were asking ‘what’s going on.’ They generally didn’t do that when the numbers confirmed their expectations,” Coronel said.

For the second study, participants were asked to take part in a ‘telephone game’-style chain. The first person in a chain would see the accurate statistics about the number of Mexican immigrants living in the United States. They then had to write those numbers down from memory and pass them along to the second person in the chain, and so on. The team reports that the first person tended to flip the numbers, stating that the number of Mexican immigrants increased by 900,000 from 2007 to 2014 (it actually decreased by about 1.1 million). By the end of the chain, the average participant said the number of Mexican immigrants had increased over those 7 years by about 4.6 million.

“These memory errors tended to get bigger and bigger as they were transmitted between people,” said Matthew Sweitzer, a doctoral student in communication at Ohio State and co-author of the study.

Coronel said the study did have limitations. It’s possible that the participants would have better remembered the numbers if the team explained why they didn’t match their expectations. Furthermore, they didn’t measure each participant’s biases going into the tests. Finally, the telephone game study did not capture important features of real-life conversations that may have limited the spread of misinformation. However, it does showcase the mechanisms in our own minds that can spread misinformation.

“We need to realize that internal sources of misinformation can possibly be as significant as or more significant than external sources,” said Shannon Poulsen, also a doctoral student in communication at Ohio State and co-author of the study. “We live with our biases all day, but we only come into contact with false information occasionally.”

The paper “Investigating the generation and spread of numerical misinformation: A combined eye movement monitoring and social transmission approach” has been published in the journal Human Communication Research.

We learn best when we fail around 15% of the time

If a task is too hard, or too easy, you probably won’t learn very well, according to a new study.

Image credits Hans Braxmeier.

Learning is a funny process. We’d all love to sit down, study something, and ace it in the first five minutes with minimal effort — but that’s not how things go. Empirical observations in schools and previous research into the subject found that people learn best when challenged by something just outside of their immediate grasp. In other words, if a subject is way above our heads, we tend to give up or fail so spectacularly that we don’t learn anything; neither will we invest time into studying something we deem too simple.

However, the ideal ‘difficulty level’ for learning remained a matter of some debate. According to the new study, we learn best when we ‘fail’ around 15% of the time — in other words, when we get it right about 85% of the time.

The sweet spot

“These ideas that were out there in the education field — that there is this ‘zone of proximal difficulty,’ in which you ought to be maximizing your learning — we’ve put that on a mathematical footing,” said UArizona assistant professor of psychology and cognitive science Robert Wilson, lead author of the study.

The team, which also included members from Brown University, the University of California, Los Angeles, and Princeton University, conducted a series of machine-learning experiments for the study. This involved teaching computers simple tasks, such as classifying different patterns into one of two categories or labeling handwritten digits as odd or even. The computers learned best, i.e. improved the fastest, when the difficulty of the task was such that they responded with 85% accuracy. A review of previous research on animal learning suggests that the ‘85% rule’ held true in those studies as well.

“If you have an error rate of 15% or accuracy of 85%, you are always maximizing your rate of learning in these two-choice tasks,” Wilson said.

This 85% rule most likely applies to perceptual learning, the gradual process by which we learn through experience and examples. An example of perceptual learning would be a doctor learning to tell fractured bones from fissured bones on X-ray scans.

“You get better at [the task] over time, and you need experience and you need examples to get better,” Wilson said. “I can imagine giving easy examples and giving difficult examples and giving intermediate examples. If I give really easy examples, you get 100% right all the time and there’s nothing left to learn. If I give really hard examples, you’ll be 50% correct and still not learning anything new, whereas if I give you something in between, you can be at this sweet spot where you are getting the most information from each particular example.”
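For readers who want a more hands-on feel for this setup, below is a rough, hypothetical Python sketch of the kind of machine-learning experiment described above — it is not the authors’ code or model, and the learner, update rule, and trial counts are all assumptions made for the example. A simple two-choice classifier learns by gradient descent while the trial difficulty is continuously adjusted to keep its running accuracy near a chosen target; comparing targets such as 60%, 85%, and 99% illustrates the trade-off between examples that are too easy (almost no error signal) and too hard (almost no usable signal).

# Toy sketch of a two-choice learning task trained at a fixed target accuracy.
# Difficulty is staircased every trial so the learner's accuracy stays near the
# chosen target; more alignment with the truly informative feature = more learning.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_at_accuracy(target, n_trials=20_000, lr=0.02, dim=10):
    true_direction = np.zeros(dim)
    true_direction[0] = 1.0        # only the first feature actually separates the classes
    w = rng.normal(0, 0.1, dim)    # the learner starts out mostly ignorant
    signal = 1.0                   # stimulus strength (smaller = harder trials)
    acc = 0.5                      # running accuracy estimate
    for _ in range(n_trials):
        y = rng.choice([-1.0, 1.0])                               # true category
        x = y * signal * true_direction + rng.normal(0, 1, dim)   # noisy stimulus
        correct = np.sign(w @ x) == y
        acc += 0.02 * (correct - acc)
        signal *= 0.999 if acc > target else 1.001                # staircase difficulty toward the target
        p_true = sigmoid((w @ x) * y)                             # probability assigned to the true label
        w += lr * (1.0 - p_true) * y * x                          # logistic-regression gradient step
    return float(w @ true_direction / np.linalg.norm(w))          # alignment with the informative feature

for target in (0.60, 0.85, 0.99):
    print(f"training accuracy ~{target:.0%} -> alignment with true feature: {train_at_accuracy(target):.2f}")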

Time for the pinch of salt, however. The team only worked with simple tasks involving crystal-clear right and wrong answers, but life tends to get more complicated than that. Another glaring limitation is that they worked with algorithms, not people. However, the team is confident that there is value in their findings, and believe that their ‘85%’ approach to learning could help improve our educational systems.

“If you are taking classes that are too easy and acing them all the time, then you probably aren’t getting as much out of a class as someone who’s struggling but managing to keep up,” he said. “The hope is we can expand this work and start to talk about more complicated forms of learning.”

The paper “The Eighty Five Percent Rule for optimal learning” has been published in the journal Nature Communications.

Why do we do the things we do? A new study says it comes down to four factors

A new study reports that there are four broad categories for the motivations that drive human behavior: prominence, inclusiveness, negativity prevention, and tradition.

Image via Pixabay.

What do people want? That’s a question psychologists have been trying to answer for a long time now, albeit with little agreement on the results so far. In an attempt to put the subject to rest, a team led by researchers at the University of Wyoming (UW) Department of Psychology looked at goal-related words used by English speakers. They report that human goals can be attributed to one of four broad categories: “prominence,” “inclusiveness,” “negativity prevention” and “tradition.”

What makes us tick

“Few questions are more important in the field of psychology than ‘What do people want?,’ but no set of terms to define those goals has gained widespread acceptance,” says UW Associate Professor Ben Wilkowski, the paper’s first author.

“We decided the best way to address the issue was to examine the words that people use to describe their goals, and we hope our conclusions will help bring about an ultimate consensus.”

The team started with a list of more than 140,000 English nouns, which they whittled down to a set of 1,060 that they deemed most relevant to human goals. They then carried out a series of seven studies in which they quizzed participants on their commitment to pursue goals. After crunching all the data, the team reports that human motivation is built on four main components (when it’s not drugs):

  • Prominence: these goals revolve around power, money making ability, mastery over skills, perfection, and glory. All in all, these motivators underpin our pursuit of social status and our desire to earn respect, admiration, and the deference of others through our achievements.
  • Inclusiveness: this represents our drive to be open-minded, tolerant, and accepting of other people, opposing views, different lifestyles, and values. In short, goals in this category revolve around accepting people of all types.
  • Negativity prevention: while the other categories on this list push us towards a goal, negativity prevention is aimed at pushing away undesirable outcomes. It includes goals meant to avoid conflict, disagreement, isolation, or social discord. In short, it’s our desire to keep the peace in the group and avoid personal pain.
  • Tradition: such goals revolve around our desire to uphold long-standing institutions or features of the culture we belong to. Religious affiliation and zeal, attitudes towards family and nation, cultural customs, attitudes towards other social groups are in large part shaped by the culture that raised us, and we each feel the need to nurture and pass on these cultural institutions — to a lesser or greater extent.
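The article doesn’t spell out the statistical machinery behind all that data-crunching, so the short Python sketch below is only a hypothetical illustration of the general lexical approach it describes: gather ratings of how committed people are to a large set of goal words, then look for a small number of broad factors underlying those ratings. The participant count, the random placeholder data, and the choice of factor analysis are assumptions made for the example, not details taken from the paper.

# Hypothetical sketch of a lexical analysis in the spirit described above: extract
# a handful of broad factors from commitment ratings over many goal words.
# The ratings here are random placeholders; a real analysis would use survey data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_participants, n_goal_words = 300, 1060                    # 1,060 goal nouns, as in the article
ratings = rng.normal(size=(n_participants, n_goal_words))   # placeholder commitment ratings

fa = FactorAnalysis(n_components=4, random_state=0)   # look for four broad motives
scores = fa.fit_transform(ratings)      # each participant's standing on the four factors
loadings = fa.components_               # how strongly each goal word marks each factor

# The goal words loading highest on each factor suggest how to label it
# (e.g. "prominence", "inclusiveness", "negativity prevention", "tradition").
top_words_per_factor = np.argsort(-np.abs(loadings), axis=1)[:, :10]
print(scores.shape, loadings.shape, top_words_per_factor.shape)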

The more rebellious of you may have noticed that all these categories are externally-focused — the team did as well. Wilkowski says that the findings point to most of human motivation being “overwhelmingly social in nature,” adding that “the ‘need to belong’ and our ultra-social nature are reflected in all four categories.”

It has to be said that these studies only looked at the English language as used within American culture. The team believes that their four categories apply to other industrialized cultures as well, but until that’s tested, they won’t say for sure.

“For example, ‘church’ would not serve as a good marker of tradition in non-Christian cultures; and ‘fatness’ would not serve as a good marker of negativity prevention in cultures where starvation is a larger concern than obesity,” they wrote.

“Nonetheless, we suggest that the deeper concepts underlying these four constructs are relevant to the human condition more generally — at least as experienced in large, industrialized cultures.”

The paper “Lexical derivation of the PINT taxonomy of goals: Prominence, inclusiveness, negativity prevention, and tradition” has been published in the Journal of Personality and Social Psychology.

Why some people are left-handed

Around 90% of people are right-handed — and it’s been this way since at least the Paleolithic. Now, for the first time, researchers have identified regions of the human brain that are directly linked with left-handedness, and found that being a leftie is associated with both positive and negative traits.

The skewed preference for right-handedness is a uniquely human feature, researchers say. However, we still don’t know why it happens or what other effects it has. While left-handedness seems to run in families, studies have been unable to show whether it is strictly under genetic control. That said, several connections have been found between left-handedness and conditions with a genetic component, including schizophrenia. Some results suggest that genes account for about 25% of handedness, but overall, studies of human handedness have been unable to pinpoint which genes are involved.

Researchers used data from the UK Biobank, a prospective cohort study of half a million volunteers that gathers a huge range of data on their health and habits. Data from around 400,000 participants was analyzed, with 38,000 of them being self-described lefties. They identified four genetic hotspots associated with left-handedness.

“For the first time in humans, we have been able to establish that these handedness-associated cytoskeletal differences are actually visible in the brain,” said lead author Professor Gwenaëlle Douaud, who is herself left-handed.

Most of the mutations they identified lie in regions connected to the cytoskeleton, the intricate scaffolding that organizes the inside of our cells. Mutations in these regions have been connected to changes in chirality in other species. In snails, for instance, similar mutations can lead to anti-clockwise shell coiling, essentially the “left-handed” version of the shell. For snails, this is a huge problem because they can only mate with partners who have the same shell chirality, due to the way their genitals are positioned.

While humans have no such issues, the researchers found evidence that these cytoskeletal modifications also affect the way white matter is structured in the brain. There are well-established associations between left-handedness and several neurodevelopmental disorders; now, the team has found a smoking gun regarding the source of these changes, helping explain why lefties are at a slightly higher risk of some neurological conditions, including Parkinson’s.

But it’s not all bad — quite the opposite.

Language-related grey matter regions functionally involved with handedness are connected by white matter tracts. Image credits: Wiberg et al / Brain.

The imaging–handedness analysis revealed an increase in functional connectivity between left and right language networks in left-handed participants. While the researchers don’t have the data to back it up, they suspect this may give lefties slightly better verbal skills.

However, this is still only a piece of the puzzle when it comes to understanding handedness. The study has only identified part of the genetic differences linked to left-handedness, and only in a British population; there’s still much more left to discover.

The study has been published in Brain.


Urban parks make people ‘as happy as Christmas’ — at least on Twitter

A quick walk in the park may just be the emotional pick-me-up you need.

Image credits Maleah Land.

The first study of its kind shows that people who visit an urban park use happier language and express less negativity on Twitter than they did before the visit. This boost in mood, the paper further reports, can last for up to four hours afterward.

Christmas come early

“We found that, yes, across all the tweets, people are happier in parks,” says Aaron Schwartz, a University of Vermont (UVM) graduate student who led the new research, “but the effect was stronger in large regional parks with extensive tree cover and vegetation.”

The effect is definitely strong — the team found that the increase in happiness people derived from visiting an area of urban nature was equivalent to the mood spikes seen on Christmas day (which they explain is by far the happiest day of the year on Twitter). Given that more and more of us live and work in the city — and given the growing rate of mood disorders we experience — the findings can help inform public health and urban planning strategies.

For the study, the team spent three months analyzing hundreds of tweets daily that were posted from 160 parks in San Francisco. Visitors showed the effects of elevated mood in their posts after visiting any one of these urban nature areas. Smaller neighborhood parks showed a more modest spike in positive mood, while mostly-paved civic plazas and squares showed the least mood elevation.

This suggests that it wasn’t merely getting out of the office, or simply being outside, that caused the boost in mood. The team says areas with more vegetation had the most pronounced impact, noting that one of the words showing the biggest uptick in use in tweets from parks is “flowers.”

“In cities, big green spaces are very important for people’s sense of well-being,” says Schwartz.

“We’re seeing more and more evidence that it’s central to promoting mental health,” says Taylor Ricketts, a co-author on the new study and director of the Gund Institute for Environment at UVM.

The study’s findings are important as they quantify the benefits of natural areas beyond immediate monetary gains (e.g. “how many dollars of flood damage did we avoid by restoring a wetland?”) and look at their direct effects on public health.

Image via Pixabay.

The team used an online instrument called a hedonometer — invented by a team of scientists at UVM and The MITRE Corporation — to gather and analyze the tweets. The instrument uses a body of about 10,000 common words that have been scored by a large pool of volunteers for what the scientists call their “psychological valence,” a kind of measure of each word’s emotional temperature.

The volunteers ranked words they perceived as the happiest near the top of a 1-9 scale, with sad words near the bottom. Each word’s final score was calculated by averaging the volunteers’ responses. “Happy”, for example, ranked 8.30, “hahaha” 7.94, and “parks” 7.14. Neutral words like “and” and “the” scored 5.22 and 4.98. At the bottom were “trapped” 3.08, “crash” 2.60, and “jail” 1.76.
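To make the scoring concrete, here’s a minimal sketch of the hedonometer idea in Python, using only the handful of example word scores quoted above (the real instrument draws on roughly 10,000 crowd-scored words, so this is purely illustrative):

```python
# A minimal sketch of the hedonometer idea: a text's happiness score is the
# average valence of the words we have scores for. Scores below are the
# examples quoted above; the real instrument uses ~10,000 crowd-scored words.
valence = {
    "happy": 8.30, "hahaha": 7.94, "parks": 7.14,
    "and": 5.22, "the": 4.98,
    "trapped": 3.08, "crash": 2.60, "jail": 1.76,
}

def hedonometer_score(text):
    """Average valence of scored words; None if no word in the text is scored."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    scored = [valence[w] for w in words if w in valence]
    return sum(scored) / len(scored) if scored else None

print(hedonometer_score("Hahaha, happy in the parks"))        # ~7.09, toward the happy end
print(hedonometer_score("Trapped in jail after the crash"))   # ~3.11, toward the sad end
```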

Using these scores, the team combed through the tweets of 4,688 users who publicly identify their location, posted with latitude and longitude geotags within the city of San Francisco (so the researchers could pinpoint exactly which park each tweet came from).

“Then, working with the U.S. Forest Service, we developed some new techniques for mapping vegetation of urban areas, at a very detailed resolution, about a thousand times more detailed than existing methods,” says study co-author Jarlath O’Neil-Dunne, director of UVM’s Spatial Analysis Laboratory in the Rubenstein School of Environment and Natural Resources.

“That’s what really enabled us to get an accurate understanding of how the greenness and vegetation of these urban areas relates to people’s sentiment there.”

Overall, the tweets posted from urban parks in San Francisco were 0.23 points happier on the hedonometer scale over the baseline. The increase is “equivalent to that of Christmas Day for Twitter as a whole in the same year,” the scientists write.

Exactly why parks have this effect on people isn’t fully understood — and wasn’t the object of the present study. Regardless of how it happens, the results suggest that people tend to be happier in nature. That’s a finding “that may help public health officials and governments make plans and investments,” says UVM’s Aaron Schwartz.

The paper “Visitors to urban greenspace have higher sentiment and lower negativity on Twitter” has been published in the journal People and Nature.


The first symptom of Alzheimer’s is excessive sleepiness

New research at UC San Francisco shows that Alzheimer’s disease directly attacks brain regions responsible for wakefulness during the day.

Sleep.

Image via Pixabay.

Both researchers and caregivers have noted that Alzheimer’s patients can develop excessive daytime napping long before showing the memory problems associated with the disease, the paper notes. Some prior studies considered this merely a symptom of poor nighttime sleep caused by Alzheimer’s-related disruptions in the brain regions that govern sleep, while others argued that the sleep problems themselves contribute to the progression of the disease.

However, the new study shows that this is, in fact, caused by Alzheimer’s itself.

Sleepy brain

“Our work shows definitive evidence that the brain areas promoting wakefulness degenerate due to accumulation of tau — not amyloid — protein from the very earliest stages of the disease,” said study senior author Lea T. Grinberg, MD, Ph.D., an associate professor of neurology and pathology at the UCSF Memory and Aging Center.

The brain regions that promote wakefulness (including the part of the brain impacted by narcolepsy) are among the first to degrade at the onset of Alzheimer’s disease, the team reports. Therefore, excessive daytime napping, particularly when it occurs in the absence of significant nighttime sleep problems, could serve as an early warning sign of the disease.

The findings also add to the body of evidence suggesting that tau proteins contribute more directly to the brain degeneration that drives Alzheimer’s symptoms than the more extensively studied amyloid protein.

Led by first author Jun Oh, a Grinberg lab research associate, the team measured Alzheimer’s pathology, tau protein levels, and neuron numbers in three brain regions involved in promoting wakefulness. They worked with brain samples from 13 deceased Alzheimer’s patients and seven healthy control subjects, obtained from the UCSF Neurodegenerative Disease Brain Bank.

The brains of Alzheimer’s patients had significant tau buildup in all three wakefulness-promoting brain centers compared to the healthy controls, the team reports. These three areas were the locus coeruleus (LC), lateral hypothalamic area (LHA), and tuberomammillary nucleus (TMN). The same regions had lost as many as 75% of their neurons, the team adds.

“It’s remarkable because it’s not just a single brain nucleus that’s degenerating, but the whole wakefulness-promoting network,” Oh said. “Crucially, this means that the brain has no way to compensate because all of these functionally related cell types are being destroyed at the same time.”

Oh’s team also studied brain samples from seven patients with progressive supranuclear palsy (PSP) and corticobasal disease (CBD), two distinct forms of neurodegenerative dementia caused by tau accumulation. These brains didn’t show any loss of neurons in the same three areas despite showing significant tau protein build-ups.

“It seems that the wakefulness-promoting network is particularly vulnerable in Alzheimer’s disease,” Oh said. “Understanding why this is the case is something we need to follow up in future research.”

The work also ties in with previous research by Grinberg’s team, which showed that people who died with elevated levels of tau protein in their brainstem — i.e. in the earliest stages of Alzheimer’s disease onset — had already begun to experience changes in mood, such as anxiety and depression, as well as increased sleep disturbances.

“Our new evidence for tau-linked degeneration of the brain’s wakefulness centers provides a compelling neurobiological explanation for those findings,” Grinberg said. “It suggests we need to be much more focused on understanding the early stages of tau accumulation in these brain areas in our ongoing search for Alzheimer’s treatments.”

The paper “Profound degeneration of wake-promoting neurons in Alzheimer’s disease” has been published in the journal Alzheimer’s & Dementia.


Healthy lifestyles can offset the genetic risk of dementia by 32%

Lifestyle choices can help reduce an individual’s genetic risk of dementia, a new paper reports.

Handstand.

Image credits Matan Ray Vizel.

New research led by the University of Exeter found that people with a high genetic risk of dementia had a 32% lower risk of developing the syndrome if they followed a healthy lifestyle, compared with counterparts who had an unhealthy lifestyle. Participants with a high genetic risk and an unfavourable lifestyle were almost three times more likely to develop dementia than those with a low genetic risk and a favourable lifestyle (a 2.83-fold higher incidence of dementia from any cause).

Do good, be good

“This research delivers a really important message that undermines a fatalistic view of dementia,” says co-lead author Dr. David Llewellyn, from the University of Exeter Medical School and the Alan Turing Institute.

“Some people believe it’s inevitable they’ll develop dementia because of their genetics. However it appears that you may be able to substantially reduce your dementia risk by living a healthy lifestyle.”

The team worked with data from 196,383 adults of European ancestry aged 60 and older from UK Biobank. Out of this sample, the team identified 1,769 cases of dementia over the follow-up period of eight years. They then grouped all participants into three groups: those with high, intermediate, and low genetic risk for dementia.

“Our findings are exciting as they show that we can take action to try to offset our genetic risk for dementia,” says joint lead author Dr. Elzbieta Kuzma. “Sticking to a healthy lifestyle was associated with a reduced risk of dementia, regardless of the genetic risk.”

In order to assess genetic risk for dementia, the team looked at previous research to identify all currently-known genetic risk factors for Alzheimer’s disease. Each genetic risk factor was weighted according to the strength of its association with the disease.
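The article doesn’t give the exact scoring formula, but weighted genetic risk scores of this kind usually boil down to a sum of risk-allele counts multiplied by association weights. Here’s a minimal, hypothetical sketch; the variant names and weights are invented for illustration and are not taken from the study:

```python
# A minimal, hypothetical sketch of a weighted genetic risk score; variant
# names and weights are invented for illustration, not taken from the study.
risk_weights = {
    "variant_A": 1.20,   # hypothetical weight: strength of association with the disease
    "variant_B": 0.35,
    "variant_C": 0.10,
}

def genetic_risk_score(allele_counts):
    """Sum of (copies of each risk allele, 0-2) x (that variant's weight)."""
    return sum(weight * allele_counts.get(variant, 0)
               for variant, weight in risk_weights.items())

participant = {"variant_A": 1, "variant_B": 2, "variant_C": 0}
print(genetic_risk_score(participant))  # 1*1.20 + 2*0.35 + 0*0.10 = 1.90
# Cohorts are then typically split into low / intermediate / high risk groups,
# e.g. by quantiles of this score.
```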

To assess lifestyle, the team defined three groups based on their self-reported diet, physical activity, smoking, and alcohol consumption: favorable, intermediate, and unfavorable. People who didn’t currently smoke, engaged in regular physical activity, had a healthy diet, and only had moderate levels of alcohol intake were considered to be part of the ‘favorable’ group. A healthy lifestyle was associated with a reduced risk of dementia across all the genetic risk groups.

The paper “Association of Lifestyle and Genetic Risk With Incidence of Dementia” has been published in the journal JAMA.


Children prefer simple objects over toys because they’re “not limited” to being a single thing

For kids, versatility might be the way to go — as far as toys are concerned, anyway.

Child playing.

Image credits Esi Grünhagen.

I have it on reasonable authority that kids are very likely to ignore a particular toy and make a starry-eyed beeline for the box it came in. I haven’t got any of my own, so I can’t attest to the accuracy of that, but I do have a cat — so I can relate to how confusing such an experience might be.

But fret not, parents around the world, for science comes to the rescue. A new study from the University of Alabama reports that children, particularly those at preschool age, are probably attracted to generic objects because they make for more versatile toys.

Is it a bird? Is it a plane?

“The inclusion of generic objects like sticks and boxes may allow children to extend their play because the generic objects can be used as multiple things,” said lead author Dr. Sherwood Burns-Nader, UA assistant professor of human development and family studies.

“Pretend play such as object substitution has so many benefits, such as increased socialization and problem solving.”

A cardboard box can become virtually anything in the mind of a child, the researchers say. In contrast, a spaceship or unicorn toy — despite being much more visually appealing — is doomed to remain a spaceship or unicorn for as long as you play with it. And therein lies the reason why children, especially younger ones, would generally prefer to play with the box.

Children often substitute one object for another during play. A stick can become a sword, a rifle, or a pen. But such substitutions aren’t made lightly — the object has to have a passable resemblance to the one it’s being substituted for. As such, an object’s features such as shape or markings can disqualify it completely for a certain play-task.

“Children don’t necessarily like the box better than the toy, but they can do more things with the box because it’s not limited,” said study co-author Scofield.

The team worked with 66 children and four primary objects: a round unmarked object, a round object marked to resemble a clock, a rectangular unmarked object, and a rectangular object marked to look like a book.

The children were read a story about a young boy named Tommy. Throughout the story, Tommy needed help finding certain items that would help in the scenarios of the story. The children were asked to pick which of the four best fit the object needed in each situation. For example, at one point Tommy wanted to go outside and play with his friends, but it was cold, and he needed a jacket. His jacket was missing a button, so the children were asked which of the four items could be a button.

“There are two parts to this,” Scofield said. “First, we expect children to choose based on shape. Since most buttons are round, we think children will choose one of the two round objects to stand in for the button. Second, we expect children to favor the unmarked shapes. We think the marked shapes have a kind of fixed identity that restricts what they can be.”

The 66 children — 22 three-year-olds, 22 four-year-olds, and 22 five-year-olds — behaved pretty much exactly as the team expected them to behave: they picked the correct shape 92% of the time in all scenarios. They also showed a preference for the unmarked objects, choosing them 65% of the time in all four scenarios. Plain objects offer more flexibility to children, which can be helpful information for parents and childcare providers when purchasing toys, Burns-Nader said.

The team concludes that children’s play spaces stand to benefit from including generic objects with few details as tools to promote object substitution and creative play.

The paper “The role of shape and specificity in young children’s object substitution” has been published in the journal Infant and Child Development.


Researchers are looking into giving AI the power of reading soldiers’ minds — to help them in battle

The US Army is planning to equip its soldiers with an AI helper. A mind-reading, behavior-predicting AI helper that should make operational teams run more smoothly.

Soldier-AI integration.

The Army hopes that giving AI the ability to interpret the brain activity of soldiers will help it better respond to and support their activity in battle.
Image credits US Army.

We’re all painfully familiar with the autocomplete features in our smartphones or on the Google page — but what if we could autocomplete our soldiers’ thoughts? That’s what the US Army hopes to achieve. Towards that end, researchers at the Army Research Laboratory (ARL), the Army’s corporate research laboratory, have been collaborating with members from the University of Buffalo.

A new study published as part of this collaboration looks at how soldiers’ brain activity can be monitored during specific tasks to allow better AI integration with the team’s activities.

Army men

“In military operations, Soldiers perform multiple tasks at once. They’re analyzing information from multiple sources, navigating environments while simultaneously assessing threats, sharing situational awareness, and communicating with a distributed team. This requires Soldiers to constantly switch among these tasks, which means that the brain is also rapidly shifting among the different brain regions needed for these different tasks,” said Dr. Jean Vettel, a senior neuroscientist at the Combat Capabilities Development Command at the ARL and co-author of this current paper.

“If we can use brain data in the moment to indicate what task they’re doing, AI could dynamically respond and adapt to assist the Soldier in completing the task.”

The Army envisions the battlefield of the future as a mesh between human soldiers and autonomous systems. One big part of such an approach’s success rests on these systems being able to intuit what each trooper is thinking, feeling, and planning on doing. As part of the ARL-University of Buffalo collaboration, the present study looks at the architecture of the human brain, its functionality, and how to dynamically coordinate or predict behaviors based on these two.

While the researchers have so far focused on single individuals, the purpose is to apply such systems “for a teaming environment, both for teams with Soldiers as well as teams with Autonomy,” said Vettel.

The first step was to understand how the brain coordinates its various regions when executing a task. The team mapped how key regions connect to the rest of the brain (via bundles of white matter) in 30 people. Each individual has a specific connectivity pattern between brain regions, the team reports. So, they then used computer models to see whether activity levels can be used to predict behavior.

Each participant’s ‘brain map’ was converted into a computational model whose functioning was simulated by a computer. The team wanted to see what would happen when a single region of a person’s brain was stimulated. A mathematical framework the team developed was then used to measure how brain activity became synchronized across various cognitive systems in the simulations.
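The study’s framework isn’t detailed here, but a common way to simulate and measure synchronization on a structural ‘brain map’ is to place coupled oscillators on the connectivity matrix and track a global synchrony measure. The sketch below is purely illustrative, assuming a Kuramoto-style model with a made-up connectivity matrix; it is not the authors’ code.

```python
# A purely illustrative sketch (not the authors' framework): Kuramoto-style
# oscillators coupled through a made-up connectivity matrix, with a global
# order parameter tracking how synchronized the "regions" become.
import numpy as np

rng = np.random.default_rng(1)
n_regions = 8
W = rng.random((n_regions, n_regions))        # hypothetical connectivity weights
W = (W + W.T) / 2                             # symmetric coupling
np.fill_diagonal(W, 0.0)                      # no self-coupling

omega = rng.normal(1.0, 0.1, n_regions)       # each region's natural frequency
theta = rng.uniform(0, 2 * np.pi, n_regions)  # initial phases
coupling, dt, steps = 0.8, 0.01, 5000

def order_parameter(phases):
    """Global synchrony: 0 = incoherent, 1 = fully phase-locked."""
    return float(np.abs(np.mean(np.exp(1j * phases))))

for _ in range(steps):
    phase_diff = theta[None, :] - theta[:, None]                    # theta_j - theta_i
    dtheta = omega + (coupling / n_regions) * np.sum(W * np.sin(phase_diff), axis=1)
    theta += dt * dtheta

print(round(order_parameter(theta), 3))  # with positive coupling, synchrony rises well above the incoherent baseline
```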

Sounds like Terminator

“The brain is very dynamic,” Dr. Kanika Bansal, lead author on the work, says. “Connections between different regions of the brain can change with learning or deteriorate with age or neurological disease.”

“Connectivity also varies between people. Our research helps us understand this variability and assess how small changes in the organization of the brain can affect large-scale patterns of brain activity related to various cognitive systems.”

Bansal says that this study looks into the foundational, very basic principles of brain coordination. However, with enough work and refinement, we may reach a point where these fundamentals can be extended outside of the brain — to create dynamic soldier-AI teams, for example.

“While the work has been deployed on individual brains of a finite brain structure, it would be very interesting to see if coordination of Soldiers and autonomous systems may also be described with this method, too,” Dr. Javier Garcia, ARL neuroscientist and study co-author, points out.

“Much how the brain coordinates regions that carry out specific functions, you can think of how this method may describe coordinated teams of individuals and autonomous systems of varied skills work together to complete a mission.”

Do I think this is a good thing? Both yes and no. I think it’s a cool idea. But, if I’ve learned anything during my years as a massive sci-fi geek, it’s that AI should not be weaponized. Using such systems to glue combat teams closer together and help them operate more efficiently isn’t weaponizing them per se, but it’s uncomfortably close. Time will tell what such systems will be used for, if we develop them at all.

Hopefully, it will be for something peaceful.

The paper “Cognitive chimera states in human brain networks” has been published in the journal Science Advances.


New research sheds light into how our brains handle metaphors

Your brain can read the lines, and it can read between the lines, but it does both using the same neurons.

Fried CD.

Image credits Chepe Nicoli.

While we can consciously tell when a word is being used literally or metaphorically, our brains process it just the same. The findings come from a new study by University of Arizona researcher Vicky Lai, which builds on previous research by looking at when, exactly, different regions of the brain are activated in metaphor comprehension.

Twisting our words

“Understanding how the brain approaches the complexity of language allows us to begin to test how complex language impacts other aspects of cognition,” she said.

People use metaphors all the time. On average, we sneak one in once every 20 words, says Lai, an assistant professor of psychology and cognitive science at the UA. As director of the Cognitive Neuroscience of Language Laboratory in the UA Department of Psychology, she is interested in how the brain distinguishes metaphors from the broad family of language, and how it processes them.

Previous research has hinted that our ability to understand metaphors may be rooted in bodily experiences. Functional brain imaging (fMRI) studies, for example, have indicated that hearing a metaphor such as “a rough day” activates regions of the brain associated with the sense of touch. Hearing that someone is “sweet” activates taste areas, whereas “grasping a concept” lights up brain regions involved in motor perception and planning.

In order to get to the bottom of things, Lai used EEG (electroencephalography) to record the electrical patterns in the brains of participants who were presented with metaphors that contained action words — like “grasp the idea” or “bend the rules.” The participants were shown three different sentences on a computer screen, presented one word at a time. One of these sentences described a concrete action — “The bodyguard bent the rod.” Another was a metaphor using the same verb — “The church bent the rules.” The third sentence replaced the verb with a more abstract word that kept the metaphor’s meaning — “The church altered the rules.”

Seeing the word “bent” elicited a similar response in participants’ brains whether it was used literally or metaphorically. Their sensory-motor regions activated almost immediately, within 200 milliseconds of the verb appearing on screen. A different response, however, was elicited when “bent” was replaced with “altered.”

Lai says her work supports previous findings from fMRI (functional magnetic resonance imaging) studies. However, while fMRI measures blood flow in the brain as a proxy for neural activity, the EEG measures electrical activity directly. Thus, it provides a clearer picture of the role sensory-motor regions of the brain play in metaphor comprehension, she explains.

“In an fMRI, it takes time for oxygenation and deoxygenation of blood to reflect change caused by the language that was just uttered,” Lai said. “But language comprehension is fast — at the rate of four words per second.”

“By using the brainwave measure, we tease apart the time course of what happens first,” Lai said.

While an fMRI won’t show you exactly which brain region is working to decipher an action-based metaphor (because it won’t show you which region activates immediately and which does so after we already understand the metaphor), the EEG provides a much more precise sense of timing. The near-immediate activation of sensory-motor areas after the verb was displayed suggests that these areas of the brain are key to metaphor comprehension.

Lai recently presented ongoing research looking into how metaphors can aid learning and retention of science concepts at the annual meeting of the Cognitive Neuroscience Society in San Francisco. She hopes the study we’ve discussed today will help her lab better understand how humans comprehend language and serve as a base for her ongoing and future research.

The paper “Concrete processing of action metaphors: Evidence from ERP” has been published in the journal Brain Research.


Time flies as we age because our brains get bigger and less efficient, a new paper proposes

New research from Duke University says time flies as we age because of our brains maturing — and degrading.

Old and young.

Image credits Gerd Altmann.

The shift in how we perceive time throughout our lives takes place because our brain’s ability to process images slows down, reports a study penned by Adrian Bejan, the J.A. Jones Professor of Mechanical Engineering at Duke. This is a consequence of the natural development of our brains, as well as wear and tear.

Hardware, oldware

“People are often amazed at how much they remember from days that seemed to last forever in their youth,” said Bejan. “It’s not that their experiences were much deeper or more meaningful, it’s just that they were being processed in rapid fire.”

Bejan says that, as the bundles of nerves and neurons that make up our brains develop both in size and complexity, the electrical signals that encode sensory data have to travel through longer paths. We also grow in size, making the nerves feeding information to the brain physically longer. Nerve fibers are good conductors of electricity — but they’re not perfect; all that extra white matter slows down the transfer of data in our biological computers.

Wear and tear also play a role, he adds. As neural paths age, they also degrade, which further chips away at their ability to transport information.

These two elements combine to slow down our brain’s ability to transport, and thus process, data. One tell-tale sign of processing speeds degrading with age is the fact that infants tend to move their eyes more often than adults, Bejan explains. It’s not that they’re more ‘filled with energy’ or simply have shorter attention spans. Younger brains are quicker to absorb, process, and integrate new information, meaning they need to focus on a single object or stimulus for shorter spans of time to take it all in.

So, how does this impact our perception of time? The study explains that, due to the processes outlined above, older people basically take in fewer new images in a given unit of time than younglings, which makes it feel like time is passing more quickly for them. Objective, “measurable ‘clock time’ is not the same as the time perceived by the human mind,” the paper reads, as our brains tend to keep track of time by how many new bits of information they receive.

“The human mind senses time changing when the perceived images change,” said Bejan. “The present is different from the past because the mental viewing has changed, not because somebody’s clock rings.”

“Days seemed to last longer in your youth because the young mind receives more images during one day than the same mind in old age.”

It’s not the most heartening of results — who likes to hear their brains are getting laggy, right? — but it does help explain why we get that nagging feeling of time moving faster as we age. And, now that we know what’s causing it, we can try to counteract the effects.

That being said, maybe having a slower brain isn’t always that bad of a thing. If you’re stuck out on a boring date, or grinding away inside a cubicle from 9 to 5, at least you feel like you’re getting out quicker. Glass half full and all that, I suppose.

The paper “Why the Days Seem Shorter as We Get Older” has been published in the journal European Review.


If you want to be creative, turn the music off, new research reveals

The popular view that music enhances creativity has it all backwards, according to an international team of researchers.

Classical music.

Image via Pixabay.

Psychologists from the University of Central Lancashire, the University of Gävle in Sweden, and Lancaster University investigated the impact of background music on creative performance and let me tell you — the results aren’t encouraging if you like music.

Creatively uncreative

The team pitted participants against verbal insight tasks that require creativity to solve. All in all, they report, background music “significantly impaired” people’s ability to perform these tasks. Background noise (the team used library noises) or silence didn’t have the same effect on creativity, the team notes.

“We found strong evidence of impaired performance when playing background music in comparison to quiet background conditions,” says first author Dr Neil McLatchie of Lancaster University.

As an example, one of the tasks involved showing a participant three words (e.g. dress, dial, flower) and asking them to find a single associated word that can be combined with the three to make a common word or phrase (for example, “sun” to make sundress, sundial, and sunflower).
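Just to make the task concrete, here’s a toy, hypothetical checker for an item like that; the compound list is a stand-in for a real dictionary and has nothing to do with the study’s materials:

```python
# A toy illustration of a compound remote associates item, not part of the study:
# does a candidate word combine with all three cues to form known compounds?
KNOWN_COMPOUNDS = {"sundress", "sundial", "sunflower", "moonflower"}  # stand-in for a dictionary

def solves_item(candidate, cues):
    """True if candidate + cue is a known compound for every cue word."""
    return all(candidate + cue in KNOWN_COMPOUNDS for cue in cues)

print(solves_item("sun", ["dress", "dial", "flower"]))   # True
print(solves_item("moon", ["dress", "dial", "flower"]))  # False (only "moonflower" exists here)
```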

Each task was performed in three different settings: in the first, music with foreign or unfamiliar lyrics was played in the background. In the second setting, instrumental music (no lyrics) was played in the background. The third setting involved music with familiar lyrics being played in the background. Control groups performed the same task either in a silent environment or with a background of library noises.

All participants in settings with background music showed “strong evidence of impaired performance” in comparison to quiet background conditions, McLatchie says. The team suggests this may be because music disrupts verbal working memory.

The third setting in particular (music with familiar lyrics) impaired creativity regardless of whether it also induced a positive mood, whether participants liked it or not, and whether they usually study or work with music in the background. The effect was less pronounced when the background music was instrumental, with no lyrics, but it was still present.

“To conclude, the findings here challenge the popular view that music enhances creativity, and instead demonstrate that music, regardless of the presence of semantic content (no lyrics, familiar lyrics or unfamiliar lyrics), consistently disrupts creative performance in insight problem solving.”

However, there was no significant difference in performance on verbal tasks between the quiet and library noise conditions. The team says this is because library noise is a “steady state” environment which is not as disruptive as music.

So it may be best for your productivity to close that YouTube tab when trying to study or work. Can’t say that I’m thrilled about the findings but hey — science is science!

The paper “Background music stints creativity: Evidence from compound remote associate tasks” has been published in the journal Applied Cognitive Psychology.