Category Archives: Home science

In research studies and in real life, placebos have a powerful healing effect on the body and mind

The concept of placebos – which are sometimes called “sugar pills” – has been around since the 1800s. Image credits: Sharon McCutcheon.

Did you ever feel your own shoulders relax when you saw a friend receive a shoulder massage? For those of you who said “yes,” congratulations, your brain is using its power to create a “placebo effect.” For those who said “no,” you’re not alone, but thankfully, the brain is trainable.

Since the 1800s, the word placebo has been used to refer to a fake treatment, meaning one that does not contain any active, physical substance. You may have heard of placebos referred to as “sugar pills.”

Today, placebos play a crucial role in medical studies in which some participants are given the treatment containing the active ingredients of the medicine, and others are given a placebo. These types of studies help tell researchers which medicines are effective, and how effective they are. Surprisingly, however, in some areas of medicine, placebos themselves provide patients with clinical improvement.

As two psychologists interested in how psychological factors affect physical conditions and beliefs about mental health, we help our patients heal from various threats to well-being. Could the placebo effect tell us something new about the power of our minds and how our bodies heal?

Real-life placebo effects

Today, scientists define these so-called placebo effects as the positive outcomes that cannot be scientifically explained by the physical effects of the treatment. Research suggests that the placebo effect is caused by positive expectations, the provider-patient relationship and the rituals around receiving medical care.

Depression, pain, fatigue, allergies, irritable bowel syndrome, Parkinson’s disease and even osteoarthritis of the knee are just a few of the conditions that respond positively to placebos.

Despite their effectiveness, there is stigma and debate about using placebos in U.S. medicine. And in routine medical practice, they are rarely used on purpose. But based on a new understanding of how the non-pharmacological aspects of care work, along with considerations of safety and patient preferences, some experts have begun recommending greater use of placebos in medicine.

The U.S. Food and Drug Administration, the organization that regulates which medicines are allowed to go to the consumer market, requires that all new medicines be tested in randomized controlled trials that show they are better than placebo treatments. This is an important part of ensuring the public has access to high-quality medications.

But studies have shown that the placebo effect is so strong that many drugs don’t provide more relief than placebo treatments. In those instances, drug developers and researchers sometimes see placebo effects as a nuisance that masks the treatment benefits of the manufactured drug. That sets up an incentive for drug manufacturers to try to minimize placebo responses in their trials so that their drugs can pass the FDA’s tests.

Placebos are such a problem for the enterprise of drug development that a company has developed a coaching script to discourage patients who received placebos from reporting benefits.

Treating depression

Prior to the COVID-19 pandemic, about 1 in 12 U.S. adults had a diagnosis of depression. During the pandemic, those numbers rose to 1 in 3 adults. That sharp rise helps explain why US$26.25 billion worth of antidepressant medications were used across the globe in 2020.

Brain-imaging studies show that the brain has an identifiable response to the expectations and context that come with placebos.

But according to psychologist and placebo expert Irving Kirsch, who has studied placebo effects for decades, a large part of what makes antidepressants helpful in alleviating depression is the placebo effect – in other words, the belief that the medication will be beneficial.

Depression is not the only condition for which medical treatments are actually functioning at the level of placebo. Many well-meaning clinicians offer treatments that appear to work based on the fact that patients get better. But a recent study reported that only 1 in 10 medical treatments sampled was backed by what some consider the gold standard of high-quality evidence, according to a grading system from an international nonprofit organization. This means that many patients improve even though the treatments they receive have not actually been proved to be better than a placebo.

How does a placebo work?

The power of the placebo comes down to the power of the mind and a person’s skill at harnessing it. If a patient gets a tension headache and their trusted doctor gives them a medicine that they feel confident will treat it, the relief they expect is likely to decrease their stress. And since stress is a trigger for tension headaches, the magic of the placebo response is not so mysterious anymore.

Now let’s say that the doctor gives the patient an expensive brand-name pill to take multiple times per day. Studies have shown that such a treatment is even more likely to make them feel better, because all of those elements subtly convey the message that it must be a good treatment.

Part of the beauty of placebos is that they activate existing systems of healing within the mind and body. Elements of the body once thought to be outside of an individual’s control are now known to be modifiable. A legendary example of this is Tibetan monks who meditate to generate enough body heat to dry wet sheets in 40-degree Fahrenheit temperatures.

A field called Mind Body Medicine developed from the work of cardiologist Herbert Benson, who observed those monks and other experts mastering control over automatic processes of the body. It’s well understood in the medical field that many diseases are made worse by the automatic changes that occur in the body under stress. If a placebo interaction reduces stress, it can reduce certain symptoms in a scientifically explainable way.

Placebos also work by creating expectations and conditioned responses. Most people are familiar with Pavlovian conditioning. A bell is rung before giving dogs meat that makes them salivate. Eventually, the sound of the bell causes them to salivate even when they do not receive any meat. A recent study from Harvard Medical School successfully used the same conditioning principle to help patients use less opioid medication for pain following spine surgery.

Furthermore, multiple brain imaging studies demonstrate changes in the brain in response to successful placebo treatments for pain. This is excellent news, given the ongoing opioid epidemic and the need for effective pain management tools. There is even evidence that individuals who respond positively to placebos show increased activity in areas of the brain that release naturally occurring opioids.

And emerging research suggests that even when people know they are receiving a placebo, the inactive treatment still has effects on the brain and reported levels of improvement.

Placebos are nontoxic and universally applicable

In addition to the ever-increasing body of evidence surrounding their effectiveness, placebos offer multiple benefits. They have no side effects. They are cheap. They are not addictive. They provide hope when there might not be a specific chemically active treatment available. They mobilize a person’s own ability to heal through multiple pathways, including those studied in the field of psychoneuroimmunology. This is the study of relationships between the immune system, hormones and the nervous system.

By defining a placebo as the act of setting positive expectations and providing hope through psychosocial interactions, it becomes clear that placebos can enhance traditional medical treatments.

Using placebos to help people in an ethical way

The placebo effect is recognized as being powerful enough that the American Medical Association considers it ethical to use placebos to enhance healing on their own or with standard medical treatments if the patient agrees to it.

Clinically, doctors use the principles of placebo in a more subtle way than it is used in research studies. A 2013 study from the U.K. found that 97% of physicians acknowledged in a survey having used some form of placebo during their career. This might be as simple as expressing a strong belief in the likelihood that a patient will feel better from whatever treatment the doctor prescribes, even if the treatment itself is not chemically powerful.

There is now even an international Society for Interdisciplinary Placebo Studies. They have written a consensus statement about the use of placebos in medicine and recommendations for how to talk with patients about it. In the past, patients who improved from a placebo effect might have felt embarrassed, as if their ailment were not real.

But with the medical field’s growing acceptance and promotion of placebo effects, we can envision a time when patients and clinicians take pride in their skill at harnessing the placebo response.


Elissa H. Patterson, Clinical Assistant Professor of Psychiatry and Neurology, University of Michigan and Hans Schroder, Clinical Assistant Professor of Psychiatry, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Even a 3-second workout every day can make you fitter and stronger

Doctors and scientists are quick to point out that working out, even just for brief periods of time, can be very helpful for your health. But in a new study, a team of researchers really took that to the extreme: they wanted to see whether even just a few seconds of working out a day can make a difference. It did.

The team from the Edith Cowan University in Australia and the Niigata University of Health and Welfare in Japan recruited a group of healthy university students. They split them into two groups: 39 students performed a bicep curl at maximum effort for 3 seconds a day, 5 days a week, over 4 weeks. Meanwhile, 13 other students did not exercise over the same period.

The workout group performed three different bicep curl variations: isometric (holding the weight still), concentric (raising the weight), and eccentric (lowering the weight). They worked out with a special resistance machine. Overall, over the course of the four weeks, they worked out for just 60 seconds — but the results were visible.

The researchers measured the maximum voluntary contraction (MVC), a common measure of muscle strength, before and after the regimen. Surprisingly, the students in the workout group exhibited a notable change, while for the control group, there was no difference.

The workout group exhibited improvements for all types of bicep variations (12.8% for concentric strength, 10.2% for isometric strength, and 12.2% for eccentric strength). Overall, the muscle strength improved by 11.5%. However, when they looked at other measures of strength, the results were less impressive.
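
To make the arithmetic concrete, here is a minimal sketch of how such a percentage improvement is computed from before-and-after MVC torque values. The “before” value below is invented for illustration; only the 11.5% figure comes from the study.

```python
# Illustrative only: how a percent strength change is computed from
# before/after maximum voluntary contraction (MVC) torque.
# The "before" torque is hypothetical; the 11.5% figure is the reported overall gain.

def percent_change(before, after):
    """Relative improvement, in percent."""
    return (after - before) / before * 100

before_nm = 50.0              # hypothetical MVC torque before training, in N*m
after_nm = before_nm * 1.115  # an 11.5% overall improvement, as reported

print(f"Improvement: {percent_change(before_nm, after_nm):.1f}%")  # -> 11.5%
```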

The study authors note that the 3-second eccentric MVC of the elbow flexors, performed daily, increased isometric, concentric, and eccentric MVC torque by more than 10%. “It was concluded that the daily 3-second eccentric MVC over 20 days produced more potent effects than isometric or concentric MVC on neuromuscular adaptations,” the researchers write in the study.

The muscle thickness did not increase significantly, the researchers write, which was in line with what they were expecting. In addition, the study’s sample size was small, which is an important limitation. Nevertheless, the results are important and are an indication that even short (very short) workout training sessions can make a difference.

The results are expected to be particularly significant for beginners, people who have never really worked out or haven’t worked out for a while. It could also help fight muscle degradation in old age. Furthermore, researchers say the same effect could be observed in other muscle groups, though there is a need for further studies to confirm this.

The study was published in the journal Scandinavian Journal of Medicine and Science in Sports.

Thousands of tons of bread are wasted every year — in Sweden alone

The fact that food waste is a big problem is (or at least, should be) already well known. But hearing just how much food is wasted can be sobering. A new doctoral study from Sweden offers a nationwide view of how much bread is wasted every year — and how this food waste could be prevented.

“We have made calculations of the amount of bread waste, analysed the reasons behind it, and suggested solutions. Then we evaluated this in relation to potential environmental savings,” said Pedro Brancoli, the lead study author.

Image credits: Douglas Alves.

The project wasn’t focused on bread initially. It mostly aimed to quantify food waste in general and assess what products were most often discarded and placed the biggest burden on the environment. Surprisingly, researchers found that bread — which has not been considered to be a significant waste source before — accounted for much of the environmental damage. The numbers are striking, Brancoli explains.

“We could establish that large amounts of bread are wasted in Sweden. To be more precise, 80,000 tons per year, or about 8 kg per person and year. The current bread distribution system also proved to be a significant source of bread waste. But we were also able to show that the bread that is wasted actually has a significant value,” he explained.
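
As a rough sanity check of those figures, assuming a Swedish population of roughly 10 million (an approximation, not a number from the study):

```python
# Back-of-the-envelope check of the per-capita bread waste quoted above.
# The population figure is an assumption.

total_waste_tonnes = 80_000
population = 10_000_000  # approximate population of Sweden

kg_per_person = total_waste_tonnes * 1_000 / population
print(f"~{kg_per_person:.0f} kg of bread wasted per person per year")  # ~8 kg
```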

Globally, around a third of all the food produced is wasted, and food waste accounts for 6% of our total greenhouse gas emissions. There are few reliable statistics on bread waste, though some estimates place bread waste at around 30%.

The fact that there’s so much wasted bread could, however, be a blessing in disguise. Bread waste can be used as a raw material to produce a number of different products, Brancoli explains. From animal feed, ethanol, or beer, to the substrate for fungus growth, bread can be used in a number of different applications.

“These alternatives have great potential to reduce the environmental impact in terms of the bread life cycle,” Brancoli said.

He envisions a more circular lifecycle for bread, with products being used for something else instead of simply being discarded. However, in order for that to happen, we need more cooperation between companies across the entire food chain — from wheat-growing to packaging and distribution. In addition to reducing the negative environmental impact, this can also help companies save money long term, the researcher believes.

Ultimately, Brancoli hopes his PhD thesis can start an important conversation around food waste.

“About a third of all food produced is lost on the way from farm to table. This leads to not only an environmental impact, but also unnecessary economic costs and social consequences through reduced access to food. This has led to an increased political and public debate on the need to address food waste, while at the same time increasing interest in the environmental, economic, and social effects it causes,” the researcher concludes.

The PhD thesis was published here.

Research shows how happy couples argue — and why this matters a lot

When you spend a lot of time with someone, conflict is unavoidable. Even the happiest couples argue every now and then — in fact, arguments and couple happiness aren’t as opposed as you may think — it’s all about how you argue.

In a recent study, researchers observed two samples of couples who self-described as ‘happily married’. The first group (57 couples) were in their mid-to-late 30s and had been married an average of nine years. The second group (64 couples) were in their early 70s and had been married an average of 42 years.

The two groups were first surveyed on what’s important to them, and had largely similar responses — which already tells you a few things about what’s generally important in couples. Things like intimacy, communication, and money were most serious, while jealousy, religion, and the rest of the family were viewed as less serious.

When it came to arguing, couples generally avoided focusing on the most complex problems and instead strategically focused on things that could be solved relatively easily.

“Focusing on the perpetual, more difficult-to-solve problems may undermine partners’ confidence in the relationship,” said Amy Rauer, associate professor of child and family studies and director of the Relationships and Development Lab in the College of Education, Health, and Human Sciences. For instance, happy couples tend to not argue about whether they trust each other, but they will bicker about who’s doing more around the house.

“Re-balancing chores may not be easy, but it lends itself to more concrete solutions than other issues,” Rauer said. “One spouse could do more of certain chores to balance the scales.”

Overall, couples that had been married longer tended to report fewer arguments — but when they did argue, they tended to argue in productive ways, focusing on things that could be solved and emphasizing solutions rather than just venting.

“Happy couples tend to take a solution-oriented approach to conflict, and this is clear even in the topics that they choose to discuss,” said Rauer, the study’s lead author.

Essentially, couples that stay together happily seem to (consciously or not) strategically pick their battles, focusing on those that can be solved rather than tilting at windmills. Although this is a relatively small study and there may be cultural differences at play, researchers suspect that this is one of the keys to long-lasting, happy relationships.

The bottom line is not necessarily to avoid fighting — but to choose your battles carefully.

“Being able to successfully differentiate between issues that need to be resolved versus those that can be laid aside for now may be one of the keys to a long-lasting, happy relationship,” Rauer concludes.

Journal Reference: Amy Rauer, Allen K. Sabey, Christine M. Proulx, Brenda L. Volling. What are the Marital Problems of Happy Couples? A Multimethod, Two‐Sample Investigation. Family Process, 2019; DOI: 10.1111/famp.12483

Why kids should not have lots of toys (and what to do if yours have too many)

Phillip Glickman/Unsplash

In the United States, children receive more than US$6,500 (A$9,073) worth of toys between the ages of two and 12. Here in Australia, the toy industry is worth more than A$3.7 billion annually. Lockdowns have resulted in online toy sales growing by 21.4% during 2021, with the online toy industry now growing faster than the overall online retail sector.

The number of toys in Australian households is likely to increase when Christmas gift-giving starts in earnest.

Apart from environmental concerns, having lots of toys can negatively impact children as well as parents and carers.

Here are some ideas for dealing with existing toys, as well as the upcoming influx of new ones.

The problem with having too many toys

Spaces with lots of toys are overstimulating and impair the ability of babies, toddlers and younger children to learn and play creatively.

Similar to cluttered pantries or office spaces, which make it hard for adults to focus, having too many toys around the house can make it difficult for children to concentrate, learn, and develop important skills around play.

Research shows fewer toys at a time leads to better quality playtime for toddlers, allowing them to focus on one toy at a time, build concentration skills, and play more creatively.

The other issue with having lots of toys “in play” is that we tend to place less value on them. By reducing the number of toys, adults can help children develop appreciation and gratitude.

What to do if you have too many toys

De-cluttering is easier said than done, but organising toys has many benefits for children and adults alike.

Fewer, well-organised toys lead to a calmer, less stressful environment, which also reduces overstimulation in children and contributes to better behavioural regulation.

Reducing the number of toys can also increase opportunities for children to build frustration tolerance, and having to focus on one or two toys at a time can improve problem-solving skills as well as developing independent play and creativity.

Organising toys can also help parents and carers improve general structure and routine in the home, which is great for everyone!

How to organise toys

A good first step is to conduct an inventory of all the toys in your house. Divide toys into “keep and play”, “keep and store” (toys that are sentimental, family heirlooms or part of a collection that can be put in storage) and “give-away or sell”.

Toys that are “keep and play” should be organised in ways that allow children to clearly see and easily access them.

Put two-thirds of these toys away in storage. Every month, rotate the toys available, ensuring you have an interesting selection of “social” and “solo play” toys, and try to include “good” toys.

Rotating toys can help with space issues and importantly it keeps the novelty alive.

Is there such a thing as ‘good’ toys?

With such a huge variety of toys available, the choice can be overwhelming. But when you are thinking about buying toys, there are some features that make certain toys better than others.

“Good” toys are those that are appropriate for the child’s age and developmental level. If you are not sure if a toy is suitable in this regard, seek advice from staff in specialist toy stores or consult child development websites such as raisingchildren.net.au and earlychildhoodaustralia.org.au.

Toys should stimulate learning and keep a child’s interest at the same time and they should be safe and durable. In addition, toys should be able to stand the test of time (think Lego) and ideally be used in a variety of different ways over the years.

We recognise that with more than 17% of Australian children living in poverty, there are also many families who do not have the problem of having too many toys.

Good toys don’t have to be expensive. While Australians spend millions each year on toys, it’s worth remembering simple, everyday household items – cardboard boxes, saucepans and cooking implements, buckets and tubs, cardboard tubes, plastic containers and stacking cups – make excellent toys for younger children.

Categorising ‘good’ toys

Parents may find it useful to categorise good toys. This ensures when you are organising toys, children have access to a variety of toys suitable for different types of learning and play development.

Here are five ways to categorise toys:

1. manipulative/functional toys – these include construction and building toys, puzzles, stacking and nesting, brain-teasers, dressing toys, beads, blocks, bath toys, and sand and water toys. Manipulative toys are important for helping develop fine and large motor skills, dexterity and coordination, which are vital for drawing, writing, dressing and more.

2. active toys – including various outdoor toys, climbing equipment, sports equipment and ride-on toys. Active toys are great for general physical activity and motor skills development.

3. learning toys – these include board and card games, books, and specific-skill toys such as letter identification and shape and colour sorters.

4. creative toys – such as arts and craft materials, musical toys and instruments including digital music and drawing apps.

5. make-believe – including dress ups and role play (costumes, clothing, hats, masks and accessories), stuffed toys, puppets, dolls, transportation toys.

What to do with toys you don’t need

It can be hard parting with beloved toys, those that have been part of a special collection or even just trying to clear out toys that have accumulated over the years. Many people find it emotionally challenging to give away toys and prefer to keep and pass them on to children and family members.

There are many charitable organisations that will be pleased to find new homes for good quality toys – The Salvation Army, Save the Children and Vinnies – all welcome toy donations, especially at this time of year. Also search “toy donation” in your area to find local organisations and make sure what you are giving is in good condition (if it’s a puzzle, make sure it has all the pieces!).

Online platforms selling used items or secondhand dealers are other options which will give your treasures a second life.

Finally, as we head into Christmas with Australians tipped to spend more than $11 billion on gifts, it’s worthwhile having the list of “good” toys handy so you can easily answer friends and relatives when they inevitably ask “what can we get the kids for Christmas?”.


Louise Grimmer, Senior Lecturer in Retail Marketing, University of Tasmania and Martin Grimmer, Professor of Marketing, University of Tasmania

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Will COVID-19 kill the open-plan office?

Taking down walls makes offices cheaper, but it also makes them perfect spreaders for viruses and bacteria. A flood of changes promises to bring back those walls — or rather, take a bite out of the office itself.

Viral transmission

It was supposed to be the ‘better’ way, a design that would foster collaboration, creativity, and cooperation among teams. Companies loved it, and the open plan office became the default of many corporations. However, it wasn’t just ideas and thoughts that were easily shared, but also pathogens.

A decade ago, researchers in Arizona conducted a study to see just how fast a virus can spread inside an average office space. They placed a nonpathogenic virus on the door to an open plan office with 80 employees. In only 4 hours, over half of the commonly touched surfaces became contaminated. By the end of the day, virtually the entire office (as well as the bathrooms, doors, and breakroom) was contaminated.

“Behaviors in the workplace contribute to the spread of human viruses via direct contact between hands, contaminated surfaces and the mouth, eyes, and/or nose,” the researchers conclude.

As it turns out, while creativity and cooperation may be hard to quantify, viral spread was not, and open plan offices were more likely to make people sick. A recent study found that people working in this type of office were more likely to take sick days off.

When the COVID-19 pandemic came, the transmission hazards of offices were confirmed, and open offices were linked to viral spread. Droplets from a single sneeze can travel several meters, contaminating surfaces for days; even if carefully cleaned, the open office was bound to be less safe than more isolated types of offices.

Then, after people increasingly started working from home last year, returning to work in an open plan office simply seemed unacceptable to most. Many workplaces introduced layout changes including buffer zones and plastic screens intended to reduce the risk of viral transmission, up to the point where there was even a plexiglass shortage.

But this created the illusion of safety rather than actual safety, and people weren’t too keen to return to open spaces — and not just because of the pandemic.

A growing list of grievances

The open plan office, it turns out, had it coming for a long time.

Systematic surveys showed that the effects of open-plan offices were not always as positive as purported. Many workers complained about high levels of noise, which hampered productivity and caused stress and higher blood pressure. Many would scurry off to quiet rooms, and it was not uncommon for open offices to actually decrease face-to-face conversations — in the noise and the crowd, direct communication ironically became rarer. In one 2018 study, face-to-face communication was found to decline by up to 70% due to the open office, while electronic communication increased as employees began to “socially withdraw”. Another 2018 study found that employees were aware of the viral transmission risks associated with open spaces, and the fear of infection triggered significant stress. Workers also reported feeling more distracted in open spaces.

Furthermore, the open space takes away what little privacy employees have. It’s hard to hide a cluttered desk in an open space, and it’s likely that everyone around will know what you’re eating — and when. If your job entails phone conversations, that’s also a problem: one study found that employees were less likely to share honest opinions on phone calls while in an open space, due to fears that their co-workers might hear them.

Indeed, the open plan office, the darling of so many corporations, was in trouble way before the pandemic.

For all the advantages it offered, like easier office logistics and breaking up silo working, open spaces seemed to cause a fair bit of trouble. The thing is, even though many disliked open spaces, they didn’t have much of a say in the matter. At least, until recently.

The Great Resignation and working from home

Working from home has a draw that many employees have discovered during the pandemic.

Among the many unexpected consequences of the pandemic is a phenomenon people are starting to call The Great Resignation. Basically, the world is experiencing an unexpected exodus of workers. A whopping 4 million Americans a month are quitting their jobs, and workers in other parts of the world are echoing similar trends, sending shockwaves across the entire market.

It’s hard to say why this is happening. A part of it can be traced to economic initiatives meant to tackle the effects of the pandemic, but that’s just the tip of the iceberg. A lot of people are feeling burned out, want a better life balance, or are just looking for better or more meaningful jobs. To add even more fuel to this fire, plenty of workers have become accustomed to the advantages of working from home and are prepared to quit their jobs if they’re not given the option of working from home.

“What will it take to encourage much more widespread reliance on working at home for at least part of each week?” asked Frank Schiff, the chief economist of the US Committee for Economic Development, in The Washington Post in 1979. Now, we know: a pandemic and a great wave of resignations.

Basically, the pandemic has shown that in a great number of cases, we can in fact work from home — despite what some employers would have you believe. A whopping 37 percent of U.S. jobs could potentially be done remotely, and this spells trouble for all offices, not just open ones.

Indeed, for many jobs, the technology of working from home is already easily accessible. It was the culture of the workplace that was keeping people inside the workplace. But now, that’s all been blown open.

The clock is ticking, but change is unlikely to be definitive

Semi-open spaces, or other designs that tweak the open plan, may be more palatable for workers in the near future.

From the very start, the idea behind open plan offices was flexibility and freedom; but now, many people want a different type of flexibility and freedom. In the short term, the pandemic virtually stopped the usage of such offices, but in the long run, it triggered changes that will likely lead to their downfall.

However, this doesn’t mean that the concept will become obsolete or go away — far from it. But the idea that the open plan office is the space of the future (as some companies were keen to believe) seems bound to fail. There is still a place for these offices in some companies, in some instances, but it’s not a panacea or a universally desirable solution; the open plan office is likely to become a niche rather than a go-to option.

Of course, offices as a whole will likely change and clever design changes may yet salvage open spaces or help convert them into something more palatable. Truth be told, we’re not sure what type of offices will be desirable, or how the idea of the office will morph in this extremely volatile period.

Ultimately, the cascade of changes triggered by the pandemic is far from over — it’s just beginning. We’re just starting to see their effects; who knows what will happen next?

Going to sleep before 10 PM could lower your risk of heart disease

Going to bed between 10 and 11 PM may be good for your heart — especially if you’re a woman.

It’s been shown time and time again that not getting enough sleep is bad for your heart (and bad for you in general, for a number of reasons). But not all sleep is equal, and not all bedtimes are equal. In a new study, researchers analyzed data from over 88,000 individuals in the UK Biobank, collecting data on their sleep and wake-up times using a monitoring bracelet. Researchers then followed up four years later, looking for any cardiovascular diseases (such as heart attack, heart failure, chronic ischaemic heart disease, stroke, and transient ischaemic attack). Over 3,000 people developed cardiovascular disease over the period of the study.

The ideal period for going to sleep (the one that was correlated with the lowest risk of heart disease) was 10-11 PM. Many of the people who developed a disease went to sleep after 11 PM — the risk was 12% greater for those who went to sleep between 11:00 and 11:59 PM, 25% higher for those going to bed after midnight, and, remarkably, 24% higher for those who went to sleep before 10 PM.

While this is just relative risk, and the total risk was still relatively low (just 3.6% of participants developed a disease), the differences are significant, especially because the data was acquired directly (as opposed to self-reporting, which is often used in this type of study but can be inaccurate).
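
To see how those relative increases translate into absolute numbers, here is an illustrative calculation. It treats the overall 3.6% incidence as if it were the reference group’s baseline risk, which is a simplification (the study reports risk relative to the 10-11 PM group), so the outputs are only indicative.

```python
# Illustrative only: converting relative risk increases into rough absolute risks,
# using the overall 3.6% incidence as a stand-in for the baseline.

baseline = 0.036  # ~3.6% of participants developed cardiovascular disease

for bedtime, relative_increase in [("11:00-11:59 PM", 0.12),
                                   ("after midnight", 0.25),
                                   ("before 10 PM", 0.24)]:
    absolute = baseline * (1 + relative_increase)
    print(f"{bedtime}: ~{absolute * 100:.1f}% vs ~3.6% baseline")
```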

Still, the study only establishes a correlation, which does not necessarily imply causation — so in other words, no cause-effect link was yet demonstrated. However, researchers stress that the link persisted after adjustments were made for sleep duration and sleep irregularity.

“The body has a 24-hour internal clock, called circadian rhythm, that helps regulate physical and mental functioning,” said study author Dr. David Plans of the University of Exeter, UK. “While we cannot conclude causation from our study, the results suggest that early or late bedtimes may be more likely to disrupt the body clock, with adverse consequences for cardiovascular health.”

However, there were important gender differences. The association with increased cardiovascular risk was stronger in women — for men, only going to sleep before 10:00 PM remained significant. Researchers aren’t exactly sure why this seems to be the case, but they suggest that sleep timing may be a risk factor for everyone, not just for women.

“While the findings do not show causality, sleep timing has emerged as a potential cardiac risk factor – independent of other risk factors and sleep characteristics. If our findings are confirmed in other studies, sleep timing and basic sleep hygiene could be a low-cost public health target for lowering risk of heart disease.”

Based on these findings, researchers suggest trying to go to sleep sometime between 10 and 11 PM — or at least before midnight. The more you delay, the more pressure you are likely putting on your body.

“Our study indicates that the optimum time to go to sleep is at a specific point in the body’s 24-hour cycle and deviations may be detrimental to health. The riskiest time was after midnight, potentially because it may reduce the likelihood of seeing morning light, which resets the body clock,” Plans added.

Going to bed before midnight can be pretty challenging, especially in our modern, fast-paced world. However, here are a few science-based things you can do to ease your way into a good night’s sleep:

  • Turn off all artificial lights — yes, this includes your smartphone, laptop, TV, whatever. Because the natural production of melatonin (which is required for sleep) can be suppressed by light, looking into a screen may prevent the body from feeling sleepy.
  • Stay away from caffeine for several hours before bedtime — give the caffeine enough time to exit the body.
  • Stay physically active — there is solid evidence that physical activity can help improve your sleep quality. Intense physical activity less than two hours before bedtime should be avoided, but otherwise, exercising really helps.
  • Meditation and relaxation techniques can also help — although more research is needed, there is evidence that things like breathing exercises, meditation, and relaxation methods can help reduce your stress levels and help you fall asleep faster.

Journal Reference: Nikbakhtian S, Reed AB, Obika BD, et al. Accelerometer-derived sleep onset timing and cardiovascular disease incidence: a UK Biobank cohort study. Eur Heart J Digit Health. 2021. doi:10.1093/ehjdh/ztab088.

Your cat is using your voice to constantly track your location inside the house

Previous studies suggested that cats aren’t really good at tracking objects they can’t see — but a new study shows they’re actually good at mapping your indoor position.

Image credits: Kristina Yadykina.

Cats have been with us for a very long time. However, there’s still much we don’t know about them, especially because cats are notoriously difficult to study. Compared to other pets like dogs, there have been fewer studies on cats. Saho Takagi, a researcher at Azabu University in Japan, wanted to shed new light on the minds of our adorable companions.

“My research motivation is simply to better understand cats’ mysterious minds. Cats are very familiar animals, but their minds are still shrouded in mystery compared to those of dogs because it is difficult to conduct experiments on cats.  Cats don’t adjust to humans, they sleep when they want to, and they don’t like strange places or people,” Takagi told ZME Science.

In a new study, Takagi and colleagues analyzed whether cats create mental maps of their owners inside the house. Cats have excellent hearing abilities, Takagi explains, and this ability was used to reveal tendencies inside cats’ minds.

Having a mental representation of non-visible things is linked to something called “object permanence” — the ability to know that objects or creatures continue to exist even when they are not seen. Humans develop this ability early on, and several animals have been shown to have it as well (including chimps, bonobos, bears, and jays).

In order to show if cats have the same ability, researchers devised an experimental setup in which they played recordings of the owners’ voices to cats from different parts of the house to simulate a “teleportation” scenario. This type of experiment has also been done with vervet monkeys.

Three experiments were carried out. In the first one, the owner’s voice was played back sequentially from two separate locations. In the second experiment, the voice of a familiar cat was played, and in the last one, a nonspecific sound was played as a control, to see if the cats were simply reacting to any sound, or just to that of their owner.

Image credits: Takagi et al (2021).

The team found that the cat vocalizations used in the second experiment weren’t suitable for evaluating cats’ abilities for several reasons, but results showed that cats were surprised when their owners appeared to “teleport” (i.e. when their voice was played from a different room than the one they were previously heard in). These results suggest that cats keep a mental representation of their owners.

“We revealed that cats have socio-spatial cognition. Specifically, when they heard their owners’ voices, they were found to be mentally tracking the location of their invisible owners. Thinking mentally about what we cannot see is a cognitive ability that forms the basis for more complex thinking skills. This study suggests that cats acquire a variety of information from sounds and “think” about them,” Takagi says.

We asked Takagi whether playing the owners’ voice through a speaker has disadvantages for recognition, but apparently, this has already been addressed in previous studies and should not pose a problem for this type of study.

“It is true that the audible range of humans is different from that of cats. In addition, since we used human speakers in this experiment, the sound may be different from what cats usually hear. However, previous studies have shown that cats can correctly identify people from their sounds even when using human speakers. This result cannot be explained without assuming that the cats are able to identify their owners’ voices.”

Ultimately, the results can only be explained if cats mentally map their owners. In other words, your cat is constantly tracking you in the house, showing an important cognitive ability.

The study has been published in PLOS ONE.

Students are told not to use Wikipedia for research. But it’s a trustworthy source

At the start of each university year, we ask first-year students a question: how many have been told by their secondary teachers not to use Wikipedia? Without fail, nearly every hand shoots up. Wikipedia offers free and reliable information instantly. So why do teachers almost universally distrust it?

Wikipedia has community-enforced policies on neutrality, reliability and notability. This means all information “must be presented accurately and without bias”; sources must come from a third party; and a Wikipedia article is notable and should be created if there has been “third-party coverage of the topic in reliable sources”.

Wikipedia is free, non-profit, and has been operating for over two decades, making it an internet success story. At a time when it’s increasingly difficult to separate truth from falsehood, Wikipedia is an accessible tool for fact-checking and fighting misinformation.

Why is Wikipedia so reliable?

Many teachers point out that anyone can edit a Wikipedia page, not just experts on the subject. But this doesn’t make Wikipedia’s information unreliable. It’s virtually impossible, for instance, for conspiracies to remain published on Wikipedia.


For popular articles, Wikipedia’s online community of volunteers, administrators and bots ensure edits are based on reliable citations. Popular articles are reviewed thousands of times. Some media experts, such as Amy Bruckman, a professor at the Georgia Institute of Technology’s computing centre, argue that because of this painstaking process, a highly-edited article on Wikipedia might be the most reliable source of information ever created.

Traditional academic articles – the most common source of scientific evidence – are typically only peer-reviewed by up to three people and then never edited again.

Less frequently edited articles on Wikipedia might be less reliable than popular ones. But it’s easy to find out how an article has been created and modified on Wikipedia. All modifications to an article are archived in its “history” page. Disputes between editors about the article’s content are documented in its “talk” page.

To use Wikipedia effectively, school students need to be taught to find and analyse these pages of an article, so they can quickly assess the article’s reliability.
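
For readers comfortable with a little code, the same revision history can also be pulled programmatically through Wikipedia’s public MediaWiki API, which serves the data behind the “history” page. A minimal sketch, using the third-party requests library and an arbitrary example article:

```python
# Fetch the most recent edits to a Wikipedia article via the MediaWiki API.
# Requires the third-party "requests" package; the article title is arbitrary.
import requests

def recent_revisions(title, limit=5):
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvlimit": limit,
        "rvprop": "timestamp|user|comment",
        "format": "json",
        "formatversion": 2,
    }
    resp = requests.get("https://en.wikipedia.org/w/api.php", params=params,
                        headers={"User-Agent": "media-literacy-demo/0.1"})
    resp.raise_for_status()
    return resp.json()["query"]["pages"][0]["revisions"]

for rev in recent_revisions("Climate change"):
    print(rev["timestamp"], rev["user"], "|", rev.get("comment", "")[:60])
```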

Is information on Wikipedia too shallow?

Many teachers also argue the information on Wikipedia is too basic, particularly for tertiary students. This argument supposes all fact-checking must involve deep engagement. But this is not best practice for conducting initial investigation into a subject online. Deep research needs to come later, once the validity of the source has been established.

Still, some teachers are horrified by the idea students need to be taught to assess information quickly and superficially. If you look up the general capabilities in the Australian Curriculum, you will find “critical and creative thinking” encourages deep, broad reflection. Educators who conflate “critical” and “media” literacy may be inclined to believe analysis of online material must be slow and thorough.

Image via CDC.

Yet the reality is we live in an “attention economy” where everyone and everything on the internet is vying for our attention. Our time is precious, so engaging deeply with spurious online content, and potentially falling down misinformation rabbit holes, wastes a most valuable commodity – our attention.

Wikipedia can be a tool for better media literacy

Research suggests Australian children are not getting sufficient instruction in spotting fake news. Only one in five young Australians in 2020 reported having a lesson during the past year that helped them decide whether news stories could be trusted.

Our students clearly need more media literacy education, and Wikipedia can be a good media literacy instrument. One way to use it is with “lateral reading”. This means when faced with an unfamiliar online claim, students should leave the web page they’re on and open a new browser tab. They can then investigate what trusted sources say about the claim.


Wikipedia is the perfect classroom resource for this purpose, even for primary-aged students. When first encountering unfamiliar information, students can be encouraged to go to the relevant Wikipedia page to check reliability. If the unknown information isn’t verifiable, they can discard it and move on.

More experienced fact-checkers can also beeline to the authoritative references at the bottom of each Wikipedia article.

In the future, we hope first-year university students enter our classrooms already understanding the value of Wikipedia. This will mean a widespread cultural shift has taken place in Australian primary and secondary schools. In a time of climate change and pandemics, everyone needs to be able to separate fact from fiction. Wikipedia can be part of the remedy.


Rachel Cunneen, Senior Lecturer in English and Literacy Education, Student Success and LANTITE coordinator, University of Canberra and Mathieu O’Neil, Associate Professor of Communication, News and Media Research Centre, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Consumer habits in rich countries are killing 2 million people a year

A new study quantified the effects of our consumer habits on particulate emissions around the world — and the effects of these emissions on human health. Long story short: you may want to be more careful with what you’re buying.

We should be more aware of the environmental impact of our shopping.

Air pollution is a big problem. Every year, exposure to fine particulate matter contributes to over 4 million deaths from heart disease and stroke, lung cancer, chronic lung disease, and respiratory infections. Most of these deaths are in developing countries, where pollution is associated with the production of consumer goods — not just for the locals, but also for international markets.

“Among the many environmental problems affecting human health, the greatest threat is that posed by the inhalation of particles with an aerodynamic diameter of 2.5 μm or less, abbreviated to PM2.5,” the new study explains.

Air pollution emissions (especially those in low-income countries) are often associated with the production of goods that are consumed in other, often high-income, countries. Recently, researchers have started taking a closer look at the health impacts of transboundary pollution transport (pollution created in one nation which then affects another nation) and trade-related emissions. However, these impacts are hard to quantify.

In particular, the impact of PM2.5 emissions is difficult to estimate, since some of this pollution comes from secondary particle formation, which forms within the atmosphere as a result of other emissions.

A team led by Keisuke Nansai, an adjunct professor at the Graduate School of Environmental Studies at Nagoya University in Japan, was able to address this. They conducted a modeling study to quantify nation-to-nation consumer responsibility for global mortality due to primary and secondary PM2.5 particles. They focused on the impact of the G20’s 19 individual member countries (the G20 comprises 19 of the world’s biggest economies plus the European Union).

The researchers linked the trade and consumption of goods in G20 nations to PM2.5 exposure in 199 countries. They found that in 2010, consumption in G20 nations caused 1.983 million premature deaths at an average age of 67 years; 78,600 of these occurred in infants. Overall, the consumption of goods in the USA and ten other G20 nations induced over 50% of premature deaths associated with PM2.5 in other countries.
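
The accounting idea behind such numbers can be sketched very simply. The following is a toy illustration of consumption-based attribution, not the study’s actual model, and every figure in it is invented: deaths occurring in producing regions are assigned to the countries whose consumption drives the underlying emissions, then summed per consuming country.

```python
# Toy illustration of consumption-based accounting (not the study's model).
# deaths[producer][consumer] = premature deaths occurring in `producer`
# attributed to goods consumed in `consumer`. All numbers are invented.

deaths = {
    "producer_X": {"country_A": 120, "country_B": 40},
    "producer_Y": {"country_A": 300, "country_B": 90},
}

def consumption_footprint(consumer):
    """Total premature deaths, wherever they occur, attributed to one consumer."""
    return sum(row.get(consumer, 0) for row in deaths.values())

for country in ("country_A", "country_B"):
    print(country, "->", consumption_footprint(country), "attributed deaths")
```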

This means that we need to rethink how we treat the link between our consumption and pollution, Nansai told ZME Science.

“The most surprising part of this analysis is not the size of the number of premature deaths due to consumption, but the fact that in many developed countries the average age of premature deaths due to consumption basis is much lower than that of their own production basis. I believe this illustrates the value of national and corporate pollution prevention on a consumption basis in saving lives,” the researcher explained.

We can all make a difference

The good thing about these findings is that we can all make a difference. The key, Nansai explains, is to build awareness of the issue and incorporate it into our education. We’re all interconnected in the world, and it’s important to be aware of this.

“It is essential to know that there are people on the other side of the world who can only breathe air with a high risk of death. And I believe that understanding how we relate to that problem and empathizing with that problem is a critical element in changing our behavior. In this respect, I think it is a significant role of science to show the impact of shadow emissions in numbers,” the researcher tells ZME Science.

“Rather than a boom in people’s interest, it needs to take root. I think it is vital that science curricula in primary and secondary schools develop an understanding that the world’s environmental problems are connected as a system, that is, that they have a life-cycle thinking.”

Another important piece of the puzzle is pushing producers and sellers to become more transparent and disclose the environmental impact of their products and supply chains.

“When choosing a company or a product, consumers should pay attention to whether the company manages the environment throughout its life cycle. One of the most important actions is to require companies to disclose their environmental management throughout the supply chain.”

The study “Consumption in the G20 nations causes particulate air pollution resulting in two million premature deaths annually” was published in Nature Communications.

Physics sheds light on the 20-second handwashing rule. Here’s why it’s so effective

Physics plays a big role into clean hands. (Image: Pixabay)

In the past several months, the CDC has touted 20 seconds as the standard for all hand-washing activities, bringing a number of rarely sung happy birthdays, fight songs, and other multiple-second ditties out of the closet as makeshift timers.

However, studies were short on explanations for why exactly 20 is the magic number. Well, now there is one.

Harder, better, faster, washer

In a new report published in Physics of Fluids, a journal of the American Institute of Physics, researchers have created a model that captures the key mechanics of hand-washing. It turns out faster hand movement is better.

By simulating the motion you make when cleaning your hands, researchers estimated the time scales on which particles like viruses and bacteria are removed from your hands. Their model acted in two dimensions, with one wavy surface moving past another and a thin film of liquid separating the two — it’s imperfect but still good enough to get an idea. These wavy surfaces represented hands because of their surface roughness at small spatial scales.

Particles would be trapped on the rough surfaces in the wells of the hands, like the bottom of a valley. Vigorous movement and high water pressure would bring the particles out of their little valley homes, up to the surface and off your skin. According to Paul Hammond, the report’s author, roughly 20 seconds is the time his model predicts is needed to dislodge these particles from your hands.

“Basically, the flow tells you about the forces on the particles. Then you can work out how the particles move and figure out if they get removed,” said Hammond, who likened the process to scrubbing a stain on a shirt where the faster the motion the more likely it is to remove it. “If you move your hands too gently, too slowly, relative to one another, the forces created by the flowing fluid are not big enough to overcome the force holding the particle down.”
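
Hammond’s point about flow forces can be illustrated with a back-of-the-envelope estimate based on simple shear in a thin liquid film. This is not his model, and every parameter below is an assumed, order-of-magnitude value.

```python
# Order-of-magnitude estimate only (not Hammond's model): shear stress in a
# thin water film between two rubbing surfaces, tau ~ mu * U / h, and the
# resulting drag force on a roughly micron-sized particle. All values assumed.

mu = 1.0e-3   # dynamic viscosity of water, Pa*s
U = 0.2       # assumed relative speed of the hands, m/s
h = 1.0e-4    # assumed thickness of the water film, m (~0.1 mm)
d = 1.0e-6    # assumed particle diameter, m (~1 micron)

tau = mu * U / h        # shear stress in the film, Pa
force = tau * d ** 2    # rough drag force on the particle, N

print(f"shear stress ~ {tau:.1f} Pa, drag force ~ {force:.1e} N")
# Doubling the rubbing speed U doubles the force: the "faster is better" intuition.
```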

Hammond states that the model does not take into account chemical or biological processes that occur when using soap, but it’s pretty well known across the board that soap only improves the probability that your hands will, in fact, become cleaner.

“These viruses have membranes that surround the genetic particles that are called lipid membranes because they have an oily, greasy structure,” Thomas Gilbert, an associate professor of chemistry and chemical biology at Northeastern University, told the BBC. “It’s this kind of structure that can be neutralized by soap and water.”

He explained that dissolving the outer “envelope” breaks up virus particles, and the genetic material, the RNA that takes over human cells in order to make copies of the virus, is swept away and destroyed by the chemical agents.

Just knowing how the physics of handwashing works can give us some clues as to how we can create more effective and environmentally friendly soaps, the researcher concludes.

“Nowadays, we need to be a bit more thoughtful about what happens to the wash chemicals when they go down the plughole and enter the environment.”

In the end, there is much more that goes into the story of handwashing, but this study does explain some puzzles and lay the foundation for future research. Truth be told, we’ve learned in the pandemic that we could all use a bit of work on our handwashing.

Why shaving dulls even the sharpest of blades

Shaving technology has progressed quite a bit in the past few decades, but one thing has remained annoyingly constant over time: the fact that blades get dull with use.

Image credits: Hamid Roshaan.

Human hair is 50 times softer than the blade itself — so it doesn’t seem like it would be too much of a challenge to cut through. At first, it’s not. New blades (at least the quality ones) get the job done easily. But if you use them again (and again and again), they tend to get dull.

Many shaving razors (especially the small ones) also can’t be sharpened. Knives, for comparison, can be sharpened and last for many years, but blades need to be replaced quickly. To see why this happens, a team of MIT engineers looked at this interaction in unprecedented detail.

“Our main goal was to understand a problem that more or less everyone is aware of: why blades become useless when they interact with much softer material,” says C. Cem Tasan, the Thomas B. King Associate Professor of Metallurgy at MIT. “We found the main ingredients of failure, which enabled us to determine a new processing path to make blades that can last longer.”

Tasan is a specialist. His work focuses on exploring the microstructure of metals and seeing what can be changed to make them more resilient over time.

“We are metallurgists and want to learn what governs the deformation of metals, so that we can make better metals,” Tasan says. “In this case, it was intriguing that, if you cut something very soft, like human hair, with something very hard, like steel, the hard material would fail.”

He found that when it comes to shaving, the blade doesn’t technically get dull. Whether it’s a single-blade, multi-blade, or safety razor, the blade stays sharp. What happens is that it tends to develop tiny chips. If you’ve ever shaved with a chipped blade, odds are you felt it pinching at some of your hairs. But researchers weren’t exactly sure why this happened.

To make matters even more puzzling, the chips were showing up in the same places on different blades.

“This created another mystery: We saw chipping, but didn’t see chipping everywhere, only in certain locations,” Tasan says. “And we wanted to understand, under what conditions does this chipping take place, and what are the ingredients of failure?”

To solve the puzzle, Gianluca Roscioli, lead author and MIT graduate student, set up a system to carry out controlled shaving experiments. The device consisted of a movable stage with two clamps: one that held the razor blade and the other that anchored the strands of hair. They then shaved strands of hair of different thicknesses, held at various angles, observing the process with an electron microscope. Roscioli used his own hair, as well as hair sampled from several of his labmates, to ensure that he was representing a wide range of hair diameters.

The team observed the same thing, regardless of the hair thickness: hair caused the blade’s edge to chip, but only in certain spots. Then, after the first chip forms, cracks accumulate around it, and further chipping quickly appears.

But they also started to see other patterns. For instance, chips didn’t seem to occur when the hair was perpendicular to the blade. But when the razor could shift and bend, chips were more likely to occur, especially on the edge of the blade. The team carried out computer simulations which predicted that failure is dependent on the angle. The models showed that the composition of the blade’s steel was also important, with heterogeneous blades more likely to chip.

The mechanism producing the chips is called stress intensification. When you shave, stress is applied to the blade, and if the edge contains microcracks or inhomogeneities, the stress concentrates around them, making the edge more likely to chip.

“Our simulations explain how heterogeneity in a material can increase the stress on that material, so that a crack can grow, even though the stress is imposed by a soft material like hair,” Tasan says.
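As a rough illustration of how a tiny flaw amplifies stress, here is a minimal sketch using the classic Inglis textbook estimate for an elliptical surface flaw. This is not the team’s simulation, and the load, flaw depth, and tip radius below are made-up illustrative values.

```python
import math

def peak_stress(nominal_stress_mpa, flaw_depth_m, tip_radius_m):
    """Classic Inglis estimate for an elliptical surface flaw:
    sigma_max = sigma_nominal * (1 + 2 * sqrt(a / rho)),
    where a is the flaw depth and rho is the radius at its tip."""
    k = 1 + 2 * math.sqrt(flaw_depth_m / tip_radius_m)
    return k, nominal_stress_mpa * k

# Illustrative values only (not measurements from the study): a modest 10 MPa
# load from a bending hair, concentrated by a 1-micrometre-deep flaw with a 10 nm tip.
k, sigma_max = peak_stress(10.0, 1.0e-6, 1.0e-8)
print(f"stress concentration factor ~{k:.0f}x -> peak stress ~{sigma_max:.0f} MPa")
```

With those numbers, a modest load is amplified roughly 21-fold at the flaw tip, which is how something as soft as hair can grow a crack in hard steel.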

So if you want your blade to last longer, reduce its heterogeneity while maintaining its hardness — and if possible, try to shave perpendicular to the strands of hair.

The study was published in Science.

12-year astrophotography mosaic of the Milky Way

It took 13 years for Voyager 1 to take the iconic Pale Blue Dot image, which was part of a mosaic of the solar system. The best observation of the Cosmic Microwave Background radiation, by the Planck mission, took more than four years. The Hubble Space Telescope’s deep fields can take days of observing distant galaxies.

That’s astronomy — many observations take a lot of time: months, even years. Big space missions run for years and involve hundreds of team members; from planning through launch to publication, a single project can involve over 1,000 people.

For astrophotographers, it’s much rarer to spend this much time on a single project, and the work usually falls to a single photographer or a small team. In an area with low light pollution and with a good camera, you can get the job done in 30 seconds. This type of photo, showing the galaxy above a city or a tree, or even the Milky Way alongside an aurora, is becoming increasingly common.

However, a jaw-dropping amount of hard work can also go into astronomical photography. A recent example comes from astrophotographer J-P Metsavainio, a Finnish artist who provides images for NASA, National Geographic, and other big organizations.

He spent 1,250 hours shooting from the city of Oulu in Finland to make a mosaic of the Milky Way, adapting his gear for the task: a camera, a telescope, and, as you can imagine, all sorts of other paraphernalia.

Credits: J-P Metsavainio

It took almost twelve years to finalize this mosaic image. The reason for the long time period is naturally the size of the mosaic and the fact that the image is very deep. Another reason is that I have shot most of the mosaic frames as individual compositions and published them as independent artworks. That leads to a kind of complex image set which is partly overlapping, with lots of unimaged areas between and around frames. I have shot the missing data now and then during the years, and last year I was able to publish many sub-mosaic images as I got them ready.

wrote Metsavainio on his blog.

The result was a giant 1.7-gigapixel mosaic — for a sense of scale, that’s over 140 times the pixel count of a 12-megapixel iPhone camera. The gigantic picture shows a swath of our galaxy stretching from the constellation Taurus to Cygnus. Here’s the photograph superimposed on the night sky.

Credits: J-P Metsavainio

What’s even more impressive is the amount of detail in the objects lying in that region. The standout is the supernova remnant W63, which alone took 60 hours of exposure. Other supernova remnants and nebulae were captured as well.

Credits: J-P Metsavainio

The hard work also involved placing each observation in its correct position by consulting a sky map: an incredibly detailed result for a one-person job. It’s not so different from astronomers of the past, like William Herschel (with help from his sister Caroline), who built their own telescopes to collect data. Dedication like this requires a lot of patience, and it leaves a difficult challenge for any astrophotographer who plans to outdo Metsavainio.


For more images visit Metsavainio’s blog.

Getting a new couch can help your health more than you realize — but only if you get the right type

There’s a hidden benefit to getting a new couch, and it’s not related to how comfortable or good-looking it can be. A new study in the United States showed that getting rid of an old couch, or even just replacing the foam in upholstered furniture, significantly decreases the levels of toxic flame retardant chemicals that accumulate in household dust.

Image credit: Flickr / Emdot

Until recently, manufacturers added flame retardant chemicals to furniture in order to meet standards that are now outdated. Companies still add such retardants to textiles, electronics, children’s products, and building materials. Although they are meant to prevent or slow the spread of fire, their effectiveness in furniture has been questioned by recent studies, and they can cause more harm than good.

Flame retardants migrate out of furniture into air and dust and ultimately end up in people’s bodies. Infants and young children are particularly at risk since they crawl and play on the floor, where contaminated dust tends to settle. Exposure to these substances has been linked with cancer, decreased fertility, and neurodevelopmental problems such as lower IQ in children. Although there’s no reason to panic, researchers caution it’s something to keep in mind.

“We’ve long suspected that couches are a major source of toxic chemicals in dust. Now, for the first time, we have evidence demonstrating the positive impacts of replacing old furniture containing flame retardants,” Kathryn Rodgers, a research scientist at Silent Spring Institute and lead author of the new study, said in a statement.

Rodgers and her colleagues wanted to assess the impact of the current flammability standard for upholstered furniture in the United States and Canada. The standard came into effect in 2014, after growing concerns over the toxicity of flame retardants and their lack of fire safety benefits, and it allows manufacturers to make furniture without them.

For the study, the researchers recruited participants from 33 homes in Northern California who were willing to replace their old couch or sofa with a new one free of flame retardant chemicals. Two-thirds of the participants chose to replace their entire couch, while the rest just changed the foam. Dust samples were collected before and after the swap.

The findings showed that the concentration of flame retardants dropped significantly after the first six months and remained lower a year after the furniture was replaced. The same decline was seen in homes that only changed the foam. Seven types of flame retardants were measured in the dust, and two of them, PBDEs and TPHP, decreased the most.

PBDEs, or polybrominated diphenyl ethers, are a common type of fire-retardant chemicals that can be found in a variety of consumer products. Their production began in the 1970s and peaked in the 1990s. Meanwhile, TPHP, or triphenyl phosphate, is another flame retardant used in polyurethane foam for furniture and children’s products.

“This study provides further evidence that the bans on flame retardants in upholstered furniture help to reduce flame retardant levels in the home,” Tasha Stoiber, a study co-author, said in a statement. “Replacing a couch or sofa with furniture made without flame retardants makes a significant difference in people’s everyday exposures to these chemicals.”

The researchers warned that the market for flame retardants is still growing, as the chemicals are used in many types of consumer products beyond furniture. They called on policymakers to reduce their use through legislation, and on consumers to replace their couches, or at least the foam, as soon as possible. It’s also important to vacuum regularly to keep dust levels low, they suggested.

The study was published in the journal Environment International.

Fiction readers, rejoice: you probably have better language skills, study shows

When it comes to reading, non-fiction is often regarded as more useful, widening our horizons and improving our knowledge as well as our language ability. But new research contradicts that idea and brings fiction into the spotlight.

The study from Concordia University researchers found that those who read fiction (yes, even the accessible, popular stuff) have better language skills, scoring higher in language tests compared to those who read just to access specific information.

Image credit: Flickr / Paul Bence

“It’s always very positive and heartening to give people permission to delve into the series that they like,” Sandra Martin-Chang, lead author, said in a statement. “I liken it to research that says chocolate is good for you: the guilty pleasure of reading fiction is associated with positive cognitive benefits and verbal outcomes.”

Martin-Chang and her team used a scale called Predictors of Leisure Reading (PoLR) to investigate reading behavior, looking at readers’ interests, obstacles, attitudes, and motivations. Then, they looked at how well the PoLR predicted the language skills of 200 undergraduate students from York University.

The researchers chose to focus specifically on undergraduate students because they are in a crucial period of their reading life. Early adulthood is when reading becomes self-directed rather than imposed by others, making it the period when people develop their own reading habits. It’s also a relatively understudied group, as previous studies have focused on children.

For the study, the volunteers first completed a 48-question survey that measured various reading factors. They were then given language tests and a measure of reading habits called the Author Recognition Test – which asks respondents to pick names of fiction and non-fiction authors they are familiar with from a long list of real and fake names.

After looking at the data, the researchers found that reading enjoyment, positive attitudes, and deeply established interests predict better verbal abilities — and these traits were more strongly associated with exposure to fiction than non-fiction. For Martin-Chang, “wanting to read something over and over again and feeling connected to characters and authors are all good things.”

Previous studies have highlighted the benefits of reading, particularly of fiction, helping people develop empathy, theory of mind, and critical thinking. When we read, we strengthen several different “cognitive muscles,” which essentially makes reading the equivalent of a hardcore empathy workout.

Research also suggests that reading fiction is an effective way to enhance the brain’s ability to keep an open mind while processing information, a necessary skill for effective decision-making. A 2013 study found individuals who read short stories instead of essays had a lower need for cognitive closure – the desire to reach a quick conclusion in decision-making. Ultimately, reading fiction is also fun, reducing stress and the pressure accumulated through the day.

Some high-level business leaders have long touted the virtues of reading fiction, while others focus more on non-fiction. Warren Buffett, CEO of Berkshire Hathaway, spends most of his day reading and recommends books every year, including fiction. SpaceX CEO Elon Musk says he learned to build rockets by reading fiction books. Of the 94 books recommended by Bill Gates from 2012 to 2020, only nine were fiction.

Still, many adults don’t read fiction because at some point they came to believe that fiction is a waste of valuable time that could be spent on something more productive. But as many studies show, that isn’t true. So finish this article, grab one of your Harry Potter books and a warm cocoa, and get reading!

The study was published in the journal Reading and Writing.

Boys who play video games seem to have lower depression risk — but not girls

Boys who regularly play video games at age 11 are less likely to display depression symptoms when they’re 14. But this doesn’t seem to be the case for girls. Taken together, the findings suggest that video games can have both a positive and a negative effect on mental health, and it’s not always a straightforward relationship.

Image in public domain.

Screens

If there’s one thing that has changed drastically in the past two decades, it’s computers. Computers used to be incredibly big, bulky, and not that capable. That couldn’t be further from the truth nowadays. The smartphone in your pocket is millions of times more powerful than the equipment that sent people to the moon, and it just keeps getting more powerful year after year.

As a result, screens have become almost ubiquitous in our society. You have your small screen that you carry in your pocket, the big screen you work on, the even bigger screen you watch movies on, sometimes even screens on utilities. Screens are everywhere, and we’re not really sure if that’s a good thing — especially when it comes to kids.

Ever since computers became mainstream, researchers have voiced concerns about screens, concerns ranging from vision to mental health problems. But screens allow us to do different things and can have varying effects, and we should consider this instead of drawing any blanket conclusions, researchers say.

“Screens allow us to engage in a wide range of activities. Guidelines and recommendations about screen time should be based on our understanding of how these different activities might influence mental health and whether that influence is meaningful,” says Aaron Kandola, the author of the new study.

At first, the general idea seemed to be that video games can have a negative effect on mental health, making children more aggressive and worsening their mental health. But a flurry of recent studies paints a very different picture, showing not only that much of this damage was overstated, but that in many instances, casual video gaming can actually improve the mental health of children.

In the new study, the results are a mixed bag. A research team with members from UCL, the Karolinska Institutet (Sweden), and the Baker Heart and Diabetes Institute (Australia) reviewed data from 11,341 adolescents who are part of the Millennium Cohort Study, a nationally representative sample of young people born in the UK in 2000-2002 who have been involved in research since birth. The researchers asked the teens at age 11 how much time they spent on social media, video games, and other internet activities. Then, at age 14, they asked them about any depression symptoms.

After accounting for other factors that may affect the results (such as socioeconomic status, physical activity, or reports of bullying), the researchers looked at how depression symptoms were linked with screen habits. At age 14, boys who played video games most days had 24% fewer depressive symptoms than boys who played video games less than once a month. This effect, however, was not observed in girls. Although it’s not clear why, the researchers link it to different patterns of screen use between boys and girls.

“While we cannot confirm whether playing video games actually improves mental health, it didn’t appear harmful in our study and may have some benefits. Particularly during the pandemic, video games have been an important social platform for young people,” adds Kandola, who is a PhD student at UCL Psychiatry.

Sitting down and social media cause problems

The researchers note that the positive effect on boys was only significant among those with low physical activity. We all know (or should know) that sitting down for prolonged periods is really bad for your health, but it’s important to know that sitting down can affect your mind as well as your body. Kandola’s previous research has shown that sedentary behavior seems to increase the risk of depression and anxiety in adolescents. So it could very well be that sitting down (and not screen time itself) is causing harmful effects.

“We need to reduce how much time children – and adults – spend sitting down, for their physical and mental health, but that doesn’t mean that screen use is inherently harmful.”

Social media also plays a role. For girls, this role seems to be particularly important. Researchers found that girls (but not boys) who used social media on most days at age 11 had 13% more depressive symptoms when they were 14. The same association was not found for more moderate use of social media. This fits with previous studies indicating that intense social media usage can increase feelings of loneliness and alienation.

The study only shows an association, not a cause-effect relationship. But it seems to suggest that not all screen time is equal, and video games can have a positive component. Researchers say that video games could support mental health, especially those that feature problem-solving, social, cooperative, and engaging elements. At any rate, reducing the amount of sedentary time seems to be a much healthier intervention than reducing screen time.

Senior author Dr. Mats Hallgren from the Karolinska Institutet has conducted other studies in adults, finding that active screen time (when you’re doing something like playing a game) seems to have a different effect on depression than passive screen time (watching something).

“The relationship between screen time and mental health is complex, and we still need more research to help understand it. Any initiatives to reduce young people’s screen time should be targeted and nuanced. Our research points to possible benefits of screen time; however, we should still encourage young people to be physically active and to break up extended periods of sitting with light physical activity,” says Hallgren.

The study was published in Psychological Medicine.

Dressed for success – as workers return to the office, men might finally shed their suits and ties

Come on men, is this the best we can do?

The summer break is over, marking a return to the office. For some, this ends almost a year of working from home in lockdown. Some analysts are predicting it might also mark an enduring shift in how we dress for success.

It’s not the first time in Australia’s history the return to “normal” life after times of turmoil has prompted calls for more comfortable dress. The suit — quintessential men’s business dress for more than a century — has sat at the heart of these debates.

What we dress in speaks of our occupation as much as it shapes how we work: a collar that is blue or white, a singlet or a suit. The history of the suit is also tied to ideas of masculinity, class, modernity and fashionable consumption.

Is it time men swapped the suit for something more relaxed?

The birth of the business suit

Young men moved away from formal professional attire of top hats and frock coats — cut with hems that fell to the knee — around the 1870s. Instead they wore “business fashion”, pairing tailored jackets, trousers and sometimes patterned waistcoats with white shirts. Stylish neckwear and bowler hats completed the look.

Group of bank managers, stock and station agents dressed for work but not the weather, circa 1900. State Library Queensland

By the turn of the century, three-piece suits cut from the same dark-coloured woollen cloth were worn for work. These became known as “business suits”. They are strikingly similar to what we see businessmen wear today, though our contemporaries no longer wear them with stiff, detachable collars or watch chains.

As business suits became ubiquitous for city wear and office workers across Australia, working-men’s attire became increasingly practical. Those labouring in the sun or in roles demanding movement stripped back to shirts with their sleeves rolled up, or down to undershirts.

Women working in offices or shops donned lightweight blouses teamed with long, dark skirts. The fascinating history of their transforming workwear deserves a piece of its own.

Many men lamented that suits and ties were hot and stuffy by comparison, particularly in the summer months.

Rethinking men’s dress

There were calls for men’s “dress reform” from the early 20th century. Dress reform movements were not new at the time, nor were they confined to Australia or to men’s dress.

But war was a catalyst for change, when reformers emphasised health and hygiene over conservative, heavy suits and constrictive, tight collars. The aesthetics of men’s dress — dubbed drab, austere and colourless — also came under question.

As men returned to Australia from the first world war, commentators debated new ideas around colour, comfort and clothing that was better suited to Australia’s climate. Reformers advocated for different cuts to men’s clothing or swapping certain garments: jackets with knitted jumpers, for example, or stiff collars for looser versions that freed the neck to move.

But men in the city remained hesitant. Going without jackets and ties was undoubtedly more comfortable, but unprofessional against the dress codes of the day. As one young city worker expressed in late 1922, it made a man look “as if he were going to a picnic”.

When discussions around dress reform flourished in the aftermath of the second world war, they responded to shortages as much as to dressing for the heat. “Civvy suits” issued to returning servicemen from 1943 were in short supply. These suits were lampooned and despised when they looked cheap and badly made, but wool mills were stretched to their limits and tailors struggled to keep up with demand.

Into this void, some suggested men adopt sportswear for their return to the office — a more comfortable alternative men deserved after long years of war and austerity. This form of sportswear referred to jackets and trousers sold as separates and worn in different colour combinations, or woollen cardigans and jumpers.

An example was photographed in 1947 for Pix magazine. It captured two young men breezily strolling along Sydney’s Martin Place in open-neck shirts and loose or safari-style jackets. The photograph’s caption noted that they looked “cool, smart and comfortable” unlike “conservative” men in suits left to “swelter in the heat”.

Though suits continued to be worn by many office workers, this set in place the move towards more casual dress that would resonate across decades to come.


Read more: Fashioning blue-collars: chambray shirts and indigo-dyed workwear


Post-pandemic office wear

This is what workwear meant for millions during 2020.

Lockdown has again transformed our dress as we’ve tested new combinations of comfortable clothes while working from home — variously labelled “slob chic” and the “lockdown look”, with fancy dress days to keep things interesting.

Athleisure and activewear brands saw sales spike in 2020 thanks to massive demand for tracksuits and the like. The trade in locally made sheepskin boots also reportedly boomed.


Read more: COVID-19 could have a lasting, positive impact on workplace culture


Some forecast our penchant for relaxed clothing will ripple through office dress protocols this year in a move to something akin to casual Fridays.

While it’s unlikely the tracksuit will replace the suit just yet, looser styles, freer tailoring and lighter fabrics would be another step along the path suggested by dress reformers a century ago.

Lorinda Cramer, Postdoctoral Research Associate, Australian Catholic University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How to tell if your dog is a genius

Anyone who has lived with a dog will know their capacity for learning the meaning of words, even ones you don’t want them to know. How many times have you had to spell the words “walk” or “dinner” in the hope of avoiding an explosion of excitement?

Previous studies have investigated how non-human animals, including chimpanzees, sea lions and rhesus monkeys, learn words. But now a paper published in Nature shows some dogs learn the name of a new object after hearing it only four times, an ability previously thought to be confined to humans.

The researchers found this ability was not common among all the dogs studied; instead, it may be limited to a few “talented” or highly trained individuals. So how can you tell whether your own dog is a genius or not?

The study was simple and easy to replicate at home. Just follow the steps the researchers took to see whether your dog can learn the names of objects as quickly. But don’t worry if your dog doesn’t have this ability; it might just be down to their breed or previous experience.

Whisky and Vicky Nina

Whisky the collie. Claudia Fugazza.

The new study involved a collie called Whisky, who knew 59 objects by name, and a Yorkshire terrier called Vicky Nina, who knew 42 toys.

The researchers tested each dog’s knowledge of their toys by asking them to bring each toy in turn. Neither the owners nor the experimenters could see the toys, to avoid influencing the dogs’ choice.

Once it was established the dogs knew the names of all their toys, the researchers introduced two new objects, placing each in turn in a group of known toys. In this test Whisky chose the new toy every single time. Vicky Nina fetched the right one in 52.5% of trials, which is slightly above chance.

Learning new names

For the next part of the study the dog was shown a toy, told its name and was then allowed to play with it. After four repetitions of the name of two different new toys, the dog was asked to choose one of the two new toys.

No familiar toys were included in this part of the trial, to prevent the dog choosing the right toy by exclusion. If it knows the name of all other toys, the dog might pick the correct toy because it guesses the unfamiliar word must indicate the unfamiliar toy.

Both dogs chose the new toy more often than chance would predict, suggesting they were indeed learning the name of a new object very quickly. However, their memory decayed considerably after 10 minutes and almost completely after one hour. This shows the new learning needs more reinforcement if it is to be retained.


Read more: Six tips for looking after your new puppy, according to science


The test involving the new toy was also carried out by 20 volunteers with their own dogs, but these dogs didn’t show the ability to learn new names after only a few hearings.

The authors suggested the difference in performance between the two dogs in their test and the volunteers’ dogs means that, in order to learn new names quickly, a dog might need to be unusually intelligent or to have a lot of experience in learning names.

Vicky Nina with all her toys. Marco Ojeda

Clever dogs

It seems likely a combination of factors is at work in these experiments. It’s significant that the breed most commonly used in studies of this type is the border collie, which is purposefully bred to attend to audible commands and is highly motivated to carry out tasks and to please the handler. Yorkshire terriers also enjoy mental and physical stimulation.

Similar tests have been carried out by other research groups, usually using border collies. In 2004, a dog called Rico was found to know the names of 200 different objects, and in 2011 a dog called Chaser was shown to know the names of 1,022 unique objects.

Other breeds may simply be less interested in playing with or fetching toys. For example sight hounds, such as salukis and greyhounds, are primarily bred for hunting or racing, so are generally more difficult to train. They may show no interest in toys at all, as well as being considerably less motivated to please the handler. https://www.youtube.com/embed/Wr_P5NR1A3k?wmode=transparent&start=0 Clever dogs can learn new names quickly.

Both the experimental dogs in this study were intensively trained, through play and social interaction, to pay attention to the names and characteristics of the toys. This might make them more likely to notice the differences between new and familiar toys, and to attend to the verbal cue associated with them.

Although their training was not formal, it was nevertheless positive reinforcement training, a powerful method for teaching animals and humans. The dogs have undoubtedly learned their skills to a high degree.

It’s quite possible to teach all dogs to perform tasks, including learning the names of objects. But the degree to which they’re willing and able to learn, and to carry out the task, is very much regulated by breed of dog and the level of motivation the individual dog possesses.

If your pet is an Afghan hound or a St Bernard, you should not expect it to be interested in spending hours fetching toys for you. If, on the other hand, you have a border collie or a poodle, their abilities may only be limited by your imagination and your dedication to playing with them.


Jan Hoole, Lecturer in Biology, Keele University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The big fat sourdough study: researchers reveal surprising diversity, tackle baking myths

When the pandemic started, sourdough bread boomed almost overnight. Stuck inside their homes, many turned to this comforting ancient craft and baked, at least for a while, delicious sourdough bread.

There’s a deceptive complexity to this craft, however. In a new study, scientists mapped the microbial life of sourdough starters in unprecedented detail and confirmed two things: sourdough starters really are different from one another — but despite this, you can make delicious bread regardless of where you are on the globe.

“This is the first map of what the microbial diversity of sourdoughs looks like at this scale, spanning multiple continents,” says Elizabeth Landis, co-lead author of the study and a Ph.D. student at Tufts. “And we found that where the baker lives was not an important factor in the microbiology of sourdough starters.”

Breaking bread

Sourdough is by no means a new thing. In fact, it’s been used by human populations for thousands of years. In 2019, one researcher even baked a loaf of bread with 4,500-year-old yeast from ancient Egypt. “Humans have relied on sourdough starter microbial communities to make bread for thousands of years”, the authors of the new study write.

Sourdough is essentially slow-fermented bread that doesn’t need any commercial yeast to rise. Instead, it uses a live fermented culture — a sourdough starter — which acts as a natural leavening agent.

The yeast you normally buy in stores is a dried-out version of a naturally occurring one. This commercial yeast makes bread rise faster, which some (especially the sourdough lovers) say makes for a less tasty bread. We have no idea if that’s true or not, and we’ll leave the sourdough taste debate for another time. But what we do know, thanks to this recent study, is that there don’t seem to be that many geographic patterns in sourdough diversity.

The study gathered over 500 sourdough starter kits from a network of scientist bread-bakers. The researchers looked at the microbial communities in these starter kits and how they differ from place to place.

“We didn’t just look at which microbes were growing in each starter,” says Erin McKenney, co-author of the paper and an assistant professor of applied ecology at North Carolina State University. “We looked at what those microbes are doing, and how those microbes coexist with each other.”

A jar of sourdough starter. Credit: Lauren Nichols.

The samples came mostly from Europe and the US, although there were samples from Australia, New Zealand, and Thailand as well. Although sourdough has been around for such a long time, it’s not been studied formally all that much.

“There have been quite a few small studies on microbial ecosystems in sourdough,” says Benjamin Wolfe, co-author of the study and an associate professor of biology at Tufts University. “We think this is the first large-scale study, building on all of that previous work.”

The team started by carrying out DNA sequencing on all the samples, selecting 40 samples representative of the different sourdough populations they observed. Then, they assessed these 40 samples in three ways.

First, the researchers asked a panel of professional ‘sniffers’ (yes, apparently that’s a job) to assess each starter’s aroma profile. They then assessed the chemical qualities of each starter, determining the compounds that give rise to individual aromas. Lastly, they measured how quickly each starter would rise.

Geography doesn’t matter

The first thing that struck researchers is that the sourdough kits were indeed different — but not in the way conventional wisdom would have it. Most bakers believe every sourdough starter is affected by geography, but that doesn’t really seem to be the case.

“In sharp contrast with widespread assumptions, we found little evidence for biogeographic patterns in starter communities,” the study reads.

Instead, variations in both dough rise rates and aroma were explained by acetic acid bacteria — a largely overlooked group of sourdough microbes. Factors often discussed among bakers as important for sourdough turned out not to matter much.

It’s not just one thing, researchers say — it’s a lot of little things.

“Lots of bakers felt sure that specific factors were responsible for variation between types of sourdough,” McKenney says. “But what we found is that, while there could be tremendous variation between the microbial ecosystems of different sourdoughs, we could not find any single variable that was responsible for much of that variation.”

“What we found instead was that lots of variables had small effects that, when added together, could make a big difference,” says Angela Oliverio, co-lead author of the study and a former Ph.D. student at the University of Colorado, Boulder. “We’re talking about things like how old the sourdough starter is, how often it’s fed, where people store it in their homes, and so on.”

The shortcoming of this study is that it’s observational — it enables researchers to look at correlations, but it can’t tell us which microbes are responsible for which characteristics. If they want to analyze this in greater detail, “a lot of follow-up work needs to be done”, Wolfe concludes.

Journal Reference: Elizabeth A Landis et al. The diversity and function of sourdough starter microbiomes, eLife (2021). DOI: 10.7554/eLife.61644

Do men really take longer to poo?

Image credits: Jan Antonin Kolar

There’s a common assumption men take longer than women to poo. People say so on Twitter, in memes, and elsewhere online. But is that right? What could explain it? And if some people are really taking longer, is that a problem?

As we sift through the evidence, it’s important to remember that pooing involves both the time spent sitting on the toilet and the defaecation process itself.

And there may be differences between men and women in these separate aspects of going to the toilet. But the evidence for these differences isn’t always as strong as we’d like.

Men may spend longer sitting on the toilet

Men do appear to spend more time sitting on the toilet. An online survey by a bathroom retailer suggested men spend up to 14 minutes a day compared with women, who spend almost eight minutes a day. But this survey doesn’t have the rigour of a well-designed scientific study.

Would there be any physiological reason to explain why men spend longer on the toilet? Well, the evidence actually suggests the opposite.

We know it takes longer for food to travel through the intestines in women than in men. Women are also more likely to suffer from constipation related to irritable bowel syndrome than men. So, you’d expect women to take longer to defaecate, from the start of the bowel motion to expulsion.

But this is not the case even if you take into account differences in fibre intake between men and women.

Instead, how long it takes someone to poo (the defaecation time) is heavily influenced by the mucus lining the large bowel. This mucus makes the bowel slippery and easier for the stools to be expelled. But there’s no evidence this mucus lining is different in men and women.

One thing we do know, however, is mammals from elephants to mice have a similar defaecation time, around 12 seconds.

For humans, it’s slightly longer, but still quick. In one study it took healthy adults an average of two minutes when sitting, but only 51 seconds when squatting. Again, there were no differences in defaecation time between men and women, whether sitting or squatting.

If there’s no strong evidence one way or the other to explain any gender differences in how long it takes to poo, what’s going on? For that, we need to look at the total time spent on the toilet.

Why do people spend so long on the toilet?

What I call the “toilet sitting time” is the time of defaecation itself plus the time allocated to other activities while sitting on the toilet. For most people, the time spent just sitting, aside from defaecating, accounts for most of their time there.

So what are people doing? Mainly reading. And it seems men are more likely to read on the toilet than women.

For instance, a study of almost 500 adults in Israel found almost two-thirds (64%) of men regularly read on the toilet compared with 41% of women. The longer people spent on the toilet, the more likely they were to be reading. However, in the decade or more since this study was conducted, you’d expect adults would be more likely to be reading or playing games on their mobile phones rather than reading paper books.

People might also be sitting longer on the toilet for some temporary relief from the stresses of life.

Meme about men avoiding parenting responsibilities by sitting on the toilet for longer. Sometimes, people just need time to themselves. Credit: Ramblin Mama

One poll found 56% of people find sitting on the toilet relaxing, and 39% a good opportunity to have “some time alone”. Another online survey revealed one in six people reported going to the toilet for “peace and quiet”. Although these are not scientific studies, they offer useful insights into a social phenomenon.

Then there can be medical reasons for a prolonged defaecation time, and consequently a lengthier time sitting on the toilet.

An anal fissure (a tear or crack in the lining of the anus) can make defaecation a painful and lengthy process. These fissures are just as common in men as in women.

And obstructive defaecation, where people cannot empty the rectum properly, is a common cause of chronic constipation. This is more common in middle-aged women.

Are there any harms from spending too long on the loo?

In a Turkish study, spending more than five minutes on the toilet was associated with haemorrhoids and anal fissures. Another study from Italy noted the longer the time people spent on the toilet, the more severe their haemorrhoids.

One theory behind this is prolonged sitting increases pressure inside the abdomen. This leads to less blood flow into the veins of the rectum when passing a bowel motion, and ultimately to blood pooling in the vascular cushions of the anus. This makes haemorrhoids more likely to develop.

What can we do about this?

In addition to the usual advice about increasing the amount of fibre in your diet and ensuring you drink enough water, it would be sensible to limit the amount of time spent on the toilet.

Different researchers recommend different upper limits, but I and others recommend the SEN approach:

  • Six minute toilet sitting time maximum
  • Enough fibre (eating more fruit and vegetables, and eating wholegrains)
  • No straining during defaecation.

Vincent Ho, Senior Lecturer and clinical academic gastroenterologist, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.