Author Archives: Tibi Puiu


The moon is actually brighter than the sun — in gamma rays

No one has ever gone blind from staring at the moon, but the same can’t be said about the sun’s glare. When it comes to gamma-ray radiation, however, the moon is, in fact, brighter than the sun.


Images showing a steady improvement in observations of gamma rays emitted by the moon’s surface. This image sequence shows how longer exposure, ranging from two to 128 months (10.7 years), improved the view. Credit: NASA/DOE/Fermi LAT Collaboration

These grainy images were captured by NASA’s Fermi Gamma-Ray Space Telescope. They show the moon as a huge source of gamma-ray radiation — a form of electromagnetic (EM) radiation and the highest-energy photons we know of.

Gamma rays are created when electrically charged cosmic rays — protons accelerated by supernovae or jets produced by matter sucked into black holes — interact with matter, such as the surface of the moon.

They’re the most deadly type of EM radiation for living organisms. Luckily, they’re largely absorbed by Earth’s atmosphere.

On the moon, however, most of the gamma-ray radiation produced is absorbed by Earth’s natural satellite itself. The sun, meanwhile, emits only a small fraction of its electromagnetic radiation as gamma rays — much less than the moon — because its very powerful magnetic field deflects most incoming cosmic rays.

You can’t see gamma rays with the naked eye, but that’s where the Fermi Telescope comes in.

In a recent study, astronomers calculated that the moon’s gamma rays can exceed 31 million electron volts, making them roughly 10 million times more energetic than visible-light photons.
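That ratio checks out with a quick back-of-the-envelope calculation, assuming a visible-light photon at the violet end of the spectrum (around 400 nm, about 3.1 eV):

```python
# Sanity check of the energy comparison. The 400 nm wavelength is an assumed
# reference point for visible light, not a figure from the study.
PLANCK_EV_S = 4.135667696e-15   # Planck constant, eV*s
C_M_S = 2.99792458e8            # speed of light, m/s

def photon_energy_ev(wavelength_m):
    """Photon energy E = h*c / wavelength, in electron volts."""
    return PLANCK_EV_S * C_M_S / wavelength_m

visible_ev = photon_energy_ev(400e-9)   # violet light, ~3.1 eV
gamma_ev = 31e6                          # 31 million eV, per the study

print(round(visible_ev, 2))              # ~3.1
print(round(gamma_ev / visible_ev))      # ~10 million
```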

This brightness isn’t constant, though. Fermi LAT data show that the moon’s gamma-ray brightness varies by about 20% over the sun’s 11-year activity cycle.

“Seen at these energies, the Moon would never go through its monthly cycle of phases and would always look full,” said Francesco Loparco, co-author of the new study and a researcher at the National Institute of Nuclear Physics in Italy.

Although the moon is much brighter in gamma rays than the sun, occasionally cosmic rays will strike the dense part of the sun’s atmosphere, resulting in gamma rays above one billion electron volts. In this range, the sun is actually brighter.

The findings suggest that astronauts visiting the moon, or living there over extended periods in a lunar outpost, would be at great risk. Gamma rays and cosmic rays have great penetrating power, which means shielding using elements such as lead is paramount.

NASA plans on sending humans back to the moon in 2024 as part of its Artemis program. Before that happens, we will have to be prepared for the threat of radiation exposure.


Scientists make an 18-atom ring of pure carbon

Image of a carbon-18 ring made with an atomic force microscope. Credit: IBM Research.


Since the 1960s, chemists have been trying to synthesize a ring-shaped molecule made of pure carbon. In a triumph of scanning probe microscopy, researchers at IBM Research Zurich and the University of Oxford have done just that, creating an 18-atom ring of carbon — a cyclocarbon.

Many have tried to make cyclocarbons, but in vain — until now

Bonding can make the difference between crumbling graphite and almost indestructible diamond. Both are made of carbon, but the former has carbon bonded to three other carbon atoms in a hexagonal lattice while the latter has carbon bonded with four other atoms in a pyramid-shaped pattern.

Decades ago, scientists — including Nobel Prize-winning chemist Roald Hoffmann — published work that theoretically showed that carbon can form bonds with just two nearby atoms. Each atom could form either a double bond on each side or a triple bond on one side and single bond on the other.

The trouble is that a cyclocarbon molecule is very chemically reactive and hence less stable than graphite or diamond.

In their new study published in Science, researchers led by Przemyslaw Gawel of the University of Oxford first made molecules that included chains of four-carbon squares with oxygen atoms attached to the squares.


Researchers started from a precursor molecule (C24O6) and gradually went through intermediates before reaching the final product — the cyclocarbon (C18). Credit: IBM Research.

At IBM Research in Zurich, the oxygen-carbon molecules were exposed to a layer of sodium chloride in a high-vacuum chamber.

The extra oxygen atoms were then removed one at a time using an atomic-force microscope.

Many failed attempts later, the researchers finally had a micrograph in hand showing an 18-carbon structure. The scan revealed that the carbon ring had alternating triple and single bonds — just as one of the competing theories predicted. A rival theory had suggested that a cyclocarbon molecule would be made entirely of double bonds.

The alternating bond types suggest that the C-18 rings have semiconducting properties, which could make them useful as components in molecular-sized transistors.

This is still very fundamental research, though. Scientists currently have to make the rings one molecule at a time, so it might be a while before we can find any practical use for cyclocarbons.

“The high reactivity of cyclocarbon and cyclocarbon oxides allows covalent coupling between molecules to be induced by atom manipulation, opening an avenue for the synthesis of other carbon allotropes and carbon-rich materials from the coalescence of cyclocarbon molecules,” the authors concluded.


Exoshorts help you walk and run longer

Credit: Wyss Institute at Harvard University.


An international team of researchers has recently described a pair of exoshorts that is lightweight, portable, mostly made of flexible materials, and enhances both walking and running.

Exosuits allow people with disabilities to partially regain motor functions. Alternatively, they can be used to enhance our physical abilities. Previously, scientists demonstrated wearable robotic tech that enabled users to walk or run more efficiently, but not both — until now.

The wearable robotic unit was developed by engineers at Harvard’s Wyss Institute for Biologically Inspired Engineering, along with colleagues at the University of Nebraska Omaha and Chung-Ang University in South Korea.

A battery powers a motor unit that pulls cables that extend the hips, thereby reducing the amount of energy the body uses during movement.

An actuator attached to the user’s lower back is controlled by a machine-learning algorithm that allows the system to adjust to different gaits on the fly.
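The article doesn’t spell out the controller’s internals. As a purely illustrative sketch of the general idea (the function names, the 100 Hz sampling rate, and the 1.25 Hz cadence threshold below are all assumptions, not the actual Harvard algorithm), a gait can be distinguished by the dominant frequency of a hip-angle signal:

```python
import numpy as np

def classify_gait(hip_angle, fs=100.0, threshold_hz=1.25):
    """Classify a hip-angle time series as 'walk' or 'run' by its dominant stride frequency."""
    sig = hip_angle - np.mean(hip_angle)            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(sig))             # magnitude spectrum
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)   # frequency axis in Hz
    dominant = freqs[np.argmax(spectrum)]           # strongest periodic component
    return "run" if dominant > threshold_hz else "walk"

# Synthetic hip-angle signals: walking strides near 0.9 Hz, running near 1.5 Hz
t = np.arange(0, 10, 0.01)                          # 10 s at 100 Hz
walk = 20 * np.sin(2 * np.pi * 0.9 * t)
run = 25 * np.sin(2 * np.pi * 1.5 * t)
print(classify_gait(walk), classify_gait(run))      # walk run
```

The real system reportedly does far more than this, adapting actuation timing to each gait on the fly, but the core trick of reading gait type from body-motion periodicity is the same.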

At first, the exoshorts feel awkward, but with a bit of training both walking and running performance improve, as measured by the wearer’s oxygen consumption.

During tests, the researchers found a 9% reduction in energy consumption for walking and 4% for running. That’s equivalent to taking about 15 pounds (7 kilograms) off your waist.

The improvements are rather modest at this stage, but the goal was to show that this robotic assistive technology could do more than other exoskeletons. Earlier systems designed to enhance either walking or running are more efficient, but this one does both.

The system, recently described in the journal Science, is part of DARPA’s former Warrior Web program and took years to develop. 

The researchers envision a future where these sorts of exoskeletons are used not only to help the disabled walk again but also to enhance performance in various environments. Systems that also offer back support, for instance, could be used by factory and warehouse workers to carry and maneuver heavier loads without expending more energy. Soldiers could carry more supplies and weapons. The range of possibilities is truly endless.


Hydrogen-powered aircraft with 500 miles range is set to disrupt aviation

A small six-seat airplane that is entirely powered by hydrogen rather than fossil fuels is the largest zero-emissions aircraft in the world. For the last year, the plane, designed by ZeroAvia, a California-based startup, has been undergoing tests and only recently came to the public’s attention. It allegedly has a 500-mile range, which could lead to massive reductions in aircraft emissions if the technology is applied at scale.

Credit: ZeroAvia.


The air travel industry is thought to be responsible for 900 million metric tons of CO2 emissions a year. The industry has pledged to cut aircraft emissions in half by 2050 compared to 2005 levels, but how could that realistically happen when air travel is surging? Around the world, airlines carried 4.3 billion passengers in 2018, an increase of 38 million compared to the year before.

Realistically, our best chance of drastically reducing air travel emissions isn’t to fly less but to radically alter how aircraft are powered.

Various entrepreneurs and companies are looking to disrupt the industry. Eviation, for instance, is a startup that is designing 100% battery-electric planes.

ZeroAvia, on the other hand, has eyed hydrogen. Researchers at ZeroAvia argue that it is difficult to fly battery-electric aircraft over long ranges, whereas hydrogen fuel cells are nearly four times as energy dense as the most advanced battery currently available on the market. What’s more, high-density batteries have to be frequently replaced which translates into more cost for airlines.

Credit: ZeroAvia.


Meanwhile, aircraft with a drive train powered by hydrogen might actually save airlines money — at least on short flights, which make up a sizable share of air traffic. The industry estimates that nearly half of global flights are 500 miles or less.

Theoretically, there is no physical constraint on the hydrogen power train. However, larger planes with longer range would require more safety tests. At the moment, ZeroAvia is employing liquid hydrogen stored in carbon fiber cylinders.

In the future, ZeroAvia plans on demonstrating a 20-seat model. It is already in talks with several airlines, which have expressed interest in the technology.


How sunscreen releases metals and nutrients in seawater

Credit: Pixabay.


Applying sunscreen when going to the beach is of the utmost importance to protect our skin from the harmful effects of ultraviolet radiation. At the same time, these protective lotions contain metals and nutrients that wash off into the ocean, interacting with marine life. A new study reports how sunscreen chemicals are released into seawater.

A painful sunburn can ruin a vacation, and too much sun can also lead to more serious problems like premature skin aging and melanoma. To counter the effects of prolonged exposure to the sun, manufacturers add UV filters to their lotions.

However, our protection is done at the expense of the wellbeing of marine life. About 14,000 tons of sunscreen are thought to wash into the oceans each year, affecting coral and fish embryos. And even if you don’t swim after applying sunscreen, it can go down drains when you shower.

Luckily, millions of people are now aware that sunscreen can also harm wildlife. As a result, many are looking to purchase “coral-safe” sunscreens that lack oxybenzone and octinoxate, two substances known to damage coral reefs. Some destinations, such as Hawaii and Palau, have introduced bans on harmful sunscreens.

It’s not clear, however, what effects other trace compounds found in sunscreens might have on marine wildlife.

A first step in this direction was recently made by a team of researchers at the University of Cantabria in Spain.

The team, led by Araceli Rodríguez-Romero, introduced titanium-dioxide-containing sunscreen to samples of Mediterranean seawater and analyzed how the lotion releases various metals and nutrients into the water. UV light was shone onto the water tank to simulate real-life conditions.

Aluminum, silica, and phosphorus had the highest release rates under both light and dark conditions. Based on these results, the researchers developed a theoretical model that predicts how various compounds found in sunscreens are released into the ocean under different conditions.
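The study’s actual model isn’t detailed in this article. A common, simple assumption for leaching of this kind is first-order release kinetics, where the released fraction approaches a maximum exponentially over time; the sketch below is purely illustrative, and the rate constants are hypothetical, not the study’s fitted values.

```python
import numpy as np

def released_fraction(t_hours, k_per_hour):
    """First-order release model: fraction of a leachable compound released after time t."""
    return 1.0 - np.exp(-k_per_hour * np.asarray(t_hours, dtype=float))

# Hypothetical rate constants for illustration only: faster release under UV
# light than in the dark, as the study's light/dark comparison suggests.
t = np.array([1.0, 6.0, 24.0, 48.0])                 # hours
print("light:", np.round(released_fraction(t, 0.30), 2))
print("dark: ", np.round(released_fraction(t, 0.10), 2))
```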

According to the new study, reported in the journal Environmental Science & Technology, beachgoers could increase the concentration of aluminum in coastal waters by 4% and of titanium by almost 20%. These concentrations, however, are already extremely low.

In the future, the researchers plan to conduct more studies to determine how these metals and nutrients could be affecting marine ecosystems.

In the meantime, each of us can help reduce our impact on marine life by using more eco-friendly alternatives to sunscreen or none at all, if it is possible. Wearing hats, shirts, and other apparel incorporating UV protection can reduce the amount of sunscreen you need by up to 90%, for instance.

Peculiar pulsar slows down before ‘glitching’

Artist impression of a pulsar. Credit: University of British Columbia.

Pulsars are rotating neutron stars that emit a focused beam of electromagnetic radiation, earning them the nickname of “lighthouses” of the universe. Pulsars come in all shapes and sizes, and some behave in weird, seemingly chaotic ways — but that doesn’t mean there isn’t a pattern.

Vela, a neutron star located nearly 1,000 light-years away from Earth in the southern sky, is famous among astronomers because it “glitches” once every three years, suddenly speeding up its rotational period before slowing down back to normal.

Scientists aren’t sure why this weird star behaves this way, but new observations suggest that Vela slows its rotation rate immediately before the glitch. This is the first time astronomers have ever seen anything like it.

Neutron stars are the remnants of huge dead stars and are some of the densest objects in the universe. Imagine an object with the mass of the sun squashed down to the size of a city — that’s how dense these objects can get.

Most neutron stars are observed as pulsars, which rotate at very regular intervals ranging from milliseconds to seconds. 

For their new study, astronomers at the Monash University School of Physics and Astronomy reanalyzed observations of the Vela glitch made in December 2016.

This more thorough analysis revealed that Vela — which normally makes 11 rotations per second — started rotating even faster and then slowed down to a more normal speed very quickly.

Artist impression of the three components in the neutron star. Credit: Carl Knox

Although astronomers aren’t sure why this happens, the observation is consistent with theoretical models that suggest that neutron stars have three internal components.

“One of these components, a soup of superfluid neutrons in the inner layer of the crust, moves outwards first and hits the rigid outer crust of the star causing it to spin up,” said Dr. Paul Lasky, an astronomer at the Monash School of Physics and Astronomy and co-author of the new study published in the journal Nature Astronomy.

“But then, a second soup of superfluid that moves in the core catches up to the first causing the spin of the star to slow back down.”

Ultimately, what this new study shows is that a pulsar glitch isn’t a straightforward, single-step process. Instead, a complex interplay of internal forces seems to generate sophisticated behaviors in the neutron stars, although the exact mechanisms are still a mystery. In the future, new observations and theoretical models may reveal more.

First chlamydia vaccine boosts immune response

Credit: Needpix.

Genital chlamydia is the most common bacterial sexually transmitted infection (STI) in the world. But although the infection can be easily treated with antibiotics, it can cause an array of health problems, including infertility in women. For years, researchers have been chasing a vaccine for chlamydia in order to break the chain of reinfection — a vaccine that might not be that far away.

According to recent findings reported by researchers at Imperial College London and the Statens Serum Institut in Copenhagen, a vaccine designed for preventing genital chlamydia provoked an immune response.

“The findings are encouraging as they show the vaccine is safe and produces the type of immune response that could potentially protect against chlamydia,” said Professor Robin Shattock, Head of Mucosal Infection and Immunity within the Department of Infectious Disease at Imperial.

“The next step is to take the vaccine forward to further trials, but until that’s done, we won’t know whether it is truly protective or not.”

The randomised controlled trial involved 35 healthy women, who were assigned to three different groups: 15 received a vaccine with liposomes, 15 received a vaccine with aluminium hydroxide, and 5 got a placebo (a saline solution). In total, each participant received five vaccinations over several months. 

Both formulations of the chlamydia vaccine provoked an immune response in all participants, although the added liposomes proved more effective at producing antibodies. Meanwhile, no participant in the placebo group experienced an immune response.

The major issue with chlamydia is that, most often, people carry it unknowingly, so they don’t seek treatment. About 131 million new cases are reported each year, but as many as 3 out of 4 infections are symptomless, so the real number of cases is likely much higher.

If caught early on, chlamydia is easily treatable. However, the infection can cause complications such as inflammation, infertility, ectopic pregnancy, arthritis and even an increased susceptibility to other STIs, including HIV.

“It is very treatable if identified, but as many people don’t have symptoms it can be missed, and the biggest problem is that it can go on to cause infertility in women,” Shattock said.

He added: “One of the problems we see with current efforts to treat chlamydia is that despite a very big screening, test and treat programme, people get repeatedly re-infected. If you could introduce a protective vaccine, you could break that cycle.”

The findings appeared in The Lancet Infectious Diseases.

Ebola may now be curable, clinical trial in Congo finds

The first-ever multi-drug randomised trial for Ebola has proven extremely successful. Researchers treating patients in the Democratic Republic of Congo (DRC) have identified two monoclonal antibodies that block the Ebola virus and radically lower mortality. The new findings suggest that Ebola is no longer an incurable disease.

“Now we can say that 90 percent can come out of treatment cured,” one scientist said in a statement. 

Ebola is a devastating disease, characterized by a painful hemorrhagic fever, which interferes with the endothelial cells lining the interior surface of blood vessels and with coagulation. As the blood vessel walls become damaged and destroyed, the platelets are unable to coagulate, and patients succumb to hypovolemic shock. 

When the Ebola virus infects a human host, it can kill up to 90% of the time, depending on available treatment. The 2014–2016 outbreak in West Africa was the largest and most complex Ebola outbreak since the virus was first discovered in 1976. There were more cases and deaths in this outbreak than all others combined.

To contain the epidemic, doctors in Sierra Leone, Liberia, and Guinea have been using biopharmaceutical drugs like ZMapp and Remdesivir. However, these drugs have now been dropped in favor of two monoclonal antibodies, known as REGN-EB3 and mAb-114, which a recent clinical trial showed to be far more effective.

The DRC trial, which started in November, found that a monoclonal antibody drug made by Regeneron had a mortality rate of only 29%, while another monoclonal antibody made by Ridgeback Biotherapeutics had a mortality rate of 34%. Meanwhile, ZMapp and Remdesivir had mortality rates of 49% and 53%, respectively.

The odds of surviving Ebola, however, are much higher if a patient arrives early at a clinic. The trial found that the death rate for those seeking treatment soon after they became sick was only 11% with the Ridgeback antibody and just 6% with Regeneron’s drug.

According to the World Health Organization, people who fall ill with Ebola wait on average four days before they seek treatment. Many are reluctant to call for help because the chances of survival in clinics has been very low — until recently, up to 70% of those infected with Ebola in the DRC have died. 

That may finally change now that this groundbreaking trial has shown Ebola to be both preventable and treatable.


Snapping worms make one of the loudest noises in the ocean

Researchers have recently described the puzzling behavior of sea-dwelling worms which can produce one of the loudest sounds ever measured in aquatic animals.


When two sea-worms engage in “mouth fighting”, they produce powerful snapping sounds. Credit: Ryutaro Goto.

Many aquatic animals, including mammals, fish, crustaceans and insects, produce loud sounds underwater. However, this is the first time that scientists have witnessed a soft-bodied marine invertebrate making noises.

“When I first saw their video and audio recordings, my eyes just popped out of my head because it was so unexpected,” said Richard Palmer, Professor of biology at the University of Alberta.

Palmer received the footage from Ryutaro Goto, a Japanese researcher who was looking for some help in figuring out how the snapping sea-worms were producing their weird sounds.

The animals were first discovered in 2017, during a dredging expedition off the coast of Japan. Goto and Isao Hirabayashi, a curator at the Kushimoto Marine Park, were among the first to record the sounds.

Tests found that the sounds are as loud as 157 dB, with frequencies in the 1–100 kHz range and a strong signal at around 6.9 kHz. That’s comparable to the sounds made by snapping shrimps, which are among the most intense biological sounds ever measured in the sea.
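For context, underwater sound levels are referenced to 1 micropascal (unlike the 20 µPa reference used in air), so 157 dB underwater is not directly comparable to 157 dB in air. A quick back-of-the-envelope conversion from level to pressure:

```python
import math

P_REF_WATER = 1e-6  # underwater reference pressure, 1 micropascal in Pa

def db_to_pressure_pa(level_db, p_ref=P_REF_WATER):
    """Convert a sound pressure level in dB (re p_ref) back to pressure in pascals."""
    return p_ref * 10 ** (level_db / 20.0)

snap = db_to_pressure_pa(157)   # the worms' snap
print(round(snap, 1))            # ~70.8 Pa
```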

Writing in the journal Current Biology, the Japanese and Canadian researchers explain how and why the loud snapping sounds occur.

According to the researchers, when these worms come close to each other, they open their mouths and snap — something described as “mouth fighting”. This is essentially a territorial display of force which the worms employ to protect their dwellings.

“The real challenge was figuring out how a soft-bodied animal like a worm—which is basically a hollow, muscular tube—could possibly make such loud sounds,” Palmer said.

Palmer says that the snapping sounds are produced by cavitation bubbles due to the extensive array of muscles in the worm’s pharynx.

“It’s like trying to suck a smoothie through a paper straw,” Palmer explained. “When it gets a little bit soft at the end, the tip collapses. It doesn’t take much force to make it collapse, but if you try to suck harder and harder, you build up this immense negative pressure. When the worm finally pops the valve open, it happens so fast that the water can’t fill the space, and the sides of that space collapse together in a point, creating this explosive release of energy in the form of sound.”

This hypothesis has yet to be validated but the researchers hope to conduct an experiment soon.

“It’s just an incredibly cool animal with quite the unexpected behaviour,” Palmer added. “I’ve shown the videos to biologists who study invertebrates and their reaction is always the same: they shake their heads in wonder.”


New gluten biomarker may lead to an easy blood test for diagnosing celiac disease

A new study has uncovered what the very first signs of inflammation look like at the molecular level when celiac sufferers ingest gluten. The newly identified biomarkers could someday lead to a quick and easy blood test for diagnosing celiac disease without the massive hassle current tests involve.

Credit: Pixabay.


“Gluten” is an umbrella term used to denote the mix of storage protein compounds found in all species and hybrids of wheat and its related grains (barley, rye, etc.).

Some people can have gluten intolerance, sometimes referred to as non-celiac gluten intolerance (NCGI) or gluten sensitivity, or suffer from celiac disease (CD). The latter is much worse because it is an autoimmune disorder which causes the celiac disease sufferer’s body to violently react to the presence of gluten — to the point where their immune system will attack the inner lining of the small intestine to ‘protect it’ from gluten. About 1 in 100 people are affected by celiac disease.

If you suffer from CD, there’s not a cure per se. Patients with celiac disease have to eliminate all gluten from their diets, medicines etc. If you are gluten intolerant (not celiac), then you will get by with consuming gluten from time to time without too much discomfort (depending on just how intolerant you are).

The problem is that differentiating between CD and gluten intolerance is not so straightforward. And diagnosing celiac disease is uncomfortable, to put it mildly.

In order to diagnose CD, doctors will look at several blood biomarkers that are telltale signs of the disease. However, many patients switch to a gluten-free diet long before they go to a doctor for diagnosis, and these biomarkers will have inevitably declined by the time a blood test is scheduled.

What usually happens is that the patient is asked to eat gluten for several weeks in order to trigger a detectable response and allow for a clear diagnosis — a practice that, hopefully, won’t be necessary for much longer.

A new study has discovered that a specific type of cytokine — special proteins made by the immune system — floods the bloodstream following exposure to gluten. This particular cytokine, called interleukin-2 (IL-2), is produced by immune T cells within just two hours of exposure to gluten.  This means that a blood test looking for this biomarker could diagnose CD within the same day that gluten was ingested by the patient.

“For the many people following a gluten-free diet without a formal diagnosis of coeliac disease, all that might be required is a blood test before, and four hours after, a small meal of gluten,” said Jason Tye-Din, head of coeliac research at the Institute and a gastroenterologist at The Royal Melbourne Hospital in Australia.

“This would be a dramatic improvement on the current approach, which requires people to actively consume gluten for at least several weeks before undergoing an invasive procedure to sample the small intestine,” he added.

The authors hope these findings will transform how CD is diagnosed and treated. Elsewhere, ImmusanT is working on a vaccine called Nexvax2, which, administered in multiple doses, can reprogram T-cells to stop triggering a pro-inflammatory response. In other words, this vaccine might someday allow CD sufferers to follow an unrestricted diet. The CD vaccine is specifically designed to work against the HLA-DQ2.5 genetic form of the disease, which accounts for 90 percent of people with celiac.

“It is clear this research has the potential to revolutionize the current testing regime for coeliac disease globally,” says Michelle Laforest, CEO of Coeliac Australia.

The findings appeared in the journal Science Advances.


Earliest merging galaxies observed more than 13 billion light-years away


Artist’s impression of the merging galaxies B14-65666 located 13 billion light-years away. Credit: National Astronomical Observatory of Japan.

Astronomers working with Atacama Large Millimeter/submillimeter Array (ALMA) in Chile and the Hubble Space Telescope have revealed a historic sight. Using three different signals, the researchers made a composite image of a bright galactic blob located 13 billion light-years away. By comparing these different signals, the astronomers determined that the cosmic object isn’t a galaxy but rather two galaxies merging together. All of this happened just one billion years after the Big Bang, making it the earliest example of merging galaxies discovered so far.

The researchers, led by Takuya Hashimoto, a postdoctoral researcher at the Japan Society for the Promotion of Science and Waseda University, analyzed signals of oxygen, carbon, and dust from a faraway object known as B14-65666.

Previous observations performed with the Hubble Space Telescope revealed two star clusters in B14-65666. The new observation, however, was performed using multiple signals which carry complementary information. This enabled the researchers to confirm that the two bright blobs form a single system, although they have different speeds. The only viable explanation is that the two objects are, in fact, two galaxies in the process of merging.

Composite image of B14-65666 showing the distributions of dust (red), oxygen (green), and carbon (blue), observed by ALMA and stars (white) observed by the Hubble Space Telescope. Credit: ALMA (ESO/NAOJ/NRAO), NASA/ESA Hubble Space Telescope, Hashimoto et al.


Because it took 13 billion years for light from B14-65666 to reach us, what we’re seeing is actually what the two merging galaxies looked like 13 billion years ago. During that time of the early universe, B14-65666 was churning out stars 100 times more actively than the Milky Way. This is another sign of galactic merging because gas compression in colliding galaxies naturally accelerates star formation.

“With rich data from ALMA and HST, combined with advanced data analysis, we could put the pieces together to show that B14-65666 is a pair of merging galaxies in the earliest era of the Universe,” Hashimoto said in a statement. “Detection of radio waves from three components in such a distant object clearly demonstrates ALMA’s high capability to investigate the distant Universe.”

Galaxy mergers are the most violent type of galaxy interaction. They’re also a beautiful sight, as the gravitational forces draw long wispy streams of stars into fluid-like shapes. But despite the destructive force of galactic mergers, collisions and mergers of whole galaxies play a crucial role in their evolution. The Milky Way, for instance, likely underwent many mergers before it reached its current form. In fact, our galaxy has only 4 billion years left to live before it collides with the giant Andromeda Galaxy.

“Our next step is to search for nitrogen, another major chemical element, and even the carbon monoxide molecule,” said Akio Inoue, a professor at Waseda University. “Ultimately, we hope to observationally understand the circulation and accumulation of elements and material in the context of galaxy formation and evolution.”

The new fabric being developed by University of Maryland scientists. Credit: Faye Levine, University of Maryland.

Smart fabric changes thermal properties based on environment


For the first time, scientists have devised a fabric that can dynamically alter its thermal properties in response to the environment. The automatic thermal regulation means people would no longer have to take off clothes when it’s hot or put clothes on when it’s too cold.

The breakthrough lies in cleverly engineering the fabric with two different types of synthetic yarn — one absorbs water, while the other repels it. Both are coated with lightweight carbon nanotubes. When exposed to humidity, the fibers warp, bringing the strands of yarn together. In the process, pores open up through the fabric, allowing more heat to escape. When the fibers come together, the electromagnetic coupling of the carbon nanotubes also changes. As the body cools down (and is less sweaty or humid), the dynamic thermal gating mechanism works in reverse to trap heat.

“You can think of this coupling effect like the bending of a radio antenna to change the wavelength or frequency it resonates with,” YuHuang Wang, a professor of chemistry and biochemistry at UMD and one of the paper’s corresponding authors, said in a statement. “It’s a very simplified way to think of it, but imagine bringing two antennae close together to regulate the kind of electromagnetic wave they pick up. When the fibers are brought closer together, the radiation they interact with changes. In clothing, that means the fabric interacts with the heat radiating from the human body.”

Depending on the tuning of the nanotubes, infrared radiation is either blocked or allowed to pass through — and this happens almost instantly.

Previously, researchers demonstrated fabrics that can increase porosity in response to sweat or temperature, as well as textiles that transmit infrared radiation from the human body. However, this is the first demonstration of a material that can change both porosity and infrared transparency, thereby providing more comfort in response to environmental conditions.  

The findings appeared in the journal Science.

Stress might reduce fertility in women, but not in men

Credit: Pixabay.

Women who are under considerable stress may find it more difficult to conceive, according to a new study. The findings, however, did not apply to men.

Living a modern, fast-paced lifestyle is taking its toll on Americans. According to a 2017 Gallup poll, about eight in ten Americans say they frequently (44%) or sometimes (35%) encounter stress in their daily lives. Women are more likely to report frequent stress than men (49% vs 40%), which can trigger anxiety and depression.

Researchers at the Boston University School of Medicine investigated whether there was any association between stress and the odds of conception for women among the general population. To this aim, they turned to the Pregnancy Study Online (PRESTO), a preconception cohort that followed couples for 12 months or until pregnancy, whichever came first.

PRESTO included 4,769 women and 1,272 men who had no prior history of infertility and had not been trying to conceive for more than six menstrual cycles.

The researchers measured perceived stress among the participants by employing a 10-item test designed to assess how unpredictable and overwhelming individuals find their life circumstances. Each item referred to the past month and had five response choices, ranging from 0 (never) to 4 (very often). Both partners had to complete the perceived stress scale (PSS), whose maximum score is 40, indicating severe daily stress.
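For concreteness, the scoring scheme just described (ten items, each rated 0 to 4, for a maximum of 40) can be sketched in a few lines of code. The choice of reverse-scored items below is an assumption based on the commonly published PSS-10, in which the four positively worded items are reverse-scored; this is an illustration, not the study’s exact instrument.

```python
# Illustrative scoring of a 10-item perceived stress scale (PSS-10).
# ASSUMPTION: items 4, 5, 7, and 8 (the positively worded ones in the
# commonly published PSS-10) are reverse-scored; verify against the
# actual instrument before any real use.

def pss10_score(responses, reverse_items=(4, 5, 7, 8)):
    """responses: ten integers, 0 (never) to 4 (very often), in item order 1-10."""
    if len(responses) != 10 or any(r not in range(5) for r in responses):
        raise ValueError("expected ten responses coded 0-4")
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (4 - r) if item in reverse_items else r
    return total  # 0 = no perceived stress ... 40 = severe daily stress

# Answering "very often" (4) on every negative item and "never" (0)
# on every positive item yields the maximum score of 40.
print(pss10_score([4, 4, 4, 0, 0, 4, 0, 0, 4, 4]))  # 40
```

In the study’s terms, a respondent scoring 25 or more on such a scale would fall into the high-stress group.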

Besides the PSS, the researchers also assessed data on diet, sleep, household income, frequency of intercourse, and demographic factors such as race or ethnicity.


Association between baseline women’s and men’s scores on the Perceived Stress Scale (PSS) and fecundability. Credit: Boston University School of Medicine.

On average, the baseline perceived stress score was about 1 point higher among women than among men, in line with previous surveys and studies. The study’s most important finding, however, was that women who scored 25 or higher on the PSS were 13% less likely to conceive than women with PSS scores under 10. This association was stronger among women who had been trying to conceive for no more than two menstrual cycles and were under 35 years old.

The researchers note that only a small proportion of this association can be explained by less frequent intercourse and increased menstrual cycle irregularity due to stress.

Another important finding was that the PSS score did not seem to influence a man’s odds of conception. If the relationship is truly causal, stress may disrupt a woman’s hormonal balance in a way that hinders conception.

The authors have proposed several biological mechanisms through which stress might directly affect a woman’s fecundability. For instance, stress is known to be associated with higher levels of corticotropin-releasing hormones and glucocorticoids, which could delay or inhibit the surge of luteinizing hormones directly involved in ovulation induction. Stress might also reduce ovarian reserves.

More research will be required to establish such a causal link. In the meantime, couples who are finding it hard to conceive might want to consider managing their stress.

The findings appeared in the American Journal of Epidemiology.

Greenland's Ice Sheet. Credit: Wikimedia Commons.

Greenland’s ice sheet is melting at an accelerated rate — it’s four times faster than in 2003


Greenland’s ice sheet is melting at an accelerated rate, faster than previously estimated. This also implies an acceleration in sea level rise since Greenland is responsible for much of the effect that threatens coastal cities and islands throughout the world.

According to the authors at Ohio State University, Greenland is losing four times as much ice as it did in 2003, the baseline year of their analysis. Drawing on the Gravity Recovery and Climate Experiment (GRACE), twin satellites launched by NASA and its German partners to measure ice loss across Greenland, the researchers found that the island has lost approximately 280 gigatons of ice per year, on average, leading to a yearly sea level rise of 0.76 millimeters (0.03 inches).
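Those two numbers are mutually consistent, which is easy to verify: a gigaton of meltwater occupies roughly one cubic kilometer, and the world ocean covers roughly 3.61e8 square kilometers (a round figure assumed here, not taken from the study). A minimal back-of-the-envelope sketch:

```python
# Back-of-the-envelope conversion of ice loss (gigatons per year) into
# sea level rise. ASSUMPTIONS: 1 Gt of meltwater ~ 1 km^3, and a global
# ocean surface area of ~3.61e8 km^2 (round figures, not from the study).

OCEAN_AREA_KM2 = 3.61e8

def sea_level_rise_mm(meltwater_gt):
    """Millimeters of sea level rise from a given mass of meltwater."""
    volume_km3 = meltwater_gt * 1.0        # 1 Gt of water ~ 1 km^3
    rise_km = volume_km3 / OCEAN_AREA_KM2  # spread over the ocean surface
    return rise_km * 1e6                   # km -> mm

print(round(sea_level_rise_mm(280), 2))  # ~0.78 mm/yr, close to the cited 0.76 mm
```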

Scientists used to think that most of this ice loss came from Greenland’s southeast and northwest regions, where large glaciers break into icebergs that float — and eventually melt — in the Atlantic Ocean. However, the new study has identified the island’s southwest region as another source of significant melting.

Greenland’s southwest mostly lacks large glaciers, which is why the region has always been assumed to contribute minimally to ice loss. According to Michael Bevis, a professor of geodynamics at Ohio State University and lead author of the new study, the ice loss there is instead due to surface ice mass melting along the coastline.

Bevis and colleagues used data from GRACE but also GPS stations positioned around Greenland’s coast in order to identify the pattern of ice loss. To everyone’s surprise, the southwestern part of the island was found to be a serious contributor to sea level rise, channeling rivers of meltwater into the ocean during the summer.

“We knew we had one big problem with increasing rates of ice discharge by some large outlet glaciers,” Bevis said in a statement. “But now we recognize a second serious problem: Increasingly, large amounts of ice mass are going to leave as meltwater, as rivers that flow into the sea.”

This melting is largely due to global warming, which is amplifying a natural weather phenomenon known as the North Atlantic Oscillation (NAO) — a large-scale alternation of atmospheric mass between subtropical high surface pressure, centered on the Azores, and subpolar low surface pressure, centered on Iceland. The NAO has a great influence on sea surface temperature and can cause massive net ice loss when combined with global atmospheric warming.

“These oscillations have been happening forever,” Bevis said. “So why only now are they causing this massive melt? It’s because the atmosphere is, at its baseline, warmer. The transient warming driven by the North Atlantic Oscillation was riding on top of more sustained, global warming.”

What Greenland would look like if its ice sheet vanished. Map made by the British Antarctic Survey (BAS).


If it completely vanished, Greenland’s ice sheet — which is about 1.5 kilometers thick on average and covers 1,710,000 square kilometers — would raise sea levels by up to seven meters (23 feet). In a previous study, glaciologists at the Woods Hole Oceanographic Institution (WHOI) concluded that melting in Greenland’s ice sheet “has gone into overdrive”, showing that it is unprecedented in the last 400 years. The authors concluded that runoff in Greenland started to rise steadily when the first signs of climate change hit the Arctic, in the mid-19th century. However, it accelerated dramatically in the past 20 years, at a rate sixfold higher than before the Industrial Revolution.
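The seven-meter figure can be sanity-checked with similar back-of-the-envelope arithmetic. The ice density and ocean surface area used below are round textbook values assumed for illustration, not numbers from the studies cited here:

```python
# Rough check of the "up to seven meters" claim from the thickness and
# area quoted in the text. ASSUMPTIONS: ice density ~0.917 relative to
# water, ocean surface area ~3.61e8 km^2 (round values, for illustration).

ICE_DENSITY = 0.917        # relative to liquid water
OCEAN_AREA_KM2 = 3.61e8

thickness_km = 1.5         # average ice sheet thickness (from the text)
area_km2 = 1.71e6          # ice sheet area (from the text)

ice_volume_km3 = thickness_km * area_km2            # ~2.57e6 km^3 of ice
water_volume_km3 = ice_volume_km3 * ICE_DENSITY     # volume once melted
rise_m = water_volume_km3 / OCEAN_AREA_KM2 * 1000   # km -> m

print(round(rise_m, 1))  # ~6.5 m, in the same ballpark as the quoted 7 m
```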

The new findings show that along with glaciers, climate scientists now need to pay close attention to Greenland’s snowpacks and ice fields too. The GPS network that monitors the island’s coastline is rather sparse in the southwest, and this needs to change in order to gather more reliable data.

“We’re going to see faster and faster sea level rise for the foreseeable future,” Bevis said. “Once you hit that tipping point, the only question is: How severe does it get?”

“The only thing we can do is adapt and mitigate further global warming–it’s too late for there to be no effect,” he said. “This is going to cause additional sea level rise. We are watching the ice sheet hit a tipping point.”

The findings appeared in the Proceedings of the National Academy of Sciences.

High-temperature superconductivity helps scientists measure small magnetic fields, and aids advances in fields including geophysical exploration, medical diagnostics and magnetically levitated transportation. The discovery earned Bednorz and Müller the 1987 Nobel Prize in Physics. Credit: IBM.

Scientists break record for superconductivity at cozy Arctic temperatures


Scientists in Germany claimed that they have achieved superconductivity at 250 K, or –23 °C — about the average temperature found at the North Pole. The new record brings us closer to achieving superconductivity (zero electrical resistance) at room temperature, which would revolutionize the way we generate energy, transfer data, or build computers.

You can’t resist progress

Superconductivity was first discovered by Dutch physicist Heike Kamerlingh Onnes in 1911, when he and his students found that the electrical resistance of a mercury wire, cooled to about 4.2 degrees above absolute zero, made a dramatic plunge. The drop was enormous – the resistance became at least twenty thousand times smaller than at room temperature. Ever since then, scientists have struggled to understand this peculiar state, but there has been very good progress.

For instance, we know superconductivity is a quantum mechanical phenomenon characterized by the Meissner effect – the complete ejection of magnetic field lines from the interior of the superconductor as it transitions into the superconducting state.

The most common superconductors typically have to be cooled down with liquid helium to around -269°C (-452°F) for them to conduct electricity with no resistance. In this state, the conductor consists of a rigid lattice of positive ions drowned in a sea of electrons.

A normal conductor has electrical resistance because electrons moving through the lattice bump into it, slowing down in the process. These collisions also cause the atoms to vibrate, which is why electrical resistance leads to heat loss.

In a superconductor, on the other hand, the lattice is so rigid due to the low temperature that mechanical sound waves (phonons) ripple through it — and electrons ride the wave along with them. What’s more, electrons in a superconductor form bonds called ‘Cooper pairs’. When the temperature rises, the Cooper pairs break apart and the superconductive state dissolves.

In 2014, researchers at the Max Planck Institute for the Structure and Dynamics of Matter were able to achieve superconductivity at –80°C, using hydrogen sulfide. The next year, the record was broken again at –70 °C. Now, Mikhail Eremets and colleagues at the Max Planck Institute for Chemistry in Germany have raised the bar even higher — they’ve achieved superconductivity at right about the average temperature of the North Pole. That’s extremely close to the ultimate goal of achieving room temperature superconductivity that would enable new electrical highways or a new generation of supercomputers, among many other things. Just four years ago, the record temperature for superconductivity was –230 °C.
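Since this section hops between Kelvin and Celsius, the milestones are easier to compare after a quick conversion (0 K corresponds to -273.15 °C; the labels below paraphrase the text, they are not quotes):

```python
# Kelvin-to-Celsius conversion for the superconductivity milestones
# mentioned in the text (labels are paraphrases, not quotes).

def kelvin_to_celsius(kelvin):
    return kelvin - 273.15

milestones = [
    ("record four years earlier", 43),   # ~ -230 C
    ("hydrogen sulfide record", 203),    # ~ -70 C
    ("new claimed record", 250),         # ~ -23 C
    ("room-temperature goal", 273),      # ~ 0 C, the round figure quoted
]

for label, kelvin in milestones:
    print(f"{label}: {kelvin} K = {round(kelvin_to_celsius(kelvin))} C")
```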

Credit: Mikhail Eremets et al.


Eremets and colleagues worked once more with hydrogen sulfide, whose atoms were firmly pressed between diamond anvil cells, subjecting them to a huge pressure of 170 gigapascals, or half that found at the center of the Earth.

“This leap, by ~ 50 K, from the previous record of 203 K indicates the real possibility of achieving room temperature superconductivity (that is at 273 K) in the near future at high pressures,” say Eremets and co.

According to MIT Technology Review, the German physicists still have to provide more evidence that the superconductivity they claim to have reached is genuine. One obvious problem encountered thus far is that the hydrogen sulfide samples have not been shown to expel a magnetic field (evidence of the Meissner effect). But this may be more of a measurement problem, since the samples the researchers worked with were only micrometers in size.

In the meantime, there are reasons to believe that superconductivity can be reached at an even higher critical temperature. For instance, computational models suggest that yttrium superhydrides could superconduct at a cozy 27°C but at a huge pressure akin to that found at the center of the planet.

Credit: MaxPexel.

Low-income countries use up to 16 times fewer antibiotics than wealthy countries, WHO report says


Just in time for World Antibiotic Awareness Week, the World Health Organization (WHO) released a report that tallies antibiotic consumption around the world. The main finding is that the rate of antibiotic use can vary up to 16 times between countries, signaling a two-fold problem: on one hand, wealthy countries are overprescribing antibiotics, fueling a dangerous trend of antibiotic resistance; on the other hand, poorer countries may be underusing these drugs.

“Overuse and misuse of antibiotics are the leading causes of antimicrobial resistance,” Suzanne Hill, Director of the Department of Essential Medicines and Health Products at WHO, said in a statement. “Without effective antibiotics and other antimicrobials, we will lose our ability to treat common infections like pneumonia.”

Antibiotics are medicines that combat infections caused by bacteria. However, due to the misuse and overuse of antibiotics, many bacterial strains are developing antibiotic resistance.

Antibiotic resistance occurs when an antibiotic is no longer effective at controlling or killing bacterial growth. Bacteria which are ‘resistant’ can multiply in the presence of therapeutic levels of an antibiotic. Sometimes, increasing the dose of an antibiotic can help tackle a more severe infection, but in some instances — and these are becoming more and more frequent — no dose seems to control the bacterial growth. Each year, 25,000 patients in the EU and 63,000 patients in the USA die because of hospital-acquired infections caused by multidrug-resistant bacteria.

In 2015, the WHO called for more serious consideration of antibiotic resistance in light of recent trends. At the time, the organization stated that the world is not at all prepared to deal with such a threat.

“This is the single greatest challenge in infectious diseases today,” said Keiji Fukuda, the WHO’s assistant director-general for health security. “All types of microbes, including many viruses and parasites, are becoming resistant. This is happening in all parts of the world, so all countries must do their part to tackle this global threat.”

In 2016, the WHO began a surveillance program that monitors antibiotic consumption in numerous countries around the world. Each country submitted data on drug consumption based on import and production records, insurance and reimbursement records, and prescription and dispensing data from physicians and pharmacies.

The bulk of the data is sourced from well-established programs, but the WHO also included data from 16 low- and middle-income countries that have only recently rolled out similar programs.

In the new report, which tallies data from 65 countries, WHO researchers uncovered significant discrepancies in antibiotic use among countries. Specifically, antibiotic consumption varied from only 4.4 daily doses of antibiotic per 1,000 inhabitants to 64.4 — roughly a 16-fold difference. This disparity is doubly harmful: overprescription in affluent countries causes bacterial strains to adapt, and resistant strains can then spread to poorer countries where antibiotic use was too low to begin with.

The most frequently used antibiotics across all countries are amoxicillin and Augmentin. These broad-spectrum antibiotics are used to treat the most common types of infections — and they are also responsible for most antibiotic resistance. According to the report, these drugs ranged from less than 20% of total antibiotic consumption in some countries to more than 50% in others. On the other hand, “reserve” antibiotics — powerful last-resort antibiotics used to treat hard cases of multidrug-resistant bacteria — made up only 2% of total antibiotic consumption.

“Findings from this report confirm the need to take urgent action, such as enforcing prescription-only policies, to reduce unnecessary use of antibiotics,” Hill said.


Credit: Pixabay.

The 2016 election was so traumatic it caused PTSD symptoms in 1 in 4 young adults

The 2016 Presidential election, which saw Donald Trump rise to power, was marked by some of the most divisive campaigns in American history. So much so that for some young adults, the experience was genuinely traumatic. According to psychologists, one in four young adults experienced symptoms similar to those seen in post-traumatic stress disorder.


Melissa Hagan, an assistant professor of psychology at San Francisco State University, noticed that in the months following the election, her students appeared significantly affected. Surveys conducted at the time also confirmed some part of the population experienced psychological stress in the aftermath of the election.

Looking to put a finger on how much stress the 2016 election caused, Hagan and colleagues enlisted 769 students who were taking a psychology course at Arizona State University. The participants included a variety of racial and ethnic backgrounds, as well as religious and political identities.

Using a standard psychological assessment tool called the Impact of Event Scale (IES), researchers surveyed how many of the students were impacted by the election in such a way that it might lead to diagnosable PTSD.

According to the results, 25% of the students crossed the PTSD threshold. Another frightening result was that the students’ average score was comparable to scores reported by witnesses of a mass shooting seven months after the event.

“What we were interested in seeing was, did the election for some people constitute a traumatic experience? And we found that it did for 25 percent of young adults,” Hagan said in a statement.

Researchers also determined that 37.2% of students were completely dissatisfied with the election results and 18.5% were completely satisfied — everyone else was in the middle. When the researchers gauged the extent to which the students were upset by the results, they found that 39% of students were extremely upset, while 28.5% reported not feeling upset at all. About 24.2% of the students said their relationships were impacted negatively by the election, 10.4% said there was some negative impact, and 65% experienced no impact at all.

Some students were more affected than others. Black and nonwhite Hispanic participants scored higher on the PTSD assessment than their white counterparts. Females scored nearly 45% higher than males, and Democrats scored more than two and a half times higher than Republicans, the authors reported in the Journal of American College Health.

It’s not clear whether the traumatic effects of the 2016 election will carry over the long term, as the psychological assessment of the students only happened once. The high prevalence of PTSD symptoms, however, should prompt school mental health staff to be more mindful of the political environment their students are experiencing, apart from the usual stressors. And given the level of stress measured by the researchers, it wouldn’t be all that surprising to see some of the negative effects on people’s mental health last for years.

As to what made this election particularly traumatic, the shock of Donald Trump’s win and the hate-centered divisive narrative seem to have played important roles.

“There was a lot of discourse around race, identity and what makes a valuable American. I think that really heightened stress for a lot of people,” said Hagan.

Eddie, one of the dogs whose brain activity was scanned with an fMRI scanner. Next to the dog are the toys used in the experiment, "Monkey" and "Piggy". Credit: Gregory Berns.

Dogs may be able to process the meaning of some words, brain scans reveal

If you talk to dog owners, they’ll often swear by their pet’s ability to understand their words, but it’s only recently that scientific evidence has backed up such claims. According to researchers who imaged the brains of dogs while the canines processed words of objects, dogs may have at least a rudimentary neural representation of meaning for words that they have been taught.


“We know that dogs have the capacity to process at least some aspects of human language since they can learn to follow verbal commands,” said Emory neuroscientist Gregory Berns, senior author of the study. “Previous research, however, suggests dogs may rely on many other cues to follow a verbal command, such as gaze, gestures and even emotional expressions from their owners.”

Berns is one of the founders of the Dog Project, which is looking to probe the evolutionary history of man’s best friend. The project was the first to train dogs to voluntarily enter and sit still in a functional magnetic resonance imaging (fMRI) scanner, which to dogs, looks like a loud and intimidating machine — but they braved through it.

Working at the Dog Project, researchers had previously gained insights about the neural inner workings of canines, such as how dogs process faces and odors. Previously, researchers at the MTA-ELTE Comparative Ethology Research Group in Budapest used fMRI to show that dogs have an uncanny ability to pick up our emotions only from our speech, suggesting that our canine friends are able to sense our emotional currents through changes in the tone of our voice. This time, the researchers were looking to investigate how dogs process words.

To this aim, 12 dogs were trained by their respective owners to retrieve two different objects when these were called out by name. The objects had to look and feel different in order to facilitate discrimination, such as a stuffed animal or a rubber toy. Training was considered complete when a dog consistently retrieved the requested toy when presented with both of the objects.

In one experiment, the dog had to stay in an fMRI scanner while the owner, who stood in front of the machine’s opening, called out the names of toys, then showed the corresponding objects. For instance, when Eddie, a golden retriever-Labrador mix, heard the words “Piggy” or “Monkey”, the owner would hold out the matching toy. Sometimes, the owner would utter gibberish words such as “bobbu” and “bodmick” and hold up novel objects that the dog had never encountered before.

The fMRI scans revealed that the dogs’ auditory brain regions became more activated when they heard the words for novel objects, compared to trained words. This result was rather unexpected because it’s the exact opposite of how people’s brains behave — we typically show greater neural activity for known words than for novel words.

While we can’t ask dogs what’s up, the researchers hypothesize that the dogs may show greater neural activation in the auditory region of the brain because they’re paying more attention to novel words — perhaps sensing that their owners want them to obey a command. Dogs know that they will receive a treat if they please their owners, so there may be an incentive to be more attentive to novel words.

Half of the dogs showed increased activation in their parietotemporal cortex, a region of the brain that the researchers say may be analogous to the angular gyrus in humans, where the meanings of words are processed. The other half of the canines showed heightened activity in other brain regions when they heard novel words, such as the left temporal cortex, the amygdala, the caudate nucleus, and the thalamus.

These differences may be due to the variety of breeds and sizes among the dogs in the study.

But the big takeaway of this study, published in the journal Frontiers in Neuroscience, is that dogs appear to have a neural representation for the meaning of words they have been taught — a response that goes beyond a conditioned Pavlovian reflex. In other words, it seems like your dog might understand some of what you’re saying after all.

This graphic shows how the ancient land masses of Laurentia, Avalonia and Armorica would have collided to create the countries of England, Scotland and Wales. Credit: University of Plymouth

The British mainland was formed by three continental collisions

The British mainland was formed by the collision of three — not two — ancient continental land masses, geologists say.


The findings follow an extensive mineralogy study of exposed rock features across Devon and Cornwall. The two counties are separated by a clear geological boundary, with the north sharing properties with the rest of England and Wales while the southern part has an identical geological makeup to France and mainland Europe.

Up until now, the leading theory was that England, Wales and Scotland formed due to the merger of Avalonia and Laurentia more than 400 million years ago. The new study, carried out by scientists at the University of Plymouth, suggests that a third land mass — Armorica — was also involved in the process.

“This is a completely new way of thinking about how Britain was formed. It has always been presumed that the border of Avalonia and Armorica was beneath what would seem to be the natural boundary of the English Channel,” says lead researcher Dr. Arjan Dijkstra, who is a lecturer in Igneous Petrology at the University of Plymouth, UK.

“But our findings suggest that although there is no physical line on the surface, there is a clear geological boundary which separates Cornwall and south Devon from the rest of the UK.”

The team visited 22 sites in Devon and Cornwall where they sampled solidified magma that welled up long ago from a depth of 100km, as a result of underground volcanic eruptions or other geological events. Rocks from each site were subjected to a detailed chemical analysis in the lab using X-ray fluorescence (XRF) spectrometry. An isotopic analysis of the rocks — which involved comparing levels of strontium and neodymium elements — enabled the researchers to paint a fuller picture of the rocks’ history.

Finally, the results were compared to studies performed elsewhere in the UK and mainland Europe. The comparison shows there’s a clear boundary running from the Exe estuary in the east to Camelford in the west.

The new findings, published in the journal Nature Communications, mean that we have to rethink how the British Isles formed. The researchers say that the process looked very much like a three-way car crash — first, Avalonia and Laurentia collided, forming much of Britain; later, Armorica crashed into Avalonia from the south, only to back away, leaving behind a bumper-like formation. Later still, the landmass advanced again, crashing into Avalonia once more.

“We always knew that around 10,000 years ago you would have been able to walk from England to France,” Dr. Dijkstra added. “But our findings show that millions of years before that, the bonds between the two countries would have been even stronger.”

“It explains the immense mineral wealth of South West England, which had previously been something of a mystery, and provides a fascinating new insight into the geological history of the UK.”

Ancient farmers left their mark on the Amazon rainforest as early as 4,500 years ago

Credit: Pixabay.

There’s this romanticized notion that the Amazon was pristine before Westerners arrived in South America — but that’s not entirely true. According to the most extensive study of its kind, early farmers living in the Amazon basin had been introducing new crops to the area and using fire to improve the nutrients in the soil as early as 4,500 years ago. It’s also true, however, that these early farmers used far more sustainable farming practices than those currently used in the area.

The interdisciplinary study involved scientists specializing in a wide range of fields, including archaeology, paleoecology, botany, and ecology. The team led by Dr. Yoshi Maezumi, a researcher at the University of Exeter, studied human land use and fire management in eastern Brazil, where they found evidence of ancient farming of crops such as maize, sweet potato, manioc, and squash.

By examining charcoal, pollen, and plants dug up from the soil at archaeological sites, as well as sediments from a nearby lake, the team also learned that ancient Amazon farmers had extensively deployed controlled fires in order to increase the nutrient content of the soil. They also added manure and food waste to further improve the soil.

These practices helped Amazon farmers develop a kind of nutrient-rich soil called Amazonian Dark Earths (ADEs). After many iterations involving continuous enrichment and reuse of the soil, ADEs allowed the farmers to grow maize and other crops — which are usually grown near nutrient-rich lakes and river shores — in areas that generally had poor soil. This explains why forests around archaeological sites in the Amazon have a higher concentration of edible plants even to this day.

The researchers literally had to get their hands dirty during the study in order to investigate ancient farming practices in the Amazon rainforest. Credit: Dr. Yoshi Maezumi.


Ultimately, these agricultural innovations allowed the communities to grow more food, supporting a growing population in the Amazon.

These findings contradict the notion of an “untouched” rainforest, indicating instead that ancient communities were making a lasting impact on the Amazon thousands of years ago.

“Ancient communities likely did clear some understory trees and weeds for farming, but they maintained a closed canopy forest, enriched in edible plants which could bring them food,” Dr. Maezumi said in a statement.

That’s not to say, however, that the impact was extreme. Rather than expanding the amount of farmland by clearing the rainforest, early Amazon farmers continuously reused the soil.

Today, the sound of chainsaws fills the air in many parts of the Amazon rainforest to make way for modern agricultural plantations. This kind of deforestation can only make matters worse in a region characterized by intensifying droughts and rising temperatures driven by global warming.

Perhaps there are some lessons in humility and respect for nature to be learned from these ancient farmers.

“This is a very different use of the land to that of today, where large areas of land in the Amazon is cleared and planted for industrial scale grain, soya bean farming and cattle grazing. We hope modern conservationists can learn lessons from indigenous land use in the Amazon to inform management decisions about how to safeguard modern forests,” said Dr. Maezumi.

The findings appeared in the journal Nature Plants.