Category Archives: World Problems

How scientists predict climate scenarios — enter the world of climate models

Our planet’s climate is an enormously complex topic. Sure, the underlying principles are straightforward, but incorporating the interplay between all the different elements is a very hard task. To assess the large-scale situation, researchers often turn to something called climate models.

A climate model is a quantitative simulation of the elements in our planet’s climate. When researchers develop models, they divide the planet into a grid, apply the known equations governing the parameters they consider, and “run” the model to evaluate the results. Models can be more or less detailed, can extend into the past or into the future, and can be localized or planetary.
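
To make the grid-and-equations idea concrete, here is a minimal sketch of the recipe, not any real climate model: each grid cell holds a temperature, each time step nudges cells toward their neighbours as a crude stand-in for heat transport, and a small forcing term is added on top. All numbers and parameter names here are invented for illustration.

```python
# Minimal, illustrative sketch of a gridded "climate" simulation.
# This is NOT a real climate model: it only shows the recipe of
# "divide the planet into a grid, apply equations, step forward in time".
import numpy as np

def run_toy_model(n_lat=18, n_lon=36, n_steps=120, diffusion=0.2, forcing=0.01):
    """Evolve a temperature field on a latitude/longitude grid."""
    temp = np.full((n_lat, n_lon), 14.0)          # start at a uniform 14 °C
    temp += np.random.normal(0, 0.5, temp.shape)  # small initial variability
    for _ in range(n_steps):
        # crude "transport": each cell relaxes toward the mean of its neighbours
        neighbours = (np.roll(temp, 1, 0) + np.roll(temp, -1, 0) +
                      np.roll(temp, 1, 1) + np.roll(temp, -1, 1)) / 4.0
        temp += diffusion * (neighbours - temp)
        temp += forcing                            # uniform warming term, e.g. extra CO2
    return temp

field = run_toy_model()
print(f"global mean after run: {field.mean():.2f} °C")
```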

This world map shows the change in annual average precipitation projected by the GFDL CM2.1 model for the 21st century (Credit: NOAA Geophysical Fluid Dynamics Laboratory, via Wikimedia Commons).

Filling in the gaps

Climate models are essential tools for climatologists as they try to make sense of the complex dynamics of the Earth’s climate system. One reason is that we don’t have access to all the data. We only started observing the planet’s climate relatively recently, and we haven’t covered the whole planet with meteorological instruments, so there are a lot of gaps in our climate data. Models can help fill in those gaps.

Official global temperature records started in 1880, and surface observations grew over time into the integrated system we have today, with satellites, buoys, aircraft, and stations around the planet.

But although our ability to gather data has improved substantially, especially with the advent of satellite data, the planet is so big that even with all these data points we have, there are still plenty of gaps to fill in.

Credit: Alder, Jay, “Simulated global temperature change”, usgs.gov, United States Geological Survey (USGS), 26 May 2016. Archived from the original on 23 August 2019.

It’s important to dive into the concept of climate. The World Meteorological Organisation defines climate as the mean and variability of relevant quantities (such as temperature, precipitation, or wind) over a period of time, conventionally 30 years.

In other words, climate is why people in the Caribbean have almost no warm jackets; weather, on the other hand, is what’s going on today. Weather is one event, climate is a long-term trend.
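
A small sketch of that distinction in code, using entirely synthetic daily temperatures: weather is a single day’s value, while climate is a 30-year statistic of many such values.

```python
# Sketch: weather = one day's value, climate = a 30-year statistic.
# The daily numbers here are synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
years, days_per_year = 30, 365
seasonal = 10 * np.sin(2 * np.pi * np.arange(days_per_year) / days_per_year)
daily_temps = 15 + np.tile(seasonal, years) + rng.normal(0, 3, years * days_per_year)

weather_today = daily_temps[-1]          # a single event
climate_normal = daily_temps.mean()      # 30-year mean ("climate normal")
climate_variability = daily_temps.std()  # 30-year variability

print(f"today: {weather_today:.1f} °C, 30-year mean: {climate_normal:.1f} °C "
      f"± {climate_variability:.1f} °C")
```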

Complexity

Horizontal resolutions considered in today’s higher-resolution models and in the very high-resolution models now being tested: (a) illustration of the European topography at a resolution of 87.5 × 87.5 km; (b) same as (a) but for a resolution of 30.0 × 30.0 km. Source: IPCC

Part of the reason why models are so complicated is that they need to incorporate multiple types of data. Climate models need to be a good representation of the physical world, which involves things like winds, currents, chemistry, and many, many more.

This means we can’t use a model that approximates the oceans as just giant swimming pools. Oceans have currents and other complex mechanisms that play a role in governing the climate — and a comprehensive model needs to incorporate these elements, which are vital to understanding the climate.

Resolution is also a problem. Models can’t rely on crude representations: you can’t make a climate model that assumes the Andes Mountains are just a bunch of rectangular walls in coastal South America. We need better resolution for our models to be accurate.

It’s like building Lego sculptures: with many small bricks you can make a good Millennium Falcon, but a few large bricks won’t capture the ship’s round shape. Resolution has increased over the years because we now have better computers to do the math.

However, that doesn’t mean the older models didn’t make good predictions — quite the opposite. Models from 1970 to 2007 have proved consistent with the global mean surface temperatures observed since. Still, models that incorporate more parameters and have better resolution tend to produce better results.

Are they good enough?

No model is a perfect representation of reality. But models don’t need to be perfect to help us understand how our planet’s climate will evolve.

One way to test a climate model is to use old data and check whether it “predicts” climate phenomena we already know happened. Simply put, you pretend that you’re building the model some years ago, and see if it successfully predicts something that’s already happened. This is called a “hindcast”, as opposed to a “forecast”. Here’s an example.
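
Here is a minimal sketch of the hindcast logic with synthetic data (a simple trend fit stands in for a real model): use only the “pre-2000” portion to build the model, predict the period we already observed, and measure the error.

```python
# Illustrative hindcast check with synthetic data: fit on the early period,
# "predict" the later period we already observed, and measure the error.
import numpy as np

years = np.arange(1970, 2021)
observed = 0.018 * (years - 1970) + np.random.default_rng(1).normal(0, 0.05, years.size)

split = years < 2000                                     # pretend we build the model in 2000
coeffs = np.polyfit(years[split], observed[split], 1)    # stand-in for "the model"
hindcast = np.polyval(coeffs, years[~split])

rmse = np.sqrt(np.mean((hindcast - observed[~split]) ** 2))
print(f"hindcast RMSE over 2000-2020: {rmse:.3f} °C")
```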

In 1991, Mount Pinatubo erupted with global impact; its volcanic ash reached 35 km into the atmosphere. Scientists knew the amount of ash and aerosols injected would be enough to cool the atmosphere. James Hansen and his team took the opportunity to validate their climate model: if it was good enough, it would show the cooling effect.

With 6 months of real data after the eruption, the scientists could compare the model’s temperatures with the observations. The results were almost as expected — the model did predict a cooling effect; it wasn’t a perfect prediction, but it was good enough. Despite the complexity of simultaneous events on the global scale, the climate model captured the cooling trend accurately.

Models that were used in the IPCC 4th Assessment Report can be evaluated by comparing their approximately 20-year predictions with what actually happened. In this figure, the multi-model ensemble and the average of all the models are plotted alongside the NASA Goddard Institute for Space Studies (GISS) Surface Temperature Index (GISTEMP). Climate drivers were known for the ‘hindcast’ period (before 2000) and forecast for the period beyond. The temperatures are plotted with respect to a 1980-1999 baseline. Credit: Gavin Schmidt

Scenarios

Models are also a way to run experiments. We can’t test the effects of CO2 emissions by actually filling the atmosphere with the gas, and we don’t have a spare atmosphere in a lab, so we model and simulate scenarios instead. You take the equations, feed in the amount of CO2 emitted per year, and the model works with that number to tell you how much the temperature will rise.

That is what the IPCC scenarios are for. These scenarios are the ‘ifs’: “if we continue emitting the same amount of greenhouse gases (GHG) we are emitting now, what happens?”, “if we increase GHG emissions, what happens?”, and “if we reduce GHG emissions, what happens?”. These scenarios are then used to make projections for different periods: the near term (the next 20 years), the mid-term (around 40 years from now), and the long term (up to 100 years ahead).
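
Real models solve the underlying physics, but the “if X emissions, then roughly Y warming” logic can be sketched with the approximately linear relationship between cumulative CO2 emissions and warming (roughly 0.45°C per 1,000 GtCO2). The cumulative totals below are illustrative assumptions, not IPCC scenario values.

```python
# Back-of-envelope scenario comparison. Real models solve the physics; this only
# illustrates the "if we emit X, we get roughly Y" logic using the approximately
# linear relation between cumulative CO2 and warming. Emission totals are invented.
TCRE = 0.45              # approx. °C of warming per 1000 GtCO2 (central estimate)
current_warming = 1.1    # approx. °C above pre-industrial levels today

scenarios_gtco2 = {      # hypothetical cumulative emissions from now to 2100
    "rapid cuts": 900,
    "current policies": 4000,
    "no further action": 7000,
}
for name, cumulative in scenarios_gtco2.items():
    warming = current_warming + TCRE * cumulative / 1000
    print(f"{name:>18}: ~{warming:.1f} °C by 2100")
```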

With these scenarios, we can plan our actions for the future and find solutions to avoid extreme climate events like the recent ones. And since we have climate models to guide us, we must act: there is no doubt that humanity is the cause of the current damage to the climate.

So what do the models tell us for the future?

We’re in the middle of a global warming effect, and there is overwhelming evidence that we are causing this through our emissions of greenhouse gases. These gases are heating up the atmosphere, which, in turn, heats up the entire planet.

The models can tell us a lot of specifics regarding how this will unfold (spoiler alert: it’s bad), and what we can do to stop it. For instance, models suggest that if we don’t take any action, our planet will experience warming of over 4 degrees Celsius. With current policies, we’re heading for around 3 degrees Celsius, and if all countries keep their current pledges and targets, we’re headed for 2.4 degrees.

However, the models also show that going over 2 degrees would already cause catastrophic damage to the planet. Since we’re already at roughly one degree of warming above pre-industrial levels, our best bet is to keep warming within 1.5 degrees. The models also show how we can do that, but whether we as a society actually come together and do it is a whole different question.

In essence, climate models are a tool. They’re a tool to help us better understand the planet, the effects that we are having on it, and how we can address the damage we’re causing. They’re not perfect, but they’re extremely useful, and we’d be wise to keep an eye on them as we navigate the dangers of a heating climate.

LED street lights may be decimating insect populations

Elephant hawk moths in the UK. Credit: Douglas Boyes/University of Newcastle.

Insects account for nearly 80% of all animal life on Earth, but both their numbers and diversity have been declining dramatically in recent years. The factors responsible for plummeting insect numbers are manifold, including deforestation, climate change, and agriculture. A new study suggests that light pollution is also contributing heavily to this worrying decline, with LED streetlights having the most impact.

Blinded by the lights

For their new study, researchers affiliated with the charity Butterfly Conservation, Newcastle University, and the UK Centre for Ecology & Hydrology surveyed 26 sites with streetlights across the United Kingdom for caterpillars and compared their numbers with similar stretches of unlit road nearby.

This analysis showed that moth caterpillar numbers declined in lit hedgerows and grass margins by 47% and 33%, respectively, compared to unlit areas. Although moths are not the only insects affected by light pollution, their caterpillars don’t disperse far from their hatching sites, which makes them reliable indicators of how lighting affects local populations.

Sodium lights (left) vs white LEDs. Credit: Douglas Boyes/University of Newcastle.

The results, published in the journal Science Advances, suggest that sites with white LEDs had the steepest reduction in the number of caterpillars compared to sites lit by sodium lamps. LED lights are much more efficient, last longer, and are cheaper in the long run than sodium lamps. However, the drawback is that while sodium lamps emit a yellow glow, LEDs emit light across the entire visible spectrum. This means there’s a much greater potential for wildlife disruption. For instance, insects are known to be very sensitive to shorter, bluer wavelengths of light, which are mostly absent from sodium lamps.

According to Douglas Boyes, a Ph.D. researcher in biology at Newcastle University, light pollution most likely prevents females from laying their eggs properly, a behavior that has evolved to occur in darkness. Furthermore, adult moths are naturally attracted to streetlights, where they become easy pickings for predators such as bats.

Boyes adds that moth numbers in Britain have declined by nearly one-third since the 1970s, and the same is probably true in other parts of Europe where monitoring isn’t as accurate.

The Buff Arches moth has declined by more than 60% since the 1970s. Credit: Iain Leach, Butterfly Conservation.

More than 40% of insect species are declining and a third are endangered, a 2019 review found. The rate of extinction is eight times faster than that of mammals, birds, and reptiles, and the total mass of insects is falling by a precipitous 2.5% a year at a global level.

Insect loss can have a huge cascading effect on biodiversity at large, as many birds, reptiles, amphibians, and fish depend on insects as a food source.

Switching lights may help but there are still many unknowns

The main cause of the decline is believed to be agricultural intensification, particularly land-use change and the treatment of fields with synthetic fertilizers and pesticides. While it’s unclear just how much effect light pollution has on this decline relative to other factors, these most recent findings suggest that lighting nevertheless has a sizable impact.

This is particularly important as many cities across the world are transitioning towards all LED lighting. The solution is to use other types of lights. For instance, rather than white LEDs, municipalities can opt for warm white LEDs, which contain less blue light, or even red LED streetlights. Dimming the lights during the early hours may also help ease the impact on insect wildlife.

Temperature extremes on both ends impair bees’ flight, raising new concerns about climate change

Rising mean temperatures could help bees in colder areas fly better. Overall, however, climate change is going to impair the insects’ ability to fly, mainly through the increase in freak and extreme weather events that it promotes.

Image via Pixabay.

In order to do their job (pollination), bees need to be able to fly. And we definitely need pollinators to do their job. But, according to researchers from Imperial College London, rising temperatures all over the world are likely to impair bees’ flight performance. While colonies in areas closer to the poles (which are naturally colder) might actually see an improvement in their flight performance, as their ranges shift closer to the bees’ ideal temperatures, the increase in extreme weather brought about by higher temperatures means that, overall, bees worldwide will have a harder time flying around.

According to the findings, bee flight performance peaks at around 25-27°C but declines rapidly at both colder and hotter temperatures.

Too hot for comfort

“Climate change is often thought of as being negative for bumblebee species, but depending on where in the world they are, our work suggests it is possible bumblebees will see benefits to aspects of an important behavior,” explains first author Daniel Kenna from the Department of Life Sciences at Imperial. “However, more extreme weather events, such as cold snaps and the unprecedented heatwaves experienced in recent years, could consistently push temperatures beyond the comfortable flight range for certain species of bumblebees”.

“These risks are particularly pertinent for ‘fixed colony’ pollinators like bumblebees, which cannot shift their position within a season if conditions become unfavorable, and potentially provide a further explanation as to why losses have been observed at species’ southern range limits.”

Air temperature has a direct effect on the body temperature of flying insects, including bees, the team explains — and body temperature has an impact on their ability to fly. Temperatures that are too low impair muscle activity, making them function too slowly to support flight. In too warm temperatures, the insects overheat.

In order to measure the impact of air temperature on bees’ ability to fly, the team temporarily attached bumblebees to ‘flight mills’ — devices in which they fly in circles like a carousel while their speed and distance flown are recorded. Bumblebees of several body sizes were tested at temperatures from 12-30°C, and the results were used to construct a thermal performance curve (TPC). This TPC predicts that while bumblebees can fly around 3 km at their thermal optimum, this average flight distance could be reduced to under 1 km when temperatures rise to 35°C. At 10°C, it could drop to as little as a few hundred meters.
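
As a rough sketch of what building such a curve involves, here is a minimal example that fits a simple peaked curve to invented (temperature, distance) points; the study’s actual data and fitting method will differ.

```python
# Sketch of fitting a thermal performance curve (TPC) to flight-mill data.
# The (temperature, distance) points below are invented for illustration only.
import numpy as np

temps = np.array([12, 15, 18, 21, 24, 26, 28, 30])              # °C
distances = np.array([0.1, 0.4, 1.2, 2.1, 2.8, 3.0, 2.7, 2.2])  # km flown

# A simple quadratic in temperature is enough to capture the peaked shape here.
coeffs = np.polyfit(temps, distances, 2)
fine_t = np.linspace(10, 35, 500)
curve = np.polyval(coeffs, fine_t)

optimum = fine_t[np.argmax(curve)]
print(f"estimated thermal optimum: ~{optimum:.1f} °C")
```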

At temperatures of 15°C and below, the team observed that bees were demotivated to fly and frequently would not travel past 100 m. Moreover, only the larger bees managed to fly at these low temperatures, which suggests that smaller individuals may be more affected by cold days but stand to benefit more from warmer conditions.

Lead researcher Dr. Richard Gill, from the Department of Life Sciences (Silwood Park) at Imperial, said:

“While we still need to understand how these findings translate to factors like foraging return to colonies and pollination provision, as well as applicability to other bumblebee species, the results can help us understand how smaller versus larger flying insects will respond to future climate change.

“It’s not just pollination: how different flying insects respond to warming temperatures could also affect the spread of insect-borne diseases and agricultural pest outbreaks that threaten food systems. Applying our experimental setup and findings to other species can help us to understand future insect trends important for managing service delivery or pest control methods.”

For now, the team focused exclusively on how climate change impacts flying efficiency, but they plan to expand their work to include the effects of other stressors, such as pesticide exposure. Furthermore, they’re also looking to examine how climate change stands to impact pollination efficiency across different landscapes.

The paper “Thermal flight performance reveals impact of warming on bumblebee foraging potential” has been published in the journal Functional Ecology.

Pesticides, parasites, hunger — bees worldwide are dying faster than we thought, other pollinators might be too

Bees are falling like flies, new research reports, and it seems to be due to our use of pesticide cocktails.

Image via Pixabay.

We as a species are virtually completely dependent on bees and other pollinator insects, without whom we wouldn’t be able to put food on the table. A new meta-analysis that reviewed dozens of studies published over the last 20 years reports that the use of pesticide cocktails in agriculture greatly increases mortality among bees, more so than the substances taken individually. This is further exacerbated by the combined effects of agrochemicals, parasites, and malnutrition on bee behaviors and health.

The team concludes that current risk assessments significantly underestimate how much pressure bees and other pollinators are subjected to. The steep drop in pollinator numbers we’ve seen in crop and wild areas is a testament to these pressures, with potentially dire consequences for ecosystems around the world and our food security.

Bees in a pinch

“A failure to address this and to continue to expose bees to multiple anthropogenic stressors within agriculture will result in the continued decline in bees and their pollination services, to the detriment of human and ecosystem health,” the study concluded.

Pollinators, bees included, are the unsung backbone of our agriculture, but also of wild plant life. Given that insect populations are in decline all over the world, this naturally raises concerns for the health of pollinators going forward — and whether they can continue performing their ecological role or not. Roughly 75% of the world’s crops producing fruits and seeds for human consumption, including cocoa, coffee, almonds, and cherries, rely on pollinators.

Such concerns were the starting point for the current study. The authors explain that while bees seem to be able to resist the different stressors plaguing them today when taken individually, they’re buckling under their combined weight. The combined pressure from agrochemicals, parasites, and malnutrition is taking a toll on the species, greatly increasing the likelihood of death for individual bees and hives as a whole.

Intensive agriculture relies on the use of compounds such as fungicides or pesticides to protect crops and ensure large yields. “Interactions between multiple agrochemicals significantly increase bee mortality,” said co-author Harry Siviter, of the University of Texas at Austin. Furthermore, industrial-scale use of managed honey bees (in order to produce honey) increases the species’ exposure to parasites and diseases, which places even more strain on them.

The continued shrinking of areas with wild plants and wildflowers translates to less diverse pollen and nectar sources for bees, and arguably lower overall amounts of food they can access.

Although previous research has looked at these factors independently — including the effect different agrochemicals have on bees — the meta-study is the first one to look at their effect in aggregate. According to the team, the results strongly suggest “that the regulatory process in its current form does not protect bees from the unwanted consequences of complex agrochemical exposure”. Although the current analysis focused on honey bees, as most literature on the subject focuses on them, more research is needed on other pollinators, the team explains, as they might react differently to the stressors we’ve seen here.

Back in 2019, researchers were drawing attention to the fact that almost half of the world’s insect species were in decline, and a third of them were at real risk of going extinct by the end of the century. Leading causes for this decline are pesticide use and habitat destruction. Against that background, the warnings of this meta-study are all the more biting.

The paper “A cocktail of pesticides, parasites and hunger leaves bees down and out” has been published in the journal Nature.

If civilization collapses, researchers say, try to be in one of these five countries

If you’re planning on thriving while civilization worldwide crumbles, New Zealand is probably the best place to be, says new research.

Bridal Veil Falls, New Zealand. Image credits Holger Detje.

Friday is upon us, and that can only mean one thing: it’s time to ponder the collapse of modern human civilization, as a treat. New research at the Global Sustainability Institute at Anglia Ruskin University (ARU) comes to help us along our merry way, by estimating which countries today would be most resilient to future systemic threats posed by climate change and other globe-spanning problems.

The paper itself examined which factors could lead to such a scenario, focusing on a combination of ecological destruction, resource depletion, and population growth. It then looked at today’s countries and gauged which would fare the best during the “de-complexification” we’d be bound to see after such a collapse. De-complexification refers to the gradual or sudden breakdown of the multiple overlapping systems that maintain the world as we know it, including the collapse of supply chains, international agreements, and global financial structures. In essence, globalization but in reverse.

At the end of the world

The study was carried out by Nick King and Professor Aled Jones at the ARU, and they identified New Zealand as likely the best place to weather the storm. Iceland, the United Kingdom, Australia (specifically Tasmania), and Ireland were the runners-up.

The authors explain that the challenges which face us in the future, ecological destruction, limited resources, and population growth, could trigger a reduction in the complexity of our civilization — in essence, collapse — especially with climate change acting as a “risk multiplier” that makes these trends harder to deal with. Whether this will be a very rapid breakdown taking place in less than a year, or whether this will be a longer, more gradual descent, the paper doesn’t aim to answer. It could even be a hybrid of the two, according to the authors, starting as a gradual decline that picks up speed through “feedback loops”, leading to an abrupt collapse.

Since we live in such an interconnected and interdependent world today, any localized decline will quickly ripple across the world and affect us all.

So, where do you go to weather something like that? The researchers tried to determine that by looking at the self-sufficiency (energy and manufacturing infrastructure), carrying capacity (land available for arable farming and overall population), and isolation (distance from other large population centers which may be subject to displacement events) of countries around the world. The next step was to assess each candidate’s individual and local potential for agriculture and energy production.

According to them, New Zealand, Iceland, the United Kingdom, Australia/Tasmania, and Ireland are the countries that have the most favorable conditions to survive a global collapse while maintaining high levels of societal, technological, and organizational complexity (i.e. civilization) within their borders. All five of them are islands or island continents with a strong oceanic climatic influence and low variability in temperature and precipitation. Taken together, these conditions will likely allow the countries to remain quite stable despite the effects of climate change.

New Zealand came in first due to its low population, high geothermal and hydroelectric potential, and wide swathes of agricultural land. Iceland, Australia (Tasmania), and Ireland also have favorable characteristics, but to a lesser extent. The UK is put at risk by its complicated energy mix and high population density. Although it does have a high agricultural output today, it has low per capita availability of agricultural land, meaning each square foot of land needs to feed a lot of people. This may make it impossible to achieve self-sufficiency.

“Significant changes are possible in the coming years and decades. The impact of climate change, including increased frequency and intensity of drought and flooding, extreme temperatures, and greater population movement, could dictate the severity of these changes,” explains Professor Aled Jones.

“As well as demonstrating which countries we believe are best suited to managing such a collapse—which undoubtedly would be a profound, life-altering experience—our study aims to highlight actions to address the interlinked factors of climate change, agricultural capacity, domestic energy, manufacturing capacity, and the over-reliance on complexity, are necessary to improve the resilience of nations that do not have the most favorable starting conditions.”

The paper “An Analysis of the Potential for the Formation of ‘Nodes of Persisting Complexity'” has been published in the journal Sustainability.

A lab experiment shows that we could engineer malaria-carrying mosquitoes to kill themselves off

A new paper showcases how genetic engineering can be used to cause populations of malaria-spreading mosquitoes to self-destroy.

Image credits Egor Kamelev.

An international research effort has shown, in the context of a lab experiment, that male mosquitoes engineered to carry a certain stretch of DNA can rapidly destroy entire groups of these blood-sucking insects. The main significance of this experiment is that it shows gene-drive technology can work even in harsh environmental conditions, such as those in sub-Saharan Africa.

This “gene drive” sequence is essentially a damaging mutation that could prove to be a powerful tool against the carriers of malaria.

Drastic measures

“Our study is the first [that] could show that gene-drive technology works under ecologically challenging conditions,” says Ruth Muller, an entomologist who led the research at PoloGGB, a high-security lab in Terni, Italy. “This is the big breakthrough that we made with our study.”

While this experiment has been a success, that doesn’t mean it’s going to be used any time soon. For that to happen, the authors first need to prove that their edited mosquitoes can work in practice — i.e. that they’re safe to release into the wild. Not only that but local governments and residents will have to give their approval before any of the mosquitoes can be released.

Still, malaria remains one of the most concerning diseases on Earth. It infects an estimated 200 million people every year, with an estimated annual death toll of around 400,000. This is despite decades of coordinated effort to contain it.

So the authors decided to use the CRISPR gene-editing technique to make the mosquitoes that carry the malaria parasite self-destruct. They worked with the Anopheles gambiae species, which is native to sub-Saharan Africa. The gene they modified is known as “doublesex”; the modified variant deforms the mouths and reproductive organs of female mosquitoes, meaning they can’t bite (and thus spread the parasite) or lay eggs. This is combined with a gene drive, “effectively a selfish type of genetic element that spreads itself in the mosquito population,” says Tony Nolan of the Liverpool School of Tropical Medicine, who helped develop and test the mosquitoes.

Due to the risks involved in releasing these insects into real ecosystems, the experiments were carried out in small cages in a high-security basement lab in London. There, the modified mosquitoes showed that they can destroy populations of the unmodified insects.

In order to test them under more natural conditions, however, the team also built a special high-security lab in Italy, specifically designed to keep the mosquitoes in. Here, dozens of gene-edited mosquitoes were released into very large cages containing hundreds of natural mosquitoes. Temperature, humidity, and the timing of sunrise and sunset mimicked the environment in sub-Saharan Africa. In less than a year, the authors report, the population of un-altered mosquitoes was all but wiped out.

Both of these steps were carried out far from the insects’ natural range as extra insurance in case any of them got out.

Whether such an approach will ever actually be used in real-life settings is still a matter of much debate. Even so, the study showcases one possible approach and strongly suggests that it would also function in the wild. It’s also a testament to how far gene-editing technology has come that we could potentially make one of the species most threatening to us effectively destroy itself.

The paper “Gene-drive suppression of mosquito populations in large cages as a bridge between lab and field” has been published in the journal Nature Communications.

Cities need to wean off of cars in the future or become endless traffic jams

If we want cities to remain viable in the future, we’ll have to rethink transportation and car use, a new paper warns.

Image via Pixabay.

Researchers at the University College London (UCL) trying to understand the city of the future say it doesn’t mix well with automobiles. If current trends continue, they explain, cities will eventually be swamped by cars. This will drain ever-more resources on infrastructure, and waste ever-more of our time through busy, slow commutes.

Cars will still be used, undoubtedly, but the authors recommend that walking or cycling should be promoted instead of driving for short, local trips. Public transport networks should be improved and encouraged for longer journeys, where possible. In order to keep cities livable in the future, the team concludes, cars should only be used for special occasions or emergencies.

Too many

“The city of the future, with millions of people, cannot be constructed around cars and their expensive infrastructure,” explains lead author Dr. Rafael Prieto Curiel. “In a few decades, we will have cities with 40 or 50 million inhabitants, and these could resemble car parks with 40 or 50 million cars.”

“The idea that we need cars comes from a very polluting industry and very expensive marketing.”

The results are based on a mathematical framework that models the use of cars in a city. For the purposes of this study, the model assumed that citizens would either use a car on a daily basis or use public transport. What the model tracked was how long each journey would take, as time was considered to be the main cost individuals weigh when deciding how to travel. The baseline for the model was a city in which there is no personal car traffic, just cycling, walking, and public transport.

On the other extreme, the model considered a city with 50 million inhabitants and 50 million cars, where all residents would commute to work in their own vehicle in order to save time. This virtual city, quite understandably, saw much higher levels of congestion and required more spending on infrastructure such as avenues, bridges, and car parks in order to accommodate all that traffic.

Surprisingly, however, while the people in this city opted to drive to work to get there sooner, they actually lost more time than those in other scenarios. While driving is the fastest solution for individuals, when everybody opted for it, commuting times were the longest seen in any of the simulated cities. The team explains that this comes down to traffic — all those cars on the road create jams and slow everybody down significantly.
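
A toy version of that logic, not the authors’ framework, can be sketched with a standard congestion curve: per-trip travel time rises steeply as the number of drivers approaches road capacity, so “everyone drives” ends up slower than a mixed system. All figures below are hypothetical.

```python
# Toy congestion sketch (not the paper's model): per-trip travel time grows
# with the number of drivers, so "everyone drives" can be collectively slower
# than a mix of driving and public transport.
def car_time(drivers, free_flow_min=20, capacity=1_000_000):
    """BPR-style congestion curve: time rises steeply once demand nears capacity."""
    return free_flow_min * (1 + 0.15 * (drivers / capacity) ** 4)

transit_time = 40          # minutes, assumed fixed regardless of crowding
population = 2_000_000     # hypothetical commuters

for share_driving in (0.1, 0.5, 1.0):
    drivers = share_driving * population
    print(f"{share_driving:>4.0%} driving -> car trip: {car_time(drivers):5.1f} min, "
          f"transit trip: {transit_time} min")
```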

Where to go from here — and how?

The paper offers reliable evidence that better public transport infrastructure would improve the travel time for citizens, as more of them would opt for public transport over personal vehicles. It also shows that even without any improvements in public transport, time costs for commuters and citizens travelling through the city can be reduced by lowering the number of people driving at any single time.

While they don’t advocate for this solution, the authors give a scenario where a group of people is allowed to drive one week, but must use other transportation options the next one, such as ride-sharing or public transport. Average commuting times could be reduced by up to 25% (depending on the size of the group) for all citizens due to reduced car traffic, less congestion, and faster transportation throughout the city on average.

However, the authors underline that decreasing car use in cities hinges on giving people efficient travel alternatives, as well as local shops and services (so as to reduce demand for transport in the first place). Interventions such as congestion charges, tolls, and driving and parking controls can help discourage car use, but unless people have alternatives to pick from, and are informed as to the local costs of car use, we can’t reasonably expect them to give up the use of their cars. Some cities have tried simply banning some vehicles based on their license plate, such as Mexico City, but this backfired as residents purchased older, cheaper, and more polluting cars to get around the ban.

Not making any changes isn’t a viable option, either. They note that car production is fast increasing, and has actually outstripped population growth. In 2019, 80 million cars were produced, while the population increased by 78 million worldwide. Pollution is a big concern: globally, car manufacturing (including electric vehicles) contributes 4% of total carbon dioxide emissions. Energy use, be it petrol, diesel, or electricity, also generates pollution (right under our noses, in the case of combustion engines) and added costs. Material costs related to the construction and maintenance of infrastructure required by these cars, as well as time lost in traffic due to congestion, are also added costs most people don’t consider.

“Currently, much of the land in cities is dedicated to cars. If our goal is to have more liveable and sustainable cities, then we must take part of this land and allocate it to alternative modes of transportation: walking, cycling, and public transport,” says co-author Dr. Humberto González Ramírez from the Université Gustave Eiffel.

Such research is actually very important, as sustainable transportation is a key objective for many large cities as part of one of the UN’s Sustainable Development Goals. This model, the authors explain, can easily be adapted to other cities around the world, although it is particularly useful for locales where the majority of travel (>90%) is done by car, which is most common for cities in the US.

The paper “A paradox of traffic and extra cars in a city as a collective behaviour” has been published in the journal Royal Society Open Science.

Iran’s groundwater resources are rapidly depleting, and everyone should pay attention

People have been relying on groundwater resources for all their drinking and washing needs since time immemorial. But some seem to be depleting fast when faced with today’s levels of demand, a new paper reports, explaining more than three-quarters of Iran’s groundwater resources are being overexploited.

Image credits Igor Schubin.

Over 75% of Iran’s land is faced with “extreme groundwater overdraft”, the paper reports. This describes the state where the natural refill rate of an area’s groundwater deposits is lower than the rate at which people are emptying them. The paper was published by an international team of researchers led by members of Concordia University in Canada.
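
The bookkeeping behind “overdraft” is simple, and a minimal sketch with invented numbers makes it concrete: whenever withdrawals exceed natural recharge, the water stored in an aquifer falls year after year.

```python
# Sketch of the "overdraft" bookkeeping with invented numbers: storage falls
# whenever withdrawals exceed natural recharge.
recharge_km3_per_year = 4.0      # hypothetical natural refill rate of an aquifer
withdrawal_km3_per_year = 5.5    # hypothetical pumping rate
storage_km3 = 60.0               # hypothetical water currently stored

for year in range(2002, 2016):   # 14 years, 2002-2015
    storage_km3 += recharge_km3_per_year - withdrawal_km3_per_year

net_change = 14 * (recharge_km3_per_year - withdrawal_km3_per_year)
print(f"net change 2002-2015: {net_change:.1f} km3, storage left: {storage_km3:.1f} km3")
```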

Drying out

“The continuation of unsustainable groundwater management in Iran can lead to potentially irreversible impacts on land and the environment, threatening the country’s water, food, and socioeconomic security,” says Samaneh Ashraf, a former Horizon postdoctoral researcher now at the Université de Montréal, and co-author of the paper.

Mismanagement of these resources seems to be the biggest issue at play, the team explains. This exacerbates the obvious difficulties that a semi-arid country would have in securing water resources. Aquifers are further strained by inefficient agricultural practices, which drain them needlessly.

Without urgent action, the team notes, multiple, nationwide crises can arise when groundwater levels drop too low.

Iran has around 500 groundwater basins and sub-basins, and between 2002 and 2015, an estimated total of 74 km³ of water (roughly 74 trillion liters) was drained from them. This has increased overall soil salinity across Iran and promoted land sinking (land subsidence). The Salt Lake Basin, where the country’s capital of Tehran is located, is one of the most at-risk regions for land sinking.

This is quite worrying as the region, home to 15 million people, is already quite seismically active, and at risk of being hit by earthquakes.

Public data from the Iranian Ministry of Energy was used for the study.

“We wanted to quantify how much of Iran’s groundwater was depleted,” explains co-author Ali Nazemi, an assistant professor in the Department of Building, Civil, and Environmental Engineering at Concordia University. “Then we diagnosed why it was depleted. Was it driven by climate forces, by a lack of natural recharge, or because of unsustainable withdrawal?”

Agricultural use of water was the leading cause of aquifer depletion, they explain, with Iran’s west, southwest, and northeast regions being the most affected. These are agricultural areas where strategic crops like wheat and barley are grown. Consequentially, groundwater resources are most heavily depleted in these areas.

The number of registered wells for agricultural use has doubled in the last 15 years, they explain — from roughly 460,000 in 2002 to roughly 794,000 in 2015. Overall anthropogenic withdrawals of groundwater decreased in 25 of the country’s 30 basins over the same period, which suggests consumption is being concentrated in a few, overexploited aquifers.

Ground salinity levels are also rising across the country, as evidenced by soil electrical conductivity readings.

The national and local governments are not able to deal with this growing issue for a variety of reasons — including international sanctions, local corruption, and low trust among the population. However, the authors explain that both short- and long-term solutions are dearly needed in order to avoid these issues ballooning into huge crises.

“In the short term, the unregistered wells need to be shut down,” Nazemi says. “But longer term, Iran clearly needs an agricultural revolution. This requires a number of elements, including improving irrigation practices and adopting crop patterns that fit the country’s environment.”

Other countries would be wise to pay attention to what’s currently happening in Iran, Nazemi adds, and learn from their mistakes.

“Iran’s example clearly shows that we need to be careful how we manage our water because one bad decision can have a huge domino effect. And if the problem is ignored, it will easily get out of control,” he says. “It also illustrates the importance of environmental justice and stewardship. These are even more important when addressing the problem of climate change.”

The paper “Anthropogenic drought dominates groundwater depletion in Iran” (Samaneh Ashraf et al.) has been published in the journal Scientific Reports.

Google Earth’s new feature: a timelapse of the entire planet

What if you could take the entire planet, gather over 30 years of satellite data on it, and put it all together into a simple app that can even be used on your smartphone? Well… that’s exactly what Google recently unveiled. The new features for its Timelapse allow users to zoom in on any locations they choose, viewing more than three decades of imagery.

The world at our fingertips

It’s true that we now have the entire planet at our fingertips in more ways than one. Even some 20-30 years ago, most people would have had a hard time imagining this. The fact that you can use a common device most of us carry in our pockets and zoom in over any corner of the Earth and see how it evolved in the past few decades speaks a lot to how much technology and scientific observation have progressed.

You can browse your hometown, your favorite forest, a glacier, anything — in some areas, data is better than in others, but you can see a timelapse of every corner of the globe.

“In the biggest update to Google Earth since 2017, you can now see our planet in an entirely new dimension — time. With Timelapse in Google Earth, 24 million satellite photos from the past 37 years have been compiled into an interactive 4D experience. Now anyone can watch time unfold and witness nearly four decades of planetary change,” wrote Rebecca Moore, director of Google Earth, Earth Engine and outreach.

But the Google Timelapse feature also offers a sobering look at how much we are changing the planet.

Location after location, it’s the same story: the impact of mankind is changing the planet, whether directly (through deforestation, river management, building cities, etc), or indirectly (through climate change).

“Our planet has seen rapid environmental change in the past half-century — more than any other point in human history. Many of us have experienced these changes in our own communities,” Moore wrote.

More than just being eye candy (though it definitely is), Google’s project could help researchers interpret satellite data more easily, and could help citizen scientists find trends in their own communities.

Several recent studies suggest that timelapses are actually becoming useful tools for research, and the data could come in handy particularly in areas where local monitoring data is sparse.

To put this all together, Google used data from the U.S. Geological Survey/NASA Landsat satellites as well as the EU’s Copernicus Program and its Sentinel series of satellites. They also worked with Carnegie Mellon University’s CREATE Lab, which helped to process and display the approximately 10 quadrillion pixels in this database.

It took “more than two million processing hours across thousands of machines in Google Cloud to compile 20 petabytes of satellite imagery into a single 4.4 terapixel-sized video mosaic,” Moore explains — a process that ran on 100% renewable energy, in line with Google’s objectives to cut its own emissions.

Google has also compiled a selection of some of the most stunning timelapses, and the full Timelapse engine can be explored in Google Earth.

CO2 concentrations quietly reached the highest daily values in recorded history

The Earth has sent us another warning that the climate crisis is here and we’d best do something about it. The highest CO2 level on record was reached on April 3, with a shocking daily average of 421.21 ppm. On April 2, the concentration was 416.97 ppm.

Hourly (red circles) and Daily (yellow circles) averaged CO2 values from Mauna Loa, Hawaii for the last 31 days. Credits: GML NOAA.

Why this matters

Mauna Loa Observatory is located on the Big Island of Hawaii and has been in operation since the 1950s, making it the oldest site still measuring CO2 concentrations. It sits far from continental landmasses and big cities, nearly 3,400 m above sea level. This means almost nothing contaminates the data, apart from occasional spurious readings from the volcano itself, which scientists can easily filter out.

If this observatory, planted in the middle of the Pacific Ocean far from any disturbance, has detected the highest CO2 levels on record without any interference from its volcano, and after the ‘decline’ in human activity in 2020, then we are in trouble.

Keeling curve

It’s important to note that one or two extreme values don’t necessarily mean much on their own. There are always exceptions. But these large CO2 values are not exceptions — quite the contrary: they are part of a well-observed phenomenon of CO2 rising in the atmosphere. Don’t believe me? Have a look at this chart:

In 1976, a team led by Charles Keeling published the first analysis of the evolution of CO2 concentration from the Mauna Loa observatory. The curve later became known as the Keeling curve.

The graph shows the increase in carbon dioxide year by year, and the recent data show that the rise is even steeper than it was in the ’70s. Within the increasing trend, you can see annual oscillations; these are the result of a natural cycle in the climate system. Leaves fall during autumn in the Northern Hemisphere, so vegetation is less successful at drawing CO2 out of the air.
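
As a sketch of how those two signals, the long-term rise and the annual oscillation, can be pulled apart, here is a minimal example using synthetic monthly values rather than the actual Mauna Loa record.

```python
# Sketch: decompose a Keeling-like series into trend + seasonal cycle.
# The monthly values are synthetic; the real record is published by NOAA/Scripps.
import numpy as np

months = np.arange(12 * 40)                       # 40 years of monthly data
trend = 340 + 0.17 * months                       # slow rise in ppm
seasonal = 3 * np.sin(2 * np.pi * months / 12)    # Northern-Hemisphere vegetation cycle
co2 = trend + seasonal + np.random.default_rng(2).normal(0, 0.3, months.size)

fit = np.polyfit(months, co2, 1)
slope_ppm_per_year = 12 * fit[0]
residual = co2 - np.polyval(fit, months)
peak_to_peak = 2 * np.sqrt(2) * residual.std()    # amplitude estimate for a sine-like cycle
print(f"trend: ~{slope_ppm_per_year:.1f} ppm/year, "
      f"seasonal swing: ~{peak_to_peak:.1f} ppm peak-to-peak")
```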

CO2 effects

Carbon dioxide is what’s called a greenhouse gas: it ‘stores’ energy that then heats up the air.

If carbon dioxide increases, the temperature of the atmosphere (and subsequently, the entire planet’s surface) increases. The graph below shows how tightly coupled temperature and CO2 concentrations are. But that is a simplified picture: the Earth is a complex system, so the temperature response is shaped by the other greenhouse gases, other interactions, and feedbacks happening at the same time.

Levels of carbon dioxide in the atmosphere have corresponded closely with temperature over the past 800,000 years. Although the temperature changes were touched off by variations in Earth’s orbit, the increased global temperatures released CO2 into the atmosphere, which in turn warmed the Earth. Antarctic ice-core data show the long-term correlation until about 1900. (Graphs by Robert Simmon, using data from Lüthi et al., 2008, and Jouzel et al., 2007.)

Granted, the current record is a single day’s data point on the curve, and climatology requires years of observation. However, there is no doubt this is a scary alert from the climate system, especially considering recent events: one of the hottest years on record, living biomass now outweighed by human-made mass, a shocking hurricane season, record deforestation, and record fires, all within a short period of time.

Ultimately, it’s also important to put things into historical context. Looking at the charts above, things seem normal. But they don’t seem normal at all when we include the past few centuries.

The main cause of this abrupt rise in concentrations is obviously human. It was obvious to Keeling and Hansen back then, and it seems ridiculous to ignore it now.

Study reveals the climate footprint of the food sector. And it’s a lot

From its production to its consumption, the food we eat is one of the biggest contributors to the climate crisis, a new study has shown. Researchers in Europe have found that more than a third (34%) of all man-made greenhouse gas emissions are generated by food systems – mainly because of deforestation, fertilizer use, distribution, and waste.

Beef is one of the least eco-friendly foods. Image credit: Flickr / Oli

It’s not just what you put in your mouth. Food has to be farmed, harvested or caught, transported, processed, packaged, distributed, and cooked, and the residuals have to be disposed of. Each of these steps causes emissions of anthropogenic greenhouse gases. Inputs such as fertilizers need to be produced and made available at the right time and location, causing extra emissions.

Several reports in the past have quantified the climate footprint of food, but the authors behind this new research led by the European Commission’s Joint Research Centre argue theirs is the first to cover all countries and sectors, providing a comprehensive picture of the emissions on the world’s plates. They’ve put it all in a database named EDGAR-FOOD, the first global food emission inventory ever produced.

The study covers greenhouse gas emissions between 1990 and 2015. The researchers noted a decoupling of population growth and food-related emissions over that period, with emissions growing more slowly than the population. Still, they found large variations across the world, with some regions seeing big increases in emissions due to domestic demand and exports.

More than 70% of food system emissions come from agriculture and associated land use, with 32% of the total stemming from land-use change, including deforestation and soil degradation. China, Indonesia, the United States, Brazil, the European Union, and India were found to be the six top-emitting economies, accounting for more than 50% of the total emissions of the food system.

“Unlike overall GHG emissions, the food production sector is not overwhelmingly dominated by CO2 emissions from fossil fuels; land-based emissions are particularly relevant. Nevertheless, in line with the ongoing socio-economic development trends, food emissions are being increasingly determined by energy use, industrial activities, and waste management,” the researchers wrote.

Looking at each greenhouse gas, the study found about half of the total emissions were carbon dioxide – mainly from land use due to deforestation and from energy used for packaging and transportation. A further third was methane, released by livestock through enteric fermentation, with the remainder corresponding to nitrous oxide from fertilizers.

The researchers also highlighted the growing volume of emissions from the increased energy use in food production, especially in the developing world – where the use of mechanization and pesticides had matched or even outpaced advanced economies. Emissions from food retail are also on the rise because of the larger demand for refrigeration to prevent food from spoiling.

It’s a massive challenge, and addressing it means drastically reducing food system emissions, especially in the supply chain, while still enabling people’s access to healthier diets, the researchers argued. A study last year said that if food system emissions aren’t addressed, they would push Earth above the 1.5ºC warming threshold by 2050 on their own.

“Food systems are in need of transformation,” lead researcher Adrian Leip told Forbes. “Mitigation by reducing emissions from deforestation and on the farm is already very much in the focus of many mitigation policies. But our data show also an increasing significance of emissions from energy use, mainly post-farm gate, which shows the intricate link between the land and the energy systems.”

The study was published in the journal Nature Food.

More than 900 million tons of food are thrown away every year, a UN report showed

Between food wasted in homes, restaurants, and shops, 17% of all food is dumped every year, amounting to over 900 million tons, according to a UN report. But the real scale of the problem could be even larger, as some food is also lost on farms and in supply chains – indicating that, overall, a third of the food produced never ends up on our plates.

Image credit: Flickr / Gordon Joly

The average person throws away 74 kg (163 lb) of food every year

Food waste hinders efforts to help the billions of people who are either hungry or can’t afford a healthy diet. It also affects the environment, as food waste and loss are estimated to cause between 8% and 10% of the greenhouse gas emissions that drive the climate emergency. If it were a country, food waste would have the third-highest emissions.

The UN Environment Programme (UNEP) and its partner organization WRAP have now published the Food Waste Index Report, which concluded that around 931 million tons of food waste were generated in 2019, the year with the latest figures available. Of the total amount, 61% came from households, 26% from food service, and 13% from retail. In other words, everyone is to blame — especially consumers.

“Reducing food waste would cut greenhouse gas emissions, slow the destruction of nature through land conversion and pollution, enhance the availability of food and thus reduce hunger and save money,” Inger Andersen, UNEP head, said in a statement. “Businesses, governments, and citizens around the world have to do their part.”

Household per capita food waste generation was found to be broadly similar across country income groups, suggesting the problem is equally relevant in high, upper-middle, and lower-middle-income countries. The food discarded in homes was estimated to be 74 kilograms per person per year on average around the world.
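
The per-person figure follows from the report’s totals with simple arithmetic; the world population used below is an approximation.

```python
# Rough consistency check of the per-capita figure (approximate world population assumed).
total_waste_tonnes = 931_000_000       # UNEP Food Waste Index estimate for 2019
household_share = 0.61                 # share of waste generated in households
world_population = 7_700_000_000       # approximate population in 2019

per_person_kg = total_waste_tonnes * household_share * 1000 / world_population
print(f"household food waste: ~{per_person_kg:.0f} kg per person per year")
```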

Previous estimates of consumer food waste significantly underestimated its scale, the UN argued. While the data doesn’t permit a robust comparison across time, food waste at the consumer level (meaning households and food service) appears to be more than twice the previous estimate from the UN Food and Agriculture Organization (FAO).

The report was published in support of global efforts to meet the UN’s Sustainable Development Goal (SDG) 12.3, which aims to halve global food waste at the retail and consumer levels and to reduce food losses along production and supply chains by 2030. With nine years to go, strong action is needed by governments, international organizations, and businesses, Marcus Gover, CEO of WRAP, told the BBC.

The researchers said nobody buys food with the intention of throwing it away, and that the small amount discarded every day might seem insignificant to many. That’s why increasing awareness of food waste is essential, they argued. Government and corporate action are also necessary, but individual action plays a particularly important role.

Almost 700 million people were affected by hunger in 2019, a number expected to rise amid the coronavirus pandemic, and three billion couldn’t afford a healthy diet. The figures should urge consumers to take action, the UN argued. Countries can also increase ambition by adding food waste actions in their climate change pledges, known as NDCs.

Cities are literally starting to sink under their own weight

Cities have become so big and crowded that they are gradually collapsing under the weight of their own development, according to a new study.

The researchers specifically looked at San Francisco as a case study and found that the city might have already sunk by 8 centimeters (or 3.1 inches) – something that they argue is likely happening in other cities too.

San Francisco. Image credit: Flickr / Peter Miller

The finding is especially concerning as cities (especially coastal cities) are already exposed to sea level rise because of climate change. Sea level has already risen between 21 and 24 centimeters (or 8-9 inches) since 1880 and the rate is accelerating, increasing the risk of floods, extreme weather events, and coastal erosion.

“As global populations move disproportionately toward the coasts, this additional subsidence in combination with expected sea-level rise may exacerbate risk associated with inundation,” Geophysicist Tom Parsons, from the United States Geological Survey (USGS) agency, wrote in his recent paper.

At the same time, a steady migration of Earth’s population from rural to urban centers is occurring in virtually every part of the world. Currently, about 50% of Earth’s population lives in urban settings, and by 2050 it is projected that number will grow to 70%, according to the UN. Along with people, urbanization has caused a redistribution of mass into concentrated areas. In other words, cities are constantly becoming heavier, and it’s not just because of the people moving in.

Nearly everything necessary to sustain a city’s population must be imported. A global network of suppliers ships food, fuel, water, cars, mass transit, pavement, pipes, concrete, and steel from great distances. Every conceivable object that people want or need is brought to and stored within relatively small areas. Researchers wondered whether all this concentrated weight wouldn’t have an effect. Lo and behold, it does.

For his research, Parsons used San Francisco as a case study to understand why and how much cities are sinking. The San Francisco Bay region has over 7.7 million inhabitants and is the cultural, commercial, and financial center of Northern California. He found that the city has sunk by 80 millimeters (3.1 inches), which is its level of subsidence (the sudden or gradual sinking of the ground’s surface).

Parsons calculated the weight of the Bay Area at 1.6 trillion kilograms, or roughly 8.7 million Boeing 747s, taking into account all the buildings in the city and their contents. This would be enough to bend the lithosphere (the rigid outer part of the Earth, consisting of the crust and upper mantle), causing it to sink, according to Parsons.
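As a quick sanity check on that comparison, here is the arithmetic it implies (a sketch using only the two figures above; which 747 weight the comparison assumes is not stated in the study):

```python
# Sanity check: what per-plane mass does "1.6 trillion kg = about 8.7 million 747s" imply?
total_mass_kg = 1.6e12   # Parsons' estimate of the Bay Area's built mass
jumbo_jets = 8.7e6       # number of Boeing 747s used in the comparison

mass_per_jet_tonnes = total_mass_kg / jumbo_jets / 1000
print(f"Implied mass per 747: about {mass_per_jet_tonnes:.0f} tonnes")
# ~184 tonnes, roughly a 747's empty weight rather than its fully loaded takeoff weight.
```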

The study didn’t consider the weight of things outside buildings, such as transport infrastructure, vehicles, or people. This means the city could have actually sunk more than the estimated 80 millimeters. For the researcher, it’s a clear sign that the same type of sinking is likely to be happening in other parts of the world, depending on the geography of each city.

“The specific results found for the San Francisco Bay Area are likely to apply to any major urban centre, though with varying importance,” Parsons wrote in the study. “Anthropogenic loading effects at tectonically active continental margins are likely greater than more stable continental interiors where the lithosphere tends to be thicker and more rigid.”

While other causes of sinking also have to be taken into account such as tectonic plate shifting and groundwater pumping, these findings are significant. And they could even be improved further, using satellite photos to better analyze the Earth’s surface and predict where likely flood zones might occur, Parsons argued.

The study was published in the journal AGU Advances.

Global ice loss rate increased by over 65% in the last two decades

New research reports that the planet is losing ice at an ever-faster rate. This is the first time satellite data has been used to survey global ice loss, according to the authors, who found that the rate has increased by over 50% in the last three decades and by around 65% over the last two.

Furthermore, the authors estimate that the planet lost around 28 trillion tons of ice between 1994 and 2017, roughly enough to form an ice sheet covering the entire UK to a depth of 100 meters, and the rate of melt is increasing. If left unchecked, this will lead to massive damage as coastal communities and natural habitats flood.

No-more-ice Age

“Although every region we studied lost ice, losses from the Antarctic and Greenland ice sheets have accelerated the most. The ice sheets are now following the worst-case climate warming scenarios set out by the Intergovernmental Panel on Climate Change,” says lead author Dr. Thomas Slater, a Research Fellow at Leeds’ Centre for Polar Observation and Modelling.

“Sea-level rise on this scale will have very serious impacts on coastal communities this century.”

Led by members of the University of Leeds, the team reports a 65% increase in the rate of melt over the 23 years it investigated, driven mainly by losses in Antarctica and Greenland. In raw numbers, we went from 0.8 trillion tons of ice melting per year in the 1990s to 1.3 trillion tons per year by 2017.

Although we had a better idea than ever before of how individual elements of the Earth’s ice system were faring, we still lacked data on how the planet as a whole was evolving. This study, says Dr. Slater, is the first to examine all of the planet’s ice at the same time using satellite data. It covers 215,000 mountain glaciers, the ice sheets of Greenland and Antarctica, the ice shelves around Antarctica, and the sea ice drifting in the Arctic and Southern Oceans.

The faster rates of melt are being caused by warmer air and water: the atmosphere and oceans have warmed by 0.26°C and 0.12°C per decade, respectively, since the 1980s. Atmospheric melting was the prime offender (responsible for around 68% of the extra melting), with the remaining 32% coming down to oceanic melting. The geographic distribution of ice on the planet explains the higher share of atmospheric melting, as not all ice comes into contact with the ocean.

All the elements investigated in the study lost ice, but the largest losses were in Arctic sea ice (7.6 trillion tons) and Antarctic ice shelves (6.5 trillion tons). Mountain glaciers lost a total of 6.1 trillion tons of ice, the Greenland ice sheet lost 3.8 trillion tons, while the Antarctic ice sheet lost some 2.5 trillion tons of ice.

This contributed around 35 millimeters of global sea level rise. The team explains that every centimeter of sea level rise puts an estimated one million people at risk of being displaced by water.
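Those figures hang together: only the grounded ice (the two ice sheets and the mountain glaciers) raises sea level directly. Here is a rough check using the numbers above; the conversion factor of roughly 360 gigatonnes of land ice per millimeter of global sea level is an outside assumption, not taken from the study:

```python
# Rough check: does the grounded-ice loss above account for the ~35 mm of sea level rise quoted?
# Assumed conversion (not from the study): ~360 gigatonnes of land ice ~ 1 mm of global sea level.
greenland_tt = 3.8          # trillion tonnes lost, 1994-2017
antarctic_sheet_tt = 2.5
mountain_glaciers_tt = 6.1

grounded_loss_gt = (greenland_tt + antarctic_sheet_tt + mountain_glaciers_tt) * 1000  # gigatonnes
print(f"Estimated sea level contribution: {grounded_loss_gt / 360:.0f} mm")
# ~34 mm, close to the ~35 mm quoted. Sea ice and floating shelves add little directly,
# since they already displace their own weight of water.
```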

“Sea ice loss doesn’t contribute directly to sea level rise but it does have an indirect influence. One of the key roles of Arctic sea ice is to reflect solar radiation back into space which helps keep the Arctic cool,” says Dr. Isobel Lawrence, a Research Fellow at Leeds’ Centre for Polar Observation and Modelling.

“As the sea ice shrinks, more solar energy is being absorbed by the oceans and atmosphere, causing the Arctic to warm faster than anywhere else on the planet. Not only is this speeding up sea ice melt, it’s also exacerbating the melting of glaciers and ice sheets which causes sea levels to rise.”

Mountain glaciers contributed around 25% of the sea level rise seen over this period, despite storing only 1% of the world’s ice. Their melting is especially worrying, as mountain glaciers are essential sources of fresh water for communities around the world.

It is estimated that for every centimetre of sea level rise, approximately a million people are in danger of being displaced from low-lying homelands.

The paper “Review article: Earth’s ice imbalance” has been published in the journal The Cryosphere.

New approach to lab-grown meat creates more realistic, more customizable steaks

A new study details how to create lab-grown meat that has a more natural taste and texture. The process will also allow more control over the structure of the meat, so consumers will be able to pick the exact amount of fat content or marbling they want.

Image via Pixabay.

Steakhouses today may ask customers how they’d like their meat to be cooked, but a new paper from McMaster University could mean they’ll soon ask how we’d like it “tuned”. Their paper describes how a more natural feeling and tasting type of lab-grown meat can be produced. According to the authors, this will provide a more “real meat” experience and allow people to have as much fat or marbling on their cut of meat as they want.

Sheets to slabs

“We are creating slabs of meat,” co-author Ravi Selvaganapathy says in a media release. “Consumers will be able to buy meat with whatever percentage of fat they like – just like they do with milk.”

The authors, both from McMaster’s School of Biomedical Engineering, developed a new technique to create lab-grown meat. It involves stacking thin sheets of cultivated muscle and fat tissues, then merging them together. It’s similar to the approach we use to grow human tissue for transplants, the authors explain.

Each of these sheets is as thin as a sheet of paper, and they’re made from cells first grown in a lab culture. They naturally bind to one another while the cells are alive, says Selvaganapathy, and this process helps impart the improved texture to the meat. The team tested their approach with cells harvested from lab mice. They didn’t eat that sample, but they did eventually grow, cook, and taste one made from rabbit cells.

“It felt and tasted just like meat,” Selvaganapathy reports.

Although their experiments didn’t include these cell types, the team is confident that beef, pork, or chicken could be grown using this approach in the future. The stacking-sheets approach is also easily scaled up for industrial production, they add.

The global demand for meat is putting a heavy strain on nature, as it takes a lot of feed, water, and land to raise livestock, which also produce large amounts of methane, a potent greenhouse gas. Factory farms also need to feed their animals antibiotics constantly to stave off disease, which is helping bacteria develop resistance to drugs. Lab-grown meat could help meet this demand much more cleanly and efficiently.

“Meat production right now is not sustainable,” Selvaganapathy contends. “There has to be an alternative way of creating meat.”

The McMaster team is currently working on a start-up company that can produce meat using this technique and sell it commercially.

The paper “Engineering Murine Adipocytes and Skeletal Muscle Cells in Meat-like Constructs Using Self-Assembled Layer-by-Layer Biofabrication: A Platform for Development of Cultivated Meat” has been published in the journal Cells Tissues Organs.

Parrots are facing extinction, and only policymakers can save them

Parrots may not be long for this world, and it’s on us. New research finds that parrot species around the world are threatened with extinction due to widespread habitat destruction. Current protected areas can’t mitigate these losses, the team adds.

Image credits Will Zhang.

Pressure from human activity is putting parrot species at risk of extinction all around the world. As such, the future of these birds is firmly in the hands of policymakers in Australia and other areas where parrots are endemic, the authors explain. Agriculture and logging are the biggest culprits the team identified, but other events (such as last year’s Australian wildfires) are also contributing to the problem.

Parrotn’t?

“In a previous global evaluation of parrots with scientists from BirdLife International we showed that they are among the most threatened bird orders, with higher extinction risk than other comparable bird groups,” co-author Dr. George Olah, from the ANU Fenner School of Environment and Society, said.

As their current ranges suffer significant habitat destruction, parrot species are struggling to adapt all over the world. The areas currently designated as protected are much too small to serve as an alternative home for them.

The study is a product of a collaboration between parrot ecologists at The Australian National University (ANU) and spatial ecologists from the National University of Córdoba, Argentina. It looked at and compared the conservation status of parrots in different areas of the planet in order to come up with a wide-scale picture of the threats they face.

Over half of the world’s critically endangered species of parrot live in Australia, Asia, and the Pacific area, the team explains. Apart from habitat destruction, wildlife trade is further pushing parrot species towards extinction here.

The team explains that temperate forests in Australia (which house many species of parrots) were already showing signs of heavy degradation in 2020 due to human-modified landscapes. They project that this trend will continue and worsen the health of these ecosystems by 2050.

“We predicted that agricultural expansion will have a further negative effect on the conservation status of parrots, pushing many of their species to the edge of extinction in the near future,” co-author Dr. Javier Nori said.

The team identified four hotspots of parrot biodiversity, two in the Neotropics and two in Oceania, noting that each faces “different degrees of threat in regard to current habitat loss and agricultural trends”. They add that the findings “suggest the future of the group is subject to policymaking in specific regions, especially in the northeastern Andes and the Atlantic Forest”.

Deforestation, fueled by the need for arable land, remains a dire threat for parrots. These birds are “highly dependent on forests,” the authors explain, and could be pushed “to the edge of extinction in the near future” as more land is cleared. Policymakers can help protect the birds’ habitat and expand current protected areas, or even set up new ones in the hotspots.

The paper “Global trends of habitat destruction and consequences for parrot conservation” has been published in Global Change Biology.

What made the development of the COVID-19 vaccine unique in history

It’s fair to say that 2020 revolved around the pandemic. But 2021 is shaping up to be all about our answer to the virus, in the form of a vaccine. At least one vaccine has been approved for use in the EU and, although there are many candidates in the works, none of them was developed on what you’d call a ‘traditional’ timeline. We’ve had to wait a whole year, but that’s still record-breaking speed for this class of substances.

Army Spc. Angel Laureano holds a vial of the COVID-19 vaccine at Walter Reed National Military Medical Center, Bethesda, Md. Image credits Lisa Ferdinando / Department of Defense.

So let’s take a look at this pace of development. How exactly was it that the vaccine got developed so quickly? Does it say anything about how reliable the compound is? Can we trust it?

How vaccines are made

First of all, we should take a brief look at how vaccines are produced, so we have a good idea of the process we’ll be discussing today.

A vaccine basically works by giving our bodies the opportunity to see and study a viral threat in a controlled manner. This experience lets the body develop its own biochemical weapons against the virus (which are much, much better than our pharmacological tools for the job). In broad strokes, there are four ways to produce such a substance:

  • Viral attenuation. This involves altering a virus’s structure or genes in such a way that it becomes hard for it to replicate. It relies on a technique called cell culture adaptation, which involves re-adapting the virus to grow in specialized cells inside the lab rather than in the cells it would normally encounter in our body. While impaired, such viruses still have a small chance of replicating inside the body, which is why they’re referred to as ‘live’, ‘weakened’, or ‘attenuated’. The measles, mumps, rubella, and varicella vaccines are made this way.
  • Destruction of the viral genes. This is safer than the previous approach, in the sense that the virus’s genes are completely destroyed so it can’t replicate at all; these are referred to as ‘killed’ vaccines. The virus’s genetic material is exposed to the chemical formaldehyde, which destroys it permanently. The polio vaccine is produced this way.
  • Physical breakdown of the pathogen. In this case, the virus or bacterium in question is physically broken apart, and certain elements of it are then used to produce the vaccine. This is also a very safe approach as there isn’t any genetic material in the vaccine, so there’s no way for an infection to set in. Still, from studying the bits that do make it into the final vaccine, our immune system learns how to spot and then attack the threat. The vaccine against hepatitis B is produced this way.
  • Toxoid vaccines. For pathogens that don’t directly make their host sick, but use toxins (weaponized proteins) to do it, we have toxoid vaccines. The bacteria that cause diphtheria and tetanus work this way. To make a vaccine against them, the toxins are isolated, purified, and then chemically neutralized. Toxoid vaccines also pose no risk of infection, as they carry no genetic material.

The process of developing a vaccine typically takes 10 to 15 years. Obviously, it used to take a lot longer, but we’re getting better at it thanks to modern know-how and technology. Apart from the COVID-19 one, the fastest-developed vaccine in history (from the time the virus was isolated to the finished product) was the mumps vaccine in 1967, which took four years. In contrast, the influenza virus was first isolated in 1933, but an effective vaccine was only licensed in 1945.

Infographic: Influenza Milestones 1917-2009
An infographic depicting the ravages of influenza and our steps to fight it over the last century.

So that’s our reference timeframe for developing a vaccine. During this time, researchers have to isolate the threat, decide on the best approach for turning it into a vaccine and then carry it out, and test their compound at every step of the way. Human trials tend to be the most visible part of vaccine development, but they’re the tail end of the process; extremely few candidates make it this far.

Those who wonder why we didn’t have it sooner should keep in mind that this is the fastest-ever developed vaccine. It might seem like a long time when you’re stuck inside waiting for it, but in relative terms, we got it blazingly fast.

Wasn’t that too fast?

Here, then, comes the other side of the coin. I sometimes get asked about the safety of the vaccine after explaining that it was developed at breakneck speed: surely, then, some corners had to be cut? Well, not really. All the data we have available for the two most promising vaccines (the Pfizer and Moderna ones) show that both are highly effective and quite safe. While the development process was streamlined as much as possible, critical steps such as testing for side effects were not skipped or cut short. So, then, how did we pull it off?

Well, governments around the world did step up and do their best to help bring a vaccine to completion (such as the US’ Operation Warp Speed). The fact that the world was facing a scary, ruthless, and highly-contagious virus probably also helped light a fire under all kinds of people involved in the process, too.

Researchers as well as the public understood the need for such a vaccine, so volunteers stepped up quickly to help test it. Each vaccine candidate goes through three steps, or ‘phases’, of human trials. These are meant to determine if they are safe to use, and how best to do so. Phase 1 checks for side effects using a small number of participants, generally 30 at most. It starts with some of them receiving a small dosage, which increases in subsequent groups if everything goes well. Phase 2 determines what quantity of the vaccine to use for best effects, and Phase 3 compares its safety and efficacy to the current standard treatment. Phase 3 also typically employs placebos for more accurate results.

A graph showing drug development phases and timelines for various approaches — data visualization by Wikimedia user Kernsters after “Faster Evaluation of Vital Drugs” published in Scientific American.

The latter two steps use more participants: between 25 and 100 for Phase 2 and several hundred to several thousand for Phase 3. Moderna’s Phase 3 trial ran with “over 30,000 participants in the U.S”, while Pfizer’s trial included some 43,500 people, which is quite sizeable. The trials tracked the candidate vaccines’ efficacy over time along with any side effects. Both found these to be mild to moderate, “short-lived”, and “generally resolved within two days”. Apart from a low incidence of fever in the Pfizer trial (16% for younger and 11% for older volunteers), these side effects were typically mild rather than worrying.
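Pulling those details together, here is a compact summary of the three phases as described above (a sketch; the participant ranges are the rough figures quoted in this article, not formal regulatory definitions):

```python
# Compact summary of the clinical trial phases described above.
# Participant ranges are the rough figures from this article, not regulatory definitions.
trial_phases = [
    ("Phase 1", "up to ~30 volunteers",
     "check for side effects, starting with small doses that escalate between groups"),
    ("Phase 2", "roughly 25-100 volunteers",
     "work out what dosage gives the best effect"),
    ("Phase 3", "hundreds to tens of thousands of volunteers",
     "compare safety and efficacy against the current standard, typically with placebos"),
]

for name, size, goal in trial_phases:
    print(f"{name}: {size} -- {goal}")
```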

Data from current vaccination campaigns in Europe and the U.S. support these results, with relatively few cases of severe reactions to the vaccine. 

Well, if they didn’t cut any corners on safety, how did they pull it off so fast? By far the biggest deciding factor was financial.

Money matters

Image via Pixabay.

We like to pick on Big Pharma for profiteering from people’s misfortunes. To an extent I definitely agree and join in on that — nobody should have to pay to avoid suffering, disease, disability, or death. It’s revolting when profits become more important than the lives this industry should protect.

But at the same time, we have to face the truth. No matter how well-intentioned a drug company is, it can only function as long as it can pay for it. It has to play within the rules of the systems that are already set in place, and for now, that means they need to turn a profit or they’re toast.

Historically speaking, vaccines have not been profitable for either the pharmaceutical industry or medical professionals. A study published in 2009 found that “the variable costs of vaccine administration exceeded reimbursement from some insurers and health plans” for medical personnel, meaning that your doctor is probably losing money on every vaccine they administer. A darker way to look at it is that vaccines cost the healthcare sector twice: once in the cost of developing, producing, and administering them, and again in the profits lost over the long term from preventing disease.

So as callous as it sounds, it does very much come down to money. Vaccine development is expensive. For every candidate that reaches human trials, countless others were scrapped along the way (after heavy expense). Furthermore, vaccines are a risky product. Once a company has developed a safe, effective vaccine, it needs further investment to ramp up production capacity and storage sites (which often need to be refrigerated, and so incur high running costs). The doses need to be stored until needed, which could happen tomorrow or 20 years from now. They might even go bad before they’re needed.

All of this adds up to make vaccines, in general, a very expensive and risky investment for producers. In the case of the COVID-19 vaccine, one thing that helped tremendously was that governments around the world came to shoulder some of that risk. In the US, for example (though not the only country to do so), the government committed huge sums of money to guarantee pharmaceutical companies that any successful vaccine candidate would be bankrolled for production. Contracts were also signed ahead of time for millions of doses.

Keep that in mind: without even knowing whether a vaccine would work, governments guaranteed that they would pay for its production and buy a pre-agreed (and significant) quantity of doses. This essentially removed the risk from the equation: producers knew ahead of time that their vaccine would be paid for, and the minimum number of doses they would sell. If you’re familiar with the whole Steam vs. Epic Games situation, you know how attractive this business model can be from the producers’ side.

Assured that they wouldn’t bankrupt themselves, pharmaceutical companies started work on vaccines extremely quickly. In March, five days after the WHO declared a global pandemic, Moderna was already starting safety trials for its vaccine candidate. In the end, both Moderna and Pfizer used this opportunity to bring to market a previously unproven vaccine production method: mRNA vaccination.

Instead of relying on traditional vaccination approaches (due to time constraints and the associated risk of using live viruses in the case of COVID-19), an mRNA vaccine uses messenger RNA (the counterpart of DNA) to teach our bodies about the virus. Researchers first isolated the part of the viral genome that encodes its spike proteins — these are the ‘keys’ the virus uses to enter our cells — and copied it into a messenger RNA strand. Our cells’ own molecular machinery then turns this into finished spike proteins, and from here on it works like any other vaccine.
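To make the “copying into a messenger RNA strand” step concrete, here is a toy illustration of transcription, the step in which a DNA template is read off into mRNA. The sequence below is invented purely for illustration; it is not the spike gene, and real vaccine design involves far more than this:

```python
# Toy illustration of transcription: reading a DNA template strand into messenger RNA.
# The sequence is invented for illustration only -- it is NOT the SARS-CoV-2 spike gene.
DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}  # DNA base -> complementary mRNA base

def transcribe(template_strand: str) -> str:
    """Return the mRNA strand complementary to a DNA template strand."""
    return "".join(DNA_TO_MRNA[base] for base in template_strand)

template = "TACACGGGTATT"    # made-up DNA fragment
print(transcribe(template))  # -> AUGUGCCCAUAA (begins with the AUG start codon)
```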

Its biggest advantage is also its biggest disadvantage: RNA molecules are unstable inside our bodies. This makes delivering the vaccine much more complicated (doses need to be kept at very low temperatures to prevent the mRNA from breaking down), but it also means that any side effects set in and are overcome quickly.

All this wouldn’t have happened if governments (and thus, society) hadn’t taken the risk off the hands of those developing the vaccine. It’s a good reminder that while the chase for profits can damage the greater good, companies have no option but to ensure they remain profitable — sometimes, making sure they have the money they need is to everyone’s benefit. It’s also a reminder that we have the ability to solve many of the world’s problems if only we were to shoulder the financial cost.


‘Extinction: The Facts’: Attenborough’s new documentary is surprisingly radical

Photograph of Sir David Attenborough seated at the Great Barrier Reef, taken for his Great Barrier Reef series. Credit: 2015, Wikimedia Commons.


We have learned so much about nature from David Attenborough’s documentaries over the past seven decades. In a new BBC film he lays bare just how perilous the state of that nature really is, why this matters for everyone who shares this planet, and what needs to change.

This film is radical. Surprisingly radical. I have written in the past about my growing frustration with Attenborough documentaries continuing, decade after decade, to depict nature as untouched by any mark of humans. I felt this might be contributing to unhelpful complacency about how much “wild” was really left.

“Extinction: The Facts” is a significant departure. As one of the programme’s talking heads, I helped reveal the honest truth: in most places, remaining natural habitats are squeezed between intensive agriculture and urban sprawl.

The film starts with a bleak interview with James Mwenda, the keeper of the world’s last two northern white rhinos, a mother and daughter pair. “When Najin passes away”, says Mwenda, “she will leave the daughter alone forever … Their plight awaits 1 million more species”.

This sequence has a real emotional kick. However, the film makes clear that extinction is about so much more than the loss of large familiar mammals.

“Everything is joined up, from a single pond to a whole tropical rainforest,” says Kathy Willis, professor of biodiversity at the University of Oxford. “We tend to think we are somehow outside of that system. But we are part of it, and totally reliant upon it.” The film goes on to explain the impacts of biodiversity loss on our soil functioning (with a star turn from below-ground beasties breaking down leaf litter), the role of insects in pollinating our crops, and how losing trees and wetlands can contribute to landslides and floods.

The documentary features Najin and Fatu, the last two northern white rhinos (pictured here with former head caregiver Mohammed Doyo). Dai Kurokawa / EPA

The potential link between the drivers of biodiversity loss and emerging diseases is also explored. The wildlife trade brings thousands of stressed animals into close contact, providing the perfect opportunity for viruses to jump between species. At the same time, removing large predators results in an increased abundance of rodents and bats, which are more likely to carry dangerous viruses. “We’ve been changing biodiversity in critical ways which made [the pandemic] more likely to happen”, says Peter Daszak of EcoHealth Alliance.

In footage from the 1992 Earth Summit in Rio, then 12-year-old Severn Suzuki addresses the largest UN meeting ever convened. “We are a group of 12 and 13-year-olds come to tell you adults that you must change your ways”. The parallels with Greta Thunberg’s recent high-profile speech to the UN serve to highlight how little progress has been made.

So if biodiversity loss is so obviously happening, and so obviously a bad thing for the future of humanity, why have we failed to act and what needs to be done?

Firstly, the film makes it clear that a key ultimate driver is consumption in rich countries. Given that the average Brit consumes more than four times the resources of the average Indian, reducing consumption in places like the UK is vital. This need not be painful. As the eminent Cambridge economist Partha Dasgupta says, “40 years ago people in the UK consumed a good deal less. But there is no evidence that we were unhappier then”. The film starkly highlights what we are losing in exchange for out-of-season food, fast fashion and cheap poultry.

Secondly, having strong environmental standards for things produced in the UK (important though that is) is not enough. We also need to consider where the products we buy and the food we eat come from – if not, people in countries like the UK are simply offshoring environmental problems for others to deal with.

Finally, the film touches on the need to make us pay the true cost of the environmental damage we do. The idea that businesses should not be able to degrade our environment for free is far from new. However, despite some progress with policies like the UK’s landfill tax or California’s carbon trading scheme, most societies are far from doing this comprehensively.

Together, this is what makes the film so radical. It explicitly calls for major changes in the way our economies work, with a greater focus on both planetary boundaries and global inequality. I was certainly surprised to see this woven into a Sunday night BBC prime time show.

Towards the end, the film moves back to more conventional conservation territory to insert a much-needed dose of optimism. The final story includes some of the most iconic footage from Sir David’s career: his meeting with Rwanda’s mountain gorillas 40 years ago. At the time, Attenborough felt he might be seeing some of the last of their kind – just 250 individuals were left and their future looked bleak. Today that population is doing much better.

Over his incredible career, David Attenborough has seen more of Earth’s natural wonders than almost anyone. To hear him talk, with such clarity, about how bad things are getting is deeply moving. Scientists have recently demonstrated what would be needed to bend the curve on biodiversity loss. As Attenborough says in the final scene, “What happens next, is up to every one of us”.

Julia P G Jones, Professor of Conservation Science, Bangor University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

As the world battles a pandemic, billions don’t have access to soap

Many things have changed since the start of the pandemic, but the very first piece of advice is still very much in place: wash your hands. We’re still not sure just how the coronavirus spreads, but washing your hands is a cheap, low-effort intervention.

But for many people, that’s not exactly the case. According to UNICEF data, 2 out of 5 people worldwide don’t have access to basic handwashing facilities. Pandemic or no pandemic, that’s a huge problem.

A luxury for billions. Image credits: Sean Horsburgh.

It might not get much credit in the developed world, but handwashing is one of the most critical tools in the fight against all infectious diseases. It’s simple enough, requiring just two things (which, let’s face it, most of us take for granted): water and soap.

But for a disturbingly large number of people around the world, those two things are not readily available.

Worldwide, 780 million people do not have reliable access to a clean water source (as of 2017, the latest available report). Another disturbing report notes that 2.5 billion people lack access to improved sanitation (more than 35% of the world’s population). When you also factor in soap availability, some 3 billion people (or 40% of the world’s population) don’t have a handwashing facility with clean water and soap at home.

It’s not just homes, either. According to the World Health Organization, 2 in 5 schools around the world lacked basic handwashing facilities prior to COVID-19 (there is no reliable estimate for during the pandemic); that leaves some 900 million school-age children exposed.

“Handwashing with soap is one of the cheapest, most effective things you can do to protect yourself and others against coronavirus, as well as many other infectious diseases. Yet for billions, even this most basic of steps is simply out of reach.” said Sanjay Wijesekera, UNICEF Director of Programmes. “It is far from a magic bullet. But it is important to make sure people know what steps they should take to keep themselves and their families safe, even as we continue our longstanding efforts to make basic hygiene and sanitation available to everyone.”

This lack of basic hygiene facilities translates into a massive disease burden. Every year, millions of people are infected with neglected tropical diseases (NTDs) such as Guinea worm disease, Buruli ulcer, trachoma, and schistosomiasis, which are believed to be water- and/or hygiene-related. Soil-transmitted parasitic worms infect a billion people every year, and gastrointestinal infections such as diarrhea kill over 2 million people every year (including half a million children).

While the figure has been decreasing in recent years, there is still a lot of ground to cover. Studies have shown that access to clean water and soap could cut this burden by more than 60%.

Then, there’s the pandemic.

It’s hard to imagine too many coronavirus protective measures for the 8 million urban Filipinos that lack handwashing facilities at home; or the 258 million urban dwellers in sub-Saharan Africa; or the 153 million in cities in south and central Asia. The virus itself may affect us all equally, but the way we can prepare against it is anything but equal. If anything, the pandemic is accentuating social inequities rather than erasing them.

There are encouraging signs. Around 2.1 billion people have gained access to basic sanitation since 2000 — but that shouldn’t be taken for granted. In many parts of the world, this new sanitation is wasteful, unsafe, or unsustainable.

The world could be using the pandemic as an opportunity to finally prioritize basic hygiene in all corners of the globe. If anything, the economic case for investments in drinking water, sanitation, and hygiene services is clearer than ever: every dollar invested returns an estimated $2.50 in saved medical costs and increased productivity. With the pandemic upon us, that return has likely only grown.

“Closing inequality gaps in the accessibility, quality and availability of water, sanitation and hygiene should be at the heart of government funding and planning strategies. To relent on investment plans for universal coverage is to undermine decades worth of progress at the expense of coming generations,” remarked Kelly Ann Naylor, UNICEF Associate Director for Water, Sanitation and Hygiene.

Meanwhile for those of us that do have access to basic sanitation, it’s a simple reminder to wash our hands — during the pandemic, and after. It’s a luxury many don’t have.

Between 30% and 50% of the world’s water supply is stolen every year

As much as 30% to 50% of the world’s water supply is stolen annually, with the agricultural sector largely to blame, according to a new study. The findings highlight how little information exists on water theft, and how pressing the issue is amid growing global competition for water.

Image credit: Flickr / State of Israel

While there’s no agreed-on definition of water theft, it essentially involves taking water in violation of regulations. It can be anything from installing unauthorized connections to water distribution systems or tampering with meters, to tapping boreholes without licenses, all with the objective of not paying for water.

There’s a lack of accurate data around water theft, partly because those that steal the resource are often poor, vulnerable, and at-risk in developing countries, although there are also cases in the developed world. With that in mind, a group of researchers developed a novel framework and model, which they applied to three case studies.

The framework and model created by the researchers are aimed at helping water managers to test the impact of changes in detection, prosecution, and conviction systems, as well as accurately measuring the effectiveness of current penalties (which may not provide an effective deterrent).

“As the scarcity of our most precious resource increases due to climate change and other challenges, so too do the drivers for water theft,” said Adam Loch, from the University of Adelaide, in a press release. “If users are motivated to steal water because it is scarce, and they need it to keep a crop alive, then the opportunity cost of that water may far exceed the penalty, and theft will occur.”

Loch and his team looked at cotton farms in Australia, marijuana cropping in the US, and strawberry fields in Spain. They found that water theft increases when governments fail to support detection and prosecution, when there is uncertainty about future water availability, and when social attitudes toward water theft are permissive. They suggest that stronger disincentives may be needed to dissuade users from stealing water.

One example of water theft came to light in Australia in 2017, when a government program found cases of cotton irrigators taking water in violation of embargoes. The program also found a lack of metering in parts of the country and inadequate rules regarding water use, making such embargoes difficult to enforce.

The government implemented new water-sharing rules, appointed a new regulator, and allocated more resources to the enforcement of water laws. This has led to a large number of prosecutions. Nevertheless, progress in installing meters has been slow in some parts of the country, the researchers argue.

“A significant percentage of extractions across Australia are not metered or otherwise properly measured,” the special counsel at the Environmental Defenders Office, Dr. Emma Carmody, who participated in the study, told The Guardian.

“This is a critical issue as it makes it very difficult to assess the extent of non-compliance with water laws, which has a knock-on effect on the environment.”

The researchers say many more cases of water theft could be studied using the (free) framework and model they created, and they encourage institutions to use these tools. Recovering some of the “lost” water would be a real gain for the world’s water supply, they argue.

The study was published in the journal Nature Sustainability.

A limited resource

The United Nations sets the minimum water requirement at 50 liters per person per day. This is based on the idea that a person drinks about two liters a day, using the rest for cooking, washing, and sanitation. While people in water-stressed countries can’t even meet that level, average use in the US is between 400 and 600 liters per person per day.
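For a sense of what those daily figures add up to over a year, here is a quick sketch using only the numbers in this section:

```python
# What the daily figures above add up to over a year.
un_minimum_l = 50                  # UN minimum, liters per person per day
us_low_l, us_high_l = 400, 600     # average US usage range quoted above, liters per day
days = 365

print(f"UN minimum: {un_minimum_l * days / 1000:.1f} cubic meters per person per year")       # about 18
print(f"US average: {us_low_l * days / 1000:.0f}-{us_high_l * days / 1000:.0f} cubic meters per person per year")  # 146-219
# The US range works out to roughly 8-12 times the UN minimum.
```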

Agriculture gobbles up around 70% of the water that is consumed globally. While it’s an economically relevant sector and can take people out of poverty, it can also massively deplete water resources. For example, it takes 140 liters of water to make a cup of coffee, with most of the water used on growing the coffee plant.

The way water is distributed around the world doesn’t match local supply and demand. China has 40 times more people than Canada but has much less water, for example. Thirteen Arab countries are among the world’s 19 most water-scarce nations. Even water-rich countries such as Brazil have had water scarcity problems in the past.

There’s intense competition for water resources around the world, and it is likely to intensify due to population growth, urbanization, overuse of water, environmental degradation, and climate change. Groundwater is expected to become the main supply in the future, as surface water gets depleted or polluted. Nevertheless, tapping groundwater comes with a wide array of challenges of its own.