New technology aims to turn smoke from industry and power generation into useful, commercially-valuable products. The process hinges on a newly-developed metal organic framework (MOF) as a catalyst.
Smokestacks around the world release a tremendous amount of carbon dioxide gas into the atmosphere. What if, instead of letting it pile up and heat the climate, we captured this CO2 and put it to good use? That’s exactly the aim of a scientific collaboration led by researchers at Oregon State University — and, according to a new study they published, one they accomplished.
The team describes a new metal organic framework, a crystalline compound material in which metal ions act as nodes linked together by organic molecules. This MOF acts as a catalyst, enabling the production of cyclic carbonates — a useful family of chemicals — from CO2 released in factory flue gases (smokestacks).
Up in smoke
“We’ve taken a big step toward solving a crucial challenge associated with the hoped-for circular carbon economy by developing an effective catalyst,” said chemistry researcher Kyriakos Stylianou of the Oregon State University College of Science, who led the study. “A key to that is understanding the molecular interactions between the active sites in MOFs with potentially reactive molecules.”
The novel MOF is loaded with propylene oxide, a common industrial chemical. Inside the framework’s pores, the MOF catalyzes a reaction between propylene oxide and CO2, allowing for the quick and easy conversion of the gas into cyclic carbonates. These latter compounds have ring-shaped molecules and are quite useful for a variety of applications — ranging from pharmaceutical precursors to battery electrolytes.
The best part about this is that carbon is scrubbed out of flue gases in the process. Essentially, this MOF can be used to clean greenhouse gases from the smoke. It can also remove carbon from biogas (a mix of CO2, methane, and other gases produced by decaying organic matter).
The MOF is based on lanthanides, a somewhat special (and somewhat rare) family of metals — in fact, they’re often referred to as ‘rare earths’. They are soft, silvery-white, and have a variety of uses. Some examples of lanthanides include cerium, europium, and gadolinium.
Lanthanides were used for the MOF because they provide good chemical stability. This is especially important because the gases the MOF will be exposed to are hot, high in humidity, and quite acidic. The metal centers hold the framework’s organic components in place — and, as the team found, also serve as the catalytically active sites.
“We observed that within the pores, propylene oxide can directly bind to the cerium centers and activate interactions for the cycloaddition of carbon dioxide,” Stylianou said. “Using our MOFs, stable after multiple cycles of carbon dioxide capture and conversion, we describe the fixation of carbon dioxide into the propylene oxide’s epoxy ring for the production of cyclic carbonates.”
The team says that their findings are “very exciting”. They’re particularly thrilled about the MOF’s ability to use carbon dioxide gas even from impure sources, which saves time, energy, and costs associated with separating it before the process.
The paper “Lanthanide metal–organic frameworks for the fixation of CO2 under aqueous-rich and mixed-gas conditions” has been published in the Journal of Materials Chemistry A.
Glass is associated with brittleness and fragility rather than strength. However, researchers in China were able to create a new transparent amorphous material that is so strong and hard that it can scratch diamonds. What’s more, this high-tech glass has a semiconductor bandgap, which makes it appealing for solar panels.
Strongest amorphous material in the world
Diamond, the hardest natural material known to date, is often used in tools for cutting glass. But the tables have turned.
“Comprehensive mechanical tests demonstrate that the synthesized AM-III carbon is the hardest and strongest amorphous material known so far, which can scratch diamond crystal and approach its strength. The produced AM carbon materials combine outstanding mechanical and electronic properties, and may potentially be used in photovoltaic applications that require ultrahigh strength and wear resistance,” the authors of the new study wrote.
The new material developed by scientists at Yanshan University in Hebei province, China, is tentatively named AM-III and was rated at 113 gigapascals (GPa) in the Vickers hardness test. Vickers hardness, a measure of the hardness of a material, is calculated from the size of an impression produced under load by a pyramid-shaped diamond indenter.
That’s more than many natural diamonds that have a Vickers score in the range of 70-100 GPa, but less than the hardest diamonds that can score up to 150 GPa.
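The Vickers number itself comes from simple geometry: the hardness is the indenter load divided by the sloping area of the pyramid-shaped impression. A minimal sketch of that arithmetic (the load and diagonal values below are purely illustrative, not figures from the study):

```python
def vickers_hardness_gpa(load_kgf: float, diagonal_mm: float) -> float:
    """Vickers hardness from the indenter load and the mean diagonal
    of the impression it leaves in the material."""
    # HV (kgf/mm^2) = 1.8544 * F / d^2 follows from the geometry of the
    # standard 136-degree diamond pyramid indenter.
    hv = 1.8544 * load_kgf / diagonal_mm ** 2
    return hv * 0.00980665  # convert kgf/mm^2 to GPa

# An illustrative 1 kgf load leaving a ~12.8 micrometre (0.0128 mm)
# diagonal works out to roughly 111 GPa -- the harder the material,
# the smaller the impression.
print(round(vickers_hardness_gpa(1.0, 0.0128), 1))  # 111.0
```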
It’s about ten times harder than mild steel and could be 20 to 100 times tougher than most bulletproof windows.
Shaped like diamonds, looks like glass
Like diamonds, AM-III is mostly made of carbon. But while carbon atoms in diamond are arranged in an orderly crystal lattice, glass has the chaotic internal structure typical of an amorphous material — which is why glass is typically weak. AM-III, however, contains micro-structures that appear orderly, just like crystals. So, AM-III is part glass, part crystal, which explains its strength.
In order to make AM-III, the Chinese researchers had to employ a process that is even more complicated than manufacturing artificial diamonds. The most common method for creating synthetic diamonds used in the industry is called high pressure, high temperature (HPHT). During HPHT, carbon is subjected to temperatures and pressures similar to those that led to the formation of natural diamonds deep in the Earth — around 1,300 degrees Celsius (roughly 2,370 degrees Fahrenheit) and pressures some 50,000 times greater than at the surface.
Instead of graphite, the raw material of artificial diamonds, the Chinese researchers started off with fullerene, also called buckminsterfullerene. Each of these molecules contains 60 carbon atoms, commonly denoted as C60, arranged in a hollow, ball-shaped cage typically about 1 nm in diameter.
These carbon “footballs” are typically soft and squishy. But after being subjected to great heat and pressure, the carbon balls are crushed and blended together.
The fullerene was subjected to about 25 GPa of pressure and 1,200 degrees Celsius (2,192 degrees Fahrenheit). However, the researchers were careful to reach these conditions very gradually, taking their time over the course of about 12 hours. Immediately subjecting the material to high pressure and heat may have turned the carbon balls into diamonds.
The resulting transparent material is not only hard but also a semiconductor, with a bandgap almost as effective as that of silicon, the main semiconductor used in electronics. So besides bulletproof glass, it could prove useful in the solar panel industry, where its properties can shine by allowing sunlight to reach photovoltaic cells while also enhancing the lifespan of the product.
Radioactive carbon dating determines the age of organic material by analyzing the ratio of different carbon isotopes in a sample. The technique revolutionized archeology when it was first developed in the 1950s, but is currently at risk from fossil fuel emissions.
Also known as radiocarbon or carbon-14 (scientific notation 14C) dating, the procedure relies on the rarest carbon isotope, carbon-14. Carbon-14 is created on Earth by interactions between nitrogen gas and radiation, usually in the higher levels of the atmosphere. With only 0.0000000001% of the carbon in today’s atmosphere being 14C, it is the rarest naturally-occurring carbon isotope on our planet, the others being 12C and 13C.
Unlike the other isotopes, carbon-14 isn’t stable, and it decays over time. Its half-life, the time it takes for half of all 14C atoms in a sample to degrade, is 5,730 years. Putting together that tidbit of information, some very expensive machines, a bit of educated guesswork, and ancient tree rings allows researchers to determine the age of a sample of organic material with reasonable accuracy.
Sounds a bit like magic, doesn’t it? Let’s take a look at how it works.
Who thought it up?
The theoretical foundations of radiocarbon dating were laid down by a research team led by American physical chemist Willard Libby in 1949. They were the first to calculate the radioactive decay rate of carbon-14 using carbon black powder. As a test, they took acacia wood samples from the tombs of two Egyptian kings, Zoser and Sneferu, and dated them. Their test showed the wood was cut in 2800 BC +/- 250 years, while earlier independent dating estimated it hailed from 2625 BC +/- 75 years — so their method checked out, mostly. There were still some flaws to the approach — the results were slightly fouled by nuclear weapon testing at the time — but these were soon worked out. One of the most important modifications to the initial method was to set the calibration date (we’ll get to this in a moment) to 1950.
For his work, Willard Libby would receive a Nobel Prize for Chemistry in 1960.
How does it work?
Carbon-14 is continuously created in the upper atmosphere: cosmic radiation ionizes nitrogen-14 gas atoms (native nitrogen, with 7 protons and 7 neutrons) into carbon-14 atoms (which have 6 protons and 8 neutrons). Interactions between radiation and the atmosphere at large supply the neutrons that collide with and kick protons out of the nitrogen atom. An atom’s chemical properties (i.e. the element) are a product of the number of protons in its nucleus, not its total mass, so this effectively transforms it into carbon-14.
These 14C atoms then get gobbled up by oxygen to become regular CO2, which is consumed by regular plants through photosynthesis, and these plants are then eaten by regular animals and so on. This is the process through which 14C becomes part of all organic matter. 14C has a half-life of 5,730 years, meaning that it takes 5,730 years for half of the 14C atoms in a sample to degrade back into nitrogen-14, and 5,730 more for half of what is left to degrade, and so on.
And here’s the kicker — when these organisms die, they stop taking in new carbon, including carbon-14. Since this latter isotope isn’t stable, it degrades over time. We know the rate at which it decays; so by comparing the current ratio of 12C to 14C atoms in a sample to the initial ratio, we can determine how long ago something died.
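Turning a measured 14C fraction into an (uncalibrated) age is then a one-line application of the decay law, sketched here in Python:

```python
import math

HALF_LIFE_YEARS = 5730  # half-life of carbon-14

def radiocarbon_age(fraction_remaining: float) -> float:
    """Uncalibrated age, in years, from the fraction of 14C left in a
    sample relative to its assumed initial level."""
    # After t years, fraction = (1/2) ** (t / half_life), so
    # t = half_life * log2(1 / fraction).
    return HALF_LIFE_YEARS * math.log2(1.0 / fraction_remaining)

print(radiocarbon_age(0.5))   # 5730.0  (one half-life)
print(radiocarbon_age(0.25))  # 11460.0 (two half-lives)
```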
How do we do it?
There are three main ways to go about it, each with very sciency-sounding names: gas proportional counting, liquid scintillation counting, and accelerator mass spectrometry.
Gas proportional counting measures the amount of beta radiation — the kind of radiation given off during radioactive decay — emitted by a sample. Since 14C is the only radioactive isotope in organic material, the level of radiation effectively tells you how much of it is in the sample. The method gets its name from the fact that the sample needs to be transformed into carbon dioxide gas (basically, burned) before the measurement can be performed.
Liquid scintillation counting is another old-timer radiocarbon dating technique. It works on the same principle as gas proportional counting but uses different gear. The sample is turned into a liquid form and a scintillator is submerged in it. Scintillators are devices that emit flashes of light upon contact with a beta particle. Two additional devices (photomultipliers) are used to detect these flashes — when both pick up on it, a count is made.
Accelerator (or Accelerated) Mass Spectrometry (AMS) is the modern way of doing things, and over time has become more efficient, faster, and more accurate than the others. It is quite simple, actually — you physically count the number of 14C, 13C, and 12C in the sample using a mass spectrometer. AMS is preferred today for its speed and accuracy, but also because it works with much smaller samples than the other two, helping conserve precious artifacts. For this process, atoms in the sample are ionized (electrically charged) and accelerated using powerful magnets to gradually remove as many atoms not used in the counting as possible. Finally, the carbon-14 isotopes pass into a detector along with some other carbon atoms, and these are used to perform the measurement.
“Quite quickly after radiocarbon dating was discovered, it became clear that Libby’s assumption of constant 14C levels in the atmosphere does not hold,” a paper published in 2008 explains. “The level is affected by very many complex factors that have proven impossible to model […] such as: solar cycles, solar storms, geomagnetic variations in the earth, and unpredictable up-welling of old carbon from substantial reservoirs such as oceans. The level has also been impacted by human activity; for example, it increased substantially due to atomic bomb testing in the 1950s and has dropped again more recently due to release of old carbon in fossil fuels.”
“As a consequence, radiocarbon dating is only viable if we can obtain an estimate of the varying level of 14C back through time and can thus plot the function that links radiocarbon ages to calendar ages.”
“Put loosely, we need a calibration curve.”
While first developing their method of measuring 14C content, Libby’s team pointed to the possibility that the ratio of 12C to 14C in the atmosphere likely didn’t remain constant over time, but assumed it was constant as they had no way of correcting for it and wanted to finish their research. As radiocarbon dating saw more use and inconsistencies started to mount, researchers realized that this hunch was right, and set out to ‘calibrate’ the method.
Currently, the calibration date used for radiocarbon dating is the year 1950. In other words, samples are compared against the baseline value of 12C to 14C isotopes recorded in the 1950s. If a sample contains 25% of the carbon-14 you’d expect to see in an organism that died in 1950, it would be two times as old as the isotope’s half-life (so two times 5,730, giving it a rough age of 11,460 years). This isn’t the final age, however.
All the steps we’ve gone through so far don’t actually tell us how old a sample is, just how much 14C it contains. As we’ve seen above, accurately dating such a sample hinges on us knowing how much 14C it contained to begin with. To know that, we need to know how much of it was in the atmosphere while the organism lived. This is the process of calibration: changing the assumed initial level of radioactive carbon. It’s perhaps the trickiest bit of the whole process.
“The convention is to assume that the [carbon isotope] ratio has remained constant over time and then to use calibration to compensate for the fact that, in reality, the ratio is changing,” Caitlin Buck, a professor in the Department of Mathematics and Statistics at the University of Sheffield told ZME Science. Professor Buck specializes in applying statistical methods to archeological and paleoenvironmental science and is the co-author of the 2008 paper above.
“At first the need to calibrate seems somewhat unfortunate but, in fact, it allows us to also compensate for several other underlying issues too, like the fact that (at any given time) the ratio in the atmosphere is not the same as that in the oceans.”
The most common and reliable calibration material is ancient trees. Since trees build a new set of rings every year, they act as 14C archives. From the data in the tree rings, a timeline of 14C levels can be constructed; timelines from multiple trees are then compared and overlapped, making this the most accurate record of the isotope we have. The simplest way to go about it would be to find a tree ring that contains the same ratio of radiocarbon as your sample. Other approaches include using 14C curves compiled from other sources or testing artifacts that were reliably dated through other means, although this is more of a situational than a systemic solution.
Yearly variations in carbon ratios, however, are quite small, which is why radiocarbon dates often come with a “+/- years” variation interval.
Uncalibrated dates are denoted with the unit BP, meaning ‘radiocarbon years before present (1950)’. Calibrated dates use the unit calBP, ‘calibrated before present’. Calibrated dates are the final estimate for the sample’s age, but uncalibrated dates are routinely shown to allow for recalibration as our understanding of 14C levels through time increases. Researchers are putting a huge amount of effort into extending the calibration curve (a timeline of 14C:12C ratios throughout history) and to increase its accuracy. Currently, the curve extends to around 50,000 years ago, but with a relative degree of uncertainty over its oldest reaches.
Limitations and external factors
For starters, if a reliable starting level for carbon-14 can’t be established, radiocarbon dating can’t be used to accurately determine a sample’s age. The technique can only be used to date samples up to around 55,000-60,000 years old (after which the carbon-14 content drops off to negligible levels). It’s also quite expensive — particularly AMS — due to the very sensitive and highly specialized equipment and personnel needed to run these procedures. Then, there are also external factors that can throw a wrench in the workings of radiocarbon dating.
Carbon-14 is created from the interaction between radiation and the atmosphere, and the advent of nuclear technology (with its plethora of weapon and civilian testing) released a great amount of radiation and radioactive material, driving up the atmospheric ratio significantly.
“The ‘bomb’ samples [i.e. those after 1950] have very high concentrations of 14C, and so if you are working on very old samples for archaeology it is a good idea to have separate extraction lines for the ‘low-level’ samples,” Thure Cerling, a Distinguished Professor of Geology and Geophysics and a Distinguished Professor of Biology at the University of Utah, told ZME Science in an email. “It is quite easy to contaminate ‘old’ samples with ‘modern’ 14C, so a lot of effort has gone into dealing with that issue.”
On the other hand, all that nuclear weapons testing makes it very easy to date a sample of organic matter that grew during this time, which is one of the reasons why 1950 was selected as a calibration date. “Organic material formed during or after this period may be radiocarbon-dated using the abrupt rise and steady fall of the atmospheric 14C concentration known as the bomb-curve,” explains a paper co-authored by Professor Cerling in 2013.
He cautions that you must be “very careful” to prevent this kind of contamination, although noting that the issue “is well known” and that “most modern labs have taken sufficient precautions that it is not the problem that it was 30 to 40 years ago.”
Another element affecting this ratio is the use of fossil fuels. Fossil fuels originate from organic matter, but because they formed over millions of years, all the carbon-14 they might have contained has long since degraded. So when they are burned and their carbon released as CO2 into the atmosphere, that carbon contains no 14C at all. This further dilutes the carbon isotope ratio, and does so very fast, impacting the reliability of our dating efforts. Together, these two factors stand testament to the wide reach humanity has achieved over the Earth.
Contamination with external material such as soil can alter the apparent age of a sample by mixing in extra carbon; therefore, all samples are thoroughly cleaned with chemical agents to remove any contaminants. Reservoir effects — this refers to the fact that ocean water contains a different ratio of carbon isotopes than the atmosphere — need to be taken into account when dealing with samples that have been submerged or originate from aquatic environments.
Radiocarbon dating revolutionized archeology and anthropology by giving researchers a quick and reliable tool to date organic materials. It was a boon to these fields, one whose merits are very hard to overstate. Both Prof Buck and Prof Cerling pointed to the method’s ability to yield absolute age measurements for items of interest — with Prof Cerling saying that it “has revolutionized archaeology” — which allowed us to make heads or tails of historical timelines. Previous approaches such as seriation could only be used to date structures, cultures, and artifacts in relation to one another through the ample application of good, old-fashioned time and labor.
“Likewise it is very useful in determining the age of ice in ice cores that record the history of CO2 and methane in the atmosphere,” Prof Cerling told me.
But radioactive carbon isn’t useful just for dating stuff. Used as a marker molecule, it can allow researchers to, for example, trace specific drugs as they spread through the body, how masses of water move through the oceans, how carbon circulates in nature, and even in forensics to determine when an unknown person died.
Not bad for an unstable isotope of the element at the heart of all known life.
New research from the University of Colorado Boulder, the Colorado School of Public Health, and the University of Pennsylvania found that higher concentrations of atmospheric CO2 in the future could negatively impact our cognitive abilities — especially among children in the classroom. The findings were presented at this year’s American Geophysical Union’s Fall Meeting.
Prior research has shown that higher-than-average levels of CO2 can impair our thinking and lead to cognitive problems. Children in particular — and their academic performance — can be negatively impacted, but, so far, researchers have identified a simple and elegant solution: open the windows and let some fresh air in.
However, what happens when the air outside also shows higher-than-usual CO2 levels? In an effort to find out, the team used a computer model and looked at two scenarios: one in which we successfully reduce the amount of CO2 we emit into the atmosphere, and one in which we don’t (a business-as-usual scenario). They then analyzed what effects each situation would have on a classroom of children.
In the first scenario, they explain that by 2100 students will be exposed to enough CO2 gas that, judging from the results of previous studies, they would experience a 25% decline in cognitive abilities. Under the second scenario, however, they report that students could experience a whopping 50% decline in cognitive ability.
The study doesn’t look at the effects of breathing higher-than-average quantities of CO2 sporadically — it analyzes the effects of doing so on a regular basis. The team explained that their study was the first to gauge this impact, and that the findings — while definitely worrying — still need to be validated by further research. Note that the paper has been submitted for peer-review pending publication but has yet to pass this step.
All in all, however, it’s another stark reminder that we should make an effort to cut CO2 emissions as quickly as humanly possible. Not only because they’re ‘killing the planet’, but because they will have a deeply negative impact on our quality of life, and mental capacity, in the future.
A preprint of the paper “Fossil fuel combustion is driving indoor CO2 toward levels harmful to human cognition” is available on EarthArXiv, and has been submitted for peer-review and publication in the journal GeoHealth.
Around 24% of the greenhouse gas (GHG) emissions of the EU, and 14% of all human emissions worldwide, come from the transport sector. A new report published by the European Academies’ Science Advisory Council (EASAC) presented at the World Science Forum in Budapest showcases how important it is for the EU to decarbonize this sector to reach its Paris Agreement pledges (which would keep us under 2°C of warming).
We’re seeing unprecedented ecological upheaval as a result of our reliance on fossil fuels for energy. It’s encouraging that we’re also seeing unprecedented efforts being spent to clean our mess, safeguard nature, and preserve our way of life (most notably the Paris Agreement).
But it’s still not enough. The United Nations Intergovernmental Panel on Climate Change (IPCC) estimates that we’re still on track to exceed both the 1.5°C and 2°C of warming targets unless governments take “urgent actions” to reduce emissions.
Taking the ‘fossil’ out of fuel
“To limit the global temperature rise to 2°C with a probability of 66% implies an approximate global CO2 budget of between 590 and 1,240 gigatonnes of emissions until 2100,” EASAC reports.
“If the current levels of global emissions from fossil fuels were to be reduced linearly within this global CO2 budget, then the budget would be used up within about 40 years (i.e. by 2060). The use of fossil fuels, including in the transport sector, should be reduced to close to zero within that timeframe.”
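EASAC’s “about 40 years” follows from simple triangle arithmetic: ramping emissions linearly from the current rate down to zero over T years releases (rate × T) / 2 gigatonnes in total. A quick sketch — the 740 Gt budget (inside the quoted 590-1,240 Gt range) and the ~37 Gt/year emission rate are round figures assumed here purely for illustration:

```python
def years_to_exhaust(budget_gt: float, rate_gt_per_year: float) -> float:
    # A linear ramp-down from the current rate to zero over T years emits
    # (rate * T) / 2 Gt in total, so the budget B runs out when T = 2B / rate.
    return 2 * budget_gt / rate_gt_per_year

# Assumed illustrative figures: 740 Gt budget, ~37 Gt of fossil CO2 per year.
print(years_to_exhaust(740, 37))  # 40.0
```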
For this goal, the report says we need to adopt short-term strategies (even if they are not desirable in the long term) that lead to reductions of GHG emissions as a stop-gap measure. Meanwhile, we should be working to implement sustainable strategies for the future.
One of the promising areas where the EU can do so is transportation. All in all, the EU needs to slash 60% of the emissions from its transport sector by 2050 to meet its pledge as part of the Paris Agreement, the report explains.
“Current EU policies are unlikely to deliver emission reductions quickly enough to limit global warming to less than 2°C,” EASAC adds.
“There is no ‘silver bullet’, so a combination of long- and short-term policy options must be supported at EU, national, regional, and local authority levels […] to help citizens to understand and agree to take action.”
The 13 recommendations listed in the current report fall into three categories:
Avoiding (reducing) demand for passenger and freight transport services.
Shifting passengers and freight to transport modes with lower emissions.
Improving performance through vehicle design, deploying more efficient powertrains, and substituting fossil fuels with low-carbon energy carriers.
The EASAC explains that fossil fuels (gasoline and diesel) currently dominate (95%) the energy market in the transport sector. Transport generates 24% of the EU’s GHG emissions, and out of this 72% comes from road transport — 53% from light vehicles and 19% from buses and heavy vehicles (such as transport trucks).
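Those shares are easy to cross-check: the two road sub-shares should add up to the road total, and multiplying through gives road transport’s slice of all EU emissions. A quick sanity check on the report’s numbers:

```python
transport_share = 0.24        # transport's share of all EU GHG emissions
road_within_transport = 0.72  # road's share of transport emissions
light_vehicles = 0.53         # light vehicles, as a share of transport emissions
heavy_vehicles = 0.19         # buses and heavy vehicles, as a share of transport

# The sub-shares should sum to the reported road total.
assert abs(light_vehicles + heavy_vehicles - road_within_transport) < 1e-9

# Road transport alone, as a share of total EU GHG emissions:
road_of_total = transport_share * road_within_transport
print(f"{road_of_total:.1%}")  # 17.3%
```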
Transport is a vital part of modern society, both from an economic and social point of view. It contributes 6.3% of the gross domestic product (GDP) and employs 13 million people in the EU. The report focuses on this sector as it’s more dynamic than construction or industry, the two other big emitters. The EASAC report states that it takes “about 20 years to renew the current vehicle fleet”, which makes reductions in GHG emissions possible in a much quicker timeframe than in other sectors.
However, efforts to reduce emissions from transportation have largely fallen flat as the higher efficiency of modern vehicles is offset by a growing number of cars and trucks in use. Passenger and freight transport in the EU has been growing since the year 2000, and “broadly follows the growth in GDP”, suggesting that this rising trend will persist in the future.
In order to tackle the issue of transport demand and supply, the report recommends reducing demand on the one hand, while improving the quality of transport supply on the other. The former can be achieved by encouraging people to change their behavior (i.e. policies that promote walking, cycling, teleworking, teleconferencing, or web-streaming of events), and shifting transport towards methods with reduced vehicle-kilometers (such as the use of vehicles with larger transport capacity, car sharing, and carpooling). The latter involves calling for more efficient vehicle designs, more efficient conventional and hybrid powertrains, substituting low-carbon fuels (e.g. natural gas or biofuels) for petrol and diesel, and promoting the development and use of vehicles that use alternative energy sources (such as electric or hydrogen-powered vehicles).
One of the most striking aspects of these recommendations is that it does away with the EU’s traditional mindset that ‘reducing mobility is not an option’. As an EU citizen, I can tell you that this is not something the Union tends to do — we’re all about civil liberties and freedom of choice. But the report explains that business as usual is simply not a scenario we can afford, saying that the “need to reduce GHG emissions [warrant] urgent short-term policies to limit and, where possible, to reverse the growth in motorized transport demand.”
The 11 other measures suggested in this report:
Shifting passengers from private cars to public transport; only 20% of passenger transport today is handled by public or privately-owned communal transport, leaving ample room for improvement.
Taking freight off the road and onto railroad or waterways. The report notes that this approach would take a lot of investment for many businesses to implement, and recommends that the “public and private sectors [jointly invest] in more and better access points for” these transport services.
Improving or introducing regulation that limits demand for oversized vehicles and engines as other recommendations go into effect. This would help keep future emissions under control by preventing such vehicles from hitting the road in the first place.
Improving the average emission performance of passenger cars and light-duty vehicles. This can be achieved by introducing hard deadlines for the phasing-out of fossil fuel engines, or through subsidized scrapping schemes that focus specifically on old, polluting vehicles to accelerate the renewal of the vehicle pool.
Improving the rate of market penetration for electric vehicles. Overall emissions from the energy sector are capped by the EU Emission Trading System (ETS), the report explains, so this will lead to an overall reduction in GHG emissions.
Improving the rate of grid penetration for low-carbon energy sources. In essence, this means installing low-carbon sources of energy and decommissioning old fossil-fuel power plants to supply our new fleet of electric vehicles, industry, and residential consumers. The rate of growth in such energy-generation systems must exceed the rate of growth in total energy demand for it to have a net positive effect, however.
Improving and adapting the design and regulation of electricity markets and tariffs that apply to electric vehicles. Such schemes would make battery-powered electric vehicles more attractive both to consumers and to grid operators.
Tightening, but also streamlining, guidance on the use of biofuels, biogas, natural gas, and methane for transport. “The use of all biofuels for transport should continue to be subjected to strict sustainability criteria, and there should continue to be a cap on the use of conventional biofuels made from food or feed crops,” the report states, adding that biomass used for bioenergy should come from sustainably-managed forests.
Increasing the resources allocated to the development of synthetic fuels. Hydrogen, methane, and other such fuels can be used in IC engines in the short term to reduce emissions and can be used long-term to power vehicles such as planes or ships.
Supporting the development of information and communications technologies and autonomous vehicles. This point should be handled with care as, on the one hand, such systems can help reduce emissions, but they can also make transport more convenient and as such increase demand (and emissions).
Improving our ability to sustain long-term emission reductions with policy that supports useful innovation, jobs, skills, and interdisciplinary research.
The most pressing need for new policy relates to the short-term options listed in the report, the EASAC writes, as these need to go into effect fast. Even if the reductions in GHG emissions they would provide aren’t groundbreaking, they will add up over time — thereby allowing us to make the most of our carbon budget. In the 10- to 15-year timeframe, we also need to take meaningful action to decarbonize energy production, as it supplies all other economic sectors and will thus have the largest impact on our efforts to curb climate change.
The report “Decarbonisation of Transport: options and challenges” can be accessed on EASAC’s page here.
Sending more emails than you should may not only be pointless but also bad for the environment, according to a new study focused on the UK, which showed how high the carbon footprint of emails can be.
OVO Energy, England’s leading energy supply company, commissioned the study, which used the UK as a case study. Among its results: Brits send more than 64 million unnecessary emails every day, adding 23,475 tons of carbon a year to the country’s footprint.
If every adult in the UK sent one fewer “thank you” email a day, it would save more than 16,433 tons of carbon a year. This is equivalent to 81,152 flights to Madrid or taking 3,334 diesel cars off the road.
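As a quick sanity check, the figures above imply a footprint of roughly one gram of CO2 per unnecessary email. The short sketch below uses only the article’s numbers; the per-email value is an implied quantity, not one the study states directly:

```python
# Back-of-envelope check on the study's figures (all inputs from the article).
UNNECESSARY_EMAILS_PER_DAY = 64_000_000   # UK, per the OVO Energy study
ANNUAL_FOOTPRINT_TONNES = 23_475          # CO2 attributed to those emails per year

emails_per_year = UNNECESSARY_EMAILS_PER_DAY * 365
# 1 tonne = 1,000,000 grams
grams_per_email = ANNUAL_FOOTPRINT_TONNES * 1_000_000 / emails_per_year
print(f"Implied footprint: ~{grams_per_email:.1f} g CO2 per email")
```

That figure lines up with commonly cited estimates of about 1 g CO2 for a short email, which makes the study’s totals internally consistent.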
Among the most “unnecessary” emails, the report included those that say “Thank you,” “Thanks,” “Have a good weekend,” “Received,” “Appreciated,” “Have a good evening,” “Did you get/see this,” “Cheers,” “You too,” and “LOL,” according to the study.
The results also showed that 71% of Brits wouldn’t mind not receiving a “thank you” email “if they knew it was for the benefit of the environment and helping to combat the climate crisis.” Also, 87% said they “would be happy to reduce their email traffic to help support the same cause,” according to the study.
Mike Berners-Lee, a professor at Lancaster University in Lancashire, England and one of the study authors, said that while the carbon footprint is not highly significant “it’s a great illustration of the broader principle that cutting the waste out of our lives is good for our wellbeing and good for the environment.”
“Every time we take a small step towards changing our behavior, be that sending fewer emails or carrying a reusable coffee cup, we need to treat it as a reminder to ourselves and others that we care even more about the really big carbon decisions,” Berners-Lee said.
This is not the first time a study has looked at the environmental footprint of emails. Research by McAfee in 2010 showed that 78% of all incoming emails are spam. Around 62 trillion spam messages are sent every year, requiring 33 billion kilowatt-hours (kWh) of electricity and causing around 20 million tonnes of CO2e per year.
Around 98% of all the plastic waste going into the ocean is unaccounted for. A new paper looks into where it winds up, and its effect on marine life.
It’s hard to overstate just how much plastic humanity has dumped into the ocean. Trillions of bits of plastic accumulate in massive “garbage patches” along the subtropical gyres (rotating ocean currents). These patches have a dramatic impact on ocean life, ranging from the largest mammals to the humble bacteria.
And yet, these immense plastic patches only account for 1% to 2% of all the plastic going into the ocean. Which is quite a scary thought. One promising theory is that sunlight-driven chemical reactions break the materials down until they lose buoyancy, or become too small to be captured by researchers. However, direct, experimental evidence for the photochemical degradation of marine plastics remains rare.
Where’s the plastic?
“For the most photoreactive microplastics such as expanded polystyrene and polypropylene, sunlight may rapidly remove these polymers from ocean waters. Other, less photodegradable microplastics such as polyethylene, may take decades to centuries to degrade even if they remain at the sea surface,” said Shiye Zhao, Ph.D., senior author of the paper.
“In addition, as these plastics dissolve at sea, they release biologically active organic compounds, which are measured as total dissolved organic carbon, a major byproduct of sunlight-driven plastic photodegradation.”
The team, which included members from Florida Atlantic University’s Harbor Branch Oceanographic Institute, East China Normal University, and Northeastern University, wanted to verify the theory. They selected polymers that are often seen in the garbage patches, along with plastic fragments collected from the surface waters of the North Pacific Gyre, and irradiated them for approximately two months using a solar simulator.
During this time, the team captured the kinetics of plastic degradation. To assess degradation levels, they used optical microscopy, electron microscopy, and Fourier transform infrared (FT-IR) spectroscopy.
All in all, the team reports, plastic dissolution led to an increase in carbon levels in their surrounding water and reduced particle size of the plastic samples. The irradiated plastics fragmented, oxidized, and changed in color. Recycled plastics, overall, degraded more rapidly than polymers such as polypropylene (e.g. consumer packaging) and polyethylene (e.g. plastic bags, plastic films, and containers including bottles), which were the most photo-resistant polymers studied.
Based on the findings, the team estimates that recycled plastics tended to degrade completely in 2.7 years and that plastics in the North Pacific Gyre degrade in 2.8 years. Polypropylene, polyethylene, and standard polyethylene (which see ample use in food packaging) degrade completely in 4.3, 33, and a whopping 49 years, respectively, the team estimates.
The compounds leaching out of the plastic as it degrades seem to be broadly biodegradable, the team reports. While levels of plastic-sourced carbon in ocean water pale in comparison to natural marine-dissolved organic carbon, the team found that it can inhibit microbial activity. The carbon from degraded plastics was readily used by marine bacteria, the team adds.
“The potential that plastics are releasing bio-inhibitory compounds during photodegradation in the ocean could impact microbial community productivity and structure, with unknown consequences for the biogeochemistry and ecology of the ocean,” said Zhao.
“One of four polymers in our study had a negative effect on bacteria. More work is needed to determine whether the release of bioinhibitory compounds from photodegrading plastics is a common or rare phenomenon.”
Samples in the study included post-consumer microplastics from recycled plastics like a shampoo bottle and a disposable lunch box (polyethylene, polypropylene, and expanded polystyrene), as well as standard polyethylene.
The paper “Photochemical dissolution of buoyant microplastics to dissolved organic carbon: Rates and microbial impacts” has been published in the Journal of Hazardous Materials.
A novel carbon capture technique can scrub the gas out from the air even at relatively low concentrations, such as the roughly 400 parts per million (ppm) currently found in the atmosphere.
We have a climate problem: namely, we’re making the planet hotter and hotter. This change is caused by a build-up of greenhouse gases released by our various activities, and carbon dioxide (CO2) is the single most important such gas. Tackling climate heating hinges on our ability to reduce emissions or to find ways of scrubbing them from the air. Since the former would involve at least some economic contraction, neither industry nor politicians are very keen on it. So there’s quite a lot of interest in developing the latter approach.
Most of the methods available today need high concentrations of CO2 (such as the smoke emitted by fossil fuel-based power plants) to function. The methods that can work with low concentrations, on the other hand, are energy-intensive and expensive, so there’s little economic incentive for their use. However, new research from MIT plans to change this state of affairs.
“The greatest advantage of this technology over most other carbon capture or carbon absorbing technologies is the binary nature of the adsorbent’s affinity to carbon dioxide,” explains MIT postdoc Sahag Voskian, who developed the work during his PhD.
“This binary affinity allows capture of carbon dioxide from any concentration, including 400 parts per million, and allows its release into any carrier stream, including 100 percent CO2.”
The technique relies on passing air through a stack of electrochemical plates. The process Voskian describes is that the electrical charge state of the material — charged or uncharged — causes it to either have no affinity to CO2 whatsoever or a very high affinity for the compound. To capture CO2, all you need to do is hook the material up to a charged battery or another power source; to pump it out, you cut the power.
The team says this comes in stark contrast to carbon-capture technologies today, which rely on intermediate steps involving large energy expenditures (usually in the form of heat) or pressure differences.
Essentially, the system functions the same way a battery would, absorbing CO2 around its electrodes as it charges up, and releasing it as it discharges. The team envisions successive charge-discharge cycles as the device is in operation, with fresh air or feed gas being blown through the system during the charging cycle, and then pure, concentrated carbon dioxide being blown out during the discharge phase.
The electrochemical plates are coated with a polyanthraquinone and carbon nanotubes composite. This gives the plates a natural affinity for carbon dioxide and helps speed up the reaction even at low concentrations. During the discharge phase, these reactions take place in reverse, generating part of the power needed for the whole system during this time. The whole system operates at room temperature and normal air pressure, the team explains.
The authors hope the new approach can help reduce CO2 production and increase capture efforts. Some bottling plants burn fossil fuels to generate CO2 for fizzy drinks, and some farmers also burn fuels to generate CO2 for greenhouses. The team says the new device can help them get the carbon they need from thin air, while also cleaning the atmosphere. Alternatively, the pure carbon dioxide stream could be compressed and injected underground for long-term disposal, or even made into fuel through a series of chemical and electrochemical processes.
“All of this is at ambient conditions — there’s no need for thermal, pressure, or chemical input,” says Voskian. “It’s just these very thin sheets, with both surfaces active, that can be stacked in a box and connected to a source of electricity.”
Compared to other existing carbon capture technologies, this system is quite energy efficient, using about one gigajoule of energy per ton of carbon dioxide captured. Other existing methods use up to 10 gigajoules per ton, depending on the inlet carbon dioxide concentration, Voskian says.
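For a more familiar unit, those energy figures convert to kilowatt-hours as follows. This is a straightforward unit conversion (1 kWh = 3.6 MJ), not a number from the paper:

```python
# Convert the quoted capture energies from gigajoules to kWh per tonne of CO2.
# 1 kWh = 3.6 MJ, so 1 GJ = 1000 / 3.6 ≈ 277.8 kWh.
GJ_TO_KWH = 1000 / 3.6

electro_swing = 1 * GJ_TO_KWH    # this system: ~1 GJ per tonne captured
conventional = 10 * GJ_TO_KWH    # existing methods: up to ~10 GJ per tonne

print(f"Electro-swing: ~{electro_swing:.0f} kWh per tonne of CO2")
print(f"Conventional (upper bound): ~{conventional:.0f} kWh per tonne of CO2")
```

In other words, capturing a tonne of CO2 with this device costs about as much electricity as a typical household uses in a few weeks, versus several months’ worth for the most energy-hungry conventional methods.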
The paper “Faradaic electro-swing reactive adsorption for CO2 capture” has been published in the journal Energy & Environmental Science.
The early stages of life are critical for healthy development, and any exposure to pollutants or contaminants can be extremely dangerous. Now, a new study has found that black carbon particles can reach the fetal side of the placenta if women are exposed to pollution during pregnancy.
Black carbon is an air pollutant produced by gas and diesel engines, coal-fired power plants, and other sources that burn fossil fuel. It’s basically pure carbon — a component of fine particulate matter (PM ≤ 2.5 µm in aerodynamic diameter) — one of the most dangerous types of pollution.
In 2015 alone, small particulate matter was estimated to have caused 4.2 million deaths worldwide, 202,000 of them children younger than five years. Children are at much higher risk from pollution because their immune systems are not fully developed yet. During the in utero phase, the organism is even more vulnerable to the effects of pollution. This is why the discovery of black carbon in the placenta is so concerning.
The placenta provides oxygen and nutrients to the growing baby. It also helps with removing waste products from the baby’s blood. It’s not clear whether the carbon particles have reached the fetus — it’s quite plausible, but the exact exposure remains to be addressed in future studies.
Tim Nawrot, a researcher working at Hasselt University and Leuven University, wanted to assess whether black carbon can reach the placenta. Along with colleagues, he used high-resolution imaging on placental samples from 28 women, five of whom had given birth pre-term. BC particles were identified in all the sampled placentas. Furthermore, the mothers who had been exposed to higher levels of pollution also had more black carbon particles in their placentas.
“Our results demonstrate that the human placental barrier is not impenetrable for particles. Our observation based on exposure conditions in real-life is in agreement with previously reported ex vivo and in vivo studies studying the placental transfer of various nanoparticles,” the authors write.
“In conclusion, our study provides compelling evidence for the presence of BC particles originating from ambient air pollution in human placenta and suggests the direct fetal exposure to those particles during the most susceptible period of life.”
It’s not the first time the negative effects of pollution on pregnancy have been detailed. Previous research has found that carbon pollution is associated with pre-term births or low birth weights, as well as a swarm of long-term health issues.
“Numerous studies have indisputably demonstrated that particulate inhalation results in health problems far beyond the lungs,” the researchers emphasize in the study.
Image of a carbon-18 ring made with an atomic force microscope. Credit: IBM Research.
Since the 1960s, chemists have been trying to synthesize a ring-shaped molecule made of pure carbon. In a triumph of scanning probe microscopy, researchers at IBM Research Zurich and Oxford University have done just that by bonding an 18-atom ring of carbon — a cyclocarbon.
Many have tried to make cyclocarbons, but in vain — until now
Bonding can make the difference between crumbling graphite and almost indestructible diamond. Both are made of carbon, but the former has carbon bonded to three other carbon atoms in a hexagonal lattice while the latter has carbon bonded with four other atoms in a pyramid-shaped pattern.
Decades ago, scientists — including Nobel Prize-winning chemist Roald Hoffmann — published work that theoretically showed that carbon can form bonds with just two nearby atoms. Each atom could form either a double bond on each side or a triple bond on one side and single bond on the other.
The trouble is that a cyclocarbon molecule is very chemically reactive and hence less stable than graphite or diamond.
In their new study published in Science, researchers led by Przemyslaw Gawel of the University of Oxford first made molecules that included chains of four-carbon squares with oxygen atoms attached to those squares.
Researchers started from a precursor molecule (C24O6) and gradually went through intermediates before reaching the final product — the cyclocarbon (C18). Credit: IBM Research.
At IBM Research in Zurich, the oxygen-carbon molecules were exposed to a layer of sodium chloride in a high-vacuum chamber.
The extra oxygen atoms were then removed, one at a time, using an atomic-force microscope.
Many failed attempts later, the researchers had a micrograph scan in hand showing an 18-carbon structure. The scan revealed that the carbon ring had alternating triple and single bonds, just as one of the theories had predicted. A competing theory had suggested that a cyclocarbon molecule would be made entirely of double bonds.
The alternating bond types suggest that the C-18 rings have semiconducting properties, which could make them useful as components in molecular-sized transistors.
This is still very fundamental research, though. Scientists currently have to make the rings one molecule at a time, so it might be a while before we can find any practical use for cyclocarbons.
“The high reactivity of cyclocarbon and cyclocarbon oxides allows covalent coupling between molecules to be induced by atom manipulation, opening an avenue for the synthesis of other carbon allotropes and carbon-rich materials from the coalescence of cyclocarbon molecules,” the authors concluded.
Wildfires could, surprisingly, act as net carbon traps.
Image via Pixabay.
The charcoal produced by wildfires can keep carbon out of the atmosphere for hundreds of years, new research from Swansea University suggests. The findings will help us better model changes in climate, especially as warmer mean temperatures in the Arctic are leading to an unprecedented outbreak of wildfires and CO2 release in the area.
Burned and buried
Wildfires generate a large quantity of CO2. Generally, however, the gas is re-captured as vegetation grows back, so wildfires are considered to be more or less carbon-neutral once this regrowth process is complete.
“However, in a fire some of the vegetation is not consumed by burning, but instead transformed to charcoal,” explains Dr. Matthew Jones, lead author of the paper who recently joined the University of East Anglia’s (UEA) School of Environmental Sciences from Swansea University.
“This carbon-rich material can be stored in soils and oceans over very long time periods. We have combined field studies, satellite data, and modelling to better quantify the amount of carbon that is placed into storage by fires at the global scale.”
On average, wildfires burn an area roughly equivalent to the size of India every year and emit more carbon dioxide than global road, rail, shipping, and air transport combined, the team explains. Given the increased occurrence of wildfires in the past few years, a trend which will likely pick up in our warmer, drier future, the team set out to quantify how much carbon this charcoal can sequester from the air. All in all, the team says that this charcoal could lock away a considerable amount of carbon for years to come.
Vegetation growing back in burned areas draws on atmospheric CO2 to grow (through photosynthesis). This stage of the fire-recovery cycle takes just under a year for grasslands, but up to several decades in fire-adapted forests. In extreme cases, such as we’re seeing today in the Arctic or in tropical peatlands, full recovery may not occur for centuries. The timing of this recovery is important because the carbon emitted during the fire stays in the atmosphere and contributes to climate heating until plants recapture it as they mature.
Overall, grassland fires don’t have that great an impact; deforestation fires, however, are a particularly important contributor to climate change. Forests produce a lot of emissions as they burn and take a long time to regrow, resulting in a long-term injection of carbon into the atmosphere.
The team explains that the charcoal resulting from forest fires — known as pyrogenic carbon — plays a larger part in mitigating these emissions than we’ve assumed. While they do emit CO2 to the atmosphere, landscape fires also transfer a significant fraction of the carbon locked in the affected vegetation to charcoal and other charred materials. The researchers say the quantity of this pyrogenic carbon is significant enough that it needs to be considered in global fire emission models.
As this material gets covered in soil, it locks carbon in place. Given time for flora to recover, the process actually leads to a net loss of carbon in the atmosphere — which is what we want.
“Our results show that, globally, the production of pyrogenic carbon is equivalent to 12% of CO2 emissions from fires and can be considered a significant buffer for landscape fire emissions,” Dr. Jones said.
“Climate warming is expected to increase the prevalence of wildfires in many regions, particularly in forests. This may lead to an overall increase in atmospheric CO2 emissions from wildfires, but also an increase in pyrogenic carbon storage. If vegetation is allowed to recover naturally then the emitted CO2 will be recaptured by regrowth in future decades, leaving behind an additional stock of pyrogenic carbon in soils, lakes and oceans.”
The pyrogenic carbon will eventually find its way back into the atmosphere as the charcoal degrades, but that takes centuries or even millennia. In the meantime, the carbon it contains has no influence on the climate. It isn’t enough to offset man-made emissions, but every bit helps.
“This brings some good news, although rising CO2 emissions caused by human activity, including deforestation and some peatland fires, continue to pose a serious threat to global climate,” Dr. Jones adds.
The findings showcase the importance of factoring pyrogenic carbon production into future climate models and the global carbon cycle. The team plans to continue researching how the warmer, more drought-prone climate of the future will impact the global extent of wildfires, and to more accurately estimate the proportion of CO2 emissions recaptured by future vegetation regrowth.
The paper “Global fire emissions buffered by the production of pyrogenic carbon” has been published in the journal Nature Geoscience.
New research is trying to give plants stronger, deeper roots to make them scrub more CO2 out of the atmosphere.
Image via Pixabay.
Researchers at the Salk Institute are investigating the molecular mechanisms that govern root growth patterns in plants. Their research aims to patch a big hole in our knowledge — while we understand how plant roots develop, we still have no idea which biochemical mechanisms guide the process. The team, however, reports finding a gene that determines whether roots grow deep or shallow in the soil, and plans to use it to mitigate climate warming.
Deep roots are not reached by the scorch
“We are incredibly excited about this first discovery on the road to realizing the goals of the Harnessing Plants Initiative,” says Associate Professor Wolfgang Busch, senior author on the paper and a member of Salk’s Plant Molecular and Cellular Biology Laboratory and its Integrative Biology Laboratory.
“Reducing atmospheric CO2 levels is one of the great challenges of our time, and it is personally very meaningful to me to be working toward a solution.”
The study came about as part of Salk’s Harnessing Plants Initiative, which aims to grow plants with deeper and more robust roots. These roots, they hope, will store increased amounts of carbon underground for longer periods of time while helping to meaningfully reduce CO2 in the atmosphere.
The researchers used thale cress (Arabidopsis thaliana) as a model plant, working to identify the genes (and gene variants) that regulate auxin. Auxin is a key plant hormone that has been linked to nearly every aspect of plant growth, but its exact effect on the growth patterns of root systems remained unclear. That’s exactly what the team wanted to find out.
“In order to better view the root growth, I developed and optimized a novel method for studying plant root systems in soil,” says first author Takehiko Ogura, a postdoctoral fellow in the Busch lab. “The roots of A. thaliana are incredibly small so they are not easily visible, but by slicing the plant in half we could better observe and measure the root distributions in the soil.”
One gene called EXOCYST70A3, the team reports, seems to be directly responsible for the development of root system architecture. EXOCYST70A3, they explain, controls the plant’s auxin pathways without interfering with other pathways, because it acts on PIN4, a protein that mediates the transport of auxin. When the team chemically altered the EXOCYST70A3 gene, the plant’s root system shifted orientation and grew deeper into the soil.
“Biological systems are incredibly complex, so it can be difficult to connect plants’ molecular mechanisms to an environmental response,” says Ogura. “By linking how this gene influences root behavior, we have revealed an important step in how plants adapt to changing environments through the auxin pathway.”
“We hope to use this knowledge of the auxin pathway as a way to uncover more components that are related to these genes and their effect on root system architecture,” adds Busch. “This will help us create better, more adaptable crop plants, such as soybean and corn, that farmers can grow to produce more food for a growing world population.”
In addition to helping plants scrub CO2 out of the atmosphere, the team hopes that these findings can help other researchers understand how plants adapt to differences between seasons, such as various levels of rainfall. This could also point to new ways to tailor plants to better suit today’s warming, changing climate.
The paper “Root System Depth in Arabidopsis Is Shaped by EXOCYST70A3 via the Dynamic Modulation of Auxin Transport” has been published in the journal Cell.
This month set a record for the highest average CO2 concentration in the atmosphere. Yes, a new one.
Image credits Gerd Altmann.
Atmospheric CO2 levels have continued to rise throughout 2019, according to data published by NOAA and the Scripps Institution of Oceanography earlier today. This May, those levels averaged 414.7 parts per million (ppm), as recorded at NOAA’s Mauna Loa Atmospheric Baseline Observatory.
This value is the highest seasonal peak recorded over 61 years of observations at the Mauna Loa Observatory (the highest in 61 years simply because that’s how long the observatory has been up and running). It also marks the seventh consecutive year of steep increases in atmospheric levels of CO2. The May average of 414.7 ppm is 3.5 ppm higher than the peak recorded in May 2018, and daily readings earlier this month briefly topped 415 ppm. Researchers at NOAA report that this increase is the second-highest annual jump on record.
“It’s critically important to have these accurate, long-term measurements of CO2 in order to understand how quickly fossil fuel pollution is changing our climate,” said Pieter Tans, senior scientist with NOAA’s Global Monitoring Division.
“These are measurements of the real atmosphere. They do not depend on any models, but they help us verify climate model projections, which if anything, have underestimated the rapid pace of climate change being observed.”
Transient daily peaks aside, the monthly average is the truly worrying number. While fluctuations can lead to high but short-lived peaks in CO2, average concentration readings show the larger trend: levels of CO2 in the atmosphere keep increasing year after year, and the rate of increase is accelerating.
Some of the earliest recordings at Mauna Loa found annual increases averaging 0.7 ppm per year. This rate increased to about 1.6 ppm per year during the 1980s and 1.5 ppm per year in the 1990s. Over the last decade, the average growth rate has been 2.2 ppm per year. And, according to Tans (and pretty much every scientist out there), there is no doubt that this rate is increasing because we’re generating more and more emissions.
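To put the 2.2 ppm-per-year figure in perspective, here is a deliberately naive flat-rate extrapolation. This is our illustration, not NOAA’s, and since actual growth is accelerating it should be read as an optimistic lower bound, not a forecast:

```python
# Illustrative only: how long until 450 ppm if growth stayed flat at the
# last decade's average rate? (Actual growth is accelerating.)
level = 414.7    # ppm, May 2019 monthly average at Mauna Loa
rate = 2.2       # ppm per year, average over the last decade

years_to_450 = (450 - level) / rate
print(f"At a flat 2.2 ppm/yr, 450 ppm is ~{years_to_450:.0f} years away")
```

The 450 ppm threshold is often used in climate scenarios as roughly consistent with 2°C of warming; even under this flat-rate assumption, it is less than two decades off.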
The monthly average peaks in May of each year, just before plants start to suck up large quantities of CO2 from the atmosphere during the northern hemisphere growing season. In the northern fall, winter, and early spring, plants and soils give off CO2, which causes levels to rise through May. Charles Keeling was the first to observe this seasonal rise and subsequent fall in CO2 levels embedded within annual increases, a cycle now known as the Keeling Curve.
It’s important to take these measurements at the same time each year so as to control as many variables as possible, making the data useful for establishing reliable trends. The Mauna Loa data, together with measurements from other sampling stations around the world, are collected by NOAA’s Global Greenhouse Gas Reference Network and produce a foundational research dataset for international climate science.
A relatively simple but counterintuitive approach aims to fight climate change — by actually increasing CO2 emissions.
Image via Pixabay.
Fighting climate warming with greenhouse emissions might sound like it won’t work, because it wouldn’t. The team that authored this study, however, doesn’t just aim to increase CO2 levels in the atmosphere. Rather, it proposes that we degrade methane, a much more potent greenhouse gas, into CO2 — the swap, they write, would be a net benefit for world climate.
The study proposes zeolite, a crystalline material that consists primarily of aluminum, silicon, and oxygen, as a key material to help us scrub methane emissions.
The lesser of two evils
“If perfected, this technology could return the atmosphere to pre-industrial concentrations of methane and other gases,” said lead author Rob Jackson, the Michelle and Kevin Douglas Provostial Professor in Earth System Science in Stanford’s School of Earth, Energy & Environmental Sciences.
Much more relevant to the current situation, the team notes, is that this process is also profitable. Boiled down, the idea is to take methane from sources where it’s difficult or expensive to eliminate — from cattle farms or rice paddies, for example — and degrade it into CO2.
Methane concentrations in the atmosphere are almost two-and-a-half times higher today than before the Industrial Revolution, the team explains. There’s a lot less methane than CO2 in the air, granted, but methane is 84 times more potent than CO2 as a climate-warming gas over the first 20 years after its release. Finally, some 60% of atmospheric methane today is directly generated by human activity.
Most climate strategies today focus on CO2, which is understandable. It’s the largest (by quantity) greenhouse gas we emit, and it’s easy to relate to — we breathe out CO2, cars belch out CO2, factories do too, and plants like to munch on it. But scrubbing other greenhouse gases, particularly methane due to its enormous greenhouse effect, could be useful as a complementary approach, the team explains. Furthermore, there’s just so much CO2 already floating around — and we keep pumping it out with such gusto — that CO2-removal scenarios often call for billions of tons to be removed, over decades, which would still not get us to pre-industrial levels.
“An alternative is to offset these emissions via methane removal, so there is no net effect on warming the atmosphere,” said study coauthor Chris Field, the Perry L. McCarty Director of the Stanford Woods Institute for the Environment.
Methane levels could be brought back down to pre-industrial levels by removing about 3.2 billion tons of the gas from the atmosphere, the team notes. Converting all of it into CO2 would be equivalent to a few months of global industrial emissions, which is relatively little, but would have an outsized effect: it would eliminate approximately one-sixth of all causes of global warming to date.
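The “few months” claim can be checked with basic stoichiometry: oxidizing CH4 (molar mass 16 g/mol) to CO2 (molar mass 44 g/mol) multiplies the mass by 44/16 = 2.75. The ~37 Gt/yr figure for current global CO2 emissions is our assumption for the comparison, not a number from the paper:

```python
# Sanity check: converting 3.2 Gt of methane to CO2, compared against
# annual global CO2 emissions (~37 Gt/yr is our assumed figure).
methane_Gt = 3.2
mass_ratio = 44 / 16                          # CO2 molar mass / CH4 molar mass
co2_produced_Gt = methane_Gt * mass_ratio     # ~8.8 Gt of CO2

global_co2_per_year_Gt = 37
months = co2_produced_Gt / global_co2_per_year_Gt * 12
print(f"~{co2_produced_Gt:.1f} Gt CO2 produced, about {months:.1f} months "
      f"of global emissions")
```

The result, roughly three months’ worth of global emissions, matches the team’s characterization, and it makes the trade-off concrete: a small one-time addition of CO2 in exchange for removing a far more potent gas.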
So why didn’t anybody think of this before? Well, the thing is that methane is hard to scrub from the air because its overall concentrations are so low. However, zeolite is really good at capturing the gas thanks to its “porous molecular structure, relatively large surface area and ability to host copper and iron,” explains coauthor Ed Solomon, the Monroe E. Spaght Professor of Chemistry in the School of Humanities and Sciences. The whole process could be as simple as using powerful fans to push air through reactors full of zeolite and catalysts. The material can then be heat-treated to form and release carbon dioxide gas.
Now let’s talk money. If market prices for carbon offsets rise to $500 or more per ton this century as predicted by most relevant assessment models, the team writes, each ton of methane removed from the atmosphere could be worth more than $12,000. A zeolite reactor the size of a football field could thus produce millions of dollars a year in income while removing harmful methane from the air. This is very fortunate as, in my experience, nothing motivates people to care about the environment quite like making money from saving it.
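The $12,000 figure follows directly from the offset price and methane’s global warming potential (GWP). A quick sketch; the GWP value below is an assumed round number for illustration, not one quoted by the study:

```python
# A ton of removed methane is worth the CO2-equivalent offset price times
# methane's global warming potential. GWP is an assumed value here:
# roughly 84 over 20 years, ~25-30 over 100 years.
offset_price_per_ton_co2e = 500   # dollars, the projection cited above
gwp_100yr = 25                    # conservative assumed 100-year GWP

value_per_ton_methane = offset_price_per_ton_co2e * gwp_100yr
print(f"${value_per_ton_methane:,} per ton of methane removed")
```

Even with the conservative 100-year GWP, the result lands above the $12,000-per-ton figure quoted by the team.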
In principle, the researchers add, the approach of converting a more harmful greenhouse gas to one that’s less potent could also apply to other greenhouse gases.
The paper “Methane removal and atmospheric restoration” has been published in the journal Nature Sustainability.
Cool down your home and the climate at the same time.
Image credits Sławomir Kowalewski.
New research from the Karlsruhe Institute of Technology and the University of Toronto wants to put your air conditioning unit to work on fighting climate change. The idea is to outfit air conditioners — devices which move huge amounts of air per day — with carbon-capture technology and electrolyzers, which would turn the gas into fuel.
“Carbon capture equipment could come from a Swiss ‘direct air capture’ company called Climeworks, and the electrolyzers to convert carbon dioxide and water into hydrogen are available from Siemens, Hydrogenics or other companies,” said paper co-author Geoffrey Ozin for Scientific American.
Air-conditioner units are very energy-thirsty. As most of our energy today is derived from fossil fuels, this means that air conditioners can be linked to a sizeable quantity of greenhouse emissions. It’s estimated that, by the end of the century, we’ll be using enough energy on air conditioning to push the average global temperature up by half a degree. Which is pretty ironic.
The team’s idea is pretty simple — what if heating, ventilation, and air conditioning (or HVAC) systems could act as carbon sinks, instead of being net carbon contributors? Carbon-capture devices need to be able to move and process massive quantities of air in order to be effective. HVAC systems already do this, being able to move the entire volume of air in an average office building five to ten times every hour. So they’re ideally suited for one another. The authors propose “retrofitting air conditioning units as integrated, scalable, and renewable-powered devices capable of decentralized CO2 conversion and energy democratization.”
“It would be not that difficult technically to add a CO2 capture functionality to an A/C system,” the authors write, “and an integrated A/C-DAC unit is expected to show favourable economics.”
Modular attachments could be used to add CO2-scrubbing filters to pre-existing HVAC systems. After collection, the CO2 can be converted, together with water, into synthetic hydrocarbon fuels. As Ozin told Scientific American, the required technology is commercially available today.
But, in order to see if it would also be effective, the team used a large office tower in Frankfurt, Germany, as a case study. HVAC systems installed on this building could capture enough CO2 to produce around 600,000 gallons of fuel in a year. They further estimate that installing similar systems on all the city’s buildings could generate in excess of 120 million gallons of (quite wittily-named) “crowd oil” per year.
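As a rough plausibility check of numbers on this scale, here is a Fermi estimate; every input below is an assumed round value for illustration, not a figure from the paper:

```python
# Back-of-the-envelope CO2 throughput for an HVAC-equipped office tower.
# All inputs are illustrative assumptions, not values from the study.
building_volume_m3 = 200_000       # a large office tower
air_changes_per_hour = 5           # low end of the 5-10 range quoted above
co2_fraction = 400e-6              # ~400 ppm CO2 in ambient air, by volume
co2_density_kg_m3 = 1.8            # CO2 density near room temperature
capture_efficiency = 0.5           # assume only half the CO2 is captured

air_m3_per_year = building_volume_m3 * air_changes_per_hour * 24 * 365
co2_kg_per_year = (air_m3_per_year * co2_fraction
                   * co2_density_kg_m3 * capture_efficiency)
print(f"CO2 throughput: {co2_kg_per_year / 1000:.0f} tons/year")
```

These assumptions give on the order of 3,000 tons of CO2 per year; since hydrocarbons retain roughly 14/44 of the CO2 mass, that corresponds to roughly 1,000 tons of fuel, or a few hundred thousand gallons — the same order of magnitude as the study’s estimate.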
“Renewable oil wells, a distributed social technology whereby people in homes, offices, and commercial buildings all around the world collectively harvest renewable electricity and heat and use air conditioning and ventilation systems to capture CO2 and H2O from ambient air, by chemical processes, into renewable synthetic oil — crowd oil — substituting for non-renewable fossil-based oil — a step towards a circular CO2 economy.”
Such an approach would still take a lot of work and polish before it could be implemented on any large scale. One problem is that it would, in effect, turn any HVAC-equipped building into a small, flammable oil refinery. The idea also drew criticism as it could potentially distract people from the actual goal — reducing emission levels.
“The preliminary analysis […] demonstrates the potential of capturing CO2 from air conditioning systems in buildings, for making a substantial amount of liquid hydrocarbon fuel,” the paper reads.
“While the analysis considers the CO2 reduction potential, carbon efficiency and overall energy efficiency, it does not touch on spatial, or economic metrics for the requisite systems. These have to be obtained from a full techno-economic and life cycle analysis of the entire system.”
The paper “Crowd oil not crude oil” has been published in the journal Nature Communications.
As oceans warm up due to climate change, they’ll likely start generating a lot of CO2.
Image via Pixabay.
Despite being the largest carbon sink active today, oceans might become net emitters under warmer climates, a new study reports. The paper reports that warmer oceans lose some of their ability to store carbon, which will accelerate the rate of CO2 regeneration in many areas of the world. This will further reduce the ocean’s ability to store carbon, the authors explain.
Positive carbon loop
“The results are telling us that warming will cause faster recycling of carbon in many areas, and that means less carbon will reach the deep ocean and get stored there,” said study coauthor Robert Anderson, an oceanographer at Columbia University’s Lamont-Doherty Earth Observatory.
Ocean water soaks up roughly 25% of our carbon dioxide emissions year after year. While this process also involves abiotic chemical and physical processes, the lion’s share of that CO2 is gobbled up by plankton through photosynthesis. But all plankton die eventually, and when they do, these tiny organisms sink toward the bottom of the ocean — and the carbon they ‘ate’ goes down with them. It’s estimated that plankton produce around 40 to 50 billion tons of dry, solid organic carbon each year.
Some of this organic matter (and the carbon therein) gets locked into the depths for centuries at a time, but part of it gets consumed by aerobic bacteria before sinking into oxygen-free waters, the team writes. Those bacteria then expel it as carbon dioxide, pushing it back into the atmosphere. Only about 15% of plankton-derived carbon sinks to the bottom of the sea, the authors estimate. They further report that the environmental conditions that allow bacteria to recycle carbon are spreading as water temperatures rise.
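Putting the two figures quoted above together gives a sense of the fluxes involved (a back-of-the-envelope sketch, not a calculation from the paper):

```python
# Scale of the biological pump, using the figures quoted in the text.
plankton_carbon_gt_per_year = 45   # midpoint of the 40-50 billion ton estimate
fraction_reaching_depth = 0.15     # ~15% sinks past the upper ocean, per the study

exported_gt = plankton_carbon_gt_per_year * fraction_reaching_depth
recycled_gt = plankton_carbon_gt_per_year - exported_gt
print(f"Exported to depth: ~{exported_gt:.1f} Gt/yr; "
      f"recycled near the surface: ~{recycled_gt:.1f} Gt/yr")
```

In other words, the vast majority of plankton-derived carbon is recycled before it can be locked away, which is why even small shifts in that 15% figure matter for the climate.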
The team used data from a 2013 research cruise from Peru to Tahiti. They focused on two distinct regions: nutrient-rich, highly productive waters off South America, and the largely infertile bodies of water that form the South Pacific Gyre. Instead of using traditional sampling methods — simple devices that trap particles as they sink — the team pumped large amounts of water from different depths and isolated particles and thorium isotopes. This approach allowed them to calculate the quantity of carbon sinking at different depth intervals, they explain, and much more reliably so — the technique yielded far more data than the traditional traps.
In the upper water layers off South America, the team reports, oxygen gets used up very quickly: it is consumed completely at about 150 meters of depth, halting aerobic activity. Organic matter that reaches this layer (called the oxygen minimum zone, OMZ) will sink to the bottom of the ocean. In the depths, oxygen levels do increase again, and aerobic bacteria start breaking down organic matter. However, any CO2 produced down that far will take centuries to get back into the air via upwelling currents.
The OMZ thus forms a sort of protective cap over any organic matter that sinks past it, according to the team. The common wisdom held that most organic matter produced near the surface makes it through the OMZ, and that most CO2 regeneration takes place in the deep ocean. However, only about 15% of this matter actually sinks past the OMZ, the team shows.
“People did not think that much regeneration was taking place in the shallower zone,” said the study’s lead author, Frank Pavia, a graduate student at Lamont-Doherty. “The fact that it’s happening at all shows that the model totally doesn’t work in the way we thought it did.”
As mean water temperatures in the ocean increase, OMZs will spread both horizontally and vertically, covering larger areas of ocean at shallower depths, the team estimates. At the same time, higher temperatures will drive bacterial activity above the OMZs. On one hand, this would allow more organic matter to sink undegraded into the deep. However, the increased rate of CO2 regeneration near the surface will counteract this increased trapping, the team says. Whether near-surface regeneration or the OMZ’s protective cap will have the stronger effect remains to be investigated, they explain. Either way, this shift in OMZs is definitely not good news, as these zones are unsuitable for most marine life — and the shift will affect many of today’s key fishing areas.
In the South Pacific Gyre, the results were less ambiguous. There is far more regeneration near the warmer surface than previously estimated in this area. The South Pacific Gyre and similar current systems in other parts of the oceans are projected to grow as the oceans warm. The gyres will divide waters into warmer layers (on the surface) and colder ones (deeper down). Because much of the CO2 regeneration will take place in the warm, shallower waters, CO2 regeneration will pick up over wide spans of ocean, the team explains. And, unlike below the nearer-shore OMZs, “there is no counterbalancing effect in the gyres,” said Anderson.
“The story with the gyres is that over wide areas of the ocean, carbon storage is going to get less efficient.” (There are four other major gyres: the north Pacific, the south and north Atlantic, and the Indian Ocean.)
These are only parts of the ocean carbon cycle, the team notes. Abiotic reactions are responsible for significant exchanges of carbon between atmosphere and oceans, and these processes could interact with the biology in complex and unpredictable ways.
“This [the study] gives us information that we didn’t have before, that we can plug into future models to make better estimates,” said Pavia.
The paper “Shallow particulate organic carbon regeneration in the South Pacific Ocean,” has been published in the journal PNAS.
New research says that the Earth’s past ice ages may have been caused by tectonic pile-ups in the tropics.
A crevasse in a glacier. Image via Pixabay.
Our planet has braved three major ice ages in the past 540 million years, seeing global temperatures plummet and ice sheets stretching far beyond the poles. Needless to say, these were quite dramatic events for the planet, so researchers are keen to understand what set them off. A new study reports that plate tectonics might be the culprit.
Cold hard plates
“We think that arc-continent collisions at low latitudes are the trigger for global cooling,” says Oliver Jagoutz, an associate professor in MIT’s Department of Earth, Atmospheric, and Planetary Sciences and a co-author of the new study.
“This could occur over 1-5 million square kilometers, which sounds like a lot. But in reality, it’s a very thin strip of Earth, sitting in the right location, that can change the global climate.”
“Arc-continent collisions” is a term that describes the slow, grinding head-butting that takes place when a piece of oceanic crust hits a continent (i.e. continental crust). Generally speaking, oceanic crust (OC) will slip beneath the continental crust (CC) during such collisions, as the former is denser than the latter. Arc-continent collisions are a mainstay of orogen (mountain range) formation, as they cause the edges of CC plates to ‘wrinkle up’. But in geology, as is often the case in life, things don’t always go according to plan.
The study reports that the last three major ice ages were preceded by arc-continent collisions in the tropics which exposed tens of thousands of kilometers of oceanic, rather than continental, crust to the atmosphere. The heat and humidity of the tropics then likely triggered a chemical reaction between calcium and magnesium minerals in these rocks and carbon dioxide in the air. This would have scrubbed huge quantities of atmospheric CO2 to form carbonate rocks (such as limestone).
Over time, this led to a global cooling of the climate, setting off the ice ages, they add.
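The scale of this rock-weathering sink can be illustrated with the textbook silicate-weathering reaction for a simple calcium silicate such as wollastonite, CaSiO3 + CO2 → CaCO3 + SiO2 — a simplified stand-in for the actual ophiolite mineralogy; the numbers below are standard molar masses, not values from the study:

```python
# Simplified silicate weathering: CaSiO3 + CO2 -> CaCO3 + SiO2.
# One mole of silicate locks away one mole of CO2 as carbonate rock.
M_CASIO3 = 40.08 + 28.09 + 3 * 16.00   # g/mol, wollastonite (Ca + Si + 3 O)
M_CO2 = 12.01 + 2 * 16.00              # g/mol, carbon dioxide

co2_per_ton_rock = M_CO2 / M_CASIO3
print(f"~{co2_per_ton_rock:.2f} tons of CO2 captured per ton of silicate weathered")
```

Under this idealized stoichiometry, each ton of exposed silicate rock can lock away a few tenths of a ton of CO2, which is why tens of thousands of kilometers of freshly exposed oceanic crust can move the needle on the global climate.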
The team tracked the movements of two suture zones (the areas where plates collide) in today’s Himalayan mountains. Both sutures were formed during the same tectonic migrations, they report: one collision 80 million years ago, when the supercontinent Gondwana moved north, creating part of Eurasia, and another 50 million years ago. Both collisions occurred near the equator and preceded global atmospheric cooling events by several million years.
In geological terms, ‘several million years’ is basically the blink of an eye. So, curious to see whether one event caused the other, the team analyzed the rate at which oceanic rocks known as ophiolites can react to CO2 in the tropics. They conclude that, given the location and magnitude of the events that created them, both of the sutures they investigated could have absorbed enough CO2 to cool the atmosphere enough to trigger the subsequent ice ages.
Another interesting find is that the same processes likely led to the end of these ice ages. The fresh oceanic crust progressively lost its ability to scrub CO2 from the air (as the calcium and magnesium minerals transformed into carbonate rocks), allowing the atmosphere to stabilize.
“We showed that this process can start and end glaciation,” Jagoutz says. “Then we wondered, how often does that work? If our hypothesis is correct, we should find that for every time there’s a cooling event, there are a lot of sutures in the tropics.”
The team then expanded their analysis to older ice ages to see whether they were also associated with tropical arc-continent collisions. After compiling the locations of major suture zones on Earth from pre-existing literature, they reconstructed the movement of these zones, and of the plates that generated them, over time using computer simulations.
All in all, the team found three periods over the last 540 million years in which major suture zones (those about 10,000 kilometers in length) formed in the tropics. Their formation coincided with three major ice ages, they add: one in the Late Ordovician (455 to 440 million years ago), one in the Permo-Carboniferous (335 to 280 million years ago), and one in the Cenozoic (35 million years ago to present day). This wasn’t a happy coincidence, either. The team explains that no ice ages or glaciation events occurred during periods when major suture zones formed outside of the tropics.
“We found that every time there was a peak in the suture zone in the tropics, there was a glaciation event,” Jagoutz says. “So every time you get, say, 10,000 kilometers of sutures in the tropics, you get an ice age.”
Jagoutz notes that there is a major suture zone active today in Indonesia. It includes some of the largest bodies of ophiolite rocks in the world today, and Jagoutz says it may prove to be an important resource for absorbing carbon dioxide. The team says that the findings lend some weight to current proposals to grind up these ophiolites in massive quantities and spread them along the equatorial belt in an effort to counteract our CO2 emissions. However, they also point to how such efforts may, in fact, produce additional carbon emissions — and also suggest that such measures may simply take too long to produce results within our lifetimes.
“It’s a challenge to make this process work on human timescales,” Jagoutz says. “The Earth does this in a slow, geological process that has nothing to do with what we do to the Earth today. And it will neither harm us, nor save us.”
The paper “Arc-continent collisions in the tropics set Earth’s climate state” has been published in the journal Science.
Instead of burning coal and releasing CO2, new research plans to absorb CO2 and produce coal.
Image via Pixabay.
A new breakthrough could allow us to burn our coal and have it, too. Researchers from Australia, Germany, China, and the US have worked together to develop a carbon storage method that can turn CO2 gas into solid carbon particles with high efficiency. Their approach could help us scrub the atmosphere of (some of) the greenhouse emissions we produce — with a certain dash of style.
“While we can’t literally turn back time, turning carbon dioxide back into coal and burying it back in the ground is a bit like rewinding the emissions clock,” says Torben Daeneke, an Australian Research Council DECRA Fellow and paper co-author.
The idea of permanently removing CO2 from the atmosphere isn’t new — in fact, it’s heavily considered as a solution to our self-induced climate woes. We’ve developed several ways to go about it, but they simply aren’t viable yet. Current carbon capture technologies turn the gas into a liquid form, which is then carted away to be injected underground. However, the process requires high temperatures (which means high costs) and there are environmental concerns regarding possible leaks from storage sites.
The team’s approach, however, relies on an electrochemical technique to capture atmospheric CO2 and turn it into solid, easy to store carbon.
“To date, CO2 has only been converted into a solid at extremely high temperatures, making it industrially unviable,” Daeneke explains. “By using liquid metals as a catalyst, we’ve shown it’s possible to turn the gas back into carbon at room temperature, in a process that’s efficient and scalable.”
“While more research needs to be done, it’s a crucial first step to delivering solid storage of carbon.”
The liquid metal catalyst, which incorporates cerium (Ce), has surface properties that make it a very good electrical conductor — the applied current also chemically activates the catalyst’s surface.
Schematic of the catalytic process. Image credits Dorna Esrafilzadeh, (2019), Nature.
The whole process starts with the team dissolving carbon dioxide gas into a beaker containing an electrolyte liquid and a small quantity of the liquid metal. When charged with electrical current, the catalyst slowly converts the CO2 into solid flakes of carbon, which form on its surface and promptly detach, so the process can be maintained indefinitely.
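For a sense of the electrochemistry involved, Faraday’s law gives the minimum charge needed to reduce CO2 to elemental carbon (four electrons per carbon atom). This is an illustrative lower bound, not a figure from the paper — real cells need extra energy for overpotentials and side reactions:

```python
# Minimum charge to reduce 1 kg of CO2 to solid carbon (CO2 + 4e- -> C + ...).
# Illustrative lower bound only.
FARADAY = 96_485           # coulombs per mole of electrons
M_CO2 = 44.0               # g/mol
ELECTRONS_PER_CO2 = 4      # carbon goes from +4 in CO2 to 0 in solid carbon

moles_co2 = 1000 / M_CO2
charge_coulombs = moles_co2 * ELECTRONS_PER_CO2 * FARADAY
amp_hours = charge_coulombs / 3600
print(f"~{amp_hours:.0f} Ah of charge per kg of CO2, at minimum")
```

Each kilogram of CO2 thus demands thousands of amp-hours of charge at an absolute minimum, which is why the catalyst’s efficiency at low (room-temperature) operating conditions matters so much for scalability.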
“A side benefit of the process is that the carbon can hold electrical charge, becoming a supercapacitor, so it could potentially be used as a component in future vehicles,” says Dr Dorna Esrafilzadeh, a Vice-Chancellor’s Research Fellow in RMIT’s School of Engineering and the paper’s lead author.
“The process also produces synthetic fuel as a by-product, which could also have industrial applications.”
The paper “Room temperature CO2 reduction to solid carbon species on liquid metals featuring atomically thin ceria interfaces” has been published in the journal Nature.
The global triangle of obesity, undernutrition, and climate change represents ‘The Global Syndemic’ — the greatest threat to human and planetary health, researchers say. The underlying causes of this syndemic are commercial vested interests, lack of political leadership, and insufficient societal demand for change.
Image in public domain.
A Global Syndemic
“Syndemic” is not a word you hear very often — and you most definitely don’t want to hear. It represents an aggregation of two or more epidemics or diseases which exacerbate the total damage. The ‘Global Syndemic’ refers to the devastating combination of obesity, undernutrition, and climate change — which a new report published in the Lancet identifies as the single largest threat to mankind and Earth.
Excess body weight is estimated to affect 2 billion people worldwide, causing 4 million deaths every year. At the same time, stunting and wasting affect 155 million and 52 million children worldwide, respectively; 2 billion people suffer from a micronutrient deficiency, and 815 million people are chronically undernourished. Malnutrition is the single biggest cause of ill-health globally. Climate change is already affecting the lives of most people on Earth, with devastating consequences. Even from a purely economic standpoint, these issues are estimated to cost in excess of 20% of global GDP — and from a humanitarian perspective, they are an unmitigated disaster.
The first thing we must change, the Lancet Commission on Obesity argues, is our perspective. These three issues are generally regarded as separate — but they share a common backbone: a global policy focusing on economic growth, ignoring negative health effects, environmental damage, and social inequality.
“Until now, undernutrition and obesity have been seen as polar opposites of either too few or too many calories. In reality, they are both driven by the same unhealthy, inequitable food systems, underpinned by the same political economy that is single-focused on economic growth, and ignores the negative health and equity outcomes. Climate change has the same story of profits and power ignoring the environmental damage caused by current food systems, transportation, urban design and land use. Joining the three pandemics together as The Global Syndemic allows us to consider common drivers and shared solutions, with the aim of breaking decades of policy inertia,” says Commission co-chair, Professor Boyd Swinburn of the University of Auckland.
The effects of these issues are also intertwined. For instance, climate change will disproportionately affect the underdeveloped parts of the world, bringing even more food insecurity and extreme weather events. Fetal and infant malnutrition has been shown to increase the risk of adult obesity, and climate change raises the price of numerous food commodities, especially the fruits and vegetables that could help fight global obesity. Overall, the three issues form a connected triangle, each making the others worse, much as co-occurring diseases can aggravate one another by weakening the immune system.
The solutions, therefore, must also act on all these issues in conjunction.
“We must recognise these connections and implement double-duty actions that address both obesity and undernutrition and triple-duty actions that influence multiple parts of the syndemic simultaneously,” says Commissioner Professor Corinna Hawkes, City University London (UK).
It sounds weird to say that measures against obesity would also fight climate change (and vice versa), but here’s a very simple example: what if we were to tax red meat? Red meat requires a disproportionate amount of resources to produce, generates a huge amount of greenhouse gases, and is, at the same time, a major contributor to the global obesity crisis. The tax money could be used to alleviate world hunger or promote healthier and more sustainable alternatives. Supporting active transportation in the form of walking, cycling, or using public transportation is another excellent example: this could reduce some of the greenhouse gases coming from transportation, while at the same time making people healthier and alleviating infrastructure strain as a bonus.
So why aren’t we doing more of this?
The reason, the report explains, is shockingly straightforward: powerful vested interests oppose it. It’s very rare to see a scientific report being so trenchant about something so delicate, and that’s exactly what makes it so important to heed its warning.
It’s well known that major fossil fuel companies have denied climate change for decades, even though they knew it was happening as a direct result of their activities. At the same time, subsidies from the US government keep the price of oil artificially low — subsidies which would be better diverted towards more sustainable forms of energy. Attempts to include sustainability in national dietary guidelines in the USA and Australia failed as a result of corporate lobbying from the food industry, which pushed to remove sustainability from the terms of reference. Lobbying from the sugary drinks industry has also been very successful against local initiatives to reduce soda consumption, and research funded by this industry is five times less likely to find an association between sugary drinks and obesity than independently funded studies.
All in all, it seems that this financial and market power of the world’s major companies translates into political power, preventing regulation that would be beneficial for people.
“With market power comes political power, and even willing governments struggle to get policies implemented against industry pressure. New governance dynamics are needed to break the policy inertia preventing action. Governments need to regain the power to act in the interests of people and the planet and global treaties help to achieve this. Vested commercial interests need to be excluded from the policy table, and civil society needs to have a stronger voice in policy-making. Without disruptive change like these, we will continue on with the status quo which is driving The Global Syndemic,” says Commissioner Tim Lobstein, World Obesity Federation.
What should be done
Researchers are calling for a new worldwide social movement — which, again, is highly unusual for a scientific report. Lobstein and colleagues say we need to radically rethink the relationship between the important players: policymakers, business, governance, and civil society. Since business is a main driver of this situation, and governments and policymakers seem content with the status quo, the only potential source of change left is civil society. Effectively, all of the strategies that could fight this syndemic require broader support from all of us.
Not only do we need to make better individual decisions when it comes to our own lifestyles, but we need to push policymakers in the right direction and encourage them to make more sustainable decisions. Supporting businesses which take steps in the right direction is also important, as is not supporting the ones that don’t.
This type of social mobilization can work. We’ve seen the US administration announce its intention to withdraw from the Paris Agreement — itself seen by many as not ambitious enough — so political deals can be surprisingly fragile. But even in this situation, 2,700 leaders from US cities, states, and businesses, representing 159 million people and US$6.2 trillion in GDP, have formed an alliance and continued to mitigate the effects of climate change. In Mexico and the UK, mobilization against sugary drinks has led to the implementation of a tax, despite strong resistance from the industry.
Businesses don’t need to be on the losing side of this, either. Of course, things like sugar, which carry a clear social cost (a negative externality), should be taxed; if anything, it is surprising that this hasn’t happened already. Similarly, the world’s leading economists are advocating for a carbon tax. But this all opens up new avenues for sustainable business models, the likes of which can turn a profit without doing environmental and health damage. Furthermore, the incentive is to make the switch as early as possible; but again, the drive also needs to come from the social level — from each and every one of us.
“The past few years have seen renewed activism at the local level, whether in cities, communities, or in particular issues. As with other social movements, such as campaigns to introduce sugary drink taxes, efforts to address the Global Syndemic are more likely to begin at the community, city, or state level, and subsequently build to a national or global level. Support for civil society is crucial to break the policy deadlock and the systems driving the Global Syndemic,” concludes Professor Dietz.
Scientists have gained a deeper understanding of the global carbon cycle, finding that much of the planet’s carbon dioxide is stored deep beneath soils — with important implications for climate change.
From the time we’re kids, we’re taught about natural cycles — the most common two being the water and the carbon cycles. We’re taught that there’s a balance in these cycles, preventing the Earth’s carbon from all being released into the atmosphere or all being absorbed into the water and rocks.
In this period of our planet’s history, this balance is perturbed by the industrial activities of mankind. The basic process is extremely simple: we’re outputting too much carbon dioxide, at a much faster rate than it can be absorbed through natural processes. This process is well-documented, and its effects are also clearly severe, though intricacies and details remain less understood.
For instance, the influence of soils remains somewhat unclear.
“We know less about the soils on Earth than we do about the surface of Mars,” said Marc Kramer, an associate professor of environmental chemistry at WSU Vancouver, whose work appears in the journal Nature Climate Change. “Before we can start thinking about storing carbon in the ground, we need to actually understand how it gets there and how likely it is to stick around. This finding highlights a major breakthrough in our understanding.”
A simple representation of the carbon cycle.
Kramer and colleagues conducted the first global-scale evaluation of the role soil plays in storing carbon. They analyzed soils and climate data from the Americas, New Caledonia, Indonesia, and Europe, drawing from more than 65 sites sampled to a depth of six feet by the National Science Foundation-funded National Ecological Observatory Network. In particular, they focused on how carbon is dissolved into soils, and on which minerals help store it.
This let them develop a map of carbon accumulation, and gain a better understanding of the pathway which leads carbon to be trapped in these soils. Spoiler alert: there are few reasons for optimism.
The good news is that, according to this estimate, soils currently store about 600 billion tons of carbon (roughly twice mankind’s output since the Industrial Revolution). The bad news is that if temperatures continue to rise, the amount of carbon soils can store could drop severely. This would happen because water is the main mechanism through which carbon gets dissolved into soils, and even if rainfall remains unchanged, higher temperatures mean less water penetrates the soil. This also helps explain why wet soils store more carbon than dry ones.
Researchers also found that deeper soils store a surprising amount of carbon, and the storage pathway is largely similar. So while carbon stored in the deeper parts of the soil won’t be directly affected by rising temperatures, the pathway through which it gets there will be: essentially, this pathway relies on water to leach carbon from roots, fallen leaves, and other organic matter, and transport it into the deeper layers, where it remains trapped. Simply put, if there’s less water, there’s less stored carbon.
Generally speaking, wet forests tend to be the most productive environments for carbon storage, as they build thick layers of organic matter from which water leaches carbon and transports it to minerals as much as six feet below the surface.
“This is one of the most persistent mechanisms that we know of for how carbon accumulates,” Kramer said.
This isn’t the first study to sound the alarm regarding soils’ role in carbon levels. Two years ago, another study found that the ability of soils to absorb carbon has been dramatically overestimated, and just a few months ago, soil erosion was highlighted as a potential source of additional carbon release into the atmosphere.
The study has been published in Nature Climate Change.