Researchers at Johns Hopkins University have completed a new model of Saturn’s interior, which hints at a thick layer of helium rain that modulates the gas giant’s magnetic field.
The so-called ‘gas giants’ are notoriously hard to peer into, and they remain some of the most mysterious planets out there. Given the extreme environments they represent, it’s likely going to be a while before this changes, and an even longer while before any astronauts can actually go see for themselves.
That doesn’t mean we can’t draw some conclusions based on what we do know, however. And a team from Johns Hopkins University did just that, creating a new digital model looking into Saturn’s interior. This model hints at a temperature difference in the helium rain layer between the planet’s equator (where it is hotter) and the poles (where it gets colder).
“By studying how Saturn formed and how it evolved over time, we can learn a lot about the formation of other planets similar to Saturn within our own solar system, as well as beyond it,” said co-author Sabine Stanley, a Johns Hopkins planetary physicist.
“One thing we discovered was how sensitive the model was to very specific things like temperature,” she adds. “And that means we have a really interesting probe of Saturn’s deep interior as far as 20,000 kilometers down. It’s a kind of X-ray vision.”
Saturn is unique among the gas giants in that its magnetic field is almost perfectly symmetrical around its axis. Since magnetic fields are generated by structures inside a planet’s body, this tidbit could help us glean some information about Saturn’s interior layout.
Using data recorded by NASA’s Cassini mission, researchers at Johns Hopkins University created detailed computer simulations using software typically employed for weather and climate simulations. The models indicate that there is a heat gradient in Saturn’s interior, with higher temperatures towards the equator. Overall, this could point to the existence of a layer of liquid helium around the planet’s core.
This structure creates a dynamo-like mechanism, which goes on to produce the striking magnetic field recorded around Saturn. On Earth, the molten iron alloy churning in the planet’s outer core plays the role of dynamo. Gas giants were expected to rely on a different structure to create their magnetic fields, given their different chemical composition and extreme mass, but this is the first study to actually pinpoint a candidate structure for this role.
Apart from this, the simulations also suggest that a certain level of non-axisymmetry could be present near Saturn’s north and south poles.
“Even though the observations we have from Saturn look perfectly symmetrical, in our computer simulations we can fully interrogate the field,” said Stanley.
Naturally, until we can put a person on Saturn to check, we can’t confirm these findings. Until then, models will have to suffice.
The paper “Recipe for a Saturn‐Like Dynamo” has been published in the journal AGU Advances.
Pregnant women have a higher risk of having low birth weight babies if they live close to active oil and gas wells, especially in rural areas, according to a new study in California. The findings add to previous studies that had already warned over the impacts of living near fossil fuel extraction sites.
The study, which is one of the largest of its kind, looked at the medical records of nearly three million births by moms living within 6.2 miles (10 kilometers) of at least one oil or gas well between 2006 and 2015. The researchers targeted births in both rural and urban areas, as well as pregnant women living near both active and inactive oil and gas sites.
According to the findings, pregnant women who lived in rural areas within 0.62 miles (1 kilometer) of the highest producing wells were 40% more likely to birth underweight babies and 20% more likely to have babies who were small for their gestational age, compared to people living farther away from wells or near inactive wells only.
Even among term births, babies were 1.3 ounces (36 grams) lighter, on average, than those of their counterparts. Newborns are considered to have low birth weight when they weigh less than 5 lb 8 oz (about 2.5 kilograms). Low birth weight can cause a wide array of short-term development issues.
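As a quick sanity check on the figures above, the low-birth-weight cutoff and the 36-gram deficit convert cleanly between imperial and metric units (a minimal sketch; the helper name is just for illustration):

```python
# Unit sanity check for the birth-weight figures quoted above.
OZ_PER_LB = 16
GRAMS_PER_OZ = 28.349523125  # definition of the avoirdupois ounce

def lb_oz_to_grams(pounds: int, ounces: int) -> float:
    """Convert a pounds-and-ounces weight to grams."""
    return (pounds * OZ_PER_LB + ounces) * GRAMS_PER_OZ

print(round(lb_oz_to_grams(5, 8)))   # 2495 grams, i.e. about 2.5 kg
print(round(36 / GRAMS_PER_OZ, 2))   # 1.27, i.e. the ~1.3 oz deficit
```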
“Being born of low birth weight or small for gestational age can affect the development of newborns and increase their risk of health problems in early childhood and even into adulthood,” Rachel Morello-Frosch, a professor at the University of California, Berkeley, and senior author of the paper, said in a statement.
Morello-Frosch and her team also found a link between living in close proximity to oil and gas wells and smaller babies born in urban areas. However, the association was much weaker than in rural communities, something the researchers attribute to differences in air quality, maternal occupation, and housing conditions.
The findings add to a growing body of evidence linking proximity to oil and gas wells to a variety of adverse birth outcomes such as premature birth, heart defects, and low birth weight. Oil and gas production has been on the rise in the US in recent years due to the expansion of non-conventional techniques like fracking.
Fracking is a method of extracting oil and gas trapped in shale and other rock formations. It involves pumping large amounts of water down a well at high pressure, along with sand and chemicals that make up a tiny fraction of the volume. The technique transformed the US energy landscape, although California hasn’t seen as much change as other states.
In California, where the study was carried out, oil production has declined over the past three decades. Last year, Governor Gavin Newsom issued stricter rules for companies to obtain fracking permits. There are now 282 fracking permits waiting for review in the state.
“This study is the first to characterize the implications for perinatal health of active oil and gas production in the state, and I think the results can inform decision-making in regulatory enforcement and permitting activities,” Morello-Frosch said. “Results from health studies such as ours support recent efforts to increase buffers between active well activities and where people live, go to school and play.”
More than 30 years ago, NASA’s Voyager 2 spacecraft flew past Uranus, coming as close as 50,600 miles (about 81,400 kilometers) to the planet’s clouds.
The data it sent back revealed new rings and moons. But there was another finding as well, one which remained hidden for a long time.
A team of NASA researchers took a new look at the spacecraft’s data and discovered that Voyager 2 had passed through a gigantic magnetic bubble, also called a plasmoid: a giant structure composed of plasma and the planet’s magnetic field.
Space physicists Gina DiBraccio and Dan Gershman, both from NASA’s Goddard Space Flight Center, reviewed the Uranus data because they wanted to understand its strange behavior. “The structure, the way that it moves …,” DiBraccio said, “Uranus is really on its own.”
Unlike any other planet in our solar system, Uranus spins almost perfectly on its side, like a rolling barrel. Its magnetic axis is tilted 60 degrees away from its axis of rotation, making its magnetosphere wobble chaotically as the planet rotates.
The researchers downloaded the readings from Voyager 2’s magnetometer, which monitored the strength and direction of Uranus’ magnetic field as the spacecraft flew past the planet. They examined the data at a much finer resolution than previous studies, down to a reading every 1.92 seconds.
Everything seemed ordinary, but the magnetometer marked a kind of zigzag at one point during its travels. The signal corresponded to a huge bubble of electrified gas: a cylindrical plasmoid at least 204,000 kilometers long and up to 400,000 kilometers wide.
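To get a feel for the scale involved, we can treat the plasmoid as a cylinder with the dimensions quoted above. Since those are “at least” and “up to” figures, this is an order-of-magnitude illustration only; the comparison value for Uranus’ volume is the standard figure of roughly 6.83 × 10¹³ km³:

```python
import math

# Back-of-the-envelope volume of the plasmoid, modeled as a cylinder
# using the dimensions quoted above (mixing "at least" and "up to"
# figures, so order-of-magnitude only).
length_km = 204_000           # at least 204,000 km long
radius_km = 400_000 / 2       # up to 400,000 km wide

plasmoid_volume_km3 = math.pi * radius_km**2 * length_km
uranus_volume_km3 = 6.833e13  # approximate volume of Uranus

print(f"{plasmoid_volume_km3:.2e} km^3")               # ~2.56e+16 km^3
print(round(plasmoid_volume_km3 / uranus_volume_km3))  # hundreds of Uranus volumes
```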
Plasmoids are recognized as an important way for planets to lose mass. They pinch off from the magnetotail, the part of a planet’s magnetic field that is blown back by the solar wind. The phenomenon had been observed at Earth and other planets, but never at Uranus.
Over time, plasmoids escaping into space drain ions from a planet’s atmosphere, significantly altering its composition. In the case of Mars, the process ended up transforming the planet radically: it went from a humid world with a thick atmosphere to the dry one we see today.
It’s not clear yet how Uranus’ atmospheric escape has affected the planet thus far, as scientists only got a tiny glimpse at this process. But the new discovery can help get some answers. “It’s why I love planetary science,” DiBraccio said. “You’re always going somewhere you don’t really know.”
A new study on the decommissioning of coal-fired power plants in the continental United States gauges the health and agricultural benefits it has generated for local communities.
Coal-fired power plants are, unsurprisingly, quite dirty. Coal burning is particularly problematic as it generates particulate matter and ozone (which together form smog) in the lower atmosphere. These compounds can affect the health of humans, wildlife, and plant life, and impact regional climate patterns by blocking incoming sunlight.
Jennifer Burney, Associate Professor of Environmental Science at the UC San Diego School of Global Policy and Strategy, looked into the benefits associated with the decommissioning of such plants. Between 2005 and 2016, she estimates, these decommissions saved over 26,000 lives in their immediate vicinities in the continental US and helped improve local crop yields.
Coal — still dirty
“We hear a lot about the overall greenhouse gas and economic impacts of the transition the U.S. has undergone in shifting from coal towards natural gas, but the smaller-scale decisions that make up this larger trend have really important local consequences,” Burney said.
“The unique contribution of this study is its scope and the ability to connect discrete technology changes — like an electric power unit being shut down — to local health, agriculture and regional climate impacts.”
The transition from coal towards natural gas has definitely helped reduce CO2 emissions overall, Burney explains, and has helped lower local pollution levels in hundreds of areas. In order to quantify these changes, she combined data on electricity generation from the Environmental Protection Agency (EPA) with ground-level and satellite pollution measurements from the EPA and NASA to see how coal-fired plant decommissioning affected local chemistry. She also factored in county-level mortality rates and crop yields from the Centers for Disease Control and the U.S. Department of Agriculture for the same areas.
Between 2005 and 2016, she estimates, the shutdowns avoided 26,610 deaths and the loss of 570 million bushels of corn, soybeans, and wheat in the immediate vicinities of the decommissioned plants, thanks to lower pollution levels. From this figure, she calculated that coal plants still in operation in the US over the same timeframe contributed to 329,417 premature deaths and the loss of 10.2 billion bushels of the same crops (roughly half of a typical year’s harvest in the US).
All this being said, however, gas-fired plants aren’t completely benign, Burney adds. Even new natural gas units are associated with increased levels of local pollution, but of a different make-up than that released by coal-fired plants.
“Policymakers often think about greenhouse gas emissions as a separate problem from air pollution, but the same processes that cause climate change also produce these aerosols, ozone, and other compounds that cause important damages,” Burney concludes.
“This study provides a more robust accounting for the full suite of emissions associated with electric power production. If we understand the real costs of things like coal better, and who is bearing those costs, it could potentially lead to more effective mitigation and formation of new coalitions of beneficiaries across sectors.”
The paper “The downstream air pollution impacts of the transition from coal to natural gas in the United States” has been published in the journal Nature Sustainability.
Less coal use translates to dramatic reductions in water usage, reports a team from Duke University.
The gradual transition from coal to natural gas and renewable energy in the U.S. is dramatically reducing the use of water in the energy industry. Furthermore, these overall savings in both water consumption and water withdrawal have been seen during a period where fracking and shale gas production have intensified their use of water.
“While most attention has been focused on the climate and air quality benefits of switching from coal, this new study shows that the transition to natural gas — and even more so, to renewable energy sources — has resulted in saving billions of gallons of water,” said Avner Vengosh, Professor of Geochemistry and Water Quality at Duke’s Nicholas School of the Environment.
The team estimates that for every megawatt of electricity produced from natural gas instead of coal, the energy industry withdraws 10,500 fewer gallons of water from the environment (rivers and groundwater). That is equivalent to a 100-day water supply for a typical American household, according to the team. At the same time, water consumption (which is water used by a power plant but not returned to the environment) drops by 260 gallons per megawatt.
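The household equivalence quoted above can be unpacked directly: dividing the per-unit withdrawal savings by the 100-day figure gives the implied daily household water use (both numbers straight from the article):

```python
# Back out the implied daily household water use from the article's figures.
withdrawal_saved_gal = 10_500   # gallons of withdrawal avoided per unit generated
household_supply_days = 100     # the team's household-supply equivalence

gal_per_household_day = withdrawal_saved_gal / household_supply_days
print(gal_per_household_day)    # 105.0 gallons per household per day
```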
If these figures remain steady, and with coal being slowly phased out in favor of fuels such as shale gas over the next decade, the team estimates that the energy industry will save up to 483 billion cubic meters of water per year by 2030. If all of today’s coal-fired plants switched to natural gas, yearly savings would reach 12,250 billion gallons, over two-and-a-half times the quantity of water that United States industry uses annually.
So where do the savings come from? Coal mining and fracking use roughly the same quantities of water, the team explains, but natural gas power plants use much less of it than coal plants. The difference mostly comes down to their cooling systems. Since around 40% of all water use in the U.S. today goes to cooling thermoelectric plants, individual reductions stack up to huge overall savings, Vengosh explains.
“The amount of water used for cooling thermoelectric plants eclipses all its other uses in the electricity sector, including for coal mining, coal washing, ore and gas transportation, drilling and fracking,” he said.
However, compared to gas and coal, solar and wind use virtually no water. The study showed that the water intensity (i.e. overall water use throughout their lifecycle) of these renewable sources is only 1% to 2% of that of coal or gas (as measured by water use per kilowatt of generated energy). In other words, a substantial shift to solar and wind would eliminate “much” of the water withdrawal and consumption for energy generation in the U.S.
Natural gas overtook coal as the primary fossil fuel for electricity generation in the United States in 2015, mainly due to the rise of unconventional shale gas exploration (fracking). It made up 35.1% of U.S. electricity in 2018, while wind and solar accounted for 6.5% and 2.3%, respectively. Coal-fired plants generated 27.4% of U.S. electricity in 2018.
The paper “Quantification of the water-use reduction associated with the transition from coal to natural gas in the U.S. electricity sector” has been published in the journal Environmental Research Letters.
Planets come in all sha… planets come in various sizes. But, some of the most striking characteristics that set them apart are their physical and chemical particularities, which we use to categorize the myriad of planets we’ve found in space.
I like planets. I like them so much I live on one. They’re heavy enough for gravity to make them round, their orbits are clear of debris, and they don’t burn like stars do. But, there’s a lot of variation in what they are and the experience they offer.
So, today, I thought it would be exciting to look at all the different types of planets — some of which we’ve seen in the great expanse of space, some of which we’re only expecting to find. In no particular order, they are:
A star is a delicate system where gravity compresses and heats everything up while the nuclear fusion at its core pushes outwards. With too much pressure, electrons can’t move freely, so the reaction stops. With too much ‘boom’, there’s not enough pressure to keep the reaction going.
Teetering on the edge of starhood, brown dwarfs have outgrown any definition of a ‘planet’. Yet they’re just not quite stars. Ranging from 13 to 80 times the mass of Jupiter, brown dwarfs are immense embers barreling through space, fusing deuterium and lithium to keep themselves slightly alight. They would need still more mass, however, for gravity to compress and heat their cores enough to ignite true hydrogen fusion.
Brown dwarfs aren’t planets. They don’t form like planets — they form like stars. Instead of material slowly clumping together, brown dwarfs are born from clouds of gas collapsing in on themselves.
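The mass thresholds above — roughly 13 and 80 Jupiter masses — amount to a simple classification rule. Here is a toy sketch of it, with the cutoffs taken straight from the text (they are approximate values, not sharp physical boundaries):

```python
# Toy classifier based on the approximate mass cutoffs quoted above:
# below ~13 Jupiter masses there is no deuterium fusion (planet),
# between ~13 and ~80 only deuterium/lithium fusion (brown dwarf),
# and above ~80 sustained hydrogen fusion (star).
DEUTERIUM_LIMIT_MJ = 13   # in Jupiter masses
HYDROGEN_LIMIT_MJ = 80

def classify(mass_mj: float) -> str:
    """Classify an object by its mass in Jupiter masses."""
    if mass_mj < DEUTERIUM_LIMIT_MJ:
        return "planet"
    if mass_mj < HYDROGEN_LIMIT_MJ:
        return "brown dwarf"
    return "star"

print(classify(1))      # planet (Jupiter itself)
print(classify(40))     # brown dwarf
print(classify(1048))   # star (roughly one solar mass)
```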
The chonk de la chonk, gas giants are the largest planets to ever dot the universe. They are composed primarily (>90%) of hydrogen and helium (the two simplest elements in the periodic table) with traces of other compounds thrown in for good measure. Hydrogen and helium give these planets an overall brown-yellow-ocher palette, with water and ammonia clouds peppering their highest layers white. Owing to the nature of their bodies, these giants are blanketed by wild storms and furious winds.
We don’t know much about their cores, only that conditions there must be immensely hot (around 20,000 K) and intensely pressurized. The main hypotheses hold that gas giants have either molten rocky cores surrounded by roiling oceans of gas, diamond cores, or cores of super-pressurized (metallic) hydrogen.
They are sometimes called ‘failed stars’ because hydrogen and helium keep stars running, but gas giants don’t have enough mass to spark nuclear fusion. We have two of them in the solar system, Jupiter and Saturn.
Most exoplanets we’ve found so far are gas giants — just because they’re huge and easier to spot.
Very similar to gas giants but won’t return your texts. Ice giants are believed to swap out hydrogen and helium (under 10% by weight) in favor of oxygen, carbon, nitrogen, and sulfur, which are heavier. Boiled down, we don’t really know what elements these planets are made of — their (admittedly thin) hydrogen envelopes hide the interior of the planets, so we can’t just go and check. This outer layer is believed to closely resemble the nature of gas giants.
Still, it is believed that, while not entirely made of the ice we know and love here on Earth exactly, there is water and water ice in their make-up. They get their name from the fact that most of their constituent matter was solid as the planets were forming, and because planetary scientists refer to elements with freezing points above about 100 K (such as water, ammonia, or methane) as “ices”.
Ice giants are, as per their name, quite gigantic, but they tend to be smaller than gas giants. Owing to their much denser make-up, however, they pack more mass into a given volume. There are two ice giants in our solar system, Uranus and Neptune. Water, in the form of a supercritical ocean beneath their clouds, is believed to account for roughly two-thirds of their total mass.
Both ice giants and gas giants have primary atmospheres. The gas they’re made from was accreted (captured) as the planets were forming.
Also known as terrestrial or telluric planets (from the Latin word for Earth), they are formed primarily of rock and metal. Their main feature is that they have a solid surface. Mercury, Venus, Earth, and Mars, the first four from the Sun, are the rocky planets of our solar system.
To the best of our knowledge, rocky planets form around a metallic core, although the hypothesis of coreless planets has been floated.
Atmospheres, if they have one, are secondary — formed from captured comets or created via volcanic or biological activity. Rocky planets also form primary atmospheres but fail to retain them. Secondary atmospheres are much thinner and more pleasant than those of Saturn or Uranus. That’s not to say a secondary atmosphere can’t influence its planet: Venus’s rampant climate disaster is a great example.
Mercury, with a metallic core making up 60–70% of its planetary mass, is as close as we’ve found to an iron planet; true iron planets, along with the much more bling carbon planets, remain hypothetical. Another exciting and cool-named hypothetical class of rocky planets is the Chthonian planet: the rock or metal core of a gas giant stripped bare.
Rocky worlds can harbor liquid water, terrain features, and potentially tectonic activity. Tectonically-active planets can also generate a magnetic field.
Such planets come in many different sizes. Earth is Earth-sized, Mercury is only about one-third as wide, while Kepler-10c is 2.35 times as large as our planet. Density is also a factor. Without going to a planet and studying its interior structure, it’s impossible to accurately estimate its density. As a rule of thumb, however, uncompressed density estimates for rocky planets tend to be lower the farther they orbit from their star. Planets closer to the star would thus likely have a higher metal (denser) content, while those farther away would have a higher silicate (lighter) content. Gliese 876 d, for instance, is estimated at 7 to 9 times the mass of Earth.
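If Kepler-10c’s “2.35 times as large” refers to radius, as such figures usually do, cubing it shows how quickly bulk grows with size. This is an illustrative scaling only, and the radius interpretation is an assumption:

```python
# Volume scales with the cube of radius, so modest radius differences
# imply large differences in bulk. Interpreting "2.35 times as large"
# as a radius ratio is an assumption made for illustration.
def volume_ratio(radius_ratio: float) -> float:
    """Volume ratio implied by a given radius ratio."""
    return radius_ratio ** 3

print(round(volume_ratio(2.35), 1))    # ~13 Earth volumes for Kepler-10c
print(round(volume_ratio(1 / 3), 3))   # ~0.037 Earth volumes for Mercury
```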
The first extrasolar rocky planets were discovered in the early 1990s. Ironically, they were found orbiting a pulsar (PSR B1257+12), one of the most violent environments possible for a planet. Their estimated masses were 0.02, 4.3, and 3.9 times that of Earth.
These planets contain a large amount of water, either on the surface or subsurface. They’re an offshoot of the rocky planet, either covered in liquid water or an ice layer over liquid water. We don’t know very much about them or how many there are out there because we can’t yet spot liquid surface water, so we use atmosphere spectrometry as a proxy.
Earth is the only planet on which we’ve confirmed the existence of liquid water at the surface so far. And although water covers around 71% of the Earth, it makes up only about 0.05% of the planet’s mass, so we’re not an ocean planet. On true ocean planets, water is expected to run so deep that the pressure would turn its lowest layers into (warm) ice, even at high temperatures.
This type of planet remains one of the likeliest to harbor extraterrestrial life.
Fan-favorite Pluto, along with Ceres, Haumea, Makemake, and Eris, are the dwarf planets of our solar system. Dwarf planets straddle the line between planets and natural satellites. They’re large enough to hold their own stable shape, even to hold moons themselves, but not massive enough to clear their orbits of other material.
Not technically planets because they orbit another planet, moons are nevertheless telluric bodies that vary in size from ‘large asteroid’ to ‘larger than Mercury’. Titan, Saturn’s largest moon, has its own atmosphere.
Six planets in the Solar System host a combined 185 known natural satellites, while Pluto, Haumea, Makemake, and Eris also harbor moons of their own.
These are the planets your parents warned you about.
Rogue planets deserve a mention on this list despite the fact that they don’t orbit a star. They are, for all intents and purposes, planets that orbit the galactic core after being ejected from the planetary system in which they formed. It is also possible that, somehow, they formed free of any stellar host. PSO J318.5−22 is one such planet.
Image credits NASA, ESA / A. Simon (Goddard Space Flight Center) and M.H. Wong (University of California, Berkeley)
The image was taken on June 27, 2019 and centers on the planet’s titanic Great Red Spot. It records Jupiter’s color palette, swirling clouds, and turbulent atmosphere in much higher quality than previously-available images. These elements provide an important glimpse into the processes unfurling in the gas giant’s atmosphere.
Ten year challenge photo
The image was taken in visible light as part of the Outer Planets Atmospheres Legacy program (OPAL). It was snapped with Hubble’s Wide Field Camera 3 when Jupiter was 400 million miles from Earth — near “opposition,” or almost directly opposite the Sun in the sky.
OPAL generates global views of the outer planets each year using the Hubble Telescope, which are meant to provide researchers with the data they need to track changes in their storm, wind, and cloud dynamics.
One of Jupiter’s most striking features is the Great Red Spot, around which the current image focuses. The Spot is a churning storm, rolling counterclockwise between two bands of clouds (above and below the Great Red Spot) which are moving in opposite directions. The red band to the northeast of the Great Red Spot contains clouds moving westward and around the north of the giant tempest. The white clouds to its southwest are moving eastward to the south of the spot. The swirling filaments seen around its outer edge are high-altitude clouds that are being pulled in and around the storm.
Jupiter’s bands are created by differences in the thickness and height of the ammonia ice clouds that blanket the planet, both properties dictated by local variations in atmospheric pressure. The more colorful bands are generally made of ‘deeper’ clouds, while lighter bands rise higher and are generally thicker than the darker ones.
Winds between bands can reach speeds of up to 400 miles (644 kilometers) per hour. All of the bands seen in this image are corralled to the north and to the south by powerful, constant jet streams, which remain stable even as the bands change color. The bands of deep red and bright white that border the Great Red Spot also become much fainter on the other side of Jupiter.
You can learn more about how these colors form here.
New research shows that early life on Earth relied on a completely different type of photosynthesis — and that delayed the formation of the atmosphere as we breathe it today.
Image via Pixabay.
It’s no exaggeration to say that life today is wholly dependent on photosynthesis. Not only does it power plants (which directly or indirectly feed everybody else), but it also provides the oxygen we breathe, at least as far as the oxygen-producing photosynthesis of today is concerned. This reaction is what led to the appearance of free oxygen in Earth’s atmosphere, something unheard of before roughly 2.3 billion years ago (as oxygen is very reactive).
However, we have evidence that oxygen-releasing photosynthesis evolved much earlier in our planet’s history, even as early as 3 billion years ago. New research looking into why Earth’s atmosphere took so long to oxygenate suggests that it may simply have been a case of good ol’ fashioned competition at play.
“The striking lag has remained an enduring puzzle in the fields of Earth history and planetary science,” says Christopher Reinhard, an assistant professor in the School of Earth and Atmospheric Sciences (EAS) and the paper’s corresponding author.
Reinhard and his colleagues, led by EAS postdoctoral researcher Kazumi Ozaki, suggest that an older form of photosynthesis may have delayed the oxygenation of Earth’s atmosphere. Chemical conditions in Earth’s early oceans helped prop up this competitor, against which oxygen-releasing photosynthesizers could not compete effectively at the time.
Modern photosynthesizers break apart water and release oxygen gas. Primitive ones, the team explains, substitute iron ions for water — and release rust instead of oxygen gas. Through a combination of experimental microbiology, genomics, and large-scale biogeochemical modeling, the team found that these primitive photosynthesizers are “fierce competitors for light and nutrients,” Ozaki explains.
“We propose that their ability to outcompete oxygen-producing photosynthesizers is an important component of Earth’s global oxygen cycle,” Ozaki, now an assistant professor in the Department of Environmental Science at Toho University, in Japan, adds.
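Schematically, the two reactions can be written side by side. These are the textbook net equations (with CH₂O standing in for fixed carbohydrate), not formulas taken from the paper itself:

```latex
% Oxygenic photosynthesis: water donates electrons, oxygen is released.
\mathrm{CO_2 + H_2O \xrightarrow{\text{light}} CH_2O + O_2}

% Anoxygenic photoferrotrophy: ferrous iron donates electrons instead,
% and the oxidized iron precipitates as "rust" (ferric hydroxide).
\mathrm{4\,Fe^{2+} + CO_2 + 11\,H_2O \xrightarrow{\text{light}} CH_2O + 4\,Fe(OH)_3 + 8\,H^+}
```

Both equations balance in atoms and charge, which is a quick way to see that four ferrous ions substitute for each water molecule split in the oxygenic pathway.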
The findings help us better understand how geology and the biosphere worked to change the Earth’s atmosphere into what we have today. It also helps us better understand the path life took on our planet; as much as oxygenation was a boon to animals like us, it was an environmental catastrophe for organisms at the time. The findings could also help us refine our search for Earth-like planets, or planets harboring alien life, as they give us a better understanding of how life itself can change a planet — and to what extent.
“Our results contribute to a deeper knowledge of the biological factors controlling the long-term evolution of Earth’s atmosphere,” Ozaki says. “They offer a better mechanistic understanding of the factors that promote oxygenation of the atmospheres of Earth-like planets beyond our solar system.”
The results “yield an entirely new vantage from which to build theoretical models of Earth’s biogeochemical oxygen cycle,” Reinhard adds.
The paper “Anoxygenic photosynthesis and the delayed oxygenation of Earth’s atmosphere” has been published in the journal Nature.
The American military is actually one of the largest emitters of greenhouse gases in the world — more than many nations.
Image via Pixabay.
A new analysis by Dr. Neta Crawford, a professor of Political Science and Department Chair at Boston University, shows that the Pentagon was responsible for around 59 million metric tons of carbon dioxide and other greenhouse gas emissions in 2017. This figure places the U.S. military higher on the list of the world’s largest emitters than industrialized countries such as Sweden or Portugal.
The Costs of War
“In a newly released study published by Brown University’s Costs of War Project, I calculated U.S. military greenhouse gas emissions in tons of carbon dioxide equivalent from 1975 through 2017,” Dr. Crawford explains in a piece for LiveScience.
“Since 2001, the DOD has consistently consumed between 77 and 80 percent of all US government energy consumption,” her paper explains.
In any one year, the study notes, the Pentagon’s emissions were greater than those of “many smaller countries.” In fact, if the Pentagon were a country, it would be the world’s 55th largest greenhouse gas emitter.
The largest single sources of military greenhouse gas emissions identified in the study are buildings and fuel. The DoD maintains over 560,000 buildings, which account for about 30% of its emissions. “The Pentagon building itself emitted 24,620.55 metric tons of [CO2 equivalent] in the fiscal year 2013,” the study says. The lion’s share of total energy use, around 70%, comes from operations. This includes moving troops and material about, as well as their use in the field, and is kept running by massive quantities of jet and diesel fuel, Crawford said.
This January, the Pentagon listed climate change as “a national security issue” in a report it presented to Congress. The military has launched several initiatives to prepare for its impacts, but seems just as thirsty for fuel as ever. It is understandable: without fuel, and a lot of it, tanks, trucks, planes, and bombers are just fancy paperweights.
But, at the same time, the use of fossil fuels is changing the climate. Global climate models estimate a 3ºC to 5ºC (5.4ºF to 9ºF) rise in mean temperatures this century alone under a business as usual scenario. In a paper published in Nature that we covered earlier today, we’ve seen how 4ºC would increase the effect of climate on conflict more than five-fold. More conflict would probably mean more fuel guzzled by the army’s engines.
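A note on the conversion above: those figures are temperature differences, so only the 9/5 scale factor applies, with no +32 offset. That is why 3–5 ºC maps to 5.4–9 ºF:

```python
# Converting a temperature *difference* from Celsius to Fahrenheit only
# needs the 9/5 scale factor; the +32 offset applies to absolute
# temperatures, not to deltas.
def delta_c_to_f(delta_c: float) -> float:
    """Convert a temperature difference from Celsius to Fahrenheit."""
    return delta_c * 9 / 5

print(delta_c_to_f(3))   # 5.4
print(delta_c_to_f(5))   # 9.0
```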
The paper also looks at how the U.S. military “spends about $81 billion annually defending the global oil supply” to ensure both domestic and military life can continue without a hitch.
“The military uses a great deal of fossil fuel protecting access to Persian Gulf Oil,” the paper explains. “Because the current trend is that the US is becoming less dependent on oil, it may be that the mission of protecting Persian Gulf oil is no longer vital and the US military can reduce its presence in the Persian Gulf.”
“Which raises the question of whether, in protecting against a potential oil price increase, the US does more harm than it risks by not defending access to Persian Gulf oil. In sum, the Persian Gulf mission may not be as necessary as the Pentagon assumes.”
However, not all is doom and gloom. Crawford says the Pentagon has reduced its fuel consumption significantly since 2009, mainly by making its vehicles more efficient and shifting towards cleaner sources of energy at its bases. Further reductions could be achieved by cutting missions to the Persian Gulf, the paper advises, since protecting the oil supply from this area becomes less of a priority as renewable energy gains a larger share of the overall grid.
“Many missions could actually be rethought, and it would make the world safer,” Crawford concludes.
The paper “Pentagon Fuel Use, Climate Change, and the Costs of War” has been published by Brown University’s Costs of War Project.
A group of researchers representing several institutions in Denmark, with colleagues from Sintex and Haldor Topsoe, has developed an electrified methane reformer that produces far less CO2 than conventional steam-methane reformers. The method could allow us to produce hydrogen and hydrogen fuel much more cleanly in reformers, and could also be used in tandem with other recent research to help us mitigate global warming.
Less gas for your buck
Global production of hydrogen is around 60 million tons per year. The gas is vital for the production of methanol and of ammonia for fertilizer (currently its primary use), and could become the bedrock of a hydrogen-fuel economy. However, it’s also a pretty dirty business: some estimates place around 3% of the world’s current CO2 emissions on the back of steam-methane reformers, our primary source of hydrogen.
A steam-methane reformer is a very large piece of equipment (think of it as a simplified, scaled-down oil refinery) used to extract hydrogen from methane gas. The process involves burning natural gas to heat a methane-water mixture under pressure, ‘cooking’ it into syngas, a mix of carbon monoxide and hydrogen. Needless to say, this produces quite a lot of CO2, which is released into the atmosphere. Additional CO2 is also produced inside the reformer as an incomplete-reaction product.
The team aimed to reduce the hydrogen industry’s carbon footprint by devising an electricity-based methane reformer. This device, they report, is significantly smaller (one hundred times smaller, in fact) than a traditional reformer and far cleaner. It uses electricity to heat up the water-methane mixture, which removes CO2 emissions associated with the burning of natural gas. The approach also results in a much more even and easily-controlled heating of the water-methane mix, slashing the amount of CO2 produced inside the reforming chamber.
If powered by electricity generated from a renewable resource, the team points out, the electric reformer would reduce the footprint of hydrogen production dramatically. If all the steam-methane reformers in the world were replaced by electrified systems, they add, the world would see a 1% drop in CO2 emissions.
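For the curious, the arithmetic behind that 1% figure can be sketched as follows. The ~3% share comes from the article above; the fraction of reformer emissions attributable to burning gas for heat (here one-third) is an illustrative assumption, not a number from the paper.

```python
# Back-of-the-envelope check of the ~1% figure, with assumed inputs:
# reformers account for ~3% of global CO2 emissions (from the article);
# the share of that which comes from burning natural gas for heat
# (here ~1/3) is an illustrative assumption, not a figure from the paper.
reformer_share_of_global_co2 = 0.03
heating_share_of_reformer_co2 = 1 / 3  # assumed

global_reduction = reformer_share_of_global_co2 * heating_share_of_reformer_co2
print(f"Global CO2 reduction: {global_reduction:.1%}")  # ~1.0%
```

If the assumed one-third share is in the right ballpark, electrifying the heat supply alone gets you to the 1% figure the team quotes.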
We’ve also talked recently about a somewhat unorthodox idea to help us fight climate warming: replacing anthropogenic methane in the atmosphere with CO2. The authors of that study propose degrading methane into CO2 through heat. Coupled with the new electric reformer, we could also generate hydrogen for use as fuel or in fertilizer production.
The paper “Electrified methane reforming: A compact approach to greener industrial hydrogen production” has been published in the journal Science.
Supermassive black holes don’t really form dust ‘donuts’ — the structures surrounding these bodies are more akin to galactic matter fountains, new research reveals.
Artist’s concept of a supermassive black hole. Also shown are the accretion disk (donut) and the outflowing jet of energetic particles. Image credits NASA-JPL.
Computer simulations and new observations from the Atacama Large Millimeter/submillimeter Array (ALMA) suggest that the gas accretion rings around supermassive black holes (SBHs) aren’t ring-shaped at all. Instead, gas being expelled from the SBH interacts with infalling matter to create a complex circulation pattern, one which the authors liken to a fountain.
Jets of matter
Most galaxies revolve around an SBH. These objects can be millions, even billions of times as heavy as the Sun, and they knit galaxies together through sheer gravitational power. Some of these SBHs are actively consuming new material. So far, common wisdom held that instead of falling directly in, matter builds up around an active black hole in a donut- or ring-shaped structure.
It wasn’t far from the truth but, new research reveals, it wasn’t spot-on either. A study led by Takuma Izumi, a researcher at the National Astronomical Observatory of Japan (NAOJ), reports that this ‘donut’ is not actually a rigid structure but rather a complex collection of highly dynamic gaseous components.
The researchers used the ALMA telescope to observe the Circinus Galaxy and the SBH at its center — which is roughly 14 million light-years away from Earth. They then compared their observations to computer models of gas falling toward a black hole. These simulations were run using the Cray XC30 ATERUI supercomputer operated by NAOJ.
All in all, the team found a surprising level of interplay between the gases in this structure. Cold molecular gas first falls towards the black hole to form a disk near the plane of rotation. The proximity to the black hole heats up the gas until its atoms break apart into protons and electrons. Not all of these products go on to be swallowed by the black hole. Some are instead expelled above and below the disk, but are then snagged by the SBH’s immense gravitational presence and fall back onto the disk.
Rough schematic of the process’ dynamics. Pc stands for parsec, equal to about 3.26 light-years (30 trillion km or 19 trillion miles).
These three components circulate continuously, the team explains. Their interaction creates three-dimensional flows of highly turbulent matter around the black hole.
“Previous theoretical models set a priori assumptions of rigid donuts,” explains co-author Keiichi Wada, a theoretician at Kagoshima University in Japan who led the simulation study.
“Rather than starting from assumptions, our simulation started from the physical equations and showed for the first time that the gas circulation naturally forms a donut. Our simulation can also explain various observational features of the system.”
The team says their paper finally explains how donut-shaped structures form around active black holes and, according to Izumi, will “rewrite the astronomy textbooks.”
The paper “Circumnuclear Multiphase Gas in the Circinus Galaxy. II. The Molecular and Atomic Obscuring Structures Revealed with ALMA” has been published in The Astrophysical Journal.
Throughout the southern reaches of the Arctic, plants are getting taller due to climate change.
The common freckle pelt lichen (Peltigera aphthosa) is often found on mossy ground, rocks, or under trees in Arctic ecosystems. Image credits James Walton / NPS.
While not graced with the lush vegetation of the Earth’s other areas, the Arctic is far from desolate. Hundreds of species of low-lying shrubs, grasses, and other plants make a home in the frigid expanse, and they play a key role in the carbon cycle. However, anthropogenic climate change is causing new plants to move into the Arctic’s southern stretches which, according to a new paper, could lead to quite a bit of trouble in the future.
Growing (too) strong
An international team of 130 researchers, led by Dr Isla Myers-Smith of the School of Geosciences at the University of Edinburgh, and Dr Anne Bjorkman from the Senckenberg Biodiversity and Climate Research Centre (BiK-F) in Frankfurt, has been investigating the Arctic flora as part of a Natural Environment Research Council (NERC)-funded project.
The team looked at more than 60,000 data points from hundreds of sites across the Arctic and alpine tundra and report that higher mean temperatures are impacting the delicate balance of these ecosystems. This is the first time that a biome-scale study looking at the role plants play in this rapidly-warming part of the planet has been carried out, says Bjorkman.
“Rapid climate warming in the Arctic and alpine regions is driving changes in the structure and composition of plant communities, with important consequences for how this vast and sensitive ecosystem functions,” Dr Bjorkman adds.
“Arctic regions have long been a focus for climate change research, as the permafrost lying under the northern latitudes contains 30 to 50 percent of the world’s soil carbon”.
Among other things, plants insulate the soil they grow in from incoming sunlight. While this is rather fortunate for us during a hot summer’s day, in the Arctic, it’s a matter of ecosystem stability. Taller plants also help to trap more snow beneath their leaves. This thicker blanket of snow, in turn, further insulates the soil from temperature changes in the atmosphere, preventing it from freezing.
In other words, taller plants in the Arctic keep soil thawed for more days each year, leading to “an increase in the release of greenhouse gases” as biological matter trapped in the soil has a wider window of time annually to decompose.
“If taller plants continue to increase at the current rate, the plant community height could increase by 20 to 60 percent by the end of the century,” Dr Bjorkman explains.
The team gathered their data from sites in Alaska, Canada, Iceland, Scandinavia, and Russia. Alpine sites in the European Alps and Colorado Rockies were also included in the study. For each dataset, the team looked at the relationship between temperature and soil moisture. They also tracked plant height and leaf area, along with specific leaf area, leaf nitrogen content, leaf dry matter content, as well as ‘woodiness and evergreenness’.
Out of all these characteristics, only height increased meaningfully over time. Temperature and moisture levels (the latter itself strongly affected by temperature) had the strongest influence on observed plant characteristics.
“We need to understand more about soil moisture in the Arctic. Precipitation is likely to increase in the region, but that’s just one factor that affects soil moisture levels,” Dr Myers-Smith said. “While most climate change models and research have focused on increasing temperatures, our research has shown that soil moisture can play a much greater role in changing plant traits than we previously thought.”
The results suggest that, through the mechanism explained previously, this increase in overall plant height could have significant implications for both the Arctic and the world at large. At the same time, they should help us better tailor our climate models to take into account increased greenhouse gas emissions from the area.
The paper “Plant functional trait change across a warming tundra biome” has been published in the journal Nature.
Similar to how stars are formed, the most popular theory among today’s scientists regarding the creation of planets is that they are the result of a collapsing nebula. During the long evolution of this contracting cloud of gas, the nebula transforms into a structure called a protoplanetary disk, with a newly-formed star at its center. Such a disk provides a place of incubation for developing planets.
Just recently, for the first time on record, young planets-to-be (also referred to as protoplanets) developing in one of these protoplanetary disks were actually “weighed”. Several scientific papers published earlier this month in the Astrophysical Journal Letters discuss a new method that can be employed to calculate various physical attributes of these protoplanets, one that is also rather accurate and dependable.
One group of astronomers, headed by Richard Teague, discovered two young planets with masses close to that of Jupiter, the largest planet in our solar system. The two bodies orbit a star labeled HD 163296. This four-million-year-old ball of burning gas is still a youngster: a star the size of our Sun has a life expectancy of about 10 billion years.
A Developing Star System. Source: SciTechDaily.
But a separate party of scientists, this one based in Australia and headed by Christophe Pinte, was also examining the same system. They noticed a third protoplanet revolving around the very same star, this one nearly twice as massive as the gas giant Jupiter.
Both teams employed data from the Atacama Large Millimeter/submillimeter Array (ALMA), a system of radio telescopes located in Chile. The two teams of astronomers closely examined the motion of the gas in the disk. Both managed to develop a process of measuring the gas’s velocity by observing the change in the wavelength of light emitted by carbon monoxide molecules.
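The velocity measurement boils down to the Doppler relation: a shift in the observed wavelength of a spectral line gives the line-of-sight speed of the emitting gas. Here is a minimal sketch of that relation; the wavelength values are illustrative, not figures from the papers.

```python
# Non-relativistic Doppler shift: v = c * (lambda_obs - lambda_rest) / lambda_rest
C_KM_S = 299_792.458  # speed of light, km/s

def radial_velocity(observed: float, rest: float) -> float:
    """Line-of-sight velocity from a wavelength shift (any consistent unit)."""
    return C_KM_S * (observed - rest) / rest

# An illustrative CO line with a rest wavelength of 1.300000 mm, observed
# shifted by 13 nanometers (values assumed for the example):
print(f"{radial_velocity(1.300013, 1.300000):.1f} km/s")  # ~3.0 km/s
```

A positive result means the gas is moving away from us; mapping such velocities across the disk is what reveals the subtle gravitational tug of a hidden planet.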
The gravitational pull of a planet would best explain the gaseous movements. Richard Teague thinks this method of measurement could be used effectively in observing many other stars and protoplanets. In this way, he hopes scientists will be able to discover what types of protoplanets are most common in the cosmos.
A lot of ink has been spilled on climate change and the effect of greenhouse gas emissions lately. And for good reason. But how do some gases make the planet warmer? What is the link between CO2 and the climate? Let’s find out.
Image credits orvalrochefort / Flickr.
How it all starts
With precious few exceptions, all the energy on Earth derives from the sun. Sunlight carries this energy (mostly in the form of heat, visible light, and radiation that we cannot perceive) from the fusing nuclei in the star’s core to our planet’s surface.
Part of this energy keeps our planet alive. Winds blow to even out pressure differences in the atmosphere, which are caused by differences in temperature. Plants gobble up sunlight to bind carbon and hydrogen into sugars during photosynthesis. Even the coal and oil we burn are akin to chemical batteries storing the sun’s energy.
Part of that input of energy, however, doesn’t stay here. It gets reflected — by clouds, oceans, plants, ice caps — back into space. While every other celestial body out there does this, each will differ in how much of the incoming energy it reflects. This ratio of reflected energy to incoming energy is known as a planet’s ‘albedo’, from the Latin word for ‘white’. Albedo is measured on a scale from 0 (no reflection) to 1 (complete reflection), and the Earth currently sits at about 0.30 (as measured in the 1970s), meaning it reflects some 30% of all incoming sunlight.
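Put as a quick sketch, assuming the standard value of roughly 1361 W/m² for sunlight arriving at the top of the atmosphere, an albedo of 0.30 splits the incoming energy like so:

```python
# Illustration of the albedo figure quoted above: with Earth's albedo of
# about 0.30, 30% of incoming sunlight is reflected back into space and
# the remaining 70% is absorbed by the atmosphere and surface.
# The solar constant (~1361 W/m^2) is a standard value, not from the article.
solar_constant = 1361.0  # W/m^2 at the top of the atmosphere
albedo = 0.30

reflected = solar_constant * albedo
absorbed = solar_constant * (1 - albedo)
print(f"reflected: {reflected:.0f} W/m^2, absorbed: {absorbed:.0f} W/m^2")
```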
To summarise, conditions on a planet’s surface depend on a tug of war between the energy output of its host star and the planet’s capacity to reflect or capture it.
What is the greenhouse effect?
The greenhouse effect is caused by greenhouse gases in the atmosphere preventing heat from radiating out into space. As heat radiates up from the surface, these gases absorb it and keep that energy inside the atmosphere.
Image via Pixabay.
While a planet’s albedo describes how much incoming sunlight (and the energy it carries) is reflected away from the surface of the Earth, its atmosphere works as a temperature battery. Clouds reflect about 23% of incoming solar energy, but they — alongside the rest of the atmosphere — also absorb roughly the same amount, 23%. What remains of the incoming solar energy is either reflected (7% of the total) or absorbed (47%) by the surface.
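These percentages fit together neatly, as a quick sanity check shows: the four pieces account for all the incoming energy, and the two reflected pieces add back up to the ~0.30 albedo mentioned earlier.

```python
# Cross-check of the energy-budget percentages quoted above.
reflected_by_clouds_atmosphere = 23  # % of incoming solar energy
absorbed_by_atmosphere = 23
reflected_by_surface = 7
absorbed_by_surface = 47

total = (reflected_by_clouds_atmosphere + absorbed_by_atmosphere
         + reflected_by_surface + absorbed_by_surface)
albedo = (reflected_by_clouds_atmosphere + reflected_by_surface) / 100

print(total)   # 100 -- the budget closes
print(albedo)  # 0.3 -- matches Earth's measured albedo
```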
Certain compounds in the atmosphere (most notably water vapor, carbon dioxide, and methane) are very good at absorbing heat (infrared radiation) and later emit it back out. Greenhouse gases capture most of that 23% of the incoming energy absorbed in the atmosphere.
All in all, we’re actually very lucky to have a plump atmosphere that contains some greenhouse gases — they help ‘spread’ energy around evenly. A planet like Mars, with its thin atmosphere, is plagued by temperatures of both extremes: scorching in the sunlight, freezing in the shadow. The same goes for Mercury: despite being the closest planet to the sun, nighttime temperatures there drop as low as -290°F (-180°C).
However, this is also where our climate troubles start. Greenhouse gases in the atmosphere absorb energy as long as their environment is at a higher energy state (they absorb infrared light when their environment is warmer than them). The atmosphere and surface, then, absorb most energy during the day and release most of it at night. These gases radiate energy in all directions, meaning some of that is released towards the Earth’s surface to be reabsorbed.
When they release it, some of the energy goes back into the ground. The higher the concentration of greenhouse gases in the atmosphere, the more energy gets trapped this way. Rinse and repeat enough times, and you get to where we are today — we’ve pumped so much CO2 into the atmosphere that it’s making a noticeable change in average temperatures.
Why is it a problem for us today?
Strictly speaking, climate change itself isn’t the problem — its consequences are.
For all our technology and know-how, society today is completely dependent on nature for its survival. We rely on natural processes to clean our water, fatten the fish we capture, pollinate our crops, generate the oxygen we need. We really enjoy the sea level staying where it is, and we’ve constructed various social and cultural mechanisms to adapt to the climatic and ecological particularities of the places we live in. Climate change — spurred on by the greenhouse gases we generate — threatens to destroy these natural systems we so dearly rely on.
When they change, we and our society will have to change as well, in order to survive. But the fact of the matter is that we have evolved, biologically and culturally, economically, and socially, to fit the mold our environments provided. Adapting to a post-climate-change world will entail social and economic upheaval the likes of which humanity has never faced before.
Another issue with the greenhouse effect, and by extension climate change, is that it has a lot of inertia. It takes time to fix. Even directly scrubbing CO2 out of the atmosphere will take time: there are roughly 3,200 gigatons of CO2 in Earth’s atmosphere right now (410 ppm), and we’d need to scrub out some 1,440 gigatons (45%) of that to get to pre-industrial levels.
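The bookkeeping behind those figures can be sketched with a commonly used conversion of roughly 7.8 gigatons of CO2 per ppm of atmospheric concentration (an approximation, not a number from the article):

```python
# ppm-to-gigatons bookkeeping behind the figures above. The conversion
# factor (~7.8 Gt CO2 per ppm) is a commonly used approximation.
GT_CO2_PER_PPM = 7.8  # approximate

current_ppm = 410
atmospheric_co2_gt = current_ppm * GT_CO2_PER_PPM
print(f"~{atmospheric_co2_gt:.0f} Gt CO2 in the atmosphere")

# Removing 45% of it, as the article suggests:
to_scrub_gt = atmospheric_co2_gt * 0.45
print(f"~{to_scrub_gt:.0f} Gt CO2 to scrub")
```

The result lands on the ~3,200 and ~1,440 gigaton figures quoted above.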
This inertia is only compounded with respect to the effects of climate change. The world’s ecosystems will need time to recover even after greenhouse gas levels in the atmosphere have been reduced — and there’s no guarantee they’ll go back to being what they were. Every species lost clears an evolutionary niche that evolution will fill with something else. There’s no guarantee that ‘something else’ will be to our liking or be useful for us.
Finally, there is this spot of trouble:
The greenhouse effect is self-reinforcing
This is actually one of the greatest dangers facing humanity at the moment for a very simple reason: we’ve helped get it started, but greenhouse-type effects are perfectly capable of driving themselves on.
Temperature map of regions where record highs (red) and lows (blue) were set in 2015, relative to the year before. Image and caption credits Berkeley Earth / Wikimedia
Water vapor, for example, is a greenhouse gas, so it helps trap heat. Ice sheets are vast expanses of white, so they increase our planet’s albedo. A warmer climate will increase atmospheric concentrations of the first while reducing areas of the latter. The increase in greenhouse gases coupled with a reduction in albedo will warm the climate even more.
We are already seeing this positive feedback cycle at work. Warmer average temperatures, for example, are causing organic matter buried in permafrost to decompose, which releases carbon dioxide. The polar ice sheets are reeling and fragmenting under warmer conditions, reducing their ability to reflect energy back to space. These are just a few examples of how the greenhouse effect can get out of hand.
Cape Town in South Africa narrowly avoided running completely out of water after three years of relentless drought. The drought in California which ended last year was also spurred on by climate change. And there are things we just don’t know about.
“Large, abrupt climate changes have repeatedly affected much or all of the Earth, locally reaching as much as 10°C change in 10 years. Available evidence suggests that abrupt climate changes are not only possible but likely in the future, potentially with large impacts on ecosystems and societies,” reads a consensus study report published by the National Research Council in 2002.
“We do not yet understand abrupt climate changes well enough to predict them.”
Taken to the extreme, as the state of Venus today shows, the cycle can repeat until a planet becomes a hot rock drenched in boiling acid. Not a pleasant prospect.
But, as the authors of the study themselves note, “there is no need to be fatalistic; human and natural systems have survived many abrupt changes in the past, and will continue to do so. Nonetheless, future dislocations can be minimized by taking steps to face the potential for abrupt climate change.”
If you have a good look at some of the underlying concepts of modern science, you might notice that some of our current notions are rooted in old scientific thinking, some of which originated in ancient times. Some of today’s scientists have even reconsidered or revamped old scientific concepts. We’ve explored some of them below.
4 Elements of the Ancient Greeks vs 4 Phases of Matter
The ancient Greek philosopher and scholar Empedocles (495-430 BC) came up with the cosmogenic belief that all matter was made up of four principal elements: earth, water, air, and fire. He further speculated that these various elements or substances were able to be separated or reconstituted. According to Empedocles, these actions were a result of two forces. These forces were love, which worked to combine, and hate, which brought about a breaking down of the elements.
What scientists refer to as elements today have few similarities with the elements examined by the Greeks thousands of years ago. However, Empedocles’ proposed quadruplet of substances bears resemblance to what we call the four phases of matter: solid, liquid, gas, and plasma. The phases are the different forms or properties material substances can take.
Water in two states: liquid (including the clouds), and solid (ice). Image via Wikipedia.
Compare Empedocles’ substances to the modern phases of matter. “Earth” would be solid. The dirt on the ground is in a solid phase of matter. Next comes water which is a liquid; water is the most common liquid on Earth. Air, something which surrounds us constantly in our atmosphere, is a gaseous form of matter.
And lastly, we come to fire. Fire has fascinated human beings since before recorded history. Fire is similar to plasma in that both generate electromagnetic radiation such as light. However, most flames you see in everyday life are not hot enough to be considered plasma; they are typically considered gaseous. A prime example of where plasma does form is the sun. The ancient four elements thus have an intriguing correspondent in modern science.
Ancient Concept of Dome Sky vs. Simulation Hypothesis
Millennia ago, people held the notion that the world was flat. Picture a horizontal cooking sheet with a transparent glass bowl set on top of it. Early peoples thought of the Earth in much the same way: they considered the land itself flat and the sky a dome. However, early Greek philosophers such as Pythagoras (c. 570-495 BC) — who is also known for formulating the Pythagorean theorem — understood that Earth was actually spherical.
Fast forward to the 21st century. Now scientists are considering the scientific concept of the dome once again but in a much more complex manner.
Regardless of what conspiracy lovers would have you believe, the human race has ventured into outer space, leaving the face of the Earth to travel to the stars. Yet in the face of all our achievements, some scientists actually question whether reality is real, a mind-boggling and apparently laughable idea.
But some scientists have wondered if we could be existing in a computer simulation. The gap between science and science fiction starts to become very fine when considering this.
This idea calls to mind classic sci-fi plots such as those frequently played out in The Twilight Zone in which everything the characters take as real turns out to be something entirely unexpected. You might also remember the sequence in Men in Black in which the audience sees that the entire universe is inside an alien marble. Bill Nye even uses the dome as an example in discussing hypothetical virtual reality. This gives one the feeling that he is living in a snowglobe.
Medieval Alchemy vs. Modern Chemistry
The alchemists of the Middle Ages attempted to prove that matter could be transformed from one object into an entirely new object. One of their fondest goals was the creation of gold from a less valuable substance. They were dreaming big, but such dreams have not yet come to fruition. Could it actually be possible to alter one type of matter into another?
Well, modern physicists may be well on their way to achieving a comparable feat some day. They are pursuing the idea of converting light into matter, as expressed in Albert Einstein’s famous equation. Since 2014, scientists have been claiming that such an operation would be quite feasible, even with existing technology.
Einstein’s famous equation, E = mc².
Light is made up of photons, and a contraption capable of performing the conversion has been dubbed “photon-photon collider.” Though we might not be able to transform matter into other matter in the near future, it looks like the light-to-matter transformation has a bright outlook.
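To get a rough sense of the energies involved, E = mc² puts the electron’s rest energy, and therefore the minimum energy each photon needs in a head-on photon-photon collision to create an electron-positron pair, at about half a mega-electronvolt. A quick sketch, using standard physical constants:

```python
# E = m * c^2: creating an electron-positron pair from two colliding
# photons requires their combined energy to be at least twice the
# electron's rest energy. Constants are standard rounded values.
ELECTRON_MASS_KG = 9.109e-31
C_M_PER_S = 2.998e8
EV_PER_JOULE = 1 / 1.602e-19

rest_energy_j = ELECTRON_MASS_KG * C_M_PER_S ** 2
rest_energy_mev = rest_energy_j * EV_PER_JOULE / 1e6
print(f"electron rest energy: ~{rest_energy_mev:.3f} MeV")  # ~0.511 MeV
```

So each photon in such a collider must carry at least ~0.511 MeV, which puts the requirement firmly in gamma-ray territory, far beyond visible light.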
An international research team has found the largest brown dwarf we’ve ever seen, and it has ‘the purest’ composition to boot. Known as SDSS J0104+1535, the dwarf trails at the edges of the Milky Way.
An artists’ representation of a brown dwarf with polar auroras. Image credits NASA / JPL.
Brown dwarfs — they’re like stars, but without the spark of love. They’re much too big to be planets but they’re too small to ignite and sustain fusion, so they’re not (that) bright and warm and so on. Your coffee is probably warmer than some Y-class brown dwarfs, which sit on the lower end of their energy spectrum. The coldest such body we know of, a Y2 class known as WISE 0855−0714, is actually so cold (−48 to −13 degrees C / −55 to 8 degrees F) your tongue would stick to it if you could lick it.
But they can still become really massive, as an international team of researchers recently discovered: nestled among the oldest stars in the halo of our Milky Way, some 750 light-years away from Earth in the constellation Pisces, they have found a brown dwarf that seems to be 90 times more massive than Jupiter — making it the biggest, most massive brown dwarf we’ve ever seen.
Named SDSS J0104+1535, the body is also surprisingly homogeneous as far as chemistry is concerned. Starting from its optical and near-infrared spectrum measured using the European Southern Observatory’s Very Large Telescope, the team says that this object is “the most metal-poor and highest mass substellar object known to-date”, made up of an estimated 99.99% hydrogen and helium. This would make the 10-billion-year-old body some 250 times purer than the Sun.
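For context on that 90-Jupiter-mass figure: the Sun weighs roughly 1,048 Jupiter masses, and the hydrogen-fusion threshold is commonly quoted at around 80 Jupiter masses for objects of solar composition (somewhat higher for very metal-poor objects, which is part of why this one can be so massive and still be substellar). The numbers below are these commonly quoted approximations, not values from the study:

```python
# Converting the brown dwarf's mass into solar units. The Sun-to-Jupiter
# mass ratio (~1048) is a standard approximation, not from the paper.
M_SUN_IN_M_JUP = 1048

mass_jup = 90  # from the article
mass_sun = mass_jup / M_SUN_IN_M_JUP
print(f"~{mass_sun:.3f} solar masses")  # ~0.086
```

At roughly 0.086 solar masses, SDSS J0104+1535 sits right at the blurry boundary between the heaviest brown dwarfs and the lightest true stars.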
Y u so cold? Image credits NASA / JPL-Caltech.
“We really didn’t expect to see brown dwarfs that are this pure,” said Dr Zeng Hua Zhang of the Institute of Astrophysics in the Canary Islands, who led the team.
“Having found one though often suggests a much larger hitherto undiscovered population — I’d be very surprised if there aren’t many more similar objects out there waiting to be found.”
From its optical and infrared spectrum, measured using the Very Large Telescope, SDSS J0104+1535 has been classified as an L-type ultra-cool subdwarf — based on a classification scheme established by Dr Zhang.
The paper “Primeval very low-mass stars and brown dwarfs – II. The most metal-poor substellar object” has been published in the journal Monthly Notices of the Royal Astronomical Society.
As humanity is struggling to build a sustainable future for our planet, reducing our emissions is absolutely crucial — and when it comes to reducing emissions, the elephant in the room is fossil fuels. In 2013, a study reported that just 90 companies are responsible for nearly two-thirds of all man-made global warming emissions, with big names like Exxon, Chevron, and BP leading the way. Today, the situation is similar, with a few big companies being responsible for a disproportionately large percentage of global emissions — but wait, there’s more. The world’s big oil is also investing heavily in blocking climate laws.

Exxon, Shell, and three trade associations spent US$114 million in 2015 alone to manipulate lawmakers and public discourse on climate change, according to a 2016 report by British NGO InfluenceMap. At least a few companies knew that global warming was coming decades ago, and not only did they not do anything, but they invested in denying climate change. The coal industry, which is currently in much more distress than oil & gas, is also investing heavily towards the same goal. So why then are oil companies key players at the UN climate summit in Marrakech?
According to The Guardian, representatives of companies such as ExxonMobil, Chevron, BP, and Shell will not only have unquestioned access to most high-level discussions in Marrakech, but they will also be called upon to give advice and hold council with country representatives. The same goes for coal giants like Peabody Energy, BHP Billiton, and Rio Tinto. A new infographic by Corporate Accountability International reveals the true extent of the fossil fuel industry’s access to, and influence over, the talks. So what exactly is the sense of having private discussions between country leaders and the companies whose products they are trying to move away from?
“What interests—beyond slowing progress—does a corporation like Exxon Mobil or Shell have in these talks?” said Tamar Lawrence-Samuel of Corporate Accountability International. “The answer is ‘none.’ Before we can ensure the effective implementation of the Paris Agreement, we must first make sure that Big Oil and those representing its interests are not at the table.”
This is the first major summit after the Paris agreement was reached. Last year in Paris, world leaders agreed to reduce emissions, though the targets themselves are often brought into question. The Paris pact calls for unprecedented support from the private sector, and yet it provides no protections against corporations or trade groups that might seek to steer negotiations toward their (or their members’) commercial interests. So how can this be reasonable, and how can we expect fossil fuel companies not to try to steer discussions in their favor? How could we expect any company not to follow its own interest?
A conflict of interest?
Among others, the Venezuelan delegation has spoken out very strongly on this point, arguing that it is a major conflict of interest that shouldn't be allowed.
“The convention and the Paris agreement is an instrument between states. And the inclusion of non-state actors must go through a revision of conflict of interest. This is a standard request, a legal request and a moral request. It is unacceptable for our delegation that the concept of conflict of interest was not even considered as the fundamental basis for the ethical integrity and the effective implementation of the Paris agreement … It is a concern for the majority of the world represented here at this conference and the discussions in the contact room. We are astonished that this issue was completely overturned in the conclusions.”
It's a strong point, and yet the US, the EU, and Australia have all vocally opposed limiting oil companies' access to the discussions. Australia in particular framed the debate as developing countries trying to make the process "less open", and also argued that the concept of "conflict of interest" is too hard to define.
“There is no clear understanding of what a conflict of interest is and it means different things to different people.”
So where does that leave us? Nowhere clear, really. The Paris Agreement was signed, ratified, and has entered into force. But without an enforcement mechanism, it's hard to say whether the participating parties, be they countries or companies, will hold up their end of the deal. Even if they do, the math doesn't really add up and we're likely on course for a greater-than-2°C rise in global temperatures (and that's a big "if"). The enthusiasm from Paris has waned, and the movement's momentum seems to be bogging down.
In Marrakech, there is a lot of talk about action, but it's still just talk. The world is racing to stop climate change driven in great part by oil and gas emissions, while still listening to the lobby of the very companies responsible for those emissions. Exxon's profits, like those of most oil companies, are plummeting, but the companies still have a strong say, and their lobby is as powerful as ever.
Countries representing some 70% of the world's population are asking for a special legal framework for the Paris agreement to make it less vulnerable to vested interests, but that seems a long shot right now. Ironically, even though this "business as usual" is exactly what we're trying to change, things are as they've always been, and they show little sign of changing. Unfortunately, we can't know just how much of an impact this lobbying really has.
Scientists may have caught the first-ever glimpse of a black hole forming, in the remains of a failed supernova 20 million light-years away.
The Gargantua black hole from Interstellar. Image credits Double Negative
When massive stars grow old and start running short on fuel, they explode in a dazzling display of light — a supernova. Huge quantities of matter and radiation are shot out at incredible speeds, squishing the core into something so dense that not even light can escape its gravitational pull — a leftover we call a black hole.
That's what we think happens, anyway — we've never actually seen it per se. But now, a team from Ohio State University in Columbus led by Christopher Kochanek might have witnessed it. They were combing through data from the Hubble Space Telescope when they noticed something strange about the red supergiant star N6946-BH1.
The star was discovered in 2004 and was estimated to be roughly 25 times as massive as the Sun. But when Kochanek and his team looked at snapshots taken in 2009, they found that the star had flared to a few million times the brightness of our Sun for a few months, then slowly started to fade away. In the photos Hubble took in the visible spectrum, the star had all but disappeared; the only trace left of its presence was a faint infrared signature.
What happened to N6946-BH1 fits nicely with what our theories predict should happen when a star its size collapses into a black hole. When it runs out of fuel, the star releases an immense number of neutrinos, so many that it starts losing mass. This in turn weakens its gravitational field, and it starts losing its grip on the cloud of super-heated hydrogen ions enveloping it. As the gas floats away, it cools enough for electrons to re-attach to the hydrogen nuclei.
Now, a star is basically an explosion so massive that it holds itself together under its own weight. Gravity tries to crunch everything into a point, while the pressure generated by fusion inside the star pushes outward. While the two are in balance, the star burns away merrily. But once it starts running out of fuel, gravity wins and draws everything inward. Matter sinks into the core, making it so dense that it collapses in on itself, forming a black hole.
Ironically, it’s gravity that makes stars explode into supernovas — the outer layers are drawn towards the core at such speeds that they bounce off, compacting the core even further. N6946-BH1 didn’t make it to a supernova, but its core did collapse into a black hole. The team theorizes that the flaring we’ve seen is caused by super-heated gas forming an accretion disk around the singularity.
“The event is consistent with the ejection of the envelope of a red supergiant in a failed supernova and the late-time emission could be powered by fallback accretion onto a newly-formed black hole,” the authors write.
We’re still looking for answers
There are two other ways to explain a vanishing star, but neither stands up to scrutiny. N6946-BH1 could have merged with another star, but then it should have burned even brighter than before, and for longer than a few months. Or it could be shrouded in a dust cloud, but a cloud couldn't have hidden it for this long.
“It’s an exciting result and long anticipated,” says Stan Woosley at Lick Observatory in California.
“This may be the first direct clue to how the collapse of a star can lead to the formation of a black hole,” says Avi Loeb at Harvard University.
Thankfully, confirming whether or not we're looking at a black hole isn't very difficult. The gases that make up the accretion disk should emit a specific spectrum of X-rays as they're being pulled into the black hole, which we can pick up. Kochanek says his group will be getting new data from the Chandra X-ray Observatory sometime in the next two months.
So is this a black hole? Even if they don't pick up any X-rays, the team says that doesn't rule out such an object, and they will continue to monitor the spot with Hubble: the longer the star stays gone, the more likely it's a black hole.
“I’m not quite at ‘I’d bet my life on it’ yet,” Kochanek says, “but I’m willing to go for your life.”
The full paper, titled "The search for failed supernovae with the Large Binocular Telescope: confirmation of a disappearing star," has been published online on arXiv and is still awaiting peer review.
The Environmental Defense Fund's Oil and Gas program has released a new nationwide report on the most common sources of methane leaks at oil and gas pads. Surprisingly, most of the leaks were traced back to faulty piping, vents, or doors on gas tanks at newer, not older, wells.
Shale gas drilling rig near Alvarado, Texas. Image credits David R. Tribble.
Methane is a much more powerful greenhouse gas than carbon dioxide. It's also extremely flammable and can be fatal following prolonged exposure. Most of it comes from industry, including but not limited to oil and gas. Since methane is found in almost all hydrocarbon deposits and forms the bulk of naturally occurring gas reserves, monitoring wells for methane leaks is hugely important.
To get a nationwide view of the issue, a team from the Environmental Defense Fund's Oil and Gas program partnered with Gas Leaks Inc., a company that specializes in using infrared imagery to inspect well pads for leaks. Together, they performed a helicopter survey of over 8,000 pads in seven regions of the United States. The researchers covered important drilling areas such as North Dakota's Bakken Shale and the Marcellus Shale in southwestern Pennsylvania to "better characterize the prevalence of 'super emitters,'" the largest sources of pollution in the methane industry, according to a blog post by one of the researchers.
Their results show that at the roughly 500 sites with detected emissions, about 90 percent of leaks could be traced to the vents, hatches, or doors on gas tanks. These leaks aren't indicative of wear on the installations, as emissions were predominantly seen at newer wells. The paper considers this a clear sign that the current systems installed to control leaks aren't working. More effective measures, such as vapor recovery towers (under-pressure chambers used to draw in natural gases that might otherwise leak into the atmosphere), are required at these pads to avoid further contamination.
Thermal image of a methane leak, California. Image via livescience
“Since this study found a higher frequency of detected emissions at sites within the first few months of production, controlling tank emissions as soon as a site enters production could reduce overall emissions,” the study reads.
The US Energy and Information Administration lists Pennsylvania as the second largest producer of natural gas in the country.
“The best companies understand the business case for reducing methane leaks, as what doesn’t leak into the atmosphere can be used for energy production,” said Pennsylvania Gov. Tom Wolf.
Wolf and his administration outlined a plan for reducing methane emissions in the state in January, which relied heavily on the state Department of Environmental Protection (DEP). Following the publication of this study, the DEP announced on Tuesday that it was restarting an initiative to make inspections more consistent, but offered few other details.
The full paper, titled “Aerial surveys of elevated hydrocarbon emissions from oil and gas production sites,” has been published online in the journal Environmental Science and Technology and can be read here.
We've long known just how damaging extremist governments can be to the lives of those they rule over and to the world at large, but it seems the art of ruling leaves no room for innocence in any part of the world. NPR recently investigated records pertaining to secret experiments and found the names of nearly 4,000 individuals who were exposed to mustard gas. These names are joined by a further 1,700 individuals for whom NPR could only find a "last known location."
Image via Pinterest
Mustard gas is a sulfur-based compound used for chemical warfare, with the pleasant effect of raising blisters on skin and attacking any eye tissue it comes into contact with. It easily penetrates cotton, wool, and other common fabrics, so special equipment is needed to protect soldiers against it, and in battle conditions there's rarely enough to go around.
The effects on victims aren't immediate, manifesting up to 24 hours after contact, and the pure vapor has no smell, so very large doses can be administered to a subject without him or her ever knowing.
After that time, what starts out as an itch develops into full-blown pustules filled with yellowish liquid that cover horrendous chemical burns. In high doses, if inhaled, it attacks the lungs, causing massive bleeding and blistering of the mucous membranes and leading to pulmonary edema.
The gas was mass-produced by the Imperial German Army starting in 1916 and used extensively on the static, trench-ridden battlefields of World War I, to terrible effect; its memory lingered in the minds of armed forces worldwide.
So while the tests had the arguably understandable purpose of evaluating protective suits and gas masks, NPR's investigation revealed something much darker: some trials, which required soldiers to be exposed directly to the compound, were designed to look for racial differences in resistance to the gas that could be exploited during combat.
And, in classic government fashion, the subjects weren’t even cared for:
“Officials at the Department of Veterans Affairs told NPR that since 1993, the agency had been able to locate only 610 test subjects, to offer compensation to those who were permanently injured,” the NPR page reads.
Considering NPR's investigation took some six months and came up with almost ten times as many names, that's hard to believe.
Anyway, if you're curious to know whether anyone you know was exposed to mustard gas, you can find a work-in-progress list on NPR's webpage, here.