Coupled with renewable sources of energy like wind and solar, nuclear power can help us transition to a zero-emission future, a new study reports. Especially in countries with geographies less suited to these renewable sources, nuclear energy could play a key role in helping us finally get rid of our polluting fossil fuel industry.
We’re all excited about renewable energy — well, fossil fuel companies are understandably less happy about it, but in general, it’s excellent news. But renewable energy isn’t perfect: it has gaps where it doesn’t provide energy, and the infrastructure isn’t quite there yet.
“Renewable energy sources like wind and solar are great for reducing carbon-emissions,” says Lei Duan, from Carnegie’s Department of Global Ecology, and author of a new study analyzing this. “However, the wind and sun have natural variation in their availability from day to day, as well as across geographic regions, and this creates complications for total emissions reduction.”
So we need something to help fill in the gaps, at least until renewables have matured enough to take over. In today’s world, this unfortunately means either coal, gas, or oil. But there’s another way, the authors of a new study argue: by using nuclear.
Nuclear energy has a very bad rep, and many fear it based on what happened at Chernobyl and Fukushima — but this reputation is largely undeserved. Study after study has shown that nuclear energy is one of the most reliable and safe sources of energy. In fact, nuclear energy is responsible for 99.8% fewer deaths than brown coal; 99.7% fewer than coal; 99.6% fewer than oil; and 97.5% fewer than gas. Most of these fossil fuel deaths come from pollution.
In terms of both safety and emissions, nuclear energy is on par with renewables — and it would complement them well. Previous estimates have suggested that in many parts of the world, renewables could account for 80% of energy production within the decade; the new study suggests that the remaining 20% should come from nuclear.
“To nail down that last 10 or 20 percent of decarbonization, we need to have more tools in our toolbox, and not just wind and solar,” explained Ken Caldeira, also one of the study authors.
To assess the potential of nuclear power to address this need, Duan, Caldeira, and their colleagues looked at the wind and solar energy potential in 42 countries. They found that some countries, like the US, have great potential for implementing new sources of solar and wind energy. For these countries, nuclear power would only be needed as a complement to clear the last remaining hurdles of decarbonization. But in countries with less potential (like Brazil, for instance), nuclear power could play a more important role, accelerating the energy system’s decarbonization.
Furthermore, the team notes, nuclear energy can be cost-competitive with other types of energy, and can even promote wind and solar by storing energy.
“In our model, in moderate decarbonization scenarios, solar and wind can provide less costly electricity when competing against nuclear at near-current US Energy Information Administration cost levels,” the study reads. “In contrast, in deeply decarbonized systems (for example, beyond ~80% emissions reduction) and in the absence of low-cost grid-flexibility mechanisms, nuclear can be competitive with solar and wind. High-quality wind resources can make it difficult for nuclear to compete. Thermal heat storage coupled to nuclear power can, in some cases, promote wind and solar.”
All in all, nuclear energy seems to be the missing puzzle piece in our plans to decarbonize energy production. While often feared, nuclear energy is a safe and reliable alternative and a great complement to renewable energy.
“Our analysis looked at the cheapest way to eliminate carbon dioxide emissions assuming today’s prices. We found that at today’s price, nuclear is the cheapest way to eliminate all electricity-system carbon emissions nearly everywhere,” Caldeira concludes. “However, if energy storage technologies became very cheap, then wind and solar could potentially be the least-cost path to a zero-emission electricity system.”
Climate change could severely impact our food and water security in the future by increasing the probability of droughts co-occurring in food-producing areas around the world, a new study says.
Research led by scientists at Washington State University (WSU) warns that the future may hold less bountiful tables, and fewer meals, for us all. According to the findings, the probability of droughts co-occurring will increase by 40% by the mid-21st century, and by 60% by the end of the century, relative to the late 20th century (before the year 2000). All in all, this amounts to an almost ninefold increase in the exposure of agricultural lands and human populations to severe, co-occurring droughts relative to today.
While modern technology and distribution systems insulate us from the effects of drought to a much greater extent than at any previous time in history, co-occurring (or ‘compound’) droughts can severely impact global food and water availability if they affect key food-producing areas. If such an event were to come to pass, millions of people would struggle to access food in the same quantities and varieties as before.
“There could be around 120 million people across the globe simultaneously exposed to severe compound droughts each year by the end of the century,” said lead author Jitendra Singh, a former postdoctoral researcher at the WSU School of the Environment now at ETH Zurich, Switzerland. “Many of the regions our analysis shows will be most affected are already vulnerable and so the potential for droughts to become disasters is high.”
This increased risk of compound droughts is mainly the result of climate change, which itself is the product of greenhouse gas emissions associated with decades of reliance on fossil fuels. Another contributing factor is a projected 22% increase in the frequency of El Niño and La Niña events — the two opposite phases of the El Niño Southern Oscillation (ENSO) — caused by warmer average temperatures.
Roughly 75% of compound droughts in the future will occur during these irregular but recurring periods of variation in the world’s oceans, the team explains. The shifting phases of the ENSO have historically played a part in some of the greatest periods of environmental upheaval globally, as they influence precipitation patterns across a huge stretch of the planet. Compound droughts occurring across Asia, Brazil, and Africa during 1876-1878 were generated by these shifts. They led to massive crop failures and famines which killed in excess of 50 million people.
“While technology and other circumstances today are a lot different than they were in the late 19th century, crop failures in multiple breadbasket regions still have the potential to affect global food availability,” said study coauthor Deepti Singh, an assistant professor in the WSU School of the Environment. “This could in turn increase volatility in global food prices, affecting food access and exacerbating food insecurity, particularly in regions that are already vulnerable to environmental shocks such as droughts.”
The team focused their analysis on the ten areas of the world that receive most of their rainfall between June and September, have monthly summer precipitation showing great variability, and fall under the influence of ENSO variations — factors that leave them exposed to co-occurring droughts. Several of these are important agricultural areas on a global level, they add, and they also include countries that are already experiencing food and water insecurity.
Of the investigated areas, North and South America were among the most likely to experience compound droughts in the future. Certain regions of Asia are also at risk; however, large stretches of agricultural land there are projected to become wetter rather than drier, heavily mitigating the risk of crop failure and subsequent famine.
Still, that leaves us in quite a dire situation. The United States today is a major exporter of grains, including maize, to multiple countries around the world. In the event of a severe drought, reduced production here would impact food security around the world, raising grain prices and significantly decreasing food security — grains are staple foods, and their scarcity hits the most vulnerable groups in every community the hardest.
“The potential for a food security crisis increases even if these droughts aren’t affecting major food producing regions but rather many regions that are already vulnerable to food insecurity,” said coauthor Weston Anderson, an assistant research scientist at the Earth System Science Interdisciplinary Center at the University of Maryland.
“Simultaneous droughts in food insecure regions could in turn amplify stresses on international agencies responsible for disaster relief by requiring the provision of humanitarian aid to a greater number of people simultaneously.”
Still, for what it’s worth, these estimates assume that the world maintains a high rate of fossil fuel usage. If carbon emissions fall instead, the risk and intensity of co-occurring droughts would be greatly mitigated, the team explains. Knowing that nearly 75% of compound droughts occur alongside ENSO events also gives us the chance to predict where such droughts may occur and prepare for them in advance.
“This means that co-occurring droughts during ENSO events will likely affect the same geographical regions they do today albeit with greater severity,” said Deepti Singh. “Being able to predict where these droughts will occur and their potential impacts can help society develop plans and efforts to minimize economic losses and reduce human suffering from such climate-driven disasters.”
The paper “Enhanced risk of concurrent regional droughts with increased ENSO variability and warming” has been published in the journal Nature Climate Change.
Researchers have described two species of worms sporting a distinctive hammerhead look. The worms, discovered in parts of Europe and Africa, are likely invasive species and could wreak havoc on soil biodiversity.
As the world is becoming increasingly globalized, species are being brought from one part of the world to the other. These “alien” species have the potential to overrun the new ecosystem they’re brought to, and oftentimes, by the time you realize there’s a problem, there’s little you can do about it.
Oftentimes, you don’t even notice these invasive species unless you’re really paying attention — and this is exactly the case here.
An international team led by Professor Jean-Lou Justine from ISYEB (Muséum National d’Histoire Naturelle, Paris, France) described two new species of hammerhead flatworms. This is the first study of these species, although flatworms have been invading Europe for some time.
“We were surprised at first that some of the species which were invading Europe, a place where biodiversity is supposed to be well known, did not even have a name. That was the case of Obama nungara, a species described only in 2016,” Justine told ZME Science. Justine’s team later documented that species’ invasion of Europe in a 2020 paper with a charming title. The name Obama is formed by a composition of the Tupi words oba (leaf) and ma (animal), a reference to its body shape.
“This is also the case for the two new species described in this paper, they had no names and were never described in their countries of origin.”
Hammerhead worms are predatory creatures, much like their shark namesakes. They can track their prey (typically other worms or mollusks), and bear a distinctive shape on their head region, which helps them creep over the soil substrate.
A number of hammerhead worms have been described by scientists but, in many cases, the researchers don’t describe them in their land of origin, instead finding them in countries that they have already invaded. For instance, two previously described species (Bipalium pennsylvanicum and Bipalium adventitium) originate from Asia but were first reported from the US. The two newest species follow the same trend.
“I have been working on invasive land flatworms since 2013, when I discovered that gardens in France (and Europe) were invaded by bizarre worms and that almost no scientist was working on this problem. Leigh Winsor, the Australian member of our team, has been working on them since the 80’s,” Justine adds.
The first new species was named Humbertium covidum, as an homage to the victims of COVID-19, but also because much of the work was carried out during the COVID-19 lockdown.
“Due to the pandemic, during the lockdowns most of us were home, with our laboratory closed. No field expeditions were possible. I convinced my colleagues to gather all the information we had about these flatworms, do the computer analyses, and finally write this very long paper. We decided to name one of the species “covidum”, paying homage to the victims of the pandemic.”
The worm was found in two gardens in the Pyrénées-Atlantiques (France) and also in Veneto (Italy). Although some hammerhead worms can reach up to one meter, this one is small (3 cm) and looks uniformly metallic black — an unusual color among hammerhead flatworms.
These creatures are not easy to characterize based on their morphology alone, so researchers decided to use mitochondrial genetic analysis, which can provide a lot of information about the origin of this species and which other species it is related to. This species appears to have originated in Asia and is potentially invasive. By analyzing the contents of its stomach, researchers also found that it eats snails.
The second species, Diversibipalium mayottensis, was found only in Mayotte (a French island in the Mozambique Channel, Indian Ocean). The species is as small as the other one, but instead of metallic black, it exhibits a spectacular green-blue iridescence. Based on genetic analysis, this species appears to belong to a “sister group” of all other hammerhead flatworms, which means it could help researchers understand how these creatures evolved. Its origin could be Madagascar, but this is not entirely clear. Presumably, at some point in the past, people brought plants from Madagascar and, unknowingly, the worm along with them.
“All land flatworms are generally transported with potted plants,” Justine says. “For the species in Europe, Humbertium covidum, it is likely that the species was transported in recent years, from Asia, with some imported plant. For the species in Mayotte, Diversibipalium mayottensis, it is likely that it comes from Madagascar, but the transport might have happened a long time ago, perhaps even centuries ago, by traditional exchanges between islands in this part of Africa.”
Although finding new species is generally good news, this is not necessarily the case here. These flatworms are probably bad news, especially outside their natural environment. For instance, one study found that a single worm species from New Zealand became invasive in the UK and, once it was established, earthworm biomass declined by 20%.
"All land flatworms are predators of the other animals of the soil fauna, and, as such, can threaten the biodiversity and ecological balance of species in a soil. However, there are only a very few papers in which their impact was thoroughly studied, because these studies are long and expensive," Justine explained in an email to ZME Science.
The study comes with a clear warning: invasive species are probably more prevalent than we realize. In the US alone, invasive species are estimated to cause damage of around $120 billion, and the figure is likely to increase as the world becomes more and more interconnected. Unfortunately, when it comes to dealing with invasive hammerhead worms, prevention is pretty much our only weapon.
"Basically, there is not much to be done once a land flatworm has invaded a country. Prevention is the key, we need to avoid importing new flatworms (that is true for Europe and US)," Justine concludes.
New research from the Swiss Federal Laboratories for Materials Science and Technology (EMPA), Utrecht University, and the Austrian Central Institute for Meteorology and Geophysics showcases the scale and huge range of pollution carried through the atmosphere.
The findings suggest that around 3,000 tons of nanoplastic particles are deposited in Switzerland every year, including in the most remote Alpine regions. Most are produced in cities around the country, but others are particles from the ocean that get introduced into the atmosphere by waves. Some of these, originating from the Atlantic, travel as far as 2,000 kilometers through the air before settling, the team explains.
Such results build on a previous body of research showing that plastic pollution has become ubiquitous on Earth, with nano- and microplastics, in particular, being pervasive on the planet.
Although we’re confident that the Earth has a plastic problem, judging by the overall data we have so far, the details of how nanoplastics travel through the air are still poorly understood. The current study gives us the most accurate record of plastic pollution in the air to date, according to the authors.
For the study, the researchers developed a novel chemical method that uses a mass spectrometer to measure the plastic contamination levels of different samples. These samples were obtained from a small area on the Hoher Sonnenblick mountain in the Hohe Tauern National Park, Austria, at an altitude of around 3,100 meters above sea level. The site hosts an observatory of the Central Institute for Meteorology and Geodynamics that has been in operation since 1886.
The samples were collected daily, in all types of weather, at 8 AM. They consisted of the top layer of snow, harvested and processed with extreme care not to contaminate them with nanoplastics from the air or from the researchers’ clothes. According to the measurements, about 43 trillion miniature plastic particles land in Switzerland every year — equivalent to around 3,000 tons.
In the lab, the team measured nanoplastic content in each sample and then analyzed these particles to try and determine their origin. Wind and weather data from all over Europe were also used in order to help determine the particles’ origins. Most of the particles were likely produced and released into the atmosphere in dense urban areas. Roughly one-third of the particles found in the samples came from within 200 kilometers. However, around 10% of the total (judging from their level of degradation and other characteristics) were blown to the mountain from over 2000 kilometers away, from the Atlantic; these particles were likely formed in the ocean from larger debris and introduced into the atmosphere by the spray of waves.
Plastic nanoparticles are produced by weathering and mechanical abrasion of larger pieces of plastic. They are light enough to behave comparably to a gas. Their effect on human health is not yet known, but we do know that they end up deep in our lungs, from where they could enter our bloodstream. What they do there, however, is still a mystery.
The current study doesn’t help us understand their effects any better, but it does put the scale of nanoplastic pollution into perspective. These estimates are very high compared to other studies, and more research is needed to verify them — but for now, they paint a very concerning picture.
The paper “Nanoplastics transport to the remote, high-altitude Alps” has been published in the journal Environmental Pollution.
The Omicron strain of the coronavirus is spreading quickly around the world. After the US surpassed one million new cases per day earlier this week, now the EU, as well, is passing that unfortunate milestone.
Following the New Year’s weekend, the US on Monday reported 1,082,549 new cases of coronavirus infection inside its borders, according to data from Johns Hopkins University. Although the number of cases reported on Mondays is typically higher than on other days, due to delays in weekend tallying, this still marked a very worrying record: the figure was double that of the previous Monday.
Judging from previous data (leading up to the week ending on December 25th, 2021), the Omicron variant accounts for roughly 60% of these cases.
The European Union, as a whole, also reported passing this milestone yesterday, Wednesday 5th. Countries such as Britain and France have announced record numbers of daily new cases; Britain reached 200,000 on Tuesday, while France reported in excess of 270,000. Both of these figures are higher than any previously-seen number of new daily cases.
According to Agence France-Presse (AFP), Cyprus now has the highest infection rate per capita, after recording 5,457 new cases on Tuesday — a national record.
As in the US, the more infectious Omicron variant is behind a large portion of the new cases in the EU. Although this strain seems to produce less severe symptoms and generally results in fewer hospitalizations than previous variants, governments are still ill at ease over the growing number of cases. Hospitals and health services are still under immense pressure, and can easily become overwhelmed if a large number of patients seek help at the same time; the high number of infected individuals definitely raises the possibility that this can happen.
But the rampant spread of the virus also raises a chilling possibility: that of mutations taking place. The World Health Organization (WHO) warned of this possibility on Tuesday, in response to the numbers reported by the US and of the deteriorating situation in Europe.
“The more Omicron spreads, the more it transmits and the more it replicates, the more likely it is to throw out a new variant,” said WHO senior emergencies officer Catherine Smallwood in an interview for the AFP. “Now, Omicron is lethal, it can cause death […] maybe a little bit less than Delta, but who’s to say what the next variant might throw out? Even in well-capacitated, sophisticated health systems there are real struggles that are happening at the moment.”
On Tuesday, the British government announced that hospitals have switched to “war footing” due to staff shortages. Prime Minister Boris Johnson promised to take measures to address staff shortages in the most heavily affected areas, ranging from drafting medical volunteers to calling for army support.
Australia is also facing a record-high number of new cases, reaching almost 65,000 daily as of Wednesday.
School is an institution that is hated (especially during exams) by millions of kids around the world — but at the same time billions of adults remember it as the ‘good old days’. For all its good and bad, society as we know it couldn’t exist without schools — and we’re not just talking about the building, we’re talking about the entire system and environment that allows us to pass knowledge to younger generations and prepare them for what’s to come in the real world (at least in theory). But who actually invented school?
From old school to modern schooling system
Ironically enough, for all the information you can find in schools, no textbook mentions exactly when and how the idea of a school originated. This is mostly because it depends on how exactly you define a school. For instance, in ancient Greece, education was somewhat democratized, and education in a gymnasium school was considered essential for participation in Greek culture, but it was reserved only for boys (and often, not all boys). In ancient Rome, rich children were tutored by private professors. Neither of these is a school in the sense we use today — public, formal education that is compulsory, open, and available to all — though you could argue that, in some sense, school dates from ancient times, and that the organized practice of teaching children goes back thousands of years.
Compulsory education was also not an unheard-of concept in ancient times — though it was mostly compulsory for those tied to royal, religious, or military organizations. In fact, Plato’s landmark The Republic, written more than 2,300 years ago, argues in favor of compulsory education, though women and slaves were not truly a part of Greek society.
Much information about schooling is also lost to the shroud of time. For instance, there is some indirect evidence about schools in China existing at least 3,000 years ago, but this comes from “oracle bones” where parents would try to divine whether it was auspicious for their children to go to ‘school’ — and there’s little information about what these schools were like.
It’s not just the Chinese, Greeks, and Romans. The Hindus, for instance, had developed their own schooling system in the form of gurukuls. In 425 AD, the Byzantine Empire came up with the world’s first known primary education system, dedicated to educating soldiers enrolled in the Byzantine army so that no one in its ranks would have trouble communicating or understanding war manuals. Different parts of the world developed different types of education — some more efficient than others.
In Western Europe (and England in particular), the church became involved in public education early on, and a significant number of church schools were founded in the Early Middle Ages. The oldest school still in operation (and the oldest continuously operating one) is The King’s School in Canterbury, which dates from the year 597. Several other schools still in operation were founded in the 6th century — though again, you could argue whether they were true schools, as they were only open to boys.
Furthermore, compared to modern schools, education in the above-mentioned institutions focused mostly on religious teachings, language, and basic practical skills. Many of them operated in a single room with no set standards or curriculum, but as humanity progressed, people started to realize the need for an organized system to educate future generations.
For more than ten centuries, schools maintained the same general profile, focused mostly on a niche set of skills and religious training. In the 9th century, the first university was founded in Fez, Morocco. However, that too was founded as a mosque and focused on religious teachings. The oldest university still in operation, the University of Bologna, in Italy, was founded in 1088. It hired scholars from the city’s pre-existing educational facilities and gave lectures in informal schools called scholae. In addition to religion, the university also taught liberal arts, notarial law, and scrivenery (official writing). The university is notable for also teaching civil law.
However, the university is not necessarily the same as a school — it wasn’t a public “for all” education system, but rather a “school” for the intellectual elite. For schools to truly emerge as we know them today, we have to fast forward a few more centuries.
Compulsory, free education for all
In 1592, the German duchy of Palatinate-Zweibrücken became the first territory in the world with compulsory education for girls and boys — a remarkable and often-ignored achievement in the history of education. The duchy was followed in 1598 by Strasbourg, then a free city of the Holy Roman Empire and now part of France. Similar attempts emerged a few decades later in Scotland, although this compulsory education was subject to political and social turmoil.
In the United States — or rather, in the colonies that would later become the United States — three legislative acts enacted in the Massachusetts Bay Colony in 1642, 1647, and 1648 mandated that every town of more than 50 families hire a teacher, and every town of more than 100 families establish a school.
Prussia, a prominent German state, implemented a compulsory education system in 1763 by royal decree. The Prussian General School Regulation required all young citizens, girls and boys, to be educated from age 5 to age 13-14 and to be provided with a basic education in religion, singing, reading, and writing based on a regulated, state-provided curriculum of textbooks. As funding was scarce, teachers (often former soldiers) cultivated silkworms to make a living. In nearby Austria, Empress Maria Theresa introduced mandatory primary education in 1774 — and mandatory, systemized education was starting to take shape in Europe. Schools, as we know them today, were becoming a thing.
Meanwhile, the US was having its own educational revolution.
In 1837, lawyer and educator Horace Mann became the Secretary of the newly formed Massachusetts Board of Education. Mann was a supporter of public schooling, and he believed that, without a well-educated population, political stability and social harmony could not be achieved. So he put forward the idea of a universal public education system for teaching American kids. Mann wanted a system with a set curriculum taught to students in an organized manner by well-trained subject experts.
“Without undervaluing any other human agency, it may be safely affirmed that the Common School…may become the most effective and benignant of all forces of civilization.” — Horace Mann, Father of the Common School Movement
Mann implemented his “normal school” system in Massachusetts, and other US states later followed with the education reforms he envisioned. He also managed to convince his colleagues and other modernizers to support his idea of providing government-funded primary education for all.
Due to his efforts, in 1852, Massachusetts became the first American state with a mandatory education law. School attendance and elementary education were made compulsory in various states (by 1917, mandatory education laws had been enacted in every US state), teacher training programs were launched, and new public schools were opened in rural areas.
At a time when women were not even allowed to attend school in many parts of the world, Mann advocated the appointment of women as teachers in public schools. Instead of offering religious instruction, Mann’s normal schools aimed to teach reading, writing, grammar, arithmetic, geography, and history. He believed that school education should not incorporate sectarian instruction — and for that very reason, some religious leaders and schoolmasters criticized Mann for promoting non-sectarian education.
The innovative ideas and reforms introduced by Mann in the 1800s became the foundation of our modern school system. For his valuable contribution in the field of education, historians sometimes credit him as the inventor of the modern school system.
However, as we’ve seen, the history of schools is intricate, complex, and very rich. There is no one “inventor” of school — the process of arriving at the school systems we have today (imperfect as they may be) took thousands of years of progress, which was not always straightforward.
Shocking facts about school education
Now that we’ve looked a bit at the history of the school, let’s see how things are today — and why there’s still plenty of work to be done in schools around the world.
A study conducted by the Institute of Education in the UK suggests that the quality of primary education is more crucial for an individual’s academic progress, social behavior, and intellectual development than factors such as family income, background, and gender. Another study highlights that students who receive a good elementary education and have a positive attitude about the significance of their performance in primary and middle school are more likely to earn well and live a better life than others in the future.
A UNESCO report reveals that school education is compulsory up to nine years of age in 155 countries, but unfortunately, more than 250 million children worldwide are still unable to attend school.
According to the International Labour Organization (ILO), poverty and lack of educational opportunities force 160 million children into work across the globe, and about 80 million of them work in unhealthy environments. Thousands of these children are physically and sexually abused, tortured, and even trained to work for drug mafias, criminal groups, and terrorist organizations. Some studies reveal that child labor is also associated with school dropout in less developed countries: due to poor financial conditions, many young people start prioritizing economic activities and lose interest in costly education. An easily accessible, high-quality school education model that allowed children from poor families to pursue education without compromising their financial security could therefore play an important role in eliminating child labor.
The African nation of South Sudan has the lowest literacy rate in the world. Only 8% of women in the country are literate, and overall only 27% of its adult population can read and write. Some 98% of the schools that offer elementary education in South Sudan do not have an electric power supply, and only one-third of them have access to safe drinking water.
City Montessori School (CMS), located in Lucknow, India, is hailed as the largest school in the world. The CMS campus houses 1,050 classrooms in which more than 50,000 students attend classes every day.
For Horace Mann, schools were a means to produce good citizens, uphold democratic values and ensure the well-being of society. Though not all schools are able to achieve these goals, the power of school education can be well understood from what famous French poet Victor Hugo once said, “He who opens a school door, closes a prison”.
Bees around the world are struggling with habitat loss; solar parks could provide a safe haven, according to new research.
Researchers at Lancaster University have used computer modeling to investigate how different management scenarios of solar parks could help provide a home for ground-nesting bumblebees. The results are quite encouraging, the team explains, showcasing that solar parks can help maintain significant populations of bumblebees both inside their bounds and in their surroundings.
Although the research focused on bumblebees, the authors are confident that the findings translate over to other pollinators as well.
“Renewable energy development is projected to grow and solar is predicted to lead the way. Solar parks have a high land take per unit of energy produced and this will lead to significant land use change in the future,” says Hollie Blaydes, PhD student and Associate Lecturer at the Lancaster Environment Centre, lead author of the paper, in an email for ZME Science.
“Understanding of the environmental impacts of this land use change is only just emerging, but there is scope to incorporate environmental benefits into the energy transition. One potential benefit is the creation of pollinator habitat within solar parks.”
For the study, the team used computer models that simulated bumblebee foraging behavior across the UK’s solar parks. From there, they examined how different management strategies (each offering varying degrees of resources for the insects) would influence their numbers and activity. They then used statistical analyses to investigate differences in bumblebee density and nest density across the different solar parks in the model.
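The study’s actual model is far more detailed, but the core idea of comparing bee outcomes across management scenarios can be sketched with a toy Monte Carlo simulation. All the numbers below are illustrative assumptions, not values from the paper:

```python
import random

def simulate_foragers(floral_resources, n_bees=1000, seed=42):
    """Toy model: each simulated bee forages successfully with a
    probability proportional to the park's floral resources."""
    rng = random.Random(seed)
    successes = sum(rng.random() < floral_resources for _ in range(n_bees))
    return successes / n_bees  # fraction of bees that find enough food

# Hypothetical resource levels: bare grassland vs. flower-rich meadow
grassland_rate = simulate_foragers(floral_resources=0.2)
meadow_rate = simulate_foragers(floral_resources=0.8)
print(f"grassland: {grassland_rate:.2f}, meadow: {meadow_rate:.2f}")
```

With the same seed for both runs, every draw that succeeds under the grassland scenario also succeeds under the meadow scenario, so the meadow always comes out ahead, mirroring (in a crude way) the direction of the study’s result.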
Managing solar parks as meadows, they explain, would make the most resources available to bumblebees and could support populations four times as large as solar parks with only grassland and no flowers. The changes required to transition solar parks from grass to meadows are quite simple and could provide significant benefits for pollinators across the country — in addition to generating clean energy.
Larger, more elongated, and more resource-rich solar parks (i.e. with more flowers) could help increase bumblebee density up to 1 km outside of their bounds, the team found. This means that well-managed solar parks could act as hotspots, delivering pollinator services to crops in nearby agricultural lands.
“Pollinator habitat has already been established within some solar parks, but there is little evidence of how effective this is and how pollinators respond,” Blaydes added for ZME Science. “This knowledge gap inspired us to perform this research and by doing so we have provided some of the first evidence to suggest that creating suitable habitat on solar parks could be an effective way to support bumblebee populations.”
Solar parks in the UK are often located in areas where intensive agriculture is practiced. This makes them ideally suited as bumblebee refuges, the team explains. Their potential is further boosted by the fact that the total land area used for solar parks in the UK is increasing steadily as more and more of the country’s energy demand is covered by solar panels.
The UK currently has around 14,000 hectares of solar parks, which is projected to increase to 90,300 hectares as part of the country’s plan to meet net-zero emission targets. All that space can be put to good use in the service of pollinators.
However, the path forward is not entirely clear-cut. Solar park management is often outsourced on contracts that typically last around two years at a time. This can make it hard to plan management strategies for the long term, as each new company needs to adapt to and maintain the habitats it inherits.
“The creation of floral-rich habitat on solar parks is likely to benefit a wide range of pollinators. In this study, we focused only on ground-nesting bumblebees given they are a key pollinator of agricultural crops in the UK. Other pollinator groups rely on similar resources to ground-nesting bumblebees, but differences in flight ranges and foraging patterns means that a slightly different modelling approach would be needed to test solar park management and design options for these groups,” Blaydes adds for ZME Science.
By harboring bumblebees that handle the pollination of nearby crops, solar parks could also offer huge economic benefits for farmers and society as a whole. Pollinators the world over are struggling, and spaces such as solar parks could provide veritable lifeboats for these species, which are under pressure from habitat destruction, pesticides, pollution, and dwindling food supplies.
“Solar parks could act as safe havens for bumblebees and other pollinators if managed appropriately. Our study found that solar parks providing the most foraging and nesting resources were most effective at boosting bumblebee numbers both inside the solar park and in the surroundings,” Blaydes adds for ZME Science. “This suggests that resource-rich solar parks could be used as a conservation tool to help address drivers of bumblebee decline and that there could be implications for pollination to crops and wild plants in the surrounding land.”
The countries with the richest biodiversity don’t always take the necessary measures to protect it. Whether it’s because they lack the motivation or the resources to do so, or because they prioritize short-term economic benefits over environmental protection, countries often neglect their responsibilities. This prompted researchers to explore a curious question: why not pay them to protect their environment?
It’s not as crazy as it sounds. Almost without exception, rich countries got rich in the first place by burning a lot of fossil fuels, and it would be a fair way to balance things. At the same time, rich countries that want to reduce their emissions could make a bigger impact by investing abroad than inside their own borders.
“Human well-being depends on ecological life support. Yet, we are constantly losing biodiversity and therefore the resilience of ecosystems. At the international level, there are political goals, but the implementation of conservation policies is a national task. There is no global financial mechanism that can help nations to reach their biodiversity targets”, says lead author Nils Droste from Lund University, Sweden.
In itself, the idea isn’t exactly novel — several such mechanisms are already in place. For instance, Norway and Germany are already paying Brazil to reduce deforestation (although the payments have been frozen in light of recent deforestation). Previous research has shown that this sort of scheme can help protect existing areas and create additional protected areas.
But a global framework isn’t in place yet, and one could offer far greater benefits, the researchers argue. They propose three possible mechanisms:
An ecocentric model: only protected area extent per country counts — the bigger the protected area, the better;
A socio-ecological model: both protected areas and the Human Development Index count. This adds an incentive to also include development justice in the previous model;
An anthropocentric model: population density is also considered, as people can benefit locally from protected areas.
In most cases, the researchers say, the second model offers the most incentives. Essentially, it provides the most value for the money invested in conservation and protection. The results were particularly impressive for countries that are currently doing the least to achieve their protection goals.
“While we developed the socio-ecological design with a fairness element in mind, believing that developing countries might be more easily convinced by a design that benefits them, we were surprised how well this particular design aligns with the global policy goals”, says Nils Droste. “It would most strongly incentivize additional conservation action where the global community is lacking it the most”, he adds.
Of course, the question of “should we do it” is still on the table. The downside is obvious: you pay a lot of money for something which doesn’t benefit you directly (and there’s also the problem of some of the money being lost through corruption). However, protecting biodiversity is truly a global challenge that will require global efforts to solve. In the long run, everyone would benefit from protecting biodiversity worldwide.
Ethically, the plan would allow richer developed countries to mend past environmental damage. But politically, offering money to other countries is never a popular idea, and countries are rarely keen on opting for such plans. While researchers expect this type of project to help the planet as a whole (it’s in everyone’s best interest to have developing nations grow sustainably), it remains to be seen whether something like this can truly catch on.
“We know that we need to change land use in order to preserve biodiversity. Protecting land from degradation and providing healthy ecosystems, clean air or clean rivers is a function of the state. Giving a financial reward to governments for such public ecosystem services will ease the provision of corresponding conservation efforts and will help to put this on the agenda,” concludes Droste.
Illegal mining in the Amazon continues to grow and expand, posing a threat not just to the environment but to local communities’ health as well. A few days ago, a rumor that gold had been found in the Madeira River in the southern Amazon rainforest sent would-be miners into a frenzy, with hundreds of rafts spotted on the river.
After a rather slow crackdown, the operations have now been stopped, but many fear the miners are still active and are simply more careful about hiding.
The Madeira River is the biggest tributary of the Amazon, the world’s largest river. The Madeira alone contains 40% of the fish species of the entire Amazon basin, including several endemic species such as the Bolivian river dolphin. It is 3,250 km (2,020 mi) long, and during the rainy season its depth can reach 180 m (590 ft).
Fifteen days ago, around 300 dredging rafts moved onto the river following the gold rumor. The activity is obviously illegal in such an important region of the Amazon basin, but that did little to stop the miners.
The rafts appeared together in lines, placing themselves in plain sight as if nothing out of the ordinary were happening. They are equipped with pumps that suck up the riverbed in search of gold. To make things worse, the miners use toxic mercury to separate the gold from sand and other rocky material, and the remains of the separation are then discarded into the river itself.
This situation is an environmental disaster waiting to happen, but the Brazilian government only started preparing a response on November 25, well after the presence of the rafts was clear. The government’s public statement prompted most of the miners to leave the area, and on November 27 the remaining rafts were burned down by the Federal Police.
Not so long ago, in 2018, scientists were ‘celebrating’ the decline of mining on the river. They published a paper discussing mercury pollution and attributed the concentration levels mostly to the 1980s, when mining activity in the region was intense.
Protected areas are advocated for by scientists and conservationists alike because of their clear environmental benefits. Due to the constant expansion of our species, environments and ecosystems are under more and more pressure, and having safe havens like these protected areas is essential for the wellbeing of our planet.
Currently, around 15% of Earth’s land surface (and around 7% of its ocean surface) is protected. There is therefore a long way to go before we reach the 50% protection goal that many scientists advocate.
However, in urging our governments to reach this 50% target, some scientists have warned that we risk getting so caught up in the quantity of protected land and sea that we fail to consider how effective those protected areas actually are. But before we talk about the quality of protected areas, let’s talk a bit about quantity.
Where does this 50% figure come from anyway?
Prominent voices that are calling for half the Earth to be protected include the aptly named Half-Earth Project based on the book written by E. O. Wilson, as well as Nature Needs Half, an international organization that advocates for half of the planet to be protected by 2030. Their choice of 50% of the Earth, however, is not an arbitrary one, but one that is supported by science.
The Global Safety Net is a tool developed by a team of scientists that combines a number of different data layers and spatial information to estimate how much of Earth’s terrestrial environment needs to be protected to attain three specific goals. Those goals were 1) biodiversity conservation, 2) enhancing carbon storage, and 3) connecting natural habitats through wildlife and climate corridors.
The analysis also found that, globally, there is significant overlap between the land that needs to be protected for conservation and Indigenous lands. The authors of the paper write that by enforcing and protecting Indigenous land rights, we can combine biodiversity and climate goals with social justice and human rights. They emphasize that “with regard to indigenous peoples, the Global Safety Net reaffirms their role as essential guardians of nature”.
Why it can be detrimental to only look at the numbers
Scientists are absolutely right in saying we should aim to protect half the planet. But there’s more to it than that. An equally important consideration is how effective those protected areas are at achieving their stated goals.
Worryingly, some scientists estimate that the true quantity of protected land is much lower than the official 15% once effectiveness is considered. One paper found that “after adjusting for effectiveness, only 6.5%—rather than 15.7%—of the world’s forests are protected”. Importantly, the authors caution their readers against assuming that protected areas will completely eliminate deforestation within their boundaries: on average, they found that protected areas only reduced deforestation by 41%.
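The two figures quoted from that paper are mutually consistent, and it’s worth seeing why: if nominally protected forests only avert about 41% of the deforestation they would ideally stop, the “effective” protected share is roughly the nominal share scaled by that factor. This is a back-of-the-envelope reading, not the paper’s actual method:

```python
nominal_protected = 0.157   # 15.7% of the world's forests nominally protected
effectiveness = 0.41        # protected areas avert ~41% of deforestation
effective_protected = nominal_protected * effectiveness
print(f"{effective_protected:.1%}")  # ~6.4%, close to the paper's 6.5%
```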
Another team of scientists analyzed over 50,000 protected areas in forests around the world and their impact from 2000-2015. A major finding from their paper was that a third of protected areas did not contribute to preventing forest loss. In addition, the areas that were effective only prevented around 30% of forest loss. The authors call for improving the effectiveness of existing protected areas in addition to expanding protected area networks.
Finally, a team of researchers recently authored a paper analyzing protected areas established between 2000 and 2012 and found that significantly more deforestation could be avoided if existing protected areas were made more effective, even though, by their estimate, protected areas already reduce deforestation by 72%. This is notably higher effectiveness than the other papers report, perhaps because the team analyzed only protected areas established relatively recently; multiple papers have found that newer protected areas tend, on average, to be more effective than older ones.
So how can we make protected areas more effective then?
What all of this data shows us is that the conversation surrounding environmental protection needs to be considered in a broader context, and take into consideration economic, political, and social justice concerns. And it is an issue that is far too complex for its success to be measured by a single number.
Eugenics is the idea of selectively ‘improving’ humankind by only allowing specific physical and mental characteristics to exist. It focuses on systematically eradicating ‘undesirable’ physical traits and disabilities, and although it has long been discredited as a science, some of its ideas are still surprisingly prevalent in today’s society.
In some forms, eugenics actually has a remarkably long history. Some indigenous peoples of Brazil practiced infanticide against children born with physical abnormalities, and in ancient Greece, the philosopher Plato argued in favor of selective mating to produce a superior class. The Roman Empire and some Germanic tribes also practiced some forms of eugenics. However, eugenics didn’t truly become a large-scale idea until the 20th century.
Progress didn’t just happen in Europe
The foundation of eugenics lies in racist beliefs and ideologies — and especially in something called scientific racism: a pseudoscientific belief that tries to use empirical evidence to support or justify racism.
In 1981, American paleontologist Stephen Jay Gould wrote ‘The Mismeasure of Man’, a book in which he discusses the problems of the continued belief in the biological determinism that later became eugenics. He gave examples of scientific racism and of how some scientists contributed ‘evidence’ for the supposed superiority of white people, shaping faulty beliefs for decades or centuries. In the book, you can find a remarkable list of horrid theories and studies in which researchers insisted on putting one race above another.
The most famous ranking of races was developed by the 19th-century physician Samuel George Morton. Morton, believing himself to be objective, used his collection of skulls from different American ethnic groups to compare cranial capacities and try to prove the superior intelligence of one group over another. His study essentially ranked average skull sizes (which are not directly connected to intelligence), but he mixed individuals of different statures in his samples, which introduced an obvious bias into his analysis. The analysis was strongly skewed toward linking intelligence with white men, and Morton’s conclusion was that white men were the most intelligent race on the face of the Earth. Gould criticized Morton’s data (though he does mention that the bias may have been unconscious), noting that the analysis includes analytical errors, manipulated sample compositions, and selectively reported data. Gould classifies this as one of the main instances of scientific racism.
But it gets even worse. Colonialism worked hand in hand with the idea that Europeans were carrying out a ‘civilizing mission’: white Europeans were supposedly performing a generous act by ‘helping’ ‘inferior’ races develop and become civilized. This patronizing notion is easily debunked with historical evidence. We know, for instance, that Mesoamerican and Andean civilizations built empires without any need for foreign influence. Or take Stonehenge, a monument in England believed to have been built around 3000–2000 BC: impressive and complex as it is, it is not as advanced as the Giza pyramid complex in Egypt, which was created around the same period — a reminder that civilizations evolved independently, each at its own pace.
Another interesting aspect of eugenics is so-called social Darwinism. Social Darwinists believe that “survival of the fittest” also happens in society — some people become powerful in society because they are somehow innately better.
Social Darwinism is closely associated with one of the founders of eugenics, Sir Francis Galton, a cousin of Charles Darwin. Galton believed that eugenics should ‘help’ the human race reach its ultimate ‘potential’, accelerating ‘evolution’ by eliminating the ‘weak’ and keeping the ‘appropriate races’.
The problem is that this does not fit the scientific evidence. First, genetics has clearly shown that humanity is not separated into biological races; race is a social construct more than a genetic one. Differences do exist, but they have to do with common ancestry. As a species, we share 99.9% of our DNA, regardless of race. No ethnicity is inherently better than another in appearance, behavior, or intelligence.
The other misconception lies in natural selection itself. For humans, evolution is a slow process; it takes a long time for a genetic trait to become dominant in a species. Social change, on the other hand, is much faster: regimes fall, presidents change, policies change. These changes can benefit some people and not others. Under one government, everyone may have easy access to vaccines and survive an epidemic, while under a different regime people can get sick for lack of these basic rights, or even die simply because they do not have enough to eat. This has nothing to do with one group being stronger than another; it is the choice to leave some people unassisted. Simply put, social Darwinism has little scientific evidence to back it up, and a lot of evidence against it.
How technology fits in
Morton’s ideas are obviously flawed, but scientists treated them as objective analysis for decades — and that’s when the chaos started. One scientist cites another, and another, propagating false ideas whose echoes travel through history, affecting millions of lives for years. More theories like these emerged as science evolved, but the insistence on ranking white men at the top persisted. Even leading scientists can fall prey to racist ideas and mask them as scientific racism.
Even with modern machine learning and big data, these ideas can still propagate. If the scientists involved don’t make sure their code is not susceptible to bias, the computer won’t be objective either. That is what happened with a machine-learning system using data from hospitals in the US. The algorithm was meant to identify high-risk patients, and one easy way to do that is to look at the amount of money spent on a patient in a year. That seems reasonable, but the model ended up excluding a large number of Black patients, for an obvious reason: our society is biased, and how much money is spent on a patient has little to do with the patient’s actual condition.
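This failure mode is easy to reproduce in a toy simulation: give two groups identical illness rates but unequal healthcare spending, then flag ‘high-risk’ patients by spending. The groups, numbers, and threshold below are made up purely for illustration, not taken from the actual hospital study:

```python
import random

def flagging_rates(n=10_000, seed=1):
    """Fraction of truly sick patients flagged as high-risk, per group,
    when risk is proxied by healthcare spending."""
    rng = random.Random(seed)
    flagged = {"A": 0, "B": 0}
    sick = {"A": 0, "B": 0}
    for _ in range(n):
        group = rng.choice(["A", "B"])
        is_sick = rng.random() < 0.3                       # identical illness rates
        spend_per_illness = 1000 if group == "A" else 500  # unequal access to care
        cost = spend_per_illness if is_sick else 0
        if is_sick:
            sick[group] += 1
            if cost > 600:                                 # spending-based risk flag
                flagged[group] += 1
    return {g: flagged[g] / sick[g] for g in ("A", "B")}

rates = flagging_rates()
print(rates)  # group B's sick patients are never flagged
```

Both groups are equally sick, yet the proxy flags every sick patient in group A and none in group B — the bias comes entirely from the spending label, not from the patients’ conditions.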
Machine learning is based on statistics, and some of the fathers of statistics are intertwined with eugenics. If you have ever taken a statistics course, you may have heard the name ‘Pearson’. Karl Pearson developed hypothesis testing, the use of p-values, the chi-squared test, and many other tools still used in science today. However, Pearson held strong social Darwinist beliefs and even supported wars against ‘inferior races’. In 2020, University College London renamed lecture halls and a building that had originally honored Pearson and Francis Galton.
The search for the ‘special mind’
Besides ethnicity, the eugenicists’ next target was intelligence. French psychologist Alfred Binet invented what we know today as the first version of the IQ test. He wanted his test to be used to help kids at school: those who performed poorly would be sent to special classes for help adapting, not labeled and segregated. However, his ideas were distorted by some scientists in the USA, where the test was used to reinforce the old fallacies of ranking people, even becoming a mechanism for selecting immigrants.
In time, the IQ test became the one you know today. The problem is that it is often used to segregate people without accounting for cultural or socioeconomic factors that can affect scores. That’s not all: American psychologist Henry Goddard, the man responsible for corrupting Binet’s ideas, defended the idea that ‘feeble-minded’ people should not have children. He and his colleagues also chose words like ‘idiot’, ‘moron’, and ‘feeble-minded’ to classify people — words we still use today as insults.
The ultimate goal of eugenics is perpetuating only the ‘good’ genes — which means not allowing those who have ‘bad’ genes to reproduce.
This led to forced sterilizations of people with mental disorders, the most famous legal example being the US Supreme Court case Buck v. Bell in 1927. Over 60,000 sterilizations were carried out in the United States between the 1920s and 1950s, most of them on people labeled ‘feeble-minded’ or ‘insane’.
These procedures were typically carried out in asylums or prisons, with a medical supervisor having the right to decide whether an inmate’s reproductive system should be altered. The practice is now considered a violation of human rights, and its stated motivations are considered bogus: that it “would improve inmates’ lives”, concern about the financial burden the inmates’ children would create, punishment, and of course “avoiding the reproduction of the unfit”. Under California’s law, the person had no right to object or appeal.
A lot happened between Goddard’s time and the 1930s and 1940s, when autism was first described. You may know the name Hans Asperger: he was a Nazi-affiliated Austrian pediatrician known for describing one ‘type’ of autism, later known as Asperger Syndrome. The diagnostic criteria for Asperger Syndrome were removed from the Diagnostic and Statistical Manual of Mental Disorders in 2013; there are no longer subdiagnoses, and it is all called Autism Spectrum Disorder (ASD).
Asperger observed that some autistic children were more ‘adaptable’ to social norms and could act ‘normal’, so he labeled those children “high functioning” and the others “low functioning”. The low-functioning children were considered a burden, unfit for the Third Reich because they couldn’t do the tasks of a “normal” person; in other words, they wouldn’t be profitable. Asperger would then transfer these ‘genetically inferior’ children to the ‘euthanasia’ killing programs, choosing who was worthy of living and who wasn’t. Next time you meet an autistic person, ask whether they want to be connected to that idea before calling anyone low functioning, high functioning, or ‘aspie’ (spoiler: they almost certainly don’t).
Genetic research can be eugenicist without ever mentioning the word or directly defending the idea. Nobody seems to ask autistic people what types of research would actually make their lives better; the concern is usually about ‘how parents should not have a burden’. Pay attention to the advertisements: do they display autistic people in successful positions, or are they pictures of children with their parents?
More recently, the Spectrum 10K study was paused. The UK-based researchers wanted to interview and collect DNA from autistic people and their relatives, but the autistic community was not consulted and questioned who the data would be shared with. Community members found that people involved in the project had a history of questionable research on autistic people’s DNA, so advocates protested, and the study was paused with a promise that the researchers will listen to autistic people.
“People with disabilities are genuinely concerned that these developments could result in new eugenic practices and further undermine social acceptance and solidarity towards disability – and more broadly, towards human diversity,” said Catalina Devandas, UN Special Rapporteur on the rights of persons with disabilities, on 28 February 2020.
Gould saw the problem with many of these ideas back in the 1990s; he revised his book to include the biased ‘research’ of his own time, hoping to warn scientists not to make the same mistakes. Today’s world has no more room for racist or ableist science like this, so why is it okay for labels born in those eras to persist in machine learning, in therapists’ offices, and in schools? It’s about time we cut eugenics out of our civilization.
No matter how sustainable, eco-friendly, and clean a source of energy they are, conventional solar panels require a large setup area and a heavy initial investment. Due to these limitations, it’s hard to introduce them in urban areas (especially neighborhoods with lots of apartment blocks or shops). But thanks to the work of ingenious engineers at Michigan State University, that may soon no longer be the case.
The researchers have created transparent solar panels which they claim could be used as power-generating windows in our homes, buildings, and even rented apartments.
If these transparent panels can indeed generate electricity cost-efficiently, the days of regular windows may be numbered. Soon, we could have access to cheap solar energy regardless of where we live — and better yet, with transparent, glass-like solar panels, every house and every tall skyscraper could generate its own power independently, sparing us the occasional power cut.
An overview of the transparent solar panels
In order to generate power from sunlight, the solar cells embedded in a panel must absorb radiation from the sun; they cannot let sunlight pass completely through them the way a glass window can. So at first, the idea of transparent solar panels might seem preposterous and completely illogical: a transparent panel should be unable to absorb radiation.
But that’s not necessarily the case, researchers have found. In fact, that’s not the case at all.
The solar panels created by engineers at Michigan State University use transparent luminescent solar concentrators (TLSCs). Composed of cyanine compounds, a TLSC selectively absorbs invisible solar radiation, including infrared and ultraviolet light, while letting visible light pass through. In other words, these devices are transparent to the human eye, much like a window, but still absorb a fraction of the solar spectrum, which they can then convert into electricity. It’s a relatively new technology, first developed in 2013, but it’s already seeing some impressive developments.
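A rough back-of-the-envelope estimate shows why harvesting only the invisible part of the spectrum can still be worthwhile. All the figures below are generic assumptions for illustration, not numbers from the MSU work:

```python
solar_irradiance = 1000.0   # W/m^2, typical peak sunlight at the surface
invisible_fraction = 0.5    # roughly half of sunlight arrives as UV + near-infrared
efficiency = 0.05           # assume ~5% conversion of that invisible light
power_per_m2 = solar_irradiance * invisible_fraction * efficiency
print(f"{power_per_m2:.0f} W per square meter of window")  # 25 W
```

Even at single-digit efficiencies, a large glass facade adds up to a meaningful amount of power precisely because windows cover so much surface area.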
Panels equipped with TLSCs can be molded into thin transparent sheets that can then be used to create windows, smartphone screens, car roofs, and so on. Unlike traditional panels, transparent solar panels do not use silicon; instead, they consist of a zinc oxide layer covered with a carbon-based IC-SAM layer and a fullerene layer. The IC-SAM and fullerene layers not only increase the efficiency of the panel but also protect the radiation-absorbing regions of the solar cells from breaking down.
Surprisingly, the researchers at Michigan State University (MSU) also claim that their transparent solar panels can last for 30 years, making them more durable than most regular solar panels. Basically, you could fit your windows with these transparent solar cells and get free electricity without much hassle for decades. Unsurprisingly, this prospect has a lot of people excited.
According to Professor Richard Lunt, who headed the transparent solar cell experiments at MSU, “highly transparent solar cells represent the wave of the future for new solar applications”. He adds that, in the future, these devices could offer an electricity-generation potential similar to that of rooftop solar systems, while also equipping our buildings, automobiles, and gadgets with self-charging abilities.
“That is what we are working towards,” he said. “Traditional solar applications have been actively researched for over five decades, yet we have only been working on these highly transparent solar cells for about five years. Ultimately, this technology offers a promising route to inexpensive, widespread solar adoption on small and large surfaces that were previously inaccessible.”
Recent developments in the field of transparent solar cell technology
Apart from the research conducted by Professor Richard Lunt and his team at MSU, other research groups and companies are working on developing advanced solar-powered glass windows. Earlier this year, a team from ITMO University in Russia developed a method of producing transparent solar cells much more cheaply than ever before.
“Regular thin-film solar cells have a non-transparent metal back contact that allows them to trap more light. Transparent solar cells use a light-permeating back electrode. In that case, some of the photons are inevitably lost when passing through, thus reducing the devices’ performance. Besides, producing a back electrode with the right properties can be quite expensive,” says Pavel Voroshilov, a researcher at ITMO University’s Faculty of Physics and Engineering.
“For our experiments, we took a solar cell based on small molecules and attached nanotubes to it. Next, we doped nanotubes using an ion gate. We also processed the transport layer, which is responsible for allowing a charge from the active layer to successfully reach the electrode. We were able to do this without vacuum chambers and working in ambient conditions. All we had to do was dribble some ionic liquid and apply a slight voltage in order to create the necessary properties,” Voroshilov adds.
PHYSEE, a technology company from the Netherlands, has installed its solar energy-based “PowerWindow” across 300 square feet of a bank building in the country. Though the transparent PowerWindows are not yet efficient enough to meet the energy demands of the whole building, PHYSEE claims that with some more effort, it will soon be able to increase the feasibility and power generation capacity of its solar windows.
California-based Ubiquitous Energy is also working on a “ClearView Power” system that aims to create a solar coating that can turn window glass into transparent solar panels. This coating allows transparent glass windows to absorb high-energy infrared radiation; the company claims to have achieved an efficiency of 9.8% with ClearView solar cells in initial tests.
In September 2021, the Nippon Sheet Glass (NSG) Corporation facility located in Chiba City became Japan’s first solar window-equipped building. The transparent solar panels installed by NSG in their facility are developed by Ubiquitous Energy. Recently, as a part of their association with Morgan Creek Ventures, Ubiquitous Energy has also installed transparent solar windows on Boulder Commons II, an under-construction commercial building in Colorado.
All these exciting developments indicate that sooner or later, we too might be able to install transparent power-generating solar windows in our homes. A small change in the way we produce energy could, on a global scale, turn out to be a great step toward a more energy-efficient world.
Not there just yet
If this almost sounds too good to be true, well, it sort of is. The efficiency of these fully transparent solar panels is around 1%, though the technology has the potential to reach around 10% efficiency. Compare this to the 15% of conventional solar panels (efficient ones can reach 22% or even a bit higher).
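To get a feel for what those efficiency percentages mean in practice, here is a back-of-envelope sketch. The insolation figure is an illustrative mid-latitude average assumed for the example, not a number from this article:

```python
# Daily energy from 1 m^2 of glazing at the efficiencies quoted above,
# assuming ~4 kWh/m^2/day of incident solar energy (an assumed
# mid-latitude average, for illustration only).

INSOLATION_KWH_PER_M2_DAY = 4.0  # assumed average daily insolation

def daily_yield_kwh(area_m2: float, efficiency: float) -> float:
    """Energy produced per day by a cell of the given area and efficiency."""
    return area_m2 * INSOLATION_KWH_PER_M2_DAY * efficiency

for label, eff in [("transparent, today", 0.01),
                   ("transparent, potential", 0.10),
                   ("conventional panel", 0.15)]:
    print(f"{label:>22}: {daily_yield_kwh(1.0, eff):.2f} kWh/day per m^2")
```

Even at 10% efficiency, a square meter of window would produce a meaningful fraction of what a conventional panel does, which is the point: windows are surface area we already have.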
So the efficiency isn’t yet high enough to make transparent solar cells competitive, but it may get there in the not-too-distant future. Furthermore, the appeal of this system is that it can be deployed on a small scale, in places where regular solar panels are not an option. They don’t have to replace regular solar panels; they just have to complement them.
When you think about it, solar energy wasn’t even regarded as competitive until about a decade ago, and a recent report found that it is now the cheapest form of electricity in human history. Transparent solar cells haven’t seen real-world use yet, but we’ve seen how fast this type of technology can develop, and the prospects for great results are there.
The mere idea that we may soon be able to power our buildings through our windows shows how far we’ve come. An energy revolution is in sight, and we’d be wise to take it seriously.
Companies will soon have to prove that the products they sell to the European Union haven’t contributed to deforestation, according to draft legislation introduced by the European Commission. The EU is one of the world’s main importers of deforestation, exceeded only by China, according to a WWF report on trade, and this move could send a strong signal worldwide for producers to be more environmentally conscious.
Wanted: only deforestation-free products
The regulation will focus on six commodities: wood, soy, cattle, palm oil, coffee, and cocoa, as well as derived products such as chocolate, leather, and oil cakes. Imports of commodities in the EU have been linked to the loss of 3.5 million hectares of forests between 2005 and 2017 and to the release of 1.8 billion tons of carbon dioxide (CO2).
“Our deforestation regulation answers citizens’ calls to minimize the European contribution to deforestation and promote sustainable consumption,” EU Commission VP Frans Timmermans said in a statement. “It ensures that we only import these products if we can ascertain that they are deforestation-free and produced legally.”
Once approved, the new law will create mandatory due diligence rules for commodity exporters to the EU market. They will have to implement strict traceability controls, collecting the coordinates of the land where the commodities were produced. This is meant to ensure that only deforestation-free products enter the EU market.
The EU Commission will operate a benchmarking system to classify countries with a low, standard, or high risk of producing commodities or products that aren’t deforestation-free. The requirements for companies and government authorities will depend on the level of risk of the country, from simplified to enhanced due diligence.
With the new system, the EU hopes to prevent deforestation and forest degradation. The EU Commission estimates the bloc will cut carbon emissions by at least 31.9 million metric tons every year thanks to reduced EU consumption of the targeted commodities. This would also mean savings of up to $3.6 billion per year, the commission estimates.
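Taken together, the Commission’s two estimates imply a monetary value per avoided ton of CO2, which is easy to check. This implied figure is derived here for illustration; it is not stated in the Commission’s material:

```python
# Sanity check: the implied value per metric ton of avoided CO2,
# derived from the two Commission estimates quoted above.

emissions_avoided_t = 31.9e6   # metric tons of CO2 per year ("at least")
savings_usd = 3.6e9            # dollars per year ("up to")

implied_value = savings_usd / emissions_avoided_t
print(f"Implied value: ~${implied_value:.0f} per ton of CO2")
```

A figure in the low hundreds of dollars per ton is in the same ballpark as common social-cost-of-carbon estimates, which suggests the two numbers are at least internally consistent.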
“If we expect more ambitious climate and environmental policies from partners, we should stop exporting pollution and supporting deforestation ourselves,” the EU Commissioner for the Environment, Oceans and Fisheries Virginijus Sinkevičius said in a statement. “It’s the most ambitious legislative attempt to tackle this worldwide.”
Will it pass?
The draft will now have to be approved by the EU Parliament and by each EU member country, something that might take a while. It follows recommendations included in a Parliament report last year but it has a more limited scope, not addressing human rights abuses and not creating civil liability for companies that export goods to the EU.
As it stands, the regulation only targets recent deforestation, due to its 2020 cut-off date. But this could change as lawmakers discuss the details in the EU Parliament, with some suggesting an earlier cut-off of 2014, the earliest year for which suitable satellite images are available. The regulation also gives commodity exporters a 12-month transition period.
Strong opposition is expected from forested countries that rely on exports to the EU. This is the case for Brazil, for example, which exports beef to several bloc members. Deforestation rates have been on the rise in the country amid lax policies under President Bolsonaro. Recent data showed higher deforestation in October this year, and many see beef imports from places like Brazil as an important contributor to deforestation.
When we think about junk, things like garbage bins or landfills come to mind — but there’s another junk problem, one that’s hard to see with the naked eye from the Earth. Space junk, researchers warn, is a growing problem, and if we don’t address it quickly, it may soon be too much to handle.
A total of 6,542 satellites currently occupy Earth’s orbit, but only about half of them are actually doing something. The other half are inactive: they’re simply junk. To make matters even more problematic, over 1,200 satellites were launched in 2020 alone. That marks a record, and generally speaking, we can expect more and more satellites to be plopped into orbit.
Now, imagine that one day Earth’s orbit becomes overcrowded and two such large satellites hit each other. Both satellites would break into smaller pieces that would then collide with other satellites, triggering a cascade of unstoppable collisions and leaving a lot of junk flying around. This has already happened a few times.
Due to these collisions, our planet’s orbit gets more and more cluttered with debris, to the extent that eventually, we will end up having no room to launch more rockets and satellites. Such a situation in which Earth’s orbit becomes completely unusable because of large amounts of space junk is referred to as Kessler syndrome — a phenomenon first envisioned by NASA scientist Donald J. Kessler in 1978.
Fortunately, we’re not at that stage yet. For now, space junk does not seem like a big problem but aerospace experts suggest that in the coming years, the number of satellite launches and space missions could increase dramatically, and this is likely to add more junk to space and make Earth’s orbit more crowded than ever. Simply put, if we don’t start taking action quickly, it will soon be too late.
What is space junk, and why is it dangerous?
Space junk is a generic term for unusable satellite parts, rocket components, and other debris from man-made machines in space. So far, NASA has tracked 27,000 such items moving aimlessly in Earth’s orbit. This orbital debris can travel at speeds of 24,000 km/h (15,000 mph), so any such fast-moving piece of junk can hit and destroy a functional satellite or a passing rocket at any time.
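The reason even tiny fragments are destructive at that speed is that kinetic energy scales with the square of velocity. A quick sketch using the figure above (the one-gram mass is an illustrative assumption):

```python
# Kinetic energy E = 1/2 * m * v^2 for a small fragment at orbital speed.
# A 1-gram piece at the ~24,000 km/h quoted above carries tens of
# kilojoules, several times the energy of a typical rifle bullet.

def kinetic_energy_j(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

speed_m_s = 24_000 * 1000 / 3600  # 24,000 km/h converted to m/s (~6,667)
energy = kinetic_energy_j(0.001, speed_m_s)  # a 1 g fragment (assumed mass)
print(f"1 g fragment at 24,000 km/h: {energy:,.0f} J")
```

This is why debris too small to track is still a serious hazard: at these speeds, mass barely matters.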
We’re already seeing some of this damage in action. In March 2021, the 18th Space Control Squadron (18SPCS), a space control unit under the US Space Force, confirmed that a small debris piece named Object 48078 hit China’s Yunhai 1-02 satellite. According to astrophysicist Jonathan McDowell, Object 48078 was a remnant of a Zenit-2, a Russian rocket launched in 1996. McDowell added that the “Yunhai 1-02 satellite broke up” after the collision.
However, such collisions are still rare. Before the Yunhai 1-02 crash, the last reported collision was in 2009. Moreover, mission controllers can prevent collisions by adjusting a satellite’s position. Every year, many satellites are maneuvered multiple times to avoid space junk; even the International Space Station (ISS) has performed more than 20 junk-avoidance maneuvers since its launch in 1998.
The space junk problem does not seem like a big issue for now but if not dealt with properly, it may lead to chaos in our planet’s orbit in the future — chaos that will be extremely difficult to address.
A small but growing problem
Before 2010, only around 100 satellites were launched every year, but in 2020, for the first time, more than 1,000 satellites were sent to space. The numbers continue to climb in 2021: so far, 1,400 new satellites have already been placed in orbit this year.
Moreover, in the early days of space exploration, only a few agencies sent satellites into space, such as NASA, Roscosmos, and the European Space Agency. Nowadays, private players like SpaceX and Blue Origin have created a boom in the aerospace industry and are launching more and more satellites. In the coming years, these companies plan to launch mega-constellations (groups of satellites covering a large orbital area) in Earth’s orbit to provide wireless broadband internet services across the globe. It’s an exciting project that is bound to help millions around the world, but one that also worsens the problem of space junk.
These mega-constellations would bring an unprecedented increase in the number of satellites revolving around Earth (a report suggests that Earth’s orbit may host 100,000 satellites by 2030). With every launch, the amount of space junk will also increase, making the orbit more congested. As a result, both existing and new satellites will have to perform more collision-avoidance maneuvers.
Therefore, more fuel and resources would be spent on saving the satellites from space junk. Sooner or later, with an increasing number of space missions, the growing amounts of space junk might raise the frequency of outer space collisions and over the course of time, it could ultimately cause the Kessler syndrome.
Is it possible to free Earth’s orbit of space junk?
Cleaning up space junk is not as easy as it sounds. For starters, imposing a ban doesn’t seem like a promising idea.
Rockets are launched to explore space and collect information about other worlds, while man-made satellites in Earth’s orbit facilitate communication, navigation, military operations, Earth observation, weather forecasting, mineral prospecting, and many other activities of great importance to humans. Banning space missions and new satellite launches is therefore obviously not a solution.
Cleaning our planet’s orbit is both an expensive and complicated process. However, researchers and space agencies are working on this and they keep coming up with new and interesting methods to remove space junk from Earth’s orbit.
Around 2012, a group of researchers at EPFL (the Swiss Federal Institute of Technology) came up with the idea of a special satellite, called CleanSpaceOne, that could attach itself to a targeted piece of space junk and drag it back toward Earth. The researchers proposed that, during the descent, both the satellite and the space junk would burn up from atmospheric heating.
This idea sounds promising, but it will also be costly, and bringing down satellites one at a time will be very time-consuming.
In 2016, the Japanese Aerospace Exploration Agency sent an electrodynamic tether into space that could steer space junk toward Earth’s atmosphere using the planet’s magnetic field. A couple of years later, in April 2018, the Surrey Space Centre in the UK launched the RemoveDEBRIS project, which aimed to demonstrate various junk-removal technologies. Under the RemoveDEBRIS initiative, the effectiveness of nets, harpoons, and drag sails for catching space junk was tested.
Researchers at Purdue University also developed a drag sail named Spinnaker3 in 2020. This drag sail is an efficient and cost-effective way to deal with space junk, as it requires no fuel during operation. Moreover, it can drag even rocket-sized debris back into Earth’s atmosphere, where it burns up safely. Spinnaker3 is expected to launch in November 2021 on a Firefly rocket.
Astroscale, an orbital junk removal company from Japan, launched the ELSA-d (End-of-Life Services by Astroscale-demonstration) satellite in March 2021. This advanced debris removal system uses magnetic satellite-capture technology to remove small inactive satellites from Earth’s orbit. ELSA-d successfully completed its first satellite-capture test on August 25, 2021, and is now moving on to the next phases of its space junk removal process.
The bottom line
As is generally the case, prevention is better than cure. In the case of space junk, it’s not yet a big problem — but by the time it becomes a big problem, it may be too big to handle efficiently, which is why it’s best to act as quickly as possible.
Aerospace experts are following this closely and if their research is supported, we’ll likely soon see effective waste-management strategies for space — and by the time we’re ready to go on our first interplanetary picnic, we’ll have a clean, green (hopefully), and beautiful orbital view.
When the rest of the world discovered that the Nazis were detaining and slaughtering millions in concentration camps, they were shocked. Even as inside reports described what was going on in places like Auschwitz, the world just couldn’t believe it. How could an authoritarian regime kill millions and attempt to wipe out entire populations, like Jews or Roma, without the rest of the world knowing or acting?
Well, another genocide may be happening once again, in front of our very eyes.
For decades, the Chinese Communist Party (CCP) has sought to forcibly assimilate the Uyghur Muslim community in the Xinjiang Uyghur Autonomous Region (Xinjiang) of northwest China, the new Holocaust Museum report reads. The Uyghurs are a Turkic ethnic group originating from the general region of Central and East Asia.
In theory, Uyghurs are recognized by the Chinese government as a regional minority and the titular people of Xinjiang. But in practice, the CCP has been trying to “integrate” them into Chinese society — this “integration” is seen as a genocide by not just the Holocaust museum, but by officials in the US, the European Union, and the UK.
This started with the prohibition of any expression of Uyghur religion and culture, as well as the destruction of sites important to Uyghur cultural heritage. But since 2014, things have taken a much darker turn. The Chinese government’s intrusive mass surveillance of the community has intensified, and comprehensive analyses have shown that the CCP built concentration camps for Uyghurs.
Initially, Chinese officials vehemently denied this and engaged in a propaganda campaign to sow disinformation and disprove the existence of concentration camps. They also attempted to block journalists from reporting from Xinjiang. However, after widespread reporting proved beyond a doubt that internment camps exist, the Chinese government tried to portray the camps as humane, denying that there are any human rights abuses in Xinjiang.
The campaign is still ongoing. In April 2021, the Chinese government released five propaganda videos titled “Xinjiang is a Wonderful Land”, and a musical titled “The Wings of Songs”, which portrayed Xinjiang as harmonious and peaceful.
But increasingly, reports are claiming that Xinjiang is anything but peaceful, and the internment camps are far from harmless.
The Holocaust museum documents large-scale forced sterilization, mass incarceration, forced labor, the abduction of Uyghur children from their families, and the destruction of Uyghur cultural sites.
In 2020, the museum published a separate report in which it assessed that the Chinese government was committing “crimes against humanity” in Xinjiang. Now, the museum’s assessment has escalated, noting that “the Chinese government’s conduct has escalated beyond a policy of forced assimilation”.
“This includes, in particular, a deepening assault on Uyghur female reproductive capacity through forced sterilization and forced intrauterine device (IUD) placement as well as the separation of the sexes through mass detention and forcible transfer,” the report reads.
The report also mentions that the CCP is intentionally hiding evidence from the public.
“The Chinese government continues to intentionally impede the flow of information concerning its assault on the Uyghurs of Xinjiang. The information that has made its way into the public domain gives rise to grave concerns about crimes committed by the Chinese government.”
Essentially, the report concludes, the Chinese government is out to “biologically destroy the group of Uyghurs” — which clearly classifies as genocide under the United Nations definition.
According to the UN definition, genocide means “any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such:
Killing members of the group;
Causing serious bodily or mental harm to members of the group;
Deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part;
Imposing measures intended to prevent births within the group;
Forcibly transferring children of the group to another group.”
It’s estimated that over 1.5 million Uyghurs are already being forcefully detained, although some estimates put the figure at 3 million. In 2017 alone, over half a million children were forcefully separated from their families and placed in pre-school camps with prison-style surveillance systems and 10,000-volt electric fences.
And this is only what could be gleaned indirectly, through remote investigations and the few witnesses who managed to escape.
For instance, in 2021, a former Xinjiang police officer told reporters that the police would sometimes arrest an entire village, arranging a gathering with all the population so they could arrest everyone. Other times, they would go door-to-door with rifles and arrest residents overnight. According to the same witness, the police would interrogate and beat every man, woman, and child over age 14 “until they kneel on the floor crying.”
The evidence, while still incomplete, paints a compelling picture: a genocide is probably happening before our very eyes.
“The impunity with which the Chinese government has been able to commit these crimes thus far cannot persist. The future of a people may depend on swift, coordinated action by global actors. This report should serve as a clarion call for action to protect the Uyghur community,” the report concludes.
While this is not a governmental report, once a government has made a legal determination of genocide, international law obliges it to take action, Axios concludes.
The world is failing to learn the lessons of the pandemic and is still doing too little to address the issues it has caused, warns an independent watchdog set up by the World Health Organization and the World Bank.
A new report by the Global Preparedness Monitoring Board (GPMB) explains in no uncertain terms that the global response to the pandemic has been very underwhelming, and is still plagued by issues. Instead of learning from such a traumatic event, we are leaving those that most need help behind, the report concludes.
The pandemic has exposed a world that is “unequal, divided, and unaccountable”, it concludes.
Leaves much to be desired
“If the first year of the COVID-19 pandemic was defined by a collective failure to take preparedness seriously and act rapidly on the basis of science, the second has been marked by profound inequalities and a failure of leaders to understand our interconnectedness and act accordingly,” the report said.
“The health emergency ecosystem reflects this broken world. It is not fit for purpose and needs major reform.”
The report cites WHO estimates which place the overall death toll of the pandemic (both direct and indirect) at 17 million people. While that number in itself is frightening, the authors also point to a sharp — and growing — divide in the vaccination rates between wealthier and poorer areas of the globe.
Despite more than six billion vaccine doses being administered globally to date, only 1.4 percent of people in poor countries have been fully vaccinated, explained World Trade Organization chief Ngozi Okonjo-Iweala earlier this month.
The report comes in the wake of the 2020 GPMB report which was already pointing out how ill-prepared the world was for a global pandemic, despite numerous warnings from researchers and healthcare professionals that such an event was unavoidable.
“Scientific advancement during COVID-19, particularly the speed of vaccine development, gives us just cause for pride,” reads the report’s foreword, written by GPMB co-chair Elhadj As Sy.
“However, we must feel deep shame over multiple tragedies: vaccine hoarding, the devastating oxygen shortages in low-income countries, the generation of children deprived of education, the shattering of fragile economies and health systems. While this disaster should have brought us together, instead we are divided, fragmented, and living in worlds apart.”
The sheer loss of life caused by the pandemic is “neither normal nor acceptable,” he adds.
Against this backdrop, there’s little evidence that we’re actually learning from the pandemic. Deaths from COVID-19 are still mounting, while vaccination efforts are stalling in many areas of the globe. Areas of the world with the resources and infrastructure needed to distribute large quantities of vaccines are starting to ease into the illusion that the pandemic is over. On the other hand, poorer and less fortunate areas are seeing their national health systems buckle and break under the strain of extra patients who need intensive care, while their own vaccination drives are progressing painfully slowly, due to a lack of resources, adequate infrastructure, or trained personnel.
But in our interconnected world, there’s no feasible solution for beating this pandemic alone. The growing number of cases is a very real threat even for countries that have achieved high vaccination rates within their own borders. In a globalized society, there is no such thing as closing off your gates and weathering the storm outside.
The solution, GPMB proposes, is “a new global social contract to prevent and mitigate health emergencies”. They sum this contract up around six key points:
Strengthen global governance; adopt an international agreement on health emergency preparedness and response, and convene a Summit of Heads of State and Government, together with other stakeholders, on health emergency preparedness and response.
Build a strong WHO with greater resources, authority, and accountability.
Create an agile health emergency system that can deliver on equity through better information sharing and an end-to-end mechanism for research, development and equitable access to common goods.
Establish a collective financing mechanism for preparedness to ensure more sustainable, predictable, flexible, and scalable financing.
Empower communities and ensure engagement of civil society and the private sector.
Strengthen independent monitoring and mutual accountability.
It’s easy to read such material and feel defensive, even insulted. Haven’t we all suffered our share during this pandemic? Haven’t we all done our best to come through it? What more do these ‘organizations’ want from us, and what do they even know about us? And that’s certainly an understandable reaction.
But we have to look beyond that. Organizations such as the GPMB exist because they serve a role we as individuals, communities, governments, and countries cannot do on our own. Their job is to tell us when we all, as a species, are not acting in our own interest — and to hold us accountable. The hard truth is that our natural inclination during times of crisis is to hunker down and wait it out. But working together is the fastest and most efficient way of dealing with threats, including pandemics. We may not like the idea that our choices here can influence someone’s chances of survival half the world away, but they do. And while there’s precious little we as individuals can do, we can do our own little part, and we can hold those in charge accountable for doing their own, much larger part; both at home, and abroad.
Last week, India passed an important milestone: the country has administered 1 billion doses of COVID vaccine to its citizens, according to government data. With this, roughly 75% of its adult population has been immunized with at least one dose — around 708 million people. Around 30% of the adult population has been fully immunized with two doses.
Other countries have managed similar vaccination rates as percentages of their population — Canada, for example, sits at around the 77% mark, while Portugal hit 88% — but India’s achievement impresses through sheer numbers. One billion doses are no small feat.
The country has had a pretty rough experience with this pandemic. But it also made sizeable efforts to contend with the virus, and this achievement carries on that trend.
For example, India was among the first countries to issue lockdowns and use contact tracing to limit the spread of the coronavirus. Still, things have not been going swimmingly for the country, especially since the rise of the Delta variant, and for a long time, India was among the countries with the most cases.
“This achievement belongs to India, every citizen of India,” wrote Indian Prime Minister Narendra Modi on Twitter (original tweet in Hindi). “I express my gratitude to all the vaccine manufacturing companies of the country, workers engaged in vaccine transportation, health sector professionals engaged in vaccine development.”
India’s very large population, currently inching toward the 1.4 billion mark, is one of the factors working against its efforts to combat the coronavirus pandemic. Other populous countries know how challenging it can be to source, deliver, and administer a large number of vaccine doses. Even countries with lower populations but robust infrastructures and strong economies — like the USA — have had their own hiccups in vaccination efforts.
Apart from this, India’s population is still largely rural, living outside metropolitan areas. Its economy, although large and diverse, is still mostly people-driven, and much less resilient to public health issues than those of more developed countries. Its infrastructure is also relatively undeveloped in many geographical areas.
This all makes the country’s vaccination milestone all that much more impressive.
Against this backdrop, New Delhi is setting even more ambitious goals for itself. Government officials are aiming to have all of India’s adult population vaccinated by the end of the year. I, personally, cheer them on, although I do have my reservations regarding how feasible such a target actually is. Experience in other areas of the world shows us that the last steps towards full vaccination are the hardest, and slowest to go through.
Still, reaching that goal means India will need to administer around 1.8 billion doses. A production target the government set in June called for 2 billion doses to be produced by December. Local manufacturers have reportedly ramped up production in recent months to reach that target.
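The 1.8-billion figure is straightforward to sanity-check against two doses per adult. The adult population used below is an approximation assumed for illustration, not a number from the article:

```python
# Rough check of the ~1.8-billion-dose target: two doses per adult.
# ADULT_POPULATION is an assumed round figure, not from the article.

ADULT_POPULATION = 940_000_000   # assumed number of adults in India
DOSES_PER_PERSON = 2
DOSES_GIVEN = 1_000_000_000      # administered so far (from the article)

total_needed = ADULT_POPULATION * DOSES_PER_PERSON
remaining = total_needed - DOSES_GIVEN
print(f"Total doses for all adults: ~{total_needed / 1e9:.2f} billion")
print(f"Still to administer:        ~{remaining / 1e9:.2f} billion")
```

In other words, even after the billion-dose milestone, nearly half the campaign remains, which is why the year-end target looks so ambitious.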
India started its vaccination program in January of this year. So far, only those above 18 years of age can receive a shot. Several vaccines have been approved for use by the government, including the AstraZeneca shot and the Russian Sputnik-V. A new vaccine, a three-dose shot produced by local manufacturer Cadila Healthcare, has also been approved for use in those under 18.
There are over 70,000 state-run vaccination centers currently administering free shots in India. A further 2,000 private centers also offer vaccine shots, although these charge for the service.
Airborne microplastic particles could start having a significant effect on the world’s climate in the future, a new paper reports.
New research at the University of Canterbury, New Zealand, found that airborne microplastics reflect part of the sunlight incoming to the Earth’s surface, thus cooling down the climate. For now, this effect is extremely slight. However, as the quantity of microplastics in the air is bound to increase in the coming decades, this effect will grow in magnitude.
“Yes, we focussed on airborne microplastics,” Dr. Laura Revell, Senior Lecturer of Environmental Physics at the University of Canterbury and the paper’s corresponding author, told ZME Science in an email. “These were first reported in Paris in 2015 and have since been reported in a range of urban and remote regions.”
“However, we believe that microplastics may be co-emitted from the ocean with sea spray, leading to the concept of the ‘plastic cycle’ i.e., microplastics might be carried with the winds over some distance, be deposited to land, get washed into a river, be transported into the ocean, and then re-enter the atmosphere.”
Microplastics are a growing environmental concern. They’re already present in soils, water, and air, and their levels are steadily increasing. Some microplastics are produced directly, for products such as cosmetics, while others are the result of plastic items breaking down in landfills.
Due to their small size and weight, such particles can easily be picked up by winds and carried over immense distances. Large cities such as London or Beijing show huge concentrations of such particles, likely due to how much plastic is used within their boundaries.
That being said, we’re just beginning to understand their full impact as airborne contaminants. The present study helps further our understanding in this regard, by uncovering the interaction between these particles and the planet’s climate. According to the authors, this is the first time the direct effects of airborne microplastics on climate have been calculated.
Other airborne particles (‘aerosols’) are known to have an effect on the Earth’s climate either by scattering or reflecting incoming sunlight back into space, cooling everything down, or by absorbing radiation at certain frequencies, which warms the planet up.
Against that backdrop, the authors set out to determine what effect airborne microplastics have in this regard. They used climate modeling software to determine the radiative effect (i.e., reflecting or absorbing) of common airborne microplastic particles. They focused primarily on the lower layers of the atmosphere, where much of the microplastic contamination is located. Overall, they report, these particles scatter solar radiation, which amounts to them having a minor cooling effect on the climate at surface level.
Exactly how much cooling they produce, however, the team can’t say for sure. We simply don’t have enough measurements of the quantity and distribution of microplastics in the atmosphere, nor do we have solid data on their chemical composition and physical properties.
Further muddying the issue is that microplastic particles can also have a warming effect, which may partially or completely counteract the cooling they cause through the scattering of light.
“After we calculated the optical properties of microplastics to understand how they absorb and scatter light, we realised that we would see them absorbing infrared radiation and contributing to the greenhouse effect. That moment was a surprise, as up until then we had been thinking about microplastics as efficient scatterers of solar radiation,” Dr. Revell adds for ZME Science.
This absorption takes place on a frequency interval of infrared light where greenhouse gases such as CO2 don’t really capture much energy. In other words, these microplastics tap into energy that’s not readily captured by the current drivers of climate warming.
“Microplastics may therefore contribute to greenhouse warming, although in a very small way (since they have such a small abundance in the atmosphere at present),” Dr. Revell adds. “The dominant effect we see in our calculations with respect to interaction with light, [however] is that microplastics scatter solar radiation (leading to a minor cooling influence).”
In closing, she told me that more recent studies on the topic of airborne microplastics are reporting “quite high” concentrations of these particles in certain areas of the world, such as Beijing. Dr. Revell explains that this is likely due to improvements in technology allowing researchers to pick up particles of much smaller diameters, which previously went undetected. All of this uncertainty in the data obviously does not bode well for the precision of our conclusions.
“Our initial estimates of the climate effects of airborne microplastics are just that — estimates — and will no doubt be revised in future as new studies are performed and gaps in our knowledge are filled,” Dr. Revell concluded for ZME Science.
However, one thing we do know for sure is that with plastic pollution on the rise, the effects of microplastics on the climate are only going to become worse. It’s very likely that these particles already shape atmospheric heating or cooling at the local level, the authors explain. If steps are not taken to limit the mismanagement of plastic waste, this effect will grow in magnitude and keep influencing the climate for a long period in the future.
The paper “Direct radiative effects of airborne microplastics” has been published in the journal Nature.
As many of us are nearing the one-year mark following our immunization, questions still remain regarding the long-term efficacy of our current vaccines. New research, however, is looking into it.
A team of researchers from the Beth Israel Deaconess Medical Center (BIDMC) has been analyzing the long-term immunization efficacy of the three vaccines authorized for emergency use by the U.S. Food & Drug Administration: BNT162b2 (Pfizer-BioNTech), mRNA-1273 (Moderna), and Ad26.COV2.S (Johnson & Johnson).
They compared the immune response produced by these vaccines at two to four weeks after complete immunization (i.e. after receiving the full number of shots) with that at eight months after vaccination.
Declining but not determined
“The mRNA vaccines were characterized by high peak antibody responses that declined sharply by month six and declined further by month eight,” said corresponding author Dan H. Barouch, MD, Ph.D., director of the Center for Virology and Vaccine Research at BIDMC, who helped develop the Ad26 platform in collaboration with Johnson & Johnson.
“The single-shot Ad26 vaccine induced lower initial antibody responses, but these responses were generally stable over time with minimal to no evidence of decline.”
Understanding the long-term efficacy of these vaccines is critical for our efforts to combat the COVID-19 pandemic. However, we didn’t have such information on hand up to now. Simply put, while the vaccines were tested to ensure safety and efficacy, the global context meant that their development process was greatly accelerated. We simply didn’t have the opportunity to obtain data pertaining to their long-term efficacy.
In a bid to help patch up this hole in our understanding, the team at BIDMC monitored the immunization levels of 61 participants over an eight-month period after they received their vaccines. The team measured the levels of antibodies, T cells, and other immune markers in the blood of these participants at two to four weeks after they received their shot (which is the point of peak immunity) and monitored them over an eight-month follow-up period.
Out of the 61 total participants involved, 31 received the Pfizer-BioNTech vaccine, 22 received the Moderna one, and the final 8 received the Johnson & Johnson single-shot vaccine.
All in all, the team explains that the Moderna vaccine produced more powerful and longer-lasting immunization effects than the Pfizer-BioNTech one. That being said, all three vaccines produced effective immune responses against SARS-CoV-2 and had broad cross-reactivity against its variants.
However, the authors report that both mRNA-based vaccines (Pfizer-BioNTech and Moderna) produced sizable initial immune responses, but these got progressively weaker over time. At around the 6-month mark, immune markers in patients who received either of these two had already declined sharply compared to the 2-to-4 week mark. The same markers declined even further by the eight-month mark.
The single-shot Johnson & Johnson vaccine, meanwhile, produced a weaker initial effect but was much more consistent over the study period.
Although these results might not sound very exciting or promising, they do not mean that the vaccines leave us vulnerable over time. For starters, there are still a lot of unknowns regarding exactly what immune responses in our bodies are needed to protect against SARS-CoV-2.
Furthermore, what the team tracked here are physical markers of immunity. Antibodies, for example, are the ‘soldiers’ our body uses to protect itself against viruses; their presence in the bloodstream is akin to our body being on alert. But even when they are no longer physically there, our bodies have already been primed regarding the structure of the virus, how to identify it, and which antibodies are needed to defeat it. Against this backdrop, an immune response against the pathogen can be mounted very quickly in case of infection.
“Even though neutralizing antibody levels decline, stable T cell responses and non-neutralizing antibody functions at 8 months may explain how the vaccines continue to provide robust protection against severe COVID-19,” said lead author Ai-ris Y. Collier, MD, a maternal-fetal medicine specialist at BIDMC.
“Getting vaccinated (even during pregnancy) is still the best tool we have to end the COVID-19 pandemic.”
The paper “Differential Kinetics of Immune Responses Elicited by Covid-19 Vaccines” has been published in the New England Journal of Medicine.
In an unexpected turn of events, climate change seems to be making the Earth a little bit dimmer, according to new research.
One of the properties that define planets throughout space is their ‘albedo’. Multiple elements factor into this property, which, in its simplest definition, is a measure of how much incoming light a planetary body reflects. A planet’s albedo can thus have a significant effect on environmental conditions across its surface.
But the opposite is also true, and climate conditions on the surface can influence a planet’s overall albedo. New research explains that climate change is already affecting Earth’s albedo, causing a significant drop in our planet’s ability to reflect light over the last 20 years or so.
“The albedo drop was such a surprise to us when we analyzed the last three years of data after 17 years of nearly flat albedo,” said Philip Goode, a researcher at New Jersey Institute of Technology and the lead author of the new study.
The authors worked with earthshine data recorded by the Big Bear Solar Observatory in Southern California from 1998 to 2017. Satellite readings of earthshine over the same timeframe were also used in the study. Earthshine is the light reflected from the Earth into space, and it is what faintly illuminates the dark portion of the Moon in the night sky.
All in all, the team reports, the Earth is reflecting roughly half a watt less per square meter of its surface than it did 20 years ago. For perspective, a typical incandescent lightbulb uses around 60 watts, and a single LED around 0.015 watts. The authors explain that this is equivalent to a 0.5% decrease in the Earth’s reflectance.
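That 0.5% figure can be sanity-checked with a quick back-of-envelope calculation. The sketch below assumes standard round numbers for solar irradiance and Earth’s average albedo (the paper’s own values may differ slightly):

```python
# Back-of-envelope check: a 0.5 W/m^2 drop in reflected sunlight
# corresponds to roughly a 0.5% drop in Earth's reflectance.
SOLAR_CONSTANT = 1361.0             # W/m^2 at the top of the atmosphere (assumed)
avg_incoming = SOLAR_CONSTANT / 4   # ~340 W/m^2 once averaged over the sphere
albedo = 0.30                       # Earth's approximate average albedo (assumed)

reflected = albedo * avg_incoming   # ~102 W/m^2 beamed back into space
drop = 0.5                          # W/m^2 less reflected, per the study
relative_drop = drop / reflected    # fraction of reflected light lost

print(f"Reflected flux: {reflected:.0f} W/m^2")
print(f"Relative drop in reflectance: {relative_drop:.1%}")
```

Dividing the half-watt deficit by the roughly 100 W/m² the Earth normally reflects lands at about 0.5%, in line with the authors’ figure.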
The two main components deciding how much sunlight reaches the Earth are how bright the Sun shines, and how reflective our planet is. But the team reports that the drop in albedo they’ve observed did not correlate with any periodic changes in the Sun’s brightness — meaning that the drop was caused entirely by changes in how reflective the Earth is.
This drop is mostly powered by warming ocean waters. The authors point to a reduction in bright, reflective low-lying clouds over the eastern Pacific Ocean over the last two decades, as shown by measurements taken as part of NASA’s Clouds and the Earth’s Radiant Energy System (CERES) project. Sea surface temperature increases have been recorded in this area following the reversal of the Pacific Decadal Oscillation (PDO).
A dimmer Earth means the planet is absorbing more of the incoming solar energy into its climate systems, where it is likely to contribute to global warming. The authors estimate that this extra sunlight is on the same magnitude as the sum of anthropogenic climate forcing over the last two decades.
“It’s actually quite concerning,” said Edward Schwieterman, a planetary scientist at the University of California at Riverside who was not involved in the new study. For some time, many scientists had hoped that a warmer Earth might lead to more clouds and higher albedo, which would then help to moderate warming and balance the climate system, he said. “But this shows the opposite is true.”
The paper “Earth’s Albedo 1998–2017 as Measured From Earthshine” has been published in the journal Geophysical Research Letters.