
Eunice Foote: the first person to measure the impact of carbon dioxide on climate

We often think of climate science as something that started only recently. The truth is that, like almost all fields of science, it began a long time ago. Advancing science is often a slow and tedious process, and climate science is no exception. From the discovery of carbon dioxide to the most sophisticated climate models, it took a long time to get where we are.

Unfortunately, many scientists who played an important role in this climate journey are not given the credit they deserve. Take, for instance, Eunice Newton Foote.

Eunice Foote. Credits: Wikimedia Commons.

Foote was born in 1819 in Connecticut, USA. She spent her childhood in New York and later attended classes at the Troy Female Seminary, a higher education institution just for women. She married Elisha Foote in 1841, and the couple was active in the suffragist and abolitionist movements. They participated in the “Women’s Rights Convention” and signed the “Declaration of Sentiments” in 1848.

Eunice was also an inventor and an “amateur” scientist, a brave endeavor at a time when women were scarcely allowed to participate in science. Yet one of her discoveries turned out to be instrumental in the field of climate science.

Why do we need jackets in the mountains?

In 1856, Eunice conducted an experiment to explain why air at low altitudes is warmer than air in the mountains. Back then, scientists were not sure why, so she decided to test it. She published her results in the American Journal of Science and Arts.

“Circumstances affecting the heat of the Sun’s rays”. American Journal of Science and Arts. Credits: Wikimedia Commons.

Foote placed two cylinders, each with a thermometer, under the Sun and later in the shade, making sure both cylinders started at the same temperature. After three minutes, she measured the temperature in both situations.

She noticed that rarefied air didn’t heat up as much as dense air, which explains the difference between mountaintops and valleys. Later, she compared the influence of moisture with the same apparatus, adding calcium chloride to make sure the dry cylinder was dry enough. The result was a much warmer cylinder with moist air in contrast to the dry one. This was a first step toward explaining the processes in the atmosphere: water vapor is one of the greenhouse gases that sustain life on Earth.

But that wasn’t all. Foote went further and studied the effect of carbon dioxide, which had a strong heating effect on the air. Eunice didn’t single it out at the time, but in her measurements, water vapor made the temperatures 6% higher, while the carbon dioxide cylinder was 9% higher.

Surprisingly, Eunice’s concluding paragraphs came with a simple deduction on how the atmosphere would respond to an increase in CO2. She predicted that adding more of the gas would lead to an increase in temperature — which is pretty much what we know to be true now. In addition, she discussed the effect of carbon dioxide in the geological past, as scientists were already uncovering evidence that Earth’s climate was different back then.

We now know that during different geologic periods of the Earth, the climate was significantly warmer or colder. In fact, between the Permian and Triassic periods, the CO2 concentration was nearly 5 times higher than today’s, causing a 6ºC (10.8ºF) temperature increase.

Recognition

Eunice Foote’s discovery made it to Scientific American in 1856, after it was presented by Joseph Henry at the Eighth Annual Meeting of the American Association for the Advancement of Science (AAAS). Henry also reported her findings in the New-York Daily Tribune but stated they were not significant. Her study was mentioned in two European reports, and her name was largely ignored for over 100 years — until she finally received credit for her observations in 2011.

The credit for the discovery used to be given to John Tyndall, an Irish physicist. He published his findings in 1861, explaining how radiation (heat) was absorbed and which type of radiation was involved – infrared. Tyndall was an “official” scientist: he had a doctorate and recognition from previous work, everything necessary to be respected.

But a few things draw the eye regarding Tyndall and Foote.

Atmospheric carbon dioxide concentrations and global annual average temperatures (in °C) over the years 1880 to 2009. Credits: NOAA/NCDC

Dr Tyndall was part of the editorial team of a magazine that reprinted Foote’s work. It is possible he didn’t actually read the paper, or simply ignored it because it came from an American scientist (a common practice among European scientists back then) or because of her gender. But it’s possible that he drew some inspiration from it as well — without citing it.

It should be said that Tyndall’s work was more advanced and precise. He had better resources and he was close to the newest discoveries in physics that could support his hypothesis. But the question of why Foote’s work took so long to be credited is hard to answer without going into misogyny.

Today, whenever a finding is published, even if it was made with a low-budget apparatus, the scientist responsible for the next advance on the topic needs to cite their colleague. A good example involves another important discovery by another female scientist. Edwin Hubble used Henrietta Swan Leavitt’s discovery of the relationship between the brightness and period of Cepheid variables. Her work underpinned the method used to measure the distances to galaxies that later helped prove the universe is expanding. Hubble said she deserved to share the Nobel Prize with him; unfortunately, she had already died by then.

It’s unfortunate that researchers like Foote don’t receive the recognition they deserve, but it’s encouraging that the scientific community is starting to finally recognize some of these pioneers. There’s plenty of work still left to be done.

How Russia already lost the information war — and Ukraine won it

How is it that Russia’s cyber-force, the alleged masters of disinformation and propaganda, lost the information war, while Ukraine has been so successful at spreading its message to the world?

Of course, being on the right side of history and not invading and bombing a country helps, but we’ve seen Russia (and Putin) spin events to their advantage before, or at least sow some discord and confuse public discourse. The well-established approach of maskirovka has been used to create deception and manipulate public discourse for decades, as recently as the Russian annexation of Crimea in 2014. So how is it that Putin is now losing so badly at his own game?

Let’s have a look at some of the reasons.

Preemption and pre-bunking

In previous years, Russian disinformation was largely met with little initial resistance. And we’ve learned recently that attempting to debunk disinformation after the fact is often ineffective. So instead, both organized and ad-hoc actors moved to pre-bunk the disinformation.

This pre-bunking started going strong in January, when it became clear that Russia was amassing an invading force around Ukraine. In the US, the Biden administration became very vocal about this, and its voice was amplified by UK and EU intelligence. Russia denied any plans of an invasion and tried to dismiss this as a political squabble. They even ridiculed the idea that Russia would invade. But when the invasion happened, the denial backfired spectacularly.

Official intelligence voices were also backed by open-source intelligence (OSINT) sources. Russia tried to play the victim, but overwhelmingly, its claims were shut down quickly and factually — because the evidence was already gathered.

Widely disseminated grassroots information, coupled with the fact that the US and UK governments were transparent about their intelligence warnings, made it clear what was going on.

Satellite data

All of this was greatly facilitated by the fact that satellite data is now available with relative ease. Nowadays, it’s not just military satellites that can offer this type of data — civilian satellites can also offer valuable information. The satellites showed how Russia was amassing troops, how they were moving in, and pretty much all the things the Russians tried to deny.

Journalists all around the world used Maxar satellite data to document the movement of Russian troops. It was kind of hard to deny what was going on when the eye in the sky was keeping a close watch.

Grassroots imagery

The intelligence reports and the bird’s-eye view provided by satellites were coupled with grassroots documentation of the movements of Russian troops.

The reports flowed in from residents, but also from journalists who braved the invasion and remained in place to document what was going on. They documented not just the invasion itself, but its many logistical flaws as well. The world became aware that the Russian military faced fuel and food shortages, and that tanks that ran out of fuel were sometimes simply abandoned.

It wasn’t just in English, either. The international journalistic community got together to produce a coherent message (or as coherent as possible given the circumstances) about what was going on.

Technology is not a friend of the Russian invasion either — everyone has a smartphone nowadays and can film what’s going on. Russian authorities have also been unable to disconnect Ukraine from the internet, which enabled the world to see what was going on through the lens of the Ukrainian people.

With the invasion documented at all levels, the world could have a clear view of what was going on.

Russia made mistakes

The flow of information was also helped by the fact that Russia didn’t use its propaganda machine as loudly as it could have. According to reports, Russian leaders were banking on a quick Blitzkrieg-type win where they could sweep things under the rug at the beginning and then push their narrative. But as the conflict dragged on, they wasted several days and lost control of the narrative.

Russia also allowed Ukraine to showcase its military victories without pushing its own military successes — because yet again, Russia initially wanted to smooth the whole thing over as quickly as possible. While Ukraine showed its drones bombing Russian tanks and its people bravely holding off their invaders, Russia kept quiet. After all, Russian people at home aren’t even allowed to know there’s a war going on.


Russia also made mistakes in its attempts to sow disinformation, discrediting itself with a few small but blatant errors — this made Russian leaders seem even more disingenuous.

Civilian damage

Without a doubt, few things strike fear and empathy into people like bombing civilian buildings does. It’s something that everyone (hopefully) agrees should not happen. Unfortunately, there’s been plenty of evidence of Russian shelling of civilian buildings, including the bombing of a kindergarten and multiple residential buildings.

A residential building in Kyiv was attacked by Russian artillery. Image via Wiki Commons.

They say a picture is worth a thousand words, and seeing the people of Ukraine huddled up in subways sent a clear message: people like you and me are under attack.

People in Kyiv have taken refuge in the city’s subway to escape the bombing. Every night, thousands of people sleep in the subways. Image via Wiki Commons.

Ukraine contrasts with Russia

Ukraine has also worked to push its side of the story — which you can hardly blame it for, considering that Ukraine is currently faced with an existential threat. Its side of the story is very clear: it is defending itself against a foreign invasion. Meanwhile, Russia’s aim appears to be to crush Ukraine; it’s not hard to see why people support one of those things and not the other.


Ukrainians also pushed the idea that unlike the invaders, they treat people humanely — even prisoners of war. They showed that they are humans just like everyone else and that they have no intention of waging war when given an alternative.

Russia’s initial excuse for the invasion, that they were doing “denazification” in Ukraine, is also laughable — a lengthy list of historians and researchers signed a letter condemning this idea. The fact that Russia even bombed the Holocaust memorial in Kyiv made it even clearer that this was a flimsy excuse. Even for the Russian people at home, seeing heavily censored information, this must seem like a weak excuse at best.

Another stark contrast between Russian and Ukrainian forces is that while the latter are fighting for their very survival and the defense of their loved ones, it’s not at all clear what Russian forces are fighting for. In fact, some soldiers were confused themselves: according to reliable reports, some Russian soldiers and their families initially thought they were doing drills — not an actual invasion.

Tales of heroism and martyrs


Ukraine is an underdog in any conflict with Russia, but Ukrainians will not give up — and they’ve been pushing that message strongly since day one. Tales of regular people picking up arms despite all odds have circled the world, showing that Ukrainians are not afraid to fight till the bitter end.


In addition, Ukraine has published regularly on defenders who sacrificed themselves for the greater good. Tales have circled the world of a Ukrainian woman telling Russian soldiers to put seeds in their pockets so flowers will grow when they die in her country, of a soldier sacrificing himself to blow up a bridge and slow down Russian forces, and most famously, of the encircled border guards on Snake Island who told an attacking warship: “Russian warship, go fuck yourself.”

A valiant leader who understands media

When the invasion started, Volodymyr Zelenskyy wasn’t that well-known or popular outside the country. He wasn’t even that popular inside his country — for many Ukrainians, he was elected as the lesser evil. But he rose to the occasion impressively. With regular updates from the middle of events, using social media to communicate directly with people, and with staunch determination communicated in true 21st century style, Zelenskyy proved instrumental for Ukraine’s defense and its morale. He was the right man in the right place, and his communication was clear and effective.

Source: Volodymyr Zelenskyy / Telegram.

Zelenskyy showed himself to be a man of the people, involved at the very center of the war zone — yet again, contrasting with what Putin was showing.

Jokes and memes

Russia not only failed to project a “good guy” image, it’s even failing to project a “strong guy” image. Despite its obviously superior firepower, despite its massive investments in the military, despite its gargantuan power — its invasion operation was plagued by numerous mishaps. Some of these mishaps would be outright funny if it weren’t such a tragic situation.

For instance, the scenes where Ukrainian farmers were dragging a tank across a field went viral, as did the encounter between a Ukrainian driver and a Russian tank that was out of fuel. The Ukrainian driver offered to tow the tank — back to Russia.

Memes have also been flowing, and more often than not the memes also took Ukraine’s side.

Propaganda

Lastly, Ukraine also deployed effective propaganda. The ‘Ghost of Kyiv’ fighter pilot is likely a myth, but it’s made the rounds and given hope to many Ukrainians. The ‘Panther of Kharkiv’ cat that detected Russian snipers is also quite possibly propaganda, but it gives people a good story. Let’s face it, if you can make it seem like you’ve got the cats on your side, you’ve already won a big part of the internet.


The bottom line

It’s always hard to assess what’s going on during a war. Reports are inconsistent, there’s a lot of misinformation, heck — it’s a war. In this case, Russia’s propaganda machine seems to have failed to effectively push its side of the story. From the top of the information chain to the bottom, from the intelligence reports to the grassroots videos and photos, everything points in one direction: Ukraine is winning the information war, while Russia is losing it. Its actions have received nigh-universal condemnation, and Putin is essentially a pariah on the global stage — while Zelenskyy has become one of the most popular leaders alive.

Will this matter for the actual war? It’s hard to say at this point. Information is vital during a war, but so is artillery — and Russia has a lot of artillery.

These hard-bodied robots can reproduce, learn and evolve autonomously

Where biology and technology meet, evolutionary robotics is spawning automatons that evolve in real time and space. The basis of this field, evolutionary computing, sees robots with a virtual genome ‘mate’ to ‘reproduce’ improved offspring in response to complex, harsh environments.

Image credits: ARE.

Hard-bodied robots are now able to ‘give birth’

Robots have changed a lot over the past 30 years, and are already capable of replacing their human counterparts in some cases — in many ways, robots are already the backbone of commerce and industry. Performing a flurry of jobs and roles, they have been miniaturized, mounted, and molded into mammoth proportions to achieve feats way beyond human abilities. But what happens when unstable situations or environments call for robots never seen on Earth before?

For instance, we may need robots to clean up a nuclear meltdown deemed unsafe for humans, explore an asteroid in orbit or terraform a distant planet. So how would we go about that?

Scientists could guess what the robot may need to do, running untold computer simulations based on realistic scenarios that the robot could be faced with. Then, armed with the results from the simulations, they can send the bots hurtling into uncharted darkness aboard a hundred-billion dollar machine, keeping their fingers crossed that their rigid designs will hold up for as long as needed.

But what if there was a better alternative? What if there was a type of artificial intelligence that could take lessons from evolution to generate robots that can adapt to their environment? It sounds like something from a sci-fi novel — but it’s exactly what a multi-institutional team in the UK is currently doing in a project called Autonomous Robot Evolution (ARE).

Remarkably, they’ve already created robots that can ‘mate’ and ‘reproduce’ progeny with no human input. What’s more, using the evolutionary theory of variation and selection, these robots can optimize their descendants depending on a set of activities over generations. If viable, this would be a way to produce robots that can autonomously adapt to unpredictable environments – their extended mechanical family changing along with their volatile surroundings.

“Robot evolution provides endless possibilities to tweak the system,” says evolutionary ecologist and ARE team member Jacintha Ellers. “We can come up with novel types of creatures and see how they perform under different selection pressures.” It offers a way to explore evolutionary principles and set up an almost infinite number of “what if” questions.

What is evolutionary computation?

In computer science, evolutionary computation is a set of algorithms inspired by biological evolution, in which candidate solutions are generated and constantly “evolved”. Each new generation discards the less desirable solutions and introduces small adaptive changes, or mutations, to produce a digital version of survival of the fittest. It’s a way to mimic biological evolution, resulting in the best version of the robot for its current role and environment.
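To make that loop concrete, here is a minimal sketch of the generate-evaluate-select-mutate cycle in Python. This is not the ARE project's code; the bit-string genome, fitness function, population size, and mutation rate are placeholder choices for illustration.

```python
import random

GENOME_LENGTH = 20      # number of "genes" per candidate
POPULATION_SIZE = 30
MUTATION_RATE = 0.05
GENERATIONS = 50

def fitness(genome):
    # Placeholder objective: count the 1s. A robotics system would instead
    # score task performance (e.g. waste cleared, distance traveled).
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability (the "small adaptive changes").
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(parent_a, parent_b):
    # Recombine two parent genomes at a random cut point.
    cut = random.randrange(1, GENOME_LENGTH)
    return parent_a[:cut] + parent_b[cut:]

# Start from a random population of candidate solutions.
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    # Selection: keep the fitter half, discard the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION_SIZE // 2]

    # Reproduction: refill the population with mutated offspring of survivors.
    offspring = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                 for _ in range(POPULATION_SIZE - len(survivors))]
    population = survivors + offspring

print("Best fitness after evolution:", fitness(max(population, key=fitness)))
```

In a real evolutionary robotics setup, the expensive step is evaluating fitness, which is why projects like ARE combine physical trials with fast virtual simulations.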

Virtual robot. Image credits: ARE.

Evolutionary robotics begins at ARE in a facility dubbed the EvoSphere, where newly assembled baby robots download an artificial genetic code that defines their bodies and brains. This is where two parent robots come together to mingle their virtual genomes and create improved young that incorporate both their genetic codes.

The newly evolved offspring is built autonomously via a 3D printer, after which a mechanical assembly arm translating the inherited virtual genomic code selects and attaches the specified sensors and means of locomotion from a bank of pre-built components. Finally, the artificial system wires up a Raspberry Pi computer acting as a brain to the sensors and motors – software is then downloaded from both parents to represent the evolved brain.

1. Artificial intelligence teaches newborn robots how to control their bodies

In most animal species, newborns undergo brain development and learning to fine-tune their motor control. This process is even more intense for these robotic infants because of breeding between different species. For example, a parent with wheels might procreate with another possessing a jointed leg, resulting in offspring with both types of locomotion.

But, the inherited brain may struggle to control the new body, so an algorithm is run as part of the learning stage to refine the brain over a few trials in a simplified environment. If the synthetic babies can master their new bodies, they can proceed to the next phase: testing.

2. Selection of the fittest: who can reproduce?

For testing, ARE uses a specially built inert nuclear reactor housing, where young robots must identify and clear radioactive waste while avoiding various obstacles. After the task is completed, the system scores each robot according to its performance and uses that score to determine which robots will be permitted to reproduce.

Real robot. Image credits: ARE.

Software simulating reproduction then takes the virtual DNA of two parents and performs genetic recombination and mutation to generate a new robot, completing the ‘circuit of life.’ Parent robots can either remain in the population, have more children, or be recycled.

Evolutionary roboticist and ARE researcher Guszti Eiben points to the advantage of this sped-up evolution: “Robotic experiments can be conducted under controllable conditions and validated over many repetitions, something that is hard to achieve when working with biological organisms.”

3. Real-world robots can also mate in alternative cyberworlds

In her article for the New Scientist, Emma Hart, ARE member and professor of computational intelligence at Edinburgh Napier University, writes that by “working with real robots rather than simulations, we eliminate any reality gap. However, printing and assembling each new machine takes about 4 hours, depending on the complexity of its skeleton, so limits the speed at which a population can evolve. To address this drawback, we also study evolution in a parallel, virtual world.”

This parallel universe entails the creation of a digital version of every mechanical infant in a simulator once mating has occurred, which enables the ARE researchers to build and test new designs within seconds, identifying those that look workable.

Their cyber genomes can then be prioritized for fabrication into real-world robots, allowing virtual and physical robots to breed with each other, adding to the real-life gene pool created by the mating of two material automatons.

The dangers of self-evolving robots – how can we stay safe?

A robot fabricator. Image credits: ARE.

Even though this program is brimming with potential, Professor Hart cautions that progress is slow, and furthermore, there are long-term risks to the approach.

“In principle, the potential opportunities are great, but we also run the risk that things might get out of control, creating robots with unintended behaviors that could cause damage or even harm humans,” Hart says.

“We need to think about this now, while the technology is still being developed. Limiting the availability of materials from which to fabricate new robots provides one safeguard.” She adds: “We could also anticipate unwanted behaviors by continually monitoring the evolved robots, then using that information to build analytical models to predict future problems. The most obvious and effective solution is to use a centralized reproduction system with a human overseer equipped with a kill switch.”

A world made better by robots evolving alongside us

Despite these concerns, she counters that even though some applications, such as interstellar travel, may seem years off, the ARE system may meet more immediate needs. As climate change reaches dangerous proportions, it is clear that robot manufacturers need to become greener. She proposes that they could reduce their ecological footprint by using the system to build novel robots from sustainable materials that operate at low energy levels and are easily repaired and recycled.

Hart concludes that these divergent progeny probably won’t look anything like the robots we see around us today, but that is where artificial evolution can help. Unrestrained by human cognition, computerized evolution can generate creative solutions we cannot even conceive of yet.

And it would appear these machines will now evolve even further as we step back and hand them the reins of their own virtual lives. How this will affect the human race remains to be seen.

Cultured meat is coming. But will people eat it?

Cultured chicken salad. Image credits: UPSIDE.

The prospect of cultured meat is enticing for several reasons. For starters, it’s more ethical — you don’t need to kill billions of animals every year. It could also be better for the environment, producing lower emissions and requiring less land and water than “traditional” meat production, and would also reduce the risk of new outbreaks (potentially pandemics) emerging. To top it all off, you can also customize cultured meat with relative ease, creating products that perfectly fit consumers’ tastes.

But there are also big challenges. In addition to the technological challenges, there is the need to ensure meat culturing is not only feasible and scalable but also cheap. There’s also a more pragmatic problem: taste. There’s a lot to be said about why people enjoy eating meat, but much of it boils down to how good it tastes. Meanwhile, cultured meat has an undeniable “artificial” feel to it (at least for now). Despite being made from the exact same cells as “regular” meat, it seems unnatural and unfamiliar, so there are fears that consumers may reject it as unappealing.

Before you even try it

A recent study underlines just how big this taste challenge is — and how perception (in addition to the taste per se) could dissuade people from consuming cultured meat. According to the research, which gathered data from 1,587 volunteers, 35% of non-vegetarians and 55% of vegetarians find cultured meat too disgusting to eat.

“As a novel food that humans have never encountered before, cultured meat may evoke hesitation for seeming so unnatural and unfamiliar—and potentially so disgusting,” the researchers write in the study.

For vegetarians, the aversion towards cultured meat makes a lot of sense. For starters, even though it’s not meat from a slaughtered animal, it’s still meat, and therefore has the potential to elicit disgust.

“Animal-derived products may be common triggers of disgust because they traditionally carry higher risks of disease-causing microorganisms. Reminders of a food’s animal origin may evoke disgust particularly strongly among vegetarians,” the study continues.

For non-vegetarians, it’s quite the opposite: cultured meat can elicit disgust because it’s not natural enough. Many studies highlight that meat-eaters express resistance to trying cultured meat because of its perceived unnaturalness. So if you want to make cultured meat more appealing to consumers, you have to approach things differently for vegetarians and non-vegetarians. For instance, perceiving cultured meat as resembling animal flesh predicted less disgust among meat-eaters but more disgust among vegetarians. But there were also similarities between the two groups: perceiving cultured meat as unnatural was strongly associated with disgust toward it among both vegetarians and meat-eaters. Combating beliefs about unnaturalness could go a long way towards convincing people to at least give cultured meat a shot.

A cultured rib-eye steak. Image credits: Aleph Farms / Technion — Israel Institute of Technology.

Even before people eat a single bite of cultured meat, their opinion may already be shaped. If we want to get people to consume this type of product, tackling predetermined disgust is a big first step. Different cultures could also have widely different preferences in this regard.

“Cultured meat offers promising environmental benefits over conventional meat, yet these potential benefits will go unrealized if consumers are too disgusted by cultured meat to eat it.”

Okay, but is cultured meat actually good?

Full disclosure: no one at ZME Science has tried cultured meat yet (but we’re working on it). Even if we had, our experience wouldn’t be necessarily representative of the greater public. Herein lies one problem: compared to how big the potential market is, only a handful of people have actually tasted this type of meat. We don’t yet have large-scale surveys or focus groups (or if companies have this type of data, they haven’t publicly released it from what we could find).

The expert reviews seem to be somewhat favorable. In a recent blind test, MasterChef Israel judge Michal Ansky was unable to differentiate between “real” chicken and its cultured alternative. Ansky tasted the cultured chicken that has already been approved for consumption in Singapore (the first place where cultured meat was approved).

The remarkable progress that cultured meat has made with regard to its taste was also highlighted by a recent study from the Netherlands, in which blind-tested participants preferred the taste of cultured meat.

“All participants tasted the ‘cultured’ hamburger and evaluated its taste to be better than the conventional one in spite of the absence of an objective difference,” the researchers write.

The study authors also seemed confident that cultured meat could become mainstream given its appealing taste and environmental advantages.

“This study confirms that cultured meat is acceptable to consumers if sufficient information is provided and the benefits are clear. This has also led to increased acceptance in recent years. The study also shows that consumers will eat cultured meat if they are served it,” said Professor Mark Post from Maastricht University, one of the study authors.

Researchers are also close to culturing expensive, gourmet types of meat, including the famous Wagyu beef, which normally sells for around $400 per kilogram. Researchers are already capable of culturing bits of this meat four times cheaper, and the price is expected to continue going down. This would be a good place for cultured meat to start, making expensive types of meat more available to the masses.

Still, there are some differences between most types of cultured meat and meat coming from animals. For instance, one study that used an “electronic tongue” to analyze the chemical make-up of the meat found “significant” differences.

“There were significant differences in the taste characteristics assessed by an electronic tongue system, and the umami, bitterness, and sourness values of cultured muscle tissue were significantly lower than those of both chicken and cattle traditional meat,” the study reads. But the same study also suggests that understanding these differences could make cultured meat even more realistic and palatable.

The technology is also progressing very quickly in this regard, and every year, cultured meat seems to be taking strides towards becoming more affordable and tastier. There are multiple companies awaiting approval to embark on mass production, using somewhat different technologies and products. There are multiple types of meat on the horizon, from chicken and beef to pork and even seafood, and for many of them, the taste data is only just coming in.

All in all, cultured meat promises to be one of the biggest food revolutions in the past decades. Whether it will actually deliver on this promise is a different problem that will hinge on several variables, including price, taste, and of course, environmental impact. If companies can deliver a product that truly tastes like traditional meat, they have a good chance. There’s still a long road before the technology becomes mainstream, but given how quickly things have progressed thus far, we may see cultured meat on the shelves sooner than we expect.

Your microbiota will be having non-stop sex this Valentine’s Day

Even if you’re alone this Valentine’s Day, there’s no need to worry: some parts of your body will be getting plenty of action. In fact, your body will host a veritable carnival of the sensual in your tummy, as your microbiota will engage in an orgy of sex and swinger’s parties — where they’ll be swapping genes instead of keys.

A medical illustration of drug-resistant Neisseria gonorrhoeae bacteria. Original image sourced from the Public Health Image Library, Centers for Disease Control and Prevention (US government). Image in the public domain.

The salacious gene

Imagine you have a severe disease with a very unusual cure: you can treat it by making love with someone who then passes on the genes necessary to cure your ailment. It is, as they say, sexual healing. Using sex to protect or heal themselves is precisely what bacteria can do, and it’s a crucial defense mechanism.

In the past, the research community thought bacterial sex (or conjugation, as scientists call it) was a terrible threat to humans, as this ancient process can spread DNA conveying antibiotic resistance to neighboring bacteria. Antibiotic resistance is one of the most pressing health challenges the world is facing, projected to cause 10 million deaths a year by 2050.

But there’s more to this bacterial sex than meets the eye. Recently, scientists from the University of Illinois at Urbana-Champaign and the University of California Riverside witnessed gut microbes sharing the ability to acquire a life-saving nutrient with one another through bacterial sex. UCR microbiologist and study lead Patrick Degnan says:

“We’re excited about this study because it shows that this process isn’t only for antibiotic resistance. The horizontal gene exchange among microbes is likely used for anything that increases their ability to survive, including sharing vitamin B12.”

For well over 200 years, researchers have known that bacteria reproduce by fission, in which one cell divides to produce two genetically identical daughter cells. However, in 1946, Joshua Lederberg and Edward Tatum discovered bacteria could exchange genes through conjugation, an act entirely separate from reproduction.

Conjugation occurs when a donor and a recipient bacterium sidle up to each other, upon which the donor creates a tube, called a pilus, that attaches to the recipient and pulls the two cells together. A small parcel of DNA is then passed from the donor to the recipient, providing new genetic information through horizontal transfer.

Ironically, it wasn’t until Lederberg met and fell in love with his wife, Esther Lederberg, that they made progress regarding bacterial sex.

Widely acknowledged as a pioneer of bacterial genetics, Esther still struggled for recognition despite identifying the horizontal transfer of antibiotic resistance and of bacteriophages, the viruses that kill bacteria. She discovered these phages after noticing small objects nibbling at the edges of her bacterial colonies. Digging deeper to find out how they got there, she found these viral interlopers hiding dormant amongst bacterial chromosomes after being transferred by microbes during sex.

Later work found that environmental stresses such as illness activated these viruses to replicate within their hosts and kill them. Still, scientists assumed that bacterial sex was purely a defense mechanism.

Esther Lederberg in her Stanford lab. Image credits: Esther Lederberg.

Promiscuity means longevity

The newly-published study builds on Esther’s work. The study authors suspected this bacterial process extended beyond antibiotic resistance, so they started by investigating how vitamin B12 was getting into gut microbial cells that had previously been thought unable to extract the vitamin from their environment — which was puzzling, as most types of living cells cannot function without vitamin B12. Many questions therefore remained about how these organisms survived without the machinery to extract this resource from the intestine.

The new study in Cell Reports focuses on Bacteroidetes, a group of bacteria that comprises up to 80% of the human gut microbiome, where they break down complex carbohydrates for energy.

“The big, long molecules from sweet potatoes, beans, whole grains, and vegetables would pass through our bodies entirely without these bacteria. They break those down so we can get energy from them,” the team explained.

These bacteria were placed in lab dishes, mixing strains that could extract B12 from their environment with strains that couldn’t. The team then watched in awe while the bacteria formed their sex pilus to transfer genes enabling the extraction of B12. After the experiment, the researchers examined the total genetic material of the recipient microbe and found it had incorporated an extra band of DNA from the donor.

Something similar happens in living mice. When the group administered two different subgroups of Bacteroidetes to a mouse – one that possessed the genes for transporting B12 and another that didn’t — they found the genes had ‘jumped’ to the recipient after five to nine days.

“In a given organism, we can see bands of DNA that are like fingerprints. The recipients of the B12 transporters had an extra band showing the new DNA they got from a donor,” Degnan said.

Remarkably, the team also noted that different species of phages were transferred during conjugation, exhibiting bacterial subgroup specificity in some cases. These viruses also showed the capacity to alter the genomic sequence of their bacterial host, with the power, once activated, to promote or cut short the life of their microbial vessel.

Sexual activity in our intestines keeps us healthy

Interestingly, the authors note they could not observe conjugation in all subgroups of the Bacteroidetes species, suggesting this could be due to growth factors in the intestine or a possible subgroup barrier within this large species group slowing the process down.

Despite this, Degnan states, “We’re excited about this study because it shows that this process isn’t only for antibiotic resistance.” And that “The horizontal gene exchange among microbes is likely used for anything that increases their ability to survive, including sharing [genes for the transport of] vitamin B12.”

This means that bacterial sex doesn’t just occur when microbes are under attack; it happens all the time. And it’s probably part of what keeps the microbiome and, by extension, ourselves fit and healthy.

A lot of “sea serpent sightings” could actually be whale boners

A sailor’s life is rough. You’re up against the weather, the sea, maybe even sea monsters — or so some sailors used to think. Since Ancient Greece, people have been describing sea monsters of various sorts, but according to one study, at least some of those monsters can be explained by something much more mundane: whale penises.

Copperplate engraving of Egede’s great sea monster, from The Naturalist’s Library, Sir William Jardine (publisher), Wm. Lizars (principal engraver), London & Edinburgh.

In one of the more famous sea monster sighting reports, Danish Lutheran missionary Hans Egede wrote that on 6 July 1734, while his ship was off the Greenland coast, he and those on board saw a terrible sight — a “most terrible creature”, resembling nothing they had seen before. The monster, Egede reported, was longer than their whole ship.

“It had a long pointed snout and it blew [spouted] like a whale [it] had broad big flippers and the body seemed to be grown [covered] with carapace and [it] was very wrinkled and uneven [rough] on its skin; it was otherwise created below like a serpent and where it went under the water again threw itself backward and raised thereafter the tail up from the water a whole ship’s length from the body.”

Egede’s account is notable because he was an educated man and had described several whale encounters previously, and as a man who had seen some things in his life, he wouldn’t be one to be easily impressed. So what did Egede and his mates actually see?

Image credits: Paxton et al (2005).

Three researchers took on the challenge of answering that question. The lead author was Charles Paxton, a man familiar with unusual studies. A few years ago, Paxton was awarded an Ig Nobel Prize for a study on how amorous ostriches attempt to court humans in Britain — yes, really. The Ig Nobel Prize is offered to research “that cannot, or should not, be reproduced” and that “first makes you laugh, then makes you think”.

Paxton’s whale study was carried out in 2005, and the researchers looked at all the plausible explanations that could fit the account. A key part of the description is the “serpent-like” tail.

“Although whales are found, and can survive, without flukes (for example grey whales), serpent-like or eel-like bodies are not usually associated with the rapid thrust that would be required to rear the whole body high out of the water,” Paxton writes.

So it seems like the monster couldn’t have been a whale. But it could have been a whale… part.

“There is an alternative explanation for the serpent-like tail. Many of the large baleen whales have long, snake-like penises. If the animal did indeed fall on its back then its ventral surface would have been uppermost and, if the whale was aroused, the usually retracted penis would have been visible.”

This seems compelling enough, but it still leaves the matter of size up for debate. Whale penises are indeed impressive, but could they have been bigger than the entire boat? The researchers suspect the answer is ‘no’, but there could be an explanation: multiple whales.

“The penises of the North Atlantic right whale and (Pacific) grey whale can be at least 1.8 meters long and 1.7 meters long respectively and could be taken by a naïve witness for a tail. That the tail was seen at one point a ship’s length from the body suggests the presence of more than one male whale,” the study concludes.

To make the whale erection theory even more compelling, a separate incident from 1875 is even more likely to be a whale penis. Sailors aboard the merchant vessel Pauline reported seeing a “whitish pillar” amongst a pod of sperm whales “frantic with excitement” — a description that very well fits the whale penis theory.

Ultimately, we may never know what Egede saw, and probably not all sea serpent sightings are whale penises (though that would be an interesting study), but this sort of thing seems to happen quite often, and it’s not uncommon for sea serpents to “appear” in the vicinity of whales, often even attached to or “battling” a whale.

There’s even a theory that the Loch Ness monster is a whale penis, though there’s a big hole in that theory, in that Loch Ness is a lake and there are no whales in it. But otherwise, a lot of sea serpent sightings could actually be whale penises.

You can read the entire study here.

An atlas for endangered alphabets could save them from disappearing

If something is important, we write it down. That’s how it’s been for millennia. However, as important as writing is, 85% of the world’s alphabets are on the brink of extinction, tossed aside for the more common and popular alphabets. But one man wants to change that.

The Soyombo script from Mongolia.

If you’re reading this, the odds are you’re familiar with the Latin alphabet — or at least use a translator that can work with this alphabet. This is probably the most common alphabet, used by some 3 billion people on the planet. But there are dozens of writing systems used around the world. Some are more widespread, like Chinese characters or the Japanese hiragana; others, like Georgian, are geographically isolated but still have a strong national presence. But others are at risk of fading.

In 2009, Tim Brookes founded a non-profit called Endangered Alphabets, making an active effort to preserve these writing systems at risk of disappearing.

“When a culture is forced to abandon its traditional script, everything it has written for hundreds of years — sacred texts, poems, personal correspondence, legal documents, the collective experience, wisdom and identity of a people — is lost. This Atlas is about those writing systems, and the people who are trying to save them,” the project webpage reads.

It may seem weird, but alphabets are disappearing at an unprecedented rate. Due to globalization, colonization, and the stigma sometimes associated with writing in a minority alphabet, many writing systems are at risk of being forgotten. For instance, when colonists arrived in the Philippines, they failed (or didn’t care) to recognize the local linguistic diversity. Tagalog was pushed as the primary language, while Kulitan, a script used to write Kapampangan, a local language, fell by the wayside. Nowadays, Kulitan is only commonly used in seals, logos, and heraldry — though there is a movement to revive its use.

This pattern is surprisingly common. In Africa, for instance, dozens of alphabets are at risk. The Tifinagh alphabet in Morocco is attested to have been in use from the 3rd century BC to the 3rd century AD. After a period of gradual abandonment, Tifinagh was revived in the 1980s with the invention of “neo-Tifinagh” — a modern, fully alphabetic script developed from earlier forms of Tifinagh. The Nubian script is another ancient example, being one of the oldest scripts in human history, used for at least 7,000 years.

But not all the endangered alphabets are ancient; some are surprisingly new. The Bamum script, for instance, was developed in 1896 when the 25-year-old King Ibrahim Njoya of the Bamum Kingdom in Cameroon had a dream. Although it was first envisioned by a single person practically overnight, the alphabet is surprisingly practical. The king invited subjects to send him simple signs and symbols and used these as letters. Over the next few years, the alphabet became more and more rationalized until 1910, when it became fully functional. Using the script, the king wrote a history of his people, a book of medicines, as well as a guide to good sex. He built schools and libraries that used the script and supported artists and intellectuals who used it. The initial German colonists had little problem with this, but when the French came into power after Germany’s defeat in WWI, they ousted the king and sent him into exile. They also destroyed his printing presses and burned his libraries and books, outlawing the script. It was only in 2007 that the first efforts began to revive the script, which is now taught to students as part of their Bamum heritage.

It’s this type of resurgence that Brookes hopes to bring.

“In 2009, when I started work on the first series of carvings that became the Endangered Alphabets Project, times were dark for indigenous and minority cultures,” he writes. “The lightning spread of television and the Internet were driving a kind of cultural imperialism into every corner of the world. Everyone had a screen or wanted a screen, and the English language and the Latin alphabet (or one of the half-dozen other major writing systems) were on every screen and every keyboard. Every other culture was left with a bleak choice: learn the mainstream script or type a series of meaningless tofu squares.”

Not all the scripts included are technically alphabets. Some are abjads (a writing system in which symbols or glyphs are only used for consonants, leaving it to readers to infer an appropriate vowel) or abugidas (systems in which consonant-vowel sequences are written as units), but all of them lack “official status in their country, state, or province”. The goal of the Atlas is to prevent them from being “dominated, bullied, ignored, or actively persecuted by another, more powerful culture” — before ultimately going extinct.

The people behind the Atlas also carry out research to assess the status of these scripts and work on ways to promote and revitalize them. To find out more about how you can support their work, check out their webpage. To read more about the work that they do, check out their blog.

What does the universe sound like? The eerie world of cosmic sonification

Light is more than just what we see. The light spectrum can provide information about astrophysical objects — and different wavelengths can provide different types of information. We can observe the sky through X-rays, visible light, gamma rays — all of which are waves at different frequencies. Something similar happens with sound: it too exists at many frequencies. High-pitched sounds have higher frequencies than low-pitched ones, which is why an electric guitar sounds higher than a bass guitar.

So what would happen if you turned light (or other types of astronomical data) into sound? This is technically called sonification — the use of non-speech audio to represent data. You basically take some type of data and translate it into pitch, volume, and other parameters that define sound.
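To get a feel for how simple the core idea is, here is a minimal sonification sketch in Python. The data values, pitch range, and note length below are arbitrary choices for illustration, not the pipeline used by NASA's sonification teams.

```python
import wave
import numpy as np

SAMPLE_RATE = 44100          # audio samples per second
NOTE_DURATION = 0.25         # seconds per data point

# Hypothetical data: e.g. brightness measurements along one row of an image.
# Real sonifications (like Chandra's) start from actual X-ray or optical data.
brightness = np.array([3, 7, 12, 9, 25, 40, 33, 18, 8, 4], dtype=float)

# Map each data value to a pitch: brighter pixels become higher frequencies.
low_hz, high_hz = 220.0, 880.0   # an arbitrary two-octave range
scaled = (brightness - brightness.min()) / (brightness.max() - brightness.min())
frequencies = low_hz + scaled * (high_hz - low_hz)

# Synthesize one short sine tone per data point and join them end to end.
t = np.linspace(0, NOTE_DURATION, int(SAMPLE_RATE * NOTE_DURATION), endpoint=False)
tones = [np.sin(2 * np.pi * f * t) for f in frequencies]
signal = np.concatenate(tones)

# Write a 16-bit mono WAV file you can actually listen to.
samples = (signal * 32767).astype(np.int16)
with wave.open("sonification.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(samples.tobytes())
```

Real projects layer many such mappings at once, assigning different instruments or timbres to different wavelength bands, but the principle is the same.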

It’s not as silly or unheard of as it sounds. Scientists convert things into sounds for a number of reasons. Take, for instance, the Geiger counter, an electronic instrument used to measure ionizing radiation: the higher the radiation, the faster the instrument clicks. The same can be done with astronomical data — with many lines of code, scientists can translate astronomical data into sounds. So, without further ado, here are some of the coolest sounds in the universe.

The Pillars of Creation

In the sonification of the Eagle Nebula, you can hear a combination of both optical and X-ray bands. The pitch changes according to the position of the observed light, and the result reminds us of a sci-fi movie soundtrack. As we listen to the features from left to right, the dusty parts that form the Pillars come through as a whir; it’s eerily apparent that we’re hearing something cosmic.

Sonification Credit: NASA/CXC/SAO/K.Arcand, SYSTEM Sounds (M. Russo, A. Santaguida)

The Sun

Using data from the Solar and Heliospheric Observatory (SOHO), we can listen to our star’s plasma flowing and forming eruptions. The sound is pretty peaceful for a 5,778 K environment.

Credits: A. Kosovichev, Stanford Experimental Physics Lab

Venus

In one of Parker Solar Probe’s flybys, the spacecraft collected data from Venus’ upper atmosphere. The planet’s ionosphere naturally emits radio waves, which were easily sonified.

Video credit: NASA’s Goddard Space Flight Center/Scientific Visualization Studio

Bullet Cluster

The Bullet Cluster is famous for being proof that dark matter is out there. In its sonification, the dark matter part (in blue) has a lower pitch, while the ordinary matter part (in pink) has a higher pitch. This is one of the most melodic cosmic sounds you’ll ever hear, though it does have a distinctively eerie tune as well.

Sonification Credit: NASA/CXC/SAO/K.Arcand, SYSTEM Sounds (M. Russo, A. Santaguida).

A supernova

This sonification is different from the others. It starts with the sounds emanating from the center of Tycho’s supernova remnant and continues with the sounds of the stars visible in that plane. Inside the remnant, the sound is continuous; outside it, we hear distinct notes, which are the nearby stars.

Sonification Credit: NASA/CXC/SAO/K.Arcand, SYSTEM Sounds (M. Russo, A. Santaguida)

Cosmic music

With a musical approach, the sci-art outreach project SYSTEM Sounds doesn’t just sonify data, it also makes sure the sounds are harmonic. It’s even better when nature provides naturally harmonic systems.

The most incredible sonification of all comes from the TRAPPIST-1 system, a relatively close system “just” 39.1 light-years away. The seven planets orbiting the red dwarf form a resonant chain: they tug on each other in pairs, and the orbital periods of neighboring planets match in the integer ratios 8:5, 5:3, 3:2, 3:2, 4:3, and 3:2. So the two innermost planets influence each other gravitationally — for every eight orbits completed by TRAPPIST-1b, TRAPPIST-1c completes five. If it all sounds a bit confusing, look at the video below and it will make more sense.

SYSTEM Sounds took advantage of the harmony in the TRAPPIST-1 system and sonified the planets orbiting their star. In the audio, you first hear a piano note each time a planet completes an orbit. Then, to emphasize the orbital resonance, the team added a drum beat whenever two planets line up. The result is a super cool song.

Created by Matt Russo, Dan Tamayo and Andrew Santaguida 2017.
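For the curious, here is a tiny Python check of how those ratios follow from the planets' orbital periods. The period values are rounded published estimates used purely for illustration, not the exact figures behind the SYSTEM Sounds piece.

```python
# Approximate orbital periods of the seven TRAPPIST-1 planets, in days
# (rounded values from published estimates; treat them as illustrative).
periods = {
    "b": 1.51, "c": 2.42, "d": 4.05, "e": 6.10,
    "f": 9.21, "g": 12.35, "h": 18.77,
}

# The near-resonant chain quoted in the text, as integer pairs: the inner
# planet of each neighboring pair completes `num` orbits while the outer
# planet completes `den`.
expected = [(8, 5), (5, 3), (3, 2), (3, 2), (4, 3), (3, 2)]

names = list(periods)
for (inner, outer), (num, den) in zip(zip(names, names[1:]), expected):
    ratio = periods[outer] / periods[inner]
    print(f"{outer}/{inner}: period ratio {ratio:.2f} ~ {num}/{den} = {num/den:.2f}")
```

Each neighboring pair's period ratio lands within a couple of percent of the quoted integer ratio, which is what makes the system so naturally musical.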

This type of project offers a new perspective and a new way of looking at data. Much more than just taking photos and looking at them, it is a way to showcase the many nuances and differences often present in astronomical data. Furthermore, this work is excellent for including visually impaired people in astronomical observation, making the cosmos accessible to those who can’t see it. If you have a friend with a visual impairment who would like to know what space is like — here’s your chance to show them.

A collection of sonifications can be found in the Chandra X-ray Center’s ‘A Universe of Sound‘ and at SYSTEM Sounds.

Stealth bomber caught mid-flight by Google Maps photo

The Northrop Grumman B-2 Spirit stealth bomber was designed during the Cold War, featuring technology meant to penetrate dense anti-aircraft defenses. But this bomber may not be so stealthy after all, as one plane was caught flying over farm fields in the Midwest by Google’s satellite cameras.

Image via Google Maps.

Photo Bomber

The bomber was first discovered by Redditor Hippowned in the state of Missouri, US, between Kansas City and St. Louis (some 50 km east of Kansas City). The exact coordinates are 39°01’18.5”, -93°35’40.5” — you can check the spot yourself with this Google Maps link.

The blurry red-green-blue (RGB) halo around the plane is a result of how the image is captured: the satellite cameras capture the red, green, and blue channels separately and then combine them into a single image. Because the plane was moving quickly between exposures, the channels don’t line up perfectly.
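To see why offset channels produce those colored fringes, here is a small, self-contained simulation in Python with NumPy. It is a toy model with made-up shapes and offsets, not Google's actual imaging pipeline.

```python
import numpy as np

# Simulate a fast-moving bright object photographed by a camera that records
# the red, green, and blue channels a fraction of a second apart.

HEIGHT, WIDTH = 100, 200
frame = np.zeros((HEIGHT, WIDTH))          # dark ground
frame[45:55, 20:60] = 1.0                  # a bright "aircraft" shape

def shift_right(channel, pixels):
    """Return the channel as it would look after the object moved `pixels` along its path."""
    shifted = np.zeros_like(channel)
    shifted[:, pixels:] = channel[:, :WIDTH - pixels]
    return shifted

# Each channel is exposed at a slightly different moment, so the plane has
# moved a different distance in each one. Stacking them gives the RGB halo.
red   = shift_right(frame, 0)
green = shift_right(frame, 8)
blue  = shift_right(frame, 16)
rgb = np.stack([red, green, blue], axis=-1)

# Where all three channels overlap, the plane looks white-ish; at the leading
# and trailing edges only one or two channels are present, producing the
# colored fringes seen in the Google Maps capture.
print("Pixels with all three channels:", int(np.sum(rgb.min(axis=-1) > 0)))
print("Pixels with only some channels:", int(np.sum((rgb.max(axis=-1) > 0) & (rgb.min(axis=-1) == 0))))
```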

Just 21 of these bombers were ever built. With an average cost of around $2 billion each and annual maintenance costs of $6.8 million, it’s not hard to understand why there are so few of them — which makes it all the more impressive that one of them was caught on Google’s cameras.

If you’re interested in spotting your own Northrop Grumman B-2 Spirit bomber, your best chance is in Missouri, at the Whiteman Air Force Base (the current home of the B-2 Spirit).

The B-2’s first public flight in 1989. Image via Wikipedia.

It’s not the first time an airplane has been caught in Google Maps imagery. In 2010, an airliner was spotted sporting the same RGB halo effect.

Image via Google Maps.

Google Maps uses satellites and aerial photography to produce an image of the world. Most satellite images are no more than three years old and updated on a regular basis. The Street View feature boasts over 170 billion images from over 10 million miles around the planet.

‘Real’ clothes are so yesterday. Modern clothes are sustainable, flamboyant — and virtual

Whether it’s new materials, sustainability, or virtual reality, the fashion industry is keen on embracing modern trends. Even the essence of fashion — clothes — is not exempt. The means of production and consumption have started to be digitized, and clothes made entirely of pixels have become a reality.

Image credits: James Gaubert.

Nothing changes as quickly as fashion. Today’s trends are gone tomorrow. With a single click on the refresh button, people get new ideas and “one minute ago” is already old news. So, industries had to adapt, and they had to do it fast. Traditionally, the fashion industry didn’t really rely on new technology — but you know what they say, “if you can’t beat them, join them” — and the digital environment offers excellent opportunities for fashion.

The pandemic, especially, forced most fashion houses to move a big chunk of their business online and rethink their marketing strategies. Sales also went down, as many people no longer felt the need to buy new clothes when there was nowhere to go. Lockdowns meant few social events, and people were advised to work from home, so pyjamas became their go-to outfit. On top of this, physical fashion shows were canceled, so creators had to organize them online.

But the fashion industry found a new toy: virtual clothes.

A make-over for the fashion industry

With companies vying for consumers’ digital attention, some have turned to video games or virtual reality (VR) to remain relevant, while others took a step further and created cyber clothing collections. This type of clothing is made entirely of pixels and edited onto the client’s body with the help of augmented reality. It is designed to be used only online, with no physical replica you can wear in real life. Digital garments are gaining ground, addressing problems such as sustainability, inclusivity, and the economic strain caused by the pandemic.

One of the first fully digital luxury brands is Republiqe, founded by James Gaubert in August 2020. The organization has its own designers who create original clothes, but they also collaborate with other brands to produce digital versions of their products. 

Sustainability is the first problem Republiqe tries to address, Gaubert tells ZME Science. With 22 years of experience in fashion, Gaubert saw firsthand the environmental damage caused by the industry, so he wanted to find a more ecological alternative.

“We wanted to challenge that and create fashion that is as sustainable as fashion can ever be, which means it’s digital. I get asked probably once or twice a month if we’re gonna bring out physical clothing and the answer is always: no. That goes against our complete ethos and focus as a business,” he said.

The problems raised by Gaubert are much more serious than they may seem. New research published in the International Journal of Environmental Studies found that clothing production has increased by 400% in the last 27 years, and this is taking a huge toll on the environment. 

Sustainability woes

The way clothes are manufactured is harmful to the planet. For a start, large quantities of natural resources are required: 60% of garments are made from petroleum-derived synthetics, while 30% are made from cotton. Other commonly used materials are wool, silk, and linen.

Energy is also a concern. Factories use 65,000 kilowatt-hours of electricity and approximately 250,000 liters of water to turn raw materials into fibers. The fashion industry is the second most water-intensive industry in the world, consuming around 79 billion cubic metres of water per year — it takes 2,700 liters to make an average T-shirt.

The problems don’t stop there. Rapid changes in fashion trends mean a large percentage of clothes go unsold. In the USA alone, around 85% of all clothes end up in a landfill, and many of their fabrics aren’t even biodegradable. Americans are considered the biggest consumers of clothing in the world.

Despite these destructive factors, most people do not take sustainability into account when buying clothes. The Global Consumer Survey conducted in the UK in 2021 revealed that 34% of respondents consider sustainable fashion too expensive, while 18% don’t know about eco-friendly garments — and this is in one of the most environmentally conscious countries in the world.

Gaubert agrees that education and awareness play an important role in understanding sustainable and virtual fashion. 

He said: “The biggest challenge for digital fashion brands is one around education because a lot of people still don’t really understand the concept. As people get educated and live their lives more and more online, the requirement will shift from physical [fashion] to digital.”

Digital garments are not made of real textiles, so they do not harm the environment. They are just a group of pixels gathered together with the help of advanced technologies and made to look like an actual outfit. This means clients cannot physically wear them.

Republiqe uses a whole range of software, from the classic Photoshop to more specialized tools such as Marvelous Designer, CLO 3D, and Daz 3D, plus augmented reality to do the fittings.

The purchasing process goes like this: customers browse a fashion catalog, choose a piece of clothing they like, pay for it, and upload a photo of themselves. After that, the tailoring team works its magic and fits the outfit onto their body.

The final product is a professionally edited photo that can be uploaded on any social media platform. 

Other digital fashion houses, such as Dressx, went further and created outfits that are suitable for wearing in videos. 

Virtual clothing is not only beneficial for sustainability but also solves the problem of inclusivity. The clothes have no size, so they fit any body type. 

“It doesn’t matter what age, sex, color, or size you are. We fitted garments to pregnant people who normally could only buy maternity wear. We fitted garments to people that have only got one leg, who would struggle to find clothing that fits,” said Gaubert.

Their best-selling collection at the moment is the denim one, but an all-time favorite is the fur section — because it is completely ecological. Real fur endangers the lives of animals, and faux fur is made of plastic, which is also bad for the environment.

No virtual animals were harmed in the making of this virtual fur coat. Image credits: James Gaubert.

The appeal of virtual clothes


But why would people go through the trouble of spending real money to buy virtual clothes? 

Well, they do not buy just clothes, they buy fantasies. They have the chance to wear outfits with flamboyant designs that would be impossible to achieve in real life. Do you want a blouse made from glass, trousers that shine brightly, or maybe you fancy an animated dress? Nothing is far-fetched in the digital world. In the case of Republiqe, designs often feature unusual elements, like balloons or fluorescent materials. 

As the Zuckerberg-backed metaverse is starting to become a thing, people are valuing their online identity more and more. They’re seeking new ways to improve their virtual personas, so a computerized wardrobe might be the solution to look great with minimal effort.

Customers can let their imagination run free and experiment with different styles, without having to think about the impact their desires have on the environment or if they would find the right fit. 

What’s more, digital garments have managed to remove the preconception that fancy clothes can only be worn on special occasions. Now, you can be “dressed” super extravagantly in mundane moments, without attracting attention and feeling self-conscious, because no one but you would know. In reality, you wear normal clothes, but the photos you take will be edited with the unique outfits you choose. Your phone becomes your runway. 

Outfit from fluorescent materials. Image credits: James Gaubert.

Republiqe is one of the most affordable virtual fashion brands, with prices generally ranging from $3 to $20. But virtual clothes can easily go for thousands of dollars. The English digital fashion house Auroboros is one such example. Known for its utopian, sci-fi-inspired designs, the company is not afraid to charge high prices, up to $1,300. The same goes for The Fabricant, an Amsterdam-based company that does a lot of collaborations with celebrities. Its best-known collaboration was with artist Johanna Jaskowska on the ‘Iridescence’ dress, which sold for $9,500.

Like buying game skins, but for yourself

Republiqe revolutionized the way we look at fashion, but digital attire is not a new invention. Cyber garments have long been used to personalize avatars in video games and on some social networks, but now the time looks ripe for this to become a full-fledged industry.

James Gaubert revealed that, when he created Republiqe, he was inspired by the video games his son played. He saw the boy spending real money to buy clothes and skins for his characters, so Gaubert transposed the practice to his own brand.

Now the customers are treated as “real-life avatars”, Gaubert said. 

Whether you prefer to stick with real clothes or want to try on some pixelated garments, it is hard to ignore the advantages brought by digital fashion. With the ongoing pandemic, this type of product will likely continue to flourish, and technology may well take over more and more of the runway.

Will COVID-19 kill the open-plan office?

Taking down walls makes offices cheaper, but it also made them perfect spreading grounds for viruses and bacteria. A flood of changes promises to bring back those walls — or rather, take a bite out of the office itself.

Viral transmission

It was supposed to be the ‘better’ way, a design that would foster collaboration, creativity, and cooperation among teams. Companies loved it, and the open plan office became the default of many corporations. However, it wasn’t just ideas and thoughts that were easily shared, but also pathogens.

A decade ago, researchers in Arizona conducted a study to see just how fast a virus can spread inside an average office space. They placed a nonpathogenic virus on the door to an open plan office with 80 employees. In only 4 hours, over half of the commonly touched surfaces became contaminated. By the end of the day, virtually the entire office (as well as the bathrooms, doors, and breakroom) were contaminated.

“Behaviors in the workplace contribute to the spread of human viruses via direct contact between hands, contaminated surfaces and the mouth, eyes, and/or nose,” the researchers conclude.

As it turns out, while creativity and cooperation may be hard to quantify, viral spread was not, and open plan offices were more likely to make people sick. A recent study found that people working in this type of office were more likely to take sick days off.

When the COVID-19 pandemic came, the transmission hazards of offices were confirmed, and open offices were quickly linked to viral spread. Droplets from a single sneeze can travel several meters, contaminating surfaces for days; even if carefully cleaned, the open office was bound to be less safe than more compartmentalized layouts.

Then, after people increasingly started working from home last year, returning to work in an open plan office simply seemed unacceptable to most. Many workplaces introduced layout changes including buffer zones and plastic screens intended to reduce the risk of viral transmission, up to the point where there was even a plexiglass shortage.

But this created the illusion of safety rather than actual safety, and people weren’t too keen to return to open spaces — and not just because of the pandemic.

A growing list of grievances

The open plan office, it turns out, had it coming for a long time.

Systematic surveys showed that the effects of open-plan offices were not always as positive as purported. Many workers complained about high levels of noise, which hampered productivity and caused stress and higher blood pressure. Many would scurry off to quiet rooms, and it was not uncommon for open offices to actually decrease face-to-face conversations — in the noise and the crowd, direct communication ironically became rarer. In one 2018 study, face-to-face communication was found to decline by up to 70% in open offices, while electronic communication increased as employees began to “socially withdraw”. Another 2018 study found that employees were aware of the viral transmission risks associated with open spaces, and the fear of infection triggered significant stress. Workers also reported feeling more distracted in open spaces.

Furthermore, the open space takes away what little privacy employees have. It’s hard to hide a cluttered desk in an open space, and it’s likely that everyone around will know what you’re eating — and when. If your job entails phone conversations, that’s also a problem: one study found that employees were less likely to share honest opinions on phone calls while in an open space, for fear that their co-workers might hear them.

Indeed, the open plan office, the lovechild of so many corporations, was in trouble way before the pandemic.

For all the advantages it offered, like easier office logistics and breaking up silo working, open spaces seemed to cause a fair bit of trouble. The thing is, even though many disliked open spaces, they didn’t have much of a say in the matter. At least, until recently.

The Great Resignation and working from home

Working from home has a draw that many employees have discovered during the pandemic.

Among the many unexpected consequences of the pandemic is a phenomenon people are starting to call The Great Resignation. Basically, the world is experiencing an unexpected exodus of workers. A whopping 4 million Americans a month are quitting their jobs, and workers in other parts of the world are echoing similar trends, sending shockwaves across the entire market.

It’s hard to say exactly why this is happening. Part of it can be traced to economic initiatives meant to tackle the effects of the pandemic, but that’s just the tip of the iceberg. A lot of people are feeling burned out, want a better work-life balance, or are simply looking for better or more meaningful jobs. To add even more fuel to the fire, plenty of workers have become accustomed to the advantages of working from home and are prepared to quit if they’re not given the option to keep doing so.

“What will it take to encourage much more widespread reliance on working at home for at least part of each week?” asked Frank Schiff, the chief economist of the US Committee for Economic Development, in The Washington Post in 1979. Now, we know: a pandemic and a great wave of resignations.

Basically, the pandemic has shown that in a great number of cases, we can in fact work from home — despite what some employers would have you believe. An estimated 37 percent of U.S. jobs could potentially be done remotely, and this spells trouble for all offices, not just open ones.

Indeed, for many jobs, the technology for working from home is already easily accessible. It was workplace culture that was keeping people in the office. But now, that’s all been blown open.

The clock is ticking, but change is unlikely to be definitive

Semi-open spaces, or other designs that tweak the open plan, may be more palatable to workers in the near future.

From the very start, the idea behind open plan offices was flexibility and freedom; but now, many people want a different type of flexibility and freedom. In the short term, the pandemic virtually stopped the usage of such offices, but in the long run, it triggered changes that will likely lead to their downfall.

However, this doesn’t mean that the concept will become obsolete or go away — far from it. But the idea that the open plan office is the space of the future (as some companies were keen to believe) seems bound to fail. There is still a place for these offices in some companies, in some instances, but it’s not a panacea or a universally desirable solution; the open plan office is likely to become a niche rather than a go-to option.

Of course, offices as a whole will likely change and clever design changes may yet salvage open spaces or help convert them into something more palatable. Truth be told, we’re not sure what type of offices will be desirable, or how the idea of the office will morph in this extremely volatile period.

Ultimately, the cascade of changes triggered by the pandemic is far from over — it’s just beginning. We’re only starting to see the effects; who knows what will happen next?

Solar farms are now starting to replace golf courses

Few things scream ‘privilege’ the way playing golf does. Golfing has become a symbol of sorts, reserved for those rich enough to afford it. The courses themselves have become a symbol too: lavish, well-maintained, sprawling grounds where people go about hitting balls.

But the courses also pose a number of environmental problems. Despite being “green”, they typically contribute little to biodiversity, and often actually harm local ecosystems, as they’re covered in short grass and frequented by humans. To make matters worse, golf courses consume a lot of water. In the US alone, golf courses require over 2 billion gallons of water (7.5 billion liters) per day, averaging about 130,000 gallons (492,000 liters) per course per day. However, some see an opportunity here — an opportunity to turn golf courses from an environmental problem into an environmental asset. How? By filling them with solar panels.

Image in Creative Commons.

In New York, a 27-acre site that started out as a landfill and then became a golf driving range in the 1980s was transformed into a solar farm in 2019.

“This solar farm is what hope and optimism look like for our future,” Adrienne Esposito, executive director of the nonprofit Citizens Campaign for the Environment, said in a statement. The non-profit had campaigned for the transformation of the golf course. “We know over the next 20 years, the sun will shine, the power will be produced and we will have clean power. We don’t know, and we may not want to know, the cost of fossil fuels.”

The move not only ensured electricity for around 1,000 homes on Long Island, but it will also eliminate some of the pesticides and pollutants that the golf course used for maintenance. Overall, the project is estimated to generate $800,000 for local authorities.

This type of project is possible because of recent developments in solar panel technology. It seems like almost overnight, solar panels have become incredibly cheap, and it’s not just the panels themselves — a multitude of solar farm components are becoming cheaper, allowing solar energy to compete, even as the fossil fuel industry remains heavily subsidized.

“I think New York is at a critical time in its history,” NextEra spokesman Bryan Garner said. NextEra is the company behind the solar farm. “The state has had really ambitious renewable energy goals, and this is clearly a step in the right direction.”

NextEra itself is not entirely a renewable energy company but, drawn in by falling prices, it’s focusing more and more on solar energy.

This is not the only project to turn golf into solar energy, and New York is not the only place where this is happening. Rockwood Golf Course in Independence, Missouri, has also gone through a similar transformation. In Cape Cod, Massachusetts, solar panels were chosen as the “lesser of two evils”, with the alternative being turning the golf course into housing, which would have caused more traffic and more pollution in the area.

“We like the fact that it will be used for solar,” said Chairman Patricia Kerfoot at a meeting on the project. “That is a policy of the town to increase solar as much as possible, that it will keep it open space, which is part of our local comprehensive plan, as much as possible.”

It’s a perfect fit if you think about it — golf courses cover large areas of open land, which is exactly what solar farms also need. At the same time, the dropping prices of renewable energy make it a more attractive proposition.

These aren’t just isolated examples; a trend seems to be emerging, driven not just by the decreasing price of solar energy, but also by declining interest in golf. Between 2003 and 2018, golf lost almost 7 million players, and any hopes of turning the industry around were shattered during the COVID-19 pandemic. Halfway through 2021, the National Golf Foundation reported the closure of 60 18-hole courses, several of which have been replaced by solar farms.

But perhaps nowhere in the world is this trend as prevalent as in Japan.

Japan is turning its abandoned golf courses into solar farms

Image in Creative Commons.

Japan even has a national plan to replace some of its golf courses with large solar plants.

This is remarkable because, despite the declining cost of solar energy, Japan’s solar power is still far more expensive than the global average — and even so, the country keeps adding more and more solar farms. Renewable energy initiatives are welcomed and heavily subsidized in Japan, particularly as the country looks for alternatives to nuclear energy after the 2011 Fukushima plant disaster.

Japan’s golf courses were built during the country’s asset bubble of the 1980s, but interest in the sport has declined steadily in the years since. This is where solar energy enters the stage.

Solar energy has become a national priority for Japan, and the country has become a leader in photovoltaics. In addition to being a leading manufacturer of photovoltaics (PV), Japan is also a large installer of domestic PV systems with most of them grid-connected.

Naturally, the country also set its sights on golf courses, repurposing several of them for solar installations. The most recent of these, a 100 MW solar plant, has begun operation in Kagoshima Prefecture, becoming one of the largest photovoltaic facilities in the area.

In particular, rural golf courses in Japan were deemed ideal sites for new solar installations. A perfect example lies up a mountainous road in Kamigori, in Hyogo prefecture, where a new solar farm installed on a former golf course generates enough power to meet the needs of 29,000 local households.

Another reason why golf courses are so attractive for solar investments is that the ground has already been leveled, and flood-control and landslide prevention measures are already in place. Essentially, golf courses check all the boxes for what you’d want in a solar farm.

All in all, the tide seems to be turning against some golf courses, and towards solar energy. Innovations on the technical side have made solar plants a cheap and competitive source of energy. The price of electricity generated by utility-scale solar photovoltaic systems keeps decreasing, but solar plants do more than just offer cheap electricity — as the golf course conversions show, they have emerged as a space for sustainable innovation.

In the polar winter of 1961, a Soviet surgeon took out his own appendix

Leonid Rogozov clearly recognized the signs of appendicitis. After all, the 27-year-old Soviet surgeon had seen it multiple times already. But this time, there were a couple of big problems. First of all, the diagnosis took place in Antarctica during the winter, completely isolated from the outside world. Secondly, there was no other doctor at the site other than Rogozov. Lastly (and this was the biggest problem) — the patient was Rogozov himself.

“I can’t just fold my hands and give up”

Rogozov had arrived in Antarctica at the end of 1960. He was one of 12 men tasked with constructing a Soviet base in Antarctica. They finished just in time, right before the polar winter came down on them, bringing freezing temperatures and massive snow storms. It seemed that everything about the expedition was coming along just fine — until something went very wrong.

The Soviet surgeon quickly figured out he was suffering from appendicitis — an inflammation of the appendix that requires surgery. Without surgery, appendicitis can be fatal, and Rogozov knew this very well.

The young doctor had interrupted a promising research career for the expedition. He was almost due to defend his dissertation on new methods of operating on cancer of the esophagus when he left for Antarctica. An appendectomy was a simple procedure and would have posed no problems to Rogozov — had the patient been someone else.

There was no escaping the base, either. Because of the snowstorms, flying was out of the question, and no ships were going in and out of Antarctica until the end of winter.

Rogozov tried to be cavalier about it. He noted in his diary:

“It seems that I have appendicitis. I am keeping quiet about it, even smiling. Why frighten my friends? Who could be of help? A polar explorer’s only encounter with medicine is likely to have been in a dentist’s chair.”

He also tried treating himself with antibiotics, but it didn’t help much. Over the next day, his fever rose, the pain became harder to bear, and he vomited repeatedly. The following night was hellish, and it led him to understand that there was only one possible way out of this situation: to operate on himself and take out his own appendix.

“I did not sleep at all last night. It hurts like the devil! A snowstorm whipping through my soul, wailing like a hundred jackals. Still no obvious symptoms that perforation is imminent, but an oppressive feeling of foreboding hangs over me,” Rogozov wrote in his diary.

“. . . This is it . . . I have to think through the only possible way out: to operate on myself . . . It’s almost impossible . . . but I can’t just fold my arms and give up.”

Things did not get much better over the course of the next day. Rogozov couldn’t hide his condition from the other members of the expedition anymore.

“18.30. I’ve never felt so awful in my entire life. The building is shaking like a small toy in the storm. The guys have found out. They keep coming by to calm me down. And I’m upset with myself—I’ve spoiled everyone’s holiday. Tomorrow is May Day. And now everyone’s running around, preparing the autoclave. We have to sterilise the bedding, because we’re going to operate.”

“20.30. I’m getting worse. I’ve told the guys. Now they’ll start taking everything we don’t need out of the room.”

The surgery was carried out in an improvised space in Rogozov’s room. His fellow workers cleaned everything out of the room and disinfected it according to the doctor’s instructions. Two tables, a bed, and a table lamp were left, and the room was flooded with ultraviolet lighting to destroy as many pathogens as possible.

Rogozov then explained how the operation would work and delegated tasks: one colleague would hand him instruments; another would hold the mirror and adjust the table lamp; another would stand in reserve, in case nausea overcame the two helpers. Since Rogozov would operate on himself, he also prepared for the possibility that he would pass out, instructing his team to inject him with drugs and use the specially prepared syringes for artificial ventilation. He disinfected his assistants, had them put on gloves, and then sat down on the bed, reclining at about 30 degrees. The operation was set to start at approximately 2 AM local time.

Rogozov first injected himself with a local anesthetic and, after 15 minutes, made an incision. It wasn’t perfect — his field of view was limited, his position was uncomfortable, and he was feverish. He worked without gloves so he could feel the instruments better. Around 30 minutes into the surgery, he started suffering from nausea and vertigo and had to take several short breaks. He was sweating intensely and had to ask his assistants to wipe his forehead every few minutes. Finally, he managed to reach his appendix and removed it — it was severely inflamed, and the surgery really had been his only chance of survival. A day longer and it would likely have burst, killing the surgeon. After removing it, he applied antibiotics and closed the wound. The whole surgery lasted 1 hour and 45 minutes, and it was excruciating. As one of his assistants noted in his diary:

“When Rogozov had made the incision and was manipulating his own innards as he removed the appendix, his intestine gurgled, which was highly unpleasant for us; it made one want to turn away, flee, not look—but I kept my head and stayed. Artemev and Teplinsky also held their places, although it later turned out they had both gone quite dizzy and were close to fainting . . . Rogozov himself was calm and focused on his work [..] The operation ended at 4 am local time. By the end, Rogozov was very pale and obviously tired, but he finished everything off.”

Before taking a few sleeping pills, Rogozov instructed his assistants on how to wash and disinfect the instruments and the room. He then went to sleep, having performed surgery on himself.

The aftermath

When Rogozov woke up, his fever had dropped to 38.1°C (100.6°F) and he was feeling a bit better. He continued to take antibiotics for four days and slowly recovered. His fever gradually subsided, and after a week, he took out his own stitches. Within two weeks, he had made a full recovery. He later recalled how the surgery went from his perspective:

“I didn’t permit myself to think about anything other than the task at hand. It was necessary to steel myself, steel myself firmly and grit my teeth.”

“My poor assistants! At the last minute I looked over at them: they stood there in their surgical whites, whiter than white themselves. I was scared too. But when I picked up the needle with the novocaine and gave myself the first injection, somehow I automatically switched into operating mode, and from that point on I didn’t notice anything else.”

Work then continued as normal at the station, and around a year later, Rogozov returned to Leningrad (today, St. Petersburg). He successfully defended his dissertation at the Department of General Surgery of the First Leningrad Medical Institute. He never returned to the Antarctic.

It’s not entirely clear whether Rogozov was the only person ever to take out his own appendix. A few other such incidents are referenced in the literature, including one performed by Dr. Evan Kane in 1921, who believed that some surgeries (like an appendectomy) don’t require full anesthesia. He performed an appendectomy on himself to prove his point, but it was his assistants who completed the surgery. Rogozov was not aware of this precedent.

However, the fact that Rogozov was able to conduct the surgery in a time of great distress, in the wilderness, and without any professional help, is a stunning feat. It shows great willpower and medical ability, and although Rogozov rejected the glorification of this deed, it’s definitely one for the history books.

We need to protect 50% of the planet — but even that’s not enough

Image credits: Lingchor.

Protected areas are advocated for by scientists and conservationists alike because of their clear environmental benefits. Due to the constant expansion of our species, environments and ecosystems are under more and more pressure, and having safe havens like these protected areas is essential for the wellbeing of our planet.

Primarily, protected areas protect biodiversity and ecosystems while also often functioning as natural climate solutions. Protected areas also come with a host of benefits for goals beyond environmentalism. These include social and financial benefits for residents within protected areas and safeguarding against the emergence of new zoonotic diseases. Simply put, protected areas are not just good for the planet — they’re good for us as well.

Currently, around 15% of Earth’s land surface (and around 7% of Earth’s ocean surface) is protected. There is therefore a long way to go before we reach the 50% protection goal.

However, in urging governments to reach this 50% target, some scientists have warned that there is a risk we get so caught up in the quantity of protected land and sea that we fail to consider how effective those protected areas actually are. But before we talk about the quality of protected areas, let’s talk a bit about quantity.

Where does this 50% figure come from anyway?

Prominent voices that are calling for half the Earth to be protected include the aptly named Half-Earth Project based on the book written by E. O. Wilson, as well as Nature Needs Half, an international organization that advocates for half of the planet to be protected by 2030. Their choice of 50% of the Earth, however, is not an arbitrary one, but one that is supported by science.

The Global Safety Net is a tool developed by a team of scientists that combines a number of different data layers and spatial information to estimate how much of Earth’s terrestrial environment needs to be protected to attain three specific goals. Those goals were 1) biodiversity conservation, 2) enhancing carbon storage, and 3) connecting natural habitats through wildlife and climate corridors.

The researchers found that using this framework, a total of 50.4% of terrestrial land should be conserved to “reverse further biodiversity loss, prevent CO2 emissions from land conversion, and enhance natural carbon removal”. Interestingly, these results concur with prior calls to protect half the planet.

This data also found that, globally, there is significant overlap between the land that needs to be protected for conservation and Indigenous lands. The authors of the paper write that by enforcing and protecting Indigenous land rights, we can combine biodiversity and climate goals with social justice and human rights. They emphasize that “with regard to indigenous peoples, the Global Safety Net reaffirms their role as essential guardians of nature”.

Biodiversity inside and outside protected areas. From The Guardian.

Why it can be detrimental to only look at the numbers

Scientists are absolutely right in saying we should aim to protect half the planet. But there’s more to it than that. An equally important consideration is how effective those protected areas are at achieving their stated goals.

Worryingly, some scientists estimate the true quantity of protected land is much lower than the official 15% when effectiveness is considered. One paper found that “after adjusting for effectiveness, only 6.5%—rather than 15.7%—of the world’s forests are protected”. Importantly, the authors caution their readers against assuming that protected areas will completely eliminate deforestation within their boundaries. On average, they found that protected areas only reduced deforestation by 41%.

Another team of scientists analyzed over 50,000 protected areas in forests around the world and their impact from 2000-2015. A major finding from their paper was that a third of protected areas did not contribute to preventing forest loss. In addition, the areas that were effective only prevented around 30% of forest loss. The authors call for improving the effectiveness of existing protected areas in addition to expanding protected area networks.

Finally, a team of researchers recently authored a paper analyzing protected areas established between 2000 and 2012 and found that significantly more deforestation could be avoided if existing protected areas were made more effective — this despite the authors finding that protected areas already reduce deforestation by 72%. That is a notably higher effectiveness than the other papers report, perhaps because the team analyzed only protected areas that were established relatively recently. Multiple papers have found that newer protected areas tend, on average, to be more effective than older ones.

So how can we make protected areas more effective then?

One of the most important considerations is that protected areas tend to prevent more deforestation in areas where deforestation is higher. However, it is still important to protect lands currently at low risk of degradation. In that way, future forest loss can be prevented before it becomes a significant problem.

Indirectly, other attributes of a country can predict how effective or ineffective the protected areas within that nation will be. Countries that have higher human development, higher GDP per capita, better governance, and lower agricultural activity tend to host more effective protected areas than countries with lower human development and GDP per capita, lower government effectiveness, and higher levels of agriculture.

Finally, as mentioned before, there is a huge amount of overlap between potential land for protection and Indigenous land. Engaging with, and granting property rights and legal recognition to, Indigenous people is a cost-effective way to protect forests while also addressing a human rights issue at the same time.  

What all of this data shows us is that the conversation surrounding environmental protection needs to be considered in a broader context, and take into consideration economic, political, and social justice concerns. And it is an issue that is far too complex for its success to be measured by a single number.

Eugenics: how bad science was used to promote racism and ableism

Eugenics is the idea of selectively ‘improving’ humankind by allowing only specific physical and mental characteristics to persist. It focuses on systematically eradicating ‘undesirable’ physical traits and disabilities, and although it has long been discredited as a science, some of its ideas are still surprisingly prevalent in today’s society.

A Eugenics Society poster (1930s) from the Wellcome Library Eugenics Society Archive. Wikimedia Commons.

In some forms, eugenics actually has a remarkably long history. Some indigenous peoples of Brazil practiced infanticide against children born with physical abnormalities, and in ancient Greece, the philosopher Plato argued in favor of selective mating to produce a superior class. The Roman Empire and some Germanic tribes also practiced some forms of eugenics. However, eugenics didn’t truly become a large-scale idea until the 20th century.

Progress didn’t just happen in Europe

The foundation of eugenics lies in racist beliefs and ideologies — and especially in something called scientific racism: a pseudoscientific practice that tries to use empirical-looking evidence to support or justify racism.

In 1981, American paleontologist Stephen Jay Gould wrote ‘The Mismeasure of Man’, a book in which he discusses the problems of the persistent belief in biological determinism that later fed into eugenics. He gave examples of scientific racism and of how some scientists provided ‘evidence’ for the supposed superiority of white people, shaping faulty beliefs for decades. In the book, you can find a remarkable list of horrid theories and studies in which researchers insisted on putting one race above another.

The most famous ranking of races was developed by 19th-century physician Samuel George Morton. Morton, believing himself to be objective, used his collection of skulls from different ethnic groups in the Americas to compare cranial capacities and try to prove the superior intelligence of one group over another. His study basically ranked average skull sizes (which are not directly connected to intelligence), but he mixed people of different statures in his samples, which introduced an obvious bias into his analysis. The analysis was strongly skewed towards linking intelligence with white men, and Morton’s conclusion was that white men were the most intelligent race on the face of the Earth. Gould criticized Morton’s data (though he does mention that the bias may have been unconscious), noting that the analysis includes analytical errors, manipulated sample compositions, and selectively reported data. Gould classifies this as one of the main instances of scientific racism.

But it gets even worse. Colonialism worked hand in hand with the idea that Europeans were carrying out a ‘civilizing mission’: white Europeans were doing nothing but the generous act of ‘helping’ ‘inferior’ races to develop and become civilized. This patronizing notion is easily debunked with historical evidence. We know, for instance, that Mesoamerican and Andean civilizations built empires without any foreign influence. Or take Stonehenge, a monument in England believed to have been built around 3000-2000 BC: though impressive and complex, it is not as advanced as the Giza pyramid complex in Egypt, which was created around the same period, showing how civilizations evolved independently of one another.

Social Darwinism

Image credits: Gennie Stafford.

Another interesting aspect of eugenics is so-called social Darwinism. Social Darwinists believe that “survival of the fittest” also applies to society — that some people become powerful because they are somehow innately better.

Social Darwinism is closely tied to one of the founders of eugenics, Sir Francis Galton, a cousin of Charles Darwin. Galton believed that eugenics should ‘help’ the human race reach its ultimate ‘potential’, accelerating ‘evolution’ by eliminating the ‘weak’ and keeping the ‘appropriate races’.

The problem is that none of this fits the scientific evidence. First, genetics has clearly shown that there is no biological separation into races; race is a social construct more than a genetic one. Differences do exist, but they have to do with common ancestry. As a species, we share 99.9% of our DNA, regardless of race. As a result, no ethnicity is better than another in anything: not in appearance, behavior, or intelligence.

The other misconception lies in natural selection itself. Evolution, for humans, is a slow process; it takes time for a genetic trait to become dominant in a species. Social change, on the other hand, is much faster: regimes fall, presidents change, policies change. These changes can be beneficial or harmful for different groups of people. Under one regime, everyone may have easy access to vaccines and survive an epidemic; under another, people may get sick because they are denied those basic rights, or go hungry simply because they do not have enough to eat. This has nothing to do with one group being ‘stronger’ than another, but with the choice to leave some people unassisted. Simply put, social Darwinism has little scientific evidence to back it up — and a lot of evidence against it.

How technology fits in

Morton’s ideas are obviously flawed, but scientists treated them as objective analysis for decades — and that’s when the chaos started. One scientist cited another, and then another, propagating false ideas whose echoes carried through history and affected millions of lives. More theories like these emerged as science developed, but the insistence on ranking white men as the ‘apex’ of humanity persisted. Even leading scientists can fall prey to racist ideas and dress them up as science.

Even with modern machine learning and big data, these ideas can still propagate. If the scientists involved don’t make sure their models aren’t susceptible to bias, the computer won’t be objective. That’s what happened with a machine learning system using data from hospitals in the US. The algorithm was meant to flag high-risk patients, and one easy proxy for risk is how much money is spent on a patient in a year. That seems reasonable, but because our society is biased, less money has historically been spent on Black patients, so the model overlooked a large number of Black people who needed care. How much a system spends on a patient says little about the patient’s actual condition.
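To see the mechanism at work, here is a minimal synthetic sketch (not the actual hospital model; the groups, numbers, and the 0.7 spending factor are all made up). Two groups have identical distributions of health need, but less money is historically spent on one of them, so ranking patients by spending systematically under-flags that group, and the few members of it who do get flagged are sicker on average.

```python
# Synthetic illustration of proxy-label bias: using past spending as a stand-in for
# health need under-identifies a group that historically received less care.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)   # 0 and 1: two hypothetical patient groups
need = rng.gamma(2.0, 1.0, n)   # true health need, identically distributed in both groups

# Group 1 historically receives less care, so less money is spent for the same need.
spending = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 0.1, n)

# "Risk score" = spending; flag the top 20% of patients for an extra-care program.
flagged = spending >= np.quantile(spending, 0.8)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: flagged {flagged[mask].mean():.1%}, "
          f"mean need of flagged patients {need[mask & flagged].mean():.2f}")
# Same need in both groups, yet group 1 is flagged less often, and only its sickest
# members clear the spending threshold -- the pattern described in the text.
```

One mitigation discussed in the literature is to train such models on a more direct measure of health need rather than on cost.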

Machine learning is based on statistics, and some of the fathers of statistics are intertwined with eugenics. If you ever took a statistics course, you may have heard the name ‘Pearson’. Karl Pearson developed the chi-squared test, popularized the use of p-values, and laid much of the groundwork for modern hypothesis testing, among many other tools still used in science today. However, he also held strong beliefs in social Darwinism, the distorted idea that, due to natural selection, some groups struggle more because in the end ‘the stronger survive’. Pearson even supported wars against ‘inferior races’. In 2020, University College London renamed lecture halls and a building that originally honored Pearson and Francis Galton.

The search for the ‘special mind’

Besides ethnicity, the next eugenicist target was intelligence. The French psychologist Alfred Binet invented what we know today as the first version of the IQ test. He wanted his test to be used to help kids at school — those who performed poorly would be sent to special classes to get help adapting. He didn’t want it to become a label used to segregate people. However, his ideas were distorted by some scientists in the USA, where the test was used to reinforce old fallacies about ranking people, even becoming a mechanism for screening immigrants.

In time, the IQ test became the one you know today. The problem is that it’s often used to segregate people, without accounting for the cultural or socioeconomic factors that can affect IQ scores. That’s not all: the American psychologist Henry Goddard, the man chiefly responsible for corrupting Binet’s ideas, defended the idea that ‘feeble-minded’ people should not have children. He and his contemporaries also chose words like ‘idiot’, ‘moron’, and ‘feeble-minded’ to classify people — words we still use today to insult someone.

Sterilization

The ultimate goal of eugenics is perpetuating only the ‘good’ genes — which means not allowing those who have ‘bad’ genes to reproduce.

This led to the forced sterilization of people with mental disorders, most famously upheld in the US Supreme Court case Buck v. Bell in 1927. Most of the over 60,000 sterilizations carried out in the United States between the 1920s and 1950s targeted people whose conditions were labeled ‘feeble-minded’ or ‘insane’.

These procedures were typically carried out in asylums or prisons, with a medical supervisor having the right to decide whether an inmate’s reproductive system should be altered. The practice is now considered a violation of human rights — and the stated motivations, that “it would improve inmates’ lives”, concern about “the financial burden” inmates would create if they had children, punishment, and of course “avoiding the reproduction of the unfit”, are considered bogus. Under California’s law, the person had no right to object or appeal.

Autism

A lot happened between Goddard’s time and the 1930s and 1940s, when autism was first described. Know the famous name Hans Asperger? He was an Austrian pediatrician who cooperated with the Nazi regime and became known for describing one ‘type’ of autism, later called Asperger Syndrome. The diagnostic criteria for Asperger Syndrome were removed from the Diagnostic and Statistical Manual of Mental Disorders in 2013; there are no longer sub-diagnoses, and it is all called Autism Spectrum Disorder (ASD).

Asperger observed that some autistic children were more ‘adaptable’ to social norms and could act ‘normal’, so he labeled those children “high functioning”, while others were “low functioning”. The ‘low functioning’ children were considered a burden and unfit for the Third Reich because they couldn’t perform the tasks of a “normal” person; in other words, they wouldn’t be profitable. Asperger would then transfer these ‘genetically inferior’ children to the ‘euthanasia’ killing programs, deciding who was deemed worthy of living and who wasn’t. Next time you meet autistic people, ask whether they want to be associated with that legacy before calling anyone low-functioning, high-functioning, or ‘aspie’ — spoiler: they almost certainly don’t.

Genetic research can be eugenicist without ever mentioning the word or directly defending the idea. Nobody seems to ask autistic people what types of research would actually make their lives better; the concern is usually framed as sparing parents a ‘burden’. Pay attention to the advertisements: do they show autistic people in successful positions, or only pictures of children with their parents?

More recently, the UK-based Spectrum 10K study was paused. Its researchers wanted to interview autistic people and their relatives and collect their DNA, but the autistic community was not consulted and questioned who the data would be shared with. Advocates also pointed out that people involved in the project had a history of questionable research on autistic DNA, so they protested, and the study was paused with a promise that the team would listen to autistic people.

“People with disabilities are genuinely concerned that these developments could result in new eugenic practices and further undermine social acceptance and solidarity towards disability – and more broadly, towards human diversity.”

Those are the words of Catalina Devandas, the UN Special Rapporteur on the rights of persons with disabilities, speaking on 28 February 2020.

Gould saw the problem with many of these ideas back in the 1990s, when he revised his book to include the biased ‘research’ of his own time, hoping to alert scientists not to repeat the same mistakes. Today’s world has no more room for racist or ableist science, so why is it okay for labels born in those eras to linger in machine learning, in therapists’ offices, and in schools? It’s about time we cut eugenics out of our civilization.


Are transparent phones close to becoming a thing?

We’ve seen smartphones change drastically over the years. Is going transparent the next stage of their evolution? We’re not sure yet, but companies seem to be taking it seriously.

Futuristic transparent smartphone.
Image credits: Daniel Frank/Unsplash.

A few tech giants have already received patents for their respective transparent phone designs, but this doesn’t necessarily mean they’re already working on transparent smartphones. The problem is that this type of device doesn’t just require a change to one particular part; it calls for a complete makeover.

From the display to the cameras, sensors, and circuitry, phone engineers might have to make each and every component transparent if they wish to develop a truly transparent smartphone — or assemble them in such a way that those components don’t overlap with the transparent screen. This is definitely not going to be easy, but if they somehow achieve this difficult feat, it might revolutionize other gadgets around us as well.

Furthermore, the advent of transparent smartphones may lead us towards the creation of transparent televisions, laptop screens, cameras, and a whole new generation of transparent gadgets. No surprise, such cool gadgets would make the current devices look like ancient artifacts (at least, in terms of appearance).

Are there any real-life transparent smartphones yet?

Well, not quite.

Although they’re not exactly like the ones you may have seen in The Expanse, Real Steel, or Minority Report, some companies have tried to develop transparent phones — not smartphones — or at least make them partially transparent. While they were ahead of their time, some designs were actually pretty impressive.

In 2009, LG introduced the GD900, a stylish slider phone equipped with a see-through keypad; it is considered the world’s first transparent phone. The same year, Sony Ericsson launched the Xperia Pureness, the world’s first keypad phone with a transparent display.

LG GD-900, the first phone with a transparent design. Image credits: LG전자/Flickr.

Despite its unique design, the Xperia phone received poor ratings from both critics and users due to its poor display visibility, and it didn’t turn out to be a very successful product. A couple of years later, the Japanese tech company TDK developed transparent, bendable displays using OLEDs (organic light-emitting diodes).

In 2012, two other companies in Japan (NTT Docomo and Fujitsu) joined hands to develop a see-through touchscreen phone, and they did come up with a prototype with a transparent OLED touchscreen. The following year, Polytron Technologies, from Taiwan, released some information about a transparent smartphone prototype it had developed. Though the camera, memory card, and some motherboard components in the Polytron device were clearly visible, the phone almost looked like a piece of transparent glass.

The see-through display technologies demonstrated by TDK, Docomo, and Polytron were impressive, but for reasons that are not entirely clear, they never became part of mainstream touch phones.

A concept image of Samsung’s transparent smartphone. Image credits: Stuffbox/YouTube.

However, the most exciting developments concerning transparent smartphones have happened much more recently. In November 2018, WIPO (the World Intellectual Property Organization) published Sony’s patent for a dual-sided transparent smartphone display, and reports suggested that Sony could use this see-through design in upcoming premium smartphones. The next year, LG received a smartphone design patent from the USPTO (the United States Patent and Trademark Office) that shed light on the company’s plans for a foldable transparent smartphone. However, LG has since said it will stop making phones because the market is too saturated — so it’s unclear whether anything will actually come of this design.

Leading tech manufacturer Samsung is also said to be developing a see-through smartphone. According to a report from Let’s Go Digital, the company had a patent (concerning a transparent device) published on the WIPO website in August 2020. The same report also reveals that in the coming years, Samsung aims to launch smartphones and other gadgets (under its popular Galaxy series) equipped with a transparent luminous display panel.

Are transparent smartphones even practical?

Just because big brands like Sony, LG, and Samsung are working on projects related to transparent smartphone technology doesn’t mean we’re close to seeing actual see-through phones. Many tech experts believe that while transparent smartphones may sound like a futuristic idea, they may not be feasible, for several reasons.

Surprisingly, one of the main challenges with transparent smartphones is the camera. You can definitely make transparent displays using OLEDs, but what about the rear and front cameras? There is no known way for a phone engineer to make camera sensors transparent. The same goes for other parts like SIM cards, memory chips, and speakers; if these components are still visible in a see-through phone, then it is no better than the Polytron prototype of 2013. So while there’s a realistic chance of transparent-screen phones becoming a reality, how exactly a fully transparent phone would be built is not at all clear.

Another issue that users might face with transparent smartphones is poor display visibility. The screens used in current smartphones may not be transparent but they offer clear and sharp picture quality, whether you use them under bright daylight or in the dark. Transparent displays might not be able to deliver such a flawless visual experience, and users may even struggle to see the text or images clearly on a see-through screen in daylight conditions.

Until and unless these major issues are resolved, we probably won’t see transparent smartphones on the market. But why would we even want one? Well, there are some merits to transparent smartphones. For instance, notifications and alerts could look clearer and more distinct on a transparent screen, and such a display could conveniently be split to run different applications at the same time.

Moreover, you could use both sides of a see-through display, which would facilitate multitasking and save a lot of time. Say you are watching an educational video or a recipe on YouTube while noting down points in a different tab. With a double-sided transparent screen, you wouldn’t need to close the video tab every time you switch; you could just flip your phone to jump to the tab you want to use.

Transparent smartphones might also drastically improve the way you experience augmented reality. If the screen that serves as a barrier between your real and virtual worlds becomes transparent, you may not need an AR app to see virtual elements in the real world; the transparent screen itself could act as an AR viewer. Then again, such a screen may not deliver virtual imagery as good as what you experience on a normal display.

Let’s face it: transparent phones would be very cool, but we’re not quite there yet. We can geek out about them as much as we want, but a transparent smartphone still requires a healthy amount of innovation that might take some time to evolve. With how quickly technologies are progressing, though, we may see them in the not too distant future.

The ‘Tsar Bomba’: the most powerful nuclear weapon ever made

The Tsar Bomba in 1960. The footage was declassified in 2020. Credit: Rosatom.

On the cloudy morning of October 30, 1961, a Soviet bomber dropped a thermonuclear bomb over Novaya Zemlya Island, deep in the Arctic Ocean, in the most extreme northeastern part of Europe. The bomb exploded with a staggering yield of 50 megatons (equivalent to 50 million tons of conventional explosives), and its detonation flash could be seen from over 1,000 km away. The bomb, known as the Tsar Bomba (“King of Bombs”), remains the most powerful thermonuclear weapon ever detonated. No bomb as strong has been tested since. This is the story of the pinnacle of nuclear weapons.

The bomb of all bombs

Ground-level view of the detonation of Tsar Bomba. Credit: Wikimedia Commons.

In the late 1950s, the Soviets found themselves in a pickle. The Cold War was in full swing and the Americans were clearly winning. Although by that time the USSR had developed its own thermonuclear weapons to match the US arsenal, the Soviets had no effective means of delivering their nukes to US targets.

The post-WWII military doctrine was dramatically disrupted by the introduction of nuclear weapons. Once nukes came into the picture, the US and the Soviet Union, the only nuclear powers at the beginning of the Cold War, each adopted nuclear deterrence as their strategy. Nuclear deterrence represents the credible threat of retaliation to forestall an enemy attack. So if your threat of retaliation isn’t really a genuine threat, you may face total annihilation.

To level the playing field, the Soviets thought of the mother of all bluffs: a weapon so powerful it could level huge cities like New York or Paris in a single blow.

It was Soviet leader Nikita Khrushchev who ordered scientists to start work on the most powerful bomb in the world, with development beginning in 1956. In its first phase, the Tsar Bomba went by the code name “product 202”; then, from 1960, it was known as “item 602”. In this second phase, nuclear physicist Andrei Sakharov was key to the bomb’s development.

The nuclear scientists settled on a 50Mt thermonuclear warhead design, equivalent to nearly 3,300 Hiroshima-era atom bombs. Thermonuclear weapons, also known as hydrogen bombs, are a step above atomic bombs and are classed as second-generation nuclear weapons. While atomic bombs rely on nuclear fission to release copious amounts of energy from uranium or plutonium, hydrogen bombs add a second step in which the energy from fission of heavy elements is used to fuse the hydrogen isotopes deuterium and tritium.
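For a rough sense of scale, dividing the design yield by the roughly 15-kiloton yield commonly cited for the Hiroshima bomb (an assumed figure here, since estimates vary) recovers the number quoted above:

```latex
% Back-of-the-envelope check, assuming a Hiroshima yield of about 15 kt
\frac{50\ \mathrm{Mt}}{15\ \mathrm{kt}} = \frac{50{,}000\ \mathrm{kt}}{15\ \mathrm{kt}} \approx 3{,}300
```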

How the Soviets built the world’s most powerful bomb ever

Total destruction radius, superimposed on Paris. Red circle = total destruction (radius 35 kilometers), yellow circle = fireball (radius 3.5 kilometers). Credit: Wikimedia Commons.

The design of hydrogen bombs is very clever, insofar as one can admire a weapon of mass destruction. To increase the yield of a conventional atom bomb, you basically have to add proportionately more uranium or plutonium, both highly scarce materials. A hydrogen bomb, by contrast, uses only a small amount of uranium or plutonium, just enough to kick-start the fusion of heavy hydrogen isotopes.

After the fission of the primary stage, the temperature inside the thermonuclear device soars to around 100 million kelvin (more than 17,000 times hotter than the surface of the Sun). Thermal X-rays from the first stage flood the secondary fusion stage, imploding it and setting off the chain of events that ultimately ignites the fusion fuel.

The first full-scale thermonuclear test was carried out by the United States in 1952, but the Soviets took things to a whole new level. The Tsar Bomba was a three-stage weapon: a fission primary followed by two thermonuclear stages.

The fission of uranium or plutonium in the primary sets off the next stage, where neutrons split lithium-6 into tritium and helium; the tritium then fuses with the deuterium already present in the fuel under extreme heat and pressure, producing the thermonuclear explosion. Around 97% of the Tsar Bomba’s total yield came from fusion alone, which meant minimal nuclear fallout relative to the incomprehensible destructive power of the warhead, making it one of the “cleanest” nuclear bombs ever made.
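In schematic form, the two reactions at the heart of a lithium-deuteride fusion stage are tritium breeding from lithium-6 and deuterium-tritium fusion; the energy values below are standard textbook figures, not numbers specific to the Soviet design:

```latex
% Tritium breeding from lithium-6, followed by deuterium-tritium fusion
^{6}\mathrm{Li} + n \;\rightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8\ \mathrm{MeV}
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\rightarrow\; {}^{4}\mathrm{He} + n + 17.6\ \mathrm{MeV}
```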

The final iteration of the Tsar Bomba measured 8 meters in length and about 2 meters in diameter. It weighed around 25 tons, far too much to be handled by any intercontinental ballistic missile of the time, Soviet or American. In fact, the Tsar Bomba was so big it couldn’t be carried by any unmodified plane fielded by the Soviet Union.

The Tsar Bomba was dropped from a modified Tu-95 bomber. Credit: Picryl.

Sakharov had to work closely with aviation engineers to modify a Tupolev Tu-95 plane. The carrier had its fuel tanks and bomb bay doors removed and its bomb-holder replaced by a new holder attached directly to the longitudinal weight-bearing beams.

In 1961, after a brief respite, political tensions between the United States and the Soviet Union were once again high. This was just a year before the Cuban Missile Crisis, after all. The Cold War thus resumed and so did the Tsar Bomba testing.

The day the Earth trembled before the Tsar Bomba

The Tsar Bomba’s fireball grew 8 km (5 miles) wide at its maximum. It didn’t touch the surface of the Earth due to the shock wave, but nearly reached 10.5 km (6.5 miles) in altitude — the same cruising altitude as the deploying bomber. Credit: Wikimedia Commons.

On October 17, 1961, Khrushchev announced the upcoming test of the 50Mt mega weapon. The Tu-95V aircraft, No. 5800302, armed with the warhead, took off from the Olenya airfield and flew to State Test Site No. 6 of the USSR Ministry of Defense, located on the deserted island of Novaya Zemlya. The crew numbered nine officers, led by Andrei Durnovtsev.

The bomb was released from a height of 10,500 meters (34,450 ft). An 800-kilogram parachute deployed immediately, giving the carrier and observer planes enough time to fly about 45 kilometers (28 miles) away from ground zero. The crew had been given a 50 percent chance of survival; they all made it out alive.

Site of the detonation. Credit: Wikimedia Commons.

The Tsar Bomba exploded for the first and last time about 4,200 meters (13,780 ft) above the Mityushikha Bay nuclear testing range. All went according to plan — meaning all hell broke loose.

The 8-kilometre-wide (5.0 mi) fireball reached nearly as high as the altitude of the release plane and was visible from almost 1,000 km (620 mi) away. After the fireball subsided, it gave way to a mushroom cloud of debris, smoke, and condensed water vapor that extended about 67 km (42 miles) high, roughly seven times the height of Mount Everest. The flash from the detonation was visible in Norway, Greenland, and Alaska.

The heat from the explosion could have caused third-degree burns 100 km (62 mi) away from ground zero. And although the warhead was detonated miles above ground, it generated a seismic wave that was felt with an estimated magnitude of 5.0-5.25.

One of the Soviet cameramen described the harrowing experience:

“The clouds beneath the aircraft and in the distance were lit up by the powerful flash. The sea of light spread under the hatch and even clouds began to glow and became transparent. At that moment, our aircraft emerged from between two cloud layers and down below in the gap a huge bright orange ball was emerging. The ball was powerful and arrogant like Jupiter. Slowly and silently it crept upwards…Having broken through the thick layer of clouds it kept growing. It seemed to suck the whole Earth into it. The spectacle was fantastic, unreal, supernatural.”

The mushroom cloud of Tsar Bomba seen from a distance of 161 km (100 mi). Credit: Wikimedia Commons.

There were no fatalities resulting from the Tsar Bomba’s test, but the explosion shattered windows in a village on Dikson Island, even though it was 780 km (480 mi) away from the testing site.

In 2020, Rosatom, the Russian nuclear energy agency, released a 30-minute documentary video that shows the preparation and detonation of the Tsar Bomba. The video was previously a state secret. You can now watch it below.

https://www.youtube.com/watch?v=nbC7BxXtOlo&feature=youtu.be

The bomb that blasted a new era of peace

Predictably, the Tsar Bomba test unleashed a wave of indignation in the United States. But behind closed doors, the White House and the Pentagon were not actually sure how to respond. A new study published in October, which is based on recently declassified documents, offers valuable insights into how President John F. Kennedy decided to act in these highly tense times.

The study, which appeared in the Bulletin of the Atomic Scientists, shows that the Soviets weren’t the only ones contemplating mega thermonuclear weapons. Lead author Alex Wellerstein, a nuclear historian at the Stevens Institute of Technology in Hoboken, found documents showing that Edward Teller, the mastermind of the hydrogen bomb, sought the green light from the Atomic Energy Commission for two superbomb designs: one of 1,000 megatons (20 times more powerful than the Tsar Bomba) and one of 10,000 megatons (a staggering 200 times more powerful than the Soviet doom bringer). The proposal was made in 1954, before the Soviets even thought about making the Tsar Bomba.

If you’re shocked by the idea of a 10,000-megaton nuclear weapon, congratulations! You’re an empathetic human being. Seriously though, we all need to bear in mind something about thermonuclear weapons: their destructive power is essentially unlimited, meaning they can be scaled up to planet-wrecking yields if a large enough warhead is built. The Tsar Bomba, for instance, was initially designed as a 100-megaton warhead, but the Soviets scaled it down by swapping its uranium tamper for a lead sheath. In 1950s prices, the cost of increasing the yield of a thermonuclear bomb was just 60 cents per kiloton of TNT.
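Taking that 60-cents-per-kiloton figure at face value, restoring the 50 megatons the designers gave up by fitting the lead sheath would have cost astonishingly little, at least on paper:

```latex
% Illustrative arithmetic only, using the article's 1950s cost figure
50\ \mathrm{Mt} \times 1{,}000\ \frac{\mathrm{kt}}{\mathrm{Mt}} \times \$0.60/\mathrm{kt} \approx \$30{,}000
```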

While many fellow nuclear scientists were indeed shocked by this audacious proposal, the military was all ears. But they too cooled off once they learned a 1,000-megaton warhead would be so powerful that the radioactivity would be impossible to keep confined within the borders of an enemy state.

After the Tsar Bomba was detonated, enthusiasm for an American super bomb reignited. According to Dr. Wellerstein, in 1962, the defense secretary, Robert S. McNamara, lobbied the Atomic Energy Commission to build the American equivalent of the Tsar Bomba.

Andrei Sakharov. Credit: Wikimedia Commons.

But President Kennedy, who was famous for his loathing of nuclear weapons, had other plans. By then, scientists had figured out how to conduct nuclear tests underground in the Nevada desert. However, even detonated deep underground, a super thermonuclear bomb would still break through the hard rock and release radiation into the atmosphere.

In the aftermath of the Cuban Missile Crisis, whose threat of total obliteration was too close for comfort, President Kennedy managed to convince the Soviets to limit nuclear testing to underground sites. On October 7, 1963, the United States, the United Kingdom, and the Soviet Union signed the Partial Nuclear Test Ban Treaty, which prohibited tests in the atmosphere, outer space, and underwater. In doing so, these countries ensured that no one would detonate a Tsar Bomba-like weapon ever again.

A key role in the Partial Test Ban Treaty was played by Sakharov, one of the lead designers of the Tsar Bomba. Troubled by the moral and political implications of his work, Sakharov pushed his Moscow contacts to sign the treaty.

In 1968, Sakharov fell out of the Kremlin’s good graces after publishing an essay in which he described anti-ballistic missile defense as a major threat of nuclear war. In the Soviet nuclear scientist’s opinion, an arms race in the new technology would increase the likelihood of nuclear war. After publishing this manifesto, Sakharov was banned from conducting military-related research. In response, Sakharov assumed the role of an open dissident in Moscow and continued to write anti-nuclear weapon essays and support human rights movements.

In 1975, Sakharov was awarded the Nobel Peace Prize, with the Norwegian Nobel Committee calling him “a spokesman for the conscience of mankind,” adding that “in a convincing manner Sakharov has emphasized that Man’s inviolable rights provide the only safe foundation for genuine and enduring international cooperation.” Of course, Sakharov was not allowed to leave the Soviet Union in order to receive his prize.

The last straw was when Sakharov staged a protest in 1980 against the Soviet intervention in Afghanistan. He was arrested and exiled to the city of Gorky (now Nizhny Novgorod), which was completely off-limits to foreigners. Sakharov spent the rest of his days in an apartment under police surveillance until one day in 1986, when he got a call from Mikhail Gorbachev telling him that he and his wife could return to Moscow. Sakharov died in December 1989. The Tsar Bomba, his own brainchild, was dead long before that, thanks partly to him. 

Why did plague doctors wear that weird beaked costume?

The COVID-19 pandemic has become one of the worst health crises in a century, with over five million killed so far by the coronavirus. But, let’s face it: we’ve seen much worse. The Black Death, for instance, loomed like a specter of pestilence for centuries, rapidly spreading, then subsiding, only to return in yet another wave. At one point, the plague killed one-third of Europe’s population in only a few years.

In Medieval times, you knew things were serious when the plague doctor, immediately recognizable by his beaked mask, came to town. If you thought hazmat suits were scary, the costumes worn by these plague doctors elicited a whole new level of dread mixed with mystery, the kind that would be at home in a David Lynch film.

The plague doctor uniform: was this the first example of personal protective equipment?

Although the Black Plague reached Sicilian ports in the late 1340s, the plague doctors didn’t start wearing their now-iconic fashion until the 17th century. The design of the costume is credited to Charles de Lorme, the personal physician of King Louis XIII of France and the wealthy Médici family. It is believed that de Lorme introduced the uniform in 1619.

The outfit consisted of a long coat covered in scented wax, which extended all the way down to the ankles, where the feet were dressed in boots made of goat leather. Underneath the coat, the plague doctor wore a tucked-in, short-sleeved blouse, as well as gloves and a hat made of the same goat leather. But the defining feature of the outfit is the long-beaked mask, stuffed with powerfully scented herbs and spices. The costume was completed by a pair of round glass spectacles tethered by leather bands that also held the mask tightly to the doctor’s head, and by a long wooden stick, which the plague doctor used to examine patients and to ward off desperate, dangerous plague-stricken people.

In order to understand the motivations behind designing such a peculiar uniform, we need some context. The consensus among the most educated physicians of those times was that the plague, like many other epidemics, was caused by miasma — a noxious bad air. Sweet and pungent odors were thought to cancel out the miasma in plague-stricken areas and protect from disease. Nosegays, incense, and other perfumes were sprayed furiously when plague knocked on the door.

Clothing Against Death (1656) by Gerhart Altzenbach. Credit: Public domain.

The first illustration of a plague doctor’s uniform, completed by Gerhart Altzenbach in the mid-1600s, not only features the entire costume but also explains how each part was intended to protect the wearer from the plague. The six-inch beak was supposed to act as a face mask that filtered out the bad air. It was made that long in order to accommodate the herbs packed toward its tip, with only two small holes for ventilation. Often, the herbs (typically a mixture of more than 50 plants and substances such as cinnamon, myrrh, viper flesh powder, and honey) were burned before the doctor put on his mask.

However, the fact that the uniform was meant to be worn tightly over the entire body, leaving no skin exposed, suggests that physicians were at least somewhat aware that the plague spread through close proximity to the infected.

Unfortunately for both plague doctors and their patients, the uniform wasn’t very effective and mostly served to terrorize people.

They couldn’t have known it at the time, but the plague is actually caused by a species of bacteria called Yersinia pestis, which is transmitted from animals like rats to humans through flea bites. You could also catch the plague easily if you came in contact with contaminated fluid or tissue or inhaled droplets from sneezing or coughing patients that had pneumonic plague. So perhaps the costume offered some degree of protection, but without any proper protocols for hygiene and disinfection, the protection was likely marginal at best.

Not only was their outfit ineffective at combating the plague, so too were the plague doctors’ strategies, even by the standards of the time.

Some of the “cures” in a plague doctor’s repertoire included onions, herbs, and even chopped-up snakes to be rubbed on the patient’s boils. Sometimes a pigeon was sacrificed and its bloody carcass rubbed all over the infected body. Others covered blisters with human excrement.

Since the miasma theory was in fashion, almost every house call involved fumigating the house with herbs to purify the air. If the proper odors were not available, people were advised to sit by a fire or even next to a sewer, hoping one smell would drive out the other.

Baths were also prescribed, though not in the most hygienic conditions: bathing was to be done with vinegar and rosewater, or, alternatively, in one’s own urine.

But the worst procedure was bursting the buboes, the painful, swollen lymph nodes that form in the armpits, groin, upper thighs, and neck of people infected with the plague, a practice that did nothing to help the patient. Bloodletting was a common (and highly ineffective) medical procedure of the era, employed against a wide range of illnesses, but lancing the festering buboes only helped spread the infection to other people. Some patients were even told to drink the pus of lanced buboes.

The satirical engraving by Paulus Fürst, perhaps the most famous illustration of a plague doctor. Credit: Wellcome Collection.

The ineffectiveness of plague doctors and their wacky costumes did not go unnoticed by their contemporaries. In the same year that the first illustration of a plague doctor costume was released, another engraver, Paulus Fürst, released a satirical version in which he referred to the plague doctors as ‘Doctor Schnabel von Rom’ (‘Doctor Beaky from Rome’). In one of the sentences on the engraving, Fürst suggested that the doctor ‘does nothing but terrify people and take money from the dead and dying.’

Indeed, most of the time the plague doctors weren’t even actual physicians. Instead, they were usually unqualified, poor individuals who didn’t have much to lose when they were hired by municipalities to treat plague patients. As you might imagine, competent and successful doctors weren’t too keen on taking a job that killed so many of those who did: of the 18 plague doctors working in Venice at one point during the 14th century, five died and 12 fled.

Not all plague doctors were motivated by good intentions either. A plague doctor was not only tasked with treating and quarantining the ill, but also with assisting in the occasional autopsy and witnessing the wills of the dead and dying. This gave them a lot of power, and it was not uncommon for a plague doctor to take advantage of his position and run off with a patient’s money and valuables.

Before COVID-19, plague doctors were seen as an oddity of history and a great Halloween character. But the harsh reality of the pandemic is perhaps making us more sympathetic toward these first responders who risked their lives during highly uncertain times. And although most of their medical interventions were not based on science and did more harm than good, the plague doctors were on to something with their head-to-toe uniform. Today, we know for a fact that hazmat suits and even surgical masks can greatly diminish one’s risk of contracting an infectious disease. If it took a very sinister suit to kick things off for personal protective equipment, we should be grateful for the plague doctors, I guess.

These poignant cartoons sum up exactly how we feel about COP26

The first week of the COP26 climate change conference in the UK is almost over. Governments have made dozens of ambitious pledges to tackle the climate crisis, but how they will actually deliver on these promises remains very unclear. The hypocrisy, denial, and slow pace of progress have frustrated many, including the cartoonists whose work is being featured for participants at COP26.

The summit is being held in downtown Glasgow. Almost 40,000 people have registered to participate in COP26, which is regarded as a watershed moment and probably the most consequential climate summit since COP21 in Paris in 2015, when the Paris Agreement on climate was signed.

Alongside the official negotiations by government representatives, COP26 showcases initiatives from civil society organizations, innovators, and artists, which is where the cartoons come in. The “Cartoon Gallery” shows 60 cartoons by artists from all around the world, using humor to express a shared frustration with the lack of climate ambition.

A lot of the negotiating is “same old, same old”: the same promises we’ve been hearing for years, with little concrete action.

The gallery was created by the Climate Centre, an organization that helps the Red Cross and Red Crescent Movement reduce the impacts of climate change on vulnerable people. In recent decades, extreme weather events have become more frequent, hitting hardest the poor countries that can least afford to deal with them.

Many regard the ongoing climate crisis as a health crisis as well.

Our house is on fire, as Swedish activist Greta Thunberg has said.

The Climate Centre explained the important role humor can play, especially at a meeting such as COP26, where political lingo typically takes center stage while real action stays somewhere in the background.

“Humor, like humanitarian work, is about the gap between what is and what should be. It flourishes in the midst of our absurdities, contradictions, tensions, and denial. Cartoonists can help us notice, then confront, what is unacceptable yet accepted.”

For anyone looking to make sense of what’s going on at this mammoth event, it can be daunting even to follow all the announcements, let alone get a sense of whether there’s any substance to them. Perhaps this is why these cartoons hit a nerve so well: they make a direct and clear point, in contrast with the ambiguity at the summit.

Heatwaves are coming in harder and harder — and you can’t hide from them in your own home.

Ultimately, many politicians seem determined to simply cover their eyes and pretend that climate change will go away. Unfortunately, it won’t. It will affect all of us, whether we believe in it or not. Unlike the dinosaurs, which were wiped out by a meteorite, we have a choice, and we can protect ourselves. Whether we’ll actually choose to do so is a different matter.

Hallucinogens’ long trip from anesthetics to party drugs to antidepressants

Routine Surgery Gone Wrong

The year is 1958 and a patient is undergoing a routine surgical procedure. There’s only one catch: the patient has agreed to the experimental use of a new anesthetic that has been shown to be safe in animals. Everything goes smoothly; the surgery is a success. Seeing that the patient’s vitals are normal by the end of the operation, the surgeon breathes a sigh of relief. But as the patient wakes up, something is clearly wrong.

The patient returns to consciousness kicking and flailing. He is extremely agitated and confused and cannot be consoled despite the doctor’s desperate attempts at reassurance. He complains of disturbing symptoms: extreme anxiety, vision difficulties, dysphoria, frightening hallucinations, and detachment from reality. He reports total numbness and can’t feel any of his limbs.

The doctors quickly link these symptoms to the new anesthetic they had been using.

The experimental sedative used during this surgery was phencyclidine, or PCP. After numerous reports of similarly alarming cases, along with a sharp rise in recreational abuse of PCP, clinical research on the substance was discontinued and PCP was classified as a Schedule II drug.

Fast forward to March 2019, and a drug derived from ketamine, itself an analog of PCP, is approved by the FDA for the treatment of depression. So what happened?

K is for Ketamine

Ketamine, an analog of PCP, is a powerful sedative, pain reliever, and tranquilizer that produces mild euphoria and enhanced sensory perception. It is also a dissociative hallucinogen, meaning it can cause a sense of detachment from reality.

Being a hallucinogen, ketamine carries a certain stigma. This stigma largely originated when a multitude of mind-altering drugs became scheduled substances under the Controlled Substances Act of 1970, which labeled them as having a high potential for abuse. Basically, when the “War on Drugs” started in the US in the 1970s, ketamine and PCP were targeted.

It’s definitely true that ketamine has a potential for abuse; all drugs have side effects, some worse than others. Dosage can also be a big problem with drugs like ketamine. The dose makes the poison, and some drugs have a much narrower margin between their therapeutic doses and their harmful ones. Furthermore, high doses and prolonged use of any psychotropic substance can lead to complications. All in all, ketamine can be a dangerous drug. But that’s not to say that all applications of psychedelics are harmful, and it certainly shouldn’t discredit their potential therapeutic benefits.

Ketamine has a fascinating history that begins with PCP. Ketamine is a less potent derivative of PCP, which is entirely man-made. PCP was first synthesized in 1926 and, following preclinical trials in monkeys, was developed as an anesthetic for humans in the 1950s. Higher doses were needed to properly sedate humans than monkeys, and while PCP worked relatively well as an anesthetic, problems arose when people began experiencing troubling side effects upon regaining consciousness after surgery.

In no time, PCP was being abused by increasingly large numbers of people. Chronic, high doses of PCP were causing schizophrenia-like impairments: paranoia, disordered thinking, memory issues, erratic speech and behavior, and so on. PCP was clearly a dangerous drug and was therefore prohibited for human use shortly thereafter.

In an attempt to minimize these unpleasant side effects, chemists derived ketamine, which was thought to be safer because it is roughly ten times less potent than PCP. In its very first clinical study, ketamine was also used as a general sedative for surgery. Unlike PCP, it exhibited few side effects, and it acted as a powerful painkiller without a high risk of suppressing breathing. Ketamine was so successful that it is still used in operating rooms today.

However, ketamine also has a high potential for abuse, so it too eventually became a controlled substance, and for decades almost all research into the therapeutic use of psychedelics was halted. That is, until the past couple of decades, when scientists reopened new avenues of research on numerous mind-altering drugs.

Antidepressant Effects

Image credits: Adam Nieścioruk.

The 21st century has brought major breakthroughs in our understanding of ketamine. Some of ketamine’s most exciting applications have involved the treatment of treatment-resistant depression. Treatment-resistant depression refers to patients with major depressive disorder who do not respond to several standard interventions, including antidepressants and talk therapy.

Multiple groundbreaking studies have found that as little as a single dose of ketamine can have rapid antidepressant effects lasting up to two weeks or longer. It takes mere hours to alleviate depressive symptoms, as opposed to the days or weeks typical antidepressant medications need to stabilize mood, if they ever do; only about one-third of people achieve remission on typical antidepressants. Especially for patients at risk of taking their own lives, this can truly be a lifesaving intervention.

These results aren’t based solely on subjective reports of mood. There are actual, measurable alterations in brain chemistry to further support these findings. Ketamine significantly increases the expression of brain-derived neurotrophic factor (BDNF), a protein that essentially supports brain cells and is typically reduced in patients with major depression.

In terms of efficacy, that’s pretty incredible. So incredible, in fact, that ketamine is one of the first treatments in decades to show such pronounced, rapid relief of depressive symptoms. Before ketamine, electroconvulsive therapy (ECT) was the go-to treatment for medication-resistant depression, and that technique was introduced nearly a century ago.

After years of research, a ketamine-derived nasal spray was approved by the FDA in 2019 as a novel treatment for depression. The spray is administered in very low doses, thus avoiding complications experienced during the first round of clinical trials in the 1960s.

It’s important to note that ketamine should only be used in strictly monitored clinical settings because it does have a potential for abuse, and there is still so much that we don’t know.

The Future of Psychedelics in Medicine

Interestingly, ketamine isn’t the only hallucinogen with therapeutic potential. All sorts of hallucinogens have made their way back to the forefront of research. For instance, psilocybin, the active compound in “magic mushrooms,” also exhibits antidepressant properties. In June 2021, a study published in JAMA Psychiatry reported similarly powerful, rapid antidepressant effects in a clinical trial using psilocybin to treat major depressive disorder. Institutions like Johns Hopkins have even opened laboratories dedicated specifically to such research. These findings challenge much of what we thought we knew about this class of drugs.

Patients battling major depression, post-traumatic stress disorder, and other mood disorders are in desperate need of new, effective treatments, and ketamine is just one of many steps in the right direction. Even more exciting, the therapeutic promise of ketamine and other hallucinogens doesn’t stop there: they are also being investigated for a host of other conditions, including cancer, fibromyalgia, migraine, obsessive-compulsive disorder, social anxiety disorder, anorexia, and more.

Throughout the years, ketamine transformed from being an anesthetic to a party drug to an antidepressant, and yet the molecular makeup itself didn’t change at all — just our perception and understanding of it. There is vast terrain yet to be covered before fully appreciating the complex mechanisms of psychedelics, but recent studies have provided unprecedented leaps in knowledge. Though hallucinogens still hold rightfully frightening connotations, when it comes to finding relief for patients suffering from debilitating conditions, we mustn’t allow fear and ignorance to impede further investigation that might just lead to revolutionary treatment for millions of people.

As physicist and chemist Marie Curie so eloquently put it, “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.”