
Eunice Foote: the first person to measure the impact of carbon dioxide on climate

We often think of climate science as something that started only recently. The truth is that, like almost all fields of science, it started a long time ago. Advancing science is often a slow and tedious process, and climate science is no exception. From the discovery of carbon dioxide to the most sophisticated climate models, it took a long time to get where we are today.

Unfortunately, many scientists who played an important role in this climate journey are not given the credit they deserve. Take, for instance, Eunice Newton Foote.

Eunice Foote. Credits: Wikimedia Commons.

Foote was born in 1819 in Connecticut, USA. She spent her childhood in New York and later attended classes at the Troy Female Seminary, a higher education institution just for women. She married Elisha Foote in 1841, and the couple was active in the suffragist and abolitionist movements. They participated in the “Women’s Rights Convention” and signed the “Declaration of Sentiments” in 1848.

Eunice was also an inventor and an “amateur” scientist, a brave endeavor in a time when women were scarcely allowed to participate in science. However, one of her discoveries turned out to be instrumental in the field of climate science.

Why do we need jackets in the mountains?

In 1856, Eunice conducted an experiment to explain why air at low altitudes is warmer than air in the mountains. Back then, scientists were not sure why, so she decided to test it herself. She published her results in the American Journal of Science and Arts.

“Circumstances affecting the heat of the Sun’s rays”. American Journal of Science and Arts. Credits: Wikimedia Commons.

Foote placed two cylinders, each with a thermometer, first under the Sun and later in the shade. She made sure both cylinders started at the same temperature. After three minutes, she measured the temperature in both situations.

She noticed that rarefied air didn’t heat up as much as dense air, which explains the difference between mountaintops and valleys. Next, she used the same apparatus to compare the influence of moisture. To make sure one cylinder was dry enough, she added calcium chloride to it. The cylinder with moist air ended up much warmer than the dry one. This was a first step toward explaining the processes at work in the atmosphere: water vapor is one of the greenhouse gases that sustain life on Earth.

But that wasn’t all. Foote went further and studied the effect of carbon dioxide, which had a pronounced heating effect on the air. Eunice didn’t remark on it at the time, but in her measurements, the cylinder with water vapor warmed 6% more, while the carbon dioxide cylinder warmed 9% more.

Surprisingly, Eunice’s concluding paragraphs came with a simple deduction on how the atmosphere would respond to an increase in CO2. She predicted that adding more gas would lead to an increase in the temperature — which is pretty much what we know to be true now. In addition, she talked about the effect of carbon dioxide in the geological past, as scientists were already uncovering evidence that Earth’s climate was different back then.

We now know that during different geologic periods of the Earth, the climate was significantly warmer or colder. In fact, between the Permian and Triassic periods, the CO2 concentration was nearly 5 times higher than today’s, causing a 6ºC (10.8ºF) temperature increase.


Eunice Foote’s discovery made it to Scientific American in 1856, after it was presented by Joseph Henry at the Eighth Annual Meeting of the American Association for the Advancement of Science (AAAS). Henry also reported her findings in the New-York Daily Tribune but stated they were not significant. Her study was mentioned in two European reports, and her name was largely ignored for over 100 years — until she finally received credit for her observations in 2011.

The credit for the discovery used to be given to John Tyndall, an Irish physicist. He published his findings in 1861, explaining how much radiation (heat) was absorbed and which kind of radiation it was: infrared. Tyndall was an “official” scientist: he had a doctorate and recognition from previous work, everything necessary to be respected.

But a few things stand out regarding Tyndall and Foote.

Atmospheric carbon dioxide concentrations and global annual average temperatures (in C) over the years 1880 to 2009. Credits: NOAA/NCDC

Dr Tyndall was part of the editorial team of a magazine that reprinted Foote’s work. It is possible he didn’t actually read the paper, or that he ignored it because it was by an American scientist (a common practice among European scientists back then) or because of her gender. But it’s possible that he drew some inspiration from it as well — without citing it.

It should be said that Tyndall’s work was more advanced and precise. He had better resources and he was close to the newest discoveries in physics that could support his hypothesis. But the question of why Foote’s work took so long to be credited is hard to answer without going into misogyny.

Today, whenever a finding is published, even if made with a low-budget apparatus, the scientist responsible for the next advance on the topic needs to cite their colleague. A good example involves another important discovery by another female scientist. Edwin Hubble used Henrietta Swan Leavitt’s discovery of the relationship between the brightness and period of Cepheid variables. Her idea was part of the method to measure galaxies’ velocities and distances that later proved the universe is expanding. Hubble said she deserved a share of a Nobel Prize for her work; unfortunately, she had already died by the time a nomination was considered.

It’s unfortunate that researchers like Foote don’t receive the recognition they deserve, but it’s encouraging that the scientific community is starting to finally recognize some of these pioneers. There’s plenty of work still left to be done.

International Women’s Day: Ten Women in Science Who Aren’t Marie Curie

As the world celebrates International Women’s Day, it’s important to remember what this date stands for: equal rights between men and women. Women’s Day is tightly connected to the suffragette movement, in which women in many parts of the world fought and suffered for their right to vote. It was on March 8, 1917, that women in Russia gained the right to vote, and in 1975 the United Nations also adopted the day. Unfortunately, we still have a long way to go before we can talk about gender equality in the world and, sadly, science is no exception. When it comes to female scientists, one name always dominates the conversation: Marie Curie. Curie’s brilliance and impact are undeniable, but there are many more women who left a strong mark on science. Here, we will celebrate just a few of them, some of the names we should remember for their remarkable contributions.


Hypatia

Hypatia inspired numerous artists, scientists, and scholars. Here: The play Hypatia, performed at the Haymarket Theatre in January 1893, based on the novel by Charles Kingsley.

Any discussion about women in science should start with Hypatia — the head of the Neoplatonic school in ancient Alexandria, where she taught philosophy and astronomy. Hypatia was praised as a universal genius, though for most of her life she focused more on teaching than on innovating. Also an accomplished mathematician, Hypatia was an advisor to Orestes, the Roman prefect of Alexandria, and is the first female scientist whose life was reasonably well recorded.

Hypatia lived through a period of political turmoil, with Orestes fighting for power with Cyril, the Christian bishop of Alexandria. Although she was a “pagan” herself, Hypatia was tolerant of Christian students and hoped to prove that Neoplatonism and Christianity could coexist peacefully and cooperatively. Sadly, this wasn’t the case. She was brutally murdered by a mob of Christian monks known as the parabalani, an act which many historians today believe was orchestrated by Cyril (or at the very least, that Cyril had some involvement in it). Her murder fueled hatred against Christians and, unfortunately, her legacy was completely tarnished and turned against what she had hoped to achieve.

Mary Anning

Portrait of Mary Anning with her dog Tray and the Golden Cap outcrop in the background, Natural History Museum, London.

Moving a bit closer to our age, Mary Anning was one of the most significant figures in paleontology. An English fossil collector, Anning was unable to join the Geological Society of London and never fully participated in the scientific community of 19th-century Britain, which was dominated by Anglican gentlemen. This weighed on her tremendously, and she struggled financially for much of her life. Despite her significant contributions, it was virtually impossible for her to publish any scientific papers. The only scientific writing of hers published in her lifetime appeared in the Magazine of Natural History in 1839: an extract from a letter that Anning had written to the magazine’s editor questioning one of its claims. “The world has used me so unkindly, I fear it has made me suspicious of everyone,” she wrote in a letter.

However, she was consulted by many of the time’s leading scientists on issues of anatomy and fossil collection. Her observations played a key role in the discovery that coprolites are fossilized faeces, and she was also the first to find a complete ichthyosaur skeleton — one of the most emblematic marine reptiles of the dinosaur age — as well as two complete plesiosaur skeletons, the first pterosaur skeleton located outside Germany, and important fish fossils. Her work also paved the way for our understanding of extinction, and her most impressive finds are hosted at London’s Natural History Museum.

Ichthyosaur and Plesiosaur by Édouard Riou, 1863.

Ada Lovelace

Ada Lovelace was one of the most interesting personalities of the 19th century. The daughter of the famous and controversial Lord Byron, Ada inherited her father’s writing gift, but her most important legacy was in a completely different area: mathematics. She is often regarded as the first to recognize the full potential of a “computing machine” and as the first computer programmer, chiefly for her work with Charles Babbage, who is regarded as the father of the computer.

Watercolor portrait of Ada King, Countess of Lovelace (Ada Lovelace).

But Ada Lovelace saw something in computers that Babbage didn’t — way ahead of her time, she glimpsed the true potential that computers could offer. Historian of computing and Babbage specialist Doron Swade explains:

“Ada saw something that Babbage in some sense failed to see. In Babbage’s world his engines were bound by number…What Lovelace saw—what Ada Byron saw—was that number could represent entities other than quantity. So once you had a machine for manipulating numbers, if those numbers represented other things, letters, musical notes, then the machine could manipulate symbols of which number was one instance, according to rules. It is this fundamental transition from a machine which is a number cruncher to a machine for manipulating symbols according to rules that is the fundamental transition from calculation to computation—to general-purpose computation [..]”.

Example of a computing machine developed by Babbage and Lovelace. Image credits: Jitze Couperus from Los Altos Hills, California, USA.

Unfortunately, the life of Ada Lovelace was cut short, at 36, by uterine cancer, and more than a century passed before her vision could be realized.

Henrietta Swan Leavitt

If you like astronomy, the odds are that you’ve heard the name Hubble — but the same can’t be said for Henrietta Swan Leavitt, even though it should be. Her scientific work identified 1,777 variable stars and established that the brighter ones had longer periods, a discovery known as the “period–luminosity relationship” or “Leavitt’s law.” Her published work paved the way for the discoveries of Edwin Hubble, the renowned American astronomer whose findings changed our understanding of the universe forever. Although Henrietta received little recognition in her lifetime, Hubble often said that Leavitt deserved a Nobel for her work.

Henrietta Swan Leavitt working in her office. Image from the American Institute of Physics, Emilio Segrè Visual Archives.

In 1892, she graduated from Harvard University’s Radcliffe College, having taken only one course in astronomy. She gathered credits toward a graduate degree in astronomy for work completed at the Harvard College Observatory, though she never finished the degree. Instead, she began working as one of the human “computers,” measuring and cataloguing the brightness of stars. It was her work that first allowed astronomers to measure the distance between the Earth and faraway galaxies, ultimately allowing Hubble to figure out that the universe is expanding. The Swedish Academy of Sciences tried to nominate her for the Nobel Prize in 1924, only to learn that she had died of cancer three years earlier.

Inge Lehmann

Image courtesy The Royal Library, National Library of Denmark, and University of Copenhagen University Library.

Before Lehmann, researchers believed the Earth’s core to be a single molten sphere. However, observations of seismic waves from earthquakes were inconsistent with this idea, and it was Lehmann who first solved this conundrum in a 1936 paper. She showed that the Earth has a solid inner core inside a molten outer core. Within a few years, most seismologists adopted her view, even though the theory wasn’t proven correct by computer calculations until 1971.

Unlike most of her predecessors, Lehmann was allowed to join scientific organizations, serving as Chair of the Danish Geophysical Society in 1940 and again in 1944. However, she was significantly hampered in her work and in maintaining international contacts during the German occupation of Denmark in World War II. She continued her seismological studies, going on to discover another seismic discontinuity, which lies at depths between 190 and 250 km and was named for her: the Lehmann discontinuity. In praise of her work, renowned geophysicist Francis Birch noted that the “Lehmann discontinuity was discovered through exacting scrutiny of seismic records by a master of a black art for which no amount of computerization is likely to be a complete substitute.”

Rosalind Franklin

Image credits: Robin Stott.

Rosalind Franklin was an English chemist and X-ray crystallographer who made contributions to the understanding of the molecular structures of DNA (deoxyribonucleic acid), RNA (ribonucleic acid), viruses, coal, and graphite. While her work on the latter was largely appreciated during her lifetime, her work on DNA was extremely controversial, only being truly recognized after her lifetime.

In 1953, the work she did on DNA allowed Watson and Crick to conceive their model of the structure of DNA. Essentially, her work was the backbone of the study, but the two didn’t grant her any recognition, in an academic context largely dominated by sexism. Franklin had first presented important contributions two years earlier, but due to Watson’s limited understanding of chemistry, he failed to grasp the crucial information. Franklin later published a more thorough report on her work, which made its way into the hands of Watson and Crick, even though it was “not expected to reach outside eyes”.

There is no doubt that Franklin’s experimental data were used by Crick and Watson to build their model of DNA, even though they failed to cite her even once (in fact, Watson’s reviews of Franklin were often negative). Ironically, Watson and Crick cited no experimental data at all in support of their model. In a separate publication in the same issue of Nature, they showed a DNA X-ray image which, in fact, served as the principal evidence.

Anne McLaren

Image via Wikipedia.

Zoologist Anne McLaren was one of the pioneers of modern genetics, her work being instrumental to the development of in vitro fertilization. She experimented with culturing mouse eggs and was the first person to successfully grow mouse embryos outside of the womb. McLaren was also involved in the many moral discussions around embryo research, which led her to help shape the UK’s Human Fertilisation and Embryology Act of 1990. This work remains greatly important for policy regarding abortion, and also offers guidelines for the process. She authored over 300 papers over the course of her career.

She received many honours for her contributions to science and is widely regarded as one of the most prolific biologists of modern times. She also became the first female officer of the Royal Society in the institution’s 331-year history.

Vera Rubin

Vera Rubin with John Glenn. Image credits: Jeremy Keith.

Vera Rubin was a pioneering astronomer who first uncovered the discrepancy between the predicted angular motion of galaxies and the observed motion — the so-called galaxy rotation problem. Although her work was received with great skepticism, it was confirmed time and time again, becoming one of the key pieces of evidence for the existence of dark matter.

Ironically, Rubin wanted to avoid controversial areas of astronomy such as quasars, and focused on the rotation of galaxies. She showed that spiral galaxies rotate quickly enough that they should fly apart if the gravity of their constituent stars was all that was holding them together. So, she inferred the presence of something else — something which today, we call dark matter. Rubin’s calculations showed that galaxies must contain at least five to ten times as much dark matter as ordinary matter. Rubin spent her life advocating for women in science and was a mentor for aspiring female astronomers.

Sally Ride

Image credits: U.S. Information Agency.

Sally Ride was the third woman in outer space, after USSR cosmonauts Valentina Tereshkova (1963) and Svetlana Savitskaya (1982). However, her main focus was astrophysics, primarily researching nonlinear optics and Thomson scattering. She had two bachelor’s degrees: literature, because Shakespeare intrigued her, and physics, because lasers fascinated her. She was also in excellent physical shape, being a nationally ranked tennis player who flirted with turning pro, and was essentially tailor-made to be an astronaut — and yet, the subject of the media attention was always her gender, not her accomplishments. At press conferences, she would get questions like “Will the flight affect your reproductive organs?” and “Do you weep when things go wrong on the job?”, which she would answer patiently and laconically.

After flying twice on the Space Shuttle Challenger, she left NASA in 1987, having spent 343 hours in space. She wrote and co-wrote several science books aimed at children, encouraging them to pursue science. She also participated in the Gravity Probe B (GP-B) project, which provided solid evidence to support Einstein’s general theory of relativity.

Jane Goodall

Image credits: U.S. Department of State.

Most biologists consider Jane Goodall to be the world’s foremost expert on chimpanzees, and for good reason. Goodall has dedicated her life to studying chimps, having spent over 55 years observing the social and family interactions of wild chimpanzees.

Since she was a child, Goodall was fascinated by chimps, and dedicated much of her early life to studying them. She first went to Gombe Stream National Park, Tanzania, in 1960, after becoming one of the very few people allowed to study for a PhD without first having obtained a BA or BSc. Without any supervisors directing her research, Goodall observed things that strict scientific doctrines may have overlooked, which led to stunning discoveries. She observed behaviors such as hugs, kisses, pats on the back, and even tickling — actions we would consider strictly “human.” She was the first to ever document tool-making in non-humans and, overall, showed that many attributes we considered uniquely human were shared by chimps. She has also worked extensively on conservation and animal welfare.

This article doesn’t intend to be a thorough history of women in science, nor does it claim to mention all the noteworthy ones and the unsung heroes. It is meant to be an appreciation of the invaluable contributions women have made to science and the hardships they had — and still have — to overcome to do so.

What color is a mirror? It’s not a trick question

Credit: Pixabay.

When looking into a mirror, you can see yourself or the mirror’s surroundings in the reflection. But what is a mirror’s true color? It’s an intriguing question for sure since answering it requires us to delve into some fascinating optical physics.

If you answered ‘silver’ or ‘no color’, you’re wrong. The real color of a mirror is white with a faint green tint.

The discussion itself is more nuanced, though. After all, a t-shirt can also be white with a green tint but that doesn’t mean you can use it in a makeup kit.

The many faces of reflected light

We perceive the contour and color of objects due to light bouncing off them that hits our retina. The brain then reconstructs information from the retina — in the form of electrical signals — into an image, allowing us to see.

Objects are initially hit by white light, which is basically colorless daylight. This contains all the wavelengths of the visible spectrum at equal intensity. Some of these wavelengths are absorbed, while others are reflected. So it is these reflected visible-spectrum wavelengths that we ultimately perceive as color.

When an object absorbs all visible wavelengths, we perceive it as black, while an object that reflects all visible wavelengths appears white to our eyes. In practice, there is no object that absorbs or reflects 100% of incoming light — this is important when discerning the true color of a mirror.

Why isn’t a mirror plain white?

Not all reflections are the same. The reflection of light and other forms of electromagnetic radiation can be categorized into two distinct types of reflection. Specular reflection is light reflected from a smooth surface at a definite angle, whereas diffuse reflection is produced by rough surfaces that reflect light in all directions.

Credit: Olympus Lifescience.

A simple example of both types using water is to observe a pool of water. When the water is calm, incident light is reflected in an orderly manner thereby producing a clear image of the scenery surrounding the pool. But if the water is disturbed by a rock, waves disrupt the reflection by scattering the reflected light in all directions, erasing the image of the scenery.

Credit: Olympus Lifescience.

Mirrors employ specular reflection. When visible white light hits the surface of a mirror at an incident angle, it is reflected back into space at a reflected angle equal to the incident angle. The light that hits a mirror is not separated into its component colors because it is not being “bent” or refracted, so all wavelengths are reflected at equal angles. The result is an image of the source of light. And because reflection reverses the light’s direction of travel, the product is a mirror image.
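The reflection law described above can also be written as a small formula: a ray with direction d bouncing off a surface with unit normal n leaves along r = d − 2(d·n)n, which automatically makes the reflected angle equal to the incident angle. Here is a minimal illustrative sketch (the function name and example vectors are our own, not from any optics library):

```python
# Specular reflection of a direction vector: r = d - 2*(d . n)*n
# Assumes n is a unit-length surface normal. Purely illustrative.

def reflect(d, n):
    """Reflect direction vector d off a surface with unit normal n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray heading down at 45 degrees toward a horizontal mirror
# leaves at 45 degrees on the other side of the normal:
incoming = (1.0, -1.0, 0.0)
normal = (0.0, 1.0, 0.0)   # mirror's normal points straight up
print(reflect(incoming, normal))  # → (1.0, 1.0, 0.0)
```

Note that the formula treats every wavelength identically, which is exactly why a mirror reproduces an image rather than splitting light into colors.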

However, mirrors aren’t perfectly white because the material they’re made from is imperfect itself. Modern mirrors are made by silvering, or spraying a thin layer of silver or aluminum onto the back of a sheet of glass. The silica glass substrate reflects a bit more green light than other wavelengths, giving the reflected mirror image a greenish hue.

This greenish tint is normally imperceptible, but it is truly there. You can see it in action by placing two perfectly aligned mirrors facing each other so the reflected light bounces back and forth between them. This phenomenon is known as a “mirror tunnel” or “infinity mirror.” According to a study performed by physicists in 2004, “the color of objects becomes darker and greener the deeper we look into the mirror tunnel.” The physicists found that mirrors reflect best at wavelengths between 495 and 570 nanometers, which corresponds to green.

So, in reality, mirrors are actually white with a tiny tint of green.

How Russia already lost the information war — and Ukraine won it

How is it that Russia’s cyber-force, the alleged masters of disinformation and propaganda, lost the information war, while Ukraine has been so successful at spreading its message to the world?

Of course, being on the right side of history and not invading and bombing a country helps, but we’ve seen Russia (and Putin) spin events to their advantage before, or at least sow some discord and confuse public discourse. The well-established approach of maskirovka has been used to create deception and manipulate public discourse for decades, up until the Russian annexation of Crimea in 2014. So how is it that Putin is now losing so badly at his own game?

Let’s have a look at some of the reasons.

Preemption and pre-bunking

In previous years, Russian disinformation was largely met with little initial resistance. And we’ve learned recently that attempting to debunk disinformation after the fact is often ineffective. So instead, both organized and ad-hoc actors moved to pre-bunk the disinformation.

This prebunking started in earnest in January, when it became clear that Russia was amassing an invasion force around Ukraine. In the US, the Biden administration became very vocal about this, and its voice was amplified by UK and EU intelligence. Russia denied any plans of an invasion and tried to dismiss the warnings as political squabbling; it even ridiculed the idea that Russia would invade. When the invasion happened, those denials backfired spectacularly.

Official intelligence voices were also backed by open-source intelligence (OSINT). Russia tried to play the victim, but overwhelmingly, its reports were shut down quickly and factually — because the evidence had already been gathered.

This grassroots information, coupled with the fact that the US and UK governments were transparent about their intelligence warnings, made it clear what was going on.

Satellite data

All of this was greatly facilitated by the fact that satellite data is now available with relative ease. Nowadays, it’s not just military satellites that can offer this type of data — civilian satellites can also provide valuable information. The satellites showed how Russia was amassing troops, how they were moving in, and pretty much all the things the Russians tried to deny.

Journalists all around the world used Maxar satellite data to document the movement of Russian troops. It was kind of hard to deny what was going on when the eye from the sky was keeping a close watch.

Grassroots imagery

The intelligence reports and the bird’s-eye view provided by satellites were coupled with grassroots documentation of the movements of Russian troops.

The reports flowed in from residents, but also from journalists who braved the invasion and stayed in place to document what was going on. They covered not just the invasion itself, but its many logistical flaws as well. The world became aware that the Russian military faced fuel and food shortages, and that fuel-less tanks were sometimes simply abandoned.

It wasn’t just in English, either. The international journalistic community got together to produce a coherent message (or as coherent as possible given the circumstances) about what was going on.

Technology is not a friend of the Russian invasion either — everyone has a smartphone nowadays and can film what’s going on. Russian authorities have also been unable to disconnect Ukraine from the internet, which enabled the world to see what was going on through the lens of the Ukrainian people.

With the invasion documented at all levels, the world could have a clear view of what was going on.

Russia made mistakes

The flow of information was also helped by the fact that Russia didn’t deploy its propaganda machine as loudly as it could have. According to reports, Russian leaders were banking on a quick Blitzkrieg-type win, in which they could sweep things under the rug at the beginning and then push their narrative. But as the conflict dragged on, they wasted several days and lost control of the narrative.

Russia also allowed Ukraine to showcase its military victories without pushing its own military successes — because yet again, Russia initially wanted to smooth the whole thing over as quickly as possible. While Ukraine showed its drones bombing Russian tanks and its people bravely holding off their invaders, Russia kept quiet. After all, Russian people at home aren’t even allowed to know there’s a war going on.


Russia also made mistakes in its attempts to sow disinformation, discrediting itself with a few small but blatant errors — this made Russian leaders seem even more disingenuous.

Civilian damage

Without a doubt, few things strike fear and empathy into people like bombing civilian buildings does. It’s something that everyone (hopefully) agrees should not happen. Unfortunately, there’s been plenty of evidence of Russian shelling of civilian buildings, including the bombing of a kindergarten and multiple residential buildings.

A residential building in Kyiv was attacked by Russian artillery. Image via Wiki Commons.

They say a picture is worth a thousand words, and seeing the people of Ukraine huddled up in subways sent a clear message: people like you and me are under attack.

People in Kyiv have taken refuge in the city’s subway to escape the bombing. Every night, thousands of people sleep in the subways. Image via Wiki Commons.

Ukraine contrasts to Russia

Ukraine has also worked to push its side of the story — which you can hardly blame it for, considering that Ukraine is currently faced with an existential threat. Its side of the story is very clear: Ukrainians are defending themselves against a foreign invasion. Meanwhile, Russia’s goal appears to be to crush Ukraine; it’s not hard to see why people support one of those things and not the other.


Ukrainians also pushed on the idea that unlike the invaders, they treat people humanely — even prisoners of war. They showed that they are humans just like everyone else and they have no intention of waging war when given an alternative.

Russia’s initial excuse for the invasion, that they were doing “denazification” in Ukraine, is also laughable — a lengthy list of historians and researchers signed a letter condemning this idea. The fact that Russia even bombed the Holocaust memorial in Kyiv made it even clearer that this was a flimsy excuse. Even for the Russian people at home, seeing heavily censored information, this must seem like a weak excuse at best.

Another stark contrast between the Russian and Ukrainian forces is that while the latter are fighting for their very survival and the defense of their loved ones, it’s not at all clear what the Russian forces are fighting for. In fact, some soldiers were themselves confused: according to reliable reports, some Russian soldiers and their families initially thought they were doing drills — not taking part in an actual invasion.

Tales of heroism and martyrs


Ukraine is an underdog in any conflict with Russia, but Ukrainians will not give up — and they’ve been pushing that message strongly since day one. Tales of regular people picking up arms despite all odds have circled the world, showing that Ukrainians are not afraid to fight till the bitter end.


In addition, Ukraine has regularly publicized defenders who sacrificed themselves for the greater good: a Ukrainian woman telling Russian soldiers to put seeds in their pockets so flowers will grow when they die in her country, a soldier sacrificing himself to blow up a bridge and slow down the Russian advance, and, most famously, the encircled border guards on Snake Island who told an attacking warship: “Russian warship, go fuck yourself.”

A valiant leader who understands media

When the invasion started, Volodymyr Zelenskyy wasn't that well-known or popular outside the country. He wasn't even that popular inside his country — for many Ukrainians, he was elected as the lesser evil. But he rose to the occasion impressively. With regular updates from the middle of events, using social media to communicate directly with people, and with staunch determination communicated in true 21st century style, Zelenskyy proved instrumental for Ukraine's defense and its morale. He was the right man in the right place, and his communication was clear and effective.

Source: Volodymyr Zelenskyy / Telegram.

Zelenskyy showed himself to be a man of the people, involved at the very center of the war zone — yet again, contrasting to what Putin was showing.

Jokes and memes

Russia has not only failed to project a "good guy" image — it's even failing to project a "strong guy" image. Despite its obviously superior firepower, despite its massive investments in the military, despite its gargantuan power, its invasion has been plagued by numerous mishaps. Some of these mishaps would be outright funny if the situation weren't so tragic.

For instance, scenes of Ukrainian farmers towing away a tank across a field went viral, as did the encounter between a Ukrainian driver and a Russian tank crew that had run out of fuel. The Ukrainian driver offered to tow the tank — back to Russia.

Memes have also been flowing, and more often than not the memes also took Ukraine’s side.


Lastly, Ukraine also deployed effective propaganda. The ‘Ghost of Kyiv’ fighter pilot is likely a myth, but it’s made the rounds and given hope to many Ukrainians. The ‘Panther of Kharkiv’ cat that detected Russian snipers is also quite possibly propaganda, but it gives people a good story. Let’s face it, if you can make it seem like you’ve got the cats on your side, you’ve already won a big part of the internet.


The bottom line

It’s always hard to assess what’s going on during a war. Reports are inconsistent, there’s a lot of misinformation, heck — it’s a war. In this case, Russia’s propaganda machine seems to have failed to effectively push its side of the story. From the top of the information chain to the bottom, from the intelligence reports to the grassroots videos and photos, everything points in one direction: Ukraine is winning the information war, while Russia is losing it. Its actions have received nigh-universal condemnation, and Putin is essentially a pariah on the global stage — while Zelenskyy has become one of the most popular leaders alive.

Will this matter for the actual war? It’s hard to say at this point. Information is vital during a war, but so is artillery — and Russia has a lot of artillery.

These hard-bodied robots can reproduce, learn and evolve autonomously

Where biology and technology meet, evolutionary robotics is spawning automatons that evolve in real time and space. The basis of this field, evolutionary computing, sees robots with virtual genomes 'mate' to 'reproduce' improved offspring in response to complex, harsh environments.

Image credits: ARE.

Hard-bodied robots are now able to ‘give birth’

Robots have changed a lot over the past 30 years, already capable of replacing their human counterparts in some cases — in many ways, robots are already the backbone of commerce and industry. Performing a flurry of jobs and roles, they have been miniaturized, mounted, and molded into mammoth proportions to achieve feats way beyond human abilities. But what happens when unstable situations or environments call for robots never seen on earth before?

For instance, we may need robots to clean up a nuclear meltdown deemed unsafe for humans, explore an asteroid in orbit or terraform a distant planet. So how would we go about that?

Scientists could guess what the robot may need to do, running untold computer simulations based on realistic scenarios that the robot could be faced with. Then, armed with the results from the simulations, they can send the bots hurtling into uncharted darkness aboard a hundred-billion-dollar machine, keeping their fingers crossed that their rigid designs will hold up for as long as needed.

But what if there was a better alternative? What if there was a type of artificial intelligence that could take lessons from evolution to generate robots that can adapt to their environment? It sounds like something from a sci-fi novel — but it's exactly what a multi-institutional team in the UK is currently doing in a project called Autonomous Robot Evolution (ARE).

Remarkably, they’ve already created robots that can ‘mate’ and ‘reproduce’ progeny with no human input. What’s more, using the evolutionary theory of variation and selection, these robots can optimize their descendants depending on a set of activities over generations. If viable, this would be a way to produce robots that can autonomously adapt to unpredictable environments – their extended mechanical family changing along with their volatile surroundings.

“Robot evolution provides endless possibilities to tweak the system,” says evolutionary ecologist and ARE team member Jacintha Ellers. “We can come up with novel types of creatures and see how they perform under different selection pressures.” It offers a way to explore evolutionary principles through an almost infinite number of “what if” questions.

What is evolutionary computation?

In computer science, evolutionary computation is a family of algorithms inspired by biological evolution, in which candidate solutions are generated and continually “evolved”. Each new generation removes the less desirable solutions and introduces small adaptive changes, or mutations, to produce a cyber version of survival of the fittest. It's a way to mimic biological evolution, resulting in the best version of the robot for its current role and environment.
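To make the idea concrete, the loop described above — generate candidates, score them, discard the weaker half, and mutate the survivors — can be sketched in a few lines of Python. This is a generic illustration of evolutionary computation, not ARE's actual code; the bit-string genome and the toy fitness function are invented for the example.

```python
import random

random.seed(0)  # make the illustrative run repeatable

def evolve(fitness, genome_len=8, pop_size=20, generations=50, mutation_rate=0.1):
    """Minimal evolutionary loop: selection plus mutation over bit-string genomes."""
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Score every candidate and keep the fitter half -- survival of the fittest
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors
        children = [[1 - gene if random.random() < mutation_rate else gene
                     for gene in parent] for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)

# Toy fitness: count the 1-bits -- evolution should drive genomes toward all ones
best = evolve(fitness=sum)
```

After a few dozen generations the population converges on high-fitness genomes — exactly the "cyber version of survival of the fittest" described above.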

Virtual robot. Image credits: ARE.

Evolutionary robotics begins at ARE in a facility dubbed the EvoSphere, where newly assembled baby robots download an artificial genetic code that defines their bodies and brains. This is where two-parent robots come together to mingle virtual genomes to create improved young, incorporating both their genetic codes.

The newly evolved offspring is built autonomously via a 3D printer, after which a mechanical assembly arm, reading the inherited virtual genome, selects and attaches the specified sensors and means of locomotion from a bank of pre-built components. Finally, the artificial system wires up a Raspberry Pi computer acting as a brain to the sensors and motors – software is then downloaded from both parents to represent the evolved brain.

1. Artificial intelligence teaches newborn robots how to control their bodies

In most animal species, newborns undergo brain development and learning to fine-tune their motor control. This process is even more intense for these robotic infants, due to breeding between different species. For example, a parent with wheels might procreate with another possessing a jointed leg, resulting in offspring with both types of locomotion.

But the inherited brain may struggle to control the new body, so an algorithm is run as part of the learning stage to refine the brain over a few trials in a simplified environment. If the synthetic babies can master their new bodies, they can proceed to the next phase: testing.

2. Selection of the fittest — who can reproduce?

ARE uses a specially built inert nuclear reactor housing for testing, where young robots must identify and clear radioactive waste while avoiding various obstacles. After they complete the task, the system scores each robot according to its performance, which it then uses to determine who will be permitted to reproduce.

Real robot. Image credits: ARE.

Software simulating reproduction then takes the virtual DNA of two parents and performs genetic recombination and mutation to generate a new robot, completing the ‘circuit of life.’ Parent robots can either remain in the population, have more children, or be recycled.
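The recombination-and-mutation step can likewise be sketched in Python. The 'genes' here — locomotion and sensor choices — are hypothetical stand-ins, not ARE's real genome encoding, but the operators (single-point crossover plus random mutation) are the textbook versions of what such reproduction software does.

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: the child inherits a prefix from one parent
    and the rest of the genome from the other."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome, options, rate=0.1):
    """With probability `rate`, swap a gene for a random alternative part."""
    return [random.choice(options) if random.random() < rate else gene
            for gene in genome]

COMPONENTS = ["wheel", "leg", "camera", "sonar", "lidar"]  # hypothetical part bank

parent_a = ["wheel", "wheel", "camera", "lidar"]
parent_b = ["leg", "leg", "sonar", "camera"]

child = mutate(crossover(parent_a, parent_b), options=COMPONENTS)
```

A child might come out with wheels from one parent and a sonar from the other, with the occasional mutation pulling in a part neither parent had — which is how a wheeled parent and a legged parent can produce the mixed offspring described earlier.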

Evolutionary roboticist and ARE researcher Guszti Eiben says this sped up evolution works as: “Robotic experiments can be conducted under controllable conditions and validated over many repetitions, something that is hard to achieve when working with biological organisms.”

3. Real-world robots can also mate in alternative cyberworlds

In her article for the New Scientist, Emma Hart, ARE member and professor of computational intelligence at Edinburgh Napier University, writes that by “working with real robots rather than simulations, we eliminate any reality gap. However, printing and assembling each new machine takes about 4 hours, depending on the complexity of its skeleton, so limits the speed at which a population can evolve. To address this drawback, we also study evolution in a parallel, virtual world.”

This parallel universe entails the creation of a digital version of every mechanical infant in a simulator once mating has occurred, which enables the ARE researchers to build and test new designs within seconds, identifying those that look workable.

Their cyber genomes can then be prioritized for fabrication into real-world robots, allowing virtual and physical robots to breed with each other, adding to the real-life gene pool created by the mating of two material automatons.

The dangers of self-evolving robots – how can we stay safe?

A robot fabricator. Image credits: ARE.

Even though this program is brimming with potential, Professor Hart cautions that progress is slow, and furthermore, there are long-term risks to the approach.

“In principle, the potential opportunities are great, but we also run the risk that things might get out of control, creating robots with unintended behaviors that could cause damage or even harm humans,” Hart says.

“We need to think about this now, while the technology is still being developed. Limiting the availability of materials from which to fabricate new robots provides one safeguard.” Therefore: “We could also anticipate unwanted behaviors by continually monitoring the evolved robots, then using that information to build analytical models to predict future problems. The most obvious and effective solution is to use a centralized reproduction system with a human overseer equipped with a kill switch.”

A world made better by robots evolving alongside us

Despite these concerns, she counters that even though some applications, such as interstellar travel, may seem years off, the ARE system may have more immediate uses. And as climate change reaches dangerous proportions, it is clear that robot manufacturers need to become greener. She proposes that they could reduce their ecological footprint by using the system to build novel robots from sustainable materials that operate at low energy levels and are easily repaired and recycled.

Hart concludes that these divergent progeny probably won’t look anything like the robots we see around us today, but that is where artificial evolution can help. Unrestrained by human cognition, computerized evolution can generate creative solutions we cannot even conceive of yet.

And it would appear these machines will now evolve even further as we step back and hand them the reins of their own virtual lives. How this will affect the human race remains to be seen.

Rumble in the concrete jungle: what history teaches us about urban defense

Given ongoing events in Ukraine, the age-old adage that offense is the best defense is being put to the test. So far, throughout the country’s towns and cities, the answer seems to be “not so much”.

Urban landscape design, Qom, Iran.

With that being said, history gives us ample examples and wisdom on how best to handle urban combat in general and urban defense in particular. Fighting in such environments is a very different beast to combat in other types of landscapes, and it raises unique challenges, as well as offering its own set of options and opportunities. Many of these are related to the huge availability of solid cover and line-of-sight denial. Others arise from the way cities naturally funnel pedestrian and vehicle traffic, constraining them to known and predictable avenues.

So today, we will go through wisdom gathered painfully, at great cost of human lives and material damage over history, on how defenders can best employ built environments against attackers.

Ersatz fortresses

In olden days, architects would design fortresses so that the defenders would have as much of an advantage over attackers as possible. The first and most obvious advantage is the physical protection afforded by thick, sturdy walls.

While most buildings today aren’t built to repel invaders, they do offer sturdy bases that defenders can use when bracing for an attack. Structures erected from concrete and rebar are especially tough and can act as impromptu fortifications. Most government buildings, apartment blocks, and office complexes are ideal for this role, as are banks.

If defenders have enough time to dig in, such buildings should be reinforced with materials such as lumber, steel girders, or sandbags. Such elements should be used to protect the structure from direct damage, help maintain integrity after damage is inflicted on the building, or cover areas through which attackers can enter the perimeter. Ideally, combat engineers would carry out reinforcement works, but if they are not available, civilians can fill the role partially.

Mines, barbed wire, and other physical barriers can also be used to deny attackers entry points into the building and make it hard for them to approach the site. Furniture, rubble, barbed wire, and mines should also be used to block or limit access to stairways and elevators; even if these do not neutralize any of the attackers, they can still delay a fighting force massively. Such makeshift defenses require a lot of time, effort, and resources (such as explosives and specialized combat engineers) to remove.

Inside the building itself, reinforcing materials should be used to create bunkers or similar fighting compartments that break a building’s open floors into multiple areas of overlapping fire.

As with ancient fortresses, however, the key to picking the right building to fortify is location. Strongpoints should have a good command of their surroundings (a direct line of sight for soldiers to fire along). Several close-together buildings can be fortified to ensure overlapping fields of fire that the enemy cannot hide from. Whether fortified alone or in groups, these buildings should be surrounded by obstacle courses that prevent attackers from simply bypassing them or isolating the strongpoint from the support of other defending units.

Heavy weapons such as rocket launchers, guns, automatic cannons, and heavy machine guns can also benefit from an elevated position from which to fire. Such weapons can be disassembled, carried to upper floors, and reassembled for use. Equipment such as this can allow defenders to halt entire armored columns.

A single fortified building can completely blunt even an armored assault, or at least stall it. One such building — known today as “Pavlov’s House” — became famous during the Battle of Stalingrad in 1942. A platoon led by Sergeant Yakov Pavlov held out in this house against the German army for 60 days, repelling infantry and armored attacks. The soldiers surrounded the building with barbed wire and mines, broke holes through the interior walls to allow for movement, dug machine-gun emplacements in the building’s corners, and used the top floors to lay down anti-tank rifle fire on advancing tanks. When artillery fired on the building, the defenders retreated to the safety of the cellar, only to re-emerge and continue fighting.

Such stories illustrate just how hard it can be for attackers to negotiate a single fortified building. Still, modern battlefields involve systems that were not available during World War II, so one extra element should be considered:


The advent of modern surveillance systems such as drones, satellites, and reconnaissance planes, together with the precision weapons in use today, means that strongpoints are at risk of precision strikes. Concealment saves lives, so defenders should take steps to hide their exact position and activity as much as possible.

Citizens embroiled in the Syrian conflict would routinely hang large pieces of cloth, tarps, or sheet metal in between buildings to hide personnel from snipers and aircraft. Such measures are disproportionately effective compared to their simplicity. Soldiers rely on sight heavily on the battlefield and don’t generally shoot unless they have reliable knowledge of where the enemy might be. In the case of heavy weaponry such as tank- or aircraft-mounted munitions, this is even more true. A pilot is much less likely to drop a bomb without a clear sighting than a soldier is to fire a single shot.

Even if the enemy chooses to fire, concealment measures still bring value to defenders. A weapon fired at an empty emplacement is effectively wasted, and cannot be used against an active defender — contributing to the so-called ‘virtual attrition’ of the attacking forces.

Concealment measures should be used in conjunction with fortifications to hide the defenders’ movements and decrease the efficacy of enemy fire. Even so, a big fortified apartment building is hard to hide and will undoubtedly draw some heavy ordnance its way. So another element should be considered to ensure the safety of defending soldiers.

Tunnels, mouseholes

Mouseholes are openings cut to allow soldiers easy passage through the interior and exterior walls of a building. They have been a mainstay of urban combat ever since the advent of gunpowder weaponry. Mouseholes can be created using explosives or simple tools, and should comfortably fit a soldier so as not to clog traffic during a tense situation. Should a building be overrun by attackers, defenders can also use mouseholes as chokepoints to contain the enemy’s advance by covering them with machine-gun fire or personal weapons.

Tunnels, on the other hand, are dug underground. They require significantly more work than mouseholes but have the added benefit of concealing and protecting troops that transit them from fire. Due to their nature, tunnel networks are hard to set up, so they should be used to allow strategic access to important sites and give defenders safe avenues of reinforcing strongpoints. Whenever possible, defenders should work to build extensive tunnel networks to give troops safe avenues of passage on the battlefield.

Underground transportation avenues and infrastructure, such as metro or sewage lines, can also be used as tunnels and bunkers. German soldiers used them to great effect during the Battle of Berlin in 1945, causing great pain to Soviet soldiers moving into the city. Such infrastructure is usually roomy enough to also be usable as hospital and storage space, extensive enough to act as a communications network, and offers an ideal setting for ambushes, bunkers, or counterattacks. Some can even allow for the passage of armored vehicles. They are also sturdy enough — and dug deep enough underground — to withstand most artillery and airstrikes.

But what about other areas of the city?


As daunting as fortified spaces can be, the fact of the matter is that not every building can be fortified. There simply isn’t enough time, manpower, and material available when preparing a defense. But not every area needs to be fortified to help stop an attack. Sometimes, it’s as simple as tearing buildings down.

Defenders have the advantage that they can use the terrain in their favor to a much greater extent than attackers. They are the first of the two sides to hold a position, they know the land, and can take up the best spots to punish any invaders. Rubbling buildings can help in this regard on several levels.

First, rubble presents a physical barrier that an invading army will have difficulty navigating and removing. This is especially true for concrete or brick rubble produced by demolishing buildings. It forces attackers to move through pre-determined areas, where defenses can be set up to stop their advance. It also prevents them from concentrating all their firepower on a single objective, as it blocks direct fire. Rubble also blocks line of sight, limiting the ability of an attacking force to keep tabs on what the defenders are doing.

Rubbling is, understandably, a very damaging process and thus quite regrettable to use. But it does bring huge benefits to defenders by allowing them to alter the landscape to their purposes.


Although less effective than rubbling at containing an enemy’s movements, barricades can be surprisingly effective at stopping both infantry and armored vehicles. Furniture, tires, sandbags, metallic elements, and wire all make for good barricades.

Urban landscapes are also very rich in objects that can be used for barricades such as trash containers, cars, manholes, industrial piping, and so forth. These should be used liberally and ideally set up in areas where defenders can unleash fire on any attackers attempting to navigate or remove them.

Concrete barriers

These aren’t very common in cities, but any checkpoint or protected infrastructure site might have some of these barriers. If you have time and concrete to spare, makeshift barriers can also be quite effective. They usually come as 1 m (3 ft) tall anti-vehicle walls or 4 m (12 ft) tall wall segments used by the military to reinforce strategic points.

These are essentially portable fortifications. They are made of rebar and concrete and are exceedingly hard to destroy directly. Use cranes and heavy trucks to move them, as they weigh a few tons each.


Another important advantage defenders have is that the attackers have to come to them — so there’s not much need to carry supplies to the front line.

Pre-prepared ammo caches can be strewn throughout the city to keep defenders in the fight as long as possible. Urban landscapes offer a lot of hidden spots where ammo or weapons can be deposited discretely. Food, water, and medical supplies are also essential, so make sure these are distributed throughout the engagement zone as well.

Strongpoints should have designated rooms for storage of such supplies. Smaller items such as magazines or grenades can be distributed in smaller quantities throughout several points of the building, to ensure that soldiers always have what they need on hand.

Attacking an urban environment is a very daunting proposition even for the most well-trained of military forces. It gives defenders an ideal landscape to set up ambushes, entrench, deceive their attackers, and launch counter-offensives. Making the most of the terrain and preparing carefully can give defenders a huge upper hand over their foes while making it hard for attackers to leverage their strengths. Such landscapes can level the playing field even against a superior attacking force. The events in Ukraine stand as a testament to this.

Russian electric vehicle chargers get hacked: “Putin is a dickhead”

Chargers along one of Russia’s most important motorways are not working and are displaying messages like “Putin is a dickhead” and “Glory to Ukraine. Glory to the heroes.”

Image credits: Instagram user Oleg Moskovtsev.

The M11 Motorway in Russia, which connects the country’s two biggest cities (Moscow and St. Petersburg) is one of the busiest roads in the country. But for the few people driving electric cars in the country, it’s become virtually unusable.

Following Russia’s invasion of Ukraine, the electric car chargers along the motorway were hacked. The Russian energy company Rosseti acknowledged the problem but claimed it was not an external hack, but rather an internal one.

Reportedly, some of the main components in the chargers come from a Ukrainian company. A Facebook statement from Rosseti claims the Ukrainian company left backdoor access in the chargers, shutting them down and displaying the scrolling anti-Putin messages.

“Charging stations installed on the M-11 route were purchased in 2020 according to the results of an open purchase procedure. The chargers were provided by the LLC “Gzhelprom” (Russia). It was later discovered that the main components (incl. the controller) are actually produced by the company Autoenterprise (Ukraine), and the Russian supplier produced an open assembly.”

“The manufacturer left a “marketing” in the controller, which gave him the opportunity to have hidden internet access. According to our information, data controllers are widely used on power charging stations exported by Ukraine to Europe.”

AutoEnterprise’s Facebook page re-posted a video showing the chargers, but it’s not clear whether the company claimed responsibility or was simply happy to see it.

As its troops continue to bomb Ukraine and march on its main cities, Russia has come increasingly under cyberattack, with hackers from all around the world hitting Russian websites and even television.

Russian state-funded television was hacked by the activist group Anonymous, displaying anti-war messages and urging the Russian people to act to stop the war. Russian TV channels were also attacked and made to play Ukrainian music and display uncensored news of the conflict from news sources outside Russia.

Ultimately, it’s unlikely that any of these actions will have a major impact on Russia’s military attack, but they could help spread more information inside Russia about the events in Ukraine. Russian authorities are actively censoring the situation and for years, they have tried to censor and control what the Russian people get to hear — not shying away from detaining journalists or even worse.

Cyber attacks will likely continue to escalate on both sides, involving both state and non-state actors. War is no longer fought only on the front lines — nowadays, it’s fought online as well.

Stanislav Petrov – the man who probably saved the world from a nuclear disaster

As Vladimir Putin forces the world to contemplate nuclear war once again, it’s time to remember the time one Soviet officer may have saved the world from disaster.

It was September 26, 1983. The Cold War was at one of its tensest periods ever. With the United States and the USSR at each other’s throats, the two had already built enough nuclear weapons to destroy each other (as well as the rest of the world) a couple of times over — and the slightest sign of an attack would have led to a worldwide disaster, killing hundreds of millions of people.

Stanislav Petrov played a crucial role in monitoring what the US was doing. In the case of an attack, the Soviet strategy was to launch an all-out retaliation as quickly as possible. So a few minutes after midnight, when the alarms went off and the screens turned red, the responsibility fell on his shoulders.

The Soviet warning software analyzed the information and concluded that it wasn’t static; the system’s conclusion was that the US had launched a missile. But the system was flawed. Still, the human brain surpassed the computer that day; on that fateful day, Stanislav Petrov put his foot down and decided that it was a false alarm, advising against retaliation – and he made this decision fast.

He made the decision based mostly on common sense – there were too few missiles. The computer said there were only five of them.

“When people start a war, they don’t start it with only five missiles,” he remembered thinking at the time. “You can do little damage with just five missiles.”

However, he also relied on an old-fashioned gut feeling.

“I had a funny feeling in my gut,” Petrov said. “I didn’t want to make a mistake. I made a decision, and that was it.”

There’s also something interesting about that night: Petrov wasn’t scheduled to be on duty. Somebody else should have been there, and somebody else could have made a different decision. The world would probably have turned out very differently.

Saltwater Crocodiles: the world’s oldest and largest reptile

From the east of India all through to the north of Australia, one fearsome, cold-blooded predator stalks the coasts. This hypercarnivore will contend with anything that enters its watery domain, from birds to men to sharks, and almost always wins that fight. Fossil evidence shows that this species has been plying its bloody trade for almost 5 million years, remaining virtually unchanged — a testament to just how efficient a killing machine it is. Looking it in the eye is the closest thing we have to staring down a carnivorous dinosaur.

Saltwater crocodile at the Australia Zoo, Beerwah, South Queensland. Image credits Bernard Dupont / Flickr.

This animal is the saltwater crocodile (Crocodylus porosus). It has the distinction of being the single largest reptile alive on the planet today, and one of the oldest species to still walk the Earth.

Predatory legacy

The earliest fossil evidence we have of this species dates back to the Pliocene Epoch, which spanned from 5.3 million to 2.6 million years ago.

But the crocodile family is much older. Its roots trace back to the Mesozoic Era, some 250 million years ago, when crocodiles branched off from the archosaurs (the common ancestor they share with modern birds). During those early days, they lived alongside dinosaurs.

Crocodiles truly began coming into their own some 55 million years ago, evolving into the shape we know today. They have remained almost unchanged since — a testament to how well-adapted they are to their environments and to the sheer efficiency with which they hunt.

This makes the crocodile family, and the saltwater crocodile as one of its members, one of the oldest lineages alive on the planet today.

The saltwater crocodile

With adult males reaching up to 6 or 7 meters (around 20 to 23 ft) in length, this species is the largest reptile alive today. Females are smaller than males, generally not exceeding 3 meters (10 ft) in length; 2.5 meters is considered large for these ladies.

Image credits fvanrenterghem / Flickr.

The saltwater crocodile will grow to its maximum length and then start increasing in bulk. The weight of these animals generally increases cubically (by a power of 3) as they age; an individual at 6 m long will weigh over twice as much as one at 5 m. All in all, they tend to be noticeably broader and more heavy-set than other crocodiles.
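As a quick sanity check on that scaling claim: pure cubic (isometric) scaling alone would predict a 6 m individual weighing only about 1.7 times as much as a 5 m one — the illustrative calculation below, not measured data — so an "over twice" figure reflects the extra bulk the largest individuals put on beyond simple geometry.

```python
def isometric_weight_ratio(length_a, length_b):
    """Weight ratio predicted if weight scaled purely with the cube of length."""
    return (length_b / length_a) ** 3

ratio = isometric_weight_ratio(5.0, 6.0)  # (6/5)**3 = 1.728
```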

That being said, they are quite small as juveniles. Freshly-hatched crocs measure about 28 cm (11 in) in length and weigh an average of only 71 g — less than an average bag of chips.

Saltwater crocodiles have large heads, with a surprisingly wide snout compared to other species of croc. Their snouts are usually twice as long as they are wide at the base. A pair of ridges adorns the animal’s eyes, running down the middle of the snout to the nose. Between 64 and 68 teeth line their powerful jaws.

Like their relatives, saltwater crocodiles are covered in oval-shaped scales. These tend to be smaller than the scales of other crocodiles, and the species has small or completely absent scutes (larger, bony plates that reinforce certain areas of the animal’s armored cover) on its neck, which can serve as a quick identifier for the species.

Young individuals are pale yellow, which changes with age. Adults are a darker yellow with tan and gray spots and a white or yellow belly. Adults also have stripes on the lower sides of their bodies and dark bands on their tails.

That being said, several color variations are known to exist in the wild; some adults can maintain a pale coloration throughout their lives, while others can develop quite dark coats, almost black.

Behavior, feeding, mating

Saltwater crocodiles are ambush predators. They lie in wait just below the waterline, with only their raised brows and nostrils poking above the water. These reptiles capture unsuspecting prey from the shore as they come to drink, but are not shy to more actively hunt prey in the water, either. Their infamous ‘death roll’ — where they bite and then twist their unfortunate victim — is devastating, as is their habit of pulling animals into the water where they drown. But even their bite alone is terrifying. According to an analysis by Florida State University paleobiologist Gregory M. Erickson, saltwater crocodiles have the strongest bite of all their relatives, clocking in at 3,700 pounds per square inch (psi).

That’s a mighty bitey. Image credits Sankara Subramanian / Flickr.

Apart from being the largest, the saltwater crocodile is also considered one of the most intelligent reptiles, showing sophisticated behavior. They have a relatively wide repertoire of sounds with which they communicate. They produce bark-like sounds in four known types of calls. The first, which is only performed by newborns, is a short, high-toned hatching call. Another is their distress call, typically only seen in juveniles, which is a series of short, high-pitched barks. The species also has a threat call — a hissing or coughing sound made toward an intruder — and a courtship call, which is a long and low growl.

Saltwater crocodiles will spend most of their time thermoregulating to maintain an ideal body temperature. This involves basking in the sun or taking dips in the water to cool down. Breaks are taken only to hunt or protect their territory. And they are quite territorial. These crocodiles live in coastal waters, freshwater rivers, billabongs (isolated ponds left behind after a river changes course), and swamps. While they are generally shy and avoid people, especially on land, encroaching on their territory is one of the few things that will make a saltwater crocodile attack humans. They’re not shy to fight anything that trespasses, however, including sharks, monkeys, and buffalo.

This territoriality is also evident between crocs. Juveniles are raised in freshwater rivers but are quickly forced out by dominant males. Males who fail to establish a territory of their own are either killed or forced out to sea. They just aren’t social souls at all.

Females lay clutches of about 50 eggs (though there are records of a single female laying up to 90 in extraordinary cases). They incubate them in nests of mud and plant fibers for around 3 months. Interestingly, ambient temperature dictates the sex of the hatchlings. If temperatures are cool, around 30 degrees Celsius, all of them will be female. Higher sustained temperatures, around 34 degrees Celsius, will produce an all-male clutch.
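The temperature thresholds above can be captured in a toy model. This is purely illustrative: the 30 °C and 34 °C figures are the ones quoted here, while the behavior at intermediate temperatures (a mixed clutch) is an assumption for the sketch, not a measured result.

```python
# A toy model of the temperature-dependent sex determination described above.
# The 30 °C / 34 °C thresholds come from the article; real clutches respond to
# sustained nest temperature over months, which this simple function ignores.

def expected_sex(nest_temp_c: float) -> str:
    """Rough expected hatchling sex for a sustained nest temperature (°C)."""
    if nest_temp_c <= 30.0:
        return "female"   # cool nests: all-female clutch
    if nest_temp_c >= 34.0:
        return "male"     # hot nests: all-male clutch
    return "mixed"        # assumed: in-between temperatures yield mixed clutches

print(expected_sex(30.0))  # female
print(expected_sex(34.0))  # male
```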

Only around 1% of all hatchlings survive into adulthood.

Conservation status

Saltwater crocodiles have precious few natural predators. Still, their skins have historically been highly prized, and they have suffered quite a lot from hunting, both legal and illegal. Their eggs and meat are also consumed as food.

In the past, this species was threatened with extinction. Recent conservation efforts have allowed them to make an impressive comeback, but the species as a whole is much rarer than in the past. They are currently considered at low risk of extinction, but they remain of special interest to poachers due to their valuable meat, eggs, and skins.

Saltwater crocodiles are an ancient and fearsome predator. They have evolved to dominate their ecosystems, and do so by quietly lurking just out of sight. But, like many apex predators before them, pressure from humans — both directly, in the form of hunting, and indirectly, through environmental destruction and climate change — has left the species reeling.

Conservation efforts for this species are to be applauded and supported. Even though these crocodiles have shown themselves willing to attack humans if we are not careful, we have to bear in mind that what they want is to be left alone and unbothered. It would be a pity for this species, descended from ancient titans and a survivor of millions of years and global catastrophe, to perish.

Brain scans are saving convicted murderers from death row–but should they?

Over a decade ago, a brain-mapping technique known as quantitative electroencephalography (qEEG) was first used in a death penalty case, helping keep a convicted killer and serial child rapist off death row. It achieved this by persuading jurors that traumatic brain injury (TBI) had left him prone to impulsive violence.

In the years since, qEEG has remained in a strange stasis, inconsistently accepted in a small number of death penalty cases in the USA. In some trials, prosecutors fought it as junk science; in others, they raised no objections to the imaging, producing a case history built on sand. Still, this handful of test cases could signal a new era in which science helps outlaw the legal execution of humans.

Quantifying criminal behavior to prevent it

As it stands, if science cannot quantify or explain an event or action, courts are left to decide on little more than conjecture. DNA’s special evidentiary status aside, isn’t this what happens in a criminal court case? So why is it so hard to integrate verified neuroimaging into legal cases? Of course, one could make a solid argument that it would be easier to simply do away with barbaric death penalties and concentrate on stopping these awful crimes from occurring in the first place, but that is a different debate.

The problem is more complex than it seems. Neuroimaging could be used not just to exempt the mentally ill from the death penalty but also to explain horrendous crimes to the victims or their families. And just as crucial, could governments start implementing measures to prevent this type of criminal behavior using electrotherapy or counseling to ‘rectify’ abnormal brain patterns? This could lead down some very slippery slopes.

Especially it’s not just death row cases that are questioning qEEG — nearly every injury lawsuit in the USA also now includes a TBI claim. With Magnetic Resonance Imaging (MRIs) and Computed tomography (CT) being generally expensive, lawyers are constantly seeking new ways to prove brain dysfunction. Readers should note that both of these neuroimaging techniques are viewed as more accurate than qEEG but can only provide a single, static image of the neurological condition – and thus provide no direct measurement of functional, ongoing brain activity.

In contrast, the cheaper and quicker qEEG purports to continuously monitor active brain activity to diagnose many neurological conditions, and could one day flag those more inclined to violence, enabling early interventional therapy sessions and one-to-one help focused on preventing the problem.

But until we reach that point as a society, defense and human rights lawyers have been attempting to slowly phase out legal executions by using brain mapping to explain why their convicted clients may have committed these crimes, gradually shifting the focus from the consequences of mental illness and disorders to a deeper understanding of these conditions.

The sad case of Nikolas Cruz

But the questions surrounding this technology will soon be on trial again in the most high-profile death penalty case in decades: Florida vs. Nikolas Cruz. On the afternoon of February 14, 2018, Cruz opened fire on school children and staff at Marjory Stoneman Douglas High in Parkland when he was just 19 years of age. In what is now classed as the deadliest school shooting in the country’s history, the state charged the former Stoneman Douglas student with the premeditated murder of 17 school children and staff and the attempted murder of a further 17 people.

With the sentencing expected in April 2022, Cruz’s defense lawyers have enlisted qEEG experts as part of their case to persuade jurors that brain defects should spare him the death penalty. The Broward State Attorney’s Office signaled in a court filing last month that it will challenge the technology and ask a judge to exclude the test results—not yet made public—from the case.

Cruz has already pleaded guilty to all charges, but a jury will now debate whether to hand down the death penalty or life in prison.

According to a court document filed recently, Cruz’s defense team intends to ask the jury to consider mitigating factors. These include his tumultuous family life, a long history of mental health disorders, brain damage caused by his mother’s drug addiction, and claims that a trusted peer sexually abused him—all expected to be verified using qEEG.

After reading the flurry of news reports on the upcoming case, one can’t help but wonder why, even without the use of qEEG, someone with a record of mental health issues at only 19 years old should be on death row. And since authorities and medical professionals were aware of Cruz’s problems, what failures of prevention led to him murdering seventeen individuals? Have these even been addressed or corrected? Unlikely.

On a positive note, prosecutors in several US counties have not opposed brain mapping testimony in recent years. According to Dr. David Ross, CEO of NeuroPAs Global and a qEEG expert, the reason is that more scientific papers and research over the years have validated the test’s reliability, helping the technique gain broader use in the diagnosis and treatment of cognitive disorders, even though courts are still debating its effectiveness. “It’s hard to argue it’s not a scientifically valid tool to explore brain function,” Ross stated in an interview with the Miami Herald.

What exactly is a quantitative electroencephalogram (qEEG)?

To explain what a qEEG is, you must first know what an electroencephalogram (EEG) does. An EEG records the electrical potential difference between electrodes placed on the outside of the scalp, and provides the analog data for computerized qEEG analysis. Multiple electrodes (generally more than 20) are connected in pairs to form various patterns called montages, resulting in a series of paired channels of EEG activity. The results appear as squiggly lines on paper: brain wave patterns that clinicians have used for decades to detect evidence of neurological problems.

More recently, trained professionals have computerized this data to create qEEG, translating raw EEG data with mathematical algorithms to help analyze brainwave frequencies. Clinicians then compare this statistical analysis against a database of standard or neurotypical brains to discern abnormal brain function that could underlie criminal behavior in death row cases.
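To make the “mathematical algorithms” part concrete, here is a minimal, illustrative sketch of the core idea: decomposing an EEG trace into the classic frequency bands (delta, theta, alpha, beta) and measuring how much of the signal’s power falls in each. This is not any clinical qEEG pipeline; the sampling rate, band limits, and the synthetic test signal are all assumptions for illustration, and real systems add artifact rejection, montage referencing, and normative-database comparison.

```python
import numpy as np

# Illustrative sketch only: a toy version of the frequency-band analysis at the
# heart of qEEG. FS and the band edges are common textbook values, not a standard.

FS = 250  # sampling rate in Hz (assumed)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """Return the relative power of each classic EEG band in `signal`."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    total = power[(freqs >= 0.5) & (freqs < 30)].sum()
    return {
        name: power[(freqs >= lo) & (freqs < hi)].sum() / total
        for name, (lo, hi) in BANDS.items()
    }

# A synthetic 10 Hz oscillation plus a little noise should dominate the alpha band.
t = np.arange(0, 4, 1.0 / FS)
toy_eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
powers = band_powers(toy_eeg)
print(max(powers, key=powers.get))  # alpha
```

A clinical system would then compare such band-power statistics, channel by channel, against a normative database; here the output is just the dominant band of one toy trace.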

While this can be true, results can still go awry due to incorrect electrode placement, imaging artifacts, inadequate band filtering, drowsiness, comparisons against the wrong control database, and the choice of timeframes. Furthermore, processing can yield a large amount of clinically irrelevant data. These are some of the reasons the usefulness of qEEG remains controversial despite the volume of published research. However, many of these discrepancies can be corrected simply by having trained medical professionals operate the apparatus and interpret the data.

Just one case is disrupting the use of this novel technology

Yet, despite this easy correction, qEEG is not generally accepted by the relevant scientific community for diagnosing traumatic brain injuries, and is therefore inadmissible under Frye v. United States. An archaic case from 1923 concerning a polygraph test, Frye was decided a mere 17 years after Cajal and Golgi won a Nobel Prize for producing slides and hand-drawn pictures of neurons in the brain.

Experts could also argue that a lie detector test (measuring blood pressure, pulse, respiration, and skin conductivity) is far removed from a machine monitoring brain activity. Furthermore, when the Court of Appeals of the District of Columbia decided on this lawsuit, qEEG didn’t exist. 

Applying the Frye standard, courts throughout the country have excluded qEEG evidence in the context of alleged brain trauma. For example, the Florida Supreme Court has formally noted that the relevant scientific community for the purposes of Frye holds that “qEEG is not a reliable method for determining brain damage and is not widely accepted by those who diagnose a neurologic disease or brain damage.”

However, in a seminal paper covering the use of qEEG in cognitive disorders, the American Academy of Neurology (AAN) overall felt that computer-assisted diagnosis using qEEG is an accurate, inexpensive, easy-to-handle tool that represents a valuable aid for diagnosing, evaluating, following up, and predicting response to therapy, despite the academy’s publicly stated opposition to the technology. The paper also features other neurological associations validating the use of this technology.

The introduction of qEEG on death row was not that long ago

The technology was first deemed admissible in court during the death-penalty prosecution of Grady Nelson in 2010. Nelson stabbed his wife 61 times with a knife, then raped and stabbed her 11-year-old intellectually disabled daughter and her 9-year-old son. The woman died, while her children survived. Documents state that Nelson’s wife had found out he had been sexually abusing both children for many years and sought to keep them away from him.

Nelson’s defense argued that earlier brain damage had left him prone to impulsive behavior and violence. Prosecutors fought to strike the qEEG test from evidence, contending that the science was unproven and misused in this case.

“It was a lot of hocus pocus and bells and whistles, and it amounted to nothing,” the prosecutor on the case, Abbe Rifkin, stated. “When you look at the facts of the case, there was nothing impulsive about this murder.”

However, after hearing the testimony of Dr. Robert W. Thatcher, a multi-award-winning pioneer in qEEG analysis testifying for the defense, Judge Hogan-Scola found that qEEG met the legal prerequisites for reliability. She based this on the Frye and Daubert standards, two landmark tests governing the admissibility of scientific evidence.

She allowed jurors to hear the qEEG report and even permitted Thatcher to present a computer slide show of Nelson’s brain, with an explanation of the effects of frontal lobe damage, at the sentencing phase. He testified that Nelson exhibited “sharp waves” in this region, typically seen in people with epilepsy, explaining that Nelson doesn’t have epilepsy but does have a history of at least three TBIs, which could explain the abnormality seen in the EEG.

Interpreting the data, Thatcher also told the court that the frontal lobes, located directly behind the forehead, regulate behavior. “When the frontal lobes are damaged, people have difficulty suppressing actions … and don’t understand the consequences of their actions,” Thatcher told ScienceInsider.

Jurors rejected the death penalty. Two jurors who agreed to be interviewed by a major national publication later categorically stated that the qEEG imaging and testimony influenced their decision.

“The moment this crime occurred, Grady had a broken brain,” his defense attorney, Terry Lenamon, said. “I think this is a huge step forward in explaining why people are broken—not excusing it. This is going to go a long way in mitigating death penalty sentences.”

On the other hand, Charles Epstein, a neurologist at Emory University in Atlanta who testified for the prosecution, states that the qEEG data Thatcher presented was a flawed statistical analysis riddled with artifacts not naturally present in EEG imaging. Epstein adds that the sharp waves Thatcher reported may have been blips caused by the contraction of muscles in the head. “I treat people with head trauma all the time,” he says. “I never see this in people with head trauma.”

You can see Epstein’s point, as it’s unclear whether these brain injuries occurred before or after Nelson brutally raped a 7-year-old girl in 1991, after which he was granted probation and trained as a social worker.

All of which raises the following questions: first, do we need qEEG to tell us that this person’s behavior is abnormal, or that the legal system does not protect children? And second, was the reaction of the authorities in the 1991 case appropriate, let alone preventative?

As mass shootings and other forms of extreme violence remain at relatively high levels in the United States, committed by ever-younger perpetrators flagged as loners and fantasists by the state mental healthcare systems they disappear into, it’s evident that sturdier preventative programs need to be implemented by governments worldwide. The worst has already occurred: our children are unprotected against dangerous predators and unaided when affected by unstable and abusive environments, inappropriate social media, and TV.

A potential beacon of hope, qEEG is already beginning to highlight the country’s broken socio-legal systems and the amount of work it will take to fix them. It attempts to humanize a fractured court system that still disposes of the products of trauma and abuse as though they were nothing but waste, forcing the authorities to answer for their failings. Any science that can do this can’t be a bad thing.

Want to live like a Roman? This historical rowing cruise on the Danube has you covered

An unusual ship will set sail in November 2022 on the Danube River in Europe. Well, unusual for our times, at least. A Roman rowing and sailing ship built just like the ones in late antiquity will start its journey in Bavaria, and sail down the Danube all the way to the Black Sea in Romania.

A reconstructed navis lusoria at the Museum of Ancient Seafaring, Mainz.

For centuries, the Romans ruled vast swaths of Europe, Africa, and western Asia. Their maritime prowess was unrivaled and has fascinated historians for centuries. But no matter how many Roman documentaries you watch, it’s still hard to imagine what their lives were like, or what a journey would have been in Roman times. Well, now you can experience that firsthand.

Thanks to a project supported by the Donau-Universität Krems, you can embark on a Roman adventure. “Danuvina Alacris”, a modern reconstruction of a “Lusoria”-type Roman ship, is taking volunteers. Lusoria ships were small military vessels of the late Roman Empire that served as troop transports. They once roamed the Danube River, guarding the boundary between the Roman Empire and the “barbarian” lands beyond, which the Romans called barbaricum.

The ship itself was built with special care so as to resemble Roman ships as much as possible. Lusoria ships were nimble on river waters, but whenever they couldn’t sail properly, they relied on strong rowers. The 2022 Roman cruise will also require participants to pull in some rowing work when necessary.

“Our ship named “Danuvia Alacris” will cover about 40 km a day, which will be rowed and partially sailed, if possible. The crew, which will consist of about 18-20 rowers and a leadership team of 4-5 people, will have an international composition, so the language on the ship will be English,” the project announcement page reads.

It won’t just be going from point A to point B — the organizers announced a series of events around the cruise. In addition, you’ll be living as close to Roman times as possible.

“The crew will change approximately every second week; they will row in Roman clothes (tunic, shoes, etc.). In addition, there will be smaller to larger festivals and interested visitors at the stops of the ship.”

The organizers are still looking for volunteers, who will rotate out of the crew every two weeks. The project will start on July 15th and is expected to end in October 2022. Registrations are now open; for more information, check out the official announcement page.

What are fisher cats, the most misleadingly-named animals out there?

One of the more obscure animals out there, fisher cats (Pekania pennanti), or ‘fishers’ for short, are predators endemic to North America. Despite the name, these animals are not cats, and they do not fish. They are, however, increasingly moving into urban and suburban areas across the USA.

Image credits of USFWS Pacific Southwest Region / Flickr.

Fisher cats are slim, short-legged mammals that resemble weasels or small wolverines. They can grow to about 145 centimeters in length (4 ft 9 in) including the tail. They’re covered in dark-brown fur, which is glossy and thick in the winter, and more mottled in the summer. They have rounded ears, and overall look quite cute and cuddly. Don’t let that fool you, however: fisher cats have vicious, retractable claws, and are quite fearsome predators for their size.

The species lives in various areas of North America. New England, Tennessee, the Great Lakes area, and the northern stretches of the Rocky Mountains all house populations of fisher cats. Smaller populations have also been reported in California, the southern Sierra Nevada, and the west coast of Oregon. The boreal forests of Canada also make great homes for these mammals.

The cat that’s not a cat

Taxonomically speaking, fisher cats are closely related to martens, being part of the family Mustelidae. This is the largest family in the suborder Caniformia (‘dog-like’ carnivorans), itself part of the order Carnivora (meat-eaters). As such, they’re part of the most successful and diverse group of predators on the planet.

Despite this taxonomic allegiance to the order Carnivora, fisher cats are omnivorous. They will happily hunt a wide range of animals of comparable size to themselves. They are among the very few animals that even attempt to hunt porcupines, and do so quite successfully, but they prefer to hunt hares. They’re not above scouring the forest floor for plants to eat, however. They generally forage around fallen trees, looking for fruits, mushrooms, nuts, and insects. A bit surprisingly, given their name, fisher cats only very rarely eat fish.

It’s not exactly clear, then, how the animal got its name. Folklore says that fisher cats would steal the fish that early settlers used to bait traps in the Great Lakes region, but this is wholly unconfirmed. More likely, the ‘fisher’ in ‘fisher cat’ comes from ‘fisse’, the Dutch equivalent of the word ‘fitch’, brought by early settlers in the region. It’s also possible that the name draws its roots from the French term ‘fishe’. These words refer to the European polecat or its pelt. Given that the fur trade was an important source of income for early settlers, fisher cats were likely prized and sought after for their pelts, and the species became associated with the polecat, which was raised for fur in Europe.

It’s easy to see why their pelts were so prized. Image via Wikimedia.

However, due to this association, fisher cats have been hunted to extinction in some parts of their natural range. With pelt hunting now well below the levels seen since Europeans first colonized the Americas, the animals are making a comeback: their populations are recovering and moving back into areas they previously inhabited. Despite this, legal harvesting for fur, through trapping, is still one of the main sources of information we have on their numbers.

A baby fisher cat is called a ‘kit’. Females tend to give birth to litters of one to four kits at a time in the spring and nurture them until late summer. The kits are sightless and quite helpless at first, but by summertime they are well able to take care of themselves, and they leave in search of mates of their own.

How do they live?

Fishers spend most of their time on the ground, and have a marked preference for forested lands compared to other habitats. They’re most often found in boreal or conifer forests, but individuals have been seen in transition forests as well, such as mixed hardwood-conifer forests. They seem to avoid areas where overhead cover isn’t very thick, preferring at least 50% coverage.

Female fisher cats also make their dens in moderately large and large trees when giving birth and rearing their kits. Because of these factors, they’re most likely to be seen in old-growth forests, since heavily-logged or young forests seem not to provide the habitat that fishers like to live in.

Towards the west of the continent, where fires routinely clear forests of fallen trees (the fishers’ favorite foraging environments), these animals tend to gravitate towards forests adjacent to bodies of water (riparian forests). They also seem not to be fond of heavily snowed areas, regardless of geographical location.

Despite their habitat preferences, fisher cats have been seen encroaching ever more deeply into urban landscapes, most likely drawn by the prospect of easy food. While it is still unclear whether fisher cats hunt for pets such as household cats or small dogs, such activities would be within their abilities. Most likely, however, they search for food items discarded in trash cans.

Fisher cats stay away from humans for the most part and avoid contact. They will defend themselves if they feel cornered, however. They are quite small, so the chances of a deadly encounter with a fisher cat are slim to none, but if you ever meet one, don’t be fooled by their cuddly exterior. Give it space; their claws and fangs can be quite nasty, and there’s always the risk of infection when dealing with wounds from wildlife.

Today, these furry mammals are listed as Least Concern on the IUCN Red List of Threatened Species; they are making quite a successful comeback following their historic lows. Still, habitat destruction and human encroachment remain serious issues for the species. Their ever-more-frequent sightings in cities and urban landscapes across North America are a warning sign of an issue wildlife everywhere faces: humans are taking up more space than ever, so they are coming to visit our cities as well. Depending on what we do in the future, they may be forced to set up shop here for good.

Fossil Friday: new armless dinosaur species unearthed in Argentina

Researchers in Argentina have discovered a new — and pretty armless — species of dinosaur.

Carnotaurus sastrei, an abelisaurid relative of the new species, and probable look-alike dinosaur. Image credits Fred Wierum / Wikimedia.

Christened Guemesia ochoai, the animal was an abelisaurid, part of a clade of dinosaurs that roamed today’s Africa, South America, and India, and it lived around 70 million years ago. Based on its age, researchers believe that this species was a close relative of the ancestors of all abelisaurids.

The animal’s partially-complete fossil skull was unearthed in Argentina and points to a unique ecosystem that developed in the area during the Late Cretaceous. The discovery is quite exciting as the area where it was found has yielded very few abelisaurid fossils, so it fills in an important piece of its historical puzzle.

Armless in Argentina

“This new dinosaur is quite unusual for its kind. It has several key characteristics that suggest that it is a new species, providing important new information about an area of the world which we don’t know a lot about,” says Professor Anjali Goswami, co-author of the study describing the species and a Research Leader at the Natural History Museum of London.

“It shows that the dinosaurs that lived in this region were quite different from those in other parts of Argentina, supporting the idea of distinct provinces in the Cretaceous of South America. It also shows us that there is a lot more to be discovered in these areas that get less attention than some of the more famous fossil sites.”

By the time this species emerged, the ancient supercontinent of Pangaea had already broken apart into Gondwana and Laurasia. Gondwana would, in turn, split into today’s Southern Hemisphere continents and India.

Despite these landmasses slowly drifting apart, species could still move between them, so researchers assume that their faunas remained quite similar as animals migrated from one to another. Abelisaurids were among these migrants.

Abelisaurids were top predators in their ecosystems, preying even on mighty titanosaurs. One of their most defining features was their front limbs; even shorter than those of T. rex, they were virtually useless. In other words, these dinosaurs hunted without being able to grasp, relying instead on their powerful jaws and necks to capture and subdue prey. They seem to have been quite successful at it, too: fossils of these dinosaurs have been found in rocks across Africa, South America, India, and Europe, dating all the way to the extinction of the dinosaurs 66 million years ago.

Although Argentina is well known for abelisaurid fossils (35 species have been discovered there so far), the overwhelming majority of these were found in Patagonia, in the country’s south. The north-western stretches of the country have yielded precious few. The newly discovered skull joins this exclusive list.

The fossil, consisting of the braincase with the upper and back parts of the skull, was unearthed in the Los Blanquitos Formation near Amblayo, in the north of Argentina. The rocks it was encased in have been dated to between 75 and 65 million years ago. In other words, this specimen lived very close to the end-Cretaceous mass extinction, the event that wiped out the dinosaurs.

Like those of other abelisaurids, the skull contains a “remarkably small” braincase, according to its discoverers; the cranium is around 70% smaller than that of any of its relatives. This could suggest that the animal was a juvenile, but this is as yet unconfirmed. One distinguishing feature of the dinosaur is a series of small holes at the front of its skull, arranged in rows, known as foramina. Researchers believe these holes helped the animal cool down by allowing blood pumped through them (covered only by the thin skin at the front of the head) to release its heat.

In contrast to other species of abelisaurids, the skull completely lacks any horns. This suggests that the species is among the first to emerge in the abelisaurid clade before these dinosaurs evolved horns.

Given that there is enough evidence to distinguish it as a new species, the team christened it after General Martin Miguel de Güemes, a hero of the Argentine War of Independence, and Javier Ochoa, a museum technician who discovered the specimen.

“Understanding huge global events like a mass extinction requires global datasets, but there are lots of parts of the world that have not been studied in detail, and tons of fossils remaining to be discovered,” Professor Goswami says.

“We left some exciting fossils in the ground on our last trip, not knowing that it would be years before we could get back to our field sites. Now we are hoping that it won’t be too much longer before we can finish digging them up and discovering many more species from this unique fauna.”

The paper “First definitive abelisaurid theropod from the Late Cretaceous of Northwestern Argentina” has been published in the Journal of Vertebrate Paleontology.

What is the Oedipus complex?

Sigmund Freud. Credit: Public Domain.

The Oedipus complex is a concept introduced by Sigmund Freud, part of his theory of psychosexual stages of development, that describes a desire for sexual involvement with the opposite-sex parent and a sense of jealousy and rivalry with the same-sex parent. This development stage of major conflict supposedly takes place in boys between 3 and 5 years old.

The term is named after the main character of Sophocles’ Oedipus Rex. In this ancient Greek tragedy, Oedipus is abandoned by his parents as a baby. Later, in adulthood, he becomes the king of Thebes and unknowingly murders his father and marries his mother. The female analog of the psychosexual term is the Electra complex, named after another tragic mythological figure who helped kill her mother. Oedipal is the generic term for both Oedipus and Electra complexes.

Often, these theories are interpreted as the propensity of men to pick women who look like their mothers, while women pick men who resemble their fathers.

Both the Oedipus and Electra complexes have proved controversial since they were first introduced to the public in the early 20th century. Critics of Freud note that there is very little empirical evidence supporting the theory’s validity. Even so, the Oedipus complex is still regarded as a cornerstone of psychoanalysis to this day.

Oedipus: Freud’s shibboleth

According to Freud, personality development in childhood takes place during five psychosexual stages: oral, anal, phallic, latency, and genital stages. In each stage, sexual energy is expressed in different ways and through different parts of the body. Each of these psychosexual stages is associated with a particular conflict that must be resolved in order to successfully and healthily advance to the next stage. The manner in which each conflict is resolved can determine a person’s personality and relationships in adulthood.

The Oedipal complex, introduced by Freud in 1899 in his work The Interpretation of Dreams, occurs during the phallic stage of development (ages 3-6), a period when a child becomes aware of anatomical sex differences, setting in motion a conflict of erotic attraction, rivalry, jealousy, and resentment. The young boy unconsciously feels sexually attracted to his mother. Envy and jealousy are aimed at the father, the object of the mother’s affection and attention.

Freud believed that a little boy is condemned to follow his drives and wishes, just as Sophocles’ Oedipus was condemned to do, unless he abandons his Oedipal wishes.

The hostile feelings towards the father cause castration anxiety, the irrational fear of both literal and figurative emasculation as punishment for desiring his mother. To cope with this anxiety, the boy starts identifying with the father, adopting the attitudes, characteristics, and values his father holds. In other words, the father transitions from rival to role model.

It is through this identification with the aggressor that the boy resolves the phallic stage of psychosexual development and acquires his “superego”, a set of morals and values that dominate the conscious adult mind. In the process, the child finally relinquishes sexual feelings towards the mother, transferring them to other female figures. The implication is that overcoming the Oedipus complex, and the reactions that follow, represent the most important social achievement of the human mind, Freud says.

“It has justly been said that the Oedipus complex is the nucleus of the neuroses, and constitutes the essential part of their content. It represents the peak of infantile sexuality, which, through its after-effects, exercises a decisive influence on the sexuality of adults. Every new arrival on this planet is faced with the task of mastering the Oedipus complex; anyone who fails to do so falls a victim to neurosis. With the progress of psycho-analytic studies the importance of the Oedipus complex has become more and more clearly evident; its recognition has become the shibboleth that distinguishes the adherents of psychoanalysis from its opponents.”

Sigmund Freud,
Footnote added to the 1914 edition of Three Essays on Sexuality (1905)

The Electra complex: the female Oedipal drive

Freud’s analogous psychosexual development for little girls involves the Electra complex, which begins the moment the girl realizes she lacks a penis. The mother is blamed for this and becomes an object of resentment for triggering penis envy. At the same time, the girl develops feelings of sexual desire towards her father. The fact that the mother receives affection from the father, while she doesn’t, causes the girl to become jealous of her mother, now seen as a rival.

Like little boys who have to overcome their Oedipus complex, little girls resolve this conflict by renouncing incestuous and rivalrous feelings, identifying with the mother, thereby developing the superego.

However, Freud never formed as complete a conflict-resolution theory for the Electra complex as he did for the Oedipus complex. In boys, the resolution of the Oedipal drive is motivated by fear of castration, but Freud never found an equally potent incentive in little girls, though he reasoned a girl may be motivated by fear of losing her parents’ love.

Interestingly, the Electra complex, while often attributed to Freud, was actually proposed by his protégé, Carl Jung.

Failing the Oedipal complex

Freud reasoned that if the conflict arising from the Oedipal complex isn’t successfully resolved, this can cause “neuroses”, which he defined as being manifestations of anxiety-producing unconscious material that is too difficult to think about consciously but must still find a means of expression. In other words, failing to resolve this central conflict before moving on to the next stage will result in experiencing difficulties in areas of love and competition later in adulthood.

Boys may become overly competitive with other men, projecting their latent rivalry with their fathers, and may become mother-fixated, seeking out significant others who resemble their mothers in more than one way. Meanwhile, girls who don’t overcome their penis envy may develop a masculinity complex as adults, making it challenging for them to become intimate with men; instead, they may try to rival men by becoming excessively aggressive. The men they interact with intimately often resemble their fathers. Moreover, since girls’ identification with their mothers is weaker than boys’ with their fathers (who are driven by castration anxiety), the female superego is weaker and, consequently, their identity as separate, independent individuals is less well developed. Psychoanalysis is supposed to resolve these conflicts.

Modern criticism of the Oedipal complex

Freud exemplified his theory of the Oedipal complex using a single case study, that of the famous “Little Hans”, a five-year-old boy with a phobia of horses. At about age three, little Hans showed an interest in both his own penis and those of other males, including animals. His alarmed mother threatened to cut off his penis unless he stopped playing with it. Around this time, he developed an intense fear of horses. Freud reasoned that the little boy responded to his mother’s threat of castration by fearing horses and their large penises. The phobia subsided when Hans interacted with horses wearing a black harness over their noses and black fur around the mouth, which Freud interpreted as a symbol of the father’s mustache. In Freud’s interpretation, Hans’s fear of horses unconsciously represented his fear of his father. Hans’s Oedipus complex was only resolved when he started fantasizing about himself with a big penis and married to his mother, allowing him to overcome his castration anxiety and identify with his father.

Although the case study of Little Hans perfectly (and very conveniently) exemplifies Freud’s theory of the Oedipus complex, it is a single case, not nearly enough to generalize to the wider population. The problems don’t stop there. Freud only met Hans once, and his information came from Hans’s father, an open admirer of Freud’s work who could have asked leading questions, perhaps even planting the fantasy of marriage to the mother. Even if Hans (whose real name was Herbert Graf) truly suffered from an Oedipus complex, that doesn’t mean the complex is universal, as Freud claimed.

For instance, in 1929, Polish-British scientist Bronisław Kasper Malinowski, widely regarded as the father of modern anthropology, conducted a now-famous ethnographic study in the Trobriand Islands in Oceania, where fathers aren’t involved in disciplining their sons at all. In that society, the relationship between father and son was reportedly consistently good. The disciplinarian in Trobriand communities is the maternal uncle, which undermines the claimed universality of the Oedipus complex.

Malinowski with natives on the Trobriand Islands, circa 1918. Credit: Wikimedia Commons.

Psychoanalytic writer Clara Thompson criticized Freud’s attitude towards women, which she believed was culturally biased. Freud’s idea that penis envy is biologically based can be better explained, with less woo-woo, by the general envy girls feel towards boys, who often enjoy more freedom in childhood and more opportunities in adulthood. You may call it penis envy, as long as you use the term as a metaphor for wanting equal rights rather than what dangles between your legs.

All of that is to say that Freud’s Oedipal complex is riddled with holes and, at best, may apply to a small fraction of the general population. However, this doesn’t necessarily demean Freud’s brilliance. Both psychoanalysts and modern psychologists now agree that early experiences, even those when we were so young that we can’t remember them, have a profound influence on our adult selves — that’s just one of Freud’s legacies in developmental theory. 

Cultured meat is coming. But will people eat it?

Cultured chicken salad. Image credits: UPSIDE.

The prospect of cultured meat is enticing for several reasons. For starters, it’s more ethical — you don’t need to kill billions of animals every year. It could also be better for the environment, producing lower emissions and requiring less land and water than “traditional” meat production, and would also reduce the risk of new outbreaks (potentially pandemics) emerging. To top it all off, you can also customize cultured meat with relative ease, creating products that perfectly fit consumers’ tastes.

But there are also big challenges. In addition to the technological challenges, there is the need to ensure meat culturing is not only feasible and scalable but also cheap. There’s also a more pragmatic problem: taste. There’s a lot to be said about why people enjoy eating meat, but much of it boils down to how good it tastes. Meanwhile, cultured meat has an undeniable “artificial” feel to it (at least for now). Despite being made from the exact same cells as “regular” meat, it seems unnatural and unfamiliar, so there are fears that consumers may reject it as unappealing.

Before you even try it

A recent study underlines just how big this taste challenge is — and how perception (in addition to the taste per se) could dissuade people from consuming cultured meat. According to the research, which gathered data from 1,587 volunteers, 35% of non-vegetarians and 55% of vegetarians find cultured meat too disgusting to eat.

“As a novel food that humans have never encountered before, cultured meat may evoke hesitation for seeming so unnatural and unfamiliar—and potentially so disgusting,” the researchers write in the study.

For vegetarians, the aversion towards cultured meat makes a lot of sense. For starters, even though it’s not meat from a slaughtered animal, it’s still meat, and therefore has a potential to elicit disgust.

“Animal-derived products may be common triggers of disgust because they traditionally carry higher risks of disease-causing microorganisms. Reminders of a food’s animal origin may evoke disgust particularly strongly among vegetarians,” the study continues.

For non-vegetarians, it’s quite the opposite: cultured meat can elicit disgust because it’s not natural enough. Many studies highlight that meat-eaters express resistance to trying cultured meat because of its perceived unnaturalness. So to make cultured meat more appealing to consumers, you’d have to approach vegetarians and non-vegetarians differently. For instance, perceiving cultured meat as resembling animal flesh predicted less disgust among meat-eaters but more disgust among vegetarians. But there were also similarities between the two groups. Perceiving cultured meat as unnatural was strongly associated with disgust toward it among both vegetarians and meat-eaters. Combating beliefs about unnaturalness could go a long way towards convincing people to at least give cultured meat a shot.

A cultured rib-eye steak. Image credits: Aleph Farms / Technion — Israel Institute of Technology.

Even before people eat a single bite of cultured meat, their opinion may already be shaped. If we want to get people to consume this type of product, tackling predetermined disgust is a big first step. Different cultures could also have widely different preferences in this regard.

“Cultured meat offers promising environmental benefits over conventional meat, yet these potential benefits will go unrealized if consumers are too disgusted by cultured meat to eat it.”

Okay, but is cultured meat actually good?

Full disclosure: no one at ZME Science has tried cultured meat yet (but we’re working on it). Even if we had, our experience wouldn’t be necessarily representative of the greater public. Herein lies one problem: compared to how big the potential market is, only a handful of people have actually tasted this type of meat. We don’t yet have large-scale surveys or focus groups (or if companies have this type of data, they haven’t publicly released it from what we could find).

The expert reviews seem to be somewhat favorable. In a recent blind test, Israeli MasterChef judge Michal Ansky was unable to differentiate between “real” chicken and its cultured alternative. Ansky tasted the cultured chicken that had already been approved for consumption in Singapore (the first place where cultured meat was approved).

The remarkable progress that cultured meat has made with regard to its taste was also highlighted by a recent study from the Netherlands, in which blind-tested participants preferred the taste of cultured meat.

“All participants tasted the ‘cultured’ hamburger and evaluated its taste to be better than the conventional one in spite of the absence of an objective difference,” the researchers write.

The study authors also seemed confident that cultured meat could become mainstream given its appealing taste and environmental advantages.

“This study confirms that cultured meat is acceptable to consumers if sufficient information is provided and the benefits are clear. This has also led to increased acceptance in recent years. The study also shows that consumers will eat cultured meat if they are served it,” said Professor Mark Post from Maastricht University, one of the study authors.

Researchers are also close to culturing expensive, gourmet types of meat, including the famous Wagyu beef, which normally sells for around $400 per kilogram. Researchers can already culture bits of this meat at roughly a quarter of that price, and the price is expected to keep falling. This would be a good place for cultured meat to start, making expensive types of meat more available to the masses.

Still, there are some differences between most types of cultured meat and meat coming from animals. For instance, one study that used an “electronic tongue” to analyze the chemical make-up of the meat found “significant” differences.

“There were significant differences in the taste characteristics assessed by an electronic tongue system, and the umami, bitterness, and sourness values of cultured muscle tissue were significantly lower than those of both chicken and cattle traditional meat,” the study reads. But the same study also suggests that understanding these differences could make cultured meat even more realistic and palatable.

The technology is also progressing quickly, and every year cultured meat takes strides toward becoming more affordable and tasty. Multiple companies are awaiting approval to embark on mass production, using somewhat different technologies and products. Multiple types of meat are on the horizon, from chicken and beef to pork and even seafood, and for many of them, the taste data is only just coming in.

All in all, cultured meat promises to be one of the biggest food revolutions in the past decades. Whether it will actually deliver on this promise is a different problem that will hinge on several variables, including price, taste, and of course, environmental impact. If companies can deliver a product that truly tastes like traditional meat, they have a good chance. There’s still a long road before the technology becomes mainstream, but given how quickly things have progressed thus far, we may see cultured meat on the shelves sooner than we expect.

Would you quit social media for $2,700? Consider joining this experiment

Social media has become a near-ubiquitous part of our lives, to the point where many people struggle without it. In fact, social media is affecting our mental health and productivity — but most of us would struggle to give it up, even temporarily. To study why this happens, one app wants to pay someone £2,000 ($2,700) to quit social media for just two months.

Stop Doomscrolling

Like many things that technology has brought us, social media has both benefits and downsides. For many people, such networks offer a chance to connect with friends and freely express their thoughts and hobbies. But as we spend more and more time on social media, we also get more disinformation, polarization, and doomscrolling — the act of spending an excessive amount of screen time scrolling through mostly bad news.

So would we be better off just quitting social media? The Uptime app wants to explore exactly that.

Uptime is a free app that claims to offer “Knowledge Hacks” from the “world’s best books, courses and documentaries.” The app is looking for an applicant to quit social media for two months. You don’t need any particular skills or qualifications, just to be a “social media lover” with profiles on at least four social media networks. The aim is to see whether quitting social media will have a positive effect on the applicant’s wellbeing and productivity.

“The successful applicant will be paid £2,000 to stop using all social media for the eight-week period.  We will also find out how they use their newfound downtime, as well as ask them to record their happiness levels, behaviour and productivity whilst not spending their free time on the platforms they use like Facebook, Instagram, TikTok, Twitter, Snapchat and YouTube,” the Uptime blog post reads.

“We will ask the successful ‘social media quitter’ to answer a frequent questionnaire and will be asked to keep both a written and video journal to record their experience. We want to discover as much information as we can about how much time a person could spend improving themselves and their knowledge – alongside their wellbeing and productivity – if they were to decrease their time spent on social media or ‘doomscrolling’.”

It should be said that this isn’t a proper, large-scale study. We won’t know whether social media truly is bad for your mental health and productivity after this. But it could be an interesting experiment and a way to make some money while trying to improve your wellbeing.

If that sounds like something you’d be interested in, then you can apply here. Applications close on February 21.

Annie Jump Cannon: the legend behind stellar classification

It is striking that today, we can not only discover but even classify stars that are light-years from Earth — sometimes, even billions of light-years away. Stellar classification often uses the famous Hertzsprung–Russell diagram, which summarises the basics of stellar evolution. The luminosity and the temperature of stars can teach us a lot about their life journey, as they burn their fuel and change chemical composition.

We know that some stars’ spectra are dominated by lines of ionised or neutral helium, that some stars are hotter than others, and that the Sun fits in as a rather unimpressive star compared to the giants. Part of that development came from Annie Jump Cannon’s contributions during her long career as an astronomer.

The Hertzsprung–Russell diagram, where the evolution of Sun-like stars is traced. Credits: ESO.

On the shoulders of giantesses

Cannon was born in 1863 in Dover, Delaware, US. When she was 17 years old, thanks to her father’s support, she managed to travel 369 miles from her hometown to attend classes at Wellesley College. It’s no big deal for teens today, but back then, this was an unimaginable adventure for a young lady. The institution offered education exclusively for women, an ideal environment to spark in Cannon an ambition to become a scientist. She graduated in 1884 and started her career at the Harvard Observatory in 1896.

At Wellesley, her astronomy professor was Sarah Whiting, who sparked Cannon’s interest in spectroscopy:

“… of all branches of physics and astronomy, she was most keen on the spectroscopic development. Even at her Observatory receptions, she always had the spectra of various elements on exhibition. So great was her interest in the subject that she infused into the mind of her pupil who is writing these lines, a desire to continue the investigation of spectra.”

Annie Cannon, in Sarah Whiting’s obituary, 1927.

Cannon had an explorer’s spirit and travelled across Europe, publishing a photography book in 1893 called “In the footsteps of Columbus”. It is believed that during her years at Wellesley, after the trip, she contracted scarlet fever. The disease affected her ears and she suffered severe hearing loss, but that didn’t put an end to her social or scientific activities. Annie Jump Cannon was known for attending every American Astronomical Society meeting during her career.


At Radcliffe College, she began working more with spectroscopy. Her first work on the spectra of southern stars was published in 1901 in the Annals of the Harvard College Observatory. The director of the observatory, Edward C. Pickering, put Cannon in charge of observing the stars that would later make up the Henry Draper Catalogue, named after the first person to photograph the spectrum of a star.

Annie Jump Cannon at her desk at the Harvard College Observatory. Image via Wiki Commons.

The job didn’t pay much. In fact, Harvard employed a number of women as “computers” to process astronomical data. The women computers at Harvard earned less than secretaries did, which allowed the observatory to hire more of them, as men would have had to be paid more.

Her salary was only 25 cents an hour, a small income for the painstaking job of examining tiny details in the spectrograms, often only possible with magnifying glasses. She was known for being focused (possibly also influenced by her deafness), but also for working fast.

During her career, she managed to classify the spectra of 225,000 stars. At the time, Williamina Fleming, a Scottish astronomer, was in charge of the women computers at Harvard. Fleming had previously observed 10,000 stars from the Draper Catalogue and classified them with letters A to N. But Annie Jump Cannon saw the link between the classes and the stars’ temperatures, and rearranged Fleming’s scheme into the OBAFGKM system. The OBAFGKM system orders stars from the hottest to the coldest, and astronomers created a popular mnemonic for it: “Oh Be A Fine Guy/Girl Kiss Me”.
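Cannon's temperature-ordered scheme is easy to see in rough numbers. The sketch below is illustrative only; the temperature boundaries are approximate figures chosen for this example (the exact cutoffs vary between sources), not values from Cannon's catalogue:

```python
# Approximate surface-temperature ranges (kelvin) for the OBAFGKM
# spectral classes, hottest to coldest. Boundaries are illustrative;
# the exact cutoffs vary from source to source.
SPECTRAL_CLASSES = {
    "O": (30_000, 60_000),
    "B": (10_000, 30_000),
    "A": (7_500, 10_000),
    "F": (6_000, 7_500),
    "G": (5_200, 6_000),   # the Sun (~5,800 K) falls here
    "K": (3_700, 5_200),
    "M": (2_400, 3_700),
}

def classify(temperature_k):
    """Return the spectral class whose temperature range contains the value."""
    for letter, (low, high) in SPECTRAL_CLASSES.items():
        if low <= temperature_k < high:
            return letter
    return None  # outside the tabulated ranges

print(classify(5_800))  # the Sun -> "G"
```

The point of the ordering, and the reason the mnemonic works, is that a single physical quantity, temperature, sorts the lettered classes that Fleming had originally assigned alphabetically.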


“A bibliography of Miss Cannon’s scientific work would be exceedingly long, but it would be far easier to compile one than to presume to say how great has been the influence of her researches in astronomy. For there is scarcely a living astronomer who can remember the time when Miss Cannon was not an authoritative figure. It is nearly impossible for us to imagine the astronomical world without her. Of late years she has been not only a vital, living person; she has been an institution. Already in our school days she was a legend. The scientific world has lost something besides a great scientist.”

Cecilia Payne-Gaposchkin in Annie Jump Cannon’s obituary.
Annie Jump Cannon at Harvard University. Smithsonian Institution @ Flickr Commons.

Annie Jump Cannon was awarded many prizes: she received an honorary doctorate from Oxford University, was the first woman to receive the Henry Draper Medal in 1931, and was the first woman to become an officer of the American Astronomical Society.

Her work in stellar classification was carried on by Cecilia Payne-Gaposchkin, another dame of stellar spectroscopy. Payne improved the system with quantum mechanics and described what stars are made of.

Very few scientists have had as competent and exemplary a career as Cannon. Payne continued the work Cannon left behind; Payne’s advisor, Henry Norris Russell, then built on it with minimal citation. From that work we get today’s basic understanding of stellar classification. Cannon’s beautiful legacy has recently been rescued by other female astronomers who recognize the importance of her life’s work.

Left, right, or ambidextrous: What determines handedness?

Credit: YouTube capture.

Although on the outside our bodies look symmetrical, our body movements are anything but. If you’re like most people, you write, use a phone, eat, and perform just about any task that requires tactile dexterity with your right hand. A small fraction, around 10% of the population, is left-handed. Rarer still are those who can use either hand with equal ease for various, though not necessarily all, tasks. These people are known as ambidextrous, with fewer than 1% of the population capable of this feat.

It isn’t well understood why some people are ambidextrous, but the limited research conducted thus far suggests it all starts in the brain. Ambidexterity isn’t as great as it sounds either, as studies have associated ambivalent handedness with poorer cognitive and mental health outcomes.

What determines hand preference?

The brain is divided into the left and right hemispheres by a deep groove called the longitudinal fissure; the two halves are connected by a thick bundle of nerve fibers called the corpus callosum. You probably know about these hemispheres, and you may have also heard that the left hemisphere handles language, learning, and other analytical processes while the right hemisphere processes images and emotions, among other things. This has inevitably led to the erroneous notion that people who are “more logical” are left-brained while those who are “more creative” are right-brained.

Despite this enduring belief, there’s no such thing as being “right-brained” or “left-brained.” We’re actually “whole-brained” since we use both hemispheres when speaking, solving math, or playing an instrument. But that’s not to say that the brain’s two regions aren’t specialized — and the actual science of how the two halves of the brain work together may be stranger than fiction.

Credit: ResearchGate.

Without going into lengthy details about how the brain performs its division of labor across all areas, we can simply observe our motor functions to see brain lateralization in action. In all vertebrates, the right hemisphere controls the left side of the body via the spinal cord and vice versa. The jury’s still out on why that is, but some scientists believe that this basic organizational feature of the vertebrate nervous system evolved even before the appearance of vertebrates.

Over 90% of humans are naturally right-handed, a proclivity that may start as early as the womb. This suggests that handedness — the tendency to be more skilled and comfortable using one hand instead of the other for tasks such as writing and throwing a ball — is genetic in nature. However, like most aspects of human behavior, it’s likely a complex trait that is influenced by numerous other factors, including the environment and chance.

Until not too long ago, it was thought that a single gene determined handedness, but more recently scientists have identified up to 40 that may contribute to this trait. Each gene has a weak effect in isolation, but together their sum is greater than their parts, playing an important role in establishing hand preference.

These genes are associated with some of these brain asymmetries, especially of language-related regions. This suggests links between handedness and language during human development and evolution. For instance, one implicated gene is NME7, which is known to affect the placement of the visceral organs (heart, liver, etc.) on the left to right body axis—a possible connection between brain and body asymmetries in embryonic development.

However, handedness is not a simple matter of inheritance — not in the way eye color or skin tone is, at least. While children born to left-handed parents are more likely to be left-handed than children of right-handed parents, the overall chance of being left-handed is relatively low in the first place. Consequently, most children of left-handed parents are right-handed. Even among identical twins, many pairs have opposite hand preferences.

According to a 2009 study, genetics contribute around 25% toward handedness, the rest being accounted for by environmental factors such as upbringing and cultural influences.

In the majority of right-handed people, language dominance is on the left side of the brain. However, that doesn’t mean that the sides are completely switched in left-handed individuals — only a quarter of them show language dominance on the right side of the brain. In other words, hand preference is just one type of lateralized brain function and need not represent a whole collection of other functions.

Since writing activates language and speech centers in the brain, it makes sense that most people use their right hand. However, most individuals do not show as strong a hand preference on other tasks, using the left hand for some and the right hand for others, with the notable exception of tasks involving tools. For instance, even people who have a strong preference for using their right hand tend to be better at catching a moving ball with their left hand; that’s consistent with the right hemisphere’s specialization for processing spatial tasks and controlling rapid responses.

Ambidexterity may hijack brain asymmetry — and that may actually be a bug, not a feature

This brings us to mixed-handedness, in which people prefer different hands for different tasks. A step above are ambidextrous people, who are thought to be exceptionally rare and can perform tasks equally well with either hand.

But if the picture of what makes people left or right handed is murky, ambidexterity is even more nebulous. We simply don’t know why a very small minority of people, fewer than 1%, is truly ambidextrous. And from the little we know, it doesn’t sound like such a good deal either.

Studies have linked ambidexterity with poor academic performance and mental health. Ambidextrous people perform more poorly than both left- and right-handers on various cognitive tasks, particularly those that involve arithmetic, memory retrieval, and logical reasoning. Being ambidextrous is also associated with language difficulties and ADHD-like symptoms, as well as greater age-related decline in brain volume. The findings suggest that the brain is more likely to encounter faulty neuronal connections when the information it’s processing has to shuttle back and forth between hemispheres.

Again, no one is sure why this is the case. Nor are any of these studies particularly robust: ambidextrous people comprise such a small fraction of the general population that any study involving them will naturally have a small sample size, which invites caution when interpreting results. All scientists can say for now is that naturally ambidextrous people have atypical brain lateralization, meaning their brain circuitry and function likely differ from the normal patterns seen in right-handed and left-handed people.

Of course, it’s not all bad news for the handedness-ambivalent. Being able to use both hands with (almost) equal ease certainly has its perks, which can really pay off, especially in sports, arts, and music.

Can you train yourself to be ambidextrous?

Left-handers have long been stigmatized, often being punished in school and forced to use their non-dominant right hand. However, starting in the late 19th century, people not only became more tolerant of left-handedness; some went so far as to praise the merits of ambidexterity and worked to actively promote it by teaching others how to use both hands well.

For instance, in 1903, John Jackson, a headteacher of a grammar school in Belfast, founded the Ambidextral Culture Society. Jackson believed that the brain’s hemispheres are distinct and independent. Being either right or left hand dominant effectively meant that half of your brainpower potential was being wasted. To harness this potential, Jackson devised ambidexterity training that, he claimed, would eventually allow each hand “to be absolutely independent of the other in the production of any kind of work whatever… if required, one hand shall be writing an original letter, and the other shall be playing the piano, with no diminution of the power of concentration.”

Although these claims have been proven to be bogus, to this day you can find shady online programs that claim to teach you to become ambidextrous. Training involves all sorts of routines, such as using your non-dominant hand for writing, brushing your teeth, and other daily activities that require the fine manipulation of a tool. Doing so supposedly strengthens neural connections in the brain and activates both hemispheres, which may help you think more creatively — or so they claim. But that’s never been shown by any study I could find. On the contrary, if anything, ambidextrous training may actually hamper cognition and mental health, judging from studies on naturally ambidextrous people.

“These effects are slight, but the risks of training to become ambidextrous may cause similar difficulties. The two hemispheres of the brain are not interchangeable. The left hemisphere, for example, is typically responsible for language processing, whereas the right hemisphere often handles nonverbal activities. These asymmetries probably evolved to allow the two sides of the brain to specialize. To attempt to undo or tamper with this efficient setup may invite psychological problems,” Michael Corballis, professor of cognitive neuroscience and psychology at the University of Auckland in New Zealand, wrote in an article for Scientific American.

“It is possible to train your nondominant hand to become more proficient. A concert pianist demonstrates superb skill with both hands, but this mastery is complementary rather than competitive. The visual arts may enhance right-brain function, though not at the expense of verbal specialization in the left hemisphere. A cooperative brain seems to work better than one in which the two sides compete.”

Handedness is a surprisingly complex trait that isn’t easily explained by inheritance. Being left- or right-handed doesn’t necessarily make you smarter or better than anyone else. Brain lateralization exists for a reason, and that should be celebrated. 

How to Make Good Ideas Great and Great Ideas Scale: ‘The Voltage Effect’

What’s the one thing that high-growth companies like Amazon, Microsoft, and Apple have in common? All of these companies started out with just their founders toiling at their idea in their humble garages, only to grow their market cap past the trillion-dollar range in only a few decades. While each of these unicorns’ business trajectories is unique, their common secret sauce is building products and services that scale.

According to data from the Bureau of Labor Statistics, almost half of all businesses fail during their first year. And there are a lot of reasons why a company can go under, including poor management, insufficient capital, or too small a market. One often-overlooked reason for failure, though, is overexpansion. That’s because scaling is hard. Really hard.

And it’s not just companies that can get into big trouble. As many governments, research institutes, and charities are painfully aware, a policy, study, or campaign that performs brilliantly in a particular market or demographic can fail miserably when the same success is attempted at scale. The COVID pandemic is a living testament to this: witness the wildly successful vaccine rollout that delivered over 10 billion shots across the world, lightning-fast by industry standards, alongside the disappointingly botched contact tracing programs of many countries.

That’s because the road from local to worldwide is paved with many pitfalls. Unless you mind your step, you might get sorely bruised. But what are these pitfalls?

The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale
John A. List
Currency, 288 pages | Buy on Amazon

In his latest book, The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale, John A. List, the Kenneth C. Griffin Distinguished Service Professor in Economics at the University of Chicago, not only gives a rundown of the leading but often underestimated factors that can make or break the scaling of an idea, but also outlines ways to supercharge it. This also explains the book’s title, the analogy being that ideas that scale — be they a new government policy meant to improve learning outcomes, a wildlife conservation program to help repopulate an endangered species, or a new restaurant chain — experience “voltage gain”, meaning expansion becomes easier and easier. Meanwhile, ideas that fail at scaling experience “voltage drops”, with operations becoming increasingly inefficient to the point of inevitable collapse.

Professor List should know a thing or two about scaling. He served on the White House Council of Economic Advisers in the early 2000s under the Bush Administration, where he designed policies meant to produce the greatest positive impact on the largest number of American citizens at a fair cost. He later served as chief economist at Uber and then at Lyft — two startups that have scaling almost down to an art form. That’s in addition to the over 200 studies List has published as a behavioral economist studying what drives people to make the decisions they do, from Florida to Costa Rica and from Asia to Africa.

Many think that scalable ideas have a “silver bullet” quality that makes them a sure shot, but as Professor List skillfully explains, this thinking is wrong. In the first part of the book, the author outlines and expands on the most important pitfalls that cause voltage drops as an idea is scaled, which he calls the Five Vital Signs. These are: false positives, misjudging the representativeness of an initial population or situation, spillovers, and prohibitive costs. Along the way, you’ll learn, for instance, how celebrity chef Jamie Oliver did all the right things to expand his restaurant chain to over a dozen countries and why it all came crashing down when he changed his scaling recipe.

The second part tackles the winning concepts that, when applied well, can drive voltage gain like a particle accelerator, including using the right incentives (any behavioral economist’s bread and butter), marginal thinking, scaling culture, and knowing when it’s time to quit on a losing idea. This is the “how to make good ideas great” part.

All of it is skillfully done thanks to List’s sense for compelling prose and storytelling. While reading this comprehensive book, I found myself turning page after page through a wealth of excellent research and case studies, many of which Professor List was personally involved in.

Careful, comprehensive, and fun, The Voltage Effect excels in turning a seemingly boring niche topic into a fascinating book that’s relevant to all, from CEOs and policymakers to naturally curious people with a taste for learning how economics shapes our lives in the real world. 

Deductive versus inductive reasoning: what’s the difference?

Sir Arthur Conan Doyle’s fictional Sherlock Holmes is supposedly the best detective in the world. What’s the secret behind his astonishing ability to gather clues from the crime scene that the police always seem to be missing? The answer is quite elementary, my dear reader.

While typical police detectives might use deductive reasoning to solve crimes, Sherlock is actually a master of inductive reasoning. But what’s the difference?

Credit: Pixabay.

What is deductive reasoning

Deductive reasoning involves drawing a conclusion based on premises that are generally assumed to be true. If all the premises are true, then it holds that the conclusion has to be true.

Deduction always starts with a general statement and ends with a narrower, specific conclusion, which is why it’s also called “top-down” logic.

The initial assumption presumes that if something is true, then it must be true in all cases. A second premise is made in relation to the first statement, and since the initial premise is supposed to be true, the second statement must be true as well. The association between two statements — a major and a minor statement — to form a logical conclusion is called a syllogism.

In math terms, you can think of it this way: A=B, B=C, therefore A=C.
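This chain can be sketched in code. The toy example below (with made-up categories, not anything from formal logic libraries) encodes each "all A are B" premise as a link and deduces a conclusion by following the chain; if the premises are true, the conclusion is guaranteed.

```python
# A toy sketch of a deductive chain (the category names are invented).
# Each entry encodes a premise of the form "all A are B"; a conclusion
# follows necessarily by chaining premises: A=B, B=C, therefore A=C.

premises = {
    "dog": "mammal",         # all dogs are mammals
    "mammal": "vertebrate",  # all mammals are vertebrates (have backbones)
}

def deduce(category: str, target: str) -> bool:
    """Return True if 'all <category> are <target>' follows from the premises."""
    current = category
    while current in premises:
        current = premises[current]
        if current == target:
            return True
    return False

print(deduce("dog", "vertebrate"))  # True: all dogs have backbones
```

Note that the conclusion is only as reliable as the premises fed into the chain, a point the Diogenes story below makes vivid.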

We use deduction often in our day-to-day lives, but this reasoning method is most widely used in research, where it forms the bedrock of the scientific method that tests the validity of a hypothesis.

Here are some examples:

Premise A: All people are mortal.

Premise B: Socrates is a person.

Conclusion: Therefore, Socrates is mortal.

Premise A: All mammals have a backbone.

Premise B: Dogs are mammals.

Conclusion: Dogs have backbones.

Premise A: Multiplication is done before addition.

Premise B: Addition is done before subtraction.

Conclusion: Multiplication is done before subtraction.

Premise A: Oppositely charged particles attract one another.

Premise B: These two molecules repel each other.

Conclusion: The two molecules are either both positively or both negatively charged.

What is inductive reasoning

Inductive reasoning is the opposite of deductive reasoning, in the sense that we start with specific arguments to form a general conclusion, rather than making specific conclusions starting from general arguments.

For this reason, inductive reasoning is often used to formulate a hypothesis from limited data rather than supporting an existing hypothesis. Also, the accuracy of a conclusion inferred through induction is typically lower than through deduction, even if the starting statements themselves are true.

For instance, take these examples of inductive logic:

  • The first marble from the bag is black, so is the second, and so is the third. Therefore, all the marbles in the bag must be black.
  • Every cat I meet has fur. All cats then must have fur.
  • Whenever I get a cold, people around me get sick. Therefore, colds are infectious.
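The marble example can be sketched as a quick illustration (the bag's contents are invented) of how an inductive leap from a limited sample can overgeneralize:

```python
# A sketch of inductive generalization gone wrong (all values invented).
bag = ["black"] * 99 + ["white"]  # the whole population, unknown to the observer
sample = bag[:10]                 # the marbles we happen to draw first

# Inductive leap: every marble seen so far is black,
# so we conclude that all marbles in the bag are black.
inductive_claim = all(m == "black" for m in sample)

# But the generalization is not guaranteed by the sample:
actually_true = all(m == "black" for m in bag)
print(inductive_claim, actually_true)  # True False: the sample misled us
```

The bigger and more representative the sample, the stronger the inductive conclusion, but it never reaches the certainty of a valid deduction.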

Deductive versus inductive reasoning: which one is better?

Deductive inference goes from the general to the specific, while inductive inference goes from the specific to the general. A deductive conclusion cannot be false if its premises are true, whereas an inductive conclusion can still be false, because you can never account for the instances you haven’t observed. In deduction, the conclusion either follows or it doesn’t; there is no in-between, whereas inductive arguments come in degrees of strength and weakness.

In science, neither deduction nor induction is necessarily superior to the other. Instead, there’s a constant interplay between the two, depending on whether we’re making predictions based on observations or on theory.

Sometimes, it makes sense to start with a theory to form a new hypothesis, then use observation to confirm it. Other times, we can form a hypothesis from observations that seem to form a pattern, which can turn into a theory.

Both methods allow us to get closer and closer to the truth, depending on how much or how little information we have at hand. However, we can never prove something with absolute certainty, which is why science is a tool of approximation — the best there is, but still an approximation.

That being said, each method is far from perfect and has its drawbacks. A deductive argument might be based on non-factual information (the premise is wrong), while an inductive statement might lack sufficient data to form a reliable conclusion, for instance.

As an example of when deduction can go hilariously wrong, look no further than Diogenes and his naked chicken. Diogenes was an ancient Greek philosopher and a contemporary of the honorable Plato — and the two couldn’t be more different. Diogenes slept in a large jar in the marketplace and begged for a living. He was famous for his philosophical stunts, such as carrying a lit lamp in the daytime, claiming to be looking for an honest man.

When the opportunity presented itself, Diogenes would always try to embarrass Plato. He would, for instance, distract attendees during Plato’s lectures, bringing food and eating loudly while Plato spoke. But one day, he really outdid himself.

Plato would often quote and interpret the teachings of his old mentor, Socrates. On one occasion, Plato held a talk about Socrates’ definition of a man as a “featherless biped”. Diogenes cleverly plucked a chicken and with a wide grin on his face proclaimed “Behold! I’ve brought you a man.”

Painting of Diogenes and his chicken. Credit: shardcore.

The implication is that a deductive conclusion is only as good as its premise.

Meanwhile, inductive reasoning leads to a sound conclusion only when the available data is robust. For instance: penguins are birds; penguins can’t fly; therefore, no birds can fly. That is obviously wrong if you know any birds besides penguins or weird plucked chickens.

Abductive reasoning: the educated guess

There’s another widely used form of reasoning — in fact, it is the one we use most often in our day-to-day lives. Abductive reasoning combines aspects of deductive and inductive reasoning to determine the likeliest outcome from limited available information.

For instance, if you see a person sitting idly on her phone at a table with two glasses of wine in front of her, you can use abduction to conclude that her companion has stepped away and will likely return soon. Likewise, seeing a dog on a leash in front of a store makes us infer that the owner is likely shopping for a brief while and will soon return to join their pet.

In abductive reasoning, the major premise is evident, but the minor premise and therefore the conclusion are only probable. Abduction is also often called “Inference to the Best Explanation” for this very reason.

Abductive and inductive reasoning are very similar to each other, although the former is more at ease reasoning from probable premises that may or may not be true.
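The dog-outside-the-store scenario can be sketched as picking the likeliest hypothesis given an observation; all the hypotheses and probabilities below are invented purely for illustration.

```python
# A sketch of abduction as "inference to the best explanation".
# The candidate hypotheses and their probabilities are entirely made up.
observation = "a dog on a leash waiting in front of a store"

hypotheses = {
    "the owner is shopping and will return shortly": 0.85,
    "the dog has been abandoned": 0.05,
    "the dog tied itself to the post": 0.001,
}

# Abduction: adopt the hypothesis that best explains the observation,
# while remembering the conclusion is only probable, never certain.
best_explanation = max(hypotheses, key=hypotheses.get)
print(best_explanation)
```

Unlike deduction, a better explanation can always dethrone the current one as new evidence shifts the probabilities, which is exactly how Holmes revises his hypotheses.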

This excerpt from Conan Doyle’s The Adventure of the Dancing Men provides a great example of Sherlock’s inductive and abductive mind:

Holmes had been seated for some hours in silence with his long, thin back curved over a chemical vessel in which he was brewing a particularly malodorous product. His head was sunk upon his breast, and he looked from my point of view like a strange, lank bird, with dull gray plumage and a black top-knot.

“So, Watson,” said he, suddenly, “you do not propose to invest in South African securities?”

I gave a start of astonishment. Accustomed as I was to Holmes’s curious faculties, this sudden intrusion into my most intimate thoughts was utterly inexplicable.

“How on earth do you know that?” I asked.

He wheeled round upon his stool, with a steaming test-tube in his hand, and a gleam of amusement in his deep-set eyes.

“Now, Watson, confess yourself utterly taken aback,” said he.

“I am.”

“I ought to make you sign a paper to that effect.”

“Why?”

“Because in five minutes you will say that it is all so absurdly simple.”

“I am sure that I shall say nothing of the kind.”

“You see, my dear Watson”–he propped his test-tube in the rack, and began to lecture with the air of a professor addressing his class–“it is not really difficult to construct a series of inferences, each dependent upon its predecessor and each simple in itself. If, after doing so, one simply knocks out all the central inferences and presents one’s audience with the starting-point and the conclusion, one may produce a startling, though possibly a meretricious, effect. Now, it was not really difficult, by an inspection of the groove between your left forefinger and thumb, to feel sure that you did NOT propose to invest your small capital in the gold fields.”

“I see no connection.”

“Very likely not; but I can quickly show you a close connection. Here are the missing links of the very simple chain. 1. You had chalk between your left finger and thumb when you returned from the club last night. 2. You put chalk there when you play billiards, to steady the cue. 3. You never play billiards except with Thurston. 4. You told me, four weeks ago, that Thurston had an option on some South African property which would expire in a month, and which he desired you to share with him. 5. Your check book is locked in my drawer, and you have not asked for the key. 6. You do not propose to invest your money in this manner.”

“How absurdly simple!” I cried.

“Quite so!” said he, a little nettled.

In laying out his arguments that led to his conclusion, Holmes can be seen reasoning by elimination (“By the method of exclusion, I had arrived at this result, for no other hypothesis would meet the facts,” A Study in Scarlet) and reasoning backward, i.e. imagining several hypotheses for explaining the given facts and selecting the best one. But he does this always with consideration of probabilities of hypotheses and the probabilistic connections between hypotheses and data.

This makes Holmes a very good logician, which is the perfect skill to have as a criminal investigator, as well as a scientist.

All of these reasoning techniques are important tools in any critical thinking arsenal, with each having its own time and place. Whether starting from the general or the specific, you have everything you need to win your next argument in style.