Category Archives: Feature Post

Eunice Foote: the first person to measure the impact of carbon dioxide on climate

We often think of climate science as something that started only recently. The truth is that, like almost all fields of science, it started a long time ago. Advancing science is often a slow and tedious process, and climate science is no exception. From the discovery of carbon dioxide to the most sophisticated climate models, it took a long time to get where we are.

Unfortunately, many scientists who played an important role in this climate journey are not given the credit they deserve. Take, for instance, Eunice Newton Foote.

Eunice Foote. Credits: Wikimedia Commons.

Foote was born in 1819 in Connecticut, USA. She spent her childhood in New York and later attended classes at the Troy Female Seminary, a higher education institution just for women. She married Elisha Foote in 1841, and the couple was active in the suffragist and abolitionist movements. They participated in the “Women’s Rights Convention” and signed the “Declaration of Sentiments” in 1848.

Eunice was also an inventor and an “amateur” scientist, a brave endeavor in a time when women were scarcely allowed to participate in science. However, one of her discoveries turned out to be instrumental in the field of climate science.

Why do we need jackets in the mountains?

In 1856, Eunice conducted an experiment to explain why air at low altitudes is warmer than air up in the mountains. Back then, scientists weren’t sure why, so she decided to test it. She published her results in the American Journal of Science and Arts.

“Circumstances affecting the heat of the Sun’s rays”. American Journal of Science and Arts. Credits: Wikimedia Commons.

Foote placed two cylinders under the Sun and later in the shade, each with a thermometer. She made sure the experiment started with both cylinders at the same temperature. After three minutes, she measured the temperature in both situations.

She noticed that rarefied air didn’t heat up as much as dense air, which explains the difference between mountaintops and valleys. Later, she compared the influence of moisture with the same apparatus, adding calcium chloride to one cylinder to make sure it stayed dry. The result was a much warmer cylinder with moist air in contrast to the dry one. This was the first step toward explaining the processes in the atmosphere: water vapor is one of the greenhouse gases that sustain life on Earth.

But that wasn’t all. Foote went further and studied the effect of carbon dioxide, which had a pronounced effect on heating the air. Eunice didn’t dwell on it at the time, but in her measurements the warming effect of water vapor made the temperatures 6% higher, while the carbon dioxide cylinder was 9% higher.

Surprisingly, Eunice’s concluding paragraphs came with a simple deduction about how the atmosphere would respond to an increase in CO2. She predicted that adding more of the gas would lead to an increase in temperature — which is pretty much what we know to be true now. In addition, she discussed the effect of carbon dioxide in the geological past, as scientists were already uncovering evidence that Earth’s climate had been different back then.

We now know that during different geologic periods of the Earth, the climate was significantly warmer or colder. In fact, between the Permian and Triassic periods, the CO2 concentration was nearly 5 times higher than today’s, causing a 6°C (10.8°F) temperature increase.

Recognition

Eunice Foote’s discovery was presented by Joseph Henry at the Eighth Annual Meeting of the American Association for the Advancement of Science (AAAS) and made it into Scientific American in 1856. Henry also reported her findings in the New-York Daily Tribune, but stated they were not significant. Her study was mentioned in two European reports, and her name was largely ignored for over 100 years — until she finally received credit for her observations in 2011.

The credit for the discovery used to be given to John Tyndall, an Irish physicist. He published his findings in 1861, explaining how much radiation (heat) was absorbed and which kind of radiation it was – infrared. Tyndall was an “official” scientist: he had a doctorate and recognition from previous work, everything necessary to be respected.

But a few things stand out regarding Tyndall and Foote.

Atmospheric carbon dioxide concentrations and global annual average temperatures (in °C) over the years 1880 to 2009. Credits: NOAA/NCDC

Dr Tyndall was part of the editorial team of a magazine that reprinted Foote’s work. It is possible he didn’t actually read the paper, or that he ignored it because it was by an American scientist (a common practice among European scientists back then), or because of her gender. But it’s possible that he drew some inspiration from it as well — without citing it.

It should be said that Tyndall’s work was more advanced and precise. He had better resources and he was close to the newest discoveries in physics that could support his hypothesis. But the question of why Foote’s work took so long to be credited is hard to answer without going into misogyny.

Today, whenever a finding is published, even if made with a low-budget apparatus, the scientist responsible for the next advance on the topic needs to cite their colleague. A good example involves another important discovery made by another female scientist. Edwin Hubble used Henrietta Swan Leavitt’s discovery of the relationship between the brightness and period of Cepheid variables; her finding was part of the method used to measure the galaxies’ velocities and distances that later proved the universe is expanding. Hubble said she deserved to share the Nobel Prize with him but, unfortunately, she had already died by the time such a nomination was considered.

It’s unfortunate that researchers like Foote don’t receive the recognition they deserve, but it’s encouraging that the scientific community is starting to finally recognize some of these pioneers. There’s plenty of work still left to be done.

International Women’s Day: Ten Women in Science Who Aren’t Marie Curie

As the world celebrates International Women’s Day, it’s important to remember what this date stands for: equal rights between men and women. Women’s Day is tightly connected to the suffragette movement, in which women in many parts of the world fought and suffered for their right to vote. It was on March 8, 1917, that women in Russia gained the right to vote, and in 1975 the United Nations also adopted the day. Unfortunately, we still have a long way to go before we can talk about gender equality in the world and, sadly, science is no exception. When it comes to female scientists, one name always dominates the conversation: Marie Curie. Curie’s brilliance and impact are undeniable, but there are many more women who left a strong mark on science. Here, we will celebrate just a few of them, some of the names we should remember for their remarkable contributions.

Hypatia

Hypatia inspired numerous artists, scientists, and scholars. Here: The play Hypatia, performed at the Haymarket Theatre in January 1893, based on the novel by Charles Kingsley.

Any discussion about women in science should start with Hypatia — the head of the Neoplatonic school in ancient Alexandria, where she taught philosophy and astronomy. Hypatia was praised as a universal genius, though, for most of her life, she focused on teaching more than innovating. Also an accomplished mathematician, Hypatia was an advisor to Orestes, the Roman prefect of Alexandria, and is the first female scientist whose life was decently recorded.

Hypatia lived through a period of political turmoil, with Orestes fighting for power with Cyril, the Christian bishop of Alexandria. Although she was a “pagan” herself, Hypatia was tolerant of Christian students and hoped to prove that Neoplatonism and Christianity could coexist peacefully and cooperatively. Sadly, this wasn’t the case. She was brutally murdered by a mob of Christian monks known as the parabalani — something which many historians today believe was orchestrated by Cyril (or at the very least, that Cyril had some involvement in the process). Her murder fueled hatred against Christians and, unfortunately, her legacy was completely tarnished and turned against what she had hoped to achieve.

Mary Anning

Portrait of Mary Anning with her dog Tray and the Golden Cap outcrop in the background, Natural History Museum, London.

Moving a bit closer to our age, Mary Anning was one of the most significant figures in paleontology. An English fossil collector, Anning was unable to join the Geological Society of London and did not fully participate in the scientific community of 19th-century Britain, which was made up mostly of Anglican gentlemen. This stressed her tremendously, and she struggled financially for much of her life. Despite her significant contributions, it was virtually impossible for her to publish any scientific papers. The only scientific writing of hers published in her lifetime appeared in the Magazine of Natural History in 1839; it was an extract from a letter that Anning had written to the magazine’s editor questioning one of its claims. “The world has used me so unkindly, I fear it has made me suspicious of everyone,” she wrote in a letter.

However, she was consulted by many of the time’s leading scientists on issues of anatomy and fossil collection. Her observations played a key role in the discovery that coprolites are fossilized faeces, and she was also the first to find a complete ichthyosaur skeleton — one of the most emblematic dinosaur-aged marine creatures — as well as two complete plesiosaur skeletons, the first pterosaur skeleton located outside Germany, and important fish fossils. Her work also paved the way for our understanding of extinction and her most impressive findings are hosted at the London Natural History Museum.

Ichthyosaur and Plesiosaur by Édouard Riou, 1863.

Ada Lovelace

Ada Lovelace was one of the most interesting personalities of the 19th century. The daughter of the famous and controversial Lord Byron, Ada inherited her father’s writing gift, but her most important legacy was in a completely different area: mathematics. She is often regarded as the first to recognize the full potential of a “computing machine” and the first computer programmer, chiefly for her work with Charles Babbage, regarded as the father of the computer.

Watercolor portrait of Ada King, Countess of Lovelace (Ada Lovelace).

But Ada Lovelace saw something in computers that Babbage didn’t — way ahead of her time, she glimpsed the true potential that computers could offer. Historian of computing and Babbage specialist Doron Swade explains:

“Ada saw something that Babbage in some sense failed to see. In Babbage’s world his engines were bound by number…What Lovelace saw—what Ada Byron saw—was that number could represent entities other than quantity. So once you had a machine for manipulating numbers, if those numbers represented other things, letters, musical notes, then the machine could manipulate symbols of which number was one instance, according to rules. It is this fundamental transition from a machine which is a number cruncher to a machine for manipulating symbols according to rules that is the fundamental transition from calculation to computation—to general-purpose computation [..]”.

Example of a computing machine developed by Babbage and Lovelace. Image credits: Jitze Couperus from Los Altos Hills, California, USA.

Unfortunately, the life of Ada Lovelace was cut short, at 36, by uterine cancer, and more than a century passed before her vision could be realized.

Henrietta Swan Leavitt

If you like astronomy, the odds are that you’ve heard the name Hubble — but the same can’t be said for Henrietta Swan Leavitt, even though it should be. Her scientific work identified 1,777 variable stars and established that the brighter ones had the longer periods, a discovery known as the “period–luminosity relationship” or “Leavitt’s law.” Her published work paved the way for the discoveries of Edwin Hubble, the renowned American astronomer whose findings changed our understanding of the universe forever. Although Henrietta received little recognition in her lifetime, Hubble often said that Leavitt deserved the Nobel for her work.

Henrietta Swan Leavitt working in her office. Image from the American Institute of Physics, Emilio Segrè Visual Archives.

In 1892, she graduated from Harvard University’s Radcliffe College, having taken only one course in astronomy. She gathered credits toward a graduate degree in astronomy for work completed at the Harvard College Observatory, though she never finished the degree. She began working as one of the women “human computers” at the observatory, measuring and cataloguing the brightness of stars. It was her work that first allowed astronomers to measure the distance between the Earth and faraway galaxies, ultimately allowing Hubble to figure out that the universe is expanding. The Swedish Academy of Sciences tried to nominate her for the Nobel Prize in 1924, only to learn that she had died of cancer three years earlier.

Inge Lehmann

Image courtesy The Royal Library, National Library of Denmark, and University of Copenhagen University Library.

Before Lehmann, researchers believed the Earth’s core to be a single molten sphere. However, observations of seismic waves from earthquakes were inconsistent with this idea, and it was Lehmann who first solved this conundrum in a 1936 paper. She showed that the Earth has a solid inner core inside a molten outer core. Within a few years, most seismologists adopted her view, even though the theory wasn’t proven correct by computer calculations until 1971.

Unlike most of her predecessors, Lehmann was allowed to join scientific organizations, serving as Chair of the Danish Geophysical Society in 1940 and again in 1944. However, she was significantly hampered in her work and in maintaining international contacts during the German occupation of Denmark in World War II. She continued to work on seismological studies, going on to discover another seismic discontinuity, which lies at depths between 190 and 250 km and was named the Lehmann discontinuity after her. In praise of her work, renowned geophysicist Francis Birch noted that the “Lehmann discontinuity was discovered through exacting scrutiny of seismic records by a master of a black art for which no amount of computerization is likely to be a complete substitute.”

Rosalind Franklin

Image credits: Robin Stott.

Rosalind Franklin was an English chemist and X-ray crystallographer who made contributions to the understanding of the molecular structures of DNA (deoxyribonucleic acid), RNA (ribonucleic acid), viruses, coal, and graphite. While her work on the latter was largely appreciated during her lifetime, her work on DNA was extremely controversial, only being truly recognized after her lifetime.

In 1953, the work she did on DNA allowed Watson and Crick to conceive their model of the structure of DNA. Essentially, her work was the backbone of the study, but the two didn’t grant her any recognition, in an academic context largely dominated by sexism. Franklin had first presented important contributions two years earlier, but due to Watson’s lack of understanding of chemistry, he failed to comprehend the crucial information. Franklin later produced a more thorough report on her work, which made its way into the hands of Watson and Crick, even though it was “not expected to reach outside eyes“.

There is no doubt that Franklin’s experimental data were used by Crick and Watson to build their model of DNA, even though they failed to cite her even once (in fact, Watson’s opinions of Franklin were often negative). Ironically, Watson and Crick cited no experimental data at all in support of their model; the DNA X-ray image that served as the principal evidence appeared in a separate publication in the same issue of Nature.

Anne McLaren

Image via Wikipedia.

Zoologist Anne McLaren was one of the pioneers of modern genetics, her work being instrumental to the development of in vitro fertilization. She experimented with culturing mouse eggs and was the first person to successfully grow mouse embryos outside of the womb. McLaren was also involved in the many moral discussions surrounding embryo research, which led her to help shape the UK’s Human Fertilisation and Embryology Act of 1990. This work is still greatly important for policy regarding abortion, and also offers guidelines for the process. She authored over 300 papers over the course of her career.

She received many honours for her contributions to science, being widely regarded as one of the most prolific biologists of modern times. She also became the first female officer of the Royal Society in the institution’s 331-year history.

Vera Rubin

Vera Rubin with John Glenn. Image credits: Jeremy Keith.

Vera Rubin was a pioneering astronomer who first uncovered the discrepancy between the predicted angular motion of galaxies and the observed motion — the so-called Galaxy rotation problem. Although her work was received with great skepticism, it was confirmed time and time again, becoming one of the key pieces of evidence for the existence of dark matter.

Ironically, Rubin wanted to avoid controversial areas of astronomy such as quasars, and focused on the rotation of galaxies. She showed that spiral galaxies rotate quickly enough that they should fly apart if the gravity of their constituent stars was all that was holding them together. So, she inferred the presence of something else — something which today, we call dark matter. Rubin’s calculations showed that galaxies must contain at least five to ten times as much dark matter as ordinary matter. Rubin spent her life advocating for women in science and was a mentor for aspiring female astronomers.

Sally Ride

Image credits: U.S. Information Agency.

Sally Ride was the third woman in outer space, after USSR cosmonauts Valentina Tereshkova (1963) and Svetlana Savitskaya (1982). However, her main focus was astrophysics, primarily researching nonlinear optics and Thomson scattering. She had two bachelor’s degrees: one in literature, because Shakespeare intrigued her, and one in physics, because lasers fascinated her. She was also in excellent physical shape, being a nationally ranked tennis player who flirted with turning pro, and was essentially tailored to be an astronaut — and yet, the subject of media attention was always her gender, not her accomplishments. At press conferences, she would get questions like “Will the flight affect your reproductive organs?” and “Do you weep when things go wrong on the job?”, which she would answer laconically and patiently.

After flying twice on the orbiter Challenger, she left NASA in 1987, having spent 343 hours in space. She wrote and co-wrote several science books aimed at children, encouraging them to pursue science. She also participated in the Gravity Probe B (GP-B) project, which provided solid evidence to support Einstein’s general theory of relativity.

Jane Goodall

Image credits: U.S. Department of State.

Most biologists consider Jane Goodall to be the world’s foremost expert on chimpanzees, and for good reason. Goodall has dedicated her life to studying chimps, having spent over 55 years studying the social and family interactions of wild chimpanzees.

Since she was a child, Goodall has been fascinated by chimps, and she dedicated much of her early life to studying them. She first went to Gombe Stream National Park, Tanzania, in 1960, after becoming one of the very few people allowed to study for a PhD without first having obtained a BA or BSc. Without any supervisors directing her research, Goodall observed things that strict scientific doctrines may have overlooked, and this led to stunning discoveries. She observed behaviors such as hugs, kisses, pats on the back, and even tickling — things we would consider strictly “human” actions. She was the first to document non-human tool-making and, overall, showed that many attributes we considered to be uniquely human are shared by chimps. She has also worked extensively on conservation and animal wildlife welfare.

This article doesn’t intend to be a thorough history of women in science, nor does it claim to mention all the noteworthy ones and the unsung heroes. It is meant to be an appreciation of the invaluable contributions women have made to science and the hardships they had — and still have — to overcome to do so.

What color is a mirror? It’s not a trick question

Credit: Pixabay.

When looking into a mirror, you can see yourself or the mirror’s surroundings in the reflection. But what is a mirror’s true color? It’s an intriguing question for sure since answering it requires us to delve into some fascinating optical physics.

If you answered ‘silver’ or ‘no color’ you’re wrong. The real color of a mirror is white with a faint green tint.

The discussion itself is more nuanced, though. After all, a t-shirt can also be white with a green tint, but that doesn’t mean you could use it as a mirror in a makeup kit.

The many faces of reflected light

We perceive the contour and color of objects due to light bouncing off them that hits our retina. The brain then reconstructs information from the retina — in the form of electrical signals — into an image, allowing us to see.

Objects are initially hit by white light, which is basically colorless daylight. This contains all the wavelengths of the visible spectrum at equal intensity. Some of these wavelengths are absorbed, while others are reflected. So it is these reflected visible-spectrum wavelengths that we ultimately perceive as color.

When an object absorbs all visible wavelengths, we perceive it as black, while an object that reflects all visible wavelengths appears white to our eyes. In practice, no object absorbs or reflects 100% of incoming light — this is important when discerning the true color of a mirror.

Why isn’t a mirror plain white?

Not all reflections are the same. The reflection of light and other forms of electromagnetic radiation can be categorized into two distinct types of reflection. Specular reflection is light reflected from a smooth surface at a definite angle, whereas diffuse reflection is produced by rough surfaces that reflect light in all directions.

Credit: Olympus Lifescience.

A simple example of both types of reflection can be seen in a pool of water. When the water is calm, incident light is reflected in an orderly manner, producing a clear image of the scenery surrounding the pool. But if the water is disturbed by a rock, waves disrupt the reflection by scattering the reflected light in all directions, erasing the image of the scenery.

Credit: Olympus Lifescience.

Mirrors employ specular reflection. When visible white light hits the surface of a mirror at an incident angle, it is reflected back at a reflected angle equal to the incident angle. The light that hits a mirror is not separated into its component colors because it is not being “bent” or refracted, so all wavelengths are reflected at equal angles. The result is an image of the source of light. And because reflection reverses the light’s direction along the axis perpendicular to the mirror, the product is a mirror image.
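For readers who like to see the geometry spelled out, here is a minimal sketch of specular reflection, assuming an idealized flat mirror. The vector formula r = d − 2(d·n)n is the standard way of expressing “reflected angle equals incident angle”; the specific directions below are only illustrative.

```python
# Minimal sketch of specular reflection off an ideal flat mirror.
import numpy as np

def reflect(d, n):
    """Reflect direction vector d off a surface with unit normal n."""
    n = n / np.linalg.norm(n)          # make sure the normal is unit length
    return d - 2 * np.dot(d, n) * n    # r = d - 2(d.n)n

incident = np.array([1.0, -1.0])       # ray travelling down toward the mirror
normal = np.array([0.0, 1.0])          # mirror lying flat along the x-axis

print(reflect(incident, normal))       # [1. 1.] -> same angle, flipped upward
```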

However, mirrors aren’t perfectly white because the material they’re made from is imperfect itself. Modern mirrors are made by silvering, or spraying a thin layer of silver or aluminum onto the back of a sheet of glass. The silica glass substrate reflects a bit more green light than other wavelengths, giving the reflected mirror image a greenish hue.

This greenish tint is normally imperceptible, but it is truly there. You can see it in action by placing two perfectly aligned mirrors facing each other, so that light bounces back and forth between them repeatedly. This phenomenon is known as a “mirror tunnel” or “infinity mirror.” According to a study performed by physicists in 2004, “the color of objects becomes darker and greener the deeper we look into the mirror tunnel.” The physicists found that mirrors reflect best at wavelengths between 495 and 570 nanometers, which corresponds to green.
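To get a feel for why the tint only becomes obvious deep inside a mirror tunnel, here is a small sketch of how a slight green bias compounds over repeated bounces. The reflectance values are illustrative assumptions, not measurements from the 2004 study.

```python
# A toy model of the "mirror tunnel": a mirror that reflects green slightly
# better than red or blue. The reflectance values are assumed for illustration.
reflectance = {
    "red (~650 nm)": 0.90,
    "green (~530 nm)": 0.95,   # the 495-570 nm band where mirrors reflect best
    "blue (~470 nm)": 0.90,
}

for bounces in (1, 10, 50):
    surviving = {color: r ** bounces for color, r in reflectance.items()}
    summary = ", ".join(f"{color}: {frac:.3f}" for color, frac in surviving.items())
    print(f"after {bounces:>2} bounces -> {summary}")

# After 50 bounces only about 0.5% of the red and blue light survives versus
# roughly 7.7% of the green, so deep reflections look darker and greener.
```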

So, in reality, mirrors are actually white with a tiny tint of green.

Rumble in the concrete jungle: what history teaches us about urban defense

Given ongoing events in Ukraine, the age-old adage that offense is the best defense is being put to the test. So far, throughout the country’s towns and cities, the answer seems to be “not so much”.

Urban design and landscaping in Qom, Iran.

With that being said, history gives us ample examples and wisdom on how best to handle urban combat in general and urban defense in particular. Fighting in such environments is a very different beast to combat in other types of landscapes, and it raises unique challenges, as well as offering its own set of options and opportunities. Many of these are related to the huge availability of solid cover and line-of-sight denial. Others arise from the way cities naturally funnel pedestrian and vehicle traffic, constraining them to known and predictable avenues.

So today, we will go through wisdom gathered painfully, at great cost of human lives and material damage over history, on how defenders can best employ built environments against attackers.

Ersatz fortresses

In olden days, architects would design fortresses so that the defenders would have as much of an advantage over attackers as possible. The first and most obvious advantage is the physical protection afforded by thick, sturdy walls.

While most buildings today aren’t built to repel invaders, they do offer sturdy bases that defenders can use when bracing for an attack. Structures erected from concrete and rebar are especially tough and can act as impromptu fortifications. Most government buildings, apartment blocks, and office complexes are ideal for this role, as are banks.

If defenders have enough time to dig in, such buildings should be reinforced with materials such as lumber, steel girders, or sandbags. Such elements should be used to protect the structure from direct damage, help maintain integrity after damage is inflicted on the building, or cover areas through which attackers can enter the perimeter. Ideally, combat engineers would carry out reinforcement works, but if they are not available, civilians can fill the role partially.

Mines, barbed wire, and other physical barriers can also be used to deny attackers entry points into the building and make it hard for them to approach the site. Furniture, rubble, barbed wire, and mines should also be used to block or limit access to stairways and elevators; even if these do not neutralize any of the attackers, they can still delay a fighting force massively. Such makeshift defenses require a lot of time, effort, and resources (such as explosives and specialized combat engineers) to remove.

Inside the building itself, reinforcing materials should be used to create bunkers or similar fighting compartments that break a building’s open floors into multiple areas of overlapping fire.

As with ancient fortresses, however, the key to picking the right building to fortify is location. Strongpoints should have a good command of their surroundings (a direct line of sight for soldiers to fire). Several close-together buildings can be fortified to ensure overlapping fields of fire that the enemy cannot hide from. Whether fortified alone or in groups, these buildings should be surrounded by obstacles that prevent attackers from simply bypassing them or from isolating the strongpoint from the support of other defending units.

Heavy weapons such as rocket launchers, guns, automatic cannons, and heavy machine guns can also benefit from an elevated position from which to fire. Such weapons can be disassembled, carried to upper floors, and reassembled for use. Equipment such as this can allow defenders to halt entire armored columns.

A single fortified building can completely blunt even an armored assault, or at least stall it. One such building — known today as “Pavlov’s House” — became famous during the Battle of Stalingrad in 1942. A platoon led by Sergeant Yakov Pavlov held out in this house against the German army for 60 days, repelling infantry and armored attacks. The soldiers surrounded the building with barbed wire and mines, broke holes through the interior walls to allow for movement, dug machine-gun emplacements in the building’s corners, and used the top floors to lay down anti-tank rifle fire on advancing tanks. When artillery fired on the building, they retreated to the safety of the cellar, only to re-emerge and continue fighting.

Such stories illustrate just how hard it can be for attackers to negotiate a single fortified building. Still, modern battlefields involve systems that were not available during World War II, so one extra element should be considered:

Concealment

The advent of modern surveillance systems such as drones, satellites, and reconnaissance planes, together with the precision weapons in use today, means that strongpoints are at risk of precision strikes. Concealment saves lives, so defenders should take steps to hide their exact position and activity as much as possible.

Citizens embroiled in the Syrian conflict would routinely hang large pieces of cloth, tarps, or sheet metal in between buildings to hide personnel from snipers and aircraft. Such measures are disproportionately effective compared to their simplicity. Soldiers rely heavily on sight on the battlefield and don’t generally shoot unless they have reliable knowledge of where the enemy might be. In the case of heavy weaponry such as tank- or aircraft-mounted munitions, this is even more true. A pilot is much less likely to drop a bomb without a clear sighting than a soldier is to fire a single shot.

Even if the enemy chooses to fire, concealment measures still bring value to defenders. A weapon fired at an empty emplacement is effectively wasted, and cannot be used against an active defender — contributing to the so-called ‘virtual attrition’ of the attacking forces.

Concealment measures should be used in conjunction with fortifications to hide the defenders’ movements and decrease the efficacy of enemy fire. Even so, a big fortified apartment building is hard to hide, and will undoubtedly draw some heavy ordnance its way. So another element should be considered to ensure the safety of defending soldiers.

Tunnels, mouseholes

Mouseholes are openings cut to allow soldiers easy access through the interior and exterior walls of a building. They have been a mainstay of urban combat ever since the advent of gunpowder weaponry. Mouseholes can be created using explosives or simple tools, and should comfortably fit a soldier so as not to clog traffic during a tense situation. In case a building is overrun by the attackers, defenders can also use mouseholes as chokepoints to contain the enemy’s advance by covering them with machine-gun fire or personal weapons.

Tunnels, on the other hand, are dug underground. They require significantly more work than mouseholes but have the added benefit of concealing and protecting troops that transit them from fire. Due to their nature, tunnel networks are hard to set up, so they should be used to allow strategic access to important sites and give defenders safe avenues of reinforcing strongpoints. Whenever possible, defenders should work to build extensive tunnel networks to give troops safe avenues of passage on the battlefield.

Underground transportation avenues and infrastructure, such as metro lines or sewage lines, can also be used as tunnels and bunkers. German soldiers used them to great effect during the Battle of Berlin in 1945, causing great pain to Soviet soldiers moving into the city. Such infrastructure is usually roomy enough to also be usable as hospital and storage space, is extensive enough to act as a communications network, and offers an ideal setting for ambushes, bunkers, or counterattacks. Some can even allow for the passage of armored vehicles. They are also sturdy enough — and dug deep enough underground — to withstand most artillery and airstrikes.

But what about other areas of the city?

Rubble

As daunting as fortified spaces can be, the fact of the matter is that not every building can be fortified. There simply isn’t enough time, manpower, and material available when preparing a defense. But not every area needs to be fortified to help stop an attack. Sometimes, it’s as simple as tearing buildings down.

Defenders have the advantage that they can use the terrain in their favor to a much greater extent than attackers. They are the first of the two sides to hold a position, they know the land, and can take up the best spots to punish any invaders. Rubbling buildings can help in this regard on several levels.

First, rubble presents a physical barrier that an invading army will have difficulty navigating and removing. This is especially true for concrete or brick rubble produced by demolishing buildings. It forces attackers to move through predetermined areas, where defenses can be set up to stop their advance, and it keeps them from bringing all their firepower to bear on a single objective by blocking direct fire. Rubble also blocks line of sight, limiting the ability of an attacking force to keep tabs on what the defenders are doing.

Rubbling is, understandably, a very damaging process and thus quite regrettable to use. But it does bring huge benefits to defenders by allowing them to alter the landscape to their purposes.

Barricades

Although less effective than rubbling at containing an enemy’s movements, barricades can be surprisingly effective at stopping both infantry and armored vehicles. Furniture, tires, sandbags, metallic elements, and wire all make for good barricades.

Urban landscapes are also very rich in objects that can be used for barricades such as trash containers, cars, manholes, industrial piping, and so forth. These should be used liberally and ideally set up in areas where defenders can unleash fire on any attackers attempting to navigate or remove them.

Concrete barriers

These aren’t very common in cities, but any checkpoint or protected infrastructure site might have some of these barriers. If you have time and concrete to spare, makeshift barriers can also be quite effective. They usually come as 3 ft (1 m) tall anti-vehicle walls or 12 ft (4 m) tall wall segments used by the military to reinforce strategic points.

These are essentially portable fortifications. They are made of rebar and concrete and are exceedingly hard to destroy directly. Use cranes and heavy trucks to move them, as they weigh a few tons each.

Supply

Another important advantage defenders have is that the attackers have to come to them — so there’s not much need to carry supplies to the front line.

Pre-prepared ammo caches can be strewn throughout the city to keep defenders in the fight as long as possible. Urban landscapes offer a lot of hidden spots where ammo or weapons can be deposited discreetly. Food, water, and medical supplies are also essential, so these should be distributed throughout the engagement zone as well.

Strongpoints should have designated rooms for storage of such supplies. Smaller items such as magazines or grenades can be distributed in smaller quantities throughout several points of the building, to ensure that soldiers always have what they need on hand.


Attacking an urban environment is a very daunting proposition even for the most well-trained of military forces. It gives defenders an ideal landscape to set up ambushes, entrench, deceive their attackers, and launch counter-offensives. Making the most of the terrain, and preparing carefully, can give defenders a huge upper hand against their foes while making it hard for attackers to leverage their strengths. Such landscapes can level the playing field even against a superior attacking force. The events in Ukraine stand as a testament to this.

Stanislav Petrov – the man who probably saved the world from a nuclear disaster

As Vladimir Putin forces the world to contemplate nuclear war once again, it’s time to remember the moment when one Soviet military officer may have saved the world from disaster.

It was September 26, 1983. The Cold War was at one of its most tense periods ever. With the United States and the USSR at each other’s throats, the two had already built enough nuclear weapons to destroy each other (as well as the rest of the world) a couple of times over — and the slightest sign of an attack would have led to a worldwide disaster, killing hundreds of millions of people.

Stanislav Petrov played a crucial role in monitoring what the US was doing. In the case of an attack, the Soviet strategy was to launch an all-out retaliation as quickly as possible. So a few minutes after midnight, when the alarms went off and the screens turned red, the responsibility fell on his shoulders.

The Soviet warning software analyzed the information and concluded that it wasn’t static; the system’s conclusion was that the US had launched a missile. The system, however, was flawed. Still, the human brain surpassed the computer that day: on that fateful night, Stanislav Petrov put his foot down and decided that it was a false alarm, advising against retaliation – and he made this decision fast.

He made the decision based mostly on common sense – there were too few missiles. The computer said there were only five of them.

“When people start a war, they don’t start it with only five missiles,” he remembered thinking at the time. “You can do little damage with just five missiles.”

However, he also relied on an old-fashioned gut feeling.

“I had a funny feeling in my gut,” Petrov said. “I didn’t want to make a mistake. I made a decision, and that was it.”

There’s also something interesting about that night: Petrov wasn’t scheduled to be on duty. Somebody else should have been there, and somebody else could have made a different decision. The world would probably have turned out very differently.

Saltwater Crocodiles: the world’s oldest and largest reptile

From the east of India, all through to the north of Australia, one fearsome, cold-blooded predator stalks the coasts. This hypercarnivore will contend with any that enters its watery domain, from birds to men to sharks, and almost always win that fight. Fossil evidence shows that this species has been plying its bloody trade for almost 5 million years, remaining virtually unchanged, a testament to just how efficient a killing machine it is. Looking it in the eye is the closest thing we have to staring down a carnivorous dinosaur.

Saltwater crocodile at the Australia Zoo, Beerwah, South Queensland. Image credits Bernard Dupont / Flickr.

This animal is the saltwater crocodile (Crocodylus porosus). It has the distinction of being the single largest reptile alive on the planet today, and one of the oldest species to still walk the Earth.

Predatory legacy

The earliest fossil evidence we have of this species dates back to the Pliocene Epoch, which spanned from 5.3 million to 2.6 million years ago.

But the crocodile family is much older. It draws its roots from the Mesozoic Era, some 250 million years ago, when crocodiles branched off from the archosaurs (the common ancestors they share with modern birds). During those early days, they lived alongside dinosaurs.

Crocodiles began truly coming into their own some 55 million years ago, evolving into the forms we know today. They have remained almost unchanged since, a testament to how well-adapted they are to their environments and the sheer efficiency with which they hunt.

This makes the crocodile family, and the saltwater crocodile as one of its members, one of the oldest lineages alive on the planet today.

The saltwater crocodile

With adult males reaching up to 6 or 7 meters (around 20 to 23 ft) in length, this species is the largest reptile alive today. Females are smaller than males, generally not exceeding 3 meters in length (10 ft); 2.5 meters is considered large for these ladies.

Image credits fvanrenterghem / Flickr.

The saltwater crocodile grows up to its maximum length and then keeps increasing in bulk. The weight of these animals generally increases cubically (with the cube of their length) as they age; an individual 6 m long can weigh over twice as much as one at 5 m. All in all, they tend to be noticeably broader and more heavy-set than other crocodiles.

That being said, they are quite small as juveniles. Freshly-hatched crocs measure about 28 cm (11 in) in length and weigh an average of only 71 g — less than an average bag of chips.

Saltwater crocodiles have large heads, with a surprisingly wide snout compared to other species of croc. Their snouts are usually about twice as long as they are wide at the base. A pair of ridges adorn the animal’s eyes, running down the middle of the snout to the nose. Between 64 and 68 teeth line their powerful jaws.

Like their relatives, saltwater crocodiles are covered in oval scales. These tend to be smaller than the scales of other crocodiles, and the species has small or completely absent scutes (larger, bony plates that reinforce certain areas of the animal’s armored cover) on its neck, which can serve as a quick identifier for the species.

Young individuals are pale yellow, which changes with age. Adults are a darker yellow with tan and gray spots and a white or yellow belly. Adults also have stripes on the lower sides of their bodies and dark bands on their tails.

That being said, several color variations are known to exist in the wild; some adults can maintain a pale coloration throughout their lives, while others can develop quite dark coats, almost black.

Behavior, feeding, mating

Saltwater crocodiles are ambush predators. They lie in wait just below the waterline, with only their raised brows and nostrils poking above the water. These reptiles capture unsuspecting prey from the shore as they come to drink, but are not shy to more actively hunt prey in the water, either. Their infamous ‘death roll’ — where they bite and then twist their unfortunate victim — is devastating, as is their habit of pulling animals into the water where they drown. But even their bite alone is terrifying. According to an analysis by Florida State University paleobiologist Gregory M. Erickson, saltwater crocodiles have the strongest bite of all their relatives, clocking in at 3,700 pounds per square inch (psi).

That’s a mighty bitey. Image credits Sankara Subramanian / Flickr.

Apart from being the largest, the saltwater crocodile is also considered one of the most intelligent reptiles, showing sophisticated behavior. They have a relatively wide repertoire of sounds with which they communicate. They produce bark-like sounds in four known types of calls. The first, which is only performed by newborns, is a short, high-toned hatching call. Another is their distress call, typically only seen in juveniles, which is a series of short, high-pitched barks. The species also has a threat call — a hissing or coughing sound made toward an intruder — and a courtship call, which is a long and low growl.

Saltwater crocodiles will spend most of their time thermoregulating to maintain an ideal body temperature. This involves basking in the sun or taking dips into the water to cool down. Breaks are taken only to hunt or protect their territory. And they are quite territorial. These crocodiles live in coastal waters, freshwater rivers, billabongs (an isolated pond left behind after a river changes course), and swamps. While they are generally shy and avoid people, especially on land, encroaching on their territory is one of the few things that will make a saltwater crocodile attack humans. They’re not shy about fighting anything that trespasses, however, including sharks, monkeys, and buffalo.

This territoriality is also evident between crocs. Juveniles are raised in freshwater rivers but are quickly forced out by dominant males. Males who fail to establish a territory of their own are either killed or forced out to sea. They just aren’t social souls at all.

Females lay clutches of about 50 eggs (though there are records of a single female laying up to 90 in extraordinary cases). They will incubate them in nests of mud and plant fibers for around 3 months. Interestingly, ambient temperatures dictate the sex of the hatchlings. If temperatures are cool, around 30 degrees Celsius, all of them will be female. Higher sustained temperatures, around 34 degrees Celsius, will produce an all-male litter.

Only around 1% of all hatchlings survive into adulthood.

Conservation status

Saltwater crocodiles have precious few natural predators. Still, their skins have historically been highly prized, and they have suffered quite a lot from hunting, both legal and illegal. Their eggs and meat are also consumed as food.

In the past, this species has been threatened with extinction. Recent conservation efforts have allowed them to make an impressive comeback, but the species as a whole is much rarer than in the past. They are currently considered at low risk of extinction, but they remain of special interest to poachers due to their valuable meat, eggs, and skins.


Saltwater crocodiles are an ancient and fearsome predator. They have evolved to dominate their ecosystems, and do so by quietly lurking just out of sight. But, like many apex predators before them, pressure from humans — both directly, in the form of hunting, and indirectly, through environmental destruction and climate change — has left the species reeling.

Conservation efforts for this species are to be applauded and supported. Even though these crocodiles have shown themselves willing to attack humans if we are not careful, we have to bear in mind that what they want is to be left alone and unbothered. It would be a pity for this species, which has been around for millions of years, descended from ancient titans, and survived for millennia through global catastrophe, to perish.

Brain scans are saving convicted murderers from death row–but should they?

Over a decade ago, a brain-mapping technique known as a quantitative electroencephalogram (qEEG) was first used in a death penalty case, helping keep a convicted killer and serial child rapist off death row. It achieved this by persuading jurors that traumatic brain injury (TBI) had left him prone to impulsive violence.

In the years since, qEEG has remained in a strange stasis, inconsistently accepted in a small number of death penalty cases in the USA. In some trials, prosecutors fought it as junk science; in others, they raised no objections to the imaging, producing a case history built on sand. Still, this handful of test cases could signal a new era in which the legal execution of humans becomes outlawed through science.

Quantifying criminal behavior to prevent it

As it stands, if science cannot quantify or explain every event or action in the universe, then we remain in chaos, with the very fabric of life teetering on nothing but conjecture. But DNA’s evidentiary status aside, isn’t this what happens in a criminal court case? So why is it so hard to integrate verified neuroimaging into legal cases? Of course, one could make a solid argument that it would be easier to simply do away with barbaric death penalties and concentrate on stopping these awful crimes from occurring in the first place, but that is a different debate.

The problem is more complex than it seems. Neuroimaging could be used not just to exempt the mentally ill from the death penalty but also to explain horrendous crimes to the victims or their families. And just as crucial, could governments start implementing measures to prevent this type of criminal behavior using electrotherapy or counseling to ‘rectify’ abnormal brain patterns? This could lead down some very slippery slopes.

Especially since it’s not just death row cases that are raising questions about qEEG — nearly every injury lawsuit in the USA now includes a TBI claim. With magnetic resonance imaging (MRI) and computed tomography (CT) scans being generally expensive, lawyers are constantly seeking new ways to prove brain dysfunction. Readers should note that both of these neuroimaging techniques are viewed as more accurate than qEEG, but they can only provide a single, static image of the neurological condition – and thus provide no direct measurement of functional, ongoing brain activity.

In contrast, the cheaper and quicker qEEG purports to continuously monitor active brain activity to diagnose many neurological conditions, and could one day flag those more inclined to violence, enabling early interventional therapy sessions and one-to-one help focused on preventing the problem.

But until we reach that sort of societal level, defense and human rights lawyers have been attempting to slowly phase out legal executions by using brain mapping to explain why their convicted clients may have committed these crimes, gradually moving from the consequences of mental illness and disorders toward a better understanding of these conditions.

The sad case of Nikolas Cruz

But the questions surrounding this technology will soon be on trial again in the most high-profile death penalty case in decades: Florida vs. Nikolas Cruz. On the afternoon of February 14, 2018, Cruz opened fire on school children and staff at Marjory Stoneman Douglas High in Parkland when he was just 19 years of age. In what is now classed as the deadliest high school shooting in the country’s history, the state charged the former Stoneman Douglas High student with the premeditated murder of 17 school children and staff and the attempted murder of a further seventeen people.

With the sentencing expected in April 2022, Cruz’s defense lawyers have enlisted qEEG experts as part of their case to persuade jurors that brain defects should spare him the death penalty. The Broward State Attorney’s Office signaled in a court filing last month that it will challenge the technology and ask a judge to exclude the test results—not yet made public—from the case.

Cruz has already pleaded guilty to all charges, but a jury will now debate whether to hand down the death penalty or life in prison.

According to a court document filed recently, Cruz’s defense team intends to ask the jury to consider mitigating factors. These include his tumultuous family life, a long history of mental health disorders, brain damage caused by his mother’s drug addiction, and claims that a trusted peer sexually abused him—all expected to be verified using qEEG.

After reading the flurry of news reports on the upcoming case, one can’t help but wonder why, even without the use of qEEG, someone with a record of mental health issues at only 19 years old should be on death row. And as authorities and medical professionals were aware of Cruz’s problems, what preventative failings led to him murdering seventeen individuals? Have these even been addressed or corrected? Unlikely.

On a positive note, prosecutors in several US counties have not opposed brain mapping testimony in recent years. According to Dr. David Ross, CEO of NeuroPAs Global and a qEEG expert, the reason is that more scientific papers and research over the years have validated the test’s reliability, helping this technique gain broader use in the diagnosis and treatment of cognitive disorders, even though courts are still debating its effectiveness. “It’s hard to argue it’s not a scientifically valid tool to explore brain function,” Ross stated in an interview with the Miami Herald.

What exactly is a quantitative electroencephalogram (qEEG)?

To explain what a qEEG is, you first need to know what an electroencephalogram (EEG) does. An EEG records the electrical potential difference between pairs of electrodes placed on the outside of the scalp; this provides the analog data for computerized qEEGs. Multiple electrodes (generally more than 20) are connected in pairs to form various patterns called montages, resulting in a series of paired channels of EEG activity. The results appear as squiggly lines on paper—brain wave patterns that clinicians have used for decades to detect evidence of neurological problems.

More recently, trained professionals have computerized this data to create qEEG, translating raw EEG data with mathematical algorithms to analyze brainwave frequencies. Clinicians then compare this statistical analysis against a database of standard or neurotypical brains to discern abnormal brain function that, in death row cases, is argued to underlie criminal behavior.
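As a rough illustration of the kind of processing involved, the sketch below decomposes a synthetic EEG trace into standard frequency bands and measures the power in each. The sampling rate, band limits, and signal are assumptions for demonstration only, not the parameters of any clinical qEEG system.

```python
# A minimal sketch of qEEG-style processing: break a raw EEG channel into
# standard frequency bands and measure the power in each. Synthetic data only.
import numpy as np
from scipy.signal import welch

fs = 256                          # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)      # 10 seconds of samples
# Synthetic "EEG": a 10 Hz alpha rhythm buried in broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # power spectral density

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
    print(f"{name:>5}: {band_power:.3f}")

# A clinical qEEG computes maps like this for many electrodes at once and then
# compares them statistically against a normative database.
```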

While this can be true, results can still go awry due to incorrect electrode placement, unnatural imaging, inadequate band filtering, drowsiness, comparisons against incorrect control databases, and the choice of timeframes. Furthermore, processing can yield a large amount of clinically irrelevant data. These are some of the reasons the usefulness of qEEG remains controversial despite the volume of published research. However, many of these discrepancies can be corrected simply by having trained medical professionals operate the apparatus and interpret the data.

Just one case is disrupting the use of this novel technology

Yet, despite this easy correction, qEEG is not generally accepted by the relevant scientific community for diagnosing traumatic brain injuries and is therefore inadmissible under Frye v. United States, an archaic case from 1923 based on a polygraph test. That trial came a mere 17 years after Cajal and Golgi won a Nobel Prize for producing slides and hand-drawn pictures of neurons in the brain.

Experts could also argue that a lie detector test (measuring blood pressure, pulse, respiration, and skin conductivity) is far removed from a machine monitoring brain activity. Furthermore, when the Court of Appeals of the District of Columbia decided on this lawsuit, qEEG didn’t exist. 

Applying the Frye standard, courts throughout the country have excluded qEEG evidence in the context of alleged brain trauma. For example, the Florida Supreme Court has formally noted that, for the purposes of Frye, the relevant scientific community has found that “qEEG is not a reliable method for determining brain damage and is not widely accepted by those who diagnose a neurologic disease or brain damage.”

However, in a seminal paper covering the use of qEEG in cognitive disorders, the American Academy of Neurology (AAN) overall felt that computer-assisted diagnosis using qEEG is an accurate, inexpensive, easy-to-handle tool that represents a valuable aid for diagnosing, evaluating, following up, and predicting response to therapy — despite the organization’s opposition to the technology in the press. The paper also features other neurological associations validating the use of this technology.

The introduction of qEEG on death row was not that long ago

Only recently introduced, the technology was first deemed admissible in court during the death-penalty prosecution of Grady Nelson in 2010. Nelson stabbed his wife 61 times with a knife, then raped and stabbed her 11-year-old intellectually disabled daughter and her 9-year-old son. The woman died, while her children survived. Documents state that Nelson’s wife had found out he had been sexually abusing both children for many years and sought to keep them away from him.

Nelson’s defense argued that earlier brain damage had left him prone to impulsive behavior and violence. Prosecutors fought to strike the qEEG test from evidence, contending that the science was unproven and misused in this case.

“It was a lot of hocus pocus and bells and whistles, and it amounted to nothing,” the prosecutor on the case, Abbe Rifkin, stated. “When you look at the facts of the case, there was nothing impulsive about this murder.”

However, after hearing the testimony of Dr. Robert W. Thatcher, a multi-award-winning pioneer in qEEG analysis who appeared for the defense, Judge Hogan-Scola found that qEEG met the legal prerequisites for reliability. She based this on the Frye and Daubert standards, the two key tests governing the admissibility of scientific evidence.

She allowed jurors to hear the qEEG report and even permitted Thatcher to present a computer slide show of Nelson’s brain, with an explanation of the effects of frontal lobe damage, at the sentencing phase. He testified that Nelson exhibited “sharp waves” in this region, typically seen in people with epilepsy, explaining that Grady doesn’t have epilepsy but does have a history of at least three TBIs, which could explain the abnormality seen in the EEG.

Interpreting the data, Thatcher also told the court that the frontal lobes, located directly behind the forehead, regulate behavior. “When the frontal lobes are damaged, people have difficulty suppressing actions … and don’t understand the consequences of their actions,” Thatcher told ScienceInsider.

Jurors rejected the death penalty. Two jurors who agreed to be interviewed by a major national publication later categorically stated that the qEEG imaging and testimony influenced their decision.

“The moment this crime occurred, Grady had a broken brain,” his defense attorney, Terry Lenamon, said. “I think this is a huge step forward in explaining why people are broken—not excusing it. This is going to go a long way in mitigating death penalty sentences.”

On the other hand, Charles Epstein, a neurologist at Emory University in Atlanta who testified for the prosecution, states that the qEEG data Thatcher presented relied on flawed statistical analysis riddled with artifacts not naturally present in EEG imaging. Epstein adds that the sharp waves Thatcher reported may have been blips caused by the contraction of muscles in the head. “I treat people with head trauma all the time,” he says. “I never see this in people with head trauma.”

You can see Epstein’s point, as it’s unclear whether these brain injuries occurred before or after Nelson brutally raped a 7-year-old girl in 1991, after which he was granted probation and trained as a social worker.

All of which raises two questions. First, do we need qEEG to establish that this person’s behavior is abnormal, or that the legal system does not protect children? And second, was the reaction of the authorities in the 1991 case appropriate, let alone preventative?

With mass shootings and other forms of extreme violence remaining at relatively high levels in the United States, committed by ever-younger perpetrators flagged as loners and fantasists by the state mental healthcare systems they disappear into, it’s evident that sturdier preventative programs need to be implemented by governments worldwide. The worst has already occurred: our children are unprotected against dangerous predators and unaided when affected by unstable and abusive environments, inappropriate social media, and TV.

A potential beacon of hope, qEEG is already beginning to highlight the country’s broken socio-legal systems and the amount of work it will take to fix them. It attempts to humanize a fractured court system that still disposes of the products of trauma and abuse as if they were nothing but waste, and it forces the authorities to answer for their failings – and any science that can do this can’t be a bad thing.

What are fisher cats, the most misleadingly-named animals out there?

One of the more obscure animals out there, fisher cats (Pekania pennanti), or ‘fishers’ for short, are predators endemic to North America. Despite the name, these animals are not cats, and they do not fish. They are, however, increasingly moving into a lot of urban and suburban areas across the USA.

Image credits: USFWS Pacific Southwest Region / Flickr.

Fisher cats are slim, short-legged mammals that resemble weasels or small wolverines. They can grow to about 145 centimeters in length (4 ft 9 in) including the tail. They’re covered in dark-brown fur, which is glossy and thick in the winter, and more mottled in the summer. They have rounded ears, and overall look quite cute and cuddly. Don’t let that fool you, however: fisher cats have vicious, retractable claws, and are quite fearsome predators for their size.

The species is endemic to various areas of North America. New England, Tennessee, the Great Lakes area, and the northern stretches of the Rocky Mountains all house populations of fisher cats. Smaller populations have also been reported in California, the southern Sierra Nevada, and the west coast of Oregon. The boreal forests of Canada also make great homes for these mammals.

The cat that’s not a cat

Taxonomically speaking, fisher cats are closely related to martens, being part of the Mustelidae family. This is the largest family in the suborder Caniformia (‘dog-like’ animals) and in the greater order Carnivora (meat-eaters). As such, they’re part of the most successful and diverse group of predators on the planet.

Despite this taxonomic allegiance to the group Carnivora, fisher cats are omnivorous. They will happily hunt a wide range of animals of comparable size to them. They are among the very few animals that even attempt to hunt porcupines, and they do so quite successfully, but they prefer to hunt hares. They’re not above scouring the forest floor for plants to eat, however. They generally forage around fallen trees, looking for fruits, mushrooms, nuts, and insects. A bit surprisingly, given their name, fisher cats only very rarely eat fish.

It’s not exactly clear, then, how the animal got its name. Folklore says that fisher cats would steal the fish the early settlers used to bait traps in the Great Lakes region, but this is wholly unconfirmed. More likely, the ‘fisher’ in ‘fisher cat’ comes from ‘fisse’, the Dutch equivalent of the word ‘fitch’, brought over by early settlers in the region. It’s also possible that it draws its roots from the French term ‘fishe’. These words refer to the European polecat or its pelt; given that the fur trade was an important source of income for early settlers, it is likely that fisher cats were prized and sought-after for their pelts, and the species became associated with the polecat, which was raised for fur in Europe.

It’s easy to see why their pelts were so prized. Image via Wikimedia.

However, due to this association, fisher cats were hunted to local extinction in parts of their natural habitat after the Americas were first colonized by Europeans. With the drop in pelt hunting, the animals are making a comeback, and their populations are recovering and moving back into the areas they previously inhabited. Despite this, legal harvesting for fur, through trapping, is still one of the main sources of information about their numbers at our disposal right now.

A baby fisher cat is called a ‘kit’. Females tend to give birth to litters of one to four kits in the spring and nurture them until late summer. The kits are sightless and quite helpless at first, but become well able to take care of themselves by summertime, when they leave in search of their own mates.

How do they live?

Fishers spend most of their time on the ground, and have a marked preference for forested lands compared to other habitats. They’re most often found in boreal or conifer forests, but individuals have been seen in transition forests as well, such as mixed hardwood-conifer forests. They seem to avoid areas where overhead cover isn’t very thick, preferring at least 50% coverage.

Female fisher cats also make their dens in moderately large and large trees when giving birth and rearing their kits. Because of these factors, they’re most likely to be seen in old-growth forests, since heavily-logged or young forests seem not to provide the habitat that fishers like to live in.

Towards the west of the continent, where fires routinely clear forests of fallen trees (fishers’ favorite foraging grounds), these animals tend to gravitate towards forests adjacent to bodies of water (riparian forests). They also seem not to be fond of heavily snowed-in areas, regardless of geographical location.

Despite their habitat preferences, fisher cats have been seen encroaching ever more deeply into urban landscapes, most likely drawn by the prospect of easy food. While it is still unclear whether fisher cats hunt for pets such as household cats or small dogs, such activities would be within their abilities. Most likely, however, they search for food items discarded in trash cans.

Fisher cats stay away from humans for the most part and avoid contact. They will defend themselves if they feel cornered, however. They are quite small, so the chances of a deadly encounter with a fisher cat are slim to none, but if you ever meet one, don’t be fooled by their cuddly exterior. Give it space; their claws and fangs can be quite nasty, and there’s always the risk of infection when dealing with wounds from wildlife.

Today, these furry mammals are listed as Least Concern on the IUCN Red List of Threatened Species; they are making quite a successful comeback following their historic lows. Still, habitat destruction and human encroachment remain serious issues for the species. Their ever-more-frequent sightings in cities and urban landscapes across North America are a warning sign of an issue wildlife everywhere faces: humans are taking up more space than ever, so wild animals are coming to visit our cities as well. Depending on what we do in the future, they may be forced to set up shop here for good.

What is the Oedipus complex?

Sigmund Freud. Credit: Public Domain.

The Oedipus complex is a concept introduced by Sigmund Freud, part of his theory of psychosexual stages of development, that describes a desire for sexual involvement with the opposite-sex parent and a sense of jealousy and rivalry with the same-sex parent. This development stage of major conflict supposedly takes place in boys between 3 and 5 years old.

The term is named after the main character of Sophocles’ Oedipus Rex. In this ancient Greek tragedy, Oedipus is abandoned by his parents as a baby. Later, in adulthood, he becomes the king of Thebes and unknowingly murders his father and marries his mother. The female analog of the psychosexual term is the Electra complex, named after another tragic mythological figure who helped kill her mother. Oedipal is the generic term for both Oedipus and Electra complexes.

Often, these theories are interpreted as the propensity of men to pick women who look like their mothers, while women pick men who resemble their fathers.

Both the Oedipus and Electra complexes have proved controversial ever since they were first introduced to the public in the early 20th century. Critics of Freud note that there is very little empirical evidence supporting the theory’s validity. Even so, the Oedipus complex is still regarded as a cornerstone of psychoanalysis to this day.

Oedipus: Freud’s shibboleth

According to Freud, personality development in childhood takes place during five psychosexual stages: oral, anal, phallic, latency, and genital stages. In each stage, sexual energy is expressed in different ways and through different parts of the body. Each of these psychosexual stages is associated with a particular conflict that must be resolved in order to successfully and healthily advance to the next stage. The manner in which each conflict is resolved can determine a person’s personality and relationships in adulthood.

The Oedipal complex, introduced by Freud in 1899 in his work The Interpretation of Dreams, occurs during the phallic stage of development (ages 3-6), a period when a child becomes aware of anatomical sex differences, setting in motion a conflict of erotic attraction, rivalry, jealousy, and resentment. The young boy unconsciously feels sexually attached to his mother. Envy and jealousy are aimed toward the father, the object of the mother’s affection and attention.

Freud believed that a little boy is condemned to follow his drives and wishes, in the same way Sophocles’ Oedipus was condemned to do, unless he abandons his Oedipal wishes.

The hostile feelings towards the father cause castration anxiety, which is the irrational fear of both literal and figurative emasculation as punishment for desiring his mother. To cope with this anxiety, the boy starts identifying with the father, adopting attitudes, characteristics, and values that the father calls his own. In other words, the father transitions from rival to role model.

It is through this identification with the aggressor that the boy resolves the phallic stage of psychosexual development and acquires his “superego”, a set of morals and values that dominate the conscious adult mind. In the process, the child finally relinquishes his sexual feelings towards the mother, transferring them to other female figures. The implication, Freud says, is that overcoming the Oedipus complex, and the reactions that follow, represents the most important social achievement of the human mind.

“It has justly been said that the Oedipus complex is the nuclear complex of the neuroses, and constitutes the essential part of their content. It represents the peak of infantile sexuality, which, through its after-effects, exercises a decisive influence on the sexuality of adults. Every new arrival on this planet is faced with the task of mastering the Oedipus complex; anyone who fails to do so falls a victim to neurosis. With the progress of psycho-analytic studies the importance of the Oedipus complex has become more and more clearly evident; its recognition has become the shibboleth that distinguishes the adherents of psychoanalysis from its opponents.”

Sigmund Freud,
Footnote added to the 1914 edition of Three Essays on Sexuality (1905)

The Electra complex: the female Oedipal drive

Freud’s analogous psychosexual development for little girls involves the Electra complex, which begins the moment the girl realizes she lacks a penis. The mother is blamed for this and becomes an object of resentment for triggering penis envy. At the same time, the girl develops feelings of sexual desire towards her father. The fact that the mother receives affection from the father, while she doesn’t, causes the girl to become jealous of her mother, now seen as a rival.

Like little boys who have to overcome their Oedipus complex, little girls resolve this conflict by renouncing incestuous and rivalrous feelings, identifying with the mother, thereby developing the superego.

However, Freud was never able to develop as complete a theory of conflict resolution for the Electra complex as he did for the Oedipus complex. In boys, the resolution of the Oedipal drive is motivated by fear of castration, but Freud was never able to find an equally potent incentive in little girls, although he reasoned that a girl may be motivated by worries about losing her parents’ love.

As an interesting aside, the Electra complex, while often attributed to Freud, was actually proposed by Freud’s protégé, Carl Jung.

Failing the Oedipal complex

Freud reasoned that if the conflict arising from the Oedipal complex isn’t successfully resolved, this can cause “neuroses”, which he defined as being manifestations of anxiety-producing unconscious material that is too difficult to think about consciously but must still find a means of expression. In other words, failing to resolve this central conflict before moving on to the next stage will result in experiencing difficulties in areas of love and competition later in adulthood.

Boys may become overly competitive with other men, projecting their latent rivalry with their fathers, and may become mother-fixated, seeking out significant others who resemble their mothers in more than one way. Meanwhile, girls who don’t overcome their penis envy may develop a masculinity complex as adults, making it challenging for them to become intimate with men. Instead, such a woman may try to rival men by becoming excessively aggressive, and the men she interacts with intimately often resemble her father. Moreover, since girls’ identification with their mothers is weaker than boys’ with their fathers (who have castration anxiety), the female superego is weaker and, consequently, their identity as separate, independent individuals is less well developed. Psychoanalysis is supposed to solve these unresolved conflicts.

Modern criticism of the Oedipal complex

Freud exemplified his theory of the Oedipal complex using a single case study, that of the famous “Little Hans”, a five-year-old boy with a phobia of horses. At about age three, little Hans showed an interest in both his own penis and those of other males, including animals. His alarmed mother threatened to cut off his penis unless he stopped playing with it. Around this time, he developed an unnatural fear of horses. Freud reasoned that the little boy responded to his mother’s threat of castration by fearing horses and their large penises. The phobia subsided when Hans would interact with horses wearing a black harness over their noses and black fur around the mouth, which his father suggested symbolized his mustache. In Freud’s interpretation, Hans’s fear of horses unconsciously represented his fear of his father. Hans’s Oedipus complex was only resolved when he started fantasizing about himself with a big penis and married to his mother, allowing him to overcome his castration anxiety and identify with his father.

Although the case study of Little Hans perfectly (and very conveniently) exemplifies Freud’s theory of the Oedipus complex, this is a single case — not nearly enough to generalize the results to the wider population. The problems don’t stop there. Freud only met Hans once, and his information came solely from Hans’s father, who was an open admirer of Freud’s work and could thus have asked his son leading questions that helped fabricate the fantasy of marriage to his mother. Even if Hans (whose real name was Herbert Graf) truly suffered from an Oedipus complex, that doesn’t mean the complex is universal, as Freud claimed.

For instance, in 1929, Polish-British scientist Bronisław Kasper Malinowski, widely regarded as the father of modern anthropology, conducted a now-famous ethnographic study in the Trobriand Islands of Oceania, where fathers aren’t involved in disciplining their sons at all. In this society, the relationship between father and son was consistently good; the disciplinarian in Trobriand communities is the maternal uncle, which shatters the Oedipus complex’s claim to universality.

Malinowski with natives on the Trobriand Islands, circa 1918. Credit: Wikimedia Commons.

Psychoanalytic writer Clara Thompson criticized Freud’s attitude towards women, which she believed was culturally biased. Freud’s idea that penis envy is biologically based can be explained better, and with less woo-woo, by the general envy girls feel towards boys because they often lack the same level of freedom at a young age and the same opportunities in adulthood. You may call it penis envy, as long as you use the term as a metaphor for wanting equal rights rather than what dangles between your legs.

All of that is to say that Freud’s Oedipal complex is riddled with holes and, at best, may apply to a small fraction of the general population. However, this doesn’t necessarily diminish Freud’s brilliance. Both psychoanalysts and modern psychologists now agree that early experiences, even those from when we were so young that we can’t remember them, have a profound influence on our adult selves — that’s just one of Freud’s legacies in developmental theory.

Annie Jump Cannon: the legend behind stellar classification

It is striking that today, we can not only discover but even classify stars that are light-years from Earth — sometimes, even billions of light-years away. Stellar classification often uses the famous Hertzsprung–Russell diagram, which summarises the basics of stellar evolution. The luminosity and the temperature of stars can teach us a lot about their life journey, as they burn their fuel and change chemical composition.

We know that some stars show ionised helium in their spectra while others show neutral helium, that some are hotter than others, and that the Sun turns out to be a not-so-impressive star compared to the giants. Part of that development came from Annie Jump Cannon’s contribution during her long career as an astronomer.

The Hertzsprung–Russell diagram, where the evolution of Sun-like stars is traced. Credits: ESO.

On the shoulders of giantesses

Cannon was born in 1863 in Dover, Delaware, US. When she was 17 years old, thanks to her father’s support, she managed to travel 369 miles from her hometown to attend classes at Wellesley College. It’s no big deal for teens today, but back then, this was an unimaginable adventure for a young lady. The institution offered education exclusively for women, an ideal environment to spark in Cannon an ambition to become a scientist. She graduated in 1884 and later, in 1896, started her career at the Harvard Observatory.

At Wellesley, her astronomy professor was Sarah Whiting, who sparked Cannon’s interest in spectroscopy:

“… of all branches of physics and astronomy, she was most keen on the spectroscopic development. Even at her Observatory receptions, she always had the spectra of various elements on exhibition. So great was her interest in the subject that she infused into the mind of her pupil who is writing these lines, a desire to continue the investigation of spectra.”

Annie Cannon, in Whiting’s obituary, 1927.

Cannon had an explorer’s spirit and travelled across Europe, publishing a photography book in 1893 called “In the Footsteps of Columbus”. It is believed that during her years at Wellesley, after the trip, she contracted scarlet fever. The disease affected her ears and she suffered severe hearing loss, but that didn’t put an end to her social or scientific activities. Annie Jump Cannon was known for not missing meetings and for participating in all American Astronomical Society meetings during her career.

OBAFGKM

At Radcliffe College, she began working more with spectroscopy. Her first work on the spectra of southern stars was published in 1901 in the Annals of the Harvard College Observatory. The director of the observatory, Edward C. Pickering, put Cannon in charge of observing the stars that would later make up the Henry Draper Catalogue, named after the first person to measure the spectrum of a star.

Annie Jump Cannon at her desk at the Harvard College Observatory. Image via Wiki Commons.

The job didn’t pay much. In fact, Harvard employed a number of women as “computers” who processed astronomical data. The women computers at Harvard earned less than secretaries, which enabled researchers to hire more of them, as men would have needed to be paid more.

Her salary was only 25 cents an hour, a small income for the difficult job of poring over tiny details in the spectrographs, often possible only with magnifying glasses. She was known for being focused (possibly helped by her deafness), and also for doing the job fast.

During her career, she managed to classify the spectra of 225,000 stars. At the time, Williamina Fleming, a Scottish astronomer, was the Harvard lady in charge of the women computers. She had previously observed 10,000 stars from the Draper Catalogue and classified them with the letters A to N. But Annie Jump Cannon saw the link between the stars’ spectra and their temperatures, and she rearranged Fleming’s classification into the OBAFGKM system. The OBAFGKM system orders the stars from hottest to coldest, and astronomers created a popular mnemonic for it: “Oh Be A Fine Guy/Girl Kiss Me”.
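If you like to see the scheme laid out explicitly, here is a minimal sketch of the OBAFGKM ordering as a tiny lookup table. The temperature ranges are approximate modern values added purely for illustration; they are not figures from Cannon’s catalogue:

```python
# Approximate surface-temperature ranges (in kelvin) for the OBAFGKM spectral
# classes, ordered hottest to coldest. The ranges are rounded, indicative
# values from modern references, included only to illustrate the ordering.
SPECTRAL_CLASSES = [
    ("O", 30_000, float("inf")),
    ("B", 10_000, 30_000),
    ("A", 7_500, 10_000),
    ("F", 6_000, 7_500),
    ("G", 5_200, 6_000),   # the Sun (~5,800 K) falls here
    ("K", 3_700, 5_200),
    ("M", 2_400, 3_700),
]

def spectral_class(temperature_k: float) -> str:
    """Return the OBAFGKM class whose temperature range contains the input."""
    for letter, low, high in SPECTRAL_CLASSES:
        if low <= temperature_k < high:
            return letter
    return "outside the OBAFGKM range"

print(spectral_class(5_800))  # prints "G"
```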

Legacy

“A bibliography of Miss Cannon’s scientific work would be exceedingly long, but it would be far easier to compile one than to presume to say how great has been the influence of her researches in astronomy. For there is scarcely a living astronomer who can remember the time when Miss Cannon was not an authoritative figure. It is nearly impossible for us to imagine the astronomical world without her. Of late years she has been not only a vital, living person; she has been an institution. Already in our school days she was a legend. The scientific world has lost something besides a great scientist.”

Cecilia Payne-Gaposchkin in Annie Jump Cannon’s obituary.
Annie Jump Cannon at Harvard University. Smithsonian Institution @ Flickr Commons.

Annie Jump Cannon was awarded many prizes: she received an honorary doctorate from Oxford University, was the first woman to receive the Henry Draper Medal in 1931, and was the first woman to become an officer of the American Astronomical Society.

Her work in stellar classification was continued by Cecilia Payne-Gaposchkin, another dame of stellar spectroscopy. Payne improved the system using quantum mechanics and described what stars are made of.

Very few scientists have had as competent and exemplary a career as Cannon. Payne continued the work Cannon left behind; Payne’s advisor, Henry Norris Russell, then built on it with minimal citation. From that, we got today’s basic understanding of stellar classification. Cannon’s beautiful legacy has recently been rescued by other female astronomers who know the importance of her life’s work.

Left, right, or ambidextrous: What determines handedness?

Credit: YouTube capture.

Although on the outside our bodies look symmetrical, our body movements are anything but. If you’re like most people, you write, use a phone, eat, and perform just about any task that requires tactile dexterity with your right hand. A small fraction of people, around 10%, are left-handed. Rarer still are those who can use either hand with equal ease for various, though not necessarily all, tasks. These people are known as ambidextrous, and fewer than 1% of the population is capable of this feat.

It isn’t generally understood why some people are ambidextrous, but the limited research conducted thus far suggests it all starts in the brain. Ambidexterity isn’t as great as it sounds either, as studies have associated ambivalent handedness with poor cognitive and mental health outcomes.

What determines hand preference?

The brain is divided into the left and right hemispheres by a deep longitudinal fissure; the two halves are connected by a thick bundle of nerve fibers called the corpus callosum. You probably know about these hemispheres, and you may have also heard that the left hemisphere handles language, learning, and other analytical processes while the right hemisphere processes images and emotions, among other things. This has inevitably led to the erroneous notion that people who are “more logical” are left-brained while those who are “more creative” are right-brained.

Despite this enduring belief, there’s no such thing as being “right-brained” or “left-brained.” We’re actually “whole-brained” since we use both hemispheres when speaking, solving math, or playing an instrument. But that’s not to say that the brain’s two regions aren’t specialized — and the actual science of how the two halves of the brain work together may be stranger than fiction.

Credit: ResearchGate.

Without going into lengthy details about how the brain performs its division of labor across all areas, we can simply observe our motor functions to see brain lateralization in action. In all vertebrates, the right hemisphere controls the left side of the body via the spinal cord and vice versa. The jury’s still out on why that is, but some scientists believe that this basic organizational feature of the vertebrate nervous system evolved even before the appearance of vertebrates.

Over 90% of humans are naturally right-handed, a proclivity that may start as early as the womb. This suggests that handedness — the tendency to be more skilled and comfortable using one hand instead of the other for tasks such as writing and throwing a ball — is genetic in nature. However, like most aspects of human behavior, it’s likely a complex trait that is influenced by numerous other factors, including the environment and chance.

Until not too long ago, it was thought that a single gene determined handedness, but more recently scientists have identified up to 40 genes that may contribute to this trait. Each gene has a weak effect in isolation, but together their sum is greater than its parts, and they play an important role in establishing hand preference.

Some of these genes are associated with brain asymmetries, especially of language-related regions. This suggests links between handedness and language during human development and evolution. For instance, one implicated gene is NME7, which is known to affect the placement of the visceral organs (heart, liver, etc.) on the left to right body axis—a possible connection between brain and body asymmetries in embryonic development.

However, handedness is not a simple matter of inheritance — not in the way eye color or skin tone is, at least. While children born to left-handed parents are more likely to be left-handed than children of right-handed parents, the overall chance of being left-handed is relatively low in the first place. Consequently, most children of left-handed parents are right-handed. Even among identical twins, many have opposite hand preferences.

According to a 2009 study, genetics contribute around 25% toward handedness, the rest being accounted for by environmental factors such as upbringing and cultural influences.

In the majority of right-handed people, language dominance is on the left side of the brain. However, that doesn’t mean that the sides are completely switched in left-handed individuals — only a quarter of them show language dominance on the right side of the brain. In other words, hand preference is just one type of lateralized brain function and need not represent a whole collection of other functions.

Since writing activates language and speech centers in the brain, it makes sense that most people use their right hand. However, most individuals do not show as strong a hand preference on other tasks, using the left hand for some, the right hand for others, with the notable exception of tasks involving tools. For instance, even people who have a strong preference for using their right hand tend to be better at grabbing a moving ball with their left hand; that’s consistent with the right hemisphere’s specialization for processing spatial tasks and controlling rapid responses.

Ambidexterity may hijack brain asymmetry — and that may actually be a bug, not a feature

This brings us to mixed-handedness, in which people prefer different hands for different tasks. A step above are ambidextrous people, who are thought to be exceptionally rare and can perform tasks equally well with both hands.

But if the picture of what makes people left or right handed is murky, ambidexterity is even more nebulous. We simply don’t know why a very small minority of people, fewer than 1%, is truly ambidextrous. And from the little we know, it doesn’t sound like such a good deal either.

Studies have linked ambidexterity with poor academic performance and mental health. Ambidextrous people perform more poorly than both left- and right-handers on various cognitive tasks, particularly those that involve arithmetic, memory retrieval, and logical reasoning. Being ambidextrous is also associated with language difficulties and ADHD-like symptoms, as well as greater age-related decline in brain volume. The findings suggest that the brain is more likely to encounter faulty neuronal connections when the information it’s processing has to shuttle back and forth between hemispheres.

Again, no one is sure why this is the case. Nor are any of these studies particularly robust: ambidextrous people comprise such a small fraction of the general population that any study involving them will naturally have a small sample size, which invites caution when interpreting the results in a statistically meaningful way. All scientists can say for now is that naturally ambidextrous people have an atypical brain lateralization, meaning they simply have brain circuitry and function that likely differs from the pattern we see in right-handed and left-handed people.

Of course, it’s not all bad news for the handedness-ambivalent. Being able to use both hands with (almost) equal ease certainly has its perks, which can really pay off, especially in sports, arts, and music.

Can you train yourself to be ambidextrous?

Left-handers have long been stigmatized, often being punished in school and forced to use their non-dominant right hand. However, starting in the late 19th century, people have not only become more tolerant of left-handedness, but some have gone as far as to praise the merits of ambidexterity and worked to actively promote it by teaching others how to use both hands well.

For instance, in 1903, John Jackson, a headteacher of a grammar school in Belfast, founded the Ambidextral Culture Society. Jackson believed that the brain’s hemispheres are distinct and independent. Being either right or left hand dominant effectively meant that half of your brainpower potential was being wasted. To harness this potential, Jackson devised ambidexterity training that, he claimed, would eventually allow each hand “to be absolutely independent of the other in the production of any kind of work whatever… if required, one hand shall be writing an original letter, and the other shall be playing the piano, with no diminution of the power of concentration.”

Although these claims have been proven to be bogus, to this day you can find shady online programs that claim to teach you to become ambidextrous. Training involves all sorts of routines such as using your non-dominant hand for writing, brushing your teeth, and all sorts of daily activities that require the fine manipulation of a tool. Doing so would allow you to strengthen neural connections in the brain and activate both hemispheres, which may help you think more creatively — or so they claim. But that’s never been shown by any study I could find. On the contrary, if anything, ambidextrous training may actually hamper cognition and mental health, judging from studies on natural ambidextrous people.

“These effects are slight, but the risks of training to become ambidextrous may cause similar difficulties. The two hemispheres of the brain are not interchangeable. The left hemisphere, for example, is typically responsible for language processing, whereas the right hemisphere often handles nonverbal activities. These asymmetries probably evolved to allow the two sides of the brain to specialize. To attempt to undo or tamper with this efficient setup may invite psychological problems,” Michael Corballis, professor of cognitive neuroscience and psychology at the University of Auckland in New Zealand, wrote in an article for Scientific American.

“It is possible to train your nondominant hand to become more proficient. A concert pianist demonstrates superb skill with both hands, but this mastery is complementary rather than competitive. The visual arts may enhance right-brain function, though not at the expense of verbal specialization in the left hemisphere. A cooperative brain seems to work better than one in which the two sides compete.”

Handedness is a surprisingly complex trait that isn’t easily explained by inheritance. Whether you’re left or right handed, this doesn’t make you necessarily smarter or better than the other. Brain lateralization exists for a reason, and that should be celebrated. 

Deductive versus inductive reasoning: what’s the difference

Sir Arthur Conan Doyle’s fictional Sherlock Holmes is supposedly the best detective in the world. What’s the secret behind his astonishing ability to gather clues from the crime scene that the police always seem to be missing? The answer is quite elementary, my dear reader.

While typical police detectives might use deductive reasoning to solve crimes, Sherlock on the other hand is a master of inductive reasoning. But what’s the difference?

Credit: Pixabay.

What is deductive reasoning

Deductive reasoning involves drawing a conclusion based on premises that are generally assumed to be true. If all the premises are true, then it holds that the conclusion has to be true.

Deduction always starts with a general statement and ends with a narrower, specific conclusion, which is why it’s also called “top-down” logic.

The initial premise is a general statement assumed to hold in all cases. A second, narrower premise is then made in relation to the first, and if both premises are true, the conclusion that follows from them must be true as well. Combining two statements — a major and a minor premise — to form a logical conclusion is called a syllogism.

In math terms, you can think of it this way: A=B, B=C, therefore A=C.

We use deduction often in our day-to-day lives, but this reasoning method is most widely used in research, where it forms the bedrock of the scientific method that tests the validity of a hypothesis.

Here are some examples (the first one is also sketched in code right after the list):

Premise A: All people are mortal.

Premise B: Socrates is a person.

Conclusion: Therefore, Socrates is mortal.

Premise A: All mammals have a backbone.

Premise B: Dogs are mammals.

Conclusion: Dogs have backbones.

Premise A: Multiplication is done before addition.

Premise B: Addition is done before subtraction.

Conclusion: Multiplication is done before subtraction.

Premise A: Oppositely charged particles attract one another.

Premise B: These two molecules repel each other.

Conclusion: The two molecules are either both positively charged or both negatively charged.
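For the programmatically inclined, here is a minimal sketch of the first syllogism above expressed in code. The data structures and names are purely illustrative:

```python
# A toy encoding of the classic syllogism: all people are mortal (major
# premise), Socrates is a person (minor premise), therefore Socrates is
# mortal (conclusion). The structures below are illustrative only.
mortal_categories = {"people"}            # Premise A: all people are mortal
category_of = {"Socrates": "people"}      # Premise B: Socrates is a person

def is_mortal(individual: str) -> bool:
    """The conclusion follows necessarily if the individual's category
    is covered by the major premise."""
    return category_of.get(individual) in mortal_categories

print(is_mortal("Socrates"))  # True: the conclusion cannot be false if the premises hold
```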

What is inductive reasoning

Inductive reasoning is the opposite of deductive reasoning, in the sense that we start with specific arguments to form a general conclusion, rather than making specific conclusions starting from general arguments.

For this reason, inductive reasoning is often used to formulate a hypothesis from limited data rather than supporting an existing hypothesis. Also, the accuracy of a conclusion inferred through induction is typically lower than through deduction, even if the starting statements themselves are true.

For instance, take these examples of inductive logic:

  • The first marble from the bag is black, so is the second, and so is the third. Therefore, all the marbles in the bag must be black.
  • Every cat I meet has fur. All cats then must have fur.
  • Whenever I get a cold, people around me get sick. Therefore, colds are infectious.

Deductive versus inductive reasoning: which one is better?

Deductive inference goes from the general to the specific, while inductive inference goes from the specific to the general. A deductive conclusion cannot be false if its premises are true, whereas an inductive conclusion can still be false even when the premises are true, because you cannot account for the instances you have not observed. In deduction, the conclusion either follows or it doesn’t; there is no in-between, unlike induction, where arguments come in degrees of strength or weakness.

In science, neither deduction nor induction is necessarily superior to one another. Instead, there’s a constant interplay between the two, depending on whether we’re making predictions based on observations or on theory.

Sometimes, it makes sense to start with a theory to form a new hypothesis, then use observation to confirm it. Other times, we can form a hypothesis from observations that seem to form a pattern, which can turn into a theory.

Both methods allow us to get closer and closer to the truth, depending on how much or how little information we have at hand. However, we can never prove something with absolute certainty, which is why science is a tool of approximation — the best there is, but still an approximation.

That being said, each method is far from perfect and has its drawbacks. A deductive argument might be based on non-factual information (the premise is wrong), while an inductive statement might lack sufficient data to form a reliable conclusion, for instance.

As an example of when deduction can go hilariously wrong, look no further than Diogenes and his naked chicken. Diogenes was an ancient Greek philosopher who was a contemporary of the honorable Plato — and the two couldn’t be more different. Diogenes slept in a large jar in the marketplace and begged for a living. He was famous for his philosophical stunts, such as carrying a lit lamp in the daytime, claiming to be looking for an honest man.

When the opportunity presented itself, Diogenes would always try to embarrass Plato. He would, for instance, distract attendees during Plato’s lectures and bring food and eat loudly when Plato would speak. But one day, he really outdid himself.

Plato would often quote and interpret the teachings of his old mentor, Socrates. On one occasion, Plato held a talk about Socrates’ definition of a man as a “featherless biped”. Diogenes cleverly plucked a chicken and with a wide grin on his face proclaimed “Behold! I’ve brought you a man.”

Painting of Diogenes and his chicken. Credit: shardcore.

The implication is that a deductive conclusion is only as good as its premise.

Meanwhile, inductive reasoning leads to a logical conclusion only when the available data is robust. For instance, penguins are birds. Penguins can’t fly. Therefore, all birds can’t fly, which is obviously wrong if you know more birds than just penguins or weird plucked chickens.

Abductive reasoning: the educated guess

There’s another widely used form of reasoning — in fact, it is the one we use most often in our day-to-day lives. Abductive reasoning combines aspects of deductive and inductive reasoning to determine the likeliest outcome from limited available information.

For instance, if you see a person sitting idly on her phone at a table with two glasses of wine in front of her, you can use abduction to conclude her company is away and will likely return soon. Seeing a dog on a leash in front of a store makes us infer that the owner is likely shopping for a brief while and will soon return to join their pet.

In abductive reasoning, the major premise is evident, but the minor premise and therefore the conclusion are only probable. Abduction is also often called “Inference to the Best Explanation” for this very reason.

Abductive and inductive reasoning are very similar to each other, although the former is more at ease reasoning from probable premises that may or may not be true.
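As a rough illustration of “inference to the best explanation”, here is a small sketch that scores a few candidate explanations of the wine-glass scene above by how plausible each is up front and how well it accounts for the observation. The hypotheses and numbers are invented purely for the example:

```python
# A toy version of abductive reasoning for the two-wine-glasses scene above.
# Each hypothesis gets a rough prior (how common it is in general) and a
# likelihood (how well it explains the observation). All numbers are invented
# for illustration only.
observation = "a person sits alone at a table set with two glasses of wine"

hypotheses = {
    "her companion stepped away and will return soon": {"prior": 0.6, "likelihood": 0.9},
    "she ordered both glasses for herself":            {"prior": 0.3, "likelihood": 0.4},
    "a stranger left the second glass behind":         {"prior": 0.1, "likelihood": 0.2},
}

# Score each explanation by prior * likelihood and keep the best-supported one.
best = max(hypotheses, key=lambda h: hypotheses[h]["prior"] * hypotheses[h]["likelihood"])
print(f"Best explanation: {best}")
```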

This excerpt from Conan Doyle’s The Adventure of the Dancing Men provides a great example of Sherlock’s inductive and abductive mind:

Holmes had been seated for some hours in silence with his long, thin back curved over a chemical vessel in which he was brewing a particularly malodorous product. His head was sunk upon his breast, and he looked from my point of view like a strange, lank bird, with dull gray plumage and a black top-knot.

“So, Watson,” said he, suddenly, “you do not propose to invest in South African securities?”

I gave a start of astonishment. Accustomed as I was to Holmes’s curious faculties, this sudden intrusion into my most intimate thoughts was utterly inexplicable.

“How on earth do you know that?” I asked.

He wheeled round upon his stool, with a steaming test-tube in his hand, and a gleam of amusement in his deep-set eyes.

“Now, Watson, confess yourself utterly taken aback,” said he.

“I am.”

“I ought to make you sign a paper to that effect.”

“Why?”

“Because in five minutes you will say that it is all so absurdly simple.”

“I am sure that I shall say nothing of the kind.”

“You see, my dear Watson”–he propped his test-tube in the rack, and began to lecture with the air of a professor addressing his class–“it is not really difficult to construct a series of inferences, each dependent upon its predecessor and each simple in itself. If, after doing so, one simply knocks out all the central inferences and presents one’s audience with the starting-point and the conclusion, one may produce a startling, though possibly a meretricious, effect. Now, it was not really difficult, by an inspection of the groove between your left forefinger and thumb, to feel sure that you did NOT propose to invest your small capital in the gold fields.”

“I see no connection.”

“Very likely not; but I can quickly show you a close connection. Here are the missing links of the very simple chain. 1. You had chalk between your left finger and thumb when you returned from the club last night. 2. You put chalk there when you play billiards, to steady the cue. 3. You never play billiards except with Thurston. 4. You told me, four weeks ago, that Thurston had an option on some South African property which would expire in a month, and which he desired you to share with him. 5. Your check book is locked in my drawer, and you have not asked for the key. 6. You do not propose to invest your money in this manner.”

“How absurdly simple!” I cried.

“Quite so!” said he, a little nettled.

In laying out his arguments that led to his conclusion, Holmes can be seen reasoning by elimination (“By the method of exclusion, I had arrived at this result, for no other hypothesis would meet the facts,” A Study in Scarlet) and reasoning backward, i.e. imagining several hypotheses for explaining the given facts and selecting the best one. But he does this always with consideration of probabilities of hypotheses and the probabilistic connections between hypotheses and data.

This makes Holmes a very good logician, which is the perfect skill to have as a criminal investigator, as well as a scientist.

All of these reasoning techniques are important tools in any critical thinking arsenal, with each having its own time and place. Whether starting from the general or the specific, you have everything you need to win your next argument in style.

The safest and most deadly types of energy — how do renewables compare to fossil fuels?

Energy is the cornerstone of our modern society. For most of human civilization, the energy we used was biological: from our bodies and the animals we used (for instance, for plowing in agriculture). We also burned a lot of wood for heating.

Then, some 250 years ago, people started realizing that they could burn something else: fossil fuels; specifically, coal. Coal offers a lot more usable energy than wood. Fast forward to about 1880, and people also started burning coal for electricity. This usage of fossil fuels, both directly and to produce electricity, has been instrumental to our recent evolution as a society. It’s allowed work to become more productive than ever, enabling people in industrialized nations to eventually enjoy much better living conditions than their predecessors. It’s also brought in unprecedented wealth and technology. Essentially, the energy we produce has become central to nearly every major challenge and opportunity the world faces today. It’s hard to overstate just how important energy is for our society.

But this has come at a cost.

Energy generation causes a lot of problems. The first is pollution; the second is accidents; the third is greenhouse gas emissions. Data compiled by Our World in Data shows that, regardless of which metric you choose, fossil fuel energy is by far the worst.

Good energy, bad energy

It’s true that fossil fuel energy got us to where we are now. Without it, the past couple of centuries would have been unimaginable. But we’ve reached a point where the problems associated with fossil fuel sources are impossible to ignore. Not only do they produce emissions and cause global warming, but they also claim the most lives.

Here’s another way to look at it, as pointed out by Hannah Ritchie from Our World in Data. If there were an average town of just over 180,000 people, and that town were to get all its energy from one single source, how many lives would that energy source cost? Here’s a rundown (a rough version of the arithmetic behind these numbers is sketched after the list):

  • coal would kill 25 people a year;
  • oil would kill 18 people a year;
  • gas would kill 3 people a year;
  • nuclear would kill one person every 14 years;
  • wind would kill one person every 29 years;
  • hydropower would kill one person every 42 years;
  • solar would kill one person every 53 years.
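To see how figures like these arise, here is a minimal back-of-the-envelope sketch. The deaths-per-terawatt-hour rates below, and the assumption that such a town uses roughly 1 TWh of energy per year, are illustrative values chosen to be close to the Our World in Data figures rather than numbers taken from this article:

```python
# Rough arithmetic behind the list above: convert deaths per TWh into deaths
# per year (or years per death) for a hypothetical town using ~1 TWh annually.
# The rates are illustrative values close to Our World in Data's estimates.
DEATHS_PER_TWH = {
    "coal": 24.6,
    "oil": 18.4,
    "gas": 2.8,
    "nuclear": 0.07,
    "wind": 0.035,
    "hydropower": 0.024,
    "solar": 0.019,
}

TOWN_TWH_PER_YEAR = 1.0  # assumption: ~1 TWh per year for a town of ~185,000 people

for source, rate in DEATHS_PER_TWH.items():
    deaths_per_year = rate * TOWN_TWH_PER_YEAR
    if deaths_per_year >= 1:
        print(f"{source}: about {deaths_per_year:.0f} deaths per year")
    else:
        print(f"{source}: about one death every {1 / deaths_per_year:.0f} years")
```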

Another way of looking at it is that nuclear energy, for instance, is responsible for 99.8% fewer deaths than brown coal; 99.7% fewer than coal; 99.6% fewer than oil; and 97.5% fewer than gas. Wind, solar, and hydropower do even better.

So as you can see right off the bat, there’s a huge difference in how dangerous different types of energy are. Gas is not as bad as coal or oil, but it’s still nowhere near renewable energy. Which raises the question: how does energy kill people?

How energy kills people

Historically, coal mining has been the most dangerous energy-associated activity, and there’s a long list of coal mining disasters. Working in a mine is dangerous, and the threats include suffocation, gas poisoning, roof collapse, and gas explosions. In the US alone, over 100,000 coal miners have been killed in the past century, and though the number is decreasing (as coal production is also decreasing), it remains a dangerous activity. For instance, in 2005, coal accidents claimed 5,938 lives worldwide, and in 2006, accidents in China alone killed 4,746 people.

The safety culture of the oil and gas industries has also been found lacking several times. According to the US CDC, over the past decade the oil and gas industry had approximately 108 deaths per year, which comes to a yearly fatality rate of about 1 in 4,000 employees.

Renewable sources like wind and solar are virtually never associated with dangerous accidents. Things are different for hydropower, though. At first glance, hydropower is even more dangerous than the oil and gas industry, but the data is heavily skewed by a single disaster: Typhoon Nina in 1975. In August of that year, the typhoon washed out the Shimantan and Banqiao dams (Henan Province, China), creating one of the largest floods in history, inundating 30 cities and killing over 200,000 people.

However, even that is nothing compared to the indirect damage that fossil fuel energy does through pollution.

Pollution is a silent killer — we don’t really see it, and its effects can be hard to track in individual cases. However, the burning of fossil fuels (and especially coal) emits a number of hazardous air pollutants that are transported through the atmosphere. These pollutants can cause cardiovascular problems, respiratory problems, lung cancer, infections, and many, many more issues. The damage from this pollution dwarfs the numbers from accidents.

For instance, one recent analysis found that through pollution, the fossil fuel industry killed 8.7 million people in 2018 alone — more than the toll claimed by tobacco and malaria combined. That’s equivalent to saying that fossil fuel air pollution kills 1 in 5 people. A more conservative analysis found that fossil fuel combustion kills “only” a million people a year.

So by and large, pollution is the biggest killer in the room, and the toll from burning fossil fuels dwarfs that of all accidents. This is even more concerning considering that, despite the rise in renewables, fossil fuel consumption also continues to grow.

You may have noticed we haven’t mentioned nuclear energy much yet.

Remarkably, although technically not a renewable source, nuclear energy is surprisingly safe, on par with renewables, and provides one of the safest and cleanest types of energy available.

Wait, I thought nuclear energy was dangerous?

A lot of people are afraid of, or at least uncomfortable with, nuclear energy, and that’s understandable. The Chernobyl and Fukushima disasters still burn in people’s minds, and few words are as unnerving as “nuclear disaster.” However, let’s put things into perspective. The Chernobyl disaster, the worst nuclear disaster in human history, killed between 4,000 and 60,000 people (estimates differ). The Fukushima disaster claimed under 600 lives. These were both very severe events, but compared to the magnitude of deaths caused by pollution, their toll is almost negligible.

Nevertheless, despite being such a remarkably clean source of energy, nuclear energy has remained extremely controversial, being shunned in favor of often more damaging sources of energy. Just 3% of Japanese say they want more nuclear energy, while the country gets 26% of its energy from coal and 40% from oil. Germany is shutting down its nuclear power, and despite notable renewable progress, much of the country’s energy still comes from polluting sources. Meanwhile, on the other end of the spectrum, around 70% of the electricity produced in France is nuclear, and it shows.

Nuclear energy has saved lives overall. Using nuclear instead of coal, for instance, has saved over 2 million lives in the past few decades. Nevertheless, it will likely remain a controversial option in most parts of the world.

A clear path

There’s some hidden good news in here. The good news is that we’re not facing a trade-off — it’s not like we either have to choose the energy that’s best for the climate or best for saving human lives. The energy that’s best for the climate is also best for us. Furthermore, the main ‘villain’ also tops both lists: coal. Coal is responsible for a disproportionate amount of greenhouse gases, and it also incurs the most severe health costs in terms of accidents and pollution. Oil, biomass, and gas are better than coal — but much worse than everything else, on all counts.

So if we want to save lives, reduce emissions, and reduce pollution, the path is clear: we need to start moving away from fossil fuel energy, especially coal. The safest sources of energy are also the cleanest: renewables and nuclear.

But despite progress, there’s a very long way to go. Some 60% of the world’s energy comes from coal and oil; another 25% comes from gas. In fact, just 15% of the global energy production is low-carbon (either renewable or nuclear).

Things are changing, and the deployment of renewable energy continues at an accelerated pace. But in the meantime, we’ll continue paying a dear cost for our energy.

What do frogs eat — and other froggy facts you never wanted to know

Frogs are by far the most widespread amphibians — they make up almost 90% of all the current amphibian species. Frogs generally spend their time around bodies of freshwater, in areas that remain wet even during the summer. So, naturally, this influences their eating patterns.

Adult individuals of almost all species are carnivorous, most often preying on invertebrates such as worms, snails, slugs, and arthropods. It’s sometimes said that frogs are “insectivores”, but that’s not technically true. They’re often generalist carnivores, eating pretty much anything they can swallow. Sometimes, they will hunt reptiles, amphibians, even small mammals. They sometimes even engage in cannibalism, while some species mostly feed on plants.

When it comes to what frogs eat, the answer is both simple and complicated.

Image credits: Ed van duijn.

What frogs eat — the tadpole edition

Frogs typically have five life stages. They start out as eggs and then become tadpoles, tadpoles with legs, young frogs, and adult frogs. During their tadpole stage, they’re extremely different from their adult stage. Tadpoles generally lack limbs and have a tail, they breathe through gills and live exclusively in water. The metamorphosis from tadpole to frog involves some major biological changes, including a change in diet.

The diet of tadpoles is also different from that of adults. Tadpoles are typically herbivorous, and their preferred food is algae. They also scrape leaves in the pond, if available. If you want to feed tadpoles (though you shouldn’t start randomly feeding tadpoles in the wild), greens are probably your best option. Lettuce, broccoli, and baby spinach all work great.

However, tadpoles aren’t exactly picky. Most species are herbivorous at the tadpole stage, but in a pinch, almost all tadpoles will eat insects, mosquito larvae, smaller tadpoles, or even carcasses. In fact, several species have been found to be cannibalistic at the tadpole stage, and tadpoles that develop legs early are more likely to be eaten, so late bloomers are more likely to survive.

It’s a tough life for a tadpole, and being picky with food is a luxury you can’t really afford.

A curious (and probably hungry) tadpole. Image in public domain.

Adult frogs and what they eat

Frogs don’t really roam much; they tend to stick close to the water that is so crucial to them. Although some species can travel for several kilometers, it’s common for frogs to stay within a few hundred meters (usually less than 500 meters) around their pond area. As a result, they have to eat things that they can reliably catch around their area.

This being said, frogs will often eat any living thing they can fit into their mouths. If it flies, walks, or crawls and it’s not too big, frogs will often have a go at it. Aside from the common prey (bugs, worms, snails, slugs), they will eat smaller mammals, reptiles, fish, or even small marsupials. The list isn’t limited to only these. Frogs are true generalist predators, and anything small enough to be eaten by a frog could be eaten by a frog. Moths, butterflies, crickets, even bees — all could be found on frogs’ menus.

Frogs hunt by using their specialized tongue and spit. Frog spit is one of the stickiest substances on the planet, and frogs’ tongues can extend out at a whopping 4 meters per second, and can be retracted in 0.07 seconds — five times faster than you can blink.

Generally, frogs like to hunt. They don’t really like carrion or leftovers from other animals (though on very rare occasions, they might eat these too). When they eat things like snails or other shelled mollusks, they generally swallow the shell whole. They don’t pay much attention to what they catch: if they’re hungry and they can snag something, they’ll generally go for it.

Drawing of several species of frogs. Image credits: Wiki Commons.

This being said, a few species also eat plant matter; for instance, the tree frog Xenohyla truncata is partly herbivorous, and its diet includes a large proportion of fruit. Several other species of frogs have been found to consume significant quantities of plants, and the diet of Euphlyctis hexadactylus consists of 80% leaves and flowers (though its juveniles are insectivores).

During the winter, frogs hibernate, slowing their metabolism and surviving until spring from the food they’ve consumed. Some species dig a burrow for themselves, others bury themselves in leaves, while some merely sink to the bottom of the pond, half-covered in mud. During hibernation, they obviously don’t eat anything. Fun fact: some frogs can indeed freeze and survive frozen for months, coming back to life when they thaw.

What does the common frog eat?

There are over 5,000 species of frogs, making up around 88% of all amphibian species on Earth, and researchers are constantly finding new species as well. Here, we’ve tried to address the question of what frogs eat generally, but let’s take a moment to talk about the common frog.

The common frog (Rana temporaria), true to its name, can be found across most of Europe, including Scandinavia, Ireland, and the Balkans. It can also be found across vast swaths of Asia, all the way to Japan. By and large, it’s the most common frog species out there.

The common frog’s eating patterns are greatly influenced by the time of year, and like many other frogs, they also enter a type of hibernation. When they are active, they mostly eat invertebrates: snails, worms, wood lice, and spiders. They have a keen sense of smell and can detect worms or other prey of interest. They also eat the larvae of other common frogs.

What about toads, what do toads eat?

Although the difference between toads and frogs seems significant, and you occasionally come across someone who’s quick to point that out, the distinction between ‘toads’ and ‘frogs’ has no taxonomic justification. It’s more of an esthetic consideration. ‘Frog’ usually refers to species that are either fully aquatic or semi-aquatic and have moist, smooth skins, while toads are terrestrial and have dry, warty skins (although there are exceptions).

The European Common Frog (Rana temporaria, left) & European Toad (Bufo bufo, right) hanging out in a London garden. Image credits: Thomas Brown.

Because toads and frogs are so similar, they eat roughly the same things. Toads mainly eat insects and other arthropods. They often enjoy things like worms and crickets. Sometimes, toads will also hunt larger prey like small mammals or even other amphibians.

Notably, frogs and toads are useful because they can keep insect populations under control. But they can also cause substantial damage, and several species of frogs and toads are invasive. A notable example dates from 1935, when cane toads from Puerto Rico were brought to Australia to control the sugarcane beetle population. The idea backfired spectacularly. From the 102 toads initially introduced, their numbers grew to over 2 billion. They killed beetles alright, but they killed a ton of native species as well and have become a major environmental problem.

At the end of the day, there’s a bunch of different frogs out there, with different eating patterns. Generally, frogs are indiscriminate predators, but some have more varied preferences. Undoubtedly, there’s still a lot left to learn about species of frogs, especially species from remote areas.

Frogs also face a number of environmental threats; the common frog may be common, but other species are under a great deal of pressure. Out of the roughly 5,000 species of frogs we know, 737 are endangered and 549 are critically endangered, and over 100 have probably gone extinct in recent times already (that we know of — the reality is quite possibly even worse). Among the biggest threats frogs face are habitat destruction and invasive species.

The fascinating science behind the first human HIV mRNA vaccine trial – what exactly does it entail?

In a moment described as a “potential first step forward” in protecting people against one of the world’s most devastating pandemics, Moderna, International AIDS Vaccine Initiative (IAVI), and the Bill and Melinda Gates Foundation have joined forces to begin a landmark trial — the first human trials of an HIV vaccine based on messenger ribonucleic acid (mRNA) technology. The collaboration between these organizations, a mixture of non-profits and a company, will bring plenty of experience and technology to the table, which is absolutely necessary when taking on this type of mammoth challenge.

The goal is more than worth it: helping the estimated 37.7 million people currently living with HIV (including 1.7 million children) and protecting those who will be exposed to the virus in the future. Sadly, around 16% of the infected population (6.1 million people) are unaware they are carriers.

Despite progress, HIV remains lethal. Disturbingly, in 2020, 680,000 people died of AIDS-related illnesses, despite inroads made in therapies to dampen the disease’s effects on the immune system. One of these, antiretroviral therapy (ART), has proven to be highly effective in preventing HIV transmission, clinical progression, and death. Still, even with the success of this lifelong therapy, the number of HIV-infected individuals continues to grow.

There is no cure for this disease. Therefore, the development of vaccines to either treat HIV or prevent the acquisition of the disease would be crucial in turning the tables on the virus.

However, it’s not so easy to make an HIV vaccine because the virus mutates very quickly, creating multiple variants within the body, which produce too many targets for one therapy to hit. Plus, the retrovirus integrates itself into the human genome a mere 72 hours after transmission, meaning that high levels of neutralizing antibodies must already be present at the time of transmission to prevent infection.

Because the virus is so tricky, researchers generally consider that a therapeutic vaccine (administered after infection) is unfeasible. Instead, researchers are concentrating on a preventative or ‘prophylactic’ mRNA vaccine similar to those used by Pfizer/BioNTech and Moderna to fight COVID-19.

What is the science behind the vaccine?

The groundwork research was made possible by the discovery of broadly neutralizing HIV-1 antibodies (bnAbs) in 1990. They are the most potent human antibodies ever identified and are extremely rare, only developing in some patients with chronic HIV after years of infection.

Significantly, bnAbs can neutralize the particular viral strain infecting that patient and other variants of HIV–hence, the term ‘broad’ in broadly neutralizing antibodies. They achieve this by using unusual extensions not seen in other immune cells to penetrate the HIV envelope glycoprotein (Env). The Env is the virus’s outer shell, formed from the cell membrane of the host cell it has invaded, making it extremely difficult to destroy; still, bnAbs can target vulnerable sites on this shell to neutralize and eliminate infected cells.

Unfortunately, the antibodies do little to help chronic patients because there’s already too much virus in their systems; however, researchers theorize that if an HIV-free person could produce bnAbs, they might be protected from infection.

Last year, the same organizations tested a vaccine based on this idea in extensive animal tests and a small human trial that didn’t employ mRNA technology. It showed that specific immunogens—substances that can provoke an immune response—triggered the desired antibodies in dozens of people participating in the research. “This study demonstrates proof of principle for a new vaccine concept for HIV,” said Professor William Schief, Department of Immunology and Microbiology at Scripps Research, who worked on the previous trial.

bnAbs are the desired endgame of the potential HIV mRNA vaccine and the fundamental basis of its action. “The induction of bnAbs is widely considered to be a goal of HIV vaccination, and this is the first step in that process,” Moderna and IAVI said in a statement.

So how exactly does the mRNA vaccine work?

The experimental HIV vaccine delivers coded mRNA instructions for two HIV proteins into the host’s cells: the immunogens are Env and Gag, which make up roughly 50% of the total virus particle. As a result, this triggers an immune response allowing the body to create the necessary defenses—antibodies and numerous white blood cells such as B cells and T cells—which then protect against the actual infection.

Later, the participants will also receive a booster immunogen containing Gag and Env mRNA from two other HIV strains to broaden the immune response, hopefully inducing bnAbs.

Karie Youngdahl, a spokesperson for IAVI, clarified that the main aim of the vaccines is to stimulate “B cells that have the potential to produce bnAbs.” These then target the virus’s envelope—its outermost layer that protects its genetic material—to keep it from entering cells and infecting them.  

Pulling back, the team is adamant that the trial is still in the very early stages, with the volunteers possibly needing an unknown number of boosters.

“Further immunogens will be needed to guide the immune system on this path, but this prime-boost combination could be the first key element of an eventual HIV immunization regimen,” said Professor David Diemert, clinical director at George Washington University and a lead investigator in the trials.

What will happen in the Moderna HIV vaccine trial?

The Phase 1 trial consists of 56 healthy, HIV-negative adults and will evaluate the safety and immunogenicity of the vaccine candidates mRNA-1644 and mRNA-1644v2-Core. Moderna will explore how to deliver its proprietary eOD-GT8 60mer immunogen with mRNA technology and investigate how to use it to direct B cells to make proteins that elicit bnAbs, with the expert aid of the non-profit organizations. To give an idea of the odds involved: only about one in every 300,000 B cells in the human body can produce these antibodies.

Sensibly, the trial isn’t ‘blind,’ which means everyone who receives the vaccine will know what they’re getting at this early stage. That’s because the scientists aren’t trying to work out how well the vaccine works in this first phase lasting approximately ten months – they want to make sure it’s safe and capable of mounting the desired immune response.

And even though there is much hype around this trial, experts urge caution. “Moderna are testing a complicated concept which starts the immune response against HIV,” Robin Shattock, an immunologist at Imperial College London, told the Independent. “It gets you to first base, but it’s not a home run. Essentially, we recognize that you need a series of vaccines to induce a response that gives you the breadth needed to neutralize HIV. The mRNA technology may be key to solving the HIV vaccine issue, but it’s going to be a multi-year process.”

And after this long period, if the vaccine is found to be safe and shows signs of producing an immune response, it will progress to more extensive real-world studies and a possible solution to a virus that is still decimating whole communities.

Still, this hybrid collaboration offers hope that human health can be prioritized over financial gain in clinical trials — after all, most people living with HIV are in low- and middle-income countries.

As IAVI president Mark Feinberg wrote in June at the 40th anniversary of the HIV epidemic: “The only real hope we have of ending the HIV/AIDS pandemic is through the deployment of an effective HIV vaccine, one that is achieved through the work of partners, advocates, and community members joining hands to do together what no one individual or group can do on its own.”

Whatever the outcome, profit is not the driving force here, and with luck, we may see more trials based on this premise very soon.

What is vitamin K?

Vitamin K plays a key role in our blood’s ability to form clots. It’s one of the less glamorous vitamins, more rarely discussed than its peers and, although it’s usually referred to as a single substance, it comes in two natural varieties — K1 and K2 — and one synthetic one, K3. People typically cover their requirements of vitamin K through diet, so it’s rarely seen in supplement form, but we’ll also look at some situations that might require an extra input of vitamin K.

A molecule of menatetrenone, one of the forms of vitamin K2. Image via Wikimedia.

The ‘K’ in vitamin K stands for Koagulations-vitamin, Danish for ‘coagulation vitamin’. This is a pretty big hint as to what these vitamers — the term used to denote the various chemically-related forms of a vitamin — help our bodies do. Vitamin K is involved in modification processes that proteins undergo after they have been synthesized, and these proteins then go on to perform clotting wherever it is needed in our blood. Apart from this, vitamin K is also involved in calcium-binding processes for tissues throughout our bodies, for example in bones.

Although we don’t need very high amounts of vitamin K to be healthy (relative to other vitamins), a deficiency of it is in no way a pretty sight. Without enough vitamin K, blood clotting is severely impaired, and uncontrollable bleeding starts occurring throughout our whole bodies. Some research suggests that a deficiency of this vitamin can also cause bones to weaken, leading to osteoporosis, or to the calcification of soft tissues.

What are the types of vitamin K?

Chemically speaking, vitamin K1 is known as phytomenadione or phylloquinone, while K2 is known as menaquinone. They’re quite similar from a structural point of view, being made up of two aromatic rings (rings of carbon atoms) with a long chain of carbon atoms tied to one side. K2 has two subtypes, one of which is longer than the other, but they perform the same role in our bodies. The K1 variety is the most often seen one in supplements.

Vitamin K3 is known as menadione. It used to be prescribed as a treatment for vitamin K deficiency, but it was later discovered that it interfered with the function of glutathione, an important antioxidant and key metabolic molecule. As such, it is no longer in use for this role in humans.

All vitamin K vitamers are fat-soluble substances that tend to degrade rapidly when exposed to sunlight. The vitamin also breaks down and is excreted quickly in the body, so it’s exceedingly rare for it to reach toxic concentrations in humans. Vitamin K is concentrated in the liver, brain, heart, pancreas, and bones.

Sources

Vitamin K is abundant in green, leafy vegetables, where it is involved in photosynthesis. Image credits Local Food Initiative / Flickr.

As previously mentioned, people tend to get enough vitamin K from a regular diet.

Plants are a key synthesizer of vitamin K1, especially their tissues which are directly involved in photosynthesis; as such, mixing leafy or green vegetables into your diet is a good way to access high levels of the vitamin. Spinach, asparagus, broccoli, or legumes such as soybeans are all good sources. Strawberries also contain this vitamin, to a somewhat lesser extent.

Animals also rely on this vitamin for the same processes as human bodies, so animal products can be a good source of it. Animals tend to convert the vitamin K1 they get from eating plants into one of the K2 varieties (MK-4). Eggs and organ meats such as liver, heart, or brain are high in K2.

The other forms of vitamin K2 are produced by bacteria during anaerobic respiration. As such, fermented foods can also be a good source of this vitamin.

Some of the most common signs of deficiency include:

  • Slow rates of blood clotting;
  • Long prothrombin times (prothrombin is a key clotting factor measured by doctors);
  • Spontaneous or random bleeding;
  • Hemorrhaging;
  • Osteoporosis (loss of bone mass) or osteopenia (loss of bone mineral density).

Do I need vitamin K supplements?

Cases of deficiency are rare. However, certain factors can promote such deficiencies. Most commonly, this involves medication that blocks vitamin K metabolism as a side-effect (some antibiotics do this) or medical conditions that prevent the proper absorption of nutrients from food. Some newborns can also experience vitamin K deficiencies as this compound doesn’t cross through the placenta from the mother, and breast milk only contains low levels of it. Due to this, infants are often given vitamin K supplements.

Although it is rare to see toxicity caused by vitamin K overdoses, it is still advised that supplements only be taken when prescribed by a doctor. Symptoms indicative of vitamin K toxicity are jaundice, hyperbilirubinemia, hemolytic anemia, and kernicterus in infants.

Vitamin K deficiencies are virtually always caused by malnourishment, poor diets, or by the action of certain drugs that impact the uptake of vitamin K or its role in the body. People who use antacids, blood thinners, antibiotics, aspirin, and drugs for cancer, seizures, or high cholesterol are sometimes prescribed supplements — again, by a trained physician.

How was it discovered?

The compound was first identified by Danish biochemist Henrik Dam in the early 1930s. Dam was studying another topic entirely: cholesterol metabolism in chickens. However, he observed that chicks fed with a diet low in fat and with no sterols had a high chance of developing subcutaneous and intramuscular hemorrhages (strong bleeding under the skin and within their muscles).

Further studies with different types of food led to the identification of the vitamin, which Dam referred to as the “Koagulations-Vitamin”.

Some other things to know

Some of the bacteria in our gut help provide us with our necessary intake of vitamin K — they synthesize it for us. Because of this, antibiotic use can lead to a decrease in vitamin K levels in our blood, as they decimate the populations of bacteria in our intestines. If you’re experiencing poor appetite following a lengthy or particularly strong course of antibiotics, it could be due to such a deficiency. Contact your physician and tell them about your symptoms if you think you may need vitamin K supplements in this situation; it’s not always the case that you do, but it doesn’t hurt to ask.

Another step you can take to ensure you’re getting enough vitamin K is to combine foods that contain a lot of it with fats — as this vitamin is fat-soluble. A salad of leafy greens with olive oil and avocado is a very good way of providing your body with vitamin K and helping it absorb as much of it as possible.

What are Komodo dragons, the largest lizards in the world?

An impressive and ruthless predator, Komodo dragons are the largest living lizards on Earth. Their success is based on a very deadly bite, but there’s more than meets the eye to these endangered, cold-blooded carnivores.

Image via Pixabay.

Reptiles used to rule the Earth, in the form of dinosaurs; today, they’re no longer top dogs. Some of their larger relatives, such as crocodiles or alligators, bear hints of that fearsome legacy. Others, such as lizards, we tend to think of more as critters or cutesy pets basking under a heat lamp.

But not all lizards are born equal, and they can be quite fearsome creatures. The Komodo dragon (Varanus komodoensis) is living proof. Not only is it the largest, heaviest lizard on the planet, but the dragon is armed with vicious, shark-like serrated teeth and a potent toxic bite that bleeds its prey dry.

A living dragon

Komodo dragons are one branch of the monitor lizard family that is endemic to a few islands in Indonesia — they get their name from one of these, the island of Komodo, one of their prime habitats. They are the largest living lizards, growing up to 3 meters (10 ft) in length. Whichever way you cut it, that’s a lot of lizard. Wild specimens typically weigh around 70 kg (150 lb), but those in captivity can weigh a lot more. The largest specimen officially recorded in the wild was 3.13 m (10.3 ft) long and weighed 166 kg (366 lb), although that weight included an undigested meal.

The dragon’s tail is around the same length as its body, and they’re covered in very tough scales. Each scale is reinforced with a tiny bone (these are called osteoderms — ‘bony skins’), meaning that Komodo dragons are, essentially, encased in armor. Although such osteoderms are not unique to the Komodo dragons, they have been studied and described extensively in this species.

One such study was made possible by the Fort Worth Zoo, which housed the longest-lived specimen bred in captivity, an animal that lived for 19 and a half years. After its death, the zoo donated the body to the University of Texas at Austin, where researchers at the Jackson School of Geosciences examined it with a very powerful CT (computed tomography) scanner. The animal’s advanced age made for a well-developed, intricate, and striking suit of osteoderm armor.

Osteoderms, colored orange, cover the dragon’s body, as seen in this CT scan of its skull. Image credits The University of Texas at Austin / Jessica A. Maisano et al. (2019), The Anatomical Record.

The study revealed that the osteoderms in Komodo dragons differ in shape and overall coverage from other lizards — they’re more robust and cover more of the animal’s surface. A similar procedure on a baby Komodo dragon found no osteoderms, meaning that this bone skin develops as the animal becomes older.

Diet and behavior

As is befitting of a dragon, these lizards are top predators. They completely dominate their ecosystems, hunting and eating anything and everything from invertebrates to birds or mammals. They will happily eat carrion or other dragons, as well.

Their bite is vicious. Komodo dragons have serrated teeth that are ideal for ripping through flesh and bone. Their lower jaws house glands that secrete an anticoagulant toxin. This makes a bite from such a creature a very dangerous thing. When hunting, Komodo dragons bite down hard and pull back using powerful neck muscles; this tears flesh to shreds. The toxins then kick in to prevent clotting which leads to massive blood loss, sending their unlucky prey into shock.

Komodo dragons are not very active creatures, on account of their slow metabolism (a trait typical of most reptiles), so, most often, these reptiles rely on their camouflage and patience to pounce on unsuspecting prey. Despite their usual lethargy, Komodo dragons are capable of incredibly-fast strikes when hunting. Since they’re not very fast runners, their hunting strategy involves getting one good bite into their target, which virtually always escapes. Then, the dragons will calmly follow their victim, waiting for them to bleed out, using their keen sense of smell to follow the trail of blood. Such a hunt can take them miles away from the place where they delivered the bite.

But when they do happen upon the dead or dying prey, Komodo dragons feast in style. They can eat up to 80% of their body weight in a single feeding. This gluttonous nature, together with their slow metabolism, means that Komodo dragons in the wild typically eat only around once per month.

They are not above eating carrion, which they can detect using their sense of smell as far as six miles away. They are known for digging up graves in search of food. Komodo dragons can attack humans but only do so rarely.

An endangered species

First recorded by Western scientists in 1910, the Komodo dragon has never been an abundant species. Today, it is listed as threatened with extinction on the IUCN Red List. Historically, the main driver of its decline was hunting for sport and trophies, with habitat destruction and climate change being the most pressing issues facing the species in modern times.

Komodo dragons are currently protected under Indonesian law. Authorities have gone so far as to temporarily ban tourist travel to the island of Komodo, and set up the Komodo National Park there in 1980 to aid in conservation efforts.

Such developments are especially concerning since female dragons can reproduce asexually — if no male is present, they can fertilize their own eggs. However, only males result from such pregnancies. Combined with the Komodo dragon’s distaste for traveling far from its birthplace, this can quickly lead to inbreeding and the collapse of isolated populations. Habitat destruction in the form of forest burning for agriculture leaves the species especially prone to inbreeding.

If the atmosphere is chaotic, how can we trust climate models?

Before they can understand how our planet’s climate is changing, scientists first need to understand the basic principles of this complicated system — the gears that keep the Earth’s climate turning. You can build simple models with simple interactions, and this is what happened in the first half of the 20th century. But starting from the 1950s and 1960s, researchers began incorporating more and more complex components into their models, taking advantage of ever-increasing computing power.

But the more researchers looked at climate (and the atmosphere, in particular), the more they understood that not everything is neat and ordered. Many things are predictable — if you know the state of the system today, you can calculate what it will be like tomorrow with perfect precision. But some components are seemingly chaotic.

Chaos theory studies these deterministic systems and attempts to describe their inner workings and patterns. It states that behind the apparent randomness of such systems, there are interconnected mechanisms and self-organization that can be studied. So-called chaotic systems are very sensitive to their initial conditions — in mathematics (and especially in dynamical systems), the initial conditions are the “seed” values that describe a system’s starting state. Even very small variations in the conditions today can have major consequences in the future.

It’s a lot to get your head around, but if you want to truly study the planet’s climate, this is what you have to get into.

The Butterfly Effect

Edward Lorenz and Ellen Fetter are two of the pioneers of chaos theory. These “heroes of chaos” used a big, noisy computer called the LGP-30 to develop what we know as chaos theory today.

Lorenz used the computer to run a weather simulation. At one point, he wanted to re-examine part of a run, so he restarted the calculation midway, using numbers from the previous printout as the initial condition. The computer worked internally with six digits, but the printed results were rounded to three. When the new run finished, its results were completely different from the previous one.

That incident led to huge changes in meteorology, the social sciences, and even pandemic planning. A famous phrase often used to describe this type of situation is “the butterfly effect”. You may be familiar with the idea behind it: “The flap of a butterfly’s wings in Brazil can set off a tornado in Texas”. This summarizes the whole idea of small changes in initial conditions, and how small shifts in seemingly chaotic systems can lead to big changes.

Simulation of Lorenz attractor of a chaotic system. Wikimedia Commons.

To illustrate the idea, Lorenz went on to construct a diagram that depicts this chaos. It is called the Lorenz attractor, and it basically displays the trajectory of a particle described by a simple set of equations. The particle starts from a point and spirals around a critical point; because a chaotic system is not cyclical, it never returns to its original position. After a while, it crosses over and starts spiraling around another critical point, tracing out the shape of a butterfly.
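
Here is a minimal Python sketch of that sensitivity to initial conditions, using the standard textbook Lorenz equations rather than Lorenz’s original weather program; the parameter values, step size, and starting points are illustrative choices. Two runs that differ only in the fourth decimal place of one coordinate track each other briefly, then end up in completely different places — the printout-rounding incident in miniature.

```python
# A minimal sketch (not Lorenz's original program) of the Lorenz system, using
# the standard textbook parameters (sigma=10, rho=28, beta=8/3) and a simple
# Euler integration step. Step size and starting values are illustrative.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one Euler step."""
    x, y, z = state
    derivative = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * derivative

def trajectory(initial, n_steps=3000):
    """Integrate n_steps and return the visited states as an (n_steps, 3) array."""
    states = [np.array(initial, dtype=float)]
    for _ in range(n_steps - 1):
        states.append(lorenz_step(states[-1]))
    return np.array(states)

# Two runs differing only in the fourth decimal of x -- akin to Lorenz's
# rounded printout. They stay together at first, then diverge completely.
a = trajectory([1.0, 1.0, 1.0])
b = trajectory([1.0001, 1.0, 1.0])
for step in (100, 1000, 2999):
    print(f"step {step:4d}: separation = {np.linalg.norm(a[step] - b[step]):.4f}")
```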

Why is it chaotic?

If the atmosphere is chaotic, how can we make predictions about it? First, let’s clarify two things: predicting the weather is totally different from predicting the climate. Climate describes atmospheric conditions averaged over long periods — decades, centuries, or even more. Weather is what we experience within hours, days, or weeks.

Weather forecasting is based on forecast models that focus on predicting conditions a few days ahead. To make a forecast for tomorrow, the models need today’s observations as the initial condition. The observations aren’t perfect — there are always small deviations from reality — but they have improved substantially thanks to increases in computing power and satellite coverage.

However, because of chaos, those small deviations make things harder to predict. There is a limit to how far ahead predictions remain accurate — typically no more than a few days. Beyond that, the predictions are simply not trustworthy.

Thankfully, our knowledge of the atmosphere and technological advances have made predictions far better than they were 30 years ago. Unfortunately, uncertainties remain due to the atmosphere’s chaotic behavior. This is illustrated in the image below, which compares model skill across forecast ranges: the 3-day forecast is always more accurate than predictions 5 to 10 days out.

The evolution of weather predictability. Credits: Shapiro et al. (AMS).

This image also highlights an interesting societal issue: the Northern Hemisphere has always been better at predicting the weather than the Southern Hemisphere.

This happens because the Northern Hemisphere contains more of the richer countries that developed advanced science and technology earlier than the Global South, and so it has more monitoring stations in operation. Consequently, these countries long had far more resources for observing the weather than poorer ones. Without such observations, you don’t have initial conditions to feed into the models. This started to change around the late ’90s and early 2000s, when space agencies launched weather satellites that observe a much larger area of the planet.

Predicting the climate

Predicting the climate is a different challenge, and in some ways, it is surprisingly easier than predicting the weather. A longer period of time adds statistical predictability to the problem. Take a game of chance, for instance. If you throw a die once and try to guess what you’ll get, the odds are stacked against you. But throw the die a million times and you have a pretty good idea of what the average will be. Similarly, when it comes to climate, a bunch of individual events average out into long-term conditions that, taken together, may be easier to predict.
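
As a toy illustration of that statistical predictability, here is a minimal Python sketch, assuming nothing more than a fair six-sided die: a single roll is essentially unpredictable, but the average of a million rolls is almost exactly 3.5.

```python
# A minimal sketch of the dice analogy: one roll is "weather-like" (hard to
# predict), the long-run average is "climate-like" (very predictable).
import random

random.seed(42)  # fixed seed so the illustration is reproducible

single_roll = random.randint(1, 6)
many_rolls = [random.randint(1, 6) for _ in range(1_000_000)]
mean = sum(many_rolls) / len(many_rolls)

print("one roll (hard to predict):        ", single_roll)
print("average of a million rolls (~3.5): ", round(mean, 3))
```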

In terms of modeling, weather models and climate models answer different questions. Weather models try to predict where and when a specific atmospheric event will happen. Climate models don’t focus on exactly where or when something will happen; they care about how many such events happen on average over a given period.

When it comes to climate, the Lorenz attractor represents the average of the underlying system conditions — the wings of the butterfly as a whole. Scientists use an ensemble of model runs to ‘fill in the butterfly’ with possibilities that, on average, represent the possible outcomes, and to figure out how the system as a whole is likely to evolve. That’s why climate model predictions and projections like those from the IPCC are extremely reliable, even when dealing with a complex, seemingly chaotic system.

Comparing models

Today, climate scientists have the computing power to average a bunch of models trying to predict the same climate pattern, further refining the results. They can also carry out simulations with the same model, changing the initial conditions slightly and averaging the results. This gives a good indication of the range of possible outcomes. Beyond that, there is a coordinated effort across the scientific community to show that independent models from independent research groups agree about the effects of the climate crisis.
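
Here is a minimal sketch of that ensemble idea, using made-up “model runs” rather than real climate model output: each run shares the same underlying warming trend but has its own random year-to-year wiggles, and averaging the runs suppresses the noise while the spread shows the range of outcomes. The trend and noise values are purely illustrative.

```python
# A toy ensemble: 20 runs share a forced warming trend but differ in their
# internal (chaotic) variability. The ensemble mean recovers the common signal.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1850, 2021)
trend = 0.008 * (years - 1850)  # illustrative warming trend in degrees C

ensemble = np.array([trend + rng.normal(0.0, 0.15, size=years.size)
                     for _ in range(20)])

ensemble_mean = ensemble.mean(axis=0)   # the "signal"
spread = ensemble.std(axis=0)           # the range of plausible outcomes

print(f"single run, year 2020:  {ensemble[0, -1]:.2f} degC")
print(f"ensemble mean, 2020:    {ensemble_mean[-1]:.2f} degC "
      f"(underlying trend: {trend[-1]:.2f} degC)")
print(f"ensemble spread, 2020:  {spread[-1]:.2f} degC")
```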

Organized in 1995, the Coupled Model Intercomparison Project (CMIP) is a framework for comparing different models. It makes sure scientists are simulating the same scenarios, even though the details of their calculations differ. With many independent results pointing to a similar outcome, the simulations become even more reliable.

Changes in global surface temperature over the past 170 years (black line) relative to 1850–1900 and annually-averaged, compared to CMIP6 climate model simulations of the temperature response to both human and natural drivers (red), and to only natural drivers (solar and volcanic activity, green). Solid coloured lines show the multi-model average, and coloured shades show the range (“very likely”) of simulations. Source: IPCC AR6 WGI.

Ultimately, predicting the climate doesn’t mean predicting whether it will rain on January 27, 2122. Climate predictions focus on average conditions — what a particular season, or a phase of an oscillating pattern, will tend to look like. Despite the chaotic nature of the atmosphere, the long time scales and statistical predictability involved mean that long-term climate predictions can be made reliably.

What’s behind the mystery of Easter Island’s statues?

Credit: Pixabay.

Located smack in the middle of the South Pacific Ocean, Easter Island is one of the most enigmatic places in the world. Even to this day, no one is sure how the first humans on the island managed to paddle at least 3,600 kilometers – the shortest distance from mainland South America. But the most mysterious feature of Easter Island is the nearly 1,000 monolithic statues that dot its surface.  

We still don’t know exactly how the islanders moved the human-head-on-torso statues, known as “moai” in the native language. Why the early Easter Islanders undertook this colossal effort deep in their isolation is also a mystery.

Unfortunately, the natives did not keep a written record and the oral history is scant. But recent research is starting to fit at least some of the pieces into this puzzle, providing clues as to the purpose and significance of these stone giants that have stirred the public’s imagination for so long.

A most intriguing island and people

Credit: Wikimedia Commons.

Easter Island, or Rapa Nui as it is known by the indigenous people, is truly a unique place. Although Pacific islands conjure the image of a tropical paradise, the triangular Easter Island is a very rugged landscape, lacking coral reefs and idyllic beaches. Geologically speaking, Easter Island is an amalgamation of three volcanoes that erupted sometime around 780,000 to 110,000 years ago, so it’s an extremely young island. It lies near the western end of a 2,500-kilometer-long chain of underwater volcanoes called the Easter Seamount Chain that resembles the classic Hawaiian hot spot track.

The original colonizers of the island are thought to have voyaged 2,000 kilometers from southeastern Polynesia in open canoes, or as far as 3,600 kilometers from mainland Chile. The most recent archeological evidence suggests colonization didn’t occur until about 1200 C.E. From that time until Dutch explorer Jacob Roggeveen first spied it on Easter Day 1722 – hence the island’s name – the people of Easter Island lived in absolute isolation from the outside world. No one from Easter Island sailed back to the mainland, nor did anyone from the mainland come to visit.

Once these people arrived at the island, that was it. They were stuck there and had to work with the limited resources they had at their disposal — and it wasn’t much.  The volcanic material meant much of the soil was unusable for agriculture, but the natives did manage to grow yams, sweet potatoes, bottle gourds, sugar cane, taro, and bananas.

Intriguingly, although the island is tiny — at 164 square kilometers, it’s slightly smaller than Washington, D.C. — people were segregated into multiple clans that maintained their distinct cultures. Archeological evidence shows stylistically distinct artifacts in communities only 500 meters apart, while DNA and isotope analyses of the natives’ remains also show that they didn’t stray far from their homes, despite the small population size.

Speaking of which, researchers disagree about the size of the island’s population. Some estimate the population peaked at about 15,000 before crashing to just a few thousand prior to European contact. Most estimates, however, hover at around 3,000 by 1350 C.E., a figure that remained more or less stable until Roggeveen spotted the island, after which the population started decreasing as slavery and mass deportation followed shortly thereafter.

But what seems certain is that the Easter Island civilization was in decline well before Europeans first set foot on its shores. Easter Island used to be covered by palm trees for 30,000 years, as many as 16 million of them, some towering 30 meters high — but it is largely treeless today. Early settlers burned down woods to open spaces for farming and began to rapidly increase in population. Besides unsustainable deforestation, there is evidence that palm seed shells were gnawed on by rats, which would have badly impacted the trees’ ability to reproduce.

Once most of the trees were gone, the entire ecosystem rapidly deteriorated: the soil eroded, most birds vanished along with other plant life, there was no wood available to build canoes or dwellings, people started starving and the population crashed. When Captain James Cook arrived at the island in 1774, his crew counted roughly 700 islanders, living miserable lives, their once mighty canoes reduced to patched fragments of driftwood.

For this reason, the fate of Easter Island and the self-destructive behavior of its populace has often been called “ecocide”, a cautionary tale that serves as a reminder of what can happen when humans use their local resources unsustainably. However, more recent research suggests that deforestation was gradual rather than abrupt. And, in any event, archeological evidence shows that the Rapanui people were resilient even in the face of deforestation and remained healthy until European contact, which contradicts the popular view of a cultural collapse prior to 1722.

So, perhaps the Rapanui weren’t as foolish and reckless as some have suggested. After all, they not only managed to flourish for centuries on the most remote inhabited island in the world, but also built some of the most impressive monuments in history: the amazing moai (pronounced mo-eye).

What we know about the mysterious moai

Moai with fully visible bodies. Credit: Pixabay.

Archeologists have documented 887 of the massive statues, known as moai, but there may be as many as 1,000 of them on the island. These massive statues carved from volcanic rock usually weigh 80 tons and can reach 10 meters (32.8 ft) in height, though the average is around half that. The largest moai, dubbed “El Gigante”, weighs around 150 tons and towers at an impressive 20 meters (65.6 ft), while the smallest only measures 1.13 meters (3.7 ft). Each moai, carved in the form of an oversized male head on a torso, sits on a stone platform called ahu.

“We could hardly conceive how these islanders, wholly unacquainted with any mechanical power, could raise such stupendous figures,” the British mariner Captain James Cook wrote in 1774.

More than 95% of the moai were carved in a quarry at the volcano Rano Raraku. This quarry is rich in tuff, compressed volcanic ash that is easy to carve with limited tools. The natives had no metal at all and only used stone tools called toki.

From the quarry, the heavy statues were transported to the coast, often kilometers away. They likely employed wooden logs which they rolled to move the massive monoliths or used wooden sleds pulled by ropes. However they managed to transport the statues, they did so very gently, without breaking the nose, lips, and other features. Accidents did sometimes happen though, since there are a few statues with broken heads and statues lying at the bottom of slopes.

Eyeholes would not be carved into the statues until they reached their destination. In the Rapanui civilization’s later years, a pukao of red scoria stone from the Puna Pau quarry would sometimes be placed on the head of the statue, a sign of mana (mental power). The final touch was a pair of coral eyes, completing the moai and turning it into an ‘ariŋa ora, or living face.

However, half of all identified moai — nearly 400 statues — were found still idling at the Rano Raraku quarry. Only a third of the statues reached their final resting place, while around 10% were found lying ‘in transit’ outside Rano Raraku. It’s unclear why so many moai never left the quarry after the craftsmen went to such lengths to carve them, but the sheer difficulty of moving such large blocks of stone likely played a part.

Most of the transported moai are believed to have been carved, moved, and erected between 1400 and 1600 C.E. By the time Cook arrived at the island, the natives seem to have stopped carving such statues — or at least not nearly at the rate they used to — and were neglecting those still standing.

What were the moai for?

Many of the transported moai are found on Easter Island’s southeast coast, positioned with their backs to the sea. The consensus among archaeologists is that they represent the spirits of ancestors, chiefs, and other high-ranking males who made important contributions to Rapanui culture. However, the statues don’t capture the defining features of individuals, as you’d see in Roman or Greek sculptures of, say, Caesar or Alexander the Great. Instead, they’re all more or less standardized in design, representing a generic male head with exaggerated features.

Carl Lipo, an anthropologist at Binghamton University in central New York, doesn’t buy into the idea that the moai represent ancestors. No ahu or statues are found on hilltops, the obvious place to put monuments meant to send a symbolic message. The moai are instead placed right next to where the natives lived and worked, which suggests they may be landmarks positioned near a valuable resource.

Lipo and colleagues mapped the location of the moai alongside the location of various important resources, such as farmlands, freshwater, and good fishing spots. The statistical analysis suggests the moai sites were most associated with sources of potable water.

“Every single time we found a big source of freshwater, there would be a statue and an ahu. And we saw this over and over and over again. And places where we didn’t find freshwater, we didn’t find statues and ahu,” Lipo told Scientific American, adding that the statues weren’t exactly markers that communicate “this is where you can find drinking water”. That would have been highly impractical considering the Herculean task of carving and moving the statues. Instead, the statues were placed where they are since that’s where people could find the resources they needed to survive.
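
For illustration only, here is a toy sketch of the kind of proximity analysis described above. It is not Lipo’s actual method, and every coordinate in it is made up: it simply compares how close statue sites sit to freshwater sources versus randomly scattered points on the island.

```python
# Toy proximity check (hypothetical data): are statue sites unusually close to
# freshwater sources compared with random locations?
import math
import random

random.seed(1)

freshwater = [(0.2, 0.3), (0.7, 0.8), (0.5, 0.1)]                # made-up sources
moai_sites = [(0.22, 0.28), (0.68, 0.83), (0.52, 0.12), (0.25, 0.35)]  # made-up sites

def nearest_water_distance(point):
    """Distance from a point to its closest freshwater source."""
    return min(math.dist(point, w) for w in freshwater)

observed = sum(nearest_water_distance(s) for s in moai_sites) / len(moai_sites)

# Null comparison: points scattered uniformly across a unit-square "island"
random_points = [(random.random(), random.random()) for _ in range(10_000)]
expected = sum(nearest_water_distance(p) for p in random_points) / len(random_points)

print(f"mean distance to water, moai sites:    {observed:.3f}")
print(f"mean distance to water, random points: {expected:.3f}")
# A much smaller observed distance hints at an association with freshwater.
```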

Since there were many culturally distinct tribes on the small island and there is a great deal of variation in terms of size for the statues, the moai could also serve to signal status to neighboring communities. Large statues are costly, meaning the biggest moai could be regarded as proof that a particular group of tribesmen is clever and hard-working.

Another line of thought suggests the statues are sacred sites of worship. When Roggeveen arrived on the island in 1722, he described in his ship log how he witnessed natives praying to the statues.

“The people had, to judge by appearances, no weapons; although, as I remarked, they relied in case of need on their gods or idols which stand erected all along the sea shore in great numbers, before which they fall down and invoke them. These idols were all hewn out of stone, and in the form of a man, with long ears, adorned on the head with a crown, yet all made with skill: whereat we wondered not a little. A clear space was reserved around these objects of worship by laying stones to a distance of twenty or thirty paces. I took some of the people to be priests, because they paid more reverence to the gods than did the rest; and showed themselves much more devout in their ministrations. One could also distinguish these from the other people quite well, not only by their wearing great white plugs in their ear lobes, but in having the head wholly shaven and hairless.”

Finally, the giant stone sculptures may have served an important role in farming — not for astronomy purposes as seen with other megalithic sites like Stonehenge but in the very literal sense. The soil on Easter Island is highly prone to erosion, especially in the absence of the once plentiful woods. But when Jo Anne Van Tilburg, an archeologist and head of the Easter Island Statue Project, sampled the soil around quarries, she found it was unexpectedly fertile, high in calcium and phosphorus.

“Our analysis showed that in addition to serving as a quarry and a place for carving statues, Rano Raraku also was the site of a productive agricultural area,” Van Tilburg said in a statement.

“Coupled with a fresh-water source in the quarry, it appears the practice of quarrying itself helped boost soil fertility and food production in the immediate surroundings,” said Dr. Sarah Sherwood, a geoarchaeologist and soils specialist at the University of the South in Sewanee and a member of the Easter Island Statue Project.

In related research, anthropologist Mara Mulrooney of the Bernice Pauahi Bishop Museum in Honolulu analyzed various archeological sites on the island and found the Rapanui people cultivated gardens of yams, sweet potatoes, taro and other crops in enclosures with stones and boulders strategically placed on the soil. The rocks not only protected the plants from the wind and deterred weed growth but also boosted soil nutrients thanks to the weathering of minerals.

When Van Tilburg and Sherwood excavated two of the 21 partially buried statues on the slopes of Rano Raraku, they found that each statue was etched with crescent shapes and other figures on its back. A carved human head found resting against the base of one of the statues suggests that these moai may have served a ceremonial purpose of some kind, perhaps related to plant growth.

Carved designs on the back of an Easter Island statue suggest that the stone creation was used in soil fertility rituals, researchers say. Credit: Easter Island Project.

If quarry sites were the main farming plots, this would explain why so many statues haven’t been moved from their origin. Perhaps the islanders were not aware that the volcanic statues were making the soil fertile thanks to the minerals they contain, and instead attributed their plant growth to some divine intervention. As such, the statues may serve a double role as a ritual object and fertilizer. 

The culture of Easter Island and why the heads are there is something we may never fully understand, but with each archeological trip, we are getting closer to uncovering the secrets of the Rapanui.

How the ancient Romans built roads to last thousands of years

An ancient Roman road leading into the Arc of Trajanus in Timgad, Batna, Algeria. Credit: Travel.com

During its zenith under the reign of Septimius Severus in 211 C.E., the mighty Roman Empire stretched over much of Europe, North Africa, and the Near East — from the Atlantic to Mesopotamia, and from modern-day Scotland to the Sahara and the Arabian Gulf. Crucial to maintaining dominion over such a large empire was Rome’s huge and intricate network of roads, which remained unparalleled even a thousand years after its collapse.

It is estimated that the Roman road network was more than 400,000 kilometers long, out of which over 80,000 km were stone-paved. Like arteries, these marvelous feats of engineering ferried goods and services rapidly and safely, connecting Rome, “the capital of the world”, to the farthest stretches of the empire, and facilitated troop movements to hastily assemble legions for both border defense and expansion. Encompassing both military and economic outcomes, roads were truly central to Rome’s political strategy.

While the Romans didn’t invent road building, they took this Bronze Age infrastructure to a whole new level of craftsmanship. Many of these roads were so well designed and built that they are still the basis of highways that we see today. These include Via Flaminia and Britain’s Fosse Way, which still carry car, bike, and foot traffic. The answer to their longevity lies in the precision and thoroughness of Roman engineering.

Roman road types and layout

Just like today, the Roman transportation network consisted of various types of roads, each with its pros and cons. These ranged from small local dirt roads to broad, stone-paved highways that connected cities, major towns, and military outposts.

According to Ulpian, a 2nd-century C.E. Roman jurist and one of the greatest legal authorities of his time, there were three major types of roads:

  • Viae publicae. These were public or main roads, built and maintained at the expense of the state. These were the most important highways that connected the most important towns in the empire. As such, they were also the most traveled, dotted by carts full of goods and people traveling through the vast empire. But although they were funded by the state, not all public roads were free to use. Tolls were common at key points of crossing, such as bridges and city gates, enabling the state to collect import and export taxes on goods.
  • Viae militares. Although Roman troops marched across all types of roads and terrain for that matter, they also had their dedicated corridors in the road network. The military roads were very similar to public roads in design and building methods, but they were specifically built and maintained by the military. They were built by legionaries and were generally closed to civilian travel.
  • Viae privatae. These were private roads, built and maintained by private citizens. They were usually dirt or gravel roads, since local estate owners or communities did not possess the funds or the engineering expertise to match the quality of public roads.
  • Viae vicinales. Finally, there were secondary roads that led through or towards a vicus, or village. These roads ran into high roads or into other viae vicinales and could be either public or private.

The first and most famous Roman road was Via Appia (the Appian Way), which linked Rome to Capua, covering 132 Roman miles, or 196 kilometers. Via Appia was highly typical of how the Romans thought about building roads: it was very much a straight line that all but ignored geographical obstacles. The stretch from Rome to Terracina was essentially one 90-km-long straight line.

Map of major Roman highways in the Italic peninsula.

Other important Roman roads of note include Via Flaminia which went from Rome to Fanum (Fano), Via Aemilia from Placentia to Augusta Praetoria (Aosta), Via Postumia from Aquileia to Genua (Genoa), and Via Popillia from Ariminum (Rimini) to Padova in the north and from Capua to Rheghium (Reggio Calabria) in the south.

Map of Roman Empire at its height in 125 C.E., showing the most important roads. Credit: Wikimedia Commons.

These roads were typically named after the Roman censor who paved them. For instance, Via Appia was named after censor Appius Claudius Caecus, who began and completed its first section as a military road to the south in 312 B.C.E., during the Samnite Wars, when Rome was still a fledgling city-state on its path to dominating the Italic peninsula.

While they had curved roads when it made sense for them, the Romans preferred taking the straightest path possible between two geographical points, which led to intriguing zig-zag road patterns if you zoom out far enough.

Building a straight road, especially over large distances, is a lot more technically challenging than meets the eye. Mensors were essentially the equivalent of today’s land surveyors who were tasked with determining the most appropriate placement and path a new road should take, depending on the terrain and locally available construction materials. These surveyors were well trained and employed standardized practices.

For instance, the incline of a road could not exceed 8 degrees, in order to facilitate the movement of heavy carts packed with goods. To measure slopes, mensors employed the chorobates, a roughly 6-meter ruler with a groove on top into which water was poured. Road construction often started simultaneously from two opposite ends that eventually joined in the middle. To lay out perpendicular lines on the landscape and make sure the roads were straight and actually met, the surveyors used the groma, a forerunner of modern surveying instruments, which consisted of a cross with threads and lead weights tied at its four ends. When one weight correctly lined up with the one in front of it, the surveyor knew that the path of the road was straight.
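
As a small worked example of that slope constraint — a sketch with made-up measurements, not a Roman formula — the gradient of a road segment can be computed from its rise over its horizontal run and checked against the roughly 8-degree limit:

```python
# Check a surveyed segment against the ~8-degree incline limit mentioned above.
# The sample rise/run values are purely illustrative.
import math

def incline_degrees(rise_m: float, run_m: float) -> float:
    """Slope of a segment, in degrees, from its rise over its horizontal run."""
    return math.degrees(math.atan2(rise_m, run_m))

MAX_INCLINE_DEG = 8.0

for rise, run in [(7.0, 100.0), (15.0, 100.0)]:
    slope = incline_degrees(rise, run)
    verdict = "acceptable" if slope <= MAX_INCLINE_DEG else "too steep -- reroute"
    print(f"rise {rise:4.1f} m over {run:5.1f} m -> {slope:4.1f} deg ({verdict})")
```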

Mistakes were bound to occur, which explains the small changes in direction that archeologists have found when excavating these ancient roads. When roads had to inevitably bend due to the terrain, at the bends the roads became much wider so that carriages traveling towards each other could safely pass each other without interlocking the wheels.

Roman roads purposely avoided difficult terrain such as marshes or the immediate vicinity of rivers. When they had to cross a river, Roman engineers built wooden or stone bridges, some of which survive and are still in use to this day, like the 60-meter-long Pons Fabricius, which was built in 62 B.C.E. and connects an island in the Tiber River with the opposite bank. Other times, tunnels were dug through mountains, in the spirit of straight Roman roads.

How Roman roads were made

After completing all the geodetic measurements and projections, the Roman surveyors marked the path of the future road using milestones. All trees, shrubs, and other vegetation that might interfere with the construction of the road were razed. Marshes were drained and mountains would be cut through, if needed.

The average width of an ancient Roman road was around 6 meters (20 ft.), although some large public roads could be much wider.

According to the writings of Marcus Vitruvius Pollio, an outstanding Roman architect and engineer who lived in the 1st century B.C.E., Roman public roads consisted of several layers:

  • Foundation soil – depending on the terrain, builders either dug depressions on level ground or installed special supports in places where the soil subsided. The soil was then compacted and sometimes covered with sand or mortar to provide a stable footing for the layers above.
  • Statumen – a layer that was laid on compacted foundation soil, consisting of large rough stone blocks. Cracks between the slabs would allow drainage to be carried through. The thickness of this layer ranged from 25 to 60 cm.
  • Rudus – a 20-cm-thick layer consisting of crushed rock about 5 cm in diameter in cement mortar.
  • Nucleus – a concrete base layer made of cement, sand and gravel, that was about 30 cm thick.
  • Summum dorsum – the final layer, consisting of large, roughly 15-cm-thick rock blocks. More often, though, fine sand, gravel, or earth was used for the top layer, depending on the resources at the workers’ disposal. This layer had to be soft and durable at the same time. Paved roads were very expensive and were typically reserved for sections located near and inside important cities. When pavement (pavimentum) was used, large cobblestones of basalt lava were typical in the vicinity of Rome.
The main layers of a Roman road.

This layered, puff pastry-like structure ensured that the roads would be very sturdy. Roman roads also had a slightly cambered surface, a clever design that allowed rainwater to run off to the sides of the road or into drainage ditches, thereby keeping the road free of puddles.

Upkeep was also very important. In fact, the Romans, who considered their roads the backbone of the empire, were so meticulous about maintaining them that markers were placed at regular intervals along the roadside indicating who was in charge of repairing that particular section and when the last repair was made. That's a remarkably modern, accountability-based approach to upkeep.

Swift travel and easy navigation

Rome's unparalleled network of roads was crucial both for expanding and defending its borders and for allowing the economy to flourish. Rome's legions could march 25 to 50 kilometers (around 15 to 31 miles) a day, allowing them to respond relatively quickly to outside threats or internal uprisings. This meant that costly garrison units at frontier outposts could be kept to a minimum, since reinforcements could be mustered within days or weeks.

Imperial Rome even had a postal service, which exploited the road network to the fullest. By switching fatigued horses for fresh ones, a courier could carry a message up to 80 kilometers in a single day, and perhaps even farther if the message was urgent. For the slow-paced world of antiquity, this was incredibly fast and efficient communication, making the Roman state far more agile than its 'barbarian' neighbors.
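
To put those figures side by side, here is a rough, illustrative bit of arithmetic in Python; the 1,000 km distance is a made-up example rather than any specific Roman route:

```python
LEGION_KM_PER_DAY = (25, 50)   # marching pace range cited above
COURIER_KM_PER_DAY = 80        # postal relay pace with fresh horses

distance_km = 1_000  # hypothetical journey, for illustration only

fastest = distance_km / LEGION_KM_PER_DAY[1]
slowest = distance_km / LEGION_KM_PER_DAY[0]
courier_days = distance_km / COURIER_KM_PER_DAY

print(f"Legion on the march: {fastest:.0f}-{slowest:.0f} days")  # 20-40 days
print(f"Relay courier: about {courier_days:.1f} days")           # ~12.5 days
```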

A Roman milestone in Portugal.

Besides the military, Rome's roads were used by travelers from all parts of society, from slaves to emperors. Although traveling across the empire without maps might seem daunting, travelers could easily find their way to their destination thanks to large pillars that dotted the side of the road. These milestones, which could stand as tall as four meters and weigh two tons, indicated who built or was tasked with maintaining the road, as mentioned earlier, but also told travelers how far away the nearest settlement was. The pillars were modeled after a marble column clad in gilded bronze, the Milliarium Aureum or 'Golden Milestone', erected in the Roman Forum in 20 B.C.E. under Caesar Augustus. It represented the starting point for all the roads in the empire, hence the phrase 'All roads lead to Rome'.

All important Roman roads, and the notable stopping places along them, were cataloged by the state. The catalog was updated regularly in the form of the Antonine Itinerary, which at its peak contained 225 lists. Each list, or iter, gives the start and end of a route and its total mileage, followed by the intermediate points and the distances between them.

There were also maps, though not the landscape kind you might be imagining. These were schematic route guides known as itineraria, which originally only listed the cities along a route but gradually became quite complex. The itineraria grew to include the roads themselves, each with its own number and city of origin, how they branched, their lengths in Roman miles (a Roman mile being 1,000 paces, or about 0.92 English miles), and the main intermediate cities and stops along the way.
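
As a small aside on units, converting the distances these itineraries quote into modern ones is straightforward; the conversion factors below are approximate, and the 50-mile leg is a made-up example:

```python
ROMAN_MILE_KM = 1.48   # one Roman mile is roughly 1.48 km
ROMAN_MILE_EN = 0.92   # or about 0.92 English (statute) miles, as cited above

def roman_miles_to_modern(roman_miles: float) -> tuple[float, float]:
    """Return (kilometers, English miles) for a distance given in Roman miles."""
    return roman_miles * ROMAN_MILE_KM, roman_miles * ROMAN_MILE_EN

km, mi = roman_miles_to_modern(50)  # a hypothetical 50-Roman-mile leg
print(f"50 Roman miles is about {km:.0f} km ({mi:.0f} English miles)")  # ~74 km, 46 mi
```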

Roman roads even had service stations

A well-preserved section of the Appian Way. Credit: Carole Raddato.

Every 15-20 kilometers (around 9-12 miles) or so along a public road, it was common to find rest stops where couriers could exchange their tired horses for fresh mounts. These government stables were known as mutationes. Alongside them, travelers could expect to find mansiones, an early version of the inn where people could purchase basic lodgings for themselves and their animals, as well as eat, bathe, repair wagons, and even solicit prostitutes. At busier intersections, these service stations grew into small towns complete with shops and other amenities.

Roman roads were surprisingly safe

The flow of trade, and the taxes that came with it, was crucial to the Roman empire, so any disruption caused by bandits and other roadside outlaws was unacceptable. Special detachments of soldiers known as stationarii and beneficiarii regularly patrolled public roads and manned police posts and watchtowers to monitor traffic. They also doubled as toll collectors.

Roman roads tended to pass through sparsely populated areas, and special attention was given to clearing vegetation and digging ditches along the roadsides. This reduced the cover that bandits could use to ambush carts and law-abiding travelers.

To this day, hundreds if not thousands of routes across Europe and the Middle East run right on top of old Roman roads that have remained in use throughout the ages. Although they suffered major deterioration through neglect, Roman roads continued to serve Europe throughout the Middle Ages. In fact, Roman road-building technology wasn't truly surpassed until the late 19th century, when the Belgian chemist Edmund J. DeSmedt laid the first true asphalt pavement in front of the city hall in Newark, New Jersey. Of course, Roman roads would be totally impractical for today's busy car traffic, but one can only stand in awe of their durability, in stark contrast to modern roads that quickly form potholes after a mild winter.