Category Archives: History

Eunice Foote: the first person to measure the impact of carbon dioxide on climate

We often think of climate science as something that started only recently. The truth is that, like almost all fields of science, it started a long time ago. Advancing science is often a slow and tedious process, and climate science is no exception. From the discovery of carbon dioxide to today's most sophisticated climate models, it took a long time to get where we are.

Unfortunately, many scientists who played an important role in this climate journey are not given the credit they deserve. Take, for instance, Eunice Newton Foote.

Eunice Foote. Credits: Wikimedia Commons.

Foote was born in 1819 in Connecticut, USA. She spent her childhood in New York and later attended classes at the Troy Female Seminary, a higher education institution just for women. She married Elisha Foote in 1841, and the couple was active in the suffragist and abolitionist movements. They participated in the “Women’s Rights Convention” and signed the “Declaration of Sentiments” in 1848.

Eunice was also an inventor and an “amateur” scientist, a brave endeavor in a time when women were scarcely allowed to participate in science. However, one of her discoveries turned out to be instrumental in the field of climate science.

Why do we need jackets in the mountains?

In 1856, Eunice conducted an experiment to explain why air at low altitudes is warmer than air in the mountains. Back then, scientists had no firm answer, so she decided to test it herself. She published her results in the American Journal of Science and Arts.

“Circumstances affecting the heat of the Sun’s rays”. American Journal of Science and Arts. Credits: Wikimedia Commons.

Foote placed two cylinders under the Sun and later in the shade, each with a thermometer. She made sure the experiment would start with both cylinders at the same temperature. After three minutes, she measured the temperature in both situations.

She noticed that rarefied air didn’t heat up as much as dense air, which explains the difference between mountaintops and valleys. Later, she compared the influence of moisture with the same apparatus. To make sure one of the cylinders was dry enough, she added calcium chloride to it. The result was a much warmer cylinder with moist air in contrast to the dry one. This was the first step toward explaining the processes in the atmosphere: water vapor is one of the greenhouse gases that sustain life on Earth.

But that wasn’t all. Foote went further and studied the effect of carbon dioxide. The gas had a pronounced effect on heating the air. Eunice didn’t frame it this way at the time, but in her measurements, the cylinder with water vapor ended up 6% warmer, while the cylinder with carbon dioxide ended up 9% warmer.

Surprisingly, Eunice’s concluding paragraphs came with a simple deduction on how the atmosphere would respond to an increase in CO2. She predicted that adding more gas would lead to an increase in the temperature — which is pretty much what we know to be true now. In addition, she talked about the effect of carbon dioxide in the geological past, as scientists were already uncovering evidence that Earth’s climate was different back then.

We now know that during different geologic periods of the Earth, the climate was significantly warmer or colder. In fact, between the Permian and Triassic periods, the CO2 concentration was nearly 5 times higher than today’s, causing a 6°C (10.8°F) temperature increase.

Recognition

Eunice Foote’s discovery was presented by Joseph Henry at the Eighth Annual Meeting of the American Association for the Advancement of Science (AAAS) and made it into Scientific American in 1856. Henry also reported her findings in the New-York Daily Tribune, but stated they were not significant. Her study was mentioned in two European reports, and her name was largely ignored for over 150 years — until she finally received credit for her observations in 2011.

The credit for the discovery used to be given to John Tyndall, an Irish physicist. He published his findings in 1861, explaining how much radiation (heat) was absorbed and which kind of radiation it was – infrared. Tyndall was an “official” scientist: he had a doctorate and recognition from previous work, everything necessary to be respected.

But a few things draw the eye regarding Tyndall and Foote.

Atmospheric carbon dioxide concentrations and global annual average temperatures (in °C) over the years 1880 to 2009. Credits: NOAA/NCDC

Dr Tyndall was part of the editorial team of a magazine that reprinted Foote’s work. It is possible he didn’t actually read the paper, or that he ignored it because its author was American (a common practice among European scientists back then), or because of her gender. But it’s possible that he drew some inspiration from it as well — without citing it.

It should be said that Tyndall’s work was more advanced and precise. He had better resources and he was close to the newest discoveries in physics that could support his hypothesis. But the question of why Foote’s work took so long to be credited is hard to answer without going into misogyny.

Today, whenever a finding is published, even one made with a low-budget apparatus, the scientist responsible for the next advance on the topic needs to cite their colleague. A good example involves another important discovery by another female scientist. Edwin Hubble used Henrietta Swan Leavitt’s discovery of the relationship between the brightness and period of Cepheid variables. Her discovery was part of the method used to measure the velocities and distances of galaxies that later proved the universe is expanding. Hubble said she deserved to share the Nobel Prize with him; unfortunately, she had already died by then.

It’s unfortunate that researchers like Foote don’t receive the recognition they deserve, but it’s encouraging that the scientific community is starting to finally recognize some of these pioneers. There’s plenty of work still left to be done.

Russia invades Ukraine – 5 essential reads from experts

Image from a previous military drill. Credits: Pixabay.

This is a frightening moment. Russia has invaded Ukraine, and certainly those most frightened right now are the people of Ukraine. But violent aggression – a war mounted by a country with vast military resources against a smaller, weaker country – strikes fear in all of us. As a Washington Post headline writer recently wrote: The Ukraine crisis is “5,000 miles away but hitting home.”

The Conversation U.S. has spent the past couple of months digging into the history and politics of Ukraine and Russia. We’ve looked at their cultures, their religions, their military and technological capacities. We’ve provided you with stories about NATO, about cyberwarfare, the Cold War and the efficacy of sanctions.

Below, you’ll find a selection of stories from our coverage. We hope they will help you understand why today may feel both inevitable and inexplicable.

1. The US promised to protect Ukraine

In 1994, Ukraine got a signed commitment from Russia, the U.S. and the U.K. in which the three countries promised to protect the newly independent state’s sovereignty.

“Ukraine as an independent state was born from the 1991 collapse of the Soviet Union,” write scholars Lee Feinstein of Indiana University and Mariana Budjeryn of Harvard. “Its independence came with a complicated Cold War inheritance: the world’s third-largest stockpile of nuclear weapons. Ukraine was one of the three non-Russian former Soviet states, including Belarus and Kazakhstan, that emerged from the Soviet collapse with nuclear weapons on its territory.”

The 1994 agreement was signed in return for Ukraine giving up the nuclear weapons within its borders, sending them to Russia for dismantling. But the agreement, not legally binding, was broken by Russia’s illegal annexation of Ukraine’s Crimean Peninsula in 2014. And today’s invasion is yet another example of the weakness of that agreement.

2. Clues to how Russia will wage war

During the opening ceremony of the 2008 Beijing Olympics, Russia invaded Georgia, a country on the Black Sea. In 2014, Putin ordered troops to seize Crimea, a peninsula that juts into the Black Sea and housed a Russian naval base.

West Point scholar and career U.S. special forces officer Liam Collins conducted field research on the 2008 and 2014 wars in Georgia and Ukraine.

“From what I have learned, I expect a possible Russian invasion would start with cyberattacks and electronic warfare to sever communications between Ukraine’s capital and the troops. Shortly thereafter, tanks and mechanized infantry formations supported by the Russian air force would cross at multiple points along the nearly 1,200-mile border, assisted by Russian special forces. Russia would seek to bypass large urban areas.”

3. Spies replaced by smartphones

If you love spy movies, you’ve got an image of how intelligence is gathered: agents on the ground and satellites in the sky.

But you’re way out of date. These days, writes Craig Nazareth, a scholar of intelligence and information operations at the University of Arizona, “massive amounts of valuable information are publicly available, and not all of it is collected by governments. Satellites and drones are much cheaper than they were even a decade ago, allowing private companies to operate them, and nearly everyone has a smartphone with advanced photo and video capabilities.”

This means people around the world may see this invasion unfold in real time. “Commercial imaging companies are posting up-to-the-minute, geographically precise images of Russia’s military forces. Several news agencies are regularly monitoring and reporting on the situation. TikTok users are posting video of Russian military equipment on rail cars allegedly on their way to augment forces already in position around Ukraine. And internet sleuths are tracking this flow of information.”

4. Targeting the US with cyberattacks

As Russia edged closer to war with Ukraine, cybersecurity scholar Justin Pelletier at Rochester Institute of Technology wrote of the growing likelihood of destructive Russian cyberattacks against the U.S.

Pelletier quoted a Department of Homeland Security bulletin from late January that said, “We assess that Russia would consider initiating a cyberattack against the Homeland if it perceived a U.S. or NATO response to a possible Russian invasion of Ukraine threatened its long-term national security.”

And that’s not all. “Americans can probably expect to see Russian-sponsored cyber-activities working in tandem with propaganda campaigns,” writes Pelletier. The aim of such campaigns: to use “social and other online media like a military-grade fog machine that confuses the U.S. population and encourages mistrust in the strength and validity of the U.S. government.”

5. Will war sink Putin’s stock with Russians?


“War ultimately requires an enormous amount of public goodwill and support for a political leader,” writes Arik Burakovsky, a scholar of Russia and public opinion at Tufts University’s Fletcher School.


Putin’s support among Russians rose as the country massed troops along the Ukrainian border – the public believes its leaders are defending Russia by standing up to the West. But Burakovsky writes that “the rally ‘round the flag effect of supporting political leadership during an international crisis will likely be short-lived.”

Most Russians, it turns out, don’t want war. The return of body bags from the front could well prove damaging to Putin domestically.


Want to learn more? Here’s an even bigger collection of The Conversation’s coverage of the crisis in Ukraine.

Naomi Schalit, Senior Editor, Politics + Society, The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Want to live like a Roman? This historical rowing cruise on the Danube has you covered

An unusual ship will set sail in November 2022 on the Danube River in Europe. Well, unusual for our times, at least. A Roman rowing and sailing ship built just like the ones in late antiquity will start its journey in Bavaria, and sail down the Danube all the way to the Black Sea in Romania.

A reconstructed navis lusoria at the Museum of Ancient Seafaring, Mainz.

For centuries, the Romans ruled vast swaths of Europe, Africa, and western Asia. Their maritime prowess was unrivaled and has fascinated historians for centuries. But no matter how many Roman documentaries you watch, it’s still hard to imagine how they lived, or what a journey would have been like in Roman times. Well now, you can experience that firsthand.

Thanks to a project supported by the Donau-Universität Krems, you can embark on a Roman adventure. “Danuvina Alacris”, a modern reconstruction of a “Lusoria”-type Roman ship, is taking volunteers. Lusoria ships were small military vessels of the late Roman Empire that served as troop transports. They once roamed the Danube River, guarding the boundary between the Roman Empire and the “barbarian” lands beyond, which the Romans called “barbaricum”.

The ship itself was built with special care so as to resemble Roman ships as much as possible. The Lusoria ships were nimble on the river waters, but whenever they couldn’t sail properly, they relied on strong rowers. The 2022 Roman cruise will also require participants to pull in some rowing work when necessary.

“Our ship named “Danuvia Alacris” will cover about 40 km a day, which will be rowed and partially sailed, if possible. The crew, which will consist of about 18-20 rowers and a leadership team of 4-5 people, will have an international composition, so the language on the ship will be English,” the project announcement page reads.

It won’t just be going from point A to point B — the organizers announced a series of events around the cruise. In addition, you’ll be living as close to Roman times as possible.

“The crew will change approximately every second week; they will row in Roman clothes (tunic, shoes, etc.). In addition, there will be smaller to larger festivals and interested visitors at the stops of the ship.”

The organizers are still looking for volunteers, who will rotate out of the crew every two weeks. The project will start on July 15th and is expected to end in October 2022. Registrations are now open; for more information, check out the official announcement page.

Annie Jump Cannon: the legend behind stellar classification

It is striking that today we can not only discover but even classify stars that are light-years from Earth — sometimes billions of light-years away. Stellar classification often uses the famous Hertzsprung–Russell diagram, which summarizes the basics of stellar evolution. The luminosity and temperature of stars can teach us a lot about their life journey, as they burn their fuel and change chemical composition.

We know that some stars show spectra dominated by ionized helium and some by neutral helium, some are hotter than others, and the Sun fits in as a not-so-impressive star compared to the giants. Part of that development came from Annie Jump Cannon’s contribution during her long career as an astronomer.

The Hertzsprung–Russell diagram, where the evolution of Sun-like stars is traced. Credits: ESO.

On the shoulders of giantesses

Cannon was born in 1863 in Dover, Delaware, US. When she was 17 years old, thanks to her father’s support, she managed to travel 369 miles from her hometown to attend classes at Wellesley College. It’s no big deal for teens today, but back then, this was an unimaginable adventure for a young lady. The institution offered education exclusively for women, an ideal environment to spark in Cannon an ambition to become a scientist. She graduated in 1884 and started her career at the Harvard Observatory in 1896.

At Wellesley, she had Sarah Whiting as her astronomy professor, and it was Whiting who sparked Cannon’s interest in spectroscopy:

“… of all branches of physics and astronomy, she was most keen on the spectroscopic development. Even at her Observatory receptions, she always had the spectra of various elements on exhibition. So great was her interest in the subject that she infused into the mind of her pupil who is writing these lines, a desire to continue the investigation of spectra.”

Annie Cannon, writing in Whiting’s obituary, 1927.

Cannon had an explorer’s spirit and travelled across Europe, publishing a photography book in 1893 called “In the Footsteps of Columbus”. It is believed that during her years at Wellesley, after the trip, she contracted scarlet fever. The disease affected her ears and she suffered severe hearing loss, but that didn’t put an end to her social or scientific activities. Annie Jump Cannon was known for never missing an American Astronomical Society meeting during her career.

OBAFGKM

At Radcliffe College, she began working more with spectroscopy. Her first work, on the spectra of southern stars, was published in 1901 in the Annals of the Harvard College Observatory. The director of the observatory, Edward C. Pickering, put Cannon in charge of observing the stars that would later form the Henry Draper Catalogue, named after the first person to measure the spectrum of a star.

Annie Jump Cannon at her desk at the Harvard College Observatory. Image via Wiki Commons.

The job didn’t pay much. In fact, Harvard employed a number of women as “computers” to process astronomical data. The women computers at Harvard earned less than secretaries, and this enabled researchers to hire more of them, as men would have needed to be paid more.

Her salary was only 25 cents an hour, a small income for the difficult job of poring over the tiny details of the spectrographs, often possible only with magnifying glasses. She was known for being focused (possibly also influenced by her deafness), but she was also known for doing the job fast.

During her career, she managed to classify the spectra of 225,000 stars. At the time, Williamina Fleming, a Scottish astronomer, was in charge of the women computers at Harvard. She had previously observed 10,000 stars from the Draper Catalogue and classified them with the letters A to N. But Annie Jump Cannon saw the link between the classes and the stars’ temperatures and rearranged Fleming’s classification into the OBAFGKM system. The OBAFGKM system orders the stars from hottest to coldest, and astronomers created a popular mnemonic for it: “Oh Be A Fine Guy/Girl Kiss Me”.

Legacy

“A bibliography of Miss Cannon’s scientific work would be exceedingly long, but it would be far easier to compile one than to presume to say how great has been the influence of her researches in astronomy. For there is scarcely a living astronomer who can remember the time when Miss Cannon was not an authoritative figure. It is nearly impossible for us to imagine the astronomical world without her. Of late years she has been not only a vital, living person; she has been an institution. Already in our school days she was a legend. The scientific world has lost something besides a great scientist.”

Cecilia Payne-Gaposchkin in Annie Jump Cannon’s obituary.
Annie Jump Cannon at Harvard University. Smithsonian Institution @ Flickr Commons.

Annie Jump Cannon was awarded many prizes: she received an honorary doctorate from Oxford University, was the first woman to receive the Henry Draper Medal in 1931, and was the first woman to become an officer of the American Astronomical Society.

Her work in stellar classification was continued by Cecilia Payne-Gaposchkin, another dame of stellar spectroscopy. Payne improved the system with quantum mechanics and described what stars are made of.

Very few scientists have had as competent and exemplary a career as Cannon. Payne continued the work Cannon left behind; Payne’s advisor, Henry Norris Russell, then improved on it with minimal citation. From that, we got today’s basic understanding of stellar classification. Cannon’s beautiful legacy has recently been rescued by other female astronomers who know the importance of her life’s work.

British archeologists uncover 5,000-year-old stone drum in the grave of three children

One of the “most significant ancient objects ever found in the British Isles”, a stone-carved drum, will be put on display at the British Museum starting next week.

The 5,000-year-old drum, carved from chalk. Image credits The British Museum.

Art is hard to define, but it can be very easy to recognize. A 5,000-year-old drum, carved from a block of chalk and uncovered in Yorkshire in northern England in 2015, definitely seems to fit the bill. According to Neil Wilkin, the curator of the exhibition “The World of Stonehenge” at the British Museum, this is one of the most remarkable archeological discoveries ever made in Britain.

The piece will go on display at the exhibition, which opens February 17, for the public to enjoy and discuss.

Stone and roll

“This is a truly remarkable discovery, and is the most important piece of prehistoric art to be found in Britain in the last 100 years,” said Neil Wilkin.

According to the Museum, this drum is one of the most significant objects ever discovered in the British Isles. By all indications, it is not a functional musical instrument — as it is carved from a single piece of chalk and has no internal resonance cavity — but was, rather, created as a talisman or artistic sculpture.

The drum was discovered in the grave of three children who were buried close together, either touching or holding hands. It was placed above the head of the eldest child, together with a chalk ball and a pin made from polished bone. The burial site lies around 240 miles (380 kilometers) from Stonehenge, near the village of Burton Agnes.

It is one of only four known examples of its kind. The other three, known as the Folkton Drums, were discovered in 1889 at the burial site of a single child around 15 miles (24 kilometers) from the site at Burton Agnes; all three are part of the British Museum’s collections and are currently on loan to the Stonehenge Visitor Centre.

These drums are “some of the most famous and enigmatic ancient objects ever unearthed in Britain”, according to the Museum, with the most recent one being “one of the most elaborately decorated objects of this period found anywhere in Britain and Ireland”.

Radiocarbon dating places the creation of the drum between 3005 and 2890 BC, the same time as the first construction phase of Stonehenge. As such, it provides invaluable cultural context regarding that time.

“Analysis of its carvings will help to decipher the symbolism and beliefs of the era in which Stonehenge was constructed,” said Wilkin.

These drums showcase the fact that communities across Britain and Ireland maintained quite significant levels of contact and communication, as they shared artistic styles of expression and, as suggested by the discovery of these objects in burial sites, spiritual beliefs.

The drums are all sculpted out of local chalk and adorned with stylized human faces and geometric patterns. A pair of concentric circles with pairs of eyes on each drum resembles a human face.

While it is still unclear what the purpose of these drums was — ritual purposes are definitely involved here — archaeologist Anne Teather notes that they may have been teaching aids or items meant to maintain a standardized unit of measurement. She notes that the circumferences of the drums form whole-number divisions of ten long feet (ten, nine, and eight times, respectively); the long foot was a unit of distance in wide use in Stone Age Britain.
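
To get a feel for that arithmetic, here is a quick back-of-the-envelope sketch. This is our illustration, not a calculation from the study, and the value of the Neolithic “long foot” (roughly 0.322 meters) is taken from published work on these drums and should be treated as an assumption:

```python
import math

# Sketch of the whole-number-division idea behind the drums.
# ASSUMPTION: the Neolithic "long foot" is about 0.3219 m, a value taken
# from published work on these drums; treat it as illustrative.
LONG_FOOT_M = 0.3219
BASE_LENGTH_M = 10 * LONG_FOOT_M  # "ten long feet", about 3.22 m

# Each drum's circumference divides the base length a whole number of
# times: ten, nine, and eight times, respectively.
for n in (10, 9, 8):
    circumference = BASE_LENGTH_M / n
    diameter = circumference / math.pi
    print(f"1/{n} of ten long feet: circumference {circumference * 100:.1f} cm, "
          f"diameter {diameter * 100:.1f} cm")
```

If the scheme is right, winding a cord ten, nine, or eight turns around the respective drums would always pay out the same ten-long-feet length, letting a standard measure be reproduced without a ruler.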

While it’s very likely that other such drums were fashioned from more accessible and more easily processed materials such as wood, these examples were carved out of stone (likely for ceremonial purposes), which helped them survive through the ages.

Inca-era human ‘vertebrae on posts’ may have been one last-ditch effort to save their ancestors’ remains from Conquistador looting

Some of the vertebrae-on-reed posts that were found in the Andes. Credit: C. O’Shea/Antiquity Publications Ltd

Archeologists were startled to find over 200 reed posts strung with human vertebrae while exploring tombs in Peru. These peculiar burial customs, which date from the 16th century, have invited all sorts of speculation as to their purpose. Although at first glance this manipulation of human remains looks like a desecration of fallen enemies, a new study suggests the opposite. According to archeologists from the UK, Colombia, and the USA, these ‘vertebrae-on-posts’ are a response to the tomb destruction performed extensively by Spanish conquistadors during the early colonization of South America – a desperate act by local Andean indigenous communities to salvage the remains of their ancestors.

The odd reed posts threaded with vertebrae were first uncovered in 2012 during an archaeological expedition to Peru’s Chincha Valley, inside the ruins of stone burial chambers called chullpas. Among the team was Jacob Bongers, who at the time was still a graduate student. Over the years, Bongers would return to the site, examining chullpas across the valley, once part of a prosperous nation known as the Chincha Kingdom, before its incorporation into the mighty Inca Empire during the Late Horizon period (after 1400 AD).

Chullpas. Credit: J.L. Bongers/Antiquity

Now an archeologist at the Sainsbury Research Unit at the University of East Anglia, Bongers has documented 192 individual examples of vertebrae-on-posts, with bones belonging to both children and adults. One of the sets even had a skull threaded onto it. There is no evidence of cut marks, which suggests the bones were placed on the posts after the skeletal remains were exposed, and the vertebrae aren’t strung in their natural order.

At first, the scientists thought the threaded vertebrae were a bad joke by looters. But as they kept finding more, it became clear that a systematic and unique burial practice was unfolding before their eyes. Interviews with locals who had encountered similar burials confirmed that the vertebrae-threaded posts weren’t made by looters and are likely very old. Just how old, no one could tell them.

In a study published this week in the journal Antiquity, Bongers and international colleagues performed radiocarbon dating on some of the samples, finding they are about 500 years old, dating between 1520 and 1550 C.E. This timeline places the remains in a brutal historical context, when early European colonists were actively campaigning to obliterate Inca culture, particularly Andean religious practices that were seen as heretical. To Bongers, this context may explain the reed posts in the chullpas: remains in an advanced state of decomposition were deliberately strung on wooden poles so they could be transported to other, more remote tombs, where they would be spared the foreigners’ desecration.

Credit: J. Gómez Mejía/Antiquity

The Colonial period was devastating to the Chincha, whose population plummeted from over 30,000 heads of households in 1533 to just 979 households in 1583 through a combination of disease, famine, and murder. Tomb looting was also widespread, as chronicled by the Spanish historian Pedro Cieza de León, who wrote that “there was an enormous number of graves in this valley in the hills and wastelands. Many of them were opened by the Spaniards, and they removed large sums of gold”.

 “When the Spanish came in and looted these tombs, they are ripping up textile bundles and looking for gold, they are looking for silver,” Bongers told Haaretz. “You can imagine it being a fairly violent act, bodies and body parts are being scattered about.”

The looting was seen as a great transgression, perhaps much greater than it would have been in other cultures, given the special relationship Andean societies had with their dead. Starting in the second millennium BCE, and perhaps much earlier, cultural traditions in the Andes often involved the removal and modification of parts of dead human bodies. This includes removing the hands from old remains and depositing them elsewhere as offerings, as well as trophies like Nazca heads, drums made from flayed human skins, skulls carved into drinking cups, and more.

It was also common to keep the mummified remains of family members out in the open, from common households to palaces. These open and public tombs invited the community to venerate their ancestors by placing offerings or, on some occasions, parading the remains during festivals.

To European conquistadors and their Judeo-Christian mindset, these were unacceptable spectacles of heresy.

“In this vein, we argue that after chullpas were looted—possibly as part of European campaigns to extirpate Indigenous religious practices—local groups re-entered these graves to assemble disaggregated human remains by threading posts through vertebrae. As looting became widespread and epidemics and famine decimated the Chincha population in the sixteenth century AD, it is possible that communities across the Chincha Valley coordinated to string vertebrae on reeds to reconstruct the dead. This social process may have served as a means of restoring the potency of the formerly corrupted dead,” the authors wrote in their study.

This interpretation is, for the time being, a sort of educated speculation. The researchers hope to uncover more insights using genetic sequencing of remains from tombs where vertebrae strung on posts were found, as well as elsewhere.

What’s behind the mystery of Easter Island’s statues?

Credit: Pixabay.

Located smack in the middle of the South Pacific Ocean, Easter Island is one of the most enigmatic places in the world. Even to this day, no one is sure how the first humans on the island managed to paddle at least 3,600 kilometers – the shortest distance from mainland South America. But the most mysterious feature of Easter Island is the nearly 1,000 monolithic statues that dot its surface.  

We still don’t know exactly how the islanders moved the human-head-on-torso statues, known as “moai” in the native language. Why the early Easter Islanders undertook this colossal effort deep in their isolation is also a mystery.

Unfortunately, the natives did not keep a written record and the oral history is scant. But recent research is starting to fit at least some of the pieces into this puzzle, providing clues as to the purpose and significance of these stone giants that have stirred the public’s imagination for so long.

A most intriguing island and people

Credit: Wikimedia Commons.

Easter Island, or Rapa Nui as it is known by the indigenous people, is truly a unique place. Although Pacific islands conjure the image of a tropical paradise, the triangular Easter Island is a very rugged landscape, lacking coral reefs and idyllic beaches. Geologically speaking, Easter Island is an amalgamation of three volcanoes that erupted sometime around 780,000 to 110,000 years ago, so it’s an extremely young island. It lies near the western end of a 2,500-kilometer-long chain of underwater volcanoes called the Easter Seamount Chain that resembles the classic Hawaiian hot spot track.

The original colonizers of the island are thought to have voyaged 2,000 kilometers from southeastern Polynesia in open canoes, or as far as 3,600 kilometers from mainland Chile. The most recent archeological evidence suggests colonization didn’t occur until about 1200 C.E. From that time until Dutch explorer Jacob Roggeveen first spied it on Easter Day 1722 – hence the island’s name – the people of Easter Island lived in absolute isolation from the outside world. No one from Easter Island sailed back to the mainland, nor did anyone from the mainland come to visit.

Once these people arrived at the island, that was it. They were stuck there and had to work with the limited resources they had at their disposal — and it wasn’t much.  The volcanic material meant much of the soil was unusable for agriculture, but the natives did manage to grow yams, sweet potatoes, bottle gourds, sugar cane, taro, and bananas.

Intriguingly, although the island is tiny (at 164 square kilometers, it is slightly smaller than Washington, D.C.), people were segregated into multiple clans that maintained their distinct cultures. Archeological evidence shows stylistically distinct artifacts in communities only 500 meters apart, while DNA and isotope analyses of the natives’ remains also showed that they didn’t stray too far from their homes, despite the small population size.

Speaking of which, researchers disagree about the size of the island’s population. Some estimate the population peaked at about 15,000, before crashing to just a few thousand prior to European contact. Most estimates, however, hover at around 3,000 by 1350 C.E., a level that remained more or less stable until Roggeveen spotted the island, after which the population started decreasing as slavery and mass deportation followed shortly thereafter.

But what seems certain is that the Easter Island civilization was in decline well before Europeans first set foot on its shores. Easter Island used to be covered by palm trees for 30,000 years, as many as 16 million of them, some towering 30 meters high — but it is largely treeless today. Early settlers burned down woods to open spaces for farming and began to rapidly increase in population. Besides unsustainable deforestation, there is evidence that palm seed shells were gnawed on by rats, which would have badly impacted the trees’ ability to reproduce.

Once most of the trees were gone, the entire ecosystem rapidly deteriorated: the soil eroded, most birds vanished along with other plant life, there was no wood available to build canoes or dwellings, people started starving and the population crashed. When Captain James Cook arrived at the island in 1774, his crew counted roughly 700 islanders, living miserable lives, their once mighty canoes reduced to patched fragments of driftwood.

For this reason, the fate of Easter Island and the self-destructive behavior of its populace have often been called “ecocide”, a cautionary tale that serves as a reminder of what can happen when humans use their local resources unsustainably. However, more recent research suggests that deforestation was gradual rather than abrupt. And, in any event, archeological evidence shows that the Rapanui people were resilient even in the face of deforestation and remained healthy until European contact, which contradicts the popular view of a cultural collapse prior to 1722.

So, perhaps the Rapanui weren’t as foolish and reckless as some have suggested. After all, they not only managed to flourish for centuries on the most remote inhabited island in the world but also built some of the most impressive monuments in history: the amazing moai (pronounced mo-eye).

What we know about the mysterious moai

Moai with fully visible bodies. Credit: Pixabay.

Archeologists have documented 887 of the massive statues, known as moai, but there may be as many as 1,000 of them on the island. These statues, carved from volcanic rock, can weigh around 80 tons and reach 10 meters (32.8 ft) in height, though the average is around half that. The largest moai, dubbed “El Gigante”, weighs around 150 tons and towers at an impressive 20 meters (65.6 ft), while the smallest measures only 1.13 meters (3.7 ft). Each moai, carved in the form of an oversized male head on a torso, sits on a stone platform called an ahu.

“We could hardly conceive how these islanders, wholly unacquainted with any mechanical power, could raise such stupendous figures,” the British mariner Captain James Cook wrote in 1774.


More than 95% of the moai were carved in a quarry at the volcano Rano Raraku. This quarry is rich in tuff, compressed volcanic ash that is easy to carve with limited tools. The natives had no metal at all and only used stone tools called toki.

From the quarry, the heavy statues were transported to the coast, often kilometers away. The islanders likely rolled the massive monoliths on wooden logs or used wooden sleds pulled by ropes. However they managed to transport the statues, they did so very gently, without breaking the nose, lips, and other features. Accidents did sometimes happen, though, as there are a few statues with broken heads, as well as statues lying at the bottom of slopes.

Eyeholes would not be carved into the statues until they reached their destination. In the Rapanui civilization’s later years, a pukao of red scoria stone from the Puna Pau quarry would sometimes be placed on the head of the statue, a sign of mana (spiritual power). The final touch was a pair of coral eyes, completing the moai and turning it into an ‘ariŋa ora, or living face.

However, half of all identified moai, nearly 400 statues, were found still idling at the Rano Raraku quarry. Only a third of the statues reached their final resting place, while around 10% were found lying ‘in transit’ outside Rano Raraku. It’s unclear why so many moai never left the quarry after the craftsmen went to such lengths to carve them, but the sheer difficulty of moving such large blocks of stone surely played a part.

Most of the transported moai are believed to have been carved, moved, and erected between 1400 and 1600 C.E. By the time Cook arrived at the island, the natives seem to have stopped carving such statues — or at least not nearly at the rate they used to — and were neglecting those still standing.

What were the moai for?

Many of the transported moai are found on Easter Island’s southeast coast, positioned with their backs to the sea. The consensus among archaeologists is that they represent the spirits of ancestors, chiefs, and other high-ranking males who made important contributions to Rapanui culture. However, the statues don’t capture the defining features of individuals, as you’d see in Roman or Greek sculptures of, say, Caesar or Alexander the Great. Instead, they’re all more or less standardized in design, representing a generic male head with exaggerated features.

Carl Lipo, an anthropologist at Binghamton University in central New York, doesn’t buy into the idea that the moai represent ancestors. No ahu and statues are found on hilltops, the obvious place where you’d expect monuments meant to send a symbolic message. The moai are instead placed right next to where the natives lived and worked, which suggests they may be landmarks positioned near a valuable resource.

Lipo and colleagues mapped the location of the moai alongside the location of various important resources, such as farmlands, freshwater, and good fishing spots. The statistical analysis suggests the moai sites were most associated with sources of potable water.

“Every single time we found a big source of freshwater, there would be a statue and an ahu. And we saw this over and over and over again. And places where we didn’t find freshwater, we didn’t find statues and ahu,” Lipo told Scientific American, adding that the statues weren’t exactly markers that communicate “this is where you can find drinking water”. That would have been highly impractical considering the Herculean task of carving and moving the statues. Instead, the statues were placed where they are since that’s where people could find the resources they needed to survive.

Since there were many culturally distinct tribes on the small island and there is a great deal of variation in terms of size for the statues, the moai could also serve to signal status to neighboring communities. Large statues are costly, meaning the biggest moai could be regarded as proof that a particular group of tribesmen is clever and hard-working.

Another line of thought suggests the statues are sacred sites of worship. When Roggeveen arrived on the island in 1722, he described in his ship log how he witnessed natives praying to the statues.

“The people had, to judge by appearances, no weapons; although, as I remarked, they relied in case of need on their gods or idols which stand erected all along the sea shore in great numbers, before which they fall down and invoke them. These idols were all hewn out of stone, and in the form of a man, with long ears, adorned on the head with a crown, yet all made with skill: whereat we wondered not a little. A clear space was reserved around these objects of worship by laying stones to a distance of twenty or thirty paces. I took some of the people to be priests, because they paid more reverence to the gods than did the rest; and showed themselves much more devout in their ministrations. One could also distinguish these from the other people quite well, not only by their wearing great white plugs in their ear lobes, but in having the head wholly shaven and hairless.”

Finally, the giant stone sculptures may have served an important role in farming — not for astronomy purposes as seen with other megalithic sites like Stonehenge but in the very literal sense. The soil on Easter Island is highly prone to erosion, especially in the absence of the once plentiful woods. But when Jo Anne Van Tilburg, an archeologist and head of the Easter Island Statue Project, sampled the soil around quarries, she found it was unexpectedly fertile, high in calcium and phosphorus.

“Our analysis showed that in addition to serving as a quarry and a place for carving statues, Rano Raraku also was the site of a productive agricultural area,” Van Tilburg said in a statement.

“Coupled with a fresh-water source in the quarry, it appears the practice of quarrying itself helped boost soil fertility and food production in the immediate surroundings,” said Dr. Sarah Sherwood, a geoarchaeologist and soils specialist at the University of the South in Sewanee and a member of the Easter Island Statue Project.

In related research, anthropologist Mara Mulrooney of the Bernice Pauahi Bishop Museum in Honolulu analyzed various archeological sites on the island and found the Rapanui people cultivated gardens of yams, sweet potatoes, taro and other crops in enclosures with stones and boulders strategically placed on the soil. The rocks not only protected the plants from the wind and deterred weed growth but also boosted soil nutrients thanks to the weathering of minerals.

When Van Tilburg and Sherwood excavated two of the 21 partially buried statues on the slopes of Rano Raraku, they found that each statue was etched with crescent shapes and other figures on its back. A carved human head found resting against the base of one of the statues suggests that these moai may have served a ceremonial purpose of some kind, perhaps related to plant growth.

Carved designs on the back of an Easter Island statue suggest that the stone creation was used in soil fertility rituals, researchers say. Credit: Easter Island Project.

If quarry sites were the main farming plots, this would explain why so many statues were never moved from their origin. Perhaps the islanders were not aware that the volcanic rock was making the soil fertile thanks to the minerals it contains, and instead attributed the plant growth to divine intervention. As such, the statues may have served a double role, as ritual objects and as fertilizer.

The culture of Easter Island and why the heads are there is something we may never fully understand, but with each archeological trip, we are getting closer to uncovering the secrets of the Rapanui.

How the ancient Romans built roads to last thousands of years

An ancient Roman road leading into the Arc of Trajanus in Timgad, Batna, Algeria. Credit: Travel.com

At its zenith under the reign of Septimius Severus in 211 C.E., the mighty Roman Empire stretched over much of Europe and beyond, from the Atlantic to Mesopotamia and from modern-day Scotland to the Sahara and the Arabian Gulf. Crucial to maintaining dominion over such a large empire was Rome’s huge and intricate network of roads, which remained unparalleled even a thousand years after its collapse.

It is estimated that the Roman road network was more than 400,000 kilometers long, out of which over 80,000 km were stone-paved. Like arteries, these marvelous feats of engineering ferried goods and services rapidly and safely, connecting Rome, “the capital of the world”, to the farthest stretches of the empire, and facilitated troop movements to hastily assemble legions for both border defense and expansion. Encompassing both military and economic outcomes, roads were truly central to Rome’s political strategy.

While the Romans didn’t invent road building, they took this Bronze Age infrastructure to a whole new level of craftsmanship. Many of these roads were so well designed and built that they are still the basis of highways that we see today. These include Via Flaminia and Britain’s Fosse Way, which still carry car, bike, and foot traffic. The answer to their longevity lies in the precision and thoroughness of Roman engineering.

Roman road types and layout

Just like today, the Roman transportation network consisted of various types of roads, each with its pros and cons. These ranged from small local dirt roads to broad, stone-paved highways that connected cities, major towns, and military outposts.

According to Ulpian, a Roman jurist active in the late 2nd and early 3rd centuries C.E. and one of the greatest legal authorities of his time, there were several major types of roads:

  • Viae publicae. These were public or main roads, built and maintained at the expense of the state. These were the most important highways that connected the most important towns in the empire. As such, they were also the most traveled, dotted by carts full of goods and people traveling through the vast empire. But although they were funded by the state, not all public roads were free to use. Tolls were common at key points of crossing, such as bridges and city gates, enabling the state to collect import and export taxes on goods.
  • Viae militares. Although Roman troops marched across all types of roads (and terrain, for that matter), they also had their own dedicated corridors in the road network. The military roads were very similar to public roads in design and building methods, but they were specifically built and maintained by the military. They were built by legionaries and were generally closed to civilian travel.
  • Viae privatae. These were private roads, built and maintained by citizens. They were usually dirt or gravel roads, since local estate owners or communities possessed neither the funds nor the engineering skills to match the quality of public roads.
  • Viae vicinales. Finally, there were secondary roads that led through or towards a vicus, or village. These roads ran into high roads or into other viae vicinales and could be either public or private.

The first and most famous Roman road was Via Appia (the Appian Way), which linked Rome to Capua, covering 132 Roman miles, or 196 kilometers. Via Appia was highly typical of how the Romans thought about building roads: it was very much a straight line that all but ignored geographical obstacles. The stretch from Rome to Terracina was essentially one 90-km-long straight line.

Map of major Roman highways in the Italic peninsula.

Other important Roman roads of note include Via Flaminia, which went from Rome to Fanum (Fano), Via Aemilia from Placentia (Piacenza) to Ariminum (Rimini), Via Postumia from Aquileia to Genua (Genoa), and Via Popillia from Ariminum (Rimini) to Patavium (Padua) in the north and from Capua to Rhegium (Reggio Calabria) in the south.

Map of Roman Empire at its height in 125 C.E., showing the most important roads. Credit: Wikimedia Commons.

These roads were typically named after the Roman censor who paved them. For instance, Via Appia was named after censor Appius Claudius Caecus, who began and completed the first section as a military road to the south in 312 B.C.E., during the Samnite Wars, when Rome was still a fledgling city-state on a path to dominate the Italic peninsula.

While they had curved roads when it made sense for them, the Romans preferred taking the straightest path possible between two geographical points, which led to intriguing zig-zag road patterns if you zoom out far enough.

Building a straight road, especially over large distances, is a lot more technically challenging than meets the eye. Mensores were essentially the equivalent of today’s land surveyors, tasked with determining the most appropriate placement and path for a new road, depending on the terrain and locally available construction materials. These surveyors were well trained and employed standardized practices.

For instance, the incline of a road could not exceed 8 degrees, in order to facilitate the movement of heavy carts packed with goods. To measure slopes, mensores employed a device called a chorobates, a 6-meter ruler with a groove on top into which water was poured. Road construction often started simultaneously from two opposing points that eventually joined in the middle. To draw perpendicular lines on the landscape and make sure the roads were straight and actually met, the surveyors employed the groma, an ancestor of modern surveying instruments, which consisted of a cross with threads and lead weights tied to its four ends. When one weight lined up correctly with the one in front of it, the surveyor knew that the path of the road was straight.
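
For a sense of what that 8-degree ceiling means in practice, here is a quick conversion from slope angle to gradient. This is our own back-of-the-envelope illustration, not a Roman calculation:

```python
import math

# Convert the 8-degree maximum incline into a gradient (rise over run).
# This is an illustration of the limit quoted above, not a Roman method.
MAX_INCLINE_DEG = 8

grade = math.tan(math.radians(MAX_INCLINE_DEG))
print(f"An {MAX_INCLINE_DEG}-degree slope is a grade of about {grade:.1%},")
print(f"i.e. roughly {grade * 100:.0f} m of climb per 100 m of road.")
```

That works out to roughly a 14% grade, which is generous by modern standards: today’s highways are usually kept below grades of about 6-8%.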

Mistakes were bound to occur, which explains the small changes in direction that archeologists have found when excavating these ancient roads. When roads inevitably had to bend due to the terrain, they became much wider at the bends so that carriages traveling toward each other could pass safely without interlocking their wheels.

Roman roads purposely avoided difficult terrain such as marshes or the immediate vicinity of rivers. When they had to cross a river, Roman engineers built wooden or stone bridges, some of which survive and are still in use to this day, like the 60-meter-long Pons Fabricius, which was built in 62 B.C.E. and connects an island in the Tiber River with the opposite bank. Other times, tunnels were dug through mountains, in the spirit of straight Roman roads.

How Roman roads were made

After completing all the geodetic measurements and projections, the Roman surveyors marked the path of the future road with stakes. All trees, shrubs, and other vegetation that might interfere with the construction of the road were cleared. Marshes were drained and mountains were cut through, if needed.

The average width of an ancient Roman road was around 6 meters (20 ft.), although some large public roads could be much wider.

According to the writings of Marcus Vitruvius Pollio, an outstanding Roman architect and engineer who lived in the 1st century B.C.E., Roman public roads consisted of several layers:

  • Foundation soil – depending on the terrain, builders either dug depressions on level ground or installed special supports in places where the soil subsided. The soil was then compacted and sometimes covered with sand or mortar to provide a stable footing for the multiple layers above.
  • Statumen – a layer laid on the compacted foundation soil, consisting of large rough stone blocks. Gaps between the blocks allowed drainage to be carried through. The thickness of this layer ranged from 25 to 60 cm.
  • Rudus – a 20-cm-thick layer of crushed rock, about 5 cm in diameter, set in cement mortar.
  • Nucleus – a concrete base layer made of cement, sand, and gravel, about 30 cm thick.
  • Summum dorsum – the final layer, consisting of large rock blocks about 15 cm thick. More often, though, fine sand, gravel, or earth was used for the top layer, depending on the resources at the workers’ disposal. This layer had to be soft and durable at the same time. Paved roads were very expensive and were typically reserved for sections located near and inside important cities. Where pavement (pavimentum) was used, large cobblestones of basalt lava were typical in the vicinity of Rome. (A rough tally of these layer thicknesses follows below.)
The main layers of a Roman road.
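
Adding up the thicknesses quoted above gives a ballpark figure for how deep a first-class paved road ran. A minimal sketch, using only the numbers from the list (the statumen varies, so it is carried as a range):

```python
# Tally of the layer thicknesses quoted above, in centimeters.
# The statumen varies from 25 to 60 cm, so it is carried as a range.
layers_cm = {
    "statumen (stone blocks)": (25, 60),
    "rudus (crushed rock in mortar)": (20, 20),
    "nucleus (concrete base)": (30, 30),
    "summum dorsum (top layer)": (15, 15),
}

low = sum(lo for lo, _ in layers_cm.values())
high = sum(hi for _, hi in layers_cm.values())
print(f"Total roadbed depth: roughly {low}-{high} cm")
# -> Total roadbed depth: roughly 90-125 cm
```

In other words, a major Roman road was around a meter of engineered material from foundation to surface.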

This layer-cake structure ensured that the roads would be very sturdy. Roman roads also had a slightly convex surface, a clever design that allowed rainwater to drain to the sides of the road or into drainage ditches, thereby keeping the road free of puddles.

Upkeep was also very important. In fact, the Romans were so meticulous about maintaining their roads — which they considered the backbone of their empire — that they had regularly placed markers along the side of the road, indicating who was in charge of repairing that particular section of the road and when the last repair was made. That’s remarkably modern accountability-based upkeep.

Swift travel and easy navigation

Rome’s unparalleled network of roads was crucial for both expanding and maintaining its borders, and for allowing the economy to flourish. Rome’s legions could travel 25 to 50 kilometers (around 15 to 31 miles) a day, allowing them to respond relatively quickly to outside threats or internal uprisings. This meant that costly garrison units at frontier outposts could be kept to a minimum, as reinforcements could be mustered within weeks or even days.

Imperial Rome even had a postal service, which exploited the road network to its fullest. By switching fatigued horses for fresh ones, a postman could carry a message up to 80 kilometers toward its destination within a single day. If the message was urgent, maybe even farther. For the slow-paced world of antiquity, this was incredibly fast and efficient communication, making the state far more agile than its ‘barbarian’ neighbors.

A Roman milestone in Portugal.

Besides the military, Rome’s roads were used by travelers from all parts of society, from slaves to emperors. Although traveling across the empire without maps might seem daunting, travelers could easily make their way to their destination thanks to large pillars that dotted the side of the road. These milestones, which could be as tall as four meters and weigh two tons, indicated who built or was tasked with maintaining the road, as mentioned earlier, but also informed travelers how far the nearest settlement was. The pillars were modeled after a marble column covered in gilded bronze erected in the Roman Forum in 20 B.C.E. under Augustus. It represented the starting point for all the roads in the empire, hence the phrase ‘All roads lead to Rome’.

All important Roman roads and notable stopping places along them were cataloged by the state. The catalog was updated regularly in the form of the Antonine Itinerary, which at its peak contained 225 lists. Each list, or iter, gives the start and end of a route and its total mileage, followed by the intermediate points with the distances in between.

There were also maps — but not the landscape kind you’re imagining. Instead, these were schematic route guides known as itinerarii, which originally only listed the cities along a route but gradually grew pretty complex. The itinerarii came to include roads, each with its own number and city of origin, and how they branched, alongside lengths in Roman miles (one Roman mile equaled 1,000 paces, or 0.92 English miles) and the main intermediate cities and stops along the way.
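
To make the structure of these route lists concrete, here is a minimal sketch of how a single iter could be modeled. The conversion factor follows from the text (one Roman mile is about 0.92 English miles, or roughly 1.48 km), but the Iter class and the route data below are our own invention for illustration and do not come from the Antonine Itinerary:

```python
from dataclasses import dataclass

ROMAN_MILE_KM = 1.48  # 1,000 paces, or about 0.92 English miles

@dataclass
class Iter:
    """One itinerary list: a route's start, end, and intermediate stops."""
    start: str
    end: str
    stops: list[tuple[str, int]]  # (place, Roman miles from previous point)

    @property
    def total_miles(self) -> int:
        return sum(distance for _, distance in self.stops)

    def total_km(self) -> float:
        return self.total_miles * ROMAN_MILE_KM

# Hypothetical example in the spirit of the Itinerary. The stops are real
# towns on the Via Appia, but the individual distances are invented; only
# the 132-mile total matches the Rome-Capua figure quoted earlier.
route = Iter("Roma", "Capua",
             [("Aricia", 16), ("Tarracina", 43), ("Capua", 73)])
print(f"{route.start} -> {route.end}: {route.total_miles} Roman miles "
      f"(about {route.total_km():.0f} km)")
```

Run as-is, this prints about 195 km for the 132-mile route, closely matching the roughly 196 kilometers quoted earlier for the Rome-Capua stretch of Via Appia.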

Roman roads even had service stations

A well-preserved section of the Appian Way. Credit:  Carole Raddato.

Every 15-20 kilometers (around 9-12 mi) or so along a public road, it was common to find rest stops where postmen could change horses for a fresh mount. These government stables were known as mutationes. Alongside these establishments, travelers could expect to find mansiones, an early version of the inn, where people could purchase basic lodgings for themselves and their animals, as well as eat, bathe, repair wagons, and even solicit prostitutes. At busier intersections, these service stations morphed into small towns, complete with shops and other amenities.

Roman roads were surprisingly safe

The flow of trade and the taxes that went with it were crucial to the Roman empire, so any disruption caused by bandits and other roadside outlaws was unacceptable. Special detachments of the army known as stationarii and beneficiarii regularly patrolled public roads and manned police posts and watchtowers to monitor traffic. They also doubled as toll collectors.

Roman roads tended to roll through sparsely populated areas, and special attention was given to clearing vegetation and digging ditches along the sides of the road. This reduced the cover that bandits could use to ambush carts and law-abiding citizens.

To this day, hundreds if not thousands of routes across Europe and the Middle East are built right on top of old Roman roads that have remained in use throughout the ages. Although they suffered major deterioration due to neglect, Roman roads continued to serve Europe throughout the Middle Ages. In fact, Roman road-building technology wasn’t surpassed until the late 19th century, when Belgian chemist Edmund J. DeSmedt laid the first true asphalt pavement in front of the city hall in Newark, New Jersey. Of course, Roman roads would be totally impractical for today’s busy car traffic, but one can only stand in awe at their durability, in stark contrast to modern roads that quickly form potholes after a mild winter.

Stone toilet in Israel shows that the rich and powerful in antiquity were suffering from parasites

Despite their advanced sanitation systems, the ancient elites of Jerusalem were plagued by intestinal parasites, new research reports.

The 2,700 year-old toilet. Image credits Yoli Schwartz / The Israel Antiquities Authority.

The findings are drawn from an archaeological site at the ancient Armon Hanatziv royal estate in Jerusalem. The site lies close to the Dead Sea, to the north of today’s Bethlehem. Analysis of soil samples taken from an ancient toilet found that residents of the estate harbored several intestinal parasites, as evidenced by the discovery of parasitic eggs in the samples.

Some of the parasite finds described in this study are among the earliest of their kind ever made in Israel.

Egg surprise

“These are durable eggs, and under the special conditions provided by the cesspit, they survived for nearly 2,700 years,”  said Dafna Langgut of Tel Aviv University and the Steinhardt Museum of Natural History, a leading researcher in the emerging field of archeoparasitology, and sole author of the study. “Intestinal worms are parasites that cause symptoms like abdominal pain, nausea, diarrhea, and itching. Some of them are especially dangerous for children and can lead to malnutrition, developmental delays, nervous system damage, and, in extreme cases, even death.”

The findings go a long way toward helping us understand the daily habits of the people who once lived in this area, and how ancient people dealt with (or suffered from) infectious disease. The site is particularly valuable in this regard because it showcases the lives of the very wealthy, who most likely enjoyed the best lifestyle — in regards to resources, practices, and habits — of their time. Sites like this are also relatively rare sources of this type of evidence.

For example, Langgut explains that prior research had compared fecal parasites in hunter-gatherer and farming communities here and elsewhere, helping us better understand what this transition looked like for the people at the time.

One particularly important event for archeoparasitology (the study of parasites throughout human history) is the domestication of animals, after which the number of parasitic infections in farming communities rose sharply. Hunter-gatherers were generally exposed to fewer parasites and infectious diseases on account of their nomadic lifestyles — this, Langgut adds, is still the case today.

According to the paper, the area Israel occupies today — part of the historical Fertile Crescent — was probably among the first where human populations suffered from wide-scale intestinal parasitic infection. Various ancient texts found throughout Israel reference such diseases.

Excavations at the ruins of Armon Hanatziv, or the Commissioner’s Palace (which dates back to the mid-7th century BCE, sometime between the reigns of King Hezekiah and King Josiah), began in 2019-2020.

Pollen found in samples taken from the site suggests that a garden of fruit trees and decorative plants existed around or next to the estate. Together with the lavish architecture and evidence of quality furnishings found at the site, this showcases the sheer level of wealth that was concentrated at Armon Hanatziv.

During excavations in the garden, archaeologists from the Israel Antiquities Authority also discovered the remains of a primitive toilet, consisting of a large water reservoir and a cubic limestone slab with a hole drilled in the center. Pollen was found in this structure as well, so the team believes it was built either in a small room with windows or in one without a roof, to ensure better ventilation. It was likely constructed in the garden, away from the main building, so that the plants could mask some of the smell.

Toilets were quite a luxury at the time. The earliest examples of toilets in Israel all date to the Late Bronze Age and have been found in palace areas, indicative of their rarity and cost. Because of this, there have been relatively few opportunities to study the contents of toilets for parasites. Only two such studies had been carried out before, according to Langgut, one of which reported the presence of intestinal parasites.

Archaeologists collected 15 samples from the Armon Hanatziv, alongside a few controls from the area. The parasitic eggs were chemically extracted and studied under a microscope to determine their species and measure them. Langgut found eggs of four different species in six of the samples — whipworm, beef/pork tapeworm, roundworm, and pinworm. She adds that it’s the single earliest record of roundworm and pinworm in Israel.

Whipworm and roundworm eggs were the most common in the samples. None of the four control samples yielded any eggs, which ruled out outside contamination of the toilet.

“It is possible that as early as the 7th century BCE, human feces were collected systematically from the city of Jerusalem in order to fertilize crops grown in the nearby fields,” Langgut wrote. “The inhabitants were forced to farm inhospitable rock terrain and were told which type of crop to grow. Additionally, the type of fertilizer used might have also been dictated by the Assyrian economy [at this time, Israel was under Assyrian rule].”

Human feces can act as a useful and efficient fertilizer. Today, however, they are composted for a few months before use to limit the risk of any viable parasite eggs surviving. It’s very likely that people living in the area at that time did not follow this practice, which allowed parasites to spread throughout the community. Langgut adds that the presence of tapeworm eggs indicates that the inhabitants of the palace were eating poorly cooked or raw beef or pork, as these are “the only meats that carry the parasite”.

“While the mere existence of something as rare as a toilet installation seems to indicate that at least some ancient Jerusalemites enjoyed a relatively high level of sanitation, the evidence of intestinal parasite eggs suggests just the opposite,” she concludes. “The presence of indoor toilets may have been more a matter of convenience than an attempt to improve personal hygiene. A toilet was a symbol of wealth, a private installation that only the rich could have afforded.”

The paper “Mid-7th century BC human parasite remains from Jerusalem” has been published in the International Journal of Paleopathology.

The sordid underbelly of Christmas past

When English Puritans outlawed Christmas in 1647, it was not without good reason. When American Puritans, in turn, outlawed Christmas in Massachusetts between 1659 and 1681, it too was not without good reason.

Christmas past was anything but innocent.

Until the mid-19th century, Christmas was a time for drunkenness and debauchery.

Men dressed like women, women dressed like men, servants dressed like masters, boys dressed like bishops, everyone else either dressed as animals or wore blackface – all to subvert the godly order in the safety of anonymity.

Christmas was a carnival of drink, cross-dressing, violence and lust during which Christians were unshackled from the ethical norms expected of them the rest of the year.

No wonder the Puritans wanted it banned.

The Origins of Christmas Revelry

It was not until the 4th century that the Church of Rome recognised December 25 as the date to celebrate the birth of the messiah. And it did so knowing well that there were no biblical or historical reasons to place Christ’s birth on that day.

There is some evidence the Romans worshipped Sol Invictus, their sun god, on December 25. But what the Romans really celebrated during the month of December was Saturnalia, an end of harvest festival that concluded with the winter solstice. As historian Stephen Nissenbaum pointed out in his acclaimed The Battle for Christmas, the early Church entered into a compromise: in exchange for widespread celebration of the birth of Christ, it permitted the traditions of Saturnalia to continue in the name of the saviour.

Gambling, as seen here in a fresco from Pompeii, was a hallmark of the Roman celebration of Saturnalia. Wikimedia Commons

Gift-giving, feasting, candles, gambling, promiscuity and misrule were the hallmarks of Saturnalia. Add to this the holly, the mistletoe and (much later) the tree, and we have a Christmas inclusive of a variety of pagan traditions.

But as time went on, Church leaders became increasingly disillusioned by the way the carnival that was Saturnalia simply carried on under a thin veneer of Christian piety.

The 16th-century bishop Hugh Latimer lamented that many Christians “dishonoured Christ more in the 12 days of Christmas than in all the 12 months besides.”

Lords and Ladies of Misrule

In early modern England, it was common practice to elect a “Lord of Misrule” to oversee Christmas celebrations. Revellers under the auspices of the “Lord” marched the streets dressed in costume, drinking ale, singing carols, playing instruments, fornicating and causing damage to property.

One account from Lincolnshire in 1637 relates how the revellers decided the Lord must have a “Christmas wife,” and brought him “one Elizabeth Pitto, daughter of the hog-herd of the town.” Another man dressed as a vicar then married the lord and lady, reading the entire service from the Book of Common Prayer, after which “the affair was carried to its utmost extent.” Had they not carried the matter so far, the account continues, “probably there would be no harm.” As it was, “the parties had time to repent at leisure in prison.”

Twelfth-night (The King Drinks), painted by David Teniers the Younger, between 1650 and 1660. © Museo Nacional del Prado, CC BY-NC-SA

“December was called […] the Voluptuous Month” for a reason, wrote Reverend Increase Mather in 1687. Young men and women often took advantage of the moral laxity of the Christmas season to engage in late-night drinking and sex.

Not surprisingly, such seasonal merrymaking resulted in higher than usual birth rates in the months of September and October, as well as real rather than burlesque marriages.

Wassailing

Even Christmas charity was far from innocent. Gifts, that hallmark of the season, were rarely given freely; more often, they were demanded with threats of mischief or violence.

In the practice known as “wassailing” during the 17th and 18th centuries, roving bands of poor men and boys asserted their Christmas right to enter the houses of the prosperous and claim the finest food and drink, singing:

We’ve come here to claim our right,
And if you don’t open up your door,
We will lay you flat upon the floor.

A depiction of wassailing from the Illustrated London News, 1856. © The Trustees of the British Museum, CC BY-NC-SA

Though most wassailing ended without violence, the occasional stone was thrown through the window of an uncharitable lord. A generous lord, on the other hand, could hope for the goodwill of the wassailers for the rest of the year.

Domesticating Christmas

Ultimately, the efforts of Puritans to ban Christmas failed. The irreligious revelry that marked Christmas past was too deeply entrenched in Western culture. But where the forces of religion failed, the forces of the market would soon succeed in taming Christmas. The sordid behaviour of Christmas past would be replaced by another type of irreligion: consumerism.

Still, much of the sordid underbelly of Christmas past remains. That family member who always has a bit too much to drink, the overeating, the regretful rendezvous with a colleague at the office party – all telltale signs our oldest Christmas traditions are alive and well.

James A. T. Lancaster, Lecturer in Studies in Western Religious Traditions, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday essay: dreaming of a ‘white Christmas’ on the Aboriginal missions

Christmas Dinner, Mt Margaret Mission 1933. State Library of Western Australia

Aboriginal missions, which existed across Australia until the 1970s, are notorious for their austerity. Aboriginal people lived on meagre rations – flour, sugar, tea and tobacco – and later, token wages. At some missions, schoolgirls wore hessian sacks as clothes or skirts made from old bags.

Christmas, however, was a joyful time on the missions. Old people remember Christmas for food, gifts and carols. But the celebration had a sinister edge. For years, missionaries hoped the joy of Christmas would replace Aboriginal traditions. But Christmas actually became an opportunity for creative cross-cultural engagement, with Aboriginal people adopting its traditions and making them their own.

The food was a respite from the usual diet of damper, rice or stew. On the Tiwi Islands in the Northern Territory, missionaries would shoot a bullock, and the old women remember feasting on beef and mangoes on the beach.

Oenpelli Mission (Gunbalanya) Christmas, 1928. National Archives of Australia

Missionaries used food to attract people to church. Christmas might be the only day of the year that it was distributed to everyone. Cake was a favourite. On Christmas Day at Gunbalanya in western Arnhem Land in 1940 the superintendent called it “the happiest we’ve experienced here. Ten huge cakes for Natives – no complaints – 106 at service” (suggesting that church attendance was linked to cake quantity).

For elders on Groote Eylandt in the Gulf of Carpentaria, turtle-egg cake was a highlight of Christmas in the 1940s. As Jabani Lalara recalled:

We used to have a lovely Christmas … In front of the church, that’s where they used to put the Christmas tree and that’s where we used to get a present. Especially like cake, used to make from turtle egg. I love that cake. True.

Gifts were another drawcard. On Christmas 1899, the Bloomfield River Mission in far-north Queensland was said to be “overflowing” because Aboriginal people “heard there would be a distribution of gifts”. These included prized items such as handkerchiefs, pipes and knives. At some missions, Santa (often the superintendent) distributed gifts.

Father Christmas arriving at Mt Margaret Mission in a rickshaw, 1945. State Library of Western Australia

However, looking back, old people have mixed feelings about the gifts. As much as they loved them at the time, they discovered their treasures were only toys that white children had rejected. As one person told me:

We didn’t have much in them days, it was tough, but we were happy. We were happy with those secondhand toys at Christmas from the Salvation Army. We didn’t know they were secondhand toys at the time. I found out in my later years.

Christmas rally church service, Fitzroy Crossing Mission, 1954. State Library of Western Australia

Missionaries and Aboriginal people alike loved carols; they were an opportunity for shared enjoyment. Tiwi women look back fondly on their time singing with nuns. Said one woman:

Sister Marie Alfonso, she used to play organ and all of us girls used to sing in Latin, but we still remember… Every Christmas [the old women] sing really good. They all can remember that Latin. It’s really nice.

There were also nativity plays, with Aboriginal children proudly performing for their communities. Said another:

When there was Christmas or even Easter Day there was a role-play… On Christmas Day I used to read. Three of them was the Wise Men and the other one was Mary and the other young boy was Jesus.

Christmas at Nepabunna, C.P. Mountford, 1937. State Library of South Australia

Behind the lightheartedness came an agenda. As one priest commented, Christmas was to be a “magnet” to draw people into missions. Ultimately, missionaries hoped the celebration of Jesus’s birth would prove more attractive than Aboriginal people’s own ceremonies.

For those who would not settle on missions, Christmas was used against them. At Yarrabah in Queensland the “unconverted heathens” were invited to join the festivities, but their exclusion was symbolised by them walking at the back of processions, sitting at the back of the church and being the last to be served their meal.

Aboriginal Christmas

In their eagerness to use Christmas to spread Christianity, missionaries started to hold services in Aboriginal languages (with Aboriginal co-translators). At Ngukurr in southern Arnhem Land and at Gunbalanya, the first church services in Aboriginal languages were Christmas services (in 1921 and 1936, respectively).

Aboriginal people loved carols, so these were the first songs translated. When the Pitjantjatjara Hymnal was released in 1947, the Christmas carols proved the most popular (The First Noel, sung in parts, was the favourite). On Groote Eylandt, translation began with Christmas carols, nativity plays and Christmas readings in the 1950s. At Galiwin’ku on Elcho Island in Arnhem Land, the annual Christmas Drama was performed in Yolngu Matha from 1960.

Translation was meant to make missionary Christianity more attractive, but it opened the way for more profound cultural experimentation. Aboriginal people infused Christmas with their own traditions. On the Tiwi Islands, in 1962 there was a “Corrobboree Style” nativity on the mission told through traditional Tiwi dance. Dance traditions missionaries had previously called “pagan” were now used by Tiwi people to share the Christian celebration.

At Warruwi on the Goulburn Islands in western Arnhem Land, Maung people began “Christmas and Easter Ceremonies” from the 1960s, blending ceremonial styles with Western musical traditions as well as their own music and dance. At Wadeye, in the Northern Territory, “Church Lirrga” (“Liturgy Songs”) include Christmas music, sung in Marri Ngarr with didjeridu. The Church Lirrga share the melodies of other Marri Ngarr songs that tell of Dreamings on the Moyle River.

Many who embraced Christianity sought to express their spirituality without missionary control. At Milingimbi in the NT, Yolngu people developed a Christmas ceremony with clap sticks and didjeridu outside the mission and free of missionary interference.

Mt Margaret Mission Christmas, 1933. State Library of Western Australia

At Ernabella Mission in South Australia in 1971, people began singing the Christmas story to ancient melodies, with the permission of their songmen. Senior Anangu women at Mimili, SA, later sang the Pitjantjatjara gospel to their witchetty grub tune, blending Christmas with their Dreamings and songlines.

Christmas was woven into community life. Just as introduced animals found their way into Aboriginal songs and stories, Christmas became part of the seasons and landscape, as Therese Bourke explained at Pirlangimpi on the Tiwi Islands:

They used to have donkeys [here] and the donkeys used to come round in December. And my mother’s mob used to say, “they’re coming around because it’s Christmas and Jesus rode on the back of one.”

The missions transformed into “communities” under a policy framework of self-determination in the 1970s, although missionaries themselves often remained active in the communities for decades. Meanwhile, many Aboriginal people have mixed memories of the missions – fondness for some aspects, anger at others – including Christmas.

But regardless of the missionaries, Christmas became an Aboriginal celebration in its own right. Some missionaries even came to appreciate Aboriginal ways of celebrating Christmas in line with their Dreamings. Though missionaries had wanted to replace Aboriginal spirituality with a “white Christmas”, it became a season of deeper meetings of cultures.


Laura Rademaker, Postdoctoral Research Fellow in Modern History, Australian Catholic University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Extraordinarily, the effects of the Spanish Inquisition linger to this day

Pedro Berruguete, Saint Dominic Presiding over an Auto-da-fé. Wikimedia

Jordi Vidal-Robert, University of Sydney; Hans-Joachim Voth, University of Zurich, and Mauricio Drelichman, University of British Columbia

From Imperial Rome to the Crusades, to modern North Korea or the treatment of Rohingya in Myanmar, religious persecution has been a tool of state control for millennia.

While its immediate violence and human consequences are obvious, less obvious is whether it leaves scars centuries after it ends.

In a new study, we have attempted to examine the present-day consequences of one of the longest-running and most meticulously documented persecutions of them all – the trials of the Spanish Inquisition between 1478 and 1834.

The records of 67,521 trials still exist, along with indicators of where the trials took place and the places of birth and residence of the people tried.

We find that today – two hundred years after its abolition – the locations in which the inquisition was strong have markedly lower levels of economic activity, trust and educational attainment than those in which it was weak.

Secret denunciations

Charged with combating heresy, defined as deviation from Catholic doctrine, the Inquisition extended into every stratum of Spain’s society and almost every corner of its global empire.

Trials originated with secret denunciations and lasted years. Penalties ranged from mild admonishments to burning at the stake. Sentences were usually handed down in large public ceremonies – ensuring widespread publicity.

The geographical distribution of inquisitorial intensity shows widespread variation over relatively small areas, but no broad geographical patterns.

We set the geographical distribution of inquisitorial intensity against a modern-day measure of gross domestic product per capita constructed using nighttime luminosity captured by satellite photography.

In Spain, estimating GDP at the municipal level from administrative data is fraught with data availability and compliance problems.

Night light is highly correlated with per capita income and widely used as a proxy for economic performance in the development literature.
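To give a feel for how such a proxy works, here is a minimal sketch in Python (with entirely made-up numbers; the variable names and figures are our own illustration, not the study’s dataset) of the kind of log-log relationship between night lights and income that this literature relies on:

    import numpy as np

    # Hypothetical data for six municipalities: mean nighttime luminosity
    # (satellite radiance units) and GDP per capita in euros.
    # Illustrative numbers only -- not the study's actual data.
    luminosity = np.array([3.1, 8.4, 15.2, 22.7, 40.3, 55.9])
    gdp_pc = np.array([14200, 16800, 18100, 19300, 21500, 23800])

    # Fit a log-log relationship: the slope is an elasticity, i.e. how much
    # richer a municipality tends to be as its night lights brighten.
    slope, intercept = np.polyfit(np.log(luminosity), np.log(gdp_pc), 1)
    corr = np.corrcoef(np.log(luminosity), np.log(gdp_pc))[0, 1]

    # Predict income for a town whose luminosity we observe but whose
    # administrative GDP figures are missing or unreliable.
    new_lum = 30.0
    predicted_gdp = np.exp(intercept) * new_lum ** slope
    print(f"elasticity={slope:.2f}, correlation={corr:.2f}, "
          f"predicted GDP per capita at luminosity {new_lum}: {predicted_gdp:,.0f} EUR")

The fitted elasticity and correlation are what make luminosity usable as a stand-in for income where administrative data fall short.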

The Iberian Peninsula at night, showing Spain and Portugal. Madrid is the bright spot just above the centre. NASA

We find that municipalities with no recorded inquisitorial activity, as well as those with inquisitorial activity in the lowest third, have the highest GDP per capita today.

Those with persecution in the middle third have markedly lower incomes.

In those where the inquisition struck with the highest intensity (in the top third) the level of economic activity is sharply lower.

The magnitudes are large. In places with no persecution, the median GDP per capita was €19,450 (A$30,100). In places where the inquisition was most active, it is below €18,000 (A$28,670).

Our estimates imply that had Spain not suffered from the inquisition, its annual production today would be 4.1% higher – €811 (A$1,290) for each man, woman, and child.

More persecution, less education

To get an idea of why the inquisition continues to cast such a dark economic shadow centuries after it ended, we used data from the barometer surveys conducted by the Spanish Centre for Sociological Research.

Since the inquisition was particularly suspicious of the educated, literate middle class, its impact on Spain’s cultural, scientific, and intellectual climate was severe. (As was the impact of the Stasi, or secret police, in East Germany.)

Once we control for other variables, we find that going from a region which had no exposure to the inquisition to one which had mid-range exposure cuts the share of the population receiving higher education today by 5.6%.

More persecution, less trust

The inquisition also changed the way civil society functioned. The prospect of secret denunciations by acquaintances made it harder for residents to cooperate. It diminished trust.

A standard trust question asked in the Spanish surveys is:

In general, would you say people on average can be trusted, or would you say that one can never be too careful?

We analysed responses from more than 26,000 Spaniards interviewed between 2006 and 2015 and (after adjusting for time-specific effects) found that greater inquisitorial activity is still associated with somewhat less trust today. Although small, the effect is robust to different methods of calculation.

We also measured the frequency of church attendance and found a related effect on religiosity. The greater the persecution in a location, the greater the level of church attendance today.

More persecution, less income

An objection that could be raised to our findings is that the inquisition might have been more active in poorer areas.

Standard histories suggest this is unlikely. The inquisition was self-financing. It had to confiscate property and impose fines to pay for its expenses.

Its mission was to persecute heresy, but it had strong incentives to look for it in richer places. Its early focus on persecuting Jews and later Protestants led it to target populations with higher levels of education.

The inquisition’s persecution of perceived heretics is only one example of authoritarian intervention in people’s private lives. Other institutions, such as Stalin’s People’s Commissariat for Internal Affairs and Hitler’s Gestapo, instituted similarly intrusive regimes of thought-control.

While the suffering of the accused and convicted was the single most important result of persecution, our findings suggest its effects live on.

Even now, 200 years on from the Spanish Inquisition, the locations affected appear to be poorer, more religious, less educated, and less trusting.


Jordi Vidal-Robert, Lecturer in Economics, University of Sydney; Hans-Joachim Voth, UBS Professor of Macroeconomics and Financial Markets, University of Zurich, and Mauricio Drelichman, Associate Professor, Vancouver School of Economics, University of British Columbia

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The appeal of the paradox — mankind’s fascination with self-contradicting ideas

Just like true love, a paradox cannot be explained with logic alone. Simply put, a paradox is a self-contradicting statement. Any idea, situation, puzzle, statement, or question that challenges your ability to reason, and leads you to an unexpected and seemingly illogical conclusion, can be considered a paradox.

The classic paradox example is the so-called Grandfather Paradox. Imagine a psychotic time traveler who goes back in time and kills his grandfather before his father is conceived. This means that the traveler wouldn’t have been conceived, and if he wasn’t conceived, then who went back to kill his grandfather?

The answer to this theoretical time travel mystery is still unclear, as is the case with many other interesting paradoxes. In this information age, logic helps us understand what is known to us, but a paradox serves as a reminder of what else we need to know. Let’s dive in.

Image credits: cottonbro/pexels

How do you define a paradox?

A paradox is a thought that can sound reasonable and illogical at the same time. The Cambridge Dictionary defines a paradox as a situation that could be true but is impossible to comprehend due to its contrary characteristics. In Greek, ‘para’ translates to ‘abnormal’, ‘distinct’, or ‘contrary’, and ‘dox’ means ‘idea’ or ‘opinion’. Therefore, according to some Greek philosophers, a paradox is an abnormal or self-contrary belief or idea that ultimately leads to an unsolvable contradiction.

You don’t need time travel to create a crazy paradox. For instance, in the famous crocodile paradox (of which there are many variations), a magical crocodile steals a child and promises to return it only if the father can correctly guess what the crocodile will do. If the father says “The child will not be returned”, what can the crocodile do? If it doesn’t return the child, the father’s guess was true, so it should have returned the child. If it does return the child, the father’s guess was false, so it shouldn’t have. It’s a paradox: nothing the crocodile does can satisfy the situation.

The face a crocodile makes when faced with an unsolvable paradox. Image credits: Pixabay/pexels.

This paradox is believed to have originated centuries ago in ancient Greece, but there are hundreds of different paradoxes found in literature, mathematics, philosophy, science, and various other domains. Though a true paradox can seem both true and false at the same time, closer logical analysis reveals many paradoxes to be invalid statements.

There are four main types of paradoxes:

  1. Falsidical paradox: a paradox that leads to a false conclusion resulting from a misconception or false belief. For example, Zeno’s Achilles and the tortoise.
  2. Veridical paradox: a situation or statement whose result sounds absurd but is actually valid by logic. Schrödinger’s cat is a famous example of a veridical paradox.
  3. Antinomy paradox: a question, puzzle, or statement that does not lead to any solution or conclusion, also known as a self-referential paradox. One example is the Barber paradox (discussed below).
  4. Dialetheia: a paradox in which a situation and its opposite co-exist. No concrete examples are known, but some real-life situations come close (for example, when you are standing in the kitchen doorway and a family member asks whether you are in the kitchen, you are right whether you answer yes or no).

Why paradoxes matter

Paradoxes are important because they make us think. They force us to reassess what we thought we knew and to ponder things from unusual perspectives. Some studies have shown that a paradox mindset, in which we embrace contradicting (or seemingly contradicting) ideas, is a key to success. Leading thinkers were found to spend considerable time developing ideas and counter-ideas simultaneously, something called the Janusian process.

Studying paradoxes is also important, especially for mathematicians. Mathematicians love to break everything into small pieces and define things carefully, and they do that with paradoxes. For instance, let’s take a simple paradox called the Temperature paradox, which states:

“If the temperature is 90 and the temperature is rising, that would seem to entail that 90 is rising.”

Obviously, 90 is not rising; it’s a fixed number, and a fixed number can’t rise. We know that intuitively, but how do we prove it? American mathematician and philosopher Richard Montague dealt with this paradox (and many others), and explained that it emerges from linguistic vagueness, which can be addressed through mathematical clarity. The linguistic formalization of the paradox goes something like this:

  1. The temperature is rising.
  2. The temperature is ninety.
  3. Therefore, ninety is rising. (invalid conclusion)

The mathematical formalization, however, shows that statement 1 describes how the temperature changes over time, while statement 2 asserts its value at one particular point in time. The two statements are about different kinds of things, so no conclusion like statement 3 can be drawn from them.
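As a rough sketch of that distinction (our notation, not Montague’s full treatment), write the temperature as a function T of time. The two premises then live at different levels:

    \[
    \text{(1)}\quad \frac{dT}{dt}(t_0) > 0 \qquad\qquad \text{(2)}\quad T(t_0) = 90
    \]

Statement 1 is a claim about the function T; statement 2 is a claim about its value at the instant t_0. Substituting 90 for “the temperature” in statement 1 would assert \(\frac{d}{dt}(90) > 0\), i.e. \(0 > 0\), which is false: “is rising” is a property of the function, not of the number.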

This type of paradox, which emerges from linguistic ambiguity, is often not important in itself; but other paradoxes, especially those that can’t be resolved through normal means, hold importance because they help us find better definitions of objects and relationships. A good example of this is Curry’s paradox.

Now that we know the types of paradoxes and why they matter, let’s look at some of the most popular and insane paradoxes of all time:

Paradox examples

“This sentence is false”

This so-called liar’s paradox is the canonical example of a self-referential paradox. Other classic examples are “Is the answer to this question ‘no’?”, and “I’m lying.”

Mathematicians have tried to dissect and analyze this paradox in great detail because it speaks to the inherent limitations of mathematical axioms. In 1931, mathematician Kurt Gödel used a formal cousin of the liar’s paradox to prove his incompleteness theorems, which establish just such limitations. The paradox itself dates back to at least 600 BC, when the semi-mythical seer Epimenides, himself a Cretan, reportedly stated that “All Cretans are liars.”

The Barber paradox

A barbershop in Bucharest, 1842. Image credits: Charles Doussault/Wikimedia Commons

Proposed by British mathematician Bertrand Russell, this paradox defines the barber as the person who shaves only those individuals who do not shave themselves, and then asks: who shaves the barber? Suppose the barber shaves himself. Then, by the definition, he is no longer the barber, as he cannot shave a person who shaves himself.

Now, if he does not shave himself, then he is among those who are supposed to be shaved by the barber, so, once again, the barber has to shave himself. The barber paradox therefore suggests that no such barber can exist: whatever he does, he violates his own definition. Well, then what the heck even is a barber?

The Sorites paradox

If there is a heap of sand with one million grains, and grains are removed from it one by one until only a single grain remains, is it still a heap? If not, at what point did the heap become a non-heap? Sounds crazy, right? But that is the Sorites paradox, posed by Eubulides of Miletus around the fourth century BCE, and to this day, no math genius has been able to give a fully logical solution to the problem.

Another similar type of puzzle is the so-called Ship of Theseus. The mythological hero Theseus sails off on his adventures, and at some point, one of the ship’s parts needs replacing. It’s still the same ship, right? Just one part was replaced. But part after part, every component of the ship is replaced. Is it still the same ship? If not, when did it stop being the same ship?

Zeno’s Achilles and the tortoise

Achilles and the tortoise.

In this paradox, developed by the ancient Greek philosopher Zeno, there is a race between the great Greek warrior Achilles and a tortoise. The tortoise is given a head start of 100 meters. Achilles runs faster than the tortoise, so he will surely catch up to it. But here’s how Zeno looked at things:

  • Step #1: Achilles runs to the tortoise’s starting point while the tortoise walks forward.
  • Step #2: Achilles runs to where the tortoise was at the end of Step #1, while the tortoise goes a bit further.
  • Step #3: Achilles runs to where the tortoise was at the end of Step #2 while the tortoise goes yet further.
  • … and so on.

The gaps get smaller and smaller every time, but there is an infinity of these steps, so how can Achilles overcome an infinite number of gaps and catch up to the tortoise? How does anything catch up to anything, for that matter? Obviously, things do catch up to other things, so what’s going on here?

The ancient Greeks lacked the mathematical tools to address this paradox, but nowadays, we know better. There may be an infinite number of steps, but they are also infinitely small. It’s a bit like how 1/2 + 1/4 + 1/8 + 1/16 + … to infinity adds up to 1: an infinite number of steps, but the steps become vanishingly small, and in the end, they add up to something finite.
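In modern notation, the resolution is a convergent geometric series. If, purely for illustration, we assume Achilles runs 10 m/s and the tortoise 1 m/s, each of Zeno’s catch-up intervals takes a tenth as long as the one before it, and the total time for Achilles to draw level is finite:

    \[
    t = 10 + 1 + \tfrac{1}{10} + \tfrac{1}{100} + \dots = \frac{10}{1 - \tfrac{1}{10}} = \frac{100}{9} \approx 11.1 \text{ seconds}
    \]

An infinite number of steps, completed in a little over eleven seconds.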

Animalia Paradoxa – The classification of magical creatures

This one is actually not a paradox but a biological classification of the beasts and magical creatures mentioned in ancient storybooks. In the editions of Systema Naturae published before the sixth, author Carl Linnaeus (the father of modern taxonomy) listed creatures like the Hydra (a snake with seven heads), Draco (a dragon with bat-like wings and the ability to spit fire), the Unicorn (a beautiful single-horned horse), Lamia (half-human, half-animal), and others.

From a scientific point of view, these creatures don’t exist, so why did a genius like Carl Linnaeus mention them in his greatest scientific work? One might say it’s a bit paradoxical that the man who defined our classification of biological creatures would introduce unreal ones into it.

A painting of Hercules and the Lernaean Hydra by Gustave Moreau. Image credits: The Yorck Project/Wikimedia Commons

Paradoxes have a unique draw because they appeal to human curiosity and our love of mystery. They have ignited the curiosity of the human mind for thousands of years and will likely continue to do so for many years to come.

Asia’s languages developed and spread alongside rice, millet agriculture

New research is peering into the shared past of the Transeurasian (or ‘Altaic’) family of languages. According to the findings, the hundreds of millions of people who speak one such language today can trace their shared legacy back to a single group of millet farmers that lived 9,000 years ago in what today is northeast China.

Integration of linguistic, agricultural, and genetic expansions in Northeast Asia. Red arrows show the eastward migrations of millet farmers in the Neolithic, alongside Koreanic and Tungusic languages. Green arrows mark the integration of rice agriculture in the Late Neolithic and the Bronze Age, alongside the Japonic language. Image credits Martine Robbeets et al. (2021), Nature.

This family of languages includes peoples and countries all across Eurasia, with notable members including Japanese, Korean, Tungusic, Mongolic, and Turkic. As such, it is a very populous language family. Exactly how the Transeurasian languages came to be, however, is still a matter of heated debate. Their history is rife with expansions and with population and linguistic dispersals, making it exceedingly difficult to trace back to a single origin.

New research, however, aims to shed light on this topic. The study combined three disciplines — historical linguistics, ancient DNA research, and archaeology — to determine where Transeurasian languages first originated. According to the findings, its roots formed around 9,000 years ago in modern China and then spread alongside the development and adoption of agriculture throughout Eurasia.

Hard to pinpoint

“We developed a method of ‘triangulation’, bringing linguistics, archaeology, and genetics together in equal proportions in a single approach,” Prof. Dr. habil Martine Robbeets, the corresponding author of the paper, said for ZME Science. “Taken by itself, linguistics alone will not conclusively resolve the big issues in the science of human history but taken together with genetics and archaeology it can increase the credibility and validity of certain scenarios.”

“Aligning the evidence offered by the three disciplines, we gained a more balanced and richer understanding of Transeurasian prehistory than each of the three disciplines could provide us with individually.”

The origin of Transeurasian languages can be traced back to a group of millet farmers — the “Amur” people — in the Liao valley, according to the team’s findings.

These languages spread throughout Eurasia in two major phases. The first one took place during the Early–Middle Neolithic (Stone Age), when sub-groups of the Amur spread throughout the areas around the West Liao River. During this time, the five major branches of the Transeurasian linguistic family started to develop among the different groups, as the distance between them allowed for the creation of dialects.

The second phase involved contact between these five daughter branches during the Late Neolithic, Bronze Age, and Iron Age. It was characterized by these intergroup interactions as well as by genetic inflows from (and possible linguistic imports from) populations in the Yellow River area, western Eurasian peoples, and Jomon populations. Agriculturally speaking, this period also saw the adoption of rice farming (from the Yellow River area), the farming of crops native to western Eurasia, and pastoralism.

Although the spread of Transeurasian languages was largely driven by the expansion of a single ethnic group, it was not limited to a single one. Several peoples mixed together with the descendants of those millet farmers from the Liao River over time to create the rich tapestry of language, customs, and heritages seen in Eurasia today.

“Our [results] show that prehistoric hunter-gatherers from Northeast Asia as well as Neolithic farmers from the West Liao and Amur all project within the cluster of present-day Tungusic speakers. We call this shared genetic profile Amur-like ancestry,” explains Dr. Robbeets for ZME Science. “Turkic and Mongolic speakers and their ancestors preserve some of this Amur ancestry but with increasing gene flow from western Eurasia from the Bronze Age onwards.”

“As Amur-related ancestry can also be traced back to speakers of Japanese and Korean, it appears to be the original genetic component common to all speakers of Transeurasian languages. So the languages spread with a certain ethnic group, but this ethnic group got admixed with other ethnic groups as it spread across North and East Asia.”

Although we can trace these interactions in the genomes of individuals from across Eurasia, there are still a lot of unknowns. For example, we can’t estimate the degree or direction of linguistic and cultural exchanges between different groups. We can tell that an increasing degree of Yellow River genetic legacy was woven into the peoples of the West Liao River, but there is no record by which to gauge whether words or cultural practices were exchanged between these groups. Similarly, we can’t estimate the magnitude of the influence such an exchange had on the two groups.

Still, one of the points Dr. Robbeets wants to underline with these findings is that truly understanding the history of languages in Northeast Asia requires a different approach from the one generally taken today.

“Archaeology and linguistics in Northeast Asia have tended to be conducted within the framework of modern nation-states,” she explained in an email for ZME Science. “Accepting that the roots of one’s language, culture, or people lie beyond the present national boundaries is a kind of surrender of identity, which some people are not yet prepared to make. Powerful nations such as Japan, Korea, and China are often pictured as representing one language, one culture, and one genetic profile but a truth that makes people with nationalist agendas uncomfortable is that all languages, cultures, and humans, including those in Asia, are mixed.”

“Our results show that a much more flexible and international framework is needed.”

Another, more direct implication of these findings is that sedentarism and agriculture took root in the area much earlier than assumed until now. Previously, the emergence of the Transeurasian family of languages was believed to have coincided with the adoption of livestock herding in Asia’s Eastern Steppes. Tying it to agricultural practices in the Liao River area, however, pushes the timeline of its emergence back by roughly 4,000 years.

The paper “Triangulation supports agricultural spread of the Transeurasian languages” has been published in the journal Nature.

Off-cuts of wood show Vikings were settled in America one thousand years ago

Several wooden items discovered at an archaeological site in Newfoundland, Canada, paint an exciting picture: Vikings were on these shores in AD 1021, one thousand years ago. This would be the earliest known human crossing of the Atlantic in history, preceding Columbus’ discovery of the Americas by over 450 years.

Aerial image of a reconstructed Viking-Age building adjacent to the L’Anse aux Meadows site. Image credits Glenn Nagel Photography.

It isn’t exactly news that the Vikings reached the Americas before European explorers officially ‘discovered’ them. To the best of our knowledge, these Scandinavian explorers settled at a site known as L’Anse aux Meadows in what is today the Newfoundland peninsula. We knew this was happening as early as the end of the first millennium AD, but we didn’t have a precise date as to when.

New research, however, comes to give us a reliable estimation of when the first Europeans reached and settled these shores.

One man’s trash…

“The artefacts are not ‘display pieces’ or ‘works of art’ in any sense. They are actually just off-cuts of wood. Pieces of wood that were discarded by the Vikings,” explained Prof. Dr. Michael Dee, Associate Professor of Isotope Chronology at the University of Groningen and corresponding author of the paper, for ZME Science in an email. “The wood ended up in a nearby bog and the conditions in that bog were very good for the preservation of organic material. That is how they have survived until today.”

These pieces of wood were identified as having belonged to Vikings based on their location within the settlement, and by evidence on their surface of being processed using metal tools. Indigenous people living in America at the time did not have knowledge of metalwork, making this a very reliable indication of the artifacts’ origins.

The authors analyzed these pieces of wood found at the L’Anse aux Meadows site using carbon-dating (or ‘radiocarbon dating’) techniques. While this type of analysis cannot reveal when the timber was processed, it can tell us when the original trees were cut down. While organisms such as trees live, they take in carbon from their environment; when they die or are cut down, this intake stops. By analyzing the ratio of carbon isotopes in a sample of organic tissue, and comparing it against a large body of historical references, researchers can estimate with pretty good accuracy when that happened.
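The clock behind the method is the radioactive decay of carbon-14, which has a half-life of about 5,730 years. In standard textbook form (a general relation, not a formula specific to this study):

    \[
    N(t) = N_0\, e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{5730\ \text{yr}}, \qquad t = -\frac{1}{\lambda}\,\ln\frac{N(t)}{N_0}
    \]

where N_0 is the carbon-14 concentration locked into the wood when it stopped exchanging carbon and N(t) is what remains today; measuring the ratio of the two yields the elapsed time t.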

Microscope image of a wood fragment from the Norse layers at L’Anse aux Meadows. Image credits Petra Doeve.

What allowed the team to reach such an accurate result in the case of these pieces of wood were “sudden increases [in the production of the 14C isotope] caused by cosmic radiation events”. Such increases have been documented occurring “synchronously in dendrochronological records all around the world”, making them very well-established and reliable markers by which to date wood. The particular marker used here was a shift in the ratio of atmospheric carbon isotopes caused by a cosmic-ray event in AD 993.
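In practice, such a spike acts as a dated pin inside the wood: identify the tree ring carrying the anomalous carbon-14 jump, assign it the year AD 993, then count the rings outward to the bark edge, which marks the year the tree was felled. As a purely illustrative count (our number, not one quoted in this article): if 28 growth rings separate the spike ring from the bark edge, the felling year is 993 + 28 = AD 1021.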

I asked Dr. Dee what the most exciting moment of performing this research was for him, and he told me:

“Well it was pretty amazing to measure the isotope concentrations of lots and lots of tree-rings from, ultimately, three different pieces of wood from three different trees to discover they were all cut down in exactly the same year — and that year was exactly one millennium ago!”

According to the team, these results place the year AD 1021 as the new timeline for when Europe and the Americas first came into contact.

“We provide the earliest date for Europeans in the Americas. Indeed it is the only date for Europeans in the Americas before the arrival of Columbus — some 471 years later. This date also represents the first time in all of human history that the Atlantic Ocean was crossed — and humanity had travelled all the way around the globe. We think this in itself has particular significance.”

Beyond the value of these findings for historians, the paper also showcases how cosmic-ray events, despite being something completely removed from archaeology or the goings-on on planet Earth, can be used as reference points to date historical events.

The paper “Evidence for European presence in the Americas in AD 1021” has been published in the journal Nature.

Roman concrete from noblewoman’s tomb still stands strong 2,000 years later. Here’s why

The tomb of Caecilia Metella is still remarkably intact nearly 2,000 years after it was completed. Credit: Tyler Bell.

One of the world’s biggest engineering problems is concrete. Critical infrastructure built over the last century — bridges, highways, dams, and buildings — is now crumbling before our eyes. Repairing and rebuilding this decaying infrastructure is estimated to cost trillions of dollars in the United States alone.

When steel reinforcements were introduced to concrete in the 19th century, the innovation was rightly hailed at the time as a massive step forward. Adding steel bars to concrete speeds up construction, uses less concrete, and allows the engineering of long, cantilevered structures such as miles-long bridges and tall skyscrapers. The engineers behind those early projects thought reinforced concrete structures would last at least 1,000 years. In reality, we now know their lifespan is between 50 and 100 years.

Concrete was originally developed by the ancient Romans, but their building techniques were lost with the fall of the empire, and concrete wasn’t reinvented until 1824, when an Englishman named Joseph Aspdin discovered Portland cement by burning finely ground chalk and clay in a kiln until the carbon dioxide was removed.

However, the durability of the two types of concrete is worlds apart. Many magnificent Roman buildings, such as the Pantheon, still stand proud even to this day after nearly 2,000 years.

In a new study, scientists describe another testament to the craftsmanship of Roman concrete: a large cylindrical tomb that serves as the final resting place of the 1st-century BCE noblewoman Caecilia Metella.

Investigations performed by geologists and geophysicists at the University of Utah show that the tomb’s concrete is of particularly high quality and durability, even by Roman standards, surpassing that of the tombs of her male contemporaries.

The secret is the particular type of volcanic aggregate the Roman craftsmen used, plus a bit of luck owed to the fortuitous chemical interaction of rainwater and groundwater with these aggregates.

The concrete that outlived an empire

Caecilia’s tomb lies on the edge of the Appian Way, the famous ancient Roman road that connected Rome to Brindisi, in the southeast. The structure is monumental for its time, measuring 70 feet (21 meters) in height and 100 feet (29 meters) in diameter. It consists of a drum-shaped tower on top of a square-shaped base.

It was erected around the year 30 BCE, which means Caecilia must have passed away while Rome was still a republic. Just a few years later, in 27 BCE, Octavian, Julius Caesar’s grand-nephew and adopted heir, became the emperor Augustus, opening up a new age for Rome.

Her imposing tomb is worthy of her status. The daughter of a wealthy nobleman, she married into the family of Marcus Crassus, probably the wealthiest man in the world at the time (and one of the wealthiest in history, relatively speaking) and the third member of the famous triumvirate alliance with Caesar and Pompey.

Marie Jackson, research associate professor of geology and geophysics at the University of Utah, first visited the tomb in 2006 with a permit from Italian archaeologists to collect a small sample of mortar for analysis. When she arrived at the site, she was stunned by the almost perfectly preserved brick masonry walls and the water-saturated volcanic rock outcrop in the substructure.

Now, in a new study, Jackson teamed up with colleagues from MIT and the Lawrence Berkeley National Laboratory to zoom into the microstructure of the tomb’s concrete using an array of modern tools at their disposal. These instruments include the microdiffraction beamline at the Advanced Light Source (ALS) that produces a “micron size, extremely bright and energetic pencil X-ray beam that can penetrate through the entire thickness of the samples, making it a perfect tool for such a study,” said co-author Nobumichi Tamura of Lawrence Berkeley National Laboratory.

Modern concrete mixes Portland cement — limestone, sandstone, ash, chalk, iron, and clay, among other ingredients, heated to form a glassy material that is then finely ground — with aggregates such as sand or crushed stone. These aggregates are not intended to chemically react with the cement; if they do, they can cause unwanted expansion in the concrete.

In contrast, Roman concrete used no cement. Instead, Roman builders made the concrete by first mixing volcanic ash, known as “tephra”, with limestone and seawater to make mortar, which was later incorporated into chunks of volcanic rock, the ‘aggregate’. Previously, while studying drilled cores of Roman harbor concrete, Jackson found an exceptionally rare mineral, aluminous tobermorite (Al-tobermorite), in the marine mortar. The mineral’s presence surprised everyone because it is very difficult to make: for Al-tobermorite to form, you need a very high temperature. “No one has produced tobermorite at 20 degrees Celsius,” she says. “Oh — except the Romans!”

Later, Jackson studied mortar from the Markets of Trajan and found a mineral called strätlingite, whose crystals block the propagation of microcracks in the mortar, preventing them from linking together and fracturing the concrete structure.

Roman concrete can actually grow stronger with time

Scanning electron microscopy image of the tomb mortar. The C-A-S-H binding phase appears as gray while the volcanic scoriae (and leucite crystals) appear as light gray. Credit: Marie Jackson.

At Caecilia’s tomb, the researchers were in for yet another surprise. The particular variety of tephra used in this ancient Roman structure was unusually rich in leucite, a rock-forming mineral of the feldspathoid group. Over the centuries, rainwater and groundwater percolating through the tomb’s walls dissolved the leucite, releasing its potassium into the mortar. The potassium, in turn, dissolved and reacted with a building block of the mortar called the C-A-S-H (calcium-aluminum-silicate-hydrate) binding phase.

This remodeling led to a more robust cohesion in the concrete, despite much less strätlingite than seen in the Markets of Trajan.

“It turns out that the interfacial zones in the ancient Roman concrete of the tomb of Caecilia Metella are constantly evolving through long-term remodeling,” said Admir Masic, associate professor of civil and environmental engineering at MIT. “These remodeling processes reinforce interfacial zones and potentially contribute to improved mechanical performance and resistance to failure of the ancient material.”

If Roman concrete is so awesome, why don’t we still use it? There are many reasons why the ancient construction material is not at all feasible for our contemporary needs. Sourcing the kind of volcanic ash in the original recipe is not possible for much of the world, which now uses an estimated 4 billion tons of cement every year. Roman concrete also lacks the compressive strength required for modern huge infrastructure projects, among other things.

But that doesn’t mean there aren’t important lessons to be learned from Roman concrete, lessons that may help the next generation of concrete overcome the shortcomings of our crumbling infrastructure. That’s exactly what Jackson and colleagues have set out to do as part of an ongoing U.S. Department of Energy ARPA-E project. The objective is to find a new ‘recipe’ that could reduce the energy emissions associated with concrete production by 85% and vastly improve the lifespan of the material.

The findings appeared in the Journal of the American Ceramic Society.

A Milanese friar mentions North America in a 1345 text, 150 years before Columbus

Despite pervasive myths, Christopher Columbus was not the first European to discover and explore North America. We know from the Sagas of Icelanders, confirmed by archaeological evidence, that Vikings traveled from Scandinavia to Newfoundland via Greenland from around 999 AD. Some better-informed Europeans, perhaps including Columbus himself, weren't oblivious to this fact.

Painting depicting Vikings landing in North America. Credit: Wikimedia Commons.

In a new study, Paolo Chiesa of the department of literary studies at the University of Milan has documented the first written mention of America in the Mediterranean area. The researcher was stunned to come across a reference to a "terra que dicitur Marckalada", found west of Greenland, in the Cronica universalis, a work written by the Milanese friar Galvaneus Flamma in 1345.

“Galvaneus’s reference, probably derived by oral sources heard in Genoa, is the first mention of the American continent in the Mediterranean region, and gives evidence of the circulation (out of the Nordic area and 150 years before Columbus) of narratives about lands beyond Greenland,” Chiesa wrote in the study published in the Journal of the Society for the History of Discoveries.

Marckalada refers to Markland, the name Icelandic sources give to a part of the Atlantic coast of North America. The mention occurs in the third book of the Cronica universalis, which discusses the third age of humankind, from Abraham to David. At one point, the medieval author "inserts a long geographical excursus, mainly dealing with exotic areas: the Far East, Arctic lands, Oceanic islands, Africa," Chiesa says.

In his texts, the Milanese friar draws on a variety of sources, from the Bible to scholarly treatises, including the accounts of travelers like Marco Polo and Odoric of Pordenone. Galvaneus ascribed his description of Markland to the oral testimony of sailors who traveled the seas of Denmark and Norway, which most likely reached the friar via seafarers in Genoa, the port nearest to Milan and the city where the medieval scholar studied for his doctorate.

The full passage mentioning Markland, part of what we now know as North America, was translated from Latin into English and reads as follows:

“Further northwards there is the Ocean, a sea with many islands where a great quantity of peregrine falcons and gyrfalcons live. These islands are located so far north that the Polar Star remains behind you, toward the south. Sailors who frequent the seas of Denmark and Norway say that northwards, beyond Norway, there is Iceland; further ahead there is an island named Grolandia, where the Polar Star remains behind you, toward the south. The governor of this island is a bishop. In this land, there is neither wheat nor wine nor fruit; people live on milk, meat, and fish. They dwell in subterranean houses and do not venture to speak loudly or to make any noise, for fear that wild animals hear and devour them. There live huge white bears, which swim in the sea and bring shipwrecked sailors to the shore. There live white falcons capable of great flights, which are sent to the emperor of Katai. Further westwards there is another land, named Marckalada, where giants live; in this land, there are buildings with such huge slabs of stone that nobody could build with them, except huge giants. There are also green trees, animals and a great quantity of birds. However, no sailor was ever able to know anything for sure about this land or about its features.

“From all these facts it is clear that there are settlements at the Arctic pole.”

Navigation routes Vikings took to reach Newfoundland.

The mentions of America are vague compared to those of Iceland and Greenland, and they even involve myth and hyperbole, such as the land "where giants live". This is likely owed to Galvaneus' second-hand sources. For instance, Chiesa notes in the study that the "huge stones" reference may recall the description of Helluland in the Eiríks saga rauða and in the Grœnlendinga Saga, which mention that Thorfinn Karlsefni "found many slabs of stones so huge that two men could stretch out on them sole to sole." Giants are also common in Old Norse epic traditions.

The very fact that the friar knew Greenland in such detail is remarkable in itself, as the region was obscure to most 14th-century people living in south-central Europe.

“Although the papal curia was aware of the existence of Greenland since the eleventh century, Galvaneus is the first to give some information about its features in the Italian area, and, more generally, in a Latin “scientific” and encyclopedic work, as his Cronica universalis claims to be,” the study mentions.

Columbus himself was Genoese, and such descriptions may explain why the explorer was so daring in his plan to set off across the ocean when most of his contemporaries found the idea mad. Perhaps Columbus, like Galvaneus, was connected to sources suggesting that an entire continent might be found if he just sailed far enough west.

Ice Age humans were using tobacco at least 12,300 years ago

The findings were made at the Wishbone site in northwestern Utah. Credit: Daron Duke.

Nicotine is one of the most addictive drugs, and humans may have first noticed this as early as 12,300 years ago. That's the age of charred seeds of the wild tobacco plant found within an ancient preserved hearth at the Wishbone site, near the Great Salt Lake Desert in Utah. Alongside the charred seeds, archeologists found stone tools and duck bones.

Previously, the oldest evidence of tobacco use dated to 3,300 years ago, based on nicotine residue found inside a pipe from Alabama. The new findings show that hunter-gatherer communities were familiar with tobacco much earlier than thought, even during the last Ice Age.

Chewing wild tobacco around the campfire

Some of the burned wild tobacco seeds that were found by the archaeologists. Credit: Angela Armstrong-Ingram.

Today, there are over 1.3 billion tobacco users worldwide. The addictive habit is responsible for more than eight million deaths every year.

The tobacco plant is native to North and South America, and until Christopher Columbus was given some dried leaves as a gift, people outside the two continents had never been exposed to it. It soon proved a hit, though. Had it not been for tobacco, the English might never have succeeded in colonizing North America, whose riches were far fewer than those of South America, where the Spaniards expanded rapidly thanks to the economic incentives.

While Native Americans used tobacco in religious ceremonies and for supposed medicinal purposes, in Europe smoking it became a daily habit.

However, what the European colonists were smoking was the domesticated variety. Scientists don't know when the tobacco plant was first domesticated, but there is evidence suggesting the process began some 5,000 years ago in what is now the southern United States and Mexico. Around this time, archaeologists notice an uptick in the domestication of food crops at large and an increase in artifacts associated with tobacco use, such as seeds, residues, and pipes stained with nicotine.

The charred Utah seeds, discovered by archaeologists led by Daron Duke of the Far Western Anthropological Research Group in Nevada, belong to Nicotiana attenuata, also known as coyote tobacco. This particular species of wild tobacco was never domesticated, but Indigenous people in the region use it to this day.

“On a global scale, tobacco is the king of intoxicant plants, and now we can directly trace its cultural roots to the ice age,” said Duke.

Although the area where the seeds were found is now desert terrain, during the time that Ice Age hunter-gatherers consumed them, the region was a marshland filled with waterfowl and wetland plants.

Alongside the seeds, archaeologists found sharp stone-cutting tools and spear tips made from obsidian. One of the spear points was stained with remains of blood. Analysis in the lab showed the blood proteins belonged to a mammoth or mastodon.

There are no other hints regarding the culture of these hunter-gatherer groups that experimented with tobacco. But seeing how popular the plant went on to become, it is likely that people “have already been at least casually tending, manipulating and managing tobacco well before the population and food-requirement incentives that drove investments in agriculture,” Duke said.

The findings appeared in the journal Nature Human Behaviour.

Researchers in Turkey uncover what may be the world’s first mosaic

Archeologists working in central Turkey have uncovered what may be the "ancestor" of all Mediterranean mosaics. The piece dates back over three and a half millennia, hailing from the Bronze Age. While the find is impressive in and of itself, researchers also hope its discovery can help us better understand the history of the quite mysterious Hittite people.

The mosaic. Image via Phys.org.

The mosaic was unearthed at a site some three hours' drive from Turkey's capital city of Ankara, according to local news outlets. The site, known as Uşaklı Höyük, hosted a Hittite temple some 3,500 years ago. On its grounds, archeologists have uncovered a mosaic consisting of over 3,000 unpainted stones, whose natural shades of beige, red, and black were used to create curves and triangular shapes. This piece of art predates the oldest known mosaics, from ancient Greece, by around 700 years.

Researchers working at the site believe that this element was meant as a stepping stone of some sort and wasn’t necessarily put together with the intention of being a mosaic. However, given its age, this may very well be the “ancestor” of all mosaics, with the ideas used in its construction later replicated throughout the Mediterranean.

A true original

“It is the ancestor of the classical period of mosaics that are obviously more sophisticated. This is a sort of first attempt to do it,” says Anacleto D’Agostino, excavation director of Uşaklı Höyük.

“For the first time, people felt the necessity to produce some geometric patterns and to do something different from a simple pavement. Maybe we are dealing with a genius? Maybe not. It was maybe a man who said ‘build me a floor’ and he decided to do something weird?”

The site was first located in 2018, and teams of Turkish and Italian archaeologists have been working there ever since. It sits in the shadow of the Kerkenes mountain, on the grounds of an ancient temple which, the team explains, was very likely dedicated to Teshub, the Hittite storm god, roughly equivalent to Zeus in ancient Greek mythology.

D’Agostino says that while the exact use of this proto-mosaic is unknown, it is possible that it was made to resemble the Kerkenes mountain, likely to serve a ritual purpose. Ceramic fragments and the remains of a palace have also been found at the site, hinting at its original size, inhabitation levels, and overall importance.

Based on these, the team is quite confident that Uşaklı Höyük is the lost city Zippalanda, an important settlement and place of worship for Teshub, mentioned frequently in Hittite texts. The Hittites employed cuneiform writing and left behind a relative wealth of records on clay tablets.

“Researchers agree that Uşaklı Höyük is one of two most likely sites. With the discovery of the palace remains alongside the luxurious ceramics and glassware, the likelihood has increased,” D’Agostino says.

Still, until solid, verifiable proof is found — such as a tablet or inscription mentioning the site’s original name — this remains pure conjecture. Despite the extent of the ruins at Uşaklı Höyük, precious few artifacts have been uncovered at the site. 

By the way, these are not the same Hittites most people are familiar with, namely those mentioned in the Bible. The Hittites who made this mosaic lived during the late Bronze Age and vied for supremacy in the region with the other great civilizations of the era: the New Kingdom of Egypt, the Middle Assyrian Empire, and the Mitanni Empire.

These Hittites were pretty advanced for their time, being some of the first to use meteoric iron, and perhaps even the inventors of iron smelting. Even so, like many other empires and states of the period, the Hittite empire crumbled during the Bronze Age collapse, and the Hittites broke into small kingdoms scattered throughout the Levant, in today's Syria and Lebanon. What caused this collapse of virtually every major organized state at the end of the Bronze Age is still a matter of much debate and little evidence; among the leading theories is that either invading 'sea peoples' or shifts in climate caused widespread social unrest.

“I don’t know if we can find a connection between ancient Hittites and people living here now. Centuries and millennia have passed, and people moved from one place to another,” D’Agostino says. “But I would like to imagine that some sort of spiritual connection exists.”

In honor of this possible connection, the archeologists working at Uşaklı Höyük have also been recreating dishes from recipes found on clay tablets at the site, staying as faithful as possible to the techniques and materials used in antiquity. The team has even reproduced Hittite ceramics using local clay for the purpose. So far, says Valentina Orsi, co-director of the excavation, they've sampled baked dates and bread cooked using these vessels, which were "very good".

Sugarcane, slaves, empire-toppling — the story of rum

Rum is a popular spirit throughout the world, probably best known for its association with pirates and the Caribbean. But its history extends far beyond these waters and the men and women who sailed them. Not all of it is nice and happy, but it's definitely an interesting story. So let's dive right in.

Rum being left to age in wooden barrels.
Image via Pxfuel.

First off, while there is broad agreement on what exactly constitutes rum, there is no universally accepted ‘proper’ way of producing this drink. Different communities or areas of the world will have their own styles of distilling rum, some based on traditional approaches, some designed more with economic practicality in mind.

With that being said, there is relative consensus (and customer expectation) on what rum should be. Chief among these expectations is that the liquor is distilled from sugarcane molasses or, much less commonly, from fresh sugarcane juice; the latter is distinguished as 'rhum agricole'. Rums vary in color from clear or light amber to heavy brown and even black, depending on how they're produced. In very broad terms, rum doesn't have a powerful taste, but it does carry over the flavor and aroma of the sugarcane plant it comes from.

Most rums are used in mixed drinks or cocktails, although some are meant to be drunk neat. Due to its flavor, rum is often used in cooking as well; where I'm from, rum and rum essence are very popular ingredients in cakes and sweets, especially as a companion for chocolate.

How it all started

Rum is intrinsically tied to the sugarcane plant so, unsurprisingly, its story starts in the areas where sugarcane evolved naturally. This mostly means Melanesia (particularly today's New Guinea), the Indian subcontinent, and the parts of Asia corresponding to today's Taiwan and southeastern China.

People living in these areas have likely drunk one type of rum-like alcohol or another ever since they first found the plant. As we discovered a long time ago, alcohol is pretty easy to make; all you need is some organic material rich in sugar or starch. It's so easy, in fact, that it happens naturally, as microorganisms in the air or soil break down ripe fruit. One hypothesis holds that humans developed the ability to tolerate much higher quantities of alcohol than (most) other animals because this allowed our ancestors to chow down on spoiled fruit in our evolutionary past. Alcohol as a molecule contains a lot of energy, so ingesting it is quite advantageous, as long as your liver can handle its toxicity.

But back to our story. I call these rum-like drinks because although making alcohol is easy, distilling it into spirits is not. The earliest evidence of distilling we've found comes from 12th-century China, around 800-900 years ago. On the other hand, we have evidence of deliberate alcohol brewing, in pottery containers, from almost ten thousand years ago.

So for most of history, sugarcane alcohol was more similar to a modern beer or wine than to a bottle of cognac or whiskey. Brum, a traditional alcoholic drink from today's Malaysia, is a good example of how these would have looked and tasted. Marco Polo, the famous Italian explorer, claimed that he was offered a "wine of sugar" in today's Iran and that it was "very good". This sugar wine was most likely Brum or a close relative of it.

Now, for context: up through most of the Age of Sail, sugar was a pain to process. Refining sugar out of sugarcane by hand is an incredibly time-consuming and labor-intensive process, so even in areas where sugarcane grew naturally, it had always been expensive. Everywhere else, mainly in Europe, it remained unknown for a long time, and people simply used honey instead. But even after it was introduced to Europe, it wasn't just expensive; it was laughably expensive.

Sugar-producing areas throughout the world by year. Image via Wikimedia.

To give you an idea of just how expensive it was, take Cyprus, an otherwise not particularly rich island in the Mediterranean. During the Middle Ages, after the crusaders lost Jerusalem, Cyprus was one of the last places where Europeans could acquire domestic sugar. All other supply was controlled, either directly or indirectly, by "the Saracens", the Muslim peoples of Arabia and the Middle East. Geographically speaking, there were other sugar plantations and processing sites in Europe, but they lay in areas under Muslim control, such as the southern stretches of Spain and Sicily, which made them politically inaccessible to most Europeans. Rhodes, Malta, and Crete would also eventually produce sugar, but Cyprus became the main supplier of sugar in Europe for a few hundred years.

At the height of production, Cyprus exported a few tons of sugar every year, maybe a few tens of tons after a good harvest. Even this modest quantity made Cyprus the de facto center of trade in the region and ensured the livelihood of locals from serfs to king (although the former didn't get very much, if anything, for their work). Sugar was the main money-making industry on the island, and it was so profitable that it warranted the construction of stone factories and investment in research and improved technology; the Cypriots even tried to mechanize parts of the process in the Middle Ages.

“But wait,” you may ask, being the inquisitive bunch that you are. “What does all this have to do with rum?”. Well, an argument can be made that the very high price of sugar was, through some pretty tragic circumstances, the catalyst for the invention of rum.

How it continued

In the late 15th century, the Portuguese colonized São Tomé (Saint Thomas), an island in the Gulf of Guinea off Africa, and Madeira, off the west coast of Africa. The climate there was suitable for sugarcane, so plantations started popping up on the islands.

It would be on São Tomé that the Portuguese changed the European sugar game from the ground up. Sadly, the way they did it was by employing slave labor. Processing sugarcane is arduous and dangerous work. It's also time-sensitive, since a whole harvest can rot if it's not turned into sugar quickly enough. And since the end product was extremely expensive, there was a lot of pressure not to waste any cane.

If you were a medieval peasant, this haul was definitely worth more than your income. Probably worth more than you were, frankly. Image credits Jah Cordova.

Exploiting slaves was a way to cut spending on labor and other 'inconveniences', such as basic worker safety. To quote the Pirates of the Caribbean, it was "nothing personal, just good business". It definitely did lower expenses, as Portuguese sugar managed to out-compete Cypriot sugar during this time. I haven't been able to find reliable records of just how bad slaves on São Tomé had it. However, these were essentially the precursor days of the trans-Atlantic slave trade, so they definitely didn't have a good time.

Since there was money to be made, the system was copy-pasted across European colonies in the New World wherever sugarcane could grow. The Caribbean became an extremely important region economically, in no small part due to the cheap sugar slaves produced there. The work was harsh and, due to its time-sensitive nature, it was common for slaves to be sleep-deprived and work multiple shifts. Since the whole point of the slave system was to bypass the costs associated with regular workers, they also didn't live in good conditions or get much healthcare.

All in all, a pretty bad life. And, like countless people living bad lives both before and after them, these slaves turned to boozing. But being slaves, nobody would just give them alcohol, and they didn’t have any money to spend, either. What they did have was molasses, a thick, sweet, syrupy by-product left over from the processing of cane. So they started brewing their own alcohol using this molasses which, at the time, was generally considered a waste product by sugar manufacturers.

Now, one piece of this puzzle that is missing is exactly how the rum-like drink the slaves brewed transitioned into actual, distilled rum. The main issue here is that slaves, as a group, don't tend to own stills. So it's probably safe to assume that they were not solely responsible for the development of rum. Whether they received help from sympathetic free people on the islands, or whether their masters took the practice up for themselves and then tried distilling the product, we don't know.

What we do know is that by 1650 or so, we have written evidence of a drink called ‘Rumbullion’ or ‘Kill-Divil’ being produced on the island of Nevis in the northeastern stretches of the Caribbean Sea. Rumbullion is the name of a modern drink that is based on rum, but it’s very doubtful that this is the same Rumbullion noted in historical documents. Rum was also produced in Brazil around 1620, likely accompanying local production of sugarcane.

How it toppled an empire (sort of)

Bottle of molasses, the base ingredient used in making rum.
Mmmmm, molasses. Image credits Marshall / Flickr.

Colonial America was actually a pretty large producer of rum in the late 17th century, and that rum would come to be used as the exchange good for slaves in Africa (before rum, French-made brandy was used). New England, in particular, developed a sizable and profitable rum-distilling business. Although sugarcane couldn't grow there, the area had the benefit of skilled metalworkers, coopers, and significant lumber resources.

This pool of know-how and resources meant New England could build and feed sophisticated stills and had the barrels it needed to export its rum. In the end, all these developments led to a secondary "triangular trade" forming in the New World: merchants would trade American rum for African slaves, sell the slaves to plantation masters in the West Indies (the Caribbean islands) for sugar and molasses, and then use the molasses to make fresh rum back in the colonies.

Putting that aside, the colonies did have a significant hand in shaping modern rum. From what we can tell, the rumbullion produced on sugar plantations was quite different from today's rum. American distillers, who had the weight of whiskey-making tradition behind them, naturally made rum that more closely resembled whiskey. This seems to have been lighter both in taste and in alcohol content than the original concoctions, and is by all accounts very similar to the rum of today.

But, as always tends to happen in history, everything changed again. In the 18th century, at the height of the triangular trade, French authorities banned the production of rum in their overseas colonies: rum was competing with brandy for market share, and the French didn't like that one bit.

Although this very likely didn't factor into the decision, I'm sure the French would have been delighted to know that their ban would end up messing up English affairs monumentally. The ban led to a massive drop in the price of molasses from French colonies, so distillers in the New World started buying from them instead of from English holdings. Compounding the issue was the fact that British colonies didn't really trade in what the Americas had to sell (raw resources such as fish, lumber, and skins), while French, Spanish, or Dutch colonies would accept a wide range of goods.

Naturally, people shifted towards the more convenient and affordable option. American rum then suddenly became much cheaper than English rum, with no drop in quality. This caused massive outrage in England, the kind of outrage a crown can never ignore — outrage from people with plantations, stills, and ultimately, wealth. This led to the implementation of the Molasses Act in 1733, which attempted to levy a hefty tax on the import of molasses to the colonies from non-British plantations. The point here was to make non-British molasses too expensive to realistically purchase, not necessarily to make money.

Naturally, the colonies had no love for nor real interest in enforcing an act that would cripple one of their main and most sophisticated industries. Smuggling became the unspoken rule, as producers didn’t want to pay the tax, and authorities didn’t want to force them to pay it either. Where common interest didn’t prevail, bribery and intimidation provided the lubrication needed to keep the American rum business going.

Seeing the utter and abject failure of this law, the British Parliament passed the Sugar Act (also known as the American Revenue Act) in 1764. It reduced the original tax from six pence per gallon of molasses down to three, but Parliament expended much more effort to actually collect it.

Still, the Molasses Act had already done its damage. Political authority is a fickle thing; a large part of being in power is people believing that you are in power. The rampant evasion of the molasses tax, one that everyone could see and took part in, together with the colonies' resentment of a measure they perceived as unjustly punishing them, shattered the illusion of British supremacy over the Americas.

This crack would eventually grow and help shape the events and sentiments that made the American colonies seek out their independence.

The British empire itself outlived the loss of the American colonies by quite a large margin. But the event did mark the end of its golden days, and sent the single largest empire the world has ever seen into decline.

Can rum thus be said to have toppled the British empire? Not exactly. Misrule and, arguably, our innate human need for freedom and autonomy did that job. But rum, and the interests of those making money at every stage of its production chain, certainly helped foster the conditions needed to topple an empire.

I'm personally prone to thinking about and understanding the world around me through metaphors. I can't help but see the symbolism in a drink, initially brewed by slaves seeking some measure of escape from their lives, lending a hand in giving a country its independence. I know it's just a spirit distilled from sugarcane. But reading about its history, the honestly tragic roots it grew from, it's impressive to see how many people lost, sought, and found a measure of freedom through rum. Maybe some of those slaves' dreams were distilled down into the rum alongside the molasses.