
Sugarcane, slaves, empire-toppling — the story of rum

Rum is a popular spirit throughout the world and is probably best known for its association with pirates and the Caribbean. But its history extends far beyond these waters, and the men and women who sailed them. Not all of it is nice and happy, but it’s definitely an interesting story. So let’s dive right in.

Rum being left to age in wooden barrels.
Image via Pxfuel.

First off, there is no universally accepted ‘proper’ way of producing this drink. Different communities or areas of the world have their own styles of distilling rum, some based on traditional approaches, some designed more with economic practicality in mind.

With that being said, there is relative consensus (and customer expectation) on what rum should be. Chief among these expectations is that the liquor is produced through distillation from sugarcane molasses or, much less commonly, from fresh sugarcane juice; this latter style is distinguished as ‘rhum agricole’. Rums vary in color from clear or light amber to heavy brown and even black, depending on how they’re produced. In very broad terms, rum doesn’t have a powerful taste, but it does carry over the flavor and aroma of the sugarcane plant it comes from.

Most rums are used in mixed drinks or cocktails, although some are meant to be drunk neat. Due to its flavor, rum is often used in cooking as well — where I’m from, rum and rum essence are very popular ingredients in cakes and sweets, especially as a companion for chocolate.

How it all started

Rum is intrinsically tied to the sugarcane plant so, unsurprisingly, its story starts in the areas where sugarcane evolved naturally. This mostly means Melanesia (particularly today’s New Guinea), the Indian subcontinent, and parts of Asia corresponding to today’s Taiwan and southeastern China.

People living in these areas likely drank one type of rum-like alcohol or another ever since they first found the plant. As we discovered a long time ago, alcohol is pretty easy to make; all you need is some organic material rich in sugar or starch. It’s so easy, in fact, that it happens naturally, as microorganisms in the air or soil break down ripe fruit. One hypothesis holds that humans developed the ability to tolerate much higher quantities of alcohol than (most) other animals because this allowed our ancestors to chow down on spoiled fruit. Alcohol as a molecule contains a lot of energy, so it’s quite advantageous to ingest it, as long as your liver is able to handle its toxicity.

But back to our story. I call these rum-like drinks because although making alcohol is easy, distilling it into spirits is not. The earliest firm evidence of distilled drinks we’ve found comes from 12th-century China, so around 800-900 years ago. On the other hand, we have evidence of deliberate alcohol brewing, in pottery containers, from almost ten thousand years ago.

So for most of history, sugarcane alcohol was more similar to a modern beer or wine than to a bottle of cognac or whiskey. Brum, a traditional alcoholic drink from today’s Malaysia, is a good example of how these would have looked and tasted. Marco Polo, the famous Italian explorer, claimed that he was offered a “wine of sugar” in today’s Iran and that it was “very good”. This sugar wine was most likely Brum or a close relative of it.

Now, for context: up through most of the Age of Sail, sugar was a pain to process. Refining sugar out of sugarcane by hand is an incredibly time-consuming and labor-intensive process, so even in areas where sugarcane grew naturally, it was always expensive. Elsewhere, mainly in Europe, it remained unknown for a long time, and people just used honey instead. But even after it was introduced to Europe, it wasn’t just expensive — it was laughably expensive.

Sugar-producing areas throughout the world by year. Image via Wikimedia.

To give you an idea of just how expensive it was, take Cyprus, an otherwise not particularly rich island in the Mediterranean. During the Middle Ages, after the crusaders lost Jerusalem, Cyprus was one of the last places where Europeans could acquire sugar grown under Christian rule. All other supply was controlled, either directly or indirectly, by “the Saracens” — Muslim peoples in Arabia and the Middle East. There were other sugar plantations and processing sites in Europe, geographically speaking, but they sat in areas under Muslim control, such as the southern stretches of Spain and Sicily, which made them effectively inaccessible to Christian Europe. Rhodes, Malta, and Crete would also eventually produce sugar, but Cyprus became the main supplier of sugar in Europe for a few hundred years.

Cyprus, at the height of production, exported a few tons of sugar every year, maybe a few tens of tons on a good harvest. Even this low export quantity made Cyprus the de-facto center of trade in the region and ensured the livelihood of locals from serfs to king — although the former didn’t get very much, if anything, for their work. Sugar was the main money-making industry on the island, and it was so profitable that it warranted the construction of stone factories and investment in research and improved technology; it printed so much money for Cyprus that locals were trying to mechanize the process as early as the Middle Ages.

“But wait,” you may ask, being the inquisitive bunch that you are. “What does all this have to do with rum?” Well, an argument can be made that the very high price of sugar was, through some pretty tragic circumstances, the catalyst for the invention of rum.

How it continued

Around the late 15th century, the Portuguese colonized São Tomé (Saint Thomas), an island in the Gulf of Guinea off the coast of Africa, and Madeira, off the northwestern coast of the continent. The climate here was suitable for sugarcane, so plantations started popping up on the islands.

It would be on São Tomé that the Portuguese changed the European sugar game from the ground up. Sadly, the way they did that was by employing slave labor. Processing sugarcane is hard, arduous, and definitely dangerous work. It’s also time-sensitive, since a whole harvest can rot if it’s not turned into sugar quickly enough. And the end product was extremely expensive, so there was a lot of pressure not to waste any cane.

If you were a medieval peasant, this haul was definitely worth more than your income. Probably worth more than you were, frankly. Image credits Jah Cordova.

Exploiting slaves was a way to cut expenses on labor and to sidestep other inconveniences, such as basic worker safety. To quote Pirates of the Caribbean, it was “nothing personal, just good business”. It definitely did lower expenses, as Portuguese sugar managed to out-compete Cypriot sugar during this time. I haven’t been able to find reliable records of just how bad slaves on São Tomé had it. However, these were, essentially, the precursor days to the trans-Atlantic slave trade, so they definitely didn’t have a good time.

Since there was money to be made, the system was copy-pasted in European colonies in the New World wherever sugarcane could grow. The Caribbean became an extremely important region economically, in no small part due to the cheap sugar slaves produced there. The work was harsh, and due to its time-sensitive nature, it was common for slaves to be sleep-deprived, working multiple shifts. Since the whole point of the slave system was to bypass the costs associated with regular workers, they also didn’t live in good conditions or get much healthcare.

All in all, a pretty bad life. And, like countless people living bad lives both before and after them, these slaves turned to boozing. But being slaves, nobody would just give them alcohol, and they didn’t have any money to spend, either. What they did have was molasses, a thick, sweet, syrupy by-product left over from the processing of cane. So they started brewing their own alcohol using this molasses which, at the time, was generally considered a waste product by sugar manufacturers.

Now, one piece of this puzzle that is missing is exactly how the rum-like drink the slaves brewed transitioned into actual, distilled rum. The main issue here is that slaves, as a group, didn’t tend to own stills. So it’s probably safe to assume that they were not singularly responsible for the development of rum. Whether they received help from sympathetic free people on the islands, or whether their masters took the practice up for themselves and then tried distilling the product, we don’t know.

What we do know is that by 1650 or so, we have written evidence of a drink called ‘Rumbullion’ or ‘Kill-Divil’ being produced on the island of Nevis, in the northeastern stretches of the Caribbean Sea. Rumbullion is also the name of a modern rum-based drink, but it’s very doubtful that it is the same Rumbullion noted in these historical documents. Rum was also being produced in Brazil around 1620, likely accompanying local production of sugarcane.

How it toppled an empire (sort-of)

Bottle of molasses, the base ingredient used in making rum.
Mmmmm, molasses. Image credits Marshall / Flickr.

Colonial America was actually a pretty large producer of rum in the late 17th century, and that rum would come to be used as the exchange good for slaves in Africa (before rum, French-made brandy was used). New England, in particular, would develop a sizable and profitable rum-distilling business. Although sugarcane couldn’t grow here, the area had the benefit of skilled metalworkers, coopers, and significant resources of lumber.

This pool of know-how and resources meant New England could build and feed sophisticated stills and had the barrels it needed to export its rum. In the end, all these developments led to a secondary “triangular trade” forming in the New World: merchants would trade American rum for African slaves, sell the slaves to plantation masters in the West Indies (the Caribbean islands) for sugar and molasses, and that molasses would then be shipped back to the colonies and distilled into fresh rum.

Putting that aside, the colonies did have a significant hand to play in shaping modern rum. From what we can tell, the rumbullion produced on sugar plantations was quite different from today’s rum. American distillers, who had the weight of whiskey-making tradition behind them, naturally made rum that more closely resembled whiskey. This seems to have been lighter both in taste and in alcohol content than the original concoctions, and is by all accounts very similar to the rum of today.

But, as always tends to happen in history, everything changed again. Around the 18th century, at the height of the triangular trade, French authorities banned the production of rum in their overseas colonies — rum was competing with brandy for market share, and the French didn’t like that one bit.

Although this very likely didn’t factor into the decision, I’m sure the French would have been delighted to know that their ban would end up messing up English affairs monumentally. The ban led to a massive drop in the price of molasses from French colonies, so distillers in the New World started buying from them instead of from English holdings. Compounding the issue was the fact that British colonies didn’t really want what the Americans had to sell — raw resources such as fish, lumber, and skins — while French, Spanish, or Dutch colonies would accept a wide range of goods.

Naturally, people shifted towards the more convenient and affordable option. American rum then suddenly became much cheaper than English rum, with no drop in quality. This caused massive outrage in England, the kind of outrage a crown can never ignore — outrage from people with plantations, stills, and ultimately, wealth. This led to the implementation of the Molasses Act in 1733, which attempted to levy a hefty tax on the import of molasses to the colonies from non-British plantations. The point here was to make non-British molasses too expensive to realistically purchase, not necessarily to make money.

Naturally, the colonies had no love for, nor any real interest in enforcing, an act that would cripple one of their main and most sophisticated industries. Smuggling became the unspoken rule, as producers didn’t want to pay the tax, and authorities didn’t want to force them to pay it either. Where common interest didn’t prevail, bribery and intimidation provided the lubrication needed to keep the American rum business going.

Seeing the utter and abject failure of this law, the British Parliament then passed the Sugar Act (also known as the American Revenue Act) in 1764. This reduced the original tax from six pence per gallon of molasses down to three pence per gallon, but Parliament put much more effort into actually collecting it.

Still, the damage had already been done by the Molasses Act. Political authority is a fickle thing. A large part of being in power is people believing that you are in power. The rampant evasion of the molasses tax — one that everyone could see and took part in — together with the colonies’ resentment of a measure they perceived as punishing them unjustly, shattered the illusion of British supremacy over the Americas.

This crack would eventually grow and help shape the events and sentiments that made the American colonies seek out their independence.

The British Empire itself outlived the loss of the American colonies by quite a large margin; it would, in fact, keep growing for well over a century. But the loss was a heavy blow to the single largest empire the world has ever seen, and it marked the end of what historians call the First British Empire.

Can rum thus be said to have toppled the British empire? Not exactly. Misrule and, arguably, our innate human need for freedom and autonomy, did that job. But rum, and the interests of those making money on all the stages of its production chain, certainly helped foster the conditions needed to topple an empire.

I’m personally prone to thinking about and understanding the world around me through metaphors. I can’t help but see the symbolism in a drink, initially brewed by slaves seeking some measure of escape from their lives, lending a hand in giving a country its independence. I know it’s just a spirit distilled from sugarcane. But reading about its history, the honestly tragic roots it grew from, it’s impressive to see how many people lost, sought, and found a measure of freedom through rum. Maybe some of those slaves’ dreams were distilled down into the rum alongside the molasses.

Famous Egyptologist reports the discovery of a whole ancient settlement

A new ancient city has been discovered under the sands of Egypt, a team of archaeologists reported on Saturday. The settlement dates back to a golden era of ancient Egypt, roughly 3,000 years ago, they explain.

The site. Image credits Zahi Hawass / Facebook.

Zahi Hawass, one of the country’s best-known archaeologists and Egyptologists, announced the finding to the public. The ancient site includes brick houses, tools, and other artifacts dating back to the rule of Amenhotep III of Egypt’s 18th dynasty.

The discovery will help us better understand how ancient people, particularly those in Egypt, lived three millennia ago.

New old place

“Many foreign missions searched for this city and never found it,” Dr. Hawass, a former antiquities minister, told the BBC. “[The site represents] a large city in a good condition of preservation, with almost complete walls, and with rooms filled with tools of daily life.”

The city was known as Aten and is located in Luxor, on the west bank of the Nile, between the temple of King Rameses III and the colossi of Amenhotep III. Archaeologists started working there last year, looking for the mortuary temple of King Tutankhamun. Within a few weeks, however, they found a whole city built from mud brick instead. Whole buildings, rooms full of ovens, pottery meant for storing food, and general-use tools were found here, even human remains.

The ancient city seems to have been organized into three major districts: one for administration, one for workshops and other industrial pursuits, and a district where workers could sleep and presumably live. There was also a dedicated area for producing dried meat, the team explains. The settlement dates back around 3,000 years, to the reign of Amenhotep III. We know of this timeframe because some mud bricks discovered at the site bear the seal of King Amenhotep III’s cartouche, or name insignia.

Hawass said he believes that the city was “the most important discovery” since the tomb of Tutankhamun was unearthed in the Valley of the Kings in Luxor in 1922. Amenhotep III ruled between 1391 B.C. and 1353 B.C. and built large parts of the Luxor and Karnak temple complexes in Thebes.

The discovery has been hailed by Egyptologists around the world, both for how unique it is and for its incredible scale. The site had not been identified until now, as far as we know, and it could very well be just one part of a larger city.

The spicy history of how pumpkin spice got so popular

Autumn is here, and that can only mean one thing — everything now comes in a ‘pumpkin spice’ option. You might be surprised to hear, however, that this isn’t a modern fad; the spice mix goes back a long way.

Image via Pikrepo.

Despite its name, pumpkin spice doesn’t contain any pumpkin. The name stuck because this mix was originally marketed as flavoring for pumpkin-containing items such as pies or cakes. It hails from the 1930s, when US-based McCormick started selling the mix commercially under the name of ‘pumpkin pie spice’.

So if it’s not pumpkin, what is it? Well, it is a mixture of sweet spices — cinnamon, nutmeg, ginger, cloves — all ground together in various proportions; sometimes, it can also include allspice. It’s similar in composition to mixes typically added to British pudding. Although it doesn’t necessarily have the same proportions of each spice, it’s very likely that the two are related.

So where did the British get it from?

In very broad strokes, as tended to be the case for most spices in Europe, the answer is South-East Asia.

Even today, the world’s chief source of nutmeg (Myristica fragrans) is the Maluku Islands of Indonesia, where the plant is endemic. This area accounts for around 90% of the global production of nutmeg, which earned it the moniker of the ‘Spice Islands’. Malaysia also produces some nutmeg, as do the Caribbean islands. There is a species of ‘California nutmeg’ native to the US, but this isn’t related to true nutmeg and is not used as a spice.

Nutmeg fruit with the seed (from which the spice is made) and seed covering or ‘aril’ (from which mace is produced). Image via Pixabay.

Cinnamon is also chiefly produced in South and South-East Asia, with Indonesia, China, Vietnam, and Sri Lanka accounting for over 90% of total production. India, China, Nigeria, and Nepal grow most of the world’s ginger. Cloves originally came only from Indonesia, but the species has since been transplanted to and successfully grown in other warm regions, such as Zanzibar and Madagascar.

How did it all start

The individual spices that make up pumpkin spice have long been used in the places where they’re endemic.

Map of protohistoric spice trade routes of the Austronesian peoples in the Indian Ocean. Image created by Wikiuser Obsidian Soul after Palgrave Macmillan.

Nutmeg, for example, is a traditional ingredient in Indian cuisine and employed as a medicinal plant in countries all around the Indian Ocean. It was also the lynchpin of bustling trade routes in the area; natives on the Banda Islands made a decent living by growing their crops of spices, while Arab and Indian traders made a fortune from carrying it around.

Europeans got their first taste of nutmeg from Arab traders. There is some evidence to suggest that it made its way to these parts back when Rome still had an empire. For example, Pliny the Elder describes several spices in his book Naturalis Historia, including nutmeg, and his description is accurate enough to suggest he had actually encountered the plant and wasn’t basing his words on hearsay. Keep in mind, however, that the Romans had a much broader definition of spices than we do today: it included medicinal plants, plants used for perfumes or meant to be burnt, such as incense, plants used in makeup, and those that could be employed to preserve food. Pedanius Dioscorides, a Greek physician born in Asia Minor, also describes over six hundred medicinal plants (including spices) coming from the Orient in his medical treatise De materia medica.

The land routes (red) and maritime routes (blue) of the Silk Road, ca. the 11th century. Image via Wikimedia.

From works such as these, we gather that spices including nutmeg and cinnamon made their way to Europe either by ship from ports on the western coast of India, through the Red Sea and then Turkey, or on land routes through China on the “Via scythica” (the Scythian Road) — from Beijing through the Gobi Desert, Kazakhstan, over the Ural Mountains and the Caspian Sea, then over the Black Sea and the Sea of Azov, finally arriving at Constantinople (today’s Istanbul). Spices also sometimes traveled via the “Via serica” (literally “the Silk Road”), which was the avenue of trade and diplomacy between China and Europe (mostly the Roman Empire) in antiquity.

Although these trade routes of Antiquity introduced Europeans to the plants that would eventually culminate in pumpkin spice, these were still extremely expensive commodities. By the time Emperor Diocletian issued his Edict on Maximum Prices (“Edictum de maximis pretiis”) in the year 301 AD, many spices were worth more than gold or jewels for the same unit of volume. It has to be mentioned that Roman coins were severely debased at the time (their precious metal content had been slashed, fueling inflation) due to shenanigans by a long string of rulers during the Imperial Crisis, but this affected the prices of all commodities, so spices really were that expensive by comparison.

Spices were so extremely expensive because they needed to be transported over vast stretches of land or sea, changing hands several times, with everyone taking a cut (and increasing the price) in the process. This leads us nicely to:

The middle bit

Everybody was making bank from carrying spices towards Europe. So they kept their source a secret from those who bought them. Even in the days of Pliny, merchants would tell of winged beasts guarding the spices and other similarly fantastical tales, and Pliny mocked them relentlessly for it. But the mystery only deepened with the collapse of the Roman Empire, when a lot of knowledge (including that of far-away places) was lost.

A 17th century plaque to Dutch East India Company in Hoorn, Holland. Image via Wikimedia.

Although we know that spices such as nutmeg were still being used in Europe by the 8th century, where they came from was still a mystery. As far as the locals knew, spices grew on foreign ships and were harvested in Venice. By the time it reached Europe, a bag of nutmeg was worth more than most people made in a lifetime.

With that in mind, you’ll be so surprised to hear what happened after explorers such as Vasco da Gama discovered the sea route to India in 1498, opening the way to the islands of South-East Asia. Yup, it was war.

Not with the natives, per se — although the Dutch East India Company would end up wiping out around 90% of them and enslaving the rest — and not instantly. Portuguese traders were very content to just buy the spices from the Bandanese at first, partly because it was still lucrative, partly because they tried (and failed) to establish fortifications on the island.

But there was, eventually, war between the European powers of Britain, Portugal, Spain, and the Netherlands. Spices were stupidly expensive in Europe (at this time, nutmeg was still more expensive than gold), so everybody wanted to have a monopoly on them, overcharge, and keep all the profits. The fighting started after Europeans established their first land bases in the area, around 1512, and raged on until the late 1660s.

“The surrender of the Prince Royal” by Willem van de Velde the Younger. Prince Royal was a massive English flagship surrendered to the Dutch after it ran aground on a sandbank during the Four Days’ Battle, as the two countries fought over spice colonies and trade routes.

Britain managed to hold onto the island of Rhun (or Run), which became their first (but definitely not last) colony, until 1667, when they traded it away for Manhattan. This gave the Dutch pretty much full control over all the nutmeg, cloves, and a bunch of other local spices that flowed towards Europe — so they had full control over how expensive they were (very expensive). From there, the Dutch East India Company (or ‘Vereenigde Oostindische Compagnie’, VOC, in Dutch) grew into the richest company in history, estimated at around 7.8 trillion of your modern dollars. Apple and Google, for comparison, are worth ‘only’ $2 trillion and $1 trillion, respectively.

As a side-note, it can be argued that the globalized trade and interdependent markets of today wouldn’t exist in their current form if it weren’t for the VOC. It set the blueprint we still follow today: for example, it was the first-ever company to sell its shares publicly and to directly tie two economies on different continents together. All in all a very impressive endeavor, if you can overlook the astonishing depths of moral depravity and human suffering it was built on.

It would all end eventually, however, as all things do, when in 1769 French-born Pierre Poivre smuggled nutmeg seedlings to Mauritius, breaking the monopoly. The company itself would eventually be dissolved, on the last day of the year 1799.

During the Napoleonic wars, the Netherlands were technically England’s enemies as they were kind of strong-armed into the French Empire. With this excuse in hand, the British invaded Dutch holdings in South-East Asia, and nutmeg became a cherished part of British culture (and was enthusiastically planted in any and all colonies where it would grow).

How we put it in a latte

Pumpkin spice today is heavily associated with autumn. It got there because the mix was advertised in the US specifically for products containing pumpkin, such as pies or cakes (mainly due to the British legacy of using spices such as nutmeg and cloves in cakes and puddings). Since pumpkins are harvested in autumn and don’t keep very well (most don’t last the winter), that’s when most Americans first encountered the mix, and an association formed.

Image via Pxfuel.

Over time, however, people figured out that you can sell pumpkin spice even sans pumpkin. So they did.

Spices today definitely don’t command the astronomical prices they did a mere 200 years ago — chiefly because most aren’t controlled by any monopolies. They’re definitely still valued, but they’re no longer reserved only for the tables of the rich and powerful. Being much more affordable means that more people are willing to pay a little extra just to enjoy the flavors they bring. So, naturally, people started putting pumpkin spice in coffee.

Pumpkin spice, in itself, isn’t even particularly novel — variations on this mix have been in use for the last two to three centuries. It caught on specifically during the late 90s in coffee shops, and really took off with Starbucks’ pumpkin spice latte, introduced in 2003.

Exactly why it’s so popular is debatable. However, it’s worth pointing out that cinnamon, nutmeg, and to some extent cloves as well, are heavily used in home-made food and sweets during the winter holidays, especially in British culture (which heavily informed American cooking and customs). Smells and aromas are strong elicitors of feelings and memories, so maybe these lattes bring us back to our emotional happy place, one where we’re enjoying the holidays with our family at home around the fire.

New approach to dating pottery involves analyzing traces of old meals

Researchers at the University of Bristol have developed a new method of dating pottery — at least, pottery that was used for cooking.

Image via Pixabay.

The approach involves carbon-dating animal fat residues recovered from the pores of such vessels, the team explains. Previously, archeologists would date pottery either by using contextual information — such as depictions on coins or in art — or by dating organic material that was buried with it. This new method is much more accurate, however, and the team explains it can be used to date a site even to within a human lifespan.

Old cuisine

“Being able to directly date archaeological pots is one of the ‘Holy Grails’ of archaeology,” says Professor Richard Evershed from the University of Bristol’s School of Chemistry, who led the research.

“This new method is based on an idea I had going back more than 20 years. We made several earlier attempts to get the method right, but it wasn’t until we established our own radiocarbon facility in Bristol that we cracked it.”

Really old pottery, for example the vessels made and used by Stone Age farmers, is pretty tricky to date. Many pieces are quite simple and not particularly distinctive, and there is no context to date them against. So archeologists use radiocarbon dating, or 14C-dating, to analyze bones or other organic material that was buried with the pots. This is an indirect measurement and less accurate than dating the pots themselves. Raw clay and fired pots, however, can’t be carbon-dated, as the clay itself doesn’t contain the organic carbon the method needs.
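
The arithmetic behind radiocarbon dating itself is simple exponential decay, and a quick, generic sketch helps show why the method works at all (the half-life below is the standard textbook value, not a figure from the Bristol study):

\[
N(t) = N_0 e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}}, \qquad t_{1/2} \approx 5{,}730 \text{ years},
\]

so the age of a sample follows from how much of its original 14C remains:

\[
t = \frac{t_{1/2}}{\ln 2}\,\ln\!\left(\frac{N_0}{N}\right).
\]

A sample retaining half of its original 14C is roughly 5,730 years old; one retaining a quarter is roughly 11,460 years old.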

Professor Evershed’s idea was to analyze fatty acids from food preparation — which can be dated — that were protected from the passage of time within the pores of these pots. The team used spectroscopy and mass spectrometry to isolate these fatty acids and check that they could be tested.

As an experimental proof of concept, they analyzed fat extracts from ancient pottery at a range of already precisely-dated sites in Britain, continental Europe, and Africa, some up to 8,000 years old, with very good results.

“It is very difficult to overstate the importance of this advance to the archaeological community,” says Professor Alex Bayliss, Head of Scientific Dating at Historic England, who undertook the statistical analyses. “Pottery typology is the most widely used dating technique in the discipline, and so the opportunity to place different kinds of pottery in calendar time much more securely will be of great practical significance.”

The new method has been used to date a collection of pottery found in Shoreditch, thought to be the most significant group of Early Neolithic pottery ever found in London. It comprises 436 fragments from at least 24 separate vessels and was discovered by archaeologists from MOLA (Museum of London Archaeology). Analysis of traces of milk fats extracted from these fragments showed that the pottery was around 5,500 years old. The team was able to date the collection to a window of just 138 years, to around 3600 BC.

These people were likely linked to the migrant groups who first introduced farming to Britain from Continental Europe around 4000 BC, the team explains.

The paper “Accurate compound-specific 14C dating of archaeological pottery vessels” has been published in the journal Nature.

Humans figured out how to start fires way sooner than expected

Let’s be honest for a second here — we say humans ‘mastered’ fire, but most of us wouldn’t be able to light something up without some matches to save our lives.

Image credits Gerd Altmann.

It’s understandable, then, for researchers to assume that early humans likely harvested (instead of starting) fires. However, the ability to harness fire was a key developmental step for our species, enabling us to cook, protect ourselves from wildlife, or just make the cave a more enjoyable place to hang around in. As such, archeologists are very keen to pin down (and eager to debate) when exactly we learned to start fires.

New research from an international team now reports that Neanderthals, one of our ancient (and now extinct) relative species, knew how to produce fire, overturning our previous assumptions.

Baby light my fire

“Fire was presumed to be the domain of Homo sapiens but now we know that other ancient humans like Neanderthals could create it,” says co-author Daniel Adler, associate professor in anthropology at the University of Connecticut (UConn). “So perhaps we are not so special after all.”

The team drew on hydrocarbon and chemical isotope analysis, archeological evidence of fire use, and models of the Earth’s climate tens of thousands of years ago to show that our ancient cousins did indeed know how to light a fire. The study focused on the Lusakert Cave 1 in the Armenian Highlands.

The team analyzed sediment samples to determine the level of polycyclic aromatic hydrocarbons (PAHs) — compounds that are released by burning organic materials. Light PAHs disperse widely, the team explains, and are indicative of wildfires. Heavy PAHs, on the other hand, spread narrowly around a source of fire.

“Looking at the markers for fires that are locally made, we start to see other human activity correlating with more evidence of locally-made fire,” says lead author Alex Brittingham, a UConn doctoral student in anthropology.

Higher levels of heavy PAHs at the site (which indicate regular fire use) correlate with evidence of increased human occupation (such as dumps of animal bones from meals) and of tool making, the team explains.

In order to rule out the possibility that these fires started naturally (for example, following lightning strikes), the team analyzed hydrogen and carbon isotope ratios in plant waxes preserved in sediment from those ancient days. This step is useful for recreating the kind of climate the plants grew in, the team reports. All in all, they didn’t find any link between the paleoclimatic conditions at the time and the chemical evidence left over by the fires. The inhabitants were not living in drier, wildfire-prone conditions while they were utilizing fires within the cave.

“In order to routinely access naturally caused fires, there would need to have been conditions that would produce lightning strikes at a relative frequency that could have ignited wildfires,” says Michael Hren, study author and associate professor of geosciences.

In fact, the team reports that there were fewer wildfires going on in the area while humans inhabited the cave (light PAH frequency was low while heavy PAH frequency in the cave was high). This finding suggests that the Neanderthals acted as a kind of fire control in the area they inhabited, intentionally or not. It also shows they were able to control (i.e. start) fire without having to rely on natural wildfires.

The team now plans to expand their research to other caves occupied by early humans, to determine whether different groups learned to control fire independently of people in other geographic areas. In other words, was it something that only certain groups figured out, or more widespread knowledge?

The paper “Geochemical Evidence for the Control of Fire by Middle Palaeolithic Hominins” has been published in the journal Scientific Reports.


Cities from 9,000 years ago had pretty much the same problems as those of today, study finds

Big city problems are not recent news — in fact, they’re about 9,000 years old, according to a new study.

 Çatalhöyük after the first excavations.

The Çatalhöyük site during excavation work in 2013.
Image credits Omar Hoftun / Wikimedia.

A new study finds that one of the world’s first large farming settlements experienced many of the hazards of modern urban life — overcrowding, infectious diseases, exposure to violence and environmental problems — almost ten millennia ago.

New dog, old tricks

“Çatalhöyük was one of the first proto-urban communities in the world and the residents experienced what happens when you put many people together in a small area for an extended time,” said Clark Spencer Larsen, lead author of the study and Professor of Anthropology at The Ohio State University. “It set the stage for where we are today and the challenges we face in urban living.”

Çatalhöyük, in modern-day Turkey, is one of the earliest-known large farming settlements in the world. It was a sprawling, populous place inhabited from about 7100 to 5950 B.C. At its peak, Çatalhöyük housed anywhere between 3,500 and 8,000 people. Because it lacks some of the key traits of cities today, however, we call it a ‘proto-city’ — but don’t let the moniker fool you. An international team of bioarchaeologists reports in a new paper that life in Çatalhöyük was rife with the same perils we’re exposed to in New York or London today.

The findings, which were drawn from 5 years of study of human remains unearthed at the site, show what people were up against as they transitioned from a nomadic hunter-gatherer lifestyle to a sedentary, agricultural one. As part of the larger Çatalhöyük Research Project, directed by Ian Hodder of Stanford University, Larsen first began studying human remains from the site in 2004. Fieldwork at Çatalhöyük ended in 2017, and the paper represents the culmination of the bioarchaeology work at the site, Larsen said.

The ruins of Çatalhöyük were first excavated in 1958, and today the site measures around 13 hectares (about 32 acres), with nearly 21 meters of deposits spanning 1,150 years of continuous occupation, according to the study. The settlement definitely had its ups and downs over the ages. It was a modest enough place in the Early period, a handful of mud-brick houses, but grew to a substantial size by its peak in the Middle period (6700-6500 B.C.). By the Late period, the population had declined sharply. Çatalhöyük was abandoned around 5950 B.C.

Farmville

Çatalhöyük room restoration.

On-site restoration of a typical interior at Çatalhöyük.
Image via Wikimedia.

Farming was a pretty central part of life in Çatalhöyük. Based on stable carbon isotope analysis of the bones found at the site, the team determined that the residents relied heavily on wheat, barley, and rye for food, a diet they fleshed-out with wild plants. This grain-centric diet caused some locals to develop tooth decay, one of the so-called “diseases of civilization,” according to Larsen. Between 10% and 13% of all adult teeth retrieved at the site showed signs of dental cavities.

Based on stable nitrogen isotope ratios (nitrogen gets concentrated the further up a food chain you go, so it can be used to see which animals eat which), the team also reports that Çatalhöyük’s residents primarily ate mutton, goat, and game as far as meat goes. Cattle were introduced to the area during the Late period, but sheep remained the primary source of meat for locals.

“They were farming and keeping animals as soon as they set up the community, but they were intensifying their efforts as the population expanded,” Larsen said.

Residents also saw high infection rates. Up to one-third of remains from the Early period show evidence of infections on their bones. Pathogens thrived here due to crowding and poor hygiene. The team explains that during the settlement’s peak, houses were built like apartments, with no space between them, so residents entered and left via ladders through the roofs of the houses. Excavations showed that interior walls and floors were re-plastered many times with clay. And while the residents kept their floors mostly debris-free, analysis of house walls and floors showed traces of animal and human fecal matter.

“They are living in very crowded conditions, with trash pits and animal pens right next to some of their homes. So there is a whole host of sanitation issues that could contribute to the spread of infectious diseases,” Larsen said.

Crowding may have also helped spark violence between locals, the team adds. Out of a sample of 93 skulls the team examined, 25 showed signs of healed fractures. Out of the same sample, 12 showed signs of repeated injuries (between 2 and 5).

Based on the shape of the lesions, they were produced by blows to the head made with hard, round objects. Clay balls with matching size and shape were found at the site. Most injuries were found on the top or back of the heads, suggesting the attacks came from the back (such as would happen during a mugging). Over half the victims were women (13 women to 10 men). The team also found evidence that such injuries were most common during the Middle period, “when the population was largest and most dense,” according to Larsen. “An argument could be made that overcrowding led to elevated stress and conflict within the community,” he adds.

The study also offers some clues as to why Çatalhöyük was abandoned. The authors report that the shape of the locals’ leg bone cross-sections changed over the generations in ways indicative of walking long distances. Locals in the proto-city’s Late period had to walk around significantly more than their earlier counterparts, likely for farming and grazing. This finding suggests that drier climate conditions had a key role to play in Çatalhöyük’s demise, Larsen explains.

“We believe that environmental degradation and climate change forced community members to move further away from the settlement to farm and to find supplies like firewood,” he added. “That contributed to the ultimate demise of Çatalhöyük.”

Another surprising finding came from the way the locals were inhumed in Çatalhöyük. Most people were buried in pits dug into the floor of houses, likely under the home where they lived. Analysis of the teeth of those buried together, however, showed that most members of a household weren’t biologically related at all.

“The morphology of teeth are highly genetically controlled,” Larsen said. “People who are related show similar variations in the crowns of their teeth and we didn’t find that in people buried in the same houses. It is still kind of a mystery.”

“We can learn about the immediate origins of our lives today, how we are organized into communities. Many of the challenges we have today are the same ones they had in Çatalhöyük—only magnified.”

The paper “Bioarchaeology of Neolithic Çatalhöyük reveals fundamental transitions in health, mobility, and lifestyle in early farmers,” has been published in the journal PNAS.


The delicious history of ice cream throughout the ages

Who doesn’t love ice cream? Less clear cut, however, is who invented it. We don’t know for sure how ice cream came to be but here’s what we do know about its history.

Ice cream.

Image via Pixabay.

The first brush Europeans had with something resembling ice-cream was likely around the 1300s, when explorer Marco Polo returned to Italy from China. Along with his wild stories of adventure and exotic lands, Polo also bore the recipe for a dessert we’d call sherbet or sorbet. Later on, this recipe likely evolved into the ice cream we know and love today sometime during the 16th century. It really came into its own during the 20th century, with the advent of new refrigeration techniques that allowed for the mass production of ice cream.

But, let’s not get ahead of ourselves — let’s not start eating this treat from the cone up, as it were. The story of ice-cream (what we know of it, at least) starts, surprisingly enough, in Antiquity.

Ice Cream Age

To the best of our knowledge, ice cream first reared its refreshing head in the Persian empire of yore. We don’t know, for sure, who first came up with the idea or when. However, around 500 B.C., we have evidence of the Persians mixing ice with grape juice, fruit juices, or other pleasant-tasting flavorings to produce an ice-cream-like treat. Needless to say, during that time and especially in that place (the Persian Empire stretched from India to Egypt and Turkey, so it was generally a very hot place), this delicacy was very hard and very expensive to produce, making it a noble or royal dish.

Their ice cream more closely resembled what we’d call sorbet today in texture and taste. Still, it was highly-regarded due to its scarcity and was probably greatly enjoyed in the Persian heat by those who could afford it.

Eventually, the Persian Empire met its maker in the form of one Alexander the Great, who waged war on them for about ten years. Warmaking is hot, tiring stuff, and accounts from Alexander’s campaigns say he took a particular liking to the local “fruit ices”, which are described as a honey-sweetened dish chilled using snow. The Persian dessert further evolved through time and was inherited by Iranians in the form of faloodeh, a traditional chilled dessert. Following the Muslim conquest of Persia in 651 AD, the Arab world also adopted this dish.

Sorbet.

This is sherbet.
Image credits Elizabeth Rose.

Likely through Alexander’s phalangites returning home from their campaigns, ice cream was gradually introduced to early Western societies, eventually finding its way to the Emperor’s court in Rome. Icecreamhistory cites “tales from this period” telling of “armies of runners, who carried ice from mountains to big Roman cities during summers”, showcasing how appreciated the dish became among Roman nobles and Emperors. Emperor Nero is recorded as being a big fan of the dessert.

Ice cream R&D was going strong in China and Arab countries during the 9th to 11th centuries. Around this time, confectioners started experimenting with milk-based ice creams, more akin to the ones we enjoy today. Their ideas slowly made their way to Europe on the backs of traders and wanderers such as Marco Polo. The strong Mediterranean economic presence of the Italian city-states at the time, especially their trade with Muslim countries, put them in a unique position to draw on these ideas, which is why Italy has such a strong tradition of ice cream making to this day.

The fact that ice cream was still rare and expensive to produce likely helped fuel its development, alongside that of refrigeration techniques, as there was a lot of money to be made in the business. However, it also kept ice cream from becoming the widely-enjoyed treat that it is today. With a hefty price tag, and in the absence of any means of effectively storing ice or snow, it remained a very exclusive dish in Europe up until the 17th or 18th century.

The Icedustrial Revolution

There is some debate as to where ice cream first made its European debut. “Cream Ice”, as it was known at the time, made its way to England sometime in the 16th century. During the 17th century, it was a regular fixture at the table of Charles I. France got its first taste of the dessert in 1533, after the Italian Catherine de Medici wed Henry II of France.

However, everybody seems to agree that ice cream was first made available to the general public in 1660, when a Sicilian man named Procopio Cutò introduced a recipe of frozen milk, cream, butter, and eggs (gelato) at Café Procope (called the oldest café in Paris), which he owned. Procopio is credited as the inventor of gelato.

New production and refrigeration methods allowed ice and ice cream to be produced in greater quantities, and cheaper than ever before. The dessert made its way to America on the backs of these technologies in the mid-17th century, and after a few decades became available to the general public. Around 1850, large commercial entities started dabbling in the production and sale of ice cream, which further brought costs down and allowed more people than ever to enjoy the frozen treat.

The biggest single boon for ice cream was the advent of commercially-available, continuous electrical refrigeration after World War I. The ability to store ice cream for long periods of time without damaging it practically gave the industry wings; production during this time rose a hundredfold, especially in the United States, which had escaped the war unravaged, and prices dropped to previously unheard-of lows.

Ice cream truck.

And to new neighborhoods.
Image credits Leonie Schoppema.

Ice cream also gained an unexpected boost on global markets during World War II, when both flash-frozen and dried ice creams became part of the official US Army combat rations. These were distributed to US soldiers in every theater of operations: Europe, North Africa, East Asia, and the Pacific. In fact, ice cream played a central role in keeping up US soldiers’ calorie intake, as well as their morale and fighting spirit. An article in The Atlantic that looks at the role of ice cream in the American war effort during World War II (it’s a very good piece, do give it a read) cites an editorial in the May 1918 issue of The Ice Cream Review, a monthly trade magazine, that shows where this treat fit into military life during the First World War.

“In this country every medical hospital uses ice cream as a food and doctors would not know how to do without it. But what of our wounded and sick boys in France? Are they to lie in bed wishing for a dish of good old American ice cream? They are up to the present, for ice cream and ices are taboo in France,” The Ice Cream Review wrote. “It clearly is the duty of the Surgeon General or some other officer to demand that a supply be forthcoming.”

You could chalk those lines up to industry lobbying — and that’s probably exactly what it was. But by 1942, the situation had changed dramatically. Whether as a result of lobbying, of grassroots support from GIs, or simply out of a desire to give those on the front the best comforts one could realistically provide, ice cream was often seen on American lines.

When the U.S.S. Lexington, the second-largest aircraft carrier in the US Navy at the time, had to be scuttled to avoid capture by Japanese forces, “the crew abandoned ship — but not before breaking into the freezer and eating all the ice cream. Survivors describe scooping ice cream into their helmets and licking them clean before lowering themselves into the Pacific,” the article explains.

“The U.S. Navy spent $1 million in 1945 converting a concrete barge into a floating ice-cream factory to be towed around the Pacific, distributing ice cream to ships incapable of making their own,” Matt Siegel wrote for The Atlantic. “It held more than 2,000 gallons of ice cream and churned out 10 gallons every seven minutes.”

“Not to be outdone, the U.S. Army constructed miniature ice-cream factories on the front lines and began delivering individual cartons to foxholes. This was in addition to the hundreds of millions of gallons of ice-cream mix they manufactured annually, shipping more than 135 million pounds of dehydrated ice cream in a single year.”

Immediately after the war, ice cream was perceived as an American invention. It’s not hard to understand why. Most of the industrialized world had been bombed halfway back to the Stone Age in not one but two massive conflicts, so frozen desserts weren’t high on anybody else’s to-do list. Hollywood also helped promote ice cream, which was regularly featured in movies and the wider pop culture around them. The icy appeal of ice cream proved irresistible, and as the world dragged itself out of the rubble and horror of war, other countries started churning out their own. This period also saw a great deal of experimentation with and development of new types of ice cream, most notably the soft-serve and sundae varieties that are highly appreciated to this day.

The kill rate in Game of Thrones is actually quite realistic, a new study reveals

“When you play the game of thrones, you win or you die,” Queen Cersei said. Apparently, she was right.

Sorry, Jon. Your world is just as brutal as the real one was in Medieval times.

George R. R. Martin published the first Game of Thrones (GoT) novel in 1996. That’s right kids, A Song of Ice and Fire, as the fantasy series is called, started more than 20 years ago. But it definitely blew up when HBO turned it into a TV series. It was an instant hit. The drama, the strong, believable characters — and the violence. GoT stood out through violence, leaving many people wondering if the show had gone too far. As it turns out, the answer is no — it hasn’t gone too far. In fact, it hasn’t really gone far at all. The fatality rates in the series match those in reality.

Celine Cunen, a PhD student at the Department of Mathematics at the University of Oslo, has examined whether the mortality rate in GoT is higher than that of a real civil war from the Middle Ages. She compared the fantasy with the famous Wars of the Roses, a civil war that ravaged England between 1455 and 1487, from which Martin took a lot of inspiration. Cunen found that the mortality rate of main characters in the TV show matches the mortality rate of the noble classes in the Wars of the Roses.

This might come as quite a shocker for most people, but we tend to forget just how brutal and unforgiving those times were. That a series which strikes us as incredibly violent is just normal for the Middle Ages says a lot about those times, as well as about the ones we live in now.

Statistics for “important” people

Left: survival curves for GoT characters (red) and Wars of the Roses figures (blue). The faster the curve falls, the higher the mortality rate. Right: survival curves for noble GoT characters (dark red), GoT commoners (light red), noble historical people (dark blue), and historical commoners (light blue). Graph: Celine Cunen/UiO.

The Wars of the Roses were fought between supporters of two rival branches of the royal House of Plantagenet: the House of Lancaster and the House of York. This is mirrored in the GoT series by the Houses of Lannister and Stark (spoiler alert: in the real war, the Lancasters won). Although Martin mentions the lower classes more than other authors, it’s clear that the books are biased towards the higher echelons of society — that’s where all the cool things happen. The same can be said of the historical evidence for the Wars of the Roses. The information we have is greatly skewed towards mentioning the nobles. In a world where rulers fight for the throne, there’s little place for commoners. So this is not a statistic made for the general population of the time, as Cunen herself mentions.

“The medieval wars were fought by the nobility and professional warriors. Furthermore, all historical data from these wars are completely fixated on so-called “important people”, meaning politicians or nobility. Consequently, this is not a comparison on these wars general mortality, but a comparison on the mortality between «important» people.”

The mortality rates in the Wars of the Roses and in GoT, for this sample, are very similar. Even without dragons, medieval noblemen managed to kill each other off just as efficiently as the characters in Martin’s groundbreaking series. So remember: instead of cursing Martin or HBO for killing off your favorite character, you can just curse the reality that inspired them.
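
For the statistically curious: comparisons like this rest on survival curves of the kind shown in the graph above. Below is a minimal, self-contained sketch of the Kaplan-Meier estimator such curves are typically built from. The numbers are invented purely for illustration; this is not Cunen’s code or data, and her actual analysis may well use different tools.

```python
import numpy as np

def kaplan_meier(durations, died):
    """Kaplan-Meier survival curve.

    durations: how long each character/person was followed (years, seasons...)
    died:      1 if they died at that time, 0 if they were still alive when
               the record (or the show) ended, i.e. censored
    Returns the distinct death times and the estimated probability of
    surviving past each of them.
    """
    durations = np.asarray(durations, dtype=float)
    died = np.asarray(died, dtype=bool)

    death_times = np.sort(np.unique(durations[died]))
    survival, s = [], 1.0
    for t in death_times:
        at_risk = np.sum(durations >= t)          # still alive just before t
        deaths = np.sum((durations == t) & died)  # deaths exactly at t
        s *= 1.0 - deaths / at_risk               # product-limit update
        survival.append(s)
    return death_times, np.array(survival)

# Toy data: (time followed, died?) for a handful of hypothetical characters
got_t, got_s = kaplan_meier([1, 2, 2, 3, 5, 6, 6, 8], [1, 1, 0, 1, 1, 1, 0, 0])
war_t, war_s = kaplan_meier([1, 2, 4, 4, 5, 7, 8, 9], [1, 1, 1, 0, 1, 1, 0, 0])
print(dict(zip(got_t, got_s.round(2))))   # survival curve for the "show"
print(dict(zip(war_t, war_s.round(2))))   # survival curve for the "war"
```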

Cunen adds that she did all this for fun. She found that it’s a nice way of communicating statistics to a broad audience. Adding a layer of both historical and literary significance is just a bonus.

“I have to add: I did this just for fun. I am part of a group of researchers that wish to communicate statistics to the world. I regarded this as a good opportunity to do just that! However, it is of course impossible to obtain a correct statistical basis in a comparison between a TV-show and a historical war. But it might give you some idea of the realism of the show!”


Ancient trash suggests climate change helped drive the Byzantine Empire into the ground

The Byzantine Empire, the eastern fringe of Rome that spanned both continents and centuries, may have fallen due to climate change — at least in part.

Theodora.

Mosaic showing Empress Theodora, arguably the most influential and powerful of the Eastern Roman empresses, wife of the Emperor Justinian I.
Image via Pixabay.

A research team from Israel reports finding evidence to support the view that rapid climatic changes have contributed to the fall of the Byzantine Empire. The findings, surprisingly, come from trash mounds outside an ancient Byzantine settlement, Elusa.

One man’s trash is another man’s study

The Byzantine (or Eastern Roman) Empire was, for over a millennium, a powerhouse of European culture, science, politics, and economy. It was the product of a schism in Rome — one half of an empire so successful it had grown beyond its ability to govern itself.

Emperor Diocletian had appointed an Augustus (a co-emperor) to govern the western heartlands of the empire, and in 293 AD he divided its government further into a tetrarchy (an ancient Greek word that translates, roughly, into “rule of four people”). It didn’t go swimmingly at all. Too many cooks spoil the broth, and too many emperors spoiled the empire. Massive (and mutually-destructive) civil wars raged behind the empire’s sprawling borders, bringing it to its knees. In 324, Constantine the Great (who held the rank of Augustus) reunited the empire, and moved the capital from Rome to Constantinople. The schism was set in stone by the emperor Theodosius I, who, in 395, gave his sons Arcadius and Honorius the rule of the East and the West, respectively.

Both halves considered themselves “Roman”, but they were different beasts. The Latin West was overwhelmed by invaders, and slowly collapsed under its own immensity; the East, a richer, more urban, Hellenistic (Greek) entity, squared off against the barbarians and bribed away the few it couldn’t defeat. At its largest, it included land in Greece, Italy, the Balkans, Asia Minor, North Africa, and the Levant. It would outlive its western brother by nearly a thousand years.

Still, it too would eventually fall. Officially, this happened on May 29, 1453, when the Ottoman Turks conquered Constantinople. The whole process, however, was long and drawn-out, with the Byzantines losing, regaining, and re-losing parts of their huge holdings to emerging powers.

One such event was the loss of the Levant, which includes modern-day Israel. What we know today is that this area was taken over by the Islamic conquests of the seventh century with — honestly — surprising speed. The team suspected there was more to the story, and their results suggest natural events played a big part in the Byzantine loss of the Levant.

One of Elusa’s trash mounds.
Image credits Guy Bar-Oz et al., (2019), PNAS.

The study didn’t originally set out to focus on trash heaps; the team simply took an interest in the mounds sitting just outside the settlement’s walls. They dug all the way down to the bottom of one such mound and found that it had a layered structure — suggesting it was built up by an organized, concerted trash-collection effort during Byzantine rule. Surprisingly, however, no trash dumping seems to have occurred for almost a century before the settlement was overrun by invaders.

The researchers take this as a sign that not all was well in the settlement, with the end of trash collection being a symptom of its hardships. Combing through the literature, the team identified a possible culprit in the form of the Late Antique Little Ice Age. This event, which started around 536 CE, was essentially a mini ice age triggered by three volcanoes erupting in a short span of time. They filled the air with enough debris and chemical compounds to cool the climate of much of Europe and Asia.

This mini ice age likely led to crop failures, the team adds. Elusa’s chief export at the time was Gaza wine, which probably didn’t suffer from the colder climate. However, it definitely affected Elusa’s customers — without people to sell its main product to, the city likely went through a severe economic downturn and saw a decline in population. Thus, by the time war came to Elusa’s walls, the city was already reeling and unable to put up much resistance.

The paper “Ancient trash mounds unravel urban collapse a century before the end of Byzantine hegemony in the southern Levant” has been published in the journal Proceedings of the National Academy of Sciences.

Divine punishment didn’t goad us into building civilization — it was the other way around

New research is looking into the interplay between society and moralizing, ‘big gods’ — the latter seem to be a consequence, rather than a driver, of the former.

Jesus upon hearing you tipped less than 20%.
Image credits Patrick Neufelder.

Prevailing theories today, the paper explains, hold that ‘big gods’ nurtured cooperation between large groups of genetically-distinct people, in effect underpinning societies as we know them today. These deities are defined as having a powerful moralizing effect over societies — being perceived as entities that punish ethical faux-pas — thereby acting as the common moral glue holding large groups together. However, the study we’re discussing today finds that this isn’t the case. Rather, the team suggests, it’s these complex societies that produced their complex gods, not the other way around.

Of gods and men

“It has been a debate for centuries why humans, unlike other animals, cooperate in large groups of genetically unrelated individuals,” says Peter Turchin from the University of Connecticut and the Complexity Science Hub Vienna, one of the paper’s co-authors.

The earliest groups of people lived in close-knit family units. They would cooperate because doing so would help ensure that their bloodline — and thus, their genes — would survive. People would try their best to keep themselves and their families alive, even if that meant raiding other family groups (with whom they shared no genes, thus making it a-OK). It’s a very straightforward, very intuitive approach to survival and cooperation.

Since then, things have changed. We work with and for people who have no blood ties to ourselves. We share residential buildings with people we sometimes never even meet. We help fund charities for causes half the world away. If we applied the ‘me and mine’ mentality of yore, it would make perfect sense to protect our kin-group even at the expense of others. But we don’t do that. Society would run amok, and we like society. The million-dollar question (or whatever currency it is that anthropologists use; knucklebones, maybe?) is why.

Agriculture, warfare, and religion have been proposed as the main driving forces behind our need to cooperate. Such pursuits hinge on a community’s ability to work together in large numbers. Tilling the fields requires many able hands, as do raids or defense; gods, in turn, provide the moral incentives (i.e. eternal punishment) needed to hold communities together when blood-ties don’t apply.

Nobody wants to make the boss mad. Especially when the boss is the god of war and can thus really mess up eternity for you.
Image credits Bishnu Sarangi.

But, the team wasn’t convinced. Working with data from the Seshat Global History Databank, “the most current and comprehensive body of knowledge about human history in one place” according to their website, they pitted these theories against statistical rigor. The databank contains about 300,000 records on social complexity, religion, and other characteristics of 500 past societies over 10,000 years of human history, which the team used to analyze the relationship between religion and social complexity.

If ‘big gods’ spawned complex societies, then logic dictates that they appeared in these peoples’ collective imaginations before their societies increased in complexity — or, in other words, that the fear of divine retribution coaxed people into behaving in a socially-acceptable way. The team, however, reports that this wasn’t the case.
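The logic of that check boils down to an ordering question: region by region, did moralizing gods first appear before or after the jump in social complexity? Here is a minimal sketch of that test; the regions and dates are invented placeholders, not Seshat data.

```python
# Hypothetical first-appearance dates (negative values = years BCE), for illustration only.
regions = {
    "Region A": {"complexity_jump": -2500, "moralizing_gods": -2000},
    "Region B": {"complexity_jump": -1200, "moralizing_gods": -800},
    "Region C": {"complexity_jump": -300, "moralizing_gods": -400},
}

# Count regions where moralizing gods appear only AFTER the complexity jump.
gods_followed = sum(
    1 for dates in regions.values()
    if dates["moralizing_gods"] > dates["complexity_jump"]
)
print(f"Moralizing gods followed complexity in {gods_followed} of {len(regions)} regions")
```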

“To our surprise, our data strongly contradict this [big god] hypothesis,” says lead author Harvey Whitehouse. “In almost every world region for which we have data, moralizing gods tended to follow, not precede, increases in social complexity.”

“Our results suggest that collective identities are more important to facilitate cooperation in societies than religious beliefs.”

The complexity of a society can be estimated from social characteristics such as population, territory, and the sophistication of its institutions and information systems, the team explains. Religious data used in the study included the presence of beliefs in supernatural enforcement of reciprocity, fairness, and loyalty, as well as the frequency and standardization of religious rituals.

Big gods may not have spearheaded communities, the team explains, but ritual and religion definitely had a large part to play. Standardized rituals tended to appear, on average, hundreds of years before the earliest evidence of moralizing gods, they report. Where such deities would have been the proverbial stick, these rituals acted like the carrot — they gave people a sense of belonging and group identity that allowed cooperation.

Picture of a half-animal half-human in a Paleolithic cave painting in Dordogne, France. Paleoanthropologists Andre Leroi-Gourhan and Annette Michelson take the depiction of such hybrid figures as evidence for early shamanic practices during the Paleolithic.
Image and caption credits José-Manuel Benito / Wikipedia.

The Seshat database proved invaluable in this study. It was founded in 2011 by data and social scientist Peter Turchin, together with Harvey Whitehouse and Pieter François from the University of Oxford (all of them authors of the present study). It was designed to integrate expertise from various fields into an open-access database, specifically to allow researchers to tease out cause from effect in social and historical theories, they say. Through the work of dozens of researchers the world over — who compiled data on social complexity and religious beliefs and practices from polities (communities) from 9600 BCE up to today — Seshat grew into the first databank of standardized, quantitative historical knowledge in the world.

“Seshat is an unprecedented collaboration between anthropologists, historians, archaeologists, mathematicians, computer scientists, and evolutionary scientists”, says Patrick Savage, corresponding author of the article. “It shows how big data can revolutionize the study of human history.”

“[It] allows researchers to analyze hundreds of variables relating to social complexity, religion, warfare, agriculture and other features of human culture and society that vary over time and space,” explains Pieter François. “Now that the database is ready for analysis, we are poised to test a long list of theories about human history.”

One of the biggest questions the team wants to tackle (of which the present paper is the first step) is why we came to work together in societies numbering in the millions despite lacking any genetic incentive to do so.

The paper “Complex societies precede moralizing gods throughout world history” has been published in the journal Nature.

A look at how the world invented pizza

Thin, inviting, and delicious, pizza has a unique place in many people’s hearts (and bellies). Pizza today is considered the quintessential Italian dish, but many other cultures around the world have also created pizza-like dishes. So grab a slice and let’s take a look at the history of pizza.

Pizza Slice.

Image via Pixabay.

There’s some debate as to where the term “pizza” comes from. One of the prevailing theories, however, is that it comes from the Latin pitta, a type of flatbread. And, to the best of our knowledge, that is exactly how pizza started out: flatbread with extra toppings meant to give it flavor.

Flavor up!

But this idea didn’t originate in Italy. Or, more to the point, it didn’t only originate in Italy.

The fact is that ancient peoples loved bread. For many reasons. Grain kept relatively well in a world bereft of refrigerators, and bread is one of the more enjoyable ways to eat it. It was also among the cheaper foodstuffs, generally, as grain is easy to produce, ship, and process in large quantities. Finally, bread is also quite dense in protein, carbohydrates, fiber, and calories — especially whole-grain bread, which our ancestors ate. Bread doesn’t particularly shine in the taste department, however. Sure, it’s easy to carry and it will get you full, but it’s not very exciting on the palate.

This is perhaps why, as Genevieve Thiers writes in the History of Pizza, soldiers of the Persian King Darius I “baked a kind of bread flat upon their shields and then covered it with cheese and dates” as early as the 6th century B.C. The Greeks (they used to fight the Persians a lot) seem to have later adopted and adapted this dish for their own tables.

Naan bread, apart from being delicious, can be seen as a far-flung relative of pizza.
Image credits Jason Goh.

It was pretty common for ancient Greeks to mix olive oil, cheese, and various herbs into their bread — again, all in the name of flavor. But it seems that contact with Persian soldiers added a twist or two to the tradition, according to Thiers, and Greece started baking “round, flat” bread with a variety of toppings such as meats, fruits, and vegetables.

One interesting bit of evidence of this culinary development comes from the Aeneid, an epic poem written around 30 to 20 B.C. In the work, Aeneas and his men (who were running away from Greek-obliterated Troy) receive a prophecy/curse from Celaeno (queen of the harpies). Celaeno told him that his group would “have reached [their] promised land” when they “arrive at a place so tired and hungry that [they] eat [their] tables”. When the party came ashore on mainland Italy, they gathered some “fruits of the field” and placed them on top of the only food they had left — stale round loaves of bread.

The use of hardened bread or crusts of bread in lieu of bowls was quite common in antiquity and the Middle Ages. So the group’s actions can be seen as them putting the food — the fruits of the field — on a plate, or a table, rather than using the bread as a topping. Still, famished, the adventurers quickly ate the plants, and then moved on to the ‘plates’ of bread. Aeneas’ son, Ascanius, then remarks that the group has “even eaten the tables” (“etiam mensas consumimus!”, Aeneid Book VII), fulfilling the prophecy.

Painting by Pompeo Batoni, “Aeneas fleeing from Troy”, 1753. He’s carrying his father, Anchises. Also shown are his first wife, Creusa, and their child, Ascanius.
Image credits Galleria Sabauda.

Italian cuisine

The ‘pizzas’ we’ve talked about up to now are far from unique. Cultures around the world have developed their own brand of goodie-laden bread. Flatbreads, naan, and plakountas are all early preparations that could be considered cousins to the modern pizza, and they sprang up from ancient Greece to India, from Persia to Egypt. However, it would be kind of a stretch to call them pizza; they’re certainly not what you’d expect to see inside a pizza box today.

One Greek settlement would become the birthplace of pizza as we know it: Naples. The city was founded by Greek colonists in the shadow of Vesuvius around 600 B.C. Writing in Pizza: A Global History, Carol Helstosky explains that by the 1700s and early 1800s, Naples was a thriving waterfront city — and, technically at least, an independent kingdom.

Painted lithography showing a group of lazzaroni. Author: Silvestro Bossi.
Image in the public domain, via Wikimedia.

The city was famous for its many lazzaroni, or working poor. They needed inexpensive food that could be consumed quickly, for the lazzaroni had neither the time nor the money to invest in their meals. Many street vendors and other informal “restaurants” catered to their needs, primarily offering flatbreads with various toppings (as per the area’s Greek heritage). By this time, Naples’ flatbreads featured all the hallmarks of today’s pizzas: tomatoes (which were brought over from the Americas), cheese, oil, anchovies, and garlic.

Still, the dish wasn’t enjoying widespread appeal or recognition at this time. Pizza was considered a poor man’s dish, partly due to the lazzaroni, partly due to the fact that tomatoes were considered poisonous at the time. Wealthy people, you see, used to dine off pewter (a lead alloy) plates. Tomatoes, being somewhat acidic, would leach lead out of the plates and into the food — which would eventually kill these wealthy people. The tomatoes were blamed, and that made them cheap. The lazzaroni were poor and hungry, so the tomato was right up their alley. Luckily for them, pewter plates were expensive, so the lazzaroni weren’t the ones being poisoned.

“Judgmental Italian authors often called [the lazzaroni’s] eating habits ‘disgusting,'” Helstosky notes.

Pizza got its big break around 1889. After the Kingdom of Italy unified in 1861, King Umberto I and Queen Margherita visited Naples, Thiers writes. It’s not exactly known how, but they ended up being served ‘pies’ made by Raffaele Esposito, often hailed as the father of modern pizza. Legend has it that the royal pair was bored with the French cuisine they were being offered, although Europeans love bad-mouthing their neighbors and especially their neighbors’ foods, so that may not be completely factual.

“He first experimented with adding only cheese to bread, then added sauce underneath it and let the dough take the shape of a large round pie,” Thiers explains.

Esposito is said to have made three of his pies/pizzas. The story goes that the one the Queen favored most showcased the three colors on Italy’s flag — green basil, white mozzarella, and red tomatoes. Whether this was a coincidence or by design, we’ll never know. But you can pick the story you like most. Esposito named his pizza “Margherita” in honor of the Queen, although today it’s more commonly referred to as ‘cheese pizza’.

From there, pizza has only reached greater heights. It established itself as an iconic Italian dish, first in Italy and later within Europe. America’s love of pizza began with Italian immigrants and was later propelled by soldiers who fought — and ate — in Italy during the Second World War.

Today, it’s a staple in both fast-food and fancy restaurants, can be bought frozen, or can be prepared at home (it’s quite good fun with the right mates). I think it’s fair to say that although Persia’s soldiers couldn’t conquer the world, their food certainly did.

Globalization is an ancient practice, new research reveals

Globalization isn’t a new phenomenon — far from it, new research reveals.

A vintage, hand-drawn map.
Image via Pixabay.

An international research team reports that ancient civilizations engaged in globalization to a much greater extent than previously assumed. Viewed in this light, the level of international integration we see in today’s economies isn’t unique; it’s the norm.

I consume energy therefore I exist

“In this work, we present evidence that the attributes of human populations, at a global scale, display synchrony for the last 10,000 [years],” the paper reads.

The research is the first of its kind, as it didn’t focus on a specific region or culture, but on the broad, long-term evolution of human societies. The team used the energy expenditure levels of these societies as a proxy to judge their development and how closely they were involved with the rest of the world.

It may sound like a strange angle to approach the issue from, but energy expenditure is actually quite a reliable indicator of a society’s development. Energy is one of the main drivers of a society — or, perhaps more accurately, a society’s ability to generate and harness energy is the main factor limiting its development.

To drive that point home, imagine two cities. The inhabitants of the first one only know how to harness muscle energy (i.e. that generated by their bodies or those of other animals from food) to perform work. Those living in the other city know about electricity, can build engines, the whole shebang. Needless to say, City number 2 will be able to address its own needs or to expand much more easily than its primitive counterpart, because it has the means to generate energy and apply it to change its environment.

So, for the study, the team assumed that greater energy consumption suggested a society was booming with population, political, and economic activity. Energy consumption was estimated — starting from historical records and further propped up by radiocarbon dating — for a period of history ranging from 10,000 to 400 years ago. Some of the areas included in the study were the western United States, the British Isles, Australia, and northern Chile.

Radiocarbon dating was used on preserved organic items such as seeds, animal bones, and burned wood from ancient trash deposits at these sites. The method let the team track each society’s waste output over time, as radiocarbon dating is very good at establishing the age of organic matter — this was the team’s main source of energy consumption estimates up to the 1880s, when official records become available and reliable.
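As a rough sketch of the idea (not the team’s actual pipeline), the counts of dated refuse become a per-period activity proxy for each region, and ‘synchrony’ is simply a matter of whether the resulting curves rise and fall together. All numbers below are invented.

```python
import numpy as np

# Hypothetical per-century counts of radiocarbon-dated refuse for two regions,
# used as a crude proxy for energy consumption (more dated trash ~ more activity).
region_a = np.array([3, 5, 9, 14, 12, 7, 4, 6, 11, 15])
region_b = np.array([2, 4, 8, 13, 11, 8, 5, 5, 10, 16])

# Synchrony, in the loosest sense: do the two proxy curves move together?
r = np.corrcoef(region_a, region_b)[0, 1]
print(f"Correlation between the regional proxies: {r:.2f}")
```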

All in this together

Matryoshka dolls.

Image credits Ricardo Liberato / Flickr.

The first surprising find here was that societies often boomed or collapsed simultaneously, a process known as synchrony, the team writes. Synchrony is indicative of interconnected groups — on the scale employed by the team, such groups would be whole societies and nations — of people who trade, migrate, and even fight with one another.

“If every culture was unique, you would expect to see no synchrony, or harmony, across human records of energy consumption,” said lead author Jacob Freeman, an assistant professor of archaeology at Utah State University.

“The causes likely include the process of societies becoming more interconnected via trade, migration, and disease flows at smaller scales and common trajectories of cultural evolution toward more complex and energy-consuming political economies at larger scales,” the paper explains.

This tidbit suggests that early globalization may have been a strategy for societies to keep growing even after exceeding their carrying capacity, the team explains. Overall, the findings point to ancient societies creating connections and becoming interdependent — a trend we refer to as globalization — even millennia ago.

By looking at so vast a stretch of human history, the team could also notice patterns associated with the rise and fall of different groups and cultures. Building closer ties to other societies benefits everyone, they write, but there are also pitfalls: “The more tightly connected and interdependent we become, the more vulnerable we are to a major social or ecological crisis in another country spreading to our country,” adds Erick Robinson, paper co-author and a postdoctoral assistant research scientist in the Department of Anthropology at the University of Wyoming. This “all eggs in one basket” approach, he explains, makes societies less adaptive to unforeseen changes.

“The financial crisis of 2007 to 2008 is a good recent example,” Robinson adds.

According to them, we shouldn’t consider a society’s collapse as a failure, however — it seems to be an intrinsic part of civilization. Still, they hope that by looking back at how our forefathers handled such events, we may very well avoid them in the future.

“Importantly, these causes of synchrony operate at different time scales [which] may lead to path dependencies that make major reorganizations a common dynamic of human societies,” the paper reads.

“Our data stop at 400 years ago, and there has been a huge change from organic economies to fossil fuel economies,” says co-author Jacopo A. Baggio, an assistant professor in the University of Central Florida’s political science department.

“However, similar synchronization trends continue today even more given the interdependencies of our societies. [Societal] resilience is intrinsically dynamic. So, it becomes very hard to understand resilience in a short time span. Here we have the opportunity to look at these longer trends and really see how society has reacted and adapted and what were the booms and busts of these societies. Hopefully this can teach some lessons to be learned for modern day society.”

The paper “Synchronization of energy consumption by human societies throughout the Holocene” has been published in the journal PNAS.

Ancient pottery portrays perilous path for agriculture under climate change

Ancient communities said ‘nay’ to beef and ‘yay’ to mutton and chevon when faced with shifting climates.

Navajo American Indian Pottery. Not related to this study — but pretty!
Image via Pixabay.

We’re not the first generation to struggle with climate change. While our current predicament is of our own making, ancient communities also had to struggle with natural climate shifts. New research explores how farmers 8,200 years ago adapted to such changes.

Food for dry days

“Changes in precipitation patterns in the past are traditionally obtained using ocean or lake sediment cores,” says Dr. Mélanie Roffet-Salque, lead author of the paper. “This is the first time that such information is derived from cooking pots.”

“We have used the signal carried by the hydrogen atoms from the animal fats trapped in the pottery vessels after cooking. This opens up a completely new avenue of investigation – the reconstruction of past climate at the very location where people lived using pottery.”

The study centers on the Neolithic (late stone age) and Chalcolithic (copper age) city of Çatalhöyük in southern Anatolia, Turkey. Çatalhöyük was one of the first cities (if not the first city) to pop up, being settled from approximately 7500 BC to 5700 BC.

Some 8,200 years ago, an event would force these ancient city folk to change their lifestyle. A lake in northern Canada spewed huge quantities of glacial runoff into the ocean, which impacted global water currents, leading to a sudden drop in average temperatures. Hoping to get a better understanding of how such changes impacted the lives of people living during the time, a team led by Dr. Roffet-Salque from the University of Bristol looked at what these people ate.

Animal bones excavated at the site revealed that the city’s inhabitants tried their hand at rearing sheep and goats instead of cattle, as these smaller animals are more resistant to drought. The bones also show an unusually high number of cut marks. The team reports that this is a sign of people trying to free every last scrap of meat from the bones — suggesting that food was likely scarce.

This food scarcity was brought on by changes in precipitation patterns in the Anatolian region during this time, the team reports.

Food for… climate research?

The people of Çatalhöyük didn’t leave any written records we could check — but they did have clay pots used for preparing food. The analysis first revealed the presence of ruminant fats on the pots, which were consistent with and reinforced the hypothesis that herders in Çatalhöyük began favoring sheep and goats in their flocks.

This is the first time animal fat residues recovered in an archaeological setting have been used to gauge past climate. The team analyzed the isotopic ratio of hydrogen atoms (the deuterium-to-hydrogen ratio) in the ruminant carcass fats preserved in the cooking pots — fats consistent with the animal bone assemblage discovered at Çatalhöyük. Since animals incorporate hydrogen from their food and drinking water, shifts in local precipitation leave a mark in their tissues, and the team indeed found a change in this isotope ratio over the period corresponding to the climate event. It is the first time compounds from animal fats detected in pottery have been shown to carry evidence of a climate event in their isotopic composition.
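For readers wondering what ‘isotopic ratio’ means in practice: hydrogen isotope measurements are usually reported in delta notation, the deviation of a sample’s deuterium-to-hydrogen ratio from the VSMOW reference standard, expressed in parts per thousand. A minimal sketch, with a made-up sample value:

```python
VSMOW_D_H = 155.76e-6  # deuterium-to-hydrogen ratio of the VSMOW reference standard

def delta_d(sample_d_h):
    """Return the deltaD value, in per mil, for a measured D/H ratio."""
    return (sample_d_h / VSMOW_D_H - 1) * 1000

# A hypothetical fat-residue measurement, for illustration only:
print(f"deltaD = {delta_d(1.40e-4):.1f} per mil")  # roughly -101 per mil
```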

“It is really significant that the climate models of the event are in complete agreement with the H signals we see in the animal fats preserved in the pots,” says co-author Richard Evershed.

“The models point to seasonal changes farmers would have had to adapt to — overall colder temperatures and drier summers — which would have had inevitable impacts on agriculture.”

The findings are important given our own climatic complications. We didn’t really know the implications of this event — known as the 8.2 ka event — or those of a similar but smaller one called the 9.2 ka event. They’re encouraging in the sense that the effects weren’t as dramatic as they could have been. There is “no evidence for a simultaneous and widespread collapse, large-scale site abandonment, or migration at the time of the events,” which was a real possibility given that early populations were at the mercy of the environment.

However, the study shows that climate change does indeed come with impacts on the food supply. Society today is much better equipped to mitigate the effects of precipitation changes on crops, and our food networks span the globe. Even so, we’re still dependent on the environment, and there are many more mouths to feed today. In this light, the findings are a warning that we should look to our crops, lest plates go empty in the near future.

The paper “Evidence for the impact of the 8.2-kyBP climate event on Near Eastern early farmers” has been published in the journal Proceedings of the National Academy of Sciences.

Supermassive black holes eventually stop star formation

Researchers analyzed the correlation between the mass of a galaxy’s supermassive black hole and that galaxy’s star formation history. They found that the bigger the black hole is, the harder it is for the galaxy to generate new stars.

Scientists have been debating this theory for a while, but until now, they lacked enough observational data to prove or disprove it.

Via Pixabay/12019

Researchers from the University of California, Santa Cruz used data from previous studies measuring supermassive black hole masses. They then used spectroscopy to determine how stars formed in galaxies hosting such gargantuan black holes, and correlated the two.

Spectroscopy is a technique that relies on splitting the light coming from objects — stars, in this case — into its component wavelengths. The paper’s lead author, Ignacio Martín-Navarro, used computational analysis to determine how the black holes affected star formation — in a way, he tried to solve a light puzzle.

“It tells you how much light is coming from stellar populations of different ages,” he said in a press release.
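In spirit, that light puzzle is a fitting problem: the observed spectrum is modeled as a weighted mix of stellar-population templates of different ages, and the recovered weights tell you when the stars formed. Here’s a toy sketch of the idea, with invented templates rather than anything from the actual study:

```python
import numpy as np

# Two invented spectral templates, sampled at five toy wavelengths:
# a blue-heavy 'young' population and a red-heavy 'old' one.
young = np.array([1.0, 0.8, 0.6, 0.5, 0.4])
old = np.array([0.3, 0.4, 0.6, 0.8, 1.0])

observed = 0.3 * young + 0.7 * old  # pretend this spectrum came from a galaxy

# Recover the mixing weights by least squares -- the 'solution' to the light puzzle.
templates = np.column_stack([young, old])
weights, *_ = np.linalg.lstsq(templates, observed, rcond=None)
print(f"young fraction: {weights[0]:.2f}, old fraction: {weights[1]:.2f}")
```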

Via Pixabay / imonedesign.

Next, the research team plotted the sizes of the supermassive black holes against the star formation histories of their galaxies. They found that as the black holes grew bigger, star formation slowed down significantly. Other characteristics of the galaxies, such as shape or size, showed no comparable link.

“For galaxies with the same mass of stars but different black hole mass in the center, those galaxies with bigger black holes were quenched earlier and faster than those with smaller black holes. So star formation lasted longer in those galaxies with smaller central black holes,” Martín-Navarro said.

Star-forming gas in the Carina Nebula.
Image via Pixabay/skeeze.

Scientists are still trying to determine why this happens. One theory suggests that a lack of cold gas is the main culprit behind reduced star formation. The supermassive black holes suck in the nearby gas, creating high-energy jets in the process. These jets ultimately expel cold gas from the galaxy. Without enough cold gas, there is no new star formation, so the galaxy becomes practically sterile.

In the press release, co-author Aaron Romanowsky concluded:

“There are different ways a black hole can put energy out into the galaxy, and theorists have all kinds of ideas about how quenching happens, but there’s more work to be done to fit these new observations into the models.”

The paper was published in Nature on the 1st of January 2018.

Book Review: ‘Foragers, Farmers, and Fossil Fuels: How Human Values Evolve’

“Foragers, Farmers, and Fossil Fuels: How Human Values Evolve”
By Ian Morris
Princeton University Press, 400pp | Buy on Amazon

What we consider as ‘right’ or ‘just’ isn’t set in stone — far from it. In Foragers, Farmers, and Fossil Fuels, Stanford University’s Willard Professor of Classics Ian Morris weaves together several strands of science, most notably history, anthropology, archeology, and biology, to show how our values change to meet a single overriding human need: energy.

Do you think your boss should be considered better than you in the eyes of the law? Is it ok to stab someone over an insult? Or for your country’s military to shell some other country back to the stone age just because they’re ‘the enemy’? Do leaders get their mandate from the people, from god, or is power something to be taken by force? Is it ok to own people? Should women tend to home and family only, or can they pick their own way in life?

Your answers and the answers of someone living in the Stone Age, the Dark Ages, or even somebody from a Mad-Men-esque 1960s USA wouldn’t look the same. In fact, your answers and the answers of someone else living today, in a different place, likely won’t be the same.

Values derive from culture

They’ll be different because a lot of disparate factors weigh in on how we think about these issues. For simplicity’s sake, we’ll bundle all of them up under the umbrella-term of ‘culture’, taken to mean “the ideas, customs, and social behavior of a particular people or society.” I know what you’ll answer in broad lines because I can take a look at Google Analytics and see that most of you come from developed, industrialized countries which (for the most part) are quite secular and have solid education systems. That makes most of you quite WEIRD — western, educated, industrialized, rich, and democratic.

As we’re all so very weird, our cultures tend to differ a bit on the surface (we speak different languages and each have our own national dessert, for example). The really deep stuff, however — the frameworks around which our cultures revolve — tends to align pretty well (we see equality as good and violence as bad, to name a few). In other words, we’re a bit different, but we all share a core of identical values. Kind of like Christmas time, when everybody has very similar trees but decorates them differently, WEIRD cultures are variations on the same pattern.

It’s not the only pattern out there by any means, but it’s one of the (surprisingly) few that seem to work. Drawing on his own experience of culture shock working as an anthropologist and archaeologist in non-WEIRD countries, Professor Morris mixes in a bird’s eye view of history with biology and helpings from other fields of science to show how the dominant source of energy a society draws on forces them to clump into one of three cultural patterns — hunter-gatherers, farmers (which he names Agraria), and fossil-fuel users (Industria).

Energy dictates culture

In broad lines, Morris looks at culture as a society’s way of adapting to its sources of energy capture. The better adapted a society becomes, the bigger the slice of available energy it can extract, and the better equipped it will be to displace other cultures — be they on the same developmental level or not. This process can have ramifications for seemingly unrelated parts of our lives.

To get an idea of how Morris attacks the issue, let’s take a very narrow look at Chapter 2, where he talks about prehistoric and current hunter-gatherer cultural patterns. Morris shows how they “share a striking set of egalitarian values,” and overall “take an extremely negative view of political and economic hierarchy, but accept fairly mild forms of gender hierarchy and recognize that there is a time and place for violence.”

This cultural pattern stems from a society which extracts energy from its surroundings without exercising any “deliberate alterations of the gene pool of harvested resources.” Since everything was harvested from the wild and there was no way to store it, there was a general expectation to share food with the group. Certain manufactured goods did have an owner, but because people had to move around to survive, accumulating wealth beyond trinkets or tools to pass on was basically impossible, and organized government was impractical. Finally, gender roles only went as far as biological constraints — men were better built for hunting, so they were the ones that hunted, for example. But the work of a male hunter and that of a female gatherer were equally important to ensuring a family’s or group’s caloric needs were met — as such, society had equal expectations and provided almost the same freedoms and rights for everyone, regardless of sex. There was one area, however, where foragers weren’t so egalitarian:

“Abused wives regularly walk away from their husbands without much fuss or criticism [in foraging societies],” Morris writes, something which would be unthinkable in the coming Agraria.

“Forager equalitarianism partially breaks down, though, when it comes to gender hierarchy. Social scientists continue to argue why men normally hold the upper hand in forager societies. After all, […] biology seems to have dealt women better cards. Sperm are abundant […] and therefore cheap, while eggs are scarce […] and therefore expensive. Women ought to be able to demand all kinds of services from men in return for access to their eggs,” Morris explains in another paragraph. “To some extent, this does happen,” he adds, noting that male foragers participate “substantially more in childrearing than […] our closest genetic neighbours.”

But political or economic authority is something they can almost never demand from the males. This, Morris writes, is because “semen is not the only thing male foragers are selling.”

“Because [males] are also the main providers of violence, women need to bargain for protection; because men are the main hunters, women need to bargain for meat; and because hunting often trains men to cooperate and trust one another, individual women often find themselves negotiating with cartels of men,” he explains.

This is only a sliver of a chapter. You can expect this sort of in-depth commentary on how energy capture dictates the shape of societies across the span of time throughout the 400-page book. I don’t want to spoil the rest of it, since it really is an enjoyable read, so I’ll just give you the immensely-summed-up version:

Farmers / Agraria exercise some genetic modification of other species (domestication), tolerate huge political, economic, and gender hierarchies, and are somewhat tolerant of violence (but less so than foragers). Fossil-fuelers / Industria were made possible by an “energy bonanza”; they are very intolerant of political hierarchies, gender hierarchy, and violence, but somewhat tolerant of economic hierarchies (less so than Agrarians).

These sets of values ‘stuck’ because they maximised societies’ ability to harvest energy at each developmental level. Societies which could draw on more energy could impose themselves on others (through technology, culture, economy, warfare), eventually displacing them or making these other societies adopt the same values in an effort to compete.

Should I read it?

Definitely. Morris’ is a very Darwinian take on culture, and he links this underlying principle with cultural forms in a very pleasant style that hits the delicate balance of staying comprehensive without being boring, accessible without feeling dumbed down.

The theory is not without its shortcomings, and the book even has four chapters devoted to very smart people (University of Exeter professor emeritus of classics and ancient history Richard Seaford, former Sterling Professor of History at Yale University Jonathan D. Spence, Harvard University Professor of Philosophy Christine Korsgaard, and The Handmaid’s Tale’s own Margaret Atwood) slicing the theory apart and bashing it about for all its flaws. Which I very much appreciate since, in Morris’ own words, debates “raise all kinds of questions that I would not have thought of by myself.” Questions which the author does not leave unanswered.

All in all, it’s a book I couldn’t more warmly recommend. I’ve been putting off this review for weeks now, simply because I liked it so much, I wanted to make sure I do it some tiny bit of justice. It’s the product of a lifetime’s personal experience, mixed with a vast body of research, then distilled through the hand of a gifted wordsmith. It’s a book that will help you understand how values — and with them, the world we know today — came to be, and how they evolved through time. It’ll give you a new pair of (not always rose-tinted) glasses through which to view human cultures, whether you’re in your home neighborhood or vacationing halfway across the world.

But most of all, Foragers, Farmers, and Fossil Fuels will show you that, apart from a few biologically “hardwired” ones, it’s the daily churn of society, not some ultimate authority or moral compass, that dictates our values. That’s a very liberating realization. It means we’re free to decide for ourselves which values are important, which are not, and what we should strive for to change our society for the better. Especially now that new sources of energy are knocking at our door.

Watch the (2nd) biggest book in the world get digitized, all thanks to the British Library

This mammoth of an atlas is so big that you need two people to flip the pages. It’s so heavy you need even more people to move it around. And now, almost three and a half centuries after its creation, the Klencke Atlas has been fully digitized.

The Klencke Atlas on display at the British Library.
Image credits British Library.

The Klencke Atlas is instantly recognizable, and for good reason — this atlas is the crown jewel of the British Library’s cartographic collection and dwarfs lesser tomes, towering an incredible 1.75 meters in height (roughly 5.7 ft) and stretching 1.9 meters wide when opened (about 6.2 ft). From its creation in 1660, the atlas was the biggest book on the planet (likely in the whole Solar System) all the way up to 2012, when Millennium House’s gigantic publication Earth Platinum claimed the title with a 0.5-meter advantage.

Creation of the book is attributed to the Dutch prince John Maurice of Nassau, but it’s named after Johannes Klencke, who presented it to King Charles II of England in 1660 to celebrate his restoration to the throne. At least, that’s the official reason — rumor has it that the Dutch delegation, mostly made up of sugar merchants, aimed to use the Atlas to secure a favorable trade deal with England, as Charles was a known map enthusiast.

Like maps, do you? Gonna love this.
Image credits British Library.

And boy oh boy, a kingly present it was indeed. The Klencke Atlas isn’t an atlas in the strictest sense of the word, since it wasn’t intended to be read and enjoyed like a regular book — the size alone made that a very challenging, rather infuriating task. Rather, it is a collection of maps meant to be removed from the spine and displayed on walls. It contains 37 maps which held the sum of European geographical knowledge at the time — Britain and other European states, Brazil, South Asia, and the Holy Land — transposed onto 39 beautifully executed, detailed, engraved sheets. The sheer size and complexity would send a clear message to anyone who saw it: the king knew and ‘owned’ the geography of the world.

Luckily for us, the king liked it so much that he kept it among his most prized possessions in the ‘Cabinet and Closset of rarities’ in Whitehall. There it was kept safe and well cared for until 1828, when King George IV gave the hefty atlas to the British Museum as part of a larger donation of maps and atlases. It was re-bound and extensively restored in the 1950s, and is currently held by the library’s Antiquarian Mapping division, keeping watch over the entrance lobby of the maps reading room.

Since it is so old and so evidently unique, the Klencke Atlas has usually been left to rest out of the spotlight. The only time the public could see it with its pages opened since its creation 350 years ago was in April 2010, at an exhibition organised by the British Library. But such a book shouldn’t be kept hidden — and yet, to keep it from being damaged, it must be kept very safe. What to do?

Klencke Atlas Europe.

Image credits British Library.

Well, one solution is to copy it. Just last month, the British Library teamed up with Daniel Crouch Rare Books to digitize the whole book. The process took several days and several people: transporting the book, mounting it onto an XXL stand for the shots, flipping the mammoth pages, and photographing each page so that every map was fully recorded.

The online version can be viewed on the British Library’s website. They also put together this cool time-lapse video so you can see how the whole process went. Enjoy!

Earliest Buddhist shrine uncovered right at the birthplace of Buddha

At one of the sites where it’s possible that Buddha was born, archaeologists have identified the remains of an ancient shrine – a timber structure which used to encircle a tree – right at the heart of the present-day Maya Devi Temple in Lumbini, Nepal. Carbon dating reveals this ancient structure is at least 2,500 years old, making it the earliest Buddhist shrine uncovered thus far, while also lending credence to the legends and timeless fables that speak of Buddha’s birth and early life.

Like with all ancient prophets and holy men, the exact historical context of Buddha is subject to debate. Most scholars believe “the enlightened one” was brought into this world some time between 390-340 BC, while the earliest evidence of Buddhist structures at Lumbini, Buddha’s supposed birthplace, has been dated no earlier than the 3rd century BC, during the rule of Emperor Ashoka, who enshrined Buddha’s cremated remains in 84,000 stupas.

At the heart of the Lumbini temple, however, archaeologists have unearthed an ancient wooden structure that may alter current theories about Buddha’s life and birth. Inside the wooden structure, they identified a void with no roof above it and signs of ancient tree roots, suggesting this was a tree shrine. Later on, brick temples were built around this tree shrine.

“Now, for the first time, we have an archaeological sequence at Lumbini that shows a building there as early as the 6th century BC,” said archaeologist Prof Robin Coningham of Durham University, who co-led the international team, supported by the National Geographic Society.

Pilgrims and monks meditating while archaeologists excavated the site at Lumbini.

Before becoming Buddha, which means “awakened one” in Sanskrit, there was Siddhārtha Gautama. Gautama was the son of Śuddhodana, “an elected chief of the Shakya clan”, and Queen Maha Maya (Māyādevī). Legend has it that Gautama was born while his mother was travelling, at Lumbini, in a garden beneath a sal tree. Could this shrine have once housed that very tree?

“This is the earliest evidence of a Buddhist shrine anywhere in the world.

“It sheds light on a very long debate, which has led to differences in teachings and traditions of Buddhism.

“The narrative of Lumbini’s establishment as a pilgrimage site under Ashokan patronage must be modified since it is clear that the site had already undergone embellishment for centuries.”

via the BBC

The Wright brothers’ 1911 glider in mid-flight.

Rare and amazing photos of the Wright brothers and their historic flights [GALLERY]

Orville and Wilbur Wright are credited as the first men to build an aircraft capable of manned, controlled flight. The first manned airplane flight (powered, controlled, and heavier than air) occurred on December 17, 1903, when Orville flew 120 feet (37 m) over the ground in 12 seconds, at a speed of only 6.8 miles per hour (10.9 km/h). Introductions are rather unnecessary, though. For more on how the Wright brothers started their work and an informative historical timeline of their achievements, I’d recommend you read this Wikipedia entry.
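Those figures check out with a bit of quick arithmetic: 120 feet covered in 12 seconds works out to roughly 6.8 mph.

```python
# Average speed of the first flight: 120 feet covered in 12 seconds.
feet, seconds = 120, 12
mph = (feet / seconds) * 3600 / 5280  # feet per second -> miles per hour
kmh = mph * 1.609344
print(f"{mph:.1f} mph ({kmh:.1f} km/h)")  # roughly 6.8 mph (~11 km/h)
```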

The Wright brothers worked fundamentally differently from the other manned flight pioneers of their time. While others concentrated on fitting stronger engines and running more tests, Orville and Wilbur preferred to tackle aerodynamics instead. The brothers built their own wind tunnel and carried out extensive aerodynamic tests. This eventually led to the advent of the three-axis control system: wing-warping for roll (lateral motion), a forward elevator for pitch (up and down), and a rear rudder for yaw (side to side). This was indispensable for giving the pilot control, and with it both better flight performance and fewer of the accidents that were so frequent at the time.

Some scholars argue that the 1902 glider was the most revolutionary aircraft ever created and the real embodiment of the genius of Orville and Wilbur Wright. Although the addition of a power plant to their 1903 Flyer resulted in their famous first flight, they regard that improvement as merely a noteworthy addition to something that was truly a work of genius: the 1902 glider.

For your consideration, we’ve curated some of the most amazing photographs featuring the Wright brothers and their creations – various historic flights like the very first take-off at Kitty Hawk, the gliders (including the 1902 and 1903 versions), mid-air shots, and other fantastic vintage relics that tell of a time, just a century ago, when people daring to fly were labeled as mad.

Empires, institutions and religion arise from war

Peter Turchin, a population dynamicist at the University of Connecticut in Storrs, and his colleagues finished a study which concluded that war drove the formation of complex social institutions such as religions and bureaucracies. The study showed that these institutions helped give much needed stability to large and ethnically diverse early societies.

“Our model says they spread because they helped societies compete against each other,” says Turchin. The results are published in the Proceedings of the National Academy of Sciences.

The team analyzed the areas of the world where the competition was fiercest: Africa and Eurasia between 1500 BC and AD 1500. In the first millennium BC, nomads on the Eurasian steppe stepped things up by inventing mounted archery, the most effective projectile weapon technique until gunpowder; this technology, alongside other horse-based innovations such as chariot and cavalry warfare, proved instrumental, leading in turn to increased warfare.

After analyzing developments such as this one, they devised a model which divided Africa and Eurasia into a grid of cells 100 kilometres on a side. Each cell was characterized by its kind of landscape, its elevation above sea level, and whether or not it had agriculture – because agriculture was instrumental to early societies. When the simulation started, each agricultural cell was inhabited by an independent state, and states on the border between the agrarian societies and the steppe were seeded with military technology. The team then followed how the military technology diffused and how warfare affected societies.
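To make that setup more concrete, here is a heavily simplified toy sketch of this kind of simulation: a grid of independent states, military technology seeded along one frontier, diffusing to neighbours and tilting the odds of conquest. It illustrates the general mechanism only; it is not Turchin’s actual model, grid, or parameters.

```python
import random

random.seed(42)

# Toy grid: every cell starts as its own independent state. Cells on the
# 'steppe frontier' (here, the top row) start with military technology, which
# diffuses to neighbouring cells and raises the odds of absorbing neighbours.
SIZE = 10
military = [[row == 0 for _ in range(SIZE)] for row in range(SIZE)]
state = [[row * SIZE + col for col in range(SIZE)] for row in range(SIZE)]

def neighbours(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
            yield r + dr, c + dc

for _ in range(2000):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    for nr, nc in neighbours(r, c):
        if military[r][c] and random.random() < 0.2:
            military[nr][nc] = True  # technology diffuses outward
        win_odds = 0.5 if military[r][c] else 0.1
        if random.random() < win_odds:
            state[nr][nc] = state[r][c]  # neighbour absorbed by this state

print("Distinct states remaining:", len({s for row in state for s in row}))
```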

Although this model oversimplifies things and ignores the fact that societies compete with each other in ways far more complex than just warfare, it accurately predicted the formation of 65% of all empires. Moreover, the disintegration of empires led to the dismantling of institutions, which had devastating effects – all predicted by the model.

“When the Roman Empire broke up, literacy effectively went extinct, because the smaller fragment states did not need a literate bureaucracy,” says Turchin.

Turchin is advocating an approach called cliodynamics – after Clio, the ancient Greek muse of history. This approach tests hypotheses against big data, and has been both criticized and praised by historians.

Joe Manning, an ancient historian at Yale University in New Haven, Connecticut, is a fan of cliodynamics, while others point to its potential real-world applications:

“Being able to predict extreme behaviour in much the same way as epidemiologists predict disease outbreaks would enable governments to establish early-warning systems and deploy damage-limitation measures,” says Whitehouse.

Scientific reference: Nature doi:10.1038/nature.2013.13796

First evidence of tobacco consumption in Mayan culture found

The smoking pot: a 1,300-year-old urn used by the ancient Maya to store tobacco. (c) RCMS

Archaeologists have uncovered an ancient urn dated to the Mayan Classic period which, after thorough chemical analysis, was found to contain traces of nicotine. Though Mayan texts and folklore document that tobacco use was a common part of local life, this is the first hard evidence that the Maya actually consumed it. Moreover, the same analysis revealed that the tobacco consumed then was a lot stronger than today’s, almost hallucinogenic.

The 1,300-year-old Mayan flask literally had tobacco written all over it, marked with Mayan hieroglyphs reading “y-otoot ‘u-may,” which translates as “the house of its/his/her tobacco.” A scientist at the Rensselaer Polytechnic Institute and an anthropologist from the University at Albany saw this as an excellent opportunity and teamed up, using high-end chemical analysis to prove tobacco usage in Mayan culture.

Their discovery represents new evidence on the ancient use of tobacco in the Mayan culture and a new method to understand the ancient roots of tobacco use in the Americas.

The urn most probably contained tobacco leaves; however, it is believed that the Maya also ground tobacco into a powder which they used for all sorts of purposes, from therapeutic ones (bug bite treatment) to protection against critters lurking in the jungle (burning tobacco powder is said to have been used as a snake repellent). Of course, the Maya also knew how to party hard: powdered tobacco could be added to drinks for an extra kick, or snorted directly.

‘This was very strong tobacco, much stronger than it is today,’ Jennifer Loughmiller-Newman, an archaeologist at the University of Albany in New York, told MSNBC.

‘Nicotiana rustica was nearly hallucinogenic.’

Dmitri Zagorevski, director of the Proteomics Core in the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer and the leading scientist involved in the study, used technology typically reserved for the study of modern diseases and proteins to analyze the chemical fingerprints of the urn. This involved, among other techniques, gas chromatography mass spectrometry (GCMS) and high-performance liquid chromatography mass spectrometry (LCMS).

‘Our study provides rare evidence of the intended use of an ancient container,’ said Zagorevski.

‘Mass spectrometry has proven to be an invaluable method of analysis of organic residues in archaeological artifacts.

‘This discovery is not only significant to understanding Mayan hieroglyphics, but an important archaeological application of chemical detection.’
