Category Archives: Research

What are ‘iron lungs’, and could this old tech still be useful today?

Although they’re relatively old technology, there is renewed interest in iron lungs today against the backdrop of the coronavirus pandemic.

An iron lung device. Image credits The B’s / Flickr.

Few devices can boast having as terrifying — and cool — a name as the iron lung. These somewhat outdated machines were among the earliest devices designed to help patients breathe. Compared to modern breathing aids, they were huge and quite scary-looking.

Still, iron lungs were a very important development in their time. In the wake of the COVID-19 pandemic, there has also been renewed interest in these devices, as they can be used as an alternative to modern ventilators.

So let’s take a look at exactly what iron lungs are, and how they came to be.

So what are they?

Iron lungs are quite aptly named; unlike most modern ventilators, they function using the same mechanism as our own lungs.

An iron lung is a type of negative pressure ventilator. This means that it creates an area of low pressure, or a partial vacuum, to draw air into a patient’s chest cavity. In broad strokes, this is the exact mechanism our bodies employ, via movements of the diaphragm, to let us breathe.

The concept behind these devices is quite simple. The main component of an iron lung is a chamber, usually a metal tube (hence the ‘iron’ part in its name) that can fit the body of a patient from the neck down. This acts as an enclosed space in which pressure can be modified to help patients breathe. The other main component of the device is mobile and actually changes the pressure inside the tube. Usually, this comes in the form of a rubber diaphragm connected to an electrical motor, although other sources of power have been used, including manual labor.

Patients are placed inside an iron lung, with only their head and part of their neck (from the voice box upwards) left outside the cylinder. A membrane is placed around their neck to ensure that the cylinder is sealed. Afterward, the diaphragm is repeatedly pulled back and pushed in to cycle between low and high pressure inside the chamber. Because the patient’s head and airways are left outside of the cylinder, when the pressure inside it is low, air moves into the patient’s lungs. When the pressure inside the cylinder increases, the air is pushed back out.

The whole process mirrors the way our bodies handle breathing. Our diaphragm contracts and pulls downward, increasing the internal volume of the lungs, which pulls air in from the outside. To breathe out, the diaphragm relaxes and the chest cavity squeezes the lungs, pushing air out. Iron lungs work much the same way, but they expand and contract the lungs, alongside the rest of the chest cavity, from outside the body.

This process is known as negative pressure breathing; low (‘negative’) pressure is generated in the lungs in order to draw in air. Most modern ventilators work via positive pressure: they generate high pressure inside the device to push air into the patient’s lungs.
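To make the pressure cycle a little more concrete, here is a minimal sketch (in Python) of how dropping the chamber pressure below atmospheric pressure draws air in, and raising it pushes air back out. The pressure values, cycle length, and compliance constant are made-up illustrative numbers, not clinical parameters.

```python
import math

COMPLIANCE = 50.0      # illustrative "lung compliance": mL of air moved per cmH2O (assumed)

def chamber_pressure(t, period=4.0, amplitude=15.0):
    """Cyclic pressure inside the iron lung chamber, relative to atmosphere.
    Negative values mean the chamber is below atmospheric pressure."""
    return -amplitude * math.sin(2 * math.pi * t / period)

def air_moved(p_chamber):
    """Rough estimate of air drawn into the lungs. When the chamber pressure drops
    below atmospheric, the chest expands and air flows in through the unsealed airway;
    when it rises above atmospheric, air is pushed back out."""
    return -COMPLIANCE * p_chamber   # negative chamber pressure -> air flows in

for t in [0.0, 1.0, 2.0, 3.0]:
    p = chamber_pressure(t)
    dv = air_moved(p)
    phase = "inhale" if dv > 0 else ("exhale" if dv < 0 else "pause")
    print(f"t={t:.0f}s  chamber={p:+6.1f} cmH2O  air moved={dv:+7.1f} mL  ({phase})")
```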

One advantage of such ventilators is that patients can use them without being sedated or intubated. On the one hand, this reduces the medical supplies each patient requires; on the other, it slashes the risks associated with the use of anesthetics — such as allergic reactions or overdoses — and the risk of mechanical injury following intubation.

Epidemics, pandemics

An opened iron lung device at the Science Museum, London. Image credits Stefan Kühn / Wikimedia.

“The desperate requests for ventilators in today’s treatment of patients in the grasp of the coronavirus brought to mind my encounter with breathing machines in the early 1950s polio epidemic, when I signed up as a volunteer to manually pump iron lungs in case of power failure at Vancouver’s George Pearson Centre,” recounts George Szasz, CM, MD, in a post for the British Columbia Medical Journal.

Iron lungs saw their greatest levels of use in developed countries during the poliomyelitis outbreaks of the 1940s and 1950s. One of the deadliest symptoms of polio is muscle paralysis, which can make it impossible for patients to breathe. The worst cases would see patients requiring ventilation for up to several weeks. Back then, iron lungs were the only available option for mechanical ventilation, and they saved innumerable lives.

As technology progressed, however, iron lungs fell out of use. They were bulky and intimidating machines, hard to transport and store despite their reliability and mechanical simplicity. With more compact ventilators, the advent of widespread intubation, and techniques such as tracheostomies, such devices quickly dwindled in number and use. From an estimated peak of around 1,200 iron lung devices in the U.S. during the ’40s and ’50s, fewer than 30 are estimated to still be in use today.

There are obvious parallels between those polio epidemics of old and today’s COVID-19 pandemic in regards to the need for ventilation. Because of this, machines such as the iron lung have been suggested as a possible treatment option for COVID-19 patients. Such devices can help in many cases, but not in all.

In cases of severe COVID-19 infections, the tissues of the lungs themselves are heavily affected. A buildup of fluid in the lungs can physically prevent air from reaching the alveoli (the structures in the lung where gases are exchanged between the blood and the environment). While iron lungs can perform the motions required to breathe even for patients who are incapable of doing it themselves, they cannot generate enough pressure to push air through the tissues affected by a COVID-19 infection.

“Iron lungs will not work for patients suffering from severe COVID-19 infections,” explains Douglas Gardenhire, a Clinical Associate Professor and Chair of the Department of Respiratory Therapy at Georgia State University (GSU). “Polio interrupted the connection between brain and diaphragm and while some polio patients did have pneumonia, it was not the principal issue. For the most part, the lungs themselves did not have any change in their dynamic characteristics.”

“COVID-19 pneumonia physically changes the composition of the lungs,” adds Robert Murray, a Clinical Assistant Professor at GSU. “The consolidation of fluid in the lungs will not respond with low pressure generated by the iron lung. The lungs of a COVID-19 patient will be a heterogenous mix of normal and consolidated lung tissue making mechanical ventilation very difficult.”

Still an alternative

Although patients with severe COVID-19 infections might not benefit from the iron lung, there are cases in which the device can prove useful. One paper (Chandrasekaran, Shaji, 2021) explains that there still is a need for negative pressure ventilators in modern hospitals, especially for patients who have experienced ventilator-induced lung injuries. The use of negative pressure ventilators, especially in concert with an oxygen helmet, may also play a part in reducing the number of infections by limiting the spread of viruses through contaminated materials in cases where resources are stretched thin, the team adds.

While the concept is being retained, however, the actual devices are getting an upgrade. One example is the device produced by UK charity Exovent, which aims to be a more portable iron lung. Exovent’s end goal is to provide a life-saving device that imposes fewer limits on what activities patients can undertake. A seemingly simple but still dramatic improvement, for example, is that patients can use their hands to touch their faces even while the Exovent device is in operation. Eating or drinking while using the device is also possible.

Exovent’s ventilator was designed before the coronavirus outbreak to help the millions of people suffering from respiratory issues including pneumonia worldwide. However, its designers are confident that, in conjunction with oxygen helmets, it can help patients who are recovering from a coronavirus infection — a process that leaves them with breathing difficulties for months.

All things considered, iron lungs have made a huge difference in the lives of countless patients in the past, and they continue to serve many. Although most of them look archaic today, engineers are working to update and spruce them up for the modern day. And, amid modern ventilators, there still seems to be a role — and a need — for devices such as iron lungs.

New AI approach can spot anomalies in medical images with better accuracy

Researchers have trained a neural network to analyze medical images and detect anomalies. While this won’t replace human analysts anytime soon, it can help physicians sift through countless scans quicker and look for any signs of problems.

Image credits: Shvetsova et al (2021).

If there’s one thing AI is really good at, it’s spotting patterns. Whether it’s written data, audio, or images, AI can be trained to identify patterns — and one particularly interesting application is using it to identify anomalies in medical images. This has already been tested in some fields of medical imagery with promising results.

However, AI can also be notoriously easy to fool, especially with real-life data. In the new study, researchers in the group of Professor Dmitry Dylov at Skoltech presented a new method through which AI can detect anomalies. The method, they say, is better than existing ones and can detect barely visible anomalies.

“Barely visible abnormalities in chest X-rays or metastases in lymph nodes on the scans of the pathology slides resemble normal images and are very difficult to detect. To address this problem, we introduce a new powerful method of image anomaly detection.”

The proposed approach essentially suggests a new baseline for anomaly detection in medical image analysis tasks. It’s good at detecting anomalies that represent medical abnormalities, as well as problems associated with medical equipment.

“An anomaly is anything that does not belong to the dominant class of “normal” data,” Dylov told ZME Science. “If something unusual is present in the field of view of a medical device, the algorithm will spot it. Examples include both imaging artifacts (e.g., dirt on the microscope’s slide) and actual pathological abnormalities in certain areas of the images (e.g., cancerous cells which differ in shape and size from the normal cells). In the clinical setting, there is value in spotting both of these examples.”
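For a rough idea of what anomaly detection means in practice, here is a generic, minimal sketch: it models the “normal” class with PCA and flags images whose reconstruction error is unusually high. This is a standard textbook strategy shown on toy data — not the specific method introduced in the Skoltech paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for flattened medical images: "normal" scans cluster together,
# while anomalous ones deviate only slightly (barely visible abnormalities).
normal_train = rng.normal(0.0, 1.0, size=(500, 64))
test_images = np.vstack([
    rng.normal(0.0, 1.0, size=(5, 64)),           # normal
    rng.normal(0.0, 1.0, size=(5, 64)) + 0.8,     # subtly shifted = anomalous
])

# Fit a low-dimensional model of the "normal" data (PCA), then score each test
# image by how poorly it can be reconstructed from that model.
mean = normal_train.mean(axis=0)
_, _, components = np.linalg.svd(normal_train - mean, full_matrices=False)
top_k = components[:8]                             # keep 8 principal components

def anomaly_score(image):
    """Reconstruction error: large values suggest the image does not belong
    to the dominant 'normal' class."""
    coords = (image - mean) @ top_k.T
    reconstruction = mean + coords @ top_k
    return float(np.linalg.norm(image - reconstruction))

threshold = np.percentile([anomaly_score(x) for x in normal_train], 99)
for i, img in enumerate(test_images):
    s = anomaly_score(img)
    print(f"image {i}: score={s:.2f}  {'ANOMALY' if s > threshold else 'ok'}")
```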

The maximum observed improvement compared to conventional AI training was 10%, Dylov says, and excitingly, the method is already mature enough to be deployed into the real world.

“With our algorithm, medical practitioners can immediately sort out artifactual images from normal ones. They will also receive a recommendation that a certain image or a part of an image looks unlike the rest of the images in the dataset. This is especially valuable when big batches of data are to be reviewed manually by the experts,” Dylov explained in an email.

The main application of this approach is to ease the workload of experts analyzing medical images and help them focus on the most important images rather than manually going through the entire dataset. The more this type of approach is improved, the more AI can help doctors make the most of their time and improve the results of medical imaging analysis.

The study was published in an IEEE (Institute of Electrical and Electronics Engineers) journal.

Flyboard Air from Zapata.

Hoverboards are now real — and the science behind them is dope

What could be the coolest way of going to work you can imagine? Let me help you out. Flying cars — not here yet. Jetpacks — cool, but not enough pizzazz. No, there’s only one correct answer to this question: a hoverboard.

A whole generation of skateboarders and sci-fi enthusiasts (especially Back to the Future fans) have been waiting for a long time to see an actual levitating hoverboard. Well, the wait is over. The future is here. 

Franky Zapata flying on Flyboard Air. Image credits: Zapata/YouTube.

There were rumors in the ’90s that hoverboards had been invented but were kept off the market because powerful parent groups were against the idea of flying skateboards being used by children. Well, there was little truth to those rumors — hoverboards weren’t truly developed until very recently. No longer a fictional piece of technology, levitating boards exist for real, and there is a lot of science working behind them.

A hoverboard is basically a skateboard without tires that can fly above the ground while carrying a person on it. As the name implies, it’s a board that hovers — crazy, I know.

The earliest mention of a hoverboard is found in Michael K. Joseph’s The Hole in the Zero, a sci-fi novel that was published in the year 1967. However, before Michael Joseph, American aeronautical engineer Charles Zimmerman had also come up with the idea of a flying platform that looked like a large hoverboard.

Zimmerman’s concept later became the inspiration for a small experimental aircraft called the Hiller VZ-1 Pawnee. This bizarre levitating platform was developed by Hiller Aircraft for the US military, and it had a successful flight in 1955. However, only six such platforms were built because the army didn’t find them of any use for military operations. Hoverboards were feasible, but they were still too difficult to build with the day’s technology.

Hoverboards were largely forgotten for decades and seemed to fall out of favor. Then came Back to the Future.

A page from the book Back to the Future: The Ultimate Visual History. Image credits: /Film

The hoverboard idea gained huge popularity after the release of Robert Zemeckis’s Back to the Future II in 1989. The film featured a chase sequence in which the lead character Marty McFly is seen flying a pink hoverboard while being followed by a gang of bullies. In the last two decades, many tech companies and experts have attempted to create a flying board that could function like the hoverboard shown in the film.

Funnily enough, Back to the Future II takes place in 2015, and hoverboards were common in the fictional movie. They’re not quite as popular yet, but they’re coming along.

The science behind hoverboards

Real hoverboards work by cleverly exploiting quantum mechanics and magnetic fields. It starts with superconductors — materials that have no electrical resistance and expel magnetic flux fields. Scientists are very excited about superconductors and have been using them in experiments like the Large Hadron Collider.

Because superconductors expel magnetic fields, something weird happens when they interact with magnets. Since a magnet must maintain its north-south field lines, placing a superconductor on top of it interrupts those lines, and the magnet pushes the superconductor out of its way, suspending it in the air.

A magnet levitating above a high-temperature superconductor, cooled with liquid nitrogen. Image credits: Mai Linh Doan.

However, there’s a catch: superconductors gain their “superpowers” only at extremely low temperatures, at around -230 degrees Fahrenheit (-145 Celsius) or colder. So real-world hoverboards need to be topped up with liquid nitrogen roughly every 30 minutes to maintain their extremely low temperature.

Hoverboards that levitate using superconductors all share this limitation. While there has been some progress in creating room-temperature superconductors, this technology is not yet ready to be deployed in the real world. But then again, 30 minutes is better than nothing.

Some promising hoverboards and the technology behind them

In 2014, inventor and entrepreneur Greg Henderson listed a hoverboard prototype, the Hendo hoverboard, on the crowdfunding platform Kickstarter. The Hendo hoverboard could fly 2.5 cm above the ground carrying 300 lb (140 kg) of weight, but just like maglev trains, it required a special track — made of conductive, non-ferromagnetic metal — to function.

The hoverboard relied on magnetic levitation, a principle that allows an object to overcome gravity and stay suspended in the air in the presence of a magnetic field. However, the hoverboard didn’t go into mass production because Henderson used the gadget only as a means to promote his company, Arx Pax Labs.

A year later, another inventor, Cătălin Alexandru Duru, developed a drone-like hoverboard prototype (registered under the name Omni Hoverboard) and set a Guinness World Record for the farthest flight by hoverboard. During his flight, Duru covered a distance of about 276 meters and reached a height of 5 meters.

ARCA CEO Dumitru Popescu controlling his ArcaBoard through body movement. Image Credits: Dragos Muresan/Wikimedia Commons

In 2015, Japanese automaker Lexus also came up with a cool liquid-nitrogen-filled hoverboard that could levitate when placed on a special magnetic surface. The Lexus hoverboard uses yttrium barium copper oxide, a superconductor which, when cooled below its critical temperature, expels magnetic field lines. The board relies on quantum levitation and quantum locking to fly steadily over a magnetic surface.

In December of the same year, Romania-based ARCA Space Corporation introduced an electric hoverboard called the ArcaBoard. Able to fly over any terrain, including water, this rechargeable hoverboard was marketed as a new mode of personal transportation. The company website mentions that the ArcaBoard is powered by 36 built-in electric fans and can be controlled either from a smartphone or through the rider’s body movements.

Components in an ArcaBoard. Image Credits: ARCA

One of the craziest hoverboard designs is Franky Zapata’s Flyboard Air. This hoverboard came into the limelight in 2016, when Zapata broke Cătălin Alexandru Duru’s Guinness World Record by covering a distance of 2,252.4 meters on his Flyboard Air. This powerful hoverboard is capable of flying at a speed of 124 miles per hour (200 km/h) and can reach as high as 3,000 meters (9,842 feet).

Flyboard Air comes equipped with five jet turbines that run on kerosene and has a maximum load capacity of 264.5 lbs (120 kg). At present, it can stay in the air for only 10 minutes but Zapata and his team of engineers are making efforts to improve the design further and make it more efficient. In 2018, his company Z-AIR received a grant worth $1.5 million from the French Armed Forces. The following year, Zapata crossed the English Channel with EZ-Fly, an improved version of Flyboard Air.

While the ArcaBoard did go on sale in 2016 at an initial price of $19,900, the Lexus hoverboard and the Flyboard Air are still not available for public purchase. However, in a recent interview with DroneDJ, Cătălin Alexandru Duru revealed that he plans to launch a commercial version of his Omni Hoverboard in the coming years.

Racial disparities in police shootings in the US are even bigger than previously thought

An analysis of four US states finds that when you look at non-fatal shootings, the racial disparities are even greater than in fatal shootings.

According to data compiled by the Washington Post, police officers shot and killed 1001 people in the US in 2019, and 936 in 2020. Per capita, the rate of deaths for Black Americans is twice as high as that for white Americans. This is backed by studies that have found a consistent racial bias in police shootings that cannot be explained by differences in local crime rates.

However, most studies only look at fatal shootings. Not only does this leave out an important part of police shootings, but fatality can also be influenced by other factors (such as whether police officers administered first aid or whether a hospital was nearby). Justin Nix of the University of Nebraska Omaha and John Shjarback of Rowan University in New Jersey wanted to analyze non-fatal shootings as well, to get a more comprehensive picture of the bias.

The problem is, most states don’t even gather data on non-fatal police shootings. So they focused on four states that do gather it: Florida, Texas, Colorado, and California. They then used statistical analysis to account for demographics and other factors that could be influencing the data.

Overall, they found that Black civilians were more likely to be shot than white civilians, and the racial disparity was even more pronounced than in fatal shootings. For instance, in California, Black people were 3.08 times more likely to be fatally shot, but 3.91 times more likely to be non-fatally shot. The racial disparity in non-fatal shootings was greater than in fatal shootings in all four analyzed states. California actually had the smallest difference of the four, while in Colorado and Florida, Black people were over five times more likely to be shot than white people, the analysis found.
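As a quick illustration of where figures like “3.91 times more likely” come from, here is the basic per-capita arithmetic. The counts and populations below are entirely made up for the example; they are not the study’s data.

```python
# Illustration only: how relative-risk figures such as "3.91 times more likely"
# are computed from shooting counts and population sizes.
def per_capita_rate(shootings, population):
    return shootings / population

def rate_ratio(group_a, group_b):
    """How many times higher group A's per-capita rate is compared to group B's."""
    return per_capita_rate(*group_a) / per_capita_rate(*group_b)

# (shootings, population) for two hypothetical groups in a hypothetical state
black = (160, 2_500_000)
white = (240, 15_000_000)

print(f"Rate ratio (Black vs. white): {rate_ratio(black, white):.2f}x")
```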

Hispanic people were also more likely to be shot than white people in California or Colorado, but in Texas and Florida, the differences were negligible. Here’s how the Black-white and Hispanic-white disparities compare across all four states:

Image credits: Nix, Shjarback, 2021, PLOS ONE.

There are still shortcomings to this study (for instance, there is no data on the number of rounds fired and the data is reported by the police without external verification), but overall, this paints a compelling picture of existing bias among US police shootings.

This could be better studied, the researchers say, if more states recorded and reported data on non-fatal shootings.

“We currently have no comprehensive national data on police firearm discharges. Our study suggests there are likely hundreds of people non-fatally injured by police gunfire each year – a disproportionate share of them Black,” the authors note.

The study has been published in PLOS ONE.

California cultured meat plant is ready to produce 50,000 pounds of meat per year

In a residential neighborhood in Emeryville, California, a rather unusual facility has taken shape. The factory, which almost looks like a brewery, is actually a meat factory — but rather than slaughtering animals, it uses bioreactors to “grow” meat. According to the company that built it, it can already produce 50,000 pounds of meat per year, and has room to expand production to 400,000 pounds.

UPSIDE Chicken Salad

Upside Foods (previously called Memphis Meats) started out in 2015 as one of the pioneers of the nascent food-growing industry. Now, just 6 years later, there are over 80 companies working to bring lab-grown meat to the public — including one in Singapore which is already selling cultured chicken.

The fact that such a factory can be built at all — while regulatory approval is still pending and Upside can’t technically sell its products — is striking. Upside’s new facility is located in an area known more for its restaurants than its factories, but with $200 million in funding and ever-growing consumer interest, the company seems to be sending a strong message.

Cultivating meat

The new facility is a testament to how much technology in this field has grown. The company can not only produce ground meat, but cuts of meat as well. Chicken breast is the first planned product, and the company says they can produce many types of meat, from duck to lobster.

“When we founded UPSIDE in 2015, it was the only cultivated meat company in a world full of skeptics,” says Uma Valeti, CEO and Founder of UPSIDE Foods. “When we talked about our dream of scaling up production, it was just that — a dream. Today, that dream becomes a reality. The journey from tiny cells to EPIC has been an incredible one, and we are just getting started.”

There’s still no word on how much these products will cost, but they probably won’t be the cheapest meat on the market. Although lab-grown meat is nearing cost-competitiveness with slaughter meat, it’s not quite there yet. Besides, Upside has already announced that its chicken products will be served by three-Michelin-starred chef Dominique Crenn. Crenn is the first female chef in the US to be awarded three Michelin stars, and she famously removed meat from her menus in 2018 to make a statement against the negative impact of animal agriculture on the global environment and the climate crisis.

Not for sale yet

Upside isn’t the only company to recently receive a lot of money in funding. Their San Francisco rival Eat Just, which became the first company in the world to sell lab-grown meat, received more than $450 million in funding. A 2021 McKinsey & Company report estimates that the cultivated meat industry will surge to $25 billion by 2030. However, in the US (and almost every country on the globe) cultured meat isn’t approved for sale yet.

The FDA has largely been silent on lab-grown meat since 2019, and while many expect a verdict soon, there’s no guarantee of a timeline. Even if the FDA allows the sale and consumption of lab-grown meat in the US, it will likely do so on a product-by-product basis rather than opening the floodgates to lab-grown meat as a whole. In the EU, things will likely move even slower.

However, pressure is mounting. In addition to the obvious ethical advantages of lab-grown meat, its environmental impact may also be less severe than that of slaughter meat. That said, this has not been confirmed, since we don’t yet have a large-scale production facility and the few available studies don’t have definitive conclusions.

This is why having a working factory is so exciting: it could offer the first glimpses of how sustainable the practice actually is. Upside says the facility uses 100% renewable energy and has expressed its desire to have a third party verify the facility’s sustainability by mid-2022.

Of course, all of this depends on the regulatory approval that may or may not come anytime soon. In the meantime, the factory is ready and good to go.

We are one step closer to forecasting how volcanoes will behave during eruptions

We tend to not think about it, but around 10% of the human population currently lives in the risk zone of active volcanoes. While the other 90% of us are relatively safe from the eruptions of smaller volcanoes (such as the Cumbre Vieja in La Palma, which recently erupted), if one of the larger magmatic systems were to erupt, we would all find ourselves in one hot pickle — no matter where we reside.

A crater inside the larger caldera of Nisyros Volcano, in Greece. The caldera was formed after a catastrophic explosion that disintegrated a large part of the volcanic cone almost 60,000 years ago, while the smaller crater is about 150 years old.

The problem is, we have plenty of those big, mean volcanoes to start with — and they’re often closer to home than you’d think. Even in Europe, where volcanic eruptions are relatively rare to begin with, there are Nisyros, Santorini, Hekla, or Campi Flegrei. We don’t even want to think about what an eruption from the likes of Yellowstone, Toba, or Tambora would bring — and yet we have to.

When volcanoes erupt

Nowadays, with the knowledge and technology we have at our disposal, it is pretty easy to know when a particular volcano is going to erupt. The rise of magma through the crust triggers swarms of low-intensity earthquakes, causes the rocks to bulge, and pushes hot water and gas to the surface well before the magma arrives, heralding incoming trouble.

What we don’t really know — and this has bugged volcanologists for decades — is how a volcano is going to behave during the eruption. Will it generate effusive eruptions that lead to relatively mild lava flows which can damage property but are relatively harmless to people? Or will it trigger violent explosions, which eject clouds of hot gas and ash, or even disintegrate entire volcanic structures, leaving behind caldera depressions instead of mountains?

To solve this problem, volcanologists have mostly focused on what happens in the volcanic conduit — the pipeline that connects the magma chamber to the surface. Once an eruption begins, magma ascends through the crust, generally for about 8-10 km before reaching the volcanic summit. During this ascent, what happens to the gas that bubbles in the magma is the key to how the volcano will erupt.

If, for example, the gas remains trapped in the melt and can’t seep away, there’s a big chance the magma will explode. If, on the other hand, the gas bubbles get to sneak out and leave the melt behind — to outgas — the explosive potential of the magma is neutralized and the volcano will likely ooze lava flows. Letting the gas escape is more or less like defusing a bomb, reducing the risk of a big explosion.

A simplified diagram showing what happens with the magma underground, as it ascends towards the surface. In both cases the eruption is imminent, but the difference is in what happens with the gas bubbles: the magma to the left will explode, while the one to the right will effuse.

It sounds simple, but it’s deceptively complicated. Decompression, changes in ascent velocity and melt viscosity, gas bubbling and percolation, stress buildup, the mechanical resistance of the melt, and all sorts of complicated interactions between melt, crystals, gas bubbles, and country rocks will all compete or cooperate to either trap the gas in the magma or allow it to outgas. It’s so complicated, in fact, that we don’t yet have a clear understanding of how all these processes interact, and we are still unable to build robust numerical models to simulate all of them. Even if we reached the required level of understanding, and even if we were able to forecast eruptive styles based on conduit processes, it would only give us a few minutes’ warning, since the magma is already on its way up.

A few minutes isn’t exactly enough to do that much in the case of an incoming explosive eruption. It would give you the chance to open that bottle of wine you’ve been saving (because you may not get another opportunity), but not much more. But what if we could forecast the eruptive behavior of a volcano well before the eruption is even triggered? What if, instead of mere minutes, we had weeks, months, years, even decades to prepare?

We can now stop dreaming about it and start planning, because we are one step closer to achieving this goal.

Predicting eruptions

A recent study published in Nature Geoscience by researchers from the Swiss Federal Institute of Technology (ETH Zürich, Switzerland) and Brown University (USA), with myself as one of the authors, makes a major breakthrough in the direction of forecasting eruptive styles. The question we designed the study around was: what if the magma chamber conditions can predetermine eruptive behavior, regardless (to some extent) of what happens in the conduit?

It should be possible, after all, since the magma entering the volcanic conduit inherits all its initial properties from the magma chamber.

The big difference between magma chamber processes and conduit processes is that whatever happens in the magmatic reservoir takes place over days, months, years, even thousands or tens of thousands of years, giving us ample time to detect changes. Indeed, this new study shows a striking correlation between how the magma is stored underground, and how it ultimately behaves at the surface.

The study is based on reconstructing the magmatic storage conditions of about 245 eruptions generated by 75 volcanoes worldwide, including some really famous ones. To achieve this, we relied on the chemistry of minerals and glasses from erupted products, which are windows to processes and conditions that had happened deep underground, and which we can’t really probe directly. Using this approach, we determined the temperatures of the magmas, the amounts of solid crystals floating in the melt, the content of dissolved gas it stored, and whether some of that gas might have started exsolving (or forming gas bubbles) while still in the magma chamber.

As was expected, low amounts of dissolved gas (generally lower than 3.5 wt% water) lead to effusive outpourings of lava, while higher water contents (roughly between 4 and 5.5 wt%) favor explosive events. Interestingly, however, crystallinity (the volume of solid particles in the magma) has an important say in this as well. When more than 40% of the volume of the magma consists of crystals, the eruption becomes mild no matter the stored gas content. This happens because the solid particles form a kind of skeleton that the gas bubbles connect to, allowing them to form finger channels that act like pipes. In this way, even if the magma has enough gas content to explode, the crystals help the gas permeate the melt efficiently and defuse the volcanic bomb. At the same time, a large amount of crystals increases the bulk viscosity of the magma and its resistance to flowing. By doing so, the magma is slowed down considerably on its way to the surface (even by a factor of ten), allowing more time for the gas to escape through the finger channels.

Deposits generated by volcanic explosions at Nisyros Volcano, in Greece.

A key observation, which is counter-intuitive and bound to spark a debate in the volcanological community, is that at very high gas contents (more than 5.5 wt% water), the magmas start behaving effusively again. Why, though? The higher the gas content, the more explosive the magma should be, right? But as we found, this is not necessarily the case.

At very high dissolved gas contents, the melt is unable to store all its water in dissolved form anymore — as disseminated molecules. Instead, the molecules come together to form gas bubbles, or exsolve. It’s very much like a shaken bottle of champagne. What we found is that magmas very rich in gas are also likely to contain quite a few gas bubbles in the magma chamber. Their presence dramatically changes how the eruption is initiated and, as a result, how it is likely to behave.

How? Well, this is where things get complicated again. Most volcanic eruptions are triggered when magma that is even hotter and comes from even greater depths — from the lower crust of the Earth — intrudes the shallow magma chamber of the volcano. Yes, for us volcanologists, 10 km is shallow… This intrusion of hot magma into another body of liquid magma is known as magmatic recharge. As more magma is crammed inside, the magmatic reservoir is pressurized: it’s more or less like blowing up a balloon that has no space to expand while more air keeps going in. At some point, the balloon will just break. The same happens in a magma chamber: the rocks sealing it fail, and the hot stuff starts threading its way towards the surface.

If a magma chamber doesn’t contain gas bubbles (or contains very few of them), it pressurizes fast during magmatic recharge, and the eruption is triggered readily. When many gas bubbles are present in the magma chamber though, as the article highlights through numerical simulations, they act as a myriad of tiny cushions. Each gas bubble compresses to allow space for the extra magma that comes from below. This means that even more hot material needs to intrude the magma chamber until the surrounding rocks finally break and allow the material to erupt at the surface. More and more hot recharge coming in, and more time for it to interact with the magma chamber means that the resident melt heats up. Heating up a melt is like heating up honey: it becomes less viscous, and a less viscous melt is able to lose gas easily. See the connection?

Solidified lava flow generated by water-rich magmas rich in gas bubbles, from Nisyros Volcano, in Greece.

Basically, a magma chamber containing gas bubbles heats up more intensely before an eruption is initiated; it ends up feeding the conduit with a melt of lower viscosity, which allows the gases to seep through it faster, and in addition it already holds a multitude of gas bubbles that are ready to connect and outgas even at the base of the volcanic conduit.

In conclusion, what the article shows is a clear window of explosivity, between 4 and 5.5 wt% water and at low to moderate crystallinities. All we need to do now is find a way of looking inside active magma chambers and evaluating their state. Scanning something buried at a depth of about 8-10 km might sound like science fiction, but geophysics is here to do the job. One method in particular, magnetotellurics, which uses the natural magnetic and electric fields of the Earth, is capable of reconstructing the electrical resistivity structure of active magma chambers. By approaching the problem in an interdisciplinary way and integrating the geophysical and volcanological data, we can use this electrical resistivity structure to estimate the crystallinity of the magmatic reservoir and to check whether significant volumes of gas bubbles are currently present in the magma chamber. These are two of the three key parameters required for the timely forecasting of the eruptive behaviour of volcanoes.
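To pull the correlations described above into one place, here is a toy sketch that encodes the article’s rounded thresholds as a rule of thumb. It is purely illustrative — a caricature of the trends, not the statistical model used in the study.

```python
def likely_eruptive_style(water_wt_percent, crystal_volume_fraction):
    """Rough rule-of-thumb classifier based on the storage conditions discussed
    in the article (illustrative thresholds, not the paper's actual model)."""
    if crystal_volume_fraction > 0.40:
        # Crystal framework lets gas percolate out through finger channels: defused bomb.
        return "effusive (crystal-rich, gas escapes through channels)"
    if water_wt_percent < 3.5:
        return "effusive (too little gas to drive an explosion)"
    if water_wt_percent <= 5.5:
        return "explosive (window of explosivity)"
    # Very gas-rich magmas already hold bubbles that cushion recharge, heat up,
    # lower the melt viscosity, and outgas efficiently.
    return "effusive (bubble-rich, pre-exsolved gas outgasses)"

for water, crystals in [(2.0, 0.10), (4.5, 0.20), (4.5, 0.55), (6.0, 0.15)]:
    print(f"{water} wt% H2O, {crystals:.0%} crystals -> {likely_eruptive_style(water, crystals)}")
```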

The study has been published in Nature Geoscience. Journal Reference:

Popa, RG., Bachmann, O. & Huber, C. Explosive or effusive style of volcanic eruption determined by magma storage conditions. Nat. Geosci. 14, 781–786 (2021). https://doi.org/10.1038/s41561-021-00827-9

Scientists make eco-knives from hardened wood that slice through steak

Wood might be the last material you’d think of to use in cutting tools, but researchers employed a novel method that processes wood into knives sharp enough to easily slice steak. In fact, these wooden knives are nearly three times sharper than a stainless steel dinner table knife.

The knife is made from processed wood that is 23 times harder than natural wood and up to three times sharper than a stainless-steel dinner table knife. Credit: Bo Chen.

Most kitchen knives are either made of steel or ceramic, both of which require high temperatures of up to a few thousand degrees Celsius to forge. Wood, on the other hand, is sustainable and far less energy-intensive to process.

“A wood knife could be a promising sustainable alternative for a stainless-steel dinner table knife, with even better performance,” Teng Li, senior author of the study and a materials scientist at the University of Maryland, told ZME Science.

Wood is one of the oldest materials in human history, having been used for tens of thousands of years in virtually all areas of life, from construction and furniture to energy production. Wood can be turned, planed, finely carved, bent, and woven. When burned in the absence of oxygen, it turns to charcoal, a fuel still used by millions for cooking and heating.

However, natural wood has its limits. When processed into furniture or construction materials, wood tends to rebound after shaping. Seeking to make wood more versatile, Li and colleagues devised a new processing method that keeps the advantageous properties of the material while removing those that may hamper wood’s ability to act as a cutting tool.

Wood is super strong thanks to cellulose, which has a higher strength-to-density ratio than ceramics, most metals, and polymers. However, cellulose accounts for only up to 50% of wood; the rest consists of lignin and hemicellulose.

Using a two-step process, the scientists first delignify the wood by boiling it at 100° Celsius in a bath of chemicals. Typically, wood is very rigid, but once the binding lignin is gone, the material becomes soft and flexible. In the second step, the now squishy wood is hot pressed to densify and remove the excess water.

Finally, after the processed material is carved into the desired shape, a mineral oil coating is applied so that the wooden knife doesn’t go dull (cellulose likes water a bit too much for a cutting knife).

Besides knives, the researchers also fashioned their processed wood into nails, which proved as sharp and sturdy as conventional steel nails. But unlike their metal counterparts, the wooden nails don’t rust. In one demonstration, the researchers hammered together three boards without any damage to the wooden nails.

It was obvious from these demonstrations that the researchers had made a super strong material — and they soon found out why. When viewing treated samples under a high-resolution microscope, the scientists found that the processed wood had far fewer voids and pits, which are common defects in natural wood.

“When we found that the processed wood can be 23 times harder than natural wood, we were excited and wondered what such hardened wood could be used for. The brainstorming we enjoyed led to two potential demonstrations, wood knives and nails, which could be a sustainable alternative for the steel and plastic dinner table knives and steel nails. Cutting a medium-well done steak with our wood knife easily was fun and satisfying,” Li said.

For now, these are just demonstrations of the technology at the lab scale. However, the researchers believe they can scale the process so that sharp wooden knives could be sold at a cost that is competitive with conventional steel knives. “This will take some time and extra research and development efforts. But this is definitely worth doing,” Li added.

“The wood knife and nails are just two demonstrations of the hardened wood. Hard and strong materials are widely used in our daily life. There exist fertile opportunities to use hardened wood as a potential replacement of current cutlery and construction materials, such as steel and ceramics,” Li said.

“There are more than 3 trillion mature trees on earth, per a recent study published in Nature. This translates to more than 400 trees for each of us in the world. Trees are renewable and wood is sustainable. Our existing use of wood barely touches its full potential. There are fertile opportunities for us to use widely available materials in nature toward a sustainable future.”

The findings were reported in the journal Matter.

Could machine learning help us develop next-generation materials? These researchers believe so

In the past few years, 3D printing has seen a massive growth in popularity — and it’s not just for toys and trinkets anymore. Scientists and engineers are 3D printing everything from boats to bridges to nuclear plants components. But as 3D printing becomes a more and more integral part of modern engineering, it’s also important to develop new innovative materials.

To cut down on the time and resources required to develop these new materials, researchers at MIT have used machine learning to help them find new materials with the desired characteristics (like toughness and strength).

Image in public domain.

Materials development is still very much a manual process, says Mike Foshey, a mechanical engineer and project manager in the Computational Design and Fabrication Group (CDFG) of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and co-lead author of the paper.

“A chemist goes into a lab, mixes ingredients by hand, makes samples, tests them, and comes to a final formulation. But rather than having a chemist who can only do a couple of iterations over a span of days, our system can do hundreds of iterations over the same time span,” says Foshey.

In the new process, a scientist selects some ingredients, inputs their chemical compositions into the algorithm, and defines the mechanical properties they want the new material to have. Then, instead of having the researcher do the trial-and-error themselves, the algorithm increases and decreases the amounts of those ingredients and estimates how each version would affect the material’s properties, searching for the formulation closest to what is desired.

Then, the researcher would actually create the material in the way recommended by the algorithm and test it. Streamlining this development process not only saves a lot of time and effort but also has a positive environmental impact by reducing the amount of chemical waste. In addition, the algorithm could find some combinations that may escape human intuition.
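As a very rough sketch of that kind of closed loop — propose a formulation, score it, keep the best — here is a toy version. The ingredient names and the scoring formula are invented stand-ins for real chemistry, and random search stands in for the study’s actual data-driven multi-objective optimizer.

```python
import random

random.seed(42)

INGREDIENTS = ["monomer_A", "monomer_B", "crosslinker", "photoinitiator"]  # hypothetical names

def propose_formulation():
    """Random mixture fractions that sum to 1 (stand-in for the proposal step)."""
    weights = [random.random() for _ in INGREDIENTS]
    total = sum(weights)
    return {name: w / total for name, w in zip(INGREDIENTS, weights)}

def evaluate(formulation):
    """Stand-in for testing a printed sample; a real loop would use lab measurements
    or a learned surrogate model instead of this made-up formula."""
    toughness = 5 * formulation["monomer_A"] + 2 * formulation["crosslinker"]
    stiffness = 4 * formulation["crosslinker"] + formulation["monomer_B"]
    return toughness + stiffness   # single combined score, for simplicity

best, best_score = None, float("-inf")
for _ in range(200):               # hundreds of iterations instead of a handful by hand
    candidate = propose_formulation()
    score = evaluate(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(f"best score {best_score:.2f} with formulation:")
for name, frac in best.items():
    print(f"  {name}: {frac:.2f}")
```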

“We think, for a number of applications, this would outperform the conventional method because you can rely more heavily on the optimization algorithm to find the optimal solution. You wouldn’t need an expert chemist on hand to preselect the material formulations,” Foshey says.

The team trialed the system by asking it to optimize formulations for a 3D-printing ink that only hardens when exposed to ultraviolet light. They found six chemicals that could be used in the mix and asked the algorithm to find the best material that can be made from those six chemicals, in terms of toughness, stiffness, and strength.

This was a particularly challenging task because the properties can be contradictory — the strongest material may not be the toughest or the stiffest. However, the team was impressed to see just how many different materials the algorithm suggested — and how good the properties of these materials were. Ultimately, the algorithm zoomed in on 12 top-performing materials that had optimal tradeoffs between the desired properties.
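The “optimal tradeoffs” part means keeping formulations that no other candidate beats on every property at once. Here is a minimal sketch of that non-dominated (Pareto) filter, with property values invented for the example — it illustrates the general idea, not the study’s exact procedure.

```python
# A material is kept if no other material beats it on every property simultaneously
# (Pareto non-dominated). The property values below are invented for illustration.
materials = {
    "mix_1": (9.1, 3.2, 5.0),   # (toughness, stiffness, strength)
    "mix_2": (7.5, 6.8, 4.9),
    "mix_3": (9.0, 3.1, 4.8),   # dominated by mix_1
    "mix_4": (5.2, 7.0, 6.1),
}

def dominates(a, b):
    """True if candidate a is at least as good as b everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

pareto_front = [
    name for name, props in materials.items()
    if not any(dominates(other, props)
               for other_name, other in materials.items() if other_name != name)
]
print("non-dominated formulations:", pareto_front)
```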

To encourage other researchers to use it, the researchers have also created a free, open-source materials optimization platform called AutoOED that incorporates the algorithm. AutoOED is a full software package that encourages exploration and allows researchers to optimize the process.

Researchers expect algorithm-driven material development to become more and more important over the next few years. Overall, the approach promises great improvements over the old-fashioned way of doing things.

“This has broad applications across materials science in general. For instance, if you wanted to design new types of batteries that were higher efficiency and lower cost, you could use a system like this to do it. Or if you wanted to optimize paint for a car that performed well and was environmentally friendly, this system could do that, too,” Foshey concludes.

Journal Reference: Timothy Erps, Accelerated Discovery of 3D Printing Materials Using Data-Driven Multi-Objective Optimization, Science Advances (2021). DOI: 10.1126/sciadv.abf7435. www.science.org/doi/10.1126/sciadv.abf7435

Machine learning reveals archaeology from up to 5,000 years ago

As modern technologies are emerging, they can help us learn a thing or two about ancient history as well. In a new study published by Penn State researchers, a machine learning algorithm was able to find previously undiscovered shell rings and shell mounds left by Indigenous people 3,000 to 5,000 years ago.

Shell rings in LiDAR data. The rings stand out due to their slope and elevation change compared to the surrounding landscape. 
Image credits: Dylan Davis, Penn State.

When humans build structures, it changes the environment around them. Even once a structure is gone, the remains can still be detectable for hundreds or even thousands of years. For instance, if you build a house, the porosity and topography of the surrounding soil will change ever so slightly, as will the chemistry of the soil beneath it (as traces of man-made materials seep underground). Oftentimes, we can detect these changes if we look closely enough — and with the proper technological tools. Maybe it’s a tiny slope, maybe it’s some difference in soil humidity, or something else, but if we can gather the right type of data, we can see where human structures were built even thousands of years ago.

But it’s not easy. For decades, researchers looked for structures from the ground, based on historical hints or what they could see with the naked eye. But vegetation can easily mask these subtle differences. In recent years, though, aerial surveys have made a big difference. With airborne Lidar, synthetic aperture radar, or other types of spectral data, researchers have been able to uncover archaeological structures far more easily than before.

But there was still a problem: there’s a lot of airborne data to analyze, and the data isn’t always clear. So how do you comb through all the data and find what looks promising? Well, you train an algorithm, of course.

The team began with a public Lidar data set and then used a deep learning process to teach the algorithm to find shell rings, shell mounds, and other landscape features that could be indicative of archaeological remains. They then manually went over the maps and located the known rings, using these to train the algorithm. For an even better training program, they rotated some of the maps by 45 degrees.

“There are only about 50 known shell ring sites in the Southeastern U.S.,” says Dylan S. Davis, doctoral candidate in anthropology at Penn State. Davis is also an author of the new study. “So, we needed more locations for training.”

“One difficulty with deep learning is that it usually requires massive amounts of information for training, which we don’t have when looking for shell rings,” Davis adds. “However, by augmenting our data and by using synthetic data, we were able to get good results, although, because of COVID-19, we have not been able to check our new shell rings on the ground.”
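With only about 50 known sites, data augmentation does a lot of the heavy lifting. Here is a generic sketch of the idea — rotating labeled elevation patches to multiply the training examples — using a synthetic “ring” patch rather than the team’s actual LiDAR data or pipeline.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(1)

# Stand-in for a LiDAR-derived elevation patch centered on a known shell ring.
# A real pipeline would cut such patches out of the public lidar dataset.
patch = rng.normal(0.0, 0.1, size=(64, 64))
yy, xx = np.mgrid[:64, :64]
ring = (np.hypot(yy - 32, xx - 32) > 18) & (np.hypot(yy - 32, xx - 32) < 24)
patch[ring] += 1.0   # the ring stands out as a subtle elevation change

def augment(elevation_patch, angles=(45, 90, 135, 180, 225, 270, 315)):
    """Create extra training examples by rotating each labeled patch,
    since only ~50 shell ring sites are known."""
    rotated = [rotate(elevation_patch, angle, reshape=False, mode="nearest")
               for angle in angles]
    return [elevation_patch] + rotated

training_examples = augment(patch)
print(f"1 labeled site -> {len(training_examples)} training patches")
```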

After training the algorithm, the team was able to use it to discover hundreds of new promising structures, including some in counties where no previous discovery had been made. Since shell rings are thought to be centers of exchange of goods, they can provide a lot of information on ancient societies, showing what resources they traded and whether or not they used the available resources sustainably.

Aerial view of shell rings
Shell rings located on Daws Island, South Carolina. Both rings are approximately 150 to 200 feet in diameter and are composed largely of oyster, mussel, and clam shells.

“The rings themselves are a treasure trove for archaeologists,” Davis said. “Excavations done at some shell rings have uncovered some of the best preservation of animal bones, teeth and other artifacts.”

Archaeologists will now try to explore these sites on the ground and confirm the findings. But what’s perhaps even more exciting is that the artificial intelligence algorithms they used are already included in ArcGIS, a commercially available geographic information system. This means the algorithms could be trained to find different types of structures in different geographical areas, potentially opening a whole new era of airborne archaeological exploration. The researchers also provide the code and tools they used and encourage others to replicate their approach. It doesn’t even need to be archaeology — other types of structures could be sought out the same way.

“Archaeologists are using more and more AI and automation techniques,” Davis concludes. “It can be extremely complicated and requires specific skill sets and usually requires large amounts of data.”

Long-term, they’re cheaper: Electric cars can cost 40% less to maintain over their lifetime

In addition to what you can save on fuel, electric cars also have fewer parts and are cheaper to service and maintain, a new report highlights.

Electric cars have something new going for them: low repair costs.

Cost is one of the main arguments against electric cars. As a relatively new technology with still-expensive batteries, an electric car typically has a higher upfront cost than a comparable conventional car (without subsidies). But in the long run, electric cars may actually be cheaper.

The price of a car is just a small part of what a car actually costs. In the vast majority of cases, you end up spending way more than the initial purchase price on operating and servicing the vehicle. Andrew Burnham of Argonne National Laboratory recently co-authored a report about the total cost of vehicle ownership. According to the report (which is focused on the US), electric cars could be a surprisingly sweet deal.

“Over the lifetime of a vehicle, the maintenance and repair for a gasoline car might be $25,000 or so – so a very significant amount,” he says.

To make this comparison, Burnham and colleagues analyzed the total cost of ownership, considering vehicle cost and depreciation, financing options, fuel costs, insurance, maintenance and repairs, taxes, fees, and a number of other cost parameters — pretty much everything involved in buying and owning a car. They also selected several representative cars for comparison.

The researchers found that an important difference is that electric cars have fewer parts to service (they don’t need things like a timing belt, motor oil, or oxygen sensors). All in all, Burnham and colleagues estimate that maintenance for fully electric vehicles costs around 40% less than for conventional cars. In addition, taxes (e.g., pollution taxes) are already lower for electric cars in many places, and electric car drivers can also save a lot of money on fuel.
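To see how that 40% figure plays out, here is a back-of-the-envelope calculation. The $25,000 lifetime maintenance estimate for a gasoline car and the roughly 40% saving come from the article; the purchase and fuel numbers are placeholders, since those vary wildly by model and driver.

```python
# Back-of-the-envelope version of the comparison described above.
# The $25,000 lifetime maintenance figure and the ~40% saving are quoted in the
# article; every other number here is an assumed placeholder.
gas_maintenance = 25_000
ev_maintenance = gas_maintenance * (1 - 0.40)        # ~40% cheaper to maintain

# Hypothetical extras, just to show how the total-cost view can flip the picture.
gas_purchase, ev_purchase = 30_000, 38_000
gas_fuel, ev_fuel = 18_000, 7_000                    # lifetime fuel vs. electricity (assumed)

gas_total = gas_purchase + gas_maintenance + gas_fuel
ev_total = ev_purchase + ev_maintenance + ev_fuel

print(f"Gasoline car lifetime cost: ${gas_total:,.0f}")
print(f"Electric car lifetime cost: ${ev_total:,.0f}")
print(f"Difference: ${gas_total - ev_total:,.0f} in favor of the EV")
```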

“There is a potential for a large amount of maintenance and repair savings over the lifetime of an electric vehicle versus a gasoline one,” Burnham says.

So even though the upfront cost is higher, Burnham advises buyers to look beyond the price tag and think in the longer term, though the balance still depends on multiple factors (such as how much you drive, local taxes, and so on).

However, one thing that could play a role (and was not analyzed in the report) is the so-called “right to repair”. In practice, owners are sometimes not allowed to take their car to just any repair shop and are forced to go to the car manufacturer. As a result, the manufacturer can sometimes heavily overcharge for even simple repairs — because the owner has no alternative.

If a fair repair market is ensured, then it could be good news for the electric vehicle market — and consequently, for our planet’s climate as well.

A neural network has learned to identify tree species from satellite

A detailed land-cover map showing forest in Chiapas state in southern Mexico. The map was produced using Copernicus Sentinel-2 optical data from 14 April 2016. The image is not part of the discussed study.

Much of what we know about forest management comes from aerial photos nowadays. Whether it’s drones, helicopters, or satellites, bird’s-eye views of forests are crucial for understanding how our forests are faring — especially in remote areas that are hard to monitor on the ground.

Satellite imagery, in particular, offers a cheap and effective tool for monitoring. But the problem with satellite data is that oftentimes, the resolution is pretty low, and it can be hard to tell what you’re looking at.

But a new study using neural networks to tell tree species apart in satellite imagery may help with that.

Hierarchical model structure/Svetlana Illarionova et al., IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.

“Commercial forest taxation providers and their end-users, including timber procurers and processors, as well as the forest industry entities can use the new technology for quantitative and qualitative assessment of wood resources in leased areas. Also, our solution enables quick evaluations of underdeveloped forest areas in terms of investment appeal,” explains Svetlana Illarionova, the first author of the paper and a Skoltech PhD student.

Illarionova and her colleagues from the Skoltech Center for Computational and Data-Intensive Science and Engineering (CDISE) and the Skoltech Space Center used a neural network to automate the identification of dominant tree species in high- and medium-resolution images.

Classes markup of the study area. Image credits: Illarionova et al.

After training, the neural networks were able to identify the dominant tree species in the test site in Leningrad Oblast, Russia. The data was confirmed with ground-based observations from 2018. A hierarchical classification model and additional data, such as vegetation height, helped further enhance the quality of the predictions while improving the algorithm’s stability, facilitating its practical application.
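The “hierarchical” part simply means the model makes coarse decisions first and refines them afterwards. Here is a heavily simplified sketch of that idea, with stub rules standing in for trained networks — the class names, features, and thresholds are invented for illustration and are not the study’s actual scheme.

```python
# Minimal sketch of hierarchical classification: decide a coarse class first,
# then refine within it. The stub "models" below are invented placeholders; the
# actual study uses trained neural networks on satellite bands and vegetation height.
def coarse_classifier(pixel_features):
    """Stub for level 1: forest vs. non-forest (e.g., from a vegetation index)."""
    return "forest" if pixel_features["ndvi"] > 0.4 else "non-forest"

def conifer_vs_deciduous(pixel_features):
    """Stub for level 2, only applied where level 1 said 'forest'."""
    return "conifer" if pixel_features["nir"] < 0.30 else "deciduous"

def dominant_class(pixel_features):
    group = coarse_classifier(pixel_features)
    if group == "non-forest":
        return "non-forest"
    return conifer_vs_deciduous(pixel_features)

sample_pixels = [
    {"ndvi": 0.65, "nir": 0.25},
    {"ndvi": 0.70, "nir": 0.40},
    {"ndvi": 0.15, "nir": 0.35},
]
for px in sample_pixels:
    print(px, "->", dominant_class(px))
```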

The study focused on identifying the dominant species. Of course, there will be forests where the distribution is roughly equal between two or even more species, but the composition of these mixed forests was outside the scope of the study.

“It is worth noting that the “dominant species” in forestry does not exactly match the biological term “species” and is connected mostly with the timber class and quality,” the researchers write in the paper.

Overall, the algorithm appeared capable of identifying the dominant species, although the researchers note that the outcome can be improved with better training markup, which they plan to address in future research.

“However, in future research, we are going to cover mixed forest cases, which will fall entirely into the hierarchical segmentation scheme. The other goal is to add more forest inventory characteristics, which can also be estimated from the satellite imagery,” the study concludes.

Earrings aren’t just fashionable. They’re also a form of communication.

Something as simple as earrings can serve as a means of communication just by themselves. As it turns out, humans have been using this type of non-verbal communication for millennia, with researchers recently recovering shell beads that were used for earrings 150,000 years ago.  

The researchers working at the cave. Image credit: The researchers

The beads are the earliest known evidence of widespread non-verbal communication, according to the group of anthropologists who made the discovery. In their study, they argue that this brings new valuable information on the evolution of human cognitive abilities and interactions. 

“You think about how society works—somebody’s tailgating you in traffic, honking their horn and flashing their lights, and you think, ‘What’s your problem?'” Steven Kuhn, lead author of the study, said in a statement. “But if you see they’re wearing a blue uniform and a peaked cap, you realize it’s a police officer pulling you over.”

Kuhn and the group of researchers recovered 33 marine shell beads between 2014 and 2018 from Bizmoune Cave, located about 10 miles from the Atlantic coast of southwest Morocco. The cave, formed in Upper Cretaceous limestone, was discovered during a survey of the area in 2004 and was then subject to archaeological excavations, including the ones that yielded the beads.

The beads were made from the shells of sea snails belonging to the species Tritia gibbosula, each measuring half an inch long. They feature an oval-shaped or circular perforation, indicating they were hung on strings or from clothing. There are also traces of human modification, such as chipping, possibly done with a stone tool, the researchers found.

“They were probably part of the way people expressed their identity with their clothing,” Kuhn said. “They’re the tip of the iceberg for that kind of human trait. They show that it was present even hundreds of thousands of years ago, and that humans were interested in communicating to bigger groups of people than their immediate friends and family.”

Looking into the beads

While this is far from the first time researchers have found symbolic artifacts such as beads, previous examples are no older than 130,000 years. Some of the earliest examples are associated with the Aterian industry, a Middle Stone Age culture known for its advanced tools such as spear points, used to hunt diverse wild animals.

For anthropologists like Kuhn, the beads are a way to advance our understanding of the evolution of human cognition and communication. They are a fossilized form of basic communication, Kuhn said. While the researchers don't know exactly what the beads meant, they are symbolic objects deployed in a way that other people could see them, Kuhn explained.

The researchers agree that their findings, while significant, also leave a lot of open questions. They now want to explore further the role of the Aterian industry and why its people felt the need to make the beads when they did. One possibility is that they wanted to identify themselves as more people started expanding into northern Africa.

Wearing a certain bead might have meant that you belonged to a certain clan, a marker perhaps created as a way to protect limited resources as populations expanded. Still, as Kuhn explains, it's one thing to know that they were capable of making the beads, and another to understand what actually stimulated them to do it. A chapter for another day.

The study was published in the journal Science Advances. 

Life in the universe may be way more common than we thought

A group of astronomers at the University of Leeds has identified rich reservoirs of life-giving molecules around young stars in our galaxy — something previously believed to happen only under rare circumstances. The findings suggest that there could be as much as 100 times more of these molecules in the Milky Way than previously thought.

Artist’s depiction of a protoplanetary disk with young planets forming around a star. The right-side panel zooms in to show various nitrile molecules that are accreting onto a planet. Image credit: M.Weiss/Center for Astrophysics

The researchers published a set of papers detailing the discovery of the molecules in disks of gas and dust particles orbiting young stars. These disks form at the same time as their stars and can eventually give rise to planets, much as happened with the disk around the young Sun that formed the planets of the Solar System.

“These planet-forming disks are teeming with organic molecules, some of which are implicated in the origins of life here on Earth,” Karin Öberg, one of the authors, said in a statement. “This is really exciting. The chemicals in each disk will ultimately affect the type of planets that form and determine whether or not the planets can host life.”

The researchers used the Atacama Large Millimetre/submillimetre Array (ALMA) radio telescope in Chile to look at the composition of five of these disks. Thanks to its array of 66 antennas, ALMA can detect even very faint signals from molecules in outer space; each molecule emits light at characteristic wavelengths that scientists can investigate.

The researchers looked for certain organic molecules and found them in four of the five disks, and in much larger numbers than they originally anticipated. These molecules are considered essential to life on Earth. They are believed to have reached the planet through asteroids or comets that crashed into Earth billions of years ago. 

This reaffirms the idea of the molecules traveling aboard asteroids and comets, as the compounds were located in the same regions of the disks that produce such space rocks. The molecules weren't evenly distributed within the disks, either, with each disk containing a different mix. For the researchers, this suggests that each planet forms from a different mix of ingredients.

“ALMA has allowed us to look for these molecules in the innermost regions of these disks, on size scales similar to our Solar System, for the first time. Our analysis shows that the molecules are primarily located in these inner regions with abundances between 10 and 100 times higher than models had predicted,” John Ilee, one of the authors, said in a statement. 

The researchers specifically looked for three molecules, cyanoacetylene (HC3N), acetonitrile (CH3CN), and cyclopropenylidene (c-C3H2), in five protoplanetary disks, known as IM Lup, GM Aur, AS 209, HD 163296, and MWC 480. The disks lie 300 to 500 light-years from Earth, and each of them shows signs of ongoing planet formation.

The next steps

Following this remarkable discovery, the researchers want to keep searching for more complex molecules in the protoplanetary disks. They are specifically looking forward to the launch of the James Webb Space Telescope, currently scheduled for December 18, as it will help them examine the molecules in much greater detail than before, they added.

“If we are finding molecules like these in such large abundances, our current understanding of interstellar chemistry suggests even more complex molecules should also be observable,” Ilee said in a statement. “If we detect them, then we’ll be even closer to understanding how the raw ingredients of life can be assembled around other stars.”

All the studies related to this finding can be accessed here. 

Norway’s “Wind Catching System” wants to revolutionize how we use wind energy

We’ve seen some remarkable innovations in wind energy in recent years — so much so that within a decade or so, wind energy went from a fringe alternative to being cost-competitive with cheap, polluting fossil fuels. But most of these innovations took place “behind the scenes”, in the materials and mechanisms powering wind turbines. Seen from afar, a wind turbine developed yesterday looks pretty similar to one made ten years ago.

But that may soon change. A company in Norway wants to redesign offshore wind farms. According to the company, its 1,000-foot (304-meter) tall structure can generate five times more energy than the largest existing wind turbines, and at a lower cost to boot.

Render image. Credits: WCS.

Just one of these arrays could offer double the swept area of the world’s biggest conventional wind turbines and, with its smaller rotors, use that wind energy more efficiently, says Wind Catching Systems (WCS), the company behind the project.

The system also keeps working at higher wind speeds (over 40 km/h, or 25 mph), when larger turbines tend to limit production or stop entirely to protect themselves from damage.

Overall, the net result is that the system produces a 500% boost in annual energy output, at a price already comparable to other offshore wind farms in Norway. It's still more expensive than land-based wind or solar energy, but that is largely owed to high installation costs, says WCS. As operations are scaled up, the costs could be brought down substantially.

Floating offshore wind farms have become a popular topic in renewable energy technology because they can also be deployed in deep waters, which greatly extends the total area available for offshore wind farms. For countries like Norway or Japan, whose coastal waters are mostly deep, this is a big deal.

“Wind Catching will make floating offshore wind competitive as soon as in 2022-2023, which is at least ten years earlier than conventional floating offshore wind farms,” claims Ole Heggheim, CEO of Wind Catching Systems.

A conventional floating farm already exists in Scotland, but this design is more efficient, says WCS.

Easier to build, deploy, and recycle

Another problem with wind turbines is that bigger ones tend to be more efficient, but they are also harder to transport and recycle. The Wind Catching system has no especially large components and, according to the company, can be maintained without special cranes or vessels, which could bring costs down significantly. The system also has a claimed lifespan of 50 years, longer than that of conventional wind turbines.

However, WCS has not yet released further details about a prototype or a potential first installation. All we have so far are render images and press releases, so we’ll have to wait and see whether this is actually as good as they say or whether there’s also some hot air to their announcements. At any rate, there’s still plenty of room for innovation in wind energy.

We trained AI to recognise footprints, but it won’t replace forensic experts yet


We rely on experts all the time. If you need financial advice, you ask an expert. If you are sick, you visit a doctor, and as a juror you may listen to an expert witness. In the future, however, artificial intelligence (AI) might replace many of these people.

In forensic science, the expert witness plays a vital role. Lawyers seek them out for their analysis and opinion on specialist evidence. But experts are human, with all their failings, and the role of expert witnesses has frequently been linked to miscarriages of justice.

We’ve been investigating the potential for AI to study evidence in forensic science. In two recent papers, we found AI was better at assessing footprints than general forensic scientists, but not better than specific footprint experts.

What’s in a footprint?

As you walk around your home barefoot you leave footprints, as indentations in your carpet or as residue from your feet. Bloody footprints are common at violent crime scenes. They allow investigators to reconstruct events and perhaps profile an unknown suspect.

Shoe prints are one of the most common types of evidence, especially at domestic burglaries. These traces are recovered from windowsills, doors, toilet seats and floors and may be visible to or hidden from the naked eye. In the UK, recovered marks are analysed by police forces and used to search a database of footwear patterns.

The size of barefoot prints can tell you about a suspect's height, weight, and even gender. In a recent study, we asked an expert podiatrist to determine the gender of the people who made a set of footprints, and they got it right just over 50% of the time. We then created a neural network, a form of AI, and asked it to do the same thing. It got it right around 90% of the time. What's more, much to our surprise, it could also assign an age to the track-maker, at least to the nearest decade.
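As a purely illustrative sketch (not the model used in the study), this is roughly what a minimal classifier of this kind looks like in Python. The features, their values, and the resulting accuracy below are synthetic stand-ins rather than real footprint data.

```python
# Illustrative sketch only: a small neural network that labels footprints as
# male or female from simple geometric measurements. All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: foot length, forefoot width, heel width (in cm)
male = rng.normal([26.5, 10.2, 6.6], 1.0, size=(n, 3))
female = rng.normal([24.0, 9.2, 6.0], 1.0, size=(n, 3))
X = np.vstack([male, female])
y = np.array([1] * n + [0] * n)  # 1 = male, 0 = female

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```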

A series of footprints with a heat map over them.
The footprints analysed by the Bluestar AI, with a heat map over them suggesting areas of ambiguity. Matthew Bennett, Author provided

When it comes to shoe prints, footwear experts can identify the make and model of a shoe simply by experience – it’s second nature to these experts and mistakes are rare. Anecdotally, we’ve been told there are fewer than 30 footwear experts in the UK today. However, there are thousands of forensic and police personnel in the UK who are casual users of the footwear database. For these casual users, analysing footwear can be challenging and their work often needs to be verified by an expert. For that reason, we thought AI may be able to help.

We tasked a second neural network, developed as part of an ongoing partnership with UK-based Bluestar Software, with identifying the make and model of footwear impressions. This AI takes a black and white footwear impression and automatically recognises the shape of component treads. Are the component treads square, triangular or circular? Is there a logo or writing on the shoe impression? Each of these shapes corresponds to a code in a simple classification, and it is these codes that are used to search the database. In fact, the AI gives a series of suggested codes for the user to verify and identifies areas of ambiguity that need checking.
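To illustrate the coding-and-search step (with entirely invented codes and database entries, since the real footwear database and its coding scheme are not reproduced here), a minimal sketch might look like this:

```python
# Minimal sketch of the coding-and-search step described above. The shape codes,
# shoe records, and scoring rule are all invented for illustration.
from collections import Counter

# Hypothetical mapping from detected tread shapes to classification codes
SHAPE_CODES = {"square": "SQ", "triangle": "TR", "circle": "CI", "logo": "LG"}

# A toy "footwear database": model name -> set of codes describing its sole pattern
DATABASE = {
    "Runner A": {"SQ", "CI"},
    "Trainer B": {"TR", "CI", "LG"},
    "Boot C": {"SQ", "TR"},
}

def suggest_models(detected_shapes):
    """Rank database entries by how many of the suggested codes they share."""
    codes = {SHAPE_CODES[s] for s in detected_shapes if s in SHAPE_CODES}
    scores = Counter({model: len(codes & pattern) for model, pattern in DATABASE.items()})
    return [model for model, score in scores.most_common() if score > 0]

# The AI suggests shapes from an impression; a human examiner verifies the output.
print(suggest_models(["circle", "triangle", "logo"]))  # ['Trainer B', ...]
```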

In one of our experiments, an occasional user was given 100 randomly selected shoe prints to analyse. Across the trial, which we ran several times, the casual user got it right between 22% and 83% of the time. In comparison, the AI was between 60% and 91% successful. Footwear experts, however, are right nearly 100% of the time.

One reason why our second neural network was not perfect and didn’t outperform real experts is that shoes vary with wear, making the task more complex. Buy a new pair of shoes and the tread is sharp and clear but after a month or two it becomes less clear. But while the AI couldn’t replace the expert trained to spot these things it did outperform occasional users, suggesting it could help free up time for the expert to focus on more difficult cases.

Will AI replace experts?

Systems like this increase the accuracy of footwear evidence, and we will probably see them used more often than they are currently – especially in intelligence-led policing that aims to link crimes and reduce the cost of domestic burglaries. In the UK alone, burglaries cost an average of £5,930 per incident in 2018, adding up to a total economic cost of £4.1 billion.

AI will never replace the skilled and experienced judgement of a well-trained footwear examiner. But it might reduce the burden on those experts, allowing them to focus on the difficult cases, by helping casual users identify the make and model of a footprint more reliably on their own. At the same time, the experts who use this AI will replace the ones who don’t.


Matthew Robert Bennett, Professor of Environmental and Geographical Sciences, Bournemouth University and Marcin Budka, Professor of Data Science, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The crowd can do as good a job spotting fake news as professional fact-checkers — if you group up enough people

New research suggests that relatively small, politically balanced groups of laymen could do a reliable job of fact-checking news for a fraction of today’s cost.

Image credits Gerd Altmann.

A study from MIT researchers reports that crowdsourced fact-checking may not actually be a bad idea. Groups of normal, everyday readers can be virtually as effective as professional fact-checkers, it explains, at assessing the veracity of news from the headline and lead sentences of an article. This approach, the team explains, could help address our current misinformation problem by increasing the number of fact-checkers available to curate content at lower prices than currently possible.

Power to the people

“One problem with fact-checking is that there is just way too much content for professional fact-checkers to be able to cover, especially within a reasonable time frame,” says Jennifer Allen, a Ph.D. student at the MIT Sloan School of Management and co-author of a newly published paper detailing the study.

Let’s face it — we’re all on social media, and we’ve all seen some blatant disinformation out there, disinformation that people were throwing likes and retweets at, just to add insult to injury. Calls to have platforms better moderate content have been raised again and again. Steering clear of the question of where exactly moderation ends and manipulation or censorship begins, one practical issue blocking such efforts is sheer work volume. There is a lot of content out in the online world, and more is published every day. By contrast, professional fact-checkers are few and far between, and the job doesn't come with particularly high praise or high pay, so not many people are lining up to become one.

With that in mind, the authors wanted to determine whether non-professional fact-checkers could help stem the flow of bad news. It turns out they can, if you lump enough of them together. According to the findings, crowdsourced judgments from relatively small, politically balanced groups of ordinary readers can be virtually as accurate as those from professional fact-checkers.

The study examined over 200 news pieces that Facebook’s algorithms flagged as requiring further scrutiny. They were flagged either due to their content, due to the speed and scale they were being shared at, or for covering topics such as health. The participants, 1,128 U.S. residents, were recruited through Amazon’s Mechanical Turk platform.

“We found it to be encouraging,” says Allen. “The average rating of a crowd of 10 to 15 people correlated as well with the fact-checkers’ judgments as the fact-checkers correlated with each other. This helps with the scalability problem because these raters were regular people without fact-checking training, and they just read the headlines and lead sentences without spending the time to do any research.”

Participants were shown the headline and lead sentence of 20 news stories and were asked to rate them along seven dimensions: how “accurate,” “true,” “reliable,” “trustworthy,” “objective,” and “unbiased” they were, and how much they “describ[ed] an event that actually happened”. These ratings were pooled together to generate an overall score for each story.

These scores were then compared to the verdicts of three professional fact-checkers, who evaluated all 207 stories involved in the study after researching each one. Although the fact-checkers' ratings were highly correlated with each other, they didn’t see eye to eye on everything — which, according to the team, is par for the course when studying fact-checking. All three fact-checkers agreed on the verdict for 49% of the stories; for 42% of the stories, two of the three agreed while the third disagreed; and for the remaining 9%, all three reached different verdicts.

When the regular reader participants were sorted into groups with equal numbers of Democrats and Republicans, the average ratings were highly correlated with those of the professional fact-checkers. When these balanced groups were expanded to include between 12 and 20 participants, their ratings were as strongly correlated with those of the fact-checkers as the fact-checkers’ were with each other. In essence, these groups matched the performance of the fact-checkers, the authors explain. Participants were also asked to take a political knowledge test and a test of their tendency to think analytically.
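As a rough sketch of this kind of comparison (with made-up numbers, not the study's data): pool each rater's seven ratings into one score per story, average those scores across a politically balanced group, and then correlate the crowd averages with the mean fact-checker ratings.

```python
# Hedged sketch of the crowd-vs-fact-checker comparison described above.
# All numbers are synthetic; the actual study's data and procedure differ.
import numpy as np

rng = np.random.default_rng(1)
n_stories, group_size, n_dimensions = 20, 15, 7

# Synthetic "true" veracity per story plus noisy ratings on a 1-7 scale
truth = rng.uniform(1, 7, size=n_stories)
crowd = np.clip(truth[:, None, None] + rng.normal(0, 1.5, (n_stories, group_size, n_dimensions)), 1, 7)
checkers = np.clip(truth[:, None] + rng.normal(0, 0.8, (n_stories, 3)), 1, 7)

crowd_score = crowd.mean(axis=(1, 2))   # one pooled score per story
checker_score = checkers.mean(axis=1)   # average of three fact-checkers

r = np.corrcoef(crowd_score, checker_score)[0, 1]
print(f"crowd vs fact-checker correlation: r = {r:.2f}")
```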

Overall, the ratings of people who were better informed about civic issues and engaged in more analytical thinking were more closely aligned with the fact-checkers.

Judging from these findings, the authors explain, crowdsourcing could allow fact-checking to be deployed on a wide scale for cheap. They estimate that the cost of having news verified in this way comes out to roughly $0.90 per story. This doesn’t mean that the system is ready to implement, or that it could fix the issue completely by itself. Mechanisms would have to be put in place to ensure that such a system can’t be tampered with by partisans, for example.

“We haven’t yet tested this in an environment where anyone can opt in,” Allen notes. “Platforms shouldn’t necessarily expect that other crowdsourcing strategies would produce equally positive results.”

“Most people don’t care about politics and don’t care enough to try to influence things,” says David Rand, a professor at MIT Sloan and senior co-author of the study. “But the concern is that if you let people rate any content they want, then the only people doing it will be the ones who want to game the system. Still, to me, a bigger concern than being swamped by zealots is the problem that no one would do it. It is a classic public goods problem: Society at large benefits from people identifying misinformation, but why should users bother to invest the time and effort to give ratings?”

The paper “Scaling up fact-checking using the wisdom of crowds” has been published in the journal Science Advances.

The next innovative material for clothes? How about muscles

We wear clothes made from unusual things all the time — you even start to wonder what a “normal” material would be. From plant fibers to plastic to stuff produced by worms, there’s no shortage of raw materials that can be used to make clothes. But researchers are constantly looking for others, with potentially even better properties.

An unusual idea is muscles — or muscle fibers, to be more precise. It sounds a bit odd, but according to a new study, such fibers could be more resilient than Kevlar, at a price that is competitive with other materials. Oh, and they're also more eco-friendly, and no animals are harmed in the process.

Would you wear clothes made from synthetic muscle protein? Image credit: Washington University in St. Louis.

Cheap, durable, scalable

A belt made from muscle sounds like something straight out of a horror movie, but thanks to the work of researchers at Washington University in St. Louis, it may become real in the not too distant future. The team used microbes to polymerize proteins which were then spun into fibers (somewhat like how silkworms produce silk, but using microbes instead of worms).

The microbes can be engineered to tweak the properties of the protein, and in this case, researchers designed fibers that can endure a lot of energy before breaking.

“Its production can be cheap and scalable. It may enable many applications that people had previously thought about, but with natural muscle fibers,” said Fuzhong Zhang, professor in the Department of Energy, Environmental & Chemical Engineering, and one of the study authors.

No actual animal tissues are needed for the process. Instead, the process starts from a protein called titin, which grants muscles passive elasticity. Adult humans have about 0.5 kg of titin in their bodies.

Titin was desirable because of its molecular size. “It’s the largest known protein in nature,” said Cameron Sargent, a Ph.D. student in the Division of Biological and Biomedical Sciences and a first author on the paper. This makes it very resilient but raises some challenges in producing it.

Surprisingly doable

As weird as it may sound, the idea is not new. In fact, researchers have been toying with the idea of using muscle protein as fibers for a long time — but gathering them from animals is unethical and challenging in many ways. So they looked for another idea.

“We wondered, ‘Why don’t we just directly make synthetic muscles?'” Zhang said. “But we’re not going to harvest them from animals, we’ll use microbes to do it.”

Getting bacteria to produce large proteins is very hard. So instead, the researchers engineered bacteria to piece together smaller parts of the protein into an ultra-sturdy structure. They ended up with a high-molecular-weight protein about 50 times larger than the average bacterial protein. Then, they used a wet-spinning process to convert the proteins into fibers about 10 times thinner than a human hair.

They opted for a fiber that is especially strong, but the process could be tweaked for other desired properties. You could make clothes that are softer or that dry more quickly; the process can be steered in any desired direction.

“The beauty of the system is that it’s really a platform that can be applied anywhere,” Sargent said. “We can take proteins from different natural contexts, then put them into this platform for polymerization and create larger, longer proteins for various material applications with a greater sustainability.”

Furthermore, because the fibers are almost indistinguishable from natural muscle, they can also be used in medical procedures, for instance for sutures and stitching up wounds. Unlike other synthetic polymers, this is also biodegradable and less polluting to the environment.

“By harnessing the biosynthetic power of microbes, this work has produced a novel high-performance material that recaptures not only the most desirable mechanical properties of natural muscle fibers (i.e., high damping capacity and rapid mechanical recovery) but also high strength and toughness, higher even than that of many manmade and natural high-performance fiber,” the researchers conclude.

So, would you wear clothes made from muscle?

The research has been published in Nature Communications.

Batman cloak-like chainmail switches from flexible to tough on command

Credit: Caltech.

Researchers at Caltech and JPL have devised a new smart material that can instantly morph from fluid and flexible to tough and rigid. The material’s configuration is inspired by chainmail armors and could potentially prove useful in exoskeletons, casts for broken limbs, and robotics.

This modern chainmail sounds mighty similar to Batman’s cloak, which drapes behind the superhero at rest but stiffens into a glider when he needs to make a fast escape. However, unlike the DC movies, the technology was initially inspired by the physics of vacuum-packed coffee.

Coffee inspiration

 “Think about coffee in a vacuum-sealed bag. When still packed, it is solid, via a process we call ‘jamming’. But as soon as you open the package, the coffee grounds are no longer jammed against each other and you can pour them as though they were a fluid,” Chiara Daraio, a professor of mechanical engineering and applied physics at Caltech, explained.

While individual coffee grounds or sand particles only jam when compressed, sheets of linked rings can jam together under both compression and tension. Starting from this idea, Daraio and colleagues experimented with a number of different configurations of linked particles and tested each using both computer simulations and 3-D printing.

Testing the impact resistance of the material when unjammed (soft). Credit: Caltech.
Testing the impact resistance of the material when jammed (rigid). Credit: Caltech.

Although it doesn’t lead to the stiffest configuration, the researchers settled on an octahedral shape for the chainmail links. The greatest stiffness is achieved with circular rings and squares, which is actually the design used in ancient armors; however, these configurations are also much heavier due to the denser stacking of the links. The octahedral configuration offers the best trade-off between stiffness and weight.

The chainmail is made from linked octahedrons. Credit: Caltech.

During one demonstration, 3-D printed polymer chainmail was compressed using a vacuum chamber or by dropping a weight onto it to control the jamming of the material. The vacuum-locked chainmail remarkably supported a load more than 50 times its own weight.

When stiffened the chainmail can support 40 times its own weight. Credit: Caltech.

“Granular materials are a beautiful example of complex systems, where simple interactions at a grain scale can lead to complex behavior structurally. In this chain mail application, the ability to carry tensile loads at the grain scale is a game changer. It’s like having a string that can carry compressive loads. The ability to simulate such complex behavior opens the door to extraordinary structural design and performance,” says José E. Andrade, the George W. Housner Professor of Civil and Mechanical Engineering and Caltech’s resident expert in the modeling of granular materials.

The modern chainmail fabrics have potential applications in smart wearable clothing. “When unjammed, they are lightweight, compliant, and comfortable to wear; after the jamming transition, they become a supportive and protective layer on the wearer’s body,” says Yifan Wang, one of the study's authors, now an assistant professor at Nanyang Technological University in Singapore.

In parallel, the researchers are working on a new design consisting of polymer strips that shrink on command when heated. These strips could be woven into the chainmail to create objects, like bridges, that fold down flat when required. Combined, the two materials could prove highly useful when incorporated into robots that can morph into different shapes and configurations.

4 Technological Trends Shaping The Post-Pandemic Future Of Business

Working remotely is also bound to be more common in the future.

For millions of people around the globe, the pandemic caused untold sorrow. In addition to the direct damage it caused, the pandemic also laid bare everything that’s wrong with the world. It exposed inequalities, weaknesses, and flaws across all industries. But then again, it also showed our capacity to adapt and reinvent new ways to get things done — both for people and for businesses.

As people turned to the Internet for almost everything (by necessity this time), businesses that were well into their digital transformation managed to survive or even thrive, while those without a strong online presence often faltered. Now, as business leaders try to predict what will work in this new reality, one thing is evident: technology and digital infrastructure will be the major players in the future of business.

This isn’t a new thing. The pressure for digital transformation has been building for a decade or so. The pandemic, however, made it an imperative — moving online wasn’t something that needed to be done at some point in the future, it was something that needed to be done now. Tools that businesses had seen as optional extras, such as video conferencing and live streaming, became crucial to their operations.

Ultimately, though, consumers' wants, needs, and preferences will dictate which technologies affect businesses. Technologies that are best suited to cater to consumers will shape the future of business. Below are just a few of them.

Increased investment in technology

Now that it has become clear that businesses' reliance on technology will be much greater, many companies will prioritize upgrades to their existing technological infrastructure. An increased budget for (and prioritization of) IT is therefore inevitable, and businesses will need an upgraded IT department to handle the other technological trends below.

However, not many private companies will be able to deploy a proper IT department with enough resources to set up technology like artificial intelligence (AI), which can cost a pretty penny — and is still not fully mature. They'd also need staff with the expertise to handle AI and its subsets.

For these companies to step up their technological strategies, it would be wise to partner with, and potentially outsource to, external IT services. They need to upgrade their IT capabilities quickly, and they also need access to IT experts, something an outsourced IT provider can supply. Various organizations across all industries have, in fact, started to follow this trend: in a 2020 report, 45% of companies worldwide stated that they would be outsourcing their IT.

Rise of digitalization

Image credits: Samuel Regan-Asante.

Another obvious sign of increasing digitalization is the rise in the adoption of software that fosters collaboration as well as customer relationship management (CRM) software. On-demand service applications on users’ devices have also emerged as among the top trends that impact businesses. People realized that they can save time by using apps like these, and they only need an Internet connection and a smartphone.

Users of Software as a Service (SaaS) like MS Teams, Google Meet, Zoom, and others have also surged, as mandated lockdowns all over the world forced many employees to work remotely. Businesses like restaurants have also adopted the use of digital menus and expanded to touchless and cashless payments.

Virtual and augmented reality

Virtual reality (VR) and augmented reality (AR) are two techs that have been around for many years; they are part of immersive technology, also known as extended reality or XR. Popularized in video games via a headset, these techs are proving to be invaluable in such sectors as healthcare, education, business, and many others.

VR can immerse you in a different environment, while AR lets you see your surroundings with an overlay of added, or ‘augmented,’ elements. A few retail businesses already use these technologies to let you ‘see’ a product, such as a piece of furniture, in your own home. Virtual shopping for clothes is easier, too: you get to try on clothes in different styles and colors virtually, without visiting the physical store.

Many businesses have also started using XR in training for various things, including customer service. The applications for these techs are endless; their potential uses guarantee that XR will be a major influence on business and other sectors in the years to come.

Rise of AI

Image credits: Franki Chamaki.

In manufacturing, AI will be a great help in designing products. It can also assist managers in deciding how products are procured and manufactured. Processing ‘Big Data’ will also be the purview of AI, helping marketers and engineers glean insights from analytics more quickly. AI and its subsets, including machine learning and natural language processing (NLP), will also be a big help in customer service.

NLP enables speech recognition, which is used by digital assistants like Siri and Alexa. Additionally, NLP lets devices develop a deeper understanding of users' words: it can be used to gauge client opinions and to monitor feedback and customer satisfaction. It can also help give more contextual answers to users' voice searches.

The bottom line

Serving the customer’s interest is a concept as old as business itself. However, the pandemic pushed customer demands onto a trajectory that gives technology, and especially the online environment, a much greater role. Businesses will have to adapt to these new realities or else go the way of the dodo. The pandemic made digital transformation a necessity, no longer just an option to be accomplished piecemeal over several years.

An upgraded IT department to handle these technological trends should be at the top of every business’s priority list. In today’s business climate, only those who can innovate and adapt can survive.

3D-printed components are now in use at US nuclear plant

At the US Department of Energy’s (DOE) Manufacturing Demonstration Facility at Oak Ridge National Laboratory (ORNL), some unusual components were assembled — and by assembled, I mean 3D-printed. Four of these channel fasteners are now in use at the Tennessee Valley Authority’s Browns Ferry Nuclear Plant Unit 2 in Athens, Alabama.

ORNL used novel additive manufacturing techniques to 3D print channel fasteners for Framatome’s boiling water reactor fuel assembly. Four components, like the one shown here, were installed at the TVA Browns Ferry nuclear plant. Credit: Framatome

Not too long ago, 3D-printing was an innovative but still new technology that promised to change the world — at some point in the future. Well, that point in the future has come. Not only is the technology mature enough to be used, but it’s mature enough to be used in a crucial system where failure is simply not acceptable.

“Deploying 3D-printed components in a reactor application is a great milestone,” said ORNL’s Ben Betzler in a recent press release. “It shows that it is possible to deliver qualified components in a highly regulated environment. This program bridges basic and applied science and technology to deliver tangible solutions that show how advanced manufacturing can transform reactor technology and components.”

“ORNL offers everything under one roof: state-of-the-art printing capabilities, world-class expertise in machining, next-generation digital manufacturing technologies, plus comprehensive characterization and testing equipment,” said Ryan Dehoff, ORNL section head for Secure and Digital Manufacturing.

The components are a good fit for the task. The channel fasteners have a relatively simple geometry, which lends itself well to additive manufacturing (the process commonly referred to as “3D printing”). Fuel channel fasteners have been used for many years in boiling water nuclear reactors: they attach the external fuel channel to the fuel assembly, helping keep the coolant contained around each fuel assembly.




Growing up

3D printing has matured dramatically in recent years, and the fact that the nuclear industry is increasingly looking towards it speaks volumes about that.

The components were developed in collaboration with the Tennessee Valley Authority, French nuclear company Framatome, and the DOE Office of Nuclear Energy, with funding from the Transformational Challenge Reactor (TCR) program based at ORNL.

Currently, the TCR program aims to further mature innovative technologies (including artificial-intelligence algorithms) and apply them to its components and projects.

“Collaborating with TVA and ORNL allows us to deploy innovative technologies and explore emerging 3D printing markets that will benefit the nuclear energy industry,” said John Strumpell, manager of North America Fuel R&D at Framatome. “This project provides the foundation for designing and manufacturing a variety of 3D-printed parts that will contribute to creating a clean energy future.”

The components have now been in place for a couple of months; operations at the Browns Ferry plant resumed on April 22, 2021. The components appear to operate as intended, and they will remain in the reactor for six years, with regular inspections over this period.

This is just one example of the projects involving 3D printing for nuclear reactors. ORNL is looking at ways to extend the viability and operation of nuclear plants, while also deploying new components that would make plants more efficient and robust.

3D printing is reshaping what’s possible with nuclear energy, and could very well have an important part to play in our transition towards a sustainable, low-carbon future. At the very least, it’s bound to make nuclear energy cheaper and more competitive with fossil fuels.

“There is a tremendous opportunity for savings,” said John Strumpell, manager of U.S. fuel research and development at Framatome, in a press release earlier this year. Indeed, 3D printing seems ready to enter the market.