In the two years that SARS‑CoV‑2 has ravaged the globe, it has caused immeasurable human loss. But we as a species have been able to create monumental solutions amidst great adversity. The latest achievement involves a standard face mask that can detect COVID-19 in your breath, essentially making the pathogen visible.
Japanese researchers at Kyoto Prefectural University have created a mask that glows in the dark if COVID-19 is detected in a person’s breath or spit. They did this by coating masks with a mixture containing ostrich antibodies that react when they contact the SARS‑CoV‑2 virus. The filters are then removed from the masks and sprayed with a chemical that makes COVID-19 (if present) viewable using a smartphone or a UV black light. The experts hope that their discovery could provide a low-cost home test to detect the virus.
Yasuhiro Tsukamoto, veterinary professor and president of Kyoto Prefectural University, explains the benefits of such a technology: “It’s a much faster and direct form of initial testing than getting a PCR test.”
Tsukamoto notes that it could help those infected with the virus but who show no symptoms and are unlikely to get tested — and with a patent application and plans to commercialize inspection kits and sell them in Japan and overseas within the next year, the test appears to have a bright future. However, this all hinges on large-scale testing of the mask filters and government approval for mass production.
Remarkably, this all came with a little help from ostriches.
The ostrich immune system is one of the most potent on Earth
To make each mask, the scientists injected inactive SARS‑CoV‑2 into female ostriches, in effect vaccinating them. Scientists then extracted antibodies from the eggs the ostriches produced, as the yolk transfers immunity to the offspring – the same way a vaccinated mother conveys disease resistance to her infant through the placenta.
An ostrich egg yolk is perfect for this job as it is nearly 24 times bigger than a chicken’s, allowing a greater number of antibodies to form. Additionally, immune cells are produced far more quickly in these birds — taking a mere six weeks, as opposed to chickens, where it takes twelve.
Because ostriches have an extremely efficient immune system, thought to be the strongest of any animal on the planet, they can rapidly produce antibodies to fight an enormous range of bacteria and viruses. A 2012 study in the Brazilian Journal of Microbiology showed they could stop Staphylococcus aureus and E. coli in their tracks, and experts predict that this bird will be instrumental in fending off epidemics in the future.
Tsukamoto himself has published numerous studies using ostrich immune cells harvested from eggs to help treat a host of health conditions, from swine flu to hair loss.
Your smartphone can image COVID-19 with this simple test
The researchers started by creating a mask filter coated with a solution of the antibodies extracted from ostriches’ eggs that react with the COVID-19 spike protein. After they had a working material, a small cohort of 32 volunteers wore the masks for eight hours before the team removed the filters and sprayed them with a chemical that caused COVID-19 to glow in the dark. Scientists repeated this for ten days. Masks worn by participants infected with the virus glowed around the nose and mouth when scientists shone a UV black light on them.
In a promising turn, the researchers found they could also use a smartphone LED light to detect the virus, which would considerably widen the scope of testing across the globe due to its ease of use. Essentially, it means that the material could be used to the fullest in a day-to-day setting without any additional equipment.
“We also succeeded in visualizing the virus antigen on the ostrich antibody-carrying filter when using the LED ultraviolet black light and the LED light of the smartphone as the light source. This makes it easy to use on the mask even at home.”
To further illustrate the practicability of the test, Tsukamoto told the Kyodo news agency he discovered he was infected with the virus after he wore one of the diagnostic masks. The diagnosis was also confirmed using a laboratory test, after which authorities quarantined him at a hotel.
Next, the team aims to expand the trial to 150 participants and develop the masks to glow automatically without special lighting. Dr. Tsukamoto concludes: “We can mass-produce antibodies from ostriches at a low cost. In the future, I want to make this into an easy testing kit that anyone can use.”
The elites of the ancient Wari empire in Peru, which ruled the highlands of the country from 600 to 1000 AD, used communal drugs and beer to maintain their political control for centuries, according to a new study. Archaeologists believe that hallucinogens from a native tree were added to beer during their massive feasts.
Previous studies have highlighted the key role that chicha, a beer-like drink still consumed today in many Andean countries, played in the culture of the Wari — a civilization that flourished in the south-central Andes and coastal area of modern-day Peru and hosted big feasts for its neighbors. Now, the discovery of a psychotropic tree in a Wari brewery suggests they combined the two intoxicants for a bigger punch.
Archaeologists from the Royal Ontario Museum made the discovery at Quilcapampa, a former Wari village in Peru, where the remains of what the residents drank and ate were preserved thanks to the arid environment. They found traces of potatoes, quinoa, and molle tree (Schinus molle), used to make chicha with a 5% alcohol content.
So far, nothing spectacular. But among the leftovers, the researchers found hallucinogenic vilca seeds from the Anadenanthera colubrina tree. Previous studies suggest the seeds were used extensively across South America. The earliest evidence, a pipe with the seeds, is from a site in Northern Argentina that dates back 4,000 years.
The Quilcapampa settlement
The Wari arrived in Quilcapampa late in the ninth century. A group of migrant families from the heartlands farther north settled in the area and likely introduced the practice of combining chicha with vilca seeds to strengthen alliances with non-Wari communities. It was a strategy to make friends and also to consolidate political power.
“Our excavations at Quilcapampa have recovered vilca seeds, which were probably imported, in direct association with large quantities of molle used to create the beer for a feast that was held just before the site was abandoned,” researchers wrote. “This was one of many such events hosted by Quilcapampa’s Wari-associated families.”
Similar to the drug ayahuasca, used by Amazonian communities, vilca results in an out-of-body experience. The seeds, bark, and other parts of the tree contain tryptamine alkaloids, including the psychedelic substance DMT. Since the effects are weakened if ingested, the Wari usually smoked the seeds or ground them into snuff, the team said.
The molle tree used to make chicha grew near the settlement. But this wasn’t the case for the vilca seeds, which had to be imported from the eastern borders of the Andes and transported over the mountains. Archaeologists found in Quilcapampa painted Wari drinking vessels that portray the vilca tree with its distinctive seed pods.
Vilca was incorporated in communal feasts hosted by the elites, the researchers argue. This helped to cement social relationships and highlight the Wari hospitality. They offered their visitors an experience that wasn’t available elsewhere and couldn’t be easily replicated, as it was too dry in the region near Quilcapampa to grow vilca.
“We argue that the addition of vilca to molle chicha was an effective method for the hosts of Wari feasts to channel its psychotropic effects into a more collective experience,” the researchers wrote. “A host who provides alcohol and food to guests reinforces patron-client relationships, forging an indebtedness that confirms heightened position.”
Football is increasingly looking like a gentrified, unequal society, a new study shows.
Football (that is, the sport that people in America tend to call soccer) has never been more popular and financially lucrative. In Europe alone, football is a multi-billion dollar industry, with top players being sold for well over 100 million dollars. The appeal of football, its supporters say, is that you never know what will happen. Underdog tales can always emerge, and just being the bigger team doesn’t guarantee success. The ball is round and anything is possible… in theory.
But according to one study, being the bigger team does make success much more likely. The new study, which used a computer model to look at football games in major European leagues over the past 26 years, found that over time, football games have become more and more predictable and the inequality in teams has become more pronounced.
“On the one hand, playing football has become a high-income profession and the players are highly motivated; on the other hand, stronger teams have higher incomes and therefore afford better players leading to an even stronger appearance in tournaments that can make the game more imbalanced and hence predictable,” the study reads.
The computer model worked on some 88,000 matches played since 1993, trying to predict whether the home or away team would win based on their performance in previous games. The home advantage, once prevalent in all areas of football, has almost vanished in all countries. It’s not clear exactly why this has happened, though it could be due to non-football reasons: transportation has improved substantially, minimizing the challenges and effort required to play away.
The computer model, researchers say, is simpler than most existing algorithms, such as the ones developed by betting houses to calculate the odds of winning. The advantage of this is that you can input much more data into it and go back further in time with the analysis, something that more sophisticated models would struggle with.
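The paper’s exact model isn’t spelled out here, but a minimal rating-based predictor in this spirit — pick the higher-rated team, then update both ratings from the actual result — can be sketched as follows. The Elo-style update rule, the k-factor, and the team names are illustrative assumptions, not details taken from the study:

```python
from collections import defaultdict

def predict_and_update(ratings, home, away, home_won, k=20.0):
    """Predict the home side as winner iff its rating is at least as high,
    then nudge both ratings toward the actual result (Elo-style update)."""
    expected_home = 1.0 / (1.0 + 10 ** ((ratings[away] - ratings[home]) / 400.0))
    prediction = ratings[home] >= ratings[away]  # True = home win predicted
    outcome = 1.0 if home_won else 0.0
    ratings[home] += k * (outcome - expected_home)
    ratings[away] -= k * (outcome - expected_home)
    return prediction

# Toy usage: every team starts at the same rating; a strong club
# beats a weak one repeatedly, and the model learns the gap.
ratings = defaultdict(lambda: 1500.0)
correct = 0
for _ in range(10):
    pred = predict_and_update(ratings, "Strong FC", "Weak FC", home_won=True)
    correct += int(pred)
```

A model this simple is cheap to run over decades of results, which is the trade-off the researchers describe: less sophistication per match, but far more historical coverage.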
So how much more predictable have matches become? For instance, the model could correctly predict the winner of a Bundesliga game (the top German league) with 60% success in 1993 — in 2019, the figure had grown to 80%. Overall, the model was able to predict results correctly roughly 75% of the time in 2019. Researchers stress that this is not because there was more data to train the models, but it is because indeed, football has become more predictable.
Football as a gentrified society
Initially, this came as a surprise.
Researchers were expecting that more money and higher stakes would make the game more competitive, but this doesn’t seem to be the case. Instead, as the leagues mature, they resemble a gentrified society, with the underlying inequality bringing more and more predictability. In particular, researchers found that the points in a given season were distributed among teams much less evenly. They plotted this point distribution in a similar way to how economists plot income or wealth disparity between members of society — using the Gini coefficient. While there were some exceptional years, in general, leagues are becoming more and more unequal, with the top clubs gathering more points year after year. This echoes the notion that “the rich get richer and the poor get poorer”, the researchers write.
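As a rough illustration of the measure involved, here is a minimal sketch of the Gini coefficient applied to a season’s points table. The point totals below are made up for demonstration, not data from the study:

```python
def gini(points):
    """Gini coefficient of a season's points distribution:
    0 means a perfectly even league; values approaching 1 mean
    points are concentrated in a handful of clubs."""
    pts = sorted(points)
    n = len(pts)
    total = sum(pts)
    # Standard formulation over sorted values: sum of (2i - n - 1) * x_i
    weighted = sum((2 * (i + 1) - n - 1) * p for i, p in enumerate(pts))
    return weighted / (n * total)

balanced = [50] * 20                   # every club on equal points
top_heavy = [90, 85, 80] + [30] * 17   # a few clubs dominate the table
```

Tracking this single number season by season is how a trend like “the league is becoming more unequal” can be made quantitative.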
“It seems football as a sport is emulating society in its somewhat ‘gentrification’ process, i.e. the richer leagues are becoming more deterministic because better teams win more often; consequently, becoming richer; allowing themselves to hire better players (from a talent pool that gets internationally broader each year); becoming even stronger; and, closing the feedback cycle, winning even more matches and tournaments; hence more predictability in more professional and expensive leagues,” the study reads.
When this growing inequality is coupled with the disappearance of the home-field advantage, a plausible theory emerges regarding the growing predictability of football. Decades ago, the home advantage granted weaker teams playing at home a boost, making it more likely that they can win even against stronger teams — at least once in a while. Now, it seems that stronger teams simply win more, regardless of whether it’s home or away.
However, the researchers emphasize that they did not investigate the direct cause for football’s growing predictability.
When Republican Representative Jim Jordan attended a congressional hearing in 2020, he made it clear why he disliked companies like Twitter.
“Big Tech is out to get conservatives,” Jordan proclaimed. “That’s not a suspicion. That’s not a hunch. It’s a fact. I said that two months ago at our last hearing. It’s every bit as true today.”
Jordan’s claim isn’t isolated. Led by former President Trump, a growing number of right-leaning voices are claiming that social media is biased in favor of liberals and progressives, shutting down conservatives. But an internal study released by Twitter shows that the opposite is true — in the US, as well as most countries that were analyzed, it’s actually conservative voices that are amplified more than liberal voices.
“Our results reveal a remarkably consistent trend: In 6 out of 7 countries studied [including the US], the mainstream political right enjoys higher algorithmic amplification than the mainstream political left. Consistent with this overall trend, our second set of findings studying the U.S. media landscape revealed that algorithmic amplification favours right-leaning news sources,” Twitter’s study reads.
Algorithmic amplification refers to how much a story is ‘amplified’ by Twitter’s algorithm — in other words, how much more likely the algorithm is to show it to other users.
The study has two main parts. The first one focused on the US and analyzed whether media outlets were more likely to be amplified if they were politicized, while the other focused on tweets from politicians from seven countries.
Twitter analyzed millions of tweets posted between April 1st and August 15th, 2020. The tweets were selected from news outlets and elected officials in 7 countries: Canada, France, Germany, Japan, Spain, the UK, and the US. In all countries except Germany, tweets from right-leaning accounts “receive more algorithmic amplification than the political left.” In general, right-leaning content from news outlets seemed to benefit from the same bias. In other words, users on Twitter are more likely to see right-leaning content rather than left-leaning, all things being equal. In the UK, for instance, the right-leaning Conservatives enjoyed an amplification rate of 176%, compared to 112% for the left-leaning Labour party.
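The study reports amplification as a percentage, and its precise definition isn’t reproduced in this article. As a rough illustration only, one simple reading — the percentage increase in impressions under the algorithmic timeline relative to a reverse-chronological baseline — can be sketched like this (the impression counts are hypothetical, chosen to reproduce the percentages quoted above):

```python
def amplification_pct(algo_reach, chrono_reach):
    """Percentage increase in impressions under the personalized
    (algorithmic) timeline relative to a reverse-chronological
    baseline. Illustrative definition, not the study's exact metric."""
    return 100.0 * (algo_reach - chrono_reach) / chrono_reach

# Hypothetical impression counts, not figures from the study:
conservative = amplification_pct(2760, 1000)  # mirrors the quoted 176%
labour = amplification_pct(2120, 1000)        # mirrors the quoted 112%
```

Whatever the exact formula, the key comparison is relative: under the same baseline, right-leaning accounts' content reached a larger extra audience than left-leaning accounts' content in six of the seven countries.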
However, Twitter emphasizes that its algorithm doesn’t favor extreme content from either side of the political spectrum.
“We further looked at whether algorithms amplify far-left and far-right political groups more than moderate ones: contrary to prevailing public belief, we did not find evidence to support this hypothesis. We hope our findings will contribute to an evidence-based debate on the role personalization algorithms play in shaping political content consumption,” the study reads.
While it is clear that politicized content is amplified on Twitter, it’s not entirely clear why this happens. However, this seems to be connected to a phenomenon present on all social media platforms. Algorithms are designed to promote intense conversations and debate — and a side effect of this is that controversy is often boosted. Simply put, if a US Democrat says something about a Republican (or vice versa), this is likely to draw both praise and criticism, and is likely to be promoted and boosted by the algorithm.
Although Twitter did not focus on this directly, the phenomenon is also key to disinformation, which we’ve seen a lot of during the pandemic. For instance, if a conspiracy theory is posted on Twitter, there’s a good chance it will gather both the approval of those who believe it and the criticism of those who see through it — which makes it more likely to be further amplified on social media.
Ultimately, in addition to contradicting a popular conspiracy theory that social media is against conservatives, the study shows just how much social media algorithms can shape and sway public opinion, by presenting some posts instead of others. Twitter’s study is an encouraging first step towards more transparency, but it’s a baby step when we’re looking at a very long race ahead of us.
Today, iron is the most widely-used metal. It’s durable, versatile, and abundant in the Earth’s crust (making it cheap). However, in its pure form, iron is very susceptible to oxygen — it rusts rapidly. Stainless steel provides a solution to this problem.
The world as we know it wouldn’t be possible without stainless steel. It’s a material that combines strength, flexibility, and durability at an affordable cost. This alloy makes an appearance in everything, from high-rises and high-performance cars to spoons and baby monitors. To quite a large extent, our world is built on stainless steel. So let’s learn more about it.
What is stainless steel?
‘Stainless steel’ is a generic umbrella term that denotes a wide range of metal alloys — cocktails of metals — based on iron. Like all other types of steel, it also contains carbon.
It has excellent resistance to corrosion (oxidation or rusting), is relatively non-reactive with most chemicals, has high durability, and good hygienic properties. It sees wide use today in products ranging from cutlery to medical devices to construction materials.
Aesthetically, stainless steel is a lustrous, silvery metal that can take a very high polish. From a practical point of view, stainless steel is a strong and highly resilient material; its exact properties depend on the composition of the alloy, but it can be tailored to suit a wide range of needs, having the potential to be highly flexible, resistant to scratching, mechanically tough, or any other property needed in a certain application. It can be recycled practically forever, as its recovery rate during recycling is close to 100%.
Although it is more difficult and expensive to produce than iron metal, stainless steel has practically replaced iron in all except the most specialized cases due to the advantages it holds over the un-alloyed metal. Almost all ‘iron’ products you’ve encountered in your life were made from stainless steel.
What is stainless steel made of?
Stainless steel differs from other types of steel through the addition of a handful of elements to the mix. The exact elements added vary with alloy type but, as a rule of thumb, stainless steels contain chromium (Cr), in quantities ranging from 10.5 to 30% by weight.
Chromium is what gives these alloys their high resistance to corrosion. As it interacts with corrosive agents in the air, chromium forms a passive layer — a ‘film’ of chromium oxide — on the metal’s surface which protects the alloy. Oxygen and moisture cannot penetrate this film, so it protects the iron throughout the body of the steel from rusting.
Other elements that are added to stainless steel include non-metals such as sulphur, silicon, or nitrogen, metals such as nickel, aluminium, copper, or more exotic metals such as selenium, niobium, and molybdenum. Although the exact composition of the alloy is decided based on its desired properties — each element added in, and their proportion, changes the characteristics of the alloy — some of the most commonly-seen extra elements in stainless steel alloys are nickel and nitrogen. These improve its hardiness and ability to resist corrosion in certain conditions, but also increase its price per pound.
There are currently over 100 types (known as ‘grades’) of stainless steel being produced and used, each with its own ISO number, many of them for specialized applications. The five most common types are known as ‘austenitic’, ‘ferritic’, ‘martensitic’, ‘duplex’, and ‘precipitation hardening’ steels.
Austenitic stainless steels are the most widely used grade. They have very good resistance to corrosion and heat, offering good mechanical properties over a wide range of temperatures. They’re used in household goods, industrial applications, in construction, and in decorations.
Ferritic stainless steels have lower mechanical resistance — they resemble mild steels in strength — but are better able to resist corrosion and heat, and are harder to crack. Any washing machine or boiler you have at home is probably made of ferritic steel.
Martensitic stainless steels are much harder and stronger than their peers, but they’re not as able to withstand corrosion. This is the type of steel that makes high-grade knives, and is also used for turbine blades.
Duplex stainless steels are a mixture of austenitic and ferritic steels, and their properties are, similarly, a middle ground between these two grades. As a rule of thumb, they are used in applications where both strength and flexibility are required, and corrosion resistance is a plus; shipbuilding is a prime example.
Precipitation hardening stainless steels are a subclass of alloys, somewhere in the overlap between martensitic and austenitic steels. They offer the best mechanical properties of the lot (they have very high material strength), due to the addition of elements such as aluminium, copper, and niobium.
What is stainless steel used for?
With a material as versatile as stainless steel, it’s hard to cover all its uses in any detail. Suffice to say, it’s used in virtually all goods and applications where strength, flexibility, good looks, and hygiene are required, for relatively low cost, and weight is not a huge concern.
Household goods and appliances make heavy use of stainless steel, especially kitchenware or other products meant to come into contact with water. Knives and cutlery, home appliances such as washing machines, bathroom fixtures, piping, cookware make use of stainless steel due to its resistance to corrosion, its good looks, ease of washing, and high durability. Various grades of stainless steel are used depending on the intended role and usage of each product.
Medical tools also make ample use of stainless steel. Things like surgical and dental instruments, scissors, trays, and a wide range of other medical-use objects are made from this alloy. Here, it is the chemical inertness and corrosion resistance of stainless steel that is most important. Medical devices also contain stainless steel, in particular structural elements and coverings, due to their strength and ease of cleaning. Medical implants, such as those used in knee or hip replacement surgery, are also made of stainless steel.
Stainless steel is also used in the construction of vehicles, mostly ships, trains, and cars. Aircraft manufacturers tend to prefer aluminium alloys, as they are more lightweight. That being said, stainless steel is essential in the production of aircraft frames and various structural elements of the landing gear. For all vehicles, however, stainless steel combines good mechanical properties with high longevity (due to its resistance to corrosion), making for durable and long-lived parts.
Construction and architecture are two further domains that love stainless steel. The combination of strength and high chemical inertness makes this alloy ideal for structural elements in buildings such as skyscrapers, or in exposed elements, such as fire escapes or service ladders.
Jewelry manufacturers also employ stainless steel in their products, where it’s preferred due to its hypoallergenic properties (it doesn’t trigger metal allergies).
Stainless steel, today, is an indispensable alloy. Our societies depend heavily on it, using it in everything from tiny knickknacks around the house to the mightiest skyscraper. Its unique combination of strength, longevity, and relatively low cost makes it so that, most likely, stainless steel won’t be replaced anytime soon.
Imagine a swarm of insect-sized robots capable of recording criminals for the authorities undetected or searching for survivors caught in the ruins of unstable buildings. Researchers worldwide have been quietly working toward this but have been unable to power these miniature machines — until now.
Engineers from MIT have developed powerful micro-drones that can zip around with bug-like agility, which could eventually perform these tasks. Their paper in the journal Advanced Materials describes a new form of synthetic muscle (known as an actuator) that converts energy sources into motion to power these devices and enable them to move around. Their new fabrication technique produces artificial muscles that dramatically extend the lifespan of the microbot while increasing its performance and the amount it can carry.
In an interview with Tech Xplore, Dr. Kevin Chen, senior author of the paper, explained that they have big plans for this type of robot:
“Our group has a long-term vision of creating a swarm of insect-like robots that can perform complex tasks such as assisted pollination and collective search-and-rescue. Since three years ago, we have been working on developing aerial robots that are driven by muscle-like soft actuators.”
Soft artificial muscles contract like the real thing
Your run-of-the-mill drone flies using rigid actuators, which can handle the higher voltage and power needed for flight, but robots at this miniature scale couldn’t carry the heavy power supply rigid actuators require. So-called ‘soft’ actuators are a far better solution, as they’re far lighter than their rigid counterparts.
In their previous research, the team engineered microbots that could perform acrobatic movements mid-air and quickly recover after colliding with objects. But despite these promising results, the soft actuators underpinning these systems required more electricity than could be supplied, meaning an external power supply had to be used to propel the devices.
“To fly without wires, the soft actuator needs to operate at a lower voltage,” Chen explained. “Therefore, the main goal of our recent study was to reduce the operating voltage.”
In this case, the device would need a soft actuator with a large surface area to produce enough power. However, it would also need to be lightweight so a micromachine could lift it.
To achieve this, the group opted for soft dielectric elastomer actuators (DEAs) made from layers of a flexible, rubber-like solid known as an elastomer, whose polymer chains are held together by relatively weak bonds – permitting it to stretch under stress.
The DEAs used in the study consist of a long piece of elastomer that is only 10 micrometers thick (roughly the same diameter as a red blood cell) sandwiched between a pair of electrodes. These, in turn, are wound into a 20-layered ‘tootsie roll’ to expand the surface area and create a ‘power-dense’ muscle that deforms when a current is applied, similar to how human and animal muscles contract. In this case, the contraction causes the microbot’s wings to flap rapidly.
A microbot that acts and senses like an insect
The result is an artificial muscle that forms the compact body of a robust microrobot that can carry nearly three times its weight (despite weighing less than one-quarter of a penny). Most notably, it can operate with 75% lower voltage than other versions while carrying 80% more payload.
They also demonstrated a 20-second hovering flight, which Chen says is the longest recorded by a sub-gram robot, with the actuator still working smoothly after 2 million cycles – far outpacing the lifespan of other models.
“This small actuator oscillates 400 times every second, and its motion drives a pair of flapping wings, which generate lift force and allow the robot to fly,” Chen said. “Compared to other small flying robots, our soft robot has the unique advantage of being robust and agile. It can collide with obstacles during flight and recover and it can make a 360 degree turn within 0.16 seconds.”
The DEA-based design introduced by the team could soon pave the way for microbots that fly untethered, powered by onboard batteries. For example, it could inspire the creation of functional robots that blend into our environment and everyday lives, including those that mimic dragonflies or hummingbirds.
The researchers add:
“We further demonstrated open-loop takeoff, passively stable ascending flight, and closed-loop hovering flights in these robots. Not only are they resilient against collisions with nearby obstacles, they can also sense these impact events. This work shows soft robots can be agile, robust, and controllable, which are important for developing next generation of soft robots for diverse applications such as environmental exploration and manipulation.”
And while they’re thrilled about producing workable flying microbots, they hope to reduce the DEA thickness to only 1 micrometer, which would open the door to many more applications for these insect-sized robots.
If one in 10 cold infections is caused by coronaviruses, then antibodies produced during these illnesses could surely give a bit more protection against COVID-19, right? A new study has just provided the answer to this question by showing that immunity induced by colds can indeed help fight off the far more dangerous novel coronavirus.
A study from Imperial College London followed people exposed to SARS-CoV-2, the virus that causes COVID-19, and found that only half of the participants became infected, while the others tested negative. Before this, researchers took blood samples from all volunteers within days of exposure to determine the levels of an immune cell known as a T cell – cells programmed by previous infections to attack specific invaders.
Results show that participants who didn’t test positive had significantly higher levels of these cells; in other words, those who evaded infection had higher levels of T cells that attack the Covid virus internally to provide immunity — T cells that may have come from previous coronavirus infections (not SARS-CoV-2). These findings, published in the journal Nature Communications, may pave the way for a new type of vaccine to prevent infection from emerging variants, including Omicron.
Dr. Rhia Kundu, the first author of the paper from Imperial’s National Heart & Lung Institute, says: “Being exposed to the SARS-CoV-2 virus doesn’t always result in infection, and we’ve been keen to understand why. We found that high levels of pre-existing T cells, created by the body when infected with other human coronaviruses like the common cold, can protect against COVID-19 infection.” Despite this promising data, she warns: “While this is an important discovery, it is only one form of protection, and I would stress that no one should rely on this alone. Instead, the best way to protect yourself against COVID-19 is to be fully vaccinated, including getting your booster dose.”
The common cold’s role in protecting you against Covid
The study followed 52 unvaccinated people living with someone who had a laboratory-confirmed case of COVID-19. Participants were tested seven days after being exposed to see if they had caught the disease from their housemates and to analyze their levels of pre-existing T cells. Tests indicated that the 26 people who tested negative for COVID-19 had significantly higher levels of common cold T cells than the people who tested positive. Remarkably, these cells targeted internal proteins within the SARS-CoV-2 virus, rather than the spike protein on its surface, providing ‘cross-reactive’ immunity between a cold and COVID-19.
Professor Ajit Lalvani, senior author of the study and Director of the NIHR Respiratory Infections Health Protection Research Unit at Imperial, explained:
“Our study provides the clearest evidence to date that T cells induced by common cold coronaviruses play a protective role against SARS-CoV-2 infection. These T cells provide protection by attacking proteins within the virus, rather than the spike protein on its surface.”
However, experts not involved in the study caution against presuming that anyone who has previously had a cold caused by a coronavirus will not catch the novel coronavirus. They add that although the study provides valuable data on how the immune system fights this virus, it is unlikely that none of the 150,000 people who have died of SARS-CoV-2 in the UK to date had ever had such an illness.
Other studies uncovering a similar link have also warned that the cross-reactive protection gained from colds lasts only a short period.
The road to longer-lasting vaccines
Current SARS-CoV-2 vaccines work by training the immune system to recognize the spike protein on the virus’s outer shell: this, in turn, triggers an immune reaction that stops the virus from attaching to cells and infecting them. However, this response wanes over time as the virus continues to mutate. Luckily, the jabs also trigger T cell immunity, which lasts much longer and helps prevent the infection from worsening into hospitalization or death. But this immunity is also based on blocking the spike protein – it would therefore be advantageous to have a vaccine that could attack other parts of the COVID virus.
Professor Lalvani summarizes: “The spike protein is under intense immune pressure from vaccine-induced antibodies which drives the evolution of vaccine escape mutants. In contrast, the internal proteins targeted by the protective T cells we identified mutate much less. Consequently, they are highly conserved between the SARS-CoV-2 variants, including Omicron.” He concludes: “New vaccines that include these conserved, internal proteins would therefore induce broadly protective T cell responses that should protect against current and future SARS-CoV-2 variants.”
Scientists have identified a previously unknown mutant strain in a fully vaccinated person who tested positive after returning from a short three-day trip to Cameroon.
Academics based at the IHU Mediterranee Infection in Marseille, France, discovered the new variant on December 10. So far, the variant doesn’t appear to be spreading rapidly and the World Health Organization has not yet labeled it a variant of concern. Nevertheless, researchers are still describing and keeping an eye on it.
The discovery of the B.1.640.2 variant, dubbed IHU, was announced on the preprint server medRxiv, in a paper still awaiting peer review. Results show that IHU’s spike protein, the part of the virus responsible for invading host cells, carries the E484K mutation, which increases vaccine resistance. The genomic sequencing also revealed the N501Y mutation — first seen in the Alpha variant — that experts believe can make COVID-19 more transmissible.
In the paper, the clinicians highlight that it’s important to keep up our guard and expect more surprises from the virus: “These observations show once again the unpredictability of the emergence of new SARS-CoV-2 variants and their introduction from abroad,” they write. For comparison, Omicron (B.1.1.529) carries around 50 mutations and appears to be better at infecting people who already have a level of immunity. Thankfully, a growing body of research suggests it is also less likely to trigger severe symptoms.
Like many countries in Europe, France is experiencing a surge in the number of cases due to the Omicron variant.
Experts insist that IHU, which predates Omicron but has yet to cause widespread harm, should not cause concern – predicting that it may fade into the background. In an interview with the Daily Mail, Dr. Thomas Peacock, a virologist at Imperial College London, said the mutation had “a decent chance to cause trouble but never really materialized. So it is definitely not one worth worrying about too much at the moment.”
The strain was first uploaded to a variant tracking database on November 4, more than two weeks before Omicron was sequenced. For comparison, French authorities are now reporting over 300,000 new cases a day, thought to be mostly Omicron, while the researchers have identified only 12 cases of IHU over the same period.
On the whole, France has good surveillance for COVID-19 variants, meaning health professionals quickly pinpoint any new mutant strains; Britain, by contrast, checks only three in ten cases for variants. The paper’s authors state that the emergence of the new variant emphasizes the importance of regular “genomic surveillance” on a countrywide scale.
American farm equipment manufacturer John Deere has teamed up with French agricultural robot start-up Naio to create a driverless tractor that can plow fields by itself while being supervised by farmers through a smartphone.
There are more people alive in the world today than ever before, and not very many of us want to work the land. A shortage of laborers is not the only issue plaguing today’s farms, however: climate change, and the need to limit our environmental impact, are further straining our ability to produce enough food to go around.
In a bid to address at least one of these problems, John Deere and Naio have developed a self-driving tractor that can get fields ready for crops on its own. It combines John Deere’s 8R tractor with a plow, a GPS suite, and 360-degree cameras, and a farmer can control it remotely from a smartphone.
The machine was shown off at the Consumer Electronics Show in Las Vegas, an event that began last Wednesday. According to a presentation held at the event, the tractor only needs to be driven into the field, after which the operator can send it on its way with a simple swipe on their smartphone.
The tractor is equipped with an impressive sensor suite — six pairs of cameras, able to fully perceive the machine’s surroundings — and is run by artificial intelligence. These systems work together to check the tractor’s position at all times with a high level of accuracy (within an inch, according to the presentation) and keep an eye out for obstacles. If an obstacle is detected, the tractor stops and sends a warning to its user.
John Deere Chief Technology Officer Jahmy Hindman told AFP that the autonomous plowing tractor will be available in North America this year, although no price has yet been specified.
While the tractor, so far, can only plow by itself, the duo of companies plan to expand into more complicated processes — such as versions that can seed or fertilize fields — in the future. However, they add that combine harvesters are more difficult to automate, and there is no word yet on a release date for such vehicles.
However, with other farm equipment manufacturers (such as New Holland and Kubota) working on similar projects, they can’t be far off.
“The customers are probably more ready for autonomy in agriculture than just about anywhere else because they’ve been exposed to really sophisticated and high levels of automation for a very long time,” Hindman said.
Given their price and relative novelty, automated farming vehicles will most likely first be used for specialized, expensive, and labor-intensive crops. It may be a while before we see them working vast cereal crop fields, but they will definitely get there, eventually.
There is hope that, by automating the most labor-intensive and unpleasant jobs on the farm, such as weeding and crop monitoring, automation can help boost yields without increasing costs, while also reducing the need for mass use of pesticides or fungicides — which would reduce the environmental impact of the agricultural sector, while also making for healthier food on our tables.
School is an institution that is hated (especially during exams) by millions of kids around the world — but at the same time billions of adults remember it as the ‘good old days’. For all its good and bad, society as we know it couldn’t exist without schools — and we’re not just talking about the building, we’re talking about the entire system and environment that allows us to pass knowledge to younger generations and prepare them for what’s to come in the real world (at least in theory). But who actually invented school?
From old school to modern schooling system
Ironically enough, for all the information you can find in schools, no textbook mentions exactly when and how the idea of a school originated. This is mostly because it depends on how exactly you define a school. For instance, in ancient Greece, education was somewhat democratized, and education in a gymnasium school was considered essential for participation in Greek culture, but it was reserved only for boys (and often, not all boys). In ancient Rome, rich children were tutored by private professors. Neither of these is a school in the sense we consider today — public, formal education that is compulsory, open, and available to all — though you could argue that in some sense, school dates from ancient times, and the organized practice of teaching children goes back thousands of years.
Compulsory education was also not an unheard-of concept in ancient times – though it was mostly compulsory for those tied to royal, religious, or military organizations. In fact, Plato’s landmark The Republic, written more than 2,300 years ago, argues in favor of compulsory education, though women and slaves were not truly a part of Greek society.
Much information about schooling is also lost to the shroud of time. For instance, there is some indirect evidence about schools in China existing at least 3,000 years ago, but this comes from “oracle bones” where parents would try to divine whether it was auspicious for their children to go to ‘school’ — and there’s little information about what these schools were like.
It’s not just the Chinese, Greeks, and Romans. The Hindus, for instance, had developed their own schooling system in the form of gurukuls. In 425 AD, the Byzantine Empire set up the world’s first known primary education system, dedicated to educating soldiers enrolled in the Byzantine army so that everyone in the army could communicate and understand war manuals. Different parts of the world developed different types of education — some more efficient than others.
In Western Europe (and England, in particular), the church became involved in public education early on, and a significant number of church schools were founded in the Early Middle Ages. The oldest school still in operation (and continuously operating since its founding) is The King’s School in Canterbury, which dates from the year 597. Several other schools still in operation were founded in the 6th century — though again, you could argue whether they were true schools, as they were only open to boys.
Furthermore, compared to modern schools, education in the above-mentioned institutes focused mostly on religious teachings, language, and low-level or practical skills. Many of them operated in a single room with no set standards or curriculum, but as humanity progressed, people started to realize the need for an organized system to educate future generations.
For more than ten centuries, schools maintained the same general profile, focused mostly on a niche set of skills and religious training. In the 9th century, the first university was founded in Fez, Morocco. However, that too was founded as a mosque and focused on religious teachings. The oldest university still in operation, the University of Bologna, in Italy, was founded in 1088. It hired scholars from the city’s pre-existing educational facilities and gave lectures in informal schools called scholae. In addition to religion, the university also taught liberal arts, notarial law, and scrivenery (official writing). The university is notable for also teaching civil law.
However, the university is not necessarily the same as a school — it wasn’t a public “for all” education system, but rather a “school” for the intellectual elite. For schools to truly emerge as we know them today, we have to fast forward a few more centuries.
Compulsory, free education for all
In 1592, the German duchy of Palatinate-Zweibrücken became the first territory in the world with compulsory education for girls and boys — a remarkable and often-ignored achievement in the history of education. The duchy was followed in 1598 by Strasbourg, then a free city of the Holy Roman Empire and now part of France. Similar attempts emerged a few decades later in Scotland, although this compulsory education was subject to political and social turmoil.
In the United States — or rather, in the colonies that would later become the United States — three legislative acts enacted in the Massachusetts Bay Colony in 1642, 1647, and 1648 required every town with more than 50 families to hire a teacher, and every town with more than 100 families to establish a school.
Prussia, a prominent German state, implemented a compulsory education system in 1763 by royal decree. The Prussian General School Regulation asked for all young citizens, girls and boys, to be educated from age 5 to age 13-14 and to be provided with a basic education on religion, singing, reading, and writing based on a regulated, state-provided curriculum of textbooks. To support this financially, the teachers (often former soldiers) cultivated silkworms to make a living. In nearby Austria, Empress Maria Theresa introduced mandatory primary education in 1774 — and mandatory, systemized education was starting to take shape in Europe. Schools, as we know them today, were becoming a thing.
Meanwhile, the US was having its own educational revolution.
In 1837, lawyer and educator Horace Mann became the Secretary of the Massachusetts Board of Education. Mann was a supporter of public schooling, and he believed that without a well-educated population, political stability and social harmony could not be achieved. So he put forward the idea of a universal public education system for teaching American kids. Mann wanted a system with a set curriculum taught to students in an organized manner by well-trained subject experts.
Without undervaluing any other human agency, it may be safely affirmed that the Common School…may become the most effective and benignant of all forces of civilization.
Horace Mann, Father of the Common School Movement
Mann implemented his “normal school” system in Massachusetts, and other US states later adopted the education reforms he envisioned. He also managed to convince his colleagues and other modernizers to support his idea of providing government-funded primary education for all.
Due to his efforts, Massachusetts became the first American state to pass a mandatory education law, in 1852. School attendance and elementary education were subsequently made compulsory in other states (by 1917, mandatory education laws had been enacted in every US state), teacher training programs were launched, and new public schools were opened in rural areas.
At a time when women were not even allowed to attend school in many parts of the world, Mann advocated the appointment of women as teachers in public schools. Instead of offering religious instruction, Mann’s normal schools aimed to teach reading, writing, grammar, arithmetic, geography, and history. He believed that school education should not incorporate sectarian instruction, and for this very reason some religious leaders and schoolmasters criticized Mann for promoting non-sectarian education.
The innovative ideas and reforms introduced by Mann in the 1800s became the foundation of our modern school system. For his valuable contribution in the field of education, historians sometimes credit him as the inventor of the modern school system.
However, as we’ve seen, the history of schools is intricate, complex, and very rich. There is no one “inventor” of school — the process of arriving at the school systems we have today (imperfect as they may be) took thousands of years of progress, which was not always straightforward.
Shocking facts about school education
Now that we’ve looked a bit at the history of the school, let’s see how things are today — and why there’s still plenty of work to be done in schools around the world.
A study conducted by the Institute of Education in the UK suggests that the quality of primary education is more crucial for an individual’s academic progress, social behavior, and intellectual development than factors such as family income, background, and gender. Another study highlights that students who receive a good elementary education and have a positive attitude about the significance of their performance in primary and middle school are more likely to earn well and live a better life in the future.
A UNESCO report reveals that school education up to nine years of age is compulsory in 155 countries but unfortunately, there are more than 250 million children in the world who are still not able to attend school.
According to the International Labour Organization (ILO), due to poverty and lack of educational opportunities, 160 million kids are forced into work across the globe, and about 80 million of them work in unhealthy environments. Thousands of such kids are physically and sexually abused, tortured, and even trained to work for drug mafias, criminal groups, and terrorist organizations. Some studies reveal that child labor is also associated with school dropout in less developed countries. Due to poor financial conditions, many young people start prioritizing economic activities and lose interest in costly education opportunities. However, an easily accessible and high-quality school education model that allows children from poor families to pursue education without compromising their financial security can play an important role in eliminating child labor.
The African nation of South Sudan has the lowest literacy rate in the world: only 8% of females in the country are literate, and overall only 27% of its adult population is literate. 98% of the schools that offer elementary education in South Sudan do not have an electric power supply, and only one-third of such schools have access to safe drinking water.
City Montessori School (CMS), located in Lucknow, India, is hailed as the largest school in the world. The CMS campus houses 1,050 classrooms in which more than 50,000 students attend classes every day.
For Horace Mann, schools were a means to produce good citizens, uphold democratic values and ensure the well-being of society. Though not all schools are able to achieve these goals, the power of school education can be well understood from what famous French poet Victor Hugo once said, “He who opens a school door, closes a prison”.
Have you ever chatted with a friend about buying a certain item and been targeted with an ad for that same item the next day? If so, you may have wondered whether your smartphone was “listening” to you.
But is it really? Well, it’s no coincidence the item you’d been interested in was the same one you were targeted with.
But that doesn’t mean your device is actually listening to your conversations — it doesn’t need to. There’s a good chance you’re already giving it all the information it needs.
Can phones hear?
Most of us regularly disclose our information to a wide range of websites and apps. We do this when we grant them certain permissions, or allow “cookies” to track our online activities.
So-called “first-party cookies” allow websites to “remember” certain details about our interaction with the site. For instance, login cookies let you save your login details so you don’t have to re-enter them each time.
Third-party cookies, however, are created by domains that are external to the site you’re visiting. The third party will often be a marketing company in a partnership with the first-party website or app.
The latter will host the marketer’s ads and grant it access to data it collects from you (which you will have given it permission to do — perhaps by clicking on some innocuous looking popup).
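The mechanics are simple enough to sketch in a few lines of code. The toy simulation below (all domain names and identifiers are invented for illustration) shows how a single third-party cookie lets a tracker join your visits to unrelated sites into one profile:

```python
# Toy simulation of third-party cookie tracking across unrelated sites.
# Names like "tracker.example" are purely illustrative.

class ThirdPartyTracker:
    """Stands in for an ad domain embedded on many first-party sites."""
    def __init__(self):
        self.next_id = 0
        self.profiles = {}   # tracker cookie -> list of sites visited

    def serve_ad(self, browser, site):
        # If the browser has no cookie for this tracker yet, set one.
        cookie = browser.cookies.get("tracker.example")
        if cookie is None:
            cookie = f"uid-{self.next_id}"
            self.next_id += 1
            browser.cookies["tracker.example"] = cookie
        # The same cookie arrives no matter which site embeds the ad,
        # so separate visits can be joined into one profile.
        self.profiles.setdefault(cookie, []).append(site)

class Browser:
    def __init__(self):
        self.cookies = {}

tracker = ThirdPartyTracker()
you = Browser()
for site in ["news.example", "shop.example", "travel.example"]:
    tracker.serve_ad(you, site)

print(tracker.profiles)
# {'uid-0': ['news.example', 'shop.example', 'travel.example']}
```

Even though the three sites never talk to each other, the tracker ends up with a single profile linking all three visits — which is exactly the raw material an advertiser needs.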
As such, the advertiser can build a picture of your life: your routines, wants and needs. These companies constantly seek to gauge the popularity of their products and how this varies based on factors such as a customer’s age, gender, height, weight, job and hobbies.
By classifying and clustering this information, advertisers improve their recommendation algorithms, using something called recommender systems to target the right customers with the right ads.
Computers work behind the scenes
There are several machine-learning techniques in artificial intelligence (AI) that help systems filter and analyse your data, such as data clustering, classification, association and reinforcement learning (RL).
An RL agent can train itself based on feedback gained from user interactions, akin to how a young child will learn to repeat an action if it leads to a reward.
By viewing or pressing “like” on a social media post, you send a reward signal to an RL agent confirming you’re attracted to the post — or perhaps interested in the person who posted it. Either way, a message is sent to the RL agent about your personal interests and preferences.
If you start actively liking posts about “mindfulness” on a social platform, its system will learn to send you advertisements for companies that can offer related products and content.
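In spirit, this feedback loop resembles a simple multi-armed bandit. The sketch below is a deliberately simplified illustration, not any platform's actual system; the topics, reward values, and epsilon-greedy rule are all invented for the example:

```python
# Minimal reinforcement-style ad selection: "likes" act as rewards,
# and an epsilon-greedy agent learns which topic to show.
import random

random.seed(0)  # fixed seed so the run is reproducible

topics = ["mindfulness", "gaming", "cooking"]
value = {t: 0.0 for t in topics}   # estimated reward per topic
count = {t: 0 for t in topics}

def pick_topic(epsilon=0.1):
    # Mostly exploit the best-known topic, occasionally explore others.
    if random.random() < epsilon:
        return random.choice(topics)
    return max(topics, key=lambda t: value[t])

def record_feedback(topic, liked):
    # A "like" is the reward signal; keep a running average per topic.
    reward = 1.0 if liked else 0.0
    count[topic] += 1
    value[topic] += (reward - value[topic]) / count[topic]

# Simulate a user who likes mindfulness posts 80% of the time
# and everything else only 10% of the time.
for _ in range(500):
    t = pick_topic()
    record_feedback(t, random.random() < (0.8 if t == "mindfulness" else 0.1))

print(max(topics, key=lambda t: value[t]))  # the agent settles on "mindfulness"
```

After a few hundred interactions the agent's reward estimates converge on the user's actual preferences — no microphone required.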
Ad recommendations may be based on other data, too, including but not limited to:
other ads you clicked on through the platform
personal details you provided the platform (such as your age, email address, gender, location and which devices you access the platform on)
information shared with the platform by other advertisers or marketing partners that already have you as a customer
specific pages or groups you have joined or “liked” on the platform.
In fact, AI algorithms can help marketers take huge pools of data and use them to construct your entire social network, ranking people around you based on how much you “care about” (interact with) them.
They can then start to target you with ads based on not only your own data, but on data collected from your friends and family members using the same platforms as you.
For example, Facebook might be able to recommend you something your friend recently bought. It didn’t need to “listen” to a conversation between you and your friend to do this.
Exercising your right to privacy is a choice
While app providers are supposed to provide clear terms and conditions to users about how they collect, store and use data, nowadays it’s on users to be careful about which permissions they give to the apps and sites they use.
When in doubt, give permissions on an as-needed basis. It makes sense to give WhatsApp access to your camera and microphone, as it can’t provide some of its services without this. But not all apps and services will ask for only what is necessary.
Perhaps you don’t mind receiving targeted ads based on your data, and may find it appealing. Research has shown people with a more “utilitarian” (or practical) worldview actually prefer recommendations from AI to those from humans.
That said, it’s possible AI recommendations can constrain people’s choices and minimise serendipity in the long term. By presenting consumers with algorithmically curated choices of what to watch, read and stream, companies may be implicitly keeping our tastes and lifestyle within a narrower frame.
Don’t want to be predicted? Don’t be predictable
There are some simple tips you can follow to limit the amount of data you share online. First, you should review your phone’s app permissions regularly.
Also, think twice before an app or website asks you for certain permissions, or to allow cookies. Wherever possible, avoid using your social media accounts to connect or log in to other sites and services. In most cases there will be an option to sign up via email, which could even be a burner email.
Once you do start the sign-in process, remember you only have to share as much information as is needed. And if you’re sensitive about privacy, perhaps consider installing a virtual private network (VPN) on your device. This will mask your IP address and encrypt your online activities.
Try it yourself
If you still think your phone is listening to you, there’s a simple experiment you can try.
Go to your phone’s settings and restrict access to your microphone for all your apps. Pick a product you know you haven’t searched for in any of your devices and talk about it out loud at some length with another person.
Make sure you repeat this process a few times. If you still don’t get any targeted ads within the next few days, this suggests your phone isn’t really “listening” to you.
It has other ways of finding out what’s on your mind.
The US Army’s new vaccine is the result of two years of work (a short period for vaccine development), and it is claimed to work against all strains of SARS-origin viruses — including strains and viruses that haven’t even emerged yet.
In an exclusive interview with Defense One, Dr. Kayvon Modjarrad, director of Walter Reed’s infectious diseases branch, discussed the army’s Spike Ferritin Nanoparticle COVID-19 vaccine, or SpFN. The vaccine, Modjarrad says, completed animal trials and Phase 1 human trials. Results will soon be published in a peer-reviewed journal.
In a statement, officials stopped short of making any definitive claims, but they mentioned that so far, everything seems to be going as planned. Although the vaccine hasn’t been tested specifically against the Omicron variant (which is capable of evading some of the immunity provided by current vaccines), it should work against all coronaviruses, not just SARS-CoV-2.
“We want to wait for those clinical data to be able to kind of make the full public announcements, but so far everything has been moving along exactly as we had hoped,” Modjarrad said.
The vaccine trials took a bit longer than expected because the researchers found it hard to gather subjects who had been neither vaccinated against nor infected with COVID-19. In fact, Modjarrad says, because Omicron is so contagious, it’s only a matter of time before everyone gets either vaccinated or infected. But the pan-coronavirus vaccine can help not just against Omicron — but against strains that haven’t even evolved yet, and potentially against future coronavirus pandemics as well.
With peer-reviewed results set to be published soon, the next step is for Walter Reed researchers to test the vaccine on a large, real-world population — Phase II and Phase III trials. In order to do this, researchers are working with a yet-unnamed industrial partner.
This is not the first pan-coronavirus vaccine in development; several others are in the works. In fact, such vaccines were already being researched during the first SARS outbreak some 20 years ago — but funding for these projects was cut short, which is very unfortunate given what has happened in the current pandemic. Hopefully, such projects will continue to be funded and supported even if (or when) the current pandemic subsides.
Whether or not the army vaccine (or others of this type) will be successful remains to be seen, but their success may be decisive in how we deal with future coronavirus pandemics. Whether we like it or not, this is probably not the last time we’ll be hearing from this type of virus.
The Faroe Islands, a small archipelago located halfway between Iceland and Norway, was once home to an unknown group of people in the year 500 AD, around 350 years before the Vikings arrived, according to a new study. The finding is based on the analysis of centuries-old sheep poop found on the bottom of a lake in the islands.
The Vikings were excellent sailors who reached virtually any rugged, inhospitable place in Europe accessible by water. But in some places, other groups may have arrived before them. Until recently, evidence of people arriving in the Faroes before the Vikings has been limited. The islands are rocky and windswept, so not much has remained intact on the surface. In 2013, researchers found burnt barley grains, not native to the islands, beneath the floor of a Viking house. The grains were dated to 300 to 500 years before the Vikings occupied the islands.
Seeking to unravel the history these grains hold, a group of researchers focused on a lake on the Faroese island of Eysturoy, located near a village that previously hosted a Viking settlement. They dropped tubes into the lake bottom and collected sediment cores 2.7 meters (nine feet) in length. Analysis of the cores revealed abundant traces of domesticated sheep.
The team of researchers estimated the animals arrived between 492 and 512 AD, based on the depth of the sediment layers. There’s no evidence of mammals on the islands before the arrival of the sheep, which indicates they were brought by people arriving on the islands. Sheep are now a staple of the Faroese diet.
“We show conclusive evidence that humans had introduced livestock to the Faroe Islands three to four centuries before the Viking-age Norse settlement period that is widely documented in the archaeological record. We constrain the most likely timing of human arrival to 500 CE, approximately 350 years before Viking Age settlements,” they wrote.
The mystic Faroe Islands
Located over 300 kilometers northwest of Scotland, the Faroes have coastlines of impressive towering cliffs, with cloudy weather and strong winds. The landscape is largely tundra, and only a few places would have been enticing for settlement. There are a few flat places near protected bays, where the Vikings would usually camp. It’s definitely not the place where you’d expect to find an inexperienced sailor.
Some medieval writings suggest that Irish monks lived on the islands by the year 500, among them supposedly the Irish navigator St. Brendan – famous for sailing the Atlantic. Now, the sheep evidence helps to better understand the history of the islands. The researchers believe these first people were Celts, crossing from Scotland or Ireland.
In fact, there are names in the Faroe Islands that come from Celtic words, as well as undated Celtic grave markings on the islands. Previous studies have found maternal Celtic lineage in Faroese people. It’s possible that Vikings had Celtic brides with them, but the maternal Celtic background is so high that the researchers think Celts were very likely on the islands before the Vikings.
Since it was introduced in the 1970s, the MRI has become one of the most impactful imaging techniques in medicine. MRIs are highly potent and versatile, capable of offering much better resolutions than a CT scan and being used in a wide array of situations, from scanning the brain to looking for tumors. But there’s a big problem: the conventional MRI is expensive to buy and maintain.
This is why a new study published in Nature Communications is so exciting. In it, researchers from the University of Hong Kong describe the construction of a new type of MRI that can be built for a fraction of the cost of existing machines.
Ed X. Wu has been working in MRI research for the past 30 years. He’s worked on the engineering side as well as on image formation and biomedical applications. He’s seen the field grow and develop, as both the technology and the algorithms that operate MRI machines have become more capable and elegant.
“However, these continuously evolving high-end features also drive up the complexity of these scanners,” Wu tells ZME Science, “thus further increasing the cost of purchasing, hosting, and maintaining these clinical MRI scanners.”
Although the MRI is widely considered to be the most valuable and sophisticated medical imaging technology in modern healthcare, Wu explains, it comes at a cost of over $1 million per unit, and a maintenance cost of around $15,000 per month. As a result, despite their utility, MRIs are hardly affordable. Every hospital in the world needs at least one, but 2 in 3 people worldwide have limited or no MRI access.
“The accessibility to clinical MRI scanners is very low,” Wu continues. “The total number of clinical scanners is only about 50,000 in the entire world. They are mostly installed inside highly specialized radiology departments or centralized imaging facilities, operated by highly trained technicians. Meanwhile, there are unmet clinical needs for imaging in almost every corner of healthcare, as demonstrated by the success of ultrasound imaging and x-ray imaging.”
Since MRI is widely used to diagnose medical conditions, not having access to one can delay or even prevent the discovery and treatment of serious illness, increasing medical risks for billions of patients around the world. Access to an MRI, even a less powerful one, could save many lives and improve many more.
“In short, we need to democratize MRI technologies to serve healthcare at low cost and large scale,” Wu explains.
In order to do this, the cost and complexity of MRI scanners must be brought down substantially. It’s not just the engineering part, but also the installation, maintenance, and operation costs that need to be brought down. For instance, commercial MRIs typically require high power outputs, which may not be available in some places. To achieve this, researchers developed an MRI that works at a very low field and can be constructed for only $20,000.
Lowering the Teslas
An MRI scanner is essentially a giant magnet. It employs powerful, superconducting magnets that force the protons in the human body to align with its magnetic field. To get a sense of how strong the magnet is, most MRIs operate at 1.5 teslas (although the range can vary from 0.2 to 3 teslas), while the magnetic field of the Earth is around 0.0000305 teslas.
The MRI prototype developed by Wu and colleagues operates at 0.055 Teslas, much lower than existing commercial units. It can operate from a standard AC wall power outlet and requires neither radiofrequency (RF) nor magnetic shielding.
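To put these numbers in perspective, here's a quick back-of-the-envelope comparison using the field strengths quoted above:

```python
# Field strengths mentioned in the article, in teslas
earth_field = 0.0000305   # Earth's magnetic field (approximate)
clinical_mri = 1.5        # a typical clinical scanner
prototype = 0.055         # the low-field prototype

# A clinical magnet is roughly 49,000 times stronger than Earth's field
print(f"clinical vs Earth: {clinical_mri / earth_field:,.0f}x")

# The prototype operates at about 1/27 of the typical clinical field strength
print(f"clinical vs prototype: {clinical_mri / prototype:.0f}x")
```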
The shielding part is particularly exciting. Normally, MRIs need the shielding to eliminate interference (for instance, with other electronic devices) — but researchers managed to eliminate the need for shielding by using a deep learning algorithm, Wu tells ZME Science:
“Our innovations encompass three aspects: (i) we eliminated the bulky RF shielding room requirement through deep learning, thus the MRI scan can now be made in open space; (ii) we implemented and demonstrated the feasibility of key and widely adopted clinical brain imaging protocols on this low-cost platform, which were previously believed challenging if not impossible at very low field and on low-cost hardware platforms; and (iii) we performed preliminary clinical study and validated results by directly comparing to 3T results.”
It’s not the first time something like this was attempted, but this innovation was only possible thanks to breakthroughs on the algorithm side. “In short, it’s our new algorithms & hardware concept that made this advance possible,” the researcher tells me in an email. In fact, Wu expects much of the innovation in the MRI field to come on the computing side.
“I believe computing and big-data will be an integral as well as inevitable part of the future MRI technology. Given the inherent nature of MRI, I believe widely deployed MRI technologies will lead to immense opportunities in the future through data-driven MRI image formation and diagnosis in healthcare. This will lead to low-cost, effective, and more intelligent clinical MRI applications, ultimately benefiting more patients.”
For now, at least, the new technology isn’t meant to replace conventional MRIs, but rather to complement them and offer a low-cost solution where none is currently available. But if Wu is right and low-cost computing and AI can help push the field even further, we may be seeing these in hospitals in the not too distant future.
Wu hopes that this research could inspire more engineering and data scientists to develop and adopt such low-cost and low-power MRI technology — both in developed and underdeveloped countries. He believes that without any cost increase, the prototype can be improved to achieve more usable image quality and become a valuable tool for medical diagnosis.
“Our body is mostly made of water molecules, on which MRI thrives — MRI is a gift to mankind from nature, we’ve got to use it more,” the researcher concludes.
When the first Crusaders reached the Middle East in the 11th century, they were in for a shocking surprise. It wasn’t just the scorching heat, unfamiliar territory, and foreign culture that threw them off guard, but also a novel deadly weapon: sabers made from Damascus steel.
Damascus steel is incredibly strong but malleable at the same time. It’s easily recognizable due to the watery dark patterns, called “damask”, that form on the surface of the metal. When Damascus steel is hammered into a blade, its edge can stay sharp for years even after clashing through many battles.
Prized for its distinctive wavy surface, linked by poets to ant tracks or rippling water, the Damascene sword was a weapon of the highest quality. But despite their best efforts, Europeans could never replicate Damascus steel to a tee, as its manufacturing method was kept a closely guarded state secret by Middle Eastern armorers.
Reforging Damascus steel
Although the recipe for Damascus steel has been lost over the ages and we’ll probably never be able to replicate it exactly, modern analytical methods allow us to infer some of the material’s most important properties. Damascus steel’s number one requirement, for instance, is a very high carbon content.
Modern high-carbon steel contains about 1% carbon, which increases the hardness and strength of the alloy. Damascus steel contains between 1% and 2% carbon, according to an analysis conducted by metallurgists at Stanford University in the 1980s.
Another key requirement was forging and hammering at a relatively low temperature of about 920 degrees Celsius (1,700 degrees Fahrenheit). After the blade is shaped, the steel is again reheated to the same temperature, then rapidly cooled by quenching it in a fluid.
It was during this quenching process that some armorers in Persia believed the blade acquired its magical properties. Legend had it that the finest Damascus blades were quenched in “dragon blood”, according to the Encyclopedia of the Sword.
What form this dragon blood took is anyone’s guess. One Pakistani man sent a letter to the Arms and Armor Division of the Metropolitan Museum of Art in New York claiming that the Damascus sword held in his family for many generations was quenched by its Afghan blacksmiths in the urine of a donkey, goat, or even a redheaded boy.
One written account from Turkey dating from eight centuries ago stressed that the Damascus sword had to be heated until it glowed “like the sun rising in the desert”. After the blade is cooled until it gains the color of royal purple, it then has to be thrust “into the body of a muscular slave” so that his vitality and strength are transferred to the sword. Oddly specific.
Outrageous as they may sound, and impossible to verify, it is possible that some of these outlandish quenching techniques genuinely contributed to the blade's quality, for instance by adding nitrogen to the alloy.
However, the most important component of Damascus steel — and the reason why we’ll probably never replicate a true Damascus sword — is “wootz”, a special type of steel that used to be made in India.
Wootz, which first went into production more than 2,300 years ago, is an ultra-high-grade carbon steel, containing between 1% and 2% carbon. Excavations in India and Sri Lanka suggest that it was fabricated inside a crucible, a container that can withstand the very high temperatures required to melt steel. In 300 B.C., when the first wootz steel was cast, the crucible was likely made of clay. Inside it, iron was melted together with charcoal in the absence of oxygen. Under these reducing conditions, the steel absorbed carbon from the charcoal.
When Europeans started descending onto the Indian subcontinent in great numbers in the 19th century, some scientists from England attempted to replicate the wootz manufacturing method in order to understand how steel with such extraordinary strength was made with ancient tools. In the process, they found that the high carbon content was a key requirement.
But it wasn’t until the 20th century that scientists learned about another property of wootz steel. Steel with such a high carbon content can become “superplastic”, which allows it to be formed into complex shapes.
According to the Encyclopedia of the Sword by Nick Evangelista, a batch of wootz steel was heated until molten, then bundled into sheets that were again heated and hammered into the rough shape of a sword. Forging alters the crystalline structure into the familiar wavy or watered pattern that Damascus steel is known for.
After cooling, the blade was filed, ground, polished, and finely decorated. The most prized Damascus swords were the ones with a series of bars crossing the blade, known as “Mohammed’s Ladder”. These fine blades were often decorated with gold or silver.
Unfortunately, the technique for making wootz was lost in the 1700s. With the source material gone, so were the Damascus swords, whose production was already exceptionally rare by the 15th century.
Despite a great deal of research and effort to reverse engineer Damascus blades, no one has been able to cast a material that comes close to the ancient quality. Don’t be fooled by what you might find on Amazon: those are Damascus sword ripoffs made from pattern-welded steel, merely etched with acid to mimic the watery light-and-dark patterns.
In Japan, as in most other countries, disabled people are often invisible, hidden away in a homogeneous society that prioritizes productivity and fitting in. While the country has made some progress, issuing new anti-discrimination laws and ratifying a UN rights treaty, the issue is far from solved. Now, a cafe in Tokyo hopes to make a difference, bringing together technology and inclusion in a unique type of café.
DAWN, or Diverse Avatar Working Network, is a café staffed by robots operated remotely by people with physical disabilities such as amyotrophic lateral sclerosis (ALS) and spinal muscular atrophy (SMA). The operators, referred to as pilots, can control the robots from home using a mouse, a tablet, or a gaze-controlled remote.
The cafe is the latest project of the Japanese robotics company Ory Laboratory, which has the overall purpose of creating an accessible society. Its co-founder and CEO Kentaro Yoshifuji got the idea of a cafe with remote-controlled robots after spending a long time in hospital when he was a child – unable to go to school for over three years.
The project started in 2018 as a pilot and has been revamped three times since. Following positive feedback from customers, Ory Laboratory opened a permanent café in Tokyo’s Nihonbashi district in June this year. The researchers behind the robot, Kazuaki Takeuchi and Yoichi Yamazaki, even published a paper last year describing how the robots were developed and how they can be used.
The robots are called OriHime-D. Users control them remotely as avatars, an alter ego with a physical body, by selecting from prepared motion patterns. They can also communicate through their own voice or speech synthesis, which allows people who have difficulty speaking, or who are unable to do physical work, to interact with customers. The researchers behind the project emphasize that the more abstract and vague the robot’s shape, the more the user’s own personality can show through.
A unique coffee shop
The café in Tokyo has several types of OriHime robots, which were already in use when it was all just a pilot project. There’s a stationary tabletop robot that takes orders from customers and is capable of striking different poses. Tables at the café also come with an iPad to support the interaction with the robots, which the pilots operate remotely.
Pilots, wherever they are based, can watch the customers through their computer screens while moving the robots around the café with software that can be operated with slight eye movements. The OriHime are about 120 centimeters tall and come with a camera, microphone, and speaker, which they use to speak and take orders.
There’s also a larger robot that brings food to the customers, providing opportunities for pilots who find chatting with customers difficult. And instead of human baristas, the cafe features a “TeleBarista OriHime” that automatically brews whatever coffee customers select, which is then taken to their table.
The café is a joint effort between Ory Laboratory, All Nippon Airways (ANA), the Nippon Foundation, and the Avatar Robotic Consultative Association (ARCA). Each operator gets paid 1,000 yen ($8.80) an hour, which is the standard wage in Japan. As well as working at the cafe, Ory’s robots can also be found in transportation hubs and department stores.
If you’re in Tokyo and would like to have a cup of coffee at DAWN, you can find it in the Nihonbashi district.
It’s not easy being soft and strong at the same time — unless you’re the new hydrogel developed at the University of Cambridge. This is the first soft material to show such a high resistance to compression, the authors report.
The material looks like a squishy gel in its normal state but behaves like ultra-hard, shatterproof glass when compressed, despite being 80% water. Its secret lies in the non-water portion: a polymer network whose elements are held together by “reversible interactions”. As these interactions turn on and off, the properties of the material shift.
The so-dubbed ‘super jelly’ could be employed for a wide range of applications where both strength and softness are needed such as bioelectronics, cartilage replacement in medicine, or in flexible robots.
“In order to make materials with the mechanical properties we want, we use crosslinkers, where two molecules are joined through a chemical bond,” said Dr. Zehuan Huang from the Yusuf Hamied Department of Chemistry, the study’s first author.
“We use reversible crosslinkers to make soft and stretchy hydrogels, but making a hard and compressible hydrogel is difficult and designing a material with these properties is completely counterintuitive.”
The macroscopic properties of any substance arise from its microscopic properties — its molecular structure and the way its molecules interact. Because of the way hydrogels are structured, it’s exceedingly rare to see such a substance show both flexibility and strength.
The team’s secret lay in the use of molecules known as cucurbiturils. These barrel-shaped molecules act as ‘handcuffs’ that hold other polymers together (a practice known as ‘crosslinking’). Each cucurbituril holds two ‘guest molecules’ inside its cavity, and these guests were designed to preferentially reside there. Because the polymers are linked so tightly, the overall material has a very high resistance to compression: there isn’t much free space at the molecular level for compression to take place.
The alterations the team made to the guest molecules also slow down the internal dynamics of the material considerably, they report. This gives the hydrogel properties ranging between rubber-like and glass-like states. According to their experiments, the gel can withstand pressures of up to 100 MPa (14,503 pounds per square inch), the equivalent of about five average cars (at 2,871 pounds each) bearing down on a single square inch.
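As a quick sanity check of the unit conversion above (using the standard pascal-to-psi factor):

```python
PA_PER_PSI = 6894.757      # pascals in one pound-force per square inch
pressure_pa = 100e6        # 100 MPa, the gel's reported compression limit

pressure_psi = pressure_pa / PA_PER_PSI
print(f"{pressure_psi:,.0f} psi")   # ~14,504 psi, matching the figure quoted above
```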
“The way the hydrogel can withstand compression was surprising, it wasn’t like anything we’ve seen in hydrogels,” said co-author Dr. Jade McCune, also from the Department of Chemistry. “We also found that the compressive strength could be easily controlled through simply changing the chemical structure of the guest molecule inside the handcuff.”
“People have spent years making rubber-like hydrogels, but that’s just half of the picture,” said Professor Oren Scherman, who led the research. “We’ve revisited traditional polymer physics and created a new class of materials that span the whole range of material properties from rubber-like to glass-like, completing the full picture.”
The authors say that, as far as they know, this is the first time a glass-like hydrogel has been developed. They tested the material by using it to build a real-time pressure sensor to monitor human motions.
They’re now working on further developing their glass-like hydrogel for various biomedical and bioelectronic applications.
The paper “Highly compressible glass-like supramolecular polymer networks” has been published in the journal Nature Materials.
However sustainable, eco-friendly, and clean they are as a source of energy, conventional solar panels require a large setup area and a heavy initial investment. Due to these limitations, it’s hard to introduce them in urban areas (especially neighborhoods with lots of apartment blocks or shops). But thanks to the work of ingenious engineers at Michigan State University, that may soon no longer be the case.
The researchers have created transparent solar panels which they claim could be used as power generating windows in our homes, buildings, and even rented apartments.
If these transparent panels are indeed capable of generating electricity cost-efficiently, the days of regular windows may be numbered. Soon, we could have access to cheap solar energy regardless of where we live. We might even be rid of the occasional power cut, because with transparent glass-like solar panels, every house and every tall skyscraper could generate its own power independently.
An overview of the transparent solar panels
In order to generate power from sunlight, solar cells embedded on a solar panel are required to absorb radiation from the sun. Therefore, they cannot allow sunlight to completely pass through them (in the way that a glass window can). So at first, the idea of transparent solar panels might seem preposterous and completely illogical because a transparent panel should be unable to absorb radiation.
But that’s not necessarily the case, researchers have found. In fact, that’s not the case at all.
The solar panels created by engineers at Michigan State University consist of transparent luminescent solar concentrators (TLSC). Composed of cyanine, the TLSC selectively absorbs invisible solar radiation, including infrared and ultraviolet light, while letting the visible rays pass through. In other words, these devices are transparent to the human eye (much like a window) but still absorb a fraction of the solar light, which they can then convert into electricity. It’s a relatively new technology, first developed in 2013, but it’s already seeing some impressive developments.
Panels equipped with TLSC can be molded into thin transparent sheets that can in turn be used to create windows, smartphone screens, car roofs, and so on. Unlike traditional panels, transparent solar panels do not use silicon; instead, they consist of a zinc oxide layer covered with a carbon-based IC-SAM layer and a fullerene layer. The IC-SAM and fullerene layers not only increase the efficiency of the panel but also prevent the radiation-absorbing regions of the solar cells from breaking down.
Surprisingly, the researchers at Michigan State University (MSU) also claim that their transparent solar panels can last for 30 years, making them more durable than most regular solar panels. Basically, you could fit your windows with these transparent solar cells and get free electricity without much hassle for decades. Unsurprisingly, this prospect has a lot of people excited.
According to Professor Richard Lunt (who headed the transparent solar cell experiment at MSU), “highly transparent solar cells represent the wave of the future for new solar applications”. He further adds that these devices in the future can provide a similar electricity-generation potential as rooftop solar systems plus, they can also equip our buildings, automobiles, and gadgets with self-charging abilities.
“That is what we are working towards,” he said. “Traditional solar applications have been actively researched for over five decades, yet we have only been working on these highly transparent solar cells for about five years. Ultimately, this technology offers a promising route to inexpensive, widespread solar adoption on small and large surfaces that were previously inaccessible.”
Recent developments in the field of transparent solar cell technology
Apart from the research work conducted by Professor Richard Lunt and his team at MSU, other research groups and companies are working on advanced solar-powered glass windows. Earlier this year, a team from ITMO University in Russia developed a method of producing transparent solar cells much more cheaply than ever before.
“Regular thin-film solar cells have a non-transparent metal back contact that allows them to trap more light. Transparent solar cells use a light-permeating back electrode. In that case, some of the photons are inevitably lost when passing through, thus reducing the devices’ performance. Besides, producing a back electrode with the right properties can be quite expensive,” says Pavel Voroshilov, a researcher at ITMO University’s Faculty of Physics and Engineering.
“For our experiments, we took a solar cell based on small molecules and attached nanotubes to it. Next, we doped nanotubes using an ion gate. We also processed the transport layer, which is responsible for allowing a charge from the active layer to successfully reach the electrode. We were able to do this without vacuum chambers and working in ambient conditions. All we had to do was dribble some ionic liquid and apply a slight voltage in order to create the necessary properties,” adds Voroshilov.
PHYSEE, a technology company from the Netherlands, has installed its solar energy-based “PowerWindow” across 300 square feet of a bank building in the country. Though at present the transparent PowerWindows are not efficient enough to meet the energy demands of the whole building, PHYSEE claims that with some more effort, it will soon be able to increase the feasibility and power-generation capacity of its solar windows.
California-based Ubiquitous Energy is also working on a “ClearView Power” system that aims to create a solar coating that can turn ordinary window glass into a transparent solar panel. The coating allows windows to absorb high-energy infrared radiation, and the company claims to have achieved an efficiency of 9.8% with ClearView solar cells in initial tests.
In September 2021, the Nippon Sheet Glass (NSG) Corporation facility located in Chiba City became Japan’s first solar window-equipped building. The transparent solar panels installed by NSG in their facility are developed by Ubiquitous Energy. Recently, as a part of their association with Morgan Creek Ventures, Ubiquitous Energy has also installed transparent solar windows on Boulder Commons II, an under-construction commercial building in Colorado.
All these exciting developments indicate that sooner or later, we too might be able to install transparent power-generating solar windows in our homes. Such a change in the way we produce energy could, on a global scale, turn out to be a great step towards a more energy-efficient world.
Not there just yet
If this almost sounds too good to be true, well, it sort of is. The efficiency of these fully transparent solar panels is around 1%, though the technology has the potential to reach around 10%. Compare that to the 15% of conventional solar panels (with the most efficient ones reaching 22% or even a bit higher).
So the efficiency isn’t there yet to make transparent solar cells competitive, but it may get there in the not-too-distant future. Furthermore, the appeal of this system is that it can be deployed on a small scale, in areas where regular solar panels are not feasible. They don’t have to replace regular solar panels; they just have to complement them.
When you think about it, solar energy wasn’t regarded as competitive up to about a decade ago — and a recent report found that now, it’s the cheapest form of electricity available so far in human history. Although transparent solar cells haven’t been truly used yet, we’ve seen how fast this type of technology can develop, and the prospects are there for great results.
The mere idea that we may soon be able to power our buildings through our windows shows how far we’ve come. An energy revolution is in sight, and we’d be wise to take it seriously.
Earth may one day have its own ring system — one made from space junk.
Whenever there are humans, pollution seems to follow. Our planet’s orbit doesn’t seem to be an exception. However, not all is lost yet! Research at the University of Utah is exploring novel ideas for how to clear the build-up before it can cause more trouble for space-faring vessels and their crews.
Their idea involves using a magnetic tractor beam to capture and remove debris orbiting the Earth.
Don’t put a ring on it
“Earth is on course to have its own rings,” says University of Utah professor of mechanical engineering Jake Abbott, corresponding author of the study, for the Salt Lake Tribune. “They’ll just be made of space junk.”
The Earth is on its way to becoming the fifth planet in the Solar System to gain planetary rings. But unlike the rock-and-ice rings of Jupiter, Saturn, Neptune, and Uranus, Earth’s rings would be made of scrap and junk, and they would be wholly human-made.
According to NASA’s Orbital Debris Program Office, there are an estimated 23,000 pieces of orbital debris larger than a softball, joined by a few hundred million smaller pieces. These travel at speeds of 17,500 mph (28,160 km/h), posing an immense threat to satellites and space travel and hampering research efforts.
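To get a feel for why even tiny fragments are dangerous at that speed, here's an illustrative calculation (the 1-gram fragment is an assumption for illustration, not a figure from NASA):

```python
KM_PER_MILE = 1.609344
speed_kmh = 17_500 * KM_PER_MILE   # ~28,164 km/h, the speed quoted above
speed_ms = speed_kmh / 3.6         # ~7,823 m/s

# Kinetic energy of a hypothetical 1-gram fragment at orbital speed
mass_kg = 0.001
energy_kj = 0.5 * mass_kg * speed_ms**2 / 1000
print(f"{energy_kj:.1f} kJ")       # ~30.6 kJ packed into a single gram of debris
```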
Because of their high speeds, removing these pieces of space debris is very risky — and hard to pull off.
“Most of that junk is spinning,” Abbott added. “Reach out to stop it with a robotic arm, you’ll break the arm and create more debris.”
A small part of this debris — around 200 to 400 pieces — burns up in the Earth’s atmosphere every year. However, fresh pieces make their way into orbit as the space around our planet is increasingly used and traversed. Plans by private entities to launch thousands of new satellites in the coming years will only make the problem worse.
Abbott’s team proposes using a magnetic device to capture or pull debris down into low orbit, where they will eventually burn up in the Earth’s atmosphere.
“We’ve basically created the world’s first tractor beam,” he told Salt Lake Tribune. “It’s just a question of engineering now. Building and launching it.”
The paper “Dexterous magnetic manipulation of conductive non-magnetic objects” has been published in the journal Nature.
Researchers at Northwestern University have devised a high-resolution holographic camera that images objects outside its line of sight, revealing objects hidden behind corners, as well as those obstructed by barriers, such as a deer behind a forest line. The camera can also see through fog and even human skin, which could make it a fantastic new medical imaging tool on par with MRI machines and CT scanners.
This impressive new imaging method, known as synthetic wavelength holography, works by reconstructing the path a beam of light takes as it scatters off various objects, bouncing between surfaces until it makes its way back to a detector at the source. An algorithm then traces the path of the scattered light, making it possible to see the world from the perspective of a remote surface, even one outside the camera’s line of sight.
“If you have ever tried to shine a flashlight through your hand, then you have experienced this phenomenon,” said Florian Willomitzer, first author of the study, explaining how light scattering works. “You see a bright spot on the other side of your hand, but, theoretically, there should be a shadow cast by your bones, revealing the bones’ structure. Instead, the light that passes the bones gets scattered within the tissue in all directions, completely blurring out the shadow image.”
The new technology is a type of non-line-of-sight (NLoS) imaging. Researchers at Stanford University recently presented another impressive demonstration of NLoS that images moving objects inside a room using a single laser beam fired through a keyhole.
But compared to other NLoS technologies, this new method takes things to a whole new level, rapidly capturing full-field images at high resolution with submillimeter precision.
The key to imaging obstructed objects is to intercept the scattered light and measure its time of travel with precision. Typically, you’d need a cumbersome apparatus consisting of very fast detectors to achieve this goal. The researchers thought of a workaround and combined two lasers to generate a synthetic light wave that can capture the entire field of vision of an object in a hologram, essentially reconstructing its entire 3-D shape.
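The “synthetic light wave” arises from beating two closely spaced laser wavelengths together: the pair behaves like a single wave with a much longer synthetic wavelength, Λ = λ₁λ₂/|λ₁ − λ₂|. A minimal sketch (the wavelengths below are illustrative values, not the ones used in the study):

```python
def synthetic_wavelength_mm(lam1_nm: float, lam2_nm: float) -> float:
    """Synthetic (beat) wavelength of two laser lines, converted from nm to mm."""
    return lam1_nm * lam2_nm / abs(lam1_nm - lam2_nm) * 1e-6

# Two infrared lines just 0.1 nm apart beat into a ~24 mm synthetic wave,
# long enough to survive scattering while the phase still encodes fine detail
print(f"{synthetic_wavelength_mm(1550.0, 1550.1):.1f} mm")
```

The closer the two wavelengths, the longer the synthetic wave, which is what lets the method trade between robustness to scattering and depth precision.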
Due to its high temporal resolution and fast response time (under 50 milliseconds), the camera should theoretically be able to image fast-moving objects, such as cars or pedestrians hidden behind a curving road.
“This technique turns walls into mirrors,” Willomitzer said. “It gets better as the technique also can work at night and in foggy weather conditions.”
The same tool can also see through tissue, revealing a beating heart or other internal organs obstructed by the skin since the same principle of light scattering applies in both instances. As long as there’s an opaque barrier, such as a wall, shrub, box, or skin, the holographic camera can see objects around corners.
Self-driving cars would have a lot to gain by incorporating this technology that could prevent a lot of accidents and save lives, but the Northwestern researchers believe it could prove most useful in medical imaging where it could replace or supplement endoscopes. Rather than cramming and tugging a flexible camera through tight spaces and around corners, such as during a colonoscopy, the holographic imaging could use light instead to image the many folds inside the intestines in a completely non-invasive manner. Similarly, the same method could be used to image damaged industrial equipment without having to disassemble it part by part.
“If you have a running turbine and want to inspect defects inside, you would typically use an endoscope,” Willomitzer said. “But some defects only show up when the device is in motion. You cannot use an endoscope and look inside the turbine from the front while it is running. Our sensor can look inside a running turbine to detect structures that are smaller than one millimeter.”
The current sensor prototype uses visible or infrared light, but it could theoretically be reconfigured and extended to other frequencies for use in space exploration or underwater acoustic imaging. It might take a while though before we see this technology transition from the lab to the commercial market.
“It’s still a long way to go before we see these kinds of imagers built into cars or approved for medical applications,” Willomitzer said. “Maybe 10 years or even more, but it will come.”