Category Archives: Technology

Russians flock to VPNs to escape internet censorship

As the war (or if you’re in Russia, the “special operation”) continues to rage on, Russian authorities have banned the last semblance of independent journalism and are amplifying efforts to restrict domestic access to free information. But millions of Russians are not having it and are flocking to virtual private networks (or VPNs) to browse the free internet.

The demand for VPNs, which allow the user to browse the internet privately and without restriction, skyrocketed in Russia after the invasion. Between February 27 and March 3, demand surged by 668% — but after Russia blocked Facebook and Twitter on March 4, the demand for VPNs grew even more, peaking at 1,092% above the average before the invasion.

By March 5, the ten most downloaded apps in Russia were essentially all VPNs.

Overall, the Google Play Store saw 3.3 million VPN downloads, while the Apple App Store had 1.3 million. That’s 4.6 million VPN downloads since the invasion started (Russia has a population of around 144 million).

Russian authorities have not yet blocked app stores, although they have the ability to do so. However, they are trying to block VPN traffic at the network level — drawing from China’s experience in censoring the internet. It’s a bit of an arms race: when a VPN is blocked, its provider has to find new ways of evading censorship (often by switching servers).

For users, this means they may be forced to change servers or even apps regularly if they want to access independent, foreign publishers and social media. Otherwise, they will have to contend with the warped, distorted reality typically present in Russian state-owned media.

Russia’s internet censorship is not as stringent as China’s, but it could be getting there very quickly. As Russia becomes more and more isolated, the Kremlin is trying to cast an online iron curtain to block its people from accessing the free internet. The Russian parliament also approved a law making the spreading of “false” news about the war in Ukraine a criminal offense punishable by up to 15 years in prison. Even the word “war” is banned in Russian media.

It’s not the first time we’re seeing something like this. In January, VPN demand in Kazakhstan also skyrocketed by over 3,400% following an internet blackout during anti-government protests. When China passed the Hong Kong national security law, VPN demand also surged (in a country where VPN usage is already common). Myanmar and Nigeria went through similar situations. However, VPN providers say the current increase in demand is unprecedented.

VPN demand in Ukraine has also climbed 609% above pre-invasion levels, driven largely by fears that invading forces would also carry out cyberattacks.

These hard-bodied robots can reproduce, learn and evolve autonomously

Where biology and technology meet, evolutionary robotics is spawning automatons that evolve in real time and space. The field is grounded in evolutionary computing, in which robots possessing a virtual genome ‘mate’ to ‘reproduce’ improved offspring in response to complex, harsh environments.

Image credits: ARE.

Hard-bodied robots are now able to ‘give birth’

Robots have changed a lot over the past 30 years, already capable of replacing their human counterparts in some cases — in many ways, robots are already the backbone of commerce and industry. Performing a flurry of jobs and roles, they have been miniaturized, mounted, and molded into mammoth proportions to achieve feats way beyond human abilities. But what happens when unstable situations or environments call for robots never seen on Earth before?

For instance, we may need robots to clean up a nuclear meltdown deemed unsafe for humans, explore an asteroid in orbit or terraform a distant planet. So how would we go about that?

Scientists could guess what the robot may need to do, running untold computer simulations based on realistic scenarios that the robot could be faced with. Then, armed with the results from the simulations, they can send the bots hurtling into uncharted darkness aboard a hundred-billion-dollar machine, keeping their fingers crossed that their rigid designs will hold up for as long as needed.

But what if there were a better alternative? What if there was a type of artificial intelligence that could take lessons from evolution to generate robots that can adapt to their environment? It sounds like something from a sci-fi novel — but it’s exactly what a multi-institutional team in the UK is currently doing in a project called Autonomous Robot Evolution (ARE).

Remarkably, they’ve already created robots that can ‘mate’ and ‘reproduce’ progeny with no human input. What’s more, using the evolutionary principles of variation and selection, these robots can optimize their descendants for a given set of tasks over generations. If viable, this would be a way to produce robots that can autonomously adapt to unpredictable environments – their extended mechanical family changing along with their volatile surroundings.

“Robot evolution provides endless possibilities to tweak the system,” says evolutionary ecologist and ARE team member Jacintha Ellers. “We can come up with novel types of creatures and see how they perform under different selection pressures.” The approach offers a way to explore evolutionary principles by posing an almost infinite number of “what if” questions.

What is evolutionary computation?

In computer science, evolutionary computation is a family of algorithms inspired by biological evolution, in which candidate solutions are generated and constantly “evolved”. Each new generation removes the less desirable solutions and introduces small adaptive changes, or mutations, to produce a cyber version of survival of the fittest. It’s a way to mimic biological evolution, resulting in the best version of the robot for its current role and environment.
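To give a feel for how this works in practice, here is a minimal evolutionary loop sketched in Python. It is a toy illustration of the general technique, not ARE’s actual code: the “genome” here is just a list of numbers, and the fitness function is a stand-in for a real performance test.

```python
import random

GENOME_LEN = 8
TARGET = [0.5] * GENOME_LEN  # stand-in for an "ideal" design

def fitness(genome):
    # Higher is better: negative squared distance to the target design.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def crossover(mom, dad):
    # Uniform crossover: each gene is inherited from a random parent.
    return [random.choice(pair) for pair in zip(mom, dad)]

def mutate(genome, rate=0.1, scale=0.2):
    # Occasional small random perturbations play the role of mutations.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

# A random initial population of 20 candidate "robots"
population = [[random.random() for _ in range(GENOME_LEN)] for _ in range(20)]

for generation in range(100):
    # Selection: keep the fitter half, discard the rest
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Reproduction: survivors 'mate' to refill the population
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(10)]
    population = survivors + children

best = max(population, key=fitness)
print(f"best fitness after 100 generations: {fitness(best):.4f}")
```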

Virtual robot. Image credits: ARE.

Evolutionary robotics begins at ARE in a facility dubbed the EvoSphere, where newly assembled baby robots download an artificial genetic code that defines their bodies and brains. This is where two parent robots come together to mingle their virtual genomes and create improved young that incorporate both their genetic codes.

The newly evolved offspring is built autonomously via a 3D printer, after which a mechanical assembly arm translating the inherited virtual genomic code selects and attaches the specified sensors and means of locomotion from a bank of pre-built components. Finally, the artificial system wires up a Raspberry Pi computer acting as a brain to the sensors and motors – software is then downloaded from both parents to represent the evolved brain.

1. Artificial intelligence teaches newborn robots how to control their bodies

In most animal species, newborns undergo brain development and learning to fine-tune their motor control. This process is even more intense for these robotic infants because of breeding between different species. For example, a parent with wheels might procreate with another possessing a jointed leg, resulting in offspring with both types of locomotion.

But, the inherited brain may struggle to control the new body, so an algorithm is run as part of the learning stage to refine the brain over a few trials in a simplified environment. If the synthetic babies can master their new bodies, they can proceed to the next phase: testing.

2. Selection of the fittest: who can reproduce?

ARE tests its young robots in a specially built, inert nuclear reactor housing, where they must identify and clear radioactive waste while avoiding various obstacles. After the task, the system scores each robot according to its performance and uses that score to determine which robots will be permitted to reproduce.

Real robot. Image credits: ARE.

Software simulating reproduction then takes the virtual DNA of two parents and performs genetic recombination and mutation to generate a new robot, completing the ‘circuit of life.’ Parent robots can either remain in the population, have more children, or be recycled.

Evolutionary roboticist and ARE researcher Guszti Eiben explains why this sped-up evolution works: “Robotic experiments can be conducted under controllable conditions and validated over many repetitions, something that is hard to achieve when working with biological organisms.”

3. Real-world robots can also mate in alternative cyberworlds

In her article for the New Scientist, Emma Hart, ARE member and professor of computational intelligence at Edinburgh Napier University, writes that by “working with real robots rather than simulations, we eliminate any reality gap. However, printing and assembling each new machine takes about 4 hours, depending on the complexity of its skeleton, so limits the speed at which a population can evolve. To address this drawback, we also study evolution in a parallel, virtual world.”

In this parallel universe, a digital version of every mechanical infant is created in a simulator once mating has occurred, enabling the ARE researchers to build and test new designs within seconds and identify those that look workable.

Their cyber genomes can then be prioritized for fabrication into real-world robots, allowing virtual and physical robots to breed with each other, adding to the real-life gene pool created by the mating of two material automatons.

The dangers of self-evolving robots – how can we stay safe?

A robot fabricator. Image credits: ARE.

Even though this program is brimming with potential, Professor Hart cautions that progress is slow, and furthermore, there are long-term risks to the approach.

“In principle, the potential opportunities are great, but we also run the risk that things might get out of control, creating robots with unintended behaviors that could cause damage or even harm humans,” Hart says.

“We need to think about this now, while the technology is still being developed. Limiting the availability of materials from which to fabricate new robots provides one safeguard.” She adds: “We could also anticipate unwanted behaviors by continually monitoring the evolved robots, then using that information to build analytical models to predict future problems. The most obvious and effective solution is to use a centralized reproduction system with a human overseer equipped with a kill switch.”

A world made better by robots evolving alongside us

Despite these concerns, she counters that even though some applications, such as interstellar travel, may seem years off, the ARE system may have more immediate uses. And as climate change reaches dangerous proportions, it is clear that robot manufacturers need to become greener. She proposes that they could reduce their ecological footprint by using the system to build novel robots from sustainable materials that operate at low energy levels and are easily repaired and recycled.

Hart concludes that these divergent progeny probably won’t look anything like the robots we see around us today, but that is where artificial evolution can help. Unrestrained by human cognition, computerized evolution can generate creative solutions we cannot even conceive of yet.

And it would appear these machines will now evolve even further as we step back and hand them the reins of their own virtual lives. How this will affect the human race remains to be seen.

Meanwhile, Ukraine and Russia’s hackers are embroiled in a war of their own

Way before Russian tanks invaded Ukraine, a vicious attack of a different sort was already underway. In mid-January, a massive cyberattack, likely originating from Russian hackers, hit Ukrainian servers.

‘Ukrainians … be afraid and expect worse,’ the message read.

Several prominent Ukrainian websites were attacked, including the ministry of foreign affairs and the education ministry portals.

Disturbingly, this may have been dismissed as “business as usual” — after all, Russia has been waging cyberattacks against the world for over a decade, actively trying to influence elections, hacking newspapers and TV channels, and obtaining data. But this time, it was different. This time, the cyberattack prefaced a military invasion.

It wasn’t just Kremlin-backed hackers that attacked Ukraine. Some self-proclaimed “patriotic” Russian hackers, with “respectable” daytime jobs, also participated in cyberattacks.

“Considering everyone is attacking Ukraine servers. I am thinking we should cause some disruption too?” one such hacker posted on social media, as quoted by the BBC.

In this case, the anonymous Russian hacker (and his team of six companions) temporarily brought down Ukrainian government websites through a rudimentary but effective technique called a distributed denial-of-service (DDoS) attack.

But there were also more sophisticated attacks, presumably orchestrated by organized, Russia-backed hackers. Just days before the military invasion began, on 23 February, numerous Ukrainian government websites and financial services were hit with another wave of DDoS attacks. In addition to the attacks, a new piece of malware was also discovered.

According to cyber-security experts at ESET and Symantec, this second form of attack installed a “wiper” on infected computers, deleting all data on the machines.

“ESET researchers have announced the discovery of a new data wiper malware used in Ukraine, which they have named HermeticWiper,” a spokesman said. “ESET telemetry shows that the malware was installed on hundreds of machines in the country.”

In parallel to all these attacks, a disinformation campaign was also waged against Ukraine. Meta (Facebook and Instagram’s parent company) discovered and erased a Russian disinformation network — but many more remain, leaving tech giants faced with a game of whack-a-mole.

Ukraine (and Anonymous) strike back

Just like the Russian military has far more firepower than the Ukrainian one, the difference between the two countries’ cyber-power is also substantial. In response to these cyberattacks, Ukraine issued a desperate call for volunteer hackers to join the fight.

“We have a lot of talented Ukrainians in the digital sphere: developers, cyber specialists, designers, copywriters, marketers,” Mykhailo Fedorov, Ukraine’s First Vice Prime Minister and Minister of Digital Transformation announced in a post on his official Telegram channel. “We continue to fight on the cyber front.”

His call was heard.

The volunteer IT Army was organized through a Telegram channel, and 175,000 people have subscribed. Of course, not all of them are hackers. The vast majority are just people with an internet connection who want to help, doing things like reporting Russian propaganda channels on YouTube, Facebook, or Twitter. The more savvy users are asked to perform their own DDoS attacks on the websites of Russian ministries and key companies like Gazprom.

The development of such a volunteer unit is unprecedented in history — but we are pretty much living in unprecedented times, and for a country faced with an existential threat, as Ukraine is, it’s unsurprising that it tries to muster every bit of help it can.

Some international hackers have also joined the cyber-fight, most notably the decentralized hacktivist collective Anonymous.

Anonymous started with more DDoS attacks on Russian propaganda channels and government websites. At some point, all of the state-controlled Russian banks had their websites shut down. But they soon moved on to other things.

Russian TV channels were hijacked to play Ukrainian music.

“Ukrainian music is playing on Russian TV channels. It is believed that this is the work of hackers from Anonymous, who continue to hack Russian services and websites,” Fedorov said.

Meanwhile, a hacktivist group from Belarus has claimed to be disrupting the movement of military units by shutting down railways in the country (Belarus is supporting Russia’s invasion), though these reports have been hard to confirm.

In addition, Anonymous has leaked vast amounts of emails from a large Belarusian weapons company that worked with Russia on the invasion. The group also leaked a massive database of the Russian Ministry of Defense. “We are also undergoing operations to best support Ukrainians online,” Anonymous said.

The shadow war

The worst may still be coming for Ukraine, as Russia has intensified its bombing of cities — including the use of cluster bombs and the bombing of civilian centers. The country may be headed for a long, dreadful, guerrilla war. Behind this war, in the shadows, the cyber-war will also likely descend into lengthy guerrilla skirmishes.

‘If Kyiv falls, we keep hacking Putin,’ one volunteer cyber-soldier told Forbes.

It’s still too early to tell how impactful all this will be, and it’s still unclear just how important the data leaked from Russia and Belarus is (most of it is in Russian and is extensive, which means it will take a long time to analyze). But if there’s one thing this is doing, it’s generating more publicity for Ukraine’s cause — especially back in Russia, where Putin has a strong grip on what information gets through and the truth is often censored, but also internationally.

For now, the invasion continues to rage on.

People find AI-generated faces to be more trustworthy than real faces — and it could be a problem

Not only are people unable to distinguish between real faces and AI-generated faces, but they also seem to trust AI-generated faces more. The findings from a relatively small study suggest that nefarious actors could be using AI to generate artificial faces to trick people.

The most (top row) and least (bottom row) accurately classified real (R) and synthetic (S) faces. Credit: DOI: 10.1073/pnas.2120481119

Worse than a coin flip

In the past few years, artificial intelligence has come a long way. It’s no longer used just to analyze data; it can create text, images, and even video. A particularly intriguing application is the creation of human faces.

In the past couple of years, algorithms have become strikingly good at creating human faces. This could be useful on one hand — it enables low-budget companies to produce ads, for instance, essentially democratizing access to valuable resources. But at the same time, AI-synthesized faces can be used for disinformation, fraud, propaganda, and even revenge pornography.

Human brains are generally pretty good at telling apart real from fake, but in this area, AIs are winning the race. In a new study, Dr. Sophie Nightingale from Lancaster University and Professor Hany Farid from the University of California, Berkeley, conducted experiments to analyze whether participants can distinguish state-of-the-art AI-synthesized faces from real faces, and what level of trust the faces evoked.

“Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces,” the researchers note.

The researchers designed three experiments, recruiting volunteers from the Mechanical Turk platform. In the first one, 315 participants classified 128 faces taken from a set of 800 (either real or synthesized). Their accuracy was 48% — worse than a coin flip.

Representative faces used in the study. Could you tell apart the real from the synthetic faces? Participants in the study couldn’t. Image credits: DOI: 10.1073/pnas.2120481119.

More trustworthy

In the second experiment, 219 new participants were trained on how to analyze and give feedback on faces. They were then asked to classify and rate 128 faces, again from a set of 800. Their accuracy increased thanks to the training, but only to 59%.

Meanwhile, in the third experiment, 223 participants were asked to rate the trustworthiness of 128 faces (from the set of 800) on a scale from 1 to 7. Surprisingly, synthetic faces were rated 7.7% more trustworthy.

“Faces provide a rich source of information, with exposure of just milliseconds sufficient to make implicit inferences about individual traits such as trustworthiness. We wondered if synthetic faces activate the same judgements of trustworthiness. If not, then a perception of trustworthiness could help distinguish real from synthetic faces.”

“Perhaps most interestingly, we find that synthetically-generated faces are more trustworthy than real faces.”

There were also some interesting takeaways from the analysis. For instance, women were rated as significantly more trustworthy than men, and smiling faces were also more trustworthy. Black faces were rated as more trustworthy than South Asian, but otherwise, race seemed to not affect trustworthiness.

“A smiling face is more likely to be rated as trustworthy, but 65.5% of the real faces and 58.8% of synthetic faces are smiling, so facial expression alone cannot explain why synthetic faces are rated as more trustworthy,” the study notes.

The researchers offer a potential explanation as to why synthetic faces could be seen as more trustworthy: they tend to resemble average faces, and previous research has suggested that average faces tend to be considered more trustworthy.

Although the sample size is fairly small and the findings need to be replicated on a larger scale, the results are pretty concerning, especially considering how fast the technology has been progressing. The researchers say that if we want to protect the public from “deep fakes,” there should be some guidelines on how synthesized images are created and distributed.

“Safeguards could include, for example, incorporating robust watermarks into the image- and video-synthesis networks that would provide a downstream mechanism for reliable identification. Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often-laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.

“At this pivotal moment, and as other scientific and engineering fields have done, we encourage the graphics and vision community to develop guidelines for the creation and distribution of synthetic-media technologies that incorporate ethical guidelines for researchers, publishers, and media distributors.”

The study was published in PNAS.

An AI was just used to control plasma inside a nuclear fusion reactor

A groundbreaking technology has been used to improve another, as researchers have demonstrated how AI could be used to control the superheated plasma inside a tokamak-type fusion reactor.

“This is one of the most challenging applications of reinforcement learning to a real-world system,” says Martin Riedmiller, a researcher at DeepMind.

DeepMind produced a range of shapes whose properties are under study by plasma physicists. Image credits: DeepMind & SPC/EPFL.

Current nuclear plants use nuclear fission to harness energy, splitting larger atoms into smaller ones. Fusion, on the other hand, is the opposite process: lighter atomic nuclei combine to form heavier ones. It’s the process that powers stars, but harnessing this power and using it on Earth is extremely challenging.

If you’re essentially building a miniature star (hotter than the surface of the Sun) and then using it to harness its power, you need to be absolutely certain you can control it. Researchers use a lot of tricks to achieve this, like magnets, lasers, and clever designs, but it has still proven to be a gargantuan challenge.

This is where AI could enter the stage.

Researchers use several designs to try and contain this superheated plasma — one of these designs is called a tokamak. A tokamak uses magnetic fields in a donut-shaped containment area to keep the superheated atoms (as plasma) under control long enough that we can extract energy from it. The main idea is to use this magnetic cage to keep the plasma from touching the reactor walls, which would damage the reactor and cool the plasma.

TCV plasma. Image credits: Curdin Wüthrich, SPC/EPFL

Controlling this plasma requires constant shifts in the magnetic field, and the researchers at DeepMind (the Google-owned company that built the AlphaGo and AlphaZero AIs that dominated Go and chess) felt like this would be a good task for an algorithm.

They trained an unnamed AI to control and change the shape of the plasma by changing the magnetic field using a technique called reinforcement learning. Reinforcement learning is one of the three main machine learning approaches (alongside supervised learning and unsupervised learning). In reinforcement learning, the AI takes certain actions to maximize the chance of earning a predefined reward.
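As a rough illustration of the mechanic, here is a toy tabular Q-learning agent in Python. This is not DeepMind’s controller, which relies on deep neural networks and a detailed plasma simulator; it only shows the core reinforcement learning idea of improving behavior by trial and error, guided by a reward signal.

```python
import random

N_STATES = 10          # positions 0..9 on a line; position 9 is the goal
ACTIONS = [-1, +1]     # step left or step right
EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # The predefined reward is only granted at the goal
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the estimate toward reward + discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        state = next_state

# After training, the greedy policy steps right from every state
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```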

After the algorithm was trained on a virtual reactor, it was given control of the magnets inside the Variable Configuration Tokamak (TCV), an experimental tokamak reactor in Lausanne, Switzerland.

The AI controlled the plasma for only two seconds, which is as long as the TCV can run without overheating, but that was a long enough period to assess the AI’s performance.

Every 0.0001 seconds, the AI took 90 different measurements describing the shape and location of the plasma, adjusting the magnetic field accordingly. To speed the process up, the AI was split into two different networks — a large network that learned via trial and error in the virtual stage, and a faster, smaller network that runs on the reactor itself.

“Our controller first shapes the plasma according to the requested shape, then shifts the plasma downward and detaches it from the walls, suspending it in the middle of the vessel on two legs. The plasma is held stationary, as would be needed to measure plasma properties. Then, finally the plasma is steered back to the top of the vessel and safely destroyed,” DeepMind explains in a blog post.

“We then created a range of plasma shapes being studied by plasma physicists for their usefulness in generating energy. For example, we made a “snowflake” shape with many “legs” that could help reduce the cost of cooling by spreading the exhaust energy to different contact points on the vessel walls. We also demonstrated a shape close to the proposal for ITER, the next-generation tokamak under construction, as EPFL was conducting experiments to predict the behavior of plasmas in ITER. We even did something that had never been done in TCV before by stabilizing a “droplet” where there are two plasmas inside the vessel simultaneously. Our single system was able to find controllers for all of these different conditions. We simply changed the goal we requested, and our algorithm autonomously found an appropriate controller.”

The controller trained with deep reinforcement learning steers the plasma through multiple phases of an experiment. On the left, an inside view of the tokamak during the experiment. On the right, the reconstructed plasma shape and the target points the researchers wanted to hit. Image credits: DeepMind & SPC/EPFL.

While this is still in its early stages, it’s a very promising achievement. DeepMind’s AIs seem ready to move on from complex games into the real world, and make a real difference — as they previously did with protein structure.

This doesn’t mean that we’ll have nuclear fusion tomorrow. Although we’ve seen spectacular breakthroughs in the past couple of years, and although AI seems to be a promising tool, we’re still a few steps away from realistic fusion energy. But the prospect of virtually limitless fusion energy, once thought to be technically impossible, now seems within our reach.

The study was published in Nature.

Anonymizing smartphone data is no longer enough — users can be identified with just a few details

Vast amounts of user data are available to smartphone companies. These companies assure us that the data is anonymized — devoid of personal indicators that could pinpoint individual users. But those assurances are hollow, a new study claims: a skilled attacker can identify individuals in anonymous datasets.

Image credits: Olia Nayda.

When the pandemic started and lockdowns were enforced, the world seemed to grind to a halt. You could see that easily just by looking around, but the data also confirmed it. For instance, mobility trends published by the likes of Apple and Google showed that a significant part of the population had stopped commuting to work, and that people were using cars more and public transit less.

At first, users were understandably spooked by the data. Do tech companies know where I go and what I do? That’s not how it works, the companies assured us. The data is anonymized — they know a user went somewhere and did something, but they don’t know who that user is. Other apps also scoop vast quantities of data from your smartphone, either for ad targeting or for other purposes, though in many cases, they are still legally required to anonymize the data, removing identifiable bits like names and phone numbers.

But that’s no longer enough. With just a few details (for instance, how a person communicates with an app like WhatsApp), researchers were able to identify many users in anonymized data. Yves-Alexandre de Montjoye, associate professor at Imperial College London and one of the study authors, told AFP it’s time to “reinvent what anonymisation means”.

What is anonymous?

The researchers started by looking at anonymized data from around 40,000 smartphone users, mostly gathered from messaging apps. They then “attacked” the data, mimicking the process a malicious actor would follow. Essentially, this involved searching for patterns in the data to see whether individual users could be identified.

With only the direct contacts included in the dataset, they were able to pinpoint individual users 15% of the time. When, in addition, further interactions between those primary contacts were included, they were able to identify 52% of the users.
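To make the attack concrete, here is a heavily simplified sketch in Python. The interaction graph and the target’s “fingerprint” are invented for illustration, and the study’s actual matching method is considerably more sophisticated, but the principle is the same: a person’s pattern of contacts can act like a signature.

```python
import networkx as nx

def signature(G, node):
    # Fingerprint a pseudonymous user by the sorted degrees of their
    # direct contacts. Folding in the links *between* those contacts
    # would sharpen the fingerprint further -- roughly why the study's
    # hit rate jumped from 15% to 52%.
    return tuple(sorted(G.degree(m) for m in G.neighbors(node)))

# Toy 'anonymized' interaction graph: node labels are meaningless pseudonyms
anon = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)])

# Suppose outside data (a leaked address book, say) tells the attacker the
# target has two contacts, who themselves have 1 and 2 contacts. In this
# toy graph, that fingerprint is unique.
known_signature = (1, 2)

matches = [n for n in anon.nodes if signature(anon, n) == known_signature]
print("candidate identities:", matches)   # -> [4]
```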

This doesn’t mean that we should give up on anonymization, the researchers explain. However, we should strengthen what this anonymization means, making sure that the data is indeed anonymous.

“Our results provide evidence that disconnected and even re-pseudonymised interaction data remain identifiable even across long periods of time,” the researchers wrote. “These results strongly suggest that current practices may not satisfy the anonymisation standard set forth by (European regulators) in particular with regard to the linkability criteria.”

“Our results provide strong evidence that disconnected and even re-pseudonymized interaction data can be linked together,” the researchers conclude.

The researchers suggest restricting large datasets to simple question-and-answer systems, or using differential privacy systems that add random noise to the data to protect individual privacy.
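For a flavor of how differential privacy works, here is a minimal sketch of the textbook Laplace mechanism in Python. This is a standard construction rather than anything taken from the study: a count query receives just enough random noise that any single person’s presence or absence in the data is statistically masked.

```python
import numpy as np

def private_count(true_count, epsilon=0.5):
    # Laplace mechanism: a counting query has sensitivity 1 (adding or
    # removing one person changes the answer by at most 1), so noise with
    # scale 1/epsilon hides any individual's contribution.
    # Smaller epsilon = stronger privacy, noisier answers.
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g. "how many users messaged contact X last week?"
print(private_count(1423))
```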

The study was published in Nature Communications.

European Football is becoming increasingly predictable as the rich get richer

Football is increasingly looking like a gentrified, unequal society, a new study shows.

Football (that is, the sport that people in America tend to call soccer) has never been more popular and financially lucrative. In Europe alone, football is a multi-billion dollar industry, with top players being sold for well over 100 million dollars. The appeal of football, its supporters say, is that you never know what will happen. Underdog tales can always emerge, and just being the bigger team doesn’t guarantee success. The ball is round and anything is possible… in theory.

But according to one study, being the bigger team does make success much more likely. The new study, which used a computer model to look at football games in major European leagues over the past 26 years, found that over time, football games have become more and more predictable and the inequality in teams has become more pronounced.

“On the one hand, playing football has become a high-income profession and the players are highly motivated; on the other hand, stronger teams have higher incomes and therefore afford better players leading to an even stronger appearance in tournaments that can make the game more imbalanced and hence predictable,” the study reads.

The computer model worked on some 88,000 matches played since 1993, trying to predict whether the home or away team would win based on their performance in previous games. The home advantage, once prevalent in all areas of football, has almost vanished in all countries. It’s not clear exactly why this has happened, though it could be due to non-football reasons: transportation has improved substantially, minimizing the challenges and effort required to play away.

The computer model, researchers say, is simpler than most existing algorithms, such as the ones developed by betting houses to calculate the odds of winning. The advantage of this is that you can input much more data into it and go back further in time with the analysis, something that more sophisticated models would struggle with.

So how much more predictable have matches become? For instance, the model could correctly predict the winner of a Bundesliga game (the top German league) with 60% success in 1993 — in 2019, the figure had grown to 80%. Overall, the model was able to predict results correctly roughly 75% of the time in 2019. Researchers stress that this is not because there was more data to train the models, but it is because indeed, football has become more predictable.

Football as a gentrified society

Initially, this came as a surprise.

Researchers were expecting that more money and higher stakes would make the game more competitive, but this doesn’t seem to be the case. Instead, as the leagues mature, they resemble a gentrified society, with the underlying inequality bringing more and more predictability. In particular, the researchers found that the points in a given season were distributed among teams much less evenly. They plotted this point distribution in a similar way to how economists plot income or wealth disparity between members of society — using the Gini coefficient. While there were some exceptional years, in general, leagues are becoming more and more unequal, with the top clubs gathering more points year after year. This echoes the notion that “the rich get richer and the poor get poorer”, the researchers write.
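For the curious, the Gini coefficient borrowed from economics can be computed in a few lines. Here is a Python sketch with made-up season point totals; the more top-heavy league yields the higher coefficient.

```python
import numpy as np

def gini(points):
    # Mean absolute difference between every pair of teams, normalized by
    # twice the mean: 0 = perfect equality, values near 1 = extreme inequality.
    points = np.sort(np.asarray(points, dtype=float))
    n = len(points)
    pairwise_diffs = np.abs(points[:, None] - points[None, :]).sum()
    return pairwise_diffs / (2 * n * n * points.mean())

balanced = [50, 52, 55, 58, 60]   # hypothetical tight league table
lopsided = [20, 30, 45, 70, 95]   # hypothetical top-heavy league table
print(gini(balanced), gini(lopsided))   # the second value is higher
```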

“It seems football as a sport is emulating society in its somewhat ‘gentrification’ process, i.e. the richer leagues are becoming more deterministic because better teams win more often; consequently, becoming richer; allowing themselves to hire better players (from a talent pool that gets internationally broader each year); becoming even stronger; and, closing the feedback cycle, winning even more matches and tournaments; hence more predictability in more professional and expensive leagues,” the study reads.

When this growing inequality is coupled with the disappearance of the home-field advantage, a plausible theory emerges regarding the growing predictability of football. Decades ago, the home advantage granted weaker teams playing at home a boost, making it more likely that they can win even against stronger teams — at least once in a while. Now, it seems that stronger teams simply win more, regardless of whether it’s home or away.

However, the researchers emphasize that they did not investigate the direct cause for football’s growing predictability.

Ultimately, with the richest teams pouring more and more money into the game (often “dirty money”, or money from questionable sources), this trend is likely to deepen. The “beautiful game” may be beautiful to watch — but in other ways, it is increasingly not.

The study was published in Royal Society Open Science.

Contrary to popular belief, Twitter’s algorithm amplifies conservative, not liberal voices

When Republican Representative Jim Jordan spoke at a congressional hearing in 2020, he made it clear why he dislikes companies like Twitter.

“Big Tech is out to get conservatives,” Jordan proclaimed. “That’s not a suspicion. That’s not a hunch. It’s a fact. I said that two months ago at our last hearing. It’s every bit as true today.”

Jordan’s claim isn’t isolated. Led by former President Trump, a growing number of right-leaning voices are claiming that social media is biased in favor of liberals and progressives, shutting down conservatives. But an internal study released by Twitter shows that the opposite is true — in the US, as well as most countries that were analyzed, it’s actually conservative voices that are amplified more than liberal voices.

“Our results reveal a remarkably consistent trend: In 6 out of 7 countries studied [including the US], the mainstream political right enjoys higher algorithmic amplification than the mainstream political left. Consistent with this overall trend, our second set of findings studying the U.S. media landscape revealed that algorithmic amplification favours right-leaning news sources,” Twitter’s study reads.

Algorithmic amplification refers to how much a story is ‘amplified’ by Twitter’s algorithm — in other words, how much more likely the algorithm is to show it to users.

The study has two main parts. The first focused on the US and analyzed whether media outlets were more likely to be amplified if they were politicized, while the other focused on tweets from politicians in seven countries.

Twitter analyzed millions of tweets posted between April 1st and August 15th, 2020. The tweets were selected from news outlets and elected officials in 7 countries: Canada, France, Germany, Japan, Spain, the UK, and the US. In all countries except Germany, tweets from right-leaning accounts “receive more algorithmic amplification than the political left.” In general, right-leaning content from news outlets seemed to benefit from the same bias. In other words, users on Twitter are more likely to see right-leaning content rather than left-leaning, all things being equal. In the UK, for instance, the right-leaning Conservatives enjoyed an amplification rate of 176%, compared to 112% for the left-leaning Labour party.

The difference was larger in some countries, but overall, there was a clear trend of Twitter’s algorithm favoring the political right.

However, Twitter emphasizes that its algorithm doesn’t favor extreme content from either side of the political spectrum.

“We further looked at whether algorithms amplify far-left and far-right political groups more than moderate ones: contrary to prevailing public belief, we did not find evidence to support this hypothesis. We hope our findings will contribute to an evidence-based debate on the role personalization algorithms play in shaping political content consumption,” the study read.

While it is clear that politicized content is amplified on Twitter, it’s not entirely clear why this happens. However, this seems to be connected to a phenomenon present on all social media platforms. Algorithms are designed to promote intense conversations and debate — and a side effect of this is that controversy is often boosted. Simply put, if a US Democrat says something about a Republican (or vice versa), this is likely to draw both praise and criticism, and is likely to be promoted and boosted by the algorithm.

Although Twitter did not focus on this directly, the phenomenon is also key to disinformation, which we’ve seen a lot of during the pandemic. For instance, if a conspiracy theory is posted on Twitter, there’s a good chance it will gather both the approval of those who believe it and the criticism of those who see through it — which makes it more likely to be further amplified on social media.

It’s interesting that Germany stands out as an exception, but this could be related to Germany’s agreement with Facebook, Twitter, and Google to remove hate speech within 24 hours. This is still only speculation and there could be other factors at play.

Ultimately, in addition to contradicting a popular conspiracy theory that social media is against conservatives, the study shows just how much social media algorithms can shape and sway public opinion, by presenting some posts instead of others. Twitter’s study is an encouraging first step towards more transparency, but it’s a baby step when we’re looking at a very long race ahead of us.

Fields in North America will see their first robot tractors by the end of the year

American farm equipment manufacturer John Deere has teamed up with French agricultural robot start-up Naio to create a driverless tractor that can plow fields by itself while being supervised by farmers through a smartphone.

Image credits: CES 2022.

There are more people alive in the world today than ever before, and not very many of us want to work the land. A shortage of laborers is not the only issue plaguing today’s farms, however: climate change, and the need to limit our environmental impact, are further straining our ability to produce enough food to go around.

In a bid to address at least one of these problems, John Deere and Naio have developed a self-driving tractor that can get fields ready for crops on its own. It combines John Deere’s 8R tractor, a plow, a GPS suite, and 360-degree cameras, which a farmer can control remotely from a smartphone.

Plowing ahead

The machine was shown off at the Consumer Electronics Show in Las Vegas, an event that began last Wednesday. According to a presentation held at the event, the tractor only needs to be driven into the field, after which the operator can send it on its way with a simple swipe on their smartphone.

The tractor is equipped with an impressive sensory suite — six pairs of cameras, able to fully perceive the machine’s surroundings — and is run by artificial intelligence. These work together to check the tractor’s position at all times with a high level of accuracy (within an inch, according to the presentation) and keep an eye out for any obstacles. If an obstacle is met, the tractor stops and sends a warning signal to its user.

John Deere Chief Technology Officer Jahmy Hindman told AFP that the autonomous plowing tractor will be available in North America this year, although no price has yet been specified.

While the tractor, so far, can only plow by itself, the duo of companies plan to expand into more complicated processes — such as versions that can seed or fertilize fields — in the future. However, they add that combine harvesters are more difficult to automate, and there is no word yet on a release date for such vehicles.

However, with other farm equipment manufacturers (such as New Holland and Kubota) working on similar projects, they can’t be far off.

“The customers are probably more ready for autonomy in agriculture than just about anywhere else because they’ve been exposed to really sophisticated and high levels of automation for a very long time,” Hindman said.

Given their price and relative novelty, automated farming vehicles will most likely first be used for specialized, expensive, and labor-intensive crops. It may be a while before we see them working vast cereal crop fields, but they will definitely get there, eventually.

There is hope that, by automating the most labor-intensive and unpleasant jobs on the farm, such as weeding and crop monitoring, automation can help boost yields without increasing costs, while also reducing the need for mass use of pesticides or fungicides — which would reduce the environmental impact of the agricultural sector, while also making for healthier food on our tables.

Is your phone really listening to your conversations? Well, turns out it doesn’t have to

Have you ever chatted with a friend about buying a certain item and been targeted with an ad for that same item the next day? If so, you may have wondered whether your smartphone was “listening” to you.

But is it really? Well, it’s no coincidence the item you’d been interested in was the same one you were targeted with.

But that doesn’t mean your device is actually listening to your conversations — it doesn’t need to. There’s a good chance you’re already giving it all the information it needs.

Can phones hear?

Most of us regularly disclose our information to a wide range of websites and apps. We do this when we grant them certain permissions, or allow “cookies” to track our online activities.

So-called “first-party cookies” allow websites to “remember” certain details about our interaction with the site. For instance, login cookies let you save your login details so you don’t have to re-enter them each time.

Third-party cookies, however, are created by domains that are external to the site you’re visiting. The third party will often be a marketing company in a partnership with the first-party website or app.

The latter will host the marketer’s ads and grant it access to the data it collects from you (which you will have given it permission to do — perhaps by clicking on some innocuous-looking popup).

As such, the advertiser can build a picture of your life: your routines, wants and needs. These companies constantly seek to gauge the popularity of their products and how this varies based on factors such as a customer’s age, gender, height, weight, job and hobbies.

By classifying and clustering this information, advertisers improve their recommendation algorithms, using something called recommender systems to target the right customers with the right ads.
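As a simplified illustration of what a recommender system does with this kind of data (real systems are vastly more complex), here is a Python sketch of user-based collaborative filtering: find the user whose click history most resembles yours, and suggest the ads they responded to. The click matrix is invented for the example.

```python
import numpy as np

# Rows = users, columns = ads; 1 means the user clicked that ad (toy data)
clicks = np.array([
    [1, 0, 1, 0, 1],   # user 0
    [1, 0, 1, 1, 0],   # user 1
    [0, 1, 0, 0, 1],   # user 2
])

def cosine(u, v):
    # Similarity of two click vectors, ignoring overall activity level
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def recommend(user_idx):
    me = clicks[user_idx]
    others = [i for i in range(len(clicks)) if i != user_idx]
    # Find the most similar other user...
    twin = max(others, key=lambda i: cosine(me, clicks[i]))
    # ...and recommend ads they clicked that this user hasn't yet
    return [ad for ad in range(clicks.shape[1])
            if clicks[twin][ad] and not me[ad]]

print(recommend(0))   # ads to show user 0 -> [3]
```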

Computers work behind the scenes

There are several machine-learning techniques in artificial intelligence (AI) that help systems filter and analyse your data, such as data clustering, classification, association and reinforcement learning (RL).

An RL agent can train itself based on feedback gained from user interactions, akin to how a young child will learn to repeat an action if it leads to a reward.

By viewing or pressing “like” on a social media post, you send a reward signal to an RL agent confirming you’re attracted to the post — or perhaps interested in the person who posted it. Either way, a message is sent to the RL agent about your personal interests and preferences.

If you start actively liking posts about “mindfulness” on a social platform, its system will learn to send you advertisements for companies that can offer related products and content.
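Here is a bare-bones sketch of that feedback loop in Python, using an epsilon-greedy bandit, one of the simplest reinforcement learning setups. The topics and the simulated user are made up, and no real platform works this simply, but the reward-driven logic is the same.

```python
import random

TOPICS = ["mindfulness", "sports", "cooking", "travel"]
shows = {t: 0 for t in TOPICS}   # how often each topic was shown
likes = {t: 0 for t in TOPICS}   # how often it earned a 'like' (the reward)

def pick_topic(epsilon=0.1):
    # Mostly exploit the best-performing topic, occasionally explore others
    if random.random() < epsilon or all(v == 0 for v in shows.values()):
        return random.choice(TOPICS)
    return max(TOPICS, key=lambda t: likes[t] / max(shows[t], 1))

def user_reaction(topic):
    # Simulated user who likes mindfulness posts 60% of the time, others 10%
    return random.random() < (0.6 if topic == "mindfulness" else 0.1)

for _ in range(1000):
    topic = pick_topic()
    shows[topic] += 1
    if user_reaction(topic):
        likes[topic] += 1   # the 'like' is the reward signal described above

# The agent ends up showing mostly mindfulness content
print(shows)
```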

Ad recommendations may be based on other data, too, including but not limited to:

  • other ads you clicked on through the platform
  • personal details you provided the platform (such as your age, email address, gender, location and which devices you access the platform on)
  • information shared with the platform by other advertisers or marketing partners that already have you as a customer
  • specific pages or groups you have joined or “liked” on the platform.

In fact, AI algorithms can help marketers take huge pools of data and use them to construct your entire social network, ranking people around you based on how much you “care about” (interact with) them.

They can then start to target you with ads based on not only your own data, but on data collected from your friends and family members using the same platforms as you.

For example, Facebook might be able to recommend you something your friend recently bought. It didn’t need to “listen” to a conversation between you and your friend to do this.

Exercising your right to privacy is a choice

While app providers are supposed to provide clear terms and conditions to users about how they collect, store and use data, nowadays it’s on users to be careful about which permissions they give to the apps and sites they use.

When in doubt, give permissions on an as-needed basis. It makes sense to give WhatsApp access to your camera and microphone, as it can’t provide some of its services without this. But not all apps and services will ask for only what is necessary.

Perhaps you don’t mind receiving targeted ads based on your data, and may find it appealing. Research has shown people with a more “utilitarian” (or practical) worldview actually prefer recommendations from AI to those from humans.

That said, it’s possible AI recommendations can constrain people’s choices and minimise serendipity in the long term. By presenting consumers with algorithmically curated choices of what to watch, read and stream, companies may be implicitly keeping our tastes and lifestyle within a narrower frame.

Don’t want to be predicted? Don’t be predictable

There are some simple tips you can follow to limit the amount of data you share online. First, you should review your phone’s app permissions regularly.

Also, think twice before an app or website asks you for certain permissions, or to allow cookies. Wherever possible, avoid using your social media accounts to connect or log in to other sites and services. In most cases there will be an option to sign up via email, which could even be a burner email.

Once you do start the sign-in process, remember you only have to share as much information as is needed. And if you’re sensitive about privacy, perhaps consider installing a virtual private network (VPN) on your device. This will mask your IP address and encrypt your online activities.

Try it yourself

If you still think your phone is listening to you, there’s a simple experiment you can try.

Go to your phone’s settings and restrict access to your microphone for all your apps. Pick a product you know you haven’t searched for in any of your devices and talk about it out loud at some length with another person.

Make sure you repeat this process a few times. If you still don’t get any targeted ads within the next few days, this suggests your phone isn’t really “listening” to you.

It has other ways of finding out what’s on your mind.


Dana Rezazadegan, Lecturer, Swinburne University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Inexpensive, AI-driven MRI machines could revolutionize medical imaging

Since it was introduced in the 1970s, the MRI has become one of the most impactful imaging techniques in medicine. MRI machines are powerful and versatile, capable of offering much better resolution than a CT scan and usable in a wide array of situations, from scanning the brain to looking for tumors. But there’s a big problem: the conventional MRI is expensive to buy and maintain.

This is why a new study published in Nature Communications is so exciting. In it, researchers from the University of Hong Kong describe the construction of a new type of MRI that can be built for a fraction of the cost of existing machines.

Image credits: Liu et al (2021).

Democratizing MRIs

Ed X. Wu has been working in MRI research for the past 30 years. He’s worked on the engineering side as well as on image formation and biomedical applications. He’s seen the field grow and develop, as both the technology and the algorithms that operate MRI machines have become more capable and elegant.

“However, these continuously evolving high-end features also drive up the complexity of these scanners,” Wu tells ZME Science, “thus further increasing the cost of purchasing, hosting, and maintaining these clinical MRI scanners.”

Although the MRI is widely considered to be the most valuable and sophisticated medical imaging technology in modern healthcare, Wu explains, it comes at a cost of over $1 million per unit, and a maintenance cost of around $15,000 per month. As a result, despite their utility, MRIs are hardly affordable. Every hospital in the world needs at least one, but 2 in 3 people worldwide have limited or no MRI access.

“The accessibility to clinical MRI scanners is very low,” Wu continues. “The total number of clinical scanners is only about 50,000 in the entire world. They are mostly installed inside highly specialized radiology departments or centralized imaging facilities, operated by highly trained technicians. Meanwhile, there are unmet clinical needs for imaging in almost every corner of healthcare, as demonstrated by the success of ultrasound imaging and x-ray imaging.”

Since MRI is primarily used to diagnose disease, not having access to one can delay or even prevent the discovery and treatment of serious medical conditions, increasing medical risks for billions of patients around the world. Having access to an MRI, even a less capable one, could save a lot of lives and improve many livelihoods.

“In short, we need to democratize MRI technologies to serve healthcare at low cost and large scale,” Wu explains.

In order to do this, the cost and complexity of MRI scanners must be brought down substantially. It’s not just the engineering: installation, maintenance, and operating costs also need to come down. For instance, commercial MRIs typically require high power outputs, which may not be available in some places. To achieve all this, the researchers developed an MRI that works at a very low field and can be constructed for only $20,000.

Lowering the Teslas

An MRI scanner is essentially a giant magnet. It employs powerful superconducting magnets that force the protons in the human body to align with its magnetic field. To get a sense of how strong the magnet is, most MRIs operate at 1.5 teslas (although the range can vary from 0.2 to 3 teslas), while the magnetic field of the Earth is around 0.0000305 teslas.

The MRI prototype developed by Wu and colleagues operates at 0.055 Teslas, much lower than existing commercial units. It can operate from a standard AC wall power outlet and requires neither radiofrequency (RF) nor magnetic shielding.

Images obtained with the low-cost MRI. Image credits: Liu et al.

The shielding part is particularly exciting. Normally, MRIs need shielding to eliminate interference (for instance, from other electronic devices) — but the researchers managed to eliminate the need for shielding by using a deep learning algorithm, Wu tells ZME Science:

“Our innovations encompass three aspects: (i) we eliminated the bulky RF shielding room requirement through deep learning, thus the MRI scan can now be made in open space; (ii) we implemented and demonstrated the feasibility of key and widely adopted clinical brain imaging protocols on this low-cost platform, which were previously believed challenging if not impossible at very low field and on low-cost hardware platforms; and (iii) we performed preliminary clinical study and validated results by directly comparing to 3T results.”
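Reading between the lines of point (i), the principle resembles classic adaptive noise cancellation: auxiliary sensing coils record ambient electromagnetic interference (EMI), and a learned model predicts how that interference shows up in the MRI receive coil so it can be subtracted in software. The PyTorch sketch below illustrates that principle with made-up coupling values; it is a conceptual toy, not the authors’ actual network or data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy setup: 4 auxiliary EMI-sensing coils and 1 MRI receive coil. The EMI
# in the MRI coil is an unknown mixture of what the sensing coils pick up.
n_samples, n_sensors = 2048, 4
sensor_sig = torch.randn(n_samples, n_sensors)           # ambient EMI at the sensors
true_mix = torch.tensor([[0.7], [-0.3], [0.5], [0.2]])   # hypothetical coupling
emi_in_mri = sensor_sig @ true_mix                       # EMI leaking into the MRI coil
mri_signal = 0.1 * torch.randn(n_samples, 1) + emi_in_mri  # faint real signal + EMI

model = nn.Sequential(nn.Linear(n_sensors, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Train the network to predict the MRI-coil recording from the sensors.
# Since the real signal is uncorrelated with the sensors, the network can
# only learn the EMI component -- which is exactly what we want to remove.
for step in range(500):
    pred_emi = model(sensor_sig)
    loss = ((pred_emi - mri_signal) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Subtract the predicted interference: shielding, done in software
cleaned = mri_signal - model(sensor_sig).detach()
print(f"raw variance: {mri_signal.var().item():.3f}, "
      f"cleaned: {cleaned.var().item():.3f}")
```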

It’s not the first time something like this has been attempted, but the innovation was only made possible by breakthroughs on the algorithm side. “In short, it’s our new algorithms & hardware concept that made this advance possible,” the researcher tells me in an email. In fact, Wu expects much of the innovation in the MRI field to come on the computing side.

“I believe computing and big-data will be an integral as well as inevitable part of the future MRI technology.  Given the inherent nature of MRI, I believe widely deployed MRI technologies will lead to immense opportunities in the future through data-driven MRI image formation and diagnosis in healthcare. This will lead to low-cost, effective, and more intelligent clinical MRI applications, ultimately benefiting more patients.”

For now, at least, the new technology isn’t meant to replace conventional MRIs, but rather to complement them and offer a low-cost solution where none is currently available. But if Wu is right and low-cost computing and AI can help push the field even further, we may be seeing these in hospitals in the not too distant future.

Wu hopes that this research could inspire more engineering and data scientists to develop and adopt such low-cost and low-power MRI technology — both in developed and underdeveloped countries. He believes that without any cost increase, the prototype can be improved to achieve more usable image quality and become a valuable tool for medical diagnosis.

“Our body is mostly made of water molecules, on which MRI thrives — MRI is a gift to mankind from nature, we’ve got to use it more,” the researcher concludes.

The study was published in Nature Communications.

This cafe in Japan has robot waiters controlled remotely by disabled workers

In Japan, as in most other countries, disabled people are often invisible, hidden away in a homogeneous society that prioritizes productivity and fitting in. While the country has made some progress, issuing new anti-discrimination laws and ratifying a UN rights treaty, the issue is far from solved. Now, a cafe in Tokyo hopes to make a difference, bringing together technology and inclusion in a unique type of café. 

Image credit: Ory Lab.

DAWN, or Diverse Avatar Working Network, is a café managed by robots operated remotely by people with physical disabilities such as amyotrophic lateral sclerosis (ALS) and spinal muscular atrophy (SMA). The operators, referred to as pilots, can control the robots from home using a mouse, tablet, or gaze-controlled remote.

The cafe is the latest project of the Japanese robotics company Ory Laboratory, whose overarching goal is to create a more accessible society. Its co-founder and CEO Kentaro Yoshifuji got the idea of a cafe with remote-controlled robots after spending a long time in the hospital as a child, unable to go to school for over three years.

The project started in 2018 as a pilot and has gone through three iterations since. Following positive feedback from customers, Ory Laboratory opened a permanent café in Tokyo’s Nihonbashi district in June this year. The researchers behind the robot, Kazuaki Takeuchi and Yoichi Yamazaki, even published a paper last year describing how the robots were developed and how they can be used.

The robots are called OriHime-D. Users control them remotely as avatars, an alter ego with a physical body, by selecting from pre-programmed motion patterns. Operators can communicate either with their own voice or through speech synthesis, which enables people who have difficulty speaking, or who cannot engage in physical work, to interact with customers. The researchers behind the project emphasize that the more abstract and vague the robot’s shape is, the more the user’s own personality can show through.

A unique coffee shop

The café in Tokyo has several types of OriHime robots, which were already in use during the project’s pilot phase. There’s a stationary tabletop robot that takes orders from customers, capable of striking different poses. Tables at the café also come with an iPad to support the interaction with the robots, which are operated remotely by the pilots.

Pilots, wherever they are based, can watch the customers through their computer screens while moving the robots around the café using software that can be operated with slight eye movements. The OriHime are about 1.2 meters (roughly 4 feet) tall and come with a camera, microphone, and speaker, which they use to speak with customers and take orders.

There’s also a larger robot that brings food to the customers, providing opportunities for pilots who find chatting with customers difficult. And instead of human baristas, the cafe features a “TeleBarista OriHime” that automatically brews whichever coffee customers select, which is then carried to their table.

The café is a joint effort between Ory Laboratory, All Nippon Airways (ANA), the Nippon Foundation, and the Avatar Robotic Consultative Association (ARCA). Each operator gets paid 1,000 yen ($8.80) an hour, a standard hourly wage in Japan. Beyond the cafe, Ory’s robots can also be found in transportation hubs and department stores.

If you’re in Tokyo and would like to have a cup of coffee at DAWN, you can find it in the Nihonbashi district.

Why transparent solar cells could replace windows in the near future

However sustainable, eco-friendly, and clean a source of energy they may be, conventional solar panels require a large setup area and a heavy initial investment. Due to these limitations, it’s hard to introduce them in urban areas (especially neighborhoods with lots of apartment blocks or shops). But thanks to the work of ingenious engineers at Michigan State University, that may soon no longer be the case.

The researchers have created transparent solar panels which they claim could be used as power-generating windows in our homes, buildings, and even rented apartments.

Image credits: Djim Loic/Unsplash

If these transparent panels are indeed capable of generating electricity cost-efficiently, the days of regular windows may be numbered. Soon, we could have access to cheap solar energy regardless of where we live, and buildings could become far less vulnerable to power cuts: with transparent, glass-like solar panels, every house and every tall skyscraper could generate at least part of its own power independently.

An overview of the transparent solar panels

In order to generate power from sunlight, the solar cells embedded in a panel have to absorb radiation from the sun. Therefore, they cannot allow sunlight to pass completely through them (in the way that a glass window can). So at first, the idea of transparent solar panels might seem preposterous and completely illogical, because a transparent panel should be unable to absorb radiation.

But that’s not necessarily the case, researchers have found. In fact, that’s not the case at all.

Professor R. Lunt at MSU showing the transparent luminescent solar concentrator. Image credits: Michigan State University

The solar panels created by engineers at Michigan State University consist of transparent luminescent solar concentrators (TLSC). Composed of cyanine, the TLSC selectively absorbs invisible solar radiation, including infrared and ultraviolet light, while letting visible light pass through. In other words, these devices are transparent to the human eye (very much like a window) but still absorb a fraction of the solar light, which they can then convert into electricity. It’s a relatively new technology, first developed in 2013, but it’s already seeing some impressive developments.
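A quick back-of-the-envelope calculation shows why harvesting only the invisible bands isn’t as limiting as it sounds. The band shares below are rough textbook figures for sunlight at the Earth’s surface, assumed here purely for illustration:

```python
# Approximate shares of solar energy at ground level (assumed round figures).
bands = {"ultraviolet": 0.05, "visible": 0.43, "infrared": 0.52}

# A fully transparent cell passes the visible band and can, at best,
# harvest everything else.
harvestable = bands["ultraviolet"] + bands["infrared"]
print(f"Energy available to a fully transparent cell: ~{harvestable:.0%}")
# -> ~57% of incident solar energy, before any conversion losses
```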

Panels equipped with TLSC can be molded into thin transparent sheets that can in turn be used to create windows, smartphone screens, car roofs, and so on. Unlike traditional panels, transparent solar panels do not use silicon; instead, they consist of a zinc oxide layer covered with a carbon-based IC-SAM layer and a fullerene layer. The IC-SAM and fullerene layers not only increase the efficiency of the panel but also prevent the radiation-absorbing regions of the solar cells from breaking down.

Surprisingly, the researchers at Michigan State University (MSU) also claim that their transparent solar panels can last for 30 years, making them more durable than most regular solar panels. Basically, you could fit your windows with these transparent solar cells and get free electricity without much hassle for decades. Unsurprisingly, this prospect has a lot of people excited.

According to Professor Richard Lunt (who headed the transparent solar cell research at MSU), “highly transparent solar cells represent the wave of the future for new solar applications”. He adds that, in the future, these devices could provide electricity-generation potential similar to rooftop solar systems, while also equipping our buildings, automobiles, and gadgets with self-charging abilities.

“That is what we are working towards,” he said. “Traditional solar applications have been actively researched for over five decades, yet we have only been working on these highly transparent solar cells for about five years. Ultimately, this technology offers a promising route to inexpensive, widespread solar adoption on small and large surfaces that were previously inaccessible.”

Recent developments in the field of transparent solar cell technology

Apart from the research work conducted by Professor Richard Lunt and his team at MSU, other research groups and companies are working on developing advanced solar-powered glass windows. Earlier this year, a team from ITMO University in Russia developed a method of producing transparent solar cells much more cheaply than before.

“Regular thin-film solar cells have a non-transparent metal back contact that allows them to trap more light. Transparent solar cells use a light-permeating back electrode. In that case, some of the photons are inevitably lost when passing through, thus reducing the devices’ performance. Besides, producing a back electrode with the right properties can be quite expensive,” says Pavel Voroshilov, a researcher at ITMO University’s Faculty of Physics and Engineering.

“For our experiments, we took a solar cell based on small molecules and attached nanotubes to it. Next, we doped nanotubes using an ion gate. We also processed the transport layer, which is responsible for allowing a charge from the active layer to successfully reach the electrode. We were able to do this without vacuum chambers and working in ambient conditions. All we had to do was dribble some ionic liquid and apply a slight voltage in order to create the necessary properties,” Voroshilov adds.

Image credits: Kenrick Baksh/Unsplash

PHYSEE, a technology company from the Netherlands, has installed its solar energy-based “PowerWindow” across 300 square feet of a bank building in the country. Though the transparent PowerWindows are not yet efficient enough to meet the energy demands of the whole building, PHYSEE claims that with more work, it will be able to increase the feasibility and power-generation capacity of its solar windows.

California-based Ubiquitous Energy is also working on a “ClearView Power” system that aims to create a solar coating that can turn the glass used in windows into transparent solar panels. This coating allows glass windows to absorb high-energy infrared radiation, and the company claims to have achieved an efficiency of 9.8% with ClearView solar cells during initial tests.

In September 2021, the Nippon Sheet Glass (NSG) Corporation facility located in Chiba City became Japan’s first solar-window-equipped building; the transparent solar panels installed by NSG in its facility were developed by Ubiquitous Energy. More recently, as part of a partnership with Morgan Creek Ventures, Ubiquitous Energy has also installed transparent solar windows on Boulder Commons II, an under-construction commercial building in Colorado.

All these exciting developments indicate that sooner or later, we too might be able to install transparent power-generating solar windows in our homes. Such a small change in the way we produce energy could, on a global scale, turn out to be a great step toward a more energy-efficient world.

Not there just yet

If this almost sounds too good to be true, well, it sort of is. The efficiency of these fully transparent solar panels is around 1%, though the technology has the potential to reach around 10%; compare this to the 15% of typical conventional solar panels (the most efficient ones can reach 22% or even a bit higher).
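To put those percentages in perspective, here’s the simple arithmetic for a single window, using assumed round numbers (about 1 kW/m² of peak sunlight on a 1.5 m² pane):

```python
# Illustrative yield estimate for one window (assumed values).
IRRADIANCE = 1000.0   # W/m^2, bright midday sun hitting the pane head-on
WINDOW_AREA = 1.5     # m^2, a typical residential window

for efficiency in (0.01, 0.10, 0.15):
    watts = IRRADIANCE * WINDOW_AREA * efficiency
    print(f"{efficiency:4.0%} efficient -> ~{watts:5.0f} W per window at peak")
# 1% -> ~15 W, 10% -> ~150 W, 15% -> ~225 W
```

At 1%, a window powers little more than an LED bulb; at 10%, a glass-heavy building facade starts to add up.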

So transparent solar cells aren’t yet efficient enough to be truly practical, but they may get there in the not-too-distant future. Furthermore, the appeal of this system is that it can be deployed on a small scale, in areas where regular solar panels are not an option. They don’t have to replace regular solar panels; they just have to complement them.

When you think about it, solar energy wasn’t even regarded as competitive until about a decade ago, and a recent report found that it’s now the cheapest form of electricity in human history. Although transparent solar cells haven’t seen real-world deployment yet, we’ve seen how fast this type of technology can develop, and the prospects for great results are there.

The mere idea that we may soon be able to power our buildings through our windows shows how far we’ve come. An energy revolution is in sight, and we’d be wise to take it seriously.

Earth might develop ‘junk’ rings — but engineers are working to prevent that

Earth may one day have its own ring system — one made from space junk.

Rendering of man-made objects in Earth’s orbit. Image via ESA.

Whenever there are humans, pollution seems to follow. Our planet’s orbit doesn’t seem to be an exception. However, not all is lost yet! Research at the University of Utah is exploring novel ideas for how to clear the build-up before it can cause more trouble for space-faring vessels and their crews.

Their idea involves using a magnetic tractor beam to capture and remove debris orbiting the Earth.

Don’t put a ring on it

“Earth is on course to have its own rings,” says University of Utah professor of mechanical engineering Jake Abbott, corresponding author of the study, for the Salt Lake Tribune. “They’ll just be made of space junk.”

The Earth is on its way to becoming the fifth planet in the Solar System to gain planetary rings. However, unlike the rock-and-ice rings of Jupiter, Saturn, Neptune, and Uranus, Earth’s rings will be made of scrap and junk, and they will be wholly human-made.

According to NASA’s Orbital Debris Program Office, there are an estimated 23,000 pieces of orbital debris larger than a softball, joined by a few hundred million pieces smaller than that. These travel at speeds of 17,500 mph (28,160 km/h), pose an immense threat to satellites and space travel, and hamper research efforts.
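A quick calculation shows why even tiny fragments are dangerous at those speeds; the one-gram mass below is an assumed example, not a figure from NASA:

```python
# Kinetic energy of a small debris fragment at orbital speed (illustrative).
speed_ms = 28_160 / 3.6   # 17,500 mph converted to m/s (~7,800 m/s)
mass_kg = 0.001           # a 1-gram fleck of paint or metal

energy_j = 0.5 * mass_kg * speed_ms ** 2
print(f"~{energy_j / 1000:.0f} kJ")   # ~31 kJ, roughly ten rifle bullets' worth
```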

Because of their high speeds, removing these pieces of space debris is very risky — and hard to pull off.

“Most of that junk is spinning,” Abbott added. “Reach out to stop it with a robotic arm, you’ll break the arm and create more debris.”

A small part of this debris, around 200 to 400 pieces, burns up in the Earth’s atmosphere every year. However, fresh pieces make their way into orbit as the planet’s orbit is increasingly used and traversed. Plans by private entities to launch thousands of new satellites in the coming years will only make the problem worse.

Abbott’s team proposes using a magnetic device to capture debris or pull it down into low orbit, where it will eventually burn up in the Earth’s atmosphere.

“We’ve basically created the world’s first tractor beam,” he told the Salt Lake Tribune. “It’s just a question of engineering now. Building and launching it.”

The paper “Dexterous magnetic manipulation of conductive non-magnetic objects” has been published in the journal Nature.

Holographic camera can see around corners or even through the skin

The holographic camera prototype. Credit: Northwestern University.

Researchers at Northwestern University have devised a high-resolution holographic camera that images objects outside its line of sight, revealing objects hidden behind corners, as well as those obstructed by barriers, such as a deer behind a forest line. The camera can also see through fog and even human skin, which could make it a fantastic new medical imaging tool on par with MRI machines and CT scanners.

This impressive new imaging method, known as synthetic wavelength holography, works by reconstructing the path a beam of light takes as it scatters off various objects, bouncing between surfaces until it makes its way back to a detector at the source. An algorithm traces the path of the scattered light, making it possible to see the world from the perspective of a remote surface, even one outside the camera’s direct line of sight.

“If you have ever tried to shine a flashlight through your hand, then you have experienced this phenomenon,” said Florian Willomitzer, first author of the study, explaining how light scattering works. “You see a bright spot on the other side of your hand, but, theoretically, there should be a shadow cast by your bones, revealing the bones’ structure. Instead, the light that passes the bones gets scattered within the tissue in all directions, completely blurring out the shadow image.”

The new technology is a type of non-line-of-sight (NLoS) imaging. Researchers at Stanford University recently presented another impressive demonstration of NLoS that images moving objects inside a room using a single laser beam fired through a keyhole.

But compared to other NLoS technologies, this new method takes things to a whole new level, rapidly capturing full-field images at high resolution with submillimeter precision.

The key to imaging obstructed objects is to intercept the scattered light and measure its time of travel with precision. Typically, you’d need a cumbersome apparatus consisting of very fast detectors to achieve this goal. The researchers thought of a workaround and combined two lasers to generate a synthetic light wave that can capture the entire field of vision of an object in a hologram, essentially reconstructing its entire 3-D shape.
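The “synthetic wave” trick is easier to grasp with numbers. Beating together two lasers of nearly identical wavelength produces a synthetic wavelength of Λ = λ₁λ₂/|λ₁ − λ₂|, which can be millimeters long even though each laser’s light is sub-micron. The wavelengths below are assumed for illustration, not taken from the paper:

```python
# Synthetic wavelength from two closely spaced lasers (illustrative values).
import math

lam1 = 854.0e-9   # wavelength of laser 1, in meters (assumed)
lam2 = 854.1e-9   # wavelength of laser 2, in meters (assumed)

synthetic = lam1 * lam2 / abs(lam2 - lam1)
print(f"Synthetic wavelength: {synthetic * 1e3:.1f} mm")   # ~7.3 mm

# Depth can be read off the synthetic phase; for round-trip ranging it is
# unambiguous within half the synthetic wavelength.
def depth_from_phase(phase_rad: float) -> float:
    return (phase_rad / (2 * math.pi)) * synthetic / 2
```

Because the synthetic wavelength is so much longer than the optical one, its phase survives the scrambling caused by rough surfaces, which is what allows a hologram to be formed after multiple bounces.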

Due to its high temporal resolution and fast response time (under 50 milliseconds), the camera could theoretically image fast-moving objects, such as cars or pedestrians hidden behind a curving road.

“This technique turns walls into mirrors,” Willomitzer said. “It gets better as the technique also can work at night and in foggy weather conditions.”

The same tool can also see through tissue, revealing a beating heart or other internal organs obstructed by the skin since the same principle of light scattering applies in both instances. As long as there’s an opaque barrier, such as a wall, shrub, box, or skin, the holographic camera can see objects around corners.

The technology combines four key attributes, highlighted here in potential future NLoS application scenarios: in each example, a scattering surface or medium is used to indirectly illuminate hidden objects and intercept the light they scatter back. Credit: Nature Communications.

Self-driving cars would have a lot to gain by incorporating this technology that could prevent a lot of accidents and save lives, but the Northwestern researchers believe it could prove most useful in medical imaging where it could replace or supplement endoscopes. Rather than cramming and tugging a flexible camera through tight spaces and around corners, such as during a colonoscopy, the holographic imaging could use light instead to image the many folds inside the intestines in a completely non-invasive manner. Similarly, the same method could be used to image damaged industrial equipment without having to disassemble it part by part.

“If you have a running turbine and want to inspect defects inside, you would typically use an endoscope,” Willomitzer said. “But some defects only show up when the device is in motion. You cannot use an endoscope and look inside the turbine from the front while it is running. Our sensor can look inside a running turbine to detect structures that are smaller than one millimeter.”

The current sensor prototype uses visible or infrared light, but it could theoretically be reconfigured and extended to other frequencies for use in space exploration or underwater acoustic imaging. It might take a while though before we see this technology transition from the lab to the commercial market.

“It’s still a long way to go before we see these kinds of imagers built-in cars or approved for medical applications,” Willomitzer said. “Maybe 10 years or even more, but it will come.”

The findings appeared in the journal Nature Communications.

What are ‘iron lungs’, and could this old tech still be useful today?

Although they’re relatively old technology, there is renewed interest in iron lungs today against the backdrop of the coronavirus pandemic.

An iron lung device. Image credits The B’s / Flickr.

Few devices can boast having as terrifying (and cool) a name as the iron lung. These somewhat outdated machines were the earliest devices designed to help patients breathe. Compared to modern breathing aids, they were huge and quite scary-looking.

Still, iron lungs were a very important development in their time. Amid the COVID-19 pandemic, there has also been renewed interest in these devices, as they can be used as an alternative to modern ventilators.

So let’s take a look at exactly what iron lungs are, and how they came to be.

So what are they?

Iron lungs are quite aptly named; unlike most modern ventilators, they function using the same mechanism as our own lungs.

An iron lung is a type of negative pressure ventilator. This means that it creates an area of low pressure, a partial vacuum, to draw air into a patient’s chest cavity. In broad strokes, this is the exact mechanism our bodies employ, via movements of the diaphragm, to let us breathe.

The concept behind these devices is quite simple. The main component of an iron lung is a chamber, usually a metal tube (hence the ‘iron’ part in its name) that can fit the body of a patient from the neck down. This acts as an enclosed space in which pressure can be modified to help patients breathe. The other main component of the device is mobile and actually changes the pressure inside the tube. Usually, this comes in the form of a rubber diaphragm connected to an electrical motor, although other sources of power have been used, including manual labor.

Patients are placed inside an iron lung, with only their head and part of their neck (from the voice box upwards) left outside the cylinder. A membrane is placed around the neck to ensure that the cylinder is sealed. Afterward, the diaphragm is repeatedly pulled back and pushed in to cycle between low and high pressure inside the chamber. Because the patient’s head and airways are left outside the cylinder, air moves into the patient’s lungs when the pressure inside is low; when the pressure increases, air is pushed back out.

The whole process mirrors the way our bodies handle breathing. Our diaphragm contracts and flattens, expanding the chest cavity and increasing the lungs’ internal volume, which pulls air in from the outside. When the diaphragm relaxes, the chest cavity shrinks and air is pushed back out. Iron lungs work much the same way, but they expand and contract the lungs, along with the rest of the chest cavity, from outside the body.

This process is known as negative pressure breathing; low (‘negative’) pressure is generated in the lungs in order to draw in air. Most modern ventilators work via positive pressure: they generate high pressure inside the device to push air into the patient’s lungs.
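A toy model makes the pressure cycle concrete. The swing amplitude and breathing rate below are assumed, illustrative values, not clinical settings:

```python
# Toy model of an iron lung's pressure cycle (illustrative numbers).
import math

ATMOSPHERE = 1013.0   # ambient pressure at the patient's mouth, hPa

def chamber_pressure(t_seconds: float, amplitude: float = 15.0,
                     breaths_per_min: float = 12.0) -> float:
    """Sinusoidal pressure swing inside the sealed cylinder."""
    phase = 2 * math.pi * breaths_per_min * t_seconds / 60
    return ATMOSPHERE - amplitude * math.sin(phase)  # dip below ambient first

for t in range(6):
    p = chamber_pressure(t)
    action = "inhale (air drawn in)" if p < ATMOSPHERE else "exhale (air pushed out)"
    print(f"t={t}s  chamber={p:6.1f} hPa  -> {action}")
```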

One advantage of such ventilators is that patients can use them without being sedated or intubated. On the one hand this eases the pressure on medical supplies each patient requires; on the other, it slashes the risks associated with the use of anesthetics — such as allergic reactions or overdoses — and the risk of mechanical lesions following intubation.

Epidemics, pandemics

An opened iron lung device at the Science Museum, London. Image credits Stefan Kühn / Wikimedia.

“The desperate requests for ventilators in today’s treatment of patients in the grasp of the coronavirus brought to mind my encounter with breathing machines in the early 1950s polio epidemic, when I signed up as a volunteer to manually pump iron lungs in case of power failure at Vancouver’s George Pearson Centre,” recounts George Szasz, CM, MD, in a post for the British Columbia Medical Journal.

Iron lungs saw their greatest levels of use in developed countries during the poliomyelitis outbreaks of the 1940s and 1950s. One of the deadliest symptoms of polio is muscle paralysis, which can make it impossible for patients to breathe. The worst cases would see patients requiring ventilation for up to several weeks. Back then, iron lungs were the only available option for mechanical ventilation, and they saved innumerable lives.

As technology progressed, however, iron lungs fell out of use. They were bulky and intimidating machines, hard to transport and store despite their reliability and mechanical simplicity. With more compact ventilators, the advent of widespread intubation, and techniques such as tracheostomies, such devices quickly dwindled in number and use. From an estimated peak of around 1,200 iron lungs in the U.S. during the ’40s and ’50s, fewer than 30 are estimated to still be in use today.

There are obvious parallels between those polio epidemics of old and today’s COVID-19 pandemic in regards to the need for ventilation. Because of this, machines such as the iron lung have been suggested as a possible treatment option for COVID-19 patients. In many cases, such devices can help, but not in all.

In cases of severe COVID-19 infections, the tissues of the lungs themselves are heavily affected. A buildup of fluid in the lungs can physically prevent air from reaching the alveoli (the structures in the lung where gases are exchanged between the blood and the environment). While iron lungs can perform the motions required to breathe even for patients who are incapable of doing it themselves, they cannot generate enough pressure to push air through the tissues affected by a COVID-19 infection.

“Iron lungs will not work for patients suffering from severe COVID-19 infections,” explains Douglas Gardenhire, a Clinical Associate Professor and Chair of Georgia State University’s (GSU) Department of Respiratory Therapy. “Polio interrupted the connection between brain and diaphragm, and while some polio patients did have pneumonia, it was not the principal issue. For the most part, the lungs themselves did not have any change in their dynamic characteristics.”

“COVID-19 pneumonia physically changes the composition of the lungs,” adds Robert Murray, a Clinical Assistant Professor at GSU. “The consolidation of fluid in the lungs will not respond to the low pressure generated by the iron lung. The lungs of a COVID-19 patient will be a heterogeneous mix of normal and consolidated lung tissue, making mechanical ventilation very difficult.”

Still an alternative

Although patients with severe COVID-19 infections might not benefit from the iron lung, there are cases in which the device can prove useful. One paper (Chandrasekaran and Shaji, 2021) explains that there still is a need for negative pressure ventilators in modern hospitals, especially for patients who have experienced ventilator-induced lung injuries. The use of negative pressure ventilators, especially in concert with an oxygen helmet, may also play a part in reducing the number of infections by limiting the spread of viruses through contaminated materials in cases where resources are stretched thin, the team adds.

While the concept is being retained, however, the actual devices are getting an upgrade. One example is the device produced by UK charity Exovent, which aims to be a more portable iron lung. Exovent’s end goal is to provide a life-saving device that imposes fewer limits on what activities patients can undertake. A seemingly simple but still dramatic improvement, for example, is that patients can use their hands to touch their faces even while the Exovent device is in operation. Eating and drinking while using the device are also possible.

Exovent’s ventilator was designed before the coronavirus outbreak to help the millions of people suffering from respiratory issues including pneumonia worldwide. However, its designers are confident that, in conjunction with oxygen helmets, it can help patients who are recovering from a coronavirus infection — a process that leaves them with breathing difficulties for months.

All things considered, iron lungs have made a huge difference for the lives of countless patients in the past, and they continue to serve many. Although most of them today look like archaic devices, engineers are working to update and spruce them up for the modern day. And, amid modern ventilators, there still seems to be a role — and a need — for devices such as iron lungs.

New AI approach can spot anomalies in medical images with better accuracy

Researchers have trained a neural network to analyze medical images and detect anomalies. While this won’t replace human analysts anytime soon, it can help physicians sift through countless scans quicker and look for any signs of problems.

Image credits: Shvetsova et al (2021).

If there’s one thing AI is really good at, it’s spotting patterns. Whether it’s written data, audio, or images, AI can be trained to identify patterns — and one particularly interesting application is using it to identify anomalies in medical images. This has already been tested in some fields of medical imagery with promising results.

However, AI can also be notoriously easy to fool, especially with real-life data. In the new study, researchers in the group of Professor Dmitry Dylov at Skoltech presented a new method through which AI can detect anomalies. The method, they say, is better than existing ones and can detect barely visible anomalies.

“Barely visible abnormalities in chest X-rays or metastases in lymph nodes on the scans of the pathology slides resemble normal images and are very difficult to detect. To address this problem, we introduce a new powerful method of image anomaly detection,” the study authors write.

The proposed approach essentially suggests a new baseline for anomaly detection in medical image analysis tasks. It’s good at detecting anomalies that represent medical abnormalities, as well as problems associated with medical equipment.

“An anomaly is anything that does not belong to the dominant class of ‘normal’ data,” Dylov told ZME Science. “If something unusual is present in the field of view of a medical device, the algorithm will spot it. Examples include both imaging artifacts (e.g., dirt on the microscope’s slide) and actual pathological abnormalities in certain areas of the images (e.g., cancerous cells which differ in shape and size from the normal cells). In the clinical setting, there is value in spotting both of these examples.”
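The Skoltech method itself is more sophisticated, but the general logic of image anomaly detection can be illustrated with a common baseline: train a model only on “normal” images, then flag anything it fails to reconstruct well. The toy autoencoder below is a generic illustration of that idea, not the team’s architecture:

```python
# Generic anomaly-detection baseline (illustration only, not the paper's method):
# an autoencoder trained on normal scans reconstructs them well, so a high
# reconstruction error marks an image as a likely anomaly.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(          # tiny model for 64x64 grayscale images
    nn.Flatten(),
    nn.Linear(64 * 64, 256), nn.ReLU(),
    nn.Linear(256, 32), nn.ReLU(),    # bottleneck forces it to learn "normal"
    nn.Linear(32, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64),
)

def anomaly_score(image: torch.Tensor) -> float:
    """Mean squared reconstruction error; higher means more anomalous."""
    flat = image.view(1, -1)
    with torch.no_grad():
        reconstruction = autoencoder(flat)
    return nn.functional.mse_loss(reconstruction, flat).item()

# In practice: fit a threshold on held-out normal images and flag anything above it.
```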

The maximum observed improvement compared to conventional AI training was 10%, Dylov says, and excitingly, the method is already mature enough to be deployed into the real world.

“With our algorithm, medical practitioners can immediately sort out artifactual images from normal ones. They will also receive a recommendation that a certain image or a part of an image looks unlike the rest of the images in the dataset. This is especially valuable when big batches of data are to be reviewed manually by the experts,” Dylov explained in an email.

The main application of this approach is to ease the workload of experts analyzing medical images and help them focus on the most important images rather than manually going through the entire dataset. The more this type of approach is improved, the more AI can help doctors make the most of their time and improve the results of medical imaging analysis.

The study was published in an IEEE (Institute of Electrical and Electronics Engineers) journal.

Flyboard Air from Zapata.

Hoverboards are now real — and the science behind them is dope

What could be the coolest way of going to work you can imagine? Let me help you out. Flying cars: not here yet. Jetpacks: cool, but not enough pizzazz. No, there’s only one correct answer to this question: a hoverboard.

A whole generation of skateboarders and sci-fi enthusiasts (especially Back to the Future fans) have been waiting for a long time to see an actual levitating hoverboard. Well, the wait is over. The future is here. 

Franky Zapata flying on Flyboard Air. Image credits: Zapata/YouTube.

There were rumors in the ’90s claiming that hoverboards had been invented but were kept off the market because powerful parent groups opposed the idea of flying skateboards being used by children. Well, there was little truth to those rumors; hoverboards weren’t truly developed until very recently. No longer a fictional piece of technology, levitating boards now exist for real, and there is a lot of science working behind them.

A hoverboard is basically a skateboard without tires that can fly above the ground while carrying a person on it. As the name implies, it’s a board that hovers — crazy, I know.

The earliest mention of a hoverboard is found in Michael K. Joseph’s The Hole in the Zero, a sci-fi novel published in 1967. Even before Joseph, however, American aeronautical engineer Charles Zimmerman had come up with the idea of a flying platform that looked like a large hoverboard.

Zimmerman’s concept later became the inspiration for a small experimental aircraft called the Hiller VZ-1 Pawnee. This bizarre levitating platform was developed by Hiller Aircraft for the US military, and it flew successfully in 1955. However, only six such platforms were built, because the army found no use for them in military operations. Hoverboards were feasible, but they were still too difficult to build with the day’s technology.

Hoverboards were largely forgotten for decades and seemed to fall out of favor. Then came Back to the Future.

A page from the book Back to the Future: The Ultimate Visual History. Image credits: /Film

The hoverboard idea gained huge popularity after the release of Robert Zemeckis’s Back to the Future II in 1989. The film featured a chase sequence in which the lead character Marty McFly is seen flying a pink hoverboard while being followed by a gang of bullies. In the last two decades, many tech companies and experts have attempted to create a flying board that could function like the hoverboard shown in the film.

Funnily enough, Back to the Future II takes place in 2015, and hoverboards were common in the fictional movie. They’re not quite as popular yet, but they’re coming along.

The science behind hoverboards

Real hoverboards work by cleverly exploiting quantum mechanics and magnetic fields. It starts with superconductors: materials that have no electrical resistance and that expel magnetic fields (a phenomenon known as the Meissner effect). Scientists are very excited about superconductors and have been using them in experiments like the Large Hadron Collider.

Because superconductors expel magnetic fields, something weird happens when they interact with magnets. A magnet must maintain its north-south magnetic field lines, so if you place a superconductor above it, those field lines are forced to bend around the superconductor, and the resulting forces suspend it in the air. Tiny defects in the superconductor also “pin” some field lines in place, locking the floating object at a fixed height.

A magnet levitating above a high-temperature superconductor, cooled with liquid nitrogen. Image credits: Mai Linh Doan.

However, there’s a catch: superconductors gain their “superpowers” only at extremely low temperatures, around -230 degrees Fahrenheit (-145 Celsius) or colder. So real-world hoverboards need to be topped up with supercooled liquid nitrogen roughly every 30 minutes to maintain their extremely low temperature.

All existing superconductor-based hoverboards use this approach. While there has been some progress in creating room-temperature superconductors, that technology is not yet ready to be deployed in the real world. But then again, 30 minutes is better than nothing.
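For a rough sense of the physics involved, you can estimate the field strength needed to float a rider by balancing magnetic pressure, B²/(2μ₀), against weight. The rider mass and board footprint below are assumed for illustration:

```python
# Order-of-magnitude levitation estimate (assumed, illustrative numbers).
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def required_field(mass_kg: float, area_m2: float, g: float = 9.81) -> float:
    """Field whose magnetic pressure B^2 / (2*mu0) balances the load."""
    pressure_pa = mass_kg * g / area_m2
    return math.sqrt(2 * MU0 * pressure_pa)

# A 90 kg rider-plus-board on a 0.25 m^2 footprint:
print(f"{required_field(90, 0.25):.3f} T")   # ~0.094 T, well within magnet territory
```

That modest figure is part of why permanent-magnet tracks are enough for levitation; the hard part is the cryogenics, not the field strength.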

Some promising hoverboards and the technology behind them

In 2014, inventor and entrepreneur Greg Henderson listed a hoverboard prototype, the Hendo hoverboard, on the crowdfunding platform Kickstarter. The Hendo hoverboard could fly 2.5 cm above the ground carrying 300 lb (140 kg) of weight, but just like maglev trains, it required a track made of non-ferromagnetic metals to function.

The hoverboard relied on magnetic levitation, a principle that allows an object to overcome gravity and stay suspended in the air in the presence of a magnetic field. However, the hoverboard never went into mass production, as Henderson used the gadget mainly as a means to promote his company, Arx Pax Labs.

A year later, another inventor, Cătălin Alexandru Duru, developed a drone-like hoverboard prototype (registered under the name Omni hoverboard) and set a Guinness World Record with it for the farthest hoverboard flight. During his flight, Duru covered a distance of about 276 meters and reached a height of 5 meters.

ARCA CEO Dumitru Popescu controlling his ArcaBoard through body movement. Image Credits: Dragos Muresan/Wikimedia Commons

In 2015, Japanese automaker Lexus also came up with a cool liquid-nitrogen-cooled hoverboard that could levitate when placed on a special magnetic surface. The Lexus hoverboard uses yttrium barium copper oxide, a superconductor that, when cooled below its critical temperature, expels magnetic field lines. The superconductor relies on quantum levitation (and quantum locking) to make the hoverboard fly over a magnetic surface.

In December of the same year, Romania-based ARCA Space Corporation introduced an electric hoverboard called the ArcaBoard. Able to fly over any terrain, including water, this rechargeable hoverboard was marketed as a new mode of personal transportation. The company website mentions that the ArcaBoard is powered by 36 built-in electric fans and can be controlled either from a smartphone or through the rider’s body movements.

Components in an ArcaBoard. Image Credits: ARCA

One of the craziest hoverboard designs is Franky Zapata’s Flyboard Air. This hoverboard came into the limelight in 2016, when Zapata broke Cătălin Alexandru Duru’s Guinness World Record by covering a distance of 2,252.4 meters on his Flyboard Air. This powerful hoverboard is capable of flying at speeds of up to 124 miles per hour (200 km/h) and can reach as high as 3,000 meters (9,842 feet).

Flyboard Air comes equipped with five jet turbines that run on kerosene and has a maximum load capacity of 264.5 lbs (120 kg). At present, it can stay in the air for only 10 minutes but Zapata and his team of engineers are making efforts to improve the design further and make it more efficient. In 2018, his company Z-AIR received a grant worth $1.5 million from the French Armed Forces. The following year, Zapata crossed the English Channel with EZ-Fly, an improved version of Flyboard Air.

While the ArcaBoard actually went on sale in 2016 at an initial price of $19,900, the Lexus Hoverboard and Flyboard Air are still not available for public purchase. However, in a recent interview with DroneDJ, Cătălin Alexandru Duru revealed that he plans to launch a commercial version of his Omni hoverboard in the coming years.

California cultured meat plant is ready to produce 50,000 pounds of meat per year

In a residential neighborhood in Emeryville, California, a rather unusual facility has taken shape. The factory, which almost looks like a brewery, is actually a meat factory — but rather than slaughtering animals, it uses bioreactors to “grow” meat. According to the company that built it, it can already produce 50,000 pounds of meat per year, and has room to expand production to 400,000 pounds.

UPSIDE Chicken Salad

Upside Foods (previously called Memphis Meats) started out in 2015 as one of the pioneers of the nascent food-growing industry. Now, just 6 years later, there are over 80 companies working to bring lab-grown meat to the public — including one in Singapore which is already selling cultured chicken.

The fact that such a factory can be built at all (while regulatory approval is still pending and Upside can’t technically sell its products) is already striking. Upside’s new facility is located in an area known more for its restaurants than its factories, but with $200 million in funding and ever-growing consumer interest, the company seems to be sending a strong message.

Cultivating meat

The new facility is a testament to how much technology in this field has grown. The company can not only produce ground meat, but cuts of meat as well. Chicken breast is the first planned product, and the company says they can produce many types of meat, from duck to lobster.

“When we founded UPSIDE in 2015, it was the only cultivated meat company in a world full of skeptics,” says Uma Valeti, CEO and Founder of UPSIDE Foods. “When we talked about our dream of scaling up production, it was just that — a dream. Today, that dream becomes a reality. The journey from tiny cells to EPIC has been an incredible one, and we are just getting started.”

There’s still no word on how much these products will cost, but they probably won’t be the cheapest meat on the market. Although lab-grown meat is nearing cost-competitiveness with slaughter meat, it’s not quite there yet. Meanwhile, Upside has already announced that its chicken products will be served by three-Michelin-starred chef Dominique Crenn. Crenn is the first female chef in the US to be awarded three Michelin stars, and she famously removed meat from her menus in 2018 to make a statement against the negative impact of animal agriculture on the global environment and the climate crisis.

Not for sale yet

Upside isn’t the only company to recently receive a lot of money in funding. Their San Francisco rival Eat Just, which became the first company in the world to sell lab-grown meat, received more than $450 million in funding. A 2021 McKinsey & Company report estimates that the cultivated meat industry will surge to $25 billion by 2030. However, in the US (and almost every country on the globe) cultured meat isn’t approved for sale yet.

The FDA has largely been silent on lab-grown meat since 2019, and while many expect a verdict soon, there’s no guarantee of a timeline. Even if the FDA allows the sale and consumption of lab-grown meat in the US, it will likely do so on a product-by-product basis rather than opening the floodgates to lab-grown meat as a whole. In the EU, things will likely move even slower.

However, pressure is mounting. In addition to the obvious ethical advantages of lab-grown meat, its environmental impact may also be less severe than that of slaughter meat. However, this has not been confirmed since we don’t yet have a large-scale production facility, and the few available studies don’t have definitive conclusions.

This is why having a working factory is so exciting: it could offer the first glimpses of how sustainable the practice actually is. Upside says the facility uses 100% renewable energy, and the company has expressed its desire to have a third party verify the facility’s sustainability by mid-2022.

Of course, all of this depends on the regulatory approval that may or may not come anytime soon. In the meantime, the factory is ready and good to go.

Machine learning reveals archaeology from up to 5,000 years ago

As modern technologies are emerging, they can help us learn a thing or two about ancient history as well. In a new study published by Penn State researchers, a machine learning algorithm was able to find previously undiscovered shell rings and shell mounds left by Indigenous people 3,000 to 5,000 years ago.

Shell rings in LiDAR data. The rings stand out due to their slope and elevation change compared to the surrounding landscape. Image credits: Dylan Davis, Penn State.

When humans build structures, it changes the environment around them. Even once a structure is gone, the remains can still be detectable for hundreds or even thousands of years. For instance, if you build a house, the porosity and topography of the surrounding soil will change ever so slightly, as will the chemistry of the soil beneath your house (as traces of man-made materials seep underground). Oftentimes, we can detect these changes if we look closely enough, and with the proper technological tools. Maybe it’s a tiny slope, maybe it’s some difference in soil humidity, or something else, but if we can gather the right type of data, we can see where human structures were built even thousands of years ago.

But it’s not easy. For decades, researchers looked for structures from the ground, based on historical hints or what they could see with the naked eye. But vegetation can easily mask these subtle differences. In recent years, though, aerial surveys have made a big difference. With airborne LiDAR, synthetic aperture radar, or other types of spectral data, researchers have been able to uncover archaeological structures far more easily than before.

But there was still a problem: there’s a lot of airborne data to analyze, and the data isn’t always clear. So how do you comb through all the data and find what looks promising? Well, you train an algorithm, of course.

The team began with a public LiDAR data set and then used a deep learning process to train an algorithm to find shell rings, shell mounds, and other landscape features that could be indicative of archaeological remains. They manually went over the maps and located the known rings, using these to train the algorithm. To stretch the limited training data further, they also rotated some of the maps by 45 degrees.

“There are only about 50 known shell ring sites in the Southeastern U.S.,” says Dylan S. Davis, a doctoral candidate in anthropology at Penn State and an author of the new study. “So, we needed more locations for training.”

“One difficulty with deep learning is that it usually requires massive amounts of information for training, which we don’t have when looking for shell rings,” Davis adds. “However, by augmenting our data and by using synthetic data, we were able to get good results, although, because of COVID-19, we have not been able to check our new shell rings on the ground.”
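The rotation trick is a standard form of data augmentation, and it’s easy to sketch. The snippet below uses hypothetical tile sizes and angles to show the idea; it is not the team’s actual pipeline:

```python
# Sketch of rotation-based augmentation for LiDAR training tiles
# (illustrative, not the study's code).
import numpy as np
from scipy.ndimage import rotate

def augment(tile: np.ndarray, angles=(45, 90, 135, 180, 225, 270, 315)):
    """Yield the original labeled tile plus rotated copies of it."""
    yield tile
    for angle in angles:
        yield rotate(tile, angle=angle, reshape=False, mode="nearest")

# ~50 known shell rings x 8 orientations -> 400 training examples
known_ring_tiles = [np.random.rand(128, 128) for _ in range(50)]  # placeholder rasters
training_set = [a for tile in known_ring_tiles for a in augment(tile)]
print(len(training_set))   # 400
```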

After training the algorithm, the team was able to use it to discover hundreds of new promising structures, including some in counties where no previous discovery had been made. Since shell rings are thought to have been centers for the exchange of goods, they can provide a lot of information on ancient societies, showing what resources these communities traded and whether they used the available resources sustainably.

Aerial view of shell rings located on Daws Island, South Carolina. Both rings are approximately 150 to 200 feet in diameter and are composed largely of oyster, mussel, and clam shells.

“The rings themselves are a treasure trove for archaeologists,” said Davis. “Excavations done at some shell rings have uncovered some of the best preservation of animal bones, teeth and other artifacts.”

Archaeologists will now try to explore these sites on the ground and confirm the findings. But what’s perhaps even more exciting is that the artificial intelligence algorithms the team used are already included in ArcGIS, a commercially available geographic information system. This means the algorithms could be trained to find different types of structures in different geographical areas, potentially opening a whole new era of airborne archaeological exploration. The researchers also provide the code and tools they used and encourage others to replicate their approach. It doesn’t even need to be archaeology; other structures of interest could be sought the same way.

“Archaeologists are using more and more AI and automation techniques,” Davis concludes. “It can be extremely complicated and requires specific skill sets and usually requires large amounts of data.”