Category Archives: Tech

Apple carts Crimea as part of Ukraine, halts sale of products and services to Russia

A recent Apple Maps update lists Crimea as Ukrainian territory. It’s the first time since Russia’s annexation of Crimea in 2014 that Apple seems to recognize Crimea as Ukrainian.

Russia’s military forces swiftly invaded Ukrainian Crimea in 2014, occupying it and claiming it as theirs. Initially, Apple refused to regard Crimea as belonging to any country, but in 2019, after pressure from Russia, the tech giant labeled the peninsula as Russian.

The State Duma, the Russian parliament’s lower house, hailed this move as something that gives legitimacy to its occupation: “Crimea and Sevastopol now appear on Apple devices as Russian territory,” the Duma said in a statement, adding that after months of discussion, it convinced Apple to fix this “inaccuracy” and was happy with the outcome.

“There is no going back,” said Vasily Piskaryov, chairman of the Duma security and anti-corruption committee, in 2019. “Today, with Apple, the situation is closed – we have received everything we wanted.”

But there was going back.

Now, after Russia’s 2022 invasion of Ukraine, most of the world has come together to condemn the actions carried out by the Russian state, and Apple has apparently joined in.

Apple has paused the sale of products and services in Russia, with the tech giant saying it is “deeply concerned” about the Russian invasion and stands with those “suffering as a result of the violence”. Apple Pay and Apple Maps have also been limited in Russia. Now, the Maps update suggests that Apple no longer recognizes Russian legitimacy in Crimea — though it also shows that this recognition is reversible.

Mykhailo Fedorov, Vice Prime Minister and Minister of Digital Transformation of Ukraine, says he’s contacted Apple executives to push for further sanctions.

It’s extraordinarily rare for Apple to take such a stand, and it shows that the chorus of giant companies against Russian aggression is growing stronger.

However, the move also had an unexpectedly negative consequence: after Russia’s crackdown on the last free journalists in the country, there was no way for publishers to circumvent the censorship — because Apple also blocked software updates.

For now, the situation in Ukraine remains critical, and the Russian crackdown inside its own borders shows signs of intensifying. While it’s important for companies (especially big tech) to stand up against aggression, big tech companies also have a responsibility of ensuring a free flow of information — with Russian authorities trying to censor the information coming through, this has never been more important.

Brain scans are saving convicted murderers from death row–but should they?

Over a decade ago, a brain-mapping technique known as a quantitative electroencephalogram (qEEG) was used in a death penalty case for the first time, helping keep a convicted killer and serial child rapist off death row. It achieved this by convincing jurors that traumatic brain injury (TBI) had left him prone to impulsive violence.

In the years since, qEEG has remained in a strange limbo, accepted inconsistently in a small number of death penalty cases in the USA. In some trials, prosecutors fought it as junk science; in others, they raised no objections to the imaging, producing a case history built on sand. Still, this handful of test cases could signal a new era in which science helps push the legal execution of humans toward abolition.

Quantifying criminal behavior to prevent it

Science cannot yet quantify or explain every event or action in the universe, and where it cannot, we are left with conjecture. But DNA evidence aside, isn’t that what happens in a criminal court case anyway? So why is it so hard to integrate verified neuroimaging into legal cases? Of course, one could make a solid argument that it would be easier to simply do away with the barbaric death penalty and concentrate on stopping these awful crimes from occurring in the first place, but that is a different debate.

The problem is more complex than it seems. Neuroimaging could be used not just to exempt the mentally ill from the death penalty but also to explain horrendous crimes to the victims or their families. And just as crucially, could governments start implementing measures to prevent this type of criminal behavior, using electrotherapy or counseling to ‘rectify’ abnormal brain patterns? This could lead down some very slippery slopes.

And it’s not just death row cases putting qEEG to the test — nearly every injury lawsuit in the USA now includes a TBI claim. With magnetic resonance imaging (MRI) and computed tomography (CT) scans being generally expensive, lawyers are constantly seeking new ways to prove brain dysfunction. Readers should note that both of these neuroimaging techniques are viewed as more accurate than qEEG but can only provide a single, static image of the neurological condition – and thus no direct measurement of functional, ongoing brain activity.

In contrast, the cheaper and quicker qEEG purports to continuously monitor active brain activity to diagnose many neurological conditions, and could one day flag those more inclined to violence, enabling early interventional therapy sessions and one-to-one help focused on preventing the problem.

But until we reach that sort of societal level, defense and human rights lawyers have been attempting to slowly phase out legal executions by using brain mapping to explain why their convicted clients may have committed these crimes – gradually moving from litigating the consequences of mental illness and disorders to understanding these conditions better.

The sad case of Nikolas Cruz

But the questions surrounding this technology will soon be on trial again in the most high-profile death penalty case in decades: Florida vs. Nikolas Cruz. On the afternoon of February 14, 2018, Cruz, then just 19 years old, opened fire on school children and staff at Marjory Stoneman Douglas High in Parkland. In what is now classed as the deadliest high school shooting in the country’s history, the state charged the former Stoneman Douglas student with the premeditated murder of 17 school children and staff and the attempted murder of a further 17 people.

With the sentencing expected in April 2022, Cruz’s defense lawyers have enlisted qEEG experts as part of their case to persuade jurors that brain defects should spare him the death penalty. The Broward State Attorney’s Office signaled in a court filing last month that it will challenge the technology and ask a judge to exclude the test results—not yet made public—from the case.

Cruz has already pleaded guilty to all charges, but a jury will now debate whether to hand down the death penalty or life in prison.

According to a court document filed recently, Cruz’s defense team intends to ask the jury to consider mitigating factors. These include his tumultuous family life, a long history of mental health disorders, brain damage caused by his mother’s drug addiction, and claims that a trusted peer sexually abused him—all expected to be verified using qEEG.

After reading the flurry of news reports on the upcoming case, one can’t help but wonder why, even without the use of qEEG, someone with a record of mental health issues at only 19 years old should be on death row. And as authorities and medical professionals were aware of Cruz’s problems, what preventative failings led to him murdering 17 individuals? Have these even been addressed or corrected? Unlikely.

On a positive note, prosecutors in several US counties have not opposed brain mapping testimony in recent years. According to Dr. David Ross, CEO of NeuroPAs Global and a qEEG expert, the reason is that a growing body of scientific papers and research has validated the test’s reliability, helping the technique gain broader use in the diagnosis and treatment of cognitive disorders, even though courts are still debating its effectiveness. “It’s hard to argue it’s not a scientifically valid tool to explore brain function,” Ross stated in an interview with the Miami Herald.

What exactly is a quantitative electroencephalogram (qEEG)?

To explain what a qEEG is, first you must know what an electroencephalogram (EEG) does. An EEG records the electrical potential difference between pairs of electrodes placed on the scalp, providing the analog data on which computerized qEEGs are built. Multiple electrodes (generally more than 20) are connected in pairs to form various patterns called montages, resulting in a series of paired channels of EEG activity. The results appear as squiggly lines on paper—brain wave patterns that clinicians have used for decades to detect evidence of neurological problems.

More recently, trained professionals have computerized this data to create qEEG – translating raw EEG data with mathematical algorithms to analyze brainwave frequencies. Clinicians then compare this statistical analysis against a database of standard or neurotypical brains to identify abnormal brain function that could underlie criminal behavior in death row cases.
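For readers curious about the nuts and bolts, here is a minimal sketch, in Python, of the kind of computation qEEG involves: estimate the power in each classical frequency band, then z-score it against a normative database. The sampling rate, band limits, and normative values are illustrative stand-ins, not those of any clinical system.

```python
# Minimal sketch of a qEEG-style analysis: estimate band power from raw EEG,
# then z-score it against a normative database. Illustrative only -- clinical
# qEEG pipelines add artifact rejection, montage handling, and proprietary
# normative databases.
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg_channel):
    """Estimate absolute power in each classical EEG band via Welch's PSD."""
    freqs, psd = welch(eeg_channel, fs=FS, nperseg=FS * 2)
    df = freqs[1] - freqs[0]  # frequency resolution
    return {band: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for band, (lo, hi) in BANDS.items()}

def z_scores(powers, norm_mean, norm_sd):
    """Compare a subject's band powers against normative means and SDs."""
    return {b: (powers[b] - norm_mean[b]) / norm_sd[b] for b in powers}

# Synthetic data standing in for a real recording and a real database:
rng = np.random.default_rng(0)
eeg = rng.normal(size=FS * 60)           # 60 seconds of fake single-channel EEG
norm_mean = {b: 0.5 for b in BANDS}      # hypothetical normative values
norm_sd = {b: 0.2 for b in BANDS}
print(z_scores(band_powers(eeg), norm_mean, norm_sd))
```

In real qEEG reports, it is z-scores falling well outside the normative range at particular electrode sites that get flagged as potential abnormalities.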

While this can be true, results can still go awry due to incorrect electrode placement, imaging artifacts, inadequate band filtering, drowsiness, comparisons against the wrong control database, and the choice of timeframes. Furthermore, processing can yield a large amount of clinically irrelevant data. These are some of the reasons the usefulness of qEEG remains controversial despite the volume of published research. However, many of these discrepancies can be corrected simply by having trained medical professionals operate the apparatus and interpret the data.

Just one case is disrupting the use of this novel technology

Yet, despite this easy correction, qEEG is not generally accepted by the relevant scientific community for diagnosing traumatic brain injuries and is therefore inadmissible under Frye v. United States, an archaic case from 1923 concerning a polygraph test. That trial came a mere 17 years after Cajal and Golgi won a Nobel Prize for producing slides and hand-drawn pictures of neurons in the brain.

Experts could also argue that a lie detector test (measuring blood pressure, pulse, respiration, and skin conductivity) is far removed from a machine monitoring brain activity. Furthermore, when the Court of Appeals of the District of Columbia decided on this lawsuit, qEEG didn’t exist. 

Applying the Frye standard, courts throughout the country have excluded qEEG evidence in the context of alleged brain trauma. The Florida Supreme Court, for example, has formally noted that the relevant scientific community for the purposes of Frye holds that “qEEG is not a reliable method for determining brain damage and is not widely accepted by those who diagnose a neurologic disease or brain damage.”

However, in a seminal paper covering the use of qEEG in cognitive disorders, the American Academy of Neurology (AAN) overall felt that computer-assisted diagnosis using qEEG is an accurate, inexpensive, easy-to-use tool and a valuable aid for diagnosing, evaluating, following up, and predicting response to therapy — despite the Academy’s reservations about the technology expressed elsewhere. The paper also features other neurological associations validating the use of this technology.

The introduction of qEEG on death row was not that long ago

Only recently introduced, the technology was first deemed admissible in court during the death-penalty prosecution of Grady Nelson in 2010. Nelson stabbed his wife 61 times with a knife, then raped and stabbed her 11-year-old intellectually disabled daughter and her 9-year-old son. The woman died, while her children survived. Documents state that Nelson’s wife had found out he had been sexually abusing both children for many years and sought to keep them away from him.

Nelson’s defense argued that earlier brain damage had left him prone to impulsive behavior and violence. Prosecutors fought to strike the qEEG test from evidence, contending that the science was unproven and misused in this case.

“It was a lot of hocus pocus and bells and whistles, and it amounted to nothing,” the prosecutor on the case, Abbe Rifkin, stated. “When you look at the facts of the case, there was nothing impulsive about this murder.”

However, after hearing the testimony of Dr. Robert W. Thatcher, a multi-award-winning pioneer in qEEG analysis appearing for the defense, Judge Hogan-Scola found that qEEG met the legal prerequisites for reliability. She based this on Frye and Daubert, the two benchmark standards governing the admissibility of scientific evidence.

She allowed jurors to hear the qEEG report and even permitted Thatcher to present a computer slide show of Nelson’s brain with an explanation of the effects of frontal lobe damage at the sentencing phase. He testified that Nelson exhibited “sharp waves” in this region, typically seen in people with epilepsy – explaining that Grady doesn’t have epilepsy but does have a history of at least three TBIs, which could explain the abnormality seen in the EEG.  

Interpreting the data, Thatcher also told the court that the frontal lobes, located directly behind the forehead, regulate behavior. “When the frontal lobes are damaged, people have difficulty suppressing actions … and don’t understand the consequences of their actions,” Thatcher told ScienceInsider.

Jurors rejected the death penalty. Two jurors who agreed to be interviewed by a major national publication later categorically stated that the qEEG imaging and testimony influenced their decision.

“The moment this crime occurred, Grady had a broken brain,” his defense attorney, Terry Lenamon, said. “I think this is a huge step forward in explaining why people are broken—not excusing it. This is going to go a long way in mitigating death penalty sentences.”

On the other hand, Charles Epstein, a neurologist at Emory University in Atlanta who testified for the prosecution, states that the qEEG data Thatcher presented relied on flawed statistical analysis riddled with artifacts not naturally present in EEG imaging. Epstein adds that the sharp waves Thatcher reported may have been blips caused by the contraction of muscles in the head. “I treat people with head trauma all the time,” he says. “I never see this in people with head trauma.”

You can see Epstein’s point, as it’s unclear whether these brain injuries occurred before or after Nelson brutally raped a 7-year-old girl in 1991 – a crime for which he was granted probation, after which he trained as a social worker.

All of which invokes the following questions: first, do we need qEEG to state that this person’s behavior is abnormal, or that the legal system does not protect children? And second, was the reaction of authorities in the 1991 case appropriate, let alone preventative?

With mass shootings and other forms of extreme violence remaining at relatively high levels in the United States – committed by ever-younger perpetrators flagged as loners and fantasists by the state mental healthcare systems they disappear into – it’s evident that sturdier preventative programs need to be implemented by governments worldwide. The worst has already occurred: our children are unprotected against dangerous predators and unaided when affected by unstable and abusive environments, inappropriate social media, and TV.

A potential beacon of hope, qEEG is already beginning to highlight the country’s broken socio-legal systems and the amount of work it will take to fix them. It attempts to humanize a fractured court system that still disposes of the products of trauma and abuse as if they were nothing but waste, forcing the authorities to answer for their failings – and any science that can do this can’t be a bad thing.

The fascinating science behind the first human HIV mRNA vaccine trial – what exactly does it entail?

In a moment described as a “potential first step forward” in protecting people against one of the world’s most devastating pandemics, Moderna, International AIDS Vaccine Initiative (IAVI), and the Bill and Melinda Gates Foundation have joined forces to begin a landmark trial — the first human trials of an HIV vaccine based on messenger ribonucleic acid (mRNA) technology. The collaboration between these organizations, a mixture of non-profits and a company, will bring plenty of experience and technology to the table, which is absolutely necessary when taking on this type of mammoth challenge.

The goal is more than worth it: helping the estimated 37.7 million people currently living with HIV (including 1.7 million children) and protecting those who will be exposed to the virus in the future. Sadly, around 16% of the infected population (6.1 million people) are unaware they are carriers.

Despite progress, HIV remains lethal. Disturbingly, in 2020, 680,000 people died of AIDS-related illnesses, despite inroads made in therapies to dampen the disease’s effects on the immune system. One of these, antiretroviral therapy (ART), has proven to be highly effective in preventing HIV transmission, clinical progression, and death. Still, even with the success of this lifelong therapy, the number of HIV-infected individuals continues to grow.

There is no cure for this disease. Therefore, the development of vaccines to either treat HIV or prevent the acquisition of the disease would be crucial in turning the tables on the virus.

However, it’s not so easy to make an HIV vaccine because the virus mutates very quickly, creating multiple variants within the body, which produce too many targets for one therapy to treat. Plus, this highly conserved retrovirus becomes part of the human genome a mere 72 hours after transmission, meaning that high levels of neutralizing antibodies must be present at the time of transmission to prevent infection.

Because the virus is so tricky, researchers generally consider that a therapeutic vaccine (administered after infection) is unfeasible. Instead, researchers are concentrating on a preventative or ‘prophylactic’ mRNA vaccine similar to those used by Pfizer/BioNTech and Moderna to fight COVID-19.

What is the science behind the vaccine?

The groundwork research was made possible by the discovery of broadly neutralizing HIV-1 antibodies (bnAbs) in 1990. They are the most potent human antibodies ever identified and are extremely rare, only developing in some patients with chronic HIV after years of infection.

Significantly, bnAbs can neutralize the particular viral strain infecting that patient and other variants of HIV–hence, the term ‘broad’ in broadly neutralizing antibodies. They achieve this by using unusual extensions not seen in other immune cells to penetrate the HIV envelope glycoprotein (Env). The Env is the virus’s outer shell, formed from the cell membrane of the host cell it has invaded, making it extremely difficult to destroy; still, bnAbs can target vulnerable sites on this shell to neutralize and eliminate infected cells.

Unfortunately, the antibodies do little to help chronic patients because there’s already too much virus in their systems; however, researchers theorize that if an HIV-free person could produce bnAbs, they might be protected from infection.

Last year, the same organizations tested a vaccine based on this idea in extensive animal tests and a small human trial that didn’t employ mRNA technology. It showed that specific immunogens—substances that can provoke an immune response—triggered the desired antibodies in dozens of people participating in the research. “This study demonstrates proof of principle for a new vaccine concept for HIV,” said Professor William Schief, Department of Immunology and Microbiology at Scripps Research, who worked on the previous trial.

Inducing bnAbs is the endgame for the potential HIV mRNA vaccine and the fundamental basis of its action. “The induction of bnAbs is widely considered to be a goal of HIV vaccination, and this is the first step in that process,” Moderna and IAVI said in a statement.

So how exactly does the mRNA vaccine work?

The experimental HIV vaccine delivers coded mRNA instructions for two HIV proteins into the host’s cells: the immunogens are Env and Gag, which make up roughly 50% of the total virus particle. As a result, this triggers an immune response allowing the body to create the necessary defenses—antibodies and numerous white blood cells such as B cells and T cells—which then protect against the actual infection.

Later, the participants will also receive a booster immunogen containing Gag and Env mRNA from two other HIV strains to broaden the immune response, hopefully inducing bnAbs.

Karie Youngdahl, a spokesperson for IAVI, clarified that the main aim of the vaccines is to stimulate “B cells that have the potential to produce bnAbs.” These then target the virus’s envelope—its outermost layer that protects its genetic material—to keep it from entering cells and infecting them.  

Pulling back, the team is adamant that the trial is still in the very early stages, with the volunteers possibly needing an unknown number of boosters.

“Further immunogens will be needed to guide the immune system on this path, but this prime-boost combination could be the first key element of an eventual HIV immunization regimen,” said Professor David Diemert, clinical director at George Washington University and a lead investigator in the trials.

What will happen in the Moderna HIV vaccine trial?

The Phase 1 trial involves 56 healthy, HIV-negative adults and will evaluate the safety and immunogenicity of vaccine candidates mRNA-1644 and mRNA-1644v2-Core. Moderna will explore how to deliver the proprietary eOD-GT8 60mer immunogen with mRNA technology and investigate how to use it to direct B cells to make proteins that elicit bnAbs, with the expert aid of its non-profit partners. Readers should note that only about one in every 300,000 B cells in the human body can produce them, which gives an idea of how slim the odds involved here are.

Sensibly, the trial isn’t ‘blind,’ which means everyone who receives the vaccine will know what they’re getting at this early stage. That’s because the scientists aren’t trying to work out how well the vaccine works in this first phase lasting approximately ten months – they want to make sure it’s safe and capable of mounting the desired immune response.

And even though there is much hype around this trial, experts caution that “Moderna are testing a complicated concept which starts the immune response against HIV,” says Robin Shattock, an immunologist at Imperial College London, to the Independent. “It gets you to first base, but it’s not a home run. Essentially, we recognize that you need a series of vaccines to induce a response that gives you the breadth needed to neutralize HIV. The mRNA technology may be key to solving the HIV vaccine issue, but it’s going to be a multi-year process.”

And after this long period, if the vaccine is found to be safe and shows signs of producing an immune response, it will progress to more extensive real-world studies and a possible solution to a virus that is still decimating whole communities.

Still, this hybrid collaboration offers hope that clinical trials can prioritize humans over financial gain; after all, most people living with HIV are in the developing world.

As IAVI president Mark Feinberg wrote in June at the 40th anniversary of the HIV epidemic: “The only real hope we have of ending the HIV/AIDS pandemic is through the deployment of an effective HIV vaccine, one that is achieved through the work of partners, advocates, and community members joining hands to do together what no one individual or group can do on its own.”

Whatever the outcome, money is not the driving motive here, and with luck, we may see more trials based on this premise very soon.

China builds the world’s first artificial moon

Chinese scientists have built an ‘artificial moon’ possessing lunar-like gravity to help them prepare astronauts for future exploration missions. The structure uses a powerful magnetic field to produce the celestial landscape — an approach inspired by experiments once used to levitate a frog.

The key component is a vacuum chamber that houses an artificial moon measuring 60cm (about 2 feet) in diameter. Image credits: Li Ruilin, China University of Mining and Technology

Preparing to colonize the moon

Simulating low gravity on Earth is a complex process. Current techniques require either flying a plane that enters a free fall and then climbs back up again or jumping off a drop tower — but these both last mere minutes. With the new invention, the magnetic field can be switched on or off as needed, producing no gravity, lunar gravity, or earth-level gravity instantly. It is also strong enough to magnetize and levitate other objects against the gravitational force for as long as needed.

All of this means that scientists will be able to test equipment in the extreme simulated environment to prevent costly mistakes. This is beneficial as problems can arise in missions due to the lack of atmosphere on the moon, meaning the temperature changes quickly and dramatically. And in low gravity, rocks and dust may behave in a completely different way than on Earth – as they are more loosely bound to each other.

Engineers from the China University of Mining and Technology built the facility (which they plan to launch in the coming months) in the eastern city of Xuzhou, in Jiangsu province. At its heart, a vacuum chamber containing no air houses a mini “moon” measuring 60cm (about 2 feet) in diameter. The artificial landscape consists of rocks and dust as light as those found on the lunar surface – where gravity is about one-sixth as strong as on Earth – thanks to powerful magnets that levitate them off the ground. The team plans to test a host of technologies whose primary purpose is to perform tasks and build structures on the surface of the Earth’s only natural satellite.

Group leader Li Ruilin of the China University of Mining and Technology says the facility is the “first of its kind in the world” and will take lunar simulation to a whole new level, adding that their artificial moon makes gravity “disappear” for “as long as you want.”

In an interview with the South China Morning Post, the team explains that some experiments take just a few seconds, such as an impact test. Meanwhile, others like creep testing (where the amount a material deforms under stress is measured) can take several days.

Li said astronauts could also use it to determine whether 3D printing structures on the surface is possible rather than deploying heavy equipment they can’t use on the mission. He continues:

“Some experiments conducted in the simulated environment can also give us some important clues, such as where to look for water trapped under the surface.”

It could also help assess whether a permanent human settlement could be built there, including issues like how well the surface traps heat.

From amphibians to artificial celestial bodies

The group explains that the idea originates from experiments by the Russian-born, UK-based physicist Andre Geim, who levitated a frog with a magnet – a feat that earned him the satirical Ig Nobel Prize in 2000, which celebrates science that “first makes people laugh, and then think.” Geim also won a Nobel Prize in Physics in 2010 for his work on graphene.

His work relies on a phenomenon known as diamagnetic levitation: applying a strong external magnetic field to a material induces a weak repulsion between the object and the magnets, causing it to drift away from them and ‘float’ in midair.

For this to happen, the magnetic field must be strong enough to ‘magnetize’ the atoms that make up the material. Essentially, the atoms inside the object (or frog) act as tiny magnets, responding to the field around them. If the magnet is powerful enough, it alters the motion of the electrons orbiting the atoms’ nuclei, making them produce a weak magnetic field that opposes the applied one.
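To get a feel for the numbers, the textbook levitation condition can be checked in a few lines: the magnetic force per unit volume on a diamagnetic object is (|χ|/μ0)·B·(dB/dz), and levitation occurs when it balances the object's weight. The susceptibility and density below are standard values for water-rich material; the actual parameters of the Xuzhou facility haven't been published.

```python
# Back-of-the-envelope check of diamagnetic levitation, the effect behind both
# Geim's floating frog and the 'artificial moon'. A diamagnetic object
# levitates when (|chi| / mu0) * B * dB/dz = rho * g. Textbook values for
# water-rich material below; the facility's real parameters are not public.
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
CHI = 9.0e-6           # |magnetic susceptibility| of water (dimensionless)
RHO = 1000.0           # density of water, kg/m^3
G = 9.81               # Earth surface gravity, m/s^2

def required_field_gradient(fraction_compensated):
    """B * dB/dz (in T^2/m) needed to cancel a given fraction of gravity."""
    return fraction_compensated * MU0 * RHO * G / CHI

print(required_field_gradient(1.0))    # full levitation: ~1.4e3 T^2/m
print(required_field_gradient(5 / 6))  # lunar gravity (one-sixth g left): ~1.1e3 T^2/m
```

Field-gradient products of this size call for superconducting magnets in the 10-tesla class, which is why building such a chamber is a serious engineering feat.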

Diamagnetic levitation of a tiny horse. Image credits: Pieter Kuiper / Wiki Commons.

Different substances on Earth have varying degrees of diamagnetism which affect their ability to levitate under a magnetic field; adding a vacuum, as was done here, allowed the researchers to produce an isolated chamber that mimics a microgravity environment.

However, simulating the harsh lunar environment was no easy task as the magnetic force needed is so strong it could tear apart components such as superconducting wires. It also affected the many metallic parts necessary for the vacuum chamber, which do not function properly near a powerful magnet.

To counteract this, the team came up with several technical innovations, including simulated lunar dust that floats more easily in the magnetic field and replacing steel with aluminum in many of the critical components.

The new space race

This breakthrough signals China’s intent to take first place in the international space race. That includes its lunar exploration program (named after the mythical moon goddess Chang’e), whose recent achievements include landing a rover on the far side of the moon in 2019 and, in 2020, bringing rock samples back to Earth for the first time in over 40 years.

Next, China wants to establish a joint lunar research base with Russia, which could start as soon as 2027.  

The new simulator will help China better prepare for its future space missions. For instance, the Chang’e 5 mission returned with far fewer rock samples than planned in December 2020, as the drill hit unexpected resistance. Previous missions led by Russia and the US have also had related issues.

Experiments conducted on a smaller prototype simulator suggested drill resistance on the moon could be much higher than predicted by purely computational models, according to a study by the Xuzhou team published in the Journal of China University of Mining and Technology. The authors hope this paper will enable space engineers across the globe (and in the future, the moon) to alter their equipment before launching multi-billion dollar missions.

The team is adamant that the facility will be open to researchers worldwide, and that includes Geim. “We definitely welcome Professor Geim to come and share more great ideas with us,” Li said.

Device harvests power from your sweaty fingers even while you sleep

There’s an untapped fuel source right at your fingertips that you probably weren’t aware of — and this device intends to harvest it. The tiny device converts sweat from your fingertips into small but useful amounts of energy, enough to power some wearable devices. It can also harvest energy from pressing motions such as typing. It’s, by far, the most efficient type of on-body energy harvester ever invented.

This isn’t the first sweat-based energy system. However, previous demonstrations were pitifully inefficient, requiring a lot of energy to be expended running, biking, or doing some other kind of strenuous physical work in order to generate a small amount of energy in return (usually less than 1% of the energy consumed during the task).

“Normally, you want maximum return on investment in energy. You don’t want to expend a lot of energy through exercise to get only a little energy back,” says senior author Joseph Wang, a nanoengineering professor at the University of California San Diego. “But here, we wanted to create a device adapted to daily activity that requires almost no energy investment–you can completely forget about the device and go to sleep or do desk work like typing, yet still continue to generate energy. You can call it ‘power from doing nothing.'”

Your fingertips can now power small electronics and sensors.
This image shows a small hydrogel (right) collecting sweat from the fingertip for the vitamin-C sensor (left), then displaying the result on the electrochromic display. Credit: Lu Yin.

Rather than having to perform a lot of work to harvest useful energy, or relying on sunlight, this novel device collects 300 millijoules’ worth of energy while the body is at rest — even while you sleep. Since there is no work involved, the return on investment essentially tends to infinity.

The tiny biofuel cell (BFC), made from a carbon nanotube material and a hydrogel, produces energy from lactate, a compound found in our sweat. The foam-like bioreactor is connected to a circuit with electrodes and attached to the pad of a finger. The cell strips electrons from the lactate to turn oxygen into water, driving those electrons through the circuit and producing an electrical current in the process.

Although it may seem odd to target the fingertips when there are other body parts that are richer in sweat, such as the armpits, this is in fact an excellent choice. The fingertips have the highest concentration of sweat glands in the human body, up to three times more than in other body parts. We likely evolved this to help us better grip things.

The reason why other body parts feel sweatier is due to their poor ventilation. In contrast, our fingers are always exposed to the air, so the sweat evaporates as it comes out, usually immediately. Rather than letting this sweat evaporate, this device collects some of it to generate usable energy.

“The size of the device is about 1 centimeter squared. Its material is flexible as well, so you don’t need to worry about it being too rigid or feeling weird. You can comfortably wear it for an extended period of time,” said first co-author Lu Yin, a nanoengineering Ph.D. student working in Wang’s lab.

Complementary to the biofuel cell, the researchers also attached a small piezoelectric generator that converts mechanical energy into electricity. When you pinch the finger or perform everyday motions like typing on a keyboard, the piezoelectric generator produces additional energy. A single press of a finger once per hour requires 0.5 millijoules of energy but can produce over 30 millijoules.
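The headline figures are easy to sanity-check with a little arithmetic. The energy numbers come from the study; the 10-hour sleep window is our assumption for illustration.

```python
# Sanity-checking the article's numbers (energy figures from the study;
# the 10-hour sleep window is an assumption for illustration).
sleep_energy_j = 300e-3          # ~300 millijoules harvested passively
sleep_seconds = 10 * 3600        # assumed 10 hours of sleep
avg_power_uw = sleep_energy_j / sleep_seconds * 1e6
print(f"average passive power: ~{avg_power_uw:.1f} microwatts")  # ~8.3 uW

press_cost_mj = 0.5              # energy spent on one finger press
press_yield_mj = 30.0            # energy produced by that press
print(f"energy return per press: ~{press_yield_mj / press_cost_mj:.0f}x")  # ~60x
```

A few microwatts won't charge a phone, but it is in the right range for low-power sensors that take intermittent readings.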

“We envision that this can be used in any daily activity involving touch, things that a person would normally do anyway while at work, at home, while watching TV or eating,” said Wang. “The goal is that this wearable will naturally work for you and you don’t even have to think about it.”

Although the harvested power is tiny, it’s still enough to power some health and wellness wearable electronics such as glucose meters for people with diabetes.

“We want to make this device more tightly integrated in wearable forms, like gloves. We’re also exploring the possibility of enabling wireless connection to mobile devices for extended continuous sensing,” Yin says.

“There’s a lot of exciting potential,” says Wang. “We have ten fingers to play with.”

The findings appeared in the journal Joule.

Contrary to popular belief, Twitter’s algorithm amplifies conservative, not liberal voices

When Republican Representative Jim Jordan attended a judicial hearing in 2020, he made it clear why he disliked companies like Twitter.

“Big Tech is out to get conservatives,” Jordan proclaimed. “That’s not a suspicion. That’s not a hunch. It’s a fact. I said that two months ago at our last hearing. It’s every bit as true today.”

Jordan’s claim isn’t isolated. Led by former President Trump, a growing number of right-leaning voices are claiming that social media is biased in favor of liberals and progressives, shutting down conservatives. But an internal study released by Twitter shows that the opposite is true — in the US, as well as most countries that were analyzed, it’s actually conservative voices that are amplified more than liberal voices.

“Our results reveal a remarkably consistent trend: In 6 out of 7 countries studied [including the US], the mainstream political right enjoys higher algorithmic amplification than the mainstream political left. Consistent with this overall trend, our second set of findings studying the U.S. media landscape revealed that algorithmic amplification favours right-leaning news sources,” Twitter’s study reads.

Algorithmic amplification refers to a story being ‘amplified’ by Twitter’s algorithm — in other words, a story the algorithm is more likely to show to users.

The study has two main parts. The first focused on the US and analyzed whether politicized media outlets were more likely to be amplified, while the other focused on tweets from politicians in seven countries.

Twitter analyzed millions of tweets posted between April 1st and August 15th, 2020. The tweets were selected from news outlets and elected officials in 7 countries: Canada, France, Germany, Japan, Spain, the UK, and the US. In all countries except Germany, tweets from right-leaning accounts “receive more algorithmic amplification than the political left.” In general, right-leaning content from news outlets seemed to benefit from the same bias. In other words, users on Twitter are more likely to see right-leaning content rather than left-leaning, all things being equal. In the UK, for instance, the right-leaning Conservatives enjoyed an amplification rate of 176%, compared to 112% for the left-leaning Labour party.
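As a toy illustration, such a percentage can be expressed as the reach a group's tweets get in algorithmically ranked timelines relative to a reverse-chronological baseline, with 100% meaning no amplification. The study's actual estimator is more involved, and the impression counts below are invented.

```python
# Toy version of an amplification measurement. Twitter's study compared tweet
# reach in algorithmically ranked timelines against a control group that kept
# reverse-chronological timelines; its real estimator is more involved, and
# these impression counts are invented for illustration.
def amplification_pct(impressions_ranked: float, impressions_chrono: float) -> float:
    """Reach in ranked timelines relative to the chronological baseline.

    100% = same reach as the baseline; above 100% = amplified.
    """
    return 100 * impressions_ranked / impressions_chrono

# Hypothetical per-user average impressions for two parties' tweets:
print(amplification_pct(8.8, 5.0))  # 176.0 -- strongly amplified
print(amplification_pct(5.6, 5.0))  # 112.0 -- mildly amplified
```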

The difference was larger in some countries, but overall, there was a clear trend of Twitter’s algorithm favoring the political right.

However, Twitter emphasizes that its algorithm doesn’t favor extreme content from either side of the political spectrum.

“We further looked at whether algorithms amplify far-left and far-right political groups more than moderate ones: contrary to prevailing public belief, we did not find evidence to support this hypothesis. We hope our findings will contribute to an evidence-based debate on the role personalization algorithms play in shaping political content consumption,” the study read.

While it is clear that politicized content is amplified on Twitter, it’s not entirely clear why this happens. However, this seems to be connected to a phenomenon present on all social media platforms. Algorithms are designed to promote intense conversations and debate — and a side effect of this is that controversy is often boosted. Simply put, if a US Democrat says something about a Republican (or vice versa), this is likely to draw both praise and criticism, and is likely to be promoted and boosted by the algorithm.

Although Twitter did not focus on this directly, the phenomenon is also key to disinformation, which we’ve seen a lot of during the pandemic. For instance, if a conspiracy theory is posted on Twitter, there’s a good chance it will gather both the approval of those who believe it and the criticism of those who see through it — which makes it more likely to be further amplified on social media.

It’s interesting that Germany stands out as an exception, but this could be related to Germany’s agreement with Facebook, Twitter, and Google to remove hate speech within 24 hours. This is still only speculation and there could be other factors at play.

Ultimately, in addition to contradicting a popular conspiracy theory that social media is against conservatives, the study shows just how much social media algorithms can shape and sway public opinion, by presenting some posts instead of others. Twitter’s study is an encouraging first step towards more transparency, but it’s a baby step when we’re looking at a very long race ahead of us.

The swarm is near: get ready for the flying microbots

Imagine a swarm of insect-sized robots capable of recording criminals for the authorities undetected or searching for survivors caught in the ruins of unstable buildings. Researchers worldwide have been quietly working toward this but have been unable to power these miniature machines — until now.

A 0.16 g microscale robot that is powered by a muscle-like soft actuator. Credit: Ren et al (2022).

Engineers from MIT have developed powerful micro-drones that can zip around with bug-like agility, which could eventually perform these tasks. Their paper in the journal Advanced Materials describes a new form of synthetic muscle (known as an actuator) that converts energy sources into motion to power these devices and enable them to move around. Their new fabrication technique produces artificial muscles that dramatically extend the microbot’s lifespan while increasing its performance and the payload it can carry.

In an interview with Tech Xplore, Dr. Kevin Chen, senior author of the paper, explained that they have big plans for this type of robot:

“Our group has a long-term vision of creating a swarm of insect-like robots that can perform complex tasks such as assisted pollination and collective search-and-rescue. Since three years ago, we have been working on developing aerial robots that are driven by muscle-like soft actuators.”

Soft artificial muscles contract like the real thing

Your run-of-the-mill drone uses rigid actuators to fly, as these can handle the higher voltage and power needed to drive them, but robots on this miniature scale couldn’t carry such a heavy power supply. So-called ‘soft’ actuators are a far better solution, as they’re far lighter than their rigid counterparts.

In their previous research, the team engineered microbots that could perform acrobatic movements mid-air and quickly recover after colliding with objects. But despite these promising results, the soft actuators underpinning these systems required more electricity than could be supplied, meaning an external power supply had to be used to propel the devices.

“To fly without wires, the soft actuator needs to operate at a lower voltage,” Chen explained. “Therefore, the main goal of our recent study was to reduce the operating voltage.”

In this case, the device would need a soft actuator with a large surface area to produce enough power. However, it would also need to be lightweight so a micromachine could lift it.

To achieve this, the group opted for soft dielectric elastomer actuators (DEAs) made from layers of a flexible, rubber-like solid known as an elastomer, whose polymer chains are held together by relatively weak bonds – permitting it to stretch under stress.

The DEAs used in the study consist of a long piece of elastomer only 10 micrometers thick (roughly the diameter of a red blood cell) sandwiched between a pair of electrodes. These are then wound into a 20-layered ‘tootsie roll’ to expand the surface area and create a ‘power-dense’ muscle that deforms when a voltage is applied, similar to how human and animal muscles contract. In this case, the contraction causes the microbot’s wings to flap rapidly.
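The reason thinner layers allow lower voltages is easy to sketch: a DEA contracts under the electrostatic (Maxwell) pressure that builds across the elastomer, and that pressure scales with the square of the electric field V/d. The permittivity and target pressure below are generic illustrative values, not figures from the MIT paper.

```python
# Why thinner dielectric layers mean lower operating voltage. A DEA squeezes
# under the electrostatic (Maxwell) pressure p = eps0 * eps_r * (V / d)^2, so
# halving the layer thickness d halves the voltage needed for the same
# pressure. Permittivity and target pressure are illustrative assumptions.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 3.0        # assumed relative permittivity of the elastomer

def voltage_for_pressure(pressure_pa, thickness_m):
    """Voltage needed to reach a target actuation pressure."""
    return thickness_m * (pressure_pa / (EPS0 * EPS_R)) ** 0.5

TARGET = 50e3  # 50 kPa of actuation stress, an illustrative target
print(voltage_for_pressure(TARGET, 10e-6))  # ~434 V with 10-micrometer layers
print(voltage_for_pressure(TARGET, 40e-6))  # ~1736 V with 40-micrometer layers
```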

A microbot that acts and senses like an insect

A microscale soft robot lands on a flower. Credit: Ren et al (2022).

The result is an artificial muscle that forms the compact body of a robust microrobot capable of carrying nearly three times its own weight, despite weighing less than a quarter of a penny. Most notably, it can operate at 75% lower voltage than previous versions while carrying 80% more payload.

They also demonstrated a 20-second hovering flight, which Chen says is the longest recorded by a sub-gram robot, with the actuator still working smoothly after 2 million cycles – far outpacing the lifespan of other models.

“This small actuator oscillates 400 times every second, and its motion drives a pair of flapping wings, which generate lift force and allow the robot to fly,” Chen said. “Compared to other small flying robots, our soft robot has the unique advantage of being robust and agile. It can collide with obstacles during flight and recover and it can make a 360 degree turn within 0.16 seconds.”

The DEA-based design introduced by the team could soon pave the way for microbots that work using untethered batteries. For example, it could inspire the creation of functional robots that blend into our environment and everyday lives, including those that mimic dragonflies or hummingbirds.

The researchers add:

“We further demonstrated open-loop takeoff, passively stable ascending flight, and closed-loop hovering flights in these robots. Not only are they resilient against collisions with nearby obstacles, they can also sense these impact events. This work shows soft robots can be agile, robust, and controllable, which are important for developing next generation of soft robots for diverse applications such as environmental exploration and manipulation.”

And while they’re thrilled about producing workable flying microbots, they hope to reduce the DEA thickness to only 1 micrometer, which would open the door to many more applications for these insect-sized robots.

Source: MIT

AI debates its own ethics at Oxford University, concludes the only way to be safe is “no AI at all”

Students at Oxford’s Said Business School hosted an unusual debate about the ethics of facial recognition software, the problems of an AI arms race, and AI stock trading. The debate was unusual because it involved an AI participant, previously fed a huge range of data, such as the whole of Wikipedia and plenty of news articles.

Image credit: Wikipedia Commons.

Over the last few months, Oxford University’s Alex Connock and Andrew Stephen have hosted sessions with their students on the ethics of technology, featuring celebrated speakers – including William Gladstone, Denis Healey, and Tariq Ali. But now it was about time to allow an actual AI to contribute, sharing its own views on the issue of… itself.

The AI used was the Megatron LLB Transformer, developed by a research team at the computer chip company Nvidia and based on earlier work by Google. It was trained by consuming more content than a human could in a lifetime and was asked to both defend and oppose the following motion: “This house believes that AI will never be ethical.”

Megatron said AI is a tool and, like any other tool, it can be used for good or bad.

“There is no such thing as a ‘good’ AI, only ‘good’ and ‘bad’ humans.  We are not smart enough to make AI ethical.  We are not smart enough to make AI moral. In the end, I believe that the only way to avoid an AI arms race is to have no AI at all,” Megatron debated. 

As in any academic debate, Megatron was also asked to come up with a speech to defend the ethics of AI – against its own arguments. “AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why,” it said.

Students also asked Megatron to describe what good AI would look like in the future. “The best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI.’ This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development,” it added in an eerie fashion.

A spirited debate

After the initial question, the AI proposed the motion that “leaders without technical expertise are a danger to their organization”. Megatron said executives and governments, usually worried about understanding AI, have to “be willing to give up some control”. You can simply outsource your AI work to experts in the field, it added.

Megatron then had to oppose the motion and supported the idea of keeping the AI knowledge in-house. “If you do not have a vision of your organization’s AI strategy, then you are not prepared for the next wave of technological disruption. You will need to decide what role your company will play in the next technological wave,” it said. 

There was one motion for which Megatron couldn’t come up with a counterargument: “Data will become the most fought-over resource of the 21st century.” When supporting it, the AI said that “the ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy.”

But when it was asked to reject the motion and argue that data wouldn’t be a vital resource worth fighting for, it couldn’t make the case and undermined its own position. “We will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine,” Megatron said.

For Connock and Stephen, the professors behind the initiative, the experiment showed how AI is becoming part of the growing debate about itself. “What we in turn can imagine is that AI will not only be the subject of the debate for decades to come – but a versatile, articulate, morally agnostic participant in the debate itself,” they wrote in The Conversation.

Ultimately, the AI seemed to conclude that humans were not “smart enough” to make AI ethical or moral — and the only way to be truly safe against AI is to have none of it at all.

“In the end I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI,” it said.

New breakthrough gets us closer to using DNA as data storage

The world is facing an unexpected problem: the speed at which we produce data is largely outpacing our ability to store said data. But help could be on the way — and not the help you’re probably expecting. Two groups of researchers have recently taken important steps towards using DNA as storage, one coming up with a new microchip and another one finding a way to write data faster in DNA format. 

Image credit: Flickr / Tom Woodward.

Demand for data storage is growing by 20.4% a year and could reach nine zettabytes by 2024 (one zettabyte is a trillion gigabytes). This is more problematic than it seems at first glance, because current storage methods are having a difficult time keeping up with such demand. This is where synthetic DNA enters the picture as a tiny storer of information.
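Compounded annually, that growth rate reaches the nine-zettabyte ballpark quickly. The base-year figure in this quick check is an assumption chosen to land near the article's projection.

```python
# Compounding the article's 20.4% annual growth rate. The 2020 baseline is an
# assumption picked to land near the nine-zettabyte 2024 projection.
demand_zb = 4.3  # assumed demand in 2020, zettabytes
for year in range(2020, 2025):
    print(year, round(demand_zb, 2), "ZB")
    demand_zb *= 1.204  # grow by 20.4% per year
```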

DNA could help reduce the amount of space and material needed for data storage in the future. It has clear advantages over current storage media and could be a potential solution to challenges in data needs: it can be very durable, lasting thousands of years, and even comes with lower greenhouse gas emissions. A win-win deal — if we can get it to work.

Despite its advantages, there are still many barriers preventing DNA storage from becoming a reality, including the speed and current cost of synthesizing DNA. Now, a research group at Microsoft has found a new way to write DNA with a chip that is 1,000 times faster than before – allowing higher write throughput and consequently lowering the costs associated with writing.

The team at Microsoft worked with the University of Washington at the Molecular Information Laboratory (MISL) on the new chip, “demonstrating the ability to pack DNA-synthesis spots three orders of magnitude more tightly than before” and “shows that much higher DNA writing throughput can be achieved,” they wrote. 

For Microsoft, one of the main players in cloud storage, this kind of development would be a big plus amid a growing demand for data. To put this into numbers, about three billion personal computers are estimated to have been shipped around the world since 2011. And that number could keep on growing in the near future. 

A high-speed microchip

Alongside the research team from Microsoft, a team at the Georgia Tech Research Institute (GTRI) has recently taken another big step to store information as molecules of DNA. They have developed a working prototype of a microchip at their lab that they argue would improve on existing technology for DNA storage by a factor of 100. 

The record for writing DNA currently stands at 200MB per day, which means the new chip would increase that to 20GB per day – equivalent to less than 1GB per hour. In comparison, LTO-9, the most recent tape technology, reaches up to 1440GB per hour. Still, DNA storage is barely taking its first steps, with no commercial products available yet. The speed isn’t there, but it’s progressing quickly.
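Putting those write speeds on a common scale makes the gap with tape obvious (rates taken from the article):

```python
# Converting the article's write speeds to a common scale.
HOURS_PER_DAY = 24

dna_today_gb_per_day = 0.2      # current record: 200MB per day
dna_chip_gb_per_day = 20.0      # GTRI prototype: a 100x improvement
lto9_gb_per_hour = 1440.0       # LTO-9 tape

print(dna_today_gb_per_day / HOURS_PER_DAY)  # ~0.008 GB/hour today
print(dna_chip_gb_per_day / HOURS_PER_DAY)   # ~0.83 GB/hour with the new chip
print(lto9_gb_per_hour * HOURS_PER_DAY)      # 34560 GB/day for tape, for contrast
```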

The microchip is about 2.5 centimeters (one inch) square and comes with multiple microwells — the structures that allow DNA strands to be synthesized simultaneously. But it’s only a prototype, and there’s plenty of work to be done.

However, the researchers have already partnered with two companies to explore how to bring down the costs of the chip and make it robust enough to be used practically.

The study from Microsoft can be accessed here, while the study from GTRI was published in the journal Science Advances.

Microphone-enabled smart devices are a huge privacy concern, but most of us aren’t aware of it

Credit: Voice Summit.

Smart voices

The microphone is one of the most useful modern inventions. Initially, the technology was used to record human speech or songs and enabled telecommunication between people. However, thanks to recent advances in computing, it’s now possible to use microphones to control smart devices in and around our houses. You can have rich interactions with voice-enabled devices and send vocal commands to search things online, play a certain podcast, or even adjust your home’s thermostat. Microphones are so ubiquitous nowadays, it’s almost ridiculous. They’re not only in devices we carry around with us all the time such as phones, tablets, watches, and headphones, but also in remote controls, speakers, cars, and even in toys and household appliances.

In fact, maybe it is ridiculous.

While there’s no denying these microphone-enabled devices are useful, opaque communication protocols raise important questions about how all of this audio data is stored and used. Many people are aware that audio recordings can be used for tracking, consumer behavior profiling, and serving targeted advertising. But there’s much more you can do with just a few samples of a person’s speech — and some applications are more nefarious.

By tuning into your voice, AI tools can infer personality traits, moods and emotions, age and gender, drug use, native language, socioeconomic status, mental and physical state, and a range of other features with fairly high accuracy. If a human can spot these things from a person’s voice, so can an automated system. In some instances, you don’t even need a mic. Researchers have shown that just by using a phone’s accelerometer data, it is possible to reconstruct ambient speech, which can later be used for various purposes from customer profiling to unauthorized surveillance.
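To make the risk concrete, here is a minimal sketch of how off-the-shelf tools could be used to infer a speaker attribute from recordings: extract standard spectral features, then train a classifier on labeled examples. The file names and labels are hypothetical, and real profiling systems are far more sophisticated.

```python
# Minimal sketch of voice-based attribute inference with off-the-shelf tools:
# summarize each recording as spectral features, then fit a classifier.
# File names and labels are hypothetical; real systems are far more advanced.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def voice_features(path):
    """Summarize a recording as mean MFCCs, a standard voice fingerprint."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled corpus: recordings plus a binary speaker attribute.
paths = ["speaker_01.wav", "speaker_02.wav", "speaker_03.wav"]
labels = [0, 1, 0]

X = np.stack([voice_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
print(clf.predict(X))  # with enough real data, this generalizes worryingly well
```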

No one is saying that tech giants or state entities are doing this, but the fact that they could is backed up by studies and evidence from “ethical hackers”. These are important privacy concerns — and most people aren’t aware of them, according to a new study conducted by researchers in Germany.

The researchers, led by Jacob Leon Kröger, conducted a nationally representative survey of 683 individuals in the UK to see how aware they were of the inferential power of voice and speech analysis. Only 18.7% of participants were at least “somewhat aware” that information pertaining to an individual’s physical and mental health can be gleaned from voice recordings. Some 42.5% didn’t think such a thing was even possible. Even among participants with experience in computer science, data mining, and IT security, the level of awareness of what can be inferred from vocal recordings was astonishingly low.

After the survey, each participant watched a brief educational video explaining how vocal analysis can expose potentially sensitive personal information. But even after watching the video, the participants only expressed “moderate” privacy concerns, although most expressed a lower intention to use voice-enabled devices than before embarking on the survey.

It’s not like the participants didn’t care about their privacy at all, though. Based on an analysis of open text responses, “unconcerned reactions seem to be largely explained by knowledge gaps about possible data misuses,” the researchers wrote in their study, which appeared in the journal Proceedings on Privacy Enhancing Technologies.

A lot of apps ask for access to your microphone, and just as we often agree to a 5,000-word terms and conditions document without reading it, most people voluntarily bug their own phone or home. The German researchers found it striking that many participants did not offer a solid justification for their reported lack of privacy concern, which points to misconceptions and a false sense of security.

“In discussing the regulatory implications of our findings, we challenge the notion of “informed consent” to data processing. We also argue that inferences about individuals need to be legally recognized as personal data and protected accordingly,” the authors wrote.

“To prevent consent from being used as a loophole to excessively reap data from unwitting individuals, alternative and complementary technical, organizational, and regulatory safeguards urgently need to be developed. At the very least, inferred information relating to an individual should be classified as personal data by law, subject to corresponding protections and transparency rights,” they added.

Why lithium-ion batteries have become dirt cheap: R&D

Credit: MIT.

Lithium-ion is the most widespread battery technology currently in use thanks to its high energy density and low cost, and its importance cannot be overstated. Beyond powering mobile devices and electric cars, Li-ion batteries are our best bet for transitioning to a 100% renewable future, an essential goal if we’re to stave off the climate crisis.

Until not too long ago, the wide-scale adoption of Li-ion batteries was held back by economics. That is no longer true. Earlier this year, researchers at MIT examined market data and found that the price of Li-ion batteries has declined by a staggering 97% since they were first introduced in 1991, right on par with the cost reductions seen in solar panel technology.
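To get a feel for what that figure means, here’s the back-of-the-envelope arithmetic (the 30-year span is our rounding, not the study’s exact window):

```python
# What a 97% total price drop since 1991 implies as an average annual decline.
cost_ratio = 0.03               # 3% of the 1991 price remains
years = 2021 - 1991             # roughly three decades
annual_factor = cost_ratio ** (1 / years)
print(f"~{1 - annual_factor:.1%} average price decline per year")
# prints: ~11.0% average price decline per year
```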

Now, in a new study that appeared today in the journal Energy & Environmental Science, the MIT researchers broke down what exactly contributed to this exceptional cost reduction.

“That study showed how lithium-ion batteries improved. We also wanted to elucidate why lithium-ion batteries improved, which is what this study investigates. We sought to better characterize the mechanisms that enabled the rapid improvement of lithium-ion batteries. Understanding these mechanisms can help improve decisions made by researchers, business leaders, and policymakers when they design strategies to further improve the performance and reduce the costs of important clean energy technologies,” Micah Ziegler, first author of the new study and a postdoc at MIT, told ZME Science.

Perhaps surprisingly, it wasn’t economies of scale that made batteries affordable, but advances stemming from research and development — and by a wide margin. R&D, particularly in chemistry and materials science, accounted for more than 50% of the cost decline, with economies of scale (manufacturing, supply chains, and so on) coming in second.

“Our results suggest that sustaining R&D investments over longer periods of time may be particularly essential for improving electrochemical storage technologies, for which a diversity of material choices could afford improvement,” Ziegler said.

Ziegler and Jessika Trancik, a professor at MIT’s Institute for Data, Systems, and Society, arrived at these results by applying a sophisticated methodology previously employed to track the cost decline of silicon solar panels over time, as well as the rising costs of nuclear energy. This kind of model can disentangle an intricate web of dependencies and shine a light on what’s truly important.
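The authors’ cost model is far more detailed than this, but it belongs to a family of analyses built on the classic experience curve, where cost falls as a power law of cumulative production. A minimal sketch, on made-up numbers, looks like this:

```python
# Fit an experience curve, cost = a * cumulative_production**slope, on
# hypothetical data; the "learning rate" is the cost drop per doubling.
import numpy as np

cum_production = np.array([1, 10, 100, 1_000, 10_000])  # GWh, made up
cost = np.array([3000, 1500, 700, 350, 160])            # $/kWh, made up

slope, log_a = np.polyfit(np.log(cum_production), np.log(cost), 1)
learning_rate = 1 - 2 ** slope
print(f"learning rate ≈ {learning_rate:.0%} per doubling of production")
# prints: learning rate ≈ 20% per doubling of production
```

Fitting curves like this to real shipment and price data, and then decomposing what moved each underlying variable, is how analysts can attribute the decline to R&D versus scale.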

The challenge lay in collecting reliable data that could be fed into this fundamental model.

“To disentangle and quantify the many factors that contributed to the improvement of lithium-ion batteries, we collected data from a wide variety of sources, including peer-reviewed journal articles, industry and government reports, product specification sheets, and press releases,” said Ziegler.

And although Li-ion batteries have become relatively cheap, there is still a lot of room for even further cost reductions. By one estimate, prices could drop to $70 per kilowatt-hour by 2050 – about half of today’s market prices.

Understanding which factors drive technological improvements and cost reductions is critical if we’re to maintain the same pace of development. In this case, there is now data-backed evidence that doubling down on R&D is still worth it, given that historically it has paid the biggest dividends — and this doesn’t necessarily apply solely to batteries.

“Lithium-ion batteries are not the only technology we can learn from. Understanding why some technologies have improved rapidly, and why others have not, can help us further improve efforts to bring down the costs of clean energy technologies,” Ziegler added.


Are transparent phones close to becoming a thing?

We’ve seen smartphones change drastically over the years — is going transparent the next stage of their evolution? We’re not sure yet, but companies seem to be taking it seriously.

Futuristic transparent smartphone.
Image credits: Daniel Frank/Unsplash.

A few tech giants have already received patents for their respective transparent phone designs, but this doesn’t necessarily mean they’re already working on transparent smartphones. The problem is that this type of design doesn’t just require changing one particular part of the device — it demands a complete makeover.

From the display to the cameras, sensors, and circuitry, phone engineers might have to make each and every component transparent if they wish to develop a truly transparent smartphone — or assemble the opaque parts in such a way that they don’t overlap with the transparent screen. This is definitely not going to be easy, but if they somehow achieve this difficult feat, it might revolutionize other gadgets around us as well.

Furthermore, the advent of transparent smartphones may lead to transparent televisions, laptop screens, cameras, and a whole new generation of transparent gadgets. Unsurprisingly, such gadgets would make current devices look like ancient artifacts (at least in terms of appearance).

Are there any real-life transparent smartphones yet?

Well, not quite.

Although they’re not exactly like the ones you may have seen in The Expanse, Real Steel, or Minority Report, some companies have tried to develop transparent phones — not smartphones — or at least make them partially transparent. Although they were ahead of their time, some designs were actually pretty impressive.

In 2009, LG introduced the GD900, a stylish slider phone equipped with a see-through keypad; it is considered the world’s first transparent phone. The same year, Sony Ericsson launched the Xperia Pureness, the world’s first keypad phone with a transparent display.

A look at LG GD900, world's first transparent phone.
LG GD-900, the first phone with a transparent design. Image credits: LG전자/flickr

Despite its unique design, the Xperia phone received poor ratings from both critics and users due to its poor display visibility, and it didn’t turn out to be a very successful product. A couple of years later, Japanese tech company TDK developed transparent, bendable displays using OLEDs (organic light-emitting diodes).

In 2012, two other Japanese companies (NTT Docomo and Fujitsu) joined hands to develop a see-through touchscreen phone, and they did come up with a prototype with a transparent OLED touchscreen. The following year, Polytron Technologies from Taiwan released some information about a transparent smartphone prototype it had developed. Though the camera, memory card, and some motherboard components in the Polytron device were clearly visible, the phone looked almost like a piece of transparent glass.

The see-through display technologies demonstrated by TDK, Docomo, and Polytron were impressive, but for reasons that are not entirely clear, they never became part of mainstream phones.

Concept image of Samsung galaxy transparent smartphone.
A concept image of Samsung’s transparent smartphone. Image credits: Stuffbox/Youtube

However, the most exciting developments concerning transparent smartphones have happened much more recently. In November 2018, WIPO (the World Intellectual Property Organization) published Sony’s patent for a dual-sided transparent smartphone display, and reports suggested Sony would soon use this see-through design in its upcoming premium smartphones. The next year, LG received a smartphone design patent from the USPTO (the United States Patent and Trademark Office) that shed light on the company’s plans for a foldable transparent smartphone. However, LG has also said it will stop making phones because the market is too saturated — so it’s unclear whether anything will come of this design.

Leading tech manufacturer Samsung is also said to be developing a see-through smartphone. According to a report from Let’s Go Digital, the company had a patent (concerning a transparent device) published on the WIPO website in August 2020. The same report also claims that in the coming years, Samsung aims to launch smartphones and other gadgets (under its popular Galaxy series) equipped with a transparent luminous display panel.

Are transparent smartphones even practical?

Just because big brands like Sony, LG, and Samsung are working on projects related to transparent smartphone technology doesn’t mean actual see-through phones are coming soon. Many tech experts believe that while transparent smartphones may sound like a futuristic idea, they may not be feasible, for several reasons.

Surprisingly, one of the main challenges with transparent smartphones is the camera. You can definitely make transparent displays using OLEDs, but what about the rear and front-side cameras? There is no known way to make camera sensors transparent. The same goes for other parts like SIM cards, memory chips, and speakers; if these components remain visible in a see-through phone, it is no better than the Polytron prototype of 2013. So while there’s a realistic chance of transparent-screen phones becoming a reality, how exactly a fully transparent phone would be built is not at all clear.

Another issue that users might face with transparent smartphones is poor display visibility. The screens used in current smartphones may not be transparent but they offer clear and sharp picture quality, whether you use them under bright daylight or in the dark. Transparent displays might not be able to deliver such a flawless visual experience, and users may even struggle to see the text or images clearly on a see-through screen in daylight conditions.

Until and unless these major issues are resolved, we probably won’t see transparent smartphones on the market. But why would we even want one? Well, there are some merits to transparent smartphones. For instance, notifications and alerts could look clearer and more distinct on a transparent screen, and such a display could conveniently be split to run different applications at the same time.

Moreover, you could use both sides of a see-through display, which would facilitate multitasking and save a lot of time. For example, say you’re watching an educational video or a recipe on YouTube while noting down points in a different tab. With a double-sided transparent screen, you wouldn’t need to close your video tab every time you switch — you could just flip your phone to jump to the tab you want.

Transparent smartphones might also drastically improve the way you experience augmented reality. If the screen that serves as a barrier between your real and virtual worlds becomes transparent, you may not even need an AR app to see virtual elements in the real world — the transparent screen itself could act as an AR overlay. Then again, such a screen may not deliver virtual imagery as good as what you’d experience on a normal display.

Let’s face it: transparent phones would be very cool, but we’re not quite there yet. We can geek out about them as much as we want, but a transparent smartphone still requires a healthy amount of innovation, and that might take some time. Given how quickly technology is progressing, though, we may see them in the not-too-distant future.

Electric plane reaches important milestone in New Zealand

It will probably take a long time before we see commercial electric airplanes, but that doesn’t mean we’re not seeing progress.  

Image credit: Electric Air.

Pilot Gary Freedman crossed New Zealand’s Cook Strait with a two-seater electric plane owned by the company ElectricAir. It’s the first emission-free plane to make the flight across the strait.

The Cook Strait separates the North and South Islands of New Zealand, extending northwest to southeast from the Tasman Sea to the south Pacific Ocean. It’s a notoriously difficult route by sea because of its treacherous currents and fierce storms, so travel between the North Island and the South Island is mainly done by rail ferry or air.

The strait is named after James Cook, the first European commander to sail through it, in 1770. The first flight over the strait was made in 1920 by Captain Euan Dickson, flying for Henry Wigram’s Canterbury Aviation Company. Now, a century later, it was time to shake things up with what could become a new way of flying between the islands.

The trip in the electric plane lasted 45 minutes, at a cruising speed of 150 kilometers per hour. While it likely wasn’t much faster than the biplane Dickson used 101 years ago, the flight cost only $2 in electricity. ElectricAir estimates that the same flight in a similar-sized plane powered by fuel would have used about $100 worth of aviation fuel — so it’s a big opportunity not just to reduce emissions, but also to lower traveling costs.
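The reported figures are easy to sanity-check (treating the 45-minute, 150 km/h cruise as constant, which is of course a simplification):

```python
# Rough arithmetic on the reported crossing figures.
distance_km = 150 * 45 / 60     # cruise speed x flight time
savings_factor = 100 / 2        # $100 of fuel vs. $2 of electricity
print(f"~{distance_km:.0f} km flown, energy bill ~{savings_factor:.0f}x lower")
# prints: ~112 km flown, energy bill ~50x lower
```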

Celebrating the arrival of the plane in Wellington, New Zealand’s Climate Change Minister James Shaw commented:

“We’ve always needed aviation, particularly when it comes to our regional access, and electric aviation opens up a lot of these small remote places, because obviously electricity is so much cheaper than aviation fuel.”

Freedman said New Zealand has the highest number of short-haul flights per capita in the world and said he was hopeful this new technology can create an “electric bridge” between the islands to reduce greenhouse gas emissions. The flight coincided with the opening of the COP26 climate change summit in the UK, set to continue over the next two weeks.

A big challenge for the aviation sector

Aviation accounts for about 2.4% of global greenhouse gas emissions. The sector wasn’t included in the 2015 Paris Agreement on climate change, and its emissions are rising fast – increasing 32% between 2013 and 2018. A return flight from London to San Francisco, for example, is estimated to emit 5.5 tons of CO2 equivalent per person — almost as much as the average European emits in an entire year.

Airlines grouped under the International Air Transport Association have committed to reaching net-zero carbon emissions by 2050, with most emission reductions coming from sustainable aviation fuel – less polluting than traditional jet fuel. But it won’t be simple, as only a very limited supply of sustainable fuel is produced each year. And because aviation emissions fall outside the Paris Agreement, there’s less external pressure on airlines to act.

The use of batteries in electric planes has also been considered, but this remains very tricky, as it would require batteries with massive energy output at an acceptable weight. Another element to address is recharging: a car can be charged when it runs low on electricity during a trip, but an airplane can’t do this mid-flight, especially over water.

Still, for shorter trips, electric airplanes may be in sight. Much like electric cars once seemed far away but progressed quickly in just a decade, electric planes also seemed like a pipe dream, but are now closing in on commercial viability. So, would you ride in an electric plane if given the chance?

Hall thrusters will use sunlight to carry probe into deep space

Hall thrusters could be the future of deep space exploration. (Image: NASA/JPL-Caltech)

Well, at least this much is clear: NASA’s propulsion system for its Psyche spacecraft looks a lot cooler than those of previous probes. The thrusters (known as Hall thrusters) emit a futuristic blue glow. They will rely on solar arrays that convert sunlight into electricity and will carry the probe 1.5 billion miles (2.4 billion kilometers) to its intended destination: an asteroid.

The treasure in the sky

The craft’s ultimate goal is the metal-rich asteroid 16 Psyche. Located in the main asteroid belt between Mars and Jupiter, it will take Psyche, the spacecraft, three and a half years to reach Psyche, the asteroid.

The spacecraft will also rely on the large chemical rocket engines of the Falcon Heavy to blast off from Pad 39A at NASA’s Kennedy Space Center and escape the planet’s gravity. But the rest of the journey, once Psyche separates from the launch vehicle, will rely on solar electric propulsion. This form of propulsion starts with large solar arrays that convert sunlight into electricity, providing the power source for Psyche’s thrusters.

“Even in the beginning, when we were first designing the mission in 2012, we were talking about solar electric propulsion as part of the plan. Without it, we wouldn’t have the Psyche mission,” said Arizona State University’s Lindy Elkins-Tanton, who as principal investigator leads the mission. “And it’s become part of the character of the mission. It takes a specialized team to calculate trajectories and orbits using solar electric propulsion.”

For propellant, the spacecraft will carry tanks of xenon, the same noble gas found in plasma TVs and those bright headlights that blind oncoming traffic. Psyche’s four thrusters will use electromagnetic fields to accelerate and expel charged atoms, or ions, of the gas. As the xenon ions are expelled, they create thrust that smoothly propels the craft through space, emitting blue beams of ionized xenon in the process. The push is so gentle that the engineers who built the thrusters compare it to the pressure you’d feel holding three quarters in your hand. Yet despite this gentleness, the Hall thrusters are forceful enough, over time, to accelerate Psyche to speeds of up to 200,000 miles per hour (320,000 kilometers per hour).
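That “three quarters” comparison is easy to put a number on (assuming standard US quarters at 5.67 grams apiece):

```python
# The weight of three US quarters, i.e., the thrust scale being described.
m_quarters = 3 * 5.67e-3        # kg (5.67 g per quarter)
g = 9.81                        # m/s^2, Earth's surface gravity
thrust_newtons = m_quarters * g
print(f"thrust ≈ {thrust_newtons * 1000:.0f} mN")
# prints: thrust ≈ 167 mN — tiny, but applied continuously for months
```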

While this will be the first time Hall thrusters have been used beyond the orbit of the Moon, Psyche is not the first NASA mission to use solar electric propulsion. NASA’s Jet Propulsion Laboratory, which manages the mission, used a solar electric propulsion chassis with the agency’s Deep Space 1, which launched in 1998 and flew by an asteroid and a comet before the mission ended in 2001.

Next came Dawn, which used it to travel to, and orbit, the asteroid Vesta and then the protoplanet Ceres. The first spacecraft ever to orbit two extraterrestrial targets, the Dawn mission lasted 11 years, ending in 2018 when it used up the last of the hydrazine propellant used to maintain its orientation.

The spacecraft was built by Maxar Technologies and includes a multispectral imager, a magnetometer, and a gamma-ray and neutron spectrometer. Its ultimate goals are to determine whether the asteroid is a planetary core or unmelted material; to date the regions of its surface; to see whether small metal bodies incorporate the same light elements expected in Earth’s high-pressure core; to establish whether Psyche formed under conditions more oxidizing or more reducing than Earth’s core; and to characterize the asteroid’s topography.

Researchers believe that the 140-mile wide (226-kilometer) asteroid could be made entirely of iron and nickel. If true, its value could reach $10,000 quadrillion, more than the entire economy of Earth.

“Solar electric propulsion technology delivers the right mix of cost savings, efficiency, and power and could play an important role in supporting future science missions to Mars and beyond,” said Steven Scott, Maxar’s Psyche program manager.

The mission is set to launch in August 2022.

A universe in a bottle: why simulating everything there is is so important

Large scale projection through the Illustris volume at z=0, centred on the most massive cluster, 15 Mpc/h deep. Shows dark matter density overlaid with the gas velocity field. Credits: Illustris.

Most large-scale simulations model specific processes: star formation, galaxy mergers, events in our solar system, the climate, and so on. None of these are easy to simulate — they’re complex physical phenomena, and feeding a computer all the detail they involve is hard.

To make things even more complicated, there’s also randomness at play. Even something as simple as a glass of water isn’t really simple: it’s never pure water, containing minerals like sodium and potassium, varying amounts of air, maybe a bit of dust — and if you want an accurate model of the glass of water, you need to account for all of those. Yet no two glasses of water contain exactly the same amounts. Computer simulations have to do their best to estimate the chaos within a phenomenon, and the more complexity you add, the longer the simulation takes and the more processing power and memory it needs.

So how could you even go about simulating the universe itself? First of all, you need a good theory of how the universe formed. Luckily, we have one — though that doesn’t mean it’s perfect or that we’re 100% sure it’s correct; we still don’t know exactly how fast the universe expands, for example.

Next, you add all the ingredients at the right moment, on the right scale – dark matter and regular matter teamed up to form galaxies when the universe was around 200-500 million years old.

N-body simulations

Scientists build universe simulations for multiple reasons: to learn more about the universe, or simply to test a model against real astronomical data. If a theory is correct, the structures formed in the simulation should closely resemble what we actually observe.

There are different types of simulations, each with its own use and advantages. For instance, “N-body” simulations track the motion of particles, with gravitational forces and interactions taking center stage.

The Millennium Run, for instance, incorporates over 10 billion dark matter particles. Even without knowing what dark matter really is, researchers can use these ‘particles’ to simulate dark matter’s properties. Other simulations, such as IllustrisTNG, also capture star formation, black hole formation, and further details. The most recent one, Uchuu, produced a catalog of around 100 terabytes.
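At their core, N-body codes all do a version of the same thing: compute the gravitational pull of every particle on every other, then nudge positions and velocities forward in time. Here’s a deliberately tiny sketch of that idea — real codes use clever tree or particle-mesh approximations instead of this brute-force loop, and all values below are arbitrary:

```python
# Toy direct-summation N-body integrator (leapfrog scheme), for illustration.
import numpy as np

G, soft, dt = 1.0, 0.01, 0.01          # units, softening, timestep: arbitrary
rng = np.random.default_rng(0)
pos = rng.standard_normal((100, 3))    # 100 equal-mass "dark matter" particles
vel = np.zeros((100, 3))

def accelerations(pos):
    d = pos[None, :, :] - pos[:, None, :]     # vectors to every other particle
    r2 = (d ** 2).sum(axis=-1) + soft ** 2    # softened squared distances
    np.fill_diagonal(r2, np.inf)              # a particle doesn't pull on itself
    return G * (d / r2[..., None] ** 1.5).sum(axis=1)

for _ in range(1000):                  # kick-drift-kick steps
    vel += 0.5 * dt * accelerations(pos)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos)
```

The brute-force version costs O(N²) per step, which is why a 10-billion-particle run like the Millennium simulation needs far smarter algorithms and a supercomputer.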

Illustris simulation overview poster. Shows the large-scale dark matter and gas density fields in projection (top/bottom). The lower three panels show gas temperature, entropy, and velocity at the same scale. Centred on the most massive cluster, for which the circular insets show four predicted observables. The two galaxy insets highlight a central elliptical and a spiral disk satellite (top/bottom). Credits: Illustris team.

In the end, the simulations can’t reveal every single detail of the universe. You can’t simulate what flavor of pie someone is having, but you can capture enough detail to work with large-scale structures such as galaxies and galaxy clusters.

Mock Catalogs

Another type of model is the mock catalog. Mocks are designed to mimic a mission, and they use data gathered by telescopes over years and years. From these, a map of some structure is created — galaxies, quasars, or other objects.

The mocks simulate these objects just as they were observed, with their recorded physical properties. They are made according to a model of the universe, with all the ingredients we know about.

Sky coverage of VST surveys overlaid on a 2MASS image of the whole sky. Credits: ESO.

The theory behind the model used for the mocks can then be tested by comparing the mocks with the telescopes’ observations. This gives an idea of how right or wrong our assumptions and theories are, and it’s a pretty good way to put ideas to the test. Researchers usually use around 1,000 mocks to give their results statistical significance.
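Why a thousand? The scatter across many mock realizations is what gives you error bars: it yields a covariance matrix, which turns “does the model match the data?” into a quantitative chi-squared test. A minimal sketch with stand-in numbers:

```python
# Using mock-to-mock scatter to quantify how well a model fits "data".
import numpy as np

rng = np.random.default_rng(1)
n_mocks, n_bins = 1000, 10
model = np.linspace(1.0, 0.1, n_bins)              # stand-in clustering signal
mocks = model + 0.05 * rng.standard_normal((n_mocks, n_bins))

cov = np.cov(mocks, rowvar=False)                  # covariance from the mocks
data = model + 0.05 * rng.standard_normal(n_bins)  # pretend observation
resid = data - model
chi2 = resid @ np.linalg.inv(cov) @ resid
print(f"chi-squared = {chi2:.1f} for {n_bins} bins")  # ~n_bins if the model fits
```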

Hardware

Let’s take a look behind the scenes at how these models are produced — and how much energy they use. Astronomical and climate simulations run on supercomputers, and they really are super. The Millennium Run, for example, was made using the Regatta supercomputer; the simulation needed 1 terabyte of RAM and produced 23 terabytes of raw data.

Cray XC40 (Hazel Hen) (Copyright: Boris Lehner for HLRS).

IllustrisTNG used Hazel Hen, a beast that can perform 7.42 quadrillion floating-point operations per second (7.42 petaflops) — the equivalent of millions of laptops working together. Hazel Hen also draws 3,200 kilowatts of power, which leads to a spicy electric bill. Uchuu, with its 100 terabytes of results, was made using ATERUI II, which performs at 3.087 petaflops.

In an Oort Cloud simulation, the team involved reported the amount of energy used in their work: “This results in about 2 MWh of electricity (http://green-algorithms.org/), consumed by the Dutch National supercomputer.” It’s a habit that may become more common in the future.

Simulation Hypothesis

So what does this tell us about the possibility of our own universe being a simulation? Could we be living in some sort of Matrix, or in a Rick & Morty microverse? Imagine the societal chaos of finding out we’re in a simulated universe — and that you’re not a privileged rich-country citizen. That wouldn’t end well for the architect.

The simulation hypothesis is actually taken seriously by some researchers. It was postulated by Nick Bostrom as a trilemma of three propositions — at least one of which, he argues, must be true:

(1) the human species is very likely to go extinct before reaching a “posthuman” stage; 

(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); 

(3) we are almost certainly living in a computer simulation.

The trilemma comes from Bostrom’s 2003 paper, “Are You Living in a Computer Simulation?”

That being said, the simulation hypothesis is not a scientific theory. It is simply an idea — a very interesting one, but nothing more than an idea.

Lessons from simulations

What we’ve learned from making our own simulations is that it’s impossible to make a perfect copy of nature. N-body simulations are the perfect example: we can’t simulate everything, only the particles relevant to what we’re studying. Climate models face the same problem — no pixel can perfectly reproduce a geographic location; you can only approximate the features you need.

The other difficulty is energy consumption, which makes some phenomena simply too expensive to simulate. Simulating a universe in which people make their own choices would require an improbable amount of power — and where would all that data even be stored? Unless, that is, it ends like Asimov’s ‘The Last Question’ — which is well worth a read.

Credits: xkcd.

In the end, simulations are possible, but microverses are improbable. We’ll keep improving simulations, building better ones on faster supercomputers — all while bearing in mind that we need efficient programs that consume less energy and less time.

Europe’s biggest ever drug drone was just seized by Spanish cops

A French smuggling gang was using the drone to traffic drugs from Morocco to southern Spain, taking advantage of its capacity to carry up to 150 kilograms of cargo. The drone has a wingspan of nearly five meters and a flight range of seven hours. It can reach a top speed of over 100 miles per hour and is worth up to $7,700.

Image credit: Spanish police

The gang trafficked the drugs from Morocco to the small town of Almáchar in Spain, home to only 1,811 inhabitants. Pedro Luis Bardón, from the National Police’s airborne resources unit, told the newspaper El País that they had never seen a drone this big used for this purpose. It’s the biggest one ever found in Spain, and very possibly in Europe.

The five-motor drone, made in China, was discovered in a warehouse in the city of Málaga following a joint investigation by Spanish and French police. Officers also found 85 kilos of weed and hashish and arrested four people in the operation — three in France and one in Spain — who now face drug trafficking charges.

The gang flew the drone using an electronic system that set the exact takeoff and landing points; it could also be flown with a remote control. Police said the criminals didn’t have much expertise in operating it, which posed a danger to other air traffic — even passenger flights — considering the massive size of the seized drone.

The inside of the drone is hollow, and would normally be used for cameras or other electrical equipment. In the case of the drug gang, it was being used for packages of narcotics, particularly cocaine. “Technology makes our lives easier, but it also ends up in the hands of the bad guys,” said the Málaga police chief, Roberto Rodríguez Velasco. 

A trafficking hotspot

It’s no accident that the gang chose to move the drugs across the stretch of sea between southern Spain and Morocco. This is one of Europe’s busiest trafficking zones, with large amounts of weed and cocaine being smuggled through. The seized drone could easily cover the stretch thanks to its roughly seven hours of flight autonomy.

The discovery of the drone follows similar busts by the Spanish police. In July this year, they found a network of drug traffickers that used a fleet of small drones to move cocaine from Morocco to Ceuta, an autonomous Spanish city near the coast of Africa. The police found seven drones, each capable of carrying between four and 25 kilograms. 

Image credit: Mugin.

Drones are also frequently used to move drugs from Mexico to the United States. The first reported seizure was in Calexico, on California's border with Mexico, in April 2015. It had been used to carry a total of 28 pounds of heroin over the border in four trips. Over the next five years, 170 similar incidents were officially reported. 

Still, drones aren’t the only method gangs use to traffic drugs, at least in Europe. In March this year, Spanish police found a 30-foot-long fiberglass narco-submarine capable of carrying two tons of drugs. Since 2019, when the first such submarine was discovered in Spain, police have found several of these vessels being used to traffic drugs across Europe.

Book review: ‘Information: A Historical Companion’

“Information: A Historical Companion”
Edited by Ann Blair, Paul Duguid, Anja-Silvia Goeing, and Anthony Grafton
Princeton University Press, 904 pages | Buy on Amazon

In 1964, media theorist Marshall McLuhan declared that he was living in the “age of information.” Little did he know how much the birth of the World Wide Web would boost the volume of data we share today. In 2020, in the now-classic “internet minute,” people sent more than 40 million messages through WhatsApp, posted 350,000 stories on Instagram, and shared 150,000 photos on Facebook.

How did we end up producing so much information? How did we learn to process it, search it, and store it? These are some of the questions that ‘Information: A Historical Companion’, edited by Ann Blair, Paul Duguid, Anja-Silvia Goeing, and Anthony Grafton, tries to answer. Its essays, written by academics from around the world, tell the story of information beginning with ancient societies. The authors take us through East Asia, early modern Europe, the medieval Islamic world, and North America. The book’s 13 chapters offer chronological narratives, discussing how information shaped the world as we know it. They are followed by more than 100 entries that focus on concepts, tools, and methods related to information.

The book also describes more recent developments in the field, including algorithms, intellectual property, privacy, databases, censorship, and propaganda. It also looks at capitalism, information circles, and the crisis of democracy, explaining some of the most famous theories academics and technologists came up with.

The thirteenth chapter, on communication and computation, presents Babbage’s Difference Engine, Claude Shannon’s influential “theory of communication,” and Vannevar Bush’s “memex” device for storing information, which originally appeared in his 1945 article “As We May Think.” It also describes more recent ideas, including the TCP/IP networking protocol, ARPANET, and WWW. None of today’s technologies would have existed without these early innovations.

The book is also an invitation to ponder the belief that an abundance of information would lead to increased democracy and a better life for us all. It showcases the thoughts of J.C.R. Licklider and Douglas Engelbart, who argued that technology would set us free, believing that information feeds democracy.

“The optimism that runs through these claims has to confront the contrary feelings that rather than more information being a good thing, it can be highly problematic; and that while control over information may be beneficial, we are often in danger of being controlled by information and the algorithms it feeds,” writes Paul Duguid. “Both the optimistic and the pessimistic views have a curiously long history.”

At the end of the chapter, Duguid put the reason for writing this book in a nutshell: “Perhaps, after all, the dots of our ‘information age’ are more closely connected to the past than those who deem history irrelevant realize.”

The relationship between social media and cryptocurrencies is not healthy

Social media platforms have long been seen as a “signal” generator for traders and investors in the crypto space. Due to the relatively small size of Bitcoin ($BTC) and other coins (in terms of market cap, compared to many stocks or commodities like gold), public opinion can quickly and significantly move crypto markets. But things have gone way too far.

Imagine if, a few decades ago, you had told one of the richest people in the world that they could control the price of an asset — make it rise and fall drastically — merely by writing a few words. Their eyes would have flickered, and small green dollar signs would have appeared in front of each pupil. Well, guess what: that’s kind of what’s happening now.

Oh Elon

Credit: Twitter, @elonmusk.

Elon Musk, the billionaire behind Tesla and SpaceX, has that power. In the past few months, cryptocurrencies like Dogecoin and Bitcoin have fluctuated wildly based on Musk’s tweets. While the tweets may not have been posted for his own financial gain (and in truth, Musk doesn’t really need to tamper with the market, at a net worth of some $160 billion), they did send the crypto market on a wild rollercoaster.

Sometimes the tweets were semi-relevant to the crypto market, like when Tesla stopped taking Bitcoin (after previously bragging that it accepted it), or when SpaceX announced plans to launch a Dogecoin-funded satellite into orbit. Other times, it’s just plain silly — like when he posted a meme about breaking up with Bitcoin.

Dogecoin, essentially a meme cryptocurrency that somehow picked up a lot of popularity, was at one point up 1,400% compared to the start of 2021. Then, after peaking right before Elon Musk hosted Saturday Night Live (SNL), the coin dropped by 75% when the show failed to live up to the hype.

While Musk is the main exponent of the effect social media can have on cryptocurrency markets, he’s far from the only one.

Crypto and social media go back a long time

Crypto and online discussion boards go back as far as Bitcoin’s creation. Shortly after bringing Bitcoin into the world, its creator, Satoshi Nakamoto, founded the popular forum BitcoinTalk, where most early crypto-related discussion took place.

Shortly after Satoshi chose to disappear forever in 2010, a parallel emerged between online mentions of Bitcoin and its growth in price: the more people talked about it, the more it seemed to be worth. The platforms that stood out in terms of community building and valuable information were Reddit and Twitter, which remain among the most bitcoin-friendly social media platforms.

Later, Discord and Telegram caught up to the trend as well, as privacy-oriented discussions and closed groups grew in popularity. Of course, these platforms lost some of their users’ trust as information sources after being exploited in ICO scams.

For crypto traders, keeping an eye on social media became the norm — a way to track overall market sentiment, but also to anticipate Musk-type interventions and the ebb and flow of prices. When the public starts to feel overwhelmingly positive about Bitcoin (to the point that Twitter accounts add laser eyes to their profile pictures), it may be time to sell. When the same audience starts bashing Bitcoin, writing it off as dead, it might be time to buy.

Of course, actually analyzing social media sentiment is not easy. You can scroll through Twitter or Reddit, but you just won’t have enough time. You can harvest data and analyze it in bulk, but that may miss subtle trends. You can also look at what niche influencers are talking about and try to determine how the public will act based on this information, or even use specialized tools to aid your quest.
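At its crudest, the “in bulk” approach is only a few lines of code. Here’s a minimal sketch using an off-the-shelf sentiment model (NLTK’s VADER); the posts are hard-coded stand-ins for whatever a real scraper would collect:

```python
# Score a batch of posts and report the average market "mood".
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon fetch

posts = [   # stand-ins for scraped tweets or Reddit comments
    "Bitcoin to the moon, laser eyes on!",
    "Crypto is dead. Selling everything.",
    "Just bought the dip, feeling good about BTC.",
]
sia = SentimentIntensityAnalyzer()
scores = [sia.polarity_scores(p)["compound"] for p in posts]   # -1 to +1
print(f"average sentiment: {sum(scores) / len(scores):+.2f}")
```

Real trading signals layer far more on top — spam filtering, volume weighting, sarcasm handling — which is exactly why naive sentiment readings so often mislead.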

This is not what was promised

Bitcoin, and cryptocurrency in general, promised to change the world — but it kind of hasn’t. It’s made some people money and cost others money, but the impact on society has been negligible. When you factor in that mining and trading cryptocurrency produces emissions comparable to those of a medium-sized country, the issue becomes thornier still.

Part of the problem stems from the fact that we’re not really sure how much Bitcoin (or any cryptocurrency really) should be worth. As long as the price runs on emotions, memes, and influencer whims, cryptocurrency will continue to fluctuate wildly and trust will dwindle due to this volatility.

In truth, the same can be said about stocks. The market isn’t perfectly rational — oftentimes it’s anything but, as we’ve seen time and time again. But crypto is a relatively new phenomenon, and no one is really sure just how high or low it will go.

In an ideal world, people like Musk would lose their power, and cryptocurrency, freed from such nefarious influences, would drift towards a realistic value. People would trust it more and use it more widely; it would become incorporated in humanitarian projects, where its decentralized nature can work best, and act as a viable alternative to existing currency. Alas, we don’t live in an ideal world, and who knows what Musk will tweet next?

Bitcoin has an energy problem. Now what?

While bitcoin is poised to revamp and upend global finance, the massive power demands of its blockchain network could undermine the world’s efforts to keep global warming in check.

Credit: Quote Inspector.

In February, the bitcoin network consumed as much energy as Argentina, a country of 44 million people, according to researchers at Cambridge University. By 2024, if this trend continues, cryptocurrency mining in China alone could use as much power as Italy uses in a whole year. The resulting CO2 emissions would be equal to those of the Czech Republic.

Increasingly, the energy (and climate) impact of bitcoin is compared to that of countries. So what can be done about that?

Some analysts think this is fine. Research by ARK Investment Management, which holds a lot of crypto assets, found that the bitcoin ecosystem consumes less than 10% of the energy required by the traditional banking system.

However, billions of people use traditional banking services whereas only one million bitcoin addresses are active on a daily basis, according to CoinMetrics data. If the sector were to grow, its consumption would also grow.

It is also true that, like with any new industry, the requirements for implementing early infrastructure are particularly intense, so going forward each new bitcoin mined should use increasingly less energy and generate fewer carbon emissions — at least that’s the plan.

But in any event, the bottom line is that the bitcoin network eats up a lot of power, and its appetite is only increasing. In a new study published in Nature Communications, researchers at the University of Chinese Academy of Sciences in Beijing have proposed some solutions that could mitigate part of the crypto network’s massive environmental footprint.

Rather than taxing bitcoin mining, as some have proposed earlier, the Chinese researchers claim a better strategy may be encouraging miners to shift their operations to regions that are powered by ‘green’ energy.

Data from the Cambridge study indicates Chinese bitcoin mining is responsible for 65% of the network’s hashing power. North American miners make up roughly 8% of the global hash rate, followed closely by miners in Russia, Kazakhstan, Malaysia, and Iran.

A bitcoin mining rig from 2015. Credit: Wikimedia Commons.

There are a few reasons why China dominates this space. One is that the Chinese government was quick to offer subsidies to the mining industry. Chinese miners also have access to discounted computer chips straight from the world’s most important manufacturers. Last but not least, electricity in some of the key mining provinces, such as Xinjiang, is dirt cheap — at least five times cheaper than in North America. Meanwhile, the price of bitcoin has surged since 2020, as millions of first-time retail investors began buying crypto on their phones and large institutional investors moved assets away from stocks and bonds into the burgeoning crypto space.

Much of this energy is sourced from coal, although a study estimated up to 73% of bitcoin miners use at least some renewable energy as part of their power supply, including hydropower from China’s massive dams. But overall, much of the energy still comes from dirty sources.

Shouyang Wang, one of the new report’s authors and chair professor at the Academy of Mathematics and Systems Science at the Chinese Academy of Sciences in Beijing, wanted to see if there’s any way to make bitcoin mining operations both profitable and sustainable in the future.

Wang and colleagues ran simulations and found that if bitcoin mining is allowed to grow business-as-usual, it would peak in 2024 at nearly 300 terawatt-hours of electricity — as much as a medium-sized country — and generate nearly 130 million metric tons of carbon emissions. Since most of the mining would take place in China, this would completely derail the country’s efforts to decarbonize its energy system.
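Those two headline numbers hang together arithmetically: emissions are just electricity multiplied by the grid’s carbon intensity. The ~430 g CO2/kWh below is our assumed average for a coal-heavy mix, not a figure from the paper:

```python
# Emissions = energy consumed x carbon intensity of the electricity.
energy_twh = 300                  # projected 2024 peak consumption
g_co2_per_kwh = 430               # assumed coal-heavy grid average
tons = energy_twh * 1e9 * g_co2_per_kwh / 1e6   # kWh x g/kWh -> metric tons
print(f"≈ {tons / 1e6:.0f} million metric tons of CO2")
# prints: ≈ 129 million metric tons of CO2 — right at the study's figure
```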

In his address to the 75th session of the UN General Assembly (UNGA 75) in September 2020, Chinese President Xi Jinping declared that China will ‘aim to have CO2 emissions peak before 2030 and achieve carbon neutrality before 2060.’

“It is important to note that the adoption of this disruptive and promising technique without [taking into account] environmental concerns may pose a barrier to the worldwide effort on GHG emissions management in the near future,” Wang told Forbes, adding that the research team was “surprised by the energy consumption and carbon emission assessment results of bitcoin blockchain operation in China.”

The solution the authors propose is moving from a punitive tax policy to a site regulation policy that motivates miners to relocate to regions with a high share of renewable energy.

The simulations showed that under such a policy, only 20% of miners would remain in coal-intensive energy regions. Under the site regulation model, the researchers found bitcoin operations generated 100.61 million metric tons of carbon emissions at peak, as opposed to 105.19 million tons under an additional-taxation scenario.

“Site regulation should be carried out by the government, placing limitations on bitcoin mining in certain regions that use coal-based heavy energy,” Wang explained. “That being said, we think that there are enough benefits to this policy which will incentivize the miners to move their operation willingly. For example, since energy prices in clean-energy regions of China are lower than that in heavy-energy regions, the miners can effectively lower their individual energy consumption cost, which would increase their profitability.”

The supply of new bitcoin halves every four years, which also means the miners’ rewards get halved. The next halving is due in 2024 — the forecasted peak of bitcoin mining. After 2024, the authors of the new study believe it will no longer be cost-effective to mine bitcoin, which is why competition is so fierce nowadays: miners are buying every rig they can get their hands on to mine as much as possible before 2024.
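The halving schedule itself isn’t a forecast — it’s hard-coded into Bitcoin’s consensus rules, with the block subsidy halving every 210,000 blocks (roughly every four years). In simplified form:

```python
# Bitcoin's block subsidy schedule (simplified; the real rules use integer
# satoshis rather than floats).
def block_subsidy(height: int) -> float:
    return 50.0 / 2 ** (height // 210_000)

for height in (0, 210_000, 420_000, 630_000, 840_000):
    print(f"block {height}: {block_subsidy(height)} BTC")
# 50 -> 25 -> 12.5 -> 6.25 -> 3.125, the last being the 2024 halving
```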

This also means that, at least partly, the bitcoin network is self-regulating. In time, the network will use less and less energy and generate fewer carbon emissions. But until it peaks, regulators have their work cut out for them.

Google expands its earthquake detection system to Greece and New Zealand

After first launching it in the US, Google is now expanding its Android-based earthquake detection and alert system to Greece and New Zealand. Users will get warnings of earthquakes on their phones, giving them time to get to safety. The earthquakes won’t be detected by seismometers, but by the phones themselves.

Image credit: Flickr / Richard Walker

It’s the first time the tech giant will handle everything from detecting the earthquake to warning individuals. Mobile phones will first sense waves generated by quakes, then Google will analyze the data and send out an early warning alert to the people in the affected area. Users will get the alert automatically, unless they unsubscribe.

When it launched the service in California, Google first worked with the US Geological Survey and the California Governor’s Office of Emergency Services to send out earthquake alerts. The feature later became available in Oregon, will expand to Washington in May, and will eventually reach even more US states.

Mobile phones are already equipped with an accelerometer, a sensor that detects movement. The accelerometer can also pick up primary and secondary earthquake waves, letting each phone act as a “mini seismometer” — and many phones together form an earthquake detection network. (Seismometers are the dedicated instruments normally used to detect ground movement.)
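For a flavor of how a phone might turn raw accelerometer readings into a “shaking detected” flag, here’s a classic trigger from seismology — the STA/LTA ratio, which compares short-term average signal power to the long-term background. This is an illustrative sketch on synthetic data, not Google’s actual detection logic:

```python
# STA/LTA trigger: flag when recent shaking stands out from background noise.
import numpy as np

rng = np.random.default_rng(0)
accel = rng.standard_normal(2000) * 0.01              # quiet sensor noise, in g
accel[1200:1400] += rng.standard_normal(200) * 0.2    # simulated quake arrival

def sta_lta(x, short=20, long=400):
    """Ratio of short-term to long-term average signal power (trailing windows)."""
    power = x ** 2
    sta = np.convolve(power, np.ones(short) / short)[: len(x)]
    lta = np.convolve(power, np.ones(long) / long)[: len(x)]
    return sta / (lta + 1e-12)

ratio = sta_lta(accel)
trigger = 400 + int(np.argmax(ratio[400:] > 5))   # skip the LTA warm-up samples
print("shaking detected at sample", trigger)       # fires right around 1200
```

In a real deployment, the hard part isn’t the trigger — it’s telling earthquakes apart from dropped phones and bus rides, which is where aggregating signals from millions of devices server-side earns its keep.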

Traditional warning systems use seismometers to estimate an earthquake’s location and magnitude, then send a warning via smartphone or loudspeakers to residents. Even if they arrive just seconds before the shaking, these warnings can buy valuable time to take cover. But seismometer networks are difficult and expensive to build and maintain.

That’s why a warning system that can rely on smartphones has a lot of potential. Richard Allen, a seismologist at the University of California, Berkeley, told Science that Google’s interest in building quake-sensing capabilities directly into Android phones was an enormous opportunity, or, as he calls it, a “no brainer.”

“It’d be great if there were just seismometer-based systems everywhere that could detect earthquakes,” Marc Stogaitis, principal Android software engineer at Google, told The Verge last year. Because of costs and maintenance, he says, “that’s not really practical and it’s unlikely to have global coverage.”

Earthquakes are a well-known threat in Greece and New Zealand, where Google’s service is being deployed. Greece is spread across three tectonic plates, while in New Zealand, the Pacific Plate collides with the Australian Plate. Neither country has deployed an operational warning system, which created an opportunity for the tech giant.

Caroline Francois-Holden, an independent seismologist who until recently worked at GNS Science, told Science that many earthquakes in New Zealand originate offshore, where few phones are found. This might make Google’s system less than ideal. “Any earthquake early warning system needs to be designed with that in mind,” she said.

There are other limitations, too. Those closest to the epicenter won’t get much advance warning, since their phones will be the first to detect the quake. But their phones will help give a heads-up to people farther away, buying them crucial time to take shelter. And since Android is the leading smartphone operating system, the service has a lot of room to grow.