Category Archives: Research

Men could significantly outnumber women within decades — and this is a problem

Cultural preferences for boys and prenatal sex selection are causing uneven ratios of men and women around the world, a group of researchers found in a new study. If this continues, there will be a deficit of at least 4.7 million female births by 2030 under a conservative scenario. By 2100, that number could even escalate to 22 million. 

Image credit: Flickr / Mulan

Over the last 40 years, prenatal gender-biased sex selection has become the most visible consequence of “son preference”. Simply put, with prenatal screening allowing parents to learn the sex of a child before birth, many aren’t settling for a girl. Along with child marriage and female genital mutilation, sex selection is one of the key harmful practices defined by the United Nations (UN) and targeted under the Sustainable Development Goals (SDGs).

Sex-selective abortions, the main mechanism behind sex selection, have been observed across various countries from Southeast Europe to South Asia. They push the sex ratio at birth above its natural level and create a surplus of males, contributing to populations with fewer women than men.

This is not a new phenomenon. Previous studies showed there were 45 million “missing” female births between 1970 and 2017 due to prenatal sex selection – 95% in China and India. Now, in a new modeling study, the same group of scientists predicted that in 12 countries known to have skewed sex ratios at birth, there will be an extra 4.7 million missing female births by 2030. 

The trend would continue in the longer term, the researchers said, with a shortfall in female births of 5.7 million expected by 2100. The elevated ratio of males to females will eventually decline in populous countries such as India and China, but could rise in other countries such as Pakistan and Nigeria, Fengqing Chao, who co-authored the study, said in a statement.

A global problem

Chao developed the predictive models with researchers from the UN, the National University of Singapore, the University of Massachusetts Amherst and the Centre de Sciences Humaines in India. They based their projections on a database that incorporated over three billion birth records from more than 200 countries. 

The researchers warned the trends they identified would lead to a preponderance of men in more than a third of the world’s population, which could bring unknown social and economic consequences. They anticipate a set of demographic problems, such as large numbers of young men being unable to find wives in the coming decades, as well as violence against women becoming an even greater problem.  

“Prenatal sex selection accounts for about half of the recent deficit of females in the world during the previous decades. Fewer-than-expected females in a population could also result in elevated levels of antisocial behaviour and violence, and may ultimately affect long-term stability and social sustainable development,” the researchers wrote.

The main challenge now, the authors argued, is to understand whether birth masculinity will stay indefinitely skewed in countries affected by sex-selective abortions and whether new countries may be affected in the future. They described this as “essential” to anticipate and plan for changing sex structures around the world.

In addition, policies based on monitoring, advocacy campaigns, and direct and indirect measures to combat gender bias are required to slow the rise of the sex ratio at birth or to accelerate its decline. A broader objective is the need to influence the gender norms that lie at the core of prenatal sex selection, they wrote.

The study was published in the journal BMJ Global Health. 

Why Stonehenge megaliths stay up after 5,000 years — it’s all geology

The famous prehistoric landmark of Stonehenge in the United Kingdom has always been shrouded in a layer of mystery. But one step at a time, scientists are starting to answer some of the questions behind the monument. Now, a new study has revealed how the monument is still standing after all this time. 

Image credit: Flickr / Stanley Simny.

Built some 4,600 years ago, Stonehenge has fascinated historians, geologists, travelers, and artists for centuries. Based on previous studies, we know that it was a bustling spiritual center and that it must have held a huge significance for the society that built it. We also have a pretty good idea of where the rocks that make it up come from (some traveled about 180 miles). But Stonehenge is still keen on keeping some of its secrets.

An old drill

In 1958, Robert Phillips, a representative of a drilling company performing restoration work on the monument, took a cylindrical core that had been drilled from one of Stonehenge’s pillars — Stone 58. Phillips emigrated to the US and took the core with him. The piece was returned to the UK in 2018 and handed over to a group of researchers.

Because of its protected status, it’s no longer possible to extract samples from the stones, which makes Phillips’ core unique. That’s why its return presented a big opportunity, allowing the researchers to carry out unprecedented geochemical analyses of the Stonehenge pillar, which they describe in the new study. It’s the first comprehensive scientific analysis of the megalith.

“Getting access to the core drilled from Stone 58 was very much the Holy Grail for our research. All the previous work on sarsens at Stonehenge involved samples either excavated from the site or knocked off from random stones,” David Nash, who led the study, said in a statement. “This small sample is now probably the most analyzed piece of stone other than moon rock.”

A comprehensive analysis

The megalith is made of a stone called silcrete, which formed gradually within a few meters of the ground surface as groundwater washed through buried sediment. Using X-rays and microscopes, the researchers found the silcrete is made of sand-sized quartz grains joined together by an interlocking mosaic of quartz crystals.

Quartz is extremely durable and doesn’t easily crumble or erode even when exposed to eons of wind and weather. This may have been why the builders chose to use it for their massive monument thousands of years ago. Instead of using the closest and biggest boulders, they went for the ones that could stand the longest time, Nash said.

The sample of the core analyzed in the study. Image credit: The researchers.

The study also showed that the sediments within which the stone developed were deposited during the Paleogene period, from 66 million to 23 million years ago. This means that the megalith can’t be older than this. However, when comparing the isotopes in the samples, they found certain sediments were even more ancient, which raises an interesting question.

“Some of the grains were likely eroded from rocks dating to the Mesozoic era, from 252 million to 66 million years ago, when they may have been trodden upon by dinosaurs. And some of the sand grains formed as long ago as 1 billion to 1.6 billion years ago,” Nash said in a statement.

While the study answered some questions about the monument, other puzzles remain – including the whereabouts of the other two cores drilled from Stone 58 during the 1958 restoration, which vanished from the record. Curators from the Salisbury Museum in England discovered part of one of those cores in their collection in 2019.

The study was published in the journal PLOS ONE.

These are the cheapest electric vehicles in the US today

Electric cars have a reputation for being more expensive than their traditional internal combustion engine (ICE) counterparts. But improvements in technology mean that the gap is closing every day — up to the point where many electric cars are cost-competitive with their petrol-based equivalents.

There’s a considerable selection of affordable electric cars that provide all the benefits of an EV without really breaking the bank. Here’s our list of the cheapest electric cars available in the US today — and why you should be paying attention to cheap EVs.

The Mini Cooper SE, the cheapest EV on this list. Image credit: Wikipedia Commons.

Why cheap electric cars matter

As the name suggests, electric vehicles run (at least in part) on electricity. Instead of having an ICE, they are powered by electric motors for propulsion. The motor derives energy from rechargeable batteries (typically lithium batteries).

EVs have actually been around for more than a century, but only recently have they become mainstream around the world, and they offer a few important advantages compared to “regular” cars.

Because they don’t have a clutch, a gearbox, or even an exhaust pipe, EVs are significantly quieter and offer a smoother ride than conventional gasoline-driven vehicles. Until recently, standard EVs could cover somewhere between 93 and 105 miles (150 km to 170 km) before needing to be recharged, depending on the model. Now, it’s not uncommon for electric cars to have a range well over 200 or even 300 miles.

As well as being two to four times more efficient than ICE models, electric vehicles can reduce the world’s reliance on oil-based fuels and deliver significant reductions in greenhouse gas emissions. Plus, they can help address air pollution, a global health problem, and drive advances in battery technology.

Electric vehicle fleets are currently expanding at a fast pace in several of the world’s largest vehicle markets, thanks to the dropping costs of batteries and EVs and an expanding charging infrastructure network. Charging stations are usually installed by utility companies as on-street facilities, and they can even be situated at workplaces.

“While they can’t do the job alone, electric vehicles have an indispensable role to play in reaching net-zero emissions worldwide,” Fatih Birol, Executive Director of the IEA, said in a statement. “Current sales trends are very encouraging, but our shared climate and energy goals call for even faster market uptake.”

EV sales rose 41% (to about three million electric cars) in 2020, with Europe overtaking China as the world’s main market, according to the International Energy Agency (IEA). Sales will continue growing through this decade, with the number of EVs registered around the world increasing from 10 million today to 145 million in 2030, the IEA said.

Similar optimism is shared by the consultancy IHS Markit, which identified 2027 as the “tipping point” for EVs. That year, EVs are expected to reach manufacturing cost parity with ICE vehicles in China, and soon thereafter in the EU and the US. Of the 89 million vehicles forecast to be sold in 2030, the consultancy predicts 23.5 million will be electric (about 26%).

There are plenty of electric vehicles to choose from, at least in the US. But how expensive are they, and which is the most convenient? We compiled a list of the cheapest models currently on the market. They all have different features, with the main differences being the range they offer and the level of comfort.

Mini Cooper SE ($30,750)

Image credits: Marco Verch.

While Mini vehicles are usually more expensive than their mainstream counterparts, the company is now trying something different with its Mini Cooper SE — the cheapest electric car on sale today in the US. With an estimated 100 miles of range, it can’t travel as far as other models, but it’s still enough for most commutes.

Mini had already released an electric vehicle in 2008, the Mini E, but it was the release of the Cooper SE in 2020 that gave the company a fully sorted electric runabout. It’s a tiny two-door with a fast charger that can restore 80% of its range in just 40 minutes. It’s stylish, affordable and quick but it’s also the lowest-range EV currently on sale. 

The Mini Cooper SE’s range comes in at about 115 miles (185 km). Its closest competitors, the 40-kWh Nissan Leaf and the Hyundai Ioniq, achieve 149 and 170 miles, respectively. The Cooper SE is powered by an electric motor between its front wheels that’s fed by a 32.6-kWh lithium-ion battery. In a test run by MotorTrend, it sprinted to 60 miles per hour in six seconds, faster than other models from Mini.

For the price, the electric Mini is a pretty good alternative for city usage. You won’t go very far in it outside town, but it’ll easily get you through most days.

Nissan Leaf ($32,620)

The Nissan Leaf is the second-least expensive electric vehicle in the US, with 149 miles of range in its affordable base trim. It has slightly better range than the Mini, and an optional larger battery-pack version is also available, enabling the Leaf to travel up to 226 miles. The car made its debut over ten years ago with a range of 73 miles and has been regularly improved with new models ever since.

The Nissan Leaf. Image credit: Wikipedia Commons.

The Nissan Leaf comes standard with a 40-kWh lithium-ion battery and an electric motor that makes 147 horsepower (hp). Plus versions up the ante with a 62-kWh battery and a motor with 214 hp. It benefits from a quiet ride with little road and wind noise entering the cabin but it has limited storage and back seats that don’t fold flat. 

Its e-Pedal feature allows the driver to shift between regenerative braking modes: one lets the car coast when the driver lifts off the throttle, while the other slows the car when the driver takes their foot off the gas and uses that energy to recharge the battery. The car can be plugged into either a 120-volt or a 240-volt outlet.

Hyundai Ioniq Electric ($34,250)

With impressive specs and a refreshing take on car design, the Ioniq Electric is the third-least expensive electric vehicle in the US. It has a 170-mile range, which is probably enough for most daily driving scenarios but still below some of the competition. Other electric vehicles like the Tesla Model 3 offer 250 miles of range at a higher cost.

The Hyundai Ioniq. Image credit: Wikipedia Commons.

The Ioniq has a 38.3-kWh battery, which powers a 100-kW electric motor. Charging ability is one of its most impressive features, supporting both 400V and 800V charging without additional adapters. This means it can charge from 10% to 80% in 18 minutes with a 350-kW charger, or gain 100 km of range in five minutes.

It’s equipped with an eight-inch touchscreen that includes Apple CarPlay and Android Auto with Bluetooth connectivity. There are two front-row USB ports and an eight-speaker premium audio system. The Ioniq is compatible with Hyundai’s Blue Link smartphone app, which allows remote monitoring and control of vehicle functions.

Chevrolet Bolt ($37,495)

The Chevrolet Bolt EV is one of the older cars on this list and one of the least expensive ones on the market today. It offers 259 miles of total range and a roomy interior, with accurate steering and linear acceleration. That range is competitive with other mainstream EVs, including the Kia Niro and the Tesla Model 3.

The Chevrolet Bolt. Image credit: Wikipedia Commons.

The Bolt charges at a rate of four miles of range per hour with a standard 120-volt portable charge cord and reaches a full charge in about 10 hours with a 240-volt cable. Under the metal, it packs a front-axle motor and a 66-kWh lithium-ion battery. Critics have said that it’s a bit plasticky, with overly firm seats, but overall it’s a very decent EV option.

It features a 10.2-inch touchscreen infotainment system with popular standard features. A subscription-based Wi-Fi hotspot and wireless smartphone charging are also available. The Bolt also includes standard and optional driver-assistance technology, including a 360-degree camera and a rear cross-traffic alert system.

Hyundai Kona Electric ($38,565)

Like the regular Kona, the electric version drives very well and has decent acceleration. However, as is often the case with electric vehicles, brake feel isn’t very progressive. It has a long range of 258 miles, which should be enough to cover most daily driving needs. In testing, it accelerated to 60 mph in just 6.6 seconds.

The Hyundai Kona Electric. Image credit: Wikipedia Commons

The Kona Electric carries a 64.0-kWh battery, which powers a 150-kW electric motor. It has two USB ports and a seven-inch infotainment touchscreen that includes Apple CarPlay, Android Auto and Bluetooth streaming. The cabin is made from quality materials and feels comfortable, and the cargo area can fit five carry-on suitcases. 

The latest 2022 version includes new front and rear bumpers, new wheel designs, and a tweaked interior. Blind-spot monitoring and automated emergency braking are standard across the range, but adaptive cruise control is only offered on the top-spec Limited model. Overall, it’s a fine basis for an electric vehicle and a good value.

Tesla Model 3 ($38,690)

You wouldn’t expect to see a Tesla on any list of cheapest cars, but the company is no longer just offering high-end vehicles. The Model 3 is Tesla’s cheapest electric car and an attractive proposition, with 263 miles of range and zero to 60 miles per hour in just 5.3 seconds – figures that many of the cheaper EVs on this list can’t match. It’s a game-changing electric vehicle, with a generous range that’s more accessible to average consumers.

Tesla Model 3. Image credit: Wikipedia Commons

It has a minimalist interior design, a wireless charging pad, USB-C ports, a power-operated trunk, and 19-inch wheels. And, even more impressively, its Smart Summon function allows the Model 3 to pull out of its parking spot and drive to your location autonomously. It works through an app, and you have to be within 200 feet of the car.

Still, not everything is perfect with the Model 3. Nearly everything is controlled via the massive touchscreen, and that creates a significant learning curve. The menu layout might be simple, but there are still too many submenus to go through for simple tasks like adjusting the steering wheel. Also, too much road and tire noise enters the cabin. Having said all this, at the end of the day — it’s still a Tesla.

Kia Niro EV ($40,265)

With long and low proportions, the Kia Niro EV looks more like a tall wagon than a crossover. Kia’s all-electric vehicle, which is also available with gas-only and plug-in-hybrid powertrains, is fairly attractive and packed with desirable standard features. It has a range of 239 miles, which is good for most driving duties besides long road trips.

The Niro EV. Image credit: Wikipedia Commons

The Niro EV is powered by a single electric motor that produces 201 hp, sent to the front wheels through a one-speed direct-drive transmission. It reaches 60 miles per hour in just 6.5 seconds, which is faster than the Chevy Bolt but slower than the Hyundai Kona Electric. It has a 65-kWh battery capacity, in line with the rest.

The battery can be recharged using either a 120-volt or 240-volt connection, but the two connections offer different charge times. On a 240-volt connection, the car can be recharged in about nine hours. If you can’t wait that long, the EV offers standard DC fast charge capability, allowing you to recharge the battery to 80% in an hour with a 100-kW connection.

Ford Mustang Mach-E ($43,995)

The Mustang Mach-E is Ford’s first all-electric crossover, designed and named after the company’s iconic pony car. It’s available with either a standard-range 75.7-kWh battery or an extended-range 98.8-kWh pack, feeding an electric motor mounted on the rear axle or on both axles. It can go from zero to 60 miles per hour in as little as 3.5 seconds.

Ford Mustang Mach-E. Image credit: Wikipedia Commons

The EV has an estimated range of between 211 and 305 miles, depending on the battery pack and type of electric motor, which isn’t as impressive as other EVs. Every model has a fast-charging capability. The Mach-E comes with a Ford mobile charger that can add 30 miles of range with a 120-volt outlet, and up to 80% of battery life with a 240-volt outlet. 

The battery is located under the floor of the car, which helps optimize cargo and passenger space. Unlike its exterior, the inside of the Mach-E doesn’t have much in common with the regular Mustang. Its dashboard has an attractive digital gauge cluster and is dominated by a touchscreen. It also has heated front seats and a panoramic sunroof.


The electric car market is shifting rapidly, with quick progress almost from month to month. No doubt, by the end of the year, we’ll have even more cars to add to this list. We’ll do our best to keep it updated if this page garners interest.

AI helps NASA look at the Sun with new eyes

The top row of images shows the degradation of AIA’s channel over the years since SDO’s launch. The bottom row of images is corrected for this degradation using a machine learning algorithm. Credit: Luiz Dos Santos/NASA GSFC.

It’s not easy being a telescope — just look at Hubble’s recent woes (and Hubble is hardly an exception). But being a solar telescope, constantly being exposed to intense light and particle bombardment, is especially rough.

Solar telescopes have to be constantly recalibrated and checked, not to ensure that damage isn’t happening — because damage is always happening. Instead, they have to be recalibrated to understand just how the instrument is changing under the effect of the Sun.

But recalibrating a telescope like NASA’s Solar Dynamics Observatory, which is in Earth orbit, isn’t easy. Its Atmospheric Imaging Assembly, or AIA, has created a trove of solar images enabling us to understand our star better than ever before. In order to recalibrate AIA, researchers have to use sounding rockets: smaller rockets that carry a few instruments and fly into space for only about 15 minutes.

The reason why the rockets are needed is that the wavelengths that AIA is analyzing can’t be observed from Earth. They’re filtered by the atmosphere. So you need the sounding rockets carrying a small telescope to look at the same wavelengths and map out how AIA’s lenses are changing.

The Sun seen by AIA in 304 Angstrom light in 2021 before degradation correction (left) and with corrections from a sounding rocket calibration (right). Credits: NASA GSFC

Obviously, the rocket procedure isn’t ideal. It costs a bit, and rockets can’t always be launched. So a group of NASA researchers looked for a more elegant solution.

“The current best calibration techniques rely on flights of sounding rockets to maintain absolute calibration. These flights are infrequent, complex, and limited to a single vantage point, however,” the new study reads. But that’s only part of the challenge.

“It’s also important for deep space missions, which won’t have the option of sounding rocket calibration,” said Dr. Luiz Dos Santos, a solar physicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, and lead author on the paper. “We’re tackling two problems at once.”

First, they set out to train a machine-learning algorithm to recognize solar structures and compare them with existing AIA data — they used images from the sounding rockets for that. The idea was that, by looking at enough images of a solar flare, the algorithm could identify a solar flare regardless of AIA lens degradation; and then, it could also figure out how much calibration was needed.

After enough examples, they gave the algorithm images to see if it would correctly identify just how much calibration was needed. The approach worked on multiple wavelengths.
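
To make this more concrete, here is a minimal, illustrative sketch of the general idea: a small convolutional network that takes an uncalibrated solar image and predicts a single multiplicative degradation factor, trained against calibration values of the kind a sounding rocket flight would provide. The architecture, layer sizes, and stand-in data below are assumptions for illustration, not the NASA team’s actual model.

```python
# Illustrative sketch only -- not the NASA team's actual pipeline.
# Assumes pairs of (uncalibrated AIA-like image, degradation factor), where the
# factor would come from sounding-rocket cross-calibration in the real setting.
import torch
import torch.nn as nn

class DegradationNet(nn.Module):
    """Tiny CNN mapping a single-channel solar image to a scalar
    multiplicative degradation factor (1.0 = no degradation)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z).squeeze(1)

def train_step(model, optimizer, images, factors):
    """One gradient step; `images` is (N, 1, H, W), `factors` is (N,)."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), factors)
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with random stand-in data:
model = DegradationNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(8, 1, 128, 128)      # stand-in for AIA image cutouts
factors = 0.5 + 0.5 * torch.rand(8)      # stand-in rocket-derived factors
print(train_step(model, opt, images, factors))
```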

“This was the big thing,” Dos Santos said. “Instead of just identifying it on the same wavelength, we’re identifying structures across the wavelengths.” 

This image shows seven of the ultraviolet wavelengths observed by the Atmospheric Imaging Assembly on board NASA’s Solar Dynamics Observatory. The top row shows observations taken in May 2010 and the bottom row shows observations from 2019, without any corrections, illustrating how the instrument degraded over time.
Credits: Luiz Dos Santos/NASA GSFC.

When they compared the virtual calibration (algorithm calibration predictions) with the data from the sounding rockets, the results were very similar, indicating that the algorithm had done a good job at estimating what type of calibration was needed.

The approach could also be used for other space missions, including deep-space missions where sounding rocket calibration won’t be possible.

The study was published in the journal Astronomy & Astrophysics.

The first ever 3D-printed steel bridge opens in Amsterdam

Queen Maxima of the Netherlands inaugurated the bridge. Image credit: Imperial.

The 12-meter-long structure was developed by engineers at Imperial College London, in partnership with the Dutch company MX3D. It was created by robotic arms using welding torches to deposit the structure of the bridge layer by layer. Construction took over four years and used about 4,500 kilograms of stainless steel.

“A 3D-printed metal structure large and strong enough to handle pedestrian traffic has never been constructed before,” Professor Leroy Gardner of Imperial College London, who was involved in the research, said in a statement. “We have tested and simulated the structure and its components throughout the printing process and upon its completion.”

The bridge will be used by pedestrians to cross the capital’s Oudezijds Achterburgwal canal. Its performance will be regularly monitored by the researchers at Imperial College, who set up a network of sensors in different parts of the bridge. The data will also be made available to other researchers worldwide who also want to contribute to the study.

The researchers will insert the data into a “digital twin” of the bridge, a computerized version that will imitate the physical bridge in real-time as the sensor data comes in. The performance of the physical bridge will be tested against the twin and this will help answer questions about the long-term behavior of the 3D-printed steel and its use in future projects. 

“For over four years we have been working from the micrometre scale, studying the printed microstructure up to the meter scale, with load testing on the completed bridge,” co-contributor Craig Buchanan said in a statement. “This challenging work has been carried out in our testing laboratories at Imperial, and during the construction process on site in Amsterdam.”

Mark Girolami at the University of Cambridge, who worked on the digital model of the bridge, told New Scientist that investigations into bridge failures often reveal deterioration that was missed. Now, with constant data coming from the bridge, they may be able to detect these failures before they do any damage, he added. 

Image credit: Imperial

3D printing has been consistently making headlines over the past few years, slowly becoming a reality for us commoners. Companies are building houses that are either fully 3D-printed or have most of their elements made with a printer. In Mexico, the world’s first 3D-printed neighborhood is already moving forward, while Germany’s first 3D-printed residential building is under construction.

But it’s not just housing, it can be almost anything. With the COVID-19 pandemic, researchers discovered they could print face shields and ventilator parts much faster and cheaper than with regular methods. A 3D printer even built a miniature heart, using a patient’s own cells, as well as human cartilage.

A set of research papers was published by Imperial academics during the construction and testing of the bridge. One was published in September 2020 in the Journal of Constructional Steel Research, another in July 2020 in the journal Materials & Design, and a third in February 2019 in the journal Engineering Structures.

Japan just shattered the internet speed record: 319 Terabits per Second

How’s your internet working these days? At a recent conference, researchers from Japan demonstrated a whopping data transmission rate of 319 Terabits per second (Tb/s). Remarkably, the transmission was carried out over a long distance (3001 km / 1864 miles) and using technology that is already available today.

Image credits: Joshua Sortino.

A minute of footage, in high definition, takes about 100 megabytes. That means that at this speed, you could download around 5,300 hours of footage every second. You could download the entire Spotify library in a few seconds. Wikipedia, you’d download in 0.01 seconds.

This speed is almost double the previous record of 178 Tb/s, and almost seven times the earlier record of 44.2 Tb/s. Meanwhile, NASA’s internet tops out at 91 Gb/s (1 Tb = 1,000 Gb = 1,000,000 Mb) and the fastest home internet you can get is about 10 Gb/s. We at ZME feel fortunate to be working with a 1 Gb/s connection.
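
As a rough sanity check, the back-of-the-envelope arithmetic behind the footage comparison looks like this. The 100 MB-per-minute figure is from the article; the 10-bits-per-byte allowance for protocol overhead is an assumption, chosen because it lands close to the quoted 5,300-hour number.

```python
# Back-of-the-envelope arithmetic for a 319 Tb/s link (illustrative only).
link_tbps = 319                # terabits per second
bits_per_byte = 10             # 8 bits/byte plus an assumed protocol overhead
hd_minute_mb = 100             # assumed size of one minute of HD footage

bytes_per_second = link_tbps * 1e12 / bits_per_byte       # ~31.9 terabytes/s
minutes_of_footage = bytes_per_second / (hd_minute_mb * 1e6)
hours_of_footage = minutes_of_footage / 60

print(f"{hours_of_footage:,.0f} hours of HD footage per second")  # ~5,300
```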

The record was achieved with infrastructure that already exists, though the researchers did add a few pieces of key equipment. The team used fiber-optic cable equipped with four “cores” — glass tubes within the cable — instead of the standard one core. To boost the data rate, the researchers divided the signal into different wavelengths. The key innovation seems to be that they employed a rarely-used band of wavelengths.

“In this demonstration, in addition to the C and L-bands, typically used for high-data-rate, long-haul transmission, we utilize the transmission bandwidth of the S-band, which has not yet been used for further than single-span transmission,” the researchers write in the study.

Image via Fiber Labs.

With more bands, the researchers were able to take the normal data-sending process (which starts with a “comb” laser fired at different wavelengths) and extend it over a much longer distance. After 70 km (43.5 miles), the signal was boosted with optical amplifiers. But the researchers didn’t use regular boosters. They used two novel types of fiber-optic amplifiers, one doped with thulium and the other with erbium — both materials have been used as boosters before — along with a technique called Raman amplification. This process is repeated over and over, enabling the signal to span the whopping 3,000-km distance.

Although the researchers did implement a few innovations, the whole structure uses the same diameter as the conventional, single-core fiber optic — which means conventional cables can be replaced with these novel ones. This would make it much easier to transition to a new type of infrastructure.

“The standard cladding diameter, 4-core optical fiber can be cabled with existing equipment, and it is hoped that such fibers can enable practical high data-rate transmission in the near-term, contributing to the realization of the backbone communications system.”

It remains to be seen whether the results will be confirmed, and just how expensive the technology would be to implement, but given the huge increase in speed, it’s bound to catch on, especially in tech-savvy countries like Japan. Soon enough, existing internet speeds will look primitive.

So, what would you use 319 Terabits per Second for?

Can AI help us discover new, innovative materials?

In their never-ending quest for better materials, researchers have found an unexpected ally — one that can scour through giant datasets with ease and compute how materials will behave at various temperatures and pressures. This ally, commonly known as Artificial Intelligence (or AI) could usher in a new age of material science.

Computing materials

Here’s the thing with materials: there are a lot of things that can be put together to obtain new materials with exciting properties, but making them takes time, money, and effort. So instead, what researchers do before actually making a new material is create a model of it on a computer.

The current prediction methods work well, and they’ve become quite standard — but they also take a lot of computing power. Oftentimes, these simulations need supercomputers and can use up a lot of resources, which many researchers and companies just don’t have access to.

“You would typically have to run tons of physics-based simulations to solve that problem,” says Mark Messner, principal mechanical engineer at the U.S. Department of Energy’s (DOE) Argonne National Laboratory.

So instead, Messner and colleagues looked for a shortcut. That shortcut came in the form of AI that uncovers patterns in massive datasets (something which neural networks are particularly good at) and then simulates what happens to the material in extreme conditions using much less processing power. If it works, it’s much more efficient and fast than existing methods… but does it work?

In a new study, Messner and his team say it does.

AI, sort this out

In their new study, they computed the properties of a material 2,000 times faster than the standard modeling approach, and many of the necessary calculations could be performed on a common laptop. The team used a convolutional neural network — a relatively simple class of deep neural networks, most commonly applied to analyze images — to recognize a material’s structural properties.

“My idea was that a material’s structure is no different than a 3D image,” Messner said. “It makes sense that the 3D version of this neural network will do a good job of recognizing the structure’s properties — just like a neural network learns that an image is a cat or something else.”

To put the approach to the test, Messner first designed a square with bricks, somewhat similar to how an image is built from pixels. He then took random samples of that design and used a simulation to create two million data points, which linked the design structure to physical properties like density and stiffness. These two million data points were fed into the neural network, and then the network was trained to look for the desired properties. Lastly, he used a different type of AI (a genetic algorithm, commonly used for optimization) to find an overall structure that would match the desired properties.
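
A heavily simplified sketch of that two-stage idea (a fast surrogate model plus a genetic algorithm searching for a design that matches target properties) might look like the following. The 8x8 “brick” grid, the stand-in surrogate function, and the target values are illustrative assumptions; in the study itself, the surrogate was a 3D convolutional network trained on the two million simulated data points.

```python
# Illustrative sketch of "surrogate model + genetic search" (not the study's code).
import numpy as np

rng = np.random.default_rng(0)
GRID = 8 * 8                     # flattened 8x8 brick layout (1 = material, 0 = void)

def surrogate_properties(design):
    """Stand-in for the trained neural network: maps a design to a
    (density, stiffness-like score) pair. Purely illustrative."""
    density = design.mean()
    stiffness = design.reshape(8, 8).sum(axis=0).min() / 8.0   # weakest column
    return np.array([density, stiffness])

TARGET = np.array([0.5, 0.4])    # desired (density, stiffness) -- assumed values

def fitness(design):
    # Closer to the target properties = higher fitness.
    return -np.linalg.norm(surrogate_properties(design) - TARGET)

def genetic_search(pop_size=60, generations=200, mutation_rate=0.02):
    pop = rng.integers(0, 2, size=(pop_size, GRID))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)][-pop_size // 2:]       # keep the best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, GRID)
            child = np.concatenate([a[:cut], b[cut:]])           # one-point crossover
            flips = rng.random(GRID) < mutation_rate             # random mutations
            child[flips] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best, surrogate_properties(best)

best_design, props = genetic_search()
print("best design properties:", props, "target:", TARGET)
```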

This simulation shows the steps that neural networks and genetic algorithms take to find an overall structure that matches specific material properties. Credit: Image by Argonne National Laboratory.

With this approach, the AI method found the right structure in 0.00075 seconds, compared to the 0.207 seconds the standard physics-based model would have needed. If the same ratio can be maintained for more complex computations, the approach could make it much easier for labs and companies with fewer resources to enter the material-making arena.

The potential is especially great in the field of renewable energy, where materials must withstand high temperatures, pressures, and corrosion, and must last decades. Another promising avenue is 3D-printed materials — making a structure layer by layer allows for more flexibility than traditional methods, especially if you can tell the machine exactly what you want it to produce.

“You would give the structure—determined by a neural network—to someone with a 3D printer and they would print it off with the properties you want,” he said. “We are not quite there yet, but that’s the hope.”

Messner and the team are even working on designing a molten salt nuclear reactor, which uses molten salt as a coolant and can operate at pressures far lower than existing nuclear reactors — but researchers first need to ensure that the stainless steel needed for the reactor will behave well under extreme conditions for decades.

The future of mechanical engineering looks bright. With ever-increasing computing power, 3D printing, and smarter algorithms, engineers can finely tune materials and produce the innovative materials industries need to thrive.

The study has been published in the Journal of Mechanical Design.

Beetles produce a lubricant that’s more slippery than Teflon

Humans may come up with clever innovations and designs for many useful products, but chances are nature has beaten us to the job. The latest example is lubricant: researchers have discovered that beetles naturally lubricate their knees with a substance that works better than Teflon.

Image credit: Flickr / Budak

Insects are the largest group of animals on Earth, but there’s still much we haven’t discovered about them. For instance, scientists have a limited understanding of how insects’ joints reduce friction and are protected from wear and tear. In vertebrates, joints are enclosed in a cavity filled with synovial fluid, which serves as a lubricant between contacting cartilage surfaces. These fluid-lubricated joints exhibit a very low coefficient of friction.

But what happens in insects, which don’t have this?

Researchers at the Christian-Albrechts University of Kiel and Aarhus University used a scanning electron microscope to examine the knee joint of the darkling beetle (Zophobas morio). They found that the area where the femur and tibia meet is covered with pores that excrete a lubricant substance made of proteins and fatty acids. Turns out, the lubricant is very powerful.

The team put it to the test, placing it between two glass surfaces and rubbing them together. The friction between the planes was much lower with the material between them than without it. The researchers also found that this material performed better as a lubricant than vacuum grease, and better than Teflon.

Polytetrafluoroethylene (PTFE), commonly known as Teflon, is a synthetic polymer containing carbon and fluorine. It’s commonly used as a non-stick coating in kitchen cookware, such as pans and baking trays. It’s also used in the manufacture of semiconductors and medical devices and as an inert ingredient of pesticides.

The researchers think the insect lubricant also has other functions. Under a high load, chunks of it deformed and created a squashable layer between the two surfaces that acted like a shock absorber and prevented abrasive contact. Still, extracting the lubricant from beetles would be too expensive and time-consuming, so the team wants to find a way to synthesize it.

“First of all, we need to understand the molecular structure, and then perhaps it is possible. Maybe it is necessary to involve biotechnology and use bacteria to produce it,” co-author Konstantin Nadein from the University of Kiel in Germany told New Scientist.

The researchers think this natural lubricant from beetles might be useful for small-scale robots and prosthetics, for which conventional lubricants don’t work that well. They called for further studies on the properties of the lubricant so as to come up with ideas for biomimetic applications in the area of novel lubricating materials.

“In this regard, this research may be of particular interest for robotics and MEMS technology, and especially for prosthetics, in order to develop a new generation of completely bio-organic lubricants with friction-reducing properties similar to PTFE (Teflon),” the researchers wrote. 

The study was published in the journal Proceedings of the Royal Society B. 

Scientists develop world’s thinnest technology – only two atoms thick

Researchers at Tel Aviv University have engineered what is currently the single smallest and thinnest piece of technology ever seen, with a thickness of just two atoms. The new invention uses quantum-mechanical electron tunneling, which allows information to travel through the thin film, and is able to store electric information, making it potentially applicable to all sorts of electronic devices.

In a screenshot from video released by Tel Aviv University on June 30, PhD student Maayan Wizner Stern uses tweezers to hold an electronic storage unit that is two-atoms thick. (Screen capture: YouTube)

Moshe Ben Shalom, who was involved in the project, said the research started from the team’s curiosity about the behavior of atoms and electrons in solid materials, which has generated the technology used by many modern devices. They tried to “predict and control” the properties of these particles, he added in a statement. 

“Our research stems from curiosity about the behavior of atoms and electrons in solid materials, which has generated many of the technologies supporting our modern way of life,” says Dr. Ben Shalom. “We (and many other scientists) try to understand, predict, and even control the fascinating properties of these particles as they condense into an ordered structure that we call a crystal. At the heart of the computer, for example, lies a tiny crystalline device designed to switch between two states indicating different responses — “yes” or “no,” “up” or “down” etc. Without this dichotomy — it is not possible to encode and process information. The practical challenge is to find a mechanism that would enable switching in a small, fast, and inexpensive device.”

Modern devices contain tiny crystals of around a million atoms (roughly one hundred atoms in height, width, and thickness). This new development means the crystals can be reduced to just two atoms thick, allowing information to flow with greater speed and, if equal or comparable performance can be achieved, making devices much more efficient.

For the study, the researchers used a two-dimensional material – one-atom-thick layers of boron and nitrogen, arranged in a repetitive hexagonal structure, drawing inspiration from graphene. They could break the symmetry of this crystal by artificially assembling two such layers “despite the strong repulsive force between them” due to their identical charges, Dr. Shalom explained. 

“In its natural three-dimensional state, this material is made up of a large number of layers placed on top of each other, with each layer rotated 180 degrees relative to its neighbors (antiparallel configuration)” said Dr. Shalom in a statement. “In the lab, we were able to artificially stack the layers in a parallel configuration with no rotation.” 

Maayan Wizner Stern, a PhD student who led the study, said the technology could have other applications beyond information storage, including detectors, energy storage and conversion and interaction with light. She hopes miniaturization and flipping through sliding will improve today’s electronic devices and allow new ways of controlling information in future devices. 

The new technology proposes a way for storing electric information in the thinnest unit known to science, in one of the most stable and inert materials in nature, the researchers said. The quantum-mechanical electron tunneling through the atomically thin film could boost the information reading process far beyond current technologies.

Researchers also expect the same approach to work with multiple crystals, potentially offering even more desirable properties. Wizner Stern concludes:

“We expect the same behaviors in many layered crystals with the right symmetry properties. The concept of interlayer sliding as an original and efficient way to control advanced electronic devices is very promising, and we have named it Slide-Tronics.”

The study has been published in the journal Science. 

This 5,000-year-old man may have been the “oldest” plague victim

About 5,000 years ago, a young man in Northern Europe was buried at a site called Riņņukalns in Latvia. As it turns out, the man had been infected with the oldest-known strain of Yersinia pestis — the bacterium behind the Black Death, the plague that spread through medieval Europe. This is the oldest case of the plague researchers have ever found.

This means that the strain of the infectious bug emerged about 2,000 years earlier than previously thought, according to a new study. The black plague swept through Europe in the 1300s, wiping out as much as half of the population. Later waves continued to strike regularly over several centuries, causing millions of deaths. But we’re still not quite sure when the pathogen first emerged in humans.

“It seems this bacterium has been around for quite a long time,” study co-author Ben Krause-Kyora, who heads the Ancient DNA Laboratory at the University of Kiel in Germany, told ABC News. “Up to now this is the oldest-identified plague victim we have. He most likely was bitten by a rodent and got the primary infection.”

Riņņukalns is an archaeological site next to the River Salaca in Latvia, with layers of mussel shells and fish bones left by hunter-gatherers. The site was first excavated in 1875 by an archaeologist, who found two graves with the remains of a man and a girl. The bones were given to anthropologist Rudolf Virchow, but vanished during World War II.

In 2011, the bones were rediscovered in Virchow’s anthropological collection in Berlin. Shortly after, two more graves were uncovered at Riņņukalns. The remains were thought to be part of the same group of hunter-gatherers as the teenage girl and the man. Not much was known about their genetic makeup or the infectious diseases they encountered.

To find out, Krause-Kyora and his team took samples from the teeth and bones of the four hunter-gatherers, hoping to sequence their genomes. They also screened the genomic sequences for bacteria and viruses. All individuals were clear of Yersinia pestis except one – RV 2039, a 20- to 30-year-old man.

The researchers compared the bacterium’s genome to ancient and modern Yersinia pestis strains. The man had been infected with a strain belonging to a lineage that emerged about 7,000 years ago – the oldest known. It may have evolved after breaking away from its predecessor, Yersinia pseudotuberculosis, which causes an illness similar to scarlet fever.

This strain of the plague didn’t contain the gene that lets it spread from fleas to humans, unlike its medieval counterpart. But the researchers believe the man may have been infected after being bitten by a rodent carrying the bacterium. The man’s genome had signs of carrying the bug in his blood, suggesting he could have died of the infection. 

The fact that only one man, and not the rest of the people buried there, showed signs of infection suggests that this Yersinia pestis strain may have been less contagious than later strains. Infections may have occurred as small, isolated cases, with the bacterium evolving into its medieval and modern forms alongside the growth of human civilization and the development of bigger cities.

The study was published in the journal Cell. 

Almost half of the goals scored in football (soccer) have some sort of randomness to them

With EURO2021, football lovers around the world rejoice at the chance of watching one of the very first international sports competitions since the pandemic started. For fans, it’s an enthralling competition, especially as the end outcome is almost always unpredictable.

A team of researchers focused exactly on that: the purely random events that lead to goals in football. According to their research, almost every other goal (46%) has some random influence on it.

Football is a weird game. Unlike most other team sports, where plenty of points are scored, football games typically have 2-3 goals — and each of them is a very big deal. Unlike basketball or handball, where a shot going in versus narrowly missing is unlikely to decide the end result of the game, football is often decided by this type of detail. Watch enough football games, and you’ll be stuck with the feeling that sometimes, games are decided not by who is the better team, but rather by a bit of luck.

Dr. Daniel Memmert, Executive Head of the Institute of Exercise Training and Sports Informatics at the German Sport University Cologne, wanted to focus on that. Along with his colleagues, Memmert analyzed 7,263 goals scored in the English Premier League over seven seasons, starting with 2012/13. Since the English Premier League is arguably the best football league in the world, it seems like a good place to start.

The researchers’ analysis selected six variables that defined the randomness involved in scoring a goal, including goals following a rebound, long-range shots, deflected shots, and goals created by defensive errors such as own goals. The study also included nine situational variables such as season, matchday, match location, match situation, goal number, and team strength.

The researchers were surprised to see that 46% of all scored goals had some form of random influence to them. Furthermore, more than 60% of all matches ended either in a draw or with a goal difference of one goal — emphasizing the importance of these chance goals.
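
The counting behind figures like these is conceptually simple. The sketch below shows how one might flag a goal as “chance-influenced” and compute the share; the data layout and attribute names are made up for illustration and are not the study’s dataset.

```python
# Minimal sketch of the counting behind a "share of chance goals" figure.
# The goal records below are invented; the study analyzed 7,263 real goals.
goals = [
    {"rebound": True,  "deflected": False, "long_range": False, "defensive_error": False},
    {"rebound": False, "deflected": False, "long_range": False, "defensive_error": False},
    {"rebound": False, "deflected": True,  "long_range": False, "defensive_error": True},
]

def is_chance_goal(goal):
    """A goal counts as 'chance-influenced' if any random-influence flag is set."""
    return any(goal.values())

share = sum(is_chance_goal(g) for g in goals) / len(goals)
print(f"chance-influenced goals: {share:.0%}")   # the study reports ~46%
```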

“A single random goal can therefore be enough to significantly change the outcome of a match. Thus, chance is not only highly relevant in the case of that particular goal. Chance also plays a significant role in deciding the final score of the match,” explains Memmert.

Interestingly, the prevalence of these chance goals appears to be dropping in recent seasons. Sport scientist Fabian Wunderlich, first author of the recently published paper, explains:

“The results clearly highlight the essential role of chance in football, as almost every second goal benefits from random influence.”

“Another interesting finding is that the proportion of chance goals has dropped from 50% to 44% over the last seven seasons. This might be caused by the fact that match preparation is becoming increasingly professional and data-driven, or that players are becoming better trained technically as well as tactically.”

The occurrence of chance goals also appears to depend on the match situation. Some specific situations (like free kicks or corner kicks) tend to involve more random influence in goal scoring, which suggests that coaches should better prepare for this type of situation. In fact, they likely already do: the influence of chance goals is much higher for weaker teams, which suggests that, consciously or not, teams may already be playing to these odds. The likelihood of chance goals was also higher when the score was a draw.

The team encourages other researchers to further look at the data, at other types of leagues and games, and on women’s football as well, to see if “randomness in goal scoring is a relatively stable inherent characteristic of football or highly dependent on the circumstances of the play.”

“Moreover, further research should tackle the question whether physical, technical, tactical and psychological changes over the last years were responsible for the decreasing influence of randomness on goal scoring,” the study adds.

As football (and sports in general) becomes more and more influenced by this type of data, we can expect studies like this to make an impact on how managers and teams approach the game. For better or for worse though, football remains an unpredictable game, with a lot of randomness involved.

The study was published in the Journal of Sports Science.

The sound of music: violins could soon be designed by Artificial Intelligence

Ever since the first violins were made some 500 years ago, the process of violin-making has changed surprisingly little. Traditionally, violins are “bench-made” — by a single individual, often a master maker (or “luthier”). More recently, “shop-made” instruments, where many people participate under the supervision of a master maker, have become more common. But in both instances, the layout is designed by a master violin maker — either from scratch, or copied from the old masters.

That may soon change. According to a new study, Artificial Intelligence (AI) could soon take part in the process.

Image credits: Providence Doucet.

A violin is a surprisingly complex object. Its geometry is defined by its outline and the arching on the horizontal and vertical sections. In a new study, the Chilean physicist and luthier Sebastian Gonzalez (a postdoc) and the professional mandolin player Davide Salvi (a PhD student) showed how a simple and effective neural network can predict the vibrational behavior of violin designs — in other words, how the violin would sound.

The prediction uses a small set of geometric and mechanical parameters from the violin. The researchers developed a model that describes the violin’s outline based on the arcs of nine circles. Using this approach, they were able to draw a violin plate as a function of only 35 parameters.

A drawing from the workshop of Enrico Ceruti showing the outline as a series of connected arcs of circles, image courtesy of the Violin Museum of Cremona, Italy. Image credits: Gonzalez et al.

After starting from a basic design, they randomly changed the parameters (such as the position and radii of the circles, the thickness, and the type of wood) until they obtained a database of virtual violins. Some of the designs are very similar to shapes already used in violin making, while others have never been attempted before. These shapes were then used to predict how the violins would sound.

“Using standard statistical learning tools, we show that the modal frequencies of violin tops can, in fact, be predicted from geometric parameters, and that artificial intelligence can be successfully applied to traditional violin making. We also study how modal frequencies vary with the thicknesses of the plate (a process often referred to as plate tuning) and discuss the complexity of this dependency. Finally, we propose a predictive tool for plate tuning, which takes into account material and geometric parameters,” the researchers write in the study.
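
To make that more concrete, here is a minimal sketch of a “parameters in, modal frequencies out” regression of the kind described above, trained on randomly generated stand-in data rather than the authors’ dataset. The network size, the number of predicted modes, and the synthetic relationship between parameters and frequencies are assumptions for illustration.

```python
# Illustrative sketch only: a small regressor from 35 geometric/material
# parameters to the first few modal frequencies of a violin plate.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_violins, n_params, n_modes = 2000, 35, 5

X = rng.random((n_violins, n_params))               # stand-in design parameters
# Stand-in "ground truth": a smooth function of the parameters plus a little noise.
W = rng.normal(size=(n_params, n_modes))
y = 400 + 100 * np.tanh(X @ W) + rng.normal(scale=2.0, size=(n_violins, n_modes))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out virtual violins:", round(model.score(X_te, y_te), 3))
```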

Left: example of an historical violin. Credit: 2008 Stoel, Borman. Right: examples of three violins in the dataset. Credit: Politecnico di Milano

The algorithm was able to predict how the violins would sound with 98% accuracy — far better than even the researchers expected.

The innovative work promises to save a lot of effort for violin makers, and it also paves the way for new, innovative types of designs to be tried. In future research, the team will also look at how to select the wood that is most desirable for a given violin design.

The study was published in Scientific Reports.

Graphene protective coatings could improve hard disk data storage potential ten-fold

A paper published by researchers at the Cambridge Graphene Center, in collaboration with an international team, might change the way your PC stores data forever — or, at least, for a while!

An “opened, old hard disk drive”. Image credits Norlando Pobre / Flickr.

Are you looking for a storage upgrade on your device? Thinking of trading ye olde hard disk drive (HDD) for the sleeker, cooler, faster, solid-state drive (SSD)? I can completely empathize. But fear not! The HDD is getting an upgrade in graphene form, according to a new paper, which should increase the amount of data they can store tenfold (compared to currently available technology).

The study was carried out in collaboration with researchers at the University of Exeter and institutions in India, Switzerland, Singapore, and the US.

Hard graphene drive

“Demonstrating that graphene can serve as a protective coating for conventional hard disk drives and that it is able to withstand HAMR conditions is a very important result. This will further push the development of novel high areal density hard disk drives,” said Dr. Anna Ott from the Cambridge Graphene Center, one of the co-authors of this study.

HDDs were first introduced in the 1950s, but they wouldn’t have a meaningful impact on personal computers until the 1980s, mostly due to cost and complexity of manufacture. Since then, however, they have been a game-changer: HDDs can store much more data in a smaller package than any medium before them. In later years, SSDs have become the more popular choice for mobile devices due to their greater speed and more compact size, but HDDs still offer greater data density at a low cost, and are still the preferred choice of storage medium for desktop computers.

There are two main components that make up an HDD: the platters and a head mounted on a mobile arm. Data is stored on the platters, written there by the magnetic head as the platters spin rapidly. The head is also what reads data off the platters. The sound you can sometimes hear coming from your PC as it tries to access something in its memory is these parts moving inside the HDD. More modern drives leave less and less room between these parts in order to save on space.

Still, a key part of the HDD’s design is keeping the platters from being damaged, either by mechanical shock or by chemical corrosion. Our current way of doing this — carbon-based overcoats (the unfortunately abbreviated ‘COCs’) — occupies very little space. Today these coatings are around 3 nm thick, but they used to be 12.5 nm or more in the 1990s. This thinning of the COCs has helped increase HDDs’ overall data density to about one terabyte per square inch of platter. The new graphene coatings could increase this data density tenfold.

The team replaced commercial-grade COCs with one to four layers of graphene, and then tested their resilience against friction, wear, and corrosion, as well as their thermal stability and compatibility with current lubricants. Apart from being much thinner, these layers fulfil the same job as current COC materials, the team explains, with ideal properties in all the analyzed categories. They actually offer better corrosion resistance and twice the friction reduction of our best COC options right now.

Additionally, the graphene layers were compatible with Heat-Assisted Magnetic Recording (HAMR), a technique that allows more data to be stored on the HDD by heating up the platter. Current COC materials do not perform well at these high temperatures, the authors add.

An iron-platinum platter was used for the study. The team estimates that such a disk, coupled with the graphene coatings and HAMR technology, could reach data densities of over 10 terabytes per square inch of platter.

“This work showcases the excellent mechanical, corrosion and wear resistance properties of graphene for ultra-high storage density magnetic media. Considering that in 2020, around 1 billion terabytes of fresh HDD storage was produced, these results indicate a route for mass application of graphene in cutting-edge technologies,” says Professor Andrea C. Ferrari, Director of the Cambridge Graphene Center, and co-author of the study.

The paper “Graphene overcoats for ultra-high storage density magnetic media” has been published in the journal Nature Communications.

Drones can elicit emotions from people, which could help integrate them into society more easily

Could we learn to love a robot? Maybe. New research suggests that drones, at least, could elicit an emotional response in people if we put cute little faces on them.

A set of rendered faces representing six basic emotions in three different intensity levels that were used in the study. Image credits Viviane Herdel.

Researchers at Ben-Gurion University of the Negev (BGU) have examined how people react to a wide range of facial expressions depicted on a drone. The study aims to deepen our understanding of how flying drones might one day integrate into society, and how human-robot interactions in general can be made to feel more natural — an area of research that hasn’t been explored very much until now.

Electronic emotions

“There is a lack of research on how drones are perceived and understood by humans, which is vastly different than ground robots,” says Prof. Jessica Cauchard, lead author of the paper.

“For the first time, we showed that people can recognize different emotions and discriminate between different emotion intensities.”

The research included two experiments, both using drones that could display stylized facial expressions to convey basic emotions to the participants. The object of these studies was to find out how people would react to these drone-borne expressions.

Four core features were used to compose each of the facial expressions in the study: eyes, eyebrows, pupils, and mouth. Of the emotions the drones could convey, five were recognized ‘with high accuracy’ from static images (joy, sadness, fear, anger, surprise), and four of them (joy, surprise, sadness, anger) were recognized most easily in dynamic expressions conveyed through video. However, people had a hard time recognizing disgust no matter how it was conveyed to them by the drone.

What the team found particularly surprising, however, was how involved the participants themselves became in interpreting these emotions.

“Participants were further affected by the drone and presented different responses, including empathy, depending on the drone’s emotion,” Prof. Cauchard says. “Surprisingly, participants created narratives around the drone’s emotional states and included themselves in these scenarios.”


Based on the findings, the authors list a number of recommendations that they believe will make drones more easily acceptable in social situations or for use in emotional support. The main recommendations include adding anthropomorphic features to the drones, using the five basic emotions for the most part (as these are easily understood), and using empathetic responses in health and behavior change applications, as they make people more likely to listen to instructions from the drone.

The paper “Drone in Love: Emotional Perception of Facial Expressions on Flying Robots” has been published by the Association for Computing Machinery and was presented at the CHI Conference on Human Factors in Computing Systems (2021).

Microsoft releases simple “auto-complete for programmers” that uses mammoth AI

Less than one year ago, a new language AI called GPT-3 hit the stage. By far the most powerful AI of its type, GPT-3 can write in different styles, answer complex questions and, surprisingly, even write bits of code. This last fact was not lost on programmers. Software development is a giant, half-a-trillion dollar industry, always on the rise and always adapting to emerging technology.

Microsoft purchased a license for GPT-3 a few months ago, and now, they’ve announced their first product based on the AI: a tool that will help users build apps without needing to know how to write computer code or formulas.

It’s not the first time advanced algorithms have been used to make programming easier. Indeed, writing code has changed a lot since the early days of plain text on a terminal. But companies are always looking for ways to make writing code easier, and therefore more accessible to more people.

In truth, Microsoft’s new tool won’t write the next big app for you, but it can take some of the lower-level bits of code and enable you to “write” them with a click of a button — something which we also pointed out as a possibility when covering the initial release of the AI, and which wasn’t lost on big tech.

“Using an advanced AI model like this can help our low-code tools become even more widely available to an even bigger audience by truly becoming what we call no code,” said Charles Lamanna, corporate vice president for Microsoft’s low-code application platform.

Microsoft has been working on this for a while with its suite of “low code, no code” software through its Power Platform. The idea is simple: users still have to understand the logic and structure behind the code they’re writing, but smart tools like this one can make the boring parts of writing routine code much easier. It’s a bit like autocomplete for code: you still need to know what you’re writing, but it helps you when you can’t find the word you’re looking for.

This could also be useful for smaller companies that can’t afford to hire a lot of experienced programmers for things like analytics, data visualization, or workflow automation. In a sense, GPT-3 becomes a hired assistant for the company.

Instead of having users learn how to address the database properly, they can just ask it what to do in plain language, and GPT-3 makes the translation. For instance, if you wanted to find products whose names start with “kids” on the Power Platform, you’d have to use a certain syntax, which looks something like this:

  • Filter('BC Orders', Left('Product Name', 4) = "Kids")

With GPT-3, all you need to do is say:

  • “find products where the name starts with ‘kids’.”

It’s a simple trick, but it could save users a lot of time and resources, enabling people and smaller companies to build apps more rapidly, and with less effort. Since GPT-3 is such a powerful and capable language AI, there’s a good chance it will also understand more complex queries. It’s not all that different from the natural language query functions that are already available in software like Excel or Google Sheets, but GPT-3 is more sophisticated.
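
To make the idea more concrete, here is a minimal, hypothetical sketch of how such a natural-language-to-formula layer could be wired up with few-shot prompting. The prompt format and the gpt3_complete() stub are placeholder assumptions — this is not Microsoft’s implementation, nor a real API client.

    # Hypothetical sketch of "natural language -> Power Fx formula" translation.
    # The prompt and the gpt3_complete() stub are illustrative placeholders.

    FEW_SHOT_PROMPT = """Translate the request into a Power Fx formula.

    Request: find products where the name starts with 'kids'
    Formula: Filter('BC Orders', Left('Product Name', 4) = "Kids")

    Request: {request}
    Formula:"""

    def gpt3_complete(prompt: str) -> str:
        # Placeholder: send `prompt` to whatever GPT-3 completion endpoint
        # you have access to and return the generated text.
        raise NotImplementedError

    def natural_language_to_formula(request: str) -> str:
        # Build the few-shot prompt and let the model fill in the formula.
        return gpt3_complete(FEW_SHOT_PROMPT.format(request=request)).strip()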

“GPT-3 is the most powerful natural language processing model that we have in the market, so for us to be able to use it to help our customers is tremendous,” said Bryony Wolf, Power Apps product marketing manager. “This is really the first time you’re seeing in a mainstream consumer product the ability for customers to have their natural language transformed into code.”

Programming languages are notoriously unforgiving, with small errors causing big headaches for even advanced users. Microsoft’s approach isn’t the first of its kind, but it has one big advantage: it’s extremely simple. The feature accelerates the trend of simplifying programming and cements Microsoft’s ambitions to dominate the landscape. But perhaps the most interesting part is how a new breed of AI language models is starting to enter the world of programming.

New approach creates power out of thin WiFi

Researchers at the National University of Singapore (NUS) and Tohoku University (TU) in Japan are working to power the devices around you with the WiFi signals that are commonplace in the modern world.

Image via Pixabay.

Wireless networks are everywhere in towns and cities today, connecting millions of devices to the Internet, day in, day out. Needless to say, that’s a lot of energy — carried on the 2.4GHz radio band such networks use — being beamed all around us all the time. New research is working to harness this energy for a useful purpose, such as charging (for now, tiny) devices.

Airdropping charge

“We are surrounded by WiFi signals, but when we are not using them to access the Internet, they are inactive, and this is a huge waste. Our latest result is a step towards turning readily-available 2.4GHz radio waves into a green source of energy, hence reducing the need for batteries to power electronics that we use regularly.”

“In this way, small electric gadgets and sensors can be powered wirelessly by using radio frequency waves as part of the Internet of Things. With the advent of smart homes and cities, our work could give rise to energy-efficient applications in communication, computing, and neuromorphic systems,” said Professor Yang Hyunsoo from the NUS Department of Electrical and Computer Engineering, who led the research.

The team developed a new technology that uses tiny smart devices known as spin-torque oscillators (STOs), which can harvest and convert wireless radio waves into power for devices. They showed that these devices can successfully harvest energy from WiFi signals and that they could generate enough energy to power a light-emitting diode (LED) wirelessly, without using a battery.

STOs are devices that can receive radio signals and transform them into microwaves. Although that sounds amazing, they’re still an emerging technology, and are still quite inefficient at their job. Currently, STOs are only able to output low levels of power.

One workaround used right now is to stack several STOs together, but this isn’t always viable: many devices have spatial constraints, because nobody likes chunky gadgets. Individual STOs also respond to only a limited range of frequencies, generally just a few hundred MHz, which further complicates their use.

The team’s solution was to use an array of eight STOs connected in series. This array converted the 2.4 GHz electromagnetic waves used by WiFi into a direct-current voltage, which was fed to a capacitor and used to light a 1.6-volt LED. Five seconds of charging the capacitor was enough to keep the LED lit for one minute after the wireless power was switched off.
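
As a rough back-of-the-envelope check on how a short burst of harvested energy can keep an LED lit, here is a small calculation. The paper excerpt above doesn’t state the capacitance or the LED current, so every number below is an assumption chosen purely for illustration.

    # Back-of-the-envelope energy budget for the capacitor-and-LED demo.
    # All numbers are illustrative assumptions, not values from the paper.

    led_voltage = 1.6        # V, as stated for the LED in the demo
    led_current = 20e-6      # A, assumed: a very dim indicator LED
    lit_time = 60.0          # s, LED stayed lit for about a minute

    energy_needed = led_voltage * led_current * lit_time   # joules
    print(f"Energy to keep the LED lit: {energy_needed * 1e3:.2f} mJ")

    # A capacitor charged to voltage V stores E = 1/2 * C * V^2.
    cap_voltage = 2.0        # V, assumed charging voltage
    capacitance = 2 * energy_needed / cap_voltage**2
    print(f"Capacitance needed: {capacitance * 1e6:.0f} uF (a large electrolytic)")

    # Charging that capacitor in ~5 s implies an average harvested power of:
    charge_time = 5.0        # s
    print(f"Average harvested power: {energy_needed / charge_time * 1e3:.2f} mW")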

As part of the research, they also performed a comparison between the STOs series design they used and a parallel design. The latter, they explain, has better time-domain stability, spectral noise behavior, and control over impedance mismatch — or, more to the point for us laymen, it’s better for wireless transmission. The series layout is more efficient at harvesting energy.

“Aside from coming up with an STO array for wireless transmission and energy harvesting, our work also demonstrated control over the synchronising state of coupled STOs using injection locking from an external radio-frequency source,” explains Dr Raghav Sharma, the first author of the paper.

“These results are important for prospective applications of synchronised STOs, such as fast-speed neuromorphic computing.”

In the future, the team plans to increase the number of STOs in their array, and test it for wirelessly charging other devices and sensors. They also hope to get interest from industry in developing on-chip STOs for self-sustained smart systems.

The paper “Electrically connected spin-torque oscillators array for 2.4 GHz WiFi band transmission and energy harvesting” has been published in the journal Nature Communications.

How wireless charging works — and why it can be a game changer

Wireless charging has already been around for some time. The odds are, if you have a flagship smartphone or a new electric car, you’re already familiar with it. But what is it, and how does it work?

To get to the bottom of it, we’ll have to greet an old friend: magnetism.

The wonders of induction

Wireless charging, as the name implies, means that you no longer need a cable to connect the device to a source of power. The charger creates an oscillating magnetic field from which your device can draw energy, bypassing the need for a wire.

The bulk of the work is done by coils: there’s one special coil in the charger (which is typically a pad of some sort), and another one in your device.

When you place a device on a wireless charging pad, a small coil in the device harvests energy from the magnetic field and uses it to charge the battery. It looks something like this:

Coils in electric circuits are usually circular or cylindrical windings designed to produce a magnetic field. Example of a charger coil. Image credits: Vishay Intertechnology.

It all works thanks to the wonders of physics. Alternating current is sent to the induction coil inside the charger. The moving electric charge creates a magnetic field. The magnitude of alternating current is always fluctuating up and down, which also makes the magnetic field fluctuate in strength.

This happens in the charger coil. Then, this alternating magnetic field gets picked up by the coil in your device, which creates a secondary alternating electric current.

Batteries can only work with direct current, so this alternating current must then be passed through a rectifier, where it is transformed into direct current — now, finally, it can be used to charge up the battery.

Alternating current (AC) vs Direct current (DC).
Alternating electric current flows through the solenoid, producing a changing magnetic field. This field causes an electric current to flow in a wire loop by electromagnetic induction. Image credits: Ponor / Wiki Commons.

Wireless charging is sometimes also called inductive charging, because the energy is transferred through inductive coupling. Electromagnetic induction and inductive coupling are widely used in devices like electric motors and generators.
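
To put a rough number on the induction step, here is a small worked example based on Faraday’s law: the voltage induced in the receiving coil is proportional to how quickly the transmitter current changes. The mutual inductance, drive frequency, and current below are assumed illustrative values, not the specifications of any particular charger.

    # Rough worked example of inductive coupling (Faraday's law).
    # All component values are illustrative assumptions, not real charger specs.
    import math

    mutual_inductance = 5e-6   # H, assumed coupling between charger and device coils
    drive_frequency = 150e3    # Hz, roughly the range Qi-style chargers use (assumption)
    peak_current = 1.0         # A, assumed peak current in the transmitter coil

    # For a sinusoidal drive i(t) = I_peak * sin(2*pi*f*t), the induced EMF is
    # e(t) = M * di/dt, so its peak magnitude is M * 2*pi*f * I_peak.
    peak_emf = mutual_inductance * 2 * math.pi * drive_frequency * peak_current
    print(f"Peak voltage induced in the receiver coil: {peak_emf:.2f} V")
    # This AC voltage is then rectified to DC before it reaches the battery.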

Modern wireless charging

The origins of induction go back to Michael Faraday’s experiments in 1831. James Clerk Maxwell later described the phenomenon mathematically as Faraday’s law of induction, and the resulting equations are among the essential tenets of electromagnetism.

Nikola Tesla famously managed to transmit electricity through the air using resonant-inductive coupling, but the technology was inefficient and wasted a lot of energy. Inductive power transfer for vehicles was first proposed in 1894, when M. Hutin and M. Le-Blanc described an apparatus and method to power an electric vehicle. They were well ahead of their time, and combustion engines proved more popular for the following hundred years — though that’s changing again nowadays.

The 1980s proved to be a pivotal point for wireless charging. Several research groups from California, France, and Germany created buses that could be charged wirelessly. Although the technology didn’t receive all that much attention at the time, it paved the way for what we have now.

In 2006, researchers at MIT demonstrated resonant coupling, which allows substantial amounts of power to be sent over a few meters without radiative losses. This was a turning point for commercial devices. Just two years later, in 2008, the Wireless Power Consortium was established, and in 2010 it released the Qi charging standard — today the most common wireless charging protocol. Wireless charging has since become fairly common for small consumer electronics such as smartphones and electric toothbrushes, but also for larger devices such as electric cars.

Smartphones and smartwatches are already routinely charged with wireless equipment. Because magnetic fields weaken very quickly with distance, the charger needs to be very close to the device.

The advantages of wireless charging

Wireless charging is preferable to conventional charging for a number of reasons. It protects the connections around devices from water, oxygen, and mechanical damage, as the electronics are enclosed. There’s no more risk of loosening and damaging the socket on your device.

There’s also the big advantage of not cluttering the place with more cables and wires, which makes it a bit more convenient. And there’s no more risk of the charging cable getting broken since, well, there is no charging cable.

For electric cars, it’s a really nifty thing because you can just park your car above a charging unit, without needing to plug it in. Inductive charging systems can also be operated automatically, without needing people to plug and unplug, which not only saves time but leads to improved reliability.

Electric car wireless parking charge closeup at the 2011 Tokyo Motor Show.

It seems like wireless charging offers a lot of advantages. However…

The disadvantages of wireless charging

The main disadvantages of wireless charging are time and money: wireless charging is slower (around 15% slower when supplied the same power), and chargers are also more expensive, as they require more complex components.

But there are other inconveniences as well. For starters, you can’t really move the device around while it’s being charged — it needs to stay there. There’s also the problem of standards: relatively few devices are compatible with inductive chargers, although this is starting to change.

Wireless charging is also less efficient. Some of the charging energy is transformed into heat, which can make devices hotter, and in time, can result in battery damage. Newer approaches are reducing some of these problems through the use of special, ultra-thin coils that work at higher frequencies. It’s quite possible that some of these downsides can be entirely overcome within a couple of years, making wireless chargers even more competitive and attractive for consumers.

Wireless charging is here to stay — and it’s already changing

An online electric vehicle (OLEV) is an electric vehicle that charges wirelessly while moving using electromagnetic induction (the wireless transfer of power through magnetic fields). It functions by using a segmented “recharging” road that induces a current in “pick-up” modules on the vehicle.

For instance, researchers in Korea have already developed an electric transport system in which cables beneath the road surface charge the car as it moves — essentially meaning you would never need to stop and charge, provided coverage is sufficient. It’s like using the road as your charger.

Another important application is in the medical sector, which can now use implants and sensors without worrying about how to charge them. The future of wireless charging seems promising, but these are still early days.

It’s not perfect and there’s still plenty of room for improvement, but it’s unlikely that wireless charging will go away anytime soon. It remains to be seen just how popular and widespread it becomes in the coming years, but the technology is promising and can be ramped up in more ways than you’d probably imagine.

No green thumb required: Open-source robots can now grow a small farm for you

Image credits: FarmBot.

If you’ve always wanted to grow your own fruits and veggies but could never quite make the time for it — technology is here to rescue you.

At first glance, technology and farming don’t go hand in hand, but that’s old school thinking. In this day and age, technology and farming are a perfect match. With cheap sensors, simple phone apps, and available equipment, you can build your very own farming robot. 

FarmBot, enter the stage

Give it power, water, and WiFi, and it will take care of the rest. FarmBot can plant, water, weed, and monitor the soil and plants with an array of sensors. All you need to do is harvest the produce once it’s done.

Soil moisture sensor and watering heads are shown here. Image credits: FarmBot.

FarmBot is an open-source robot developed by the eponymous company. It runs on custom, extensible tracks, and uses game-like open-source software.

Everything is customizable and adaptable. You design your patch and drop plants onto a virtual map of your plot. The seeds are spaced automatically, and you can apply different growing plans. It can be controlled from a phone, tablet, or computer.

Image credits: FarmBot.

FarmBot is an example of precision farming — a set of tools and techniques that lets farmers optimize their resources and increase yields while also being more sustainable. Think of a soil humidity sensor that tells you when it’s time to water the plants, or a nutrient detector that flags which areas (if any) need more nutrients.

Back in the day, precision farming required heavy and expensive machinery. But recently, the miniaturization of sensors, coupled with the advent of smartphones, the internet, and apps, has made it much more accessible. FarmBot is taking that idea and applying it — no green thumb required.

The best part about it is that it’s open-source, which means that everyone from the community can customize it, adapting it for various setups and equipment.

The catch

I like the FarmBot idea. I really do — it’s great! But boy, it’s expensive. After a successful Kickstarter campaign, the design sells for over $3,000 — which, for a patch this size, likely means it won’t repay its cost for years (if ever).

Image credits: FarmBot.

If you’re buying something like this though, you’re probably not doing it to earn a buck. There’s a distinct pleasure in eating food that you’ve grown, and the pleasure is arguably even greater when the robot does most of the work for you.

Still, at this price, the likely target audience is restricted to well-off urbanites. However…

The counter catch

As previously mentioned, what’s really great about FarmBot is that it’s open-source. The folks behind it have published detailed documentation on how to assemble the robot, get it working, and augment or customize it to your needs.

“This opens up a world of opportunities for students to explore fields like coding, makers to modify their FarmBot with 3D printing, and scientists to take full advantage of the platform,” the website reads.

In other words, if you have some maker experience (or are simply willing to dive into this world), you can build your own robot. In fact, there are plenty of resources online showing you how to build a smart farming system. Here are just a few examples. The FarmBot itself uses Arduino and Raspberry Pi — two favorites of DIY makers.
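
If you want a taste of what that DIY route looks like, here is a minimal, hypothetical Raspberry Pi sketch: a cheap soil-moisture module with a digital output drives a relay-controlled pump. The GPIO pin numbers, the sensor’s wet/dry polarity, and the watering times are all assumptions you would adjust for your own hardware — this is not FarmBot’s own code.

    # Minimal DIY irrigation sketch for a Raspberry Pi (not FarmBot's own code).
    # Pin numbers, sensor polarity, and timings are assumptions for illustration.
    import time
    import RPi.GPIO as GPIO

    MOISTURE_PIN = 17    # digital output of a cheap soil-moisture module (assumed wiring)
    PUMP_RELAY_PIN = 27  # relay driving a small water pump (assumed wiring)

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(MOISTURE_PIN, GPIO.IN)
    GPIO.setup(PUMP_RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

    try:
        while True:
            # Many hobby modules pull their output HIGH when the soil is dry;
            # check your module's documentation, as polarity varies.
            soil_is_dry = GPIO.input(MOISTURE_PIN) == GPIO.HIGH
            if soil_is_dry:
                GPIO.output(PUMP_RELAY_PIN, GPIO.HIGH)  # run the pump
                time.sleep(10)                          # water for 10 seconds
                GPIO.output(PUMP_RELAY_PIN, GPIO.LOW)
            time.sleep(30 * 60)                         # re-check every 30 minutes
    finally:
        GPIO.cleanup()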

Ultimately, this could be useful for a number of different communities, whether it’s students who would like to learn a practical application for coding or electronics, people who are really into growing their own produce, or those who just want to add a little pizzazz to their farming — to give just a few examples. Even for those whose livelihoods depend on farming, systems like this one can make a big difference, helping them manage their land a bit more effectively.

So, if you like FarmBot and can afford one, that’s great, go for it! If you can’t, you can still get into the world of maker precision farming with a far smaller investment. You can probably get started for around $100, and then decide if you want to explore it further.

Eco-friendly geometry: smart pasta can halve packaging waste at no extra cost

Pasta comes in a variety of shapes and sizes — from the plain and simple to all sorts of quirky spirals. But for the most part, these shapes have one thing in common: they don’t use space very effectively. A new study may change that.

Researchers from Carnegie Mellon University have found a way to change that, designing new types of pasta that use less packaging and are easier to transport, reducing both transportation emissions and packaging plastic.

Unconventional pasta shapes use up less space but spring to life in water. Image credits: Morphing Matter Lab. Carnegie Mellon University.

Pasta is big business. In 2019, nearly 16 million tons of pasta were produced in the world — up from 7 million tons produced 20 years ago. That adds up to billions of packets that are transported, stored, and ultimately discarded across the world.

Since pasta often comes in such odd shapes, pasta packages often end up with a lot of wasted space, which also has to be transported and stored. Using less space means fewer trucks driving across states and less plastic.

Carnegie Mellon University’s Morphing Matter Lab director Lining Yao had an idea on how that could be reduced — with a bit of help from an old friend: geometry.

“By tuning the grooving pattern, we can achieve both zero (e.g., helices) and nonzero (e.g., saddles) Gaussian curvature geometries,” the study reads. It then goes on to translate what this means. “This mechanism allows us to demonstrate approaches that could improve the efficiency of certain food manufacturing processes and facilitate the sustainable packaging of food, for instance, by creating morphing pasta that can be flat-packed to reduce the air space in the packaging.”

They started out with a computer simulation to see how different shapes would achieve the goal, trying various designs including helixes, saddles, twists, and even boxes. After settling on a few efficient shapes, they put them to the boiler test — quite literally.

Flat-packed pasta before and after boiling. Image credits: Morphing Matter Lab. Carnegie Mellon University.

Speaking to Inverse, Yao says wasted space could be reduced by 60% by flat-packing pasta — and that’s just the start of it. The method could also be used for things like wagashi or gelatin products. The method could also be used to design more complex and fancy shapes for special occasions. In dry form, a piece of pasta could look like a disc, but when boiled, it could become a rose flower.

Credits: Carnegie Mellon University.
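
As a rough illustration of where the packaging savings come from, here is a small geometric estimate comparing the bounding box of a helical noodle with that of the same noodle shipped as a flat strip. The dimensions below are made up for illustration; the 60% figure above comes from the researchers, not from this calculation.

    # Rough geometric illustration of flat-packing savings (illustrative dimensions only).
    import math

    strand_width = 0.004      # m, assumed noodle width
    strand_thickness = 0.001  # m, assumed dry thickness
    helix_radius = 0.006      # m, assumed radius of a helical (fusilli-like) shape
    helix_pitch = 0.008       # m, assumed rise per turn
    turns = 4

    # Length of the strand if unrolled into a flat strip.
    strip_length = turns * math.hypot(2 * math.pi * helix_radius, helix_pitch)

    # Bounding-box volumes: the helix fills a squat prism, the strip a thin slab.
    helix_box = (2 * helix_radius + strand_width) ** 2 * (turns * helix_pitch + strand_width)
    strip_box = strip_length * strand_width * strand_thickness

    print(f"Helix bounding box:      {helix_box * 1e6:.2f} cm^3")
    print(f"Flat strip bounding box: {strip_box * 1e6:.2f} cm^3")
    print(f"Space saved per noodle:  {(1 - strip_box / helix_box) * 100:.0f}%")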

However, there are limitations to the study. Flour dough is known to be a complex material. It can have different proportions of water, starch, gluten, fiber, and fat. Flour dough also has variable, nonlinear properties, which make it hard to anticipate how different types of pasta would behave — this was just a proof of concept.

Researchers recommend more quantitative models of assessment to see how different materials with complicated groove shapes and patterns would behave.

The study was published in Science.

Ukraine seizes spirit made from apples grown near the Chernobyl nuclear site

Would you drink an “artisanal spirit” made from apples grown near the Chernobyl nuclear power plant? A group of researchers from the United Kingdom has just finished producing the first 1,500 bottles. They assure us the drink is completely safe and radiation-free and hope to get it soon on the UK market.

But there’s a problem. The Ukrainian government just seized it all.

Image credits: Chernobyl Spirit Company

The bottles are now in the hands of prosecutors who are investigating the case. The researchers argue they are wrongly accused of using forged Ukrainian excise stamps.

Atomik!

The Chernobyl Spirit Company aims to produce high-quality spirits made with crops from the nuclear disaster exclusion zone — an area of more than 4,000 square kilometers around the Chernobyl nuclear power plant that was abandoned due to fears of radioactive contamination after the devastating nuclear accident there in 1986.

The event is considered the world’s worst nuclear disaster and exposed millions of people to dangerous radiation levels in large swathes of Ukraine and neighboring Belarus. Jim Smith, a UK researcher, has spent years studying the transfer of radioactivity to crops within the main exclusion zone, alongside a group of researchers.

They have grown experimental crops to find out if grain, and other food that is grown in the zone, could be used to make products that are safe to consume, hoping to prove that land around the exclusion zone could be put back to productive use. This would allow communities in the area to grow and sell produce, something that’s currently illegal due to fears of spreading radiation.

Image credits: Chernobyl Spirit Company

In 2019, Smith and his team launched the first experimental bottle of “Atomik,” a spirit made with produce from the Chernobyl Exclusion Zone. Since then, they have been working with the Palinochka Distillery in Ukraine to develop small-scale experimental production, using apples from the Narodychi District – an area that remained inhabited after the nuclear accident.

“There are radiation hotspots [in the exclusion zone] but for the most part contamination is lower than you’d find in other parts of the world with relatively high natural background radiation,” Smith told the BBC. “The problem for most people who live there is they don’t have the proper diet, good health services, jobs or investment.”

The drink was initially produced using water and grain from the Chernobyl exclusion zone, but the researchers have now adjusted the recipe and incorporated the apples. It’s the first consumer product to come from the abandoned area around the damaged nuclear power plant, they argue, and they are excited about the opportunities it represents.

The aim of selling the drink, Smith explains, is to enable the team to distribute most of the money to local communities. The rest will be reinvested in the business, as Smith hopes to provide the team with an income to work on the project. The most important thing for the area now is economic development, not radioactivity, he argues.

The researchers are now working hard to get the shipment released. Elina Smirnova, the lawyer representing them in court, said in a statement that the seizure was in violation of Ukrainian law, and accused the authorities of targeting “a foreign company which has tried to establish an ethical ‘white’ business to primarily help Ukraine.”