Tag Archives: Technology

The swarm is near: get ready for the flying microbots

Imagine a swarm of insect-sized robots capable of covertly recording criminals for the authorities or searching for survivors caught in the ruins of unstable buildings. Researchers worldwide have been quietly working toward this but have been unable to power these miniature machines — until now.

A 0.16 g microscale robot that is powered by a muscle-like soft actuator. Credit: Ren et al (2022).

Engineers from MIT have developed powerful micro-drones that can zip around with bug-like agility, which could eventually perform these tasks. Their paper in the journal Advanced Materials describes a new form of synthetic muscle (known as an actuator) that converts energy sources into motion to power these devices and enable them to move around. Their new fabrication technique produces artificial muscles that dramatically extend the microbot's lifespan while boosting its performance and payload capacity.

In an interview with Tech Xplore, Dr. Kevin Chen, senior author of the paper, explained that they have big plans for this type of robot:

“Our group has a long-term vision of creating a swarm of insect-like robots that can perform complex tasks such as assisted pollination and collective search-and-rescue. Since three years ago, we have been working on developing aerial robots that are driven by muscle-like soft actuators.”

Soft artificial muscles contract like the real thing

Your run-of-the-mill drone uses rigid actuators to fly, as these can be driven with more voltage or power, but robots at this miniature scale couldn't carry the heavy power supply that requires. So-called 'soft' actuators are a far better solution, as they're far lighter than their rigid counterparts.

In their previous research, the team engineered microbots that could perform acrobatic movements mid-air and quickly recover after colliding with objects. But despite these promising results, the soft actuators underpinning these systems required more electricity than could be supplied, meaning an external power supply had to be used to propel the devices.

“To fly without wires, the soft actuator needs to operate at a lower voltage,” Chen explained. “Therefore, the main goal of our recent study was to reduce the operating voltage.”

In this case, the device would need a soft actuator with a large surface area to produce enough power. However, it would also need to be lightweight so a micromachine could lift it.

To achieve this, the group opted for soft dielectric elastomer actuators (DEAs) made from layers of a flexible, rubber-like solid known as an elastomer, whose polymer chains are held together by relatively weak bonds – permitting it to stretch under stress.

The DEAs used in the study consist of a long piece of elastomer, only 10 micrometers thick (roughly the diameter of a red blood cell), sandwiched between a pair of electrodes. These, in turn, are wound into a 20-layered 'tootsie roll' to expand the surface area and create a 'power-dense' muscle that deforms when a voltage is applied, similar to how human and animal muscles contract. In this case, the contraction causes the microbot's wings to flap rapidly.

A microbot that acts and senses like an insect

A microscale soft robot lands on a flower. Credit: Ren et al (2022).

The result is an artificial muscle that forms the compact body of a robust microrobot that can carry nearly three times its weight (despite weighing less than one-quarter of a penny). Most notably, it can operate with 75% lower voltage than other versions while carrying 80% more payload.

They also demonstrated a 20-second hovering flight, which Chen says is the longest recorded by a sub-gram robot. The actuator was still working smoothly after 2 million cycles, far outpacing the lifespan of other models.

“This small actuator oscillates 400 times every second, and its motion drives a pair of flapping wings, which generate lift force and allow the robot to fly,” Chen said. “Compared to other small flying robots, our soft robot has the unique advantage of being robust and agile. It can collide with obstacles during flight and recover, and it can make a 360-degree turn within 0.16 seconds.”

The DEA-based design introduced by the team could soon pave the way for microbots that work using untethered batteries. For example, it could inspire the creation of functional robots that blend into our environment and everyday lives, including those that mimic dragonflies or hummingbirds.

The researchers add:

“We further demonstrated open-loop takeoff, passively stable ascending flight, and closed-loop hovering flights in these robots. Not only are they resilient against collisions with nearby obstacles, they can also sense these impact events. This work shows soft robots can be agile, robust, and controllable, which are important for developing next generation of soft robots for diverse applications such as environmental exploration and manipulation.”

And while they’re thrilled about producing workable flying microbots, they hope to reduce the DEA thickness to only 1 micrometer, which would open the door to many more applications for these insect-sized robots.

Source: MIT

Extremely efficient microprocessors can make your computer more eco-friendly

Researchers in Japan have developed a new type of superconductor microprocessor that uses far less energy than today’s microprocessors. That’s good news for you and for the planet.

The diagram for the world’s first adiabatic superconductor processor.

Our use of computers and smartphones has grown tremendously in recent years. It's hard to even imagine what life would be like nowadays without these devices (which are used not just for communication and entertainment, but also play important economic roles).

But this has come at a cost: not just the materials we use to create these devices, but also the electricity they consume. That consumption has grown to the point where data centers are being built near lakes and rivers to help cool them down.

Around 10% of the global use of electricity goes to electronic communications, researchers say, and that figure is only expected to grow.

“The digital communications infrastructure that supports the Information Age that we live in today currently uses approximately 10% of the global electricity. Studies suggest that in the worst-case scenario, if there is no fundamental change in the underlying technology of our communications infrastructure such as the computing hardware in large data centers or the electronics that drive the communication networks, we may see its electricity usage rise to over 50% of the global electricity by 2030,” says Christopher Ayala, an associate professor at Yokohama National University, and lead author of the study.

To tackle this issue, a team of researchers set out to design an extremely efficient structure with the dazzling name of adiabatic quantum-flux-parametron (AQFP). In thermodynamics, something is “adiabatic” if it occurs without transferring heat or mass to the surroundings. A quantum-flux-parametron is essentially a digital logic system based on superconductivity.

Armed with this efficient AQFP, the team used it as a building block for low-power, high-performance microprocessors, demonstrating their new processor in a new paper. It's about 80 times more energy-efficient than today's semiconductor devices, and scalable, the team explains.

“These demonstrations show that AQFP logic is capable of both processing and memory operations and that we have a path toward practical adiabatic computing operating at high-clock rates while dissipating very little energy,” the study authors write.

The demonstration shows that the AQFP is capable of “all aspects of computing”, Ayala explains — namely data processing and data storage. It can be clocked up to 2.5 GHz, making it comparable to today’s existing technologies. “We even expect this to increase to 5-10 GHz as we make improvements in our design methodology and our experimental setup,” Ayala said.

But there’s a tiny catch: superconductors need freezing cold temperatures to operate. This means that the chips would, by default, need more power for cooling. As it turns out, even when you factor in this extra power, the devices are still more efficient.

“The AQFP is a superconductor electronic device, which means that we need additional power to cool our chips from room temperature down to 4.2 Kelvin to allow the AQFPs to go into the superconducting state. But even when taking this cooling overhead into account, the AQFP is still about 80 times more energy-efficient when compared to the state-of-the-art semiconductor electronic devices found in high-performance computer chips available today.”

Of course, there are still major challenges. For instance, price remains a big issue, and may very well be the ultimate constraint that dictates whether the technology will catch on or not. For now, researchers are working on bringing the technology from a working prototype to a more scalable and faster design, something that can compete with or even surpass existing technology.

“We are now working towards making improvements in the technology, including the development of more compact AQFP devices, increasing the operation speed, and increasing the energy-efficiency even further through reversible computation,” Ayala said. “We are also scaling our design approach so that we can fit as many devices as possible in a single chip and operate all of them reliably at high clock frequencies.”

The study has been published on IEEE Xplore.

Google publishes user location data to help governments tackle coronavirus

The fight against the global pandemic has brought us to an important crossroads: we've seen that surveillance can be used to understand the outbreak and enforce an efficient quarantine, but on the other hand, how much information about our location and habits do we really want to give companies and governments? It's a decision that should not be taken lightly and may be consequential for decades.

For better or for worse, we already provide a ton of our information to tech companies — willingly. The data we give to Google Maps, for instance, can show us how the world is responding to the quarantine.

Using this information in an anonymized fashion can be instrumental in helping decision-makers see how the quarantine is being respected and how it is affecting communities. In turn, this can be used to make more efficient plans for the long fight against COVID-19 that lies ahead.

Mobility trends in the USA — how much people are traveling outside of their homes. The drop is recent and relatively small compared to other countries battling the pandemic.

If your GPS is on and sharing data with Google Maps, the app has a general idea not only of where people are and what they are doing, but also of what kinds of places they like to go to. It's not perfect, but Google Maps has a pretty good understanding of what different places are, whether they're a restaurant, a supermarket, or something else. Many smartphone owners share this information without a second thought.

This can have some advantages. In normal times, Google Maps might recommend a bar near us that we’d enjoy, or tell us that the street to our favorite supermarket is closed for renovation. In the current pandemic times, this information can be used to gather valuable insights about how much people are socially distancing and how a community is affected by the quarantine.

Google has recently announced that it will publish national mobility reports for most of the countries on the planet, to aid governments attempting to draft COVID-19 policies.

“We hope these reports will help support decisions about how to manage the COVID-19 pandemic,” the Google execs said.

“This information could help officials understand changes in essential trips that can shape recommendations on business hours or inform delivery service offerings.”

The data is anonymized and presented in aggregated form. No "personally identifiable information," such as an individual's location, contacts or movements, will be made available, the post said. To make sure that no identifiable information remains in the data, the reports also use a statistical noise-adding technique, making it harder to single out any individual aspect of the data (e.g., a big mall in a relatively small community).
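The noise-adding step described here is, in broad strokes, the differential-privacy approach: before a count is published, random noise drawn from a Laplace distribution is added to it, so no single person's presence can be inferred from the published figure. Below is a minimal sketch (illustrative only; the function name and the `scale` value are assumptions, not Google's actual implementation):

```python
import math
import random

def add_laplace_noise(count, scale=2.0):
    """Return `count` plus Laplace-distributed noise.

    `scale` sets the privacy/accuracy trade-off: a larger scale adds
    more noise, making individual contributions harder to single out.
    """
    # Inverse-CDF sampling: map a uniform draw on (-0.5, 0.5)
    # to a Laplace(0, scale) variate.
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return count + noise
```

Because the noise is symmetric around zero, averages over many published values stay close to the truth, while any single value is too fuzzy to reveal an individual.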

The mobility trends in Spain present a far more compelling picture. Changes for each day are compared to a baseline value for that day of the week, Google writes. The baseline value is chosen as a median for the day of the week.

For instance, trends display “a percentage point increase or decrease in visits” to locations like parks, shops, homes and places of work, not “the absolute number of visits,” said the post, signed by Jen Fitzpatrick, who leads Google Maps, and the company’s chief health officer Karen DeSalvo.
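The baseline computation described above can be sketched in a few lines (a hypothetical illustration with made-up data structures, not Google's pipeline): take the median visit count for each day of the week over a reference window, then express each day's count as a percent change from that baseline.

```python
from datetime import date
from statistics import median

def day_of_week_baseline(baseline_counts):
    """Median visit count per weekday over a reference period.

    baseline_counts: dict mapping datetime.date -> visit count.
    Returns a dict mapping weekday index (0 = Monday) -> median count.
    """
    by_weekday = {}
    for day, count in baseline_counts.items():
        by_weekday.setdefault(day.weekday(), []).append(count)
    return {wd: median(counts) for wd, counts in by_weekday.items()}

def mobility_trend(day, count, baseline):
    """Percent change in visits vs. the baseline for that day of the week."""
    base = baseline[day.weekday()]
    return 100.0 * (count - base) / base
```

Comparing each day against the median for that weekday, rather than a flat average, keeps ordinary weekly rhythms (busy Saturdays, quiet Sundays) from being mistaken for pandemic effects.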

The data is telling. In France, for instance, retail and recreation visits have dropped by 88%. Initially, local shops saw an increase of 40%, as people preferred local shops to farther-away supermarkets — but after this initial surge, local shops also dropped by 72%.

Of course, user tracking opens up a major can of worms that was problematic even before the pandemic. Now, given the current strain, multiple technology firms have begun sharing anonymized smartphone data to better track the outbreak. In several countries (including Taiwan and South Korea), this type of surveillance was also enforced, and we’re seeing some European countries considering it as well.

We've already seen, in countries such as Hungary or Israel, that the pandemic can be used to erode democracy and empower authoritarian leaders. Data harvesting and intrusion could bring lasting harm to privacy, digital rights, and human rights.

Multiple researchers and advocacy groups have warned against offering tech companies and governments too much surveillance power. Of course, we need all the help we can get against this invisible threat, but as a society, we must ensure that we don’t offer up our privacy on a silver platter.

Singapore sends daily WhatsApp updates on the coronavirus

The story of how a small government unit developed a simple, clear messaging system — and helped Singapore’s citizens better understand the COVID-19 outbreak.

The team behind the information project. Image credits: Gov Insider Asia.

It's not just a pandemic — it's also an infodemic

There are many things we still don’t know about the disease, the virus, and the best course of action. But if there’s one thing we do know for sure it’s that accurate, reliable information is crucial — and communicating it is equally crucial.

We’ve seen in recent times that misinformation can spread like wildfire. As the novel coronavirus pandemic was taking shape, so too was an infodemic. Information was spreading faster than the virus itself, and not all of it was accurate.

Singapore is one of the countries that has managed the COVID-19 situation best, at least so far. It quickly understood the power of clear communication, and the importance of using the right channels to deliver it.

See, having ministers or presidents hold daily briefings is an excellent start — it’s absolutely crucial in a time of crisis. But it’s not enough.

It can take hours for the information to spread from the politicians to the media, and for the public to finally absorb the information. So instead, the government took to social media.

Credits: Gov.Sg

Of course, using social media in an official capacity is always challenging, so the government asked Open Government Products, a team within its GovTech agency, to design a clear and simple solution.

WhatsApp has the highest penetration among social messaging apps in the nation, and it is also the platform hosting the most misinformation, so that's where the efforts were focused.

Singapore has been using WhatsApp for government updates since October 2019, but the system had never truly been tested, and needed to be tweaked for the purpose.

“While the Ministry of Communications and Information had an existing citizen notification system, it was for a very different use case and not built for this kind of scale and time sensitivity,” says Sarah Espaldon, Operations Marketing Manager from Singapore’s Open Government Products.

Singapore has the added difficulty of having four official languages: Chinese, English, Malay and Tamil. The government used an AI tool to rapidly translate the material from English so that every community can access the information with ease.

The AI was quickly trained with medical terms and other specific phrasings that the government might use in its communication. The channel was also extremely easy to sign up for.

The channel sent out regular updates. Under the initial settings, it took several hours to reach the entire subscriber list (which quickly surpassed 500,000 people). But the issue was quickly addressed so that the entire list could receive the updates within 30 minutes.

It’s a simple tool, but having access to a reliable source of information on a channel that people are familiar with should not be underestimated, particularly in challenging times.

This isn't the first remarkable innovation from Singapore's programmers. A month ago, Singapore's health tech agency adapted a commercial scanning device into an AI tool that scans crowds of people and flags anyone with a high temperature — and they did it within two weeks.

It’s important to keep in mind that such solutions need to be culturally appropriate. For instance, the same team has built two other tools to help agencies ensure compliance with the quarantine policies.

“The tool sends out SMSes at randomised timings throughout the day, and participants click on a unique link provided in the SMS which then reports their current location through a web app,” says Li Wei Loh, Product Manager.

It’s hard to see something like this applied in Europe or the US, but an official messaging platform, be it on WhatsApp or something else, can definitely help keep the people informed quickly.

European Union wants phones to have a universal charger

We’ve all experienced it at one point, either directly or with a friend: you want to charge your phone, but you don’t have the right charger. Whether you have an iPhone, an Android, or something else — different phone chargers are a drag.

Now, the European Union has voted overwhelmingly in favor of introducing a single universal charger for all mobile phones sold on the continent.

“A common charger should be developed for all mobile phones sold in the EU, to reduce waste, costs and hassle for users,” Members of the European Parliament declared.

Less waste, less inconvenience

Discarded old chargers generate more than 51,000 tonnes of electronic waste per year, according to the E.U.'s assessment. That's not a trivial figure.

This alone would be reason enough to develop a single, standard charger. On top of that, the E.U. believes it would also improve consumers' lives: you would no longer need to go out and buy a new charger every time you upgrade or change your phone.

So what would the charger look like? Currently, the frontrunner is the micro-USB connector, which is already used in plenty of devices (mostly Android devices, although some models are switching away from it). It's unclear if this will only apply to smartphones or if it will also extend to tablets and other mobile devices, and it's also unclear when the directive will enter into force. When it does, countries will have two years to make the switch.

But if the vote is any indication, the switch will happen.

The draft law was approved by 550 votes to 12, with 8 abstentions — a crushing result with votes of all political colors. The economic assessment also found that this won’t affect consumers in any significant negative way. It’s the manufacturers and resellers of chargers that will be affected. In addition, this will combat market fragmentation and increase competition in the sector, which usually means better, more reliable products for consumers in the long run.

Opposition from Apple

The main opposition to this approach comes from Apple, which has made it one of its central goals to develop its own proprietary technology for pretty much everything, including chargers.

Chargers have long been a thorny issue for Apple: they're expensive, often flimsy, and have been largely surpassed by other options on the market — especially USB-C.

Micro-USB and USB-C are the two candidates for the standard.

Apple has opposed this proposal, arguing that people have already bought chargers and forcing them to switch would be negative for the environment.

“More than 1 billion Apple devices have shipped using a Lightning connector in addition to an entire ecosystem of accessory and device manufacturers who use Lightning to serve our collective customers. We want to ensure that any new legislation will not result in the shipment of any unnecessary cables or external adaptors with every device, or render obsolete the devices and accessories used by many millions of Europeans and hundreds of millions of Apple customers worldwide. This would result in an unprecedented volume of electronic waste and greatly inconvenience users,” the Apple statement reads.

But while Apple's point is worth considering, it appears rather short-sighted. The waste initially generated by rendering some chargers obsolete would be offset by savings in the long run.

Apple also argued that it is working alongside “six companies” to develop a common standard based on USB-C.

But its point is contradicted by the European Environmental Citizens’ Organisation for Standardisation, which argues that “voluntary agreements (VAs) have proven to be ineffective in fostering market change and a single competitive market”. The organization also suggests other amendments, such as making the charger and the cable separate, so that waste is further limited. It also recommends using USB-C as an alternative to micro-USB.

Nevertheless, establishing this sort of universal standard is always complicated. What if some better technology appears along the way? Changing the standard would no doubt be difficult, costly, and wasteful. And if wireless chargers become commonplace, how would the standard apply? The EU's decision has the potential to backfire if not drafted carefully.

The impact on innovation, as Apple rightfully points out, also needs careful consideration. But the EU seems determined to make the move nonetheless. We shall see how it plays out.

First was the genome. Now, it’s time for the screenome

All of us have a human genome, which is basically a composite of our genes. But we also have a “screenome,” a composite of our digital lives, according to a group of researchers from the United States. Their goal is to make sense of how the screens in our lives are affecting us.

Credit Wikipedia Commons

Nearly two decades ago, the Human Genome Project identified and mapped all of the genes of the human genome. In a nod to that effort, academics Byron Reeves, Thomas Robinson and Nilam Ram created the concept of the 'screenome' to describe the entity formed by all the digital activity individuals subject themselves to.

The three argued that everything we know about the effects of media use on individuals and societies could be incomplete, irrelevant or wrong. We are all doing more online and as this expanding form of behavior is digitalized, it is open to all forms of manipulation, they said.

In a comment article in the latest edition of the journal Nature, the authors argued that a large-scale analysis of detailed recordings of digital life could provide far greater insights than simply measuring screen time. Americans now spend over half of their day interacting with digital media.

The academics said most of the thousands of studies investigating the effects of media over the past decade used people’s estimates of the amount of time they spend engaging with technologies or broadly categorized platforms such as ‘smartphone’, ‘social media’ or ‘entertainment media’.

Yet the range of content has become "too broad, patterns of consumption too fragmented, information diets too idiosyncratic, experiences too interactive, and devices too mobile" for such simplistic characterization. Technologies now available can "allow researchers to record digital life in exquisite detail," they said.

“Digital life is life these days. As we spend more of our life on our devices, so more of our life is expressed through these screens. This gives us a tremendous opportunity to learn about all aspects of human behaviour,” said Robinson to the Australian Financial Review.

Screenomics – the new tech

Tracking our digital life has become much easier. Instead of using a range of devices for different things, applications have been consolidated into smartphones and other mobile devices. At the same time, there are now tools available to see what people are doing on their screens.

The researchers are using so-called screenomics technologies to observe and understand our digital lives, minute by minute. The result of their initial work is a call for the Human Screenome Project, a collection of large-scale data that will inform knowledge of and solutions to a wide variety of social issues.

“Screenomics emerges from the development of systems for capturing and recording the details of individuals’ digital experiences,” said Ram to Penn News. “The system includes software that collects screenshots every five seconds on smartphones and laptop computers, extracts text and images, and allows analysis of the timing, content, function and context of digital life.”

In their article in Nature, the researchers outlined the possibilities of the technology. Over 600 participants have so far consented to use screenomics software on laptops and Android smartphones that were linked to the researchers’ secure computational infrastructure.

Participants then went about their daily lives while the system unobtrusively recorded their device use. In their initial analyses of these data, the researchers found that participants quickly changed tasks, approximately every 19 seconds on a laptop, and every 10 seconds on a smartphone.

All the information collected includes indicators of health and well-being and can be shared with larger interdisciplinary projects. Reeves, Robinson and Ram suggested that researchers wishing to study digital life could even create a repository that everyone can contribute to and use.

That type of large interdisciplinary project they call for would have far-reaching benefits for all areas of life touched by digital technology. “In the future, it might be possible for various apps to ‘interact’ with an individual’s screenome and to deliver interventions that alter how people think, learn, feel and behave,” said Ram.

Humans don’t need to understand what they’re doing to create new technology

Humanity didn't rise to where it is today because of its smarts alone — we got here because we had no issue copying whatever our neighbors invented.

Experimental setup.

Participants could modify the position of 4 weights attached to the wheel’s spokes, in an attempt to increase its speed along the sloping rail.
Image credits Maxime Derex.

An international research team including members from the University of Exeter, the Université Catholique de Lille, CNRS, and Arizona State University, say that new technology doesn’t necessarily hinge on competence. In fact, the creation of effective new technologies doesn’t even require that we understand them, they write.

Monkey see monkey do

It’s easy — and let’s admit it, pleasant — to believe that our success relies on us being smarter and more ingenious than other species. Our fancy tools allowed us to adapt to a variety of environments and out-compete native species, leading us to the world of today, so, naturally, they must be the product of a deeply capable species.

While that may be true, it’s not all brains, says the team. The functionality of many traditional technologies — the bow and arrow or kayaks for example — depends to some extent on parameters that are hard to understand or model even today. This makes some anthropologists suspect that technology arises from our propensity to copy other members of the group, not raw smarts. In such a system, small improvements to any existing technology will be selected for — similarly to biological evolution — eventually generating technologies that are effective despite not being understood by individuals.

The team tested this theory in the lab by asking students to optimize a wheel traveling down a set of rails. Each was allowed five attempts to produce the most effective configuration they could, before filling out a questionnaire that gauged their knowledge of the physics involved. To simulate successive generations of people, the team created 'chains' of students: each individual had access to the wheel configuration and effectiveness from the final two attempts made by the preceding participant.

The set-up did become more efficient (as judged by the wheel's speed) over the course of these simulated generations, the team reports. However, each individual's understanding of the physical mechanisms impacting its speed remained mediocre. This strongly suggests that the wheel's speed wasn't linked to the participants' level of understanding. Each student produced more or less random configurations, but the sum of their trials and errors — as well as the ability to copy the fastest known configuration from previous attempts — was enough to refine the 'technology' over time.
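The dynamic the team describes can be mimicked with a toy simulation (a hypothetical stand-in, not the study's apparatus: here the 'physics' is an arbitrary function of four weight positions that the simulated participants never inspect). Each generation blindly mutates the configurations it inherited and passes on its best two; the speed climbs even though nothing in the loop models why.

```python
import random

def wheel_speed(weights):
    """Stand-in for the wheel's physics: an arbitrary, hidden function
    mapping four weight positions (each in [0, 1]) to a speed."""
    w1, w2, w3, w4 = weights
    return 3.0 * w1 - (w2 - 0.4) ** 2 + w3 * w4

def mutate(weights, step=0.1):
    """Blind tweak: nudge each weight position at random, clamped to [0, 1]."""
    return [min(1.0, max(0.0, w + random.uniform(-step, step)))
            for w in weights]

def transmission_chain(generations=14, attempts=5, seed=0):
    """Each 'participant' inherits two configurations, makes `attempts`
    blind variations, and passes on the two fastest seen so far."""
    random.seed(seed)
    inherited = [[random.random() for _ in range(4)] for _ in range(2)]
    best_speeds = []
    for _ in range(generations):
        trials = [mutate(random.choice(inherited)) for _ in range(attempts)]
        pool = inherited + trials
        pool.sort(key=wheel_speed, reverse=True)
        inherited = pool[:2]  # only configurations are copied forward,
                              # never an understanding of wheel_speed
        best_speeds.append(wheel_speed(inherited[0]))
    return best_speeds
```

Running `transmission_chain()` yields a speed that never decreases across generations, mirroring the paper's point: selective copying alone refines the design, with no agent ever needing to grasp the underlying physics.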

The team also carried out a second experiment in which participants handed down their last two attempts to the following student. This included the system's set-up and a piece of text describing their theory on the wheel's effectiveness. Once again, the wheel would move faster over time, but the individuals were oblivious as to why. The team says this step shows how the transmission of false or incomplete theories could hinder or even prevent later generations from properly understanding the system, blinding them to part of the problem.

All in all, the experiments show how important cultural processes are in the emergence of complex tools, the team explains. Our ability to copy others lets us create technology that no single individual could generate on their own. The authors say the findings suggest we should be more reserved in interpreting archeological remains in terms of cognitive capacity, as their results show that the latter does not necessarily drive the former.

Paper DOI: http://dx.doi.org/10.1038/s41562-019-0567-9



Massive solar storms are naturally-recurring events, study finds — and we’re unprepared for them

Solar storms can be even more powerful than what our measurements so far have indicated — and we’re still very unprepared.


Image via Pixabay.

Although our planet’s magnetic field keeps us blissfully unaware of it, the Earth is constantly being pelted with cosmic particles. Sometimes, however — during events known as solar storms, caused by explosions on the sun’s surface — this stream of particles turns into a deluge and breaks through that magnetic field.

Research over the last 70 years or so has revealed that these events can threaten the integrity of our technological infrastructure. Electrical grids, various communication infrastructure, satellites, and air traffic can all be floored by such storms. We’ve seen extensive power cuts take place in Quebec, Canada (1989) and Malmö, Sweden (2003) following such events, for example.

Now, new research shows that we’ve underestimated the hazards posed by solar storms — the authors report that we’ve underestimated just how powerful they can become.

‘Tis but a drizzle!

“If that solar storm had occurred today, it could have had severe effects on our high-tech society,” says Raimund Muscheler, professor of geology at Lund University and co-author of the study. “That’s why we must increase society’s protection against solar storms.”

Up to now, researchers have used direct instrumental observations to study solar storms. But the new study reports that these observations likely underestimated how violent the events can become. The paper, led by researchers at Lund University, analyzed ice cores recovered from Greenland to study past solar storms. These cores formed over the last 100,000 years or so, and have captured evidence of storms over that time.

According to the team, the cores recorded a very powerful solar storm that occurred around 660 BCE. Also drawing on data recovered from the growth rings of ancient trees, the team pinpointed two further (and powerful) solar storms that took place in 775 and 994 CE.

These results show that, although rare, massive solar storms are a naturally recurring part of solar activity.

This finding should motivate us to review the possibility that a similar event will take place sooner or later — and we should prepare. Both the Quebec and Malmö incidents show how deeply massive solar storms can impact our technology, and how vulnerable our society is to them today.

“Our research suggests that the risks are currently underestimated. We need to be better prepared,” Muscheler concludes.

The paper “Multiradionuclide evidence for an extreme solar proton event around 2,610 B.P. (∼660 BC)” has been published in the journal Proceedings of the National Academy of Sciences.

Small teams are better at producing new ideas, new study finds

If you want to come up with a breakthrough, you’re probably better off working with a small team.

In almost all fields of science, it’s becoming more and more common to work in larger teams. This shift has been attributed to improvements in communication technology as well as the heavy specialization of most researchers. Simply put, if you want to solve complex, modern problems, then you need a large interdisciplinary team — or so you’d think.

James Evans and colleagues from the University of Chicago (a small team of three people) have carried out an analysis of over 65 million papers, patents, and software products over the past 50 years. They developed a metric to assess how a paper or product builds on previous work.

They describe a common trend: smaller teams tend to produce the disruptions of science and technology, often generating new ideas and opportunities, whereas larger teams tend to refine and develop existing ones. Small teams consist of 1-9 people, whereas big teams have 10 or more. Differences in topic and research design explain only a small part of this trend — no matter how you look at it, small teams bring innovations and disruptions more often than big teams.
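The authors’ metric is more elaborate, but the intuition behind disruption measures of this kind can be sketched with a simplified score (a toy formulation of my own, not necessarily the paper’s exact definition): a work is disruptive when the papers citing it ignore its own references, and consolidating when citers cite both it and its sources.

```python
def disruption_index(focal_refs, citing_papers):
    """Simplified disruption score for a focal work.

    focal_refs:    set of works the focal paper cites
    citing_papers: list of sets; each set holds the references of one
                   paper that cites the focal work

    Citers that ignore the focal paper's own references signal
    disruption (it eclipsed the prior work); citers that cite both
    signal consolidation. The score ranges from -1 to 1."""
    n_disrupt = sum(1 for refs in citing_papers if not (refs & focal_refs))
    n_consolidate = sum(1 for refs in citing_papers if refs & focal_refs)
    total = n_disrupt + n_consolidate
    return (n_disrupt - n_consolidate) / total if total else 0.0

# a disruptive paper: its citers no longer cite the work it built on
print(disruption_index({"A", "B"}, [{"X"}, {"Y"}, {"Z"}]))       # 1.0
# a consolidating paper: citers cite it alongside its own sources
print(disruption_index({"A", "B"}, [{"A"}, {"B"}, {"A", "B"}]))  # -1.0
```

Applied over millions of citation records, a score like this lets the researchers ask whether team size correlates with disruption rather than relying on anecdotes.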

Advocates of larger teams claim that only this approach can solve complex, interdisciplinary problems. However, while the professional benefits of working in a large team have been demonstrated, the efficiency of large teams remains surprisingly understudied — and there is little evidence to support the idea that larger teams are optimized for knowledge discoveries and technological disruptions.

Furthermore, researchers found that work from larger teams receives more attention, and often deals with more recent and popular topics. In contrast, smaller teams more often reach into the past to solve underlying problems and receive substantially less attention. There is a notable exception, though even this exception validates the other findings: Nobel-winning papers are generally written by small teams.

“We find that solo authors and small teams much more often build on older, less popular ideas,” researchers write. “Larger teams more often target recent, high-impact work as their primary source of inspiration and this tendency increases monotonically with team size.”

“Large teams receive more of their citations rapidly, as their work is immediately relevant to more contemporaries whose ideas they develop and audiences primed to appreciate them. Smaller teams experience a much longer citation delay […] and receive less recognition overall owing to the rapid decay of collective attention.”

It’s not just attention, either: smaller teams also tend to receive less proportional funding. Overall, the study paints a picture where the solo or small-team researcher is fading away, even though he or she is extremely important to the science ecosystem.

The results suggest that both small and large teams are essential to a flourishing ecology of science and technology. In order to achieve this, science policies should aim to support and fund diverse team sizes.

The study has been published in Nature.

Steam Power Might Help in Space Exploration



A vast array of fuels has been used in the launching and transportation of spacecraft, liquid hydrogen and oxygen among them. Other spacecraft rely heavily on solar power to sustain their functionality once they have entered outer space. But now steam-powered vessels are being developed, and they are working efficiently as well.

People have been experimenting with this sort of technology since 1698, some decades before the American Revolution. Steam power has allowed humanity to run various modes of transportation such as steam locomotives and steamboats which were perfected and propagated in the early 1800s. In the century prior to the car and the plane, steam power revolutionized the way people traveled.

Now, in the 21st century, it is revolutionizing the way in which we explore the cosmos via probing instruments. The private company Honeybee Robotics, which builds robotic systems for fields ranging from medicine to the military, has developed WINE (the World Is Not Enough). The project has received funding from NASA under its Small Business Technology Transfer program.

The spacecraft is intended to be capable of drilling into an asteroid’s surface, collecting water, and using it to generate steam to propel it toward its next destination. Late in 2018, WINE’s abilities were put to the test in a vacuum tank filled with simulated asteroid soil. The prototype mined water from the soil and used it to generate steam to propel it. Its drilling capabilities have also been proven in an artificial environment. To heat the water, WINE would use solar panels or a small radioisotopic decay unit.
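To get a feel for why mined water is attractive as a propellant, consider the ideal (Tsiolkovsky) rocket equation. The exhaust velocity and masses below are illustrative assumptions of mine, not Honeybee’s actual specifications:

```python
import math

def delta_v(exhaust_velocity, dry_mass, water_mass):
    """Tsiolkovsky rocket equation: the velocity change available from
    expelling `water_mass` kg of steam at `exhaust_velocity` m/s."""
    wet_mass = dry_mass + water_mass
    return exhaust_velocity * math.log(wet_mass / dry_mass)

# Illustrative numbers only: a 20 kg craft carrying 5 kg of mined water,
# with a modest ~700 m/s steam exhaust velocity.
dv = delta_v(700.0, dry_mass=20.0, water_mass=5.0)
print(round(dv, 1))  # 156.2 m/s
```

A budget of roughly 150 m/s is tiny by interplanetary standards, but on a small asteroid with near-zero escape velocity it is ample for repeated hops, and the tank can be refilled at every stop.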

“We could potentially use this technology to hop on the moon, Ceres, Europa, Titan, Pluto, the poles of Mercury, asteroids — anywhere there is water and sufficiently low gravity,” The University of Central Florida’s planetary researcher Phil Metzger stated.

Without having to carry a large fuel supply, and with a virtually unlimited means of replenishing its propellant, WINE and its future successors might be able to continue their missions indefinitely. Similar technology might even be employed in transporting human space travelers.

Cheap CubeSat snaps first images of Mars

NASA wanted to see if a generation of small, cheap satellites could survive the journey to deep space. Now, the first one already has Mars in its sights.

One of NASA’s twin MarCO spacecraft took this image of Mars. It’s the first time a CubeSat has ever done so. Image credits: NASA/JPL-Caltech.

If you think satellites have to be expensive and bulky, think again. CubeSats are a type of miniaturized satellite used for space research, consisting of modules no larger than 10×10×10 cm (3.9×3.9×3.9 in). CubeSats often use off-the-shelf equipment and components and are relatively easy to design and build.

NASA’s MarCO mission, which stands for Mars Cube One, wanted to see whether CubeSats can withstand the challenges of deep space — and so far, so good.

MarCO-A and MarCO-B, the two satellites of the mission, have wide-angle cameras and basic communication equipment, which they will now put to good use. A wide-angle camera on top of MarCO-B produced the image above as a test of exposure settings. Mars may look like nothing more than a smidge of red light, but from 8 million miles (12.8 million kilometers) away, that’s quite a performance.

An artist’s rendering of the twin Mars Cube One (MarCO) spacecraft as they fly through deep space. Image credit: NASA/JPL-Caltech.

MarCO-B’s wide-angle camera looks straight out from the deck of the CubeSat. Parts of the satellite are visible on the sides of the image. Aside from the navigation and camera work, the satellite also had to be programmed to “turn” towards Mars — and engineers were excited that it worked smoothly.

“We’ve been waiting six months to get to Mars,” said Cody Colley, MarCO’s mission manager at JPL. “The cruise phase of the mission is always difficult, so you take all the small wins when they come. Finally seeing the planet is definitely a big win for the team.”

While this isn’t a remarkable achievement in itself, it shows that CubeSats are a viable technology for interplanetary missions, and feasible on a short development timeline, with surprisingly low resources and investment. This demonstration could lead to many other applications to explore and study our solar system.

Many CubeSats have been built by university students or small companies, and they can be launched into Earth orbit using spare payload capacity on larger spacecraft.

Google just let an Artificial Intelligence take care of cooling a data center

The future is here, and it’s weird: Google is now putting a self-taught algorithm in charge of a part of its infrastructure.

It should surprise no one that Google has been intensively working on artificial intelligence (AI). The company managed to develop an AI that beat the world champion at Go, an incredibly complex game, but that’s hardly been the only implementation. Google taught one of its AIs how to navigate the London subway, and more practically, it developed another algorithm to learn all about room cooling.

They had the AI learn how to adjust a cooling system in order to reduce power consumption, and based on the AI’s recommendations, they cut the energy used for cooling at one of their data centers by as much as 40%.

“From smartphone assistants to image recognition and translation, machine learning already helps us in our everyday lives. But it can also help us to tackle some of the world’s most challenging physical problems — such as energy consumption,” Google said at the time.

“Major breakthroughs, however, are few and far between — which is why we are excited to share that by applying DeepMind’s machine learning to our own Google data centres, we’ve managed to reduce the amount of energy we use for cooling by up to 40 percent.”

The algorithm learns through a technique called reinforcement learning, which uses trial and error. As it learns, it starts to ask better questions and design better trials, which allows it to continue learning much faster. Essentially, it’s a self-taught method.
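DeepMind’s actual system is far more sophisticated, learning from thousands of sensor readings, but the trial-and-error core of reinforcement learning can be illustrated with a toy epsilon-greedy loop. The energy model and setpoints below are entirely made up for the sketch:

```python
import random

def simulated_energy(setpoint):
    """Stand-in for the real data center: energy use as a function of a
    cooling setpoint, minimized at 24 (a made-up model)."""
    return (setpoint - 24) ** 2 + 100

def epsilon_greedy_cooling(setpoints, steps=500, epsilon=0.1, seed=1):
    """Mostly exploit the cheapest-known setting, occasionally try a
    random one, and keep a running mean of observed energy per setting."""
    rng = random.Random(seed)
    estimates = {s: 0.0 for s in setpoints}  # optimistic init forces exploration
    counts = {s: 0 for s in setpoints}
    for _ in range(steps):
        if rng.random() < epsilon:
            s = rng.choice(setpoints)              # explore a random setting
        else:
            s = min(estimates, key=estimates.get)  # exploit cheapest-known
        counts[s] += 1
        # incremental mean of the energy observed at this setpoint
        estimates[s] += (simulated_energy(s) - estimates[s]) / counts[s]
    return min(estimates, key=estimates.get)

best = epsilon_greedy_cooling(setpoints=list(range(18, 30)))
print(best)  # 24, the lowest-energy setpoint in this toy model
```

The loop never needs a physical model of the cooling plant: it simply tries settings, observes the resulting energy use, and gravitates toward whatever works, which is the essence of the self-taught approach described above.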

In this particular case, the AI tried different cooling configurations and found ways that greatly reduced energy consumption, saving Google millions of dollars in the long run as well as lowering carbon emissions for the data center.

Now, Google has taken things one step further and assigned complete control of the cooling system to the AI. Joe Kava, vice president of data centers for Google, says engineers already trusted the system, and there were few issues with the transition. A data center manager will still oversee the process, but if everything goes according to plan, the AI will manage cooling entirely on its own.

This is no trivial matter. Not only does it represent an exciting first (allowing an AI to manage an important infrastructure component), but it also may help reduce the energy used by data centers, which can be quite substantial. A recent report from researchers at the US Department of Energy’s Lawrence Berkeley National Laboratory concluded that US data centers accounted for about 1.8% of the overall national electricity use.

Efforts to reduce this consumption have been made, but true breakthroughs are few and far between. This is where machine learning could end up making a big difference. Who knows — perhaps the next energy revolution won’t be powered by human ingenuity, but rather by artificial intelligence.


Child and teen obesity on the rise as they’re consuming too much… screen time

If you want to see your health improve, stop looking at the screen.

Young person smartphone.

Image credits Paul Henri Degrande.

A new scientific statement from the American Heart Association warns that children and teens should try to wean off of screens. Screen time from any device is associated with an increased amount of sedentary behavior, they explain, which promotes obesity and other health complications associated with lack of physical exercise.

The heart of the issue

Sedentary behaviors — things like sitting, reclining, or lying down while awake — exert little physical energy and contribute to overweight and obesity. That’s not exactly news. However, we’re spending more time than ever before with our eyes glued to screens, and this is especially true for children, teens, and yours truly.

Now, the American Heart Association (AHA) says that this lifestyle poses serious consequences to the health of teens and children.

The new scientific statement — a scholarly synopsis of a topic and the official point of view of the issuing organization — was developed by a panel of experts who reviewed the existing literature on how sedentary behavior relates to cardiovascular disease and stroke. The document holds that children and adolescents have seen a net increase in the recreational use of screen-based devices over the last twenty years. While TV-viewing has declined over the same period, those hours have been taken over by other devices such as smartphones and tablet computers.

Current estimates are that 8- to 18-year-olds spend more than 7 hours using screens daily, according to the paper. However, the authors caution that almost all of the available scientific literature on this subject relied on self-reported screen time. Very few of the studies looked at which types of devices were used in different contexts, they add. All in all, this means that the studies can’t be used to establish a cause-effect relationship between the use of these devices and the health complications examined as part of the paper.

There is a large body of evidence pointing to the relationship between screen time and obesity, however. Writing for Reuters in late 2016, Lisa Rapaport reported that “a minimum five-hour-a-day [TV time] increased the odds of obesity by 78 percent compared with teens who didn’t have TV time,” and that similarly “heavy use of other screens was tied to a 43 percent greater risk of obesity.”

“Still, the available evidence is not encouraging: overall screen time seems to be increasing — if portable devices are allowing for more mobility, this has not reduced overall sedentary time nor risk of obesity,” says Tracie A. Barnett, chair of the writing group.

“Although the mechanisms linking screen time to obesity are not entirely clear, there are real concerns that screens influence eating behaviors, possibly because children ‘tune out’ and don’t notice when they are full when eating in front of a screen.”

“There is also evidence that screens are disrupting sleep quality, which can also increase the risk of obesity,” Barnett said.

The most important takeaway from the study is for parents and children to try limiting screen time, the authors add. The AHA recommends that children and teens get no more than 1 or 2 hours of recreational screen time daily, a limit the authors also support. Given that younglings already “far exceed these limits,” they add, parents should step up to the plate and be vigilant about their children’s screen time, “including phones,” Barnett believes.

Efforts to minimize screen time should center around parent involvement, the team explains. Parents can help push children to reduce the time they spend on devices by setting a good personal example and establishing screen-time rules around the house.

Try to keep screens out of the bedroom (as much as one can in the 21st century), the team adds, as some studies have shown they can interfere with sleep patterns. Also, try to maximize face-to-face interactions and outdoor activities.

“In essence: Sit less; play more,” Barnett explains.

The team says that more research is needed to help us understand the long-term effects of screen time on children and teens. We also don’t really know how to help youngsters be less sedentary — a problem that the appeal of screens aggravates, but doesn’t necessarily cause. Before we can address this imbalance in how children and teens choose to spend their time, we need more comprehensive information on the impact of today’s sedentary pursuits.

The paper “Sedentary Behaviors in Today’s Youth: Approaches to the Prevention and Management of Childhood Obesity: A Scientific Statement From the American Heart Association” has been published in the journal Circulation: Journal of the American Heart Association.


UK millennials would happily sow, reap, and eat GMOs — unlike older generations

The majority of young adults in the UK say they’ve got no problems with GM crops and more tech in agriculture.


Image via Wikimedia.

Ah, GMOs, that horrible enemy that has been sending soccer moms scrambling for cover since the 1990s. According to a new poll, under-30s in the UK don’t share that view. In their eyes, GMOs are a-ok, as are more technology and more futuristic techniques on farms.

Put it on m’plate!

The poll, carried out for the Agricultural Biotechnology Council (ABC) to gauge public opinion following farmers’ calls for post-Brexit innovation, involved more than 1,600 participants aged 18 to 30. Two-thirds of respondents said more technology in farming is a good thing and that they would support futuristic farming techniques — such as the use of drones in livestock and arable farming to monitor animals and to assess and spray crops — according to The Telegraph.

A similar number said they’d support more innovation, such as the use of unmanned aerial vehicles (UAVs) to improve crop security and yields.

Only 20% of respondents expressed any concern about GM crops or about the benefits that gene editing can bring to agriculture — a very stark contrast to older generations. A similar number said they’d object to the use of self-driving tractors on farms.

The poll was commissioned by the ABC as part of its drive to have the UK Government capitalize on novel technology after Brexit. Many of the measures have previously been proposed and blocked at the EU level. Once the country leaves the bloc and resets its agricultural policy, however, it will be free to pursue such technology should it desire. Michael Gove, the Environment Secretary, believes next-generation food and farming technology could reduce the impact of pests and diseases — helping keep the UK agricultural sector competitive amid the Brexit fallout.

“We are delighted to see young people embrace technology as part of the future of farming,” says Mark Buckingham, chair of the ABC. “Using cutting-edge technology and growing techniques will enable the UK to deal with the serious challenges of keeping our farmers competitive, maintaining a safe, affordable food supply, and protecting our natural environment.”


How technology is revolutionizing medicine

Medicine has come a long way in the past century, and we have technology to thank for much of it.

When we look at the last decades, we can see major improvements that almost seemed to happen overnight — but nothing happens overnight in medicine. Drug development and clinical trials can take years or decades, but because technology is moving so fast, medicine has been able to take significant leaps, sometimes at dramatic speeds.

Let’s have a look at just a few of the improvements brought by such developments.

Restoring movement in paralyzed patients

What started out as a sci-fi dream turns into reality more and more — several labs and research institutes across the world are making breakthroughs in restoring movement to paralyzed patients. For instance, in 2016, 24-year-old Ian Burkhart, who lost the ability to move or feel from the shoulder down in a diving accident, recovered movement in his arm, enough to pour water from a bottle into a jar and even play some Guitar Hero. Previously, in 2014, researchers had helped paraplegic patients regain some control over their legs, and brain-computer interfaces that allow paralyzed people to “type” with brain power alone were also developed.

We’re still a ways away from making these treatments widely available, but the mere fact that scientists can tackle things like paralysis, once thought to be a final sentence, shows how far science has come.

Bionic eyes

There are several different types of visual prostheses—also known as bionic eyes—being developed around the world. While this technology can only restore partial vision and not full vision, this multidisciplinary approach truly seems unbelievable.

For instance, Steve Myers, a 63-year-old Iowa man, will be able to see after decades of being in the dark.

“The implant consists of 60 electrodes,” Myers said. “It’s got a short cabling on it; then it has the receiver, which is about like two nickels stacked together and impregnated (with) a silicone rubber and implanted into the white of the eye.”

Getting Treated Online

Of course, this is no substitute for a real doctor, just a way to communicate with him or her.

Some medical practices now offer the ability to “see” a doctor on their website. More often than not, you leave a message on the website about what you are experiencing; a doctor reviews it and can discuss your options. While online STD testing may still be a ways away, in other cases a lot of time, effort, and energy can be saved this way. Imagine having a urinary tract infection and your doctor simply calling in a UTI prescription at your local pharmacy. The idea is that you save a lot of time and so does the doctor — it’s a win-win situation. Of course, not all illnesses can be handled online. Serious problems always require a trip to the doctor, but for advice or a discussion about a well-known condition, this could work out efficiently.

The Cure for HIV

Did we ever think we would truly get there? Well, we have finally seen a patient’s HIV brought under lasting control. A child who was infected at birth went through 40 weeks of rigorous treatment. When he was tested at 9 years old, HIV was still found in his system, but the virus was incapable of replicating. If he chooses to have his own children later in life, this means he may well not pass the disease on to his offspring.

What the study showed was that with early treatment in those infected at birth, we could slow the progress of the disease. Education about safe sex will still be vital, especially in countries in which HIV runs rampant. This is still an impressive advancement we have made in the fight against HIV.

A sponge-filled syringe

Image via US Army.

This might not seem like much at first, but a simple sponge-filled syringe could save numerous lives by preventing blood loss. The mechanism is extremely simple: the sponges absorb blood, swell up, and cling to the wound, ensuring that they stay in place. Enough pressure is applied that the bleeding is temporarily stopped, buying enough time for the patient to reach a medical facility.

It can be extremely useful in war zones or in the ER.

3D Printing

At first glance, 3D printing and medicine would seem to have very little in common, but the two have started to intertwine staggeringly fast. Cheap 3D-printed prostheses are already a reality, as is biomedical printing. Tumor and cell models are also developing fast, and the fact that it’s all so cheap and quick makes the technology even more attractive for future research.

Diabetes Takes Another Hit

2016 proved to be a year in which diabetes took some big hits. One device, called the artificial pancreas or MiniMed 670G, checks blood sugar around every 5 minutes. A needle slipped under the skin checks glucose levels, and a pump worn over the abdomen administers insulin as needed. This reduces instances of hypoglycemia and improves the life of the diabetic. Additionally, Lexicon Pharmaceuticals has an inhibitor called sotagliflozin that is working wonders and is now in phase 3 trials. Not only is it able to control glucose levels in the kidneys and intestines, it also assists in weight loss and lowers systolic blood pressure in diabetic patients. Both of those are often needed in conjunction with maintaining healthy glucose levels.

There are a ton more medical breakthroughs happening in recent years and we suspect that we will keep seeing more and more in the coming years as our technology continues to grow at a rapid pace. We are certainly seeing improved quality of life with many of these breakthroughs and we couldn’t be more excited!



Unmanned US plane lands after two-year secret mission

After circling our planet for an unprecedented 718 days doing classified scientific experiments, an unmanned plane landed at the Shuttle Landing Facility at NASA’s Kennedy Space Center in Florida.

The Air Force’s secret X-37B Orbital Test Vehicle landed at NASA’s Kennedy Space Center Shuttle Landing Facility Sunday, setting off a sonic boom that surprised residents. Image credits: Secretary of the Air Force.

It was an unusual sight at the Kennedy Space Center — the first space plane to land there since Atlantis in 2011. People from Orlando and other places in Florida reported hearing sonic booms just like during the golden days of NASA’s space shuttle program, but this was a different kind of mission: a secret, military one.

It was one of the military’s two X-37 space plane vehicles. The Boeing X-37, also known as the Orbital Test Vehicle (OTV), is a reusable unmanned spacecraft that launches vertically and lands horizontally; its more interesting feature, though, is how long it can remain in orbit. Officials were thrilled to see the plane intact and functioning properly.

“Today marks an incredibly exciting day for the 45th Space Wing as we continue to break barriers,” Air Force Brig. Gen. Wayne Monteith, the 45th SW commander, said in a statement. “Our team has been preparing for this event for several years, and I am extremely proud to see our hard work and dedication culminate in today’s safe and successful landing of the X-37B.”

So what exactly is the purpose of the X-37? Well… we don’t really know. The official U.S. Air Force statement is that the project is “an experimental test program to demonstrate technologies for a reliable, reusable, unmanned space test platform for the U.S. Air Force”. Speculation, however, has gone much further. Allegations have ranged from delivering weapons from space to spying on China’s Tiangong-1 space station module. The Guardian suggested that its purpose was “to test reconnaissance and spy sensors, particularly how they hold up against radiation and other hazards of orbit,” while the International Business Times stated that the U.S. government was testing a version of the EmDrive electromagnetic microwave thruster. All of these claims were subsequently denied by the Pentagon and Boeing. While the mystery hasn’t really been dispelled, at this point there’s little reason to doubt the officials’ claims. It’s unlikely that the X-37 is doing something aggressive in space, but it is quite possible that it’s testing sensors or other developing technologies, somewhere at the science-military border.

“Technologies being tested in the program include advanced guidance, navigation and control; thermal-protection systems; avionics; high-temperature structures and seals; conformal, reusable insulation, lightweight electromechanical flight systems; and autonomous orbital flight, re-entry and landing,” Capt. AnnMarie Annicelli, an Air Force spokeswoman, told Space.com via email in March. “Also, the Air Force Research Laboratory (AFRL), Space and Missile Systems Center (SMC), and the Air Force Rapid Capabilities Office (AFRCO) are investigating an experimental propulsion system,” she said.

It’s certainly a bit unnerving that we’re already toying with the idea of militarizing space, especially given the scale of such projects. Military space programs “are as big as NASA,” astrophysicist and astronomer Jonathan McDowell told NPR’s Here and Now back in 2015, when X-37 started this mission. McDowell also said that at this point, there are at least 20 “full-fledged spy satellites or other really secret vehicles” orbiting the Earth and that’s a scary thought.

This was the fourth and lengthiest mission from the project, also notable for the plane’s autonomous landing. It’s unclear what the project’s further objectives are. The US Air Force has at least confirmed that they are actively researching reusable space vehicles and establishing a space test platform for themselves.

Types of engines and how they work

Engines are machines that convert a source of energy into physical work. If you need something to move around, an engine is just the thing to slap onto it. But not all engines are made the same, and different types of engines definitely don’t work the same.

Jet engine

Image credits Little Visuals / Pixabay.

Probably the most intuitive way to differentiate between them is the type of energy each engine uses for power.

  • Thermal engines
    • Internal combustion engines (IC engines)
    • External combustion engines (EC engines)
    • Reaction engines
  • Electrical engines
  • Physical engines

Thermal engines

In the broadest definition possible, these engines require a source of heat, which they convert into motion. Depending on how they generate said heat, these can be combustive (that burn stuff) or non-combustive engines. They function either through direct combustion of a propellant or through the transformation of a fluid to generate work. As such, most thermal engines see some overlap with chemical drive systems. They can be airbreathing engines (that take an oxidizer such as oxygen from the atmosphere) or non-airbreathing engines (whose oxidizer is chemically bound in the fuel).

Internal combustion engines

Internal combustion engines (IC engines) are pretty ubiquitous today. They power cars, lawnmowers, helicopters, and so on. The biggest IC engine can generate 109,000 HP to power a ship that moves 20,000 containers. IC engines derive energy from fuel burned inside a specialized area of the system called a combustion chamber. The process of combustion generates reaction products (exhaust) with a much greater total volume than that of the reactants combined (fuel and oxidizer). This expansion is the actual bread and butter of IC engines — this is what actually provides the motion. Heat is only a byproduct of combustion and represents a wasted part of the fuel’s energy store, because it doesn’t actually provide any physical work.


An inline, 4-cylinder IC engine.
Image credits NASA / Glenn Research Center.

IC engines are differentiated by the number of ‘strokes’ or cycles each piston makes for a full rotation of the crankshaft. Most common today are four-stroke engines, which break down the combustion reaction in four steps:

  1. Induction or injection of a fuel-air mix (the charge) into the combustion chamber.
  2. Compression of the mix.
  3. Ignition by a spark plug or compression — fuel goes boom.
  4. Emission of the exhaust.
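The four strokes above can be sketched as a simple repeating cycle; a minimal Python toy, just to make concrete the point that only one stroke produces work:

```python
from itertools import cycle

# The four strokes, stepped through forever. Each crankshaft
# half-turn moves the piston one stroke; only the power stroke
# (ignition) actually generates work.
STROKES = ["intake", "compression", "power", "exhaust"]

def four_stroke():
    for stroke in cycle(STROKES):
        yield stroke, (stroke == "power")

engine = four_stroke()
for _ in range(6):
    stroke, produces_work = next(engine)
    print(stroke, "-> work!" if produces_work else "")
```

Three out of every four strokes consume energy rather than produce it, which is exactly why the piston needs the crankshaft’s inertia (or its neighbors) to keep turning.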

This radial engine looks like the funkiest little man I’ve ever seen.
Image credits Duk / Wikimedia.

For every step, a 4-stroke piston is alternately pushed down or back up. Ignition is the only step where work is generated in the engine, so for all the other steps, each piston relies on energy from external sources (the other pistons, an electric starter, manual cranking, or the crankshaft’s inertia) to move. That’s why you have to pull the cord on your lawnmower, and why your car needs a working battery to start running.

Other criteria for differentiating IC engines are the type of fuel used, the number of cylinders, total displacement (internal volume of cylinders), distribution of cylinders (inline, radial, V-engines, etc.), as well as power and power-to-weight output.

External combustion engines

External combustion engines (EC engines) keep the fuel and its exhaust products separate from the working fluid — they burn fuel in one chamber and heat the fluid inside the engine through a heat exchanger or the engine’s wall. That grand daddy-o of the Industrial Revolution, the steam engine, falls into this category.

In some respects, EC engines function similarly to their IC counterparts — they both require heat which is obtained by burning stuff. There are, however, several differences as well.

EC engines use fluids that undergo thermal dilation-contraction or a shift in phase, but whose chemical composition remains unaltered. The fluid used can either be gaseous (as in the Stirling engine), liquid (the Organic Rankine cycle engine), or undergo a change of phase (as in the steam engine) — for IC engines, the fluid is almost universally a liquid fuel and air mixture that combusts (changes its chemical composition). Finally, the engines can either exhaust the fluid after use like IC engines do (open-cycle engines) or continually use the same fluid (closed-cycle engines).

A Stephenson’s Steam Engine working

Surprisingly, the first steam engines to see industrial use generated work by creating a vacuum rather than pressure. Called ‘atmospheric engines’, these were ponderous machines and highly fuel-inefficient. In time, steam engines took on the form and characteristics we expect from engines today and became more efficient — with reciprocating steam engines introducing the piston system (still in use by IC engines today), and compound engine systems re-using the fluid in cylinders at decreasing pressures to generate extra ‘oomph’.

Today, steam engines have fallen out of widespread use: they’re heavy, bulky things with a much lower fuel efficiency and power-to-weight ratio than IC engines, and they can’t change output as quickly. But if you’re not bothered by their weight and size, and you need a steady supply of work, they’re awesome. As such, EC engines are currently employed with great success as steam turbines in naval operations and power plants.

Nuclear power applications have the distinction of being called non-combustive or external thermal engines, since they operate on the same principles as EC engines but don’t derive their power from combustion.

Reaction engines

Reaction engines, colloquially known as jet engines, generate thrust by expelling reaction mass. The basic principle behind a reaction engine is Newton’s Third Law — basically, if you blow something with enough force out the back end of the engine, it will push the front end forward. And jet engines are really good at doing that.

Mad good at that.
Image credits thund3rbolt / Imgur.

The things we usually refer to as a ‘jet’ engine, the ones strapped to a Boeing passenger plane, are strictly speaking airbreathing jet engines and fall under the turbine-powered class of engines. Ramjet engines, which are usually considered simpler and more reliable as they contain fewer (up to none) moving parts, are also airbreathing jet engines but fall into the ram-powered class. The difference between the two is that ramjets rely on sheer speed to feed air into the engine, whereas turbojets use turbines to draw in and compress air into the combustion chamber. Beyond that, they function largely the same.

In turbojets, air is drawn into the engine chamber and compressed by a rotating turbine. Ramjets draw and compress it by going really fast. Inside the engine, it’s mixed with high-power fuel and ignited. When you concentrate air (and thus oxygen), mix it with a lot of fuel, and detonate it (generating exhaust and thermally expanding all the gas), you get a reaction product with a huge volume compared to the air drawn in. The only place all this mass of gas can go is out the back end of the engine, which it does with extreme force. On the way there, it powers the turbine, drawing in more air and sustaining the reaction. And just to top it all off, at the back end of the engine there’s a propelling nozzle.
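To put a rough number on it: in the simplest picture, net thrust is just mass flow times the velocity the gas gains passing through the engine. The figures below are illustrative ballpark values, not specs for any particular engine:

```python
# Simplified momentum-thrust estimate for an airbreathing jet:
# thrust = mass flow * (exhaust velocity - flight velocity).

m_dot = 100.0      # kg/s of air through the engine (assumed)
v_flight = 250.0   # m/s, cruise speed (assumed)
v_exhaust = 600.0  # m/s, jet velocity leaving the nozzle (assumed)

thrust = m_dot * (v_exhaust - v_flight)   # newtons
print(f"net thrust: {thrust / 1000:.0f} kN")
```

Notice that thrust depends on the *difference* between exhaust and flight speed, which is why the nozzle’s job of accelerating the exhaust matters so much.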

Hello, I am the propelling nozzle. I will be your guide.

This piece of hardware forces all the gas to pass through an even smaller space than the one it initially came in through — thus further accelerating it into a ‘jet’ of matter. The exhaust exits the engine at incredible speeds, up to three times the speed of sound, pushing the plane forward.
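Treating the flow as incompressible (a simplification — real jet exhaust is compressible, so take this as a sketch of the principle only), continuity says the same mass must pass through every cross-section, so shrinking the area speeds up the gas. The areas and speed below are assumed:

```python
# Continuity for incompressible flow: A_in * v_in = A_out * v_out.
# Squeezing the same flow through a smaller exit accelerates it.

A_in = 1.0     # m^2, nozzle inlet area (assumed)
A_out = 0.4    # m^2, nozzle exit area (assumed)
v_in = 300.0   # m/s, gas speed entering the nozzle (assumed)

v_out = v_in * A_in / A_out
print(f"exit velocity: {v_out:.0f} m/s")
```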

Non-airbreathing jet engines, or rocket engines, function just like jet engines without the front bit — because they don’t need external material to sustain combustion. We can use them in space because they have all the oxidizer they need, packed up in the fuel. They’re one of the few engine types to consistently use solid fuels.

Heat engines can be ridiculously big, or adorably small. But what if all you have is a socket, and you need to power your stuff? Well, in that case, you need:

Electrical engines

Ah yes, the clean gang. There are three types of classical electrical engines: magnetic, piezoelectric, and electrostatic.

And of course, the Duracell drive.

The magnetic one, like the battery there, is the most commonly used of the three. It relies on the interaction between a magnetic field and electrical flow to generate work. It functions on the same principle a dynamo uses to generate electricity, but in reverse. In fact, you can generate a bit of electrical power if you hand-crank a magnetic motor.

To create a magnetic motor you need some magnets and a wound conductor. When an electrical current is applied to the winding, it induces a magnetic field that interacts with the magnet to create rotation. It’s important to keep these two elements separated, so electrical motors have two major components: the stator, which is the engine’s outer part and remains immobile, and the rotor, which spins inside it. The two are separated by an air gap. Usually, the magnets are embedded into the stator and the conductor is wound around the rotor, but the two are interchangeable. Magnetic motors are also equipped with a commutator, which shifts the electrical flow and modulates the induced magnetic field as the rotor spins, to maintain rotation.

Piezoelectric drives harness some materials’ property of generating ultrasonic vibrations when subjected to a flow of electricity in order to create work. Electrostatic engines rely on like charges repelling each other to generate rotation in the rotor. Since the first uses expensive materials and the second requires comparatively high voltages to run, neither is as common as magnetic drives.

Classical electrical engines have some of the highest energy efficiency of all the engines out there, converting up to 90% of energy into work.

Ion drives

Ion drives are kind of a mix between a jet engine and an electrostatic one. This class of drives accelerates ions (plasma) using an electrical charge to generate propulsion. They don’t function if there are ions already around the craft, so they’re useless outside of the vacuum of space.

The Hall Thruster.
Image credits NASA / JPL-Caltech.

They also have a very limited power output. However, since they only use electricity and individual particles of gas as fuel, they’ve been studied extensively for use in spaceships. Deep Space 1 and Dawn have successfully used ion drives. Still, the technology seems best suited for small craft and satellites since the electron trail left by these drives negatively impacts their overall performance.
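The trade-off is easy to quantify: thrust is mass flow times exhaust velocity, and specific impulse is exhaust velocity divided by standard gravity. The numbers below are rounded, illustration-only figures in the rough ballpark of the NSTAR thruster flown on Deep Space 1:

```python
# Ion thrusters trade raw thrust for exhaust velocity.

g0 = 9.81            # m/s^2, standard gravity
m_dot = 3.0e-6       # kg/s of xenon propellant (assumed)
v_exhaust = 30000.0  # m/s, ion exhaust velocity (assumed)

thrust = m_dot * v_exhaust   # newtons -- tiny, about a sheet of paper's weight
isp = v_exhaust / g0         # seconds -- roughly 10x a chemical rocket
print(f"thrust: {thrust * 1000:.0f} mN, Isp: {isp:.0f} s")
```

Milli-newtons of thrust sound useless until you remember the drive can sustain them for months on a few dozen kilograms of propellant.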

EM/Cannae drives 

EM/Cannae drives use electromagnetic radiation contained in a microwave cavity to generate thrust. It’s probably the most peculiar of all the types of engines. It’s even been referred to as the ‘impossible’ drive, since it’s a nonreactionary drive — meaning it doesn’t produce any discharge to generate thrust, seemingly bypassing the Third Law.

“Instead of fuel, it uses microwaves bouncing off a carefully tuned set of reflectors to achieve small amounts of force and therefore achieve propellant-free thrust,” Andrei reported on the drive.

There was a lot of debate on whether this type of engine actually works or not, but NASA tests have confirmed it’s functionally sound. It’s even getting an upgrade in the future. Since it uses only electrical power to generate thrust, albeit in tiny amounts, it seems to be the best-suited drive for space exploration.

But that’s in the future. Let’s take a look at how it all started. Let’s take a look at:

Physical engines

These engines rely on stored mechanical energy to function. Clockwork, pneumatic, and hydraulic engines are all physical drives.

A model of Le Plongeour, showing the huge air tanks.
Image credits Musée national de la Marine.

They’re not terribly efficient, and they usually can’t call upon large energy reserves either. Clockwork engines, for example, store elastic energy in springs and need to be wound each day. Pneumatic and hydraulic engines have to carry hefty tanks of compressed fluid around, which generally don’t last very long. For example, the Plongeur, the world’s first mechanically powered submarine, built in France between 1860 and 1863, carried a reciprocating air engine supplied by 23 tanks at 12.5 bars. They took up a huge amount of space (153 cubic m / 5,403 cubic ft) and were only enough to power the craft for 5 nautical miles (9 km / 5.6 mi) at 4 knots.
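For a rough upper bound on what those tanks held, assume isothermal expansion from storage pressure down to atmospheric, W = P₁V·ln(P₁/P₀). The real usable work was far lower, which is part of why the range was so short:

```python
from math import log

# Theoretical (isothermal) energy stored in compressed air:
# W = P1 * V * ln(P1 / P0). Real engines extract much less.

P0 = 1.0e5    # Pa, atmospheric pressure
P1 = 12.5e5   # Pa, storage pressure (12.5 bar, per the article)
V = 153.0     # m^3, total tank volume (per the article)

W = P1 * V * log(P1 / P0)   # joules
print(f"stored energy: ~{W / 3.6e6:.0f} kWh")
```

Around 130 kWh — roughly what a modest car engine burns through in an hour or two — stored in 153 cubic meters of tankage.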

Still, physical drives were probably the first ever used. Catapults, trebuchets, and battering rams all rely on this type of engine. So do man- or beast-powered cranes — all of which were in use long before any other kind of engine.


This is by no means a complete list of all the engines humanity has made. Not to mention that biology has produced drives too — and they’re among the most efficient we’ve ever seen. But if you’ve read all of this, I’m pretty sure yours are running out of fuel by this point. So rest, relax, and the next time you come across an engine, get your hands and your nose all greased up exploring it — we’ve told you the basics.


Eyes up above: you can’t lie to satellite imagery

A couple of decades ago, satellites were solely the province of governments, since they were the only ones that could afford launching billions of dollars’ worth of tech into space. Slowly but surely, corporations hitched a ride, and now, when an imaging satellite can fit in the palm of your hand and costs only a fraction of what it used to, small enterprises are flourishing. And along with them, innovation.

Take, for instance, the Palo Alto-based Orbital Insight – a company that specializes in satellite imagery analytics to provide business insights. Using commercially available satellite images and machine learning, the company has analyzed 60 different retail chains to assess their performance just by looking at how their parking lots shift each day. To predict retail sales based on retailers’ parking lots, humans at Orbital Insight use Google Street View images to pinpoint the exact location of the stores’ entrances. Satellite imagery is acquired from a number of commercial suppliers, some of it refreshed daily.
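The counting step itself can be sketched with a toy example: threshold an overhead image into ‘car’ pixels and count connected blobs. Orbital Insight’s actual pipeline uses machine learning on real imagery; this pure-Python flood fill only illustrates the idea:

```python
# Toy car counter: count 4-connected groups of 1s in a binary mask.

def count_blobs(grid):
    """Count 4-connected groups of 1s in a binary grid."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]
                while stack:  # flood-fill the whole blob
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and grid[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return blobs

# Two "cars" in a tiny mock parking-lot mask:
lot = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
]
print(count_blobs(lot))  # → 2
```

Run this over the same lot every day and the blob count becomes a daily foot-traffic signal — which is, in miniature, the product being sold.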

Orbital Insight’s CEO, James Crawford, says this sort of analysis has helped them show that Home Depot is busier than Lowe’s, or that Chipotle is getting more popular – all of this just from the satellite imaging, without any insight from the companies themselves. And this is just a taste of a new field that promises to deliver economic trends based on satellite imaging.


A heat map made by Orbital Insight showing the activity in a retailer’s parking lot.

There’s of course much more to it than just counting cars in a retailer’s parking lot. Another ingenious way to expose a trend is measuring the shadows cast by buildings to gather information about the rate of construction. In China, Orbital Insight is using this method to study the country’s infamous ghost towns – brand new cities made from the ground up, complete with everything anyone would ever wish for: municipal buildings, living quarters, cultural theaters, shopping malls, etc. Everything, except people. More broadly, Orbital Insight confirmed something analysts have forecasted for a while: that Chinese construction has indeed slowed down. Satellite images could also predict oil yields before they’re officially reported, because it’s possible to estimate how much crude oil is in a container from the height of its floating lid.
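The floating-lid trick reduces to simple geometry: with the sun at a fixed elevation, the shadow the tank wall casts onto the sinking lid is proportional to the lid’s depth below the rim, and the tank’s exterior shadow to its full height. The pixel lengths below are invented for illustration:

```python
# Fill estimate for a floating-roof tank from two shadow lengths
# measured in the same image (same sun elevation):
#   fill_fraction ≈ 1 - interior_shadow / exterior_shadow

shadow_inside = 12.0    # px, wall's shadow on the floating lid (assumed)
shadow_outside = 48.0   # px, tank's shadow on the ground (assumed)

fill = 1.0 - shadow_inside / shadow_outside
print(f"tank is ~{fill:.0%} full")
```

No access to the facility, no leaked reports — just two shadow measurements per tank, repeated across every tank farm the satellite passes over.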

The shadows in such images can indicate the fullness of an oil container. Image: Orbital Insight


“Then it’s not an image anymore—it’s some sort of measurement,” Crawford says.

Another company, called Planet Labs, is taking a slightly different route to imaging analysis. The company measures different wavelengths of light to calculate the actual amount of plant material in a field, as well as the plant strain itself. It’s quite likely that sometime in the future, we could calculate how many crops are growing, and what their yield is, across the whole world. This sort of information is quite powerful and may be a lot more accurate than collecting individual reports from all over the world.

Investors are particularly interested in this sort of technology, since the information is harder to fake. In the image below taken by Planet Labs, you can see how a gold mine in northern China changes between September and October. An astute investor could use this information, coupled with some supplementary data fed by algorithms that put the pics into context (how many trucks are leaving the mine each day, how much material is being displaced), to assess whether or not the mining operation is truly profitable. Papers, notes, and income can be fabricated, but the real activity is harder to hide.

A gold mine in Northern China, image taken Sep. 28 2014. Image: Planet Labs


The same gold mine captured Oct. 25 2014. Image: Planet Labs


“A satellite can cover every square inch of the earth every two weeks. You can’t stop that,” he says. “We don’t drive what imagery the satellite takes.”

As more and more satellites are deployed in space, and as space launches get ever cheaper, expect this sort of tech to soar. Imagine what you can do if a patch of ground is imaged 15-20 times a day. The data would be incredibly valuable.

Google’s new finger control technology seems taken from a science fiction movie

Swiping your phone’s touchscreen might disappear just as quickly as it emerged, if Google has its way. When its new technology hits the shelves, you won’t even have to touch a screen ever again. Here’s why.

It’s called Project Soli, and it uses radar waves to detect precise finger movements – or, as they call them, “micromotions”. The technology would allow the detection of extremely fine movements while ignoring other, irrelevant signals (insects flying by, for example). The project is the work of the Advanced Technology and Projects lab, where Google works on all of its futuristic projects, and the prototypes already seem promising.

In one of the demos, Ivan Poupyrev changed the hours on a clock by simply turning an imaginary dial, and then changed the minutes by moving his hand a little higher and doing the same motion. He then proceeded to kick a virtual football by flicking at the screen.

Soli is fundamentally different from current technologies; unlike conventional capacitive screens, it can detect 3D motion, and unlike other motion detection technologies like Kinect, it is tuned to detect fine movements. The key is the high-frequency radar (60 GHz), which enables a high enough resolution to detect these fine movements. The transmitter also doesn’t scan with a fine beam of radio waves, using a wide cone spread instead, which helps keep the costs down. The current proposal includes two transmitters and four receivers, which presumably help filter out noise and unwanted movements. For example, if a certain action requires a movement with two fingers, you don’t want your other fingers’ movements to interfere.
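The choice of frequency matters because millimeter waves make millimeter motions visible: at 60 GHz the wavelength is only about 5 mm, so tiny finger movements shift the reflected signal by a large fraction of a wavelength. The bandwidth used below is an assumed figure for illustration, not a published Soli spec:

```python
# Wavelength and range resolution for a 60 GHz radar.

c = 3.0e8           # m/s, speed of light
f = 60.0e9          # Hz, radar carrier frequency
bandwidth = 7.0e9   # Hz, assumed sweep bandwidth

wavelength = c / f                 # metres
range_res = c / (2 * bandwidth)    # metres, classic radar range-resolution formula
print(f"wavelength: {wavelength * 1000:.1f} mm")
print(f"range resolution: {range_res * 100:.1f} cm")
```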

It took Google only 10 months of work to shrink this array down into a fingernail-sized chip – small enough that it could be integrated into electronic devices, especially smartphones. A particular interest was given to smartwatches, which seem to be the new gadget everyone wants to get their hands on nowadays. Smartwatches are an interesting example, because they highlight a physical problem of smart devices: as they become smaller and smaller, it becomes harder and harder to actually use the touchscreen.

The possibilities for this technology are virtually endless. Enabling people to interact with their smart devices is only the beginning – mixing in virtual reality and the ability to remotely control devices seems like the logical next step. So far, Google hasn’t announced whether it will release this technology itself, or whether it will emerge as a stand-alone project for other companies to integrate. We’re eagerly waiting for more details.

Here’s a video showing how Soli works:


Your smartphone will be able to tell if you have blood parasites

Scientists have managed to use a simple smartphone to test for blood parasites; the device and app were successful in small trials in Cameroon.

Image credits: BBC.

Parasitic worms cause many problems in several areas of Africa, especially central Africa, where tropical diseases run rampant. There are many issues with detecting and treating these diseases, especially diagnosing infections in the early stages, while treatment can still be effective. With this new app and only a finger prick, you can find out whether your blood is infected with a worm. But it gets even better.

Among the problems in dealing with such diseases is the fact that some people react well to treatment, while for others, the treatment can be fatal. This app can also detect who will react well and who won’t.

“With one touch of the screen, the device moves the sample, captures video and automatically analyses the images,” said one of the researchers, Prof Daniel Fletcher.

The trick is that the app doesn’t try to actually detect the worm itself; instead, it focuses on detecting movement within the blood. The whole system is very efficient, and it was praised by experts in the field.
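The core of that idea can be sketched as simple frame differencing: compare consecutive video frames and score how many pixels changed. The tiny mock frames below stand in for real microscope video, and the threshold is an arbitrary illustrative value:

```python
# Toy motion detector: a wriggling worm shows up as pixel changes
# between consecutive frames of the blood-sample video.

def motion_score(frame_a, frame_b, threshold=10):
    """Fraction of pixels whose brightness changed by more than threshold."""
    changed = 0
    total = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += 1
            if abs(pa - pb) > threshold:
                changed += 1
    return changed / total

still = [[100, 100], [100, 100]]
wiggle = [[100, 180], [100, 100]]   # one pixel jumped: something moved
print(motion_score(still, still))    # → 0.0
print(motion_score(still, wiggle))   # → 0.25
```

A moving parasite keeps this score above the noise floor frame after frame, which is far easier to automate than recognizing the worm’s shape.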

“This is a very important technology,” said Baylor College of Medicine’s Dr. Peter Hotez, a well-known specialist in neglected tropical diseases who wasn’t involved in the new research. “It’s very practical,” by eliminating the need for specially trained health workers and pricey equipment in remote villages, he added.

Now, researchers are wondering if a similar system could be used to detect other diseases, including TB, malaria and soil-transmitted parasitic worms. Considering other recent advancements in using smartphones to detect diseases, there are reasons to be optimistic – Columbia University scientists created a device that can detect HIV or syphilis and are pilot-testing it in Rwanda, while at the Massachusetts General Hospital, doctors are researching a tool that clips over a smartphone camera to detect cancer in blood or tissue samples. To me, using advanced yet commonly accessible technology to detect such serious diseases is a spectacular achievement.

Prof Simon Brooker from the London School of Hygiene & Tropical Medicine, commented:

“I think it’s one of the most fundamental advances in neglected tropical diseases in a long time. In the 21st Century we are using 20th Century technology to diagnose these infections, this brings us into the modern world. It really is exciting; when you see it you just go ‘wow’; hopefully it will transform efforts to eliminate diseases,” he added.