Tag Archives: Google

Google expands its earthquake detection system to Greece and New Zealand

First launched in the US, Google is now expanding its Android-based earthquake detection and alert system to Greece and New Zealand. Users will get warnings of earthquakes on their phones, giving them time to get to safety. The earthquakes won’t be detected by seismometers but by the phones themselves.

Image credit: Flickr / Richard Walker

It’s the first time the tech giant will handle everything from detecting the earthquake to warning individuals. Mobile phones will first sense waves generated by quakes, then Google will analyze the data and send out an early warning alert to the people in the affected area. Users will get the alert automatically, unless they unsubscribe.

When it launched the service in California, Google first worked with the US Geological Survey and the California Governor’s Office of Emergency Services to send out earthquake alerts. This feature later became available in Oregon and will now expand to Washington in May – and eventually to even more states in the US.

Mobile phones are already equipped with an accelerometer, a sensor that detects movement. That same sensor can pick up the primary and secondary waves generated by an earthquake, letting each phone act like a “mini seismometer” (the ground-motion instrument used in traditional detection) and, collectively, form an earthquake detection network.
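To make the idea concrete, here is a minimal, purely illustrative sketch (my own assumption, not Google’s actual algorithm) of how a single phone might flag possible shaking from its accelerometer stream before reporting a candidate event for server-side aggregation:

```python
# Illustrative sketch only: flag sustained shaking by comparing the
# acceleration magnitude against gravity.
import math

GRAVITY = 9.81        # m/s^2
THRESHOLD = 0.3       # m/s^2 of excess shaking; arbitrary value for illustration
MIN_SAMPLES = 5       # require several consecutive samples to ignore simple bumps

def looks_like_shaking(samples):
    """samples: list of (ax, ay, az) accelerometer readings in m/s^2."""
    streak = 0
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if abs(magnitude - GRAVITY) > THRESHOLD:
            streak += 1
            if streak >= MIN_SAMPLES:
                return True   # candidate event: would be reported for aggregation
        else:
            streak = 0
    return False

# A phone at rest, followed by a burst of shaking.
quiet = [(0.0, 0.0, 9.81)] * 20
shaky = [(0.5, -0.4, 10.3)] * 10
print(looks_like_shaking(quiet + shaky))  # True
```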

Traditional warning systems use seismometers to detect an earthquake and estimate its magnitude, sending a warning via smartphone or loudspeakers to residents. Even if they come just seconds before the quake hits, these warnings can buy valuable time to take cover. But seismometer networks are difficult and expensive to build and maintain.

That’s why a warning system that can rely on smartphones has a lot of potential. Richard Allen, a seismologist at the University of California, Berkeley, told Science that Google’s interest in building quake-sensing capabilities directly into Android phones was an enormous opportunity, or, as he calls it, a “no brainer.”

“It’d be great if there were just seismometer-based systems everywhere that could detect earthquakes,” Marc Stogaitis, principal Android software engineer at Google, told The Verge last year. Because of costs and maintenance, he says, “that’s not really practical and it’s unlikely to have global coverage.”

Earthquakes are a well-known threat in Greece and New Zealand, where Google’s service is being deployed. Greece is spread across three tectonic plates, while in New Zealand, the Pacific Plate collides with the Australian Plate. Neither country has deployed an operational warning system, which created an opportunity for the tech giant.

Caroline Francois-Holden, an independent seismologist who until recently worked at GNS Science, told Science that many earthquakes in New Zealand originate offshore, where few phones are found. This might make Google’s system less than ideal. “Any earthquake early warning system needs to be designed with that in mind,” she said.

There are other limitations, too. Those closest to the epicenter won’t get much advance warning, since their phones will be the first to detect the quake. But those phones will help give a heads-up to others farther away, buying them crucial time to take shelter. And since Android is the world’s most widely used smartphone operating system, the service has plenty of room to grow.

Google-Alphabet balloon breaks record for longest flight in the stratosphere

Alphabet, the parent company of Google, announced that it has broken the record for the longest stratospheric flight, with a balloon that flew for 312 days straight.

Image credits Loon.

The feat belongs to the Loon project team, and was announced on their blog (via Medium). The Loon project was started in 2013 and aims to provide cellphone service to remote areas by employing balloons instead of phone towers.

Floating proud

The team has spent those years hard at work perfecting very high-altitude balloon flight. And they’ve come a long way — their first balloons could only manage a few days aloft.

Each balloon they trialed was equipped with two-way communications technology, allowing it to act as a floating cell tower, along with an on-board solar panel to power all the gear. Among other hurdles, the team had to learn how to properly insulate the on-board equipment and keep the batteries warm enough to function. The stratosphere starts 4 to 12 miles (6 to 20 km) above the Earth’s surface and extends to around 31 miles (50 km). Temperatures there range from an average of -60°F (-51°C) at the tropopause (where the stratosphere begins) to a maximum of about 5°F (-15°C) at the top, where the air is warmer because ozone absorbs ultraviolet radiation and releases heat.

When handling balloons, one has to be extremely careful, the team explains, as the tiniest of holes can ruin the whole trip. However, they’ve recovered almost every one of the balloons they sent up, even those that fell in remote areas. To ensure that everything is in tip-top shape, each of them is scanned using “the world’s largest flatbed scanner” after every trip.

The flight path of the record-setting balloon. Image credits Loon.

That’s a lot of scanning, because the team launches a new balloon almost every week and has around 100 flying at any time. A ground crew monitors their altitude and adjusts it if needed, for example to avoid areas of extremely low temperatures.

The record-setting balloon was launched from Puerto Rico in May of last year, flew over South America, went almost around the globe, and then landed in Baja California this past March.

https://www.youtube.com/watch?v=e_QI5llQrF0

Google Maps adds COVID-19 layer to alert about cases

From now on, the popular navigation application Google Maps will report on COVID-19 outbreaks that occur around the world, with geographic information on the cases. The new functionality was added for users of Android and iOS operating systems as an extra layer on top of the maps.

More than one billion people use Google Maps for essential information on how to get from place to place. Amid the pandemic, the app has already added several new features, such as checkpoint locations in driving navigation, COVID-19 alerts in transit, and indicators of when individual businesses see the most visitors, all meant to help people stay safe.

Now, the new feature will show how many COVID-19 cases there are in a given geographic region. The information displayed is the seven-day average of new cases per 100,000 inhabitants for the area being viewed, with a label indicating whether the trend in new cases is rising or falling. The tool will be available in the 220 countries and territories that Google Maps supports.
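As a quick illustration of that metric (with made-up numbers, not Google’s data pipeline), the figure shown is essentially the average daily case count over the past week, scaled to a population of 100,000:

```python
# Hypothetical numbers for one region; Google's layer aggregates real case reports.
new_cases_last_7_days = [120, 140, 95, 110, 130, 150, 125]
population = 850_000

avg_daily_cases = sum(new_cases_last_7_days) / 7
cases_per_100k = avg_daily_cases * 100_000 / population
print(round(cases_per_100k, 1))  # ≈ 14.6 new cases per 100,000 people per day
```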

Using it is quite simple. First, make sure you have the latest version of the Google Maps app, as you’ll need it for the information to be visible (it might take a few days for the update to become available, depending on where you live). Once you have the update, open Google Maps, tap the layers button, and choose the “COVID-19 info” layer.

Google Maps Product Manager Sujoy Banerjee announced the new feature in a post on Google’s website. He said the goal is to help users “make decisions about where to go and what to do with the best information,” and get to their destinations “in the safest and most efficient way possible.”

The data included in the new COVID-19 layer comes from multiple sources, including Johns Hopkins University, the newspaper The New York Times, and Wikipedia, which in turn collect data from health entities such as the World Health Organization, health ministries, and hospitals around the world.

While this sounds good, there are a few issues to consider. Wikipedia’s COVID-19 data, for example, will depend on who happens to log on to Wikipedia and enter the information on the corresponding page. This means the accuracy of Google Maps will depend on what kind of COVID-19 data is being reported, and where.

At the same time, identifying cases depends on testing. It’s not enough to know the number of tests being done; you also need to know where and how often testing is being done, and on whom. Without this, it’s difficult to judge the accuracy of case reporting around the world.

There could also be reporting delays. Once you get tested, the results have to somehow make their way through to the proper public health authorities. Plus, there’s the granularity of the data. If cases are being reported at a county level, the data won’t be able to tell you much about which specific streets to avoid.

Tech companies have been trying to collaborate amid the pandemic. Apple displayed testing centers and shared mobility data through Apple Maps, while Facebook launched a COVID-19 information center. However, companies also struggle with misinformation: one study showed Google funneled over $19 million to websites spreading misinformation about the pandemic.

Should Google be able to purchase Fitbit?

From Australia to the European Union, data experts and privacy organizations are expressing their concerns over Google’s plan to acquire Fitbit, a company that produces and sells health-tracking technologies.

The move would cement Google’s digital dominance at the expense of consumers, critics say.

Image Credits: Flickr

In November, Google announced that it wanted to buy the wearables company Fitbit for $2.1 billion, mainly as a way to compete with Apple and Samsung in the market of fitness trackers and smartwatches. But the move almost immediately raised privacy and antitrust concerns.

Fitbit currently boasts over 28 million users around the world, which means that along with the company, Google would also acquire a large amount of health data that it would have at its disposal to use as it pleases. Google has promised the data won’t be used for advertising purposes, but regulators and civil society are skeptical and are asking for more transparency about what will happen to the data.

Two weeks ago, the European Commission was formally asked to decide on the merger. The antitrust regulators have until July 20 to decide whether to allow the deal to go ahead. If they don’t clear it by then, a four-month investigation will be opened to look further into the move.

But some have already made up their mind on the matter. Privacy International, a UK civil society organization, said in a statement that the merger should be blocked and called on the EU to do so.

Google has become too big and already has too much market power, they argued, claiming the company is now trying to access sensitive health data.

“While big isn’t always a bad thing, abusing this dominance violates the law. The value of personal data increases as more and more data is combined with it, and this incentivizes companies to pursue business strategies aimed at collecting as much data as possible,” the NGO argued.

The EU classifies health data as special category data, Privacy International noted. This means health data is granted a very high level of protection because of what it can reveal about our everyday habits and the potential consequences (such as discrimination) if it is misused. The merger would allow Google to access “our most intimate data” and profit from it, Privacy International says.

For Ioannis Kouvakas, Legal Officer at the NGO, the deal has far-reaching implications whether we are Fitbit users or not.

“Can we trust a company with a shady competition and data protection past with our most intimate data? We must not let big tech once again sacrifice our wellbeing,” he argued.

A shady story with data

Google actually doesn’t have a clean record on the use of data. Last year, the company was fined $57 million by the French data regulator for a breach of the EU’s data protection rules. The regulator said Google had a “lack of transparency, inadequate information, and lack of valid consent.”

The French regulator said it judged that people were “not sufficiently informed” about how Google collected data to personalize advertising. Google didn’t obtain clear consent to process data because “essential information” was “disseminated across several documents,” CNIL said.

This is being used as an argument to block the current merger with Fitbit, not only in Europe but also in Australia, where the antitrust regulator (ACCC) warned the merger would give Google too much of people’s data and hurt competition. It is the first regulator to officially voice its concerns about the deal.

“Buying Fitbit will allow Google to build an even more comprehensive set of user data, further cementing its position and raising barriers to entry to potential rivals,” ACCC Chairman Rod Sims said in a statement. “User data available to Google has made it so valuable to advertisers that it faces only limited competition.”

In the US, the Justice Department said in December that it will review Google’s plans, having already opened a larger investigation into the company in September. US watchdog groups like Public Citizen and the Center for Digital Democracy have urged antitrust enforcers to block the deal.

A Google spokesperson rejected criticism over the use of data.

“Throughout this process, we have been clear about our commitment not to use Fitbit health and wellness data for Google ads and our responsibility to provide people with choice and control,” the spokesperson told Euronews.

This is a pressing reminder that we need to carefully consider what role tech companies play in our society — and we need to do it fast. More often than not, technology changes much faster than society itself, and consumers risk being left behind. We all want to enjoy the benefits that technology can bring us, but as we’re already starting to see, the consequences can be dire if we’re not careful.

Apple and Google ban GPS tracking in contact-tracing apps

Apple and Alphabet’s Google announced that they would ban any coronavirus contact-tracing apps that use GPS location tracking, which will make things difficult for the several governments that want these apps to collect location data.

An example of how the Apple/Google notification might look.

No external data

Contact tracing apps are the next big hurdle in our fight against COVID-19 — not that we’ve already solved everything else, but in general terms, we know what needs to be done in terms of social distancing and increasing hospital capacity (though whether or not that will actually get done is a different question). Contact tracing apps are essentially a way of notifying people when they’ve been around someone who has COVID-19, and most experts consider this a crucial tool in returning society to a quasi-normal state amid the coronavirus pandemic. 

But there’s a catch. Aside from technological hurdles (which are not trivial), there are big privacy concerns. Most authorities want these types of apps to also track user location. It makes a lot of sense epidemiologically — it’s worth knowing where the infection hotspots are and who the potential superspreaders may be. But both Apple and Google have announced that that’s a big no-no.

The companies have stressed that privacy is a major concern, and any apps distributed on their platforms must prevent governments from using the system to compile data on citizens.

In other words, they won’t allow any centralized contact tracing apps — nothing that can send data to an external, central server. But that’s not the only thing the tech giants want.

The Apple and Google Rules

The two fierce competitors have teamed up to produce their own contact tracing app. While they will also allow other contact tracing apps on the market, these apps need to follow a set of rules, Apple and Google announced. These guidelines include:

  • only health authorities can create contact tracing apps;
  • all apps must get user consent before sending notifications;
  • a second consent will be required before sharing positive test results and diagnosis with health authorities;
  • all data collection must be minimized and used only for health purposes — it cannot be used for advertising or policing.

It’s like Bizarro world. Not so long ago, we were riled up over the Facebook–Cambridge Analytica scandal, and many were hoping that governments would introduce legislation to protect user data from big tech companies. Now the tables have turned, and in this case, it’s the companies that want to limit the surveillance data that governments have access to as much as possible.

Of course, it’s a completely different scenario, and there are justified reasons why health organizations and governments would want as much data as possible — but there’s little to guarantee how this data will be used.

Already, there have been some clashes between governments and tech giants, as several US officials have expressed displeasure at Apple and Google’s refusal to allow location tracking. Germany initially wanted to develop its own centralized app but caved in the face of mounting public pressure demanding stricter privacy settings, and the country will now build on the Apple–Google solution. In the UK, a separate national app is being developed, amid intense criticism over both its usefulness and legality.

While several national apps are in development or already released, the Apple–Google system will likely be the largest and most significant globally. The two companies’ operating systems cover 99% of the smartphone market, and the system could work internationally without any problems, whereas national apps can’t really communicate with one another when you’re abroad. It will likely make or break contact tracing apps.

It might also make or break our future smartphone privacy.

Google introduces digital Braille keyboard for Android

Last week, Google unveiled a built-in virtual Braille keyboard for its blind and visually impaired users. The feature — named the TalkBack keyboard — rolled out on Thursday to devices running Android 5.0 or later.

Image via Google.

The keyboard has six keys, each representing one of the dots used to create letters, numbers, and symbols in Braille script. Users can type the letter “A”, for example, by pressing the key for dot 1, and the letter “B” by pressing dots 1 and 2 at the same time. In Braille, “A” is represented by a single dot and “B” by two dots, one above the other. In a blog post last week, Google stated that anyone who has already used Braille will be familiar with the new keyboard.
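For readers who like to see the mapping spelled out, here is a tiny sketch of my own (not Google’s implementation) showing how simultaneously pressed dot keys could be decoded into letters:

```python
# First five letters of the Braille alphabet, keyed by which dots are raised.
BRAILLE_LETTERS = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def decode(pressed_keys):
    """pressed_keys: the set of dot numbers (1-6) pressed at the same time."""
    return BRAILLE_LETTERS.get(frozenset(pressed_keys), "?")

print(decode({1}))     # a
print(decode({1, 2}))  # b
```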

Dot Comms

“As part of our mission to make the world’s information universally accessible,” the post said, “we hope this keyboard can broadly expand Braille literacy and exposure among blind and low-vision people.”

To activate the keyboard, users have to go to the Accessibility section in Android Settings. Google says Braille grades 1 and 2 are supported, meaning the keyboard handles both uncontracted Braille (letters, numbers, punctuation, and other basic symbols) and contracted Braille, with its contractions and abbreviations.

Feedback is provided to users as they type; this can be a spoken letter or word, other audio cues, or vibrations. Gestures have been implemented to delete letters or whole words, start a new line, and send the text.

Image via Google.

For now, the program is available only in English, but Google hopes to expand it soon. Whichever language it operates in, though, the keyboard is definitely a step in the right direction. Our societies revolve heavily around sight, something that isn’t readily obvious to those who aren’t visually impaired. Blind and low-vision users, for example, can’t make heads or tails of the touchscreen keyboards on smartphones and have long had to rely on physical Braille keyboards connected to their devices in order to type. According to the American Foundation for the Blind, such equipment costs between $3,500 and $15,000.

Google’s first attempt to assist visually impaired users came in 2018 with Voice Access, which allows users to control their device using only voice commands. When initiated, numbers appear on-screen next to any actionable options, such as clicking, saving, deleting, and sending. Google later added a feature to its web browser that recognizes images and uses artificial intelligence to describe what appears in them.

This was meant to help visually impaired people better understand and navigate the information on their device; previously, such users would hear only that “an image” or “unlabeled graphic” was present. Live Caption, a feature introduced last fall that generates real-time captions for videos, podcasts, and audio messages, is another tool Google highlights as helping these users enjoy their devices to the fullest.


Google, Intel, Qualcomm, and others stop supplying Huawei after Gov’t ban

Google announced that it is beginning to cut ties with China’s Huawei, in line with the US government’s instructions, Bloomberg reports. Google and other US suppliers will stop providing Huawei with the software, services, and components it needs to manufacture smartphones and other electronics.


Image via Pixabay.

Washington considers Huawei Technologies Co., a Chinese telecommunications equipment and consumer electronics manufacturer, a threat to national security. As such, the Trump Administration moved on Wednesday to bar Chinese tech companies from selling their products in the US and to blacklist Huawei, in particular, from buying US components.

Whether this burgeoning trade war is necessary, or whether it will even work, remains to be seen — but in the meantime, Google announced that it is complying with the government’s decision and beginning to cut ties with the Chinese company. Although Huawei is believed to have stockpiled some parts and components, this development could severely hamstring it in the long run. Moreover, it could have meaningful effects for users themselves, as Huawei will no longer have access to Google’s proprietary services — such as the Gmail and Google Maps apps — reports AFP, citing a ‘source close to the matter’. Other companies are also moving to comply with the ban.

Smartfights

This all stems from the growing rivalry between the US and China over the past few years. Given CEO Ren Zhengfei’s army background and Huawei’s opaque corporate culture, suspicions have been mounting that the firm has links with the Chinese military and intelligence services. On Friday, this culminated in the Trump Administration blacklisting Huawei on suspicion of engaging in espionage for Beijing.

The trade ban imposed by the administration extends to US software and semiconductor materials that are essential to Huawei. Although not unexpected, the ban dealt a heavy blow to the company, which is the world’s largest provider of networking gear and its second-largest smartphone vendor. Huawei has been listed by the US Commerce Department among firms that American companies can only trade with if authorities grant permission.

Google, which owns Android, the most widely used mobile operating system (OS) out there, is already taking steps to comply with the ban. Like all tech companies, Google collaborates directly with smartphone manufacturers to ensure its systems are compatible with their devices — and amid concerns of espionage, that has to stop.

While this will definitely be felt by Huawei, other companies in the US — such as Intel, Qualcomm, Xilinx, and Broadcom — might follow suit. All of them cutting trade with Huawei is undoubtedly a scary prospect for the Chinese company, as it directly relies on these suppliers to function. “Intel is the main supplier of server chips to Huawei, Qualcomm provides it with processors and modems for many of its smartphones, Xilinx sells programmable chips used in networking, and Broadcom is a supplier of switching chips, another key component in some types of networking machinery”, according to Bloomberg.

“We are complying with the order and reviewing the implications,” a Google spokesperson told AFP.

On its official @Android Twitter account, the company further stated that “while we are complying with all US gov’t requirements, services like Google Play & security from Google Play Protect will keep functioning on your existing Huawei device.”

So, what does this mean for consumers? In the long run, probably nothing good, but we’ll see how the situation develops. In the short term, it does mean that Google software and technical services that are not publicly available might stop working on Huawei devices. The Chinese company will only have access to the open-source version of Android. Furthermore, it will need to pull any updates or software patches from the Android Open Source Project itself and distribute them to users on its own. A company statement held that Huawei will “continue to provide security updates and after-sales services” to all existing smartphones and tablets globally, including those not yet sold.

“At the same time, the Chinese side supports Chinese enterprises in taking up legal weapons and defending their legitimate rights,” said Lu Kang, a spokesman for the Chinese foreign ministry, adding that the organization is actively following developments on the ban.

This isn’t a one-sided battle, however. Huawei does have some influence in the device market that it can throw around. The company is working on establishing itself as a leader in 5G technology, currently offering the most advanced and cheapest 5G capability in the world. It also outsold Apple in smartphones in the first quarter of this year, seizing second place globally (after Samsung).

The ban could stop Huawei’s ascent, with Ryan Koontz, a Rosenblatt Securities analyst, saying that it could “cause China to delay its 5G network build until the ban is lifted, having an impact on many global component suppliers,” as the company “is heavily dependent on US semiconductor products and would be seriously crippled without supply of key US components.” The US has also “pressured both allies and foes to avoid using Huawei for 5G networks that will form the backbone of the modern economy,” Bloomberg adds.

So on a macro, geopolitical level, things are definitely heating up. On the micro, consumer level, however, things aren’t that bad right now. Some of you may have to rethink your device purchases, and those who own Huawei devices right now might find it impossible to use certain apps. The development and rollout of 5G technology as a whole, however, will undoubtedly come out of the trenches of this trade war bruised and battered.


Google AI dabbles in writing Wikipedia articles

Researchers from Google Brain — the company’s inventive machine-learning lab — have developed a new software that can generate Wikipedia-style articles by summarizing info from the web.


Credit: Pixabay.

The software written by the Google engineers first scrapes the top ten web pages for a given subject, excluding the Wikipedia entry — think of it as a summary of the information found in the top 10 results of a Google search. Most of these pages are used to train the machine-learning algorithm, while a few are kept to test and validate the output of the software.

Paragraphs from each page are collected and ranked to create one long document, which is then trimmed down to roughly 32,000 words. This long text is used as input for an abstractive model that condenses and rewrites the extracted sentences — a trick to create a summary of the text.
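To give a flavor of that extractive step, here is a minimal sketch (my own illustration, not Google Brain’s code) that ranks scraped paragraphs by TF-IDF similarity to the article title and keeps only the most relevant ones as input for the abstractive model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_paragraphs(title, paragraphs, keep=5):
    """Keep the paragraphs most similar to the topic, as a crude extractive stage."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([title] + paragraphs)
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    ranked = sorted(zip(scores, paragraphs), reverse=True)
    return [paragraph for _, paragraph in ranked[:keep]]

paragraphs = [
    "Wings over Kansas is an aviation website founded in 1998.",
    "The weather in Kansas is continental, with hot summers.",
    "The site features aviation news, photos and interviews.",
]
print(rank_paragraphs("Wings over Kansas", paragraphs, keep=2))
```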

Because the sentences are shortened from the earlier extraction phase, rather than written from scratch, the end result can sound rather repetitive and dull. For instance, here’s what the AI’s Wikipedia-style blurb looks like compared to the text currently online, edited by humans.

Left: Automated Wikipedia entry for Wings over Kansas. Right: The Wiki entry edited by humans. Image credit: Liu et al.

Mohammad Saleh and colleagues at Google Brain hope that they can improve their bot by designing models and hardware that support longer input sequences. Their study will be presented at the upcoming International Conference on Learning Representations (ICLR).

As things stand now, it would be unwise to have Wiki entries written by this AI, but progress is encouraging. Perhaps, one day, a hybrid of AI content generation and human supervision might populate Wikipedia at an unprecedented rate.

Currently, the English Wikipedia alone has over 5,573,495 articles, and the combined Wikipedias for all other languages greatly exceed it in size, amounting to more than 27 billion words across 40 million articles in 293 languages. That’s a lot, but an AI could come up with even more, especially for the millions of Wiki pages that are unpopulated “stubs”.

And if an AI will one day be good enough to populate Wikipedia, perhaps it will be good enough to “write” all sorts of other content. You wouldn’t have to pay someone to write a paper, or yours truly for the news. News-writing AIs are actually quite advanced nowadays: Reuters’ algorithmic prediction tool helps journalists gauge the integrity of a tweet, the BuzzBot collects information from on-the-ground sources at news events, and the Washington Post uses its in-house Heliograf, a bot that writes short news stories.

 

 

Google AI can now look at your retina and predict the risk of heart disease

Google researchers are extremely intuitive: just by looking into people’s eyes they can see their problems — cardiovascular problems, to be precise. The scientists trained artificial intelligence (AI) to predict cardiovascular hazards, such as strokes, based on the analysis of retina shots.

The way the human eye sees the retina vs the way the AI sees it. The green traces are the pixels used to predict the risk factors. Photo Credit: UK Biobank/Google

After analyzing data from over a quarter million patients, the neural network can predict the patient’s age (within a 4-year range), gender, smoking status, blood pressure, body mass index, and risk of cardiovascular disease.

“Cardiovascular disease is the leading cause of death globally. There’s a strong body of research that helps us understand what puts people at risk: Daily behaviors including exercise and diet in combination with genetic factors, age, ethnicity, and biological sex all contribute. However, we don’t precisely know in a particular individual how these factors add up, so in some patients, we may perform sophisticated tests … to help better stratify an individual’s risk for having a cardiovascular event such as a heart attack or stroke”, declared study co-author Dr. Michael McConnell, a medical researcher at Verily.

Even though the number of patients the AI was trained on might sound large, such networks typically work with much larger sample sizes; the more data a neural network analyzes, the more accurate its predictions become. For now, the study shows that the AI’s predictions cannot yet outperform specialized medical diagnostic methods, such as blood tests.

“The caveat to this is that it’s early, (and) we trained this on a small data set,” says Google’s Lily Peng, a doctor and lead researcher on the project. “We think that the accuracy of this prediction will go up a little bit more as we kind of get more comprehensive data. Discovering that we could do this is a good first step. But we need to validate.”

The deep learning applied to photos of the retina and medical data works like this: the network is presented with the patient’s retinal shot along with some medical data, such as age and blood pressure. After seeing hundreds of thousands of these image-and-data pairs, the machine starts to see patterns that correlate with the medical data. So, for example, if most patients with high blood pressure have more enlarged retinal vessels, that pattern is learned and then applied when the network is shown just the retinal shot of a new patient. The algorithms correctly identified patients at high cardiovascular risk within a five-year window 70 percent of the time.
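As a rough illustration of this kind of setup (a minimal sketch under my own assumptions, not Google’s actual model or data), a small convolutional network can be trained to regress risk factors such as age, blood pressure, and BMI directly from retinal images:

```python
import torch
import torch.nn as nn

class RetinaRiskNet(nn.Module):
    """Toy CNN that maps a retinal image to three continuous risk factors."""
    def __init__(self, n_outputs=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_outputs)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = RetinaRiskNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random tensors standing in for (retinal image, [age, blood pressure, BMI]) pairs.
images = torch.randn(8, 3, 256, 256)
labels = torch.randn(8, 3)

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```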

“In summary, we have provided evidence that deep learning may uncover additional signals in retinal images that will allow for better cardiovascular risk stratification. In particular, they could enable cardiovascular assessment at the population level by leveraging the existing infrastructure used to screen for diabetic eye disease. Our work also suggests avenues of future research into the source of these associations, and whether they can be used to better understand and prevent cardiovascular disease,” conclude the authors of the study.

The paper, published in the journal Nature Biomedical Engineering, is truly remarkable. In the future, doctors will be able to screen for the number one killer worldwide much more easily, and they will be doing it without causing us any physical discomfort. Imagine that!


Google-backed civilian race to the moon ends with no winner for $20 million prize

After many delays and pushed deadlines, the Google Lunar X Prize is officially over, with no winner to claim the $20 million bounty.


Artist’s concept of the MX-1 robotic explorer Moon Express intended to land on the lunar surface. Credit: Moon Express.

Google launched the Lunar X Prize in 2007, offering $20 million to the first privately-funded team to put a working rover on the moon’s surface. The first deadline was set for 2012, but it was pushed back four times, with the very last deadline set for March 31, 2018.

To win, a privately funded team had to build, launch, and deploy a rover on the moon, drive it across the lunar surface for at least 500 meters (1,640 feet), and beam footage back to Earth. The first team to do so would receive $20 million, another $5 million was reserved for the second team, and a further $5 million was set aside for accomplishing various milestones in the contest.

Out of an initial batch of 30, only five finalists remained in the game: SpaceIL (Israel), Moon Express (United States), TeamIndus (India), HAKUTO (Japan), and Synergy Moon (International).

But wouldn’t you know it, launching payloads to the moon — and ensuring they function once they get there — is extremely challenging. Engineering difficulties aside, contenders had to worry about regulations and fundraising, and at the end of the day, it was just too much for the competitors. They simply didn’t have enough resources to make the deadline. After all, landing on the moon has so far been the reserve of well-funded government space programs — and for good reason.

“After close consultation with our five finalist Google Lunar XPRIZE teams over the past several months, we have concluded that no team will make a launch attempt to reach the moon by the March 31st, 2018 deadline,” said Peter Diamandis and Marcus Shingles, the XPRIZE Foundation executives.

 

“This literal ‘moonshot’ is hard, and while we did expect a winner by now, due to the difficulties of fundraising, technical and regulatory challenges, the grand prize of the $30 million Google Lunar XPRIZE will go unclaimed,” they remarked.

Maybe with another deadline extension, one of the teams would have made it. There was certainly progress: all of the finalists had signed launch contracts to send their rovers to the moon, and Moon Express, for instance, became the first private company to receive approval from the US government to land a payload on the moon’s surface.

That being said, even without Google’s money, some of these teams may choose to go ahead anyway. It will certainly be a lot harder, of course. So far, there’s been no word from any of the teams, so we don’t know if it will still happen.

You can now use Google Maps to explore other moons and planets

It’s now possible to explore Venus, Mercury, Pluto, and several icy moons from the comfort of your own home.

Credits: Google / NASA.

Working with NASA, Google engineers have rolled out a new feature (see here) where you can navigate between various celestial bodies in our solar system, rotating and zooming as you wish. The project drew inspiration from the Cassini spacecraft, which sent us hundreds of thousands of pictures, offering us an unprecedented view of Jupiter, Saturn, and their moons. Google explained:

“Twenty years ago, the spacecraft Cassini launched from Cape Canaveral on a journey to uncover the secrets of Saturn and its many moons. During its mission, Cassini recorded and sent nearly half a million pictures back to Earth, allowing scientists to reconstruct these distant worlds in unprecedented detail. Now you can visit these places—along with many other planets and moons—in Google Maps right from your computer.”

It can be a bit tricky to navigate since Google hasn’t implemented a search feature, but you can just scroll around and explore the areas on your own. The company notes that it worked with astronomical artist Björn Jónsson to bring the images to life.

Image credits: Google / NASA.

Previously, you could use Google Maps to navigate the Earth, the Moon, Mars, Mercury, as well as the International Space Station. Now, you can also check out Ceres, Io, Europa, Ganymede, and Mimas. These are not simply small frozen worlds; they are active places rich in features, and some of the likeliest places to host extraterrestrial life (not Io though, that place is crazy).

“Explore the icy plains of Enceladus, where Cassini discovered water beneath the moon’s crust—suggesting signs of life. Peer beneath the thick clouds of Titan to see methane lakes. Inspect the massive crater of Mimas—while it might seem like a sci-fi look-a-like, it is a moon, not a space station”, the Google press release reads.

However, the maps aren’t perfect; a few problems have already been reported with the labeling. Planetary scientist Emily Lakdawalla has already contacted Google in order to fix the problems.

Still, minor bugs aside, it’s an excellent resource to use both educationally and for fun. Just think about it: the first plane flew about a century ago, and now we have high-resolution maps of the planets and moons in our solar system, available for everyone to access. If that’s not a huge technological leap, I don’t know what is.


Follow the last 30 years of humanity shaping the planet through the eyes of Google’s Timelapse

The latest update to Google’s Timelapse shows you just how fast parts of our planet are changing — and how much of that change is brought on by humans.


Timelapse of Miami, Florida. You can see some of the islands disappearing in the lower right.

We have outgrown our feral origins to become a truly world-shaping force. The sheer magnitude of our mark on the planet can be a daunting thing to convey in writing, simply because you’re trying to cram melting glaciers, string theory, and sprawling cities into 140 characters or less.

Thankfully, we have technology much more powerful than ink to play around with. The latest update to Google’s Timelapse (first introduced in 2013) helps put everything into perspective. Drawing on satellite data recorded as far back as 1984, most of it collected by NASA’s Landsat program, the tech giant pieced together a year-by-year view of the entire planet. It took a huge amount of work — some 5 million individual images had to be collected and made to fit — but users can now select any place on Earth and watch how it changed over the last three decades. And boy oh boy did it change.


The drying of the Aral Sea is regarded as one of the worst environmental disasters in modern history. Originally one of the largest inland bodies of water in the world, with an area of 68,000 sq km (26,300 sq miles), by 2014 it had largely dried up. Its former eastern basin is now known as the Aralkum Desert.

The update added petabytes of data to bring Timelapse up to date and to make the images crisper by mixing in data from ESA’s Sentinel-1 and Sentinel-2 satellites, NASA’s Landsat 8, and a host of other programs.

It comes with a few pre-selected places that Google felt experienced the most eye-catching transformations. These showcase the incredibly organic-looking development of cities such as Miami, Florida, Las Vegas, Nevada, or the sprouting of Dubai’s palm islands.

But Timelapse also showcases the more worrying changes out there: the drying of the Aral sea, the receding Columbia Glacier in Alaska, massive environmental displacement in Southeast Asia as juggernaut cities grow even larger, rampant deforestation in the Amazon, and more.

Columbia Glacier, in Alaska, becoming Columbia Water as average temperatures increase.

It’s an awesome testament to how far we’ve come as a species — but also a terrible sight of how much damage we can unintentionally wreak upon the world around us. Either way, it’s truly a sight to see, and it’s now in higher definition than ever before. So go play around with the Timelapse and see what strange, impressive, or terrifying sights you can stumble upon.

You’d be hard pressed to find an area that hasn’t changed over the last three decades. Makes you wonder how much, and in what way, it will change over the next 30 years.


Popular voice assistants like Siri or Alexa easily hacked with ultrasonic commands

The world’s biggest tech companies have devoted huge resources to voice assistants such as Siri and Alexa. Yet despite a user base numbering in the millions, these apps have serious flaws, as researchers at Zhejiang University, China, recently showed. They found a gaping vulnerability that can be easily exploited by hackers, who only need to send ultrasonic commands to the voice assistant to gain access to personal information.


Credit: Pixabay.

This is a very sneaky exploit, since a hacker can take command of your handheld device while standing right next to you. You’ll never notice, because the voice commands are ‘whispered’ in ultrasound, at frequencies above the human audible range (20 Hz to 20 kHz).

Although we can’t hear this mosquito-like squeal, the device’s voice-command software is perfectly capable of picking up the ultrasonic frequencies, which it decodes as instructions for the device.

The Zhejiang researchers showed that this exploit, aptly called DolphinAttack, can be used to send commands to popular devices from Apple, Google, Amazon, Microsoft, Samsung, and Huawei. They transmitted the attack using a common smartphone with about $3 worth of additional hardware — a small speaker and an amplifier.
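The general trick, as described in the DolphinAttack paper, is to amplitude-modulate an ordinary voice command onto an ultrasonic carrier; the microphone’s own non-linearity then demodulates it back into the audible band. Below is a purely illustrative sketch of that modulation step (my own simplification, with a synthetic tone standing in for a recorded command):

```python
import numpy as np

SAMPLE_RATE = 192_000   # a high sample rate is needed to represent ultrasound
CARRIER_HZ = 25_000     # above the ~20 kHz limit of human hearing

def modulate(voice, sample_rate=SAMPLE_RATE, carrier_hz=CARRIER_HZ):
    """voice: mono float array in [-1, 1]; returns the AM-modulated ultrasonic signal."""
    t = np.arange(len(voice)) / sample_rate
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    # Classic amplitude modulation: shift the audible command up around the carrier.
    return 0.5 * (1.0 + voice) * carrier

# Stand-in for a recorded "OK Google" clip: a one-second 440 Hz tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
fake_command = 0.8 * np.sin(2 * np.pi * 440 * t)
ultrasonic_signal = modulate(fake_command)
print(ultrasonic_signal.shape)  # (192000,)
```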

Mark Wilson, writing for Fast Company, described what happened next:

The researchers didn’t just activate basic commands like “Hey Siri” or “Okay Google,” though. They could also tell an iPhone to “call 1234567890” or tell an iPad to FaceTime the number. They could force a Macbook or a Nexus 7 to open a malicious website. They could order an Amazon Echo to “open the backdoor.” Even an Audi Q3 could have its navigation system redirected to a new location.

“Inaudible voice commands question the common design assumption that adversaries may at most try to manipulate a [voice assistant] vocally and can be detected by an alert user,” the research team writes in a paper just accepted to the ACM Conference on Computer and Communications Security.

It has to be said that, for some devices, the transmitter had to be within a couple of inches for the exploit to work, though others, like the Apple Watch, were vulnerable from several feet away. Even so, a hacker would simply need to stand next to a vulnerable device in a crowd or on public transit to get it to open malware.

At this point, some readers might be wondering why manufacturers don’t simply restrict the microphones to the audible range. The problem is that doing so would sacrifice performance and user experience, because the filtering algorithms rely on harmonic content outside the human range of hearing. Moreover, manufacturers use many different microphones, most of which are designed simply to transduce pressure waves into electricity, so it’s mechanically impossible to block ultrasound at the hardware level.

It’s now up to Google, Amazon, Apple, and the like to decide how they’ll address this vulnerability.

Meanwhile, the best thing you can do to keep your device safe is to turn off ‘always-on’ listening, which is typically turned on by default. Otherwise, a hacker might just be able to send commands via DolphinAttack even when the device is locked.


Google is shifting their focus from Search to artificial intelligence, CEO says

While delivering Google’s first-quarter earnings report on Thursday, the company’s CEO said that Google is transitioning — the search-engine giant will become an A.I.-first company.


One of Google’s founding principles.
Image credits Tangi Bertin / Flickr.

A measure of the sheer success Google has achieved is that it’s no longer just the company that does your searching for you — its name has become the de facto verb for it. While that isn’t likely to change anytime soon, the company is switching its focus away from search engines to put A.I. development at the forefront.

“We continue to set the pace in machine learning and A.I. research,” Google CEO Sundar Pichai said in a call [embedded at the end of the article] to investors on Thursday to report the company’s Q1 2017 earnings.

“We’re transitioning to an A.I.-first company.”

So what does this mean for Google, and what does it mean for you? Well, in short, Google wants to become the first totally personalized corporation, tailoring its services to each individual. We’ll probably see machine learning embedded at the core of most of Google’s systems and platforms — such as the Assistant service being merged into the Android and Chrome OSs. The goal will be to use machine learning to tailor devices to each user, timing apps and notifications around their usual schedule and location, personal interests, or other characteristics such as typing habits.

Last year, Pichai said that the ultimate goal of Assistant is that if you ask for a pizza, it will bring you that pizza without any further input or oversight required. The company also has to figure out how to keep increasing the number and sophistication of its machine learning algorithms without increasing energy or mobile data use — which would cripple current devices.

Pichai said Google would increasingly rely on its recently unveiled “federated learning” technology. It should allow A.I. to run more efficiently on the limited resources a mobile device can marshal, allowing for a wider range of applications while saving on battery and bandwidth.
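The core idea of federated learning is that training happens on each device, and only model updates, never the raw user data, are sent back and averaged by a central server. Here is a toy sketch of that federated-averaging loop (my own illustration with simulated clients, far removed from Google’s production system):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=20):
    """One client's on-device training: plain gradient descent for linear regression."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):                       # five simulated phones, each with private data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):                      # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0) # the server only ever sees model weights
print(global_w)                          # ≈ [2.0, -1.0]
```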

What about Search

Although Google is looking to focus on A.I., search currently remains its single most profitable platform. And even as it shifts focus, Google thinks A.I. will only make its results more relevant and its ads more profitable. It would let the company find patterns in searches and favorite results more easily, and eventually even predict them. It would let Google not only give you what you ask for but predict what you’ll ask for, and even change what you want — whether that’s a good thing or not is a question we all have to answer for ourselves.

Still, this explains why Google is moving away from search as its main focus — there’s a lot of profit to be made with machine learning, especially through analysis of search behavior. Everything done online with even a minute involvement of a Google platform will become quantifiable, and monetizable. Until now, the company has relied on a sort of passive data acquisition, with you coming to it via search and feeding it the data yourself. A.I. will take up the busywork of actively gathering that data.

And the sort of things they can do with this data is amazing. Think about driving home and being prompted by Google Maps with the parking spaces open near your house, or being pointed to shops that carry your absolute favorite kinds of food in whatever town you happen to be in, anywhere on the globe.

But with so much data in its hands, Google’s founding principle of “Don’t be evil” becomes a lot harder to maintain — and a lot scarier if broken.


Google tool that calculates the solar energy potential of your rooftop expands to all 50 states

What a solar energy potential map from Google looks like. Credit: Google.

Until not too long ago, solar was an alternative energy source reserved for the hip and wealthy. In the last five years, however, the price of installing solar, both at utility scale and residentially, has gone down so much that in some places it doesn’t make sense to use anything else. But even in sunny states like Nevada or Texas, there are many homeowners who are skeptical that installing rooftop solar is cost-effective.

Launched in 2015, Google’s Project Sunroof came as a solution. Just as easy to use as Google Maps, it only requires users to locate their home; the app then calculates not only the solar potential but also the savings involved, so you can make an informed, market-based decision. Now, the service has expanded to all 50 states, and chances are you can calculate your home’s solar potential with ease.

How it works

Solar potential of Googleplex in Mountain View, CA. Credit: Google.

The tool combines Google Maps and Google Earth, both extremely powerful services, with machine learning techniques to come up with the most accurate answer possible — all at a massive scale. We’re talking about 60 million buildings currently in this ‘solar index’, with many more to be included in the future.

For years, NASA has offered a publicly available tool that anyone can use to assess the solar flux hitting a particular area. Such information has proven invaluable for utility-scale projects, but Google is doing it on a whole different level, because Project Sunroof is smart enough not only to identify your rooftop’s surface area but also to estimate how much it gets pounded by incoming photons.

The tool takes into account each portion of the roof, local weather patterns, the position of the sun in the sky at different times of year, and shade from nearby buildings or even trees. All of this information is translated into an energy production estimate based on industry-standard models.
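For a rough sense of what such an estimate involves, here is a back-of-the-envelope sketch (my own simplification with assumed values, not Project Sunroof’s actual model) that turns usable roof area, local insolation, and shading into an annual production figure:

```python
def annual_kwh(usable_roof_m2, annual_irradiance_kwh_m2, shade_factor,
               panel_efficiency=0.18, system_losses=0.14):
    """annual_irradiance_kwh_m2: yearly insolation on the roof plane (roughly
    1,400-2,000 kWh/m2 in much of the US); shade_factor: fraction not shaded."""
    return (usable_roof_m2 * annual_irradiance_kwh_m2 * shade_factor
            * panel_efficiency * (1 - system_losses))

# Example: 40 m2 of mostly unshaded roof in a fairly sunny location.
print(round(annual_kwh(40, 1600, 0.9)))  # ~8,900 kWh per year
```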

Essentially, this is no longer a guessing game.

Sample result of a query using the solar energy savings tool. Credit: Google.

Some highlights from Google’s recent update to Project Sunroof:

 

  • Seventy-nine percent of all rooftops analyzed are technically viable for solar, meaning those rooftops have enough unshaded area for solar panels.
  • Over 90 percent of homes in Hawaii, Arizona, Nevada and New Mexico are technically viable, while states like Pennsylvania, Maine and Minnesota reach just above 60 percent viability.
  • Houston, TX has the most solar potential of any U.S. city in the Project Sunroof data, with an estimated 18,940 gigawatt-hours (GWh) of rooftop solar generation potential per year. Los Angeles, Phoenix, San Antonio, and New York follow Houston for the top 5 solar potential cities — see the full top 10 list in the chart below.

 

 

Credit: Google.

According to the EIA, the average American home consumes 10,812 kilowatt-hours (kWh) a year. In Houston alone, there’s enough rooftop solar potential to power 1,704,600 average American homes. If all of the top 10 cities ranked above reached their full rooftop solar potential, they’d generate enough energy to power roughly 8 million homes.

The Project Sunroof data explorer tool allows anyone to explore rooftop solar potential across US zip codes, cities, counties, and states. If you want to calculate your personal financial savings from going solar, use the Project Sunroof savings estimator tool instead.

 

Book review: ‘The Power of Networks: Six Principles that connect our Lives’

The Power of Networks: Six Principles that Connect Our Lives
By Christopher G. Brinton & Mung Chiang
Princeton University Press, 328 pp | Buy on Amazon

Ever wondered how Netflix seems to know you better than you do when it recommends new series? Well, it does so thanks to a framework that’s common to many other situations — like how Google sorts search results or how Wi-Fi allocates bandwidth. In their book, authors Christopher G. Brinton and Mung Chiang explain how networks work and how they affect our lives, based on six core principles.

Networks have always existed, but today they matter more than ever thanks to the devices that connect us to the largest network in the world — the internet. Building on a massive open online course the pair presented a few years back, The Power of Networks aims to demystify the complex structure of rules, standards, and processes that networks use today.

The book is divided into six chapters, each with its corresponding theme or ‘principle’: sharing resources, ranking and ordering, the collective wisdom and folly of crowds, routing, and management. Along the way, the authors also include interviews they conducted with renowned experts such as former Google CEO Eric Schmidt, former Verizon Wireless CEO Dennis Strigl, and Vint Cerf and Bob Kahn, the fathers of the internet itself.

Using clear language and familiar analogies, the authors take turns explaining some very big ideas. One analogy that pops up on more than one occasion is that of the crowded cocktail party. If everyone talked simultaneously, it would be very difficult for anyone to engage in a meaningful conversation. A host might solve this capacity issue by asking guests to speak at separate times (analogous to how TDMA, used in 2G, let mobile phone users share the spectrum). Alternatively, the host might ask every guest to speak in a different language, letting everyone talk simultaneously, since each pair listens for one language in particular (analogous to the CDMA system). Things get a lot more exciting when the authors explain 3G and 4G networks.
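To see the ‘different languages’ idea in miniature, here is a small sketch of my own (textbook CDMA, not an example taken from the book) in which two users transmit at the same time using orthogonal spreading codes, and a receiver recovers its own bits by correlating with its own code:

```python
import numpy as np

code_a = np.array([+1, +1, +1, +1])   # orthogonal Walsh spreading codes
code_b = np.array([+1, -1, +1, -1])

bits_a = np.array([+1, -1, +1])       # user A's data symbols
bits_b = np.array([-1, -1, +1])       # user B's data symbols

# Both users transmit at once; the shared channel simply adds their signals.
signal = np.concatenate([a * code_a + b * code_b for a, b in zip(bits_a, bits_b)])

# Receiver A correlates each chunk with its own code ("listens for its language").
recovered_a = [int(np.sign(signal[i:i + 4] @ code_a)) for i in range(0, len(signal), 4)]
print(recovered_a)  # [1, -1, 1]: user A's bits, unaffected by user B
```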

If your job demands it, or if you’re simply interested in learning how networks function under the hood, this is a great introduction. That’s not to say that the subjects tackled are superficial: you’ll get a great overview as a non-specialist, but each chapter also dives deep into its subject — again, in a manner that simplifies highly complex topics.

It’s my impression that you’ll get a much better understanding of the ubiquitous networks that bind our digital lives together after reading this book.


“Zoom in. Now… enhance.” Well, Google just turned this old TV trick into reality

An 8×8 pixels source (left) vs. an upscaled version on the right. Credit: Google.

Google, your favorite search engine and soon-to-be overlord of all human knowledge, just demonstrated one of the most impressive feats of software engineering. Exploiting the power of neural networks and an unrivaled database of photos, the Mountain View corporation showed that it’s possible to render a detailed, higher-resolution image from a tiny, pixelated source, some only 8×8 pixels.

“Let’s enhance!”

It’s a classic moment in TV. Two special agents are gathered in front of a computer screen handled by a brainy technician. Finally, the agents get a glimpse of the suspect on a subway security cam. They instruct the technician to zoom in on a patch, and a pixelated image featuring five people comes into focus. ‘Enhance!’ says Special Agent Smith. And they zoom and enhance again and again until they get a clear mugshot. ‘That’s our man.’ And it only took 30 seconds. Kudos to law enforcement!

Problem is, in real life you can’t do this, as it would imply extracting more information from a limited source. Of course, there are plenty of image-enhancing algorithms that do a pretty good job cleaning up blurry or grainy images. This sort of approach fills in the blanks, so to speak, but if your zoomed-in source is only a couple of pixels wide, there’s nothing you can do.

Google used a nifty trick, though, as you can see in the image below. The first column is made of 8×8 sources, the middle column shows the images Google Brain was able to create from those pixelated sources, and the third column shows the ‘ground truth’, that is, what the 8×8 sources actually look like at full resolution.

Credit: Google.

Google Brain handles this task with two neural networks. The conditioning network first compares the 64-pixel source with other higher-resolution images that have been downsized to the same 8×8 grid. The prior network then upscales the 8×8 source and compares it against many real high-resolution images, of celebrities and bedrooms in this case study at least, adding new pixels one by one in a way that matches what the machine already knows. In an 8×8 block, for instance, a brown pixel on the far right and far left would correspond to eyebrows, so when the image is scaled up to 32×32, the blanks are filled with pixels that depict an eyebrow. The final image combines the output of both the conditioning and the prior networks.
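
To make the two-network idea easier to picture, here is a heavily simplified, purely illustrative Python sketch. The ‘networks’ below are random linear stand-ins rather than the deep models Google actually uses, but the control flow mirrors the description above: logits for each output pixel from the conditioning side, logits from the already-generated pixels on the prior side, combined and sampled one pixel at a time.

```python
import numpy as np

# Illustrative sketch only, NOT Google's code: random linear "networks" stand in
# for the real conditioning and prior models, just to show the control flow.
rng = np.random.default_rng(0)
SRC, OUT, LEVELS = 8, 32, 16                  # 8x8 input, 32x32 output, 16 gray levels

# Stand-in conditioning net: maps the 8x8 source to logits for each output pixel.
W_cond = rng.normal(size=(OUT * OUT, SRC * SRC, LEVELS)) * 0.1
# Stand-in prior net: scores the next pixel given the pixels generated so far.
W_prior = rng.normal(size=(OUT * OUT, LEVELS)) * 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def upscale(low_res):
    """Build a 32x32 image pixel by pixel from an 8x8 source."""
    flat_src = low_res.flatten().astype(float)
    out = np.zeros(OUT * OUT)
    for i in range(OUT * OUT):
        cond_logits = flat_src @ W_cond[i]    # what the 8x8 source suggests for pixel i
        prior_logits = out @ W_prior          # what the pixels generated so far suggest
        probs = softmax(cond_logits + prior_logits)
        out[i] = rng.choice(LEVELS, p=probs)  # sample the next pixel value
    return out.reshape(OUT, OUT)

low_res = rng.integers(0, LEVELS, size=(SRC, SRC))
print(upscale(low_res).shape)                 # (32, 32)
```

In the real system both models are trained on large photo collections of faces and bedrooms, which is what makes their guesses look plausible.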

Credit: Google.

The resulting constructed (fake) images managed to fool quite a few people into believing they were real. Upscaled images of celebrities fooled 10 percent of human participants into rating them as genuine, where 50 percent would imply a perfect score, i.e. people guessing at random. The bedroom images fooled 28 percent of participants. Bicubic scaling, a method that interpolates data points on a two-dimensional regular grid, didn't manage to fool anyone.

That being said, the upscaled images made by Google Brain are fake, and this is important to keep in mind lest we fall for them just like on TV. They are educated guesses at best because, again, you can't create more information from limited information. Some of those guesses are spot on, though, as evident in these press photos. Another consideration is that the networks knew they had to find a photo of a celebrity or a bedroom, which made the job a lot easier.

Police investigations and forensic scientists could make use of such software; however, its output would never stand up in court. Rather, the “zoom in, enhance” capability could offer a lead where there's none to begin with. For now, at least, Google has no plans to turn this sandbox research project into a product.

via Ars Technica

Here’s how to stay safe from the latest phishing scam plaguing Gmail

A phishing scam that’s so convincing it even fooled experienced technical users is going around on Gmail, trying to get a hold of your login details.

Image credits Gerd Altmann / Pixabay.

It looks like a genuine email one of your friends sent you. There’s even an attachment — something important that he or she is sending you. When you try to download it, you’re taken to a page requesting your log-in credentials. Huh. Must be those wacky Google geeks, always working hard to improve security. You log in.

Congratulations my friend, you’ve just hacked yourself.

How to spot it

The scam is one of the most convincing ever made, and it works by tricking users into giving up their credentials, allowing the attacker full access to their inbox. It all starts with one email containing a rogue PDF attachment. The message will come from someone in your own address book and is extremely convincing, even copying their style of writing and, to a certain extent, personal touches such as commonly used idioms or smiley faces.

Once you click on the attachment, you will be redirected to a phishing page that looks like the Google sign-in page. The scam doesn't seem to trigger Google's HTTPS security warnings, which usually tell you that you've reached a shady page. Immediately after you log in, the attackers access your account and use one of your own attachments and subject lines to craft a malicious email that is sent to your entire contact list.

A HackerNews user reported on the scam:

“They went into one student’s account, pulled an attachment with an athletic team practice schedule, generated the screenshot, and then paired that with a subject line that was tangentially related, and emailed it to the other members of the athletic team.”

“It may be automated or they may have a team standing by to process accounts as they are compromised.”

Thankfully, Mark Maunder of Wordfence, a company that provides security services for WordPress, discovered the scam.

“Once they have access to your account, the attacker also has full access to all your emails including sent and received at this point and may download the whole lot,” he wrote on Wordfence.

“Now that they control your email address, they could also compromise a wide variety of other services that you use by using the password reset mechanism including other email accounts, any SaaS services you use and much more.”

How not to be phished

Maunder recommends enabling two-factor authentication for your account so no one else can access it even if your credentials are compromised. He also says you should keep an eye out for “data:text/html” in the browser location bar, as it’s a clear sign of a fake page.

“You should also take special note of the green colour and lock symbol that appears on the left. If you can’t verify the protocol and verify the hostname, stop and consider what you just clicked on to get to that sign-in page.”
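
For the curious, Maunder's advice boils down to a check you could sketch in a few lines of Python. This is my own illustration with a hypothetical helper function, not an official tool: a genuine sign-in page lives at accounts.google.com over HTTPS, while the scam page hides behind a data: URI.

```python
from urllib.parse import urlparse

def looks_like_real_google_signin(address_bar: str) -> bool:
    """Rough sanity check on what is shown in the browser location bar."""
    # The scam described above embeds its page in a 'data:text/html' URI,
    # so the address bar never shows a genuine https://accounts.google.com URL.
    if address_bar.startswith("data:"):
        return False
    parsed = urlparse(address_bar)
    return parsed.scheme == "https" and parsed.hostname == "accounts.google.com"

print(looks_like_real_google_signin("https://accounts.google.com/ServiceLogin"))      # True
print(looks_like_real_google_signin("data:text/html,https://accounts.google.com..."))  # False
```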

Google to run 100% on renewable energy in 2017

Google announced it will get all its 2017 energy from solar and wind.

Image credits: romanboed

It's a good sign when companies are leading the way on sustainability, especially when it comes to the big boys. Last year Google used about as much energy as the city of San Francisco, and it has pushed the use of renewables more and more, without really getting a lot of credit for it. And the company says it doesn't do this only for environmental reasons: it's also good for business.

“We are the largest corporate purchaser of renewable energy in the world,” said Joe Kava, Google’s senior vice president of technical infrastructure. “It’s good for the economy, good for business and good for our shareholders.”

What Google does is actually a pretty interesting (and complex) scheme. The company doesn't have its own grid; just like everyone else, it takes its electricity from a power company that operates a regular grid with multiple energy sources, both renewable and fossil. What Google has done is negotiate deals with renewable producers, typically guaranteeing to buy the energy their wind turbines and solar cells generate. That makes it easier for renewable companies to obtain funding and ensures they can pump clean energy into the grid, essentially nullifying Google's impact. It's not that Google directly uses only renewable energy; rather, it ensures that renewable energy equivalent to its consumption is generated. It's something more and more companies are doing, and it can make a big difference.

Most businesses don't make their electricity consumption public, but it's estimated that a quarter of US energy goes to businesses, and these businesses are adapting to the new energy market faster than governments are. The market itself is shifting towards renewables faster than policy is. So what corporations are doing through these deals is increasing demand for renewables and lowering their price, in quite a significant way.

The 5.7 terawatt-hours of electricity Google consumed in 2015 “is equal to the output of two 500 megawatt coal plants,” said Jonathan Koomey, a lecturer in the School of Earth, Energy and Environmental Sciences at Stanford. That is enough to power two 140,000-person towns. “For one company to be doing this is a very big deal. It means other companies of a similar scale will feel pressure to move.”

A buyer on Google's scale, Koomey adds, gives renewable energy companies more room for improvement and innovation: the more renewable energy you produce, the cheaper it gets.

“Every time you double production, you reduce the cost of solar by about 20 percent. Wind goes down 10 to 12 percent,” he said.
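
That rule of thumb is a classic experience-curve effect: each doubling of cumulative production cuts the cost by a roughly fixed fraction. A quick back-of-the-envelope sketch, with a made-up starting cost purely for illustration:

```python
# Experience-curve arithmetic behind the rule of thumb quoted above.
# The starting cost of 1.00 is a made-up figure, purely for illustration.
def cost_after_doublings(start_cost, doublings, cut_per_doubling):
    return start_cost * (1 - cut_per_doubling) ** doublings

print(cost_after_doublings(1.00, 3, 0.20))  # solar: three doublings -> 0.512, roughly half
print(cost_after_doublings(1.00, 3, 0.11))  # wind: three doublings -> ~0.70
```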

Tech companies in particular have committed to renewables. Aside from Google, Facebook has signed similar deals with wind farms, and Amazon, while not nearly fully renewable, is taking noticeable steps towards that goal. Microsoft has been carbon neutral since 2014, though a significant part of that comes from offsets such as planting trees or financing renewable sources. Hopefully, the rest of the world will follow.

Google asks Pixar, The Onion writers to make its helper more human-like

Google has enlisted help from Pixar and The Onion writers to give its new AI helper that dash of humanity they feel will be a game-changer for the tech industry.

Image via gadgetreport.ro

Search giant Google hopes to make its new AI helper more likeable by taking a cue from animation studio Pixar and news satire publication The Onion. The company hopes the writers' talent will help “infuse personality” into the helper, which will interact with users through Google's new Pixel phones, Duo app, and Home speakers.

The ultimate goal is to make a personal software agent that people can actually relate to and care for, and Google thinks a livelier personality and a dash of humor are the way to make it happen.

The announcement came after Google unveiled its Pixel smartphones earlier this month and ahead of the Home speaker launch; both devices will feature the helper. Gummi Hafsteinsson, product-management director of Google Home, told The Wall Street Journal that the writers are already hard at work on making the Assistant more relatable. He says Google wants users to feel an emotional connection to the system, but the technology ‘is still a ways off’.

“Our goal is to build a personal Google for each and every user,” said Sundar Pichai, CEO of Google.

While the other virtual assistants on the market, such as Siri or Alexa, have some sort of personality to engage users, they're still very basic. They can tell jokes or do some tricks and are actually quite funny, but they can't compare to a conversation with an actual human. Their responses are scripted, and if they can't solve a task they respond with ‘I don't know’ or ‘I'm not sure’. Google wants to mix its proprietary AI tech, which beat a grandmaster Go player twice, with humor to make an Assistant you can talk to as if it were human.

This may very well change how we think about and interact with AI, but actually implementing it in a device will be much harder, investors point out. They would rather see more attention paid to issues such as latency, saying that people won't put up with lag while conversing, The Wall Street Journal writes.

Google officially unveiled its range of new products in San Francisco, including the Pixel and the Home, earlier this month. Most of them were leaked prior to the event, however. Still, the event gave Google a chance to announce that starting in November, the devices will be available in Canada, the UK, Germany, Australia, and India. And they will all share the same AI.

“We’re at a seminal moment in computing,” said Pichai. “We are evolving from a mobile first to an AI first world. Computing will be everywhere, people will be able to interact seamlessly, and above all it will be intelligent.”

If the AI comes out as a tumbled mix of Pixar and The Onion humor, it might just be the best thing that anyone has ever sold in the history of everything. We’ll just have to wait and see.