Tag Archives: data

Should Google be able to purchase Fitbit?

From Australia to the European Union, data experts and privacy organizations are expressing their concerns over Google’s plan to acquire Fitbit, a company that produces and sells health-tracking technologies.

The move would cement Google’s digital dominance at the expense of consumers, critics say.

Image Credits: Flickr

In November, Google announced that it wanted to buy the wearables company Fitbit for $2.1 billion, mainly as a way to compete with Apple and Samsung in the market of fitness trackers and smartwatches. But the move almost immediately raised privacy and antitrust concerns.

Fitbit currently boasts over 28 million users around the world, which means that along with the company, Google would also acquire a large trove of health data that it would then have at its disposal. Google promised the data won’t be used for advertising purposes, but regulators and civil society are skeptical, and they’re asking for more transparency about what will happen with the data.

Two weeks ago, the European Commission was formally asked to decide on the merger. The antitrust regulators have until July 20 to decide whether to let the deal go ahead. If they don’t clear it by then, a four-month investigation will be opened to look further into the move.

But some have already made up their mind on the matter. Privacy International, a UK civil society organization, said in a statement that the merger should be blocked and called on the EU to do so.

Google has become too big and already has too much market power, they argued, claiming the company is now trying to access sensitive health data.

“While big isn’t always a bad thing, abusing this dominance violates the law. The value of personal data increases as more and more data is combined with it, and this incentivizes companies to pursue business strategies aimed at collecting as much data as possible,” the NGO argued.

The EU classifies health data as special category data, Privacy International noted. This means health data is granted a very high level of protection due to what it can reveal about our everyday habits and the potential consequences (such as discrimination) if it is misused. The merger would allow Google to access “our most intimate data” and profit from it, Privacy International says.

For Ioannis Kouvakas, Legal Officer at the NGO, this is something that has wide and far-reaching implications, no matter if we are Fitbit users or not.

“Can we trust a company with a shady competition and data protection past with our most intimate data? We must not let big tech once again sacrifice our wellbeing,” he argued.

A shady story with data

Google actually doesn’t have a clean record on the use of data. Last year, the company was fined $57 million by the French data regulator for a breach of the EU’s data protection rules. The regulator said Google had a “lack of transparency, inadequate information, and lack of valid consent.”

The French regulator said it judged that people were “not sufficiently informed” about how Google collected data to personalize advertising. Google didn’t obtain clear consent to process data because “essential information” was “disseminated across several documents,” CNIL said.

This is used as an argument to block the current merger with Fitbit, not only in Europe but also in Australia, where the antitrust regulator (ACCC) warned the merger would give Google too much of people’s data and hurt competition. It’s the first regulator to officially voice its concerns about the deal.

“Buying Fitbit will allow Google to build an even more comprehensive set of user data, further cementing its position and raising barriers to entry to potential rivals,” ACCC Chairman Rod Sims said in a statement. “User data available to Google has made it so valuable to advertisers that it faces only limited competition.”

In the US, the Justice Department said in December that it would review Google’s plans, having already opened a larger investigation into the company in September. US watchdog groups like Public Citizen and the Center for Digital Democracy have urged antitrust enforcers to block the deal.

A Google spokesperson rejected criticism over the use of data.

“Throughout this process, we have been clear about our commitment not to use Fitbit health and wellness data for Google ads and our responsibility to provide people with choice and control,” the spokesperson told Euronews.

This is a pressing reminder that we need to carefully consider what role tech companies play in our society — and we need to do it fast. More often than not, technology changes much faster than society itself, and consumers risk being left behind. We all want to enjoy the benefits that technology can bring us, but as we’re already starting to see, the consequences can be dire if we’re not careful.

Apps aimed at kids are a sponge of personal data, in direct violation of federal law, study reports

Thousands of apps targeted at children are silently and unlawfully gathering their data, study finds.

Peekaboo, they see you. Image credits: Thomas Quinn.

In the wake of the Facebook / Cambridge Analytica meltdown, people are understandably quite concerned about the heap of data apps have gathered on them, and what happens to this wealth of information. Well, I’m sorry to break it to you, but according to a study published on April 16th, you should be even more concerned.

Hide your kids

Researchers from the International Computer Science Institute say that the majority of free Android apps intended for children are tracking their data — in direct violation of the Children’s Online Privacy Protection Act, or COPPA, a federal law that regulates data collection from users under 13 years of age.

The study analyzed 5,855 apps targeted at children, each averaging some 750,000 downloads, over the period from November 2016 to March 2018, according to the paper. These apps, which had over 172 million downloads combined, were games like Fun Kid Racing and Motocross Kids — Winter Storm. Using a Nexus 5X as a platform, the team downloaded and ran each app for about 10 minutes to simulate a typical session. The results were quite worrying.

Thousands of the apps the team looked at collected data from the device in some way or another, in some cases including location (GPS) data or personal information. Some 235 of these apps accessed the phone’s GPS data, and 184 of those later transmitted it to advertisers, according to the study. Co-author Serge Egelman says the findings are bound to worry parents, particularly since they would need an ‘expert’ level of technical knowledge to figure out for themselves which apps did this.

“They’re not expected to reverse-engineer applications in order to make a decision whether or not it’s safe for their kids to use,” he said.

People often give permission for apps to gather ad-tracking data in exchange for free service — we’re all guilty of doing this at one point or another. It isn’t only Android apps that do it, either. For better or for worse, there is a myriad of apps — and most likely a Facebook tracker — peeking at your data all the time.

However, we’re adults, and the right to make our own choices comes with its own risks, including granting permissions to apps. Children, who aren’t discerning enough to know what consequences their button-pressing might have, are given protected legal status through COPPA. Children’s apps are thus not allowed to track data without first gaining explicit parental consent. The study, however, found that many of the apps analyzed didn’t conform to the law.

Egelman says that even if companies try to ensure they conform to COPPA, the results are still worrying. The simulated interactions were handled by a machine randomly pressing buttons, and most apps still tracked data in one form or another. COPPA requires producers to get “verifiable consent,” meaning they have to take steps to ensure that people know what information they are releasing to the app.

“If a robot is able to click through their consent screen which resulted in sharing data, obviously a small child that doesn’t know what they’re reading is likely to do the same,” Egelman said.

Back in 2014, Google allowed users to reset their Android Advertising ID to give them better control over how online apps track their data. Developers are required to only use that ID when tracking user data, but the team says two-thirds of the apps they looked at didn’t allow users to reset their ID. Even more glaringly, over 1,000 of the apps also collected personal information in direct violation of Google’s terms of service, which prohibits such tracking in apps targeted towards children.

To add insult to injury, over 40% of the apps further failed to transfer the data in a secure way. Some 2,344 children’s apps transferring collected data did not use TLS encryption, a security standard that encrypts data in transit and verifies that the recipient is who it claims to be. The security measure is the “standard method for securely transmitting information,” the researchers said.

The paper “‘Won’t Somebody Think of the Children?’ Examining COPPA Compliance at Scale” has been published in the journal Proceedings on Privacy Enhancing Technologies.

Single-atom magnets used to create data storage one million times more dense than regular hard disks

A team of researchers has created the smallest and most efficient hard drive in existence using only two atoms. This technology is currently extremely limited in the amount of data it can store, but the technique could provide much better storage when scaled up.

Image credits Michael Schwarzenberger.

Hard drives store data as magnetic fields along a disk housed inside the drive. The disk is split into tiny regions, each acting like a bar magnet whose field points either up or down (1 or 0) to store one bit of information. The smaller you can make these regions, the more data you can cram onto the disk — but make them too small and they become unstable, so the 1s and 0s they store can and will flip around.
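That up-or-down scheme maps directly onto binary. Here is a toy sketch of the idea (the function names are invented for illustration; a real drive controller is vastly more involved):

```python
# Toy model of magnetic storage: each region's field points
# "up" (True -> 1) or "down" (False -> 0).

def write_bits(value: int, width: int = 8) -> list[bool]:
    """Encode an integer as a list of magnetic orientations (MSB first)."""
    return [bool((value >> i) & 1) for i in reversed(range(width))]

def read_bits(regions: list[bool]) -> int:
    """Read the field orientations back into an integer."""
    value = 0
    for up in regions:
        value = (value << 1) | int(up)
    return value

platter = write_bits(0b1011_0010)   # eight tiny "bar magnets"
assert read_bits(platter) == 0b1011_0010
```

Instability at small sizes simply means some of those booleans flip on their own, corrupting the value read back.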

What if you used magnets that remained stable even when made really tiny? Well, those of you who remember physics 101 will know that cutting a magnet in two makes two smaller magnets. Cut those in half again and you get four, then eight, and so on — but each time, the magnets also become less stable.

But a team of researchers has now created something that seems to defy the odds: stable magnets made from single atoms. In a new paper, they describe how they used these tiny magnets to build an atomic hard drive with the same functionality as a traditional drive, albeit limited to 2 bits of data storage.

Current commercially-available technology allows for one bit of data to be stored in roughly one million atoms — although this number has been reduced to 1 in 12 in experimental settings. This single-atom approach allows for one bit of data to be stored in one single atom. A scaled-up version of this system will likely be less efficient, but could increase current storage density by a factor of 1,000, says Swiss Federal Institute of Technology (EPFL) physicist and first author Fabian Natterer.
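To put those densities in perspective, a quick sanity check on the article’s numbers:

```python
# Figures as cited in the article.
atoms_per_bit_commercial = 1_000_000   # ~1 bit per million atoms today
atoms_per_bit_experimental = 12        # best experimental result cited
atoms_per_bit_single_atom = 1          # the new approach

# Ideal density gain of a perfect single-atom medium over commercial disks:
ideal_gain = atoms_per_bit_commercial / atoms_per_bit_single_atom
assert ideal_gain == 1_000_000

# Natterer's estimate for a practical, scaled-up system is far more modest:
practical_gain = 1_000
```

The gap between the ideal million-fold gain and the estimated thousand-fold gain reflects the overhead a real, scaled-up device would need.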

Holmium bits

Looks hairy. Image source: Images of Elements / Wikipedia.

Natterer and his team used atoms of holmium, a rare-earth metal, placed on a sheet of magnesium oxide and cooled to below 5 Kelvin. Holmium was selected because it has many unpaired electrons (which create a strong magnetic field) sitting in close orbit to the atom’s nucleus (so they’re relatively well protected from outside factors). Taken together, these two properties give holmium a strong and stable magnetic field, Natterer explains, but they also make the element frustratingly difficult to interact with.

 

The team used a pulse of electric current released from the magnetized tip of a scanning tunneling microscope to flip the atoms’ field orientation — essentially writing data into the atoms. Testing showed that these atomic magnets could retain their state for several hours, with no case of spontaneous flipping. The same microscope was then used to read the bits stored in the atoms. To double-check that the data could be read reliably, the team also devised a second read-out method: placing an iron atom close to the magnets and tuning it so that its electronic properties depended on the orientations of the two-bit systems. This approach allowed the team to read out multiple bits at the same time, making for a faster and less invasive method than the microscope reading technique, Otte said.

It works, but the system is far from being practical. Two bits is an extremely low level of data storage compared to every other storage method. Natterer says that he and his colleagues are working on ways to make large arrays of single-atom magnets to scale-up the amount of data which can be encoded into the drives.

But the merits and possibilities of single-atom magnets shouldn’t be overlooked, either. In the future, Natterer plans to observe three mini-magnets that are oriented so their fields are in competition with each other, making each other continually flip.

“You can now play around with these single-atom magnets, using them like Legos, to build up magnetic structures from scratch,” he says.

 

Other physicists are sure to continue research into these magnets as well.

The full paper “Reading and writing single-atom magnets” has been published in the journal Nature.

New method piggybacks data on radio waves to make singing posters and smart cities

A new technique developed by University of Washington engineers will allow “smart” objects to communicate directly with your car or smartphone.

Image credits JudaM / Pixabay.

A bus stop billboard could do much more than just advertise local attractions — why not enable it to send your smartphone a link with directions to the venue, maybe even a discount for your ticket? A t-shirt could do more than just clothe you while you run — why not have it monitor your vital signs, keeping an eye out for any emergency? Well, that’s exactly what one team from the University of Washington wants to do.

The problem is that up until now we didn’t have any viable way to power these devices for any meaningful period of time. So the team decided to swap out internal power sources for a ubiquitous form of energy in modern cities — ambient radio signals.

“The challenge is that radio technologies like WiFi, Bluetooth and conventional FM radios would last less than half a day with a coin cell battery when transmitting,” explains co-author and UW electrical engineering doctoral student Vikram Iyer. “So we developed a new way of communication where we send information by reflecting ambient FM radio signals that are already in the air, which consumes close to zero power.”

“FM radio signals are everywhere. You can listen to music or news in your car and it’s a common way for us to get our information,” adds co-author and UW computer science and engineering doctoral student Anran Wang. “So what we do is basically make each of these everyday objects into a mini FM radio station at almost zero power.”

They’re the first research team to ever prove this method of harnessing existing radio signals — called “backscattering” — actually works. Their system transmits messages by encoding data into these waves and then reflecting them without affecting the original transmissions.

Singing posters

To prove that their technology works, they created a “singing poster” for the band Simply Three and placed it at a bus stop. The poster could transmit an ad and a sample of the band’s music to a smartphone up to 12 feet (3.6 meters) away or to a car up to 60 feet (18 meters) away. The audio and image data were transmitted on top of an ambient signal — a news broadcast from a local NPR radio station.

The poster uses a low-power reflector that can tap into the radio broadcast and manipulate the signal in such a way as to piggy-back the desired data on top of the signal. This data is distinct enough from the original wave to be picked up by a smartphone receiver on an unoccupied frequency in the FM radio band, not interfering with any other technology.

“Our system doesn’t disturb existing FM radio frequencies,” said co-author Joshua Smith, UW associate professor of computer science and engineering and of electrical engineering. “We send our messages on an adjacent band that no one is using — so we can piggyback on your favorite news or music channel without disturbing the original transmission.”

“Because of the unique structure of FM radio signals, multiplying the original signal with the backscattered signal actually produces an additive frequency change,” adds co-author Vamsi Talla, a UW postdoctoral researcher in computer science and engineering. “These frequency changes can be decoded as audio on the normal FM receivers built into cars and smartphones.”
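Talla’s “additive frequency change” is, at heart, the product-to-sum trigonometric identity: multiplying a carrier by a switching signal puts energy at the sum and difference frequencies. A quick numerical check (the frequencies here are illustrative placeholders, not the actual broadcast parameters):

```python
import math

f_carrier = 100.0e6   # illustrative FM carrier frequency (Hz)
f_shift = 50.0e3      # illustrative backscatter switching rate (Hz)

def product(t: float) -> float:
    # Carrier multiplied by the backscatter switching signal
    return math.cos(2*math.pi*f_carrier*t) * math.cos(2*math.pi*f_shift*t)

def sum_of_shifted(t: float) -> float:
    # Identity: cos(a)cos(b) = 0.5*[cos(a+b) + cos(a-b)],
    # i.e. energy appears at f_carrier + f_shift and f_carrier - f_shift
    return 0.5*(math.cos(2*math.pi*(f_carrier + f_shift)*t)
                + math.cos(2*math.pi*(f_carrier - f_shift)*t))

# The two formulations agree at every sample instant.
for k in range(1000):
    t = k * 1e-9
    assert abs(product(t) - sum_of_shifted(t)) < 1e-9
```

This is why a receiver tuned to an adjacent, unoccupied band can pick up the piggybacked data without disturbing the original station.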

Beyond this method of adding data to an unused frequency, the team demonstrated two more methods for transferring data using FM backscatter: one which simply overlays the new information on top of the existing signals, and one that relies on cooperation between two smartphones to decode the message.

In the team’s demonstrations, the total power consumption of the backscatter system was 11 microwatts, which could be easily supplied by a tiny coin-cell battery for a couple of years or powered using tiny solar cells. Connectivity requiring such a low level of power can also be used to create smart fabrics and clothes. The researchers from the UW Networks & Mobile Systems Lab used a conductive thread to sew an antenna into a T-shirt which was able to similarly backscatter data at rates of up to 3.2 kilobits per second.

The end game isn’t to make smart posters or clothes alone — but entire smart cities that can talk to you at almost no power cost.

“What we want to do is enable smart cities and fabrics where everyday objects in outdoor environments — whether it’s posters or street signs or even the shirt you’re wearing — can ‘talk’ to you by sending information to your phone or car,” concludes lead faculty and UW assistant professor of computer science and engineering Shyam Gollakota.

The full paper “FM Backscatter: Enabling Connected Cities and Smart Fabrics” will be presented in Boston at the 14th USENIX Symposium on Networked Systems Design and Implementation this month.

New method developed to encode huge quantity of data in diamonds

A team from the City College of New York has developed a method to store data in diamonds by using microscopic defects in their crystal lattice.

Image credits George Hodan / Publicdomainpictures


I’ve grown up on sci-fi where advanced civilizations stored immense amounts of data in crystals (like Stargate SG-1. You’re welcome). Now a U.S. team could bring the technology to reality, as they report exploiting structural defects in diamonds to store information.

“We are the first group to demonstrate the possibility of using diamond as a platform for the superdense memory storage,” said study lead author Siddharth Dhomkar.

It works similarly to how CDs or DVDs encode data. Diamonds are made up of a cubic lattice of carbon atoms, but sometimes an atom just isn’t there, leaving a hole in the structure — a defect. When a nitrogen atom sits next to one of these vacancies, the pair is known as a nitrogen-vacancy center.

These vacancies are negatively charged (as there are no protons to offset the electrons’ charge from neighboring atoms). But, the team found that by shining a laser on the defects — in essence neutralizing their electrical charge — they could alter how each vacancy behaved. Vacancies with a negative charge fluoresced brightly, while those with neutral charges stayed dark. The change is reversible, long-lasting, and stable under weak and medium levels of illumination, the team said.

So just as a laser can be used to encode data on a CD’s surface, it can store data in diamond by changing these defects’ charges. In theory, this method could allow scientists to write, read, erase, and re-write data in the diamonds, the team added.

Dhomkar said that in principle, each bit of data can be encoded in a spot a few nanometers — a few billionths of a meter — wide. This is a much denser information packing than in any similar data storing device. So we could use diamonds to build the superdense computer memories of the future. But, we currently have no way to read or write on such a small scale so currently “the smallest bit size that we have achieved is comparable to a state-of-the-art DVD,” Dhomkar told Live Science.

Here the second “but” comes into the picture. We can’t yet fully use the diamonds’ capacity, but the team has shown it can encode data in 3D by stacking layers of 2D data stores.

“One can enhance storage capacity dramatically by utilizing the third dimension,” Dhomkar said.

By using this 3D approach, the technique could be used to store up to 100 times more data than a typical DVD. Dhomkar and his team are now looking into developing ways to read and write the diamond stores with greater density.

“The storage density of such an optimized diamond chip would then be far greater than a conventional hard disk drive,” he said.

The full paper “Long-term data storage in diamond” has been published in the journal Science Advances.

French computer scientist turns Wikipedia into a universe of knowledge. Literally

What’s life worth if you don’t sometimes waste a whole afternoon on Wikipedia, chain-reading entries? Not much.

But with so much information available, I sometimes have difficulty staying focused on one topic and then I start shotgunning articles left and right. Thankfully, Wikiverse comes to help put order into the chaos by displaying all the articles on Wikipedia as a tiny universe of information for you to navigate. Which is awesome.

Interconnected topics form clusters of stars, each one a single article (that will load up right in the interface if you click on it.) Each star is visually connected to related topics through colored loopy lines, so you can hop around like you would on the actual Wikipedia website. Zoom out to see how it all fits together, then zoom in for the actual information.

Wikiverse is the latest update of a 2014 Chrome experiment called WikiGalaxy, which sadly never truly took off. The software was designed by Owen Cornec, a French computer scientist who wanted to make Wikipedia more engaging. He initially tried color-coding star clusters by the category they fit into, but there was just too much information and he ran out of colors.

So he instead made different clusters stand out from each other and used colors to indicate whether an entry belonged to one cluster or another. Wikiverse also runs more smoothly than the older WikiGalaxy, even on browsers other than Chrome (I had a lot of fun with it, and I use Firefox).

So if it’s been a long week and all you want to do is unwind, there’s now a whole universe (of information) you can explore.

New NASA transfer protocol makes space Wi-Fi better than yours

NASA has been working on a space-friendly internet technology for years, and earlier this month those efforts were rewarded. The agency has installed the first functioning Delay/Disruption Tolerant Networking (DTN) system aboard the ISS. It is expected to improve data availability and automate transfers for space station experimenters, resulting in more efficient bandwidth utilization and more data return.

The DTN protocol would allow for data to be reliably transferred through unstable channels, allowing for storing data in nodes until transfer can be performed.
Image via NASA.

Keeping an open line between our planet and outer space is a difficult task at best. The huge distances involved are the foremost problem, but there’s also radiation to consider, plus planets, asteroids, and spacecraft whizzing about and blocking the signal.

Up to now, NASA handled data transfer through three networks of distributed ground stations and relay satellites, supporting both their own and non-NASA missions: the Deep Space Network (DSN), the Near Earth Network (NEN), and the Space Network (SN). All of them transfer information using point-to-point (or direct) relaying between two nodes — similarly to how a telephone landline works.

The problem is that successful space exploration requires the ability to exchange data, a lot of data, fast and reliably, between many different nodes. It’s not something you can handle over the phone, even with the most stable of lines. So NASA has been looking to adapt the terrestrial Internet, on a much wider scale, for space use.

The result is called Delay/Disruption Tolerant Networking, and it has been in the making for a few years now. The main difference between the DTN protocol and those governing a wireless network down here is in how they handle data transfer. For you and me, when something blocks our Wi-Fi, the connection slows or drops entirely. The DTN protocol, however, stores data if a connection becomes interrupted, then forwards it through relay stations to its intended destination. This means the network can function even when a recipient server is offline.
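That store-and-forward behavior can be sketched in a few lines. This is an illustrative toy, not NASA’s actual Bundle Protocol implementation; the class and method names are made up:

```python
from collections import deque

class DTNNode:
    """Toy store-and-forward node: data 'bundles' are queued while
    the link is down and flushed to the next hop once it returns."""

    def __init__(self, name: str):
        self.name = name
        self.link_up = False
        self.buffer = deque()
        self.delivered = []

    def receive(self, bundle, next_hop=None):
        if next_hop is None:
            self.delivered.append(bundle)           # we are the destination
        elif self.link_up:
            next_hop.receive(bundle)                # forward immediately
        else:
            self.buffer.append((bundle, next_hop))  # store until link returns

    def link_restored(self):
        self.link_up = True
        while self.buffer:                          # flush everything we held
            bundle, hop = self.buffer.popleft()
            hop.receive(bundle)

relay, ground = DTNNode("relay"), DTNNode("ground")
relay.receive("telemetry-1", next_hop=ground)  # link is down: stored, not lost
relay.link_restored()
assert ground.delivered == ["telemetry-1"]
```

Contrast this with a point-to-point link, where "telemetry-1" would simply have been dropped while the connection was blocked.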

To create the DTN, NASA enlisted the help of one of the pioneers of the Internet, Dr. Vinton G. Cerf, Google vice president and a distinguished visiting scientist at NASA’s Jet Propulsion Laboratory in Pasadena, California. He predicts the technology will bring many benefits in space as well as on Earth, especially in disaster relief conditions.

“Our experience with DTN on the space station leads to additional terrestrial applications especially for mobile communications in which connections may be erratic and discontinuous,” said Cerf. “In some cases, battery power will be an issue and devices may have to postpone communication until battery charge is adequate. These notions are relevant to the emerging ‘Internet of Things’. ”

NASA installed the first DTN system earlier this month in the ISS‘s Telescience Resource Kit (TReK) — a software suite for researchers to transmit and receive data between operations centers and their payloads aboard the station. NASA reports that adding this service on the station will also enhance mission support applications, including operational file transfers.

 

Harvard team turns bacteria into living hard drives

A research team from Harvard University, led by Seth Shipman and Jeff Nivala, has developed a novel method of writing information into the genetic code of living bacterial cells. They pass the information on to their descendants, which can later be read by genotyping the bacteria.

Storing information in DNA isn’t a new idea — for starters, nature’s been doing it for a long, long time now. Researchers at the University of Washington have also shown that we can synthesize DNA in the lab and write any information we want into it — and to prove it, they encoded a whole book and some images into DNA strands. But combining the two methods into an efficient data storage process has proven beyond our grasp until now.

“Rather than synthesizing DNA and cutting it into a living cell, we wanted to know if we could use nature’s own methods to write directly onto the genome of a bacterial cell, so it gets copied and pasted into every subsequent generation,” says Shipman. “But working within a living cell is an entirely different story and challenge.”

The team exploited an immune response certain bacteria use to protect themselves from viral infection, called the CRISPR/Cas system. When the bacteria are attacked by viruses, they physically cut out a segment of the invaders’ DNA and paste it into a specific region of their own genome. This way, if that same virus attacks again, the bacteria can identify it and respond accordingly. Plus, the cell passes this information over to its progeny, transferring the viral immunity to future generations.

The geneticists found that if you introduce a piece of genetic data that looks like viral DNA into a colony of bacteria with the CRISPR/Cas system, they will incorporate it into their genetic code. So Shipman and Nivala flooded a colony of E. coli bacteria with this system with loose segments of viral-looking DNA strands, and the cells gulped it all up — essentially becoming tiny, living hard drives.

The segments used were arbitrary strings of A, T, C, G nucleotides with chunks of viral DNA at the end. Shipman introduced one segment of information at a time and let the bacteria do the rest, storing away information like fastidious librarians.
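To get a feel for how arbitrary data maps onto those four letters, here is one possible encoding: two bits per nucleotide, with a fixed viral-looking tag appended. This is a hypothetical scheme for illustration only; the mapping and the tag sequence are invented, not the segments Shipman’s team actually synthesized:

```python
# 2 bits per base: a byte becomes exactly 4 nucleotides.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {b: v for v, b in BASE_FOR_BITS.items()}
TAG = "AAGT"  # made-up stand-in for the viral-looking chunk

def encode(data: bytes) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):            # MSB-first, 2 bits at a time
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases) + TAG

def decode(strand: str) -> bytes:
    payload = strand[:-len(TAG)]              # strip the tag
    out = bytearray()
    for i in range(0, len(payload), 4):       # 4 bases per byte
        byte = 0
        for base in payload[i:i+4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

assert decode(encode(b"hi")) == b"hi"
```

At two bits per base, the 100 bytes the team stored correspond to a 400-nucleotide payload.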

Conveniently enough, the bacteria store new immune system entries sequentially, with earlier viral DNA recorded before that of more recent infections.

“That’s quite important,” Shipman says. “If the new information was just stored randomly, that wouldn’t be nearly as informative. You’d have to have tags on each piece of information to know when it was introduced into the cell. Here it’s ordered sequentially, like the way you write down the words in a sentence.”

Bugs with the bugs

One issue the team ran into is that not all of the bacteria record every strand of DNA introduced to the culture. So even if you introduce the information step by step, let’s say the numbers from 1 to 5, some bacteria would have “12345” but others may only have “12” or “245” and so on. But Shipman thinks that because you can rapidly genotype thousands or millions of bacteria in a colony and because the data is always stored sequentially, you’ll be able to clearly deduce the full message even with these errors.
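Shipman’s intuition, that ordered partial reads can still pin down the full message, can be sketched as a tiny reconstruction algorithm. This toy assumes each symbol is unique and that the reads collectively establish the order of every pair (real sequencing data would need far more care):

```python
def reconstruct(reads: list[str]) -> str:
    """Recover a message from incomplete reads, where each read is a
    subsequence of the true message in the original order."""
    symbols = {s for read in reads for s in read}
    after = {s: set() for s in symbols}
    for read in reads:
        for i, a in enumerate(read):
            for b in read[i+1:]:
                after[a].add(b)          # a was recorded before b
    # A symbol earlier in the message has more known successors.
    return "".join(sorted(symbols, key=lambda s: -len(after[s])))

# Mirrors the article's example: some bacteria hold "12345",
# others only fragments like "12" or "245".
reads = ["12345", "12", "245", "135", "34"]
assert reconstruct(reads) == "12345"
```

Because the data is always stored sequentially, the fragments vote consistently on ordering, which is exactly why genotyping many bacteria lets you deduce the full message despite dropouts.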

Shipman adds that the 100 bytes his team demonstrated are nowhere near the limit. Cells like the microorganism Sulfolobus tokodaii could potentially store more than 3,000 bytes of data. And with synthetic engineering, you could design specialized hard-drive bacteria with vastly expanded regions of their genetic code, able to rapidly upload vast amounts of data.

Microsoft sniffs for cancer clues in your search queries

Search engines today use extremely sophisticated algorithms to guess what you’ll be searching for next based on your previous queries. This optimization has paid off very well for companies like Google, for instance, which can use this information to serve you better, more relevant results to queries, but also sell better ads.

It’s thrilling to hear, though, that more or less the same technology is used to predict which people have cancer before they even visited a doctor — powerful tech that’s used to save lives, not just make a hefty profit.

Dr. Eric Horvitz is both a medical doctor and a computer scientist, a double background that serves him well as the head of Microsoft’s newly founded Health and Wellness division.

One day, Horvitz got a call from a friend who was feeling sick. After describing his symptoms, Horvitz advised him to seek medical help. Not long after, the man was diagnosed with pancreatic cancer and died only a few months later.

Pancreatic cancer is one of the most unforgiving diseases out there, with only 3 percent of patients surviving five years after the diagnosis.

Today, Horvitz and colleagues at Microsoft published a paper in which they claim search queries can be used to predict if a person has pancreatic cancer with pretty good accuracy, considering they’re only working with anonymous queries.

Using data from Bing, Microsoft’s search engine, the researchers devised a computer model that could tell whether the symptoms people query online are linked with pancreatic cancer. The researchers say they could distinguish between ‘serious’ concerned queries and those based on anxiety. They could also sniff out cancer before a person even considered searching for ‘cancer symptoms.’

Pancreatic cancer is particularly hard to detect early because its symptoms don’t seem very severe: itchy skin, weight loss, light-colored stools, patterns of back pain and a slight yellowing of the eyes and skin. This also made it a very interesting data-mining target, because the symptoms can easily be confused with those of other diseases.

Eric Horvitz. Photo: Scott Eklund/Red Box Pictures


According to the paper published in the Journal of Oncology Practice, the researchers could “identify 5% to 15% of pancreatic cancer cases, while preserving extremely low false-positive rates (0.00001 to 0.0001).”

That means between 1 in 100,000 and 1 in 10,000 people would be informed that they might have pancreatic cancer without actually having it. A false alarm would scare some, but it could be worth it considering the lives saved. The authors of the paper say this kind of early diagnosis could raise the five-year pancreatic cancer survival rate to 5 to 7 percent.
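To put those false-positive rates in perspective, here’s a quick back-of-the-envelope calculation. The one-million-user population below is a hypothetical figure for illustration, not a number from the paper:

```python
# Back-of-the-envelope: expected false alarms at the reported
# false-positive rates, for a hypothetical screened population.
def expected_false_positives(n_users: int, fpr: float) -> float:
    """Expected number of healthy users incorrectly flagged."""
    return n_users * fpr

n = 1_000_000  # hypothetical number of users screened
for fpr in (0.00001, 0.0001):
    print(f"FPR {fpr}: ~{expected_false_positives(n, fpr):.0f} "
          f"false alarms per {n:,} users")
```

At the low end that works out to roughly 10 false alarms per million users screened; at the high end, roughly 100.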

“We are excited about applying this analytical pipeline to other devastating and hard-to-detect diseases,” Horvitz said.

It’s worth noting that the researchers didn’t check their results with the health records of the people doing the online queries since these were anonymous. As such, the team’s claim that it could raise the survival rate, as well as the false positive rate, should be taken with a grain of salt.

Nevertheless, it’s refreshing to see health search queries being put to good use. We now live in an age where people go to Google first to type their symptoms instead of calling their doctors. At least one in ten internet searches is health-related.

With no medical training or experience, it’s easy to get lost down a rabbit hole which can cause anxious, sleepless nights. But maybe soon enough, you’ll get a personal health assistant that can actually interpret your illness and pain, then give you the right shove to visit a (human) professional.

“People are being diagnosed too late,” Horvitz said. “We believe that these results frame a new approach to pre-screening or screening, but there’s work to do to go from the feasibility study to real-world fielding.”


Digital images stored/read in synthetic DNA

Digital storage has come a long way since the advent of digital computing. It used to take hard drives the size of a boat to store a fraction of the data you can now access with a smartphone. Progress in storage density has become incremental, though, while demand is soaring. Silicon, a faithful ally, seems to be reaching its limits, so scientists are looking for alternatives. For archiving purposes, at least, DNA, the genetic blueprint that codes all life, might be worth considering. One team, for instance, coded digital images into synthetic DNA using a novel method, then decoded and read this data.


One drop of this solution contains millions of DNA molecules. It’s enough to store 10 Tb of data. Image: University of Washington

We’ve known DNA is an amazing storage medium ever since it was first discovered; after all, a single cell contains all the information necessary to build an entire human being. It’s fantastically good at storing data because the information is coded throughout a volume, rather than on a planar surface as in man-made flash and hard drives.

A single cubic millimeter of DNA can hold up to 1 exabyte. Just 4 grams of DNA could hold all the digital data we create annually. That is far denser than digital storage media such as flash drives, and more stable, since DNA sequences can still be read thousands of years after they were encoded.

“A large part of building better computers is about finding better materials to build computers with,” says Luis Ceze, an associate professor in the Computer Science Department at the University of Washington. “So, silicon happens to be a fantastic material, but it’s reaching a point where it’s unclear that we can continue pushing forward with silicon. So I find it fascinating that biology has evolved many molecules that are useful for building better computers in the future.”

Ceze and colleagues start by ‘translating’ data from digital 1s and 0s into the four letters of DNA: A, G, C and T. A DNA synthesizer creates short strands of DNA, each holding part of a file’s code. A DNA sequencer can then read and retrieve the encoded information, akin to the way scientists use sequencing today to read genetic information about our ancestors. To demonstrate, they encoded a couple of images into DNA, then retrieved and displayed the data.
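The ‘translation’ step can be sketched in a few lines of Python. This naive two-bits-per-base mapping is an illustration only; the team’s actual scheme also adds addressing information and error correction:

```python
# Naive 2-bits-per-base mapping: 00->A, 01->C, 10->G, 11->T.
# (Real DNA storage schemes add addressing and error correction.)
BASES = "ACGT"

def encode(data: bytes) -> str:
    """Map each byte to four DNA bases, two bits per base."""
    dna = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            dna.append(BASES[(byte >> shift) & 0b11])
    return "".join(dna)

def decode(dna: str) -> bytes:
    """Invert the mapping: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

print(encode(b"Hi"))  # 'Hi' = 0x48 0x69 -> "CAGACGGC"
```

Decoding simply reverses the mapping, so any byte string survives a round trip through its DNA representation.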

The encoded images used by the University of Washington researchers.


Ceze and team aren’t the first to encode data into DNA. One notable attempt from 2012 encoded a 53,400-word book, along with 11 images in JPG format and a JavaScript program, into DNA. This most recent research is more efficient because it reduces redundancy: fewer strands of DNA have to be synthesized to account for errors, significantly improving storage efficiency. As for retrieval, data can be read without actually sequencing all of the DNA strands.

We might be a long way from using DNA inside your computer, though. Unlike flash drives, DNA has to be read by shuttling molecules around, a snail’s pace compared to electrical signals, which propagate at nearly the speed of light. Instead, DNA will likely first be used to archive sensitive information. It won’t be alone either: researchers have previously crammed 360TB worth of five-dimensional (5D) digital data onto a small quartz disk, which should remain stable for 13 billion years.

 

People pick up and use discarded USB drives they find almost half the time

Connectivity has never been more pervasive than it is today. In the span of just two hundred years, Western civilization has gone from the electric telegraph to satellite communication. Access to the internet, which just thirty years ago was limited to land-line dial-up connections, has become ubiquitous, only a screen swipe away. Portable data storage such as USB drives might not be quite as useful or sought after as it once was, but it remains an undeniably handy way to carry your data around.

Image via Flickr user Custom USB.

So when you spot a USB drive lying abandoned on the floor or on the sidewalk, you’re faced with a very puzzling choice. Should you pick it up, or not? Surely a quick peek at the files it contains will help you return the drive to its rightful (and thankful) owner; it’s a civic duty, and who better than you to see it through to the end? Or maybe you’re more inclined to use it yourself; finders keepers, after all! Moral conundrums aside, one thing is sure: USB drives discarded in public places won’t go unnoticed for long, a new study has found.

A University of Illinois Urbana-Champaign team dropped 297 USB memory sticks, seemingly by accident, around the university grounds in places like parking lots, classrooms, cafeterias, libraries and hallways. Roughly 98% of them were removed from their original location, and almost half were snooped through.

The researchers wanted to know what people would do with the data on the drives after they found them, so they put HTML documents cunningly disguised with names such as “documents,” “math notes,” or “winter break pictures” on the USB sticks. If anyone tried to open these files on a computer connected to the internet, the researchers would receive a notification.

In the end, the team received 135 notifications of users opening the files, corresponding to 45% of the discarded drives. The actual number of accessed drives is most likely higher than this, as the researchers were only notified if the HTML files were opened (and even then, if an internet connection was established at the time of opening the file.)
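The headline figure is easy to verify from the raw counts reported in the study:

```python
# Sanity check of the study's reported figures.
dropped = 297      # USB drives planted around campus
phoned_home = 135  # drives whose decoy HTML files were opened online
opened_rate = phoned_home / dropped
print(f"{opened_rate:.1%} of the drives had files opened")  # 45.5%
```

And since notifications required both opening an HTML file and an active internet connection, this is a lower bound on how many drives were actually plugged in.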

The unknowing subjects were informed about the experiment when they opened the HTML files on the drive, and were invited to complete an anonymous survey to explain what had motivated them to pick up and use the drive in the first place. Only 43 percent of the participants chose to provide feedback. Most of them (68 percent) said that they were trying to return the drive to its owner. Some of the drives had been put on key rings with dummy house keys, and many of the participants listed this as one of the reasons behind their altruistic intentions. Another 18 percent reported that they were just curious to see what was in the files. Two very honest people admitted that they were simply planning on keeping the drive.

Ka-ching!
Image via Flickr user Custom USB.

Still, even those driven by good intentions snooped around the data, opening files like photos or texts on the drives. An argument could be made that they were trying to see what the owner looks like; but seeing as the drives had a “personal resume” file complete with contact details, I think it’s safe to say they just let their curiosity get the better of them.

There’s nothing wrong with that. Curiosity can be a very powerful force; and when you combine it with the temptation of a USB drive containing data only you have access to, it can become downright irresistible. But it’s also a huge security risk.

More than two-thirds of respondents had taken no precautions before connecting the drive to their computer. “I trust my Macbook to be a good defence against viruses,” said one respondent. Others admitted to opening the files on university computers to protect their own systems.

“This evidence is a reminder to the security community that less technical attacks remain a real-world threat and that we have yet to understand how to successfully defend against them,” the authors write. “We need to better understand the dynamics of social engineering attacks, develop better technical defences against them, and learn how to effectively teach end users about these risks.”

However amusing these kinds of experiments may seem, the study shows that people aren’t cautious enough when it comes to opening unknown files from totally random drives.

“It’s easy to laugh at these attacks, but the scary thing is that they work,” said lead researcher Matt Tischer for Motherboard, “and that’s something that needs to be addressed.”

The findings, which are being presented next month at the 37th IEEE Symposium on Security and Privacy in California, also highlight just how unaware or unconcerned we can be about the potential security risks of opening unknown files on randomly found devices.

 

Scientists achieve a record 57Gbps through fiber optic lines

Data is key to our modern society, and data transfer has become pivotal for many industries, as well as for our day-to-day lives. Thankfully, maximum speeds are constantly increasing, and while we may not see this in current infrastructure yet, there are reasons to be optimistic.

Photo via Virginia Tech.

University of Illinois researchers report that they’ve set a record for fiber-optic data transmission, delivering 57 Gbps of error-free data. This isn’t the fastest speed ever achieved; that record stands at a whopping 1.125 Tbps (1,125 Gbps), but it was set with an optical communications system that combined multiple transmitter channels and a single receiver. This time, the data transmission was achieved through a fiber-optic link and, more importantly, at room temperature.
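For a sense of scale, here is what 57 Gbps means in practice. The 50 GB file size below is an arbitrary example, not a figure from the research:

```python
# What a 57 Gbps link means in practice; file size is illustrative.
link_gbps = 57                       # demonstrated error-free rate
bytes_per_sec = link_gbps * 1e9 / 8  # convert bits/s to bytes/s
file_gb = 50                         # e.g. a large 4K movie
seconds = file_gb * 1e9 / bytes_per_sec
print(f"A {file_gb} GB file transfers in ~{seconds:.1f} s")  # ~7.0 s
```

In other words, at this rate an entire high-definition movie moves across the link in about the time it takes to read this sentence.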

The research team was led by electrical and computer engineering professor Milton Feng.

“Our big question has always been, how do you make information transmit faster?” Feng said. “There is a lot of data out there, but if your data transmission is not fast enough, you cannot use data that’s been collected; you cannot use upcoming technologies that use large data streams, like virtual reality. The direction toward fiber-optic communication is going to increase because there’s a higher speed data rate, especially over distance.”

Feng’s group has been trying to push the upper limits of this speed, but maintaining room-temperature operation has remained difficult: high speeds tend to heat up materials, and heated materials tend to slow down.

“That’s why data centers are refrigerated and have cooling systems,” Feng said. “For data centers and for commercial use, you’d like a device not to carry a refrigerator. The device needs to be operational from room temperature all the way up to 85 degrees without spending energy and resources on cooling.”

This is what makes the achievement all the more impressive: they managed to keep everything cool enough. The technology could be used not only in data centers, but also in other industries in need of high-speed information transfer.

First ever optical chip to permanently store data developed

Material scientists at Oxford University, collaborating with experts from Karlsruhe, Munster and Exeter, have developed the world’s first light-based memory banks that can store data permanently. The device is built from simple materials already used in CDs and DVDs today, and promises to dramatically improve the speed of modern computing.

A schematic of the device, showing its structure and the propagation of light through it.
Image courtesy of University of Oxford

The von Neumann Bottleneck

Computing power has come a long way in a very short time, with the processors that brought Apollo 11 to the Moon some 50 years ago outmatched by your average smartphone. But while processors have raced ahead, other areas of hardware have lagged behind, holding back our computers’ overall performance. The relatively slow flow of data between the processor and memory is the main limiting factor, as Professor Harish Bhaskaran, who led the research, explains.

“There’s no point using faster processors if the limiting factor is the shuttling of information to-and-from the memory — the so-called von-Neumann bottleneck,” he says. “But we think using light can significantly speed this up.”

However, simply basing the flow of information on light wouldn’t solve the problem.

Think of the processor as a busy downtown area, the data banks as the residential areas and information bits as the cars commuting between the two. Even if the areas were connected by a highway with enough lanes and a light-speed limit, the cars getting off it and driving through the neighborhoods at low speed to reach individual homes would still clog up traffic. In the same way, the need to convert the information from photons back to electrical signals means the bottleneck isn’t removed, merely shifted to that conversion step.

What scientists need is to base the whole system — processing, flow and memory — on light. There have been previous attempts to create this kind of photonic memory storage before, but they proved too volatile to be useful — they require power to store data. For them to be useful as computer disk drives, for example, they need to be able to store data indefinitely, with or without power.

An international team of researchers headed by Oxford University’s Department of Materials has successfully produced just that: the world’s first all-photonic nonvolatile memory chip.

A bright future for data storage

The device uses the phase-change material Ge2Sb2Te5 (GST) — the same as that used in rewritable CDs and DVDs — to store data. The material can assume an amorphous state (like glass) or a crystalline state (like a metal) when subjected to either an electrical or optical pulse.

To take advantage of this property, the team fused small sections of GST onto a silicon nitride ridge (known as a waveguide) that carries light to the chips, and successfully proved that intense pulses sent through the waveguide can produce the desired changes in the material. An intense pulse momentarily melts the material, which then cools quickly and assumes an amorphous structure; a slightly less intense pulse puts it into a crystalline state. This is how the data is stored.

Later, when the data is required, light of much lower intensity is sent through the waveguide. The two states of the GST dictate how much light passes through the chip; the difference is read and interpreted as either 1 or 0.

“This is the first ever truly non-volatile integrated optical memory device to be created,” explains Clarendon Scholar and DPhil student Carlos Ríos, one of the two lead authors of the paper. “And we’ve achieved it using established materials that are known for their long-term data retention — GST remains in the state that it’s placed in for decades.”

And by sending out different wavelengths of light through the waveguide at the same time, a technique called wavelength multiplexing, they can use a single pulse to encode and recover the data at the same time.

“In theory, that means we could read and write to thousands of bits at once, providing virtually unlimited bandwidth,” explains Professor Wolfram Pernice from the University of Munster.

The researchers have also found that different intensities of strong pulses can accurately and repeatedly create different mixtures of amorphous and crystalline structure within the GST. When lower intensity pulses were sent through the waveguide to read the contents of the device, they were also able to detect the subtle differences in transmitted light, allowing them to reliably write and read off eight different levels of state composition, from entirely crystalline to completely amorphous. This multi-state capability could provide memory units with more than the usual binary information of 0 and 1, allowing a single memory cell to store several states, or even to perform calculations itself instead of at the processor.
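Eight distinguishable levels per cell translates directly into more bits per cell. The sketch below illustrates this, plus a hypothetical read-out step; the quantization thresholds are invented for illustration, not taken from the paper:

```python
import math

# Eight distinguishable states per cell => log2(8) = 3 bits per cell,
# versus 1 bit for a binary memory element.
levels = 8
bits_per_cell = int(math.log2(levels))
print(bits_per_cell)  # 3

# Toy read-out: quantize a measured transmission (0.0-1.0) into one
# of `levels` states. Thresholds are invented for illustration; the
# real device would calibrate against known crystalline/amorphous
# mixtures.
def read_cell(transmission: float, levels: int = 8) -> int:
    return min(int(transmission * levels), levels - 1)

print(read_cell(0.95))  # near the amorphous end of the scale -> 7
```

Three bits per cell instead of one is what turns a curiosity about intermediate GST states into a real density win.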

“This is a completely new kind of functionality using proven existing materials,” explains Professor Bhaskaran. “These optical bits can be written with frequencies of up to one gigahertz and could provide huge bandwidths. This is the kind of ultra-fast data storage that modern computing needs.”

Now, the team is working on a number of projects that aim to make use of the new technology. They’re particularly interested in developing a new kind of electro-optical interconnect, which will allow the memory chips to directly interface with other components using light, rather than electrical signals.


‘Data Smashing’ algorithm might help declutter Big Data noise without Human Intervention

There’s an immense well of information humanity is currently sitting on, and it’s only growing exponentially. To make sense of all the noise, whether we’re talking about speech recognition, cosmic-body identification or search engine results, we need highly complex algorithms that use less processing power by hitting the bull’s eye, or as close to it as possible. In the future, such algorithms will rely on machine learning technology that gets smarter with each pass over the data, most likely employing quantum computing as well. Until then, we have to make do with conventional algorithms, and a most exciting paper detailing one such technique was recently published.

Smashing data – the bits and pieces that follow are the most important

Credit: 33rd Square

Called ‘data smashing’, the algorithm tries to fix one major flaw in today’s information processing. Immense amounts of data are currently being fed in, and while algorithms help us declutter, at the end of the day companies and governments still need experts to oversee the process and apply a much-needed human touch. Basically, computers are still pretty bad at solving complex patterns. Sure, they’re awesome at crunching the numbers, but in the end humans need to compare the output scenarios and pick the most relevant answer. As more and more processes are monitored and fed into large data sets, however, this task is becoming ever more difficult, and human experts are in short supply.


The algorithm, developed by Hod Lipson, associate professor of mechanical engineering and of computing and information science, and Ishanu Chattopadhyay, a former postdoctoral associate with Lipson now at the University of Chicago, is nothing short of brilliant. It works by estimating the similarities between streams of arbitrary data without human intervention, and even without access to the data sources.

Basically, streams of data are ‘smashed’ against one another to tease out unique information, and the algorithm measures what remains after each ‘collision’. The more information survives, the less likely it is that the streams originated from the same source.
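For intuition, here is a deliberately crude stand-in, explicitly not the authors’ algorithm (data smashing collides a stream with a probabilistic ‘anti-stream’ and measures what survives): it simply scores two raw symbol streams by the distance between their empirical symbol distributions, with no hand-crafted features:

```python
from collections import Counter

# Crude stand-in for stream comparison: distance between empirical
# symbol distributions. This is NOT the data-smashing algorithm;
# it only illustrates the idea of scoring similarity between raw
# symbol streams without domain-specific feature engineering.
def distribution_distance(a: str, b: str) -> float:
    """Total-variation distance between symbol frequencies of a and b."""
    fa, fb = Counter(a), Counter(b)
    symbols = set(fa) | set(fb)
    return 0.5 * sum(abs(fa[s] / len(a) - fb[s] / len(b)) for s in symbols)

same = distribution_distance("ababababab", "babababa")    # 0.0
diff = distribution_distance("aaaaaaaaaa", "bbbbbbbbbb")  # 1.0
print(same, diff)
```

The real method is far more powerful because it captures temporal structure, not just symbol frequencies, but the output has the same flavor: a number close to zero for streams from the same source, larger for unrelated ones.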

Data smashing could open the door to a new body of research. It’s not just about helping experts sort through data more easily; it might also identify anomalies that are impossible for humans to spot, by virtue of pure computing brute force. For instance, the researchers demonstrated data smashing on data from real-world problems, including the detection of anomalous cardiac activity from heart recordings and the classification of astronomical objects from raw photometry. The results were on par with the accuracy of specialized algorithms and heuristics tweaked by experts.