Tag Archives: information

Is information the fifth state of matter? Physicist says there’s one way to find out

Credit: Pixabay.

Einstein’s theory of special relativity was revolutionary on many levels. One of its groundbreaking consequences is that mass and energy are fundamentally interchangeable. The immediate implication is that you can make mass — tangible matter — out of energy, which explains how the universe as we know it came to be during the Big Bang, when an enormous amount of energy turned into the first particles. But there may be much more to it.

In 2019, physicist Melvin Vopson of the University of Portsmouth proposed that information is equivalent to mass and energy, existing as a separate state of matter, a conjecture known as the mass-energy-information equivalence principle. This would mean that every bit of information has a finite and quantifiable mass. For instance, a hard drive full of information is heavier than the same drive empty.

That’s a bold claim, to say the least. Now, in a new study, Vopson is ready to put his money where his mouth is, proposing an experiment that can verify this conjecture.

“The main idea of the study is that information erasure can be achieved when matter particles annihilate their corresponding antimatter particles. This process essentially erases a matter particle from existence. The annihilation process converts all the [remaining] mass of the annihilating particles into energy, typically gamma photons. However, if the particles do contain information, then this also needs to be conserved upon annihilation, producing some lower-energy photons. In the present study, I predicted the exact energy of the infrared red photons resulting from this information erasure, and I gave a detailed protocol for the experimental testing involving the electron-positron annihilation process,” Vopson told ZME Science.

Information: just another form of matter and energy?

The mass-energy-information equivalence (M/E/I) principle combines Rolf Landauer’s application of the laws of thermodynamics to information — which implies that information is another form of energy — with Claude Shannon’s information theory, which gave us the digital bit. This M/E/I principle, along with its main prediction that information has mass, is what Vopson calls the 1st information conjecture.

The 2nd conjecture is that all elementary particles store information content about themselves, similarly to how living things are encoded by DNA. In another recent study, Vopson used this 2nd conjecture to calculate the information storage capacity of all visible matter in the Universe. The physicist also calculated that — at a current 50% annual growth rate in the number of digital bits humans are producing — half of Earth’s mass would be converted to digital information mass within 150 years.

However, testing these conjectures is not trivial. For instance, a 1-terabyte hard drive filled with digital information would gain a mass of only about 2.5 × 10⁻²⁵ kg compared to the same drive when erased. Measuring such a tiny change in mass is impossible even with the most sensitive scale in the world.
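
That tiny figure follows directly from Landauer’s limit combined with E = mc². As a quick sanity check of the number quoted above (assuming room temperature, 300 K, and 8 × 10¹² bits per terabyte):

```python
import math

# Physical constants (SI units)
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def information_mass_kg(n_bits, temperature_k=300.0):
    """Mass attributed to n_bits of stored information under the
    mass-energy-information conjecture: m = n * k_B * T * ln(2) / c^2."""
    energy_per_bit = K_B * temperature_k * math.log(2)  # Landauer limit, J
    return n_bits * energy_per_bit / C**2

# A 1-terabyte drive holds 8 * 10^12 bits
mass = information_mass_kg(8e12)
print(f"{mass:.2e} kg")  # → 2.56e-25 kg, in line with the figure above
```

The per-bit mass (~3.2 × 10⁻³⁸ kg) also scales linearly with temperature, which is why the proposed experiments rely on temperature as a control knob.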

Instead, Vopson has proposed an experiment that tests both conjectures using a particle-antiparticle collision. Since every particle is supposed to contain information, which supposedly has its own mass, then that information has to go somewhere when the particle is annihilated. In this case, the information should be converted into low-energy infrared photons.

The experiment

According to Vopson’s predictions, an electron-positron collision should produce two high-energy gamma rays, as well as two infrared photons with wavelengths around 50 micrometers. The physicist adds that altering the samples’ temperature wouldn’t influence the energy of the gamma rays, but would shift the wavelength of the infrared photons. This is important because it provides a control mechanism for the experiment that can rule out other physical processes.
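
The two predicted signals can be sketched from basic photon formulas. The ~1.5 bits of information per electron assumed below comes from Vopson’s separate estimate of the information content of elementary particles; it is an input assumption here, not a derived result:

```python
import math

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
K_B = 1.380649e-23    # Boltzmann constant, J/K
M_E = 9.1093837e-31   # electron rest mass, kg

# Each gamma photon carries the electron's rest-mass energy: E = m_e * c^2
gamma_energy_kev = M_E * C**2 / 1.602176634e-16  # convert J -> keV
print(f"gamma photon: {gamma_energy_kev:.0f} keV")  # → 511 keV

# Hypothetical "information" photon: assuming ~1.5 bits per electron erased
# at T = 300 K, E = n_bits * k_B * T * ln 2, and wavelength = h*c / E
def info_photon_wavelength_um(bits=1.509, temperature_k=300.0):
    energy = bits * K_B * temperature_k * math.log(2)
    return H * C / energy * 1e6  # micrometers

print(f"infrared photon: {info_photon_wavelength_um():.0f} um")  # ~46 um
# Raising the sample temperature shifts this wavelength (lambda ∝ 1/T)
# while the 511 keV gammas stay fixed -- the control Vopson describes.
```

The result lands in the ballpark of the ~50-micrometer wavelength quoted above, and shows why the gamma rays are temperature-independent while the infrared photons are not.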

Validating the mass-energy-information equivalence principle could have far-reaching implications for physics as we know it. In a previous interview with ZME Science, Vopson said that if his conjectures are correct, the universe would contain a stupendous amount of digital information. He speculated that — considering all of this — the elusive dark matter could be just information. Only about 5% of the universe is made of baryonic matter (i.e. things we can see or measure), while the remaining 95% of its mass-energy content consists of dark matter and dark energy — placeholder terms physicists use for things they cannot yet identify.

Then there’s the black hole information loss paradox. According to Einstein’s general theory of relativity, the gravity of a black hole is so overwhelming that nothing within its event horizon can escape — not even light. But in the 1970s, Stephen Hawking and collaborators sought to refine our understanding of black holes using quantum theory, one of whose central tenets is that information can never be lost. One of Hawking’s major predictions is that black holes emit radiation, now called Hawking radiation. But with this prediction, the late British physicist had pitted the ultimate laws of physics — general relativity and quantum mechanics — against one another, hence the information loss paradox. The mass-energy-information equivalence principle may lend a helping hand in reconciling it.

“It appears to be exactly the same thing that I am proposing in this latest article, but at very different scales. Looking closely into this problem will be the scope of a different study and for now, it is just an interesting idea that must be followed,” Vopson tells me.

Finally, the mass-energy-information equivalence could help settle a whimsical debate that has been gaining steam lately: the notion that we may all be living inside a computer simulation. The debate can be traced to a seminal 2003 paper by Nick Bostrom of the University of Oxford, which argued that a technologically mature civilization with immense computing power could simulate new realities populated by conscious beings — and that, if such simulations are possible, the odds that we ourselves are living in one may be high.

The computer simulation hypothesis is easy to dismiss, but once you think about it, it’s hard to disprove, too. Vopson thinks the two conjectures could offer a way out of this dilemma.

“It is like saying, how a character in the most advanced computer game ever created, becoming self-aware, could prove that it is inside a computer game? What experiments could this entity design from within the game to prove its reality is indeed computational?  Similarly, if our world is indeed computational / simulation, then how could someone prove this? What experiments should one perform to demonstrate this?”

“From the information storage angle – a simulation requires information to run: the code itself, all the variables, etc… are bits of information stored somewhere.”

“My latest article offers a way of testing our reality from within the simulation, so a positive result would strongly suggest that the simulation hypothesis is probably real,” the physicist said.

Each one of us falls into one of three information-seeking ‘personalities’

Knowing what people want to know, and why, can go a long way toward designing public information campaigns. However, that’s easier said than done. New research sheds some light on the matter, reporting on the criteria people rely on when deciding whether or not to get informed on a topic.

Image via Pixabay.

According to the findings, at least in matters regarding their health, finances, and personal traits, people generally rely on one of three criteria: the emotional reaction they expect the information to trigger, how useful they think the information will be to them, and whether it pertains to something they think about often. The team says each person falls into one of these three “information-seeking types”, and that people tend not to switch types over time.

Knowing, why?

“Vast amounts of information are now available to individuals. This includes everything from information about your genetic make-up to information about social issues and the economy. We wanted to find out: how do people decide what they want to know?” says Professor Tali Sharot from the University College London (UCL) Psychology & Language Sciences, co-lead author of the study. “And why do some people actively seek out information, for example about COVID vaccines, financial inequality and climate change, and others don’t?”

“The information people decide to expose themselves to has important consequences for their health, finance and relationships. By better understanding why people choose to get informed, we could develop ways to convince people to educate themselves.”

The study pools together data the researchers obtained over the course of five experiments with 543 research participants.

In one of the experiments, participants were asked to rate how much they would like to know about a certain topic related to their health — for example, whether they had a gene that put them at risk of developing Alzheimer’s, or one that strengthened their immune system. Another experiment followed the same pattern but substituted financial information (for example, what income percentile they fall into) in lieu of personal health. A third asked them to rate how much they would like to know where their family and friends rated them on personal traits such as intelligence or laziness.

Later on, they were asked how useful they thought the information would be, how they expected to feel upon receiving the info, and how often they thought about the subject matter of each experiment.

Based on their responses during these five experiments, the team explains that people tend to seek out information based predominantly on one of three factors: expected utility, emotional impact, and relevance to their interests. The three-factor model the authors built on these criteria predicted participants’ choices to seek or refuse information more accurately than a range of alternative models they tested.
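
The study’s actual model is fit statistically to behavioral data; purely as an illustration of the idea — the weights and threshold below are made up, not taken from the paper — a linear three-factor seek-or-skip rule might look like this:

```python
def seek_information(utility, affect, frequency,
                     weights=(0.5, 0.3, 0.2), threshold=0.5):
    """Toy three-factor rule: each input is a 0-1 rating of expected
    usefulness, expected emotional impact, and how often the person
    thinks about the topic. Seek the info if the weighted sum clears
    a threshold; the weights capture which motive dominates."""
    score = sum(w * x for w, x in zip(weights, (utility, affect, frequency)))
    return score >= threshold

print(seek_information(0.9, 0.8, 0.7))  # high on all motives → True
print(seek_information(0.1, 0.2, 0.1))  # 0.05 + 0.06 + 0.02 = 0.13 → False
```

In this framing, a person’s “information-seeking type” corresponds to which of the three weights is largest, and the finding that types are stable means those weights barely change across topics or over time.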

Some of the participants also repeated this series of experiments several times, at intervals of a few months. Based on their responses over time, the team explains that people tend to routinely prioritize one of the three motives over the others, and they tend to stick to that one motive over time and across topics. This, they argue, suggests that our motivators in this regard are ‘trait-like’.

These traits have a direct impact on our lives. The most obvious is that they draw us toward, or away from, certain topics and pieces of data. But they also have a bearing on our wellbeing. In two of the five experiments, participants were also asked to fill in a questionnaire estimating their general mental health. The team explains that participants who sought out information about traits they often thought about showed more signs of positive mental health.

“By understanding people’s motivations to seek information, policy makers may be able to increase the likelihood that people will engage with and benefit from vital information. For example, if policy makers highlight the potential usefulness of their message and the positive feelings that it may elicit, they may improve the effectiveness of their message,” says PhD student Christopher Kelly from UCL Psychology & Language Sciences, co-lead author of the study.

“The research can also help policy makers decide whether information, for instance on food labels, needs to be disclosed, by describing how to fully assess the impact of information on welfare. At the moment policy-makers overlook the impact of information on people’s emotions or ability to understand the world around them, and focus only on whether information can guide decisions.”

The paper “Individual differences in information-seeking” has been published in the journal Nature Communications.

Book review: ‘Information: A Historical Companion’

“Information: A Historical Companion”
Edited by Ann Blair, Paul Duguid, Anja-Silvia Goeing, and Anthony Grafton
Princeton University Press, 904 pages | Buy on Amazon

In 1964, media theorist Marshall McLuhan declared that he was living in the “age of information.” Little did he know how much the birth of the World Wide Web would boost the volume of data we share today. In 2020, during the now-classic “internet minute,” people sent more than 40 million messages through WhatsApp, posted 350,000 stories on Instagram, and shared 150,000 photos on Facebook.

How did we end up producing so much information? How did we learn to process, search, and store it? These are some of the questions that ‘Information: A Historical Companion’, edited by Ann Blair, Paul Duguid, Anja-Silvia Goeing, and Anthony Grafton, tries to answer. Its essays, written by academics from all around the world, tell the story of information beginning with ancient societies. The authors take us through East Asia, early modern Europe, the medieval Islamic world, and North America. The book’s 13 chapters offer chronological narratives, discussing how information shaped the world as we know it. They are followed by more than 100 entries that focus on concepts, tools, and methods related to information.

The book also describes more recent developments in the field, including algorithms, intellectual property, privacy, databases, censorship, and propaganda. It also looks at capitalism, information circles, and the crisis of democracy, explaining some of the most famous theories academics and technologists came up with.

The thirteenth chapter, on communication and computation, presents Babbage’s Difference Engine, Claude Shannon’s influential “theory of communication,” and Vannevar Bush’s “memex” device for storing information, which originally appeared in his 1945 article “As We May Think.” It also describes more recent ideas, including the TCP/IP networking protocol, ARPANET, and WWW. None of today’s technologies would have existed without these early innovations.
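
Shannon’s theory, mentioned above, is what lets us measure information in bits at all: a maximally unpredictable stream of symbols carries more bits per symbol than a repetitive one. A minimal sketch of his entropy formula, H = −Σ pᵢ log₂ pᵢ:

```python
import math
from collections import Counter

def shannon_entropy_bits(text):
    """Average information per symbol of a string, in bits:
    H = -sum over symbols of p_i * log2(p_i)."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two equally likely symbols carry exactly one bit per symbol
print(round(shannon_entropy_bits("abab"), 2))  # → 1.0
# A perfectly predictable stream carries zero bits per symbol
print(round(shannon_entropy_bits("aaaa"), 2))  # → 0.0
```
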

The book is also an invitation to ponder upon the belief that the abundance of information would lead to increased democracy and a better life for us all. It showcases the thoughts of J.C.R. Licklider and Douglas Engelbart, who said that technology would set us free, believing that information feeds democracy.

“The optimism that runs through these claims has to confront the contrary feelings that rather than more information being a good thing, it can be highly problematic; and that while control over information may be beneficial, we are often in danger of being controlled by information and the algorithms it feeds,” writes Paul Duguid. “Both the optimistic and the pessimistic views have a curiously long history.”

At the end of the chapter, Duguid put the reason for writing this book in a nutshell: “Perhaps, after all, the dots of our ‘information age’ are more closely connected to the past than those who deem history irrelevant realize.”

Physicists claim information is the fifth state of matter. By 2245, half of Earth’s mass could be converted to digital bits

Credit: Pixabay.

All the matter that surrounds us exists either as a solid, liquid, gas, or plasma. But as our lives become increasingly digitized, more and more physical matter, such as oil, silicon, and carbon, is required to sustain our insatiable need for more computing power and information processing.

Given current trends of 50% annual growth in the number of digital bits produced, Melvin Vopson, a physicist at the University of Portsmouth in the UK, forecasted that the number of bits would equal the number of atoms on Earth in approximately 150 years. By 2245, half of Earth’s mass would be converted to digital information mass, according to a study published today in AIP Advances.
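
The arithmetic behind those dates is straightforward compound growth. The inputs below are rough, order-of-magnitude assumptions (the atom count, the mass per bit at room temperature, and a current data stock of ~59 zettabytes, an often-cited industry estimate for 2020), not figures lifted from the paper:

```python
import math

ATOMS_ON_EARTH = 1.33e50   # commonly cited order-of-magnitude estimate
CURRENT_BITS = 4.7e23      # ~59 zettabytes of digital data (assumed, 2020)
MASS_PER_BIT = 3.19e-38    # kg per bit at 300 K: k_B * T * ln(2) / c^2
BITS_FOR_HALF_EARTH = 0.5 * 5.97e24 / MASS_PER_BIT  # half Earth's mass, in bits
GROWTH = 1.50              # 50% more bits produced every year

def years_until(target_bits, start=CURRENT_BITS, growth=GROWTH):
    """Years of compound growth needed to go from start to target_bits."""
    return math.log(target_bits / start) / math.log(growth)

print(f"bits = atoms in ~{years_until(ATOMS_ON_EARTH):.0f} years")            # ~150
print(f"half Earth's mass in ~{years_until(BITS_FOR_HALF_EARTH):.0f} years")  # ~217
```

With these assumptions the bit count overtakes the atom count in roughly 150 years, and the information mass reaches half of Earth’s mass in a bit over two centuries — in the same ballpark as the study’s 2245 date.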

It’s just a matter of time before digital bits outnumber all the atoms on Earth — a future in which the world is converted into a planetary-sized supercomputer. All of this leads to an enticing theory: that information is no different from ordinary matter. In fact, Vopson says, information should be considered the fifth state of matter (or sixth, if you count Bose-Einstein condensates).

“How can information, a mathematical concept, be physical? To my surprise, this principle, which makes sense theoretically, has now been demonstrated experimentally,” Vopson told me in an email.

In the new study, Vopson draws parallels between Einstein’s special relativity, which among other things states that mass and energy are equivalent; Rolf Landauer’s application of the laws of thermodynamics to information theory, which equates information to energy; and Claude Shannon’s information theory, which led to the invention of the first digital bit.

“Since both special relativity and Landauer’s principle have been proven correct, it is highly probable that the new principle will also be proven correct, although currently it is just a theory,” Vopson said.

According to Vopson, physicists have always expanded their awareness of what makes up the universe. As scientists refined their instruments and theories, they learned that the universe isn’t just made of baryonic matter (particles), but also radiation, dark matter, dark energy, and space-time. Information, although seemingly more abstract, could naturally join them because it is such an integral part of “both non-organic matter and life”.

“Although information manifests itself in many formats including analogue information, biological DNA encoded information and digital information, the most fundamental form is the binary digital bit because it can successfully represent or duplicate all existing forms of information. This is also valid for quantum processing/quantum information / q-bits, as the final output of a quantum computer is still in the binary digital format,” Vopson told ZME Science.

“These ideas are best articulated by Wheeler’s suggestion that, quote, ‘…the universe emanates from the information inherent within it…’ or, ‘It from bit’,” he added.

“Since there are incredibly large numbers of elementary particles making up the universe, then the visible universe would also contain a huge amount of digital bits associated with the information content within these particles.”

“Landauer’s principle demonstrated that information is physical. The mass-energy-information equivalence principle extrapolated this and demonstrated that information has in fact mass. Since there is a lot of information associated with the baryonic mass in the universe, then it must be a huge amount of mass that corresponds to that information. This is the basis of postulating that the information is the 5th element, or the 5th form of matter,” the physicist explained.

Dark matter and information, are they the same?

But how could something as intangible as information have mass? The new paper argues that such a thing is indeed possible and could manifest itself through gravitational interactions. In fact, the elusive dark matter that every theoretical physicist worth their salt is now searching for may just be information.

“For over 60 years we have been trying unsuccessfully to detect, isolate or understand what is the mysterious dark matter in the universe. Its presence is widely accepted in order to explain the dynamics and stability of cluster of galaxies and the galaxy rotation curves. Unfortunately, all efforts to isolate or detect dark matter have failed so far. In fact, it is well accepted that the matter distribution in the universe is 5% ordinary baryonic matter, 27% dark matter and 68% dark energy. This is equivalent to saying that about 5% of the visible universe is known and 95% of it we don’t have a clue what it is made of, i.e. dark matter and dark energy. If mass-energy-information equivalence principle is correct and information has indeed mass, a digital informational universe would contain a lot of it, and perhaps the missing dark matter could be just information,” Vopson said.

The rate at which the number of bits equals Earth’s mass for various information generation growth scenarios. Credit: Melvin Vopson.

The implications of the mass-energy-information equivalence principle are important considering the rate at which humanity has been generating digital information. According to IBM Research, 90% of all the data generated by humans thus far has been created in the last ten years alone.

In fact, the authors of the new study employed conservative growth rates for information generation and storage. In reality, the rate of information generation is greater and may even accelerate in the future, as the current COVID-19 pandemic has demonstrated.

As long as civilization doesn’t collapse at the hands of climate change or thermonuclear war, the ever-expanding digital domain seems to be steering the world toward a future where our existence is intrinsically linked to computers. A century from now, the line between physical reality and virtual reality might be so blurred that you may not be able to tell the difference.

“Today we moved the banks on Internet and we use digital cash, we store everything on digital storage platforms and the new industries are the digital data storage servers and the high tech corporations. I see a slow transitioning to a world, just as depicted in many SciFi movies, where our basic VR helmet kit becomes more like a simulated cyberspace, perhaps driven by gaming industry at the beginning, then entering the education market, tourism, sex industry, health care, etc…eventually these cyberspaces joining together into a cyber reality where people can meet up and undertake activities, go to work in a simulated cyber office building, etc, until the real world is indistinguishable to the simulated world,” Vopson said.

“If this is simultaneously happening with the evolution of AI, and the humans’ ability to achieve transcendence into machines (I believe there is even a movement called transhumanism, i.e. people that believe in merging biological life with the computers), then it is not too hard to see how the whole landscape will change to a digitally simulated world. In fact, a growing number of serious academics believe we already live in a simulated universe. Prof. Nick Bostrom from Oxford University first proposed this, known as the simulation hypothesis. I do not like this idea, but unfortunately, some of my recent research supports this, or points to this outcome in the future,” he added.

In order to accommodate more bits than there are atoms on Earth, the way humans generate and store information has to change fundamentally. How exactly that might happen is impossible to tell at this point — that’s a problem that scientists alive 100 to 200 years from now will have to figure out. Some ideas may include using non-tangible storage media such as photons, vacuum, and holograms.

“Everyone should find this interesting because the projections show that we are going to produce so much digital content in the near future that the number of bits produced would equal all the atoms on Earth. So the question is: Where do we store this information? How do we power this? It is a wake-up call for the big data industries, internet giants, high tech companies, energy research, and environmental research. I call this the invisible crisis, as today it is truly an invisible problem, but the projections show a different story,” Vopson concluded.


YouTube conspiracy theorists dominate climate science content by hijacking search terms

YouTube is rife with false info regarding climate change, a new study finds.


Image via Pixabay.

If you’re planning to go online and watch a few informational videos about climate change over dinner, I have some bad news: a new study reports that some scientific terms (such as ‘geoengineering’) are being dominated by conspiracy theorists. These individuals have ‘hijacked’ the terms so that searches take users to a list of almost entirely non-scientific video content.

The authors recommend that influential YouTubers, politicians, and influential individuals in popular culture work together to ensure that scientifically-accurate content reaches as many people as possible.

WrongTube

“Searching YouTube for climate-science and climate-engineering-related terms finds fewer than half of the videos represent mainstream scientific views,” says study author Dr. Joachim Allgaier, Senior Researcher at the RWTH Aachen University.

“It’s alarming to find that the majority of videos propagate conspiracy theories about climate science and technology.”

YouTube is a humongous platform. Almost 2 billion logged-in users visit it every month, which is roughly half the online world. Many people, including yours truly, see YouTube as a great resource for learning, and many channels produce accessible content about science, health, and technology. However, whether this content is reliable or not is a whole different discussion.

Allgaier wanted to know the quality of the information users find when searching for climate change and climate modification — it turns out much of it is complete baloney.

“So far, research has focused on the most-watched videos, checking their scientific accuracy, but this doesn’t tell us what an average internet user will find, as the results are influenced by previous search and watch histories,” reports Allgaier. “To combat this, I used the anonymization tool TOR to avoid personalization of the results.”

Allgaier searched for ten climate change-related terms and analyzed 200 of the videos YouTube showed him (all of which treated climate change or climate modification topics). Most of these videos go directly against the worldwide scientific consensus, as detailed by the UN Intergovernmental Panel on Climate Change, he reports.


Consensus such as “chemtrails are a conspiracy theory”.
Image via Pixabay.

Many of these videos propagated the chemtrail conspiracy theory, Allgaier explains. In broad lines, chemtrailers believe that the condensation trails airplanes generate are purposefully laced with harmful substances to modify the weather, control human populations, or carry out biological and chemical warfare. It shouldn’t need saying, but there is no evidence to support this theory.

Worryingly, however, Allgaier found that these theorists have taken over some scientific terms by mixing them into their content. Chemtrailers, he explains, explicitly advise their followers to use scientific terms in their videos to make them seem more reliable.

“Within the scientific community, ‘geoengineering’ describes technology with the potential to deal with the serious consequences of climate change, if we don’t manage to reduce greenhouse gases successfully. For example, greenhouse gas removal, solar radiation management or massive forestation to absorb carbon dioxide,” explains Allgaier.

“However, people searching for ‘geoengineering’ or ‘climate modification’ on YouTube won’t find any information on these topics in the way they are discussed by scientists and engineers. Instead, searching for these terms results in videos that leave users exposed to entirely non-scientific video content.”

Some of the conspiracy videos Allgaier found were monetized via adverts or through the sale of merchandise with conspiracy-theory motives. This made him question whether YouTube’s search algorithms help direct traffic towards this ‘dubious’ content. The way these algorithms work “is not very transparent,” he says, arguing that “YouTube should take responsibility to ensure its users will find high-quality information if they search for scientific and biomedical terms, instead of being exposed to doubtful conspiracy videos.”

Allgaier suggests that scientists and science communicators should seriously consider YouTube as a platform for sharing scientific information.

“YouTube has an enormous reach as an information channel, and some of the popular science YouTubers are doing an excellent job at communicating complex subjects and reaching new audiences,” he explains.

“Scientists could form alliances with science-communicators, politicians and those in popular culture in order to reach out to the widest-possible audience. They should speak out publicly about their research and be transparent in order to keep established trustful relationships with citizens and society.”

The paper “Science and Environmental Communication via Online Video: Strategically Distorted Communications on Climate Change and Climate Engineering on YouTube” has been published in the journal Frontiers in Communication.

The brain rewards new information like it does food, money, or drugs

Credit: Pixabay.

Are you constantly checking your phone even though you’re not expecting any important messages? Well, blame your brain. According to a new study, digital addiction may be pinned to the way the brain rewards new information, which it seems to value in the same way as money and food.

“To the brain, information is its own reward, above and beyond whether it’s useful,” said Assoc. Prof. Ming Hsu, a neuroeconomist. “And just as our brains like empty calories from junk food, they can overvalue information that makes us feel good but may not be useful—what some may call idle curiosity.”

Hsu’s research utilized functional magnetic resonance imaging (fMRI), psychological theory, economic modeling, and machine learning in order to answer two fundamental questions about curiosity: first, why do people seek information, and second, how does curiosity manifest itself inside the brain?

There are two leading theories about the purpose and function of curiosity. Economists tend to view curiosity as a means to an end, helping actors gain information that may improve decision making. Psychologists, on the other hand, see curiosity as an innate motivation that triggers action without the need for any other motive — reading just for the sake of reading or digging in the dirt just to see what lies beneath the soil.

From a neuroscience perspective, Hsu and colleagues analyzed curiosity by scanning the brains of volunteers who had to play a gambling game. Each participant played a series of lotteries where they had the opportunity to pay in order to find out more about the odds of winning. In some lotteries where the stakes were high, this information could be highly valuable — for instance, when what seemed like a longshot was revealed to actually be very likely to happen. However, in other cases, this information was worth very little if the stakes of the lotteries were low.

For the most part, the people in the study behaved rationally from an economic standpoint: they chose to spend money when it made sense, i.e. when it helped them win more. Some choices, however, weren’t rational when seen from an economic standpoint. For instance, the participants tended to overvalue information pertaining to high-priced lotteries. In other words, when the stakes were high, people displayed curiosity in information even when that information had little effect on their decision whether or not to play.

In order to explain the two sets of behaviors, both economic and psychological models had to be taken into account. In other words, people choose to seek out new information both for its immediate, actual benefits and for the anticipation of those benefits, whether or not the information turns out to be useful.

“Anticipation serves to amplify how good or bad something seems, and the anticipation of a more pleasurable reward makes the information appear even more valuable,” Hsu said in a statement.

The fMRI scans showed that the information about the lotteries’ odds activated the striatum and ventromedial prefrontal cortex (vmPFC), which are dopamine-producing reward areas activated by food, money, and many addictive drugs. These brain areas were activated no matter if the information was useful and changed a person’s original decision, or not.

Using a machine learning technique called support vector regression, the researchers showed that the brain uses the same neural code for information about lottery odds as it does for money. Hsu says that just as we might convert a steak dinner or a vacation into a monetary value, the brain converts curiosity about information using the same common code it uses for less abstract rewards like money.
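The paper used support vector regression on fMRI voxel activity; as a rough illustration of the decoding idea only, here is a sketch with simulated data, with ordinary least squares standing in for SVR (the data, dimensions, and names are all hypothetical):

```python
import numpy as np

# Simulated stand-in data: 100 trials of 20 "voxel" activations, with a
# subjective value linearly embedded plus noise. This is NOT the study's
# data; it only illustrates decoding value from neural activity.
rng = np.random.default_rng(0)
voxels = rng.normal(size=(100, 20))
true_w = rng.normal(size=20)
value = voxels @ true_w + rng.normal(scale=0.1, size=100)

# Fit a linear decoder on 80 trials, then read out "value" from the
# held-out 20 trials of simulated brain activity.
w, *_ = np.linalg.lstsq(voxels[:80], value[:80], rcond=None)
predicted = voxels[80:] @ w
corr = np.corrcoef(predicted, value[80:])[0, 1]
print(corr)  # close to 1: the activity carries a readable value code
```

If a common code exists, the same decoder trained on monetary rewards should also predict how much someone values a piece of information, which is what the team reported.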

“We can look into the brain and tell how much someone wants a piece of information, and then translate that brain activity into monetary amounts,” he says.

The new findings might explain people’s tendency to overconsume digital information or why we find notifications of new likes on our social media photos so delicious and irresistible.

“The way our brains respond to the anticipation of a pleasurable reward is an important reason why people are susceptible to clickbait,” Hsu says. “Just like junk food, this might be a situation where previously adaptive mechanisms get exploited now that we have unprecedented access to novel curiosities.”

Harvard team turns bacteria into living hard drives

A research team from Harvard University, led by Seth Shipman and Jeff Nivala, has developed a novel method of writing information into the genetic code of living bacterial cells. The bacteria pass the information on to their descendants, and it can later be read out by genotyping them.


Storing information in DNA isn’t a new idea — for starters, nature’s been doing it for a long, long time now. Researchers at the University of Washington have also shown that DNA can be synthesized in the lab and encoded with any information we want — to prove it, they encoded a whole book and some images into DNA strands. But combining the two methods into an efficient data storage process has proven beyond our grasp until now.

“Rather than synthesizing DNA and cutting it into a living cell, we wanted to know if we could use nature’s own methods to write directly onto the genome of a bacterial cell, so it gets copied and pasted into every subsequent generation,” says Shipman. “But working within a living cell is an entirely different story and challenge.”

The team exploited an immune response certain bacteria use to protect themselves from viral infection, called the CRISPR/Cas system. When the bacteria are attacked by viruses, they physically cut out a segment of the invaders’ DNA and paste it into a specific region of their own genome. This way, if that same virus attacks again, the bacteria can identify it and respond accordingly. Plus, the cell passes this information over to its progeny, transferring the viral immunity to future generations.

The geneticists found that if you introduce a piece of genetic data that looks like viral DNA into a colony of bacteria with the CRISPR/Cas system, they will incorporate it into their genetic code. So Shipman and Nivala flooded a colony of E. coli bacteria that have this system with loose segments of viral-looking DNA strands, and they gulped it all up — essentially becoming tiny, living hard drives.

The segments used were arbitrary strings of A, T, C, G nucleotides with chunks of viral DNA at the end. Shipman introduced one segment of information at a time and let the bacteria do the rest, storing away information like fastidious librarians.

Conveniently enough, the bacteria store new immune system entries sequentially, with earlier viral DNA recorded before that of more recent infections.

“That’s quite important,” Shipman says. “If the new information was just stored randomly, that wouldn’t be nearly as informative. You’d have to have tags on each piece of information to know when it was introduced into the cell. Here it’s ordered sequentially, like the way you write down the words in a sentence.”

Bugs with the bugs

One issue the team ran into is that not all of the bacteria record every strand of DNA introduced to the culture. So even if you introduce the information step by step, let’s say the numbers from 1 to 5, some bacteria would have “12345” but others may only have “12” or “245” and so on. But Shipman thinks that because you can rapidly genotype thousands or millions of bacteria in a colony and because the data is always stored sequentially, you’ll be able to clearly deduce the full message even with these errors.
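Shipman’s point, that sequential storage lets you recover the full message from many partial reads, can be illustrated with a small sketch. Assuming each genotyped bacterium yields an ordered (if incomplete) subsequence of distinct symbols — a hypothetical simplification of the real readout — the ordering constraints from all reads can simply be merged:

```python
from itertools import combinations
from graphlib import TopologicalSorter

def reconstruct(reads):
    # Each read preserves the original order, so every pair within a
    # read is a "comes before" constraint. Merging all constraints and
    # topologically sorting them recovers the full message.
    predecessors = {}
    for read in reads:
        for sym in read:
            predecessors.setdefault(sym, set())
        for earlier, later in combinations(read, 2):
            predecessors[later].add(earlier)
    return "".join(TopologicalSorter(predecessors).static_order())

# Incomplete reads like "12" and "245" still agree with the full message
print(reconstruct(["12345", "12", "245"]))  # -> 12345
```

The real problem is messier (repeated sequences, sequencing errors), but the ordering guarantee is what makes the deduction possible at all.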

Shipman adds that the 100 bytes his team demonstrated are nowhere near the limit. Cells like the microorganism Sulfolobus tokodaii could potentially store more than 3,000 bytes of data. And with synthetic engineering, you could design or program specialized hard-drive bacteria with vastly expanded regions of their genetic code, able to rapidly upload vast amounts of data.

Got an exam coming up? Better start sketching

A new study found that drawing information you need to remember is a very efficient way to enhance your memory. The researchers believe that the act of drawing helps create a more cohesive memory as it integrates visual, motor and semantic information.

Image via youtube

“We pitted drawing against a number of other known encoding strategies, but drawing always came out on top,” said lead author Jeffrey Wammes, PhD candidate in the Department of Psychology at the University of Waterloo.

Wammes’ team included fellow PhD candidate Melissa Meade and Professor Myra Fernandes. Together, they enlisted some of the University’s students and presented them with a list of simple, easily drawn words, such as “apple.” Participants were given 40 seconds in which to draw or write out the word repeatedly. After this, they were given a filler task of classifying musical tones, to keep them from rehearsing the words. In the last step of the trial, the students were asked to recall as many of the initial words as they could in just 60 seconds.

“We discovered a significant recall advantage for words that were drawn as compared to those that were written,” said Wammes.

“Participants often recalled more than twice as many drawn than written words. We labelled this benefit ‘the drawing effect,’ which refers to this distinct advantage of drawing words relative to writing them out.”

In later variations of this experiment, students were asked to draw the words repeatedly or add visual details to the written letters — shading or doodling on them for example. Here, the team found the same results; memorizing by drawing was more efficient than all other alternatives. Drawing led to better later memory performance than listing physical characteristics, creating mental images, and viewing pictures of the objects depicted by the words.

“Importantly, the quality of the drawings people made did not seem to matter, suggesting that everyone could benefit from this memory strategy, regardless of their artistic talent. In line with this, we showed that people still gained a huge advantage in later memory, even when they had just 4 seconds to draw their picture,” said Wammes.

While the drawing effect proved itself in testing, it’s worth noting that the experiments were conducted with single words only. The team is now working to find out why the memory benefit of drawing is so powerful, and whether it carries over to other types of information.

The full paper, titled “The drawing effect: Evidence for reliable and robust memory benefits in free recall” has been published online in The Quarterly Journal of Experimental Psychology and can be read here.

Would you be willing to take an electric shock in the name of curiosity? Science says yes, several actually

Curiosity is probably the single most powerful force behind our species’ scientific discoveries. It can drive us to explore and discover even if the outcome might be painful or harmful. But this need to discover and learn can also become a curse; a new study found that people are willing to face unpleasant outcomes with no apparent benefits just to sate their curiosity.

Curiosity; killer of cats and purveyor of great shots since the dawn of time.
Image credits flickr user Esin Üstün.

Previous research into curiosity found that it can drive humans to seek out miserable or risky experiences, such as viewing gruesome scenes or exploring dangerous terrain, in their search for information. Bowen Ruan of the Wisconsin School of Business and co-author Christopher Hsee of the University of Chicago Booth School of Business believe that our primal need to resolve uncertainty, regardless of the personal harm or injury we might endure in the process, is the cornerstone upon which our curiosity is based.

So they designed a series of experiments exposing participants to several unpleasant outcomes, to see how far they would go to obtain a sense of certainty about their environment. In one of the studies, 54 college students were taken to a lab with electric shock pens supposedly left over from a previous experiment. They were told that they were free to pass the time by testing the pens while the experiment they were about to take part in was set up.

*click*
Image credits smartphotostock

Some of the participants had color-coded pens: red stickers for the five pens that would deliver a shock, and green stickers for the five that wouldn’t. Others, however, only had pens with yellow stickers, so they had no way of knowing what would happen if they clicked them. They were also told that only some of these pens still had working batteries, compounding their level of uncertainty. In the meantime, the team counted how many times each participant clicked each type of pen.

While they waited, students who knew the outcome clicked one green pen and two red ones on average. But those that had no clue what was going to happen clicked noticeably more, around five pens each.

For the second study, another group of students were shown 10 pens of each color. Here too students clicked the pens with uncertain outcomes more than those which were clearly identified as safe or shock-inducing.

“Just as curiosity drove Pandora to open the box despite being warned of its pernicious contents, curiosity can lure humans–like you and me–to seek information with predictably ominous consequences,” explains study author Bowen Ruan of the Wisconsin School of Business at the University of Wisconsin-Madison.

For the third study, the researchers wanted to know how well their findings hold under different circumstances, and if satiating their curiosity would make participants feel worse. They designed a test involving exposure to both pleasant and unpleasant sound recordings. Participants had to choose between 48 buttons on a computer screen, each with a different sound recording attached to it. For example, the “nails” button would play a recording of nails on a chalkboard, buttons labeled “water” played a sound of running water, and buttons labeled “?” could play either sound.

On average, students who had to choose from mostly identified buttons clicked around 28 of them. In contrast, those who had mostly unidentified buttons clicked around 39 of them. Participants who clicked more also reported feeling worse at the end of the experiment. Those who had mostly uncertain buttons reported being less happy overall than those who faced mostly certain outcomes.

The team carried out a separate, online study in which participants were shown partially obscured pictures of unpleasant insects — centipedes, cockroaches, and silverfish for example — and were informed they could click the image to reveal the insect. As with the previous studies, participants clicked on more pictures, and felt worse overall, when faced with uncertain results.

But interestingly, when they were prompted to predict how they would feel about their choice first, their number of clicks went down (and they reported feeling happier overall). This suggests that predicting the consequences of your choice might dampen your curiosity.

So while curiosity is often seen as one of the more desirable human qualities, it can also be a curse. Many times our drive to seek information and satisfy our curiosity can become a huge risk.

“Curious people do not always perform consequentialist cost-benefit analyses and may be tempted to seek the missing information even when the outcome is expectedly harmful,” Ruan and Hsee write in their paper.

“We hope this research draws attention to the risk of information seeking in our epoch, the epoch of information,” Ruan concludes.

The full paper, titled “The Pandora Effect: The Power and Peril of Curiosity,” has been published online in the journal Psychological Science and can be read here.


DNA might make the ultimate time capsule; one gram is enough to store most human knowledge

The best storage medium might actually be DNA, considering the vast amount of information it can store relative to its weight — one gram can theoretically hold some 455 exabytes, more than the data collectively stored by Google, Facebook and every other tech company. It’s also very durable. Remember how some scientists thought about cloning mammoths? Well, the DNA they would use is at least 4,000 years old, and DNA has been extracted and sequenced from much older samples, like a 700,000-year-old horse. With this in mind, some scientists got the idea of storing the most critical pieces of modern human knowledge in specially treated DNA — maybe the most effective time capsule ever.
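That 455-exabyte figure holds up to a quick back-of-the-envelope check, assuming roughly 330 g/mol per nucleotide and 2 bits per base (both ballpark values, not from the study):

```python
AVOGADRO = 6.022e23
GRAMS_PER_MOL_NUCLEOTIDE = 330   # approximate average mass of one DNA nucleotide

nucleotides_per_gram = AVOGADRO / GRAMS_PER_MOL_NUCLEOTIDE
bits_per_gram = nucleotides_per_gram * 2        # 4 bases = 2 bits per base
exabytes_per_gram = bits_per_gram / 8 / 1e18    # bits -> bytes -> exabytes

print(round(exabytes_per_gram))  # ~456, matching the quoted ~455 exabytes
```

Real-world schemes store far less per gram, since error correction, synthesis constraints and redundant copies all eat into that theoretical ceiling.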

DNA: the information vault

DNA coding info

Image: Evolution News


History has taught us that whether a civilization’s collective knowledge and science can be passed down to future generations after its collapse depends directly on its ability to store that knowledge somehow: scrolls, books, stone tablets, marked tombs and so on. This is why we now know so much about the Romans, ancient Greeks or Chinese, but so little about North American natives or the Incas. Concerning the latter, the Incas had Amautas, people who were paid to memorize history and teach it to their successors. This system worked for hundreds of years, but it’s clearly unreliable. People might forget some facts, make up new ones instead and rewrite history as they please. Not to mention that if an Amauta died with no one trained in his stead, his knowledge, and that of his whole people, was lost forever. It’s not like it didn’t happen.

In times like ours, however, when more information is produced every day than in the whole of antiquity, what’s the best way to store our information, so that our children’s children might learn our current understanding of nature and build upon it for generations to come? Books have served us well so far, but they clearly can’t hope to contain the vast amounts of information already here and still to come. This is where DNA might come in.

Robert Grass of the Swiss Federal Institute of Technology in Zurich is currently exploring ways to write and read information in DNA molecules. The simplest method involves treating the DNA bases adenine (A) and cytosine (C) as “0” and guanine (G) and thymine (T) as “1”. Now, the problem with DNA is that when it deteriorates, the code inside becomes riddled with errors — sort of like bad sectors on a magnetic hard drive. These errors always give researchers headaches when they’re trying to sequence ancient DNA from fossils. Error-correcting techniques like Reed-Solomon codes can fix some of these glitches and partially reconstruct the data. The smartest thing you can do, however, is to prevent DNA degradation altogether, and here scientists are taking cues from one of their pet favorites — fossils.

Excluding all water close to the DNA is one of the most important steps, so the researchers encased the DNA in microscopic spheres of glass. Two documents, totaling 83 kilobytes, were encoded in the DNA: the Swiss federal charter from 1291, and the Archimedes Palimpsest, a 10th-century version of ancient Greek texts. To test how well the capsules protected the DNA, these were kept at 60, 65 and 70 °C for a week to simulate ageing. No errors were reported. If kept at a comfortable 10 °C, the researchers estimate the DNA and its precious encoded information could survive for 2,000 years. But it could survive for up to two million years if kept at a chilly -18 °C at the Global Seed Vault in the Arctic, as reported in Angewandte Chemie.

So, sounds like the perfect hard drive, right? Well, can you imagine the read/write speed on it? It must be terrible. In fact, writing is so difficult that it cost around £1,000 to encode these 83 kilobytes — Wikipedia could end up costing billions! This might change as technology evolves. Let’s not forget that it cost hundreds of millions of dollars to sequence the first human genome; years later, sequencing is in the tens of thousands of dollars range. With this sort of technological leap in mind, it’s reasonable to imagine a time when DNA might actually be employed as a functional storage device — the biological computer memory of the future. Until then, Grass thinks we should focus on storing essential information that future historians might want to read.

“If you look at how we look at the Middle Ages, it’s very influenced by what information has been stored,” he says. “It’s very important that we get a relatively neutral documentation of our current time and store that.”

But if modern civilization collapses, will anyone know how to read DNA then?

via New Scientist



Sending a text message using Vodka molecules – the first continuous molecular communication

In nature, organisms communicate in various ways, be it through acoustic or chemical signals. Insects, for instance, relay important information, such as a threat to a hive, using pheromones — excreted chemicals with a particular signature. Scientists at the University of Warwick in the UK and York University in Canada have created a molecular communication system which they used to send a continuous signal, like a text message, just by spraying alcohol molecules. In a more advanced form, the system could facilitate communication in environments where electromagnetic waves can’t be used, like through pipelines or on oil rigs.

N. Farsad et al./PLOS ONE


Previous attempts have also relayed information using molecular signaling; however, this is the first time continuous data transmission has been achieved. Moreover, the system was built using off-the-shelf components with an overall cost that doesn’t exceed $100.

Molecular receiver: one of three sensors (for various types of tests) demodulates the incoming signal by assigning the bit 1 to increasing concentration and 0 to decreasing. The binary data is converted back to letters in the Arduino board and sent via serial port to a computer for display. (Credit: N. Farsad et al./PLOS ONE)


The message is input through an LCD Shield Kit, then encoded by an Arduino board as a binary sequence — 1 corresponds to a higher concentration of molecules, 0 to a lower concentration. In their demonstration, the researchers programmed a sprayer to release evaporated alcohol molecules several meters across open space, where the signal was decoded by a receiver. The message was “O Canada” — a tribute to the Canadian national anthem.
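The encoding side of this scheme, a spray for 1 and silence for 0, one symbol per time slot, can be sketched as follows. The `SPRAY`/`WAIT` labels are purely illustrative; the real receiver demodulated rising versus falling alcohol concentration:

```python
def text_to_sprays(message, bits_per_char=8):
    # On-off keying: '1' -> spray in that time slot, '0' -> stay silent
    bits = "".join(format(ord(c), f"0{bits_per_char}b") for c in message)
    return ["SPRAY" if b == "1" else "WAIT" for b in bits]

def sprays_to_text(schedule, bits_per_char=8):
    # The receiver's job in reverse: slots back to bits, bits to characters
    bits = "".join("1" if s == "SPRAY" else "0" for s in schedule)
    chars = [bits[i:i + bits_per_char] for i in range(0, len(bits), bits_per_char)]
    return "".join(chr(int(c, 2)) for c in chars)

schedule = text_to_sprays("O Canada")
print(len(schedule), "time slots")  # 8 characters x 8 bits = 64 time slots
```

The hard part in practice isn’t this mapping but the physical channel: molecules diffuse and linger, so consecutive symbols smear into each other, which is why the real receiver looks at concentration gradients rather than absolute levels.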

A sprayed text message

“We believe we have sent the world’s first text message to be transmitted entirely with molecular communication, controlling concentration levels of the alcohol molecules, to encode the alphabets with single spray representing bit 1 and no spray representing the bit 0,” said York doctoral candidate Nariman Farsad, who led the experiment.

“Imagine sending a detailed message using perfume — it sounds like something from a spy thriller novel, but in reality it is an incredibly simple way to communicate,” said Dr. Weisi Guo from the School of Engineering at the University of Warwick.

“Of course, signaling or cues are something we see all the time in the natural world — bees for example use chemicals in pheromones to signal to others when there is a threat to the hive, and people have achieved short-range signaling using chemicals.

“But we have gone to the next level and successfully communicated continuous and generic messages over several meters.”

The system could find potential use in medicine. Recent advancements have allowed nanoscale devices to be embedded into organs, for instance, where they sense and gather important data. In this tiny environment, however, there are constraints on using electromagnetic waves to propagate information — after all, an antenna can only be so small. Chemical communication requires very little energy, is biocompatible and could thus provide a way around this problem.

A more immediate practical application, however, may be found in places like pipelines, sewers or oil rigs. The molecular communication system could be used there to send important safety information and avert potentially catastrophic accidents.

The system was described in a paper published in the journal PLOS ONE.

All life on earth could come from alien zombies

That’s right people, all the life on this beautiful planet (yep, that includes you) could descend from alien zombies. Well, this does take a bit of a leap of imagination, but what I’m talking about are viruses; dead viruses, to be more exact. Dead viruses that carried information — enough information to pave the way for lifeforms to appear.

The theory of panspermia suggests that life on Earth came from outer space, on comets or meteorites or even on dust grains; this theory has been around for more than a century, since Lord Kelvin suggested that microbes could have come from comets. However, most astrobiologists believe that radiation would be fatal for the microbes in question.

“That essentially kills panspermia in the classical sense,” said astrobiologist Rocco Mancinelli of the SETI Institute in Mountain View, California.

But maybe panspermia doesn’t have to die; maybe our zombie viruses could save it (yes, “zombies” is definitely not the best word, but it sounds too damn cool). Paul Wesson, a visiting researcher at the Herzberg Institute of Astrophysics in Canada, argues that even if the microbes are dead on arrival, the information they carry could still allow life to rise from the charred remains.

“The vast majority of organisms reach a new home in the Milky Way in a technically dead state,” Wesson wrote. “Resurrection may, however, be possible.”

The key here is how much genetic information survives; genetic information can be quantified just like hard disk space. For example, a bacterium such as E. coli carries about 6 million bits of information in its DNA. Random chemical processes, by contrast, could only produce 194 bits of information over 500 million years, which wouldn’t suffice for even a single cell. So how can this paradox be solved?

“It must be admitted that all versions of panspermia suffer from a hole in our knowledge, concerning how to go from an astrophysically delivered entity which contains substantial information to one which has the characteristics of what we normally regard as life,” he wrote.

He pinpointed viruses as a good source, however; they can carry about 100,000 bits of information, which would be more than enough. David Morrison, the director of the Carl Sagan Center for the Study of Life in the Universe, admits that the idea “looks good, and interesting, although of course highly speculative”.

“The critical issue is whether the information in broken strands of nucleic acid could serve as the template for life on another world … since we know so little about the actual process by which life originated on Earth, who can really say?”

There are of course those who challenge this idea – Mancinelli is one of them.

“Once you’re dead, you’re dead,” he said. “It’ll give off enough radiation that it’ll just chop up all the nucleic acids,” he said. “There’s no way the organism will survive. Going from Earth to Mars, not a problem,” he said. “Even going from Earth to Pluto, or from Pluto to Earth, not a problem. But once you start heading out of the solar system, it’s so far away that it takes a long time. That’s the thing, the length of time.”

First Universal Two-Qubit quantum processor created

Physicists from NIST (the National Institute of Standards and Technology) have demonstrated what they claim to be the first universal programmable quantum information processor, able to run any program allowed by quantum mechanics (the set of principles that describes atomic and subatomic matter). They managed to accomplish this using two quantum bits (qubits) of information.

This processor could prove to be a major breakthrough on the way to a future quantum computer — the kind of ‘evolutionary leap’ in computing that could solve problems untouchable today. The discovery was presented in the latest edition of Nature Physics, and it marks the first time anybody has moved beyond asking a single task of a quantum computer.

“This is the first time anyone has demonstrated a programmable quantum processor for more than one qubit,” says NIST postdoctoral researcher David Hanneke, first author of the paper. “It’s a step toward the big goal of doing calculations with lots and lots of qubits. The idea is you’d have lots of these processors, and you’d link them together.”

The processor basically stores binary information in just two beryllium ions held in an electromagnetic ‘trap’ and manipulated with ultraviolet lasers. With this setup, the NIST team managed to perform 160 different processing routines using just the two qubits. Although there is practically an infinite number of programs you could run on two qubits, this set of 160 is large and varied enough to prove that the processor is “universal,” Hanneke says.

Of course, many more qubits and logic operations will be needed to solve bigger problems, but when you come to think about it, all this was done with just two atoms, basically; and the operations they performed were no easy task. Each program consisted of 31 logic operations, 15 of which were varied during programming.
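The NIST device encodes its qubits in trapped ions driven by laser pulses, but the logic of a two-qubit routine can be sketched with plain matrices. Below is a minimal, illustrative example (not one of the 160 NIST routines): a Hadamard gate on the first qubit followed by a CNOT, which entangles the pair:

```python
import numpy as np

# Single-qubit Hadamard gate and the identity
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

# Two-qubit CNOT: flips the second qubit when the first is |1>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H to the first qubit, then CNOT
state = np.zeros(4)
state[0] = 1.0
state = CNOT @ (np.kron(H, I) @ state)

print(state)  # ~[0.707, 0, 0, 0.707]: the Bell state (|00> + |11>)/sqrt(2)
```

Any two-qubit program is, mathematically, just a longer chain of such 4×4 unitaries — the hard part NIST solved is doing this reliably on physical ions rather than on arrays of numbers.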

The Future Is Now? Pretty Soon, at Least


When I read what Ray Kurzweil said about the future, I was just awed! I mean, coming from somebody else, it would seem ludicrous (even from him, I find it really hard to believe), but c’mon, the man is one of the best futurists we have, so he HAS to know what he’s talking about. According to him, in his really optimistic view, many of the world’s current problems will be solved in far fewer than 50 years.

For example, if you’re worried about greenhouse gas emissions, fear no more! In 5 years, he claims, solar power will be cost-competitive with fossil fuels, and within 20 years all of our energy will come from clean sources. That really would be nice, but could it really happen? Wait, that’s just the beginning. Having problems sticking to a diet or losing weight? In less than 10 years, he says, there will be a pill that will allow you to eat whatever you want without gaining weight!

Sounds too good to be true? Could be, but the thing is that even critics agree he is by no means your average sci-fi fantasist. In fact, he has just enough credibility that the National Academy of Engineering published his view of solar energy. What’s even more surprising, for me at least, is what he predicted about aging.

Well, fasten your seat belts: in 15 years, your life expectancy will rise every year faster than you are aging. Yeah, that means your chances of dying get slimmer and slimmer every year, until about 50 years from now, when humans (and perhaps even machines) start evolving into everliving beings. Could this actually happen? I have no idea, and I’m not sure how many people do — not that many, anyway. Over the years he made some predictions that awed the world with their accuracy and seemingly impossible odds of happening (such as the explosive growth of the Internet in the 1990s and a computer chess champion by 1998; with that he was off by a year — Deep Blue won in 1997).

Also, 20 years ago he predicted that “early in the 21st century” blind people would be able to read anything anywhere using a handheld device. In 2002 he narrowed the date to sometime in 2008. On Thursday night at a festival, he pulled out a new gadget as big as your average cell phone that had absolutely no problem reading out loud the text from a science magazine.

“Certain aspects of technology follow amazingly predictable trajectories,” he said, showing a graph of computing power starting with the first electromechanical machines more than a century ago. At first the machines’ power doubled every three years; then in midcentury the doubling came every two years (the rate that inspired Moore’s Law); now it takes only about a year.

“Scientists imagine they’ll keep working at the present pace,” he told NY Times after his speech. “They make linear extrapolations from the past. When it took years to sequence the first 1 percent of the human genome, they worried they’d never finish, but they were right on schedule for an exponential curve. If you reach 1 percent and keep doubling your growth every year, you’ll hit 100 percent in just seven years.”
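That last bit of arithmetic is easy to verify: starting at 1 percent and doubling every year crosses 100 percent on the seventh doubling.

```python
pct, years = 1.0, 0
while pct < 100:
    pct *= 2       # one doubling per year
    years += 1

print(years)  # 7: the sequence runs 1 -> 2 -> 4 -> ... -> 128 percent
```

This is the core of Kurzweil’s argument against linear extrapolation: most of an exponential process’s progress arrives in its final few doublings.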