
Time Travel Without the Paradoxes

It’s one of the most popular ideas in fiction — travelling back through time to alter the course of history. The idea of travelling through time — more than we do every day that is — isn’t just the remit of science fiction writers though. Many physicists have also considered the plausibility of time travel, especially since Einstein’s theory of special relativity changed our concept of what time actually is. 

Yet, as many science fiction epics warn, such a journey through time could carry with it some heavy consequences. 

Ray Bradbury’s short story ‘A Sound of Thunder’ centres on a group of time travellers who blunder into prehistory, making changes that have horrendous repercussions for their world. In an even more horrific example of a paradox, during an award-winning episode of the animated sci-fi sitcom Futurama, the series’ hapless hero Fry travels back into the past and, in the ultimate grandfather paradox, kills his supposed gramps. Then, after an ‘encounter’ with his grandmother, Fry realises why he hasn’t faded from reality: he is his own grandfather.

Many theorists have also considered methods of time travel without the risk of paradox — techniques that don’t require the rather extreme measure of getting overly friendly with one’s own grandmother, Fry. These paradox-escape mechanisms range from aspects of mathematics to interpretations of quantum weirdness.

ZME’s non-copyright-infringing time machine. Any resemblance to existing time travel devices is purely coincidental *cough* (Christopher Braun CC BY-SA 1.0 / Robert Lea)

Before looking at those paradox escape plans it’s worth examining just how special relativity changed our thinking about time, and why it started theoretical physicists really thinking about time travel. 

Luckily at ZME Science, we have a pleasingly non-copyright-infringing time machine to travel back to the past. Let’s step into this strange old phone booth, take a trip to the ’80s to pick up Marty, and then journey back to 1905, the year Albert Einstein published ‘Zur Elektrodynamik bewegter Körper’, or ‘On the Electrodynamics of Moving Bodies’ — the paper that gave birth to special relativity.

Don’t worry Marty… You’ll be home before you know it… Probably.

A Trip to 1905: Einstein’s Spacetime is Born

As Marty reads the chronometer and discovers that we have arrived in 1905, he questions why this year is so important. At this point, physics is undergoing a revolution that will give rise to not just a new theory of gravity, but will also reveal the counter-intuitive and somewhat worrisome world of the very small. And a patent clerk in Bern, Switzerland, who will be at the centre of this revolution, is about to have a very good year.

The fifth year of the 20th century will come to be referred to as Albert Einstein’s ‘annus mirabilis’ — or miracle year — and for good reason. The physicist will publish four papers in 1905, the first describing the photoelectric effect, the second detailing Brownian motion. But, as impressive as those achievements are — one will see him awarded the Nobel Prize, after all — it’s the third and fourth papers we are interested in.

1905: young Albert Einstein contemplates the future, unaware he is about to change the way we think about time and space forever. (Original Author Unknown)

In these papers, Einstein will first introduce special relativity and then will describe mass-energy equivalence most famously represented by the reduced equation E=mc². It’s no exaggeration to say that these works will change how we think of reality entirely — especially from a physics standpoint. 

Special relativity takes time — which had previously been believed to be its own separate entity — and unites it with the three known dimensions of space. This creates a single four-dimensional fabric — spacetime. But the changes to the concept of time didn’t end there. Special relativity suggests that time passes differently depending on how one journeys through it. The faster an object moves, the more time ‘dilates’ for that object.

This idea of time running differently in different reference frames is how relativity gets its name. The most famous example of this time difference is the ‘twin paradox.’

Meet twin sisters Stella and Terra. Stella is about to undertake a mission to a distant star in a craft that is capable of travelling at near the speed of light, leaving her sister, Terra, behind on Earth. 

A spacetime diagram of Terra’s journey through spacetime, against her twin Stella’s. Less ‘proper time’ passes for Stella than Terra, meaning when she returns to Earth, Terra has aged more than she has. (Robert Lea)

After travelling away from Earth at near the speed of light, then undertaking a return journey at a similar speed, Stella touches down and exits her craft to be greeted by Terra, who has aged more than she has. More time has passed for the ‘static’ Earthbound twin than for her sister who undertook the journey into space.

Thus, one could consider Stella to have travelled forward in time. How else could a pair of twins come to be of considerably different ages? That’s great, but what about moving backwards through time? 

Well, if time progresses more ‘slowly’ the faster an object moves, it raises the question: is there a speed at which time stands still? And if so, is there a speed beyond this at which time would move backwards?
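
The relationship behind all of this is the Lorentz factor, γ = 1/√(1 − v²/c²), which tells you how much a moving clock slows relative to a stationary one. Here is a minimal Python sketch of that formula (the speeds are purely illustrative):

import math

C = 299_792_458.0  # speed of light in metres per second

def gamma(v):
    """Lorentz factor for a clock moving at speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# How much of a traveller's time passes during one Earth year:
for fraction in (0.5, 0.9, 0.99, 0.999):
    v = fraction * C
    print(f"v = {fraction}c -> {1.0 / gamma(v):.3f} traveller years per Earth year")

As v approaches c, gamma blows up and the traveller’s clock effectively freezes; plug in a v greater than c and the square root turns negative, which is the mathematical hint behind the tachyon idea discussed next.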

A visualisation of a tachyon. Because a tachyon would always travel faster than light, it would not be possible to see it approaching. After a tachyon has passed nearby, an observer would be able to see two images of it, appearing and departing in opposite directions. (Wiki CC BY-SA 3.0)

Tachyons are hypothetical particles that travel faster than the speed of light — roughly considered as the speed at which time would stand still — and thus, would move backwards rather than forwards in time. The existence of tachyons would open up the possibility that our space-bound sister could receive a signal from Terra and send her back a tachyon response. Due to the nature of tachyons, this response could be received by Terra before she sent the initial signal.

Here’s where that becomes dangerous: what if Stella sends a tachyon signal back that says ‘Don’t signal me’? Then the original signal isn’t sent, leading to the question: what is Stella responding to?

Or, in an even more extreme example: what if Stella sends a tachyon signal back that is intercepted by herself before she embarks on her journey, and that signal makes her decide not to embark on that journey in the first place? Then she’ll never be in space to send the tachyon signal… but, if that signal isn’t sent, then she would have embarked on the journey…

And that’s the nature of the causality-violating paradoxes that could arise from even the ability to send a signal back through time. Is there a way out of this paradox?

Maybe…

Interlude. From the Journal of Albert Einstein

27th September 1905

A most astounding thing happened today. A young man in extraordinary attire visited me at the patent office. Introducing himself as ‘Marty’, the youngster proceeded to question me about my paper ‘On the Electrodynamics of Moving Bodies’ — a surprise, especially as it was only published yesterday.

In particular, the boy wanted to know about my theory’s implications on time travel! A pure flight of fancy of course… Unless… For another time perhaps.

If this wasn’t already unusual in the extreme, after our talk I walked Marty to the banks of the Aare river, where he told me that his transportation awaited him. I was, of course, expecting a boat. I was therefore stunned when the boy stepped into a battered red box, which then simply disappeared.

I would say this was a figment of my overworked imagination, a result of tiredness arising from working at the patent office during the day and writing papers at night. That is, were I the only witness!

A young man also saw the box vanish, and his shock must have been more extreme than mine, for he stumbled into the river, disappearing beneath its surface.

His body has not yet been recovered… I fear the worst.

Present Day: The Self Correcting Universe

As the battered old phone box rematerializes in the present day, Marty is determined to seek out an academic answer to the time travel paradox recounted to him in 1905. 

He pays a visit to the University of Queensland where Bachelor of Advanced Science student Germain Tobar has been investigating the possibility of time travel. Under the supervision of physicist Dr Fabio Costa, Tobar believes that a mathematical ‘out’ from time travel paradoxes may be possible.

“Classical dynamics says if you know the state of a system at a particular time, this can tell us the entire history of the system,” Tobar explains. “For example, if I know the current position and velocity of an object falling under the force of gravity, I can calculate where it will be at any time.

“However, Einstein’s theory of general relativity predicts the existence of time loops or time travel — where an event can be both in the past and future of itself — theoretically turning the study of dynamics on its head.”

Tobar believes that the solution to time travel paradoxes is the fact that the Universe ‘corrects itself’ to remove the causality violation. Events will occur in such a way that paradoxes will be removed.

So, take our twin dilemma. As you recall Stella has sent herself a tachyon message that has persuaded her younger self not to head into space. Tobar’s theory — which he and his supervisor Costa say they arrived at mathematically by squaring the numbers involved in time travel calculations — suggests that one of two things could happen.

Some event would force Stella to head into space: she could accidentally stumble into the capsule, perhaps, or receive a better incentive to head out on her journey. Or another event could send out the tachyon signal: perhaps Stella could accidentally receive the signal from the astronomer who replaced her.

“No matter what you did, the salient events would just recalibrate around you,” says Tobar. “Try as you might, to create a paradox, the events will always adjust themselves, to avoid any inconsistency.

“The range of mathematical processes we discovered show that time travel with free will is logically possible in our universe without any paradox.”

The Novikov self-consistency principle (Brightroundircle/ Robert Lea)

Tobar’s solution is similar in many ways to the Novikov self-consistency principle — also known as Niven’s Law of the conservation of history — developed by Russian physicist Igor Dmitriyevich Novikov in the late 1970s. This theory suggested using geodesics, similar to those used to describe the curvature of space in Einstein’s theory of general relativity, to describe the curvature of time.

These closed time-like curves (CTCs) would prevent the violation of any causally linked events that lie on the same curve. The idea also suggests that time travel would only be possible in areas where these CTCs exist, such as in the presence of wormholes, as speculated by Kip Thorne and colleagues in the 1988 paper “Wormholes, Time Machines, and the Weak Energy Condition”. The events would be cyclical and self-consistent.

The difference is that whereas Tobar suggests a self-correcting Universe, this idea strongly implies that time travellers would not be able to change the past, whether because they are physically prevented from doing so or because they actually lack the ability to choose to do so. In our twin analogy, Stella’s replacement sends out a tachyon signal and, travelling along a CTC, it knocks itself off course, meaning Stella receives it rather than its intended target.

After listening to Tobar, Marty strolls back to his time machine, taking a shortcut through the local graveyard. Amongst the gravestones bearing unfamiliar dates and names, he notices something worrying — chilling, in fact. There, chiselled in ageing stone, is his grandfather’s name.

The date of his death reads 27th September 1905. 

Interlude: From the Journal of Albert Einstein

29th September 1905

This morning the Emmenthaler Nachrichten reports that the body of the unfortunate young man who I witnessed fall into the Aare has been recovered. The paper even printed a picture of the young man. 

I had not realised at the time, but the boy bears the most remarkable resemblance to Marty — the unusually dressed youngster who visited me the very day the boy fell…

Strange that I should think of Marty’s attire so frequently; the young man told me his garish armless jacket, flannel shirt and ‘jeans’ were ‘all the rage in ’86.’

Yet, though I was seven in 1886 and have many vague memories from that year, I certainly do not remember such colourful clothes…

Lost in Time: How Quantum Physics provides an Escape Route From Time Travel Paradoxes

Marty folds the copy of the Emmenthaler Nachrichten up and places it on the floor of the cursed time machine that seems to have condemned him. The local paper has confirmed his worst fears; his trip to the past to visit Einstein doomed his grandfather. 

After confirming his ancestry, he knows he is caught in a paradox. He waits to be wiped from time…

After some time, Marty wonders how it could possibly be that he still lives. Quantum physics — or, more specifically, one interpretation of it — has the answer: a way to escape the (literal) grandfather paradox.

The double slit experiment (Robert Lea)

The ‘many worlds’ interpretation of quantum mechanics was first suggested by Hugh Everett III in the 1950s as a solution to the problem of wavefunction collapse demonstrated in Young’s famous double-slit experiment.

As the electron travels, it can be described as a wavefunction with a finite probability of passing through either slit S1 or slit S2. When the electron appears on the screen, it isn’t smeared across it as a wave would be. It’s resolved as a particle-like point. We call this the collapse of the wavefunction, as the wave-like behaviour has disappeared, and it’s a key feature of the so-called Copenhagen interpretation of quantum mechanics.

The question remained, why does the wavefunction collapse? Hugh Everett asked a different question; does the wavefunction collapse at all?

The Many Worlds Interpretation of Quantum Physics (Robert Lea)



Everett imagined a situation in which, instead of collapsing, the wavefunction continues to grow exponentially — so much so that eventually the entire universe is encompassed as just one of two possible states: a ‘world’ in which the particle passed through S1, and a world in which the particle passed through S2.

Everett also stated the same ‘splitting’ of states would occur for all quantum events, with different outcomes existing in different worlds in a superposition of states. The wavefunction simply looks like it has collapsed to us because we occupy one of these worlds. We are in a superposition of states and are forbidden from seeing the other outcome of the experiment.

Marty realises that when he arrived back in 1905, a worldline split occurred. He is no longer in the world he came from — which he labels World 1. Instead, he has created and occupies a new world. When he travels forward in time to speak to Tobar, he travels along the timeline of this world — World 2.

This makes total sense. In the world Marty left, a phone box never appeared on the banks of the Aare on September 27th 1905. This world is intrinsically different from the one he left.

What happens as a result of Marty’s first journey to 1905, according to the Many Worlds Interpretation (Robert Lea)

He never existed in this world, and in truth he hasn’t actually killed his grandfather. His grandfather exists safe and sound back in 1905 of World 1. If the Many Worlds Interpretation of quantum physics is the correct solution to the grandfather paradox, however, then Marty can never return to World 1. It’s intrinsic to this interpretation that superpositioned worlds cannot interact with each other.

With reference to the diagrams above, Marty can only move ‘left and down’ or ‘right’ — up is a forbidden direction, because it’s his presence at a particular moment that creates the new world. He has changed history and is in a world in which he appeared in 1905. He can’t change that fact.

The non-interaction rule means no matter what measures he takes, every time he travels back into the past he creates a new state and hops ‘down’ to that state and can then only move forward in time (right) on that line.

Marty’s multiple journeys to the past create further ‘worlds’ (Robert Lea)

So what happens when Marty travels back to the past in an attempt to rescue his world? He inadvertently creates another state — World 3. This world may resemble Worlds 1 and 2 in almost every conceivable way but, according to the interpretation, it is not the same, due to one event: one extra phone box on the banks of the River Aare for each journey back.

As Marty continues to attempt to get back to World 1 — his home — he realises he now lives in a world in which, one day in September 1905, hundreds of phone boxes suddenly appeared on the banks of the Aare in Bern, and then simply disappeared.

The sudden appearance of hundreds of red telephone boxes around the banks of the River Aare really started to affect property prices. (Britannica)

He also realises that his predicament answers the question ‘if time travellers exist, why do they never appear in our time?’ The truth is that if a person exists in the world from which these travellers departed, they can never ‘get back’ to this primary timeline.

To someone in World 1, the advent of time travel will just result in the gradual disappearance of daring physicists. That’s the moment it dawns on Marty that as far as World 1 — his world — is concerned, he stepped in a phone box one day and vanished, never to return.

Marty escaped the time travel paradox but doomed himself to wander alternate worlds.

Hey… how do we get our time machine back?


Heisenberg’s uncertainty principle is more than a mathematical quirk, a handy guiding principle, or the inspiration for some really nerdy t-shirts. It is intrinsic to nature, woven into the fabric of all matter. Together we take a trip to ZME labs to use some everyday objects to demonstrate how nature tells us “you can’t have it all.”

Certainly Uncertain: What Is Heisenberg’s Uncertainty Principle?

At the beginning of the 20th century, physicists were developing the field of quantum physics, discovering in the process that the rules they had grown comfortable with no longer applied at the smallest scales. For example, the argument about the nature of light — was it particle or wave? — that had raged for decades could be answered only by concluding it is neither, but has properties of both. Furthermore, they found that this particle/wave duality applies to matter particles like electrons too.

German theoretical physicist Werner Heisenberg was about to make his own shocking discovery: that nature imposes a fundamental limit on what even the most aspiring physicists can know.

He would formulate this concept into the uncertainty principle.

A portrait of Werner Heisenberg taken in 1933. Ironically, the author of the image is unknown. (CC BY-SA)

In 1927, Heisenberg would publish a paper informing physicists that nature has a way of telling you that you can’t have your cake and eat it too. Something intrinsic, built into the fabric of the very Universe, reminds you that no matter how smart you are, no matter how sophisticated your experimental method or how sensitive your equipment, you can’t ‘know’ everything about a system — an idea that contradicts the principles that classical physics was built upon.

Whether known as Heisenberg’s uncertainty principle, the Heisenberg uncertainty principle or, simply, the uncertainty principle, the concept would become arguably the second most commonly recognised element of quantum physics, after Schrödinger’s eponymous feline. Eventually, the idea would find itself absorbed into pop culture, making its way into jokes, newspaper strips, t-shirts, and cartoons.

“The uncertainty principle ‘protects’ quantum mechanics,” said legendary physicist Richard Feynman of the utility of Heisenberg’s breakthrough. “Heisenberg recognized that if it were possible to measure both the momentum and the position simultaneously with greater accuracy, quantum mechanics would collapse. So he proposed that must be impossible.”

What is the Uncertainty Principle?

The most generalised version of Heisenberg’s uncertainty principle says that if you measure the momentum of a particle with uncertainty Δp, then you are limited in how precisely you can ‘know’ its position. You can’t know it any more precisely than Δx ≥ ℏ/(2Δp), where ℏ (or ‘h-bar’) is a value known as the reduced Planck constant. It is extremely small, a fact that will become important when we ask why macroscopic objects like cars and balls don’t seem to be affected by the uncertainty principle.

Rearranging the relation above gives the most common version of Heisenberg’s uncertainty principle — Δx Δp ≥ ℏ/2 — and perhaps the most famous equation in physics outside of E=mc². This tells us that the uncertainty in position multiplied by the uncertainty in momentum can never be smaller than the reduced Planck constant divided by two.
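
To get a feel for the numbers, here is a minimal Python sketch of the relation Δx Δp ≥ ℏ/2 (the momentum uncertainty chosen is illustrative, not from any particular experiment):

HBAR = 1.054571817e-34  # reduced Planck constant, in joule-seconds

def min_position_uncertainty(delta_p):
    """Smallest position uncertainty allowed by dx * dp >= hbar / 2."""
    return HBAR / (2.0 * delta_p)

# An electron whose momentum is pinned down to within 1e-25 kg m/s:
print(min_position_uncertainty(1e-25))  # ~5.3e-10 m, roughly an atom's width

Tighten the momentum a hundredfold and the position blurs by the same factor; the product of the two can never dip below ℏ/2.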

The same relation also applies to several other pairs of variables, most notably energy and time, and it can be adapted to any suitable pair of operators in a system.

The momentum and position version of the uncertainty principle may well be the most familiar but it is by no means the only version, nor should the other versions be considered less important. In fact, the energy/time variation of the uncertainty principle gives rise to one of the most striking and counter-intuitive elements of reality — the idea that virtual pairs of particles can pop in and out of existence. 

If you consider an infinitesimal, isolated area of spacetime observed for a precisely ‘known’ period of time, then the uncertainty principle for energy and time (ΔE Δt ≥ ℏ/2) says that you can’t precisely know the energy content of that area — meaning that particles must be popping in and out of existence in that region.

This concept, wittily named ‘nature’s overdraft facility’ by some waggish physicists, is a phenomenon that sounds unlikely, impossible even, but has been experimentally verified. The Heisenberg uncertainty principle limits just how long the Universe will allow itself to go ‘overdrawn’ before the particles annihilate and that energy loan is paid back.
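
As a rough order-of-magnitude check, we can ask how long the Universe tolerates the ‘loan’ needed to conjure an electron-positron pair, using Δt ≈ ℏ/(2ΔE). A hedged Python sketch (treating the borrowed energy as simply twice the electron’s rest energy):

HBAR = 1.054571817e-34   # reduced Planck constant, J s
ME_C2 = 8.187e-14        # electron rest energy (m_e * c^2), in joules

delta_E = 2.0 * ME_C2             # energy 'borrowed' for the pair
delta_t = HBAR / (2.0 * delta_E)  # longest the loan can last
print(delta_t)                    # ~3e-22 seconds

Around 10⁻²² seconds: the pair must annihilate and repay the energy debt almost unimaginably quickly.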

In order to get aspiring physicists to accept the radical ideas birthed by the uncertainty principle — the concept that there is a fundamental limit to what can be known about a system, contrary to everything classical physics imparts about the ‘knowability’ of a system — a ‘semi-classical’ version was first presented to the scientific community.

We approach it now with some trepidation and the warning that it barely scratches the surface of the uncertainty principle and somewhat downplays how intrinsic it is in nature. 

The Semi-Classical Uncertainty Principle

You’re asked to take part in a quantum physics experiment at the ZME labs. You arrive, are immediately handed a tennis racquet and asked to step into an extremely dark room. Once in there, a voice announces that your task is simply to find the tennis balls in the room with the racquet. 

“Sounds simple enough,” you think. That is, until a tennis ball strikes your leg at high speed. You realise that the tennis balls are being fired into the room at completely random angles. Eventually, after some flailing around in the dark, your racquet hits a ball. “Got one!” you exclaim.

“That’s great,” comes the voice over the intercom. “Where is it now?”

Of course, the problem with that crude little analogy is that the very act of ‘measuring’ the ball’s position or momentum, intrinsically changes the state of the system and essentially puts you back at square one. It’s a little like that every time we try to take a quantum mechanical measurement. 

In order to ‘see’ an electron, researchers have to fire photons at it. The problem is that photons carry with them momentum. And as electrons are so small, the wavelengths of the photons have to be of a similar scale. The issue is, the shorter the wavelength, the higher the energy and, in turn, the greater the momentum. 

Thus, bombarding an electron with photons imparts this momentum to it, changing the very state of the system.

The problem with the semi-classical description of the Heisenberg uncertainty principle is that it gives the impression that, if there were some incredibly sensitive measuring technique, the principle could, perhaps, be ‘worked around.’ This isn’t true. No matter how sensitive the equipment, this relationship can’t be avoided. It’s ‘built in’ to nature.

To see why this is the case, it’s necessary to investigate one of the founding principles of quantum mechanics: the ubiquity of waves.

Wave Certainty Goodbye

You receive a call from ZME labs. “We know the last experiment didn’t go so well, and we really hope the bruises are healing,” says a painfully familiar voice. “Look, we’ve got another test and this one will really demonstrate Heisenberg’s uncertainty principle… no tennis balls.”

You reluctantly agree to attend. 

Upon your arrival, you are handed a skipping rope and asked to wave it up and down rhythmically. The opposite end is held by a nervous-looking lab assistant who, you notice, is covered in tennis-ball-sized welts.

Below is what results from your frantic yet rhythmic waving: a steady wave shape. But here comes the voice through the loudspeaker again: “OK, now tell us, where on the x-axis [which marks position] is the wave?”

As you can see, the wave has no well-defined position, and here is how that is analogous to a particle in quantum mechanics. In the mathematics used to describe a quantum system, the spread of the wave corresponds to momentum, while the square of the amplitude gives the probability of the particle being located at a particular position.

Thus, in the above image, what we actually have is a very precisely known momentum. And, as Heisenberg’s uncertainty principle primes us to expect, we can say nothing about the position, as the wave can’t be said to possess a single position on the x-axis. The square of the amplitude is the same everywhere.

Back to ZME labs. You’ve had just about enough of these cryptic, unanswerable questions and bizarre sports-equipment-related experiments. So, to teach the researchers a lesson, you give the rope one sudden ‘whip’ — Indiana Jones style.

The wave is suddenly localised: as you can see, the amplitude, and thus the square of the amplitude, is zero everywhere but in one spot. A position can be assigned to the wave but, as you can see, there is no spread anymore — the spread of the wavefunction is destroyed.

This is analogous to having exact knowledge of a particle’s location. As the wavefunction’s spread has been destroyed, and this was the representation of the particle’s momentum, you suddenly have no knowledge of its momentum.

All this shows that Heisenberg’s uncertainty principle really arises from the fact that matter can be described as waves on the quantum level.
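
The skipping-rope tradeoff is really a statement about Fourier transforms: the narrower a wave packet is in position, the broader its spread of wavelengths, and vice versa. A minimal numerical sketch using numpy (the Gaussian packets here are illustrative):

import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]

def spreads(psi):
    """Return (position spread, wavenumber spread) of a packet centred at zero."""
    psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
    sigma_x = np.sqrt(np.sum(np.abs(psi) ** 2 * x ** 2) * dx)
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
    dk = k[1] - k[0]
    prob_k = np.abs(np.fft.fftshift(np.fft.fft(psi))) ** 2
    prob_k /= np.sum(prob_k) * dk
    sigma_k = np.sqrt(np.sum(prob_k * k ** 2) * dk)
    return sigma_x, sigma_k

for width in (0.5, 2.0, 8.0):
    sx, sk = spreads(np.exp(-x ** 2 / (2 * width ** 2)))
    print(f"width {width}: sigma_x * sigma_k = {sx * sk:.3f}")  # never below 0.5

Whatever width you choose, the product of the two spreads never drops below 1/2, which (multiplied by ℏ to turn wavenumber into momentum) is exactly Heisenberg’s bound.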

You are on your way out of ZME labs, for what you hope is the final time, nursing serious whiplash in your wrist, when the lead researcher hands you a tennis ball. “As a memento,” he says chirpily.

You thank him, but mentally vow to throw it over the tallest wall you can find on the way to your car and home. 

Little do you know, your rage against the ball will reveal how, without the phenomena described by Heisenberg’s uncertainty principle, the Universe would be a much colder and darker place.

Quantum tunnelling: Quantum balls and tall walls

One of the most remarkable features of the quantum realm is the phenomenon of quantum tunnelling, without which the nuclear fusion processes that power the stars and create the Universe’s heavier elements would not be able to take place. 

Tunnelling allows protons in the core of the Sun to overcome the mutual repulsion caused by their positive charges — a potential barrier that, even under extreme pressure, they do not have the kinetic energy to surmount. This allows the formation of deuterium from hydrogen nuclei and begins the fusion process in the star’s core, which turns hydrogen into helium and powers the star’s immense energy output.

You’re thinking about quantum tunnelling on your way to your car when you feel the tennis ball you received as a ‘memento’ and stuffed in your pocket pressing into your thigh. Remembering your promise, you look at the nearest wall, noting that it’s probably higher than you can throw the ball. 

You resolve to give it a few tries anyway. 

You throw the ball a few times, each time with exactly the same force against the same gravity and air resistance, realising you can’t give it enough kinetic energy to get it over the top of the wall. In fact, you’re falling considerably short. But this is a special ball. The researchers at ZME labs have found a way to imbue it with the qualities of a quantum particle.

On your 47th throw of the quantum ball with the same kinetic energy, the ball approaches its usual limit and simply disappears. You inspect the wall, seeing no holes, and you know there is no way the ball could have broken through it… then you hear a cry from over the other side of the wall: “My flowers… Whose ball is this?” You decide discretion is the better part of valour, and flee the scene.

So, how can Heisenberg’s uncertainty principle be responsible for the ball travelling to the other side of the wall, an area that in physics we would describe as ‘classically forbidden’?

The key is that, as we precisely know this quantum ball’s momentum, we can’t be sure of its position. This means that there is a tiny probability that the ball can be found in a region that should be impossible for it to reach.

Below, you can see a simulation of what happens when a particle of a certain energy approaches an energy barrier that exceeds that energy. It should be noted here that the ‘wider’ or ‘taller’ the barrier — i.e. the greater the energy demand — the less likely a particle is to pass through it.

You can think about tunnelling like this. A particle of energy E approaches a barrier of height 2E. Clearly it doesn’t have enough energy to ‘jump’ this barrier. Yet, in quantum physics, we find a small probability that transmission occurs. That means that in circumstances where you have a lot of particles, as in the core of a star, the law of large numbers ensures that this kind of rare event still happens a lot.
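
For a simple rectangular barrier, the standard textbook estimate for the transmission probability is T ≈ e^(−2κL), with κ = √(2m(V − E))/ℏ. A hedged Python sketch with illustrative numbers (an electron and a 1-nanometre barrier):

import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
M_E = 9.109e-31         # electron mass, kg
EV = 1.602e-19          # one electron-volt in joules

def transmission(E_ev, V_ev, width_m):
    """Rough tunnelling probability T ~ exp(-2*kappa*L) for E < V."""
    kappa = math.sqrt(2.0 * M_E * (V_ev - E_ev) * EV) / HBAR
    return math.exp(-2.0 * kappa * width_m)

# A 1 eV electron meeting a 2 eV barrier, as in the E vs 2E picture above:
print(transmission(1.0, 2.0, 1e-9))  # ~4e-5: tiny, but not zero
print(transmission(1.0, 2.0, 2e-9))  # double the width, vastly smaller odds

The exponential is why barrier width and height matter so much, and why, with around 10⁵⁷ protons jostling in a stellar core, even these tiny probabilities add up to steady fusion.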

As you muse on this, you have a worrying thought: “I know the exact momentum of my car. Does that mean I can’t know its position?”

You quicken your step considerably. 

Dude, where’s my car? Why Heisenberg’s uncertainty principle doesn’t apply to everyday objects

Heisenberg and Schrödinger get pulled over for speeding. The cop asks Heisenberg: “Do you know how fast you were going?”
Heisenberg replies: “No, but we know exactly where we are!”
The officer looks at him confused and says: “you were going 108 miles per hour!”
Heisenberg throws his arms up and cries: “Great! Now we’re lost!”


We’ve thus far had a little fun describing macroscopic objects like tennis balls and skipping ropes displaying quantum behaviour, so it’s probably a good idea to explain why this isn’t something we actually see every day.

The key is the very small value of the reduced Planck constant (ℏ). It means that the lower limit on the uncertainty in measuring position and momentum is utterly negligible on the scale of massive objects like tennis balls, skipping ropes, or cars.

All matter has a de Broglie wavelength (λdB), but that wavelength has to be comparable to the size of the system in question for Heisenberg’s uncertainty principle to have a considerable effect. The de Broglie wavelength of a tennis ball is far too small for the ball to be subject to the uncertainty principle in any significant way.

It’s for much the same reason that moving objects don’t diffract around trees. Their de Broglie wave is way too small.
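
A quick back-of-the-envelope comparison makes the point, using λ = h/(mv). The masses and speeds below are illustrative:

H = 6.62607015e-34  # Planck constant, J s

def de_broglie(mass_kg, speed_ms):
    """de Broglie wavelength, lambda = h / (m * v)."""
    return H / (mass_kg * speed_ms)

# A 57 g tennis ball and an electron, both moving at 50 m/s:
print(de_broglie(0.057, 50.0))      # ~2e-34 m: absurdly small
print(de_broglie(9.109e-31, 50.0))  # ~1.5e-5 m: comparatively enormous

The tennis ball’s wavelength is roughly twenty orders of magnitude smaller than an atomic nucleus; no conceivable experiment would ever notice it.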

Sorry, you’re not getting off with that speeding ticket so easily. 


Sources and further reading

Griffiths, D. J., ‘Introduction to Quantum Mechanics,’ Cambridge University Press, 2017.

Feynman, R., Leighton, R. B., Sands, M., ‘The Feynman Lectures on Physics. Volume III: Quantum Mechanics,’ California Institute of Technology, 1965.

Bolton, J., Lambourne, R., ‘Wave Mechanics,’ The Open University, 2007.


Sidestepping Heisenberg’s Uncertainty Principle isn’t easy

Two different quantum optomechanical systems used to demonstrate novel dynamics in backaction-evading measurements. Left (yellow): silicon nanobeam supporting both an optical and a 5 GHz mechanical mode, operated in a helium-3 cryostat at 4 Kelvin and probed using a laser sent in an optical fibre. Right (purple): microwave superconducting circuit coupled to a 6 MHz mechanically-compliant capacitor, operated in a dilution refrigerator at 15 milli-Kelvin. (I. Shomroni, EPFL.)

Recent developments in science — such as the detection of gravitational waves by way of the minute displacement of mirrors at LIGO, and the development of atomic and magnetic force microscopes that reveal the atomic structure and spins of single atoms — have pushed the boundaries of what can be defined as measurable.

Yet, as scientists push the limits of mechanical measurement, the spectre of Heisenberg’s uncertainty principle remains to remind them that no matter how accurate their equipment and procedures become, nature has an intrinsic, in-built limit on what they can ‘know’.

One of the main results of early investigations in quantum physics, the uncertainty principle says that even as the sensitivity of our measuring equipment improves, conventional measurements are limited by ‘measurement backaction’. The most common and easiest-to-explain example of the uncertainty principle is the idea that knowledge of a particle’s exact location immediately destroys knowledge of its momentum — and, by extension, the ability to predict its location in the future.

Sense and sensitivity in laser interferometers

Despite this seeming hindrance, researchers are hard at work developing potential methods to help them ‘sidestep’ Heisenberg’s uncertainty principle. These techniques hinge on the careful collection of only certain information about a system whilst intentionally omitting other aspects.

So, for example, waves and wavefunctions are of vital importance in quantum mechanics. Using this selective method researchers would attempt to take the measurement of the wave’s amplitude, whilst simultaneously ignoring its phase. 

These methods could, in principle at least, have unlimited sensitivity, with the drawback of only being able to gauge half of the information about a system. Exploring them is the aim of Tobias Kippenberg at the École Polytechnique Fédérale de Lausanne (EPFL). In conjunction with scientists at the University of Cambridge and IBM Research Zurich, Kippenberg has uncovered new dynamics that place further, unexpected constraints on such systems and on just what levels of sensitivity are achievable.

An aerial view of LIGO. The laser interferometer that runs through these massive kilometre scale arms must be incredibly sensitive to detect gravitational waves. But new research suggests another hindrance to such sensitivity. (LIGO)

The team’s work is of particular interest for the interferometers used to measure gravitational waves. The sensitivity of these instruments is of vital importance, as gravitational waves are incredibly difficult to detect. Because these pieces of equipment look for disturbances in laser beams shone down their massive, kilometre-scale arms, improving their sensitivity means trying to avoid backaction in electromagnetic waves.

The team’s study — published in the journal Physical Review X — demonstrates that small deviations in optical frequency, coupled with deviations in mechanical frequency, can lead to mechanical oscillations being amplified out of control. This mimics the physics displayed in a state physicists refer to as a ‘degenerate parametric oscillator’.

This behaviour was found by Kippenberg and his team in two radically different systems — one operating with optical radiation, the other with microwave radiation. This is a fairly disastrous discovery, as it implies that the dynamics are not unique to any particular system but rather are common across many such systems.

The researchers from EPFL investigated these dynamics further — tuning the frequencies and demonstrating a perfect match with pre-existing theories. EPFL scientist Itay Shomroni, the paper’s first author, explains: “Other dynamical instabilities have been known for decades and shown to plague gravitational wave sensors. 

“Now, these new results will have to be taken into account in the design of future quantum sensors and in related applications such as backaction-free quantum amplification.”


Original research: Shomroni, I., Youssefi, A., Sauerwein, N., Qiu, L., Seidler, P., Malz, D., Nunnenkamp, A., Kippenberg, T. J., ‘Two-tone optomechanical instability and its fundamental implications for backaction-evading measurements,’ Physical Review X 9, 041022 (30 October 2019). DOI: 10.1103/PhysRevX.9.041022


Super-Superposition: 2,000 atoms in ‘two places at once’

Artistic illustration of the delocalization of the massive molecules used in the experiment. © Yaakov Fein/University of Vienna/ Universität Wien

A team of scientists from the University of Vienna and the University of Basel has tested the principle of quantum superposition on an unprecedented scale. The team brought hot, complex molecules composed of approximately 2,000 atoms into quantum superposition and caused them to interfere.

This confirms quantum phenomena can occur on a mass-scale never achieved before, resulting in new constraints being placed on alternative theories to quantum mechanics. 

Markus Arndt, a professor of quantum nanophysics at the University of Vienna, led the research team; the results are published in the journal Nature Physics. He explains the principle of superposition:

“While in classical mechanics you describe bodies with momentum and position, quantum mechanics tells us that matter has to be described by a wave function.

“The amplitude squared of this wave function tells us where to find a particle.”

The principle of superposition emerges from one of the fundamental elements of quantum mechanics: the Schrödinger equation. Drawing an analogy to water waves, this means that these ‘quantum waves’ — or de Broglie waves, named after French physicist Louis de Broglie — can exhibit constructive and destructive interference effects. The major difference is that whilst a water wave contains a multitude of particles, a de Broglie wave describes a single particle.

Arndt continues the analogy: “Like water waves, the quantum matter-waves can fill large areas of space, this means that a particle has no well-defined position anymore.

“Colloquially we say that the particle can be in two or more places at once. Whilst in free propagation the wave function collects information about places that a classical billiard ball could never know.”

When a measurement is made the particle can only be measured in one place, a behaviour described as the collapse of the wavefunction. The most famous and familiar demonstration of this effect is the double-slit experiment. 

Light allowed to pass through both slits displays a wave-like pattern

The experiment was initially run with photons — unveiling that they possess both wave-like and particle-like properties [you’ll often see this described as light ‘being both a particle and a wave’, which is incorrect — it’s actually neither].

This wave-particle duality was later discovered in matter particles by re-running the experiment with particles of increasing mass — first electrons, right up to carbon-60 molecules, also known as buckyballs.

This wave-like behaviour is clearly a feature of quantum mechanics and the very small — but not something we see in the world around us. This leads researchers to an interesting question: where does the boundary between quantum and classical mechanics lie?

Or alternatively, how large does a billiard ball have to be before it can’t be described with the mathematics of waves anymore?

Where is the quantum/classical boundary?

Arndt and the team demonstrated quantum interference with larger objects than ever attempted before. The previous largest molecule used for such experiments weighed in at 10,123 atomic mass units (amu), whilst the molecules that Arndt and his team used were over 2.5 × 10⁴ amu — roughly two and a half times heavier — the researcher tells me.

One of the largest molecules the team sent through their interferometer —  C707H260F908N16S53Zn4 — consists of more than 4.0 x 10⁴ protons, neutrons and electrons. Its de Broglie wavelength is a thousand times smaller than the diameter of a hydrogen atom. 
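
We can sanity-check that figure with λ = h/(mv). The paper supplies the mass; the beam speed below is an assumed, typical value for such slow molecular beams rather than a number from the study:

H = 6.62607015e-34    # Planck constant, J s
AMU = 1.66053907e-27  # atomic mass unit, kg

mass = 2.5e4 * AMU    # a ~25,000 amu molecule
speed = 300.0         # assumed beam speed, m/s (illustrative, not from the paper)

print(H / (mass * speed))  # ~5e-14 m, vs ~1e-10 m for a hydrogen atom

That comes out at around 5 × 10⁻¹⁴ metres, of order a thousand times smaller than a hydrogen atom, consistent with the claim above.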

These molecules were specially created for the experiment by Marcel Mayor and his team at the University of Basel. Their technique makes the molecules stable enough to form a beam of molecules in an ultra-high vacuum. 

The matter-wave interferometer in Vienna that the team used was specially designed with a two-metre-long baseline, in order to make it adept at highlighting the quantum nature of the particles in question. They also have exciting potential future applications. 

“These interferometers that we build for these foundational questions are exquisite force sensors,” Arndt explains. “Applied to biomolecules or clusters, we use them to learn about the internal properties of these particles, even though quantum mechanics forbids us to know where they are.”

The team calls this matter-wave-interference-assisted metrology, and it is still the only group in the world working on it, Arndt says.

Living on the edge: probing the boundaries of quantum physics

By showing that a superposition can be maintained for such a massive particle, Arndt and his team have effectively placed important boundary conditions on a class of models that aim to define the transition from quantum to classical mechanics.

Arndt explains why models that tie mass to the collapse of superposition — legitimate mathematical extensions to Schrödinger’s equation — are appealing:

“Continuous spontaneous localization models need this to explain why small things behave quantum mechanically and big ones don’t.

“The Schrödinger equation depends on derivatives by space and time. If a particle curves space-time in different places, how can one still have a consistent Schrödinger equation?”

As for where that boundary lies, Arndt believes that the researchers aren’t quite at that limit just yet. 

“It’s hard to say. It’s a matter of experimental control above all,” he explains. “There is good reason to believe that if we do it right, we will see quantum effects for quite a while yet.”

The crux of the challenge of finding where quantum effects cease could lie in the discovery of a quantum theory of gravity. 

Arndt elaborates: “There is a well-founded suspicion that something may change at high masses because gravity deforms space-time and the particles themselves become a source for that. 

“When exactly this may happen, no one can say with certainty. There is no quantum gravity theory yet.”


Original research: https://www.nature.com/articles/s41567-019-0663-9

The Micius satellite — the first experiment to test quantum physics in space

Quantum satellite investigates the gap between Quantum Mechanics and General Relativity

Experimental diagram of testing gravity-induced decoherence of entanglement (provided by University of Science and Technology of China)

Quantum mechanics and general relativity represent the two most successful theories in 20th-century physics. But despite almost 100 years of continued experimental verification and practical application, researchers remain unable to unite the disciplines. 

General relativity describes the effects of gravity on Einstein’s four-dimensional spacetime — three dimensions of space and one of time — yet a quantum theory of gravity continues to evade discovery.

As the problem of unification remains unsolved, physicists put forward various models that require experimental verification. 

A team of international researchers has developed a framework to test a model which may account for the breakdown of general relativity’s rules on the quantum scale. They tested this framework using the quantum satellite — Micius — a Chinese project which tests quantum phenomena in space. 

The research — documented in a paper published in the journal Science — represents the first meaningful quantum optical experiment testing fundamental physics between quantum theory and gravity, says Jian-Wei Pan, director of the CAS Centre for Excellence in Quantum Information and Quantum Physics at the University of Science and Technology of China.

Pan and his team wanted to test the event formalism model of quantum fields — a theory that suggests that the correlation between entangled particles would collapse — a phenomenon known as decoherence — as they pass through the gravitational well of Earth. The idea is that the differences in gravitational force would force decoherence, as the particle experiencing less gravity would be able to travel with less constraint than its counterpart in an area of stronger gravity.

Pan suggests that event formalism presents a description of quantum fields existing in spacetime as described by general relativity — with curvature caused by the presence of mass. Thus, if the team could observe the model’s effects, it would imply the presence of quantum phenomena on the larger scale described by general relativity.

Pan says: “If we did observe the deviation, it would mean that event formalism is correct, and we must substantially revise our understanding of the interplay between quantum theory and gravity theory.”

In their test, the team used pairs of particles described as ‘time-energy entangled’ — a recently discovered type of entanglement in which photons are entangled in terms of their energies and the times at which they are detected.

The team was unable to detect the particles deviating from standard behaviour expected in quantum mechanics, but they plan to retest a version of their theory that is more flexible. 

“We ruled out the strong version of event formalism, but there are other versions to test,” Pan says. “A modified model remains an open question.”

To put this revised version to the test, a new satellite will be launched that will orbit up to sixty times higher than Micius — enabling it to test a wider variation in gravitational field strength.


Original research: https://science.sciencemag.org/content/early/2019/09/18/science.aay5820

Physicists are a step closer to a theory of quantum gravity

New research centred on the Unruh effect has created a set of necessary conditions that theories of quantum gravity must meet.

Quantum physics has, since its development in the early years of the 20th century, become one of the most successful and well-evidenced areas of science. But, despite all of its successes and experimental triumphs, there is a shadow that hangs over it. 

Despite successfully integrating the electromagnetic force and the weak and strong nuclear forces — three of the four fundamental forces — quantum physics is yet to find a place for gravity.

As such, it cannot link with the other great triumph of physics: Einstein’s theory of general relativity. Thus, physicists are currently working hard to develop a quantum theory of gravity.

Now, researchers led by SISSA (the Scuola Internazionale Superiore di Studi Avanzati), the Complutense University of Madrid, and the University of Waterloo have identified the necessary and sufficient conditions that the low-energy limit of quantum gravity theories must satisfy to preserve the main features of the Unruh effect.

Any new theory of physics must factor in this effect — meaning quantum gravity theories, too, must have a place for the Unruh effect and its predictions (which are detailed below).

The new study — published in the journal Physical Review Letters — provides a solid theoretical framework to discuss modifications to the Unruh effect caused by the microstructure of space-time.

Eduardo Martin-Martinez, an assistant professor in Waterloo’s Department of Applied Mathematics, elaborates on the team’s work: “What we’ve done is analyzed the conditions to have Unruh effect and found that contrary to an extended belief in a big part of the community thermal response for particle detectors can happen without a thermal state.”

The team’s findings are of importance because the Unruh effect sits at the boundary between quantum field theory and general relativity — the frontier of quantum gravity, which we have yet to understand.

“So, if someone wants to develop a theory of what’s going on beyond what we know of quantum field theory and relativity, they need to guarantee they satisfy the conditions we identify in their low energy limits.”

What is the Unruh effect?

The Unruh effect was first described by Stephen Fulling in 1973, followed by Paul Davies in 1975 and William G Unruh — after whom it was named — in 1976.

W. G. Unruh, one of the developers of the Unruh effect, after whom it was named.

It predicts that an observer in a non-inertial reference frame — one that is accelerating — would observe photons and other particles in seemingly empty space, while an inertial observer would see only a vacuum in that same area.

In other words, a consequence of the Unruh effect is that the nature of a vacuum in the universe is dependent on the path taken through it.

As an analogy, consider a universe with a constant temperature of zero, in which no heat arises from the effects of friction or kinetic energy contributions. A stationary thermometer would have its mercury level sat permanently at zero.

But the Unruh effect posits that if that thermometer was waved from side-to-side, the temperature measured would no longer be zero. The temperature measured would be proportional to the acceleration that the thermometer undergoes.
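
That proportionality is captured by a compact, well-known formula: the Unruh temperature T = ℏa/(2πck_B), where a is the acceleration. A quick Python sketch with illustrative accelerations:

import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
C = 299_792_458.0       # speed of light, m/s
K_B = 1.380649e-23      # Boltzmann constant, J/K

def unruh_temperature(a):
    """Unruh temperature T = hbar * a / (2 * pi * c * k_B)."""
    return HBAR * a / (2.0 * math.pi * C * K_B)

print(unruh_temperature(9.81))  # everyday gravity: ~4e-20 K
print(unruh_temperature(1e20))  # an extreme acceleration: ~0.4 K

Even an acceleration of 10²⁰ m/s² yields less than half a kelvin, which is part of why directly measuring the effect is so difficult.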

Raúl Carballo-Rubio, a postdoctoral researcher at SISSA, Italy, explains further: “Inertial and accelerated observers do not agree on the meaning of ‘empty space’.

“What an inertial observer carrying a particle detector identifies as a vacuum is not experienced as such by an observer accelerating through that same vacuum. The accelerated detector will find particles in thermal equilibrium, like a hot gas.”

He further explains that, as a result of this, it is reasonable to expect that any new physics that modifies the structure of quantum field theory at short distances would induce deviations from this law.

Carballo-Rubio continues: “While probably anyone would agree that these deviations must be present, there is no consensus on whether these deviations would be large or small in a given theoretical framework. 

“This is precisely the issue that we wanted to understand.”

Defining the conditions theories of quantum gravity must satisfy

The researchers analyzed the mathematical structure of the correlations of a quantum field in frameworks beyond standard quantum field theory. 

The result of this analysis was then used to identify the three necessary conditions that are sufficient to preserve the Unruh effect. 

Low-energy predictions of quantum gravity theories can be constructed from the results. The findings of this research provide the tools necessary to make these predictions in a broad spectrum of situations.

Having been able to determine how the Unruh effect is modified by alterations of the structure of quantum field theory, as well as the relative importance of these modifications, the researchers believe the study provides a solid theoretical framework to discuss and perhaps test this particular aspect as one of the possible phenomenological manifestations of quantum gravity. 

This is particularly important and appropriate even though the effect has not yet been measured experimentally, as it is expected to be verified in the not-so-distant future.


Original research: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.123.041601


Physicists measure quantum entanglement in chemical reactions

Quantum entanglement and other quantum phenomena have long been suspected by scientists to play a role in chemical reactions like photosynthesis. But, until now, their presence has been hard to identify.

Purdue researchers have modified a popular theorem for identifying quantum entanglement and applied it to chemical reactions. This quantum simulation of a chemical reaction yielding deuterium hydride validated the new method. (Purdue University image/Junxu Li)

Researchers at Purdue University have unveiled a new method that enables them to measure entanglement — the correlation between the properties of two separated particles — in chemical reactions.

Discovering just what role entanglement plays in chemical reactions has implications for the improvement of technologies like solar energy systems — if we can learn to replicate it.

The study — published in the journal Science Advances — takes the theorem ‘Bell’s Inequality’ and generalises it to identify entanglement in chemical reactions. In addition to theoretical arguments, they also performed a series of quantum simulations to verify this generalized inequality.

Sabre Kais, a professor of chemistry at Purdue, explains further: “No one has experimentally shown entanglement in chemical reactions yet because we haven’t had a way to measure it. For the first time, we have a practical way to measure it.

“The question now is, can we use entanglement to our advantage to predict and control the outcome of chemical reactions?”

Bell’s Inequality — identifying entanglement.

John S. Bell designed an experiment to test whether quantum mechanics is complete (CERN)

Since its development in 1964, Bell’s Inequality has been validated as the go-to test physicists use to identify entanglement in particles. The theorem uses discrete measurements of particle properties, such as the orientation of their spin — which, in the quantum world, has nothing to do with literal spinning — to find whether the particles are correlated.

The problem is that discovering entanglement in chemical reactions requires continuous measurements — measuring aspects such as the angles of the beams that scatter the reactants, forcing them into contact so that they transform into products.

To combat this, Kais’ team generalised Bell’s Inequality to include continuous measurements in chemical reactions, in a similar way to how the theorem had previously been generalised to examine light — photonic systems.
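
For orientation, here is what the standard, discrete version of the test looks like: a minimal Python sketch of the textbook CHSH form of Bell’s inequality (not the continuous generalisation in the Purdue paper). For a pair of spin-1/2 particles in a singlet state, quantum mechanics predicts the correlation E(a, b) = −cos(a − b) between detector angles a and b:

import math

def chsh(a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
    using the singlet-state correlation E(x, y) = -cos(x - y)."""
    E = lambda x, y: -math.cos(x - y)
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Angles (in radians) chosen to maximise the quantum violation:
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S))  # ~2.83, i.e. 2*sqrt(2): above the classical limit of 2

Any theory built on local ‘hidden variables’ caps |S| at 2; entangled particles push it to 2√2, and finding an analogous violation with continuous variables is what the Purdue generalisation is designed to detect.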

The team then tested their generalised Bell’s inequality using a quantum simulation of a chemical reaction yielding the molecule deuterium hydride.

The process was built on a foundation established in a 2018 experiment developed by Stanford University researchers that aimed to study the quantum states of molecular interactions.

Because the simulations validated Bell’s theorem and showed that entanglement can be identified in chemical reactions, Kais’ team proposes to further test the method on deuterium hydride in an experiment.

Kais says: “We don’t yet know what outputs we can control by taking advantage of entanglement in a chemical reaction — just that these outputs will be different.

 “Making entanglement measurable in these systems is an important first step.”

This is what quantum entanglement looks like

Scientists have managed to take a photo of one of the most bizarre phenomena in nature: quantum entanglement.

Image credits: University of Glasgow.

There’s a reason why Einstein called quantum entanglement ‘spooky action at a distance’. Quantum entanglement, by everything that we know from our macroscopic lives, should not exist. However, the laws of quantum mechanics often defy what seems normal to us, and this bizarre phenomenon actually underpins the whole field of quantum mechanics.

Quantum entanglement occurs when a pair or a group of particles interact with each other and remain connected, instantaneously sharing quantum states — no matter how great the distance that separates them (hence the spooky action at a distance). This connection is so strong that the quantum state of each particle cannot be described independently of the state of the other(s).

Predicting, achieving, and describing this phenomenon was a gargantuan task that took decades. Photographing it is also a remarkable achievement.

Researchers from the University of Glasgow modified a camera to capture 40,000 frames per second. They operated an experimental setup at -30 degrees Celsius (-22 F) in pitch-black darkness. The experimental setup shoots off streams of photons entangled in a so-called Bell state — this is the simplest example of quantum entanglement.
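For reference, the simplest Bell state mentioned here can be written, for two photons A and B with horizontal ($H$) and vertical ($V$) polarizations, as

$$ |\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\big( |H\rangle_A |H\rangle_B + |V\rangle_A |V\rangle_B \big), $$

a superposition in which neither photon has a definite polarization on its own, yet the two always agree when measured.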

The entangled photons were split up, with one of them passing through a liquid crystal material called β-barium borate, triggering four phase transitions. These same four phase transitions showed up in the entangled partner photons.

A composite of multiple images of the photons as they go through the quantum transitions. Image credits: University of Glasgow.

Einstein staunchly believed that quantum mechanics did not tell the whole story and must rest on another, underlying physical framework. He even devised a series of experiments meant to disprove quantum mechanics — which, ironically, ended up confirming its foundations.

However, people often forget that Einstein can also be regarded as one of the fathers of quantum mechanics. For instance, he described light as quanta in his theory of the Photoelectric Effect, for which he won the 1921 Nobel Prize. Niels Bohr and Max Planck are often regarded as the two founders of quantum mechanics, although numerous outstanding physicists worked on it over the years. Among them, physicist John Stewart Bell helped define quantum entanglement, establishing a test known as the ‘Bell inequality’. Essentially, if you can break the Bell inequality, you can confirm true quantum entanglement — which is what the researchers have done here.

“Here, we report an experiment demonstrating the violation of a Bell inequality within observed images,” the study reads.

Lead author Dr. Paul-Antoine Moreau of the University of Glasgow’s School of Physics and Astronomy comments:

“The image we’ve managed to capture is an elegant demonstration of a fundamental property of nature, seen for the very first time in the form of an image.”

“It’s an exciting result which could be used to advance the emerging field of quantum computing and lead to new types of imaging.”

The study was published in Science Advances.


Fundamental quantum mechanics equation can also describe large-scale objects in the universe

Schrodinger’s equation is a fundamental equation of quantum mechanics that describes how particles behave at the subatomic level. Caltech researchers were astonished to learn, however, that the same equation popped up when they modeled the behavior of self-gravitating astrophysical disks. Examples of such disks include the rings of Saturn or the disks of gas and dust that surround young stars. However, there’s nothing ‘quantum’ about these objects — and yet there seems to be a connection.

Illustration of a planetary system surrounded by a debris disk. Credit: NASA


Schrodinger’s equation is one of the cornerstones of quantum physics. It plays a similar role to Newton’s laws and conservation of energy in classical mechanics, describing how quantum objects (such as atoms and subatomic particles) tend to behave in the future based on their current state. When you get to that level, things start to get a bit bizarre.

One counter-intuitive behavior that Schrodinger’s equation describes is that subatomic particles behave more like a wave than like a particle, a phenomenon which physicists refer to as wave-particle duality. 

Massive astronomical objects often have smaller objects gravitating around them, whether it’s stars going around a black hole or space debris around a star. This debris often arranges itself into a disk, like our solar system, for instance. The problem is that it has always been challenging to study the behavior of self-gravitating disks because they exhibit distortions, bending and warping like ripples. Capturing that complexity is what a team of researchers at Caltech set out to address.

Konstantin Batygin, a Caltech assistant professor of planetary science, and colleagues turned to perturbation theory in order to develop a mathematical representation of disk evolution. This simplified model, which is based on an equation developed by 18th-century mathematicians Joseph-Louis Lagrange and Pierre-Simon Laplace, assumes that the constituents of astronomical disks are mathematically smeared together into thin “wires”. These wires form concentric circles that loop around the disk and exchange orbital angular momentum.

Batygin looked at the extreme case where the concentric circles get thinner and thinner until they merge into a continuous disk — and what emerged in this limit proved quite surprising.

“When we do this with all the material in a disk, we can get more and more meticulous, representing the disk as an ever-larger number of ever-thinner wires,” Batygin said in a statement. “Eventually, you can approximate the number of wires in the disk to be infinite, which allows you to mathematically blur them together into a continuum. When I did this, astonishingly, the Schrodinger Equation emerged in my calculations.”

“This discovery is surprising because the Schrodinger Equation is an unlikely formula to arise when looking at distances on the order of light-years,” says Batygin. “The equations that are relevant to subatomic physics are generally not relevant to massive, astronomical phenomena. Thus, I was fascinated to find a situation in which an equation that is typically used only for very small systems also works in describing very large systems.”

Credit: James Tuttle Keane, California Institute of Technology.

The study published in the Monthly Notices of the Royal Astronomical Society suggests that the shape of the disk is well-represented by the wave function of a quantum particle that bounces around the inner and outer walls of a disk-like cavity. It’s unexpected to learn that such a fundamental equation to quantum mechanics, which scientists are used to dealing with almost exclusively in the domain of the “very small”, can also describe the long-term evolution of astrophysical disks. It’s an intriguing discovery that two seemingly unrelated branches of physics can be described so similarly, mathematically speaking — and such knowledge should prove useful for researchers who model astrophysical objects.

“Fundamentally, the Schrodinger Equation governs the evolution of wave-like disturbances.” says Batygin. “In a sense, the waves that represent the warps and lopsidedness of astrophysical disks are not too different from the waves on a vibrating string, which are themselves not too different from the motion of a quantum particle in a box. In retrospect, it seems like an obvious connection, but it’s exciting to begin to uncover the mathematical backbone behind this reciprocity.”


Tiny aluminium drum cooled beyond quantum limit proves we can make things even colder. Possibly down to absolute zero

This microscopic vibrating drum was, at one point, colder than anything found in nature. Credit: Teufel/NIST


Nothing can be chilled below absolute zero (−273.15°C), the temperature at which all classical molecular motion stops. Even there, per Heisenberg’s uncertainty principle, particles retain a residual zero-point motion that can never be removed. It’s a fundamental limit that can’t be broken. That’s fine — what bothers scientists, however, are the other limits that keep them from cooling things near absolute zero.

For decades, researchers have used lasers to cool down atoms very close to absolute zero. However, when you try to cool something macroscopic close to zero, like a power cable or even a coin, you hit a brick wall — a ‘quantum limit’ that keeps mechanical objects from getting too cold.

Physicists from the National Institute of Standards and Technology (NIST) weren’t convinced this is a fundamental limit — and it’s a good thing they experimented, because their findings suggest macroscopic objects can be cooled more than previously thought possible.

[ALSO SEE] The minimum and maximum temperatures 

Using lasers, the NIST team cooled an aluminum drum to 360 microKelvin or 10,000 times colder than the vacuum of space. The tiny vibrating membrane is 20 micrometers in diameter and 100 nanometers thick. It’s the coldest thing we’ve ever seen that’s larger than a few atoms across.

“The colder you can get the drum, the better it is for any application,” said NIST physicist John Teufel, who led the experiment. “Sensors would become more sensitive. You can store information longer. If you were using it in a quantum computer, then you would compute without distortion, and you would actually get the answer you want.”

“The results were a complete surprise to experts in the field,” Teufel’s group leader and co-author José Aumentado said. “It’s a very elegant experiment that will certainly have a lot of impact.”

Everyone’s familiar with lasers but firing lasers to cool stuff? It sounds counter-intuitive because we all know lasers warm targets — but that’s if you fire all of the light. The kind of lasers used for cooling fire at a specific angle and frequency. Typically multiple lasers are used. As a result of this clever tweaking photons actually end up snatching energy from its target instead of releasing it, and it’s all done by literally pushing the atoms.

Confused? It becomes elementary once you understand, or remember, what temperature actually is — the motion of atoms. That’s it. When we feel warm, atoms are whizzing past us faster; when it’s cold outside, the molecules in the air are moving slower. So what scientists do when they fire the lasers is push these atoms in the direction opposite to their motion: as a photon is absorbed by the target atom, the photon’s momentum is transferred.
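As a rough back-of-the-envelope sketch of how much ‘push’ a single photon delivers (numbers here are for a sodium atom, a textbook laser-cooling example, not the NIST drum itself):

```python
# Back-of-the-envelope: how much one photon slows an atom.
# Numbers are for sodium in a textbook laser-cooling setup,
# not for the NIST drum experiment.

h = 6.626e-34          # Planck's constant, J*s
wavelength = 589e-9    # sodium D-line, metres
m_na = 3.82e-26        # mass of a sodium atom, kg

photon_momentum = h / wavelength          # p = h / lambda
velocity_kick = photon_momentum / m_na    # dv = p / m

print(f"Momentum per photon: {photon_momentum:.3e} kg*m/s")
print(f"Velocity change per absorbed photon: {velocity_kick * 100:.1f} cm/s")
# ~3 cm/s per photon: tens of thousands of absorptions against the atom's
# motion are needed to stop an atom moving at thermal speeds of ~500 m/s.
```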

Laser light, however, like any light, comes in discrete packets of energy called quanta. This means there are gaps between packets, which give atoms time to resume their motion — and quantum mechanics seems to suggest this sets an upper limit on cooling. Previously, NIST researchers used sideband cooling to limit the thermal motion of a microscopic aluminum membrane, which vibrates like a drumhead, to one-third the amount of its quantum motion.

The NIST researchers took laser cooling a step further by using ‘squeezed light’ — light that’s more organized in one direction than any other. By squeezing light, the noise, or unwanted fluctuations, is moved from a useful property of the light to another aspect that doesn’t affect the experiment. The NIST team used a special circuit to generate microwave photons that were purified or stripped of intensity fluctuations, which reduced inadvertent heating of the drum.
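In textbook terms (one common convention), light has two ‘quadratures’ $X_1$ and $X_2$ whose joint noise cannot drop below a quantum floor:

$$ \Delta X_1 \, \Delta X_2 \geq \tfrac{1}{4}, $$

with ordinary vacuum splitting the noise evenly, $\Delta X_1 = \Delta X_2 = \tfrac{1}{2}$. A squeezed state pushes one quadrature below $\tfrac{1}{2}$ at the cost of inflating the other, which is the trade being exploited here: the extra noise is parked where it cannot heat the drum.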

“Noise gives random kicks or heating to the thing you’re trying to cool,” Teufel said. “We are squeezing the light at a ‘magic’ level—in a very specific direction and amount—to make perfectly correlated photons with more stable intensity. These photons are both fragile and powerful.”

The NIST paper published in Nature seems to suggest squeezed light removes the generally accepted cooling limit. Teufel says their proven technique can be refined to make things even cooler — possibly even down to exactly absolute zero. And that, ladies and gentlemen, is the coolest thing you’ll hear today.

“In principle if you had perfect squeezed light you could do perfect cooling,” he told the Washington Post. “No matter what we’re doing next with this research, this is now something we can keep in our bag of tricks to let us always start with a colder and quieter and better device that will help with whatever science we’re trying to do.”

How Albert Einstein broke the Periodic Table

In a study published in the January 19, 2016 issue of the Journal of the American Chemical Society (JACS), scientists at Tsinghua University in China confirmed that something very unusual is happening inside extremely heavy atoms, causing them to deviate from the chemical behavior predicted by their place on the Periodic Table of Elements. Because the velocity of electrons in these heavy elements gets so close to the speed of light, the effects of special relativity begin to kick in, altering the observed chemical features.

The study shows that the behavior of the element Seaborgium (Sg) does not follow the same pattern as the other members of its group, which also contains Chromium (Cr), Molybdenum (Mo), and Tungsten (W). Where these other group members can form diatomic molecules such as Cr2, Mo2, or W2 using 6 chemical bonds, diatomic Sg2 forms using only 4 chemical bonds, going unexpectedly from a bond order of 6 to a bond order of only 4. This is not predicted by the periodic nature of the table, which itself arises from quantum mechanical considerations of electrons in energy shells around the nucleus. So what’s happening here? How does relativity throw off the periodic pattern seen in our beloved table of elements?

The Periodic Table of elements was initially conceived by Dmitri Mendeleev in the mid-19th century, well before many of the elements we know today had been discovered, and certainly before there was even an inkling of quantum mechanics and relativity lurking beyond the boundaries of classical physics. Mendeleev recognized that certain elements fell into groups with similar chemical features, and this established a periodic pattern to the elements as they went from lightweight elements like hydrogen and helium to progressively heavier ones. In fact, Mendeleev could predict the very specific properties and features of as-yet-undiscovered elements due to blank spaces in his unfinished table. Many of these predictions turned out to be correct when the elements filling the blank spots were finally discovered. See figure 1.

Figure 1. Mendeleev’s 1871 version of the periodic table. Blank spaces were provided where predicted new elements would be found.

Once quantum theory was developed in the early 20th century, the explanation for the periodic behavior of the table became apparent. The electrons in the atom are arranged in orbital shells around the nucleus. There are several different orbital types, again based on predictions from quantum mechanics, and each type of orbital can hold only a specified number of electrons before the next orbital has to be used. As you go from top to bottom in the Periodic Table, you use orbitals of progressively higher energy levels. Periodic behavior arises because, although the energy levels keep getting higher, the number of electrons in each orbital type is the same for each group, going from top to bottom. See figure 2.

Figure 2. Group 1 as an example of a group in the Periodic Table. As the group goes from top to bottom the energy levels get higher and the elements get heavier.

The other great area of physics developed in the early 20th century was relativity, which didn’t seem to have much importance on the scale of the very small. Albert Einstein published his groundbreaking paper on Special Relativity (SR) in 1905, describing the effects on an object moving close to the speed of light. In 1915 he developed the General Theory of Relativity (GTR), describing the effects of a massive gravitational field. It is SR that becomes an important consideration in the very heavy elements, due to their electrons reaching velocities at a significant percentage of the speed of light.

Einstein showed that as the velocity of an object approaches the speed of light, its mass increases. This effect is too small to be noticeable at everyday speeds but becomes pronounced near light speed. It can also be shown that the velocity of an electron in orbit around an atom is directly proportional to the atomic number of the atom. In other words, the heavier the atom, the faster its innermost electrons are moving. For the element hydrogen, with atomic number 1, the electron is calculated to be moving at 1/137 the speed of light, or 0.73% of light speed. For the element gold (Au), with atomic number 79, the electrons are moving at 79/137 the speed of light, or 58% of light speed, and for Seaborgium (Sg), with atomic number 106, the electron is going at an impressive 77% of light speed. At these speeds the crazy effects of special relativity kick in, making the electron significantly heavier than it is at rest. For gold this makes the electron 1.22 times more massive than at rest, and for Seaborgium the electron’s mass comes out to be 1.57 times the electron rest mass. This, in turn, has an effect on the radius of the electron’s orbit, squeezing it down closer to the nucleus.
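Running those numbers is straightforward; here is a quick sketch reproducing the figures quoted above, using the textbook approximation that the innermost electron moves at roughly Z/137 of light speed:

```python
import math

# An innermost electron moves at roughly v/c = Z/137
# (Z = atomic number; 137 ~ 1/fine-structure-constant),
# and its relativistic mass grows by the Lorentz factor gamma.

def speed_fraction(Z):
    return Z / 137.0

def gamma(Z):
    beta = speed_fraction(Z)
    return 1.0 / math.sqrt(1.0 - beta**2)

for name, Z in [("H", 1), ("Au", 79), ("Sg", 106)]:
    print(f"{name}: v = {speed_fraction(Z):.1%} of c, "
          f"relativistic mass = {gamma(Z):.2f} x rest mass")
# H:  v = 0.7% of c,  gamma ~ 1.00
# Au: v = 57.7% of c, gamma ~ 1.22
# Sg: v = 77.4% of c, gamma ~ 1.57
```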

Some relativistic effects have already been known for certain heavy elements. The color of gold, for instance, arises from the effects of relativity acting on its outer electrons, altering the energy spacing between two of its orbitals where visible light is absorbed, and giving gold its characteristic color. If not for these relativistic effects, gold would be predicted to appear whitish.

For the elements in Group 6 of the Periodic Table (Cr, Mo, and W) studied in the JACS article (see Figure 3), each has five d-orbitals and one s-orbital capable of forming bonds with another atom. Sg breaks the periodic pattern because its highest-energy s-orbital is so stabilized by the effects of its relativistically moving electrons that it doesn’t contribute to bonding. Due to the intricacies inherent in molecular orbital theory, this drops the number of bonding orbitals from 6 in Cr, Mo, and W to only 4 in Sg (even though Sg is a Group 6 member). It also means that the bond between Sg and Sg in the Sg2 molecule is 0.3 angstroms longer than expected, even though the Sg radius is only 0.06 angstroms bigger than W’s. If relativity didn’t have an effect, the Sg2 molecule would be joined together by 6 orbital bonds, like any respectable Group 6 element should be! The same effect was also found in the Group 7 elements, with Hassium (Hs) showing the same drop in bond order due to relativistic effects, just like Sg.

Figure 3. A modern version of the Periodic Table of Elements. Notice the Group 6 elements Cr, Mo, W, and Sg.

The periodic table of elements is an impressive scientific achievement, whose periodicity reveals an underlying order in nature. While this periodicity works remarkably well, the few exceptions to the rule uncover important principles at work. Einstein’s theory of relativity breaks the periodic table in some interesting and unexpected ways. It’s the very heavy elements on the chart that don’t show good “table” manners, thanks to Einstein.


Journal Reference and other reading:
1. Yi-Lei Wang, Han-Shi Hu, Wan-Lu Li, Fan Wei, and Jun Li. Relativistic Effects Break Periodicity in Group 6 Diatomic Molecules. J. Am. Chem. Soc., 2016, 138 (4), pp 1126–1129. DOI: 10.1021/jacs.5b11793

2. Pekka Pyykko. Relativistic effects in structural chemistry. Chem. Rev., 1988, 88 (3), pp 563–594. DOI: 10.1021/cr00085a006

3. Lars J. Norrby. Why is mercury liquid? Or, why do relativistic effects not get into chemistry textbooks? J. Chem. Educ., 1991, 68 (2), p 110. DOI: 10.1021/ed068p110

Electromagnetic Breakthrough: Scientists Design Antenna ‘on a Chip’

Researchers from the University of Cambridge in England claim to have unraveled one of the great mysteries of electromagnetism, and believe their work in ultra-small antennas could not only revolutionize global communications, but also explain some of the tricky areas where electromagnetism and quantum physics overlap.

Image via ScienceDaily.

Basically, they’ve found that electromagnetic waves are not only generated from the acceleration of electron, but also from something called symmetry breaking. Symmetry breaking in physics describes a phenomenon where (infinitesimally) small fluctuations acting on a system which is crossing a critical point decide the system’s fate, by determining which branch of a bifurcation is taken. Imagine taking a long line of small line of infinitely small, random 50-50 decisions which ultimately decide the (electromagnetic) outcome. Needless to say, the implications for wireless communications are huge.

“Antennas, or aerials, are one of the limiting factors when trying to make smaller and smaller systems, since below a certain size, the losses become too great,” said Professor Amaratunga of Cambridge’s Department of Engineering. “An aerial’s size is determined by the wavelength associated with the transmission frequency of the application, and in most cases it’s a matter of finding a compromise between aerial size and the characteristics required for that application.”

The problem is that even though we’ve been using these aerials (antennas) for quite a while, there’s still a lot we don’t understand about them. Specifically, some physical variables associated with the radiation of energy are not thoroughly understood. Electromagnetic theory becomes problematic when dealing with radio wave emissions from a dielectric solid, something which occurs in every modern phone or laptop.

“In dielectric aerials, the medium has high permittivity, meaning the velocity of the radio wave decreases as it enters the medium,” said researcher Dr Dhiraj Sinha. “What hasn’t been known is how the dielectric medium results in emission of electromagnetic waves. This mystery has puzzled scientists and engineers for more than 60 years.”

As you get to working with smaller and smaller components, quantum theory slowly starts to take over. But the thing is, the phenomenon of radiation due to electron acceleration, which works perfectly well in electromagnetic theory, has no equivalent in quantum mechanics. This is where the new work might step in, proposing that symmetry breaking is also responsible for some of the radiation. When electronic charges are not in motion, there is symmetry of the electric field; when that symmetry breaks, radiation is created in tiny steps.

“If you want to use these materials to transmit energy, you have to break the symmetry as well as have accelerating electrons – this is the missing piece of the puzzle of electromagnetic theory,” said Amaratunga. “I’m not suggesting we’ve come up with some grand unified theory, but these results will aid understanding of how electromagnetism and quantum mechanics cross over and join up. It opens up a whole set of possibilities to explore.”

It’s quite a basic realization, but it’s actually a breakthrough – a potential paradigm shift; it’s one of those rare things that might help expand our understanding of theoretical physics, as well as having direct and immediate implications in day to day life. But don’t get all excited yet – it’s still going to be quite a while before our smartphones can be upgraded with this knowledge.

“It’s actually a very simple thing, when you boil it down,” said Sinha. “We’ve achieved a real application breakthrough, having gained an understanding of how these devices work.”


New study suggests Big Bang never occurred, Universe existed forever

Researchers have created a new model that applies our latest understanding of quantum mechanics to Einstein’s theory of general relativity — and what they came up with is truly hard to wrap your mind around.

Currently accepted theories state that the Universe is around 13.8 billion years old, and that before that everything in existence was squished into a tiny point — also known as the singularity — so incredibly compact that it contained everything that eventually became the Universe (which is itself pretty hard to fathom). As the Big Bang took place, the Universe started to expand, and it is expanding faster and faster to this day.

Image via AMNH.

The problem with current theories is that the math breaks down when you start to analyze what happened during or before the Big Bang.

“The Big Bang singularity is the most serious problem of general relativity because the laws of physics appear to break down there,” co-creator of the new model, Ahmed Farag Ali from Benha University and the Zewail City of Science and Technology, both in Egypt, told Lisa Zyga from Phys.org.

Working in a team that included Saurya Das at the University of Lethbridge in Alberta, he managed to create a satisfying new model in which the Big Bang never occurred and the Universe simply existed forever.

“In cosmological terms, the scientists explain that the quantum corrections can be thought of as a cosmological constant term (without the need for dark energy) and a radiation term. These terms keep the Universe at a finite size, and therefore give it an infinite age. The terms also make predictions that agree closely with current observations of the cosmological constant and density of the Universe.”

According to this model, the Universe also has no end — which is perhaps even more interesting if you think about it — and it is filled with a quantum fluid, which might be composed of gravitons: hypothetical massless particles that mediate the force of gravity.

The model shows great promise, but it has to be said – it’s only a mathematical theory at this point. We don’t have the physics to back it up or prove it wrong at the moment, and we likely won’t have it in the near future. Still, it’s remarkable that it solves so many problems at once, and the conclusions are very intriguing.

“It is satisfying to note that such straightforward corrections can potentially resolve so many issues at once,” Das told Zyga.

Read the full study here.


From atoms to life size: manufacturing from nanoscale up to macro

Image: DARPA


DARPA just announced the launch of an extremely exciting new program: Atoms to Product (A2P). The aim is to develop a suite of technologies that will allow manufacturing of products from the nanoscale up to what we know as ‘life size’. The revolutionary miniaturization and assembly methods would work at scales 100,000 times smaller than current state-of-the-art technology. If successful, DARPA might be able to make macroscale products (anything from the size of a tennis ball to a tank) that exhibit the nanoscale or quantum properties usually encountered only when we delve into the realm of atoms.

When fabricated at extremely small scales (a few ten-billionths of a meter), materials exhibit extremely peculiar behavior which in some cases can be useful to society. These include quantized electrical characteristics, glueless adhesion, rapid temperature changes, and tunable light absorption and scattering that, if available in human-scale products and systems, could offer potentially revolutionary defense and commercial capabilities.

“If successful, A2P could help enable creation of entirely new classes of materials that exhibit nanoscale properties at all scales,” DARPA program manager John Main said in a news release, “It could lead to the ability to miniaturize materials, processes and devices that can’t be miniaturized with current technology, as well as build three-dimensional products and systems at much smaller sizes.”

This kind of scaled assembly, working from the nanoscale up to millions of orders of magnitude in size, is widely found in nature. Prime examples include all plants and animals, which are effectively systems assembled from atomic- and molecular-scale components a million to a billion times smaller than the whole organism. What DARPA is trying to do is to lay a foundation for a similar assembly method that might lead to a whole new class of materials.

So, how excited should we be about this? Not all DARPA projects pan out, and the defense agency is known for dabbling in a slew of domains. When its projects do work, however, they can transform the world. Though a defense institution, DARPA has passed much of its technology into civilian hands — to name a few: the internet, GPS, and the graphical user interface.


Measuring particle momentum without breaking the uncertainty principle


Quantum mechanics is weird. There’s entanglement, appropriately dubbed by Einstein as “spooky action at a distance”, which links two particles so that a change to one is instantly reflected in the other, even if they’re at opposite sides of the universe. Then there’s the always pesky uncertainty principle, which destroys any chance of measuring a particle’s momentum once you’ve pinned down its location. A novel way of measuring a photon’s location that allows physicists to measure its momentum too might come as a game changer, though. During this study, no laws of physics were harmed or broken!

Proposed for the first time in 1927 by Werner Heisenberg, the uncertainty principle tells us that there is a fuzziness in nature, a fundamental limit to what we can know about the behaviour of quantum particles and, therefore, the smallest scales of nature. The uncertainty principle says that we cannot measure the position (x) and the momentum (p) of a particle with absolute precision. The more accurately we know one of these values, the less accurately we know the other. Why? It has to do with something innate in quantum physics.
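Formally, the principle puts a hard floor under the product of the two uncertainties:

$$ \Delta x \, \Delta p \geq \frac{\hbar}{2}, $$

where $\hbar$ is the reduced Planck constant: squeeze $\Delta x$ down and $\Delta p$ must grow, and vice versa.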

[ALSO READ] New technique bypasses Heisenberg’s uncertainty principle

At the tiniest scale, particles stop behaving in an intuitive manner – they’re not like the everyday objects you see. One key understanding is that a particle doesn’t occupy a fixed position at a fixed moment in time; instead, at any given time, it can exist in a multitude of possible states at once. The chances of it being in any given state are described by an equation called the quantum wavefunction. Oddly enough, whenever scientists perform a measurement, they cause a collapse of the wavefunction, and all the other properties escape them forever. But there may be a nifty workaround.

Digital photographs, MRI scans and many other technologies use compression to save space and ease use. A DSLR camera, for instance, typically records a shot in RAW format, then converts it to a compressed jpeg. A technique called compressive sensing works in much the same way, with a key difference: the measurements are made while compressing. Engineers use this technique extensively to wind up with more or less the same result from a minimal number of measurements.
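As a toy illustration of that idea (a sketch of compressive sensing in general, not the Rochester apparatus), the snippet below recovers a sparse signal from far fewer random ‘mask’ measurements than the signal has samples, using the standard orthogonal matching pursuit algorithm:

```python
import numpy as np

# Toy compressive sensing demo: recover a sparse signal from
# far fewer random "mask" measurements than it has samples.
rng = np.random.default_rng(0)

n, m, k = 64, 20, 3                  # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)  # sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement masks
y = A @ x                                  # measuring *while* compressing

# Orthogonal matching pursuit: greedily pick the mask column that best
# explains the residual, then re-fit the signal on the chosen support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

x_hat = np.zeros(n)
x_hat[support] = coeffs
print(f"max reconstruction error: {np.max(np.abs(x - x_hat)):.2e}")
```

The point is that each measurement already mixes (compresses) the whole signal, yet the original can still be reconstructed because it is sparse.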


The apparatus employed by the researchers to characterize photon momentum, while preserving info on position. Photo: Gregory A. Howland

A team of physicists at the University of Rochester believed using compressive sensing for particle measurement was worth a try. They devised an experiment in which a box was fitted with an array of mirrors facing either towards or away from a detector. A laser shone light on the mirrors, which were arranged to act as a filter – in some places photons would pass through, in others they were blocked. If a photon made it to the detector, the physicists knew it had been in one of the locations where the mirrors offered a throughway.

“All we know is either the photon can get through that pattern, or it can’t,” says Gregory A. Howland, first author of a paper reporting the research published June 26 in Physical Review Letters. “It turns out that because of that we’re still able to figure out the momentum—where it’s going. The penalty that we pay is that our measurement of where it’s going gets a little bit of noise on it.”

So the filter allows the scientists to measure a particle’s momentum while knowing its position as well. Of course, it’s not the absolute momentum they’re measuring, but it’s pretty close.

“We do not violate the uncertainty principle,” Howland says. “We just use it in a clever way.”

These advancements are really important if we’re ever to have a working quantum computer – the next generation of computers that exploit quantum effects to deliver computing power several orders of magnitude above conventional machines.

Our Universe may be just a Hologram, complex simulations show

In a black hole, Albert Einstein’s theory of gravity clashes with quantum physics; for decades, scientists have tried to find a way to bridge the cap between these monumental theories, but so far, they simply seem irreconcilable. But the conflict could be solved if our Universe were in fact a holographic projection.

String theory, dimensions and holograms

The Calabi–Yau manifold, a special type of manifold that is used in String Theory. Via Wikipedia.

Before we get into the research, there are a few things I want to explain, because the field is complicated and often hard to understand.

First of all, don’t think of a hologram that’s Matrix or Star Trek style. A holograph is a mathematical representation of something inside something else. It’s like a video playing on your screen: it’s there, but it doesn’t actually take place on your screen. Furthermore, on your 2D screen, you can watch 3D and even 4D representations (time being the 4th).

String Theory is a very popular theory among modern physicists. The essential, simplified idea behind string theory is this: all of the different ‘fundamental’ particles in the Universe are made up of one basic object: a string. String Theory is also an incredibly ambitious idea – it aims to provide a complete, unified, and consistent description of the fundamental structure of our universe – something considered to be the Holy Grail of physics.

But String Theory is not proven yet, and researchers have huge problems making the math behind it work. To explain some of the things that are going on, they need 10 dimensions to make the math work; to explain other things, they need a 1-dimensional Universe. The idea of a holographic Universe is the best one so far that makes both pictures work. Here’s how.

A holographic Universe

Artistic Representation of a Black Hole. Via Nature.

In 1997, theoretical physicist Juan Maldacena proposed an audacious model of the Universe – one in which gravity arises from infinitesimally thin, one-dimensional vibrating strings; right from the start, this model challenged, thrilled, and scandalized researchers. But there was even more to it: it proposed a 10-dimensional Universe (9 spatial dimensions plus time) and explained that this Universe was simply a hologram – that all the real action plays out in a simpler, flatter cosmos where there is no gravity. Hard to fathom, right?

But the idea caught on pretty well in the world of physicists because, as hard to believe as it seems, it has two major advantages:
– it takes the popular yet unproven string theory one step further to completion
– it bridges the gap between Einstein’s relativity and quantum mechanics.

If, through an analogy, we consider the two theories to be two different languages, with some common features but many differences, Maldacena’s model would be a Rosetta stone – allowing physicists to translate back and forth between the two languages, and to solve problems in one model that seemed intractable in the other, and vice versa. But the validity of his claims was still a huge question mark; basically, they were little more than educated, plausible, fitting guesses.

Now, in two different papers, Yoshifumi Hyakutake of Ibaraki University in Japan and his colleagues provide the first pieces of evidence that Maldacena’s ideas are more than wishful thinking.

“They have numerically confirmed, perhaps for the first time, something we were fairly sure had to be true, but was still a conjecture — namely that the thermodynamics of certain black holes can be reproduced from a lower-dimensional universe,” says Leonard Susskind, a theoretical physicist at Stanford University in California who was among the first theoreticians to explore the idea of holographic universes.

In the first paper, Hyakutake computed the properties of a black hole (internal energy, position of the event horizon, entropy and others) based on String Theory, as well as the effects of so-called virtual particles that continuously pop into and out of existence. In the second one, he calculated the internal energy of the corresponding lower-dimensional cosmos with no gravity. The two results matched.

“It seems to be a correct computation,” says Maldacena, who is now at the Institute for Advanced Study in Princeton, New Jersey and who did not contribute to the team’s work.

But even he noted that neither of the model universes explored by the Japanese team resembles our own.

The first one (with ten dimensions) has eight of them forming an eight-dimensional sphere. The lower-dimensional, gravity-free one has but a single dimension, and “its menagerie of quantum particles resembles a group of idealized springs, or harmonic oscillators, attached to one another”, as Nature explains. Still, Maldacena believes this is extremely promising work, and he hopes that one day, all the forces in our Universe can be explained simply through quantum mechanics and string theory.


New theory suggests quantum entanglement and wormholes are linked together


One of the predictions derived from Einstein’s theory of general relativity is the existence of wormholes – spacetime shortcuts. In theory, such bridges may offer instantaneous travel between the two bridgeheads, even if these are light-years away from each other. Two independent studies suggest that there’s a link between quantum entanglement and wormholes – or, to be more precise, that each wormhole has a corresponding pair, just like two entangled quantum particles.

Quantum entanglement is nothing short of bizarre. In a pair of entangled particles, a change in the quantum characteristics of one particle can’t happen without also causing a change in the other, even if the particles are millions of miles apart. This concomitant change happens instantaneously, which is why some people liken it to teleportation. I know: it’s a really strange and non-intuitive aspect of the quantum theory of matter – this is why Einstein called it “spooky action at a distance.” For what it’s worth, although quantum entanglement was first theorized a long time ago, only recently did researchers prove that it’s real.

Practical applications for quantum entanglement have already been proposed, with entangled particles suggested for use in powerful quantum computers and “impossible”-to-crack networks. Now, it seems quantum entanglement may be linked to wormholes.

Entangled wormholes

Theoretical physicists Juan Martín Maldacena at the Institute for Advanced Study in Princeton and Leonard Susskind at Stanford University argue that wormholes are nothing but pairs of black holes entangled together. A proposed mechanism of wormhole generation is that when a black hole is born, its pair is simultaneously created as well. Moreover, they conjectured that entangled particles such as electrons and photons are connected by extraordinarily tiny wormholes.

[READ] Quantum theory suggests black holes are wormholes

Kristan Jensen, a theoretical physicist at Stony Brook University in New York, and his colleague Andreas Karch, a theoretical physicist at the University of Washington in Seattle, sought to investigate how entangled particles behave in supersymmetry theory, which suggests that all subatomic particles have a corresponding partner.

One of the biggest challenges physicists seek to address is developing a unified theory of physics, one that reconciles both general relativity and quantum mechanics. Supersymmetry is one such proposition that aims to unite the two grand theories of physics that explain the large universe (general relativity) and the tiny universe (quantum mechanics).

One huge idea expressed in this theory relates to holography, or the notion that what happens in this universe may emerge from a reality with a different number of dimensions, like a 2-d hologram giving the impression of a 3-d object. (I’d highly recommend watching Carl Sagan’s video discussing the tesseract.) Anyway, if you imagine a physical system that exists in only 3 dimensions, in theory you can describe that system using objects behaving in the four dimensions that general relativity describes the universe as having (width, length, depth and time).

Jensen and Karch found that if one imagines entangled pairs in a universe with four dimensions, they behave in the same way as wormholes in a universe with an extra fifth dimension. A wormhole – which curves space and time until two points coincide – and entanglement may then be one and the same thing.

“Entangled pairs were the holographic images of a system with a wormhole,” Jensen said. Independent research from theoretical physicist Julian Sonner at the Massachusetts Institute of Technology supports this finding.

“There are certain things that get a scientist’s heart beating faster, and I think this is one of them,” Jensen told LiveScience. “One really exciting thing is that maybe, inspired by these results, we can better understand the relation between entanglement and space-time.”


‘Squeezed light’ with less noise than found in vacuum to boost sensors

For many, quantum mechanics is very hard to comprehend because so many of its insights are extremely bizarre (see spooky action at a distance, i.e. quantum entanglement) and counter-intuitive (for instance wave-particle duality, the idea that all things have both a wave-like and a particle-like nature). For many years, vacuum was synonymous with void for scientists. Once the quantum mechanical theories describing very small things like atoms and subatomic particles were proposed at the beginning of the last century, it soon became clear that vacuum was far from empty, and far from still.

The latter insight has helped scientists a great deal in calibrating their instruments when observing the Universe. You see, even in vacuum there’s noise, albeit imperceptible, in the form of tiny quantum fluctuations. Recently, a team of researchers at the California Institute of Technology (Caltech) engineered a system that produces what’s referred to as “squeezed light” – a special type of light with fewer fluctuations than those found in vacuum, which is useful for making precise measurements at lower power levels than are required when using normal light.

Research into “squeezed light” and its potential benefits can be traced back more than 30 years, to when Kip Thorne, Caltech’s Richard P. Feynman Professor of Theoretical Physics, Emeritus, and physicist Carlton Caves first proposed that squeezed light would enable scientists to build more sensitive detectors that could make more precise measurements. A decade later, colleagues at Caltech conducted some of the first experiments using squeezed light, relying on so-called nonlinear materials, which have unusual optical properties.

‘Quiet’ light

(a) SEM image of the silicon micromechanical resonator used to generate squeezed light. Light is coupled into the device using a narrow waveguide and reflects off a back mirror formed by a linear array of etched holes. Upon reflection, the light interacts with a pair of double-nanobeams (micromechanical resonator/optical cavity), which are deflected in a way that tends to cancel fluctuations in the light. (b) Numerical model of the differential in-plane motion of the nanobeams. Credit: Caltech/Amir Safavi-Naeini, Simon Groeblacher, and Jeff Hill

These materials are very difficult to manufacture and come by, however. This latest Caltech research into squeezed light, led by Oskar Painter, a professor of applied physics at Caltech and the senior author on a paper describing the system, seeks to address this issue. Their system produces the same fabled squeezed light, yet instead of exotic materials it relies on old-fashioned, abundant silicon.

“This system should enable a new set of precision microsensors capable of beating standard limits set by quantum mechanics,” says Oskar Painter. “Our experiment brings together, in a tiny microchip package, many aspects of work that has been done in quantum optics and precision measurement over the last 40 years.”

Instead of using complicated materials of unique optical characteristics, the researchers employed a special design.

“We work with a material that’s very plain in terms of its optical properties,” says Amir Safavi-Naeini (PhD ’13), a graduate student in Painter’s group and one of three lead authors on the new paper. “We make it special by engineering or punching holes into it, making these mechanical structures that respond to light in a very novel way. Of course, silicon is also a material that is technologically very amenable to fabrication and integration, enabling a great many applications in electronics.”

In this new system, a waveguide feeds laser light into a cavity created by two tiny silicon beams. Once there, the light bounces back and forth thanks to the engineered holes, which effectively turn the beams into mirrors. When photons – particles of light – strike the beams, they cause the beams to vibrate, and the particulate nature of the light introduces quantum fluctuations that affect those vibrations. This noise interferes with measurements, meaning more laser power is needed to overcome it and make precise measurements – which comes with numerous drawbacks.

According to the researchers, here lies the key – the spooky wave-particle duality. You may have heard of noise-canceling headphones, which, when worn and turned on, cancel the ambient noise around you. The new system works in more or less the same way, canceling noise through interference: in this case, the light and the beams interact so strongly with each other that the beams impart the quantum fluctuations they experience back onto the light.

“This is a demonstration of what quantum mechanics really says: Light is neither a particle nor a wave; you need both explanations to understand this experiment,” says Safavi-Naeini. “You need the particle nature of light to explain these quantum fluctuations, and you need the wave nature of light to understand this interference.”

“This new way of ‘squeezing light’ in a silicon micro-device may provide new, significant applications in sensor technology,” said Siu Au Lee, program officer at the National Science Foundation, which provided support for the work through the Institute for Quantum Information and Matter, a Physics Frontier Center. “For decades, NSF’s Physics Division has been supporting basic research in quantum optics, precision measurements and nanotechnology that laid the foundation for today’s accomplishments.”

The findings were described in the journal Nature.


Quantum theory takes out singularity, suggests black holes are wormholes

Black holes are among the most interesting and puzzling objects in our Universe – that we know of, at least. But as if they weren’t mysterious enough, researchers have found that if you apply a quantum theory of gravity to these bizarre objects, the all-crushing singularity at their core disappears, opening a whole new Universe of possibilities – literally.

What we know so far

At the center of every black hole there lies what is called a singularity – a region where space and time become infinite, as described by Albert Einstein. If you got sucked into the singularity, you would become infinitely dense, but what happens after that… nobody really knows. From a mathematical point of view, nothing happens from that point on.

“When you reach the singularity in general relativity, physics just stops, the equations break down,” says Abhay Ashtekar of Pennsylvania State University.

But this isn’t a problem limited strictly to black holes – the big bang, the one birth of our universe is thought to have  also started out with a singularity – a singularity which again, breaks the limits of general relativity.

Adding a little quantum physics


As if things weren’t strange enough already, researchers started to add quantum physics in the mix: in 2006, Ashtekar and colleagues applied loop quantum gravity (LQG) to the birth of the universe.

LQG combines general relativity with quantum mechanics and defines space-time as a web of indivisible chunks about 10^-35 metres in size. What they found was absolutely stunning: as they went back in time in an LQG Universe, they reached the Big Bang, but no singularity – instead, something even more curious happened: they crossed a “quantum bridge” (a politically correct term for a wormhole) into another, older universe, basically confirming the Big Bounce theory of the Universe.
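That 10^-35-metre figure is the Planck length, which you can check directly from three fundamental constants; a quick sketch:

```python
import math

# The ~1e-35 m "chunk" size quoted for loop quantum gravity is the
# Planck length, built from three fundamental constants of nature.
hbar = 1.0546e-34   # reduced Planck constant, J*s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)
print(f"Planck length ~ {planck_length:.2e} m")   # ~1.6e-35 m
```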

The Big Bounce is a hypothetical scientific model that claims all Universal start and end is cyclic – every Big Bang is the result of the collapse of a previous Universe.

Again, this happened in 2006, and now, Jorge Pullin at Louisiana State University and Rodolfo Gambini at the University of the Republic in Montevideo, Uruguay, have applied LQG on a much smaller scale: to an individual black hole – results were, again, stunning.

Wormholes and quantum physics


In this new model, the gravitational field still increases as you fall in, but it never reaches a singularity; after you pass the black hole’s center, gravity starts dropping, and as you come out on the other side (assuming that could be possible), you would end up in another region of our universe, or another universe altogether. Despite only holding for a simple model of a black hole, the researchers – and Ashtekar – believe the theory may banish singularities from real black holes too.

So there is a mathematical theory suggesting that if you go in one end of a black hole, you may end up in another part of the Universe, or in another Universe altogether. While other theories, not to mention some works of science fiction, have suggested this before, now it’s serious business. But here’s the kicker:

Black holes are also believed to evaporate over time. They soak up matter and information over long periods, and if that information ends up in another Universe, then from our point of view it would be gone forever – practically destroyed – and that defies quantum theory itself!

But researchers suggest that if the black hole has no singularity, then the information needn’t be lost – it may just tunnel its way through to another universe – and we have to take that into consideration as well.

“Information doesn’t disappear, it leaks out,” says Pullin.

Sometimes, it just feels that life would be much simpler without quantum physics.

Journal Reference: Loop Quantization of the Schwarzschild Black Hole


Is there such a thing as unjammable radar? Quantum imaging radar seems so

Detecting a potential threat before it occurs is the first step to preventing any aggression. In today’s wars, the scales favor the party that controls the air: dominate the battle in the air, and you’ll dominate the battlefield on the ground as well. It’s no secret that impressive aircraft detection systems have been developed and deployed over the years, yet every time, a counter was found. Recently, physicists at the University of Rochester in New York unveiled a novel technique based on quantum imaging that is potentially unjammable, making the detection of any object possible.

Unjammable radar

The first radar prototype came in 1936 and soon showed its value in the Second World War, when it became an invaluable asset to the RAF – and a complete nightmare for the Luftwaffe – during the heavy battles over Britain. Initially, radars were based on the clever principle that all metals reflect radio waves. For every weapon, however, there’s an anti-weapon, and in much the same manner, anti-detection measures were employed and evolved along with radars. These include drowning out radar signals or launching false signals to trick the radar. One modern and highly effective anti-radar technique involves intercepting radar waves, modifying them, and sending them back in such a manner that the information presented doesn’t reveal the threat.

Now, Mehul Malik and colleagues believe they’ve developed a system that can detect aircraft without the target being able to counter the monitoring. Their technique harnesses the power of quantum imaging: once a photon is measured, it instantly loses its quantum properties. Research in the field has mostly been applied to data encryption, but the Rochester researchers harnessed these properties for radar imaging as well.

A radar that cannot be fooled

Basically, the system works by using polarized photons to detect and image objects. Once they meet an object in the air, they bounce back to form an image. If the aircraft attempts to intercept these photons and change the information they convey, a disruption inevitably occurs and can be registered. It’s then pretty clear to the radar system that something’s out there. The process is irreversible, so the technique is basically unjammable.

“In order to jam our imaging system, the object must disturb the delicate quantum state of the imaging photons, thus introducing statistical errors that reveal its activity,” say Malik and co.

Malik and co have tested their idea by bouncing photons off an aeroplane-shaped target and measuring the polarization error rate in the return signal. Without any eavesdropping the system easily imaged the aeroplane, however when the other end tried to alter the signal to send back the image of a bird, the interference was easy to spot.
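The statistics behind that detection resemble the intercept-and-resend scenario from quantum cryptography. A toy simulation (a sketch of that general scenario, not the Rochester setup) shows why a jammer that measures and re-emits polarized photons betrays itself with roughly a 25% error rate:

```python
import numpy as np

# Toy model of why interception is detectable (not the Rochester setup):
# a jammer that measures polarized photons in a randomly-guessed basis and
# re-sends them corrupts roughly a quarter of the returns.
rng = np.random.default_rng(1)
n = 100_000

send_basis = rng.integers(0, 2, n)     # 0 = rectilinear, 1 = diagonal
bits = rng.integers(0, 2, n)           # polarization within that basis

jam_basis = rng.integers(0, 2, n)      # the jammer guesses a basis
# A wrong basis guess randomizes the re-sent polarization half the time
flipped = (jam_basis != send_basis) & (rng.random(n) < 0.5)
received = bits ^ flipped

error_rate = np.mean(received != bits)
print(f"induced error rate: {error_rate:.1%}")   # ~25%, exposing the jammer
```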

It sounds perfect, but it’s not. Since it’s based on the same principles as quantum encryption, which has been around for some time and is still in its incipient age, one can infer the same advantages and disadvantages. It too, like this novel radar system, is uncrackable in theory – in practice not so much. Still this is highly interesting, and armed with such a sophisticated means of detection, countries could protect their boarder a lot better.

The quantum imaging radar was described in the journal Applied Physics Letters.