Tag Archives: mass

Is information the fifth state of matter? Physicist says there’s one way to find out

Credit: Pixabay.

Einstein’s theories of relativity were revolutionary on many levels. One of their many groundbreaking consequences is that mass and energy are interchangeable: even a body at rest carries an energy equal to its mass times the speed of light squared. The immediate implication is that you can make mass — tangible matter — out of energy, thereby explaining how the universe as we know it came to be: during the Big Bang, an enormous amount of energy was converted into the first particles. But there may be much more to it.

In 2019, physicist Melvin Vopson of the University of Portsmouth proposed that information is equivalent to mass and energy, existing as a separate state of matter, a conjecture known as the mass-energy-information equivalence principle. This would mean that every bit of information has a finite and quantifiable mass. For instance, a hard drive full of information is heavier than the same drive empty.

That’s a bold claim, to say the least. Now, in a new study, Vopson is ready to put his money where his mouth is, proposing an experiment that can verify this conjecture.

“The main idea of the study is that information erasure can be achieved when matter particles annihilate their corresponding antimatter particles. This process essentially erases a matter particle from existence. The annihilation process converts all the [remaining] mass of the annihilating particles into energy, typically gamma photons. However, if the particles do contain information, then this also needs to be conserved upon annihilation, producing some lower-energy photons. In the present study, I predicted the exact energy of the infrared photons resulting from this information erasure, and I gave a detailed protocol for the experimental testing involving the electron-positron annihilation process,” Vopson told ZME Science.

Information: just another form of matter and energy?

The mass-energy-information equivalence (M/E/I) principle combines Rolf Landauer’s principle, an application of the laws of thermodynamics that assigns a definite energy cost to erasing information, with Claude Shannon’s information theory, which gave us the digital bit. This M/E/I principle, along with its main prediction that information has mass, is what Vopson calls the 1st information conjecture.

The 2nd conjecture is that all elementary particles store information content about themselves, similarly to how living things are encoded by DNA. In another recent study, Vopson used this 2nd conjecture to calculate the information storage capacity of all visible matter in the Universe. The physicist also calculated that — at a current 50% annual growth rate in the number of digital bits humans are producing — half of Earth’s mass would be converted to digital information mass within 150 years.

However, testing these conjectures is not trivial. For instance, a 1 terabyte hard drive filled with digital information would gain a mass of only 2.5 × 10⁻²⁵ kg compared to the same drive when erased. Measuring such a tiny change in mass is impossible even with the most sensitive scale in the world.
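As a sanity check, the quoted figure follows from Vopson’s conjectured mass per bit, m = k_B·T·ln(2)/c² (Landauer’s erasure energy divided by c²). A minimal sketch, with the storage temperature and drive capacity as assumed round numbers:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
c = 2.998e8          # speed of light, m/s
T = 300.0            # assumed storage temperature, K

# Vopson's conjectured mass of one bit of information: m = k_B * T * ln(2) / c^2
m_bit = k_B * T * math.log(2) / c**2

# a 1 TB drive holds 8 x 10^12 bits
n_bits = 8e12
m_drive = m_bit * n_bits

print(f"mass per bit:          {m_bit:.2e} kg")
print(f"full 1 TB drive gains: {m_drive:.2e} kg")
```

Running this reproduces the ~2.5 × 10⁻²⁵ kg figure quoted above.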

Instead, Vopson has proposed an experiment that tests both conjectures using a particle-antiparticle collision. Since every particle is supposed to contain information, which supposedly has its own mass, then that information has to go somewhere when the particle is annihilated. In this case, the information should be converted into low-energy infrared photons.

The experiment

According to Vopson’s predictions, an electron-positron collision should produce two high-energy gamma rays, as well as two infrared photons with wavelengths around 50 micrometers. The physicist adds that altering the samples’ temperature wouldn’t influence the energy of the gamma rays, but would shift the wavelength of the infrared photons. This is important because it provides a control mechanism for the experiment that can rule out other physical processes.
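A crude back-of-the-envelope check of the wavelength scale, using only Landauer’s erasure energy per bit at an assumed room temperature (Vopson’s detailed prediction comes from the full M/E/I treatment, so the exact number differs):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
T = 300.0            # assumed temperature, K

E_bit = k_B * T * math.log(2)   # Landauer erasure energy of one bit
lam = h * c / E_bit             # wavelength of a photon carrying that energy

print(f"{lam * 1e6:.0f} micrometres")
```

This lands in the tens-of-micrometres infrared regime, the same ballpark as the ~50 micrometres quoted above, and it shifts with temperature just as the control mechanism described here requires.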

Validating the mass-energy-information equivalence principle could have far-reaching implications for physics as we know it. In a previous interview with ZME Science, Vopson said that if his conjectures are correct, the universe would contain a stupendous amount of digital information. He speculated that, considering all this, the elusive dark matter could be just information. Only 5% of the universe is made of baryonic matter (i.e. things we can see or measure), while the remaining 95% of the mass-energy content is dark matter and dark energy — fancy terms physicists use for things whose nature they simply do not know.

Then there’s the black hole information loss paradox. According to Einstein’s general theory of relativity, the gravity of a black hole is so overwhelming, that nothing can escape its clutches within its event horizon — not even light. But in the 1970s, Stephen Hawking and collaborators sought to finesse our understanding of black holes by using quantum theory; and one of the central tenets of quantum mechanics is that information can never be lost. One of Hawking’s major predictions is that black holes emit radiation, now called Hawking radiation. But with this prediction, the late British physicist had pitted the ultimate laws of physics — general relativity and quantum mechanics — against one another, hence the information loss paradox. The mass-energy-information equivalence principle may lend a helping hand in reconciling this paradox.

“It appears to be exactly the same thing that I am proposing in this latest article, but at very different scales. Looking closely into this problem will be the scope of a different study and for now, it is just an interesting idea that must be followed,” Vopson tells me.

Finally, the mass-energy-information equivalence could help settle a whimsical debate that has been gaining steam lately: the notion that we may all be living inside a computer simulation. The debate can be traced to a seminal paper published in 2003 by Nick Bostrom of the University of Oxford, which argued that a technologically adept civilization with immense computing power could simulate new realities with conscious beings in them. Bostrom argued that if such civilizations exist and choose to run many simulations, the probability that we are living in one is close to one.

While it’s easy to dismiss the computer simulation theory, once you think about it, you can’t disprove it either. But Vopson thinks the two conjectures could offer a way out of this dilemma.

“It is like saying, how a character in the most advanced computer game ever created, becoming self-aware, could prove that it is inside a computer game? What experiments could this entity design from within the game to prove its reality is indeed computational?  Similarly, if our world is indeed computational / simulation, then how could someone prove this? What experiments should one perform to demonstrate this?”

“From the information storage angle – a simulation requires information to run: the code itself, all the variables, etc… are bits of information stored somewhere.”

“My latest article offers a way of testing our reality from within the simulation, so a positive result would strongly suggest that the simulation hypothesis is probably real,” the physicist said.

What is Mass-Energy Equivalence (E=mc²): the most famous formula in science

In a series of papers beginning in 1905, Einstein’s theory of special relativity revolutionized the concepts of space and time, uniting them into a single entity: spacetime. But the most famous element of special relativity, as famous as the man himself, was absent from the first paper.

Mass-energy equivalence, represented by E=mc², would be introduced in a later paper published in November 1905. And just as Einstein had already unified space and time, this paper would unite energy and mass.

So what does mass-energy equivalence tell us, and what is the equation E=mc² saying about the Universe?

The Basics

If you wanted to walk away from this article with just one piece of information about the equation E=mc² (and I hope you won’t), what would that be?

Essentially, this simplified equation of special relativity tells us that mass and energy are different forms of the same thing: mass is a form of energy. Probably the second most important takeaway is that these two aspects of the Universe are interchangeable, and the conversion factor between them is the speed of light squared.

Still with us? Good!

Perhaps the most surprising thing about the equation E=mc² is how deceptively simple it is for something so profound, especially considering it is the equation that describes how stars release energy and thus make all life possible. Mathematical formulae don’t get much more foundational.

Gathering Momentum: Where Does the Mass-Energy Equivalence Come From?

There are actually a few ways of considering the origin of E=mc². One is to look at how the relationship it describes emerges when comparing the relativistic equation for momentum with its Newtonian counterpart. The major difference between the two, as you’ll see below, is multiplication by the Lorentz factor; you might remember from the last part of this guide to special relativity, concerning space and time, that I told you it gets everywhere in special relativity!
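The two momentum expressions being compared are standard results, reproduced here since the original equation graphics did not survive:

```latex
p_{\text{Newtonian}} = m v, \qquad
p_{\text{relativistic}} = \gamma m v, \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```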

You could argue that the only difference between the two is that velocity (v) has been replaced with a more complex counterpart, one that approaches v at speeds far less than that of light (the everyday speeds we see objects around us moving at). But some physicists find this more significant than a mere substitution.

These scientists would argue that this new factor ‘belongs’ to the mass of the system in question. On this view, mass increases as velocity increases, and there is a discernible difference between an object’s ‘moving mass’ and its ‘rest mass.’

So, let’s look at that equation for momentum again with the idea of rest mass included.

So, if mass is increasing as velocity increases, what is responsible for this rise?

Let’s conduct an experiment to find out. Our lab bench is the 2-mile long linear particle accelerator at SLAC National Laboratory, California. Using powerful electromagnetic forces, we take electrons and accelerate them to near the speed of light. When the electrons emerge at the other end of the accelerator we find that their relativistic mass has increased by a staggering factor of 40,000.
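That factor of 40,000 is just the Lorentz factor, which equals the ratio of a particle’s total energy to its rest energy. A quick check, assuming a round beam energy of 20 GeV for the SLAC linac:

```python
E_beam = 20e9      # beam energy in eV (an assumed round figure for the SLAC linac)
m_e_c2 = 0.511e6   # electron rest energy in eV

# relativistic mass increase factor: gamma = total energy / rest energy
gamma = E_beam / m_e_c2
print(f"relativistic mass increase: x{gamma:,.0f}")
```

The result is about 39,000, matching the staggering factor quoted above.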

As the electrons slow, they lose this mass again. Thus, we can see it’s the addition of kinetic energy to the object that is increasing its mass. That gives us a good hint that energy and mass are interconnected.

But, this conclusion leads to an interesting question; if the energy of motion is associated with an object’s mass when it is moving, is there energy associated with the object’s mass when it is at rest, and what kind of energy could this be?

Locked-Up Energy

An object at rest, with no kinetic energy at all, can, through the transformation of an infinitesimally small amount of its mass, provide enough energy to power the stars.

As the equation E=mc² implies, with the speed of light squared being an extremely large number, just a little mass goes a very long way in terms of energy. To demonstrate this, let’s see how much energy would be released if you could completely transform the rest mass of a single grain of sugar.
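For the comparison that follows to work out, the ‘grain’ has to weigh in at roughly a gram — closer to a small sugar cube than a single crystal, which is nearer a milligram. A rough check under that assumption, taking Little Boy’s yield as about 15 kilotons of TNT:

```python
c = 2.998e8                  # speed of light, m/s
m = 1e-3                     # assumed mass converted: ~1 gram of sugar, in kg

E = m * c**2                 # rest energy released, joules (~9e13 J)
little_boy = 15e3 * 4.184e9  # ~15 kilotons of TNT, in joules (~6.3e13 J)

print(E / little_boy)        # same order of magnitude
```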

That’s a lot of energy!

In fact, it is roughly equivalent to the amount of energy released by ‘Little Boy’, the nuclear fission bomb that devastated Hiroshima on 6 August 1945.

That means that even when an object is at standstill it has energy associated with it. A lot of energy.

As you might have guessed by this point, as energy and mass are closely associated and there are many forms of energy there are also many ways to give an object increased mass. Heating a metal rod, for example, increases the rod’s mass, but by such a small amount that it goes unnoticed. Just as liberating a tiny bit of mass releases a tremendous amount of energy, adding a relatively small amount of heat energy results in an insignificant mass increase.

We’ve already seen that we can accelerate a particle and increase its relativistic mass, but is there anything we can do to increase a system’s rest mass?

E=mc²: Breaking the Law (and Billiard Balls)!

Until the advent of special relativity, two laws in particular had governed the physics of collisions, explosions, and all that cool violent stuff: the conservation of mass and the conservation of energy. Special relativity challenged this, suggesting that neither mass nor energy is conserved on its own; rather, it is the total relativistic energy of the system that is conserved.

Let’s do another experiment to test these ideas… The first location we’ll travel to in order to do this… a billiard table at the Dog & Duck pub, London.

At the billiard table, we strike a 0.17 kg billiard ball toward a stationary billiard ball of the same mass at around 2 metres per second. We hit the first ball perfectly straight on, so that all of its kinetic energy is transferred to the second ball.

If we could measure the kinetic energy of the initial ball, then measure the kinetic energy of both balls after the collision, we would find that–accounting for the small losses of energy to heat and sound–the total energy of the system after the collision is the same as the energy before the collision.
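The bookkeeping for this collision can be sketched in a few lines (idealized: perfectly elastic, ignoring the small losses to heat and sound):

```python
m = 0.17   # mass of each billiard ball, kg
v = 2.0    # cue-ball speed, m/s

# before the collision: only the struck ball moves
ke_before = 0.5 * m * v**2

# a perfectly head-on elastic collision between equal masses
# simply swaps the velocities of the two balls
v1, v2 = 0.0, v
ke_after = 0.5 * m * v1**2 + 0.5 * m * v2**2

print(ke_before, ke_after)   # identical: kinetic energy is conserved
```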

That’s the conservation of energy.

Let’s rerun that experiment, but this time we launch the billiard ball so hard that instead of knocking the target ball across the table, it shatters it. Collecting the fragments of the shattered ball and remeasuring the mass of the system, we would find the final mass is exactly the same as the initial mass.

And that’s the conservation of mass.

We’re starting to get funny looks from the Dog & Duck regulars now, and the landlord looks angry about the destruction of one of his billiard balls. Luckily, the third part of our test requires we relocate to CERN, Geneva. So we down our drinks, grab our coats and hurry out the door.

Trying the experiment a third time, we are going to replace the billiard table with the Large Hadron Collider (LHC)–that’s some upgrade– and the billiard balls with electrons and their equal rest mass anti-particles– positrons.

Using powerful magnets to feed these fundamental particles with kinetic energy we accelerate them to near light speed, directing them towards each other and colliding them. The result is a shower of particles that previously weren’t present. But, unlike in our billiard ball example, when we measure the rest mass of the system it has not remained the same.

Just one of the particles we observe after the collision event is a neutral pion–a particle with a rest mass 264 times the rest mass of an electron and thus 132 times the initial rest mass we began with.

Clearly, the creation of this pion has taken some of the kinetic energy we poured into the electrons and converted it into rest mass. Charged pions produced in such collisions decay into muons, each with a rest mass around 207 times that of an electron, and these in turn decay into particles that are lighter still; the neutral pion decays directly into a pair of photons. Each decay releases energy in the form of light and fast-moving particles.
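The rest-mass bookkeeping above is easy to verify, taking the electron rest energy as 0.511 MeV:

```python
m_e = 0.511          # electron rest energy, MeV
m_pion = 264 * m_e   # neutral pion: ~134.9 MeV (measured value: 134.98 MeV)
initial = 2 * m_e    # electron + positron rest energy going into the collision

print(m_pion / initial)   # 132: the factor quoted above
```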

Relativistic Energy vs. Rest Energy

By now it is probably clear that in special relativity rest mass and relativistic mass are very different concepts, which means that it shouldn’t come as too much of a surprise that rest energy and relativistic energy are also separate things.

Let’s alter that initial infographic to reflect the fact that the equation E=mc² actually describes rest energy.

This raises the question (if I’m doing this right, that is): what is the equation for relativistic energy?

It’s time for another non-surprise. The equation for relativistic energy is just the equation for rest energy with that Lorentz factor playing a role.
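Explicitly, with the same Lorentz factor as before:

```latex
E_{\text{rest}} = mc^2, \qquad
E = \gamma m c^2, \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

For speeds much less than c, expanding the Lorentz factor gives E ≈ mc² + ½mv²: the rest energy plus the familiar Newtonian kinetic energy.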

Ultimately, it is this relativistic energy that is conserved. Thus, whilst we’ve sacrificed the earlier ideas of the conservation of mass and the conservation of energy, we’ve recovered a relativistic version of those laws.

Of course, the presence of that Lorentz factor tells us that when speeds are nowhere near that of light (everyday speeds, like those of the billiard balls in the Dog & Duck), the laws of conservation of mass and energy are sufficient to describe these low-energy systems.

The Consequences of E=mc²

It’s hard to talk about mass-energy equivalence, or E=mc², without touching upon the nuclear weapons that devastated Hiroshima and Nagasaki at the close of the Second World War.

It’s an unfortunate and cruel irony that Einstein–a man who was a staunch pacifist during his lifetime–has his name eternally connected to the ultimate embodiment of the most destructive elements of human nature.

The Sun photographed at 304 angstroms by the Atmospheric Imaging Assembly (AIA 304) of NASA's Solar Dynamics Observatory (SDO). This is a false-color image of the Sun observed in the extreme ultraviolet region of the spectrum. (NASA)

Nuclear radiation had been discovered at least a decade before Einstein unveiled special relativity, but scientists had struggled to explain exactly where that energy was coming from.

That is because, as rearranging E=mc² implies, the release of energy in radioactive decay corresponds to the loss of an almost infinitesimally small amount of rest mass, certainly immeasurable at the time of its discovery.

Of course, as mentioned above, we now understand that small conversion of rest mass into energy to be the phenomenon that powers the stars. Every second, our own star, the Sun, takes roughly 600 million tonnes of hydrogen and converts it into 596 million tonnes of helium, releasing the difference in rest mass as around 4 × 10²⁶ joules of energy.
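The Sun’s numbers are often quoted without the ‘million’: it is roughly 600 million tonnes of hydrogen fusing into roughly 596 million tonnes of helium each second, and plugging the ~4-million-tonne-per-second mass deficit into E=mc² recovers the quoted power:

```python
c = 2.998e8     # speed of light, m/s
dm = 4.0e9      # mass converted to energy per second: ~4 million tonnes, in kg

P = dm * c**2
print(f"{P:.1e} joules released per second")   # ~3.6e26 J, i.e. around 4e26 J
```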

We’ve also harnessed the mass-energy equivalence to power our homes via nuclear power plants, as well as using it to unleash a terrifying embodiment of death and destruction into our collective imaginations.

We could probably ruminate more about special relativity and its elements, as its importance to modern physics simply cannot be overstated. But, Einstein wasn’t done.

Thinking about spacetime, energy and mass had opened a door and started Einstein on an intellectual journey that would take a decade to complete.

The great physicist saw special relativity as a great theory to explain physics in an empty region of space, but what if that region is occupied by a planet or a star? In those ‘general’ circumstances, a new theory would be needed. And in 1915, this need would lead Einstein to his greatest and most inspirational theory–the geometric theory of gravity, better known as general relativity.

Sources and Further Reading

Stannard, R., ‘Relativity: A Short Introduction,’ Oxford University Press, 2008.

Lambourne, R. J., ‘Relativity, Gravitation and Cosmology,’ Cambridge University Press, 2010.

Cheng, T-P., ‘Relativity, Gravitation and Cosmology,’ Oxford University Press, 2005.

Fischer, K., ‘Relativity for Everyone,’ Springer, 2015.

Takeuchi, T., ‘An Illustrated Guide to Relativity,’ Cambridge University Press, 2010.


Researchers spot the first coronal mass ejection outside our solar system — it was massive

An international team of researchers has managed to identify the first coronal mass ejection, or CME, in a star other than our Sun.

Giant CME.

Image credits NASA / GSFC.

An intense flash of X-rays, followed by the bursting forth of an immense bubble of plasma — that’s what researchers led by Costanza Argiroffi, a researcher at the University of Palermo and associate researcher at the National Institute for Astrophysics in Italy, have seen in the corona of HR 9024, an active star about 450 light-years away from us. This is the first CME ever spotted in a star outside our solar system.

The findings help us better understand how CMEs fit into the lives of active stars across the Universe and will help us systematically study such dramatic events in the future.

Starburst

“The technique we used is based on monitoring the velocity of plasmas during a stellar flare,” said Costanza Argiroffi. “This is because, in analogy with the solar environment, it is expected that, during a flare, the plasma confined in the coronal loop where the flare takes place moves first upward, and then downwards reaching the lower layers of the stellar atmosphere.”

“Moreover, there is also expected to be an additional motion, always directed upwards, due to the CME associated with the flare.”

The team used data collected by NASA’s Chandra X-ray Observatory to analyze a “particularly-favorable” flare, according to a Chandra Observatory press release. Solar flares are sudden, quite violent events, during which a star’s brightness increases substantially. Flares are sometimes, but not always, associated with CMEs.

The High-Energy Transmission Grating Spectrometer (HETGS) aboard Chandra is the only instrument we have at our disposal so far that can measure the movement of matter involved in CMEs. CMEs involve the expulsion of plasma — very hot, electrically-charged gas — from a star’s corona (its outer atmosphere), at speeds that can reach millions of miles per hour.

The results confirm that CMEs are only produced in magnetically-active stars. The findings also support the validity of what we know about CMEs so far: for example, that the material involved in a flare is very, very hot (from 18 to 45 million degrees Fahrenheit), and that it first rises and then drops at speeds between 225,000 and 900,000 miles per hour.

“This result, never achieved before, confirms that our understanding of the main phenomena that occur in flares is solid,” said Argiroffi. “We were not so confident that our predictions could match in such a way with observations, because our understanding of flares is based almost completely on observations of the solar environment, where the most extreme flares are even a hundred thousand times less intense in the X-radiation emitted.”

The “most important” discovery, however, is that after the flare a body of much cooler plasma (of around 7 million degrees Fahrenheit) rises from the star’s body with “a constant speed of about 185,000 miles per hour,” adds Argiroffi. Such a result is “exactly what one would have expected for the CME associated with the flare.”

The team adds that, based on Chandra’s readings, the mass of the CME in question was roughly two billion pounds. This would make it about ten thousand times as massive as the largest CMEs put out by the Sun. This last tidbit reinforces the idea that more magnetically active stars generate larger-scale versions of solar CMEs.

“The observed speed of the CME, however, is significantly lower than expected. This suggests that the magnetic field in the active stars is probably less efficient in accelerating CMEs than the solar magnetic field,” Argiroffi concludes.

The paper “A stellar flare−coronal mass ejection event revealed by X-ray plasma motions” has been published in the journal Nature.


The Universe’s densest stars have a maximum mass limit, researchers find

Researchers from the Goethe University in Frankfurt have refined our understanding of neutron stars by calculating the hard limit for their mass: these extreme stellar bodies cannot exceed 2.16 solar masses.

Neutron star.

Image credits Kevin Gill / Flickr.

Neutron stars are one of the most extreme displays of matter around. They’re the naked cores of massive stars, compressed into pure matter in their death throes moments before a supernova detonates. Neutron stars aren’t made of regular atoms (which are over 99.999% empty space); rather, they resemble one huge atomic nucleus. True to their name, neutron stars are incandescent bodies of neutrons packed side by side.

In many ways, neutron stars are the closest matter can get to a black hole without space-time collapsing around it. Which also raises an interesting question — how massive can these stars actually become?

Weight-watching

With radii that generally fall under 12 kilometers (7.45 miles) but masses that can be twice as great as that of our Sun, neutron stars produce gravitational fields comparable to those of black holes. Unlike their black-hole brethren, however, neutron stars can’t grow indefinitely. They are so immensely dense that almost no force in nature can withstand their gravitational pull. So, the logic goes, if they become massive enough, that same pull will overcome the neutrons’ ability to resist it. By the same train of thought, there should be a point beyond which the addition of even a single neutron will send the neutron star collapsing into a black hole.

Researchers have been trying to determine that exact point ever since neutron stars were first discovered in the 1960s — a question which they’ve only managed to answer now, as astrophysicists at the Goethe University Frankfurt have successfully calculated the strict upper limit for a neutron star’s maximum mass.

With an accuracy within a few percentage points, the maximum mass of non-rotating neutron stars cannot exceed 2.16 solar masses, the team reports.

 

The result was based on the “universal relations” approach developed in Frankfurt a few years ago. In broad strokes, these relations say that since all neutron stars “look alike”, their properties can be expressed in terms of dimensionless quantities. The next piece of the puzzle was supplied by the LIGO experiment, in the form of data on the gravitational-wave signals and the subsequent electromagnetic radiation discharge (kilonova) recorded last year during the merging of two neutron stars.

The LIGO data was instrumental in solving the problem as they allowed the team to decouple the calculations from the equation of state — a model we use to describe matter and its composition at various depths in a star.

“The beauty of theoretical research is that it can make predictions,” says Professor Luciano Rezzolla, the paper’s first author. “Theory, however, desperately needs experiments to narrow down some of its uncertainties.”

“It’s therefore quite remarkable that the observation of a single binary neutron star merger that occurred millions of light years away combined with the universal relations discovered through our theoretical work have allowed us to solve a riddle that has seen so much speculation in the past.”

The results were published in a Letter titled “Using Gravitational-wave Observations and Quasi-universal Relations to Constrain the Maximum Mass of Neutron Stars” in The Astrophysical Journal. They were confirmed a few days after publication by groups from the USA and Japan who followed different and independent approaches.

What is mass? Baby don’t weigh me – revamping the metrology of mass

The metric system is due for a mass makeover, as scientists are preparing to redefine four basic units by the end of 2018 in an effort to provide accurate measurements at all scales.

The shift will most notably affect the kilogram, the base measure of mass and the last member of the International System of Units still defined by a physical object. Current efforts are under way to check and fine-tune measurements of fundamental natural quantities — such as Avogadro’s number — for use in giving the kilogram a new mathematical definition.

The kilogram standard.
Image via itsoktobesmart

How do we define a kilogram, and how will this change?

Since 1889, the standard for mass has been a 1-kilogram cylinder of platinum and iridium metal at the Bureau International des Poids et Mesures in Sèvres, France. While this standard is handled carefully, it’s at risk of becoming dirty or damaged, says Michael Stock, a physicist at the French bureau.

“Any material object can change over time,” he says.

“It’s also hard to accurately scale this physical standard down to very small masses, like those of electrons,” added physicist David Newell of the National Institute of Standards and Technology (NIST) in Gaithersburg, Md.

Scientists aim to give the kilogram a new definition based on nature’s fundamental physical constants. This task requires a highly accurate measurement of Planck’s constant, which links energy and frequency. Planck’s constant can be used to measure and describe mass, as the two are mathematically linked through another natural constant, the speed of light.

The American kilogram standard.
image via nist.gov

Researchers are using the existing physical definition of a kilogram to measure Planck’s constant as accurately as possible. Then, this value can be set in stone and used to define mass in the future.

With devices known as watt balances, scientists can measure Planck’s constant directly using precisely known standards of mass and electrical current. Once Planck’s constant has been fixed, watt balances will then use Planck’s constant to calculate unknown mass.

A watt balance. And a dude.
Image credits Robert Rathe
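The principle of the watt balance can be sketched in a few lines. In the weighing phase, the force on a current-carrying coil balances the weight (mg = BLI); in the moving phase, the same coil generates a voltage as it moves through the field (U = BLv). Eliminating the geometry factor BL leaves m·g·v = U·I, tying mass to purely electrical quantities. The numbers below are illustrative, not real instrument readings:

```python
U = 1.0e-2   # induced voltage in the moving phase, V (illustrative)
I = 1.962    # balancing current in the weighing phase, A (illustrative)
g = 9.81     # local gravitational acceleration, m/s^2
v = 2.0e-3   # coil velocity in the moving phase, m/s

# eliminate the coil geometry: m * g * v = U * I
m = U * I / (g * v)
print(f"inferred mass: {m:.6f} kg")
```

In practice, the electrical quantities are themselves measured via quantum effects that involve Planck’s constant, which is what makes the balance a bridge between mass and h.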

In another approach, scientists count the number of atoms in extremely pure 1-kilogram silicon spheres. This method determines the number of atoms in a kilogram, which could be used to define the unit of mass. The technique also allows scientists to calculate a different fundamental value, the Avogadro constant (or Avogadro’s number). This constant describes the number of units per mole, roughly 6.02 × 10²³; the mole is the metric unit for the amount of a substance. (A mole of a substance is the amount whose mass, in grams, equals its atomic or molecular weight.) A precise Avogadro constant can be used to calculate and confirm Planck’s constant.
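The atom-counting logic, in miniature. Assuming an isotopically pure silicon-28 sphere (molar mass about 27.977 g/mol) and the modern value of the Avogadro constant:

```python
N_A = 6.02214076e23   # Avogadro constant, atoms per mole
M = 27.9769           # molar mass of silicon-28, g/mol

mass_g = 1000.0              # a 1 kg sphere
n_atoms = mass_g / M * N_A   # ~2.15e25 atoms in a kilogram of Si-28

print(f"{n_atoms:.3e} atoms")
```

Run in reverse — counting the atoms via the sphere’s volume and crystal lattice spacing — the same relation yields the Avogadro constant from a measured mass.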

When the new value and its uncertainty are averaged with previous calculations, the Avogadro constant comes out to 6.02214082 × 10²³ per mole, with an uncertainty of 18 parts per billion, scientists report July 14 in the Journal of Physical and Chemical Reference Data. This number is just slightly smaller than the value of the constant currently listed by NIST: 6.022140857 × 10²³ per mole.

The watt balance and atom-counting techniques now give a nearly identical value of Planck’s constant, currently given by NIST as 6.6260704 × 10⁻³⁴ joule-seconds, with an uncertainty of under 20 parts per billion, says metrologist Ian Robinson of the National Physical Laboratory in Teddington, England. Further measurements are still under way.

Where is the metric system headed?

In fall 2018, international delegates at a meeting of the General Conference on Weights and Measures will decide whether or not to approve the kilogram’s new definition. Based on existing plans, many believe the redefinition will happen at this time, Stock says, though nothing is guaranteed.

Because researchers’ careful calculations have accounted for the existing definition of mass, the redefinition should cause no perceptible shift in measurement.

“If we do our jobs right, nobody’s going to notice a thing,” Newell says. But future mass measurements should become stable, Robinson says.

While redefining the kilogram will be the most critical change ahead, Stock says, scientists also hope to redefine other units, including the mole and the kelvin, which measures temperature. These redefinitions will depend on fixing other constants, including the Avogadro constant. Making all of these changes at once will limit the number of times textbooks must be changed, Stock says.

The redefinitions won’t mark an end to the quest for a perfect metric system, Newell says.

“Metrologists are going to make the measurement exactly right. And the corollary is, they never finish their measurement.”


Dropping weights in space to test Einstein’s general relativity

Extraordinaire experimental physicist Galileo Galilei allegedly climbed the hundreds of steps to the top of the Leaning Tower of Pisa – which didn't lean quite as much then as it does today – and dropped pairs of balls of different weights and materials to the ground. The experiment was meant to prove, in front of the crowd of scholars and students gathered below, that all objects fall with the same acceleration regardless of their mass, whether they're made of wood or lead. If there were no air resistance, a feather and a cannonball would reach the ground at the same time. It's unclear whether the whole story is merely a legend, but needless to say it's an inspiring anecdote. Now, Galileo's experiment will be adapted in ways the great classical physicist could never have imagined, as the ultimate test of one of general relativity's key assumptions.

galileo pisa

Image: North Country Public Radio

Called the Drag-Compensated Micro-Satellite for the Observation of the Equivalence Principle (MicroSCOPE), the experiment will carry two free-floating weights of different materials and monitor whether one feels a stronger tug from Earth's gravity than the other. If that were to happen, it would violate the equivalence principle, which posits that inertial mass and gravitational mass are one and the same, independent of material composition. The equivalence principle is a key assumption Einstein made in drafting his theory of general relativity, which states that acceleration and gravity are essentially the same thing.

One mass

There are actually several types of mass. The kind that corresponds most closely to our intuitive sense of mass is inertial mass, which describes resistance to acceleration. If you push two objects of different inertial masses with the same force, the one with less inertial mass will accelerate more: a two-tonne truck resists a push far more than a wheelchair does. Another type of mass is gravitational mass, which (in Newton's theory of gravity) causes the gravitational attraction between objects. When you step on a scale in the morning, you are measuring your gravitational mass. The third type is relativistic mass. This stems from Einstein's theory of special relativity and the equivalence of mass and energy (the famous E = mc²). In that famous equation, E is the energy of a particle and c is the speed of light. So if you divide the energy of a particle by the speed of light squared, you get a "mass", known as the relativistic mass of the particle.
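That last definition makes relativistic mass a one-line calculation. A minimal sketch – the electron rest energy used as input is a standard reference value, not a figure from the article:

```python
# Relativistic mass from energy, via E = m c^2  =>  m = E / c^2.
C = 299_792_458.0  # speed of light in m/s (exact by definition)

def relativistic_mass(energy_joules: float) -> float:
    """Divide a particle's energy by c squared to get its mass in kg."""
    return energy_joules / C**2

# Example: the electron's rest energy (~8.187e-14 J, i.e. ~511 keV)
electron_rest_energy = 8.18710565e-14  # joules
print(relativistic_mass(electron_rest_energy))  # ~9.109e-31 kg
```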

MicroSCOPE satellite

MicroSCOPE satellite. Image: CNES/DAVID DUCROSS, 2012

General relativity isn't completely reconciled with quantum mechanics – the physics of the small scale – and some observed phenomena in quantum mechanics can't be squared with the equivalence principle. The team behind MicroSCOPE therefore wants to test the equivalence principle with unprecedented precision, because even a minute difference between the two masses would mean the principle does not apply in all cases. Spotting a violation would definitely mean there is some sort of physics beyond Einstein's theory, which might help physicists make a breakthrough. A null result could prove equally useful, since physicists could finally stop worrying about whether the equivalence holds.

Of course, the mass equivalence has gone through countless tests and experiments – not one has found a violation. The most precise experiment to date was carried out by Eric Adelberger, a physicist at the University of Washington, Seattle, and colleagues in the Eöt-Wash Group, named after the 19th-century Hungarian physicist Loránd Eötvös, who pioneered the method used by the group. According to Science author Adrian Cho:

“Eötvös used a small dumbbell of weights of different materials suspended horizontally from a thin fiber. Gravity pulls each weight toward the center of Earth. But Earth also spins, so the inertia of the weights creates a tiny centrifugal force that flings them away from the planet’s axis. The sum of the two forces, which align only at the equator, defines the direction “down” for each weight. If the equivalence principle holds, then the centrifugal force on each weight is locked into proportion to the gravitational one, so down is the same for both weights. Then, the dumbbell will rest pointing in any direction.

But if inertial and gravitational mass are different, then the flinging will affect the weights differently and the net force on each one will point in a slightly different direction. “If the equivalence principle is violated, then every material has its own down,” Adelberger says. That difference would cause the dumbbell to twist toward a particular orientation. In 1889, Eötvös saw no such sign and confirmed the equivalence principle to one part in 20 million.”

Of course, today the experiment is a lot more refined – Eöt-Wash researchers have been constantly tweaking the method for the past 25 years. Their current rig consists not of a dumbbell but of a nearly cylindrical shell studded on either side with weights of different materials. Likewise, instead of looking for a static twist, the whole rig rotates, and the scientists look for a periodic twisting of the cylinder. Using beryllium and titanium, they found gravitational and inertial mass equal to one part in 10 trillion, as they reported in Physical Review Letters in 2008. MicroSCOPE will test the equivalence principle to one part in a quadrillion.

The MicroSCOPE satellite will host two cylindrical shells inside: one the size of a toilet paper roll and made of titanium and a smaller one inside it made of platinum-rhodium. If the equivalence principle holds, both will glide on precisely the same orbit. If not, one should slip Earth-ward relative to the other.
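For a sense of what "one part in a quadrillion" means here: equivalence-principle tests are usually scored with the Eötvös parameter, the fractional difference between the free-fall accelerations of the two test bodies. A minimal sketch – the function is the standard definition, but the orbital acceleration value below is a rough assumption, not mission data:

```python
# Eotvos parameter: fractional difference between two accelerations.
# A nonzero value would signal an equivalence-principle violation.
def eotvos_parameter(a1: float, a2: float) -> float:
    """Return 2|a1 - a2| / (a1 + a2), the standard figure of merit."""
    return 2.0 * abs(a1 - a2) / (a1 + a2)

g = 7.9           # m/s^2, rough free-fall acceleration at a ~700 km orbit
delta = g * 1e-15 # a hypothetical violation at one part in a quadrillion
print(eotvos_parameter(g, g + delta))  # ~1e-15
```

A parameter of 1e-15 is the level MicroSCOPE aims to probe – a hundred times finer than the torsion-balance result above.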

According to models made by the MicroSCOPE researchers, the probe has a chance of seeing a strong signal, meaning it might show a violation of the equivalence principle. But nothing's certain until the €200 million mission reaches Earth's orbit in April 2016 – and not even then.


Mass of the electron re-weighed to a precision of parts per trillion

Typically in physics, your calculations are only as precise as the constants you use: a skewed value for a constant will affect every computation it enters. Today, all the important physical constants are known rather precisely, whether we're talking about the speed of light or the tau mass. For most real-life applications, you don't really need to work with figures that are correct down to the 19th digit. Some work, however, requires the most precise measurement possible.

Recently, German scientists published a paper in Nature in which they detail the methodology they used to perform the most precise measurement of an electron's mass to date, down to parts per trillion. To this end, the researchers used a complex method that relies on tightening the measurement of the other constants involved in deriving the electron's mass.

A Penning Trap – not the actual one used in the research. Photo: mpi-hd.mpg.de


Ever since scientists first began to tinker with the concepts of molecules, atoms, electrons, neutrons and so forth, there has been a need to measure particle masses precisely. For most applications you don't need an exhaustive measurement, but in the age of high-energy particle physics this need has never been more important – and considering how very little an electron weighs, it's a really tough job.

The team led by Sven Sturm of the Max Planck Institute for Nuclear Physics in Heidelberg first bound an electron to a reference ion – a hydrogen-like carbon nucleus, stripped down to a single electron, whose mass is well known. Then, using a Penning trap apparatus, they set the ion-electron pair in motion around a circular path, steered by both magnetic and electric fields. The scientists first measured the frequency of the ion-electron system, then just that of the electron. Following a complicated QED calculation – which took into account this frequency ratio, the mass of the ion, the ratio between the electron and ion charges, and a refined value for the g-factor (more on this here) – the team ended up with the most precise value for the electron's mass so far.

According to the calculation, which factors in both statistical and experimental uncertainties, the electron weighs 0.000548579909067 atomic mass units – the standard unit for atomic-scale masses. This marks a 13-fold improvement in measurement accuracy.
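To relate that figure to everyday units, you multiply by the atomic mass constant. A quick sketch – the kilograms-per-u conversion factor below is the CODATA reference value, an assumption not quoted in the article:

```python
# Convert the measured electron mass from atomic mass units (u) to kg.
ATOMIC_MASS_UNIT_KG = 1.66053906660e-27  # kg per u (CODATA value, assumed)

electron_mass_u = 0.000548579909067      # the measurement quoted above
electron_mass_kg = electron_mass_u * ATOMIC_MASS_UNIT_KG
print(f"{electron_mass_kg:.6e} kg")      # ~9.1094e-31 kg
```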


2013 Nobel prize in physics awarded to ‘God particle’ scientists: Peter Higgs and Francois Englert

Higgs Englert

Francois Englert (left) and Peter Higgs (right)

On Tuesday, the Royal Swedish Academy of Sciences awarded this year's Nobel Prize in Physics to Francois Englert and Peter Higgs for their 1964 postulation of the existence of the Higgs boson. The elementary particle was finally confirmed in 2012 by a team of international researchers using the Large Hadron Collider at CERN.

The July 2012 discovery of the particle at the most powerful particle accelerator in the world, the Large Hadron Collider near Geneva, Switzerland, has been billed as one of the biggest scientific achievements of the last 50 years. The Higgs boson, sometimes referred to as the God particle, is thought to be the elementary particle responsible for granting all matter its mass – which makes it obvious just how monumental the discovery is.

But why not last year? In 2012 everybody expected Englert and Higgs to win the physics prize, but the award instead went to two scientists (Haroche and Wineland) for their work with light and matter, which may lead the way to superfast quantum computing and the most precise clocks ever seen. The Royal Swedish Academy of Sciences often steers away from scientific premieres, opting instead for more mature research. This year, however, it was clear that Englert and Higgs shouldn't be missed.

Swedish industrialist Alfred Nobel created the prizes in 1895 to honor work in physics, chemistry, literature and peace. Since 1901, the committee has handed out the Nobel Prize in Physics 106 times. The youngest recipient was Lawrence Bragg, who won in 1915 at the age of 25. For the 2013 awards, the Nobel Prize in Physiology or Medicine has so far been announced: James E Rothman, Randy W Schekman and Thomas C Südhof, for their work on the mechanism that controls the transport of membrane-bound parcels, or 'vesicles', through cells.

How much does a kilogram weigh? The struggle of keeping standardized mass constant

international prototype kilogram

One of the international prototype kilograms.

Since 1889, the world has used the International Prototype Kilogram (IPK) – a cylindrical chunk of metal the size of a matchbox, stored in a French vault – as the standard for measuring one unit of mass. Some 40 replicas were made and shipped to countries throughout the world so that an international standard could be put in place. The thing with standards is that for them to work, everything from workflow to operations to units needs to be uniform everywhere. Over more than a century, however, the prototype kilograms have gained a tad of weight – something that simply won't do.

A team of scientists at Newcastle University has proposed a washing protocol for all mass standards, to clean off the impurities that cause them to carry extra weight and to prevent any further gain in mass. What's the current discrepancy? Apparently, the original kilogram is now about 50 micrograms lighter than its brethren. This might seem extremely negligible, but when you factor in the world's entire mass-measurement operations, every billionth – even trillionth – of a gram counts.

The kilogram prototypes are forged from platinum and iridium, and despite being treated regularly, they nonetheless become contaminated, predominantly with hydrocarbons that build up on the surface of the metal. The present treatment relies on hand washing, which is far from uniform. Peter Cumpson, a metrologist at Newcastle, and his colleagues have spent the past two decades devising a method that can be applied internationally and uniformly to clean the prototypes and keep the relative error between replicas as small as possible.

Their method involves exposing the metal chunks to ultraviolet light and ozone about once per decade instead of washing them, breaking the hydrocarbons' bonds to the metal.

“It doesn’t really matter what it weighs as long as we are all working to the same exact standard, the problem is there are slight differences.”

“Around the world, the IPK and its 40 replicas are all growing at different rates, diverging from the original.

“We’re only talking about a very small change – less than 100 micrograms – so, unfortunately, we can’t all take a couple of kilograms off our weight and pretend the Christmas overindulgence never happened.

“But mass is such a fundamental unit that even this very small change is significant and the impact of a slight variation on a global scale is absolutely huge.

“There are cases of international trade in high-value materials – or waste – where every last microgram must be accounted for.

“What we have done at Newcastle is effectively give these surfaces a suntan. By exposing the surface to a mixture of UV and ozone we can remove the carbonaceous contamination and potentially bring prototype kilograms back to their ideal weight,” said Cumpson.

The method has so far rendered good results on test metals such as gold, and has a good chance of being applied to the actual standards soon. The researchers are also investigating alternative storage media for the standards to minimize exposure as much as possible, including vacuums and pure, flowing argon or nitrogen atmospheres.

Still, it's rather surprising that even now, in the 21st century, the world has to rely on chunks of metal to standardize mass. Timekeeping, for instance – another extremely important standard – is now handled by atomic clocks that lose only a fraction of a second in 14 billion years.

Presently, researchers are considering using fundamental natural constants like Avogadro's or Planck's to standardize mass, building spheres of pure silicon with a fixed number of atoms instead of using a chunk of metal with a variable number of atoms.

Findings were published in the journal Metrologia.

[via Wired]