A new study modeled the dynamics and evolution of some of the largest known structures in the universe.
Let’s take a moment to look at our position in the universe.
We live in a solar system orbiting the center of the Milky Way galaxy — which itself lies in the Local Group of galaxies, neighboring the Local Void, a vast region of space with fewer galaxies than expected. Wait, we’re not done yet. These structures are part of a larger region encompassing thousands of galaxies: a supercluster called Laniakea, which is around 520 million light-years across.
A group of researchers has now simulated the movement of galaxies in Laniakea and other clusters of galaxies, starting when the universe was in its infancy (just 1.6 million years old) and running until today. They used observations from the Two Micron All-Sky Survey (2MASS) and the Cosmicflows-3 catalog as the starting point for their study. With these two datasets, they looked at galaxies orbiting massive regions at velocities of up to 8,000 km/s — and made videos describing those orbits.
Because the universe is expanding and that expansion influences the evolution of these superclusters, we first need to know how fast the universe is expanding — which has proven very difficult to pin down. So the team considered several plausible expansion scenarios to model the clusters’ motion.
Besides Laniakea, the scientists report two other zones where galaxies appear to be flowing toward a common gravitational basin: Perseus-Pisces (a supercluster some 250 million light-years across) and the Great Wall (a structure about 1.37 billion light-years across). In the Laniakea region, galaxies flow towards the Great Attractor, a very dense part of the supercluster. The other superclusters show similar patterns; the Perseus-Pisces galaxies, for instance, flow towards the spine of the cluster’s large filament.
The researchers even predicted the future of these galaxies, estimating their paths roughly 10 billion years into the future. Their videos make it clear that the expansion of the universe dominates the big picture, while in smaller, denser regions gravitational attraction prevails — as in the future of Milkomeda, the eventual merger of the Milky Way and Andromeda in the Local Group.
Most large-scale simulations target specific processes, such as star formation, galaxy mergers, events in our solar system, the climate, and so on. These aren’t easy to simulate at all — they are complex physical phenomena that are hard to capture in full detail on a computer.
To make it even more complicated, there is also randomness involved. Even something simple like a glass of water is not exactly simple. For starters, it’s never pure water: it contains minerals like sodium and potassium, varying amounts of dissolved air, maybe a bit of dust — and if you want a model of the glass of water to be accurate, you need to account for all of those. Yet no two glasses of water contain exactly the same amounts of minerals, so computer simulations must do their best to estimate the chaos within a phenomenon. The more complexity you add, the longer the simulation takes to complete and the more processing power and memory it needs.
So how could you even go about simulating the universe itself? Well, first of all, you need a good theory to explain how the universe formed. Luckily, we have one — though that doesn’t mean it’s perfect or that we are 100% sure it’s correct. We still don’t know how fast the universe expands, for example.
Next, you add all the ingredients at the right moment, on the right scale – dark matter and regular matter team up to form galaxies when the universe was around 200-500 million years old.
Scientists build universe simulations for multiple reasons: to learn more about the universe, or simply to test a model by confronting it with real astronomical data. If a theory is correct, the structures formed in the simulation should closely resemble what we actually observe.
There are different types of simulations, each with its own use and advantages. For instance, “N-body” simulations focus on the motion of particles, so there’s a lot of focus on the gravitational force and interactions.
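To make the idea concrete, here is a minimal sketch of a direct-summation N-body step — not any particular research code, just the textbook scheme: compute the gravitational acceleration on each particle from every other particle, then advance velocities and positions. The simulation units and the `softening` parameter are illustrative assumptions.

```python
import numpy as np

G = 1.0  # gravitational constant in arbitrary simulation units (assumption)

def nbody_step(pos, vel, mass, dt, softening=0.01):
    """Advance an N-body system one time step by direct summation.

    pos, vel: (N, 3) arrays of positions and velocities; mass: (N,) array.
    The 'softening' length avoids infinite forces at tiny separations,
    a standard trick in cosmological N-body codes.
    """
    # pairwise separation vectors: diff[i, j] = pos[j] - pos[i]
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    dist2 = (diff ** 2).sum(axis=2) + softening ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)  # a particle does not pull on itself
    # acceleration on particle i: G * sum_j m_j * (r_j - r_i) / |r_j - r_i|^3
    acc = G * (diff * (mass[np.newaxis, :, None] * inv_d3[:, :, None])).sum(axis=1)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# two equal masses: total momentum stays zero, as gravity is pairwise-symmetric
pos = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
vel = np.array([[0.0, 0.35, 0.0], [0.0, -0.35, 0.0]])
mass = np.array([1.0, 1.0])
pos, vel = nbody_step(pos, vel, mass, dt=0.01)
```

Real codes like the Millennium Run use far cleverer force solvers (trees, particle-mesh grids), since direct summation scales as N², which is hopeless for billions of particles.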
The Millennium Run, for instance, incorporates over 10 billion dark matter particles. Even without knowing what dark matter really is, researchers can use these ‘particles’ to simulate its properties. Other simulations, such as IllustrisTNG, also include star formation, black hole formation, and other details. The most recent one, Uchuu, comes with a 100-terabyte catalog.
In the end, simulations can’t reveal every single detail of the universe. You can’t simulate what flavor of pie someone is having, but you can include enough detail to work with large-scale features such as the structure of galaxies and clusters.
Another type of model is a mock catalog. Mocks are designed to mimic a mission and they use data gathered by telescopes over years and years. Then, a map of some structure is created — it could be galaxies, quasars, or other things.
The mocks simulate these objects just as they were observed, with their recorded physical properties. They are made according to a model of the universe, with all the ingredients we know about.
The theory behind the model used for the mocks can be tested by comparing them with the telescopes’ observations. This shows how right or wrong our assumptions and theories are, and it’s a pretty good way to put ideas to the test. Researchers usually use around 1,000 mocks to give their results statistical significance.
Let’s take a look behind the scenes at how the models are produced — and how much energy they use. These astronomical and climate simulations run on supercomputers, and they are super. The Millennium Run, for example, was made using the Regatta supercomputer: it needed 1 terabyte of RAM and produced 23 terabytes of raw data.
IllustrisTNG used the Hazel Hen. This beast can perform 7.42 quadrillion floating-point operations per second (Pflops), which is equivalent to millions of laptops working together. In addition, Hazel Hen consumes 3,200 kilowatts of power — which leads to a spicy electric bill. Uchuu, which produced 100 terabytes of results, was made using ATERUI II, which performs at 3.087 Pflops.
In an Oort Cloud simulation, the team involved reported the amount of energy they used in their work: “This results in about 2MWh of electricity (http://green-algorithms.org/), consumed by the Dutch National supercomputer.” Reporting energy use is a habit that may become more common in the future.
So what does this tell us about the possibility of our very own universe being a simulation? Could we be living in some sort of Matrix? Or in a Rick & Morty microverse? Imagine the societal chaos of discovering that we live in a simulated universe — and that you are not a privileged citizen of a rich country. That wouldn’t end well for the architect.
The simulation hypothesis is actually taken seriously by some researchers. It was formulated by Nick Bostrom and rests on three propositions, at least one of which must be true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
(3) we are almost certainly living in a computer simulation.
That being said, the simulation hypothesis is not a scientific theory. It is an idea — a very interesting one, but nothing more.
Lessons from simulations
What we have learned from making our simulations is that it is impossible to make a perfect copy of nature. N-body simulations are the perfect example: we can’t simulate everything, only the particles relevant to what we want to study. Climate models face the same problem — it is impossible to create a pixel that perfectly reproduces a geographic location; you can only approximate the desired features.
The other difficulty is energy consumption, which makes some phenomena hard for us to simulate at all. Simulating a universe in which people make their own choices would require an improbable amount of power — and how would the data even be stored? Unless it ends like Asimov’s ‘The Last Question’ — which is well worth a read.
In the end, simulations are possible, but microverses are improbable. We’ll keep improving simulations, making better ones on faster supercomputers — all while keeping in mind that we need efficient programs that consume less energy and less time.
Six galaxies detected by Hubble and Spitzer come from a time astronomers call the Cosmic Dawn — a period in the history of our universe just 250-350 million years after the Big Bang (the age of the universe is currently estimated at 13.8 billion years), when the first stars had just started shining.
After the Big Bang, the universe was a bit of a hot mess. It was hot, dense, and virtually opaque. It only became transparent during a period called Recombination, in which a soup of protons and electrons combined to form the first true hydrogen atoms. Before Recombination, light was not able to travel freely through the universe, as it was constantly scattered off the free electrons and protons. But as atoms formed and fewer free particles remained, a clear path opened for light to travel across the universe.
It is in this period that the universe became transparent — and it is also in this period that the six galaxies were formed. Light from these galaxies took most of the universe’s current lifetime to reach us, so looking at them is basically like looking at the Cosmic Dawn. For Professor Richard Ellis from University College London, UK, observations like this are the crowning achievement of decades of work.
In a study published in Monthly Notices of the Royal Astronomical Society, Ellis and colleagues from the UK, Germany, and US estimated the time at which the Cosmic Dawn began, using six galaxies which they estimate to have formed between 250 and 350 million years after the Big Bang.
To estimate the galaxies’ ages, the researchers must first assume a particular value for the universe’s rate of expansion (over which there is still some debate). That is because they are computing the lookback time — the time light from the ancient galaxies traveled to reach us.
As the universe expands, light coming from stars and galaxies has its wavelength stretched — an effect called redshift. By measuring how much the wavelength has increased, researchers can estimate how far the light has traveled — and consequently, how old the light-emitting object is.
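The measurement itself boils down to a simple ratio. A sketch, using the standard definition of redshift and illustrative numbers (the Lyman-alpha line is a real spectral line at 121.6 nm; the observed wavelength here is a hypothetical example, not a value from the study):

```python
def redshift(lambda_observed, lambda_emitted):
    """Redshift z from the stretching of a known spectral line:
    z = (observed - emitted) / emitted."""
    return (lambda_observed - lambda_emitted) / lambda_emitted

# example: the Lyman-alpha line (rest wavelength 121.6 nm) observed
# at 1216 nm would imply z = 9, deep in the Cosmic Dawn era
z = redshift(1216.0, 121.6)
print(z)  # 9.0
```

Converting a redshift into an age then requires a cosmological model — which is exactly why the assumed expansion rate matters so much.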
The recent results were based on data from the Hubble and Spitzer space telescopes, both famous for being capable of observing some of the oldest objects in the universe. To estimate the redshift, the team also relied on the Atacama Large Millimetre Array (ALMA) in Chile, the European Very Large Telescope, the twin Keck telescopes in Hawaii, and the Gemini-South telescope.
The age of the sample can only be computed by combining data from all those different telescopes. Astronomers and cosmologists, however, have great expectations of the Hubble/Spitzer successor, the James Webb Space Telescope (JWST). The most ambitious, biggest, and most sensitive telescope NASA has ever created will be able to observe those Cosmic Dawn galaxies directly. JWST also promises a larger sample of galaxies, providing a better representation of the Cosmic Dawn.
Many things in the universe spin, at pretty much every scale you can imagine — from particles at the quantum scale to hurricanes and, of course, planets and stars. However, this physical phenomenon is not well-explored on the cosmic scale — at the megaparsec scale, we’re not really sure what spins, or how.
In a study published in Nature Astronomy, physicists used data from the Sloan Digital Sky Survey to test an idea: what if galaxy filaments — the largest known structures in the universe, consisting of massive galaxy superclusters — are actually spinning?
It may sound weird to think that galaxies as a whole are moving — let alone spinning. But they do move, both with the universe’s expansion and on smaller scales. In our galactic neighborhood, some galaxies are mere satellites of a larger “parent” galaxy, like our Milky Way. And we live in the Laniakea supercluster, where our big family is being pulled toward the Great Attractor, the densest region of the cluster.
We can only observe parts of the universe, owing to our position within it. In a way, we are too small to see the great vastness of the universe, so the filaments we can observe are those visible in the parts of the sky accessible from our cosmic neighborhood.
To compensate for this, astronomers also study such processes using computer simulations. The most famous simulation of the Universe’s large-scale structure is the Millennium Simulation, which used more than 10 billion particles to trace the evolution of the matter distribution in a cubic region of the Universe over 2 billion light-years on a side. It shows the dark matter distribution forming a cosmic web across the universe — a good way to imagine what the big picture looks like.
In a recent study, researchers looked at the rotation movement of galaxy filaments — “bridges” that connect the cosmic web, connecting galaxies to each other.
“By mapping the motion of galaxies in these huge cosmic superhighways using the Sloan Digital Sky survey—a survey of hundreds of thousands of galaxies—we found a remarkable property of these filaments: they spin,” says Peng Wang, first author of the now published study and astronomer at the AIP.
Wang and colleagues looked at filaments whose cylinders lie roughly perpendicular to our line of sight. They divided each cylinder into two regions to distinguish whether galaxies are moving away from us (region A) or towards us (region B). If galaxies on one side recede while those on the other side approach at the same time, the cylinder is rotating. Indeed, these structures appear to be rotating.
“Despite being thin cylinders—similar in dimension to pencils—hundreds of millions of light-years long, but just a few million light-years in diameter, these fantastic tendrils of matter rotate. On these scales, the galaxies within them are themselves just specks of dust. They move on helixes, or corkscrew-like orbits, circling around the middle of the filament while traveling along with it. Such a spin has never been seen before on such enormous scales, and the implication is that there must be an as-yet-unknown physical mechanism responsible for torquing these objects,” says Noam Libeskind, initiator of the project at the AIP.
The rotation is like a helix — the galaxies not only rotate around the filament’s axis, they also move along it. The team estimated that filaments with more massive galaxy clusters at their ends tend to show stronger rotational signals than less massive ones. That is an important observation, because it makes the rotation hypothesis distinguishable from the universe’s expansion.
Overall, the work detected 17,181 filaments. Most of the galaxies lie within roughly 30 billion billion km of the filament axis, while the cylinder’s radius is twice that size. Despite the impressive result and the numerous filaments observed, this does not mean every filament of the cosmic web spins — we simply don’t have enough galaxies to represent the whole universe. Until we get more data, the study has provided the first actual evidence for such motion, and that is already stunning.
If you’ve been following our space articles, you may have come across something called “dark matter”. It’s the most abundant type of matter in the universe — our best models have established that dark matter comprises 84.4% of the matter contained in the known universe — but we still don’t really understand just what it is. Dark matter is something that we know is out there, but we have little idea what it’s made of.
Wait, so how do we know it’s even real?
“Regular” matter (technically called baryonic matter) is made of electrons, protons, and neutrons. Cosmologists lump these three particles together as baryons — technically speaking, electrons are something else, but that’s beside the point here. Baryons make up gas, gas makes up stars, stars go boom and make planets and other things — including you. Yes, you are made of star stuff — baryonic star stuff, to be precise.
All this is held together by the electromagnetic (EM) force, which forms the chemical bonds gluing regular atoms together. But dark matter (DM) plays a different game.
Dark matter doesn’t interact with things the way baryonic matter does. It doesn’t scatter or absorb light, but it still exerts a gravitational pull. So if there were beings made of dark matter living right here, right now, you probably wouldn’t even know it: the perception of touch arises when your sensory nerves send a message to your brain, and those nerves work thanks to the EM force.
We can’t touch dark matter, and no optical instrument can detect it, so how do we ‘see’ it? Indirectly, for starters. Look for gravity: if there isn’t enough visible mass to explain the gravitational pull felt in a region of the universe, then something else must be there. Invisible does not mean non-existent. If it weren’t for its gravitational effects, there would be little indication that dark matter exists.
The main observational evidence for dark matter is the orbital speed of stars in the arms of spiral galaxies. If Kepler and Newton were right, stars’ velocities should decrease with orbital radius in a specific way. But that’s not what Vera Rubin and Kent Ford observed when they tracked this relationship. Instead, they found a velocity-versus-radius relation in which stellar velocities stayed nearly constant beyond a certain point of the galactic orbit.
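The “specific way” is the Keplerian prediction v = √(GM/r): if essentially all the mass sat in the galaxy’s bright inner region, orbital speeds should fall off as 1/√r. A quick sketch with an illustrative enclosed mass (the value of `M_inner` is an assumption for demonstration, not a measurement of any real galaxy):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_inner = 1.0e41 # illustrative mass enclosed within the inner galaxy, kg
kpc = 3.086e19   # meters per kiloparsec

def keplerian_speed(r_m):
    """Orbital speed if all the mass sat inside radius r: v = sqrt(G*M/r)."""
    return math.sqrt(G * M_inner / r_m)

for r_kpc in (5, 10, 20, 40):
    v = keplerian_speed(r_kpc * kpc) / 1000  # convert m/s to km/s
    print(f"r = {r_kpc:>2} kpc -> Keplerian v = {v:6.1f} km/s")
# Keplerian speeds fall as 1/sqrt(r): quadrupling the radius halves the speed.
# Rubin and Ford instead measured roughly flat speeds out to large radii,
# implying extra unseen mass in an extended halo.
```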
This could only be explained if there were a lot more matter somewhere that we’re not seeing. Something was pulling on these stars gravitationally, and that something is dark matter.
Another important piece of evidence for dark matter comes from galaxy clusters like 1E 0657-558. Astronomers observed that within this cluster, there are two groups of galaxies in a peculiar position relative to one another.
If you look at the Hubble Space Telescope image below (first and second images), you’ll notice one group of galaxies on the left and another on the opposite side. When astronomers observed the region in X-rays, they concluded that these two clusters had collided, leaving a gas trace of the shock. The faster cluster passed through the slower one like a bullet (it’s even named the Bullet Cluster) at around 3,000–4,000 km/s, some 100–200 million years ago (third image).
Scientists used gravitational lensing to estimate the mass of the objects involved in the collision. The weak lensing detections showed that the bulk of the mass (blue) sits ahead of the hot gas seen in X-rays (purple) (below, first image). Since dark matter interacts with almost nothing, it sailed through the collision faster than the gas, barely affecting the baryonic matter (second image).
So what is Dark Matter?
Just because we don’t know what dark matter is doesn’t mean we have no idea. In fact, researchers have a few theories and are considering several plausible candidates.
Dark matter candidates are commonly classified in three ways, according to how fast the particles move. Cold dark matter is the leading class: it successfully describes the formation of the universe’s large-scale structure and is thought to be the dominant component of matter in the universe. It’s called cold because its particles move ‘slowly’ compared to the speed of light.
Weakly interacting massive particles (WIMPs) are the leading candidates for cold dark matter. They supposedly interact via the weak nuclear force and are thought to be the lightest particles of supersymmetric theories. Neutral particles of this kind could have been produced in the early universe and later participated in the formation of galaxy clusters.
Hot dark matter is the opposite: its particles move at close to the speed of light. Warm dark matter sits in between, and its candidate particles are thought to be even less interactive than neutrinos.
One candidate particle for warm dark matter is the sterile neutrino — a hypothetical particle that interacts only via gravity and not via any of the other fundamental forces. Sterile neutrinos are also thought to be heavier than standard neutrinos, with a longer lifetime before decaying.
Ordinary neutrinos were once thought to be the best candidates for dark matter. Neutrinos are weird particles — they barely interact with anything, and when they do, it’s only via gravity and the weak nuclear force. They carry no electric charge, which is why they don’t interact electromagnetically. However, neutrinos are hot: they decouple (stop interacting with other forms of matter) while still moving at relativistic velocities. If dark matter were made of neutrinos, the universe would look radically different.
Massive Compact Halo Objects (MACHOs) represent a different type of candidate. They’re no WIMPs, at least. They aren’t particles either, but rather brown dwarfs, Jupiter-like planets, and black holes in galactic haloes. The best possible MACHOs are primordial black holes (PBHs). Unlike the ordinary black holes we see at the centers of galaxies, PBHs are thought to have been created nearly 10 seconds after the Big Bang. No evidence for such objects has yet been found, but they remain an open possibility.
Detecting dark matter
Many ideas for the detection of each plausible candidate have been developed.
Through cosmology, the main evidence comes from the Cosmic Microwave Background (CMB). Yes, the same radiation Dr. Darcy Lewis detected (for real) in the TV miniseries WandaVision. It is the remnant electromagnetic radiation from when the universe was a 380,000-year-old baby.
The best CMB observations we have currently are from the Planck satellite 2018 survey. Different amounts of matter have distinct signals in the CMB observations, forming the temperature power spectrum.
If the theory is correct, the shape of the power spectrum differs for different amounts of matter. There is a quantity known as the critical density of the universe — the density at which the universe keeps expanding but coasts toward a halt, neither accelerating nor recollapsing. When you divide the density of the observed matter by the critical density, you get its density parameter (Ω).
The temperature power spectrum is modeled according to the amounts of the different ingredients in the universe; more or less matter changes its shape. Planck’s observations show a matter density parameter of Ωh² ≈ 0.14, so if the shape of the spectrum corresponds to that value, we have a measurement of the amount of dark matter in the universe.
There are also ways to search for dark matter directly, not through cosmology but through particle physics. The Large Hadron Collider (LHC), the world’s most powerful particle accelerator, can collide protons at extremely high (relativistic) speeds, generating a spray of scattered particles that are then measured by the detectors.
Physicists hope to find dark matter by comparing the energy before and after the collision. Since dark matter particles would escape the detectors unseen, missing energy could signal their presence. So far, however, no experiment has observed dark matter — though researchers are still looking.
Other experiments, deep underground, use high-purity sodium iodide crystals as detectors. DAMA/LIBRA (Large sodium Iodide Bulk for RAre processes), for example, tries to observe an annual variation in collisions between regular matter and WIMPs, caused by the planet’s motion around the Sun, which changes our velocity relative to the galactic dark matter halo. The problem is that DAMA’s 20 years’ worth of data lacked sufficient statistical significance. Meanwhile, ANAIS (Annual modulation with NaI Scintillators), an identical experiment designed to directly detect dark matter, gathered three years of more reliable data — and its results suggest this method won’t find dark matter anytime soon.
To get a better picture of how hard it is to reach a conclusive result, take a look at the image below. All those lines and colorful contours represent the results of different experiments, and none of them seem to agree. That’s the problem with dark matter: we still don’t have the evidence to confirm any of the theories we’ve come up with — and we can’t really rule out any possibilities either.
The questions of what dark matter is and how it works still have no satisfying answer. There are many detection experiments being planned and conducted in order to explain and verify different hypotheses, but nothing conclusive thus far. Let’s hope we don’t have to wait another 20 years to figure out if one experiment is right or wrong. Unfortunately, groundbreaking discoveries can take a lot of time, especially in astrophysics. While we wait, dark matter will continue to entertain our imagination.
The article was primarily based on the 2020 Review of Particle Physics from Particle Data Group’s Dark Matter category.
Black holes are cosmic bodies that pack an immense amount of mass into a surprisingly small space. Due to their extremely intense gravity, nothing can escape their grasp — not even light, which sets the universe’s speed limit.
April 10th, 2019 marked a milestone in science history when the Event Horizon Telescope team revealed the first image of a supermassive black hole. With that image, these regions of space — created when stars exhaust their nuclear fuel and collapse into massive gravitational wells — completed their transition from theory to reality.
This transition has since been further solidified by the revelation of a second, much clearer image of the supermassive black hole (SMBH) at the centre of the galaxy Messier 87 (M87). The second image revealed details such as the orientation of the magnetic fields that surround the black hole and drive its powerful jets, which extend for light-years.
The study of black holes could teach us about much more than these spacetime events and the environments that host them, however. Because cosmologists believe that most galaxies have an SMBH sitting at their centre — greedily consuming material like a fat spider lurking at the centre of a cosmic web — learning more about these spacetime events can also teach us how galaxies themselves evolve.
The origin of black holes runs in reverse to that of most astronomical objects. We didn’t discover some mysterious object in the distant cosmos and then begin to theorise about it whilst making further observations.
Rather, black holes entered the scientific lexicon in a way more reminiscent of newly theorised particles in particle physics: they emerged first from the solutions to complex mathematics — in this case, the field equations of Einstein’s most important and revolutionary theory.
Just as a physical black hole forms from the collapse of a star, the theory of black holes emerged from the metaphorical collapse of the field equations that govern the geometrical theory of gravity; better known as general relativity.
One of the most common misconceptions about black holes arises from their intrinsic uniqueness and the fact that there really isn’t anything else like them in the Universe.
That’s Warped: Black Holes and Their Effect on Spacetime
General relativity introduced the idea that mass has an effect on spacetime, a concept fundamental to the idea that space and time are not passive stages upon which the events of the universe play out. Instead, those events shape that stage. As John Wheeler brilliantly and simply told us; when it comes to general relativity:
“Matter tells space how to curve. Space tells matter how to move.”
The most common analogy for this warping of space is that of placing objects on a stretched rubber sheet. The larger the object, the deeper the ‘dent’ and the more extreme the curvature it creates. In our analogy, a planet is a marble, a star an apple, and a black hole a cannonball.
Considering this, a black hole isn’t really ‘an object’ at all; it is better described as a spacetime event. When we say ‘black hole’, what we really mean is a region of space so ‘warped’ by a huge amount of mass condensed into a single point that even light doesn’t have the velocity needed to escape it.
This point at which light can no longer escape marks the first of two singularities that define black holes — points at which solutions of the equations of general relativity go to infinity.
The Event Horizon and the Central Singularity
The event horizon of a black hole is the surface at which its escape velocity exceeds the speed of light in vacuum (c). This occurs at a distance called the Schwarzschild radius — named for astrophysicist Karl Schwarzschild, who developed a solution to Einstein’s field equations whilst serving on the Eastern Front in the First World War.
His solution — which would, unsurprisingly, become known as the Schwarzschild solution — described the spacetime geometry of the empty region surrounding a spherically symmetric mass. It had two interesting features, two singularities: one a coordinate singularity, the other a gravitational singularity. Both take on significance in the study of black holes.
Let’s deal with the coordinate singularity — the Schwarzschild radius — first.
The Schwarzschild radius (Rs) takes on special meaning when the radius of a body shrinks to within it (i.e., r < Rs). When that happens, the body becomes a black hole.
All bodies have a Schwarzschild radius, but as you can see from the calculation below for a body like Earth, Rs falls well-within its radius.
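The formula is short enough to check ourselves: Rs = 2GM/c². A sketch with standard constants and masses for Earth and the Sun:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Rs = 2*G*M / c^2: the radius a mass must shrink inside to become a black hole."""
    return 2 * G * mass_kg / c**2

earth = schwarzschild_radius(5.972e24)  # Earth's mass in kg
sun = schwarzschild_radius(1.989e30)    # Sun's mass in kg
print(f"Earth: Rs ~ {earth * 1000:.1f} mm, far inside its 6,371 km radius")
print(f"Sun:   Rs ~ {sun / 1000:.1f} km, far inside its 696,000 km radius")
```

Earth’s Schwarzschild radius comes out at about 9 mm — which is why squeezing Earth into a black hole would mean shrinking it to under 2 cm across.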
That’s part of what makes black holes unique: their Schwarzschild radius lies outside their physical radius because their mass is compressed into such a tiny space.
Because the outer edge of the event horizon is the last point at which light can escape it also marks the last point at which events can be seen by distant observers. Anything past this point can never be observed.
The reason the Schwarzschild radius is called a ‘coordinate singularity’ is that it can be removed with a clever choice of coordinate system. The second singularity can’t be dealt with in this way. This makes it the ‘true’ physical singularity of the black hole itself.
This is known as the gravitational singularity and is found at the centre of the black hole (r = 0). It is the end-point for every particle that falls into a black hole. It’s also the point at which the Einstein field equations break down… maybe even all the laws of physics themselves.
The fact that the escape velocity of the event horizon exceeds the speed of light means that no physical signal could ever carry information from the central singularity to distant observers. We are forever sealed off from this aspect of black holes, which will therefore forever remain in the domain of theory.
How to Make a Black Hole
We’ve already seen that for a body with the mass of Earth to become a black hole, its diameter would have to shrink to less than 2 cm — something that just isn’t possible. In fact, not even our Sun has enough mass to end its life as a black hole. Only stars whose collapsing cores retain around three times the mass of the Sun are massive enough to end their lives in this way.
But why is that the case?
It won’t surprise you to learn that for an astronomical body to become a black hole, it must exceed a series of limits. These limits are set by the outward forces resisting the inward pull that leads to gravitational collapse.
For planets and other bodies with relatively small masses, the electromagnetic repulsion between atoms is strong enough to grant them stability against total gravitational collapse. For large stars the situation is different.
During the main life cycle of stars–the period of the fusion of hydrogen atoms to helium atoms–the primary protection against gravitational collapse is the outward thermal and radiation pressures that are generated by these nuclear processes. That means that the first wave of gravitational collapse occurs when a star’s hydrogen fuel is exhausted and inward pressure can no longer be resisted.
Should a star have enough mass, this collapse forces atoms in the core together tightly enough to reignite nuclear fusion — with helium atoms now fusing to create heavier elements. When this helium is exhausted, the process happens again, with the collapse again stalling if there is enough pressure to trigger the fusion of heavier elements still.
Stars like the Sun will eventually reach the point where their mass is no longer sufficient to kick start the nuclear burning of increasingly heavier elements. But if it isn’t nuclear fusion that is generating the outward forces that prevent complete collapse, what is preventing these lower-mass stars from becoming black holes?
Placing Limits on Gravitational Collapse
Lower-mass stars like the Sun will end their lives as white dwarf stars, with black hole formation out of reach. The mechanism protecting these white dwarfs against complete collapse is a quantum mechanical phenomenon called degeneracy.
This ‘degeneracy pressure’ is a factor of the Pauli exclusion principle, which states that certain particles– known as fermions, which include electrons, protons, and neutrons– are forbidden from occupying the same ‘quantum states.’ This means that they resist being tightly crammed together.
This theory and the limitation it introduced led Indian-American astrophysicist Subrahmanyan Chandrasekhar to question if there was an upper limit above which this protection against gravitational collapse would fail.
Chandrasekhar –awarded the 1983 Nobel Prize in physics for his work concerning stellar evolution– proposed in 1931 that above 1.4 solar masses, a white dwarf would no longer be protected from gravitational collapse by degeneracy pressure. Past this limit — termed the Chandrasekhar limit — gravity overwhelms the Pauli exclusion principle and gravitational collapse can continue.
But there is another limit that prevents stars of even this greater mass from creating black holes.
Thanks to the 1932 discovery of neutrons — the neutral partner of protons in atomic nuclei — Russian theoretical physicist Lev Landau began to ponder the possible existence of neutron stars. The outer part of these stars would contain neutron-rich nuclei, whilst the inner sections would be formed from a ‘quantum fluid’ composed mostly of neutrons.
These neutron stars would also be protected against gravitational collapse by degeneracy pressure — this time provided by this neutron fluid. In addition to this, the greater mass of the neutron in comparison to the electron would allow neutron stars to reach a greater density before undergoing collapse.
By 1939, Robert Oppenheimer had calculated that the mass-limit for neutron stars would be roughly 3 times the mass of the Sun.
To put this into perspective, a white dwarf with the mass of the Sun would be expected to have a millionth of our star’s volume — giving it a radius of 5000km, roughly that of the Earth. A neutron star of a similar mass though would have a radius of about 20km — roughly the size of a city.
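A back-of-the-envelope comparison of the two remnants' mean densities, assuming the radii quoted above, makes the jump in compactness vivid:

```python
import math

M_SUN = 1.989e30  # solar mass, kg

def mean_density(mass_kg, radius_m):
    """Mean density of a uniform sphere, kg/m^3."""
    return mass_kg / ((4.0 / 3.0) * math.pi * radius_m**3)

rho_wd = mean_density(M_SUN, 5.0e6)  # white dwarf: ~5,000 km radius
rho_ns = mean_density(M_SUN, 2.0e4)  # neutron star: ~20 km radius
print(f"Neutron star is ~{rho_ns / rho_wd:.1e} times denser than the white dwarf")
```

Shrinking the radius by a factor of 250 boosts the density by 250³ — more than ten million times.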
Above the Oppenheimer-Volkoff limit, gravitational collapse begins again. This time no limits exist between this collapse and the creation of the densest possible state in which matter can exist. The state found at the central singularity of a black hole.
We’ve covered the creation of black holes and the hurdles that stand in the way of the formation of such areas of spacetime, but theory isn’t quite ready to hand black holes over to practical observations just yet. The field equations of general relativity can also be useful in the categorisation of black holes.
The four types of black holes
Categorising black holes is actually fairly straightforward thanks to the fact that they possess very few independent qualities. John Wheeler had a colourful way of describing this lack of characteristics. The physicist once commented that black holes ‘have no hair,’ meaning that outside a few characteristics they are essentially indistinguishable. This comment became immortalised as the no-hair theorem of black holes.
Black holes have only three independent measurable properties — mass, angular momentum and electric charge. All black holes must have mass, so this means there are only four different types of black hole based on these qualities. Each is defined by the metric, or the function used to describe it.
This means that black holes can be quite easily categorised by the properties they possess, as seen below.
This isn’t the most common or most suitable method of categorising black holes, however. As mass is the only property that is common to all black holes, the most straightforward and natural way of listing them is by their mass. These mass categories are imperfectly defined, and so far black holes in some of the categories — most notably intermediate-mass black holes — remain undetected.
Cosmologists believe that the majority of black holes are rotating and non-charged Kerr black holes. And the study of these spacetime events reveals a phenomenon that perfectly exemplifies their power and influence on spacetime.
The Anatomy of a Kerr Black Hole
The mathematics of the Kerr metric used to describe non-charged rotating black holes reveals that as they rotate, the very fabric of spacetime that surrounds them is dragged along in the direction of the rotation.
This powerful phenomenon is known as ‘frame-dragging’ or the Lense-Thirring effect and leads to the violent churning environments that surround Kerr black holes. Recent research has revealed that this frame-dragging could be responsible for the breaking and reconnecting of magnetic field lines that, in turn, launch powerful astrophysical jets into the cosmos.
The static limit of a Kerr black hole also has an interesting physical significance. This is the point at which light — or any particle for that matter — is no longer free to travel in any direction. Though not a light-trapping surface like the event horizon, the static limit drags light in the direction of the black hole’s rotation. Thus, light can still escape the static limit, but only in a specific direction.
British theoretical physicist and 2020 Nobel Laureate Sir Roger Penrose also suggested that the static limit could be responsible for a process that allows black holes to ‘leak’ energy into the surrounding Universe. Should a particle split into two fragments at the edge of the static limit, it is possible for one fragment to fall into the black hole carrying negative energy, whilst its counterpart is launched into the surrounding Universe with more energy than the original particle.
This has the net effect of reducing the black hole’s mass whilst increasing the mass content of the wider Universe.
We’ve seen what happens to light at the edge of a black hole and explored the fate of particles that fall within a Kerr black hole’s static limit, but what would happen to an astronaut that strayed too close to the edge of such a spacetime event?
Death by Spaghettification
Of course, any astronaut falling into a black hole would be completely crushed upon reaching its central gravitational singularity, but the journey may spell doom even before this point has been reached. This is thanks to the tidal forces generated by the black hole’s immense gravitational influence.
As the astronaut’s centre of mass falls towards the black hole, the difference in the gravitational pull at the astronaut’s head and feet gives rise to a huge tidal force: their body would be simultaneously compressed at the sides and stretched out lengthways.
Physicists refer to this process as spaghettification. A witty name for a pretty horrible way to die. Fortunately, we haven’t yet lost any astronauts to this bizarre demise, but astronomers have been able to watch stars meet the same fate.
For a stellar-mass black hole, spaghettification would occur not just before our astronaut reaches the central singularity, but also well before they even hit the event horizon. For a black hole 40 times the mass of our Sun — spaghettification would occur at about 1,000 km out from the event horizon, which is, itself, 120 km from the central gravitational singularity.
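The 120 km figure quoted above can be checked with the same Schwarzschild formula, Rs = 2GM/c², applied to a 40-solar-mass black hole:

```python
# Schwarzschild radius for a 40-solar-mass black hole: Rs = 2GM / c^2
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

rs = 2 * G * (40 * M_SUN) / c**2
print(f"Rs for 40 solar masses: {rs / 1000:.0f} km")  # ~118 km
```

The event horizon sits roughly 120 km out, so spaghettification at ~1,000 km really does happen well outside the horizon.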
As well as developing the Oppenheimer-Volkoff limit, Oppenheimer also used general relativity to describe how a total gravitational collapse would appear to a distant observer. Such an observer would see the collapse take an infinitely long time, the process appearing to slow and freeze as the star’s surface shrinks towards the Schwarzschild radius.
An astronaut falling into a black hole would be immortalized in a similar way to a distant observer, though the astronaut themselves — could they have survived spaghettification — would notice nothing. The passing of Rs would seem just a natural part of the fall, despite it marking the point of no return.
Much More to Learn…
After emerging from the mathematics of general relativity in the early stages of the 20th century, black holes have developed from a theoretical curiosity to the status of scientific reality. In the process, they have indelibly worked their way into our culture and lexicon.
Perhaps the most exciting thing about black holes is that there is so much we don’t yet know about them. As a striking example of that, almost all the information listed above resulted just from theory and the interrogation of the maths of Einstein’s field equations.
Unlocking the secrets held by black holes could, in turn, reveal how galaxies evolve and how the Universe itself has changed since its early epochs.
Sources and Further Reading
Relativity, Gravitation and Cosmology, Robert J. Lambourne, Cambridge University Press.
Relativity, Gravitation and Cosmology: A Basic Introduction, Ta-Pei Cheng, Oxford University Press.
Let’s start with the history of the universe (a very brief one). After the Big Bang, the Universe was essentially a hot soup of particles. Things cooled down and hydrogen atoms eventually began to form. At some point, the universe became neutral and transparent, but because the clouds of hydrogen collapsed very slowly, there were no sources of light — a period of complete and utter universal darkness aptly called the Dark Ages.
The famous dark matter slowly started to form structures that later hosted the first sources of light in the universe. The emergence of these sources occurred in the Epoch of Reionization (EoR), around 500 million years after the Big Bang. Now, astronomers have found a structure that formed not long after this period.
Astronomers from China, the US, and Chile have now found a huge galaxy protocluster (a dense system of dozens of galaxies from the early universe that grow together) from the early days of the universe. Called LAGER-z7OD1, it dates from a time when the universe was still a baby — only 770 million years old. These objects are important tools that enable astronomers to examine the EoR.
The group working to detect these objects is called Lyman Alpha Galaxies in the Epoch of Reionization (LAGER). Lyman Alpha Galaxies are very distant objects that emit radiation from neutral hydrogen, and they are the key to finding clusters this old.
LAGER primarily used the Dark Energy Camera (DECam) on the Cerro Tololo Inter-American Observatory (CTIO) 4-m Blanco telescope in the Chilean Andes. They found that the protocluster is a system with a redshift of 6.9 — here’s why that’s intriguing.
Redshift is a measure of how something is moving in space: if an object moves away from us, its light is stretched to longer wavelengths (a positive redshift, skewed towards red); if it moves towards us, the wavelengths are compressed (a negative redshift, skewed towards blue). For distant galaxies, the bigger the redshift, the farther away the object is.
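As an illustration, the Lyman-alpha line these surveys target has a rest wavelength of about 121.6 nm; cosmological redshift stretches an emitted wavelength by a factor of (1 + z), so at z = 6.9 the line lands in the near-infrared:

```python
LYA_REST_NM = 121.567  # rest-frame Lyman-alpha wavelength, nm

def observed_wavelength(rest_nm, z):
    """Cosmological redshift stretches wavelengths by a factor (1 + z)."""
    return rest_nm * (1 + z)

obs = observed_wavelength(LYA_REST_NM, 6.9)
print(f"Lyman-alpha at z=6.9 is observed at about {obs:.0f} nm")  # ~960 nm
```

That shift from far-ultraviolet to ~960 nm is why narrowband imaging with instruments like DECam can pick these galaxies out.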
The cluster contains 21 galaxies; its volume is estimated at 51,480 Mpc³ (1 Mpc is about 3.26 million light-years), and it is about 3,700,000 billion times more massive than the Sun. In addition, it has an elongated shape, which suggests that subclusters merged to form the bigger structure.
It’s basically a gazillion miles from us, but a gazillion isn’t good enough for astronomers — they always want to know just how far away things are. In this case, however, an approximation will have to do.
The Planck Collaboration estimated that the EoR probably started at z=7.67. This estimation uses the polarization of Cosmic Microwave Background photons — just like polarizing light with sunglasses, but with a level of sensitivity so high that the instruments detecting it must be kept at temperatures close to absolute zero. Another important constraint comes from searches for quasars formed in this period: most studies conclude that the EoR ended around z=6.
Lyman Alpha Galaxies and quasars are key tools for understanding the EoR. The best quasar sample we have today contains only about 50 objects — not much to represent the EoR across the entire universe. LAGER-z7OD1 is an example of a cluster that possibly formed in the middle of the process; until more certainty is obtained, more observations like this one are needed.
In the late 1920s, Edwin Hubble spent a great deal of his time investigating what were then called nebulae, interstellar structures that we now know as galaxies. He worked with the top instrument of the time, the 2.5 m Hooker telescope at Mount Wilson, collecting data from those distant objects and comparing them.
The galaxies observed by Hubble were different from one another. Some were redder than others: their light had been stretched to longer wavelengths. This redshift happens because the light’s source is moving away from us, and the faster the source recedes, the more its light is stretched towards the red end of the spectrum. Bluer galaxies, with blue sitting at the shorter end of the wavelength spectrum, are receding more slowly (or even approaching).
Interestingly, Hubble realized that the redder, faster-receding objects were also the more distant ones. In other words, if a galaxy is very distant, it’s moving away from us faster than a closer galaxy. So the universe was pretty clearly expanding.
Hubble’s observations indicated that the universe expands with a velocity of 500 km/s for each 3.09 × 10^19 km — a distance defined as one Megaparsec. In other words, Hubble concluded that the universe is expanding at a rate of 500 km/s/Mpc. But that’s not the end of this story. In fact, it’s only the beginning.
Today’s values, calculated with more precise instruments, are around 70 km/s/Mpc, and this expansion rate is called the Hubble constant (H0). The problem is that if you measure this expansion rate in different ways, you end up with different values — something called the ‘H0 tension’.
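To see what the tension means in practice, Hubble's law (v = H0 × d) can be evaluated with the two competing values at an arbitrary example distance:

```python
def recession_velocity(h0, distance_mpc):
    """Hubble's law: velocity (km/s) = H0 (km/s/Mpc) x distance (Mpc)."""
    return h0 * distance_mpc

d = 100.0  # Mpc, an arbitrary example distance
v_cmb = recession_velocity(67.4, d)    # CMB-based value
v_local = recession_velocity(74.0, d)  # local distance-ladder value
gap_pct = 100 * (v_local / v_cmb - 1)
print(f"At {d:.0f} Mpc: {v_cmb:.0f} vs {v_local:.0f} km/s, a ~{gap_pct:.0f}% gap")
```

A roughly 10% disagreement in predicted velocity at every distance is far larger than the quoted uncertainties of either method, which is what makes the tension so uncomfortable.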
Important information about the Hubble constant comes from the Cosmic Microwave Background (CMB), the light emitted 380,000 years after the Big Bang. Measurements of the CMB from the latest data release of the European Space Agency’s Planck Collaboration give an H0 value of 67.4 km/s/Mpc. But local measurements using the brightness of type Ia supernovae give larger values, around 74 km/s/Mpc.
To obtain the local value of around 74, the SH0ES Collaboration uses a galaxy (NGC 4258) as the reference to calibrate Cepheids and type Ia supernovae (SN Ia) and derive H0. SN Ia and Cepheids are so-called ‘standard candles’ — objects with a fixed, known brightness. With a catalog of objects of standard brightness, scientists can infer the candles’ distances from how bright they appear.
Planck, on the other hand, uses a technique not based on direct distance measurements. In this case, the CMB provides the sound horizon, which acts as a ‘standard ruler’; the standard model of cosmology is then needed to convert it into an expansion rate.
Other observations using methods similar to the two above also yield varying values. The most anticipated approach uses gravitational waves. It is similar to the standard-candle method, except that these objects act as ‘standard sirens’: the gravitational waveform itself reveals the distance. Unfortunately, there are very few observations of this sort, since the method is so new. One LIGO observation that produced a rich dataset was GW170817, the merger of two neutron stars. The importance of this event is due to the electromagnetic observations that came with it: the collision was so violent it produced a gamma-ray burst, detected approximately 2 s after the merger. GW170817 yields an H0 of about 69 km/s/Mpc.
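A minimal sketch of the standard-siren arithmetic, using approximate published values for GW170817 and its host galaxy NGC 4993 (treated here as illustrative inputs rather than exact figures):

```python
# Standard-siren estimate: H0 = recession velocity / distance.
# The waveform amplitude gives the distance directly; the host galaxy's
# Hubble-flow velocity comes from electromagnetic follow-up.
v_hubble = 3017.0  # km/s, approximate Hubble-flow velocity of NGC 4993
d_lum = 43.8       # Mpc, approximate luminosity distance from the waveform

h0 = v_hubble / d_lum
print(f"H0 from GW170817: ~{h0:.0f} km/s/Mpc")
```

The result sits between the CMB and distance-ladder values, which is part of why astronomers hope that a larger catalog of such events could arbitrate the tension.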
The matter is hotly debated by researchers. Since the last ZME Science post covering universal expansion was published, approximately 24,000 papers discussing the issue have appeared. Many analyze and compare different measurements of the value, and many reviews try to make sense of the separate studies. Despite the extensive investigation, though, no consensus has been reached.
More recently, a collaboration using the Atacama Cosmology Telescope (ACT) in Chile applied the same method as Planck. Unlike the ESA telescope, ACT is ground-based and covers a smaller area of the sky. Yet it agrees with Planck, finding a close value of 67.9 km/s/Mpc. This suggests that the same type of procedure yields similar values.
But how can that be, if the universe is supposed to behave the same regardless of how we observe it? The answer could be errors in the observations, but it could also be that our current model of the universe is not correct. With ACT, it appears that the cosmological model cannot fully describe the CMB, and we’re not really sure why.
From now on, the focus will turn towards Cepheids and supernovae, but no one is yet sure exactly what the problem is: does new physics need to be developed, or will more and more observations answer the question? Until then, more papers will keep filling the Google Scholar pages.
An international team of researchers has used telescopes from around the world — including instruments operated by the European Southern Observatory (ESO) — to glimpse a blast of light emitted by a star as it is torn apart by the tidal forces of a supermassive black hole.
The event — technically known as a ‘tidal disruption event’ (TDE) — occurred 215 million light-years from Earth, but despite this intimidating-sounding distance, it is the closest such flare ever captured. This, and the fact that the astronomers spotted the event early, means the team was able to study the phenomenon in unprecedented detail, in turn uncovering some surprises in this violent and powerful process.
The astronomers directed the ESO’s Very Large Telescope (VLT), based in the Atacama Desert, Chile, and other instruments at a blast of light first spotted last year. They studied the flare, AT2019qiz, located in a spiral galaxy in the constellation of Eridanus, for six months as it grew in luminosity and then faded. Their findings are published today in the Monthly Notices of the Royal Astronomical Society.
“My research focuses on close encounters between stars and supermassive black holes in the centres of galaxies. Gravity very close to a black hole is so strong that a star cannot survive, and instead gets ripped apart into thin streams of gas,” Thomas Wevers, co-author of the study and an ESO Fellow in Santiago, Chile, tells ZME Science. “This process is called a tidal disruption event, or sometimes ‘spaghettification’.
“If not for such tidal disruption events, we would not be able to see these black holes. Hence, they provide a unique opportunity to study the properties of these ‘hidden’ black holes in detail.”
Thomas Wevers, ESO Fellow
Catching the Start of the Movie
Wevers, who was part of the Institute of Astronomy, University of Cambridge, UK, as the study was being conducted, explains that it can take several weeks — or even months — to identify these spaghettification events with any certainty. Such an identification also takes all the telescopes and observational power that can be mustered. This can often cause a delay that results in astronomers missing the early stages of the process.
“It’s like watching a movie but starting 30 minutes in, a lot of information is lost if you can’t watch from the very beginning, and while you might be able to reconstruct roughly what has happened, you can never be completely sure,” the researcher explains. But, that wasn’t the case with this new event.
To stick to the analogy; this time the team had their popcorn and drink and were in their seats before the trailers started rolling.
“In this new event, we were lucky enough to identify and hence observe it very quickly, which has allowed us to see and understand what happens in the early phases in great detail.”
Thomas Wevers, ESO Fellow
Spotting spaghettification events is not just difficult due to timing issues, though. Such events are fairly rare, with only 100 candidates identified thus far, and are often obscured by a curtain of dust and debris. When a black hole devours a star, a jet of material is launched outwards that can further obscure the view of astronomers. The prompt viewing of this event allowed that jet to be seen as it progressed.
“The difficulty comes first from picking out these rare events in among all the more common things changing in the night sky: variable stars and supernova explosions,” Matt Nicholl, a lecturer and Royal Astronomical Society research fellow at the University of Birmingham, UK, and the lead author of the study tells ZME Science. “A second difficulty comes from the events themselves: they were predicted to look about 100 times hotter than the flare that we observed. Our data show that this is because of all the outflowing debris launched from the black hole: this absorbs the heat and cools down as it expands.”
Spaghettification: Delicious and Dangerous
The spaghettification process is one of the most fascinating aspects of black hole physics. It arises from the massive change in gravitational forces experienced by a body as it approaches a black hole.
“A star is essentially a giant ball of hot, self-gravitating gas, which is why it is roughly spherical in shape. When the star approaches the black hole, gravity acts in a preferential direction, so the star gets squeezed in one direction but stretched in the perpendicular direction,” Wevers says. “You can compare it to a balloon: when you squeeze it between your hands, it elongates in the direction parallel to your hands. Because the gravity is so extreme, the result is that the star essentially gets squeezed into a very long and thin spaghetti strand — hence the name spaghettification.”
Nicholl continues, explaining what happens next to this stellar spaghetti strand: “Eventually, it wraps all the way around and collides with itself, and that’s when we start to see the light show as the material heats up before either falling into the black hole or being flung back into space.
“The distance at which the star encountered the supermassive black hole was around the same distance between the Earth and Sun — this shows how incredibly strong the gravitational pull of the black hole must be to be able to tear the star apart from that distance.”
“If you picture the Sun being torn into a thin stream and rushing towards us, that’s roughly what the black hole saw!”
Matt Nicholl, Royal Astronomical Society research fellow.
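The Earth-Sun-distance figure can be sanity-checked with the standard order-of-magnitude formula for the tidal disruption radius, r_t ≈ R★ (M_BH / M★)^(1/3); the million-solar-mass black hole and Sun-like star below are assumed illustrative values, not figures from the study:

```python
M_SUN = 1.989e30  # kg
R_SUN = 6.957e8   # m
AU = 1.496e11     # m, the Earth-Sun distance

def tidal_radius(m_bh, m_star, r_star):
    """Order-of-magnitude radius inside which tides overwhelm a star's self-gravity."""
    return r_star * (m_bh / m_star) ** (1.0 / 3.0)

rt = tidal_radius(1e6 * M_SUN, M_SUN, R_SUN)
print(f"Tidal disruption radius: ~{rt / AU:.2f} AU")  # roughly half an AU
```

For these inputs the disruption radius comes out at roughly half the Earth-Sun distance, in line with the scale Nicholl describes.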
Surprises and Future Developments
The observations made by the astronomers have allowed them to study the dynamics of a star undergoing the spaghettification process in detail, something that hasn’t been possible before. And as is to be expected with such a first, the study yielded some surprises for the team.
“The biggest surprise with this event was how rapidly the light brightened and faded,” Nicholl tells ZME. “It took about a month from the encounter for the flare to reach its peak brightness, which is one of the fastest we’ve ever seen.”
The researcher goes on to explain that faster events are harder to find, suggesting there might be a whole population of short-lived flares that has been escaping astronomers’ attention. “Our research may have solved a major and long-standing mystery of why these events are 100 times colder than expected — in this event, it was the outflowing gas that allowed it to cool down.”
Confirming this idea means that the team must now seek scarce telescope time to investigate more of these events to see if this characteristic is unique to the AT2019qiz flare, or if it is a common feature of such events. “Because we studied only one event, it is still unclear whether our results apply universally to all such tidal disruption events. So we need to repeat our experiment multiple times,” Wevers says. “Unfortunately, we are at the whims of nature and our ability to spot new TDEs. When we do, we will need to confirm the picture we have put forward or perhaps adapt it if we find different behaviour.”
Wevers concludes by highlighting the unique position he, Nicholl, and their team find themselves in by studying such rare and difficult to observe events and the objects that lie behind them. “We aren’t yet in the phase where we think we have mapped all the behaviour that occurs following these cataclysmic events, so while each new TDE helps us to answer outstanding questions, at the same time it also raises new questions.
“We find ourselves continually in a catch-22 like situation, which in this case is a good thing as it propels our research forward!” exclaims Wevers. “I find it pretty amazing that we can study gargantuan black holes, weighing millions or even billions of times the mass of our sun, and which are located hundreds of millions of light-years away, in such detail with our telescopes.”
Original research: Nicholl, M., Wevers, T., Oates, S. R., et al., ‘An outflow powers the optical rise of the nearby, fast-evolving tidal disruption event AT2019qiz,’ Monthly Notices of the Royal Astronomical Society.
Our understanding of dark matter and its behavior could be missing a key ingredient. More gravitational lensing, the curving of spacetime and light by massive objects, could lead to the perfect recipe to solve this cosmic mystery.
Dark matter comprises anywhere between 70–90% of the Universe’s total mass, and its gravitational influence literally prevents galaxies like the Milky Way from flying apart. Yet science is still in the dark about dark matter.
As researchers around the globe investigate the nature and composition of this elusive substance, a study published in the journal Science suggests that theories of dark matter could be missing a crucial ingredient, the lack of which has hampered our understanding of the matter that literally holds the galaxies together.
The hint that something is missing from our theories of dark matter and its behavior emerged from comparisons between observations of dark matter concentrations in a sample of massive galaxy clusters and theoretical computer simulations of how dark matter should be distributed in such clusters.
Using observations made by the Hubble Space Telescope and the Very Large Telescope (VLT) array in the Atacama Desert of northern Chile, a team of astronomers led by Massimo Meneghetti of the INAF-Observatory of Astrophysics and Space Science of Bologna in Italy have found that small-scale clusters of dark matter seem to cause lensing effects that are 10 times greater than previously believed.
“Galaxy clusters are ideal laboratories in which to study whether the numerical simulations of the Universe that are currently available reproduce well what we can infer from gravitational lensing,” says Meneghetti. “We have done a lot of testing of the data in this study, and we are sure that this mismatch indicates that some physical ingredient is missing either from the simulations or from our understanding of the nature of dark matter.”
Just Add Gravitational Lensing
The lensing that the team believes accounts for dark matter discrepancies is a factor of Einstein’s theory of general relativity which suggests that gravity is actually an effect that mass has on spacetime. The most common analogy given for this effect is the distortion created on a stretched rubber sheet when a bowling ball is placed on it.
This effect arises in space when a star or even a galaxy curves space and thus bends the path of light passing by it. Known as gravitational lensing, it is commonly seen when a foreground object — which could be as small as a star or as large as a galaxy — passes in front of a background source and bends its light, shifting the source’s apparent location in the sky.
In extreme cases, the lensing bends light along multiple paths with different arrival times at the observer, causing a background object to appear at several different points in the night sky. A beautiful example of this is an Einstein ring, where a single object appears multiple times, forming a ring-like arrangement.
Because dark matter interacts only via gravity — it doesn’t even interact electromagnetically, which is why it can’t be seen — gravitational lensing is currently the best way to infer its presence and map the location of dark matter clumps in galaxies.
Returning to the ‘rubber sheet’ analogy from above, as you can imagine, a cannonball will make a more extreme ‘dent’ in the sheet than a bowling ball, which in turn makes a bigger dent than a golf ball. Likewise, the larger the cluster of dark matter — the greater the mass — the more extreme the curvature of space and therefore, light.
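The link between mass and bending can be made concrete with general relativity's weak-field deflection formula, α = 4GM/(c²b), for light passing a mass M at impact parameter b; applied to light grazing the Sun, it reproduces the classic 1.75 arcseconds measured during the 1919 eclipse:

```python
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8             # speed of light, m/s
M_SUN = 1.989e30        # kg
R_SUN = 6.957e8         # m
RAD_TO_ARCSEC = 206265.0

def deflection_arcsec(mass_kg, impact_m):
    """Weak-field light deflection: alpha = 4GM / (c^2 b), in arcseconds."""
    return 4 * G * mass_kg / (c**2 * impact_m) * RAD_TO_ARCSEC

alpha = deflection_arcsec(M_SUN, R_SUN)
print(f"Deflection of starlight grazing the Sun: {alpha:.2f} arcsec")  # ~1.75
```

Since the deflection scales linearly with mass, the enormous masses of galaxy clusters — dark matter included — produce the far more dramatic arcs the study relies on.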
But now imagine what would happen if the bowling ball on the rubber sheet was surrounded by marbles. Though their individual distortions may be small, their cumulative effect could be considerable. The team believes this may be what is happening with smaller clusters of dark matter. These small scale clumps of dark matter enhance the overall distortion. In a way, this can be seen as a large lens with smaller lenses embedded within it.
Cooking Up A High-Fidelity Dark Matter Map
The team of astronomers was able to produce a high-fidelity dark matter map by using images taken by Hubble’s Wide Field Camera 3 and Advanced Camera for Surveys, combined with spectral data collected by the European Southern Observatory’s (ESO) VLT. Using this map, and focusing on three key clusters — MACS J1206.2–0847, MACS J0416.1–2403, and Abell S1063 — the researchers tracked the lensing distortions, and from there traced out the amount of dark matter and how it is distributed.
“The data from Hubble and the VLT provided excellent synergy,” says team member Piero Rosati, Università Degli Studi di Ferrara in Italy. “We were able to associate the galaxies with each cluster and estimate their distances.”
This led the team to the revelation that in addition to the dramatic arcs and elongated features of distant galaxies produced by each cluster’s gravitational lensing, the Hubble images also show something altogether unexpected–a number of smaller-scale arcs and distorted images nested near each cluster’s core, where the most massive galaxies reside.
The team thinks that these nested lenses are created by dense concentrations of matter at the center of individual cluster galaxies. They used follow-up spectroscopic observations to measure the velocity of the stars within these clusters and, through a calculation method known as the virial theorem, confirmed the masses of these clusters and, in turn, the amount of dark matter they contain.
This fusion of observations from these different sources allowed the team to identify dozens of background lensed galaxies that were imaged multiple times. The researchers then took this high-fidelity dark matter map and compared it to samples of simulated galaxy clusters with similar masses, located at roughly the same distances.
These simulated galaxy clusters did not show the same dark matter concentrations — at least not on the small scales associated with individual cluster galaxies.
The discovery of this disparity should help astronomers design better computer simulation models and thus develop a better understanding of how dark matter clusters. This improved understanding may ultimately lead to the discovery of what this abundant and dominant form of matter actually is.
Original research: Meneghetti, M., Davoli, G., Bergamini, P., et al., ‘An excess of small-scale gravitational lenses observed in galaxy clusters,’ Science.
Gravitational waves have been detected from what appears to be the largest black hole merger ever observed. The powerful and previously unobserved hierarchical merger resulted in an intermediate-mass black hole, an object never before detected.
A massive burst of gravitational waves, carrying an energy equivalent to the mass of eight Suns, has been detected by the LIGO laser interferometer. Researchers at LIGO and its sister project Virgo believe that the waves originate from a merger between two black holes. But this isn’t your average black hole merger (if there is such a thing). The merger — identified as gravitational wave event GW190521 — is not only the largest ever detected in gravitational waves, but also the first recorded example of what astrophysicists term a ‘hierarchical merger’: one occurring between two black holes of different sizes, one of which was born from a previous merger.
“This doesn’t look much like a chirp, which is what we typically detect,” says Virgo member Nelson Christensen, a researcher at the French National Centre for Scientific Research (CNRS), comparing the signal to LIGO’s first detection of gravitational waves in 2015. “This is more like something that goes ‘bang,’ and it’s the most massive signal LIGO and Virgo have seen.”
Even more excitingly, it seems that the black hole birthed in the event has a mass between 100 and 1,000 times that of the Sun, putting it in the mass range of an intermediate-mass black hole (IMBH), something researchers have theorised about for decades but, until now, have failed to detect.
The gravitational wave signal, spotted by LIGO on 21st May 2019, appears to the untrained eye as little more than four short squiggles lasting around one-tenth of a second, but for Alessandra Buonanno, Principal Investigator of the LIGO Scientific Collaboration, whose group focuses on the development of highly accurate waveform models, it holds a wealth of information. “It’s amazing, but from about four gravitational-wave cycles, we could extract unique information about the astrophysical source,” she tells ZME Science.
“The waves are fingerprints of the source that has produced them.”
Alessandra Buonanno, Principal Investigator of the LIGO Scientific Collaboration.
As well as containing vital information about black holes and a staggering merger event, the signal, which originated 17 billion light-years from Earth at a time when the Universe was half its current age, is also one of the most distant gravitational wave sources ever observed. The incredible distance the signal has travelled may initially seem at odds with the fact that the Universe is only 13.8 billion years old, but the disparity arises from the fact that our Universe is not static but expanding.
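The arithmetic behind that apparent paradox can be checked with a short numerical integration. In a flat ΛCDM cosmology — the parameter values below are assumed round numbers, not the collaboration’s fitted cosmology — light emitted at redshift z ≈ 0.8 can correspond to a quoted (luminosity) distance of roughly 17 billion light-years, because space stretches while the light is in transit:

```python
# A short numerical check of how a source can lie ~17 billion light-years
# away in a 13.8-billion-year-old Universe. Flat LambdaCDM sketch with
# assumed round parameters (H0, Omega_m), not the collaboration's values.
import math

H0 = 70.0                # Hubble constant, km/s/Mpc (assumed)
OMEGA_M = 0.3            # matter density parameter (assumed)
OMEGA_L = 1.0 - OMEGA_M  # dark-energy density, flat Universe
C = 299792.458           # speed of light, km/s
MPC_TO_GLY = 3.2616e-3   # megaparsecs -> billions of light-years

def E(z):
    """Dimensionless expansion rate H(z)/H0."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def comoving_distance_mpc(z, steps=10_000):
    """Midpoint-rule integral D_C = (C/H0) * int_0^z dz'/E(z')."""
    dz = z / steps
    return (C / H0) * dz * sum(1.0 / E((i + 0.5) * dz) for i in range(steps))

z = 0.82                 # roughly GW190521's reported redshift
d_c = comoving_distance_mpc(z)
d_l = (1.0 + z) * d_c    # luminosity distance, the figure usually quoted
print(f"comoving:   {d_c * MPC_TO_GLY:.1f} billion light-years")
print(f"luminosity: {d_l * MPC_TO_GLY:.1f} billion light-years")
```

With these assumed parameters the luminosity distance lands near 17 billion light-years, matching the scale of the figure quoted for GW190521.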
Details of the international team’s findings are published today in a series of papers in journals including Physical Review Letters and The Astrophysical Journal Letters.
Missing Intermediate-Mass Black Holes
Thus far, the black holes discovered by astronomers have either been those with masses in line with those of large stars — so-called stellar-mass black holes — or supermassive black holes, with masses far exceeding this. Black holes between these masses have remained frustratingly hidden. Until now.
“The LIGO and Virgo collaborations detected a gravitational wave corresponding to a very interesting black hole merger. This was named GW190521 and corresponds to two large black holes during the final orbit and merger,” Pedro Marronetti, program director for gravitational physics at the National Science Foundation (NSF) responsible for the oversight of LIGO, tells ZME Science.
“What makes GW190521 extraordinary in comparison to other gravitational wave events is the mass of the black holes involved: the product of the merger is a 142-solar-mass black hole, and the first object of its kind with a mass above 100 solar masses but below a million solar masses to be discovered.”
Pedro Marronetti, program director for gravitational physics, the National Science Foundation (NSF)
Thus, the resultant black hole of 142 solar masses exists in that crucial, thus far undetected, mass range indicating an intermediate-mass black hole (IMBH).
“These black holes, heavier than 100 solar masses but much lighter than the supermassive black holes at the centre of galaxies — which can be millions and billions of solar masses — have eluded detection until now,” Marronetti says. “Additionally, the heavier of the original black holes with 85 solar masses also presents an enigma.”
Pair Instability and Hierarchical Black Hole Mergers
The enigma that Marronetti refers to is the fact that the heavier of the two black holes that entered the merger is of a size that suggests it too must have been created by a merger event between two even smaller black holes. “The most common channel of formation of black holes involves heavy stars that end their lives in supernova explosions,” the NSF program director points out. “However, this formation channel prevents the creation of black holes heavier than 65 solar masses but lighter than 130 solar masses due to a phenomenon called ‘pair-instability’.”
As nuclear fusion ceases, there is no longer enough outward radiation pressure to prevent gravitational collapse. “The star suddenly starts producing photons that are energetic enough to create electron-positron pairs,” Marronetti explains. “These photons, in turn, create an outward pressure that is not strong enough to stop the star from collapsing violently due to its self-gravitational pull.”
This results in a difference in gravitational pressure between the star’s core and its outer layers. As a massive shock travels through these ‘puffed out’ outer layers, they are blown away in a massive explosion. With smaller stars, this leaves behind an exposed core that becomes a stellar remnant such as a white dwarf, neutron star, or black hole. But if the star falls in a range above 130 solar masses but below 200 solar masses, the result is more disastrous.
“The resulting supernova explosion completely obliterates the star, leaving nothing behind — no black hole or neutron star is produced,” Marronetti says. “It will take stars heavier than 200 solar masses to collapse into black holes fast enough to avoid this complete disintegration.”
As Marronetti points out, this means that the 85-solar-mass black hole could only have been formed by the merger of two smaller black holes, as at these masses collapsing stars can’t form black holes. “This is a quite unusual event that can only occur in regions of dense black hole population such as globular clusters,” the researcher adds. “GW190521 is the first detection that is likely to be due to this ‘hierarchical merger’ of black holes.”
Marronetti continues by explaining that a hierarchical merger consists of one or more black holes that were produced by a previous black hole merger. This hierarchy of mergers allows for the formation of progressively heavier and heavier black holes from an original population of small ones.
“We don’t really know how common these hierarchical mergers are since this is the first time we have direct evidence of one. We can only say that they are not very common.”
Pedro Marronetti, program director for gravitational physics, the National Science Foundation (NSF)
LIGO Delivering Discoveries and Surprises
The team uncovered the unusual nature of this particular merger by assessing the gravitational wave signal with powerful, state-of-the-art computational models. Not only did this reveal that GW190521 originates from the most massive black hole merger ever observed, and that this was no ordinary merger but a hierarchical one, it also yielded crucial information about the black holes involved in the event.
“The signal carries information about the masses and spins of the original black holes as well as their final product,” Marronetti adds, alluding to the fact that the LIGO-Virgo team were able to measure those spins and determine that as the black holes circled together, they were also spinning around their own axes. The angles of these axes appeared to be out of alignment with the axis of their orbit. This misaligned spin caused the black holes to ‘wobble’ as they moved together.
“Our waveform models were used to detect GW190521 and also to interpret its nature, extracting the properties of the source, such as masses, spins, sky location, and distance from Earth. For the first time, the waveform models included new physical effects, notably the precession of the spins of the black holes and higher harmonics,” Buonanno says. “What we mean when we say higher harmonics is like the difference in sound between a musical duet with musicians playing the same instrument versus different instruments.
“The more substructure and complexity the binary has — for example, black holes with different masses or spins—the richer is the spectrum of the radiation emitted.”
Alessandra Buonanno, Principal Investigator of the LIGO Scientific Collaboration.
Unanswered Questions and Future Investigations
Even with the staggering amount of information the team has been able to collect about the merger that gave rise to the signal GW190521, there are still some unanswered questions and details that must be confirmed.
The LIGO-Virgo detectors use two very distinct methods to search the Universe for gravitational waves: an algorithm that picks out the specific wave patterns most commonly produced by compact binary mergers, and more general ‘burst’ searches. The latter look for any signal ‘out of the ordinary,’ and it is the mechanism via which the researchers found GW190521.
Marronetti expresses some surprise that the methods used by the team were able to unlock these secrets, believing that this result demonstrates the versatility of LIGO. “My main surprise was that this event was detected using a search algorithm that was not specifically created to find merger signals,” says the NSF director. “This is the first detection of its kind and shows the capability of LIGO to detect phenomena beyond compact mergers.”
“This is of tremendous importance since it showcases the instrument’s ability to detect signals from completely unforeseen astrophysical events. LIGO shows that it can also observe the unexpected.”
Pedro Marronetti, program director for gravitational physics, the National Science Foundation (NSF)
This leaves open the small chance that the signal was created by something other than a hierarchical merger. Perhaps something entirely new. The authors hint at the tantalising prospect of some new phenomena, hitherto unknown, in their paper, but Marronetti is cautious: “By far, the most likely cause is the merger of two black holes, as explained above. However, this is not as certain as with past LIGO/Virgo detections.
“There is still the small chance that the signal was caused by a different phenomenon such as a supernova explosion or an event during the Big Bang. These scenarios are possible but highly unlikely.”
Confirming the nature of the event that gave rise to the GW190521 signal is something that the LIGO team will be focusing on in the future as the interferometer also searches for similar events via the gravitational waves they emit.
“With GW190521, we have seen the tip of the iceberg of a new population of black holes,” Buonanno says, adding that LIGO’s next operating run (O4) will explore a volume of space 3 times larger than the current run (O3). “Having access to a larger number of events, which were too weak to be observed during O3, will allow us to shed light on the formation scenario of binary black holes like GW190521.”
Many mysteries surround conditions in the early Universe, chief amongst these is the question of how and when galaxies began to form. At some point in the Universe’s history, gravitational instability brought together increasingly larger clumps of matter, beginning with atoms, dust, and gas, then stars and planets, clusters and then massive galaxies.
Whilst early protogalaxies may have formed as early as a few hundred million years after the Big Bang, the first well-formed galaxies with features such as spiral arms, rings and bars are thought to have only formed around 6 billion years into the Universe’s 13.8 billion year lifetime.
Astronomy has, in general, confirmed this, with closer, and thus later, galaxies displaying characteristics such as rings, bars, and spiral arms, like our own home, the Milky Way; features lacking in more distant, earlier galaxies.
New discoveries, however, are challenging this accepted view, with three recent pieces of research, in particular, suggesting that well-ordered and massive galaxies existed much earlier in the Universe than previously believed. This either means that the formation of galaxies began much earlier than expected or progressed much faster than many models suggest.
As a consequence scientists may have to refine models of galaxy formation to account for much earlier or much more rapid evolution.
The key to solving the mystery of how soon after the Big Bang galaxies formed definitive shapes and features such as thin discs and spiral arms begins with examining the theories that describe this formation: one family of theories implies these processes occur over a prolonged period of time, while another suggests formation can proceed much more quickly.
Bottom’s Up! Did Formation Start Earlier or Proceed Quicker?
The simplest model of galaxy formation suggests that at a time when the Universe was mostly hydrogen and helium, such structures emerged from dense clouds of gas that collapsed under their own gravity. This so-called ‘monolithic model’ was the first suggested formation process for galaxies and the stars that comprise them, and is referred to as a ‘bottom-up’ formation model.
There are also ‘top-down’ formation models that suggest galaxies may have emerged from larger conglomerates of matter that collapsed in a similar fashion but then went on to break apart, but these currently aren’t favoured by most cosmologists.
Under the influence of gravity, gas and dust collapse into stars, which are drawn together into clusters and, eventually, galaxies. The question is, how do galaxies grow and develop their characteristics?
One idea suggests that the seed of a galaxy continues to accumulate gas and dust, slowly growing to a massive size. When it reaches gigantic proportions, this galaxy is able to gobble up clusters of stars and even smaller galaxies. This process should be fairly slow, however: glacially so at first, in fact, accelerating once smaller galaxies begin to be absorbed.
If this is the predominant formation mechanism for galaxies, then what we shouldn’t see in the early universe, before about 6 billion years after the Big Bang, are massive disc-like galaxies or spiral-armed galaxies like the Milky Way. Further out in space, and thus further back in time, irregular galaxies and amorphous blobs should be heavily favoured. Unless, that is, galactic formation got a serious head-start.
But, there is another theory of galactic evolution. What if galaxy growth progresses predominantly through merger processes?
Rather than a galaxy waiting until it grows massive in size to start accumulating its smaller counterparts, mergers between similar-sized galaxies could be the driving factor in creating larger galaxies. This would mean that the process of galaxy formation could proceed much more quickly than previously believed.
In either case, what we should see is massive galaxies well-formed with characteristics like disks, bars, and spiral arms way further out into space, and thus further back in time.
It just so happens that is exactly what astronomers are starting to find.
Should’ve Put a Ring on it!
One such line of evidence for a more rapid form of galactic formation, or a much earlier start, comes in the distinctive doughnut-like shape of a collisional ring galaxy discovered 11 billion light-years away. This means this “cosmic ring of fire” — similar in mass to the Milky Way and notable for the massive ‘hole’ in its centre, which is three million times the distance between the Earth and the Sun — existed when the Universe was just 2.7 billion years old. Far earlier than predicted.
Dr Tiantian Yuan, of Australia’s ARC Centre of Excellence for All-Sky Astrophysics in 3 Dimensions (ASTRO 3D) was part of a group that successfully gave the ring galaxy — designated R5519 — an age.
“It is a very curious object, one that we have never seen before, definitely not in the early Universe,” explains Yuan, a specialist in studying galactic features like spiral arms. “R5519 looks like a corona galaxy, but it isn’t.”
So, even if R5519 is striking, how does this imply that models of galaxy evolution could be inaccurate? The answer lies in how collisional ring galaxies such as this are created.
Yuan explains that the ‘hole’ at the centre of R5519 was created when a thin disk-like galaxy was ‘shot’ by another galaxy hitting head-on, just like a bullet hitting a thin paper target at a shooting range.
“When a galaxy hits the target galaxy — a thin stellar disk — like a bullet, head-on, it causes a pulse in the disk of the victim galaxy,” Yuan says. “The pulse then induces radially propagating density waves through the target galaxy that form the ring.”
Yuan explains that at one time astronomers had expected to find more collisional ring galaxies in the young universe, simply because there were more galactic collisions progressing at that time. “We find that is not the case,” she continues. “The young universe might have more collisions and bullets, but it lacks thin stellar disks to act as targets… or so we thought.”
Here’s where the problem lies: thin stellar disks that serve as targets in this cosmic firing range aren’t supposed to exist so early in the Universe’s history, according to currently favoured cosmological models.
“Our discovery implies that thin stellar disks similar to our Milky Way’s are already developed for some galaxies at a quarter of the age of the universe.”
Yuan and her team’s findings show galactic structures like thin disks and rings could form 3 billion years after the Big Bang. The researcher points to another piece of research that supports the idea of structured galaxies in the early Universe.
“The first step in disk formation is to form a disk at all — an object that is dominated by rotation,” Yuan says. “This is why the recent discovery of the ‘Wolfe disk’ is truly amazing — it pushes the earliest formation time of a large gas disk to much earlier than we previously thought.”
Who’s Afraid of the Big Bad Wolfe?
The discovery Dr Tiantian Yuan refers to is the identification of a massive rotating disk galaxy when the Universe was just 1.5 billion years old. The galaxy — officially named DLA0817g — is nicknamed the ‘Wolfe Disk’ in tribute to the late astronomer Arthur M. Wolfe, who first speculated about such objects in the 1990s.
The fact that the Wolfe Disk —which is spinning at tremendous speeds of around 170 miles per second — exists when the Universe was just 10% of its current age, strongly implies rapid galactic growth or the early formation of massive galaxies.
“The ‘take-home’ message from the discovery of a massive, rapidly rotating disk galaxy that resembles our Milky Way but formed only 1.5 billion years after the Big Bang, is that galaxy formation can proceed rapidly enough to generate massive, gas-rich galaxies at early times,” says J. Xavier Prochaska, professor of astronomy and astrophysics at the University of California, Santa Cruz, and part of the team that discovered the Wolfe Disk.
The team behind the Wolfe Disk discovery posit the idea that its existence and the fact that it is both massive and well-formed indicate that the slow accretion of gas and dust may not be the dominant formation mechanism for galaxies. Something much more rapid could be at play.
“Most galaxies that we find early in the universe look like train wrecks because they underwent consistent and often ‘violent’ merging,” says Marcel Neeleman of the Max Planck Institute for Astronomy in Heidelberg, Germany, who led the research team. “These hot mergers make it difficult to form well-ordered, cold rotating disks as we observe in our present universe.”
If the Wolfe Disk grew as the result of the accumulation of cold gas and dust, Prochaska explains, this leaves questions unanswered about its stability: “The key challenge is to rapidly assemble such a large gas mass while maintaining a relatively quiescent, thin and rotating disk.”
Of course, sometimes it can be the absence of something that provides evidence that a theory, or family of theories is inaccurate, as the following research exemplifies.
Further away and further back in time: Some of our Stars are Missing
The Hubble Space Telescope (HST) allows astronomers to stare back in time to when the Universe was just 500 million years old. This lets researchers investigate the nature of the first galaxies, and could deliver more contradictions to current cosmological models, just as the Wolfe Disk and R5519 have.
Results recently delivered by the HST and examined by a team of European astronomers confirm the absence of primitive stars when the Universe was just 500 million years old.
These early stars — named Population III stars — are thought to be composed of just hydrogen and helium, with tiny amounts of lithium and beryllium, reflecting the abundances of these elements in the young Universe.
A team of astronomers led by Rachana Bhatawdekar of the European Space Agency confirmed the absence of this first generation of stars by searching the Universe as it existed between 500 million years and 1 billion years into its history. Their observations were published in a 2019 paper, with further research due to be published in Monthly Notices of the Royal Astronomical Society and discussed at a press conference during the 236th meeting of the American Astronomical Society.
“Population III stars are extremely hot and massive and so they are much bluer in colour than normal stars,” Bhatawdekar says. “We, therefore, looked at the ultraviolet colours of our galaxies to see exactly how blue they looked.”
The team found that even though the galaxies they observed were blue, they weren’t blue enough to host stars with very low metal content (‘metals’ being what astronomers call any element heavier than hydrogen and helium, such as oxygen, nitrogen, carbon, and iron).
“What this tells us is that even though we are looking at a Universe that is just 500 million years old, galaxies have already been enriched by a significant amount of metals,” Bhatawdekar says. “This essentially means that stars and galaxies must have formed even earlier than this very early cosmic time.”
Thus the team’s observations imply that stars had already begun to fade and die by this point in time, shedding heavier elements back into the Universe. These elements would go on to form the building blocks of later generations of stars.
This piece of the puzzle would seem to suggest that the presence of massive galaxies is not a factor that arises as the result of rapid growth, but that the growth processes began earlier.
“We found no evidence of these first-generation Population III stars in this cosmic time interval,” explains Bhatawdekar. “These results have profound astrophysical consequences as they show that galaxies must have formed much earlier than we thought.”
Finding More Evidence of Early Galaxy Formation
For Bhatawdekar, further investigation of conditions in the early Universe will only really open up with the launch of the James Webb Space Telescope.
“What we found is that there is no evidence of the existence of Population III stars in this cosmic time, but there are many low-mass, faint galaxies in the early Universe,” she says. “This suggests that the first stars and first galaxies must have formed even earlier than this incredible instrument, Hubble, can probe.
“The James Webb Space Telescope, which is scheduled to be launched next year in 2021, will look even further back in time as far as when the Universe was just 200 million years old.”
Even before the launch of the James Webb Space Telescope, and as if to dismiss the idea that these results could be a fluke rather than indicative of a wider shift towards earlier massive galaxies, Tiantian Yuan describes further findings yet to be published.
“I have actually found more collisional ring galaxies in the early universe!” exclaims Yuan. “There is a cool one that is gravitationally lensed, giving us a sharper view of the ring.
“I can tell you that this new ring is 1 billion years older than R5519, and it looks a lot different from R5519 and more like rings in our nearby Universe.”
As we refine our ideas of galaxy evolution, we are likely to find that when presented with two conflicting theories, the truth lies somewhere in between. Thus, as we observe galaxy formation currently in progress, mergers between galaxies, and complex structures throughout the Universe’s history, we may find that galactic evolution progresses both slowly and quickly.
Hopefully, this mix of models will also deliver an accurate recipe for how spiral arms, rings, and bars arise from thin disks: something currently lacking.
“What these discoveries mean is that we are entering a new era that we can ask the question of how different structures of galaxies first formed,” Yuan explains. “Galaxies do not form in one go; some parts were assembled first and others evolved later.
“It is time for the models to evolve to the next level of precision and accuracy. Like a jigsaw puzzle, the more pieces we reveal in observations, the more challenging it is to get the theoretical models correct, and the closer we are to grasping the mastery of nature.”
Sources and further reading
Yuan, T., Elagali, A., Labbé, I., Kacprzak, G. G., et al., ‘A giant galaxy in the young Universe with a massive ring,’ Nature Astronomy.
Bhatawdekar, R., Conselice, C. J., Margalef-Bentabol, B., Duncan, K., ‘Evolution of the galaxy stellar mass functions and UV luminosity functions at z = 6−9 in the Hubble Frontier Fields,’ Monthly Notices of the Royal Astronomical Society, Volume 486, Issue 3, July 2019, Pages 3805–3830, https://doi.org/10.1093/mnras/stz866
The cosmological constant is a problem. Actually, that is an understatement: the cosmological constant was a problem, is a problem, and may always be a problem. To understand why this little constant has caused so much stress for cosmologists, it is necessary to divide its history into two very distinct eras, and perhaps then, consider its future. Thus this introduction is to the cosmological constant what Marley’s ghost was to Scrooge: a warning of a trip through its history and its future, and perhaps, the offer of a hint of redemption.
Current theoretical estimates of the cosmological constant differ from the experimentally measured value by a shockingly huge margin, as much as 120 orders of magnitude, such that it has often been referred to as ‘the worst prediction in the history of science.’ And as these values are provided by the still-unreconciled fields of quantum field theory and general relativity, respectively, finding an agreeable value for the constant, or even the reason why the values diverge so greatly, could be the key to finding a quantum theory of gravity.
Before embarking on that trip, let’s first meet our Scrooge — the central character of this potential redemption arc. The cosmological constant now represents something different than it did when it was first introduced.
The easiest way to understand what it means now is by considering dark energy — the hypothetical force driving the Universe apart — to be the physical manifestation of the cosmological constant. As such, solving the mystery of the cosmological constant could again be the key to discovering exactly what dark energy is and, in turn, what the final fate of the Universe will look like.
In many ways, the cosmological constant can be considered as a ‘counterpoint’ to gravity, a value for a force that repels as gravity attracts. This is something that links its present with its past.
Einstein’s biggest blunder?
It’s sometimes mind-boggling to consider that the biggest problem in modern physics is a hangover from 1917. With all the advancements we have made in terms of understanding our Universe, how can this one little element provide such a challenge?
The key to understanding why the cosmological constant has been such a thorn in the side of physics is understanding how it confounded the greatest physicist who ever lived — Albert Einstein.
The cosmological constant, often represented by the Greek letter lambda (Λ), was added to Einstein’s field equations to balance the force of gravity. The reasoning goes: if gravity, an attractive force, is the only force acting on the Universe at large scales, how can the Universe not be shrinking? How did it form at all, if all matter is naturally drawn together?
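In modern notation, the constant appears in Einstein’s field equations as an extra term on the geometric (left-hand) side:

```latex
% Einstein's field equations with the cosmological constant term:
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu}
```

With Λ > 0, the new term acts as a repulsive contribution at large scales — exactly the counterbalance to gravitational attraction that Einstein was after.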
Einstein felt that his field equations needed a repulsive factor to counter-balance the attractive force of gravity, and if this sounds like an ad-hoc solution — a fudge factor — that is because it was. Not only was the cosmological constant something of an arbitrary ‘fix’, it also made the field equations unstable. A slight variation should, according to these revised equations, cause the Universe to fall out of its static state. For example, if separation increases, gravitational attraction decreases and repulsion increases — resulting in further deviation from the initial state.
Einstein was influenced to introduce the cosmological constant by the fact that the scientific consensus in 1917 was that the Universe was static — neither expanding nor contracting — and Einstein agreed with the consensus. Unfortunately, his field equations disagreed.
The field equations of general relativity did not allow for a static universe, predicting that the Universe should be either contracting or expanding. Thus, the first role of the cosmological constant was to provide a negative pressure to counterbalance gravity. The argument underpinning this addition was that even empty space-time has a gravitational influence, a so-called vacuum energy.
For twelve years, the cosmological constant remained in the field equations, fulfilling this role. But, trouble was brewing on the horizon, and by ‘the horizon,’ we mean the most distant horizon imaginable — the very edge of the Universe.
Our understanding of the cosmos was about to change forever…
Hubble trouble — an expanding problem
It may be slightly difficult to believe today, but just 90 years ago we understood far less about the cosmos around us. The idea of billions of galaxies outside of our Milky Way was almost undreamt of, as was the idea that these galaxies could be receding from each other as space expands. Likewise, the idea that the Universe could have inflated from an infinitely small point — the concept of the ‘Big Bang’ — was pure fantasy.
In 1929, Edwin Hubble’s seminal paper “A relation between distance and radial velocity among extra-galactic nebulae” would change this thinking forever. Hubble showed that the Universe was not infinite in either its reach or its age. In this relatively short paper, the astronomer presented the first observational evidence that distant galaxies are moving away from us, and further to this, the more distant they are, the more rapidly they recede.
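The relation Hubble presented can be sketched numerically. The Hubble constant used below is the modern value of roughly 70 km/s/Mpc, assumed purely for illustration; Hubble's own 1929 estimate was far higher, around 500 km/s/Mpc.

```python
# Hubble's law: recession velocity is proportional to distance, v = H0 * d.
# H0 is assumed here at roughly 70 km/s/Mpc (the modern value, not Hubble's
# original 1929 estimate, which was closer to 500 km/s/Mpc).

H0 = 70.0  # km/s per megaparsec (assumed value)

def recession_velocity(distance_mpc):
    """Recession velocity in km/s for a galaxy at the given distance in Mpc."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at about 7,000 km/s;
# one ten times farther away recedes ten times faster.
print(recession_velocity(100))  # 7000.0
```

The linearity is the whole point of the 1929 result: double the distance and the recession velocity doubles, exactly what a uniformly expanding space predicts.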
What Hubble was unaware of when he published his 1929 paper was that other physicists had already provided solutions to Einstein’s field equations that his results confirmed. Both Alexander Friedmann, a Russian cosmologist, and Georges Lemaître, a Catholic priest, mathematician, astronomer, and professor of physics, had provided solutions to the field equations that showed an expanding Universe. Even with this theoretical basis, Einstein wanted to see the evidence for himself, not being quite ready to scrap his cosmological constant and accept a non-static universe.
On January 29th, 1931, Edwin Hubble met Einstein at Mount Wilson, taking him to see the famous 100-inch telescope where the astronomer had made the observations that doomed the first iteration of the cosmological constant. Shortly after, Einstein published his first paper with revised field equations omitting lambda. He deemed the constant ‘redundant,’ as relativity could explain the expansion of the Universe without it.
George Gamow, physicist and cosmologist, remarked in a 1956 Scientific American article and then later in his autobiography, that Einstein had confided in him that the introduction of the cosmological constant was his ‘biggest blunder.’ The remark has now passed into the lore surrounding the great scientist, and even though we can’t be certain that he actually said it, he very likely believed it.
Yet, despite Einstein’s dismissal of the cosmological constant, many physicists were not quite ready to give up on this element of general relativity just yet. They argued that without a cosmological constant term, models of the evolution of the cosmos would predict a universe with an age younger than the oldest stars within it.
And though Einstein was unmoved by this argument, in 1998, 43 years after his death, the cosmological constant and the symbol that represents it would be rescued from obscurity and drafted to explain a new, but related conundrum.
The modern cosmological constant and dark energy
There is something of a pleasing irony in the fact that it was the discovery of the Universe’s expansion that consigned the cosmological constant to the dustbin, and the equally important discovery that this same expansion is accelerating that saved it.
During the cosmological constant’s ‘downtime,’ our understanding of the origins of the Universe underwent a revolution. Cosmologists were able to deduce that regions of the Universe now separated by unimaginable distances were once in close proximity. The idea that the Universe expanded in a period of rapid inflation from an infinitely small point to the vast entity we see today was accepted and termed the ‘Big Bang.’
Yet the commonly held idea that this period of rapid inflation had given way to a steadier rate was challenged in 1998.
In the mid-nineties, cosmologists had used solutions to the field equations of general relativity to assess the geometry of the Universe, determining that it is flat. This left some problems to be addressed. In a flat universe, the matter/energy density should match a value known as the critical density. Yet all the matter and energy that we can observe accounts for only a third of this value. Further to this missing energy problem, the flat universe suffers from a cosmic age problem: why do the oldest stars appear older than the predicted age of a flat universe?
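The critical density that a flat universe must match follows directly from the Friedmann equations: rho_c = 3H₀²/(8πG). A minimal sketch, assuming a Hubble constant of 70 km/s/Mpc:

```python
import math

# Critical density of a flat universe: rho_c = 3 * H0^2 / (8 * pi * G).
# H0 = 70 km/s/Mpc is an assumed round value; G is Newton's constant.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22      # metres per megaparsec
H0 = 70e3 / MPC_IN_M     # Hubble constant converted from km/s/Mpc to s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"{rho_crit:.2e} kg/m^3")  # ~9.2e-27: a few hydrogen atoms per cubic metre
```

This works out to roughly 9 × 10⁻²⁷ kg/m³, the equivalent of about five hydrogen atoms per cubic metre; observed matter and energy fall well short of it, hence the "missing energy" problem.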
One solution to these problems could arise if the Universe is filled with a fluid of negative pressure, a ‘dark energy’ that accounts for the energy deficit and provides accelerated expansion that would neatly explain the Universe taking a longer time to reach its current state. To measure this changing rate of expansion researchers would need a tool that could measure extraordinarily large cosmic distances — as large as 5 billion light-years, in fact.
In 1998, astronomers found evidence of such a theoretical fluid from observations of the redshifts of distant yet incredibly bright Type Ia supernovae — often referred to as ‘standard candles’ due to their reliability in measuring cosmic distances. And of course, scientists would need a symbol to represent dark energy within their equations. As cosmology already had such a representation of negative pressure, why not simply resurrect it and place it back in the equations of general relativity?
But, they should not have been surprised, given its history, that re-employing the cosmological constant would bring new problems.
Still crazy after all these years…
The new issues with the cosmological constant very much reflect the major hurdles within physics as it currently stands. Whilst revolutions were being made on incredibly large scales thanks to cosmology, our understanding of the incredibly small was burgeoning thanks to the success of quantum physics.
The problem arises from the fact that quantum physics — and in particular quantum field theory — and general relativity cannot be reconciled: there is no theory of quantum gravity.
If we are creative and slightly whimsical, we could perhaps give this struggle to unify these disciplines a value — 10¹²¹ — the magnitude of the discrepancy between quantum field theory’s theoretical prediction of the cosmological constant and the value observed by cosmologists. This massive disparity — often described as ‘the worst theoretical prediction in the history of science’ — arises from the fact that quantum field theory predicts that virtual particles are popping in and out of existence at all times — an idea that may sound ridiculous but has been experimentally verified — even in the vacuum of space. These virtual particles should have a measurable effect on the vacuum energy driving the expansion of the Universe, but no such effect is measured by cosmologists observing the redshifts of Type Ia supernovae.
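The scale of the mismatch can be sketched with rough textbook figures. The two densities below are assumed, illustrative values (a naive Planck-scale cutoff for the prediction, and the dark-energy density inferred from observations); depending on the cutoff chosen, the discrepancy is quoted as anywhere from 10¹²⁰ to 10¹²².

```python
import math

# Order-of-magnitude sketch of the vacuum-energy discrepancy.
# Both figures are rough, assumed values for illustration only.

predicted = 1e113  # J/m^3: naive quantum-field-theory estimate (Planck cutoff)
observed = 6e-10   # J/m^3: dark-energy density inferred from observations

orders_of_magnitude = math.log10(predicted / observed)
print(round(orders_of_magnitude))  # ~122 orders of magnitude apart
```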
There are, of course, proposed solutions. Dark energy could be associated with some as-yet-undiscovered field which fills space in a way similar to the Higgs field, from which the recently discovered Higgs boson emerges. Or perhaps other constants that occupy an unchallenged place in our equations of gravity aren’t constants at all but vary with time, as University of Geneva cosmologist Lucas Lombriser suggests. More extreme solutions lie in the suggestion that Einstein’s theory of gravity must be modified to account for dark energy — although this family of theories, such as MOND, is steadily moving out of favour within the physics community.
Whatever the solution to this problem is, it has a remarkable impact on the future of the Universe. Determining the true value of the cosmological constant and the strength of dark energy driving this accelerating expansion will ultimately tell us if the Universe’s final fate is to rip apart or violently crush together.
Whether by ‘Big Rip’ or ‘Big Crunch’ the Universe’s end will be determined by the value of the cosmological constant. A value that still continues to evade us and confuse us as much as it did Einstein.
Ta-Pei Cheng, Relativity, Gravitation and Cosmology, Oxford University Press (2010).
Robert Lambourne, Stephen Serjeant, Mark Jones, An Introduction to Galaxies and Cosmology, Cambridge University Press (2015).
Frank Close, The New Cosmic Onion, Taylor & Francis (2007).
Cormac O’Raifeartaigh, Investigating the legend of Einstein’s “biggest blunder”, Physics Today (2018).
Matts Roos, Introduction to Cosmology, Wiley (2003).
The Nobel Physics Prize 2019 has been jointly awarded to James Peebles, Michel Mayor and Didier Queloz. Peebles received half of the prize “for theoretical discoveries in physical cosmology”, while the other half was jointly awarded to Mayor and Queloz “for the discovery of an exoplanet orbiting a solar-type star.”
It was a fitting award in the field of cosmology, which has undergone a dramatic transformation in recent decades.
“This year’s Laureates have transformed our ideas about the cosmos,” the Assembly wrote in a release accompanying the Prize’s announcement. “While James Peebles’ theoretical discoveries contributed to our understanding of how the universe evolved after the Big Bang, Michel Mayor and Didier Queloz explored our cosmic neighbourhoods on the hunt for unknown planets. Their discoveries have forever changed our conceptions of the world.”
James Peebles is widely regarded as one of the world’s leading theoretical cosmologists, having been a major figure in the field ever since the 1970s. He made numerous contributions to the Big Bang model, particularly explaining what happened in the universe in the instants after the Big Bang took place. Along with several other cosmologists, he successfully predicted the existence of the cosmic microwave background radiation. He was working in the field of physical cosmology long before it was regarded as a “serious” branch of physics, and did much to change this unwarranted perception. Peebles also contributed to the establishment of the dark matter concept, and worked on dark energy as well.
Meanwhile, Mayor and Queloz were the first to discover an exoplanet around a main-sequence star, in a solar system fairly similar to our own. In 1995, Queloz was a Ph.D. student at the University of Geneva, and Mayor was his advisor. Together, they used Doppler spectroscopy (an indirect velocity measurement using the Doppler shift) to discover 51 Pegasi b, an exoplanet which lies around 50 light-years away from Earth. 51 Pegasi b is the prototype for a class of planets called “hot Jupiters” — planets which look like Jupiter, but orbit much closer to their star and are very hot. The planet’s discovery marked a breakthrough in astronomical research, and it is still actively studied today (in 2017, traces of water were detected in its atmosphere). The discovery was announced on October 6, 1995, in the journal Nature.
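The idea behind Doppler spectroscopy can be sketched in a few lines: a star tugged by an orbiting planet wobbles along our line of sight, shifting its spectral lines by a fraction Δλ/λ ≈ v/c. The ~56 m/s wobble used below is an assumed illustrative figure for 51 Pegasi, and the 500 nm line is hypothetical.

```python
# Doppler spectroscopy sketch: a star's line-of-sight wobble shifts its
# spectral lines by dlambda/lambda ~ v/c (non-relativistic approximation).

C = 299_792_458.0  # speed of light, m/s

def radial_velocity(rest_wavelength_nm, observed_wavelength_nm):
    """Line-of-sight velocity (m/s) inferred from a spectral-line shift."""
    return C * (observed_wavelength_nm - rest_wavelength_nm) / rest_wavelength_nm

# Assume a ~56 m/s stellar wobble acting on a 500 nm line:
shift_nm = 500.0 * (56.0 / C)  # ~1e-4 nm: a minuscule shift, hence the
                               # exquisite precision the technique demands
v = radial_velocity(500.0, 500.0 + shift_nm)
print(round(v))  # 56
```

The tiny size of the shift (about a ten-thousandth of a nanometre) is why detecting 51 Pegasi b in 1995 was such a feat of instrumentation.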
Today, the field of cosmology is well established, and we have discovered thousands of exoplanets — but these three were true trailblazers for their respective fields. It’s a remarkable testament to how far we’ve come and how influential their work was.
Herein also lies one of the beauties and the curses of the Nobel Prize: because it’s often awarded decades after the discovery was made, it serves as a lifetime achievement award, but it often feels non-contemporary.
Researchers have turned to a massive supercomputer — dubbed the ‘UniverseMachine’ — to model the formation of stars and galaxies. In the process, they created a staggering 8 million ‘virtual universes’ with almost 10¹⁴ galaxies.
To say that the origins and evolution of galaxies and the stars they host have been an enigma that scientists have sought to explore for decades is the ultimate understatement.
In fact, the desire to understand how stars form and why they cluster the way they do predates science, religion and possibly civilisation itself. For as long as humans could think and reason — long before we knew what either a ‘star’ or a ‘galaxy’ was — we looked to the heavens with a desire to know their nature.
We now know more than we ever have, but the heavens and their creation still hold mysteries for us. Observing real galaxies can only provide researchers with a ‘snapshot’ of how they appear at one moment. Time is simply too vast and we exist for far too brief a spell to observe galaxies as they evolve.
Now a team of researchers led by the University of Arizona have turned to supercomputer simulations to bring us closer to an answer for these most ancient of questions.
Astronomers have used such computer simulations for many years to develop and test models of galactic creation and evolution — but it only works for one galaxy at a time — thus failing to provide a more ‘universal’ picture.
To overcome this hurdle, Peter Behroozi, an assistant professor at the UA Steward Observatory, and his team generated millions of different universes on a supercomputer. Each universe was programmed to develop with a separate set of physical theories and parameters.
As such, the team developed their own supercomputer — the UniverseMachine, as the researchers call it — to create a virtual ‘multiverse’ of over 8 million universes and at least 9.6 x 10¹³ galaxies.
The results could solve a longstanding quirk of galaxy-formation — why galaxies cease forming new stars when the raw material — hydrogen — is not yet exhausted.
The study seems to show that supermassive black holes, dark matter and supernovas are far less efficient at stemming star-formation than currently theorised.
The team’s findings — published in the journal Monthly Notices of the Royal Astronomical Society — challenge many of the current ideas science holds about galaxy formation. In particular, the results urge a rethink of how galaxies form, how they birth stars and the role of dark matter — the mysterious substance that makes up 80% of the universe’s matter content.
Behroozi, the study’s lead author, says: “On the computer, we can create many different universes and compare them to the actual one, and that lets us infer which rules lead to the one we see.”
What makes the study notable is that it is the first time each simulated universe has contained 12 million galaxies, spanning the period from 400 million years after the Big Bang to the present day. As such, the researchers have succeeded in creating self-consistent universes which closely resemble our own.
Putting the multiverse to the test — how the universe is supposed to work
To compare each universe to the actual universe, each was put through a series of tests that evaluated the appearance of the simulated galaxies they host in comparison to those in the real universe.
Common theories of how galaxies form stars involve cold gas collapsing under the effect of gravity into dense pockets that give rise to stars. As this occurs, other processes act to counteract star formation.
For example, we believe that most galaxies harbour supermassive black holes in their centres. Matter forms accretion discs around these black holes and, as it is ‘fed’ into them, radiates tremendous amounts of energy. As such, these systems act almost as a ‘cosmic blowtorch,’ heating gas and preventing it from cooling down enough to collapse into stellar nurseries.
Supernova explosions — the massive eruption of dying stars — also contribute to this process. In addition to this, dark matter provides most of the gravitational force acting on the visible matter in a galaxy — thus, pulling in cold gas from the galaxy’s surroundings and heating it up in the process.
Behroozi elaborates: “As we go back earlier and earlier in the universe, we would expect the dark matter to be denser, and therefore the gas to be getting hotter and hotter.
“This is bad for star formation, so we had thought that many galaxies in the early universe should have stopped forming stars a long time ago.”
But what the team found was the opposite.
Behroozi says: “Galaxies of a given size were more likely to form stars at a higher rate, contrary to the expectation.”
Bending the rules with bizarro universes
In order to match observations of actual galaxies, the team had to create virtual universes in which the opposite was the case — universes in which galaxies continued to birth stars for much longer.
Had the researchers created universes based on current theories of galaxy formation — universes in which the galaxies stopped forming stars early on — those galaxies would have appeared much redder than the galaxies we see in the sky.
Galaxies appear red for two major reasons. If a galaxy formed earlier in the history of the universe, cosmic expansion — the Hubble flow — means that it will be moving away from us more rapidly, significantly stretching the wavelength of the light it emits towards the red end of the electromagnetic spectrum, a process referred to as redshift.
The other reason an older galaxy may appear red is intrinsic to the galaxy itself, not an outside effect like redshift. If a galaxy has stopped forming stars, it will contain fewer blue stars, which typically die out sooner, and will therefore be left with older — redder — stars.
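The first effect, redshift, has a simple definition: z = (λ_observed − λ_emitted) / λ_emitted. A minimal sketch, using the hydrogen-alpha line as an example (the observed wavelength below is illustrative):

```python
# Redshift: z = (lambda_observed - lambda_emitted) / lambda_emitted.
# A larger z means the light has been stretched more by cosmic expansion,
# making the galaxy appear redder.

def redshift(emitted_nm, observed_nm):
    """Redshift z for a spectral line emitted and observed at the given wavelengths."""
    return (observed_nm - emitted_nm) / emitted_nm

# Hydrogen-alpha is emitted at 656.3 nm; suppose we observe it at 721.9 nm:
z = redshift(656.3, 721.9)
print(round(z, 2))  # 0.1
```

Comparing observed line positions against their known rest wavelengths is exactly how surveys assign each galaxy its redshift, and hence its distance and epoch.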
Behroozi points out that this isn’t what the team saw in their simulations, however. He says: “If galaxies behaved as we thought and stopped forming stars earlier, our actual universe would be coloured all wrong.
“In other words, we are forced to conclude that galaxies formed stars more efficiently in the early times than we thought. And what this tells us is that the energy created by supermassive black holes and exploding stars is less efficient at stifling star formation than our theories predicted.”
Computing the multiverse is as difficult as it sounds
Creating mock universes of unprecedented complexity required an entirely new approach that was not limited by computing power and memory, and provided enough resolution to span the scales from the “small” — individual objects such as supernovae — to a sizeable chunk of the observable universe.
Behroozi explains the computing challenge the team had to overcome: “Simulating a single galaxy requires 10 to the 48th computing operations. All computers on Earth combined could not do this in a hundred years. So to just simulate a single galaxy, let alone 12 million, we had to do this differently.”
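A back-of-the-envelope check of that remark is easy to run. The 10²¹ operations-per-second figure for the world's combined computing power is an assumed round number, not a measured one, but any plausible estimate gives the same conclusion.

```python
# Sanity check of the "all computers on Earth for a hundred years" remark.
# WORLD_OPS_PER_SEC is an assumed, illustrative figure.

OPS_NEEDED = 1e48          # operations to directly simulate one galaxy (quoted)
WORLD_OPS_PER_SEC = 1e21   # assumed combined throughput of all computers on Earth
SECONDS_PER_YEAR = 3.15e7

years = OPS_NEEDED / WORLD_OPS_PER_SEC / SECONDS_PER_YEAR
print(f"{years:.1e} years")  # ~3e19 years: billions of times the age of the universe
```

Even if the assumed throughput were off by a factor of a million, the answer would still dwarf a hundred years, which is why the team had to replace direct simulation with a statistical approach.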
In addition to utilizing computing resources at NASA Ames Research Center and the Leibniz-Rechenzentrum in Garching, Germany, the team used the Ocelote supercomputer at the UA High-Performance Computing cluster.
Two thousand processors crunched the data simultaneously over three weeks. Over the course of the research project, Behroozi and his colleagues generated more than 8 million universes.
He explains: “We took the past 20 years of astronomical observations and compared them to the millions of mock universes we generated.
“We pieced together thousands of pieces of information to see which ones matched. Did the universe we created look right? If not, we’d go back and make modifications, and check again.”
Behroozi and his colleagues now plan to expand the UniverseMachine to include the morphology of individual galaxies and how their shapes evolve over time.
As such they stand to deepen our understanding of how the galaxies, stars and eventually, life came to be.
The Illustris project took 5 years of software development and 3 months of running on 8,000 processors — but it sure was worth it — the result is truly monumental. Researchers finally have an accurate model of the development of the universe which, even though it is rough around some edges, still fits well with today’s accepted science, and even makes some valuable predictions.
The Illustris project
Stellar light distributions (g,r,i bands) for a sample of galaxies at z = 0
The vast majority of our Universe is made out of dark energy and dark matter – something which we can’t see directly. Everything we know about them, we infer from indirect observations. Testing this extraordinary scenario requires precise predictions for the formation of structure in the visible matter, the things we can see – stars, galaxies, black holes. Astrophysicists think of these visible elements as organized in a ‘Cosmic Web’ of sheets, filaments, and voids, embedded with the basic units of the cosmic structure: galaxies.
Basically, what Illustris set out to do was run a set of large-scale cosmological simulations, including galaxy formation — one of the more complex processes in the Universe. This was the best model of galaxy formation developed to date, taking into consideration the expansion of the universe, the gravitational pull of matter onto itself, the motion or “hydrodynamics” of cosmic gas, as well as the formation of stars and black holes. The model also accounts for the fact that many conditions have changed since the early days of the Universe. The simulated volume contains tens of thousands of galaxies captured in high detail, covering a broad range of masses, shapes, sizes and rates of star formation that match the properties observed in real galaxies.
Illustris simulation overview poster. Shows the large scale dark matter and gas density fields in projection (top/bottom). The lower three panels show gas temperature, entropy, and velocity at the same scale.
But this model isn’t only a projection of what we already know – it can help us learn new things as well. However, the main problem here, before we start making deductions based on this model, is ensuring that it is an actual reflection of reality – and with the immense complexity and numerical calculations involved, that’s a hard thing to do. Naturally, this leads to the need for some sort of simplification. For starters, some processes, such as the birth of individual stars, cannot be directly captured in a cosmological simulation. But even if you look at just the larger picture, everything has to fit in with the observed data – and while there are still some corrections to be made, in the grand scheme of things, Illustris fits things almost perfectly.
The main achievements of the project
It successfully reproduces a wide range of observable properties of galaxies and the relationships between these properties. A key element here is the so-called “specific star formation rate” – the rate of new stars being formed in a galaxy, divided by the amount of already-existing stars; it fits with the observed values not just for this period, but for all ages throughout the history of the Universe.
It precisely measured the gas content of the universe, and where it resides. Furthermore, where data does not exist, the model can make predictions about the evolution of the gas. Outside of individual galaxies, Illustris also predicts that at the present time, the majority of gas (~81%) remains in the “intergalactic medium” (the space between galaxies), but that this gas contains only a minority (~34%) of the metals so-far produced in the universe.
It investigated “satellite galaxies” and their connection to cosmological evolution. Satellite galaxies are galaxies which revolve around bigger galaxies, much like a planet revolves around its star. Illustris also studied the changes in internal structure as galaxy populations evolve in time, the impact of gas on the structure of dark matter, and it can even produce “mock observations”.
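The "specific star formation rate" mentioned above has a one-line definition: the star formation rate divided by the stellar mass already in place. A minimal sketch, with Milky-Way-like figures assumed purely for illustration:

```python
# Specific star formation rate: sSFR = SFR / M_star.
# Its inverse is a rough stellar mass-doubling time for the galaxy.

def specific_sfr(sfr_msun_per_yr, stellar_mass_msun):
    """sSFR in yr^-1, given SFR and stellar mass in solar-mass units."""
    return sfr_msun_per_yr / stellar_mass_msun

# Assumed Milky-Way-like values: ~1.5 solar masses of new stars per year,
# ~6e10 solar masses of stars already formed.
ssfr = specific_sfr(sfr_msun_per_yr=1.5, stellar_mass_msun=6e10)
print(f"{ssfr:.1e} per year")  # 2.5e-11 per year
```

Dividing by the existing stellar mass is what makes the quantity comparable across galaxies of very different sizes, which is why it is a natural yardstick for simulations like Illustris.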
For more information, be sure to check out their website (one of the best presentations I’ve ever seen), which also features several videos:
1 – Time evolution of a 10 Mpc (comoving) region within Illustris from the start of the simulation to z=0. The movie transitions between the dark matter density field, gas temperature (blue: cold, green: warm, white: hot), and gas metallicity.
4 – Time evolution of a 10 Mpc (comoving) cubic region within Illustris, rendered from outside. The movie shows the dark matter density field on the left and the gas temperature (blue: cold, green: warm, white: hot) on the right. The rapid temperature fluctuations around massive haloes are due to radiative AGN feedback that is most active during quasar phases. The larger ‘explosions’ are due to radio-mode feedback.
7 – Time evolution from high redshift to z=0, demonstrating the formation of a massive elliptical ‘red-and-dead’ galaxy as a result of a multiple merger around z~1. Panels show stellar light (left) and gas density (right) in a region of 1 Mpc on a side.
It is currently believed that we live in a lopsided Universe: cosmologists reached this conclusion by examining the detailed structure of the leftover radiation from the Big Bang. Now, two cosmologists have presented data which seems to suggest that our Universe is actually curved slightly, in a saddle-like fashion; if correct, their model would invalidate the long-standing idea that the cosmos is flat.
Cosmic microwave background (CMB) radiation is the thermal radiation left over from the “Big Bang” of cosmology. It is fundamentally important for measurements because it is the oldest light in the universe, dating to what is called the epoch of recombination (the period during which charged electrons and protons first became bound to form electrically neutral hydrogen atoms — so REALLY early). NASA’s Wilkinson Microwave Anisotropy Probe provided the first hints of a universal asymmetry in 2004, but some believed this was a technological error and hoped that the NASA probe’s successor, the European Space Agency’s Planck spacecraft, would correct it. As it turns out, the Planck spacecraft confirmed the anomaly.
To explain those results, Andrew Liddle and Marina Cortês, both at the University of Edinburgh, UK, have taken on the gargantuan task of proposing a new model of cosmic inflation — a theorized period in which the Universe expanded dramatically, growing by many orders of magnitude in a fraction of a second.
In their paper, published this week in Physical Review Letters, Liddle and Cortês toy with the idea that aside from the initial quantum field (the inflaton), there was also a secondary quantum field which caused the curvature of the Universe. The authors’ work is the first to explain the lopsidedness from first principles.
However, the problem is that numerous different measurements suggest that the Universe is flat, some of which can’t be fully explained with this new, curved model. Future improved measurements will likely show which hypothesis is right.
Physicists have successfully reproduced a pattern resembling the cosmic microwave background radiation in an experiment which used ultracold cesium atoms in a vacuum chamber. This is the first experiment which recreates at least some of the conditions from the Big Bang.
“This is the first time an experiment like this has simulated the evolution of structure in the early universe,” said Cheng Chin, professor in physics. Chin and his associates reported their feat in the Aug. 1 edition of Science Express, and it will appear soon in the print edition of Science.
The cosmic microwave background radiation (CMB or CMBR) is basically the thermal radiation left over from the Big Bang. It is very interesting to astrophysicists because it exhibits a large degree of uniformity throughout the entire universe (it has more or less the same values everywhere you look for it). If you analyze the “void” between stars and even galaxies with a sufficiently sensitive radio telescope, you’ll see a faint background glow, almost exactly the same in all directions, that is not associated with … anything. The glow has the most energy in the microwave spectrum. Its rather serendipitous discovery took place in 1964, and it earned its finders a Nobel prize in 1978.
You can think of this radiation as the echo of the Big Bang — by studying it, we get a fairly clear idea of how the Universe looked some 380,000 years after its ‘birth’ — incredibly early. It doesn’t tell us much about what came before or after; it’s essentially a snapshot of that moment. But as it turns out, under certain conditions, a cloud of atoms chilled to a billionth of a degree above absolute zero in a vacuum chamber displays phenomena similar to those which followed the Big Bang.
“At this ultracold temperature, atoms get excited collectively. They act as if they are sound waves in air,” he said.
This neatly correlates with what cosmologists speculated:
“Inflation set out the initial conditions for the early universe to create similar sound waves in the cosmic fluid formed by matter and radiation,” said C.-L. Hung, the study’s lead author.
The tiny universe simulated in Chin’s laboratory measured no more than 70 microns across (about the width of a human hair) — but the physics is the same regardless of the size of your universe.
“It turns out the same kind of physics can happen on vastly different length scales,” Chin explained. “That’s the power of physics.”
But there is an important difference – and one that works greatly to our advantage:
“It took the whole universe about 380,000 years to evolve into the CMB spectrum we’re looking at now,” Chin said. But the physicists were able to reproduce much the same pattern in approximately 10 milliseconds in their experiment. “That suggests why the simulation based on cold atoms can be a powerful tool,” Chin said.
If you want, you can think of the Big Bang in oversimplified terms as an explosion which made a big BOOM! These sound waves began interfering with each other creating complicated patterns – the so-called Sakharov acoustic oscillations.
“That’s the origin of complexity we see in the universe,” he said.
This is indeed a powerful tool to find out more about our infant universe, but this is just the first step. Chin and his team plan to move on to use these Sakharov oscillations to study the property of this two-dimensional superfluid at different initial conditions, then cross check their results with what is observed by cosmologists. They will use the same type of experiment but branch out to other fields of cosmology, including the formation of galaxies and even black hole dynamics.
“We can potentially use atoms to simulate and better understand many interesting phenomena in nature,” Chin said. “Atoms to us can be anything you want them to be.”
Interestingly enough, nobody on this team was a cosmologist.
Journal Reference: C.-L. Hung, V. Gurarie, C. Chin. From Cosmology to Cold Atoms: Observation of Sakharov Oscillations in a Quenched Atomic Superfluid. DOI: 10.1126/science.1237557
Astronomers from Britain’s University of Central Lancashire have recently published a landmark paper describing the largest known structure in the Universe: a group of quasars so large it spans 4 billion light-years at its longest end. The study has consequences beyond the astronomical milestone itself, since it challenges the Cosmological Principle — the assumption, dating back to Einstein, that the universe looks the same from every point of view — which has stood in place for almost a century.
Quasars are the brightest objects in the Universe. They are actually supermassive black holes surrounded by an accretion disk: as matter spirals towards the black hole at the center of its galaxy, it heats up and releases a gigantic output of radiation and light. Quasars are very distant objects, from much earlier in the Universe’s history.
For a few decades now, scientists have known that quasars tend to group with one another in structures of surprising size called large quasar groups, or LQGs. The quasar group discovered by the British astronomers is simply gigantic even by cosmological standards, with a typical dimension of 1.7 billion light-years. However, because the structure is elongated, at its longest section it measures a whopping 4 billion light-years. For a bit of perspective, the LQG is about 1,600 times larger than the distance between our galaxy, the Milky Way — which houses between 200 and 400 billion stars — and its neighbour, the Andromeda galaxy.
“While it is difficult to fathom the scale of this LQG, we can say quite definitely it is the largest structure ever seen in the entire universe,” Roger Clowes, leader of the research team, said in a statement. “This is hugely exciting – not least because it runs counter to our current understanding of the scale of the universe.”
Herein lies the predicament, though. The current modern theory of cosmology suggests that astrophysicists shouldn’t be able to find a structure larger than 1.2 billion light-years, because it states that on very large scales the universe looks the same no matter where you observe it from.
“Our team has been looking at similar cases which add further weight to this challenge and we will be continuing to investigate these fascinating phenomena,” continued Clowes.
Panaji (Goa, India), Dec 12, 2011: The hunt for the hypothetical massive elementary particle, the Higgs boson, popularly known as ‘The God Particle’. The pulls and pressures among the planets and the dark matter beyond. Building capacity to explore the still little-known Universe for the benefit of humanity, using the tools of science and technology through global collaborative efforts.
These are among the themes eminent astrophysicists from across the world will discuss and share their experiences on at the VII International Conference on “Gravity and Cosmology” (ICGC-2011), which begins on December 14 and runs for a week in this Western Indian city, one of the country’s most sought-after international tourist destinations.
The two most significant threads running through the conference are understanding the Universe at large — its past, future and constitution — and the experimental hunt for gravitational waves.
The International Centre for Theoretical Sciences (ICTS), under the prestigious Tata Institute of Fundamental Research (TIFR), Mumbai, is organising the event, Prof. Tejinder Singh (TIFR), Chairperson of the Local Organizing Committee, ICGC-2011, told this Indian Science Writers Association (ISWA) representative.
About 250 scientists, roughly half of them from abroad, will attend the conference. Those from abroad include promising young Indian scientists and researchers, many of whom are eventually expected to return and take up research positions in India, according to Prof. Singh.
Among the most anticipated presentations in the conclave is that of Prof. John Ellis (UK), considered one of the world’s most respected particle physicists, with more than 1,000 research papers to his credit and close involvement in the theoretical aspects of the Geneva-based Large Hadron Collider (LHC) project and the search for the Higgs particle.
Prof. Ellis is expected to spell out the latest news from the LHC’s hunt for the elusive particle thought to give other particles their mass — the Higgs boson — and what the experiment reveals about the Cosmos. A three-hour session will be especially devoted to presentations and discussions on the still mysterious Dark Energy.
Prof. James E. Peebles (USA), widely respected as the founding father of Modern Cosmology, will deliver a keynote address reviewing our present understanding of the Cosmos.
Eminent cosmologist Robert Kirshner (USA), a member of a team associated with this year’s Nobel Prize in Physics for the discovery of the acceleration of the Universe’s expansion, will give a first-hand description of the discovery and what it implies for theoretical physics.
Prof. Francois Bouchet (Paris), one of the leaders of the PLANCK satellite project, which is currently observing the cosmic microwave background (CMB), will present the state of the art and the latest findings in this vital subject. The project is looking for fossil records of the early history of the universe in this relic radiation.
Legendary physicist Prof. Kip Thorne (USA) will present new insights into the understanding of geometry of black holes, and how these insights would be confirmed by the detection of gravitational waves.
Incidentally, Thorne is a co-founder of the Laser Interferometer Gravitational-Wave Observatory (LIGO) project, which is keen to install a gravitational wave detector in India. Efforts towards this are underway, and would make India the third country in the world to host such a prestigious project, after the USA and Italy.
Prof. Bernard Schutz (Germany), an expert in the study of gravitational waves, and India’s eminent cosmologist Prof. Jayant Narlikar are among the other top scientists attending the conclave.
Among the other astrophysicists, Eric Adelberger will review experimental tests of the law of gravitation, while Bernard Schutz and Stan Whitcomb will give state-of-the-art presentations on gravitational wave detection. A three-hour session will be dedicated to gravitational wave astronomy with a global network of detectors.
Priyamvada Natarajan [Yale] will report her new findings on how the first black holes might have formed in the early history of the Universe.
J. Richard Bond [CITA, Canada] and David Wands [Portsmouth] will highlight theoretical studies of the early history of the Universe, and the consequent signatures in observations we can make today.
The physics and astrophysics of black holes will also be discussed in the plenary lectures of Mihalis Dafermos [Cambridge], Luis Lehner [Perimeter Institute, Canada], Dipankar Bhattacharya [IUCAA Pune] and Masaru Shibata [YITP, Kyoto].
Over the last few years, remarkable new connections have been discovered between gravity, fluid dynamics and thermodynamics. These will be reported by Gary Horowitz [Santa Barbara], Shiraz Minwalla [TIFR] and T. Padmanabhan [IUCAA].
Abhay Ashtekar [Penn State] and Rafael Sorkin [Perimeter] will review the progress towards obtaining the laws which might relate quantum mechanics to gravity, and help understand the very act of creation.