It is striking that today, we can not only discover but even classify stars that are light-years from Earth — sometimes, even billions of light-years away. Stellar classification often uses the famous Hertzsprung–Russell diagram, which summarises the basics of stellar evolution. The luminosity and the temperature of stars can teach us a lot about their life journey, as they burn their fuel and change chemical composition.
We know that some stars show spectral lines of ionised or neutral helium, that some are hotter than others, and that the Sun fits in as a rather unimpressive star compared to the giants. Part of that understanding came from Annie Jump Cannon’s contributions during her long career as an astronomer.
On the shoulders of giantesses
Cannon was born in 1863 in Dover, Delaware, US. When she was 17 years old, thanks to her father’s support, she managed to travel 369 miles from her hometown to attend classes at Wellesley College. It’s no big deal for teens today, but back then, this was an unimaginable adventure for a young lady. The institution offered education exclusively to women, an ideal environment to spark in Cannon an ambition to become a scientist. In 1884 she graduated, and in 1896 she started her career at the Harvard Observatory.
At Wellesley, her astronomy professor was Sarah Whiting, who sparked Cannon’s interest in spectroscopy:
“… of all branches of physics and astronomy, she was most keen on the spectroscopic development. Even at her Observatory receptions, she always had the spectra of various elements on exhibition. So great was her interest in the subject that she infused into the mind of her pupil who is writing these lines, a desire to continue the investigation of spectra.”
Cannon had an explorer’s spirit and travelled across Europe, publishing a photography book in 1893 called “In the Footsteps of Columbus”. It is believed that during her years at Wellesley, after the trip, she contracted scarlet fever. The disease affected her ears and she suffered severe hearing loss, but that didn’t put an end to her social or scientific activities. Annie Jump Cannon was known for her reliable attendance, participating in every American Astronomical Society meeting during her career.
At Radcliffe College, she began working more with spectroscopy. Her first work, on the spectra of southern stars, was published in 1901 in the Annals of the Harvard College Observatory. The director of the observatory, Edward C. Pickering, put Cannon in charge of observing the stars that would later form the Henry Draper Catalogue, named after the first person to measure the spectrum of a star.
The job didn’t pay much. In fact, Harvard employed a number of women as “women computers” to process astronomical data. The women computers at Harvard earned less than secretaries, and this enabled researchers to hire more of them, as men would have needed to be paid more.
Her salary was only 25 cents an hour, a small income for the difficult job of poring over tiny details in the spectrographs, often only possible with magnifying glasses. She was known for being focused (possibly also influenced by her deafness), but she was also known for doing the job fast.
During her career, she managed to classify the spectra of 225,000 stars. At the time, Williamina Fleming, a Scottish astronomer, was in charge of the women computers at Harvard. She had previously observed 10,000 stars from the Draper Catalogue and classified them with letters from A to N. But Annie Jump Cannon saw the link between the classes and the stars’ temperatures, and rearranged Fleming’s classification into the OBAFGKM system. The OBAFGKM system orders stars from hottest to coldest, and astronomers created a popular mnemonic for it: “Oh Be A Fine Guy/Girl Kiss Me”.
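As a rough illustration of how the system orders stars by temperature, here is a minimal sketch. The temperature boundaries below are approximate, conventional modern values, not part of Cannon’s original work, and the exact cutoffs vary between references:

```python
# Approximate effective-temperature ranges (in kelvin) conventionally
# associated with the OBAFGKM spectral classes, hottest to coldest.
# These boundaries are rough; different references draw them slightly differently.
SPECTRAL_CLASSES = [
    ("O", 30_000, 60_000),
    ("B", 10_000, 30_000),
    ("A", 7_500, 10_000),
    ("F", 6_000, 7_500),
    ("G", 5_200, 6_000),
    ("K", 3_700, 5_200),
    ("M", 2_400, 3_700),
]

def classify(temperature_k):
    """Return the spectral class letter for an effective temperature in kelvin."""
    for letter, low, high in SPECTRAL_CLASSES:
        if low <= temperature_k < high:
            return letter
    return None  # outside the classical OBAFGKM range

# The Sun, at an effective temperature of about 5,778 K, lands in class G.
```

Real classification relies on spectral lines rather than a direct temperature reading, but the ordering is the same insight Cannon captured.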
“A bibliography of Miss Cannon’s scientific work would be exceedingly long, but it would be far easier to compile one than to presume to say how great has been the influence of her researches in astronomy. For there is scarcely a living astronomer who can remember the time when Miss Cannon was not an authoritative figure. It is nearly impossible for us to imagine the astronomical world without her. Of late years she has been not only a vital, living person; she has been an institution. Already in our school days she was a legend. The scientific world has lost something besides a great scientist.”
Annie Jump Cannon was awarded many prizes: she received an honorary doctorate from Oxford University, was the first woman to receive the Henry Draper Medal in 1931, and was the first woman to become an officer of the American Astronomical Society.
Her work in stellar classification was followed by Cecilia Payne-Gaposchkin, another dame of stellar spectroscopy. Payne improved the system with quantum mechanics and described what stars are made of.
Very few scientists have had such a competent and exemplary career as Cannon. Payne continued the work Cannon left behind; her advisor, Henry Norris Russell, then built on it with minimal citation. From that, we got today’s basic understanding of stellar classification. Cannon’s beautiful legacy has recently been rescued by other female astronomers who know the importance of her life’s work.
A new study modeled the dynamics and evolution of some of the largest known structures in the universe.
Let’s take a moment to look at our position in the universe.
We are living in a solar system orbiting the center of the Milky Way galaxy, which itself lies in the Local Group of galaxies, neighboring the Local Void, a vast region of space with fewer galaxies than expected. Wait, we’re not done yet. These structures are part of a larger region that encompasses thousands of galaxies: the Laniakea Supercluster, which is around 520 million light-years across.
A group of researchers has now simulated the movement of galaxies in the Laniakea and other clusters of galaxies starting when the universe was in its infancy (just 1.6 million years old) until today. They used observations from the Two Micron All-Sky Survey (2MASS) and the Cosmicflows-3 as the starting point for their study. With these two tools, they looked at galaxies orbiting massive regions with velocities of up to 8,000 km/s — and made videos describing those orbits.
Because the universe is expanding and that influences the evolution of these superclusters, we first need to know how fast the universe is expanding, which has proven to be very difficult to calculate. So the team considered different plausible universal expansion scenarios to get the clusters’ motion.
Besides Laniakea, the scientists report two other zones where galaxies appear to be flowing towards a common gravitational attractor: Perseus-Pisces (a supercluster some 250 million light-years across) and the Great Wall (a structure spanning about 1.37 billion light-years). In the Laniakea region, galaxies flow towards the Great Attractor, a very dense part of the supercluster. The other superclusters show similar patterns; the Perseus-Pisces galaxies, for instance, flow towards the spine of the cluster’s large filament.
The researchers even predicted the future of these galaxies, estimating their paths some 10 billion years into the future. Their videos make it clear that the expansion of the universe dominates the big picture, while in smaller, denser regions gravitational attraction prevails, as in the future of Milkomeda in the Local Group.
Black holes are among the most extreme objects in the universe. Their gravitational pull is so strong that nothing can escape it, not even light. But according to a new NASA study, black holes may play a more complex role in galactic ‘ecosystems’. Specifically, a black hole was found to be contributing to the formation of new stars in its vicinity, offering tantalizing clues about how massive black holes develop in the first place.
A stellar nursery
Some ten years ago, Amy Reines, then a graduate student, discovered a black hole in a galaxy about 30 million light-years away from Earth, in the southern constellation Pyxis. She knew something was off right away, but it wasn’t until recently that new Hubble observations shed light on the situation.
“At only 30 million light-years away, Henize 2-10 is close enough that Hubble was able to capture both images and spectroscopic evidence of a black hole outflow very clearly. The additional surprise was that, rather than suppressing star formation, the outflow was triggering the birth of new stars,” said Zachary Schutte, Reines’ graduate student and lead author of the new study.
The galaxy, called Henize 2-10, is a so-called “starburst” galaxy — a galaxy where stars are being formed at a much higher rate than normal, around 1,000 times faster. The galaxy is also relatively small — a so-called dwarf galaxy — and has a black hole at its center, much like the Milky Way.
Researchers were already aware of an unusual cocoon of gas in the area, but Hubble managed to also image an outflow linked to the central black hole. Although the process is not fully understood, astronomers believe that black holes (or at least some of them) can produce outflows despite their immense gravity. In Henize 2-10, this outflow moves at about a million miles per hour, slamming into the gas cocoon, and as it turns out, newborn stars follow the path of the outflow.
In large galaxies, the opposite happens: material falling towards the black hole forms jets of plasma that don’t allow the formation of stars. But apparently, in the less-massive Henize 2-10, the outflow has just the right characteristics to precipitate new star formation. Previously, studies mostly focused on larger galaxies, where there is more observational evidence. Dwarf galaxies are still understudied, and it’s only thanks to Hubble that researchers were able to study this.
“Hubble’s amazing resolution clearly shows a corkscrew-like pattern in the velocities of the gas, which we can fit to the model of a precessing, or wobbling, outflow from a black hole. A supernova remnant would not have that pattern, and so it is effectively our smoking-gun proof that this is a black hole,” Reines said.
The role that black holes play in the universe is one of the biggest puzzles in astronomy, and as more data comes in, it’s starting to look like this role is not straightforward, but rather complex. For instance, it was only recently that researchers realized that most (if not all) galaxies have a black hole at their center. The more massive the galaxy, the more massive the central black hole — or possibly it’s the other way around, and the mass of the black hole affects the galaxy.
But we don’t really know how these central black holes (often called supermassive black holes) formed. Some researchers suspect they formed like “regular” black holes and somehow accumulated more and more mass; others believe they could only have formed in special conditions in the early stages of the universe; a further competing theory claims that the “seeds” of these black holes come from dense star clusters that collapse gravitationally. The black hole in Henize 2-10 could offer clues about these theories.
The black hole in the galaxy remained relatively small over cosmic time and did not accumulate a lot of material. This would suggest that it’s relatively unchanged since its formation, essentially offering a window into the early days of the universe.
“The era of the first black holes is not something that we have been able to see, so it really has become the big question: where did they come from? Dwarf galaxies may retain some memory of the black hole seeding scenario that has otherwise been lost to time and space,” Reines concludes.
When we think about junk, things like garbage bins or landfills come to mind — but there’s another junk problem, one that’s hard to see with the naked eye from the Earth. Space junk, researchers warn, is a growing problem, and if we don’t address it quickly, it may soon be too much to handle.
There are a total of 6,542 satellites currently occupying Earth’s orbit, but only half of them are actually doing something. The other half are inactive: they’re simply junk. To make matters even more problematic, over 1,200 satellites were launched in 2020 alone, a record, and generally speaking, we can expect more and more satellites to be plopped into orbit.
Now, imagine that one day Earth’s orbit becomes overcrowded and two large satellites hit each other. Both satellites would break into smaller pieces that would then crash into other satellites, triggering a series of unstoppable collisions and a lot of junk pieces flying around. This has happened a few times already.
Due to these collisions, our planet’s orbit gets more and more cluttered with debris, to the extent that eventually, we will end up having no room to launch more rockets and satellites. Such a situation in which Earth’s orbit becomes completely unusable because of large amounts of space junk is referred to as Kessler syndrome — a phenomenon first envisioned by NASA scientist Donald J. Kessler in 1978.
Fortunately, we’re not at that stage yet. For now, space junk does not seem like a big problem but aerospace experts suggest that in the coming years, the number of satellite launches and space missions could increase dramatically, and this is likely to add more junk to space and make Earth’s orbit more crowded than ever. Simply put, if we don’t start taking action quickly, it will soon be too late.
What is space junk and why is it dangerous?
Space junk is a generic term for unusable satellite parts, rocket components, and other debris from man-made machines in space. So far, NASA has tracked 27,000 such items aimlessly moving in Earth’s orbit. This orbital debris can move at speeds of 24,000 km/h (15,000 mph), so any such fast-moving piece of junk can hit and destroy a functional satellite or a passing rocket at any time.
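To get a feel for why even tiny fragments are dangerous at those speeds, a back-of-the-envelope kinetic energy calculation helps. The 1-gram fragment below is a hypothetical example, not a specific tracked object:

```python
# Kinetic energy of a small, hypothetical debris fragment at the
# orbital debris speed quoted above (24,000 km/h).
speed_kmh = 24_000
speed_ms = speed_kmh / 3.6      # convert km/h to m/s (~6,667 m/s)

mass_kg = 0.001                 # a hypothetical 1-gram fragment, e.g. a paint fleck
energy_j = 0.5 * mass_kg * speed_ms ** 2

print(f"{energy_j:.0f} J")      # ~22,000 J, several times a rifle bullet's muzzle energy
```

Because kinetic energy grows with the square of speed, even a fleck of paint at orbital velocity packs far more punch than a bullet on Earth.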
We’re already seeing some of this damage in action. In March 2021, the 18th Space Control Squadron (18SPCS), a space control unit under the US Space Force, confirmed that a small debris piece named Object 48078 hit China’s Yunhai 1-02 satellite. According to astrophysicist Jonathan McDowell, Object 48078 was a remnant of a Zenit-2, a Russian rocket launched in 1996. McDowell further added that the “Yunhai 1-02 satellite broke up” after the collision.
However, such collisions due to space junk are still rare. Before the Yunhai 1-02 crash, the last reported collision was in 2009. Moreover, mission controllers can often prevent such collisions by adjusting a satellite’s position. Every year, many satellites are maneuvered multiple times to avoid collisions with space junk; even the International Space Station (ISS) has performed more than 20 junk avoidance maneuvers since its launch in 1998.
The space junk problem does not seem like a big issue for now but if not dealt with properly, it may lead to chaos in our planet’s orbit in the future — chaos that will be extremely difficult to address.
A small but growing problem
Before 2010, only around 100 satellites were launched every year, but in 2020, for the first time, more than 1,000 satellites were sent to space. The numbers continued to increase in 2021: so far, 1,400 new satellites have already been placed in orbit this year.
Moreover, in the early days of space exploration, there used to be only a few agencies that would send satellites into space, like NASA, Roscosmos, and the European Space Agency. Nowadays, private players like SpaceX and Blue Origin have created a boom in the aerospace industry and are launching more and more satellites. In the coming years, these companies plan to launch mega-constellations (groups of satellites that cover a large orbital area) to provide wireless broadband internet services across the globe: an exciting project that is bound to help millions around the world, but one that also adds to the problem of space junk.
These mega-constellations would bring an unprecedented increase in the number of satellites revolving around Earth (a report suggests that the Earth’s orbit may have 100,000 satellites by 2030). With every launch, the amount of space junk will also increase making the orbit more congested. As a result, both the existing and new satellites will have to perform more collision avoidance maneuvers.
Therefore, more fuel and resources would be spent on saving the satellites from space junk. Sooner or later, with an increasing number of space missions, the growing amounts of space junk might raise the frequency of outer space collisions and over the course of time, it could ultimately cause the Kessler syndrome.
Is it possible to free Earth’s orbit of space junk?
Cleaning up space junk is not as easy as it sounds. For starters, imposing a ban doesn’t seem like a promising idea.
Rockets are launched to explore space and collect information about other planets in our solar system, while man-made satellites are placed in Earth’s orbit to facilitate communication, navigation, military assistance, Earth observation, weather forecasting, mineral search, and many other activities that hold great importance for humans. Therefore, banning space missions and new satellite launches is obviously not a solution.
Cleaning our planet’s orbit is both an expensive and complicated process. However, researchers and space agencies are working on this and they keep coming up with new and interesting methods to remove space junk from Earth’s orbit.
Around 2012, a group of researchers at EPFL (the Swiss Federal Institute of Technology) came up with the idea of a special satellite (called CleanSpaceOne) that could attach itself to a targeted piece of space junk and drag it back towards Earth. The researchers proposed that during the descent, both the satellite and the space junk would burn up from atmospheric heating.
This idea sounds promising, but it will also be costly, and bringing down satellites one at a time will be very time-consuming.
In 2016, the Japan Aerospace Exploration Agency sent an electrodynamic tether into space that could direct space junk towards Earth’s atmosphere by using the planet’s magnetic field. A couple of years later, in April 2018, the Surrey Space Centre in the UK launched the RemoveDEBRIS project, which focused on demonstrating various space junk removal technologies. Under the RemoveDEBRIS initiative, the effectiveness of nets, harpoons, and drag sails for catching space junk was tested.
Researchers at Purdue University also developed a drag sail named Spinnaker3 in 2020. This powerful drag sail is an efficient and cost-effective way to deal with space junk, as it does not require any fuel during its operation. Moreover, it can drag even rocket-sized space debris back into Earth’s atmosphere, where it burns up safely. Spinnaker3 is expected to launch in November 2021 on a Firefly rocket.
Astroscale, an orbital junk removal company from Japan, launched the ELSA-d (End-of-Life Services by Astroscale-demonstration) satellite in March 2021. This advanced debris removal system uses magnetic satellite catching technology to pick small inactive satellites from Earth’s orbit. ELSA-d successfully completed its first satellite capturing test on August 25, 2021, and it is now moving on to the next phases of its space junk removing process.
The bottom line
As is generally the case, prevention is better than cure. In the case of space junk, it’s not yet a big problem — but by the time it becomes a big problem, it may be too big to handle efficiently, which is why it’s best to act as quickly as possible.
Aerospace experts are following this closely and if their research is supported, we’ll likely soon see effective waste-management strategies for space — and by the time we’re ready to go on our first interplanetary picnic, we’ll have a clean, green (hopefully), and beautiful orbital view.
Spectral analysis of Hubble data on the Jovian satellite found traces of persistent water vapor in Europa’s atmosphere. The same technique had previously been applied to Ganymede, another of Jupiter’s satellites, but researchers were surprised to see the same thing on Europa because its surface temperatures are so much lower.
Jupiter’s moons are frozen on the surface. They’re far away from the sun and just don’t receive enough solar radiation to maintain oceans of liquid water like Earth. But there’s more to some of these moons than meets the eye. For instance, Europa is thought to have an ocean of liquid water under the frozen surface, due to friction. The gravitational pull from Jupiter causes the moon’s ice shell and interior to flex during the course of its orbit (much like how we have tides on Earth). This movement produces friction, friction produces heat, and this heat is enough to melt a big chunk of Europa’s subsurface to liquid water.
In the past few years, astronomers have been watching Europa closely because having an ocean of liquid water (even under its surface) makes it a likely candidate to host life. Several studies have brought evidence that supports the existence of water on Europa, but what makes this study different is that it brings evidence of some non-solid water on the surface of the satellite.
It’s not a lot — just thin traces of vapor — but even these traces are surprising on Europa’s surface. The reason is that Europa reflects more sunlight than Ganymede, keeping its surface temperature much colder. The daytime high on Europa is a whopping -260 degrees Fahrenheit (-162 Celsius) — and that’s not a temperature you’d expect to find water at.
“The observation of water vapor on Ganymede, and on the trailing side of Europa, advances our understanding of the atmospheres of icy moons,” said Roth. “However, the detection of a stable water abundance on Europa is a bit more surprising than on Ganymede because Europa’s surface temperatures are lower than Ganymede’s.”
Previous findings of water vapor on Europa were associated with water plumes erupting through the ice — analogous to volcanoes or geysers here on Earth. These plumes can extend more than 60 miles (96 km) high, producing transient blobs of vapor in the moon’s atmosphere. But this appears to be different.
Looking at data from 1999, 2012, 2014, and 2015, Hubble found evidence that some of the water vapor is coming directly from the ice on the surface. This ice sublimates, transforming directly from solid ice into gas, and it appears to be a continual process. In other words, small parts of Europa’s surface ice are constantly transforming into water vapor, at least on the trailing side of the moon.
An earlier paper from 2021, also co-authored by Roth, found similar traces of water vapor in the atmosphere of Jupiter’s moon Ganymede. However, one limitation of both these papers is that they are indirect observations. Simply put, it’s not exactly water that they’re seeing, but rather oxygen, a major component of water. In theory, there could be other molecules containing oxygen (carbon dioxide, lone oxygen molecules, hydroxide), but nothing seems to fit the data as well as water. It’s about as solid a conclusion as you can draw from an indirect observation, although future work will still be needed to confirm the finding.
Luckily, both NASA and the ESA are preparing missions to explore Europa in more detail. NASA’s Europa Clipper is set for launch in 2024, while ESA’s Jupiter Icy Moons Explorer is set for launch in August 2022 and will reach Jupiter in July 2031. These two missions are expected to greatly increase our understanding of these frozen worlds and offer more hints to their potential habitability.
Most large-scale simulations are of specific processes, such as star formation, galaxy mergers, solar system events, the climate, and so on. These aren’t easy to simulate at all: they’re complex displays of physical phenomena, and it’s hard to feed a computer all the detailed information needed to describe them.
To make it even more complicated, there is also randomness involved. Even something as simple as a glass of water is not exactly simple. For starters, it’s never pure water: it contains minerals like sodium and potassium, various amounts of air, maybe a bit of dust. If you want a model of the glass of water to be accurate, you need to account for all of those, yet no two glasses of water contain exactly the same amounts. Computer simulations need to do their best to estimate the chaos within a phenomenon, and the more complexity you add, the longer the simulation takes and the more processing power and memory it needs.
So how could you even go about simulating the universe itself? Well, first of all, you need a good theory to explain how the universe is formed. Luckily enough, we have one — but it doesn’t mean it’s perfect or that we are 100% sure it is the correct one — we still don’t know how fast the universe expands, for example.
Next, you add all the ingredients at the right moment, on the right scale – dark matter and regular matter team up to form galaxies when the universe was around 200-500 million years old.
Scientists build universe simulations for multiple reasons: to learn more about the universe, or simply to test a model by confronting it with real astronomical data. If a theory is correct, the structures formed in the simulation will closely resemble the ones telescopes actually observe.
There are different types of simulations, each with its own use and advantages. For instance, “N-body” simulations focus on the motion of particles, so there’s a lot of focus on the gravitational force and interactions.
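The brute-force idea behind an N-body code can be sketched in a few lines. This is an illustrative toy, not the actual code behind any of the simulations discussed here; all parameter values are made up for the example:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step(positions, velocities, masses, dt, softening=1e7):
    """Advance a toy 2D N-body system by one time step.

    positions/velocities are lists of [x, y] in metres and m/s.
    The softening length avoids infinite forces at tiny separations.
    """
    n = len(positions)
    accelerations = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            dist = math.sqrt(dx * dx + dy * dy + softening ** 2)
            # Newtonian gravity: a = G * m_j / r^2, directed along the separation
            a = G * masses[j] / dist ** 2
            accelerations[i][0] += a * dx / dist
            accelerations[i][1] += a * dy / dist
    for i in range(n):
        velocities[i][0] += accelerations[i][0] * dt
        velocities[i][1] += accelerations[i][1] * dt
        positions[i][0] += velocities[i][0] * dt
        positions[i][1] += velocities[i][1] * dt
    return positions, velocities
```

This direct sum costs O(N²) per step, which is why real cosmological codes use tree or particle-mesh methods instead; with 10 billion particles, brute force would be hopeless.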
The Millennium Run, for instance, incorporates over 10 billion dark matter particles. Even without knowing what dark matter really is, researchers can use these ‘particles’ to simulate dark matter’s properties. Other simulations followed, such as IllustrisTNG, which adds star formation, black hole formation, and other details. The most recent one, Uchuu, comes with a 100-terabyte catalog.
In the end, the simulations can’t reveal every single detail in the universe. You can’t simulate what flavor pie someone is having, but you can have enough detail to work with large-scale things such as the structure of galaxies and other clusters.
Another type of model is a mock catalog. Mocks are designed to mimic a mission and they use data gathered by telescopes over years and years. Then, a map of some structure is created — it could be galaxies, quasars, or other things.
The mocks simulate these objects just as they were observed, with their recorded physical properties. They are made according to a model of the universe, with all the ingredients we know about.
The theory behind the model used for the mocks can be tested by comparing them with the telescopes’ observations. This gives an idea of how right or wrong our assumptions and theories are, and it’s a pretty good way to put ideas to the test. Usually, researchers use around 1,000 mocks to give their results statistical significance.
Let’s take a look behind the scenes at how the models are produced — and how much energy they use. These astronomical and climate simulations are made on supercomputers, and they are super. The Millenium Run, for example, was made using the Regatta supercomputer. For these simulations, 1 terabyte of RAM was needed and resulted in 23 terabytes of raw data.
IllustrisTNG used Hazel Hen. This beast can perform 7.42 quadrillion floating-point operations per second (7.42 Pflops), equivalent to tens of thousands of modern laptops working together. In addition, Hazel Hen consumes 3,200 kilowatts of power, which leads to a spicy electric bill. Uchuu, which produced 100 terabytes of results, was made using ATERUI II, which performs at 3.087 Pflops.
In an Oort Cloud simulation, the team involved reported the amount of energy they used in their work: “This results in about 2MWh of electricity (green-algorithms.org), consumed by the Dutch National supercomputer.” A habit that may become more common in the future.
So what does this tell us about the possibility of our very own universe being a simulation? Could we be living in some sort of Matrix? Or in a Rick and Morty microverse? Imagine the societal chaos of finding out that we are in a simulated universe, and that you are not a citizen of a privileged, rich country. That wouldn’t end well for the architect.
The simulation hypothesis is actually taken seriously by some researchers. It was postulated by Nick Bostrom and rests on three propositions, at least one of which must be true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
(3) we are almost certainly living in a computer simulation.
This being said, the simulation hypothesis is not a scientific theory. It is simply an idea — a very interesting one, but simply put, nothing more than an idea.
Lessons from simulations
What we’ve learned from making our own simulations is that it is impossible to make a perfect copy of nature. N-body simulations are the perfect example: we can’t simulate everything, only the particles relevant to what we want to study. Climate models face the same problem; it is impossible to create a pixel that perfectly reproduces a geographic location, so you can only approximate the desired features.
The other difficulty is energy consumption, which makes some phenomena simply too expensive for us to simulate. Simulating a universe in which people make their own choices would require an improbable amount of power, and how would the data even be stored? Unless it ends like Asimov’s ‘The Last Question’, which is well worth a read.
In the end, simulations are possible, but microverses are improbable. We’ll keep improving simulations, making better ones on faster supercomputers, all with the thought that we need efficient programs that consume less energy and less time.
Neutron stars are one of the most amazing things we know of in the universe. A teaspoon of neutron star material would weigh around a billion tons, making them some of the densest objects in the universe, second only to black holes. Aside from being extremely dense, they can emit bizarre pulses, and sometimes, they form in binary systems — where things can get even wilder.
In 1967, Jocelyn Bell detected the first evidence of a neutron star. She was a PhD student at Cambridge University and detected a very powerful and extremely regular radio pulse (the source was later named a pulsar). It was so eerily regular that the signal’s first nickname was LGM-1, “Little Green Men 1”.
The people who later got involved in the study didn’t really believe it was another civilization and set out to find the signal’s underlying cause. They found it to be a neutron star, the collapsed core of a massive supergiant star that wasn’t quite massive enough to turn into a black hole.
Today, we can detect neutron stars by looking at their signals with X-ray detectors, and we’ve learned quite a lot about them.
Neutron stars are born when a star of 8 to 20 solar masses runs out of fuel. The star undergoes a series of nuclear fusion reactions that leave behind a layered, onion-like structure with a core made of iron. The iron core is the key to how the star will end up: if the core’s mass is above a limit (called the Chandrasekhar limit), the star will collapse into a neutron star or black hole. Cores with masses below this limit settle down as stable white dwarfs.
The formation of a neutron star can happen at dazzling speed. The supernova collapse takes just about 0.1 seconds, and what is left behind from the primary star is just its core, now made mostly of neutrons. The explosion releases neutrinos, antisocial particles that interact with almost nothing else.
In 1987, a supernova exploded and, for the first time, we detected neutrinos from outside the solar system, in the Kamiokande detector.
The neutron star left behind by the colossal boom has nearly 1.5 solar masses packed into a radius of just around 10 kilometres, making it the densest star we know of in the universe – one tablespoon of a neutron star contains billions of kilograms. It is mostly made of neutrons, of course, but also has some protons and electrons here and there; without those extra particles it wouldn’t be stable, as the neutrons could decay into protons and electrons.
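Those density figures are easy to sanity-check with a back-of-the-envelope calculation. The sketch below assumes typical round numbers (1.5 solar masses, a 10 km radius), not measurements of any particular star:

```python
import math

M_SUN = 1.989e30          # solar mass in kg
mass = 1.5 * M_SUN        # assumed typical neutron star mass, kg
radius = 10e3             # assumed typical neutron star radius, m

volume = (4 / 3) * math.pi * radius**3    # volume of a sphere, m^3
density = mass / volume                   # average density, kg/m^3

tablespoon = 15e-6                        # 15 mL expressed in m^3
print(f"average density: {density:.2e} kg/m^3")
print(f"one tablespoon:  {density * tablespoon:.2e} kg")
```

With these assumptions the average density comes out around 7 × 10¹⁷ kg/m³, so even a single spoonful does indeed weigh billions of kilograms.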
Interestingly though, neutron stars don’t collapse on themselves despite their massive gravitational attraction and are generally stable.
This happens because neutrons are essentially fermions, subatomic particles that respect their own personal space; you could say they practice subatomic social distancing.
In more scientific terms, fermions obey the Pauli exclusion principle: “you can’t have identical fermions in the same quantum state”. This means the identical neutrons can’t occupy the same space, thus, the pressure from ‘trying to avoid other neutrons’ personal space’ competes against gravity and the neutron star keeps itself stable for a long time. This type of matter is often referred to as “degenerate matter” — a highly dense state of fermionic matter in which the Pauli exclusion principle exerts enough pressure (in addition to, or in lieu of thermal pressure) so that the neutron star doesn’t collapse.
Classifying neutron stars
There are several ways to classify neutron stars, but commonly, three types are distinguished.
The easiest ones to find are the pulsars. Pulsars are highly magnetized rotating neutron stars that emit beams of electromagnetic radiation from their magnetic poles. They have a highly periodic pulse, which can repeat a cycle within milliseconds or over several seconds. The rotation axis and the beam don’t need to be aligned, which is why most images illustrating pulsars show the beams tilted relative to the spin axis.
Pulsars are very serious about their timing — so much so that astronomers sometimes use them as celestial timekeepers. The timing of their pulses can be used to pinpoint positions precisely, just like sailors of yore used stars to guide themselves at sea. The Voyager spacecraft carry a message meant to let any civilization that finds them locate Earth. How was it mapped? With radio pulsars: the map shows the directions to a set of galactic pulsars, with each pulsar’s rotation period encoded along the lines.
Another type of neutron star is the magnetar, and here, things really get extreme. Pulsars are already extreme objects with titanic magnetic fields, but magnetars have fields around 1,000 times stronger than even a pulsar’s. To get an idea of how big a field we’re talking about: the Large Hadron Collider uses magnets to help steer particles, and their magnetic field is around 8.3 Tesla. A magnetar’s magnetic field can reach around 100 billion Tesla. Magnetars can also produce giant flares, releasing vastly more energy than any solar flare.
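To put those field strengths side by side, here is a quick comparison. The pulsar and magnetar values are rough order-of-magnitude figures commonly cited in the literature, assumed here for illustration rather than taken from any specific star:

```python
# Rough, assumed order-of-magnitude field strengths in tesla
fields = {
    "fridge magnet": 5e-3,
    "LHC dipole magnet": 8.3,   # figure quoted in the text
    "typical pulsar": 1e8,      # assumed literature value
    "magnetar": 1e11,           # assumed literature value
}

reference = fields["LHC dipole magnet"]
for name, b in fields.items():
    print(f"{name:>18}: {b:.1e} T  ({b / reference:.1e} x LHC)")
```

Under these assumptions a magnetar’s field is about ten billion times stronger than the LHC’s magnets, and a thousand times stronger than a typical pulsar’s.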
Neutron stars can orbit a companion, sometimes a white dwarf, sometimes a main-sequence star, or even another neutron star. Things get weird when they start to merge.
On 17 August 2017, both the Virgo and Laser Interferometer Gravitational-Wave Observatory (LIGO) teams detected gravitational waves from two neutron stars merging in another galaxy, an event named GW170817. Because the merging process was so catastrophic, the event emitted gamma rays detected by the Fermi Gamma-ray Space Telescope. There were also visible-light signals and other important measurements from the event – multi-messenger observations, something widely anticipated as the future of astrophysics.
Remember the Pauli exclusion principle? Well, we’re not done with it just yet. There is a phase of matter which behaves like a fluid, but not quite, and it works in much the same way as superconductors do.
When you try to join particles with the same charge, they repel each other. But at a very specific temperature – and, in the case of neutron stars, density – they can get along and ignore the social distancing. Superconductors have a strange mechanism for forming interactions between electrons: Cooper pairs. These interactions give a superconductor zero resistivity; for a superfluid, the analogue is zero viscosity, the property that makes fluids flow slowly.
Neutrons aren’t charged, so they aren’t supposed to form pairs either. However, in that extreme environment, they manage to form this superconducting-like phase, which is actually called a superfluid state. It happens inside the neutron star’s inner crust and outer core.
Cooper pairs made of neutrons are what make a superfluid state possible in a neutron star. It may sound weird to call anything a fluid inside such a dense object, but density is relative: everything in the star is dense, and the core is denser still, so a less dense region can behave as a fluid compared to a super dense one.
The evidence comes from the pairing itself. The neutron pairs aren’t stable, so they break up and emit neutrinos in response, and this neutrino release makes the star cool down. Two groups independently detected this cooling mechanism in the neutron star inside the Cassiopeia A supernova remnant: over ten years of observation, the star cooled by about 4%, and the best explanation agrees with the superfluid theory.
Those were just a few of the quirky characteristics of neutron stars; they can get weirder than that. Exotic stuff probably happens in their inner cores, explained with even more quantum mechanics, which could make ‘Rick and Morty’ seem old hat.
Astronomers have discovered a one-kilometre wide asteroid orbiting the Sun at a distance of just 20 million km (12 million miles). Not only does this make the asteroid–currently designated 2021 PH27–the Sun’s closest neighbour, but it also means that as it completes an orbit in just 113 days, it is also the solar system’s fastest-orbiting asteroid. 2021 PH27 skirts so close to the Sun that its discoverers say its surface temperature is around 500 degrees C–hot enough to melt lead.
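That surface temperature can be roughly reproduced with the textbook equilibrium-temperature formula for a sun-heated body. The sketch below assumes a fast-rotating, perfectly dark rock (zero albedo) – a simplification for illustration, not the discoverers’ actual model:

```python
import math

AU_KM = 1.496e8                  # one astronomical unit in km
d_au = 20e6 / AU_KM              # asteroid's distance from the Sun, in au

albedo = 0.0                     # assumed: a perfectly dark surface
# Equilibrium temperature of a fast rotator: ~278.6 K at 1 au, scaling as 1/sqrt(d)
t_kelvin = 278.6 * (1 - albedo) ** 0.25 / math.sqrt(d_au)
t_celsius = t_kelvin - 273.15

print(f"~{t_celsius:.0f} deg C")  # close to the quoted ~500 deg C
```

Even this crude estimate lands within a few percent of the quoted figure, hot enough to melt lead (melting point 327 °C).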
Scott S. Sheppard of the Carnegie Institution of Science first spotted asteroid 2021 PH27 in data collected by the Dark Energy Camera (DECam) mounted at the prime focus of the Victor M. Blanco 4m Telescope at Cerro Tololo Inter-American Observatory (CTIO), Chile. Brown University astronomers Ian Dell’antonio and Shenming Fu took images of the asteroid on 13th August 2021 at twilight–the optimum time for hunting asteroids that lurk close to the Sun. Just like the inner planets–Mercury and Venus–asteroids that exist within the Earth’s orbit become most visible at either sunrise or sunset.
The discovery was followed by measurements of the asteroid’s position conducted by David Tholen of the University of Hawai‘i. These measurements allowed astronomers to predict asteroid 2021 PH27’s future position, leading to follow-up observations on the 14th of August by DECam and the Magellan Telescopes at the Las Campanas Observatory in Chile.
These observations were then subsequently followed on August 15th by imaging made with the LasCumbres Observatory network of 1- to 2-meter telescopes located in Chile and South America by European Space Agency (ESA) researcher Marco Micheli.
The findings were so significant that many astronomers cancelled their scheduled projects to use telescope time with a variety of sophisticated instruments to further observe the asteroid. “Though telescope time for astronomers is very precious, the international nature and love of the unknown make astronomers very willing to override their own science and observations to follow up new, interesting discoveries like this,” explains Sheppard.
What makes the discovery of asteroid 2021 PH27 so special, and of great interest to astronomers, is the fact that it belongs to a population of solar system bodies that have been, thus far, notoriously difficult to spot.
Hunting For Inner Solar System Asteroids
Interior asteroids that exist close to the Sun tend to be difficult for astronomers to spot because of the glare from our central star. This difficulty is amplified by the fact that as they get close to the Sun these objects experience intense gravitational, tidal, and thermal forces that break them up into smaller–thus tougher to spot–fragments.
That means tracking an intact interior asteroid could have benefits for our understanding of these objects and the conditions they experience. In particular, if there are few asteroids on orbits similar to asteroid 2021 PH27’s, it may indicate to astronomers that many of these objects are loose ‘rubble piles.’ This may, in turn, give us a good idea of the composition of asteroids on a collision course with Earth, and crucially, how we could go about deflecting them.
“The fraction of asteroids interior to Earth and Venus compared to the exterior will give us insights into the strength and make-up of these objects,” Sheppard continues. “Understanding the population of asteroids interior to Earth’s orbit is important to complete the census of asteroids near Earth, including some of the most likely Earth impactors that may approach Earth during daylight and that cannot easily be discovered in most surveys that are observing at night, away from the Sun.”
In addition to this, asteroid 2021 PH27’s orbit is so close to the Sun that our star exerts considerable gravitational effects upon it, something that could make it a prime target for the study of Einstein’s geometric theory of gravity–better known as general relativity.
This close proximity to the Sun may actually be a recent development for asteroid 2021 PH27.
Asteroid 2021 PH27 is on the Move
Planets and asteroids don’t move around their stars in perfectly circular orbits, but in ellipses–flattened-out circles. The ‘flatter’ the ellipse, the greater we say its eccentricity is. Half of the ellipse’s widest diameter is the semi-major axis; for an orbit, it sets the body’s average distance from its parent star.
Asteroid 2021 PH27 has a semi-major axis of 70 million kilometres (43 million miles or 0.46 au) which gives it a 113-day orbit crossing the orbits of both Venus and Mercury. But it may not have always existed so close to the Sun.
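The 113-day orbit follows directly from that semi-major axis via Kepler’s third law: in solar units, the period in years is the semi-major axis in au raised to the power 3/2. A minimal check:

```python
a_au = 0.46                        # semi-major axis quoted above, in au
period_years = a_au ** 1.5         # Kepler's third law in solar units
period_days = period_years * 365.25

print(f"{period_days:.0f} days")   # in line with the quoted 113-day orbit
```

The small discrepancy of a day or so comes from rounding the semi-major axis to two decimal places.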
Astronomers believe that the asteroid may have started life in the main asteroid belt between Mars and Jupiter, with the gravitational influence of the inner planets drawing it closer to the Sun. This would make it similar to the Near-Earth Object (NEO) Apophis–only recently ruled out as a potential Earth impactor–which was also dragged closer to the Sun by gravitational interactions.
However, there is also some evidence, arising from 2021 PH27’s high orbital inclination of 32 degrees, that the asteroid may have a slightly more exotic origin. It could actually be an extinct comet from the outer edge of the solar system, pulled into a close orbit as it passed an inner-terrestrial–rocky–planet. Astronomers will be looking to future observations to determine which of these origins is correct, but unfortunately, this will have to wait. 2021 PH27 is about to enter solar conjunction, meaning that from our vantage point on Earth it will move behind the Sun. The asteroid will therefore only become available for further observations in 2022.
These follow-up observations will allow astronomers to better determine its orbit. And with this better determination will come a new official name that is hopefully a bit less of a mouthful than 2021 PH27. But what is certain is that this asteroid is not set to become any less interesting.
In 1915, a physicist by the name of Albert Einstein published a theory that managed to connect the curvature of space-time with energy. It is called General Relativity, and Einstein focused on it as a way to bring gravity into his previous special relativity work.
The theory of general relativity says that the gravitational attraction between objects’ masses comes from their warping of spacetime. Armed with this insight, general relativity was able to predict many things, most famously the existence of gravitational waves. But for a long time there was no direct proof of the theory; the first came from gravitational lensing.
As we were discussing, energy and space-time curvature are related. The most massive objects are capable of curving space-time itself. Think of a ball on a stretched sheet: the ball curves the sheet, and if you let a smaller object slide on that sheet, it will move towards the ball.
The surprise is that something absurdly massive can even bend light. Yes, light, the fastest carrier of information in the universe, can be bent — sort of. That is the ‘innovative’ thing about general relativity: the theory allows even massless objects to be affected by gravity. Photons, the particles that constitute light, have zero mass, and thus light can be bent in the presence of a strong gravitational field.
How it works — and what lenses have to do with it
The fact that lenses can distort images does not need relativity at all. A glass filled with water can distort the image of whatever sits behind or inside it. In photographic lenses, if the distortion is not corrected, images come out curved and don’t look realistic.
Scientists have been aware of lenses and their effects for a long time, but with the advent of telescopes, they also realized that objects with very large masses (some stars, galaxies, black holes) distort light in a similar way to lenses here on Earth. So these celestial objects can be used as a sort of lens — a gravitational lens.
When the lens and the target are close enough (from an astronomical perspective) and closely aligned, multiple images can be formed, appearing in an arc shape — this is called strong lensing. The multiple images of a single light source can be out of sync due to the curvature of space: some images take longer to reach the observer because their light travels a longer path.
When lens and source are in nearly perfect alignment, the image deforms into a ring shape, called an Einstein–Chwolson ring. The most famous multiple-image phenomena are the so-called “Einstein crosses”, where the image of a single source is deformed into a cross shape: four images of the target appear around the lens due to the gravitational phenomenon.
Meanwhile, weak lensing happens when the image is distorted, but without any copies of the target — just a distortion with elongated shapes. Microlensing, on the other hand, has to do with motion — of the source, the lens, or us. The motion changes the source’s magnification, making objects which are usually hard to observe appear brighter.
Proving Einstein right
Gravitational lensing was one of the key techniques used to prove general relativity. In 1919, a solar eclipse was observable from parts of the Southern Hemisphere, and the Hyades star cluster happened to lie in the same field of view as the Sun. Sir Frank Watson Dyson sent two expeditions to different locations on the globe to observe the eclipse — coincidentally, two Portuguese-speaking places: one to the island of Príncipe (in today’s Democratic Republic of São Tomé and Príncipe), with Arthur Eddington and Edwin Cottingham, and another to the city of Sobral in Brazil, with Charles Davidson and Andrew Crommelin.
The team at Sobral found better weather conditions and registered 7 usable images, in contrast to the Príncipe team’s 2. Later, the photographic plates from the two expeditions were analysed to estimate the deflection angle of starlight passing near the Sun. Considering the error bars, both results confirmed the theory. Despite the evidence, the confirmation did not give Einstein immediate prestige: observations of later eclipses had to help, and the scientific community took time to digest the theory.
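The quantity both expeditions were after is the deflection angle of starlight grazing the Sun’s limb, which general relativity predicts as α = 4GM/(c²R). Plugging in standard values for the Sun recovers the classic figure of about 1.75 arcseconds:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)      # GR deflection at the limb
alpha_arcsec = math.degrees(alpha_rad) * 3600   # radians -> arcseconds

print(f"{alpha_arcsec:.2f} arcsec")             # prints 1.75 arcsec
```

Newtonian gravity predicts exactly half this value, which is why measuring the angle could decide between the two theories.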
What can we find with gravitational lensing?
Lensing effects don’t only occur when there is a star or a galaxy in our line of sight. Dark matter is massive, and therefore has a gravitational field too. Scientists use gravitational lensing to estimate the amount of dark matter in giant galaxy clusters.
Microlensing can help astronomers and astrophysicists find exoplanets. When a lens star passes in front of a background star, the background star’s brightness reaches a maximum at perfect alignment and returns to its original magnification as the motion continues. If a planet orbits the lens star, anomalies appear in the brightness evolution, and researchers can confirm the presence of a planet around that host star.
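The smooth rise and fall in brightness follows the textbook point-source, point-lens magnification formula, where u is the lens-source separation in units of the Einstein radius. A minimal sketch:

```python
def magnification(u: float) -> float:
    """Point-source, point-lens microlensing magnification."""
    return (u**2 + 2) / (u * (u**2 + 4) ** 0.5)

# Magnification grows as the alignment gets closer (smaller u)
for u in (2.0, 1.0, 0.5, 0.1):
    print(f"u = {u:.1f} -> A = {magnification(u):.2f}")
```

A planet around the lens star perturbs this smooth curve, producing the short-lived anomalies that betray its presence.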
Remember the Cosmic Microwave Background (CMB)? It is the oldest ‘image’ of the universe, from when photons could first travel freely without interacting with matter. Since anything massive on light’s path can bend it, scientists can map dark matter by analyzing how distorted the CMB is.
The Planck satellite was the first instrument to deliver results on the distribution of dark matter in the universe through gravitational lensing. In the image illustrating this distribution, gray represents the Milky Way and very bright nearby galaxies; they need to be excluded because they interfere with the measurements. Dark blue marks regions with more dark matter than the brighter portions.
If detecting a few lensed galaxies already takes serious effort, if not struggle, consider that scientists are now thinking about lensed gravitational waves. How hard could that be, right? They predict a boost in a gravitational wave’s signal if it is amplified by strong lensing. The problem is that lensing also increases the noise and errors in the observations. Until then, a lot of work is being done with gravitational lensing — a technique that came from a very abstract theory, proving theoretical work deserves respect.
We are closer than ever before to understanding the composition of Mars thanks to the first observations of seismic activity on the planet made by the InSight lander. The NASA-led project, which landed on the surface of the Red Planet in November 2018 with the goal of probing beneath the Martian surface, observed several so-called ‘marsquakes’ which reveal details about its crust, mantle, and core.
InSight’s primary findings which are detailed in three papers published today in the journal Science, represent the first time scientists have been able to produce a detailed picture of the interior of a planet other than Earth.
“We are seeking to understand the processes that govern planetary evolution and formation, to discover the factors that have led to Earth’s unique evolution,” says Amir Khan, ETH Zurich and the University of Zurich, whose team used direct and surface reflected seismic waves to reveal the structure of Mars’ mantle. “In this respect, the InSight mission fills a gap in the scientific exploration of the solar system by performing an in-situ investigation of a planet other than our own.”
The results from the ongoing NASA mission–with the full title ‘Interior Exploration using Seismic Investigations, Geodesy and Heat Transport’— could reveal key insights into the Red Planet‘s formation and evolution, as well as helping us understand the key differences between our planet and Mars.
“One big question we would like to understand is why Earth is the only planet with liquid oceans, plate tectonics, and abundant life?” adds Khan. “Mars is presently on the edge of the solar system’s habitable zone and may have been more hospitable in its early history. Whilst we don’t yet know the answers to these questions, we know they are to be found on Mars, most likely within its interior.”
InSight first detected the presence of marsquakes from its position in Elysium Planitia near the Red Planet’s equator in 2019 and has since picked up more than 300 events–more than 2 a day–tracing many of them back to their source.
What is really impressive is what researchers can do with these quakes, using them as a diagnostic tool to ‘see’ deep into the planet’s interior.
“Studying the signals of marsquakes, we measured the thickness of the crust and the structure of the mantle, as well as the size of the Martian core,” Simon Stähler, a research seismologist at ETH Zurich, tells ZME Science. “This replicates what was done on Earth between 1900 and 1940 using the signals of earthquakes.”
From the Crust of Mars…
The observations made by InSight have allowed researchers to assess the structure of Mars’ crust, allowing them to determine its thickness and other properties in absolute numbers for the first time. The only values we previously had for the Martian crust were relative values that showed differences in thickness from area to area.
“As part of the bigger picture on the interior structure of Mars, we have determined the thickness and structure of the Martian crust,” Brigitte Knapmeyer-Endrun, a geophysicist at the University of Cologne’s Institute of Geology, tells ZME Science. “Previous estimates could only rely on orbital data–gravity and topography–that can accurately describe relative variations in crustal thickness, but no absolute values. These estimates also showed a wide variability.”
With data collected regarding the crustal thickness at InSight’s landing area, new seismic measurements, and data collected by previous missions, the team could map the thickness across the entire Martian crust, finding an average thickness of between 24 and 72 km.
Knapmeyer-Endrun explains that the data she and her team collected with InSight’s Seismic Experiment for Interior Structure (SEIS), particularly the very broad-band (VBB) seismometer–an instrument so sensitive it can record motion on an atomic scale–and information from the Marsquake Service (MQS) at ETH Zurich, suggest that the Red Planet’s crust is thinner than models have thus far predicted.
“We end up with two possible crustal thicknesses at the landing site–between 39 and 20 km– but both mean that the crust is thinner than some previous estimates and also less dense than what was postulated based on orbital measurements of the surface.”
Knapmeyer-Endrun continues by explaining that the InSight data also reveals the structure of the Martian crust as multi-layered with at least two interfaces that mark a change in composition. In addition to this, the team can’t rule out the presence of a third crustal layer before the mantle.
“The crust shows distinct layering, with a surficial layer of about 10 km thickness that has rather low velocities, implying that it probably consists of rather porous–fractured–rocks, which is not unexpected due to the repeated meteorite impacts,” says the geophysicist adding that we see something similar on the Moon, but the effect is more extreme due to that smaller body’s much thinner atmosphere.
Knapmeyer-Endrun is pleasantly surprised regarding just how much information InSight has been able to gather with just one seismometer. ”It’s surprising we were really able to pull all of this information about the interior of Mars from the recordings of quakes with magnitudes of less than 4.0 from a single seismometer,” she explains. “On Earth, we would not be able to even detect those quakes at a comparable distance. We typically use 10s or even 100s of seismometers for similar studies.”
And the marsquake data collected by InSight has not just proven instrumental in assessing the thickness and composition of the planet’s crust, it has also allowed scientists to probe deeper, to the very core of Mars itself.
…To the Martian Mantle and Core
Using direct and surface-reflected seismic waves from eight low-frequency marsquakes, Khan and his team probed deeper beneath the surface of Mars to investigate the planet’s mantle. They found the possible presence of a thick lithosphere extending to 500 km beneath the Martian surface, with an underlying low-velocity layer, similar to that found within Earth. Khan and his co-authors’ study reveals that the crustal layer of Mars is likely to be enriched with radioactive elements; these elements heat the crust, and this warming leaves less heat in the lower layers.
It was these lower regions that Stähler and his colleagues investigated with the use of faint seismic signals reflected by the boundary between the Martian mantle and the planet’s core. What the team discovered is that the Red Planet’s core is actually larger than previously calculated, with a radius of around 1840 km rather than previous estimates of 1600km. This means the core begins roughly halfway between the planet’s surface and its centre.
From the new information, we can also determine the core’s density and extrapolate its composition.
“We now know for sure the size of the core and it’s significantly larger than it had been thought to be for a long time,” says Stähler. “Because we found that the core is quite large, we, therefore, know it is not very dense. This means that Mars must have accumulated a substantial quantity of light, volatile elements such as sulfur, carbon, oxygen, and hydrogen.”
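The “roughly halfway between the surface and the centre” statement can be sanity-checked against Mars’ mean radius (about 3,390 km, a standard value assumed here rather than taken from the papers):

```python
mars_radius_km = 3390      # Mars' mean radius, assumed standard value
core_radius_km = 1840      # new core radius estimate from the seismic data

fraction = core_radius_km / mars_radius_km
volume_fraction = fraction ** 3    # fraction of Mars' volume inside the core

print(f"core reaches {fraction:.0%} of the way to the surface")
print(f"core occupies {volume_fraction:.0%} of the planet's volume")
```

The core boundary indeed sits just past the halfway mark, yet encloses only about a sixth of the planet’s volume, which is why the radius revision matters so much for the inferred density.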
This ratio of lighter elements is greater than that found within Earth’s denser core, and it could give us important hints about the differences in the formation of these neighbouring worlds.
“Somehow these light elements needed to get into the core. It may mean that the formation of Mars happened faster than Earth’s,” Stähler says. “These observations have fueled speculation that Mars might represent a stranded planetary embryo that depicts the chemical characteristics of the solar nebula located within the orbit of Mars.”
Just as Knapmeyer-Endrun did, Stähler expresses some surprise regarding just how successful InSight has been in gathering seismological data, emphasising the role good fortune has played in the mission thus far.
“We were able to observe reflections of seismic waves from the core–like an echo–from relatively small quakes. And the quakes were just in the right distance from the lander. Had we landed in another location, it would not have worked out,” the seismologist says. “And the landing site was only selected because it was flat and had no rocks, so it was really pure luck.”
Stähler says that he and his team will now attempt to use seismic waves that have crossed the core of Mars to determine if the planet’s core possesses a solid-iron inner-core like Earth, or if it is entirely liquid. Just one of the lingering questions that Knapmeyer-Endrun says InSight will use marsquakes to tackle over the coming years.
“There are still multiple open questions that we’d like to tackle with seismology. For example, which geologic/tectonic features are the observed marsquakes linked to? At which depth do olivine phase transitions occur in the mantle? And is there a solid inner core, like on Earth, or is the whole core of Mars liquid?” says the geophysicist.
And if we are to go by track record, the smart money is on InSight answering these questions and more. “Within just 2 years of recording data on Mars, this single seismometer has been able to tell us things about the crust, mantle and core of Mars that we’ve been speculating about for decades.”
The surfaces of neutron stars may feature mountains, albeit ones that are no more than millimetres tall, new research has revealed. The minuscule scale of neutron star mountains is a result of the intense gravity produced by these stellar remnants that are the second densest objects in the Universe after black holes.
Because neutron stars have the mass equivalent to a star like the Sun compressed into a diameter that is about the size of a city on Earth–about 10km– they have a gravitational pull at their surface that is as much as 40,000 billion times stronger than Earth’s.
This presses features on that surface flat, making for almost perfect spheres. Yet the new research, presented at the National Astronomy Meeting 2021, shows that these stellar remnants do feature some tiny topographic deformations, analogous to mountains on a planet’s surface.
The finding was a result of complex computer modelling by a team of researchers led by the University of Southampton’s Fabian Gittins. The PhD student’s team simulated a realistic neutron star and then calculated the forces acting upon it. What the research really shows is how well neutron stars can support deviations from a perfect sphere without their crust being strained beyond breaking point.
This revealed how mountains could be created on such dense stellar remnants and demonstrated that such formations would be no taller than a fraction of a millimetre.
“For the past two decades, there has been much interest in understanding how large these mountains can be before the crust of the neutron star breaks, and the mountain can no longer be supported,” says Gittins, adding that the results show how neutron stars truly are remarkably spherical objects. “Additionally, they suggest that observing gravitational waves from rotating neutron stars may be even more challenging than previously thought.”
Mountain formation on neutron stars has been modelled before, but these new findings suggest such features would be hundreds of times smaller than the mountains of a few centimetres previously predicted. This is because those older models took the crust of a neutron star to the edge of breaking point at every single point, something the up-to-date research suggests is less than realistic.
Neutron stars form when massive stars run out of fuel to power nuclear fusion. The outward force balancing against gravity’s inward pull disappears, leading to the gravitational collapse of the star. During the course of this collapse, the massive star ejects its outer material in a supernova explosion, leaving behind a core of ultradense material. This stellar remnant is only protected from further collapse–and, in turn, from becoming a black hole–by the quantum mechanical properties of the neutron-rich material that composes it.
The finding may have implications that go beyond the modelling of neutron stars. Tiny deformations on the surface of rapidly spinning neutron stars called pulsars could launch gravitational waves–the tiny ripples in spacetime predicted by general relativity and detected here on Earth by the LIGO/Virgo collaboration.
Unfortunately, as precise and sensitive as the LIGO laser interferometer is, it is still not powerful enough to detect gravitational waves launched by these ant-hill like mountains. It is possible that future upgrades to these Earth-based detectors and advancements such as the space-based gravitational wave detector LISA could make observing the effect of these tiny bumps possible.
Astrophysicists have finally observed the spiralling merger between a neutron star and a black hole. The cataclysmic event was witnessed in a gravitational wave signal by the LIGO/Virgo/KAGRA collaboration and is the first time that one of these elusive but titanic ‘mixed’ merger events has been spotted and had its nature confirmed. And just like buses, you wait for an age for one to come and then two arrive at once.
The researchers also detected a gravitational wave signal from another event of the same nature just ten days after the first, with the signals picked up by LIGO/Virgo on 5th January 2020 and the 15th January 2020 respectively.
The finding is significant because of the three types of mergers between stellar remnant binaries–neutron star/neutron star mergers, black hole/ black hole mergers, and neutron star/ black hole or mixed mergers–this latter category is the only one we hadn’t detected until now and has proved fairly elusive.
“With this new discovery of neutron star- black hole mergers outside our galaxy, we have found the missing type of binary,” says Astrid Lamberts, a CNRS researcher at Observatoire de la Côte d’Azur, in Nice, France. “We can finally begin to understand how many of these systems exist, how often they merge, and why we have not yet seen examples in the Milky Way.”
These detections of signals from separate mixed merger events come just six years after the LIGO/Virgo collaboration first detected gravitational waves, confirming the predictions about ripples in the fabric of spacetime made by Einstein’s theory of general relativity a century earlier.
Though further observations are needed, the results produced by the team could help astronomers and astrophysicists refine their knowledge of the systems in which these elusive mergers occur, determining both how these mixed binary pairings form and how frequently their components spiral together and merge.
“Gravitational waves have allowed us to detect collisions of pairs of black holes and pairs of neutron stars, but the mixed collision of a black hole with a neutron star has been the elusive missing piece of the family picture of compact object mergers,” says Chase Kimball, a Northwestern University graduate student. “Completing this picture is crucial to constraining the host of astrophysical models of compact object formation and binary evolution. Inherent to these models are their predictions of the rates that black holes and neutron stars merge amongst themselves.
“With these detections, we finally have measurements of the merger rates across all three categories of compact binary mergers.”
Chase Kimball, Northwestern University
Kimball is the co-author of a study published in the Astrophysical Journal Letters and part of a team that includes researchers from the LIGO Scientific Collaboration (LSC), the Virgo Collaboration and the Kamioka Gravitational Wave Detector (KAGRA) project.
A Gravitational-Wave Signal One Billion Years in the Making
One of the most astounding things about the detection of gravitational waves is just how precise a piece of equipment has to be to detect these tiny ripples in the fabric of spacetime. Since that first key detection in 2015, operators at the National Science Foundation’s (NSF) LIGO laser interferometer and their counterparts at the Virgo detector in Italy have detected over 50 gravitational wave signals from mergers between black hole pairs and neutron star binaries.
The first mixed neutron star/black hole merger spotted by the collaboration on January 5th is believed to be the result of a merger between a black hole six times the mass of the Sun and a neutron star with a mass 1.5 times that of our star. The event, designated GW200105, occurred 900 million light-years away from Earth and was picked up as a strong signal at the LIGO detector located in Livingston, Louisiana.
LIGO Livingston’s partner detector, located in Hanford, Washington, missed the signal as it was offline at the time. Virgo, on the other hand, caught the signal, but it was somewhat obscured by noise. “Even though we see a strong signal in only one detector, we conclude that it is real and not just detector noise,” says Harald Pfeiffer, group leader in the Astrophysical and Cosmological Relativity department at Max Planck Institute for Gravitational Physics (AEI) in Potsdam, Germany. “It passes all our stringent quality checks and sticks out from all noise events we see in the third observing run.”
The fact that GW200105 was only strongly picked up by one detector makes it difficult to pinpoint in the sky; the international team was only able to ascertain that it came from a region about 34 thousand times the size of the full Moon.
“While the gravitational waves alone don’t reveal the structure of the lighter object, we can infer its maximum mass,” says Bhooshan Gadre, a postdoctoral researcher at the AEI. “By combining this information with theoretical predictions of expected neutron star masses in such a binary system, we conclude that a neutron star is the most likely explanation.”
Despite the fact that the second mixed merger occurred farther away–1 billion light-years from Earth–its signal was spotted by both LIGO detectors and the Virgo detector. This means the team was able to localise the merger–named GW200115–more precisely, to a region of the sky around three thousand times the size of Earth’s moon. This second merger is believed to have occurred between a black hole nine times the mass of our Sun and a neutron star almost twice the mass of the Sun.
These Black Holes Weren’t Messy Eaters
Because of the extraordinary distances involved, astronomers have yet to confirm either merger in the electromagnetic spectrum upon which traditional astronomy is based. Despite being informed of the events almost immediately, astronomers could not find the telltale flashes of light indicating the mergers.
This is unsurprising, as any light from such distant events would be incredibly dim after a billion years of journeying to Earth, no matter what wavelength it is observed in or how powerful the telescope used to attempt the follow-up observation.
There is also another possible reason why no light was seen from these events: the neutron star components of these mergers may have been swallowed whole by their black hole partners.
“These were not events where the black holes munched on the neutron stars like the cookie monster and flung bits and pieces about,” explains Patrick Brady, a professor at the University of Wisconsin-Milwaukee and Spokesperson of the LIGO Scientific Collaboration, colourfully. “That ‘flinging about’ is what would produce light, and we don’t think that happened in these cases.”
Whilst these are the first two confirmed examples of such mixed mergers, there have been suspects spotted by their gravitational-wave signals in the past. In August 2019, a signal designated GW190814 was detected which researchers say involved the collision of a 23-solar-mass black hole with an object of about 2.6 solar masses. This second object could have been either the heaviest neutron star or the lightest black hole ever found. That ambiguity left the signal unconfirmed as the product of a mixed merger event, and other similar finds have been plagued by similar ambiguities.
Now that two confirmed detections of mixed mergers have been made, astrophysicists can set about testing current estimates, which suggest such collisions should occur roughly once per month within a distance of 1 billion light-years of Earth.
They can also set about discovering the origins of such binaries, possibly eliminating one or two of the proposed locations in which such events are believed to occur: stellar binary systems, dense stellar environments including young star clusters, and the centers of galaxies.
Key to these investigations will be the fourth observation run of the laser interferometers that act as our gravitational wave detectors, set to begin in summer 2022.
“The detector groups at LIGO, Virgo, and KAGRA are improving their detectors in preparation for the next observing run scheduled to begin in summer 2022,” concludes Brady. “With the improved sensitivity, we hope to detect merger waves up to once per day and to better measure the properties of black holes and super-dense matter that makes up neutron stars.”
Six galaxies detected by Hubble and Spitzer come from a time astronomers call the Cosmic Dawn — a period in the history of our universe just 250-350 million years after the Big Bang (the age of the universe is currently estimated at 13.8 billion years), when the first stars had just started shining.
After the Big Bang, the universe was a bit of a hot mess. It was hot, dense, and virtually opaque. It only became transparent during a period called Recombination, in which a soup of protons and electrons combined to form the first true hydrogen atoms. Prior to Recombination, light was not able to travel freely through the universe, as it was constantly scattered off the free electrons and protons. But as the atoms formed and there were fewer free particles, a clear path opened for light to travel the universe.
It is in this period that the universe became transparent — and it is also in this period that the six galaxies were formed. It took light from these galaxies most of the universe’s current lifetime to reach us, and looking at them is basically like looking at the Cosmic Dawn. For Professor Richard Ellis from University College London, UK, observations like this are the crowning achievement of decades of work.
In a study published in Monthly Notices of the Royal Astronomical Society, Ellis and colleagues from the UK, Germany, and US estimated the time at which the Cosmic Dawn began by using six galaxies which they estimate to have formed between 250 and 350 million years after the Big Bang.
In order to estimate the galaxies’ ages, they must first assume a particular value for the universe’s rate of expansion (over which there is still some debate). The reason is that they are computing the lookback time — the time light from the ancient galaxies traveled to reach us.
As the universe expands, light coming from stars and galaxies has its wavelength stretched — something called the redshift effect. By measuring how much the wavelength has increased, researchers can estimate how far the light has traveled — and consequently, how old the light-producing object is.
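That stretch maps onto a single number, the redshift z, via 1 + z = λ_observed / λ_emitted. A quick sketch of the idea (the wavelengths below are illustrative, not figures from the study):

```python
def redshift(lambda_observed_nm, lambda_emitted_nm):
    """Redshift z from the wavelength stretch: 1 + z = lambda_obs / lambda_emit."""
    return lambda_observed_nm / lambda_emitted_nm - 1

# Illustrative numbers: the hydrogen Lyman-alpha line is emitted at 121.6 nm;
# if a galaxy's spectrum shows it stretched out to ~1,240 nm in the infrared...
z = redshift(1240, 121.6)
print(round(z, 1))  # ~9.2, deep in the Cosmic Dawn era
```

Turning a redshift like this into an age in years is the step that requires assuming an expansion rate, which is why the debate over that value matters for the result.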
The recent results were based on data from the Hubble and Spitzer space telescopes, both famous for being capable of observing some of the oldest objects in the universe. To estimate the redshift, the team also needed the Atacama Large Millimetre Array (ALMA) in Chile, the European Southern Observatory’s Very Large Telescope, the twin Keck telescopes in Hawaii, and the Gemini-South telescope.
The ages of the sample can only be computed by combining data from all those different telescopes. However, astronomers and cosmologists have great expectations for the Hubble/Spitzer successor, the James Webb Space Telescope (JWST). The most ambitious, biggest, and most sensitive telescope NASA has ever created will be able to observe those Cosmic Dawn galaxies directly. JWST is also expected to deliver a larger sample of galaxies, providing a better representation of the Cosmic Dawn.
Anders Sandberg decided to make an effort to answer a Physics StackExchange question. Sandberg, a researcher, science debater, and futurist, took the seemingly silly question very seriously. The question reads:
“Supposing that the entire Earth was instantaneously replaced with an equal volume of closely packed, but uncompressed blueberries, what would happen from the perspective of a person on the surface?”
First of all — okay, that is… a question. But on a closer look, there’s actually a lot of physics hidden in that question, which Sandberg addressed in a small study published on arXiv.
First, we need an idea of how planets are formed. Our Solar System was once a cloud of gas and dust — a nebula. The center coalesced to form the Sun, while the dust of the inner solar system went on to form the Earth. From then on, gravity takes charge and everything depends on it.
The precise gravitational field of our planet is ideal for keeping us here on the surface, along with liquid oceans, a Moon that makes tides possible, not too many earthquakes, and, maybe most important of all for humans, our atmosphere. Blueberries would wreak havoc on all that.
Welcome to Blueberry Earth
Second, we need to specify which blueberries we’re talking about. The species Sandberg considered is Vaccinium corymbosum, a succulent blueberry bigger than other types of the fruit. To make a planet out of berries (and instantaneously), we need to consider the ingredient’s density — in this case, the blueberries’ density of 700 kg/m³, roughly an eighth of Earth’s average density.
Changing the density while maintaining the volume of the Earth means the planet loses mass. If made entirely of the fruit, our planet’s mass would drop to 13% of what it is now, which means the gravitational pull of Blueberry Earth also drops substantially. We could no longer keep the Moon gravitationally bound, and presumably, the Moon would just fly away.
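A quick back-of-envelope check of those figures (the densities are approximate, and both the 13% mass and the Moon-like surface gravity fall straight out of the density ratio):

```python
# Rough check of the numbers above.
RHO_BLUEBERRY = 700    # kg/m^3, packed Vaccinium corymbosum (Sandberg's figure)
RHO_EARTH = 5510       # kg/m^3, Earth's mean density

# Same volume, lower density => mass scales with the density ratio.
mass_fraction = RHO_BLUEBERRY / RHO_EARTH
print(f"mass fraction: {mass_fraction:.0%}")   # ~13% of Earth's mass

# Surface gravity g = GM/R^2: with R unchanged, g scales the same way.
g_blueberry = 9.81 * mass_fraction
print(f"surface gravity: {g_blueberry:.2f} m/s^2")  # ~1.25 m/s^2, near the Moon's 1.62
```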
Because of the change in the planet’s radius, its rotation changes. It’s like a ballerina slowing her rotation by extending her leg and then speeding it up by retracting it. This is the conservation of angular momentum. A smaller planet spins faster, which results in a shorter day. A day on Blueberry Earth would last approximately 19 hours.
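The 19-hour figure follows from conservation of angular momentum: for a uniform sphere, L = Iω with I ∝ MR², so with the planet’s mass and L fixed, the day length scales as (R_new/R_old)². Using Sandberg’s estimate that the surface ends up falling about 715 km as the berries pulp:

```python
# Sketch of the angular-momentum argument (Sandberg's numbers, rounded).
# L = I * omega, I proportional to M * R^2; with L and M fixed,
# the day length T scales as (R_new / R_old)^2.
R_EARTH_KM = 6371
COLLAPSE_KM = 715          # how far the surface falls as the berries pulp
r_new = R_EARTH_KM - COLLAPSE_KM

day_hours = 24 * (r_new / R_EARTH_KM) ** 2
print(f"{day_hours:.0f} hours")  # ~19 hours
```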
So what happens to the berries?
So, we now have no Moon. It’s time to focus on how the planet merges its ingredients and forms a spherical structure. Blueberries are not a particularly sturdy material, so coalescing them together has consequences. Sandberg estimates that while the surface of Blueberry Earth would be free blueberries, they would start pulping around 11.5 meters below the surface. The pulping effect further causes the planet’s radius to shrink.
The Blueberry Earth’s gravitational pull will now be comparable to that of the Moon — which spells trouble for our atmosphere. With a different gravitational pull, the atmosphere is just not the same — and neither is the temperature. Everything is different, and you would not like the El Niño season on Blueberry Earth. Our sky is now blueish (go figure), but the clouds are still white.
Lava or jam?
Some geology may still happen on Blueberry Earth, but it’s not the geology we’re used to. Taking the pulping mechanism into account, along with the air trapped between the blueberries and the lower gravitational field, we end up with mechanisms that can produce colossal geysers. However, they’d be made of hot blueberry juice or jam. So… yum?
We could imagine geysers similar to those of Enceladus, one of Saturn’s moons, which ejects water vapor and ice particles. Enceladus is small: six of them would fit in Amazonas, the biggest Brazilian state. Its surface is covered in ice, and scientists have discovered that below the icy crust there is an ocean of liquid water.
Due to the pressure between the fruits, the core would be compressed enough to become ice. Yes, even at high temperatures, Blueberry Earth’s core could basically become ice cream. Given the amount of water in blueberries, water’s phase diagram shows that ice can form at very high pressures even when it is hot. In our case, the result would be ice VII, which forms above nearly 30,000 times sea-level pressure.
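As a sanity check, the central pressure of a uniform-density sphere is P_c = (2/3)πGρ²R². Plugging in the blueberry density and Earth’s pre-collapse radius (a crude estimate, since the real blueberry planet shrinks and compresses as it settles, but enough to show the core reaches the ice VII regime, commonly quoted at roughly 2–3 GPa):

```python
import math

# Central pressure of a self-gravitating uniform-density sphere:
# P_c = (2/3) * pi * G * rho^2 * R^2
G = 6.674e-11          # m^3 kg^-1 s^-2, gravitational constant
RHO = 700              # kg/m^3, blueberry density
R = 6.371e6            # m, Earth's radius (before the collapse)

p_center = (2 / 3) * math.pi * G * RHO**2 * R**2
print(f"{p_center / 1e9:.1f} GPa")  # ~2.8 GPa, tens of thousands of atmospheres
```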
A day on Blueberry Earth
Without our planet’s metallic core, there would also be no magnetic field to protect us from radiation — this would leave us and all other species exposed to solar radiation and cosmic rays, not a nice place to live (to put it mildly). Let’s not forget this planet would be made of organic material, so the planet itself couldn’t survive the fury of the radiation streaming through the universe.
Blueberry Earth wouldn’t be stable enough to last the lifetime of a regular rocky planet. Our good old iron-silicate-basaltic planet is much better: we can live on it, hold the atmosphere we need, and plant berries on it. Unless you are an adapted extremophile Mummy Berry citizen, you’re gonna have a bad time on Blueberry Earth. Sandberg sums it up thusly:
“So, to sum up, to a person standing on the surface of the Earth when it turns into blueberries, the first effect would be a drastic reduction of gravity. Standing on the blueberries might be possible in theory, except that almost immediately they begin to compress rapidly and air starts erupting everywhere. The effect is basically the worst earthquake ever, and it keeps on going until everything has fallen 715 km.”
“While this is going on everything heats up drastically until the entire environment is boiling jam and steam. The end result is a world that has a steam atmosphere covering an ocean of jam on top of warm blueberry granita. The final state of Blueberry Earth is somewhat similar to oceanic exoplanets.”
So as cool as it sounds, Blueberry Earth is not exactly where you’d want to spend your time. But you can spend your time having some blueberries on Earth, and that’s probably a better idea.
The original text: Blueberry Earth. If you are really that curious you can follow some of the math here.
In late 2019 and early 2020 Betelgeuse, a red supergiant in the constellation of Orion, made headlines when it underwent a period of extreme dimming. This dip in brightness for the star, which is usually around the tenth brightest in the night sky over Earth, was so extreme it could even be seen with the naked eye.
Some scientists even speculated that the orange-hued supergiant might be about to go supernova, an event which would have been visible in daylight for months thanks to its power and relative proximity–700 light-years from Earth. Yet that supernova didn’t happen, and Betelgeuse returned to its normal brightness.
This left the ‘great dimming’ of Betelgeuse–something never seen in 150 years of studying the star–an open mystery for astronomers to investigate.
Now, a team of astronomers led by Miguel Montargès, Observatoire de Paris, France, and KU Leuven, Belgium, and including Emily Cannon, KU Leuven, have found the cause of this dimming, thus finally solving this cosmic mystery. The researchers have discovered that the darkening of Betelgeuse was caused by a cloud of dust partially concealing the red supergiant.
“Our observations show that the Southern part of the star was hidden and that the whole disk of the star was fainter. The modelling is compatible with both a cool spot of the photosphere and a dusty clump in front of the star,” Montargès tells ZME Science. “Since both signatures have been detected by other observers, we conclude that the Great Dimming was caused by a cool patch of material that, due to its lower temperature, caused dust to form in gas cloud ejected by the star months to years before.”
The ‘great dimming’ of this massive star lasted a few months and presented a unique opportunity for researchers to study the dimming of stars in real time.
“The dimming of Betelgeuse was interesting to professional and amateur astronomers because not only was the appearance of the star changing in real time we could also see this change with the naked eye. Being able to resolve the surface of a star during an event like this is unprecedented.”
Emily Cannon, KU Leuven
The team’s research is published in the latest edition of the journal Nature.
A Unique Opportunity to Capture a Dimming Star
Montargès and his team first trained the Very Large Telescope (VLT)–an ESO-operated telescope based in the Atacama Desert, Chile–on Betelgeuse when it began to dim in late 2019. The astronomers took advantage of the Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) instrument at the VLT as well as data from the telescope’s GRAVITY instrument, creating stunning images that tracked the great dimming event and allowed them to distinguish it from the regular dips in brightness that supergiant stars display.
Betelgeuse has been seen to decrease in brightness before as a result of its convection cycle, which causes material to rise and fall throughout the star’s layers based on its temperature. This convection cycle results in a semi-regular dimming cycle that lasts around 400 days.
When the ‘great dimming’ was first observed in October 2019, astronomers assumed it was due to the star’s natural dimming cycle. That assumption was dismissed by December of that year, when the star became the darkest it had been in a century. The star returned to its normal brightness by April 2020.
“No other red supergiant star has been seen dimming that way, particularly to the naked eye. Even Betelgeuse that has been closely monitored for 150 years has not shown such behaviour.”
Miguel Montargès, Observatoire de Paris, France
Not only does this finding solve the mystery of this star’s dimming, but it also provides evidence of the cooling of a star causing the creation of stardust which goes on to obscure the star.
Even though Betelgeuse is much younger than the Sun–10 million years old compared to our star’s age of 4.6 billion years–it is much closer to the supernova explosion that will signal the end of its lifecycle. Astronomers had first assumed that the dimming was a sign that the red supergiant was exhibiting its death throes ahead of schedule.
Thanks to the work of Montargès and his team, we now know this isn’t the case. The dimming is the result of a veil of stardust obscuring the star’s southern region.
“We have observed dust around red supergiant stars in the past,” Cannon explains. “However, this is the first time we have witnessed the formation of dust in real-time in the line of sight of a red supergiant star.”
This stardust will go on to form the building blocks of the next generation of stars and planets, and the observations made by Montargès, Cannon and the team represent the first time we have seen an ancient supergiant star ‘burping’ this precious material into the cosmos.
The Giant that Burped Stardust
The surface of Betelgeuse–which, with a diameter hundreds of times that of the Sun, would consume the orbits of the inner planets, including Earth, were it to sit at the centre of our solar system–is subject to regular changes as bubbles of gas move around it, change in size, and swell beneath it. Montargès, Cannon and their colleagues believe that sometime before the great dimming began, the red supergiant ‘burped’ out a large bubble of gas.
This bubble moved away from the star leaving a cool patch on its surface. It was within this cool patch that material was able to solidify, creating a cloud of solid stardust. The team’s observations show for the first time that stardust can rapidly form on the surface of a star.
“We have directly witnessed the formation of so-called stardust,” says Montargès. “The dust expelled from cool evolved stars, such as the ejection we’ve just witnessed, could go on to become the building blocks of terrestrial planets and life.”
With regards to the future, the researchers point to the Extremely Large Telescope (ELT), currently under construction in the Atacama Desert as the ideal instrument to conduct further observations of Betelgeuse. “With the ability to reach unparalleled spatial resolutions, the ELT will enable us to directly image Betelgeuse in remarkable detail,” says Cannon. “It will also significantly expand the sample of red supergiants for which we can resolve the surface through direct imaging, further helping us to unravel the mysteries behind the winds of these massive stars.”
For Montargès, solving this mystery and observing a phenomenon for the first time solidifies a lifetime of fascination with Betelgeuse and points towards a deeper understanding of the stardust that forms the building blocks of stars, planets, and us. “We have seen the production of star dust, materials we are ourselves made of. We have even seen a star temporarily change its behavior on a human time scale.”
Astronomers have spotted a rare giant ‘blinking’ star towards the centre of the Milky Way. The team believes the serendipitous discovery, which came after 17 years of observation, is another example of a rare class of ‘blinking giant’ stars thought to be eclipsing binary systems.
The giant star, around 100 times the size of the Sun and designated VVV-WIT-08, was spotted by the international team of researchers as it decreased in brightness by a factor of 30, a dimming extreme enough for the star to almost disappear from the sky entirely.
Changes in brightness such as this are usually associated with stars that pulsate or stars that exist in a binary system and are eclipsed by their companion star.
This giant star, located around 25,000 light-years from Earth, dimmed over a period of several months in 2013 and then brightened again, behaviour not commonly associated with either of the dimming mechanisms listed above.
The team of astronomers investigating VVV-WIT-08 believe that the dimming it demonstrated eight years ago, which it has not repeated since, is the result of an as-yet-unseen orbital companion eclipsing the giant star.
They add that this eclipsing object could be another star or a planet, but one thing that is fairly certain is that it is surrounded by some form of opaque disc, which is responsible for causing the star’s extreme dimming.
“It’s amazing that we just observed a dark, large and elongated object pass between us and the distant star, and we can only speculate what its origin is,” says Sergey Koposov from the University of Edinburgh.
Alongside Leigh Smith from the Institute of Astronomy, the University of Cambridge, and Philip Lucas from the University of Hertfordshire, Koposov is one of the authors of a paper detailing the discovery published in the journal Monthly Notices of the Royal Astronomical Society.
VVV-WIT-08 isn’t the only example of a star dimming in this unusual fashion, but it is arguably the most extreme example discovered thus far.
What’s Going On with Giant Blinking Stars?
Another example of this form of an eclipsing binary system is Epsilon Aurigae, first discovered in 1821 by German astronomer Johann Heinrich Fritsch. The visible component of this binary system is the supergiant star Almaaz–an Arabic name meaning the he-goat–which dims by around 50% every 27 years.
Though this dimming is less pronounced than that of VVV-WIT-08, it lasts for a prolonged period of time; between 640 and 730 days–around two years. This means the dimming component of this binary system must be something truly immense, probably another star surrounded by a thick ring of obscuring dust, angled edge-on from our perspective.
Whilst this two-year eclipse, which last occurred between 2009 and 2011, may seem extreme, it’s topped by the eclipse seen in another similar system discovered more recently–TYC 2505-672-1, found around 10,000 light-years from Earth.
This system currently holds the record for the longest known eclipse. Every 69 years, the massive star component of this system dims by 4.5 magnitudes for a period of around three and a half years.
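Dimming factors and magnitudes are two ways of stating the same thing, related by the standard Pogson relation Δm = 2.5·log₁₀(flux ratio). A small converter ties the figures in this article together:

```python
import math

def dimming_to_magnitudes(flux_ratio):
    """Magnitude change for a given brightness drop (Pogson relation)."""
    return 2.5 * math.log10(flux_ratio)

def magnitudes_to_dimming(delta_m):
    """Brightness drop factor for a given magnitude change."""
    return 10 ** (delta_m / 2.5)

print(round(dimming_to_magnitudes(30), 1))  # a factor-30 dip is ~3.7 magnitudes
print(round(magnitudes_to_dimming(4.5)))    # a 4.5-magnitude dip is ~63x fainter
```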
Thanks to the team that found VVV-WIT-08, the catalogue of these eclipsing binary systems looks set to expand, as the astronomers have already found two more giant blinking stars ripe for further investigation.
“Occasionally we find variable stars that don’t fit into any established category, which we call ‘what-is-this?’, or ‘WIT’ objects,” remarks Lucas. “We really don’t know how these blinking giants came to be.”
What Does the Future Hold for Giant Blinking Stars?
The team made the discovery of VVV-WIT-08 using data collected by the VISTA Variables in the Vía Láctea (VVV) survey, which ran from 2010 to 2016. The survey’s main mission was the observation of the Milky Way’s central bulge and southern disc in the near-infrared. The project utilised the capabilities of the VISTA telescope located at the Paranal Observatory, Chile.
Lucas adds: “It’s exciting to see such discoveries from VVV after so many years planning and gathering the data.”
The dimming of VVV-WIT-08 was also captured by the Optical Gravitational Lensing Experiment (OGLE), operated by researchers at the University of Warsaw. Our galaxy’s central bulge was also a primary target for OGLE, which makes its observations in light closer to the visible range of the electromagnetic spectrum.
The main advantage of OGLE is the fact that it makes frequent observations, something that was vital for building a model of VVV-WIT-08. This combination of observations also showed the astronomers that the giant star dims in both the visible spectrum and the infrared spectrum.
The team’s findings show that there are undoubtedly more eclipsing binary systems in the Milky Way left to be discovered. But this may not be the most difficult part of the process of investigating these systems.
“There are certainly more to be found, but the challenge now is in figuring out what the hidden companions are, and how they came to be surrounded by discs, despite orbiting so far from the giant star,” Smith concludes. “In doing so, we might learn something new about how these kinds of systems evolve.”
Astronomers have completed the first in-depth census of molecular clouds in the nearby Universe. The study has revealed that these star-forming regions not only look different but also behave differently. This finding runs in opposition to previous scientific consensus, which considered these clouds of dust and gas to be fairly uniform.
The project–Physics at High Angular Resolution in Nearby GalaxieS (PHANGS)–consisted of a systematic survey of 100,000 molecular clouds in 90 galaxies in the local Universe. The primary aim of PHANGS was to get an idea of how these star-forming regions are influenced by their parent galaxies.
The census was conducted using the Atacama Large Millimeter/submillimeter Array (ALMA), located on the Chajnantor plateau in the Atacama Desert of northern Chile. Whilst this does not mark the first time stellar nurseries have been studied with ALMA, it is the first census of its kind to observe molecular clouds across more than one galaxy or a small region of a single galaxy.
“We have carried out the first real ‘census’ of these stellar nurseries, and it provided us with details about their masses, locations, and other properties,” Adam Leroy, Associate Professor of Astronomy at Ohio State University (OSU) tells ZME Science. “Some people thought that all stellar nurseries across every galaxy look more or less the same, and it took having a really big, sensitive, and high-resolution survey of many galaxies with a telescope such as ALMA to see that this is not the case. This survey allows us to see how the stellar nurseries change across different galaxies. “
As a result, this is the first time that astronomers have been granted a look at the ‘big picture’ when it comes to these star-forming regions. Erik Rosolowsky, Associate Professor of Physics at the University of Alberta and a co-author of the research, points out that what ALMA has allowed the team of astronomers to create is essentially a new form of ‘cosmic cartography’: 90 maps of unparalleled detail charting the regions of space where the next generation of stars will be born.
“By doing this we will combine what we are learning from ALMA about the clouds that form stars with pictures of newly formed stars from these other telescopes. This promises to give us the best view ever of the full life cycle of these stellar nurseries, and our most complete picture ever of the full cycle of star birth and death.”
“Our survey is the first one to capture the demographics of these stellar nurseries across a large number of the galaxies near the Milky Way,” adds Leroy, the lead author of a paper presenting the PHANGS ALMA survey. “We used these measurements to measure the characteristics of these nurseries, their lifetimes, and the ability of these objects to form new stars.”
How Galactic Neighborhoods Influence Star-Forming Clouds
The variety displayed by the molecular clouds surveyed in the PHANGS project was visible due to ALMA’s ability to take millimeter-wave images with the same sharpness and quality as images taken in the visible spectrum.
“While optical pictures show us light from stars, these ground-breaking new images show us the molecular clouds that form those stars,” says Leroy. “That helped us to see that stellar nurseries actually change from place to place.”
The team compared the changes displayed by molecular clouds from galaxy to galaxy to changes in houses, neighbourhoods and cities from region to region here on Earth.
“How stellar nurseries relate to their parent galaxies has been a big question for a long time. We’re able to answer this because our survey expands the amount of data on stellar nurseries by a factor of almost 100,” says Leroy. “Before this, it was very common to study a few hundred nurseries in one galaxy. So it was kind of like trying to learn about houses in general by looking only at neighbourhoods in Columbus, Ohio.
“You will learn some things about houses, but you miss the big picture and a lot of the variation, complexity, and commonality. With this survey, we looked at houses in many cities across many countries.”
Adam Leroy, Ohio State University
Leroy continues by explaining that stellar nurseries ‘know’ about their neighbourhood, meaning that molecular clouds are different depending on what galaxy they live in or where in that galaxy they are located. “So the stellar nurseries that we see in the Milky Way won’t be the same as those in a different galaxy, and the stellar nurseries in the outer part of a galaxy–where we live–aren’t the same as those near the galaxy centre.”
The team found clouds in the dense central regions of galaxies tend to be more massive, denser, and more turbulent than those located on the outskirts of a galaxy. In addition, the census revealed that the lifecycle of clouds also depends on their environment. Annie Hughes, an astronomer at L’Institut de Recherche en Astrophysique et Planétologie (IRAP), explains that this means both the rate at which a cloud forms stars and the processes that ultimately destroy clouds seem to depend on where the cloud lives.
How Differences in Molecular Clouds Influence the Birth of Stars
Because all stars are formed in molecular clouds, understanding the differences in these clouds of gas and dust and how they are caused by the conditions in which they exist is key to better understanding the processes that are driving the birth of stars like our own Sun.
These molecular clouds are so vast that they can birth anywhere from thousands to hundreds of thousands of stars before being exhausted of raw materials. These new observations have shown astronomers that each cosmic neighbourhood can have an effect on where stars are born and how many stars are spawned.
“Every star in the sky, in fact, every star in every galaxy, including our Sun, was born in one of these stellar nurseries. These are really the engines that build galaxies and make planets, and they’re just an essential part of the story of how we got here.”
Adam Leroy, Ohio State University
The next step for the astronomers will be to combine the data provided by ALMA with surveys conducted by other telescopes, including the Hubble Space Telescope and the Very Large Telescope (VLT), also located in the Atacama Desert, Chile. Leroy hopes that this, along with observations made with the James Webb Space Telescope (JWST), will help astronomers answer the question of how the diversity of molecular structures affects the stars which form within them. He explains: “By doing this we will combine what we are learning from ALMA about the clouds that form stars with pictures of newly formed stars from these other telescopes.
This promises to give us the best view ever of the full life cycle of these stellar nurseries, and our most complete picture ever of the full cycle of star birth and death.”
Adam Leroy, Ohio State University
Leroy concludes by pointing out why the study of these star-forming regions is so important. “This is the first time we have gotten a clear view of the population of these stellar nurseries across the whole nearby universe,” the researcher says. “It’s a big step towards understanding where we come from.”
Using data from the NASA Chandra X-ray Observatory and South African radio telescope MeerKAT, astronomer Daniel Wang detected magnetic threads and plumes emerging from the center of the Milky Way. Turns out, the center of our galaxy is proving to be a prolific zone for this type of phenomenon.
The image depicting the features clearly shows a large number of threads near the galactic center. They are related to nonthermal radio filaments (NTFs) — polarized filaments oriented perpendicular to the galactic plane, whose origin is still a mystery. The NTFs appear as the purple lines crossing the galactic center, representing the radio emissions.
The images also show bright white features — this is cosmic dust detected by X-ray sensors. This isn’t the type of dust you normally see around your house. Specks of cosmic dust are only slightly bigger than a molecule, but that can be enough to act as seeds to form planets and asteroids. The fuzzy glow is a result of X-ray scattering, which is only possible with a sufficient amount of dust between the source and Chandra.
The most interesting thread is the one called G0.17-0.41. Narrow and vertically oriented with respect to the Milky Way, it emits both radio waves and X-rays — which may be indicative of a process called magnetic reconnection. Reconnection happens when magnetic field lines cross, snap apart, and rejoin in a new configuration, releasing a massive amount of energy. It is the same type of process that drives space weather phenomena after solar flares.
Large plumes emitted by the galactic center were also observed. The Chandra/MeerKAT plumes extend about 700 light-years on either side of the galactic plane — much smaller than the Fermi Bubbles, but far enough out to appear visually disconnected from the galactic center, likely another example of reconnection.
With this recent discovery, it was also possible to detect several supernova remnants, as well as neutron stars and black holes. The most prominent black hole is Sagittarius A*, our central supermassive black hole. This new view of the galactic center could help explain similar features in other galaxies.
Whilst it may not have the snappiest name, the event GW150914 is pretty significant in terms of our understanding of the Universe. Its name combines the prefix ‘GW’–an abbreviation of ‘Gravitational Wave’–with the date of observation, 15/09/14, and the event marked humanity’s first direct detection of gravitational waves.
This was groundbreaking on two fronts; firstly it successfully confirmed a prediction made by Albert Einstein’s theory of general relativity almost a century before. A prediction that stated events occurring in the Universe do not just warp spacetime, but in certain cases, can actually send ripples through this cosmic fabric.
The second significant aspect of this observation was the fact that it represented an entirely new way to ‘see’ the Universe, its events and objects. This new method of investigating the cosmos has given rise to an entirely new form of astronomy; multimessenger astronomy. This combines ‘traditional’ observations of the Universe in the electromagnetic spectrum with the detection of gravitational waves, thus allowing us to observe objects that were previously invisible to us.
Thus, the discovery of gravitational waves truly opened up an entirely new window on the cosmos, but what are gravitational waves, what do they reveal about the objects that create them, and how do we detect such tiny tremblings in reality itself?
Gravitational Waves: The Basics
Gravitational waves are ripples in the fabric of spacetime.
These ripples travel from their source at the speed of light.
The passage of a gravitational wave squashes and stretches space itself.
Gravitational waves can be detected by measuring these infinitesimally small changes in the distance between objects.
They are created when an object or an event that curves spacetime causes that curvature to change shape.
Amongst the causes of gravitational waves are colliding black holes and neutron stars, supernovae, and stars that are undergoing gravitational collapse.
Imagine sitting at the side of a lake, quietly observing the tranquil surface of the water undisturbed by nature, the wind, or even by the slightest breeze. Suddenly a small child runs past hurling a pebble into the lake. The tranquillity is momentarily shattered. But, even as peace returns, you watch ripples spread from the centre of the lake diminishing as they reach the banks, often splitting or reflecting back when they encounter an obstacle.
The surface of the lake is a loose 2D analogy for the fabric of spacetime, the pebble represents an event like the collision of two black holes, and our position on Earth is equivalent to a blade of grass on the bank barely feeling the ripple which has diminished tremendously in its journey to us.
Gravitational waves were first predicted by Henri Poincaré in 1905 as disturbances in the fabric of spacetime that propagate at the speed of light, but it would take just over a decade for the concept to really be seized upon by physicists. This happened when Albert Einstein predicted the same phenomenon as part of his revolutionary 1916 geometric theory of gravity, better known as general relativity.
Whilst this theory is most well-known for suggesting that objects with mass would cause warping of spacetime, it also went a step further positing that an accelerating object should change this curvature and cause a ripple to echo through spacetime. Such disturbances in spacetime would not have been permissible in the Newtonian view of gravity which saw the fabric of space and time as separate entities upon which the events of the Universe simply play out.
But upon Einstein’s dynamic and changing stage of united spacetime, such ripples were permissible.
Gravitational waves arose from the possibility of finding a wave-like solution to the tensor equations at the heart of general relativity. Einstein believed that gravitational waves should be generated en masse by the interaction of massive bodies such as binary systems of super-dense neutron stars and merging black holes.
The truth is that such ripples in spacetime should be generated by any accelerating object, but Earth-bound accelerating objects cause perturbations that are far too small to detect. This is why our investigations must turn to regions of space where nature provides us with objects that are far more massive.
As these ripples radiate outwards from their source in all directions and at the speed of light, they carry information about the event or object that created them. Not only this, but gravitational waves can tell us a great deal about the nature of spacetime itself.
Where do Gravitational Waves Come From?
There are a number of events that can launch gravitational waves powerful enough for us to detect with incredibly precise equipment here on Earth. These events are some of the most powerful and violent occurrences that the Universe has to offer. For instance, the strongest undulations in spacetime are probably caused by the collision of black holes.
Other collision events are associated with the production of strong gravitational waves; for example the merger between a black hole and a neutron star, or two neutron stars colliding with each other.
But, a cosmic body doesn’t always need a partner to make waves. Stellar collapse through supernova explosion–the process that leaves behind stellar remnants like black holes and neutron stars– also causes the production of gravitational waves.
To understand how gravitational waves are produced, it is useful to look at binary pulsars–systems of two neutron stars, at least one of which emits regular pulses of electromagnetic radiation in the radio region of the spectrum.
Einstein’s theory suggests that a system such as this should be losing energy by the emission of gravitational waves. This would mean that the system’s orbital period should be decreasing in a very predictable way.
The stars draw together as there is less energy in the system to resist their mutual gravitational attraction, and as a result, their orbit increases in speed and the pulses of radio waves are emitted at shorter intervals. The interval between pulses sweeping across our line of sight therefore shrinks; something we can measure.
This is exactly what was observed in the Hulse-Taylor system (PSR B1913+16), discovered in 1974, which comprises two rapidly rotating neutron stars. This observation earned Russell A. Hulse and Joseph H. Taylor, Jr, both of Princeton University, the 1993 Nobel Prize in Physics. The reason given by the Nobel Committee was: “for the discovery of a new type of pulsar, a discovery that has opened up new possibilities for the study of gravitation.”
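The predicted orbital decay can be estimated with general relativity’s leading-order quadrupole (Peters-Mathews) formula. The sketch below is illustrative only; the masses and orbital elements are rounded published values for PSR B1913+16, not figures from this article.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant
C = 2.998e8          # speed of light
M_SUN = 1.989e30     # solar mass, kg

# Rounded published parameters for the Hulse-Taylor binary
m1 = 1.44 * M_SUN    # pulsar mass
m2 = 1.39 * M_SUN    # companion mass
P = 27907.0          # orbital period, seconds (~7.75 hours)
e = 0.617            # orbital eccentricity

# Peters-Mathews quadrupole formula for the orbital period decay,
# including the eccentricity enhancement factor
enhancement = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2) ** 3.5
dP_dt = (-(192 * math.pi / 5)
         * G ** (5 / 3) / C**5
         * (P / (2 * math.pi)) ** (-5 / 3)
         * m1 * m2 / (m1 + m2) ** (1 / 3)
         * enhancement)

print(f"Predicted dP/dt: {dP_dt:.2e} s/s")  # about -2.4e-12, matching observation
```

The orbit shrinks by roughly 2.4 picoseconds per second of orbit, which accumulates to a measurable shift in pulse arrival times over years of timing observations.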
Though inarguably an impressive and important scientific achievement, this was still only indirect evidence of gravitational waves. Whilst the shrinking of the orbital period that Einstein’s theory predicted was definitely present, this wasn’t an actual direct detection.
In fact, though not alive to witness this momentous achievement, Einstein had predicted that this would be the only way we could ever garner any hint of gravitational waves. The great physicist believed those spacetime ripples would be so faint that they would remain impossible to detect by any technological means imaginable at that time.
Fortunately, Einstein was wrong.
How do we Detect Gravitational Waves?
It should come as no surprise that actually detecting a gravitational wave requires a piece of equipment of tremendous sensitivity. Whilst the effect of gravitational waves–the squashing and stretching of space itself–sounds like something that should be eminently visible, the degree by which this disturbance occurs is so tiny it is totally imperceptible.
Fortunately, there is a branch of physics that is pretty adept at dealing with the tiny. To spot gravitational waves, researchers would use an effect called interference, something demonstrated in the most famous quantum physics experiment of all time: the double-slit experiment.
Physicists realised that a laser interferometer could be used to measure the tiny squashing and stretching of space, as a passing wave would change the lengths of the equipment’s arms by a minute amount. Split a laser and send the two beams down the arms of an interferometer, and the squeezing of space caused by a passing gravitational wave makes one beam arrive slightly ahead of the other. The beams fall out of phase, and the resulting interference at the detector signals that a gravitational wave has rippled across one of the arms.
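The scale of the effect can be sketched with a few lines of arithmetic. This is a back-of-the-envelope illustration, not LIGO’s actual signal processing; the strain value is a typical detection-level figure, assumed for the example.

```python
import math

wavelength = 1064e-9   # LIGO's Nd:YAG laser wavelength, metres
arm_length = 4_000.0   # interferometer arm length, metres
strain = 1e-21         # typical gravitational-wave strain amplitude

# A passing wave changes the difference between the two arm lengths
# by roughly h * L ...
delta_L = strain * arm_length                 # ~4e-18 m
# ... which shifts the relative phase of the two returning beams
# (the factor of two accounts for the round trip along each arm).
delta_phi = 4 * math.pi * delta_L / wavelength

# The photodetector sees the interference of the two beams:
# I/I0 = cos^2(phase/2), so this minuscule phase shift nudges the
# output away from the dark fringe where the detector normally sits.
print(f"Arm length change: {delta_L:.1e} m")
print(f"Phase shift:       {delta_phi:.1e} rad")
```

A phase shift of a few times 10⁻¹¹ radians is why the real instrument needs power recycling, multiple beam reflections, and extreme isolation from terrestrial noise.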
But, not just any laser interferometer would do. Physicists would need an interferometer so large that it constitutes a legitimate feat of engineering. Enter the Laser Interferometer Gravitational-wave Observatory (LIGO).
LIGO uses two detectors, based at the Hanford and Livingston observatories and separated by thousands of kilometres, to form an incredibly sensitive pair of interferometers. At each site, lasers are sent down the ‘arms’ of the interferometer, which are actually 4 km long vacuum chambers.
This results in a system that is so sensitive it can measure a deviation in spacetime that is as small as 1/10,000 the size of an atomic nucleus. To put this into an astronomical context; it is equivalent to spotting a star at a distance of 4.2 light-years and pinpointing its location to within the width of a human hair! This constitutes the smallest measurement ever practically attempted in any science experiment.
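That sensitivity figure can be converted into a dimensionless strain. This is a rough order-of-magnitude check, assuming a round ~10⁻¹⁵ m nucleus diameter:

```python
nucleus_size = 1e-15                    # approximate diameter of an atomic nucleus, metres
displacement = nucleus_size / 10_000    # smallest measurable arm-length change: 1e-19 m
arm_length = 4_000.0                    # LIGO arm length, metres

# Strain is the fractional change in length: dL / L
strain = displacement / arm_length
print(f"Strain sensitivity: {strain:.1e}")  # 2.5e-23
```

A fractional length change of a few parts in 10²³ is the level of precision the instrument must resolve against seismic, thermal, and quantum noise.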
And in 2015, this painstaking operation paid off.
On 14th September 2015, the LIGO and Virgo collaborations spotted a gravitational wave signal emanating from the spiralling in and eventual merger of two black holes, one 29 times the mass of the Sun, the other 36 times our star’s mass. From changes in the signal received, the scientists were also able to observe the resultant single black hole.
The signal, named GW150914, represented not just the first observation of gravitational waves, but also the first time humanity had ‘seen’ a binary stellar-mass black hole system, proving that such mergers could exist in the Universe’s current epoch.
Different Kinds of Gravitational Waves
Since the initial detection of gravitational waves, researchers have made a series of important and revelatory detections. These have allowed scientists to classify different types of gravitational waves and the objects that may produce them.
Continuous Gravitational Waves
A single spinning massive object like a neutron star is believed to cause a continuous gravitational wave signal as a result of imperfections in its spherical shape. If the rate of spin remains constant, so too is the gravitational wave signal it emits–continuously the same frequency and amplitude, much like a singer holding a single note. Researchers have created simulations of what an arriving continuous gravitational wave would sound like if the signal LIGO detected was converted into sound.
The sound of a continuous gravitational wave of the kind produced by a neutron star can be heard below.
Compact Binary Inspiral Gravitational Waves
All of the signals detected by LIGO thus far fit into this category as gravitational waves created by pairs of massive orbiting objects like black holes or neutron stars.
The sources fit into three distinct sub-categories:
Binary Black Hole (BBH)
Binary Neutron Star (BNS)
Neutron Star-Black Hole Binary (NSBH)
Each of these types of binary pairing creates its own unique pattern of gravitational waves but shares the same overall mechanism of wave generation: inspiral. This process occurs over millions of years, with gravitational waves carrying away energy from the system and causing the objects to spiral closer and closer together until they meet. As they close in, the objects move more quickly and thus create gravitational waves of increasing strength.
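At leading (Newtonian) order, the frequency of the emitted waves at a given time before merger depends only on a combination of the two masses called the ‘chirp mass’. The sketch below assumes two 1.4-solar-mass neutron stars; these are illustrative values, not from the article.

```python
import math

G, C = 6.674e-11, 2.998e8   # gravitational constant, speed of light (SI)
M_SUN = 1.989e30            # solar mass, kg

def chirp_mass(m1, m2):
    """The mass combination that sets the inspiral's frequency evolution."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def gw_frequency(seconds_to_merger, m_chirp):
    """Leading-order gravitational-wave frequency, rising as merger nears."""
    return ((5.0 / (256.0 * seconds_to_merger)) ** 0.375
            / (math.pi * (G * m_chirp / C**3) ** 0.625))

mc = chirp_mass(1.4 * M_SUN, 1.4 * M_SUN)
for t in (100.0, 10.0, 1.0):
    print(f"{t:6.1f} s before merger: {gw_frequency(t, mc):6.1f} Hz")
# The frequency sweeps upward -- the 'chirp' heard in the audio conversions.
```

For a neutron-star pair the signal enters LIGO’s sensitive band (tens of hertz) around a minute or two before merger, then sweeps rapidly upward through the audible range.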
The ‘chirp’ of an eventual merger between neutron stars has been translated to sound waves and can be heard below.
Stochastic Gravitational Waves
Small gravitational waves that even LIGO is unable to precisely pinpoint could be passing over Earth from all directions at all times. These are known as stochastic gravitational waves due to their random nature. At least part of this stochastic signal is likely to have originated in the Big Bang.
Should we eventually be able to detect this signal it would allow us to ‘see’ further back into the history of the Universe than any electromagnetic signal could, back to the epoch before photons could freely travel through space.
The simulated sound of this stochastic signal can be heard below.
It is extremely likely given the variety of objects and events in the Universe that other types of gravitational wave signals exist. This means that the quest to detect such signals is really an exploration of the unknown. Fortunately, our capacity to explore the cosmos has been boosted tremendously by our ability to detect gravitational waves.
A New Age of Astronomy
GW150914 conformed precisely to the predictions of general relativity, confirming Einstein’s most revolutionary theory almost exactly six decades after his death in 1955. That doesn’t mean that gravitational waves are done teaching us about the Universe. In fact, these ripples in spacetime have given us a whole new way to view the cosmos.
Before the discovery of gravitational waves, astronomers were restricted to a view of the Universe painted in electromagnetic radiation and therefore our observations have been confined to that particular spectrum.
Using the electromagnetic spectrum alone, astronomers have been able to discover astronomical bodies and even the cosmic microwave background (CMB) radiation, a ‘relic’ of one of the very first events in the early universe: the recombination epoch, when electrons joined with protons, allowing photons to begin travelling rather than endlessly scattering. The CMB is therefore a marker of the point at which the universe became transparent to light.
Yet despite the strides traditional astronomy has allowed us to make in our understanding of the cosmos, the use of electromagnetic radiation is severely limited. It does not allow us to directly ‘see’ black holes, from which light cannot escape. Nor does it allow us to see non-baryonic, non-luminous dark matter, the predominant form of matter in galaxies–accounting for around 85% of the matter in the universe. As the term ‘non-luminous’ suggests, dark matter does not interact with the electromagnetic spectrum; it neither absorbs nor emits light. This means that observations in the electromagnetic spectrum alone will never allow us to see the majority of the matter in the universe.
Clearly, this is a problem. But one that can be avoided by using the gravitational wave spectrum as both black holes and dark matter do have considerable gravitational effects.
Gravitational waves also have another significant advantage over electromagnetic radiation.
This new form of astronomy measures the amplitude of the travelling wave, whilst electromagnetic wave astronomy measures the energy of the wave, which is proportional to the amplitude of the wave squared.
Therefore the brightness of an object in traditional astronomy falls off as 1/distance², whilst ‘gravitational brightness’ falls off as just 1/distance. This means that a source remains detectable in gravitational waves out to far greater distances than an equivalent source in the electromagnetic spectrum.
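The practical payoff of the 1/distance scaling is easy to see numerically, in a hypothetical comparison:

```python
# Move the same source from distance d to 2d:
em_brightness_ratio = 1 / 2 ** 2   # electromagnetic flux falls as 1/d^2 -> 25% remains
gw_amplitude_ratio = 1 / 2         # gravitational-wave strain falls as 1/d -> 50% remains

# Consequence for detectors: a 10x improvement in amplitude sensitivity
# extends the detection range 10x -- and the volume of space surveyed 1000x.
reach_gain = 10
volume_gain = reach_gain ** 3
print(em_brightness_ratio, gw_amplitude_ratio, volume_gain)  # 0.25 0.5 1000
```

This cubic growth in surveyed volume is why each incremental upgrade to the detectors has yielded a disproportionate jump in the number of observed events.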
Of course, none of this is to suggest that gravitational wave astronomy will ‘replace’ traditional electromagnetic spectrum astronomy. In fact the two are most powerful when they are unified in an exciting new discipline–multimessenger astronomy.
Sources and Further Reading
Maggiore, M., Gravitational Waves: Theory and Experiments, Oxford University Press
Maggiore, M., Gravitational Waves: Astrophysics and Cosmology, Oxford University Press
Collins, H., Gravity’s Kiss: The Detection of Gravitational Waves, MIT Press
The Dark Energy Survey (DES) is an ambitious cosmological project that aims to map hundreds of millions of galaxies. In the process, the project will also observe thousands of supernovae and map the cosmic web that links galaxies, all with the aim of investigating the mysterious force that is causing the Universe to expand at an accelerating rate.
Using the 570-megapixel Dark Energy Camera on the National Science Foundation’s Víctor M. Blanco 4-meter Telescope at Cerro Tololo Inter-American Observatory (CTIO), Chile, the DES has produced a map of galaxy distribution and morphology that stretches 7 billion light-years and captures 1/8 of the sky over Earth.
Now new results from the DES, which collects the work of an international team of over 400 scientists from over 25 institutions in countries including the US, UK, France, Spain, Brazil, and Australia, are in. The findings are detailed in a ground-breaking series of 29 papers comprising data collected during the DES’ first three years of operation, providing the most detailed description of the Universe’s composition and expansion to date.
The survey was conducted between 2013 and 2019, cataloging hundreds of millions of objects, with the three years of data covered in these papers alone containing observations of at least 226 million galaxies observed over 345 nights.
The fact that some of these galaxies are close to the Milky Way and others are much more distant–up to 7 billion light-years away– gives researchers an excellent picture of the evolution of the Universe over around half of its lifetime.
The results seem to confirm the standard model of cosmology, currently the best-evidenced theory of the Universe’s composition and evolution which suggests the Universe was created in a ‘Big Bang’ event and has a composition of 5% ordinary or baryonic matter, 27% dark matter, and 68% dark energy.
The snapshot of the Universe provided by the DES does seem to show that the Universe is less ‘clumpy’ than current cosmological models suggest, however.
Illuminating the Dark Universe
The fact that the ‘Dark Universe’ makes up 95% of the matter and energy in the known cosmos means that there are huge gaps in our understanding of the evolution of the Universe, its past, present, and its future.
These gaps include the nature of dark matter, whose gravitational influence holds galaxies together, and dark energy, the force that is expanding space between the galaxies driving them apart at an accelerating rate.
These effects seem to be in opposition, with one holding matter together and the other working upon space itself to drive matter apart. And it is this cosmic struggle that shapes the Universe which the DES aimed to investigate.
There are two key phenomena which the survey used to do this. Studying ‘the cosmic web’ that links galaxies together in clusters and loose associations gives hints at the distribution and influence of dark matter.
The second phenomenon used by the DES is the bending of light as it travels past curvatures in spacetime created by objects of tremendous mass like galaxies. This effect predicted by Einstein’s theory of gravity–general relativity–is known as ‘gravitational lensing.’
The DES relied on a form of this effect called ‘weak gravitational lensing’ to assess how dark matter is distributed across the Universe, thus inferring its ‘clumpiness.’
The data collected by the DES was cross-referenced against measurements carried out by the European Space Agency (ESA) operated Planck observatory. The orbiting observatory, which operated between 2009 and 2013, studied the cosmic microwave background (CMB)–an imprint left over from an event shortly after the Big Bang in which electrons and protons combined, allowing photons to travel freely for the first time.
Observing the CMB reveals conditions that were ‘frozen in’ at the time of this event, known as the last scattering, and thus gives the DES team a detailed picture of the Universe when it was roughly 400,000 years old to draw from.
Setting the Scene for Future Surveys
The DES intensely studied ten regions labeled as ‘deep fields’, which were repeatedly imaged during the course of the survey. These images were stacked, allowing astronomers to observe faint, distant galaxies.
In addition to allowing researchers to see further into the Universe and thus further back in time, information regarding redshift– an increase in wavelength caused by objects receding which can arise as a result of the Universe’s expansion–taken from these deep fields was used to calibrate the rest of the survey. This constituted a major step forward for cosmic surveys providing the researchers with a picture of the Universe painted with stunning precision.
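Redshift itself is a simple ratio: the fractional stretching of a known spectral line. A hypothetical example, assuming the H-alpha line (rest wavelength 656.3 nm) is observed at a longer wavelength:

```python
lambda_rest = 656.3      # H-alpha rest wavelength, nanometres
lambda_observed = 721.9  # hypothetical observed wavelength, nanometres

# Redshift: fractional increase in wavelength relative to the rest value
z = (lambda_observed - lambda_rest) / lambda_rest
print(f"z = {z:.3f}")    # about 0.1
```

Because cosmic expansion stretches light in transit, larger z values generally correspond to more distant, older light, which is what lets the deep-field redshifts anchor the survey’s distance scale.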
Whilst the DES was concluded in 2019, the sheer wealth of data collected by the survey requires a huge amount of computing power and time to assess. This is why we are only seeing the first three years of observations reported and likely means that the DES still has much more to deliver.
This will ultimately set the scene for the Legacy Survey of Space and Time (LSST) which will be conducted at the Vera C Rubin observatory–currently under construction on the El Penon peak of Cerro Pachon in northern Chile.
Whereas the DES surveyed an inarguably impressive 1/8 of the sky over Earth, the wide-field camera that will conduct the LSST will capture the entire sky over the Southern Hemisphere–half of the sky over our planet.
A major part of the LSST’s mission will be the investigation of dark matter and dark energy, meaning that when the data from the DES is finally exhausted and its secrets are revealed, a worthy successor will be waiting in the wings to assume its mission of discovery.