The distance from the Sun to Pluto, the farthest planet(oid), is 0.000628 light-years. The closest solar system to us, Alpha Centauri, is 4.2 light-years away. The Milky Way Galaxy is 52,850 light-years across. But Alcyoneus, the newly-discovered galaxy, is a whopping 16.3 million light-years wide.
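Those scales are easier to grasp with a little arithmetic; here's a quick comparison using the figures quoted above:

```python
# Back-of-the-envelope scale comparison, using the distances quoted above.
sun_to_pluto_ly = 0.000628       # light-years
alpha_centauri_ly = 4.2
milky_way_ly = 52_850
alcyoneus_ly = 16_300_000        # 16.3 million light-years

# How many Milky Ways would fit end-to-end across Alcyoneus?
print(alcyoneus_ly / milky_way_ly)      # roughly 308

# And how many Sun-to-Pluto distances span the Milky Way?
print(milky_way_ly / sun_to_pluto_ly)   # roughly 84 million
```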
Giant radio galaxies (GRGs, or just ‘giants’) are the Universe’s largest structures generated by individual galaxies. They were first discovered accidentally by wartime radar engineers in the 1940s, but it took over a decade to truly understand what they were — with the aid of radio astronomy. Radio astronomy is a subfield of astronomy that studies celestial objects using radio frequencies.
These giants dominate the night sky with their radio frequency signals (astronomers use different types of frequencies to study the universe). They generally consist of a host galaxy — a cluster of stars orbiting a bright galactic nucleus containing a black hole — and some colossal jets or lobes that erupt from this galactic center.
Most commonly, radio galaxies have two elongated, fairly symmetrical lobes. These radio lobes are pretty common across many galaxies — even the Milky Way has them — but for some reason, in some galaxies, the lobes grow to be immensely long. Discovering new radio galaxies could help us understand these processes — this is where the new study comes in.
Researchers led by astronomer Martijn Oei of Leiden Observatory in the Netherlands have discovered the largest single structure of galactic origin. They used the LOw Frequency ARray (LOFAR), a network of over 20,000 radio antennas distributed across Europe.
“If there exist host galaxy characteristics that are an important cause for giant radio galaxy growth, then the hosts of the largest giant radio galaxies are likely to possess them,” the researchers explain in their preprint paper, which has been accepted for publication in Astronomy & Astrophysics.
According to the authors, this is the most detailed search ever for radio galaxy lobes — and lo and behold, the results delivered.
Alcyoneus lies some 3 billion light-years away from us, a distance that’s hard to even contemplate (though it’s not nearly the farthest object we’ve found, which lies over 13 billion light-years away). Its host galaxy appears to be a fairly normal elliptical galaxy. In fact, it almost seems too inconspicuous.
But even this could tell us something: you don’t need a particularly large galaxy or a particularly massive black hole at its center to create a radio galaxy.
“Beyond geometry, Alcyoneus and its host are suspiciously ordinary: the total low-frequency luminosity density, stellar mass and supermassive black hole mass are all lower than, though similar to, those of the medial giant radio galaxies,” the researchers write.
“Thus, very massive galaxies or central black holes are not necessary to grow large giants, and, if the observed state is representative of the source over its lifetime, neither is high radio power.”
A new study modeled the dynamics and evolution of some of the largest known structures in the universe.
Let’s take a moment to look at our position in the universe.
We live in a solar system orbiting the center of the Milky Way galaxy — which itself lies in the Local Group of galaxies, neighboring the Local Void, a vast region of space with fewer galaxies than expected. Wait, we’re not done yet. These structures are part of a still larger region encompassing thousands of galaxies, the Laniakea Supercluster, which is around 520 million light-years across.
A group of researchers has now simulated the movement of galaxies in the Laniakea and other clusters of galaxies starting when the universe was in its infancy (just 1.6 million years old) until today. They used observations from the Two Micron All-Sky Survey (2MASS) and the Cosmicflows-3 as the starting point for their study. With these two tools, they looked at galaxies orbiting massive regions with velocities of up to 8,000 km/s — and made videos describing those orbits.
Because the universe is expanding and that influences the evolution of these superclusters, we first need to know how fast the universe is expanding, which has proven to be very difficult to calculate. So the team considered different plausible universal expansion scenarios to get the clusters’ motion.
Besides Laniakea, the scientists report two other zones where galaxies appear to flow towards a common gravitational attractor: Perseus-Pisces (a supercluster some 250 million light-years across) and the Great Wall (a structure spanning about 1.37 billion light-years). In the Laniakea region, galaxies flow towards the Great Attractor, a very dense part of the supercluster. The other superclusters show similar patterns: Perseus-Pisces galaxies flow towards the spine of the cluster’s large filament.
The researchers even predicted the future of these galaxies, estimating their paths roughly 10 billion years into the future. Their videos make it clear that the expansion of the universe dominates the big picture, while in smaller, denser regions gravitational attraction prevails — as in the future merger of the Milky Way and Andromeda (“Milkomeda”) in the Local Group.
In 2002, astronomers detected a new ‘star’ in the Monoceros constellation, some 3,300 light-years away from Earth. The star is called V838 Monocerotis and was initially classified as a variable star — a star with varying brightness. However, it became apparent that the star was rather unusual.
Astronomers observed that the light intensity of this star resembled a nova — an explosive star that’s not quite as cataclysmic as a supernova. However, three months later, the star started emitting massive amounts of infrared light, so it didn’t really seem to be a nova after all. Ultimately, V838 Monocerotis was finally classified as a luminous red nova — a stellar explosion that occurs when two stars merge.
Now, researchers have captured new details about this mysterious star.
A cascading stellar event
When the merging happened, it produced one of the most spectacular images you can imagine. As the gases and dust traveled outward from the epicenter of the event, they scattered light from the explosion itself. The scattered light was then deflected by the molecular cloud, taking a little longer to reach us compared to the light coming directly to Earth — a phenomenon called a ‘light echo’.
After the stars merged, the remnant left behind is likely a red supergiant that’s dozens or even hundreds of times the size of the Sun — big enough to fill Mars’s entire orbit. However, because the event took place so far away, it took years for us to observe the ionization of the dust ejected by the merger. The material expelled during the collision traveled through space and encountered another star in the system, a third companion B-type star — specifically, a B3V star nearly 8 times more massive than the Sun.
In a recent study, astronomers found direct evidence of this third star for the first time, 17 years after they observed the red nova going boom. They used 2019 observations from the Atacama Large Millimeter/submillimeter Array (ALMA) interferometer. ALMA’s data helps scientists ‘see’ what is happening in the system in terms of dust and gases, and gathers information about the stars themselves. When the ejected material got close enough to the giant’s companion, it was ionized by the photons emitted from this star, and that helped the researchers learn details about the B star.
Their results show that the B-star companion’s gravity pulls some of the gas away from us, making it appear redshifted. They also learned that this companion is embedded in the ejected cloud. It orbits its giant sibling with a period of over 1,000 years, at a distance greater than 230 times the Earth–Sun distance — so far that the gas only reached it 3 years after the nova event.
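As a rough sanity check of my own (not a computation from the study), Kepler’s third law ties that separation and period together, assuming a combined mass of about 10 solar masses for the pair:

```python
# Kepler's third law in solar-system units: P^2 = a^3 / M_total
# (P in years, a in AU, M in solar masses -- the mass is an assumed value)
a_au = 230          # separation quoted above, in astronomical units
m_total = 10.0      # assumed combined mass of the B star and the remnant

period_years = (a_au ** 3 / m_total) ** 0.5
print(round(period_years))   # roughly 1,100 years, consistent with the ~1,000-year orbit
```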
Researchers have also learned that the molecular cloud is traveling at 200 km per second (approximately 124 miles per second). With the help of spectroscopy, scientists can determine the chemical composition of the cloud, because each molecule preferentially absorbs radiation at characteristic wavelengths observed by ALMA’s instruments. The cloud contains carbon monoxide, silicon monoxide, sulfur monoxide, sulfur dioxide, and aluminum monohydroxide.
Future observations will provide more evidence of novae’s ejected material and their formation through mergers, thanks to millimeter/submillimeter observations — something scientists didn’t have access to 20 years ago.
When we look at the sky, we see different types of objects. Some are man-made (like the International Space Station), some are from our solar system (like Venus or Saturn), but many are twinkling, shiny objects — of course, stars from outside our solar system.
Stars have fascinated humans since time immemorial, especially because sometimes, they seem to twinkle. Stars don’t actually twinkle per se — the twinkling we observe here has more to do with the atmosphere on Earth rather than the stars themselves. There are three main factors that influence how stars “twinkle”, and to truly understand them, we need to take a short dive into some atmospheric physics.
The first physical phenomenon that makes stars appear to twinkle is turbulence.
We observe stars that are far away because the light that they emit reaches our eyes (or telescopes). But in order to do that, it must first pass through the atmosphere. That means that light is indirectly subjected to phenomena that affect the Earth’s atmosphere.
Turbulence is a phenomenon that often happens on smaller scales. In the atmosphere, large-scale phenomena like cold fronts or hurricanes happen every day, but within these events, turbulence matters on a small scale. So when a cold front brings large thunderstorms, the clouds within the front can make the sky turbulent — and that’s when the airplane pilot tells you “Ladies and gentlemen, we’re experiencing some turbulence.”
There are several types of turbulence, including one called thermal turbulence — which happens when there is a mix between hotter and colder air. This could happen whether the sky is cloudy or not. When a mass of air in the atmosphere is hotter than its surroundings, it starts to rise, creating convective currents. Basically, you end up with moving columns or pockets of heated air that arise from warmer surfaces of the earth.
These moving pockets of air can create turbulence, and in the process, they also distort light that passes through them.
When it comes to stars, twinkling is caused by the passing of light through different layers of the turbulent atmosphere. This is more pronounced near the horizon than directly overhead since light rays near the horizon pass through denser layers of the atmosphere, but twinkling (technically called scintillation) can be observed on all parts of the sky.
But there’s more to this story.
When light passes through any medium (including the Earth’s atmosphere), some of it is reflected back, while the rest passes through at a different angle — a phenomenon called refraction. When the atmosphere is turbulent in a region, the refraction angle is not constant, so light can quickly change path.
A changing refractive index shifts the apparent position of objects — just as a straw in a glass of water looks bent. So the turbulent sky, constantly changing the refractive index along the light’s path, makes stars appear to jiggle: they twinkle, or scintillate.
Due to scale differences, if an astronomical object is large enough compared to the turbulence, it won’t affect the way we see it. But the light of a smaller object (or one that’s farther away) will be affected as it crosses the turbulent air. That’s the reason why planets twinkle less (or almost don’t twinkle at all) — they are closer and it makes them ‘bigger’ compared to the turbulence.
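To make the size argument concrete, here’s a quick small-angle estimate with illustrative (assumed) numbers. Atmospheric turbulence typically smears images by about an arcsecond, so an object whose disk spans many arcseconds averages out the jitter, while a point-like star flickers:

```python
import math

def angular_size_arcsec(diameter_km, distance_km):
    """Small-angle approximation: theta = diameter / distance (in radians)."""
    return math.degrees(diameter_km / distance_km) * 3600

KM_PER_AU = 1.496e8      # kilometers per astronomical unit
KM_PER_LY = 9.461e12     # kilometers per light-year

# Jupiter near opposition: roughly 140,000 km across, about 4.2 AU away
print(angular_size_arcsec(1.4e5, 4.2 * KM_PER_AU))   # ~46 arcseconds

# Betelgeuse, one of the largest stars on the sky: ~1.2e9 km across, ~640 ly away
print(angular_size_arcsec(1.2e9, 640 * KM_PER_LY))   # ~0.04 arcseconds
```

Jupiter’s disk spans many turbulence cells, so its light averages out; even a giant star remains, for practical purposes, a flickering point.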
Fortunately, atmospheric scientists have developed ways to monitor changes in the refractive index of the atmosphere due to turbulence. They use instruments to measure the turbulence and use those measurements to estimate how seeing conditions will evolve.
For astronomers, twinkling can be quite problematic, so they look for the “best sky” to avoid the phenomenon. Usually, this means a very dry climate. When that’s not possible, they chase the dryness by placing their instruments at high altitude. Wherever it’s possible to combine altitude and mostly dry weather, there is a good spot for a telescope.
In the images above we see the difference very clearly: both skies were clear when the images were taken, but one (on the left) was less turbulent than the other (on the right). On the left, we see a video of a star recorded on Mount Fuji in Japan — the star appears to be bouncing chaotically due to a turbulent sky. On the right, we see a recording of the same star taken on the Andes Mountains in Chile, a very dry, high-altitude area; the star bounces, but much less than in the Japanese images.
So stars don’t exactly twinkle, but they do appear to twinkle from here on Earth. For astronomers, though, making sure they eliminate the “twinkling” is important.
Of course, if you set your telescopes in space, you don’t have these problems because your observation point is above the atmosphere. But even here on Earth, astronomers are careful to pick the best locations for placing large optical telescopes. They typically look for the driest areas, at the highest altitude possible, without any light pollution. There’s another consideration: since air usually flows from west to east due to Earth’s rotation, one way to get cleaner air is placing telescopes on west coasts or on islands in the middle of the ocean. This rules out the vast majority of places on Earth, which is why astronomers are so particular about where they place their telescopes.
Most large-scale simulations target specific processes, such as star formation, galaxy mergers, solar system events, the climate, and so on. These aren’t easy to simulate at all — they’re complex displays of physical phenomena, and it’s hard for a computer to incorporate all the detailed information about them.
To make it even more complicated, there are also random things happening. Even something simple like a glass of water is not exactly simple. For starters, it’s never pure water: it contains minerals like sodium and potassium, various amounts of dissolved air, maybe a bit of dust — if you want a model of the glass of water to be accurate, you need to account for all of those. However, no two glasses of water contain exactly the same amount of minerals, so computer simulations must do their best to estimate the chaos within a phenomenon. The more complexity you add, the longer the simulation takes to complete and the more processing power and memory it needs.
So how could you even go about simulating the universe itself? Well, first of all, you need a good theory of how the universe formed. Luckily enough, we have one — though that doesn’t mean it’s perfect or that we are 100% sure it is correct; we still don’t know how fast the universe expands, for example.
Next, you add all the ingredients at the right moment, on the right scale – dark matter and regular matter team up to form galaxies when the universe was around 200-500 million years old.
Universe simulations are made by scientists for multiple reasons. It’s a way to learn more about the universe, or simply to test a model and confront it with real astronomical data. If a theory is correct, then the structure formed in the simulation will look as realistic as possible.
There are different types of simulations, each with its own use and advantages. For instance, “N-body” simulations focus on the motion of particles, so there’s a lot of focus on the gravitational force and interactions.
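To give a flavor of what an N-body code does, here’s a minimal toy sketch in Python — arbitrary units, made-up initial conditions, and a simple integrator. Real cosmological codes use billions of particles and far cleverer force solvers:

```python
import numpy as np

def nbody_step(pos, vel, mass, dt, G=1.0, softening=0.1):
    """Advance every particle one timestep under pairwise Newtonian gravity."""
    # Pairwise separation vectors: diff[i, j] points from particle i to particle j
    diff = pos[None, :, :] - pos[:, None, :]
    dist2 = (diff ** 2).sum(axis=-1) + softening ** 2  # softening avoids infinities
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                      # no self-attraction
    # Acceleration on i: G * sum_j m_j * (r_j - r_i) / |r_j - r_i|^3
    acc = G * (diff * inv_d3[:, :, None] * mass[None, :, None]).sum(axis=1)
    return pos + vel * dt + 0.5 * acc * dt ** 2, vel + acc * dt

# Made-up initial conditions: 100 equal-mass particles in a random blob
rng = np.random.default_rng(0)
pos = rng.standard_normal((100, 3))
vel = np.zeros((100, 3))
mass = np.ones(100)

for _ in range(200):                  # let the blob collapse under its own gravity
    pos, vel = nbody_step(pos, vel, mass, dt=0.001)
```

The “softening” term is the standard trick for keeping the force finite when two particles pass very close — exactly the kind of approximation that makes these simulations tractable but imperfect.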
The Millennium Run, for instance, incorporates over 10 billion dark matter particles. Even without knowing what dark matter really is, researchers can use these ‘particles’ to simulate its properties. Other simulations, such as IllustrisTNG, add star formation, black hole formation, and other details. The most recent one, Uchuu, is a 100-terabyte catalog.
In the end, the simulations can’t reveal every single detail in the universe. You can’t simulate what flavor pie someone is having, but you can have enough detail to work with large-scale things such as the structure of galaxies and other clusters.
Another type of model is a mock catalog. Mocks are designed to mimic a mission and they use data gathered by telescopes over years and years. Then, a map of some structure is created — it could be galaxies, quasars, or other things.
The mocks simulate these objects just as they were observed, with their recorded physical properties. They are made according to a model of the universe, with all the ingredients we know about.
The theory behind the model used for the mocks can be tested by comparing them with the telescopes’ observations. This gives an idea of how right or wrong our assumptions and theories are, and it’s a pretty good way to put ideas to the test. Usually, researchers use around 1,000 mocks to give statistical significance to their results.
Let’s take a look behind the scenes at how the models are produced — and how much energy they use. These astronomical and climate simulations run on supercomputers, and they are super indeed. The Millennium Run, for example, was made on the Regatta supercomputer: the simulation needed 1 terabyte of RAM and produced 23 terabytes of raw data.
IllustrisTNG used Hazel Hen. This beast can perform 7.42 quadrillion floating-point operations per second (Pflops), which is equivalent to millions of laptops working together. In addition, Hazel Hen draws 3,200 kilowatts of power — which leads to a spicy electric bill. Uchuu, with its 100 terabytes of results, was made using ATERUI II, which performs at 3.087 Pflops.
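The “millions of laptops” figure can be sanity-checked with one line of arithmetic; the per-laptop number below is an assumption of mine, not from the article:

```python
# Sanity-checking the "millions of laptops" comparison.
hazel_hen_flops = 7.42e15   # 7.42 Pflops
laptop_flops = 5e9          # assumed: a laptop sustaining ~5 Gflops

print(hazel_hen_flops / laptop_flops)   # roughly 1.5 million laptops
```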
In an Oort Cloud simulation, the team involved reported the amount of energy they used in their work: “This results in about 2MWh of electricity (estimated using http://green-algorithms.org/), consumed by the Dutch National supercomputer.” It’s a habit that may become more common in the future.
So what does this tell us about the possibility of our very own universe being a simulation? Could we be living in some sort of Matrix, or in a Rick & Morty microverse? Imagine the societal chaos of figuring out that we are in a simulated universe and you are not a privileged rich-country citizen. That wouldn’t end well for the architect.
The simulation hypothesis is actually taken seriously by some researchers. It was postulated by Nick Bostrom and rests on three propositions, at least one of which must be true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
(3) we are almost certainly living in a computer simulation.
This being said, the simulation hypothesis is not a scientific theory. It is simply an idea — a very interesting one, but simply put, nothing more than an idea.
Lessons from simulations
What we’ve learned from making our own simulations is that it’s impossible to create a perfect copy of nature. N-body simulations are the perfect example: we can’t simulate everything, only the particles relevant to the question being studied. Climate models face the same problem: no pixel can perfectly reproduce a geographic location, so you can only approximate the desired features.
The other difficulty is energy consumption, which makes some phenomena hard to simulate at all. Simulating a universe in which people make their own choices would require an improbable amount of power — and how would the data even be stored? Unless it ends like Asimov’s ‘The Last Question’ — which is well worth a read.
In the end, simulations are possible, but microverses are improbable. We’ll keep improving simulations, making better ones on faster supercomputers — all while keeping in mind that we need efficient programs that consume less energy and less time.
The Oort Cloud, the most distant region of our solar system, was discovered by Jan Hendrik Oort. It is a giant structure composed of billions (if not trillions) of relatively small icy and rocky objects, and unlike the rest of our solar system (which is flat like a disc), it is believed that the Oort Cloud is spherical.
Now, astronomers from the Leiden Observatory produced the first simulation to display the formation and early evolution of the cloud.
The theories that tried to describe the Oort cloud evolution are scattered and hard to reconcile. Some focus more on the formation, others are more concerned with the relation with the Sun’s position within our galaxy. The Leiden team connected different parts of those theories and simulated the development of the cloud over one billion years.
To get to the origins of the Oort cloud, we need to go back to the origins of our solar system. The solar system started as a messy, dusty fog surrounding the young Sun. The planets and everything else in it formed by gravitational coagulation around 4.5 billion years ago. Timing is an important part of the story: had things happened too early or too late, the Oort cloud could not have formed. The best scenario is one in which the Sun escapes its birth star cluster at just the right moment to avoid losing too many objects, allowing the Oort cloud to form.
Other crucial events needed to take place to enable the formation of the structure. Multiple encounters with passing stars and Milky Way tidal gravitational effects all played a role, helping the Oort cloud take shape some 100 million years after the Sun had escaped its star cluster.
These processes can partly be seen in the animation below. In it, the Sun orbits the galactic center, passing near a sea of asteroids ejected by hypothetical planets from other systems, resulting in the Oort cloud.
The opposite process can also happen though: too many interactions with other systems and with the galaxy can cause the loss of many objects, which then end up in interstellar space. That’s also a possible origin of the free-floating ‘Oumuamua, which caused quite a stir as it passed through our solar system.
Asteroids from this conveyor belt can also pass through the orbits of the giant planets: Jupiter, Saturn, Uranus, and Neptune. Such objects end up on irregular orbits and can develop a periodic relationship with Jupiter and Saturn called orbital resonance. The resonance creates a chaotic environment for them, and some are kicked onto different orbits.
However, the gas giants could not have contributed much to the formation of the Oort cloud. The study has shown that their ejection timescale is much too short to contribute significantly.
Another important takeaway from the study is the simulation of a single asteroid’s life. The scientists depicted the evolution of an asteroid in a resonant interaction with Jupiter. Due to this resonance, its orbit is successively altered over 2 million years. You can see the timescale increasing significantly, along with a striking increase in distance from Neptune’s orbit (in red).
In the end, asteroids thrown outward by the giant planets’ conveyor belt, alongside complex interactions with tidal forces from our galaxy, helped form the Oort cloud. The same phenomena cause the reentry of 0.2 to 0.6 objects per year. Moreover, the Sun’s passage near the Oort clouds of neighbouring stars may have led to the kidnapping of many objects, such as Sedna.
The original study can be found as a preprint accepted for Astronomy & Astrophysics. Concerned about environmental impacts, the authors reported the energy consumed to produce such a long simulation: “This results in about 2MWh of electricity (estimated using http://green-algorithms.org/), consumed by the Dutch National supercomputer.”
Let’s start with the history of the universe (a very brief one). After the Big Bang, the Universe was essentially a hot soup of particles. Things started to cool down and eventually started forming hydrogen atoms. At some point, the universe became neutral and transparent, but because the clouds of hydrogen collapsed very slowly, there were no sources of light — it was a period of complete and utter universal darkness aptly called Dark Ages.
The famous dark matter slowly started forming structures that later hosted the first sources of light in the universe. These sources emerged during the Epoch of Reionization (EoR), around 500 million years after the Big Bang. Now, astronomers have found a structure that formed not long after this period.
Astronomers from China, the US, and Chile have now found a huge galaxy protocluster (a dense system of dozens of early-universe galaxies growing together). Called LAGER-z7OD1, the cluster dates from a time when the universe was still a baby — only 770 million years old. Objects like these are important tools that enable astronomers to examine the EoR.
The group that detected the object is the Lyman Alpha Galaxies in the Epoch of Reionization (LAGER) survey. Lyman-alpha galaxies are very distant objects that emit radiation from hydrogen, and they are the tracers used to find clusters this old.
LAGER primarily used the Dark Energy Camera (DECam) on the 4-m Blanco telescope at the Cerro Tololo Inter-American Observatory (CTIO) in the Andes, Chile. They found that the system has a redshift of 6.9 — here’s why that’s intriguing.
Redshift is a measure of how something is moving in space: if it moves away from us, its light is stretched to longer wavelengths — a positive redshift, skewed towards red; if it is moving towards us, we see shorter wavelengths — a negative redshift (a blueshift), skewed towards blue. For distant galaxies, the bigger the redshift, the more distant the object.
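The definition is simple enough to compute directly. As a quick illustration (the formula is the standard definition; Lyman-alpha is the line the LAGER survey targets):

```python
def redshift(observed_nm, emitted_nm):
    """z = (lambda_observed - lambda_emitted) / lambda_emitted"""
    return (observed_nm - emitted_nm) / emitted_nm

LYMAN_ALPHA_NM = 121.567   # rest-frame Lyman-alpha wavelength, in the ultraviolet

# At z = 6.9, wavelengths are stretched by a factor (1 + z) = 7.9,
# pushing Lyman-alpha from the ultraviolet to the edge of the infrared:
z = 6.9
observed = LYMAN_ALPHA_NM * (1 + z)
print(round(observed))                               # 960 nm
print(round(redshift(observed, LYMAN_ALPHA_NM), 1))  # 6.9
```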
The cluster has 21 galaxies, its volume is an estimated 51,480 Mpc³ (1 Mpc is about 3.26 million light-years), and it is about 3,700,000 billion times more massive than the Sun. In addition, it has an elongated shape, which suggests subclusters merged to form the bigger structure.
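To unpack those units, here’s some rough arithmetic of my own, treating the quoted volume as an equivalent cube:

```python
MLY_PER_MPC = 3.26           # 1 megaparsec is about 3.26 million light-years
volume_mpc3 = 51_480         # the cluster volume quoted above

side_mpc = volume_mpc3 ** (1 / 3)       # side of a cube with the same volume
print(round(side_mpc, 1))               # ~37.2 Mpc
print(round(side_mpc * MLY_PER_MPC))    # ~121 million light-years across
```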
It’s basically a gazillion miles from us, but a gazillion isn’t good enough for astronomers — they always want to know just how far away things are. In this case, however, an approximation will have to do.
The Planck Collaboration estimated that the EoR probably started at z = 7.67. This estimate uses the polarization of the Cosmic Microwave Background photons — polarization much like that produced by sunglasses, but requiring a level of sensitivity so high that the instruments must be kept at temperatures close to absolute zero. Another important constraint comes from searches for quasars formed in this period; papers on the subject generally conclude that the EoR ended around z = 6.
Lyman-alpha galaxies and quasars are key to understanding the EoR. The best quasar sample we have contains only about 50 objects — not much to represent the EoR across the entire universe. LAGER-z7OD1 is an example of a cluster that possibly formed in the middle of the process; until we can be more certain, more observations like this one need to come.
Betelgeuse is a red supergiant star with a radius of 617,100,000 km — a whopping 887 times the Sun’s radius. To get an idea of the size: this star in Orion is so big it could envelop Jupiter’s entire orbit (depicted in the image below).
In December 2019, astronomers noticed a sudden drop in the star’s brightness. The observations continued until the dimming reached a minimum in February 2020. The first hypothesis that came to everyone’s mind was that the star was in its final days and would explode as a Type II supernova, which seemed plausible given its size and characteristics.
Betelgeuse is massive and very hot, so it burns its fuel faster than smaller stars. As a result, supergiants like Betelgeuse have shorter lives (Betelgeuse is only about 10 million years old, while our Sun is roughly 450 times older), and it is estimated to have about 100,000 years left.
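That lifetime can be roughly cross-checked with a textbook mass-lifetime scaling; Betelgeuse’s mass of about 18 solar masses is an assumed value here, not from the article:

```python
# Crude main-sequence lifetime scaling: t = 10 Gyr * (M / M_sun)^-2.5
# (a textbook approximation; the 18-solar-mass figure for Betelgeuse is assumed)
def lifetime_gyr(mass_solar):
    return 10 * mass_solar ** -2.5

print(round(lifetime_gyr(1), 1))          # 10.0 Gyr for a Sun-like star
print(round(lifetime_gyr(18) * 1000, 1))  # ~7.3 Myr for a Betelgeuse-mass star
```

The result lands in the same ballpark as the ~10-million-year age quoted above, which is why such a massive star is already near the end of its life.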
When a star this size explodes as a supernova, its brightness reaches a maximum and then gradually declines. If Betelgeuse were indeed to go supernova, we’d see it in the night sky shining with roughly 10% of the Moon’s brightness — and some estimates suggest it could even outshine the Moon during its first days of peak luminosity.
However, scientists have suggested that dust along our line of sight made the star appear less bright. Hubble Space Telescope observations of the sudden dimming suggested that a mass ejection from the star created a bubble of gas that quickly cooled; the resulting dust acted like a shadow across our line of sight, making the star appear less luminous.
After the (metaphorical) dust had settled, Betelgeuse still did not turn into a supernova. So what did it actually do?
What is the dust all about?
Another group decided to check using a submillimeter telescope. Submillimeter astronomy observes a specific part of the electromagnetic spectrum: between far-infrared and microwave wavelengths.
These parts of the spectrum help researchers detect the presence of water vapor, as well as molecular oxygen and other molecules. Observations with submillimeter telescopes are ideal for detecting the ingredients which form stars and planets.
A more recent study looked at another giant star and found that this type of mass ejection event may be more common among massive stars than we thought — and the missing piece is a gas bubble around the star. These ejected gas bubbles may be good candidates for understanding more about the building blocks of future star formation.
VY Canis Majoris (VY CMa) is a hypergiant located in the Canis Major constellation, almost twice as large as Betelgeuse. Data collected with the Hubble telescope showed a variability in brightness similar to Betelgeuse’s — in other words, what happened to VY CMa likely also happened to Betelgeuse.
This research compared the recent dimming of VY CMa with historical data from the 1880–1890 and 1920–1940 periods. Canis Major’s star takes longer to swing from maximum to minimum brightness, which could be related to its size: a larger star undergoes longer episodes of dust ejection, and consequently more dust escapes it.
It was a disappointment for many astronomers who were hoping to see something spectacular — but instead, more questions than answers have emerged about the behavior of these stars. For now, we await further research and observations to better understand red supergiants’ mass ejections.
ESA’s spacecraft has collected important information about nearby stars and the solar system’s motion, and provided a map of the motions of 40,000 stars in the Milky Way. It’s the best available map we have of our galaxy, and it looks stunning.
Gaia is an ESA mission to build a 3D map of the Milky Way. Last week, the collaboration published its third early data release; the first full release will follow in 2022. Despite drawing on only 34 months of operations, Gaia has cataloged around 1.8 billion sources. Its ambitious mission is to chart a 3D map of our galaxy, revealing its composition and evolution.
The mission has provided the Gaia Catalogue of Nearby Stars (GCNS), covering stars within around 100 pc of the Sun. It contains more than 330,000 stars, a major advance over the first such effort, compiled in 1957 by Wilhelm Gliese. The recent observations show that most stars in the GCNS have roughly circular orbits, just like our Sun. In addition, the catalog was used to estimate the solar velocity, which is 7 km/s.
The researchers behind the project detected 2,879 ultra-cool dwarfs (UCDs): faint, low-mass, relatively cold objects (under 2,700 Kelvin, or about 2,430 degrees Celsius). Furthermore, Gaia was able to identify the most probable binary stars in the nearby area, 16,565 in total; as seen in the 3D map, each pair shares a common velocity relative to the center of the galaxy.
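As a quick unit check, Kelvin and Celsius differ only by a fixed offset of 273.15 degrees, so the boundary temperature quoted for these ultra-cool dwarfs converts directly. A minimal Python sketch:

```python
def kelvin_to_celsius(t_kelvin: float) -> float:
    """Convert a temperature from Kelvin to degrees Celsius."""
    return t_kelvin - 273.15

# The ultra-cool dwarf boundary quoted for the Gaia sample:
print(round(kelvin_to_celsius(2700)))  # about 2427 degrees Celsius
```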
What caught the astronomers’ eye was the astrometry of quasi-stellar objects (QSOs) outside the galaxy. As the name implies, QSOs are objects with a starlike visual appearance but a different type of optical spectrum. Gaia was able to measure the positions and velocities of 1.6 million QSOs. The idea is to use these outside sources as points of reference to estimate the acceleration of the solar system.
For the mapping mission, it’s also important to correct for aberration due to the motion of the solar system: as we move in a given direction, the stars ahead of us appear to drift closer together, toward the direction of motion, while the stars behind us appear to spread apart.
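The same geometry is what lets quasar positions reveal the solar system’s acceleration: a steady acceleration makes the aberration pattern drift slowly, so distant sources appear to stream across the sky at a tiny rate. A rough, illustrative Python sketch (the acceleration value plugged in is an assumed round number of the expected order of magnitude, not a Gaia measurement):

```python
import math

# A constant velocity produces a fixed aberration shift of roughly
# (v / c) * sin(theta); a constant *acceleration* makes that shift drift
# over time at a rate of about (a / c), measurable against distant quasars.

C = 299_792_458.0                                  # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7
RAD_TO_MICROARCSEC = math.degrees(1) * 3600 * 1e6  # radians -> microarcseconds

def aberration_drift_uas_per_year(accel_m_s2: float) -> float:
    """Maximum apparent drift of source positions, in microarcseconds per
    year, produced by a given acceleration of the observer."""
    return accel_m_s2 / C * SECONDS_PER_YEAR * RAD_TO_MICROARCSEC

# Illustrative input: an acceleration of a few times 1e-10 m/s^2, the order
# of magnitude expected for the solar system's orbit around the Galaxy.
print(f"{aberration_drift_uas_per_year(2.3e-10):.1f} microarcseconds per year")
```

The resulting drift is only a few microarcseconds per year, which is why a mission with Gaia’s astrometric precision is needed to see it.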
The result of analyzing the motion of the stars is in the video below. It shows 40,000 stars, all within 100 pc of the Sun, as they cross the sky. The dots are stars, and each becomes a trail representing its motion; faster objects have longer trails. The final frame shows the sky 400,000 years into the future.
The next release, DR4, will be based on 66 months of the mission and will include Gaia’s list of exoplanets. According to the collaboration, the spacecraft is in good condition, with radiation damage below what the team expected.
Our galaxy is teeming with rogue planets, either torn from their parent stars by chaotic conditions or born without a star at all. These orphan planets could be discovered en masse by an upcoming NASA mission: the Nancy Grace Roman Space Telescope.
The Milky Way is home to a multitude of lonely drifting objects: galactic orphans with masses similar to that of a planet, separated from any parent star. These nomad planets drift freely through the galaxy alone, challenging the commonly accepted image of planets orbiting a parent star. ‘Rogue planets’ could, in fact, outnumber stars in our galaxy, a new study published in the Astronomical Journal indicates.
“Think about how crazy it is that there could be an Earth, a Mars, or a Jupiter floating all alone through the galaxy. You would have a perfect view of the night sky but stuck in an eternal night,” lead author of the study, Samson Johnson, an astronomy graduate student at The Ohio State University, tells ZME Science. “Although these planets could not host life, it is quite a place to travel to with your imagination. The possibility of rogue planets in our galaxy had not occurred to me until coming to Ohio State.”
Up to now, very few of these orphan planets have actually been spotted by astronomers, but the authors’ simulations suggest that with the upcoming launch of NASA’s Nancy Grace Roman Space Telescope in the mid-2020s, this situation could change. Perhaps drastically so.
“We performed simulations of the upcoming Nancy Grace Roman Space Telescope (Roman) Galactic Exoplanet Survey to determine how sensitive it is to microlensing events caused by rogue planets,” Johnson says. “Roman will be good at detecting microlensing events from any type of ‘lens’ — whether it be a star or something else — because it has a large field of view and a high observational cadence.”
The team’s simulations showed that Roman could spot hundreds of these mysterious rogue planets, in the process, helping researchers identify how they came to wander the galaxy alone and indicating how great this population could be in the wider Universe.
Rogue by Name, Rogue by Nature: Mysterious and Missing
Thus far, much mystery surrounds the process that frees these planets from orbit around a star. The two main competing theories suggest that rogue planets are either thrown free of their parent star or form in isolation. Each process would likely produce rogue planets with radically different qualities.
“The first idea suggests that rogue planets form like planets in the Solar System, condensing from the protoplanetary disk that accompanies stars when they are born,” Johnson explains. “But as the evolution of planetary systems can be chaotic and messy, members can be ejected from the system, most likely leading to rogue planets with masses similar to Mars or Earth.”
Johnson goes on to offer an alternative method of rogue planet formation that would see them form in isolation, similar to stars that form from giant collapsing gas clouds. “This formation process would likely produce objects with masses similar to Jupiter, roughly a few hundred times that of the Earth.”
“This likely can’t produce very low-mass planets — similar to the mass of the Earth. These almost certainly formed via the former process,” adds co-author Scott Gaudi, a professor of astronomy and distinguished university scholar at Ohio State. “The universe could be teeming with rogue planets and we wouldn’t even know it.”
The question is: if these objects are so common, why have we spotted so few of them? “The difficulty with detecting rogue planets is that they emit essentially no light,” Gaudi explains. “Since detecting light from an object is the main tool astronomers use to find objects, rogue planets have been elusive.”
Astronomers can use a method called gravitational microlensing to spot rogue planets, but this method isn’t without its challenges, as Gaudi elucidates:
“Microlensing events are both unpredictable and exceedingly rare, and so one must monitor hundreds of millions of stars nearly continuously to detect these events,” the researcher tells ZME Science. “This requires looking at very dense stellar fields, such as those near the centre of our galaxy. It also requires a relatively large field of view.”
Additionally, the centre of the Milky Way is highly obscured by dust, requiring us to look at it in the near-infrared region of the electromagnetic spectrum, a task that is extremely difficult from the ground because the Earth’s atmosphere makes the sky extremely bright in near-infrared light.
“All of these points argue for a space-based, high angular resolution, wide-field, near-infrared telescope,” says Gaudi. “That’s where Roman — formally the Wide Field InfraRed Survey Telescope (WFIRST) — comes in.”
Nancy Grace Roman Space Telescope (and Einstein) to the Rescue!
The Roman telescope, named after Nancy Grace Roman, NASA’s first chief astronomer, who paved the way for space telescopes focused on the broader universe, will launch in the mid-2020s. It is set to become the first telescope to attempt a census of rogue planets, focusing on planets in the Milky Way between our Sun and the centre of our galaxy, a span of some 24,000 light-years.
The team’s study consisted of simulations created to discover just how sensitive the Roman telescope could be to the microlensing events that indicate the presence of rogue planets. It found that the next-generation space telescope was 10 times as sensitive as current Earth-based telescopes, a difference that came as a surprise to the researchers themselves. “Determining just how sensitive Roman is was a real shock,” Johnson says. “It might even be able to tell us about moons that are ejected from planetary systems! We also found a new ‘microlensing degeneracy’ in the process of the study, the subject of another paper that will be coming out shortly.”
Johnson’s co-author Gaudi echoes this surprise. “I was surprised that Roman was sensitive to rogue planets with mass as low as that of Mars and that the signals were so strong,” the researcher adds. “I did not expect that before we started the simulations.”
The phenomenon that Roman will exploit to make its observations stems from a prediction made in Einstein’s theory of general relativity, that suggests that objects with mass ‘warp’ the fabric of space around them. The most common analogy used to explain this phenomenon is ‘dents’ created in a stretched rubber sheet by placing objects of varying mass upon it. The heavier the object — thus the greater the mass — the larger the dent.
This warping of space isn’t just responsible for the orbits of planets; it also curves the paths of light rays as they pass the ‘dents’ in space. This means that light from a background source is bent by the mass of a foreground object. The effect has recently been used to spot a distant Milky Way ‘look-alike’. But in that case, as in many gravitational lensing events, the intervening object was a galaxy, not a rogue planet, producing a much stronger, longer-lasting, and thus easier to detect effect than the ‘microlensing’ caused by a rogue planet.
“Essentially, a microlensing event happens when a foreground object — in this case, a rogue planet — comes into very close alignment with a background star. The gravity of the foreground object focuses light from the background star, causing it to be magnified,” Gaudi says. “The magnification increases as the foreground object comes into alignment with the background star, and then decreases as the foreground object moves away from the background star.”
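Gaudi’s description corresponds to the standard point-source, point-lens light curve used in microlensing work, where the magnification depends only on the lens-source separation u measured in units of the Einstein radius. A short illustrative Python sketch:

```python
import math

def magnification(u: float) -> float:
    """Point-source, point-lens microlensing magnification for a
    lens-source separation u, in units of the Einstein radius."""
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

# The magnification rises steeply as the alignment tightens, then falls
# again as the lens moves away; sampling a few separations shows the trend:
for u in (1.0, 0.5, 0.1):
    print(f"u = {u:>4}: A = {magnification(u):.2f}")
```

At wide separation the magnification approaches 1 (no detectable event), which is why Roman must watch hundreds of millions of stars to catch the rare close alignments.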
As Johnson points out, microlensing is an important and exciting way to study exoplanets — planets outside the solar system — but when coupled with Roman, it becomes key to spotting planetary orphans.
“Roman really is our best bet to find these objects. The next best thing would be Roman 2.0 — with a larger field of view and higher cadence,” the researcher tells ZME, stating that rogue planets are just part of the bigger picture that this forthcoming space-based telescope could allow us to see. “I’m hoping to do as much work with Roman as possible. The next big project is determining what Roman will be able to teach us about the frequency of Earth-analogs — Earth-mass planets in the habitable zones of Sun-like stars.”
Johnson, S. A., Penny, M., Gaudi, B. S., et al., ‘Predictions of the Nancy Grace Roman Space Telescope Galactic Exoplanet Survey. II. Free-floating Planet Detection Rates,’ The Astronomical Journal.
Dome A, the highest ice dome on the Antarctic Plateau, is the best place on the planet to study the stars, providing the clearest views of the night sky, according to new research. The finding will likely interest any astronomer ready to cope with the Antarctic cold.
Ice domes are the uppermost portions of ice sheets and rise high above the frozen terrain. Dome A is considered one of the coldest places on Earth, with temperatures that can drop as low as -90ºC (-130ºF), which is actually similar to nighttime conditions on Mars.
That means that while it may be a great place for astronomers, its remote location and extreme conditions present significant challenges. Scientists who want to visit Dome A have to travel 1,200 kilometers (740 miles) into the interior of the Antarctic continent, and that’s after traveling to Antarctica itself.
“The combination of high altitude, low temperature, long periods of continuous darkness, and an exceptionally stable atmosphere, makes Dome A a very attractive location for optical and infrared astronomy. A telescope located there would have sharper images and could detect fainter objects,” said Paul Hickson, co-author of the study, in a press release.
For astronomers, light pollution isn’t the only problem when looking at the night sky. Atmospheric turbulence can also blur views into space. That’s why telescopes located at mid and high elevations are so useful, taking advantage of the weaker turbulence found at those altitudes.
Astronomers quantify the quality of the night-sky view using a metric called the seeing number, measured in arcseconds. The lower the number, the lower the turbulence and the better the view of the stars and galaxies. At the elevated telescopes in Chile and Hawaii, the seeing number is 0.6 to 0.8 arcseconds.
At Dome C, another dome on the Antarctic Plateau, the number is between 0.23 and 0.36 arcseconds, which makes the continent an ideal place to watch the night sky. The level of turbulence there is lower because the boundary layer, the lowest and most turbulent part of the Earth’s atmosphere, is especially thin over the plateau.
Working with researchers from China, Canada, and Australia, Hickson showed in his study that Dome A is actually better than Dome C. The team took nighttime measurements at the location, something that hadn’t been done before, and found a median seeing number of 0.31 arcseconds.
The researchers compared the two Antarctic sites and found that the measurements from Dome A at eight meters (26 feet) were much better than the ones taken at the same height at Dome C. The measurements from Dome A at this height were equivalent to the ones made at 20 meters (66 feet) at Dome C.
Dome A “is a natural laboratory for studies of the formation and dissipation of turbulence within the boundary layer,” wrote the authors in their paper. “Future measurements of weather, seeing and the low-altitude turbulence profile could contribute to a better understanding of the Antarctic atmosphere.”
Many mysteries surround conditions in the early Universe; chief amongst them is the question of how and when galaxies began to form. At some point in the Universe’s history, gravitational instability brought together increasingly larger clumps of matter, beginning with atoms, dust, and gas, then stars and planets, clusters, and finally massive galaxies.
Whilst early protogalaxies may have formed as early as a few hundred million years after the Big Bang, the first well-formed galaxies with features such as spiral arms, rings and bars are thought to have only formed around 6 billion years into the Universe’s 13.8 billion year lifetime.
Astronomy has, in general, confirmed this: closer, and thus later, galaxies display characteristics such as rings, bars, and spiral arms, like our own home, the Milky Way, while more distant, earlier galaxies lack these features.
New discoveries, however, are challenging this accepted view, with three recent pieces of research, in particular, suggesting that well-ordered and massive galaxies existed much earlier in the Universe than previously believed. This either means that the formation of galaxies began much earlier than expected or progressed much faster than many models suggest.
As a consequence scientists may have to refine models of galaxy formation to account for much earlier or much more rapid evolution.
The key to solving the mystery of how soon after the Big Bang galaxies developed definitive shapes and features, such as thin discs and spiral arms, begins with examining the theories that describe this formation: one family of theories implies these processes occur over a prolonged period of time, while another suggests formation can proceed much more quickly.
Bottom’s Up! Did Formation Start Earlier or Proceed Quicker?
The simplest model of galaxy formation suggests that at a time when the Universe was mostly hydrogen and helium, such structures emerged from dense clouds of gas that collapsed under their own gravity. This so-called ‘monolithic model’ was the first suggested formation process for galaxies and the stars that comprise them.
There are also ‘top-down’ formation models that suggest galaxies may have emerged from larger conglomerates of matter that collapsed in a similar fashion but then went on to break apart, but these currently aren’t favoured by most cosmologists.
Under the influence of gravity, gas and dust collapse into stars which are drawn together as clusters, then superclusters, and finally galaxies. The question is, how do galaxies grow and develop their characteristics?
One idea suggests that the seed of a galaxy continues to accumulate gas and dust, slowly growing to massive size. When it reaches gigantic proportions, this galaxy is able to gobble up clusters of stars and even smaller galaxies. This process should be fairly slow, however: glacially so at first, in fact, accelerating only once smaller galaxies begin to be absorbed.
If this is the predominant formation mechanism for galaxies, then what we shouldn’t see in the early universe, before about 6 billion years after the Big Bang, are massive disc-like galaxies or spiral-armed galaxies like the Milky Way. Further out in space, and thus further back in time, irregular galaxies and amorphous blobs should be heavily favoured. Unless, that is, galactic formation got a serious head start.
But, there is another theory of galactic evolution. What if galaxy growth progresses predominantly through merger processes?
Rather than a galaxy waiting until it grows massive in size to start accumulating its smaller counterparts, mergers between similar-sized galaxies could be the driving factor in creating larger galaxies. This would mean that the process of galaxy formation could proceed much more quickly than previously believed.
In either case, whether formation started earlier or proceeded through mergers, what we should see is well-formed massive galaxies, with characteristics like disks, bars, and spiral arms, much further out in space and thus further back in time.
It just so happens that is exactly what astronomers are starting to find.
Should’ve Put a Ring on it!
One such line of evidence for a more rapid form of galactic formation, or a much earlier start, comes in the distinctive doughnut-like shape of a collisional ring galaxy discovered 11 billion light-years away. This “cosmic ring of fire”, similar in mass to the Milky Way and notable for the massive hole in its centre (three million times the distance between the Earth and the Sun), existed when the Universe was just 2.7 billion years old, far earlier than predicted.
Dr Tiantian Yuan, of Australia’s ARC Centre of Excellence for All-Sky Astrophysics in 3 Dimensions (ASTRO 3D) was part of a group that successfully gave the ring galaxy — designated R5519 — an age.
“It is a very curious object, one that we have never seen before, definitely not in the early Universe,” explains Yuan, a specialist in studying galactic features like spiral arms. “R5519 looks like a corona galaxy, but it isn’t.”
So, even if R5519 is striking, how does this imply that models of galaxy evolution could be inaccurate? The answer lies in how collisional ring galaxies such as this are created.
Yuan explains that the ‘hole’ at the centre of R5519 was created when a thin disk-like galaxy was ‘shot’ by another galaxy hitting head-on, just like a bullet hitting a thin paper target at a shooting range.
“When a galaxy hits the target galaxy (a thin stellar disk) like a bullet, head-on, it causes a pulse in the disk of the victim galaxy,” Yuan says. “The pulse then induces radially propagating density waves through the target galaxy that form the ring.”
Yuan explains that at one time astronomers had expected to find more collisional ring galaxies in the young universe, simply because there were more galactic collisions progressing at that time. “We find that is not the case,” she continues. “The young universe might have more collisions and bullets, but it lacks thin stellar disks to act as targets… or so we thought.”
Here’s where the problem lies, thin stellar disks that serve as targets in this cosmic firing range aren’t supposed to exist so early in the Universe’s history according to currently favoured cosmological models.
“Our discovery implies that thin stellar disks similar to our Milky Way’s are already developed for some galaxies at a quarter of the age of the universe.”
Yuan and her team’s findings show galactic structures like thin disks and rings could form 3 billion years after the Big Bang. The researcher points to another piece of research that supports the idea of structured galaxies in the early Universe.
“The first step in disk formation is to form a disk at all — an object that is dominated by rotation,” Yuan says. “This is why the recent discovery of the ‘Wolfe disk’ is truly amazing — it pushes the earliest formation time of a large gas disk to much earlier than we previously thought.”
Who’s Afraid of the Big Bad Wolfe?
The discovery Dr Tiantian Yuan refers to is the identification of a massive rotating disk galaxy when the Universe was just 1.5 billion years old. The galaxy — officially named DLA0817g — is nicknamed the ‘Wolfe Disk’ in tribute to the late astronomer Arthur M. Wolfe, who first speculated about such objects in the 1990s.
The fact that the Wolfe Disk, which is spinning at a tremendous speed of around 170 miles (roughly 272 kilometers) per second, existed when the Universe was just 10% of its current age strongly implies rapid galactic growth or the early formation of massive galaxies.
“The ‘take-home’ message from the discovery of a massive, rapidly rotating disk galaxy that resembles our Milky Way but formed only 1.5 billion years after the Big Bang, is that galaxy formation can proceed rapidly enough to generate massive, gas-rich galaxies at early times,” says J. Xavier Prochaska, professor of astronomy and astrophysics at the University of California Santa Cruz, and part of the team that discovered the Wolfe Disk.
The team behind the Wolfe Disk discovery posit the idea that its existence and the fact that it is both massive and well-formed indicate that the slow accretion of gas and dust may not be the dominant formation mechanism for galaxies. Something much more rapid could be at play.
“Most galaxies that we find early in the universe look like train wrecks because they underwent consistent and often ‘violent’ merging,” says Marcel Neeleman of the Max Planck Institute for Astronomy in Heidelberg, Germany, who led the team of astronomers. “These hot mergers make it difficult to form well-ordered, cold rotating disks as we observe in our present universe.”
If the Wolfe Disk grew through the accumulation of cold gas and dust, Prochaska explains, this leaves questions unanswered about its stability: “The key challenge is to rapidly assemble such a large gas mass while maintaining a relatively quiescent, thin and rotating disk.”
Of course, sometimes it can be the absence of something that provides evidence that a theory, or family of theories is inaccurate, as the following research exemplifies.
Further away and further back in time: Some of our Stars are Missing
The Hubble Space Telescope (HST) allows astronomers to stare back in time to when the Universe was just 500 million years old, letting researchers finally investigate the nature of the first galaxies. It could also deliver more contradictions to current cosmological models, just as the Wolfe Disk and R5519 have.
Results recently delivered by the HST and examined by a team of European astronomers confirm the absence of the primitive stars when the Universe was just 500 million years old.
These early stars — named Population III stars — are thought to be composed of just hydrogen and helium, with tiny amounts of lithium and beryllium, reflecting the abundances of these elements in the young Universe.
A team of astronomers led by Rachana Bhatawdekar of the European Space Agency confirmed the absence of this first generation of stars by searching the Universe as it existed between 500 million and 1 billion years into its history. Their observations were published in a 2019 paper, with further research due to be published in Monthly Notices of the Royal Astronomical Society and discussed at a press conference during the 236th meeting of the American Astronomical Society.
“Population III stars are extremely hot and massive and so they are much bluer in colour than normal stars,” Bhatawdekar says. “We, therefore, looked at the ultraviolet colours of our galaxies to see exactly how blue they looked.”
The team found that even though the galaxies they observed were blue, they weren’t blue enough to host stars with very low metal content (by ‘metals’, astronomers mean any element heavier than hydrogen and helium, such as oxygen, nitrogen, carbon, and iron).
“What this tells us is that even though we are looking at a Universe that is just 500 million years old, galaxies have already been significantly enriched with metals,” Bhatawdekar says. “This essentially means that stars and galaxies must have formed even earlier than this very early cosmic time.”
Thus the team’s observations imply that stars had already begun to fade and die by this point in time, shedding heavier elements back into the Universe. These elements would go on to form the building blocks of later generations of stars.
This piece of the puzzle would seem to suggest that the presence of massive galaxies is not a factor that arises as the result of rapid growth, but that the growth processes began earlier.
“We found no evidence of these first-generation Population III stars in this cosmic time interval,” explains Bhatawdekar. “These results have profound astrophysical consequences as they show that galaxies must have formed much earlier than we thought.”
Finding More Evidence of Early Galaxy Formation
For Bhatawdekar, further investigation of conditions in the early Universe will only really open up with the launch of the James Webb Space Telescope.
“What we found is that there is no evidence of the existence of Population III stars in this cosmic time, but there are many low-mass, faint galaxies in the early Universe,” she says. “This suggests that the first stars and first galaxies must have formed even earlier than this incredible instrument, Hubble, can probe.
“The James Webb Space Telescope, which is scheduled to be launched next year in 2021, will look even further back in time as far as when the Universe was just 200 million years old.”
Even before the launch of the James Webb Space Telescope, and as if to dismiss the idea that these results could be a fluke and thus not indicative of a wider shift towards earlier massive galaxies, Tiantian Yuan describes further findings yet to be published.
“I have actually found more collisional ring galaxies in the early universe!” exclaims Yuan. “There is a cool one that is gravitationally lensed, giving us a sharper view of the ring.
“I can tell you that this new ring is 1 billion years older than R5519, and it looks a lot different from R5519 and more like rings in our nearby Universe.”
As we refine our ideas of galaxy evolution, we are likely to find that when presented with two conflicting theories, the truth lies somewhere in between. Thus, as we observe galaxies still forming today, mergers between galaxies, and complex structures throughout the Universe’s history, we may find that galactic evolution progresses both slowly and quickly.
Hopefully, this mix of models will also deliver an accurate recipe for how spiral arms, rings, and bars arise from thin disks, something currently lacking.
“What these discoveries mean is that we are entering a new era that we can ask the question of how different structures of galaxies first formed,” Yuan explains. “Galaxies do not form in one go; some parts were assembled first and others evolved later.
“It is time for the models to evolve to the next level of precision and accuracy. Like a jigsaw puzzle, the more pieces we reveal in observations, the more challenging it is to get the theoretical models correct, and the closer we are to grasp the mastery of nature.”
Sources and further reading
Yuan, T., Elagi, A., Labbe, I., Kacprzak, G. G., et al., ‘A giant galaxy in the young Universe with a massive ring,’ Nature Astronomy.
Bhatawdekar, R., Conselice, C. J., Margalef-Bentabol, B., Duncan, K., ‘Evolution of the galaxy stellar mass functions and UV luminosity functions at z = 6−9 in the Hubble Frontier Fields,’ Monthly Notices of the Royal Astronomical Society, Volume 486, Issue 3, July 2019, Pages 3805–3830, https://doi.org/10.1093/mnras/stz866
Out beyond the orbit of Neptune and the solar system’s seven other major planets lies a ring of icy bodies known as the Kuiper Belt. The disc, which is 20 times as wide as the asteroid belt and an estimated 200 times as massive, houses a wide array of objects, including its most famous inhabitant, the dwarf planet Pluto. But it holds more than objects of ice and rock: the Kuiper Belt may hold the secrets of how the planets of the solar system formed, and of the raw materials that created the worlds around us, including our own planet.
“The Kuiper Belt is a repository of the solar system’s most primordial material and the long-sought nursery from which most short-period comets originate,” explains David C. Jewitt, an astronomer based at the University of California, Los Angeles, who is renowned for his study of the solar system and its smaller bodies. “The scientific impact of the Kuiper Belt has been huge, in many ways reshaping our ideas about the formation and evolution of the Solar System.”
Researchers now stand on the verge of unlocking these secrets with the investigation of the Kuiper Belt contact binary Arrokoth (previously known as ‘Ultima Thule’). In January 2019, the object, named for a Native American word for ‘sky’, became the most distant object ever visited by a man-made spacecraft.
“Most of what we know about the belt was determined using ground-based telescopes. As a result, Kuiper Belt studies have been limited to objects larger than about 100 km because the smaller ones are too faint to easily detect,” says Jewitt. “Now, 5 years after its flyby of the 2000-km-diameter Kuiper Belt object Pluto, NASA’s New Horizons spacecraft has provided the first close-up look at a small, cold classical Kuiper Belt object.”
The data collected by the New Horizons probe has allowed three separate teams of researchers to conduct the most in-depth investigation of a Kuiper Belt object ever undertaken. In the process, they discovered that our current knowledge of how these objects form is very likely incorrect. From all the evidence the three teams collected, it seems that Kuiper Belt objects form through a far more delicate, low-velocity process than previously believed. Since most astrophysicists believe that these objects, known as planetesimals, acted as the seeds from which the planets grew, this new model changes our picture of how the solar system formed.
How Kuiper Belt Bodies Get in Shape
The majority of the clues as to Arrokoth’s low-velocity formation originate from its unusual binary lobed shape. The larger lobe is joined to the smaller lobe by an extremely narrow ‘neck.’ What is especially interesting about this shape — reminiscent of a bowling pin or a snowman — is that the lobes are perfectly aligned.
John Spencer, Institute Scientist in the Department of Space Studies, Southwest Research Institute in Boulder, Colorado, led a team of researchers that reconstructed Arrokoth’s three-dimensional shape from a series of high-resolution black-and-white images. Spencer’s paper concludes that Arrokoth’s lobes are much flatter than previously believed but that, despite this, both lobes are denser than expected.
William McKinnon, Professor of Earth and Planetary Sciences at Washington University in St. Louis, and his team ran simulations of different formation methods to see which conditions led to the shape reconstructed by Spencer and his colleagues.
McKinnon and his team discovered that the shape of Arrokoth could only be achieved through a low-velocity formation, at around 3 m/s. This presents a problem for current theories of how planetesimals form.
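A back-of-the-envelope comparison shows why a few metres per second counts as gentle: it is of the same order as the escape velocity of an Arrokoth-sized body, so material arriving at that speed can merge rather than shatter. A rough Python sketch, using assumed, illustrative values for size and density rather than measured figures:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(radius_m: float, density_kg_m3: float) -> float:
    """Escape velocity of a uniform sphere with the given radius and density."""
    mass = (4.0 / 3.0) * math.pi * radius_m ** 3 * density_kg_m3
    return math.sqrt(2 * G * mass / radius_m)

# Assumed, illustrative values: an Arrokoth-like body idealised as a sphere
# roughly 8 km in radius with a comet-like density of ~500 kg/m^3.
print(f"{escape_velocity(8_000, 500):.1f} m/s")  # a few metres per second
```

Collisions much faster than this self-gravity scale would tend to disrupt the body, which is the problem the simulations expose for high-speed accretion.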
The previously favoured model of planetesimal formation involves high-velocity particles smashing together in a process called hierarchical accretion. But McKinnon’s simulations suggest that such high-velocity collisions would not have created a larger body; rather, they would have blown it apart. The geometrical alignment of the larger and smaller lobes indicates to the team that they were once co-orbiting bodies that gradually lost angular momentum and spiralled together, resulting in a gentle merger.
“Arrokoth’s delicate structure is difficult to reconcile with alternative models in which Arrokoth-like Kuiper Belt objects are fragments of larger objects shattered by energetic collisions,” says astronomer David Jewitt. This supports a method of planetesimal formation called ‘cloud collapse.’
“A variety of evidence from Arrokoth points to gravitational collapse as the formation mechanism. The evidence from the shape is probably most compelling,” William Grundy of Lowell Observatory says. “Gravitational collapse is a rapid but gentle process, that only draws material from a small region. Not the much more time consuming and violent process of hierarchical accretion – merging dust grains to make bigger ones, and so on up through pebbles, cobbles, boulders, incrementally larger and larger, with more and more violent collisions as the things crashing into each other.”
Grundy, whose team analysed the thermal emissions from Arrokoth’s ‘winter’ side, goes on to explain that the speed at which cloud collapse occurs, and the fact that all the material feeding it is local, mean that Kuiper Belt planetesimals should be fairly uniform.
Cold Classicals: Untouched and unpolluted
Arrokoth is part of a Kuiper Belt population referred to as ‘cold classicals.’ This particular family of bodies is important to astrophysicists researching the origins of the solar system. This is because, at their distance from the Sun within the Kuiper Belt, they have remained virtually untouched by both other objects and the Sun’s violent radiation.
As many of these objects, Arrokoth in particular, date back 4 billion years to the very origin of the solar system, they hold an uncontaminated record of the materials from which the solar system emerged and of the processes at play in its birth.
Arrokoth has a relatively smooth surface in comparison with other comets, moons, and planets within the solar system. It does show the signs of a few impacts, with one very noticeable 7-km-wide impact crater located on the smaller lobe. The few craters dotted across Arrokoth’s surface do seem to point to a few small high-velocity impacts. The characteristics of Arrokoth’s cratering allowed the team to infer its age of around 4 billion years. This places its birth right around the time the planets had begun to form in the solar system.
“The smooth, relatively un-cratered surface shows that Arrokoth is relatively pristine, so evidence of its formation hasn’t been destroyed by subsequent collisions,” Spencer explains. “The number of craters nevertheless indicates that the surface is very old, likely dating back to the time of accretion.
“The almost perfect alignment of the two lobes, and the lack of obvious damage where they meet, indicate gentle coalescence of two objects that formed in orbit around each other, something most easily accomplished by local cloud collapse.”
As mentioned above, Will Grundy and his team were tasked with the analysis of thermal emissions in the radio band emitted by the side of Arrokoth facing away from the Sun.
“We looked at the thermal emission at radio wavelengths from Arrokoth’s winter night side. Arrokoth is very cold, but it does still emit thermal radiation,” Grundy says. “The signal we saw was brighter, corresponding to a warmer temperature than expected for the winter surface temperature. Our hypothesis is that we are seeing emission from below the surface, at depths where the warmth from last summer still lingers.”
Grundy’s team also looked at the colour imaging of Arrokoth with the aim of determining what it is composed of. “We looked at the variation of colour across the surface, finding it to be quite subtle,” he says. “There are variations in overall brightness, but the colour doesn’t change much from place to place, leading us to suspect that the brightness variations are more about regional differences in surface texture than compositional differences.”
The team determined that Arrokoth’s dark red colouration is likely to be a result of the presence of ‘messy’ molecular jumbles of organic materials that occur when radiation drives the construction of increasingly complex molecules–known as tholins.
“One open question is where Arrokoth’s tholins came from,” Grundy says. “Were they already present in the molecular cloud from which the Solar System formed? Did they form in the protoplanetary nebula before Arrokoth accreted? Or did they form after Arrokoth accreted, through radiation from the Sun itself?”
The researcher says that all three are possible, but he considers the uniformity of Arrokoth’s colouration to favour the first two possibilities over the third. The team also searched Arrokoth for more recognisable organic molecules, spotting methanol–albeit frozen solid–but, not finding any trace of water. Something which came as a surprise to Grundy. “It was surprising not to see a clear signature of water ice since that’s such a common material in the outer solar system. Typically, comets have around 1% methanol, relative to their water ice.”
The team believe that this disparity arises from the fact that Arrokoth accreted in a very distinct chemical environment at the extreme edge of the nebula which collapsed to create the solar system.
“If it was cold enough there for carbon monoxide (CO) and methane (CH4) to freeze as ice onto dust grains, that would enable chemical mechanisms that create methanol and potentially destroy water, too. But those mechanisms could only work where these gases are frozen solid,” Grundy says. “Arrokoth appears to be sampling a region of the nebula where such conditions held.
“We have not seen comets so rich in methanol, which probably means we have not seen comets that formed in this outermost part of the nebula. Most of them probably originally formed closer to the Sun (or else at a different time in nebular history when the chemical conditions were somewhat different).”
Looking to future Kuiper Belt investigations
Investigating Kuiper Belt objects is no walk in the park, with difficulties arising both from the disc’s distance from the Sun and from the fact that Kuiper Belt objects tend to be very small. Grundy explains that because sunlight falls off with the square of distance, objects as far away as the Kuiper Belt require the most powerful telescopes to do much of anything.
“Sending a spacecraft for a close-up look is great to do, but it took New Horizons 13 years to reach Arrokoth,” Grundy says. “It’ll probably be some time yet before another such object gets visited up-close by a spacecraft.”
“For flybys, the journey times are very long–we flew for 13 years to get there–navigation is difficult because we don’t know the orbits of objects out there very well, we’d only been tracking Arrokoth for 4 years,” Spencer explains. “The round-trip light time is long, which makes controlling the spacecraft more challenging, and light levels are very low, so taking well-exposed, unblurred, images is difficult.”
Spencer adds that from Earth, objects like Arrokoth are mostly very faint, meaning only a small fraction of them have been discovered and learning about their detailed properties is difficult even with large telescopes. These difficulties mean that one of the things left to discover is just how common bi-lobed contact binaries like Arrokoth are in the Kuiper Belt. “Some evidence from lightcurves suggests up to 25% of cold classicals could be contact binaries,” he says. “We know that many of them are binaries composed of two objects orbiting each other, however.”
Fortunately, telescope technology promises to make leaps and bounds over the coming decades, with the launch of the space-based James Webb Space Telescope (JWST) in 2021 and the completion of the Atacama Desert based Extremely Large Telescope (ELT) in 2026.
“Both will help,” says Grundy. “Larger telescopes are needed to collect more light and feed it to more sensitive instruments. JWST and the new generation of extremely large telescopes set to come online over the coming years will enable new investigations of these objects.”
In terms of future spacecraft visits, Grundy believes that researchers and engineers should be thinking small, literally: “If technical advances were to enable highly miniaturized spacecraft to be flown to the Kuiper belt more quickly, that could enable a lot of things. The big obstacles to doing that with today’s CubeSats are power, longevity, and communications, but the rapid advance of technology makes me hopeful that it will be possible to do a whole lot more with tiny little spacecraft within a few decades.
“It’s funny how progress calls for ever bigger telescopes and ever smaller spacecraft.”
Of course, one of the most lasting changes resulting from this landmark triad of studies on Arrokoth published in Science is the move away from hierarchical formation models and the adoption of a gravitational or cloud collapse model to explain the creation of planetesimals. This shift would resolve one of the long-standing issues with hierarchical models: they work quite well for growing things from dust size to pebble size, but once pebble size is reached, the particles quickly spiral in toward the Sun.
“I think it will shift the focus to the circumstances that trigger the collapse. It’s a very fast way of making a planetesimal–decades instead of hundreds of millennia–but the circumstances have to be right for instabilities to concentrate solids enough for them to collapse,” Grundy explains. “It will be interesting to map out where and when planetesimals should form, what their size distributions should be, and where the solids that they are formed from should have originated.”
W. M. Grundy et al., Science (2020).
W. B. McKinnon et al., Science (2020).
J. R. Spencer et al., Science 10.1126/science.aay3999 (2020).
D. C. Jewitt et al., Science 10.1126/science.aba6889 (2020).
After an initial setback yesterday (17/12/19) due to a software error, the European Space Agency’s (ESA) CHaracterising ExOPlanets Satellite — or CHEOPS — telescope has finally launched from the European Spaceport in Kourou, French Guiana.
CHEOPS was aboard a Russian Soyuz-Fregat rocket which blasted off at 9:54 am European time. The rocket will take approximately 145 minutes to place the CHEOPS unit into a rare pole-to-pole low-Earth orbit.
The telescope hitched a ride with an Italian radar satellite, the rocket’s primary payload.
CHEOPS is the result of a collaboration between 11 member countries within the ESA, with Switzerland taking the lead on the project. Two of the country’s leading universities — the University of Geneva and the University of Bern — worked together to equip CHEOPS with a state-of-the-art photometer.
This powerful device will measure changes in the light emitted by nearby stars as planets pass by — or transit — them. This examination reveals many of a planet’s characteristics, its diameter and details of its atmosphere in particular.
By combining a precise measurement of diameter with a measurement of mass, collected by an alternative method, researchers will then be able to determine a planet’s density. This, in turn, can lead to them deducing its composition and internal structure.
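The arithmetic behind this chain of deductions is simple enough to sketch. A transit’s depth equals the fraction of the star’s disc the planet covers, so the planet’s radius follows from the star’s; adding a mass measurement then yields the bulk density. Below is a minimal illustration in Python, using Earth–Sun numbers purely for scale (CHEOPS targets will of course differ):

```python
import math

def planet_radius_from_transit(depth, star_radius_km):
    """Transit depth is the fraction of starlight blocked:
    depth = (R_planet / R_star)**2, so R_planet = R_star * sqrt(depth)."""
    return star_radius_km * math.sqrt(depth)

def bulk_density(mass_kg, radius_km):
    """Mean density in g/cm^3 from a planet's mass and radius."""
    radius_cm = radius_km * 1e5
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    return (mass_kg * 1000) / volume_cm3

# Illustrative: an Earth-sized planet transiting a Sun-sized star
# dims it by less than 0.01%.
R_SUN_KM = 696_340
depth = (6371 / R_SUN_KM) ** 2
r = planet_radius_from_transit(depth, R_SUN_KM)  # recovers ~6371 km
rho = bulk_density(5.972e24, r)                  # ~5.5 g/cm^3
print(round(r), round(rho, 1))
```

The tiny depth for an Earth analogue is exactly why a dedicated, high-precision photometer is needed.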
CHEOPS was completed in a short time with an extremely limited budget of around 50-million Euros.
“CHEOPS is the first S-class mission for ESA, meaning it has a small budget and a short timeline to completion,” explains Kate Isaak, the ESA project scientist for CHEOPS. “Because of this, it is necessary for CHEOPS to build on existing technology.”
CHEOPS: Informed by the past, informing the future
The project is acting as a kind of ‘middle-man’ between existing exoplanet knowledge and future investigations. It is directed to perform follow-up investigations on 400–500 ‘targets’ found by NASA planet-hunter the Transiting Exoplanet Survey Satellite (TESS) and its predecessor, the Kepler observatory. Said targets will occupy a size range of approximately Earth to Neptune.
This mission then fits in with the launch of the James Webb Space Telescope in 2021 and future facilities such as the Extremely Large Telescope in the Chilean desert, set to begin operations in 2026. CHEOPS will narrow down its initial targets to a smaller set of ‘golden targets’, meaning its investigation should help researchers pinpoint exactly which planets in close proximity to Earth are worthy of follow-up investigation.
“It’s very classic in astronomy that you use a small telescope ‘to identify’, and then a bigger telescope ‘to understand’ — and that’s exactly the kind of process we plan to do,” explains Didier Queloz, who acted as chair of the Cheops science team. “Cheops will now pre-select the very best of the best candidates to apply to extraordinary equipment like very big telescopes on the ground and JWST. This is the chain we will operate.”
Queloz certainly has pedigree when it comes to exoplanets. The astrophysics professor was jointly awarded the 2019 Nobel Prize in Physics with Michel Mayor for the discovery of the first exoplanet orbiting a Sun-like star.
The first task of the science team operating the satellite, based out of the University of Bern, will be to open the protective doors over the 30 cm aperture telescope — thus, allowing CHEOPS to take its first glimpse of the universe.
Astronomers are expressing concerns over the plans of Elon Musk’s SpaceX to launch up to 42,000 satellites in a mega-constellation called Starlink. So far only 122 have been deployed — and astronomers are already reporting unwanted impacts.
With over 2,000 now active and orbiting Earth, satellites are key to modern life. Telecommunication satellites support mobile phone signals and mobile internet. As 5G services start to be deployed, a new set of satellites with the proper technology will need to be launched.
A recent incident with SpaceX raised concerns among astronomers over the consequences of Elon Musk’s plan. The 122 satellites launched by Starlink are brighter than most of the stars visible to the human eye and also move faster through the sky. This leaves a trail that can pollute astronomers’ data.
A train of 19 Starlink satellites passed over the Cerro Tololo Inter-American Observatory in Chile on November 18th. The pass lasted for five minutes and affected an image taken by the Dark Energy Camera (DECam). The image shows the satellite train crossing the camera’s field of view.
“Wow!! I am in shock,” wrote CTIO astronomer Clara Martinez-Vazquez on Twitter.
Satellites are usually dark in the night sky, but sunlight can still reach them shortly after the Sun goes down or early in the morning while the sky is dark, making them visible through telescopes or binoculars.
The number of Starlink satellites already launched represents only 0.3% of those proposed, so the consequences for astronomers could get far worse. Astronomers claim that looking for faint objects, which is the main goal of observatories seeking objects that could harm Earth, would be hindered.
Starlink’s satellites are located at altitudes of over 1,000 km, which means their orbital decay would take millennia. This can create problems for other types of satellites. For example, in September, a satellite used for Earth observation came close to colliding with a Starlink satellite. “A full constellation of Starlink satellites will likely mean the end of Earth-based microwave-radio telescopes able to scan the heavens for faint radio objects,” Swinburne University astronomer Alan Duffy told ScienceAlert in May after the first launch of Starlink satellites.
Elon Musk and SpaceX dismissed the astronomers’ criticism of Starlink’s plans, claiming their satellites would have a minor impact on astronomy. They said SpaceX is working on reducing the satellites’ albedo and that Starlink would adjust the satellites on demand for astronomical experiments.
Cees Bassa from the Netherlands Institute for Radio Astronomy claims that up to 140 Starlink satellites will be visible at all times from observatories on Earth. But the difficulties could be overcome if the company implemented some changes, according to Bassa.
Bassa suggested placing a moratorium on the launch of new Starlink satellites until modifications are made, as well as deorbiting the current ones. He also said the company should redesign the satellites to reduce their reflectivity and should provide real-time information on their trajectory plans.
Researchers from the University of Bern have discovered that the Earth would be approximately 5% larger if it were hot and molten rather than rocky and solid. Pinpointing the difference between rocky exoplanets and their hot, molten counterparts is vital for the search for Earth-like exoplanets orbiting stars outside the solar system.
The fact that rocky exoplanets of approximately Earth’s size are small in comparison to other planets makes them notoriously difficult for astronomers to spot and characterise. Identifying a rocky exoplanet around a bright, Sun-like star will likely not be plausible until the launch of the PLATO mission in 2026. Thankfully, spotting Earth-sized planets around cooler and smaller stars such as the red dwarfs TRAPPIST-1 or Proxima Centauri is currently possible.
But, searching for molten exoplanets could help astronomers probe the darkness of space — and identify Earth-sized rocky-exoplanets around stars like our own.
“A rocky planet that is hot, molten, and possibly harbouring a large, outgassed atmosphere ticks all the boxes,” says Dan Bower, an astrophysicist at the Center for Space and Habitability (CSH) of the University of Bern. “Such a planet could be more easily seen by telescopes due to strong outgoing radiation than its solid counterpart.”
Learning more about these hot, molten worlds could also teach astronomers and astrophysicists more about how planets such as ours form. This is because rocky planets such as the Earth are built from ‘leftovers of leftovers’ — material not utilised in either the formation of stars or giant planets.
“Everything that doesn’t make its way into the central star or a giant planet has the potential to end up forming a much smaller terrestrial planet,” says Bower: “We have reason to believe that processes occurring during the baby years of a planet’s life are fundamental in determining its life path.”
This drove Bower and a team of colleagues mostly from within the Planet S network to attempt to discover the observable characteristics of such a planet. The resulting study — published in the journal Astronomy and Astrophysics — shows that a molten Earth would have a radius 5% or so larger than the actual solid counterpart. They believe this disparity in size is a result of the differences in behaviour between solid and molten materials under the extreme conditions generated beneath the planet’s surface.
As Bower explains: “In essence, a molten silicate occupies more volume than its equivalent solid, and this increases the size of the planet.”
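It is easy to see why a modest change in radius matters: volume grows with the cube of the radius, so the same mass spread over a 5% larger sphere means a noticeably lower density. A quick back-of-the-envelope check (the 5% figure is the study’s; the rest is just geometry):

```python
# A 5% larger radius at fixed mass: how much do volume and density change?
radius_ratio = 1.05
volume_ratio = radius_ratio ** 3   # volume scales with the cube of radius
density_ratio = 1 / volume_ratio   # the same mass fills more volume

print(f"volume: +{(volume_ratio - 1) * 100:.1f}%")   # about +15.8%
print(f"density: {(density_ratio - 1) * 100:.1f}%")  # about -13.6%
```

A roughly 14% drop in apparent density is the kind of signal that shows up when a planet’s radius and mass are measured independently.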
This 5% difference in radii is something that can currently be measured, and future advances such as the space telescope CHEOPS — launching later this year — should make this even easier.
In fact, the most recent collection of exoplanet data suggests that low-mass molten planets, sustained by intense starlight, may already be present in the exoplanet catalogue. Some of these planets may well then be similar to Earth in regards to the material from which they are formed — with the variation in size no more than the result of the different ratios of solid and molten rock.
Bower explains: “They do not necessarily need to be made of exotic light materials to explain the data.”
Even a completely molten planet would fail to explain the observation of the most extreme low-density planets, however. The research team suggest that these planets form as a result of molten planets releasing — or outgassing — large atmospheres of gas originally trapped within interior magma. This would result in a decrease in the observed density of the exoplanet.
Spotting outgassed atmospheres of this nature should be a piece of cake for the James Webb Telescope, provided the atmosphere belongs to a planet orbiting a cool red dwarf, and especially if it is mostly comprised of water or carbon dioxide.
The research and its future continuation have a broader and important context, Bower points out: probing the history of our own planet, how it formed and how it evolved.
“Clearly, we can never observe our own Earth in its history when it was also hot and molten. But interestingly, exoplanetary science is opening the door for observations of early Earth and early Venus analogues that could greatly impact our understanding of Earth and the Solar System planets,” the astrophysicist says. “Thinking about Earth in the context of exoplanets, and vice-versa offers new opportunities for understanding planets both within and beyond the Solar System.”
Original research: Dan J. Bower et al., “Linking the evolution of terrestrial interiors and an early outgassed atmosphere to astrophysical observations,” Astronomy & Astrophysics. DOI: https://doi.org/10.1051/0004-6361/201935710
You might think that astronomy is restricted only to extremely powerful equipment and large teams — but it turns out that’s not always the case. Sometimes, little projects can have great achievements, too.
In this case, astronomers from the National Astronomical Observatory in Tokyo have discovered an object with a radius of only 1.3 kilometers, which lies a whopping 5 billion kilometers from Earth, in the so-called Kuiper Belt, near the outer edge of the solar system. To make it even better, the project’s price tag wasn’t astronomical — it was extremely cheap.
“We got top-notch results thanks largely to our ideas. Even little guys can beat giants,” said a team member.
Artistic depiction of the newly discovered object in the Kuiper Belt. Image credits: Ko Arimatsu.
The Kuiper Belt is a circumstellar disc in the outer Solar System, extending from the orbit of Neptune. Pluto lies in the Kuiper Belt. The belt is also home to some of the oldest rocks in the solar system, and astronomers have long theorized that there are many small, kilometer-sized objects there, but no one’s ever found one. Until now, that is.
Researchers used a technique called “occultation,” which is fairly common in astronomy (with various setups). The method entails observing a large number of stars and noting every time an object passes in front of one, dimming its light in the process. The Japanese team placed two small (28 cm) telescopes on the roof of the Miyako open-air school in Okinawa Prefecture, Japan, and monitored approximately 2,000 stars for a total of 60 hours. From a brief dimming of one of those stars, they managed to deduce the existence of a small object.
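To get a feel for the occultation method, here is a toy version in Python: a star’s brightness is sampled over time, and any frame where the light drops well below its normal level is flagged as a candidate occultation. The light curve, threshold, and dip here are all invented for illustration; the team’s real analysis has to contend with atmospheric noise and chance alignments:

```python
import random

def find_dips(flux, threshold=0.8):
    """Return the indices where a star's normalized brightness drops
    below the threshold -- the signature of something passing in front."""
    return [i for i, f in enumerate(flux) if f < threshold]

# Synthetic light curve: a steady star with small measurement noise,
# plus a brief occultation (frames 50-52) blocking most of the light.
random.seed(0)
flux = [1.0 + random.gauss(0, 0.02) for _ in range(100)]
for i in (50, 51, 52):
    flux[i] = 0.3

print(find_dips(flux))  # -> [50, 51, 52]
```

Because a 1.3 km body blocks a star for only a fraction of a second, the real experiment needed fast cameras and two telescopes, so a dip seen by both could be trusted.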
The astronomers used 11-inch Celestron telescopes, which are worth about $3,000 each, as well as specialized cameras and astrographs. The whole project cost just a bit over $30,000.
“Our team had less than 0.3 percent of the budget of large international projects,” said Arimatsu. “We didn’t even have enough money to build a second dome to protect our second telescope.” The team also has even more ambitious goals.
“Now that we know our system works, we will investigate the Edgeworth-Kuiper Belt in more detail. We also have our sights set on the still undiscovered Oort Cloud out beyond that.”
Arimatsu also says that in addition to confirming a longstanding theory and filling an important knowledge gap, this also paves the way for more studies by teams with smaller budgets.
“The new (observation) method can broaden research projects by making them easier to join for amateurs and others.”
If you asked most people what the closest planet to Earth is, you’d probably come across one answer: Venus. That answer, while apparently logical, is not really true. Mercury is the planet closest to us.
Even more surprising is the fact that Mercury is the closest neighbor, on average, to each of the other seven planets in the solar system. How can this be?
Image credits: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington (Wikimedia Commons).
Mercury’s in retrograde
What’s the planet closest to the Earth? Even without any prior knowledge, a decent guess would be Venus or Mars — these are our planetary neighbors, after all. A simple Google search reveals that Venus’ orbit is closer to that of Earth’s so, naturally, Venus must be the answer, right?
Wrong. Mercury is the planet closest to Earth — at least on average.
As it turns out, Venus being the closest planet to Earth is simply a misconception — one that has propagated greatly through the years.
“By some phenomenon of carelessness, ambiguity, or groupthink, science popularisers have disseminated information based on a flawed assumption about the average distance between planets,” write engineers Tom Stockman, Gabriel Monroe, and Samuel Cordner in a commentary published in Physics Today.
Instead, they recommend a different method of measuring which planet is closest, which they demonstrated using the motions of the planets within the last 10,000 years.
“By using a more accurate method for estimating the average distance between two orbiting bodies, we find that this distance is proportional to the relative radius of the inner orbit.”
Using this method, Mercury is closer to Earth on average. A GIF created by Reddit user u/CharcoalCharts does a great job at depicting this (the Earth is in blue). The Earth is usually closest to Mercury, although, at some points of the year, it’s closest to Venus or Mars.
It feels intuitive that two orbits that are closer together should also have the smaller average distance between them, but this is not necessarily the case. While Venus can get very close to the Earth (at only 0.28 Astronomical Units, with 1 AU being the distance from the Earth to the Sun), the two planets can also be quite far apart, at 1.72 AU. On average, Venus is 1.14 AU from Earth, but Mercury is a much closer 1.04 AU.
There are also two other shocking conclusions from this: first of all, on average, the Sun is closer to the Earth than any planet is (it sits at exactly 1 AU by definition). Secondly, it’s not just the Earth — Mercury is the closest neighbor of every planet in the solar system. In other words, Uranus is, on average, closer to Mercury than to its presumed neighbor, Neptune. The same holds even for the dwarf planet Pluto (we still love you, Pluto!).
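You can reproduce the gist of this result with a short simulation. The sketch below assumes circular, coplanar orbits and averages the Earth–planet separation over a uniformly distributed relative phase angle; real orbits are slightly eccentric, so the numbers differ a touch from the published figures:

```python
import math

# Semi-major axes in AU, treating each orbit as a circle (an approximation).
ORBITS = {"Mercury": 0.387, "Venus": 0.723, "Earth": 1.0, "Mars": 1.524}

def average_distance(r1, r2, samples=100_000):
    """Time-averaged separation of two planets on circular, coplanar orbits.
    Over long timescales their relative phase angle is effectively uniform,
    so we average the law-of-cosines distance over that angle."""
    total = 0.0
    for i in range(samples):
        theta = 2 * math.pi * i / samples
        total += math.sqrt(r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * math.cos(theta))
    return total / samples

for name in ("Mercury", "Venus", "Mars"):
    d = average_distance(ORBITS["Earth"], ORBITS[name])
    print(f"Earth-{name}: {d:.2f} AU")
# Mercury comes out closest on average (about 1.04 AU), ahead of Venus.
```

Counterintuitively, shrinking the inner orbit pulls the average separation down toward the outer orbit’s own radius, which is why the smallest orbit wins for every planet.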
A simulation of an Earth year’s worth of orbits by the terrestrial planets begins to reveal that Mercury (gray in orbital animation) has the smallest average distance from Earth (blue) and is most frequently Earth’s nearest neighbor. Image credits: Tom Stockman/Gabriel Monroe/Samuel Cordner.
The whirly-dirly corollary
Researchers also found that the average distance between two orbiting bodies shrinks as the radius of the inner orbit shrinks — something which they call the “whirly-dirly corollary,” after an episode of the cartoon Rick and Morty.
The method might also be useful for estimating distances between other orbiting bodies, such as satellites, or between extrasolar planets and their stars. In the Physics Today commentary, the researchers explain:
“As best we can tell, no one has come up with a concept like PCM to compare orbits. With the right assumptions, PCM could possibly be used to get a quick estimate of the average distance between any set of orbiting bodies. Perhaps it can be useful for quickly estimating satellite communication relays, for which signal strength falls off with the square of distance. In any case, at least we know now that Venus is not our closest neighbor—and that Mercury is everybody’s.”
A variety of propellants have been used to launch and maneuver spacecraft, liquid hydrogen and oxygen among them. Other spacecraft rely heavily on solar power to sustain their functionality once they have entered outer space. But now steam-powered vessels are being developed — and they are working efficiently as well.
People have been experimenting with this sort of technology since 1698, some decades before the American Revolution. Steam power has allowed humanity to run various modes of transportation such as steam locomotives and steamboats which were perfected and propagated in the early 1800s. In the century prior to the car and the plane, steam power revolutionized the way people traveled.
Now, in the 21st century, it is revolutionizing the way in which we, via probing instruments, explore the cosmos. The private company Honeybee Robotics, responsible for robotics employed in fields ranging from medicine to defense, has developed WINE (World Is Not Enough). The project has received funding from NASA under its Small Business Technology Transfer program.
The spacecraft is intended to be capable of drilling into an asteroid’s surface, collecting water, and using it to generate steam to propel it toward its next destination. Late in 2018, WINE’s abilities were put to the test in a vacuum tank filled with simulated asteroid soil. The prototype mined water from the soil and used it to generate steam to propel it. Its drilling capabilities have also been proven in an artificial environment. To heat the water, WINE would use solar panels or a small radioisotopic decay unit.
“We could potentially use this technology to hop on the moon, Ceres, Europa, Titan, Pluto, the poles of Mercury, asteroids — anywhere there is water and sufficiently low gravity,” stated Phil Metzger, a planetary researcher at the University of Central Florida.
Without having to carry a large amount of fuel, and with a presumably unlimited supply of resources from which to draw energy, WINE and its future successors might be able to continue their missions indefinitely. Similar technology might even be employed in transporting human space travelers.
Titan has seas, lakes, and rivers — and now, researchers have found, it also has rainfall and seasonal variation.
A false-color radar mosaic of Titan’s north polar region. Blue coloring depicts hydrocarbon seas, lakes and tributary networks filled with liquid ethane, methane and dissolved nitrogen. Image credits: NASA / JPL-Caltech / USGS.
If you pictured a place that has an atmosphere and liquids on its surface, it probably wouldn’t be Titan. This frigid moon is only 50% larger than Earth’s moon and mostly consists of ice and rocky material. It features a young and smooth geological surface with few volcanic or impact craters and, remarkably, it has not only an atmosphere but also geological features such as dunes, rivers, lakes, seas, and even deltas. But there’s a key difference.
Unlike Earth’s seas, which consist of water, Titan’s seas consist of hydrocarbons such as methane and ethane.
In addition, Titan features a nitrogen atmosphere and has a methane cycle analogous to Earth’s water cycle, something which stunned astronomers when it was first discovered. The Cassini mission, which delivered the Huygens probe to Titan’s surface in 2005, first revealed a surface which seemed to be shaped by fluids.
But Titan has far from shared all its secrets. Recently, astronomers have analyzed images suggesting that intense rainfall occurs on Titan, indicating the start of “summer” in the northern hemisphere. It’s something researchers had been expecting for a long time, especially as rain had previously been observed in the southern hemisphere.
“The whole Titan community has been looking forward to seeing clouds and rains on Titan’s north pole, indicating the start of the northern summer, but despite what the climate models had predicted, we weren’t even seeing any clouds,” said Rajani Dhingra, a doctoral student in physics at the University of Idaho in Moscow, and lead author of the new study. “People called it the curious case of missing clouds.”
New research provides evidence of rainfall on the north pole of Titan, the largest of Saturn’s moons, shown here. The rainfall would be the first indication of the start of a summer season in the moon’s northern hemisphere, according to the researchers. Credit: NASA/JPL/University of Arizona.
The image was taken in 2016 by the near-infrared instrument on the Cassini probe, which provided the bulk of what we know about Titan. The instrument spotted a reflective feature covering approximately 46,332 square miles, which did not appear in any other Cassini image. The analyses suggest that this reflective feature represents a wet surface.
“It’s like looking at a sunlit wet sidewalk,” Dhingra said.
So we have strong confirmation that seasons are changing on Titan, in line with astronomers’ predictions. However, this poses a new question that researchers will have to answer.
“We want our model predictions to match our observations,” Dhingra said. “Summer is happening. It was delayed, but it’s happening. We will have to figure out what caused the delay, though.”