
This AI module can create stunning images out of any text input

A few months ago, researchers unveiled GPT-3 — the most advanced text-writing AI developed so far. The results were impressive: not only could the AI produce its own texts and mimic a given style, but it could even produce bits of simple code. Now, scientists at OpenAI, the company that developed GPT-3, have added a new module to the mix.

“an armchair in the shape of an avocado”. Credit: OpenAI

Called DALL·E, a portmanteau of the artist Salvador Dalí and Pixar’s WALL·E, the module takes a text prompt with multiple characteristics, analyzes it, and then creates a picture of what it understands.

Take the example above, for instance. “An armchair in the shape of an avocado” is pretty descriptive, but can also be interpreted in several slightly different ways — the AI does just that. Sometimes it struggles to understand the meaning, but if you clarify it in more than one way it usually gets the job done, the researchers note in a blog post.

“We find that DALL·E can map the textures of various plants, animals, and other objects onto three-dimensional solids. As in the preceding visual, we find that repeating the caption with alternative phrasing improves the consistency of the results.”

Details about the module’s architecture have been scarce, but what we do know is that the operating principle is the same as with GPT-3 itself. If the user types in a prompt for the text AI, say “Tell me a story about a white cat who jumps on a house”, it will produce a story of that nature. Feeding in the same input a second time won’t reproduce the same story, but a different version of it. The same principle is at work in the graphics AI: the user can get multiple variations of the same input, not just one. Remarkably, the AI is even capable of transferring human activities and characteristics to other objects, such as a radish walking a dog or a lovestruck cup of boba.
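DALL·E’s internals weren’t public at the time of writing, but the “same prompt, different outputs” behavior is characteristic of sampling: GPT-style models assign a probability to each possible next token and draw from that distribution instead of always taking the top pick. Here is a toy sketch of temperature sampling; the vocabulary and scores below are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng()

def sample_next(logits, temperature=0.8):
    """Draw one index from model scores; higher temperature flattens
    the distribution, so repeated runs give more varied picks."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # softmax, numerically stable
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy "model": plausible continuations of a prompt, with made-up scores
# (a real model scores tens of thousands of tokens at every step).
vocab = ["green", "leathery", "pitted", "purple"]
logits = [2.0, 1.5, 1.4, 0.2]

# The same prompt, run twice, usually yields different continuations.
for run in range(2):
    print(run, vocab[sample_next(logits)])
```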

“an illustration of a baby daikon radish in a tutu walking a dog”. Credit: OpenAI.
“a lovestruck cup of boba”. Image credits: OpenAI.

“We find it interesting how DALL·E adapts human body parts onto animals,” the researchers note. “For example, when asked to draw a daikon radish blowing its nose, sipping a latte, or riding a unicycle, DALL·E often draws the kerchief, hands, and feet in plausible locations.”

Perhaps the most striking thing about these images is how plausible they look. These aren’t just dull representations of objects; the adaptations and novelties in the images seem to bear creativity as well. There’s also an almost human ambiguity to the way the AI interprets the input. For instance, here are some images it produced when asked for “a collection of glasses sitting on a table”.

Image credits: OpenAI.

The system draws on a huge body of training data gathered from internet pages. Each part of the prompt is considered separately, against what the model has seen that text look like. For the image above, for instance, it would draw on the thousands of photos of glasses it has seen, then thousands of photos of a table, and then combine the two. Sometimes, it would settle on eyeglasses; other times, drinking glasses, or a mixture of both.

DALL·E also appears capable of combining things that don’t exist (or are unlikely to exist), transferring traits from one to the other. This is apparent in the avocado-shaped armchair images, but is even more striking in the “snail made of harp” ones.

The algorithm also has the ability to apply some optical distortion to scenes, such as “fisheye lens view” and “a spherical panorama,” its creators note.

DALL·E is also capable of reproducing and adapting real places or objects. When prompted to draw famous landmarks or traditional food, it produces recognizable, plausible renditions of them.

At this point, it’s not entirely clear what it could be used for. Fashion and design come to mind as potential applications, though this is likely just scratching the surface of what the module can do. Until further details are released, take a moment to relax with this collage of capybaras looking at the sunset painted in different styles.

Image credits: OpenAI

Here are the impressive winning images of the British Ecological Society competition

Celebrating the beauty and diversity of the natural world, the British Ecological Society announced the winners of its 2020 “Capturing Ecology” photography competition. The images were taken by international ecologists and students from around the world and capture flora and fauna in creative settings.

Image credit: Alwin Hardenbol

Subjects range from a showdown between a roadrunner and a rattlesnake to flamingos feasting at sunset and a close-up of a humphead wrasse. The independent judging panel featured six highly respected members from different countries, including eminent ecologists and award-winning wildlife photographers.

The first prize was awarded to Alwin Hardenbol from the University of Eastern Finland and his shot of a Dalmatian pelican, the largest type of pelican and one threatened by the loss of its breeding colonies and aquatic habitats. In a statement, Hardenbol said he had to take thousands of pictures to get the perfect shot.

“I gave this image the title ‘The art of flight’ because of how impressive this bird’s wings appear in the picture, you can almost see the bird flying,” Hardenbol said. “Winning such a competition as an ecologist provides me with the opportunity to continue combining my research with my passion for nature photography.”

A biology student from Argentina, Pablo Javier Merlo, won the overall student category. He captured a Great Dusky Swift perched on the steep rocky walls of the Iguazú falls, which lie on the border between Brazil and Argentina. These birds, known as waterfall swifts, can usually be found flying among the Iguazú falls.

“The Iguazú National Park has remarkable importance since it protects a very diverse natural ecosystem, and the waterfall swift is an important icon of Iguazú and its diversity,” Merlo said in a statement. “I am very grateful to be selected as one of the winners and feel motivated to continue learning about photography.”

Image credit: Pablo Javier Merlo

Roberto García Roa, a researcher at the University of Valencia, won “The Art of Ecology” category for this image of a Cope’s vine snake using its open mouth to scare off predators. The snake resorts to this display once it has been spotted, even though the species is considered harmless and has no venom.

Credit: Roberto García Roa

Upamanyu Chakraborty, a researcher at the Wildlife Institute of India, was one of the overall runners-up with this impressive backlit photo of weaver ants and their social behavior. They build their nests out of leaves and live high up in the canopy of the trees, off the ground where possible.

Image credit: Upamanyu Chakraborty

Pichaya Lertvilai, a researcher from the Scripps Institution of Oceanography at the University of California San Diego, was another overall runner-up with this photo of the paralarvae of the California two-spot octopus hatching from their egg sacs. The egg yolks sustain them for a short period before they have to start hunting to survive.

Image credit: Pichaya Lertvilai

Peter Hudson, a researcher from Penn State University, won the “Dynamic Ecosystems” category with this photo of a roadrunner dancing around a western diamondback rattlesnake. The roadrunner keeps its wings out and feathers exposed, with its body hidden behind them, to minimize the chances of death if the snake strikes.

Image credit: Peter Hudson

The ‘Up close and personal’ category winner was Michał Śmielak, from the University of New England, Australia, with his photo of this bearded leaf chameleon.

Image credits: Michał Śmielak

You can read more about the contest and see the other entries here.

Let the Olympus Image of the Year Award 2019 dazzle your imagination in these stressful times

Doing science may be hard, but it definitely pays dividends — especially when it comes to the awesome pictures it provides.

A mix of the amino acids L-glutamine and beta-alanine crystallized out of an ethanol solution and photographed at 50X using polarizing filters.
Image credits Justin Zoll (U.S.A.).

The contest is run by Olympus Life Science, the life-science arm of the Japanese optical and digital imaging manufacturer Olympus.

And they like pretty pictures

An image of a foldable insect wing, named “a road in the sky”.
Image credits Hamed Rajabi (Germany).

For the past few years, the company has been running various types of photo competitions revolving around the beauty that can be extracted using cutting-edge imaging instruments. The images never disappoint.

Olympus has just announced the winners of its Image of the Year Award 2019, and the entries are available for the public to enjoy. Since it’s Friday and they are all excellent images, I thought we could all use a break from our quarantined schedules to enjoy the beauty mother nature hides among its tiniest details. First off, let’s start with the runners-up:

Image of the ovary of the gall-inducing wasp Anselmella miltoni, showing its eggs. The image was captured with a confocal microscope.
Image credits Ming-Der Lin (Taiwan).
Photonic crystals on the elytron of the longhorned beetle Sternotomis pulchra.
Image credits Rudolf Buechi (Switzerland).

Photonic crystals are nanostructures that show a particular interaction pattern with light. Some butterflies’ wings get their iridescence and dazzling colors from the presence of such crystals on their surface.

Sweet beach, isn’t it? Nope! This is an image of green prase opal magnified through the microscope to make it look like a shoreline.
Image credits Nathan Renfro (U.S.A.).
Mouse spinal cord expressing green fluorescent protein, cleared with the CLARITY method.
Image credits Tong Zhang (China).

The Finalists

Image credits Howard Vindin (Australia).

This image of a mouse embryo created from 950 tiles stitched together won Howard Vindin, a PhD student at the University of Sydney, the Asia-Pacific regional prize.

Image credits Alan Prescott (U.K.).

This image, titled “The Mouse’s Whiskers”, shows a section through a frozen mouse’s tissues captured using fluorescent protein labels. It won the Europe, Middle East, and Africa regional prize.

Image credits Tagide deCarvalho (U.S.A.).

This image won the Americas regional prize and it is incredibly cute. The image showcases the inside of a tardigrade with colorful details. Isn’t it just so plump?

And now, the overall winner:

A brightly-colored fluorescence image showcasing a section through a mouse brain captured with a super-resolution confocal microscope system.
Image credits Ainara Pintor (Spain).

Some stunning highlights from previous years

Fluorescence image showing a marine snail shell covered in algae and cyanobacteria. First prize in 2018.
Image credits Håkan Kvanström.
Droplets of solidified dopamine captured using polarized light. Second prize in 2018.
Image credits Karl Gaff.
The intricate ‘mouth brushes’ of a mosquito larva seen through interference contrast microscopy. Third prize in 2018.
Image credits Johann Swanepoel.

If you need your fill of pretty and amazing science photographs, Olympus runs a pretty sweet Instagram page you should check out, and entries from their previous contests can be seen on their site here (after some scrolling down). They have also made the images available as wallpapers, so you can enjoy them on your device or — for the fancy amongst you — draped over your walls.

For a less zoomed-in appreciation of natural beauty, you can take a look at the “Capturing Ecology” competition, the “Nikon Small World” contest, Alexey Kljatov’s brilliant depiction of snowflakes or, of course, the libraries of the Art Institute of Chicago, the Smithsonian, and the Biodiversity Heritage Library (all of which have been made public).

If you’re rather looking for some peace and quiet, these pictures of the surface of the Moon and Mars might do just the trick for you. If peace is what you’re after but quiet definitely isn’t, researchers at Cornell University’s Lab of Ornithology have made a huge library of animal sounds free to use for anyone interested.


Hubble snaps breathtaking new image of Jupiter

Jupiter is still pretty, science finds.

Jupiter.

Image credits NASA, ESA / A. Simon (Goddard Space Flight Center) and M.H. Wong (University of California, Berkeley)

The image was taken on June 27, 2019, and centers on the planet’s titanic Great Red Spot. It records Jupiter’s color palette, swirling clouds, and turbulent atmosphere in much higher quality than previously available images. These elements provide an important glimpse into the processes unfurling in the gas giant’s atmosphere.

Ten year challenge photo

The image was taken in visible light as part of the Outer Planets Atmospheres Legacy program (OPAL). It was snapped with Hubble’s Wide Field Camera 3 when Jupiter was 400 million miles from Earth — near “opposition,” or almost directly opposite the Sun in the sky.

OPAL generates global views of the outer planets each year using the Hubble Telescope; these views are meant to provide researchers with the data they need to track changes in the planets’ storm, wind, and cloud dynamics.

One of Jupiter’s most striking features is the Great Red Spot, on which the current image centers. The Spot is a churning storm, rolling counterclockwise between two bands of clouds (above and below the Great Red Spot) which are moving in opposite directions. The red band to the northeast of the Great Red Spot contains clouds moving westward and around the north of the giant tempest. The white clouds to its southwest are moving eastward to the south of the spot. The swirling filaments seen around its outer edge are high-altitude clouds that are being pulled in and around the storm.

Jupiter’s bands are created by differences in the thickness and height of the ammonia ice clouds that blanket the planet, both properties dictated by local variations in atmospheric pressure. The darker, more colorful bands are generally ‘deeper’ clouds, while lighter bands rise higher and have thicker clouds than the darker ones.

Winds between bands can reach speeds of up to 400 miles (644 kilometers) per hour. All of the bands seen in this image are corralled to the north and to the south by powerful, constant jet streams — these remain stable even as the bands change color on the other side of the planet. The bands of deep red and bright white that border the Great Red Spot also become much fainter on the other side of Jupiter.

You can learn more about how these colors form here.

Women are just as aroused by pornography as men, largest study of its kind shows

A review of 61 brain scanning studies contradicts the widespread belief that men enjoy sexual imagery more than women.

Although there was never really strong science behind this idea, men are typically seen as being more interested in sex than women. Questionnaire-based studies have suggested that men find erotic images more arousing than women do, which seemed to play into the same narrative — women are more likely to require an emotional connection before they can become aroused.

The first brain scan studies seemed to validate the questionnaires. Despite major differences from individual to individual, some studies seemed to suggest that men are more interested in pornography. However, a closer look reveals some important shortcomings in these studies. Most importantly, they work with small sample sizes and are prone to drawing conclusions from data that may simply reflect random variation.

In order to fix that issue, a team led by Hamid Noori at the Max Planck Institute for Biological Cybernetics in Tübingen analyzed the results from all the brain-scanning studies that have ever been published on this issue. In total, the studies had a combined sample size of about 2,000 — still not massive, but sufficient to draw some more reliable conclusions.

They found no important differences between how men and women react to pornographic images.

“Neuroimaging studies suggest differences in the underlying biology of sexual arousal associated with sex and sexual orientation, yet their findings are conflicting,” the study reads. “Following a thorough statistical review of all significant neuroimaging studies, we offer strong quantitative evidence that the neuronal response to visual sexual stimuli, contrary to the widely accepted view, is independent of biological sex.”

Women do watch less pornography than men (a roughly 80%-20% split), but that appears to be due to non-biological factors. For starters, the entire market is tailored for men, and the stigma of watching pornography is also greater for women than men.

Of course, this study also has significant shortcomings, which the researchers themselves admit. For starters, it was limited to functional neuroimaging experiments — brain scans that only show activity at the level of large anatomical structures — meaning there could still be differences at smaller scales that don’t get picked up. There is also a lack of reporting on null results, which can skew the findings, and the quality of the considered studies varied significantly.

Overall though, this makes a lot of sense. Humans, like all mammals, react to sexual visual stimuli. But there is another consideration: just because there is a biological reaction doesn’t necessarily mean you’re “turned on” — our brains are often more complicated than our genital desires.

For instance, men can be physically aroused and have erections without being turned on. In some cases, even unwanted stimulation (rape) can produce unwanted arousal, in both men and women. Our sexual arousal is not just a button you can switch on or off.

The study has been published in PNAS.


The constellation Vela explodes with color (and new suns) in ESO-captured snaps

The European Southern Observatory just published a breathtaking image of a nearby star nursery.


Star cluster RCW 38.
Image credits ESO / K. Muzic.

Earlier today, we talked about the first colors complex-ish life created — it was a story of algae, fossils, and pink. Moving on from this daring display by early life, however, I thought we’d seize the occasion to look at what colors accompany birth in the other direction — up in space.

Our eyes can’t peer that far out, but, luckily for us, the European Southern Observatory (ESO) can. Using the HAWK-I (High Acuity Wide field K-band Imager) infrared imager mounted on the Very Large Telescope (VLT) in Chile, the ESO captured some spectacular shots of stars being born in the Vela constellation.

Ashes to ashes, dust to stars


RCW 38 in the constellation of Vela (The Sails). The map shows most of the stars visible to the unaided eye under good conditions.
Image credits ESO / IAU / Sky & Telescope.

The image depicts the star cluster RCW 38 as seen in infrared. ESO chose this bit of the electromagnetic spectrum for their observations since infrared can see ‘through’ the clouds shrouding star nurseries such as RCW 38. The cluster itself contains hundreds of young, bright, hot, and quite massive stars. Even at the relatively short distance of 5,500 light-years, however, their (visible) light can’t pierce through the vast bodies of dust surrounding the cluster.

The central area, seen as a bright blue region, houses numerous very young stars as well as a few protostars — ‘stars’ that are still forming. Observations by the Chandra X-ray Observatory revealed the presence of over 800 X-ray emitting young stellar objects in the cluster. You won’t be surprised to hear, then, that the area is drenched in radiation, making local gas clouds glow vividly. Cooler bodies of dust languishing in front of the cluster carry more subdued, darker hues of red and orange. The end result, a ‘colorful celestial landscape’ as ESO puts it, is quite the striking interplay of color and light.

This image was captured as part of a series of tests — a process known as science verification — for HAWK-I and GRAAL (the ground layer adaptive optics module of the VLT). These tests are performed to ensure newly-commissioned instruments work as intended and include a set of test observations that verify and demonstrate the capabilities of the new instrument.


Star cluster RCW 38 in the visible spectrum.
Image credits ESO – Digitized Sky Survey 2 / Davide De Martin.

Previous images of this region — snapped in the visible spectrum — show a very different sight. Optical images appear almost devoid of stars in comparison with those taken in the infrared spectrum due to dust obscuring the view.

Peering through dense bodies such as dust clouds or nebulae is actually one of HAWK-I’s main roles. The device also projects four laser beams out into the night sky to use as artificial reference stars — used to correct for atmospheric turbulence, which can bend incoming light — to increase the quality of the final image.

Adobe is using machine learning to help you spot Photoshopped pictures

Adobe, the company known for giving us Photoshop, is trying to help you recognize which photos have been tampered with.

An illustration from Adobe’s new paper showing how edits in images can be spotted by a machine learning system. (via The Verge)

As we previously wrote, it’s becoming harder and harder to tell tampered images and videos from the real thing, and AI tools continue to make it more difficult. When you consider the ability of social media to spread these images like wildfire without even the slightest fact-checking, this becomes more than a nuisance — it becomes a real problem for society.

An early arms race

Many companies (including Adobe) are developing tools that make it ever easier to manipulate images, but there’s also the other side: detecting what’s been manipulated. At the CVPR computer vision conference, Adobe demonstrated how this field, called digital forensics, can be automated quickly and efficiently. This type of approach could ultimately be incorporated into our daily lives to establish the authenticity of social media photos.

Although the research paper does not represent a breakthrough per se, it’s intriguing to see Adobe plunging into this field.

The researchers focused on three types of manipulation:

  • splicing, where two different images are combined
  • cloning, where objects are copied and pasted
  • removal, where an object is edited out altogether

When researchers or forensic analysts try to assess the validity of images, they look for artifacts left behind by editing. For instance, when an object is copied from one image and pasted onto another, the background noise levels of the two are often inconsistent. Adobe used an already established approach — taking a large dataset of images and “training” an algorithm — to detect tampering.
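To make the noise-inconsistency idea concrete, here is a minimal sketch of that heuristic. This is not Adobe’s method (theirs is a trained neural network); the block size and threshold below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.ndimage import median_filter

def local_noise_map(gray, block=32):
    """Estimate per-block noise: subtract a median-smoothed copy of the
    image and measure the spread of the residual in each block."""
    gray = np.asarray(gray, dtype=float)
    residual = gray - median_filter(gray, size=3)
    rows, cols = gray.shape[0] // block, gray.shape[1] // block
    noise = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            tile = residual[i * block:(i + 1) * block,
                            j * block:(j + 1) * block]
            noise[i, j] = tile.std()
    return noise

def flag_suspect_blocks(noise, z=3.0):
    """Mark blocks whose noise level is a strong outlier relative to the
    image-wide median -- candidates for spliced-in content."""
    med = np.median(noise)
    mad = np.median(np.abs(noise - med)) + 1e-9   # robust spread estimate
    return np.abs(noise - med) / (1.4826 * mad) > z
```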

The new algorithm scored higher than other existing tools, but only marginally so. Furthermore, the tool has no application to so-called “deep fakes” — images and videos entirely created by AI. The algorithm is also only as good as the database it’s fed. For now, it’s still an early-stage program.

It’s not hard to see this turning into an arms race of sorts. As detection algorithms improve, we might see more tools that better hide these manipulations. For now, we should all keep in mind just how easy it is to manipulate an image before we share it on Facebook. It’s becoming clearer that in much of today’s media, we’re already in an arms race between truth and lies. There’s a good chance this type of algorithm could play an important role in the future.

Hubble completes the most comprehensive ultraviolet-light survey of nearby galaxies — and the photos are mind-blowing

Galaxies are amazing things, and we can now see some of them in unprecedented detail.

The spiral galaxy Messier 96 lies some 35 million light-years away. Image credits: NASA, ESA, and the LEGUS team.

“There has never before been a star cluster and a stellar catalog that included observations in ultraviolet light,” explained survey leader Daniela Calzetti of the University of Massachusetts, Amherst. “Ultraviolet light is a major tracer of the youngest and hottest star populations, which astronomers need to derive the ages of stars and get a complete stellar history. The synergy of the two catalogs combined offers an unprecedented potential for understanding star formation.”

Light comes in different wavelengths. Ultraviolet light (UV) has a shorter wavelength than that of visible light, but longer than X-rays, and UV constitutes about 10% of the total light output of stars like the Sun. When you “look” at something in different wavelengths, you can infer different things about its physical properties. In this case, astronomers were trying to learn more about star formation, a process that still holds many secrets.

Astronomers combined new and old Hubble observations, looking for detailed information on young, massive stars and star clusters, as well as their evolution. It’s ironic, really — almost all we know about the universe, we know thanks to light from stars, and yet we don’t know how the stars themselves form.

The spiral galaxy Messier 66. Image credits: NASA, ESA, and the LEGUS team.

As far as we know, stars form inside relatively dense concentrations of interstellar gas and dust known as molecular clouds. These areas are extremely cold (around 10 Kelvin, just above absolute zero), and at those temperatures, gases become molecular, meaning their atoms are much more likely to bind together. Oftentimes, gases clump up to higher and higher densities, and once a certain tipping point is passed, stars can form. But here’s the thing: before the star is actually formed, the region is very dense and dark, virtually opaque to visible light (something called a dark nebula). Astronomers can still investigate such regions to an extent, but they have to use infrared and radio telescopes. So, instead, researchers try to find very young stars.
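That tipping point can be made concrete with the classic Jeans criterion: once a clump’s mass exceeds the Jeans mass, gravity overwhelms thermal pressure and collapse begins. Here’s a quick back-of-the-envelope estimate; the density and mean molecular weight are illustrative assumptions, not figures from the survey.

```python
import math

K_B = 1.381e-23    # Boltzmann constant, J/K
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_H = 1.673e-27    # mass of a hydrogen atom, kg
M_SUN = 1.989e30   # solar mass, kg

def jeans_mass(T, n, mu=2.3):
    """Jeans mass (kg) for a cloud at temperature T (K), particle number
    density n (per m^3), and mean molecular weight mu (assumed value
    typical of molecular gas)."""
    rho = n * mu * M_H
    return (5 * K_B * T / (G * mu * M_H)) ** 1.5 \
        * math.sqrt(3 / (4 * math.pi * rho))

# A ~10 K cloud core at an assumed 1e4 particles/cm^3 (= 1e10 per m^3):
print(f"Jeans mass ~ {jeans_mass(10, 1e10) / M_SUN:.1f} solar masses")
```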

The research team carefully selected the LEGUS targets from among 500 galaxies, all of which lie between 11 million and 58 million light-years from Earth. Team members chose the galaxies based on their mass, star-formation rate, and abundances of metals — which, in this context, means elements that are heavier than hydrogen and helium.

Galaxies come in multiple shapes and sizes. We tend to think of them as spiral, Milky Way-like structures, but they can be quite varied. Stars tend to be distributed quite regularly within a galaxy, but while groups of stars are fairly predictable, the same can’t be said about individual stars.

“When we look at a spiral galaxy, we usually don’t just see a random distribution of stars,” Calzetti said. “It’s a very orderly structure, whether it’s spiral arms or rings, and that’s particularly true with the youngest stellar populations. On the other hand, there are multiple competing theories to connect the individual stars in individual star clusters to these ordered structures.”

These six images represent the variety of star-forming regions in nearby galaxies. The galaxies are part of the Hubble Space Telescope’s Legacy ExtraGalactic UV Survey (LEGUS), the sharpest, most comprehensive ultraviolet-light survey of star-forming galaxies in the nearby universe. The six images consist of two dwarf galaxies (UGC 5340 and UGCA 281) and four large spiral galaxies (NGC 3368, NGC 3627, NGC 6744, and NGC 4258). The images are a blend of ultraviolet light and visible light from Hubble’s Wide Field Camera 3 and Advanced Camera for Surveys. Image credits: NASA/ESA/LEGUS team.

This is where the new survey comes in, and why it’s so important. By imaging the galaxies in such detail, astronomers are able to zoom in on individual star populations, thus gaining more information about them. We can almost certainly expect a flurry of studies on star formation in the near future.

“By seeing galaxies in very fine detail — the star clusters — while also showing the connection to the larger structures, we are trying to identify the physical parameters underlying this ordering of stellar populations within galaxies. Getting the final link between gas and star formation is key for understanding galaxy evolution,” Calzetti concludes.



Holograms are so last year: team develops 3D images floating in thin air

Three-dimensional recordings like those carried by R2D2 in the Star Wars film are closer to reality than you’d think.


Examples of the colour and resolution quality of the images.
Image credits Smalley et al., 2018, Nature.

Brigham Young University (BYU) professor and holography expert Daniel Smalley has wanted to recreate that scene ever since he first saw it. And it just goes to show that dreams come true, as a paper he recently published details the method he developed to do just that.

These are the droids you are looking for

“We refer to this colloquially as the Princess Leia project,” Smalley said. “Our group has a mission to take the 3D displays of science fiction and make them real. We have created a display that can do that.”

First of all, however, Professor Smalley notes that the image of Princess Leia we know and love from the film isn’t what people think it is — it’s not a hologram. A 3D image like that, one that floats in the air and can be viewed from every angle, is called a volumetric image.

The difference between them is subtle but significant. A hologram scatters light only on a (2D) surface, and if you’re not looking at that surface from the right angle, you won’t see the original (3D) image. A volumetric display, on the other hand, has little scattering surfaces spread throughout a 3D space — the same space occupied by the image — so no matter how you’re looking at it, you’re also looking at the scatterers. In short, this means a volumetric image can be seen from any angle and still appear 3D.

Drawing on photophoretic optical trapping, Smalley and his team devised a free-space volumetric display platform that produces full-color, aerial volumetric images with 10-micron image points by persistence of vision. Since that’s probably really confusing, here’s the team explaining how their device works without all the technical terms:

“We’re using a laser beam to trap a particle, and then we can steer the laser beam around to move the particle and create the image,” said coauthor Erich Nygaard.

“This display is like a 3D printer for light,” Smalley said. “You’re actually printing an object in space with these little particles.”
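For a rough sense of the speeds involved (back-of-the-envelope arithmetic, not a figure from the paper), consider how fast the trapped particle must move to redraw an image within the eye’s roughly 100-millisecond persistence-of-vision window:

```python
# Rough estimate: speed needed to redraw a volumetric image inside the
# eye's persistence-of-vision window (assumed ~0.1 s; illustrative).
PERSISTENCE_S = 0.1
POINT_SPACING_M = 10e-6   # 10-micron image points, per the paper

def required_speed(n_points):
    """Minimum average particle speed (m/s), assuming the scan path
    covers roughly one point-spacing per image point."""
    return n_points * POINT_SPACING_M / PERSISTENCE_S

for n in (1_000, 100_000):
    print(f"{n:>7} image points -> {required_speed(n):.2f} m/s")
```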

To showcase their work, the team 3D-light-printed a butterfly, a prism, the BYU logo, rings that wrap around an arm, and an individual in a lab coat crouched in a position similar to Princess Leia as she begins her projected message. While some of the previous work at BYU has related to volumetric imagery, Smalley’s team is the first to successfully merge color images and optical trapping.

The paper “A photophoretic-trap volumetric display” has been published in the journal Nature.

An image is not always worth a thousand words, researchers say

An image might evoke short-term feelings, but a wall of text might change your mind for good.

Emotional images might evoke strong feelings, but they don’t change much in the long run. Image credits: Mihai Paraschiv.

Whether it’s through traditional means like newspapers or television, or through modern age social media, we’re exposed to a number of powerful images every single day. Sometimes, we might give in to the feelings evoked by these images. We might get sad when we see an image of a struggling immigrant family, or we might be happy when we learn of a rescued pup. But does that really change us in the long run?

According to communication scientist Tom Powell… not really.

Powell spent his PhD investigating to what extent images in print and digital news influence the way people think and act, especially in a political context. He conducted several experiments in which he exposed participants to high-impact stories on emotional topics such as the refugee crisis and military intervention. He intertwined powerful images with text, and at the end of it all asked participants to talk about their opinions and behaviors.

Unsurprisingly, the more striking the photos he used, the more of an emotional response he received. But images don’t really change opinions and behaviors in the long run, Powell reports — that’s something text fares much better at, he says.

‘We also discovered that viewing news about, say, the refugee crisis in a news article encouraged people to help refugees more than seeing it in video format. Again, our findings suggest that, in general, when people read the news they become more involved in it than if they watch it.’

This finding is surprising because it goes against what we traditionally thought. Ask anyone from marketers to social scientists and they’ll tell you that images make the world go round when it comes to grabbing people’s attention. But if you think about it, the result makes a lot of sense. An image comes to you all at once, basically for free — it doesn’t require you to do anything, it doesn’t require any involvement, you just see it. Text, on the other hand, requires some involvement. You read it, you make a (small) effort. So it only seems logical that you’re more invested in something you put a bit of work into, even if it’s an extremely small amount.

‘My research shows that “powerful” images can draw people into the news, but citizens will not be completely won over by them – it is how images combine with words, and with the prior knowledge of the audience, that matters.’

While Powell was focused on political outcomes, his findings could be significant in a number of ways, especially when it comes to making people more involved in topics such as climate change or vaccination. It may sound like a no-brainer, but it’s words that might get the job done — not photos.

NASA’s Juno spacecraft sends back first color image of Jupiter from orbit


Image credit NASA/JPL-Caltech/SwRI/MSSS

NASA’s Juno spacecraft has sent back the first image of Jupiter since it entered its orbit last week, capturing the planet’s famous Great Red Spot as well as three of its moons – Europa, Ganymede and Io. The probe was approximately 2.7 million miles away from Jupiter when it snapped the spectacular photo.

“This scene from JunoCam indicates it survived its first pass through Jupiter’s extreme radiation environment without any degradation and is ready to take on Jupiter,” said Scott Bolton of the Southwest Research Institute in San Antonio, Juno’s principal investigator. “We can’t wait to see the first view of Jupiter’s poles.”

It took Juno five years to reach the massive planet, eventually entering its orbit on July 4. On July 6, the team powered up the probe’s instruments and on July 10 they turned on the JunoCam, a color, visible-light camera specially designed to take pictures of Jupiter’s poles and cloud tops.

Although JunoCam will help give context to the data gained from the other instruments on the probe, its main purpose is to engage the public. It is not considered to be one of the mission’s main scientific instruments.

The Juno spacecraft is currently making its way away from Jupiter towards the farthest reaches of its elliptical, 53-day orbit, where it will continue to snap photos along its journey. However, the main goal of the mission is to examine the giant planet’s magnetic and gravitational fields, as well as its composition and internal structure. Scientists are hopeful that data from the missions will help them better understand how Jupiter and the solar system formed and evolved over the years.

“JunoCam will continue to take images as we go around in this first orbit,” said Candy Hansen of the Planetary Science Institute in Tucson, Arizona and Juno co-investigator. “The first high-resolution images of the planet will be taken on August 27, when Juno makes its next close pass to Jupiter.”

The Juno mission is set to end in February 2018 with one final plunge into Jupiter’s hazy atmosphere. All photos from the JunoCam will continue to be posted on the mission’s official website.

APOD Wallpaper software

In case you don’t know, APOD is short for Astronomy Picture Of the Day. It’s home to some awesome astronomy pictures. Anyway, I recently came across a piece of software that takes the latest APOD picture and sets it as your background, so you can see these great images each day without lifting a finger. It also adds a description. I tested it, and it works just fine! Download here
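If you’d rather roll your own, a minimal stand-in (not the reviewed program) can grab today’s picture through NASA’s public APOD web API; the DEMO_KEY works for light use, and actually setting the desktop background is OS-specific, so it’s left out here.

```python
import requests

APOD_API = "https://api.nasa.gov/planetary/apod"

def fetch_apod(api_key="DEMO_KEY"):
    """Download today's APOD image and return the local file path."""
    meta = requests.get(APOD_API, params={"api_key": api_key},
                        timeout=30).json()
    if meta.get("media_type") != "image":
        raise RuntimeError("Today's APOD is a video, not a still image")
    url = meta.get("hdurl") or meta["url"]       # prefer the HD version
    filename = f"apod_{meta['date']}{url[url.rfind('.'):]}"
    with open(filename, "wb") as f:
        f.write(requests.get(url, timeout=60).content)
    print(meta["title"], "->", filename)
    return filename

if __name__ == "__main__":
    fetch_apod()
```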

Just one amazing pic from APOD

Hubble shows 2 galaxies that are just losing it

Ram pressure is the pressure exerted on a body when it passes through a fluid medium; this causes a drag force that is exerted on the body. The same pressure occurs when a galaxy (the body) is moving through intergalactic gas (the fluid), and in this case, the ram pressure can sweep a significant part of the gas out of the galaxy. The spiral galaxy NGC 4522 is located a bit more than 60 million light-years away from our planet, and it’s a great example of a spiral galaxy being stripped of its gas.
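In its standard form (due to Gunn & Gott, 1972), ram pressure is simply the ambient gas density times the square of the galaxy’s speed, P_ram = ρv². Here’s a quick estimate using the speed quoted below and an assumed, illustrative intracluster gas density.

```python
# Ram pressure on a galaxy moving through a cluster's gas:
#   P_ram = rho * v**2   (Gunn & Gott 1972)
M_PROTON = 1.67e-27            # kg
n_gas = 1e-4 * 1e6             # assumed ~1e-4 particles/cm^3, in m^-3 (illustrative)
rho = n_gas * M_PROTON         # gas mass density, kg/m^3
v = 10_000_000 * 1000 / 3600   # 10,000,000 km/h -> m/s

print(f"ram pressure ~ {rho * v**2:.1e} Pa")   # on the order of 1e-12 Pa
```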

The galaxy is part of the Virgo galaxy cluster and moves through it at over 10,000,000 km/h. This is just a still image, but you can almost see the galaxy swirling, highlighting its dramatic state as the halo-like gas is forced out of it; the bright knots in the stripped gas are newly formed star clusters.

The image also shows some more subtle effects of ram pressure, such as the curved (convex) appearance of the disk of gas. Understanding this process is really important because it helps researchers better understand the mechanisms that shape the birth and evolution of galaxies, and it provides important clues on how galaxies’ rates of star formation are ‘controlled’.