
Rattlesnakes modulate their tail wagging to make you think they’re closer than they are

The rattling of rattlesnakes isn’t as simple a warning as we assumed. New research explains that this sound is subtly modulated to change the listener’s perception of its source, making it seem the snake is closer than it actually is.

Image via Pixabay.

Rattlesnakes are quite famous for the warning sounds they produce with their tails, the iconic ‘rattling’ that gives them their name. Far from being a simple wagging of the tail, however, new research suggests that this rattling is a fine-tuned intimidation tool. As the snake rattles its tail, it makes an abrupt shift to a high-frequency mode, the team explains. This makes listeners perceive the source of sound as being closer than it actually is.

In effect, while definitely being deadly, rattlesnakes also engage in some strategic deception.

Rattle my bones

“Our data show that the acoustic display of rattlesnakes, which has been interpreted for decades as a simple acoustic warning signal about the presence of the snake, is in fact a far more intricate interspecies communication signal,” says senior author Boris Chagnaud at Karl-Franzens-University Graz. “The sudden switch to the high-frequency mode acts as a smart signal fooling the listener about its actual distance to the sound source. The misinterpretation of distance by the listener thereby creates a distance safety margin.”

Past studies have shown that rattlesnakes’ rattles vary in frequency, but they didn’t give us any insight into why they do so, or what this behavior actually achieves in the real world.

The hypothesis behind this paper was born while Chagnaud was visiting an animal facility and noticed that rattlesnakes increased the frequency of their rattling as someone approached them — and decreased it when that person walked away. From this observation, Chagnaud and his team developed an experiment in which objects appeared to move towards rattlesnakes. One of these objects was a human-like torso, and another was a looming black disk. The illusion of forward-and-back movement was created by making the objects increase or decrease in size.

The team reports that over the course of this experiment, as potential threats approached the snakes, they would increase the frequency they rattled at to approximately 40 Hz. But, abruptly, they would switch to an even higher frequency range, between 60 and 100 Hz.

Further experimentation revealed that rattlesnakes adapt their rattling frequency to the (perceived) approach velocity of an object, rather than its size.

“In real life, rattlesnakes make use of additional vibrational and infrared signals to detect approaching mammals, so we would expect the rattling responses to be even more robust,” Chagnaud says.

Inside a virtual reality environment, the team then tested how this shift in rattling frequency is perceived by a person or animal close to the snake. A group of 11 participants were asked to take a simulated walk through the virtual environment — a grassland — and told they would be walking towards a snake. Its rattling rate increased as the participants closed in, in line with the previous findings, then jumped abruptly to 70 Hz at a virtual distance of 4 meters.

The participants were asked to tell the team when the rattling sounded like it came from only 1 meter away. All the participants underestimated the distance that the virtual snake was at after it increased its rattling frequency.

“Snakes do not just rattle to advertise their presence, but they evolved an innovative solution: a sonic distance warning device similar to the one included in cars while driving backwards,” Chagnaud says. “Evolution is a random process, and what we might interpret from today’s perspective as elegant design is in fact the outcome of thousands of trials of snakes encountering large mammals. The snake rattling co-evolved with mammalian auditory perception by trial and error, leaving those snakes that were best able to avoid being stepped on.”

The paper “Frequency modulation of rattlesnake acoustic display affects acoustic distance perception in humans” has been published in the journal Current Biology.

Learning music changes how our brains process language, and vice-versa

Language and music seem to go hand-in-hand in the brain, according to new research. The team explains that music-related hobbies boost language skills by influencing how speech is processed in the brain. But flexing your language skills, by learning a new language, for example, also has an impact on how our brains process music, the authors explain.

Image credits Steve Buissinne.

The research, carried out at the University of Helsinki’s Faculty of Educational Sciences, in cooperation with researchers from the Beijing Normal University (BNU) and the University of Turku, shows that there is a strong neurological link between language acquisition and music processing in humans. Although the findings are somewhat limited due to the participant sample used, the authors are confident that further research will confirm their validity on a global scale.

Eins, Zwei, Polizei

“The results demonstrated that both the music and the language programme had an impact on the neural processing of auditory signals,” says lead author Mari Tervaniemi, a Research Director at the University of Helsinki’s Faculty of Educational Sciences.

“A possible explanation for the finding is the language background of the children, as understanding Chinese, which is a tonal language, is largely based on the perception of pitch, which potentially equipped the study subjects with the ability to utilise precisely that trait when learning new things. That’s why attending the language training programme facilitated the early neural auditory processes more than the musical training.”

The team worked with Chinese elementary school pupils aged 8-11, whom they monitored for the duration of one full school year. All of the participants attended either music training courses or a similar programme designed to help them learn English. During this time, the authors measured and recorded the children’s brain responses to auditory stimuli, both before and after the conclusion of the school programmes. This was performed using electroencephalogram (EEG) recordings; at the start, 120 children were investigated using EEG, with 80 of them being recorded again one year after the programme.

During the music training classes, pupils were taught to sing from both hand signs and sheet music and, of course, practised singing quite a lot. Language training classes combined exercises for both spoken and written English, as English relies on a different orthography (writing system) than Chinese. Both were carried out in one-hour sessions twice a week, either after school or during school hours, throughout the school year. Around 20 pupils and two teachers attended these sessions at a time.

All in all, the team reports that pupils who underwent the English training programme showed enhanced processing of musical sounds in their brains, particularly with regard to pitch.

“To our surprise, the language program facilitated the children’s early auditory predictive brain processes significantly more than did the music program. This facilitation was most evident in pitch encoding when the experimental paradigm was musically relevant,” they explain.

The results support the hypothesis that music and language processing are closely related functions in the brain, at least as far as young brains are concerned. The authors explain that both music and language practice help modulate our brain’s ability to perceive sounds since they both rely heavily on sound — but that being said, we can’t yet say for sure whether these two have the exact same effect on the developing brain, or if they would influence it differently.

At the same time, the study used a relatively small sample size, and all participants belonged to the same cultural and linguistic background. Whether or not children who are native speakers of other languages would show the same effect is still debatable, and up for future research to determine.

The paper “Improved Auditory Function Caused by Music Versus Foreign Language Training at School Age: Is There a Difference?” has been published in the journal Cerebral Cortex.

Ocean life is suffering because we’re too loud, says new study

Noise pollution is a growing issue on land — but the seas are not safe either, apparently.

Image via Pixabay.

Marine shipping and construction, along with activity from sonars and seismic sensors, are making the ocean a very loud place. While that may sound like just any other day in the big city, these high levels of noise pollution are causing a lot of damage to the health of marine ecosystems. A new paper reports on an “overwhelming body of evidence” that man-made noise is to blame.

Loud and deeply

“We’ve degraded habitats and depleted marine species,” said Prof Carlos Duarte from King Abdullah University, Saudi Arabia, lead author of the study. “So we’ve silenced the soundtrack of the healthy ocean and replaced it with the sound that we create.”

Sound plays a very important part in the lives of marine animals, the team explains, being involved in everything from feeding and navigation to communication and social interactions. A lot of what we know of marine animals such as whales comes from sound recordings.

But this state of affairs could change forever. According to the team, the youngest generations of marine animals are missing out on the “production, transmission, and reception” of key behaviors due to “an increasing cacophony in the marine environment” caused by man-made sound.

Freshly-spawned fish larvae use environmental sound and “follow it”, Duarte explains. But these sounds that helped them navigate and understand their environment are now being drowned out. Beyond noise from vessels, sonars, and acoustic deterrent devices, energy and construction infrastructure are also contributing to the issue.

“[T]here is clear evidence that noise compromises hearing ability and induces physiological and behavioral changes in marine animals,” the authors explain, adding however that currently “there is lower confidence that anthropogenic noise increases the mortality of marine animals and the settlement of their larvae” directly.

While the problems caused by marine sound pollution are pronounced and wide-reaching, the COVID-19 lockdowns also showcased how quickly and easily they can be reversed. According to the authors, levels of man-made sound in the ocean fell by around 20% last year.

Among some of the effects of this drop, the team notes that large marine mammals have been observed in waterways or coastlines that they’ve abandoned for generations. Such effects show that tackling the issue of marine noise is the “low-hanging fruit” of ocean health.

“If we look at climate change and plastic pollution, it’s a long and painful path to recovery,” Prof Duarte said. “But the moment we turn the volume down, the response of marine life is instantaneous and amazing.”

The paper “The soundscape of the Anthropocene ocean” has been published in the journal Science.

What, really, is the speed of sound?

We know that the speed of light seems to be the upper limit for how fast something can travel in the universe. But there’s a much lower speed limit that we’ve only recently (in the grand scheme of things) managed to overcome here on Earth: the speed of sound.

Vapor cone (or ‘shock collar’) around a fighter jet as it nears the speed of sound.
Image credits Flickr / Charles Caine.

You’ve heard the term before, and you might even know its exact value or, more accurately, values.

But why exactly does sound have a ‘speed’? Is it the same everywhere? And what happens if you go over the limit? Well, one thing is for sure — sound won’t give you a fine for it. It will cause a mighty boom to mark the occasion, though, because going over the speed of sound isn’t an easy thing to pull off.

In Earth’s atmosphere, sound travels at around 345 meters per second. Let’s take a look at why this limit exists, what determines its value, and just why things go boom when you blast through it. But first, let’s start with the basics:

What is a sound wave?

What we perceive as sound is actually motion. Sound is, fundamentally, a movement or vibration of particles, most commonly those in the atmosphere, where we do most of our talking and sound-making.

In very broad terms, any object in motion will come into contact with the particles in its environment. Let’s take talking as an example. When someone speaks, their lungs push out air that their vocal cords modulate to create certain sounds. This pushes the air in their immediate vicinity, which makes its molecules collide with air molecules farther away, and so on, until the motion reaches the air particles next to you. They then collide with your eardrums, which ‘translate’ the motion into the sensation of sound.

Two ways to represent the physics of sound. Areas with more dots (corresponding to peaks) show high air pressure, while whiter areas (corresponding to troughs) show low pressure. This pressure is generated by moving air.
Image via Wikimedia.

So from a physical point of view, sound behaves quite like waves do on a beach. Its volume is determined by how high the wave goes (amplitude) and its pitch by how often these waves hit the shore (frequency). The farther a wave travels, the less energy it has (so the less pressure it can exert on new particles), which is why sound eventually dies out and we can’t hear something happening halfway around the world.
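To make the amplitude/frequency split concrete, here’s a minimal sketch (my own illustration, not something from the article) that synthesizes one second of a pure tone — `amplitude` sets how loud it is, `frequency` sets how high-pitched it is:

```python
import math

sample_rate = 44_100   # samples per second (CD quality)
frequency = 440.0      # Hz: the pitch -- this happens to be the note A4
amplitude = 0.5        # relative loudness, on a 0..1 scale

# One second of a pure tone, sample by sample: a single sine wave whose
# height is set by `amplitude` and whose rate of oscillation by `frequency`.
samples = [
    amplitude * math.sin(2 * math.pi * frequency * t / sample_rate)
    for t in range(sample_rate)
]
```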

The speed of sound is essentially the speed at which these ‘acoustic waves’ can travel through a substance — which leads us neatly to the role these substances (called “the medium”) play here.

Not all things are equal

The source of sound only plays a limited part in its propagation. Sound propagation is almost entirely dependent on the medium.

Video credits Reddit user renec112.

First off, this means that sound can’t propagate through a void, as there is nothing to carry it. One handy example is that in space, nobody can hear you scream; but if you press your visor against another astronaut’s visor, they will. Secondly, a medium can’t carry sound unless it has some elasticity, although this is more of an academic point as every material is elastic to some degree. The corollary is that the more elastic the medium, the faster sound will travel through it.

Elasticity is the product of two traits: the ability to resist deformation (the ‘elastic modulus’, or rigidity) and how much a material can be deformed before it stops coming back to its original shape (the ‘elastic limit’, or flexibility). Steel and rubber are both very elastic, but the former is rigid while the latter is flexible.

Density has a somewhat more complicated relationship with the speed of sound. Density is basically a measure of how much matter there is in a given space. On the one hand, closely-packed, lightweight particles allow for higher speeds of sound, as there’s less empty space they need to travel over to hit their neighbors. But if these particles are heavy and more spread apart, they will slow the sound down (as big, heavy particles are harder to move). Sound will also attenuate faster through this last type of material. In general, elastic properties tend to have more of an impact on the speed of sound than density.

A basic example involves hydrogen, oxygen, and iron. Hydrogen and oxygen have nearly the same elastic properties, but hydrogen is much less dense than oxygen. The speed of sound through hydrogen is 1,270 meters per second, but only 326 m/s through oxygen. Iron, although much denser than either of them, is also much more elastic. Sound traveling through an iron bar can reach up to 5,120 m/s.
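As a rough sanity check on those figures (my own back-of-the-envelope sketch, not from the article), the textbook relation speed ≈ √(stiffness / density) gets you into the right ballpark; the stiffness and density values below are approximate numbers I’ve assumed:

```python
from math import sqrt

# Rough estimate: speed of sound ≈ sqrt(stiffness / density).
# Stiffness here is the adiabatic bulk modulus for the gases and water,
# and the Young's modulus for an iron bar; all values are approximate.
materials = {
    #            stiffness (Pa), density (kg/m^3)
    "hydrogen": (1.43e5,  0.084),
    "oxygen":   (1.42e5,  1.33),
    "water":    (2.2e9,   1000.0),
    "iron bar": (2.0e11,  7870.0),
}

for name, (stiffness, density) in materials.items():
    print(f"{name:>8}: ~{sqrt(stiffness / density):,.0f} m/s")
```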

One other thing to note here is that fluids only carry sound as compression waves (particles bumping into each other along the direction the wave is propagating). Solids carry it both as compression waves and as shear waves (where particles move perpendicular to the direction of propagation). This is because you can’t cut fluids with a knife (they have a shear modulus of 0): a fluid’s molecules slide too freely past one another for such motions to create shear waves.

Sonic boom

So far we’ve seen that sound has a maximum speed it can travel at, based on which material it is propagating through. By ‘traveling’, we mean particles bumping into their neighbors, creating wave-like areas of pressure.

So what happens when something moves faster than the speed these particles can reach? Well, you get a sonic boom, of course.

Slow-motion footage of a bullet traveling through ballistic gel. Notice how the gel in the middle is pushed away by the bullet before its edges and corners have time to move. The process is very similar to how airplanes form sonic booms. You can see the metal table buckling under the pressure. That shock corresponds to a bystander perceiving the sonic boom after the bullet has passed them.
Image via YouTube.

Despite the name, sonic booms are more like sonic yelling. When an object moves faster than sound can travel in its environment, it generates a thunder-like sound. If the source is close enough, this boom can be strong enough to damage structures and break windows.

An airplane moving faster than the speed of sound will compress the air in front of it, as this air can only move at the speed of sound. It can’t physically get out of the way fast enough. Eventually, all this compressed, moving air (which is, in essence, sound) is blasted away from the aircraft’s nose at Mach 1 (the speed of sound through air). If anyone is close enough to be reached by this blast of ultra-pressurized air, they hear the sonic boom.

Although it is perceived as an extremely loud burst of sound by a static observer, the sonic boom is a continuous phenomenon. As long as an object moves faster than sound, it will keep creating this area of ultra-compressed air, and leave a continuous boom in its wake. One nifty fact about sonic booms is that you can’t hear them coming — they move faster than sound, so you can only hear them after they’ve passed you.

Humans have only recently gone above the speed of sound, with the first supersonic flight recorded in 1947. Since then, such flights have been banned above dry land in the US and EU, in order to protect people and property (although they can still be carried out with proper authorization). Faster-than-sound travel, however, is still an alluring goal. One way to allow for supersonic speeds without blasting all the windows in the neighborhood is to travel through a vacuum or low-pressure air — a cornerstone idea of the Hyperloop.

What’s the link between music, pleasure, and emotion?

A bad day can be made better with the right jam, and a boring commute is that much more enjoyable with your favorite tune in the background. But why does music have such a powerful impact on us? And why do we like it so much?

Image via Pixabay.

We know that music has a special significance to humanity, as it’s popped up (either independently or through a cultural exchange) in virtually every society in history. We experience that special significance daily when we put our headphones on or relax after work with a nice record.

Back in 2001, researchers at McGill University in Montreal used magnetic resonance imaging (MRI) to show that people listening to music had activity in the limbic and paralimbic brain areas, which are related to the reward system. This reward system doles out dopamine, which makes us feel pleasure, as a reward for sex, good food, and so on. Addictive drugs also work by coaxing the production and release of dopamine in the brain.

That being said…

We don’t really know why, to be honest

But we do have a number of theories.

Back in his 1956 book Emotion and Meaning in Music, philosopher and composer Leonard Meyer proposed that the emotional response we get from music is related to our expectations. He built on previous theories (the belief-desire-intention model) that the formation of emotion is dependent on our desires. The inability to satisfy some desire would create feelings of frustration or anger but, if we do get what we want, we get nice feelings as a reward. Delayed gratification also makes an appearance here: the greater the gap between frustration and when we actually get what we want, the better we will feel once we get it, the theory goes.

In Meyer’s view, because music works with patterns, the human brain subconsciously tries to predict what the next note or groups of notes will be. If it’s right, it gives itself a shot of dopamine as a reward. If it’s not, it will try harder, and get a higher shot of dopamine once it eventually succeeds. In other words, simply having an expectation of how the song should go makes it elicit emotions in our brain, regardless of whether that expectation proves to be right or not.

It’s a nice theory, but it’s very hard to test. The main issue with it is that music can be so diverse that there are virtually endless ways to create and/or go against expectations, so it’s not exactly clear what we should test for. A song can rise or fall quickly, and we may expect a rising song to continue to rise — but it can’t do that indefinitely. We know jarring dissonances are unpleasant, but there also seems to be a cultural factor in play here: what was top of the charts two thousand years ago may sound completely horrendous today.

Expectations are in large part driven by how a particular piece we’re listening to has evolved so far, how it compares to similar songs, and how it fits in with all the music we’ve listened to so far. We all have our own subconscious understanding of what music ‘should be’ and it is to a large degree driven by our culture. This is why jazz, a melting pot of musical genres and methods, first sounds a bit off to those unacquainted with it.

Music also seems to have a physiological effect on humans. Past research has shown that our heartbeats and breathing patterns will accelerate to match the beat of a fast-paced track “independent of individual preference”, i.e. regardless of whether we ‘like’ the song. It’s possible that our brains interpret this arousal as excitement through a process called brainwave entrainment.

One other possibility is that music activates the regions of the brain that govern and process speech. As we’re very vocal and very social beasts, we’re used to conveying emotion via speech. In this view, music acts as a specific type of speech and as such can be a vehicle for transmitting emotion. Because we have the tendency to mirror the emotions of others, the song would end up making us ‘feel something’.

Music is a very rich playground — it may very well prove to be infinite. Our enjoyment of it also hinges on a very large number of very subjective factors, further complicating attempts to quantify the experience.

From a scientific point of view, it’s very interesting to ask why music sends chills down our spine. From a personal point of view, however, I’m just very thankful that it can.

Scientists eavesdrop on sound particles with quantum microphone

Researchers have developed a microphone so sensitive it’s capable of picking up individual particles of sound.

Artist’s impression of an array of nanomechanical resonators designed to generate and trap sound particles, or phonons. Image credits: Wentao Jiang.

OK, we knew light comes in particles, and gravity is thought to have particles too. Now even sound has particles? Well, not quite. A phonon is what’s called a quasiparticle — basically, an emergent phenomenon that occurs when a microscopically complicated system behaves as if it were a particle.

But although they’re not real particles (what’s real in the quantum world anyway?), phonons have a lot in common with photons, the carriers of light: they’re quantized. Being quantized is the backbone of all quantum particles and, as strange as that may sound, it actually means something very simple: they can only carry certain discrete amounts of energy.

Let’s say we have a minimum possible energy level — a particle can only have exact multiples of that amount. If the minimum level is “1”, it can only have 1, 2, 3, 4… and so on — not 1.5, or any value in between. Think of it as the rungs on a ladder of energy, with nothing in between.

Since phonons share this characteristic with photons, sound gets pretty weird at the quantum level.

“Sound has this granularity that we don’t normally experience,” said study leader Amir Safavi-Naeini, from Stanford. “Sound, at the quantum level, crackles.”

Although phonons were first described by Albert Einstein, researchers had been unable to measure individual phonon states in engineered structures — until now.

The problem with building a phonon microphone is the scale at which you have to build it.

“One phonon corresponds to an energy ten trillion trillion times smaller than the energy required to keep a lightbulb on for one second,” said graduate student Patricio Arrangoiz-Arriola, a co-first author of the study.

The team captured the peaks of different phonon energy levels in the qubit spectrum for the first time. Image credits: Arrangoiz-Arriola, Wollack et al., 2019.

A regular microphone works thanks to an internal membrane that vibrates when hit by sound waves. This physical movement is converted into a measurable voltage. However, if you tried to make a quantum microphone this way, it wouldn’t work. According to Heisenberg’s uncertainty principle, a quantum object’s position can’t be precisely known without changing it, so trying to count phonons by tracking position would disturb the very states being measured. Instead, the researchers decided to measure the phonons’ energy levels directly.

The quantum microphone consists of a series of supercooled nanomechanical resonators, so small that they are visible only through an electron microscope. The resonators are connected to a superconducting circuit which contains electron pairs that move around without resistance. The circuit forms a qubit — a system that can exist in two states at once and has a natural frequency, which can be read electronically.

“The resonators are formed from periodic structures that act like mirrors for sound. By introducing a defect into these artificial lattices, we can trap the phonons in the middle of the structures,” Arrangoiz-Arriola said.

While the system is extremely complex and difficult to handle, the results are also worth it. Mastering the ability to precisely generate and detect phonons could help pave the way for new kinds of quantum devices that are able to store and retrieve information encoded as particles of sound or that can convert seamlessly between optical and mechanical signals.

Safavi-Naeini concludes:

“Right now, people are using photons to encode these states. We want to use phonons, which brings with it a lot of advantages.

“Our device is an important step toward making a ‘mechanical quantum mechanical’ computer.”

The study was published in the journal Nature.

Snapping worms make one of the loudest noises in the ocean

Researchers have recently described the puzzling behavior of sea-dwelling worms which can produce one of the loudest sounds ever measured in aquatic animals.

When two sea-worms engage in “mouth fighting”, they produce powerful snapping sounds. Credit: Ryutaro Goto.

Many aquatic animals, including mammals, fish, crustaceans and insects, produce loud sounds underwater. However, this is the first time that scientists have witnessed a soft-bodied marine invertebrate making noises.

“When I first saw their video and audio recordings, my eyes just popped out of my head because it was so unexpected,” said Richard Palmer, Professor of biology at the University of Alberta.

Palmer received the footage from Ryutaro Goto, a Japanese researcher who was looking for some help in figuring out how the snapping sea-worms were producing their weird sounds.

The animals were first discovered in 2017, during a dredging expedition off the coast of Japan. Goto and Isao Hirabayashi, a curator at the Kushimoto Marine Park, were among the first to record the sounds.

Tests found that the sounds are as loud as 157 dB, with frequencies in the 1–100 kHz range and a strong signal at ∼6.9 kHz — comparable to the sounds made by snapping shrimp, which are among the most intense biological sounds ever measured in the sea.

Writing in the journal Current Biology, the Japanese and Canadian researchers explain how and why the loud snapping sounds occur.

According to the researchers, when these worms come close to each other, they open their mouths and snap — something described as “mouth fighting”. This is essentially a territorial display of force which the worms employ to protect their dwellings.

“The real challenge was figuring out how a soft-bodied animal like a worm—which is basically a hollow, muscular tube—could possibly make such loud sounds,” Palmer said.

Palmer says that the snapping sounds are produced by cavitation bubbles due to the extensive array of muscles in the worm’s pharynx.

“It’s like trying to suck a smoothie through a paper straw,” Palmer explained. “When it gets a little bit soft at the end, the tip collapses. It doesn’t take much force to make it collapse, but if you try to suck harder and harder, you build up this immense negative pressure. When the worm finally pops the valve open, it happens so fast that the water can’t fill the space, and the sides of that space collapse together in a point, creating this explosive release of energy in the form of sound.”

This hypothesis has yet to be validated but the researchers hope to conduct an experiment soon.

“It’s just an incredibly cool animal with quite the unexpected behaviour,” Palmer added. “I’ve shown the videos to biologists who study invertebrates and their reaction is always the same: they shake their heads in wonder.”

Bats can use leaves as ‘mirrors’ to spot hiding prey — but it only works at an angle

Bats use leaves as sound ‘mirrors’ to find (and eat) sneaky insects, according to new research.

A Leaf Nosed Bat.
Image via Pixabay.

Even on moonless nights, leaf-nosed bats are able to snatch up insects resting still and silent on leaves. New research from the Smithsonian Tropical Research Institute (STRI) shows that bats pull off this seemingly-impossible feat by approaching clusters of leaves from different directions. This gives them the chance to use their echolocation to find camouflaged prey — even prey that specifically tries to hide from the acoustic surveys.

I spy with my little ear… a bug

“For many years it was thought to be a sensory impossibility for bats to find silent, motionless prey resting on leaves by echolocation alone,” said Inga Geipel, Tupper Postdoctoral Fellow at STRI and the paper’s lead author.

Combining data from a biosonar experiment with footage from high-speed video cameras of bats approaching prey, the team found just how critical the approach angle is to the leaf-nosed bats’ hunting prowess.

Bats can drench an area in sound waves and then listen in on the returning echoes to survey their environment. It works much like a radar that uses sound instead of radio waves, and is undoubtedly a very cool trick to pull off. However, it’s not infallible: leaves are very good sound reflectors, so they drown out the echoes produced by any insect hiding in a patch of leaves. This natural cloaking mechanism is known as acoustic camouflage and makes the insects, for all intents and purposes, undetectable for the bats.

At least, that’s what we thought. To understand how bats pick out prey through the acoustic camouflage, the team aimed sound waves at a leaf (with and without an insect) from over 500 different angles. Using this data, they created a three-dimensional representation of the echoes it generates. For each direction, the team also calculated how intense the echo was over the five different frequencies of sound present in a bat’s call.

As expected, leaves both with and without insects were very good sound reflectors if the sound approaches at an angle under 30 degrees (more-or-less from straight ahead). For a bat approaching at these angles, any echoes generated by an insect will be drowned out by the leaf’s echo. However, Geipel and colleagues found that for angles greater than 30 degrees, incoming sound waves bounce off the leaf much like light on a mirror or a lake. An approach at this angle makes the insect’s echo stand out clearly against the quiet backdrop provided by the leaf.

The optimal angle for bats to approach insects resting on leaves ranges between 42 and 78 degrees, the authors conclude.

To verify their results, Geipel recorded actual bats at STRI’s Barro Colorado Island research station in Panama as they hunted insects positioned on artificial leaves. Their approaches were filmed using two high-speed cameras, and Geipel used the footage to reconstruct the flight paths of the bats as they closed in on the insects. Almost 80% of the approach angles were within the range that makes leaves act as specular reflectors, she reports, suggesting that the findings are sound.

“This study changes our understanding of the potential uses of echolocation,” Geipel said. “It has important implications for the study of predator-prey interactions and for the fields of sensory ecology and evolution.”

The paper “Bats Actively Use Leaves as Specular Reflectors to Detect Acoustically Camouflaged Prey” has been published in the journal Current Biology.


Researchers produce the loudest sound in the world inside tiny jets of water

Researchers in the United States have produced the loudest sound possible — for science. The team blasted jets of water with a powerful X-ray laser to generate a pressure wave slightly above 270 decibels.

After blasting tiny jets of water with an X-ray laser, researchers watched left- and right-moving trains of shockwaves travel away from microbubble filled regions. Credit: Claudiu Stan/Rutgers University Newark.

What makes one sound seem louder than another is the amount of energy pushing from a source towards the listener. This energy is carried in the form of pressure variations through the air. And like any form of energy, we can measure this pressure wave in order to objectively determine loudness.

Sound level meters measure sound intensity in units called decibels (dB), a logarithmic scale named after Alexander Graham Bell. Being logarithmic means it goes up in powers of ten: every increase of 10 dB is equivalent to a 10-fold increase in sound intensity. This also means that a 100 dB sound is about a billion times more intense than a 10 dB sound — not 10 times more intense, which would be the case for a linear scale.

Sound intensity, however, does not map directly onto loudness. The reason the dB scale is logarithmic is that this is how our ears work. Every 10 dB increase in sound intensity corresponds roughly to a doubling in loudness (our perception of sound intensity). So although 100 dB is a billion times more intense than a 10 dB sound, it is only about 512 times louder.
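Here’s that arithmetic as a quick sketch (my own illustration of the rule of thumb above, not code from any study):

```python
def intensity_ratio(db_difference):
    # Logarithmic scale: every 10 dB is a tenfold jump in sound intensity.
    return 10 ** (db_difference / 10)

def loudness_ratio(db_difference):
    # Rule of thumb: every 10 dB roughly doubles perceived loudness.
    return 2 ** (db_difference / 10)

diff = 100 - 10  # comparing a 100 dB sound with a 10 dB sound
print(intensity_ratio(diff))  # 1,000,000,000 -> about a billion times more intense
print(loudness_ratio(diff))   # 512.0 -> but "only" ~512 times louder
```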

To get a sense of the dB scale, 10 dB is as loud as nearby falling leaves, 40 dB is a quiet conversation, 100 dB is an operational jackhammer at close range, 110 dB is a jet engine taking off at 100 meters, and 140 dB is a rooster crowing right next to you (yup, they’re that loud!). The loudest animal, however, is the blue whale, which can make sounds reaching almost 190 decibels.

But not even a blue whale can match the loudness generated by a recent experiment performed by Gabriel Blaj, a staff scientist at SLAC and Stanford University, and Claudiu Stan of Rutgers University Newark. The researchers used SLAC’s Linac Coherent Light Source (LCLS) X-ray laser to zap micro-jets of water only 14 to 30 micrometers in diameter. The short pulses were so intense that they vaporized the water, generating a shockwave that traveled through the jet in alternating high- and low-pressure zones.

There’s a limit to how loud a sound can get. For air, that limit is 194 dB — anything louder and the air itself starts to break down from all of the energy. Underwater, things can get as loud as about 270 dB before the water starts to break apart through cavitation.

“The amplitudes and intensities were limited by the wave destroying its own propagation medium through cavitation, and therefore these ultrasonic waves in jets are one of the most intense propagating sounds that can be generated in liquid water. The pressure of the initial shock decayed exponentially, more rapidly in thinner jets, and the decay length was proportional to the jet diameter within the accuracy of measurements,” the authors wrote in the journal Physical Review Fluids.

Besides breaking a world record, the new study also has some practical value. In the future, the findings could help scientists devise methods in order to protect miniature samples undergoing atomic-scale analysis inside water jets.

NASA eavesdropped on the Sun, and they made a video so you can hear it too

With a bit of help from NASA, you can now hear the sun’s roar — and it’s glorious.

The Sun’s surface seen in ultraviolet light, colored by NASA.
Image credits NASA Goddard.

Although you never hear it, the Sun is actually pretty loud. This massive body of superheated, fusing plasma is rife with ripples and waves generated by the same processes that generate its light and heat — and where there’s motion, there’s sound. We never get to hear it, however, as the huge expanse of nothing between the Earth and the Sun acts as a perfect acoustic insulator.

With some data from ESA (the European Space Agency) and a sprinkling of NASA’s magic, however, you can now hear the Sun churn in all of its (surprisingly tranquil) glory.

Hear me roar (softly)

“Waves are traveling and bouncing around inside the Sun, and if your eyes were sensitive enough they could actually see this,” says Alex Young, associate director for science in the Heliophysics Science Division at NASA’s Goddard Space Flight Center.

What Young is referring to are seismic waves, a type of acoustic wave — the same kind of motion that causes earthquakes on rocky planets — that form and propagate inside the Sun. Hypothetically, if you looked at the star with sensitive enough eyes, you could actually see these waves rippling through its body and surface. Stars are formed of a much more fluid material than most planets, and so their bulk flows more readily under the sway of seismic waves — wiggling just like a poked block of Jell-O.

As most of us learned in early childhood, however, one cannot look directly into the Sun for long. Luckily for us, the joint ESA/NASA Solar and Heliospheric Observatory (SOHO) has been watching our star for decades. Using its Michelson Doppler Imager (MDI) instrument, the spacecraft recorded these motions inside the Sun. Researchers at NASA and the Stanford Experimental Physics Lab later processed the data into a soundtrack.

It’s not half-bad, as far as tunes go. I actually find it quite relaxing. Check it out:

The sounds you hear in NASA’s clip are generated by the motions of plasma inside the Sun. These are the same processes that generate local magnetic fields inside the star and push matter towards the surface, causing sunspots, solar flares, or coronal mass ejections — the birthplace of space weather.

Space weather phenomena are associated with intense bursts of radiation, to which complex technological systems are susceptible. That puts much of our infrastructure at risk, from satellites — and with them, cell phone networks, GPS, and other types of communication — to transportation and power grids.

It took a great deal of work to turn the readings from SOHO into something usable. Alexander Kosovichev, a physicist at the Stanford lab, processed the raw SOHO MDI data by averaging the Doppler velocity data over the solar disc and then keeping only the low-degree modes. These low-degree modes are the only type of seismic waves whose behavior inside stars is known and accessible to helioseismologists. Afterward, he cut out any interference, such as sounds generated by the whirring of instruments aboard the spacecraft. He then filtered the data to end up with uninterrupted sound waves.

While scientists probably enjoy a groovy track just as much as the rest of us, the soundtrack actually has practical applications. By analyzing the sounds, researchers can get a very accurate picture of the churnings inside of our Sun — much more accurate than previous observations could provide.

“We don’t have straightforward ways to look inside the Sun,” Young explains. “We don’t have a microscope to zoom inside the Sun. So using a star or the Sun’s vibrations allows us to see inside of it.”

A more comprehensive understanding of the motions inside the Sun could allow researchers to better predict space weather events.

Story via NASA.

How to stop the annoying sound of a dripping tap with science

A leaky faucet is a form of Chinese water torture. *Plink*, *plink*. Doesn’t it sound like popping brain cells? Suffice it to say that many a person has been kept awake by this nuisance — but it’s only recently that some brave scientists have finally understood how to stop it. Contrary to previous research, a University of Cambridge scientist found that the annoying sound is caused by trapped air bubbles beneath the water’s surface. Just adding a hint of dishwashing soap to the container catching the drips will stop the sound, according to the scientist.

Dr. Anurag Agarwal, a researcher at Cambridge’s Department of Engineering, was once staying overnight at a friend’s place with a leaky roof. Agarwal, who is an expert in the aerodynamics of aerospace, domestic appliances, and biomedical applications, was kept awake by the dripping water. Being an engineer, however, he sought to find a solution.

“While I was being kept awake by the sound of water falling into a bucket placed underneath the leak, I started thinking about this problem,” said Agarwal in a statement.

Instead of simply plugging the leak, Agarwal dug into the physics of the problem. He spoke to a colleague about the mechanisms involved and they both decided to set up an experiment.

Previous studies have proposed that the “plink” sound is caused by the impact of the droplets onto a surface, such as a sink, similarly to how slamming an object against the wall makes a “bang.” Other explanations propose that the plinking sound is generated by an underwater sound field propagating through the water surface, or the resonance of the cavity formed by water hitting a surface.

Agarwal and colleagues set up an experiment using high-speed cameras and high-end audio-capture equipment. Recordings of the dripping water showed that the dripping sound mechanism is actually very different from what was proposed earlier.

“A lot of work has been done on the physical mechanics of a dripping tap, but not very much has been done on the sound,” Agarwal said. “But thanks to modern video and audio technology, we can finally find out exactly where the sound is coming from, which may help us to stop it.”

Remarkably, the observations suggest that the initial splash, the formation of the cavity, and the jet of liquid are all effectively silent. The annoying plinky sound is actually the result of the oscillation, or back-and-forth movement, of as little as one tiny bubble of air trapped beneath the water’s surface. The bubble causes the water’s surface to vibrate in tune with it, which sends acoustic waves to our ears similarly to how a piston triggers an airborne sound. The trapped air bubble needs to be close to the bottom of the cavity caused by the drop impact in order for the “plink” sound to be audible.

The researchers validated their model by halting the sound, they wrote in the journal Scientific Reports. They simply added a small amount of dish-washing soap to the container catching the dripping water.

Whale skulls act like resonance chambers to help them hear underwater

Whales don’t put their back into hearing — but they do put their skull. New research, along with the first-ever full-body CT scan of a minke whale, shows how the sea-borne mammals pick up low-frequency sounds, from the calls of other whales to the propellers of cargo ships.

The minke whale specimen inside the industrial CT scanner. To reduce the time required to scan the entire whale, the team cut the specimen in half, scanned both pieces at the same time, and reconstructed the complete specimen afterward in the computer.
Image credits Ted Cranford / San Diego State University.

The gentle giants of the sea often bedazzle and impress with their songs, but… how can they hear each other underwater? New research suggests that it’s possible if you use your head. If you use your head as a huge acoustic antenna, that is.

Can you hear that?

Considering where whales like to hang out and their impressive girths, studying the marine mammals is notoriously difficult. However, one team of determined US researchers wouldn’t let that dissuade them. The duo has developed a new method of determining how baleen whales (parvorder Mysticeti) pick up low-frequency chatter between 10 to 200 Hertz.

“You can imagine that it is nearly impossible to give a hearing test to a whale, one of the largest animals in the world,” said lead researcher Ted W. Cranford, PhD, adjunct professor of research in the department of biology at San Diego State University.

“The techniques we have developed allow us to simulate the biomechanical processes of sound reception and to estimate the audiogram [hearing curve] of a whale by using the details of anatomic geometry.”

Using a computerized tomography (CT) scanner designed for industrial applications (it was originally used to spot structural defects in rockets), the researchers analyzed the internal structure of a minke whale calf (Balaenoptera acutorostrata) and a fin whale calf (B. physalus). Both animals were found stranded along the U.S. coast some years before the study and were preserved after they died during rescue operations.

CT scanners are X-ray devices that take cross-sectional pictures through objects or organisms. You’re likely quite familiar with them from hospitals or TV shows involving hospitals. The team produced 3D models of the calves’ skulls based on these scans. Then, they used a method known as finite element modeling (FEM) to combine maps of tissue density from the CT scans with measurements of tissue elasticity. Finally, a supercomputer simulated these combined models’ response to sounds of different frequencies.

The team reports that, surprisingly, whales’ skulls act as antennae or resonance chambers: the bones vibrate when impacted by sound, amplifying and transmitting the vibrations to the whales’ ears. The skulls were especially well-tuned to the low-frequency sounds that whales use to communicate. The authors also note that large shipping vessels produce the same frequencies, a finding that should help industry and policymakers establish new regulations to limit our impact on these gentle giants.

In addition, the team’s models suggest that minke whales hear low-frequency sound best when it arrives from directly ahead of them. This suggests whales have directional hearing that provides cues about the location of sound sources, such as other whales or oncoming ships. Exactly if (and how) whales might boast directional hearing is still a puzzling question, given that low-frequency sounds tend to travel in waves that are longer than the whales themselves.

The findings were presented Monday, April 23rd at the American Association of Anatomists annual meeting during the 2018 Experimental Biology meeting in San Diego.

Geologists listen to volcanic murmur to predict eruptions

A new study found that monitoring volcanoes for inaudible, low-frequency sounds might help predict dangerous eruptions.

Audible sounds and earthquakes have a lot in common with each other — after all, they’re both caused by acoustic waves. Sure, they’re propagating at different frequencies and through different mediums, but at their core, they’re similar waves. With a bit of artistic license, you could say that seismology is the science that “listens” to the Earth.

Well, researchers from Stanford and Boise State University now want to actually listen to a volcano. They found that by monitoring the infrasound detected by monitoring stations on the slopes of the Villarrica volcano in southern Chile, one of the most active volcanoes in the world, they could predict impending eruptions.

The sounds (vibrations) they were picking up were produced by the rumbling of a lava lake located inside the volcano’s crater. When the volcano’s activity intensifies, the lake starts to shake and stir, creating more sounds.

“Our results point to how infrasound could aid in forecasting volcanic eruptions,” said study co-author Leighton Watson, a graduate student in the lab of Eric Dunham, an associate professor in the Department of Geophysics of the Stanford School of Earth, Energy & Environmental Sciences. “Infrasound is potentially a key piece of information available to volcanologists to gauge the likelihood of an eruption hours or days ahead.”

Of course, many of the world’s big volcanoes are already being monitored. Seismic activity can be a good indicator of an eruption. The idea isn’t to replace it with infrasound, but rather to complement it, along with all other methods used for volcano monitoring. However, there are still significant challenges.

Villarrica is one of Chile’s most active volcanoes.

While the infrasound readings have proven quite reliable thus far, they also need to be confirmed in other environments, on other volcanoes. It’s not yet clear to what extent this information can be used to anticipate eruptions and how reliable the data will be.

Furthermore, this has only been tested on “open vent” volcanoes like Villarrica, where an exposed lake or channels of lava connect the volcano’s inner fire to the atmosphere. Applying the same method on a closed volcano will undoubtedly prove to be much more difficult, or even impossible.

“Volcanoes are complicated and there is currently no universally applicable means of predicting eruptions. In all likelihood, there never will be,” Dunham said. “Instead, we can look to the many indicators of increased volcanic activity, like seismicity, gas emissions, ground deformation, and – as we further demonstrated in this study – infrasound, in order to make robust forecasts of eruptions.”

Journal Reference: Jeffrey B. Johnson, Leighton M. Watson, Jose L. Palma, Eric M. Dunham, Jacob F. Anderson. Forecasting the eruption of an open-vent volcano using resonant infrasound tones. DOI: 10.1002/2017GL076506

Ultra slim sound diffuser could greatly improve your cinema and theater experience

A joint effort from researchers at North Carolina State University and Nanjing University has delivered a sound diffuser 10 times thinner than today’s products.

A conventional two-dimensional Schroeder diffuser (left), compared to the new, ‘ultra-thin’ two-dimensional Schroeder diffuser (right). Image credits: Yun Jing et al, 2017.

In acoustics (and architecture), diffusion is the efficacy by which sound energy is spread evenly in a given environment. Perfect diffusion would mean that sounds are heard identically everywhere in the room. Naturally, there is no perfect diffusion and no perfect environment, but architects and acoustic engineers strive to get as close to perfection as possible. Basically, in a theater, a cinema, or any other construction in which sound is important, you want good diffusion. You want everyone to hear sounds similarly, with as little interference as possible.

In any room, the walls, ceiling, and all the objects inside influence its acoustics. They create echoes and overlapping sounds, which reduce the acoustic quality. This is where sound diffusers step in.

“Sound diffusers are panels placed on the walls and ceiling of a room to scatter sound waves in many different directions, eliminating echoes and undesirable sound reflections – ultimately improving the quality of the sound,” says Yun Jing, an assistant professor of mechanical and aerospace engineering at NC State and corresponding author of a paper on the work.

The most common diffusers, called Schroeder diffusers, can be very bulky — and they need to be bulky because of the wavelengths they are trying to diffuse. A typical male voice has a frequency of around 85 hertz, which translates to a wavelength of 4 meters, or 13.1 feet. If that’s the lowest frequency you have to diffuse, then you’d need a diffuser about half that wavelength — so 2 meters (6.5 feet) thick. If you want to cover an even broader range of sounds, you might need even bigger diffusers.

This is where the new research steps in. What Jing and his team did was develop a diffuser that works just as well (if not better), but only needs to be about 5 percent of the sound’s wavelength thick. This means that diffusers could be made much more cheaply, using far less material.
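To put those thicknesses side by side, here’s the back-of-the-envelope arithmetic the article implies (my own sketch; the exact dimensions of the real designs depend on details in the paper):

```python
speed_of_sound = 343.0  # m/s in air at roughly 20 °C
frequency = 85.0        # Hz: the article's example of a typical male voice

wavelength = speed_of_sound / frequency  # ~4.0 m
schroeder_depth = wavelength / 2         # ~2.0 m for a classic Schroeder diffuser
ultrathin_depth = 0.05 * wavelength      # ~0.2 m at 5% of the wavelength

print(f"wavelength:           {wavelength:.2f} m")
print(f"Schroeder diffuser:   {schroeder_depth:.2f} m thick")
print(f"ultra-thin diffuser:  {ultrathin_depth:.2f} m thick")
```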

“Diffusers are often made out of wood, so our design would use 10 times less wood than the Schroeder diffuser design,” Yun Jing, an assistant professor of mechanical and aerospace engineering at NC State and corresponding author of the study, said in a statement. “That would result in lighter, less expensive diffusers that allow people to make better use of their space.”

A representation of the physical characteristics of the new diffuser. Image credits: Yun Jing et al, 2017.

The diffuser consists of several evenly spaced squares, of different sizes, each opening up into a thin, underlying chamber. These chambers have identical sizes, but different apertures.

To make things even better, the fabrication process is also easy. Researchers have created diffusers using a 3D printer and will move on to develop wooden diffusers, which promise to be even cheaper.

Journal Reference: Yifan Zhu, Xudong Fan, Bin Liang, Jianchun Cheng, and Yun Jing — Ultrathin Acoustic Metasurface-Based Schroeder Diffuser. DOI:https://doi.org/10.1103/PhysRevX.7.021034

Paper-thin device turns touch into electricity, flags into loudspeakers, bracelets into microphones

Michigan State University engineers have put together a flexible, paper-thin transducer — a device which can turn physical motion into electrical energy and vice-versa. The material could be used to create a whole new range of electronics powered from motion, as well as ultra-thin microphones and loudspeakers.

Nelson Sepulveda and the paper-thin speaker.
Image credits Michigan State University.

Harry Potter may have had animated newspapers but you know what Hogwarts never had? Newspaper boomboxes. So I guess it’s score one for the muggles, since we’re about to get just that. Along with newspaper microphones, or anything else that’s really thin and works either as a sound recording or replay device. Our imagination’s the limit.

Fold-a-speaker

It’s all thanks to a team of nanotech engineers from Michigan State University, who designed and produced a prototype ultra-thin transducer. The device is fully flexible and foldable, can easily be scaled up and is bidirectional — meaning it can convert mechanical energy to electrical energy and electrical energy to mechanical energy.

The device’s fabrication process starts with a silicon wafer, over which several thin layers or sheets of environmentally friendly substances including silver, polyimide, and polypropylene ferroelectret are added. Ions (which are charged particles) are added onto each individual layer so that they produce an electrical current when compressed.

Known as a ferroelectret nanogenerator (abbreviated FENG), it was first showcased in late 2016 as a sheet which could turn users’ touch into energy to power a keyboard, LED lights and an LCD touch-screen.

Since then, the FENG got some new tricks. The team discovered that in addition to its touch-to-energy transformation ability, the material can be used as a microphone — by turning the mechanical energy of sound into electrical energy — and as a loudspeaker — by doing the reverse.

“Every technology starts with a breakthrough and this is a breakthrough for this particular technology,” said Nelson Sepulveda, MSU associate professor of electrical and computer engineering and primary investigator of the federally funded project.

“This is the first transducer that is ultrathin, flexible, scalable and bidirectional, meaning it can convert mechanical energy to electrical energy and electrical energy to mechanical energy.”

Multiple uses

As a proof-of-concept for sound recording, the team developed a FENG security patch that uses voice recognition software to unlock a computer. Tests revealed that the patch can pick up voices with high fidelity, being sensitive enough to capture several frequency channels of the human voice.

To test how well the material would function as a loudspeaker, some FENG fabric was embedded into the faculty’s own Spartan flag. It was supplied with a signal from an iPad through an amplifier. The team reports that it reproduced the sound flawlessly, the flag itself becoming a loudspeaker. One day, you could be carrying speakers around everywhere, comfortably folded in your pocket until you need them. Or you could have a poster of FENG at home, ready to hook up to your PC, taking up virtually no desk space.

“So we could use it in the future by taking traditional speakers, which are big, bulky and use a lot of power, and replacing them with this very flexible, thin, small device.”

“Or imagine a newspaper,” Sepulveda added, “where the sheets are microphones and loudspeakers. You could essentially have a voice-activated newspaper that talks back to you.”

Other applications could include a noise-canceling sheeting that also produces some energy in the bargain, or voice-operated wearable health-monitoring devices. The team says they’re also interested in developing the “speaking and listening aspects” of the technology.

The full paper “Nanogenerator-based dual-functional and self-powered thin patch loudspeaker or microphone for flexible electronics” has been published in the journal Nature Communications.

What sound is (and why it can topple buildings)

Sound is all around us and comes in a myriad of flavors. Some are nice, like music or wind blowing through leaves. Others, like the beep when your card gets declined, not so much. We know our ears pick up on sounds, but then, why do we also feel the hammering of a song in our chests when the bass is loud enough? And what’s the link between instruments playing by themselves and a bridge collapsing in 1850s France?

Let’s find out.

Rainbow-colored sound wave.
Image credits Pixabay.

What is sound?

Physically speaking, what we perceive as sound is a vibration produced by motion.

Imagine the world as a huge bathtub on which you, a yellow rubber duck, merrily float around. At various points along this tubby world, there are faucets pouring water. Some are bigger and pour a lot of water, while others are tiny and only give off occasional drips. Some are closer to you, while others are really far away. These are the sources of sound.

This is surprisingly similar to how sound would look if we were able to see it.
Image credits Arek Socha.

Regardless of their position or size, each faucet creates vibrations in the form of ripples on the water’s surface — which is the medium.  Most of these will never make it all the way to you. For the ones that do, you’ll ‘hear’ the source faucet. How much you bob up and down on the wave it generated represents the sound’s amplitude — roughly equivalent to what we perceive as loudness. How frequently each sloshes you around, based on how close packed the ripples are, is the sound’s frequency — what we perceive as pitch. The way ripples push you is the direction of propagation — i.e. where we hear the sound coming from.

It’s not a perfect analogy because, as you may already suspect, the world is not a bathtub and we’re not rubber duckies. But it simplifies the conditions enough to understand the basics. For you to hear something, a few things have to happen: First, you need a source of motion to get it started. Secondly, sound travels as a wave, so there has to be a medium to carry the vibration between you and this source. You need to be close enough to the source to register the vibration before it attenuates or dies off. Lastly, the sound has to be in the right frequency interval — if the wave is too lazy or too steep, you won’t pick it up.
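If you want to strip the analogy down even further, here is a tiny sketch (Python with NumPy; the function name and numbers are mine) of a single pure tone, where amplitude stands in for loudness and frequency for pitch. Real sounds are messier, being sums of many such waves.

```python
# A toy "sound": a single pure tone. Amplitude stands in for loudness,
# frequency for pitch. Real sounds are sums of many such waves.
import numpy as np

def pure_tone(amplitude, frequency_hz, duration_s=0.01, sample_rate=44_100):
    """Return (time, pressure) samples for one sine-wave tone."""
    t = np.arange(0, duration_s, 1.0 / sample_rate)
    return t, amplitude * np.sin(2 * np.pi * frequency_hz * t)

_, quiet_hum = pure_tone(amplitude=0.1, frequency_hz=100)     # quiet and low-pitched
_, loud_whine = pure_tone(amplitude=1.0, frequency_hz=4_000)  # loud and high-pitched

print("hum peak pressure:  ", round(float(quiet_hum.max()), 3))
print("whine peak pressure:", round(float(loud_whine.max()), 3))
```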

Spherical source producing compression waves. Darker color indicates higher pressure.
Image credits Thierry Dugnolle.

In real life, the medium can be any fluid (gas, liquid, plasma) or solid. Even you are one. The medium’s properties determine how sound propagates — fluids carry sound only as compression waves, which are alternating bands of low and high pressure, while solids can carry both compression and transverse (shear) waves. The medium’s stiffness and density set the speed of sound, while its viscosity (how strongly particles stick to each other and resist motion) dictates how far a sound can travel before it runs out of energy and attenuates.
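As a rough illustration of that last point, here is a hedged sketch of the Newton-Laplace relation, c = sqrt(K/rho), with ballpark textbook values for air and water that are my assumptions rather than figures from this article. Solids are left out because their sound speed also depends on shear stiffness.

```python
# Speed of sound from stiffness and density (Newton-Laplace): c = sqrt(K / rho).
# Ballpark values below are common textbook figures, not from the article.
from math import sqrt

media = {
    # name: (bulk modulus K in Pa, density rho in kg/m^3)
    "air (adiabatic)": (1.42e5, 1.2),
    "water":           (2.2e9,  1000.0),
}

for name, (bulk_modulus, density) in media.items():
    speed = sqrt(bulk_modulus / density)
    print(f"{name:16s} ~{speed:5.0f} m/s")   # air ~344 m/s, water ~1480 m/s
```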

These properties aren’t constant through space or time. Temperature changes, for example, can shift parts of the medium’s properties, altering how fast sound propagates at different points. Vibration can also transfer from one medium to another, each with different properties. If you’re dressed up in a sealed astronaut suit on Earth and talk loudly enough, people will still be able to hear you. Take two astronauts into the void of space and they won’t hear each other talking, because there are no particles to carry the vibration between them. But if they stand visor-to-visor they may faintly hear each other, as the suits and the air inside them carry over part of the sound.

Perceiving is believing

From a subjective point of view, the answer to “what is sound” comes down to what you can hear. The human ear can typically pick up on frequencies between 20 Hz and 20 kHz (20,000 Hz), although age, personal traits, and the medium’s pressure shift these limits around. Everything below 20 Hz is called infrasound (under-sound), anything above 20 kHz is called ultrasound (over-sound).

All you have ever heard falls within this interval which, to be fair, is pretty limited. Dogs and cats can hear ultrasound up to roughly 45 and 65 kHz, respectively, which is why they howl at a whistle you can’t even hear. Some whales and dolphins can go as far as 100 kHz and beyond, an interval they use to communicate. Still, they’re limited in what lower frequencies they can hear. An average cow, however, can probably perceive a wider range of sounds than you on both ends.

This cat has a whole playlist of awesome music you can’t even hear.
Image from public domain.

Apart from those four physical properties of sound mentioned earlier, there are also the perceived qualities of a sound. Pitch and loudness are directly tied to physical properties for simple sounds, but this relationship breaks down for complex ones. There’s also a sound’s perceived duration (how long it seems to last), which is mostly influenced by how clearly you can hear it; its timbre (the way a sound ‘behaves’ over time, making it distinct from other sounds); its texture (the interaction between different sources); and finally its spatial location (where the different sources are relative to one another).

Motion can also influence the sounds you’re hearing, through the Doppler effect. How a sound source moves relative to you over time can lower or raise its perceived frequency. That’s why you can tell from pitch alone whether a car is rushing toward you, moving away from you, or just sitting in traffic.
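For the curious, here is a small sketch of the textbook Doppler formula for a source moving toward or away from a stationary listener; the engine tone and car speed are made-up numbers for illustration.

```python
# Textbook Doppler shift for a moving source and a stationary listener:
#   approaching: f_heard = f_source * c / (c - v)
#   receding:    f_heard = f_source * c / (c + v)
SPEED_OF_SOUND = 343.0  # m/s

def perceived_frequency(f_source, source_speed, approaching):
    sign = -1.0 if approaching else 1.0
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND + sign * source_speed)

ENGINE_HZ = 400.0   # made-up engine tone
CAR_SPEED = 20.0    # m/s, roughly 72 km/h

print("approaching:", round(perceived_frequency(ENGINE_HZ, CAR_SPEED, True)), "Hz")   # ~425
print("receding:   ", round(perceived_frequency(ENGINE_HZ, CAR_SPEED, False)), "Hz")  # ~378
```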

Resonance

Some time ago, musicians found that playing particular notes could make strings vibrate on other instruments, even when no one was touching them. The phenomenon was dubbed after the Latin term for ‘echo’, since the strings seemed to pick up and repeat the sound played to them, and because Latin sounds cool. Unknowingly, they had stumbled upon a phenomenon that today sees soldiers ordered to break stride when crossing bridges to keep them from collapsing — resonance.

Possible side effects of resonance.
Image credits Vladyslav Topyekha.

Ok, so the nerdy bit first. Every object has the capacity to oscillate, or shift, between several states of energy storage. If you fix one end of a spring, tie a weight to the other, pull down, and then release said weight, it will bob up and down like crazy, then gradually settle down. That movement is caused by the system oscillating between different states of energy — kinetic energy while the weight is in motion, elastic potential energy while it’s down, and gravitational potential energy while it’s up. It eventually settles at a particular point because this shift is inefficient: the system loses energy overall (called damping) when transitioning from one state to the other.

But objects also have something called a resonant frequency, which works the other way around. They can resonate with all kinds of waves, from mechanical/acoustic waves all the way to nuclear-magnetic or quantum resonance. Each object can have more than one such frequency for every kind of wave.

When vibrating at one of these frequencies, systems can undergo the shift with much greater efficiency, so tiny but sustained external vibrations can add up inside the system to build powerful oscillations. It can even lead to a system holding more energy than it can withstand, causing it to break apart. This phenomenon became tragically evident on the 16th of April 1850 at the Angers Bridge, France, when the marching cadence of a battalion of soldiers crossing the bridge amplified wind-induced oscillations that matched the structure’s resonant frequency, leading to its collapse and the death of some 200 troops.
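To see how tiny pushes can add up, here is a short sketch of the steady-state response of a driven, damped mass-spring system, the standard toy model of resonance. The natural frequency and damping ratio are arbitrary illustrative values, not figures from the bridge collapse.

```python
# Steady-state amplitude of a driven, damped mass-spring system:
#   m*x'' + c*x' + k*x = F*cos(w*t)
# Small, sustained pushes near the natural frequency build large oscillations.
import numpy as np

def steady_state_amplitude(drive_hz, natural_hz, damping_ratio, force_per_mass=1.0):
    """Oscillation amplitude once transients have died down."""
    w = 2 * np.pi * drive_hz
    w0 = 2 * np.pi * natural_hz
    return force_per_mass / np.sqrt((w0**2 - w**2)**2 + (2 * damping_ratio * w0 * w)**2)

NATURAL_HZ = 2.0  # made-up structural resonance
for drive in (0.5, 1.0, 1.9, 2.0, 2.1, 4.0):
    amp = steady_state_amplitude(drive, NATURAL_HZ, damping_ratio=0.02)
    print(f"driving at {drive:3.1f} Hz -> relative amplitude {amp:.5f}")
```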

Sound is basically a mechanical wave, so it can also induce these resonant oscillations in objects — called acoustic resonance. If you’ve ever seen someone sing a glass to the breaking point, this is the phenomenon at work. If not, you can watch Chase here be adorably excited when he manages it.

Other cool and not-cool things sound does

I’m gonna start the “cool-things” list with the sonic refrigerator because get it, cool? Refrigerators? I love my job.

Pun aside, about halfway through last year a team from the Department of Prime Mover Engineering at Tokai University in Japan developed a system that uses resonant frequencies to pump and compress coolant in a refrigerator in lieu of traditional systems. Their engine draws power from the fridge’s residual heat, making for a much more energy efficient system.

The sound generated by individual atoms’ vibrations can be used to identify their chemical species, one team from Georgia Tech reported last year. These vibrations can even tell researchers what substances, and in which particular states, multiple atoms bunch together to form. It’s so accurate that CERN is already using the method to identify individual subatomic particles.

Sound may also help us stop tsunamis before they reach the shore according to Dr Usama Kadri from Cardiff University’s School of Mathematics. The math shows it’s a viable method, although we don’t yet have the technical capabilities to implement it.

Researchers at the Max Planck Institute for Intelligent Systems in Germany have also figured out a way to use sound in an acoustic tractor beam. I don’t even need to explain why that’s awesome.

Sound can also be very pleasant, in the form of music — for humans and cats alike.

Certain sounds can make your food taste sweeter or sourer, others can help you diet — but these are more tied to perception than physics.

On the “not-cool, dude” list we have sound-based weapons. From ancient instruments used to shake the enemy’s morale, through the infamous Jericho sirens fitted to Nazi Stukas as a psychological weapon during WW2 (quite effective at first, then withdrawn from service once they ruined the planes’ aerodynamics and soldiers got used to them), to the modern crowd-control acoustic cannons employed by police and armed forces — used with varying degrees of ethical success — sound has always played a part in warfare.

There are also some more exotic items on the list, such as the much-searched-for-but-still-undiscovered brown note. This was believed to match the human bowel’s resonance frequency and make soldiers inadvertently soil themselves in combat. Though I’d say it would only make their camouflage more effective.

From behind, at least.

Powerful sound blasters can render tsunamis dead in the water, new study shows

Blasting high-powered acoustic waves at tsunamis could break their advance before reaching the shoreline, a new theoretical study has shown.

Tsunamis are one of the most dramatic natural phenomena we know of, and they’re equally destructive. These great onslaughts of water are powered by huge amounts of energy — on a level that only major landslides, volcanoes, earthquakes, nukes, or meteorite impacts can release. And when they reach a coastline, all that water in motion wipes infrastructure and buildings clean off.

[MUST READ] How tsunamis form and why they can be so dangerous

Traditionally, there are two elements coastal communities have relied on against tsunamis: seawalls and natural barriers. Seawalls are man-made structures that work on the principle of an unmoving object, resisting the wave’s kinetic energy through sheer mass. Natural barriers are coastal ecosystems, typically mangrove forests or coral reefs, that dissipate this energy over a wider area and prevent subsequent floods. Each approach has its own shortcomings, however, such as high construction and maintenance costs or the risk of being overwhelmed by a big enough tsunami.

Dr Usama Kadri from Cardiff University’s School of Mathematics thinks the best defense is a good offense: she proposes firing acoustic-gravity waves (AGWs) at incoming tsunamis before they reach the coastline, to reduce their amplitude and disperse their energy over a larger area. Okay, that’s cool, but how does it work?

The tsunami whisperer

Waves are a product of the interaction between two fluids (air and water) and gravity. Friction between wind and the sea’s surface pushes water molecules sideways and on top of one another, while gravity pulls them back down. Physically speaking, ‘waves’ are periodic wavetrains — and as such, they can be described by their wavelength (the distance between two wave crests), amplitude (their height), and frequency (how many crests pass a fixed point each second).

One thing you can do with periodic waves is make them interfere constructively or destructively — you can ‘sum up’ two small waves into a bigger one, or make them cancel out. Apart from having a different source of energy, tsunamis are largely similar to regular waves, so they too interfere with other waves. Here’s where AGWs come in.

Think of AGWs as massive, sound-driven shock waves. They occur naturally, move through water or rock at the speed of sound, and can stretch for thousands of kilometers. Dr Kadri shows that they can be used to destructively interfere with tsunamis and reduce their amplitude before the waves reach the coast, which would prevent a lot of deaths and property damage.
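The cancellation itself is easy to demonstrate with two toy waves. The sketch below just adds two sine waves in phase and in antiphase; the real tsunami-AGW interaction in Dr Kadri's paper is a resonant triad and far more involved, so this only shows the principle.

```python
# Constructive vs. destructive interference of two identical toy waves.
import numpy as np

t = np.linspace(0.0, 2.0, 2000)               # two seconds of "time"
wave = np.sin(2 * np.pi * 1.0 * t)            # a 1 Hz wave
in_phase = wave + np.sin(2 * np.pi * 1.0 * t)             # crests line up: bigger wave
anti_phase = wave + np.sin(2 * np.pi * 1.0 * t + np.pi)   # half-cycle offset: cancellation

print("single wave peak:     ", round(float(wave.max()), 3))        # ~1.0
print("constructive sum peak:", round(float(in_phase.max()), 3))    # ~2.0
print("destructive sum peak: ", round(float(anti_phase.max()), 6))  # ~0.0
```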

“Within the last two decades, tsunamis have been responsible for the loss of almost half a million lives, widespread long-lasting destruction, profound environmental effects and global financial crisis,” Dr Kadri writes in her paper. “Up until now, little attention has been paid to trying to mitigate tsunamis and the potential of acoustic-gravity waves remains largely unexplored.”

“The main tsunami properties that determine the size of impact at the shoreline are its wavelength and amplitude in the ocean. Here, we show that it is in principle possible to reduce the amplitude of a tsunami, and redistribute its energy over a larger space, through forcing it to interact with resonating acoustic–gravity waves.”

Her paper also shows that it’s possible to create advance warning systems based on AGWs, which are generated alongside the tsunami and induce high pressures on the seabed. She also suggests harnessing these natural AGWs against tsunamis, essentially turning nature’s own energy against itself.

The challenge now is to develop technology that can generate, modulate, and transmit AGWs with high enough accuracy to allow for interference with tsunamis. She admits that this won’t be easy to do, particularly because of the high energy required to put a dent in the waves.

The full paper “Tsunami mitigation by resonant triad interaction with acoustic–gravity waves” has been published in the journal Heliyon.


Brain network that picks words from the background noise revealed

Scientists have identified the brain networks that help focus on one voice or conversation in a noisy room — known as the “cocktail party effect”. They hope that by emulating the way these areas work, modern voice recognition software can be made to function much more efficiently.

Noise

Image credits Gerd Altmann / Pixabay.

When you’re at a party, your brain allows you to tune in on a single conversation while lowering the volume of background noise, so to speak. Now, have you ever tried to give a voice command to a device in any kind of noisy setting? If so, you can probably understand why scientists would love to get their hands on a similar voice recognition system for our gadgets.

A new study might offer a way forward for such a technology. Neuroscientists led by Christopher Holdgraf from the University of California, Berkeley, recorded the brain activity of participants listening to a previously distorted sentence after they were told what it meant. The team worked with seven epilepsy patients who had electrodes placed on the surface of their brain to track seizures.

They played a heavily distorted recording of a sentence to each participant, which almost none of them could understand at first. An unaltered recording of the same sentence was played afterwards, followed by the garbled version once more.

“After hearing the intact sentence,” the paper explains, the subjects understood the “noisy version” without any difficulty.

Brain recordings show that this moment of recognition coincided with patterns of activity in areas known to be involved in understanding sound and speech. When subjects listened to the garbled version, the team saw little activity in these areas, but hearing the clear sentence then caused their brains to light up.

This was the first time researchers have seen the way our brains alter their response when listening to an understandable versus a garbled sound. When hearing the distorted phrase again, auditory and speech-processing areas lit up and changed their pattern of activity over time, apparently tuning in to the words among the distortion.

“The brain actually changes the way it focuses on different parts of the sound,” explained the researchers.

“When patients heard the clear sentences first, the auditory cortex enhanced the speech signal.”

The team is now trying to expand on their findings and understand how the brain distinguishes between the background and the sounds we’re actually interested in hearing.

“We’re starting to look for more subtle or complex relationships between the brain activity and the sound,” Mr Holdgraf said.

“Rather than just looking at ‘up or down’, it’s looking at the details of how the brain activity changes across time, and how that activity relates to features in the sound.”

This, he added, gets closer to the mechanisms behind perception. If we understand how our brains filter out the noise, we can help people with speech and hearing impediments better hear the world around them. The team hopes to use the findings to develop a speech decoder — a brain implant to interpret people’s imagined speech — which could help those with certain neurodegenerative diseases that affect their ability to speak.

The full paper “Rapid tuning shifts in human auditory cortex enhance speech intelligibility” has been published in the journal Nature Communications.

Study looks at what makes some songs stick in your head, while others don’t

A study published by the American Psychological Association looked at the structure of the songs that get stuck in our heads — the so-called “earworms” — to find out exactly what makes us think about them again, and again, aaaand again.

Image via Pexels / Public Domain.

Have you ever caught yourself singing a song hours after you heard it? Of course you have. And you’re not alone.

But did you ever wonder why certain songs tend to stick in our minds while others don’t? The oldest song in history, for instance, is pretty catchy. There are some very specific reasons why this happens, the researchers found. Earworms are usually faster, with a fairly generic but easily remembered melody. But they also feature particular intervals that set them apart from the average pop song, such as distinctive leaps or melodic repetitions interspersed throughout the melodic line, the study found.

The team asked 3,000 people from the UK to name the songs that most frequently stick in their heads. To keep things simple, the researchers limited their questioning to popular genres such as pop, rap, and rock. They then compared the answers against a database of songs, looking for tracks that weren’t named as earworms but matched them in popularity and in how recently they had appeared in the UK music charts. The features of the earworms were then analyzed and compared with those of the non-earworm songs. Here’s the list of the stickiest songs out there — be warned, however, that you’ll be singing each and every one to yourself while you read the list.

  1. “Bad Romance” by Lady Gaga.
  2. “Can’t Get You Out Of My Head” by Kylie Minogue.
  3. “Don’t Stop Believing” by Journey.
  4. “Somebody That I Used To Know” by Gotye.
  5. “Moves Like Jagger” by Maroon 5.
  6. “California Gurls” by Katy Perry.
  7. “Bohemian Rhapsody” by Queen.
  8. “Alejandro” by Lady Gaga.
  9. “Poker Face” by Lady Gaga.

The data for the study was collected from 2010 to 2013.

“These musically sticky songs seem to have quite a fast tempo along with a common melodic shape and unusual intervals or repetitions like we can hear in the opening riff of ‘Smoke On The Water’ by Deep Purple or in the chorus of ‘Bad Romance,'” said lead author Kelly Jakubowski, PhD, of Durham University. She conducted the study while at Goldsmiths, University of London.

The team found that catchy songs tend to have more common global melodic contours — meaning they have overall melodic shapes commonly found in Western pop music. “Twinkle, Twinkle, Little Star”, for example, with its first phrase rising in pitch and its second falling, shows the most common such contour pattern. Nursery rhymes and children’s songs usually follow the same pattern, the authors note, making them easy for children to remember. The same element in the opening riff of Maroon 5’s “Moves Like Jagger” makes the song stick in your head.

Another crucial element for earworms is an unusual interval structure in the song. Unexpected leaps or more repeated notes than you’d expect to hear in an average pop song provide this structure, the study reads. The interludes of “My Sharona” by The Knack or “In the Mood” by Glenn Miller are good examples.

Catchy songs are more likely to get radio airtime and to feature at the top of the charts, neither of which is surprising. But there has previously been little evidence as to why some songs are “catchy” regardless of how popular they are or how many people have heard them.

“Our findings show that you can, to some extent, predict which songs are going to get stuck in people’s heads based on the song’s melodic content,” Jakubowski said.

“This could help aspiring song-writers or advertisers write a jingle everyone will remember for days or months afterwards.”


But let’s say you’re on the other end of the line — you’re the listener, desperately trying to get the song out of your head. What do you do? Jakubowski suggests engaging with the song, as many people report that it becomes easier to shake off after listening to it in full. You can also try distracting yourself with another song, either by listening to it or thinking about it, though that sounds a lot like replacing one thorn with another. If neither approach works, just give it time, try not to think about the song, and it will fade away on its own.

The full paper, “Dissecting an Earworm: Melodic Features and Song Popularity Predict Involuntary Musical Imagery” has been published online in the journal Psychology of Aesthetics, Creativity, and the Arts.

Scientists develop a ridiculously cheap acoustic tractor beam

Researchers working on acoustic holograms have created a new sonic tractor beam system for less than it costs to get lunch.

Image via Youtube / Nature Videos.


A little bit — and I mean that literally — of Star Trek has just passed into the real world. Scientists have developed a sonic system that can push or pull objects just like the show’s famous tractor beams, only much smaller. While the idea of using sound to manipulate objects from afar isn’t new, no one has ever done it with a system as simple and cheap as this. The full device costs a little under $10 to manufacture.

The device, created by engineers at the Max Planck Institute for Intelligent Systems in Germany, consists of just three parts — a 3D-printed plastic disk, a thin plate of brass, and a small speaker which you could probably find in any watch alarm.

“We were genuinely surprised that nobody had ever thought of this before,” team-member Kai Melde told Popular Mechanics.

Acoustic tractor beams work by transferring force to a far-away object through the vibrations of a medium, or what our ears perceive as sound. Last year, University of Bristol engineers developed the first one-sided acoustic tractor beam by slapping 64 small speakers together and tuning them to move bits of polystyrene around. It worked, and was quite awesome to watch, but it was incredibly inefficient and expensive to scale up — each speaker required constant tweaks to the sound waves it produced to keep the acoustic hologram stable.

So, the Max Planck team decided to try and simplify the device. Instead of using banks of speakers and tuning each one to create the acoustic hologram, they used a single speaker on which they fixed a patterned, 3D-printed plastic filter.

“It worked even better than we hoped,” Melde added.

The hologram they produced was so complex that they estimate 20,000 unfiltered speakers working together would be needed to achieve something similar.
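To get a feel for what such a plate encodes, here is a conceptual sketch of the generic phased-array focusing idea: the extra phase each patch of a flat plate would need so that its contribution arrives in step at a chosen focal point. The 40 kHz frequency, plate size, and focal distance are my assumptions, and this is emphatically not the Max Planck team's actual design.

```python
# What an acoustic "hologram" plate encodes, in the most generic terms:
# a fixed phase correction per patch so that every patch's wave arrives
# in step at a chosen focal point. NOT the actual published design.
import numpy as np

SPEED_OF_SOUND = 343.0       # m/s in air
FREQUENCY = 40_000.0         # Hz; assumed ultrasonic working frequency
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY   # ~8.6 mm

# A 16 x 16 grid of patches across an assumed 5 cm x 5 cm plate.
xs = np.linspace(-0.025, 0.025, 16)
X, Y = np.meshgrid(xs, xs)

focus = np.array([0.0, 0.0, 0.05])   # focal point 5 cm above the plate's centre

# Phase each patch's wave accumulates travelling to the focus...
distance = np.sqrt((X - focus[0])**2 + (Y - focus[1])**2 + focus[2]**2)
travel_phase = 2 * np.pi * distance / WAVELENGTH

# ...and the compensating phase the plate must add (modulo one full cycle)
# so all contributions share the same phase at the focus.
plate_phase = (-travel_phase) % (2 * np.pi)

print("plate phase pattern spans",
      round(float(plate_phase.min()), 2), "to",
      round(float(plate_phase.max()), 2), "radians")
```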

But there are also limits to the device. It only sends the hologram in one direction and can’t be angled, meaning it can move objects around within the pattern it was designed for but can’t, for example, push them out of it once they’re in the air. A new plastic disk has to be printed for every new pattern. And in its current form, the beam only works in two dimensions (moving an object around on a flat plane), so it can’t actually push or pull anything toward or away from it yet.

The team hopes that with further development, these issues can be overcome. There’s a lot of excitement for acoustic tractor beams, as they could revolutionize the way we think of transport, medicine, and a wide range of other fields.

“There’s a great deal of interest in using our invention to easily generate ultrasound fields with complex shapes for localised medical diagnostics and treatments,” lead researcher Peer Fischer told New Atlas. “I am sure that there are a lot of [other] areas that could be considered.”

But for now, the fact that we can create a working tractor beam for less than what a good pair of jeans will cost you is simply amazing.

The full paper, “Holograms for acoustics” has been published in the journal Nature.