Tag Archives: sound

Why sonar needs to adapt to new sound highways in the Arctic

Credit: QuietWarriors

The US Navy and MIT researchers have found that underwater sound waves in the Arctic Circle have drastically changed their mode of propagation, with important consequences for sonar communication. Melting ice and shifts in ocean temperature at the hands of climate change are the main culprits.

An acoustic corridor that helps sound travel underwater four times farther than it otherwise would

Sound, defined as vibration that propagates as an audible pressure wave, travels at different speeds depending on the medium. Perhaps counterintuitively, sound travels the slowest through the air (340 m/s), or almost five times slower than in water (around 1,480 m/s), and the fastest through a metal (5,100 m/s in iron).

The reason sound travels faster through water than through air is that water is nearly incompressible while air is not. When air is pushed by a pressure wave, the molecules first squeeze a bit closer together, absorbing some of the pressure before getting pushed along; the wave still moves them forward, just more slowly. When you push on water, that energy is transferred almost immediately to the next molecule. That’s why a hand grenade blowing up underwater can punch a big hole in the steel hull of a ship — the generated pressure wave is not only faster, it also hits harder because far less of the energy is dissipated along the way.
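A quick back-of-the-envelope check makes the point concrete. The speed of sound is roughly the square root of a medium's stiffness divided by its density, and plugging in approximate textbook values (figures I'm assuming here, not numbers from the article) reproduces the speeds quoted above:

```python
import math

# c ≈ sqrt(stiffness / density): stiffer (less compressible) media carry
# sound faster; higher density slows it down. Approximate textbook values.
media = {
    #         stiffness (Pa)      density (kg/m^3)
    "air":   (1.4 * 101_325,      1.2),     # adiabatic bulk modulus of air at ~20 °C
    "water": (2.2e9,              1000.0),  # bulk modulus of water
    "iron":  (200e9,              7870.0),  # Young's modulus (thin-rod approximation)
}

for name, (stiffness, density) in media.items():
    print(f"{name:>5}: ~{math.sqrt(stiffness / density):,.0f} m/s")
# air: ~343 m/s, water: ~1,483 m/s, iron: ~5,041 m/s -- close to the figures above
```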

But sound doesn’t travel at a constant speed in water the way it does through air. That’s because its speed depends on temperature (the warmer the water, the faster sound travels) and pressure (likewise, the higher the pressure, the faster the sound). Sound waves also tend to bend towards the water layers where the speed of sound is lower.
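To see how much a temperature shift moves the needle, here is a small sketch using one widely quoted empirical approximation for the speed of sound in seawater (Medwin's simplified formula; treat it as illustrative, not as the formula the Navy/MIT team used):

```python
def sound_speed_seawater(T, S=35.0, z=0.0):
    """Approximate speed of sound in seawater, in m/s.

    T: temperature (deg C), S: salinity (parts per thousand), z: depth (m).
    Medwin's simplified empirical formula -- a rough estimate only.
    """
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# A 1.5 deg C warming, like the one measured in the Beaufort Sea, is enough
# to shift the local sound speed by several meters per second:
print(sound_speed_seawater(T=0.0))   # ~1449 m/s
print(sound_speed_seawater(T=1.5))   # ~1456 m/s
```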

Since the 1970s, the water in the Beaufort Sea, in the Arctic, has warmed by 1.5 degrees Celsius, with predictable consequences for sound propagation. Specifically, climate change has altered the acoustics of the water, contributing to the creation of a physical phenomenon called the Beaufort Lens — a sonic superhighway stretching between Alaska and the Northwest Territories.

This lens consists of three layers. Nearest to the surface is a layer made of a mix of cold and warm water; beneath it is a layer of colder water stretching 100 to 200 meters deep; the third layer begins 200 meters down and is made of warmer water. It’s a peculiar water sandwich that makes acoustics behave in peculiar ways. For instance, through this corridor sound can travel for more than 400 kilometers uninterrupted — four times farther than it normally would.

Most recently, MIT researchers, in collaboration with the US Navy, hauled a huge furnace onto the ice of the Beaufort Sea and melted a hole big enough to fit an 850-pound, 12-foot-wide drone. The sensors on the drone measured the acoustic properties of the water to see how these have changed since the Beaufort Lens was first discovered in 2004.

The researchers found that sound waves near the ice surface refract downward towards the Lens’ middle layer. Deeper sound waves similarly refract upward to the same middle layer. Once inside this cold layer, the waves become trapped like a bowling ball stuck in a gutter and can thus travel long distances. Another foray with a more precise and complex measurement system is already being planned for 2018.

via Nautilus

New silicon chip technology amplifies light using sound waves

A whole new world of signal processing may be just around the corner. Yale scientists have developed a method of boosting the intensity of light waves on a silicon microchip using only sound.

A Yale team has found a way to amplify the intensity of light waves on a silicon microchip using only sound.
Image credit: Yale University

The paper, published in the journal Nature Photonics, describes a novel waveguide system that has the ability to control the interaction between sound and light waves. The system could form the foundation of a host of powerful new signal-processing technologies starting from the humble (and widely-used) silicon chip.

And this is one of the most exciting selling points of this technology: silicon chips are ubiquitous in today’s technology.

“Silicon is the basis for practically all microchip technologies,” said Rakich, assistant professor of applied physics and physics at Yale and lead author of the paper. “The ability to combine both light and sound in silicon permits us to control and process information in new ways that weren’t otherwise possible.”

“[The end result] is like giving a UPS driver an amphibious vehicle — you can find a much more efficient route for delivery when traveling by land or water.”

Numerous groups around the world have sought to integrate such a technology into a silicon chip, but with little success: previous attempts just weren’t efficient enough for practical applications. The Yale group’s breakthrough came from a new design that prevents light and sound from escaping the circuits.

“Figuring out how to shape this interaction without losing amplification was the real challenge,” said Eric Kittlaus, a graduate student in Rakich’s lab and the study’s first author. “With precise control over the light-sound interaction, we will be able to create devices with immediate practical uses, including new types of lasers.”

The system is part of a larger body of research the Rakich lab has conducted over the past five years, focused on designing new microchip technologies for light. Its commercial applications range over areas including communications and signal processing.

“We’re glad to help advance these new technologies, and are very excited to see what the future holds,” said Heedeuk Shin, a former member of the Rakich lab, now a professor at the Pohang University of Science and Technology in Korea and one of the study’s co-authors.

Why does your voice sound so different when recorded

Your voice really does sound different to other people than it does to you when you speak — it’s a physical fact, not just a matter of subjective perception, like being so used to your own voice that a recording of it sounds distorted. The reason you can never hear your own, “real” voice without assistance (i.e. recording yourself) has to do with how sound reaches your inner ear. Your inner ear picks up acoustic vibrations, like the chirping of birds, the rattle of the city or people’s voices, and translates these vibrations into electrical signals that the brain can process as “sound”. The inner ear, however, also picks up vibrations conducted by the bones in your neck and head. This combination of internal and external vibrations produces a uniquely characteristic voice which you won’t ever be able to hear anywhere else!

Image: FACTS WT

A voice literally inside your head. More than one calls for a psychiatrist’s appointment.

A person’s voice is basically sound, which in turn is vibration that propagates through air. Humans are able to speak thanks to a nifty biological gadget called the “voice box”: a cartilage casing in the throat that is often referred to as the “Adam’s Apple” in men or the “Eve’s Apple” in women. This is where the vocal folds reside, twin infoldings of tissue roughly the size of our eyelids.

Diaphragm action pushes air from the lungs through the vocal folds, producing a periodic train of air pulses. This pulse train is shaped by the resonances of the vocal tract, and since each “voice box” is shaped uniquely, so is each voice. Men’s voices are generally lower because their vocal folds are longer. The basic resonances, called vocal formants, can be changed by the action of the articulators to produce distinguishable voice sounds, like the vowel sounds.
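To get a feel for where those formants sit, the vocal tract is often approximated in textbooks as a tube closed at the vocal folds and open at the lips, whose resonances fall at odd multiples of c/4L. This is a standard classroom simplification (and the 17 cm tract length is just a typical adult figure), not a model from any study cited here:

```python
# Resonances of a closed-open tube: a crude textbook stand-in for the vocal tract.
# F_n = (2n - 1) * c / (4 * L)
c = 343.0   # speed of sound in air, m/s (room temperature)
L = 0.17    # typical adult vocal tract length, m (illustrative)

for n in range(1, 4):
    print(f"F{n} ≈ {(2 * n - 1) * c / (4 * L):.0f} Hz")
# F1 ≈ 504 Hz, F2 ≈ 1513 Hz, F3 ≈ 2522 Hz -- roughly the formants of a neutral
# vowel; a longer tract shifts every resonance downward.
```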

This sound energy spreads through the air, and if a human with a pair of healthy ears is nearby, it will reach the cochlea (the inner ear) by way of the external ear. Pressure waves moving through the fluid of the inner ear deflect tiny structures called hair cells. As these hair cells move, electrical signals from the cochlea are sent up the auditory nerve to the brain, where they are converted into the information we commonly refer to as sound. (Hair cell loss comes from noise exposure, aging, toxins, infections, and certain antibiotics and anti-cancer drugs.)

That’s how humans are able to speak (have a voice) and hear (listen to the voice).

At the same time, though, those same vibrations that resonate inside the voice box are conducted by the bones in your body and reach the cochlea directly through the tissues of the head. If you cover your ears, you’ll still be able to hear your own voice, simply because your head itself is ringing. Tissue transmits low frequencies better than high ones, which is why your voice sounds lower to you than it does to other people.

Ultimately, the voice you hear when you speak is the combination of sound carried along both paths. Some people are more sensitive to vibrations conducted through the bones than others; in extreme cases, people with certain inner-ear abnormalities find the sound of their own breathing overwhelming, and may even hear their eyeballs moving in their sockets.

Almost total silence: acoustic absorber cancels 99.7% of sound

We all need a bit of quiet in our lives sometimes, but have you ever taken a minute to ponder what ‘total silence’ might feel like? It’s scary. Every bodily function, otherwise unnoticed, now sounds like a freight train. Feels like it, anyway. You can even hear your own heartbeat. Though not exactly ‘perfect silence’, a team of researchers at the Hong Kong University of Science and Technology has come mighty close: they report 99.7% absorption of low-frequency pressure waves (sound) using subwavelength structures or materials.

Their sound absorber is a dissipative system composed of two resonators, both tuned to the same frequency. Their impedance also matches that of the environment, or air in our case, which keeps reflections to a minimum. Overall, the resulting system basically destroys sound. “Owing to its subwavelength dimensions, acoustic waves incident from any direction will be completely dissipated,” the researchers write in their paper.

Motherboard explains: “The set-up then was to use a thin absorbing material in conjunction with a hard reflecting layer, with a super-thin pad of air in between. The idea was that waves would leak through the weakly absorbing material, bounce off the reflective surface, and then collide with the incoming waves in such a way to create interference and neutralize the sound.”

How this works, very briefly: when sound hits the first material it will naturally want to scatter, only it will do so – this time – at the natural frequency of the resonator. The second resonator is tuned to just the right frequency to create a destructive interference pattern with those scattered waves, so they cancel out and almost everything is absorbed (99.7%). That’s about 50 dB of noise reduction, which is a heck of a lot. Can you think of any worthwhile practical applications for this?
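As a quick sanity check on those numbers (my own arithmetic, assuming the 99.7% figure refers to the wave's pressure amplitude, which is what lines up with the quoted 50 dB):

```python
import math

absorbed = 0.997
residual = 1.0 - absorbed          # 0.3% of the incident wave gets through

# If 99.7% of the pressure amplitude is absorbed:
print(20 * math.log10(residual))   # ≈ -50 dB, matching the figure above

# If 99.7% of the energy (intensity) were absorbed instead:
print(10 * math.log10(residual))   # ≈ -25 dB
```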

Scientists find the sound of stars

A chance discovery has provided experimental evidence that stars may generate sound. While examining the interaction of an ultra-intense laser with a plasma target, John Pasley from the University of York found that the interfering plasma generates a series of pressure pulses – in other words, sound.

Solar flare. Image via Wikipedia.

Pasley and his team were studying plasma interactions on the surface of stars by applying an ultra-intense laser beam to a plasma target when they observed something rather unexpected. In the trillionth of a second after the laser strikes, plasma rapidly flows from areas of high density to areas of low density, piling up at the interface between the high- and low-density regions and generating pressure waves.

While this experimental setup seems rather unique, there is one place in nature where something similar happens: the surface of stars. Pasley said:

“One of the few locations in nature where we believe this effect would occur is at the surface of stars. When they are accumulating new material stars could generate sound in a very similar manner to that which we observed in the laboratory — so the stars might be singing — but, since sound cannot propagate through the vacuum of space, no one can hear them.”

But even if sound could propagate through the vacuum, you still couldn’t hear it. The sounds come out at very high frequencies, nearly a trillion hertz – almost the highest possible frequency for this type of material. To say that humans, as well as bats or dolphins, can’t hear it is almost an understatement – the frequency is six million times higher than what can be heard by any mammal.
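The arithmetic checks out, for what it's worth:

```python
f_plasma = 1e12           # ~1 THz, the order of magnitude quoted above
ratio = 6e6               # "six million times higher"
print(f_plasma / ratio)   # ≈ 167,000 Hz -- around the upper limit usually
                          # cited for the keenest mammalian ears (porpoises, bats)
```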

Dr Alex Robinson from the Plasma Physics Group at STFC’s Central Laser Facility took things even further, developing a numerical model to generate acoustic waves for the experiment. He said:

“It was initially hard to determine the origin of the acoustic signals, but our model produced results that compared favorably with the wavelength shifts observed in the experiment. This showed that we had discovered a new way of generating sound from fluid flows. Similar situations could occur in plasma flowing around stars”

Journal Reference: Amitava Adak, A. P. L. Robinson, Prashant Kumar Singh, Gourab Chatterjee, Amit D. Lad, John Pasley, G. Ravindra Kumar. Terahertz Acoustics in Hot Dense Laser Plasmas. Physical Review Letters, 2015; 114 (11) DOI:10.1103/PhysRevLett.114.115001

Making walls talk – new technique extracts audio from video

Researchers have demonstrated a very simple yet effective optical technique that can transform video inputs, such as the motion of a piece of paper, into audio. To achieve this, they exploited a simple principle: sound waves cause the objects in their path to vibrate. If you can measure those vibrations and work backwards, you can effectively decode the sound source and play it back. In effect, the technique could be used to extract audio information from a silent video, like remote surveillance footage, granting the walls ears. Of course, the video needs to be shot at high speed, since the technique can’t work without many frames per second at its disposal. The demonstrations are also far from conclusive, but considering it’s a first version, I found it rather impressive.

Turning video into sound

Graffiti artists can make walls turn to life, and speak to the hearts of people through art. Researchers have now given new meaning to the phrase “walls can speak”. Image: Theater Fever

Sound is nothing but vibration. When we hear, what we’re actually sensing is air displaced in a signature manner by a mechanical pressure wave that eventually hits special receptor cells in the inner ear. This information is then transformed by nerves and relayed to the brain, where it’s decoded. Because it’s a pressure wave, the mechanical vibration we call sound causes objects in its path to vibrate as well. This vibration is usually so tiny that we hardly notice it, but even with a home stereo you might see the effect sound waves have on nearby objects if those objects are light enough. These small-amplitude vibrations can be detected and analyzed algorithmically, and the audio reconstructed from those calculations.

The researchers from the Catholic University of America used a thin sheet of paper for their tests. A grid of points was laid over the image of the paper so that the vibrations captured by a high-speed camera could be mapped. The Gauss-Newton algorithm and a few other processing steps were applied to the image data, and a simple model then reconstructed the original audio carried by the sound waves.
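The paper's pipeline (Gauss-Newton fitting over a grid of tracked points) is more involved than this, but the core idea can be sketched in a few lines of Python: watch how the brightness of a small patch of the paper wobbles from frame to frame and treat that wobble, sampled at the camera's frame rate, as the recovered audio. Everything below (the array shapes, the plain mean-intensity tracking) is my own simplification, not the authors' code:

```python
import numpy as np

def recover_audio(frames, roi):
    """Rough sketch: pull a 1-D, audio-like signal out of a high-speed video
    by tracking mean intensity changes inside a patch on the vibrating object.

    frames: ndarray of shape (n_frames, height, width), grayscale video
    roi:    (y0, y1, x0, x1) bounds of the patch covering the paper
    """
    y0, y1, x0, x1 = roi
    patch_means = frames[:, y0:y1, x0:x1].mean(axis=(1, 2))
    signal = patch_means - patch_means.mean()        # drop the DC offset
    return signal / (np.abs(signal).max() + 1e-12)   # normalize to [-1, 1]

# Note: a camera running at 2000 fps can only capture frequencies up to
# ~1000 Hz (the Nyquist limit) -- which is why the technique demands
# genuinely high frame rates.
```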

Credit: Optical Engineering

If you have a video of two people talking, you can tell what their conversation is about by lipreading. If you can’t see the people’s faces, though, that won’t work. This technique might thus be useful for discerning conversations by studying the vibrations of objects around the speakers.

“One of the intriguing aspects of the paper is the ability to recover spoken words from a video of objects in the room,” said journal Associate Editor Reiner Eschbach, a Research Fellow at Xerox Corp. “The paper shows that the sound creates minute vibrations in objects and that these vibrations ― given the right equipment ― can be picked up from a video signal. This is an interesting foray into a new application space and will, in my view, trigger interesting research in the field.”

Of course, the technique needs to be further refined before anyone can read anything from vibrating bricks. The paper was published in the journal Optical Engineering.

Thin metasurface absorbs sound near perfectly, while producing electricity at the same time

Image: Nature

Researchers at the Hong Kong University of Science and Technology have created a thin metamaterial surface that is capable of absorbing nearly all incident acoustic energy (sound). Unlike conventional sound-absorbing material, which is sometimes only effective when meters thick, the metasurface is deeply “subwavelength” and therefore much thinner. There’s a catch, though: near-perfect absorption has only been demonstrated when the system is tuned to a particular frequency.

Silence: an almost perfect sound absorber

Sound absorption materials are usually manufactured with the wavelength of the frequency to be absorbed in mind, which for human hearing ranges from about 17 meters at the low end to 17 millimeters at the high end. This is why in a studio the mid and high frequencies are easily damped, but you can still hear the low frequencies outside.
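Those wavelength figures follow straight from the relation wavelength = speed / frequency:

```python
c = 343.0   # speed of sound in air, m/s
for f_hz in (20, 1_000, 20_000):    # low, mid and high end of human hearing
    print(f"{f_hz:>6} Hz -> wavelength ≈ {c / f_hz:.3f} m")
# 20 Hz -> ~17 m, 20,000 Hz -> ~0.017 m (17 mm): absorbing low frequencies
# conventionally requires impractically thick material.
```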

[SEE ALSO] Intelligent shock absorber dampens vibrations and generates power

The new metamaterial, called a “decorated membrane resonator” (DMR), works differently, though (doi:10.1038/nmat3994). It’s made of a tiny drum-like membrane embedded in and coupled to a solid support, with a platelet at its center. For their demonstration, the researchers used a 9 cm membrane, only 0.2 mm thick, holding a platelet 2 cm in diameter. For the harmonics of the metasurface to correspond to the sound’s wavelengths, the membrane needs to have a very low elastic modulus. A reflecting backing then sandwiches a sealed gas layer behind the membrane.

The metasurface exhibits resonance at audible wavelengths such that there is near total absorption of sound, and dissipation of the energy along the lossy membrane.

The system was shown to have “impedance matching” to the airborne sound waves, which makes the metasurface an excellent energy absorber that reflects almost nothing at the target wavelength. This impedance matching is provided by the sandwiched gas layer together with the reflective backing surface.

[RELATED] How sound frequencies affects taste

Absorbing sound and generating power at the same time

What’s maybe most striking about the setup is that the vibrations induced in the platelet-membrane system can be coupled to energy generation, with a sound-to-electrical conversion efficiency of 23%. A whole DMR array tuned for various frequencies could then be used to power low-voltage devices, besides dampening sound.
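To put that 23% figure in perspective, here is an illustrative estimate (the 100 dB sound level and the resulting numbers are my own assumptions, not the paper's): the acoustic power falling on a 9 cm membrane at a fairly loud 100 dB, converted to electricity at 23% efficiency, amounts to mere microwatts, which is why an array rather than a single DMR would be needed to power anything.

```python
import math

spl_db = 100.0                           # sound pressure level, dB (quite loud)
intensity = 1e-12 * 10 ** (spl_db / 10)  # W/m^2, relative to the 1e-12 W/m^2 reference

area = math.pi * (0.09 / 2) ** 2         # 9 cm diameter membrane, ~6.4e-3 m^2
efficiency = 0.23                        # sound-to-electrical conversion quoted above

print(f"~{intensity * area * efficiency * 1e6:.0f} microwatts")   # ≈ 15 µW
```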

Of course, there’s a catch. The system is tuned to work only for a particular frequency. More than one layer would be needed to catch multiple frequencies or a single layer must contain a number of differently-sized DMRs. Sure, the DMR will prove very useful for applications where a targeted and well-known frequency needs to be absorbed, but the system doesn’t sound that appealing for studios or even highway walls that dampen the noise around residences – it would be too expensive. To set the resonant frequency (the frequency we’re looking to dampen), all you need to do is vary the thickness of the gas layer.

How loud music damages your hearing

Photo: rantlifestyle.com

Listening to loud music has been shown time and time again to affect hearing in a negative way. The damage becomes more pronounced with age, leading to difficulties in understanding speech. A new analytic study by researchers at the University of Leicester examined the cellular mechanisms that underlie hearing loss and tinnitus triggered by exposure to loud sound.

Music to your ears or …

Dr Martine Hamann, Lecturer in Neurosciences at the University of Leicester, said: “People who suffer from hearing loss have difficulties in understanding speech, particularly when the environment is noisy and when other people are talking nearby.

“Understanding speech relies on fast transmission of auditory signals. Therefore it is important to understand how the speed of signal transmission gets decreased during hearing loss. Understanding these underlying phenomena means that it could be possible to find medicines to improve auditory perception, specifically in noisy backgrounds.”

Tens of millions of people all over the world are affected by hearing loss, with grave social consequences. Often enough, these people become isolated from friends and family because of their impaired ability to understand speech. Everybody has an annoying great uncle who incessantly asks ‘how’s school’ or ‘when are you getting married’, before exclaiming ‘what, what, what?!’ Hearing loss isn’t confined to old age anymore, though, not since the advent of high-power speakers and headphones. It’s amazing to me how carelessly some people stick their heads right up against a 4,000 W speaker for hours. You’ve seen them at festivals.

In a survey of 2,711 festival-goers in 2008, 84% said they experienced dullness of hearing or ringing in the ears after listening to loud music.

“These are the first signs of hearing damage,” says Donna Tipping from Action on Hearing Loss charity.

“The next morning or a couple of days later, your hearing may gradually return to normal but over time, with continued exposure, there can be permanent damage.”

Donna says the risk of damage to hearing is based on how loud the music is and how long you listen to it for.

“If you can’t talk to someone two metres away without shouting, the noise level could be damaging,” she says.

Previous research showed that following exposure to loud sounds, the myelin coat that surrounds the auditory nerve becomes thinner. The audio signal travels in jumps from one myelin domain to the next, across gaps called Nodes of Ranvier; when exposed to loud sound, these nodes become elongated. It wasn’t clear, however, whether the hearing loss was due to the actual change in the physical properties of the myelin or to the redistribution of channels occurring subsequent to those changes.

“This work is a theoretical work whereby we tested the hypothesis that myelin was the prime reason for the decreased signal transmission. We simulated how physical changes to the myelin and/or redistribution of channels influenced the signal transmission along the auditory nerve. We found that the redistribution of channels had only small effect on the conduction velocity whereas physical changes to myelin were primarily responsible for the effects,” Dr. Hamann said.

The research adds further strength to the link between myelin sheath deficits and hearing loss. This is the first time a simulation has been used to assess the physical changes to the myelin coat based on previous morphological data. Armed with these findings, published in the journal Frontiers in Neuroanatomy, scientists have come to a better understanding not only of how auditory perception can become dull, but also of what makes for good hearing. Translated into practice, the research suggests targeting these deficits; namely, promoting myelin repair after acoustic trauma or during age-related loss.

[RELATED] Hearing restored in gerbils following stem cells treatment

A personal note: while summer’s almost gone, there are still some festivals where you might be exposed to loud music. Also, there are always loud clubs, whether you like it or not, that are open no matter the season. The best protip I can offer is to wear earplugs. I can’t stress this enough. These simple tools, highly effective and cheap, can protect you against the excess decibels monster speakers throw at you, all while preserving sound quality.

Why people love it when the bass drops

Photo: dittomusic.com

Rave parties go crazy when the bass drops, no doubt about it, but what makes people click so well with low frequencies? Canadian scientists at the McMaster Institute for Music and the Mind investigated how our brains react to low-freq pitches and found our affinity has to do with how humans detect rhythm. Basically, the bass is easier to follow, so more enjoyable.

“There is a physiological basis for why we create music the way we do,” study co-author Dr. Laurel Trainor, a neuroscientist and director of the institute, said. “Virtually all people will respond more to the beat when it is carried by lower-pitched instruments.”

Trainor and colleagues hooked 35 people up to an electroencephalogram (EEG) to monitor their brain activity and played them a sequence of low- and high-pitched piano notes at the same time. Sometimes the notes were played 50 milliseconds off the beat, and most people would recognize the offbeat in the lower tone rather than in the high pitch or in both.

Then, the researchers prepared an experiment that investigated the participants’ unconscious reaction to rhythm. The volunteers were asked to tap their fingers to the beat and when the timing change occurred, the researchers noticed that the people were more likely to modify their tapping to fall in sync with the low-pitched tone.

Finally, the researchers played the same sequences through a computer model of the human ear. The analysis showed that the model recognized the offbeat in the low-pitched tone more often than in the higher tone. This suggests that the ear itself, not some brain mechanism, is responsible for the effect.

This study “provides a very plausible hypothesis for why bass parts play such a crucial role in rhythm perception,” Dr. Tecumseh Fitch, a University of Vienna cognitive scientist who did not participate in this research, told Nature.

The findings were published in the journal PNAS.

How sound frequencies affect taste – will music replace sugar in your coffee?

Listening to a high-pitched tune will enhance the sweetness of food, while a low hum will make your taste buds signal bitter. Obviously, listening to low frequencies won’t turn your chocolate bar into a pickled vegetable, but research in this area suggests there’s genuine synesthetic behaviour at play. Some restaurant owners are already exploiting this knowledge and play ambient music suited to the kinds of taste they’d like to influence.

Scientists delving into this kind of research call it taste modulation, the ultimate frontier in food presentation. While most restaurants focus on creating an atmosphere out of decorations, a visual identity, lighting and genuine food culture, few really go over the top and create a synergy with sound. A double CD looping in the background may be a thing of the past.

The Crossmodal Laboratory at Oxford University fed a group of volunteers some cinder toffee while playing them high- and low-frequency sounds, and asked them to rate the taste on a scale running from sweet to bitter. Just as I experienced in my kitchen, high notes enhanced sweetness and low brought out the bitter. But a laboratory setting is far removed from real life, so Charles Spence, who runs the lab, teamed up with food artist Caroline Hobkinson to test whether the results would be replicated out in the field.

Playing high pitched tunes may even sweeten black coffee

For one month, London restaurant House of Wolf served a “sonic cake pop” of chocolate-coated bittersweet toffee, which came, intriguingly, with a telephone number. On the other end of the line was an operator instructing the diner to dial one for sweet and two for bitter, and they were played the high and low-pitched sounds accordingly. Hobkinson says: “It makes me laugh because it works every time, and people say, ‘Oh! That’s so weird!'”

“It works with coffee, too,” she adds, and she foresees exciting possibilities such as sound replacing sugar in your morning espresso.

The findings were confirmed by another study which matched the savoury taste, umami, with low pitches. Also, a study from 2011 found that during flights the loud background noise suppresses saltiness, sweetness and overall enjoyment of food.

“Have you ever noticed how many people ask for a bloody mary or tomato juice from the drinks trolley on aeroplanes? The air stewards have, and when you ask the people who order, they tell you that they rarely order such a drink at any other time,” says Charles Spence, who led the study at Oxford. He reckons this is because umami may be immune to noise suppression.

Would you like to experiment at home with this? Check this Condiment Junkie app, and play either the sweet or bitter sounds. Check yourself as objectively as you can for taste differences.

[source]

3D acoustic cloaking device makes objects undetectable with sound

Using relatively simple perforated sheets of plastic and an extensive amount of computation, Duke University researchers have created the world’s first sound invisibility cloak. The cloak diverts sound waves in a way that conceals both itself and anything hidden beneath it.

The device is fully 3D and works the same way no matter what direction the sound is coming from or where the observer is located. It could be used to hide objects from sound-based detectors, with future applications in sonar avoidance and architectural acoustics, for example.

“The particular trick we’re performing is hiding an object from sound waves,” said Duke University professor of electrical and computer engineering Steven Cummer. “By placing this cloak around an object, the sound waves behave like there is nothing more than a flat surface in their path.”

To achieve this, Cummer and his colleagues used metamaterials – combinations of ordinary materials arranged in repeating patterns that give rise to remarkable, unnatural properties. The device itself looks like nothing more than several perforated plastic plates, each with a repeating pattern of holes poked through it, stacked on top of one another to form a pyramid.

To function, the cloak has to divert sound waves in such a way that it appears the pyramid isn’t there at all – something that requires a great deal of careful design and computation. Because the cloaked sound never reaches the surface beneath the pyramid, it travels a shorter distance, and so its speed must be slowed to compensate.
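The compensation is essentially a timing trick. A toy calculation (with made-up geometry, purely for scale) shows how small the delays involved are:

```python
c = 343.0   # speed of sound in air, m/s
h = 0.10    # hypothetical height above the floor at which the wave reflects, m

# For sound arriving straight down, bouncing off the cloak's surface instead of
# the floor shortens the round trip by about 2*h, so the cloak must add roughly:
delay = 2 * h / c
print(f"~{delay * 1e3:.2f} ms of extra delay to mimic a flat floor")   # ≈ 0.58 ms
```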

So, after they developed the design and built the cloak, it was testing time. The researchers covered a small sphere with it and pinged it with sound waves from various distances. Using special microphones, they then mapped how the waves responded and produced videos of them traveling through the air – and everything checked out.

“We conducted our tests in the air, but sound waves behave similarly underwater, so one obvious potential use is sonar avoidance,” said Cummer. “But there’s also the design of auditoriums or concert halls — any space where you need to control the acoustics. If you had to put a beam somewhere for structural reasons that was going to mess up the sound, perhaps you could fix the acoustics by cloaking it.”

The study was published online in Nature Materials.

 

The world’s most annoying sound: whining

Captioned: Satan

Ghastly nails on a blackboard or deafening sirens don’t come anywhere close to an infant’s whining as far as annoying sounds are concerned, according to a recent study from SUNY New Paltz.

In a fairly simple approach, researchers asked study participants to solve various math problems while a background noise was playing. Six sounds were chosen – namely a screeching saw on wood, machine noise, a baby crying, the always-annoying adult mimicking baby talk and, of course, whining – each played for a whole minute. The noises that led to the most errors in the computations received the highest annoyance scores.

Interestingly enough, the whining sound was voiced by an adult actor.

Subjects chosen for the study were both male and female, parents and non-parents. The results, published in the Journal of Social, Evolutionary and Cultural Psychology, led the researchers to conclude that “you are basically doing less work and doing it worse” when listening to whines.

Speaking to MSNBC, Rosemarie Sokol Chang, a psychologist involved in the study, said: “It’s telling you to tune in. Nobody wants to sit around and listen to a fire engine siren either, but if you hear the siren go off, it gets your attention. It has to be annoying like that, and it’s the same with the whine.”

Ignoring the noise won’t help one bit, either. Researchers say that only makes it worse.

What’s the most annoying sound in the world for you guys?

via Wired