
Neanderthals likely spoke and understood language like humans

With each new study, scientists’ perception of Neanderthals has shifted further away from that of mindless brutes and toward highly complex hominids — and a new study cements the notion that our extinct cousins were very human-like. One central question in human evolution is whether spoken language was employed by other species in the Homo lineage. A study published today confirms that Neanderthals were indeed linguistically capable.

“This is one of the most important studies I have been involved in during my career”, says Rolf Quam, an anthropology professor at Binghamton University and co-author of the new study. “The results are solid and clearly show the Neandertals had the capacity to perceive and produce human speech. This is one of the very few current, ongoing research lines relying on fossil evidence to study the evolution of language, a notoriously tricky subject in anthropology.”

The Atapuerca Mountains in northern Spain may not look like much. They feature gentle slopes and a rather dry landscape, interrupted from time to time by forests and the occasional river. But these mountains hold a karstic environment that is key to understanding how humans came to be, and what life was like for our early ancestors.

The most important site is a cave called Sima de los Huesos (Pit of Bones). Anthropologists have recovered over 5,500 human remains, at least 350,000 years old, from this site. The remains belong to 28 individuals of Homo heidelbergensis, an archaic hominin that lived from approximately 700,000 to 300,000 years ago. Scientists believe that H. heidelbergensis is the ancestor of Homo neanderthalensis.

For their study, Quam and colleagues at the Universidad Complutense de Madrid performed high-resolution CT scans of Atapuerca fossils to produce virtual 3D models of the ear structure. The scientists generated models for Homo sapiens and Neanderthals, as well as for the ancestors of the Neanderthals.

The ear models were then fed into software that estimates hearing abilities up to 5 kHz based on the structure of the ear, a range that covers most of the frequencies of modern human speech sounds.

Compared with the Atapuerca fossils, Neanderthals had slightly better hearing in the 4-5 kHz range, closely resembling modern humans.

The study also assessed the frequency range of maximum sensitivity, also known as the occupied bandwidth, for each species. The wider this bandwidth, the easier it is to distinguish complex sounds and to deliver a clear message in the shortest amount of time.
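To make that metric concrete, here is a minimal sketch of how an occupied bandwidth could be read off a modeled sensitivity curve. The bell-shaped curve and the 50%-of-peak cutoff below are invented for illustration; the study derived its curves from the 3D ear anatomy, and its exact bandwidth definition may differ.

```python
import numpy as np

# Toy sensitivity curve up to 5 kHz -- a placeholder bell shape standing in
# for the study's anatomy-derived sound-power transfer curve.
freqs = np.linspace(0.1, 5.0, 500)              # kHz
sens = np.exp(-((freqs - 3.0) / 1.5) ** 2)      # relative sensitivity

def occupied_bandwidth(freqs, sens, rel_cutoff=0.5):
    """Band where sensitivity stays above `rel_cutoff` of its peak --
    one simple reading of 'frequency range of maximum sensitivity'."""
    band = freqs[sens >= rel_cutoff * sens.max()]
    return band.min(), band.max()

f_lo, f_hi = occupied_bandwidth(freqs, sens)
print(f"occupied bandwidth: {f_lo:.2f}-{f_hi:.2f} kHz ({f_hi - f_lo:.2f} kHz wide)")
```

A wider band under this kind of measure means more usable frequencies and, per the argument above, more room for packing distinct speech sounds into a signal.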

Once again, compared to their Atapuerca ancestors, Neanderthals showed a wider bandwidth, closely resembling that of modern humans.

“This really is the key,” says Mercedes Conde-Valverde, professor at the Universidad de Alcalá in Spain and lead author of the study. “The presence of similar hearing abilities, particularly the bandwidth, demonstrates that the Neandertals possessed a communication system that was as complex and efficient as modern human speech.”

This raises the question: what did a Neanderthal language sound like? According to the researchers, one of the most intriguing findings of the study is that Neanderthal speech likely included an increased use of consonants.

“Most previous studies of Neandertal speech capacities focused on their ability to produce the main vowels in English spoken language. However, we feel this emphasis is misplaced, since the use of consonants is a way to include more information in the vocal signal and it also separates human speech and language from the communication patterns in nearly all other primates. The fact that our study picked up on this is a really interesting aspect of the research and is a novel suggestion regarding the linguistic capacities in our fossil ancestors,” Quam said.

These documented improvements in the auditory capacity of Neandertals mirror the increasing complexity of their stone tool technology, their domestication of fire, and possible symbolic practices. We know, for instance, that Neanderthals also painted, fashioned jewelry, and employed abstract thinking, in which symbols or images are used to represent objects, persons, and events that are not present.

As such, the study suggests that increasingly complex behaviors coevolve with increasing efficiency in oral communication. More insights may be gleaned once the researchers extend this investigation to other species of Homo.

“These results are particularly gratifying,” said Ignacio Martinez from Universidad de Alcalá in Spain. “We believe, after more than a century of research into this question, that we have provided a conclusive answer to the question of Neandertal speech capacities.”

How a deaf Beethoven discovered bone conduction by attaching a rod to his piano and clenching it in his teeth

On May 7, 1824, the legendary 9th Symphony premiered in the Theater am Kärntnertor in Vienna. When the masterful performance ended, 54-year-old Beethoven, who was stone deaf at this time, was still conducting along with the “official conductor” from the front row when he had to be turned around to face the thunder of an applauding audience.

Whilst Beethoven’s career was the stuff of genius, his personal life was marked by a struggle against deafness and constant suffering caused by an armada of afflictions. The German composer first noticed that his hearing was fading around the age of 28. By this time he was already an established figure in the Vienna musical scene and regarded as a rising star rivaling Wolfgang Amadeus Mozart, which only made everything worse.

One can only imagine how cruel a fate that must have felt like to such a musical mind. He, the great Beethoven, of all people, was going deaf! It’s as if Picasso lost his eyesight or Rodin had his arms cut off.

But Beethoven was a strong-willed spirit who didn’t give up easily. One of his celebrated phrases is: “I will seize fate by the throat; it shall never make me succumb.”

He was true to his word. Despite his rapidly deteriorating hearing, from 1803 to 1812 Beethoven composed an opera, six symphonies, four solo concerti, five string quartets, six string sonatas, seven piano sonatas, five sets of piano variations, four overtures, four trios, two sextets, and 72 songs.

Beethoven… the inventor?

Despite living in pain, Beethoven did not give up. However, he had a helping hand. In his quest to continue composing and playing music, Beethoven stumbled upon a physical phenomenon that is central to hearing: bone conduction.

At the time, scientists understood very little about how human hearing works. But even as his ears failed him, Beethoven could still hear himself play by placing one end of a wooden stick on his piano and clenching the other end in his teeth. When notes were struck, the vibrations from the piano were transferred to his jaw, and from there directly to his inner ear. Miraculously, he could hear again! Bone conduction was born.

Sound is nothing more than acoustic vibration of the air. Air molecules vibrating at certain frequencies cause the eardrum to vibrate, and these vibrations are transformed into a different kind of vibration that the cochlea, part of the inner ear, can interpret. The cochlea then transmits the information about the sound to the brain via the auditory nerve, where it is processed as hearing.

But there’s a second way that humans can hear besides air conduction. If the inner ear is directly exposed to acoustic vibration through the bones, a person can still hear even though the eardrum is bypassed. This is one of the reasons you can still hear your own voice if you plug your ears. It’s also how whales hear while diving deep in the ocean, and how male elephants can listen for the calls of females stomping several kilometers away.

Beethoven’s clever bone-conducting solution is used in some hearing devices today. A bone-anchored hearing aid, or BAHA, converts the sound picked up by its microphone into vibrations that are transmitted through the bones of the skull to the cochlea of the inner ear. Essentially, the bone-conducting device fills the role of a defective eardrum.

Bone conduction hearing devices are also used by people with perfect hearing in certain applications. For instance, military headsets allow soldiers to hear orders relayed through a bone conduction device, sometimes integrated into the helmet, despite the background noise of enemy gunfire. Special bone conduction hearing devices also allow divers to both hear and talk underwater.

Beethoven’s final struggles with deafness

The way Beethoven dealt with his deafness is one of the great stories of humanity. The cause of his deafness, though, remains something of a mystery.

His diagnosis is made all the more challenging since he suffered from a plethora of other illnesses. The list includes chronic abdominal pain and diarrhea that might have been due to an inflammatory bowel disorder, depression, alcohol abuse, respiratory problems, joint pain, eye inflammation, and cirrhosis of the liver. 

This last item, a consequence of his prodigious drinking, may have ultimately killed Beethoven, who died in 1827. An autopsy showed signs of severe cirrhosis, but also dilatation of the auditory and other related nerves in the ear.

As was common custom at the time, a young musician by the name of Ferdinand Hiller snipped off a lock of hair from Beethoven’s head as a keepsake. The lock stayed in the Hiller family for nearly a century until it somehow made its way into the hands of a Danish physician called Kay Fremming. Fremming is famous for saving thousands of Jews during the Nazi occupation of Denmark by helping them escape to Sweden, whose border lay close to the tiny fishing village he called home. Some speculate that one of the Jewish refugees gave Dr. Fremming the lock of Beethoven’s hair in gratitude for saving their lives.

What we know for sure is that the lock of hair, consisting of 582 strands, was passed down to Fremming’s daughter, who put it up for auction in 1994. It was purchased by Alfredo Guevara, an Arizona urologist, for a modest $7,000. Guevara kept a few strands and donated the rest to the Ira F. Brilliant Center for Beethoven Studies at San Jose State University in California.

At this point, scientists at the university thought of examining DNA from the great composer’s hair in order to look for clues as to how Beethoven became deaf.

The hair was put through a barrage of DNA, chemical, forensic, and toxicology tests. What immediately stood out was an abnormally high level of lead. In Beethoven’s time, people weren’t aware of lead poisoning, and it was quite common to eat from plates and drink from goblets made of the toxic metal. Even the wine of that era, Beethoven’s favorite drink, often contained lead as a sweetener. This severe lead poisoning may have contributed to the composer’s hearing loss.

For a long time, Beethoven tried to conceal his deteriorating hearing, fearing it might ruin his career if word got out. But he couldn’t keep it up for long. It was common for composers to also conduct and even perform their own music, and Beethoven’s condition eventually became noticeable. After watching one of Beethoven’s piano rehearsals in 1814, fellow composer Louis Spohr said “…the music was unintelligible unless one could look into the pianoforte part. I was deeply saddened at so hard a fate.”

At age 45, Beethoven’s hearing was completely gone, and so was his public life. In the final stretch of his life, the German composer became a reclusive, insular person who allowed only a select few friends to visit him. His earlier Sixth Symphony had already reflected his love for nature and the quiet of country life; Beethoven described it as “more the expression of feeling than painting”, a point underlined by the title of its first movement. While completely deaf, he went on to compose the Missa Solemnis, a solemn mass for orchestra and vocalists, and the monumental Ninth Symphony, among other major works.

It’s not clear if Beethoven’s inner ear was still functional in his later days so that he could continue using his bone-conducting stick to hear his compositions on the piano. Many experts believe he didn’t need to hear his pieces anyway since he was a master composer who knew all the rules of how music is made. Even in deafness, Beethoven was an unparalleled master of the language of music and an inspiration for resilience.

Big-eyed spiders that cast nets like gladiators can hear prey despite lacking ears

Credit: Cornell University.

Ogre-faced spiders (Deinopis spinosa) have massive eyes that grant them phenomenal night vision to hunt prey in the dark with their silk nets. It seems these skilled predators have yet another ace up their eight legs: very sensitive hearing.

Although they don’t have ears, these spiders have tiny hairs and joint receptors on their legs that are sensitive enough to pick up sounds from at least 2 meters away, according to researchers at Cornell University.

“These spiders are wonderful creatures that have some fascinating behaviors, allowed by fine-tuned sensory systems very unlike our own. Sadly, these spiders are very overlooked, especially considering how impressive their sensory systems and behavior are. We’re hoping to establish a foundation for future work on these and other spiders in the realm of sensory ecology,” Jay Stafstrom, first author of the new study and a postdoc at Cornell University, told ZME Science.

The stick-like nocturnal predators are part of the net-casting family of spiders. Instead of waiting passively for prey to fall into a permanent web woven earlier, ogre-faced spiders and other Deinopis species throw a rectangular net over unsuspecting victims that pass by, somewhat like gladiators.

In ensnaring their prey, ogre-faced spiders are aided by their googly-looking eyes. But these spiders also perform elaborate backward strikes to catch prey they cannot see, which suggests that they have another sophisticated spidey sense at work.

Previously, Stafstrom and colleagues had blindfolded ogre-faced spiders by placing dental silicone over their eyes. The blindfolded spiders weren’t able to catch prey off the ground, but they could still hunt insects out of the air. This was a huge hint that they employ other sensory systems in hunting besides vision.

For their new study, the researchers observed the spiders’ reactions to various tones by measuring their neural response with tiny electrodes placed in the spiders’ brains and legs. They found that the spiders could sense airborne vibrations at frequencies of up to 10 kHz, much higher than the sounds produced by walking or flying insects.

This high-speed video shows the backwards strike of an ogre-faced spider. Credit: Sam Whitehead.

Stafstrom says that it’s difficult to compare the ogre-faced spider’s hearing with other animals, but he finds it “impressive that they can hear so well, at least in terms of speed and direction.”

“If you consider trying to catch an insect as it’s flying past you, in nearly complete darkness (as they are doing this at night), the act of snatching something with a small net would seem a pretty difficult feat. We suspect that hearing is fairly widespread across other spiders, but we haven’t been able to conduct the appropriate comparative study to really investigate how widespread this phenomenon is. The two types of sensors shown to detect sound in spiders (long hairs and the metatarsal organs) are possessed by most, if not all spiders,” the researcher told ZME Science.

Since the spiders would only need to detect low-frequency tones to snatch prey, the researchers believe that their ability to detect much higher frequencies may help them stay alert for signs of their own predators.

“If you give an animal a threatening stimulus, we all know about the fight or flight response. Invertebrates have that too, but the other ‘f’ is ‘freeze.’ That’s what these spiders do,” says senior author Ron Hoy, professor of neurobiology and behavior at Cornell University. “They’re in a cryptic posture. Their nervous system is in a sleep state. But as soon as they pick up any kind of salient stimulus, boom, that turns on the neuromuscular system. It’s a selective attention system.”

In the future, the researchers are interested in learning whether the spiders have directional hearing too, meaning whether they can tell where sounds are coming from. This would explain their impressive choreography while hunting and perhaps inspire a new generation of microphones.

“These spiders have evolved for millions of years to be really good at snatching things up with this net, and the sensory systems they possess are exquisite at allowing them to do so. Since these spiders have been adapting for so long to detect both visual and hearing information so well, it would be beneficial, as humans, to better understand how they do it. If we could truly understand how they detect and process environmental information, we would surely have some valuable insight that could be applied to creating more sensitive/accurate biosensors (like microphones) in ways that we have never imagined before. Looking at the ability of these spiders to detect the directional component of a sound is what we’re most interested in next – we expect these spiders are quite adept at determining the location of a sound source, and we’re interested in understanding exactly how good they are at it,” Stafstrom said.

The findings appeared in the journal Current Biology.

Leaf blowers are not only annoying but also bad for you (and the environment)

The seemingly innocuous leaf blower may actually cause a lot more damage than you’d think — to both your health and the climate.

A groundskeeper blows autumn leaves in the Homewood Cemetery, Pittsburgh.
Image via Wikimedia.

It’s that time of the year: trees are shedding their leaves, and people are blowing them off the pavement. According to the Centers for Disease Control and Prevention (CDC), this quaint image actually hides several health concerns for operators and the public at large.

The inefficient gas engines typically used on leaf blowers generate large amounts of air pollution and particulate matter. The noise they generate can lead to serious hearing problems, including permanent hearing loss, according to the CDC.

Sounds bad

Some noise may not seem like much of an issue, but the dose makes the poison. The CDC explains that two hours of exposure to a conventional, commercial (and gas-powered) leaf blower is enough to adversely impact your hearing: these machines emit between 80 and 85 decibels (dB) while in use. Most cheap or mid-range leaf blowers, however, can expose users to up to 112 decibels (louder than a plane taking off, which generates around 105 decibels). At this level, noise can cause instant “pain and ear injury,” with “hearing loss possible in less than [2 to] 5 minutes”.
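To get a feel for how fast loud gear burns through a safe daily noise dose, here is a back-of-the-envelope sketch using the NIOSH-style rule of thumb (85 dBA is the 8-hour reference level, and every 3 dB above it halves the allowed time). The rule is standard occupational-health guidance; applying it to the levels quoted above is our own illustration.

```python
def allowed_minutes(level_db, ref_db=85.0, ref_minutes=8 * 60, exchange_db=3.0):
    """NIOSH-style exposure limit: 85 dBA for 8 hours, with each 3 dB
    increase halving the permissible duration."""
    return ref_minutes / 2 ** ((level_db - ref_db) / exchange_db)

for level in (85, 95, 105, 112):
    print(f"{level:>3} dB -> ~{allowed_minutes(level):6.1f} min/day")
# 112 dB works out to under a minute a day, in line with the CDC's warning
# of "pain and ear injury" and hearing loss within minutes.
```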

The low-frequency sound these machines emit fades only slowly over long distances and passes readily through building walls. Even 800 meters away, a conventional leaf blower still exceeds the 55 dB limit considered safe by the World Health Organization, according to one 2017 study. Because they’re so loud, they can be heard “many homes away” from where they are being used, Quartz explains.
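That slow fade is easy to sanity-check with simple spherical spreading, which shaves 6 dB off per doubling of distance and ignores air absorption, which low frequencies largely escape anyway. The 95 dB at 15 m starting point below is an invented example figure, not a measurement from the study.

```python
import math

def spl_at(dist_m, ref_db, ref_dist_m):
    """Free-field point-source estimate: SPL drops by 20*log10(d/d0) dB,
    i.e. 6 dB per doubling of distance; no absorption or barriers."""
    return ref_db - 20 * math.log10(dist_m / ref_dist_m)

# Hypothetical blower producing 95 dB measured from 15 m away:
for d in (15, 100, 400, 800):
    print(f"{d:>4} m -> {spl_at(d, 95, 15):5.1f} dB")
# Even at 800 m the estimate sits around 60 dB -- still above the WHO's
# 55 dB guideline, consistent with the 2017 study mentioned above.
```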

This ties into the greater issue of noise pollution. The 2016 Greater Boston Noise Report (link plays audio), which surveyed 1,050 residents across the Boston area, found that most felt they “could not control noise or get away from it,” with leaf blowers being a major source of noise. Some 79% of respondents said they believed no one cared that it bothered them. Leaf blowers are also seeing more use — in some cases becoming a daily occurrence. As homeowners and landscaping crews create an overlap of noise, these devices can be heard for several hours a day.

Image credits S. Hermann & F. Richter / Pixabay.

With over 11 million leaf blowers in the U.S. as of 2018, this adds up to a lot of annoyed people. Most cities don’t have legislation in place that deals with leaf blower noise specifically, and existing noise ordinances are practically unenforceable for these devices. However, there are cities across the U.S. that have some kind of leaf blower noise restrictions in place or going into effect.

Noisy environments can cause both mental and physical health complications, contributing to tinnitus and hypertension and generating stress, annoyance, and disturbed sleep.

Very polluting

A report published by the California Environmental Protection Agency (CalEPA) in the year 2000 lists several potential hazards regarding air quality when using leaf blowers:

  • Particulate Matter (PM): “Particles of 10 μm and smaller are inhalable and able to deposit and remain on airway surfaces,” the study explains, while “smaller particles (2.5 μm or less) are able to penetrate deep into the lungs and move into intercellular spaces.”
  • Carbon Monoxide: a gas that binds to the hemoglobin protein in our red blood cells. This prevents the cell from ‘loading’ oxygen or carbon dioxide — essentially preventing respiration.
  • Unburned fuel: toxic compounds from gasoline that leak in the air, either through evaporation or due to incomplete combustion in the engine. Several of these compounds are probable carcinogens and are known irritants for eyes, skin, and the respiratory tract.

To give you an idea of the levels of exposure involved here, the study explains that landscape workers running a leaf blower are exposed to ten times more ultra-fine particles than someone standing next to a busy road.

Additionally, these tools are important sources of smog-forming compounds. It’s not a serious issue right now, but as more people buy and use leaf blowers, lawnmowers, and other small gas-powered engines, these are expected to overtake cars as the leading cause of smog in the United States.

What to do about it

Well, the easiest option is to use a rake — or just leave the leaves where they are, which is healthier for the environment.

But leaf blowers didn’t get to where they are today because people like to rake. Electrical versions, either corded or battery-powered, would address the air quality and virtually all of the noise concerns (albeit in exchange for less power).

While government regulation might help with emission levels, noise concerns might best be dealt with using more social approaches. Establishing neighborhood-wide leaf blowing intervals, or limiting the activity to a single day per week, would help make our lives a little better. As an added benefit, this would also help people feel that their concerns are being heard, and foster a sense of community.

Bats can use leaves as ‘mirrors’ to spot hiding prey — but it only works at an angle

Bats use leaves as sound ‘mirrors’ to find (and eat) sneaky insects according to new research.

A Leaf Nosed Bat.
Image via Pixabay.

Even on moonless nights, leaf-nosed bats are able to snatch up insects resting still and silent on leaves. New research from the Smithsonian Tropical Research Institute (STRI) shows that bats pull off this seemingly impossible feat by approaching patches of leaves from different directions. This gives them the chance to use their echolocation to find camouflaged prey — even prey that specifically tries to hide from their acoustic surveys.

I spy with my little ear… a bug

“For many years it was thought to be a sensory impossibility for bats to find silent, motionless prey resting on leaves by echolocation alone,” said Inga Geipel, Tupper Postdoctoral Fellow at STRI and the paper’s lead author.

By combining data from a biosonar experiment with footage from high-speed video cameras of bats approaching prey, the team found just how critical the approach angle is to the leaf-nosed bats’ hunting prowess.

Bats can drench an area in sound waves and then listen to the returning echoes to survey their environment. It works much like a radar that uses sound instead of radio waves, and is undoubtedly a very cool trick to pull off. However, it’s not infallible: leaves are very good sound reflectors, so they drown out the echoes produced by any insect hiding in a patch of leaves. This natural cloaking mechanism is known as acoustic camouflage, and it makes the insects, for all intents and purposes, undetectable to the bats.

At least, that’s what we thought. To understand how bats pick out prey through the acoustic camouflage, the team aimed sound waves at a leaf (with and without an insect) from over 500 different angles. Using this data, they created a three-dimensional representation of the echoes each leaf generates. For each direction, the team also calculated how intense the echo was across the five different frequencies of sound present in a bat’s call.

As expected, leaves both with and without insects were very good sound reflectors if the sound approaches at an angle under 30 degrees (more-or-less from straight ahead). For a bat approaching at these angles, any echoes generated by an insect will be drowned out by the leaf’s echo. However, Geipel and colleagues found that for angles greater than 30 degrees, incoming sound waves bounce off the leaf much like light on a mirror or a lake. An approach at this angle makes the insect’s echo stand out clearly against the quiet backdrop provided by the leaf.

The optimal angle for bats to approach insects resting on leaves ranges between 42 and 78 degrees, the authors conclude.
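A toy numerical version of that geometry shows how such an angle threshold falls out; to be clear, the echo levels below are invented and this is not the authors’ model. Treat the leaf’s backscatter as a specular lobe that collapses away from head-on incidence, and the insect’s echo as faint but roughly the same from any direction.

```python
import numpy as np

angles = np.linspace(0, 90, 901)   # approach angle off the leaf's normal, degrees

# Invented relative echo levels (dB). A specular leaf throws a huge echo
# straight back at head-on incidence but deflects sound away at oblique
# angles; the insect scatters weakly but in every direction.
leaf_echo = -0.012 * angles ** 2
insect_echo = np.full_like(angles, -20.0)

audible = angles[insect_echo > leaf_echo]
print(f"insect echo rises above the leaf's beyond ~{audible.min():.0f} degrees")
```

With these made-up numbers the crossover lands near 41 degrees, in the same ballpark as the 42-78 degree window the authors report, though that agreement is by construction, not evidence.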

To verify their results, Geipel recorded actual bats at STRI’s Barro Colorado Island research station in Panama as they hunted insects positioned on artificial leaves. Their approaches were filmed using two high-speed cameras, and Geipel used the footage to reconstruct the flight paths of the bats as they closed in on the insects. Almost 80% of the approach angles were within the range that makes leaves act as specular reflectors, she reports, suggesting that the findings are sound.

“This study changes our understanding of the potential uses of echolocation,” Geipel said. “It has important implications for the study of predator-prey interactions and for the fields of sensory ecology and evolution.”

The paper “Bats Actively Use Leaves as Specular Reflectors to Detect Acoustically Camouflaged Prey” has been published in the journal Current Biology.

We finally found the protein that turns sound and balance into electrical signals

Researchers have pinpointed the protein that turns sound and head movement into nerve signals for the brain.

An embroidered diagram of the inner ear.
Image credits Hey Paul Studios.

A team from Harvard Medical School believes they’ve ended a 40-year search for the protein that allows us to hear and stay upright. Nestled in the inner ear, this molecule turns sound and movements of the head into electrical signals — it is, in effect, what translates them into a language our brain can understand.

Ear-round signaling

The team points to TMC1 (Transmembrane channel-like protein 1), a protein discovered in 2002, as being the elusive molecule researchers have been looking for. TMC1 folds in on itself in such a way as to form a sound-and-motion-activated pore. In effect, it acts much like a microphone: the protein turns pressure waves into electronic signals, a process known as ‘auditory transduction’. These are then fed to the brain, where they’re recreated into sound and help us maintain balance.

The findings come to fill in a gap in our understanding of how hair cells in the inner ear convert sound and movement into signals for the nervous system.

“The search for this sensor protein has led to numerous dead ends, but we think this discovery ends the quest,” said David Corey, Bertarelli Professor of Translational Medical Science at Harvard and co-senior author on the study.

“It is, indeed, the gatekeeper of hearing,” says co-senior author Jeffrey Holt, a Harvard Medical School professor of otolaryngology and neurology at Boston Children’s Hospital.

The team hopes that their findings can lead to precision-targeted therapy for hearing loss associated with malformed or missing TMC1 proteins.

Hearing is one of the very few senses whose molecular converters remained unknown. This was, in part, due to the position of the inner ear. Nestled in the temporal bone, the densest bone in the body, this organ is hard to reach. To further complicate matters, it also houses relatively few sensory cells that can be retrieved, dissected, and imaged. Our inner ear houses roughly 16,000 auditory cells — a human retina, by contrast, boasts over a hundred million sensory cells.

The team’s research was based on the discovery of the TMC1 gene in 2002. Back in 2011, a team led by Holt proved that TMC1 was required for transduction, but not whether it was a key player or just a supporting actor in the process. So, naturally, it sparked a heated debate among researchers.

Step-by-step process

The current paper aimed to put this debate to rest. In an initial set of experiments, the team found that TMC1 proteins clump together in pairs to form ion channels (basically pores). It was quite a surprising discovery, as most ion channels are built from three to seven proteins, the team explains. However, this unusual pairing also helped the researchers make sense of the protein’s structure.

The second step was to map out the protein’s 3D structure using computer predictive modeling, a process that predicts the most probable arrangement of a protein’s atoms based on comparisons with similar proteins of known structure. This approach rests on the fact that a protein’s functions are dictated by its structure, i.e. its specific arrangement of amino acids.

The algorithm revealed that TMC1’s closest relative with known structure was a protein known as TMEM16, and yielded a possible amino acid model for TMC1.

Finally, the team set out to confirm whether the computer model was onto something or not by using mouse models. The team substituted 17 amino acids — one at a time — in the hair cells of living mice and then noted how each alteration changed the cell’s ability to pick up on sound, movement, or the flow of ions (i.e. electrical signals).

Eleven of the substitutions altered the flow of ions, five of them having a dramatic effect (reducing flow by up to 80%), the team reports. One substitution blocked the flow of calcium ions completely.
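As a purely illustrative sketch of the bookkeeping behind such a screen (every label and percentage below is invented, not the study’s data), the logic looks something like this:

```python
# Hypothetical screen results: substitution label -> percent reduction in
# ion flow. The real study tested 17 substitutions; a few placeholders shown.
reduction_pct = {
    "sub_01": 80, "sub_02": 72, "sub_03": 65, "sub_04": 55, "sub_05": 50,
    "sub_06": 30, "sub_07": 18, "sub_08": 7,
    # ...one entry per amino acid substituted
}

# Substitutions that strongly cut the current flag residues that plausibly
# line the channel's pore.
strong = sorted(r for r, drop in reduction_pct.items() if drop >= 50)
print("strong-effect substitutions:", strong)
```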

“Hair cells, like car engines, are complex machines that need to be studied as they are running,” says Corey. “You can’t figure out how a piston or a spark plug works by itself. You have to modify the part, put it back in the engine and then gauge its effect on performance.”

Another strong indicator that TMC1 is central to hearing is that it’s found in all vertebrates — mammals, birds, fish, amphibians, and reptiles.

“The fact that evolution has conserved this protein across all vertebrate species underscores how critical it is for survival,” Holt said.

The paper “TMC1 Forms the Pore of Mechanosensory Transduction Channels in Vertebrate Inner Ear Hair Cells” has been published in the journal Neuron.

Does living near wind turbines really affect your health?

Wind turbines are a clean source of energy, excellent for powering many parts of the world. But some people who live close to turbines have complained that wind turbines are affecting their quality of life, citing shadow flicker, audible sounds, and subaudible sound pressure levels as “annoying.” Now, a team of researchers from the University of Toronto and Ramboll, an engineering company funding the work, has set out to investigate these complaints.

The main idea behind turbines is centuries old: exploiting the energy of wind to get something done — whether it’s grinding cereal or producing electricity, the essential principle has remained unchanged. The benefits of harvesting wind energy are evident, but are there any downsides?

For the most part, there’s not much to complain about, but some people do complain of the noise and rumble created by modern wind turbines. With this in mind, the team set out to re-analyze data collected for the “Community Noise and Health Study” from May to September 2013 by Statistics Canada, the national statistical office. They focused on noise from wind turbines, producing one of the most thorough studies on the subject.

“The Community Noise and Health Study generated data useful for studying the relationship between wind turbine exposures and human health — including annoyance and sleep disturbances,” said Rebecca Barry, an author on the paper. “Their original results examined modeled wind turbine noise based on a variety of factors — source sound power, distance, topography and meteorology, among others.”

Just like the first study, this second analysis found that people who live in areas with higher levels of modeled sound values (40 to 46 decibels, or dB) reported more annoyance than respondents in areas with lower levels of modeled sound values (under 25 dB). For reference, urban ambient noise is around 40 dB, and a quiet rural area comes in at around 30 dB.

The first study found no connection between the respondents’ quality of life and their proximity to wind turbines. This second study, however, reported that survey respondents living closer to wind turbines gave lower ratings for their environmental quality of life. Barry and her colleagues weren’t able to determine whether this was due to the turbines themselves or to a separate, underlying reason. The scientists also found no evidence that wind turbine exposure affects one’s health.

“Wind turbines might have been placed in locations where residents were already concerned about their environmental quality of life,” said Sandra Sulsky, a researcher from Ramboll. “Also, as is the case with all surveys, the respondents who chose to participate may have viewpoints or experiences that differ from those who chose not to participate. Survey respondents may have participated precisely to express their dissatisfaction, while those who did not participate might not have concerns about the turbines.”

This isn’t the first study to assess the potential health impact of wind turbines — but studies generally report no reason to worry. In 2015, a study in Australia found that infrasound generated by wind turbines is less loud than the infrasound created by a human heartbeat. A subsequent study found that the so-called “wind turbine syndrome” (allegedly caused by long-term exposure to wind turbines) is essentially an imagined condition.

The evidence is not exactly clear, and it’s notoriously difficult to prove that something doesn’t happen. For now, most evidence seems to indicate that wind turbines are completely safe. However, it’s important to continue assessing any potential health risks, researchers said.

“Measuring the population’s perceptions and concerns before and after turbine installation may help to clarify what effects — if any — exposure to wind turbines may have on quality of life,” Sulsky said.

The article, “Using residential proximity to wind turbines as an alternative exposure method to investigate the association between wind turbines and human health,” is authored by Rebecca Barry, Sandra I. Sulsky and Nancy Kreiger, and appeared in The Journal of the Acoustical Society of America on June 5, 2018 (doi: 10.1121/1.5039840).

Whale skulls act like resonance chambers to help them hear underwater

Whales don’t put their back into hearing — but they do put their skull. New research, along with the first-ever full-body CT scan of a minke whale, shows how the sea-borne mammals pick up low-frequency sounds, from the calls of other whales to the propellers of cargo ships.

The minke whale specimen inside the industrial CT scanner. To reduce the time required to scan the entire whale, the team cut the specimen in half, scanned both pieces at the same time, and reconstructed the complete specimen afterward in the computer.
Image credits Ted Cranford / San Diego State University.

The gentle giants of the sea often bedazzle and impress with their songs, but… how can they hear each other underwater? New research suggests that it’s possible if you use your head. If you use your head as a huge acoustic antenna, that is.

Can you hear that?

Considering where whales like to hang out and their impressive girths, studying the marine mammals is notoriously difficult. However, one team of determined US researchers wouldn’t let that dissuade them. The duo has developed a new method of determining how baleen whales (parvorder Mysticeti) pick up low-frequency chatter between 10 and 200 hertz.

“You can imagine that it is nearly impossible to give a hearing test to a whale, one of the largest animals in the world,” said lead researcher Ted W. Cranford, PhD, adjunct professor of research in the department of biology at San Diego State University.

“The techniques we have developed allow us to simulate the biomechanical processes of sound reception and to estimate the audiogram [hearing curve] of a whale by using the details of anatomic geometry.”

Using a computerized tomography (CT) scanner designed for industrial applications (it was originally used to spot structural defects in rockets), the researchers analyzed the internal structure of a minke whale calf (Balaenoptera acutorostrata) and a fin whale calf (B. physalus). Both animals were found stranded along the U.S. coast some years before the study and were preserved after they died during rescue operations.

CT scanners use X-rays to take cross-sectional pictures through objects or organisms. You’re likely quite familiar with them from hospitals or TV shows involving hospitals. The team produced 3D models of the calves’ skulls based on these scans. Then, they used a method known as finite element modeling (FEM) to combine maps of tissue density from the CT scans with measurements of tissue elasticity. Finally, a supercomputer simulated these combined models’ response to sounds of different frequencies.

The team reports that, surprisingly, whales’ skulls act as antennae or resonance chambers: the bones vibrate when struck by sound, amplifying the vibrations and transmitting them to the whales’ ears. The skulls were especially well-tuned to the low-frequency sounds that whales use to communicate. The authors also note that large shipping vessels produce the same frequencies, a finding that should help industry and policymakers establish new regulations to limit our impact on these gentle giants.
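The amplification the team describes is classic resonance. As a drastically simplified stand-in for their finite element models (one vibrating mass instead of a whole skull, with a natural frequency and damping ratio invented here but parked inside the 10-200 Hz band whales use), a single driven resonator already shows how a structure boosts vibrations near its preferred frequency:

```python
import numpy as np

f = np.linspace(5, 400, 2000)   # driving frequency, Hz
f0, zeta = 40.0, 0.05           # invented natural frequency (Hz) and damping ratio

# Steady-state response of a driven mass-spring-damper, relative to a
# static push of the same force.
r = f / f0
gain = 1.0 / np.sqrt((1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2)

print(f"response peaks near {f[np.argmax(gain)]:.0f} Hz, "
      f"amplified ~{gain.max():.0f}x over a static load")
```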

In addition, the team’s models suggest that minke whales hear low-frequency sound best when it arrives from directly ahead of them. This suggests whales have directional hearing that provides cues about the location of sound sources, such as other whales or oncoming ships. Exactly if (and how) whales might boast directional hearing is still a puzzling question, given that low-frequency sounds tend to travel in waves that are longer than the whales themselves.

The findings were presented Monday, April 23rd at the American Association of Anatomists annual meeting during the 2018 Experimental Biology meeting in San Diego.

A rooster’s crow is as loud as a jet taking off 15 meters away. Here’s why it doesn’t go deaf

You shouldn’t stay too close to a rooster if you care about your hearing. Credit: Pixabay.

There’s a reason roosters have been favored as natural alarm clocks ever since people first became farmers. Even before the light of dawn, raucous roosters sound the alarm, waking up every living thing around with their clamorous crow. Belgian researchers actually found that a rooster’s crow averages more than 130 decibels for 1-2 seconds, which is about as intense as standing 15 meters away from a jet taking off. One of the three studied roosters was recorded crowing at more than 143 decibels, which is like standing in the middle of an aircraft carrier with jets whizzing by.

Self-defense

Previous research by a Japanese team led by Takashi Yoshimura found that roosters crow in the morning primarily to announce territory and signal where a particular rooster sees itself in the pecking order. Experiments revealed that the most dominant rooster is the one that starts off the crowing, which begins around two hours before first light. The birds must have an internal body clock that tells them when to crow — and very loudly, we have to add — the researchers concluded.

All things considered, why doesn’t the rooster go deaf from its own clamor? According to the researchers from the University of Antwerp and the University of Brussels, half the rooster’s eardrum is covered in soft tissue that insulates it from the racket. Additionally, a quarter of the ear canal completely closes during crowing, as reported in the journal Zoology. This is quite convenient for the rooster but doesn’t do much for all other nearby creatures with a decent pair of ears.

The researchers came to this conclusion after performing micro-computerized tomography scans which rendered 3-D images of the birds’ skulls.

What this means is that a rooster can’t actually hear the full intensity of its own crows, which might explain why they’re so annoyingly loud. But even though the intensity of a rooster’s crow diminishes with distance, one can only wonder why nature would evolve such an ability that risks deafening a nest, hens and chicks included.

[ALSO READ] How we hear and other eary functions

Bird biology offers some clues. Unlike mammals, birds (along with reptiles and fish) can regenerate the hair cells of the inner ear if they are damaged. Unfortunately, all sensorineural hearing loss is permanent in mammals, humans included, since the hair cells in the cochlea don’t grow back. Some scientists, however, are attentively studying the hair cell regeneration process in chickens in the hope of one day curing hearing loss in humans. Until that happens, don’t stay too close to roosters.

Owls’ ears are always in tip-top shape because they’re self-repairing

Mammal ears, including those of humans, tend to deteriorate over time. By contrast, barn owls’ hearing appears to suffer no ill effects from aging, a new paper reports. This comes down to a genetic switch allowing the birds to continuously regenerate their ears as they age — a switch locked in the ‘off’ setting in mammals.

Barn owl.

Image via Pixabay.

No matter how quiet your life is, by the time you grow old and grey, your ears simply won’t be as good as the day you got them. Past research has shown that by age 65, most people lose over 30 decibels of sensitivity in the high-frequency range due to age-related deterioration known as presbycusis. Naturally, your mileage may vary and you can lose a lot more, mostly based on lifestyle and genetics.

Not so with barn owls (Tyto alba). Hogwarts’ mailbirds seem to possess a natural regenerative mechanism that ensures they always hear at peak performance. For all intents and purposes, their ears are ageless.

Ear-ly problems

The team, led by Bianca Krumm from the University of Oldenburg’s animal physiology and behavior group, worked with seven barn owls named Weiss, Grün, Rot, Lisa, Bart, Ugle, and Sova. All of the birds were hatched in captivity and lived in aviaries. The researchers divided them into two age groups — those less than 2 years old, and those between 13 and 17 years old.

What they wanted to test is how well the owls could hear frequencies of 0.5, 1, 2, 4, 6.3, 10, and 12 kHz (kilohertz). To this end, they trained the birds to fly between two perches upon hearing a short tone. After successfully completing the task, the owls would receive a food reward. To minimize training effects (false positives), the owls were trained separately, using a specific sequence of frequencies for each bird. The team also tracked Weiss’ auditory sensitivity throughout his lifetime. The bird reached an impressive (for an owl) 23 years of age, far above the species’ typical lifespan in the wild of just 4 years.

Overall, the paper reports that at test frequencies of 0.5, 1.0, and 6.3 kHz, the old owls performed slightly better than their younger counterparts (with mean differences in hearing thresholds between 0.1 and 3.1 dB). Younger owls fared better in the remaining frequency range (mean differences between 0.9 and 9.6 dB). However, the team notes that these differences are not significant, saying the results varied between individuals but “without any relation to age.”

Barn Owl pose.

Can you hear that? Can you hear adventure calling? Oh, sorry, I forgot you’re just a mammal.
Image credits Karen Arnold.

The results are consistent with previous research showing that birds, fish, and amphibians can regenerate lost hair cells in their equivalent of an ear, an organ known as the basilar papilla. These ‘hairs’ are actually very long and flexible organelles (organs, but for cells) which turn sound vibrations into an electrical signal that’s passed onto auditory nerves and the brain.

“The regeneration mechanisms, and therefore their benefits, are likely present in all bird species,” said senior author Ulrike Langemann in an interview for Seeker.

“The amazing thing is that the majority of small bird species are rather short-lived, and thus will never really benefit from a preservation of auditory sensitivity at old age.”

Humans and our fellow mammals have some regenerative capability in this direction, but it’s a far cry from what owls can pull off. We can’t replace hairs lost to injury, disease, or even old age — which is why we get presbycusis. The team believes that mammals lost the full scope of these regenerative abilities sometime during their evolution. Which is a shame, really, because according to the National Institutes of Health, over 90% “of hearing loss occurs when either hair cells or auditory nerve cells are destroyed.”

It’s not clear why we lost this very desirable trait, but “unfortunately, the genetic switch for the inner ear of mammals is in the off mode,” the team says. Barn owls, however, being auditory hunters par excellence, stood to benefit a lot from such a regenerative system.

“Scientists have shown that a tame barn owl will catch a prey item in complete darkness,” Krumm explains.

As such, their evolution selected heavily in favor of keeping the switch “on.” The team is now looking into how pathologies affect mammalian inner ears, and how barn owls use their hearing to accurately locate prey. We can’t yet copy these regenerative mechanisms into humans, the team notes, so for the time being you should take care of your hearing as much as possible.

“Listening to very loud music plays a role in hearing loss because it may damage sensory hair cells and the connecting nerve fibers immediately,” Langemann said. “However, the impact from constant and work-related noise is as serious. These are some of the reasons why wearing ear protection for specific types of work are nowadays standard in the professional world.”

The paper “Barn owls have ageless ears” has been published in the journal Proceedings of the Royal Society B.

New gene delivery therapy restores partial hearing, balance in deaf mice

Using an innovative gene delivery technique, researchers have managed to partially restore both hearing and balance in mice born with a condition that affects both.

Most mice exhibited improvements in both hearing and balance. Image credits: Rama / Wiki Commons

Hair cells are the sensory receptors of both the auditory system and the vestibular system in our ears — and in the ears of all vertebrates. They play a big part in both our hearing and our balance, transforming sound vibrations in the cochlea into electrical signals that are fed to the auditory nerve and sent up to the brain. The problem is that they’re notoriously hard to treat, and despite a number of different approaches, success has been very limited. Now, a team from Harvard Medical School (HMS) and Massachusetts General Hospital might have found something that works.

[ALSO SEE] How we hear

They used the common adeno-associated virus (AAV) as a genetic delivery service — but the trick is that they wrapped it up in protective bubbles called exosomes, an approach recently developed by study co-investigator Casey Maguire. Scientists have used AAV for genetic delivery before, but hair cells proved very difficult to penetrate, and this is where the exosomes kick in.

“To treat most forms of hearing loss, we need to find a delivery mechanism that works for all types of hair cells,” said neurobiologist David Corey, co-senior investigator on the study and the Bertarelli Professor of Translational Medical Science at HMS.

The technique involves growing the virus inside the bubbles, which, for reasons still unclear, tend to bind better to the targeted area. The approach is quite novel: generally, scientists modify the virus itself, whereas here the researchers added an external protective layer. AAV alone penetrated only 20% of hair cells; with the exosomes, it penetrated 50 to 60 percent.

“Unlike current approaches in the field, we didn’t change or directly modify the virus. Instead, we gave it a vehicle to travel in, making it better capable of navigating the terrain inside the inner ear and accessing previously resistant cells,” said Maguire, who is also co-senior author on the study.

They tested the treatment on mice born with severe hair cell defects. The mice were unable to hear even the loudest sounds and had visible balance problems. A month after treatment, 9 out of the 12 mice had at least some hearing restored and were startled by a loud clap, a standard behavioral test for hearing. Four of them could hear sounds of 70-80 decibels, roughly the equivalent of a loud conversation. All mice exhibited improvements in balance.

This is a very big deal considering that 30 million Americans suffer from hearing loss and 1 in every 1,000 babies is born with some kind of hearing impairment.

Journal Reference: Bence György et al. Rescue of Hearing by Gene Delivery to Inner-Ear Hair Cells Using Exosome-Associated AAV. DOI: http://dx.doi.org/10.1016/j.ymthe.2016.12.010. An accompanying commentary to the study appears in the same issue.

Despite lacking ears, spiders can hear you talk across the room

Credit: Pixabay

Everybody thought the tiny arachnids lacked sensitive hearing beyond a couple of body lengths away, but it turns out spiders can hear you even from across the room. Sorry, arachnophobes: spiders just got a lot scarier.

Spider senses

Humans, like most vertebrates, hear by converting the vibrations carried by acoustic pressure waves into sound via our eardrums. Spiders don’t have ears and instead rely on the tiny hairs that litter their legs to pick up sound vibrations. The assumption used to be, however, that they can only sense noises from a couple of body lengths (a few centimeters) away. Moreover, no one thought that these vibrations actually get converted into the neural signals we know as hearing.

Turns out we’ve underestimated spider senses, according to Gil Menda and Paul Shamble from Cornell University. The two were working on a new method that enabled them to read neural signals inside a spider’s minuscule brain. Because of the way their experiment was set up, a computer made a popping sound whenever spider neurons fired in response to a stimulus. But when one of them moved a chair, a pop sounded. When they clapped their hands from across the room, they were met by a pop again. Their minds immediately filled with possibilities, and a new experiment was born.

Jumping spiders were placed in a special acoustic arena designed to absorb most sound reflections and echoes. Menda and Shamble then played all sorts of frequencies. One of them was the familiar frequency made by a wasp flapping its wings, to which the spiders responded by freezing — a typical fear response. The other frequencies were also met with a response, and high-speed cameras showed that these were picked up by sensory hairs that shook back and forth.

Even from 16 feet (5 meters) away the spiders could still respond to the frequencies.

“This is real, and it’s not only with jumping spiders,” said Menda, who now plans on replicating the experiment on other spider species.

Don’t expect their hearing to be too clear though, say the researchers who published their work in Current Biology.

“It probably sounds like a really bad phone connection,” Shamble told The Guardian. “They probably can tell that you’re talking from across the room, but they’re certainly not listening to you.”

The findings have implications for animal research. Scientists who study arachnid behavior but ignore the animals’ sense of hearing, for instance, could be missing some important cues. The findings could also help us understand how spider hearing evolved, and could lead to new technologies. Things that come to mind are sensitive micro-robots or a new generation of microphones and hearing aids that work on the same principle that enables the spider’s astonishing hearing.

“They can hear everything we do,” said Menda, who used to bring a wary eye to “Spider-Man” movie sessions with his kids. “I used to laugh at [the movies], because spiders can’t hear sounds. Well, guess what? They can.”

Did Neanderthals and humans share the same hearing?

Image: Flickr user Erich Ferdinand

Since the first Neanderthal fossils were found in Belgium at the dawn of the 19th century, scientists have debated whether these extinct cousins of ours were capable of human-like speech. Almost two hundred years later, the discussion is far from settled. Since there’s no way of telling for sure, we can only infer what they could or couldn’t do from indirect evidence, such as the artifacts and cave drawings they left behind.

These archeological findings suggest Neanderthals formed hunter-gatherer communities, and it is likely that they used speech to communicate among themselves, and probably with humans. But there are also anatomical features that we can study. Researchers at the Max Planck Institute for Evolutionary Anthropology, for instance, recently used cutting-edge imaging techniques to analyze the ear bones of Neanderthals. Their findings point to consistent aspects of vocal communication in modern humans and Neanderthals.

Could a Neanderthal dance to the beat?

The middle ear is the part of the ear that rests between the eardrum and the oval window, transmitting sound pressure waves from the outer to the inner ear. It is made up of three bones: the hammer, the anvil, and the stirrup (malleus, incus, and stapes). This ossicular chain is found in all mammals, and for good reason: these bones amplify the energy of the sound waves and allow it to travel within the fluid-filled inner ear; without them, mammals wouldn’t be able to hear.

[ALSO SEE] How we hear

As important as these three bones are for hearing, they can be just as frustrating for archaeologists. Because they’re the smallest bones in the body, ossicles are often lost and rarely show up in fossil records, including those of human ancestors.

“We were really astonished how often the ear ossicles are actually present in these fossil remains, particularly when the ear became filled with sediments,” says lead researcher Alexander Stoessel in a statement.

Using high-resolution computed tomography (CT) scans, the researchers were able to reconstruct what the ear bones look like in 3D, all without making so much as a dent in the actual fossils. The resulting models were then compared to the ossicles of anatomically modern humans, as well as of chimpanzees and gorillas, our closest living relatives.

“Despite the close relationship between anatomically modern humans and Neanderthals, to our surprise the ear ossicles are very differently shaped between the two human species,” says Romain David, who was involved in the study.

Tympanic membrane (grey), ossicular chain (yellow, green, red), and bony inner ear (blue) of a modern human with a One-Eurocent coin for scale. Credit: © A. Stoessel & P. Gunz

To see whether these morphological differences affect the hearing capacity of Neanderthals in any way, the team also analyzed the structures surrounding the ear ossicles. Surprisingly, the answer is ‘no’. Despite the morphological differences, the functional parameters of the Neanderthal and modern human middle ear are largely the same, the authors report in the journal Proceedings of the National Academy of Sciences.

These apparent differences can be attributed to the different evolutionary trajectories the two species took to increase brain volume, a path that also shaped the cranial base, which houses the middle ear. Unfortunately, a larynx, or voice box, which is formed from soft tissue, has never been found in the Neanderthal fossil record. But we do know, at least, that Neanderthals were capable of hearing the same frequency spectrum as humans.

“For us these results could be indicative of consistent aspects of vocal communication in anatomically modern humans and Neanderthals that were already present in their common ancestor,” says Jean-Jacques Hublin, an author of the study, adding that “these findings should be a basis for continuing research on the nature of the spoken language in archaic hominins.”

Mylan CEO Heather Bresch’s hearing on EpiPen was a complete disaster for the company

Heather Bresch, CEO of EpiPen producer Mylan, testified in front of the House Oversight and Government Reform Committee about the drug’s price increase on Wednesday.

Image via Youtube / wochit Business.

The basic rundown of the EpiPen situation: since 2007, when Mylan acquired the rights to the device used to treat life-threatening allergic reactions, its price has increased by more than 500%. A two-pack of pens currently has a list price of $608. Doug Throckmorton, deputy director of the FDA, was asked to testify alongside Bresch, as there is legitimate concern that the lack of a competitor product on the market has allowed Mylan to inflate prices with commercial impunity.

Her prepared testimony released ahead of the hearing gave background on Mylan as a company and addressed some of the key points of the controversy. Congress, however, wasn’t impressed by her answers.

“Looking back, I wish we had better anticipated the magnitude and acceleration of the rising financial issues for a growing minority of patients who may have ended up paying the full [list] price or more,” her testimony reads. “We never intended this.”

The members of Congress had a lot of questions for Bresch, who testified alongside Doug Throckmorton, a deputy director of the Food and Drug Administration. In her defense, she said the company is implementing a number of programs to help patients pay for EpiPens.
Here's the TL;DR version of what went down:

  • Bresch didn’t admit that the company raised EpiPens’ price to increase profits. She failed to present data pertaining to financial and patient assistance programs that Congress requested beforehand. She was unable to provide the info off the top of her head, either. She also said there were no plans to further increase prices in 2017, but didn’t give a definitive ‘no’.
  • Mylan’s EpiPen4Schools program also took a lot of flak — Rep. Tammy Duckworth called it a “monopoly” as schools that enrolled in the program had to sign a noncompete agreement. She was also outraged that most schools didn’t know the president of the National Association of the State Boards of Education, who was lobbying for them to join the program, was Bersch’s mother.
  • Congress also criticized Throckmorton, as it felt the FDA's convoluted approval process allowed this situation to arise. Throckmorton said FDA regulation prevented him from disclosing all the information the representatives requested about applications for competitor products.
  • By the end of the hearing, Bresch faced questions about Mylan’s tax inversions, private jets, and Rep. Earl Carter’s anger over Mylan’s generic version of the EpiPen.


Still here? Ok. Let’s go through the painful (for Bresch) step-by-step of the hearing.

*grabs popcorn*

It doesn’t make sense and we don’t believe you

“We’ve got a lot of questions,” said Rep. Jason Chaffetz, chairman of the committee, at the start of the hearing.

Chaffetz went on to ask how much money Mylan makes off each EpiPen and how much of that money goes towards its executives. He also pointed out the appalling lack of competition, which allowed the price to skyrocket. And when there are lives on the line, “parents don't have a choice,” he added.

Rep. Elijah Cummings followed Chaffetz, saying he was “not impressed” by Bresch’s prepared testimony. He accused the company of using a “simple but corrupt business model” to cash in big, comparing them to Martin Shkreli of Turing Pharmaceuticals and the execs of Valeant Pharmaceuticals. He also put little faith in Mylan’s pledge to increase patient assistance programs. He referenced Shkreli’s testimony earlier this year, saying he “took his punches” then went back and kept on doing the same thing.

“We’ve heard that one before,” he said. “They never ever lower their prices.”

“I’m concerned this is a rope-a-dope strategy. It’s time for Congress to act.”

Bresch and Throckmorton both gave their prepared statements, which can be read here. Basically, it’s Bresch defending herself and Mylan while Throckmorton details how the FDA is putting effort into ramping up the approval of generics — exactly what you’d expect from a prepared speech.

The Q&A, however, was much more interesting.

Chaffetz asked what the company believed was going to happen when they raised the drug’s price. Bresch tried to explain that Mylan doesn’t actually make a lot of money on the drug.

“This doesn’t make any sense,” he said. “This is why we don’t believe you.”

He asked Throckmorton how many epinephrine products were in the FDA's queue right now, but to Chaffetz's visible frustration, he couldn't answer the question. When pressed, Throckmorton said he wasn't allowed to disclose “confidential commercial information” in that setting. The FDA later followed up on Twitter.

Another issue Chaffetz brought up was the involvement of Bresch's mother. A USA Today article reported that she had used her influence as president of the National Association of State Boards of Education to support Mylan's EpiPen4Schools program. Bresch said the story distorted the facts and essentially shamed Mylan for giving schools free EpiPens.

Rep. Cummings wanted to calculate how much the company spends on marketing compared to what it rakes in, so he asked how much profit the company made off the sale of EpiPens in 2015. He was going by publicly available information but wanted the hard figures from the CEO. In the end, though, he had to work it out without Bresch's answers.

“You’re telling me you don’t know how much you spent on patient assistance programs and school-related programs in 2015?” he asked.

This does not look like a man happy with the answers he's hearing. Image via Youtube.

She replied they spent ‘maybe’ $105 per pack because they had to raise awareness about anaphylaxis. Cummings then asked how much money was pooled into R&D in 2015 — he had to ask twice and went v-e-r-y slowly the second time. Again, Bresch came up short on answers.

“You knew what this hearing was about. I’m asking questions that if you’re the CEO I think that you would know,” he said.

Later, he asked if Bresch agreed that Mylan made hundreds of millions of dollars on the EpiPen in 2015 alone, to which she replied that the pens weren't all of the company's $11 billion revenue. Cummings asked her again, and she answered ‘yes’. She was then asked to produce documents showing the revenue on the EpiPen (these had been requested before the hearing, but Bresch didn't bring them along).

Rep. Eleanor Norton then asked the question on everyone's lips: will the price of the EpiPen come down? The CEO replied that an authorized generic was the fastest way to make that happen, and that even if the branded product's price went down, it wouldn't necessarily make a difference on the shelf price.

“What have you done to earn this 671% [compensation] increase?” Norton followed-up.

Bresch first tried to dodge the question by saying Mylan products have saved the US $180 billion in expenses. Pressed by Norton, she pointed to the EpiPens Mylan has supplied to schools and public places. Rep. Stephen Lynch asked how much the company made off each pen, and Bresch tried to show, using poster boards, that the company got $235 from each two-pack, for a profit of about $50. She added that the $300 generic would bring in even less than $50 in profit for Mylan. Bresch later told Rep. Scott DesJarlais that she did not plan on increasing the price of the EpiPen in 2017. DesJarlais then asked if she thinks $600 is too much to charge for the pens.

“We believe it was a fair price, and we’ve just now lowered that by half,” Bresch said.

But if the price was fair, why lower it at all, he asked. Bresch replied it all came down to people paying closer to the list price, which wasn't intended.
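For the numbers-inclined, here's a quick sketch that puts Bresch's poster-board math next to the list price. All figures are the ones quoted during the hearing; the split between Mylan's take and everyone else's is Mylan's own claim:

```python
# Figures as reported at the hearing (the split is Mylan's claim):
list_price = 608        # $ per two-pack, list price
mylan_receives = 235    # $ Mylan says it gets per two-pack
mylan_profit = 50       # $ profit Mylan claims per two-pack

implied_costs = mylan_receives - mylan_profit    # $185 of claimed costs
middlemen_share = list_price - mylan_receives    # $373 to the supply chain

print(f"Mylan's implied costs per two-pack: ${implied_costs}")
print(f"Wholesalers/PBMs/pharmacies share:  ${middlemen_share}")
# Cummings' skepticism in a nutshell: a price hike of more than $100
# that supposedly nets only $50 of profit per two-pack.
```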

A mother’s touch

Duckworth raised concerns with the EpiPen4Schools program — to take part, schools had to agree not to buy EpiPens from anyone else. Bresch replied that schools are free not to join the program if they so wish.

“That, to me, is an unfair monopoly,” Duckworth said. “That’s right, they don’t have to buy them, but your own mother is out there […] passing out your guides for Mylan.”

She added that most schools had no idea the person lobbying for the program was connected to the CEO. Rep. Mick Mulvaney discussed government intervention in the project. He said that Congress was talking about an industry it didn't fully understand — but he made it clear that Bresch and Mylan wouldn't get off easily.

“I’ll tell you what we do know, though, is that you’ve been in our hallways to ask us to make people buy your stuff,” he said, citing that 11 states have laws requiring EpiPens be available in schools. “You’ve lobbied us to make the taxpayer buy your stuff. […] I was here when we did it.”

“You came and you asked the government to get in your business, so here we are today. And I was as uncomfortable with some of these questions as you were […] but I have to defend both my Republican and Democrat colleagues for these questions because you’ve asked for it, so I guess this is my message. If you want to come to Washington, if you want to come to the state capitol and lobby us to make us buy your stuff, this is what you get. You get a level of scrutiny and a level of treatment that would ordinarily curl my hair, but you asked for it!”

Rep. Earl Carter discussed the issue of pharmacy benefit managers (PBMs), companies that serve as middlemen in negotiating the price of drugs. He has been investigating these companies, as he believes they're part of the reason patients are paying more and more for prescription drugs. Bresch agreed that more transparency is needed in this regard. On the subject, she also brought up the company's authorized generic, which didn't go over well.

“You know I know better than that,” Carter said. “Don’t try to convince me that you’re doing us a favor.”

He said that if Mylan had reduced the price of its EpiPens in the first place, it wouldn't have received rebates from PBMs. Carter asked Bresch to follow up with more details about Mylan-PBM contracts.

Rep. Bonnie Watson Coleman then asked how Bresch got to the hearing, to which she replied that she had flown in on a private jet from Pittsburgh, where Mylan's US corporate offices are based. Coleman then asked about the company's tax rates. Last year, Mylan moved its headquarters to the Netherlands and has since paid a 15-17% tax rate, down from roughly 20-25% the year before — which, Coleman pointed out, means the company pays a lower tax rate than the average American. During the discussion, Bresch said that the company is “physically” run out of its Pennsylvania offices, where the execs are based.

“This is a sham and a shell, and it’s really sad to hear this,” Coleman said.

Chaffetz also discussed the EpiPen’s classification under Medicaid as a “Non-Innovator Multiple Source Drug.” Bresch said that the status was decided before the company acquired the patent.

By the end of the hearing, things weren't looking good for Bresch.

“If I could sum up this hearing, it would be that the numbers don’t add up,” Cummings said. “It is extremely difficult to believe that you’re making only $50 when you’ve just increased the price by more than $100.”

“It just feels like you’re not being honest with us,” he added, saying some of the numbers and charts Bresch used during the hearing seemed over-simplified.

It seems the representatives took their previous dealings with Turing and Valeant Pharmaceuticals to heart, with Cummings saying that Mylan's arguments sounded a lot like what they'd heard before. Bresch and Throckmorton were given 10 days to provide the committee with documents addressing the points that weren't satisfactorily answered during the hearing.

You can watch the full hearing here:

https://www.youtube.com/watch?v=f60bxayNYpg

The US is rolling out superhuman hearing for its soldiers

Wearable tech could save the hearing of thousands of soldiers.

The hearing system costs $2,000. Image via US Army.

Among many other things, war is loud — especially for the infantry. Gunshots, explosions, booms, and bangs are part of a soldier's life, and even a single gunshot can be devastating to hearing. Prolonged exposure to gunfire often causes irreparable harm, and when you consider that the US has over 20 million veterans, the scale of the problem takes on dramatic proportions.

With that in mind, the US Army has developed a hearing aid that not only boosts the hearing of troops in the field but also filters out unwanted noise from the battlefield. The system, known as the Tactical Communication and Protective System (TCAPS), will soon be rolling out to soldiers.

In the past, ear protection was quite rudimentary, and it came at an obvious disadvantage: soldiers lost the ability to hear other useful sounds, like commands. Ear protection also impeded hearing to the point where soldiers couldn’t figure out where sounds were coming from, a vital ability in the heat of battle.

TCAPS is smarter than that — it picks up sounds through a system of microphones and dampens them for the wearer, but in such a way that you can still hear them clearly and figure out where they're coming from. At the same time, the decibel cap allows TCAPS-equipped soldiers to hear the voices of others around them, including radio commands.
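The Army hasn't published TCAPS's actual signal processing, but the core idea of a decibel cap is simple to sketch: leave quiet sounds untouched and scale down anything that exceeds a threshold. Here's a toy illustration; the window size and cap level are arbitrary assumptions, not TCAPS specs:

```python
import numpy as np

def db_cap(samples: np.ndarray, cap_db: float = -10.0) -> np.ndarray:
    """Toy 'decibel cap': quiet audio passes through untouched, loud
    stretches are scaled down so they never exceed the cap."""
    cap_amplitude = 10 ** (cap_db / 20.0)   # dBFS to linear amplitude
    window = 48                             # ~1 ms chunks at 48 kHz
    out = samples.copy()
    for start in range(0, len(out), window):
        chunk = out[start:start + window]
        peak = np.abs(chunk).max()
        if peak > cap_amplitude:            # only loud chunks are touched
            chunk *= cap_amplitude / peak   # scale the chunk down to the cap
    return out

# A quiet voice (amplitude 0.05) followed by a gunshot-like spike (1.0):
audio = np.concatenate([0.05 * np.ones(96), np.ones(96)])
capped = db_cap(audio)
print(capped[0], capped[-1])  # voice untouched, spike attenuated to ~0.32
```

Since each ear would get its own capped feed, level differences between the ears survive the processing, which is presumably part of how the real system preserves the wearer's sense of direction.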

According to Engadget, about 20,000 TCAPS units have been deployed Army-wide, at a price of about $2,000 apiece.

This sounds like an excellent idea, and to be honest I’m surprised something like this hasn’t been introduced before. But what I’d really like to see is the Army roll out a system like this for veterans as well. I’d like to see them take care of soldiers after their service is done, something which the US seems to be still struggling with.

Here’s a video depicting how it works:

How hearing works and other eary functions

I like my ears. I've been told they go well with my face, and they're really good at holding my hair out of my eyes! Yay for ears!

But (spoiler alert) these are not our ears’ primary functions. The workings of our ears’ internal mechanisms underpin two of our senses — hearing and balance (called equilibrioception).

So, have you ever been to a concert and wondered exactly how is it you can hear that mad riff that has hairs standing up the back of your neck? Or why you get dizzy headbanging to it? Well, we’re here to tell you all about your ears.

Image via Flickr.

Hearing all about it

Our sense of hearing evolved to satisfy our need to survey the environment for predators, prey, and natural disasters. While most of us rely on sight as our dominant source of information, that sense is not without its limits. The quality of the information our eyes feed us deteriorates rapidly as light levels drop, and they only cover a small area in front of us — and even there, it's pretty easy to hide from or confuse them. Here's an example:

Somewhere here there’s a snow leopard stalking the goats. Can you find it?

The leopard is literally in plain sight, but it took me around three or four minutes to spot it, and only because I knew it was supposed to be there, so I really looked for it. By mimicking the environment, the predator fooled my brain into writing it off as just another pebble or rock. If I relied on my eyes alone, this slope would appear safe, and the next thing you know, I'm a leopard's chew toy.

That’s why hearing is so important. It allows us to keep tabs on our whole environment, 24/7, no matter where we’re looking or what we’re doing. It’s long-range enough to give us time to react to threats and it works basically everywhere.

Except in space.

The sensory organ that handles hearing is the ear. Through our ears, the brain can pick up pressure waves traveling through air, water, or solids by turning the particle motion into sensory input. However, the flappy piece of tissue most of us call an “ear” is actually the auricle (or pinna in other animals), and it's just a small part of a much larger and more complex mechanism.

The auricle acts like a funnel, capturing sound and directing it into the auditory canal. It also filters sound, so only frequencies you can actually hear are passed down the canal.

At the end of this canal, the sound hits the tympanic membrane, a piece of tissue that you might know as the eardrum. The tympanic membrane serves as the limit between the outer and middle ear. The membrane is thin enough that pressure waves cause it to vibrate, and in turn move three tiny auditory ossicles attached to it (the malleus, incus, and stapes).

These bones amplify the sound vibrations and send them to the cochlea, a snail-shaped structure filled with fluid in the bony labyrinth.

Image via Wikipedia

An elastic membrane runs from the beginning to the end of the cochlea, splitting it into an upper and a lower part. It has a hugely important part to play in our hearing: the vibrations from the eardrum apply pressure to the fluids inside the cochlea, causing ripples to form on the membrane.

This membrane houses sensory cells with bristly structures protruding from them (they're named hair cells because of this), which pick up on the motion as they brush against the upper part of the cochlea. When the “hairs” bend, they open pore-like channels that let chemicals pass through, creating an electrical signal for the auditory nerve to pick up.
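A helpful way to picture what this membrane accomplishes: each patch of hair cells along it responds to its own slice of the frequency spectrum, so the cochlea effectively acts as a bank of band-pass filters. Here's a toy sketch of that idea; the band edges are illustrative, not physiological:

```python
import numpy as np

fs = 16_000                          # sample rate in Hz
t = np.arange(fs) / fs               # one second of audio
# A 440 Hz tone (concert A) plus a quieter 2 kHz tone:
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# Each band stands in for a patch of hair cells along the membrane:
for lo, hi in [(20, 500), (500, 1000), (1000, 4000), (4000, 8000)]:
    energy = spectrum[(freqs >= lo) & (freqs < hi)].sum()
    bars = "#" * int(np.log10(energy + 1) * 4)
    print(f"{lo:>5}-{hi:<5} Hz: {bars}")
# Only the 20-500 Hz and 1-4 kHz 'patches' light up, mirroring how only
# the hair cells at the matching spots along the membrane would fire.
```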

But the ear isn’t just about hearing, it’s also the organ that allows us to keep balance. Balance is the ability to maintain the body’s center of mass over its base of support. While achieving this takes a lot of information from the different senses, the ear’s vestibular system feeds our brain vital information about our body’s position and movement. Kinda like our own personal gyroscopes.

3D image of the cochlea and vestibular system.
Image via Wikipedia

The vestibular system is made up of the three semicircular canals you can see in the picture above. They sit at roughly 90-degree angles to each other and are called the lateral, superior, and inferior canals. Each is filled with liquid that flows in response to the body's movements and pushes on hair cells in a structure called the cupula. Due to their position, each canal is sensitive to one type of movement:

  • The horizontal semicircular canal picks up head movements around a vertical axis, i.e. on the neck (as when doing a pirouette).
  • The anterior and posterior canals detect rotations in the sagittal plane (nodding, for example) and the frontal plane (as when cartwheeling), respectively. Both are oriented at an angle of approximately 45 degrees between the frontal and sagittal planes.

The electrical signals from the cupula are carried through the vestibulocochlear nerve to the cerebellum for processing.
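If the gyroscope comparison appeals to you, the canals' selectivity comes down to geometry: each canal responds to the component of the head's rotation that lies along its own axis, which is just a dot product. Here's a toy model with idealized axes; the real geometry also pairs canals across the two ears:

```python
import numpy as np

# Idealized canal axes (x = left-right, y = front-back, z = vertical):
s = np.sqrt(0.5)
CANALS = {
    "horizontal": np.array([0.0, 0.0, 1.0]),  # senses rotation about the vertical axis
    "anterior":   np.array([ s,   s,  0.0]),  # 45 degrees between the two planes
    "posterior":  np.array([-s,   s,  0.0]),  # the mirrored 45-degree orientation
}

def canal_responses(omega: np.ndarray) -> dict:
    """Project a head rotation (angular velocity, rad/s) onto each canal."""
    return {name: round(float(axis @ omega), 2) for name, axis in CANALS.items()}

print(canal_responses(np.array([0.0, 0.0, 2.0])))  # pirouette
print(canal_responses(np.array([2.0, 0.0, 0.0])))  # nod
print(canal_responses(np.array([0.0, 2.0, 0.0])))  # cartwheel
```

In this sketch, the pirouette registers only on the horizontal canal, while the nod and the cartwheel both engage the anterior/posterior pair; it's the pattern of signs across the pair that tells the two movements apart.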

But, as always, both the sounds and the balance are just…

Products of your brain

Sensory organs are just that: organs that sense stuff. But they can't make heads or tails of the information they gather, just as a microphone feeds information to your PC but doesn't understand it by itself.

Hearing and balance also conform to that rule. The brain decodes the information received from the ears and processes it, mostly in the auditory cortex.

The auditory cortex, shown in pink, with other areas that lend a hand in processing information from our ears colored.

Equilibrium is maintained by the cerebellum (also known as the little brain) by using data from the semicircular canals along with information supplied by other senses.

Gene therapy restores hearing in deaf mice, paving the way for human treatment

In Mark 7:31-34, everyone's favorite Galilean cured a deaf man so that he could hear again. Not ones to be one-upped so easily, researchers injected genetically modified viruses — a procedure known as virotherapy — to replace faulty genes in mice with genetic deafness and help restore their hearing, and the results are promising.

By the look on its furry little face, it’s probably Skrillex.
Image via: head-fi.org

We wrote about how gene therapy was used to restore hearing in guinea pigs and how drugs were used to promote regeneration in mice’s ears. But those trials aimed to treat the effects of noise trauma. Now, researchers tried to restore hearing to mice that suffered from genetic hearing loss.

Some of them could sense and respond to noises after receiving working copies of their faulty genes, researchers report on July 8 in Science Translational Medicine. Because the mice’s mutated genes closely correspond to those responsible for some hereditary human deafness, the scientists hope the results will inform future human therapies.

Inner ear hair cells, responsible for “catching” sound waves, viewed under an electron microscope.
Image via: asbmb.org

“I would call this a really exciting big step,” says otolaryngologist Lawrence Lustig of Columbia University Medical Center.

The ear uses specialized sound-sensing cells, called hair cells, to convert movement in their environment (i.e. sound) into information the brain can process. Hair cells need specific proteins to work properly, and alterations in the genetic blueprints for these proteins can cause deafness.

To combat the effects of two such mutations, the scientists injected viruses containing healthy, functioning versions of the genes into the ears of deaf baby mice. The virus infected some hair cells, giving them working genes.

A mutation causes sound-sensing cells (bright green) to die off quickly in deaf mice, but gene therapy can rescue these cells (right) in mice given a virus that delivers a working gene. Two inner ear locations are shown.
Image via: sciencenews.org

The method was used on mice carrying two different deafness-causing mutations. For one of them, mice showed neural activity indicative of hearing, and even jumped (adorably, probably; the study sadly doesn't say) when exposed to loud noises. Treated mice with the other mutation didn't respond to noises, but the gene therapy helped their hair cells — which normally die off quickly due to the mutation — survive. All of the untreated mice, in the control group, remained deaf.

It is, however, a partial fix. In the mice that responded to the treatment, most of the inner hair cells, which allow basic hearing, took up the new genes, but few outer hair cells, which amplify noises, accepted the viral delivery. It's hard to get outer hair cells to respond to gene therapy, Lustig says. Still, inner hair cells control most sound transmission, he added.

The scientists hope to eventually identify the right virus and genetic instructions to treat all hair cells and get complete recovery of hearing, says study coauthor Jeffrey Holt, a neuroscientist at Boston Children’s Hospital. The team’s immediate goals are to improve the viral infection rate and test if the treatment can last for long time periods, Holt says. He also mentioned that the viruses used to deliver the genes are safe and already used in human gene therapies.

Gene therapies must work as well as existing cochlear implant technologies to become a good treatment option, Lustig adds. But a functioning inner ear would ultimately do a far better job than any cochlear implant could.

“Ultimately, we’ll get there.”


How loud music damages your hearing

Photo: rantlifestyle.com

Listening to loud music has been shown time and time again to damage hearing. The damage becomes more pronounced with age, leading to difficulties in understanding speech. A new analytic study by researchers at the University of Leicester examined the cellular mechanisms that underlie hearing loss and tinnitus triggered by exposure to loud sound.

Music to your ears or …

Dr Martine Hamann, Lecturer in Neurosciences at the University of Leicester, said: “People who suffer from hearing loss have difficulties in understanding speech, particularly when the environment is noisy and when other people are talking nearby.

“Understanding speech relies on fast transmission of auditory signals. Therefore it is important to understand how the speed of signal transmission gets decreased during hearing loss. Understanding these underlying phenomena means that it could be possible to find medicines to improve auditory perception, specifically in noisy backgrounds.”

Tens of millions of people all over the world are affected by hearing loss, with grave social consequences. Often enough, these people become isolated from friends and family because of their impaired ability to understand speech. Everybody has that annoying great uncle who incessantly asks ‘how's school?’ or ‘when are you getting married?’, before exclaiming ‘what, what, what?!’ Hearing loss isn't confined to old age anymore, though, not since the advent of high-power speakers and headphones. It's amazing to me how carelessly some people will stick their heads right in front of a 4,000W speaker for hours. You've seen them at festivals.

In a survey of 2,711 festival-goers in 2008, 84% said they experienced dullness of hearing or ringing in the ears after listening to loud music.

“These are the first signs of hearing damage,” says Donna Tipping from Action on Hearing Loss charity.

“The next morning or a couple of days later, your hearing may gradually return to normal but over time, with continued exposure, there can be permanent damage.”

Donna says the risk of damage to hearing depends on how loud the music is and how long you listen to it.

“If you can’t talk to someone two metres away without shouting, the noise level could be damaging,” she says.
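That loudness-and-duration trade-off has been formalized in occupational guidelines. NIOSH, for instance, recommends at most 8 hours at 85 dBA and halves the safe time for every 3 dB above that. A quick sketch of the rule (a general guideline, not a result from the Leicester study):

```python
def safe_exposure_hours(level_dba: float) -> float:
    """Permissible daily exposure under the NIOSH recommendation:
    8 hours at 85 dBA, halved for every 3 dB above that."""
    return 8.0 / (2 ** ((level_dba - 85.0) / 3.0))

# Festival sound systems commonly push past 100 dBA near the speakers:
for level in (85, 94, 100, 110):
    hours = safe_exposure_hours(level)
    print(f"{level} dBA -> {hours * 60:6.1f} minutes")
# 85 dBA -> 480 min, 94 dBA -> 60 min, 100 dBA -> 15 min, 110 dBA -> ~1.5 min
```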

Previous research showed that following exposure to loud sounds, the myelin coat that surrounds the auditory nerve becomes thinner. The audio signal travels in jumps from one myelin domain to the next, across gaps called Nodes of Ranvier, and exposure to loud sound causes these nodes to become elongated. It wasn't clear, however, whether the hearing loss was due to the change in the physical properties of the myelin itself or to the redistribution of ion channels that follows those changes.

“This work is a theoretical work whereby we tested the hypothesis that myelin was the prime reason for the decreased signal transmission. We simulated how physical changes to the myelin and/or redistribution of channels influenced the signal transmission along the auditory nerve. We found that the redistribution of channels had only small effect on the conduction velocity whereas physical changes to myelin were primarily responsible for the effects,” Dr. Hamann said.
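To see why the myelin itself matters so much for transmission speed, here's a drastically simplified sketch of saltatory conduction. This is not the authors' biophysical model, and the constants are invented to produce plausible numbers; the point is only the qualitative mechanism: the signal hops from node to node, and a longer node means more membrane to charge per hop, so each hop takes longer and the overall velocity drops.

```python
def conduction_velocity(internode_um: float, node_um: float,
                        charge_ms_per_um: float = 2e-3) -> float:
    """Toy velocity in m/s: distance per hop divided by time per hop.
    Charging time is assumed proportional to nodal membrane length."""
    hop_distance_m = internode_um * 1e-6            # micrometres to metres
    hop_time_s = node_um * charge_ms_per_um * 1e-3  # milliseconds to seconds
    return hop_distance_m / hop_time_s

healthy = conduction_velocity(internode_um=300, node_um=1.5)
elongated = conduction_velocity(internode_um=300, node_um=3.0)
print(f"healthy node:   {healthy:.0f} m/s")    # ~100 m/s
print(f"elongated node: {elongated:.0f} m/s")  # doubling node length halves speed
```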

The research adds further strength to the link between myelin sheath deficits and hearing loss. This is the first time a simulation has been used to assess the physical changes to the myelin coat based on previously gathered morphological data. Armed with these findings, published in the journal Frontiers in Neuroanatomy, scientists now better understand not only how auditory perception can become dull, but also what makes for good hearing. Translated into practice, the research suggests targeting these deficits; namely, promoting myelin repair after acoustic trauma or during age-related hearing loss.

[RELATED] Hearing restored in gerbils following stem cells treatment

A personal note: while summer's almost gone, there are still some festivals where you might be exposed to loud music. And there are always loud clubs, open no matter the season, whether you like it or not. The best protip I can offer is to wear earplugs. I can't stress this enough. These simple tools, highly effective and cheap, can protect you from the excess decibels monster speakers throw at you, all while preserving sound quality.


Hearing restored in mice after hair cells were regenerated through drug

Hearing loss is a grave healthcare problem around the world, with 50 million cases in the US alone. The most common type is sensorineural hearing loss, caused by the degradation and loss of sensory hair cells in the cochlea (the auditory part of the inner ear). While implants and various other hearing aids can improve hearing somewhat, significant improvement cannot be achieved, since these sensory hair cells do not regenerate in mammals. In a remarkable breakthrough, scientists at Massachusetts Eye and Ear and Harvard Medical School have developed a drug that can regenerate sensory hair cells in mouse ears damaged by noise trauma.

Fluid movement in the inner ear (cochlea), set off by propagating sound waves, deflects tiny structures called hair cells. As these hair cells move, electrical signals are sent from the cochlea up the auditory nerve to the brain, where they are converted into the information we commonly refer to as sound. Hair cell loss results from noise exposure, aging, toxins, infections, and certain antibiotics and anti-cancer drugs.

“Hair cells are the primary receptor cells for sound and are responsible for the sense of hearing,” explains senior author, Dr. Albert Edge, of Harvard Medical School and Mass. Eye and Ear. “We show that hair cells can be generated in a damaged cochlea and that hair cell replacement leads to an improvement in hearing.”

Fighting deafness

Birds and fish can regenerate their damaged or lost hair cells; mammals cannot. The researchers tested their novel drug by applying it to the cochlea of deaf mice, which had their hearing impaired by sound trauma. The drug works by inhibiting an enzyme called gamma-secretase, which activates a number of cellular pathways. Applied to the cochlea, the drug inhibited a signal generated by a protein called Notch on the surface of the cells that surround hair cells. Upon treatment, these supporting cells turned into new hair cells.

After the drug was administered, significant hearing improvements were observed in the mice, and further observations traced the improved hearing to the areas in which supporting cells had become new hair cells. The breakthrough is the latest in a slew of studies demonstrating hearing improvements in mammals — previously, we reported how mice's hearing improved after scientists injected their cochleas with nasal stem cells, how hair cells were regenerated in gerbils, again through stem cells, and how gene therapy produced similar results in guinea pigs.

“The missing hair cells had been replaced by new hair cells after the drug treatment, and analysis of their location allowed us to correlate the improvement in hearing to the areas where the hair cells were replaced,” Dr. Edge said.

“We’re excited about these results because they are a step forward in the biology of regeneration and prove that mammalian hair cells have the capacity to regenerate,” Dr. Edge said. “With more research, we think that regeneration of hair cells opens the door to potential therapeutic applications in deafness.”

Findings were documented in the journal Science.

source: Massachusetts Eye and Ear 


Hearing restored in gerbils by stem cell treatment – might work for the human ear, too

In an exceptional feat of medical and technical ingenuity, scientists have restored partial hearing to deaf gerbils by implanting modified human embryonic stem cells in their ears. The success rate is encouraging and offers solid ground on which human trials of a similar treatment might commence.

Human stem cell-derived otic neurons repopulating the cochlea of deaf gerbils. Human cells are labelled green, and the red is a marker of neuronal differentiation. Therefore yellow cells are neurons of human origin. (c) University of Sheffield

There are many causes of hearing loss. The leading cause by far is damage to special cells inside the ear equipped with hairs that sense vibrations and transmit them, through a neural connection, to the brain to be decoded as sound. Another cause, experienced by 10% of the approximately 275 million people worldwide suffering from some form of hearing loss, is a condition called auditory neuropathy: the impairment of auditory neurons.

Targeting this specific cause of deafness, researchers at the University of Sheffield, UK implanted human stem cells into 18 gerbils whose auditory nerves had been rendered nonfunctional in the lab. In the first phase, undifferentiated embryonic stem cells were cultured with specific chemicals to grow into auditory neurons. These were implanted into the gerbils' ears, and after a mere 10 weeks the first signs of success surfaced: the neurons had grown fibers that reached the brainstem. To test whether any hearing progress had been made, the gerbils were subjected to sound waves while electrodes attached to their skulls measured their brain waves for responses.

Overall, an estimated 46 percent increase in sensitivity was recorded, although the progress was rather inconsistent. A third responded exceptionally well, with some regaining 90 percent of their hearing, while another third showed almost no recovery at all. Still, for a human suffering from hearing loss, even the slightest progress could mean a shot at living a normal life. But can the procedure be transferred to humans in the first place?

Well, for one, scientists have already managed to grow both auditory neurons and hair cells in stem cell cultures. The tricky part lies in the implant procedure itself. Unfortunately, hair cells require a very specific and precise orientation in the inner ear to function properly, and implanting them precisely and safely is a great technical challenge, one that many leading experts around the world view as unreachable at this time.

“This is promising research that demonstrates further proof-of-concept that stem cells have the potential to treat a range of human diseases that currently have no effective cures. While any new treatment is likely to take years to reach the clinic, this study clearly demonstrates that investment in UK stem cell research and regenerative medicine is beginning to bear fruit,” said Dr. Paul Colville-Nash, Program Manager for stem cell, developmental biology and regenerative medicine at the Medical Research Council.

This latest research showing promising results for stem cell treatment, coupled with earlier independent work that used gene therapy to stimulate the regeneration of hair cells in the cochlea, offers a much-needed ray of hope to deaf patients around the world.

source