Evolution can oftentimes be unpredictable. Around 230 million years ago, a dog-sized, meat-eating dinosaur named Buriolestes schultzi roamed Brazil’s forests. A hundred million years later, this small dinosaur’s cousins, such as Diplodocus and Brachiosaurus, grew to lengths of tens of meters and weights of tens of tons.
In many lineages, relative brain size tends to increase with time – but not in this case. According to paleontologists who performed one of the most accurate brain reconstructions of a dinosaur to date, Buriolestes schultzi’s brain weighed approximately 1.5 grams, about as much as a pea. Its humongous four-legged cousins’ brains, however, were no larger than a tennis ball, much smaller than you’d expect for their size.
The 3D reconstruction was based on three skulls unearthed by Dr. Rodrigo Temp Müller, a Brazilian paleontologist from the Universidade Federal de Santa Maria. Along with colleagues from the Universidade de São Paulo, the researchers employed computed tomography (CT) to draw inferences about the ancient dinosaur’s brain from the cavity left behind.
The small Triassic carnivore was one of the earliest dinosaurs, and it shows in its primitively shaped brain, which resembles the morphology of a crocodile’s brain.
However, Buriolestes had well-developed structures in the cerebellum, indicating a superior ability to track moving prey. It likely hunted using its eyes more than its nostrils, as its olfactory bulb was relatively small, suggesting that smell wasn’t all that important. Conversely, olfactory bulbs grew very large in later sauropods and other closely related giant dinosaurs.
A strong sense of smell is associated with complex social behavior in many species. Olfaction also plays an important role in foraging, helping herbivores distinguish between digestible and indigestible plants.
In time, Buriolestes’ lineage transitioned to a plant-eating diet, which may explain why its brain-to-body size ratio actually decreased. Carnivorous animals generally need more cognitive power in order to detect prey, as well as other predators. For slow-moving sauropods, brainpower wasn’t at such a premium.
Indeed, when the researchers calculated Buriolestes schultzi’s cognitive capability based on its brain volume and body weight, they found higher values than those seen in giant sauropods. However, the cognitive value was lower than that of theropod dinosaurs, suggesting that Buriolestes wasn’t smarter than T. rex or Velociraptor.
Our knowledge of very early dinosaurs is lacking, most paleontologists agree. This is why this study is so important, offering a rare window into the evolution of the brain and sensory systems of one of the earliest dinosaurs, and later some of the largest animals ever to walk on land.
The best actors leave their true selves at home and step onto the stage in the shoes of someone who often lives an entirely different life. According to a new study, when actors truly play a character, they effectively turn off a part of the brain that is responsible for conjuring a sense of self.
When the character takes over
Steven Brown studies the cognitive and neural foundations of music, dance, and other art forms at McMaster University in Canada. Previously, Brown showed that music employs a number of mechanisms for conveying emotion, including the use of contrastive scale types — Westerners are familiar with the major/minor distinction, for instance. Brown’s research also showed that music and language sit ‘side by side’ in the brain, sharing “combinatoric generativity for complex sound structures (phonology) but distinctly different informational content (semantics).”
In a new study, Brown and colleagues were among the first to scan the brains of actors while they were acting. They recruited university-trained actors who were placed in an MRI machine and asked to respond to questions in four different ways: as themselves, as themselves with a British accent, as a friend, and as if they were either Romeo or Juliet.
Each scenario led to different brain activity patterns. For instance, when the actors had to think about how a friend might reply to a question, they experienced a drop in brain activity in particular areas of the prefrontal cortex. This is in line with previous experiments involving the “theory of mind” – the ability to infer how other people might be thinking or feeling.
When the actors embraced the Shakespearean role, researchers noticed patterns similar to those seen when they replied as a friend. In addition, however, the participants also experienced a reduction in activity in two regions of the prefrontal cortex associated with a sense of self. Typically, artistic activities such as playing music or dancing increase brain activity, so researchers were surprised to find that acting suppressed it.
According to Brown, actors may be losing their sense of self when playing a role. In fact, the scientist draws parallels to indigenous possession ceremonies he witnessed during a trip to Brazil. That’s not to say that actors are ‘possessed’ – they are still themselves even when on stage. But Brown suspects that the two types of persons – one on stage, the other in the Amazon jungle – might share similar brain patterns. This line of research is still in its infancy, so we’re certain to find out more as similar studies are conducted.
A new generation of brain scanner can be worn like a helmet, allowing patients to move naturally while being scanned. Credit: Wellcome Trust Centre for Human Neuroimaging at UCL.
It looks like something out of a sci-fi blockbuster, but this one-of-a-kind helmet might revolutionize human brain imaging. Using this lightweight magnetoencephalography (MEG) system, scientists can measure the brain activity of participants while they perform natural movements like nodding, stretching, interacting with other people, or playing sports. This used to be impossible with traditional, fixed, and cumbersome MEG systems, which require participants to stay completely still while brain activity is measured. As such, this wacky-looking helmet could reveal new insights into the neural pathways and mechanisms of the human brain that would otherwise have been impossible to ascertain.
Neurons communicate with each other using chemicals called neurotransmitters. However, to transmit the actual message from the receiving neuron’s dendrites to its own axon terminals, a different medium is used: electricity. When a neurotransmitter such as dopamine triggers the receiving neuron to fire, it sends an electrical “action potential” along its length, similarly to how an electrical pulse flows down a metal wire. Instead of electrons moving through a circuit, an action potential in a neuron occurs because ions move across the neuronal membrane.
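The firing behavior described above can be caricatured with a leaky integrate-and-fire model, a standard textbook simplification of the action potential (the parameter values below are illustrative defaults, not measurements from any study mentioned here):

```python
# Minimal leaky integrate-and-fire neuron: the membrane voltage rises with
# input current and "fires" an action potential when it crosses a threshold,
# then resets. Parameter values are illustrative, not measured.

def simulate_lif(current, dt=0.1, v_rest=-70.0, v_thresh=-55.0,
                 v_reset=-75.0, tau=10.0, r_m=10.0):
    """Return the number of spikes over 200 ms for a constant input current.

    dt and tau in ms; voltages in mV; r_m is membrane resistance (MOhm).
    """
    v = v_rest
    spikes = 0
    steps = int(200 / dt)              # simulate 200 ms of activity
    for _ in range(steps):
        # Leak pulls v back toward rest; input current pushes it up.
        dv = (-(v - v_rest) + r_m * current) / tau
        v += dv * dt
        if v >= v_thresh:              # threshold crossed: fire and reset
            spikes += 1
            v = v_reset
    return spikes

# A stronger input drives the neuron to fire more often.
assert simulate_lif(0.0) == 0                     # below threshold: silent
assert simulate_lif(3.0) > simulate_lif(2.0) > 0  # more current, more spikes
```

The ions crossing the membrane are what the model's "input current" stands in for; MEG picks up the tiny magnetic fields produced by many such currents acting in concert.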
This current generates weak magnetic fields which can be detected right outside the scalp. A MEG measures these magnetic fields, allowing scientists to see which parts of the brain are engaged during certain tasks. Doctors often rely on MEG to plot a roadmap of the brain that is useful for preoperative and treatment planning for individuals with epilepsy, as well as for patients undergoing surgery to remove a brain tumor or other lesions. In research, MEG has proven indispensable for scientists looking to understand human brain function, as well as neurological and psychiatric disorders.
A typical MEG setup. The difference is striking. Credit: Wellcome Trust Centre for Human Neuroimaging at UCL.
Now, British researchers at the University of Nottingham and University College London have come up with a revolutionary new setup for a wearable MEG. Unlike the MEG systems used in practice today, which are incredibly large and weigh half a tonne, the new system is basically a helmet that can be worn while the user moves – something that would normally cause serious imaging problems in a conventional setup.
“The scanner measures electrophysiological brain function – it allows us to pinpoint, with spatial accuracy of a few millimetres and temporal accuracy of a few milliseconds, which parts of the brain are involved when we undertake specific tasks. It can do this in an environment where subjects are free to move around. This is a step change for neuroscientific research, with neuroscientists able to study the brain in a whole new way,” Dr. Matt Brookes, who leads the MEG work in Nottingham, told ZME Science.
The crux of the innovation lies in the new ‘quantum’ sensors that are mounted in a 3-D printed prototype. These sensors are lightweight and can work at room temperature, whereas the sensors employed by a typical MEG have to be kept very cold (-269°C), hence the bulky configuration. The closer these quantum sensors are to the scalp, the better the brain activity signal they can pick up.
However, the sensors are so sensitive that they pick up interference from Earth’s magnetic field. The team of researchers solved this problem by developing special electromagnetic coils, which reduced Earth’s magnetic field around the scanner by a factor of 50,000.
“One of the biggest challenges was that, in order to develop the system such that subjects could move their head, we had to null the Earth’s magnetic field in a region surrounding the head. Our system is housed in a magnetically screened room, which reduces the earths field by approximately a factor of 2,000, but that wasn’t good enough – we needed a factor of ~50,000. To do this we were able to design and build a novel set of electromagnetic coils. These coils had to be designed to sit on planes either side of the subject, so as not to enclose the person being scanned, or make them claustrophobic. The coils that we designed and built were able to almost completely remove Earth’s field, thus enabling the sensors to operate and imaging data to be captured whilst the subject moved their head,” Brookes explained.
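To put those shielding factors in perspective, here is a back-of-the-envelope calculation, taking Earth's field as roughly 50 microtesla (a typical textbook value, not a figure quoted by the researchers):

```python
# Back-of-the-envelope: how much of Earth's field survives each stage.
# 50 microtesla is a typical textbook value for Earth's field, used here
# purely for illustration.
earth_field_nT = 50_000.0        # ~50 uT, expressed in nanotesla

room_factor = 2_000              # shielded room alone (from the quote)
total_factor = 50_000            # what the sensors require (from the quote)

after_room = earth_field_nT / room_factor
after_coils = earth_field_nT / total_factor

# The coils must supply the remaining factor on top of the room.
coil_factor = total_factor / room_factor

assert coil_factor == 25         # coils add another ~25x of attenuation
assert after_room == 25.0        # ~25 nT left after the room alone
assert after_coils == 1.0        # ~1 nT once the coils kick in
```

In other words, the screened room does most of the heavy lifting, but the coils still have to strip away another factor of about 25 before the quantum sensors can operate.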
Credit: Wellcome Trust Centre for Human Neuroimaging at UCL.
The new wearable MEG will likely prove revolutionary for research and will make scanning possible for patients who traditionally couldn’t use a MEG scanner, such as young children with epilepsy or patients suffering from neurodegenerative disorders like Parkinson’s disease. Tests so far suggest that the MEG helmet works just as well as its fixed counterpart, although it will require further tweaking in order to capture higher-frequency activity.
“The only minor limitation compared to traditional systems is bandwidth. The overwhelming majority of brain activity that MEG measures is in the 1-100Hz band – well within the scope of our sensors. However, there are effects in the brain at a higher frequency (e.g. 300Hz). This would be a challenge for our system – however future developments in quantum sensing may make this possible,” Brookes wrote in an email.
Besides improving bandwidth, the researchers will further refine their prototype so it is more “patient-friendly.” Hopefully, clinical trials will follow soon after.
“This is very much a prototype system – a one of a kind proof of principle machine. Our next step is to make it more patient (and in particular more child) friendly. To do that we intend to make the helmets less intimidating – constructing them from a flexible material so they become more like a ‘scrum-cap’ worn by rugby players. In that way we hope to construct more generic helmets that fit anyone, rather than bespoke helmets that only fit one person,” Brookes said.
Scientists have developed a surprisingly accurate mechanism of predicting autism — using a single brain scan.
The findings indicate that autism has a biological component. Image credits: Carolina Institute for Developmental Disabilities.
Predicting the unpredictable
There’s still a lot of disagreement and debate regarding the nature and causes of autism. We do know that it is a spectrum disorder: all autistic people experience some level of difficulty, but the severity and nature of these problems differ greatly. Autism is caused by a combination of genetic and environmental factors, with some instances associated with certain pregnancy infections or drug abuse, and others having no clear source. Diagnosing autism is difficult because the diagnosis is based on behavior, not on a specific cause or mechanism. Considering that autism affects an estimated 1 in 68 children (1 in 42 boys and 1 in 189 girls), the possibility of not only diagnosing but predicting it through a simple brain scan is truly exciting.
“We have been trying to identify autism as early as possible, most importantly before the actual behavioural symptoms of autism appear,” says team member Robert Emerson of the University of North Carolina at Chapel Hill.
He and his colleagues have developed an algorithm that analyzed brain scans of 6-month-old children and predicted, with almost perfect accuracy, which of them would develop autism.
For the study, they focused on babies who had older siblings with autism, which put them at a higher risk of developing the condition themselves; they settled on 59 infants aged approximately 6 months. They carried out a single brain scan (also a significant reduction from previous studies, and one that can be performed while the babies sleep), which gathered data from 230 brain regions, covering the 26,335 connections between them. Out of all these connections, researchers identified 974 possibly linked with autism, and fed those into a machine learning algorithm. The results were impressive.
“When the classifier determined a child had autism, it was always right. But it missed two children. They developed autism but the computer program did not predict it correctly, according to the data we obtained at six months of age,” said Emerson.
The algorithm predicted that 9 of them would develop autism, and they did. Still, two more also developed the condition, which it didn’t catch. But the fact that it produced no false positives is extremely encouraging and could pave a new way for autism treatment and management. If parents know from 6 months of age that a child is highly likely to develop an autistic condition, they can start preparing accordingly, building a proper environment and suitable therapies.
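For readers who think in classifier terms, the reported numbers can be restated as a confusion matrix; the figures below come straight from the article (59 infants, 9 correct flags, no false alarms, 2 misses):

```python
# Confusion-matrix view of the numbers reported in the article:
# 59 infants, 11 eventually diagnosed, 9 flagged by the classifier,
# every flag correct, 2 cases missed.
tp = 9                   # flagged and developed autism
fp = 0                   # flagged but did not develop autism ("always right")
fn = 2                   # developed autism but was not flagged
tn = 59 - tp - fp - fn   # the remaining infants

precision = tp / (tp + fp)     # how trustworthy a positive flag is
sensitivity = tp / (tp + fn)   # share of true cases actually caught

assert tn == 48
assert precision == 1.0                  # no false positives at all
assert round(sensitivity, 2) == 0.82     # 9 of 11 cases caught
```

A precision of 1.0 with a sensitivity around 82% is exactly the trade-off the article describes: when the classifier raised a flag it was always right, at the cost of missing two cases.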
Vaccines don’t cause autism
This study also carries another interesting conclusion: if autism can be detected through a brain scan, it means that it has a biological component and is not fully environmental. Also, since the test was done before any vaccines were administered, it invalidates (once more) the theory that vaccines cause autism. This had already been debunked several times, but for some reason, many people still seem to believe it.
“The study confirms that autism has a biological basis, manifest in the brain before behavioural symptoms appear, and that autism is not due to environmental effects that occur after 6 months, for example, vaccinations,” says Uta Frith of University College London. “This still needs pointing out.”
Of course, although the results are encouraging, this is still a relatively small sample size, and it’s not clear how the algorithm would fare for different types of babies (with different brains). The researchers will try to replicate the new findings on a larger sample. The study comes on the heels of an earlier study that used two scans, at ages of 6 and 12 months, and had similar results in terms of accuracy.
“The more we understand about the brain before symptoms appear, the better prepared we will be to help children and their families,” said researcher Joseph Piven, also from the University of North Carolina.
Journal Reference: Robert W. Emerson et al — Functional neuroimaging of high-risk 6-month-old infants predicts a diagnosis of autism at 24 months of age. DOI: 10.1126/scitranslmed.aag2882
Neuroscientists at the University of Western Ontario in London, Canada, found that a man thought to have been living in a vegetative state for almost twenty years showed a response when an Alfred Hitchcock movie was played in the background. The findings suggest that doctors might want to explore more methods of determining whether a seemingly vegetative patient is actually aware or not.
Listening in the background
The patient in question was hit by a blow to the chest during a fight as a teenager, which cut off the blood supply to his brain. After he spent three weeks in a coma, doctors proclaimed him to be in a vegetative state, since he seemed completely oblivious to his surroundings, even though he would sometimes open his eyes. The incident happened in 1997, and his condition hasn’t improved in the slightest since.
Lorina Naci, a neuroscientist at the University of Western Ontario, had previously shown that it is possible to screen for awareness in unresponsive individuals by asking them to follow commands, such as imagining playing tennis, then measuring brain activity using an fMRI machine. Though successful on a few occasions, the method is obviously limited by the fact that it requires brain-damaged people to respond to commands and pay attention. So, Naci and team decided to employ a more passive means of gaining attention: watching television.
To make it easier to both elicit and gauge attention using brain scanning techniques, the researchers decided to go for a thrilling movie, and what better choice than one made by the master of suspense himself, Alfred Hitchcock. An episode from the TV series Alfred Hitchcock Presents called “Bang! You’re Dead” was selected because of its simple yet suspenseful plot: a young boy playing with his toy gun finds a real revolver in his uncle’s drawer and starts pointing it at his mother and at a little girl in the supermarket. The viewer knows that the gun is loaded, so there are a lot of moments that elicit powerful emotions.
Before presenting it to patients, the researchers first made test rounds with 12 volunteers with normally functioning brains. While watching the clip, the participants had their brain activity recorded with functional magnetic resonance imaging (fMRI). During the moments of greatest suspense, their frontoparietal brain regions lit up like crazy. This region is known to orchestrate attention. The participants were also asked about their subjective experience, and their accounts closely matched what researchers could objectively witness in the brain scans – feelings of anxiety and dread.
The team then proceeded with tests on actual brain-impaired patients: a 20-year-old female patient who fell into a coma in 2007 after suffering brain damage of unknown origins, and the 35-year-old man. Both patients had their eyes open during the show. After analyzing the brain scans, researchers found the young woman showed no brain activity in response to the film. The male patient, however, displayed peaks and valleys of brain activity that closely matched those of the healthy volunteers, suggesting that he might have been following the plot.
The findings suggest that not all seemingly vegetative patients are perpetually unaware – some of them might actually be tuning in. It’s still too early to say whether this method could prove successful for most apparently unaware and unresponsive brain injury patients, however. For the patient’s family, the results are definitely more than just another science experiment. His father used to take him to the movies once a week for years after his injury, but eventually stopped because he thought it was all useless. Now, there’s hope yet.
A common brain myth is that creative people, like artists, use the right side of their brain more, while the left side is more active in rational people. This has been debunked many times, and a few searches on Google will satisfy your curiosity if you think otherwise. It’s true, however, that artists and creative people in general have structurally different brains from those of less creative people. Does this mean that talent matters more than practice and environment in becoming a successful artist? The truth may lie somewhere in the middle.
An artist’s brain
Rebecca Chamberlain from KU Leuven, Belgium, led a recent study which compared the brains of 21 art students with those of 23 non-artists using a brain scan method called voxel-based morphometry. The findings showed that the artists had more grey matter in an area of the brain called the precuneus in the parietal lobe, a region involved in the control of fine motor performance and what neuroscientists call procedural memory.
“This region is involved in a range of functions but potentially in things that could be linked to creativity, like visual imagery – being able to manipulate visual images in your brain, combine them and deconstruct them,” Dr Chamberlain said.
Grey matter is a type of neural tissue found in the brain and spinal cord. It is named after its distinctive brownish-grey color, in contrast with white matter, another type of neural tissue, which appears white because it is coated in myelin sheaths. The two don’t differ by colour alone – grey matter is largely composed of nerve cells, while white matter is responsible for communication between grey matter regions. Many people associate grey matter with intelligence and intellect because it is a major component of the brain, leading to slang terms like “use those grey cells.”
Study participants were invited to draw, then brain scans were performed.
It’s not entirely certain what an enhanced grey matter concentration in a particular brain region means, but previous findings suggest that such individuals have better processing in those areas.
Is this talent or practice? Hard to tell
Another author of the paper, Chris McManus from University College London, said it was difficult to distinguish what aspect of artistic talent was innate or learnt.
“We would need to do further studies where we look at teenagers and see how they develop in their drawing as they grow older – but I think [this study] has given us a handle on how we could begin to look at this.”
Concerning the left- and right-brain myth, the present study offers its own proof that this idea is wrong, since increased grey and white matter were found in the art group in both left and right structures of the brain.
Like a nail that refuses to be hammered down, Alzheimer’s has been irritating neuroscientists for decades. After so many years and billions’ worth of research, the underlying causes and mechanisms of the gruesome neurodegenerative disease have yet to be identified, though hints suggest genetics have a major role to play – never mind a cure! Clearly, Alzheimer’s is formidable, and while we’ve yet to fully understand it, scientists are doing their best, and every year there seems to be a new piece added that might one day fit the whole puzzle.
For instance, a team of researchers at Stanford confirmed earlier findings suggesting that a genetic variant makes women more prone to the disease than men. This is evidence that the disease affects the sexes unequally and suggests that future treatments should be gender-specific.
That’s a really big difference, but for some reason the findings didn’t become widely known. Michael Greicius, a neurologist at Stanford University Medical Center in California, re-discovered the findings in 2008 and decided they were worth a new investigation. He and his team first performed neuroimaging on patients and found from the brain scans that women with the APOE4 variant had poor connectivity in brain networks typically afflicted by Alzheimer’s, even though no Alzheimer’s symptoms were present in the first place. This was fishy.
A more comprehensive view
Greicius and colleagues decided they would have to perform a longitudinal study to see the full extent of this genetic variance, so they pulled data from 2,588 people with mild cognitive impairment and 5,496 healthy elderly people who visited national Alzheimer’s centers between 2005 and 2013. Every participant was logged according to genotype (did they carry APOE4 or APOE2?) and gender. Most importantly, each participant was surveyed in follow-up studies to see if the mild impairments had grown into full-blown Alzheimer’s.
Confirming that APOE4 is a risk gene, male and female participants with mild cognitive impairment who carried the gene variant progressed to Alzheimer’s disease equally, and more readily than those without the gene. However, among healthy seniors, women who inherited the APOE4 variant were twice as likely as noncarriers to develop mild cognitive impairment or Alzheimer’s disease, whereas APOE4 males fared only slightly worse than those without the gene variant. This is a full step ahead of the previous 1997 study because it tells us more about how the gene variant potentially leads to Alzheimer’s, especially in women.
The findings will most likely have significant implications for how Alzheimer’s is treated. Interestingly enough, some previous studies, according to the researchers, have shown side effects when treating patients who carry the APOE4 variant, but those studies were not subdivided according to gender. Moreover, it’s possible that some treatments are more effective at treating symptoms in men than in women, and this is definitely worth taking into account.
After growing up blind from birth, a man has now taken up a peculiar hobby: photography. Were it not for the efforts of a group of researchers who devised a system that converts images into sequences of sound, this newfound pastime would have been impossible. Hobbies aside, the technology is particularly impressive, and judging from the stream of data reported thus far, it could prove to be a marvelous system for everyday use, helping the blind navigate their surroundings, recognize people, and even appreciate visual arts – all through sound.
It all began in 1992, when a Dutch engineer called Peter Meijer invented vOICe – an algorithm that converts simple grayscale, low-resolution images into sounds that resolve into a unique, discernible pattern for the trained ear. As the algorithm scans from left to right, each pixel or group of pixels has a corresponding frequency (higher positions in the image correspond to higher acoustic frequencies). A simple image, for instance one showing only a diagonal line stretching upward from left to right, becomes a series of ascending musical notes, while a more complicated image, say a man leaning on a chair, turns into a veritable screeching spectacle.
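The mapping described above (left-to-right scanning, height-to-pitch) can be sketched in a few lines of code. This is a toy illustration of the principle, not Meijer's actual vOICe implementation; the frequency range and logarithmic spacing are assumptions:

```python
# Toy sketch of the vOICe idea: scan a grayscale image column by column
# (left to right), give each row a pitch (higher rows -> higher
# frequencies), and let pixel brightness set the loudness of that pitch.
# Frequency range and log spacing are made-up illustrative values.

def sonify_column(column, f_min=500.0, f_max=5000.0):
    """Map one image column (list of 0..1 brightness values, top row
    first) to a list of (frequency_hz, amplitude) partials."""
    n = len(column)
    partials = []
    for row, brightness in enumerate(column):
        # Top of the image (row 0) gets the highest frequency.
        frac = 1.0 - row / max(n - 1, 1)
        freq = f_min * (f_max / f_min) ** frac   # log-spaced pitches
        if brightness > 0:
            partials.append((freq, brightness))
    return partials

# A diagonal line from bottom-left to top-right: each successive column
# lights one row higher, so the dominant pitch ascends over time, just
# like the "series of ascending musical notes" in the article.
size = 8
image = [[1.0 if (size - 1 - r) == c else 0.0 for c in range(size)]
         for r in range(size)]
columns = [[image[r][c] for r in range(size)] for c in range(size)]
pitches = [sonify_column(col)[0][0] for col in columns]
assert pitches == sorted(pitches)   # strictly ascending notes
```

Playing each column's partials in sequence for a few tens of milliseconds apiece would yield the rising sweep described in the article; a real system would additionally synthesize and mix the waveforms.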
Amir Amedi and his colleagues at the Hebrew University of Jerusalem took things further and made vOICe portable, while also studying the participants’ brain activity for clues. They recruited people who had been blind from birth, and after just 70 hours of training, despite having no visual experience to draw on, the individuals went from “hearing” simple dots and lines to “seeing” whole images such as faces and street corners composed of 4,500 pixels. Mario on the Nintendo only has 192 pixels and still felt freaking realistic sometimes (was that just me as a kid or what?).
Seeing with sound
Using head-mounted cameras that communicated with the vOICe technology, the blind participants could then navigate their surroundings and even recognize human silhouettes. To prove they could sense the silhouettes accurately, the participants mimicked their stances.
Things turned really interesting when the researchers analyzed the brain activity data. The traditional sensory-organized brain model says the brain is organized in regions each devoted to certain senses. For instance, the visual cortex is used for sight processing; in the blind, where these areas aren’t used conventionally, these brain regions are re-purposed to boost some other sense, like hearing. Amedi and colleagues found, however, that the area of the visual cortex responsible for recognizing body shapes in sighted people was signaling powerfully when the blind participants were interpreting the human silhouettes. Neuroscientist Ella Striem-Amit of Harvard University, who co-authored the paper, thinks it’s time for a new model. “The brain, it turns out, is a task machine, not a sensory machine,” she says. “You get areas that process body shapes with whatever input you give them—the visual cortex doesn’t just process visual information.”
“The idea that the organization of blind people’s brains is a direct analog to the organization of sighted people’s brains is an extreme one—it has an elegance you rarely actually see in practice,” says Ione Fine, a neuroscientist at the University of Washington, Seattle, who was not involved in the study. “If this hypothesis is true, and this is strong evidence that it is, it means we have a deep insight into the brain.” In an alternative task-oriented brain model, parts of the brain responsible for similar tasks—such as speech, reading, and language—would be closely linked together.
The team also devised a vOICe version that can be run as a free iPhone app, called EyeMusic. The researchers demonstrated that using the app, blind participants could recognize drawn faces and distinguish colours. The video below showcases the app. The study was reported in the journal Current Biology.
A rather debatable theory in psychology says the brain detects grammar errors even when we don’t consciously pay attention to them, as if working on autopilot. Now, researchers at the University of Oregon have come up with tangible evidence supporting this idea after performing a brain scan study.
The team of psychologists, led by postdoctoral researcher Laura Batterink, invited native English-speaking people aged 18–30 to read various sentences, some of which contained grammatical errors, and signal whether each was correct or not. Throughout the task, the participants had their brain activity recorded using electroencephalography, from which researchers focused on a signal known as the event-related potential (ERP).
Subjects were given 280 experimental sentences, including some that were syntactically (grammatically) correct and others containing grammatical errors, such as “We drank Lisa’s brandy by the fire in the lobby,” or “We drank Lisa’s by brandy the fire in the lobby.” In order to create a distraction and make participants less aware, a 50 millisecond audio tone was also played at some point in each sentence. A tone appeared before or after a grammatical faux pas was presented. The auditory distraction also appeared in grammatically correct sentences.
“Participants had to respond to the tone as quickly as they could, indicating if its pitch was low, medium or high,” Batterink said. “The grammatical violations were fully visible to participants, but because they had to complete this extra task, they were often not consciously aware of the violations. They would read the sentence and have to indicate if it was correct or incorrect. If the tone was played immediately before the grammatical violation, they were more likely to say the sentence was correct even if it wasn’t.”
Your brain: a grammar nazi
The researchers found that when the tones appeared after grammatical errors, subjects detected 89 percent of the errors, but when the tones appeared before the grammatical errors, subjects detected only 51 percent of them. It’s clear the tone disrupted the participants’ attention. Even so, while the participants weren’t consciously aware of the grammar errors, their brains picked them up, generating an early negative ERP response. These undetected errors also delayed participants’ reaction times to the tones.
“Even when you don’t pick up on a syntactic error your brain is still picking up on it,” Batterink said. “There is a brain mechanism recognizing it and reacting to it, processing it unconsciously so you understand it properly.”
“While other aspects of language, such as semantics and phonology, can also be processed implicitly, the present data represent the first direct evidence that implicit mechanisms also play a role in the processing of syntax, the core computational component of language.”
These findings might warrant changes in the way adults learn new languages. Children, for instance, learn to speak a language and pick up its grammatical structure simply through routine daily interactions with parents or peers, hearing and processing new words and their usage before any formal instruction.
“Teach grammatical rules implicitly, without any semantics at all, like with jabberwocky. Get them to listen to jabberwocky, like a child does,” said Neville, referring to “Jabberwocky,” the nonsense poem introduced by writer Lewis Carroll in 1871 in “Through the Looking Glass,” where Alice discovers a book in an unrecognizable language that turns out to be written inversely and readable in a mirror.
The findings were detailed in the Journal of Neuroscience.
As the writers at Nature depict it, the research evokes the dystopian worlds of science fiction writer Philip K. Dick – if you’ve read his works or seen Minority Report, you’ll understand why. Neuroscientists have developed a brain scan that shows how likely convicted felons are to commit crimes again.
Brain scanning felons
Kent Kiehl, a neuroscientist at the non-profit Mind Research Network in Albuquerque, New Mexico, and his collaborators studied a group of 96 male prisoners just before their release. They used functional magnetic resonance imaging (fMRI) to scan the prisoners’ brains during computer tasks in which the prisoners had to make rash decisions and inhibit impulsive behavior. They focused in particular on the anterior cingulate cortex (ACC), a small region in the front of the brain involved in motor control and executive functioning. After these tests, they followed their subjects for 4 years to see how they fared.
Subjects who had lower ACC activity during the quick-decision tasks were more likely to be arrested again after getting out of prison, even after the researchers controlled for confounding factors such as age, sex, drug and alcohol abuse, and psychopathic traits – bear in mind, however, that controlling for these parameters is never perfect, and is always subject to either under- or overestimation. Men who were in the lower half of the ACC activity ranking had a 2.6-fold higher rate of rearrest for all crimes and a 4.3-fold higher rate for nonviolent crimes.
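For context, a “fold” difference is simply a ratio of event rates between two groups. The sketch below shows the bare arithmetic with made-up counts; the study’s actual figures come from survival analysis of the 96 prisoners, not from raw tallies like these:

```python
def rate_ratio(events_a, n_a, events_b, n_b):
    """Ratio of the event rate in group A to the event rate in group B."""
    return (events_a / n_a) / (events_b / n_b)

# Hypothetical counts chosen only to illustrate the arithmetic:
# 26 of 48 low-ACC men rearrested vs. 10 of 48 high-ACC men
ratio = rate_ratio(26, 48, 10, 48)
print(ratio)  # 2.6, i.e. a 2.6-fold higher rearrest rate
```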
Treading on thin ice
First of all, even the researchers themselves agree that this is just an initial study, and much more data has to be gathered before this method can be considered even remotely viable.
“This isn’t ready for prime time,” says Kiehl.
Also, they underline that only high-risk subjects should be taken into consideration, not lower-risk ones. But even so… Philip K. Dick raised the very thorny ethical issue of arresting people for crimes they haven’t committed. Of course, brain scans are much, much different from the psychic powers described in Minority Report, but let’s take a moment to ponder a case. Say you have a subject with moderate to high risk; what do you do? You can’t arrest him for something he hasn’t done, and you can’t, say, keep him under surveillance, because that may very well be the trigger that makes him snap and commit crimes again. If you ask me, this kind of technology, if it ever becomes available, should be used to make low-stakes decisions, like which rehabilitation treatment to assign a prisoner or how often he should check in while on parole, rather than high-stakes ones, like actually granting parole or handing down a longer sentence.
“A treatment of [these clinical neuroimaging studies] that is either too glibly enthusiastic or over-critical,” says Tor Wager, a neuroscientist at the University of Colorado in Boulder, “will be damaging for this emerging science in the long run.”
Can you imagine an imaginary menagerie manager imagining managing an imaginary menagerie?
Sorry about that, folks – that was a bit of a twister, right? Just now, you used your lips, tongue, jaw and larynx in a highly complex manner to render those sounds out loud. Still, very little is known about how the brain actually manages to perform this complex tongue-twisting dance. A recent study from scientists at the University of California, San Francisco aims to shed light on the neural codes that control the production of smooth speech, and in the process improve our understanding of it.
Neural information about the vocal tract has previously been minimal due to insufficient data. Recently, however, a team of US researchers performed the most sophisticated scan of its kind, down to the millimeter and millisecond scale, after recording brain activity in three people with epilepsy using electrodes that had been implanted in the patients’ cortices as part of routine presurgical electrophysiological sessions.
As you might imagine, huge amounts of data were produced. Luckily, the researchers developed a complex multi-dimensional statistical algorithm to filter the information down to what describes how neural building blocks combine to form the speech sounds of American English.
Electrodes in an epilepsy patient’s brain (shown here in magnetic resonance imaging) revealed strikingly different patterns of activity in the articulation of consonants and vowels. (c) Nature
First of all, the researchers found that neurons fired differently when the brain was prompted to utter a consonant than a vowel, even though the two sound types “use the exact same parts of the vocal tract”, says author Edward Chang, a neuroscientist at the University of California, San Francisco.
The team found that the brain seems to coordinate articulation not by what the resulting phonemes sound like, as has been hypothesized, but by how the muscles need to move. The data revealed three categories of consonant: front-of-the-tongue sounds (such as ‘sa’), back-of-the-tongue sounds (‘ga’) and lip sounds (‘ma’). Vowels split into two groups: those that require rounded lips and those that don’t (‘oo’ versus ‘aa’).
“This implies that tongue twisters are hard because the representations in the brain greatly overlap,” Chang says.
Even though the study has a very limited sample size, all of them epilepsy patients on top of that, the findings nevertheless provide invaluable information on a subject all too poorly studied. There are a lot of people suffering from speech impairments, whether from brain damage sustained in accidents or from the all-too-common stroke.
“If we can crack the neural code for speech motor control, it could open the door to neural prostheses,” Hickok says. “There are already neural implants that allow individuals with spinal-cord injuries to control a robotic arm. Maybe we could do something similar for speech?”
A new study published by MIT revealed, for the first time, what happens inside the brain when you go unconscious.
By monitoring the patients’ brains as they were given anesthetics, the researchers were able to identify a distinctive brain activity pattern that occurred as unconsciousness set in. The pattern is characterized by a breakdown in communication between different regions of the brain, each of which exhibits short activity bursts followed by prolonged periods of silence.
“Within a small area, things can look pretty normal, but because of this periodic silencing, everything gets interrupted every few hundred milliseconds, and that prevents any communication,” says Laura Lewis, a graduate student in MIT’s Department of Brain and Cognitive Sciences (BCS) and one of the lead authors of a paper describing the findings in the Proceedings of the National Academy of Sciences this week.
The study could be very useful, helping researchers understand why some patients suddenly wake up during surgery, or why some stop breathing after being given anesthetics.
“We now finally have an objective physiological signal for measuring when someone’s unconscious under anesthesia,” says Patrick Purdon, an instructor of anesthesia at MGH and Harvard Medical School and senior author on the paper. “Now clinicians will know what to look for in the EEG when they are putting someone under anesthesia.”
The conclusions were quite intuitive: people who were given too little anesthetic risked waking up in the middle of surgery, while those given too much risked ceasing to breathe. But what is the optimal dose? That’s extremely hard to pinpoint because it depends on many factors, but now doctors can monitor patients more accurately and intervene when needed.
“What this study says is that you should be looking at raw EEG in order to observe the oscillations and interpret them. If you do that, you have a physiologically linked way to know when someone is unconscious,” Brown says. “We can take this into the operating room today and give better patient care.”
The team only analyzed a few common anesthetics, and now they are continuing their research to see if other drugs induce the same patterns.
“There are many other drugs — based on EEG studies — that seem like they might be producing slow oscillations. But there are other drugs that seem to be doing something totally different,” Purdon says.
Freestyle rapping is perhaps the most prized skill in hip hop – the ability to make rhymes on the fly, and it’s usually what rappers do to “duel” – the one who makes the better insults wins.
But Siyuan Liu and Allen Braun, neuroscientists, didn’t go to a rap show – they brought the rap show to the lab. They and their team had 12 rappers freestyle in a functional magnetic resonance imaging (fMRI) machine. The artists were also asked to recite memorized lyrics chosen by the scientists. By comparing brain activity during recitation from memory with activity during improvisation, the researchers were able to see which areas of the brain are used in improvisation – and are linked to creativity.
This study complements one conducted by Braun and Charles Limb, a doctor and musician at Johns Hopkins University in Baltimore, Maryland, who did the same thing with jazz musicians while they were improvising. Both sets of artists showed increased activity in a part of the frontal lobes called the medial prefrontal cortex. Conversely, areas that stayed quiet during improvisation appear less involved in the creative process.
“We think what we see is a relaxation of ‘executive functions’ to allow more natural de-focused attention and uncensored processes to occur that might be the hallmark of creativity,” says Braun.
Rex Jung, a clinical neuropsychologist at the University of New Mexico in Albuquerque has also put a lot of effort into understanding the links between the brain and creativity, and he believes the highlighted areas are active in all creative processes, not only in music.
“Some of our results imply this downregulation of the frontal lobes in service of creative cognition. [The latest paper] really appears to pull it all together,” he says. “I’m excited about the findings.”
Michael Eagle, a study co-author who also raps in his spare time and provided the inspiration for this study, believes the creative process happens somehow outside of “conscious awareness”:
“That’s kind of the nature of that type of improvisation. Even as people who do it, we’re not 100% sure of where we’re getting improvisation from.”
The next step in the research however will require something different than freestyle rapping; neuroscientists want to find out what happens after that first phase of creative burst.
“We think that the creative process may be divided into two phases,” he says. “The first is the spontaneous improvisatory phase. In this phase you can generate novel ideas. We think there is a second phase, some kind of creative processing [in] revision.”
Combining a typical fMRI brain scanner with advanced computer modeling simulations, scientists at the University of California have managed to achieve the unthinkable – render the visual experiences triggered inside the brain and play them back like a movie. This is cutting-edge technology that may one day allow us to tap into the mind of a coma patient, or to watch the dream you had last night and still vaguely remember, just like a plain movie. Quite possibly one of the most fascinating sci-fi ideas might become reality in the future.
“This is a major leap toward reconstructing internal imagery,” said Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study published online today (Sept. 22) in the journal Current Biology. “We are opening a window into the movies in our minds.”
This comes right on the heels of a recent, comparably amazing study from Princeton University, where researchers managed to tell what study participants were thinking about using fMRI and a lexical algorithm. The neuroscientists from the University of California have taken this one big step further by visually representing what goes on inside the cortex.
A Sci-Fi dream come true that might show your dreams, in return
They first started out with a still-picture experiment, showing participants black-and-white photos. After a while, the researchers’ system allowed them to identify with remarkable accuracy which picture a subject was looking at. For this latest study, however, the scientists had to surmount the various difficult challenges that come with decoding brain signals generated by moving pictures.
“Our natural visual experience is like watching a movie,” said Shinji Nishimoto, lead author of the study and a post-doctoral researcher in Gallant’s lab. “In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.”
Nishimoto and two other research team members served as subjects for the experiment, since it required them to lie still inside the fMRI scanner for hours at a time. While inside the scanner, the scientists were shown several sets of Hollywood movie trailers as blood flow through the visual cortex, the part of the brain that processes visual information, was measured. The brain activity recorded while the subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
A movie of the movie inside your head. Limbo!
The second phase of the experiment is where it all becomes very interesting, as it involves the movie reconstruction algorithm. The scientists fed 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity each clip would most likely evoke in each subject. Then, based on the brain imaging delivered by the fMRI, the program morphed together the frames it had already learned into what it believed best matched the recorded brain pattern. The result was nothing short of amazing. Just watch the video below.
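The matching step can be sketched in a few lines. The snippet below is only a toy illustration of the general idea – predict a brain pattern for each candidate clip, rank candidates by how well the prediction matches the observed pattern, and average the frames of the best matches. The function names, the use of correlation as the similarity metric, and the synthetic data are all assumptions for demonstration; the actual study used a far more sophisticated Bayesian encoding model:

```python
import numpy as np

def reconstruct(observed, predicted_patterns, clip_frames, top_k=3):
    """Average the frames of the clips whose predicted brain activity
    correlates best with the observed brain pattern."""
    # Correlation between the observed pattern and each clip's predicted pattern
    scores = [np.corrcoef(observed, p)[0, 1] for p in predicted_patterns]
    best = np.argsort(scores)[::-1][:top_k]  # indices of the top matches
    return np.mean([clip_frames[i] for i in best], axis=0)

# Tiny synthetic demo: 5 candidate clips, 8-voxel "brain patterns", 4-pixel "frames"
rng = np.random.default_rng(0)
patterns = rng.normal(size=(5, 8))
frames = rng.normal(size=(5, 4))
observed = patterns[2] + 0.1 * rng.normal(size=8)  # clip 2 is the "true" stimulus

recon = reconstruct(observed, patterns, frames, top_k=1)
```

With `top_k=1` the reconstruction is just the single best-matching clip; averaging over more candidates, as the real algorithm did, produces the blurry composite look of the published videos.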
This doesn’t mean that this new technology developed by the UC scientists is able to read minds or anything of the sort, let alone play back one’s memories on a display. Such technology, according to the researchers, is decades away, but their studies will help pave the way for future developments. As yet, the technology can only reconstruct movie clips people have already viewed.
“We need to know how the brain works in naturalistic conditions,” he said. “For that, we need to first understand how the brain works while we are watching movies.”
It’s readily observable that the elderly have more trouble focusing and multitasking than young people, but a recent study in which scientists used brain scans suggests an unexpected explanation for this generational deficit.
Researchers from the University of California, San Francisco, led by neuroscientist Adam Gazzaley, recruited 20 relatively young adults, average age 25, and 20 comparatively elderly people, average age 69. Each of them was hooked up to an fMRI scanner and shown a series of images. The first was a landscape, which they were asked to keep in mind; after a few seconds, they were shown a portrait of a face and had to answer several questions about it, and then they were shown another landscape and asked whether it matched the first.
After analyzing the results, this is where things actually get interesting: it’s not that elderly people pay more attention to distractions, as most of us might have been led to believe; rather, they seem to have trouble letting go of distractions, and are slow to regain focus on their original task.
The researchers initially believed that the elderly brain scans would show a predisposition for distraction, but that was not the case – average brain activity differed little from that of the younger counterparts when the distracting face was presented. The difference appeared in the next stage: when the portrait was removed, its activity lingered in elderly brains while quickly dissipating from younger ones, and when the landscape was re-introduced, elderly brains were slow to pick it up while younger brains were fast.
Interesting as it may be, the study seems to pose more questions than it answers – for instance, whether the elderly are slower at multitasking because they were born and raised in an environment less fragmented and agitated than that of today’s youngsters. If that is the case, multitasking ability would correlate with culture, not age; and if age really is responsible for the difference between generations, when does the degradation start? This may just be the premise for more extensive tests and research.
The study was published in Proceedings of the National Academy of Sciences on April 12.
Keeping a diary is not just something girls do when they break up with their boyfriends or don’t get along with their mothers-in-law. Diaries are not only for people who have absolutely no social life and consider the little notebook in front of them their best friend (even though some of us should really get out of the house more). A diary may offer, in fact, emotional stability and balance. And, surprisingly, men seem to benefit from keeping one more than women. So should we all turn into a Bridget Jones?
Matthew Lieberman, a psychologist at the University of California in Los Angeles conducted a study to find out exactly how beneficial it is to express one’s feelings in writing. After brain scanning several volunteers it was established that making notes in a diary reduces activity in the amygdala, a part of the brain which controls the intensity of our emotions.
However, writing down one’s thoughts in a diary is not the only activity with this effect; writing poetry or song lyrics, no matter how bad they are, can be surprisingly calming too. This kind of activity is different from catharsis, which means seeing a problem in another light in order to come to terms with it.
What the brain scans showed is that putting one’s thoughts on paper triggers the same reactions in the brain as the ones connected to consciously controlling one’s emotions.
So, whenever one starts writing, he or she regulates emotions without even realizing it. The result does not have to be a poetic masterpiece or a chart-topping song. The inner results are the best one could desire.
The test involved a brain scan of the volunteers, who were then asked to write for 20 minutes on each of the following four days. Half of the subjects chose to write about a recent emotional experience, while the others chose a neutral one.
The first category proved to have more activity in the right ventrolateral prefrontal cortex, which meant that strong emotional feelings were controlled.
Men proved to benefit the most from keeping a diary, probably because women are better at turning their feelings into thoughts, so the novelty had a greater impact on men. Moreover, writing by hand seems to be much more beneficial than typing, maybe because it is more personal.
Writing about emotions in an abstract way is also much better than describing them in vivid language, which does little more than reactivate the original feelings and impressions.
However, a question remains: how come writers such as Martin Amis and Michel Houellebecq aren’t exactly the jolliest people ever? Would they be different if they hadn’t written anything?