Tag Archives: writing

Our ability to read and write is housed in a ‘recycled’ part of the brain

New research is homing in on the mechanisms our brains use to process written language. 

A detail of the cuneiform script carved in basalt at the Van museum.
Image credits Verity Cridland / Flickr.

Given my profession, I’m quite happy that people can read and write. From an evolutionary standpoint, however, it’s surprising that we do. There’s no need for it in the wild, so our brains didn’t need to develop specific areas to handle the task, like they did with sight or hearing.

A new study looked into which areas of the brain handle this task, finding that we use a “recycled” brain area for reading. These structures were repurposed from the visual system and were originally involved in pattern recognition.

A change of career

“This work has opened up a potential linkage between our understanding of the neural mechanisms of visual processing and […] human reading,” says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences and the senior author of the study.

The findings suggest that even nonhuman primates have the ability to distinguish words from gibberish, or to pick out specific letters in a word, through a part of the brain called the inferotemporal (IT) cortex.

Previous research has used functional magnetic resonance imaging (fMRI) to identify the brain region that activates when we read a word. Christened the visual word form area (VWFA), it handles the first step involved in reading: recognizing words in strings of letters or an unknown script. The VWFA is located in the IT cortex, which is also responsible for distinguishing individual objects from visual data. The team also cites a 2012 study from France showing that baboons can learn to identify words within bunches of random letters.

DiCarlo and Stanislas Dehaene, a cognitive neuroscientist at the Collège de France in Paris, wanted to see if this ability to process text is a natural part of the primate brain. They recorded neural activity patterns from 4 macaques as they were shown around 300 words and 300 ‘nonwords’ each. Data from the macaques was recorded at over 500 sites across their IT cortices using surgically-implanted electrodes. This data was then fed through an algorithm that tried to determine whether the activity was caused by a word or not.
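The decoding step can be sketched with synthetic data. Everything below — the number of sites, the size and location of the 'word' effect, and the choice of a logistic-regression decoder — is an illustrative assumption, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the recorded data: firing rates at 500 sites
# for 600 images (300 words, 300 nonwords). Word trials get a weak offset
# on a subset of sites, mimicking whatever signal the IT cortex carries.
n_sites = 500
labels = np.array([1] * 300 + [0] * 300)        # 1 = word, 0 = nonword
X = rng.normal(size=(600, n_sites))
X[labels == 1, :50] += 0.5                      # invented 'word' signal

# Split trials into training and held-out test sets.
idx = rng.permutation(600)
train, test = idx[:400], idx[400:]

# Linear decoder: logistic regression fitted by gradient descent.
w, b = np.zeros(n_sites), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X[train] @ w + b)))   # predicted P(word)
    w -= 0.1 * X[train].T @ (p - labels[train]) / len(train)
    b -= 0.1 * np.mean(p - labels[train])

# Evaluate on trials the decoder never saw.
p_test = 1 / (1 + np.exp(-(X[test] @ w + b)))
accuracy = np.mean((p_test > 0.5) == labels[test])
```

The key property this mirrors is that the decoder is only as good as the information already present in the population activity — no task training of the animal is involved.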

“The efficiency of this methodology is that you don’t need to train animals to do anything,” says Rishi Rajalingham, an MIT researcher and co-author of the study. “What you do is just record these patterns of neural activity as you flash an image in front of the animal.”

Naturally good with letters

This model was 76% accurate at telling whether the animal was looking at a word or not, which is similar to the results of the baboons in the 2012 study.

As a control, the team ran the same analysis on recordings from a different visual brain area that feeds into the IT cortex. Here, the model’s accuracy dropped (57% vs. 76%). This shows that the IT cortex is particularly suited to handle the processes involved in letter and word recognition.

All in all, the findings support the hypothesis that the IT cortex could have been repurposed to enable reading, and that reading and writing themselves are an expression of our innate object recognition abilities.

Of course, whether reading and writing arose naturally from the way our brains work, or whether our brains had to shift to accommodate them, is a very interesting question — one that, for now, remains unanswered. The insight gained in this study, however, could help to guide us towards an answer there as well.

“These findings inspired us to ask if nonhuman primates could provide a unique opportunity to investigate the neuronal mechanisms underlying orthographic processing,” says Dehaene.

The next step, according to the researchers, is to train animals to read and see how their patterns of neural activity change as they learn.

The paper “The inferior temporal cortex is a potential cortical precursor of orthographic processing in untrained monkeys” has been published in the journal Nature Communications.

Can AI replace newsroom journalists?

It’s no secret that journalism is one of the most fragile industries in the world right now. After years in which many publishers faced bankruptcy, layoffs, and downsizing, the coronavirus crisis arrived — for many newsrooms, the final nail in the coffin.

Alas, even more problems are on the way for publishers.

Late last month, Microsoft fired around 50 journalists in the US and another 27 in the UK who were previously employed to curate content from outlets to spotlight on the MSN homepage. Their jobs were replaced by automated systems that can find interesting news, change headlines, and select pictures without human intervention.

“Like all companies, we evaluate our business on a regular basis. This can result in increased investment in some places and, from time to time, redeployment in others. These decisions are not the result of the current pandemic,” an MSN spokesperson said in a statement.

While it can be demoralizing for anyone to feel obsolete, we shouldn’t call the coroner on journalism just yet.

Some of the sacked journalists warned that artificial intelligence may not be fully familiar with strict editorial guidelines. What’s more, it could end up letting through stories that might not be appropriate.

Lo and behold, this is exactly what happened with an MSN story this week, after the AI mixed up the photos of two mixed-race members of British pop group Little Mix.

The story was about Little Mix singer Jade Thirlwall’s experience with racism. However, the AI used a picture of Thirlwall’s bandmate Leigh-Anne Pinnock to illustrate it. It didn’t take long for Thirlwall to notice, posting on Instagram where she wrote:

“@MSN If you’re going to copy and paste articles from other accurate media outlets, you might want to make sure you’re using an image of the correct mixed race member of the group.”

She added: “This shit happens to @leighannepinnock and I ALL THE TIME that it’s become a running joke … It offends me that you couldn’t differentiate the two women of colour out of four members of a group … DO BETTER!”

By the looks of it, Thirlwall seems unaware that the mix-up was caused by an automated system. It’s possible the error was due to mislabelled pictures provided by wire services, although there’s no way to tell for sure, as MSN has offered little detail apart from a formal apology.

“As soon as we became aware of this issue, we immediately took action to resolve it and have replaced the incorrect image,” Microsoft told The Guardian.

Are we entering the age of robot journalism?

My fellow (human) colleagues might rejoice at this news, but really this happens all the time in newsrooms — even the best of them. For instance, the BBC had to make a formal apology after one of its editors used photos of LeBron James to illustrate the death of his teammate Kobe Bryant.

And while some might believe that curating content is an entirely different matter from crafting content from scratch, think again. The Washington Post has invested considerably in AI content generation, producing a bot called Heliograf that writes stories about local news that the staff didn’t have the resources to cover.

The Associated Press has a similar AI that does the same. Such robots are based on Natural Language Generation software that processes information and transforms it into news copy by scanning data from selected sources, selecting an article template from a range of preprogrammed options, then adding specific details such as location, date, and people involved.

For instance, the following short news story that appeared in the Wolverhampton paper the Express and Star is written by AP’s robot.

The latest figures reveal that 56.5 per cent of the 3,476 babies born across the area in 2016 have parents who were not married or in a civil partnership when the birth was registered. That’s a slight increase on the previous year.

Marriage or a same-sex civil partnership is the family setting for 43.5 per cent of children.

The figures mean that parents in Wolverhampton are less likely to get married before having children than the average UK couple. Nationwide, 52.3 per cent of babies have parents in a legally recognised relationship.

The figures on births, released by the Office for National Statistics, show that in 2016, 34 per cent of babies were registered by parents who are listed as living together but not married or in a civil partnership.
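The template-filling approach behind stories like this can be sketched in a few lines. The template, area, and figures below are invented for illustration — this is not AP's or the Post's actual software:

```python
# Toy template-filling pipeline in the spirit of such news bots:
# pick a preprogrammed template, slot in the data, emit copy.
TEMPLATE = (
    "The latest figures reveal that {pct:.1f} per cent of the {births:,} "
    "babies born across {area} in {year} have parents who were not married "
    "or in a civil partnership when the birth was registered."
)

def write_story(area, year, births, unmarried, prev_pct):
    """Fill the template with data, then append a trend sentence."""
    pct = 100 * unmarried / births
    trend = "a slight increase on" if pct > prev_pct else "a fall from"
    body = TEMPLATE.format(pct=pct, births=births, area=area, year=year)
    return f"{body} That's {trend} the previous year."

# Invented figures, chosen only to reproduce the style of the excerpt.
story = write_story("Wolverhampton", 2016, 3476, 1964, prev_pct=55.9)
```

Scaling this to thousands of stories a day is then just a matter of feeding in more rows of data — which is exactly why such systems excel at numbers-driven copy.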

Unlike humans, robots never tire and can produce thousands of such stories per day. There’s a silver lining for us journalists, though — we may have a future yet.

While robots shine when reporting on simple linear stories such as football scores, medal tallies, company profits, and just about anything where the numbers alone tell the story, they are still very poor at nuanced language and analysis. Can you imagine reading an opinion piece written by a robot? Would you trust a robot to write your essays, for that matter? Not really? I thought so, too.

A similar argument can be made for education. Customized learning is one of the main areas where AI is set to have a significant impact. One-on-one tutoring for each and every student, in any subject, used to be unthinkable — but now artificial intelligence promises to deliver it. For instance, one US-based company called Content Technologies Inc is leveraging deep learning to ‘publish’ customized books — automatically revamping decades-old textbooks into smart, relevant learning guides.

But, that doesn’t mean that human teachers can be scrapped entirely. For instance, teachers will have to help students develop non-cognitive skills such as confidence and creativity that are difficult if not impossible to transfer from a machine. Simply put, there’s no substitute for good mentors and guides.

Humans are still much better than AIs at reasoning and storytelling — what are arguably the most important journalistic qualities.

Personally, I hope that ZME readers appreciate the fact that there are real humans who care and put great thought into crafting our stories. We’re not done just yet, so until our robot overlords are ready to take over, perhaps you can stand us a while longer.


Researchers write grant proposals differently depending on their gender, and it can lead to bias

What you’re describing in a research grant proposal is important, but how you say it also matters a lot, new research shows.

Grant Proposal.

Probably the wrong wording.

The study looked at health research proposals submitted to the Bill & Melinda Gates Foundation, in particular, the wording they used. It found that men and women tend to use different types of words in this context, both of which carry their own downsides. Female authors tend to use ‘narrow’ words — more topic-specific language — while men tend to go for ‘broad’ words, the team reports. The findings further point to some of the biases proposal reviewers can fall prey to, and may help design effective automated review software in the future.

The words in our grants

“Broad words are something that reviewers and evaluators may be swayed by, but they’re not really reflecting a truly valuable underlying idea,” says Julian Kolev, an assistant professor of strategy and entrepreneurship at Southern Methodist University’s Cox School of Business in Dallas, Texas, and the lead author of the study.

It’s “more about style and presentation than the underlying substance.”

The narrower language used by female authors seems to result in lower review scores overall, the team notes. However, broad language, which male authors tended to use more, let them down later in the scientific process: funded proposals that used more broad words produced fewer publications in top-tier journals. They also weren’t any more likely to generate follow-up funding than proposals with narrower language.

The researchers classified words as being “narrow” if they appeared more often in proposals dealing with a particular topic than others. Words that were more common across topics were classified as “broad”. In effect, this process allowed the team to determine whether certain terms were ‘specialized’ for a particular field or were more versatile. This data-driven approach resulted in word classifications that might not have been obvious from the outset: “community” and “health” were deemed to be narrow words, for example, whereas “bacteria” and “detection” were deemed to be broad words.
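One simple, data-driven way to operationalize this idea — a sketch, not the paper's actual classifier — is to score each word by how concentrated its occurrences are in a single topic:

```python
from collections import Counter

# Hypothetical mini-corpus: word counts from proposals, grouped by topic.
# The topics, words, and counts are invented to illustrate the idea.
corpus = {
    "malaria":  Counter({"community": 8, "health": 9, "bacteria": 2, "detection": 3}),
    "tb":       Counter({"bacteria": 4, "detection": 5, "health": 1}),
    "vaccines": Counter({"bacteria": 3, "detection": 4, "community": 1}),
}

def concentration(word):
    """Share of a word's occurrences that fall in its single biggest topic."""
    counts = [topic_counts.get(word, 0) for topic_counts in corpus.values()]
    total = sum(counts)
    return max(counts) / total if total else 0.0

def classify(word, threshold=0.6):
    """'narrow' if most of a word's use sits in one topic, else 'broad'."""
    return "narrow" if concentration(word) > threshold else "broad"
```

With these invented counts, "community" and "health" come out narrow while "bacteria" and "detection" come out broad — mirroring the kind of non-obvious labels the team's procedure produced.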

Reviewers favored proposals with broader words — and those words were used more often by men. So, should we just teach women to write like men? The team “would be hesitant to recommend” it, which is basically science-speak for ‘no’. Kolev says we should instead look at the potential biases reviewers can have, especially in cases where they are favoring language that doesn’t necessarily result in better research.

“The narrower and more technical language is probably the right way to think about and evaluate science,” he says.

Kolev’s team analyzed 6,794 proposals submitted to the Gates Foundation by US-based researchers between 2008 and 2017, and how reviewers scored them. Overall, they report, reviewers tended to give female applicants lower scores, even though the authors’ identities were kept secret during the review process. This gap persisted even after the team controlled for a host of factors, such as the applicant’s current career stage or their publication record. The only element that correlated with the gap was the language applicants used in their titles and proposal descriptions, the team reports.

The team isn’t sure whether their findings apply to scientific grant review more broadly. Other research on the subject — this time looking at the peer-review process at the NIH — didn’t find the same pattern, so it might be a peculiarity of the Bill & Melinda Gates Foundation.

One explanation could be found in the different takes these two organizations have on reviewing processes. The Gates Foundation draws on reviewers from several disciplines and employs a “champion-based” review approach, whereby grants are much more likely to be funded if they’re rated highly by a single reviewer. This less-specialized body of reviewers may be more susceptible to claims that look good on paper (“I’m going to cure cancer!”) rather than those which actually make for good science (such as “I’m going to study how this molecule interacts with cancerous cells”). This may, unwittingly, place women at a disadvantage.

The Gates Foundation hasn’t been deaf to these findings — in fact, they were the ones who called for the study and gave the team access to their peer-review data and proposals. The organization is “committed to ensuring gender equality” and is “carefully reviewing the results of this study — as well as our own internal data — as part of our ongoing commitment to learning and evolving as an organization,” according to a written statement.

The findings also have interesting implications for automated text-analysis software, which will increasingly take on tasks like this in the future. On the one hand, it shows how altering the wording of a proposal can trick even us — nevermind a bit of code — into considering it more valuable when it’s not. On the other hand, the findings can help us iron out these kinks.

But that’s the larger picture. If you happen to be involved in academia and are working hard on a grant proposal, the study shows how important it is to tailor your writing to the review process. The Gates and NIH results suggest there isn’t a one-size-fits-all approach, but paying attention to the style and terminology your reviewers expect can only help.

The paper “Is Blinded Review Enough? How Gendered Outcomes Arise Even Under Anonymous Evaluation” has been published as a working paper by the National Bureau of Economic Research (NBER).


Braille Neue can be read just as easily with your eyes as with your fingertips

In a bid to make the Braille tactile system more familiar, Japanese designer Kosuke Takahashi re-designed the script to make it readable for everyone — no matter how well they see.

Braille neue standard alphabet.

Image credits Kosuke Takahashi.

If you’re a sighted person, you probably regard Braille with a mix of awe and frustration. The rows of dots look like coded messages, as if someone wrote in Morse code — fascinating, but completely opaque to us. Our focus on visually-recognizable script across all languages and cultures has made Braille more of an exotic curiosity than a constant companion — definitely not something that the 285 million visually impaired people around the world can rely on finding dotted next to the tsunami of text the rest of humanity can access every day.

Kosuke Takahashi, a Japanese designer, thought that the dotted script would become much more widely used if it could be read by everybody with the same ease. So, he set out to do just that.

“It all started from a simple question: ‘How can I read braille? Does it become a character if I connect the dots?’” he recounts. “Even though it is the same letter, it felt incongruous that sighted people could not read it.”

The largest issue Takahashi had to contend with is that Braille and Latin script don’t superimpose — the dots generally don’t line up with the letters we’re used to seeing. Braille wasn’t designed to mirror the shape of visual characters, so you can’t just connect the dots and expect recognizable letters to pop out. For example, in Braille, the numbers two and three are represented by two vertical dots and two side-by-side dots, respectively.
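Braille's geometry itself is easy to encode: each cell is six dots, numbered 1–3 down the left column and 4–6 down the right, and Unicode reserves a block starting at U+2800 where every dot pattern maps to a codepoint via a bitmask. A minimal sketch, covering only the first few Latin letters:

```python
# Dot numbers for the Latin letters a-e (standard braille assignments).
LETTER_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
}

def to_braille(text):
    """Map text to Unicode braille: bit (n-1) of the offset is dot n."""
    out = []
    for ch in text.lower():
        mask = sum(1 << (d - 1) for d in LETTER_DOTS[ch])
        out.append(chr(0x2800 + mask))
    return "".join(out)
```

This is the "skeleton of Braille bumps" Takahashi builds on — Braille Neue's contribution is drawing a visually legible letterform over exactly these dot positions.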

After toying around with several designs that mixed the two alphabets together, Takahashi came up with Braille Neue. The typeface is perfectly legible to anyone with sight, but it’s built around a skeleton of Braille bumps — meaning everyone can read it.

Braille Neue process.

Image credits Kosuke Takahashi.

Takahashi first tried drawing Japanese characters by connecting the dots, but soon decided it was a bad idea since he’d have to move them around so much they weren’t legible anymore. So he fell back on the (simpler) Latin alphabet. He admits that “V” and “I” are still probably not that easy to read and he’ll have to adjust them in the future, but overall, the typeface looks pretty good.

He hopes that Braille Neue could help make texts more inclusive, help sightless people navigate more easily, and possibly serve as inspiration for other graphics in public spaces.

“The biggest benefit is that one sign can work for everyone anywhere,” says Takahashi. “Additionally, this typeface does not require braille to take up additional sign space.”

He hopes to implement Braille Neue at the 2020 Tokyo Olympics and Paralympics and to use his experience designing this typeface to re-mix Braille and Japanese characters.

Will AI start to take over writing? How will we manage it?

Could robots be taking over writing? Photo taken in the ZKM Medienmuseum, Karlsruhe, Germany.

As artificial intelligence (AI) spreads its wings more and more, it is also threatening more and more jobs. In an economic report issued to the White House in 2016, researchers concluded that there’s an 83% chance automation will replace workers who earn $20/hour or less. This echoes previous studies, which found that half of US jobs are threatened by robots, including up to 87% of jobs in Accommodation & Food Services. But some jobs are safer than others. Jobs that require human creativity are safe — or so we thought.

Take writing for instance. In all the Hollywood movies and in all our minds, human writing is… well, human, strictly restricted to our biological creativity. But that might not be the case. Last year, an AI was surprisingly successful in writing horror stories, featuring particularly creepy passages such as this:

#MIRROR: “‘I slowly moved my head away from the shower curtain, and saw the reflection of the face of a tall man who looked like he was looking in the mirror in my room. I still couldn’t see his face, but I could just see his reflection in the mirror. He moved toward me in the mirror, and he was taller than I had ever seen. His skin was pale, and he had a long beard. I stepped back, and he looked directly at my face, and I could tell that he was being held against my bed.”

It wasn’t an isolated achievement either. A Japanese AI wrote a full novel, and AI is already starting to have a noticeable effect on journalism. So just like video killed the radio star, are we set for a world where AI kills writing?

What does it take to be a writer? Is it something that’s necessarily restricted to a biological mind, or can that be expanded to an artificial algorithm?

Not really.

While AIs have had some impressive writing successes, they’ve also been limited in scope, and they haven’t truly exhibited what you would call creativity. To do that, the first thing they’d need is to pass the Turing test, in which a computer must trick humans into thinking that it, too, is human. So far, that’s proven a difficult challenge — and it’s only the first step. While AI can process and analyze complex data, it still doesn’t have much prowess in areas that involve abstract, nonlinear, and creative thinking. There’s nothing to suggest that AIs will be able to adapt and actually start creating new content.

Algorithms, at least in their computational sense, don’t really support creativity. Basically, they work by transforming a set of discrete input parameters into a set of discrete output parameters. This fundamental limitation means that, one way or another, everything in a computer’s output is already in its input. Computational creativity can be useful and may look like creativity, but it isn’t creating anything truly new — just recombining known parameters such as words and sentences.
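A word-level Markov chain makes this point concrete: its 'model' is just a table of which word followed which in the source text, so anything it generates is a recombination of transitions already present in the input. This is a purely illustrative toy, not how production text-generation systems work:

```python
import random
from collections import defaultdict

random.seed(42)

# Build the transition table from a tiny source text.
source = "the cat sat on the mat and the dog sat on the rug"
words = source.split()

follows = defaultdict(list)
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def generate(start, length):
    """Walk the chain: every step reuses a transition seen in the input."""
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:           # dead end: the last word never had a successor
            break
        out.append(random.choice(options))
    return " ".join(out)

text = generate("the", 8)
```

Every adjacent word pair in the output already occurs in the source — the chain can surprise you with new orderings, but never with a word or transition it wasn't given.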

But to dismiss AI as unable to write would simply be wrong. In advertising, AI copywriters are already being used, and they’re surprisingly versatile: they can draft hundreds of different ad campaigns with ease. It will likely be a long time before an AI can write a full essay on its own, but we might get there at some point. Google claimed that its AlphaGo algorithm is able to ‘create knowledge itself’, and it demonstrated that by beating the world champion with a move no one had ever seen before. So it not only learned from humans, it built its own knowledge. Is that not a type of creativity in itself? Both technically and philosophically, there are still a lot of questions to be answered.

AI is here, and it’s here to stay. It will grow and change our lives, whether we want it or not, whether we realize it or not. What we need, especially in science and journalism, is a new paradigm of how humans and AI work together for better results. That might require some creative solutions in itself.

Graphology is a pseudoscience

Although it got some support in the 20th century, there is very little scientific evidence to support graphology. The pattern of your handwriting doesn’t describe your personality; graphology belongs in the same group as palm reading and astrology.

Writing History

In 1575, a Spanish physician by the name of Juan Huarte de San Juan published what is likely the first book on handwriting analysis: Examen de ingenios para las ciencias. It seemed to make a lot of sense. Writing is a very personal act, and everyone writes in a different way, so it seems logical that writing is connected to someone’s personality. Italian philosopher Camillo Baldi wrote another book on the matter in 1622 (Trattato come da una lettera missiva si conoscano la natura e qualita dello scrittore), but the idea didn’t really pick up until much later, in the 19th century, with Jean-Hippolyte Michon.

You could say that Michon is the grandfather of graphology. He published several papers on the subject and founded the Société Graphologique in 1871. His students carried on his ideas, coming up with a holistic way of analyzing handwriting. After the First World War, it spread across Europe and to America, where it picked up a lot of support. But study after study failed to find any reliable evidence behind graphology.

In 1929, graphology suffered a significant split. Milton Bunker founded The American Grapho Analysis Society teaching graphoanalysis, which looked at individual patterns in the writing as opposed to a holistic approach. Supporters of this approach believed that there is more scientific validity to looking at individual clues.

What the science says

The one area in which graphology has proven some value is gender identification.

Graphology (sometimes called graphoanalysis) should not be confused with the term graphanalysis — that one letter makes a big difference. The latter is the forensic technique of analyzing documents and letters to identify their author; the former is the belief that handwriting predicts personality traits. But the science disagrees.

Study after study showed that graphology fails at predicting any personality traits. A 1982 meta-analysis of over 200 studies found that graphologists were unable to predict any kind of personality trait on any personality test. The analysis has since been quoted by over 400 other studies.

Things haven’t really changed since.

In a 1988 study, authors conclusively showed that graphologists were unable to predict scores on the Myers-Briggs test. Despite this, more and more companies started using graphology. The allure of the technique was too strong for many, even without any scientific evidence. Rowan Bayne, a British psychologist, said that “it’s very seductive because at a very crude level someone who is neat and well behaved tends to have neat handwriting”, adding that the practice is “useless… absolutely hopeless”. The British Psychological Society ranks graphology alongside astrology, giving them both “zero validity.”

The notable exception is gender. Several studies have demonstrated that gender can be determined at a significant level, though any other trait remains, at the very best, inconclusive.

Just don’t read into it too much.

The CIA report

Interestingly, a CIA report also assessed the potential of graphology. Of course, the Agency could greatly benefit from such a technique.

“For the clandestine services, however, graphology as a validated assessment technique might have application in a sufficient number of instances, those where background investigation is impossible, to warrant considerable research to determine its effectiveness,” the study reads.

However, the report found no evidence that graphology does, in fact, work.

“Two threads of argument run through the foregoing article on handwriting analysis. The first asserts the great need for research studies because “a proper test run has never been devised and carried out, at least not in the United States[..].” The second asserts the value of graphology here and now as an assessment technique, making sweeping claims of what it can do. The arguments are essentially incompatible. If the claims are correct, the research is unnecessary; if there is no research evidence, the claims are unsupported. With the need for research to establish the value of graphology as an assessment technique I am in full agreement. I disagree with the claims for its current effectiveness.”

Brain studies show that writing is a complex phenomenon. Even something as simple as scribbling a “get milk” note activates many areas of your brain. There might be a way to infer something about your personality, but we haven’t found it yet — and at the moment, it doesn’t seem very likely.


Learning to read changes your brain from stem to cortex, study finds

A new study found that the human brain has to patch together a network that handles reading by re-purposing areas deep inside the brain into visual-language interfaces. The team reports that the brain can undergo this process with surprising ease.

German text.

Evolutionarily speaking, reading is a very novel skill for humans — widespread, everyday literacy even more so. Because of this, we didn’t develop a specific region in the brain to handle the process.

So what do you do if you’re a brain and you have to learn to make sense of these scribbles and marks for your human? You improvise, of course! Working within the bounds of the skull means this ‘improvisation’ is more of a ‘re-qualification’, as some areas of the visual cortex — usually handling complex shape recognition — get bent to the task, while some of the earliest areas of the brain take on a mediating role between the language and visual systems.

Old brain, new tricks

The fact that learning to read will cause physical changes in the brain, such as the creation of new pathways, isn’t exactly news. But until now, we’ve believed that the changes literacy brings about are confined to the cortex, the outer layer of the brain which handles higher functions and can adapt quickly to master new skills and overcome challenges.

However, a team led by Falk Huettig from the Max Planck Institute for Psycholinguistics found that the brain does a lot more heavy lifting to master literacy. The Max Planck team worked together with scientists from the Centre of Bio-Medical Research (CBMR) in Lucknow, India, and the University of Hyderabad to uncover how the brains of completely illiterate people change when they learn to read and write.

For the study, the researchers worked with people in India, where illiteracy remains high — roughly 38% of the population, skewed strongly towards women — mostly due to poverty. The team worked with an all-women group of participants, almost all of them in their thirties, recruited from the same social class in two villages in Northern India to take social factors out of the final results. Participants were further matched for handedness, income, and number of literate family members, and took two initial measures of literacy (letter identification and word-reading ability). Lastly, they each had their brains scanned in the city of Lucknow.

After the team had enough data to form a baseline, the women were given six months of reading training in their native tongue of Hindi, one of the official languages of India. Hindi is written in Devanagari — the distinctive flowing script of India and Nepal — in a style of writing known as an alphasyllabary. In an alphasyllabary, you don’t write single letters but whole syllables at a time, as consonant-vowel pairings (written in that order).

At the start of their training, the vast majority of the participants couldn’t read a single word in Hindi. But after only six months, they reached roughly the same reading proficiency as a first-grader, quite an impressive result.

Deep re-purposing

Devanagari.

I don’t know what this says but I know it’s written in Devanagari.

“While it is quite difficult for us to learn a new language, it appears to be much easier for us to learn to read,” says Huettig. “The adult brain proves to be astonishingly flexible.”

The team reports that the functional reorganization we’ve talked about extends all the way to the deep, early brain structures of the thalamus and the brainstem. These are very old brain areas, evolutionarily speaking, universally found in mammalian brains as well as in other species.

“We observed that the so-called colliculi superiores, a part of the brainstem, and the pulvinar, located in the thalamus, adapt the timing of their activity patterns to those of the visual cortex,” says Michael Skeide, scientific researcher at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and first author of the study.

These areas take on a sort of interface role, helping the visual cortex filter relevant visual stimuli — in this case, writing — from the wealth of information our eyes supply even before we become consciously aware of it. Skeide notes that “the more the signal timings between the two brain regions are aligned, the better the reading capabilities.” This would, of course, happen with practice, explaining why experienced readers can easily take in a text which would leave an aspiring reader scrambling for help.

The findings could help uncover the causes of reading disorders such as dyslexia. The condition has previously been linked to abnormal activity in the thalamus, an avenue of research Skeide says the team has “to scrutinize” considering they showed “that only a few months of reading training can modify the thalamus fundamentally.”

Finally, the findings should come as a boon to anyone currently struggling with illiteracy, especially in the West where it’s such a taboo subject and the object of social stigma.

The full article “Learning to read alters cortico-subcortical cross-talk in the visual system of illiterates” has been published in the journal Science Advances.

UV-printed text allows sheets to be reset and re-used 80 different times

A new technology could change how we think about paper and printing — forever. Based on UV-sensitive paint, the method allows paper to be re-used, making it much cheaper and more sustainable than traditional printing.

A sample of the paper with text.
Image credits Wang et al, (2017) American Chemical Society.

So we’ve seen our fair share of creative ink recipes throughout time — some to grow, some to raise awareness, others made from pee. They’re all awesome. But once ink hits the paper, it’s there for good. If you want to print something else, you need a fresh sheet of paper.

Or do you? A joint US-Chinese team has developed a novel nanoparticle coating which can make traditional inks oh-so-last-year. This blue substance can easily be applied to paper — either by spraying or soaking — and changes color when exposed to concentrated ultraviolet (UV) light. If you need to print something else, just heat the sheet to 120 degrees Celsius (248 Fahrenheit) and voila — the ink ‘resets’. As each sheet of paper allows for more than 80 re-writes, the ink could reduce paper usage in the long run, saving a lot of money and a lot of trees in the process.

I’m blue

Treated paper “has the same feel and appearance as conventional paper, but can be printed and erased repeatedly without the need for additional ink,” team member Yadong Yin from the University of California, Riverside, told Phys.org.

“Our work is believed to have enormous economic and environmental merits to modern society.”

The team combined two kinds of nanoparticles for the ink. The color is created using Prussian blue particles, a pigment which becomes colorless when it gains electrons. The other ingredient is titanium dioxide (TiO2) particles, which catalyze the photochemical reaction between UV rays and the ink — they release the electrons needed for the reaction.

Image credits Wang et al., (2017) American Chemical Society.

What you get is a beautiful blue color that turns colorless under UV rays. So unlike traditional printing methods, this ink prints the blank spaces of the page instead of the words themselves. Alternatively, you can print the letters only and the text will come out white on a blue backdrop.

The print remains stable for at least five days before the page slowly starts fading back to blue, as pigment particles shed the extra electrons. Or you can just heat it up to reset it as fast as you like.

Applying the coat is a quick and cheap process, and the researchers hope this will promote wide-scale use. As each sheet can be used for 80 or more different prints without further costs, it’s easy to see the commercial appeal of the technology.

Add to that the fact that it also reduces paper use and waste, and you get a real winner. In the US, discarded paper is estimated to make up as much as 40% of waste. All this waste translates to added costs for transport, recycling, or disposal. It also fuels the country’s ever-growing need for paper, an industry which consumes around 68 million trees every year and is one of the dirtiest in the country.

Following the paper trail

Yin first unveiled the prototype ink in December 2014. Its first iteration could only sustain 20 printing cycles, and was trickier to apply onto paper than the current material. The team says they improved the stability, ease of application, and lowered production costs over their previous product.

Now, they’re hard at work taking their technology to a printer near you.

“Our immediate next step is to construct a laser printer to work with this rewritable paper to enable fast printing,” Yin told Phys.org.

“We will also look into effective methods for realizing full-colour printing.”

But my question is — can we make tattoos with this ink?

The full paper “Photocatalytic Color Switching of Transition Metal Hexacyanometalate Nanoparticles for High-Performance Light-Printable Rewritable Paper” has been published in the journal Nano Letters.


Writing about your trauma in third person helps recovery

Writing about your traumas in third person eases recovery. Photo credit: culturestrike.net


Writing your memoirs, or simply recollecting traumatizing memories in writing, has been used as a tool in therapy for many years now. A new study by researchers at the University of Iowa found that switching to writing in the third person eases recovery and improves the health of participants.

Whether it’s a car accident, the death of someone close, surgery, illness, or even financial collapse, traumatic events can trigger a barrage of challenging emotions. Writing about trauma and the emotions it triggers can help you put things into perspective and soothe some of your fears. That’s why therapists often advise keeping a journal: it basically gets traumatizing thoughts out of your head and onto paper. For some, this form of catharsis yields promising results.

He was writing his memories

Psychologists at the University of Iowa found, however, that writing in the third person leads to greater health gains for participants who struggled with trauma-related intrusive thinking, as measured by the number of days their normal activities were restricted by any kind of illness.

So, instead of writing “I am worried my cancer will come back” or “I crashed the car on the freeway”, it would be better to re-phrase as “She was worried her cancer would come back” or “She crashed the car”. The researchers’ analysis found that people suffering from high levels of intrusive thinking benefit more if they express their trauma in the third person.

“Third-person expressive writing might provide a constructive opportunity to make sense of what happened but from a safe distance that feels less immediate and threatening,” says Matthew Andersson, a graduate student in social psychology at the University of Iowa and a co-author on the study.

The results were reported in a paper published in the journal Stress and Health.

Oldest readable writing found in Europe

A 3,500-year-old Mycenaean tablet found last summer in Greece is the oldest readable writing found in Europe. (c) Christian Mundigler

Extraordinarily enough, an ancient Greek tablet dating as far back as 1450-1350 B.C. was found last summer in an olive grove in what’s now the village of Iklaina, making it the oldest readable piece of writing found in Europe. The location and time frame of the artifact place it in the era of the Mycenaeans, often mentioned in Homer’s Iliad, who ruled much of ancient Greece from 1600 to 1100 B.C.

In the ruins of Iklaina, archaeologists have so far found a palace, murals, fortified walls, and this highly valuable tablet, most probably written by a local scribe. The tablet is roughly 1 inch (2.5 centimeters) tall by 1.5 inches (4 centimeters) wide, and bears markings in the ancient Greek writing system known as Linear B, which consisted of about 87 signs, each representing one syllable.

According to the project’s lead archaeologist, Michael Cosmopoulos, the tablet is the biggest surprise they could have stumbled upon.

“According to what we knew, that tablet should not have been there,” the University of Missouri-St. Louis archaeologist told National Geographic News.

First, Mycenaean tablets weren’t thought to have been created so early, he said. Second, “until now tablets had been found only in a handful of major palaces”—including the previous record holder, which was found among palace ruins in what was the city of Mycenae.

Although the tablet is dated at 3,500 years old, it was made of clay and clearly never meant to last. Archaeologists theorize that the tablet held fiscal records intended for the elite, basically paperwork junk, which was put in the sun to dry and then thrown in a pit as trash when it was no longer needed. Although the researchers didn’t have many markings to read and interpret, they could tell that the front of the Iklaina tablet appears to form a verb relating to manufacturing, while the back lists names alongside numbers, probably a property list.

“Those tablets were not baked, only dried in the sun and [were], therefore, very brittle. … Basically someone back then threw the tablet in the pit and then burned their garbage,” Cosmopoulos said. “This fire hardened and preserved the tablet.”

While the Iklaina tablet is an example of the earliest writing system in Europe, other writing is much older (writing in China, Mesopotamia, and Egypt is thought to date back to around 3,000 B.C.), explained Classics professor Thomas Palaima, who wasn’t involved in the study, which is to be published in the April issue of the journal Proceedings of the Athens Archaeological Society.

Keeping a diary – the key to happiness?

Keeping a diary is not just something girls do when they break up with their boyfriends or don’t get along with their mothers-in-law. Diaries are not only for people who have absolutely no social life and consider the little notebook in front of them their best friend (even though some of us should really get out of the house more). A diary may offer, in fact, emotional stability and balance. And, surprisingly, men seem to benefit from keeping one more than women. So should we all turn into a Bridget Jones?

Matthew Lieberman, a psychologist at the University of California, Los Angeles, conducted a study to find out exactly how beneficial it is to express one’s feelings in writing. After brain-scanning several volunteers, he established that making notes in a diary reduces activity in the amygdala, a part of the brain which controls the intensity of our emotions.

However, it is not only writing down one’s thoughts in a diary that seems to have this effect; writing poetry or song lyrics, no matter how bad they are, can have a surprisingly calming effect. This kind of activity is different from catharsis, which means seeing a problem in another light in order to come to terms with it.

What the brain scans showed is that putting one’s thoughts on paper triggers the same reactions in the brain as the ones connected to consciously controlling one’s emotions.

So, whenever one starts writing, he or she regulates emotions without even realizing it. The result does not have to be a poetic masterpiece or a chart-topping song. The inner results are the best one could desire.

The test involved conducting a brain scan on the volunteers before they were asked to write for 20 minutes on each of the following four days. Half of the subjects chose to write about a recent emotional experience while the others chose a neutral experience.

The first category proved to have more activity in the right ventrolateral prefrontal cortex, meaning that strong emotional feelings were being controlled.

Men proved to benefit the most from keeping a diary, probably because women are already better at turning their feelings into thoughts, so the novelty increased the impact. Moreover, writing by hand seems to be much more beneficial than typing, maybe because it is more personal.

Writing about emotions in an abstract way is also much better than describing them in vivid language, which does nothing but reactivate the original feelings and impressions.

However, a question remains: how come writers such as Martin Amis and Michel Houellebecq aren’t exactly the jolliest people ever? Would they be different if they hadn’t written anything?

Source: Guardian.co.uk