Tag Archives: scientific paper


Papr works like Tinder but with pre-prints instead of people

Described as the Tinder of pre-prints, Papr lets you swipe to rate scientific works as “exciting,” “boring,” “probable,” or “questionable.”

Books and papers.

Image credits Johannes Jansson / Wikimedia.

You’ve already got the tap-and-flick motions of Tinder mastered, but let’s face it: that app isn’t really for you. What you’re after is depth, meaning, a mental connection. Well, now there’s a way to use your hard-earned skill to get just that. An app called Papr lets you make snap judgements on pre-prints posted to the bioRxiv server, papers shared before they’ve gone through the peer-review process.

Actually, a case may be made that Papr is twice as complicated as Tinder, since you can swipe right, left, up, or down. Each direction corresponds to one of four categories: “exciting and probable,” “exciting and questionable,” “boring and probable,” or “boring and questionable,” which is exactly how I think about my Tinder matches.
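For the programmatically inclined, the whole rating model fits in one small lookup table. The sketch below is purely illustrative: the article only names the four labels, so the direction-to-category mapping and the function name are my own assumptions.

    # Hypothetical sketch of Papr's four-way rating scheme (Python).
    # The direction-to-category mapping is an assumption for illustration;
    # the article only lists the four labels, not which swipe maps to which.
    RATINGS = {
        "right": ("exciting", "probable"),
        "up": ("exciting", "questionable"),
        "left": ("boring", "probable"),
        "down": ("boring", "questionable"),
    }

    def rate(direction: str) -> str:
        interest, plausibility = RATINGS[direction]
        return f"{interest} and {plausibility}"

    print(rate("right"))  # -> "exciting and probable"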

But if you’ve ever had the feeling that Tinder is just too superficial for your taste, weeeell… Papr doesn’t do anything to address that. Currently, you can only see a paper’s abstract, not the full work; you can’t see who wrote it; and you can’t rate it in any way, shape, or form beyond those four categories.

Simple by design

Papr’s co-creator Jeff Leek, a biostatistician at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland, says this simplicity is actually an advantage. Papr’s goal isn’t to become an alternative to peer review, but rather to help researchers cope with an “overwhelming” number of new papers and to spot areas of interdisciplinary overlap, Leek says. Scientists already use social media to find new papers, he adds, so why not simplify that process and get a general sense of their evaluations while they’re at it?

And the four-category system helps keep things simple. Other broadly similar services, such as PubPeer, give users much more space to comment on and discuss papers, but that also opens the door to foul play and dishonest competition. To stop users from giving an objectively good paper written by a rival a bad rating, or from rating a paper more generously because it was penned by a famous scientist, Papr simply doesn’t show you who wrote what: it hides author names and doesn’t let you search for a particular preprint or author.

Leek first released an earlier version of Papr late last year, but only started publicizing the app on social media earlier this month, after his colleagues added a few more features: a recommendation engine that suggests studies based on your preferences, an option to download your ratings along with links to the full preprints on bioRxiv, and suggestions of Twitter users with tastes similar to yours.

“We don’t believe that the data we are collecting is any kind of realistic peer review, but it does tell us something about the types of papers people find interesting and what leads them to be suspicious,” Leek says. “Ultimately we hope to correlate this data with information about where the papers are published, retractions, and other more in-depth measurements of paper quality and interest.”

In the end, Papr matters because it shows the scientific community looking for new ways to evaluate the flood of papers published every day. Whether the app will last, though, is yet to be determined; its website sums up Leek’s own take in a very fun tidbit:

“This app is provided solely for entertainment of the scientific community and may be taken down at any time with no notice because Jeff gets tired of it. It is provided ‘as is’ and is not guaranteed to do anything really. Use at your own risk and hopefully enjoy :).”


Papers riddled with math put some scientists off

You’re not the only one who doesn’t like math, it seems. A new study from scientists at Bristol’s School of Biological Sciences found that biologists pay less attention to theories that are dense with mathematical detail.

Mathematics


The scientists involved in the study compared citation data with the number of equations per page in more than 600 evolutionary biology papers published in 1998. The results are rather staggering: the most maths-heavy papers were cited roughly 50% less often than those with little or no maths.

Apparently, for biologists at least, each additional equation per page reduced a paper’s citation success by 28 per cent. Stephen Hawking saw maths as a threat to his readership, and weighed every equation he included in his popular book “A Brief History of Time” for fear of hurting sales. He was on to something.
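As a quick back-of-the-envelope check, treating that 28 per cent penalty as multiplicative, which is my own simplification rather than the authors’ exact model, lines up with the roughly 50% figure above:

    # Back-of-the-envelope check: if each extra equation per page cuts citation
    # success by 28%, citations scale roughly as 0.72 ** equations_per_page.
    # The multiplicative model is a simplification for illustration only.
    def relative_citations(equations_per_page, penalty=0.28):
        return (1 - penalty) ** equations_per_page

    for density in (0, 1, 2, 3):
        print(f"{density} eq/page -> {relative_citations(density):.0%} of baseline citations")
    # Two equations per page already lands near the ~50% drop reported above.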

“This is an important issue, because nearly all areas of science rely on close links between mathematical theory and experimental work,” says Dr Tim Fawcett.

“If new theories are presented in a way that is off-putting to other scientists, then no one will perform the crucial experiments needed to test those theories.  This presents a barrier to scientific progress.”

In light of these results, which frankly won’t surprise most scientists, the researchers recommend a few courses of action that could offer tangible solutions. The first, and the most difficult to apply, is to improve the maths education of science graduates in less technical fields, such as biology, to raise overall maths literacy.

Andrew Higginson, Dr Fawcett’s co-author and a research associate in the School of Biological Sciences, said that scientists need to think more carefully about how they present the mathematical details of their work.

“The ideal solution is not to hide the maths away, but to add more explanatory text to take the reader carefully through the assumptions and implications of the theory,” he said.

This isn’t an option for most scientific journals, however, which have strict policies on conciseness and publishing space. An alternative is to put much of the mathematical detail in an appendix, which tends to be published online.

“Our analysis seems to show that for equations put in an appendix there isn’t such an effect,” said Dr Fawcett.

“But there’s a big risk that in doing that you are potentially hiding the maths away, so it’s important to state clearly the assumptions and implications in the main text for everyone to see.”

The findings were reported in the journal Proceedings of the National Academy of Sciences.



Open access to science – its implications discussed in UK report

Today, only 10% of currently published scientific papers are open access: freely available to the public online in their entirety. A recently published report commissioned by the UK’s Minister of Science encourages scientists to publish their work in open access journals, and claims the benefits of an open access system outweigh the downsides.

The team of experts who oversaw the report was led by Dame Janet Finch, who argues that the issue at hand is first of all a powerful “moral” case: science should be free and available to everyone, which is the conclusion the report reaches. This would, however, mean that scientific journals would have to lift their dreaded “paywalls” and cancel their paid subscriptions.

From the very dawn of modern science, journals were THE way for a scientist or company to keep up with progress being made elsewhere in the world, or to share their own. Much of today’s scientific progress is attributed to the reach and development of science journals, and even now the process has changed very little.

science journals

A scientist who has completed a piece of work they believe is worth publishing, as in it offers value, will submit it for review to the scientific journal they think best fits the paper. A team of editors then decides whether the paper is of note; if it is, the paper is passed to a panel of experts who subject it to scientific scrutiny: this is peer review. If the experts find flaws, the paper is rejected or further explanations are required from the author; otherwise, the paper is prepped for publishing. The peer review system is considered to have greatly contributed to the current high standard of scientific research; like all things, however, it costs money.
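To make that pipeline a bit more concrete, here is a deliberately simplified sketch of the workflow; the data model and decision rules are illustrative assumptions, not any journal’s actual process:

    # Simplified sketch of the editorial workflow described above (Python).
    # Field names and decision rules are illustrative assumptions only.
    from dataclasses import dataclass, field

    @dataclass
    class Submission:
        title: str
        noteworthy: bool = False                          # editorial triage verdict
        flaws_found: list = field(default_factory=list)   # filled in by peer reviewers

    def handle(paper: Submission) -> str:
        if not paper.noteworthy:
            return "rejected at the editorial stage"
        if paper.flaws_found:
            return "rejected, or returned to the author for clarification"
        return "accepted and prepped for publication"

    print(handle(Submission("On the origin of preprints", noteworthy=True)))
    # -> accepted and prepped for publication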

The money comes from subscribers: to read a certain paper, you need to pay for it. More importantly, if you’re working in a particular field of science and breakthroughs happen in it, they’ll remain completely unknown to you if you’re not a subscriber. If you don’t have access, it’s like living in a cave. Sure, you could pay for a subscription, but with countless scientific journals still in print today, a lot of information is bound to escape you. And what about students, or simple science enthusiasts? Scientific research is typically publicly funded; why, then, should publishers profit from it? The biggest and most influential scientific journals, Nature and Science, are both paywalled. You might agree that open access to science is simply the obvious right thing to do, but in such a controversial situation things aren’t always easy and, for one, the people opposing such a move aren’t necessarily wrong in their claims either.

For one, the high standard of science I mentioned earlier might be at risk. An article at thisismoney.co.uk claims that if the government were to impose open access on all UK-based research, an estimated £1 billion of income and thousands of jobs could be placed at risk. Then there’s the issue of plagiarism; the article I just referenced offers the music industry, currently dying at the hands of piracy, as a comparison for what might happen were science to become open. The analogy isn’t that well put, but it does make a point: with open access, plagiarism would become a lot easier and more tempting, which would be deeply detrimental to genuine science. Some academic publishers and researchers also fear that scientific and other academic studies, paid for by the taxpayer, will be made freely available to researchers in China and elsewhere in the Far East. These are just a few of the arguments suggesting the current system is still well placed.

These are only a few voices, however – and influenced by personal interest in some instances, I’m willing to bet. Most scientists in the UK have expressed their excitement and full-fledged support for such an initiative.

“At my institution we are lucky enough to have access to many journals. But inevitably myself or one of my colleagues occasionally needs to see something that we haven’t subscribed to and so we have to pay a fee to see research that has been publicly funded.

“So it would be tremendously useful for our research if we didn’t have to think twice about this sort of thing,” said Prof Elizabeth Fisher, a world-class neuroscientist at University College London.

“Open access is in our marrow; greater access is for the greater good,” said Professor Adam Tickell, pro-vice-chancellor for research and knowledge transfer at the University of Birmingham.

One of Dame Janet’s recommendations is to require the funders of research to set aside £60 million each year to pay the administrative fees for publication in open access journals. That might pay for the panels of experts and editorial staff at the various scientific journals, though the real sum might have to be a lot larger. Some journals might go out of business and cease publishing, and profit will almost certainly cease to be a reality for most publishers. Science and profit should never go hand in hand, though.
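For a rough sense of scale, assuming a ballpark article-processing charge that is my own guess rather than a figure from the report:

    # Rough scale check: how many open-access papers could a £60m annual fund cover?
    # The per-paper fee is an assumed ballpark, not a number from the Finch report.
    annual_fund_gbp = 60_000_000
    assumed_fee_per_paper_gbp = 1_750   # hypothetical average article-processing charge
    papers_covered = annual_fund_gbp / assumed_fee_per_paper_gbp
    print(f"~{papers_covered:,.0f} papers per year")   # ~34,286 papers per year

If anything, that arithmetic supports the suspicion above that the real sum would need to be considerably larger.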

Read the 140-page Finch report here.

source: BBC

 

 

 

Approximately 1 in 50 researchers falsifies or modifies data in studies

Data modification in scientific research is definitely a hot topic; the frequency at which researchers fabricate or falsify data is extremely hard to quantify and turn into a statistic. Many studies and surveys have tried to do this, but the results varied greatly and were difficult to compare and synthesize.

I read a study on PLoS that definitely sheds some light on the matter. Without going into detail about the survey methodology they used or how they assigned different weights to different subjects, I’m gonna tell you about their conclusions.

Paper retractions from the PubMed library due to misconduct, on the other hand, have a frequency of 0.02%, which led to speculation that between 0.02 and 0.2% of papers in the literature are fraudulent. Eight out of 800 papers submitted to The Journal of Cell Biology had digital images that had been improperly manipulated, suggesting a 1% frequency. Finally, routine data audits conducted by the US Food and Drug Administration between 1977 and 1990 found deficiencies and flaws in 10–20% of studies, and led to 2% of clinical investigators being judged guilty of serious scientific misconduct.

Now this, this is extremely interesting; it’s a (not so) well known fact that peer review is not flawless, partially because some peer reviewers just pass the paper along to a doctoral student or postdoc for review and then sign off on it. Of course, the student sometimes isn’t all that interested, and just skims it. Another mind-blowing conclusion:

Among research trainees in biomedical sciences at the University of California San Diego, 4.9% said they had modified research results in the past, but 81% were “willing to select, omit or fabricate data to win a grant or publish a paper”

If four out of every five research trainees are willing to modify data to win a grant or publish a paper, then we have a definite problem! Also, if 2% of all researchers falsify research data (as the study concludes), then roughly 2% of all papers aren’t trustworthy. And that’s just the ones who admitted to doing it; the real number is probably significantly higher. Kind of makes you wonder.