Tag Archives: ideas


Creativity hinges on churning out as many ideas as possible — then taking a break

Being creative is as simple as letting yourself come up with ideas — and then walking away for a while.


Image credits Pixabay.

New research from The University of Texas (UT) and the University of Illinois at Urbana-Champaign (UIUC) says that employers looking for more creative employees should encourage them to produce a wealth of ideas — even mediocre ones — and then have them take an “incubation period.”

Take a breather

“Creativity is not instantaneous, but if incentives promote enough ideas as seeds for thought, creativity eventually emerges,” said Steven Kachelmeier, the Randal B. McDonald Chair in Accounting at Texas McCombs and co-author of the study in the Accounting Review.

When people are rewarded simply for producing ideas, good or bad, they end up producing more, and more creative, ideas, the paper reports. If your end goal is to foster creativity, this is a much better approach than paying people based on the quality of their ideas (or offering no pay incentives at all). Another important requirement is to give these ideas time to grow, the team adds. All the participants in this study stepped away from the brainstorming part of the task for a while and returned to it at a later date. This approach, combining mass idea generation with a rest period, resulted in much more creative output than either strategy used on its own.

The research consisted of two experiments. In the first phase, participants were asked to create rebus puzzles — riddles where words, phrases or sayings are represented using a combination of images and letters. Some participants were offered pay based on the number of ideas they generated; others, only for ideas that met a certain standard for creativity. Finally, the control group was paid a fixed wage of $25, regardless of the quantity or quality of the puzzle ideas they generated.

In the early stages of the study, both incentivized groups actually performed worse than the control (in measures of creativity as judged by an independent panel). However, in a subsequent return to this task (10 days after the first one), those in the pay-per-idea group had “a distinct creativity advantage,” the team reports, and outperformed the other participants in both quality and quantity of ideas produced.

The group that combined mass idea generation with a rest period outperformed both groups that used either strategy in isolation. This striking surge in performance suggests that an incubation period following the initial brainstorming step is key to improving creativity, the researchers said.

How long this rest period should be was the focus of the second experiment. Here, the team paid half the participants a fixed amount (these were the controls) and the other half for the number of ideas they produced. As before, the pay-for-quantity participants yielded more, but not better, initial ideas than the fixed-pay group. However, after a quiet, 20-minute walk around campus, they produced more and better-quality puzzles than the control group.

“You need to rest, take a break and detach yourself — even if that detachment is just 20 minutes,” Kachelmeier said.

“The recipe for creativity is try — and get frustrated because it’s not going to happen. Relax, sit back, and then it happens.”

The paper “Incentivizing the Creative Process: From Initial Quantity to Eventual Creativity” has been published in The Accounting Review.


It’s getting harder and harder to come up with new ideas in science, paper reports

The well of new ideas might be drying up — or at least getting deeper.


Image credits Michal Jarmoluk.

A paper penned by researchers from MIT Sloan and Stanford University raises a worrying possibility. New ideas, they write, are getting harder and harder to come by.

Now, they’re not talking about ideas pertaining to cool new places to hang out or something like that. The paper limits itself strictly to new ‘ideas’ in the context of scientific research. And, according to their findings, research productivity is falling rapidly across the board.

Uphill road

The team argues that this drop in research productivity comes down to the fact that scientists need to put in more and more effort just to maintain the same pace — even a slightly slower pace in some fields — of idea generation as a few decades ago. In other words, each new addition to the body of scientific knowledge takes more and more work.

The authors cite Moore’s Law — that the number of transistors that can be packed into a computer processor doubles every two years — as a prime example of this effect. This doubling requires transistor density to grow by 35% per year, and it takes more in-depth research each year to reach that goal.

Productivity graph.

The team defined research productivity as the ratio of idea output, measured as total factor productivity (TFP) growth, to research effort.
Image credits Nicholas Bloom et al., 2018, NBER.

“Many commentators note that Moore’s Law is not a law of nature, but instead results from intense research effort: Doubling the transistor density is often viewed as a goal or target for research programs,” they write.

“The constant exponential growth implied by Moore’s Law has been achieved only by a massive increase in the amount of resources devoted to pushing the frontier forward.”

Research efforts into semiconductor technology have intensified 18-fold since the 1970s, they report. Research productivity, however, has fallen by the same factor over this period, cancelling out the extra effort. This means it’s about 18 times as hard today to push Moore’s Law to its next ‘level’ as it was half a century ago.
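
Using the paper’s definition of research productivity as idea output divided by research effort, the 18-fold figures can be checked with a bit of arithmetic. A minimal sketch (the 18x values are the article’s; the code itself is only illustrative):

```python
# Research productivity = idea output (TFP growth) / research effort.
# The article's figures: effort up 18-fold since the 1970s,
# productivity down 18-fold -- so idea output stays flat.
effort_1970s, productivity_1970s = 1.0, 1.0
output_1970s = productivity_1970s * effort_1970s

effort_today = 18 * effort_1970s              # 18-fold rise in effort
productivity_today = productivity_1970s / 18  # 18-fold fall in productivity
output_today = productivity_today * effort_today

# Same idea output, 18 times the work per idea
print(round(output_today, 9) == round(output_1970s, 9))  # True
```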

It’s not only computer science that is affected; agricultural output follows the same trend. Per-acre yields of corn, soybeans, wheat, and cotton grew about 1.5 percent on average every five years between 1960 and 2015, according to the paper, but the number of researchers trying to boost these yields has risen by a factor of between 3 and 25, depending on the crop. “Yield growth is relatively stable or even declining,” the team concludes, “while the effective research that has driven this yield growth has risen tremendously.”

Yield graphs.

The blue line denotes the annual growth rate of yield for each crop per year. The solid green line is based on R&D targeting seed efficiency only; the dashed line additionally includes research on crop protection.
Image credits Nicholas Bloom et al., 2018, NBER.

In the pharmaceutical industry, research efforts rose by 6% per year since the early 1970s, while productivity (measured in how many new drugs were approved by the Food and Drug Administration) fell by 3.5% per year. When comparing the years of life saved by cancer research per 100 people since the 1970s to the number of medical studies published over the same period, the team found that productivity declined by a factor of 1.2 for all work, and a factor of 4.8 when looking only at clinical trials.

Overall, in the broader economy, the authors report that it takes about 15 times as many researchers today, compared to 30 years ago, for a company to maintain the same rate of revenue growth.

“Just to sustain the constant growth in GDP per person, the U.S. must double the amount of research effort put into searching for a new idea every 13 years to offset the increased difficulty in finding new ideas,” the paper reads.

So what gives?

John Van Reenen, an MIT Sloan professor of applied economics and co-author of the paper, thinks one factor that could explain this trend is that researchers simply need more time to reach the level of education they need in order to start producing new ideas.

“As the total amount of knowledge becomes larger and larger and larger, it becomes increasingly difficult to get to the frontier of that knowledge,” Van Reenen said. “It was much easier a couple thousand years ago.”

We handle this increase in knowledge by focusing our education on a narrow domain — think of how your education became progressively more specialized as you moved from school to high school, to university, then to a master’s degree or even a Ph.D. However, this breeds its own set of issues. Innovation often requires people of various specializations working together, and “it’s very complicated to get all of these people and ideas together,” Van Reenen said. “That, itself, could be a reason why things start slowing down.”

Not all is lost, however. The team also reports that the productivity of research efforts targeting cancer actually rose from 1975 to the mid-1980s, which would “suggest that it may get easier to find new ideas at first before getting harder, at least in some areas.”

Van Reenen also says that we’re a long way off from any sort of hard limit on technological growth. Population growth, the increasing ease of communication, and globalization also offer a lot of opportunities for new ideas to emerge. Just as long as “we keep increasing the amount of resources we put into research,” he explains, “we’ll keep [generating new ideas].”

The paper “Are Ideas Getting Harder to Find?” has been published as a working paper by the National Bureau of Economic Research.


Book review: ‘Ten Great Ideas about Chance’

If life is a game of chance, knowing how to weigh your odds makes all the difference.

“Ten Great Ideas about Chance”
By Persi Diaconis and Brian Skyrms.
Princeton University Press, 272pp. | Buy on Amazon

Throughout the sixteenth and seventeenth centuries, gamblers and mathematicians set the stage for a new line of thinking that would shape nearly every field today, from economics and finance to physics and computer science: they transformed chance from something that happens to you into a well-ordered discipline, something you can calculate and quantify. This book traces ten great ideas that shaped the field, exploring the mathematical, historical, philosophical, even psychological aspects of probability and statistics.

Accessible, yet meticulous in its math, Persi Diaconis and Brian Skyrms’ Ten Great Ideas about Chance is an instructive but fun read.

Roll the dice

The book was born of an interdisciplinary course the two authors — one a mathematician and one a philosopher — taught at Stanford University. As such, it’s built on the assumption that you’ve had some prior exposure to either statistics or probability. In case you haven’t, the authors included an appendix with a brief rundown of the basic elements of probability.

Each of the ten great ideas discussed in the book gets its own chapter. The first will take you through a brief tour of the early days of probability theory, starting with the 1500s, and introduce the concept that chance is, in fact, something we can measure. Chapter 2 also deals with measurement, showcasing how probabilities can be measured in more complex situations that lack a finite collection of equally-probable outcomes.

The third great idea is that, as humans, we’re inherently bad at dealing with probabilistic concepts. One simple example of how much wording influences our perception is the operating room scenario: telling a patient that they have a 90% chance of surviving an operation is more likely to induce them to agree to the procedure than telling them they have a 10% chance of dying — even though both statements mean exactly the same thing.

The fourth and fifth chapters explore the connection between probability and frequency, followed by two chapters dedicated to Bayesian analysis. Chapter 7, titled “Unification”, binds all these together and cements the links between chance, probability, and frequency.

The following two chapters impart context to probability theory, showing how it relates to other disciplines. Chapter 8 deals with algorithmic randomness, the use of computers for random number generation, while chapter 9 looks at probability in the context of physics. The final chapter deals with Hume’s assertion that, in the authors’ words, “there is a problem of understanding and validating inductive reasoning.”

Should I read it?

Ten Great Ideas about Chance treats the topic from an unusual angle, and it will help any faculty members teaching probability by providing a fresh take. The book uses calculus quite freely, and a solid understanding of integral signs and limit arguments will come in very handy while navigating its pages.

But don’t get discouraged by the technical talk — the book packs this stuffy topic into a pleasant, easy-to-read format. As someone with only a cursory education in the field, I can attest that even newcomers to probability will find quite a lot of interesting information here, peppered with “aha” moments. Even if math was never your cup of tea, Ten Great Ideas about Chance remains accessible — while some chapters are quite challenging and likely to give non-specialists a hard time, most of the book (especially its earliest chapters) does a great job of conversing with a wide audience.

One feature I’ve especially appreciated is the inclusion of end-of-chapter summaries, which really helped me wrap my head around some of the topics I had difficulty with. Ten Great Ideas about Chance also features an annotated bibliography and appendices in many chapters, which treat topics the authors deemed too tangential or technical for the main body of the work.

All in all, it’s a great book for anyone who wants to understand some of the central tenets of probability, how they were discovered, and how they can be tamed in our day-to-day lives.

Trust your intuition, researchers say

A series of experiments surprisingly found that sudden insight may yield more correct solutions than gradual, methodical thinking. In other words, this study says you should trust your “aha!” moments.

Intuition is still a poorly understood phenomenon of the mind, describing the ability of the mind to acquire knowledge or make deductions without the conscious use of reason. Now, experiments conducted by a team of researchers determined that a person’s sudden insights are often more accurate at solving problems than slower, methodical approaches.

“Conscious, analytic thinking can sometimes be rushed or sloppy, leading to mistakes while solving a problem,” said team member John Kounios, PhD, professor in Drexel University’s College of Arts and Sciences and the co-author of the book “The Eureka Factor: Aha Moments, Creative Insight and the Brain.” “However, insight is unconscious and automatic — it can’t be rushed. When the process runs to completion in its own time and all the dots are connected unconsciously, the solution pops into awareness as an Aha! moment. This means that when a really creative, breakthrough idea is needed, it’s often best to wait for the insight rather than settling for an idea that resulted from analytical thinking.”

Each experiment had participants solve one of four types of puzzles (180 in total). For example, one puzzle gave them three words, “crab”, “pine” and “sauce”, and asked participants to provide a word that forms a compound with each of them. The answer was “apple” (crabapple, pineapple, applesauce). Another, visual, puzzle showed a scrambled image and asked participants to say what object they thought it depicted. Overwhelmingly, the correct answers came from the spur of the moment, not from logical analysis: for the word puzzles, 94 percent of the responses classified as insight were correct, compared to 78 percent of the analytic-thinking responses. For the visual puzzles, 78 percent of the insight responses were correct, versus 42 percent of the analytic responses.

Carola Salvi, PhD, of Northwestern University was one of the lead authors. She added:

“The history of great discoveries is full of successful insight episodes, fostering a common belief that when people have an insightful thought, they are likely to be correct,” Salvi explained. “However, this belief has never been tested and may be a fallacy based on the tendency to report only positive cases and neglect insights that did not work. Our study tests the hypothesis that the confidence people often have about their insights is justified.”

The differences were even more striking when response times are considered. Answers given close to the deadline were mostly products of analytic thinking, and those late answers were more likely to be wrong. Insight thinkers, by contrast, would rather give no answer than a wrong one.

“Deadlines create a subtle — or not so subtle — background feeling of anxiety,” Kounios said. “Anxiety shifts one’s thinking from insightful to analytic. Deadlines are helpful to keep people on task, but if creative ideas are needed, it’s better to have a soft target date. A drop-dead deadline will get results, but they are less likely to be creative results.”