Elina Halonen

The Dark Side of Behavioural Science Stardom: Unpacking the Ariely & Gino Controversy

Updated: Oct 6, 2023

The recent downfalls of prominent researchers like Dan Ariely and Francesca Gino present an opportunity for critical examination of the dynamics and incentives that allowed misconduct to take root – raising questions about the glorification of public intellectuals, prioritization of financial motives, and valuing counterintuitive findings over substance.

Two researchers dancing on a book while money is raining from the sky
N.B. I usually include references in line with the text of my articles, with links to the sources of claims. In this case, there is so much overlapping and complex material that I've opted to provide a list of references at the end. If you're interested in diving deeper into a particular claim or piece of information, contact me and I will dig it out for you!

Introduction

The startling downfalls of former research “rockstars” Dan Ariely and Francesca Gino, now embroiled in data fabrication controversies, provide an opportunity to critically examine the cultural dynamics and skewed reward systems lurking beneath the surface in science. Their dramatic falls should also prompt reflection among applied behavioural science practitioners.


Once towering figures in popular behavioural science, Ariely and Gino built fame and fortune by skillfully packaging their research into bestselling books and TED talks. In the past two years, watchdog researchers (the Data Colada blog) have repeatedly raised serious concerns about fabricated results underpinning some of Ariely and Gino’s most prominent studies. While Ariely denies wrongdoing, Gino’s misconduct was confirmed by Harvard after an 18-month investigation, leading the school to initiate proceedings to revoke her tenure. Many of Gino’s co-authors are now reviewing past work, reflecting credibility concerns beyond just her case. This scrutiny has not extended to Ariely in the same way, despite the misconduct accusations: although he remains under investigation, he has so far retained his position at Duke.


These dramatic falls from grace give us an opportunity to critically examine the cultural dynamics, skewed reward systems and misaligned incentives that exist under the surface in science. They should also make us think about the nature and tone of the public discussion around applied behavioural science, and especially the impact of glorifying a handful of prominent figures and promoting counterintuitive narratives over nuance and substance.


The intertwined trajectories of Ariely and Gino

We need to start with crucial context: tracing the arc of Ariely and Gino's extensive collaboration helps us understand how their transgressions might have become increasingly intertwined and amplified each other over time as their stars rose in parallel.

Gino's suspect research patterns seem to have begun around the time she started frequently publishing papers with Ariely as a coveted co-author. She greatly admired Ariely, and they bonded over a shared interest in studying dishonesty, which resulted in multiple high-profile collaborations. This suggests Ariely, known for his disregard for academic rules, may have negatively influenced Gino's research integrity standards as she sought to emulate his prominence and success. Once Gino obtained the security of tenure, she detached from oversight of study data collection, leveraging teams to enable an extremely high paper output.


Gino's book "Rebel Talent" rationalizes rule-breaking behavior, perhaps indicating she harbored similar attitudes that justified dubious practices. The timeline implies Ariely's documented cavalier attitude toward academic protocol likely normalized and enabled their mutual misconduct. Their blanket denials in response to detailed allegations have only bred more skepticism, with many peers viewing their excuses as attempts to muddy the waters rather than sincerely address the extensive evidence, further fuelling suspicions that outright data falsification occurred.


How conflicts of interest can compromise scientific integrity

More fundamentally, the stories of Ariely and Gino implicate deeper systemic and cultural dysfunctions that enabled alleged misconduct to evade accountability. To start, the highly competitive "publish or perish" pressures of academia, especially in the US, can incentivise exaggeration, misleading narratives, and cutting corners.


Academic institutions also have strong incentives to overlook flaws and misconduct from star faculty when grant and fundraising revenues are at stake - when allegations do arise, covering up is the default to protect revenue streams and brand image. Meanwhile, whistleblowers who dare to question established practices often face hostility and retaliation for threatening the status quo – especially if they are younger scholars.

However, systemic problems that incentivise academic misconduct go far beyond these two high-profile cases and the field of behavioural science. For example, the president of Stanford University recently resigned over manipulated research in his own papers and allegations that he created a culture that rewards "winners" producing favourable data.

Ariely and Gino’s extensive involvement with private industry also neatly illustrates the conflicts between truth-seeking and profit motives: Ariely served as a paid consultant for organizations ranging from Amazon to the NFL, while Gino partnered with corporate sponsors like Disney. Their fame also undoubtedly helped their respective institutions' fundraising, as well as adding prestige to attract students. Both also typically charged speaking fees of up to US$100,000, on top of corporate consulting income and the salaries paid by their employers, which in Gino’s case is reported to be around US$500,000.


Adding up the revenue from salaries, book sales, corporate consulting and speaking fees starts to paint a picture of the incentives and temptation that might lead a person to, let’s say, engage in motivated reasoning when it comes to justifying certain behaviours that don’t jeopardize one’s previous success.


The dangers of glorifying individual researchers

The stories of Ariely and Gino also reveal the dangerous consequences of excessive deference to prominent researchers who achieve celebrity-like status and boost their personal brands through engaging stories, such as those Ariely told on stage about climbing Annapurna or rafting the entire Mekong River.

Their pop psychology books and TED talks created a "problematic nexus of academia, business consulting and pop science" where accuracy became secondary to profit and fame, and a public discourse where bold but questionable claims are inadvertently incentivised over less flashy, more complex representations of our field.

Ultimately, their charisma and clever narratives created an aura of genius that discouraged objective scrutiny. For example, Ariely has conducted ethically dubious studies involving pornography and electric shocks, and proposed infecting Israeli soldiers with COVID. His Duke lab was bankrolled by Wall Street firms like BlackRock, yet he spent minimal time there, pursuing profitable non-academic collaborations instead.

Ariely has also reportedly offered his lab members lavish ski trips, beach retreats, and a $20,000 coffee machine, and when a prospective PhD student wanted to choose another university, he made an extraordinary personal loan offer to entice them to his lab instead. Meanwhile, Gino collected gift card reviews for her book from her own students when they participated in studies, which demonstrates a curious set of priorities.


Embracing nuance and complexity in behavioural science

While exposing outright fabrication is crucial, restoring behavioural science's credibility requires confronting systemic issues around the propagation of flawed research.


Pressure to publish attention-grabbing, counterintuitive studies that are judged on narrative appeal rather than methodological rigour incentivises neat, simple stories over nuanced, conflicting truths. It rewards flashy findings over substantive, rigorous work, while the need for simple, engaging narratives in mass communication can encourage stretching the truth. Practitioners need to address this by assessing claims critically rather than accepting them automatically because of academic authorship: there are hundreds of researchers doing good, solid work that could help us in ours, but we often overlook it in favour of big names.


In addition, recycling famous examples year after year (like the jam jar study on choice overload) creates a false impression that our field is actually quite limited in scope. We need to be more discerning before we amplify any particular research area or findings in public discourse, as well as build scientific literacy and critical thinking skills. More fundamentally, we need to engage with complexity rather than defer to prominent counterintuitive findings or captivating half-truths over cumulative knowledge, however unsexy it might be.

Psychological research provides useful insights into human behaviour which can improve the lives of millions of people. However, it might not always reliably inform high-stakes policies because it is challenging to generalize from lab studies to complex real-world situations. As practitioners, we need to understand these limits and avoid overstating the extent to which any particular research can be applied in dissimilar contexts. We also need to be more meticulous when we translate basic behavioural research into application. Before applying ideas from the scientific literature, we need to consider both questionable practices and more legitimate reasons why certain findings might be unreliable and require caution:

  1. Small samples that limit studies' statistical power, making significant findings more likely to be exaggerated or false positives

  2. Statistical procedures that boost false positives: fishing expeditions, analyzing data multiple ways until significant, hypothesizing after seeing the results (a simple simulation of this is sketched after this list)

  3. Storytelling to produce a more coherent narrative and increase likelihood of publication: not reporting all analyses and cherry-picking results
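
To make the second point concrete, here is a minimal simulation sketch (my own illustration, not taken from the article's sources): it assumes there is no true effect at all, and shows how simply trying several outcome measures and stopping at the first significant result inflates the false-positive rate well beyond the nominal 5%. The sample size and number of outcomes are arbitrary choices for illustration.

```python
# Minimal sketch: how "analysing data multiple ways until significant" inflates
# false positives. All numbers here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def one_study(n=20, n_outcomes=5):
    """Simulate a two-group study where the null is true for every outcome.

    Each outcome measure is drawn independently for simplicity. Returns True
    if ANY outcome reaches p < .05, i.e. the 'keep analysing until something
    is significant' strategy pays off with a false positive.
    """
    for _ in range(n_outcomes):
        control = rng.normal(0, 1, n)
        treatment = rng.normal(0, 1, n)  # same population, so no real effect
        _, p = stats.ttest_ind(control, treatment)
        if p < 0.05:
            return True
    return False

n_sims = 5_000
one_test = sum(one_study(n_outcomes=1) for _ in range(n_sims)) / n_sims
five_tests = sum(one_study(n_outcomes=5) for _ in range(n_sims)) / n_sims

print(f"False-positive rate with one pre-specified analysis: {one_test:.1%}")   # ~5%
print(f"False-positive rate when trying five outcomes:       {five_tests:.1%}")  # ~20-25%
```

With five shots at significance, the false-positive rate roughly quadruples even though nothing real is going on.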

Of course, solutions like pre-registration, transparent reporting, and larger samples can and eventually will improve standards, but they require a continued cultural shift in values and incentives. Hyped claims from initial weak studies frequently fail replication or are debunked under scrutiny.


We love tidy stories and counterintuitive effects, yet truth is usually messy and complex. Understandably, the incentives for popular books favour marketability over accuracy, which can encourage stretching claims for an engaging hook rather than preserving nuance. For that reason, highly counterintuitive pop psychology storytelling should be viewed skeptically, because compelling narratives often spread widely before rigorous validation and can lead to premature applications.


Some practitioners might argue that the research is there simply to inspire ideas for interventions and that we need not concern ourselves too much with its veracity. The problem is, though, that we can end up wasting our clients' limited resources on solutions that are very unlikely to work. The now-retracted "Patient Zero paper" (and its failed replication) that largely sparked the fraud investigations illustrates this perfectly.


The original research "inspired" a lot of applied work, including a field trial by the Behavioural Insights Team in Guatemala that covered over 600,000 taxpayers and more than three million tax declarations: a significant investment of time, effort and money that could have been allocated differently, wasted on an intervention based on fabricated results.


As Michael Sanders sums it up in the NPR podcast on the fraud case:

Sometimes it does make me feel like we are all a bit stupid when you see Dan Ariely who famously says ‘everybody lies a little bit’ and Francesca Gino says ‘here’s my book on how you can succeed at work if you don’t follow the rules’, and it’s like you walk in with an eye patch and a tricorn hat and a cutlass, and 100 pages into the book you realise ‘oh my god, they’re a pirate – I never saw it coming!’

It's also worth reflecting on the moral aspect of how both Ariely and Gino reaped enormous financial rewards off the back of the allegedly fraudulent research, while many other parties wasted time, money and effort trying to replicate or build on it, in addition to the money wasted on interventions chasing the ghost of an idea.


Is behavioural science in a crisis - again?

The controversies surrounding Ariely and Gino serve as a stark reminder of the pitfalls of prioritizing fame over factual research, and have put a spotlight on many deep-seated issues within behavioural science. Even at its height, the media attention on psychology's replication crisis had limited reach outside our professional circles because the problems highlighted were complex and difficult to grasp without engaging with the topic more deeply.


This time, the characters of the story are known to millions through their TED talks and books, and they have been some of the most familiar faces of behavioural science for years. Fraud is straightforward enough to summarise in a general audience article and the moral dimension adds emotional depth that was missing in the stories about the replication crisis. All of these factors combined can potentially result in a bigger impact on the credibility of behavioural science - especially as Gino's lawsuit will likely continue to give this case media attention for a long time.


Then again, we should keep in mind this reflection from Adam Mastroianni:

This whole debacle matters a lot socially: careers ruined, reputations in tatters, lawsuits flying. But strangely, it doesn't seem to matter much scientifically. That is, our understanding of psychology remains unchanged. If you think of psychology as a forest, we haven't felled a tree or even broken a branch. We've lost a few apples.

From the perspective of the scientific world, it's not a big deal - really, it isn't. I have to quote Adam again because I can't put it better than this:

Gino's work has been cited over 33,000 times, and Ariely's work has been cited over 66,000 times. They both got tenured professorships at elite universities. They wrote books, some of which became bestsellers. They gave big TED talks and lots of people watched them. By every conventional metric of success, these folks were killing it.
Now let's imagine every allegation of fraud is true, and everything Ariely and Gino ever did gets removed from the scientific record. What would change? Not much. Let's start with Ariely. He's famous for his work on irrationality, which you could charitably summarize as “humans deviate from the rules of rationality in predictable ways,” or you could uncharitably summarize as “humans r pretty dumb lol.”
He's a great popularizer of this research because he has a knack for doing meme-able studies, like one where, uh, men reported their sexual preferences while jerking off. But psychologists have been producing studies where humans deviate from the rules of rationality for 50 years. We've piled up hundreds of heuristics, biases, illusions, effects, and paradoxes, and if you scooped out Ariely's portion of the pile, it would still be a giant pile. A world without him is scientifically a very similar world to the one we have now.

Contrary to Noam Scheiber's view in the NYT, I don't think the actual field of behavioural science is in a crisis. As Adam so eloquently puts it, we could easily delete everything Ariely and Gino have ever published and it would not make a big difference in our understanding of human behaviour.


I've never used research by either one in my professional work, and rarely refer to big-name researchers in general. For me, the more a scientist seems to focus on promoting themselves, the less energy they will have to do solid research, and the more their research is likely to take shape according to what helps continue that financial success.


However... perception is sometimes more important than truth, and those not deeply immersed in our field may lack the appropriate perspective. These people often include different kinds of stakeholders, whose views on our applied field inevitably influence our future. Will a few apples make it look like the whole basket is rotten?


For me, this should be a warning sign for practitioners: the progress and long-term success of our applied field depends on upholding rigorous standards, viewing bold claims skeptically until replicated, and rewarding substantive work over captivating narratives. As practitioners, we need to showcase and champion research that's both robust and ethical, ensuring that behavioural science remains a trusted tool for understanding and shaping human behaviour.


In fact, we would do well to follow Daniel Kahneman's lead (quoted in the NYT article):

“When I see a surprising finding, my default is not to believe it,” he said of published papers. “Twelve years ago, my default was to believe anything that was surprising.”

Calibrating our course for the future

To some extent, what I have written feels almost painfully obvious to me - there is little new in the fact that there are significant flaws in the scientific insight factory in academia.


Early on in my applied BeSci career I attended 25+ academic conferences in various subfields of psychology, where I had the opportunity to hear stories from "behind the scenes" of the publication process from the people actually writing what we read as practitioners. I vividly remember sitting in a presentation on p-hacking by Uri Simonsohn (one of the founders of Data Colada) at the Society for Judgment and Decision Making conference in 2012, in the early days of what is now known as the replication crisis.


It sparked numerous discussions on methodology at conferences in the years that followed, including critiques like the Seven Sins of Consumer Psychology by Michael Tuan Pham, then President of the Society for Consumer Psychology. Most of those critiques are still relevant, and would be valuable for the applied world to understand. The point is that although the replication crisis exposed questionable practices requiring reform, solid research still exists, and failed replications also present psychology with an opportunity to improve its practices and lead in promoting public understanding of science's self-correcting process.


As practitioners, we also play a role in that. Some good starting points for our own reforms include reading the BIT manifesto written by Michael Hallsworth to see what applies to your own working context (video below). I'd also recommend Merle van den Akker's excellent response, The Systems that Keep Behavioural Science from Progressing - a Reply to BIT's Manifesto, for more contextual understanding. As long as this article is, it only scratches the surface.


Final note: it's well worth reading the NYT articles for the full, detailed story and the character descriptions of both Ariely and Gino - scroll to the bottom of the page.

 

Summary of evidence reviewed in Data Colada posts:

 

For those who prefer an audiovisual format of information, a video summary of the BIT Manifesto:


 

References

This article contains summarised information from the following articles:

Older background for perspective:

If you want to take a look at some of the evidence for Case Ariely:

If you want to take a look at some of the evidence for Case Gino:



The Harvard Professor and the Bloggers - The New York Times (PDF)
They Studied Dishonesty. Was Their Work a Lie? - The New Yorker (PDF)
