All that glitters is not gold - 8 ways behaviour change can fail

Updated: Mar 8

Part of Squared Away newsletter series on nudging

Failed interventions are more common than most people think, yet the behaviour change movement has largely focused on successes - understandably, because talking about failures might undermine the credibility of our field and profession. Yet failing forward is essential if we want to make progress, so this article looks at how behaviour change can go wrong.

Although some attention has been given to nudging failures, a systematic evaluation has been missing. Like many aspects of behavioural science, the incentives typically lie in discovering New Shiny Things that boost researchers' careers, and to a lesser extent in conducting research reviews, which will never form the basis of accessible, entertaining pop science books or a blockbuster TED talk.

Maybe it's my pessimistic Nordic nature, but this kind of skeptical pre-mortem thinking really appeals to me, which is why I was fascinated by this 2020 paper by Magda Osman and her colleagues, titled 'Learning from behavioural changes that fail', which aims to identify the characteristics of failed interventions.

This kind of taxonomy helps put behaviour change on a more robust theoretical foundation. It is also hugely useful for practitioners, because it helps us analyse behaviour change interventions and map out the factors that might affect their effectiveness.

Why we need a structured approach

Despite what many people think, "nudge" (or nudging) is not a specific framework or even one cohesive theory. It's better seen as a collection of techniques and approaches with shared characteristics for designing choice environments in a way that changes behaviour - think of a power tool with different bits for different jobs!

As such, it doesn't help us understand the more detailed dynamics of nudges - for that, we need something more structured. With a causal explanatory approach, we could construct scenarios of potential outcomes and determine which features of an intervention could influence them.

The paper suggests these helpful questions to ask when planning an intervention:

  • What factors could be causally relevant to the success of the intervention?

  • How could the intervention influence these factors?

  • What precautionary measures should be taken to avoid failure?

Skipping this kind of pre-mortem analysis risks trialling interventions that fail to scale or to preserve the effect over time, because you don't fully understand the underlying mechanisms that might compete with or undo the successful behaviour change.

Part of the promise of nudging is its low cost, which sometimes leads (mostly commercial sector) practitioners to think it doesn't matter much if we don't know what barriers we're trying to address, or how certain we are that a particular 'nudge' will work. In reality, however, failures don't come cheap - there is always at least a time investment, plus the opportunity cost of not doing something else that might have been more effective.

How to fail fabulously with nudging

The goals of the reviewed article were to:

  1. highlight that reports of failure and backfiring are common in the literature

  2. identify characteristic regularities and causal pathways underlying the failures

  3. present a taxonomy derived from the commonalities

The taxonomy is based on an analysis of 65 studies of field trials and experiments across a range of domains. The documented failures included the predicted outcome not materialising, the opposite outcome being produced, and unintended side effects being generated.

Before we dive in, here is a quick summary of the proposed taxonomy of behaviour change failures:

  1. No effect

  2. Backfiring

  3. Intervention is effective but it's offset by a negative side effect

  4. Intervention isn't effective but there's a positive side effect

  5. A proxy measure changes but not the ultimate target behaviour

  6. Successful treatment effect offset by later (bad) behaviour

  7. Environment doesn't support the desired behaviour change

  8. Intervention triggers counteracting forces

The images below illustrate the causal models of behaviour change failures with three basic elements:

  1. nodes (domain variables with two or more states)

  2. arrows (probabilistic causal relationships between variables)

  3. probabilities (not shown here - see original paper)

The original paper's round nodes have been reproduced here as head icons with additional imagery to make it easier to grasp the types quickly.
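To make the node/arrow/probability structure concrete, here is a minimal sketch (not from the paper) of such a causal model in Python: binary nodes, arrows encoded as conditional probability tables, and a marginal probability computed by summing over node states. The mediator variable and all numbers are purely illustrative assumptions.

```python
# Illustrative sketch of a nodes/arrows/probabilities causal model.
# All variable names and probability values here are hypothetical.

# Root node: P(intervention delivered)
p_intervention = 0.9

# Arrow: intervention -> motivation (a hypothetical mediator node)
# P(motivated | intervened), P(motivated | not intervened)
p_motivated = {True: 0.6, False: 0.2}

# Arrow: motivation -> behaviour change
# P(change | motivated), P(change | not motivated)
p_change = {True: 0.7, False: 0.1}

def p_behaviour_change():
    """Marginal probability of behaviour change, summing over all node states."""
    total = 0.0
    for intervened in (True, False):
        p_i = p_intervention if intervened else 1 - p_intervention
        for motivated in (True, False):
            p_m = p_motivated[intervened] if motivated else 1 - p_motivated[intervened]
            total += p_i * p_m * p_change[motivated]
    return total

print(round(p_behaviour_change(), 3))  # → 0.436
```

Reasoning through such a model before a trial makes the "what could go wrong" questions above concrete: each arrow is an assumption that can be weak, reversed, or offset by a path the model omits.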

1. No effect

The most basic fail is simply that there is no treatment effect - i.e. no change in behaviour whatsoever, but also no harm done (except wasted effort).


  • A social comparison nudge to reduce water consumption might fail overall and even lead certain subgroups to increase their water consumption.

  • Using financial incentives to increase people’s physical activity might fail to change behaviour.

2. Backfiring

Next up, interventions can backfire when they change the target behaviour but in the opposite direction to what was intended.


  • Providing information about the negative consequences of unhealthy food can result in some people increasing their consumption of those foods, which is an example of reactance.

  • Using social norms can also easily backfire - using descriptive norms that communicate typical behaviours, instead of injunctive norms that communicate (dis)approved behaviours, can accidentally increase undesirable behaviours.

  • For example, news stories of people breaking COVID-19 regulations might make good TV, but they also unintentionally showcase and normalise undesirable behaviour!

3. Intervention is effective but it's counterbalanced by a negative side effect

The third kind of failure is one where the intervention is successful, but the benefits are offset by an unintended negative consequence that largely negates the positive change (e.g. through compensatory choices).


  • An environmental campaign might reduce water consumption but increase electricity consumption.

  • A green energy default nudge can decrease support for more comprehensive but also more cumbersome policies, like a carbon tax.

  • Information on calories increases the choice of healthier options, but the overall benefit is diminished by more calorific sides and drinks.

4. Intervention isn't effective but there's a positive side effect

Sometimes the intervention doesn't change the target behaviour, but produces unexpected positive consequences. Assessing these positive side effects is important because it's often assumed that changing specific behaviours will generalise to other behaviours.

For example:

Countries with different default policies for organ donation (opt-in vs opt-out system) show no differences in overall transplant rates, but a more fine-grained analysis of live and deceased donor rates reveals that opt-out countries have a higher number of deceased donors and a lower number of living donors. This is a positive side effect because fewer live donors are subject to immediate risk of harm from organ harvesting.

5. A proxy measure changes but not the ultimate target behaviour

When trying to influence behaviour for a large population, true behavioural data is sometimes difficult to obtain. In those cases, we might need to settle for using proxy measures - behavioural changes that are pragmatic substitutes for the target behaviour. The problem is that a change in the proxy isn't always a reliable indicator of success.


  • Becoming a potential organ donor is a proxy for actual organs donated.

  • Providing information may increase healthy food choices in a simulated supermarket but have no long-term impact on body mass index and lifestyle.

6. Intervention is successful but offset by later (bad) behaviour