By Elina Halonen

Which tools for the job? A snapshot of nudges across different domains

Updated: Jul 31, 2023


In this article we'll look at two evidence reviews that give us snapshots across domains and nudge types. We'll go through each paper individually and round up with a short summary at the end. Let's dive in!


The two papers reviewed in this article are:

  • How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies (Hummel & Maedche, 2019)

  • A systematic scoping review of the choice architecture movement: Toward understanding when and why nudges work (Szaszi et al., 2018)


How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies (2019)

This paper focuses on clarifying the effects and limits of nudging through a quantitative review of 100 publications and more than 300 effect sizes from different research areas. Although its explicit research question is how nudges can be classified and what factors influence their effectiveness, there are no new theoretical concepts here - the categories used have been borrowed or adapted from elsewhere.


It's still worth a quick look, because the cross-domain review gives us an interesting snapshot of which nudge types have been most popular in different application domains and assesses their effectiveness.


The paper starts by dividing choice architecture tools into those that structure the choice task (what is presented to decision-makers) and those that describe the choice options (how the choice is presented). This division is based on the 2012 paper "Beyond Nudges: Tools of a Choice Architecture" by a stellar list of authors, so it's worth a read too.

Their morphological box gives a very diverse picture of nudging - of course, this is a snapshot based on their selection of nudges while in reality there will be many more in the private and non-profit sectors that are not reported in scientific journals!


Some key points:

  • Two thirds of the studies used tools to describe the choice options

  • The most common nudge types were defaults, warnings, social references and change effort

  • Half of the studies sampled were field experiments

  • Two thirds had significant effects (which suggests publication bias)

  • A third of the studies had a high relative effect size

Source: A quantitative review on the effect sizes and limits of empirical nudging studies

Looking at the types of nudges by application domain was also interesting:

  • Health: changing level of effort (e.g. rearranging cafeterias) and warnings were the most common

  • Environment: social reference and defaults

  • Energy: almost entirely disclosures

  • Privacy: mainly warnings

  • Finances: defaults and reminders

  • Policy making: a mixed bag but reminders were the most popular

Category of nudges per context

Another way to look at the data is to consider where different nudge types are most commonly used:

  • Defaults: environment, finances

  • Simplification: mixed bag

  • Social references: environment

  • Change effort: health

  • Disclosure: energy

  • Warnings: health, privacy

  • Precommitment: health

  • Reminders: health, policy making, finances

  • Implementation intentions: environment, energy

  • Feedback: health, energy
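As a side note, the mapping above is easy to flip around programmatically to recover the earlier domain-by-domain view. A minimal Python sketch (the category strings come from the list above, but the data structure and names are my own; "simplification" is omitted because it was a mixed bag with no clear home domain):

```python
# Most common application domains per nudge type, as listed above
# ("simplification" omitted: no clear domain in the review)
domains_by_nudge = {
    "defaults": ["environment", "finances"],
    "social references": ["environment"],
    "change effort": ["health"],
    "disclosure": ["energy"],
    "warnings": ["health", "privacy"],
    "precommitment": ["health"],
    "reminders": ["health", "policy making", "finances"],
    "implementation intentions": ["environment", "energy"],
    "feedback": ["health", "energy"],
}

# Invert the mapping: which nudge types cluster in each domain?
nudges_by_domain = {}
for nudge, domains in domains_by_nudge.items():
    for domain in domains:
        nudges_by_domain.setdefault(domain, []).append(nudge)

print(nudges_by_domain["health"])
# ['change effort', 'warnings', 'precommitment', 'reminders', 'feedback']
```

Running this confirms, for example, that health is the domain with the widest spread of nudge types in the sample.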

Although the paper doesn't address it, the most popular nudges implicitly suggest what are presumed to be the common root causes of undesirable behaviours in each domain. For example, warnings are a form of education that assumes a lack of knowledge, while reminders are a way to work around forgetfulness and inattention.


So, how effective is each of these nudge types? Across application domains, defaults seem to have the largest relative effect sizes - meaning that they are more effective than the other nudge categories in this review. However, we should keep in mind that this is only a sample and not the entire population of interventions, so we can't say for sure what a more comprehensive effectiveness ranking would look like.


It should also be noted that 40% of the studies included in the review were from the United States and a large part of the rest from European countries, while only a few studies were conducted in African or Asian countries; Latin America was largely uncovered (at the time of the review).


There is, of course, much more detail in the paper itself - these were just some of the highlights!


A systematic scoping review of the choice architecture movement: Toward understanding when and why nudges work (2018)

The second paper reviewed 422 choice architecture interventions in 156 empirical studies to assess the current state of the nudge movement. The theoretical contribution of this paper is quite limited, but it certainly provides an interesting snapshot!


Key points:

  1. Health was the most studied domain (42% of studies), followed by sustainability (19%)

  2. Only 24% of the studies focused on exploring moderators or underlying processes (even though studies grounded in theory tend to be more effective)

  3. Only 7% of the studies applied power analysis

  4. Interventions were piloted in 13% of the studies and 7% included follow-ups to measure whether the intervention effect was sustained

  5. 93% of the studies included at least one successful intervention while only 18% reported unsuccessful interventions

  6. 47 unique variables were found to moderate the effectiveness of the nudges

  7. 49% of the studies were conducted in the USA, 38% in Europe (mostly UK, NL, GER) - in other words, only 13% came from non-WEIRD countries!

To help myself digest the information, I created a couple of quick and dirty charts - the original data table can be found below. After all that effort, I'm not sure it was worth it, because the "sample sizes" for some of the categories are small, which skews the presentation of the numbers - but for the purposes of a quick glance... here you go!

The original data that the above chart is based on

Some final thoughts and lessons for the future

Looking across the two review papers, it's hard to see clear patterns - there is a lot of data but the story is muddled.


To me that suggests that we urgently need to learn more about why, when, and to what extent interventions work. Currently, we can't really predict the effectiveness of different types of interventions across different domains because we lack process explanations of interventions (i.e. why they do or don't work) and an understanding of their boundary conditions (the circumstances under which they might work). In other words, the "choice architecture movement" has provided tools but missed out on instructions for how and when to use different intervention techniques.


At the moment there is also variation in how interventions and their effectiveness are reported, because current taxonomies do not provide an exhaustive list of categories and subcategories to cover all the various forms of nudge interventions. One way to improve the situation would be to use the taxonomies and reporting standards of the broader academic community and to put more focus on the key variables that might moderate the effectiveness of a nudge.


For now, the answer of what nudges work where and when seems to be "it depends" - we'll take a look at some more theory-led ideas on underlying mechanisms in the next two posts!


You can find more details and references in:

Szaszi, B., Palinkas, A., Palfi, B., Szollosi, A., & Aczel, B. (2018). A systematic scoping review of the choice architecture movement: Toward understanding when and why nudges work. Journal of Behavioral Decision Making, 31(3), 355-366.

Hummel, D., & Maedche, A. (2019). How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies. Journal of Behavioral and Experimental Economics, 80, 47-58.

