Causal Reasoning in the Marketplace
In the big data era, correlations between consumption behaviors and health outcomes are frequently discovered and reported in newsletters and advertisements (e.g., people who take daytime naps tend to have higher mortality risk than others). Consumers often update their causal beliefs based on such information (e.g., coming to believe that taking daytime naps increases mortality risk). However, many correlations are spurious, raising the question of when consumers perceive them as reflecting causal relationships. The following three projects examine factors that affect the extent to which people infer causality from correlations.
Zhang, Yue, and Gabriele Paolacci, “More Correlations Signal Causation: The Effect of Correlational Scope on Perceived Causality,” Journal of Consumer Research. [Full Text] [OSF]
Across eight preregistered studies, we demonstrate that a correlation (e.g., between drinking tea and bone health) is perceived as more likely to reflect a causal relationship (i.e., drinking tea makes bones healthier) when the plausible cause reportedly correlates with additional outcomes (e.g., heart conditions). The correlational scope effect is attenuated when the additional outcomes are perceived as weakly related to the focal outcome, mitigated under a cause-last framing (in which the plausible cause in a correlation is presented after the target outcome), and can influence product choices. We propose that category-based induction may contribute to the correlational scope effect: people project the perceived susceptibility to a cause from the additional outcomes onto the focal outcome. These findings have implications for our understanding of causal judgment and for consumers’ well-being.
Zhang, Yue, and Gabriele Paolacci, “Good Is More Causal than Bad: The Effect of Correlation Framing on Perceived Causality.” Under review at Journal of Consumer Research.
Across eleven preregistered studies, we demonstrate a correlation framing effect: people are more likely to believe that the same correlation reflects causality when outcomes are framed as desirable (e.g., frequent classical music listeners tend to have better memory than frequent pop music listeners) than when outcomes are framed as undesirable (e.g., frequent pop music listeners tend to have worse memory than frequent classical music listeners). This effect is distinct from the decrease–increase framing effect and is not driven by the perceived magnitude of the correlation. Instead, through two moderation studies and three causal chain studies, we find evidence that positive framing increases the perception that the plausible cause is associated with a broader range of additional outcomes, which in turn increases perceived causality.
Zhang, Yue, and Nicholas Reinholtz, “Causal Reasoning from Memory.” Work in progress.
In this research project, we find that consumers are more likely to believe that a relationship they learned about is causal when making judgments from memory rather than online (i.e., while the information is still available). Specifically, they tend to report higher perceived causality and are more likely to use causal language (vs. correlational language) to describe the relationship. We also find that these effects can further influence consumers’ willingness to pay (WTP) for products.
Effects of AI on Consumer Judgment
Zhang, Yue, Anne-Kathrin Klesse, and Alixandra Barasch, “Betting Against Randomness: Generative AI Predictions Inflate Perceived Winning Probabilities.” In preparation for submission.
Gambling can negatively affect people’s financial and mental well-being, and the rise of generative AI (GenAI) may inadvertently exacerbate this problem. Recently, individuals have increasingly turned to GenAI (e.g., ChatGPT) to predict gambling outcomes, even in chance-based games—such as lotteries, roulette spins, dice rolls, or coin flips—whose outcomes are inherently unpredictable and independent of any prediction source. Across seven preregistered studies with samples from the US, the UK, and China, we find that people perceive predictions produced by GenAI for chance-based gambling as more likely to win than their own. This effect is driven by the misperception that even random events follow underlying rules, which GenAI can detect and use to make predictions. Consistent with this account, the effect is stronger among individuals who are more prone to misperceiving randomness. Across incentivized studies, these inflated perceptions translate into riskier gambling behaviors, such as being more likely to enter chance-based gambling activities and wagering more money on gambling predictions. Finally, we identify an intervention that mitigates inflated perceptions of GenAI-generated predictions by limiting the variability of predictions produced by GenAI.
Zhang, Yue, Mirjam Tuk, and Anne-Kathrin Klesse (2024), “Giving AI a Human Touch: Highlighting Human Input Increases the Perceived Helpfulness of Advice from AI Coaches,” Journal of the Association for Consumer Research. [Full Text] [OSF]
How can we increase the acceptance of artificial intelligence (AI) coaching advice? Across five studies, we document that people perceive AI advice as more helpful if human input is (made) salient. Utilizing a naturalistic field setting, study 1 shows that the more students believe that an AI coach contains human input, the more helpful they perceive its advice to be. Highlighting human input as an intervention strategy increases the perceived helpfulness of AI advice in the context of photography, compared to various control conditions (study 2 and a follow-up study in the appendix). Study 3 shows that the effect is mediated by an increased subjective understanding of AI feedback when human input is highlighted. Study 4 provides evidence through moderation, showing that the positive impact of highlighting human input disappears under low levels of subjective understanding.