Data Analytics and Human Heuristics: How to Avoid Making Poor Decisions
The “hot hand,” a metaphor applied frequently to basketball, is the idea that a shooter who has made several consecutive shots will enjoy a higher-than-normal success rate on his or her ensuing shots. I discussed the “hot hand” concept, and its flaw, at a TDWI (The Data Warehouse Institute) conference many years ago.
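The flaw is easy to demonstrate with a quick simulation: long making streaks arise routinely even when every shot is an independent coin flip. The sketch below is my own illustration, not the original study's method; the function names (`longest_streak`, `simulate`) and the parameters (a 50% shooter taking 20 shots) are assumptions chosen for simplicity.

```python
import random

def longest_streak(shots):
    """Length of the longest run of consecutive makes (True values)."""
    best = cur = 0
    for made in shots:
        cur = cur + 1 if made else 0
        best = max(best, cur)
    return best

def simulate(n_games=10_000, shots_per_game=20, p_make=0.5, seed=42):
    """Fraction of simulated games containing a streak of 4+ makes,
    assuming every shot is an independent coin flip (no hot hand)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_games):
        shots = [rng.random() < p_make for _ in range(shots_per_game)]
        if longest_streak(shots) >= 4:
            hits += 1
    return hits / n_games

print(f"P(streak of 4+ makes in 20 shots): {simulate():.2f}")
```

Under these assumptions, nearly half of all simulated games contain a four-shot streak, so a fan watching a purely random shooter would see “hot hands” all the time.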
Much to my surprise, I noticed that Amos Tversky was the source of that original analysis. This is the same Amos Tversky who, along with Daniel Kahneman, was featured in Michael Lewis’s most recent book, “The Undoing Project: A Friendship That Changed Our Minds,” an account of Tversky and Kahneman’s extraordinary bond that incited a revolution in Big Data studies, among other things, and made much of Lewis’s own work possible.
Given how much I love Michael Lewis’s work (his book “Moneyball” is still my favorite introduction to data science), I couldn’t wait to dive into this one!
Lewis is a marvelous writer with an uncanny ability to explain complex subjects in very understandable terms. I encourage you to buy and read the book, and to appreciate Lewis’s uncovering of Tversky and Kahneman’s considerable work in behavioral economics. (By the way, Kahneman also published the critically acclaimed book “Thinking, Fast and Slow.”)
Behavioral Economics: The Violent Collision of Economics and Psychology
Tversky and Kahneman were the original “economic psychologists.” They researched, identified, and validated heuristics in human judgment and decision-making that lead to common and predictable errors.
Their landmark research paper, published in 1979, was titled “Prospect Theory: An Analysis of Decision under Risk.” Prospect theory analyzes how people choose between probabilistic alternatives that involve risk: decisions are driven by the potential value of losses and gains, evaluated through certain heuristics, rather than by the final outcome. The model is descriptive; it tries to capture real-life choices rather than the optimal decisions prescribed by normative models.
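The heart of the theory is a value function that is concave for gains, convex for losses, and steeper for losses than for gains (loss aversion). A minimal sketch, assuming the median parameter estimates (α ≈ 0.88, λ ≈ 2.25) from Tversky and Kahneman’s 1992 follow-up work; the function name `value` is mine, and this is an illustration, not a full model (it omits probability weighting):

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function, relative to a reference point:
    concave for gains (x >= 0), convex and steeper for losses (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# Losses loom larger than equivalent gains:
print(value(100))   # subjective value of gaining $100
print(value(-100))  # losing $100 hurts more than gaining $100 pleases
```

Because λ > 1, the pain of a $100 loss outweighs the pleasure of a $100 gain, which is exactly the asymmetry the theory was built to capture.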
Throughout their lifetime collaboration, the duo discovered, tested, validated, and published several human heuristics that lead decision-making astray, including:
- Availability is a judgmental heuristic in which a person evaluates the frequency of classes or the probability of events by their availability, that is, by the ease with which relevant instances come to mind. In general, availability is correlated with ecological frequency, but it is also affected by other factors, so reliance on the availability heuristic leads to systematic biases. Such biases show up in the judged frequency of classes of words, of combinatorial outcomes, and of repeated events; the phenomenon of illusory correlation can also be explained as an availability bias.
- Representativeness is “the degree to which [an event] (1) is similar in essential characteristics to its parent population, and (2) reflects the salient features of the process by which it is generated.” When people rely on representativeness to make judgments, they are likely to judge wrongly, because the fact that something is more representative does not actually make it more likely. The representativeness heuristic amounts to assessing the similarity of objects and organizing them around a category prototype (e.g., like goes with like, and causes should resemble their effects). This heuristic is used because it is an easy computation.
- Anchoring is a heuristic used in many situations where people estimate a number. According to Tversky and Kahneman’s original description, it involves starting from a readily available number—the “anchor”—and shifting either up or down to reach an answer that seems plausible. In Tversky and Kahneman’s experiments, people did not shift far enough away from the anchor. Hence the anchor contaminates the estimate, even if it is clearly irrelevant.
- Simulation is a psychological heuristic, or simplified mental strategy, according to which people determine the likelihood of an event based on how easy it is to picture the event mentally. Partially as a result, people experience more regret over outcomes that are easier to imagine, such as “near misses.”
- Framing is an example of cognitive bias, in which people react to a particular choice in different ways depending on how it is presented, e.g., as a loss or as a gain. People tend to avoid risk when a positive frame is presented, but seek risk when a negative frame is presented. Gain and loss are defined in the scenario as descriptions of outcomes (e.g., lives saved or lost, disease patients treated or untreated).
- Confirmation Bias is the tendency to search for, interpret, favor, and recall information in a way that confirms one’s preexisting beliefs or hypotheses. To quote The Undoing Project, “The human mind was just bad at seeing things it did not expect to see, and a bit too eager to see what it expected to see.”
There are more as well, but I think these form the foundation for a potential degree in behavioral economics.
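The framing effect above can even be reproduced numerically. In Tversky and Kahneman’s famous “lives saved vs. lives lost” experiment, people prefer the sure option in the gain frame but the gamble in the loss frame, even though the outcomes are identical. The sketch below shows that the prospect-theory value function alone predicts this reversal; it assumes the 1992 median parameter estimates and omits probability weighting for simplicity, so it is an illustration rather than their exact model:

```python
def v(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function (illustrative parameters)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Gain frame: "200 of 600 saved for sure" vs "1/3 chance all 600 saved"
sure_gain   = v(200)
gamble_gain = (1 / 3) * v(600)

# Loss frame (same outcomes): "400 die for sure" vs "2/3 chance all 600 die"
sure_loss   = v(-400)
gamble_loss = (2 / 3) * v(-600)

print(sure_gain > gamble_gain)    # risk-averse in the gain frame -> True
print(gamble_loss > sure_loss)    # risk-seeking in the loss frame -> True
```

Concavity over gains makes the sure thing look better than the gamble, while convexity over losses flips the preference, so merely rewording the outcome changes the predicted choice.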
Framing Example: 401(k) Auto-enrollment
One of the best examples of exploiting these human biases to one’s advantage is the 401(k) auto-enrollment movement, as detailed in the article “These Simple Moves by Your Employer Can Dramatically Improve Your Retirement.” Features like automatic enrollment and automatic escalation of contributions, with an opt-out provision, turn inertia into an asset. These features are now broadly employed and have greatly boosted both participation and deferral rates. Among companies with a 401(k) plan, 70% have some kind of auto feature, according to benefits consultant Aon Hewitt. Merrill found that plans with auto-enrollment had 32% more participants, and those with an auto-escalation feature had 46% more participants increasing their contributions.
Applying “The Undoing Project” Learnings
Having a solid understanding of human decision-making biases can help ensure that your analytics insights are delivered in the most effective and objective ways. While truly internalizing these biases requires a heavy dose of training, analytics presented correctly and thoroughly can raise awareness of the biases to which your stakeholders might unwittingly fall prey.
Figure 1: “The Hot Hand in Basketball” by Thomas Gilovich, Robert Vallone, and Amos Tversky