Archive for the ‘Cognitive Bias’ Category
Spend a Five or Break a Twenty? The Denomination Effect
I’m sure many of us understand the denomination effect on a visceral level. If you’ve ever been in a store, seen something you wanted, but hesitated to buy it because you’d have to pull out a big bill, you’ve experienced this effect. Why did you hesitate? Because you knew that breaking that $20, $50 or $100 bill would make it that much easier to spend your change.
Perhaps that’s why prices just below a round dollar amount, particularly $4.99, $9.99, $19.99, and $99.99, are so attractive to the consumer’s eye. You’re trading one physical item (a printed piece of paper) for another (perhaps a flash drive) and getting a tiny bit of money back. I wonder how much of the attractiveness of prices just below a currency denomination depends on getting some change back, as opposed to the first digit being one lower (e.g., $19.99 versus $20.00). I bet the two phenomena are intertwined in some interesting psychological ways.
You also see the denomination effect at work in gambling, but the effect works differently there depending on the game and situation. Knowledgeable poker players experience the reverse effect, becoming less likely to get involved in hands when they have fewer chips in their stack. They hesitate to invest in a hand because, when you are low on funds, the relative value of each chip goes up. Other players can use this hesitancy to their advantage and bet big to drive the small stacks out of a pot, but the small stacks can make a modest bet to induce a bluff raise from a big stack, but the big stacks can raise and hope the small stacks will think they’ve fallen into their trap, but…
You get the idea. Poker’s fun, but bring aspirin.
Test what you know, but avoid congruence bias
A few posts ago I discussed confirmation bias, where individuals interpret everything they experience as reinforcing their existing beliefs. It’s not surprising that humans fall prey to this trap. We have to make sense of our surroundings, so we develop mental models to do so. They’re our models, built from our own experiences, so it’s no surprise we think highly of them.
No model of the world can capture all of its complexity. We can model industrial processes at a certain level, but we can’t get all the way down to the interactions of individual atoms. Fortunately, we don’t have to in order to generate accurate depictions of reality. As statistician George E. P. Box noted, “All models are wrong, but some are useful.”
Many humans realize they will move through life more effectively by testing and updating their mental models, but you need to test your models correctly. If you only test the model you already hold directly, rather than also testing plausible alternative models, you are experiencing congruence bias. You’re testing your model, which is great, but you’re not entertaining other ways of approaching the problem, which is not so great. In The Structure of Scientific Revolutions, Thomas Kuhn argued that scientists work within a paradigm, the dominant framework for creating, testing, and evaluating hypotheses at a given time. When experimental results aren’t as expected, scientists can either work to shore up the existing paradigm or create a new one.
My wife, Virginia Belt, is a director and formerly taught acting at Willamette University in Salem, Oregon. She emphasizes the need for actors to try different tactics to get what they want. Think of the young child who tries everything he can think of to have you buy him a treat at the grocery store or the cat that really, really (really) wants a piece of sausage from your pizza. You can take the same approach to life. If you find your model isn’t working well any more, such as after a promotion at work, joining a new improv group or entering into a new relationship, try different tactics to see what does work. Ginny always exhorts her students to make positive choices, to focus on what they want rather than what they don’t want. If you call Domino’s and say you don’t want anchovies, you’ll either get no pizza because you haven’t given them enough information or get a pizza that costs $50 because it has every other available topping on it.
As anyone who has ever tried anything new well knows, individuals who break away from the pack meet a lot of resistance. Having the strength to break out of congruence bias at a personal level is tough — having the strength to do it in the face of a tenure board is even tougher. Let’s leave the paradigmatic fights for the professionals and focus on our own world views for a while. We’ll be better off in the end.
When You’re “Due” — The Gambler’s Fallacy
I travel to Las Vegas once or twice a year, both to play poker (where I convince myself I have an advantage) and to dabble in other games (where I definitely don’t). Since 1993, when I started playing while on the East Coast, I’ve seen thousands of players succumb to the insidious gambler’s fallacy.
Let’s say you’re playing roulette and notice, as posted on the so very helpful display by the wheel, that five red numbers have come up in a row. Is black due? What about green (0/00)? The answer is neither. Roulette wheels are well-balanced and the little obstacles spread around the wheel, called canoes in casino parlance, make outcomes random enough to be considered independent trials. If red numbers come up five times in a row, the next number will be red 18/38 of the time, black 18/38 of the time, and green 2/38 of the time. Ironically, it’s our human urge to discover patterns that makes the gambler’s fallacy work. The wheel has no memory, but we do.
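If you’d rather check the independence claim than take my word for it, here’s a quick Python sketch (assuming an American double-zero wheel, which is where the 18/38 and 2/38 figures above come from; the function name and simulation setup are mine, purely for illustration). It estimates the chance of red immediately after five reds in a row, and the answer comes out right at the unconditional 18/38.

```python
import random
from collections import deque

# American double-zero wheel: 18 red, 18 black, 2 green pockets (0 and 00).
POCKETS = ["red"] * 18 + ["black"] * 18 + ["green"] * 2

def red_after_red_streak(streak_len=5, spins=2_000_000, seed=None):
    """Estimate P(next spin is red | the previous `streak_len` spins were all red)."""
    rng = random.Random(seed)
    recent = deque(maxlen=streak_len)   # only the last few spins matter
    hits = opportunities = 0
    for _ in range(spins):
        outcome = rng.choice(POCKETS)
        if len(recent) == streak_len and all(c == "red" for c in recent):
            opportunities += 1
            hits += outcome == "red"
        recent.append(outcome)
    return hits / opportunities if opportunities else float("nan")

if __name__ == "__main__":
    print(f"P(red | five reds in a row) ~= {red_after_red_streak(seed=7):.3f}")
    print(f"Unconditional P(red)         = {18 / 38:.3f}")
```

Run it a few times with different seeds if you like; the conditional estimate wobbles a little from sampling noise, but it never drifts toward “black is due.”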
The bottom line is that when you play roulette, the proportion of red, black, and green numbers will tend toward the target ratios over millions of spins and the weighted payoffs will ensure the house earns its profit over the long run. But what about games like poker? Poker is a skill game with a healthy dose of luck thrown in, so trials aren’t truly independent. Inferior players beat better players over the short term, but only because of luck. But what happens when equal players face off?
It’s hard to find players of the same skill level at a poker table, but I tested the theory by replicating an experiment described by poker author Lou Krieger. Like Lou, I set up ten identical players in Wilson Software’s Turbo Texas Hold’em simulation mode and let them play hundreds of millions of hands against each other. Six of the ten players were just above or below breaking even, but there were two big winners and two big losers. Remember that each player followed an identical strategy — the only factor controlling their fate was the luck of the draw.
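If you don’t have a poker simulator handy, here’s a stripped-down Python analogue of the idea (my toy model, not the Turbo Texas Hold’em experiment itself): ten players with literally identical “strategies” ante into each hand and a winner is chosen at random. Every player’s expected profit is exactly zero, yet after a long but finite run of hands the standings still show clear winners and losers.

```python
import random

def simulate_equal_players(num_players=10, num_hands=100_000, ante=1, seed=42):
    """Toy model: every hand, each player antes one unit and a winner is chosen
    uniformly at random. Expected profit per player is exactly zero, so any
    spread in the final standings is pure variance (i.e., luck)."""
    rng = random.Random(seed)
    bankrolls = [0] * num_players
    for _ in range(num_hands):
        winner = rng.randrange(num_players)
        for p in range(num_players):
            bankrolls[p] += (num_players - 1) * ante if p == winner else -ante
    return sorted(bankrolls, reverse=True)

if __name__ == "__main__":
    # Identical "strategies," identical expectations -- yet the final standings
    # still show big winners and big losers after 100,000 hands.
    print(simulate_equal_players())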
As human beings trying to extract a living from an indifferent universe, we must realize that the odds are not always in our favor and that we will go through bad streaks we can’t seem to reverse. At these times it pays to strengthen your base by learning new skills or practicing old ones, reinforcing friendships, reaching out to others for help, and offering assistance where you can. Doing these things doesn’t constitute “good karma” or “putting things out into the universe”, both dubious concepts. What you are doing is improving the chances you’ll be ready to take advantage of opportunities that you and your contacts discover.
Perceived safety increases risk-taking
In many senses, life is a series of risk/reward calculations. Choosing which school to attend, buying a house, and choosing a spouse are all risky endeavors. According to the Peltzman effect, also known as risk compensation, people have a tendency to take greater risks when perceived safety increases.
I’m sure this conclusion comes as no surprise to you. Toddlers learning to walk soon start to run, or go down stairs, with the expected results. Teen drivers (particularly teen boys) get comfortable behind the wheel and dart off in a burst of testosterone, occasionally ending up in dire circumstances. This phenomenon was very common in the Formula 2 racing series, the development series for the global F1 competition, which is viewed as the pinnacle of motor racing. The problem was that the Formula 2 series was plagued by accidents resulting from brash moves by its young drivers. The reason? Analysts, including current F1 drivers, argued that Formula 2 racers were overly aggressive because their cars are so safe. Romain Grosjean, a former Formula 2 driver who now competes for the Renault F1 team, was fined several times and sat out an F1 race after being at fault in repeated incidents following his promotion.
Investors make similar risk/reward calculations. Wall Street investment bankers often take significant risks because their compensation schemes reward short-term success far more than they punish failure. Why would they take such risks? Because it’s part of their overall strategy. In the Wharton School’s corporate finance MOOC I’m taking on Coursera, Professor Franklin Allen argues that one’s sense of risk is inverted when you think of investing in a portfolio of stocks rather than in a single stock. For example, imagine that you buy stock in an oil company that finds oil in 1 out of 20 wells, and each producing well returns $100. You have a hit rate of 5% which, multiplied by the return of a good well, yields an expected value of $5. Now imagine that you have a separate investment in a research company that has a 1 in 50 chance of returning $250, otherwise gaining you nothing. This investment has a similar expected value to the previous example, because 2% (1 in 50) of $250 is $5.
Which of the two investments is less risky? If you look only at expected value, the two appear equivalent. However, Professor Allen argues that, when considered as part of a portfolio, the latter investment is less risky because of its higher potential return. The crux of the argument is that a diversified portfolio of numerous independent risks will tend to earn a higher return than a collection of pedestrian investments with relatively low risk. The end result is safety in numbers. Just as a fair coin flipped 1,000 times will tend to show heads in about 50% of the trials, investments with independent risks will tend to earn out at their expected rate, assuming you adjudged the risks correctly in the first place. Statistics on investment returns since 1900 bear out his argument.
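Here’s a short Python sketch of the arithmetic and the “safety in numbers” point, using the hypothetical oil-well and research-company numbers above and assuming the positions in the portfolio are truly independent (this is my illustration, not Professor Allen’s actual lecture material). Both bets have a $5 expected value; as you add more independent positions, the average return per position hugs that $5 more and more tightly.

```python
import random
import statistics

# The two hypothetical investments described above.
OIL = (0.05, 100.0)       # 1-in-20 chance of a $100 producing well
RESEARCH = (0.02, 250.0)  # 1-in-50 chance of a $250 payoff

def expected_value(p, payoff):
    return p * payoff

def portfolio_return(p, payoff, positions, trials=2_000, seed=1):
    """Mean and spread of the per-position return when you hold `positions`
    independent copies of the same bet."""
    rng = random.Random(seed)
    per_position = []
    for _ in range(trials):
        wins = sum(1 for _ in range(positions) if rng.random() < p)
        per_position.append(wins * payoff / positions)
    return statistics.mean(per_position), statistics.stdev(per_position)

if __name__ == "__main__":
    print(f"EV, oil well:      ${expected_value(*OIL):.2f}")       # $5.00
    print(f"EV, research firm: ${expected_value(*RESEARCH):.2f}")  # $5.00
    for n in (1, 10, 100, 1000):
        mean, sd = portfolio_return(*RESEARCH, positions=n)
        print(f"{n:>5} independent positions: mean ${mean:5.2f}, std dev ${sd:6.2f}")
```

The single-position standard deviation is large relative to the $5 expectation; by a thousand independent positions it has shrunk by roughly a factor of thirty, which is the whole diversification argument in one number.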
Improvisers can and should take risks to make great scenes. We can do it without fear because we know our fellow players will be there to make what we say and do the right thing. Similarly, businesses can take risks as part of a diversified portfolio of ideas. Just as you wouldn’t invest in a single stock such as, I don’t know…Enron, you shouldn’t discourage experimentation and risk. That said, you must understand that risks taken within a scene or business are dependent, not independent. There’s only so much we can do to fix things if you go too far overboard. If you can’t spread out your risk, you must moderate it to be successful.
Clustering and Streaks — Real or Imagined?
The folk wisdom that “bad things come in threes” is still popular in the U.S. Whenever two celebrities die on the same day, for example, even the most hardened critical thinker feels the urge to look for the third.
Is clustering real? Do events happen in streaks, or are they just a product of our pattern-seeking brains? George Carlin made fun of the “bad things happen in threes” adage by claiming that bad things actually happen in 27s; “it just takes longer to see the pattern.” You can always find instances of “bad things” in the world to fill out your sets of three, but what does the research say? There have been many studies on the subject, including Koehler and Conley’s “The ‘Hot Hand’ Myth in Professional Basketball,” published in 2003 in the Journal of Sport and Exercise Psychology. The authors examined the National Basketball Association’s long-distance shooting contest and looked for statistical aberrations in the sequences of made and missed shots. As in all but a few other studies, once each player’s base shooting accuracy was taken into account, the apparent streaks showed no significant deviation from chance.
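If you want to try that kind of check yourself, here’s a minimal Python sketch of one common approach: a permutation test on a hypothetical 25-shot sequence (it is not Koehler and Conley’s exact methodology, and the shot data below is made up for illustration). Shuffling the same makes and misses keeps the shooter’s overall accuracy fixed, so you can ask how often pure chance produces a hot streak at least as long as the one you actually saw.

```python
import random

def longest_run(seq, value=1):
    """Length of the longest consecutive run of `value` in a 0/1 sequence."""
    best = current = 0
    for x in seq:
        current = current + 1 if x == value else 0
        best = max(best, current)
    return best

def streak_p_value(shots, trials=10_000, seed=0):
    """Permutation test: how often does shuffling the same makes and misses
    (i.e., holding the shooter's base accuracy fixed) produce a longest hot
    streak at least as long as the one observed?"""
    rng = random.Random(seed)
    observed = longest_run(shots)
    pool = list(shots)
    count = 0
    for _ in range(trials):
        rng.shuffle(pool)
        if longest_run(pool) >= observed:
            count += 1
    return observed, count / trials

if __name__ == "__main__":
    # Hypothetical 25-shot sequence (1 = make, 0 = miss), invented for this example.
    shots = [1,1,1,1,1,0,1,1,0,0,1,0,1,1,1,0,0,1,0,1,1,0,0,1,0]
    streak, p = streak_p_value(shots)
    print(f"Longest streak: {streak}, permutation p-value: {p:.3f}")
```

A large p-value means an ordinary shooter with that accuracy would produce a streak that long all the time, which is exactly the pattern the hot-hand studies keep finding.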
Sports are physical contests and even little variations in physical conditions can affect performance, but what about chess? Chess is a mental game played with perfect information. That is, you know everything there is to know about a position and there’s no hidden information, such as a player’s hole cards in poker. As of this writing, I have played 19,738 games of blitz chess (each player has 3 or 5 minutes to make all moves in a game) at the Internet Chess Club since June 27, 2001. As I watch my online chess rating fluctuate from embarrassing to “not bad for me”, I wonder how much the streaks of wins, losses, and draws reflect my abilities and how much is the “luck” of an opponent making some horrible mistake.
The three-year graph of my rating shows huge swings, but the average is right about where I perceive myself as a player. Perhaps my streaks are due to luck. After all, I don’t seriously study the game and play to take a break from other work. The big changes make a strong visual impression, but there are a lot of small shifts in there, too.
Improvisers can make a fun game out of looking for apparent patterns and justifying reasons for believing streaks exist. The lesson for analysts? Carefully examine whether a sequence of events is due to some underlying cause or is just a sequence that might be due to chance. That said, given the strength of our innate need to discover patterns, is there any way to dispel what appears to be the myth of the hot hand? In a 2006 review of the literature, Michael Bar-Eli, Simcha Avugos, and Markus Raab summarized the situation this way:
As Amos Tversky, who initiated the hot hand research, used to say (cited by Gilovich in an online chat, September 2002), “I’ve been in a thousand arguments over this topic, won them all, but convinced no one.”
Memory and the Recency Effect
It’s tempting to think that knowing about a cognitive bias or logical fallacy makes you immune to it. I’m no exception: I constantly find myself falling prey to the recency effect, or recency bias. The good news is that I catch myself from time to time; the bad news is that I have no idea how many instances slip through.
The recency effect describes a condition where the most recent information you learned has a disproportionate impact on your opinion about a topic. I find myself watching TV programs or reading articles where the author sets out arguments on an issue and I often think, “Oh, I didn’t know that. I’ll have to revise my opinion.” The rest of the time I think, “Yeah, right” and move on with my day. If the topic’s one I don’t know much about, the information I just learned will affect my view more than it would if I knew a lot about the issue.
As I mentioned in my review of The Gamble, published here and on Technology and Society Book Reviews, the Romney 2012 presidential campaign’s managers attempted to use the recency effect to their candidate’s advantage. The authors cited a significant body of research showing that political ads sway opinion, but only for a few days at most before viewers’ opinions revert to their personal baselines. The Romney campaign therefore took out a large number of ads in the days just before the election, hoping to ride that short-lived swing. In fact, the campaign bought the entire available ad inventory in several states. Rather than leave the money in the bank, they bought ads in states they deemed less important.
If you really want to see the recency effect in action, watch the U.S. stock markets whenever major events occur. Every bit of news causes the markets to move as investors try to out-guess each other and make a profit on competitors’ decisions. I’m not sure how much of the action is individual speculators trying to get a jump on the market and others trying to guess the reactors’ reactions (and so on up the chain), but the short-term volatility can be astonishing.
Confirmation Bias Proves What You Already Knew
Human beings deal with complexities by creating mental models. Our models are necessarily simpler than reality and are based on our experiences. These considerations imply two things. First, models are intensely personal constructs. Second, personal models are difficult to change. When we find something that works, we’re reluctant to change it.
There’s a strong temptation to fit what we see into our models rather than invest the effort (and ego) into admitting our model is wrong, or at least incomplete. Oswald and Grosjean define confirmation bias as “the tendency to search for, interpret and remember information in a way that confirms one’s preconceptions.” You probably know someone who engages in impressive mental gymnastics to fit everything into their world view.
In business, falling prey to confirmation bias can cost you money. If you developed a process that worked for years but no longer meets your company’s needs, you must be open to change. If you interpret critiques as personal attacks, you’re much less likely to improve your processes.
You can take advantage of confirmation bias to create interesting characters or “find the game” within an improv scene. Improv scenes run on justifying why something someone else said or did is true and important. If your character’s perspective uses “Yes, and…” to bring everything into his or her world view, you can be an interesting character and entertain your audience. Like in business, you have to be careful not to let your internal game hurt your team’s performance, but it’s a fun approach to take on occasion.
The exercise “Your Place or Mine?” provides an interesting context for justification and fitting incidents into your character’s world view. In this exercise, you and a scene partner play characters in two different locations. For example, one of you might be a fast food worker in McDonald’s and the other an archery instructor on the range. If the fast food worker hands the archer a french fry, the archer could interpret it as a small arrow and shoot it into a target, which the fast food worker could interpret as throwing the food into a customer’s mouth.
Cognitive Biases are Fun!
George Carlin once pointed out that comedy depends on exaggeration — to make something funny, you must distort one aspect of the situation or description to introduce humor.
If you’re thinking, “I don’t have to exaggerate anything…I make enough mistakes to feed a hundred comics for a year,” you’re probably right. We’re all susceptible to cognitive biases that skew our judgment. If you’ve read any of Dan Ariely’s work (Predictably Irrational, The Upside of Irrationality, and The Honest Truth About Dishonesty) or read the pop psych literature, you know the human mind is a frighteningly powerful yet flawed instrument.
I have good news: you can identify and minimize the impact of cognitive biases. What’s more, performers can use them to create humorous situations on stage. I downloaded a list of cognitive biases and will do my best to explore how they affect the world where business and funny intersect.
I first thought of writing a series of posts after a ComedySportz gig for health care professionals employed by the Oregon penal system. One of their handouts (I always grab the handouts) listed about 120 cognitive biases and logical traps that affect the reasoning inmates and others use to assess their circumstances. I’ll leave the connection between prison, work, and comedy to your fertile brains.
First up? Everyone’s favorite trap: confirmation bias.