Improspectives

Improv skills lead to success

Archive for the ‘Game Theory’ Category

Not All Draws are Boring…In Retrospect


Among elite chess players, most games end in draws. Whether the game is a tense struggle that settles into a dynamic equilibrium neither player dares disturb or a “grandmaster draw” where the competitors tacitly agree to take a day off, the result is neither a win nor a loss.

When a player is awarded one point for a win and zero points for a loss, “splitting the point” (earning half a point each) for a draw can seem like an unsatisfying result. Of the major American sports leagues, only Major League Soccer and the National Football League allow ties, and the NFL does so only after a full period of overtime. Some chess tournaments encourage risk by awarding three points for a win, one point for a draw, and zero points for a loss, but they are the exception.

The advent of computer analysis that goes well beyond human capability adds to chess’s drawish nature. As John Gapper notes in his November 25, 2021, column in the Financial Times, “[p]layers always arrived for tournaments well prepared, but they now use software as well as human analysis to predict lines long past the opening.” While it’s true that I, a moderately skilled amateur player, can see better moves using an app on my phone than the world champion sees over the board, there is still significant room for invention.

The first two games of the current World Championship match ended in draws, but neither was boring. What’s more, in Game 2 world champion Magnus Carlsen deviated on move 8, which is quite early in the game. While the move he played was known, it is uncommon and offered chances for an advantage. Rather than searching for an innovation around move 25, still well within both players’ preparation in deeply analyzed openings, Carlsen and his team found an opportunity to sidestep popular lines in pursuit of a meaningful imbalance.

After mutual inaccuracies—suboptimal moves identified by the computer but invisible to the likes of me—the game ended in a draw that was anything but peaceful. The struggle was compelling, and the fact that any of the three possible results remained in play added excitement.

Is chess the sort of game that can be presented successfully to broad audiences? Perhaps not. I agree with Gapper’s additional point that chess lacks the broader visual appeal of soccer, video games, or even poker now that players are required to expose their hole cards to cameras for the benefit of viewers at home. (Years ago, before hole-card cameras were implemented, a friend said he would rather watch a tournament on the Paint Drying Channel than a televised poker game.) The drama of the Cold War-era Fischer-Spassky battles or the bitter conflict between old-guard Soviet stalwart Anatoly Karpov and upstart Garry Kasparov in the 1980s is missing, but the intellectual and agonizingly human struggle to find the best moves over the board remains.

As a chess player I understand that a draw is not always a bad result, just as splitting a pot is better than losing at poker. Even if chess isn’t destined to be a popular sport for real-time viewing, I hope players and commentators find ways to bring the joy and excitement of the game to the public after the fact. They’ve made good progress so far and I have high hopes for what’s next.

Written by curtisfrye

November 28, 2021 at 1:58 pm

Review of Power-Up by Matthew Lane


Title: Power-Up

Author: Matthew Lane

Publisher: Princeton University Press

Copyright: 2017

ISBN-13: 978-0-691-16151-8

Length: 264 pages

Price: $29.95

Rating: 94%

I purchased a copy of this book for personal use.

I enjoy creative takes on technical subjects that reveal the mechanics behind familiar objects. Video games provide hours of entertainment and challenge. Beyond the need for attractive graphics and effective user interfaces, each game designer must decide how to award points, measure the effect of player choices within the game, and provide a balanced environment that maintains game play without sacrificing challenge. In Power-Up: Unlocking the Hidden Mathematics in Video Games, Matthew Lane describes how math enters into video game design. His book is an enjoyable read that taught me a lot about the math behind game design.

From Physics to Friendship

It would be difficult to find an example of a video game that doesn’t use math in some way. Some games allow exploration without awarding points, for example, but the player must still move around the game world to discover what’s next and every new discovery is an implied “score”. As Lane notes, math provides the foundation for almost every game out there. In Power-Up, he divides his coverage into nine chapters:

  • Game physics
  • Repetition in quiz games
  • Voting
  • Keeping score
  • Chase games
  • Complexity
  • Friendship
  • Chaotic systems
  • Value of games

The first eight chapters center on a specific math topic, such as the use of equations to model the physics of a game world and the difficulties of assigning points in games such as recent versions of The Sims where friendship can matter as much as health and happiness. The final chapter discusses the value of games as a human activity, specifically addressing games as educational tools and as opportunities to gamble, with a nod to the early probability calculations designed to divide the pot fairly in an unfinished game.

I have a bit of math background and have studied probability and statistics in some depth, so I was able to follow almost all of the formulas and related discussion fairly easily. Lane takes care to explain the equations’ inputs and, more importantly, their meanings so the calculations’ roles within games can be understood without too much trouble. I’ve seen Arrow’s Impossibility Theorem, which shows that no ranked voting system can satisfy a short list of reasonable fairness criteria at the same time, discussed in several publications; I believe Lane explains the phenomenon effectively and makes the logic behind the theorem clear.

Repetition and Scoring

While there’s too much material to discuss each chapter in depth, I did want to offer more details about the discussions of repetition and complexity in Power-Up. I played early versions of the quiz game You Don’t Know Jack! when I was young and, as Lane indicates, I started seeing repeat questions after a relatively short time. In Chapter 2, the author shows how a relatively small question bank suffers in the face of frequent play. The radical solution, not repeating any questions until they have all been used, has its own issues. Various strategies for reducing the repeat rate have been tried, but most center on reducing the probability that a previously used question will be selected again.

For example, if you have a die with the numbers one through six and roll a one, you might want to make the probability of rolling a one again 1/12 instead of 1/6. The problem is that if the other five faces keep their probability of 1/6 each, the total is 5/6 + 1/12 = 11/12, which is less than one. As Lane points out, if you want a repeated one to be half as likely as any other face, the probability of rolling a one again should be 1/11, with each of the other faces at 2/11. Adding 1/11 + 5 × 2/11 gives 11/11 = 1. This calculation is interesting and a bit counterintuitive, which highlights the creativity required to design games that are both fair and fun to play.
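To make the renormalization concrete, here is a minimal sketch of the idea in Python. It is my own illustration of the weighting trick rather than code from the book: the face that just came up gets half the weight of the others, and dividing by the total weight keeps the probabilities summing to one.

```python
# Reduce the chance of repeating the last roll by halving its weight,
# then divide by the total weight so the probabilities still sum to 1.
def adjusted_probabilities(faces, last_roll):
    weights = {face: (1 if face == last_roll else 2) for face in faces}
    total = sum(weights.values())
    return {face: w / total for face, w in weights.items()}

probs = adjusted_probabilities(range(1, 7), last_roll=1)
print(probs[1])             # 1/11, roughly 0.0909
print(sum(probs.values()))  # 1, up to floating-point rounding
```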

Lane also goes into some detail on keeping score, describing several different scoring systems for games based on distance traveled, tile-matching games such as 2048, and puzzle games such as Angry Birds. The discussion of Angry Birds was quite interesting for me because it overlapped with a friend’s personal experience. My friend Bill had one of the top scores in the world on the original Angry Birds, but he was frustrated that some of the reported scores above him on the leaderboard were impossible to achieve: not because the point counts were too high, but because there was literally no way to accumulate certain totals. Lane discusses this phenomenon, where it’s possible to prove that some totals can’t be reached within a game’s scoring system, in some depth. I enjoyed the discussion and plan to share it with my friend.
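You can check for unreachable totals yourself with a short dynamic-programming pass. The point values below are hypothetical, chosen only to show the method rather than to reflect Angry Birds’ actual scoring; feed in a game’s real per-event scores and the same check reveals which leaderboard totals can never legitimately appear.

```python
# Which totals can be built by adding up a game's point values?
# The values here are hypothetical, used only to illustrate the check.
def reachable_totals(point_values, limit):
    reachable = {0}
    for total in range(1, limit + 1):
        if any(total - v in reachable for v in point_values if v <= total):
            reachable.add(total)
    return reachable

values = [3000, 5000, 10000]       # hypothetical per-event scores
possible = reachable_totals(values, 20000)
print(8000 in possible)            # True: 3000 + 5000
print(7000 in possible)            # False: no combination sums to 7000
```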

Conclusions

In Power-Up, Matthew Lane describes many of the ways that math powers video games. Similar books and articles have provided in-depth coverage of a specific subject, such as physics models, but his is the first to go into detail on such a wide variety of subjects in the same book. I love his choice of topics and believe the depth of each chapter strikes an excellent balance between detail and length. Highly recommended.

Written by curtisfrye

October 4, 2017 at 1:45 am

Resilience is not a consolation prize


I don’t often get angry at online writers or headline writers, but a tweet from Wired about the go match between world champion Lee Sedol and AlphaGo, the computing engine built by DeepMind, a division of the company formerly known as Google, pushed my buttons.

Lee lost the first three games of the match, but all five games were to be played regardless of the outcome. The Wired tweet that ticked me off referred to Lee’s win in Game 4 against AlphaGo as a “consolation win”. Cade Metz, the author of the piece referenced by the tweet, said that Lee “clawed back a degree of pride for himself and the millions of people who watched the match online.”

No, he didn’t. Not because Lee couldn’t regain his pride after having no hope of winning the five-game series, but because he never lost it. Lee admitted to playing a loose opening in Game 1, but based on the AlphaGo games he’d seen from previous matches, he didn’t think the program was strong enough to take advantage of the situation. It was. At no time, the world champion said, did he think he was ahead. In Games 2 and 3 he played better moves, but AlphaGo still forced resignation. Part of the problem was that AlphaGo didn’t use as much of its allotted two hours for early moves as Lee did, so the computer was way ahead on the clock for most of the game. Early moves create the framework for the rest of the game, so players must weigh them carefully.

Lee was clearly frustrated by his inability to win any games in the first part of the match, but he came into Game 4 ready for the struggle and played a surprising, powerful move in the middle of the board after not getting much out of the opening. Expert commentator Michael Redmond, a 9-dan professional player (the highest rank awarded), said he didn’t see Lee’s wedge move coming, but as the game progressed he realized its power. Despite running very low on time, Lee was able to maintain his momentum and take advantage of aimless play by AlphaGo to secure the win.

The Wired story should have centered on the theme of a human player beating a go engine for what might be the last time. The best computer chess programs are favored to beat even world champion Magnus Carlsen in 99.9% of their games. AlphaGo’s improvement in the five months since it won 5-0 against a professional rated in the top 650 players in the world, a match in which it still made clear errors, is astonishing. AlphaGo trains its neural nets by playing against itself at high speed, earning decades of play experience in months. I don’t doubt it will be unbeatable by humans in a very short time.

Lee stepped up under extremely difficult and very public circumstances to secure a brilliant win. The advances in machine learning behind AlphaGo’s abilities in a game thought to be too complex for computers to manage are notable, but Lee Sedol’s play and fighting spirit are the real story.

Written by curtisfrye

March 14, 2016 at 10:00 am

Reasons for Playing Chess


Chess is a rewarding but maddening game. You can build up an overwhelming position for the first 40 moves and then make a simple tactical error that lets your opponent back into the game or, in extreme and highly embarrassing cases, even lets them win on the next move.

Interviewer: So, tell me…does throwing away a win hurt?

Curt: Yes. Yes it does.

You see golfers going crazy over their rounds, alternating between self-loathing over the short putts they missed and self-praise for the 150-yard shot that ended up a foot from the hole. I played golf occasionally for a few years and can testify to that effect. Some of my friends play 18 holes just so they can feel the satisfaction of hitting one good shot.

Some days they have to play 36 holes.

A golfer having a bad day still gets in some physical exercise. What about chess players? As with many endeavors, it depends on why you’re playing in the first place. You always get to exercise your brain and look over the consequences of your moves, which keeps you sharp and might fight off the effects of aging, but what else?

If you’re playing with someone who’s about your own strength, you get the benefit of an equal competition and, very likely, enough wins to keep things interesting. Playing someone stronger than you helps you learn and winning every so often helps keep you going. Playing a weaker player lets you win more often and teach the game, even if only indirectly.

What’s often overlooked is that chess can be a social game. If you play blitz chess, where players have to make all of their moves within three or five minutes, you can get in a lot of games and try many different types of positions. Playing a longer game lets you think more deeply, and playing without a clock lets you approach the game more casually.

You can also take time to analyze your game with your opponent. Serious players often try to identify the move where the winner got an advantage and what the loser missed. When done with a spirit of exploration and sharing, post-game analysis can be fun and helpful.

Chess as metaphor


Games have long played a part in literature, representing a competition between humans or supernatural beings. Chess features prominently in many stories. The game’s intellectual nature lends itself to such depictions, with the idea being that if you can beat someone else at chess, you are the better man.

Other games, both real and invented, serve similar roles. For me, the best example is the game Azad from Iain M. Banks’ book The Player of Games. The game of Azad is a vast undertaking, with high-level matches often taking a month to play. There are several boards, a combination of team and individual play, and so many pieces as to nearly defy description.

In the story, the game was developed as a metaphor for the structure and values of the Empire of Azad. It was part pastime and part civil service exam. The Azadian home world held a tournament every so often, with the winner crowned emperor. The better you did in the tournament, the higher your position in the government.

The premise of the story is that another civilization, the Culture, sends its best game player to compete in the tournament. Banks was known for a political bent to his stories; The Player of Games is no exception. On its surface a simple diplomatic exchange, our player’s participation and continued success bring the conflict between the two civilizations and their values into sharper relief.

It’s telling that the Culture’s hero only starts to play at a high level when he takes on aspects of the Empire’s philosophy in his own play. Banks manages that conflict magnificently.

Chess is an abstract game with arbitrary but well-balanced rules that allow for a wide range of successful strategies and tactics. Though it doesn’t approach the (admittedly fictional) resolution of a game like Azad, it has long played a role as a metaphor for accomplishment and brilliance. As such, it provides a terrific instructional base.

Chess as a game (among many)


Chess is often called “the queen of games”, at least in Western culture. The game’s austere appearance, when combined with its tactical and strategic depth, provides an air of challenge and mystery.

In many ways, chess is the prototypical Western game. Strategies and tactics are direct, with little progress to be made unless you directly confront your opponent. Chess is also a perfect information game: nothing is hidden from either player, and there is no element of chance. You might not know your opponent’s next move, but there’s nothing hiding it from you. If you didn’t see what was coming, you can only blame yourself.

Although chess has increased in popularity in Asia, the traditional strategy game of Japan, China, and South Korea is go. Unlike chess, where the goal is to create a position where your opponent’s king is under attack and cannot move to a safe square, go players place their stones in an attempt to surround territory on the board. Chess boards are 8 x 8, with 64 squares, and the pieces stand on the squares. In go, the board has 19 x 19 lines, with 361 intersections, and players may place a stone on any unoccupied intersection (with a few exceptions).

The complexity of go far outstrips that of chess, at least in terms of the computation required to analyze and evaluate a position. Computers have conquered humans at chess…their calculating speed and positional evaluation let them beat even the strongest carbon-based players regularly. The most advanced go programs can only beat top professionals if they are given a substantial head start. That said, the gap is closing.

I said that chess is the prototypical Western game, but it’s mostly thought of as a European (and even more specifically, Russian) game. In America, the game of choice is poker. Poker is a gambling game, with a significant element of chance involved. You can do everything right but still lose if your opponent decides to fight the odds and draws the cards they need. Ironically, the better you play, the more of these “bad beat” stories you’ll have to tell. If you’re always in the lead, the luck of the draw means you will get chased down on occasion.

I hope I don’t sound bitter. But I am.

Do the Russians play chess, the Chinese play go, and the Americans play poker? If you look at our cultures and practices, you’ll see there’s a fair amount of truth to that statement. How well that metaphor translates to actionable intelligence is debatable, but it’s an interesting way to start a conversation.

Law and Magic: Revealing the Links


I had the very good fortune to speak at the Law and Magic: Revealing the Links conference, co-hosted by the Law and Humanities Institute and the Thomas Jefferson School of Law last Friday in beautiful San Diego. The conference was organized by Professors Christine Corcos of the LSU Law Center and Julie Cromer Young of the Thomas Jefferson School of Law. Licensed attendees could earn up to 6.5 hours of CLE credit.

As the conference’s name implies, the day’s presentations were about how the art and practice of the law intersects and interacts with the art and practice of magic and what Professor Corcos called the “crafty sciences.” I had the good fortune to perform a 30-minute show over lunch. Later in the afternoon, my presentation Rhetorical Mathematics examined how performers and lawyers can use and abuse math to further their arguments. Practitioners of both arts have a wide range of confusion-inducing techniques from which to choose: misstating probabilities, relying on unspoken assumptions, pulling numbers out of thin air, and many others.

I think my paper went over pretty well. I covered probability calculations that went beyond simple liability formulas such as the Hand Rule articulated in United States v. Carroll Towing, so there was some head scratching at times. The most fun for me was when I presented the Monty Hall Paradox, which describes the math behind the game played at the end of Monty’s show Let’s Make a Deal. The idea of the game is that Monty displays three doors, two of which hide a losing choice, such as a goat, and the third a prize such as a new car. You start the game by choosing one of the doors. Once you do, Monty (who knows where the car is) opens one of the other two doors to reveal a losing choice. You can then either stay with your original choice or switch to the remaining unopened door.

The question for you: does it matter whether you switch or stay? If so, what are your chances of winning for either strategy?
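If you would rather test the answer than reason it out, a short simulation settles the question empirically; the comments give the result away, so stop here if you want to work it out first. This is a minimal sketch that assumes the standard rules described above: Monty knows where the car is and always opens a losing door you didn’t pick.

```python
import random

# Simulate the Monty Hall game: three doors, one car, and a host who
# always opens a losing door the contestant didn't choose.
def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        choice = random.randrange(3)
        # Monty opens a door that is neither the contestant's choice nor the car.
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            # Move to the one door that is neither the original choice nor the opened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

print("stay:  ", play(switch=False))   # about 1/3
print("switch:", play(switch=True))    # about 2/3
```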

Prisoner’s Dilemma, Part 5


This is the final post in my series on the Prisoner’s Dilemma. Five blog posts might seem like a lot, but many doctoral dissertations have been written on the ramifications of this deceptively simple game.

Robert Axelrod was one of the first researchers to study how competing strategies for playing the Prisoner’s Dilemma interacted in a tournament setting. One of Axelrod’s main conclusions is that you can maximize your payoff in a Prisoner’s Dilemma tournament by following a nice strategy, that is, one that never defects first. He also noted that it was possible for other strategies to beat the winner, Tit for Tat, by defecting first to get the higher payoff and then defecting every turn thereafter to ensure that the other program could never retaliate effectively. Over time, this approach does not yield a higher payoff than the nice Tit for Tat; the aggressive strategy did not win either tournament.

But what happens if you put the nice Tit for Tat in an environment with a lot of aggressive programs? The answer is that Tit for Tat will always give up the higher payoff to its opponent in the first round and then collect only the low mutual-defection payoff in every subsequent round. Under those conditions, Tit for Tat is guaranteed to lose every head-to-head pairing. If you were to put a set of strategies into a tournament and then eliminate the bottom half of the field, Tit for Tat would always be eliminated, and the more successful strategies, the aggressive ones that defect first and keep defecting on every subsequent turn, would continue on.
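The numbers behind that claim are easy to check. Here is a minimal sketch using the standard textbook payoffs (3 points each for mutual cooperation, 1 point each for mutual defection, 5 and 0 for a one-sided defection); the payoff values are the usual ones from the literature, and the code itself is my own illustration rather than anything from Axelrod’s tournaments.

```python
# Iterated Prisoner's Dilemma with the standard textbook payoffs.
# PAYOFF[(my_move, opponent_move)] = (my points, opponent's points)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, opp_history):
    # Cooperate first, then mirror the opponent's previous move.
    return 'C' if not opp_history else opp_history[-1]

def always_defect(my_history, opp_history):
    return 'D'

def match(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Head to head, Tit for Tat always finishes a few points behind a defector...
print(match(tit_for_tat, always_defect))  # (199, 204)
# ...but two cooperators paired together far outscore mutual defection.
print(match(tit_for_tat, tit_for_tat))    # (600, 600)
```

When nearly everyone in the field plays like always_defect, those small head-to-head deficits are all Tit for Tat ever earns, which is why it sinks into the bottom half and gets eliminated.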

This type of attack is called an invasion. If you run a tournament and eliminate the bottom half of the field at the end of each run, you’ll find certain strategies win out. If you introduce even a small number of these dominant strategies into a tournament, they will eventually take over. The problem becomes even worse if you create a series of strategies that can recognize kindred spirits, enabling them to work together to maximize their payoff by cooperating.

You can find the same type of behavior in business. In many cases when the group or company starts, you’ll find that everyone cooperates. The problem comes in when someone who doesn’t cooperate starts to get some success in the company. As the aggression is rewarded, other individuals adopt the same strategy. In time, those players can squeeze out the players who play a nice, cooperative strategy within the business. It’s a true management headache, one that is extremely difficult to stamp out once it gets started. Plus, as the aggressive players get promoted higher and higher, the reward structure changes. Now individuals who are willing to work with the aggressive individuals are rewarded with their own promotions and higher responsibilities.

In most cases, the company can continue on with this type of environment, despite the fact that there is a lack of trust among the players. In fact, this type of environment can fuel creativity for individuals who revel in interpersonal conflict and feel it sharpens their work. At the same time, though, an organization might begin to experience problems associated with a lack of cooperation. Always looking to put one over on the other guy makes it difficult to trust anyone else, especially when you’re looking over your shoulder to see who will get the next promotion. These behaviors can lead to stress, burnout, and high turnover. In a company that requires highly skilled personnel, losing a solid contributor because of a toxic work environment is extremely costly.

In improvisational comedy groups, the same thing happens, especially at the beginning of a group’s life. As individuals jockey for position within the group and try to influence how things will be run, you will often find that people who started with the group either drop out or get kicked out after they try to change the group through aggression, or through passive aggression such as ignoring the direction of the group’s leadership. Well-established organizations with a solid player roster and workshops from which to bring in new players are less susceptible to this sort of issue. The group’s culture is solid, and the workshop process allows management to decide which players will be promoted and included in the team.

Smaller groups, such as touring companies with only four or five players, can be susceptible to problems. The trick, as always, is to select your fellow performers wisely. In many cases, it’s better to join another group or start a new group of your own than it is to continue on in a bad situation. Sometimes leaving a bad job is the best thing you could possibly do.

Prisoner’s Dilemma, Part 4


I’ve spent the last few posts talking about the Prisoner’s Dilemma, where two individuals must decide whether or not to cooperate. There’s a harsh penalty for having one’s trust violated, so the most risk-averse strategy is to violate the other player’s trust. Robert Axelrod’s analysis gives us a number of results that we can use both in the realm of improv and in the realm of business. He enumerated these five principles in The Evolution of Cooperation:

  • Enlarge the shadow of the future
  • Change the payoffs
  • Teach people to care about each other
  • Teach reciprocity
  • Improve negotiation abilities

Enlarging the shadow of the future simply means taking a long view of your interactions. When you form an improvisational comedy group, you should plan to have many performances over a number of months or years. This sort of ongoing interaction, like any other relationship, requires nurturing and mutual trust. Just as with saving for retirement, the more money or trust you set aside at the start, the higher your return as the interest accumulates over the years. The same principle holds for business interactions. Americans on the West Coast tend to change jobs a lot more often than folks on the East Coast, but many of us stay within the same industry and interact with our colleagues from previous jobs frequently. Within a company, you’ll find that fostering a spirit of cooperation on your team will help you generate better results. Hopefully that conclusion won’t be too surprising.

The next question is how to reward different behaviors. In the classic Prisoner’s Dilemma payoff matrix, the only logical choice is to defect. Doing so limits the damage that would be caused by trusting another individual whose rational calculus would push them to defect. In business, anyone who sees their work as a series of one-time relationships will not be all that keen on building trust with their business partners. In the entertainment industry, it’s said that you haven’t really sold someone until you’ve done business with them twice. If they’re not willing to rehire you, it means that they don’t trust you based on their experience with you.
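It’s worth pausing to verify that single-shot logic. The sketch below uses the usual textbook payoffs (5 for exploiting a cooperator, 3 for mutual cooperation, 1 for mutual defection, 0 for being exploited); those values are the standard ones from the literature rather than anything specific to the examples above, and the code simply checks that defecting is the better reply to either choice by the opponent.

```python
# Single-shot Prisoner's Dilemma payoffs to the row player.
payoff = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

# Whatever the opponent does, compare cooperating with defecting.
for opponent in ('C', 'D'):
    best_reply = max(('C', 'D'), key=lambda me: payoff[(me, opponent)])
    print(f"If the opponent plays {opponent}, my best reply is {best_reply}")
# Both lines print D: in a one-time interaction, defection strictly dominates.
```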

Teaching people to care about each other can be tricky, particularly if you have individuals who are not prone to trusting relationships with others. Sociopaths, who don’t empathize with other individuals at all, are a particular problem. I’m not a psychologist, so I can’t tell you how to deal with them, but there are a number of online resources that can point you in the right direction. For individuals who do have feelings toward others, you can use team-building exercises, rewards, and the warm afterglow of successful shows or projects to develop a sense of camaraderie.

In the improv world, in which interactions in local groups are reasonably equal, you don’t often have that much trouble with these relationships. Yes, every so often members of the group will disagree intensely, but if everything is in place and the relationship is solid, it’s likely that you will get through the difficulties. In a business in which promotions, internal awards, and raises are at issue, the stakes are quite a bit higher. Managers need to keep everyone’s wants, needs, and desires in mind as they manage their projects.

One of the best ways to ensure people are satisfied is to give them work they care about and reward them for doing it well. The nature of those rewards will vary based on your business and the resources available to you, but rewards and recognition, even if only at the personal level, go a long way toward making those relationships more solid.

Axelrod also recommends teaching reciprocity. A willingness to respond to offers of cooperation allows teams to make much more progress than a loose collection of individuals could. The form that reciprocity takes depends upon your organization. For businesses, providing a bit of after-hours help on someone else’s part of a project after they have done the same for you is a perfect example. In the improv world, we can try to “set up players for the slam.” Just as volleyball players run through the bump, set, spike sequence to go from defense to offense, improvisers can do their fellow players a favor by giving them straight lines, by allowing them to be the focus of the scene, and by staying off the stage when their presence is not strictly necessary. All these actions are judgment calls that sharpen with experience, but managers can improve their odds, both in the performance and business worlds, by bringing on individuals who are predisposed toward these behaviors.

Finally, you should improve your negotiation skills. Negotiation is the art of compromise, and very few solutions will meet everyone’s wants and desires. Some folks have to compromise, some more than others, and good leaders and team members will find ways to negotiate for what they feel is necessary and to compromise when it’s called for.

Prisoner’s Dilemma, Part 3


My previous two posts discussed the Prisoner’s Dilemma, a classic 2 x 2 game structured so each player feels compelled to violate the trust of the other player. Researcher Robert Axelrod tried to find the best strategy for the game by holding a round-robin tournament among computer programs. Every program would play every other program, a second copy of itself, and a program Axelrod created that randomly chose whether to cooperate or defect. In that first tournament, which had 14 entrants, a program submitted by Anatol Rapoport named Tit for Tat won.

The strategy behind Tit for Tat is extremely simple: start out by cooperating; if the other player defects, defect on the next turn as punishment; and once the other player returns to cooperating, switch back to cooperating on the following turn. So why would this program win? As Stevens points out in his course, the best the program can hope to do is to tie. It never tries to take advantage of the other player, so it will never get a higher payoff in any round than the other program. What happened was that Tit for Tat minimized its losses. It punished other programs for defecting, but it did so only once for a single defection. This strategy of minimizing its own losses while limiting the gains other programs could extract through bad behavior made Tit for Tat the best program of the bunch.

The key to the success of Tit for Tat is that it elicits cooperation. Axelrod noted that the program is nice, provocable, forgiving, and straightforward. Among humans playing the game, or for computer programs with a memory of past turns, playing Tit for Tat lets the other player accurately predict the consequences of their actions. In the first Prisoner’s Dilemma tournament, the top eight programs were all nice, which meant that they were never the first to defect.

The participants included a program called JOSS, which was the same as Tit for Tat but threw in the occasional defection at random intervals. The program’s design was meant to capture the occasional high payoff from an unchallenged defection while retaining the benefits of cooperation. Unfortunately, the strategy backfired because its actions weren’t predictable. Against Tit for Tat and its variations, a single random defection set off an echo in which each program defected on alternate turns, driving both scores dismally low.
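The echo is easy to reproduce. Below is a minimal sketch, not Axelrod’s original code: Tit for Tat mirrors the opponent’s last move, and the JOSS-like strategy does the same except for one unprovoked defection (the real JOSS defected at random roughly ten percent of the time; a single fixed sneak keeps the echo easy to see).

```python
def tit_for_tat(opp_history):
    # Cooperate first, then mirror the opponent's previous move.
    return 'C' if not opp_history else opp_history[-1]

def joss_like(opp_history, turn, sneak_turn=5):
    # Like Tit for Tat, but slip in one unprovoked defection.
    if turn == sneak_turn:
        return 'D'
    return 'C' if not opp_history else opp_history[-1]

hist_j, hist_t = [], []          # j = JOSS-like, t = Tit for Tat
for turn in range(12):
    j = joss_like(hist_t, turn)
    t = tit_for_tat(hist_j)
    hist_j.append(j)
    hist_t.append(t)

print('JOSS-like:  ', ''.join(hist_j))   # CCCCCDCDCDCD
print('Tit for Tat:', ''.join(hist_t))   # CCCCCCDCDCDC
```

After the sneak on turn five, each program spends the rest of the game punishing the other’s previous move, so the defections bounce back and forth and both scores suffer.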

In Axelrod’s analysis of the first tournament, he noted that there were three strategies not included in the tournament but that, if submitted, would have won. With these results made available to potential entrants, along with randomizing the number of rounds each pair of strategies competed against each other to invalidate “late round” tactics, he ran a second tournament. This new competition attracted 62 entries. Tit for Tat won again. From the results, it’s easy to see that there is a penalty for being the first to defect. Axelrod wrote:

What seems to have happened is an interesting interaction between people who drew one lesson and people who drew another from the first round. Lesson One was: “Be nice and forgiving.” Lesson Two was more exploitative: “If others are going to be nice and forgiving, it pays to try to take advantage of them.” The people who drew Lesson One suffered in the second round from those who drew Lesson Two….The reason is that in trying to exploit other rules, they often eventually got punished enough to make the whole game less rewarding for both players than pure mutual cooperation would have been.

The lessons for improv and business are obvious, so I won’t belabor them. I would point out that the Prisoner’s Dilemma is an inherently grim scenario, so it’s best not to get into this type of situation in the first place. Because each player faces potential catastrophe if they don’t protect themselves, even allowing the players to communicate doesn’t guarantee cooperation.

Next up: further insights into the nature of competition in the Prisoner’s Dilemma scenario.