How to Win Life's Non-Zero-Sum Game

January 16, 2024

Discussing Robert Vallerand's concepts of passion, Adam Smith's views on self-interest, the Prisoner's Dilemma in game theory, and the evolution of cooperation.

Transcript

Hello! I’m Leo Isikdogan, and welcome to the Cognitive Creations podcast.

Today's episode is a journey through passion, self-interest, and cooperation.

What drives us, both as individuals and as a society? Is self-interest just another name for selfishness, or can it actually benefit society? How does cooperation emerge in a world driven by self-interest?

We’ll try to answer these questions. We'll discuss Robert Vallerand's concepts of passion, Adam Smith's views on self-interest, the Prisoner's Dilemma in game theory, and the evolution of cooperation. Alright, let’s get started.

Obsessive vs Harmonious Passion

Robert Vallerand, a highly cited psychologist, identifies two distinct types of passion: obsessive and harmonious.

Obsessive passion emerges when an activity starts to control us, overshadowing other aspects of our lives. It's often driven by external factors – things like seeking approval, maintaining our self-esteem, or chasing external rewards.

But there's another side to passion – a more positive one. This is what Vallerand calls harmonious passion. This type of passion arises when we engage in activities that resonate with us, activities that we inherently enjoy and find meaningful, things that we have an intrinsic motivation for.

This type of passion is likely to lead to positive outcomes such as well-being and fulfillment. And it is not just feel-good stuff. It actually correlates with beneficial outcomes.

When we're truly interested in something, we're passionate in a harmonious way. We're naturally more engaged, more innovative, and arguably, smarter in how we approach things. On the other hand, when we're bored, it can make us feel dumber.

The last part is more of my own experience than Vallerand's work, but I'm sure it's a very common experience. I feel smart when I work on things I'm enthusiastic and optimistic about. But I feel dumb when working on things where I don't see a path to a meaningful contribution from my end.

I often come across people who humble-brag about working 80-hour weeks. This made me wonder: what exactly are they even doing with all that time? Then, I realized that many activities I enjoy and engage in are often seen as work by others. This includes creating art, coding for my personal projects, renovating my house, producing educational videos for YouTube, and even producing this podcast. I don't consider any of these as “work” in the traditional sense.

I like building things. I like creating and publishing. I’m not doing any of these because someone told me to. I’m doing them for their own sake.

Succeeding in Life’s Non-Zero-Sum Game

One piece of advice that I always keep in mind is that one of the best ways to succeed in life is to find a way to create value for everyone while doing something you truly enjoy. I have heard different versions of this advice, usually in the context of entrepreneurship, but I think it’s something that can be applied to any area of life.

Sometimes people think that success is all about competition and that you need to make someone else lose in order for you to win. That assumption holds only if you assume that life is a zero-sum game, where one person's gain is another's loss. But life is not a zero-sum game. There are plenty of opportunities in the world for anyone to win by creating net-positive value. We don't need to push anyone down, but we don’t have to sacrifice our own interests either.

Adam Smith’s Views on Self-Interest

Adam Smith has some really interesting thoughts on this topic. He's often recognized for his contributions to economics, but he also explored moral philosophy.

Many people think that Adam Smith supported selfishness, because he emphasized the importance of self-interest. He argued that self-interest drives a competitive economy. It motivates everyone to do their best for their own benefit. However, Smith also recognized that self-interest is complex and doesn't have to conflict with helping others.

He observed that people are inherently self-interested, yet they also have an intrinsic interest in the well-being of others. He noted that people derive joy from the happiness of others, even without tangible personal gain. So, self-interest and caring for others can coexist.

You don't need to be selfless to make an impact that benefits everyone. I think being selfless is not sustainable in the long term anyway. It's probably in everyone's best interest if we balance self-interest with collective interest.

Prisoner’s Dilemma in Game Theory

A relevant topic here is cooperation, including how and why it arises.

Let's play a simple game. Imagine you and I are players in a game. We can either cooperate or defect. Let's break down what that means.

If we both choose to cooperate, we each earn three coins. It's a fair deal.

But here's the twist. If one of us cooperates and the other defects, the defector gets five coins, and the cooperator ends up with nothing. A bit unfair.

And if we both choose to defect, we only get one coin each. Not great, but better than nothing.

This game is a version of the Prisoner's Dilemma from game theory.

So, what would you do?

Think about it:

  • If your opponent cooperates and you defect, you earn five coins, more than the three you'd get if you cooperated.

  • If your opponent defects and you also defect, you’re still better off than if you'd cooperated: you get one coin instead of nothing.

Defecting seems like the better choice in both scenarios. If you defect, you either gain the most or safeguard yourself from loss. But here's the catch.

What if your opponent is thinking along the same lines? If both of you choose to defect, you'll each end up with just one coin, even though mutual cooperation would have given you more.
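For anyone who wants to see this concretely, the coin payoffs above can be written out as a small Python sketch. The values come straight from the game described in the episode; the function name is my own:

```python
# Payoff table for the one-shot game, indexed by (my_move, opponent_move).
# "C" = cooperate, "D" = defect; values are (my coins, opponent's coins).
PAYOFF = {
    ("C", "C"): (3, 3),  # both cooperate: three coins each
    ("C", "D"): (0, 5),  # I cooperate, they defect: I get nothing
    ("D", "C"): (5, 0),  # I defect, they cooperate: I get five
    ("D", "D"): (1, 1),  # both defect: one coin each
}

def best_response(opponent_move):
    """The move that maximizes my coins against a fixed opponent move."""
    return max("CD", key=lambda my_move: PAYOFF[(my_move, opponent_move)][0])

# Defection is the best response whether the opponent cooperates or defects...
print(best_response("C"), best_response("D"))  # D D
# ...yet mutual defection pays less than mutual cooperation would have:
print(PAYOFF[("D", "D")][0], "<", PAYOFF[("C", "C")][0])  # 1 < 3
```

That last pair of lines is the dilemma in a nutshell: the individually rational move leads both players to a worse outcome than cooperation.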

Iterated Prisoner’s Dilemma

In real life, we often interact with the same people more than once. There’s a version of this game called the Iterated Prisoner’s Dilemma. Think of it like playing the game not just once, but over and over. If I choose not to cooperate now, the other person will remember it and might react differently next time. Playing multiple times adds strategy, because how you act now can affect how others respond later.

So, what would be the best strategy in this repeated game?

In 1980, Robert Axelrod, a political science professor at the University of Michigan, organized a tournament to find out. He invited experts in game theory to submit strategies for a computer-run contest. In this event, computer programs played many rounds against each other and copies of themselves. The goal was to accumulate as many points as possible by the end of all rounds.

In Axelrod's tournament, the most successful strategy turned out to be a simple tit-for-tat strategy. The strategy is simple. Begin with cooperation, then copy your opponent's previous move. If they cooperate, continue cooperating. If they defect, then defect in the next round. Similarly, if they return to cooperation, follow by cooperating in the next round.

This strategy is effective because it cooperates with cooperative players for mutual gain and it defects against those who defect. When it faces itself, it leads to continuous cooperation.
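Tit-for-tat is simple enough to write in a few lines. Here's a minimal Python sketch of it, using the coin payoffs from the game above; the helper names and the ten-round match length are my own choices:

```python
# Payoffs from the game described earlier: (my coins, opponent's coins).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def play_match(strat_a, strat_b, rounds=10):
    """Play repeated rounds; each strategy sees the other's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Against itself, tit-for-tat cooperates every round:
print(play_match(tit_for_tat, tit_for_tat))  # (30, 30)

# Against an always-defector, it loses only the first round, then retaliates:
always_defect = lambda hist: "D"
print(play_match(tit_for_tat, always_defect))  # (9, 14)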

Traits of Successful Strategies

Axelrod discovered that the most successful strategies have four common traits: they are nice, retaliating, forgiving, and non-envious.

Nice means they are not the first to defect. It's like an optimistic approach. Most top-performing strategies in the tournament were nice. While being nice, a good strategy also retaliates when needed. It doesn’t allow itself to be easily exploited.

Successful strategies retaliate, but they don't hold grudges; they are forgiving. They retaliate, but will cooperate again if the opponent stops defecting. This helps avoid endless cycles of revenge, leading to better scores.

Finally, good strategies should not be envious. They should not focus on beating their opponents at all costs.

This is one of the interesting aspects of the tit-for-tat strategy. It came out as the top performer in the tournament without beating a single opponent. It’s a strategy that can never beat its opponents one-on-one. Yet, it accumulates the most points in the long run.

There are more sophisticated strategies that do better in certain setups but tit-for-tat is considered to be the most robust basic strategy.

I actually did try to build a more sophisticated strategy in the past. I had a class project back in college, where we built our own algorithms to play the repeated prisoner's dilemma against each other. Obviously, we all knew about the tit-for-tat strategy, yet still tried different things to do better.

My algorithm was probably more sophisticated than necessary. It first attempted to identify its opponent's behavior by observing their reactions to a series of initial moves. If it believed the opponent had a very forgiving strategy, it would try to exploit them by defecting often. For all other opponents, it would fall back to tit-for-tat.
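I don't have that old class project anymore, so what follows is only a hypothetical reconstruction of the idea, not my actual code: probe the opponent with a few scripted moves, exploit anyone who kept cooperating through the probe defections, and fall back to tit-for-tat otherwise. The probe sequence and the forgiveness test are made up for illustration:

```python
def probing_strategy(opponent_history):
    """Hypothetical sketch: probe, exploit the very forgiving, else tit-for-tat."""
    PROBE = ["C", "D", "D", "C"]  # assumed opening script, not from the episode
    n = len(opponent_history)
    if n < len(PROBE):
        return PROBE[n]
    # If the opponent never defected during our probe (even after we defected
    # twice), classify them as overly forgiving and exploit them.
    if "D" not in opponent_history[:len(PROBE)]:
        return "D"
    # Otherwise fall back to tit-for-tat: mirror their last move.
    return opponent_history[-1]
```

The flaw the episode describes is visible in the sketch: the strategy is not "nice" (it defects first, during the probe), so it forfeits early cooperation points against every opponent, and a misclassification locks it into exploiting someone it could have cooperated with.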

However, my algorithm failed to outperform the simple tit-for-tat strategy. My algorithm wasn’t really nice. It tried to take advantage of perceived weaker opponents. It likely misjudged some opponents, and missed chances to cooperate with them.

So, we've seen it again: what Axelrod found holds true. Good strategies are nice yet not easily exploited. They are retaliatory yet forgiving.

Evolution of Cooperation

Axelrod’s work also extends to the evolution of cooperation. In his evolutionary simulations, where successful strategies reproduce and unsuccessful ones die out, tit-for-tat almost always becomes dominant. Cooperation can start small and eventually spread to the entire population due to its evolutionary advantage at a collective level.
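Here's a simple replicator-dynamics sketch of that idea; it is my own toy model, not Axelrod's actual simulation. The per-match scores are what ten rounds of the episode's coin payoffs produce when tit-for-tat meets an always-defector (tit-for-tat loses only the opening round), and each generation a strategy's share of the population grows in proportion to how well it scores against the current mix:

```python
# Average total score per 10-round match for each pairing, derived from the
# coin payoffs in the episode (3/3 mutual cooperation, 5/0 exploitation,
# 1/1 mutual defection):
MATCH = {
    ("TFT", "TFT"): 30,   # ten rounds of mutual cooperation
    ("TFT", "ALLD"): 9,   # exploited once (0), then ten... nine rounds of 1
    ("ALLD", "TFT"): 14,  # one exploitation (5) plus nine mutual defections
    ("ALLD", "ALLD"): 10, # ten rounds of mutual defection
}

def step(shares):
    """One replicator update: shares grow in proportion to expected score."""
    fitness = {s: sum(shares[o] * MATCH[(s, o)] for o in shares) for s in shares}
    mean = sum(shares[s] * fitness[s] for s in shares)
    return {s: shares[s] * fitness[s] / mean for s in shares}

shares = {"TFT": 0.10, "ALLD": 0.90}  # cooperation starts as a small minority
for _ in range(50):
    shares = step(shares)
print(shares["TFT"])  # tit-for-tat has taken over almost the whole population
```

In this toy model, a tit-for-tat minority above roughly 6% of the population scores better on average than the defectors around it, so its share grows every generation; start it below that threshold and defection wins instead, which is why cooperation needs a critical mass of cooperators meeting each other often enough.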

In nature, many life forms face conflicts like the prisoner's dilemma. Game theory greatly simplifies real-world interactions, but it still provides insight into how cooperation can evolve among self-interested individuals. This might explain the shift from a world of purely selfish organisms to one where cooperation thrives.

Cooperation doesn't need selfless altruism. It can arise in groups focused on individual long-term benefits. This cooperative behavior doesn't even necessarily involve trust or conscious thought. If a cooperative strategy is genetically more successful, it can quickly become predominant in a population.

Alright, that was pretty much it for this episode. Thanks for listening, and I’ll see you next time.