Thinking in Bets

About a month ago I wrote a post about Annie Duke and a podcast episode I’d listened to that featured her as a guest; in short, it was so good that I immediately bought her new book, Thinking in Bets, and it really has changed my life, zero hyperbole.

It’s been so good that I’ve told a dozen folks about it and the book ranks very high on my must-read list. It has not only changed the way that I think through decisions but also the very speech and psychological patterns that I’ve developed over many, many years.

I don’t want to talk it up too much, but I did want to capture a few things here on my blog that I want to be able to reference in the future – my blog oftentimes functions as a repository of thoughts (both complete and mostly incomplete) but also as my own personal (digital) notebook of ideas, thoughts, and beliefs.

Regardless, I recommend the book without reservation and would love to hear your thoughts on it if you happen to walk through it.


Originally I had wanted to pen a handful of posts, one for each of these topics, but I’ve decided to just dump them all into a single post, mostly because it’s a much easier way of capturing all of the thoughts in one place.

There isn’t much rhyme or reason or progression to my thoughts here, so take them as just “scratch paper” thinking, if you will.

Equating the Quality of a Decision with the Quality of Its Outcome

Annie brings this conflict front and center within the first few pages of the book and I immediately had an “Aha!” moment because of its veracity. We do this all the time and when combined with hindsight bias it becomes very, very deadly.

In poker, they use the term “resulting” to describe this fallacy, and the cost of doing it (mostly without even knowing) is that it short-circuits growth, learning, and the potential for real improvement.

It would be impossible to catalogue all the ways I’ve done this historically in my life, but from here on out I’ve determined never to do it again as it’s just a useless and fruitless exercise and perspective.

If you’ve ever said aloud to yourself things like “I should have known…” or “I should have seen that coming…” or even “I knew it…!” then you’ve fallen victim to resulting, as there’s no way you could have known the outcome, but now that it’s here you believe that it was inevitable (and obvious).

No sober person thinks getting home safely after driving drunk reflects a good decision or good driving ability.

As a recovering alcoholic I know this viscerally and I won’t ever forget it.

Poker vs Chess

I loved this:

Chess is not a game. Chess is a well-defined form of computation. You may not be able to work out the answers, but in theory there must be a solution, a right procedure in any position. Now real games are not like that at all. Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.

This is from scientist Jacob Bronowski, who recounted in his book, The Ascent of Man, how John von Neumann described game theory to him. An amazing and powerful distinction because, essentially, chess contains no hidden information and very little luck: the pieces are all there for both players to see and there’s no room for chance (like rolling a die, for instance).

If you win it means you’ve chosen better moves than your opponent and if you lose it’s because your adversary skillfully outplayed you. Very rarely does a complete novice beat a Chess Grandmaster, but in poker a complete novice can beat a lifelong professional. See the difference?

This difference is striking and I have spent the better part of the last month thinking about what things in my life (and circumstance) are really “poker” and what are actually “chess” – it’s a mind-fuck, but it’s been well worth my time.

You see, life is much more like poker than chess and that makes a huge difference in terms of how we engage with and understand our world, especially the events and decisions that we have to make day-in and day-out.

“I Don’t Know” is a Good Thing

We’ve been taught in school that not having an answer when called upon is a bad thing and opens you up to criticism and humiliation. I mean, it’s the last thing you want to say when a teacher calls on you for an answer, right?

But changing this dynamic in our brains is important because this is where all good science (and scientific study and learning) begins. Without being able to say those 3 magical words we would be unable to actually learn anything.

It’s crazy how important this is for us, especially as kids but even as adults. Admitting that you don’t know something is the first step on the path of real enlightenment and says nothing about your intelligence at all.

Sadly, we’ve been taught differently and I’m guilty of shaming my own kids when they’ve said it in the past. I need to do better and I need to start modeling this even more. Physicist James Clerk Maxwell once said:

Thoroughly conscious ignorance is the prelude to every real advance in science.

Yup. And, besides, being comfortable with uncertainty has an enormous number of benefits, especially in the world of startups and entrepreneurship. Go figure.

One simple change is how you express confidence: instead of saying that something is simply “true” you can add a layer of uncertainty and say: “I believe this to be 80% true.” That’s a big difference and the impact on both you and your listeners is large.

The net result is a much more trustworthy communication experience and an even greater trust between the two parties. Why? Because it invites the other party to collaborate and correct so that both of you can get closer to 100%.

Now that’s kind of neat.

Redefining Wrong

The beautiful thing about thinking and engaging with life as a series of bets is that you begin to realize that life isn’t as binary as we believe it to be (or as we want it to be):

Decisions are bets on the future, and they aren’t “right” or “wrong” based on whether they turn out well on any particular iteration. An unwanted result doesn’t make our decision wrong if we thought about the alternatives and probabilities in advance and allocated our resources accordingly.

This was major for me as I oftentimes want to see a negative outcome as “bad” (and consequently myself as “bad”) when I can now look at it much more objectively. In many cases our decision and the process through which we made it were actually quite good, but we had to take a “bet” on the outcome and it just didn’t turn out the way we had hoped it would.

This lessens the load considerably on our conscience and we’re able to operate much more freely. I love that.

The Difference Between How We Think We Form Beliefs and How They Actually Form

I’ll just copy-and-paste this because it’s so darn good:

This is how we think we form abstract beliefs:

  1. We hear something;
  2. We think about it and vet it, determining whether it is true or false; only after that
  3. We form our belief.

It turns out, though, we actually form abstract beliefs this way:

  1. We hear something;
  2. We believe it to be true;
  3. Only sometimes, later, if we have the time or the inclination, we think about it and vet it, determining whether it is, in fact, true or false.

How many times have you done this in your own life? For me, too many to count. I have thought through this paradigm 100 times since reading it and I still catch myself reviewing my own beliefs only to realize that I’ve done the latter, not the former.

Humbling and eye-opening, that’s for sure. Why? Because we’re not naturally wired to do the former.

Truthseeking, the desire to know the truth regardless of whether the truth aligns with the beliefs we currently hold, is not naturally supported by the way we process information.

You see this on Twitter, Facebook, and most media outlets and social networks. Confirmation bias is a real thing and a terribly destructive one at that. So is motivated reasoning, which is simply the idea that how we process new information is driven by the beliefs we currently hold, oftentimes strengthening them.

We simply do not want new information to destroy our own identity and how we see (and feel about) ourselves, even in the face of authentic, validated, and real data that may say otherwise.

Kill Self-Serving Bias

It’s just plain evil and totally robs us of being able to improve and learn from our mistakes and experiences.

Self-serving bias arises from our drive to create a positive self-narrative. In that narrative, taking credit for something good is the same as saying we made the right decision. And being right feels good. Likewise, thinking that something bad was our fault means we made a wrong decision, and being wrong feels bad. When our self-image is at stake, we treat our fielding decision as 100% or 0%: right versus wrong, skill versus luck, our responsibility versus outside our control. There are no shades of grey.

The result:

Fielding outcomes with the goal of promoting our self-narrative and doing it in an all-or-nothing fashion alters our ability to make smart bets about why the future unfolded in a particular way. Learning from experience is difficult – and sometimes impossible – with this kind of biased, broad-brush thinking. Outcomes are rarely the result of our decision quality alone or chance alone, and outcome quality is not a perfect indicator of the influence of luck or skill.

Wow. I love this one too, via Jean Cocteau, who said:

We must believe in luck. For how else can we explain the success of those we don’t like?

Ouch.

Minimizing schadenfreude is important… that is, deriving pleasure from someone else’s misfortune. This is the opposite of compassion. If we want to become more rational thinkers we must be able to see a fuller picture of how life really operates: sometimes we win because of luck and sometimes our opponents or competitors win simply because of their better skill. And, when the opposite happens, we need to call it for what it is: if our competitors lose it’s not because they’re stupid; perhaps they were simply unlucky in their bet. Perhaps they were quite brilliant in their assessment and we have an opportunity to learn from them, regardless of the outcome.

Thinking in Bets Works Better as a Group

This is pretty self-evident and obvious, but if you can establish for yourself a group of folks who have agreed to truthseek with you then you’ll learn faster and make better decisions, both short- and long-term.

There’s too much to write about for this particular section, but I loved the norms featured in CUDOS, which stands for:

  • Communism (data belong to the group),
  • Universalism (apply uniform standards to claims and evidence, regardless of where they came from),
  • Disinterestedness (vigilance against potential conflicts that can influence the group’s evaluation), and
  • Organized Skepticism (discussion among the group to encourage engagement and dissent).

This is via Meyer R. Schkolnick – better known as sociologist Robert K. Merton – who was, among many things, a magician and performer. Following these norms helps reduce the impact and effect of things like the Rashomon Effect. You can dig in further for more details, but we see this happening every single day in our social media feeds.

Ten, Ten, Ten

I used this technique almost immediately when I first read about it and I’ve been applying it quite liberally to many experiences since. Business journalist and author Suzy Welch developed this tool, the 10-10-10, and it’s simply amazing – you should try it.

Every 10-10-10 process starts with a question…: What are the consequences of each of my options in ten minutes? In ten months? In ten years?

This triggers mental time-travel and allows for a corollary set of questions like:

How would I feel today if I had made this decision ten minutes ago? Ten months ago? Ten years ago?

I used this immediately after I had read about it, when I was given notice from a technology service provider that they had decided to cut off my access to their API, without reason. I was beyond livid and immediately wanted to make a few very irrational decisions as a result.

Instead, I applied the 10-10-10 model and realized that in the next 10 minutes I would most likely feel exactly the same, but in 10 months I would laugh about it and, possibly, in 10 years I could see this experience as not only necessary but a major breakthrough for the product that I was working on.

This allowed me to take a gigantic step backwards, think much more positively about the experience, and remember that my decision to use that API was a bet in the first place! I just hadn’t really factored that in as a possibility and instead took an all-or-nothing (i.e. binary) approach to my use of that API.

Crazy how useful this tool really is. And, does this not resonate deeply with you?

In relationships, even small disagreements seem big in the midst of the disagreement. The problem in all these situations (and countless others) is that our in-the-moment emotions affect the quality of the decision we make in those moments, and we are very willing to make decisions when we are not emotionally fit to do so.

I can’t even count how many times this has been true for me!

Finally (and I’m done writing about this because my hands are tired…) there’s a section in the book that gives you a decent list of things to “watch out for” when you hear them, which Annie classifies as worthy of being put into a “Decision Swear Jar” – I’ve captured them here for my own (and your) benefit.

These are all patterns of irrationality that get in the way of learning and truthseeking. If you hear any of these things then you know you’re about to enter irrationality town:

  • Signs of the illusion of certainty: “I know,” “I’m sure,” “I knew it,” “It always happens this way,” “I’m certain of it,” “you’re 100% wrong,” “You have no idea what you’re talking about,” “There’s no way that’s true,” “0%” or “100%” or their equivalents, and other terms signaling that we’re presuming things are more certain than we know they are. This also includes stating things as absolutes, like “best” or “worst” and “always” or “never.”
  • Overconfidence: similar terms to the illusion of certainty.
  • Irrational outcome fielding: “I can’t believe how unlucky I got,” or the reverse, if we have some default phrase for credit taking, like “I’m at the absolute top of my game” or “I planned it perfectly.” This includes conclusions of luck, skill, blame, or credit. It includes equivalent terms for irrationally fielding the outcomes of others, like, “They totally had that coming,” “They brought it on themselves,” and “Why do they always get so lucky?”
  • Any kind of moaning or complaining about bad luck just to off-load it, with no real point to the story other than to get sympathy. (An exception would be when we’re in a truthseeking group and we make explicit that we’re taking a momentary break to vent.)
  • Generalized characterizations of people meant to dismiss their ideas: insulting, pejorative characterizations of others, like “idiot” or, in poker, “donkey.” Or any phrase that starts by characterizing someone as “another typical ________.” (As David Letterman told Lauren Conrad, he had dismissed everyone around him as an idiot, until he pulled himself into a deliberative mind one day and asked, “What are the odds that everyone is an idiot?”)
  • Other violations of the Mertonian norm of universalism, shooting the message because we don’t think much of the messenger. Any sweeping term about someone, particularly when we equate our assessment of an idea with a sweeping personality or intellectual assessment of the person delivering the idea, such as “gun nut,” “bleeding heart,” “East Coast,” “Bible belter,” “California values” – political or social issues. Also be on guard for the reverse: accepting a message because of the messenger or praising a source immediately after finding out it confirms your thinking.
  • Signals that we have zoomed in on a moment, out of proportion with the scope of time: “worst day ever,” “the day from hell.”
  • Expressions that explicitly signal motivated reasoning, accepting or rejecting information without much evidence, like “conventional wisdom” or “if you ask anybody” or “Can you prove that it’s not true?” Similarly, look for expressions that you’re participating in an echo chamber, like “everyone agrees with me.”
  • The word “wrong,” which deserves its own swear jar. The Mertonian norm of organized skepticism allows little place in exploratory discussion for the word “wrong.” “Wrong” is a conclusion, not a rationale. And it’s not a particularly accurate conclusion since, as we know, nearly nothing is 100% or 0%. Any words or thoughts denying the existence of uncertainty should be a signal that we are heading toward a poorly calibrated decision.
  • Lack of self-compassion: if we’re going to be self-critical, the focus should be on the lesson and how to calibrate future decisions. “I have the worst judgment on relationships” or “I should have known” or “How could I be so stupid?”
  • Signals we’re being overly generous editors when we share a story. Especially in our truthseeking group, are we straying from sharing the facts to emphasize our version? Even outside our group, unless we’re sharing a story purely for entertainment value, are we assuring that our listener will agree with us? In general, are we violating the Mertonian norm of communism?
  • Infecting our listeners with a conflict of interest, including our own conclusion or belief when asking for advice or informing the listener of the outcome before getting their input.
  • Terms that discourage engagement of others and their opinions, including expressions of certainty and also initial phrasing inconsistent with that great lesson from improvisation – “yes, and…” That includes getting opinions or information from others and starting with “no” or “but…”

So… good.

Anyways, get the book, okay?

Finally, finally… I talked about this on my vlog:
