Imagine you were walking around Manhattan and chanced upon an interesting game going on at the side of the road. By the way, when you see such games in progress, a safe strategy is to keep walking, since they are usually just elaborate ways of separating you from your money.
The protagonist, sitting at the table, tells you (and you are able to confirm this from a video taken by a nearby security camera run by a disinterested police officer) that he has managed to toss the same quarter (an American coin) thirty times and got “Heads” on every one of those tosses. What would you say about the fairness or unfairness of the coin in question?
Next, your good friend rushes to your side and whispers to you that this guy is actually one of a large group of people (a little more than a billion, i.e., roughly 2^30) that were asked to successively toss freshly minted, scrupulously clean and fair quarters. People that tossed tails were “tossed” out at each successive toss and only those that tossed heads were allowed to toss again. This guy (and one more like him) were the only ones that remained. What can you say now about the fairness or unfairness of the coin in question?
What if the number of coin tosses was a hundred rather than thirty, with a correspondingly larger number of initial subjects?
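The arithmetic behind the elimination game can be sketched in a few lines of Python; the billion-player figure is just 2^30, so with a fair coin the expected number of survivors after thirty rounds is about one:

```python
# Each round of the elimination game removes, on average, half of the
# remaining players, since a fair coin lands "Heads" with probability 1/2.
def expected_survivors(initial_players: int, tosses: int) -> float:
    """Expected number of players still tossing after `tosses` rounds."""
    return initial_players / 2**tosses

# Starting with a little more than a billion players (2**30), after 30
# tosses on average about one player remains, holding a perfectly fair coin.
print(expected_survivors(2**30, 30))  # 1.0
```

With a hundred tosses the same arithmetic needs about 2^100 initial players to leave one survivor by pure chance.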
Just to make sure you think about this correctly, suppose you were the Director of a large State Pension Fund and you need to invest the life savings of your state’s teachers, firemen, policemen, highway maintenance workers and the like. You are told you have to decide whether to allocate some money to a bet made by an investment manager based on his or her track record (he or she successively tossed “Heads” a hundred times in a row). Should you invest money on the possibility that he or she will toss “Heads” again? If so, how much should you invest? Should you stay away?
This question cuts to the heart of how we operate in real life. If you set aside the analytical skills you learnt in school and revert to how your “lizard” brain thinks, you would assume the coin was unfair (in the first instance) and express total surprise on learning the second fact. In fact, even though the second situation could well lie behind every situation of the first sort we encounter in the real world, we would still operate as if the coin were unfair, just as our “lizard” brain instructs us to.
What we are doing unconsciously is using Bayes’ theorem. Bayes’ theorem is the linchpin of inferential reasoning and is often misused even by people who understand what they are doing with it. If you want to read a couple of rather interesting books that use it in various ways, read Gerd Gigerenzer’s “Reckoning with Risk: Learning to Live with Uncertainty” or Hans Christian von Baeyer’s “QBism“. I will discuss a few classic examples. In particular, Gigerenzer’s book discusses several such examples, as well as ways to overcome popular mistakes made in the interpretation of the results.
Here’s a very overused, but instructive, example. Let’s say there is a rare disease (pick your poison) that afflicts 0.5% of the population. Unfortunately, you are worried that you might have it. Fortunately for you, there is a test that can be performed that is 99% accurate: if you do have the disease, the test will detect it 99% of the time. Unfortunately for us, the test has a 0.2% false positive rate, which means that if you don’t have the disease, 0.2% of such tested people will mistakenly get a positive result. Despite this, the results look exceedingly good, so the test is much admired.
You nervously proceed to your doctor’s office and get tested. Alas, the result comes back “Positive”. Now, ask yourself, what are the chances you actually have the disease? After all, you have heard of false positives!
A simple way to turn the percentages above into numbers: suppose you consider a population of 100,000 people. Since the disease is rather rare, only 500 have the disease. If they are tested, only 5 of them will get an erroneous “Negative” result, so 495 get a correct “Positive”. However, if the rest of the population (99,500 people) were tested in the same way, 199 people would get a “Positive” result, despite not having the disease. In other words, of the 694 people who would get a “Positive” result, only 495 actually have the disease, which is roughly 71%: so such an accurate test can only give you a 7-in-10 chance of actually being diseased, despite its incredible accuracy. The reason is that the “false positive” rate is low, but not low enough to overcome the extreme rarity of the disease in question.
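The head-count argument can be written out as a short Python sketch; the rates used here are illustrative assumptions (0.5% prevalence, 99% detection, 0.2% false positives):

```python
# Count people, not percentages: follow a population of 100,000 through
# the screening test and see who ends up with a "Positive" result.
population = 100_000
diseased = population * 5 // 1000            # 0.5% prevalence -> 500 people
true_positives = diseased * 99 // 100        # 99% correctly detected -> 495
false_positives = (population - diseased) * 2 // 1000  # 0.2% of 99,500 -> 199

total_positives = true_positives + false_positives     # 694 people
chance = true_positives / total_positives
print(f"{true_positives} of {total_positives} positives are real: {chance:.0%}")
```

Phrasing the problem in whole people, as above, is exactly the “natural frequencies” trick that makes the answer obvious at a glance.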
Notice, as Gigerenzer does, how simple the argument seems when phrased with numbers rather than with percentages. To do this using standard probability theory, one writes: if we are speaking about Events A and B, and we write the probability that A could occur once we know that B has occurred as P(A|B), then

P(A|B) = P(B|A) P(A) / P(B)

Using this,

P(Disease|Positive) = P(Positive|Disease) P(Disease) / P(Positive)

and then we note

P(Positive) = P(Positive|Disease) P(Disease) + P(Positive|No Disease) P(No Disease)

since I could test positive for two reasons: either I really am among the 0.5% of diseased people and additionally was among the 99% that the test caught, OR I really was among the 99.5% of healthy people but was among the 0.2% that unfortunately got a false positive.

Indeed,

P(Disease|Positive) = (0.99 × 0.005) / (0.99 × 0.005 + 0.002 × 0.995) ≈ 0.71

which was the answer we got before.
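The same posterior drops straight out of Bayes’ theorem; here is a minimal Python check, using the same illustrative rates (0.5% prevalence, 99% detection, 0.2% false positives):

```python
def posterior(prior: float, hit_rate: float, false_positive_rate: float) -> float:
    """P(Disease | Positive) computed from Bayes' theorem.

    prior               -- P(Disease)
    hit_rate            -- P(Positive | Disease)
    false_positive_rate -- P(Positive | No Disease)
    """
    # Total probability of testing positive: truly sick and caught,
    # plus healthy but unlucky.
    p_positive = hit_rate * prior + false_positive_rate * (1 - prior)
    return hit_rate * prior / p_positive

print(round(posterior(0.005, 0.99, 0.002), 3))  # 0.713
```

Varying the prior in this function makes the point of the whole post: the same test result carries very different weight depending on how rare the disease is to begin with.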
The rather straightforward formula I used in the above is one formulation of Bayes’ theorem. Bayes’ theorem allows one to incorporate one’s knowledge of partial outcomes to deduce what the underlying probabilities of events were to start with.
There is no good answer to the question that I posed in the first paragraph. It is true that both a fair and an unfair coin could give results consistent with the first event (someone gets thirty or even a hundred “Heads” in a row). However, if one desires that probability have an objective meaning independent of our experience, based upon the results of an infinite number of repetitions of some experiment (the so-called “frequentist” interpretation of probability), then one is stuck. In fact, based upon that principle, if you haven’t heard anything contrary to the facts about the coin, your a priori assumption about the probability of heads must be 1/2. On the other hand, that isn’t how you run your daily life. In fact, the most legally defensible (many people would argue the only defensible) strategy for the Director of the Pension Fund would be to
- not assume that prior returns were based on pure chance and would be equally likely to be positive or negative
- bet on the manager with the best track record
At a minimum, I would advise people to stay away from a stable of managers that are simply the survivors of a talent test where the losers were rejected (oh wait, that sounds like a large number of investment managers in business these days!). Of course, a manager who knows they have a good thing going is likely not to allow investors in at all, for fear of reducing their returns through crowding. Such managers also exist in the global market.
The Bayesian approach has a lot in common with our everyday approach to life. It is not surprising that it has been applied to the interpretation of Quantum Mechanics; that will be discussed in a future post.