I came across the problem given below in Thinking Fast and Slow.

A cab was involved in a hit-and-run accident at night. Two cab companies, the Green and the Blue, operate in the city. You are given the following data

- 85% of the cabs in the city are Green and 15% are Blue.
- A witness identified the cab as Blue. The court tested the reliability of the witness under the circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80% of the time and failed 20% of the time.

What is the probability that the cab involved in the accident was Blue rather than Green? Most of us will answer 80%, but that is incorrect. Let us solve the problem step by step.

## 85% of the Cabs are Green and 15% are Blue

If there are 100 cabs then 85 will be Green and 15 will be Blue.

## The witness is correct 80% of the time

The witness identified the color of the cab as blue. The witness will correctly call a blue cab blue 80% of the time, and will incorrectly call a green cab blue 20% of the time.

## Solution

It is easier for the brain to deal with absolute numbers, so to solve this problem I assume there are 100 cabs: 85 green and 15 blue. The witness correctly identifies a blue cab as blue 80% of the time.

Total no. of blue cabs correctly identified as blue = 15 * 0.8 = 12

The witness is incorrect 20% of the time, calling a green cab blue.

Total no. of green cabs incorrectly identified as blue = 85 * 0.2 = 17

The total number of cabs the witness identifies as blue is 12 + 17 = 29.

Hence the probability that the cab is actually blue, given the witness said blue, is 12/29 ≈ 41.4%.

## Why is this important?

We solved the problem using Bayesian inference. There are two pieces of information in the problem: a base rate and the imperfectly reliable testimony of a witness. Bayesian inference gives the base rate its due weight by combining it with the unreliable information from the witness. Excerpt from Thinking Fast and Slow:

There are two items of information: base rate and the imperfectly reliable testimony of a witness. In the absence of witness, the probability of the guilty cab being Blue is 15%, which is the base rate of that outcome. If the two cab companies had been equally large, the base rate would be uninformative and you would consider only the reliability of the witness, concluding that the probability is 80%. The two sources of information can be combined by Bayes’s rule. The correct answer is 41%. However, you can probably guess what people do when faced with this problem, they ignore the base rate and go with the witness. The most common answer is 80%

## Cancer Test

In a given population, 1% of the people might have cancer. Tests can be taken to identify cancer. Following are the details about the test:

- Test will be positive 90% of the time if someone has cancer.
- Test will be negative 90% of the time if someone does not have cancer.

If you take the cancer test and it comes back positive, what is the probability that you have cancer? Once again the most common answer is 90%, but it is incorrect. Let us solve the problem step by step.

**Finding all the Inputs**

To simplify the problem assume there are 1,000 people.

- P(Cancer) = 1%
- P(No Cancer) = 99% (100% – 1%)
- P(Positive Test | Cancer) = 90%
- P(Negative Test | Cancer) = 10% (100% – 90%)
- P(Negative Test | No Cancer) = 90%
- P(Positive Test | No Cancer) = 10% (100% – 90%)

**Positive Test and Cancer**

```
1% of the people have cancer = 1000 * 0.01 = 10
90% of them will test positive = 10 * 0.9 = 9 (A)
```

**Positive Test and No Cancer**

```
99% of the people do not have cancer = 1000 * 0.99 = 990
10% of them will test positive = 990 * 0.1 = 99 (B)
```

**Total Positive Tests**

Adding A and B from above = 9 + 99 = 108 (C)

**Probability of Cancer with positive test**

```
From C, total positive tests = 108
From A, positive tests with cancer = 9
Probability of cancer given a positive test = 9/108 = 8.33%
```
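The same steps in a quick Python sketch (mine, not the original post's; population size and rates are the ones assumed in the text):

```python
# Cancer test: work in absolute counts out of 1,000 people.
population = 1000
with_cancer = population * 0.01       # 10 people have cancer
without_cancer = population * 0.99    # 990 people do not

true_positives = with_cancer * 0.9        # A: 10 * 0.9 = 9
false_positives = without_cancer * 0.1    # B: 990 * 0.1 = 99
total_positives = true_positives + false_positives  # C: 108

p_cancer = true_positives / total_positives
print(f"P(Cancer | positive test) = {p_cancer:.2%}")  # 8.33%
```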

**Why is the probability of cancer with a positive test only 8.33%?**

Once again the base rate rescued us. In our example, out of 108 people who tested positive, 99 do not have cancer. The test flags many people without cancer as positive: as shown in B above, 99 people were identified as having cancer even though they do not. So even though the test came out positive, the probability of having cancer is only 8.33%. Remember, the question to always ask is:

What is the base rate?

## Formula for Bayes Theorem

We have solved two problems using common sense. It is time to look at the formula for Bayes' theorem. Bayes' theorem is named after **Thomas Bayes**, who first suggested using the theorem to update beliefs.

Bayes' theorem gives the relationship between the probabilities of A and B, P(A) and P(B), and the conditional probabilities of A given B and B given A, P(A|B) and P(B|A). In its most common form, it is:

P(A|B) = (P(B|A) * P(A)) / P(B)

Do not worry if the formula does not make sense yet. Let us translate it to the cancer test example.

- P(A|B) = P(Cancer|Positive Test)
- P(B|A) = P(Positive Test|Cancer) = 90%
- P(A) = P(Cancer) = 1% = 10 people out of 1,000
- P(B) = P(Positive Test) = 99 + 9 = 108 people out of 1,000

**Putting it together**

P(Cancer|Positive Test) = (P(Positive Test|Cancer) * P(Cancer)) / P(Positive Test)

P(Cancer|Positive Test) = (0.9 * 10) / 108 = 9/108 = 8.33%

Robert Hagstrom, in his excellent book The Warren Buffett Portfolio, discusses Bayesian analysis. Excerpt from the book:

Bayesian analysis gives us a logical way to consider a set of outcomes of which all are possible but only one will actually occur. It is conceptually a simple procedure. We begin by assigning a probability to each of the outcomes on the basis of whatever evidence is then available. If additional evidence becomes available, the initial probability is revised to reflect the new information.

Bayes’s theorem thus gives us a mathematical procedure for updating our original beliefs (which had resulted from what he called a prior distribution of information) to produce a posterior distribution of information. In other words, prior probabilities combined with new information yield posterior probabilities and thus change our relevant odds.

Let’s imagine that you and a friend have spent the afternoon playing your favorite board game, and now, at the end of the game, are chatting about this and that. Something your friend says leads you to make a friendly wager: that with one roll of the die from the board game, you will get a 6. Straight odds are one in six, a 16 percent probability. But then suppose your friend rolls the die, quickly covers it with her hand, and takes a peek. “I can tell you this much,” she says; “it’s an even number.” With this new information, your odds change to one in three, a 33 percent probability. While you consider whether to change your bet, your friend teasingly adds: “And it’s not a 4.” With this additional bit of information, your odds have changed again, to one in two, a 50 percent probability.

With this very simple sequence, you have performed a Bayesian analysis. Each new piece of information affected the original probability, and that is a Bayesian inference.
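The die sequence in the excerpt is just conditioning on a shrinking set of possible outcomes. A quick sketch (mine, not Hagstrom's) confirms the 1/6 → 1/3 → 1/2 updates:

```python
from fractions import Fraction

# Prior: all six faces are equally likely.
faces = {1, 2, 3, 4, 5, 6}
print(Fraction(1, len(faces)))   # chance of rolling a 6: 1/6

# Evidence 1: "it's an even number" — keep only the even faces.
faces = {f for f in faces if f % 2 == 0}   # {2, 4, 6}
print(Fraction(1, len(faces)))   # 1/3

# Evidence 2: "and it's not a 4".
faces.discard(4)                 # {2, 6}
print(Fraction(1, len(faces)))   # 1/2
```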

Thanks for this post. I just read the cab accident problem in Thinking Fast and Slow, and being a math-y kind of person I couldn’t help stopping to think. Your post made me understand how I misinterpreted the situation.

To me, the intuitive answer was 83%: there’s an 80% probability that the witness correctly identified the colour, and of the cases where she didn’t get the colour right, 15% would be blue.

Now I see that I should really have asked myself if this “blue” report is one of the few actually blue accidents, or one of the many misreported green accidents.

It also helped me to realize that an 80% chance of correctly identifying one of two colours is really crappy. If you have no skill in identifying colours, you would still get 50%. That 80% includes a lot of lucky guesses.

I should learn Bayesian statistics better. I was really fooled by this example – and that means that I probably misinterpret a lot of probabilities in real life.
