options("scipen" = 9999, "digits" = 6)
This chapter introduces the concept of conditional probability, which, in my opinion, is one of the least intuitive parts of probability. I didn’t take a lot of notes here, but I did at least write down what is covered in this section. For a more detailed explanation of conditional probability, I recommend either Probability for the Enthusiastic Beginner by Morin or Mathematical Statistics with Applications by Wackerly et al. (I make no money from these links, sales, or promotions.)
Notes
- Conditional probabilities are used when the occurrence of event \(A\) is not independent from the occurrence of event \(B\) – if we know the outcome of event \(B\), our belief in the chance that \(A\) will occur changes.
- We denote this as \(P(A \mid B)\), which is read “the probability of event \(A\), given event \(B\)”. This is defined as
\[P(A \mid B) = \frac{P(A \cap B)}{P(B)}.\]
- The product rule we learned before only works for independent events, so for dependent events, we have to use this definition to get a slightly different rule.
\[P(A \cap B) = P(A) \cdot P(B \mid A) = P(B) \cdot P(A \mid B).\]
- Note the equivalency we see above. We can use this to derive Bayes’ theorem as well. This is a rule (not really a “theorem”) that allows us to “flip” a conditional probability.
\[P(A \mid B) = \frac{P(A) \ P(B \mid A)}{P(B)} = \frac{P(A) \ P(B \mid A)}{P(A) \ P(B \mid A) + P(\lnot A)P(B \mid \lnot A)}.\]
The book doesn’t include the last equivalency in this section, which surprises me, because that expansion of the denominator is where every other conditional probability lecture I’ve had ends up, and it’s often quite useful. (There’s a quick numerical sanity check of it after this list.)
- The book says that the interesting thing to notice about this relationship is that we have the power to relate \(P(\text{observed} \mid \text{belief})\) to \(P(\text{belief} \mid \text{observed}).\)
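As promised, here is a short sanity check of that expanded form in R. The input probabilities below are made up for illustration (they are not from the book); the only point is to confirm that the simple and expanded forms of Bayes’ theorem give the same answer.

```r
# Made-up probabilities, just to check that the two forms of Bayes' theorem agree
p_a             <- 0.01  # P(A)
p_b_given_a     <- 0.90  # P(B | A)
p_b_given_not_a <- 0.05  # P(B | not A)

# Law of total probability: P(B) = P(A) P(B|A) + P(not A) P(B|not A)
p_b <- p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

bayes_simple   <- p_a * p_b_given_a / p_b
bayes_expanded <- p_a * p_b_given_a /
  (p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a)

c(bayes_simple, bayes_expanded)  # both come out to about 0.1538
```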
Solutions
Q1
What piece of information would we need in order to use Bayes’ theorem to determine the probability that someone in 2010 who had GBS also had the flu vaccine that year?
This is another question that I have beef with. I think it is reasonable to interpret “the probability that someone in 2010 who had GBS also had the flu vaccine that year” as \(P(\text{GBS} \cap \text{flu vaccine}).\) But the book wants us to read this as \(P(\text{flu vaccine} \mid \text{GBS}),\) which I would tend to word (unambiguously) as “the probability that someone in 2010 received the flu vaccine, given that they had GBS.” Since wordings of conditional and joint probabilities are often vague like this, I think it is best to be specific instead of just italicizing the word “also.”
But anyways. The formula for this calculation is \[P(\text{flu vaccine} \mid \text{GBS}) = \frac{P(\text{flu vaccine}) \ P(\text{GBS} \mid \text{flu vaccine})}{P(\text{GBS})},\] so the main thing we are missing is \(P(\text{GBS})\). The numerator, \(P(\text{flu vaccine}) \ P(\text{GBS} \mid \text{flu vaccine})\), is just the joint probability \(P(\text{GBS} \cap \text{flu vaccine})\), which is the only other piece we were not given in the chapter.
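If we did have the missing pieces, the calculation itself would be trivial. Here is a sketch in R; the \(3/100{,}000\) value for \(P(\text{GBS} \mid \text{flu vaccine})\) is the one used later in this chapter, but the vaccine coverage and baseline GBS rate below are hypothetical placeholders, since the book never provides them.

```r
p_gbs_given_fv <- 3 / 100000  # P(GBS | flu vaccine), used later in the chapter
p_fv           <- 0.40        # hypothetical flu vaccine coverage (not given)
p_gbs          <- 2 / 100000  # hypothetical baseline GBS rate (not given)

# Bayes' theorem: P(flu vaccine | GBS)
p_fv * p_gbs_given_fv / p_gbs  # 0.6 with these placeholder inputs
```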
Q2
What is the probability that a random person picked from the population is female and is not color blind?
Most of the details for this problem are given in the chapter. Note that for this example, the book is talking about the joint probability, despite what I think is very similar wording to the previous question (but maybe I need to work on my reading comprehension). Anyways, the relevant formula would be
\[P(\text{female} \cap \lnot \text{color blind}) = P(\text{female}) \ P(\lnot \text{color blind} \mid \text{female}).\]
Assuming male individuals and female individuals are equally likely in the population and are the only two options (neither of which is true, but they are both assumed to be true by the book for this question), we have all of the information that we need to solve this. We get
\[P(\text{female} \cap \lnot \text{color blind}) = (0.5)(1 - 0.005) = 0.4975.\] So the probability is \(49.75\%,\) which is only a little below \(50\%\) because the incidence of color blindness among female members of the population is extremely low.
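For completeness, here is the same arithmetic in R, using only the two numbers above (the assumed \(0.5\) probability of being female and the \(0.005\) female color blindness rate):

```r
p_female          <- 0.5    # assumed by the book
p_cb_given_female <- 0.005  # rate of color blindness among females

p_female * (1 - p_cb_given_female)  # 0.4975
```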
Q3
What is the probability that a male who received the flu vaccine in 2010 is either color blind or has GBS?
I’ll steal the notation that the book uses here, and let \(A\) be the event that this individual is colorblind, given that they are male, and let \(B\) be the event that this individual has GBS, given that they got the flu vaccine. Note that this makes several independence assumptions (being male or color blind are both independent of the flu vaccine and GBS, and vice versa), because if we don’t make these independence assumptions we would lack the necessary information to solve this problem.
Now, the probability we want to find is \[P(A \cup B) = P(A) + P(B) - P(A \cap B) = P(A) + P(B) - P(A) P(B \mid A).\]
In the chapter, we previously learned the values for \(P(A)\) and \(P(B)\), but I think this is where the problem starts to break down. The chapter hasn’t dealt with any really complicated events like this at all, and it doesn’t really make sense to say that \[P(B \mid A) = P( (\text{GBS} \mid \text{flu vaccine}) \mid (\text{color blind} \mid \text{male}) ).\]
So of course the book does not discuss how to address this problem, as it really isn’t very well-formed; it just says to assume \(P(B \mid A) = P(B).\) This is true if we make all of the independence assumptions above, but again, stating the problem in this way doesn’t really make sense.
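For the record, here is the book’s shortcut carried out in R before I take my detour. I’m using \(P(A) = 0.08\) (the male color blindness rate consistent with the \(0.04\) joint probability that shows up in my calculation below) and \(P(B) = 3/100{,}000\) from the chapter, together with the assumed \(P(B \mid A) = P(B)\):

```r
p_a <- 0.08        # P(colorblind | male), consistent with the 0.04 joint used below
p_b <- 3 / 100000  # P(GBS | flu vaccine), from the chapter

# Addition rule with the book's assumption P(B | A) = P(B)
p_a + p_b - p_a * p_b  # about 0.0800276, i.e. roughly 8.003%
```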
So, I’ll detour and work through the problem in a way that I think is a bit more logical and arrives at the same answer.
When I read the question, the probability that I think we want to get is \[P(\text{color blind} \cup \text{GBS} \mid \text{male} \cap \text{flu vaccine}).\] This is much more complex than any of the examples in the book, but fortunately for us, we can work it out using the laws of set theory that the book has not discussed at all. Note that, by definition of conditional probability,
\[ P(\text{color blind} \cup \text{GBS} \mid \text{male} \cap \text{flu vaccine}) = \frac{P\left[(\text{color blind} \cup \text{GBS}) \cap (\text{male} \cap \text{flu vaccine}) \right]}{P(\text{male} \cap \text{flu vaccine})}. \] Now, the denominator is pretty easy to sort out. Since we don’t have any information on how being male affects flu vaccine coverage, we’ll assume they are independent (although in real life, this is not true – CDC flu vaccine coverage data for the USA suggests that males under 65 are less likely to get flu vaccines than females of the same age, but this difference goes away for the elderly). Under this assumption, we get \[P(\text{male} \cap \text{flu vaccine}) = P(\text{male}) \ P(\text{flu vaccine}),\] which is simple enough in form, even though (as we’ll see) we never actually learn \(P(\text{flu vaccine})\).
So it remains to deal with the beast of a numerator. For ease-of-use here, let’s define the event \(C \equiv \text{color blind} \cup \text{GBS}\) and the event \(D \equiv \text{male} \cap \text{flu vaccine}.\) Then, we get that
\[ P(C \cap D) = P\left[\left(\text{color blind} \cup \text{GBS}\right) \cap D\right] = P\left[(\text{color blind} \cap D) \cup (\text{GBS} \cap D)\right] \] by using the distributive rules of the \(\cap\) and \(\cup\) operators. These are not covered in the book, but one potential resource (in terms of set theory) is here. Many books on probability will also briefly cover this material.
Now, we have a union, and we can use the addition (or inclusion-exclusion) principle to evaluate the probability. We get that
\[\begin{align*} P((\text{color blind} \cap D) \cup (\text{GBS} \cap D)) &= P(\text{color blind} \cap D) + P(\text{GBS} \cap D) - P(\text{color blind} \cap \text{GBS} \cap D) \\ &= P(\text{color blind} \cap \text{male} \cap \text{flu vaccine}) + P(\text{GBS} \cap \text{male} \cap \text{flu vaccine}) \\ &\quad \quad - P(\text{color blind} \cap \text{GBS} \cap (\text{male} \cap \text{flu vaccine})). \end{align*}\]
Under the same set of independence assumptions as before, we can make the following set of simplifications: \[\begin{align*} P(\text{color blind} \cap \text{male} \cap \text{flu vaccine}) &= P(\text{color blind} \cap \text{male}) \ P(\text{flu vaccine}) \\ P(\text{GBS} \cap \text{male} \cap \text{flu vaccine}) &= P(\text{GBS} \cap \text{flu vaccine}) \ P(\text{male}) \\ P(\text{color blind} \cap \text{GBS} \cap (\text{male} \cap \text{flu vaccine})) &= P(\text{GBS} \cap \text{flu vaccine} \cap \text{male} \cap \text{color blind}) \\ &= P(\text{male} \cap \text{color blind}) \ P(\text{GBS} \cap \text{flu vaccine}). \end{align*}\]
Now, it would be just about impossible to write all that out in text format and still have it fit into the margins of this page. But nonetheless we will try. We get that, overall,
\[\begin{equation*}P(\text{color blind} \cup \text{GBS} \mid \text{male} \cap \text{flu vaccine}) = \frac{ \left(\begin{split} &P(\text{color blind} \cap \text{male}) \ P(\text{flu vaccine}) \\ &\quad + P(\text{GBS} \cap \text{flu vaccine}) \ P(\text{male}) \\ &\quad - P(\text{male} \cap \text{color blind}) \ P(\text{GBS} \cap \text{flu vaccine}) \end{split}\right) }{ P(\text{male}) \ P(\text{flu vaccine}) }. \end{equation*}\]
I can’t figure out how to make the text of that smaller in this Quarto HTML document, and it’s too much work to figure out how to size the LaTeX code in a way that’s supported by MathJax, so that’s the best that you’re getting from me.
Anyways, if we want to solve this with the information given in the book, we have to get rid of all of the \(P(\text{flu vaccine})\) terms, since we don’t know that value. Fortunately for us, they cancel out. First, let’s plug in the numbers that we do know so this is a bit less cumbersome to type. Let
\[\mathrm{P} = P(\text{color blind} \cup \text{GBS} \mid \text{male} \cap \text{flu vaccine})\]
for convenience (should’ve said this earlier but it’s too late now). Writing \(\text{FV}\) for flu vaccine, we get \[\begin{align*} \mathrm{P} & = \frac{ \left(\begin{split} &P(\text{color blind} \cap \text{male}) \ P(\text{flu vaccine}) \\ &\quad + P(\text{GBS} \cap \text{flu vaccine}) \ P(\text{male}) \\ &\quad - P(\text{male} \cap \text{color blind}) \ P(\text{GBS} \cap \text{flu vaccine}) \end{split}\right) }{ P(\text{male}) \ P(\text{flu vaccine}) } \\ \\ &= \frac{ 0.04 \ P(\text{FV}) + 0.5 \ P(\text{GBS} \mid \text{FV}) \ P(\text{FV}) - 0.04 \ P(\text{GBS} \mid \text{FV}) \ P(\text{FV}) }{ 0.5 \ P(\text{FV}) } \\ &= \frac{P(\text{FV})\left( 0.04 + 0.46 \ P(\text{GBS} \mid \text{FV}) \right)}{ 0.5 \ P(\text{FV}) } \\ &= \frac{0.04 + 0.46 (3 / 100{,}000)}{0.5} \\ &= 0.0800276 = 8.00276\%. \end{align*}\]
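And here is that final arithmetic in R, using only the numbers plugged in above. The second version evaluates the un-simplified ratio with an arbitrary hypothetical value for \(P(\text{FV})\) (0.37 below is a placeholder, not from the book), just to confirm that the \(P(\text{FV})\) terms really do cancel and the answer doesn’t depend on it.

```r
p_cb_and_male  <- 0.04        # P(color blind and male)
p_male         <- 0.5         # P(male)
p_gbs_given_fv <- 3 / 100000  # P(GBS | flu vaccine)

# Simplified form, after the P(FV) terms cancel
(p_cb_and_male + (p_male - p_cb_and_male) * p_gbs_given_fv) / p_male
# 0.0800276

# Un-simplified form with a hypothetical P(FV); the answer is the same
p_fv <- 0.37  # arbitrary placeholder, not from the book
numerator <- p_cb_and_male * p_fv +
  p_gbs_given_fv * p_fv * p_male -
  p_cb_and_male * p_gbs_given_fv * p_fv
numerator / (p_male * p_fv)
# 0.0800276 again
```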
This is the same answer that the book has, so their shortcut that I don’t like apparently works (and saves a lot of time). But I had a good time proving that it was true without shortcuts, so I guess it was worth it.
Well, that’s all for this chapter!