
Probability - Total Probability and Bayes' Theorem

Grade 12 ICSE

Review the key concepts, formulae, and examples before starting your quiz.

🔑Concepts

Partition of a Sample Space: A set of events $E_1, E_2, \ldots, E_n$ forms a partition of the sample space $S$ if the events are mutually exclusive ($E_i \cap E_j = \phi$ for $i \neq j$) and exhaustive ($E_1 \cup E_2 \cup \ldots \cup E_n = S$). Visually, imagine a large rectangle $S$ divided into non-overlapping regions like a puzzle, where every piece is distinct and together they fill the entire rectangle.

Conditional Probability Foundation: The probability of an event $A$ occurring given that event $E$ has already occurred is denoted $P(A|E)$. This concept is the building block for both Total Probability and Bayes' Theorem, representing a 'pathway' in a probability tree.

Theorem of Total Probability: This theorem allows us to calculate the probability of an event $A$ that can occur through several distinct pathways $E_1, E_2, \ldots, E_n$. Visually, this is represented by a probability tree where the first set of branches are $E_1, E_2, \ldots$ and the next level of branches represents $A$ happening under each $E_i$. The total probability $P(A)$ is the sum of the probabilities of all paths leading to $A$.

Bayes' Theorem (Inverse Probability): While total probability looks forward to find the likelihood of an outcome, Bayes' Theorem looks backward. It calculates the probability that a specific 'cause' $E_i$ occurred, given that the 'result' $A$ has already been observed. It is essentially the ratio of one specific branch of a tree diagram to the sum of all branches leading to that outcome.

Prior vs. Posterior Probabilities: $P(E_i)$ is called the 'Prior Probability' because it is known before the experiment is conducted. $P(E_i|A)$ is called the 'Posterior Probability' because it is calculated after the outcome of the experiment (event $A$) is known.

Likelihood and Evidence: In the context of Bayes' Theorem, $P(A|E_i)$ is often called the likelihood, and the denominator (the total probability $P(A)$) acts as a normalizing constant or the 'evidence' that ensures the sum of all posterior probabilities equals $1$.

📐Formulae

Conditional Probability: $P(A|E) = \frac{P(A \cap E)}{P(E)}$, provided $P(E) > 0$

Multiplication Rule: $P(A \cap E) = P(E) \cdot P(A|E)$

Theorem of Total Probability: $P(A) = \sum_{i=1}^{n} P(E_i) \cdot P(A|E_i) = P(E_1)P(A|E_1) + P(E_2)P(A|E_2) + \ldots + P(E_n)P(A|E_n)$
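The sum above can be sketched directly in code. This is a minimal illustration (the function name `total_probability` and the two-bag numbers from Problem 1 are just for demonstration), using exact fractions so no rounding creeps in:

```python
from fractions import Fraction

def total_probability(priors, likelihoods):
    """P(A) = sum over the partition of P(E_i) * P(A|E_i)."""
    return sum(p * l for p, l in zip(priors, likelihoods))

# Example: two bags chosen with probability 1/2 each;
# P(red | Bag I) = 3/7 and P(red | Bag II) = 5/11.
priors = [Fraction(1, 2), Fraction(1, 2)]
likelihoods = [Fraction(3, 7), Fraction(5, 11)]
print(total_probability(priors, likelihoods))  # 34/77
```

Each term of the sum is one root-to-leaf path of the probability tree: choose a branch $E_i$, then observe $A$ along it.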

Bayes' Theorem: $P(E_k|A) = \frac{P(E_k) \cdot P(A|E_k)}{\sum_{i=1}^{n} P(E_i) \cdot P(A|E_i)}$
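A small sketch of this formula (the helper name `bayes` is hypothetical) also makes the normalization visible: dividing each branch by the total probability forces the posteriors to sum to $1$.

```python
from fractions import Fraction

def bayes(priors, likelihoods, k):
    """P(E_k|A) = P(E_k)P(A|E_k) / sum_i P(E_i)P(A|E_i)."""
    evidence = sum(p * l for p, l in zip(priors, likelihoods))
    return (priors[k] * likelihoods[k]) / evidence

# Same two-bag numbers as in Problem 1 below.
priors = [Fraction(1, 2), Fraction(1, 2)]
likelihoods = [Fraction(3, 7), Fraction(5, 11)]
posteriors = [bayes(priors, likelihoods, k) for k in range(2)]
print(posteriors)       # [Fraction(33, 68), Fraction(35, 68)]
print(sum(posteriors))  # 1
```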

Partition Conditions: $P(E_i) > 0$ for all $i$, and $\sum P(E_i) = 1$

💡Examples

Problem 1:

Bag I contains 3 red and 4 black balls while another Bag II contains 5 red and 6 black balls. One ball is drawn at random from one of the bags and it is found to be red. Find the probability that it was drawn from Bag II.

Solution:

Let $E_1$ be the event of choosing Bag I, $E_2$ be the event of choosing Bag II, and $A$ be the event of drawing a red ball.

1. Probabilities of choosing the bags: $P(E_1) = \frac{1}{2}$ and $P(E_2) = \frac{1}{2}$.

2. Conditional probabilities of drawing a red ball:
$P(A|E_1) = \frac{3}{7}$ (3 red out of 7 total in Bag I)
$P(A|E_2) = \frac{5}{11}$ (5 red out of 11 total in Bag II)

3. Using Bayes' Theorem to find $P(E_2|A)$:
$P(E_2|A) = \frac{P(E_2)P(A|E_2)}{P(E_1)P(A|E_1) + P(E_2)P(A|E_2)}$
$P(E_2|A) = \frac{\frac{1}{2} \cdot \frac{5}{11}}{(\frac{1}{2} \cdot \frac{3}{7}) + (\frac{1}{2} \cdot \frac{5}{11})}$
$P(E_2|A) = \frac{\frac{5}{22}}{\frac{3}{14} + \frac{5}{22}} = \frac{\frac{5}{22}}{\frac{33+35}{154}} = \frac{5}{22} \cdot \frac{154}{68} = \frac{5 \cdot 7}{68} = \frac{35}{68}$.
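The hand computation can be checked with exact fractions (variable names here are illustrative only):

```python
from fractions import Fraction

# Bag I: 3 red of 7 balls; Bag II: 5 red of 11; each bag chosen with probability 1/2.
p_e1, p_e2 = Fraction(1, 2), Fraction(1, 2)
p_a_given_e1 = Fraction(3, 7)
p_a_given_e2 = Fraction(5, 11)

# Denominator: total probability of drawing red.
p_a = p_e1 * p_a_given_e1 + p_e2 * p_a_given_e2

# Bayes' Theorem: probability the red ball came from Bag II.
p_e2_given_a = (p_e2 * p_a_given_e2) / p_a
print(p_e2_given_a)  # 35/68
```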

Explanation:

This is a classic Bayes' Theorem problem. We define the 'causes' (choosing Bag I or Bag II) and the 'effect' (getting a red ball). We use the individual bag probabilities and the color ratios within them to reverse-calculate which bag was most likely used.

Problem 2:

A factory has three machines A, B, and C which produce 25%, 35%, and 40% of the items respectively. The percentage of defective items produced by them are 5%, 4%, and 2% respectively. An item is selected at random. What is the probability that it is defective?

Solution:

Let $E_1, E_2, E_3$ be the events that the item is produced by machines A, B, and C respectively. Let $D$ be the event that the item is defective.

1. Given probabilities:
$P(E_1) = 0.25, P(E_2) = 0.35, P(E_3) = 0.40$

2. Conditional probabilities of defects:
$P(D|E_1) = 0.05, P(D|E_2) = 0.04, P(D|E_3) = 0.02$

3. Using the Theorem of Total Probability:
$P(D) = P(E_1)P(D|E_1) + P(E_2)P(D|E_2) + P(E_3)P(D|E_3)$
$P(D) = (0.25 \times 0.05) + (0.35 \times 0.04) + (0.40 \times 0.02)$
$P(D) = 0.0125 + 0.0140 + 0.0080 = 0.0345$.
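As a quick check, the weighted sum can be computed with exact fractions (the machine labels as dictionary keys are just a convenient layout):

```python
from fractions import Fraction

# Machine production shares and defect rates, kept as exact fractions.
priors = {"A": Fraction(25, 100), "B": Fraction(35, 100), "C": Fraction(40, 100)}
defect = {"A": Fraction(5, 100), "B": Fraction(4, 100), "C": Fraction(2, 100)}

# Total probability of a defective item across all three machines.
p_d = sum(priors[m] * defect[m] for m in priors)
print(p_d, float(p_d))  # 69/2000 0.0345
```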

Explanation:

This problem requires the Theorem of Total Probability. Since the defective item could have come from any of the three machines, we sum the weighted probabilities of defects from each machine branch.