

How it Works:

In probability, an experiment is any process that generates well-defined outcomes. The set of all possible outcomes of an experiment is called the sample space, and each element of the sample space is known as a sample point. Since the sample space is made up of outcomes, sample point is simply another word for outcome. When calculating probability, it is often necessary to know how many possible outcomes there are. Sometimes an experiment has only a few outcomes, so it is easy to count them. When there are many possible outcomes, it is helpful to use one of the counting rules.

| Experiment | Outcomes |
| --- | --- |
| Toss a Coin | Head, Tail |
| Roll a Die | 1, 2, 3, 4, 5, 6 |
| Play a Game | Win, Lose, Tie |

There are three different counting rules that can be used to calculate the number of outcomes in an experiment. If the experiment is made up of several smaller experiments, the counting rule for multiple-step experiments is used. It says that when there are $k$ steps in an experiment, with $n_1$ outcomes in step one, $n_2$ in step two, and so on, the total number of outcomes is $(n_1)(n_2)\cdots(n_k)$. If the experiment involves selecting $n$ objects from a larger set of $N$ objects and the order of selection does not matter, the counting rule for combinations is used. If the order of selection does matter, the counting rule for permutations is used.

| Combinations | Permutations |
| --- | --- |
| $C^N_n = \dfrac{N!}{n!(N-n)!}$ | $P^N_n = \dfrac{N!}{(N-n)!}$ |
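As a quick sketch, all three counting rules can be evaluated with Python's standard `math` module. The step counts and the choose-from sizes below are made-up illustrations, not values from the text:

```python
from math import comb, perm, prod

# Multiple-step experiment: k steps with n_1, n_2, ..., n_k outcomes each.
# Hypothetical example: an outfit chosen from 3 shirts, 2 pants, 4 shoes.
steps = [3, 2, 4]
total_outcomes = prod(steps)    # (n_1)(n_2)(n_3) = 3 * 2 * 4 = 24

# Combinations: select n from N when order does not matter.
n_combinations = comb(49, 6)    # C(49, 6) = 13,983,816

# Permutations: select n from N when order matters.
n_permutations = perm(10, 3)    # P(10, 3) = 10 * 9 * 8 = 720

print(total_outcomes, n_combinations, n_permutations)
```

Note that for the same $N$ and $n$, the permutation count is always $n!$ times the combination count, since each unordered selection can be arranged in $n!$ orders.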

The probability of an outcome can be assigned in a few different ways. If it's reasonable to believe the outcomes are all equally likely, the classical method can be used. It says that the probability of each outcome is one divided by the number of possible outcomes. If historical data is available because the experiment has been repeated many times, the relative frequency method can be used. It says the probability of each outcome is equal to the proportion of times it has occurred, or its relative frequency. If the outcomes are not equally likely and there's little to no historical data available, we must rely on the subjective method. It says that the probability of each outcome is the individual's degree of belief that it will occur.

| Method | Probability of Outcome |
| --- | --- |
| Classical | 1 / number of outcomes |
| Relative Frequency | proportion of times the outcome has occurred |
| Subjective | degree of belief |
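The classical and relative frequency methods can be sketched directly in Python. The die is a standard classical example; the daily-sales counts are hypothetical data invented for illustration:

```python
from fractions import Fraction
from collections import Counter

# Classical method: a fair die has 6 equally likely outcomes,
# so each outcome gets probability 1 / 6.
p_classical = Fraction(1, 6)

# Relative frequency method: hypothetical counts of units sold per day,
# observed over 300 days. Each probability is the proportion of days.
observed_days = Counter({0: 54, 1: 117, 2: 72, 3: 42, 4: 15})
total_days = sum(observed_days.values())                      # 300
p_rel_freq = {units: Fraction(days, total_days)
              for units, days in observed_days.items()}

print(p_classical)       # 1/6
print(p_rel_freq[1])     # 117/300 = 39/100
```

Using `Fraction` keeps the probabilities exact, which makes it easy to verify that they sum to one, as the next paragraph requires.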

Regardless of which method is used to assign probability, it is important that two requirements are satisfied. The first requirement is that the probability of each outcome is between zero and one. The second requirement is that the sum of the probabilities equals one. In probability, the term "event" has a different meaning than it does in everyday life. An event is defined to be a collection of sample points (outcomes). To calculate the probability of an event, simply sum the probabilities of the outcomes that make up the event.
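A short sketch of both requirements and the event-probability rule, using a fair six-sided die (a standard example, assumed here for illustration):

```python
from fractions import Fraction

# Classical assignment for a fair six-sided die.
outcome_probs = {face: Fraction(1, 6) for face in range(1, 7)}

# Requirement 1: each probability is between zero and one.
assert all(0 <= p <= 1 for p in outcome_probs.values())
# Requirement 2: the probabilities sum to one.
assert sum(outcome_probs.values()) == 1

# Event A = "roll an even number" is the collection of sample points {2, 4, 6}.
# Its probability is the sum of the probabilities of those sample points.
event_a = {2, 4, 6}
p_a = sum(outcome_probs[point] for point in event_a)
print(p_a)  # 1/2
```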

Given an event $A$, the complement of $A$, written $A^c$, is the event containing all sample points not in $A$. Relationships in probability are often illustrated with a Venn diagram, in which a rectangle represents the sample space. Since an event and its complement together make up the entire sample space, their probabilities sum to one. That is, $P(A) + P(A^c) = 1$. So, given the probability of the complement of an event, we can calculate the probability of the event using $P(A) = 1 - P(A^c)$. Alternatively, given the probability of an event, we can calculate the probability of its complement using $P(A^c) = 1 - P(A)$.
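The complement rule is a one-liner in either direction. The rain probability below is a made-up subjective assignment, used only to show the arithmetic:

```python
from fractions import Fraction

# Complement rule: P(A) + P(A^c) = 1.
# Hypothetical subjective probability of rain tomorrow: P(A) = 3/10.
p_rain = Fraction(3, 10)        # P(A)
p_no_rain = 1 - p_rain          # P(A^c) = 1 - P(A) = 7/10
assert p_rain + p_no_rain == 1

# Going the other way: given P(A^c), recover P(A).
assert 1 - p_no_rain == p_rain  # P(A) = 1 - P(A^c)
print(p_no_rain)                # 7/10
```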

Given two events $A$ and $B$, the union of $A$ and $B$, written $A \cup B$, is the event containing all sample points in $A$, in $B$, or in both. On the other hand, the intersection of $A$ and $B$, written $A \cap B$, is the event containing all sample points in both $A$ and $B$. The addition law can be used to calculate the probability of the union of two events. It says that $P(A \cup B) = P(A) + P(B) - P(A \cap B)$. If the events don't have any sample points in common, they are said to be mutually exclusive. Since there are no sample points in the intersection, the probability of the intersection is zero. So, for mutually exclusive events, the addition law becomes simply $P(A \cup B) = P(A) + P(B)$.
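The addition law can be sketched with the classic deck-of-cards example (a standard 52-card deck, assumed here; it is not taken from the text above):

```python
from fractions import Fraction

# Draw one card from a standard 52-card deck.
p_heart = Fraction(13, 52)           # A: card is a heart
p_king = Fraction(4, 52)             # B: card is a king
p_heart_and_king = Fraction(1, 52)   # A ∩ B: the king of hearts

# Addition law: P(A ∪ B) = P(A) + P(B) - P(A ∩ B).
# Subtracting the intersection avoids double-counting the king of hearts.
p_heart_or_king = p_heart + p_king - p_heart_and_king
print(p_heart_or_king)               # 4/13  (i.e., 16/52)

# "Heart" and "spade" are mutually exclusive: no card is both,
# so the intersection is 0 and the law reduces to P(A) + P(B).
p_heart_or_spade = Fraction(13, 52) + Fraction(13, 52)
print(p_heart_or_spade)              # 1/2
```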

Sometimes, the probability of one event occurring is affected by whether another event has occurred. This is known as conditional probability and is written $P(A \mid B)$. It can be read as the probability of event $A$ occurring given that event $B$ has occurred, or simply the probability of $A$ given $B$. If event $A$'s probability isn't affected by event $B$, we say $A$ and $B$ are independent; in that case the conditional probabilities reduce to $P(A \mid B) = P(A)$ and $P(B \mid A) = P(B)$. By rearranging the terms in the conditional probability formula below, we can calculate the probability of the intersection: $P(A \cap B) = P(B)\,P(A \mid B)$ or $P(A \cap B) = P(A)\,P(B \mid A)$. If $A$ and $B$ are independent, this becomes $P(A \cap B) = P(A)\,P(B)$.

| Conditional Probability |
| --- |
| $P(A \mid B) = \dfrac{P(A \cap B)}{P(B)}$ |
| or |
| $P(B \mid A) = \dfrac{P(A \cap B)}{P(A)}$ |
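A minimal sketch of the conditional probability formula and the independence check, using hypothetical probabilities chosen so the events happen to be independent:

```python
from fractions import Fraction

# Hypothetical probabilities for two events A and B.
p_a = Fraction(1, 2)
p_b = Fraction(1, 3)
p_a_and_b = Fraction(1, 6)

# Conditional probability: P(A|B) = P(A ∩ B) / P(B).
p_a_given_b = p_a_and_b / p_b        # (1/6) / (1/3) = 1/2

# Since P(A|B) == P(A), knowing B occurred doesn't change A's probability,
# so A and B are independent and P(A ∩ B) = P(A) * P(B).
assert p_a_given_b == p_a
assert p_a_and_b == p_a * p_b
print(p_a_given_b)                   # 1/2
```

Changing `p_a_and_b` to any value other than $P(A)\,P(B)$ would make the first assertion fail, which is exactly the test for dependence.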