## Introduction

The probability of a specified event is the chance or likelihood that it will occur. There are several ways of viewing probability. One would be **experimental** in nature, where we repeatedly conduct an experiment. Suppose we flipped a coin over and over and it came up heads about half of the time; we would expect that in the future, whenever we flipped the coin, it would turn up heads about half of the time. When a weather reporter says “there is a 10% chance of rain tomorrow,” she is basing that on prior evidence: out of all days with similar weather patterns, it has rained on 1 out of 10 of those days.

Another view would be **subjective** in nature, in other words an educated guess. If someone asked you the probability that the Seattle Mariners would win their next baseball game, it would be impossible to conduct an experiment where the same two teams played each other repeatedly, each time with the same starting lineup and starting pitchers, each starting at the same time of day on the same field under precisely the same conditions. Since there are so many variables to take into account, someone familiar with baseball and with the two teams involved might make an educated guess that there is a 75% chance they will win the game; that is, *if* the same two teams were to play each other repeatedly under identical conditions, the Mariners would win about three out of every four games. But this is just a guess, with no way to verify its accuracy, and depending upon how educated the educated guesser is, a subjective probability may not be worth very much.

We will return to the experimental and subjective probabilities from time to time, but in this course we will mostly be concerned with **theoretical** probability, which is defined as follows: Suppose there is a situation with *n* **equally likely** possible outcomes and that *m* of those *n* outcomes correspond to a particular event; then the **probability** of that event is defined as [latex]\frac{m}{n}[/latex].

## Basic Concepts

If we roll a die, pick a card from a deck of playing cards, or randomly select a person and observe their hair color, we are conducting an experiment or procedure. In probability, we look at the likelihood of different outcomes. We begin with some terminology.

### Events and Outcomes

The result of an experiment is called an **outcome**.

An **event** is any particular outcome or group of outcomes.

A **simple event** is an event that cannot be broken down further.

The **sample space** is the set of all possible simple events.

### Example 1

If we roll a standard 6-sided die, describe the sample space and some simple events.

#### Solution

The sample space is the set of all possible simple events: {1,2,3,4,5,6}

Some examples of simple events:

- We roll a 1
- We roll a 5

Some compound events:

- We roll a number bigger than 4
- We roll an even number

### Basic Probability

Given that all outcomes are equally likely, we can compute the probability of an event *E* using this formula:

[latex]\displaystyle{P}(E)=\frac{\text{Number of outcomes corresponding to the event }E}{\text{Total number of equally likely outcomes}}[/latex]
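For readers who like to check such computations, this formula can be sketched in Python; the function name `probability` and the sets used below are purely illustrative, and `fractions.Fraction` keeps the answer as an exact ratio:

```python
from fractions import Fraction

def probability(event_outcomes, sample_space):
    # Theoretical probability: favorable outcomes over
    # the total number of equally likely outcomes.
    return Fraction(len(event_outcomes), len(sample_space))

# Rolling a standard die: the event "roll an even number"
sample_space = {1, 2, 3, 4, 5, 6}
even = {2, 4, 6}
print(probability(even, sample_space))  # 1/2
```

This only makes sense when every outcome in the sample space is equally likely, exactly as the definition above requires.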

### Example 2

If we roll a 6-sided die, calculate

- *P*(rolling a 1)
- *P*(rolling a number bigger than 4)

#### Solution

Recall that the sample space is {1,2,3,4,5,6}

- There is one outcome corresponding to “rolling a 1,” so the probability is [latex]\frac{1}{6}[/latex]
- There are two outcomes bigger than a 4, so the probability is [latex]\frac{2}{6}=\frac{1}{3}[/latex]

Probabilities are essentially fractions, and can be reduced to lower terms like fractions.
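As an illustrative aside, Python's `Fraction` type reduces ratios automatically, mirroring the reduction of [latex]\frac{2}{6}[/latex] to [latex]\frac{1}{3}[/latex] above:

```python
from fractions import Fraction

die = range(1, 7)  # the sample space {1, 2, 3, 4, 5, 6}
bigger_than_4 = [x for x in die if x > 4]  # the outcomes 5 and 6

# Fraction(2, 6) is stored in lowest terms automatically
p = Fraction(len(bigger_than_4), len(die))
print(p)  # 1/3
```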

### Example 3

Let’s say you have a bag with 20 cherries, 14 sweet and 6 sour. If you pick a cherry at random, what is the probability that it will be sweet?

#### Solution

There are 20 possible cherries that could be picked, so the number of possible outcomes is 20. Of these 20 possible outcomes, 14 are favorable (sweet), so the probability that the cherry will be sweet is [latex]\frac{14}{20}=\frac{7}{10}[/latex].

There is one potential complication to this example, however. It must be assumed that the probability of picking any of the cherries is the same as the probability of picking any other. This wouldn’t be true if (let us imagine) the sweet cherries are smaller than the sour ones. (The sour cherries would come to hand more readily when you sampled from the bag.) Let us keep in mind, therefore, that when we assess probabilities in terms of the ratio of favorable to all potential cases, we rely heavily on the assumption of equal probability for all outcomes.
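To connect this with the experimental view from the introduction, here is a small, purely illustrative simulation of Example 3's bag. Because `random.choice` picks each cherry with equal probability, the experimental estimate should land near the theoretical [latex]\frac{7}{10}[/latex]:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
bag = ["sweet"] * 14 + ["sour"] * 6  # Example 3's bag of 20 cherries

trials = 100_000
sweet_count = sum(random.choice(bag) == "sweet" for _ in range(trials))
estimate = sweet_count / trials
print(round(estimate, 2))  # close to the theoretical 14/20 = 0.7
```

If the sour cherries really were easier to grab, the sampling would no longer be uniform and this estimate would drift away from 0.7, which is exactly the complication described above.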

### Try it Now

At some random moment, you look at your clock and note the minutes reading.

- What is probability the minutes reading is 15?
- What is the probability the minutes reading is 15 or less?

### Cards

A standard deck of 52 playing cards consists of four **suits** (hearts, spades, diamonds and clubs). Spades and clubs are black while hearts and diamonds are red. Each suit contains 13 cards, each of a different **rank**: an Ace (which in many games functions as both a low card and a high card), cards numbered 2 through 10, a Jack, a Queen and a King.

### Example 4

Compute the probability of randomly drawing one card from a deck and getting an Ace.

#### Solution

There are 52 cards in the deck and 4 Aces so [latex]P(\text{Ace})=\frac{4}{52}=\frac{1}{13}\approx{0.0769}[/latex]

We can also think of probabilities as percents: There is a 7.69% chance that a randomly selected card will be an Ace.

Notice that the smallest possible probability is 0—if there are no outcomes that correspond with the event. The largest possible probability is 1—if all possible outcomes correspond with the event.

### Certain and Impossible Events

An impossible event has a probability of 0.

A certain event has a probability of 1.

The probability of any event must be [latex]0\le{P}(E)\le{1}[/latex]

In the course of this chapter, *if you compute a probability and get an answer that is negative or greater than 1, you have made a mistake and should check your work*.

## Working with Events

### Complementary Events

Now let us examine the probability that an event does **not** happen. As in the previous section, consider the situation of rolling a six-sided die and first compute the probability of rolling a six: the answer is *P*(six) = [latex]\frac{1}{6}[/latex]. Now consider the probability that we do *not* roll a six: there are 5 outcomes that are not a six, so the answer is *P*(not a six) =[latex]\frac{5}{6}[/latex]. Notice that

[latex]P(\text{six})+P(\text{not a six})=\frac{1}{6}+\frac{5}{6}=\frac{6}{6}=1[/latex]

This is not a coincidence. Consider a generic situation with *n* possible outcomes and an event *E* that corresponds to *m* of these outcomes. Then the remaining *n* – *m* outcomes correspond to *E* not happening, thus

[latex]P(\text{not }E)=\frac{n-m}{n}=\frac{n}{n}-\frac{m}{n}=1-\frac{m}{n}=1-P(E)[/latex]

### Complement of an Event

The **complement** of an event is the event “*E* doesn’t happen”

The notation [latex]\bar{E}[/latex] is used for the complement of event *E*.

We can compute the probability of the complement using [latex]P(\bar{E})=1-P(E)[/latex]

Notice also that [latex]P(E)=1-P(\bar{E})[/latex]

### Example 5

If you pull a random card from a deck of playing cards, what is the probability it is not a heart?

#### Solution

There are 13 hearts in the deck, so [latex]P(\text{heart})=\frac{13}{52}=\frac{1}{4}[/latex].

The probability of *not* drawing a heart is the complement:

[latex]P(\text{not heart})=1-P(\text{heart})=1-\frac{1}{4}=\frac{3}{4}[/latex]
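The complement rule is easy to express in code; this sketch (names are our own) reproduces Example 5 with exact fractions:

```python
from fractions import Fraction

def complement(p):
    # P(not E) = 1 - P(E)
    return 1 - p

p_heart = Fraction(13, 52)          # 13 hearts out of 52 cards
p_not_heart = complement(p_heart)
print(p_not_heart)  # 3/4
```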

### Probability of Two Independent Events

### Example 6

Suppose we flipped a coin and rolled a die, and wanted to know the probability of getting a head on the coin and a 6 on the die.

#### Solution

We could list all possible outcomes: {H1,H2,H3,H4,H5,H6,T1,T2,T3,T4,T5,T6}.

Notice there are 2 · 6 = 12 total outcomes. Out of these, only 1 is the desired outcome, so the probability is [latex]\frac{1}{12}[/latex].

The prior example was looking at two independent events.

### Independent Events

Events A and B are **independent events** if the probability of Event B occurring is the same whether or not Event A occurs.

### Example 7

Are these events independent?

- A fair coin is tossed two times. The two events are (1) first toss is a head and (2) second toss is a head.
- The two events (1) “It will rain tomorrow in Houston” and (2) “It will rain tomorrow in Galveston” (a city near Houston).
- You draw a card from a deck, then draw a second card without replacing the first.

#### Solution

- The probability that a head comes up on the second toss is 1/2 regardless of whether or not a head came up on the first toss, so these events are independent.
- These events are not independent because it is more likely that it will rain in Galveston on days it rains in Houston than on days it does not.
- The probability of the second card being red depends on whether the first card is red or not, so these events are not independent.

When two events are independent, the probability of both occurring is the product of the probabilities of the individual events.

*P*(*A* and *B*) for Independent Events

If events *A* and *B* are independent, then the probability of both *A* and *B* occurring is

*P*(*A* and *B*) = *P*(*A*) · *P*(*B*)

where *P*(*A* and *B*) is the probability of events *A* and *B* both occurring, *P*(*A*) is the probability of event *A* occurring, and *P*(*B*) is the probability of event *B* occurring

If you look back at the coin and die example from earlier, you can see how the number of outcomes of the first event multiplied by the number of outcomes of the second event equals the total number of possible outcomes in the combined event.
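We can check the product rule against the coin-and-die enumeration directly; this sketch (illustrative, not part of the text) lists all 12 equally likely outcomes and compares counting with multiplying:

```python
from fractions import Fraction
from itertools import product

# All equally likely outcomes of flipping a coin and rolling a die
outcomes = list(product("HT", range(1, 7)))  # 2 * 6 = 12 pairs

# Counting: how many outcomes are a head AND a six?
favorable = [o for o in outcomes if o == ("H", 6)]
p_enumerated = Fraction(len(favorable), len(outcomes))

# Product rule for the independent events "heads" and "rolling a 6"
p_rule = Fraction(1, 2) * Fraction(1, 6)

print(p_enumerated == p_rule)  # True: both give 1/12
```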

### Example 8

In your drawer you have 10 pairs of socks, 6 of which are white, and 7 tee shirts, 3 of which are white. If you randomly reach in and pull out a pair of socks and a tee shirt, what is the probability both are white?

#### Solution

The probability of choosing a white pair of socks is [latex]\frac{6}{10}[/latex].

The probability of choosing a white tee shirt is [latex]\frac{3}{7}[/latex].

The probability of both being white is [latex]\frac{6}{10}\cdot\frac{3}{7}=\frac{18}{70}=\frac{9}{35}[/latex].

### Try it Now

A card is pulled from a deck of cards and noted. The card is then replaced, the deck is shuffled, and a second card is removed and noted. What is the probability that both cards are Aces?

The previous examples looked at the probability of *both* events occurring. Now we will look at the probability of *either* event occurring.

### Example 9

Suppose we flipped a coin and rolled a die, and wanted to know the probability of getting a head on the coin *or* a 6 on the die.

#### Solution

Here, there are still 12 possible outcomes: {H1,H2,H3,H4,H5,H6,T1,T2,T3,T4,T5,T6}

By simply counting, we can see that 7 of the outcomes have a head on the coin *or* a 6 on the die *or* both—we use *or* inclusively here (these 7 outcomes are H1, H2, H3, H4, H5, H6, T6), so the probability is [latex]\frac{7}{12}[/latex]. How could we have found this from the individual probabilities?

As we would expect, [latex]\frac{1}{2}[/latex] of these outcomes have a head, and [latex]\frac{1}{6}[/latex] of these outcomes have a 6 on the die. If we add these, [latex]\frac{1}{2}+\frac{1}{6}=\frac{6}{12}+\frac{2}{12}=\frac{8}{12}[/latex], which is not the correct probability. Looking at the outcomes we can see why: the outcome H6 would have been counted twice, since it contains both a head and a 6; the probability of both a head *and* rolling a 6 is [latex]\frac{1}{12}[/latex].

If we subtract out this double count, we have the correct probability: [latex]\frac{8}{12}-\frac{1}{12}=\frac{7}{12}[/latex].

*P*(*A* or *B*)

The probability of either *A* or *B* occurring (or both) is

*P*(*A* or *B*) = *P*(*A*) + *P*(*B*) – *P*(*A* and *B*)
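This rule can be verified against the coin-and-die example by brute-force counting; the sketch below (names are our own) computes each term of the formula from the 12 outcomes:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", range(1, 7)))  # all 12 coin/die pairs
heads = [o for o in outcomes if o[0] == "H"]
sixes = [o for o in outcomes if o[1] == 6]
both = [o for o in outcomes if o[0] == "H" and o[1] == 6]

n = len(outcomes)
# P(A or B) = P(A) + P(B) - P(A and B)
p_or = Fraction(len(heads), n) + Fraction(len(sixes), n) - Fraction(len(both), n)
print(p_or)  # 7/12
```

The subtracted term removes the double-counted outcome H6, just as in the worked example above.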

### Example 10

Suppose we draw one card from a standard deck. What is the probability that we get a Queen or a King?

#### Solution

There are 4 Queens and 4 Kings in the deck, hence 8 outcomes corresponding to a Queen or King out of 52 possible outcomes. Thus the probability of drawing a Queen or a King is:

[latex]P(\text{King or Queen})=\frac{8}{52}[/latex]

Note that in this case, there are no cards that are both a Queen and a King, so [latex]P(\text{King and Queen})=0[/latex]. Using our probability rule, we could have said:

[latex]P(\text{King or Queen})=P(\text{King})+P(\text{Queen})-P(\text{King and Queen})=\frac{4}{52}+\frac{4}{52}-0=\frac{8}{52}[/latex]

In the last example, the events were **mutually exclusive**, so *P*(*A* or *B*) = *P*(*A*) + *P*(*B*).

### Example 11

Suppose we draw one card from a standard deck. What is the probability that we get a red card or a King?

#### Solution

Half the cards are red, so [latex]P(\text{Red})=\frac{26}{52}[/latex]

There are four kings, so [latex]P(\text{King})=\frac{4}{52}[/latex]

There are two red kings, so [latex]P(\text{Red and King})=\frac{2}{52}[/latex]

We can then calculate

[latex]P(\text{Red or King})=P(\text{Red})+P(\text{King})-P(\text{Red and King})=\frac{26}{52}+\frac{4}{52}-\frac{2}{52}=\frac{28}{52}[/latex]

### Try it Now

In your drawer you have 10 pairs of socks, 6 of which are white, and 7 tee shirts, 3 of which are white. If you reach in and randomly grab a pair of socks and a tee shirt, what is the probability that at least one is white?

### Example 12

The table below shows the number of survey subjects who have received and not received a speeding ticket in the last year, and the color of their car. Find the probability that a randomly chosen person:

- Has a red car *and* got a speeding ticket
- Has a red car *or* got a speeding ticket

#### Solution

|  | Speeding Ticket | No Speeding Ticket | Total |
|---|---|---|---|
| Red car | 15 | 135 | 150 |
| Not red car | 45 | 470 | 515 |
| Total | 60 | 605 | 665 |

We can see that 15 people of the 665 surveyed had both a red car and got a speeding ticket, so the probability is [latex]\frac{15}{665}\approx{0.0226}[/latex].

Notice that having a red car and getting a speeding ticket are not independent events, so the probability of both of them occurring is not simply the product of probabilities of each one occurring.

For the second question, we could simply add up the numbers: 15 people with red cars and speeding tickets + 135 with red cars but no ticket + 45 with a ticket but no red car = 195 people. So the probability is [latex]\frac{195}{665}\approx{0.2932}[/latex].

We also could have found this probability by: *P*(had a red car) + *P*(got a speeding ticket) – *P*(had a red car and got a speeding ticket) = [latex]\frac{150}{665}+\frac{60}{665}-\frac{15}{665}=\frac{195}{665}[/latex].
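The same table arithmetic can be laid out in code; this illustrative sketch computes each marginal from the cell counts and applies the *or* rule:

```python
from fractions import Fraction

# Cell counts from the survey table
red_ticket, red_no_ticket = 15, 135
other_ticket, other_no_ticket = 45, 470
total = red_ticket + red_no_ticket + other_ticket + other_no_ticket  # 665

p_red = Fraction(red_ticket + red_no_ticket, total)      # 150/665
p_ticket = Fraction(red_ticket + other_ticket, total)    # 60/665
p_both = Fraction(red_ticket, total)                     # 15/665

# P(red or ticket) = P(red) + P(ticket) - P(red and ticket)
p_either = p_red + p_ticket - p_both
print(p_either)  # 39/133, the reduced form of 195/665
```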

## Conditional Probability

Often it is required to compute the probability of an event given that another event has occurred.

### Example 13

What is the probability that two cards drawn at random from a deck of playing cards will both be aces?

#### Solution

It might seem that you could use the formula for the probability of two independent events and simply multiply [latex]\frac{4}{52}\cdot\frac{4}{52}=\frac{1}{169}[/latex]. This would be incorrect, however, because the two events are not independent. If the first card drawn is an ace, then the probability that the second card is also an ace would be lower because there would only be three aces left in the deck.

Once the first card chosen is an ace, the probability that the second card chosen is also an ace is called the **conditional probability** of drawing an ace. In this case the “condition” is that the first card is an ace. Symbolically, we write this as: *P*(ace on second draw | an ace on the first draw).

The vertical bar “|” is read as “given,” so the above expression is short for “The probability that an ace is drawn on the second draw given that an ace was drawn on the first draw.” What is this probability? After an ace is drawn on the first draw, there are 3 aces out of 51 total cards left. This means that the conditional probability of drawing an ace after one ace has already been drawn is [latex]\frac{3}{51}=\frac{1}{17}[/latex].

Thus, the probability of both cards being aces is [latex]\frac{4}{52}\cdot\frac{3}{51}=\frac{12}{2652}=\frac{1}{221}[/latex].
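This two-step computation can be written out exactly; the sketch below (variable names are our own) multiplies the unconditional and conditional probabilities:

```python
from fractions import Fraction

p_first_ace = Fraction(4, 52)                 # 4 aces among 52 cards
p_second_ace_given_first = Fraction(3, 51)    # 3 aces left among 51 cards

# P(A and B) = P(A) * P(B | A)
p_both_aces = p_first_ace * p_second_ace_given_first
print(p_both_aces)  # 1/221
```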

### Conditional Probability

The probability the event *B* occurs, given that event *A* has happened, is represented as

*P*(*B* | *A*)

This is read as “the probability of *B* given *A*”

### Example 14

Find the probability that a die rolled shows a 6, given that a flipped coin shows a head.

#### Solution

These are two independent events, so the probability of the die rolling a 6 is [latex]\frac{1}{6}[/latex], regardless of the result of the coin flip.

### Example 15

The table below shows the number of survey subjects who have received and not received a speeding ticket in the last year, and the color of their car. Find the probability that a randomly chosen person:

- Has a speeding ticket *given* they have a red car
- Has a red car *given* they have a speeding ticket

#### Solution

|  | Speeding Ticket | No Speeding Ticket | Total |
|---|---|---|---|
| Red car | 15 | 135 | 150 |
| Not red car | 45 | 470 | 515 |
| Total | 60 | 605 | 665 |

- Since we know the person has a red car, we are only considering the 150 people in the first row of the table. Of those, 15 have a speeding ticket, so [latex]P(\text{ticket | red car})=\frac{15}{150}=\frac{1}{10}=0.1[/latex]
- Since we know the person has a speeding ticket, we are only considering the 60 people in the first column of the table. Of those, 15 have a red car, so [latex]P(\text{red car | ticket})=\frac{15}{60}=\frac{1}{4}=0.25[/latex]

Notice from the last example that P(B | A) is **not** equal to P(A | B).

These kinds of conditional probabilities are what insurance companies use to determine your insurance rates. They look at the conditional probability of you having an accident, given your age, your car, your car color, your driving history, etc., and price your policy based on that likelihood.

### Conditional Probability Formula

If Events *A* and *B* are not independent, then

*P*(*A* and *B*) = *P*(*A*) · *P*(*B* | *A*)

### Example 16

If you pull 2 cards out of a deck, what is the probability that both are spades?

#### Solution

The probability that the first card is a spade is [latex]\frac{13}{52}[/latex].

The probability that the second card is a spade, given that the first was a spade, is [latex]\frac{12}{51}[/latex], since there is one fewer spade in the deck and one fewer card in total.

The probability that both cards are spades is [latex]\frac{13}{52}\cdot\frac{12}{51}=\frac{156}{2652}\approx{0.0588}[/latex]

### Example 17

If you draw two cards from a deck, what is the probability that you will get the Ace of Diamonds and a black card?

#### Solution

You can satisfy this condition by having Case A or Case B, as follows:

- Case A: you can get the Ace of Diamonds first and then a black card or
- Case B: you can get a black card first and then the Ace of Diamonds.

Let’s calculate the probability of Case A. The probability that the first card is the Ace of Diamonds is [latex]\frac{1}{52}[/latex]. The probability that the second card is black given that the first card is the Ace of Diamonds is [latex]\frac{26}{51}[/latex] because 26 of the remaining 51 cards are black. The probability is therefore [latex]\frac{1}{52}\cdot\frac{26}{51}=\frac{1}{102}[/latex].

Now for Case B: the probability that the first card is black is [latex]\frac{26}{52}=\frac{1}{2}[/latex]. The probability that the second card is the Ace of Diamonds given that the first card is black is [latex]\frac{1}{51}[/latex]. The probability of Case B is therefore [latex]\frac{1}{2}\cdot\frac{1}{51}=\frac{1}{102}[/latex], the same as the probability of Case A.

Recall that the probability of A or B is *P*(A) + *P*(B) – *P*(A and B). In this problem, *P*(A and B) = 0 since the first card cannot be both the Ace of Diamonds and a black card. Therefore, the probability of Case A or Case B is [latex]\frac{1}{102}+\frac{1}{102}=\frac{2}{102}=\frac{1}{51}[/latex]. The probability that you will get the Ace of Diamonds and a black card when drawing two cards from a deck is [latex]\frac{1}{51}[/latex].
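As a check on this two-case reasoning, we can enumerate every ordered pair of distinct cards and count directly; this sketch (card encoding is our own) confirms the answer:

```python
from fractions import Fraction
from itertools import permutations

ranks = range(1, 14)   # 13 ranks, with 1 standing in for the Ace
suits = "SHDC"         # spades, hearts, diamonds, clubs
deck = [(r, s) for r in ranks for s in suits]  # 52 cards

ace_of_diamonds = (1, "D")

def is_black(card):
    return card[1] in "SC"  # spades and clubs are black

# Every ordered draw of two distinct cards is equally likely
draws = list(permutations(deck, 2))  # 52 * 51 = 2652 pairs
favorable = [d for d in draws
             if ace_of_diamonds in d and any(is_black(c) for c in d)]

print(Fraction(len(favorable), len(draws)))  # 1/51
```

Each case (Ace of Diamonds first, or a black card first) contributes 26 of the favorable ordered pairs, matching the [latex]\frac{1}{102}+\frac{1}{102}[/latex] breakdown.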

### Try it Now

In your drawer you have 10 pairs of socks, 6 of which are white. If you reach in and randomly grab two pairs of socks, what is the probability that both are white?

### Example 18

A home pregnancy test was given to women, then pregnancy was verified through blood tests. The following table shows the home pregnancy test results. Find

- *P*(not pregnant | positive test result)
- *P*(positive test result | not pregnant)

#### Solution

|  | Positive Test | Negative Test | Total |
|---|---|---|---|
| Pregnant | 70 | 4 | 74 |
| Not Pregnant | 5 | 14 | 19 |
| Total | 75 | 18 | 93 |

- Since we know the test result was positive, we're limited to the 75 women in the first column, of which 5 were not pregnant. *P*(not pregnant | positive test result) = [latex]\frac{5}{75}\approx{0.067}[/latex]
- Since we know the woman is not pregnant, we are limited to the 19 women in the second row, of which 5 had a positive test. *P*(positive test result | not pregnant) = [latex]\frac{5}{19}\approx{0.263}[/latex]

The second result is what is usually called a false positive: a positive test result when the woman is not actually pregnant.
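Both conditional probabilities come straight from the table's row and column totals; this illustrative sketch makes the two different denominators explicit:

```python
from fractions import Fraction

# Cell counts from the pregnancy-test table
pregnant_pos, pregnant_neg = 70, 4
not_pregnant_pos, not_pregnant_neg = 5, 14

# Condition on a positive test: the 75 women in the first column
p_not_pregnant_given_pos = Fraction(
    not_pregnant_pos, pregnant_pos + not_pregnant_pos)

# Condition on not being pregnant: the 19 women in the second row
p_pos_given_not_pregnant = Fraction(
    not_pregnant_pos, not_pregnant_pos + not_pregnant_neg)

print(p_not_pregnant_given_pos)  # 1/15, the reduced form of 5/75
print(p_pos_given_not_pregnant)  # 5/19
```

The same numerator, 5, is divided by two different totals, which is why P(B | A) and P(A | B) generally differ.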