## The Poisson Random Variable

The Poisson random variable is a discrete random variable that counts the number of times a certain event will occur in a specific interval.

### Learning Objectives

Apply the Poisson random variable to fields outside of mathematics

### Key Takeaways

#### Key Points

- The Poisson distribution predicts the degree of spread around a known average rate of occurrence.
- The distribution was first introduced by Siméon Denis Poisson (1781–1840) and published, together with his probability theory, in his work “Research on the Probability of Judgments in Criminal and Civil Matters” (1837).
- The Poisson random variable is the number of successes that result from a Poisson experiment.
- Given the mean number of successes (μ) that occur in a specified region, we can compute the Poisson probability based on the following formula: *P(x; μ) = (e^−μ)(μ^x) / x!*.

#### Key Terms

**factorial**: The result of multiplying all consecutive integers from 1 up to a given number. In equations, it is symbolized by an exclamation mark (!). For example, 5! = 1 * 2 * 3 * 4 * 5 = 120.

**Poisson distribution**: A discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space, if these events occur with a known average rate and independently of the time since the last event.

**disjoint**: Having no members in common; having an intersection equal to the empty set.

### The Poisson Distribution and Its History

The Poisson distribution is a discrete probability distribution. It expresses the probability of a given number of events occurring in a fixed interval of time and/or space, if these events occur with a known average rate and independently of the time since the last event. The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area, or volume.

For example: Let’s suppose that, on average, a person typically receives four pieces of mail per day. There will be a certain spread—sometimes a little more, sometimes a little less, once in a while nothing at all. Given only the average rate for a certain period of observation (e.g., pieces of mail per day, phone calls per hour, etc.), and assuming that the process that produces the event flow is essentially random, the Poisson distribution specifies how likely it is that the count will be 3, 5, 10, or any other number during one period of observation. It predicts the degree of spread around a known average rate of occurrence.
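The mail example can be sketched numerically with the Poisson probability formula given later in this section. This is a minimal illustration: the choice of counts to evaluate (0, 3, 4, 5, 10) is ours, not the text's.

```python
import math

def poisson_pmf(x, mu):
    # P(x; mu) = e^(-mu) * mu^x / x!
    return math.exp(-mu) * mu**x / math.factorial(x)

mu = 4  # average pieces of mail per day
# How likely each count is during one day of observation:
spread = {x: poisson_pmf(x, mu) for x in (0, 3, 4, 5, 10)}
```

Counts near the average rate of 4 come out most probable, while counts far from it (such as 10) are much less likely, which is exactly the "spread around a known average" the paragraph describes.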

The distribution was first introduced by Siméon Denis Poisson (1781–1840) and published, together with his probability theory, in 1837 in his work *Recherches sur la Probabilité des Jugements en Matière Criminelle et en Matière Civile* (“Research on the Probability of Judgments in Criminal and Civil Matters”). The work focused on certain random variables *N* that count, among other things, the number of discrete occurrences (sometimes called “events” or “arrivals”) that take place during a time interval of given length.

### Properties of the Poisson Random Variable

A Poisson experiment is a statistical experiment that has the following properties:

- The experiment results in outcomes that can be classified as successes or failures.
- The average number of successes (μ) that occurs in a specified region is known.
- The probability that a success will occur is proportional to the size of the region.
- The probability that a success will occur in an extremely small region is virtually zero.

Note that the specified region could take many forms: a length, an area, a volume, a period of time, etc.

The Poisson random variable, then, is the number of successes that result from a Poisson experiment, and the probability distribution of a Poisson random variable is called a Poisson distribution. Given the mean number of successes (μ) that occur in a specified region, we can compute the Poisson probability based on the following formula:

[latex]\text{P}(\text{x}; \mu ) = ((\text{e}^{-\mu }) (\mu ^\text{x})) / \text{x}![/latex],

where:

- e is a constant equal to approximately 2.71828 (the base of the natural logarithm system);
- μ is the mean number of successes that occur in a specified region;
- x is the actual number of successes that occur in a specified region;
- P(x; μ) is the Poisson probability that exactly x successes occur in a Poisson experiment, when the mean number of successes is μ; and
- x! is the factorial of x.
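The formula above translates directly into code. Here is a minimal sketch in Python using only the standard library:

```python
import math

def poisson_pmf(x, mu):
    """P(x; mu) = e^(-mu) * mu^x / x!
    The probability of exactly x successes when the mean is mu."""
    return math.exp(-mu) * mu**x / math.factorial(x)
```

For instance, `poisson_pmf(3, 2)` evaluates to approximately 0.180, and summing the function over all counts (a long enough prefix of them) yields 1, as a probability distribution must.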

The Poisson random variable satisfies the following conditions:

- The number of successes in two disjoint time intervals is independent.
- The probability of a success during a small time interval is proportional to the length of the interval.
- The mean of the Poisson distribution is equal to μ.
- The variance is also equal to μ.

Apart from disjoint time intervals, the Poisson random variable also applies to disjoint regions of space.
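The last two conditions, that the mean and the variance both equal μ, can be checked numerically from the probability formula. A small sketch (truncating the infinite sum, which is harmless here because the tail probabilities are vanishingly small):

```python
import math

def poisson_pmf(x, mu):
    return math.exp(-mu) * mu**x / math.factorial(x)

mu = 4
xs = range(100)  # truncated sum; the tail beyond 100 is negligible for mu = 4
mean = sum(x * poisson_pmf(x, mu) for x in xs)
variance = sum((x - mean) ** 2 * poisson_pmf(x, mu) for x in xs)
```

Both `mean` and `variance` come out equal to μ = 4 up to floating-point precision.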

### Example

The average number of homes sold by the Acme Realty company is 2 homes per day. What is the probability that exactly 3 homes will be sold tomorrow? This is a Poisson experiment in which we know the following:

- μ = 2; since 2 homes are sold per day, on average.
- x = 3; since we want to find the likelihood that 3 homes will be sold tomorrow.
- e = 2.71828; since e is a constant equal to approximately 2.71828.

We plug these values into the Poisson formula as follows:

[latex]\text{P}(\text{x}; \mu ) = ((\text{e}^{-\mu }) (\mu ^\text{x})) / \text{x}![/latex]

[latex]\text{P}(3; 2) = ((2.71828^{-2}) (2^3)) / 3![/latex]

[latex]\text{P}(3; 2) = ((0.13534) (8)) / 6[/latex]

[latex]\text{P}(3; 2) = 0.180[/latex]

Thus, the probability of selling 3 homes tomorrow is 0.180.
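The arithmetic of the worked example above can be verified in a couple of lines:

```python
import math

mu, x = 2, 3  # average of 2 homes sold per day; we want exactly 3
p = math.exp(-mu) * mu**x / math.factorial(x)  # (e^-2)(2^3)/3!
```

`p` evaluates to approximately 0.180, matching the hand calculation.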

### Applications of the Poisson Random Variable

Applications of the Poisson distribution can be found in many fields related to counting:

- electrical system example: telephone calls arriving in a system
- astronomy example: photons arriving at a telescope
- biology example: the number of mutations on a strand of DNA per unit length
- management example: customers arriving at a counter or call center
- civil engineering example: cars arriving at a traffic light
- finance and insurance example: number of losses/claims occurring in a given period of time

Examples of events that may be modelled as a Poisson distribution include:

- the number of soldiers killed by horse-kicks each year in each corps in the Prussian cavalry (this example was made famous by a book of Ladislaus Josephovich Bortkiewicz (1868–1931));
- the number of yeast cells used when brewing Guinness beer (this example was made famous by William Sealy Gosset (1876–1937));
- the number of goals in sports involving two competing teams;
- the number of deaths per year in a given age group; and
- the number of jumps in a stock price in a given time interval.

## The Hypergeometric Random Variable

A hypergeometric random variable is a discrete random variable characterized by a fixed number of trials in which the probability of success changes from draw to draw, because sampling is done without replacement.

### Learning Objectives

Contrast hypergeometric distribution and binomial distribution

### Key Takeaways

#### Key Points

- The hypergeometric distribution applies to sampling without replacement from a finite population whose elements can be classified into two mutually exclusive categories like pass/fail, male/female or employed/unemployed.
- As random selections are made from the population, each subsequent draw decreases the population, causing the probability of success to change with each draw.
- It is in contrast to the binomial distribution, which describes the probability of [latex]\text{k}[/latex] successes in [latex]\text{n}[/latex] draws with replacement.

#### Key Terms

**binomial distribution**: The discrete probability distribution of the number of successes in a sequence of $n$ independent yes/no experiments, each of which yields success with probability $p$.

**Bernoulli Trial**: An experiment whose outcome is random and can be either of two possible outcomes, “success” or “failure”.

**hypergeometric distribution**: A discrete probability distribution that describes the number of successes in a sequence of $n$ draws from a finite population without replacement.

The hypergeometric distribution is a discrete probability distribution that describes the probability of [latex]\text{k}[/latex] successes in [latex]\text{n}[/latex] draws *without* replacement from a finite population of size [latex]\text{N}[/latex] containing exactly [latex]\text{K}[/latex] successes. This is in contrast to the binomial distribution, which describes the probability of [latex]\text{k}[/latex] successes in [latex]\text{n}[/latex] draws *with* replacement.

The hypergeometric distribution applies to sampling without replacement from a finite population whose elements can be classified into two mutually exclusive categories like pass/fail, male/female or employed/unemployed. As random selections are made from the population, each subsequent draw decreases the population, causing the probability of success to change with each draw. The following conditions characterize the hypergeometric distribution:

- The result of each draw can be classified into one of two categories.
- The probability of a success changes on each draw.

A random variable follows the hypergeometric distribution if its probability mass function is given by:

[latex]\displaystyle \text{P}(\text{X}=\text{k}) = \frac{{{\text{K}}\choose{\text{k}}}{{\text{N}-\text{K}}\choose{\text{n}-\text{k}}}}{{{\text{N}}\choose{\text{n}}}}[/latex]

Where:

- [latex]\text{N}[/latex] is the population size,
- [latex]\text{K}[/latex] is the number of success states in the population,
- [latex]\text{n}[/latex] is the number of draws,
- [latex]\text{k}[/latex] is the number of successes, and
- [latex]\displaystyle {{\text{a}}\choose{\text{b}}}[/latex] is a binomial coefficient.
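The probability mass function above maps directly onto Python's built-in binomial coefficient. A minimal sketch (the card-deck example in the usage note is our illustration, not from the text):

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(X = k): exactly k successes in n draws without replacement
    from a population of size N that contains K successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)
```

For example, the probability of drawing 2 aces when dealing 2 cards from a standard 52-card deck is `hypergeom_pmf(2, 52, 4, 2)`, about 0.0045, and summing the function over all feasible `k` for fixed `N`, `K`, `n` yields 1.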

A hypergeometric probability distribution describes the outcomes of a hypergeometric experiment. The characteristics of a hypergeometric experiment are:

- You take samples from 2 groups.
- You are concerned with a group of interest, called the first group.
- You sample without replacement from the combined groups. For example, you want to choose a softball team from a combined group of 11 men and 13 women. The team consists of 10 players.
- Each pick is not independent, since sampling is without replacement. In the softball example, the probability of picking a woman first is [latex]\frac{13}{24}[/latex]. The probability of picking a man second is [latex]\frac{11}{23}[/latex] if a woman was picked first, and [latex]\frac{10}{23}[/latex] if a man was picked first. The probability of the second pick depends on what happened in the first pick.
- You are not dealing with Bernoulli Trials.
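The softball example fits the hypergeometric formula directly. A small sketch, assuming the women are the "group of interest" (the question of exactly how many women end up on the team is our illustration, beyond what the text asks):

```python
from math import comb

# 24 players in the combined pool: 13 women (the group of interest), 11 men.
N, K, n = 24, 13, 10

def team_pmf(k):
    """Probability the 10-player team contains exactly k women."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# The dependent sequential picks described above:
p_woman_first = 13 / 24
p_man_second_given_woman_first = 11 / 23
```

The probabilities `team_pmf(0)` through `team_pmf(10)` sum to 1, and team compositions near the pool's proportion of women (about 5 or 6 of 10) are far more likely than extreme ones.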