Sampling and Experimentation

Learning Outcomes

  • Identify methods for obtaining a random sample of the intended population of a study
  • Identify ineffective ways of obtaining a random sample from a population
  • Identify types of sample bias
  • Identify the differences between an observational study and an experiment
  • Identify the treatment in an experiment
  • Determine whether an experiment may have been influenced by confounding

As we mentioned previously, the first thing we should do before conducting a survey is to identify the population that we want to study. In this lesson, we will show you examples of how to identify the population in a study, and determine whether or not the study actually represents the intended population. We will discuss different techniques for random sampling that are intended to ensure a population is well represented in a sample.

We will also identify the difference between an observational study and an experiment, and ways experiments can be conducted. By the end of this lesson, we hope that you will also be confident in identifying when an experiment may have been affected by confounding or the placebo effect, and the methods that are employed to avoid them.

Sampling Methods and Bias

Selecting a Population

Suppose we are hired by a politician to determine the amount of support he has among the electorate should he decide to run for another term. What population should we study? Every person in the district? Not every person is eligible to vote, and regardless of how strongly someone likes or dislikes the candidate, their opinion won’t have much effect on whether he is re-elected if they are not able to vote.

What about eligible voters in the district? That might be better, but if someone is eligible to vote but does not register by the deadline, they won’t have any say in the election either. What about registered voters? Many people are registered but choose not to vote. What about “likely voters?”

This is the criterion used in much political polling, but it is sometimes difficult to define a “likely voter.” Is it someone who voted in the last election? In the last general election? In the last presidential election? Should we consider someone who just turned 18 a “likely voter?” They weren’t eligible to vote in the past, so how do we judge the likelihood that they will vote in the next election?

In November 1998, former professional wrestler Jesse “The Body” Ventura was elected governor of Minnesota. Up until right before the election, most polls showed he had little chance of winning. There were several contributing factors to the polls not reflecting the actual intent of the electorate:

  • Ventura was running on a third-party ticket and most polling methods are better suited to a two-candidate race.
  • Many respondents to polls may have been embarrassed to tell pollsters that they were planning to vote for a professional wrestler.
  • The mere fact that the polls showed Ventura had little chance of winning might have prompted some people to vote for him in protest to send a message to the major-party candidates.

But one of the major contributing factors was that Ventura recruited a substantial amount of support from young people, particularly college students, who had never voted before and who registered specifically to vote in the gubernatorial election. The polls did not deem these young people likely voters (since young people typically have lower rates of voter registration and election turnout), and so the polling samples were subject to sampling bias: they omitted a portion of the electorate that was weighted in favor of the winning candidate.

Sampling bias

A sampling method is biased if not every member of the population has an equal likelihood of being in the sample.

So even identifying the population can be a difficult job, but once we have identified the population, how do we choose an appropriate sample? Remember, although we would prefer to survey all members of the population, this is usually impractical unless the population is very small, so we choose a sample. There are many ways to sample a population, but there is one goal we need to keep in mind: we would like the sample to be representative of the population.

Returning to our hypothetical job as a political pollster, we would not anticipate very accurate results if we drew all of our samples from among the customers at a Starbucks, nor would we expect that a sample drawn entirely from the membership list of the local Elks club would provide a useful picture of district-wide support for our candidate.

One way to ensure that the sample has a reasonable chance of mirroring the population is to employ randomness. The most basic random method is simple random sampling.

Simple random sample

A random sample is one in which each member of the population has an equal probability of being chosen. A simple random sample is one in which every member of the population and any group of members has an equal probability of being chosen.

example

If we could somehow identify all likely voters in the state, put each of their names on a piece of paper, toss the slips into a (very large) hat and draw 1000 slips out of the hat, we would have a simple random sample.

In practice, computers are better suited for this sort of endeavor than millions of slips of paper and extremely large headgear.
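For a sense of how a computer might carry this out, here is a minimal Python sketch; the voter names and the size of the list are made up purely for illustration:

```python
import random

# Hypothetical list standing in for every likely voter in the state.
likely_voters = [f"Voter {i}" for i in range(1, 1_000_001)]

# random.sample draws without replacement, so every voter (and every
# possible group of 1000 voters) has the same chance of being chosen.
sample = random.sample(likely_voters, 1000)

print(sample[:5])  # a peek at the first few names drawn
```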

It is always possible, however, that even a random sample might end up not being totally representative of the population.  If we repeatedly take samples of 1000 people from among the population of likely voters in the state of Washington, some of these samples might tend to have a slightly higher percentage of Democrats (or Republicans) than does the general population; some samples might include more older people and some samples might include more younger people; etc. In most cases, this sampling variability is not significant.

Sampling variability

The natural variation of samples is called sampling variability.

This is unavoidable and expected in random sampling, and in most cases is not an issue.
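A short simulation can make this variability visible. In the sketch below, the population is made up so that exactly 50% of likely voters are Democrats; repeated samples of 1000 give percentages that hover around, but rarely land exactly on, that true value:

```python
import random

# Hypothetical population in which exactly 50% of likely voters are Democrats.
population = ["Democrat"] * 500_000 + ["Other"] * 500_000

# Draw several independent samples of 1000 and compare their percentages.
for trial in range(1, 6):
    sample = random.sample(population, 1000)
    pct = 100 * sample.count("Democrat") / len(sample)
    print(f"Sample {trial}: {pct:.1f}% Democrats")

# Each sample comes out near, but rarely exactly at, 50%;
# that spread is sampling variability.
```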

To help account for variability, pollsters might instead use a stratified sample.

Stratified sampling

In stratified sampling, a population is divided into a number of subgroups (or strata). Random samples are then taken from each subgroup with sample sizes proportional to the size of the subgroup in the population.

example

Suppose that in a particular state, previous data indicated that the electorate was composed of 39% Democrats, 37% Republicans and 24% independents. In a sample of 1000 people, pollsters would then expect to get about 390 Democrats, 370 Republicans and 240 independents. To accomplish this, they could randomly select 390 people from among those voters known to be Democrats, 370 from those known to be Republicans, and 240 from those with no party affiliation.
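The arithmetic above is just proportional allocation: 39% of 1000 is 390, and so on. Here is a minimal Python sketch of that selection, using hypothetical (made-up) voter lists for each group:

```python
import random

sample_size = 1000
# Party proportions assumed known from previous data (from the example above).
strata = {"Democrat": 0.39, "Republican": 0.37, "Independent": 0.24}

# Hypothetical lists of voters known to belong to each group.
voters_by_party = {
    "Democrat":    [f"D{i}" for i in range(100_000)],
    "Republican":  [f"R{i}" for i in range(100_000)],
    "Independent": [f"I{i}" for i in range(100_000)],
}

stratified_sample = []
for party, proportion in strata.items():
    n = round(sample_size * proportion)   # 390, 370, and 240 respectively
    stratified_sample += random.sample(voters_by_party[party], n)

print(len(stratified_sample))  # 1000
```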

Stratified sampling can also be used to select a sample with people in desired age groups, a specified mix ratio of males and females, etc. A variation on this technique is called quota sampling.

Quota sampling

Quota sampling is a variation on stratified sampling, wherein samples are collected in each subgroup until the desired quota is met.

example

Suppose the pollsters call people at random, but once they have met their quota of 390 Democrats, they only continue surveying people who do not identify themselves as Democrats.

You may have had the experience of being called by a telephone pollster who started by asking you your age, income, etc. and then thanked you for your time and hung up before asking any “real” questions. Most likely, they already had contacted enough people in your demographic group and were looking for people who were older or younger, richer or poorer, etc. Quota sampling is usually a bit easier than stratified sampling, but also does not ensure the same level of randomness.
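As a rough sketch of the quota idea, the snippet below simulates random calls (with made-up quotas and party affiliations) and keeps a respondent only if that respondent’s group still has room under its quota:

```python
import random

quotas = {"Democrat": 390, "Republican": 370, "Independent": 240}
counts = {party: 0 for party in quotas}
accepted = []

# Simulate calling people at random; each call reveals a party affiliation.
while sum(counts.values()) < sum(quotas.values()):
    party = random.choice(["Democrat", "Republican", "Independent"])
    if counts[party] < quotas[party]:
        accepted.append(party)   # record the respondent (affiliation only, for simplicity)
        counts[party] += 1
    # otherwise: thank the caller and hang up, since that quota is already met

print(counts)  # {'Democrat': 390, 'Republican': 370, 'Independent': 240}
```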

Another sampling method is cluster sampling, in which the population is divided into groups, and one or more groups are randomly selected to be in the sample.

Cluster sampling

In cluster sampling, the population is divided into subgroups (clusters), and a set of the subgroups is selected to be in the sample.

example

If a college wanted to survey its students, since students are already divided into classes, it could randomly select 10 classes and give the survey to all the students in those classes. This would be cluster sampling.
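Here is a small sketch of that cluster idea, using hypothetical class rosters: ten whole classes are chosen at random, and every student in those classes is surveyed.

```python
import random

# Hypothetical rosters: each class is a cluster of 25 students.
classes = {f"Class {i}": [f"Student {i}-{j}" for j in range(1, 26)]
           for i in range(1, 101)}

# Randomly select 10 entire classes, then include everyone in them.
chosen_classes = random.sample(list(classes), 10)
cluster_sample = [student for c in chosen_classes for student in classes[c]]

print(len(cluster_sample))  # 10 classes x 25 students = 250 students
```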

Other sampling methods include systematic sampling.

Systematic sampling

In systematic sampling, every nth member of the population is selected to be in the sample.

example

To select a sample using systematic sampling, a pollster calls every 100th name in the phone book.

Systematic sampling is not as random as a simple random sample (if your name is Albert Aardvark and your sister Alexis Aardvark is right after you in the phone book, there is no way you could both end up in the sample) but it can yield acceptable samples.
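A minimal sketch of systematic sampling, assuming a hypothetical alphabetized phone book stored as a list, simply takes every 100th name:

```python
# Hypothetical alphabetized phone book of 50,000 names.
phone_book = [f"Person {i}" for i in range(1, 50_001)]

# Take every 100th name. Notice that two adjacent names (like the
# Aardvark siblings) can never both end up in the sample.
systematic_sample = phone_book[99::100]   # the 100th, 200th, 300th, ... names

print(len(systematic_sample))  # 500
```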

The Worst Way to Sample

Perhaps the worst types of sampling methods are convenience samples and voluntary response samples.

Convenience sampling and voluntary response sampling

Convenience sampling is the practice of choosing a sample by selecting whoever happens to be convenient.

Voluntary response sampling is allowing the members of the sample to volunteer themselves.

example

A pollster stands on a street corner and interviews the first 100 people who agree to speak to him. Which sampling method is represented by this scenario?


A website has a survey asking readers to give their opinion on a tax proposal. Which sampling method is represented?

Watch the following video for an overview of all the sampling methods discussed so far.

Try It

In each case, indicate what sampling method was used

a. Every 4th person in the class was selected

b. A sample was selected to contain 25 men and 35 women

c. Viewers of a new show are asked to vote on the show’s website

d. A website randomly selects 50 of their customers to send a satisfaction survey to

e. To survey voters in a town, a polling company randomly selects 10 city blocks, and interviews everyone who lives on those blocks.

Problematic Sampling and Surveying

There are a number of ways that a study can be ruined before you even start collecting data. The first we have already explored: sampling or selection bias, which occurs when the sample is not representative of the population. One example of this is voluntary response bias, which is bias introduced by only collecting data from those who volunteer to participate. This is not the only potential source of bias.

Sources of bias

  • Sampling bias – when the sample is not representative of the population
  • Voluntary response bias – the sampling bias that often occurs when the sample consists of volunteers
  • Self-interest study – bias that can occur when the researchers have an interest in the outcome
  • Response bias – when the responder gives inaccurate responses for any reason
  • Perceived lack of anonymity – when the responder fears that giving an honest answer might negatively affect them
  • Loaded questions – when the question wording influences the responses
  • Non-response bias – when people who refuse to participate in the study can influence the validity of the outcome

examples

Consider a recent study which found that chewing gum may raise math grades in teenagers[1]. This study was conducted by the Wrigley Science Institute, a branch of the Wrigley chewing gum company. Identify the source of bias found in this example.


A survey asks people “when was the last time you visited your doctor?” What source of bias might this lead to?


A survey asks participants a question about their interactions with members of other races. Which source of bias might occur with this survey strategy?


An employer puts out a survey asking their employees if they have a drug abuse problem and need treatment help. Which source of bias may occur in this scenario?


A survey asks “do you support funding research of alternative energy sources to reduce our reliance on high-polluting fossil fuels?” Which source of bias may result from this survey?


A telephone poll asks the question “Do you often have time to relax and read a book?”, and 50% of the people called refused to answer the survey. Which source of bias is represented by this survey?

These problematic scenarios for statistics gathering are discussed further in the following video.

Try It

In each situation, identify a potential source of bias

a. A survey asks how many sexual partners a person has had in the last year

b. A radio station asks listeners to phone in their choice in a daily poll.

c. A substitute teacher wants to know how students in the class did on their last test. The teacher asks the 10 students sitting in the front row to state their latest test score.

d. High school students are asked if they have consumed alcohol in the last two weeks.

e. The Beef Council releases a study stating that consuming red meat poses little cardiovascular risk.

f. A poll asks “Do you support a new transportation tax, or would you prefer to see our public transportation system fall apart?”

Experiments

Observing vs. Acting

So far, we have primarily discussed observational studies – studies in which conclusions are drawn from observations of a sample or the population. In some cases these observations might be unsolicited, such as studying the percentage of cars that turn right at a red light even when there is a “no turn on red” sign. In other cases the observations are solicited, as in a survey or a poll.

In contrast, it is common to use experiments when exploring how subjects react to an outside influence. In an experiment, some kind of treatment is applied to the subjects and the results are measured and recorded.

Observational studies and experiments

  • An observational study is a study based on observations or measurements
  • An experiment is a study in which the effects of a treatment are measured

examples

Here are some examples of experiments:

A pharmaceutical company tests a new medicine for treating Alzheimer’s disease by administering the drug to 50 elderly patients with recent diagnoses. The treatment here is the new drug.


A gym tests out a new weight loss program by enlisting 30 volunteers to try out the program. The treatment here is the new program.


You test a new kitchen cleaner by buying a bottle and cleaning your kitchen. The new cleaner is the treatment.


A psychology researcher explores the effect of music on temperament by measuring people’s temperament while listening to different types of music. The music is the treatment.


These examples are discussed further in the following video.

Try It

Is each scenario describing an observational study or an experiment?

a. The weights of 30 randomly selected people are measured

b. Subjects are asked to do 20 jumping jacks, and then their heart rates are measured

c. Twenty coffee drinkers and twenty tea drinkers are given a concentration test

When conducting experiments, it is essential to isolate the treatment being tested.

example

Suppose a middle school (junior high) finds that their students are not scoring well on the state’s standardized math test. They decide to run an experiment to see if an alternate curriculum would improve scores. To run the test, they hire a math specialist to come in and teach a class using the new curriculum. To their delight, they see an improvement in test scores.

The difficulty with this scenario is that it is not clear whether the curriculum is responsible for the improvement, or whether the improvement is due to a math specialist teaching the class. This is called confounding – when it is not clear which factor or factors caused the observed effect. Confounding is the downfall of many experiments, though sometimes it is hidden.

Confounding

Confounding occurs when there are two or more potential variables that could have caused the outcome and it is not possible to determine which actually caused the result.

examples

A drug company study about a weight loss pill might report that people lost an average of 8 pounds while using their new drug. However, in the fine print you find a statement saying that participants were encouraged to also diet and exercise. It is not clear in this case whether the weight loss is due to the pill, to diet and exercise, or a combination of both. In this case confounding has occurred.


Researchers conduct an experiment to determine whether students will perform better on an arithmetic test if they listen to music during the test. They first give each student a test without music, then give a similar test while the student listens to music. In this case, a student might perform better on the second test, regardless of the music, simply because it was the second test and they were warmed up.

View the following for additional discussion of these examples.

There are a number of measures that can be introduced to help reduce the likelihood of confounding. The primary measure is to use a control group.

Control group

When using a control group, the participants are divided into two or more groups, typically a control group and a treatment group. The treatment group receives the treatment being tested; the control group does not receive the treatment.

Ideally, the groups are otherwise as similar as possible, isolating the treatment as the only potential source of difference between the groups. For this reason, the method of dividing groups is important. Some researchers attempt to ensure that the groups have similar characteristics (same number of females, same number of people over 50, etc.), but it is nearly impossible to control for every characteristic. Because of this, random assignment is very commonly used.
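Here is a small sketch of random assignment with a made-up list of participants: shuffle the list, then split it in half into a treatment group and a control group.

```python
import random

# Hypothetical list of 40 study participants.
participants = [f"Participant {i}" for i in range(1, 41)]

random.shuffle(participants)            # put the participants in random order
half = len(participants) // 2
treatment_group = participants[:half]   # receives the treatment being tested
control_group = participants[half:]     # does not receive the treatment

print(len(treatment_group), len(control_group))  # 20 20
```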

examples

To determine if a two day prep course would help high school students improve their scores on the SAT test, a group of students was randomly divided into two subgroups. The first group, the treatment group, was given a two day prep course. The second group, the control group, was not given the prep course. Afterwards, both groups were given the SAT.


A company testing a new plant food grows two crops of plants in adjacent fields, the treatment group receiving the new plant food and the control group not. The crop yield would then be compared. By growing them at the same time in adjacent fields, they are controlling for weather and other confounding factors.

Sometimes not giving the control group anything does not completely control for confounding variables. For example, suppose a medicine study is testing a new headache pill by giving the treatment group the pill and the control group nothing. If the treatment group shows improvement, we would not know whether it was due to the medicine in the pill or simply a response to having taken any pill. This is called the placebo effect.

Placebo effect

The placebo effect is when the effectiveness of a treatment is influenced by the patient’s perception of how effective the treatment will be, so a result might be seen even if the treatment is ineffectual.

example

A study of patients undergoing painful dental extractions found that patients who were told they were receiving a strong painkiller, while actually receiving a saltwater injection, reported as much pain relief as patients who received a dose of morphine.[3]

To control for the placebo effect, a placebo, or dummy treatment, is often given to the control group. This way, both groups are truly identical except for the specific treatment given.

Placebo and placebo-controlled experiments

  • A placebo is a dummy treatment given to control for the placebo effect.
  • An experiment that gives the control group a placebo is called a placebo-controlled experiment.

examples

In a study for a new medicine that is dispensed in a pill form, a sugar pill could be used as a placebo.


In a study on the effect of alcohol on memory, a non-alcoholic beer might be given to the control group as a placebo.


In a study of a frozen meal diet plan, the treatment group would receive the diet food, and the control could be given standard frozen meals stripped of their original packaging.

The following video walks through the controlled experiment scenarios, including the ones using placebos.

In some cases, it is more appropriate to compare to a conventional treatment than a placebo. For example, in a cancer research study, it would not be ethical to deny any treatment to the control group or to give a placebo treatment. In this case, the currently acceptable medicine would be given to the second group, called a comparison group in this case. In our SAT test example, the non-treatment group would most likely be encouraged to study on their own, rather than be asked to not study at all, to provide a meaningful comparison.

When using a placebo, it would defeat the purpose if the participant knew they were receiving the placebo.

Blind studies

  • A blind study is one in which the participant does not know whether they are receiving the treatment or a placebo.
  • A double-blind study is one in which those interacting with the participants don’t know who is in the treatment group and who is in the control group.

examples

In a study about anti-depression medicine, you would not want the psychological evaluator to know whether the patient is in the treatment or control group either, as it might influence their evaluation, so the experiment should be conducted as a double-blind study.


It should be noted that not every experiment needs a control group.

If a researcher is testing whether a new fabric can withstand fire, she simply needs to torch multiple samples of the fabric – there is no need for a control group.

These examples are demonstrated in the following video.

Try It now

To test a new lie detector, two groups of subjects are given the new test. One group is asked to answer all the questions truthfully, and the second group is asked to lie on one set of questions. The person administering the lie detector test does not know what group each subject is in.

Does this experiment have a control group? Is it blind, double-blind, or neither?


  1. Reuters. http://news.yahoo.com/s/nm/20090423/od_uk_nm/oukoe_uk_gum_learning. Retrieved 4/27/2009.
  2. Swartz, Norbert. http://www.umich.edu/~newsinfo/MT/01/Fal01/mt6f01.html. Retrieved 3/31/2009.
  3. Levine JD, Gordon NC, Smith R, Fields HL. (1981). Analgesic responses to morphine and placebo in individuals with postoperative pain. Pain. 10:379–89.