Measurement Error

Bias

Systematic, or biased, errors are errors that consistently yield results either higher or lower than the correct measurement.

Learning Objectives

Contrast random and systematic errors

Key Takeaways

Key Points

  • Systematic errors are biases in measurement that cause the mean of many separate measurements to differ significantly from the actual value of the measured attribute, always in the same direction.
  • A systematic error makes the measured value always smaller or larger than the true value, but not both. An experiment may involve more than one systematic error and these errors may nullify one another, but each alters the true value in one way only.
  • Accuracy (or validity) is a measure of the systematic error. If an experiment is accurate or valid, then the systematic error is very small.
  • Systematic errors include personal errors, instrumental errors, and method errors.

Key Terms

  • systematic error: an error that consistently yields results either higher or lower than the correct measurement; an accuracy error
  • random error: an error that scatters results both higher and lower than the true measurement; a precision error
  • Accuracy: the degree of closeness of measurements of a quantity to that quantity’s actual (true) value

Two Types of Errors

While conducting measurements in experiments, there are generally two different types of errors: random (or chance) errors and systematic (or biased) errors.

Every measurement has an inherent uncertainty. We therefore need to give some indication of the reliability of measurements and of the uncertainties in any results calculated from them. To interpret experimental data properly, the size of the systematic errors should be estimated and compared to the size of the random errors. Random errors reflect the limited precision of the equipment; systematic errors reflect how well the equipment was used or how well the experiment was controlled.

Low Accuracy, High Precision: This target shows an example of low accuracy (points are not close to the center of the target) but high precision (points are close together). In this case, there is more systematic error than random error.

High Accuracy, Low Precision: This target shows an example of high accuracy (points are all close to the center of the target) but low precision (points are not close together). In this case, there is more random error than systematic error.
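
The same contrast can be simulated numerically. A brief sketch (Python; the true value, offset, and noise levels are made-up parameters) compares a biased-but-precise instrument with an unbiased-but-noisy one:

    import random
    import statistics

    random.seed(2)
    true_value = 10.0  # hypothetical quantity being measured

    # Low accuracy, high precision: a large systematic offset, little scatter.
    biased = [true_value + 1.5 + random.gauss(0, 0.05) for _ in range(10)]

    # High accuracy, low precision: no offset, but a lot of scatter.
    noisy = [true_value + random.gauss(0, 1.0) for _ in range(10)]

    for name, data in [("biased", biased), ("noisy", noisy)]:
        print(name,
              round(statistics.mean(data) - true_value, 2),  # offset of the mean (accuracy)
              round(statistics.stdev(data), 2))              # spread (precision)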

Biased, or Systematic, Errors

Systematic errors are biases in measurement which lead to a situation wherein the mean of many separate measurements differs significantly from the actual value of the measured attribute. All measurements are prone to systematic errors, often of several different types. Sources of systematic error include imperfect calibration of measurement instruments, changes in the environment that interfere with the measurement process, and imperfect methods of observation.

A systematic error makes the measured value always smaller or larger than the true value, but not both. An experiment may involve more than one systematic error, and these errors may nullify one another, but each alters the true value in one way only. Accuracy (or validity) is a measure of the systematic error: if an experiment is accurate or valid, then the systematic error is very small. Accuracy is a measure of how well an experiment measures what it was trying to measure.

Accuracy is difficult to evaluate unless you have an idea of the expected value (e.g., a textbook value or a value calculated from a data book). Compare your experimental value to the literature value. If the difference is within the margin of error due to random errors, then the systematic errors are most likely smaller than the random errors. If the difference is larger, then you need to determine where the errors have occurred. When an accepted value is available for a result determined by experiment, the percent error can be calculated.
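
Percent error compares the two values directly:

[latex]\text{percent error} = \frac{\left|\text{experimental value} - \text{accepted value}\right|}{\text{accepted value}} \times 100\%[/latex]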

For example, consider an experimenter timing the period of a pendulum’s full swing. If the stopwatch or timer starts with 1 second already on the clock, then all of the results will be off by 1 second. If the experimenter repeats this experiment twenty times (starting at 1 second each time), the calculated average will be larger than the true period by that same 1 second: repeating the measurement reduces the random error but cannot remove the systematic offset.
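
A minimal simulation sketch (Python; the true period, offset, and timing noise are made-up values for illustration) shows how the random scatter averages out while the systematic offset survives:

    import random
    import statistics

    random.seed(0)
    true_period = 2.0   # hypothetical true period of the pendulum, in seconds
    offset = 1.0        # systematic error: the stopwatch starts at 1 s

    # Each trial carries the constant 1 s bias plus a small random timing error.
    readings = [true_period + offset + random.gauss(0, 0.05) for _ in range(20)]

    mean_reading = statistics.mean(readings)
    print(round(mean_reading, 2))  # roughly 3.0: the random errors average out,
                                   # but the full 1 s systematic offset remains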

Categories of Systematic Errors and How to Reduce Them

  1. Personal Errors: These errors result from ignorance, carelessness, prejudice, or physical limitations of the experimenter. This type of error can be greatly reduced if you are familiar with the experiment you are doing.
  2. Instrumental Errors: Instrumental errors are attributed to imperfections in the tools with which the analyst works. For example, volumetric equipment, such as burets, pipets, and volumetric flasks, frequently deliver or contain volumes slightly different from those indicated by their graduations. Calibration can eliminate this type of error.
  3. Method Errors: This type of error often results when you do not consider how to control an experiment properly. Ideally, an experiment should have only one manipulated (independent) variable, though this is often very difficult to accomplish. The more variables you can control in an experiment, the fewer method errors you will have.

Chance Error

Random, or chance, errors are errors that scatter results both higher and lower than the true measurement.

Learning Objectives

Explain how random errors occur within an experiment

Key Takeaways

Key Points

  • Random errors make the measured value sometimes smaller and sometimes larger than the true value; they are errors of precision.
  • Random errors occur by chance and cannot be avoided.
  • Random error is due to factors which we do not, or cannot, control.

Key Terms

  • systematic error: an error that consistently yields results either higher or lower than the correct measurement; an accuracy error
  • random error: an error that scatters results both higher and lower than the true measurement; a precision error
  • Precision: the ability of a measurement to be reproduced consistently

Chance, or Random, Errors

Random errors make the measured value sometimes smaller and sometimes larger than the true value; they are errors of precision. Chance alone determines whether a given value is smaller or larger. Reading the scales of a balance, graduated cylinder, thermometer, etc. produces random errors. In other words, you can weigh a dish on a balance and get a different answer each time simply due to random errors. They cannot be avoided; they are part of the measuring process. Uncertainties are measures of random errors. These are errors incurred as a result of making measurements on imperfect tools, which can have only a certain degree of precision.

Random error is due to factors that we cannot (or do not) control. It may be too expensive to control these factors, or we may be too ignorant of them to control them each time we measure. It may even be that whatever we are trying to measure is changing in time or is fundamentally probabilistic. Random error often occurs when instruments are pushed to the limits of their operating range. For example, it is common for digital balances to exhibit random error in their least significant digit. Three measurements of a single object might read something like 0.9111 g, 0.9110 g, and 0.9112 g.
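
A short sketch (Python; the true mass and noise level are assumed values) simulates this behavior: individual readings fluctuate in the last digit, while their mean stays close to the true value and their standard deviation quantifies the scatter:

    import random
    import statistics

    random.seed(1)
    true_mass = 0.9111  # hypothetical true mass, in grams

    # Simulate a balance whose last digit fluctuates by chance:
    # readings scatter both above and below the true mass.
    readings = [round(true_mass + random.gauss(0, 0.0001), 4) for _ in range(10)]

    print(readings)                               # e.g. values between 0.9109 and 0.9113
    print(round(statistics.mean(readings), 4))    # close to 0.9111
    print(round(statistics.stdev(readings), 5))   # the spread measures the random error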

Outliers

In statistics, an outlier is an observation that is numerically distant from the rest of the data.

Learning Objectives

Explain how to identify outliers in a distribution

Key Takeaways

Key Points

  • Outliers can occur by chance, by human error, or by equipment malfunction. They may be indicative of a non-normal distribution, or they may just be natural deviations that occur in a large sample.
  • Unless it can be ascertained that the deviation is not significant, it is not wise to ignore the presence of outliers.
  • There is no rigid mathematical definition of what constitutes an outlier. Often, however, we use the rule of thumb that any point that is located further than two standard deviations above or below the best fit line is an outlier.

Key Terms

  • outlier: a value in a statistical sample which does not fit a pattern that describes most other data points; specifically, a value that lies 1.5 IQR beyond the upper or lower quartile
  • best fit line: A line on a graph showing the general direction that a group of points seems to be heading.
  • regression line: A smooth curve fitted to the set of paired data in regression analysis; for linear regression the curve is a straight line.
  • interquartile range: The difference between the first and third quartiles; a robust measure of sample dispersion.

Outliers

In statistics, an outlier is an observation that is numerically distant from the rest of the data. Outliers can occur by chance in any distribution, but they are often indicative either of measurement error or that the population has a heavy-tailed distribution. In the former case, one wishes to discard them or use statistics that are robust to outliers, while in the latter case, they indicate that the distribution is skewed and that one should be very cautious in using tools or intuitions that assume a normal distribution.

When looking at regression lines that show where the data points fall, outliers are far away from the best fit line. They have large “errors,” where the “error” or residual is the vertical distance from the line to the point.

Outliers need to be examined closely. Sometimes, for some reason or another, they should not be included in the analysis of the data. It is possible that an outlier is a result of erroneous data. Other times, an outlier may hold valuable information about the population under study and should remain included in the data. The key is to carefully examine what causes a data point to be an outlier.

Identifying Outliers

We could guess at outliers by looking at a graph of the scatterplot and best fit line. However, we would like some guideline as to how far away a point needs to be in order to be considered an outlier. As a rough rule of thumb, we can flag any point that is located further than two standard deviations above or below the best fit line as an outlier, as illustrated below. The standard deviation used is the standard deviation of the residuals or errors.

Statistical outliers: This graph shows a best-fit line (solid blue) to fit the data points, as well as two extra lines (dotted blue) that are two standard deviations above and below the best fit line. Highlighted in orange are all the points, sometimes called “inliers”, that lie within this range; anything outside those lines—the dark-blue points—can be considered an outlier.

Note: There is no rigid mathematical definition of what constitutes an outlier; determining whether or not an observation is an outlier is ultimately a subjective exercise. The above rule is just one of many rules used. Another method often used is based on the interquartile range (IQR). For example, some people use the [latex]1.5 \cdot \text{IQR}[/latex] rule. This defines an outlier to be any observation that falls [latex]1.5 \cdot \text{IQR}[/latex] below the first quartile or any observation that falls [latex]1.5 \cdot \text{IQR}[/latex] above the third quartile.
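
As a concrete illustration, here is a brief sketch (Python; the sample data are made up) of the [latex]1.5 \cdot \text{IQR}[/latex] rule:

    import statistics

    def iqr_outliers(data):
        """Flag observations more than 1.5 * IQR outside the quartiles."""
        q1, _, q3 = statistics.quantiles(data, n=4)  # quartile cut points
        iqr = q3 - q1
        low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
        return [x for x in data if x < low or x > high]

    print(iqr_outliers([2, 3, 3, 4, 4, 4, 5, 5, 6, 42]))  # -> [42]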

If we are to use the standard deviation rule, we can do this visually in the scatterplot by drawing an extra pair of lines that are two standard deviations above and below the best fit line. Any data points that are outside this extra pair of lines are flagged as potential outliers. Or, we can do this numerically by calculating each residual and comparing it to twice the standard deviation. Graphing calculators make this process fairly simple.
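
The numerical version can be sketched as follows (Python; the data are hypothetical, with one point placed far off the trend). The code fits a least-squares line, computes each residual, and flags points lying more than two standard deviations from the line:

    import statistics

    # Hypothetical (x, y) data; the point (7, 25.0) sits far off the trend.
    xs = [1, 2, 3, 4, 5, 6, 7, 8]
    ys = [2.1, 3.9, 6.2, 8.0, 9.9, 12.1, 25.0, 16.1]

    # Fit a least-squares line y = slope * x + intercept (Python 3.10+).
    slope, intercept = statistics.linear_regression(xs, ys)

    # Residual = vertical distance from each point to the fitted line.
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    cutoff = 2 * statistics.stdev(residuals)

    outliers = [(x, y) for x, y, r in zip(xs, ys, residuals) if abs(r) > cutoff]
    print(outliers)  # [(7, 25.0)]: more than two standard deviations off the line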

Causes for Outliers

Outliers can have many anomalous causes. A physical apparatus for taking measurements may have suffered a transient malfunction. There may have been an error in data transmission or transcription. Outliers can also arise from changes in system behavior, fraudulent behavior, human error, instrument error, or simply natural deviation within a population. A sample may have been contaminated with elements from outside the population being examined. Alternatively, an outlier could be the result of a flaw in the assumed theory, calling for further investigation by the researcher.

Unless it can be ascertained that the deviation is not significant, it is ill-advised to ignore the presence of outliers. Outliers that cannot be readily explained demand special attention.