{"id":136,"date":"2017-11-16T17:42:34","date_gmt":"2017-11-16T17:42:34","guid":{"rendered":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/chapter\/13-2-some-basic-null-hypothesis-tests\/"},"modified":"2017-11-16T17:42:34","modified_gmt":"2017-11-16T17:42:34","slug":"13-2-some-basic-null-hypothesis-tests","status":"publish","type":"chapter","link":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/chapter\/13-2-some-basic-null-hypothesis-tests\/","title":{"raw":"13.2 Some Basic Null Hypothesis Tests","rendered":"13.2 Some Basic Null Hypothesis Tests"},"content":{"raw":"<div class=\"bcc-box bcc-highlight\" id=\"price_1.0-ch13_s02_n01\">\n        <h3 class=\"title\">Learning Objectives<\/h3>\n        <ol class=\"orderedlist\" id=\"price_1.0-ch13_s02_l01\"><li>Conduct and interpret one-sample, dependent-samples, and independent-samples <em class=\"emphasis\">t<\/em> tests.<\/li>\n            <li>Interpret the results of one-way, repeated measures, and factorial ANOVAs.<\/li>\n            <li>Conduct and interpret null hypothesis tests of Pearson\u2019s <em class=\"emphasis\">r<\/em>.<\/li>\n        <\/ol><\/div>\n    <p class=\"para editable block\" id=\"price_1.0-ch13_s02_p01\">In this section, we look at several common null hypothesis testing procedures. The emphasis here is on providing enough information to allow you to conduct and interpret the most basic versions. 
In most cases, the online statistical analysis tools mentioned in <a class=\"xref\" href=\"..\/12-1-describing-single-variables\/#price_1.0-ch12\">Chapter 12 \"Descriptive Statistics\"<\/a> will handle the computations\u2014as will programs such as Microsoft Excel and SPSS.<\/p>\n    <div class=\"section\" id=\"price_1.0-ch13_s02_s01\">\n        <h2 class=\"title editable block\">The <em class=\"emphasis\">t<\/em> Test<\/h2>\n        <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_p01\">As we have seen throughout this book, many studies in psychology focus on the difference between two means. The most common null hypothesis test for this type of statistical relationship is the <span class=\"margin_term\"><b><em class=\"emphasis\">t<\/em> test<\/b><\/span>. In this section, we look at three types of <em class=\"emphasis\">t<\/em> tests that are used for slightly different research designs: the one-sample <em class=\"emphasis\">t<\/em> test, the dependent-samples <em class=\"emphasis\">t<\/em> test, and the independent-samples <em class=\"emphasis\">t<\/em> test.<\/p>\n        <div class=\"section\" id=\"price_1.0-ch13_s02_s01_s01\">\n            <h2 class=\"title editable block\">One-Sample <em class=\"emphasis\">t<\/em> Test<\/h2>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p01\">The <span class=\"margin_term\"><b>one-sample <em class=\"emphasis\">t<\/em> test<\/b><\/span> is used to compare a sample mean (<em class=\"emphasis\">M<\/em>) with a hypothetical population mean (\u03bc<sub class=\"subscript\">0<\/sub>) that provides some interesting standard of comparison. The null hypothesis is that the mean for the population (\u00b5) is equal to the hypothetical population mean: \u03bc = \u03bc<sub class=\"subscript\">0<\/sub>. 
The alternative hypothesis is that the mean for the population is different from the hypothetical population mean: \u03bc \u2260 \u03bc<sub class=\"subscript\">0<\/sub>. To decide between these two hypotheses, we need to find the probability of obtaining the sample mean (or one more extreme) if the null hypothesis were true. But finding this <em class=\"emphasis\">p<\/em> value requires first computing a test statistic called <em class=\"emphasis\">t<\/em>. (A <span class=\"margin_term\"><b>test statistic<\/b><\/span> is a statistic that is computed only to help find the <em class=\"emphasis\">p<\/em> value.) The formula for <em class=\"emphasis\">t<\/em> is as follows:<\/p>\n\n\\[ t = \\frac{M - \\mu_{0}}{( \\frac{SD}{ \\sqrt{N}})} \\]\n\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p02\">Again, <em class=\"emphasis\">M<\/em> is the sample mean and \u00b5<sub class=\"subscript\">0<\/sub> is the hypothetical population mean of interest. <em class=\"emphasis\">SD<\/em> is the sample standard deviation and <em class=\"emphasis\">N<\/em> is the sample size.<\/p>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p03\">The reason the <em class=\"emphasis\">t<\/em> statistic (or any test statistic) is useful is that we know how it is distributed when the null hypothesis is true. As shown in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_f01\">Figure 13.1 \"Distribution of \"<\/a>, this distribution is unimodal and symmetrical, and it has a mean of 0. Its precise shape depends on a statistical concept called the degrees of freedom, which for a one-sample <em class=\"emphasis\">t<\/em> test is <em class=\"emphasis\">N<\/em> \u2212 1. (There are 24 degrees of freedom for the distribution shown in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_f01\">Figure 13.1 \"Distribution of \"<\/a>.) 
The important point is that knowing this distribution makes it possible to find the <em class=\"emphasis\">p<\/em> value for any <em class=\"emphasis\">t<\/em> score. Consider, for example, a <em class=\"emphasis\">t<\/em> score of +1.50 based on a sample of 25. The probability of a <em class=\"emphasis\">t<\/em> score at least this extreme is given by the proportion of <em class=\"emphasis\">t<\/em> scores in the distribution that are at least this extreme. For now, let us define <em class=\"emphasis\">extreme<\/em> as being far from zero in either direction. Thus the <em class=\"emphasis\">p<\/em> value is the proportion of <em class=\"emphasis\">t<\/em> scores that are +1.50 or above <em class=\"emphasis\">or<\/em> that are \u22121.50 or below\u2014a value that turns out to be .14.<\/p>\n            <div style=\"text-align: center; font-size: .8em; max-width: 497px;\" id=\"price_1.0-ch13_s02_s01_s01_f01\">\n                <p class=\"title\"><span class=\"title-prefix\">Figure 13.1<\/span> Distribution of <em class=\"emphasis\">t<\/em> Scores (With 24 Degrees of Freedom) When the Null Hypothesis Is True<\/p>\n                <a href=\"\/psychologyresearchmethods\/wp-content\/uploads\/sites\/171\/2015\/07\/7bffb75c83485bbf0a3f08783ad55bcb.jpg\"><img src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2714\/2017\/11\/16174230\/7bffb75c83485bbf0a3f08783ad55bcb.jpg\" alt=\"Distribution of t Scores (With 24 Degrees of Freedom) When the Null Hypothesis Is True. 
The red vertical lines represent the two-tailed critical values, and the green vertical lines the one-tailed critical values when &#x3B1; = .05\" style=\"max-width: 497px;\"\/><\/a><p class=\"para\">The red vertical lines represent the two-tailed critical values, and the green vertical lines the one-tailed critical values when \u03b1 = .05.<\/p>\n            <\/div>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p04\">Fortunately, we do not have to deal directly with the distribution of <em class=\"emphasis\">t<\/em> scores. If we were to enter our sample data and hypothetical mean of interest into one of the online statistical tools in <a class=\"xref\" href=\"..\/12-1-describing-single-variables\/#price_1.0-ch12\">Chapter 12 \"Descriptive Statistics\"<\/a> or into a program like SPSS (Excel does not have a one-sample <em class=\"emphasis\">t<\/em> test function), the output would include both the <em class=\"emphasis\">t<\/em> score and the <em class=\"emphasis\">p<\/em> value. At this point, the rest of the procedure is simple. If <em class=\"emphasis\">p<\/em> is less than .05, we reject the null hypothesis and conclude that the population mean differs from the hypothetical mean of interest. If <em class=\"emphasis\">p<\/em> is greater than .05, we retain the null hypothesis and conclude that there is not enough evidence to say that the population mean differs from the hypothetical mean of interest. (Again, technically, we conclude only that we do not have enough evidence to conclude that it <em class=\"emphasis\">does<\/em> differ.)<\/p>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p05\">If we were to compute the <em class=\"emphasis\">t<\/em> score by hand, we could use a table like <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_t01\">Table 13.2 \"Table of Critical Values of \"<\/a> to make the decision. This table does not provide actual <em class=\"emphasis\">p<\/em> values. 
Instead, it provides the <span class=\"margin_term\"><b>critical values<\/b><\/span> of <em class=\"emphasis\">t<\/em> for different degrees of freedom (<em class=\"emphasis\">df)<\/em> when \u03b1 is .05. For now, let us focus on the two-tailed critical values in the last column of the table. Each of these values should be interpreted as a pair of values: one positive and one negative. For example, the two-tailed critical values when there are 24 degrees of freedom are +2.064 and \u22122.064. These are represented by the red vertical lines in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_f01\">Figure 13.1 \"Distribution of \"<\/a>. The idea is that any <em class=\"emphasis\">t<\/em> score below the lower critical value (the left-hand red line in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_f01\">Figure 13.1 \"Distribution of \"<\/a>) is in the lowest 2.5% of the distribution, while any <em class=\"emphasis\">t<\/em> score above the upper critical value (the right-hand red line) is in the highest 2.5% of the distribution. This means that any <em class=\"emphasis\">t<\/em> score beyond the critical value in <em class=\"emphasis\">either<\/em> direction is in the most extreme 5% of <em class=\"emphasis\">t<\/em> scores when the null hypothesis is true and therefore has a <em class=\"emphasis\">p<\/em> value less than .05. Thus if the <em class=\"emphasis\">t<\/em> score we compute is beyond the critical value in either direction, then we reject the null hypothesis. 
If the <em class=\"emphasis\">t<\/em> score we compute is between the upper and lower critical values, then we retain the null hypothesis.<\/p>\n            <div class=\"table block\" id=\"price_1.0-ch13_s02_s01_s01_t01\">\n                <p class=\"title\"><span class=\"title-prefix\">Table 13.2<\/span> Table of Critical Values of <em class=\"emphasis\">t<\/em> When \u03b1 = .05<\/p>\n                <table cellpadding=\"0\" style=\"border-spacing: 0px;\"><thead><tr><th align=\"right\">\n                            <\/th><th colspan=\"2\" align=\"right\">Critical value<\/th>\n                        <\/tr><\/thead><tbody><tr><td align=\"right\"><em class=\"emphasis\">df<\/em><\/td>\n                            <td align=\"right\">One-tailed<\/td>\n                            <td align=\"right\">Two-tailed<\/td>\n                        <\/tr><tr><td align=\"right\">3<\/td>\n                            <td align=\"right\">2.353<\/td>\n                            <td align=\"right\">3.182<\/td>\n                        <\/tr><tr><td align=\"right\">4<\/td>\n                            <td align=\"right\">2.132<\/td>\n                            <td align=\"right\">2.776<\/td>\n                        <\/tr><tr><td align=\"right\">5<\/td>\n                            <td align=\"right\">2.015<\/td>\n                            <td align=\"right\">2.571<\/td>\n                        <\/tr><tr><td align=\"right\">6<\/td>\n                            <td align=\"right\">1.943<\/td>\n                            <td align=\"right\">2.447<\/td>\n                        <\/tr><tr><td align=\"right\">7<\/td>\n                            <td align=\"right\">1.895<\/td>\n                            <td align=\"right\">2.365<\/td>\n                        <\/tr><tr><td align=\"right\">8<\/td>\n                            <td align=\"right\">1.860<\/td>\n                            <td align=\"right\">2.306<\/td>\n                        <\/tr><tr><td align=\"right\">9<\/td>\n 
                           <td align=\"right\">1.833<\/td>\n                            <td align=\"right\">2.262<\/td>\n                        <\/tr><tr><td align=\"right\">10<\/td>\n                            <td align=\"right\">1.812<\/td>\n                            <td align=\"right\">2.228<\/td>\n                        <\/tr><tr><td align=\"right\">11<\/td>\n                            <td align=\"right\">1.796<\/td>\n                            <td align=\"right\">2.201<\/td>\n                        <\/tr><tr><td align=\"right\">12<\/td>\n                            <td align=\"right\">1.782<\/td>\n                            <td align=\"right\">2.179<\/td>\n                        <\/tr><tr><td align=\"right\">13<\/td>\n                            <td align=\"right\">1.771<\/td>\n                            <td align=\"right\">2.160<\/td>\n                        <\/tr><tr><td align=\"right\">14<\/td>\n                            <td align=\"right\">1.761<\/td>\n                            <td align=\"right\">2.145<\/td>\n                        <\/tr><tr><td align=\"right\">15<\/td>\n                            <td align=\"right\">1.753<\/td>\n                            <td align=\"right\">2.131<\/td>\n                        <\/tr><tr><td align=\"right\">16<\/td>\n                            <td align=\"right\">1.746<\/td>\n                            <td align=\"right\">2.120<\/td>\n                        <\/tr><tr><td align=\"right\">17<\/td>\n                            <td align=\"right\">1.740<\/td>\n                            <td align=\"right\">2.110<\/td>\n                        <\/tr><tr><td align=\"right\">18<\/td>\n                            <td align=\"right\">1.734<\/td>\n                            <td align=\"right\">2.101<\/td>\n                        <\/tr><tr><td align=\"right\">19<\/td>\n                            <td align=\"right\">1.729<\/td>\n                            <td align=\"right\">2.093<\/td>\n                   
     <\/tr><tr><td align=\"right\">20<\/td>\n                            <td align=\"right\">1.725<\/td>\n                            <td align=\"right\">2.086<\/td>\n                        <\/tr><tr><td align=\"right\">21<\/td>\n                            <td align=\"right\">1.721<\/td>\n                            <td align=\"right\">2.080<\/td>\n                        <\/tr><tr><td align=\"right\">22<\/td>\n                            <td align=\"right\">1.717<\/td>\n                            <td align=\"right\">2.074<\/td>\n                        <\/tr><tr><td align=\"right\">23<\/td>\n                            <td align=\"right\">1.714<\/td>\n                            <td align=\"right\">2.069<\/td>\n                        <\/tr><tr><td align=\"right\">24<\/td>\n                            <td align=\"right\">1.711<\/td>\n                            <td align=\"right\">2.064<\/td>\n                        <\/tr><tr><td align=\"right\">25<\/td>\n                            <td align=\"right\">1.708<\/td>\n                            <td align=\"right\">2.060<\/td>\n                        <\/tr><tr><td align=\"right\">30<\/td>\n                            <td align=\"right\">1.697<\/td>\n                            <td align=\"right\">2.042<\/td>\n                        <\/tr><tr><td align=\"right\">35<\/td>\n                            <td align=\"right\">1.690<\/td>\n                            <td align=\"right\">2.030<\/td>\n                        <\/tr><tr><td align=\"right\">40<\/td>\n                            <td align=\"right\">1.684<\/td>\n                            <td align=\"right\">2.021<\/td>\n                        <\/tr><tr><td align=\"right\">45<\/td>\n                            <td align=\"right\">1.679<\/td>\n                            <td align=\"right\">2.014<\/td>\n                        <\/tr><tr><td align=\"right\">50<\/td>\n                            <td align=\"right\">1.676<\/td>\n                            <td 
align=\"right\">2.009<\/td>\n                        <\/tr><tr><td align=\"right\">60<\/td>\n                            <td align=\"right\">1.671<\/td>\n                            <td align=\"right\">2.000<\/td>\n                        <\/tr><tr><td align=\"right\">70<\/td>\n                            <td align=\"right\">1.667<\/td>\n                            <td align=\"right\">1.994<\/td>\n                        <\/tr><tr><td align=\"right\">80<\/td>\n                            <td align=\"right\">1.664<\/td>\n                            <td align=\"right\">1.990<\/td>\n                        <\/tr><tr><td align=\"right\">90<\/td>\n                            <td align=\"right\">1.662<\/td>\n                            <td align=\"right\">1.987<\/td>\n                        <\/tr><tr><td align=\"right\">100<\/td>\n                            <td align=\"right\">1.660<\/td>\n                            <td align=\"right\">1.984<\/td>\n                        <\/tr><\/tbody><\/table><\/div>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p06\">Thus far, we have considered what is called a <span class=\"margin_term\"><b>two-tailed test<\/b><\/span>, where we reject the null hypothesis if the <em class=\"emphasis\">t<\/em> score for the sample is extreme in either direction. This makes sense when we believe that the sample mean might differ from the hypothetical population mean but we do not have good reason to expect the difference to go in a particular direction. But it is also possible to do a <span class=\"margin_term\"><b>one-tailed test<\/b><\/span>, where we reject the null hypothesis only if the <em class=\"emphasis\">t<\/em> score for the sample is extreme in one direction that we specify before collecting the data. 
This makes sense when we have good reason to expect the sample mean will differ from the hypothetical population mean in a particular direction.<\/p>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p07\">Here is how it works. Each one-tailed critical value in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_t01\">Table 13.2 \"Table of Critical Values of \"<\/a> can again be interpreted as a pair of values: one positive and one negative. A <em class=\"emphasis\">t<\/em> score below the lower critical value is in the lowest 5% of the distribution, and a <em class=\"emphasis\">t<\/em> score above the upper critical value is in the highest 5% of the distribution. For 24 degrees of freedom, these values are \u22121.711 and +1.711. (These are represented by the green vertical lines in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_f01\">Figure 13.1 \"Distribution of \"<\/a>.) However, for a one-tailed test, we must decide before collecting data whether we expect the sample mean to be lower than the hypothetical population mean, in which case we would use only the lower critical value, or we expect the sample mean to be greater than the hypothetical population mean, in which case we would use only the upper critical value. Notice that we still reject the null hypothesis when the <em class=\"emphasis\">t<\/em> score for our sample is in the most extreme 5% of the t scores we would expect if the null hypothesis were true\u2014so \u03b1 remains at .05. We have simply redefined <em class=\"emphasis\">extreme<\/em> to refer only to one tail of the distribution. The advantage of the one-tailed test is that critical values are less extreme. If the sample mean differs from the hypothetical population mean in the expected direction, then we have a better chance of rejecting the null hypothesis. 
The disadvantage is that if the sample mean differs from the hypothetical population mean in the unexpected direction, then there is no chance at all of rejecting the null hypothesis.<\/p>\n            <div class=\"section\" id=\"price_1.0-ch13_s02_s01_s01_s01\">\n                <h2 class=\"title editable block\">Example One-Sample <em class=\"emphasis\">t<\/em> Test<\/h2>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_s01_p01\">Imagine that a health psychologist is interested in the accuracy of college students\u2019 estimates of the number of calories in a chocolate chip cookie. He shows the cookie to a sample of 10 students and asks each one to estimate the number of calories in it. Because the actual number of calories in the cookie is 250, this is the hypothetical population mean of interest (\u00b5<sub class=\"subscript\">0<\/sub>). The null hypothesis is that the mean estimate for the population (\u03bc) is 250. Because he has no real sense of whether the students will underestimate or overestimate the number of calories, he decides to do a two-tailed test. Now imagine further that the participants\u2019 actual estimates are as follows:<\/p>\n                <span class=\"informalequation block\"><span class=\"mathphrase\">250, 280, 200, 150, 175, 200, 200, 220, 180, 250.<\/span><\/span>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_s01_p02\">The mean estimate for the sample (<em class=\"emphasis\">M<\/em>) is 210.50 calories and the standard deviation (<em class=\"emphasis\">SD<\/em>) is 39.75. 
The health psychologist can now compute the <em class=\"emphasis\">t<\/em> score for his sample:<\/p>\n\n\\[ t = \\frac{210.50 - 250}{ ( \\frac{39.75}{ \\sqrt{10}} ) } = -3.14 \\]\n\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_s01_p03\">If he enters the data into one of the online analysis tools or uses SPSS, it would also tell him that the two-tailed <em class=\"emphasis\">p<\/em> value for this <em class=\"emphasis\">t<\/em> score (with 10 \u2212 1 = 9 degrees of freedom) is .012. Because this is less than .05, the health psychologist would reject the null hypothesis and conclude that college students tend to underestimate the number of calories in a chocolate chip cookie. If he computes the <em class=\"emphasis\">t<\/em> score by hand, he could look at <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_t01\">Table 13.2 \"Table of Critical Values of \"<\/a> and see that the critical value of <em class=\"emphasis\">t<\/em> for a two-tailed test with 9 degrees of freedom is \u00b12.262. The fact that his <em class=\"emphasis\">t<\/em> score was more extreme than this critical value would tell him that his <em class=\"emphasis\">p<\/em> value is less than .05 and that he should reject the null hypothesis.<\/p>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_s01_p04\">Finally, if this researcher had gone into this study with good reason to expect that college students underestimate the number of calories, then he could have done a one-tailed test instead of a two-tailed test. The only thing this would change is the critical value, which would be \u22121.833. This slightly less extreme value would make it a bit easier to reject the null hypothesis. 
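As an illustrative aside (not part of the original text), both decision rules for this example can be checked in a few lines of Python, using only the standard library to recompute the t score from the listed estimates:

```python
from math import sqrt
from statistics import mean, stdev

estimates = [250, 280, 200, 150, 175, 200, 200, 220, 180, 250]
mu0 = 250  # hypothetical population mean: the cookie's actual calories

# t = (M - mu0) / (SD / sqrt(N)); stdev() uses the n - 1 denominator
t = (mean(estimates) - mu0) / (stdev(estimates) / sqrt(len(estimates)))

# Decision rules based on the df = 9 critical values in Table 13.2
print(abs(t) > 2.262)  # two-tailed test: reject the null hypothesis
print(t < -1.833)      # one-tailed test (expecting underestimates): reject
```

Either rule leads to rejection here, because the sample mean falls well below 250.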
However, if it turned out that college students overestimate the number of calories\u2014no matter how much they overestimate it\u2014the researcher would not have been able to reject the null hypothesis.<\/p>\n            <\/div>\n        <\/div>\n        <div class=\"section\" id=\"price_1.0-ch13_s02_s01_s02\">\n            <h2 class=\"title editable block\">The Dependent-Samples <em class=\"emphasis\">t<\/em> Test<\/h2>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_p01\">The <span class=\"margin_term\"><b>dependent-samples <em class=\"emphasis\">t<\/em> test<\/b><\/span> (sometimes called the paired-samples <em class=\"emphasis\">t<\/em> test) is used to compare two means for the same sample tested at two different times or under two different conditions. This makes it appropriate for pretest-posttest designs or within-subjects experiments. The null hypothesis is that the means at the two times or under the two conditions are the same in the population. The alternative hypothesis is that they are not the same. This test can also be one-tailed if the researcher has good reason to expect the difference goes in a particular direction.<\/p>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_p02\">It helps to think of the dependent-samples <em class=\"emphasis\">t<\/em> test as a special case of the one-sample <em class=\"emphasis\">t<\/em> test. However, the first step in the dependent-samples <em class=\"emphasis\">t<\/em> test is to reduce the two scores for each participant to a single <span class=\"margin_term\"><b>difference score<\/b><\/span> by taking the difference between them. At this point, the dependent-samples <em class=\"emphasis\">t<\/em> test becomes a one-sample <em class=\"emphasis\">t<\/em> test on the difference scores. 
The hypothetical population mean (\u00b5<sub class=\"subscript\">0<\/sub>) of interest is 0 because this is what the mean difference score would be if there were no difference on average between the two times or two conditions. We can now think of the null hypothesis as being that the mean difference score in the population is 0 (\u00b5<sub class=\"subscript\">0<\/sub> = 0) and the alternative hypothesis as being that the mean difference score in the population is not 0 (\u00b5<sub class=\"subscript\">0<\/sub> \u2260 0).<\/p>\n            <div class=\"section\" id=\"price_1.0-ch13_s02_s01_s02_s01\">\n                <h2 class=\"title editable block\">Example Dependent-Samples <em class=\"emphasis\">t<\/em> Test<\/h2>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_s01_p01\">Imagine that the health psychologist now knows that people tend to underestimate the number of calories in junk food and has developed a short training program to improve their estimates. To test the effectiveness of this program, he conducts a pretest-posttest study in which 10 participants estimate the number of calories in a chocolate chip cookie before the training program and then again afterward. Because he expects the program to increase the participants\u2019 estimates, he decides to do a one-tailed test. 
Now imagine further that the pretest estimates are<\/p>\n                <span class=\"informalequation block\"><span class=\"mathphrase\">230, 250, 280, 175, 150, 200, 180, 210, 220, 190<\/span><\/span>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_s01_p02\">and that the posttest estimates (for the same participants in the same order) are<\/p>\n                <span class=\"informalequation block\"><span class=\"mathphrase\">250, 260, 250, 200, 160, 200, 200, 180, 230, 240.<\/span><\/span>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_s01_p03\">The difference scores, then, are as follows:<\/p>\n                <span class=\"informalequation block\"><span class=\"mathphrase\">+20, +10, \u221230, +25, +10, 0, +20, \u221230, +10, +50.<\/span><\/span>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_s01_p04\">Note that it does not matter whether the first set of scores is subtracted from the second or the second from the first as long as it is done the same way for all participants. In this example, it makes sense to subtract the pretest estimates from the posttest estimates so that positive difference scores mean that the estimates went up after the training and negative difference scores mean the estimates went down.<\/p>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_s01_p05\">The mean of the difference scores is 8.50 with a standard deviation of 24.27. 
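As a quick check (an aside, not part of the original text), the difference scores and their summary statistics can be reproduced with Python's standard library:

```python
from statistics import mean, stdev

pretest = [230, 250, 280, 175, 150, 200, 180, 210, 220, 190]
posttest = [250, 260, 250, 200, 160, 200, 200, 180, 230, 240]

# Subtract pretest from posttest so positive scores mean estimates went up
diffs = [post - pre for pre, post in zip(pretest, posttest)]

print(diffs)                   # [20, 10, -30, 25, 10, 0, 20, -30, 10, 50]
print(mean(diffs))             # 8.5
print(round(stdev(diffs), 2))  # 24.27
```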
The health psychologist can now compute the <em class=\"emphasis\">t<\/em> score for his sample as follows:<\/p>\n\n\\[ t = \\frac{8.5 - 0}{( \\frac{24.27}{ \\sqrt{10}})} = 1.11 \\]\n\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_s01_p06\">If he enters the data into one of the online analysis tools or uses Excel or SPSS, it would tell him that the one-tailed <em class=\"emphasis\">p<\/em> value for this <em class=\"emphasis\">t<\/em> score (again with 10 \u2212 1 = 9 degrees of freedom) is .148. Because this is greater than .05, he would retain the null hypothesis and conclude that the training program does not increase people\u2019s calorie estimates. If he were to compute the <em class=\"emphasis\">t<\/em> score by hand, he could look at <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_t01\">Table 13.2 \"Table of Critical Values of \"<\/a> and see that the critical value of <em class=\"emphasis\">t<\/em> for a one-tailed test with 9 degrees of freedom is +1.833. (It is positive this time because he was expecting a positive mean difference score.) The fact that his <em class=\"emphasis\">t<\/em> score was less extreme than this critical value would tell him that his <em class=\"emphasis\">p<\/em> value is greater than .05 and that he should fail to reject the null hypothesis.<\/p>\n            <\/div>\n        <\/div>\n        <div class=\"section\" id=\"price_1.0-ch13_s02_s01_s03\">\n            <h2 class=\"title editable block\">The Independent-Samples <em class=\"emphasis\">t<\/em> Test<\/h2>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_p01\">The <span class=\"margin_term\"><b>independent-samples <em class=\"emphasis\">t<\/em> test<\/b><\/span> is used to compare the means of two separate samples (<em class=\"emphasis\">M<\/em><sub class=\"subscript\"><em class=\"emphasis\">1<\/em><\/sub> and <em class=\"emphasis\">M<\/em><sub class=\"subscript\"><em class=\"emphasis\">2<\/em><\/sub>). 
The two samples might have been tested under different conditions in a between-subjects experiment, or they could be preexisting groups in a correlational design (e.g., women and men, extroverts and introverts). The null hypothesis is that the means of the two populations are the same: \u00b5<sub class=\"subscript\">1<\/sub> = \u00b5<sub class=\"subscript\">2<\/sub>. The alternative hypothesis is that they are not the same: \u00b5<sub class=\"subscript\">1<\/sub> \u2260 \u00b5<sub class=\"subscript\">2<\/sub>. Again, the test can be one-tailed if the researcher has good reason to expect the difference goes in a particular direction.<\/p>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_p02\">The <em class=\"emphasis\">t<\/em> statistic here is a bit more complicated because it must take into account two sample means, two standard deviations, and two sample sizes. The formula is as follows:<\/p>\n\n\\[ t = \\frac{ M_{1} - M_{2} }{ \\sqrt{ \\frac{ SD_{1}^{2}}{n_{1}} + \\frac{ SD_{2}^{2}}{n_{2}}}} \\]\n\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_p03\">Notice that this formula includes squared standard deviations (the variances) that appear inside the square root symbol. Also, lowercase <em class=\"emphasis\">n<\/em><sub class=\"subscript\"><em class=\"emphasis\">1<\/em><\/sub> and <em class=\"emphasis\">n<\/em><sub class=\"subscript\"><em class=\"emphasis\">2<\/em><\/sub> refer to the sample sizes in the two groups or conditions (as opposed to capital <em class=\"emphasis\">N<\/em>, which generally refers to the total sample size). 
The only additional thing to know here is that there are <em class=\"emphasis\">N<\/em> \u2212 2 degrees of freedom for the independent-samples <em class=\"emphasis\">t<\/em> test.<\/p>\n            <div class=\"section\" id=\"price_1.0-ch13_s02_s01_s03_s01\">\n                <h2 class=\"title editable block\">Example Independent-Samples <em class=\"emphasis\">t<\/em> Test<\/h2>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_s01_p01\">Now the health psychologist wants to compare the calorie estimates of people who regularly eat junk food with the estimates of people who rarely eat junk food. He believes the difference could come out in either direction, so he decides to conduct a two-tailed test. He collects data from a sample of eight participants who eat junk food regularly and seven participants who rarely eat junk food. The data are as follows:<\/p>\n                <p class=\"para block\">\u00a0\u00a0<\/p>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_s01_p02\">Junk food eaters: 180, 220, 150, 85, 200, 170, 150, 190<\/p>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_s01_p03\">Non\u2013junk food eaters: 200, 240, 190, 175, 200, 300, 240<\/p>\n                <p class=\"para block\">\u00a0\u00a0<\/p>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_s01_p04\">The mean for the junk food eaters is 168.12 with a standard deviation of 41.23. The mean for the non\u2013junk food eaters is 220.71 with a standard deviation of 42.66. 
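These group statistics can be verified with a short sketch (an aside, not part of the original text) using Python's standard library:

```python
from statistics import mean, stdev

junk = [180, 220, 150, 85, 200, 170, 150, 190]  # n = 8
non_junk = [200, 240, 190, 175, 200, 300, 240]  # n = 7

print(round(mean(junk), 2), round(stdev(junk), 2))          # 168.12 41.23
print(round(mean(non_junk), 2), round(stdev(non_junk), 2))  # 220.71 42.66
```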
He can now compute his <em class=\"emphasis\">t<\/em> score as follows:<\/p>\n               \n\\[ t = \\frac{ 220.71 - 168.12}{ \\sqrt{ \\frac{41.23^{2}}{8} + \\frac{42.66^{2}}{7}}} = 2.42 \\]\n\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_s01_p05\">If he enters the data into one of the online analysis tools or into Excel or SPSS, the software will tell him that the two-tailed <em class=\"emphasis\">p<\/em> value for this <em class=\"emphasis\">t<\/em> score (with 15 \u2212 2 = 13 degrees of freedom) is .015. Because this is less than .05, the health psychologist would reject the null hypothesis and conclude that people who eat junk food regularly make lower calorie estimates than people who eat it rarely. If he were to compute the <em class=\"emphasis\">t<\/em> score by hand, he could look at <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_t01\">Table 13.2 \"Table of Critical Values of <em class=\"emphasis\">t<\/em>\"<\/a> and see that the critical value of <em class=\"emphasis\">t<\/em> for a two-tailed test with 13 degrees of freedom is \u00b12.160. The fact that his <em class=\"emphasis\">t<\/em> score was more extreme than this critical value would tell him that his <em class=\"emphasis\">p<\/em> value is less than .05 and that he should reject the null hypothesis.<\/p>\n            <\/div>\n        <\/div>\n    <\/div>\n    <div class=\"section\" id=\"price_1.0-ch13_s02_s02\">\n        <h2 class=\"title editable block\">The Analysis of Variance<\/h2>\n        <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_p01\">When there are more than two group or condition means to be compared, the most common null hypothesis test is the <span class=\"margin_term\"><b>analysis of variance (ANOVA)<\/b><\/span>. In this section, we look primarily at the <span class=\"margin_term\"><b>one-way ANOVA<\/b><\/span>, which is used for between-subjects designs with a single independent variable. 
We then briefly consider some other versions of the ANOVA that are used for within-subjects and factorial research designs.<\/p>\n        <div class=\"section\" id=\"price_1.0-ch13_s02_s02_s01\">\n            <h2 class=\"title editable block\">One-Way ANOVA<\/h2>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s01_p01\">The one-way ANOVA is used to compare the means of more than two samples (<em class=\"emphasis\">M<\/em><sub class=\"subscript\"><em class=\"emphasis\">1<\/em><\/sub>, <em class=\"emphasis\">M<\/em><sub class=\"subscript\"><em class=\"emphasis\">2<\/em><\/sub>\u2026<em class=\"emphasis\">M<\/em><sub class=\"subscript\"><em class=\"emphasis\">G<\/em><\/sub>) in a between-subjects design. The null hypothesis is that all the means are equal in the population: \u00b5<sub class=\"subscript\">1<\/sub> = \u00b5<sub class=\"subscript\">2<\/sub> =\u2026= \u00b5<sub class=\"subscript\">G<\/sub>. The alternative hypothesis is that not all the means in the population are equal.<\/p>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s01_p02\">The test statistic for the ANOVA is called <em class=\"emphasis\">F<\/em>. It is a ratio of two estimates of the population variance based on the sample data. One estimate of the population variance is called the <span class=\"margin_term\"><b>mean squares between groups (<em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">B<\/em><\/sub>)<\/b><\/span> and is based on the differences among the sample means. The other is called the <span class=\"margin_term\"><b>mean squares within groups (<em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub>)<\/b><\/span> and is based on the differences among the scores within each group. 
The <em class=\"emphasis\">F<\/em> statistic is the ratio of the <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">B<\/em><\/sub> to the <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub> and can therefore be expressed as follows:<\/p>\n            <span class=\"informalequation block\">F=MSBMSW.<\/span>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s01_p03\">Again, the reason that <em class=\"emphasis\">F<\/em> is useful is that we know how it is distributed when the null hypothesis is true. As shown in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s02_s01_f01\">Figure 13.2 \"Distribution of the \"<\/a>, this distribution is unimodal and positively skewed with values that cluster around 1. The precise shape of the distribution depends on both the number of groups and the sample size, and there is a degrees of freedom value associated with each of these. The between-groups degrees of freedom is the number of groups minus one: <em class=\"emphasis\">df<\/em><sub class=\"subscript\"><em class=\"emphasis\">B<\/em><\/sub> = (<em class=\"emphasis\">G<\/em> \u2212 1). The within-groups degrees of freedom is the total sample size minus the number of groups: <em class=\"emphasis\">df<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub> = <em class=\"emphasis\">N<\/em> \u2212 <em class=\"emphasis\">G<\/em>. 
Again, knowing the distribution of <em class=\"emphasis\">F<\/em> when the null hypothesis is true allows us to find the <em class=\"emphasis\">p<\/em> value.<\/p>\n            <div style=\"text-align: center;\"><div class=\"figure large large-height editable block\" id=\"price_1.0-ch13_s02_s02_s01_f01\"><div style=\"text-align: center; font-size: .8em; max-width: 497px;\">\n                <p class=\"title\"><span class=\"title-prefix\">Figure 13.2<\/span> Distribution of the <em class=\"emphasis\">F<\/em> Ratio With 2 and 37 Degrees of Freedom When the Null Hypothesis Is True<\/p>\n                <a href=\"\/psychologyresearchmethods\/wp-content\/uploads\/sites\/171\/2015\/07\/2a8c35892ee84b9a536f12ece8f6c540.jpg\"><img src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2714\/2017\/11\/16174232\/2a8c35892ee84b9a536f12ece8f6c540.jpg\" alt=\"Distribution of the F Ratio With 2 and 37 Degrees of Freedom When the Null Hypothesis Is True. The red vertical line represents the critical value when &#x3B1; is .05\" style=\"max-width: 497px;\"\/><\/a><p class=\"para\">The red vertical line represents the critical value when \u03b1 is .05.<\/p>\n            <\/div><\/div><\/div>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s01_p04\">The online tools in <a class=\"xref\" href=\"..\/12-1-describing-single-variables\/#price_1.0-ch12\">Chapter 12 \"Descriptive Statistics\"<\/a> and statistical software such as Excel and SPSS will compute <em class=\"emphasis\">F<\/em> and find the <em class=\"emphasis\">p<\/em> value. If <em class=\"emphasis\">p<\/em> is less than .05, then we reject the null hypothesis and conclude that there are differences among the group means in the population. If <em class=\"emphasis\">p<\/em> is greater than .05, then we retain the null hypothesis and conclude that there is not enough evidence to say that there are differences. 
In the unlikely event that we compute <em class=\"emphasis\">F<\/em> by hand, we can use a table of critical values like <a class=\"xref\" href=\"#price_1.0-ch13_s02_s02_s01_t01\">Table 13.3 \"Table of Critical Values of <em class=\"emphasis\">F<\/em>\"<\/a> to make the decision. The idea is that any <em class=\"emphasis\">F<\/em> ratio greater than the critical value has a <em class=\"emphasis\">p<\/em> value of less than .05. Thus if the <em class=\"emphasis\">F<\/em> ratio we compute is beyond the critical value, then we reject the null hypothesis. If the <em class=\"emphasis\">F<\/em> ratio we compute is less than the critical value, then we retain the null hypothesis.<\/p>\n            <div class=\"table block\" id=\"price_1.0-ch13_s02_s02_s01_t01\">\n                <p class=\"title\"><span class=\"title-prefix\">Table 13.3<\/span> Table of Critical Values of <em class=\"emphasis\">F<\/em> When \u03b1 = .05<\/p>\n                <table cellpadding=\"0\" style=\"border-spacing: 0px;\"><thead><tr><th colspan=\"4\" align=\"right\">\n<em class=\"emphasis\">df<\/em><sub class=\"subscript\"><em class=\"emphasis\">B<\/em><\/sub><\/th>\n                        <\/tr><\/thead><tbody><tr><td align=\"right\">\n<em class=\"emphasis\">df<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub><\/td>\n                            <td align=\"right\">2<\/td>\n                            <td align=\"right\">3<\/td>\n                            <td align=\"right\">4<\/td>\n                        <\/tr><tr><td align=\"right\">8<\/td>\n                            <td align=\"right\">4.459<\/td>\n                            <td align=\"right\">4.066<\/td>\n                            <td align=\"right\">3.838<\/td>\n                        <\/tr><tr><td align=\"right\">9<\/td>\n                            <td align=\"right\">4.256<\/td>\n                            <td align=\"right\">3.863<\/td>\n                            <td align=\"right\">3.633<\/td>\n                        <\/tr><tr><td 
align=\"right\">10<\/td>\n                            <td align=\"right\">4.103<\/td>\n                            <td align=\"right\">3.708<\/td>\n                            <td align=\"right\">3.478<\/td>\n                        <\/tr><tr><td align=\"right\">11<\/td>\n                            <td align=\"right\">3.982<\/td>\n                            <td align=\"right\">3.587<\/td>\n                            <td align=\"right\">3.357<\/td>\n                        <\/tr><tr><td align=\"right\">12<\/td>\n                            <td align=\"right\">3.885<\/td>\n                            <td align=\"right\">3.490<\/td>\n                            <td align=\"right\">3.259<\/td>\n                        <\/tr><tr><td align=\"right\">13<\/td>\n                            <td align=\"right\">3.806<\/td>\n                            <td align=\"right\">3.411<\/td>\n                            <td align=\"right\">3.179<\/td>\n                        <\/tr><tr><td align=\"right\">14<\/td>\n                            <td align=\"right\">3.739<\/td>\n                            <td align=\"right\">3.344<\/td>\n                            <td align=\"right\">3.112<\/td>\n                        <\/tr><tr><td align=\"right\">15<\/td>\n                            <td align=\"right\">3.682<\/td>\n                            <td align=\"right\">3.287<\/td>\n                            <td align=\"right\">3.056<\/td>\n                        <\/tr><tr><td align=\"right\">16<\/td>\n                            <td align=\"right\">3.634<\/td>\n                            <td align=\"right\">3.239<\/td>\n                            <td align=\"right\">3.007<\/td>\n                        <\/tr><tr><td align=\"right\">17<\/td>\n                            <td align=\"right\">3.592<\/td>\n                            <td align=\"right\">3.197<\/td>\n                            <td align=\"right\">2.965<\/td>\n                        <\/tr><tr><td 
align=\"right\">18<\/td>\n                            <td align=\"right\">3.555<\/td>\n                            <td align=\"right\">3.160<\/td>\n                            <td align=\"right\">2.928<\/td>\n                        <\/tr><tr><td align=\"right\">19<\/td>\n                            <td align=\"right\">3.522<\/td>\n                            <td align=\"right\">3.127<\/td>\n                            <td align=\"right\">2.895<\/td>\n                        <\/tr><tr><td align=\"right\">20<\/td>\n                            <td align=\"right\">3.493<\/td>\n                            <td align=\"right\">3.098<\/td>\n                            <td align=\"right\">2.866<\/td>\n                        <\/tr><tr><td align=\"right\">21<\/td>\n                            <td align=\"right\">3.467<\/td>\n                            <td align=\"right\">3.072<\/td>\n                            <td align=\"right\">2.840<\/td>\n                        <\/tr><tr><td align=\"right\">22<\/td>\n                            <td align=\"right\">3.443<\/td>\n                            <td align=\"right\">3.049<\/td>\n                            <td align=\"right\">2.817<\/td>\n                        <\/tr><tr><td align=\"right\">23<\/td>\n                            <td align=\"right\">3.422<\/td>\n                            <td align=\"right\">3.028<\/td>\n                            <td align=\"right\">2.796<\/td>\n                        <\/tr><tr><td align=\"right\">24<\/td>\n                            <td align=\"right\">3.403<\/td>\n                            <td align=\"right\">3.009<\/td>\n                            <td align=\"right\">2.776<\/td>\n                        <\/tr><tr><td align=\"right\">25<\/td>\n                            <td align=\"right\">3.385<\/td>\n                            <td align=\"right\">2.991<\/td>\n                            <td align=\"right\">2.759<\/td>\n                        <\/tr><tr><td 
align=\"right\">30<\/td>\n                            <td align=\"right\">3.316<\/td>\n                            <td align=\"right\">2.922<\/td>\n                            <td align=\"right\">2.690<\/td>\n                        <\/tr><tr><td align=\"right\">35<\/td>\n                            <td align=\"right\">3.267<\/td>\n                            <td align=\"right\">2.874<\/td>\n                            <td align=\"right\">2.641<\/td>\n                        <\/tr><tr><td align=\"right\">40<\/td>\n                            <td align=\"right\">3.232<\/td>\n                            <td align=\"right\">2.839<\/td>\n                            <td align=\"right\">2.606<\/td>\n                        <\/tr><tr><td align=\"right\">45<\/td>\n                            <td align=\"right\">3.204<\/td>\n                            <td align=\"right\">2.812<\/td>\n                            <td align=\"right\">2.579<\/td>\n                        <\/tr><tr><td align=\"right\">50<\/td>\n                            <td align=\"right\">3.183<\/td>\n                            <td align=\"right\">2.790<\/td>\n                            <td align=\"right\">2.557<\/td>\n                        <\/tr><tr><td align=\"right\">55<\/td>\n                            <td align=\"right\">3.165<\/td>\n                            <td align=\"right\">2.773<\/td>\n                            <td align=\"right\">2.540<\/td>\n                        <\/tr><tr><td align=\"right\">60<\/td>\n                            <td align=\"right\">3.150<\/td>\n                            <td align=\"right\">2.758<\/td>\n                            <td align=\"right\">2.525<\/td>\n                        <\/tr><tr><td align=\"right\">65<\/td>\n                            <td align=\"right\">3.138<\/td>\n                            <td align=\"right\">2.746<\/td>\n                            <td align=\"right\">2.513<\/td>\n                        <\/tr><tr><td 
align=\"right\">70<\/td>\n                            <td align=\"right\">3.128<\/td>\n                            <td align=\"right\">2.736<\/td>\n                            <td align=\"right\">2.503<\/td>\n                        <\/tr><tr><td align=\"right\">75<\/td>\n                            <td align=\"right\">3.119<\/td>\n                            <td align=\"right\">2.727<\/td>\n                            <td align=\"right\">2.494<\/td>\n                        <\/tr><tr><td align=\"right\">80<\/td>\n                            <td align=\"right\">3.111<\/td>\n                            <td align=\"right\">2.719<\/td>\n                            <td align=\"right\">2.486<\/td>\n                        <\/tr><tr><td align=\"right\">85<\/td>\n                            <td align=\"right\">3.104<\/td>\n                            <td align=\"right\">2.712<\/td>\n                            <td align=\"right\">2.479<\/td>\n                        <\/tr><tr><td align=\"right\">90<\/td>\n                            <td align=\"right\">3.098<\/td>\n                            <td align=\"right\">2.706<\/td>\n                            <td align=\"right\">2.473<\/td>\n                        <\/tr><tr><td align=\"right\">95<\/td>\n                            <td align=\"right\">3.092<\/td>\n                            <td align=\"right\">2.700<\/td>\n                            <td align=\"right\">2.467<\/td>\n                        <\/tr><tr><td align=\"right\">100<\/td>\n                            <td align=\"right\">3.087<\/td>\n                            <td align=\"right\">2.696<\/td>\n                            <td align=\"right\">2.463<\/td>\n                        <\/tr><\/tbody><\/table><\/div>\n            <div class=\"section\" id=\"price_1.0-ch13_s02_s02_s01_s01\">\n                <h2 class=\"title editable block\">Example One-Way ANOVA<\/h2>\n                <p class=\"para editable block\" 
id=\"price_1.0-ch13_s02_s02_s01_s01_p01\">Imagine that the health psychologist wants to compare the calorie estimates of psychology majors, nutrition majors, and professional dieticians. He collects the following data:<\/p>\n                <span class=\"informalequation block\"><span class=\"mathphrase\">Psych majors: 200, 180, 220, 160, 150, 200, 190, 200<\/span><\/span>\n                <span class=\"informalequation block\"><span class=\"mathphrase\">Nutrition majors: 190, 220, 200, 230, 160, 150, 200, 210, 195<\/span><\/span>\n                <span class=\"informalequation block\"><span class=\"mathphrase\">Dieticians: 220, 250, 240, 275, 250, 230, 200, 240<\/span><\/span>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s01_s01_p02\">The means are 187.50 (<em class=\"emphasis\">SD<\/em> = 23.14), 195.00 (<em class=\"emphasis\">SD<\/em> = 27.77), and 238.13 (<em class=\"emphasis\">SD<\/em> = 22.35), respectively. So it appears that dieticians made substantially more accurate estimates on average. The researcher would almost certainly enter these data into a program such as Excel or SPSS, which would compute <em class=\"emphasis\">F<\/em> for him and find the <em class=\"emphasis\">p<\/em> value. <a class=\"xref\" href=\"#price_1.0-ch13_s02_s02_s01_s01_t01\">Table 13.4 \"Typical One-Way ANOVA Output From Excel\"<\/a> shows the output of the one-way ANOVA function in Excel for these data. This is referred to as an ANOVA table. It shows that <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">B<\/em><\/sub> is 5,971.88, <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub> is 602.23, and their ratio, <em class=\"emphasis\">F<\/em>, is 9.92. The <em class=\"emphasis\">p<\/em> value is .0009. Because this is below .05, the researcher would reject the null hypothesis and conclude that the mean calorie estimates for the three groups are not the same in the population. 
Notice that the ANOVA table also includes the \u201csum of squares\u201d (<em class=\"emphasis\">SS<\/em>) for between groups and for within groups. These values are computed on the way to finding <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">B<\/em><\/sub> and <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub> but are not typically reported by the researcher. Finally, if the researcher were to compute the <em class=\"emphasis\">F<\/em> ratio by hand, he could look at <a class=\"xref\" href=\"#price_1.0-ch13_s02_s02_s01_t01\">Table 13.3 \"Table of Critical Values of <em class=\"emphasis\">F<\/em>\"<\/a> and see that the critical value of <em class=\"emphasis\">F<\/em> with 2 and 21 degrees of freedom is 3.467 (the same value in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s02_s01_s01_t01\">Table 13.4 \"Typical One-Way ANOVA Output From Excel\"<\/a> under <em class=\"emphasis\">F<\/em><sub class=\"subscript\">crit<\/sub>). The fact that his <em class=\"emphasis\">F<\/em> ratio was more extreme than this critical value would tell him that his <em class=\"emphasis\">p<\/em> value is less than .05 and that he should reject the null hypothesis.<\/p>\n                <div class=\"table block\" id=\"price_1.0-ch13_s02_s02_s01_s01_t01\">\n                    <p class=\"title\"><span class=\"title-prefix\">Table 13.4<\/span> Typical One-Way ANOVA Output From Excel<\/p>\n                    <table cellpadding=\"0\" style=\"border-spacing: 0px;\"><thead><tr><th colspan=\"7\">ANOVA<\/th>\n                            <\/tr><\/thead><tbody><tr><td><em class=\"emphasis\">Source of variation<\/em><\/td>\n                                <td><em class=\"emphasis\">SS<\/em><\/td>\n                                <td><em class=\"emphasis\">df<\/em><\/td>\n                                <td><em class=\"emphasis\">MS<\/em><\/td>\n                                <td><em class=\"emphasis\">F<\/em><\/td>\n                                
<td><em class=\"emphasis\">p-value<\/em><\/td>\n                                <td>\n<em class=\"emphasis\">F<\/em><sub class=\"subscript\">crit<\/sub><\/td>\n                            <\/tr><tr><td>Between groups<\/td>\n                                <td align=\"right\">11,943.75<\/td>\n                                <td align=\"right\">2<\/td>\n                                <td align=\"right\">5,971.875<\/td>\n                                <td align=\"right\">9.916234<\/td>\n                                <td align=\"right\">0.000928<\/td>\n                                <td align=\"right\">3.4668<\/td>\n                            <\/tr><tr><td>Within groups<\/td>\n                                <td align=\"right\">12,646.88<\/td>\n                                <td align=\"right\">21<\/td>\n                                <td align=\"right\">602.2321<\/td>\n                                <td>\n                                <\/td><td>\n                                <\/td><td>\n                            <\/td><\/tr><tr><td>Total<\/td>\n                                <td align=\"right\">24,590.63<\/td>\n                                <td align=\"right\">23<\/td>\n                                <td>\n                                <\/td><td>\n                                <\/td><td>\n                                <\/td><td>\n                            <\/td><\/tr><\/tbody><\/table><\/div>\n            <\/div>\n        <\/div>\n        <div class=\"section\" id=\"price_1.0-ch13_s02_s02_s02\">\n            <h2 class=\"title editable block\">ANOVA Elaborations<\/h2>\n            <div class=\"section\" id=\"price_1.0-ch13_s02_s02_s02_s01\">\n                <h2 class=\"title editable block\">Post Hoc Comparisons<\/h2>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s02_s01_p01\">When we reject the null hypothesis in a one-way ANOVA, we conclude that the group means are not all the same in the population. 
But this can indicate different things. With three groups, it can indicate that all three means are significantly different from each other. Or it can indicate that one of the means is significantly different from the other two, but the other two are not significantly different from each other. It could be, for example, that the mean calorie estimates of psychology majors, nutrition majors, and dieticians are all significantly different from each other. Or it could be that the mean for dieticians is significantly different from the means for psychology and nutrition majors, but the means for psychology and nutrition majors are not significantly different from each other. For this reason, statistically significant one-way ANOVA results are typically followed up with a series of <span class=\"margin_term\"><b>post hoc comparisons<\/b><\/span> of selected pairs of group means to determine which are different from which others.<\/p>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s02_s01_p02\">One approach to post hoc comparisons would be to conduct a series of independent-samples <em class=\"emphasis\">t<\/em> tests comparing each group mean to each of the other group means. But there is a problem with this approach. In general, if we conduct a <em class=\"emphasis\">t<\/em> test when the null hypothesis is true, we have a 5% chance of mistakenly rejecting the null hypothesis (see <a class=\"xref\" href=\"http:\/\/open.lib.umn.edu\/psychologyresearchmethods\/?p=452#price_1.0-ch13_s03\">Section 13.3 \"Additional Considerations\"<\/a> for more on such Type I errors). If we conduct several <em class=\"emphasis\">t<\/em> tests when the null hypothesis is true, the chance of mistakenly rejecting <em class=\"emphasis\">at least one<\/em> null hypothesis increases with each test we conduct. 
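The inflation of Type I error across multiple tests can be illustrated with a quick calculation (assuming, for simplicity, that the tests are independent):

```python
# Familywise error rate: the probability of at least one Type I error
# across k independent tests, each conducted at alpha = .05
alpha = 0.05
for k in (1, 3, 10):
    print(k, round(1 - (1 - alpha) ** k, 3))  # 1 → 0.05, 3 → 0.143, 10 → 0.401
```

With just three comparisons, the chance of at least one false rejection nearly triples.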
Thus researchers do not usually make post hoc comparisons using standard <em class=\"emphasis\">t<\/em> tests because there is too great a chance that they will mistakenly reject at least one null hypothesis. Instead, they use one of several modified <em class=\"emphasis\">t<\/em> test procedures\u2014among them the Bonferroni procedure, Fisher\u2019s least significant difference (LSD) test, and Tukey\u2019s honestly significant difference (HSD) test. The details of these approaches are beyond the scope of this book, but it is important to understand their purpose, which is to keep the risk of mistakenly rejecting a true null hypothesis to an acceptable level (close to 5%).<\/p>\n            <\/div>\n            <div class=\"section\" id=\"price_1.0-ch13_s02_s02_s02_s02\">\n                <h2 class=\"title editable block\">Repeated-Measures ANOVA<\/h2>\n                <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s02_s02_p01\">Recall that the one-way ANOVA is appropriate for between-subjects designs in which the means being compared come from separate groups of participants. It is not appropriate for within-subjects designs in which the means being compared come from the same participants tested under different conditions or at different times. This requires a slightly different approach, called the <span class=\"margin_term\"><b>repeated-measures ANOVA<\/b><\/span>. The basics of the repeated-measures ANOVA are the same as for the one-way ANOVA. The main difference is that measuring the dependent variable multiple times for each participant allows for a more refined measure of <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub>. Imagine, for example, that the dependent variable in a study is a measure of reaction time. Some participants will be faster or slower than others because of stable individual differences in their nervous systems, muscles, and other factors. 
In a between-subjects design, these stable individual differences would simply add to the variability within the groups and increase the value of <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub>. In a within-subjects design, however, these stable individual differences can be measured and subtracted from the value of <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub>. This lower value of <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub> means a higher value of <em class=\"emphasis\">F<\/em> and a more sensitive test.<\/p>\n            <\/div>\n        <\/div>\n        <div class=\"section\" id=\"price_1.0-ch13_s02_s02_s03\">\n            <h2 class=\"title editable block\">Factorial ANOVA<\/h2>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s03_p01\">When more than one independent variable is included in a factorial design, the appropriate approach is the <span class=\"margin_term\"><b>factorial ANOVA<\/b><\/span>. Again, the basics of the factorial ANOVA are the same as for the one-way and repeated-measures ANOVAs. The main difference is that it produces an <em class=\"emphasis\">F<\/em> ratio and <em class=\"emphasis\">p<\/em> value for each main effect and for each interaction. Returning to our calorie estimation example, imagine that the health psychologist tests the effect of participant major (psychology vs. nutrition) and food type (cookie vs. hamburger) in a factorial design. A factorial ANOVA would produce separate <em class=\"emphasis\">F<\/em> ratios and <em class=\"emphasis\">p<\/em> values for the main effect of major, the main effect of food type, and the interaction between major and food. 
Appropriate modifications must be made depending on whether the design is between subjects, within subjects, or mixed.<\/p>\n        <\/div>\n    <\/div>\n    <div class=\"section\" id=\"price_1.0-ch13_s02_s03\">\n        <h2 class=\"title editable block\">Testing Pearson\u2019s <em class=\"emphasis\">r<\/em>\n<\/h2>\n        <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s03_p01\">For relationships between quantitative variables, where Pearson\u2019s <em class=\"emphasis\">r<\/em> is used to describe the strength of those relationships, the appropriate null hypothesis test is a test of Pearson\u2019s <em class=\"emphasis\">r<\/em>. The basic logic is exactly the same as for other null hypothesis tests. In this case, the null hypothesis is that there is no relationship in the population. We can use the Greek lowercase rho (\u03c1) to represent the relevant parameter: \u03c1 = 0. The alternative hypothesis is that there is a relationship in the population: \u03c1 \u2260 0. As with the <em class=\"emphasis\">t<\/em> test, this test can be two-tailed if the researcher has no expectation about the direction of the relationship or one-tailed if the researcher expects the relationship to go in a particular direction.<\/p>\n        <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s03_p02\">It is possible to use Pearson\u2019s <em class=\"emphasis\">r<\/em> for the sample to compute a <em class=\"emphasis\">t<\/em> score with <em class=\"emphasis\">N<\/em> \u2212 2 degrees of freedom and then to proceed as for a <em class=\"emphasis\">t<\/em> test. However, because of the way it is computed, Pearson\u2019s <em class=\"emphasis\">r<\/em> can also be treated as its own test statistic. The online statistical tools and statistical software such as Excel and SPSS generally compute Pearson\u2019s <em class=\"emphasis\">r<\/em> and provide the <em class=\"emphasis\">p<\/em> value associated with that value of Pearson\u2019s <em class=\"emphasis\">r<\/em>. 
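The <em class="emphasis">r</em>-to-<em class="emphasis">t</em> conversion described above can be sketched as follows (a minimal standard-library illustration; the function name is mine):

```python
from math import sqrt

def r_to_t(r, n):
    """Convert a sample Pearson's r into a t score with n - 2 degrees of freedom."""
    return r * sqrt(n - 2) / sqrt(1 - r ** 2)

# For example, r = -.21 with N = 22 gives t of about -0.96, far less extreme
# than the two-tailed .05 critical value of about ±2.086 for df = 20
print(round(r_to_t(-0.21, 22), 2))  # → -0.96
```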
As always, if the <em class=\"emphasis\">p<\/em> value is less than .05, we reject the null hypothesis and conclude that there is a relationship between the variables in the population. If the <em class=\"emphasis\">p<\/em> value is greater than .05, we retain the null hypothesis and conclude that there is not enough evidence to say there is a relationship in the population. If we compute Pearson\u2019s <em class=\"emphasis\">r<\/em> by hand, we can use a table like <a class=\"xref\" href=\"#price_1.0-ch13_s02_s03_t01\">Table 13.5 \"Table of Critical Values of Pearson\u2019s <em class=\"emphasis\">r<\/em>\"<\/a>, which shows the critical values of <em class=\"emphasis\">r<\/em> for various sample sizes when \u03b1 is .05. A sample value of Pearson\u2019s <em class=\"emphasis\">r<\/em> that is more extreme than the critical value is statistically significant.<\/p>\n        <div class=\"table block\" id=\"price_1.0-ch13_s02_s03_t01\">\n            <p class=\"title\"><span class=\"title-prefix\">Table 13.5<\/span> Table of Critical Values of Pearson\u2019s <em class=\"emphasis\">r<\/em> When \u03b1 = .05<\/p>\n            <table cellpadding=\"0\" style=\"border-spacing: 0px;\"><thead><tr><th align=\"right\">\n                        <\/th><th colspan=\"2\" align=\"right\">Critical value of <em class=\"emphasis\">r<\/em>\n<\/th>\n                    <\/tr><\/thead><tbody><tr><td align=\"right\"><em class=\"emphasis\">N<\/em><\/td>\n                        <td align=\"right\">One-tailed<\/td>\n                        <td align=\"right\">Two-tailed<\/td>\n                    <\/tr><tr><td align=\"right\">5<\/td>\n                        <td align=\"right\">.805<\/td>\n                        <td align=\"right\">.878<\/td>\n                    <\/tr><tr><td align=\"right\">10<\/td>\n                        <td align=\"right\">.549<\/td>\n                        <td align=\"right\">.632<\/td>\n                    <\/tr><tr><td align=\"right\">15<\/td>\n                        <td 
align=\"right\">.441<\/td>\n                        <td align=\"right\">.514<\/td>\n                    <\/tr><tr><td align=\"right\">20<\/td>\n                        <td align=\"right\">.378<\/td>\n                        <td align=\"right\">.444<\/td>\n                    <\/tr><tr><td align=\"right\">25<\/td>\n                        <td align=\"right\">.337<\/td>\n                        <td align=\"right\">.396<\/td>\n                    <\/tr><tr><td align=\"right\">30<\/td>\n                        <td align=\"right\">.306<\/td>\n                        <td align=\"right\">.361<\/td>\n                    <\/tr><tr><td align=\"right\">35<\/td>\n                        <td align=\"right\">.283<\/td>\n                        <td align=\"right\">.334<\/td>\n                    <\/tr><tr><td align=\"right\">40<\/td>\n                        <td align=\"right\">.264<\/td>\n                        <td align=\"right\">.312<\/td>\n                    <\/tr><tr><td align=\"right\">45<\/td>\n                        <td align=\"right\">.248<\/td>\n                        <td align=\"right\">.294<\/td>\n                    <\/tr><tr><td align=\"right\">50<\/td>\n                        <td align=\"right\">.235<\/td>\n                        <td align=\"right\">.279<\/td>\n                    <\/tr><tr><td align=\"right\">55<\/td>\n                        <td align=\"right\">.224<\/td>\n                        <td align=\"right\">.266<\/td>\n                    <\/tr><tr><td align=\"right\">60<\/td>\n                        <td align=\"right\">.214<\/td>\n                        <td align=\"right\">.254<\/td>\n                    <\/tr><tr><td align=\"right\">65<\/td>\n                        <td align=\"right\">.206<\/td>\n                        <td align=\"right\">.244<\/td>\n                    <\/tr><tr><td align=\"right\">70<\/td>\n                        <td align=\"right\">.198<\/td>\n                        <td align=\"right\">.235<\/td>\n                    
<\/tr><tr><td align=\"right\">75<\/td>\n                        <td align=\"right\">.191<\/td>\n                        <td align=\"right\">.227<\/td>\n                    <\/tr><tr><td align=\"right\">80<\/td>\n                        <td align=\"right\">.185<\/td>\n                        <td align=\"right\">.220<\/td>\n                    <\/tr><tr><td align=\"right\">85<\/td>\n                        <td align=\"right\">.180<\/td>\n                        <td align=\"right\">.213<\/td>\n                    <\/tr><tr><td align=\"right\">90<\/td>\n                        <td align=\"right\">.174<\/td>\n                        <td align=\"right\">.207<\/td>\n                    <\/tr><tr><td align=\"right\">95<\/td>\n                        <td align=\"right\">.170<\/td>\n                        <td align=\"right\">.202<\/td>\n                    <\/tr><tr><td align=\"right\">100<\/td>\n                        <td align=\"right\">.165<\/td>\n                        <td align=\"right\">.197<\/td>\n                    <\/tr><\/tbody><\/table><\/div>\n        <div class=\"section\" id=\"price_1.0-ch13_s02_s03_s01\">\n            <h2 class=\"title editable block\">Example Test of Pearson\u2019s <em class=\"emphasis\">r<\/em>\n<\/h2>\n            <p class=\"para editable block\" id=\"price_1.0-ch13_s02_s03_s01_p01\">Imagine that the health psychologist is interested in the correlation between people\u2019s calorie estimates and their weight. He has no expectation about the direction of the relationship, so he decides to conduct a two-tailed test. He computes the correlation for a sample of 22 college students and finds that Pearson\u2019s <em class=\"emphasis\">r<\/em> is \u2212.21. The statistical software he uses tells him that the <em class=\"emphasis\">p<\/em> value is .348. It is greater than .05, so he retains the null hypothesis and concludes that there is no relationship between people\u2019s calorie estimates and their weight. 
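As noted earlier, a sample value of Pearson's r can be converted to a t score with N − 2 degrees of freedom. A minimal Python sketch of that check for this example (the helper name and the use of the standard r-to-t conversion formula are our additions; they do not appear in the text):

```python
import math

def r_to_t(r, n):
    """Convert a sample Pearson's r to a t score with n - 2 degrees of freedom
    (standard conversion; the formula itself is not shown in the text)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# The example from the text: r = -.21 for a sample of 22 students.
t = r_to_t(-0.21, 22)        # comes out near -0.96

# Two-tailed critical value of t for df = 20 (Table 13.2) is 2.086.
significant = abs(t) > 2.086  # False: retain the null hypothesis
```

Because |t| is well below the critical value, the decision matches the software's p value of .348: retain the null hypothesis.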
If he were to compute Pearson\u2019s <em class=\"emphasis\">r<\/em> by hand, he could look at <a class=\"xref\" href=\"#price_1.0-ch13_s02_s03_t01\">Table 13.5 \"Table of Critical Values of Pearson\u2019s \"<\/a> and see that for a sample of 22 the two-tailed critical value falls between the values listed for samples of 20 (.444) and 25 (.396). The fact that Pearson\u2019s <em class=\"emphasis\">r<\/em> for the sample is less extreme than either of these values tells him that the <em class=\"emphasis\">p<\/em> value is greater than .05 and that he should retain the null hypothesis.<\/p>\n            <div class=\"bcc-box bcc-success\" id=\"price_1.0-ch13_s02_s03_s01_n01\">\n                <h3 class=\"title\">Key Takeaways<\/h3>\n                <ul class=\"itemizedlist\" id=\"price_1.0-ch13_s02_s03_s01_l01\"><li>To compare two means, the most common null hypothesis test is the <em class=\"emphasis\">t<\/em> test. The one-sample <em class=\"emphasis\">t<\/em> test is used for comparing one sample mean with a hypothetical population mean of interest, the dependent-samples <em class=\"emphasis\">t<\/em> test is used to compare two means in a within-subjects design, and the independent-samples <em class=\"emphasis\">t<\/em> test is used to compare two means in a between-subjects design.<\/li>\n                    <li>To compare more than two means, the most common null hypothesis test is the analysis of variance (ANOVA). 
The one-way ANOVA is used for between-subjects designs with one independent variable, the repeated-measures ANOVA is used for within-subjects designs, and the factorial ANOVA is used for factorial designs.<\/li>\n                    <li>A null hypothesis test of Pearson\u2019s <em class=\"emphasis\">r<\/em> is used to compare a sample value of Pearson\u2019s <em class=\"emphasis\">r<\/em> with a hypothetical population value of 0.<\/li>\n                <\/ul><\/div>\n            <div class=\"bcc-box bcc-info\" id=\"price_1.0-ch13_s02_s03_s01_n02\">\n                <h3 class=\"title\">Exercises<\/h3>\n                <ol class=\"orderedlist\" id=\"price_1.0-ch13_s02_s03_s01_l02\"><li>Practice: Use one of the online tools, Excel, or SPSS to reproduce the one-sample <em class=\"emphasis\">t<\/em> test, dependent-samples <em class=\"emphasis\">t<\/em> test, independent-samples <em class=\"emphasis\">t<\/em> test, and one-way ANOVA for the four sets of calorie estimation data presented in this section.<\/li>\n                    <li>Practice: A sample of 25 college students rated their friendliness on a scale of 1 (<em class=\"emphasis\">Much Lower Than Average<\/em>) to 7 (<em class=\"emphasis\">Much Higher Than Average<\/em>). Their mean rating was 5.30 with a standard deviation of 1.50. Conduct a one-sample <em class=\"emphasis\">t<\/em> test comparing their mean rating with a hypothetical mean rating of 4 (<em class=\"emphasis\">Average<\/em>). The question is whether college students have a tendency to rate themselves as friendlier than average.<\/li>\n                    <li>Practice: Decide whether each of the following Pearson\u2019s <em class=\"emphasis\">r<\/em> values is statistically significant for both a one-tailed and a two-tailed test. (a) The correlation between height and IQ is +.13 in a sample of 35. (b) For a sample of 88 college students, the correlation between how disgusted they felt and the harshness of their moral judgments was +.23. 
(c) The correlation between the number of daily hassles and positive mood is \u2212.43 for a sample of 30 middle-aged adults.<\/li>\n                <\/ol><\/div>\n        <\/div>\n    <\/div>","rendered":"<div class=\"bcc-box bcc-highlight\" id=\"price_1.0-ch13_s02_n01\">\n<h3 class=\"title\">Learning Objectives<\/h3>\n<ol class=\"orderedlist\" id=\"price_1.0-ch13_s02_l01\">\n<li>Conduct and interpret one-sample, dependent-samples, and independent-samples <em class=\"emphasis\">t<\/em> tests.<\/li>\n<li>Interpret the results of one-way, repeated measures, and factorial ANOVAs.<\/li>\n<li>Conduct and interpret null hypothesis tests of Pearson\u2019s <em class=\"emphasis\">r<\/em>.<\/li>\n<\/ol>\n<\/div>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_p01\">In this section, we look at several common null hypothesis testing procedures. The emphasis here is on providing enough information to allow you to conduct and interpret the most basic versions. In most cases, the online statistical analysis tools mentioned in <a class=\"xref\" href=\"..\/12-1-describing-single-variables\/#price_1.0-ch12\">Chapter 12 &#8220;Descriptive Statistics&#8221;<\/a> will handle the computations\u2014as will programs such as Microsoft Excel and SPSS.<\/p>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s01\">\n<h2 class=\"title editable block\">The <em class=\"emphasis\">t<\/em> Test<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_p01\">As we have seen throughout this book, many studies in psychology focus on the difference between two means. The most common null hypothesis test for this type of statistical relationship is the <span class=\"margin_term\"><b><em class=\"emphasis\">t<\/em> test<\/b><\/span>. 
In this section, we look at three types of <em class=\"emphasis\">t<\/em> tests that are used for slightly different research designs: the one-sample <em class=\"emphasis\">t<\/em> test, the dependent-samples <em class=\"emphasis\">t<\/em> test, and the independent-samples <em class=\"emphasis\">t<\/em> test.<\/p>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s01_s01\">\n<h2 class=\"title editable block\">One-Sample <em class=\"emphasis\">t<\/em> Test<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p01\">The <span class=\"margin_term\"><b>one-sample <em class=\"emphasis\">t<\/em> test<\/b><\/span> is used to compare a sample mean (<em class=\"emphasis\">M<\/em>) with a hypothetical population mean (\u03bc<sub class=\"subscript\">0<\/sub>) that provides some interesting standard of comparison. The null hypothesis is that the mean for the population (\u00b5) is equal to the hypothetical population mean: \u03bc = \u03bc<sub class=\"subscript\">0<\/sub>. The alternative hypothesis is that the mean for the population is different from the hypothetical population mean: \u03bc \u2260 \u03bc<sub class=\"subscript\">0<\/sub>. To decide between these two hypotheses, we need to find the probability of obtaining the sample mean (or one more extreme) if the null hypothesis were true. But finding this <em class=\"emphasis\">p<\/em> value requires first computing a test statistic called <em class=\"emphasis\">t<\/em>. (A <span class=\"margin_term\"><b>test statistic<\/b><\/span> is a statistic that is computed only to help find the <em class=\"emphasis\">p<\/em> value.) 
The formula for <em class=\"emphasis\">t<\/em> is as follows:<\/p>\n<p>\\[ t = \\frac{M - \\mu_{0}}{( \\frac{SD}{ \\sqrt{N}})} \\]<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p02\">Again, <em class=\"emphasis\">M<\/em> is the sample mean and \u00b5<sub class=\"subscript\">0<\/sub> is the hypothetical population mean of interest. <em class=\"emphasis\">SD<\/em> is the sample standard deviation and <em class=\"emphasis\">N<\/em> is the sample size.<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p03\">The reason the <em class=\"emphasis\">t<\/em> statistic (or any test statistic) is useful is that we know how it is distributed when the null hypothesis is true. As shown in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_f01\">Figure 13.1 &#8220;Distribution of &#8220;<\/a>, this distribution is unimodal and symmetrical, and it has a mean of 0. Its precise shape depends on a statistical concept called the degrees of freedom, which for a one-sample <em class=\"emphasis\">t<\/em> test is <em class=\"emphasis\">N<\/em> \u2212 1. (There are 24 degrees of freedom for the distribution shown in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_f01\">Figure 13.1 &#8220;Distribution of &#8220;<\/a>.) The important point is that knowing this distribution makes it possible to find the <em class=\"emphasis\">p<\/em> value for any <em class=\"emphasis\">t<\/em> score. Consider, for example, a <em class=\"emphasis\">t<\/em> score of +1.50 based on a sample of 25. The probability of a <em class=\"emphasis\">t<\/em> score at least this extreme is given by the proportion of <em class=\"emphasis\">t<\/em> scores in the distribution that are at least this extreme. For now, let us define <em class=\"emphasis\">extreme<\/em> as being far from zero in either direction. 
Thus the <em class=\"emphasis\">p<\/em> value is the proportion of <em class=\"emphasis\">t<\/em> scores that are +1.50 or above <em class=\"emphasis\">or<\/em> that are \u22121.50 or below\u2014a value that turns out to be .14.<\/p>\n<div style=\"text-align: center; font-size: .8em; max-width: 497px;\" id=\"price_1.0-ch13_s02_s01_s01_f01\">\n<p class=\"title\"><span class=\"title-prefix\">Figure 13.1<\/span> Distribution of <em class=\"emphasis\">t<\/em> Scores (With 24 Degrees of Freedom) When the Null Hypothesis Is True<\/p>\n<p>                <a href=\"\/psychologyresearchmethods\/wp-content\/uploads\/sites\/171\/2015\/07\/7bffb75c83485bbf0a3f08783ad55bcb.jpg\"><img decoding=\"async\" src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2714\/2017\/11\/16174230\/7bffb75c83485bbf0a3f08783ad55bcb.jpg\" alt=\"Distribution of t Scores (With 24 Degrees of Freedom) When the Null Hypothesis Is True. The red vertical lines represent the two-tailed critical values, and the green vertical lines the one-tailed critical values when &#x3b1; = .05\" style=\"max-width: 497px;\" \/><\/a><\/p>\n<p class=\"para\">The red vertical lines represent the two-tailed critical values, and the green vertical lines the one-tailed critical values when \u03b1 = .05.<\/p>\n<\/div>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p04\">Fortunately, we do not have to deal directly with the distribution of <em class=\"emphasis\">t<\/em> scores. If we were to enter our sample data and hypothetical mean of interest into one of the online statistical tools in <a class=\"xref\" href=\"..\/12-1-describing-single-variables\/#price_1.0-ch12\">Chapter 12 &#8220;Descriptive Statistics&#8221;<\/a> or into a program like SPSS (Excel does not have a one-sample <em class=\"emphasis\">t<\/em> test function), the output would include both the <em class=\"emphasis\">t<\/em> score and the <em class=\"emphasis\">p<\/em> value. 
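Where does a number like .14 come from? One way to see it is by simulation: draw many samples of 25 from a population in which the null hypothesis is true, compute t for each, and count how often t is at least as extreme as ±1.50. A rough Python sketch (our illustration, not part of the text; the population mean and SD are arbitrary because t does not depend on them):

```python
import math
import random
import statistics

random.seed(1)

def one_sample_t(xs, mu0):
    """One-sample t score, following the formula given above."""
    return (statistics.mean(xs) - mu0) / (statistics.stdev(xs) / math.sqrt(len(xs)))

# Draw many samples of N = 25 from a population where the null is true.
trials = 20000
extreme = sum(1 for _ in range(trials)
              if abs(one_sample_t([random.gauss(0, 1) for _ in range(25)], 0)) >= 1.50)
p_sim = extreme / trials   # comes out near .14
```

The simulated proportion lands close to the .14 figure given in the text; a larger number of trials would make the estimate more precise.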
At this point, the rest of the procedure is simple. If <em class=\"emphasis\">p<\/em> is less than .05, we reject the null hypothesis and conclude that the population mean differs from the hypothetical mean of interest. If <em class=\"emphasis\">p<\/em> is greater than .05, we retain the null hypothesis and conclude that there is not enough evidence to say that the population mean differs from the hypothetical mean of interest. (Again, technically, we conclude only that we do not have enough evidence to conclude that it <em class=\"emphasis\">does<\/em> differ.)<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p05\">If we were to compute the <em class=\"emphasis\">t<\/em> score by hand, we could use a table like <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_t01\">Table 13.2 &#8220;Table of Critical Values of &#8220;<\/a> to make the decision. This table does not provide actual <em class=\"emphasis\">p<\/em> values. Instead, it provides the <span class=\"margin_term\"><b>critical values<\/b><\/span> of <em class=\"emphasis\">t<\/em> for different degrees of freedom (<em class=\"emphasis\">df)<\/em> when \u03b1 is .05. For now, let us focus on the two-tailed critical values in the last column of the table. Each of these values should be interpreted as a pair of values: one positive and one negative. For example, the two-tailed critical values when there are 24 degrees of freedom are +2.064 and \u22122.064. These are represented by the red vertical lines in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_f01\">Figure 13.1 &#8220;Distribution of &#8220;<\/a>. 
The idea is that any <em class=\"emphasis\">t<\/em> score below the lower critical value (the left-hand red line in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_f01\">Figure 13.1 &#8220;Distribution of &#8220;<\/a>) is in the lowest 2.5% of the distribution, while any <em class=\"emphasis\">t<\/em> score above the upper critical value (the right-hand red line) is in the highest 2.5% of the distribution. This means that any <em class=\"emphasis\">t<\/em> score beyond the critical value in <em class=\"emphasis\">either<\/em> direction is in the most extreme 5% of <em class=\"emphasis\">t<\/em> scores when the null hypothesis is true and therefore has a <em class=\"emphasis\">p<\/em> value less than .05. Thus if the <em class=\"emphasis\">t<\/em> score we compute is beyond the critical value in either direction, then we reject the null hypothesis. If the <em class=\"emphasis\">t<\/em> score we compute is between the upper and lower critical values, then we retain the null hypothesis.<\/p>\n<div class=\"table block\" id=\"price_1.0-ch13_s02_s01_s01_t01\">\n<p class=\"title\"><span class=\"title-prefix\">Table 13.2<\/span> Table of Critical Values of <em class=\"emphasis\">t<\/em> When \u03b1 = .05<\/p>\n<table cellpadding=\"0\" style=\"border-spacing: 0px;\">\n<thead>\n<tr>\n<th align=\"right\">\n                            <\/th>\n<th colspan=\"2\" align=\"right\">Critical value<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td align=\"right\"><em class=\"emphasis\">df<\/em><\/td>\n<td align=\"right\">One-tailed<\/td>\n<td align=\"right\">Two-tailed<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">3<\/td>\n<td align=\"right\">2.353<\/td>\n<td align=\"right\">3.182<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">4<\/td>\n<td align=\"right\">2.132<\/td>\n<td align=\"right\">2.776<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">5<\/td>\n<td align=\"right\">2.015<\/td>\n<td align=\"right\">2.571<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">6<\/td>\n<td align=\"right\">1.943<\/td>\n<td 
align=\"right\">2.447<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">7<\/td>\n<td align=\"right\">1.895<\/td>\n<td align=\"right\">2.365<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">8<\/td>\n<td align=\"right\">1.860<\/td>\n<td align=\"right\">2.306<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">9<\/td>\n<td align=\"right\">1.833<\/td>\n<td align=\"right\">2.262<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">10<\/td>\n<td align=\"right\">1.812<\/td>\n<td align=\"right\">2.228<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">11<\/td>\n<td align=\"right\">1.796<\/td>\n<td align=\"right\">2.201<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">12<\/td>\n<td align=\"right\">1.782<\/td>\n<td align=\"right\">2.179<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">13<\/td>\n<td align=\"right\">1.771<\/td>\n<td align=\"right\">2.160<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">14<\/td>\n<td align=\"right\">1.761<\/td>\n<td align=\"right\">2.145<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">15<\/td>\n<td align=\"right\">1.753<\/td>\n<td align=\"right\">2.131<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">16<\/td>\n<td align=\"right\">1.746<\/td>\n<td align=\"right\">2.120<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">17<\/td>\n<td align=\"right\">1.740<\/td>\n<td align=\"right\">2.110<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">18<\/td>\n<td align=\"right\">1.734<\/td>\n<td align=\"right\">2.101<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">19<\/td>\n<td align=\"right\">1.729<\/td>\n<td align=\"right\">2.093<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">20<\/td>\n<td align=\"right\">1.725<\/td>\n<td align=\"right\">2.086<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">21<\/td>\n<td align=\"right\">1.721<\/td>\n<td align=\"right\">2.080<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">22<\/td>\n<td align=\"right\">1.717<\/td>\n<td align=\"right\">2.074<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">23<\/td>\n<td align=\"right\">1.714<\/td>\n<td align=\"right\">2.069<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">24<\/td>\n<td align=\"right\">1.711<\/td>\n<td 
align=\"right\">2.064<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">25<\/td>\n<td align=\"right\">1.708<\/td>\n<td align=\"right\">2.060<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">30<\/td>\n<td align=\"right\">1.697<\/td>\n<td align=\"right\">2.042<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">35<\/td>\n<td align=\"right\">1.690<\/td>\n<td align=\"right\">2.030<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">40<\/td>\n<td align=\"right\">1.684<\/td>\n<td align=\"right\">2.021<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">45<\/td>\n<td align=\"right\">1.679<\/td>\n<td align=\"right\">2.014<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">50<\/td>\n<td align=\"right\">1.676<\/td>\n<td align=\"right\">2.009<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">60<\/td>\n<td align=\"right\">1.671<\/td>\n<td align=\"right\">2.000<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">70<\/td>\n<td align=\"right\">1.667<\/td>\n<td align=\"right\">1.994<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">80<\/td>\n<td align=\"right\">1.664<\/td>\n<td align=\"right\">1.990<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">90<\/td>\n<td align=\"right\">1.662<\/td>\n<td align=\"right\">1.987<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">100<\/td>\n<td align=\"right\">1.660<\/td>\n<td align=\"right\">1.984<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p06\">Thus far, we have considered what is called a <span class=\"margin_term\"><b>two-tailed test<\/b><\/span>, where we reject the null hypothesis if the <em class=\"emphasis\">t<\/em> score for the sample is extreme in either direction. This makes sense when we believe that the sample mean might differ from the hypothetical population mean but we do not have good reason to expect the difference to go in a particular direction. 
But it is also possible to do a <span class=\"margin_term\"><b>one-tailed test<\/b><\/span>, where we reject the null hypothesis only if the <em class=\"emphasis\">t<\/em> score for the sample is extreme in one direction that we specify before collecting the data. This makes sense when we have good reason to expect the sample mean will differ from the hypothetical population mean in a particular direction.<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_p07\">Here is how it works. Each one-tailed critical value in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_t01\">Table 13.2 &#8220;Table of Critical Values of &#8220;<\/a> can again be interpreted as a pair of values: one positive and one negative. A <em class=\"emphasis\">t<\/em> score below the lower critical value is in the lowest 5% of the distribution, and a <em class=\"emphasis\">t<\/em> score above the upper critical value is in the highest 5% of the distribution. For 24 degrees of freedom, these values are \u22121.711 and +1.711. (These are represented by the green vertical lines in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_f01\">Figure 13.1 &#8220;Distribution of &#8220;<\/a>.) However, for a one-tailed test, we must decide before collecting data whether we expect the sample mean to be lower than the hypothetical population mean, in which case we would use only the lower critical value, or we expect the sample mean to be greater than the hypothetical population mean, in which case we would use only the upper critical value. Notice that we still reject the null hypothesis when the <em class=\"emphasis\">t<\/em> score for our sample is in the most extreme 5% of the t scores we would expect if the null hypothesis were true\u2014so \u03b1 remains at .05. We have simply redefined <em class=\"emphasis\">extreme<\/em> to refer only to one tail of the distribution. The advantage of the one-tailed test is that critical values are less extreme. 
If the sample mean differs from the hypothetical population mean in the expected direction, then we have a better chance of rejecting the null hypothesis. The disadvantage is that if the sample mean differs from the hypothetical population mean in the unexpected direction, then there is no chance at all of rejecting the null hypothesis.<\/p>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s01_s01_s01\">\n<h2 class=\"title editable block\">Example One-Sample <em class=\"emphasis\">t<\/em> Test<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_s01_p01\">Imagine that a health psychologist is interested in the accuracy of college students\u2019 estimates of the number of calories in a chocolate chip cookie. He shows the cookie to a sample of 10 students and asks each one to estimate the number of calories in it. Because the actual number of calories in the cookie is 250, this is the hypothetical population mean of interest (\u00b5<sub class=\"subscript\">0<\/sub>). The null hypothesis is that the mean estimate for the population (\u03bc) is 250. Because he has no real sense of whether the students will underestimate or overestimate the number of calories, he decides to do a two-tailed test. Now imagine further that the participants\u2019 actual estimates are as follows:<\/p>\n<p>                <span class=\"informalequation block\"><span class=\"mathphrase\">250, 280, 200, 150, 175, 200, 200, 220, 180, 250.<\/span><\/span><\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_s01_p02\">The mean estimate for the sample (<em class=\"emphasis\">M<\/em>) is 210.50 calories and the standard deviation (<em class=\"emphasis\">SD<\/em>) is 39.75. 
The health psychologist can now compute the <em class=\"emphasis\">t<\/em> score for his sample:<\/p>\n<p>\\[ t = \\frac{210.50 - 250}{ ( \\frac{39.75}{ \\sqrt{10}} ) } = -3.14 \\]               <\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_s01_p03\">If he enters the data into one of the online analysis tools or uses SPSS, it would also tell him that the two-tailed <em class=\"emphasis\">p<\/em> value for this <em class=\"emphasis\">t<\/em> score (with 10 \u2212 1 = 9 degrees of freedom) is .012. Because this is less than .05, the health psychologist would reject the null hypothesis and conclude that college students tend to underestimate the number of calories in a chocolate chip cookie. If he computes the <em class=\"emphasis\">t<\/em> score by hand, he could look at <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_t01\">Table 13.2 &#8220;Table of Critical Values of &#8220;<\/a> and see that the critical value of <em class=\"emphasis\">t<\/em> for a two-tailed test with 9 degrees of freedom is \u00b12.262. The fact that his <em class=\"emphasis\">t<\/em> score was more extreme than this critical value would tell him that his <em class=\"emphasis\">p<\/em> value is less than .05 and that he should reject the null hypothesis.<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s01_s01_p04\">Finally, if this researcher had gone into this study with good reason to expect that college students underestimate the number of calories, then he could have done a one-tailed test instead of a two-tailed test. The only thing this would change is the critical value, which would be \u22121.833. This slightly less extreme value would make it a bit easier to reject the null hypothesis. 
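The same arithmetic can be checked with a few lines of Python (a sketch using only the standard library; it is not part of the original example):

```python
import math
import statistics

estimates = [250, 280, 200, 150, 175, 200, 200, 220, 180, 250]
mu0 = 250  # the actual number of calories in the cookie

m = statistics.mean(estimates)
sd = statistics.stdev(estimates)                  # sample SD (N - 1 denominator)
t = (m - mu0) / (sd / math.sqrt(len(estimates)))  # comes out near -3.1

# Two-tailed critical value of t for df = 9 (Table 13.2) is 2.262.
reject = abs(t) > 2.262                           # True: reject the null hypothesis
```

Because the t score is more extreme than ±2.262, the decision is to reject the null hypothesis, matching the worked example.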
However, if it turned out that college students overestimate the number of calories\u2014no matter how much they overestimate it\u2014the researcher would not have been able to reject the null hypothesis.<\/p>\n<\/div>\n<\/div>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s01_s02\">\n<h2 class=\"title editable block\">The Dependent-Samples <em class=\"emphasis\">t<\/em> Test<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_p01\">The <span class=\"margin_term\"><b>dependent-samples <em class=\"emphasis\">t<\/em> test<\/b><\/span> (sometimes called the paired-samples <em class=\"emphasis\">t<\/em> test) is used to compare two means for the same sample tested at two different times or under two different conditions. This makes it appropriate for pretest-posttest designs or within-subjects experiments. The null hypothesis is that the means at the two times or under the two conditions are the same in the population. The alternative hypothesis is that they are not the same. This test can also be one-tailed if the researcher has good reason to expect the difference goes in a particular direction.<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_p02\">It helps to think of the dependent-samples <em class=\"emphasis\">t<\/em> test as a special case of the one-sample <em class=\"emphasis\">t<\/em> test. However, the first step in the dependent-samples <em class=\"emphasis\">t<\/em> test is to reduce the two scores for each participant to a single <span class=\"margin_term\"><b>difference score<\/b><\/span> by taking the difference between them. At this point, the dependent-samples <em class=\"emphasis\">t<\/em> test becomes a one-sample <em class=\"emphasis\">t<\/em> test on the difference scores. 
The hypothetical population mean (\u00b5<sub class=\"subscript\">0<\/sub>) of interest is 0 because this is what the mean difference score would be if there were no difference on average between the two times or two conditions. We can now think of the null hypothesis as being that the mean difference score in the population is 0 (\u00b5 = 0) and the alternative hypothesis as being that the mean difference score in the population is not 0 (\u00b5 \u2260 0).<\/p>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s01_s02_s01\">\n<h2 class=\"title editable block\">Example Dependent-Samples <em class=\"emphasis\">t<\/em> Test<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_s01_p01\">Imagine that the health psychologist now knows that people tend to underestimate the number of calories in junk food and has developed a short training program to improve their estimates. To test the effectiveness of this program, he conducts a pretest-posttest study in which 10 participants estimate the number of calories in a chocolate chip cookie before the training program and then again afterward. Because he expects the program to increase the participants\u2019 estimates, he decides to do a one-tailed test. 
Now imagine further that the pretest estimates are<\/p>\n<p>                <span class=\"informalequation block\"><span class=\"mathphrase\">230, 250, 280, 175, 150, 200, 180, 210, 220, 190<\/span><\/span><\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_s01_p02\">and that the posttest estimates (for the same participants in the same order) are<\/p>\n<p>                <span class=\"informalequation block\"><span class=\"mathphrase\">250, 260, 250, 200, 160, 200, 200, 180, 230, 240.<\/span><\/span><\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_s01_p03\">The difference scores, then, are as follows:<\/p>\n<p>                <span class=\"informalequation block\"><span class=\"mathphrase\">+20, +10, \u221230, +25, +10, 0, +20, \u221230, +10, +50.<\/span><\/span><\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_s01_p04\">Note that it does not matter whether the first set of scores is subtracted from the second or the second from the first as long as it is done the same way for all participants. In this example, it makes sense to subtract the pretest estimates from the posttest estimates so that positive difference scores mean that the estimates went up after the training and negative difference scores mean the estimates went down.<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_s01_p05\">The mean of the difference scores is 8.50 with a standard deviation of 24.27. 
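Because the dependent-samples test is just a one-sample test on the difference scores, the whole computation is short enough to sketch in Python (our illustration; not part of the text):

```python
import math
import statistics

pretest  = [230, 250, 280, 175, 150, 200, 180, 210, 220, 190]
posttest = [250, 260, 250, 200, 160, 200, 200, 180, 230, 240]

# Reduce each participant's two scores to one difference score
# (posttest minus pretest), then test the differences against mu0 = 0.
diffs = [post - pre for pre, post in zip(pretest, posttest)]
m = statistics.mean(diffs)
t = m / (statistics.stdev(diffs) / math.sqrt(len(diffs)))   # about 1.11

# One-tailed critical value of t for df = 9 (Table 13.2) is 1.833.
reject = t > 1.833    # False: retain the null hypothesis
```

The t score falls short of the one-tailed critical value, so the decision is to retain the null hypothesis, as in the worked example.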
The health psychologist can now compute the <em class=\"emphasis\">t<\/em> score for his sample as follows:<\/p>\n<p>\\[ t = \\frac{8.5 - 0}{( \\frac{24.27}{ \\sqrt{10}})} = 1.11 \\]<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s02_s01_p06\">If he enters the data into one of the online analysis tools or uses Excel or SPSS, it would tell him that the one-tailed <em class=\"emphasis\">p<\/em> value for this <em class=\"emphasis\">t<\/em> score (again with 10 \u2212 1 = 9 degrees of freedom) is .148. Because this is greater than .05, he would retain the null hypothesis and conclude that the training program does not increase people\u2019s calorie estimates. If he were to compute the <em class=\"emphasis\">t<\/em> score by hand, he could look at <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_t01\">Table 13.2 &#8220;Table of Critical Values of &#8220;<\/a> and see that the critical value of <em class=\"emphasis\">t<\/em> for a one-tailed test with 9 degrees of freedom is +1.833. (It is positive this time because he was expecting a positive mean difference score.) The fact that his <em class=\"emphasis\">t<\/em> score was less extreme than this critical value would tell him that his <em class=\"emphasis\">p<\/em> value is greater than .05 and that he should fail to reject the null hypothesis.<\/p>\n<\/div>\n<\/div>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s01_s03\">\n<h2 class=\"title editable block\">The Independent-Samples <em class=\"emphasis\">t<\/em> Test<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_p01\">The <span class=\"margin_term\"><b>independent-samples <em class=\"emphasis\">t<\/em> test<\/b><\/span> is used to compare the means of two separate samples (<em class=\"emphasis\">M<\/em><sub class=\"subscript\"><em class=\"emphasis\">1<\/em><\/sub> and <em class=\"emphasis\">M<\/em><sub class=\"subscript\"><em class=\"emphasis\">2<\/em><\/sub>). 
The two samples might have been tested under different conditions in a between-subjects experiment, or they could be preexisting groups in a correlational design (e.g., women and men, extroverts and introverts). The null hypothesis is that the means of the two populations are the same: \u00b5<sub class=\"subscript\">1<\/sub> = \u00b5<sub class=\"subscript\">2<\/sub>. The alternative hypothesis is that they are not the same: \u00b5<sub class=\"subscript\">1<\/sub> \u2260 \u00b5<sub class=\"subscript\">2<\/sub>. Again, the test can be one-tailed if the researcher has good reason to expect the difference to go in a particular direction.<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_p02\">The <em class=\"emphasis\">t<\/em> statistic here is a bit more complicated because it must take into account two sample means, two standard deviations, and two sample sizes. The formula is as follows:<\/p>\n<p>\\[ t = \\frac{ M_{1} - M_{2} }{ \\sqrt{ \\frac{ SD_{1}^{2}}{n_{1}} + \\frac{ SD_{2}^{2}}{n_{2}}}} \\]<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_p03\">Notice that this formula includes squared standard deviations (the variances) that appear inside the square root symbol. Also, lowercase <em class=\"emphasis\">n<\/em><sub class=\"subscript\"><em class=\"emphasis\">1<\/em><\/sub> and <em class=\"emphasis\">n<\/em><sub class=\"subscript\"><em class=\"emphasis\">2<\/em><\/sub> refer to the sample sizes in the two groups or conditions (as opposed to capital <em class=\"emphasis\">N<\/em>, which generally refers to the total sample size).
The only additional thing to know here is that there are <em class=\"emphasis\">N<\/em> \u2212 2 degrees of freedom for the independent-samples <em class=\"emphasis\">t<\/em> test.<\/p>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s01_s03_s01\">\n<h2 class=\"title editable block\">Example Independent-Samples <em class=\"emphasis\">t<\/em> Test<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_s01_p01\">Now the health psychologist wants to compare the calorie estimates of people who regularly eat junk food with the estimates of people who rarely eat junk food. He believes the difference could come out in either direction, so he decides to conduct a two-tailed test. He collects data from a sample of eight participants who eat junk food regularly and seven participants who rarely eat junk food. The data are as follows:<\/p>\n<p class=\"para block\">\u00a0\u00a0<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_s01_p02\">Junk food eaters: 180, 220, 150, 85, 200, 170, 150, 190<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_s01_p03\">Non\u2013junk food eaters: 200, 240, 190, 175, 200, 300, 240<\/p>\n<p class=\"para block\">\u00a0\u00a0<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_s01_p04\">The mean for the junk food eaters is 168.13 with a standard deviation of 41.23. The mean for the non\u2013junk food eaters is 220.71 with a standard deviation of 42.66. He can now compute his <em class=\"emphasis\">t<\/em> score as follows:<\/p>\n<p>\\[ t = \\frac{ 168.13 - 220.71}{ \\sqrt{ \\frac{41.23^{2}}{8} + \\frac{42.66^{2}}{7}}} = -2.42 \\]<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s01_s03_s01_p05\">If he enters the data into one of the online analysis tools or uses Excel or SPSS, it would tell him that the two-tailed <em class=\"emphasis\">p<\/em> value for this <em class=\"emphasis\">t<\/em> score (with 15 \u2212 2 = 13 degrees of freedom) is .031.
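<\/p>
<p class=\"para editable block\">This <em class=\"emphasis\">t<\/em> score can be verified from the raw data with a few lines of Python (a sketch; the text itself leaves the computation to Excel, SPSS, or the online tools):<\/p>

```python
import math
from statistics import mean, stdev

junk = [180, 220, 150, 85, 200, 170, 150, 190]  # regular junk food eaters
nonjunk = [200, 240, 190, 175, 200, 300, 240]   # rarely eat junk food

m1, m2 = mean(junk), mean(nonjunk)
sd1, sd2 = stdev(junk), stdev(nonjunk)
n1, n2 = len(junk), len(nonjunk)

# Independent-samples t: difference in means over its estimated standard error
t = (m1 - m2) / math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
```

<p class=\"para editable block\">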
Because this is less than .05, the health psychologist would reject the null hypothesis and conclude that people who eat junk food regularly make lower calorie estimates than people who eat it rarely. If he were to compute the <em class=\"emphasis\">t<\/em> score by hand, he could look at <a class=\"xref\" href=\"#price_1.0-ch13_s02_s01_s01_t01\">Table 13.2 &#8220;Table of Critical Values of <em class=\"emphasis\">t<\/em>&#8221;<\/a> and see that the critical value of <em class=\"emphasis\">t<\/em> for a two-tailed test with 13 degrees of freedom is \u00b12.160. The fact that his <em class=\"emphasis\">t<\/em> score was more extreme than this critical value would tell him that his <em class=\"emphasis\">p<\/em> value is less than .05 and that he should reject the null hypothesis.<\/p>\n<\/p><\/div>\n<\/p><\/div>\n<\/p><\/div>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s02\">\n<h2 class=\"title editable block\">The Analysis of Variance<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_p01\">When there are more than two group or condition means to be compared, the most common null hypothesis test is the <span class=\"margin_term\"><b>analysis of variance (ANOVA)<\/b><\/span>. In this section, we look primarily at the <span class=\"margin_term\"><b>one-way ANOVA<\/b><\/span>, which is used for between-subjects designs with a single independent variable.
We then briefly consider some other versions of the ANOVA that are used for within-subjects and factorial research designs.<\/p>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s02_s01\">\n<h2 class=\"title editable block\">One-Way ANOVA<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s01_p01\">The one-way ANOVA is used to compare the means of more than two samples (<em class=\"emphasis\">M<\/em><sub class=\"subscript\"><em class=\"emphasis\">1<\/em><\/sub>, <em class=\"emphasis\">M<\/em><sub class=\"subscript\"><em class=\"emphasis\">2<\/em><\/sub>\u2026<em class=\"emphasis\">M<\/em><sub class=\"subscript\"><em class=\"emphasis\">G<\/em><\/sub>) in a between-subjects design. The null hypothesis is that all the means are equal in the population: \u00b5<sub class=\"subscript\">1<\/sub> = \u00b5<sub class=\"subscript\">2<\/sub> = \u2026 = \u00b5<sub class=\"subscript\">G<\/sub>. The alternative hypothesis is that not all the means in the population are equal.<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s01_p02\">The test statistic for the ANOVA is called <em class=\"emphasis\">F<\/em>. It is a ratio of two estimates of the population variance based on the sample data. One estimate of the population variance is called the <span class=\"margin_term\"><b>mean squares between groups (<em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">B<\/em><\/sub>)<\/b><\/span> and is based on the differences among the sample means. The other is called the <span class=\"margin_term\"><b>mean squares within groups (<em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub>)<\/b><\/span> and is based on the differences among the scores within each group.
The <em class=\"emphasis\">F<\/em> statistic is the ratio of the <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">B<\/em><\/sub> to the <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub> and can therefore be expressed as follows:<\/p>\n<p>            <span class=\"informalequation block\">F=MSBMSW.<\/span><\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s01_p03\">Again, the reason that <em class=\"emphasis\">F<\/em> is useful is that we know how it is distributed when the null hypothesis is true. As shown in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s02_s01_f01\">Figure 13.2 &#8220;Distribution of the &#8220;<\/a>, this distribution is unimodal and positively skewed with values that cluster around 1. The precise shape of the distribution depends on both the number of groups and the sample size, and there is a degrees of freedom value associated with each of these. The between-groups degrees of freedom is the number of groups minus one: <em class=\"emphasis\">df<\/em><sub class=\"subscript\"><em class=\"emphasis\">B<\/em><\/sub> = (<em class=\"emphasis\">G<\/em> \u2212 1). The within-groups degrees of freedom is the total sample size minus the number of groups: <em class=\"emphasis\">df<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub> = <em class=\"emphasis\">N<\/em> \u2212 <em class=\"emphasis\">G<\/em>. 
Again, knowing the distribution of <em class=\"emphasis\">F<\/em> when the null hypothesis is true allows us to find the <em class=\"emphasis\">p<\/em> value.<\/p>\n<div style=\"text-align: center;\">\n<div class=\"figure large large-height editable block\" id=\"price_1.0-ch13_s02_s02_s01_f01\">\n<div style=\"text-align: center; font-size: .8em; max-width: 497px;\">\n<p class=\"title\"><span class=\"title-prefix\">Figure 13.2<\/span> Distribution of the <em class=\"emphasis\">F<\/em> Ratio With 2 and 37 Degrees of Freedom When the Null Hypothesis Is True<\/p>\n<p>                <a href=\"\/psychologyresearchmethods\/wp-content\/uploads\/sites\/171\/2015\/07\/2a8c35892ee84b9a536f12ece8f6c540.jpg\"><img decoding=\"async\" src=\"https:\/\/s3-us-west-2.amazonaws.com\/courses-images\/wp-content\/uploads\/sites\/2714\/2017\/11\/16174232\/2a8c35892ee84b9a536f12ece8f6c540.jpg\" alt=\"Distribution of the F Ratio With 2 and 37 Degrees of Freedom When the Null Hypothesis Is True. The red vertical line represents the critical value when &#x3b1; is .05\" style=\"max-width: 497px;\" \/><\/a><\/p>\n<p class=\"para\">The red vertical line represents the critical value when \u03b1 is .05.<\/p>\n<\/p><\/div>\n<\/div>\n<\/div>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s01_p04\">The online tools in <a class=\"xref\" href=\"..\/12-1-describing-single-variables\/#price_1.0-ch12\">Chapter 12 &#8220;Descriptive Statistics&#8221;<\/a> and statistical software such as Excel and SPSS will compute <em class=\"emphasis\">F<\/em> and find the <em class=\"emphasis\">p<\/em> value. If <em class=\"emphasis\">p<\/em> is less than .05, then we reject the null hypothesis and conclude that there are differences among the group means in the population. If <em class=\"emphasis\">p<\/em> is greater than .05, then we retain the null hypothesis and conclude that there is not enough evidence to say that there are differences. 
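<\/p>
<p class=\"para editable block\">To make the ratio concrete, <em class=\"emphasis\">F<\/em> can be computed directly from raw data. Here is a sketch in Python (standard library only), using the three groups of calorie estimates from the one-way ANOVA example in this section, with eight participants per group:<\/p>

```python
from statistics import mean

# Calorie estimates from the one-way ANOVA example (8 per group)
groups = [
    [200, 180, 220, 160, 150, 200, 190, 200],  # psychology majors
    [190, 220, 200, 230, 160, 150, 200, 210],  # nutrition majors
    [220, 250, 240, 275, 250, 230, 200, 240],  # dieticians
]

scores = [x for g in groups for x in g]
grand_mean = mean(scores)
G, N = len(groups), len(scores)

# Between-groups and within-groups sums of squares
ss_b = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
ss_w = sum((x - mean(g)) ** 2 for g in groups for x in g)

ms_b = ss_b / (G - 1)  # mean squares between, df_B = G - 1 = 2
ms_w = ss_w / (N - G)  # mean squares within, df_W = N - G = 21
F = ms_b / ms_w
```

<p class=\"para editable block\">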
In the unlikely event that we would compute <em class=\"emphasis\">F<\/em> by hand, we can use a table of critical values like <a class=\"xref\" href=\"#price_1.0-ch13_s02_s02_s01_t01\">Table 13.3 &#8220;Table of Critical Values of &#8220;<\/a> to make the decision. The idea is that any <em class=\"emphasis\">F<\/em> ratio greater than the critical value has a <em class=\"emphasis\">p<\/em> value of less than .05. Thus if the <em class=\"emphasis\">F<\/em> ratio we compute is beyond the critical value, then we reject the null hypothesis. If the F ratio we compute is less than the critical value, then we retain the null hypothesis.<\/p>\n<div class=\"table block\" id=\"price_1.0-ch13_s02_s02_s01_t01\">\n<p class=\"title\"><span class=\"title-prefix\">Table 13.3<\/span> Table of Critical Values of <em class=\"emphasis\">F<\/em> When \u03b1 = .05<\/p>\n<table cellpadding=\"0\" style=\"border-spacing: 0px;\">\n<thead>\n<tr>\n<th colspan=\"4\" align=\"right\">\n<em class=\"emphasis\">df<\/em><sub class=\"subscript\"><em class=\"emphasis\">B<\/em><\/sub><\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td align=\"right\">\n<em class=\"emphasis\">df<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub><\/td>\n<td align=\"right\">2<\/td>\n<td align=\"right\">3<\/td>\n<td align=\"right\">4<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">8<\/td>\n<td align=\"right\">4.459<\/td>\n<td align=\"right\">4.066<\/td>\n<td align=\"right\">3.838<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">9<\/td>\n<td align=\"right\">4.256<\/td>\n<td align=\"right\">3.863<\/td>\n<td align=\"right\">3.633<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">10<\/td>\n<td align=\"right\">4.103<\/td>\n<td align=\"right\">3.708<\/td>\n<td align=\"right\">3.478<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">11<\/td>\n<td align=\"right\">3.982<\/td>\n<td align=\"right\">3.587<\/td>\n<td align=\"right\">3.357<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">12<\/td>\n<td align=\"right\">3.885<\/td>\n<td align=\"right\">3.490<\/td>\n<td 
align=\"right\">3.259<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">13<\/td>\n<td align=\"right\">3.806<\/td>\n<td align=\"right\">3.411<\/td>\n<td align=\"right\">3.179<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">14<\/td>\n<td align=\"right\">3.739<\/td>\n<td align=\"right\">3.344<\/td>\n<td align=\"right\">3.112<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">15<\/td>\n<td align=\"right\">3.682<\/td>\n<td align=\"right\">3.287<\/td>\n<td align=\"right\">3.056<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">16<\/td>\n<td align=\"right\">3.634<\/td>\n<td align=\"right\">3.239<\/td>\n<td align=\"right\">3.007<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">17<\/td>\n<td align=\"right\">3.592<\/td>\n<td align=\"right\">3.197<\/td>\n<td align=\"right\">2.965<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">18<\/td>\n<td align=\"right\">3.555<\/td>\n<td align=\"right\">3.160<\/td>\n<td align=\"right\">2.928<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">19<\/td>\n<td align=\"right\">3.522<\/td>\n<td align=\"right\">3.127<\/td>\n<td align=\"right\">2.895<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">20<\/td>\n<td align=\"right\">3.493<\/td>\n<td align=\"right\">3.098<\/td>\n<td align=\"right\">2.866<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">21<\/td>\n<td align=\"right\">3.467<\/td>\n<td align=\"right\">3.072<\/td>\n<td align=\"right\">2.840<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">22<\/td>\n<td align=\"right\">3.443<\/td>\n<td align=\"right\">3.049<\/td>\n<td align=\"right\">2.817<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">23<\/td>\n<td align=\"right\">3.422<\/td>\n<td align=\"right\">3.028<\/td>\n<td align=\"right\">2.796<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">24<\/td>\n<td align=\"right\">3.403<\/td>\n<td align=\"right\">3.009<\/td>\n<td align=\"right\">2.776<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">25<\/td>\n<td align=\"right\">3.385<\/td>\n<td align=\"right\">2.991<\/td>\n<td align=\"right\">2.759<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">30<\/td>\n<td align=\"right\">3.316<\/td>\n<td 
align=\"right\">2.922<\/td>\n<td align=\"right\">2.690<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">35<\/td>\n<td align=\"right\">3.267<\/td>\n<td align=\"right\">2.874<\/td>\n<td align=\"right\">2.641<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">40<\/td>\n<td align=\"right\">3.232<\/td>\n<td align=\"right\">2.839<\/td>\n<td align=\"right\">2.606<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">45<\/td>\n<td align=\"right\">3.204<\/td>\n<td align=\"right\">2.812<\/td>\n<td align=\"right\">2.579<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">50<\/td>\n<td align=\"right\">3.183<\/td>\n<td align=\"right\">2.790<\/td>\n<td align=\"right\">2.557<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">55<\/td>\n<td align=\"right\">3.165<\/td>\n<td align=\"right\">2.773<\/td>\n<td align=\"right\">2.540<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">60<\/td>\n<td align=\"right\">3.150<\/td>\n<td align=\"right\">2.758<\/td>\n<td align=\"right\">2.525<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">65<\/td>\n<td align=\"right\">3.138<\/td>\n<td align=\"right\">2.746<\/td>\n<td align=\"right\">2.513<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">70<\/td>\n<td align=\"right\">3.128<\/td>\n<td align=\"right\">2.736<\/td>\n<td align=\"right\">2.503<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">75<\/td>\n<td align=\"right\">3.119<\/td>\n<td align=\"right\">2.727<\/td>\n<td align=\"right\">2.494<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">80<\/td>\n<td align=\"right\">3.111<\/td>\n<td align=\"right\">2.719<\/td>\n<td align=\"right\">2.486<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">85<\/td>\n<td align=\"right\">3.104<\/td>\n<td align=\"right\">2.712<\/td>\n<td align=\"right\">2.479<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">90<\/td>\n<td align=\"right\">3.098<\/td>\n<td align=\"right\">2.706<\/td>\n<td align=\"right\">2.473<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">95<\/td>\n<td align=\"right\">3.092<\/td>\n<td align=\"right\">2.700<\/td>\n<td align=\"right\">2.467<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">100<\/td>\n<td 
align=\"right\">3.087<\/td>\n<td align=\"right\">2.696<\/td>\n<td align=\"right\">2.463<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s02_s01_s01\">\n<h2 class=\"title editable block\">Example One-Way ANOVA<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s01_s01_p01\">Imagine that the health psychologist wants to compare the calorie estimates of psychology majors, nutrition majors, and professional dieticians. He collects the following data:<\/p>\n<p>                <span class=\"informalequation block\"><span class=\"mathphrase\">Psych majors: 200, 180, 220, 160, 150, 200, 190, 200<\/span><\/span><br \/>\n                <span class=\"informalequation block\"><span class=\"mathphrase\">Nutrition majors: 190, 220, 200, 230, 160, 150, 200, 210, 195<\/span><\/span><br \/>\n                <span class=\"informalequation block\"><span class=\"mathphrase\">Dieticians: 220, 250, 240, 275, 250, 230, 200, 240<\/span><\/span><\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s01_s01_p02\">The means are 187.50 (<em class=\"emphasis\">SD<\/em> = 23.14), 195.00 (<em class=\"emphasis\">SD<\/em> = 27.77), and 238.13 (<em class=\"emphasis\">SD<\/em> = 22.35), respectively. So it appears that dieticians made substantially more accurate estimates on average. The researcher would almost certainly enter these data into a program such as Excel or SPSS, which would compute <em class=\"emphasis\">F<\/em> for him and find the <em class=\"emphasis\">p<\/em> value. <a class=\"xref\" href=\"#price_1.0-ch13_s02_s02_s01_s01_t01\">Table 13.4 &#8220;Typical One-Way ANOVA Output From Excel&#8221;<\/a> shows the output of the one-way ANOVA function in Excel for these data. This is referred to as an ANOVA table. 
It shows that <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">B<\/em><\/sub> is 5,971.88, <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub> is 602.23, and their ratio, <em class=\"emphasis\">F<\/em>, is 9.92. The <em class=\"emphasis\">p<\/em> value is .0009. Because this is below .05, the researcher would reject the null hypothesis and conclude that the mean calorie estimates for the three groups are not the same in the population. Notice that the ANOVA table also includes the \u201csum of squares\u201d (<em class=\"emphasis\">SS<\/em>) for between groups and for within groups. These values are computed on the way to finding <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">B<\/em><\/sub> and <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub> but are not typically reported by the researcher. Finally, if the researcher were to compute the <em class=\"emphasis\">F<\/em> ratio by hand, he could look at <a class=\"xref\" href=\"#price_1.0-ch13_s02_s02_s01_t01\">Table 13.3 &#8220;Table of Critical Values of <em class=\"emphasis\">F<\/em>&#8221;<\/a> and see that the critical value of <em class=\"emphasis\">F<\/em> with 2 and 21 degrees of freedom is 3.467 (the same value in <a class=\"xref\" href=\"#price_1.0-ch13_s02_s02_s01_s01_t01\">Table 13.4 &#8220;Typical One-Way ANOVA Output From Excel&#8221;<\/a> under <em class=\"emphasis\">F<\/em><sub class=\"subscript\">crit<\/sub>).
The fact that his <em class=\"emphasis\">F<\/em> ratio was greater than this critical value would tell him that his <em class=\"emphasis\">p<\/em> value is less than .05 and that he should reject the null hypothesis.<\/p>\n<div class=\"table block\" id=\"price_1.0-ch13_s02_s02_s01_s01_t01\">\n<p class=\"title\"><span class=\"title-prefix\">Table 13.4<\/span> Typical One-Way ANOVA Output From Excel<\/p>\n<table cellpadding=\"0\" style=\"border-spacing: 0px;\">\n<thead>\n<tr>\n<th colspan=\"7\">ANOVA<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><em class=\"emphasis\">Source of variation<\/em><\/td>\n<td><em class=\"emphasis\">SS<\/em><\/td>\n<td><em class=\"emphasis\">df<\/em><\/td>\n<td><em class=\"emphasis\">MS<\/em><\/td>\n<td><em class=\"emphasis\">F<\/em><\/td>\n<td><em class=\"emphasis\">p-value<\/em><\/td>\n<td>\n<em class=\"emphasis\">F<\/em><sub class=\"subscript\">crit<\/sub><\/td>\n<\/tr>\n<tr>\n<td>Between groups<\/td>\n<td align=\"right\">11,943.75<\/td>\n<td align=\"right\">2<\/td>\n<td align=\"right\">5,971.875<\/td>\n<td align=\"right\">9.916234<\/td>\n<td align=\"right\">0.000928<\/td>\n<td align=\"right\">3.4668<\/td>\n<\/tr>\n<tr>\n<td>Within groups<\/td>\n<td align=\"right\">12,646.88<\/td>\n<td align=\"right\">21<\/td>\n<td align=\"right\">602.2321<\/td>\n<td>\n                                <\/td>\n<td>\n                                <\/td>\n<td>\n                            <\/td>\n<\/tr>\n<tr>\n<td>Total<\/td>\n<td align=\"right\">24,590.63<\/td>\n<td align=\"right\">23<\/td>\n<td>\n                                <\/td>\n<td>\n                                <\/td>\n<td>\n                                <\/td>\n<td>\n                            <\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div><\/div>\n<\/p><\/div>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s02_s02\">\n<h2 class=\"title editable block\">ANOVA Elaborations<\/h2>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s02_s02_s01\">\n<h2 class=\"title editable block\">Post Hoc
Comparisons<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s02_s01_p01\">When we reject the null hypothesis in a one-way ANOVA, we conclude that the group means are not all the same in the population. But this can indicate different things. With three groups, it can indicate that all three means are significantly different from each other. Or it can indicate that one of the means is significantly different from the other two, but the other two are not significantly different from each other. It could be, for example, that the mean calorie estimates of psychology majors, nutrition majors, and dieticians are all significantly different from each other. Or it could be that the mean for dieticians is significantly different from the means for psychology and nutrition majors, but the means for psychology and nutrition majors are not significantly different from each other. For this reason, statistically significant one-way ANOVA results are typically followed up with a series of <span class=\"margin_term\"><b>post hoc comparisons<\/b><\/span> of selected pairs of group means to determine which are different from which others.<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s02_s01_p02\">One approach to post hoc comparisons would be to conduct a series of independent-samples <em class=\"emphasis\">t<\/em> tests comparing each group mean to each of the other group means. But there is a problem with this approach. In general, if we conduct a <em class=\"emphasis\">t<\/em> test when the null hypothesis is true, we have a 5% chance of mistakenly rejecting the null hypothesis (see <a class=\"xref\" href=\"http:\/\/open.lib.umn.edu\/psychologyresearchmethods\/?p=452#price_1.0-ch13_s03\">Section 13.3 &#8220;Additional Considerations&#8221;<\/a> for more on such Type I errors). 
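<\/p>
<p class=\"para editable block\">How quickly this risk compounds is easy to quantify. If the tests were independent (an assumption made here for illustration; post hoc comparisons generally are not fully independent), the probability of at least one mistaken rejection across <em class=\"emphasis\">k<\/em> tests at \u03b1 = .05 is 1 \u2212 .95<sup><em class=\"emphasis\">k<\/em><\/sup>. A quick sketch:<\/p>

```python
# Familywise Type I error rate across k independent tests at alpha = .05
alpha = 0.05
for k in (1, 3, 6, 10):
    p_at_least_one = 1 - (1 - alpha) ** k
    print(k, round(p_at_least_one, 3))
```

<p class=\"para editable block\">With three groups there are already three pairwise comparisons, so the familywise rate is well above 5%.<\/p>
<p class=\"para editable block\">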
If we conduct several <em class=\"emphasis\">t<\/em> tests when the null hypothesis is true, the chance of mistakenly rejecting <em class=\"emphasis\">at least one<\/em> null hypothesis increases with each test we conduct. Thus researchers do not usually make post hoc comparisons using standard <em class=\"emphasis\">t<\/em> tests because there is too great a chance that they will mistakenly reject at least one null hypothesis. Instead, they use one of several modified <em class=\"emphasis\">t<\/em> test procedures\u2014among them the Bonferroni procedure, Fisher\u2019s least significant difference (LSD) test, and Tukey\u2019s honestly significant difference (HSD) test. The details of these approaches are beyond the scope of this book, but it is important to understand their purpose: to keep the risk of mistakenly rejecting a true null hypothesis at an acceptable level (close to 5%).<\/p>\n<\/p><\/div>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s02_s02_s02\">\n<h2 class=\"title editable block\">Repeated-Measures ANOVA<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s02_s02_p01\">Recall that the one-way ANOVA is appropriate for between-subjects designs in which the means being compared come from separate groups of participants. It is not appropriate for within-subjects designs in which the means being compared come from the same participants tested under different conditions or at different times. This requires a slightly different approach, called the <span class=\"margin_term\"><b>repeated-measures ANOVA<\/b><\/span>. The basics of the repeated-measures ANOVA are the same as for the one-way ANOVA. The main difference is that measuring the dependent variable multiple times for each participant allows for a more refined measure of <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub>. Imagine, for example, that the dependent variable in a study is a measure of reaction time.
Some participants will be faster or slower than others because of stable individual differences in their nervous systems, muscles, and other factors. In a between-subjects design, these stable individual differences would simply add to the variability within the groups and increase the value of <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub>. In a within-subjects design, however, these stable individual differences can be measured and subtracted from the value of <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub>. This lower value of <em class=\"emphasis\">MS<\/em><sub class=\"subscript\"><em class=\"emphasis\">W<\/em><\/sub> means a higher value of <em class=\"emphasis\">F<\/em> and a more sensitive test.<\/p>\n<\/p><\/div>\n<\/p><\/div>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s02_s03\">\n<h2 class=\"title editable block\">Factorial ANOVA<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s02_s03_p01\">When more than one independent variable is included in a factorial design, the appropriate approach is the <span class=\"margin_term\"><b>factorial ANOVA<\/b><\/span>. Again, the basics of the factorial ANOVA are the same as for the one-way and repeated-measures ANOVAs. The main difference is that it produces an <em class=\"emphasis\">F<\/em> ratio and <em class=\"emphasis\">p<\/em> value for each main effect and for each interaction. Returning to our calorie estimation example, imagine that the health psychologist tests the effect of participant major (psychology vs. nutrition) and food type (cookie vs. hamburger) in a factorial design. A factorial ANOVA would produce separate <em class=\"emphasis\">F<\/em> ratios and <em class=\"emphasis\">p<\/em> values for the main effect of major, the main effect of food type, and the interaction between major and food. 
Appropriate modifications must be made depending on whether the design is between subjects, within subjects, or mixed.<\/p>\n<\/p><\/div>\n<\/p><\/div>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s03\">\n<h2 class=\"title editable block\">Testing Pearson\u2019s <em class=\"emphasis\">r<\/em><br \/>\n<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s03_p01\">For relationships between quantitative variables, where Pearson\u2019s <em class=\"emphasis\">r<\/em> is used to describe the strength of those relationships, the appropriate null hypothesis test is a test of Pearson\u2019s <em class=\"emphasis\">r<\/em>. The basic logic is exactly the same as for other null hypothesis tests. In this case, the null hypothesis is that there is no relationship in the population. We can use the Greek lowercase rho (\u03c1) to represent the relevant parameter: \u03c1 = 0. The alternative hypothesis is that there is a relationship in the population: \u03c1 \u2260 0. As with the <em class=\"emphasis\">t<\/em> test, this test can be two-tailed if the researcher has no expectation about the direction of the relationship or one-tailed if the researcher expects the relationship to go in a particular direction.<\/p>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s03_p02\">It is possible to use Pearson\u2019s <em class=\"emphasis\">r<\/em> for the sample to compute a <em class=\"emphasis\">t<\/em> score with <em class=\"emphasis\">N<\/em> \u2212 2 degrees of freedom and then to proceed as for a <em class=\"emphasis\">t<\/em> test. However, because of the way it is computed, Pearson\u2019s <em class=\"emphasis\">r<\/em> can also be treated as its own test statistic. The online statistical tools and statistical software such as Excel and SPSS generally compute Pearson\u2019s <em class=\"emphasis\">r<\/em> and provide the <em class=\"emphasis\">p<\/em> value associated with that value of Pearson\u2019s <em class=\"emphasis\">r<\/em>. 
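<\/p>
<p class=\"para editable block\">The <em class=\"emphasis\">r<\/em>-to-<em class=\"emphasis\">t<\/em> conversion just described takes only a line of code. A sketch in Python with illustrative values (<em class=\"emphasis\">r<\/em> = \u2212.21 in a sample of <em class=\"emphasis\">N<\/em> = 22):<\/p>

```python
import math

# Convert a sample Pearson's r to a t score with n - 2 degrees of freedom
def r_to_t(r, n):
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

t = r_to_t(-0.21, 22)  # about -0.96, well short of the critical t for df = 20
```

<p class=\"para editable block\">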
As always, if the <em class=\"emphasis\">p<\/em> value is less than .05, we reject the null hypothesis and conclude that there is a relationship between the variables in the population. If the <em class=\"emphasis\">p<\/em> value is greater than .05, we retain the null hypothesis and conclude that there is not enough evidence to say there is a relationship in the population. If we compute Pearson\u2019s <em class=\"emphasis\">r<\/em> by hand, we can use a table like <a class=\"xref\" href=\"#price_1.0-ch13_s02_s03_t01\">Table 13.5 &#8220;Table of Critical Values of Pearson\u2019s <em class=\"emphasis\">r<\/em>&#8221;<\/a>, which shows the critical values of <em class=\"emphasis\">r<\/em> for various sample sizes when \u03b1 is .05. A sample value of Pearson\u2019s <em class=\"emphasis\">r<\/em> that is more extreme than the critical value is statistically significant.<\/p>\n<div class=\"table block\" id=\"price_1.0-ch13_s02_s03_t01\">\n<p class=\"title\"><span class=\"title-prefix\">Table 13.5<\/span> Table of Critical Values of Pearson\u2019s <em class=\"emphasis\">r<\/em> When \u03b1 = .05<\/p>\n<table cellpadding=\"0\" style=\"border-spacing: 0px;\">\n<thead>\n<tr>\n<th align=\"right\">\n                        <\/th>\n<th colspan=\"2\" align=\"right\">Critical value of <em class=\"emphasis\">r<\/em>\n<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td align=\"right\"><em class=\"emphasis\">N<\/em><\/td>\n<td align=\"right\">One-tailed<\/td>\n<td align=\"right\">Two-tailed<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">5<\/td>\n<td align=\"right\">.805<\/td>\n<td align=\"right\">.878<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">10<\/td>\n<td align=\"right\">.549<\/td>\n<td align=\"right\">.632<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">15<\/td>\n<td align=\"right\">.441<\/td>\n<td align=\"right\">.514<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">20<\/td>\n<td align=\"right\">.378<\/td>\n<td align=\"right\">.444<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">25<\/td>\n<td align=\"right\">.337<\/td>\n<td align=\"right\">.396<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">30<\/td>\n<td align=\"right\">.306<\/td>\n<td align=\"right\">.361<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">35<\/td>\n<td align=\"right\">.283<\/td>\n<td align=\"right\">.334<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">40<\/td>\n<td align=\"right\">.264<\/td>\n<td align=\"right\">.312<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">45<\/td>\n<td align=\"right\">.248<\/td>\n<td align=\"right\">.294<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">50<\/td>\n<td align=\"right\">.235<\/td>\n<td align=\"right\">.279<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">55<\/td>\n<td align=\"right\">.224<\/td>\n<td align=\"right\">.266<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">60<\/td>\n<td align=\"right\">.214<\/td>\n<td align=\"right\">.254<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">65<\/td>\n<td align=\"right\">.206<\/td>\n<td align=\"right\">.244<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">70<\/td>\n<td align=\"right\">.198<\/td>\n<td align=\"right\">.235<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">75<\/td>\n<td align=\"right\">.191<\/td>\n<td align=\"right\">.227<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">80<\/td>\n<td align=\"right\">.185<\/td>\n<td align=\"right\">.220<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">85<\/td>\n<td align=\"right\">.180<\/td>\n<td align=\"right\">.213<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">90<\/td>\n<td align=\"right\">.174<\/td>\n<td align=\"right\">.207<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">95<\/td>\n<td align=\"right\">.170<\/td>\n<td align=\"right\">.202<\/td>\n<\/tr>\n<tr>\n<td align=\"right\">100<\/td>\n<td align=\"right\">.165<\/td>\n<td align=\"right\">.197<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<div class=\"section\" id=\"price_1.0-ch13_s02_s03_s01\">\n<h2 class=\"title editable block\">Example Test of Pearson\u2019s <em class=\"emphasis\">r<\/em><br \/>\n<\/h2>\n<p class=\"para editable block\" id=\"price_1.0-ch13_s02_s03_s01_p01\">Imagine that the health psychologist is interested in the
correlation between people\u2019s calorie estimates and their weight. He has no expectation about the direction of the relationship, so he decides to conduct a two-tailed test. He computes the correlation for a sample of 22 college students and finds that Pearson\u2019s <em class=\"emphasis\">r<\/em> is \u2212.21. The statistical software he uses tells him that the <em class=\"emphasis\">p<\/em> value is .348. It is greater than .05, so he retains the null hypothesis and concludes that there is not enough evidence of a relationship between people\u2019s calorie estimates and their weight. If he were to compute Pearson\u2019s <em class=\"emphasis\">r<\/em> by hand, he could look at <a class=\"xref\" href=\"#price_1.0-ch13_s02_s03_t01\">Table 13.5 &#8220;Table of Critical Values of Pearson\u2019s <em class=\"emphasis\">r<\/em>&#8221;<\/a> and see that for the nearest sample size listed in the table, <em class=\"emphasis\">N<\/em> = 20, the two-tailed critical value is .444. The fact that Pearson\u2019s <em class=\"emphasis\">r<\/em> for the sample is less extreme than this critical value tells him that the <em class=\"emphasis\">p<\/em> value is greater than .05 and that he should retain the null hypothesis.<\/p>\n<div class=\"bcc-box bcc-success\" id=\"price_1.0-ch13_s02_s03_s01_n01\">\n<h3 class=\"title\">Key Takeaways<\/h3>\n<ul class=\"itemizedlist\" id=\"price_1.0-ch13_s02_s03_s01_l01\">\n<li>To compare two means, the most common null hypothesis test is the <em class=\"emphasis\">t<\/em> test. The one-sample <em class=\"emphasis\">t<\/em> test is used for comparing one sample mean with a hypothetical population mean of interest, the dependent-samples <em class=\"emphasis\">t<\/em> test is used to compare two means in a within-subjects design, and the independent-samples <em class=\"emphasis\">t<\/em> test is used to compare two means in a between-subjects design.<\/li>\n<li>To compare more than two means, the most common null hypothesis test is the analysis of variance (ANOVA). 
The one-way ANOVA is used for between-subjects designs with one independent variable, the repeated-measures ANOVA is used for within-subjects designs, and the factorial ANOVA is used for factorial designs.<\/li>\n<li>A null hypothesis test of Pearson\u2019s <em class=\"emphasis\">r<\/em> is used to compare a sample value of Pearson\u2019s <em class=\"emphasis\">r<\/em> with a hypothetical population value of 0.<\/li>\n<\/ul>\n<\/div>\n<div class=\"bcc-box bcc-info\" id=\"price_1.0-ch13_s02_s03_s01_n02\">\n<h3 class=\"title\">Exercises<\/h3>\n<ol class=\"orderedlist\" id=\"price_1.0-ch13_s02_s03_s01_l02\">\n<li>Practice: Use one of the online tools, Excel, or SPSS to reproduce the one-sample <em class=\"emphasis\">t<\/em> test, dependent-samples <em class=\"emphasis\">t<\/em> test, independent-samples <em class=\"emphasis\">t<\/em> test, and one-way ANOVA for the four sets of calorie estimation data presented in this section.<\/li>\n<li>Practice: A sample of 25 college students rated their friendliness on a scale of 1 (<em class=\"emphasis\">Much Lower Than Average<\/em>) to 7 (<em class=\"emphasis\">Much Higher Than Average<\/em>). Their mean rating was 5.30 with a standard deviation of 1.50. Conduct a one-sample <em class=\"emphasis\">t<\/em> test comparing their mean rating with a hypothetical mean rating of 4 (<em class=\"emphasis\">Average<\/em>). The question is whether college students have a tendency to rate themselves as friendlier than average.<\/li>\n<li>Practice: Decide whether each of the following Pearson\u2019s <em class=\"emphasis\">r<\/em> values is statistically significant for both a one-tailed and a two-tailed test. (a) The correlation between height and IQ is +.13 in a sample of 35. (b) For a sample of 88 college students, the correlation between how disgusted they felt and the harshness of their moral judgments was +.23. 
(c) The correlation between the number of daily hassles and positive mood is \u2212.43 for a sample of 30 middle-aged adults.<\/li>\n<\/ol>\n<\/div><\/div>\n<\/p><\/div>\n\n\t\t\t <section class=\"citations-section\" role=\"contentinfo\">\n\t\t\t <h3>Candela Citations<\/h3>\n\t\t\t\t\t <div>\n\t\t\t\t\t\t <div id=\"citation-list-136\">\n\t\t\t\t\t\t\t <div class=\"licensing\"><div class=\"license-attribution-dropdown-subheading\">CC licensed content, Shared previously<\/div><ul class=\"citation-list\"><li>Research Methods in Psychology. <strong>Provided by<\/strong>: University of Minnesota Libraries Publishing. <strong>Located at<\/strong>: <a target=\"_blank\" href=\"http:\/\/open.lib.umn.edu\/psychologyresearchmethods\">http:\/\/open.lib.umn.edu\/psychologyresearchmethods<\/a>. <strong>License<\/strong>: <em><a target=\"_blank\" rel=\"license\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/4.0\/\">CC BY-NC-SA: Attribution-NonCommercial-ShareAlike<\/a><\/em><\/li><\/ul><\/div>\n\t\t\t\t\t\t <\/div>\n\t\t\t\t\t <\/div>\n\t\t\t <\/section>","protected":false},"author":23485,"menu_order":2,"template":"","meta":{"_candela_citation":"[{\"type\":\"cc\",\"description\":\"Research Methods in Psychology\",\"author\":\"\",\"organization\":\"University of Minnesota Libraries 
Publishing\",\"url\":\"http:\/\/open.lib.umn.edu\/psychologyresearchmethods\",\"project\":\"\",\"license\":\"cc-by-nc-sa\",\"license_terms\":\"\"}]","CANDELA_OUTCOMES_GUID":"","pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-136","chapter","type-chapter","status-publish","hentry"],"part":132,"_links":{"self":[{"href":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/wp-json\/pressbooks\/v2\/chapters\/136","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/wp-json\/wp\/v2\/users\/23485"}],"version-history":[{"count":0,"href":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/wp-json\/pressbooks\/v2\/chapters\/136\/revisions"}],"part":[{"href":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/wp-json\/pressbooks\/v2\/parts\/132"}],"metadata":[{"href":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/wp-json\/pressbooks\/v2\/chapters\/136\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/wp-json\/wp\/v2\/media?parent=136"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/wp-json\/pressbooks\/v2\/chapter-type?post=136"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/wp-json\/wp\/v2\/contributor?post=136"},{"taxonomy":"license","embeddable":true,"href
":"https:\/\/courses.lumenlearning.com\/suny-geneseo-psychologyresearchmethods\/wp-json\/wp\/v2\/license?post=136"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}