Learning Outcomes
- Calculate an F-ratio or F statistic using formulas or using technology
Recall: Ratios
A ratio compares two numbers or two quantities that are measured with the same unit. The ratio of [latex]a[/latex] to [latex]b[/latex] is written [latex]a\text{ to }b[/latex], [latex]{\Large\frac{a}{b}}[/latex], or [latex]a\text{:}b[/latex].
The distribution used for the hypothesis test is a new one. It is called the F distribution, named after Sir Ronald Fisher, an English statistician. The F statistic is a ratio (a fraction). There are two sets of degrees of freedom: one for the numerator and one for the denominator.
For example, if F follows an F distribution and the number of degrees of freedom for the numerator is four, and the number of degrees of freedom for the denominator is ten, then [latex]F \sim F_{4,10}[/latex].
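Probabilities and critical values for an F distribution are usually found with technology rather than printed tables. The following is a minimal sketch, assuming Python with SciPy (not a tool required by this course), showing a right-tail probability and the 5% critical value for [latex]F_{4,10}[/latex]; the observed F value of 3.5 is hypothetical.

```python
# A minimal sketch using SciPy's F distribution (assumes scipy is installed).
from scipy import stats

dfn, dfd = 4, 10          # numerator and denominator degrees of freedom
f_observed = 3.5          # a hypothetical observed F statistic

# Right-tail probability P(F > 3.5) for F ~ F(4, 10)
p_value = stats.f.sf(f_observed, dfn, dfd)

# Critical value that cuts off the top 5% of the F(4, 10) distribution
f_critical = stats.f.ppf(0.95, dfn, dfd)

print(p_value, f_critical)   # roughly 0.049 and 3.48
```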
Note
The F distribution is derived from the Student’s t-distribution: with one numerator degree of freedom, the values of the F distribution are the squares of the corresponding values of the t-distribution. One-Way ANOVA expands the t-test for comparing more than two groups; the scope of that derivation is beyond the level of this course. When there are more than two groups, it is preferable to use ANOVA rather than performing pairwise t-tests, because performing multiple tests increases the likelihood of making a Type I error.
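The square relationship can be checked numerically in the one-numerator-degree-of-freedom case: if [latex]T[/latex] follows a t-distribution with [latex]\nu[/latex] degrees of freedom, then [latex]T^2[/latex] follows [latex]F_{1,\nu}[/latex]. A minimal sketch, again assuming SciPy; the specific values (a t statistic of 2.0 with 10 degrees of freedom) are hypothetical.

```python
# Checking the t-squared / F relationship numerically (assumes scipy is installed).
from scipy import stats

df = 10        # denominator degrees of freedom
t_value = 2.0  # a hypothetical t statistic

# Two-tailed t probability equals the right-tail F(1, df) probability at t^2
p_t = 2 * stats.t.sf(t_value, df)        # P(|T| > 2.0)
p_f = stats.f.sf(t_value ** 2, 1, df)    # P(F > 4.0) for F ~ F(1, 10)

print(round(p_t, 6), round(p_f, 6))      # both print the same probability
```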
To calculate the F ratio, two estimates of the variance are made.
- Variance between samples: an estimate of [latex]\sigma^2[/latex] that is the variance of the sample means multiplied by [latex]n[/latex] (when the sample sizes are the same). If the samples are different sizes, the variance between samples is weighted to account for the different sample sizes. This variance is also called variation due to treatment or explained variation.
- Variance within samples: an estimate of [latex]\sigma^2[/latex] that is the average of the sample variances (also known as a pooled variance). When the sample sizes are different, the variance within samples is weighted. This variance is also called variation due to error or unexplained variation.
- [latex]SS_{\text{between}}[/latex] = the sum of squares that represents the variation among the different samples.
- [latex]SS_{\text{within}}[/latex] = the sum of squares that represents the variation within samples that is due to chance.
To find a “sum of squares” means to add together squared quantities that, in some cases, may be weighted.
Recall: ORDER OF OPERATIONS
Please | Excuse | My Dear | Aunt Sally |
---|---|---|---|
parentheses | exponents | multiplication or division | addition or subtraction |
[latex]( \ )[/latex] | [latex]x^2[/latex] | [latex]\times \ \mathrm{or} \ \div[/latex] | [latex]+ \ \mathrm{or} \ -[/latex] |
To calculate the sum of squares (SS), follow these steps (the best way to organize the calculations is in a table; an example follows below):
Step 1: Calculate the mean of the data set. To do this, sum all of the data values and divide by how many data values are in the set.
Step 2: Check whether any data values appear more than once; if so, record the frequency with which each of those values appears.
Step 3: For each data value, calculate the difference between the data value and the sample mean, [latex](x- \overline{x})[/latex].
Step 4: Square each of the differences you found in step 3.
Step 5: Analyze the data and see if any numbers appear more than once, if they do, multiply the squared difference of that data value by its frequency.
Step 6: Add all of the squared deviations; in other words, sum the squared deviations.
Recall: Module 2 Example
The scenario is as follows:
In a fifth grade class, the teacher was interested in the average age and the sample standard deviation of the ages of her students. The following data are the ages for a sample of [latex]n = 20[/latex] fifth grade students. The ages are rounded to the nearest half year:
[latex]\displaystyle {9; 9.5; 9.5; 10; 10; 10; 10; 10.5; 10.5; 10.5; 10.5; 11; 11; 11; 11; 11; 11; 11.5; 11.5; 11.5;}[/latex]
The sample mean is calculated as:
[latex]\displaystyle\overline{x} = \frac{9+9.5(2)+10(4)+10.5(4)+11(6)+11.5(3)}{20}={10.525}[/latex]
The sample mean age is [latex]10.53[/latex] years, rounded to two decimal places.
The following table shows how to calculate the [latex]SS[/latex] described above.
Data | Freq. | Deviations | [latex]\text{Deviations}^2[/latex] | (Freq.)([latex]\text{Deviations}^2[/latex]) |
---|---|---|---|---|
[latex]x[/latex] | [latex]f[/latex] | [latex](x - \overline{x})[/latex] | [latex](x - \overline{x})^2[/latex] | [latex](f)(x - \overline{x})^2[/latex] |
[latex]9[/latex] | [latex]1[/latex] | [latex]9 - 10.525 = -1.525[/latex] | [latex](-1.525)^2 = 2.325625[/latex] | [latex]1 \times 2.325625 = 2.325625[/latex] |
[latex]9.5[/latex] | [latex]2[/latex] | [latex]9.5 - 10.525 = -1.025[/latex] | [latex](-1.025)^2 = 1.050625[/latex] | [latex]2 \times 1.050625 = 2.101250[/latex] |
[latex]10[/latex] | [latex]4[/latex] | [latex]10 - 10.525 = -0.525[/latex] | [latex](-0.525)^2 = 0.275625[/latex] | [latex]4 \times 0.275625 = 1.1025[/latex] |
[latex]10.5[/latex] | [latex]4[/latex] | [latex]10.5 - 10.525 = -0.025[/latex] | [latex](-0.025)^2 = 0.000625[/latex] | [latex]4 \times 0.000625 = 0.0025[/latex] |
[latex]11[/latex] | [latex]6[/latex] | [latex]11 - 10.525 = 0.475[/latex] | [latex](0.475)^2 = 0.225625[/latex] | [latex]6 \times 0.225625 = 1.35375[/latex] |
[latex]11.5[/latex] | [latex]3[/latex] | [latex]11.5 - 10.525 = 0.975[/latex] | [latex](0.975)^2 = 0.950625[/latex] | [latex]3 \times 0.950625 = 2.851875[/latex] |
| | | Total | [latex]9.7375[/latex] |
[latex]SS[/latex] is [latex]9.7375[/latex].
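The hand calculation above can be verified with a few lines of code. The following is a minimal sketch, assuming Python with NumPy (not a tool required by this course); variable names are illustrative.

```python
# Reproducing SS for the fifth-grade ages (a check of the hand calculation above).
import numpy as np

ages = np.array([9, 9.5, 9.5, 10, 10, 10, 10, 10.5, 10.5, 10.5,
                 10.5, 11, 11, 11, 11, 11, 11, 11.5, 11.5, 11.5])

mean_age = ages.mean()                 # 10.525
ss = ((ages - mean_age) ** 2).sum()    # sum of squared deviations

print(mean_age, ss)                    # 10.525 and 9.7375 (up to floating-point rounding)
```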
MS means “mean square.” [latex]MS_{\text{between}}[/latex] is the variance between groups, and [latex]MS_{\text{within}}[/latex] is the variance within groups.
Calculation of Sum of Squares and Mean Square
- [latex]k[/latex] = the number of different groups
- [latex]n_j[/latex] = the size of the [latex]j[/latex]th group
- [latex]s_j[/latex] = the sum of the values in the [latex]j[/latex]th group
- [latex]n[/latex] = total number of all the values combined (total sample size: [latex]\sum n_j[/latex])
- [latex]x[/latex] = one value: [latex]\sum x = \sum s_j[/latex]
- Sum of squares of all values from every group combined: [latex]\sum x^2[/latex]
- Total sum of squares: [latex]\displaystyle SS_{\text{total}}=\sum{x}^{2}-\frac{(\sum{x})^{2}}{n}[/latex]
- Explained variation: sum of squares representing variation among the different samples: [latex]\displaystyle SS_{\text{between}}=\sum\left[\frac{(s_j)^{2}}{n_j}\right]-\frac{(\sum s_j)^{2}}{n}[/latex]
- Unexplained variation: sum of squares representing variation within samples due to chance: [latex]\displaystyle{S}{S}_{{\text{within}}}={S}{S}_{{\text{total}}}-{S}{S}_{{\text{between}}}[/latex]
- df's for different groups (df's for the numerator): [latex]df_{\text{between}} = k - 1[/latex]
- Equation for errors within samples (df's for the denominator): [latex]df_{\text{within}} = n - k[/latex]
- Mean square (variance estimate) explained by the different groups: [latex]\displaystyle{M}{S}_{{\text{between}}}=\frac{{{S}{S}_{{\text{between}}}}}{{{d}{f}_{{\text{between}}}}}[/latex]
- Mean square (variance estimate) that is due to chance (unexplained): [latex]\displaystyle{M}{S}_{{\text{within}}}=\frac{{{S}{S}_{{\text{within}}}}}{{{d}{f}_{{\text{within}}}}}[/latex]
[latex]MS_{\text{between}}[/latex] and [latex]MS_{\text{within}}[/latex] can be written as follows:
- [latex]\displaystyle{M}{S}_{{\text{between}}}=\frac{{{S}{S}_{{\text{between}}}}}{{{d}{f}_{{\text{between}}}}}=\frac{{{S}{S}_{{\text{between}}}}}{{{k}-{1}}}[/latex]
- [latex]\displaystyle{M}{S}_{{\text{within}}}=\frac{{{S}{S}_{{\text{within}}}}}{{{d}{f}_{{\text{within}}}}}=\frac{{{S}{S}_{{\text{within}}}}}{{{n}-{k}}}[/latex]
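These formulas translate directly into code. The following is a minimal sketch of the sums of squares and mean squares for any list of groups; it is not a required implementation, and the function name and structure are illustrative.

```python
# A sketch of the one-way ANOVA sums of squares and mean squares,
# following the formulas above (names are illustrative).

def anova_mean_squares(groups):
    """groups: a list of lists, one list of values per group."""
    k = len(groups)                                 # number of groups
    n = sum(len(g) for g in groups)                 # total sample size
    sum_all = sum(sum(g) for g in groups)           # sum of x over all groups
    sum_sq_all = sum(x ** 2 for g in groups for x in g)   # sum of x^2 over all groups

    ss_total = sum_sq_all - sum_all ** 2 / n
    ss_between = sum(sum(g) ** 2 / len(g) for g in groups) - sum_all ** 2 / n
    ss_within = ss_total - ss_between

    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ss_between, ss_within, ms_between, ms_within
```

Calling this function on the three diet plans in Example 1 below reproduces the hand calculations shown there.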
The one-way ANOVA test depends on the fact that [latex]MS_{\text{between}}[/latex] can be influenced by population differences among means of the several groups. Since [latex]MS_{\text{within}}[/latex] compares values of each group to its own group mean, the fact that group means might be different does not affect [latex]MS_{\text{within}}[/latex].
The null hypothesis says that all groups are samples from populations having the same normal distribution. The alternate hypothesis says that at least two of the sample groups come from populations with different normal distributions. If the null hypothesis is true, [latex]MS_{\text{between}}[/latex] and [latex]MS_{\text{within}}[/latex] should both estimate the same value.
Note
The null hypothesis says that all the group population means are equal. The hypothesis of equal means implies that the populations have the same normal distribution because it is assumed that the populations are normal and that they have equal variances.
F-Ratio or F Statistic
[latex]\displaystyle{F}=\frac{{{M}{S}_{{\text{between}}}}}{{{M}{S}_{{\text{within}}}}}[/latex]
If [latex]MS_{\text{between}}[/latex] and [latex]MS_{\text{within}}[/latex] estimate the same value (following the belief that [latex]H_0[/latex] is true), then the F-ratio should be approximately equal to one. Mostly, just sampling errors would contribute to variations away from one. As it turns out, [latex]MS_{\text{between}}[/latex] consists of the population variance plus a variance produced from the differences between the samples. [latex]MS_{\text{within}}[/latex] is an estimate of the population variance. Since variances are always positive, if the null hypothesis is false, [latex]MS_{\text{between}}[/latex] will generally be larger than [latex]MS_{\text{within}}[/latex]. Then the F-ratio will be larger than one. However, if the population effect is small, it is not unlikely that [latex]MS_{\text{within}}[/latex] will be larger in a given sample.
The foregoing calculations were done with groups of different sizes. If the groups are the same size, the calculations simplify somewhat and the F-ratio can be written as:
F-Ratio Formula when the groups are the same size
[latex]F = \dfrac{n \cdot {s_{\overline{x}}}^{2}}{s^{2}_{\text{pooled}}}[/latex]
where…
- [latex]n[/latex] = the size of each sample (the groups are all the same size)
- [latex]df_{\text{numerator}} = k - 1[/latex]
- [latex]df_{\text{denominator}} = n - k[/latex], where here [latex]n[/latex] is the total number of values from all the groups combined
- [latex]s^2_{\text{pooled}}[/latex] = the mean of the sample variances (pooled variance)
- [latex]{s_{\overline{x}}}^{2}[/latex] = the variance of the sample means
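The following is a minimal sketch of this equal-group-size shortcut, assuming NumPy; the three groups of data are made up for illustration.

```python
# F ratio for equal-sized groups: n * (variance of the sample means) / (pooled variance).
import numpy as np

groups = [[5, 4, 6], [8, 7, 9], [4, 3, 5]]    # hypothetical data: three groups of n = 3
n = len(groups[0])                             # common group size

group_means = [np.mean(g) for g in groups]
var_of_means = np.var(group_means, ddof=1)     # variance of the sample means
pooled_var = np.mean([np.var(g, ddof=1) for g in groups])   # mean of the sample variances

F = n * var_of_means / pooled_var
print(F)   # 13.0 for this made-up data
```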
Data are typically put into a table for easy viewing. One-Way ANOVA results are often displayed in this manner by computer software.
Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Square (MS) | F |
---|---|---|---|---|
Factor (Between) | SS(Factor) | k – 1 | MS(Factor) = SS(Factor)/(k – 1) | F = MS(Factor)/MS(Error) |
Error (Within) | SS(Error) | n – k | MS(Error) = SS(Error)/(n – k) | |
Total | SS(Total) | n – 1 | | |
Example 1
Three different diet plans are to be tested for mean weight loss. The entries in the table below are the weight losses for the different plans. The one-way ANOVA results are shown in the table at the end of this example.
Plan 1: [latex]n_1 = 4[/latex] | Plan 2: [latex]n_2 = 3[/latex] | Plan 3: [latex]n_3 = 3[/latex] |
---|---|---|
5 | 3.5 | 8 |
4.5 | 7 | 4 |
4 | | 3.5 |
3 | 4.5 | |
[latex]s_1 = 16.5[/latex], [latex]s_2 = 15[/latex], [latex]s_3 = 15.5[/latex], where [latex]s_j[/latex] is the sum of the values in group [latex]j[/latex].
Following are the calculations needed to fill in the one-way ANOVA table. The table is used to conduct a hypothesis test.
[latex]{SS}(between)=\sum{\left[\dfrac{{{({s}_{j})}^{2}}}{{{n}_{j}}}\right]}-\dfrac{{(\sum{{s}_{j})}^{2}}}{{n}}[/latex]
[latex]= {\dfrac{s_1^2}{4}} + {\dfrac{s_2^2}{3}} + {\dfrac{s_3^2}{3}} - {\dfrac{(s_1 + s_2 + s_3)^2}{10}}[/latex]
where [latex]n_1 = 4[/latex], [latex]n_2 = 3[/latex], [latex]n_3 = 3[/latex], and [latex]n = n_1 + n_2 + n_3 = 10[/latex]
[latex]\displaystyle=\frac{{({16.5})^{2}}}{{4}}+\frac{{({15})^{2}}}{{3}}+\frac{{({15.5})^{2}}}{{3}}-\frac{{{({16.5}+{15}+{15.5})}^{2}}}{{10}}[/latex]
[latex]{SS}(between) = {2.2458}[/latex]
[latex]SS(total) = \sum{x}^{2}-\dfrac{{{(\sum{x})}^{2}}}{{n}}[/latex]
[latex]\displaystyle=\left({5}^{2}+{4.5}^{2}+{4}^{2}+{3}^{2}+{3.5}^{2}+{7}^{2}+{4.5}^{2}+{8}^{2}+{4}^{2}+{3.5}^{2}\right)[/latex]
[latex]\displaystyle{-}\frac{{{\left({5}+{4.5}+{4}+{3}+{3.5}+{7}+{4.5}+{8}+{4}+{3.5}\right)}^{2}}}{{10}}[/latex]
[latex]\displaystyle={244}-\frac{{{47}^{2}}}{{10}}={244}-{220.9}[/latex]
[latex]SS(total) = 23.1[/latex]
[latex]SS(within) = SS(total) - SS(between)[/latex]
[latex]= 23.1 - 2.2458[/latex]
[latex]SS(within) = 20.8542[/latex]
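An equivalent way to obtain the same sums of squares works from the group means rather than the group sums, using [latex]SS_{\text{between}} = \sum n_j(\overline{x}_j - \overline{x})^2[/latex]. The following is a minimal NumPy sketch of that check (an alternative formulation, not the formula used above).

```python
# Checking SS(between), SS(total), and SS(within) for the three diet plans via group means.
import numpy as np

plans = [np.array([5, 4.5, 4, 3]), np.array([3.5, 7, 4.5]), np.array([8, 4, 3.5])]
all_values = np.concatenate(plans)
grand_mean = all_values.mean()                                           # 4.7

ss_between = sum(len(p) * (p.mean() - grand_mean) ** 2 for p in plans)   # about 2.2458
ss_total = ((all_values - grand_mean) ** 2).sum()                        # 23.1
ss_within = ss_total - ss_between                                        # about 20.8542

print(round(ss_between, 4), round(ss_total, 1), round(ss_within, 4))
```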
USING THE TI-83, 83+, 84, 84+ CALCULATOR
One-Way ANOVA Table: The formulas for SS(Total), SS(Factor) = SS(Between), and SS(Error) = SS(Within) are as shown previously. The same information is provided by the TI calculator hypothesis test function ANOVA in STAT TESTS (syntax is ANOVA(L1, L2, L3), where L1, L2, L3 hold the data from Plan 1, Plan 2, and Plan 3 respectively).
Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Square (MS) | F |
---|---|---|---|---|
Factor (Between) | SS(Factor) = SS(Between) = 2.2458 | k – 1 = 3 groups – 1 = 2 | MS(Factor) = SS(Factor)/(k – 1) = 2.2458/2 = 1.1229 | F = MS(Factor)/MS(Error) = 1.1229/2.9792 = 0.3769 |
Error (Within) | SS(Error) = SS(Within) = 20.8542 | n – k = 10 total data – 3 groups = 7 | MS(Error) = SS(Error)/(n – k) = 20.8542/7 = 2.9792 | |
Total | SS(Total) = 2.2458 + 20.8542 = 23.1 | n – 1 = 10 total data – 1 = 9 | | |
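For readers using software other than a TI calculator, the same F statistic can be reproduced in one call. The following is a minimal sketch assuming SciPy; the printed values are approximate.

```python
# One-way ANOVA for the three diet plans using SciPy (an alternative to the TI calculator).
from scipy import stats

plan1 = [5, 4.5, 4, 3]
plan2 = [3.5, 7, 4.5]
plan3 = [8, 4, 3.5]

result = stats.f_oneway(plan1, plan2, plan3)
print(result.statistic, result.pvalue)   # F is about 0.3769; the p-value is about 0.70
```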
Try it 1
As part of an experiment to see how different types of soil cover would affect slicing tomato production, Marist College students grew tomato plants under different soil cover conditions. Groups of three plants each had one of the following treatments:
- bare soil
- a commercial ground cover
- black plastic
- straw
- compost
All plants grew under the same conditions and were of the same variety. Students recorded the weight (in grams) of tomatoes produced by each of the n = 15 plants:
Bare: [latex]n_1 = 3[/latex] | Ground Cover: [latex]n_2 = 3[/latex] | Plastic: [latex]n_3 = 3[/latex] | Straw: [latex]n_4 = 3[/latex] | Compost: [latex]n_5 = 3[/latex] |
---|---|---|---|---|
2,625 | 5,348 | 6,583 | 7,285 | 6,277 |
2,997 | 5,682 | 8,560 | 6,897 | 7,818 |
4,915 | 5,482 | 3,830 | 9,230 | 8,677 |
Create the one-way ANOVA table.
Enter the data into lists L1, L2, L3, L4, and L5. Press STAT and arrow over to TESTS. Arrow down to ANOVA. Press ENTER and enter L1, L2, L3, L4, L5). Press ENTER. The table was filled in with the results from the calculator.
Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Square (MS) | F |
---|---|---|---|---|
Factor (Between) | 36,648,561 | 5 – 1 = 4 | [latex]\displaystyle\frac{{{36},{648},{561}}}{{4}}={9},{162},{140}[/latex] | [latex]\displaystyle\frac{{{9},{162},{140}}}{{{2},{044},{672.6}}}={4.4810}[/latex] |
Error (Within) | 20,446,726 | 15 – 5 = 10 | [latex]\displaystyle\frac{{{20},{446},{726}}}{{10}}={2},{044},{672.6}[/latex] | |
Total | 57,095,287 | 15 – 1 = 14 |
The one-way ANOVA hypothesis test is always right-tailed because larger F-values are way out in the right tail of the F-distribution curve and tend to make us reject H0.
Notation
The notation for the F distribution is [latex]F \sim F_{df(\text{num}),\,df(\text{denom})}[/latex]
where [latex]df(\text{num}) = df_{\text{between}}[/latex] and [latex]df(\text{denom}) = df_{\text{within}}[/latex]
The mean for the F distribution is [latex]\displaystyle\mu=\frac{df(\text{denom})}{df(\text{denom})-2}[/latex], provided [latex]df(\text{denom}) > 2[/latex].
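As a quick check of this formula, the mean of [latex]F_{4,10}[/latex] is [latex]10/(10-2) = 1.25[/latex]. A minimal sketch assuming SciPy:

```python
# The mean of an F distribution depends only on the denominator degrees of freedom.
from scipy import stats

dfn, dfd = 4, 10
print(stats.f.mean(dfn, dfd))   # 1.25, which equals dfd / (dfd - 2)
```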
Candela Citations
- The F Distribution and the F-Ratio. Provided by: OpenStax. Located at: https://openstax.org/books/introductory-statistics/pages/13-2-the-f-distribution-and-the-f-ratio. Project: Introductory Statistics. License: CC BY: Attribution. License Terms: Access for free at https://openstax.org/books/introductory-statistics/pages/1-introduction
- Introductory Statistics. Authored by: Barbara Illowsky, Susan Dean. Provided by: OpenStax. Located at: https://openstax.org/books/introductory-statistics/pages/1-introduction. License: CC BY: Attribution. License Terms: Access for free at https://openstax.org/books/introductory-statistics/pages/1-introduction
- Prealgebra. Provided by: OpenStax. Located at: https://openstax.org/books/prealgebra/pages/1-introduction. License: CC BY: Attribution. License Terms: Access for free at https://openstax.org/books/prealgebra/pages/1-introduction