<h1>Putting It Together: Chi-Square Tests</h1>
<h3>Let’s Summarize</h3>
In this module, <em>Chi-Square Tests</em>, we discussed three hypothesis tests that use the chi-square test statistic:
<ul>
 	<li>Goodness-of-fit for a one-way table</li>
 	<li>Test of independence for a two-way table</li>
 	<li>Test of homogeneity for a two-way table</li>
</ul>
<h3>Goodness-of-Fit Test for a One-Way Table</h3>
<ul>
 	<li>In a goodness-of-fit test, we consider one population and one categorical variable.</li>
 	<li>The goodness-of-fit test extends the z-test for a population proportion, which we learned in <em>Inference for One Proportion</em>, by looking at the distribution of proportions across all categories of the categorical variable.</li>
 	<li>The goodness-of-fit test determines whether a set of categorical data comes from a claimed distribution. The null hypothesis states that the proportions across the categories in the population follow a specific claimed distribution. 
The alternative hypothesis states that the population proportions are not distributed as specified in the null hypothesis.</li>
 	<li>To test these hypotheses, we select a random sample from the population and gather data for one categorical variable.</li>
</ul>
<h3>Test of Independence for a Two-Way Table</h3>
<ul>
 	<li>In the test of independence, we consider one population and two categorical variables.</li>
 	<li>In <em>Probability and Probability Distribution</em>, we learned that two events are independent if <em>P</em>(<em>A</em>|<em>B</em>) = <em>P</em>(<em>A</em>), but we did not account for variability in the sample. The chi-square test of independence gives us a method for deciding whether the observed <em>P</em>(<em>A</em>|<em>B</em>) is “too far” from the observed <em>P</em>(<em>A</em>) for the variables to plausibly be independent in the population.</li>
 	<li>The null hypothesis says the two variables are independent (not associated). The alternative hypothesis says the two variables are dependent (associated).</li>
 	<li>To test these hypotheses, we select a single random sample and gather data for two categorical variables.</li>
</ul>
<h3>Test of Homogeneity for a Two-Way Table</h3>
<ul>
 	<li>In the test of homogeneity, we consider two or more populations (or two or more subgroups of a population) and a single categorical variable.</li>
 	<li>The test of homogeneity extends the test for a difference in two population proportions, which we learned in <em>Inference for Two Proportions</em>, by comparing the distribution of the categorical variable across multiple groups or populations.</li>
 	<li>The null hypothesis says that the distribution of proportions over the categories is the same in every group or population. 
The alternative hypothesis says that the distributions differ.</li>
 	<li>To test these hypotheses, we select an independent random sample from each population or subgroup and gather data for one categorical variable.</li>
</ul>
<h3>The Chi-Square Test Statistic and Distribution</h3>
All three chi-square tests use the same test statistic, χ<sup>2</sup>. It measures how far the observed data are from the null hypothesis by comparing observed counts to expected counts. <em>Expected counts</em> are the counts we expect to see if the null hypothesis is true.
<p style="text-align: center;">[latex]{\chi }^{2}=\sum \frac{{(\mathrm{observed}-\mathrm{expected})}^{2}}{\mathrm{expected}}[/latex]</p>
The chi-square model is a family of curves that depends on the degrees of freedom. For a one-way table with <em>r</em> categories, the degrees of freedom equal (<em>r</em> − 1). For a two-way table with <em>r</em> rows and <em>c</em> columns, the degrees of freedom equal (<em>r</em> − 1)(<em>c</em> − 1). All chi-square curves are skewed to the right with a mean equal to the degrees of freedom.

A chi-square model is a good fit for the distribution of the chi-square test statistic only if the following conditions are met:
<ul>
 	<li>The sample is randomly selected.</li>
 	<li>All expected counts are 5 or greater.</li>
</ul>
If these conditions are met, we use the chi-square distribution to find the P-value, and we draw a conclusion from the P-value using the same logic as in every other hypothesis test: if the P-value is less than or equal to the significance level, we reject the null hypothesis in favor of the alternative hypothesis. 
The P-value is the probability that a random sample would produce a χ<sup>2</sup> value equal to or greater than the one calculated from the data, assuming the null hypothesis is true.
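The computations summarized above can be sketched in a few lines of Python. The counts and claimed proportions below are hypothetical, chosen only to illustrate the formulas for the test statistic, the expected counts, and the degrees of freedom; the test of homogeneity uses the same arithmetic as the test of independence.

```python
# Worked sketch of the chi-square computations, using made-up data.

def chi_square_statistic(observed, expected):
    """chi^2 = sum of (observed - expected)^2 / expected over all cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# --- Goodness-of-fit test (one-way table) ---
# Hypothetical null hypothesis: categories occur in proportions 50% / 30% / 20%.
observed = [52, 38, 10]
n = sum(observed)                       # total sample size
claimed = [0.50, 0.30, 0.20]
expected = [n * p for p in claimed]     # expected count = n * claimed proportion
gof_stat = chi_square_statistic(observed, expected)
gof_df = len(observed) - 1              # r - 1 degrees of freedom

# --- Test of independence (two-way table) ---
# Expected count for cell (i, j) = row total * column total / grand total.
table = [[30, 20],
         [20, 30]]
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)
ind_stat = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(len(table))
    for j in range(len(table[0]))
)
ind_df = (len(table) - 1) * (len(table[0]) - 1)   # (r - 1)(c - 1)
```

With the statistic and degrees of freedom in hand, the P-value is the upper-tail area under the corresponding chi-square curve; in practice a calculator, statistical table, or a library call such as `scipy.stats.chi2.sf(statistic, df)` supplies it. Note that every expected count above is 5 or greater, so the chi-square model is a reasonable fit.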
<section class="citations-section" role="contentinfo">
<h3>Candela Citations</h3>
<div class="licensing">
<div class="license-attribution-dropdown-subheading">CC licensed content, Shared previously</div>
<ul class="citation-list">
<li>Concepts in Statistics. <strong>Provided by</strong>: Open Learning Initiative. <strong>Located at</strong>: <a target="_blank" href="http://oli.cmu.edu">http://oli.cmu.edu</a>. <strong>License</strong>: <em><a target="_blank" rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC BY: Attribution</a></em></li>
</ul>
</div>
</section>