# POS 3713 Chapter 7: K & W Chapter 7 Premium

School: Florida State University
Department: Political Science
Course: POS 3713
Professor: William Berry
Semester: Spring

Description
Statistically significant: lower p-values increase our confidence that there is indeed a relationship between the two variables in question. An assertion of statistical significance depends on a number of factors; "statistical significance" is achieved only to the extent that the assumptions underlying the calculation of the p-value hold. Most scientists use the standard of a p-value of .05: if p is less than .05, they consider a relationship to be statistically significant. Others use a more stringent standard of .01, or a looser standard of .10. Finding that X and Y have a statistically significant relationship does not necessarily mean that the relationship between X and Y is strong, or especially that the relationship is causal.

## 7.3.4 The Null Hypothesis and p-Values

Null hypothesis: "A null hypothesis is also a theory-based statement, but it is about what we would expect to observe if our theory were incorrect."

Corresponding null hypothesis: there is no covariation between X and Y.

## 7.4 Three Bivariate Hypothesis Tests

### 7.4.1 Example 1: Tabular Analysis

Chi-squared (χ²) test for tabular association: answers the question of whether the differences between the observed and expected cell frequencies are statistically significant.

χ² = Σ (O − E)² / E

The summation sign in this formula signifies that we sum over each cell in the table; a 2 × 2 table would have 4 cells to add up. If the observed value, O, is exactly equal to the value we would expect if there were no relationship between the two variables, E, then that cell contributes zero to the overall formula (because O − E would be 0).

Critical value: some predetermined standard against which the computed χ² is compared.

Obtaining the critical value requires the degrees of freedom (df):

df = (r − 1)(c − 1)

where r = the number of rows in the table and c = the number of columns in the table.

### 7.4.2 Example 2: Differences of Means

In this type of bivariate hypothesis test, we are looking to see if the means of the dependent variable are different across the values of the independent variable. Differences of means test: we compare what we have seen in the two figures with what we would expect
if there were no relationship between the independent and dependent variables.

t-test (follows the t-distribution):

t = (Ȳ₁ − Ȳ₂) / se(Ȳ₁ − Ȳ₂)

where Ȳ₁ = the mean of the dependent variable for the first value of the independent variable, and Ȳ₂ = the mean of the dependent variable for the second value of the independent variable.

The greater the difference between the mean value of the dependent variable across the two values of the independent variable, the further the value of t will be from 0.

The standard error of the difference between the two means, se(Ȳ₁ − Ȳ₂), is calculated from the following formula:

se(Ȳ₁ − Ȳ₂) = √( ((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2) ) × √( 1/n₁ + 1/n₂ )

where n₁ and n₂ = the sample sizes, and s₁² and s₂² = the sample variances.

Degrees of freedom (df): reflect the basic idea that we will gain confidence in an observed pattern as the amount of data on which that pattern is based increases. Formula: df = n₁ + n₂ − 2 (the total sample size minus 2).

### 7.4.3 Example 3: Correlation Coefficient
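The χ² computation from section 7.4.1 can be sketched in a few lines of Python. The 2 × 2 table below is made-up illustration data, not from the text; the expected counts use the usual (row total × column total) / n rule for a table in which the two variables are unrelated.

```python
# Hand-computed chi-squared statistic for a 2x2 table, following
# chi^2 = sum((O - E)^2 / E). The observed counts are hypothetical.

observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]            # totals across each row
col_totals = [sum(col) for col in zip(*observed)]      # totals down each column
n = sum(row_totals)                                    # overall sample size

chi2 = 0.0
for i, row in enumerate(observed):
    for j, O in enumerate(row):
        # Expected count for this cell if there were no relationship
        E = row_totals[i] * col_totals[j] / n
        chi2 += (O - E) ** 2 / E

df = (len(observed) - 1) * (len(observed[0]) - 1)      # (r - 1)(c - 1)
print(chi2, df)   # compare chi2 against the critical value for df = 1
```

Summing the four cells by hand gives 5 + 5 + 10/3 + 10/3 ≈ 16.67 with df = 1, which would exceed the usual .05 critical value.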
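The difference-of-means t-test from section 7.4.2 can be sketched the same way. The two samples below are hypothetical values of the dependent variable for the two values of the independent variable, not data from the text.

```python
import math

# Pooled-variance t-test: t = (ybar1 - ybar2) / se(ybar1 - ybar2),
# with the standard error formula from section 7.4.2. Data are made up.

group1 = [4.0, 5.0, 6.0, 5.0]
group2 = [2.0, 3.0, 2.0, 3.0]

n1, n2 = len(group1), len(group2)
ybar1 = sum(group1) / n1
ybar2 = sum(group2) / n2

# Sample variances (divide by n - 1)
s2_1 = sum((y - ybar1) ** 2 for y in group1) / (n1 - 1)
s2_2 = sum((y - ybar2) ** 2 for y in group2) / (n2 - 1)

# Standard error of the difference between the two means
se = math.sqrt(((n1 - 1) * s2_1 + (n2 - 1) * s2_2) / (n1 + n2 - 2)) \
     * math.sqrt(1 / n1 + 1 / n2)

t = (ybar1 - ybar2) / se
df = n1 + n2 - 2          # total sample size minus 2
print(t, df)
```

Here the group means are 5.0 and 2.5 with a pooled standard error of 0.5, so t = 5.0 on df = 6: the larger the gap between the two means relative to the standard error, the further t is from 0.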
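The notes cut off at the section 7.4.3 heading. The third K&W example is the correlation coefficient; as a hedged sketch, the standard Pearson formula r = Σ(Xᵢ − X̄)(Yᵢ − Ȳ) / √(Σ(Xᵢ − X̄)² · Σ(Yᵢ − Ȳ)²) can be computed like this (the data are made up, and the formula is the textbook-standard one rather than anything recoverable from these notes):

```python
import math

# Pearson's correlation coefficient r for two continuous variables.
# x and y are hypothetical illustration data.

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n

# Numerator: sum of cross-products of deviations from the means
cross = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

# Denominator: geometric mean of the two sums of squared deviations
r = cross / math.sqrt(sum((xi - xbar) ** 2 for xi in x)
                      * sum((yi - ybar) ** 2 for yi in y))
print(r)
```

r always falls between −1 and +1, with values further from 0 indicating a stronger linear relationship between X and Y.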