
# Final Exam Review P2 (PSYC 305)

Course: PSYC 305, Department of Psychology
Professor: Heungsun Hwang
Semester: Winter

## Week 5: Two-Way ANOVA I

- Focus on experiments with TWO independent variables (factors).
- We assume that:
  - Subjects serve in only one of the treatment conditions (independent-groups design).
  - Sample sizes are equal in each condition (balanced design).
- The two independent variables are referred to as Factor A (Row) and Factor B (Column).
- Purpose: a two-way factorial experiment contains information about two main effects and an interaction effect.
- Main effects: the effect of one factor when the other factor is ignored; the differences among the marginal means for that factor.
  - Row main effect:
    - H0R: µR1 = µR2 = … = µRr
    - H1R: Not all the µRj are the same
  - Column main effect:
    - H0C: µC1 = µC2 = … = µCc
    - H1C: Not all the µCj are the same
- Interaction effect: the extent to which the effect of one factor depends on the level of the other factor. An interaction is present when the effect of one factor on the DV changes at different levels of the other factor. The presence of an interaction indicates that the main effects alone do not fully describe the outcome of a factorial experiment.
  - H0RC: The interaction between R and C is equal to zero
  - H1RC: The interaction between R and C is not zero
- Prior requirements/assumptions:
  - The distribution of observations on the DV is normal within each group (normality).
  - The variances of observations are equal across groups (homogeneity of variance).
  - Independence of observations.
- Methods: if each observed F value is greater than its critical value, you may reject the corresponding null hypothesis. Alternatively, look at the p-value of each observed F value; if p < .05, you may reject the null hypothesis.

## Week 6: Two-Way ANOVA II - Post Hoc Tests

- Further analyses when two-way ANOVA results are significant:
  - Significant main effect: if the number of levels of the factor/IV > 2, post hoc comparison tests can be conducted to examine which pairs of row/column marginal means differ (Tukey's test).
  - Significant interaction effect: simple effect analysis can be performed to clarify the nature of the significant interaction.
- Tukey's HSD test:
  - Typically used if groups have equal sizes and all comparisons represent simple differences between two means.
  - This test uses the "studentized range statistic" Q.
  - The observed Q value is compared against a critical value of Q at α = .05, which is associated with k and N − k.
  - HSD = the minimum absolute difference between two means required for a significant difference.
- Simple effect analyses:
  - The effect of one factor at each level of the other factor. For example:
    - Effects of Rows at C1
    - Effects of Rows at C2
    - Effects of Columns at R1
    - Effects of Columns at R2
  - Conceptually, we apply a one-way ANOVA for one factor repeatedly at each level of the other factor. However, in computing the F-ratio for each test, we use MS(W) from the two-way ANOVA in the denominator.

## Week 8: One-Way Repeated Measures ANOVA

- We often use an experimental design in which measurements on a single DV are repeated a number of times within the same subjects.
- Designs in which subjects are crossed with at least one experimental factor are called repeated-measures designs.
- Purpose: to analyze N subjects measured on a single DV under k conditions (levels) of a single IV or factor.
- Basic concepts:
  - We can examine the mean scores of the DV across conditions (the treatment effect).
  - We can also examine differences across subjects (subject-level variability; the subject effect). As the levels of the "subject" factor are individual subjects, this effect represents the variance of subjects.
  - Usually we are NOT interested in the subject effect; if it is significant, it simply tells us that subjects differ.
  - That, however, has nothing to do with our treatment (IV), so it is irrelevant: we are interested in whether the IV has an effect on the subjects, regardless of whether differences existed naturally among the subjects.
- Prior assumptions:
  - The distribution of observations on the DV is normal within each level of the treatment factor.
  - The variances of observations are equal (homogeneity of variance) at each level of the treatment factor.
  - The population covariance between any pair of repeated measurements is the same (homogeneity of covariance).
  - When both of the latter assumptions are met, this property is described as compound symmetry (CS), also known as sphericity.
- Methods:
  - Treatment effect:
    - H0T: µ1 = µ2 = … = µk
    - H1T: Not all the µj are the same
  - Subject effect:
    - H0S: VS = 0
    - H1S: VS ≠ 0
  - If each observed F value is greater than or equal to its critical value (found using its degrees of freedom), you may reject the corresponding null hypothesis. Alternatively, look at the p-value of each observed F value; if p < .05, you may reject the null hypothesis.
- Post hoc test: apply Tukey's HSD test for the post hoc comparisons of condition means.
- Compound symmetry assumes:
  - The population variances of observations are equal (homogeneity of variance) at each level of the treatment factor.
  - The population covariance between any pair of repeated measurements is the same (homogeneous covariance).
- Violation of CS:
  - Tested using Mauchly's W test; if p < .05, CS is violated.
  - When the CS assumption is violated, the omnibus F tests in one-way repeated measures ANOVA tend to be inflated, leading to more false rejections of H0.
  - The remedy is to correct for the inflation of the F test by evaluating it against a greater critical value, obtained by reducing the degrees of freedom with a correction factor ε.
  - When CS holds, ε = 1 (no correction is needed); when CS is violated, ε < 1.
  - Apply either the Greenhouse-Geisser or the Huynh-Feldt correction to reduce the df.

## Week 10: Nonparametric Tests

- Statistical tests that do not assume a particular distribution or estimate parameters are called nonparametric tests (frequently called distribution-free tests).
- They can be classified according to the following criteria:
  - The level of measurement (nominal, ordinal)
  - Which information is used (frequency, sign, or rank)
  - Independent or dependent samples
  - The number of samples to be compared (k = 1, 2, or more)
- Chi-square (χ²) test: used for testing the independence of nominal variables.
  - Independence: the variables are not associated; scores on one variable do not depend on scores on the other.
  - Data are arranged in a contingency table.
  - Prior assumptions:
    - Random samples
    - Independent observations
    - A sufficiently large sample size (at least 20); the average cell frequency should be ≥ 5.
  - Hypotheses:
    - H0: The two (nominally scaled) variables are statistically independent (no association)
    - H1: The two variables are not independent (association)
  - Methods: if the observed value of χ² is greater than the critical value with df = (R−1)(C−1) at α = .05, we reject the null hypothesis of independence. Alternatively, if the p-value of the observed χ² is less than .05, we may also reject the null hypothesis.
- How nonparametric tests work:
  - Step 1: Transform the original data to sign (nominal) or rank (ordinal) data.
  - Step 2: Apply parametric tests or a new statistic to the transformed data.
  - Step 3: Decide whether to reject or retain the null hypothesis under test.
- The median test (k = 2):
  - A sign test for two independent samples; it compares the medians of two independent samples.
  - H0: No difference exists between the medians of the populations from which the samples are drawn
  - H1: H0 is not true
  - It is straightforward to extend the median test to the case of k > 2.
  - Methods: if the observed value of χ² is greater than the critical value with df = (R−1)(C−1) at α = .05, we reject the null hypothesis. Alternatively, if the p-value of the observed χ² is less than .05, we may also reject the null hypothesis.
- Wilcoxon rank sum test:
  - A rank test for two independent samples; equivalent to the Mann-Whitney U test.
  - Similar to a two-sample t-test, but does not require the data to be normally distributed.
  - H0: The two samples come from populations with the same continuous distribution
  - H1: H0 is not true
  - W = the sum of ranks obtained for the smaller of samples 1 and 2 (if sample sizes are unequal), or the first rank sum (if sample sizes are equal).
- Kruskal-Wallis H test:
  - A rank test for k independent samples; a generalization of the Wilcoxon rank sum test to k groups.
  - Similar to a one-way ANOVA, but does not require the data to be normally distributed.
  - H0: The k independent samples come from the same population
  - H1: H0 is not true (at least one sample is different)
  - Methods:
    - Step 1: Jointly rank all N = N1 + N2 + … + Nk observations.
    - Step 2: Compute the sums of the ranks of the k samples, R1, …, Rk.
    - Step 3: Calculate the H statistic.
  - The distribution of H approximates the chi-square distribution with df = k − 1. We may reject H0 if the observed H value is greater than the critical value of chi-square with df = k − 1.
  - Post hoc test: if we reject the null hypothesis, we perform post hoc multiple comparison tests to examine which groups differ significantly:
    - Step 1: Write down all of the pairwise comparisons we can make.
    - Step 2: Carry out a Wilcoxon rank sum test for each pair.
    - Step 3: Any of these tests is significant if its p-value is less than .05.

## Week 11: Correlation and Regression

- Scatterplot: a scatterplot shows the relationship between two quantitative variables measured on the same individuals. It displays the relationship's:
  - Form (linear or nonlinear)
  - Direction (positive or negative)
  - Strength (none, weak, strong)
- The relationship as examined by eye may not be satisfactory in many cases, so we need a numerical measure to supplement the graph: correlation.
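As a worked sketch of the Week 5 material (not part of the original notes, with made-up data), the sums of squares for a balanced two-way design can be computed by hand: SS for each main effect from the marginal means, the interaction SS as the cell SS minus both main-effect SS, and each F-ratio using MS(W) in the denominator.

```python
# Hypothetical 2x2 balanced design, n = 3 scores per cell (illustrative data).
cells = {
    ("A1", "B1"): [1, 2, 3], ("A1", "B2"): [2, 3, 4],
    ("A2", "B1"): [5, 6, 7], ("A2", "B2"): [4, 5, 6],
}
a_levels = sorted({a for a, _ in cells})
b_levels = sorted({b for _, b in cells})
n = len(next(iter(cells.values())))  # scores per cell

mean = lambda xs: sum(xs) / len(xs)
grand = mean([x for xs in cells.values() for x in xs])
a_means = {a: mean([x for (ai, _), xs in cells.items() if ai == a for x in xs])
           for a in a_levels}
b_means = {b: mean([x for (_, bi), xs in cells.items() if bi == b for x in xs])
           for b in b_levels}
cell_means = {k: mean(v) for k, v in cells.items()}

# Sums of squares: row main effect, column main effect, interaction, within.
ss_a = n * len(b_levels) * sum((m - grand) ** 2 for m in a_means.values())
ss_b = n * len(a_levels) * sum((m - grand) ** 2 for m in b_means.values())
ss_cells = n * sum((m - grand) ** 2 for m in cell_means.values())
ss_ab = ss_cells - ss_a - ss_b
ss_w = sum((x - cell_means[k]) ** 2 for k, xs in cells.items() for x in xs)

df_a = len(a_levels) - 1
df_b = len(b_levels) - 1
df_ab = df_a * df_b
df_w = len(cells) * (n - 1)

# Each F uses MS(W) in the denominator.
ms_w = ss_w / df_w
F_a = (ss_a / df_a) / ms_w
F_b = (ss_b / df_b) / ms_w
F_ab = (ss_ab / df_ab) / ms_w
```

With these data, F_A = 27.0, F_B = 0.0, and F_AB = 3.0, each on (1, 8) df; since the critical F(1, 8) at α = .05 is 5.32, only the row main effect would be declared significant.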
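The χ² test of independence from Week 10 can likewise be sketched by hand (again with made-up frequencies): compute each expected cell frequency under independence, sum (O − E)²/E, and compare against the critical value with df = (R−1)(C−1).

```python
# Hypothetical 2x2 contingency table of observed frequencies
# (rows and columns are two nominal variables).
observed = [[10, 20],
            [20, 10]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
N = sum(row_totals)

# Expected frequency under independence: E = (row total * column total) / N
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / N
        chi2 += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)
# chi2 ≈ 6.67 with df = 1; the critical value at alpha = .05 is 3.84,
# so here we would reject H0 and conclude the variables are associated.
```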
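The three Kruskal-Wallis steps in the notes (jointly rank, sum ranks per group, compute H) can be sketched as follows; the samples are invented and tie-free (tied values would need averaged ranks, omitted here for simplicity).

```python
# Three hypothetical independent samples with no tied values.
samples = [[1.2, 3.1, 4.5], [5.0, 6.3, 7.1], [8.2, 9.0, 9.9]]

# Step 1: jointly rank all N observations (rank 1 = smallest).
pooled = sorted(x for s in samples for x in s)
rank = {x: i + 1 for i, x in enumerate(pooled)}

# Step 2: compute the rank sums R1, ..., Rk for the k samples.
rank_sums = [sum(rank[x] for x in s) for s in samples]

# Step 3: H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)
N = len(pooled)
H = 12 / (N * (N + 1)) * sum(R ** 2 / len(s)
                             for R, s in zip(rank_sums, samples)) - 3 * (N + 1)
# H approximates chi-square with df = k - 1 = 2; the critical value at
# alpha = .05 is 5.99, so H = 7.2 here would lead us to reject H0.
```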
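Finally, the numerical measure the Week 11 notes introduce, the Pearson correlation, can be computed directly from deviation scores; the paired data below are made up for illustration.

```python
import math

# Hypothetical paired scores on two quantitative variables.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

mean_x = sum(x) / len(x)
mean_y = sum(y) / len(y)

# r = sum of cross-products of deviations, divided by the square root
# of the product of the two sums of squared deviations.
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
syy = sum((yi - mean_y) ** 2 for yi in y)
r = sxy / math.sqrt(sxx * syy)
# r ≈ 0.77: a fairly strong positive linear relationship.
```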