PSY248 Study Guide - Final Guide: ANOVA and the Null Hypothesis

ANOVA Notes:
ONE-WAY ANOVA
ANOVA model:
X_ik = μ + τ_k + ε_ik
X_ik = stands for the score of observation i in population k
μ = grand population mean
τ_k = effect parameter, represents the extent to which the population means differ from each other
ε_ik = error term, represents the extent to which individual observations within a population differ
from each other
i, k = refer to observations and populations respectively
Note:
μ_k = μ + τ_k, so τ_k = μ_k − μ (each population mean is the grand mean plus that population's effect)
The ANOVA model can be written to look like this:
(X_ik − grand mean) = (group mean_k − grand mean) + (X_ik − group mean_k)
o This is the calculable model that basically states that total deviation can be partitioned into
deviation between samples (model) and deviation within samples (residual)
Total Sum of Squares: the total amount of variation within our data
SS_Total = ΣX² − (ΣX)²/N
1. Calculate ΣX² first (note, X = score; square each score and then add them all up - do not square the
total amount)
2. Calculate ΣX (sum of X) by adding up all the scores
3. Once ΣX is calculated, square the number
4. After squaring, divide it by N (the number of scores in the dataset)
5. Then put the equation together
E.g.
1. 3² + 2² + 1² + … + 5² + 3² + 6² = 224
2. 3 + 2 + 1 + … + 5 + 3 + 6 = 52
3. Square it: 52² = 2704
4. Then divide: 2704/15 = 180.2666667
5. Put the equation together:
Total Sums of Squares = 224 − 180.2666667 = 43.73333
You can also calculate Total Sums of Squares by adding Sum of Squares (model) + Sum of squares
(residual)
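A minimal Python sketch of the Total Sum of Squares calculation above, using the fifteen example scores (the placebo/low/high group labels come from the worked example later in these notes; everything else just follows SS_Total = ΣX² − (ΣX)²/N):

```python
# Total Sum of Squares: SS_Total = sum(X^2) - (sum(X))^2 / N
scores = [3, 2, 1, 1, 4,   # placebo group
          5, 2, 4, 2, 3,   # low group
          7, 4, 5, 3, 6]   # high group

sum_of_squared_scores = sum(x ** 2 for x in scores)   # 224
squared_sum_of_scores = sum(scores) ** 2              # 52**2 = 2704
N = len(scores)                                       # 15 observations

ss_total = sum_of_squared_scores - squared_sum_of_scores / N
print(ss_total)  # ~43.733
```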
One-way ANOVA = 1 dependent variable, 1 independent variable (with more than 2 levels)
Factorial (two-way) ANOVA = 1 dependent variable, 2 independent variables (each with 2 or more
levels)
MANOVA (multivariate) = an ANOVA with several dependent variables
Model Sum of Squares:
How much of the variation the regression model can explain
Formula:
o Model Sum of Squares = Σ(T_k²)/n − (ΣX)²/N
o Where T = sum of scores for each group
o n = number of observations in each group
1. Add up each condition's scores (T)
2. For each condition, square the resulting total
3. Each condition's squared total can then be added up altogether, which will equal ΣT²
4. Then divide that number by n (n = number of observations per group)
5. Then do the second part of the equation: (ΣX)²/N
6. Calculate ΣX by adding up all the scores
7. Square the resulting number and divide by N (the total number of scores); this will provide the second
half of the equation
8. Now put the equation together (subtract the second part from the first)
E.g.
o 1. a) T-Placebo = 3 + 2 + 1 + 1 + 4 = 11; 11² = 121
b) T-Low = 5 + 2 + 4 + 2 + 3 = 16; 16² = 256
c) T-High = 7 + 4 + 5 + 3 + 6 = 25; 25² = 625
o 2. 121 + 256 + 625 = 1002 (ΣT²)
o 3. Divide 1002 by n: 1002/5 = 200.4
o 4. Second part of the equation: (ΣX)²/N
o 5. 3 + 2 + 1 + … + 5 + 3 + 6 = 52
o 6. 52² = 2704
o 7. 2704/15 = 180.2666666667
o 8. Model Sum of Squares = 200.4 − 180.2666666667 = 20.133
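The same Model Sum of Squares steps as a small Python sketch, assuming equal group sizes (n = 5) as in the example:

```python
# Model Sum of Squares: SS_Model = sum(T_k^2)/n - (sum(X))^2 / N  (equal group sizes)
groups = {
    "placebo": [3, 2, 1, 1, 4],
    "low":     [5, 2, 4, 2, 3],
    "high":    [7, 4, 5, 3, 6],
}

n = 5                                                  # observations per group
all_scores = [x for g in groups.values() for x in g]
N = len(all_scores)                                    # 15 scores in total

sum_T_squared = sum(sum(g) ** 2 for g in groups.values())  # 121 + 256 + 625 = 1002
ss_model = sum_T_squared / n - sum(all_scores) ** 2 / N    # 200.4 - 180.267
print(ss_model)  # ~20.133
```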
Residual Sum of Squares:
How much of the variation cannot be explained by the model; this value is the amount of variation
caused by extraneous factors such as individual differences in weight, testosterone etc.
Simple way to calculate the Residual Sum of Squares is to subtract the Model Sum of Squares from
the Total Sum of Squares
o SS_Residual = SS_Total − SS_Model
Calculational formula:
o SS_Residual = ΣX² − Σ(T_k²)/n
If you have already worked out Total AND Model SS, then you just use the calculations from that
e.g. (ΣX² = 224 and ΣT²/n = 200.4)
SO SSR = 224 − 200.4 = 23.6
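Both routes to the Residual Sum of Squares, sketched in Python with the numbers from the worked examples (the subtraction shortcut and the calculational formula agree):

```python
# Residual Sum of Squares, two equivalent ways:
#   1) SS_Residual = SS_Total - SS_Model
#   2) SS_Residual = sum(X^2) - sum(T_k^2)/n   (calculational formula)
ss_total = 43.7333333          # from the Total SS example
ss_model = 20.1333333          # from the Model SS example

ss_residual_by_subtraction = ss_total - ss_model   # 23.6
ss_residual_by_formula = 224 - 200.4               # 23.6
print(ss_residual_by_subtraction, ss_residual_by_formula)
```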
Degrees of freedom:
Refer to the number of quantities within a set of quantities that are free to vary
In a single mean t-test, the number of observations minus one was the degrees of freedom
Three degree of freedom terms:
o Total degrees of freedom = df_Total
N − 1 (N = total number of observations)
o Degrees of freedom between groups (model) = df_Model
k − 1 (k = number of groups)
o Degrees of freedom within groups/degrees of freedom for error (residual) = df_Residual
k(n − 1)
n = number of observations per group
k = number of groups
Degrees of freedom are additive (just like SS):
o df_Total = df_Model + df_Residual
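For the running example (3 groups of 5 scores each), a quick Python sketch showing that the degrees of freedom add up:

```python
# Degrees of freedom for a one-way ANOVA with k groups of n observations each
N, k, n = 15, 3, 5              # total scores, number of groups, scores per group

df_total = N - 1                # 14
df_model = k - 1                # 2
df_residual = k * (n - 1)       # 12

assert df_total == df_model + df_residual   # additive, just like SS
```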
Mean squares:
Two variance estimates = mean squares
Divide SS by appropriate df to get Mean Squares
The first mean square of interest is the mean square between groups, which estimates variance
between groups (or for the model); this is called the mean square model (labeled MS_M)
o MS_M = SS_M / df_M
The second mean square of interest is the mean square within groups, which estimates variance
within groups, or error variance, or residual variance; this is called the mean square residual (or within, or
error) (labeled MS_R)
o MS_R = SS_R / df_R
Mean squares are NOT ADDITIVE
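Dividing the example sums of squares by their degrees of freedom gives the two mean squares; a short sketch:

```python
# Mean squares = SS / df (note: mean squares are NOT additive)
ss_model, df_model = 20.1333333, 2
ss_residual, df_residual = 23.6, 12

ms_model = ss_model / df_model              # ~10.07
ms_residual = ss_residual / df_residual     # ~1.97
print(ms_model, ms_residual)
```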
Expected mean squares:
Expected value of the two mean squares = the long run average of these values
E(MS_M) = σ_ε² + nΣτ_k²/(k − 1)
E(MS_R) = σ_ε²
E = stands for expected value
σ_ε² = population error variance
F ratio: the ratio of how good the model is to how bad it is; model vs. residual
F = MS_M / MS_R
Tests the null hypothesis: if the null hypothesis were true, the ratio would be about 1; compare the
obtained F to the critical value found in F tables (the ratio could be bigger than 1 by chance even when the
null hypothesis is true)
F tables:
o If the obtained F is less than the critical value, we fail to reject the null hypothesis
o If the obtained F is greater than the critical value, we reject the null hypothesis
Type 1 error rate: rejecting a null hypothesis even though it is true (e.g. adding water to toothpaste has no
effect on cavities, but we conclude that it does); usually due to bad experimental data
F test:
1. Calculate the F ratio from sample data
2. Determine the type 1 error rate (α), df(M) and df(R)
3. Using the info from step 2, look up the critical F ratio in the F tables
4. Make a decision concerning the null hypothesis. If the obtained F exceeds the critical F, reject the
null hypothesis; if obtained F does not exceed critical F, do not reject null hypothesis
5. Draw appropriate conclusions
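A sketch of the whole F test for the example data in Python, using scipy both to look up the critical value (what an F table provides) and to cross-check the F ratio; the α of .05 is chosen here purely for illustration:

```python
from scipy import stats

placebo = [3, 2, 1, 1, 4]
low = [5, 2, 4, 2, 3]
high = [7, 4, 5, 3, 6]

# 1. F ratio from the sample data (MS_M / MS_R, using the mean squares above)
f_obtained = 10.0666667 / 1.9666667                 # ~5.12

# 2. Type 1 error rate and degrees of freedom
alpha, df_model, df_residual = 0.05, 2, 12          # alpha assumed for illustration

# 3. Critical F from the F distribution (what an F table gives you)
f_critical = stats.f.ppf(1 - alpha, df_model, df_residual)   # ~3.89

# 4. Decision about the null hypothesis
reject_null = f_obtained > f_critical               # True here: 5.12 > 3.89

# Cross-check against scipy's built-in one-way ANOVA
f_check, p_value = stats.f_oneway(placebo, low, high)
print(f_obtained, f_critical, reject_null, f_check, p_value)
```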
Terminology:
What we could call between group variation, Field calls model variation
What we could call within group variation, Field calls residual variation
Applied to SS, df, MS
o ‘Between’ = ‘treat’ = ‘model’
o ‘Within’ = ‘error’ = ‘residual’
Assumptions in ANOVA
Normality: normal distribution of scores
Homogeneity of variance: the distributions of scores have equal variances
Independence of errors: knowledge of a particular score within a population tells us nothing
about the value of any of the other scores
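These assumptions can be checked informally in Python; the sketch below uses scipy's Shapiro-Wilk test for normality and Levene's test for homogeneity of variance, which are my choice of illustration rather than anything prescribed in these notes:

```python
from scipy import stats

placebo = [3, 2, 1, 1, 4]
low = [5, 2, 4, 2, 3]
high = [7, 4, 5, 3, 6]

# Normality: Shapiro-Wilk test on each group's scores (illustrative check)
for name, group in [("placebo", placebo), ("low", low), ("high", high)]:
    w, p = stats.shapiro(group)
    print(name, "normality p =", round(p, 3))

# Homogeneity of variance: Levene's test across the three groups
stat, p = stats.levene(placebo, low, high)
print("homogeneity of variance p =", round(p, 3))
```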
