MGMT 1050 Chapter Notes - Chapter 14: Variance, Null Hypothesis, Dependent And Independent Variables


Department: Management
Course Code: MGMT 1050
Professor: Olga Kraminer
Chapter: 14

Chapter 14: Analysis of Variance (ANOVA)
Introduction to ANOVA
- The analysis of variance (ANOVA) is used to determine whether differences exist
between two or more population means by analyzing the variances of samples drawn
from the populations
- The parameters are the population means; the null hypothesis states that
there are no differences between the population means, while the alternative
hypothesis is always that at least two means differ
- With ANOVA, this is the only pair of hypotheses we test
- We do not determine which mean is biggest; we only determine whether the
population means are all the same or not
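Written out for k populations, the two hypotheses are:

H0: μ1 = μ2 = ... = μk
H1: at least two of the means differ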
ANOVA Terminology
- The response variable is the variable for which we record values
- The responses are the values of the response variable
- The entity we measure is called an experimental unit (e.g., a person)
- The criterion by which we classify the population is called a factor (e.g., nationality)
- Each population we look at is a different factor level (e.g., Canada, USA, and Italy);
these factor levels, or populations, are also called treatments, and the number of
treatments is denoted k
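As a concrete illustration (the study itself is made up, extending the nationality example above):

Response variable: a satisfaction score recorded for each person
Responses: the individual scores
Experimental unit: each person surveyed
Factor: nationality
Factor levels / treatments: Canada, USA, and Italy, so k = 3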
The ANOVA Testing Procedure
- When the data is obtained through random sampling, we call the experimental
design the completely randomized design of the analysis of variance or a one-way
analysis of variance (also called a one-factor ANOVA)
- We assume that the samples taken from the populations are independent
- If the null hypothesis is true (all the population means are equal), we would expect
the sample means to be close to one another; if the alternative is true, we would
expect large differences between them
- The statistic that measures how close the sample means are to each other is called
the between treatments variation, denoted SST (sum of squares for treatments); it
looks at how different the treatments are from one another (its formula is written
out after this list)
- Notice that the further apart the sample means are from one another, the greater
the difference between each sample mean and the grand mean
- If the sample means were all close to one another, all of them would be close to the
grand mean, and therefore SST would be small
- SST achieves its smallest value (zero) when all the sample means are equal
- Thus, if the population means are unequal, we would expect to see a large value
for SST
- To determine whether SST is large enough to reject the null hypothesis, we need to
measure how much variation exists within the populations
- This is important because if the variability in all of the populations is very high,
then even if the population means are equal, the sample means can end up far apart
simply due to chance
- We measure the variability in the populations with the within treatments variation,
denoted SSE (sum of squares for error); its formula also appears after this list
- The within treatments variation captures the amount of variation that is not
explained by the treatments
- The SSE finds the variability due to all the other variables that impact the data
- The SST measures the explained variation, while the SSE measures the unexplained
variation
- Essentially, the SSE finds the variation that is due to things we cannot explain (all
of the variability that is not the result of the data coming from different factor levels)
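For reference, the standard formulas behind these two quantities, writing xbar_j for the mean of treatment j, n_j for its sample size, s_j^2 for its sample variance, and xbar for the grand mean of all n observations, are:

SST = n_1 (xbar_1 - xbar)^2 + n_2 (xbar_2 - xbar)^2 + ... + n_k (xbar_k - xbar)^2
SSE = (n_1 - 1) s_1^2 + (n_2 - 1) s_2^2 + ... + (n_k - 1) s_k^2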
Conditions to use ANOVA
- To use ANOVA, we assume that the variances of the populations are all equal and
that the response variable is normally distributed
- No need for a formal F-test of equal variances; just look at the sample variances
Rejection Region
- First we find the degrees of freedom for treatments and for error
- Degrees of freedom for treatments is the number of treatments/populations minus
one, so k - 1
- Degrees of freedom for error is the total number of observations minus the number
of treatments/populations, so n - k
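For example, with k = 3 treatments and a total of n = 30 observations (hypothetical counts), the degrees of freedom are k - 1 = 2 for treatments and n - k = 30 - 3 = 27 for error.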
- The mean square for treatments (MST) is computed by dividing SST by the degrees
of freedom for treatments; this value is also called the variation explained
- The mean square for error (MSE) is computed by dividing SSE by the degrees of
freedom for error; this value is also called the variation unexplained
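In symbols:

MST = SST / (k - 1)
MSE = SSE / (n - k)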
- The standardized test statistic is defined as the ratio of the two mean squares: we
divide MST by MSE
- This F-statistic is F-distributed with k - 1 degrees of freedom in the numerator
(from MST) and n - k degrees of freedom in the denominator (from MSE)
- We reject the null hypothesis for large values of the F-statistic, i.e., when it
exceeds the critical value with k - 1 and n - k degrees of freedom at the chosen
significance level
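In symbols, F = MST / MSE. To tie the whole procedure together, here is a minimal Python sketch; the three samples are invented numbers, and scipy is assumed to be available. It computes SST, SSE, MST, MSE, and F from the formulas above, finds the critical value for the rejection region, and cross-checks the result against scipy.stats.f_oneway.

import numpy as np
from scipy import stats

# Hypothetical responses from k = 3 treatments (e.g., three nationalities)
samples = [
    np.array([23.0, 25.0, 21.0, 24.0, 22.0]),
    np.array([27.0, 26.0, 28.0, 25.0, 29.0]),
    np.array([22.0, 20.0, 23.0, 21.0, 24.0]),
]

k = len(samples)                         # number of treatments
n = sum(len(s) for s in samples)         # total number of observations
grand_mean = np.concatenate(samples).mean()

# Between-treatments variation: SST = sum of n_j * (xbar_j - xbar)^2
sst = sum(len(s) * (s.mean() - grand_mean) ** 2 for s in samples)

# Within-treatments variation: SSE = sum of (n_j - 1) * s_j^2
sse = sum((len(s) - 1) * s.var(ddof=1) for s in samples)

mst = sst / (k - 1)   # mean square for treatments (variation explained)
mse = sse / (n - k)   # mean square for error (variation unexplained)
f_stat = mst / mse    # F-distributed with k - 1 and n - k degrees of freedom

# Rejection region: reject H0 at level alpha if F exceeds the critical value
alpha = 0.05
f_crit = stats.f.ppf(1 - alpha, k - 1, n - k)
print(f"F = {f_stat:.3f}, critical value = {f_crit:.3f}")
print("Reject H0" if f_stat > f_crit else "Do not reject H0")

# Cross-check against scipy's built-in one-way ANOVA
f_check, p_value = stats.f_oneway(*samples)
print(f"scipy F = {f_check:.3f}, p-value = {p_value:.4f}")

Computing the sums of squares step by step mirrors the chapter's formulas; in practice, stats.f_oneway alone returns the same F-statistic along with a p-value.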