PSYB01 - Chapter 13 notes.doc


Department: Psychology
Course: PSYB01H3
Professor: Anna Nagy
Semester: Summer

Chapter 13 - Understanding research results: statistical inference

• Inferential statistics tell us whether the results yielded in one experiment would come out again if the study were repeated over and over.

Samples and populations
• Inferential statistics are necessary because the results of a given study are based on data from a single sample.
• Inferential statistics answer the question: "Would these results hold up if the experiment were conducted repeatedly, each time with a new sample?"
• Inferential statistics are used to determine whether we can in fact state that the results reflect what would happen if we were to conduct the experiment again and again with multiple samples.
  o Can we infer that the difference in the sample reflects a true difference in the population?

Inferential statistics
• In an experimental design the groups are equivalent in every way except for the independent variable manipulation.
  o Equivalent groups are achieved through randomization, so that all other variables are controlled for.
• The difference between two group means will almost never be exactly zero.
  o This happens because we are dealing with samples rather than populations.
  o Random or chance error is responsible for some of the difference between the means even if the independent variable has no effect on the dependent variable.
• The difference in sample means reflects any true difference in population means plus any random error.
• Using inferential statistics, we can make inferences about the true difference in the population on the basis of the sample data.
  o Inferential statistics give the probability that the difference between means reflects random error rather than a real difference.

Null and research hypotheses
• Null hypothesis: the population means are equal; the observed difference is due to random error.
  o The logic: if we can determine that the null hypothesis is incorrect, then we accept the research hypothesis as correct.
• Accepting the research hypothesis means the
independent variable did have an effect.
  o The null hypothesis is rejected only when there is a low probability that the obtained results could be due to random error.
• Research hypothesis: the population means are not equal.
• Statistical significance: a significant result is one that has a very low probability of occurring if the population means are equal.
  o Significance indicates that there is a low probability that the difference between the obtained sample means was due to random error.
  o Significance is a matter of probability.

Probability and sampling distributions
• Probability is the likelihood of the occurrence of some event or outcome.
• In statistical inference we want to specify the probability that an event will occur if there is no difference in the population.
  o Question: what is the probability of obtaining this result if only random error is operating?
• If this probability is very low, we reject the possibility that only random or chance error is responsible for the obtained difference in means.

Probability: the case of ESP
• The use of probability in statistical inference can be understood intuitively from this example:
  o Example: you test your friend's ESP (extrasensory perception) over 10 trials, using 5 cards with a different symbol on each card; on each trial the cards are shown in a random order and your friend guesses the correct one.
  o The null hypothesis is that only random error is operating.
  o The research hypothesis is that the number of correct answers reflects more than random or chance guessing.
  o By chance alone, you can reasonably expect the person to get 1/5 of the answers right: 2 correct out of the 10 trials.
  o You can expect small deviations away from the expected 2 correct answers.
  o How unlikely does a result have to be before we decide it is significant?
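The chance probabilities in the ESP example come from the binomial distribution. A minimal Python sketch, assuming a 1/5 guessing probability per trial (the trial counts 7/10, 3/10, and 30/100 are illustrative):

```python
from math import comb

def binom_pmf(k, n, p=0.2):
    """P(exactly k correct) in n trials when guessing among 5 cards (p = 1/5)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_tail(x, n, p=0.2):
    """P(x or more correct) by chance alone -- the probability compared
    against the alpha level when deciding significance."""
    return sum(binom_pmf(k, n, p) for k in range(x, n + 1))

print(round(binom_pmf(2, 10), 3))    # 0.302 -- the expected 2/10 is the most likely outcome
print(round(binom_tail(7, 10), 4))   # 7 or more correct out of 10 is very unlikely by chance
print(round(binom_tail(3, 10), 3))   # 30% correct in 10 trials: not unusual
print(round(binom_tail(30, 100), 3)) # 30% correct in 100 trials: unlikely by chance
```

Under the conventional 0.05 alpha level, 7/10 correct would be declared significant while 3/10 would not; the same 30% success rate over 100 trials is significant, which previews the sample-size point below.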
• A decision rule is determined prior to collecting the data.
  o The probability required for significance is called the alpha level.
• The most commonly used alpha level is 0.05.
  o An outcome is considered significant when there is a 0.05 or less probability of obtaining the results by chance; only 5 chances in 100 that the results were due to random error.

Sampling distributions
• You can infer intuitively that getting 7/10 answers correct on the ESP experiment is far less likely than getting 2/10 correct by chance.
• Look at table 13.1 on page 251.
  o The probabilities shown were derived from a probability distribution called the binomial distribution.
  o All statistical significance decisions are based on probability distributions such as this one, called sampling distributions.
• Sampling distributions are based on the null hypothesis.
• All statistical tests rely on sampling distributions to determine the probability that the results are consistent with the null hypothesis.
  o When the results are very unlikely according to the null hypothesis expectations, the null hypothesis is rejected.

Sample size
• The total number of observations influences determinations of statistical significance.
• In the ESP example, suppose you tested your friend on 100 trials instead of 10 and observed 30 correct answers.
  o 30/100 correct is less likely by chance than 3/10 correct.
  o The more observations you sample, the more likely you are to obtain an accurate estimate of the true population value.
  o As sample size increases, so does your confidence that your outcome is actually different from the null hypothesis expectation.

Example: the t and F tests
• Different statistical tests allow us to use probability to decide whether to reject the null hypothesis.
• The t test is most commonly used to examine whether two groups are significantly different from each other.
• The F test is a more general statistical test that can be used to ask whether there is a difference among 3 or more groups.

t test
• To use this distribution to evaluate our data, we need to calculate a value of
t from the obtained data and evaluate it in terms of the sampling distribution of t that is based on the null hypothesis.
  o If the obtained t has a low probability of occurring under the null hypothesis, then we reject the null hypothesis.
  o The t value is a ratio of two aspects of the data: the difference between the group means and the variability within the groups.
• t = group difference / within-group variability
• The group difference is the difference between your obtained means; under the null hypothesis, this difference is expected to be 0.
• t increases as the difference between your obtained sample means increases.
• Within-group variability is the amount of variability of scores about the mean.
  o It is an indicator of the amount of random error in your sample.

Degrees of freedom
• Degrees of freedom (abbreviated df): when comparing two means, the degrees of freedom are equal to N1 + N2 - 2, or the total number of participants in the groups minus the number of groups. The degrees of freedom are the number of scores free to vary once the means are known.
• If the mean of a group is 6.0 and there are five scores in the group, there are 4 degrees of freedom; once you have any 4 scores, the 5th is known because the mean must remain 6.0.

One-tailed vs. two-tailed tests
• First you must choose a critical value of t for the situation.
  o Example 1: a specified direction of difference is predicted (group 1 will be greater than group 2). This calls for a one-tailed test.
  o Example 2: no direction of difference is predicted (group 1 will differ from group 2). This calls for a two-tailed test.
• Different critical values of t are used in the two cases.
• Whether you use a one-tailed or a two-tailed test depends on whether you originally designed your study to test a directional prediction.
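The t formula and the degrees-of-freedom rule above can be sketched directly. This hand-rolled version (with made-up sample scores) mirrors t = group difference / within-group variability and df = N1 + N2 - 2; in practice one would use a library routine such as scipy.stats.ttest_ind:

```python
from math import sqrt
from statistics import mean, variance

def t_statistic(group1, group2):
    """Independent-groups t: group difference over within-group variability."""
    n1, n2 = len(group1), len(group2)
    # Pooled variance: within-group variability combined across both groups
    pooled = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    se = sqrt(pooled * (1 / n1 + 1 / n2))  # standard error of the mean difference
    df = n1 + n2 - 2                       # total participants minus number of groups
    return (mean(group1) - mean(group2)) / se, df

# Hypothetical scores for two groups of 5 participants each
t, df = t_statistic([5, 6, 7, 8, 9], [1, 2, 3, 4, 5])
print(round(t, 2), df)  # 4.0 8 -- a large t, unlikely if the population means were equal
```

The obtained t would then be compared against the critical value of t for df = 8, one-tailed or two-tailed depending on the design.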