
B01 CH5-7b.pdf


Department: Psychology
Course: PSYB01H3
Professor: Anna Nagy
Semester: Fall

Chapter 5

Measurement strategies
- The most common measurement strategy is to ask people to tell you about themselves (e.g., "Rate your overall happiness").
- You can also directly observe behaviours (e.g., how many mistakes someone made).
- Physiological and neurological responses can be measured as well (e.g., heart rate, muscle tension).

Reliability of measures
- Reliability refers to the consistency or stability of a measure of behaviour.
  - A reliable test would yield the same result each time; the results should not fluctuate from one reading to the next.
  - If there is fluctuation, there is error in the measurement device.
- Every measurement has two components:
  1. True score: the real score on the variable.
  2. Measurement error.
- Example: if you administer a highly reliable test multiple times, the scores might range from 97 to 103; with an unreliable test, they might range from 85 to 115. The measurement error in the unreliable test is revealed in the greater variability of the scores.
- Using unreliable measures is a waste of time because the results will be unstable and cannot be replicated.
- We can assess the stability of measures using correlation coefficients.
  - The most common correlation coefficient when discussing reliability is the Pearson product-moment correlation coefficient, symbolized as r.
  - r ranges from -1.00 to +1.00: 0.00 means the variables are not related at all, a positive value indicates a positive relationship, and a negative value indicates a negative relationship.
- Test-retest reliability: assessed by measuring the same individuals at two points in time.
  - If people obtain similar scores both times, we can say that the measure reflects true scores rather than measurement error.
  - The correlation should be at least 0.80 before we accept the measure as reliable.

Internal consistency reliability
- The assessment of reliability using responses at only one point in time; because all items measure the same variable, they should yield similar or consistent results.
- One indicator of internal consistency is split-half reliability: the correlation of individuals' total scores on one half of the test with their total scores on the other half.
  - The final measure will include items from both halves; the combined measure has more items and so is more reliable than either half by itself.
  - Drawback: split-half reliability does not take into account each individual item's role in a measure's reliability. (Each question on a test is called an item.)
- Cronbach's alpha is based on individual items and is another indicator of internal consistency reliability.
  - It correlates each item with every other item; the value of alpha is based on the average of all the correlation coefficients.
  - Item-total correlations examine the correlation between each item and the total score.
  - Because Cronbach's alpha and item-total correlations look at individual items, items that do not correlate with the other items can be removed to increase reliability.

Interrater reliability
- A single rater might be unreliable, but using more than one rater increases reliability.
- Interrater reliability is the degree to which raters agree in their observations.
- A commonly used indicator of interrater reliability is Cohen's kappa.

Reliability and accuracy of measures
- Accuracy and reliability are different issues.
- Example: a gas station pump puts the same amount of gas in your car every time, so the pump gauge is reliable; however, the question of accuracy is still open. The only way to know the accuracy is to compare how much the pump delivers to a standard measure of a litre.

Construct validity of measures
- Construct validity: the adequacy of the operational definition of variables.
  - To what extent does the operational variable reflect the true theoretical meaning of the variable?
  - Construct validity is a question of whether the measure employed actually measures the construct it is intended to measure.

Indicators of construct validity
- Face validity: the evidence for validity is that the measure appears, on the face of it, to measure what it is supposed to measure.
  - Do the procedures used to measure the variable appear to be an accurate operational definition of the theoretical variable?
- Criterion-oriented validity: the relationship between scores on the measure and some criterion.
  - There are four types of criterion-related research approaches, which differ in the type of criterion employed:
  1. Predictive validity: scores on the measure predict behaviour on a criterion measured at a time in the future (e.g., LSAT scores predict how well you will do in law school).
  2. Concurrent validity: scores on the measure are related to a criterion measured at the same time (e.g., seeing whether two or more groups of people differ on the measure in expected ways).
  3. Convergent validity: scores on the measure are related to other measures of the same construct (e.g., one measure of shyness should correlate with another shyness measure, or with a measure of a similar construct such as social anxiety).
  4. Discriminant validity: scores on the measure are NOT related to measures that are theoretically different (e.g., checking that shyness scores do not correlate with scores on an aggressiveness/forcefulness measure).

Research on personality and individual differences
- Systematic and detailed research on validity is most often carried out on measures of personality and individual differences, e.g., the NEO Personality Inventory (NEO-PI).
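The reliability statistics described above can all be computed directly. The following is a minimal Python sketch, for illustration only: the scores are made up, the function names are my own, and the Spearman-Brown correction applied to the split-half correlation is the standard adjustment for full test length (it is not named in the notes).

```python
# Illustrative sketch of the reliability statistics from the notes.
# All data below are invented example scores, not from the chapter.
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient (ranges -1.00 to +1.00)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (len(x) * pstdev(x) * pstdev(y))

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score lists (one list per item)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # each person's total
    item_var_sum = sum(pstdev(item) ** 2 for item in items)
    total_var = pstdev(totals) ** 2
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in set(r1) | set(r2))
    return (p_obs - p_exp) / (1 - p_exp)

# Test-retest reliability: the same 5 people measured at two points in time.
time1 = [97, 101, 99, 103, 100]
time2 = [98, 102, 99, 104, 99]
r = pearson_r(time1, time2)   # accept the measure if r is at least 0.80

# Split-half reliability: correlate the two half-test totals, then apply the
# Spearman-Brown correction to estimate reliability of the full-length test.
half_a = [10, 14, 12, 18, 16]
half_b = [11, 13, 13, 17, 15]
r_half = pearson_r(half_a, half_b)
split_half = (2 * r_half) / (1 + r_half)

# Internal consistency: 3 items answered by the same 5 respondents.
items = [[3, 4, 3, 5, 4], [2, 4, 3, 5, 5], [3, 5, 2, 5, 4]]
alpha = cronbach_alpha(items)

# Interrater reliability: two raters classify the same 6 observations.
rater1 = ["shy", "shy", "bold", "bold", "shy", "bold"]
rater2 = ["shy", "shy", "bold", "shy", "shy", "bold"]
kappa = cohens_kappa(rater1, rater2)
```

Note how kappa is lower than the raters' raw 5/6 agreement rate: kappa subtracts out the agreement expected by chance, which is why it is preferred as an interrater-reliability indicator.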