HDF 315L Study Guide - Midterm Guide: Rating Scale, Internal Consistency, Covariance

Measurement: Using multiple indicators of constructs
Process of quantifying what we observe
Reducing people to numbers
Part of a natural process anytime we make a comparative judgment
"I'm really depressed"
"They're so much in love"
"I'm under a lot of stress"
Reducing people to words
Specifies requirements for making a judgment of quantity, a process called operationalization
Develop a guide or rulebook so that others will measure things in the same way
If the guidelines are not very good, the measure is not valid
Constructs vs. Indicators
Constructs: Abstract, theoretical notion, not measured directly, inferred from behaviors
Indicators: concrete, observable behavior, readily quantified
Multi-method, multi-informant measurement
Avoid linked observations (when both independent and dependent variables are derived from ratings by the same observers)
Use multiple raters per construct
Use multiple sources of data (LOTS)
Life-event data (including archival records)
Observational data (including multiple knowledgeable informants as well as direct observation)
Test situations (clear and objective scoring rules)
Self-reports (including ratings, interviews, narratives, diaries, experience sampling)
Measurement model: the part that relates measured variables to latent (hidden/not directly observable) variables
Structural model: the part that relates latent variables to one another (see the notation sketch below)
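A compact way to see the distinction, written in conventional structural equation modeling notation (the symbols are standard SEM notation, not from the notes): each observed indicator is a noisy function of a latent variable, and the structural part relates latent variables to each other.
```latex
% Measurement model: each observed indicator x_i loads on the
% latent variable \xi it is meant to measure, with error \delta_i
x_i = \lambda_i \xi + \delta_i
% Structural model: relations among the latent variables themselves,
% e.g. latent \xi predicting latent \eta, with disturbance \zeta
\eta = \gamma \xi + \zeta
```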
Qualitative Research:
Selected a purposive sample of cultural members
Assessed their experiences
Coded and categorized their experiences
Establishing evidence of measurement quality
Indices of measurement quality
Reliability = consistency of scores
Over time
Over observers or reporters
Over items in a test or questionnaire
Over settings
Validity = measure assesses what it is intended to assess
Relationship between Reliability & Validity
Not Valid / Not Reliable: measures wrong construct inconsistently
Not Valid / Reliable: measures wrong construct consistently
Valid / Not Reliable: measures correct construct inconsistently
Valid & Reliable: measures correct construct consistently
Assessing reliability
Internal consistency reliability
Consistency of items in a test or questionnaire
Similar items should provide consistent information if they are measuring the same thing
Interpreting total scale alpha
Alphas above .70 are generally considered good
Cronbachs alpha will go up as you add more good items that measure the same underlying construct
Cronbachs alpha will go down if you add other items that do not belong in the scale (items that measure
some other construct)
A scale can have a high alpha if it simply has a lot of items in it; alpha alone does not tell you if the scale is
unidimensional (that it is measuring a single construct)
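A minimal computational sketch of total-scale alpha, assuming a small numpy score matrix (the scores below are made up for illustration):
```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item scale scored by 4 respondents (1-5 ratings)
scores = np.array([
    [4, 5, 4, 5, 4],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
])
print(round(cronbach_alpha(scores), 2))  # items move together, so alpha is high
```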
Interpreting alpha (if item deleted)
The alpha shown next to an individual item is what the scale's alpha would become if that item were deleted
If deleting an item causes the alpha to go down, the scale would become less reliable (generally not a good thing!)
If deleting an item substantially increases the alpha, consider removing that item from the scale (a sketch follows this list)
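A sketch of the alpha-if-item-deleted column, reusing the cronbach_alpha() function and the hypothetical scores matrix from the sketch above:
```python
import numpy as np

def alpha_if_item_deleted(items):
    """Alpha for the scale with each item left out in turn."""
    items = np.asarray(items, dtype=float)
    return {j: cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])}

# An item whose deletion noticeably raises alpha is a removal candidate
print(alpha_if_item_deleted(scores))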
Interpreting item-rest correlations
Item-rest correlation shows how the item is correlated with a scale computed from only the other items
You want individual items that are correlated positively with the scale as a whole
You want to drop items that do not correlate well with the scale; they may not be measuring the same construct
Item-rest correlations below about .30 may indicate the item is measuring something else (see the sketch below)
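A sketch of item-rest correlations, again reusing the hypothetical scores matrix from the alpha sketch:
```python
import numpy as np

def item_rest_correlations(items):
    """Correlate each item with the total of all *other* items."""
    items = np.asarray(items, dtype=float)
    out = {}
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)  # sum of the other items
        out[j] = np.corrcoef(items[:, j], rest)[0, 1]
    return out

# Items with r below about .30 may be measuring something else
print(item_rest_correlations(scores))
```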
Assessing Reliability: Other types
Test-retest reliability
Consistency of scores over time
Should get similar scores on the same measure if little or no change is expected (see the sketch after this list)
Inter-rater reliability
Consistency across different observers
Parallel-forms reliability/split-half reliability
Consistency across different versions of the same test (parallel forms) or between two halves of a single test (split-half)
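A small illustration of test-retest reliability, assuming made-up scores for the same people at two time points; the Pearson correlation between administrations is one common index of consistency over time:
```python
import numpy as np

# Hypothetical scores for the same 6 people at two time points
time1 = np.array([10, 14, 8, 12, 15, 9])
time2 = np.array([11, 13, 9, 12, 14, 10])

# High r means people keep roughly the same rank order over time
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))
```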
Construct Validity
Content validity: agreement by judges or experts that the measure covers full range of meaning of construct
Face validity: apparent or obvious correspondence between test and construct
Substantive validity: theoretical evidence in support of the use of the measure
Substantive validity (Empirical validity)
Convergent validity
What the construct is like
Discriminant validity
What the construct is not like
Criterion validity
Concurrent validity = related to a contemporary criterion
Predictive validity = related to a future criterion (a correlational sketch follows this list)
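Validity coefficients of this kind are commonly reported as correlations between the measure and the criterion. A sketch with made-up data (the variable names are hypothetical):
```python
import numpy as np

# Hypothetical: scale scores now, plus a criterion measured at the same
# time (concurrent) and one measured later (predictive)
scale = np.array([12, 18, 9, 15, 20, 11, 14])
criterion_now = np.array([3, 5, 2, 4, 5, 3, 4])    # contemporary criterion
criterion_later = np.array([2, 5, 1, 4, 5, 2, 3])  # future criterion

concurrent = np.corrcoef(scale, criterion_now)[0, 1]
predictive = np.corrcoef(scale, criterion_later)[0, 1]
print(round(concurrent, 2), round(predictive, 2))
```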