Inter-rater reliability: the degree to which two independent observers or judges agree.

Test-retest reliability: the extent to which people observed twice, or taking the same test twice, score in generally the same way. This makes sense only when the theory assumes that people will not change greatly between testings on the variable being measured.

Alternate-form reliability: the extent to which scores on two different forms of a test are consistent. Two forms are used rather than the same test given twice, perhaps because of concern that people will remember their answers from the first testing and aim merely to be consistent.

Internal consistency reliability: the degree to which the items on a test are related to one another.

In each of these types of reliability, a correlation is calculated between raters, testings, forms, or sets of items; the higher the correlation, the better the reliability. Validity, by contrast, concerns whether a measure fulfills its intended purpose.
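The correlation underlying these reliability estimates is typically a Pearson product-moment correlation. A minimal sketch of that computation, applied to hypothetical test-retest data (the function name and all scores here are illustrative, not from the source):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance term: how deviations from the mean move together.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Scale by each variable's spread so r falls between -1 and 1.
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical scores: the same five people tested twice.
first_testing = [24, 30, 18, 27, 21]
second_testing = [26, 29, 17, 28, 22]

r = pearson_r(first_testing, second_testing)  # close to 1.0: high test-retest reliability
```

The same calculation applies to the other reliability types: for inter-rater reliability the two lists hold two raters' judgments, and for alternate-form reliability they hold scores on the two forms.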