Management and Organizational Studies 4410A/B Chapter Notes - Chapter 3: Lewis Terman, Biofeedback, Electroencephalography

Management and Organizational Studies
Course Code: MOS 4410A/B
David Vollick

Chapter 3 – Assessment and Diagnosis
Clinical Assessment
Clinical Assessment – a series of steps used to gather information about a person and their environment in order to make decisions about the nature, status, and treatment of psychological problems
o Referral questions – help determine the goals of assessment and the selection of appropriate psychological tests or measurements
Goals of Assessment
Deciding which procedures and instruments to administer
o Measurement of biological function, cognition, emotion, behaviour, and personality style
o The patient's age, medical condition, and description of their symptoms influence the choice of assessment tools
o The psychologist's theoretical perspective affects the scope of the assessment
  Ex. For depression – measuring environmental cues associated with low mood, and the patient's thoughts
Developing preliminary answers to the referral questions – done after the assessment is complete and the data are collected
Therapeutic effect – as patients come to understand their emotions, behaviours, and the links between them, symptoms can temporarily improve
SCREENING – an assessment process used to identify potential psychological problems or predict the risk of future problems
o Helps identify people who have problems but may not be aware of them, or who may be reluctant to seek help
o All members of a given group are given a brief measure
o The usefulness of a screening measure is evaluated by its sensitivity and specificity
  Sensitivity – the ability of the screener to correctly identify a problem that is actually present
  Specificity – the percentage of time the screener accurately identifies the absence of a problem
  False positive – the screening instrument indicates a problem when none exists
  False negative – the screening instrument indicates no problem (e.g., no depression) when one exists
  Good screening tools have low rates of false positives and false negatives
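The relationship between these four outcomes can be sketched as a small calculation. This is an illustrative example only – the counts below are invented, not from the text:

```python
def screening_metrics(true_pos, false_neg, true_neg, false_pos):
    """Compute sensitivity and specificity of a screening measure."""
    # Sensitivity: proportion of people WITH the problem the screener catches
    sensitivity = true_pos / (true_pos + false_neg)
    # Specificity: proportion of people WITHOUT the problem correctly cleared
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Hypothetical screening of 200 people, 40 of whom truly have the problem
sens, spec = screening_metrics(true_pos=36, false_neg=4,
                               true_neg=144, false_pos=16)
print(sens)  # 0.9 -> the screener detects 90% of true cases
print(spec)  # 0.9 -> the screener correctly clears 90% of non-cases
```

A "good" screening tool in the notes' terms is one where both values are high, since the 4 false negatives and 16 false positives here are exactly the errors the text warns about.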
o Diagnosis – identification of an illness
  Complicated – requires the presence of a cluster of symptoms
  Made after a clinical interview with the patient
  Different instruments are used to gather information from the patient and other sources
  Facilitates communication among clinicians and researchers
o Differential diagnosis – an attempt to determine which diagnosis most closely fits the patient's symptoms
o Sometimes more than one diagnosis applies
o Diagnostic assessments – more extensive than screens and provide a fuller understanding of the patient
o Treatment plan – an accurate diagnosis is needed to make a plan
o Clinical assessment – leads to diagnosis through:
  Evaluation of symptoms and disorder severity
  Pattern of symptoms over time (timing, frequency, and duration of episodes)
  Patient's strengths and weaknesses
o Functional analysis of symptoms – identifies relations between situations and behaviours to aid in treatment
OUTCOME EVALUATION – determining whether treatment is effective; also used to evaluate patient satisfaction and provide data for marketing
o Helps us know whether patients are getting better and when treatment is 'finished'
o Measures used in assessment should cover a range of outcomes (symptoms, severity, etc.)
o Measures must be reliable and valid


o Treatment effect – the degree of change in the patient's level of functioning is assessed
o GOAL: reduce symptoms and/or eliminate the disorder
o Clinical significance – the degree of change (how much a patient's symptoms are reduced)
  Reliable Change Index (RCI) – indicates whether the degree of change from the beginning to the end of treatment is meaningful, i.e., larger than measurement error alone would produce
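One common formulation of the RCI (the Jacobson and Truax version) divides the pre-to-post change by the standard error of the difference score. The sketch below uses that formulation with invented numbers; the scale, scores, and reliability value are hypothetical, not from the text:

```python
import math

def reliable_change_index(pre, post, sd, reliability):
    """RCI: is the pre-to-post change larger than measurement
    error alone would be expected to produce?"""
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2 * sem ** 2)        # standard error of the difference
    return (post - pre) / s_diff

# Hypothetical depression scale: pre-treatment = 30, post-treatment = 18,
# normative SD = 8, test-retest reliability = .85
rci = reliable_change_index(pre=30, post=18, sd=8, reliability=0.85)
print(round(rci, 2))  # -2.74
```

An |RCI| greater than 1.96 is conventionally taken to mean the change is reliable (unlikely to be measurement error), so this hypothetical patient's improvement would count as meaningful.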
Properties of Assessment Instruments
Psychometric properties – affect how confident we can be in the testing results
STANDARDIZATION – does a particular score indicate the existence of a problem, its severity, or its improvement over time?
o Standard ways of evaluating scores allow normative or self-referent comparisons
o Normative comparison – comparing a person's score with the scores of a sample of people who are representative of the entire population, or with the scores of a sub-group
  Standard deviation (SD) – used to decide whether a score is too far outside the normal range
  Tells us how far a score is from the mean
  A score more than 2 SDs from the mean falls among the most extreme ~5% of the population and is considered different from what is normal
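The 2-SD rule above amounts to computing a z-score. A minimal sketch, with a hypothetical test normed at mean 50 and SD 10 (invented values, not from the text):

```python
def z_score(score, mean, sd):
    """How many standard deviations a score lies from the normative mean."""
    return (score - mean) / sd

# Hypothetical anxiety measure normed at mean = 50, SD = 10
z = z_score(score=72, mean=50, sd=10)
print(z)           # 2.2
print(abs(z) > 2)  # True -> outside the range covering ~95% of the population
```

A score of 72 on this hypothetical scale lies 2.2 SDs above the mean, so under the notes' rule it would be flagged as different from normal.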
o Self-referent comparison – equates responses on an instrument with the patient's own prior performance
  Used most often to examine the course of symptoms over time
  Also used to evaluate treatment outcome
RELIABILITY – a measure's consistency, or how well it produces the same results each time
o Test-retest reliability – consistency of scores across time
  The same instrument is given twice to the same people over some consistent interval
  A correlation coefficient is calculated to estimate the similarity between the two sets of scores
  .80 or higher = the measure is highly reliable over time
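The correlation coefficient in question is a Pearson correlation between the two administrations. A self-contained sketch using invented scores for five hypothetical test-takers:

```python
from statistics import mean

def pearson_r(time1, time2):
    """Correlation between scores from two administrations of the same test."""
    m1, m2 = mean(time1), mean(time2)
    num = sum((a - m1) * (b - m2) for a, b in zip(time1, time2))
    den = (sum((a - m1) ** 2 for a in time1)
           * sum((b - m2) ** 2 for b in time2)) ** 0.5
    return num / den

# Hypothetical scores for five people tested twice, some weeks apart
t1 = [10, 14, 18, 22, 26]
t2 = [11, 13, 19, 21, 27]
print(round(pearson_r(t1, t2), 2))  # 0.99 -> well above the .80 benchmark
```

Because each person's second score tracks their first closely, r is near 1.0, so this hypothetical measure would count as highly reliable over time.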
o Interrater agreement – ratings should reflect more about the person being interviewed than about the person doing the interview
  Assessed by asking 2 different clinicians to administer the same interview to the same person
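One standard way to quantify agreement between the two clinicians is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The statistic is not named in the notes, and the diagnoses below are invented, so treat this as an illustrative sketch:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater1)
    # Observed agreement: proportion of cases where the raters match
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of each rater's marginal proportions per label
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(rater1) | set(rater2))
    return (observed - expected) / (1 - expected)

# Hypothetical diagnoses from two clinicians interviewing the same 10 patients
r1 = ["dep", "dep", "anx", "dep", "anx", "dep", "anx", "anx", "dep", "dep"]
r2 = ["dep", "dep", "anx", "dep", "dep", "dep", "anx", "anx", "dep", "dep"]
print(round(cohens_kappa(r1, r2), 2))  # 0.78
```

The clinicians agree on 9 of 10 patients, but some of that agreement would occur by chance, so kappa (0.78) is lower than the raw 90% agreement.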
VALIDITY – the degree to which a test measures what it was intended to measure
o Construct validity – how well a measure assesses a particular concept, and not other related concepts
  Ex. A shyness measure would assess sweating, blushing, avoiding situations, etc. – not other related concepts
o Criterion validity – assesses how well a measure correlates with other measures that assess the same or similar constructs
  Concurrent validity – assesses the relationship between 2 measures given at the same time (ex. SAT and ACT)
  Predictive validity – the ability of a measure to predict performance at a future date (ex. the SAT as a predictor of college grades)
o Clinical prediction – relies on a clinician's judgment
  Issue: the accuracy of the psychologist's predictions or conclusions
  Useful when relevant statistical data do not exist, and for generating new hypotheses
o Statistical prediction – using data from large groups of people to make a judgment about a specific individual
  Useful in evidence-based medicine, e.g., when predicting who will benefit from a specific treatment


Developmental and Cultural Considerations
The most important factors when choosing the nature of the tests, the normative values for score comparison, the people involved in testing, and the testing environment are the patient's age and developmental status
o Measures of psychological symptoms vary across age – there are specific tests for children vs. adults vs. older adults
o The assessment process varies across age – different people are involved in a patient's assessment depending on whether the patient is a child, an adult, or an older adult
  Children – input from parents and teachers
  Older adults – input from another adult who spends a lot of time with them
Cultural factors – many of the measures in use were developed in the US
o Administering them to more culturally diverse groups may produce biased results due to differences in educational backgrounds, language use, and cultural beliefs and values
o Culture-fair testing – takes into account variables that may affect test performance
  Translating tests
  Nonverbal tests (e.g., the Leiter International Performance Scale) – a nonverbal test of intelligence that requires no speaking or writing
  Tasks such as categorizing objects or geometric designs, and matching
Ethics and Responsibility
Psychologists who administer psychological assessments must adhere to the American Psychological Association's Code of Ethics
o Section 9 – they may only use tests on which they have had the required training
o They must only use instruments that have good reliability and validity
o They must not use outdated instruments
o They must obtain consent from the person being tested (or a parent if the person is too young)
o Informed consent – indicates that the person to be tested understands the test's purpose, its fees, and who will see the results
o Results are confidential and stored in a secure location
Assessment Instruments
o Choosing the best test depends on the goal of the assessment, the properties of the instruments, and the nature of the patient's difficulties
o Self-report measures – ask patients to evaluate their own symptoms
o Clinician-rated measures – clinicians rate the symptoms
o Subjective responses – what the patient perceives
o Objective responses – what can be observed
o Structured – each patient receives the same set of questions
o Unstructured – questions vary across patients
o Test battery – a number of tests given together
Clinical interviews – conversations between an interviewer and a patient
  Purpose: gather information and make judgments related to the assessment goal
  Major purposes: screening, diagnosis, treatment planning, or outcome evaluation
o UNSTRUCTURED INTERVIEWS – the clinician decides what questions to ask and how to ask them
  The initial interview is often unstructured – it allows the clinician to get to know the patient and to determine what other assessments might be useful
  Open-ended questions – give the patient flexibility to decide what information to provide
  Closed-ended questions – elicit specific, focused answers