
Midterm #3 Review.docx


James Beck

PSYCH 339 – Personnel Psychology: Midterm #2 Review

9:1 Abilities
- Cognitive – capacity to reason, plan, and solve problems
- Psychomotor
  o Dexterity, reaction time
- Physical
  o Bodily powers – muscular strength, flexibility

Cognitive Abilities
- General Mental Ability (GMA)
  o Nearly all jobs require
     Problem solving, learning, manipulating information
  o GMA positively correlated with job performance
  o Higher correlation for more complex jobs
  o Think of GMA like athleticism: if we say a person is athletic
     Not an expert at every task
     But the person probably: learns quickly, can solve problems, can work with a lot of information at once, has a good memory
- GMA vs. IQ
  o IQ stands for "intelligence quotient"
  o Not used anymore
     The quotient really only makes sense for measuring children
  o The term IQ is still used as a synonym for
     Intelligence, general mental ability, general factor of intelligence (g), cognitive ability
- Criterion-related validity of GMA
  o Overall criterion validity of GMA: r = 0.51
  o Low-complexity jobs: r = 0.23
  o High-complexity jobs: r = 0.56
- As information-processing demands increase, so does the predictive power of GMA
- Specific Cognitive Abilities
  o Quantitative – just math
  o Verbal – fill in the best words, synonyms and such; reading
  o Mechanical – physics-style test, e.g., which is the best hammer?
  o Spatial – ability to take an object and mentally rotate it
  o Clerical – anyone can do this; getting very specific, very easy to find the right answer; what matters is how quickly and accurately you can do it

Psychomotor Abilities
- Form-board tasks (e.g., placing planks into a board)
- Operating an overhead crane, moving an object from one place to another without knocking down the cones
- Driving test, no knocking down the cones

Physical Ability
- Muscular strength
  o Tension, power, endurance
- Cardiovascular endurance
  o Stamina
- Movement quality
  o Flexibility, balance, coordination

Other Characteristics
- Personality
- Interests

Personality
- Five Factor Model (OCEAN)
  o Openness to experience (intellect)
  o Conscientiousness
  o Extraversion
  o Agreeableness
  o Neuroticism (reverse of emotional stability)
- Derived from the Lexical Hypothesis
  o Words used to describe people
  o Factor analysis
  o Inductive

Integrity
- Honest, reliable, ethical
- Overt measures
  o Questions about honesty, stealing, etc.
- Personality-based measures
  o Compound trait (C + A + ES)
- Better predictor than Conscientiousness by itself
- Predicts CWBs (counterproductive work behaviours) – GMA cannot

Faking?
- Certainly occurs
- Methods to deal with it
  o Warnings, lie scales
- "Faking" vs. "putting your best foot forward"
- Do you want an employee who can't even fake it?
- My take on it: screen in vs. screen out

Interests
- Holland's RIASEC model
  o Realistic – technical, hands-on
  o Investigative – scientific, intellectual
  o Artistic – creative, imaginative
  o Social – helping others, interpersonal
  o Enterprising – leadership, influencing
  o Conventional – data management, organization
- Small (but useful) relationships with job performance
- Larger validities when interest is measured for the specific job (rather than broad RIASEC types)
- Predicts voluntary turnover (quitting)

Selection Methods – methods are not constructs
- Paper-and-pencil tests
  o Speed vs. power
  o Computer adaptive testing
- Speed test
  o Strict time limits
  o Not expected to finish all items
- Power test
  o Sufficient time
  o Items are more difficult
- Latent construct – the tests are methods for measuring an underlying construct

How do we measure people in the tails? (people who are very low or very high)
- More items?
- Longer test? Not necessarily
- Computer adaptive testing!

9:2 Methods and Constructs
- Paper and pencil (speed vs. power)
- Computer adaptive testing

Item Characteristic Curve
- We care about the tails in selection because we want to hire the best
- Plots the % chance of getting an item right against the level of the underlying construct
  o Tells us we need different items to discriminate in the tails

How do we measure people in the tails?
- More items?
- Longer tests? Not necessarily
- Computer adaptive testing

** Remember the MTMM matrix?
- What we want: constructs (we don't get this ideal often)
- What we get: method factors
- What it means:
  o An assessment-centre (AC) score might tell us more about the assessor (severe, lenient)…
  o …than about the person being evaluated (a source of error)

Selection Decision-Making
- Transform scores to the same scale … standardize
- Only one predictor: what is our selection ratio? (determines who/how many are chosen)
- Top-down
  o Sort highest to lowest
- Bottom-up
  o Screen people out
  o How many people do you want to advance?
  o For example: personality is possible to "fake"; we don't want people who can't even fake it

10:1 Selection Decision-Making
Standardize (z-scores)
- Top-down – start at the top and move down until you hire enough
  o Selection ratio (lower ratio, fewer people)
  o Move down the list
- Bottom-up – the inverse (used for different reasons)
  o Does it make sense to screen people out?
  o Multiple hurdle? (clear one before the next)
  o How many people do we want to advance?
  o Personality (applicants can fake it, but we want to hire those who can at least fake it)  use multiple behaviours to measure a construct

Multiple Hurdle
- Only people who clear each hurdle are allowed to move on
- How many people do we need to hire? (the sample shrinks at each hurdle)
  o Can use any number of hurdles
  o Mitigates costs

Multiple Cut-off
- For example, the rule: no employee should be more than 1.00 SD below the mean on any of the selection criteria
- Where might we see this type of rule? (astronaut, pilot, police/fire, doctor)

Compensatory
- High scores on one predictor can compensate for low scores on another
- Clinical combination
  o Assessor uses his/her judgement in deciding how to combine information
  o May or may not apply rules consistently across applicants
  o Looks for "broken legs"
     "Broken Leg Hypothesis" example: Will Doug go to the movies this weekend? (likes movies, has money, has time, etc.) What if Doug broke his leg on Friday? (all other cues become irrelevant)
     Problems with looking for broken legs: people are bad at it, and they use different definitions of it
  o Clinical combination doesn't work well
  o But people think they're good at it (they count hits, ignore misses)
  o Just like interviews
  o Narrow the pool mechanically, then use clinical judgement to narrow further
- Mechanical combination
  o Unit weighting
     Add the predictors together (can also include clinical judgement as one input – can't be as wrong)
  o Multiple regression
     Use regression to find the optimal weights
     Chooses the weights that maximize R (for that sample)
  o Other weighting schemes
     Accepting some error to minimize overall error
     Coming up with a weighted linear composite:
     Predicted Performance = b1·x1 + b2·x2 + … + bn·xn

Interviews
- Structured
  o Same questions asked of every candidate
  o Specified, systematic scoring schemes
  o Situational vs. behaviour description
     Situational: How would you behave in this situation?
     Behaviour description: Describe a time in the past when you behaved this way
- Unstructured
  o Questions vary from candidate to candidate
  o Less detailed scoring formats
- What construct?
  o Job knowledge
  o Abilities (putting knowledge together – cognitive)
  o Skills
  o Personality
  o Person-organization fit (realistic job preview)
     Two-way information exchange (fit)
- Interviewer Illusion
  o People think they are better at interviewing than they are
  o They count the "hits" and ignore the "misses"
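The item characteristic curve idea above can be sketched with the common two-parameter logistic (2PL) IRT model; the parameter values and ability points below are illustrative, not from the course.

```python
# Sketch of an item characteristic curve via the 2PL IRT model:
# P(correct) = 1 / (1 + exp(-a * (theta - b)))
# theta = standing on the latent construct, b = item difficulty,
# a = discrimination. All numbers here are made up for illustration.

import math

def p_correct(theta, a=1.5, b=0.0):
    """Probability of answering the item correctly at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An easy item (b = -2) tells us little about the upper tail:
# nearly everyone with moderate-or-better ability gets it right.
easy = [round(p_correct(t, b=-2.0), 2) for t in (-3, 0, 3)]

# A hard item (b = +2) is what separates people in the upper tail --
# the reason adaptive tests serve harder items to stronger test-takers.
hard = [round(p_correct(t, b=2.0), 2) for t in (-3, 0, 3)]

print("easy item:", easy)  # -> [0.18, 0.95, 1.0]
print("hard item:", hard)  # -> [0.0, 0.05, 0.82]
```

Note how the easy item's curve is flat (near 1.0) for everyone above average, which is why different items are needed to measure people in the tails.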
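The standardize-then-top-down procedure from the notes can be sketched in a few lines of Python; the applicant names, scores, and the 40% selection ratio are hypothetical.

```python
# Sketch of standardize + top-down selection (hypothetical data).
# Scores are converted to z-scores so predictors share a scale, then
# applicants are sorted highest-to-lowest and the top fraction
# (the selection ratio) is hired.

from statistics import mean, stdev

applicants = {"Ana": 82, "Ben": 74, "Cid": 91, "Dee": 65, "Eve": 88}

mu = mean(applicants.values())
sd = stdev(applicants.values())
z_scores = {name: (score - mu) / sd for name, score in applicants.items()}

selection_ratio = 0.4                               # hire the top 40%
n_hires = round(selection_ratio * len(applicants))  # = 2 hires

# Top-down: sort by z-score, highest to lowest, take the first n_hires
ranked = sorted(z_scores, key=z_scores.get, reverse=True)
hired = ranked[:n_hires]
print(hired)  # -> ['Cid', 'Eve']  (the two highest scorers)
```

A lower selection ratio simply cuts the ranked list off earlier, which is exactly the "move down the list until you hire enough" rule.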
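Multiple-hurdle selection, where the sample shrinks at each stage, can be sketched as a loop over ordered hurdles; the stage names, cut scores, and applicant data are made up.

```python
# Sketch of multiple-hurdle selection: applicants must clear each stage
# before advancing to the next (cheap screens first mitigates costs).
# All names, stages, and cut scores are hypothetical.

applicants = {
    "Ana": {"resume": 7, "test": 55, "interview": 8},
    "Ben": {"resume": 9, "test": 40, "interview": 9},
    "Cid": {"resume": 8, "test": 70, "interview": 6},
    "Dee": {"resume": 4, "test": 90, "interview": 10},
}

# Hurdles in order, each with its minimum passing score
hurdles = [("resume", 6), ("test", 50), ("interview", 7)]

pool = list(applicants)
for stage, cut in hurdles:
    # Only survivors of the previous hurdle are assessed at this stage
    pool = [name for name in pool if applicants[name][stage] >= cut]

print(pool)  # -> ['Ana']  (Dee, Ben, Cid fall at successive hurdles)
```

Because later (expensive) stages only see survivors, adding hurdles trades a shrinking sample for lower assessment cost.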
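The multiple cut-off rule from the notes ("no one more than 1.00 SD below the mean on any criterion") reduces to a single `all(...)` check; the criteria and z-scores below are invented.

```python
# Sketch of the multiple cut-off rule: reject anyone who falls more than
# 1.00 SD below the mean (z < -1.00) on ANY criterion. Scores here are
# already standardized; names and values are hypothetical.

CUTOFF = -1.00

applicants = {
    "Ana": {"strength": 0.5, "stamina": 1.2, "vision": 0.1},
    "Ben": {"strength": 1.8, "stamina": -1.4, "vision": 0.9},  # fails stamina
    "Cid": {"strength": -0.2, "stamina": 0.3, "vision": -0.8},
}

def passes_all_cutoffs(scores, cutoff=CUTOFF):
    """An applicant survives only if every criterion meets the cutoff."""
    return all(z >= cutoff for z in scores.values())

survivors = [n for n, s in applicants.items() if passes_all_cutoffs(s)]
print(survivors)  # -> ['Ana', 'Cid']  (Ben's high strength cannot compensate)
```

This is the opposite of a compensatory model: one sub-cutoff score screens an applicant out no matter how strong the other scores are, which is why the rule fits jobs like pilot or astronaut.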
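The compensatory weighted linear composite (Predicted Performance = b1·x1 + b2·x2 + … + bn·xn) can be sketched directly; the predictor names, scores, and b-weights below are illustrative, since real weights would come from a regression on a past sample.

```python
# Sketch of mechanical combination: a compensatory weighted composite.
# Hypothetical standardized predictor scores (e.g., GMA, conscientiousness).

candidates = {
    "Ana": (1.0, -0.5),
    "Ben": (-0.2, 1.5),
}

def unit_weighted(x):
    # Unit weighting: every predictor gets a weight of 1 -- just add them
    return sum(x)

def regression_weighted(x, b=(0.5, 0.3)):
    # b-weights would come from a multiple regression that maximizes R
    # for the calibration sample; these values are made up
    return sum(bi * xi for bi, xi in zip(b, x))

for name, x in candidates.items():
    print(name, unit_weighted(x), regression_weighted(x))
# Compensatory: Ben's low first score is offset by his high second score
```

Unit weighting ignores the "optimal" weights but cannot overfit the calibration sample, which is the sense in which it "can't be as wrong".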