PSYC 2650
Dan Meegan

Judgment and Reasoning 11/17/2012 11:44:00 AM

Judgment and Reasoning
 There are lots of other fields interested in how people make decisions
 Academics (assume people behave rationally) vs. psychology (more practical  how do people actually make decisions?)
  o Cannot assume we are rational

Decision Making:
1. Gather evidence
2. Weigh the evidence (pros and cons)
3. Make a decision based on the evidence

Confirmation Bias:
 The tendency to seek, emphasize, or remember evidence that confirms one's beliefs rather than evidence that challenges them; to fail to consider alternative hypotheses; and, when disconfirming evidence is presented, to reinterpret that evidence to diminish its value and fail to adjust one's beliefs based on the new information
  o We never go into a decision detached from our past
  o When picking a product, past experience with the product colours our decision
 In reality, disconfirmations tell us more about something than confirmations do
 The act of reinterpreting disconfirming evidence so that it accords with one's beliefs actually helps a person remember that (disconfirming) evidence better than confirming evidence
  o But it is remembered in an interpretation that is in line with their beliefs
  o Winning bets are remembered as "wins"; losing bets are remembered as "near wins" that resulted in losses because of flukes

Belief Perseverance
 Participants made judgments about suicide notes and were given feedback
 The feedback was random and not indicative of the participant's performance
 Later, participants were debriefed and asked to judge their own performance after they had been told the truth about the feedback
  o People who were told they were doing well on the task continued to think they had done well, and vice versa for those told they did poorly
 They persevered in their beliefs even when the basis for the belief had been discredited
  o Either hypothesis can be confirmed with a suitably selective memory search

Interviewer Bias
 Why did different evaluators come to different conclusions about whether a person was in need of social services?
 One-sided evidence gathering
 Interviewers' beliefs influence evidence gathering and reporting
  o Safety net  need to screen out people who are trying to take advantage of the system / taxpayers' generosity
 Example...
  o Experienced interviewers from social service agencies in NYC
  o One was described as a "socialist" and another as a "prohibitionist" by coworkers
  o Socialist: 3 times more likely to report that the problems putting the person in need of support are due to industrial causes beyond their control
  o Prohibitionist: 3 times more likely to report that the problems are due to drug or alcohol abuse
     The person is not eligible for support because the problems are under their control
     Should make drugs and alcohol illegal so these problems do not arise

Confirmation Bias in the Lab: Covariation Judgments
 Covariation: two variables that vary together; "correlation"
  o A matter of degree  can be strong or weak, positive or negative
     Note: positive does not necessarily mean strong
  o Important because covariation is what we consider when we study cause and effect
  o E.g. # of drinks & intensity of hangover
     People who drink more are more likely to have a more intense hangover
     Some data points confirm our belief
     Others do not (exceptions to the rule: party animals, lightweights)
 Rorschach tests
  o Participants are shown inkblots and asked to describe them
  o Do different responses covary with personality traits?
 When people have preexisting beliefs, can they see the exceptions and objectively judge the correlation? Or do they see a pattern that isn't there?
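The data-driven covariation judgments discussed below amount to informally estimating a correlation coefficient from paired data. A minimal sketch of that computation (the drinks/hangover numbers are invented for illustration, not taken from the notes):

```python
# Hypothetical paired data: number of drinks vs. hangover intensity (0-10).
# A data-driven covariation judgment is an informal estimate of Pearson's r
# from pairs like these; note the "exceptions" (e.g. the 3-drink case).
drinks   = [1, 2, 3, 4, 5, 6, 7, 8]
hangover = [0, 2, 1, 4, 3, 7, 5, 8]

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by the two standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(drinks, hangover), 2))  # 0.91: strong but imperfect
```

A positive r close to 1 despite several exceptions illustrates the notes' point that "positive" and "strong" are separate questions, and that a few violations of the rule need not abolish the trend.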
  o If there are a lot of exceptions
     Do you notice the exceptions, or go off the fact that you know people who drink more are more likely to have a more intense hangover?
 Covariation judgments: given a set of paired data, estimate the covariation
 Data-driven: no preconceptions about covariation
  o No preexisting beliefs  the only information at your disposal is the data itself, so judgments are based on the data alone
     Correlations you have never seen before or thought of
     You would think these judgments are "perfect", but there are some flaws
  o E.g. a series of pictures of men holding walking sticks
     Rate the covariation of the men's heights vs. stick lengths
     A bit of systematic underestimation
     Perfect estimation would produce a straight line
     Low correlation  lots of things violate the rule, so the trend is hard to see
     Only with stronger correlation do we see the trend, and as the correlation increases, performance gets better
     Regular results (the stronger the covariation, the stronger the estimate)
 Theory-driven: potential for preconceptions about covariation; preexisting expectations or biases, e.g...
  o Child dishonesty measured by false reports of athletic performance, and child dishonesty measured by the amount of cheating in solving a puzzle
  o Rate the covariation of Measure A (false report of athletic performance) and Measure B (cheating on the puzzle)
     The dotted line is where the data-driven judgments are
     We see overestimation in theory-driven judgments
     Even when there is no correlation, they say that a correlation exists, ignoring exceptions to the rule
     Extravagant in their estimates
  o Theory-driven errors
     Due to confirmation bias
     Emphasis on data pairs that confirm the preconception

Frequency Judgments:
 Judge the relative frequency of:
  o (1) Words beginning with "r"
  o (2) Words with "r" as the 3rd letter
 Common answer: (1) more frequent than (2)
 Correct answer: (2) more frequent than (1)
 Why?
 Availability heuristic
  o Frequency judgments are distorted by availability in memory
     "red" is more available than "dirt"
     Semantic memory is organized like a dictionary (words sharing sounds are closer together), so we can come up with more words that start with "r" than words that have "r" as the 3rd letter
     We use a heuristic (a rule of thumb that normally gives the right answer): since I can come up with more "r" words than words with "r" as the 3rd letter, the latter must be less frequent
     Produces systematic biases
       Two groups were asked to come up with either 6 or 12 events in which they had acted assertively
       They were then asked general questions about how assertive they are
       Those asked for 6 events would have an easier time coming up with examples than the 12-event group
       Using the availability heuristic, they conclude: since I could come up with 6 events no problem, there must be other events like this, therefore I must be an assertive person
       Vs.: these examples are difficult to recall, there must not be many, I must not be an assertive person
       Regardless of the fact that the 12-event group had more evidence of assertiveness than the 6-event group
  o Distinctive events are more likely to be remembered (and are therefore more available)
     Because they are rare, they catch our attention and come to mind more readily
     E.g., doctors overestimate the likelihood of rare diseases
       Presented with symptoms, they search memory for diseases that produce those symptoms and come up with rare cases that are actually less likely to have caused the symptoms
       They overlook the correct diagnosis
     List of names  judge the relative frequency of each gender (after the fact)
       One list has famous male names (biasing the list with familiarity; in a memory search, famous people are more distinctive and stick out in the mind)
       How many male names can I come up with vs. how many female names?
       (Incorrect) judgment: more male names
  o Recent events are more likely to be remembered ( more available)
     Frequency judgments correlate better with the frequency of media coverage than with actual frequency
  o If it is common, the media is not interested in it
     What you want to know vs. what you want to hear
     Extraordinary > ordinary (we exaggerate the frequency of extraordinary events)
  o Judge the relative frequency of death from:
     (1) Motor vehicle accidents
     (2) Stomach cancer
  o Common answer: (1) more frequent than (2)
  o Correct answer: (2) more frequent than (1)
     (1) makes the news more because we are more fearful of it
 Frequency judgments are influenced by reference points (anchoring)
  o When we don't know the answer, but we do know the ballpark the answer is in
     We take an initial idea as an anchor and then reach an answer by adjusting from that anchor
     Problem: we adjust too little and are more influenced by the initial anchor than we should be
  o % of NHL players that are Canadian
  o Above or below: (1) 80% (2) 50% ( reference points)
     Then estimate the specific %
  o Estimates are higher in (1) than in (2)
     We assume the person asking the question knows the answer
     Then they would be choosing an anchor that is close to the right answer
     If that is true, participants' estimates of the % are higher
  o Donations
     You are given a "typical donation" as a guide or social "norm"
     If you were thinking about donating $5 and are asked whether you want to donate $25 or $50, this changes your thoughts about how much you want to donate
     Primary anchor: the one that occurs in the left-most position in a list

Categorical Judgments:
 Drawing conclusions about an individual (or population) based on our assumptions about the population (or individual)

Representativeness Heuristic
 The expectation that individuals resemble their population, and vice versa
  o If someone is a lawyer, you expect them to have lawyer characteristics
  o If someone has lawyer characteristics, you conclude that they are a lawyer
 Leaves us willing to draw conclusions from small sample sizes
  o A good thing if the sample is homogeneous
 Population  Individual
  o The Gambler's Fallacy: in games of chance, if you get several heads in a row, you expect the next toss to be tails
     The population "coin toss" is 50/50 heads & tails
     "It must be tails, it is overdue"  impossible; the coin has no memory for past events, and each event is independent
     Fallacy: assuming any individual series of tosses will resemble the population; we assume homogeneity
  o "There are so many D's in a row on this exam  this one must not be D as well"
 Individual  Population
  o "(Wo)man who" arguments:
     "What do you mean sunbathing causes skin cancer? I know a woman who spent every day in the sun and died at 95 of heart failure!"
     The population is assumed to resemble the individual, assuming homogeneity
     We expect the overall category to have the characteristics of the individual
     Presumption that the category resembles the instance
     When we don't want to believe something, we look for examples that go against the belief
     Participants were shown videos of guards; they were told the guard was either typical or atypical of a normal guard, and the guard interview was either "humane" or "inhumane"
       If shown the humane video  neutral to positive views of guards
       If shown the inhumane video  a more negative view of guards
       Regardless of whether the participant was told the guard was typical or atypical (the information was ignored)
 Sample size: with a large number of cases, you will find patterns close to those in the overall population (the "law of large numbers")
     There is no "law of small numbers"  no tendency for small samples to approximate the pattern of the population
 Base Rate Neglect
  o Base rate: the overall likelihood that a particular case will be in this category or that one (independent of diagnostic info)
  o Base rate only: From a group of 70 lawyers and 30 engineers, chances are you will pick a _______?
     Typical answer: lawyer
  o Diagnostic only: A man's hobbies are carpentry and mathematical puzzles. Is he a lawyer or an engineer?
     Typical answer: engineer
     Diagnostic  meaningful; something you should take into consideration when making the decision
     We know there are lawyers who also have those hobbies
     But this is a forced choice, so we must use the diagnostic information
  o Base rate + diagnostic: A man is chosen from a group of 70 lawyers and 30 engineers. His hobbies are carpentry and mathematical puzzles. Is he a lawyer or an engineer?
     Typical answer: engineer
     Complete neglect of the base rate (we should take both pieces of info into consideration)
     People give the same answer as when they had only the diagnostic information, not using the extra information (which is also important and points the other way; they do not entertain the possibility that it is a lawyer with those hobbies)
  o Irrelevant information can also produce base rate neglect
     Base rate + irrelevant: A man is chosen from a group of 70 lawyers and 30 engineers. He is 31, and married with no kids. Is he a lawyer or an engineer?
       Typical answer: 50-50
     We should not be distracted by irrelevant info (age and marital status)
     It should be like having only the base rate information: at worst, 70% should say lawyer and 30% should say engineer
  o People still neglect the base rate information
  o Base Rates in the Real World: Breast Cancer
     Base rate = 1% chance of cancer
     Diagnostic information: results from mammography
       85% of those who have cancer will have a positive mammogram
       Not a perfect tool (15% of those who have cancer will have a negative mammogram)
  o How should we interpret mammography results?
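One way to see the answer is Bayes' rule in natural-frequency form. A minimal sketch (assuming a population of 100,000; the 10% false-positive rate is an assumption implied by the 9,900 false positives in the notes' arithmetic, not stated there explicitly):

```python
# Reproducing the mammogram numbers with natural frequencies.
# Assumed: population of 100,000 and a 10% false-positive rate
# (implied by the notes' 9,900 figure), 1% base rate, 85% sensitivity.
population = 100_000
base_rate = 0.01            # 1% have cancer
sensitivity = 0.85          # 85% of cancers yield a positive mammogram
false_positive_rate = 0.10  # assumed, implied by 9,900 / 99,000

with_cancer = population * base_rate                                # 1,000 women
true_positives = with_cancer * sensitivity                          # 850
false_positives = (population - with_cancer) * false_positive_rate  # 9,900

# P(cancer | positive) = true positives / all positives
p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(round(p_cancer_given_positive, 2))  # 0.08
```

Most positive mammograms come from the large cancer-free majority, which is why the correct answer (~8%) is so far below the 85% figure people assume.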
 Diagnostic + base rate
     When a mammogram indicates cancer: 850 / (850 + 9,900) = 8% chance of cancer (this should be the number that guides the interpretation of a mammogram)
 Base rate only: 1%
 Diagnostic only: 85%
 Both: 8% (correct), 85% (assumed)

Dual-Process Models:
 Models that assert people have two distinct ways of thinking about the evidence they encounter
  o In some cases people make errors associated with heuristics when making judgments, but not in all cases
 One way of thinking is fast, effortless, and automatic (where heuristics come into play)
  o Not always perfect; it will lead to mistakes (the price you pay for being efficient)
 The other way of thinking is slower, effortful, and applied only with deliberate intention (when people rise above the heuristics)
  o It can often catch and overrule errors

Importance of Circumstance and Data Format
 People use the slower, effortful thinking only if it is appropriately triggered by cues in the problem, and only if the circumstances are right
  o People can become more responsive with training
     Told that accidents happen, but accidents don't happen repeatedly
 It also depends on the characteristics of the specific judgment being made
  o E.g. when presented in the right way, data can reduce the error of base rate neglect
  o People are more likely to use base rates and diagnostic information if they are presented in frequency format

Codable Data
 Sometimes people don't realize that their statistical concepts are applicable to a judgment they are trying to make
  o Evidence can be understood as a sample of data drawn from a larger set of potential observations
     They don't see that chance played a role in shaping the evidence they are considering
 People rise above heuristic thinking if the role of chance is clear in the evidence and if it's obvious how to interpret or code the evidence in terms of a set of independent observations
 More likely to drop the heuristic if the evidence is easily understood in statistical terms

Background Knowledge
 Sometimes a problem evokes better reasoning because of the knowledge the person brings to it and their beliefs about how the parts of the problem are related
 Base rates are often ignored when participants don't see any cause-and-effect relationship between the base rate and the case they are interested in
  o They are likely to consider factors that they think played a cause-and-effect role in shaping the case they are considering

Decision Making 11/17/2012 11:44:00 AM

Decision Making
 Choosing among alternatives
 Cost-benefit analysis

Utility Theory:
 Came from economics
  o Very theoretically driven; assumes people act rationally when making decisions
 2 alternatives:
  o A bird in the hand
  o Two birds in a bush
 Each alternative has:
  o An outcome utility (two birds are better than one)
  o An outcome probability (hand is better than bush)
  o When making a choice, we take both into account
 How do we choose between alternatives?
  o We calculate expected utility:
     Expected utility = (outcome utility) x (outcome probability, which varies between 0 and 1)
     Utility = "value"
  o High ecological validity
     Making financial decisions is something we do every day
     In lab settings, students need the extra money; they now have a role to play that determines how much money they walk away with
     The rules we uncover apply to human reasoning
 We are always looking to maximize utility

Money $$$
 With financial decisions, outcome values should be easy to determine (we can put numbers on them; much easier when talking about money)
  o Thus, utility theory should apply to financial decisions
 Choose between:
  o $8 with 1/3 probability (two in the bush)
     Expected value = $8 x 1/3 = $2.67
  o $3 with 5/6 probability (a bird in the hand)
     Expected value = $3 x 5/6 = $2.50
 People tend to choose the latter
  o Utility theory predicts the opposite
 Why?...
  o Because value and/or prob
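The two-gamble comparison above can be checked directly with the notes' formula (expected utility = outcome utility x probability); a minimal sketch:

```python
# Expected value of the two gambles from the notes.
def expected_value(payoff, probability):
    """Expected utility/value: outcome value weighted by its probability (0-1)."""
    return payoff * probability

two_in_bush = expected_value(8, 1 / 3)   # $8 with 1/3 probability
bird_in_hand = expected_value(3, 5 / 6)  # $3 with 5/6 probability

print(round(two_in_bush, 2))   # 2.67
print(round(bird_in_hand, 2))  # 2.5
# Utility theory says take the bush ($2.67 > $2.50),
# yet people tend to take the bird in the hand.
```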