PSYB10H3 Lecture Notes - Lecture 10: Publication Bias, Saturated Fat, Vehicle Insurance

Published on 21 Nov 2018
The Replication Crisis & Applied Social Psychology
The "Replication Crisis":
o Many scientific studies are not replicated in subsequent investigations.
o Doing the same study again gives a different result.
Power Posing:
o Physical posture has a big effect on psychological state, outcome in negotiation tasks, and
stress levels.
o The "power pose" supposedly makes us feel more powerful (pose popularized by Amy Cuddy).
o Amy Cuddy wrote that this pose is useful for behavioural change and stress levels, and
ultimately, this finding became widely accepted and celebrated (seemed significant).
o Eventually, this was shown to be difficult to replicate.
Also in Medical Research:
o Lots of medical research (including findings in cancer biology) has failed to be replicated.
Causes of Replication Crisis?
o Publication bias.
o Analytic flexibility (p-hacking).
Publication Bias:
o Preference to publish positive over negative results.
o Journals are reluctant to publish studies that found nothing statistically significant - or
nothing at all.
o In psychology/psychiatry, over 90% of papers published support the tested hypothesis.
This suggests that very few negative results ever get published.
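The distortion this causes can be sketched with a toy simulation: run many two-group studies where the true effect is zero, and "publish" only those whose group difference looks significant. All numbers and thresholds below (sample size, the rough two-standard-error cutoff) are illustrative assumptions, not anything from the lecture.

```python
# Hypothetical sketch: publishing only "positive" results distorts the
# literature, even when every true effect is zero.
import random

random.seed(0)

def run_study(n=30):
    """Simulate a two-group study with NO real effect; return the group difference."""
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    return sum(group_b) / n - sum(group_a) / n

# Crude "significance" rule for illustration: |difference| beyond ~2 standard errors.
se = (2 / 30) ** 0.5
results = [run_study() for _ in range(10_000)]
published = [d for d in results if abs(d) > 2 * se]

print(f"Studies run:       {len(results)}")
print(f"Studies published: {len(published)}")  # roughly the 5% false-positive rate
print(f"Mean effect, all studies:      {sum(results) / len(results):+.3f}")  # near zero
print(f"Mean |effect|, published only: {sum(abs(d) for d in published) / len(published):.3f}")
```

Anyone reading only the "published" studies would conclude there is a sizeable effect, even though the full set of studies averages out to nothing.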
Analytic Flexibility (p-Hacking):
o Reporting only a subset of the experimental conditions.
o Reporting only a subset of the many measures they collected.
o Excluding participants or trials for reasons that seem defensible but were not decided in
advance.
o Including statistical controls that seem defensible but were not chosen in advance.
Replication Rates:
o Large, multi-site projects ("Many Labs").
o Many studies chosen.
o Original materials from authors.
o Replication rates: 40-50%.
Fixing It - Publication Bias:
o Registered reports.
Paper published regardless of results.
Journal commits to this.
Researchers send the introduction and methods to the journal; if approved,
the results are published (whether positive or negative) so long as the
researchers accurately follow their proposed methods (i.e., publication is based
on the quality of the research plan and execution, not the nature of the results).
o Journal replication sections.
Allows negative results to be published.
Fixing It - Analytic Flexibility:
o Disclosure standards.
Authors required to disclose dropped conditions, measures, participants, etc.
If dropping any of these things, the researchers must admit to that.
Problems:
Sometimes things are excluded for very good reasons - readers might still be
suspicious of dropped conditions, even when the exclusion was necessary.
Also, once information is excluded, readers cannot know how the
results would have differed had it been kept.
o Pre-registration.
Analysis plans must be registered in advance.
Implications:
o Almost certainly, some of the things we've been taught in class are wrong.
o No way to know which those are.
o "Science is the only self-correcting human institution, but it also is a process that progresses
only by showing itself to be wrong" - Allan Sandage.
Applications of Social Psychology:
o Elections.
o Nudges.
o Spending and saving.
o Education.
2016 US Presidential Election:
o Polling - seemed to show Clinton strolling toward victory, which obviously didn't happen.
o Turnout.
o Demographics.
o Intergroup conflict.
National Popular Vote:
o Polls: Clinton +3.2%.
o Actual: Clinton +1.7%.
o Absolute error: 1.5 percentage points.
In other words, the polls overestimated Clinton's popular-vote lead.
Historically, the average polling error is about 2 percentage points (i.e., the 2016
election's polling error was actually below average).
Sources of Polling Error:
o Random (sampling) error.
Quantifiable.
Cancels out across polls.
o Systematic error.
Harder to quantify.
Does not necessarily cancel out.
Sources of Systematic Error:
o Nonresponse bias - some people are simply harder to reach than others (e.g., younger
people often have only cellphones, while pollsters usually call landlines).
o Social desirability bias - if people know it is socially undesirable to support a certain
candidate, they may lie to pollsters about who they are voting for, or simply say they are not
voting.
o Likely voter models - pollsters predict whether someone will vote and screen or weight
respondents accordingly; a flawed model produces flawed results.
All of these sources line up to create the systematic error that may have contributed
to the overestimation of Clinton's lead.
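The distinction between random and systematic error above can be sketched with a toy simulation: random sampling error mostly cancels when many polls are averaged, but a bias shared by all polls does not. The bias size and poll noise below are assumed illustrative values, not real 2016 polling data.

```python
# Hypothetical sketch: averaging polls removes random error but not a shared bias.
import random

random.seed(2)

TRUE_MARGIN = 1.7       # actual Clinton popular-vote lead, in points
SYSTEMATIC_BIAS = 1.5   # assumed shared bias overstating that lead (illustrative)

def one_poll(random_sd=3.0):
    """One poll = truth + shared systematic bias + independent random error."""
    return TRUE_MARGIN + SYSTEMATIC_BIAS + random.gauss(0, random_sd)

polls = [one_poll() for _ in range(50)]
average = sum(polls) / len(polls)

print(f"Single poll:   {polls[0]:+.1f}")
print(f"Average of 50: {average:+.1f}")      # random noise mostly cancels...
print(f"True margin:   {TRUE_MARGIN:+.1f}")  # ...but the shared bias remains
```

The average of fifty polls lands near 3.2 (truth plus bias), not near the true 1.7 - which is why polling averages can still miss in the same direction.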
Turnout:
o Turnout was roughly the same as in previous elections.
Demographics:
o Trump drew about the same share of white voters as Mitt Romney did in 2012.
o Clinton received less support from men than Trump (and than Romney in 2012), but she
also did not gain much support from women compared to Obama in previous elections.
Intergroup Conflict:
o Realistic group conflict theory:
Competition for scarce resources.
Most ethnocentrism (ingroup preference) from groups under threat (with most to
lose).
Nudges:
o Changing behaviour:
Obesity/unhealthy eating lead to numerous health problems.
As a policy-maker, what can you do to encourage healthier eating?
Encouraging Healthier Eating:
o Incentives (increasing price).
o Information.
o Accessibility/channel factors.
Information:
o New York City calorie-labeling law.
o Study: 14 NYC fast-food restaurants before and after calorie labels were mandated.
Asked 800 diner customers for their receipts to determine how many calories they
consumed.
Before calorie labels: 825 calories.
After calorie labels: 846 calories.
Why didn't this work?
People are given the information but may not use it.
Customers may believe the numbers are arbitrary.
Some people may disregard the numbers because they do not want to
add them up when ordering multiple items.

