Psych 7 All Text Notes

Psych 7 Psychology Research Methods

Ch. 1 Notes
-Empirical Reasoning: reasoning using observation, experimentation, and research.
-Consumer of Research: psychologists who work outside the lab and use information already produced.
-Producer of Research: psychologists who work within the lab to produce information to use in real life.
-The Four Scientific Cycles
 -The Theory-Data Cycle
  --Most important.
  --Involves having a theory and then collecting data to support or challenge that theory.
  --Theory -> Research Questions -> Research Design -> Hypothesis -> Data -> Theory...
  -Theory: a statement that describes how different variables relate to each other.
   -A good theory is supported by data, falsifiable, and parsimonious.
  -Hypothesis: a prediction about what the data will show if the theory is correct.
  -Data: a set of observations.
  -Empiricism: the approach of collecting data and using it to develop, support, or challenge a theory.
 -The Basic-Applied Research Cycle
  --Basic research inspires applied research and vice versa: Basic -> Translational -> Applied.
  -Applied Research: done with a practical problem in mind.
  -Basic Research: done in order to add to general knowledge.
  -Translational Research: the bridge between basic and applied research.
 -The Peer-Review Cycle
  --A researcher writes an article; anonymous peers review and critique it; it is either published or sent back to be revised with more data.
  -Journal: writings of scientific discovery compiled and distributed.
 -The Journal-to-Journalism Cycle
  --Researchers find inspiration in popular culture and then publish their findings for popular culture.
  -Journalism: all published and public articles (especially pop culture).

Ch. 2 Research vs. Experience
-Most people don't take a comparison group into account when forming beliefs.
-Personal experience has no comparison group, and there are too many variables/confounds.
-Behavioral research is probabilistic.
-People have a tendency to think what they want to think:
 -Pop-up Principle
 -Present/Present Bias
 -Biased Questions
 -Cherry-Picking Evidence
 -Being Overconfident
-Comparison Group: a group in the same situation that allows comparison and can reveal different results.
-Confounds: alternative explanations for an outcome.
-Probabilistic: inferences are not expected to explain all cases all of the time.
-Present/Present Bias: we notice what is present but fail to notice what is absent.
-Pop-up Principle: things that come to mind more easily tend to guide our thinking.
-Confirmation Hypothesis Testing: asking questions that will lead to the wanted answer.

Types of Journal Articles
-Empirical Journal Articles: report the results of a research study.
-Review Journal Articles: summarize all the research that has been done in one area of research.
-Always read with purpose.
-Ask two questions when reading:
 -What is the argument?
 -What is the evidence to support that argument?
-Parts of an Empirical Journal Article: Abstract, Introduction, Method, Results, Discussion, Reference List.

Ch. 3 Variables in a Study
-Variable: something that varies; it has more than one value.
-Constant: something that could potentially vary but has only one level in the study in question.
-Measured vs. Manipulated Variables
 -Measured Variables: variables that are observed and recorded.
  -A researcher must define what each level of the variable means.
  -Examples: IQ, gender, traits.
 -Manipulated Variables: variables whose levels the researcher controls.
  -Examples: medication dose, participant assignment.
-From Conceptual Variable to Operational Definition
 -Conceptual Definition: the abstract concept (e.g., depression, happiness).
 -Operational Definition: using operationalizations to turn a concept of interest into a measured or manipulated variable.
  -Defining what depression, happiness, stress, etc. actually are.

The Three Claims
-Claim: the argument someone is trying to make.
 -Psychologists use empirical research to make claims.
-Frequency Claims: report a percentage or rate of something.
 -A particular rate of a single variable.
 -Anecdotal claims are not frequency claims: they do not report the results of a study but just tell an illustrative story about isolated experiences.
-Association Claims: suggest that two variables are related (but not that one causes the other).
 -Correlated variables.
 -Positive Association: as one variable increases, so does the other.
 -Negative Association: as one variable increases, the other decreases.
 -Zero Association: no relationship between the variables.
 -Curvilinear Association: the relationship changes direction across the range.
 -Often shown with a scatterplot.
 -You cannot make causal statements from a correlation, but you can make predictions.
-Causal Claims: claims that one variable causes a change in another.

Interrogating the Three Claims Using the Four Big Validities
-Construct Validity: how well a study measured or manipulated its variables; concerns how accurately a researcher has operationalized each variable.
-External Validity: how well the results of the study generalize to, or represent, people and contexts beyond those in the study itself.
 -Generalizability: the extent to which a study's results apply to the real world.
-Statistical Validity: the extent to which a study's statistical conclusions are accurate and reasonable.
 -Minimize two mistakes:
  -"False alarm" (Type I error)
  -"Miss" (Type II error)
-Internal Validity: in a relationship between one variable (A) and another (B), the degree to which we can say that A, rather than some other variable, is responsible for the effect on B.
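Statistical validity for a frequency claim often comes down to the margin of error of the estimate. A minimal sketch in plain Python (the poll numbers are made up for illustration, and a simple random sample at 95% confidence is assumed):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for a sample proportion (simple random sample assumed)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# "50% of Americans struggle to stay happy," from a hypothetical poll of 1,000 people:
moe = margin_of_error(0.50, 1000)
print(f"50% +/- {moe * 100:.1f} percentage points")  # roughly +/- 3.1 points
```

The estimate gets tighter as the sample grows: quadrupling n to 4,000 roughly halves the margin of error.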
The Relationship Between the Three Claims and the Four Big Validities
(Example claims: frequency, "50% of Americans struggle to stay happy"; association, "Happy people cut the chit-chat"; causal, "Music enhances IQ.")
-Construct Validity (operational definitions)
 -Frequency claims: How well have you measured the variable?
 -Association claims: How well have you measured each variable?
 -Causal claims: How well have you measured or manipulated each variable?
-Statistical Validity (statistical significance and margin of error)
 -Frequency claims: What is the margin of error of the estimate?
 -Association and causal claims: If the study finds a relationship (or no relationship), what is the probability that the researchers have a false alarm or are missing something? What is the effect size, and how strong is the association? Is it statistically significant?
-Internal Validity (confounds)
 -Frequency and association claims: not relevant.
 -Causal claims: Was the study an experiment? Did it achieve temporal precedence? Does it rule out alternative explanations and limit confounds?
-External Validity (generalization)
 -All three claims: Can you generalize the estimate? How representative is the sample?
 -Causal claims: How representative are the manipulations?

3 Rules for Causation / Causal Claims:
-Covariance: as A changes, B changes in a related way.
-Temporal Precedence: A comes first in time, before B (e.g., the study must show that music, A, came before an increase in IQ, B).
-Internal Validity: there are no alternative explanations for A causing B.
 -The use of random assignment helps with this.
-All three are established with experiments that have independent and dependent variables.
 -Independent Variable: the manipulated variable.
 -Dependent Variable: the measured variable.

Ch. 4 Ethical Guidelines for Psychology Research

The Tuskegee Syphilis Study
-Began in 1932 with 600 working-class Black men.
-Lasted 40 years and ruined many participants' lives.
-Three kinds of ethical violations by the Tuskegee Study:
 -Harming of participants
  -Not told of their actual treatment.
  -Subjected to painful and dangerous tests.
 -Disrespectful treatment of participants
  -Information was withheld from them.
  -Full informed consent was not possible to give.
 -Targeting of a disadvantaged social group
  -All men studied were poor and African American.

The Milgram Studies: An Example of the Ethical Balance
-1960s; people were told to shock others who got questions wrong.
-Two sources of ethical concern:
 -Was it ethical of Milgram to put unsuspecting volunteers through such a stressful experience?
 -Lasting effects of the study, even with debriefing: the dramatic experience could have caused psychological damage.

The Belmont Report: Principles and Applications
-1976; three principles:
 -The Principle of Respect for Persons
  -Individuals participating should be treated as autonomous agents, free to make up their own minds.
  -Participants are entitled to give informed consent to participate.
   -Informed Consent: participants know the risks and benefits and decide whether to participate.
  -Some people are entitled to special protection: children, those with intellectual or mental disabilities, prisoners.
 -The Principle of Beneficence
  -Protect participants from harm and ensure their well-being.
  -Weigh risks and benefits; both psychological and physical harm must be considered.
  -Consider the stresses of each study before beginning it, to know whether a participant can handle them.
 -The Principle of Justice
  -Calls for a fair balance between the people who participate and the people who benefit from the research.
  -Makes sure participants "bear the burden" of the risks for the right reasons, and that there is a chance their participation will contribute to society.
  -Makes sure the participants being studied represent the people the study will help.

Guidelines for Psychologists: the APA Ethical Principles
-The Five General Ethical Principles (to be strived for):
 -Beneficence and Nonmaleficence: treat people in ways that benefit them; do no harm; benefit society.
 -Fidelity and Responsibility: establish trust; accept responsibility for one's behavior.
 -Integrity: strive to be accurate, truthful, and honest in one's role as a researcher, teacher, or practitioner.
 -Justice: treat all groups fairly; sample research participants from the same populations that will benefit from the research; beware of biases.
 -Respect for People's Rights and Dignity: treat participants as autonomous agents; protect their rights; take precautions against coercion.
-Ten Specific Ethical Standards (must be followed):
 -Standard 8 is the only standard that involves research.
  -8.01: Psychologists must comply with their local Institutional Review Board (IRB), which ensures research is conducted ethically.
  -8.02 Informed Consent: the researcher must explain the experiment to participants.
  -8.07 Deception: acceptable with consideration of beneficence; safe under controlled circumstances.
  -8.08 Debriefing: explanation of the study and all deceptions.
  -8.09 Animal Research: legal protections for lab animals.
   -Institutional Animal Care and Use Committee (IACUC).
   -Animal care guidelines and the Three R's:
    -Replacement: only use animals if you have to; find alternatives.
    -Refinement: minimize animal distress.
    -Reduction: use as few animals as possible.

Research Misconduct
-Data Fabrication (APA Standard 8.10) and Data Falsification
 -Data Fabrication: inventing data to fit a hypothesis.
 -Data Falsification: interfering with a study or deleting certain pieces of information.
 -Both impede the progress of science.
-Plagiarism
 -The appropriation of another person's ideas, processes, results, or words without giving appropriate credit.
 -Cite all sources.

Ethical Decision Making: A Thoughtful Balance
-Benefit vs. risk.
-Ethics in research is not a permanent set of rules.

Ch. 5 Identifying Good Measurement

Construct Validity: Ways to Measure Variables
-Any variable can be expressed in two ways:
 -Conceptual: the researcher's definition of the variable in question at an abstract level.
 -Operational: the researcher's specific decision about how to measure or manipulate the conceptual variable.
-Operationalizing "Happiness" and Other Conceptual Variables
 -You define the variable at the conceptual level before the operational level.
 -One must choose a single measure for a concept that could be studied in many other ways.
-Three Common Types of Measures
 -Self-Report Measures: operationalize a variable by recording people's answers to verbal questions about themselves in a questionnaire or interview.
 -Observational (Behavioral) Measures: operationalize a variable by recording observable behaviors or physical traces of behaviors.
  -Counting how many times someone smiles throughout the day.
  -Timing how long it takes someone to solve a puzzle.
 -Physiological Measures: operationalize a variable by recording biological data such as brain activity, hormone levels, or heart rate.
  -Physiological measures require equipment:
   -Facial electromyography registers movement in the face.
   -fMRI measures brain activity through blood flow.
 -One construct (variable) can be measured in many different ways.

Scales of Measurement
-All variables must have at least two levels.
-Categorical vs. Quantitative Variables
 -Categorical Variables: levels are categories (e.g., gender, species).
 -Quantitative Variables: levels are coded with meaningful numbers (e.g., weight, height, level of brain efficiency, happiness).
-Three Kinds of Quantitative Variables:
 -Ordinal Scale: applies when the numerals of a quantitative variable represent rank order (e.g., three-star versus four-star resorts).
 -Interval Scale: applies when the numerals of a quantitative variable meet two conditions:
  -The numerals represent equal intervals between levels.
  -There is no "true zero" (a score of 0 does not mean "nothing"); e.g., temperature, IQ tests.
 -Ratio Scale: applies when the numerals have equal intervals and a value of zero means "nothing" (e.g., weight, income).

Reliability of Measurement
-Construct validity has two aspects:
 -Reliability of measurement: concerns how consistent a measure is.
 -Validity of measurement: concerns whether the operationalization is measuring what it is supposed to measure.
-Three kinds of reliability:
 -Test-Retest Reliability: consistent results are obtained every time the measure is used.
  -Applies whether the operationalization is self-report, observational, or physiological; primarily relevant when researchers are measuring variables that should stay stable over time.
 -Interrater Reliability: two or more independent observers come up with the same (or similar) findings.
  -Most relevant for observational measures.
 -Internal Reliability: people give consistent answers across similarly worded items within the same questionnaire.
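The reliability types just defined are typically quantified with a correlation coefficient. A minimal sketch in plain Python (the scores are hypothetical, and the hand-rolled Pearson r stands in for what a stats package would compute):

```python
def pearson_r(xs, ys):
    """Correlation coefficient r between two sets of scores (-1.0 to +1.0)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Test-retest: the same five people take a (hypothetical) happiness scale twice.
time1 = [4, 7, 5, 9, 6]
time2 = [5, 7, 4, 9, 7]
print(f"test-retest r = {pearson_r(time1, time2):.2f}")  # strong and positive -> reliable

# Interrater: two observers rate the same five children's smiling.
rater_a = [3, 1, 4, 2, 5]
rater_b = [3, 2, 4, 2, 4]
print(f"interrater r = {pearson_r(rater_a, rater_b):.2f}")
```

A strong positive r on either check is evidence of reliability; a weak or negative r is not.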
-Using a Scatterplot to Evaluate Reliability
 -Can show interrater agreement or disagreement: see which ratings agree and which disagree.
-Using the Correlation Coefficient r to Evaluate Reliability
 -Correlation Coefficient r: a number indicating how close the dots on a scatterplot are to a line drawn through them; r ranges from -1.0 to 1.0.
 -Slope direction: positive, negative, or zero.
 -Strength: how close or far the points on the scatterplot are to the line.
 -Test-Retest Reliability:
  -Measure the same set of participants twice in the same way, then compute r between the two sets of scores.
  -If r is positive and strong, the test is reliable; if r is positive but weak, it is not.
  -The two sets of findings should be similar, so r should be strong.
 -Interrater Reliability:
  -Two observers rate the same participants at the same time; then compute r.
  -If r is strong, interrater reliability is strong.
  -A negative correlation is rare and undesirable.
 -Internal Reliability:
  -A set of questionnaire items cannot simply be correlated pairwise by hand.
  -Use Cronbach's alpha to assess the consistency of a set of items in a sample.

Validity of Measurement
-Measurement Validity of Abstract Constructs
 -Construct validity evidence is always a matter of degree.
 -Face Validity: the measure looks like a plausible measure of the variable in question (a subjective judgment).
 -Content Validity: the measure must capture all parts of a defined construct (also a subjective judgment).
 -Predictive and Concurrent Validity: evaluate whether the measure under consideration is related to a concrete outcome it should be related to, according to the theory being tested (correlation between the measure and some relevant outcome).
  -Predictive Validity: the measure correlates with the outcome in the future.
  -Concurrent Validity: the measure correlates with the outcome measured at the same time.
 -Known-Groups Evidence for Predictive and Concurrent Validity
  -Known-Groups Paradigm: used to gather evidence for predictive or concurrent validity.
  -Researchers see whether scores on the measure can discriminate among a set of groups whose behavior is already well understood.
 -Convergent and Discriminant Validity: show whether the test has a meaningful pattern of associations with other measures.
  -Convergent Validity: the measure correlates more strongly with other measures of the same construct.
   -Established through a cycle of comparisons between tests.
  -Discriminant Validity: the measure correlates less strongly with measures of other, distinct constructs.
   -Used when researchers want to be sure their measure is not accidentally capturing a similar but different construct.
-The Relationship Between Reliability and Validity
 -The validity of a measure is not the same as its reliability.
 -Although a measure may be less valid than it is reliable, it cannot be more valid than it is reliable.
 -Reliability is necessary (but not sufficient) for validity.

An Applied Review: Interrogating Construct Validity as a Consumer
-Steps for interrogating Diener's measure of happiness:
 -Ask how the researcher operationalized the variable.
 -Analyze the operationalization's face and content validity.
 -Analyze its empirically established reliability and validity.
 -Analyze validity as the pattern from the entire body of evidence.
-Interrogating the Gallup Poll's measure of happiness:
 -Analyze the measure's construct validity.
 -A lack of detailed construct definition takes away from the validity and reliability:
  -No information on the reliability of the ladder score over time.
  -The method of assigning scores is questionable because there is no real evidence behind it.

Ch. 6 Tools for Evaluating Frequency Claims

Describing What People Do: Surveys, Observations, and Sampling

Construct Validity of Surveys and Polls
-Survey/Poll: a sample of people asked to answer questions on the phone, in personal interviews, on a questionnaire, or over the internet.
-Choosing Question Formats:
 -Open-Ended Questions: allow respondents to answer in any way they see fit.
  -Good: spontaneous, rich answers. Bad: answers must be coded and categorized.
 -Forced-Choice Format: people give their opinion by picking the best of the offered options (e.g., the Narcissistic Personality Inventory).
 -Likert Scale: respondents indicate their degree of agreement on a numbered scale.
 -Semantic Differential Format: an adjective pair anchors a scale and respondents place themselves on it (e.g., Easiness of task: easy 1 2 3 4 5 6 7 hard).
-Writing Well-Worded Questions:
 -Construct validity is affected by wording.
 -Leading Questions: questions must be worded as neutrally as possible (positively and negatively worded versions elicit different reactions).
 -Double-Barreled Questions: ask two questions in one; complicated questions should be avoided, so ask only one question at a time.
 -Double Negatives: make the wording of questions needlessly complicated.
  -"Does it seem possible or impossible that the Holocaust never happened?" To say that it happened, one must answer, "It is impossible that it never happened."
 -Question Order: earlier questions can affect how people think about later questions.
  -Can be addressed by preparing different versions of the questionnaire with different orders.
-Encouraging Accurate Responses:
 -Using shortcuts:
  -Response Sets: answering all questions the same way (hurts construct validity).
  -Yea-Saying and Nay-Saying: choosing "strongly agree" (or "strongly disagree") for every question without thinking carefully.
  -Fence Sitting: always answering neutrally, so nothing is learned.
 -Trying to look good:
  -Socially Desirable Responding (faking good or bad): respondents give answers that make them look better ("too good to be true").
  -To counter this, filler questions are added.
  -The Implicit Association Test discourages socially desirable responding because it works with automatic responses.
 -Self-reporting "more than they can know": people sometimes cannot explain something and give inaccurate responses when they try.
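One common defense against the response sets described above is to reverse-word some items, so a yea-sayer's uniform agreement no longer produces an extreme score. A minimal sketch in plain Python (the 4-item scale and its responses are hypothetical):

```python
# A 5-point Likert scale (1 = strongly disagree ... 5 = strongly agree).
# Items marked reverse=True are worded in the opposite direction, so they
# must be flipped before scoring.

def score_item(response, reverse=False, scale_max=5):
    """Return the scored value of one Likert response (flipped if reverse-worded)."""
    return (scale_max + 1 - response) if reverse else response

# Hypothetical 4-item happiness scale; items 2 and 4 are reverse-worded.
responses = [5, 5, 5, 5]                    # a yea-sayer agrees with everything
reverse_keyed = [False, True, False, True]
scored = [score_item(r, rev) for r, rev in zip(responses, reverse_keyed)]
print(scored, sum(scored))  # [5, 1, 5, 1] 12 -> a middling total, not a maximum
```

A respondent who actually reads the items would answer the reverse-worded ones in the opposite direction, and only then would the total approach the scale's maximum.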
 -Self-Reporting Reasons for Behavior
  -The reasons for behavior are not always known; contributing factors may not have been noticed.
 -Self-Reporting Memories and Events
  -Memories are not very accurate and can change over time.

Construct Validity and Behavioral Observations
-Observational Research: watching people or animals and systematically recording what they do.
 -No problems with how questions are worded or with biased responses.
-Some examples of frequency claims based on observational data:
 -Observing how much people talk (recording talking).
 -Observing moms and dads (watching how they interact in certain situations).
 -Observing parent-child reunions.
-Are Observations Better than Self-Reports?
 -Depends on the situation and whether the researcher is trained.
 -Possibility of Observer Bias: observers record what they expect or want to see, inferring judgments from prior information.
 -Observers can cause what they want to happen by giving unintentional cues.
 -Masked (blind) study designs allow control of these biases.
-The observed might react to being watched:
 -Observer Effects: people change their behavior if they know someone is watching.
 -Ways to fix this problem:
  -Unobtrusive observations.
  -Wait it out: let people get used to the observer's presence.
  -Measure the behavior's results: what the behavior leaves behind.
-Observing People Ethically:
 -Do not reveal specific identities.
 -Ask permission.
-Good observations are both reliable and valid: observers are trained to avoid bias, and multiple observers are used.

Generalizing to Others: Sampling Participants
-Population: the entire group of interest (whatever the researchers say it is).
-Sample: a part of the population.
 -Biased when it includes only one kind of person.
-Census: studying every individual within a population.
-What Causes Biased Samples?
 -Sampling only those who are easy to contact.
 -Sampling only those one is able to contact.
 -Sampling only those who invite themselves.
  -Self-Selection: only people who specifically choose to participate are included.
-How to Get a Representative Sample
 -When external validity is vital, use probability sampling: drawing the sample at random from the population.
 -Simple Random Sampling: randomly selecting members of the population for the sample.
 -Variants of probability sampling make taking samples easier and more efficient:
  -Cluster Sampling: randomly choosing clusters of the population that have something in common, then including everyone in those clusters (e.g., choosing several colleges and sampling their students).
  -Multistage Sampling: cluster sampling followed by random sampling within the chosen clusters (e.g., choosing several colleges, then randomly choosing students from each).
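The sampling variants above can be sketched in a few lines of plain Python with the standard `random` module; the population of 1,000 students split across 10 colleges is entirely hypothetical:

```python
import random

population = [f"student_{i}" for i in range(1000)]  # hypothetical population
clusters = {f"college_{c}": population[c * 100:(c + 1) * 100] for c in range(10)}

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, 50)

# Cluster sampling: randomly pick whole clusters and take everyone in them.
chosen = random.sample(list(clusters), 2)
cluster_sample = [s for college in chosen for s in clusters[college]]

# Multistage sampling: randomly pick clusters, then randomly sample within each.
multistage_sample = [s for college in random.sample(list(clusters), 2)
                     for s in random.sample(clusters[college], 10)]

print(len(simple_sample), len(cluster_sample), len(multistage_sample))  # 50 200 20
```

All three are probability samples: because every selection step is random, each supports generalizing an estimate back to the population, unlike self-selected or convenience samples.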