COMM 88 Notes


Department: Communication
Course: COMM 88
Professor: Mullin
Semester: Summer

Ways of Knowing

- The science of knowing is called "epistemology."
- Some "truths" – how do you know?
  - It is not raining outside
  - Vegetables are good for you
  - People who are similar to each other tend to like each other
- Some "everyday" ways of knowing:
  - Method of tradition/tenacity
    - Ex: how long are you supposed to wait after you eat to go in the pool? Half an hour, an hour, etc. – you "know" this because of tradition
  - Method of authority
    - Ex: Professor Mullin tells you that flash cards are an inefficient way of studying; therefore you believe they are not useful for studying
  - Method of intuition/logic
    - Common sense
    - "Platonic idealism" – look at the things we already know and use them to logically figure it out (if a = b and b = c, then a = c)
    - Observing the world through your senses is misleading; truth comes instead through deep, rational thought and debate
  - Method of experience/observation – seeing it with your eyes, feeling, hearing – using your physical senses
    - Personal experience
    - "Baconian empiricism"
- Problems with "everyday" ways of knowing:
  - Illogical reasoning
  - Inaccurate observation
  - Selective observation – you notice some things but not others
  - Overgeneralization
- Everyday ways of knowing can even lead to conflicting ideas about "truth."

The Scientific Method
- Combines "Platonic idealism" with "empiricism"
  - Logic/intuition -> constructing theories
- Communication science: use empirical observations to test theories about communication processes

Unique characteristics of science – how is science different from the other everyday ways of knowing?
- Scientific research is public
  - Published in peer-reviewed journals
  - Opportunity to replicate studies
- Science is empirical
  - Conscious, deliberate observations
  - Many observations
- Science is objective
  - Control/remove personal bias
  - Explicit rules, standards, and procedures
- Science is systematic and cumulative
  - Builds on prior studies/theories
  - New knowledge modifies old

Lecture 3: Goals of Scientific Research

Recall the scientific method:
- Combines Platonic idealism with empiricism
  - Logic/intuition -> constructing theories
  - Observation/experience -> gathering data
- Communication science: use empirical observations to test theories about communication processes

Goals of Scientific Research
- Description
  - Look for social regularities of aggregates
  - Science can tell us "what is"
- Explanation
  - Develop understanding of why patterns exist (e.g., what causes what)
  - Science can tell us "why it is"
- Prediction
  - Predict outcomes given certain factors
  - Science can tell us "what will be"
- Science CANNOT settle questions of moral value
  - Science cannot tell us "what should be" (right/wrong, good/bad, moral/immoral)

The Research Process – Theories, Hypotheses, and Research Questions
- The wheel of science
  - Deduction
    - Theories -> hypotheses -> observations
    - Used in traditional science; quantitative methods
  - Induction
    - Observations -> empirical generalizations -> theories
    - Humanistic/interpretive; qualitative methods
  - What's the difference between quantitative and qualitative methods?
    - Quantitative:
      - Adhere strongly to scientific goals and principles – objectivity, empirical data, etc.
      - Employ numerical measures and data analysis
      - Examples: surveys, experiments, content analysis
    - Qualitative:
      - Also called interpretive research or field research
      - A "humanistic" form of social science
      - Values SOME aspects of science – especially empiricism
      - But also values researcher subjectivity
      - Examples: participant observation, depth interviewing, conversation analysis
    - Note: there is also purely humanistic research in comm called "critical studies" (aka rhetorical criticism, feminist analysis, cultural studies)
      - That's not science at all – we don't do that in comm at UCSB

Using Theories in Research
- Theory: an attempt to explain some aspect of social life
  - A scholar's ideas about how/why events or attitudes occur
  - Includes a set of concepts and their relationships
- Scientific theories should be "falsifiable" (if the theory is wrong, there should be some data that could show it)
  - Able to be tested empirically, to be proved wrong
  - But note – you can never "prove theories true" – you can only gain support/evidence
  - What about the theory of global warming? The evidence suggested in the press is all over the place
- Theories are built of "concepts"
  - Terms for things/ideas/parts of the theory
  - Ex: Social Cognitive Theory (Bandura) – we learn by watching modeled behavior
    - Requires attention, retention, motor reproduction, motivation (e.g., rewards/punishments)
    - What are some "concepts" involved here? Model, behavior, attention, motivation, etc.
  - Concepts are studied as "variables"
    - They have variations that can be measured

Lecture 4: Hypotheses

Using Theories in Research (cont.)
- The wheel of science
  - Theories are built of "concepts"
    - Terms for things/ideas/parts of the theory
    - Researchers must define them (an example would be attitudes and how they change)
  - Example: Social Cognitive Theory (Bandura)
    - A media effects theory about how we learn: watching other people is how we learn (mostly applied to media)
    - We learn by watching modeled behavior
    - Requires attention, retention, motor reproduction (being able to do the behavior), motivation (e.g., rewards/punishments)
  - What are some "concepts" involved here?
    - Model, behavior (is it something they're doing? Is it violence? Caring?), attention, motivation, etc.
- Concepts are studied as "variables" – they have variations that can be measured (e.g., motivation and model; see the sketch below)
  - Example: gender – comparing groups, men vs. women
  - Example: motivation – reward vs. punishment (amount of each)
  - Example: model – TV character vs. parent; for TV, hero vs. villain; degree of likeability or similarity of the character
- From prior findings and/or theory, we derive a hypothesis:
  - A specific, testable prediction about the relationship between variables
  - Ex, from studies on Social Cognitive Theory:
    - H1: TV violence viewing will produce more aggressive behavior than will non-violent TV viewing
    - H2: Rewarded TV violence will lead to greater imitation than will punished TV violence
  - What are the variables involved here?
    - H1: aggression; type of TV viewing
    - H2: motivation (rewards and punishments); imitation
- If theory or previous research does not lead to a specific prediction, or if previous findings conflict or are inconclusive, pose a research question instead of a hypothesis. Examples:
  - RQ: To what extent will children imitate the behavior of a TV character whom they do not like or relate to?
  - RQ: Will there be gender differences in children's imitation of violence?
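As a concrete illustration of "concepts are studied as variables," here is a minimal Python sketch (not from the lecture; the records, variable names, and values are invented) in which each Social Cognitive Theory concept becomes a measurable variable, and H2's comparison of rewarded vs. punished models becomes a comparison of group means:

```python
# Hypothetical records illustrating concepts operationalized as variables.
# Each concept from Social Cognitive Theory becomes a measurable variable:
#   model      -> nominal ("tv_hero", "tv_villain", "parent")
#   motivation -> nominal ("rewarded", "punished")
#   imitation  -> ratio (count of imitative acts observed)
children = [
    {"model": "tv_hero",    "motivation": "rewarded", "imitation": 7},
    {"model": "tv_hero",    "motivation": "punished", "imitation": 3},
    {"model": "tv_villain", "motivation": "rewarded", "imitation": 5},
    {"model": "parent",     "motivation": "punished", "imitation": 2},
]

# H2 predicts that rewarded TV violence leads to more imitation than punished
# TV violence, so the comparison of interest is the mean imitation score per group.
def group_mean(records, key, value):
    scores = [r["imitation"] for r in records if r[key] == value]
    return sum(scores) / len(scores)

print(group_mean(children, "motivation", "rewarded"))  # mean imitation, rewarded models
print(group_mean(children, "motivation", "punished"))  # mean imitation, punished models
```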
Testing a Hypothesis: An Example
- Researcher A
  - Social Cognitive Theory: children learn behavior by watching models behave
  - Hypothesis: watching TV violence will increase kids' aggressive behavior
- Researcher B
  - Catharsis Theory: watching others behave allows "purging" of pent-up feelings
  - Hypothesis: watching TV violence will reduce kids' aggressive behavior
- Researcher A's study
  - Has a ton of grant money; sends grad assistants out to select 600 kids from all parts of the state
  - Measures how much violent TV each kid views
  - Measures how much aggression each kid shows on the playground
  - Plots the results in a graph (aggression by TV violence)
    - The graph shows a positive correlation: the higher the TV violence score, the more aggression the kid shows
  - Conclusion: TV violence increases aggression
- Researcher B's study
  - No grant; gets 60 kids from a local elementary school
  - Each kid watches one of four clips (0, 5, 10, 20)
  - After watching 15 minutes of clips, the kid is left in a room with toys and observed through a one-way mirror
  - The number of hits on the toys is recorded
  - Plots the results in a graph (aggression by violence)
  - Conclusion: TV violence decreases aggression
- Researcher A's corrected conclusion: TV violence is related to aggression
- Researcher B's corrected conclusion: for these participants, under these laboratory conditions, TV violence decreases aggression

Types of Hypotheses and Research Questions
- Hypotheses and RQs can be:
  - Causal (state how one variable changes/influences another)
  - Or correlational (state mere association between variables)
- Recall from Social Cognitive Theory:
  - H1: [causal]
  - H2: [causal]
  - Could instead phrase it this way:
    - H1: The more TV violence children watch, the more aggressive they will be [correlational]

Different Methods for Different Hypotheses!
- Survey/correlational research (e.g., Researcher A; see the sketch after this section):
  - Tests correlational hypotheses (mere relationship/association)
  - Measure some variables and relate them, compare existing groups, etc.
  - See p. 106 on "continuous" and "difference" statements
  - Great for external validity – the ability to generalize results to other people and/or to "normal life" settings
  - Poor for causality!
- Experimental research (e.g., Researcher B):
  - Tests causal hypotheses/predictions
  - Manipulate variables/groups, control everything else, and measure the effects
  - Great for internal validity – the ability to establish that X causes Y (rules out other explanations)
  - Poor for generalizability

Lecture 5: The Research Process cont.: Defining Concepts and Variables

Different Methods for Different Hypotheses! (cont.)
- Content analysis
  - Tests correlational hypotheses/RQs about media (or other communication) content
  - RQ: Is TV violence more often perpetrated by heroes or villains?
  - RQ: How does media coverage differ for President Obama compared to that of previous presidents?
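The kind of analysis Researcher A's survey/correlational design implies can be sketched in a few lines of Python. The data below are invented purely for illustration; the point is that the method yields a correlation (association), not a causal claim:

```python
import statistics  # statistics.correlation requires Python 3.10+

# Invented survey-style data: hours of violent TV viewed per week and an
# aggression score observed on the playground, one pair of values per child.
tv_violence = [1, 2, 4, 5, 7, 8, 10, 12]
aggression  = [2, 3, 3, 5, 6, 6, 8, 9]

# Pearson correlation: a positive value is consistent with Researcher A's
# corrected conclusion that TV violence is *related to* aggression.
# The design supports correlational conclusions only, not causation.
r = statistics.correlation(tv_violence, aggression)
print(round(r, 2))
```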
The Research Process cont.: Defining Concepts and Variables

Variables in Experimental Research (causal hypotheses)
- Variables can be both quantitative and qualitative
- Independent variable (IV)
  - The variable manipulated by the researcher
  - The "cause" in a cause-and-effect relationship
- Dependent variable (DV)
  - The variable affected/changed by the IV
  - The "effect" in a cause-and-effect relationship
- Example hypothesis: greater physical attractiveness creates impressions of greater friendliness
  - IV: physical attractiveness (e.g., manipulate the level of attractiveness)
  - DV: impressions of friendliness (e.g., ratings on a friendliness scale)

Variables in Survey/Correlational Research (correlational/relational hypotheses)
- Quantitative – you're just collecting numbers, not talking to people
- Can't establish cause and effect, so:
  - The IV is considered a "predictor" variable
  - The DV is sometimes called the "criterion" variable
- Example hypothesis: identification with "being young" predicts exposure to "youth-oriented" media
  - IV: age identification
  - DV: "youth-oriented" media exposure (e.g., measure how often one prefers MTV, shows featuring young characters, etc.)
  - Could the IV/DV be the other way around? Yes! Surveys don't establish causal direction:
    - Exposure to "youth-oriented" media predicts strong identification with "being young"
      - IV: youth-oriented media exposure
      - DV: age identification

Defining Concepts/Variables
- Conceptual definition
  - A working definition of what the concept means for purposes of the investigation – usually based on theory/prior research
  - Example variable: "fear" – what is it?
- Operational definition
  - How exactly the concept will be measured in a study

Measurement – Operationalizing Variables (both IVs and DVs)

Types of Measures
- Physiological measures
  - Ex: blood pressure, brain imaging, cortisol (stress hormone)
- Behavioral measures
  - Ex: nonverbal gestures, voting, donating time/money – things that people do
- Self-report measures
  - Ex: items on a questionnaire

Levels of Measurement
- Nominal (categorical/discrete)
  - Categorical and qualitative
  - The variable is measured merely with different categories
    - Ex: gender (M/F), ethnicity, yes/no questions, TV violence (rewarded/punished), TV use (high/low)
  - Categories must be mutually exclusive
  - Categories must be exhaustive
  - Nominal measures are for comparing differences

Lecture 6

Levels of Measurement (cont.)
- Ordinal
  - Categorical and qualitative
  - The variable is measured with rank-ordered categories
  - Example: rank your top five favorite TV shows; rank political issues from most to least important
  - Difficult to analyze!
- Interval
  - Continuous and quantitative
  - The variable is measured with successive points on a scale
  - Example measure of immigration policy opinion: "The U.S. should build a fence along the border," rated on a scale from strongly oppose to strongly favor (the scale numbering could differ)
- Ratio
  - Continuous and quantitative
  - Interval measurement with a true, absolute zero point
    - The zero actually anchors the scale; zero is the point of complete absence of the thing measured
  - Examples: time in hours, weight in pounds, age in years, etc.
  - Example: test scores (if 0 is a possible score)
  - "Twice as fast, twice as much"
- Interval and ratio measures are "continuous" variables (see the sketch below)
  - They allow continuous-type hypotheses (the more X, the more Y, etc.)
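A minimal Python sketch of the four levels of measurement (the example variables and values are invented, not from the lecture); the point is which operations each level supports:

```python
# Four levels of measurement, illustrated with invented variables.
nominal  = ["male", "female", "female"]   # categories only: compare equal / not equal
ordinal  = ["1st", "2nd", "3rd"]          # ranked categories: order matters, distances don't
interval = [1, 3, 5]                      # e.g., 5-point oppose/favor scale: equal spacing, no true zero
ratio    = [0.0, 2.5, 5.0]                # e.g., hours of TV: true zero, so "twice as much" is meaningful

# Interval and ratio (continuous) variables support continuous-type hypotheses
# ("the more X, the more Y") and arithmetic such as means; nominal variables do not.
print(sum(ratio) / len(ratio))
```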
- Measures should:
  - ...capture variation!
  - Use continuous variables for DVs where possible
  - ...have good "conceptual fit" with the variables in the hypotheses/RQs (e.g., Facebook use and sense of belonging)
  - ...minimize potential social desirability effects

Using Questionnaire Items as Measures
- Common for IVs and DVs in surveys
- Common for DVs in experiments (the IV is a manipulation)

Types of Questionnaire Items
- Open-ended
  - Respondents give their own answers to the questions
  - Common in interviews and observation; not inherently quantitative, but quantitative researchers use them too – they just have to analyze the data afterward, which is complicated
  - Therefore it is more common for quantitative researchers to use closed-ended items
- Closed-ended
  - Respondents select from a list of choices (exhaustive and mutually exclusive)

Some Closed-Ended Formats
- How do we set that up?
- Likert-type items
  - Respondents indicate their agreement with a particular statement
  - Example: "Parents should talk openly about sex with their children," rated from strongly disagree to strongly agree
  - Other response options are also possible (oppose/favor; not at all/very much; almost never/almost always)
- Semantic differential
  - Respondents make ratings between two opposite (bipolar) adjectives
  - Example: "My best friend is:" warm ... cold; intelligent ... unintelligent
- Composite measures
  - Use multiple items for one variable; combine those items into an "index" (aka "scale")
  - Technically, a "scale" refers to a particular way of combining the items, but that distinction doesn't apply in this class
  - Example variable: perceived credibility (of a speaker)
    - As a single-item measure: "The speaker I just heard is:" credible ... not credible
    - Additional items: knowledgeable ... not knowledgeable; experienced ... inexperienced; trustworthy ... untrustworthy; honest ... dishonest; unbiased ... biased; competent ... incompetent (larger numbers lie on the right)
  - How could you combine these scores?
    - All items added (or averaged) into one overall score -> unidimensional index
    - Different items combined into different "subscales" -> multidimensional index
  - Options for the credibility example above:
    - Unidimensional "credibility": add all items into one total "credibility" score
    - Multidimensional "credibility":
      - knowledgeable + experienced + competent = "expertise" dimension
      - trustworthy + honest + unbiased = "trustworthiness" dimension

How Good Is Your Measurement?
- Reliability and validity
  - This is how you check whether your measures can actually be trusted
  - Reliability = consistency – a measure can be reliable even if it is not valid, because reliability only means it is consistent
  - A measure can be reliable but not valid; it cannot be valid but not reliable

Reliability of Measurement
- Are you measuring the concept consistently?

Assessing Reliability
- For measures using questionnaire items:
  - Administer the same items more than once (e.g., test-retest; split-half)
  - Inter-item reliability: look at the internal consistency of similar items in a scale/index
    - E.g., Cronbach's alpha – it assesses how well your items hang together; you don't need to know how to compute it, but you do need to know it has to be high (.7 is the low end; .8 or .9 is much better) – a sketch of the computation follows below
    - Back to the long credibility example above:
      - If you treat all the items as one unidimensional scale, you will probably get a LOW Cronbach's alpha (poor reliability)
      - If you treat them as a multidimensional measure, reliability will likely be higher, because alpha is computed separately for each subscale
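For reference, here is a minimal Python sketch of building a composite index and computing Cronbach's alpha (the lecture only requires knowing that alpha should be high; the item scores below are invented, and the three items stand in for the "expertise" subscale):

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, all from the same
    respondents in the same order. alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(items)
    item_vars = [statistics.pvariance(item) for item in items]
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's summed index score
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.pvariance(totals))

# Invented 1-7 ratings from four respondents on three "expertise" items
# (knowledgeable, experienced, competent) that should hang together.
knowledgeable = [6, 5, 7, 2]
experienced   = [6, 6, 7, 1]
competent     = [5, 6, 7, 2]

print(round(cronbach_alpha([knowledgeable, experienced, competent]), 2))  # want ~.7 or higher

# Unidimensional index: average the items into one "expertise" score per respondent.
expertise = [sum(scores) / 3 for scores in zip(knowledgeable, experienced, competent)]
print(expertise)
```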
Lecture 7: Sampling
(Office hours: Tues 10–11:30am; Thurs, April 25, 2:30–4pm. TA Q&A Monday.)

How Good Is Your Measurement? (cont.)
Reliability and validity recap

Assessing Reliability (cont.)
- For measures using coders (e.g., behavioral observations):
  - Inter-coder reliability (inter = between)
    - Compare multiple coders
    - Agreement needs to be higher than 80% in order to be considered reliable
  - Intra-coder reliability (intra = within)
    - You want consistency within the same coder as well – make sure any one coder is consistent over time
    - Compare multiple observations by the same coder

Validity of Measurement
- Does your measure really capture the concept you intend to be measuring?
- This is a little more subjective
- Think about "acceptance" vs. "tolerance": you can be tolerant of illegal drug use, but that doesn't mean you accept it
- When people are asked whether drug use is "OK" – what does "OK" mean in this context?

Assessing Validity
- Subjective types of validation:
  - Face validity
    - The measure looks/sounds good "on the face of it"
    - Mostly used because it's the easiest – no numbers involved
    - Usually only discussed when a measure is criticized
  - Content validity
    - The measure captures the full range of meanings/dimensions of the concept
- Criterion-related validation:
  - Predictive validity
    - The measure is shown to predict scores on an appropriate future measure
    - Ex: SAT scores (your "potential" to achieve) -> college GPA (your achievement)
- Construct validation:
  - The measure is shown to be related to measures of other concepts that should be related (and not to ones that shouldn't)
  - Ex: aggression scale <-> hostility scale

Relationship Between Validity and Reliability
- Can a measure be reliable but not valid? Yes
- Can a measure be valid but not reliable? No

Sampling
- How we select participants (or other units) for a study
- Sample: a subset of the target population (who/what you want to report about)
  - Ex: voters, Facebook users, married couples, juries, football fans, business owners, etc.
  - Or: TV shows, magazine ads, blog posts, etc.
- Sampling units:
  - Individual persons (e.g., voters, fans)
  - Groups (e.g., couples, juries, organizations, countries)
  - Social artifacts (e.g., ads, TV scenes/episodes)

Two General Types of Samples
- Representative samples
  - Intended to be a "miniature version" of the target population
  - Typical of surveys (especially polls) and content analyses
- Non-representative samples
  - Not intended to generalize
  - Typical of experimental designs and qualitative research

Representative Sampling (probability sampling)
- Representative because of random selection: everyone in the population has an equal chance of being included in the sample
- How representative? There will always be "sampling error":
  - Sample data will be slightly different from the population because of chance alone
  - Statistically, this is the "margin of error"
  - Ex: a national poll of N ≈ 1,000 has a margin of error of about +/-3%; as sample size goes up, the margin of error goes down (see the sketch below)
- Example: Gallup survey, margin of error +/-4%
  - Q: "Do you think the use of marijuana should be made legal, or not?"
  - 44% yes – the "real" percentage could be as low as 40% or as high as 48%
  - 54% no – the "real" percentage could be as low as 50% or as high as 58%
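A small Python sketch of where those margin-of-error figures come from, assuming simple random sampling and a 95% confidence level (the sample sizes below are illustrative, not the actual Gallup sample size):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p with sample size n,
    assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Roughly reproduces the lecture's rule of thumb: a national poll of ~1,000
# has a margin of error of about +/-3%.
print(round(margin_of_error(0.5, 1000), 3))   # ~0.031 -> about +/-3%

# Smaller sample size -> larger margin of error (n = 600 is an invented example).
print(round(margin_of_error(0.44, 600), 3))   # ~0.04 -> about +/-4%
```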
Lecture 8: Sampling (cont.)
(TA Q&A: Monday 11–12, SSMS 1009)

Representative Sampling Techniques
- Based on probability: equal chances that everyone will be selected = representative
- Four types (see the sketch after this section):
  - Simple random sampling
    - Select elements randomly from the population
    - In listed populations, random-number tables are used
    - Using phones: random-digit dialing
  - Systematic random sampling
    - From a list of the population, select every nth element, with a random start, and cycle through the entire list
    - Gives similar results to simple random sampling – close to the right proportions, but there will be some sampling error
    - Watch out for potential "periodicity"
  - Stratified random sampling
    - Used if you want the exact proportions without sampling error – it reduces sampling error
    - First divide the population into subsets ("strata") on a particular variable
    - Usually stratify on demographic variables (e.g., sex, race, political party)
    - Select randomly from each stratum to get the right number of people/right proportions of the population
    - Can only be done if you have prior knowledge of the population proportions
    - Increases representativeness because it reduces sampling error (for the stratified variable)
    - But more costly and time consuming
  - Multistage cluster sampling
    - First randomly sample groups ("clusters"), then randomly sample individual elements within each cluster
    - Useful for populations not listed as individuals
    - Ex: sampling "high school athletes" – 1st stage: randomly sample high schools; 2nd stage: randomly sample athletes from the schools in the sample
    - Reduces costs, but there is sampling error at each stage, so probably a bigger margin of error
  - Can combine multistage and stratified sampling
    - Ex: sampling "high school athletes" – 1st stage: sample high schools, BUT stratify for private/public; 2nd stage: sample athletes, BUT stratify for different kinds of sports (football, water polo, tennis, etc.)
- For all of the representative sampling techniques:
  - There will always be sampling error
  - But you can generalize to the larger target population (assuming the sampling is done properly)
  - Cautions (how not to screw it up):
    - Avoid the "ecological fallacy" – making unwarranted assertions about individuals based on observations about groups (e.g., you think all college students are smart, so when you meet one you assume he's smart, but he is actually not)
    - Avoid systematic error (aka sampling bias) – systematically over- or under-representing certain segments of the population. Caused by:
      - Improper weighting
      - A very low response rate
      - The wrong sampling frame
      - Using non-representative sampling methods
    - Sampling error is annoying but not bad – it happens because of chance and can be reduced with a bigger sample size. Systematic error (sampling bias) is actually a bad thing.

Non-representative Sampling Techniques
- Convenience sample
  - Select individuals who are available/handy
- Purposive sample
  - Select certain individuals for a special reason – their characteristics, etc.
- Volunteer sample
  - People select themselves to be included
- Quota sample
  - Non-representative, but trying to get the proportions right
  - Select individuals to match demographic proportions in the population
- Network/snowball sample
  - Start with a small number of people, who contact other people, who contact other people...
  - Select individuals who contact similar individuals, and so on
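The probability sampling techniques can be contrasted in a short Python sketch. The population, stratum labels, and sample sizes below are all made up for illustration:

```python
import random

# A made-up listed population of 1,000 students, 70% "public" / 30% "private".
population = [f"student_{i}" for i in range(1000)]
stratum = {p: ("public" if i < 700 else "private") for i, p in enumerate(population)}

# Simple random sampling: every element has an equal chance of selection.
srs = random.sample(population, 100)

# Systematic random sampling: random start, then every nth element on the list.
n = len(population) // 100
start = random.randrange(n)
systematic = population[start::n]

# Stratified random sampling: split the population into strata first, then sample
# from each stratum in its population proportion (70 public / 30 private here),
# which removes sampling error on the stratified variable.
public  = [p for p in population if stratum[p] == "public"]
private = [p for p in population if stratum[p] == "private"]
stratified = random.sample(public, 70) + random.sample(private, 30)

print(len(srs), len(systematic), len(stratified))
```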
Ch. 9 and 10

Chapter 9: Survey Research

General Features of Survey Research
- Different levels of formality:
  1. A large number of respondents are chosen through probability sampling procedures to represent the population of interest
  2. Systematic questionnaire or interview procedures are used to ask prescribed questions of respondents and record their answers
  3. Answers are numerically coded and analyzed
- Large-scale probability sampling
  - Large samples chosen through scientific sampling procedures to ensure precise estimates of population characteristics
  - Typically around 1,000 in national opinion polls, but can be much larger
  - Although large-scale probability samples are ideal, surveys vary considerably in sample size and sampling design
  - In cross-national surveys, equivalent surveys are conducted in different countries
- Systematic procedures: interviews and questionnaires
  - Surveys obtain information through interviews and/or self-administered questionnaires
  - Unstructured vs. structured interviewing:
    - In structured interviews, the objectives are very specific, all questions are written beforehand and asked in the same order for all respondents, and the interviewer is highly restricted in such matters as the use of introductory and closing remarks
      - Used when trying to derive facts or precise quantitative descriptions
    - In unstructured interviews, the objectives may be very general, the discussion may be wide-ranging, and individual questions are developed spontaneously in the course of the interview
      - Used when the research purpose is to understand the meaning of respondents' experiences
    - A semi-structured interview has specific objectives, but the interviewer is permitted some freedom in meeting them
- Quantitative data analysis
  - Descriptive surveys seek to describe the distribution within a population of certain characteristics, attitudes, or experiences, and make use of simpler forms of analysis
  - Explanatory surveys investigate relationships between two or more variables and attempt to explain these in cause-and-effect terms
- Secondary analysis of surveys – the analysis of survey data by analysts other than the primary investigator who collected the data

Uses and Limitations of Surveys
- Surveys are used extensively for both descriptive and explanatory purposes
- They offer the most effective means of social description
- Compared with experiments, surveys provide weaker tests of causal relationships but more precise descriptions of populations

Survey Research Designs
- Research design generally refers to the overall structure or plan of a study
- The basic idea of a survey is to measure variables by asking people questions and then to examine the relationships among the measures
- The cross-sectional design is the most commonly used survey design: data on a sample or "cross section" of respondents chosen to represent a particular target population are gathered at essentially one point in time. Two variations:
  - Contextual designs sample enough cases within particular groups or contexts to describe accurately certain characteristics of those contexts
  - Social network designs focus on the relationships or connections among social actors (e.g., people, organizations, countries) and the transaction flows (processes) occurring along the connecting links
- The longitudinal design asks the same questions at two or more points in time; they may be asked repeatedly either of independently selected samples of the same general population or of the same individuals. Types (see the sketch below):
  - Trend studies – a repeated cross-sectional design in which each survey collects data on the same items or variables with a new, independent sample of the same target population (track general social changes)
  - Cohort studies – enable one to study three different influences associated with the passage of time (gauge changes in age groups)
  - Panel studies – can reveal which individuals are changing over time, because the same respondents are surveyed again and again (measure changes in individuals over time)
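A minimal Python sketch of how trend and panel designs differ in who answers at each wave (the respondent IDs, years, and answers are invented; a single yes/no item is assumed):

```python
# Trend study: the same question, but a NEW independent sample each wave,
# so you can track aggregate change but not change within individuals.
trend_waves = {
    2011: {"r101": "yes", "r102": "no",  "r103": "no"},
    2013: {"r245": "yes", "r246": "yes", "r247": "no"},   # different respondents
}

# Panel study: the SAME respondents answer at every wave, so you can see
# which individuals changed their answer over time.
panel_waves = {
    2011: {"r001": "no",  "r002": "no",  "r003": "yes"},
    2013: {"r001": "yes", "r002": "no",  "r003": "yes"},  # same respondents re-surveyed
}

# Only the panel design can identify the specific individuals who changed.
changed = [rid for rid in panel_waves[2011] if panel_waves[2011][rid] != panel_waves[2013][rid]]
print(changed)
```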
Steps in Survey Research
1. Planning
2. Field administration
3. Data processing and analysis

Survey Research

Primary Goals
- Identify/describe attitudes or behaviors (in a given population)
- Examine relationships between the attitude/behavior variables measured
  - Does X predict/relate to Y?
    - Ex: Does exposure to alcohol ads (X) predict teen drinking behavior (Y)?
  - Do many factors together predict Y?
    - Ex: Do alcohol ads, parent drinking, peer drinking, and risk taking (all X's) predict teen drinking (Y)?

Administering Surveys
- Self-administered questionnaires
  - Mail surveys; online or emailed questionnaires; handouts
  - Relatively easy and inexpensive
  - No interviewer influence
  - Increased privacy/anonymity
  - BUT: must be self-explanatory, and response rates are very low
  - Ways to increase the response rate:
    - Offer inducements/incentives
    - Make the questionnaire easy to complete and return
    - Include a persuasive cover letter and/or do an advance mailing
    - Send follow-up mailings
- Interview surveys – a researcher or research assistant actually administers the survey
  - Personal/face-to-face