Community Chapter 13.docx


Department
Psychology
Course
PS282
Professor
Colleen Loomis
Semester
Winter

Description
Ch. 13: Promoting Community and Social Change

Evaluation in Everyday Life
- We engage in evaluation every day (going to a restaurant, watching a sports team, etc.)
- Evaluation can result in a yes-or-no decision, but more often it results in decisions about what steps to take to foster improvement

It Seemed Like a Good Idea, But Is It Really Working? (Results-Based Accountability)
- DARE (Drug Abuse Resistance Education) involves police officers going to classrooms in local schools to present a curriculum over several weeks
~It used classroom time that could be allotted to other educational purposes
~The school uses less of its own money since law enforcement delivers the program
- Each year, billions of dollars in tax money, charitable contributions, and foundation grants are spent to do good things in communities
~Is all this time, effort, and money making a difference?
- Results-based accountability – showing the results of programs in the government, non-profit, and private sectors
- Complaints about program evaluation: it can create anxiety among program staff; staff members may be unsure how to conduct evaluation; evaluation can interfere with program activities or compete with services for scarce resources; and evaluation results can be misused and misinterpreted (especially by program opponents)
- Several responses to the question "How can we know if your program, supported by our grant money, is actually accomplishing its goals?"
~Trust and Values – discussing good intentions, good values, etc. A problem with this is that citizens don't know how the program works or whether it has any results.
~Process and Outputs – reporting how many clients are seen and how many licensed staff members are working. A problem with this is that these figures do not tell us whether the service is actually effective.
~Results-Based Accountability – being able to show that a specific program achieved its intended effects. A problem with this is that agency members may not be trained to do evaluation.

Program Evaluation and Desire for Improvement
- DARE proponents rejected the validity of the results
- DARE has successful national and international prevention support and delivery systems
- DARE never had any efficacy studies and went more or less straight to the harder standard of effectiveness
- DARE illustrates important themes from Chapter 13: evaluations of a program's effectiveness can lead to program improvement efforts, and whether programs work, and for whom, should influence data-informed decision making in communities
- Results-based accountability requires us to understand program evaluation and how programs can be improved to achieve their goals

The Logic of Program Evaluation
- Evaluation studies were initially designed just to yield a final verdict on a program's effectiveness
- Two reasons why programs do not work:
~Theory failure – concerns program theory: the rationale for why a particular intervention is considered appropriate for a particular problem with a specific target population in a particular culture and social context. Program theory also helps in choosing appropriate measurements or methods to study the effects of the program
~Implementation failure – the implementation in your location may be weak due to a lack of resources, inexperienced personnel, insufficient training, or other reasons
- The desired effects are not likely to occur if: the underlying assumptions of the program theory are not appropriate for the program's context; the program is implemented well yet does not affect the variables specified by program theory; or the activity or program is not adequately implemented
- The principal purpose of a causal logic model is to show, in a simple and understandable way, the logical connections among the conditions that create the need for a program in a community, the activities aimed at addressing those conditions, and the outcomes and impacts expected to result from the activities
~The logic model is a graphic representation of how the program works (Fig. 13.1): conditions – activities – outcomes – impacts
~Conditions include risk factors or processes, community problems, or organizational difficulties that the program seeks to address
~Activities address each condition; one or more activities can aim at solving each condition
~Outcomes result immediately from the activities (changes in knowledge, attitudes, etc.)
~Impacts are the effects of the program on the community at large

A Four-Step Model of Program Evaluation

Step 1: Identify Goals and Desired Outcomes
- Goals can be general; outcomes must be specific and measurable
- Program developers describe the program's:
~Primary goals, such as increasing parent involvement in schools or reducing drug use
~Target group(s), which can be described by demographic characteristics (age, sex, race, etc.), developmental transitions (entering middle school, divorce, etc.), risk processes (low grades, multiple conduct incidents, etc.), locality, or other criteria
~Desired outcomes, such as increases in attitudes rejecting smoking; these should be clearly defined, specific, realistic and attainable, and measurable

Step 2: Process Evaluation
- The activities designed to reach the desired outcomes are described
~Answers the question "What did the program actually do?"
- Purposes of process evaluation:
~Monitoring program activities helps organize program efforts and helps the program use resources where they are needed
~Information in a process evaluation provides accountability that the program is conducting the activities it promised to do
~It can provide information about why the program worked or did not, and help with future improvements
~It helps keep track of changes
- Conducting a process evaluation:
~What were the intended and actual activities of the program? After it was implemented, what did program planners and staff members learn from their experiences?
~Process evaluation asks: who was supposed to do what, with whom, and when was it to be done?
~The more clearly these questions are answered, the more useful the process evaluation will be

Step 3: Outcome Evaluation
- Assesses the immediate and direct effects of a program
- Outcome measures:
~Should be closely linked to goals but more specific
~Self-report questionnaires are commonly used to measure outcomes
~Program developers need to consider which measures of which constructs will best reflect the true outcomes of the program
~Persons completing questionnaires who are not reporting on themselves are called key informants

Step 4: Impact Evaluation
- Concerned with the ultimate effects desired by a program, i.e., its long-term effects
- Archival data, based on records often collected for other purposes, help assess impacts

Mentoring: A Program Evaluation Perspective
- Mentoring relationships generally involve an older, more experienced person and a younger, less experienced person
~The mentor helps develop character and competence or provides help in reaching goals
- Mentoring is related to adolescents' increased positive social and psychological development