Mod 07 - 08 Designs for Implementation Evaluation / G&O / Logic Models

Mod 07 Lectures 1-3; Mod 08 Lectures 1-2
by Lyn Paleo, 28 December 2012


Transcript of Mod 07 - 08 Designs for Implementation Evaluation / G&O/ Logic Models

Steps

1. Conduct a needs assessment.
2. Design a program and evaluation.
3. Implement and evaluate the program.
4. Analyze and interpret evaluation findings.
5. Make decisions about next steps.

The process is participatory. Community needs, strengths, and ideas feed into goals, resources, and activities, which lead to hoped-for outcomes; the logic model captures this chain, along with process objectives and outcome objectives.

Needs Assessment

Identify a "social" problem.
Assess the need for an intervention.
Assess the assets available in the community.
Garner the "will" (and resources) for change.

Design: No One Right Way

There is no right way to format a logic model.
There are (at least) five ways to format a logic model that make it hard for others to understand.

1. Everything Leads to Everything

The "Everything leads to everything" format puts all activities in one column, all outputs in another, all outcomes in another, and so forth. It is a bit "old school" Kellogg; logic model format has evolved in the past several years. This format does not make good use of a key strength of logic and impact models: the ability to show which inputs or activities lead to which outcomes.

2. The More Said the Better

The "More said the better" format includes many, many activities and outcomes. (The example shows only one of the three pages developed for this program; the program consists of one full-time staff member and several part-time outreach workers.) It would take a lot of dedication and concentration to slog through this format to fully understand what the program designer's logic is. It includes many operational details and many outcomes that are not very important; some outcomes could be effectively combined.

3. A Few Words Explain Everything

The "Few words explain everything" format includes a few verbs and objects. This format often incorporates more shapes and colors than concepts; it would appear that more time was spent by the graphic artist than by the program designer. The reader does not know who does what to achieve the very big impacts.

4. Everything Leads Every Which Way

The "Everything leads every which way" format is a jumble of boxes and arrows. Especially arrows. It does differentiate which activities lead to which outcomes, but the paths are hard to follow.

5. Exuberant Use of Color Enhances Understanding

The "Exuberant use of color enhances understanding" format strikes the eye with many bold, bright colors. The eye is drawn to the color, and not the meaning. This format does not print well in black and white, and much critical meaning is lost without the color.

Why Create a Logic Model?

a) Our funder makes us.
b) We find it useful for program planning, evaluation, and communication.

Either answer is fine! Logic models are increasingly being used to design both a program and its evaluation. When programs are based on a good logic model, plans for services and for evaluation develop simultaneously, and one informs the other.
What Is a Logic Model?

A logic model is a diagram of how the resources of the program lead to desired changes among the target population. It provides a common approach for integrating planning, implementation, evaluation, and reporting.

As an example, consider a logic model for a program that hires specialists in children's special needs, such as disabilities and social-emotional problems, to coach preschool teachers and guide them in including a special-needs child in the setting.

What a Logic Model Is Not

1. It is not reality. It is a simple model that represents program intention.
2. It is not complete. It does not display the many cultural, social, and environmental factors outside the program that influence process and outcomes.
3. It does not prove causal attribution of the intervention to the change.
4. It is not a theory of change, but it relies on a social theory or other theory of change.
5. It does not address: Are we doing the right thing?
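To make the definition concrete, here is a minimal sketch, in Python, of a logic model as a directed graph, using the preschool inclusion-coaching example above. The node names are illustrative assumptions, not the program's actual model; the point is that the structure records which activity leads to which outcome.

    # A logic model as a directed graph (node names are hypothetical).
    LOGIC_MODEL = {
        # resource or activity -> the outcome(s) it is intended to produce
        "Hire special-needs specialists": ["Coaching sessions delivered"],
        "Coaching sessions delivered": ["Teachers gain inclusion skills"],
        "Teachers gain inclusion skills": ["Special-needs child included in the setting"],
    }

    def trace(node, depth=0):
        # Print each causal path, so a reader can see which activity leads
        # to which outcome (the strength the "everything leads to
        # everything" format throws away).
        print("  " * depth + node)
        for nxt in LOGIC_MODEL.get(node, []):
            trace(nxt, depth + 1)

    trace("Hire special-needs specialists")

Printed as an indented chain, each level is one causal step from resources toward the hoped-for outcome.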
Causation

Melvin Mark and others suggest that if social programs have effects, it is because they serve as triggers, setting in motion a causal sequence of events based on underlying generative mechanisms. (Mark adds that these underlying generative mechanisms may not operate in all cases, or may be balanced by countervailing forces.)

Two intervention / underlying generative mechanism / outcome chains:

Intervention: 5 a Day Campaign. Underlying generative mechanism: the Health Belief Model. Outcome: healthful eating.
Intervention: turning the hand over. Underlying generative mechanism: gravity. Outcome: the ball falls to the ground and stays there (the ball "accepts" gravity).

Social theory (extra material): Stages of Change, Social Learning Theory, the Health Belief Model, Locus of Control (http://www.productivity501.com/your-locus-of-control/104/).

What is the relationship between impact models, evaluation questions, and indicators? Look at the connection between what happens with the program, the impact or logic model, and process objectives / outcome objectives.

Outcomes for Different Types of Programs

Outcomes differ for different types of programs: direct contact, social marketing, and policy/advocacy.

Types of Interventions for Social Betterment

Treatment
Case management
Counseling
Social media campaign
Group activities
Outreach
Community organizing
Workshops or trainings
Advocacy and policy

Course Themes

Sampling. Design. Validity. Use: who is interested determines methods; what approach is chosen determines use. Possible evaluation questions follow from these choices, and findings may be positive, mixed, or negative.

PH W218 Evaluation for Health and Social Programs
Lyn Paleo, DrPH, MPA; Sonya Dublin, MPH-MSW

Module 5: Evaluation Designs for Program Outcomes
Module 6: Validity of Outcome Designs
Module 9: From Concepts to Indicators
Module 10: Will the Evidence be Credible: Causation and Attribution
Module 11: Finally! Methods for Evaluation: Quantitative Methods
Module 12: Methods for Evaluation: Qualitative Methods
Module 13: Sampling Strategies for Qualitative and Quantitative Methods
Module 14: Managing the Evaluation
Module 15: Ethical Issues in Evaluation
Module 16: Tell an Evidence-based Story with Qualitative Analysis
Module 17: Turning Measures into Data
Module 18: Interpreting Findings and Making Recommendations
Module 19: Presenting Results
Module 20: Putting It All Together and Moving On

Module 5: Evaluation Designs

Designs may be non-experimental, experimental, or quasi-experimental. Not measures, yet. Not methods, yet. Design determines the level of design validity.
Time Series Analysis

A: There appears to be no out-of-the-ordinary change in the observations after the program.
B: Illustrates what is usually the most hoped-for finding in an interrupted time series analysis: a marked increase from a fairly stable level before the intervention, with the criterion remaining fairly stable afterwards.
C: Shows an increase in slope after the intervention. The variable being measured began to increase over time after the intervention; however, there was no immediate impact as in Panel B. An influence such as television viewing or improved nutrition, whose impact is diffuse and cumulative, might produce the result in Panel C.
D: A localized increase, apparently due to the intervention, superimposed on a general increasing trend.
E: There appears to be an effect due to the intervention, but it seems temporary. Many new programs are introduced with much publicity, and deeply involved staff members want the program to be effective. Perhaps extra staff effort is responsible for the initial impact. However, once the program is part of the regular procedure and the enthusiasm of the staff has diminished, the outcome variable returns to its former levels.
F: Shows a steady increase over time before and after the intervention. The contrast may be statistically significant, but it does not help in understanding the effect of the intervention.

Change is seen with B and C, and a bit with D.
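As a minimal sketch (simulated data, not the lecture's), a segmented regression quantifies the two kinds of change the panels distinguish: a level change at the intervention (Panel B) and a slope change after it (Panel C).

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(24)                  # 24 monthly observations
    t0 = 12                            # intervention begins at month 12
    post = (t >= t0).astype(float)

    # Simulate a Panel-B pattern: stable level, then a jump of +5.
    y = 10 + 5 * post + rng.normal(0, 1, t.size)

    # Columns: intercept, baseline trend, level change, post-intervention
    # slope change.
    X = np.column_stack([np.ones(t.size), t, post, (t - t0) * post])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    print(f"level change at intervention (Panel B): {coef[2]:.2f}")
    print(f"slope change after intervention (Panel C): {coef[3]:.2f}")

A large level change with a near-zero slope change reads as Panel B; the reverse reads as Panel C; both near zero reads as Panel A or F.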
Regression Discontinuity Designs

Also called the cut-point design. In the classic version, those eligible for an intervention perform less well on the outcome measure at the start, and that is exactly what makes them eligible, as in a tutoring program for college students or a program with an income-eligibility threshold. If the intervention group performs less well at the start and performs the same at the end, you've got success. (A sketch follows the comparison-group note below.)

Non-Equivalent Comparison Groups

The nonequivalent comparison group design is best done with statistical controls; that part happens in the analysis phase. Just select the people to be as similar as possible, with the three issues in mind. Selection bias is the central threat here.
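A minimal sketch of those statistical controls, on simulated data (an assumption, not the lecture's example): regress the follow-up score on the group indicator while controlling for the pretest, so the group contrast is not confounded by the baseline difference between the nonequivalent groups.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    group = np.repeat([0, 1], n)                  # 0 = comparison, 1 = program
    pre = rng.normal(50, 10, 2 * n) + 5 * group   # groups start out unequal
    post = 0.8 * pre + 4 * group + rng.normal(0, 5, 2 * n)  # true effect = 4

    # The naive contrast mixes the program effect with selection bias...
    naive = post[group == 1].mean() - post[group == 0].mean()

    # ...so adjust for the pretest in the analysis phase (ANCOVA-style).
    X = np.column_stack([np.ones(2 * n), group, pre])
    coef, *_ = np.linalg.lstsq(X, post, rcond=None)

    print(f"naive difference:        {naive:.2f}")    # inflated
    print(f"pretest-adjusted effect: {coef[1]:.2f}")  # close to the true 4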

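Returning to the regression discontinuity (cut-point) design above, here is a sketch of its success criterion on simulated data for a hypothetical tutoring program: the eligible group starts below the cut point and, if the program works, finishes at roughly the same level as everyone else. The cutoff and the catch-up effect are assumed for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    baseline = rng.normal(70, 10, 500)
    treated = baseline < 60                 # eligibility: below the cut point

    # Everyone drifts up a little; the assumed program effect brings the
    # treated students up to roughly their peers' follow-up level.
    followup = baseline + rng.normal(5, 3, 500)
    followup[treated] = rng.normal(78, 5, treated.sum())

    print(f"start: treated {baseline[treated].mean():.1f} vs "
          f"others {baseline[~treated].mean():.1f}")   # treated start lower
    print(f"end:   treated {followup[treated].mean():.1f} vs "
          f"others {followup[~treated].mean():.1f}")   # now about the same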
Module 4: Focusing the Evaluation (Lectures 2-4)

Patton (p. 269) distinguishes four purposes of evaluation: Formative, Summative, Accountability, and Knowledge generation.

Formative evaluation aims to identify areas for improving program operations and enhancing the quality of service delivery.

Summative evaluation assesses the overall effectiveness of a program for the purpose of making decisions about its future.

Accountability holds the program accountable to funders and other external stakeholders. Most often, performance measures derived from process objectives are reported monthly or quarterly.

Knowledge generation is evaluation research conducted to add to the field of knowledge about a program model. It is often done before program replication or scale-up, or to provide evidence that a program is evidence-based.

Steps:
1. Learn about the program's context via a situational analysis.
2. Know that the evaluation will need to include Monitoring, Implementation, and Outcomes Evaluation (MIO) components.
3. Listen for anything that may make the evaluation more difficult, such as hot spots or hidden agendas.
4. Come to understand the purpose of the evaluation.

With that information, you will be ready to begin the evaluation design -- the topic of Module 5.

4. Focusing the Evaluation Questions

What to consider in order to focus the evaluation:
Program context
MIO components
Purpose of the evaluation
Hot Spots & Hidden Agendas
Useful and feasible questions

"Every evaluation situation is unique. A successful evaluation (one that is useful, practical, ethical, accurate, and accountable) emerges from the special characteristics and conditions of a particular situation – a mixture of people, politics, history, context, resources, constraints, values, needs, interests, and chance." (Patton, p. 97)

The bottom line is that every evaluation must be tailored to the specific situation, needs, and interests. Don't propose a large-scale experimental design for a new program that is just getting started. And don't propose a dinky evaluation that will produce very little in the way of findings for decision making if the organization is ready to engage with evaluation and wants to use the results for decision-making.

Hot Spots and Hidden Agendas

Think about the situational assessment. Were there any "hot spots" that came up in the conversation? These could be anything and may be referred to only indirectly. Generally these are areas of stakeholder nonalignment: "Do we have too many program components? Does the program work as well for African Americans as it does for Latinos? Should we be using a different evidence-based model?"

Sometimes the true purpose of the evaluation, at least for those who initiate it, has little to do with actually obtaining information about the program's performance. Sometimes evaluation is sought because the program status quo has been called into question. This may result from political attack, competition, mounting program costs, changes in the intended population, or dissatisfaction with program performance. When this happens, restructuring may be an option and evaluation may be sought to guide that change.

Lecture 4: Putting It All Together

For the example non-profit for kids, the pieces of Module 4 connect: the situational analysis, the logic model, the MIO components (performance monitoring, implementation evaluation, outcomes evaluation), the objectives, the purpose, and the evaluation questions.
3. Monitoring, Implementation, Outcomes for the Kids' Non-profit (MIO; Patton, p. 269)

Monitoring: the set of procedures to collect and report on the number and type of service activities and beneficiaries, for purposes of accountability (e.g., to funders and managers). (A small tally sketch follows the evaluability list below.)

Implementation evaluation: provides information to supplement that gained through monitoring the number and type of service activities and beneficiaries. It may include an analysis of the reach and dose of an intervention, the organizational plan, and the service utilization plan. It may include documentation of fidelity to a program model. It often includes measures of client satisfaction with staff and services. Depending on the political, economic, organizational, or community context, it may address various other questions.

Outcomes evaluation: assesses the achievement of program outcomes, such as the overall effectiveness of a project in producing favorable knowledge, attitudes, behaviors, health status, and/or skills in the intended population. Typically (but not always), outcomes are derived from immediate or intermediate outcome objectives.

Looking ahead to Module 6, Design Validity: Is an outcome effect of 35 good? The basic idea of validity starts with the design. In design notation, O1 X O2: a baseline observation (O1), the intervention (X), and a follow-up observation (O2). Threats to validity are taken up there.

Evaluability Assessment (Joseph Wholey)

1. Program goals, objectives, and priority information needs are well defined.
2. Program goals and objectives are plausible.
3. Relevant performance data can be obtained.
4. The intended users of the evaluation results have agreed on how they will use the information.
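As noted under Monitoring above, the core of that component is counting. A minimal tally sketch, with a hypothetical record format and hypothetical activity categories:

    from collections import Counter

    # Each record: (activity type, number of beneficiaries reached).
    service_log = [
        ("coaching session", 3),
        ("workshop", 12),
        ("coaching session", 2),
        ("outreach", 25),
        ("workshop", 15),
    ]

    activities = Counter(kind for kind, _ in service_log)
    beneficiaries = Counter()
    for kind, n in service_log:
        beneficiaries[kind] += n

    print("Quarterly performance report")
    for kind in activities:
        print(f"  {kind}: {activities[kind]} activities, "
              f"{beneficiaries[kind]} beneficiaries")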
Lyn Paleo, DrPH
Modules 7 and 8

Lecture for UC Berkeley School of Public Health Masters-level course,
PH W218 Evaluation for Health and Social Programs.
All material copyright 2012. Contact: 3paleo@gmail.com

Additional sections: More on Logic and Impact Models; More on Outcome Objectives; Module 7 Lecture 3: Questions for an Implementation Evaluation.