Transcript of SSR Programming Trends, Challenges and Emerging Issues Part IV: Monitoring and Evaluation

PROJECT IMPLEMENTATION
MONITORING AND EVALUATION
PROJECT IDENTIFICATION AND DESIGN
SSR Programming Trends, Challenges and Emerging Issues - Part IV: Monitoring and Evaluation
POLICY
CONCEPT
APPROACH

1. Monitoring and Evaluation
Evidence Base
There is an over-reliance on anecdotal evidence to showcase SSR success stories or to substantiate perceived positive effects of SSR processes; too often, not enough data is available to capture the actual impacts and influence of SSR. This lack of evidence and hard data stymies efforts to build a business case for greater commitment to, and investment in, SSR and security sector management issues overall.

The most widely used metric for evaluating the success of SSR processes is the level of actual and perceived safety of citizens, including data disaggregated by ethnicity and gender. Yet such data is often missing altogether or collected irregularly.



Measuring Nuances
Because of an inability to capture or measure nuanced changes and reforms accurately, key and impactful reforms are quickly abandoned due to the perception that no visible changes have been achieved. Monitoring and evaluation processes lack the sophistication and approach needed to track and measure subtle changes in the overall reform process related to high-level governance reforms, even though these can indicate the long-term trajectory of the reform process and incremental changes over time.
Direct Observation
Programmes spend little time and few resources on assessment and direct observation of needs and challenges in courts or police stations during programme design and implementation. This leads to supply-driven or external solutions to local problems without an understanding of context. Project budgets do not build in flexibility for local travel, and work plans focus on support at HQ level.
Indicators
SSR processes and the wider SSR community continue to struggle with identifying suitable indicators at the process and programme levels. This is especially the case for indicators related to governance, management and accountability. Relative to other security and justice disciplines, SSR has not developed any standard, globally collected indicators for governance or accountability in the security sector.
Evaluations
Evaluations of programmes typically take place in the middle of implementation and at the end. Yet much of the evidence of sustained systemic and behavioural change becomes available only a few years later. In this regard, SSR programmes, with the exception of those continued in subsequent phases, have rarely deployed impact studies to identify sustained changes and gather evidence of actual outcomes and impacts.
Considering the large number of donors and interventions in the security sector, attribution of results has proved difficult in many evaluations.
Evaluations commonly focus on the output level; there is limited tracking of outcomes and little overview of impact on the overall process. There is limited capacity and know-how at the national level, and within programme management structures, on how to develop outcome indicators.
There are few evaluations of peacekeeping and crisis management missions, despite their increased engagement in "classic" development activities that would otherwise be subject to independent evaluations.
Evaluations of large and mid-sized programmes are becoming standard practice.
M&E Overview
  • Irregular - ad-hoc
  • Poor indicators
  • Output/input focused - few if any indicators at outcome level
  • Few impact evaluations
  • Tracking system of implementation of recommendations

Challenges
  • Lack of baseline data
  • M&E not integrated into programme design
  • SSR community has few globally collected indicators related to SSG
  • High reliance on public perception surveys without complementary indicators or data
  • Poor integrity/availability of data
  • M&E not used for lesson learning - lessons rarely get extracted and cross-fertilized to other programmes
  • Limited attention to attribution

Opportunities/Good Practice
  • Inclusive M&E process
  • Using evaluations/reviews to benefit both national stakeholders and the programme - not creating parallel reporting or assessments
  • Regular collection of data by the programme
  • Evaluation reports are public
  • M&E focuses on lesson learning rather than simply measuring outputs ('what has been done')
  • Centralised system of collecting lessons identified, with wide dissemination beyond programme staff (e.g. globally, across agency)
  • M&E: balance between identifying 'challenges' and 'what works'
  • Periodically reviewing the relevance of indicators
  • Data triangulation
To see part I click on the link: http://prezi.com/ldkv2cqvzuwf/?utm_campaign=share&utm_medium=copy
To see part II click on the link: http://prezi.com/n9-v8liuvfko/?utm_campaign=share&utm_medium=copy
To see part III click on the link: http://prezi.com/ywvjaxxfx2nk/?utm_campaign=share&utm_medium=copy
Please note that this tool is under continuous development. At the moment we are still in the early phase of data collection and analysis.
This is intended to be an open tool. We would greatly appreciate any comments, suggestions and contributions (examples, emerging practice, analysis, etc.). Please let us know your views on trends, innovations and emerging issues through the ISSAT Community of Practice: http://issat.dcaf.ch/Share
Use the full screen mode to view the presentation.
Just use your mouse to click on the areas you would like to explore further (zoom in by clicking on a circle, and zoom out by clicking anywhere outside it). You can also follow the presentation using the right and left arrow keys on your keyboard; the left arrow always takes you back to the previous slide.
Please give the presentation a few minutes to load in full. If you have trouble with internet speed, we recommend downloading the presentation to your desktop.
INTRODUCTION
Much of the knowledge base underpinning SSR practice is fragmented, localized to individual donors, SSR practitioners or even programmes. In the absence of a robust evidence base and investment in global learning and monitoring platforms for SSR, much of what is known about SSR is based on anecdotal evidence or remains scattered through various evaluations or reports that are inconsistently shared in the public domain. This tool is a unique attempt to synthesize, document and collate some of the existing knowledge base on SSR that would otherwise remain in various reports or conference papers. The tool draws information from various ISSAT-supported evaluations, assessments and advocacy events, as well as reports, findings and lessons identified from the wider SSR practitioner community.

To take advantage of its unique position, working intimately with a large number of the leading actors in the field of SSR, ISSAT puts significant effort into gathering and synthesizing emerging issues and trends in SSR. This is done to better understand how SSR is evolving and where gaps in practice or effectiveness are forming. As part of this process, the ISSAT methodology cell, a group of staff tasked with developing the ISSAT methodology, has focused on streamlining lesson learning throughout ISSAT support activities. ISSAT has institutionalised processes whereby all ISSAT advisory field support, training, and advocacy and outreach activities feed into a centralised mapping of challenges, trends, emerging issues and innovations. This covers issues found at the guidance and policy level, in the way development partners support SSR, and in national SSR processes. The gathered knowledge is progressively synthesized and analysed by the methodology cell to identify common issues and trends, and the analysis is documented and tracked through this database of trends, issues and challenges.


There are four sections to the trends, challenges and issues database.
Part I: Policy, Concept and Approach to SSR (click: http://prezi.com/ldkv2cqvzuwf/?utm_campaign=share&utm_medium=copy)
Part II: Project Design and Formulation (click: http://prezi.com/n9-v8liuvfko/?utm_campaign=share&utm_medium=copy)
Part III: Project Implementation (click: http://prezi.com/ywvjaxxfx2nk/?utm_campaign=share&utm_medium=copy)
Part IV: Monitoring and Evaluation (click: http://prezi.com/cfg6qbvf0h1d/?utm_campaign=share&utm_medium=copy)



Option 1: Use your mouse to click on the slide you would like to explore
Option 2: Use your keyboard to explore the presentation (click the right arrow to go to the next slide and the left arrow to go back)
Investment
It is common to find that projects under-invest in, or miscalculate the projected costs of, robust M&E. Typically, M&E budgets fund single evaluations (e.g. mid-term evaluations) rather than periodic or systematic M&E systems as part of the programme's operational budget.