
Evaluators’ Varied Roles in Collective Impact


Over the next few months, we’ll be releasing a series of blogs on topics we’ll be presenting at the American Evaluation Association’s (AEA) annual meeting, which will be held in Atlanta, GA, October 24-29. You can learn more about the meeting, including how to register, here.

Google “Collective Impact” and you’ll get roughly 1.8 million hits (including this blog). Although collective impact (CI) is just one path among many, it is clear the framework has taken hold as a means of tackling complex problems through a systemic lens. By their nature, however, CI initiatives are complex and emergent. They often include a mix of policy, practice, program, and alignment strategies that engage many different organizations and stakeholders. Moreover, it is not uncommon to have a diverse array of stakeholders, including funders, in the mix.

As CI grows, many different leaders are building our understanding of how to best support the work through evaluation. One thing we have come to realize is that, as varied and complex as CI initiatives are, so are the roles of their evaluators. We can be learning partners, developers of shared measurement systems, strategy partners, or even systems partners, helping align evaluation and learning throughout the system. Because of this, our effectiveness as evaluators depends on understanding which roles are needed and when, as well as how to balance these multiple roles.

In addition to traditional formative and summative evaluation in a CI context, an evaluator may also be a:

  1. Developmental evaluator, providing real-time learning focused on supporting innovation in a complex context;
  2. Facilitator, helping partners develop and test a collective theory of change, use data to make better decisions, or align systems across evaluations;
  3. Data collector/analyzer, helping to support problem definition, identify and map the stakeholders in the system, or vet possible solutions and understand their potential for improving outcomes;
  4. Developer of system-level measures of collective capacity and impact, as well as evaluator of the CI process itself, providing feedback on how to strengthen it; and/or
  5. Creator of a shared measurement system, including adapting core measures to local contexts.

This October, I will have the privilege of presenting on this topic at the American Evaluation Association’s annual meeting with Hallie Preskill from FSG, Ayo Atterberry from the Annie E. Casey Foundation, Meg Hargreaves from Community Science, and Rebecca Ochtera here at Spark Policy. Our presentation will examine the varied roles evaluators play in the CI context, as well as what funders and initiatives look for from their CI evaluation teams, exploring how navigating these varied roles can help evaluation support systems change and lead to more effective evaluation activities.

Interested in learning more? Join us at our presentation: The many varied and complex roles of an evaluator in a collective impact initiative!


How do you know if you’re getting the best quality in your evaluations?


Quality in evaluation used to be defined as rigor (and sometimes still is), with rigor meaning the competence of the evaluator, the legitimacy of the process and, of course, the application of the best research methods to the collection and analysis of data. These are important, but they don’t add up to an all-encompassing definition of quality, particularly in complex, adaptive settings where evaluation partners with strategy.

If we cannot count on these elements alone to define quality, what are alternative ways of understanding whether your evaluation is high quality? Hallie Preskill from FSG and I will be joining forces at the American Evaluation Association’s annual conference this Friday to explore this issue. We are proposing that the concept of “rigor” (and thus what you can look for in your evaluations) can – and should – be redefined as:

  • Balancing whether the evaluation is useful, inclusive of multiple perspectives, unbiased, accurate, and timely.
  • The quality of the learning process, including whether it engages the people who need the information when they need the information.
  • The quality of the thinking, including whether the evaluation engages in deep analysis, seeks alternative explanations, situates findings within the literature, and uses systems thinking.
  • The credibility and legitimacy of the findings, including whether people are confident in the ‘truth’ being presented.
  • Responsiveness to the cultural context, including the integration of stakeholders’ values and definitions of success, as well as who helps to interpret the findings.


Are you attending the annual conference? Come join us for an interactive discussion on how to reframe rigor and quality in your evaluations.