
Evaluators’ Varied Roles in Collective Impact


Over the next few months, we’ll be releasing a series of blogs on topics we’ll be presenting at the American Evaluation Association’s (AEA) annual meeting, to be held in Atlanta, GA, October 24-29. You can learn more about the meeting, including how to register, here.

Google “Collective Impact” and you’ll get roughly 1.8 million hits (including this blog). Although collective impact (CI) is just one path among many, it is clear the framework has taken hold as a means of tackling complex problems through a systemic lens. By their nature, however, CI initiatives are complex and emergent. They often include a mix of policy, practice, program, and alignment strategies that engage many different organizations and stakeholders. Moreover, it is not uncommon to have a diverse array of stakeholders, including funders, in the mix.

As CI grows, many different leaders are building our understanding of how to best support the work through evaluation. One thing we have come to realize is that, as varied and complex as CI initiatives are, so are the roles of their evaluators. We can be learning partners, developers of shared measurement systems, strategy partners, or even systems partners, helping align evaluation and learning throughout the system. Because of this, our effectiveness as evaluators depends on understanding which roles are needed and when, as well as how to balance these multiple roles.

In addition to traditional formative and summative evaluation in a CI context, an evaluator may also be a:

  1. Developmental evaluator, providing real-time learning focused on supporting innovation in a complex context;
  2. Facilitator, helping partners develop and test a collective theory of change, use data to make better decisions, or align systems across evaluations;
  3. Data collector/analyzer, helping to support problem definition, identify and map the stakeholders in the system, or vet possible solutions and understand their potential for improving outcomes;
  4. Developer of system-level measures of collective capacity and impact, as well as evaluator of the CI process itself, providing feedback on how to strengthen it; and/or
  5. Creator of a shared measurement system, including adapting core measures to local contexts.

This October, I will have the privilege of presenting on this topic at the American Evaluation Association’s annual meeting with Hallie Preskill from FSG, Ayo Atterberry from the Annie E. Casey Foundation, Meg Hargreaves from Community Science, and Rebecca Ochtera here at Spark Policy. Our presentation will look at the varied roles evaluators play in the CI context, as well as what funders and initiatives look for from CI evaluation teams, exploring how navigating these varied roles can help evaluation support systems change and lead to more effective evaluation activities.

Interested in learning more? Join us at our presentation: The many varied and complex roles of an evaluator in a collective impact initiative!


The Case for Developmental Evaluation

This blog is co-authored by Marci Parkhurst and Hallie Preskill from FSG, Dr. Jewlya Lynn from Spark Policy Institute, and Marah Moore from i2i Institute. It is also posted on FSG’s website: www.fsg.org 

In a recent blog post discussing the importance of good evidence in supporting systems change work, evaluation expert Lisbeth Schorr wrote, “To get better results in this complex world, we must be willing to shake the intuition that certainty should be our highest priority…” Rather, she argues, “it is time for all of us to think more expansively about evidence as we strive to understand the world of today and to improve the world of tomorrow.” [Emphasis added]

At the annual American Evaluation Association Conference (AEA) in November, practitioners, funders, and academics from around the world gave presentations and facilitated discussions around a type of evaluation that is specifically designed to meet this need for a more expanded view of evidence. It’s called developmental evaluation, and, as noted by other commentators, it took this year’s AEA conference by storm.

What is developmental evaluation?

Developmental evaluation (DE) “is grounded in systems thinking and supports innovation by collecting and analyzing real-time data in ways that lead to informed and ongoing decision making as part of the design, development, and implementation process.” As such, DE is particularly well-suited for innovations in which the path to success is not clear. By focusing on understanding what’s happening as a new approach is implemented, DE can help answer questions such as:

  • What is emerging as the innovation takes shape?
  • What do initial results reveal about expected progress?
  • What variations in effects are we seeing?
  • How have different values, perspectives, and relationships influenced the innovation and its outcomes?
  • How is the larger system or environment responding to the innovation?

DE can provide stakeholders with a deep understanding of context and real-time insights about how a new initiative, program, or innovation should be adapted in response to changing circumstances and what is being learned along the way.

A well-executed DE will effectively balance accountability with learning; rigor with flexibility and timely information; reflection and dialogue with decision-making and action; and the need for a fixed budget with the need for responsiveness and flexibility. DE also strives to balance expectations about who adapts and changes based on the information provided (i.e., funders and/or grantees).

The case for developmental evaluation

Developmental evaluation (DE) has the potential to serve as an indispensable strategic learning tool for the growing number of funders and practitioners focusing their efforts on facilitating systems change. But DE is different from other approaches to evaluation. Articulating what exactly DE looks like in practice, what results it can produce, and how those results can add value to a given initiative, program, or innovation is a critical challenge, even for leaders who embrace DE in concept.

To help meet the need for a clear and compelling description of how DE differs from formative and summative evaluation and what value it can add to an organization or innovation, we hosted a think tank session at AEA in which we invited attendees to share their thoughts on these questions. We identified four overarching value propositions for DE, each supported by quotes from participants:

1) DE focuses on understanding an innovation in context, and explores how both the innovation and its context evolve and interact over time.

  • “DE allows evaluators AND program implementers to adapt to changing contexts and respond to real events that can and should impact the direction of the work.”
  • “DE provides a systematic way to scan and understand the critical systems and contextual elements that influence an innovation’s road to outcomes.”
  • “DE allows for fluidity and flexibility in decision-making as the issue being addressed continues to evolve.”

2) DE is specifically designed to improve innovation. By engaging early and deeply in an exploration of what a new innovation is and how it responds to its context, DE enables stakeholders to document and learn from their experiments.

  • “DE is perfect for those times when you have the resources, knowledge, and commitment to dedicate to an innovation, but the unknowns are many and having the significant impact you want will require learning along the way.”
  • “DE is a tool that facilitates ‘failing smart’ and adapting to emergent conditions.”

3) DE supports timely decision-making in a way that monitoring and later-stage evaluation cannot. By providing real-time feedback to initiative participants, managers, and funders, DE supports rapid strategic adjustments and quick course corrections that are critical to success under conditions of complexity.

  • “DE allows for faster decision-making with ongoing information.”
  • “DE provides real time insights that can save an innovation from wasting valuable funds on theories or assumptions that are incorrect.”
  • “DE promotes rapid, adaptive learning at a deep level so that an innovation has greatest potential to achieve social impact.”

4) Well-executed DE uses an inclusive, participatory approach that helps build relationships and increase learning capacity while boosting performance.

  • “DE encourages frequent stakeholder engagement in accessing data and using it to inform decision-making, therefore maximizing both individual and organizational learning and capacity-building. This leads to better outcomes.”
  • “DE increases trust between stakeholders or participants and evaluators by making the evaluator a ‘critical friend’ to the work.”
  • “DE can help concretely inform a specific innovation, as well as help to transform an organization’s orientation toward continuous learning.”

Additionally, one participant offered a succinct summary of how DE is different from other types of evaluation: “DE helps you keep your focus on driving meaningful change and figuring out what’s needed to make that happen—not on deploying a predefined strategy or measuring a set of predefined outcomes.”

We hope that these messages and talking points will prove helpful to funders and practitioners seeking to better understand why DE is such an innovative and powerful approach to evaluation.

Have other ideas about DE’s value? Please share them in the comments.



Working in Fields


I’ve been thinking a lot lately about how different it looks to work in a field instead of alone. And no, I don’t mean out in a field of flowers (though that sounds lovely). Rather, I’m referring to a field of organizations trying to cause the same type of change, though not necessarily in collaboration or even cooperation.

We are all part of these fields: it’s the five other organizations who submitted nearly the same proposal as you did to a local funder; the three groups who knocked on the same policymaker’s door last week, talking about the same issue; the two partners you call when a quick turnaround opportunity comes up that you can’t pull off alone. The mix of all these types of organizations comprises our field (or fields, for multi-issue, multi-area organizations).

Years of emphasis on collaboration and collective impact have ensured we all recognize that we can’t get to the big wins without partners. However, we also deal with the competing reality that collaboration is hard, time-consuming, and rarely exists across all the relevant organizations. So what if we thought about our work at a field level as more than just our collaborations? What would it take to influence how a field of organizations can achieve major wins together?

It turns out some folks have started to think about this and, in fact, have begun to define some dimensions of fields of advocates who are trying to advance a policy or systemic issue. Within each of these dimensions, there are concrete ways advocacy organizations, funders, or even evaluators can help strengthen the field:

Framing of the issue or issues

Effective fields share a common frame or core set of values underlying their work. For example, one part of a field might pursue Healthy Eating, Active Living (HEAL) policy change to address inequities, while another pursues it to address the healthcare system’s lack of capacity to meet growing demand. While each of these frames is valid, each approaches the problem fairly differently and points to different solutions. Because of this, they might be considered different fields.

  • So what can you do with this? You can look for partners who share the same frame and identify common opportunities to act together. You can promote your frame to other organizations in the same space and work to change the overall framing around the issue. As a funder, you can invest resources to strengthen organizations that share your frame.

Resources and skills

Fields are composed of organizations with different resources and skills for influencing an issue. Most fields have deficits, such as a lack of strong policy analysis/research capacity or insufficient community organizing. They may also lack skills that are rarely needed but critical when they are, such as launching ballot initiative campaigns or leading litigation.

  • So what can you do about this? Explore the deficits and seek to grow your organization in that direction, rather than duplicating already available capacity. Build the skills of other organizations so they can engage in work that is complementary to your own.

Connectivity

Fields have varying levels of relationships between organizations. Strong relationships allow for coordinated strategies, leveraging of capacities, and use of common messaging on specific policy opportunities, while weak relationships can make it difficult to work together at the right moments to achieve policy or systemic changes.

  • So what can you do about this? Seek out organizations that are traditionally not connected to your part of the field, particularly those that bring a different resource, skill, or voice to the work. Intentionally leverage old and new partners for concrete opportunities to move an issue together. If you’re a funder, provide resources and convening opportunities to organizations currently not connected to one another.

Composition

Composition refers to the representation of different types of stakeholders, from the inclusion of public/private partners to racial, ethnic, and economic diversity and more. Fields that represent a broad array of stakeholders carry more influence when policy opportunities arise, and they help craft policy solutions that are more likely to achieve the desired outcomes than solutions shaped by only a couple of dominant perspectives.

  • So what can you do about this? Identify which voices are missing from the field or are marginalized. Expand the perspectives or organizations you engage. If you’re a funder, consider bringing new voices into the field by funding direct service or community organizations who want to advocate.

Adaptive Capacity

When the context shifts in a policy campaign or systems building strategy, effective advocacy organizations shift their strategies as well. A strong field doesn’t shift in 10 different directions or miss key signals indicating a shift is needed. Rather, when part of the field identifies the need for change, the need is recognized throughout the field and the changes are aligned.

  • So what can you do about this? If your organization is skilled at monitoring the environment, actively share what you’re learning with other organizations. If you don’t have the capacity to do that monitoring, seek out partners who do and share what you learn from them. If you’re a funder, consider funding one or more organizations to conduct ongoing environmental assessments, with the expectation that they will disseminate what they learn actively and in a timely manner.

This might be the longest blog I’ve ever written, but I hope you find the ideas worth the number of words on the screen. Working at a field level may lead to stronger collaborations in the future, but just as important is how it changes the way organizations respond and react to one another and to the environment, advocating in ways that collectively increase the likelihood of success.

I will be joining thought leaders on this issue of working collectively (without having to work collaboratively) at the American Evaluation Association’s annual conference this year. If you’re attending, I hope you can join us and move this dialogue forward.


Using Learning To Do Good, Even Better

One of the best parts of my job is helping organizations use learning to do good, even better. Recently, we worked with Project Health Colorado, a strategy funded by The Colorado Trust with support from The Colorado Health Foundation, focused on building public will to achieve access to health for all Coloradans by fostering a statewide discussion about health care and how it can be improved. The strategy included fourteen grantees and a communications campaign working independently and together to build public will. It also combined an impact evaluation with coaching on real-time, data-driven strategic learning to help grantees and The Trust test and adapt their strategies to improve outcomes.

Lessons learned about strategic learning:

So, how can organizations use real-time learning to tackle a complex strategy in a complex environment – building will around a highly politicized issue? Our strategic learning model built the capacity of The Trust and grantees to engage in systematic data collection, along with collective interpretation and use of the information to improve strategies. As a result, grantees shifted strategies in real time, increasing their ability to influence audience awareness of access-to-health issues and willingness to take action.

As a result of the learning, The Trust made major changes to the overarching strategy, including shifting from asking grantees to use a prepackaged message to asking them to use the “intent” of the message, with training on how to adapt it. This was particularly important for grantees working with predominantly minority communities, who reported that the original message did not resonate in their communities.

The real-time learning was effective because it allowed grantees and The Trust to practice interpreting and using the results of systematic data collection, applying what they learned to improve their strategies. The evaluation also prioritized adaptation over accountability to pre-defined plans, creating a culture of adaptation and helping participants strategize about how to be effective at building will.

Lessons learned about evaluation:

The evaluation focused learning at the portfolio level, looking at the collective impact on public will across all grantee strategies. As the evaluator charged with figuring out the impact of a strategy in which everyone was encouraged to constantly adapt and improve, we learned that using multiple in-depth data collection methods, tailored to the ways different audiences engaged in the strategy, and explicitly planning for how to capture emergent outcomes allowed the evaluation to stay relevant even as the strategy shifted.


This post originally appeared September 14, 2015, on AEA365, the American Evaluation Association blog. The American Evaluation Association is an international professional association of evaluators devoted to the application and exploration of program evaluation, personnel evaluation, technology, and many other forms of evaluation. AEA is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week; all contributions to aea365 this week come from NPFTIG members.