For the Good of the Group: Be Nice, Respond in Kind, Be Forgiving

When working to change complex systems, it can be difficult for individual stakeholders to engage in authentic collaboration. This is basic neuroscience: we are all motivated to move away from perceived threats and toward perceived rewards. Bringing multiple actors together to work toward a common goal can create conflict between doing what is best for an individual organization and doing what is best for the system.

In the latest issue of The Foundation Review, we’ve shared tools for navigating this difficult terrain, using an on-the-ground example: The Colorado Health Foundation’s (TCHF) Creating Healthy Schools funding strategy. TCHF engaged Spark, along with Harder+Company and The Civic Canopy, to support an emergent approach to designing and implementing the strategy.

Here are some highlights on how to help stakeholders align their work and build inclusive engagement and partnership:

  • Lead stakeholders to a shared understanding of systems thinking and how it translates to systems acting.
  • Leverage a neutral facilitator.
  • Engage on-the-ground perspectives to involve those who will be most impacted by the change.
  • Support increased communication between systems-level and on-the-ground groups.
  • Develop clear function-group goals.
  • Be transparent about what you are doing, how you are approaching the problem, and how decisions are made.

Read more about TCHF’s implementation of an emergent philanthropy philosophy in “Insights from Deploying a Collaborative Process for Funding Systems Change.”

The Collective Impact Research Study: What is all this really going to mean, anyway?

By Jewlya Lynn, CEO, Spark Policy Institute; Sarah Stachowiak, CEO, ORS Impact

It’s easy for evaluators to get tied up in the technical terms around our work, leaving lay readers unclear about what our decisions and choices mean. Without care, we can also risk being opaque about what a particular design can and can’t do. With this blog, we want to untangle what we think our design will tell us and what it won’t.

With this research study, ORS Impact and Spark Policy Institute are seeking to understand the degree to which the collective impact approach contributed meaningfully to observed positive changes in people’s lives (or, in some cases, in species or ecosystems). In other words, when and under what conditions did collective impact make a difference where we’re seeing positive changes, or are there other explanations or more significant contributors to those changes? While we’ll learn a lot more than just that, at its heart, that’s what this study will do.

Our primary approach to answering the core question about contribution and causal relationships will be process tracing. Process tracing provides a rigorous, structured way to identify and explore competing explanations for why change happens, and to determine whether the evidence we find through our data collection efforts is necessary or sufficient to support each explanation.
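For readers who want to see the mechanics, here is a minimal sketch, in Python, of the classic process-tracing evidence tests (hoop and smoking-gun tests); the hypotheses and pieces of evidence below are invented for illustration and are not the study’s actual instrument:

```python
# Minimal sketch of classic process-tracing evidence tests. A "hoop"
# test is evidence a hypothesis must pass to stay alive (necessary,
# not sufficient); a "smoking gun" strongly confirms a hypothesis if
# passed (sufficient, not necessary). All entries are invented examples.
EVIDENCE = [
    # (description, hypothesis, necessary, sufficient, passed)
    ("Informants describe the CI backbone coordinating the policy push",
     "CI contributed", True, False, True),          # hoop test
    ("Adopted policy mirrors language drafted in a CI workgroup",
     "CI contributed", False, True, True),          # smoking-gun test
    ("The federal grant predates and fully explains the change",
     "Federal funding alone", True, False, False),  # hoop test
]

def assess(hypothesis):
    """Apply the evidence tests recorded for one hypothesis."""
    status = "plausible, not yet confirmed"
    for desc, hyp, necessary, sufficient, passed in EVIDENCE:
        if hyp != hypothesis:
            continue
        if necessary and not passed:
            return f"eliminated (failed hoop test: {desc})"
        if sufficient and passed:
            status = f"strongly supported (smoking gun: {desc})"
    return status

for hyp in ("CI contributed", "Federal funding alone"):
    print(f"{hyp}: {assess(hyp)}")
```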

To implement the process tracing, we will dig deeply into data around successful changes (a population change or set of changes plausibly linked to the CI effort) within six sites. We’ll explore these changes and their contributing factors using existing documents, interviews with site informants, focus groups with engaged individuals, and a participatory sense-making process with stakeholders around how we understand change to have happened. We’ll try to untangle the links between implementation of the collective impact approach and early outcomes, between early outcomes and systems changes, and between systems changes and ultimate impacts.

Figure:  Diagram of “Process” for Tracing

Note:  Future blogs will provide more information on the different rubrics we’ve developed and are using.

Using a process tracing approach also means that we’ll explicitly explore alternative hypotheses for why change happened: Was there another, more impactful initiative? Was there a federal funding stream that supported important related work? Was there a state policy, unconnected to stakeholders’ work, that paved the way? Would these changes have occurred whether or not collective impact was around?

Additionally, we’ll look at two sites where we would expect to see change but don’t. These sites can help us understand whether the patterns we see at successful sites are absent or show up differently there, findings that would give us more confidence that the patterns we’re seeing are meaningful.

Choosing process tracing as our approach does mean that our unit of analysis (the sphere within which we will explore change and causal relationships) will be limited to approximately eight sites. While we hope to find sites where a cluster of impact outcomes results from a specific set of activities (or “process”), we are choosing to go deep in a few sites with an approach that provides rigor around how we develop and confirm our understanding of the relationships between activities and changes. And because we are looking across diverse sites working on varied issue areas (e.g., food systems, education, environmental issues) and at different scales (e.g., cities, multiple counties, entire states), identifying patterns across diverse contexts will increase our confidence about which collective impact conditions, principles, and other contextual factors are most related to these successes.

With more data on if and when we find causal relationships, we will also go back to the data set from the 22 sites we are engaging with earlier in the study to see whether we can find patterns similar to those found through the process tracing. For these sites, we’ll use data we will have collected on their fidelity to collective impact, their efforts around equity, their successes with different types of systems changes, and the types of ultimate impacts they achieved. Are we seeing similar patterns around the necessity of fidelity to certain conditions? Are we seeing similar patterns in the relationship between certain types of systems changes and impacts?
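As a toy illustration of that kind of cross-site pattern check (the site names, fidelity ratings, and outcomes below are invented; the study’s actual rubric scores will be far richer):

```python
# Toy cross-site pattern check: do sites rated higher on fidelity to
# the CI conditions show systems changes more often? All data invented.
SITES = [
    {"name": "Site A", "fidelity": "high", "systems_change": True},
    {"name": "Site B", "fidelity": "high", "systems_change": True},
    {"name": "Site C", "fidelity": "low",  "systems_change": False},
    {"name": "Site D", "fidelity": "low",  "systems_change": True},
]

def change_rate(fidelity):
    """Share of sites at this fidelity level showing a systems change."""
    group = [s for s in SITES if s["fidelity"] == fidelity]
    return sum(s["systems_change"] for s in group) / len(group)

for level in ("high", "low"):
    print(f"{level}-fidelity sites with systems change: {change_rate(level):.0%}")
```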

Despite the strengths we believe this study has, it will not be the be-all and end-all, final say on the efficacy of collective impact. All studies have limitations, and we want to be clear about ours as well. Given time and resources, we can’t conduct in-depth evaluations of the full range of efforts and activities any given collective impact site is undertaking. Our unit of analysis isn’t a full site; it won’t take in the full complexity of an initiative’s history or the full array of its activities and efforts. For example, it’s likely that a site we engage with around a particular success has also experienced areas with no discernible progress. We also are not comparing collective impact to other change models. That doesn’t make the exploration of causality around successful changes less meaningful, but it does mean that we’ll understand contribution to specific changes well, rather than judging the success of collective impact at a community level or comparing collective impact to other models of driving systemic change.

We do believe that this study will fill a gap in the growing body of research, evaluation and evidence around collective impact by deeply understanding contribution in particular cases and by looking at a diverse and varied set of cases.  The social sector will benefit from continued interrogation of collective impact using many methods, units of analysis and approaches.  In the end, the more we learn, the better we can make meaningful progress on the gnarly issues that face vulnerable places and populations.

Sharing as We Go: The Collective Impact Research Study

By Jewlya Lynn, CEO, Spark Policy Institute; Sarah Stachowiak, CEO, ORS Impact

At Spark Policy Institute (Spark) and ORS Impact (ORS), we have been doing systems-building work for over a decade. When the Collective Impact (CI) approach came along, it created a level of clarity for many people, both about what it means to change systems and about how to do so.

And now, six years into the momentum and excitement the CI approach has created, many systems change leaders find themselves asking:

“Does the Collective Impact approach directly contribute to systems changes that lead to population changes? When does it contribute, and in what ways? And most importantly, what does that mean for our work?”

We at ORS Impact and Spark Policy Institute are excited to have the opportunity to answer the first two questions in partnership with 30 collective impact sites in the US and Canada as part of the Collective Impact Research Study. Our goal is to provide the type of information that can help with the third question – putting the learning into action.

Research and evaluation of CI, particularly when testing the efficacy of the approach, must be approached differently than program evaluation or a more straightforward descriptive study. It is not sufficient to collect self-reported data about activities and changes occurring in the system, even with a verification process, without a clear understanding of the types of changes that matter and the types of impact desired.

Consequently, our approach will consider how the external environment and CI initiatives have evolved over time and will support an understanding of the causal relationship between CI efforts and their outcomes. As part of the study, we will seek to understand the range of ways CI is experienced and perceived, the implications of these differences for its effectiveness, and the implications for how the approach is deployed and supported.

Together, Spark and ORS bring extensive expertise in the study of complex initiatives. We know communities, organizations, and funders, and we know what it means to fully participate in a long-term initiative that involves multiple individuals, organizations, and systems moving toward a common goal of change. We also bring a healthy skepticism about the approach and how the five conditions and principles come together to drive systemic change.

We are also acutely aware of the need for a credible, actionable study. We will be following rigorous research practices and providing a high level of transparency around our methods.  To that end, we want to share some high-level attributes of our study and lay out some of the content we will be providing along the way.

Research Study Phases

ORS and Spark are approaching this research as a multiphase process that allows us to use multiple methods, adding rigor and enhancing our ability to make useful comparisons across disparate sites while staying focused on the primary causal question. Our research will occur in three phases:

  • Develop a set of analytic rubrics that will provide the foundation for all our research activities. These rubrics will be grounded in the conditions and principles of CI, as well as in approaches for tracking systems changes, equity, and population-level changes (a minimal sketch of what such a rubric might look like follows this list).
  • Examine extant data, review documents, and collect new high-level data across a broad set of ~30 CI initiatives to understand how they are implementing the conditions and principles of the approach, along with their systems change outcomes and population-level impacts. As you may have seen in outreach from the CI Forum, we used an open nomination process to help ensure our sample for this stage is broad and diverse in its issue areas, origins, and funding sources.
  • Dive more deeply into a focused group of 8 CI initiatives drawn from the broader site analysis to better understand the conditions that support or impede population-level success. Our goal in this phase is to examine the implementation of the CI approach and to understand the degree to which different causal explanations hold up in different contexts and with differing levels of success in achieving population outcomes. We are using a method called process tracing, a qualitative analysis approach that supports causal inference by interrogating rival hypotheses for observed changes (we will describe process tracing in detail in a future blog post).
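As noted in the first phase above, the analytic rubrics anchor everything else. Here is a minimal sketch of the shape such a rubric might take; the condition, level descriptions, and scoring below are invented for illustration and are not the study’s actual rubrics:

```python
# Hypothetical shape of an analytic rubric for one CI condition.
# Levels and descriptors are invented; the study's rubrics are richer.
RUBRIC = {
    "common_agenda": {
        1: "No shared goals articulated across partners",
        2: "Shared goals drafted but not broadly endorsed",
        3: "Shared goals endorsed and guiding most partner activity",
    },
}

def score_site(level, condition="common_agenda"):
    """Return the rubric level and its descriptor for a site."""
    return level, RUBRIC[condition][level]

level, descriptor = score_site(2)
print(f"common_agenda scored {level}: {descriptor}")
```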

Future Blog Topics

To continue our efforts to bring transparency to this work, we will blog each month about this study, presenting our methods and the specific rubrics we will use, as well as examples and lessons learned. Please check back each month for blogs on the following topics.

  • Early June: Design details and list of sites being included in the study.
  • June and July: Three-part series discussing the rubrics being used for this study: CI, systems change, and equity.
  • August: A description of process tracing and an example.
  • September: Key lessons from untangling cause and effect of CI and population outcomes.
  • October: A case study example from one site evaluated.
  • November/December: Key findings from the study.
  • January: Final report release via the CI Forum.

We encourage you to share any of your insights about CI in the comments section below!

We’re all in this together: Why partnership makes advocacy work better

We recently wrapped up an evaluation of a national advocacy campaign in which advocacy organizations in states throughout the country were funded to push forward a common agenda. The evaluation findings highlighted how different advocacy organizations bring different capacities to the table. While technical assistance can expand that capacity, it can’t change the reality that no organization can be an expert in everything!

In other words – most organizations are not experts at policy analysis, coalition building, lobbying, media engagement, grassroots organizing, AND grasstops organizing. Usually, our organizations only have expertise in a few of these areas.

Yet, how many funders can raise their hands when you ask,

“When is the last time you released an RFP that asked grantees to have three, four, or even five distinct types of advocacy skills?”

And, how many advocacy organizations can raise their hands when you ask,

“When is the last time you responded to an RFP that asked you to be good at more things than your organization normally takes on?”

Alright, so what should we do differently?

Some funders are already tackling this issue through funding a field of advocates. In other words, they are funding multiple advocacy organizations within the same advocacy environment (such as a state) to work collectively on a common advocacy campaign or even just on a broad advocacy goal. If you’re a funder, the question becomes – what capacity do you need in that field? And, is it enough to fund a field, or do you also need to require them to come together and work in active coordination? These are important questions for funders to tackle together, along with their advocacy partners.

If you’re an advocacy organization, the opportunity is the same. We all partner with other advocacy organizations regularly, but do we partner to seek funding? What would look different if the organization down the street that does an amazing job at grassroots organizing had a grant funding the same policy priorities as our grant that pays for policy analysis and coalition building? Starting the conversation with funders as a team, with two or more organizations collectively providing a diverse set of advocacy skills, not only has the potential to make your advocacy efforts more appealing to funders; it may also make them more successful!
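To make the teaming-up idea concrete, here is a toy sketch of how two partners can jointly cover a campaign’s capacity needs; the capacity names come from the list above, while the organizations and their strengths are invented:

```python
# Toy capacity-coverage check for a two-organization partnership.
# Capacity names come from the post; organizations are invented.
NEEDED = {"policy analysis", "coalition building", "lobbying",
          "media engagement", "grassroots organizing", "grasstops organizing"}

PARTNERS = {
    "Our organization":    {"policy analysis", "coalition building"},
    "Org down the street": {"grassroots organizing"},
}

covered = set().union(*PARTNERS.values())  # everything the team brings
gaps = NEEDED - covered                    # what's still missing

print("Covered:", ", ".join(sorted(covered)))
print("Still missing:", ", ".join(sorted(gaps)))
```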

What capacities really matter – do we need everything?

Advocacy organizations don’t have to start from scratch to answer this question. National leaders in advocacy evaluation have done the legwork to find out what really matters – what does effective media capacity look like, and what about grassroots capacity? Check out:

  • Alliance for Justice’s Advocacy Evaluation website. They identified the most important advocacy capacities and turned them into an assessment tool. You can also use their evaluation design tool to think about how you might evaluate the impact of your work.

  • The Aspen Institute’s Advocacy Progress Planner. This tool is helpful in planning your advocacy campaign, identifying advocacy capacity benchmarks you want to meet, and evaluating the work.

Regardless of whether you use a formal tool or go through an internal process of assessing your advocacy strengths and gaps, by the end you should have a clear sense of what capacities you’re missing. What’s next? Finding the right partners! But that’s a whole different blog… (stay tuned!)