
Collective Impact Study Update

By Jewlya Lynn, Founder and Chief Learning Officer, Spark Policy Institute; and Sarah Stachowiak, CEO, ORS Impact

In May 2017, ORS Impact and Spark Policy Institute embarked on an ambitious and important study exploring how collective impact contributes to changes in systems and populations through its unique approach to addressing social issues. We are pleased to update the field on our progress as we near the end of our data collection.

Study Selection Process

To seek out a representative group of sites that would allow us to dig into how collective impact leads to systems and population changes, we invited the field to nominate sites that exemplify how the collective impact model contributes to changes in systems and outcomes.

We screened over 150 sites to find mature initiatives (at least three years old) in the US and Canada with evidence of strongly implementing all five conditions of collective impact, changing a number of systems, and moving the needle on outcomes. Of the 39 sites that met our initial screening criteria, 25 initiatives across a wide range of topics and geographies consented to participate in our inquiry. Our research steering team helped us vet the final set of sites and identify any potential challenges.

Methods

To develop a broad understanding of how collective impact works across many different settings, we interviewed two key people in each of the 25 sites who had deep knowledge of the initiative, often backbone leaders or steering team members. In addition, we reviewed many documents that described activities, goals, and progress. Through the interviews and documents, we sought to understand how collective impact shows up in different initiatives and how the collective impact conditions might be linked to important changes in systems and outcomes.

Drilling Down to Equity and Impact

Next, we selected two sets of sites to dive deeper into our primary research question of “what is the contribution of collective impact to population and systems changes?” and to explore more explicitly how equity intersects with the CI model, a priority identified by our steering team as an opportunity to elevate a principle of growing importance to the field.

To address our primary research question about the contribution of CI, we selected eight “contribution” sites from our original pool of 25. Our goal in selection was NOT to identify the “best” CI sites, but rather to identify a set of initiatives that offered the best chance of examining causal linkages between how collective impact is implemented and the changes that occur in people, organizations, systems, and ultimate impact. Therefore, we selected eight sites that had the strongest evidence that all five collective impact conditions were present and that multiple social and systems changes could clearly be linked to the population changes achieved.

Within our contribution sites, we collected additional data using structured group dialogues to help us understand how collective impact is implemented and what challenges the initiatives faced in implementation. In addition, we facilitated group process tracing sessions where stakeholders pressure-tested theories of change, examining whether the systems changes and population changes they were experiencing could be attributed to their collective impact efforts rather than to external events or context.

To address growing interest and urgency around infusing equity into collective impact work, we also identified three sites to allow us to understand more deeply the issues related to meaningful and authentic inclusion of beneficiary communities in CI planning, implementation, and leadership; what types of equity-focused strategies are being implemented; what factors are related to “readiness” to engage in equity work; and how initiatives are achieving equity-focused systems changes and outcomes. ARISE, an initiative focused on the needs of indigenous students in Anchorage; Promesa Boyle Heights, a community-driven initiative in Los Angeles; and RGV Focus, a regional initiative focused on low-income Hispanic children and families in the Rio Grande Valley, all provide unique opportunities to learn how CI can actively engage the families and communities it intends to benefit and how having an equity focus interacts with the CI model.

Coming up in future posts, we’ll share our collective impact and equity rubrics that we are using to understand how the model is being implemented, and blogs on systems changes and process tracing as a methodology. As we wrap up our analysis and distill findings, we’ll also provide a glimpse into our initial results and share lessons with the field from what we are learning.


Three Tips for Making Network Analysis Actionable for Your Social Impact Project


Joby Schaffer, MA, Associate Researcher at Spark Policy Institute

Many of our partners have adopted what Jed Miller and Rob Stuart called “Network-Centric Thinking.” They recognize that long-term sustainable progress on today’s social problems rarely comes from the efforts of a single organization. Rather, progress requires a strategy involving networks of organizations with the aim of producing network effects.

However, the strategist and evaluator’s task of connecting network strategy to network effects to final outcomes is often difficult, not least because networks are embedded in complex, adaptive systems in which cause and effect relationships are rarely straightforward. Moreover, because quantitative social network analysis (SNA) is often new to many social impact organizations, it is easy to get bogged down in superficial findings to the detriment of more actionable insights.

There are now many resources on designing network analyses for complex evaluations (see some of our favorites below), but we’ve found three tips particularly useful for ensuring a network analysis yields actionable insights. In short, a design for evaluating a network should:

  • Start by adopting a framework for how network structure leads to network effects;
  • Avoid the lure of only using quantitative SNA; and  
  • Design your network analysis with future data collections in mind: connecting change in the network to outcomes is one of the most powerful insights you’ll uncover.  

Get a Framework

Our partners often make use of theories of change, systems maps, scenario mapping, power analyses, and other tools to frame the nature of the problem they want to address and to develop strategies to guide their work. For learning partners like us, these tools are often a key part of developing and shaping evaluation questions and hypotheses. However, because network theory is relatively new to most people, the expected impact of network strategies is often underspecified in these documents.

For example:

An initiative may agree that the presence of working relationships among cross-sector partners is an important interim outcome…

       …with the expectation these partnerships will help address an upstream driver of a problem…

       …but they may not fully consider how the strengths and weaknesses of the current network structure alter their chances of activating this “network effect”…

       …which in turn limits their understanding of which actions are needed to advance the network strategy.

Frameworks help to address these problems because they relate network structure to network effects. For example, Peter Plastrik and Madeleine Taylor describe three types of networks [pdf] based on the depth of their connections: connectivity, alignment, and production. If an initiative aims for cross-sector collaboration (production), but the initial network analysis reveals little connectivity between organizations, it’s best to engage in more connectivity-related and alignment-related network building tasks before encouraging project collaboration.

Choose a Multi-method Approach

When most people think about network analysis, they think of network maps or strange-sounding network statistics like density or centrality. This is quantitative SNA, and it is an essential tool for describing the structural properties of a network. Among other things, an SNA will reveal gaps in the network (e.g., organizations from a certain sector may be underrepresented), show areas of deep or shallow connections (e.g., connectivity among one subset and alignment among another), and identify which organizations play important roles in the network (e.g., bringing unique partners to the network).
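For readers new to these statistics, here is a minimal sketch of how density and degree centrality are computed for an undirected network. The five organizations and their ties are entirely invented for illustration:

```python
# Two common SNA measures on a hypothetical cross-sector network.
# Organization names and ties are invented for illustration only.
edges = [
    ("Food Bank", "Clinic"),
    ("Food Bank", "School District"),
    ("Clinic", "School District"),
    ("United Way", "Food Bank"),
]
nodes = {"Food Bank", "Clinic", "School District",
         "United Way", "Housing Authority"}

n = len(nodes)

# Density: observed ties as a share of all possible undirected ties.
density = len(edges) / (n * (n - 1) / 2)

# Degree centrality: each organization's ties as a share of the
# maximum possible ties it could have (n - 1).
degree = {org: 0 for org in nodes}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
centrality = {org: d / (n - 1) for org, d in degree.items()}

print(round(density, 2))                # 0.4  (a fairly sparse network)
print(centrality["Food Bank"])          # 0.75 (the most connected hub)
print(centrality["Housing Authority"])  # 0.0  (an isolated organization)
```

Even this toy example surfaces the kinds of findings described above: an isolate like the hypothetical Housing Authority is a gap in the network, while a high-centrality organization like the Food Bank is a hub that may be holding the network together.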

However, if used alone, SNA may mask a lot of the network information leaders need to make effective decisions. For example, network strategy often involves developing structures for coordination, including convenings, working groups, and shared measurement systems. While it’s possible to use SNA to wrangle some insights about whether these coordinating efforts lead to more effective partnerships, it’s often more meaningful to hear from participants how these structures influenced their work. In short, interviews are much better at capturing the organizational and inter-organizational effects of the network – innovations, greater efficiencies realized, knowledge and information shared, etc.

Design with the Future in Mind

It is good practice to design any evaluation with pre- and post-intervention measurement in mind. Especially for quantitative SNA, it is worth the upfront time to identify what you hope your network will look like in the future, not just examine it today. Repeated network maps can show how the network is evolving over time, which is a great way to identify how coordinating efforts are producing network-level effects (e.g., better representation of certain sectors at convening events, connections made between subsets of the network, etc.). Again, adopting a framework can be very useful. Many frameworks explicitly describe the stages of network evolution and provide guidance on how to identify and manage a network in transition.
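One simple longitudinal comparison is to recompute a structural measure like density at each survey wave and track the change. The organization counts and tie counts below are hypothetical, purely to show the arithmetic:

```python
# Hypothetical sketch: comparing network density across two survey
# waves to see whether network-building efforts are deepening
# connectivity. All numbers are invented for illustration.
def density(num_ties: int, num_orgs: int) -> float:
    """Share of all possible undirected ties actually present."""
    possible = num_orgs * (num_orgs - 1) / 2
    return num_ties / possible

baseline = density(num_ties=3, num_orgs=6)   # before convenings began
follow_up = density(num_ties=7, num_orgs=6)  # one year later

print(f"Density rose from {baseline:.2f} to {follow_up:.2f}")
# Density rose from 0.20 to 0.47
```

A rising density is only meaningful against a framework's expectations, of course: whether a network *should* be densifying, or instead developing hubs and clusters, depends on the network effect the strategy is aiming for.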

The more social change agents adopt network-centric thinking, the better the chances we’ll make real progress on today’s social problems. We can support this mindset by ensuring our network analyses produce actionable insights. We’ve found these three tips are useful to our work. Based on your experiences, what other tips do you recommend?

New to network thinking or network analysis? Here are a few of our favorite resources.


Case Study: Emergent Philanthropy

A new article in the Foundation Review describes the tools we used in partnership with Harder+Company and The Civic Canopy to support an adaptable design for The Colorado Health Foundation’s (TCHF) Creating Healthy Schools funding strategy.

The article dives deep into the principles of emergent philanthropy, and how TCHF worked to co-create their strategy, funding and process with key stakeholders and grantees in Colorado’s school system.

One important lesson learned? When one funder shifts its funding approach to be more emergent, it can put a burden on grantees who are still responding to the more traditional expectations of their other funders. It’s also really hard for one funder to solve all the shifting issues in a complex system.

Enter collective emergent philanthropy – a process where multiple funders combine efforts to help solve a problem through an emergent approach guided by a systems-level collaborative.

By collaborating with other funding partners interested in addressing a systems issue at the outset, funders can:

  1. Better focus a complex field by pooling and leveraging resources;
  2. Disperse power and mitigate vested interests; and
  3. Further strengthen and build partnerships to adapt to new challenges and continuously improve efforts.

Read more about TCHF’s implementation of an emergent philanthropy philosophy in Insights from Deploying a Collaborative Process for Funding Systems Change.


For the Good of the Group: Be Nice, Respond in Kind, Be Forgiving

When working to change complex systems, it can be difficult for individual stakeholders to engage in authentic collaboration. This is basic neuroscience: we are all motivated to move away from perceived threats and toward perceived rewards. Bringing multiple actors together to work toward a common goal can create conflict between doing what is best for the individual organization and doing what is best for the system.

In the latest issue of The Foundation Review, we’ve shared tools on how to navigate this difficult terrain using an on-the-ground example: The Colorado Health Foundation’s (TCHF) Creating Healthy Schools funding strategy. TCHF engaged Spark, as well as Harder+Company and The Civic Canopy to support an emergent approach to design and implement the strategy.

Here are some highlights on how to help stakeholders align their work and build inclusive engagement and partnership:

  • Lead stakeholders to a shared understanding of systems thinking and how it translates to systems acting.
  • Leverage a neutral facilitator.
  • Engage on-the-ground perspectives to involve those who will be most impacted by the change.
  • Support increased communication between systems-level and on-the-ground groups.
  • Develop clear function-group goals.
  • Be transparent about what you are doing, how you are approaching the problem, and how decisions are made.

Read more about TCHF’s implementation of an emergent philanthropy philosophy in Insights from Deploying a Collaborative Process for Funding Systems Change.


The Collective Impact Research Study: What is all this really going to mean, anyway?

By Jewlya Lynn, CEO, Spark Policy Institute; Sarah Stachowiak, CEO, ORS Impact

It’s easy for evaluators to get tied up in the technical terms around our work, leaving lay people unclear on what some of our decisions and choices mean. Without care, we can also risk being opaque about what a particular design can and can’t do. With this blog, we want to untangle what we think our design will tell us, and what it won’t.

With this research study, ORS Impact and Spark Policy Institute are seeking to understand the degree to which the collective impact approach contributed meaningfully to observed positive changes in people’s lives (or, in some cases, species or ecosystems). In other words, when and under what conditions did collective impact make a difference where we’re seeing positive changes, or are there other explanations or more significant contributors to identified changes? While we’ll learn a lot more than just that, at its heart, that’s what this study will do.

Our primary approach to understand the core question around contribution and causal relationships will be to use process tracing.  Process tracing provides a rigorous and structured way to identify and explore competing explanations for why change happens and to determine the necessity and sufficiency of different kinds of evidence to support different explanations that we’ll find through our data collection efforts.

To implement the process tracing, we will dig deeply into data around successful changes—a population change or set of changes plausibly linked to the CI efforts—within six sites. We’ll explore these changes and their contributing factors with data from existing documents, interviews with site informants, focus groups with engaged individuals, and a participatory process to review and engage in sense-making with stakeholders around the ways in which we understand change to have happened. We’ll try to untangle the links between implementation of the collective impact approach and early outcomes, the links between early outcomes and systems changes, and the links between systems changes and ultimate impacts.

Figure:  Diagram of “Process” for Tracing

Note:  Future blogs will provide more information on the different rubrics we’ve developed and are using.

Using a process tracing approach also means that we’ll explicitly explore alternate hypotheses for why change happened—was there another more impactful initiative?  Was there a federal funding stream that supported important related work?  Was there state policy that paved the way that was unconnected to stakeholders’ work?  Would these changes have occurred whether collective impact was around or not?

Additionally, we’ll look at two sites where we would expect to see change but don’t, with the expectation that these sites can help us understand if the patterns we’re seeing at successful sites are absent or showing up differently, findings that would help give us more confidence that the patterns we’re seeing are meaningful.

Process tracing as our approach does mean that our unit of analysis—the sphere within which we will be exploring change and causal relationships—is going to be approximately eight sites.  While we hope to find sites where a cluster of impact outcomes result from a specific set of activities (or “process”), we are choosing to go deeply in a few sites with an approach that will provide rigor around how we develop and confirm our understanding of the relationships between activities and changes.  And because we are looking across diverse sites, working on varied issue areas (e.g., food systems, education, environmental issues, etc.) and at different scales (e.g., cities, multiple counties, entire states), identifying patterns across diverse contexts will increase our confidence around what collective impact conditions, principles and other contextual factors are most related to these successes.

With more data around if and when we find causal relationships, we will also go back to the data set of 22 sites we engaged with earlier in the study to see if we can find similar patterns to those identified through process tracing. For these sites, we’ll use data we will have collected on their fidelity to collective impact, efforts around equity, successes with different types of systems changes, and types of ultimate impacts. Are we seeing similar patterns around the necessity of fidelity to certain conditions? Are we seeing similar patterns in the relationship between certain types of systems changes and impacts?

Despite the strengths we believe this study has, it will not be the end-all, be-all, final say on the efficacy of collective impact. All studies have limitations, and we want to be clear about those as well. Given time and resources, we can’t conduct in-depth evaluations of the full range of efforts and activities any given collective impact site is undertaking. Our unit of analysis isn’t a full site; it won’t take in the full complexity of the history of the initiative, or the full array of activities and efforts. For example, it’s likely that a site we engage with around a particular success has also experienced areas with no discernible progress. We are also not comparing collective impact to other change models. That doesn’t make the exploration of causality around successful changes less meaningful, but it does mean that we’ll understand contribution to specific changes well rather than understanding and judging the success of collective impact at a community level or comparing collective impact to other models of driving systemic change.

We do believe that this study will fill a gap in the growing body of research, evaluation and evidence around collective impact by deeply understanding contribution in particular cases and by looking at a diverse and varied set of cases.  The social sector will benefit from continued interrogation of collective impact using many methods, units of analysis and approaches.  In the end, the more we learn, the better we can make meaningful progress on the gnarly issues that face vulnerable places and populations.