
How has Health Impact Assessment been used? Findings from a new study

Health is shaped by many factors outside the direct control of the public health and health care systems, such as education, income, and the conditions in which people live, work, and play. Health impact assessment (HIA) provides a structured process for examining the potential health impacts of proposed policies, plans, programs, and projects. Conducting an HIA involves using an array of data sources and analytic methods, gathering input from stakeholders, and providing recommendations on monitoring and managing potential health impacts.

A new study, published this month in the Journal of School Health, systematically identified 20 HIAs conducted in the United States between 2003 and 2015 on issues related to prekindergarten, primary, and secondary education. The HIAs were conducted to examine (1) school structure and funding, (2) transportation to and from school, (3) physical modifications to school facilities, (4) in-school physical activity and nutrition, and (5) school discipline and climate. Assessments employed a range of methods to characterize the nature, magnitude, and severity of potential health impacts. Assessments fostered stakeholder engagement and provided health-promoting recommendations, some of which were subsequently incorporated into school policies.

Results suggest that HIA can serve as a promising tool that education, health, and other stakeholders can use to maximize the health and well-being of students, families, and communities. Health impact assessments should be used when: (1) there is a decision that has the potential to affect environmental or social determinants of health, but the potential health impacts are not being considered; (2) there is sufficient time to conduct an analysis before the final decision is made; (3) the assessment can add value to the decision-making process; and (4) there are stakeholders, data, and resources to support the process.


Exciting changes at Spark: New CEO, Eval Director, and Learning Officer

Dear Spark partners,

Today is a big day for all of us at Spark and for me personally. After 13 years of leading this organization, bringing together an amazing team of thought leaders and changemakers, I am excited to transition into a different role, creating room for new leadership to take Spark into the future. I’d like to introduce Kyle Brost, Spark’s new CEO, and Laura Pinsoneault, Spark’s new Evaluation Director. Both bring deep expertise in systems thinking and an understanding of how to support, expand, and advance Spark in the coming years.

I am committed to working with Spark in a new role (Chief Learning Officer), one that allows me to engage in what I enjoy most and do best: facilitating learning internally and with clients and partners, with a focus on systemic change. I am looking forward to spending my days supporting changemakers to do good, even better.

More information about what is coming next for Spark can be found in Kyle’s first blog as Spark’s CEO.

Thank you for your continued support and engagement with us. Together, we can solve problems and change the world for the better.

Warmly,
Jewlya Lynn
Founder and Chief Learning Officer
Spark Policy Institute


The Collective Impact Research Study: What is all this really going to mean, anyway?

By Jewlya Lynn, CEO, Spark Policy Institute; Sarah Stachowiak, CEO, ORS Impact

It’s easy for evaluators to get tied up in the technical terms around our work, leaving laypeople unclear about what some of our decisions and choices mean. Without care, we can also be opaque about what a particular design can and can’t do. With this blog, we want to untangle what we think our design will tell us, and what it won’t.

With this research study, ORS Impact and Spark Policy Institute are seeking to understand the degree to which the collective impact approach contributed meaningfully to observed positive changes in people’s lives (or, in some cases, for species or ecosystems). In other words, when and under what conditions did collective impact make a difference where we’re seeing positive changes, and are there other explanations or more significant contributors to identified changes? While we’ll learn a lot more than just that, at its heart, that’s what this study will do.

Our primary approach to understanding the core question about contribution and causal relationships will be process tracing. Process tracing provides a rigorous, structured way to identify and explore competing explanations for why change happens and to determine the necessity and sufficiency of the different kinds of evidence, gathered through our data collection efforts, that support each explanation.
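To make the necessity-and-sufficiency logic more concrete, here is a minimal, hypothetical sketch in Python. It is not our analysis code or our rubrics; the test names follow the standard process-tracing typology (hoop, smoking gun, straw-in-the-wind, doubly decisive), and the rival explanations and evidence results are invented purely for illustration.

```python
# Hypothetical sketch: how process-tracing evidence tests bear on rival explanations.
# Test types (standard typology):
#   hoop            - necessary, not sufficient: failing eliminates a hypothesis
#   smoking_gun     - sufficient, not necessary: passing strongly confirms it
#   straw_in_wind   - neither: passing only weakly supports it
#   doubly_decisive - both: passing confirms it and counts against rivals
from dataclasses import dataclass

TESTS = {
    "hoop": {"necessary": True, "sufficient": False},
    "smoking_gun": {"necessary": False, "sufficient": True},
    "straw_in_wind": {"necessary": False, "sufficient": False},
    "doubly_decisive": {"necessary": True, "sufficient": True},
}

@dataclass
class Hypothesis:
    name: str
    eliminated: bool = False
    confirmed: bool = False
    weak_support: int = 0

def apply_evidence(hypotheses, hypothesis_name, test_type, passed):
    """Update each rival hypothesis given one piece of evidence."""
    props = TESTS[test_type]
    for h in hypotheses:
        if h.name != hypothesis_name:
            # Passing a doubly decisive test for one explanation also
            # counts against its rivals.
            if passed and test_type == "doubly_decisive":
                h.eliminated = True
            continue
        if props["necessary"] and not passed:
            h.eliminated = True        # failed a hoop test
        elif props["sufficient"] and passed:
            h.confirmed = True         # passed a smoking-gun test
        elif passed:
            h.weak_support += 1        # straw in the wind

# Invented rival explanations for one observed policy change.
rivals = [
    Hypothesis("collective_impact_effort"),
    Hypothesis("unrelated_federal_funding"),
    Hypothesis("change_would_have_happened_anyway"),
]

apply_evidence(rivals, "collective_impact_effort", "hoop", passed=True)
apply_evidence(rivals, "unrelated_federal_funding", "hoop", passed=False)
apply_evidence(rivals, "collective_impact_effort", "smoking_gun", passed=True)

for h in rivals:
    print(h)
```

In practice, each piece of evidence we gather would be weighed against the rival explanations in roughly this way: hoop tests can rule an explanation out, while smoking-gun tests can strongly confirm one.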

To implement the process tracing, we will dig deeply into data around successful changes—a population change or set of changes plausibly linked to the CI efforts—within six sites. We’ll explore these changes and their contributing factors using data from existing documents, interviews with site informants, focus groups with engaged individuals, and a participatory sense-making process with stakeholders around the ways in which we understand change to have happened. We’ll try to untangle the links between implementation of the collective impact approach and early outcomes, between early outcomes and systems changes, and between systems changes and ultimate impacts.

Figure:  Diagram of “Process” for Tracing

Note:  Future blogs will provide more information on the different rubrics we’ve developed and are using.

Using a process tracing approach also means that we’ll explicitly explore alternate hypotheses for why change happened—was there another more impactful initiative?  Was there a federal funding stream that supported important related work?  Was there state policy that paved the way that was unconnected to stakeholders’ work?  Would these changes have occurred whether collective impact was around or not?

Additionally, we’ll look at two sites where we would expect to see change but don’t. These sites can help us understand whether the patterns we see at successful sites are absent or show up differently there, findings that would give us more confidence that the patterns we’re seeing are meaningful.

Process tracing as our approach does mean that our unit of analysis—the sphere within which we will be exploring change and causal relationships—is approximately eight sites. While we hope to find sites where a cluster of impact outcomes results from a specific set of activities (or “process”), we are choosing to go deep in a few sites with an approach that provides rigor around how we develop and confirm our understanding of the relationships between activities and changes. And because we are looking across diverse sites, working on varied issue areas (e.g., food systems, education, environmental issues) and at different scales (e.g., cities, multiple counties, entire states), identifying patterns across diverse contexts will increase our confidence about which collective impact conditions, principles, and other contextual factors are most related to these successes.

Once we have more data about if and when we find causal relationships, we will also go back to the data set of 22 sites we are engaging with earlier in the study to see whether similar patterns appear to those found through the process tracing. For these sites, we’ll use data we will have collected on their fidelity to collective impact, their efforts around equity, their successes with different types of systems changes, and their types of ultimate impacts. Are we seeing similar patterns around the necessity of fidelity to certain conditions? Are we seeing similar patterns in the relationship between certain types of systems changes and impacts?

Despite the strengths we believe this study has, it will not be the final say on the efficacy of collective impact. All studies have limitations, and we want to be clear about ours as well. Given time and resources, we can’t conduct in-depth evaluations of the full range of efforts and activities any given collective impact site is undertaking. Our unit of analysis isn’t a full site; it won’t take in the full complexity of the initiative’s history or the full array of activities and efforts. For example, it’s likely that a site we engage with around a particular success has also experienced areas with no discernible progress. We are also not comparing collective impact to other change models. That doesn’t make the exploration of causality around successful changes less meaningful, but it does mean that we’ll understand contribution to specific changes well, rather than judging the success of collective impact at a community level or comparing collective impact to other models of driving systemic change.

We do believe that this study will fill a gap in the growing body of research, evaluation and evidence around collective impact by deeply understanding contribution in particular cases and by looking at a diverse and varied set of cases.  The social sector will benefit from continued interrogation of collective impact using many methods, units of analysis and approaches.  In the end, the more we learn, the better we can make meaningful progress on the gnarly issues that face vulnerable places and populations.


Sharing as We Go: The Collective Impact Research Study

By Jewlya Lynn, CEO, Spark Policy Institute; Sarah Stachowiak, CEO, ORS Impact

At Spark Policy Institute (Spark) and ORS Impact (ORS), we have been doing systems-building work for over a decade. When the Collective Impact approach came along, it created a level of clarity for many people, both about what it means to change systems and about how to do so.

And now, six years after the CI approach began creating momentum and excitement, many systems change leaders find themselves asking:

“Does the Collective Impact (CI) approach directly contribute to systems changes that lead to population changes? When does it contribute, and in what ways? And most importantly, what does that mean for our work?”

We at ORS Impact and Spark Policy Institute are excited to have the opportunity to answer the first two questions in partnership with 30 collective impact sites in the US and Canada as part of the Collective Impact Research Study. Our goal is to provide the type of information that can help with the third question – putting the learning into action.

Research and evaluation of CI, particularly when testing the efficacy of the approach, must be approached differently from program evaluation or a more straightforward descriptive study. It is not sufficient to collect self-reported data about activities and changes occurring in the system, even with a verification process, without a clear understanding of the types of changes that matter and the types of impact desired.

Consequently, our approach will consider how the external environment and CI initiatives have evolved over time and support an understanding of the causal relationship between CI efforts and their outcomes.  As part of the study, we will seek to understand the range of ways CI is experienced and perceived, the implications of these differences on its effectiveness, and the implications for how the approach is deployed and supported.

Together, Spark and ORS bring extensive expertise in the study of complex initiatives. We know communities, organizations, and funders, and we know what it means to fully participate in a long-term initiative that involves multiple individuals, organizations, and systems moving toward a common goal of change. We also bring a healthy skepticism about the approach and how the five conditions and principles come together to drive systemic change.

We are also acutely aware of the need for a credible, actionable study. We will be following rigorous research practices and providing a high level of transparency around our methods.  To that end, we want to share some high-level attributes of our study and lay out some of the content we will be providing along the way.

Research Study Phases

ORS and Spark are approaching this research as a multiphase process that allows us to use multiple methods, adding rigor and enhancing our ability to make useful comparisons across disparate sites while focusing on answering the primary causal question. Our research will occur in three phases:

  • Develop a set of analytic rubrics that will provide the foundation for all our research activities. These analytic rubrics will be grounded in the conditions and principles of CI, as well as approaches for tracking systems changes, equity, and population-level changes (a rough sketch of how such a rubric might be structured follows this list).
  • Examine extant data, review documents, and collect new high-level data across a broad set of ~30 CI initiatives to understand how CI initiatives are implementing the conditions and principles of the approach, as well as their systems change outcomes and population-level impacts. As you may have seen in outreach from the CI Forum, we used an open nomination process to help ensure our sample for this stage is broad and diverse in its initiative issue areas, origins, and funding sources.
  • Dive more deeply into a focused group of 8 CI initiatives initially evaluated as part of the first phase of site analysis to better understand the conditions that support or impede population-level success. Our goal in this phase is to examine the implementation of the CI approach and to more deeply understand the degree to which different causal explanations can be supported in different contexts and with differing levels of success in achieving population outcomes. We are using a method called process tracing, a qualitative analysis approach that supports causal inference by interrogating rival hypotheses that could explain observed changes (we will describe process tracing in detail in a future blog post).
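As a rough, hypothetical illustration of what an analytic rubric can look like as a data structure (later posts describe the actual rubrics), the sketch below scores a site against defined levels on each dimension. The dimension names borrow the five CI conditions; the levels and the example ratings are invented.

```python
# Hypothetical sketch of an analytic rubric: dimensions scored against defined
# levels so different sites can be rated and compared consistently.
# The dimensions echo the five CI conditions; the levels are invented.

LEVELS = ["not evident", "emerging", "established", "exemplary"]

RUBRIC = {
    "common_agenda": LEVELS,
    "shared_measurement": LEVELS,
    "mutually_reinforcing_activities": LEVELS,
    "continuous_communication": LEVELS,
    "backbone_support": LEVELS,
}

def score_site(ratings: dict) -> dict:
    """Convert a site's per-dimension ratings into numeric scores (0-3)."""
    scores = {}
    for dimension, level in ratings.items():
        if dimension not in RUBRIC or level not in RUBRIC[dimension]:
            raise ValueError(f"Unknown dimension or level: {dimension}={level}")
        scores[dimension] = RUBRIC[dimension].index(level)
    return scores

# Invented example rating for one site.
example = {
    "common_agenda": "established",
    "shared_measurement": "emerging",
    "mutually_reinforcing_activities": "established",
    "continuous_communication": "exemplary",
    "backbone_support": "emerging",
}
print(score_site(example))
```

Structuring a rubric this way is one option for making ratings comparable across sites and raters, which is what lets later phases look for patterns across a diverse set of initiatives.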

Future Blog Topics

To continue our efforts to bring transparency to this work, we will be blogging each month about this study, presenting our methods and the specific rubrics we will be using, as well as providing examples and lessons learned. Please check back each month for blogs on the following topics.

  • Early June: Design details and list of sites being included in the study.
  • June and July: Three-part series discussing the rubrics being used for this study: CI, systems change, and equity.
  • August: A description of process tracing and an example.
  • September: Key lessons from untangling cause and effect of CI and population outcomes.
  • October: A case study example from one site evaluated.
  • November/December: Key findings from the study.
  • January: Final report release via the CI Forum.

We encourage you to share any of your insights about CI in the comments section below!


Evaluating Multi-stakeholder Advocacy Efforts

This is the second in a series of blogs on topics we’ll be presenting at the American Evaluation Association’s (AEA) annual meeting, October 24-29 in Atlanta, GA.

Today’s advocacy environment is complex, with multiple stakeholders working together in campaigns that range from informal networks to collective impact and other similarly coordinated efforts. As a result, evaluating these initiatives is equally complex, looking not only at outcomes but also at the roles and contributions of multiple stakeholders. While advocacy evaluation has evolved over the past 10 years, transitioning from an emergent area to an established field of practice, effectively addressing the complexity of multi-stakeholder efforts, which may or may not directly align, remains one of its most difficult challenges.

You can aggregate evaluations to tell the story of a group of organizations, but that is the aggregate of individual organization evaluations, not an evaluation of a field of organizations. Rather, there is a need to understand the dynamics of how organizations – a term that may also encompass partners in government, the private sector, service delivery, and beyond – interact, in concert or, sometimes, even at odds. These dynamics are key to understanding how multi-stakeholder advocacy gets to impact, along with how organizations come together to influence policy change, build cohesive fields of practice, and accomplish more than any one group could alone.

Adding to the Toolbox

This week, I will be presenting on this topic at the American Evaluation Association’s annual meeting with Jewlya Lynn here at Spark, Jared Raynor of TCC Group, and Anne Gienapp from ORS Impact. The session will look at examples of how evaluators work in multi-stakeholder environments to design different methods for collecting and analyzing data. No different from any other field of evaluation, advocacy and multi-stakeholder advocacy evaluations draw on surveys, interviews, focus groups, and observations. While these traditional methods are important, our session will look at other frameworks and types of analysis that can help strengthen these more traditional processes, such as:

  • Assessing mature and emergent advocacy fields, using an advocacy field framework, can help evaluators understand how a field of advocacy organizations collectively influences a specific policy area. The five dimensions of advocacy fields – field frame, skills and resources, adaptive capacity, connectivity, and composition – make it easier to untangle the concept of a field.
  • Machine learning, a data analysis approach that uses algorithms to surface patterns or generate predictions, is useful for finding themes in large, unstructured data sets. It can help address questions such as how a particular issue is perceived, how perceptions differ by geography or language, how sentiment has changed over time, how likely sentiment is to turn into action, and how actions reflect policy decisions (a minimal sketch of this kind of analysis follows this list).
  • Dashboard tracking can help facilitate agreement on measures and create a tracking system to collect relevant data across multiple stakeholders, which is often one of the largest logistical issues faced by multi-stakeholder evaluations, particularly when the groups are working autonomously or across a wide range of activities.
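As a minimal, hypothetical sketch of the machine-learning idea above, the example below surfaces rough themes in a handful of invented open-ended responses using topic modeling with scikit-learn. Real analyses would involve far larger data sets, careful preprocessing, and tuning, and topic modeling is only one of several approaches that could be used.

```python
# Hypothetical sketch: surfacing themes in unstructured text with topic modeling.
# Uses scikit-learn's CountVectorizer and LatentDirichletAllocation.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented stand-ins for open-ended survey or interview responses.
responses = [
    "Funding for after-school programs should be protected in the state budget.",
    "Our coalition met with legislators about transit access for rural families.",
    "Parents are worried about school funding cuts next year.",
    "Advocates organized a campaign on public transit and pedestrian safety.",
]

# Turn the text into a document-term matrix, dropping common English stop words.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(responses)

# Fit a small topic model; the number of topics would be tuned on real data.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words per topic as a rough view of the emerging themes.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[::-1][:5]]
    print(f"Topic {i}: {', '.join(top)}")
```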

Interested in learning more? Join us at our presentation: Advocacy as a Team Game: Methods for Evaluating Multi-Stakeholder Advocacy Efforts this Thursday, October 27!