
Embracing Values in Evaluation Practice

Research has traditionally defined rigor as obtaining an unbiased estimate of impact, suggesting that trustworthy results require experimental or quasi-experimental methods and objective, quantitative measures.

I’ve spent the past few months as a member of Colorado’s Equitable Evaluation Collaboratory, which aims to examine the role evaluation plays in supporting or inhibiting progress toward equity and to identify opportunities to integrate equitable evaluation principles into practice. In particular, I’ve reflected on how the research tradition has shaped evaluation’s working orthodoxies, including the notions that “credible evidence comes from quantitative data and experimental research” and that “evaluators are objective.”

On the surface, these statements don’t appear particularly problematic, but dig a little deeper and we begin to see how value judgments are an integral part of how we practice evaluation. The types of projects we take on, the questions we ask, the frameworks we use, the types of data we collect, and the ways we interpret results are all deeply rooted in what we value. As an evaluator focused on use, I aim to make these practice decisions in partnership with my clients; however, suggesting that I, or any evaluator, play no active role in making these decisions discounts our inherent position of power.

Now that I’ve tuned into the orthodoxies, I see them everywhere, often dominating the conversation. In a meeting last week, a decision-maker was describing the path forward for making a controversial policy decision. He wanted to remove subjectivity and values from the conversation by developing guidelines rooted in “evidence-based practice” and turned to me to present the “facts.”

As a proponent of data-driven decision making, I value the role of evidence; however, there is a lot to unpack behind what we have declared – through traditional notions of rigor – “works” to improve health and social outcomes. Looking retrospectively at the evidence, and thinking prospectively about generating new knowledge, it’s time to ask ourselves some hard questions, including:

  • What interventions do we choose to study? Who developed them? Why did they develop them?
  • What have we (as a society) chosen not to investigate?
  • What populations have we “tested” our interventions on? Have we looked for potentially differential impacts?
  • What outcomes do we examine? Who identified these outcomes as important?
  • Who reported the outcomes? Whose perspective do we value?
  • What time period do we examine? Is that time period meaningful to the target population?
  • Do we look for potentially unintended consequences?

As we begin to unpack the notion of “what works,” we begin to see the decision points, the values, and the inherent power and privilege in what it means to be an evaluator. It is time that we owned the notion that what we choose to study and how we choose to measure success are not objective; rather, they are inherently subjective. And importantly, our choices communicate values.

So how do we begin to embrace our role? As a step forward, I have started including a discussion of values, both mine and my clients’, at the beginning of a project and clarifying how those values will influence the evaluation scope and process. Explicitly naming the importance of equity during the evaluative process has helped keep the goals of social change and social justice front and center. Naming values helps stakeholders acknowledge their power and provides a lens through which to make decisions.

Equitable evaluation is an expedition into the unknown, requiring a transformation in how we conceptualize our role as evaluators. Having taken my initial steps into the Upside Down, I look forward to the many unknowns.

In what way do you see values showing up in your evaluative work?

 


Systems and Programs: Moving from Enemies to Friends


Over the past few years, there has been an increasing emphasis on the need for “systems change” to achieve large-scale social impact.

As someone deeply embedded in research and evaluation at the systems level, I fundamentally believe that addressing complex problems requires system-level solutions. An increasing emphasis on systems – including a greater focus on multi-stakeholder collaboratives, discussions of leverage points, and the need to shift how organizations operate – gets me excited. I see the potential to do things like reform the justice system, shift to a more prevention-focused model of health, and tackle big issues like climate change.

There is, however, one framing of how to address complex issues that dampens my excitement: systems versus programs. The discussion can become a battle between two opposing forces, marked by phrases like “we don’t fund program work,” “we only focus on systems,” or “we need to move from a program focus to a systems focus.” Systems and programs are painted as victor and villain, fundamentally at odds – and I believe this framing is not only incorrect but also has the potential to hamper meaningful change.

Too often, programs are framed as the enemy of systems change.

I am in full agreement with the adage that we cannot program our way out of complex problems. Programs alone are rarely the solution. In my years as a researcher and evaluator, I have learned time and again that focusing entirely on programs can prevent us from addressing structural inequities and root causes. Often, one of my first questions to an organization with a completely programmatic focus is: How does the program fit within your broader agenda to change the system?

That does not mean, however, that programs are fundamentally at odds with the system in which they are situated. It also does not mean that programs are not a critical component of addressing complex problems.

Consider a parable most of us have heard: A fisherman notices that people keep falling into a river and drowning, so he goes upstream to prevent it from happening by building a bridge (a systems change). Great idea! But the problem is unlikely to be solved with construction alone. What if people don’t know how to use the bridge or do not see its value? Won’t he have to educate them? Moreover, no matter how beautiful the bridge, it is highly likely that not everyone will use it (maybe it is too far away). Is he going to let the people who still fall in drown?

Programs play a critical role in addressing complex health and social challenges. To lower teen pregnancy rates, we need to provide evidence-based sexuality education alongside systems that increase access to contraception. Food banks and school meal programs are critical components of a well-functioning food system. In youth development, school-based mental health services are a key strategy for addressing issues such as trauma. For economic development, opportunities for meaningful employment need to be coupled with job training programs that set people up for success. The list goes on and on…

Programs contribute to sustainable systems-level change.

To me, what is needed is a balance between systems and programs: we must consider how programs fit within a systems change strategy.

In a recent study of 25 collective impact initiatives, changes to programs and services were identified as a critical component of achieving population-level outcomes.

I think it is time that we, as a field, pause and ask ourselves some tough questions: How can we make sure that we are appropriately delivering and scaling programs while also working to change key parts of the system? How can we best use programs to advance a systems-change strategy, for example, by training community leaders to advance system reform? In what ways can programs be integrated to better address root causes? It is time to swap the pendulum approach for one that forces us to consider how programs and systems are related. With this shift in thinking, we might then begin to see that programs and systems are not enemies; rather, they are friends – maybe even best ones.


Keeping Youth Out of the Juvenile Justice System: Creating Policy and System Change

By Lauren Gase, Spark Policy Institute and Taylor Schooley, Los Angeles County Department of Public Health

Each year, roughly one million young people are arrested in the United States. Contact with the justice system is not only a public safety issue – research shows that it can lead to a range of negative health and social outcomes, including damaging family functioning, decreasing high school graduation and employment rates, increasing the risk for involvement in violence, and worsening mental health outcomes.

Contact with the justice system is also an equity issue: persons of color are disproportionately represented at every stage of justice system processing. These disparities should concern anyone interested in promoting health, educational achievement, and community and economic development.

The public health sector can be a strong leader in creating justice system transformation because it has experience bringing together diverse stakeholders to facilitate meaningful dialogue and collaborative decision-making. Public health focuses on prevention, holistic wellbeing, and the root causes of poor outcomes. It is grounded in using data to drive decision-making and to identify opportunities for improvement.

To illustrate this, we’ve gathered examples of several jurisdictions that have begun to advance promising solutions to justice reform in partnership with public health:

  • In Los Angeles County, California, the Board of Supervisors established a new division of Youth Diversion & Development within the integrated Health Agency. This division is tasked with coordinating and contracting community-based services in lieu of arrest or citation for youth countywide.
  • In King County, Washington, Executive Dow Constantine announced an executive order to place juvenile justice under the purview of the public health department. The order aims to change policies and systems to “keep youth from returning to detention, or prevent them from becoming involved in the justice system in the first place.”
  • A recent analysis from Human Impact Partners examines the impacts of youth arrest on health and well-being in Michigan and identifies a number of recommendations, including diverting youth pre-arrest, training agency personnel to be trauma-informed, sealing youth records, and changing state sentencing laws.

To promote health, safety, and racial equity, we need to transform our current justice system to create the social, economic, and political conditions that allow individuals, families, and communities to thrive. Some jurisdictions have begun to advance public health solutions to justice reform, but there is more to be done. We need to think differently about the role of multiple partners – including law enforcement, courts, health, schools, social services, and community-based organizations – in creating opportunities for young people to avoid or minimize formal processing in the justice system.


When Collective Impact Has an Impact: A Cross-Site Study of 25 Collective Impact Initiatives

Downloads:
  • Executive Summary
  • Full Report

We at Spark Policy Institute and ORS Impact are excited to release the findings of a ground-breaking study in partnership with 25 collective impact sites in the US and Canada as part of the Collective Impact Research Study.


The study sought to shed light on a fundamental question:
To what extent and under what conditions does the collective impact approach contribute to systems and population changes?


The study findings can be a tool for refining the collective impact approach, strengthening existing initiatives, supporting new initiatives, and evaluating collective impact more meaningfully.

Our study is intended to add to the body of knowledge related to collective impact, building a better understanding of when and where it has an impact. To solve the entrenched social problems that still plague too many people and communities, it is crucial to continue deepening the sector’s understanding of the results collective impact initiatives are achieving, the challenges they face, and the lessons they have learned.

Why this study?
In 2011, John Kania and Mark Kramer published an article in the Stanford Social Innovation Review laying out “collective impact” as an approach for solving social problems at scale. For some, the introduction of a defined framework for cross-sector collaboration provided a useful way to focus new and existing partnerships toward a common goal and, hopefully, greater impact.

It has not, however, been without controversy. Some critiques from the field include a sense that collective impact is just new packaging for old concepts (without fully crediting the work that preceded it); that it is inherently a top-down approach to community problems; that it is too simplistic for the complex social problems it seeks to address; and that it replicates unjust power dynamics. There is also criticism that the approach has not been assessed rigorously enough to warrant the level of resources being directed toward it.

In early 2017, the Collective Impact Forum, an initiative of FSG and the Aspen Institute Forum for Community Solutions, hired ORS Impact and Spark Policy Institute to conduct a field-wide study of collective impact, with funding from the Annie E. Casey Foundation, the Bill & Melinda Gates Foundation, the Houston Endowment, the Robert R. McCormick Foundation, the Robert Wood Johnson Foundation, and the W.K. Kellogg Foundation. Together, the two organizations brought knowledge of and experience with collective impact (Spark), experience with other community change models (both), and a healthy skepticism and more arm’s-length relationship to the approach (ORS).

We encourage you to share any of your insights about collective impact in the comments section below. Questions or comments about the study may also be sent to Terri Akey at ORS Impact or Lauren Gase at Spark Policy Institute.


How has Health Impact Assessment been used? Findings from a new study

Health is impacted by multiple factors outside the direct control of the public health and health care system, such as education, income, and the conditions in which people live, work, and play. Health impact assessment (HIA) provides a structured process for examining the potential health impacts of proposed policies, plans, programs, and projects. Conducting an HIA involves using an array of data sources and analytic methods, gathering input from stakeholders, and providing recommendations on monitoring and managing potential health impacts.

A new study, published this month in the Journal of School Health, systematically identified 20 HIAs conducted in the United States between 2003 and 2015 on issues related to prekindergarten, primary, and secondary education. The HIAs were conducted to examine (1) school structure and funding, (2) transportation to and from school, (3) physical modifications to school facilities, (4) in-school physical activity and nutrition, and (5) school discipline and climate. Assessments employed a range of methods to characterize the nature, magnitude, and severity of potential health impacts. Assessments fostered stakeholder engagement and provided health-promoting recommendations, some of which were subsequently incorporated into school policies.

Results suggest that HIA can serve as a promising tool that education, health, and other stakeholders can use to maximize the health and well-being of students, families, and communities. Health impact assessments should be used when: (1) there is a decision that has the potential to affect environmental or social determinants of health, but the potential health impacts are not being considered; (2) there is sufficient time to conduct an analysis before the final decision is made; (3) the assessment can add value to the decision-making process; and (4) there are stakeholders, data, and resources to support the process.