Embracing Values in Evaluation Practice

Research has traditionally defined rigor as obtaining an unbiased estimate of impact, implying that trustworthy results require experimental or quasi-experimental methods and objective, quantitative measures.

I’ve spent the past few months as a member of Colorado’s Equitable Evaluation Collaboratory, which aims to examine the role evaluation plays in supporting or inhibiting progress toward equity and to identify opportunities to integrate equitable evaluation principles into practice. In particular, I’ve reflected on how the research tradition has shaped evaluation’s working orthodoxies, including the notions that “credible evidence comes from quantitative data and experimental research” and “evaluators are objective.”

On the surface, these statements don’t appear particularly problematic, but dig a little deeper and we begin to see how value judgments are an integral part of how we practice evaluation. The types of projects we take on, the questions we ask, the frameworks we use, the types of data we collect, and the ways we interpret results are all deeply rooted in what we value. As an evaluator focused on use, I aim to make these practice decisions in partnership with my clients; however, suggesting that I, or any other evaluator, play no active role in making these decisions discounts our inherent position of power.

Now that I’ve tuned into the orthodoxies, I see them everywhere, often dominating the conversation. In a meeting last week, a decision-maker was describing the path forward for making a controversial policy decision. He wanted to remove subjectivity and values from the conversation by developing guidelines rooted in “evidence-based practice” and turned to me to present the “facts.”

As a proponent of data-driven decision making, I value the role of evidence; however, there is a lot to unpack behind what we have declared – through traditional notions of rigor – to “work” to improve health and social outcomes. As we look retrospectively at the evidence and think prospectively about generating new knowledge, it’s time to ask ourselves some hard questions, including:

  • What interventions do we choose to study? Who developed them? Why did they develop them?
  • What have we (as a society) chosen not to investigate?
  • What populations have we “tested” our interventions on? Have we looked for potentially differential impacts?
  • What outcomes do we examine? Who identified these outcomes as important?
  • Who reported the outcomes? Whose perspective do we value?
  • What time period do we examine? Is that time period meaningful to the target population?
  • Do we look for potentially unintended consequences?

As we begin to unpack the notion of “what works,” we begin to see the decision points, the values, and the inherent power and privilege in what it means to be an evaluator. It is time we owned that what we choose to study and how we choose to measure success are not objective; rather, they are inherently subjective. And importantly, our choices communicate values.

So how do we begin to embrace our role? As a step forward, I have started including a discussion of values, both mine and my clients’, at the beginning of a project and clarifying how those values will influence the evaluation scope and process. Explicitly naming the importance of equity during the evaluative process has helped keep the goals of social change and social justice front and center. Naming values helps stakeholders acknowledge their power and provides a lens through which to make decisions.

Equitable evaluation is an expedition into the unknown, requiring a transformation in how we conceptualize our role as evaluator. Having taken my initial steps into the Upside Down, I look forward to the many unknowns.

In what way do you see values showing up in your evaluative work?

August Spark News: Getting Unstuck – Equity, Advocacy, and Collective Impact

Are We Getting Anywhere?

At Spark, we’re experts at developing actionable strategies to achieve meaningful, measurable outcomes. But in today’s complex environment, it’s sometimes challenging for our partners to see the progress they’ve made. In our August newsletter, we’re sharing resources you can apply in real-life settings to measure your progress and take positive steps forward, no matter where you are in the process of making meaningful social change happen. We’re also excited to share new efforts in understanding Collective Impact and how it is, or isn’t, moving the needle on systems change.

Want to receive more updates like this? Subscribe to our newsletter for monthly updates.

Evaluating Multi-stakeholder Advocacy Efforts

This is the second in a series of blogs on topics we’ll be presenting at the American Evaluation Association’s (AEA) annual meeting, which will be held in Atlanta, GA, October 24-29.

Today’s advocacy environment is complex, with multiple stakeholders working together in campaigns that range from informal networks to collective impact and other similarly coordinated efforts. As a result, evaluating these initiatives is equally complex, looking not only at outcomes but also at the roles and contributions of multiple stakeholders. While advocacy evaluation has evolved over the past 10 years, transitioning from an emergent area to an established field of practice, effectively addressing the complexity of multi-stakeholder efforts that may or may not directly align remains one of its most challenging tasks.

You can aggregate to tell the story of a group of organizations, but that is an aggregate of individual organizational evaluations, not an evaluation of a field of organizations. Rather, there is a need to understand the dynamics of how organizations – a term that may also encompass partners in government, the private sector, service delivery, etc. – interact, in concert or, sometimes, even at odds. These dynamics are the key to understanding how multi-stakeholder advocacy gets to impact: how organizations come together to influence policy change, build cohesive fields of practice, and accomplish more than any one group can alone.

Adding to the Toolbox

This week, I will be presenting on this topic at the American Evaluation Association’s annual meeting with Jewlya Lynn here at Spark, Jared Raynor of TCC Group, and Anne Gienapp of ORS Impact. The session will look at examples of how evaluators work in multi-stakeholder environments to design different methods for collecting and analyzing data. No different from any other field of evaluation, advocacy and multi-stakeholder advocacy evaluations draw on surveys, interviews, focus groups, and observations. While these traditional methods are important, our session will look at other frameworks and types of analysis that can help strengthen these more traditional processes, such as:

  • Assessing mature and emergent advocacy fields, using an advocacy field framework, can help evaluators understand how a field of advocacy organizations collectively influences a specific policy area. The five dimensions of advocacy fields – field frame, skills and resources, adaptive capacity, connectivity, and composition – make it easier to untangle the concept of a field.
  • Machine learning, a data analysis approach that uses algorithms to surface patterns or generate predictions, is useful for identifying themes in large, unstructured data sets. It can help address questions such as how stakeholders perceive a particular issue, whether perceptions differ by geography or language, how sentiment has changed over time, how likely sentiment is to turn into action, and how actions reflect policy decisions (a brief code sketch follows this list).
  • Dashboard tracking can facilitate agreement on measures and create a system to collect relevant data across multiple stakeholders – often one of the largest logistical challenges in multi-stakeholder evaluations, particularly when groups are working autonomously or across a wide range of activities.
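
To make the machine learning bullet concrete, here is a minimal sketch of one common approach to surfacing themes in open-ended text: topic modeling with TF-IDF features and non-negative matrix factorization. The scikit-learn library, the sample responses, and the number of themes are all illustrative assumptions, not methods prescribed in our session.

```python
# A minimal sketch of surfacing themes in unstructured text with machine
# learning. Assumes scikit-learn is installed; the sample responses and
# parameter choices below are hypothetical and purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Hypothetical open-ended survey responses gathered from stakeholders.
responses = [
    "Funding cuts made it harder to coordinate outreach this year.",
    "Our coalition partners shared data that strengthened testimony.",
    "Legislators responded once community members told their stories.",
    "Staff turnover slowed our policy analysis and outreach work.",
    "Joint trainings built skills across partner organizations.",
]

# Convert the text to TF-IDF features, dropping common English stopwords.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(responses)

# Factor the document-term matrix into a small number of "themes".
n_themes = 2  # illustrative; tune to the size of the real data set
model = NMF(n_components=n_themes, random_state=0)
doc_topics = model.fit_transform(tfidf)

# Print the top words that characterize each theme.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(model.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"Theme {i + 1}: {', '.join(top)}")
```

In practice, an evaluator would run an approach like this over hundreds or thousands of responses, tune the number of themes, and review the top terms with stakeholders to label each theme.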

Interested in learning more? Join us at our presentation, “Advocacy as a Team Game: Methods for Evaluating Multi-Stakeholder Advocacy Efforts,” this Thursday, October 27!