
Healthy Schools Collective Impact: Reaching the Bold Goal, Together

One of the things that has become clear in our work with systems change broadly and collective impact specifically is that no one program or organization can address large-scale issues on its own. Put another way, our impact goes further when we work together toward a common agenda.

Over the past nine months, Spark has been serving as the backbone for the Healthy Schools Collective Impact (HSCI) initiative. HSCI’s bold goal is for all Colorado K-12 public schools to provide an environment and culture that integrate health and wellness equitably for all students and staff by 2025.

Talk about creating meaningful change!

School systems work hard to address the needs of all students; however, many lack the capacity or resources to address student health and wellness consistently. Going it alone often results in inequitable, duplicative, and siloed efforts and resources.

This is where collective impact comes in.


Healthy Schools Collective Impact is changing how schools in Colorado approach school-based health and wellness by bringing stakeholders together in a structured way to support schools with the health and wellness resources they need to engage the whole child and, in turn, bolster academic achievement.


With the support of Spark as the backbone, HSCI members have been working hard to lay the foundation for change, including:

  • Engaging stakeholders from statewide systems that impact health and wellness in schools and districts, including policy, professional development, research, and marketing/communications/engagement.
  • Working with diverse stakeholders, including work groups (focused on nutrition, physical activity, behavioral health, and student health services), to create the HSCI Theory of Change, a living document that serves as a plan for the work.
  • Informing The Colorado Health Foundation’s Creating Healthy Schools funding opportunities so they address equity and align with the Theory of Change.
  • Establishing a new structure for HSCI that emphasizes including the voices of a diverse set of key stakeholders, with a specific focus on ensuring end users (students, educators, and families) have a seat at the table.

Moving from planning to action

With this solid foundation, our next step is to take the group from planning to doing by instilling a sense of trust and urgency and providing the tools, data, and space for innovation that HSCI needs to achieve its bold goal. For many groups, even those that aren’t working in a collaborative context, this can be the hardest step. However, from our work with other collaborative initiatives, we have found it helpful to keep the following in mind:

  • Remember that “partnerships move at the speed of trust”: Building a truly collaborative effort takes trust, and building trust takes time. That said, groups can take steps to build authentic partnerships by developing mutual respect; fostering active, inclusive, and equitable participation; sharing power; and finding mutual benefits.
  • Experiment – find the small wins: Often, groups are so focused on the big win that they lose momentum because it seems so far away. Finding opportunities to experiment and achieve small wins lets groups see the incremental change they are making in the world, often with a smaller investment of time and resources, so they can move from “oh dear, that didn’t work” to “yes, we can do it (one little piece at a time)”.
  • Evaluate, learn, adjust, repeat: Leveraging real-time data, making the time for learning from that data, and then collectively interpreting the learning can help organizations steadily shift their strategies in response to changes in their environment, thereby improving outcomes.

Systems change can be daunting, but it is achievable, particularly when stakeholders come together around a common agenda and then trust, experiment, learn, and adjust as they move forward.

Do you have any tips for moving collaborative work forward? What are your experiences with finding small wins in a collective impact setting? Share with us in the comments or click here to share a case study, tip, trick, or tool!


The Case for Developmental Evaluation

This blog is co-authored by Marci Parkhurst and Hallie Preskill from FSG, Dr. Jewlya Lynn from Spark Policy Institute, and Marah Moore from i2i Institute. It is also posted on FSG’s website: www.fsg.org 

In a recent blog post discussing the importance of good evidence in supporting systems change work, evaluation expert Lisbeth Schorr wrote, “To get better results in this complex world, we must be willing to shake the intuition that certainty should be our highest priority…” Rather, she argues, “it is time for all of us to think more expansively about evidence as we strive to understand the world of today and to improve the world of tomorrow.” [Emphasis added]

At the annual American Evaluation Association Conference (AEA) in November, practitioners, funders, and academics from around the world gave presentations and facilitated discussions around a type of evaluation that is specifically designed to meet this need for a more expanded view of evidence. It’s called developmental evaluation, and, as noted by other commentators, it took this year’s AEA conference by storm.

What is developmental evaluation?

Developmental evaluation (DE) “is grounded in systems thinking and supports innovation by collecting and analyzing real-time data in ways that lead to informed and ongoing decision making as part of the design, development, and implementation process.” As such, DE is particularly well-suited for innovations in which the path to success is not clear. By focusing on understanding what’s happening as a new approach is implemented, DE can help answer questions such as:

  • What is emerging as the innovation takes shape?
  • What do initial results reveal about expected progress?
  • What variations in effects are we seeing?
  • How have different values, perspectives, and relationships influenced the innovation and its outcomes?
  • How is the larger system or environment responding to the innovation?

DE can provide stakeholders with a deep understanding of context and real-time insights about how a new initiative, program, or innovation should be adapted in response to changing circumstances and what is being learned along the way.

A well-executed DE will effectively balance accountability with learning; rigor with flexibility and timely information; reflection and dialogue with decision-making and action; and the need for a fixed budget with the need for responsiveness and flexibility. DE also strives to balance expectations about who is expected to adapt and change based on the information provided (i.e., funders and/or grantees).

The case for developmental evaluation

Developmental evaluation has the potential to serve as an indispensable strategic learning tool for the growing number of funders and practitioners who are focusing their efforts on facilitating systems change. But DE is different from other approaches to evaluation. Articulating what exactly DE looks like in practice, what results it can produce, and how those results can add value to a given initiative, program, or innovation is a critical challenge, even for leaders who embrace DE in concept.

To help meet the need for a clear and compelling description of how DE differs from formative and summative evaluation and what value it can add to an organization or innovation, we hosted a think tank session at AEA in which we invited attendees to share their thoughts on these questions. We identified four overarching value propositions of DE, each illustrated by quotes from participants:

1) DE focuses on understanding an innovation in context, and explores how both the innovation and its context evolve and interact over time.

  • “DE allows evaluators AND program implementers to adapt to changing contexts and respond to real events that can and should impact the direction of the work.”
  • “DE provides a systematic way to scan and understand the critical systems and contextual elements that influence an innovation’s road to outcomes.”
  • “DE allows for fluidity and flexibility in decision-making as the issue being addressed continues to evolve.”

2) DE is specifically designed to improve innovation. By engaging early and deeply in an exploration of what a new innovation is and how it responds to its context, DE enables stakeholders to document and learn from their experiments.

  • “DE is perfect for those times when you have the resources, knowledge, and commitment to dedicate to an innovation, but the unknowns are many and having the significant impact you want will require learning along the way.”
  • “DE is a tool that facilitates ‘failing smart’ and adapting to emergent conditions.”

3) DE supports timely decision-making in a way that monitoring and later-stage evaluation cannot. By providing real-time feedback to initiative participants, managers, and funders, DE supports rapid strategic adjustments and quick course corrections that are critical to success under conditions of complexity.

  • “DE allows for faster decision-making with ongoing information.”
  • “DE provides real time insights that can save an innovation from wasting valuable funds on theories or assumptions that are incorrect.”
  • “DE promotes rapid, adaptive learning at a deep level so that an innovation has greatest potential to achieve social impact.”

4) Well-executed DE uses an inclusive, participatory approach that helps build relationships and increase learning capacity while boosting performance.

  • “DE encourages frequent stakeholder engagement in accessing data and using it to inform decision-making, therefore maximizing both individual and organizational learning and capacity-building. This leads to better outcomes.”
  • “DE increases trust between stakeholders or participants and evaluators by making the evaluator a ‘critical friend’ to the work.”
  • “DE can help concretely inform a specific innovation, as well as help to transform an organization’s orientation toward continuous learning.”

Additionally, one participant offered a succinct summary of how DE is different from other types of evaluation: “DE helps you keep your focus on driving meaningful change and figuring out what’s needed to make that happen—not on deploying a predefined strategy or measuring a set of predefined outcomes.”

We hope that these messages and talking points will prove helpful to funders and practitioners seeking to better understand why DE is such an innovative and powerful approach to evaluation.

Have other ideas about DE’s value? Please share them in the comments.

Learn more about developmental evaluation:


Redefining Rigor: Describing quality evaluation in complex, adaptive settings

This blog is co-authored by Dr. Jewlya Lynn, Spark Policy Institute, and Hallie Preskill, FSG. The blog is also posted on FSG’s website: www.fsg.org 

Traditionally, evaluation has focused on understanding whether a program is making progress against pre-determined indicators. In this context, the quality of the evaluation is often measured in part by the “rigor” of the methods and scientific inquiry. Experimental and quasi-experimental methods are highly valued and seen as the most rigorous designs, even when they may hamper the ability of the program to adapt and be responsive to its environment.

Evaluations of complex systems-change strategies or adaptive, innovative programs cannot use this same yardstick to measure quality. An experimental design is hard to apply when a strategy’s success is not fully defined upfront and depends on being responsive to the environment. As recognition of the need for these programs grows, and with it the number of complex programs, so does the need for a new yardstick. To meet that need, we proposed a new definition of rigor at the 2015 American Evaluation Association annual conference, one that broadens how we think about quality in evaluation to encompass what is critical when the target of the evaluation is complex, adaptive, and emergent.

We propose that rigor be redefined to include a balance between four criteria:

  • Quality of the Thinking: The extent to which the evaluation’s design and implementation engages in deep analysis that focuses on patterns, themes and values (drawing on systems thinking); seeks alternative explanations and interpretations; is grounded in the research literature; and looks for outliers that offer different perspectives.
  • Credibility and Legitimacy of the Claims: The extent to which the data is trustworthy, including the confidence in the findings; the transferability of findings to other contexts; the consistency and repeatability of the findings; and the extent to which the findings are shaped by respondents, rather than evaluator bias, motivation, or interests.
  • Cultural Responsiveness and Context: The extent to which the evaluation questions, methods, and analysis respect and reflect the stakeholders’ values and context, their definitions of success, their experiences and perceptions, and their insights about what is happening.
  • Quality and Value of the Learning Process: The extent to which the learning process engages the people who most need the information, in a way that allows for reflection, dialogue, testing assumptions, and asking new questions, directly contributing to making decisions that help improve the process and outcomes.

The concept of balancing the four criteria is at the heart of this redefinition of rigor. Regardless of its other positive attributes, an evaluation of a complex, adaptive program that fails to take systems thinking into account will not be responsive to the needs of that program. Similarly, an evaluation that fails to provide timely information for making decisions lacks rigor even if the quality of the thinking and the legitimacy of the claims are high.
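
To make the idea of balance concrete, here is a minimal, hypothetical sketch (in Python, with an invented 1-5 rating scale and shorthand criterion names that are our own, not part of the framework as presented) of how a team might record ratings against the four criteria and treat the weakest criterion, rather than the average, as the overall measure of rigor:

    # Hypothetical sketch: the 1-5 scale and shorthand criterion names are
    # illustrative assumptions, not part of the published framework.
    CRITERIA = [
        "quality_of_thinking",
        "credibility_and_legitimacy_of_claims",
        "cultural_responsiveness_and_context",
        "quality_and_value_of_learning_process",
    ]

    def overall_rigor(ratings: dict) -> int:
        """Overall rigor is limited by the weakest criterion (minimum, not average)."""
        missing = [c for c in CRITERIA if c not in ratings]
        if missing:
            raise ValueError(f"Missing ratings for: {missing}")
        return min(ratings[c] for c in CRITERIA)

    # Example: strong thinking and credible claims cannot compensate for a
    # learning process that arrives too late to inform decisions.
    ratings = {
        "quality_of_thinking": 5,
        "credibility_and_legitimacy_of_claims": 4,
        "cultural_responsiveness_and_context": 4,
        "quality_and_value_of_learning_process": 2,
    }
    print(overall_rigor(ratings))  # -> 2

In practice, of course, the balance is a matter of judgment and dialogue rather than arithmetic; the sketch only illustrates why a single weak criterion undermines overall rigor.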

The implications of this redefinition are many.

  • From an evaluator’s point of view, it provides a new checklist of considerations when designing and implementing an evaluation. It suggests that specific, up-front work will be needed to understand the cultural context, the potential users of the evaluation and the decisions they need to make, and the level of complexity in both the environment and the program itself. At the same time, it maintains the focus the traditional definition of rigor has always had on building on previous research and seeking consistent, repeatable findings. Ultimately, it asks the evaluator to balance the desire for the highest-quality methods and design with the need for the evaluation to have value for the end user and to be contextually appropriate.
  • From an evaluation purchaser’s point of view, it provides criteria for considering the value of potential evaluators, evaluation plans, and reports. It can be a way of articulating up-front expectations or comparing the quality of different approaches to an evaluation.
  • From a programmatic point of view, it provides a yardstick for assessing not only evaluators but also the usefulness and value of their evaluation results. It can help program leaders and staff have confidence in the evaluation findings, or give them a way of talking about their concerns as they look at results.

Across evaluators, evaluation purchasers, and users of evaluation, this redefinition of rigor provides a new way of articulating expectations for evaluation and elevating the quality and value of evaluations. It is our hope that this balanced approach helps evaluators, evaluation purchasers, and evaluation users share ownership of the concept of rigor and find the right balance of the criteria for their evaluations.


Evaluating Collaboration in Place-based Initiatives: Can it Move the Needle?

On October 5th and 6th, I will have the opportunity to facilitate a session at the Art & Science of Place-Based Evaluation conference on how evaluation can help stakeholders understand and strengthen cross-sector partnerships and collaboration more broadly. The conference is hosted by the Jacobs Center for Neighborhood Innovation, the Aspen Institute Forum for Community Solutions, and the Neighborhood Funders Group, and it builds on a series of ongoing national conversations about the importance of “place” in philanthropic investments, including the Center on Philanthropy and Public Policy’s National Forum on Place-based Initiatives and the Aspen Institute’s Promising Practices Conference.

If you Google “evaluate collaboration,” you will see there is no shortage of tools for assessing the strength of a collaborative effort. But as I prepared for the session, I found myself asking: Is the quality of collaboration really the most important thing to investigate with your limited evaluation resources?

Effectively engaging partners in place-based work depends on more than good processes and practices. Among other things, it requires:

  • Meaningfully engaging different sectors to leverage the different motivations bringing each to the table (which requires surfacing and understanding those motivations!);
  • Tackling difficult power dynamics, which are sometimes evident in the room but at other times play out in how strategies are implemented;
  • Recognizing and responding appropriately to the impact of the cultural assumptions participants bring to the process;
  • Managing the negative consequences of failed attempts to work collaboratively in the past;
  • Effectively leveraging large networks of organizations and leaders, often larger than the initiative has time to meaningfully engage and manage; and
  • Engaging with communities experiencing disparities in ways that are appropriate and lead to an impact on the work.

In addition, there is the fundamental issue of whether and how the structures and processes of collaboration are leading to something worthwhile – moving the needle on the issue that brought everyone together. Are collaboration and engagement managed in ways that advance the work or only in ways that advance the quality of collaboration?

If evaluation is going to play a role in helping place-based initiatives advance their collaboration processes and get to meaningful change, it needs to go beyond tools and become a real-time partner in uncovering motivations, power dynamics, and cultural assumptions; it needs to help pick apart how networks are functioning and where engagement might be most effective; and it should play a role in understanding how, and to what extent, nontraditional partners are influencing the decisions being made and contributing to shifts in the overall strategy and direction of the work.

These are the types of issues we’ll be exploring in the collaboration and cross-sector partnerships session at the convening. Don’t worry, you’ll leave with a list of evaluation tools that can be helpful if you want to focus on evaluating the effectiveness of your collaborative processes. But you’ll also leave with insights about how to engage evaluation in helping you tackle the fundamental issues standing between good collaboration and having an impact on the issues that matter.

Interested in learning more about the conference or attending? Visit the conference website: http://www.jacobscenter.org/placebased/

Want to hear from more facilitators? Check out the blog from Meg Long of Equal Measure about connecting community change to systems change and Sonia Taddy-Sandino of Harder+Company about “getting ready” for place-based work. Interested in accessing new resources before the conference? Check out our toolkits on engaging nontraditional voices and decision-making in complex, multi-stakeholder settings.


Using Learning To Do Good, Even Better

One of the best parts of my job is helping organizations use learning to do good, even better. Recently, we worked with Project Health Colorado, a strategy funded by The Colorado Trust, with support from The Colorado Health Foundation, focused on building public will to achieve access to health for all Coloradans by fostering a statewide discussion about health care and how it can be improved. The strategy included fourteen grantees and a communications campaign working independently and together to build public will. The strategy also combined an impact evaluation with coaching on real-time, data-driven strategic learning to help grantees and The Trust test and adapt their strategies to improve outcomes.

Lessons learned about strategic learning:

So, how can organizations use real-time learning to tackle a complex strategy in a complex environment – building will around a highly politicized issue? Our strategic learning model built the capacity of The Trust and grantees to engage in systematic data collection, along with collective interpretation and use of information to improve strategies. As a result, grantees shifted strategies in real time, increasing their ability to influence audience awareness of access-to-health issues and willingness to take action.

As a result of the learning, The Trust made major changes to the overarching strategy, including shifting from asking grantees to use a prepackaged message to having them convey the “intent” of the message, with training on how to adapt it. This was particularly important for grantees working with predominantly minority communities, who reported that the original message did not resonate with their communities.

The real-time learning was effective because it gave grantees and The Trust practice in interpreting and using the results of systematic data collection, applying what they learned to improve their strategies. The evaluation also prioritized adaptation over accountability to pre-defined plans, creating a culture of adaptation and helping participants strategize about how to be effective at building will.

Lessons learned about evaluation:

The evaluation focused learning at the portfolio level, looking at the collective impact on public will across all grantee strategies. As the evaluator charged with figuring out the impact of a strategy in which everyone was encouraged to constantly adapt and improve, we learned that two things allowed the evaluation to stay relevant even as the strategy shifted: using multiple in-depth data collection methods tailored to the ways different audiences engaged in the strategy, and explicitly planning for how to capture emergent outcomes.

Rad Resources:

Want to learn more?

This post originally appeared September 14, 2015 on AEA365, the American Evaluation Association blog. The American Evaluation Association is an international professional association of evaluators devoted to the application and exploration of program evaluation, personnel evaluation, technology, and many other forms of evaluation. The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. All contributions to aea365 this week come from NPFTIG members.