
Evaluating Collaboration in Place-based Initiatives: Can it Move the Needle?

On October 5th and 6th, I will have the opportunity to facilitate a session at the Art & Science of Place-Based Evaluation conference on how evaluation can help stakeholders understand and strengthen cross-sector partnerships and collaboration more broadly. The conference is hosted by the Jacobs Center for Neighborhood Innovation, the Aspen Institute Forum for Community Solutions, and the Neighborhood Funders Group, and builds on a series of ongoing national conversations about the importance of “place” in philanthropic investments, including the Center on Philanthropy and Public Policy’s National Forum on Place-based Initiatives and the Aspen Institute’s Promising Practices Conference.

If you Google “evaluate collaboration,” you will find no shortage of tools for assessing the strength of a collaborative effort. But as I prepared for the session, I found myself asking: Is the quality of collaboration really the most important thing to investigate with your limited evaluation resources?

Effectively engaging partners in place-based work depends on more than good processes and practices. Among other things, it requires:

  • Meaningfully engaging different sectors to leverage the different motivations bringing each to the table (which requires surfacing and understanding those motivations!);
  • Tackling difficult power dynamics, which are sometimes evident in the room but other times play out in how strategies are implemented;
  • Recognizing and responding appropriately to the impact of the cultural assumptions participants bring to the process;
  • Managing the negative consequences of failed attempts to work collaboratively in the past;
  • Effectively leveraging large networks of organizations and leaders, often larger than the initiative has time to meaningfully engage and manage; and
  • Engaging with communities experiencing disparities in ways that are appropriate and lead to an impact on the work.

In addition, there is the fundamental issue of whether and how the structures and processes of collaboration are leading to something worthwhile – moving the needle on the issue that brought everyone together. Are collaboration and engagement managed in ways that advance the work or only in ways that advance the quality of collaboration?

If evaluation is going to play a role in helping place-based initiatives advance their collaboration processes and get to meaningful change, it needs to go beyond tools and become a real-time partner in uncovering motivations, power dynamics, and cultural assumptions; it needs to help pick apart how networks are functioning and where engagement might be most effective; and it should shed light on how, and to what extent, nontraditional partners are influencing the decisions being made and contributing to shifts in the overall strategy and direction of the work.

These are the types of issues we’ll be exploring in the collaboration and cross-sector partnerships session at the convening. Don’t worry, you’ll leave with a list of evaluation tools that can be helpful if you want to focus on evaluating the effectiveness of your collaborative processes. But you’ll also leave with insights about how to engage evaluation in helping you tackle the fundamental issues standing between good collaboration and having an impact on the issues that matter.

Interested in learning more about the conference or attending? Visit the conference website: http://www.jacobscenter.org/placebased/

Want to hear from more facilitators?  Check out the blog from Meg Long of Equal Measure about connecting community change to systems change and Sonia Taddy-Sandino of Harder+Company about “getting ready” for place-based work. Interested in accessing new resources before the conference?  Check out our toolkits on engaging nontraditional voices and decision-making in complex, multi-stakeholder settings.


Using Learning To Do Good, Even Better

One of the best parts of my job is helping organizations use learning to do good, even better. Recently, we worked with Project Health Colorado, a strategy funded by The Colorado Trust with support from The Colorado Health Foundation, focused on building public will to achieve access to health for all Coloradans by fostering a statewide discussion about health care and how it can be improved. The strategy included fourteen grantees and a communications campaign working independently and together to build public will. It also combined an impact evaluation with coaching on engaging in real-time, data-driven strategic learning to help grantees and The Trust test and adapt their strategies to improve outcomes.

Lessons learned about strategic learning:

So, how can organizations use real-time learning to tackle a complex strategy in a complex environment – building will around a highly politicized issue? Our strategic learning model built the capacity of The Trust and grantees to engage in systematic data collection, along with collective interpretation and use of information to improve strategies. As a result, grantees shifted strategies in real time, increasing their ability to influence audience awareness of access to health issues and willingness to take action.

As a result of the learning, The Trust made major changes to the overarching strategy, including shifting from asking grantees to use a prepackaged message to asking them to convey the “intent” of the message, with training on how to adapt it. This was particularly important for grantees working with predominantly minority communities, who reported that the original message did not resonate in their communities.

The real-time learning was effective because it allowed grantees and the Trust to practice interpreting and using the results of systematic data collection, applying what they learned to improve their strategies. The evaluation also supported adaptation over accountability to pre-defined plans, creating a culture of adaptation and helping participants strategize how to be effective at building will.

Lessons learned about evaluation:

The evaluation focused learning at the portfolio level, looking at the collective impact on public will across all grantee strategies. As the evaluators charged with figuring out the impact of a strategy in which everyone was encouraged to constantly adapt and improve, we learned that two things allowed the evaluation to stay relevant even as the strategy shifted: using multiple in-depth data collection methods, tailored to the ways different audiences engaged in the strategy, and explicitly planning for how to capture emergent outcomes.


This post originally appeared September 14, 2015 on AEA365, the American Evaluation Association blog. The American Evaluation Association is an international professional association of evaluators devoted to the application and exploration of program evaluation, personnel evaluation, technology, and many other forms of evaluation. The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. 


How to Build a Health Care Movement

What happens when 14 community organizations, two foundations and several communications experts come together to change how the public thinks about access to health care? You build a movement.

Project Health Colorado (PHC) was a groundbreaking three-year Colorado Trust initiative to build public will for access to health. PHC involved 14 community organizations that used multiple innovative strategies, along with a paid media and mobilization campaign, to engage the public around access to health. Grantee strategies included forums, storytelling, training and mobilizing.

PHC also included a paid media campaign that targeted key groups throughout the state. In addition to traditional and social media strategies, the campaign deployed street teams at fairs and festivals. The street teams helped spread the message of the importance of access to health for all, engaging the public with an interactive website where they could ask questions, get answers and get involved.

What happened as a result of the forums, storytelling, training and mobilizing? Over 25,000 Coloradans were reached through in-person conversations and more than half a million people were reached through electronic and digital communications. People reached by grantees went on to talk to others, creating a ripple effect, carrying the message of PHC that people should be able to get the health care they need, when they need it. Volunteers from all walks of life became ambassadors for the message, particularly community members with no professional reason to be involved.

Want to learn more? The final evaluation report for PHC explores the impact of the many intersecting strategies, walking through key findings and their implications through a mix of infographics and narratives. We’ve also created a separate evaluation report intended for foundations that are undertaking complex grant strategies like PHC.

Let’s learn together about what happens when organizations come together around an innovative idea, and work to make a meaningful difference building public will for access to health.

This post originally appeared on The Colorado Trust blog April 2, 2015. Reposted with permission.


Evaluating Complexity: Developmental Evaluation in Collective Impact

With a half dozen Collective Impact evaluations in the last year alone, it’s becoming second nature for me to think about the complexity inherent in evaluating Collective Impact. The model’s emphasis on a shared measurement system has been both a benefit to evaluation and a hindrance. Sometimes I find that recognizing the need for shared measurement has helped my partners value data in ways they otherwise might not have. Other times, the emphasis on shared measurement has resulted in a perception that shared measurement is all you need. The problem is, shared measurement tells you about your outcomes but doesn’t help you understand what is and isn’t working.

It was exciting to see the new FSG publication, the Guide to Evaluating Collective Impact, because it addresses this same issue and provides guidance to Collective Impact initiatives throughout the country on where evaluation fits into their work. I particularly appreciate that the guide highlights how evaluation looks different depending on the stage of the Collective Impact work, from early years to middle years to later years.

I find evaluation in the early years most exciting. I love the developmental evaluation approach, and the early-years case study in the FSG guide is one of Spark’s projects – an infant mortality initiative. The initiative, which is supported by the Missouri Foundation for Health (MFH), is just entering its second year and is working on foundational relationship and structure issues.

Our role with the initiative was to build everyone’s capacity to use developmental evaluation to inform the work. Developmental evaluation, by the way, is an approach to evaluation that explicitly recognizes that sometimes we need learning and feedback in the context of a messy, innovative setting where the road ahead is unclear.

Thanks to the vision the Missouri Foundation for Health had for the infant mortality initiative, we had the opportunity both to coach all the partners involved on developmental evaluation and to implement it with the two sites and the foundation. What a great experience!

With one of the sites, the collective impact initiative in the St. Louis region, an area of focus was answering the question: “What is a process and structure for engaging stakeholders – how can we stage the engagement and how can we motivate participation?” The facilitated conversations on stakeholder engagement and interviews with key stakeholders led to a couple of short briefs highlighting how people were responding to the messages and processes being used by the backbone organization. The backbone staff recently shared with the foundation that the developmental evaluation findings helped them adapt in real time as they prepared for their first Leadership Council meeting, and that the findings continue to be fundamental information they regularly refer to as they plan their next steps. That might be the best part about developmental evaluation – you never generate reports that sit on a shelf, because the information you collect and share is useful, timely and often critically important for success!

So, what’s my takeaway from all this time spent on Collective Impact evaluation? I really encourage you to consider how shared measurement systems can benefit from adding a more comprehensive evaluation approach. But I also hope you recognize that evaluation for Collective Impact isn’t the same as evaluation for programs. Unlike most program evaluations, Collective Impact evaluation must:

  • Be as flexible and adaptable as the initiatives themselves;
  • Focus on continuous learning  and helping to improve the outcomes of the Collective Impact initiative; and
  • Take into account the stage of the initiative – the early years, middle years, or later years.

Want to know more?  Join the FSG webinar on June 11th to learn more about evaluating collective impact.