The Core Elements of Advancing an Emergent Strategy Toward Systems Change, Part 2

This is the second part of a two-part blog that looks at the core elements of emergent strategies and how focusing on these elements allows us to manage and even benefit from the ambiguity and conflict that naturally emerge when solving complex problems.

Our last blog looked at the four core elements of advancing an emergent strategy toward systems change:

Core elements of an emergent strategy: structuring, learning, experimentation, and untangling

Now, let’s explore the consequences of not balancing the four elements. Let’s imagine some scenarios where only a couple of these elements are moving forward and see what that does to a group’s ability to drive change:

  • Attention to Structure and Learning: A group that focuses on how it will work together and reflects on its group process (with a facilitator or developmental evaluator) is, paradoxically, likely to fail to develop a good structure. Absent something concrete to work on, coming up with a structure can be difficult and can breed conflict that is not productive.
  • Attention to Experimentation and Learning: This sounds good! Rapid deployment of experiments, learning, and adaptation in response to feedback is critical in any systems change effort! Yet without untangling the problem, the experiments may come to feel less and less rewarding over time, as they are not driving toward systemic, significant change. And lacking a functional structure, it may be very difficult to switch from experimentation to scaling and institutionalizing change.
  • Attention to Untangling and Structure: Talk about a buzz kill! Attention to two things that take a great deal of time and energy, feel “processey,” and are rarely inspiring can keep a group from ever getting to action.
Setting a clear vision and goals

You may be wondering at this point where vision and goal setting fit among these core elements. I am going to make a bold suggestion: setting clear goals should not be a priority when engaging in emergent strategies to drive systems change.

Emergent strategy needs space to emerge. Sometimes in the process of structuring, a clear vision or goal naturally emerges. Often in the process of untangling, a set of defined changes emerges. Experimentation can surface mechanisms to drive change. A learning process can gradually surface the underlying theories of change. Allowing this type of direction setting to emerge naturally over time frees groups to try things in new ways. Forcing clearly defined goals too early can produce dynamics much like the challenges explored above, generating conflict in the attempt to eliminate ambiguity.

Now, the reality is that groups engaged in emergent strategy always operate with a theory of why their actions matter, however loosely thought through (and often unarticulated), and there may be value in taking time to surface the operating theories tied to various actions. But trying to define THE theory of how these actions will drive systems change is often counterproductive during an emergent strategy: it takes away the emergent nature of the work and leaves the group back where it started, implementing the strategies it can think through at this time, based on current knowledge and experience. Innovative, transformative work requires giving ourselves more time to emerge into a new level and type of understanding before we define how change happens.

Finding Balance

Now let’s imagine a group that is in balance and allowing emergent strategy to unfold naturally:

Early in their process they agree to work collaboratively and allow any two partners to initiate an experiment together, without group consensus being needed (Structure, Experiment). They retain a developmental evaluator to help them learn from the experiments and untangle the larger problem and its systemic drivers (Learn, Untangle).

As they learn more about the problem, their experiments begin to align with specific drivers and become increasingly innovative. They also begin to see, as a group, some potential areas of focus where they feel positioned to make a significant difference. However, there is still some push/pull tension, and even conflict, about the focus. So before they try to resolve this tension, they decide to develop a more formal structure, putting in place a consensus decision-making process that requires sign-off from the partner organizations, not just the individuals in the room (Structure). They also dig in deep on two specific drivers to understand how they can act on them (Untangle). One of their early experiments is proving to have significant impact (Learn), so they decide to experiment next by expanding its scope and reach (Experiment).

At a pre-planned reflection moment, they look at their work and realize they are no longer implementing an emergent strategy. Rather, their goals are becoming increasingly clear and agreed upon, and they now have strategies they can deploy. By giving themselves permission to operate amid ambiguity and work through conflict, they have arrived at a place where they are ready to focus and tackle complex, systemic work in a way they have never done before!

This type of progress through an emergent strategy is not easy work, and it does not (and should not) eliminate ambiguity and conflict. Rather, when groups give themselves permission to remain emergent and balance their focus on structuring, learning, experimentation, and untangling, ambiguity and conflict turn from barriers to emergent strategy into productive elements of the strategy.

The Case for Developmental Evaluation

This blog is co-authored by Marci Parkhurst and Hallie Preskill from FSG, Dr. Jewlya Lynn from Spark Policy Institute, and Marah Moore from i2i Institute. It is also posted on FSG’s website: www.fsg.org 

In a recent blog post discussing the importance of good evidence in supporting systems change work, evaluation expert Lisbeth Schorr wrote, “To get better results in this complex world, we must be willing to shake the intuition that certainty should be our highest priority…” Rather, she argues, “it is time for all of us to think more expansively about evidence as we strive to understand the world of today and to improve the world of tomorrow.” [Emphasis added]

At the annual American Evaluation Association Conference (AEA) in November, practitioners, funders, and academics from around the world gave presentations and facilitated discussions around a type of evaluation that is specifically designed to meet this need for a more expanded view of evidence. It’s called developmental evaluation, and, as noted by other commentators, it took this year’s AEA conference by storm.

What is developmental evaluation?

Developmental evaluation (DE) “is grounded in systems thinking and supports innovation by collecting and analyzing real-time data in ways that lead to informed and ongoing decision making as part of the design, development, and implementation process.” As such, DE is particularly well-suited for innovations in which the path to success is not clear. By focusing on understanding what’s happening as a new approach is implemented, DE can help answer questions such as:

  • What is emerging as the innovation takes shape?
  • What do initial results reveal about expected progress?
  • What variations in effects are we seeing?
  • How have different values, perspectives, and relationships influenced the innovation and its outcomes?
  • How is the larger system or environment responding to the innovation?

DE can provide stakeholders with a deep understanding of context and real-time insights about how a new initiative, program, or innovation should be adapted in response to changing circumstances and what is being learned along the way.

A well-executed DE will effectively balance accountability with learning; rigor with flexibility and timely information; reflection and dialogue with decision-making and action; and the need for a fixed budget with the need for responsiveness and flexibility. DE also strives to balance expectations about who is expected to adapt and change based on the information provided (i.e., funders and/or grantees).

The case for developmental evaluation

Developmental evaluation has the potential to serve as an indispensable strategic learning tool for the growing number of funders and practitioners who are focusing their efforts on facilitating systems change. But DE is different from other approaches to evaluation. Articulating what exactly DE looks like in practice, what results it can produce, and how those results can add value to a given initiative, program, or innovation is a critical challenge, even for leaders who embrace DE in concept.

To help meet the need for a clear and compelling description of how DE differs from formative and summative evaluation and what value it can add to an organization or innovation, we hosted a think tank session at AEA in which we invited attendees to share their thoughts on these questions. We identified four overarching value propositions of DE, each supported by quotes from participants:

1) DE focuses on understanding an innovation in context, and explores how both the innovation and its context evolve and interact over time.

  • “DE allows evaluators AND program implementers to adapt to changing contexts and respond to real events that can and should impact the direction of the work.”
  • “DE provides a systematic way to scan and understand the critical systems and contextual elements that influence an innovation’s road to outcomes.”
  • “DE allows for fluidity and flexibility in decision-making as the issue being addressed continues to evolve.”

2) DE is specifically designed to improve innovation. By engaging early and deeply in an exploration of what a new innovation is and how it responds to its context, DE enables stakeholders to document and learn from their experiments.

  • “DE is perfect for those times when you have the resources, knowledge, and commitment to dedicate to an innovation, but the unknowns are many and having the significant impact you want will require learning along the way.”
  • “DE is a tool that facilitates ‘failing smart’ and adapting to emergent conditions.”

3) DE supports timely decision-making in a way that monitoring and later-stage evaluation cannot. By providing real-time feedback to initiative participants, managers, and funders, DE supports rapid strategic adjustments and quick course corrections that are critical to success under conditions of complexity.

  • “DE allows for faster decision-making with ongoing information.”
  • “DE provides real-time insights that can save an innovation from wasting valuable funds on theories or assumptions that are incorrect.”
  • “DE promotes rapid, adaptive learning at a deep level so that an innovation has greatest potential to achieve social impact.”

4) Well-executed DE uses an inclusive, participatory approach that helps build relationships and increase learning capacity while boosting performance.

  • “DE encourages frequent stakeholder engagement in accessing data and using it to inform decision-making, therefore maximizing both individual and organizational learning and capacity-building. This leads to better outcomes.”
  • “DE increases trust between stakeholders or participants and evaluators by making the evaluator a ‘critical friend’ to the work.”
  • “DE can help concretely inform a specific innovation, as well as help to transform an organization’s orientation toward continuous learning.”

Additionally, one participant offered a succinct summary of how DE is different from other types of evaluation: “DE helps you keep your focus on driving meaningful change and figuring out what’s needed to make that happen—not on deploying a predefined strategy or measuring a set of predefined outcomes.”

We hope that these messages and talking points will prove helpful to funders and practitioners seeking to better understand why DE is such an innovative and powerful approach to evaluation.

Have other ideas about DE’s value? Please share them in the comments.

Summertime, and the Thinking is Slow

I had the good fortune in June to find myself in the Virgin Islands facilitating a strategic roadmap session focused on addressing food systems issues, followed by a few days on the beaches with my family. The wonderful thing about a beach vacation, other than watching the absolute joy on your child’s face as they splash in the waves, is the space it creates for thought: unrushed, deadline-free, wide-open thinking. The combination of vacation and an inspiring conversation about the Virgin Islands food systems left me with a lot of room for deep thinking.

Have you heard of Daniel Kahneman’s book “Thinking, Fast and Slow”? It explores how our brains have two modes of thinking – instinctive, automatic thinking (fast) and deliberate thinking, where you formulate arguments, solve problems, create plans, etc. (slow). Basically, slow thinking is where you exert mental energy. And because we are always operating at high speed these days, it can be easy to get caught up in fast thinking and avoid putting energy into a more purposeful thinking process.

That is not always a bad thing. Because we all have such rich experiences to draw from, we can intuitively read many situations quite well and act with confidence even when we haven’t had time to stop and assess more carefully. However, being away from the rush of getting things done created room for me to recommit to slow thinking – not just for major decisions or turning points in our work, but along the way, to prepare for the many opportunities to catalyze meaningful change.

When we think too quickly, we make up patterns, seeing stories in what is otherwise random information. With slow thinking, we find underlying causes and investigate to find meaningful solutions. Have you ever watched a young child try to understand how something works? They use slow thinking, only without the benefit of all the technology and relationships we can use to track down new information. Instead, they puzzle over something new, pull it apart (and yes, occasionally break it in the process), sometimes manage to put it back together, and offer the most entertaining observations along the way, like this interpretation of how to grow a pumpkin: “first dig a hole in dirt, cover the seed, then you have to water it, and wait for Halloween to come!”

I want to bring that sense of openness, wonder, and thoughtful investigation back into how we do our work every day, not just in approaching major decisions. This might be why I’m such a fan of developmental evaluation: it gives me an opportunity to wear the slow-thinking hat when I’m working with innovative groups who are tackling important challenges.

So, here’s my summer 2015 resolution: I will take the slow, deliberative thinking that is core to developmental evaluation and integrate it more fully across many different types of change strategies. More importantly, I will help others create that same space for thinking, building our collective capacity to catalyze change based on more than just intuition – based on the best we can devise about how to improve the world. I hope you’ll all join me in committing to slow thinking this summer, going deeper – and hopefully further – in catalyzing change.

Evaluating Complexity: Developmental Evaluation in Collective Impact

With a half dozen Collective Impact evaluations in the last year alone, it’s becoming second nature for me to think about the complexity inherent in evaluating Collective Impact. The model’s emphasis on a shared measurement system has been both a benefit to evaluation and a hindrance. Sometimes I find that the recognized need for shared measurement has helped my partners value data at a level that might not otherwise have been true. Other times, the emphasis on shared measurement has resulted in a perception that shared measurement is all we need. The problem is that shared measurement tells you about your outcomes but doesn’t help you understand what is and isn’t working.

It was exciting to see the new FSG publication, the Guide to Evaluating Collective Impact, because it addresses this same issue and provides guidance to Collective Impact initiatives throughout the country on where evaluation fits into their work. I particularly appreciate that it highlights how evaluation looks different depending on the stage of the Collective Impact work, from the early years to the middle years to the later years.

I find evaluation in the early years most exciting. I love the developmental evaluation approach, and the early-years case study in the FSG guide is one of Spark’s projects: an infant mortality initiative. The initiative, which is supported by the Missouri Foundation for Health (MFH), is just entering its second year and is working on foundational relationship and structure issues.

Our role with the initiative was to build everyone’s capacity to use developmental evaluation to inform the work. Developmental evaluation, by the way, is an approach to evaluation that explicitly recognizes that sometimes we need learning and feedback in the context of a messy, innovative setting where the road ahead is unclear.

Thanks to the vision MFH had for the infant mortality initiative, we had the opportunity both to coach all the partners involved on developmental evaluation and to implement it with the two sites and the foundation. What a great experience!

With one of the sites, the collective impact initiative in the St. Louis region, an area of focus was answering the question: “What is a process and structure for engaging stakeholders – how can we stage the engagement, and how can we motivate participation?” The facilitated conversations on stakeholder engagement and interviews with key stakeholders led to a couple of short briefs highlighting how people were responding to the messages and processes being used by the backbone organization. The backbone staff recently shared with the foundation that the developmental evaluation findings helped them adapt in real time as they prepared for their first Leadership Council meeting, and the findings continue to be fundamental information that they regularly refer to as they plan their next steps. That might be the best part about developmental evaluation: you never generate reports that sit on a shelf, because the information you collect and share is useful, timely, and often critically important for success!

So, what’s my takeaway from all this time spent on Collective Impact evaluation? I really encourage you to consider how shared measurement systems can benefit from adding a more comprehensive evaluation approach. But I also hope you recognize that evaluation for Collective Impact isn’t the same as evaluation for programs. Unlike most program evaluations, Collective Impact evaluation must:

  • Be as flexible and adaptable as the initiatives themselves;
  • Focus on continuous learning and helping to improve the outcomes of the Collective Impact initiative; and
  • Take into account the stage of the initiative – the early years, middle years, or later years.

Want to know more? Join the FSG webinar on June 11th to learn more about evaluating Collective Impact.