The Core Elements of Advancing an Emergent Strategy Toward Systems Change, Part 2

Core elements of emergent strategy

This is the second part of a two-part blog that looks at the core elements of emergent strategies and how focusing on these elements allows us to manage and even benefit from the ambiguity and conflict that naturally emerge when solving complex problems.

Our last blog looked at the four core elements of advancing an emergent strategy toward systems change:

Core elements of an emergent strategy

Now, let’s explore the consequences of not balancing the four elements. Let’s imagine some scenarios where only a couple of these things are moving forward and explore what that does to a group’s ability to drive change:

  • Attention to Structure and Learning: A group that focuses on how it will work together and engages in reflection (with a facilitator or developmental evaluator) on its group process is still likely to fail to develop a good structure. Absent something concrete to work on, coming up with a structure can be difficult and can breed conflict that is not productive.
  • Attention to Experimentation and Learning: This sounds good! Rapid deployment of experiments, learning, and adaptation in response to feedback is critical in any systems change effort! Yet, without untangling the problem, over time the experiments may come to feel less and less rewarding as they aren’t driving toward systemic and significant change. Lacking a functional structure, it may be very difficult to switch from experimentation to scaling and institutionalizing change.
  • Attention to Untangling and Structure: Talk about a buzz kill! Attention to two things that take a great deal of time and energy, feel “processey,” and are rarely inspiring can keep a group from ever getting to action.
Setting a clear vision and goals

You may be wondering at this point where vision and goal setting fit into this description of these core elements. I am going to make a bold suggestion: setting clear goals should not be a priority when engaging in emergent strategies to drive systems change.

Emergent strategy needs space to emerge. Sometimes in the process of structuring, a clear vision or goal naturally emerges. Often in the process of untangling, a set of defined changes emerge. Experimentation can surface mechanisms to drive change. A learning process can gradually surface the underlying theories of change. Allowing this type of direction setting to emerge naturally over time frees groups to try things in new ways. Forcing clearly defined goals too early can create similar dynamics to the challenges explored above, creating conflict in the attempt to eliminate ambiguity.

Now, the reality is that groups engaged in emergent strategy will always operate with a theory of why the actions they take matter, however loosely thought through (and often not articulated), and there may be value in taking time to surface the operating theories tied to various actions. But, trying to define THE theory of how these actions will drive systems change is often counterproductive during emergent strategies, as this takes away the emergent nature of the strategy and leaves the group back where they started: implementing the strategies they can think through at this time, based on their current knowledge and experience. Innovative, transformative work requires giving ourselves more time to emerge into a new level and type of understanding before we define how change happens.

Finding Balance

Now let’s imagine a group that is in balance and allowing emergent strategy to unfold naturally:

Early in their process they agree to work collaboratively and allow any two partners to initiate an experiment together, without group consensus being needed (Structure, Experiment). They retain a developmental evaluator to help them learn from the experiments and untangle the larger problem and its systemic drivers (Learn, Untangle).

As they learn more about the problem, their experiments begin to align with specific drivers and become increasingly innovative. They also begin to see, as a group, some potential areas of focus where they feel positioned to make a significant difference. However, there is still some push/pull tension and even conflict about the focus. So before they try to resolve this tension, they decide to develop a more formal structure, setting in place a consensus decision-making process that requires organizational sign-off, not just the individuals in the room (Structure). They also dig in deep on two specific drivers to understand how they can act on them (Untangle). One of their early experiments is proving to have significant impact (Learn), so they make the decision to experiment next by expanding its scope and reach (Experiment).

At a pre-planned reflection moment, they look at their work and realize they are no longer implementing emergent strategy. Rather, the goals are becoming increasingly clear and agreed upon, and are strategies they can deploy. By giving themselves permission to operate amid ambiguity and work through conflict, they have arrived at a place where they are ready to focus and tackle complex, systemic work in a way they have never done before!

This type of progress through an emergent strategy is not easy work and it does not (and should not) eliminate ambiguity and conflict. It can turn them from barriers to emergent strategy into productive elements of strategy when groups give themselves permission to remain emergent and balance their focus on structuring, learning, experimentation, and untangling.

The Core Elements of Advancing an Emergent Strategy toward Systems Change

Core elements of emergent strategy

This is the first of a two-part blog that looks at the core elements of emergent strategies and how focusing on these elements allows us to manage and even benefit from the ambiguity and conflict that naturally emerge when solving complex problems.

Fifteen years into designing, facilitating, and evaluating emergent strategies intended to effect systems change, I have come to the conclusion that two things are inevitable:

  • Emergent strategy is messy
  • Emergent strategy generates conflict

Neither is a bad thing. When we decide to solve a complex problem together and manage to resolve the mess quickly, figuring out where we’re going and how, chances are we aren’t headed in a transformative direction. And if we manage to do the work with little to no conflict, chances are we aren’t pushing boundaries, taking risks, and scaling the work in ways that will have significant impact.

Yet, even if messiness and conflict are inevitable and are even good signs that the work is doing what it’s supposed to do, that doesn’t mean we want to get stuck in either one. Many systems change initiatives do exactly that – allow interpersonal and inter-organizational conflicts to trump the ability to drive change. And as that conflict is building, the accompanying inability to find a specific goal, project, set of outcomes or other defining “shape” to the initiative leaves participants frustrated and less and less bought in.

How can we turn the ambiguity and conflict into positive drivers of change? One way is to be careful about where we focus our energy. Based on the many initiatives I’ve worked with, I’ve come to believe groups need to attend to four things in a balanced way over time in order to progress through messiness and conflict productively:
Core elements of an emergent strategy

The Four Elements, Unpacked

Untangle: That vision statement, strategy document, or problem definition that brought the group together is rarely enough to really understand the opportunity. Even early in a group process, most initiatives can benefit from untangling the problem and opportunity more fully. They can leverage the insights from participants at the table, but also can benefit from external knowledge being brought in. Techniques like systems mapping, scenario mapping, review of similar initiatives, influence mapping and more can help untangle a problem. If the goal is to drive systems change, taking the time to understand the variety of types of leverage points that can be moved within the system can be invaluable.

Experiment: Even before the untangling has begun, and certainly while it’s underway, groups can begin experimenting. Experimentation can be as simple as finding small things to do together that are different from what has been done before. They should be low stakes, quick to implement, quick to learn from, and relevant to the larger vision or direction of the group. They do not need to be planned with specific outcomes in mind, particularly early in the process, as sometimes the attention to defining outcomes can hang up newly forming groups who are doing emergent strategy. As the work progresses, however, experiments tend to become more formally defined and tied to intentional outcomes.

Learn: What’s the point of untangling and experimentation if the group isn’t engaging in learning? Even if outcomes are not clearly defined for experiments, you can learn from them – what impact did they have on the participating organizations? What changes resulted from the experiment? What did it take to implement it? What did we learn about what is possible and what excites us as a group? What does it tell us about the direction we might want to go (or not go)? As a group begins to implement experiments with more clearly defined outcomes, the learning can shift to understanding how specific leverage points are having an impact on the problem, which leverage points are generating the greatest impact for the least effort, and whether leverage points are being pushed in the right direction.

Structure: Many groups start here – spending a great deal of time planning their structure. In emergent strategy, loose coupling can be more powerful at times than a heavy-handed structure. In fact, taking time to decide how you’re going to make decisions for the long haul can be counterproductive if the decisions to be made in the next year are relatively low stakes, decision-makers will need to change as a direction begins to emerge later, and future decisions will require more formalized processes to be accepted by the organizations affected. Allowing a looser process for decision-making earlier can free the group up to be experimental and have fun. Yes, I said “have fun.” This is hard work and getting people excited and maintaining excitement and momentum is critical. The commitment to setting up the best possible structure tends to kill that excitement pretty quickly. Yet, absent any attention to structure, it becomes evident that even the fun decisions are hard to make!

Most groups are familiar with these elements, but often get stuck focusing on only some of them, which at best can lead to poor structures or poorly thought-out experiments, and at worst can lead to spinning wheels and burnout. We’ll dive deeper into this in our next blog. In the meantime, share your thoughts. Is there anything else you find to be essential in an emergent strategy?

Redefining Rigor: Describing quality evaluation in complex, adaptive settings

This blog is co-authored by Dr. Jewlya Lynn, Spark Policy Institute, and Hallie Preskill, FSG. The blog is also posted on FSG’s website: www.fsg.org 

Traditionally, evaluation has focused on understanding whether a program is making progress against pre-determined indicators. In this context, the quality of the evaluation is often measured in part by the “rigor” of the methods and scientific inquiry. Experimental and quasi-experimental methods are highly valued and seen as the most rigorous designs, even when they may hamper the ability of the program to adapt and be responsive to its environment.

Evaluations of complex systems-change strategies or adaptive, innovative programs cannot use this same yardstick to measure quality. An experimental design is hard to apply when a strategy’s success is not fully defined upfront and depends on being responsive to the environment. As recognition of the need for these programs grows, and with it the number of complex programs, so does the need for a new yardstick. In recognition of this need, we proposed a new definition of rigor at the 2015 American Evaluation Association annual conference, one that broadens the ways we think of quality in evaluation to encompass things that are critical when the target of the evaluation is complex, adaptive, and emergent.

We propose that rigor be redefined to include a balance between four criteria:

  • Quality of the Thinking: The extent to which the evaluation’s design and implementation engages in deep analysis that focuses on patterns, themes and values (drawing on systems thinking); seeks alternative explanations and interpretations; is grounded in the research literature; and looks for outliers that offer different perspectives.
  • Credibility and Legitimacy of the Claims: The extent to which the data is trustworthy, including the confidence in the findings; the transferability of findings to other contexts; the consistency and repeatability of the findings; and the extent to which the findings are shaped by respondents, rather than evaluator bias, motivation, or interests.
  • Cultural Responsiveness and Context: The extent to which the evaluation questions, methods, and analysis respect and reflect the stakeholders’ values and context, their definitions of success, their experiences and perceptions, and their insights about what is happening.
  • Quality and Value of the Learning Process: The extent to which the learning process engages the people who most need the information, in a way that allows for reflection, dialogue, testing assumptions, and asking new questions, directly contributing to making decisions that help improve the process and outcomes.

The concept of balancing the four criteria is at the heart of this redefinition of rigor. Regardless of its other positive attributes, an evaluation of a complex, adaptive program that fails to take into account systems thinking will not be responsive to the needs of that program. Similarly, an evaluation that fails to provide timely information for making decisions lacks rigor even if the quality of the thinking and the legitimacy of the claims are high.

The implications of this redefinition are many.

  • From an evaluator’s point of view, it provides a new checklist of considerations when designing and implementing an evaluation. It suggests that specific, upfront work will be needed to understand the cultural context, the potential users of the evaluation and the decisions they need to make, and the level of complexity in the environment and the program itself. At the same time, it maintains the focus the traditional definition of rigor has always had on leveraging learnings from previous research and seeking consistent and repeatable findings. Ultimately, it asks the evaluator to balance the desire for the highest-quality methods and design with the need for the evaluation to have value for the end user, and for it to be contextually appropriate.
  • From an evaluation purchaser’s point of view, it provides criteria for considering the value of potential evaluators, evaluation plans, and reports. It can be a way of articulating up-front expectations or comparing the quality of different approaches to an evaluation.
  • From a programmatic point of view, it provides a yardstick by which evaluators can not only be measured, but by which the usefulness and value of their evaluation results can be assessed. It can help program leaders and staff have confidence in the evaluation findings or have a way of talking about what they are concerned about as they look at results.

Across evaluators, evaluation purchasers, and users of evaluation, this redefinition of rigor provides a new way of articulating expectations for evaluation and elevating the quality and value of evaluations. It is our hope that this balanced approach helps evaluators, evaluation purchasers, and evaluation users share ownership over the concept of rigor and find the right balance of the criteria for their evaluations.