Exploring the Room of the Non-Obvious

Jesper Christiansen
Jesper Christiansen is a public innovation thinker and practitioner. His work focuses on how governments can deal effectively with public problems in pursuit of the common good – in particular through better ecosystems for research and development. Prior to joining Nesta, Jesper worked for seven years at MindLab, the Danish cross-governmental innovation unit. He founded and directed its research program, which captured, analyzed and communicated the learnings from across MindLab’s project portfolio. He also managed MindLab’s international collaboration, and has worked with and advised several governments, public agencies and international institutions. Before this he was a programme and project manager for MindLab’s work with the Danish Ministry of Employment, focused on employment policy for marginalized and vulnerable citizens – also the focus of his 2012 secondment to the design agency ThinkPlace in Canberra. Jesper holds a Ph.D. in Anthropology with a focus on embedding human-centred innovation practices in public sector organisations. He also holds a Master’s degree in Anthropology from Aarhus University and a degree in journalism from the Danish School of Journalism. @JesperC_
Nesta
Nesta is an innovation foundation that backs new ideas to tackle the big challenges of our time. They seek out, spark, and shape powerful new ideas, joining with others to shift how the world works for everyone. Nesta operates across the globe and across sectors (including education, healthcare, the arts, technology, and economic policy), working with partners to turn good ideas into reality. @Nesta_UK

Governments have an ingrained rationale with a long history. Early versions of bureaucracy focused on standardized, replicable models of government. In the 1980s and 1990s came a shift: governments borrowed efficiency thinking from the private sector, creating a more transactional model in which tasks are outsourced to private businesses. The result was strict accountability – the necessity to always be able to articulate the direct link between input and output. A public official will be asked to explain every step of a process: “What will this lead to? What can I expect in two weeks?” This is risk-averse, and it fundamentally undermines the process of making sense of complexity and exploring new hypotheses.

In recent years, a growing number of governments have taken an interest in experimental approaches. However, governments face challenges in connecting experimental approaches with strategic policy development and with implementation for complex issues and better public outcomes. In this essay, we make the case for government experimentation and consider strategies for pursuing it.

Experimentation as a way of accelerating learning and exploring “the room of the non-obvious”

Experimentation accelerates learning and the exploration of new solutions by systematically testing assumptions and identifying knowledge gaps. What is there to know about the problem and the function, fit, and probability of a suggested solution?

Experimentation expands the array of policy options available as solutions by creating a political environment that tests non-linear approaches to wicked problems. We distinguish between “the room of the obvious” and “the room of the non-obvious”. By building portfolios that deliberately include the testing of at least some non-linear, non-obvious solutions, government officials can move beyond the automatic mode of many policy interventions and explore the “room of the non-obvious” in a safe context (think barbers to prevent suicides or dental insurance to prevent deforestation).

Experimentation as a way of turning uncertainty into risk

“Uncertainty” and “risk” are not interchangeable concepts. In the implementation of a solution, risk is the probability of a certain outcome. It is measurable: based on existing data, there is an X percent chance of success or an X percent chance of failure. Qualitative risk factors can be developed and described as well. Uncertainty implies a lack of probabilities entirely – perhaps a lack of prior data on how the solution might perform. It means unknown outcomes with an unknown likelihood of success.

There is talk that governments need to become “risk takers”, or to become better at “managing risk.” But as Marco Steinberg, founder of the strategic design practice Snowcone & Haystack, suggests, risk management – with known probabilities – is actually something governments do well. Issues arise when governments face the uncertainty of complex challenges that demand entirely new service systems to fit the needs of our time.

Plenty of policies developed and implemented on untested assumptions have failed. Running experiments at an early stage is a systematic way of turning uncertainty into a set of probabilities that, in turn, can be managed within a more defined scope. In this sense, policy initiatives can be opportunities to close the gap between uncertainty and risk.

Experimentation as a way to reframe failure and key performance indicators (KPIs)

Admitting failure can too often mean becoming a scapegoat. To accelerate learning and deal with uncertainty, governments must allow for learning from failure. According to Harvard Business School’s Amy Edmondson, there is a distinction between bad and good failures. Bad failures are preventable ones in predictable operations. Good failures, on the other hand, are unavoidable: they occur in complex systems, in uncharted territory, and when dealing with high levels of uncertainty.

As MIT Center for Digital Business’ Michael Schrage says, “Innovation amateurs talk good ideas; innovation experts talk testable hypotheses.” Even good ideas often fail: ideas are never fully formed, and they need testing, refining and development to work in and adapt to dynamic systems. Reframing ideas as testable hypotheses – a more humble approach to change-making – highlights this need. Testing and refining determine whether an idea will address the root causes of a problem.

Experimentation deliberately and systematically produces good failures and allows us to learn from them to avoid policies getting stuck on the wrong track. This means reframing KPIs from an almost exclusive focus on top-down defined goals to a more deliberate embracing of bottom-up feedback mechanisms in order to accelerate the learning of those closer to the frontline.

Experimentation on a continuum between exploration and validation

Too often people equate experimentation in government with randomized controlled trials (RCTs). While RCTs are certainly important in qualifying and validating a hypothesis and turning it into an implementable initiative, they are less useful if – as is often the case – the problem itself and its opportunity spaces are poorly understood or underestimated. In response, we developed a “Continuum of Experimentation”, which builds on and synthesizes leading experimental initiatives in the field.

There is not a strict division between categories in the continuum, but rather a dynamic, fluid overlapping. The continuum combines the analytical and imaginative. We recognize the necessity for approaches from different disciplines—social and natural sciences, arts, data analytics and design—to enable and systematically apply different experimental approaches in accordance with what is known about a problem and its possible solutions.

‘Continuum of experimentation’ (inspired by multiple resources, including the Danish Design Centre’s ‘Designing policy experimentation’ and Donald Schön’s ‘The Reflective Practitioner’).

The continuum identifies three key categories of experimentation:

  1. Generating hypotheses. Shaping direction by generating multiple hypotheses for change.
  2. Establishing a hypothesis. Developing and establishing particular hypotheses to test their potential value-creation.
  3. Validating a hypothesis. Validating the fit and function of a particular hypothesis to be turned into interventions.

At the imaginative end of the continuum, probabilities and solutions are unknown. These experiments identify new frames to generate thinking and action. The exploration of options and what-ifs drives hypothesis generation. A successful output leads to the discovery of hunches that help generate new hypotheses to test. Speculative design employs this process of discovery.

At the analytical end of the spectrum, where probabilities are known, activities focus on justifying decisions and managing risks. These experiments rigorously test established hypotheses to validate possible solutions before scaling them. A successful output tests a hypothesis in terms of its validity and its effectiveness in dealing with the problem at hand. RCTs employ this method.

In the middle of the continuum, between exploring options and validating a hypothesis, lives a category of experiments that build on both the imaginative and the analytical. We have called this the ‘trial-and-error’ approach. In this zone, successful outputs identify, test, and/or challenge existing assumptions and discover the fit and function of a potential solution: what might work (well enough) and what doesn’t. Hypotheses are tested to understand the probabilities that underpin potential solutions as well as any unanticipated effects – good or bad. Prototyping is a method that follows this trial-and-error approach to test ideas at an early stage and learn fast from failure.

As a whole, the continuum is meant to highlight that successful experimentation needs a dynamic and iterative process of shaping direction to create a basis for redesign and legitimize decision-making. After all, there are different questions to ask, activities to be mindful of, and methods to use throughout the experimental process.

Experimentation as cultural change

A civil servant recently explained to us her biggest dilemma: “I can’t tell my minister that I don’t know. It is simply not acceptable to not have an answer ready. You have to come with a clear strategy promising a solution.” Beyond technical methods, effective, impactful experimental approaches necessitate a shift in the culture of decision-making on both strategic and operational levels of government. It needs to be okay and safe to say: “I don’t know, but I will do my best to find out what the best solution might be”.

At the other end of the spectrum of government attitudes, innovation can be mistaken for a quick fix for complex issues. In both cases, we believe that for more sustained, strategic, large-scale outcomes, we must instill experimentation in the everyday behavior of public servants.

Nesta focuses on what it takes to shift government culture toward embracing experimentation. This entails homing in on what changes behavior and encourages new ways of articulating professional judgment. This requires a change in:

  • Mindset – the fundamental set of assumptions and perspectives that frame the understanding of one’s own role, practice and potential.
  • Attitude – the emotional state that creates a propensity to perceive and solve problems in a particular way.
  • Habits – the fundamental actions and activities that one views as essential and valuable in exercising one’s professional role.
  • Functions – the core operational tasks of government (i.e. how to approach policy development, procurement, etc.) that often go unquestioned.
  • Environment – the factors and elements that shape how decisions are made and how development processes are enabled and authorized.

The i-school for ushering in cultural change in government

The elements listed above require ongoing in-practice learning and unlearning over time. That is why Nesta is working with progressive governments and leading public innovation practitioners around the world to strengthen experimentation and innovation capacity and to create change through our new support platform: the i-school.

We curate a community of practice to support people on their innovation-learning journey. A faculty of international practitioners becomes an active partner, ensuring that participants learn from relevant, cutting-edge practical experience and subject-matter expertise, and providing mentorship throughout the program. We hope a learning collective will form around how people and organizations create practical successes in their innovation work.

There are core operational models that the public sector uses – policymaking, regulation, procurement, lawmaking – that currently work from a questionable ingrained rationale. We see huge potential in working with public servants to reshape their professional roles, skills, and tasks to better fit the current contexts and environments of public problems. The i-school is a deliberate effort to reframe experimentation as cultural change and to increase the ability of governments to deal with public problems.