SESYNC Immersion Program: Environmental Policy Workshop
Graeme Nicholas, guest contributor and member of SESYNC's Co-creative Capacity synthesis team.
Problem Framing and Co-creation
By Graeme Nicholas
How can people with quite different ways of ‘seeing’ and thinking about a problem discover and negotiate these differences?
A key element of co-creation is joint problem definition. However, problem definition is likely to be a matter of perspective, or a matter of how each person involved ‘frames’ the problem. Differing frames are inevitable when participants bring their differing expertise and experience to a problem. Methods and processes to support co-creation, then, need to manage the coming together of people with differing ways of framing the problem, so participants can contribute to joint problem definition.
I was first alerted to the role of framing by the work of Donald Schön. In Educating the Reflective Practitioner (1987), Schön states,
In the terrain of professional practice, applied science and research-based technique occupy a critically important though limited territory, bounded on several sides by artistry. There are an art of problem framing, an art of implementation, and an art of improvisation – all necessary to mediate the use in practice of applied science and technique.
This ‘art of problem framing’ is part of what makes the field of practice more than a simple application of knowledge or technique. Framing is an active process involving fateful judgement (artistry) that, in part, determines the outcome.
Framing for co-creation will involve collaborative processes. In my experience, working on projects such as inter-agency collaboration to manage public health risks, collaborative framing depends on bringing together diverse perspectives in ways that avoid collapsing the diversity while engaging together around some ‘good enough’ representation of the situation. Processes that encourage convergence of thinking are likely to lack attention to framing.
What I find useful is to form a provisional judgement about who the actors in a problem situation are, and then to invite to a workshop people representing as many likely perspectives on the situation as is practical. Workshops are designed and facilitated in ways that respect and manage diversity of perspective, expertise and experience, as well as diversity of positional and personal power.
Key to such workshop design and facilitation are two elements: how group-work is structured and the use of ‘boundary objects’. I use boundary objects in workshops as a focus for surfacing diverse understanding, interpretation and assumptions; in other words, I use boundary objects in the service of collaborative framing.
Boundary objects have been defined by Star and Griesemer (1989) as conceptual or tangible items that live “in multiple social worlds and … [have] different identities in each”. A boundary object to support co-creation of outcomes can take many forms; I have found conceptual models, rich pictures, theoretical frameworks, narrative material and archetypes useful, as described below.
The key attribute that allows something to function as a boundary object is that its meaning and significance can be interpreted and discussed from each of the viewpoints present in the group.
Conceptual models can be derived from key informant interviews, the literature or a scoping phase of a project. The model or models produced are deliberately provisional, and attempt to represent a chosen system at a glance. As such, a model makes discussible judgements about what is in and what is out of consideration, what interactions are important in understanding how the system works, and whose perspectives are important in understanding and changing the system.
Rich pictures are a way to get participants to illustrate the complexity of actors and interactions that make up a given situation. The key is to get the complexity into a form that can be seen at a glance, and discussed from various points of view. No consensus is needed in collaborating to produce a rich picture. Both the process of producing the composite picture and the picture itself serve as boundary objects.
Theoretical frameworks that have served as boundary objects in my work include CATWOE (from Soft Systems Methodology), Critical Systems Heuristics, and the Viable System Model (see the references below).
In each case inviting a diverse group to collaborate in populating the frameworks has resulted in surfacing and negotiating implicit frames.
Narrative material provides participants with anonymised quotes that provide windows on a situation and invite collaborative sense-making as to what a given anecdote or extract might signify. Again, the critical feature is that the process focuses on an object that is open to interpretation and contestation.
Archetypes can be used to characterise a situation or aspects of a situation. By their nature, archetypes are open to interpretation, and the choice of archetypes and their significance prove useful in discovering and negotiating the ways a situation can be framed.
There will be many other examples of using boundary objects to support problem framing. What boundary objects have you used to assist in problem framing? How have you worked with the range of ways a given situation is framed by participants to maintain diversity?
Schön, D. A. (1987). Educating the Reflective Practitioner. Jossey-Bass: San Francisco, United States of America.
Star, S. L. and Griesemer, J. R. (1989). Institutional Ecology, ‘Translations’ and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907-39. Social Studies of Science, 19, 3: 387-420.
Soft Systems Methodology, CATWOE, and Rich Pictures:
Checkland, P. and Poulter, J. (2006). Learning for Action: A Short Definitive Account of Soft Systems Methodology and its use for Practitioners, Teachers and Students. John Wiley and Sons: Chichester, United Kingdom.
Critical System Heuristics:
Ulrich, W. and Reynolds, M. (2010). Critical systems heuristics. In, M. Reynolds and S. Holwell (Eds.), Systems Approaches to Managing Change: A Practical Guide. Springer: London, United Kingdom, 243–292.
Viable System Model:
Beer, S. (1985). Diagnosing the System for Organizations. John Wiley and Sons: London, United Kingdom.
Narrative approach and use of archetypes:
Snowden, D. (2001). Narrative Patterns: the perils and possibilities of using story in organisations. Knowledge Management, 4, 10: 16-20.
Biography: Graeme Nicholas is a senior scientist in the social systems team at the New Zealand Institute of Environmental Science and Research (ESR). His speciality is applying complexity science to the understanding and improvement of complex social systems. He has led client-focused research projects on service innovation in health, social services, policing and fire prevention. His qualifications, training and experience include microbiology, theology, systems-oriented consulting, psychotherapeutic theory, dialogue design and facilitation, organisation consultancy, and professional training services. He is a member of the Co-Creative Capacity Pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).
Katrin Prager, guest contributor and member of SESYNC's Co-creative Capacity synthesis team.
A Co-creation Challenge: Aligning Research and Policy Processes
By Katrin Prager
This blog post originally appeared in the Integration and Implementation Insights blog (http://I2Insights.org) as “A co-creation challenge: Aligning research and policy processes”, and is reposted with the author’s permission.
How does the mismatch between policy and research processes and timelines stymie co-creation? I describe an example from a project in Sachsen-Anhalt state in Germany, along with lessons learnt.
The project, initiated by researchers, aimed to use a more participatory approach to developing agri-environmental schemes, in order to improve their effectiveness. Officers from the Agricultural Payments department of the Sachsen-Anhalt Ministry for Agriculture were invited to participate in an action research project that was originally conceived to also involve officers from the Conservation department of the same ministry, farmer representatives and conservation groups.
An initial meeting with the Agricultural Payments officers, to determine the focus of the participatory study, identified a problem with payments for grazing special conservation areas as their key concern. They needed to find a way to maintain payments to shepherds to graze special conservation areas. Shepherds relied on these agri-environmental payments to earn a living from grazing management, but recent changes in regulations no longer allowed payments for grazing in protected areas – where paradoxically the benefit of grazing for conservation outcomes was highest.
However, the research team had no expertise in legal issues or scheme design. What we could offer was a tool to optimise the allocation of budgets. Even though this could not help the government officials with maintaining payments to shepherds, the officials recognised potential benefits of the tool for a different problem, namely the anticipated negotiations with farmers’ associations to redistribute and reduce agri-environmental scheme budgets in the next planning period. This seemed to be their key motivator for cooperating with the researchers and making internal budget figures available.
The researchers had to compromise by allowing the workshop participants and timeline to be determined by the ministry. The Agricultural Payments department needed the negotiation process with farmer representatives to be undertaken shortly after the project started in order to meet the timelines for scheme revisions set by the European Commission, the federal ministry and the state ministry.
This impacted the research process, which aimed to combine facilitated communication with a highly structured mathematical model in a series of workshop meetings. The facilitated communication was intended to support fairness and transparency in the process, and to resolve any potential conflicts. The purpose of the mathematical tool was to structure and visualise the issue (budget allocation), scrutinise different scenarios, and therefore increase the transparency and efficiency of the process.
There was simply no time for the initially planned analysis of the ex ante situation and relevant stakeholders, nor for running more than two joint workshops. This meant that many decisions had already been taken before the first workshop, such as selecting the individual measures to consider in the model and setting some restrictions (e.g., upper and lower budget limits per measure). However, scheme objectives and further model restrictions were jointly discussed and agreed at the first workshop, and weightings for the model were developed through a Delphi-style exercise.
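By way of illustration, a weighted budget-allocation model of this general kind might be sketched as follows. This is not the tool actually used in the project; the measure names, weights and budget limits are invented purely to show how measure-level limits and Delphi-style weightings can feed into an optimisation.

```python
# Minimal sketch of a weighted budget-allocation model (illustrative only:
# the measure names, weights, and budget limits below are invented, not project data).
from scipy.optimize import linprog

measures = ["grassland extensification", "buffer strips", "cover crops"]
weights = [0.5, 0.3, 0.2]    # relative importance, e.g. from a Delphi-style weighting exercise
lower = [2.0, 1.0, 0.5]      # lower budget limit per measure (million EUR)
upper = [6.0, 4.0, 3.0]      # upper budget limit per measure (million EUR)
total_budget = 10.0          # total agri-environmental budget (million EUR)

# Maximise the weighted sum of allocations (linprog minimises, hence the negated weights),
# subject to the total budget and the per-measure lower/upper limits.
result = linprog(
    c=[-w for w in weights],
    A_ub=[[1.0, 1.0, 1.0]],
    b_ub=[total_budget],
    bounds=list(zip(lower, upper)),
    method="highs",
)

for name, allocation in zip(measures, result.x):
    print(f"{name}: {allocation:.2f} million EUR")
```

Even a simple formulation like this makes the trade-offs between measures visible and discussible, which is the transparency role the tool was intended to play in the negotiations.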
Instead of genuine co-creation, the process was shaped by the particularities inherent in bureaucratic organisations, especially at the state level. As well as the restrictions already described, the flow and distribution of information from the ministry was poor and impacted on what could be entered into the model; power issues played out in terms of what information ministry staff shared at the workshops and with whom; and the hierarchical, sector-oriented focus of the bureaucracy meant that some relevant stakeholders (especially the Conservation department and conservation groups) were not invited to participate.
Nevertheless, the participants were initially satisfied with the process, and this can be attributed to the facilitation generating a level playing field during workshops and to the transparency afforded by the mathematical model. Beyond the workshops, however, disappointment set in for the non-governmental stakeholders as the usual power structures came into play, with ministry officials choosing to disregard the recommendations produced at the workshops.
We concluded that research is not set up to accommodate the requirements of policy making, in at least three ways:
We learnt that co-creation between researchers and bureaucratic organisations needs supportive gatekeepers and the opportunity for longer term involvement so that trust can be built and opportunities for mutually beneficial co-creation can be seized.
It is extremely useful if researchers are able to recognise power structures and their impact on co-creation, although there will be cases where there is little researchers can do to mitigate this impact. For example, the relevant stakeholders are unlikely to be motivated to contribute to a co-creation process for which the initiators have already decided the result.
Have you found ways to align research and policy processes, and to create the necessary flexibility in research project funding? I’d love to hear about them.
For more information:
Prager, K. and Nagel, U. J. (2008). Participatory decision making on agri-environmental programmes: A case study from Sachsen-Anhalt (Germany). Land Use Policy, 25, 1: 106-115.
Biography: Katrin Prager is a senior social scientist at the James Hutton Institute in Aberdeen, Scotland. She is involved in inter- and transdisciplinary research on agri-environmental policy making and implementation, collaborative landscape management, community engagement and farmer adoption of conservation practices. Katrin investigates these topics through the lens of institutional analysis, knowledge management, adaptive capacity and organisational behaviour. She is a member of the Co-Creative Capacity Pursuit funded by the US National Socio-Environmental Synthesis Center (SESYNC).
Val Snow, guest contributor and member of SESYNC's Core Modelling Practices synthesis team.
Should I Trust that Model?
By Val Snow
How do those building and using models decide whether a model should be trusted? While my thinking has evolved through modelling to predict the impacts of land use on losses of nutrients to the environment – such models are central to land use policy development – this under-discussed question applies to any model.
In principle, model development is a straightforward series of steps: define the problem and the conceptual model, implement that model in code, parameterise it, and test its outputs against experimental data.
In reality, of course, these steps do not take place in an orderly progression: there are many loops backward, some of the parameterisation and testing occurs in parallel with the coding, and the first step is often revisited many times.
It is mostly assumed that assessment of ‘trust’ or ‘confidence’ in a particular model should be based on the metrics or statistics resulting from comparison of the model outputs against experimental datasets. Sometimes, however, the scope of the testing data and whether the model has been published in a good journal are also taken to imply confidence in the model. These criteria largely refer to that last testing step, and this focus is understandable: of the steps above, testing is the one most readily documented against accepted standards, with the results made available externally. However, even with a quantitative approach to testing, Bennett and colleagues note that the actual values of the statistics considered acceptable are a subjective decision.
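By way of illustration, the kinds of statistics typically reported in such testing might be computed as in the short sketch below; the observed and predicted values are invented example data, and, as noted, what counts as an acceptable value remains a subjective call.

```python
# Illustrative goodness-of-fit statistics for model outputs against observations;
# the observed and predicted values here are invented example data.
import numpy as np

observed = np.array([3.1, 4.8, 2.2, 5.5, 4.0, 3.6])    # experimental measurements
predicted = np.array([2.9, 5.1, 2.5, 5.0, 4.2, 3.3])   # matching model outputs

residuals = predicted - observed
rmse = np.sqrt(np.mean(residuals ** 2))    # root mean square error
bias = np.mean(residuals)                  # mean error: systematic over- or under-prediction
nse = 1 - np.sum(residuals ** 2) / np.sum((observed - observed.mean()) ** 2)  # Nash-Sutcliffe efficiency

print(f"RMSE: {rmse:.3f}  Bias: {bias:.3f}  NSE: {nse:.3f}")
```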
While I agree with the approach and need for quantitative testing, the testing results themselves have very little to do with my confidence or trust in a model. My confidence will evolve over time as I become more familiar with the model. By the time I am prepared to make any statements about the specific reasons for my degree of trust, the reasons for that trust will largely have become tacit knowledge – and that makes it very difficult for me to explain to someone else why I have confidence (or not) in that model.
Here I have attempted to tease out the factors that influence my confidence in a model. I should note that my trust in the models I have been involved in developing, or that I use at an expert level, can fluctuate quite widely and wildly over time so, for me, the process of developing trust is not a linear process and is subject to continual revision. I assess four key areas concerning the model using a range of questions, as follows:
Area 1. The nature of the problem domain: Are the ‘correct’ outputs even measurable? How mature are the science community’s understanding of, and agreement on, the conceptual and quantitative processes that must be included in the model? What constraints and deliberate assumptions have been included? Are these assumptions likely to constrain error, or to allow (or even encourage) it to blossom?
Area 2. Software development and parameterisation: Who did the work, and do I have a favourable opinion of their other modelling activities? What documented software development processes did they use? Do they use a reliable version control system, and can I compare older versions of the model to the current version? Is the documentation sufficient and sufficiently well-presented that I can, for the most part, understand the workings of the model and its implementation assumptions? If I need more detail, can I (or can I get someone else to) dive into the code to understand more? How open and transparent does the process appear to be? Can it be readily reviewed by others?
Area 3. Developer’s testing: What have the developers done with respect to testing? Does it feel robust (e.g., basic things such as not reusing the data used for parameterisation, but also whether they have delved into and explained reasons for poor performance)? Have they relied mostly on reporting statistical values, or are there extensive graphs that are appropriate for the domain of the model?
Area 4. User’s experience: Is the model user interface set up in such a way that I can investigate the model’s behaviour as inputs, settings and parameters are changed? When I do this investigation, how often does the model ‘surprise’ me? How many of those are “Wow!” surprises (meaning I thought the model would be unlikely to behave well but it did), how many are surprising surprises (the model outputs can be rationalised and even make sense once investigated), and how many are “Really!?!” surprises (the model outputs do not make sense in any way that I can explain and/or they seem to conflict with the developer’s testing or documentation)? When I get the last type of surprise: is the model constructed in such a way that I can understand the extent to which that surprise will flow through to outputs that matter, or are the effects of any such surprises likely to be minimised or cancelled out by the way the model is constructed?
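By way of illustration, one simple way to probe behaviour in this fashion is a one-at-a-time parameter sweep. In the sketch below, run_model and its drainage_rate parameter are hypothetical stand-ins, not any particular model’s interface; the point is only the pattern of varying one input and watching for outputs that surprise.

```python
# One-at-a-time parameter sweep to look for surprising model behaviour.
# run_model() and its parameter are hypothetical stand-ins for a real simulation model.
import numpy as np

def run_model(drainage_rate: float) -> float:
    """Stand-in for the real model; returns annual nitrate loss (kg N/ha)."""
    return 20.0 + 15.0 * np.tanh(2.0 * (drainage_rate - 0.5))

baseline = run_model(0.5)
for drainage_rate in np.linspace(0.1, 0.9, 9):
    output = run_model(drainage_rate)
    change = 100.0 * (output - baseline) / baseline
    # A large, unexplained jump between adjacent parameter values would be a
    # "Really!?!" surprise worth chasing back through the model's construction.
    print(f"drainage_rate={drainage_rate:.1f} -> nitrate loss {output:.1f} kg N/ha ({change:+.0f}% vs baseline)")
```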
These questions are how I develop trust in a model. Do my questions align with your criteria or have I missed critical points? Do you have a completely different process for developing trust in a model? My approach is probably strongly tuned by my experience with mechanistic or process-based models (where the model is intended to represent an expert’s opinion of how the system works rather than being driven by data). Given that, if you work with a different type of model, does your approach to developing trust work differently? Might you place more reliance on comparison to data? I’d value your thoughts.
Bennett, N. D., Croke, B. F. W., Guariso, G., Guillaume, J. H. A., Hamilton, S. H., Jakeman, A. J., Marsili-Libelli, S., Newham, L. T. H., Norton, J. P., Perrin, C., Pierce, S. A., Robson, B., Seppelt, R., Voinov, A. A., Fath, B. D. and Andreassian, V. (2013). Characterising performance of environmental models. Environmental Modelling & Software, 40: 1–20. DOI: 10.1016/j.envsoft.2012.09.011
Biography: Val Snow is a systems modeller at AgResearch in New Zealand and comes from a soil physics and agricultural science background. Her research focuses on the development and use of simulation models to support technological innovation in pastoral agricultural systems and assessment of the impacts of land use. Application areas include land use policy, future farming systems, greenhouse gas mitigation and climate change adaptation.