Green Templeton College | Oxford

Trish Greenhalgh reports on a recent conference at Green Templeton

The sun shone brightly on Green Templeton College on 13th and 14th June 2017 as we welcomed 60 participants from 8 countries to talk – with much enthusiasm – about complexity. As the organisers had hoped, the topic was framed differently by different speakers, and if the field moved forward it was because we made progress in pinning down what we were disagreeing about. The Twitter hashtag #Complexity2017 allowed the many applicants who missed out on places to follow the action in real time on social media. Thanks to some sophisticated ‘big data’ analytics, the best tweets (and photographs) have been collated here.

A full E. P. Abraham Lecture Theatre on 13th June

What is ‘complexity’ and why do we need to talk about it? In short, too much of medicine assumes a clockwork universe. We use machine metaphors which reinforce the assumption that we can fix a part of the system as we might fit a new carburettor to a car, and that this ‘fixed’ part will slot seamlessly into the old system. We valorise experiments, especially the randomised controlled trial, in which selected variables are carefully isolated and all aspects of context are ‘controlled for’. In other words, the scientific basis of healthcare interventions has mostly been developed in artificially closed systems. But as Professor Penny Hawe from the University of Sydney illustrated in her opening keynote lecture, illness and healthcare unfold in an open system in which the different parts are interconnected and changing dynamically over time. (See for example her paper showing how comparison communities in a randomised controlled trial altered their behaviour in response to being ‘controlled’.)

Professor Penny Hawe giving the opening keynote

To generate knowledge that applies in the real world, we need to study open systems in all their messiness, unpredictability and complexity. We need to study not merely the parts of those systems but the relationships and patterns in those systems. We need to move from flowcharts, boxes-and-arrows diagrams and static frameworks to more dynamic models of reality that capture fluidity, feedback loops (positive and negative) and emergence over time – as well as the trust relationships and partnership dynamics on which such emergence depends. We need to abandon the notion of a transferable effect size from interventions and accept that the ‘same’ intervention will play out differently in different contexts – and even in the same context over time.

All this (expressed, deliberately, at a broad-brush level) is well known to those who study complexity. There is less agreement, however, on how we should study messy, unpredictable open systems – and even on what exactly complexity is. What research methods and techniques are available, and what counts as a ‘robust’ empirical study of a complex system? If and when grant-awarding panels see the light and start to fund non-experimental and uncontrolled studies of activities in open systems, how should they decide which ones to fund? At a more abstract level, what is the philosophical basis of the study of open systems – both ontologically (the nature of reality) and epistemologically (how we might know that reality)?

“Interventions” got a bit of a bashing. The question “was the intervention implemented as planned and what change occurred in the primary outcome measure?” will inevitably provide a partial and potentially misleading picture of what actually occurred. Better to ask “what happened (intended and unintended) – and why?”. Penny Hawe exhorted us to shift our thinking away from interventions as a “dose” (of a drug, an educational programme, a nudge and so on, delivered to individuals) and to start conceptualising them as events in systems. The “system” might be conceptualised as the sum of: (1) its constituent activity settings (the various places where people do things and interact); (2) the social networks that connect the people and these settings; and (3) time (Penny’s paper on this is well worth reading).

As Penny reminded us, an intervention has effects, and creates distant ripples, across settings, through networks and over time. Some interventions are system-stifling while others are system-stimulating; all create some kind of “structural signature” in the system. This resonated with the suggestion from our second keynote speaker, Professor Mark Petticrew from the London School of Hygiene and Tropical Medicine that the “primary outcome” of an intervention might be an enduring change in the system.

Mark suggested that the acronym ‘PICO’ (population-intervention-comparison-outcome), popular in evidence-based medicine circles, creates more problems than it solves. It implies a tidier, more closed system than is ever the case, and the selection of both intervention and outcome(s) involves value-laden (and hence, ethical) choices. Findings from oversimplified study designs and analyses will mislead us and perhaps falsely reassure us. What we gain in so-called rigour we lose in authenticity. An example is the “rigorous” but misleading Cochrane review of advertising restrictions to reduce alcohol consumption, which restricted its dataset to a single randomised controlled trial and three interrupted time series analyses (with the primary endpoint of individual alcohol consumption) and concluded that there was insufficient evidence to support or refute the use of advertising restrictions. For Mark’s own system-wide analysis, which uses a much richer dataset with multiple outcome measures and comes to a different conclusion, see this recent paper.

Mark Petticrew giving the second keynote

Following this scene-setting introductory session, the conference enjoyed a mix of theoretical and empirical presentations (oral and poster) from a wide range of academic disciplines and geographical settings. The abstracts are here (oral abstracts; poster abstracts). Below, I summarise some of the over-arching themes:

Theme 1: The dangers of complexifying issues
Complexity, suggested Simon Cohn, may be as much in the eye of the beholder as inherent in the problem being analysed. Other speakers (and questioners from the floor) warned that there is a danger of making a problem seem so complex that we don’t try to address it, fearing that the humble contribution we are resourced to make would make too little difference. Critical social scientists used the term “quietening” for this phenomenon, reminding us of the deep but sometimes hidden links between knowledge and power. Mike Rayner led a provocative discussion on whether complexity theory(ies) were as useful as most of the audience assumed. Others pointed out that working with complexity does not mean taking on an exhaustive analysis of every component of the system – it may mean having conversations to identify what is most of value and pursuing a pragmatic approach to improving that aspect of the system. Thankfully, no consensus was reached.

Theme 2: Learning to work with less-than-perfect data
Miranda Wolpert from UCL drew on a previous career as a historian to design a highly unconventional, adaptive and pragmatic approach to child mental health research. Drawing on a quote from the historian E. H. Carr – “History is a hard core of interpretation surrounded by a pulp of disputable facts” – she introduced her FUPS framework (Flawed, Uncertain, Proximate [that is, surrogate] and Sparse) to describe the kind of data that it is currently possible to generate in child mental health. Instruments for assessing health status lack precision; their validity is contested; they measure something that relates tangentially but not directly to what really matters to patients; and in any case, there’s precious little evidence in many conditions. Rather than waiting until the data are “good enough” (and achieving nothing in the interim), better to work in partnership with patients and clinical services, being honest about the limitations of the data (avoiding “complicated black boxes”) and using data (FUPS data, warts and all) to start conversations about how to move forward. This resonated with a presentation by Jean Macq from the Catholic University of Louvain, Belgium, on using agent-based modelling not as ‘hard data’ but as a starting point for interactive conversations with policymakers (who engage, interpret, suggest). Whilst such approaches feel right, the dominant forces in healthcare research view them as second-rate because rigour is conventionally defined in methodological rather than relational or ethical terms.

Miranda Wolpert shows the principles by which she runs her research unit

Theme 3: People (and other agents)
Complex systems depend on agency. People have plans, ideas, things at stake. They draw on the resources available to them to do things that matter to them. They are (more or less) creative, committed and engaged with programmes. They will (sometimes) join in conversations but they won’t fall in with your plans if they don’t trust you. The actor-network theorists in the audience declared themselves when they announced that technologies too have agency and can ‘act’ in a complex system. But most of us, I think, were more taken by philosopher Robin Durie’s account of Hannah Arendt’s ‘The Human Condition’ – what he called “an ontological account of what it means to be human”. To be human means to act – in terms of our labour (what we do to keep ourselves existing), our work (Aristotle’s poiesis, the highest form of human existence in which we “create the communal”) and action (acts and speech – the means by which we disclose ourselves to others). This led to a discussion about “the spaces in between” which both relate us and separate us – and the implications for a human-centred, relational science of healthcare improvement based on complexity principles. Heady stuff.

Theme 4: Philosophical unfinished business
Philosophical critiques of a complexity approach (which we didn’t get into in any depth) deserve to be summarised and considered carefully. Complexity is an orientation rather than a theory or philosophical position (we heard from phenomenologists, social realists, constructivists, pragmatists and many in between). Depending on your philosophical position, terms such as ‘generalisability’ and ‘causality’ mean different things and raise different theoretical and empirical challenges. Whichever way you swing philosophically, it is salutary that the complexity lens may mean little in the way of deliverables to the patient in the bed or the clinician in the busy A&E department. More work needs to be done here to ensure that complexity is more than an excuse for a satisfying intellectual get-together in the congenial surroundings of an Oxford college.

The staff at Green Templeton did us proud with a drinks reception (plus posters on complexity research) followed by a tour of the Observatory and a truly splendid dinner. We are negotiating with a publisher for a special journal issue on ‘Talking about Complexity’, so watch this space for some interdisciplinary papers.

Trisha Greenhalgh
15th June 2017

Drinks and posters in the Common Room before dinner