A few days ago I found myself investigating the bed occupancy statistics for an NHS hospital trust. This was in response to a friend telling me that the hospital had found it necessary to invent a new colour level of alert beyond ‘black alert’. Black alert had previously been the most extreme level of alert, signifying that bed capacity had been reached and that patients arriving at A&E would have to be taken to another hospital.
The National Audit Office suggests that hospitals with bed occupancy above 85% can have regular bed shortages, periodic bed crises and an increased number of healthcare-acquired infections. I’d been told that a number of hospitals were operating at 100% bed occupancy. Indeed, the hospital in question had been operating at over 95%. Somewhat serendipitously, I was undertaking this activity at the same point that a report flashed up on my screen stating that NHS doctors were being pressured into manipulating patient records to ensure that hospitals did not miss waiting-time targets. In this blog I’m going to suggest that collecting data such as bed occupancy and A&E waiting times is not enough.
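As a brief aside for readers unfamiliar with the metric: bed occupancy is simply occupied beds as a share of available beds. The minimal Python sketch below uses hypothetical figures (not the trust’s actual numbers) to show the calculation against the 85% level the National Audit Office refers to.

# A minimal sketch of the bed occupancy calculation: occupied beds as a
# share of available beds. All figures here are hypothetical.

NAO_THRESHOLD = 0.85  # level the NAO links to regular bed shortages

def bed_occupancy(occupied_beds: int, available_beds: int) -> float:
    """Return occupancy as a fraction of available beds."""
    return occupied_beds / available_beds

occupancy = bed_occupancy(occupied_beds=570, available_beds=600)  # hypothetical trust
print(f"Occupancy: {occupancy:.0%}")  # -> Occupancy: 95%
if occupancy > NAO_THRESHOLD:
    print("Above the 85% level associated with regular bed shortages")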
Senior managers were reported to be encouraging doctors to participate in fraudulent activity, with instructions given by a senior member of staff to make alterations to a patient’s records to “find a reason to change the admittance time”. At around the same time there was a report that Welsh First Minister Carwyn Jones had written to the UK Statistics Authority about the Prime Minister’s “selective misuse” of data following her attempt to deflect criticism of the health service’s performance onto the Welsh NHS.
These two reports were vying for NHS media space with a steady stream of media accounts of patients dying in hospital corridors. A BBC news article reported that in the previous week there was a point when 133 out of 137 hospital trusts in England (97% of all hospital trusts) had an unsafe number of patients on their wards. It stated that just over 85% of patients were seen within four hours – well below the 95% target – and marginally worse than the previous low in January 2017.
So what does this complex blend of party politics, funding shortfalls and accountability pressures mean for the value of the data that is being produced regarding the performance of the NHS? And to what degree should members of the public have faith in the data being produced by and about their local hospitals?
Data sources such as A&E waiting times and bed occupancy are wedded to rationalist, technicist and bureaucratic logics of accountability. They are often understood as objectively produced and reasonably robust accounts of hospital progress, far removed from the subjective sway of case studies and first-person accounts. As a result, there is little understanding or appreciation of how the ‘life of data’ is negotiated in detail in specific places and contexts.
However, the validity of such measures of hospital progress is entirely dependent on context. Games of creative compliance are well known across a number of sectors where such audits function to make staff performance part of agendas for control. Indeed, such gaming includes practices which, in the most extreme cases, can end with the subordination of moral and care obligations to economic ones.
The knowledge produced about given health events is never as fixed and institutionally standardised as is often imagined; rather, it tends to be contingent on circumstance. This becomes visible in moments of extreme crisis. Measures such as bed occupancy and A&E waiting times are as useful as the conditions in which they are produced allow them to be. How the numbers resonate with what they are tasked with capturing is a moveable feast, dependent on a range of factors, not least the resources at a Trust’s disposal to reach the required targets and the consequences of not achieving them.
When resources are insufficient and the external consequences of target failure are high, the integrity of data becomes fluid. In such circumstances it is perhaps more useful to think of data performances than of data in and of itself. That’s not to say that these data performances don’t resonate with the activities they are tasked with capturing, simply that the performances can be better or worse at doing so depending on circumstance. And while it can be comforting to collectively indulge in the myth that data retains its integrity through all manner of circumstances, this ‘scandal’ suggests otherwise.
It is often uncritically assumed that such measures and targets, however flawed, will contribute to more effective patient care. However, it is not only the case that these data performances are fluid in their capacity to capture the ‘thing’ they are supposed to measure; they can, in some circumstances, actively degrade patient care. They can shift from having a protective function to being a risk factor in and of themselves, as patients are moved around hospitals or discharged early to ‘reset the clock’ so that targets are met. Staff time is taken up and clinical confusion created by manipulating statistics or shuffling patients around the computer system, and care can be prioritised not according to need but according to the greater potential to reach target times. In this context these data performances are not only poor signifiers of events but become potential sources of patient danger.
This blog isn’t a call to throw out the baby with the bathwater. As promiscuous as the politics of evidence is in practice, there is self-evident value in knowing how quickly people entering trauma departments are receiving care and whether the bed occupancy figures make a hospital unsafe. The intention here is simply to note that collecting the numbers isn’t enough. We need to pay real attention to creating the circumstances that allow the numbers to have meaning.
2 Responses
bill geddes on Feb 7, 2018
Thanks for this Carl, it is exactly the kind of thing which will never be available in mainstream media but needs to be aired.
Colin Standfield on Feb 9, 2018
‘Over half of admissions from an unplanned A&E attendance occurred in the fourth hour (51 per cent or 1.9 million), of which 44 per cent (850,700) were admitted in the final ten minutes.’ (HSCIC, Sept 2013 to Aug 2014; quotation from: http://content.digital.nhs.uk/article/5246/Patient-journey-Longer-hospital-stay-for-AE-patients-admitted-in-the-fourth-hour-of-attendance)