The BBC took the Nottinghamshire Healthcare Trust (NHT) to task last week amidst claims that 49% of patient reviews about their experience of healthcare on the Patient Opinion and NHS Choices websites had been posted from NHS computers.
Their ‘shocking’ discovery was that staff, rather than patients, were completing nearly half of the patient reviews on the site. Patient Opinion and NHS Choices defended this by reference to an established practice of staff helping patients to submit feedback, using NHS computers, because those patients were unable, for a number of reasons (for example, because they were too ill), to deal with the practicalities of the online feedback system. This is a legitimate explanation. However, the Newsnight report stated that 47% of the staff who had posted comments had not reported what the BBC chose to describe as a ‘conflict of interest’; that is, they had not declared that they were NHS staff completing the feedback on behalf of someone else. This is one example of the way the media is quick to identify NHS staff as having ‘conflicts’ and ‘vested interests’; it is not quite so quick to do so in relation to the commercial interests at work in every quarter. Even if we accept the conflict of interest argument, the figure of 47% means, conversely, that 53% of those staff did report their role.
The report led with the emotive strapline that the online system was open to abuse, and it was quick to connect this to the Mid-Staffs inquiry and the Francis report’s recommendation that patients should routinely be able to rate and comment on their experience of hospitals. In a supposed new age of patient transparency and respect (following the terrible events of Mid-Staffs), this latest ‘scandal’ is presented as a betrayal of the principles of trust that exist between patients and professionals. Fleeting mention is given in the report as to why staff might feel compelled to engage in these types of activities. Consider that the gain from these activities is not personal or immediate: the staff member leaving the positive feedback does not benefit directly from the action. Rather, the benefit is at an organisational level, effectively gaming the performance management metrics to which their organisation is held accountable. It is not about manipulating the appearance of the trust on the outward-facing public website; the focus is much more on the inward-facing quality and performance controls that patient experience feedback is tied into. A curious observer might ask what these actions tell us about the prevailing management structures within the NHS, and a critical observer might even question what these performance metrics tell us about issues of transparency and trust between citizens and government. This is the crux of the matter: these processes are about political practice, not professional practice, and information and data are the new battleground.
For example, the Government has promised an information revolution in the NHS. Jeremy Hunt has claimed that publishing surgical survival rates will “save thousands of lives” and “drive up clinical standards”. Similarly, going paperless by 2018 will supposedly save the NHS ‘billions of pounds’. But these data, just as much as the Patient Opinion data, are deeply problematic. There are all sorts of problems with all sorts of data in the NHS, extending from qualitative feedback data right through to the purportedly harder-edged, more ‘scientific’ quantitative data.
Consider the Hospital Standardised Mortality Ratio (HSMR). The HSMR is a simple calculation: the number of actual deaths divided by the number of expected deaths, multiplied by 100. If the number of actual deaths corresponds with the number of expected deaths, then the trust has a score of 100. A score of more than 100 means there are more deaths than would be (statistically) expected, and a score under 100 that there are fewer deaths than would be expected.
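As a minimal sketch, the calculation described above can be expressed in a few lines of code. The figures used here are invented purely for illustration, not drawn from any trust’s actual data:

```python
def hsmr(actual_deaths: int, expected_deaths: float) -> float:
    """Hospital Standardised Mortality Ratio:
    (actual deaths / expected deaths) * 100."""
    return (actual_deaths / expected_deaths) * 100

# Hypothetical trusts, each with 250 statistically expected deaths:
print(round(hsmr(260, 250), 1))  # above 100: more deaths than expected
print(round(hsmr(230, 250), 1))  # below 100: fewer deaths than expected
print(hsmr(250, 250))            # exactly 100: actual matches expected
```

The arithmetic itself is trivial; as the discussion below makes clear, the contested part is the statistical model that produces the ‘expected deaths’ figure in the denominator.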
These were the data at the heart of the Mid-Staffs inquiry, and of the more recent ‘scandal’ at Leeds General Infirmary, which resulted in the temporary suspension of children’s heart surgery. There are a whole host of problems with these statistics (see Paul Taylor in the London Review of Books), and there is widespread disagreement about their utility.
A key issue highlighted by Taylor is how the development of HSMRs in the UK was tied into the commercial development of Dr Foster Intelligence, for publication in the “Good Hospital Guide”. These data were used by a commercial company to compile and publish a data guide for hospitals, which was then used to assess the relative performance of those hospitals (anyone notice a possible conflict of interest here?). Given this commercial context, and given that statisticians continue to debate the overly simplistic or inappropriate interpretation of these data, surely questions have to be asked about how an elevated HSMR score came to be taken as indicative of poor care. Whilst it may be somewhat indicative of poor care, it is certainly also indicative of a political mechanism being used to identify what are adjudged to be poorly performing hospitals. But if the performance metric itself is broken, how can we judge the extent to which the hospital is broken?
Take another example: the friends and family test (FFT). Here a single question is asked – “How likely are you to recommend our ward/A&E department/maternity service to friends and family if they needed similar care or treatment?” This one question is intended to act as a ‘real world’ barometer, providing a live, immediate gauge of quality and performance across all units within the NHS. Peter Lynn, writing in the Guardian, outlines how the question is flawed and how it will skew the ranking of hospitals across the country. The question makes little sense because it offers only a hypothetical choice, without specifying what the alternative might actually be; it seems to assume that hospitals are the same as restaurants or hotels. The question needs a broader context. Rachel Reeves, writing in the Health Service Journal, takes the criticisms even further. According to Reeves, such is the poor quality of FFT data that its reporting is actually in breach of the Department of Health’s own publication guidelines. Additionally, there are differences across trusts in how the test is implemented, making any cross-trust comparison of the data problematic, to say the least.
Both these examples draw on nationally collated data that assorted experts have problematised or even dismissed. That these national-level data have been implicated in several high-profile inquiries into professional practice, and are central to ongoing funding and performance management regimes, raises questions about both the content of these data and the political uses to which they are being put. Consider the timing of this Newsnight report. It has been a bad couple of weeks for the government in terms of healthcare data. The Newsnight report adds a new angle to the data debate, suggesting that the professionals are not to be trusted with our data either. Politically, the way in which the fabrication of patient experience data is portrayed as a fundamental betrayal of trust functions to take the heat off government in the wake of the care.data debacle, and amidst allegations of hospital data being sold to insurance companies. The Newsnight report does nothing to speak to the real concerns this story raises, around the commercialisation of data or the suitability of data metrics for measuring care.
A version of this post appeared previously on the Open Democracy ourNHS blog.