Photo: a close-up of a person's face on a computer screen

A face-to-face consultation between a GP and their patient is an intimate, highly personal, and confidential encounter that, despite the rise of e-consults, remains at the heart of UK general practice. But it is changing fast. New technologies measuring and assessing risk – what is called here decontextualised risk information – are now being brought into the consultation by both GP and patient.

It is argued here that by introducing this “third voice”, these technologies fundamentally reshape power, control, and knowledge in the consultation. A sociological perspective illustrates how the technology facilitates surveillance and governmentality, and encourages patient compliance and responsible behaviour while potentially marginalising structural and societal causes of ill-health. We propose approaches that could re-contextualise risk information within “lifestyle medicine”.

Decontextualised Risk Information (DRI) refers to the growing flow of scores, alerts, and predictions that now accompany people into clinical encounters, without being explicitly requested by the clinician. These data may originate from wearable devices, private blood testing services, pharmacy-based screening, or automated prompts embedded within electronic health records. Increasingly, DRI enters the consulting room as a given: already produced, quantified, and framed as meaningful.

For patients, such information can feel simultaneously empowering and unsettling. Numerical estimates of risk promise clarity, objectivity, and control, yet are frequently difficult to interpret, detached from lived experience, and ambiguous in their implications. Apparent precision may conceal uncertainty, provisional assumptions, or limited validity.

For GPs, DRI may arrive unexpectedly and uninvited. It can feel intrusive, stripped of the biographical, social, and clinical context that ordinarily gives risk its meaning. Its provenance may be unclear, yet its presence can still exert pressure on clinicians to respond. In this way, DRI has the potential to disrupt established forms of clinical judgement and professional authority, leading to more defensive clinical behaviour.

For both GP and patient, DRI becomes a “third voice” in the consulting room: sometimes clarifying, sometimes contradicting, and often requiring negotiation between what the data assert and what the patient’s body, narrative, and clinician suggest.

The volume of DRI produced by the “internet of things”, alongside the rapid adoption of “technologies of the self”, continues to expand. Yet evidence of its impact within clinical encounters remains limited. Key questions remain unanswered: how does DRI affect consultation outcomes? Does it increase diagnosis, screening, and intervention? Does it redirect responsibility for care back onto individuals, increasing self-reliance and self-labour? And how does it shape patient trust and satisfaction?

A sociological perspective extends critique beyond viewing DRI as a technical innovation to be integrated or managed. Instead, it foregrounds how DRI reshapes power, knowledge, and control within the consultation. It highlights how the assemblages that produce DRI extend the medical gaze and reinforce a view of health as primarily a matter of individual responsibility, rather than one shaped by structural and social determinants.

From a Foucauldian perspective, DRI extends surveillance into everyday life while redistributing clinical authority. Continuous self-tracking and automated alerts cultivate self-scrutiny; patients are responsibilised to monitor, interpret, and act, lest inaction be framed as negligence. Governmentality operates here not through coercion, but through shaping what appears prudent and responsible. The “good” citizen-patient checks their score, books the appointment, and follows prompts. Yet when unsolicited risk scores conflict with clinical judgement, patients may feel caught between algorithmic instruction and human advice. Trust may be strained, and the consultation risks shifting from a relational encounter to a contest over who gets to define “risk”. DRI therefore precipitates a renegotiation of power between GP and patient.

One consequence of this digital health landscape is the marginalisation of social determinants of health. Instead, this model of medical governance requires citizens to engage in “processes of endless self-examination, self-care and self-improvement”. Datafication reinforces neoliberal emphases on behavioural self-management, enforcing disciplines of active body surveillance while reducing patients to objects of clinical knowledge. Algorithmic governance risks stigmatising “unhealthy” bodies, problematising the normal, and encouraging moralised comparisons. As others have argued, ethical demands on individuals expand even as the social resources required to meet them contract.

Inequalities run throughout this terrain. Access to devices, data plans, and private testing is uneven, as are the literacies required to interpret probabilistic information. Some patients arrive with detailed reports and visualisations; others struggle to access their records or to understand technical language. For those with fewer resources, surveillance may feel one-sided—monitoring without meaningful support. Even for digitally confident patients, DRI may mislead when data are noisy, thresholds arbitrary, or reference populations unrepresentative. Algorithmic medicine is not neutral: it reflects and reproduces racial, cultural, and socioeconomic biases. Cultural meanings of risk, stigma, uncertainty about validity, and fears of intrusive surveillance all shape how safe it feels to disclose or act upon DRI.

Managing risk therefore requires more than improved software. When regulatory oversight is weak and corporate interests increasingly influential, alternative conceptual tools are needed to understand the broader consequences of DRI.

At an individual level, clinicians can re-centre the person by explicitly situating DRI within the patient’s narrative: clarifying what question the data were intended to answer, how reliable they are in this context, what alternatives exist, and the potential benefits and harms of acting now versus waiting. Practices can support this by normalising discussions of data provenance and uncertainty, documenting shared decisions and safety-netting, and offering inclusive routes to participation that do not depend on devices or payment. At a community level, there is a need for praxis: collaborative research and action with publics to build understanding and negotiate consent.

As general practice increasingly embraces “lifestyle medicine”, there is an urgent need to understand how DRI reshapes the GP–patient relationship. Even before the widespread adoption of AI, DRI is already testing the balance between care and control. A sociologically informed approach, attentive to surveillance, responsibilisation, and governmentality, can help ensure that risk information serves patients, rather than the other way round.

About the Authors: Dr Alex Burns is a GP at Threespires Surgery, Truro, and is completing his PhD at the University of Exeter.

Dr Adrian Mercer is a former NHS manager, now based in Devon. He is a Patient and Public Involvement lead at Exeter University and an occasional contributor to this site. @adeindevon | @adeindevon.bsky.social