Photo: a person sitting on the floor using a laptop (Odissei / Unsplash)

Public willingness to take part in research is in free fall. Whether invited to tick boxes in a survey, volunteer for a medical trial or talk to an interviewer, we are increasingly likely to just say ‘no’. Research participation declines hit the UK headlines a few months ago, when the Office for National Statistics (ONS) had to admit that recruitment to its flagship Labour Force Survey was so low that its findings could no longer reliably inform economic policy. But, in an era of information overload, does it matter that the state can no longer rely on surveys to find out what people do or think? Or that qualitative researchers struggle to recruit?

Declines in research participation have been recorded across countries, disciplines and research methods. They are best documented in large surveys, where the characteristics of all possible recruits are known and non-response can be calculated. However, anecdotally, recruitment struggles are also a problem for qualitative researchers.

Non-response matters. It is not always possible to adjust findings to account for the differences between those who do and don’t take part in a study – and limits in representativeness have real consequences. They can bias inferences about who is most at risk of infection during epidemics; hide evidence of unmet social need; and, as in the ONS example, lead to poorly informed policy decisions. Slow recruitment also wastes precious research resources as project timelines stretch – and can completely scupper some studies.

Identifying reasons for non-participation is challenging, for obvious reasons. If people don’t consent to take part, they probably won’t consent to being asked in detail why they refused. When non-responders do give reasons, these are often polite brush-offs that give little insight into the underlying motivation. Many, for instance, report being ‘too busy’. To be sure, our attention is increasingly frayed by incessant demands to rate every trivial encounter and by email inboxes clogged with marketing surveys. With easy-to-use survey design tools, anyone can now send out a questionnaire – with some truly dispiriting, badly designed outcomes, often from organisations that should know better. Yet this demand saturation is universal: it doesn’t account for why some people are less likely to agree than others.

Much of what we know about why people decline to take part is derived from studies of why some participants say yes. Motivations for research participation are diverse – altruism, interest and, sometimes, direct incentives such as payment can all play a role. For qualitative research, there may be personal benefits to participation, from the therapeutic value of sharing your story with a willing listener, to passing on views that you hope will improve services. Taking part in an interview study might also simply be seen as ‘no skin off my nose’. However, reasons for saying no are unlikely to be simply the absence of these motivators.

Two crucial components of willingness to take part are connection and trust. When these are missing, recruitment is a struggle – as any researcher who has ever had to use ‘cold calling’ will know. In a study of householders’ home energy improvements, we had no take-up at all from an invitation mailed to 50 householders, but recruited easily in person at a community event. Participants need to feel a connection to the researcher and (up to a point, sometimes) the topic, and they need to have that difficult-to-establish sense of trust. Trust here is multi-faceted. It involves trust that precious time will not be wasted, that views will be respected, that the researcher (or their organisation) is a trustworthy actor, and that private information will not be used inappropriately. In a personal encounter, the foundations for these can be laid more easily, especially if the initial request comes from a known and trusted gatekeeper. However, personal approaches are now harder to make, given ethical requirements to allow respondents time to consider participation, and they limit the ability to generate an inclusive or representative sample.

But to get to the bottom of participation declines, we perhaps need to take refusal more seriously – as an act, not an omission. Saying no is not just the absence of participation – a nothing – it is also doing something. As Jill Turner and Mike Michael argued in their discussion of ‘don’t know’ responses to survey questions, ignorance is not necessarily a lack of knowledge – it is potentially a ‘political statement’. Non-response can mean the respondent has no fixed opinion, doesn’t know, or rejects the premise of the question. It can mean they are rejecting the very obligation to be informed and have opinions. Similarly, just saying no to participation might reflect a rejection of the premise of the study, a lack of faith in the motives of the researcher, or doubt that the findings will change anything. It might be a rejection of the obligation to know, or to share information, about the self; or of the obligation to care about the topic, or to treat it as something one has a responsibility to hold views on. It might signal declining trust that the ONS, or any other research actor (a university, the local primary care practice), is acting in the public interest, or that information provided to them will be used in the public interest.

Declines in research participation matter for researchers, for evidence-informed policy, and for social justice. They also matter for what they tell us about a fracturing of trust between knowledge providers and the wider public. Whilst technical fixes – better survey design, accessible formats, and reductions in the burden of consent processes – are all necessary to maximise representative participation, they will not solve this more fundamental issue of trust. We need a much more nuanced understanding of why people are just saying no. But that is going to be very tricky to research.