Artificial intelligence (AI) is rapidly becoming ingrained in many aspects of research, such as generating large health datasets from patient records, supporting qualitative analysis, and producing lay summaries and research translations. This is creating both excitement and anxiety. Although AI is often described as a revolutionary tool for improving productivity and reducing cognitive load, it poses real threats to research integrity (see example of retracted publication here). Universities and research organisations are trying to manage this and, in many cases, struggling to keep up with the pace of AI development across many areas of research. The challenge is evident in the swift development of policies to govern AI use, often without wide stakeholder involvement. This poses serious threats to health equity.
Health equity refers to the absence of unfair and avoidable differences in health among population groups: ensuring everyone has the chance to reach their full health potential. Digital developments are often portrayed as a benefit to society’s health and collective wellbeing, but the social and environmental consequences of AI in research will be experienced very differently across groups. This speedy adoption therefore needs critical attention. We pose three key questions to guide it: Who can make use of AI tools? Whose knowledge is being privileged? Who bears the environmental costs?
1. Who can use AI tools for their own development?
AI presents another barrier to involvement in research for people facing digital exclusion. Research by the UK-focused digital inclusion charity Good Things Foundation (2024) shows that people with the lowest levels of digital skill and confidence may face further exclusion, finding the pace of AI development and its applications overwhelming. With suitable support, AI tools may be valuable in building digital skills and capabilities, but access to the most advanced features of large language models (LLMs) often needs to be paid for, commercialising access to tools that draw on freely available data. Research organisations need to consider who is shaping policies on how AI is used. In exploring young people’s views, researchers in the Digital Good Network asked: “Who gets to define the way AI or machine learning impacts what is considered useful knowledge?”.
While AI may support non-English speakers and those with lower writing proficiency, emerging evidence suggests there may be a hidden cognitive cost to learning. Overreliance on AI for writing may reduce active engagement with the writing process, hindering the development of critical thinking and nuanced expression. Individuals may never develop essential writing competencies (‘de-skilling’), which could hinder long-term learning. This raises concerns about research integrity and the equitable development of skills, with some groups disproportionately disadvantaged.
2. Whose knowledge is privileged in the generation of content?
Inequalities of the past lead to inequalities of the future. During slavery and colonialism, the narratives, ideas and cultures of communities were suppressed. Medical research on racialised bodies was used to justify the horrors of slavery and colonialism by presenting some humans as weaker, diseased and simply lesser. It is for this reason that, in many indigenous communities, research is a ‘dirty’ word. Today, most qualitative research excludes the narratives and counternarratives of racialised communities, and so AI language models can reiterate dominant white discourses, ideas and cultures. In quantitative research, large healthcare datasets drawn from medical records, radiographs and blood biochemistry are used to train deep learning models and improve the diagnosis of health conditions. This has enormous potential for public health, but it also risks reproducing the ills of the past and even turning a blind eye to the horrors of the present.
Data on AI platforms is skewed towards the Global North, and the proliferation of AI-generated content risks further driving inequity in citations of research from the Global South. In health research, historically rooted systemic inequities (racism, sexism, ableism and socioeconomic disparities) have shaped how knowledge is generated and used in the Global North (see ForEquity for a useful introduction to this topic). Internationally, many health datasets do not represent the diversity of populations. The extent of the problem is not yet known because many datasets do not provide detailed demographic information – a failure of ‘data transparency’. Research by Standing Together showed that the creators of large datasets for AI prioritise data quantity over the quality of their inclusivity or fairness. The Standing Together team put forward recommendations to encourage transparency about ‘who’ is represented in the data and ‘how’, as well as about how health data is used in research and practice.
Disease prevalence is often higher amongst marginalised communities because poor living conditions, mouldy homes, unhealthy foods, air pollution and poor labour rights (the social determinants of health) largely determine health, even marking bodies at a cellular level. For example, race, class and gender shape the ecological complexity of the microbiome. Algorithms are therefore trained with skewed data from marginalised communities – a colonial continuity of data extraction from racialised bodies. Moreover, an overemphasis on AI-generated health differences, stripped of social context, risks marginalised communities being further stigmatised as disease-prone and even as a burden on the healthcare service. On a global scale, advanced technological healthcare is largely out of reach for poorer communities. Nonetheless, AI technologies are often trained on racialised bodies in the UK and abroad. The NHS has awarded its largest IT contract to Palantir, a US tech company that provides AI-enabled digital infrastructure to the NHS, as well as surveillance and military technology. This raises privacy concerns, especially for marginalised communities who have faced historic injustices and atrocities in health research and who still lack influence over digital and research infrastructure. Amnesty International has described the NHS’s use of Palantir as a ‘very troubling choice’ given the company’s links to human rights abuses across the world, including the genocide in Gaza. Any use and governance of AI must be grounded in an understanding of enduring historical legacies, systemic discrimination and human rights.
3. Who bears the impact of AI environmental costs?
Most UK universities now have a sustainability strategy to help them achieve net zero by 2030, promoted by joint charters and educational guidelines such as Advance HE’s Education for Sustainable Development in Higher Education framework. Alongside the race to net zero, however, the use of carbon-intensive AI has increased to support core academic practices of both staff and students. This has raised concerns about the energy requirements of the large data centres that power AI activity, and about the lack of discussion of, and consideration for, AI’s disproportionate global environmental impact. It is well established that climate change affects communities unequally, with the most marginalised bearing the biggest burden. Some new data centres have been reported to draw scarce resources, such as water, from some of the world’s driest areas. Whilst the actual scale of greenhouse gas emissions, energy usage and carbon footprint is hotly contested, concerns remain that even if AI were carbon neutral, there would be social and professional impacts from AI replacing job roles in inequitable ways within society.
What can we do?
These reflections highlight ways in which inequalities in health are likely to widen through the uncritical adoption of AI in research. Sharing learning across research institutions in all sectors, including meaningful engagement with different publics, could help inform a core set of principles supporting more equitable access to, and more critical use of, AI. This could be reinforced by continually updated guidance as the technology and its applications develop, helping to bring to life a just and climate-neutral process for adopting AI in the higher education and research sector. This is crucial in shaping the impact of AI on people’s livelihoods, healthcare access and safe living environments. Research organisations of every type have a responsibility to shape the ways in which AI is integrated into knowledge infrastructure.
The authors are all affiliated with the Health Equity and Inclusion Group at the University of Sheffield

