Published: 17 October 2024
Author: Samar Betmouni
Read time: 9 mins
This article explores the issue of equity in the use of artificial intelligence in pathology. Pathologists must ensure that these new technologies do not maintain or even intensify existing health disparities.
There is much to be excited about as pathology evolves, as it has done over many decades, to provide innovative and impactful diagnostic solutions. One need only look at the transformation of digital pathology conference agendas over the past 10 years, with their growing artificial intelligence (AI) research content and burgeoning commercial sponsorship.
There have been significant advances in the development of AI as a diagnostic tool in pathology.1 Histopathologists’ attitudes to the application of AI in pathology have been explored; there is consensus that AI will form part of our diagnostic repertoire in the future.2 However, there are still technical and systems challenges that must be addressed before routine deployment of AI in diagnostic practice can become a reality across all diagnostic pathology services.
We often refer to the NHS as providing a ‘cradle to grave’ service. Pathology is at the heart of this. With advances in personalised medicine, pathology can more accurately be considered as providing a ‘pre-womb to tomb’ service.3
The World Health Organization defines health equity as "the absence of unfair and avoidable or remediable differences in health among population groups defined socially, economically, demographically or geographically or by other dimensions of inequality (e.g. sex, gender, ethnicity, disability, or sexual orientation)."4
While there are many potential benefits of AI in pathology, we must assess its role as a diagnostic tool in the context of health equity.
A lack of representative data is a significant potential obstacle to ensuring that algorithms are fit-for-purpose. As Dr Sara Khalid says, “because AI-based healthcare technology depends on the data that is fed into it, a lack of representative data can lead to biased models that ultimately produce incorrect health assessments.”5
Bias can be introduced at one or more of the many steps of AI tool development: data type, collection and preparation; machine learning (the techniques used to train AI algorithms) model development and evaluation; and, finally, the clinical deployment stage. Bias can also arise from system-wide factors, such as workforce diversity or how research agendas are developed, and from healthcare organisations themselves, e.g. the provision of appropriate IT infrastructure.
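To make the data collection and preparation step concrete, below is a minimal sketch (in Python, assuming pandas) of one check that could be run before model development: comparing the demographic make-up of a training cohort against reference population proportions. The column name, group labels and figures are all hypothetical, and this is an illustration rather than a validated tool.

```python
# Illustrative check at the data-preparation step: is each subgroup
# represented in the training cohort in line with the population the
# diagnostic tool is meant to serve? All names and figures are hypothetical.
import pandas as pd

def representation_report(cohort: pd.DataFrame,
                          column: str,
                          reference: dict) -> pd.DataFrame:
    """Compare subgroup proportions in the cohort with reference proportions."""
    observed = cohort[column].value_counts(normalize=True)
    report = pd.DataFrame({"observed": observed,
                           "reference": pd.Series(reference)}).fillna(0.0)
    # A positive shortfall flags a group that is under-represented relative
    # to the reference population and may be underserved by the trained model.
    report["shortfall"] = report["reference"] - report["observed"]
    return report.sort_values("shortfall", ascending=False)

# Usage with made-up numbers: a cohort that under-represents two groups.
cohort = pd.DataFrame({"ethnicity": ["A"] * 90 + ["B"] * 7 + ["C"] * 3})
print(representation_report(cohort, "ethnicity",
                            {"A": 0.80, "B": 0.12, "C": 0.08}))
```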
An international review has identified that databases from the United States and China are over-represented in clinical AI and that the majority of these are for image-rich healthcare specialities.6 Furthermore, the top 10 databases and author nationalities were from high-income countries. In terms of authorship, authors are predominantly male and from non-clinical backgrounds.
A study from the Institute of Global Health Innovation identified that minority ethnic groups in the UK are underserved by technology and that this has a basis in the way that data is collected and how the research agenda is prioritised.7 In general, this study found that the data is unrepresentative and does not account for social categories, or may even misrepresent them.
The prioritisation of the research agenda is an interesting white paper finding, because it highlights structural issues that may exacerbate inequity: lack of diversity at strategic levels in an organisation (e.g. NHS, funding body, policymakers), as well as lack of diversity in the AI workforce itself.
The lack of diversity in the AI workforce is an important issue to consider further, particularly as it is one that the College can play a significant role in addressing. Diversity in the AI workforce, in the commercial sector at least, remains a significant challenge internationally, something that is recognised by the industry (Box 1).8 This workforce disparity is also likely to be operating in many science, technology, engineering and mathematics (STEM) subjects. Certainly, this is supported by a recent House of Commons Science & Technology Committee report, Diversity & Inclusion in STEM,9 which concluded that STEM has ‘a diversity problem’ (Box 2). This is an area that is still in need of improvement if we are to capitalise on the potential of AI in our diagnostic workflows.
Box 1
Box 2
It is possible to mitigate bias by interrogating each of the steps during algorithm development and monitoring performance after deployment,10 as outlined by the US Food and Drug Administration Action Plan in 2021.11 The NHS has provided guidance on ‘how to get AI right’12 and on supporting safe clinical practice – this is predominantly around data and AI governance and approaches to promoting the adoption of AI in healthcare.
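As a concrete illustration of what post-deployment monitoring can involve, the sketch below computes sensitivity and specificity per patient subgroup, so that degraded performance in one group is visible rather than being averaged away in a single overall figure. The field names ('group', 'truth', 'prediction') are assumptions made for illustration; this is not drawn from the FDA or NHS guidance cited above.

```python
# Illustrative post-deployment monitoring: per-subgroup sensitivity and
# specificity for a binary diagnostic classifier. Field names are hypothetical.
import pandas as pd

def subgroup_performance(results: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Sensitivity and specificity per subgroup, for 0/1 truth and prediction."""
    rows = []
    for group, g in results.groupby(group_col):
        tp = int(((g["prediction"] == 1) & (g["truth"] == 1)).sum())
        fn = int(((g["prediction"] == 0) & (g["truth"] == 1)).sum())
        tn = int(((g["prediction"] == 0) & (g["truth"] == 0)).sum())
        fp = int(((g["prediction"] == 1) & (g["truth"] == 0)).sum())
        rows.append({group_col: group,
                     "n": len(g),
                     "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
                     "specificity": tn / (tn + fp) if (tn + fp) else float("nan")})
    return pd.DataFrame(rows)
```

Reviewing such a breakdown at regular intervals, rather than a single aggregate accuracy figure, is one practical way to surface group-specific performance drift after deployment.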
The NHS recognises that issues of equity and avoidance of bias are important for the development of effective healthcare AI tools. To deliver this, the NHS AI Lab seeks to provide a collaborative environment where barriers to development of AI can be addressed. Within this, there are specific workstreams to focus on opportunities where AI can be used to address health inequalities, optimise datasets and improve approaches to development, testing and deployment of AI in healthcare.13
Other organisations, like the Ada Lovelace Institute, are also proposing approaches to build trust in healthcare AI systems by prompting developers "to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data."14
The objectives here are to help build public trust, mitigate potential harm, maximise potential benefit and avoid exacerbation of health and social inequalities. This so-called algorithm impact assessment, therefore, aims to avoid undermining patient consent and provides a toolkit through which it is possible to assess the societal impact of AI before its clinical implementation.
The College position statement on the use of AI in pathology15 highlights support for its clinically led development and use, emphasising the need for a ‘human in the loop’ model in the short term. It recognises the many ethical challenges that are raised by the use of AI and includes health equity as an issue that will require "ongoing multi-stakeholder dialogue across the medical sciences, computer science, the social sciences, public policy, and patient and public involvement."
The Topol Review recognises the importance of adopting new healthcare technologies in a "spirit of equality and fairness."16 One approach to doing this is to ensure that the healthcare workforce is appropriately trained and skilled to implement these new technologies. This will require investment in workforce training.
Technical resources across health and care must also be up to standard. A failing IT infrastructure is not an appropriate foundation for delivering and sustaining large-scale technology projects. Moreover, a fragile IT infrastructure represents "a clear and present threat to patient safety that also limits the potential for future transformative investment in healthcare."17
The contribution of AI in healthcare and its impact on patients should not be overlooked. The ethical perspective is highlighted as a crucial consideration by McCradden and Kirsch.18 In this review, the authors suggest that clinicians’ assessment of the efficacy of an algorithm should go beyond its accuracy to ensure that decisions are made in patients’ interests by considering the "historical patterns, societal inequalities and biases such as racism, sexism and ableism, which may influence medical recommendations and decisions."18
AI has the potential to transform healthcare. In pathology, there is an opportunity to revolutionise the way we work to meet the significant challenges that our services face. However, we must keep a critical eye on this promise amid the hyperbole that surrounds AI. This critical eye is not a signal to avoid or delay implementation; it is a necessary means of ensuring that these emerging diagnostic tools are safe, accessible to all of our patients and acceptable to wider society, and that they do not worsen health inequalities. These tools also need to be usable and trusted by the workforce charged with deploying them in diagnostic workflows.
The main challenges to be addressed are ensuring that:
- data is representative
- research agendas are inclusive
- workforces are diverse
- workforce training is up to date
- IT infrastructure is functional
- public trust in use of AI is optimised
- innovative diagnostics are available to all.
There is exciting and innovative research in AI and computational pathology. There also needs to be a parallel endeavour that takes us beyond a technology-centric view, to bring together multiple stakeholders to help inform the ethics, health equity, patient and workforce impacts of AI in our practice. I believe that such a systems-based approach will help focus how we can best capitalise on the promise of AI in pathology.
References available on our website.