Technology

AI is set to revolutionise cancer treatment, but at what cost?

6 Mins read

In 2023, new forms of AI outperformed human radiologists, but medical professionals warn that the use of new technologies could pose new risks.

In recent years, the modern world has embraced AI as an aid to humans. In medicine, AI has already been used to analyse data far faster than humans can.

Now, with the development of AI models known as ‘Transformers’ – AI models that have the capacity for self-supervised learning and for predicting future outcomes – scientists predict that AI will become more efficient than humans not only at analysing medical data, but also at using it to formulate treatment for patients.

In a lecture at Gresham College, Dr Richard Sidebottom, a radiologist with over 20 years of experience in the field, expressed both excitement and scepticism about the use of these new technologies.

Sidebottom highlighted the progress being made in cancer imaging with the help of AI that operates at a narrower, task-specific level.

Dr Richard Sidebottom, Radiologist, Researcher, and advisor to Google Health [Gresham College]

He shared the findings of a Swedish study from 2023 that analysed the effectiveness of using AI in breast cancer imaging: “preliminary results indicate an increase in cancer detection rates without an increase in false positives, and a significant reduction in overall screen reading volume.”

Whilst the findings in this case were positive, the doctor explained: “The effectiveness of these networks largely depends on the quantity and quality of the data they are trained on. Datasets used in the context of breast imaging analysis typically contain tens or hundreds of thousands, or even millions, of images.”

He concluded that the effectiveness of these technologies would be far more limited in detecting rarer cancers, for which far less medical data is available to train on.

Where it gets particularly interesting is when we consider introducing the ‘Transformer’ models of AI into healthcare practice.

The ‘Transformer’ architecture, which underpins what are now known as ‘Large Language Models’, was first developed by Google in 2017 for the purpose of translating language.

Now developed enough to create multimodal data, including imaging, these new models of AI hold the prospect of producing tailored predictions for each patient as to how medical conditions may develop and how they might be treated.

“These models could significantly enhance diagnostic accuracy and treatment efficacy.”

Dr Richard Sidebottom

Dr Richard Sidebottom explained: “By integrating data from various sources – including family history, general healthcare information, genomics, and medical imaging – these models could significantly enhance diagnostic accuracy and treatment efficacy.”

He continued: “Integrating radiological and pathological imaging with electronic health records could lead to more accurate predictions of treatment outcomes, potential side effects, and tailoring post-treatment surveillance.”

AI regularly outperforms humans in spotting potential cancer cells [Unsplash: National Cancer Institute]

Despite the optimism, the doctor had multiple reservations regarding the integration of this type of AI into medical settings.

“Ensuring the reliability of AI systems before their widespread use is critical… Continuous performance monitoring is essential to detect and mitigate any adverse outcomes,” says Dr Sidebottom.

“For instance, when changes in mammography machines affect AI performance, recalibration or additional training of the system may be required. This poses a challenge in scenarios where new technology is introduced without historical data for AI training, particularly in resource-constrained settings like the NHS.”

A look into the NHS’s work towards integrating AI shows that this issue has been considered.

As part of the 2023 government investment of £21 million into AI integration in healthcare, the NHS has initiated a project piloting an ‘AI Deployment Platform’, which the NHS describes as “a store for AI medical imaging technologies” that hospitals will be able to use “to choose approved AI medical imaging technologies from a range of vendors.”

The creation of the deployment platform aims to “support NHS organisations by removing the need for local deployment and speed up the time AI tools are ready for use within their hospitals”, as well as to “help smooth the introduction of novel AI technologies”.

“If AI systems continue to improve, there may be ethical dilemmas”

Dr Richard Sidebottom

However, no update on the project has been published since its launch, so its success remains to be seen.

Dr Richard Sidebottom explains that the workings of AI can be likened to a black box as “the reasoning behind AI decisions is often not transparent.”

He warns that “if AI systems continue to improve, there may be ethical dilemmas if human intervention in AI decisions, on average, leads to suboptimal patient outcomes.”

The radiologist explained that situations may arise where human intervention benefits an individual patient but harms the performance of the AI for future use – a real conundrum for doctors working with the technology.

The doctor also expressed concern as to who will benefit in a world where AI is integral to healthcare.

“AI systems are developed using anonymised patient data, and the resulting products are controlled by private entities. This raises questions about data ownership, privacy, and the equitable distribution of AI’s benefits.”

He proposed that there should be “more democratic control and oversight of AI technologies in healthcare.”

AI offers potential benefits, if used with caution [Unsplash: Alexander Sinn]

This is a sentiment that is shared by other professionals in the industry.

As AI data expert Jason Cohen described in a 2021 article on the future of patient data: “Ownership of health care data is now a byproduct of attempts by holders of the data (hospitals, insurers, device manufacturers, and internet behemoths) to monetize the resource.”

Cohen suggests that, in a system where patients own their own data: “instead of being the target of interventions aimed at them, patients who own their data become self-advocates, better able to articulate their personal preferences and achieve self-determined health goals.”

However, the AI expert warns that “prying control of patient data away from hospitals, insurers, and EMR vendors will not be easy.”

The concerns held by these experts reflect the overall feeling of the public. In a study conducted by the Institute for Government in 2023, 52% of participants felt that the government would not use their data for their benefit, and 37% felt a lack of control over data about them.

“Prying control of patient data away from hospitals, insurers, and EMR vendors will not be easy.”

Jason Cohen, AI data expert

The webpage for the NHS’s AI lab offers some reassurance that these issues are being tackled.

Currently, the NHS is partnering with the Ada Lovelace Institute to develop an Algorithmic Impact Assessment. This will see “researchers and developers engaging with patients and the public about the risks and benefits of their proposed AI solutions, prior to gaining access to medical imaging data for training or testing.”

Although this is a positive step towards giving patients some autonomy over the use of their data, it does not indicate that a transition towards patient-owned data is on the cards any time soon.

Data security is not the only data-related concern for prospective patients being diagnosed and treated with the help of AI. Worries that AI may discriminate as a result of data bias are also being raised across the industry.

Although AI has the potential to increase equity in healthcare, if left to its own devices it may do the opposite: by predicting outcomes based on past data, it risks repeating, and even amplifying, existing inequalities.

The NHS’s AI lab recognises that “there is a particular risk of algorithmic bias worsening outcomes for minority ethnic patients”, but work is underway to prevent this from happening.

“We might just see entirely new approaches to some aspects of healthcare.”

Dr Richard Sidebottom

In 2021, the NHS funded four projects that aimed to “facilitate the adoption of AI that serves the health needs of minority ethnic communities.”

The strategies developed in the projects “may include mitigating the risks of perpetuating and entrenching racial health inequalities through data collection and selection and during the development, testing, and deployment stages.”

Dr Richard Sidebottom concluded his talk by reflecting on the prospect of an AI revolution in healthcare: “It is underway but not yet very tangible in its impact in the clinic. We can see that the tools now beginning to be deployed look like they will have real clinical benefit. However, the implementation of these technologies in clinical settings is still in its infancy, and a cautious approach is warranted.”

Despite his reservations, the doctor left on a positive note: “If the new generations of AI technologies are able to deliver anything like their promise, then we might just see entirely new approaches to some aspects of healthcare.”

There appear to be good reasons to feel cautiously optimistic about the future of AI in healthcare, with a strong emphasis on ‘cautiously’.

It is apparent that the risks of this revolution are many and, although work is clearly underway to mitigate them, it is highly likely that some people, perhaps many, will suffer along the way.

Nonetheless, this is the nature of revolution, and the use of AI in healthcare is something we should support.

Provided we are diligent in asking how our personal data is used in clinics, and as long as medical organisations continue to mitigate the risks, welcoming AI into healthcare holds the potential to make way for a new and improved world of medicine.

  • To read more about what the NHS is doing to prevent inequality and achieve equity in healthcare using AI, click here.
  • To read more about Dr Richard Sidebottom, and to read a transcript or watch a video recording of his talk at Gresham College, click here.

Featured image by Julien Tromeur via Unsplash
