A tool designed by Singapore General Hospital to prevent the overprescribing of antibiotics, which will help combat the global threat of multidrug-resistant infections.
Covid-19 mRNA vaccines and how the link with potentially fatal heart muscle or heart-lining inflammation could have been detected earlier.
These two healthcare scenarios have this in common: the potential of artificial intelligence (AI) and generative large language models (LLMs) to make healthcare safer for patients, whether through their system-wide capabilities or their ability to analyse vast amounts of data quickly.
However, current regulatory and governance approaches were built for AI tools designed to generate specific clinical decisions or recommendations, such as estimating a patient’s chance of developing kidney failure or identifying Covid-19 infection from chest X-rays.
In contrast, a single generative AI (GenAI) model can perform a wide range of tasks, from summarising medical information to suggesting diagnoses.
The Ministry of Health (MOH) has flagged the use of AI in healthcare as an emerging regulatory issue.
MOH, along with the Health Sciences Authority (HSA) and health tech agency Synapxe, co-developed the MOH Artificial Intelligence in Healthcare Guidelines (AIHGle), which set out best practices for the safe and responsible development and implementation of AI in healthcare settings.
AIHGle, a living document, was first published in October 2021 before the widespread introduction and adoption of generative AI tools such as ChatGPT. The emergence of these technologies has created new considerations around distinctive risks that need to be captured in the guidelines.
This is so that clinicians and patients can trust these tools and take advantage of the huge potential they offer.
One area where AI can make a difference is in helping doctors when they fill out prescriptions. AI can reduce unnecessary prescriptions and personalise dosages for individual patients – helping to ensure they take their medicine and minimising side effects.
Among many local efforts, SGH has developed Augmented Intelligence in Infectious Diseases, a model that encourages more judicious antibiotic prescribing. Decreasing unnecessary antibiotic use can shorten patients’ hospital stays and reduce in-hospital deaths.
Another significant area where AI can contribute is in what healthcare experts call “pharmacovigilance” – monitoring the effects of medical drugs after they have been licensed for use.
By automating the monitoring of data sources, including social media, AI can help identify early signals of harm. In the case of the Covid-19 mRNA vaccines, AI systems could have flagged heightened risks in adolescent and young adult males much earlier.
Then there is AI’s potential in detecting rare and undiagnosed diseases. By diving deep into electronic health records and clinical documents, AI systems can flag patterns that might otherwise be missed.
The use of generative AI systems in healthcare – something that is rapidly evolving – is not intended to replace clinicians, but to augment their decision-making, enhancing safety without removing human oversight. In theory, at least.
However, the tools are no immediate “wonder cure”. There are reasons why there is still significant apprehension over the use of generative AI in healthcare.
First, it is still early days in terms of research, and findings have yet to be rigorously tested in real-world clinical environments.
Then there is the well-known issue of AI generating plausible but factually incorrect information. This raises huge concerns, particularly in high-stakes settings like healthcare. There are also ethical issues, including bias in model training data and patient data privacy.
For example, ethical and regulatory concerns unique to generative AI, including model bias, lack of model transparency and lack of clarity over the source of training data, remain unaddressed in the current guidelines.
The fundamental risks of generative AI in healthcare are also becoming clearer. These include “known unknowns”: incorrect medical content, biased outputs, limited explainability, and vulnerability to adversarial or malicious cyber attacks.
The urgent need for safety guard rails against misleading AI advice was underscored by a recent case in the Annals of Internal Medicine: Clinical Cases.
A patient had suspected bromide toxicity, caused by excessive intake of bromide salts, which are used as sanitisers in spas and pools. The patient had consulted ChatGPT on how to reduce salt (sodium chloride) in his diet, and saw that chloride could be swopped with bromide – though ChatGPT would likely have meant the swop for purposes such as cleaning, not consumption.
In another example, Med-Gemini – a healthcare AI model built by Google – referenced a completely fictitious part of the brain, the “basilar ganglia”, an apparent confusion with the basal ganglia, structures that help control muscle movement. This raises significant concerns over the reliability of GenAI models, even when trained on medical datasets.
As these cases show, the healthcare sector is only beginning to grasp the risks of generative AI, and it is too early to gauge their full impact. Some risks have not even surfaced, and these “unknown unknowns” will warrant close regulatory vigilance as the technology evolves.
Singapore is among many countries finding that AI technology is outpacing regulatory guidelines. A global review covering 197 countries and territories found that nearly half still lack any form of AI-specific legislation or official guidance.
However, the good news is that several government initiatives are already under way here. Importantly, the AIHGle guidelines are being constantly reviewed to strengthen the governance of AI in healthcare. The review brings together key national and research bodies, including MOH, HSA and the Centre of Regulatory Excellence at Duke-NUS Medical School.
Earlier in 2025, the Infocomm Media Development Authority and its not-for-profit subsidiary, the AI Verify Foundation, launched an initiative to assess real-world applications of generative AI across industries including healthcare.
Singapore is also spearheading international partnerships to ensure AI adoption is ethically sound, trustworthy and safe across different healthcare systems.
For example, the CARE-AI initiative at Duke-NUS Medical School is developing a bioethics-centric assessment tool to ensure the fair and trustworthy implementation of AI prediction models in healthcare. By building consensus with a global community, it goes beyond disease diagnosis and prediction to cover less explored areas such as AI in drug discovery and development.
Such steps are crucial to building models that clinicians and patients can trust, but more still needs to be done beyond guidelines and regulations.
Healthcare institutions and universities are investing in workforce readiness by upskilling current staff and integrating AI training into medical education, but this calls for significant reforms to an already crowded medical curriculum.
More faculty with expertise in AI tools and their implications are needed to prepare students for AI-assisted medical practice.
With generative AI becoming widely adopted, curricula and assessment methods must be restructured to preserve critical thinking skills while streamlining or replacing purely knowledge-based modules that risk becoming obsolete. This ultimately allows students to accelerate their learning.
AI tools used in healthcare need to be thoughtfully assessed to ensure they deliver real value to patients. Economic impact assessments that capture both patient value and health system-wide benefits offer important insights.
Health technology assessments performed and published by the Agency for Care Effectiveness can be applied to the evaluation of AI tools before adoption, to ensure they deliver meaningful and cost-effective outcomes.
Beyond financial returns, it is equally important to account for less tangible benefits, such as reducing caregiver burden and enhancing the patient care experience. The impact of AI in healthcare extends beyond simply reducing manpower shortages.
This involves integrating advanced technologies smoothly into care without disrupting patient trust. To secure that trust, health systems need to be transparent about when and how AI tools are used, maintain strong safeguards for data privacy, and ensure that clinicians and patients, not algorithms, remain in control of decision-making.
By doing so, these technologies can help protect, and even strengthen, the vital physician-patient relationship.
GenAI and LLMs, though still evolving, hold the potential to pick up what the human eye may miss, pinpoint hidden risks and guide healthcare closer to an ideal future of zero harm to anyone under care.
This marks a shift from simpler, single-purpose tools that are relatively straightforward to regulate, to multi-functional, flexible systems.
It is a shift that demands a critical and comprehensive re-evaluation of current medical software regulations and healthcare governance frameworks.