WHO’s new regulatory guidance highlights risks of AI in healthcare: GlobalData
These include unethical data collection and cybersecurity risks, among others.
New policy considerations published by the World Health Organization (WHO) highlight the risks of integrating artificial intelligence (AI) tools into healthcare, GlobalData reported.
The WHO’s new publication outlines the need for states to establish safe and effective AI tools and to foster dialogue among AI developers and users.
“AI has already improved several devices and systems, and there are so many benefits of AI. However, there are risks too with these tools and the rapid adoption of them,” Alexandra Murdoch, Senior Analyst at GlobalData, said.
AI in healthcare systems allows access to personal and medical information, which could give rise to challenges such as unethical data collection, cybersecurity risks, and amplified biases and misinformation.
“The use of false medical information is deeply concerning and could lead to a number of issues, including misdiagnoses or improper treatment for Black patients,” Murdoch added.
The WHO has outlined six areas for the regulation of AI for health: transparency and documentation; risk management; validating data and being clear about the intended use of AI; a commitment to data quality; privacy and data protection; and fostering collaboration.