
WHO outlines responsible regulations needed for artificial intelligence in healthcare



The World Health Organization (WHO) recently published a new document outlining essential regulatory considerations for applying artificial intelligence (AI) in healthcare.

The publication underscores the importance of establishing the safety and efficacy of AI systems, making appropriate systems rapidly available to those who need them, and fostering dialogue among stakeholders, including developers, regulators, manufacturers, healthcare professionals, and patients.

AI systems and healthcare data

Given the growing abundance of healthcare data and the rapid advancements in analytic techniques, such as machine learning, logic-based approaches, and statistical methods, AI has the potential to revolutionise the healthcare sector.

The WHO acknowledges the transformative impact of AI in improving health outcomes through enhanced support for clinical trials, advancements in medical diagnosis and treatment, fostering self-care and person-centred care, and augmenting the knowledge and skills of healthcare professionals.

AI holds promise in addressing challenges in regions with a shortage of medical specialists, such as aiding in interpreting retinal scans and radiology images, among other applications.

The role of regulations

The deployment of AI technologies, including large language models, is occurring without a comprehensive grasp of how they may perform, which could either benefit or harm end-users, including healthcare professionals and patients.

AI systems’ utilisation of health data raises concerns about access to sensitive personal information, underscoring the need for strong legal and regulatory frameworks. The publication aims to assist in establishing and sustaining robust measures for ensuring privacy, security, and data integrity in AI applications in healthcare.

“Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation,” said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. “This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks.”


Responsible management of AI

In response to the increasing global demand for responsible management of the rapid proliferation of AI health technologies, the publication outlines six key areas for regulating AI in the health sector.

  • To instil trust, the document underscores the significance of transparency and documentation, advocating for comprehensive documentation throughout the entire product lifecycle and meticulous tracking of development processes.
  • In addressing risk management, considerations such as ‘intended use,’ ‘continuous learning,’ human interventions, model training, and cybersecurity threats must be thoroughly tackled, with a preference for simplifying models to the extent possible.
  • External validation of data and clarity regarding the intended use of AI are highlighted as essential measures to ensure safety and facilitate effective regulation.
  • A commitment to data quality, including rigorous pre-release evaluation of systems, is deemed crucial to prevent the amplification of biases and errors by AI systems.
  • The publication acknowledges the challenges posed by significant and intricate regulations like the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States, emphasising the importance of understanding jurisdictional scope and consent requirements to uphold privacy and data protection.
  • Encouraging collaboration among regulatory bodies, patients, healthcare professionals, industry representatives, and government partners is identified as a key strategy to ensure that products and services remain compliant with regulations throughout their lifecycles.

WHO guidelines

AI systems are intricate, depending not only on their underlying code but also on the data used to train them, which is drawn from settings such as clinical environments and user interactions. Improved regulation is essential to mitigate the risk of AI exacerbating biases present in training data.

For instance, challenges arise in accurately representing the diversity of populations within AI models, potentially resulting in biases, inaccuracies, or system failures. Regulatory measures can play a crucial role in addressing these risks by mandating the reporting of attributes such as gender, race, and ethnicity of individuals featured in training data, ensuring intentional efforts to make datasets representative.
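The WHO document does not prescribe any particular tooling for this kind of reporting. Purely as an illustrative sketch, the snippet below shows one way a developer might summarise the demographic composition of a training dataset before submitting it to a regulator; the pandas-based helper, the column names (gender, race, ethnicity), and the toy data are hypothetical assumptions, not part of the WHO guidance.

```python
# Illustrative sketch only: summarise the demographic composition of a
# training dataset's metadata so it can be reported and reviewed.
# Column names below are hypothetical, not taken from the WHO document.
import pandas as pd

def demographic_report(metadata: pd.DataFrame,
                       attributes=("gender", "race", "ethnicity")) -> dict:
    """Return the proportion of each value for the given attributes."""
    report = {}
    for attr in attributes:
        if attr in metadata.columns:
            # normalize=True yields proportions; dropna=False keeps missing
            # values visible, since gaps themselves are a reporting concern.
            report[attr] = (metadata[attr]
                            .value_counts(normalize=True, dropna=False)
                            .to_dict())
    return report

if __name__ == "__main__":
    # Toy example; real training-set metadata would be far larger.
    df = pd.DataFrame({
        "gender": ["female", "male", "female", "male"],
        "race": ["Black", "White", "Asian", "White"],
        "ethnicity": ["Hispanic", "Non-Hispanic", "Non-Hispanic", None],
    })
    print(demographic_report(df))
```

A summary like this could accompany model documentation, making it easier to spot groups that are under-represented in the data before the system is deployed.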

The newly released WHO publication aims to delineate fundamental principles for governments and regulatory authorities. These principles can guide the development of new guidance, or the adaptation of existing guidance, on AI at national and regional levels.
