AMA unveils ethical framework for healthcare AI
A version of this article originally appeared on Medical Economics.
In response to the burgeoning field of augmented intelligence (AI) in healthcare, the American Medical Association (AMA) has introduced a set of principles designed to guide the responsible development and application of this technology.
Takeaways
- The American Medical Association (AMA) recognizes the immense potential of healthcare augmented intelligence (AI) and has issued principles to guide its responsible development.
- Key areas addressed include the need for comprehensive government-wide oversight, mandated transparency in AI processes, and disclosure when AI directly impacts patient care.
- The AMA emphasizes the importance of anticipating and minimizing potential negative effects of generative AI, urging healthcare organizations to have policies in place prior to adoption.
- Privacy and security considerations are highlighted, with AI developers urged to design systems with privacy in mind, implementing safeguards against cybersecurity threats for reliability and safety.
- The association advocates for limiting physician liability for AI-enabled technologies and for proactively identifying and mitigating bias in AI algorithms, and it urges payers not to use automated systems in ways that restrict access to care or override clinical judgment.
The organization acknowledges the significant potential of AI in improving diagnostic accuracy, treatment outcomes, and patient care. However, recognizing the ethical considerations and potential risks associated with such transformative power, the AMA aims to foster a proactive and principled approach to oversee and govern healthcare AI.
AMA President Jesse M. Ehrenfeld, MD, MPH, emphasized the importance of these principles in shaping discussions with legislators and industry stakeholders to formulate policies regulating the use of AI in healthcare. The key areas addressed by the principles are as follows:
- Oversight: The AMA advocates for a comprehensive government-wide approach to formulate policies that mitigate risks associated with healthcare AI. It recognizes the role of non-government entities in ensuring appropriate oversight and governance.
- Transparency: The principles stress the need to mandate disclosure of key information about the design, development, and deployment of AI systems, including potential sources of inequity. Transparency is deemed essential to establish trust among patients and physicians.
- Disclosure and Documentation: The statement calls for appropriate disclosure and documentation when AI directly influences patient care, medical decision-making, access to care, communications, or the medical record.
- Generative AI: Healthcare organizations are encouraged to develop and adopt policies that anticipate and minimize potential negative effects of generative AI, and to have these policies in place before adoption.
- Privacy and Security: AI developers are urged to design systems with privacy in mind. Both developers and healthcare organizations must implement safeguards to assure patients that their personal information is handled responsibly. Strengthening AI systems against cybersecurity threats is emphasized for reliability, resiliency, and safety.
- Bias Mitigation: The AMA calls for proactive identification and mitigation of bias in AI algorithms to promote equitable healthcare outcomes and foster a fair and inclusive healthcare system.
- Liability: The association advocates for limiting physician liability for the use of AI-enabled technologies, aligning with current legal approaches to medical liability.
Additionally, the statement urges payers not to use automated decision-making systems in ways that restrict access to necessary care or withhold care from specific groups. It emphasizes that these systems must not override clinical judgment and that human review of individual circumstances must be retained.
This article was written with the help of ChatGPT.