WHO calls for safe and ethical artificial intelligence for public health

The World Health Organization (WHO) is urging caution in the use of artificial intelligence (AI)-generated large language model tools (LLMs) to safeguard human well-being, safety, autonomy, and public health.

These LLMs, such as ChatGPT, Bard, BERT, and others, simulate human communication and are rapidly gaining popularity.

While their potential to support healthcare needs is exciting, it is crucial to carefully examine the risks associated with using LLMs to improve access to health information, as decision-support tools, or to enhance diagnostics in resource-limited settings.

This scrutiny is essential to protect people’s health, reduce inequities, and ensure the responsible adoption of new technologies.

Although WHO acknowledges the potential benefits of using technologies like LLMs to assist healthcare professionals, patients, researchers, and scientists, there is a concern that the caution typically exercised with new technologies is not consistently applied to LLMs.

It is vital to uphold values such as transparency, inclusion, public engagement, expert supervision, and rigorous evaluation when utilizing LLMs.

Rushing the adoption of untested systems could lead to errors by healthcare workers, harm patients, erode trust in AI, and hinder the long-term benefits of these technologies worldwide.

Several concerns necessitate strict oversight to ensure the safe, effective, and ethical use of LLMs:

Biased training data: AI trained on biased data may generate misleading or inaccurate health information, posing risks to equity, inclusiveness, and health outcomes.

Inaccurate responses: LLMs can produce authoritative-sounding and plausible but incorrect responses, particularly regarding health-related matters.

Consent and data protection: LLMs may be trained on data without proper consent and may fail to adequately safeguard sensitive user-provided information, including health data.

Dissemination of disinformation: LLMs can be misused to create and spread highly convincing disinformation, making it challenging for the public to distinguish between reliable health content and falsehoods.

Patient safety and protection: While WHO supports the use of new technologies, including AI and digital health, for improving human health, policymakers must prioritize patient safety and protection as technology firms seek to commercialize LLMs.

WHO recommends addressing these concerns and demanding clear evidence of the benefits before widely implementing LLMs in routine healthcare and medicine, whether by individuals, care providers, or healthcare system administrators and policymakers.

Ethical principles and appropriate governance, as outlined in the WHO guidance on the ethics and governance of AI for health, should be followed when designing, developing, and deploying AI for health.

WHO identifies six core principles: (1) protecting autonomy, (2) promoting human well-being, safety, and the public interest, (3) ensuring transparency, explainability, and intelligibility, (4) fostering responsibility and accountability, (5) ensuring inclusiveness and equity, and (6) promoting responsive and sustainable AI.
