Sun. May 5th, 2024

The World Health Organization (WHO) has released guidance on the ethical use and governance of Large Multi-Modal Models (LMMs) in healthcare, acknowledging the transformative impact of generative Artificial Intelligence (AI) technologies such as ChatGPT, Bard, and BERT.

Large Multi-Modal Models (LMM)

  • LMMs are models that process multiple data modalities to mimic human-like perception. This allows AI (Artificial Intelligence) to respond to a wider range of human communication, making interactions more natural and intuitive.
  • LMMs integrate multiple data types, such as text, images, audio, and other heterogeneous data. This allows the models to understand images, videos, and audio, and to converse with users.
  • Some examples of LMMs include GPT-4V, Med-PaLM M, DALL-E, Stable Diffusion, and Midjourney.

WHO’s Guidelines Regarding the Use of LMMs in Healthcare

  • The new WHO guidance outlines five broad applications of LMMs in healthcare:
  • Diagnosis and clinical care, such as responding to patients’ written queries;
  • Patient-guided use, such as investigating symptoms and treatment options;
  • Clerical and administrative tasks, such as documenting and summarizing patient visits within electronic health records;
  • Medical and nursing education, including providing trainees with simulated patient encounters; and
  • Scientific research and drug development, including identifying new compounds.
  • The Indian Council of Medical Research (ICMR) issued ethical guidelines for AI in biomedical research and healthcare in June 2023.

Concerns Raised by WHO about LMMs in Healthcare

Rapid Adoption and Need for Caution

  • LMMs have experienced unprecedented adoption, surpassing the pace of any previous consumer technology.
  • LMMs are known for their ability to mimic human communication and to perform tasks without explicit programming.
  • However, this rapid uptake underscores the critical importance of carefully weighing their benefits against potential risks.

Risks and Challenges

  • Despite their promising applications, LMMs pose risks, including the generation of false, inaccurate, or biased statements that could misguide health decisions.
  • The data used to train these models can suffer from quality or bias issues, potentially perpetuating disparities based on race, ethnicity, sex, gender identity or age.

Accessibility and Affordability of LMMs

  • There are broader concerns as well, such as the accessibility and affordability of LMMs, and the risk of automation bias (the tendency to rely too heavily on automated systems) in healthcare, which can lead professionals and patients to overlook errors.

Cybersecurity

  • Cybersecurity is another critical issue, given the sensitivity of patient information and the reliance on the trustworthiness of these algorithms.

Key Recommendations of WHO Regarding LMMs

  • Called for a collaborative approach involving governments, technology companies, healthcare providers, patients, and civil society in all stages of LMM development and deployment.
  • Stressed the need for cooperative global leadership to regulate AI technologies effectively: governments of all countries must work together to regulate the development and use of AI technologies such as LMMs.
  • The new guidance offers a roadmap for harnessing the power of LMMs in healthcare while navigating their complexities and ethical considerations.
  • In May 2023, the WHO had highlighted the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing, and deploying AI for health.

The six core principles identified by WHO are:

  1. Protect autonomy
  2. Promote human well-being, human safety, and the public interest
  3. Ensure transparency, explainability, and intelligibility
  4. Foster responsibility and accountability
  5. Ensure inclusiveness and equity
  6. Promote AI that is responsive and sustainable
