In a recent review article published in The New England Journal of Medicine, researchers examine how human values can be incorporated into emerging artificial intelligence (AI)-based large language models (LLMs) and how these values can affect clinical decisions. Study: Medical Artificial Intelligence and Human Values.
LLMs are sophisticated AI tools that can perform a wide range of tasks, from writing compelling essays to passing professional examinations. Despite the growing utilization of LLMs, many healthcare professionals continue to express concerns about their application within the medical field due to confabulation, factual inaccuracy, and fragility. It also remains unclear whether "human values," which reflect human goals and behaviors, will be incorporated into the creation and use of LLMs.
How human values differ from and resemble LLM values must also be elucidated. To this end, the authors investigated the influence of human values on the creation of large language and AI models in the healthcare sector, noting that human and societal values have inevitably shaped the data used to train these models.
Some recent examples of AI models used in medicine include the automated interpretation of chest radiographs, the diagnosis of skin diseases, and algorithms for optimizing the allocation of healthcare resources. Generative Pretrained Transformer 4 (GPT-4) is an LLM that has been developed to consider the values of the vario.
