An evaluation of GPT models for phenotype concept recognition
- Perth Children’s Hospital (Australia); Telethon Kids Institute (Australia); Curtin Univ., Perth, WA (Australia); SingHealth Duke-NUS Institute of Precision Medicine (Singapore)
- Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States)
- King Edward Memorial Hospital (Australia)
- Perth Children’s Hospital (Australia); Telethon Kids Institute (Australia); King Edward Memorial Hospital (Australia); University of Western Australia (Australia)
- University of Colorado, Aurora, CO (United States)
- Jackson Laboratory for Genomic Medicine, Farmington, CT (United States); University of Connecticut, Farmington, CT (United States)
Clinical deep phenotyping and phenotype annotation play a critical role both in the diagnosis of patients with rare disorders and in building computationally tractable knowledge in the rare disorders field. These processes rely on ontology concepts, often from the Human Phenotype Ontology (HPO), in conjunction with a phenotype concept recognition task (usually supported by machine learning methods) to curate patient profiles or existing scientific literature. Given the significant shift towards large language models (LLMs) for most NLP tasks, we examine the performance of the latest Generative Pre-trained Transformer (GPT) models underpinning ChatGPT as a foundation for clinical phenotyping and phenotype annotation. The experimental setup of the study comprised seven prompts of varying specificity, two GPT models (gpt-3.5-turbo and gpt-4.0) and two established gold-standard corpora for phenotype recognition, one consisting of publication abstracts and the other of clinical observations. The best run, using in-context learning, achieved a document-level F1 score of 0.58 on publication abstracts and 0.75 on clinical observations, as well as a mention-level F1 score of 0.7, which surpasses the current best-in-class tool. Without in-context learning, however, performance falls significantly below that of existing approaches. Our experiments show that gpt-4.0 surpasses state-of-the-art performance when the task is constrained to a subset of the target ontology for which there is prior knowledge of the terms expected to be matched. While the results are promising, the non-deterministic nature of the outcomes, the high cost and the lack of concordance between different runs using the same prompt and input make the use of these LLMs challenging for this particular task.
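The in-context learning setup described above can be sketched as a few-shot chat prompt that asks the model to tag phenotype mentions with HPO IDs, followed by a parsing step over the free-text reply. This is a minimal, hypothetical illustration, not the paper's actual prompts: the example sentence, annotation pairs, and output format below are assumptions.

```python
import re

# Illustrative few-shot examples (assumed, not from the paper's corpora):
# each pairs an input text with its gold phenotype mentions and HPO IDs.
FEW_SHOT_EXAMPLES = [
    ("The patient presented with seizures and microcephaly.",
     [("seizures", "HP:0001250"), ("microcephaly", "HP:0000252")]),
]

def build_messages(text):
    """Assemble chat messages: an instruction, in-context examples, then the query."""
    messages = [{"role": "system",
                 "content": ("Identify phenotype mentions in the text and label "
                             "each with its HPO ID as 'mention -> HP:NNNNNNN'.")}]
    for example_text, annotations in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": example_text})
        messages.append({"role": "assistant",
                         "content": "\n".join(f"{m} -> {h}" for m, h in annotations)})
    messages.append({"role": "user", "content": text})
    return messages

def parse_response(reply):
    """Extract (mention, HPO ID) pairs from the model's free-text reply."""
    return re.findall(r"(.+?)\s*->\s*(HP:\d{7})", reply)
```

The message list would then be sent to a chat-completion endpoint (e.g. for gpt-3.5-turbo or gpt-4); because the reply is free text rather than structured output, a post-hoc parsing step such as `parse_response` is needed before mention-level scoring against a gold standard.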
- Research Organization:
- Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States)
- Sponsoring Organization:
- USDOE Office of Science (SC), Basic Energy Sciences (BES)
- Grant/Contract Number:
- AC02-05CH11231
- OSTI ID:
- 2470704
- Journal Information:
- BMC Medical Informatics and Decision Making (Online), Vol. 24, Issue 1; ISSN 1472-6947
- Publisher:
- BioMed Central
- Country of Publication:
- United States
- Language:
- English