PLMD-QA: Intelligent Question-Answering System Integrating Large Language Models with Dynamic Medical Knowledge Graphs
Abstract
PLMD-QA is an intelligent question-answering system that integrates large language models (LLMs) with a dynamic medical knowledge graph to provide rapid and accurate answers for healthcare practitioners and patients. To address the limitations of traditional medical knowledge graphs in coverage and real-time updating, a large-scale dynamic medical knowledge graph is constructed from multi-source data, covering entity categories such as diseases, medications, and symptoms. An information filtering module, Med-BERT, restricts the system's responses to the medical domain, improving both efficiency and accuracy. The system combines the RBBSC named entity recognition model, an entity alignment module, and the RBTRC intent recognition model to extract the key elements of a user's question and align them with the medical knowledge graph. In addition, a new question-answering paradigm based on P-Tuning is proposed; it mitigates the catastrophic forgetting problem common in traditional fine-tuning while improving the standardization and interpretability of the content generated by the LLM. Experimental evaluations show that the proposed system outperforms mainstream models on entity extraction and intent recognition, achieving higher precision (P) and F1 scores. Comparisons with ChatGLM2 and ChatGPT further indicate significant improvements in efficiency and response quality. These results demonstrate the system's ability to deliver specialized, high-quality answers tailored to the medical domain, making it a practical solution for intelligent medical question-answering applications.
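The abstract describes a pipeline of domain filtering, entity extraction, intent recognition, knowledge-graph lookup, and prompt construction for the LLM. The minimal Python sketch below illustrates how such a pipeline could be wired together; all function names, keyword lists, and toy graph facts are illustrative placeholders, not the authors' Med-BERT, RBBSC, RBTRC, or knowledge-graph implementations.

```python
# Illustrative sketch of a PLMD-QA-style pipeline, under assumed interfaces.
# Module names and data are hypothetical stand-ins for the components
# described in the abstract (Med-BERT filter, RBBSC NER, RBTRC intent
# recognition, knowledge-graph lookup, and P-Tuned LLM prompting).

from typing import Dict, List


def med_domain_filter(question: str) -> bool:
    """Stand-in for the Med-BERT filter: keep only medical-domain queries."""
    medical_keywords = {"symptom", "disease", "drug", "medication", "diabetes", "fever"}
    return any(word in question.lower() for word in medical_keywords)


def extract_entities(question: str) -> List[str]:
    """Stand-in for the RBBSC named entity recognition model."""
    known_entities = {"diabetes", "fever", "metformin"}
    return [e for e in known_entities if e in question.lower()]


def recognize_intent(question: str) -> str:
    """Stand-in for the RBTRC intent recognition model."""
    if "treat" in question.lower() or "medication" in question.lower():
        return "disease_to_drug"
    return "disease_to_symptom"


def query_kg(entities: List[str], intent: str) -> Dict[str, List[str]]:
    """Stand-in for a lookup against the dynamic medical knowledge graph."""
    toy_graph = {
        ("diabetes", "disease_to_drug"): ["metformin", "insulin"],
        ("diabetes", "disease_to_symptom"): ["increased thirst", "fatigue"],
    }
    return {e: toy_graph.get((e, intent), []) for e in entities}


def build_prompt(question: str, kg_facts: Dict[str, List[str]]) -> str:
    """Combine retrieved graph facts with the user question so a P-Tuned
    LLM can generate a grounded, interpretable answer."""
    facts = "; ".join(f"{k}: {', '.join(v)}" for k, v in kg_facts.items() if v)
    return f"Known medical facts: {facts}\nQuestion: {question}\nAnswer:"


def answer(question: str) -> str:
    """Run the full pipeline and return the assembled LLM prompt."""
    if not med_domain_filter(question):
        return "Sorry, this system only answers medical questions."
    entities = extract_entities(question)
    intent = recognize_intent(question)
    kg_facts = query_kg(entities, intent)
    prompt = build_prompt(question, kg_facts)
    # In the real system, `prompt` would be passed to the P-Tuned LLM;
    # here it is returned directly to show the assembled context.
    return prompt


if __name__ == "__main__":
    print(answer("What medication is used to treat diabetes?"))
```

In this sketch, the knowledge-graph facts retrieved for the recognized entities and intent are injected into the prompt, which reflects the abstract's claim that grounding the LLM in graph content makes the generated answers more standardized and interpretable.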