Systematic Review Assistance Leveraging Background Knowledge and Language Models

A collaboration between USC Libraries and the Artificial Intelligence Institute of the University of South Carolina (AIISC)

Overview

The proliferation of artificial intelligence (AI) technologies, e.g., Microsoft’s recent integration of ChatGPT into its Bing search architecture, showcases AI’s immense potential for shaping information search and discovery for educational purposes. For example, students pursuing higher education at universities often need to review various subjects and topics systematically. For this, students consult expert librarians trained in finding and evaluating information in university libraries. Explainable AI systems have the potential to assist expert librarians in guiding student users through the systematic review process. Moreover, if developed responsibly and with input from expert humans, such a tool could help scale information literacy interventions whose reach is currently limited by employee availability (e.g., time and bandwidth limitations). We propose building an AI pipeline to assist librarians with structured, systematic review search processes. Our proposed pipeline will process student inputs in natural language and reformulate those inputs as structured queries using structured background knowledge. Furthermore, our system will generate explanations of the query reformulation for an expert human, who will be involved in developing the system to enable continuous feedback-based refinements.

The Systematic Review Process

The review process involves the following steps:

  1. Defining the research question: Formulating a clear, well-defined research question of appropriate scope. Students often need help defining the research problem precisely and interact with the expert librarian for this effort.
  2. Developing a review protocol/criteria: This step is often carried out in parallel with the first and results in defining the terminology and topics that inform the development of the research question.
  3. Developing inclusion and exclusion criteria: The student needs to determine whether the review will include a particular study; for this, they provide well-defined inclusion and exclusion criteria.

Steps 1, 2, and 3 correspond to the steps "identify the issue and determine the question" and "write a plan for the review (protocol)" in Figure 1. The remaining steps involve searching a database and using existing machine learning tools to help with the later stages, including article screening, data extraction, and risk-of-bias assessment. In this proposal, we aim to design AI technologies to help with steps 1, 2, and 3.

Proposed Methodology

Our proposed system leverages background knowledge and language-model-enabled tools such as ChatGPT to perform six steps. We will use the running example in Figure 2 to explain the six steps. The steps are as follows:

  1. Seed Concept Identification: We will use natural language processing tools to obtain seed concepts from the student’s query. In the example in Figure 2, the seed concept is Hepatitis A (a minimal extraction sketch follows this list).
  2. Concept and Relations Expansion using Background Knowledge: We will leverage our knowledge graph extraction tool to obtain subgraphs corresponding to the seed concepts in the student’s query. The figure shows subgraphs of concepts connected to the seed concept Hepatitis A and the relationships that connect them (a subgraph-expansion sketch follows this list). The next two steps are an advanced form of prompt engineering assistance.
  3. Query Expansion using the Relevant Terminology and Topics: From step 2, we obtain the expanded concepts and relations from path traversals on the knowledge graph (rooted in the seed concept). The figure shows the relationships (causes, diagnoses, affects, associated with, complicates) and concepts (e.g., Acetaminophen) obtained for the seed concept Hepatitis A.
  4. Query Suggestions to the Student: The concepts and relationships from step 3 are fed into a language model such as ChatGPT with an appropriate prompt to obtain a set of reformulated queries. The reformulated queries are presented as suggestions to the student. The figure shows the prompt to ChatGPT and the reformulated queries it generates, e.g., “What are the causes of Hepatitis A and how is it diagnosed?” (a prompt-and-reformulation sketch follows this list).
  5. Expert Librarian Review: The reformulated queries generated in step 4 are also submitted to the expert librarian for review. Librarians are experts in teaching others to find and evaluate information and regularly work through the query process; as a result, they possess an unparalleled understanding of user behavior and of the features and limitations of the systems they work with. Our system therefore presents the librarian with an explanation consisting of user-friendly prompt templates annotated with concepts and relationships from the background knowledge. The librarian can then suggest modifications to obtain satisfactory reformulated queries. For example, the figure's prompt template is "Formulate five prompt queries with the keywords: causes, diagnoses, affects, associated with, complicates, Acetaminophen". The librarian's suggestions are used to refine step 2. Background knowledge is necessary for contextual and relevant output; without it, ChatGPT might generate irrelevant and incoherent queries. Figure 3 shows examples of explanations generated by ChatGPT without background knowledge (right) vs. with background knowledge (left). Without a controlled vocabulary obtained from background knowledge, ChatGPT’s explanations can include irrelevant and unspecific information; including the background knowledge produces relevant, targeted outputs better suited to systematic reviews. The librarian may also analyze the safety of the generated queries. For example, the background knowledge contains information about how acetaminophen could be misused; our system incorporates controls to avoid surfacing such information, since college students are especially vulnerable to it. Similar controls can be added to address relevant ethics and bias issues. Note that complete safety cannot be guaranteed, as safety is highly context-sensitive: it may be appropriate for addiction researchers to learn how a drug is abused (e.g., through higher doses, combining multiple drugs, or snorting), but the same may not be suitable for undergraduate students.
  6. Structured Query Construction: After a reviewed query is finalized, the language model can generate a structured query in the format supported by the underlying library database. The query can then be submitted to the librarian to review the query specifics (e.g., the mappings to different schema elements). For example, the figure shows a Resource Description Framework (RDF) query for the reformulated query “What are the causes of Hepatitis A, and how is it diagnosed?”. The language model knows schema elements from sources such as DBpedia and Schema.org and uses this information to formulate the structured query (a sketch of this translation follows this list).
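
To make step 1 concrete, the sketch below shows one way seed concepts could be pulled from a student's free-text query. This is a minimal illustration, not our actual extraction tool: it assumes spaCy with a general-purpose English model (a biomedical model such as scispaCy's would recognize clinical terms more reliably) and falls back to noun chunks when the named-entity recognizer misses a term.

  # Minimal sketch of step 1 (seed concept identification).
  # Assumes spaCy and the en_core_web_sm model are installed; a biomedical
  # model (e.g., from scispaCy) would be a better fit for clinical terms.
  import spacy

  def extract_seed_concepts(query: str) -> list[str]:
      nlp = spacy.load("en_core_web_sm")
      doc = nlp(query)
      # Prefer named entities; fall back to noun chunks if none are found.
      concepts = [ent.text for ent in doc.ents]
      if not concepts:
          concepts = [chunk.text for chunk in doc.noun_chunks]
      return concepts

  print(extract_seed_concepts("I am writing a review about Hepatitis A. Where do I start?"))
  # A general-purpose model may return noun chunks such as "a review" and
  # "Hepatitis A"; downstream filtering against the background knowledge
  # graph would keep only recognized concepts.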
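Step 2 retrieves a subgraph around each seed concept. Our knowledge graph extraction tool is not reproduced here; as a stand-in, the sketch below uses SPARQLWrapper to collect one-hop relations and neighboring concepts for Hepatitis A from the public DBpedia endpoint. The endpoint, resource URI, and hop limit are illustrative assumptions.

  # Sketch of step 2 (concept and relation expansion) using a public SPARQL
  # endpoint as a stand-in for the project's knowledge graph.
  from SPARQLWrapper import SPARQLWrapper, JSON

  def expand_concept(resource_uri: str, limit: int = 25) -> list[tuple[str, str]]:
      sparql = SPARQLWrapper("https://dbpedia.org/sparql")
      sparql.setReturnFormat(JSON)
      sparql.setQuery(f"""
          SELECT DISTINCT ?relation ?neighbor WHERE {{
              <{resource_uri}> ?relation ?neighbor .
          }} LIMIT {limit}
      """)
      rows = sparql.query().convert()["results"]["bindings"]
      # Each (relation, neighbor) pair corresponds to one edge of the subgraph.
      return [(r["relation"]["value"], r["neighbor"]["value"]) for r in rows]

  edges = expand_concept("http://dbpedia.org/resource/Hepatitis_A")
  for relation, neighbor in edges[:5]:
      print(relation, "->", neighbor)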
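Steps 3–5 fill a prompt template with the expanded relations and concepts and ask a language model for reformulated queries, which are then shown to the student and to the librarian along with the explanation. The sketch below assumes the openai Python client is available; the template text mirrors the one in Figure 2, and the model name is purely illustrative.

  # Sketch of steps 3-5: fill a prompt template with expanded terms and ask a
  # language model for reformulated queries. Assumes the openai Python client
  # is installed and the OPENAI_API_KEY environment variable is set.
  from openai import OpenAI

  PROMPT_TEMPLATE = "Formulate five prompt queries with the keywords: {keywords}"

  def reformulate(keywords: list[str], model: str = "gpt-4o-mini") -> str:
      client = OpenAI()
      prompt = PROMPT_TEMPLATE.format(keywords=", ".join(keywords))
      response = client.chat.completions.create(
          model=model,  # illustrative model name
          messages=[{"role": "user", "content": prompt}],
      )
      # The raw text is returned so that both the student and the librarian
      # can review the suggested reformulations (step 5).
      return response.choices[0].message.content

  keywords = ["causes", "diagnoses", "affects", "associated with",
              "complicates", "Acetaminophen"]
  print(reformulate(keywords))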
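For step 6, the finalized natural language query is translated into a structured query in the target database's format. The sketch below hand-writes the kind of SPARQL query a language model might emit for the Hepatitis A example and submits it with SPARQLWrapper; the dbo:medicalCause and dbo:diagnosis predicates are placeholders that the librarian would map to the actual schema of the underlying library database.

  # Sketch of step 6: submit a structured (SPARQL/RDF) query that a language
  # model might generate for the reformulated question "What are the causes of
  # Hepatitis A, and how is it diagnosed?". The predicates are placeholders;
  # the librarian confirms the mapping to the real schema elements.
  from SPARQLWrapper import SPARQLWrapper, JSON

  GENERATED_QUERY = """
  PREFIX dbr: <http://dbpedia.org/resource/>
  PREFIX dbo: <http://dbpedia.org/ontology/>
  SELECT ?cause ?diagnosis WHERE {
    OPTIONAL { dbr:Hepatitis_A dbo:medicalCause ?cause . }   # placeholder predicate
    OPTIONAL { dbr:Hepatitis_A dbo:diagnosis ?diagnosis . }  # placeholder predicate
  }
  """

  def run_structured_query(query: str) -> list[dict]:
      sparql = SPARQLWrapper("https://dbpedia.org/sparql")
      sparql.setReturnFormat(JSON)
      sparql.setQuery(query)
      return sparql.query().convert()["results"]["bindings"]

  for row in run_structured_query(GENERATED_QUERY):
      print(row)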

[Figure: USCLibrary-Figure1.png]

Conclusion and Deliverables

We propose to develop an AI tool that is explainable (and optionally safer). It takes student-submitted natural language queries as input and uses background knowledge to guide a language model in generating reformulated queries. The system will submit the reformulated queries and the explanation (the prompt, concepts, and relationships used to obtain them) to the expert librarian for review. Upon review and finalization of the queries, the language model translates the reformulated natural language queries into structured queries for deployment across major databases.

  • Deliverable 1. An AI tool to extract the relevant background knowledge from seed concepts identified in student-submitted natural language queries. We will develop the algorithms and mechanisms for identifying seed concepts and retrieving relevant background knowledge with continuous feedback from expert librarians.
  • Deliverable 2. An AI-based query expansion and reformulation module that reads in the seed concepts and the background knowledge and fills out a prompt template to generate reformulated queries. The prompt templates will be designed with expert supervision to obtain optimal results in the least amount of time. The objective is to make the results explainable and, optionally, safer.
  • Deliverable 3. An AI tool to translate the reformulated natural language queries into structured queries for deployment across major university databases. For this, we will incorporate knowledge of the database systems and their limitations for literature review in consultation with the expert librarian.

Outcomes

The project has been completed and published (publication details below).

  • Roy, K., Khandelwal, V., Surana, H., Vera, V., Sheth, A., & Heckman, H. (2024). GEAR-Up: Generative AI and external knowledge-based retrieval upgrading scholarly article searches for systematic reviews. Proceedings of the 38th AAAI Conference on Artificial Intelligence (Record link: https://scholarcommons.sc.edu/aii_fac_pub/593/).