Virtual Health Assistant for Mental Health

From Knoesis wiki
Revision as of 14:48, 28 August 2022 by Admin (Talk | contribs) (Guaranteed safety for VHA-patient interaction via bi-level dialogue management)


Virtual Health Assistant for Mental Health Self-Management

Background and Motivation

The COVID-19 pandemic has strained global healthcare systems and caused striking increases in demand for mental health clinical services. Lockdown isolation, economic hardship, grief, and fear have triggered new mental health concerns and exacerbated existing mental health conditions. The Centers for Disease Control and Prevention (CDC) reported that symptoms of anxiety disorder, depressive disorder, and suicidal ideation had increased considerably, with 40% of U.S. adults reporting that they struggled with at least one mental health issue as of June 2020 [cite]. Accordingly, the CDC called for increased intervention and prevention efforts to address associated mental health conditions.

With a severe shortage of mental health clinicians coupled with a decrease in in-person visits at healthcare facilities, novel technological methods such as Virtual Health Assistants (VHAs) promise to play a critical role in helping patients mitigate mental health symptoms at an early stage through active self-care for effective prevention and intervention. There is thus an unprecedented need for advanced mental health VHAs that go beyond script-based screening tasks (e.g., reminding, scheduling) and assist patients with mental health self-management (MHSM) through daily VHA-patient interactions. An advanced VHA for MHSM should incorporate contextual and personalized self-management strategies, including (a) the medical guidance embodied in the patient's discharge summary, (b) adherence to the regimen the health provider intends the patient to follow (medication, management strategies such as meditation, lifestyle choices), and (c) the patient's unique health conditions, including continuous changes in mental health. At the same time, a VHA must be safe (have minimal chance of negatively affecting the patient's health conditions), protect privacy, and be ethical. Such technologies would mark a significant shift in prevention and intervention strategies, achievable at large scale and low cost.

Infrastructure Design

Automatic Qualitative Coding

To support enhanced patient self-management, the VHA needs to adequately understand both the broader medical context of a specific disease and an individual patient's personalized context, including personal health history, physical characteristics, ongoing activities, lifestyle choices, preferences, and so forth. Integrating this personal and broader medical context into a consolidated structure is crucial to designing and constructing a VHA that can help the individual patient. Therefore, our VHA employs a Personalized Knowledge Graph (PKG) that integrates evolving personalized medical records, daily VHA-patient interactions, and other contextual information such as time and location to achieve advanced personalization and contextualization. The PKG provides an accessible and interpretable representation of the up-to-date state of the patient.
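To make the PKG idea concrete, the following is a minimal illustrative sketch, not the project's actual implementation: the graph is modeled as timestamped (subject, predicate, object) triples, so that medical records, interaction logs, and contextual facts share one queryable structure. All class, slot, and fact names here are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class PersonalizedKnowledgeGraph:
    """Minimal PKG: timestamped (subject, predicate, object) triples."""
    triples: List[Tuple[str, str, str, datetime]] = field(default_factory=list)

    def add(self, subj: str, pred: str, obj: str,
            when: Optional[datetime] = None) -> None:
        # Timestamping each fact lets the graph represent an evolving state.
        self.triples.append((subj, pred, obj, when or datetime.now()))

    def query(self, subj: Optional[str] = None,
              pred: Optional[str] = None) -> list:
        # Simple pattern match over the stored triples.
        return [(s, p, o, t) for (s, p, o, t) in self.triples
                if (subj is None or s == subj) and (pred is None or p == pred)]

# Hypothetical facts drawn from records, interactions, and context:
pkg = PersonalizedKnowledgeGraph()
pkg.add("patient", "diagnosed_with", "generalized_anxiety")   # medical record
pkg.add("patient", "prescribed", "daily_meditation")          # provider guidance
pkg.add("patient", "reported_mood", "low")                    # VHA interaction
print([o for (_, _, o, _) in pkg.query(pred="prescribed")])   # ['daily_meditation']
```

A production PKG would more likely build on an RDF triple store or property-graph database, but the same add/query interface over timestamped facts is the core idea.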

Data-efficient and explainable learning for adaptive self-management strategy

To enable the VHA to improve itself through repeated interaction with and feedback from the patient, we build upon our previous work and utilize an algorithmic framework, Knowledge-infused Reinforcement Learning (KiRL), that systematically incorporates the mental health clinician's expert knowledge, personalized patient context, and patient feedback to construct high-level tasks and enable the VHA's self-improvement in a data-efficient and explainable manner.
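One way knowledge infusion can work, sketched here purely as an illustration and not as the KiRL algorithm itself, is to restrict the learner's action space to clinician-approved tasks per patient state while rewards come from patient feedback. The sketch below simplifies the setting to a contextual bandit with an incremental value update; all state, task, and constraint names are hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical high-level self-management tasks the VHA can recommend.
TASKS = ["breathing_exercise", "medication_reminder", "mood_check_in",
         "sleep_hygiene_tip"]

# Expert knowledge as hard constraints: tasks a clinician allows per state.
EXPERT_ALLOWED = {
    "calm":    {"mood_check_in", "sleep_hygiene_tip"},
    "anxious": {"breathing_exercise", "mood_check_in"},
}

def knowledge_infused_bandit(feedback, states, episodes=500,
                             alpha=0.1, epsilon=0.2):
    """Epsilon-greedy value learning restricted to expert-approved tasks."""
    q = defaultdict(float)
    for _ in range(episodes):
        state = random.choice(states)
        allowed = sorted(EXPERT_ALLOWED.get(state, set(TASKS)))
        if random.random() < epsilon:
            task = random.choice(allowed)              # explore
        else:
            task = max(allowed, key=lambda a: q[(state, a)])  # exploit
        reward = feedback(state, task)                 # patient feedback signal
        q[(state, task)] += alpha * (reward - q[(state, task)])
    return q

# Simulated feedback: this patient responds best to breathing exercises
# when anxious (a stand-in for real patient feedback).
def feedback(state, task):
    return 1.0 if (state, task) == ("anxious", "breathing_exercise") else 0.1

random.seed(0)
q = knowledge_infused_bandit(feedback, states=["calm", "anxious"])
best = max(sorted(EXPERT_ALLOWED["anxious"]), key=lambda a: q[("anxious", a)])
print(best)  # breathing_exercise
```

The constraint set also supports explainability: every recommendation can be traced both to the clinician rule that permitted it and to the learned value that selected it.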

Guaranteed safety for VHA-patient interaction via bi-level dialogue management

The VHA manages dialogue interaction with the patient using a bi-level hierarchy: it first initiates a dialogue by selecting one of several high-level tasks recommended by the KiRL algorithm, and then manages the subsequent low-level dialogue turns so that they remain constrained within the high-level task and capture exactly the information the task specifies. This ensures unambiguous, patient-centered decision-making that meets the safety requirement of avoiding harmful conversation while capturing information precisely and accurately.
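The constraint mechanism can be sketched as follows; this is an illustrative simplification, not the system's actual dialogue manager, and the task schemas and slot names are hypothetical. Each high-level task fixes exactly which information slots the low-level dialogue may capture, so an out-of-scope turn is rejected rather than acted on.

```python
# Hypothetical task schemas: each high-level task specifies exactly
# which slots the low-level dialogue is allowed to capture.
TASK_SCHEMAS = {
    "mood_check_in":    {"mood_rating", "sleep_hours"},
    "medication_check": {"dose_taken"},
}

class BiLevelDialogueManager:
    def __init__(self, task: str):
        # High level: only a recommended, schema-backed task may start a dialogue.
        if task not in TASK_SCHEMAS:
            raise ValueError(f"Task {task!r} has no approved schema")
        self.task = task
        self.allowed = TASK_SCHEMAS[task]
        self.captured = {}

    def record(self, slot: str, value) -> None:
        # Low level: accept only information the active task specifies.
        if slot not in self.allowed:
            raise PermissionError(f"Slot {slot!r} is outside task {self.task!r}")
        self.captured[slot] = value

    def done(self) -> bool:
        # The dialogue ends once exactly the specified slots are filled.
        return self.allowed == set(self.captured)

dm = BiLevelDialogueManager("mood_check_in")
dm.record("mood_rating", 3)
dm.record("sleep_hours", 6.5)
print(dm.done())  # True
```

Because the low level can only fill slots the high level authorized, the dialogue cannot drift into unvetted territory, which is the safety property the bi-level split is meant to guarantee.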