EAGER: Knowledge-guided neurosymbolic AI with guardrails for safe virtual health assistants

Following the era of Symbolic AI in the 20th century and Statistical AI in the first two decades of this century, there has been a growing interest in the neuro-symbolic AI approach. This approach aims to leverage the strengths and advantages of both symbolic and statistical AI by utilizing symbolic knowledge data structures alongside deep learning techniques. In 2022, our focus revolved around a category of approaches known as Knowledge-infused (deep) Learning (KiL), which incorporates various types of knowledge at different levels of abstraction into neural network pipelines. The knowledge is represented as declarative knowledge in knowledge graphs (KGs). Since July 2022, we have improved the knowledge graphs by incorporating procedural knowledge, which refers to the knowledge of typical processes employed by experts in diverse domains and everyday applications. Our future work involves advancing KiL further and enabling it to effectively handle such process knowledge using an enhanced approach called process knowledge-infused learning (PK-iL).

Specific Objectives

Using PK-iL, our objective was to answer the following research questions:

  1. How can we systematically incorporate explicit knowledge of concepts expressed in user-friendly terms into the implicitly characterized components (comprising billions of parameters) of a neural network?
  2. How can we accomplish this integration while ensuring that the specific goals of the application are fulfilled, such as obtaining clinically valid and safe outcomes in the healthcare domain?
  3. Is it possible to develop algorithms capable of handling a diverse range of declarative and procedural knowledge necessary for effective neurosymbolic AI, which can address issues associated with each individual method (i.e., neural or symbolic methods) in isolation?
  4. How can we robustly and accurately evaluate success or failure across the three questions above?

Current Research Questions in KiL

Based on our recent work under the purview of this grant, we have been asking the following questions concerning explainability, interpretability, safety, and reasoning.

  1. When do neural language models require non-parametric knowledge? We consider non-parametric knowledge to be sources created or curated by humans, such as lexicons, knowledge graphs, and relational databases.
  2. How do we infuse non-parametric knowledge seamlessly into statistical AI models?
  3. Deep neural models are known to learn by abstraction; how can we leverage the inherent abstraction in external knowledge to enrich the context of the learned statistical representations?
  4. If knowledge infusion is meant to happen at various layers of a deep neural network, how should the network be regularized to prevent over-generalization or superfluous generations?
  5. We have established that the attention matrix in current transformer models makes the model responsive to global and local information in the input; it does so through a token-by-token square matrix. If we want to perform infusion, we must introduce two new matrices: (a) token-to-entity and (b) entity-to-entity. Simple matrix multiplication will not work, because these matrices come from different distributions. Hence, we need to seek ways of creating a knowledge-aware attention matrix for the model from the (a) token-by-token, (b) token-to-entity, and (c) entity-to-entity matrices (a minimal sketch of one way to combine them follows this list).
  6. Layered knowledge infusion might result in many high-energy nodes contributing to the outcome, which runs counter to standard softmax prediction. How do we pick the most probable outcome? This would require us to explore loss functions that marginalize over the infused knowledge and the input.
  7. How do we enable the generation of user-level explanations?
  8. How do we enforce safety constraints on model generations? This has become a pressing need, since models tend to generate risky sentences.
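
As a concrete illustration of question 5, below is a minimal sketch (in Python with NumPy) of one way the token-by-token, token-to-entity, and entity-to-entity matrices could be combined into a knowledge-aware attention matrix. The function name, matrix shapes, and the interpolation weight alpha are hypothetical; this is an illustration of the idea under those assumptions, not the method used in our publications.

    import numpy as np

    def softmax(x, axis=-1):
        # Numerically stable row-wise softmax.
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def knowledge_aware_attention(A_tt, A_te, A_ee, alpha=0.5):
        """Combine three score matrices into one token-level attention matrix.

        A_tt : (n_tokens, n_tokens)   token-by-token scores from self-attention
        A_te : (n_tokens, n_entities) token-to-entity alignment scores
        A_ee : (n_entities, n_entities) entity-to-entity scores from the KG
        alpha: interpolation weight between textual and knowledge components
        """
        # Project the entity-entity structure back onto token positions:
        # a token attends more strongly to another token if the entities
        # they align to are related in the knowledge graph.
        A_knowledge = A_te @ A_ee @ A_te.T          # (n_tokens, n_tokens)
        # Interpolate, then normalize row-wise as in standard attention.
        scores = alpha * A_tt + (1 - alpha) * A_knowledge
        return softmax(scores, axis=-1)

    # Toy usage with random matrices (4 tokens, 3 linked entities).
    rng = np.random.default_rng(0)
    A = knowledge_aware_attention(rng.normal(size=(4, 4)),
                                  rng.normal(size=(4, 3)),
                                  rng.normal(size=(3, 3)))
    print(A.shape)   # (4, 4); each row sums to 1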

Significant Results So Far

An overview of significant results reported in the publications and other dissemination is provided below.

  1. We explored various statistical bottlenecks in deep neural language models from the perspective of user-level explainability, interpretability, and safety. These pre-trained models are efficient, but the datasets they are trained on are not grounded in knowledge. Sheth et al. and Gupta et al. found that models hallucinate while generating responses, leading to factually incorrect or superfluous responses. We investigated various methods of controlling this hallucination.
  2. We hypothesize that humans communicate through a contextual process of understanding and response; this process can be an n-ary tree, a flat graph, or any other structure of conceptual flow. However, we faced a challenge in terms of datasets, so we created datasets by cycling through a month-long annotation, evaluation, and quality-check process. These datasets were constructed under the purview of this grant and will be made available. The procedure we laid out in constructing them contributes to the significance of our results: every dataset was labeled automatically using deep learning and clinical knowledge, and Subject Matter Experts were then tasked with evaluating our labeling. In this way we not only verified that our knowledge-infused learning pipeline was accurate, but also scaled the annotation-and-evaluation process several fold and reduced annotation time. Roy et al. gathered evaluations from ten Subject Matter Experts on five sets of outcomes from five different deep language models.
  3. We found that simple neural language models can provide explainable results, and experts reached satisfactory agreement scores of >75% with these models.
  4. Further, we achieved task transferability across models because they are trained on related clinical process-knowledge-driven datasets. So far, we have found convincing results in depression-, anxiety-, and suicide-related research. This also marks our first concrete step in realizing Knowledge-infused Learning.
  5. Some challenges arise when mental health conditions are comorbid with diseases such as cardiovascular disease, where we require contextual information on the disease and gender along with users' expressions.
  6. Next, we explored the domain of conversational AI, as chatbots need to be safe when they communicate with users who have depression, anxiety, or suicidal tendencies (Gaur et al., CIKM). We have developed novel evaluation metrics and interpretable, explainable algorithms for process knowledge infusion in the Knowledge-infused Learning paradigm (Roy et al., Frontiers in Big Data).
  7. We studied mental health conversations related to cardiovascular disease on social media, which requires domain knowledge. We developed knowledge-assisted masked language models in a task-adaptive multi-task learning paradigm and could differentiate gendered language and gender-specific symptoms based on user posts and comments. The framework proposed by Lokala et al., GeM, falls under shallow knowledge-infused learning, as it uses external lexicons on anxiety, depression, and gender for knowledge-aware entity masking (a minimal sketch of this masking idea follows this list).
  8. The diverse forms of knowledge we infused into statistical AI are correlational: they are defined by word co-occurrence, synonymy linkage, and similar relations, but they are not causal. Representing causality in AI systems using knowledge graphs can further improve explainability. Jamini and Sheth proposed an architecture explaining why causal knowledge graphs (CKGs) are needed, what modifications need to be made to existing knowledge graphs, and how infusion would occur.
  9. Within the scope of Knowledge-infused Learning, the causality aspect led us to explore the autonomous driving domain. There are various scenarios in autonomous driving where the vehicle must decide based on what it has learned in other, similar situations. Situations are scenes, and every scene has an interconnected set of entities that describe it. Wickramarachchi et al. developed a scene ontology for autonomous driving use cases and used it to extract entities from scene descriptions; the interconnection of scene entities is termed a scene graph. Such a graph improves machine perception in autonomous vehicles and can define sensible actions. Scene graphs and actions are absorbed by the architecture proposed by Jamini et al. to construct CausalKG.
  10. Along with mental health and healthcare in general, we are exploring the utility of knowledge-infused learning in autonomous driving. We find synchrony between these domains, as the machine is tasked with recommending an action; a correlation-only knowledge graph thus falls short of expressing the higher-order semantic knowledge that humans express.
  11. In a complementary direction, we studied the COVID-19 pandemic with the aim of helping policymakers with explainable AI tools. Sivaraman et al. presented EXO-SIR, an epidemiological model supported by a component that incorporates external knowledge from textual reports to bootstrap the SIR model in estimating the likelihood of a rise in infections.
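
To make the knowledge-aware entity masking idea behind GeM (item 7 above) concrete, below is a minimal Python sketch that masks only tokens appearing in external lexicons, rather than masking tokens at random. The toy lexicon contents and the function name are hypothetical placeholders; the published framework is considerably more involved.

    import re

    # Hypothetical miniature lexicons; the published GeM framework uses
    # curated external lexicons for anxiety, depression, and gender terms.
    LEXICONS = {
        "anxiety": {"worried", "panic", "restless"},
        "depression": {"hopeless", "fatigue", "worthless"},
        "gender": {"she", "he", "her", "his"},
    }
    LEXICON_TERMS = set().union(*LEXICONS.values())

    def knowledge_aware_mask(text, mask_token="[MASK]"):
        """Mask only tokens found in the domain lexicons, leaving other
        tokens intact, so the masked-LM objective focuses on clinically
        and demographically relevant entities."""
        tokens = re.findall(r"\w+|\S", text.lower())
        return " ".join(mask_token if tok in LEXICON_TERMS else tok
                        for tok in tokens)

    print(knowledge_aware_mask("She feels hopeless and worried lately."))
    # Output: [MASK] feels [MASK] and [MASK] lately .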

Link to the previous EAGER award: http://wiki.aiisc.ai/index.php/Advancing_Neuro-symbolic_AI_with_Deep_Knowledge-infused_Learning

Funding

  • NSF Award #: 2335967
  • Award Period of Performance: Start Date: 10/01/2023; End Date: 09/30/2025
  • Project Title: EAGER: Knowledge-guided neurosymbolic AI with guardrails for safe virtual health assistants
  • Award Amount: $200,000

Personnel

Quad Chart