<b>Advancing Neuro-symbolic AI with Deep Knowledge-infused Learning</b>
 
After the era of Symbolic AI in the 20th century and Statistical AI in the first two decades of this century, there is growing interest in the neuro-symbolic AI approach. It seeks to combine the respective powers and benefits of symbolic and statistical AI using knowledge graphs and deep learning. We have coined the term Knowledge-infused (deep) Learning (KiL) for a class of approaches that use a variety of knowledge at different levels of abstraction. This project will advance early and limited forms of enhancing deep learning with knowledge, called shallow and semi-deep KiL, to a more advanced form called deep infusion. It focuses on developing a deep learning architecture and associated algorithms that interleave broader varieties of knowledge at different levels of abstraction, or layers, in a deep neural network.

The following activities are being pursued in this project.

# We are developing novel datasets designed to exercise the development of algorithms that work in synchrony with human knowledge.
# Human knowledge is manifested in various forms, such as rules, lexicons, relationships, relational databases, and knowledge graphs. We specifically focus on integrating knowledge graphs into deep learning algorithms (e.g., deep language models) to achieve explainability and interpretability.
# For explainability, specifically user-level explainability, we develop algorithms that connect their outcomes with knowledge graphs and, upon convergence, point to the part of the knowledge graph that explains the predictions.
# For interpretability, we are first working on leveraging simple, interpretable machine learning models that can help explain the internal mechanism of deep language models. Subsequently, we will leverage this understanding of the model's capability to define stratified knowledge structures, such as decision trees, allowing the model to learn such trees at each layer with the help of a knowledge graph. With knowledge graph infusion, we provide semantic grounding to statistical models whose outcomes would otherwise be unreasonable or non-deductive.
# AI systems have been stymied by the lack of safety in data-driven AI. We have started to investigate how using domain (including process) knowledge as part of KiL methods can make AI systems safer.
# Building upon foundational research in this project, we have worked on several translational research opportunities that apply and advance KiL approaches. These include personalized health (specifically, mental health), personalized nutrition (specifically, management of carbohydrate intake in children with type 1 diabetes), and autonomous systems (including autonomous vehicles and smart manufacturing; see the third illustration in the attachment).
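As a minimal, hypothetical illustration of the kind of knowledge graph integration named in these activities, a knowledge graph can be represented as subject-predicate-object triples and queried to ground tokens in text. All entities, relations, and function names below are illustrative and not drawn from any specific resource used in the project:

```python
# Minimal sketch: a knowledge graph as subject-predicate-object triples,
# queried to attach related concepts to tokens in text. The triples are
# toy examples, not taken from any real knowledge graph.
TRIPLES = [
    ("depression", "is_a", "mental_health_condition"),
    ("depression", "comorbid_with", "cardiovascular_disease"),
    ("anxiety", "is_a", "mental_health_condition"),
    ("insulin", "treats", "type_1_diabetes"),
]

def related_concepts(entity, triples=TRIPLES):
    """Return concepts linked to `entity` in either direction."""
    out = set()
    for s, _, o in triples:
        if s == entity:
            out.add(o)
        if o == entity:
            out.add(s)
    return out

def ground_tokens(tokens, triples=TRIPLES):
    """Map each token that names a KG entity to its neighbouring concepts."""
    return {t: related_concepts(t, triples)
            for t in tokens if related_concepts(t, triples)}

grounded = ground_tokens(["i", "have", "depression"])
assert set(grounded) == {"depression"}
```

A deep language model could consume such neighbour sets as extra context for a mention; the later sections on entity masking and knowledge-aware attention build on this same triple view.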
Based on our recent work under the purview of this grant, we have been asking the following questions concerning explainability, interpretability, safety, and reasonability.
# When do neural language models require non-parametric knowledge? We consider non-parametric knowledge to be sources created or curated by humans, including lexicons, knowledge graphs, relational databases, etc.
# How do we infuse non-parametric knowledge seamlessly into statistical AI models?
# It is known that deep neural models learn by abstraction; how can we leverage the inherent abstraction in external knowledge to enrich the context of the learned statistical representations?
# If knowledge infusion is meant to happen at various layers in deep neural networks, how should the network be regularized to prevent over-generalization or superfluous generations?
# We have established that the attention matrix in current transformer models makes the model responsive to global and local information in the input; it does so through a token-by-token square matrix. If we want to perform infusion, we must introduce two new matrices: (a) token-entity and (b) entity-entity. Simple matrix multiplication will not work, as these matrices are out of distribution with respect to the token-by-token matrix. Hence, we need to seek ways of creating a knowledge-aware attention matrix for the model from the (a) token-by-token, (b) token-entity, and (c) entity-entity matrices.
# Layered knowledge infusion might result in high-energy nodes contributing to the outcome, which runs counter to the current softmax prediction. How do we pick the most probable outcome? This would require exploring marginalized loss functions over the infused knowledge and the input.
# How do we enable the generation of user-level explanations? (see the second illustration in the attachment).
# How do we enforce safety constraints in model generations? This has been a pressing need, since models tend to generate risky sentences (see the fourth illustration in the attachment).
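One possible shape for the knowledge-aware attention question above can be sketched in plain NumPy. The composition below (routing each token through the entity graph and back to tokens before mixing with ordinary attention) is only one hypothetical way to combine the three matrices, not the project's formulation; the dimensions and the mixing weight are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T, E, d = 4, 3, 8  # tokens, linked KG entities, hidden size (illustrative)

def scores(X, Y):
    """Scaled dot-product similarity between two sets of vectors."""
    return X @ Y.T / np.sqrt(X.shape[1])

def softmax(A):
    A = A - A.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(A)
    return e / e.sum(axis=-1, keepdims=True)

tok = rng.standard_normal((T, d))   # token representations
ent = rng.standard_normal((E, d))   # KG entity embeddings

A_tt = scores(tok, tok)             # (a) token-by-token
A_te = scores(tok, ent)             # (b) token-entity
A_ee = scores(ent, ent)             # (c) entity-entity

# Fold (b) and (c) back into a token-by-token shape: each token attends
# to entities, entities attend to each other, and the result is mapped
# back onto tokens, yielding a T x T "knowledge" score matrix.
A_knowledge = A_te @ A_ee @ A_te.T

alpha = 0.5                          # mixing weight (a hyperparameter)
attn = softmax(A_tt + alpha * A_knowledge)
assert attn.shape == (T, T)
```

The point of the sketch is that the two new matrices never enter the softmax directly; they are first projected into the same token-by-token space, which is one way to sidestep the out-of-distribution issue raised above.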
An overview of significant results reported in the publications and other dissemination is provided below.
# We explored various statistical bottlenecks in deep neural language models from the perspective of user-level explainability, interpretability, and safety. These pre-trained models are efficient, but the datasets they are trained on are not grounded in knowledge. Sheth et al. and Gupta et al. found that models hallucinate while generating responses, leading to factually incorrect or superfluous responses. We investigated various methods of controlling this hallucination.
# We hypothesize that humans communicate through a contextual process of understanding and response. This process can be an n-ary tree, a flat graph, or any other structure of conceptual flow. But we face a challenge in terms of datasets, so we created datasets through a month-long cycle of annotation, evaluation, and quality checks. These datasets were constructed under the purview of this grant and will be made available. The procedure we laid out in constructing them contributes to the significance of our results: every dataset was created automatically using deep learning and clinical knowledge, and Subject Matter Experts were tasked with evaluating our labeling process. By this means, we not only checked that our knowledge-infused learning pipeline was accurate but also scaled the annotation-evaluation process multiple fold and reduced its time. Roy et al. gathered ten Subject Matter Experts' evaluations on five sets of outcomes from five different deep language models.
# We found that a simple neural language model can provide explainable results, and experts achieved satisfactory agreement scores of >75% with simple language models.
# Further, we achieved task transferability across models, as they are trained on related clinical process knowledge-driven datasets. So far, we have found convincing results in depression-, anxiety-, and suicide-related research. This also marks our first concrete step in realizing Knowledge-infused Learning.
# Some challenges arise when mental health conditions are comorbid with diseases such as cardiovascular disease, where we require contextual information on the disease and gender along with users' expressions.
# Next, we explored the domain of conversational AI, as chatbots need to be safe when they communicate with users with depression, anxiety, or suicidal tendencies (Gaur et al., CIKM). We have developed novel evaluation metrics and interpretable, explainable algorithms for process knowledge infusion in the Knowledge-infused Learning paradigm (Roy et al., ACL-IJCNLP).
# We studied mental health conversations related to cardiovascular disease on social media, which require domain knowledge. We developed knowledge-assisted masked language models in a task-adaptive multi-task learning paradigm and could differentiate gendered language and gender-specific symptoms based on user posts and comments. The GeM framework proposed by Lokala et al. falls under shallow knowledge-infused learning, as it uses external lexicons on anxiety, depression, and gender for Knowledge-aware Entity Masking.
# The diverse forms of knowledge we infused into statistical AI are correlational: they are defined by word co-occurrence, synonymy linkage, and the like, but are not causal. Representing causality in AI systems using knowledge graphs can further improve explainability. Jamini and Sheth proposed an architecture covering why causal knowledge graphs (CKGs) are needed, what modifications existing knowledge graphs require, and how infusion would occur.
# Within the scope of Knowledge-infused Learning, the causality aspect led us to explore the autonomous driving domain. There are various scenarios in autonomous driving where the vehicle needs to decide based on what it has learned in other, similar situations. Situations are scenes, and every scene has an interconnected set of entities that describe it. Wickramarachchi et al. developed a scene ontology for autonomous driving use cases and used it to extract entities from scene descriptions. An interconnection of scene entities is termed a scene graph; such a graph improves machine perception in autonomous vehicles and can define sensible actions. Scene graphs and actions are absorbed by the architecture proposed by Jamini et al. to construct CausalKG.
# Alongside mental health and healthcare in general, we are exploring the utility of knowledge-infused learning in autonomous driving. We find synchrony between these domains, as the machine is tasked with producing an action; a correlation-only knowledge graph falls short in expressing the higher-order semantic knowledge expressed by humans.
# In a complementary direction, we studied the COVID-19 pandemic with the motive of helping policymakers with explainable AI tools. Sivaraman et al. presented EXO-SIR, an epidemiological model supported by a component that incorporates external knowledge from textual reports to bootstrap SIR in estimating the likelihood of a rise in infections.
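The idea behind the last result can be sketched with a plain discrete-time SIR model whose infection rate is nudged by an external signal standing in for knowledge mined from textual reports. The coupling form and every parameter value below are illustrative assumptions, not the actual EXO-SIR formulation:

```python
# Hedged sketch: a discrete-time SIR model where an external signal
# (e.g., elevated case mentions in reports) scales the infection rate.
# All parameter values are illustrative, not fitted to real data.
def sir_step(s, i, r, beta, gamma, n):
    """One day of Susceptible-Infected-Recovered dynamics."""
    new_inf = beta * s * i / n  # new infections this day
    new_rec = gamma * i         # new recoveries this day
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate(days, n=1_000_000, i0=100, beta=0.25, gamma=0.1, exo=None):
    """Run the model; `exo[t]` boosts beta on day t (external knowledge)."""
    s, i, r = n - i0, i0, 0
    infected = []
    for t in range(days):
        boost = exo[t] if exo is not None else 0.0
        s, i, r = sir_step(s, i, r, beta * (1 + boost), gamma, n)
        infected.append(i)
    return infected

baseline = simulate(20)
boosted = simulate(20, exo=[0.5] * 20)  # reports suggest elevated exposure
assert boosted[-1] > baseline[-1]  # the external signal raises the estimate
```

The design point is that the external component only modulates an otherwise standard compartmental model, so the epidemiological interpretation of each compartment is preserved while the textual evidence shifts the estimated likelihood of a rise in infections.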
=Funding=

* '''NSF Award#''': 2133842
* '''EAGER: Advancing Neuro-symbolic AI with Deep Knowledge-infused Learning'''
* '''Timeline:''' 01 July 2021 - 30 June 2022
* '''Award Amount:''' $139,999

==Personnel==

* '''Faculty:''' Amit Sheth (PI)
* '''Graduate Research Assistants:''' Manas Gaur, Kaushik Roy
* '''AIISC Students:''' Utkarshini Jamini, Usha Lokala, Ruwan Wickramarachchi

Revision as of 01:28, 11 July 2022
