Neurosymbolic Artificial Intelligence: Why, What and How?

Revision as of 23:07, 10 February 2024

Neurosymbolic AI Overview

Humans interact with the environment using a combination of perception - transforming sensory inputs from their environment into symbols, and cognition - mapping symbols to knowledge about the environment for supporting abstraction, reasoning by analogy, and long-term planning. Human perception-inspired machine perception, in the context of AI, refers to large-scale pattern recognition from raw data using neural networks trained using self-supervised learning objectives such as next-word prediction or object recognition. On the other hand, machine cognition encompasses more complex computations, such as using knowledge of the environment to guide reasoning, analogy, and long-term planning. Humans can also control and explain their cognitive functions. This requires the retention of symbolic mappings from perception outputs to knowledge about their environment. For example, humans can follow and explain the guidelines and safety constraints driving their decision-making in safety-critical applications such as healthcare, criminal justice, and autonomous driving.

While data-driven neural network-based AI algorithms effectively model machine perception, symbolic knowledge-based AI is better suited for modeling machine cognition. This is because symbolic knowledge structures support explicit representations of mappings from perception outputs to the knowledge, enabling traceability and auditing of the AI system’s decisions. Such audit trails help enforce application aspects of safety, such as regulatory compliance and explainability, through tracking the AI system’s inputs, outputs, and intermediate steps.

This first article in the Neurosymbolic AI department introduces and provides an overview of the rapidly emerging paradigm of Neurosymbolic AI, combining neural networks and knowledge-guided symbolic approaches to create more capable and flexible AI systems. These systems have immense potential to advance both algorithm-level (e.g., abstraction, analogy, reasoning) and application-level (e.g., explainable and safety-constrained decision-making) capabilities of AI systems.
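The perception-to-cognition pipeline and audit trail described above can be pictured with a minimal sketch. Here a stand-in function plays the role of a trained neural perception module, and a small dictionary plays the role of symbolic background knowledge; all labels and facts are hypothetical examples, not part of any specific system.

```python
# Sketch of the perception-to-cognition pipeline: a (stand-in) neural
# perception module emits a symbol for raw input, and a symbolic knowledge
# base maps the symbol to facts that support reasoning and explanation.
# All labels and facts below are hypothetical.

def perceive(raw_input):
    # Stand-in for a neural classifier over raw sensory data.
    return "stop_sign"

KNOWLEDGE = {
    "stop_sign": ["requires(full_stop)", "traffic_control_device"],
    "yield_sign": ["requires(slow_down)", "traffic_control_device"],
}

def decide(raw_input):
    symbol = perceive(raw_input)      # perception: raw data -> symbol
    facts = KNOWLEDGE[symbol]         # cognition: symbol -> knowledge
    # The trace records input, intermediate symbol, and supporting facts,
    # giving the auditable trail the text describes.
    trace = {"input": "raw_input", "symbol": symbol, "facts": facts}
    return facts[0], trace

action, trace = decide(None)
```

Because the symbol and the facts it maps to are explicit, the decision can be traced and explained after the fact, which a purely sub-symbolic pipeline does not provide.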

Why Neurosymbolic AI?

Embodying intelligent behavior in an AI system must involve both perception - processing raw data, and cognition - using background knowledge to support abstraction, analogy, reasoning, and planning. Symbolic structures represent this background knowledge explicitly. While neural networks are a powerful tool for processing and extracting patterns from data, they lack explicit representations of background knowledge, hindering the reliable evaluation of their cognition capabilities. Furthermore, applying appropriate safety standards while providing explainable outcomes guided by concepts from background knowledge is crucial for establishing trustworthy models of cognition for decision support.
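One way to picture the safety point above is a symbolic guardrail over neural outputs: a scorer ranks candidate actions, and explicit background knowledge filters out unsafe ones while recording a reason for each rejection. The drug names, scores, and contraindication facts below are hypothetical stand-ins, and the scorer is a placeholder for a real trained model.

```python
# Sketch of a symbolic guardrail over neural outputs. A (stand-in) neural
# scorer ranks candidate actions; explicit contraindication knowledge
# rejects unsafe ones and logs an auditable reason for each decision.
# All names, scores, and facts are hypothetical.

def neural_scores(patient):
    # Stand-in for a trained model's ranked suggestions.
    return {"drug_a": 0.91, "drug_b": 0.87, "drug_c": 0.40}

# Symbolic safety knowledge: (drug, condition) pairs that must not co-occur.
CONTRAINDICATIONS = {("drug_a", "condition_x")}

def safe_recommendation(patient, conditions):
    audit = []
    allowed = {}
    for drug, score in neural_scores(patient).items():
        blocked = [c for c in conditions if (drug, c) in CONTRAINDICATIONS]
        if blocked:
            audit.append((drug, "rejected", blocked))
        else:
            allowed[drug] = score
            audit.append((drug, "allowed", []))
    best = max(allowed, key=allowed.get)  # top-scoring safe candidate
    return best, audit

best, audit = safe_recommendation({"id": 1}, ["condition_x"])
```

The neural component remains free to rank candidates by learned patterns, while the symbolic layer enforces constraints that can be inspected, updated, and cited in an explanation.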


Significant Developments So Far

  1. We have developed a set of algorithms that enable neurosymbolic AI methods to incorporate declarative knowledge from knowledge graphs and procedural knowledge from domain-specific knowledge sources.
  2. Our work shows the effectiveness of combining language models and knowledge graphs. Knowledge graphs have the potential to model heterogeneous types of application and domain-level knowledge beyond schemas. This includes workflows, constraint specifications, and process structures, further enhancing the power and usefulness of neurosymbolic architectures. Combining such enhanced knowledge graphs with high-capacity neural networks would provide the end user with an extremely high degree of algorithmic and application-level utility.
  3. Knowledge graphs are suitable for symbolic structures that bridge the cognition and perception aspects because they support real-world dynamism. Unlike static and brittle symbolic logic, such as first-order logic, they are easy to update. In addition to their suitability for enterprise-use cases and established standards for portability, knowledge graphs are part of a mature ecosystem of algorithms that enable highly efficient graph management and querying. This scalability allows for modeling large, complex datasets with millions or billions of nodes.
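As a minimal illustration of point 3, a knowledge graph can be represented as a set of (subject, predicate, object) triples that supports wildcard queries and runtime updates; production systems would use an RDF store and SPARQL, but the idea is the same. The triples below are hypothetical examples.

```python
# Minimal triple-store sketch: a knowledge graph as a set of
# (subject, predicate, object) tuples with wildcard pattern matching.
# All triples are illustrative only.

def match(triples, s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (ts, tp, to)
        for (ts, tp, to) in triples
        if (s is None or ts == s)
        and (p is None or tp == p)
        and (o is None or to == o)
    ]

kg = {
    ("aspirin", "treats", "headache"),
    ("aspirin", "contraindicated_with", "warfarin"),
    ("ibuprofen", "treats", "headache"),
}

# Easy to update at runtime, unlike a fixed logical theory:
kg.add(("ibuprofen", "contraindicated_with", "warfarin"))

headache_drugs = {s for (s, _, _) in match(kg, p="treats", o="headache")}
```

Adding a fact is a single set insertion rather than a revision of a logical theory, which is the dynamism the list item contrasts with static first-order formalizations.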


Link to the NSF EAGER award on Neurosymbolic AI: https://shorturl.at/qxU69

Publications

  1. Zi, Y., Veeramani, H., Roy, K., & Sheth, A. (2024). RDR: the Recap, Deliberate, and Respond Method for Enhanced Language Understanding. AAAI Workshop on Neuro-Symbolic Learning and Reasoning in the Era of Large Language Models. (Link: https://openreview.net/forum?id=hNQJI0KS3T)
  2. Sheth, A., & Roy, K. (2024). Neurosymbolic Value-Inspired AI (Why, What, and How). IEEE Intelligent Systems. (Link: https://arxiv.org/pdf/2312.09928.pdf)
  3. Sheth, A., Roy, K., & Gaur, M. (2023). Neurosymbolic AI - Why, What, and How. IEEE Intelligent Systems 38 (3), 56-62. (Link: https://ieeexplore.ieee.org/abstract/document/10148662/)

Funding

  • NSF Award #: 2335967
  • Award Period of Performance: Start Date: 07/01/2021, End Date: 09/30/2025
  • Project Titles: EAGER: Knowledge-guided neurosymbolic AI with guardrails for safe virtual health assistants, and Advancing Neurosymbolic AI with Deep Knowledge infused Learning
  • Award Amount: $337,000