Neurosymbolic Artificial Intelligence: Why, What and How?

Neurosymbolic AI Overview

Humans interact with their environment through a combination of perception, which transforms sensory inputs into symbols, and cognition, which maps those symbols to knowledge about the environment to support abstraction, reasoning by analogy, and long-term planning. In AI, human perception-inspired machine perception refers to large-scale pattern recognition over raw data using foundation models trained with self-supervised objectives such as next-word prediction or object recognition. Machine cognition, by contrast, encompasses more complex human-like computations, such as using knowledge of the environment to guide reasoning, analogy, and long-term planning. Humans can also control and explain their cognitive functions, which requires retaining the symbolic mappings from perception outputs to knowledge about the environment. For example, humans can follow and explain the guidelines and safety constraints that drive their decision-making in safety-critical applications such as healthcare, criminal justice, and autonomous driving.

While data-driven, neural network-based AI algorithms effectively model machine perception, symbolic knowledge-based AI is better suited to modeling machine cognition, because symbolic knowledge structures represent the mappings from perception outputs to knowledge explicitly, enabling traceability and auditing of an AI system's decisions. Such audit trails help enforce safety guardrails, including regulatory compliance and explainability, by tracking the system's inputs, outputs, and intermediate steps. Combining neural networks with knowledge-guided symbolic approaches therefore creates more capable and flexible AI systems, with the potential to advance both algorithm-level capabilities (e.g., abstraction, analogy, reasoning) and application-level capabilities (e.g., explainable and safety-constrained decision-making).
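
To make the traceability point concrete, the following is a minimal, hypothetical Python sketch of pairing a neural perception step with a symbolic knowledge lookup so that every decision leaves an auditable trail. The classifier stub, concepts, and safety constraints are illustrative placeholders, not components of any system described on this page.

  # Hypothetical sketch: a neural perception output (a symbol) is mapped to
  # symbolic safety knowledge, and every step is logged for auditing.
  from dataclasses import dataclass
  from typing import List

  @dataclass
  class AuditRecord:
      raw_input: str           # what the system observed
      perceived_concept: str   # symbol produced by the perception step
      applied_constraint: str  # symbolic knowledge that was consulted
      decision: str            # final, constraint-checked action

  # Toy symbolic knowledge: concept -> safety constraint (stands in for a
  # knowledge graph or clinical guideline base).
  SAFETY_CONSTRAINTS = {
      "chest_pain": "escalate to a clinician; no automated advice",
      "mild_headache": "self-care guidance permitted",
  }

  def perceive(text: str) -> str:
      """Placeholder for a neural model mapping raw input to a concept."""
      return "chest_pain" if "chest" in text.lower() else "mild_headache"

  def decide(text: str, trail: List[AuditRecord]) -> str:
      concept = perceive(text)                  # perception: data -> symbol
      constraint = SAFETY_CONSTRAINTS[concept]  # cognition: symbol -> knowledge
      decision = ("refer_to_clinician" if "escalate" in constraint
                  else "offer_self_care_tips")
      trail.append(AuditRecord(text, concept, constraint, decision))
      return decision

  trail: List[AuditRecord] = []
  print(decide("I have chest tightness", trail))
  for record in trail:                          # the audit trail itself
      print(record)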

Why Neurosymbolic AI?

Embodying intelligent behavior in an AI system requires both perception (processing raw data) and cognition (using domain knowledge to support abstraction, analogy, reasoning, and planning). The advantage of symbolic structures is that they represent such domain knowledge explicitly. Neural networks are powerful tools for processing and extracting patterns from data, but they lack the explicit representations of domain knowledge that advanced cognition requires; neurosymbolic AI supplies them by combining the two paradigms. Neurosymbolic AI also provides mechanisms for applying appropriate safety standards and for producing outcomes that are explainable in terms of concepts from the domain knowledge, which is crucial for establishing trustworthy models of cognition for decision support.

Methods Papers

  1. Roy, K., Oltramari, A., Zi, Y., Shyalika, C., Narayanan, V., & Sheth, A. (2024). Causal event graph-guided language-based spatiotemporal question answering. AAAI Spring Symposium on Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge (Link: https://scholarcommons.sc.edu/aii_fac_pub/599/)
  2. Zi, Y., Roy, K., Narayanan, V., & Sheth, A. (2024). Exploring Alternative Approaches to Language Modeling for Learning from Data and Knowledge. AAAI Spring Symposium on Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge. (Link: https://scholarcommons.sc.edu/aii_fac_pub/600/)
  3. Zi, Y., Veeramani, H., Roy, K., & Sheth, A. (2024). RDR: the Recap, Deliberate, and Respond Method for Enhanced Language Understanding. AAAI Workshop on Neuro-Symbolic Learning and Reasoning in the Era of Large Language Models. (Link: https://openreview.net/forum?id=hNQJI0KS3T)

Vision Papers

  1. Sheth, A., & Roy, K. (2024). Neurosymbolic Value-Inspired AI (Why, What, and How). IEEE Intelligent Systems. (Link: https://arxiv.org/pdf/2312.09928.pdf)
  2. Sheth, A., Roy, K., & Gaur, M. (2023). Neurosymbolic AI - Why, What, and How. IEEE Intelligent Systems, 38(3), 56-62. (Link: https://ieeexplore.ieee.org/abstract/document/10148662/)
  3. Wijesiriwardene, T., Sheth, A., Shalin, V. L., & Das, A. (2023). Why Do We Need Neurosymbolic AI to Model Pragmatic Analogies? IEEE Intelligent Systems, 38(5), 12-16. (Link: https://ieeexplore.ieee.org/document/10269780)
  4. Gaur, M., & Sheth, A. (2023). Building Trustworthy NeuroSymbolic AI Systems: Consistency, Reliability, Explainability, and Safety. (Link: https://arxiv.org/pdf/2312.06798.pdf)
  5. Gaur, M., Gunaratna, K., Bhatt, S., & Sheth, A. (2022). Knowledge-infused Learning: A Sweet Spot in Neurosymbolic AI. IEEE Internet Computing, 26(4), 5-11. (Link: https://ieeexplore.ieee.org/document/9841416?denied=)

Application Papers

  1. Roy, K., Khandelwal, V., Surana, H., Vera, V., Sheth, A., & Heckman, H. (2023). GEAR-Up: Generative AI and External Knowledge-based Retrieval Upgrading Scholarly Article Searches for Systematic Reviews. AAAI Conference on Artificial Intelligence 38. (Link: https://arxiv.org/pdf/2312.09948.pdf)
  2. Roy, K., Khandelwal, V., Goswami, R., Dolbir, N., Malekar, J., & Sheth, A. (2023). Demo Alleviate: Demonstrating Artificial Intelligence Enabled Virtual Assistance for Telehealth: The Mental Health Case. AAAI Conference on Artificial Intelligence 37. (Link: https://arxiv.org/pdf/2304.00025)
  3. Wickramarachchi, R., Henson, C., & Sheth, A. (2023, June). CLUE-AD: a context-based method for labeling unobserved entities in autonomous driving data. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 13, pp. 16491-16493). (Link: https://ojs.aaai.org/index.php/AAAI/article/view/27089)
  4. Venkataramanan, R., Tripathy, A., Foltin, M., Yip, H. Y., Justine, A., & Sheth, A. (2023). Knowledge Graph Empowered Machine Learning Pipelines for Improved Efficiency, Reusability, and Explainability. IEEE Internet Computing, 27(1), 81-88. (Link: https://ieeexplore.ieee.org/abstract/document/10044293)

More papers related to Neurosymbolic AI by our group can be found at these links: https://shorturl.at/qxU69 and https://shorturl.at/cftGK

Tutorials

  1. Roy, K., Lokala, U., Gaur, M., & Sheth, A. P. (2022, October). Tutorial: Neuro-symbolic AI for Mental Healthcare. In Proceedings of the Second International Conference on AI-ML Systems (pp. 1-3). (Link: https://dl.acm.org/doi/abs/10.1145/3564121.3564817)

Summary and Learnings So Far

  1. We have developed a set of algorithms for building neurosymbolic AI methods that incorporate declarative knowledge from knowledge graphs and procedural knowledge from domain-specific knowledge sources.
  2. Our work shows the effectiveness of combining language models and knowledge graphs. Knowledge graphs can model heterogeneous types of application- and domain-level knowledge beyond schemas, including workflows, constraint specifications, and process structures, further enhancing the power and usefulness of neurosymbolic architectures. Combining such enriched knowledge graphs with high-capacity neural networks offers substantial algorithmic and application-level utility to end users.
  3. We find that knowledge graphs are well suited as the symbolic structures that bridge perception and cognition because they accommodate real-world dynamism: unlike static and brittle formalisms such as first-order logic, they are easy to update. Beyond their fit for enterprise use cases and their established portability standards, knowledge graphs come with a mature ecosystem of algorithms for efficient graph management and querying, which scales to large, complex datasets with millions or billions of nodes. A minimal sketch illustrating these points follows this list.
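
As an illustration of points 2 and 3 above, here is a minimal sketch using the open-source rdflib library in Python. It shows how a small knowledge graph can be extended with a single triple (no retraining or logic rewrite), queried with SPARQL, and how the retrieved facts can then be handed to a language model as symbolic context. The namespace, facts, and prompt are hypothetical examples, not excerpts from our published systems.

  # Hypothetical sketch with rdflib: update and query a small knowledge
  # graph, then pass the retrieved facts to a language model as context.
  from rdflib import Graph, Namespace
  from rdflib.namespace import RDF

  EX = Namespace("http://example.org/health/")
  g = Graph()
  g.bind("ex", EX)

  # Declarative knowledge as triples.
  g.add((EX.Ibuprofen, RDF.type, EX.Medication))
  g.add((EX.Ibuprofen, EX.contraindicatedWith, EX.KidneyDisease))

  # Updating the graph is a single statement -- no retraining required.
  g.add((EX.Ibuprofen, EX.contraindicatedWith, EX.StomachUlcer))

  # SPARQL query: which conditions make ibuprofen unsafe?
  results = g.query(
      """
      SELECT ?condition WHERE {
          ex:Ibuprofen ex:contraindicatedWith ?condition .
      }
      """,
      initNs={"ex": EX},
  )

  # Format the retrieved facts as symbolic context for a language model prompt.
  facts = [f"Ibuprofen is contraindicated with {row.condition.split('/')[-1]}."
           for row in results]
  prompt = ("Answer using only the facts below.\n"
            + "\n".join(facts)
            + "\nQuestion: Is ibuprofen safe for a patient with a stomach ulcer?")
  print(prompt)  # this prompt would be sent to the language model of choice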

Funding

  • NSF Award #: 2335967
  • Award Period of Performance: Start Date: 07/01/2021, End Date: 09/30/2025
  • Project Titles: EAGER: Knowledge-guided neurosymbolic AI with guardrails for safe virtual health assistants, and Advancing Neurosymbolic AI with Deep Knowledge infused Learning
  • Award Amount: $337,000