Neurosymbolic Artificial Intelligence Research at AIISC

Revision as of 15:10, 21 February 2024

Neurosymbolic AI Overview

Humans interact with the environment using a combination of perception, which transforms sensory inputs from the environment into symbols, and cognition, which maps symbols to knowledge about the environment to support abstraction, reasoning by analogy, and long-term planning. Human perception-inspired machine perception, in the context of artificial intelligence (AI), refers to large-scale pattern recognition from raw data using neural networks trained with self-supervised learning objectives such as next-word prediction or object recognition. Machine cognition, on the other hand, encompasses more complex computations, such as using knowledge of the environment to guide reasoning, analogy, and long-term planning. Humans can also control and explain their cognitive functions, which seems to require the retention of symbolic mappings from perception outputs to knowledge about the environment. For example, humans can follow and explain the guidelines and safety constraints driving their decision making in safety-critical applications such as health care, criminal justice, and autonomous driving.

Embodying intelligent behavior in an artificial intelligence system must involve both perception (processing raw data) and cognition (using background knowledge to support abstraction, analogy, reasoning, and planning). Symbolic structures represent this background knowledge explicitly. Although neural networks are a powerful tool for processing and extracting patterns from data, they lack explicit representations of background knowledge, hindering the reliable evaluation of their cognition capabilities. Furthermore, applying appropriate safety standards while providing explainable outcomes guided by concepts from background knowledge is crucial for establishing trustworthy models of cognition for decision support.

Outcomes Achieved So far

  1. The rapid improvement in language models suggests that they will reach near-optimal performance for large-scale perception. Knowledge graphs are suitable symbolic structures for bridging the perception and cognition aspects because they support real-world dynamism: unlike static and brittle symbolic logics, such as first-order logic, they are easy to update. In addition to their suitability for enterprise use cases and established standards for portability, knowledge graphs come with a mature ecosystem of algorithms for highly efficient graph management and querying, scaling to datasets with millions or billions of nodes.
  2. We find that combining language models and knowledge graphs is most effective in current implementations. Future knowledge graphs also have the potential to model heterogeneous types of application- and domain-level knowledge beyond schemas, including workflows, constraint specifications, and process structures.
  3. Combining such enhanced knowledge graphs with high-capacity neural networks would provide the end user with a very high degree of algorithmic- and application-level utility. Safety concerns are behind the recent push to withhold further rollout of generative AI systems such as GPT*, as current systems could significantly harm individuals and society without additional guardrails. We believe that guidelines, policies, and regulations can be encoded via extended forms of knowledge graphs such as those shown in Figure 4 (and hence, by symbolic means), which in turn can provide explainability, accountability, rigorous auditing capabilities, and safety. Encouragingly, progress is being made swiftly on all these fronts, and the future looks promising.
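The combination described above can be illustrated with a minimal sketch: background knowledge stored as an explicit set of subject-predicate-object triples, queried with pattern matching, and used as a symbolic guardrail to vet a neural component's output. All entity names, relations, and functions below are hypothetical illustrations for this sketch, not artifacts of the project itself.

```python
from typing import Optional

# A tiny "knowledge graph" as a set of (subject, predicate, object)
# triples. Entities and relations are invented for illustration.
KG = {
    ("ibuprofen", "type", "nsaid"),
    ("nsaid", "contraindicated_with", "warfarin"),
    ("acetaminophen", "type", "analgesic"),
}

def query(subject: Optional[str] = None,
          predicate: Optional[str] = None,
          obj: Optional[str] = None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        (s, p, o) for (s, p, o) in KG
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

def violates_guardrail(suggested_drug: str, current_drug: str) -> bool:
    """Symbolic safety check: is the suggested drug, or any class it
    belongs to, contraindicated with a drug already in use?"""
    classes = {o for (_, _, o) in query(subject=suggested_drug, predicate="type")}
    classes.add(suggested_drug)
    return any(
        query(subject=c, predicate="contraindicated_with", obj=current_drug)
        for c in classes
    )

# A language model's free-text suggestion can be vetted symbolically
# before it reaches the user:
print(violates_guardrail("ibuprofen", "warfarin"))      # True: blocked
print(violates_guardrail("acetaminophen", "warfarin"))  # False: allowed
```

Because the guardrail lives in the symbolic layer rather than in the neural network's weights, updating a guideline means adding or removing a triple, and every blocked suggestion can be explained by pointing at the triples that triggered the check.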

Journal Publications

  1. Sheth, A., Roy, K., & Gaur, M. (2023). Neurosymbolic Artificial Intelligence (Why, What, and How). IEEE Intelligent Systems, 38(3), 56-62.
  2. Sheth, A., & Roy, K. (2024). Neurosymbolic Value-Inspired AI (Why, What, and How). IEEE Intelligent Systems. (Link: https://arxiv.org/pdf/2312.09928.pdf)

Conference Publications

  1. Shiri, A., Roy, K., Sheth, A., & Gaur, M. (2023). L3 ensembles: Lifelong learning approach for ensemble of foundational language models. Young Researchers Symposium, ACM CODS-COMAD 2024. (Link: https://scholarcommons.sc.edu/aii_fac_pub/590/)
  2. Roy, K., Khandelwal, V., Surana, H., Lookingbill, V., Sheth, A., & Heckman, H. (2024). GEAR-Up: Generative AI and External Knowledge-based Retrieval Upgrading Scholarly Article Searches for Systematic Reviews. AAAI Conference on Artificial Intelligence 38. (Link: https://arxiv.org/abs/2312.09948)
  3. Zi, Y., Veeramani, H., Roy, K., & Sheth, A. (2024). RDR: the Recap, Deliberate, and Respond Method for Enhanced Language Understanding. AAAI Workshop on Neuro-Symbolic Learning and Reasoning in the era of Large Language Models. (Link: https://openreview.net/forum?id=hNQJI0KS3T)

Funding

  • NSF Award #: 2335967
  • Award Period of Performance: Start Date: 10/01/2023, End Date: 09/30/2025
  • Project Title: EAGER: Knowledge-guided neurosymbolic AI with guardrails for safe virtual health assistants
  • Award Amount: $200,000

Personnel