EAGER: Knowledge-guided neurosymbolic AI with guardrails for safe virtual health assistants

Following the era of Symbolic AI in the 20th century and Statistical AI in the first two decades of this century, interest has grown in neurosymbolic AI, which combines the strengths of both paradigms by pairing symbolic knowledge structures with deep learning techniques. In 2022, our work centered on a family of approaches called Knowledge-infused (deep) Learning (KiL), which incorporates various types of knowledge, at different levels of abstraction, into neural network pipelines; this knowledge is represented declaratively in knowledge graphs (KGs). Since July 2022, we have enriched these knowledge graphs with procedural knowledge: the typical processes that experts follow in diverse domains and everyday applications. Our ongoing work advances KiL so that it can handle such process knowledge effectively, through an enhanced approach called process knowledge-infused learning (PK-iL).
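To make the PK-iL idea concrete, here is a minimal, hypothetical Python sketch: a fragment of procedural knowledge (loosely inspired by a clinical questionnaire flow) is encoded as a directed graph, and candidate follow-up questions proposed by a neural model are filtered against it. Every name below (PHQ9_FLOW, constrain, the step labels) is an illustrative assumption, not the project's actual implementation.

    # Procedural knowledge as a directed graph: each step lists the
    # follow-ups the clinical process would sanction next.
    # (Hypothetical fragment, loosely inspired by questionnaire flows.)
    PHQ9_FLOW = {
        "ask_mood": ["ask_sleep", "ask_interest"],
        "ask_interest": ["ask_sleep"],
        "ask_sleep": ["ask_self_harm"],
        "ask_self_harm": ["escalate_to_clinician"],
        "escalate_to_clinician": [],
    }

    def constrain(candidates: list[str], current_step: str) -> list[str]:
        """Keep only the neural model's proposals that the process
        graph permits after the current step; reject everything else."""
        allowed = set(PHQ9_FLOW.get(current_step, []))
        return [c for c in candidates if c in allowed]

    # A language model might propose several follow-ups; only the
    # process-sanctioned one survives the symbolic filter.
    print(constrain(["ask_self_harm", "ask_hobbies"], "ask_sleep"))
    # -> ['ask_self_harm']

In this framing, the process graph supplies the guardrail: the neural component remains free to generate, while the symbolic component decides what is admissible.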

Specific Objectives

Using PK-iL, we set out to answer the following research questions:

  1. How can we systematically incorporate explicit knowledge of concepts expressed in user-friendly terms into the implicitly characterized components (comprising billions of parameters) of a neural network?
  2. How can we accomplish this integration while ensuring that the specific goals of the application are fulfilled, such as obtaining clinically valid and safe outcomes in the healthcare domain?
  3. Is it possible to develop algorithms capable of handling a diverse range of declarative and procedural knowledge necessary for effective neurosymbolic AI, which can address issues associated with each individual method (i.e., neural or symbolic methods) in isolation?
  4. How can we robustly and accurately evaluate success or failure on the three questions above?

Significant Results So Far

  1. We developed a set of algorithms and systematic approaches for building foundation models that incorporate both declarative and procedural knowledge. Our approaches outperform the current state of the art by up to 14% on standard performance benchmarks, highlighting the effectiveness of PK-iL approaches for neurosymbolic AI (link: https://arxiv.org/pdf/2306.09824.pdf).
  2. Our experiments indicate that combining neural networks with process knowledge in a mental health questionnaire yields responses that are 12.54% and 9.37% more accurate, as measured by their similarity to responses given in real-world clinical settings; a toy illustration of this kind of similarity scoring appears after this list (link: https://www.frontiersin.org/articles/10.3389/fdata.2022.1056728/full).
  3. In general, regardless of the specific language model utilized, leveraging PK-iL leads to an average improvement of 82% over pre-trained large language models in terms of safety, explainability, and the generation of process-guided questions. These improvements have been validated by domain experts (link: https://arxiv.org/pdf/2306.09824.pdf).
  4. We successfully demonstrated a fully operational system called ALLEVIATE, which provides advanced functionality for assisting patients and clinicians in the field of mental healthcare (link: https://arxiv.org/pdf/2304.00025.pdf).
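As a rough illustration of the similarity-based evaluation mentioned in result 2 above, the sketch below scores a model-generated question against a clinician-written reference using a plain bag-of-words cosine. The metric is a stand-in chosen for brevity; the cited paper's actual evaluation setup may differ, and the example strings are invented.

    from collections import Counter
    from math import sqrt

    def cosine_similarity(a: str, b: str) -> float:
        """Cosine similarity over token counts (illustrative metric only)."""
        va, vb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(va[t] * vb[t] for t in va)
        norm = (sqrt(sum(v * v for v in va.values()))
                * sqrt(sum(v * v for v in vb.values())))
        return dot / norm if norm else 0.0

    reference = "have you had thoughts of harming yourself in the past two weeks"
    generated = "in the past two weeks have you had any thoughts of self harm"
    print(f"{cosine_similarity(reference, generated):.2f}")  # high overlap, ~0.8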


Link to the previous EAGER award: http://wiki.aiisc.ai/index.php/Advancing_Neuro-symbolic_AI_with_Deep_Knowledge-infused_Learning

Publications

  1. Dalal, S., Tilwani, D., Gaur, M., Jain, S., Shalin, V.L., & Sheth, A. (2023). A Cross Attention Approach to Diagnostic Explainability Using Clinical Practice Guidelines for Depression. ArXiv, abs/2311.13852. Accepted to IEEE JBHI (IF 7.7, acceptance rate 14.5%)
  2. Tilwani, D., Venkataramanan, R., & Sheth, A.P. (2024). Neurosymbolic AI Approach to Attribution in Large Language Models. (Link: https://arxiv.org/abs/2410.03726)
  3. Jaimini, U., Henson, C., Sheth, A., and Harik, R., 2024. Causal Neuro-Symbolic AI for Root Cause Analysis in Smart Manufacturing, 23rd International Semantic Web Conference (ISWC) 2024. (Link: https://scholarcommons.sc.edu/aii_fac_pub/614/)
  4. Jaimini, U., Henson, C., and Sheth, A., 2024. Causal Neuro-Symbolic AI based Causal Entity Prediction in Autonomous Driving, 23rd International Semantic Web Conference (ISWC) 2024. (Link: https://scholarcommons.sc.edu/aii_fac_pub/615/)
  5. Jaimini, U., Henson, C., and Sheth, A., 2024. Visual Causal Question and Answering with Knowledge Graph Link Prediction, 23rd International Semantic Web Conference (ISWC) 2024. (Link: https://scholarcommons.sc.edu/aii_fac_pub/613/)
  6. Jaimini, U., Wickramarachchi, R., Henson, C., and Sheth, A., 2024. Ontology Design Metapattern for RelationType Role Composition, 15th Workshop on Ontology Design and Patterns (WOP 2024) at 23rd International Semantic Web Conference (ISWC) 2024. (Link: https://scholarcommons.sc.edu/aii_fac_pub/616/)
  7. Gaur, M., & Sheth, A. (2024). Building trustworthy NeuroSymbolic AI Systems: Consistency, reliability, explainability, and safety. AI Magazine, 45(1), 139-155 (Link: https://onlinelibrary.wiley.com/doi/pdf/10.1002/aaai.12149).
  8. Zi, Y., Veeramani, H., Roy, K., & Sheth, A. (2024). RDR: the Recap, Deliberate, and Respond Method for Enhanced Language Understanding. AAAI Workshop on Neuro-Symbolic Learning and Reasoning in the era of Large Language Models. (Link: https://openreview.net/forum?id=hNQJI0KS3T)
  9. Zi, Y., Roy, K., Narayanan, V., & Sheth, A. (2024). Exploring Alternative Approaches to Language Modeling for Learning from Data and Knowledge. AAAI Spring Symposium on Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge. (Link: https://scholarcommons.sc.edu/cgi/viewcontent.cgi?article=1619&context=aii_fac_pub)
  10. Roy, K., Oltramari, A., Zi, Y., Shyalika, C., Narayanan, V., & Sheth, A. (2024). Causal Event Graph-Guided Language-based Spatiotemporal Question Answering. AAAI Spring Symposium on Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge. (Link: https://scholarcommons.sc.edu/cgi/viewcontent.cgi?article=1618&context=aii_fac_pub)
  11. Roy, K., Khandelwal, V., Surana, H., Lookingbill, V., Sheth, A., & Heckman, H. (2024). GEAR-Up: Generative AI and External Knowledge-based Retrieval Upgrading Scholarly Article Searches for Systematic Reviews. AAAI Conference on Artificial Intelligence 38. (Link: https://arxiv.org/abs/2312.09948)
  12. Sheth, A., & Roy, K. (2024). Neurosymbolic Value-Inspired AI (Why, What, and How). IEEE Intelligent Systems. (Link: https://arxiv.org/pdf/2312.09928.pdf)
  13. Jaimini, U., Thirunarayan, K., Kalra, M., Dawson, R., & Sheth, A. (2024). Personalized Bayesian Inference for Explainable Healthcare Management and Intervention. Workshop on Human-Centred XAI: Enhancing AI Acceptability for Healthcare at 12th IEEE International Conference on Healthcare Informatics (ICHI) (Link: https://scholarcommons.sc.edu/cgi/viewcontent.cgi?type=pdf&article=1622&unstamped=yes&date=1713736308&preview_mode=1&context=aii_fac_pub&/1713736308-text.pdf)
  14. Jaimini, U., Henson, C., and Sheth, A., (2024). Causal Neurosymbolic AI: A Synergy Between Causality and Neurosymbolic Methods. IEEE Intelligent Systems, 39(3), pp.13-19. (Link: https://www.computer.org/csdl/magazine/ex/2024/03/10570374/1Y2h54VNhTy)

Keynotes, Tutorials and Talks

  1. Amit Sheth, "Forging Trust in Tomorrow's AI: A Roadmap for Reliable, Explainable, and Safe NeuroSymbolic Systems," invited talks at APPCAIR-IEEE AI Symposium, Bosch Neuro-symbolic AI Focus Group, and Ontology Summit 2024, April 2024.

Funding

  • NSF Award #: 2335967
  • Award Period of Performance: 10/01/2023 to 09/30/2025
  • Project Title: EAGER: Knowledge-guided neurosymbolic AI with guardrails for safe virtual health assistants
  • Award Amount: $200,000

Personnel

Quad Chart