NSF Convergence Accelerator Track F: Actionable Sensemaking Tools for Curating and Authenticating Information in the Presence of Misinformation


Overview

Trust is a fundamental construct underpinning modern society and the social exchanges it contains. The economist Kenneth Arrow identifies trust as “a lubricant of the social system.” Individuals and organizations within a society learn about the world by collectively acquiring, filtering, and curating information and then acting upon it. Misinformation compromises this decision-making process. The repercussions can be extensive, especially with the rise of Web technologies and social media and the ease with which misinformation can be shared, propagated, and left uncorrected. Misinformation impacts and threatens many arenas of human endeavor, including the security of nation states (e.g., intelligence gathering), public diplomacy and foreign aid, peace and warfare studies, news generation, sustainability and climate change issues, democratic elections, and decision making during public health crises. For example, proposed policies to control the spread of SARS-CoV-2 are routinely undermined by politically driven disinformation, prompting the World Health Organization’s “infodemic” declaration. Ultimately, misinformation undermines trust, collective sensemaking, and effective decision making.

Intellectual Merits

We focus on the specific problem of how misinformation impacts decision makers and stakeholders during public health crises. The intellectual merits are anchored in the following aims:


  • Aim 1: Engage and Build Communities of Interest to inform the development of tools that assist in managing misinformation: Our objectives are to understand the disruptive effects of misinformation on established decision-making practices and to build community ownership of the tool we will collectively develop, defining requirements and subsequent evaluation criteria in the context of stakeholders' existing work responsibilities.
  • Aim 2: Build a Prototype Tool for Data Authentication, Curation, and Sensemaking: Our goal is to facilitate actionable sensemaking from a diverse set of resources (news outlets, citizen-sourced social media data, governance policy documents, open data) in the presence of misinformation. We will leverage a novel relational modeling of trust, knowledge-based coherent content curation, and innovative explainable authenticity mechanisms for this purpose.
  • Aim 3: Enhance the robustness of the crisis sensemaking tool and simultaneously develop a microworld environment to help train responders and policy makers: We will develop microworld testbeds (digital twins) that simulate realistic scenarios, adapting to important characteristics of a diverse set of real situations while enabling experimental manipulation and control, both to enhance the robustness of the proposed tool and to help train decision makers and build resilience against misinformation. Novel adversarial methods will be explored in this context.


Addressing these aims requires a convergent approach drawing from the disciplines of Communication, Cognitive Psychology, Computer Science, Journalism, Linguistics, Public Health, and Sociology. The assembled team draws from all of these disciplines. Several team members have examined issues of trust, authenticity, and misinformation in social media. Team members also have a strong history of collaboration and engagement with emergency management responders and humanitarian agencies.

Broader Impacts

From a broader scientific impact perspective, the proposed tool, the simulated microworlds, and the lessons learned can extend to other domains where misinformation management is critical, e.g., intelligence gathering and information security applications, democratic politics, and public policy for sustainability and climate change. We will involve nonacademic participants from public health and emergency management agencies, NGOs, and companies in tasks ranging from requirements elicitation and evaluation in phase 1 to real-world deployment and impact assessment in phase 2. From a training perspective, an integral part of Aim 3 is to help train both current practitioners and the next generation of crisis decision makers. We will also train an interdisciplinary group of young researchers (postdocs, graduate students, and software engineers).