SaTC: CORE: Small: Enhancing Security and Mitigating Harm in AI-Generated Vision Language Models
The extraordinary benefits of large Generative (Gen) AI models also come with a substantial risk of misuse and potential for harm. A primary concern expressed by thousands of AI experts is: Should we let machines flood our information channels with propaganda, untruth, hate, and toxicity? According to a Pew study, 64% of Americans say social media have a mostly negative effect on the way things are going in the U.S. today. Given that roughly 3.2 billion images are uploaded daily to social networks, and a rapidly growing share of these are generated by Gen AI models called Vision Language Models (VLMs), the need for robust multimodal toxicity prevention is more pressing than ever. Specifically, the project will develop techniques for the automatic detoxification of hateful images to safeguard against toxicity and bias in VLM-generated content. The project has the potential to significantly impact the media, online safety and trust, and other industries, as well as to help stakeholders in government, regulatory bodies, and policymaking. Broadening participation in computing and improving diversity will be achieved through an annual AI summer camp for high school students from schools with majority URM populations and through undergraduate research internships. The project will also directly impact 100 journalism students through their involvement in evaluation.
PROJECTS
Project-1: Development of Multi-Modal Knowledge Graph
TEAM MEMBERS
Advised By - Dr. Amit Sheth, Dr. Amitava Das
AIISC Collaborators:
Current Interns:
FUNDING
- NSF Award#: 2350302
- SaTC: CORE: Small: Enhancing Security and Mitigating Harm in AI-Generated Vision Language Models
- Timeline: 10/01/2024 to 09/30/2027
- Award Amount: $600,000