Context-Aware Harassment Detection on Social Media


Context-Aware Harassment Detection on Social Media is an interdisciplinary project among the Ohio Center of Excellence in Knowledge-enabled Computing (Kno.e.sis), the Department of Psychology, and the Center for Urban and Public Affairs (CUPA) at Wright State University. The aim of this project is to develop comprehensive and reliable context-aware techniques (using machine learning, text mining, natural language processing, and social network analysis) to glean information about the people involved and their interconnected network of relationships, and to determine and evaluate potential harassment and harassers. An interdisciplinary team of computer scientists, social scientists, urban and public affairs professionals, and educators, together with the participation of college and high school students in the research, will ensure the wide impact of scientific research on support for safe social interactions.


As social media permeates our daily life, there has been a sharp rise in the use of social media to humiliate, bully, and threaten others, with harmful consequences such as emotional distress, depression, and suicide. The October 2014 Pew Research survey <ref>Pew Internet, Online Harassment, 2014.</ref> shows that 73% of adult Internet users have observed online harassment and 40% have experienced it. Among those who have experienced online harassment, 66% said their most recent incident occurred on a social networking site or app. Further, 25% of teens report having been cyberbullied <ref>Cyberbullying Research Center, Cyberbullying Facts, 2012.</ref>. The prevalence and serious consequences of online harassment present both social and technological challenges.

Existing work on harassment detection usually applies machine learning for binary classification, relying on message content while ignoring message context. Harassment, however, is a pragmatic phenomenon, necessarily context-sensitive. We identify three dimensions of context for harassment on social media: people, content, and network. Focusing on content while ignoring either people (offender and victim) or network (the social networks of offender and victim) yields misleading results. An apparent "bullying conversation" between good friends with sarcastic content presents no serious threat, while the same content from an identifiable stranger may function as harassment. Content analysis alone cannot capture these subtle but important distinctions.

Social science research identifies necessary harassment components and features typically ignored in the existing binary harassment-or-not computation: (1) aggressive/offensive language, (2) potentially harmful emotional consequences, such as distress and psychological trauma, and (3) a deliberate intent to harm. We investigate novel language analysis techniques that examine the target-dependent offensiveness/negativity of a message, including the notion of target (recipient) sensitivity missing from existing harassment detection systems. The harassment value depends further on the resulting emotional harm and the intent of the sender. Thus, we reframe social media harassment detection as a multi-dimensional analysis of the degree to which harassment occurs. The specific research goals of this proposal are:

  1. (i) Identify the language-based, target-dependent offensiveness/negativity of a message, (ii) predict message harm from an emotion perspective, (iii) recognize sender malice from an intent perspective, and (iv) consequently assess overall message harm.
  2. Detect harassing social media accounts automatically, by developing algorithms that assess the degree of message harm using features such as frequency, duration and coverage measures.
  3. Evaluate algorithm quality and generality by examining both school and workplace settings, which present different contextual variables in the people, content, and network dimensions.
  4. Provide an alert service of potential harassment messages for parents to facilitate intervention. Provide our harassment detection techniques as REST Web services for the purposes of research and education. Release our research efforts as an open source project on GitHub so that they can be adapted and reused on other platforms, e.g., Facebook and online forums.
  5. Educate teenagers regarding social media harassment, including its characteristics, the associated prohibitions and penalties, and prevention strategies. We will collaborate with local schools to create and widely disseminate online course modules.
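Research goals 1 and 2 above describe a two-stage analysis: score each message along the offensiveness, emotional-harm, and sender-intent dimensions, then aggregate account-level signals such as frequency, duration, and coverage of harmful messages. The Python sketch below illustrates only the shape of such a pipeline; the `MessageAssessment` fields, the combination weights, and the 0.5 threshold are hypothetical placeholders, not the project's actual models or parameters.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

@dataclass
class MessageAssessment:
    """Hypothetical per-message scores in [0, 1]; in the project these would
    come from the language, emotion, and intent analyses described above."""
    timestamp: datetime
    recipient: str
    offensiveness: float   # target-dependent offensiveness/negativity
    emotional_harm: float  # predicted harm to the recipient's emotional state
    sender_malice: float   # inferred deliberate intent to harm

def message_harm(m: MessageAssessment) -> float:
    """Combine the three dimensions into one harm score (illustrative weights)."""
    return 0.4 * m.offensiveness + 0.3 * m.emotional_harm + 0.3 * m.sender_malice

def account_harm(messages: List[MessageAssessment],
                 threshold: float = 0.5) -> Dict[str, float]:
    """Account-level signals: frequency, duration, and coverage of messages
    whose combined harm score exceeds the (assumed) threshold."""
    harmful = [m for m in messages if message_harm(m) > threshold]
    if not harmful:
        return {"frequency": 0, "duration_days": 0.0, "coverage": 0}
    span = max(m.timestamp for m in harmful) - min(m.timestamp for m in harmful)
    return {
        "frequency": len(harmful),                        # how many harmful messages
        "duration_days": span.total_seconds() / 86400,    # over how long a period
        "coverage": len({m.recipient for m in harmful}),  # how many distinct targets
    }
```

An account that sends many high-harm messages, over a sustained period, to many distinct recipients would score high on all three account-level signals.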


Publications

  1. Thilini Wijesiriwardene, Hale Inan, Ugur Kursuncu, Manas Gaur, Valerie L. Shalin, Krishnaprasad Thirunarayan, Amit Sheth, I. Budak Arpinar. ALONE: A Dataset for Toxic Behavior among Adolescents on Twitter. In Proceedings of the International Conference on Social Informatics (SocInfo 2020).
  2. Mohammadreza Rezvan, Saeedeh Shekarpour, Faisal Alshargi, Krishnaprasad Thirunarayan, Valerie L. Shalin, Amit Sheth. Analyzing and learning the language for different types of harassment. PLoS ONE 15, no. 3 (2020): e0227330.
  3. Ugur Kursuncu, Manas Gaur, Amit Sheth. Knowledge Infused Learning (K-IL): Towards Deep Incorporation of Knowledge in Deep Learning. In Proceedings of the AAAI 2020 Spring Symposium on Combining Machine Learning and Knowledge Engineering in Practice (AAAI-MAKE 2020). Stanford University, Palo Alto, California, USA, 2020.
  4. Manas Gaur, Ugur Kursuncu, Amit Sheth, Ruwan Wickramarachchi, Shweta Yadav. Knowledge-infused Deep Learning. Hypertext 2020 Tutorial.
  5. Ugur Kursuncu, Manas Gaur, Carlos Castillo, Amanuel Alambo, Krishnaprasad Thirunarayan, Valerie Shalin, Dilshod Achilov, I. Budak Arpinar, Amit Sheth. Modeling Islamist Extremist Communications on Social Media using Contextual Dimensions: Religion, Ideology, and Hate. In Proceedings of the ACM on Human-Computer Interaction 3, no. CSCW (2019): 1-22.
  6. Ugur Kursuncu, Manas Gaur, Usha Lokala, Krishnaprasad Thirunarayan, Amit Sheth and I. Budak Arpinar. "Predictive Analysis on Twitter: Techniques and Applications". In "Emerging Research Challenges and Opportunities in Computational Social Network Analysis and Mining", Springer Nature, 2019.
  7. Swati Padhee, Sarasi Lalithsena and Amit Sheth. "Creating Real-Time Dynamic Knowledge Graphs". International Semantic Web Research School (ISWS) 2018, Bertinoro, Italy, 2018.
  8. Kho, S. J., Padhee, S., Bajaj, G., Thirunarayan, K., & Sheth, A. (2019). Use Cases for Knowledge-enabled Social Media Analysis. In Emerging Research Challenges and Opportunities in Computational Social Network Analysis and Mining (pp. 233-246). Springer, Cham.
  9. Mohammadreza Rezvan, Saeedeh Shekarpour, Lakshika Balasuriya, Krishnaprasad Thirunarayan, Valerie L. Shalin, Amit Sheth. A Quality Type-aware Annotated Corpus and Lexicon for Harassment Research. 10th ACM Conference on Web Science (WebSci'18), Amsterdam, The Netherlands, 27-30 May 2018 (nominated for the best paper award).
  10. Rüsenberg, F., Hampton, A.J., Shalin, V.L. & Feufel, M. (2018). Stop-words are not “nothing”: German modal particles and public engagement in social media. In Proceedings of SBP-BRiMs: LNCS 10899 Social, Cultural and Behavioral Modeling, R. Thompson, C. Dancy, A. Hyder & H. Bisgin (Eds). pp. 89-96. Springer, Switzerland.
  11. Saeedeh Shekarpour, Edgard Marx, Sören Auer, Amit Sheth. "RQUERY: Rewriting Natural Language Queries on Knowledge Graphs to Alleviate the Vocabulary Mismatch Problem". Published in AAAI 2017
  12. Yazdavar AH, Mahdavinejad MS, Bajaj G, Romine W, Sheth A, Monadjemi AH, et al. (2020) Multimodal mental health analysis in social media. PLoS ONE 15(4): e0226248.
  13. Bhatt, S., Padhee, S., Chen, K., Shalin, V., Doran, D., Sheth, A., and Minnery, B., 2019, February. Knowledge graph enhanced community detection and characterization. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. ACM.
  14. Computational Social Science as the Ultimate Web Intelligence, Panel at Web Intelligence 2018.

Final Reports

  1. Final Report
  2. Outcome Report


Funding

  • NSF Award#: CNS 1513721
  • TWC SBE: Medium: Context-Aware Harassment Detection on Social Media
  • Timeline: 01 Sep 2015 - 15 Aug 2020
  • Award Amount: $925,104 + $16,000 (REU)



Principal Investigators: Prof. Amit P. Sheth

Co-Investigators: Prof. Valerie L. Shalin, Prof. Krishnaprasad Thirunarayan

Postdoctoral Researchers: Dr. Ugur Kursuncu

Graduate Students: Thilini Wijesiriwardene

Other Collaborators: Prof. Debra Steele-Johnson, Dr. Jack L. Dustin, Hale Inan

Past Members: Saeedeh Shekarpour, Mohammadreza Rezvan, Monireh Ebrahimi, Lu Chen, Wenbo Wang, Pranav Karan, Rajeshwari Kandakatla, Venkatesh Edupuganti

Team members, Sep 2015. From left to right: Monireh Ebrahimi, Kathleen Renee Wylds, Prof. Debra Steele-Johnson, Prof. Amit Sheth, Prof. Valerie L. Shalin, Prof. Krishnaprasad Thirunarayan, Dr. Wenbo Wang, Dr. Lu Chen, Dr. Jack L. Dustin



Related Resources

  1. A painfully funny but informative introduction to the problem of online harassment:
  2. Why People Post Benevolent and Malicious Comments Online:


  • Sanjaya Wijeratne, Amit Sheth, Shreyansh Bhatt, Lakshika Balasuriya, Hussein Al-Olimat, Manas Gaur, Amir Hossein Yazdavar, Krishnaprasad Thirunarayan. "Feature Engineering for Twitter-based Applications", in Feature Engineering for Machine Learning and Data Analytics. Editors. Guozhu Dong and Huan Liu. Chapman and Hall/CRC Data Mining and Knowledge Discovery Series. pp 359-393, March, 2018.
  • Sujan Perera, Pablo N. Mendes, Adarsh Alex, Amit P. Sheth, and Krishnaprasad Thirunarayan. "Implicit Entity Linking in Tweets". In International Semantic Web Conference, pp. 118-132. Springer International Publishing; 2016.
  • Lakshika Balasuriya, Sanjaya Wijeratne, Derek Doran, Amit Sheth. "Finding Street Gang Members on Twitter" In 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2016). San Francisco, CA, USA; 2016.