
ToxVis: Enabling Interpretability of Implicit vs. Explicit Toxicity Detection Models with Interactive Visualization

Uma Gunturi, Xiaohan Ding, Eugenia H. Rho
Mar 2023
Abstract
The rise of hate speech on online platforms has led to an urgent need for effective content moderation. However, the subjective and multi-faceted nature of hateful online content, including implicit hate speech, poses significant challenges to human moderators and content moderation systems. To address this issue, we developed ToxVis, a visually interactive and explainable tool for classifying hate speech into three categories: implicit, explicit, and non-hateful. We fine-tuned transformer-based models using RoBERTa, XLNet, and GPT-3 and used deep learning interpretation techniques to provide explanations for the classification results. ToxVis enables users to input potentially hateful text and receive a classification result along with a visual explanation of which words contributed most to the decision. By making the classification process explainable, ToxVis provides a valuable tool for understanding the nuances of hateful content and supporting more effective content moderation. Our research contributes to the growing body of work aimed at mitigating the harms caused by online hate speech and demonstrates the potential for combining state-of-the-art natural language processing models with interpretable deep learning techniques to address this critical issue. Finally, ToxVis can serve as a resource for content moderators, social media platforms, and researchers working to combat the spread of hate speech online.
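
For readers who want a concrete picture of the pipeline the abstract describes, the sketch below shows one plausible realization: a RoBERTa-based classifier over the three labels, with token-level Integrated Gradients attributions (via the Captum library) standing in for the unspecified "deep learning interpretation techniques." This is an illustrative sketch, not the authors' released code; the checkpoint name `roberta-base`, the label ordering, and the choice of Integrated Gradients are all assumptions.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from captum.attr import LayerIntegratedGradients

LABELS = ["non-hateful", "implicit", "explicit"]  # assumed label order

# Assumed checkpoint; in practice this would be a model fine-tuned on
# implicit/explicit hate speech data.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS)
)
model.eval()

def forward_logits(input_ids, attention_mask):
    """Return raw class logits; Captum differentiates through this."""
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

def classify_and_explain(text):
    enc = tokenizer(text, return_tensors="pt")
    input_ids, mask = enc["input_ids"], enc["attention_mask"]

    # 1. Classify: pick the highest-scoring of the three labels.
    with torch.no_grad():
        pred = forward_logits(input_ids, mask).argmax(dim=-1).item()

    # 2. Explain: integrate gradients from an all-pad-token baseline to
    #    the real input, hooked at the embedding layer, targeting the
    #    predicted class.
    lig = LayerIntegratedGradients(forward_logits, model.roberta.embeddings)
    baseline = torch.full_like(input_ids, tokenizer.pad_token_id)
    attrs = lig.attribute(
        inputs=input_ids,
        baselines=baseline,
        additional_forward_args=(mask,),
        target=pred,
    )

    # Collapse the embedding dimension to one importance score per token,
    # then normalize to [-1, 1] for display.
    scores = attrs.sum(dim=-1).squeeze(0)
    scores = scores / scores.abs().max()

    tokens = tokenizer.convert_ids_to_tokens(input_ids.squeeze(0))
    return LABELS[pred], list(zip(tokens, scores.tolist()))

label, token_scores = classify_and_explain("some potentially hateful text")
print("predicted:", label)
for tok, score in token_scores:
    print(f"{tok:>12}  {score:+.3f}")
```

The per-token scores returned here are the kind of signal that can be rendered as word-level highlighting, matching the abstract's description of a visual explanation of which words contributed most to the decision.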
