Explaining Multimodal Deceptive News Prediction Models
- Battelle (Pacific Northwest National Laboratory)
- Western Washington University
Social media plays a valuable role in rapid news dissemination, but it also serves as a vehicle to propagate unverified information. For example, news shared on Facebook or Twitter may actually contain disinformation, propaganda, hoaxes, conspiracies, clickbait, or satire. This paper presents an in-depth analysis of the behavior of suspicious news classification models, including error analysis and prediction confidence. We consider five deep learning architectures that leverage combinations of text, linguistic, and image input signals from tweets. The behavior of these models is analyzed across four suspicious news prediction tasks. Our findings include that models leveraging only the text of tweets outperform those leveraging only the image (by 3-13% absolute in F-measure), and that models that combine image and text signals with linguistic cues (e.g., biased and subjective language markers) can, but do not always, perform even better. Finally, our main contribution is a series of analyses in which we characterize the text and image traits of our classes of suspicious news and analyze the patterns of errors made by the various models to inform the design of future deceptive news prediction models.
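The record does not include code, so as an illustration of the kind of multimodal model the abstract describes, the sketch below fuses a tweet text encoder, precomputed image features, and hand-crafted linguistic cue features into a single classifier. All layer sizes, module names, and the concatenation-based late fusion are assumptions for illustration; they are not the five architectures evaluated in the paper.

```python
# Minimal sketch of one plausible text + image + linguistic-cue fusion model (PyTorch).
# The design choices here (LSTM text encoder, pooled CNN image features, simple
# concatenation fusion) are illustrative assumptions, not the paper's architectures.
import torch
import torch.nn as nn


class MultimodalSuspiciousNewsClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, text_hidden=128,
                 image_feat_dim=2048, ling_feat_dim=16, num_classes=4):
        super().__init__()
        # Text branch: embed tweet tokens and summarize the sequence with an LSTM.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, text_hidden, batch_first=True)
        # Image branch: project precomputed image features (e.g., pooled CNN output).
        self.image_proj = nn.Sequential(nn.Linear(image_feat_dim, text_hidden), nn.ReLU())
        # Linguistic-cue branch: hand-crafted markers such as bias/subjectivity counts.
        self.ling_proj = nn.Sequential(nn.Linear(ling_feat_dim, 32), nn.ReLU())
        # Late fusion: concatenate the three representations and classify.
        self.classifier = nn.Sequential(
            nn.Linear(text_hidden + text_hidden + 32, 128),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, token_ids, image_feats, ling_feats):
        embedded = self.embedding(token_ids)           # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(embedded)              # h_n: (1, batch, text_hidden)
        text_repr = h_n[-1]                            # last-layer hidden state
        image_repr = self.image_proj(image_feats)      # (batch, text_hidden)
        ling_repr = self.ling_proj(ling_feats)         # (batch, 32)
        fused = torch.cat([text_repr, image_repr, ling_repr], dim=1)
        return self.classifier(fused)                  # class logits


# Example forward pass with random tensors standing in for a batch of eight tweets.
model = MultimodalSuspiciousNewsClassifier()
logits = model(torch.randint(1, 20000, (8, 30)),   # token ids
               torch.randn(8, 2048),                # image features
               torch.randn(8, 16))                  # linguistic cue features
print(logits.shape)  # torch.Size([8, 4])
```

Text-only or image-only baselines of the kind compared in the abstract would simply route one branch's representation straight to the classifier, which is why a concatenation-style fusion is a natural starting point for combining the signals.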
- Research Organization: Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
- Sponsoring Organization: USDOE
- DOE Contract Number: AC05-76RL01830
- OSTI ID: 1532355
- Report Number(s): PNNL-SA-135457
- Country of Publication: United States
- Language: English