Abstract:
The dissemination of real-time information is being transformed by technological advances and widespread internet access. In our increasingly digital culture, fake news and misinformation are ever more common in journalism, news reporting, social media, and other platforms for online information consumption. Misinformation can cause real harm and even sway public events, exploiting multimedia content to deceive readers and accelerate its spread. Spotting fake news about recently occurring events is one of the particular difficulties of Fake News Detection (FND) on social networking sites. Although recent studies have considerably improved our ability to identify fake news, they place little emphasis on exploiting the relationship between the textual and visual information in news samples; attending to the similarity between textual and visual features makes it possible to spot fake news more reliably. In this paper, we study the task of identifying fake news on the Fakeddit dataset, a collection of full-length articles and their associated images. We propose a multimodal approach that uses transfer learning to capture semantic and contextual information, builds stronger hidden representations linking the words of a news sample to its image, and thereby aims to improve the accuracy of the FND task. We carefully evaluate the performance of our model on the Fakeddit dataset. The results demonstrate that the proposed model learns more accurate textual features and outperforms the most recent text-based results on that dataset.
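At a high level, the multimodal pipeline the abstract describes — fusing features from pretrained text and image encoders into a single fake/real classifier — can be sketched as follows. This is a minimal illustration, not the paper's architecture: the encoders below are random-vector placeholders standing in for transfer-learned models, and the late-fusion linear classifier is an assumption made only to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def text_encoder(article: str) -> np.ndarray:
    # Placeholder for a pretrained language model (transfer learning):
    # in practice this would return a semantic embedding of the article text.
    return rng.standard_normal(768)

def image_encoder(image: np.ndarray) -> np.ndarray:
    # Placeholder for a pretrained vision model: in practice this would
    # return a visual embedding of the attached image.
    return rng.standard_normal(512)

def fuse_and_classify(text_vec, image_vec, W, b) -> float:
    # Late fusion: concatenate the two modality embeddings, then apply a
    # linear layer and a sigmoid to score the probability of "fake".
    fused = np.concatenate([text_vec, image_vec])
    logit = fused @ W + b
    return 1.0 / (1.0 + np.exp(-logit))

# Hypothetical news sample: article text plus an RGB image.
t = text_encoder("Breaking news article text ...")
v = image_encoder(np.zeros((224, 224, 3)))
W = rng.standard_normal(768 + 512) * 0.01  # untrained weights, for shape only
score = fuse_and_classify(t, v, W, b=0.0)
print(0.0 < score < 1.0)  # the score is a probability in (0, 1)
```

In a real system the concatenation step is where the similarity between textual and visual features can be exploited, e.g. by replacing the plain linear layer with an attention or similarity-based fusion module trained end to end.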